175,659 | How to Rename a Modern SharePoint Site URL in Office 365 | This post explains how to rename a Modern SharePoint site URL in Office 365.
... | 0 | 2019-10-22T20:23:09 | https://blogit.create.pt/miguelisidoro/2019/09/23/how-to-rename-a-modern-sharepoint-site-url-in-office-365/ | office365, sharepoint, collaboration, modernsharepoint | ---
title: How to Rename a Modern SharePoint Site URL in Office 365
published: true
tags: Office 365,SharePoint,Collaboration,Modern SharePoint
canonical_url: https://blogit.create.pt/miguelisidoro/2019/09/23/how-to-rename-a-modern-sharepoint-site-url-in-office-365/
---
The post [How to Rename a Modern SharePoint Site URL in Office 365](https://blogit.create.pt/miguelisidoro/2019/09/23/how-to-rename-a-modern-sharepoint-site-url-in-office-365/) appeared first on [Blog IT](https://blogit.create.pt).
This post explains how to rename a Modern SharePoint site URL in Office 365.
## Introduction
Site URL rename has been one of the most popular requests on [UserVoice](https://sharepoint.uservoice.com/forums/329214-sites-and-collaboration/suggestions/13217277-enable-renaming-the-site-collection-urls). At [SharePoint Conference 2019](https://dev.to/mlisidoro/what-s-new-for-sharepoint-and-office-365-from-sharepoint-conference-2019-part-2-o5g), in one of my favorite announcements of the event, Microsoft finally announced the possibility to rename a site URL.
This can be done either using the SharePoint Admin Center or using a PowerShell script.
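For the PowerShell route, a rough sketch using the SharePoint Online Management Shell's `Start-SPOSiteRename` cmdlet might look like this (the tenant and site URLs below are placeholders, not values from this post):

```powershell
# Connect to the tenant admin site first (URL is a placeholder).
Connect-SPOService -Url https://contoso-admin.sharepoint.com

# Preview the rename without applying it.
Start-SPOSiteRename -Identity https://contoso.sharepoint.com/sites/OldName `
                    -NewSiteUrl https://contoso.sharepoint.com/sites/NewName `
                    -ValidationOnly

# Run it for real once validation passes.
Start-SPOSiteRename -Identity https://contoso.sharepoint.com/sites/OldName `
                    -NewSiteUrl https://contoso.sharepoint.com/sites/NewName
```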
## How It Works
### Using the SharePoint Admin Center
The easiest way to rename a site URL is via the SharePoint Admin Center. Select “Active Sites”, then the site you want to rename, and click “Edit”.
<figcaption>Renaming a Site URL using SharePoint Admin Center</figcaption>
A popup will appear; all you have to do is type the new URL and ensure that it is not already in use.
<figcaption>Set the new URL for the SharePoint site</figcaption>
To read the entire article, click [here](https://blogit.create.pt/miguelisidoro/2019/09/23/how-to-rename-a-modern-sharepoint-site-url-in-office-365/).
Happy SharePointing!
| mlisidoro |
175,749 | Make and Deploy a Serverless Application Into AWS lambda | At my job we needed a solution for writing, maintaining and deploying aws lambdas. The serverless fra... | 0 | 2019-09-24T09:13:37 | https://dev.to/pcmagas/make-and-deploy-a-serverless-application-into-aws-lambda-1ijd | serverless, lamda, aws, node | At my job we needed a solution for writing, maintaining and deploying AWS Lambdas. The Serverless Framework is a Node.js framework for making and deploying serverless applications such as AWS Lambdas.
So we selected the Serverless Framework for the following reasons:
- Easy to manage the configuration environment via environment variables.
- Easy to keep a record of the lambda settings and change history via git, so we can kill the person who made a mistake. (ok ok, just kidding, no human has been killed ;) ... yet)
- Because it is also a Node.js framework, we can use the usual variety of frameworks for unit and integration testing.
- For the same reason, we can also manage and deploy dependencies using a combination of Node.js tools and the ones provided by the Serverless Framework.
- We can have a single, easy-to-maintain codebase with more than one AWS Lambda without the need for duplicate code.
# Install serverless
```
sudo -H npm i -g serverless
```
(On Windows, omit the `sudo -H` part.)
# Our first lambda
If you haven't already, create your project folder and initialize a Node.js project:
```
mkdir myFirstLambda
cd myFirstLambda
npm init
git init
git add .
git commit -m "Our first project"
```
Then install `serverless` as a dev-dependency. We need that because, on collaborative projects, it installs all the tools required to deploy and run the project:
```
npm install --save-dev serverless
```
And then run the following command to bootstrap our first lambda function:
```
serverless create --template aws-nodejs
```
With that command, 2 files are generated:
* `handler.js`, which contains our AWS Lambda handlers.
* `serverless.yml`, which contains all the deployment and runtime settings.
Then, in `handler.js`, rename the `module.exports.hello` function to something that reflects its functionality. For our purpose we will keep it as is. We can run the lambda function locally via the command:
```
sls invoke local --stage=dev --function hello
```
This shows the return value of the `hello` function in `handler.js`. It is also a good idea to place the command above as a `start` script in the `scripts` section of `package.json`.
# Deploy aws lambda
First of all, we need to specify the lambda name, so we modify `serverless.yml` accordingly. We change the `functions` section from:
```
functions:
hello:
handler: handler.hello
```
Into:
```
functions:
hello:
handler: handler.hello
name: MyLambda
description: "My First Lambda"
timeout: 10
memorySize: 512
```
With that, the deployed lambda will be listed as `MyLambda` in the AWS console; as seen above, we can also specify and share lambda settings.
Furthermore, it is a good idea to specify environment variables in the `environment:` section with the following setting:
```
environment: ${file(./.env.${self:provider.stage}.yml)}
```
With that, we can use the `stage` for each deployment environment, and each setting will be provided from .env files. Upon deployment, the `.env` files are also used to set the environment variables of the **deployed** lambda.
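As a sketch, a stage-specific file such as `.env.dev.yml` would then hold plain key/value settings (these keys are hypothetical):

```yaml
# Hypothetical settings - replace with your lambda's real configuration.
DB_HOST: localhost
DB_USER: dev_user
LOG_LEVEL: debug
```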
It is also a good idea to ship a template .env file named `.env.yml.dist`, so each developer only needs to do:
```
cp .env.yml.dist .env.dev.yml
```
And fill in the appropriate settings. For production you need to do:
```
cp .env.yml.dist .env.prod.yml
```
Then exclude these files from deployment, except the one selected by the stage parameter (as seen below):
```
package:
include:
- .env.${self:provider.stage}.yml
exclude:
- .env.*.yml.dist
- .env.*.yml
```
Then deploy with the command:
```
sls deploy --stage ^environment_type^ --region ^aws_region^
```
As you can see, the pattern followed is `.env.^environment_type^.yml`, where `^environment_type^` is the value provided by the `--stage` parameter in both the `sls invoke` and `sls deploy` commands.
We could also make the lambda name depend on the environment with these settings:
```
functions:
hello:
handler: handler.hello
name: MyLambda-${self:provider.stage}
description: "My First Lambda"
timeout: 10
memorySize: 512
```
Where `${self:provider.stage}` takes its value from the `--stage` parameter. The same applies wherever `${self:provider.stage}` appears in the `serverless.yml` file. | pcmagas |
175,814 | Slider | Slider for site. Animation in the form of flipping cards in a circle. #javascript #html #css #w... | 0 | 2019-09-24T11:14:00 | https://dev.to/iderevyansky/slider-1h8d | codepen, javascript, html, webdev | <p>Slider for site. Animation in the form of flipping cards in a circle.</p>
{% codepen https://codepen.io/IDerevyansky/pen/jONdvOv %}
#javascript #html #css #web #react #nodejs | iderevyansky |
175,823 | Command Execution Tricks with Subprocess - Designing CI/CD Systems | The most crucial step in any continuous integration process is the one that executes build instructio... | 0 | 2019-10-01T02:10:12 | https://tryexceptpass.org/article/continuous-builds-subprocess-execution/ | python, subprocess, ci, continuousdelivery | ---
title: Command Execution Tricks with Subprocess - Designing CI/CD Systems
published: true
tags: python, subprocess, ci, continuousdelivery
canonical_url: https://tryexceptpass.org/article/continuous-builds-subprocess-execution/
cover_image: https://tryexceptpass.org/images/continuous-builds-execution.webp
---
The most crucial step in any continuous integration process is the one that executes build instructions and tests their output. There’s an infinite number of ways to implement this step ranging from a simple shell script to a complex task system.
Keeping with the principles of simplicity and practicality, today we’ll look at continuing the series on [Designing CI/CD Systems](https://tryexceptpass.org/designing-continuous-build-systems) with our implementation of the execution script.
Previous chapters in the series already established the [build directives](https://tryexceptpass.org/article/continuous-builds-parsing-specs/) to implement. They covered the format and location of the build specification file. As well as the [docker environment](https://tryexceptpass.org/article/continuous-builds-docker-swarm) in which it runs and its limitations.
## Execution using subprocess
Most directives supplied in the YAML spec file are lists of shell commands. So let's look at how Python's [subprocess](https://docs.python.org/3/library/subprocess.html) module helps us in this situation.
We need to execute a command, wait for it to complete, check the exit code, and print any output that goes to stdout or stderr. We have a choice between `call()`, `check_call()`, `check_output()`, and `run()`, all of which are wrappers around the lower-level `Popen()` interface that provides more granular process control.
The `run()` function, added in Python 3.5, provides the execute, block, and check behavior we're looking for, raising a `CalledProcessError` exception whenever a command fails (when invoked with `check=True`).
Also of note, the [shlex](https://docs.python.org/3/library/shlex.html) module is a complementary library that provides some utilities to aid in making subprocess calls. It provides a `split()` function that's smart enough to turn a command-line string into a properly formatted argument list. As well as `quote()` to help *escape* shell commands and avoid shell injection vulnerabilities.
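A minimal sketch of how these pieces fit together (the commands below are placeholders, not the build directives from this series):

```python
import shlex
import subprocess

# shlex.split turns a command-line string into the list subprocess expects.
args = shlex.split("echo hello world")

# run() executes the command, blocks until it finishes, and captures output.
# check=True raises CalledProcessError on a non-zero exit code.
result = subprocess.run(args, capture_output=True, text=True, check=True)
print(result.stdout.strip())  # hello world

# quote() escapes untrusted input so it cannot inject extra shell commands.
unsafe = "build; rm -rf /"
print(shlex.quote(unsafe))  # 'build; rm -rf /'

# A failing step surfaces as an exception we can catch and report.
try:
    subprocess.run(["false"], check=True)
except subprocess.CalledProcessError as exc:
    print(f"step failed with exit code {exc.returncode}")
```

Note that `capture_output` is a Python 3.7+ convenience; on 3.5/3.6 use `stdout=subprocess.PIPE, stderr=subprocess.PIPE` instead.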
## Security considerations
Thinking about this for a minute, realize that you're writing an execution system that runs command-line instructions as written by a third party. It has significant security implications and is the primary reason why most online build services do not let you get down into this level of detail.
So what can we do to mitigate the risks?
[Read On ...](https://tryexceptpass.org/article/continuous-builds-subprocess-execution/) | tryexceptpass |
175,840 | Serenity automation framework - Part 2/4 - Automation Test with UI using Cucumber | Guideline for how to implement UI test using Cucumber with Serenity | 0 | 2019-09-25T09:13:07 | https://dev.to/cuongld2/serenity-automation-framework-part-2-4-automation-test-with-ui-using-cucumber-3n7b | serenity, java, cucumber, ui | ---
title: Serenity automation framework - Part 2/4 - Automation Test with UI using Cucumber
published: true
description: Guideline for how to implement UI test using Cucumber with Serenity
tags: #Serenity #Java #Cucumber #UI
---
Hi folks, I'm back with another post.
Please check out [this](https://dev.to/cuongld2/serenity-automation-framework-part-1-4-automation-test-with-api-2mb5) for previous post about Serenity.
At its core, Serenity is all about BDD.
The philosophy of Serenity is to make the test like a live documentation.
In this blog post I will show you how to implement UI tests in Serenity with Cucumber and the Screenplay pattern.
And don't forget: how to create a beautiful and detailed report like this:

I.Why Cucumber
Cucumber is a software tool used by computer programmers that supports behavior-driven development (BDD). Central to the Cucumber BDD approach is its plain language parser called Gherkin. It allows expected software behaviors to be specified in a logical language that customers can understand.
By using Cucumber, we separate the intent of the tests from how they are implemented.
Non-technical people like BAs or POs can easily understand what we are testing from a feature file like:
```gherkin
Feature: Allow users to login to quang cao coc coc website
@Login
Scenario Outline: Login successfully with email and password
Given Navigate to quang cao coc coc login site
When Login with '<email>' and '<password>'
Then Should navigate to home page site
Examples:
|email|password|
|xxxxxxxxxx|xxxxxxxxxx|
@Login
Scenario Outline: Login failed with invalid email
Given Navigate to quang cao coc coc login site
When Login with '<email>' and '<password>'
Then Should prompt with '<errormessage>'
Examples:
|email|password|errormessage|
|a|FernandoTorres12345#|abc@example.com|
```
II.Implementation
We will go through the setup needed to implement tests using Cucumber with Serenity.
1.POM file
We need serenity-cucumber for our project.
So make sure to add the dependency for it:
```xml
<!-- https://mvnrepository.com/artifact/net.serenity-bdd/serenity-cucumber -->
<dependency>
<groupId>net.serenity-bdd</groupId>
<artifactId>serenity-cucumber</artifactId>
<version>1.9.45</version>
</dependency>
```
Also, we need to add some plugins to build the Serenity report with Maven:
```xml
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.8.0</version>
<configuration>
<source>8</source>
<target>8</target>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<version>2.22.0</version>
<configuration>
<testFailureIgnore>true</testFailureIgnore>
</configuration>
</plugin>
<plugin>
<artifactId>maven-failsafe-plugin</artifactId>
<version>2.18</version>
<configuration>
<includes>
<include>**/features/**/When*.java</include>
</includes>
<systemProperties>
<webdriver.driver>${webdriver.driver}</webdriver.driver>
</systemProperties>
</configuration>
</plugin>
<plugin>
<groupId>net.serenity-bdd.maven.plugins</groupId>
<artifactId>serenity-maven-plugin</artifactId>
<version>${serenity.maven.version}</version>
<executions>
<execution>
<id>serenity-reports</id>
<phase>post-integration-test</phase>
<goals>
<goal>aggregate</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
```
2.Serenity config file
To set the default configuration for Serenity, we can use a serenity.conf or a serenity.properties file.
In this example I will show serenity.conf:
```
webdriver {
base.url = "https://cp.qc.coccoc.com/sign-in?lang=vi-VN"
driver = chrome
}
headless.mode=false
serenity {
project.name = "Serenity Guidelines"
tag.failures = "true"
linked.tags = "issue"
restart.browser.for.each = scenario
take.screenshots = AFTER_EACH_STEP
console.headings = minimal
browser.maximized = true
}
jira {
url = "https://jira.tcbs.com.vn"
project = Auto
username = username
password = password
}
drivers {
windows {
webdriver.chrome.driver = src/main/resources/webdriver/windows/chromedriver.exe
}
mac {
webdriver.chrome.driver = src/main/resources/chromedriver
}
linux {
webdriver.chrome.driver = src/main/resources/webdriver/linux/chromedriver
}
}
```
We defined some common things, like where to store the driver for each operating system:
```
drivers {
windows {
webdriver.chrome.driver = src/main/resources/webdriver/windows/chromedriver.exe
}
mac {
webdriver.chrome.driver = src/main/resources/chromedriver
}
linux {
webdriver.chrome.driver = src/main/resources/webdriver/linux/chromedriver
}
}
```
or taking a screenshot after each step:
```
serenity {
take.screenshots = AFTER_EACH_STEP
}
```
3.Page Object
An experienced automation tester is one who can implement tests in an abstracted way, for better understanding and maintenance.
As a best practice for UI tests, we should always define a page object class for each web page we interact with.
If a web page has many functions and elements, we should split its page object into multiple classes according to the features they cover, for easier maintenance.
For example, the LoginPage of the qcCocCoc site:
```java
@DefaultUrl("https://cp.qc.coccoc.com/sign-in?lang=vi-VN")
public class LoginPage extends PageObject {
@FindBy(name = "email")
private WebElementFacade emailField;
@FindBy(name = "password")
private WebElementFacade passwordField;
@FindBy(css = "button[data-track_event-action='Login']")
private WebElementFacade btnLogin;
@FindBy(xpath = "//form[@method='post'][not(@name)]//div[@class='form-errors clearfix']")
private WebElementFacade errorMessageElement;
public void login(String email, String password) {
waitFor(emailField);
emailField.sendKeys(email);
passwordField.sendKeys(password);
btnLogin.click();
}
public String getMessageError(){
waitFor(errorMessageElement);
return errorMessageElement.getTextContent();
}
}
```
Here we define how to find the web elements, and what methods we need on that page.
Usually, we should get rid of `Thread.sleep` and use fluent waits instead, as in the example:
```java
public void login(String email, String password) {
waitFor(emailField);
emailField.sendKeys(email);
passwordField.sendKeys(password);
btnLogin.click();
}
```
Here, we wait for the emailField to appear; after that, the next steps run.
If the field does not appear, a timeout error occurs.
4.Implement the test followed Cucumber:
First you need to declare the feature file.
Feature files should be located in the test/resources/features folder:
```gherkin
@Login
Scenario Outline: Login successfully with email and password
Given Navigate to quang cao coc coc login site
When Login with '<email>' and '<password>'
Then Should navigate to home page site
Examples:
|email|password|
|xxxxxxxxxx|xxxxxxxxxx|
```
IntelliJ offers a way to automatically create the function for each step: click on the step, press "Alt + Enter", then follow the guide.
I usually put the Cucumber tests in test/ui/cucumber/qc_coccoc and define the tests in the step package:
```java
public class LoginPage extends BaseTest {
@Steps
private pages.qcCocCoc.LoginPage loginPage_pageobject;
@cucumber.api.java.en.Given("^Navigate to quang cao coc coc login site$")
public void navigateToQuangCaoCocCocLoginSite() {
loginPage_pageobject.open();
}
@When("^Login with '(.*)' and '(.*)'$")
public void loginWithEmailAndPassword(String email, String password) {
loginPage_pageobject.login(email,password);
}
@Then("^Should navigate to home page site$")
public void shouldNavigateToHomePageSite() {
WebDriverWait wait = new WebDriverWait(getDriver(),2);
wait.until(ExpectedConditions.urlContains("welcome"));
softAssertImpl.assertAll();
}
@Then("^Should prompt with '(.*)'$")
public void shouldPromptWithErrormessage(String errorMessage) {
softAssertImpl.assertThat("Verify message error",loginPage_pageobject.getMessageError().contains(errorMessage),true);
softAssertImpl.assertAll();
}
}
```
Here we extend BaseTest to get access to assertions.
The values for email and password from the feature file are captured with a regex like @When("^Login with '(.*)' and '(.*)'$") and bound to the function parameters (String email, String password).
```java
@RunWith(CucumberWithSerenity.class)
@CucumberOptions(features = "src/test/resources/features/qcCocCoc/", tags = { "@Login" }, glue = { "ui.cucumber.qc_coccoc.step" })
public class AcceptanceTest {
}
```
We should create the AcceptanceTest class for a more flexible way to run tests by tag.
We need to specify the path to the feature files ("src/test/resources/features/qcCocCoc/") and the glue package containing the step definitions ("ui.cucumber.qc_coccoc.step").
5.How to run the test
- You can run a test from the feature file by right-clicking on the scenario and choosing Run in the IDE
- Or you can run from command line:
mvn clean verify -Dtest=path_to_the_AcceptanceTest
6.Serenity report
To create a beautiful Serenity report, just run the following command:
mvn clean verify -Dtest=path_to_the_AcceptanceTest serenity:aggregate
The test report is index.html, located by default at target/site/serenity/index.html
The summary report will look like this:

With screenshot capture after each step in Test Results tab:

As usual, you can always check out the source code on GitHub: [serenity-guideline](https://github.com/cuongld2/serenityguideline)
Yay. That's it for today.
If you like the blog post, leave a heart or a comment.
I will write another post for screenplay pattern with UI test in a couple of days.
Take care~~
Notes: If you feel this blog helped you and want to show your appreciation, feel free to drop by:
[<img src="https://thepracticaldev.s3.amazonaws.com/i/cno42wb8aik6o9ek1f89.png">](https://www.buymeacoffee.com/dOaeSPv)
This will help me contribute more valuable content.
| cuongld2 |
175,848 | Azure Functions in the Portal – ALM | Author Credits: Michael Stephenson, Microsoft Azure MVP. Originally Published at Serverless360 Blogs... | 0 | 2019-09-24T12:53:51 | https://dev.to/suryavenkat_v/azure-functions-in-the-portal-alm-1a73 | azure, serverless | <p>Author Credits: <a href="https://www.serverless360.com/blog/author/michael" rel="noopener noreferrer" target="_blank">Michael Stephenson</a>, Microsoft Azure MVP.</p>
<p>Originally Published at <a href="https://serverless360.com/" rel="noopener noreferrer" target="_blank">Serverless360 Blogs</a>.</p>
<blockquote>This article is part of <a href="https://dev.to/azure/serverless-september-content-collection-2fhb" rel="noopener noreferrer" target="_blank">#ServerlessSeptember</a>. You'll find other helpful articles, detailed tutorials, and videos in this all-things-serverless content collection. New articles are published every day — that's right, every day — from community members and cloud advocates in the month of September.<br><br>
Find out more about how Microsoft Azure enables your Serverless functions at <a href="https://docs.microsoft.com/en-us/azure/azure-functions/?WT.mc_id=servsept_devto-blog-cxa" rel="noopener noreferrer" target="_blank">https://docs.microsoft.com/azure/azure-functions/</a>.</blockquote>
<p>One of the advantages of Azure is that for some use cases you can develop solutions in the Azure Portal. This has the benefit that you can just focus on writing some code and not have to worry about versions of Visual Studio and extensions and all of the other overheads which turn a few simple lines of code, which could be written by anyone, into something requiring an additional level of developer skills. Let’s face it, ALM processes have been around for years, but there are still large portions of the developer community who don’t follow them.</p>
<p>The ability to just get the job done in the portal is compelling and I expect that we will see more of that in the future, but it does give you a challenge when it comes to ALM activities like keeping a safe version of the code and being able to move between environments reliably.</p>
<p>This article explores the options for being able to develop an Azure Function in the portal but use some of the basic ALM type activities which would only be a minor overhead but gives some good practices so that developing in the portal would be ok in the real world.</p>
<h2>My Process</h2>
<p>The process I am going to follow is as follows:</p>
<ul>
<li>I will have a development resource group which will contain my Azure Function and the code for it</li>
<li>The resource group will also contain other assets for the function like AppInsights and Storage</li>
<li>I will create a 2<sup>nd</sup> resource group called Test. In my Build process, I will refresh the Test resource group with the latest version so I can do some testing</li>
<li>Once I am happy with the test resource group I will then execute a release pipeline which will copy the latest for the function to other environments such as UAT which I assume are used by other testers etc</li>
</ul>
<p>To summarise the pipeline usage see below:</p>
<ul>
<li>Dev -> Test = Build Pipeline</li>
<li>-> UAT and beyond = Release Pipeline.</li>
</ul>
<h2>Assumptions</h2>
<p>I am going to make a few assumptions:</p>
<ul>
<li>The function apps and Azure resources will be created by hand in advance</li>
<li>Any config settings will be added to the function apps by hand.</li>
</ul>
<p>In this simple example, we are assuming that everything is quite simple, and we can just update the code between environments. In a future example, we will look at some more complex scenarios.</p>
<h2>Walk-through</h2>
<p>To begin the walk-through, let’s have a look at the code for our function below:</p>
<p><img class="alignnone size-full wp-image-80217451" src="https://www.serverless360.com/wp-content/uploads/2019/07/Function-Code.png" alt="Azure Function in the portal" /></p>
<p>You can see this is a very simple function which is just reading some config settings and returning them.</p>
<h3>Build Process</h3>
<p>From here we need to go to our Build process in Azure DevOps. The build process looks like the following:</p>
<p><img class="alignnone size-full wp-image-80217452" src="https://www.serverless360.com/wp-content/uploads/2019/07/build-process-azure-functions.png" alt="Build process in Azure DevOps" /></p>
<p>I have defined a build process which I could use for any function app in the portal. I would simply need to change the variables and subscriptions references and it could be reused easily via the Clone function.</p>
<p>The build executes the following steps:</p>
<ul>
<li>Show all build variables = I use this for troubleshooting as it shows the values for all build variables</li>
<li>Export code = This uses the App Service Kudu API features to download the source code for the function app as a zip file</li>
<li>Publish Artifact = This attaches the zip file to the build so I can use it in Release pipelines later</li>
<li>Azure Function App Deploy = This will deploy the zip file to the Test function app so that I can do some manual testing if I want.</li>
</ul>
<h3>A closer look at Export</h3>
<p>I think in the build process the Export Function Code step warrants a closer look. This step uses PowerShell to execute a web request that downloads the code as a zip file. I have used the publish profile of the function app to get the deployment credentials, which I save as build variables and then use as a basic authentication header for the web request that does the download. See the piece of code below:</p>
<pre class="lang:ps decode:true ">$user = '$(my.functionapp.deployment.username)'
$pass = '$(my.functionapp.deployment.password)'
$pair = "$($user):$($pass)"
Write-Host $pair
$encodedCreds = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes($pair))
$basicAuthValue = "Basic $encodedCreds"
$Headers = @{
Authorization = $basicAuthValue
}
Write-Host $(my.functionapp.name)
Invoke-WebRequest -Uri "https://$(my.functionapp.name).scm.azurewebsites.net/api/zip/site/wwwroot/" -OutFile "$Env:BUILD_STAGINGDIRECTORY\Function.zip" -Headers $Headers
</pre>
<h2>A closer look at Azure Function App Deploy</h2>
<p>The Azure Function app deploy is simply using the out of the box task. I am pointing to the zip file I have just downloaded in the earlier step and it will automatically deploy it for me. I have set the deployment type on this task to Zip deployment.</p>
<p><img class="alignnone size-full wp-image-80217476" src="https://www.serverless360.com/wp-content/uploads/2019/07/Function-App-Deploy.png" alt="Function App Deploy"/></p>
<h3>Release Process</h3>
<p>We now have a repeatable build process which will take the latest version of the code from my development function app and push it to the test instance and package the zip file so I can at some future point release this version of the code.</p>
<p>To do the release to other environments I have an Azure DevOps Release pipeline. You can see this below:</p>
<p><img class="alignnone size-full wp-image-80217454" src="https://www.serverless360.com/wp-content/uploads/2019/07/Azure-devops-release-pipeline.png" alt="Azure devops release pipeline" /></p>
<p>The Release pipeline contains a reference to the build output I should use and then contains a set of tasks for each environment we want to deploy to. In this case its just UAT. The UAT release process looks like the following:</p>
<p><img class="alignnone size-full wp-image-80217455" src="https://www.serverless360.com/wp-content/uploads/2019/07/UAT-release.png" alt="UAT release" /></p>
<p>You can see that in this case the Release process is very simple and really it’s a cut down version of the Build process. In this case, I am downloading the artifact we saved in the build. We then use the Azure Function deploy to copy the function to the UAT function app. We are just using the same OOTB configuration as in the above Build process but this time we are pointing to the UAT Function app.</p>
<p>I now just need to run the Release process to deploy the function to other environments.</p>
<h2>Limitations</h2>
<ul>
<li>I am not using Visual Studio, so I am unlikely to do much automated testing of my functions. I could potentially look at doing something in this area, but it is out of the scope of this article</li>
<li>I am not keeping the code in source control in this article. I am happy that the zip file attached to the build is sufficient. I could save the zip to source control, or unpack it and save the files to source control, if I wanted</li>
<li>I am not using any continuous integration here; you could perhaps monitor Azure events with Logic Apps and then develop your own trigger.</li>
</ul>
<h2 style="padding: 5px 0 0px; margin-bottom: 10px; border-bottom: 3px solid #3081ed; display: inline-block;">Summary</h2>
<p>Hopefully, you can see that it is very simple to implement the most basic of ALM processes for your development in the portal effort which will add some maturity to it.</p> | suryavenkat_v |
175,856 | Quick vim tips to generate and increment numbers | Too lazy to type each one of them numbers | 0 | 2019-09-25T12:57:31 | https://irian.to/blogs/quick-vim-tips-to-generate-and-increment-numbers | vim, productivity, tips, numbers | ---
title: Quick vim tips to generate and increment numbers
published: true
description: Too lazy to type each one of them numbers
tags: vim, productivity, tips, numbers
canonical_url: https://irian.to/blogs/quick-vim-tips-to-generate-and-increment-numbers
---
There are times when I need to quickly increment or generate a column of numbers in vim. Vim 8/Neovim come with useful number tricks.

I will share two of them here.
# Quickly generate numbers with put and range
You can quickly generate ascending numbers with:
```
:put=range(1,5)
```
This will give you:
```
1
2
3
4
5
```
We can also control the increments. If we want to quickly generate descending numbers, we do:
```
:put=range(10,0,-1)
```
Some other variations:
```
:put=range(0,10,2) // increments by 2 from 0 to 10
:put=range(5) // start at 0, go up 5 times
```
This trick might be helpful for generating a list when taking notes. In vim, to get the current line number we can use `line('.')`. This can be combined with put/range. Let's say you are currently on line # 40. To generate numbers up to line 50, you do:
```
:put=range(line('.'),50)
```
And you'll get:
```
40 // prints at line 41.
41
42
43
44
45
46
47
48
49
50
```
To make each generated number match the line it lands on, change the command to `:put=range(line('.')+1,50)`.
# Quickly increment column of numbers
Suppose we have a column of numbers, like the 0's in HTML below:
```
<div class="test">0</div>
<div class="test">0</div>
<div class="test">0</div>
<div class="test">0</div>
<div class="test">0</div>
<div class="test">0</div>
<div class="test">0</div>
<div class="test">0</div>
<div class="test">0</div>
```
If we want to increment all the zeroes (1, 2, 3, ...), we can quickly do that. Here is how:
First, move the cursor to the top 0 (I use `[]` to signify the cursor location).
```
<div class="test">[0]</div>
<div class="test">0</div>
<div class="test">0</div>
<div class="test">0</div>
<div class="test">0</div>
<div class="test">0</div>
<div class="test">0</div>
<div class="test">0</div>
<div class="test">0</div>
```
Using `VISUAL BLOCK` mode (`<C-v>`), go down 8 times (`<C-v>8j`) to visually select all 0's.
```
<div class="test">[0]</div>
<div class="test">[0]</div>
<div class="test">[0]</div>
<div class="test">[0]</div>
<div class="test">[0]</div>
<div class="test">[0]</div>
<div class="test">[0]</div>
<div class="test">[0]</div>
<div class="test">[0]</div>
```
Now type `g<C-a>`. Voila!
```
<div class="test">1</div>
<div class="test">2</div>
<div class="test">3</div>
<div class="test">4</div>
<div class="test">5</div>
<div class="test">6</div>
<div class="test">7</div>
<div class="test">8</div>
<div class="test">9</div>
```

_Wait a minute... what just happened?_
Vim 8 and neovim have a feature that increments numbers with `<C-a>` (and decrements them with `<C-x>`). You can check it out by going to `:help CTRL-A`.
We can also change the increments by inserting a number ahead. If we want to have `10,20,30,...` instead of `1,2,3,...`, do `10g<C-a>` instead.
_Btw, one super cool tip with `<C-a>` and `<C-x>`: you can increment not only decimal numbers, but also octal, hex, bin, and alpha! For me, I don't really use the first three, but I sure use alpha a lot. Alpha is a fancy word for *alpha*betical characters. If we do `set nrformats+=alpha`, we can increment alphabets like we do numbers._
Isn't that cool or what? Please feel free to share any other number tricks with Vim in comment below. Thanks for reading! Happy vimming!
| iggredible |
176,198 | AWS Application Integration | Step Functions it helps in defining the lambda function Amazon MQ it’s replace... | 0 | 2019-09-25T05:21:36 | https://dev.to/vikashagrawal/aws-application-integration-18oi | # Step Functions
It helps in orchestrating Lambda functions into workflows (state machines).
# Amazon MQ
It's a managed message broker service, a replacement for self-hosted brokers like RabbitMQ.
# SNS (Simple Notification Service)
• It's a push-based service.
• It can be used to push notifications to:
```
o Mobile devices
o SQS
o HTTP endpoint
o SMS text messages
o Email
o Lambda can be a consumer of a topic and trigger other SNS topics or AWS services.
```
• Topic:
```
o An access point for allowing recipients to dynamically subscribe.
o It can deliver to multiple recipient types together, like iOS, Android and SMS.
o It's stored across multiple AZ.
o The messages from this topic will be delivered to all the subscribers.
```
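The fan-out behavior described above, where a single published message is delivered to every subscriber of a topic, can be sketched with a tiny in-memory topic. This is only an illustration of the concept, not the AWS SDK:

```javascript
// Minimal in-memory model of SNS-style fan-out: every subscriber
// receives its own copy of each published message, pushed to it.
class Topic {
  constructor(name) {
    this.name = name;
    this.subscribers = [];
  }
  subscribe(handler) {
    this.subscribers.push(handler);
  }
  publish(message) {
    // Push-based delivery: the topic invokes each subscriber directly.
    this.subscribers.forEach((handler) => handler(message));
  }
}

const topic = new Topic("order-events");
const received = [];
topic.subscribe((msg) => received.push(`sms:${msg}`));   // e.g. an SMS endpoint
topic.subscribe((msg) => received.push(`email:${msg}`)); // e.g. an email endpoint
topic.publish("order shipped");
// received now holds one copy of the message per subscriber.
```

In AWS, the subscribers would be SQS queues, Lambda functions, HTTP endpoints, SMS numbers or email addresses, and a single publish call fans out to all of them.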
# SQS (Simple Queue Service)
• It’s message oriented.
• It helps in integrating with other AWS services.
• It's a pull-based services.
• The maximum size of the message stored in the queue is 256 KB.
• Type of message can be XML, JSON, and unformatted text
• If the rate of producing messages is higher than the rate of consuming them (or vice-versa), we can use an Auto Scaling Group: when messages pile up, more EC2 instances are created, and when there are fewer messages, unused EC2 instances are terminated.
• Messages in the queue can be kept from 1 minute to 14 days; the default is 4 days.
• Messages are guaranteed to be processed at least once.
• Polling
```
o Short: Returns immediately, even if no messages are in the queue.
o Long: Waits and only returns a response when a message is in the queue or the timeout is reached.
```
• Visibility Timeout
```
o When a message is received by a consumer, it gets marked as invisible in the queue. If the consumer finishes processing the job and deletes the message before the timeout, it is removed from the queue; otherwise it becomes visible again.
o The default is 30 seconds and the maximum is 12 hours. Any execution that needs more than 12 hours is better split up: have a Lambda function receive the message, split it into multiple topics, and have other Lambdas integrated with those topics.
```
• Types
```
o Standard
This is the default queue.
Message ordering is best-effort: messages are generally consumed in the order they are received, but this is not guaranteed.
Any message may occasionally be consumed more than once.
o FIFO
Messages are guaranteed to be consumed in the order they are received.
All other behavior is the same as the standard queue, with the limitation of 300 Tx/sec; the stronger ordering guarantees are the likely reason for this limit.
Each message is consumed only once and remains in the queue until the consumer deletes it.
```
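Tying the bullets above together, a consumer typically receives with long polling, processes the message, and deletes it before the visibility timeout expires. The sketch below mirrors the SDK's `receiveMessage`/`deleteMessage` call shapes but injects a stand-in client so it runs without AWS credentials; treat it as the pattern, not real SDK usage:

```javascript
// Receive -> process -> delete: deleting before the visibility timeout
// expires is what stops the message from reappearing in the queue.
function consumeOnce(client, queueUrl, handler) {
  const response = client.receiveMessage({
    QueueUrl: queueUrl,
    MaxNumberOfMessages: 1,
    WaitTimeSeconds: 20, // long polling: wait instead of returning immediately
  });
  const messages = response.Messages || [];
  for (const message of messages) {
    handler(message.Body);
    // Acknowledge by deleting; otherwise the message becomes visible
    // again after the visibility timeout (default 30 seconds).
    client.deleteMessage({
      QueueUrl: queueUrl,
      ReceiptHandle: message.ReceiptHandle,
    });
  }
  return messages.length;
}

// Stand-in client so the sketch is runnable locally.
const deleted = [];
const fakeClient = {
  receiveMessage: () => ({
    Messages: [{ Body: '{"task": 1}', ReceiptHandle: "rh-1" }],
  }),
  deleteMessage: (params) => deleted.push(params.ReceiptHandle),
};

const processed = [];
consumeOnce(fakeClient, "https://example/queue", (body) => processed.push(body));
```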
# SWF (Simple Workflow Service)
• It's task-oriented and contains the following actors:
```
o Workflow starters: an application that initiates the workflow, e.g. a website.
o Workers: a program that gets tasks, processes them and returns the results.
o Decider: it controls the coordination of tasks.
```
• Domains: a kind of metadata, a collection of related workflows, registered in JSON.
• Maximum workflow retention period is 1 year, expressed in seconds.
• SWF brokers the interactions between workers and deciders.
• SWF makes sure that a task is not repeated.
• It gives the decider a clear view of the progress of tasks so it can start new tasks.
# SES (Simple Email Service)
| vikashagrawal | |
176,301 | How to integrate Endtest with BrowserStack | Codeless Automated Testing for Mobile Apps with Endtest and BrowserStack | 0 | 2019-09-25T14:25:54 | https://dev.to/endtest/how-to-integrate-endtest-with-browserstack-2gkj | webdev, testing, productivity, devops | ---
title: How to integrate Endtest with BrowserStack
published: true
description: Codeless Automated Testing for Mobile Apps with Endtest and BrowserStack
tags: webdev, testing, productivity, devops
cover_image: https://thepracticaldev.s3.amazonaws.com/i/61k02d7k2t18rhhflj46.png
---
###**Introduction**###
[Endtest](https://endtest.io) allows you to create, manage and execute Automated Tests, without having to write any code.
By integrating with [BrowserStack](https://www.browserstack.com/), you can execute Mobile Tests created with Endtest on a range of real Android and iOS mobile devices offered by BrowserStack.
###**Getting Started**###
1) Go to your **BrowserStack** account.
2) Click on **App Automate** from the **Products** section:

3) Click on the **Show** button from the **Username and Access Keys** section from the left side of the page:

4) Go to the **Settings** page from [Endtest](https://endtest.io).
5) Add the **Username** and **Access Key** from BrowserStack App Automate in the BrowserStack User and BrowserStack Key inputs from the Endtest Settings page.

6) Click on the **Save** button.
Nice job! Your Endtest account is now connected with your BrowserStack account.
###**Running your first test**###
1) Upload your APK or IPA file in the **Drive** section from Endtest.

2) After that, go to the **Mobile Tests** section and click on the Run button.

3) Select **BrowserStack** from the **Grid** dropdown.
4) Select the **Platform** and the **Real Device** on which you want to execute your test on.
5) Select your APK file in the **APK Download URL** input.
If you select an iOS device, the **APK Download URL** input would be replaced with the **IPA Download URL** input.
After starting the test execution, you will be redirected to the Results section where you'll get live video and all the results, logs and details in real-time.
| razgandeanu |
177,761 | Caching JavaScript data file results when using Eleventy | How to cache the query results of a Web API to speed up the development of an Eleventy website. | 0 | 2019-09-27T19:14:56 | https://dev.to/heypieter/caching-javascript-data-file-results-when-using-eleventy-38ch | javascript, eleventy, cache | ---
title: Caching JavaScript data file results when using Eleventy
published: true
description: How to cache the query results of a Web API to speed up the development of an Eleventy website.
tags: javascript, Eleventy, cache
---
[Eleventy](https://www.11ty.io) by [Zach Leatherman](https://twitter.com/zachleat/) has become my default static site generator. It is simple, uses JavaScript, and is easy to extend. It allows me to include custom code to access additional data sources,
such as RDF datasets.
Querying data can take up some time, for example, when using an external Web API. During deployment of a website this is not a big deal, as this probably doesn't happen every minute. But when you are developing then it might become an issue: you don't want to wait for query results every time you make a change that doesn't affect the results, such as updating a CSS property, which only affects how the results are visualized. Ideally, you want to reuse these results without querying the data over and over again. I explain in this blog post how that can be done by introducing a cache.
The cache has the following features:
- The cache is only used when the website is locally served (`eleventy --serve`).
- The cached data is written to and read from the filesystem.
This is done by using the following two files:
- `serve.sh`: a Bash script that runs Eleventy.
- `cache.js`: a JavaScript file that defines the cache method.
An example Eleventy website using these two files is available on [Github](https://github.com/pheyvaer/eleventy-cache-example).
## Serve.sh
```bash
#!/usr/bin/env bash
# trap ctrl-c and call ctrl_c()
trap ctrl_c INT
function ctrl_c() {
rm -rf _data/_cache
exit 0
}
# Remove old folders
rm -rf _data/_cache # Should already be removed, but just in case
rm -rf _site
# Create needed folders
mkdir _data/_cache
ELEVENTY_SERVE=true npx eleventy --serve --port 8080
```
This Bash script creates the folder for the cached data and serves the website locally. First, we remove the cache folder and the files generated by Eleventy, which might still be there from before. Strictly speaking removing the latter is not necessary, but I have noticed that removed files are not removed from `_site`, which might result in unexpected behaviour. Second, we create the cache folder again, which of course is now empty. Finally, we set the environment variable `ELEVENTY_SERVE` to `true` and start Eleventy: we serve the website locally on port 8080. The environment variable is used by `cache.js` to check if the website is being served, because currently this information can't be extracted from Eleventy directly. Note that I have only tested this on macOS 10.12.6 and 10.14.6, and Ubuntu 16.04.6. Changes might be required for other OSs.
## Cache.js
```JavaScript
const path = require('path');
const fs = require('fs-extra');
/**
* This method returns a cached version if available, else it will get the data via the provided function.
* @param getData The function that needs to be called when no cached version is available.
* @param cacheFilename The filename of the file that contains the cached version.
* @returns the data either from the cache or from the getData function.
*/
module.exports = async function(getData, cacheFilename) {
// Check if the environment variable is set.
const isServing = process.env.ELEVENTY_SERVE === 'true';
const cacheFilePath = path.resolve(__dirname, '_data/_cache/' + cacheFilename);
let dataInCache = null;
// Check if the website is being served and that a cached version is available.
if (isServing && await fs.pathExists(cacheFilePath)) {
// Read file from cache.
dataInCache = await fs.readJSON(cacheFilePath);
console.log('Using from cache: ' + cacheFilename);
}
// If no cached version is available, we execute the function.
if (!dataInCache) {
const result = await getData();
// If the website is being served, then we write the data to the cache.
if (isServing) {
// Write data to cache.
fs.writeJSON(cacheFilePath, result, err => {
if (err) {console.error(err)}
});
}
dataInCache = result;
}
return dataInCache;
};
```
The method defined by the JavaScript file above takes two parameters: `getData` and `cacheFilename`. The former is the expensive function that you don't want to repeat over and over again. The latter is the filename of the file with the cached version. The file will be put in the folder `_data/_cache` relative to the location of `cache.js`. The environment variable used in `serve.sh` is checked here to see if the website is being served. Note that the script requires the package `fs-extra`, which adds extra methods to `fs` and is not available by default.
## Putting it all together
To get it all running, we put both files in our Eleventy project root folder. Do not forget to make the script executable and run `serve.sh`.
When executing the [aforementioned example](https://github.com/pheyvaer/eleventy-cache-example), we see that the first time to build the website it takes 10.14 seconds (see screencast below). No cached version of the query results is available at this point and thus the Web API has to be queried. But the second time, when we update the template, it only takes 0.03 seconds. This is because the cached version of the query results is used instead of querying the Web API again.

<p class="caption">Screencast: When the Web API is queried it takes 10.14 seconds. When the cached version of the query results is used it takes 0.03 seconds.</p>
| heypieter |
176,411 | How to Build a Dashboard of Live Conversations with Flask, React, and Nexmo | Nexmo recently introduced the Conversation API. This API enables you to have different styles of com... | 0 | 2019-09-25T15:58:22 | https://dev.to/vonagedev/how-to-build-a-dashboard-of-live-conversations-with-flask-react-and-nexmo-1kdh | flask, react, webdev, tutorial | Nexmo recently introduced the [Conversation API](https://developer.nexmo.com/conversation/overview). This API enables you to have different styles of communication (voice, messaging, and video) and connect them all to each other.
It's now possible for multiple conversations within an app to coincide and to retain context across all of those channels! Being able to record and work with the history of a conversation is incredibly valuable for businesses and customers alike so, as you can imagine, we're really excited about this.
> Find out more of what can be done with programmable conversations at Vonage Campus, our first customer and developer conference taking place in San Francisco on October 29-30. It's free to attend, so [request your invite now](https://web.cvent.com/event/9bba9ffb-c9b5-4022-a9b8-3a8184c70aa8/register)!
## What The Dashboard Does
This tutorial covers how to build a dashboard with Flask and React that monitors all current conversations within an [application](https://developer.nexmo.com/conversation/concepts/application). The goal is to showcase relevant data from the live conversations that are currently happening in real-time.
When a single [conversation](https://developer.nexmo.com/conversation/concepts/conversation) is selected from the list of current conversations, the connected [members](https://developer.nexmo.com/conversation/concepts/member) and [events](https://developer.nexmo.com/conversation/concepts/event) will be displayed. An individual member can then be selected to reveal even more information related to that particular [user](https://developer.nexmo.com/conversation/concepts/user).
<a href="https://www.nexmo.com/wp-content/uploads/2019/09/5d894f3a32766539224729.gif"><img src="https://www.nexmo.com/wp-content/uploads/2019/09/5d894f3a32766539224729.gif" alt="dashboard gif" width="500" class="alignnone size-full wp-image-30261" /></a>
## What Does The Conversation API Do?
The Nexmo [Conversation API](https://developer.nexmo.com/conversation/overview) enables you to build conversation features where communication can take place across multiple mediums including IP Messaging, PSTN Voice, SMS, and WebRTC Audio and Video. The context of the conversations is maintained through each communication event taking place within a conversation, no matter the medium.
Think of a conversation as a container of communications exchanged between two or more Users. There could be a single interaction or the entire history of all interactions between them.
The API also allows you to create Events and Legs to enable text, voice, and video communications between two Users and store them in Conversations.
## Workflow of The Application
<a href="https://www.nexmo.com/wp-content/uploads/2019/09/flowofapp.png"><img src="https://www.nexmo.com/wp-content/uploads/2019/09/flowofapp.png" alt="flow of app" width="1698" height="892" class="alignnone size-full wp-image-30265" /></a>
### Create A Nexmo Application
To work through this tutorial, you will need a [Nexmo account](https://dashboard.nexmo.com/sign-up?utm_source=DEV_REL&utm_medium=github&utm_campaign=https://github.com/nexmo-community/nexmo-python-capi). You can sign up now for free if you don’t already have an account.
This tutorial also assumes that you will be running [Ngrok](https://ngrok.com/) to run your [webhook](https://developer.nexmo.com/concepts/guides/webhooks) server locally.
If you are not familiar with Ngrok, please refer to our [Ngrok tutorial](https://www.nexmo.com/blog/2017/07/04/local-development-nexmo-ngrok-tunnel-dr/) before proceeding.
First, you will need to create a Nexmo Application:
```bash
nexmo app:create "Conversation App" http://demo.ngrok.io:3000/webhooks/answer http://demo.ngrok.io:3000/webhooks/event --keyfile private.key
```
Next, assuming you have already rented a Nexmo Number (`NEXMO_NUMBER`), you can link your Nexmo Number with your application via the command line:
```bash
nexmo link:app NEXMO_NUMBER APP_ID
```
### Clone [Git Repo](https://github.com/nexmo-community/nexmo-python-capi)
To get this app up and running on your local machine, start by cloning [this repository](https://github.com/nexmo-community/nexmo-python-capi):
```bash
git clone https://github.com/nexmo-community/nexmo-python-capi
```
Then install the dependencies:
```bash
npm install
```
Copy the example `.env.example` file with the following command:
```bash
cp .env.example .env
```
Open that new `.env` file and fill in the Application ID and path to your `private.key` that we just generated when creating our Nexmo Application.
### Flask Backend
The important file to inspect within our Flask code is `server.py`, as it establishes all of the different endpoints for the `Conversation API`.
The function, `make_capi_request()` connects to Nexmo and authenticates the application:
```python
def make_capi_request(api_uri):
nexmo_client = nexmo.Client(
application_id=os.getenv("APPLICATION_ID"), private_key=os.getenv("PRIVATE_KEY")
)
try:
response = nexmo_client._jwt_signed_get(request_uri=api_uri)
except nexmo.errors.ClientError:
response = {}
return jsonify(response)
```
Underneath that, we create the necessary routes:
```python
@app.route("/")
def index(): # Index page structure
return render_template("index.html")
@app.route("/conversations")
def conversations(): # List of conversations
return make_capi_request(api_uri="/beta/conversations")
@app.route("/conversation")
def conversation():# Conversation detail
cid = request.args.get("cid")
return make_capi_request(api_uri=f"/beta/conversations/{cid}")
@app.route("/user")
def user(): # User detail
uid = request.args.get("uid")
return make_capi_request(api_uri=f"/beta/users/{uid}")
@app.route("/events")
def events(): # Event detail
cid = request.args.get("cid")
return make_capi_request(api_uri=f"/beta/conversations/{cid}/events")
```
Once authenticated, each of these routes accesses the Conversation API based on the Application ID and eventually the Conversation or User ID.
### React Frontend
We’ll make use of React's ability to break our code into modularized and reusable components. The components we’ll need are:
<a href="https://www.nexmo.com/wp-content/uploads/2019/09/components.png"><img src="https://www.nexmo.com/wp-content/uploads/2019/09/components.png" alt="components - react tree" width="204" class="alignnone size-full wp-image-30252" /></a>
At the `App.js` level, notice that the `"/conversations"` endpoint is called within the constructor, meaning that if there are any current conversations within the application, they are immediately displayed on the page.
```javascript
fetch("/conversations").then(response =>
response.json().then(
data => {
this.setState({ conversations: data._embedded.conversations });
},
err => console.log(err)
)
);
```
The user then will have the option to select one of the conversations from the list and the meta details of that conversation, such as name and timestamp, will be displayed.
```javascript
<div>
<article className="message is-info">
<div className="message-header">
<p>{this.props.conversation.uuid}</p>
</div>
<div className="message-body">
<ul>
<li>Name: {this.props.conversation.name}</li>
<li>ttl: {this.props.conversation.properties.ttl}</li>
<li>Timestamp: {this.props.conversation.timestamp.created}</li>
</ul>
</div>
</article>
<Tabs
members={this.props.conversation.members}
events={this.props.events}
conversation={this.props.conversation}
/>
</div>
```
Notice that once a particular `conversation` has been selected two tabs become visible: `Events` and `Members`.
`Members` is set as the default state, meaning it is displayed first. It is at this point that the `"/conversation"` and `"/events"` endpoints are called. Using the `cid` that is passed within the state, the details of the current members and events are now available.
```javascript
refreshMembers = () => {
fetch("/conversation?cid=" + this.props.conversation.uuid)
.then(results => results.json())
.then(data => {
this.setState({ members: data.members });
});
};
refreshEvents = () => {
fetch("/events?cid=" + this.props.conversation.uuid)
.then(results => results.json())
.then(data => {
this.setState({ events: data });
});
};
```
The `MembersList.js` component will call the `/user` endpoint to retrieve even more data on that particular user, which then is shown within the `MemberDetail.js` component.
```javascript
showMemberDetails = user_id => {
fetch("/user?uid=" + user_id)
.then(results => results.json())
.then(data => {
this.setState({ member: data });
});
};
```
### Connect It All Together
To start up the backend, run the Flask command:
```bash
export FLASK_APP=server.py && flask run
```
And in another tab within your terminal, run the React command:
```bash
cd frontend-react && npm start
```
Open up `http://localhost:3000` in a browser, and your app will be up and running!
Any conversations that are currently running within that connected application will now be visible within this dashboard.
Congrats! You've now created an application with Flask, React, and Nexmo's [Conversation API](https://developer.nexmo.com/conversation). You can now monitor all sorts of things related to your application's conversations. We encourage you to continue playing with and exploring this API's capabilities.
### Contributions And Next Steps
At Nexmo, the [Conversation API](https://developer.nexmo.com/conversation) is currently in beta and is ever-evolving based on your input and feedback. As always, we are happy to help with any questions in our [community slack](https://developer.nexmo.com/community/slack) or support@nexmo.com.
The post [How to Build a Dashboard of Live Conversations with Flask and React](https://www.nexmo.com/blog/2019/09/24/how-to-build-a-dashboard-of-live-conversations-with-flask-and-react-dr) appeared first on [Nexmo Developer Blog](https://www.nexmo.com/blog).
| lolocoding |
176,432 | Better Technical Interviews: Part 4 – My Opinions on Various Techniques | This post part of a series I'm writing on better technical interviews. I'd love your feedback in the... | 0 | 2019-09-25T19:59:03 | https://seankilleen.com/2019/09/better-technical-interviews-part-4-my-opinions-on-various-techniques/ | interviewing, culture, hiring | ---
title: Better Technical Interviews: Part 4 – My Opinions on Various Techniques
published: true
tags: interviewing,culture,hiring
canonical_url: https://seankilleen.com/2019/09/better-technical-interviews-part-4-my-opinions-on-various-techniques/
---
_This post is part of [a series](https://seankilleen.com/2019/09/better-technical-interviews-part-1-whats-the-point/) I'm writing on better technical interviews. I'd love your feedback in the comments!_
* [Part 1 - What's the Point?](https://seankilleen.com/2019/09/better-technical-interviews-part-1-whats-the-point/)
* [Part 2 - Preparation](https://seankilleen.com/2019/09/better-technical-interviews-part-2-preparation/)
* [Part 3 - The Actual Interview](https://seankilleen.com/2019/09/better-technical-interviews-part-3-the-interview-itself/)
* [Part 4 - My Opinion on Various Techniques](https://seankilleen.com/2019/09/better-technical-interviews-part-4-my-opinions-on-various-techniques/)
* [Part 5 - Common Interview Questions](https://seankilleen.com/2019/10/better-technical-interviews-part-5-common-questions/)
## My Opinions on Certain Interview Practices
These are my personal opinions with some reasoning behind them.
### Should candidates code during the interview?
I say: No. If I am able, through conversation, to determine that this person has the fundamentals both conceptually and in terms of being a colleague, and is an open-minded, collaborative person that wants to improve, I usually don’t need to watch them code. Everyone has different styles, and I expect us all to be learning together anyway.
### To whiteboard or not to whiteboard?
I believe white-boarding for conceptual / architectural explanations is a helpful tool. It helps me see how a person uses that space to see and explain things, like how they would potentially approach a problem. I have not found that I get much out of code in whiteboard format. It will at best be pseudo-code, and to expect more than that is unfair in my opinion.
### Should I push the interviewees buttons to see how they respond?
Absolutely not. Would you do this to them in the real world? I should certainly hope not. If a client or coworker were to do this in some situation and someone handled it poorly, hopefully you’d be mentoring and coaching someone on how to improve, and also advocating for them in an instance where someone was treating them poorly.
### We’ll be doing coding; should they use Google?
If you ask someone to code, I’d suggest that you should treat it like you’re pairing with a teammate.
- It should be collaborative
- Tooling and resources should be available
- They should be able to use the machine / development environment of their choice
Otherwise, what’s the point? Pretending that devs don’t use tools or Google things just shows an interviewee that the exercise is pointless.
### What about having the candidate solve FizzBuzz?
If you’re bringing someone into a technical interview and don’t know whether or not they’d pass a problem like FizzBuzz, I think that’s the real problem.
Shift that process to the left and answer those questions earlier on. Ask a few screening questions. Run a small coding exam with something like coderpad.io, but don’t make it something so overdone. Put a little thought in. Be creative and clear.
### We came up with a pretty intricate problem we’re proud of. We think it’ll be a good litmus test.
Great. You should ask everyone on your team – particularly less senior developers – to complete that problem. And then you should adjust that problem based on what you will inevitably learn. And then you should think about how cloudy someone’s brain is when they feel under pressure.
### Should I have someone balance a B-tree, do factorial calculations, etc.?
Only if someone on your team has had to do something similar to that in the past year.
Otherwise you’re optimizing for the wrong thing. I don’t need an algorithm wiz to write a great line-of-business app; I need someone who cares about a domain, collaborates with stakeholders, and is invested in improving as they go.
If you ask how someone would implement a fast-sorting algorithm and their answer is “first, I’d understand what we’re trying to optimize for, and then I’d open Google and research about different types of sorting algorithms” – I’d say that’s a solid answer.
### Should We use HackerRank, etc.?
No. At least not unless you’re utilizing the pairing functionality.
- These tools feed into the idea that if someone can solve a coding problem, they’ll be a good fit for a team.
- These tools risk dropping some senior folks from the funnel who avoid the sort of algorithmic minutiae that this article recommends against.
- In my opinion, broadly speaking, these tools are reductive and commoditize the skillset you’re looking for and the notion of the work we do.
### I can’t really tell so much from conversation as what you’re expecting. Is that a problem?
I’d say yes. If you can’t have a conversation and determine whether this is the sort of person who will make the impact you need at your team / company, I would argue that you may not be the best person to give the interview.
### What about half-day / whole day interviews?
My opinion:
- If you’re scheduling a long interview because everyone needs to take a turn with an interviewee, I would argue that your interview process may be broken. Figure out who are the people to trust, and make the interview with them. At my current company we’ve had fantastic success with a 1-1.5 hour interview with two folks from a practice area, and a 30 minute conversation with one of our executives.
- If you bring someone in for a half day technical interview, it should be to work on a real style problem in a pairing or team setting, as close to an actual work day setup as possible. They should have access to tools, google, OSS, etc. during the process.
- If you’re having someone join an actual team for a half or full day session to work on actual client or product work, you need to compensate them for their time. Agree on an hourly rate that shows someone you respect their time and effort. Pay them for their time even if the interview ends early. Are you worried they’ll only last an hour? Do more prep and screening work up front. | seankilleen |
176,445 | React Native: Best Practices When Using FlatList or SectionList | Have you had any performance issues when using React Native SectionList or FlatList? I know I did. It... | 0 | 2019-09-25T17:10:30 | https://dev.to/m4rcoperuano/react-native-best-practices-when-using-flatlist-or-sectionlist-4j41 | reactnative, performance, javascript | Have you had any performance issues when using React Native [SectionList](https://facebook.github.io/react-native/docs/sectionlist) or [FlatList](https://facebook.github.io/react-native/docs/flatlist)? I know I did. It took me many hours and one time almost an entire week to figure out why performance was so poor in my list views (seriously, I thought I was going to lose it and never use React Native again). So let me save you some headaches (or maybe help you resolve existing headaches 😊) by providing you with a couple of tips on how to use SectionLists and FlatLists in a performant way!
(This article assumes you have some experience with React Native already).
## Section List Example

Above is a simple app example where users manage their tasks. The headers represent “categories” for each task, the rows represent a “task” that the user has to do by what date, and the Check is a button that marks tasks as “done” – simple!
From a frontend perspective, these would be the components I would design:
- **CategoryHeader**
- Contains the Title and an arrow icon on the left of it.
- **TaskRow**
- Contains the task’s Title, details, and the Check button that the user can interact with.
- **TaskWidget**
- Contains the logic that formats my task data.
This also uses React Native’s SectionList component to render those tasks.
And here’s how my **SectionList** would be written in my **TaskWidget**:
```javascript
<SectionList
backgroundColor={ThemeDefaults.contentBackgroundColor}
contentContainerStyle={styles.container}
renderSectionHeader={( event ) => {
return this.renderHeader( event ); //This function returns my `CategoryHeader` component
}}
sections={[
{title: 'General Project Management', data: [ {...taskObject}, ...etc ]},
...additional items omitted for simplicity
]}
keyExtractor={( item ) => item.key}
/>
```
Pretty straightforward, right? The next thing to focus on is what each component is responsible for (and this is what caused my headaches).
## Performance Issues
If we look at **TaskRow**, we see that we have several pieces of information that we have to display and calculate:

1. Title
2. Description
3. Due date formatted
4. Due date from now calculated
5. “Check” button action
Previously, I would’ve passed a javascript object as a “prop” to my **TaskRow** component. Maybe an object that looks like this:
```json
{
"title": "Contact Joe Bob",
"description": "Need to talk about project assessment",
"due_date": "2019-07-20"
}
```
I then would have my **TaskRow** display the first two properties without any modification and calculate the due dates on the fly (all this would happen during the component’s “render” function). In a simple task list like above, that would probably be okay. But when your component starts doing more than just displaying data, **following this pattern can significantly impact your list’s performance and lead to antipatterns**. I would love to spend time describing how SectionLists and FlatLists work, but for the sake of brevity, let me just tell you the better way of doing this.
## Performance Improvements
Here are some rules to follow that will help you avoid performance issues in your lists:
#### I. Stop doing calculations in your SectionList/FlatList header or row components.
Section List Items will render whenever the user scrolls up or down in your list. As the list recycles your rows, new ones that come into view will execute their `render` function. With this in mind, you probably don’t want any expensive calculations during your Section List Item's `render` function.
> Quick Story
> I made the mistake of instantiating `moment()` during my task component's render function (`moment` is a date utility library for javascript). I used this library so I could calculate how many days from "now" my task was due. In another project, I was doing money calculations and date formatting in each of my SectionList row components (also using `moment` for date formatting). In both cases, I saw performance drop significantly on Android devices. Older iPhone models were also affected. I was literally pulling my hair out trying to find out why. I even implemented Pure Components, but (like I’ll describe later) I wasn’t doing this right.
So when should you do these expensive calculations? Do it before you render any rows, for example in your parent component’s `componentDidMount()` method (do it asynchronously). Create a function that “prepares” your data for your section list components, rather than “preparing” the data inside each component.
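As a sketch of that "prepare first" idea (the field names, helper name, and formatting below are illustrative, not from the original app), the parent can turn raw task data into display-ready strings once, before any row renders:

```javascript
// Runs once (e.g. from componentDidMount), not inside each row's render.
// TaskRow then receives only ready-to-display strings.
function prepareTasks(rawTasks, now = new Date()) {
  const MS_PER_DAY = 24 * 60 * 60 * 1000;
  return rawTasks.map((task) => {
    const due = new Date(task.due_date + 'T00:00:00Z');
    const daysFromNow = Math.round((due - now) / MS_PER_DAY);
    return {
      title: task.title,
      description: task.description,
      dueDateFormatted: due.toISOString().slice(0, 10),
      dueDateFormattedFromNow:
        daysFromNow >= 0 ? `in ${daysFromNow} days` : `${-daysFromNow} days ago`,
    };
  });
}
```

Each `TaskRow` can then be given `dueDateFormatted` and `dueDateFormattedFromNow` as plain string props, with no date math left in `render()`.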
#### II. Make your SectionList’s header and row components REALLY simple.
Now that you removed the computational work from the components, what should the components have as props? Well, these components should just display text on the screen and do very little computational work. Any actions (like API calls or internal state changes that affect your stored data) that happen inside the component should be pushed “up” to the parent component. So, instead of building a component like this (that accepts a javascript object):
```javascript
<TaskRow task={taskObject} />
```
Write a component that takes in all the values it needs to display:
```javascript
<TaskRow
title={taskObject.title}
description={taskObject.description}
dueDateFormatted={taskObject.dueDateFormatted}
dueDateFormattedFromNow={taskObject.dueDateFormattedFromNow}
onCheckButtonPress={ () => this.markTaskAsDone(taskObject) }
/>
```
Notice how the `onCheckButtonPress` is just a callback function. This allows the component that is using TaskRow to handle any of the TaskRow functions. **Making your SectionList components simpler like this will increase your Section List’s performance, as well as making your component’s functionality easy to understand**.
#### III. Make use of Pure Components
This took a while to understand. Most of our React components extend from `React.Component`. But using lists, I kept seeing articles about using `React.PureComponent`, and they all said the same thing:
>When props or state changes, PureComponent will do a shallow comparison on both props and state
>https://codeburst.io/when-to-use-component-or-purecomponent-a60cfad01a81 and many other React Native Posts
I honestly couldn’t follow what this meant for the longest time. But now that I do understand it, I’d like to explain what this means in my own words.
Let’s first take a look at our TaskRow component:
```javascript
class TaskRow extends React.PureComponent {
...prop definitions...
...methods...
etc.
}
<TaskRow
title={taskObject.title}
description={taskObject.description}
dueDateFormatted={taskObject.dueDateFormatted}
dueDateFormattedFromNow={taskObject.dueDateFormattedFromNow}
onCheckButtonPress={ () => this.markTaskAsDone(taskObject) }
/>
```
**TaskRow** has been given props that are all primitives (with the exception of `onCheckButtonPress`). PureComponent looks at all the props it’s been given and figures out whether any of them have changed (in the above example: has the `description` changed from the previous description it had? Has the `title` changed?). If so, it will re-render that row. If not, it won’t! Shallow comparison is most reliable with primitives (strings, numbers, etc.). One caveat: function props are compared by reference, so an inline arrow function like the `onCheckButtonPress` above is recreated on every parent render and will still trigger a re-render; passing a stable class method avoids that.
My mistake was not understanding what they meant by "shallow comparisons". So even after I extended PureComponent, I still sent my TaskRow an object as a prop, and since an object is not a primitive, it didn’t re-render like I was expecting. At times, it caused my other list row components to re-render even though nothing changed! So don’t make my mistake. **Use Pure Components, and make sure you use primitives for your props so that they can re-render efficiently.**
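To make "shallow comparison" concrete, here is roughly the check PureComponent performs (a simplified sketch, not React's actual source):

```javascript
// Simplified version of the shallow comparison PureComponent performs.
// Values are compared with ===, so primitives compare by value,
// but objects and functions compare by reference.
function shallowEqual(prevProps, nextProps) {
  const prevKeys = Object.keys(prevProps);
  const nextKeys = Object.keys(nextProps);
  if (prevKeys.length !== nextKeys.length) return false;
  return prevKeys.every((key) => prevProps[key] === nextProps[key]);
}

// Primitive props with equal values: no re-render needed
shallowEqual({ title: 'Contact Joe Bob' }, { title: 'Contact Joe Bob' }); // true

// Object props: a new object literal is a new reference, so it always re-renders
shallowEqual({ task: { title: 'x' } }, { task: { title: 'x' } }); // false
```

This is exactly why passing a `taskObject` prop defeated PureComponent: a fresh object each render is never `===` to the previous one, even when its contents are identical.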
## Summary, TLDR
Removing expensive computations from your list components, simplifying your list components, and using Pure Components went a long way toward improving performance in my React Native apps. It seriously felt like a night-and-day difference in performance and renewed my love for React Native.
I’ve always been a native-first type of mobile dev (coding in Objective C, Swift, or Java). I love creating fluid experiences with cool animations, and because of this, I’ve always been extra critical/cautious of cross-platform mobile solutions. But React Native has been the only one that has been able to change my mind and has me questioning why I would ever want to code in Swift or Java again. | m4rcoperuano |
176,521 | Let's Fix Some A11y Issues this Hacktoberfest 👩💻👨💻 | With Hacktoberfest just around the corner I started thinking about what I would like to contribute th... | 0 | 2019-09-30T05:50:35 | https://www.upyoura11y.com/contribute-to-a11y-in-oss | a11y, hacktoberfest, webdev, showdev |
With Hacktoberfest just around the corner I started thinking about what I would like to contribute this year (both in October and going forward).
Over the course of the year I've been doing my best to be an advocate for accessibility in web applications, and so I thought a natural thing to do would be to **help improve accessibility in Open Source, one Pull Request at a time**!
## Would you like to join me?
I've added a page over at [Up Your A11y: Open A11y OSS Issues Looking for Help](https://www.upyoura11y.com/contribute-to-a11y-in-oss) to help identify issues actively looking for contributors.
You'll find open GitHub issues that:
- Are from open source projects
- Reference 'a11y', 'accessibility', or 'accessible'
- Have the "Help wanted" or "Good first issue" label
- Have no assignee or pull request yet
- Are for JavaScript or HTML projects
## Have a great Hacktoberfest
Whatever you end up doing this year and beyond in OSS, happy coding! 👩💻👨💻
--------
*Did you find this post useful? Please consider [buying me a coffee](https://www.buymeacoffee.com/mgkZuRU) so I can keep making content* 🙂 | s_aitchison |
176,535 | hello world | apparently I have one of these now, hello | 0 | 2019-09-25T19:58:49 | https://dev.to/weems/hello-world-5d2b | apparently I have one of these now, hello | weems | |
176,602 | Improving Performance for Low-Bandwidth Users with save-data | Detect when users have requested lighter pages and serve them less | 0 | 2019-09-25T23:42:08 | https://www.mikehealy.com.au/save-data-for-low-bandwidth-users/ | performance, http, wordpress, php | ---
title: Improving Performance for Low-Bandwidth Users with save-data
published: true
description: Detect when users have requested lighter pages and serve them less
cover_image: https://cdn.mikehealy.com.au/wp-content/uploads/2019/09/save-data-cover.jpg
tags: performance,http,wordpress,php
canonical_url: https://www.mikehealy.com.au/save-data-for-low-bandwidth-users/
---
Everyone likes a fast website, but some users have connections and data plans that make it especially critical. In some places bandwidth is expensive, and any bytes you don't need to serve can save your users money.
I recently learned about the ['save-data' HTTP header](https://nooshu.github.io/blog/2019/09/01/speeding-up-the-web-with-save-data-header/) that clients may send to signify that they want a lower-bandwidth experience. The header alone doesn't do much, but it makes their preferences clear to you so you can make your own optimizations.
On mobile Chrome this setting is called 'Lite Mode'. Desktop users can install a browser extension to enable the header. Other browsers likely have their own way of enabling the setting.
Once you've detected this setting you might choose to style elements differently (for example dropping large background images), avoid decorative background video, or perhaps skip custom fonts that might delay rendering and add to the bandwidth costs.
The setting can be detected server-side by looking for a HTTP header (save-data=on) or client side in JS to set a flag for your CSS selectors.
```
//PHP example
function saveData() {
return (isset($_SERVER["HTTP_SAVE_DATA"]) && strtolower($_SERVER["HTTP_SAVE_DATA"]) === 'on');
}
//JS example (courtesy of Nooshu)
//add save-data class name to document element for CSS selectors
if ("connection" in navigator) {
if (navigator.connection.saveData === true) {
document.documentElement.classList.add('save-data');
}
}
```
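With that `save-data` class on the document element, non-essential decoration can then be dropped in plain CSS. A minimal sketch (the `.masthead` selector is just an illustrative example, not from my actual theme):

```
/* Skip the decorative background for users who asked to save data */
.save-data .masthead {
    background-image: none;
}
```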
With the detection in place you can omit non-essential elements for low-bandwidth users. For example, on my WordPress website I've skipped enqueuing the Google custom fonts, and dropped my masthead background image.
```
// functions.php
if( !saveData() ) {
wp_enqueue_style( 'fonts', 'https://fonts.googleapis.com/css?family=Open+Sans|Oswald:300,400,600');
}
/*
N.B. in WP it's good practice to prefix functions to avoid naming clashes.
I've skipped that for this example */
```
Here's my site on mobile Chrome with and without Lite Mode (aka save-data).

This was a pretty easy change to make for some quick performance gains for low-bandwidth users. The improvements could be even bigger for heavier sites, or if the header was considered during site development too.
(This post was originally published at [mikehealy.com.au](https://www.mikehealy.com.au/)) | mike_hasarms |
176,622 | Fullstacking: Final Styling | Now that we have everything working, at least to a minimum, we can beautify the project Spoiler:... | 2,044 | 2019-09-26T18:51:01 | https://dev.to/heymarkkop/fullstacking-final-styling-4028 | reactnative | Now that we have everything working, at least to a minimum, we can beautify the project
Spoiler:

## Buttons
The first visual component we have to work is the input button. Since I didn't want to spend much time trying to make a cool button, I've imported this one: [react-native-really-awesome-button](https://github.com/rcaferati/react-native-really-awesome-button).
Not that difficult to swap our buttons to it:
```react
import AwesomeButton from "react-native-really-awesome-button";
// ...
<Button onPress={handleSubmit} title="Add Event"></Button>
// Button becomes
<AwesomeButton onPress={handleSubmit}> Add Event </AwesomeButton>
```
Sweet. But we'd like it to be centered; how would we do that?
Well, React-Native has [StyleSheets](https://facebook.github.io/react-native/docs/stylesheet) which are similar to CSS.
```react
import {TextInput, StyleSheet, View} from 'react-native';
<View style={styles.card}>
{//...}
<AwesomeButton onPress={handleSubmit} style={styles.button} width={styles.button.width}> Add Event </AwesomeButton>
{//...}
</View>
const styles = StyleSheet.create({
card: {
flex: 1,
justifyContent: 'center',
alignItems: 'center',
},
button: {
margin: 10,
width: 200,
},
});
```
## Snackbars
[Snackbars](https://github.com/cooperka/react-native-snackbar) are great for outputting messages to the user.
Here's how I've used them, for example.
```react
import Snackbar from 'react-native-snackbar';
// ...
// When completing a UserLoginMutation
const onCompleted = async payload => {
if (payload.UserLogin.error) {
Snackbar.show({
title: payload.UserLogin.error,
duration: Snackbar.LENGTH_LONG,
backgroundColor: 'red',
color: 'white',
});
}
//...
```
## No cool input fields :(
I did some initial searching, but didn't find any good, up-to-date TextField component to use as a text input. If you know of any, please comment below ;D | heymarkkop
176,625 | My First Js Canvas | I Give myself 6 days to write my first Js moving canvas as a challenge I already know pure Js but I... | 0 | 2019-09-26T01:57:11 | https://dev.to/aelhor/my-first-js-canvas-58kj | I Give myself 6 days to write my first Js moving canvas as a challenge I already know pure Js but I want to push myself | aelhor | |
176,766 | A Docker Antivirus in Ruby | Originally posted on Medium Last year, I've had the occasion to work on a project to build Docker im... | 0 | 2019-12-09T20:22:44 | https://medium.com/@wdhif/a-docker-antivirus-in-ruby-a4cceb4528e0 | docker, antivirus, clamav, atomic | *Originally posted on [Medium](https://medium.com/@wdhif/a-docker-antivirus-in-ruby-a4cceb4528e0)*
Last year, I've had the occasion to work on a project to build Docker images for developers. For security reasons, the developers were not allowed to push to a registry the Docker images they had built on their computer. Instead, they had to use a 'builder', in Ruby, that would take their Dockerfile and build the image for them, after running some tests of course.
One of those tests was an antivirus, the [docker-antivirus](https://github.com/wdhif/docker-antivirus).
{% github wdhif/docker-antivirus no-readme %}
### How to run an Antivirus on a Docker image?
The first thing was to choose an antivirus, and the choice was pretty straightforward. It should be open-source, run on Linux, and be performant. The answer was ClamAV.

ClamAV is an open-source antivirus that works on Linux, with a public virus database that, as of 10 February 2017, contained over 5,760,000 virus signatures.
---
The idea was to:
1. Instantiate a Docker container with the image we want to test.
2. Mount the container file system
3. Run ClamAV on the mounted file system
4. Print some result
But we already have an issue here: **it is not possible to mount the root of a container**.
### Atomic to the rescue
Atomic is a project by Red Hat to deploy and manage container-based infrastructures. One of the products of Project Atomic is the [Atomic Run Tool](https://github.com/projectatomic/atomic).
One of the commands added by this tool is [atomic mount](https://github.com/projectatomic/atomic/blob/master/Atomic/mount.py), which allows us to mount a container's root. Atomic mount uses [OSTree](https://ostree.readthedocs.io/en/latest/), a library that lets us interact with hierarchical file systems.
Using Atomic mount, we are able to mount the root of a container, which in turn allows us to run ClamAV on it.
### Wrapping things up
By using both ClamAV and Atomic, I was able to create a little utility in Ruby to help me check viruses on a Docker image.

By running the docker-antivirus on the Busybox Docker image, we can confirm that this image is safe. We also get some information about the scan itself, for example how many files were scanned and how long it took. But we must also test the docker-antivirus on a malicious Docker image.
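The full utility has more plumbing, but the core flow can be sketched in a few lines of Ruby (command arguments, the mount path, and the helper names here are illustrative, not the actual docker-antivirus code):

```ruby
# Sketch of the scan flow: mount the container root with `atomic mount`,
# run clamscan over it, then read the infected-file count from the summary.
def scan_container(container_id, mount_point = '/mnt/scan')
  system('atomic', 'mount', container_id, mount_point) or raise 'mount failed'
  output = `clamscan --infected --recursive #{mount_point}`
  parse_infected_count(output)
ensure
  system('atomic', 'umount', mount_point)
end

# clamscan ends its report with a summary line such as "Infected files: 0"
def parse_infected_count(clamscan_output)
  count = clamscan_output[/Infected files: (\d+)/, 1]
  count or raise 'no scan summary found'
  Integer(count)
end
```

A return value greater than zero means ClamAV flagged something inside the image.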
### Testing our solution with the EICAR test
The EICAR test file is a simple string of characters created by the European Institute for Computer Antivirus Research (EICAR) to test antivirus solutions without any risk.
```
X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*
```
This simple string is designed to trigger any antivirus, although it is completely harmless. For testing purposes, I simply created a Docker image containing this file, the [docker-eicar](https://github.com/wdhif/docker-eicar).
Here, we can see the result of the docker-antivirus when analyzing the docker-eicar image:

As you can see, the docker-antivirus tells us that there is in fact something wrong with the docker-eicar image.
### What's next?
I am planning on adding more information about the virus itself in case it's detected. I would also like to make the docker-antivirus easier to use, maybe by embedding it inside a docker image, or maybe by using static builds.
You can also participate yourself in the [development of this project](https://github.com/wdhif/docker-antivirus), contributions are more than welcome! | wdhif |
176,788 | Best Possible Hibernate Configuration for Batch Inserts | Problem In general, the hibernate entities (domains) are set to use database sequence as I... | 0 | 2019-09-26T14:10:56 | https://dev.to/smartyansh/best-possible-hibernate-configuration-for-batch-inserts-2a7a | database, batching, hibernate, java | #Problem#
In general, the hibernate entities (domains) are set to use database sequence as Id generator.
In such a case, for every insert hibernate makes two round trips to the database. It'll first make a round trip to get the next value of the sequence to set the identifier of the record. Then make another round trip to insert the record.
Assume, we want to insert 1000 records in the database. It'll result in a total of 2000 round trips to the database (1000 round trips each, to get the next value of a sequence and insert the records).
*Even in a case of very low network latency, performing a few thousand inserts may require a significant amount of time.*
The first major problem is to perform each insert separately.
The second major problem is to make a round trip to get the next value of the sequence every time.
#Theoretical Solution#
We should try to reduce the network round trips to the database for bulk inserts by batch processing of inserts.
Along with that, reduce the network round trips to get the next value of the sequence for every insert.
#Hibernate Solution#
**First problem:**
We know the obvious.
*Use the [JDBC batching](https://docs.jboss.org/hibernate/orm/current/userguide/html_single/Hibernate_User_Guide.html#batch) provided by Hibernate.*
Set the following properties in the hibernate configuration file.
```xml
<property name="hibernate.jdbc.batch_size" value="100"/>
<property name="hibernate.order_inserts" value="true"/>
```
Now, hibernate will make a batch of 100 inserts and orders them. Then, it will make a single network round trip to the database to insert 100 records.
Therefore, the initial 1000 round trips to insert the records will reduce to 10.
**Second problem:**
We'll use an enhanced sequence identifier with an [optimizer](https://docs.jboss.org/hibernate/orm/current/userguide/html_single/Hibernate_User_Guide.html#identifiers-generators-optimizer) strategy like pooled or pooled-lo, which provides in-memory identifiers. That is, they reserve ids in memory to be used later.
Let's see what a 'pooled' optimizer strategy can do!
To enable it, we'll need to set the 'INCREMENT BY' of the database sequence to 100.
Then, set the pooled optimizer strategy with *increment_size = 100* in the entity:
```java
@Id
@GenericGenerator(
name = "sequenceGenerator",
strategy = "org.hibernate.id.enhanced.SequenceStyleGenerator",
parameters = {
@Parameter(name = "sequence_name", value = "hibernate_sequence"),
@Parameter(name = "optimizer", value = "pooled"),
@Parameter(name = "initial_value", value = "1"),
@Parameter(name = "increment_size", value = "100")
}
)
@GeneratedValue(
strategy = GenerationType.SEQUENCE,
generator = "sequenceGenerator"
)
private Long id;
```
Now, hibernate will make a single round trip to the database to advance the sequence, and reserve the next 100 values in memory to use as ids for the inserts.
With this approach, we'll make 1 round trip to fetch the next value of the sequence for every 100 records to insert.
Hence, the initial 1000 round trips to get the next value of the sequence will reduce to 10.
*Therefore applying both of the solutions, we can reduce the round trips to just 20 for 1000 inserts.*
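To see why the pooled optimizer cuts the round trips, here is a simplified in-memory model of a pooled-style allocator (not Hibernate's actual implementation) that counts how often it has to go to the database:

```java
// Simplified model of a "pooled"-style id allocator: one simulated database
// call reserves a block of increment_size ids, which are then handed out
// from memory until the block is exhausted.
class PooledIdAllocator {
    private final int incrementSize;
    private long nextId = 1;
    private long hi = 1;          // exclusive upper bound of the reserved block
    private int databaseCalls = 0;

    PooledIdAllocator(int incrementSize) {
        this.incrementSize = incrementSize;
    }

    long nextValue() {
        if (nextId >= hi) {
            databaseCalls++;      // would be a sequence round trip in reality
            hi = nextId + incrementSize;
        }
        return nextId++;
    }

    int getDatabaseCalls() {
        return databaseCalls;
    }
}
```

With increment_size = 100, allocating ids for 1000 inserts touches the "database" only 10 times, matching the arithmetic in the article.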
We all may know about batch insert optimization. But, the sequence optimizer is quite a winner.
Not many of us know that this kind of optimization strategy is already available.
*Doesn't it sound cool and smart?*
Guys, Please do share your thoughts with me. | smartyansh |
177,794 | Designing an API to mash up public and private data | Designing an API to mash up public an... | 0 | 2019-09-27T21:19:36 | https://dev.to/kp/designing-an-api-to-mash-up-public-and-private-data-1p6g | help, design | {% stackoverflow 58141399 %} | kp |
176,841 | Good Practices: Dates, Time & Time zones in PHP | Your PHP script does not cope with dates and time? We all been there, here is how you solve it... | 0 | 2019-09-26T12:52:28 | https://dev.to/anastasionico/good-practices-dates-time-time-zones-in-php-256g | php, datetime, dateinterval, dateperiod | ---
title: Good Practices: Dates, Time & Time zones in PHP
published: true
description: Your PHP script does not cope with dates and time? We all been there, here is how you solve it...
tags: PHP, DateTime, DateInterval, DatePeriod
cover_image: https://thepracticaldev.s3.amazonaws.com/i/5sbv0u0yq4wtbl6tehd9.jpg
---
How do you know when it’s time to sleep or wake up in the morning?
If you are a Web developer this question may be pretty simple,
If you start to feel asleep get a cup of coffee and go back on coding.
I have done it as well,
This may work but after a while, our productivity level will drop and this can lead to mistakes or even worse: result in bad code.
Our body, in order to prevent us from writing down horrors that we would not understand the next morning (or getting fired by our project manager), has developed some extremely special cells in the back of our eyes called _Photosensitive Retinal Ganglion Cells_.

These cells are in charge of calculating the amount of luminosity we receive from the outside world and switch the state of our internal biological clock from awake to drowsy.
Unfortunately, our PC or Mac is not so lucky,
Yes, they have an internal clock, but its efficiency is not even close to ours, and for a PHP developer that was a problem until a few years ago.
Now instead everything has changed!
Manage dates and times in PHP is still a pain but from PHP 5.2 on it has become easier and easier.
In this post, you are going to learn all about the classes that let you manage time on your PHP web application
## The series
In this series, we are exploring the best practices a web developer must take care of when creating or managing PHP code.
[Sanitize, validate and escape](http://anastasionico.uk/blog/good-practices-how-to-sanitize-validate-and-escape-in-php)
[Security and managing passwords](http://anastasionico.uk/blog/good-practices-php-security-manage-password)
[Handling error and exceptions](http://anastasionico.uk/blog/good-practices-handling-error-and-exceptions-in-php)
[Dates and Time](http://anastasionico.uk/blog/good-practices-date-time-time-zones-php)
## Server Time Zone
One of the first tasks that has to be on your checklist when you start a new web application is to **set the right time zone**.
By declaring it you will avoid errors with the database and annoying warning messages that will appear.
There are two different ways to set the time zone.
The first and the one I advice is to declare the time zone in the php.ini file.
With the following command
_date.timezone = ‘Europe/London’_
You can also declare the default time zone during runtime.
In this case you need to use the PHP command as below:
```php
date_default_timezone_set("Europe/London");
```
Of course, London it’s just an example you need to use the one you prefer.
[In the PHP manual, you can find the complete list of supported timezones.](https://www.php.net/manual/en/timezones.php)
## The DateTime class
In PHP 5.2 a little revolution began,
The _DateTime_ class was created and dealing with dates was not an impossible task anymore.
It actually became very easy to work with, and the number of functionality was impressive.
Let’s get technical for a second then you’ll see some example.
**The DateTime Class replace the built-in function date() and time().**
Both functions are still available but it is preferred to use the class and work with the date object instead.
_DateTime_ implements the _DateTimeInterface._
This interface provides several constants and a few methods such as _DateTime::diff()_ that return the difference between two dates, or _DateTime::format()_ that returns a date in a given format.
To create a new _DateTime_ object you need to pass a variable, in string format, with the date you want to create, the string will be then parsed and the object created.
You can use a lot of different formats to create an object, from UNIX, to pseudo-code, to American dates to plain English.
Have a look at these examples below:
```php
$stringsOfDate = ['Next Monday', '', '1 January 2019', '+1 week'];
foreach($stringsOfDate as $stringOfDate) {
$dateTime = new DateTime($stringOfDate);
echo $dateTime->format(DateTime::RSS) . PHP_EOL;
}
```
In this snippet we created and output 4 DateTime objects, injecting parameters in 4 different formats.
The first string is written in plain English and will show the date of next Monday, the second parameter passed is an empty string, PHP will NOT return an error in this case, it will return the current timestamp, then we got a date written in European format, eventually some pseudo-code.
They will all work and be shown in an RSS format, which, for the record, look like this:
_D, d M Y H:i:s O_
### The constants
The RSS constant in the snippet, and the weird sequence of letters above, represent how the date is actually going to be displayed.
Let’s start with the formatting code:
- **Y** represent a full four-digit year (2019)
- **M** its a two-digit month (03)
- **d** day of the month, two-digit with leading zeros (06)
- **D** three letters textual day (Tue)
- **H** 24h format hour with leading zeros (03)
- **i** two-digits minutes with leading zeros (59)
- **s** two-digits seconds with leading zeros (59)
- **O** difference to GMT in hours
- **T** time zone abbreviation (EST)
[There are a lot more of these and you can find all of them in the official PHP manual.](https://www.php.net/manual/en/function.date.php)
Now that you understand how these letters work, let's have a look at the constants that are available in DateTimeInterface, and thus in the DateTime class.
- **DateTimeInterface::ATOM** Y-m-d\TH:i:sP
- **DateTimeInterface::COOKIE** l, d-M-Y H:i:s T
- **DateTimeInterface::ISO8601** Y-m-d\TH:i:sO
- **DateTimeInterface::RFC822** D, d M y H:i:s O
- **DateTimeInterface::RFC850** l, d-M-y H:i:s T
- **DateTimeInterface::RFC1036** D, d M y H:i:s O
- **DateTimeInterface::RFC1123** D, d M Y H:i:s O
- **DateTimeInterface::RFC2822** D, d M Y H:i:s O
- **DateTimeInterface::RFC3339** Y-m-d\TH:i:sP
- **DateTimeInterface::RFC3339\_EXTENDED** Y-m-d\TH:i:s.vP
- **DateTimeInterface::RSS** D, d M Y H:i:s O
- **DateTimeInterface::W3C** Y-m-d\TH:i:sP
### Date calculations
There are several ways you can do calculation while playing around with dates,
In this paragraph, I’ll show you the quick one and the proper one.
**The quickest way you can do calculations is by using the modify() method of the DateTime class.**
It needs a string as a parameter and returns the updated DateTime object
```php
// Creating a new DateTime object using ‘now’ as date
$dateTime = new DateTime();
// adding one month to current date and output the result
$dateTime->modify('+1 month');
echo $dateTime->format(DateTime::COOKIE) . PHP_EOL;
```
Another method, and the proper way to do calculations according to expert web developers, is to use the _DateInterval_ class together with the _DateTime_ methods _add()_ and _sub()_.
In order to instantiate a _DateInterval_ object, you need to know about interval\_spec.
It stands for interval specification and it is just a weird string that requires a very specific format.
This string starts with the letter _P_, which stands for "Period", followed by pairs of a number and a period designator, ordered from the biggest unit to the smallest.
Also, in case you want to specify a time, you need to prefix it with the letter _T_, which stands for "Time".
It is confusing, isn’t it?
Let’s build a few together.
Let’s say we want to add 1 month, 2 weeks, 3 days to a DateTime object.
And in another example we want to remove 1 year, 1 hour, 1 minute and 30 seconds from a DateTime object.
1. We start adding the period ‘P’
2. Then adding 1 month ‘P1M’
3. Then adding 2 weeks ‘P1M2W’
4. Then adding 3 days ‘P1M2W3D’
That is the first string we need,
Let’s do the second with a time
1. We start adding the period ‘P’
2. Then adding 1 year ‘P1Y’
3. Now we need the time, let’s add the T prefix ‘P1YT’
4. Then adding 1 hour ‘P1YT1H’
5. Then adding 1 minute ‘P1YT1H1M’
6. Then adding 30 second ‘P1YT1H1M30S’
Now we have the two strings that we must use as parameter when creating the DateInterval objects
```php
$dateTime = DateTime::createFromFormat('d-m-Y H:i:s', "27-09-2019 20:45:30");
// the current $dateTime value is 27-09-2019 20:45:30
$intervalToAdd = new DateInterval('P1M2W3D');
// NOTE: before PHP 8.0, W could not be combined with D in an interval spec,
// so the 2 weeks are ignored here and only 1 month and 3 days are added
$dateTime->add($intervalToAdd);
// after adding $intervalToAdd, the $dateTime value is 30-10-2019 20:45:30
echo $dateTime->format(DateTime::COOKIE) . PHP_EOL;
// this block of code will output: Wednesday, 30-Oct-2019 20:45:30 UTC
$intervalToSubtract = new DateInterval('P1YT1H1M30S');
$dateTime->sub($intervalToSubtract);
// after subtracting $intervalToSubtract, the $dateTime value is 30-10-2018 19:44:00
echo $dateTime->format(DateTime::COOKIE) . PHP_EOL;
// this block of code will output: Tuesday, 30-Oct-2018 19:44:00 UTC
```
### Manual Calculations
#### What is the Unix epoch?
Sometimes you just want to do some calculation by yourself, without using all these fancy methods provided by the DateTime class.
PHP provides some functions that let you create dates and do not need you to invoke classes or even use OOP at all.
These PHP functions instead leverage the UNIX timestamp.
**Unix timestamp is a number that keeps increasing and count the number of seconds passed since the UNIX epoch, which is 1 January 1970 00:00:00 GMT.**
The advantage of using this method is that is time zone independent and really easy to implement into your code.
The two PHP functions are:
#### strtotime()
**This function parses an English textual date-time description and returns a Unix timestamp.**
You can use pretty much any sentence you prefer as a parameter and it will be accepted.
You don’t believe me?
Below are just a few examples, directly from the manual:
```php
strtotime("now");
// 1569396041
strtotime("10 September 2000");
// 968544000
strtotime("+1 day");
// 1569482441
strtotime("+1 week");
// 1570000841
strtotime("+1 week 2 days 4 hours 2 seconds");
// 1570188043
strtotime("next Thursday");
// 1569456000
strtotime("last Monday");
// 1569196800
```
That is incredibly easy to use, and you can now manipulate and interact with these integers as you like.
#### mktime()
**The "make-time" function is a bit more complicated because of its arguments, but not that much.**
It returns a Unix timestamp in the form of a long integer that corresponds to the arguments given.
You do not need to pass all the arguments; as a matter of fact, you do not need to pass any argument at all. Since this function requires its arguments in a specific order, PHP will use the current moment for the ones that are left out.
Here are a few examples:
```php
// mktime(h, i, s, m, d, y)
date('F jS, Y g:i:s a', mktime(0, 0, 0, 0, 0, 2013));
// November 30th, 2012 12:00:00 am
date('F jS, Y g:i:s a', mktime(1, 1, 1, 1, 1, 2013));
// January 1st, 2013 1:01:01 am
date("M-d-Y", mktime(0, 0, 0, 12, 32, 1997));
// Jan-01-1998
date("M-d-Y", mktime(0, 0, 0, 13, 1, 1997));
// Jan-01-1998
```
If you paid attention, you surely noticed something weird in the results above.
The last two examples result in the first day of January 1998 even though we specified another date.
The reason for this is that if you pass a value greater than what a specific position allows, PHP is smart enough to understand it and roll over to the next period.
In this case, there is no 32nd of December or 13th month in a year, so PHP considers the value as the 1st day of January of the following year.
Isn’t that clever?
### How to compare Dates?
Another great functionality of _DateTime_ is calculating the difference between two dates.
You can call the _diff()_ method on a _DateTime_ object, pass another _DateTime_ object as a parameter, and you will get a _DateInterval_ object (see below) representing the period between the two.
That’s it with the talk, let’s see some code.
```php
$now = new DateTime();
$christmas = new DateTime('25 December');
if ($now > $christmas) {
$christmas = new DateTime('25 December next year');
}
$interval = $christmas->diff($now);
echo "$interval->days days until Christmas";
// 90 days until Christmas
```
_$interval_ is an instance of the _DateInterval_ class, and its _days_ property holds an integer representing the days between now and Christmas day, 90 at the moment I am writing this sentence.
Better be fast with the presents!
## The DateInterval class
We have just seen an example of how _DateInterval_ is used in the previous section of this article.
Now we’ll dive into this class by itself.
**DateInterval can represent two lengths of time: a fixed one (e.g. one week) or a relative one (e.g. yesterday).**
The reason we use _DateInterval_ is to modify DateTime instances.
We saw above how to use the _add()_ and _sub()_ methods to manipulate dates.
DateInterval has a constructor method,
[if you do not know what a constructor method is you can read about it in the Object-Oriented Basic Series here](http://anastasionico.uk/blog/php-basics)
The constructor requires an interval specification string, as we saw before; alternatively, you can use the static method createFromDateString(), which accepts a plain English string:
```php
$objConstructed = new DateInterval('P1DT12H');
$objFromDateString = DateInterval::createFromDateString('1 day + 12 hours');
var_dump($objConstructed);
object(DateInterval)#1 (16) {
["y"]=>
int(0)
["m"]=>
int(0)
["d"]=>
int(1)
["h"]=>
int(12)
["i"]=>
int(0)
["s"]=>
int(0)
["f"]=>
float(0)
["weekday"]=>
int(0)
["weekday_behavior"]=>
int(0)
["first_last_day_of"]=>
int(0)
["invert"]=>
int(0)
["days"]=>
bool(false)
["special_type"]=>
int(0)
["special_amount"]=>
int(0)
["have_weekday_relative"]=>
int(0)
["have_special_relative"]=>
int(0)
}
// Both ways return an identical DateInterval object
```
Note that the _DateInterval_ interval specification does not support split seconds (microseconds, milliseconds, etc.).
## The DateTimeZone class
If you have worked with clients from different countries, you know that working across time zones is very often a source of pain.
You save something in the database and the timestamps are just not the ones you want.
You run a MySQL query where the modified field has a specific period and it’s all screwed up just because you, or the user, are in a different time zone.
**PHP has solved this problem in several ways; one of them is the use of the DateTimeZone class.**
It is very easy to create a _DateTimeZone_ object; the only value you need is one of the supported timezone strings.
In my opinion,
the most common use of this type of object is when creating a new _DateTime_ instance: the second argument of the constructor, in fact, is an instance of the
_DateTimeZone_ class. It is not mandatory, but it is more than advisable to add it, especially in multilingual web applications.
```php
$timezone = new DateTimeZone('Europe/London');
$datetime = new DateTime('2019-10-01', $timezone);
var_dump($datetime);
object(DateTime){
["date"]=>
string(26) "2019-10-01 00:00:00.000000"
["timezone_type"]=>
int(3)
["timezone"]=>
string(13) "Europe/London"
}
$datetime->setTimezone(new DateTimeZone('Asia/Dhaka'));
var_dump($datetime);
object(DateTime){
["date"]=>
string(26) "2019-10-01 05:00:00.000000"
["timezone_type"]=>
int(3)
["timezone"]=>
string(10) "Asia/Dhaka"
}
```
Personally, so that I don’t go mad when I know users from abroad are going to connect to one of the websites I developed,
I set the timezone to my server’s time zone, so all the data will be consistent.
I can then change the dates using one of the functions in this article if I need to.
## The DatePeriod class
Did you ever forget your parents’ birthdays or, even worse, the anniversary with your girlfriend?
Don’t worry, PHP got you covered.
DatePeriod is the PHP class in charge of dealing with recurring events and dates.
**There are 3 different ways to create a DatePeriod instance.**
The first method is passing an _ISO 8601_ string, which is a string that describes a repeating interval.
[You can create an ISO string online here](https://www.infobyip.com/epochtimeconverter.php)
You use the second method when you know the total number of iterations you want your code to perform.
The constructor requires 3 arguments, plus an optional fourth.
The arguments are the date at which you want to start the iteration, the interval of time to step by between dates, and the number of iterations you want to perform before finishing; lastly, you can pass the flag DatePeriod::EXCLUDE\_START\_DATE if you want to start from the second iteration.
The third method is very similar to the second, with the only difference that instead of defining the number of cycles, you define the end date.
Here are examples of each of these three methods:
```php
echo "Constructor using Iso";
$iso = 'R8/2019-10-31T00:00:00Z/P7D';
$periodISO = new DatePeriod($iso);
foreach ($periodISO as $date) {
echo $date->format('Y-m-d');
}
echo "Constructor using defined recurrences";
$start = new DateTime('2019-10-31');
$interval = new DateInterval('P7D');
$recurrences = 8;
$periodRecurrences = new DatePeriod($start, $interval, $recurrences);
foreach ($periodRecurrences as $date) {
echo $date->format('Y-m-d');
}
echo "Constructor using end date";
$start = new DateTime('2019-10-31');
$interval = new DateInterval('P7D');
$end = new DateTime('2019-12-31');
$periodEnd = new DatePeriod($start, $interval, $end);
foreach ($periodEnd as $date) {
echo $date->format('Y-m-d');
}
```
## nesbot/carbon
**If you use dates and times in your PHP application and you know how to use Composer, you’ve got to use Brian Nesbitt’s Carbon component.**
It is very easy to use, it has very detailed documentation, and a lot of functionality that improves on the DateTime class in almost every aspect.
I have little more to say about it,
just look at the snippet below:
```php
$howOldAmI = Carbon::createFromDate(1975, 5, 21)->age;
Carbon::now()->subMinutes(2)->diffForHumans(); // '2 minutes ago'
if (Carbon::now()->isWeekend()) { echo 'Party!';}
$date = Carbon::now()->locale('it_IT');
echo $date->locale(); // it_IT
echo $date->diffForHumans(); // 1 secondo fa
echo $date->monthName; // settembre
echo $date->isoFormat('LLLL'); // giovedì 26 settembre 2019 09:28
```
That is pretty amazing.
[Here is the full documentation of Nesbot Carbon](https://carbon.nesbot.com/docs/)
If you discovered something new, learning more is as easy as tapping into the image below
[](http://eepurl.com/dIZqjf)
## Conclusion
As you found out in this article, the computers on which our web applications run do not have amazing cells at the back of their CPUs that automatically switch depending on the hour our visitors log in or the time zone they are in.
There is also no doubt that managing dates, times and time zones in your web applications is a daunting task.
However,
**we are all now (hopefully) working with PHP 7 and later versions of the language, so there are no excuses not to implement the features you just read about here.**
These good practices are going to actually make your job much easier and your software more scalable and reliable. | anastasionico |
177,035 | 10 tips on making to-do lists that will maximize your productivity | It happens that we get tons of tasks that we should finish them, but we’re used to be late at finishi... | 0 | 2019-09-26T15:04:30 | https://dev.to/lartwel/10-tips-on-making-to-do-lists-that-will-maximize-your-productivity-569g | productivity, career, motivation | It happens that we get tons of tasks that we should finish, but we’re often late finishing them for many reasons. I’d love to share some thoughts I learned from books such as **Eat That Frog** and other resources. I have experienced them myself over time and they helped me improve my productivity and get the most out of my days, and I hope they will help you too.
As you know, productivity is a crucial matter; it’s not optional in successful people’s lives, and it should not be optional in yours either.
You need to make a plan for your day in order to track your tasks and measure what consumes your time and what can be a good investment of it. You will also need to make long-term plans too, as I will mention later.
> The day that is not planned to is a wasted day.
Take it as a rule. Each day has its own incidents in addition to its routine, but you shouldn't let that dominate your daytime. Tracking your day gives you a better idea of what should be done and when to do it.
Take 5-10 minutes before sleeping to make a plan list of what you’ll do the next day.
# Optimize your list
### 1- Choose your tasks wisely
We all have hobbies, but not all of them have the same importance or are worth our time. They shouldn’t have the same weight on your task list, and the trivial ones shouldn’t be on it at all.
### 2- Prioritize tasks
Tasks that have the most positive impact on your day's progress should be at the top of your list. Most of the time, they’re the tasks that are 70-80% more important than other tasks that might be trivial. They may not be likable, and I am not asking you to like them, but when you finish them first you will feel the joy of achievement. Tasks tend to be energy-consuming, so you'd better start with the most important ones; that way, when you reach the end of the day, you’ll have gotten the most out of it.
### 3- Begin Immediately
It may sound like a cliche, but it is not. To overcome procrastination, you should start immersing yourself in the task immediately. When you leave yourself time to decide whether to do the task now or later, you’ll find an excuse to postpone it, and it’ll be a false pretense most of the time.
### 4- No multi-tasking
We should define what multi-tasking is. If you consider multi-tasking to be doing multiple tasks at the same time, you should stop doing it. But if you see multi-tasking as the ability to finish a task and start another one immediately without being distracted, then it’s not bad at all; still, you should space out hard tasks to get out of the pressure cycle.
### 5- Keep away from distractions
As I’ve mentioned before, while doing a tough task we like to delay or even abandon it. Having distractions around us helps reinforce this tendency. Keep an eye on what distractions you have and keep away from them while doing a task. It can be a friend chatting with/calling you, noise, children, games, etc…
### 6- Track your time
Each task in your list should be assigned the time it should take. If you feel that you estimated the time wrongly, it’s better to play the priority game here and decide whether these tasks should take another task’s time or not. If you prioritized your list as we mentioned before, you’ll have an easy decision to make here. Just don’t fall into the trap of doing one or two tasks and leaving other, more important ones neglected.
### 7- Consistency
Make planning a habit. You’ll be more organized and more aware of when and how you will achieve a certain goal.
### 8- Plan for the long-term too
Daily plans are great for saving the day, but you should also plan long-term for larger projects and tasks, and divide them into tiny short-term tasks.
### 9- Pick your own style
I prefer planning on sticky notes because I feel joy when I finish a tough task and check it off. I keep the sticky note near the desk where I am working to remind me of my tasks every time I see it. I used to use Google Keep. It's simple enough and has helpful features, but I am more into sticky notes nowadays. If you’re a developer and you have fellow developers whom you plan tasks with, you may choose a service such as Trello, but that’s not our case here.
You can try different ways and pick the one you find most suitable for you.
### 10- Don’t be hard on yourself
Your tasks don’t have to be strict at all. They should include some fun. And after each worthy task, you should reward yourself for some time, but not for so long that you lose track of your tasks. You should also keep a day free of tasks, job emails, etc… For me, I love to keep one day of my week free of programming, freelancing, and other work. I prefer watching a movie or visiting a friend, and I even read tech articles on such a day. I know that reading articles may not be a relaxing activity for some of you on such a day, but I just love reading them, so I do it (and they’re indeed part of my list most of the time).
## Conclusion
Plans aren’t the only time saver, and we all waste time, but not equally, and it will happen that you lose track of your time. You should be aware of this truth and minimize the time wasted as much as possible. Successful people are more organized and more aware of their weak points, and they are always consistent and patient in working to achieve their goals, and you should be too. | lartwel |
177,104 | Multiple Tabs in VIM | A quick tutorial on how to use tabs in VIM | 0 | 2019-09-26T17:09:50 | https://dev.to/connorbode/multiple-tabs-in-vim-gn4 | vim, linux | ---
title: Multiple Tabs in VIM
published: true
description: A quick tutorial on how to use tabs in VIM
tags: vim, linux
---
I've recently been transitioning to using VIM as my full time editor. There are a lot of tricks to learn, but I think I'm getting the hang of things.
One major advantage is that I can now proficiently edit code over an SSH connection.
Anyways, what you're here for:
## Using multiple tabs in VIM
This is actually one of the simplest tricks to learn in VIM. There are four commands:
- `:tabnew <filename>` (open a new tab)
- `:tabn` (jump to the next tab)
- `:tabp` (jump to the previous tab)
- `:q` (close the file, which also closes the tab)
If you're using netrw, the default file browser, you can also open that up in a new tab by typing `:tabnew .`
Hope this helps!
---
Follow me here on dev.to or on [Twitter @connorbode](https://twitter.com/connorbode) for more tips like this!
| connorbode |
177,230 | Setting up Python unittests with GitHub annotations | Hello! This is my first post on DEV, so I'm going to try to make this quick and simple. If you want... | 0 | 2019-09-27T01:21:43 | https://dev.to/rdil/setting-up-python-unittests-with-github-annotations-3li1 | python, unittests, cirrus, ci | Hello!
This is my first post on DEV, so I'm going to try to make this quick and simple.
If you want inline examples of exactly where your code is failing, you can integrate [Cirrus CI with GitHub annotations](https://medium.com/cirruslabs/github-annotations-support-227d179cde31). This is super simple to do.
1. Start off by writing unittests. This is super simple.
2. Setup a basic CI pipeline (`.cirrus.yml` file). You will want to do something like this:
```yaml
tests_task:
# define Docker container
container:
image: python:latest
# install project requirements and the annotation result builder
install_script: |
pip install -r ./some-requirements-file.txt
pip install unittest-xml-reporting
# normally, you would run unittests with the main command
# we need to build XML reports, so use this command
script: python3 -m xmlrunner tests
# replace tests with the name of the module your unittests are in
# (always) upload results - even if the tests fail
always:
unittest_results_artifacts:
# where the outputted XML files are
path: ./*.xml
# required, even though it sounds wrong
format: junit
```
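Step 1 above glosses over the tests themselves. In case you are starting from scratch, a minimal test module might look like the sketch below (the `slugify` function and the file name are hypothetical; the module just has to live in the `tests` package referenced by the `script` step):

```python
# tests/test_example.py -- a minimal, hypothetical unittest module
import unittest


def slugify(title):
    """Turn an article title into a URL slug."""
    return title.strip().lower().replace(" ", "-")


class SlugifyTests(unittest.TestCase):
    def test_basic_title(self):
        # spaces become hyphens, letters are lowercased
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_whitespace(self):
        # surrounding whitespace is removed before slugifying
        self.assertEqual(slugify("  Padded  "), "padded")
```

With a module like this in place, the `python3 -m xmlrunner tests` command above should discover the test cases and write the JUnit-style XML files that the artifacts step uploads.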
And that is all you need to do!
You should then get annotations.
Have a nice day! | rdil |
177,257 | hello,world | A post by 蒙蒙 | 0 | 2019-09-27T02:48:18 | https://dev.to/meng/hello-world-4554 | meng | ||
182,639 | Game Of Stakes Continues: Day 3 | Join the discussion in DAOBet Telegram groups: International community [EN] Validators group chat G... | 0 | 2019-10-04T09:33:54 | https://daobet.org/blog/game-of-stakes-continues-day-3/ | crypto, cryptocurrency, blockchain, gambling | Join the discussion in DAOBet Telegram groups:
- [International community [EN]](https://t.me/daobet)
- [Validators group chat](https://t.me/daobet_validators)
- [Game developers group chat](https://t.me/daobet_developers)
Join our social media to keep up to date with all the announcements: [Twitter](https://twitter.com/daobet_org/), [Facebook](https://facebook.com/DAObet.org/), and [LinkedIn](https://linkedin.com/company/daobet/).
Since the launch of GoS, the participating validators have produced an average of **24,000** blocks each. A **27%** stake has already been activated.
A new member has joined the game: daobetglobal.
Thus, **24 validators** are now vying for victory on the network.
The events of the past day:
The three leaders have changed as the representatives of **kaiserlabs11** are in first place, **P2P Validator** are right behind them, and former leader **EOS Rio** is in third place.
In addition, the validators have expanded the p2p-peer-address list, thereby strengthening the network. The cohesive work of the validators allows the network to be more stable and more productive.
Many thanks to **Nick** from **Everstake** who made a guide on how to start a node without a docker: https://github.com/everstake/daobet/blob/master/MANUAL.md
And special thanks to the entire **Everstake** team for adding two more peers and launching a full node!
- daoseed1.everstake.one:9876
- daoseed2.everstake.one:9886
DAOBet full history node from: daofull.everstake.one:8888
This will make the [GoS network](https://daovalidator.com/leaderboard) stronger, more resilient to attacks, and more responsive to transactions.
New participants are continuing to join the game. Stay with us, and do not forget to vote for each other and claim rewards!
**P.S.**
Do not forget that you can still join Game Of Stakes and lay claim to the prizes!
The steps needed to start:
1) Fill in the following form https://daobet.typeform.com/to/Vcin1c if you have not already
2) Register your account using the given instructions https://github.com/DaoCasino/Game-of-Stakes
3) Launch you node with this script https://github.com/DaoCasino/Game-of-Stakes/tree/master/run-producer
4) Congrats on joining the Game!
| daobet |
177,327 | How to send emails with just a few lines of code with Yagmail in Python | Original post How to send emails with just a few lines of code with Yagmail in Python On our last... | 2,455 | 2019-09-27T05:24:43 | https://dev.to/davidmm1707/how-to-send-emails-with-just-a-few-lines-of-code-with-yagmail-in-python-25pm | python, tutorial, learning |
Original post [How to send emails with just a few lines of code with Yagmail in Python](https://letslearnabout.net/tutorial/how-to-send-easily-emails-with-yagmail/ "Permalink to How to send emails with just a few lines of code with Yagmail in Python")
![Yagmail tutorial][1]
In our last lesson, [How to send beautiful emails with attachments using only Python][2], we built a script to send an HTML-based email with attachments.
If you are using a Gmail account, we can significantly simplify our code with [Yagmail][3], an STMP client for Gmail.
* * *
{% youtube D4dX1pueV54 %}
* * *
### Setting up everything
We are going to send emails using a Gmail account. Before anything, create a Gmail account (or use your own) and enable the "Less secure app access" in your account. You can do it here: [https://myaccount.google.com/u/0/security?hl=en&pli=1][4]
![Less secure access][5]
After that, you are set! Create an environment with Python (I use pipenv, so I create one with _pipenv shell_), install yagmail with _pip install yagmail_ and you are ready to go!
* * *
### Our basic script
When I said that Yagmail simplifies our work I wasn't kidding. Let me show it to you.
First, create a Python file. Mine is _yagmail_sender.py_.
Now, write the following:
```python
import yagmail

sender_email = YOUR_EMAIL
receiver_email = RECEIVER_EMAIL
subject = "Check THIS out"
sender_password = input(f'Please, enter the password for {sender_email}:\n')

yag = yagmail.SMTP(user=sender_email, password=sender_password)

contents = [
    "This is the first paragraph in our email",
    "As you can see, we can send a list of strings,",
    "being this our third one",
]

yag.send(receiver_email, subject, contents)
```
This is so clear that it doesn't need explanation.
But in case it does:
* As usual, we state our sender and receiver emails, plus a subject and a password typed by the user.
* Then, we create a yagmail instance with the user and the password to log in.
* The last variable, _contents_, is a list of strings. This is the body of our email
* Then we send the email to _receiver_email_ with the subject _subject_ and the body listed in _contents_.
That's it. That's all you need:
![][6]
* * *
### Sending an email to a list of people
Can you imagine how to send that email to a list of email address?
Replace the old lines with the new ones:
```python
receiver_email = RECEIVER_EMAIL  # Old line
receiver_emails = [RECEIVER_EMAIL_1, RECEIVER_EMAIL_2, RECEIVER_EMAIL_3]  # New line

# ...

yag.send(receiver_email, subject, contents)  # Old line
yag.send(receiver_emails, subject, contents)  # New line
```
Notice the change from singular to plural.
Time to run the code again:
![Yagmail tutorial - multiple emails sent][7]
Every receiver on the list has received the email!
Disclaimer: even though it's a list, it behaves like a set in Python: repeated addresses won't be sent the email twice.
* * *
### Adding attachments
Sending attachments is incredibly easy too. I don't want people fighting in the comments, so this time I'll send two pictures: one of cats and another of dogs.
Place the desired files in the same root as your program and add their absolute paths. Here's my _contents_:
```python
contents = [
    "This is the first paragraph in our email",
    "As you can see, we can send a list of strings,",
    "being this our third one",
    "C:\\Users\\Void\\Desktop\\Codi\\Python\\yagmail\\dodgs.jpg",
    "C:\\Users\\Void\\Desktop\\Codi\\Python\\yagmail\\cats.jpg"
]
```
I'm using a Windows OS, so I escaped the path separators with double backslashes (`\\`).
That's all we need. Seriously. Run the code and you'll have two files attached to the email:
![][8]
In case the code fails (we lose the internet connection, our password is wrong, etc.), let's wrap everything in a nice try/except:
```python
import yagmail

sender_email = YOUR_EMAIL
receiver_emails = [RECEIVER_EMAIL_1, RECEIVER_EMAIL_2, RECEIVER_EMAIL_3]
subject = "Check THIS out"
sender_password = input(f'Please, enter the password for {sender_email}:\n')

try:
    yag = yagmail.SMTP(user=sender_email, password=sender_password)
    contents = [
        "This is the first paragraph in our email",
        "As you can see, we can send a list of strings,",
        "being this our third one",
        "C:\\Users\\Void\\Desktop\\Codi\\Python\\yagmail\\dodgs.jpg",
        "C:\\Users\\Void\\Desktop\\Codi\\Python\\yagmail\\cats.jpg"
    ]
    yag.send(receiver_emails, subject, contents)
except Exception as e:
    print(f'Something went wrong!\n{e}')
```
Let's introduce a wrong password:
![][9]
Nice, now we get a message informing us that something went wrong, along with the error text.
If you didn't know, this message is the same one we get when using [smtplib][2]: Yagmail is just a wrapper around that library that simplifies the code greatly, as you just saw.
* * *
### Conclusion
Yagmail is a wrapper around the smtplib library that helps us write short, good code. The catch is that you can only use it with Gmail addresses.
But since everybody uses them, that won't be a problem, right?
In case you are still using Hotmail or another email service, you can still use the _smtplib_ Python library. Learn how to do it here:
[How to send beautiful emails with attachments (yes, cat pics too) using only Python][2]
And yes, I know that after looking at the cats and dogs pics on the email attachment you want to see the whole picture, not the thumbnail. You deserved it:
![Yagmail tutorial cats][10]
![Yagmail tutorial dogs][11]
* * *
[Yagmail package][16]
[Yagmail docs][17]
* * *
[My Youtube tutorial videos][12]
[Final code on Github][13]
[Reach to me on Twitter][14]
[Read more tutorials][15]
[1]: https://i1.wp.com/letslearnabout.net/wp-content/uploads/2019/09/image-30.png?resize=688%2C325&ssl=1
[2]: https://letslearnabout.net/tutorial/how-to-send-beautiful-emails-with-attachments-using-only-python/
[3]: https://pypi.org/project/yagmail/
[4]: https://myaccount.google.com/u/0/security?hl=en&pli=1
[5]: https://i2.wp.com/humberto.io/img/emails/less-secure.png?w=688&ssl=1
[6]: https://i0.wp.com/letslearnabout.net/wp-content/uploads/2019/09/image-38.png?w=688&ssl=1
[7]: https://i1.wp.com/letslearnabout.net/wp-content/uploads/2019/09/image-39.png?fit=688%2C141&ssl=1
[8]: https://i0.wp.com/letslearnabout.net/wp-content/uploads/2019/09/image-40.png?w=688&ssl=1
[9]: https://i2.wp.com/letslearnabout.net/wp-content/uploads/2019/09/image-41.png?fit=688%2C45&ssl=1
[10]: https://i1.wp.com/letslearnabout.net/wp-content/uploads/2019/09/cats.jpg?fit=688%2C387&ssl=1
[11]: https://i0.wp.com/letslearnabout.net/wp-content/uploads/2019/09/dogs.jpg?w=688&ssl=1
[12]: https://www.youtube.com/channel/UC9OLm6YFRzr4yjlw4xNWYvg?sub_confirmation=1
[13]: https://github.com/david1707/yagmail_sender
[14]: https://twitter.com/DavidMM1707
[15]: https://letslearnabout.net/category/tutorial/
[16]: https://pypi.org/project/yagmail/
[17]: https://buildmedia.readthedocs.org/media/pdf/yagmail/latest/yagmail.pdf
| davidmm1707 |
177,345 | AngularJS vs ReactJS: Comparison between AngularJS and ReactJS | When you want to create a single page application the question arrives in your mind: What I will use?... | 0 | 2019-09-27T06:05:26 | https://dev.to/teclogiq/angular-vs-react-comparison-between-angular-and-react-10jo | angularvsreact, angularorreact, angular, react | When you want to create a single-page application, a question arises in your mind: what will I use? AngularJS or ReactJS? I've been using AngularJS and ReactJS for a while now. Both are super fast, advanced, widely adopted JavaScript (JS) technologies that we use to create interactive single-page applications (SPAs).
Angular, the Model–View–Controller framework, has become extremely popular among web developers. React is even more widely used by JavaScript programmers, although React is actually a library, not a framework. React only has a View, but lacks Model and Controller components. So how did React become so popular, and how can we reasonably compare a framework (AngularJS) with a library (React)?
We can differentiate Angular and React using the following aspects:
• Data Binding
• Dependency Resolution
• Templating & Directives
• Componentization
• Performance
• Language
• Concept
## Data Binding
### AngularJS
Two-way data binding.
### ReactJS
One-way data binding.
## Dependencies
### AngularJS
Manages dependencies automatically.
### ReactJS
Requires additional tools to manage dependencies.
## Templating & Directives
### AngularJS
We make our own directives in Angular to insert data into templates.
### ReactJS
React doesn’t offer division into templates and directives or template logic.
## Componentization
### AngularJS
Based on the three layers — Model, View, and Controller.
### ReactJS
Only has a View, but lacks Model and Controller components.
## Performance
### AngularJS
The performance will decrease a lot if your application has too many watchers.
### ReactJS
React makes it simpler to control application performance, but this doesn't mean we can't create a fast application in Angular.
## Language
### AngularJS
JavaScript + HTML
### ReactJS
JavaScript + JSX
## Concept
### AngularJS
Brings JavaScript into HTML. Works with the real DOM. Client-side rendering.
### ReactJS
Brings HTML into JavaScript. Works with the virtual DOM. Server-side rendering. | teclogiq |
177,585 | start each_with_index from 1 (This is good for UI) | 👍 Usual way books.each_with_index do |book, index| puts "#{index}: #{book.title}" end... | 0 | 2019-09-27T12:12:19 | https://dev.to/n350071/start-eachwithindex-from-1-this-is-good-for-ui-2hj1 | rails | ---
title: start each_with_index from 1 (This is good for UI)
tags: rails
published: true
---
## 👍 Usual way
```ruby
books.each_with_index do |book, index|
  puts "#{index}: #{book.title}"
end
#=> 0: a
#=> 1: b
#=> 2: c
```
## 🦄 Start from 1
You can use `each.with_index(1)`.
This is good when you use the each method in an .erb file, because users usually expect the index to start from 1.
```ruby
books.each.with_index(1) do |book, index|
  puts "#{index}: #{book.title}"
end
#=> 1: a
#=> 2: b
#=> 3: c
```
---
## 🔗 Parent Note
{% link n350071/my-rails-note-47cj %}
| n350071 |
177,624 | Why I Love The Syntax.fm Podcast | I have a long commute everyday and there's nothing more satisfying than firing up an interesting podc... | 0 | 2019-09-27T14:07:01 | https://dev.to/bbarbour/why-i-love-the-syntax-fm-podcast-3mhn | webdev, podcast, beginners, watercooler | I have a long commute every day and there's nothing more satisfying than firing up an interesting podcast or audiobook for the drive. I am an audio learner, so it's especially useful for me. I've gone through quite a few developer podcasts (many of which I found right here on Dev.to) constantly searching for new ones I want to add to my list.
Usually, I give them a couple of episodes and end up stopping. However, I always come back to Syntax, every week--like clockwork. In fact, this morning I turned off another podcast to go listen to the new Syntax episode from Wednesday.
That got me reflecting on my favorite parts about Syntax and why it always draws me back.
## Subject Matter
The hosts of the show, Scott and Wes teach web development for a living. Because of this, the duo discuss a wide breadth of technologies. They have entire episodes dedicated to Javascript, Node.js, React, CSS, security, Wordpress--so on and so forth. There are well over a hundred of them, a nice backlog to march through as a new listener.
I've listened to like eighty percent of them at this point, I think.
I love the episodes where they dive deep, weighing pros and cons. Yet, it's nice that they sprinkle in Potluck episodes where they go over a broad range of listener-prompted questions. I think what matters the most is that they tend to approach web technologies as the tools they are. They aren't pushing a dogma.
Of course, it's clear they have opinions on different technologies, yet they don't let that bleed through or overwhelm the message. For example, I have gotten the impression that neither Scott nor Wes cares for PHP much, or maybe they're tired of it. Yet, they've expressed how instrumental it was during their early careers working in Wordpress and Drupal.
One of my favorite episodes was where Scott taught Wes about Vue.js. Wes had never used it before and is an experienced React developer--so I was able to relate. It led me to learning more about Vue in my spare time. So, kudos there to Scott.
Many other podcasts tend to do an interview format. I don't mind an occasional interview, but it grows old when done consecutively. I would rather have hosts that are consistent, than a different voice each episode. Part of the charm is getting to know the people you listen to every week. I feel like Scott and Wes are my friends, even though I've never met them or talked to them.
I never really felt that way with other podcasts, so they're doing something right.
When the duo do interview people, they ask pointed and in-depth questions. I've listened to other podcasts where they spent the entire episode inflating the ego of the person they have on as a guest, or asking easy questions that they can knock out of the park. Since Wes and Scott teach new web developers, they know the sorts of questions students ask and can frame them.
It's great, especially if you've never delved into a particular technology.
## Humility
I could be wrong. But, my impression from listening to over a hundred episodes is that Scott and Wes are truly humble and earnest people. Developers have a tendency to like to flex their mental muscle at times. I get a sense that many who do podcasts have pride and certain agendas they wish to push. This can bleed through in very obvious ways, so much that it makes me cringe.
This has never happened when listening to Syntax.
Now, let me say--we all have an agenda at the end of the day. I know that Scott and Wes do too, still they swing that hammer lightly.
If I had to guess, their agendas are to attract people towards their respective learning platforms. Even so, they show that the best teachers/mentors aren't the ones who show off or gloat--rather the ones that guide and mold their students.
I don't feel like the entire podcast is one big sales pitch. Even with the ad reads and the shameless plugs.
I'll admit, I've glanced at their courses quite a few times--especially if they introduce one on a subject I'm interested in. Yet, I've never bought anything from either host. Well, I did take Wes's Javascript 30 course, which is free (I highly recommend it.) Scott also has a bunch of free stuff on his Youtube channel.
## Humor
I find Scott and Wes hilarious. Both have goofy personalities. Wes can't pronounce ternary and every time it brings a smile to my face. Their jokes and sponsor ad read transitions have made me laugh out loud in my car like a buffoon. Often I look forward to the reads these days, especially with how Scott slides his way into one in a clever and pun-filled manner. Seeing as both of them are about the same age as me, I think there's some similarity in what we find funny. I doubt everyone will laugh as much as I do.
## Production Quality
Scott and Wes sound like crisp and clean radio hosts. They have awesome mics (as is necessary for professional video course makers.)
Scott's deep voice and Wes's charming Canadian accent make a stark and pleasant contrast. I've listened to other podcasts where I had no idea who was talking, as people sounded similar and droned together.
They don't interrupt each other, or talk over each other. If they do, I imagine those parts are clipped out in post production. Either that, or they have a good synergy--something hard to measure or even figure out.
I'm not knocking on other podcasters, who maybe can't afford good equipment or have sound engineering experience like Scott does. All I'm saying is, the audio and editing is icing on the cake. I've listened to their live episodes, which have lesser sound quality, and still had a good time.
---
Check out [Syntax.fm](https://syntax.fm/) if you haven't. Give it a listen! I hope you enjoy it as much as I do.
| bbarbour |
177,745 | Using GraphDbs to Visualize Code/SQL dependencies | The Problem Over the last few years I have done a ton of legacy code/database clean up. Th... | 0 | 2019-10-11T15:16:04 | https://dev.to/dealeron/using-graphdbs-to-visualize-code-sql-dependencies-3370 | graphdb, sql, cypher | ## The Problem
Over the last few years I have done a ton of legacy code/database clean up. This often involved confirming that a table we suspected was dead wasn't referenced in any active repositories. There was also a ton of stored procedures left over from the days of old (I think over a thousand at its worst, we're down to about 60 nowadays). We needed a way to tell if a table was recursively referenced through these stored procedures back to any file in active repositories. This recursion made it really difficult to manually audit table usage.
About a year into dealing with very slow manual audits, I happened across [Neo4J](https://neo4j.com/) when doing R & D for a project. The thing that really struck me is how simplistic recursive queries are in [Cypher](https://neo4j.com/developer/cypher-query-language/) (the querying language Neo4J uses). More importantly, the language really helped in getting speedy answers to relationship-focused questions. So naturally, I set out to map our table/stored procedure/code dependency graph.
## The Solution
I won't go too much into the actual process that built these relationships; it was fairly crude and mostly focused on loading stored procedure and view definitions and repository information into memory, then using the regular expression:
```
( |\.|\[|"|\n)StoredProcedure/Table/ViewName( |\.|\]|"|\n)
```
It returned a lot of false positives for tables that have simplistic names (i.e. `User`, `Vehicle`). Fortunately for me, my predecessors liked weirdly named tables and occasionally using the `tTableName` naming pattern, so this crude RegEx was sufficient for most cases. For those curious, I just hosted a Neo4J container on my own machine using [Docker](https://www.docker.com/) to house the results, although Neo4J does provide free sandboxes you can spin up.
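As a rough illustration of that kind of scan, here are a few lines of Python in the same spirit. This is a sketch only — the entity name and SQL text below are made-up stand-ins, not the author's actual tooling:

```python
import re

def references(entity_name: str, text: str) -> bool:
    """True if `text` appears to reference `entity_name`, using the
    same boundary characters as the expression above."""
    pattern = r'( |\.|\[|"|\n)' + re.escape(entity_name) + r'( |\.|\]|"|\n)'
    return re.search(pattern, text) is not None

# A made-up stored procedure body that selects from tDealerMake.
body = 'SELECT * FROM [tDealerMake] WHERE Active = 1'
print(references("tDealerMake", body))  # True
print(references("tUser", body))        # False
```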
The end result of this is a series of nodes representing Entities (Files, Tables, Views, StoredProcedures) and Locations (SqlDatabase, GitRepository). The two relationships used are `LIVES_IN` (for mapping an entity to its database or repository) and `USED_IN` (representing a dependency).
## They're Dead, Jim
The big question I originally wanted to ask was "What stored procedures cannot be referenced back to a file?"
The Cypher query for this looks something like this:
```cypher
MATCH(sp:StoredProcedure)
WHERE NOT (sp)-[:USED_IN*..]->(:File)
RETURN sp.Database, sp.Name
```
The `*..` makes the relationship match recursive. So it will include matches, for example, if a table is used in a stored procedure which is used in a stored procedure which is used in a file.
Because our crude RegEx leaned on the side of returning false positives over ignoring false negatives, this simplistic query actually helped us get rid of about 600 out of the 1000-ish stored procedures. I even used it to write the query for dropping the procedures:
```cypher
MATCH(sp:StoredProcedure)
WHERE NOT (sp)-[:USED_IN*..]->(:File)
RETURN 'USE '+sp.Database+' DROP PROCEDURE IF EXISTS '+sp.Name
```
(Using Cypher to write SQL definitely feels weird)
## Trace It Back
After the above was taken care of, we were often left with tables that we figured _should_ be dead, but were being referenced by code that was probably dead (mapping file->file relationships is not something I've managed to accomplish yet, it gets tricky when you use things like dependency injection). So another common query that ended up being used was:
```cypher
MATCH path = (t:Table {Database:'DealerOn', Name:'DealerMake'})-[:USED_IN*..]->
             (:File)-[:LIVES_IN]->(:GitRepository)
RETURN path
```
Because this is returning actual nodes (within the path), Neo4J gives you a nice visualization of the dependencies. This is good for getting an initial feel for if one repository is more tightly coupled to the table in question than other repositories.

## You Can Ask Most Anything
I think the main reason I love GraphDbs as analytics tools is because they feel very fluid for answering any question you come up with, not just questions the database was designed to answer. A couple I've found useful over the months:
Get list of stored procedures that are used in more than one file:
```cypher
MATCH(sp:StoredProcedure)-[r:USED_IN]->(:File)
WITH sp, COUNT(r) AS pathCount
WHERE pathCount>1
RETURN sp.Name, pathCount
```
Get count of unique stored procedures used by each repository:
```cypher
MATCH(g:GitRepository)<-[:LIVES_IN]-(:File)<-[:USED_IN]-(sp:StoredProcedure)
RETURN g.Name, COUNT(DISTINCT sp)
```
Get the longest recursive stored procedure reference path (neo4j prevents these queries from being infinitely recursive):
```cypher
MATCH path = (sp:StoredProcedure)-[:USED_IN*..]->(:StoredProcedure)
RETURN path ORDER BY LENGTH(path) DESCENDING LIMIT 1
```
Finding stored procedures that reference tables/stored procedures from other databases:
```cypher
MATCH(sp:StoredProcedure)<-[:USED_IN]-(otherEntity:Entity)-[:LIVES_IN]->(db:SqlDatabase)
WHERE sp.Database <> db.Name
RETURN sp.Database, sp.Name, db.Name, otherEntity.Name
```
Find number of paths to a file each table has, and get a list of each repository the table is referenced in. Note that optional match allows the query to include tables that have no paths to files.
```cypher
MATCH(t:Table)
OPTIONAL MATCH paths = (t)-[:USED_IN*..]->(f:File)
RETURN t.Database, t.Name, COUNT(paths), COLLECT(DISTINCT f.Repository)
ORDER BY COUNT(paths) ASC
```
## What's Next
I'm looking into if I can figure out a way to build code->code dependency relationships (which I know is possible as I've seen other tools do it), and a better method for determining entity usage than a simplistic RegEx. With those two problems solved I really believe GraphDbs could be an extremely strong Code Dependency analyzer. | drmurloc |
177,821 | What I Look for During an Interview | A list of the qualities that make a candidate excel. | 0 | 2019-09-30T23:45:46 | https://dev.to/brewsterbhg/what-i-look-for-during-an-interview-117e | career, interview | ---
title: What I Look for During an Interview
published: true
description: A list of the qualities that make a candidate excel.
tags: #career, #interview
cover_image: https://thepracticaldev.s3.amazonaws.com/i/ejn7eqorx53qgj35mj5r.jpg
---
I've had to interview a number of developers throughout my last couple of jobs, and I've been slowly refining a list of common traits I've found that successful candidates have all shared. I wanted to talk about what I've found to be the largest indicators of a top candidate, and also some of the red flags. I find that most posts about interviews are from a candidate's perspective—how to prepare for an interview, common interview questions—but I wanted to highlight some of the non-technical traits I'm looking out for. I should also preface this article by saying I'm not representative of all interviewers; everyone is going to have their own styles of interviewing, and what's appealing to me doesn't necessarily mean it's important to other employers. I'm just providing my perspective on what I've found to be the most important qualities. Alright, now that the preamble is out of the way, let's jump into it!
#### Trait 1: Attitude
I think attitude is one of (if not the most) important traits of a candidate. Especially in my current workplace that practices pair programming—I want a candidate that I could see myself working with for 8 hours at a time. We do code/PR reviews, so a successful candidate needs to be open to feedback and collaborating on problems with other team members. Now, not every workplace practices pair programming, but I feel this characteristic is still applicable across almost any circumstance. You don't want someone who's going to stir up a ton of conflict amongst the team.
In my experience, the more ego that's in a workplace, the more toxic the culture becomes. I believe the most positive environment is one that values inclusivity and mentorship. You could be an excellent developer, but if you're not willing to share that knowledge with your team to help build everyone up, then a lot of that value is immediately lost. We're all constituents of the same goal, so if we're putting up barriers between developers, we'll never be able to work as a cohesive unit. People often conflate high skill with success, but a focused team will always outperform an individual.
#### Trait 2: Passion
This one is tricky for me. I think a positive work/life balance is incredibly important to maintain, and I know not everyone has the time to pursue a bunch of side-projects. That being said, having a body of work or an online presence (whether it be contributions to OSS or involvement in a development community) makes it a lot easier to understand the capabilities & skillset of the candidate. Also, self-learning is incredibly important—especially in web development where things move so rapidly—so having an idea of where a candidate goes to keep up to date on emerging technologies is a huge benefit.
I don't need a candidate to have hundreds of green squares on their GitHub contribution chart. If they've made the effort to participate in development outside of work experience, it helps me understand the kind of developer they are before the interview. This assists me in refining the questions that I prepare, which usually results in a better interview. So yes, when you include links to your socials on your resume, interviewers _absolutely_ check them out. I understand this may be a touchy area, but again, it's not a dealbreaker for me. I don't need you to eat, sleep, and breathe code.
#### Trait 3: Communication
I'm not a fan of whiteboard interviews. I feel like it's not a good representation of a candidate's ability to solve the types of challenges we deal with on a daily basis. I'm not saying that there's no value in these types of interviews—it's just not my style. I prefer learning about a candidate through asking questions about the projects they've worked on.
Keeping questions open-ended and having a candidate talk about their technical background has (in my experience) been a better indicator of a candidate's ability, more so than whether they can throw together a function that takes in two binary trees as arguments, and returns whether or not they're perfect mirrors of each other. I'm much more interested in whether they can communicate technical concepts to me vs if they've memorized the common interview algorithms. You can tell a lot about someone's work ethic just by listening to how they talk about development.
I've met brilliant developers with terrible work ethic, which is kind of like mixing 16 year scotch with Coke Zero. There's certainly a lot of potential but it's been lost underneath a layer of disappointment, and if you drink too much you'll end up with a massive headache.
#### Trait 4: Honesty
If I'm asking you technical questions and something comes up that you're not familiar with, I don't want you to try and lie or talk your way around it. Be honest and tell me that it's not a concept you're familiar with, and we can work through it together. If a candidate seems genuinely interested in understanding a concept, then it demonstrates that they have a curiosity and willingness to learn. I never expect a candidate to fully understand every facet of the tools we work with, but if they're able to show they're strong, capable learners, then I'm less concerned about practical experience (this, of course, has limitations. I need the candidate to at least be familiar with the stack we work with).
I've had interviews where a candidate was clearly unfamiliar with a topic, but tried to talk their way around it. I understand the pressure of an interview scenario, and admitting you don't understand something might seem like a sign of weakness that hurts your chances at the position. But I can tell you that getting caught in a lie is 10 times worse than not knowing something. This is an immediate red flag for me. It's also an indicator that a candidate might have trouble keeping open communication in a workplace setting when they're running into blocks, or struggling with a task. Don't do it!
***
These are just the things that I've noticed in my experiences thus far. Maybe they're obvious, but I'm still learning something new with every interview and constantly refining my practice. If you're in the position where you're performing interviews, I think it's important to take a moment of introspection after each one to review your process. Remember, this is going to be the candidate's first impression of the company, so it's important to maintain a balance of being able to assess a candidate's technical ability while still providing an authentic representation of the company's values.
Here's an interview tip: never project yourself onto the candidate. I've met too many people whose expectations for a candidate come from looking in a mirror. Don't expect the people you're interviewing to share your experience or exact skillset. A diverse workforce is a strong workforce.
I hope this provides some clarity on what we're observing from the other side of the table. Other interviewers, what are some of the traits of a candidate that are the most important to you? Let me know!
_Thanks for reading, and feel free to say hi on [Twitter](https://twitter.com/switchcasebreak)!_
| brewsterbhg |
177,826 | Gerenciando kits de desenvolvimento Java, Kotlin e Clojure facilmente com o SDKMAN | Você é uma pessoa que costuma trabalhar em projetos Java que possuem versões do SDKs (Software Deve... | 0 | 2019-09-27T23:42:34 | https://dev.to/collabcode/gerenciando-kits-de-desenvolvimento-java-kotlin-e-clojure-facilmente-com-o-sdkman-24d5 | java, kotlin, clojure, linux | 
Você é uma pessoa que costuma trabalhar em projetos Java que possuem versões do SDKs (Software Development Kits) diferentes?
No mundo Java não é incomum a necessidade de se trabalhar em versões mais antigas da plataforma como o Java 6 e 7 para corrigir problemas ou adicionar pequenas funcionalidades no projeto, entretanto, a maioria dos projetos mais recentes exige o Java 8 ou 11, o que na minha opinião torna a rotina de uma pessoa desenvolvedora Java um inferno, já que ficar alternando entre versões do SDK durante o seu dia de trabalho é um processo chato.
Para piorar a situação, cada sistema operacional (Linux, MacOS e Windows) trabalha com as variáveis de ambiente do Java de maneira diferente.
Se colocarmos na ponta do lápis imagino que perdemos uma parte de nossa sanidade e de tempo nesse processo. É por esse motivo que tenho uma boa notícia para você!
## Olá SDKMAN!
O SDKMAN é uma ferramenta CLI (terminal) que nos ajuda a gerenciar vários kits de desenvolvimento. Ele nos fornece uma maneira conveniente de instalar, alternar, listar e remover verões.
Você consegue gerenciar versões paralelas facilmente em qualquer sistema operacional semelhante ao Unix (pro pessoal que usa Windows recomendo instalar um bash Linux).
Com ele conseguimos instalar diversos SDKs da JVM, como Java, Kotlin e Clojure (leiningen). Além de também nos ajudar com os gerenciadores de dependências, como o Ant, Maven e Gradle.
Como se não fosse suficiente também conseguimos instalar e configurar alguns frameworks, como Spring Boot, Vert.x, Spark e Micronaut.
E o melhor de tudo é que o SDKMAN é gratuito, leve, de código aberto e escrito em Bash.
## Instalando em distribuições Debian-like
A instalação do SDKMAN é bem simples, mas primeiro, instale os aplicativos zip e unzip que estão disponíveis nos repositórios padrão da maioria das distribuições Linux. Por exemplo, para instalar em sistemas baseados no Debian, basta executar:
`sudo apt install zip unzip`
Só então podemos instalar o SDKMAN:
`curl -s "https://get.sdkman.io" | bash`
Pronto, simples assim! Uma vez finalizado a instalação só precisamos carregar as variáveis de ambiente:
`source "$HOME/.sdkman/bin/sdkman-init.sh"`
Por fim, verifique se a instalação foi bem-sucedida usando o comando:
`sdk version`

Se você conseguiu visualizar a versão do SDKMAN no seu terminal, então meus parabéns!
Vamos em frente e ver como instalar e gerenciar SDKs.
## Instalando os SDKs
Você pode visualizar a lista de SDKs disponíveis no site do SDKMAN ou executando o comando:
`sdk list`

Como você viu a lista é bem extensa. Mas o processo de instalação é bastante simples, por exemplo, vamos instalar o SDK LTS mais atual do Java:
`sdk install java`

Caso deseje visualizar as outras versões disponíveis basta executar:
`sdk list java`

Para instalar uma versão específica de um SDK precisamos fazer uso da coluna identifier, por exemplo:
`sdk install java 8.0.222-amzn`
Esse mesmo fluxo de instalação pode ser utilizado em qualquer uma das opções na lista SDKs disponíveis no site do SDKMAN.
## Gerenciando os SDKs instalados
Agora que já instalamos algumas versões do Java (experimente instalar uma versão específica do Java), podemos naturalmente alternar entre essas versões instaladas.
Para isso vamos buscar novamente a lista de versões, só que dessa vez reparem na coluna **Status**, essa coluna tem a função de nos informar o que está instalado. E para sabermos qual versão está em uso sem precisar executar o comando *java -version* basta olhar para a informação da coluna **Use**.

Com esses dados em mão podemos facilmente alterar a versão do SDK que estamos usando, por exemplo vamos alterar a versão do Java 11 para o Java 8:
`sdk default java 8.0.222-amzn`

Para verificar o que está atualmente em uso para todos os SDKs, execute:
`sdk current`

Para atualizar um SDK desatualizado, faça:
`sdk upgrade java`
Você também consegue verificar o que está desatualizado, executando:
`sdk upgrade`
Para remover um SDK instalado, execute:
`sdk uninstall java 8.0.222-amzn`

## Removendo o SDKMAN
Se você não mais precisa do SDKMAN ou não gostou dele fique a vontade para removê-lo:
`tar zcvf ~ / sdkman-backup _ $ (data +% F-% kh% M) .tar.gz -C ~ / .sdkman`
E depois:
`rm -rf ~ / .sdkman`
Por fim, abra os arquivos **.bashrc, .bash_profile, .profile ou .zshrc** localize e remova as seguintes linhas.
`#THIS MUST BE AT THE END OF THE FILE FOR SDKMAN TO WORK!!!`
`export SDKMAN_DIR="/home/sk/.sdkman"`
`[[ -s "/home/sk/.sdkman/bin/sdkman-init.sh" ]] && source "/home/sk/.sdkman/bin/sdkman-init.sh"`
## Conclusão
Imagino que você já passou por esse sofrimento de instalar e gerenciar múltiplas versões de JDKs, se esse for o seu caso, o SDKMAN pode ser uma ótima opção!
Hoje você aprendeu como usar o SDKMAN para instalar diferentes versões, alternar entre versões e desinstalar, fizemos tudo isso usando o Java. Mas você pode usar esses mesmos métodos para lidar com a instalação de outras plataformas como Kotlin, Clojure (leiningen), Ant, Maven, Gradle, Spring Boot, Vert.x, Spark e Micronaut por exemplo.
****
## Finalizando…
Se você gostou desse post não esquece de dar um like e compartilhar 😄
Se quiser saber o que ando fazendo por ai ou tirar alguma dúvida fique a vontade para me procurar nas redes sociais como @ [malaquiasdev](https://twitter.com/malaquiasdev).
Para ler mais post meus acesse [MalaquiasDEV | A Vida, o código e tudo mais](http://malaquias.dev). | malaquiasdev |
179,525 | BxJS Weekly Episode 82 - javascript news podcast | Hey dev.to community! BxJS Weekly Episode 82 is now out! 🚀 Listen to the best javascript news of the... | 0 | 2019-09-29T10:44:44 | https://dev.to/yamalight/bxjs-weekly-episode-82-javascript-news-podcast-58ln | javascript, node, podcast, news | Hey dev.to community!
BxJS Weekly Episode 82 is now out! 🚀
Listen to the best javascript news of the week in a podcast form right here.
Here's all the mentioned links (also found [on github](https://github.com/BuildingXwithJS/bxjs-weekly/blob/master/links/19-39-Episode-82.md)):
## Getting started:
- [Learn React in 10 tweets (with hooks)](https://twitter.com/chrisachard/status/1175022111758442497?s=09)
- [How to collect, customize, and centralize Node.js logs](https://www.datadoghq.com/blog/node-logging-best-practices/)
- [Working with GitHub Actions](https://jeffrafter.com/working-with-github-actions/)
- [Thinking in React Hooks](https://wattenberger.com/blog/react-hooks)
- [Lessons from Building Node Apps in Docker (2019)](https://jdlm.info/articles/2019/09/06/lessons-building-node-app-docker.html)
## Articles & News:
- [Voidcall – making of](https://phoboslab.org/log/2019/09/voidcall-making-of)
- [Performance metrics for blazingly fast web apps](https://blog.superhuman.com/performance-metrics-for-blazingly-fast-web-apps-ec12efa26bcb)
- [React Lazy: a take on preloading views](https://blog.maximeheckel.com/posts/preloading-views-with-react)
- [An HTML attribute potentially worth \$4.4M to Chipotle](https://cloudfour.com/thinks/an-html-attribute-potentially-worth-4-4m-to-chipotle/)
- [Why JavaScript Tooling Sucks](https://www.swyx.io/writing/js-tooling/)
## Tips, tricks & bit-sized awesomeness:
- [TIL, if you visit devtools' "source" tab right after using the "performance" tab you get benchmarks](https://twitter.com/angustweets/status/1175846030392184832?s=09)
- [Top level await support added to V8](https://chromium.googlesource.com/v8/v8.git/+/0ceee9ad28c21bc4971fb237cf87eb742fc787b8%5E%21/)
- [Faster way to `await` two promises without `Promise.all`](https://twitter.com/NMeuleman/status/1176550808852291584?s=09)
## Releases:
- [V8 v7.8](https://v8.dev/blog/v8-release-78)
- [React Native 0.61](https://facebook.github.io/react-native/blog/2019/09/18/version-0.61)
- [React Router v5.1.0](https://reacttraining.com/blog/react-router-v5-1/)
- [Ava v2.4.0](https://github.com/avajs/ava/releases/tag/v2.4.0)
- [Node v12.11.0](https://nodejs.org/en/blog/release/v12.11.0/)
## Libs & demos:
- [Robot](https://thisrobot.life/)
- [stubborn](https://github.com/ybonnefond/stubborn)
- [kfchess](https://github.com/paladin8/kfchess)
- [geometric](https://github.com/HarryStevens/geometric)
- [deckdeckgo](https://github.com/deckgo/deckdeckgo)
- [endb](https://github.com/chroventer/endb)
- [JS13kGames 2019 entries](https://js13kgames.com/entries/2019)
## Interesting & silly stuff:
- [Hacktoberfest is now open!](https://hacktoberfest.digitalocean.com/details)
- [A new video series for beginners to learn Python programming (from Microsoft)](https://cloudblogs.microsoft.com/opensource/2019/09/19/new-python-training-video-series-beginners/)
Any feedback is appreciated 😁
Additional stuff:
- [Youtube channel](https://youtube.com/c/TimErmilov)
- [Twitch channel](https://www.twitch.tv/yamalight)
- [Discord server](https://discord.gg/hnKCXqQ)
- [BxJS Weekly github repo](https://github.com/BuildingXwithJS/bxjs-weekly)
Social media links:
- [Twitter](http://twitter.com/yamalight)
- [Github](http://github.com/yamalight)
If you enjoy my content, please consider [supporting me](https://codezen.net/support.html) 😉 | yamalight |
179,889 | List Comprehension, or How I learned to Stop Worrying and Love the List | Lists and Dictionaries are a powerful part of the Python toolbox. They are useful in any language, bu... | 0 | 2019-09-28T19:27:15 | https://dev.to/zmbailey/list-comprehension-or-how-i-learned-to-stop-worrying-and-love-the-list-49jo | ---
title: List Comprehension, or How I learned to Stop Worrying and Love the List
published: true
description:
tags:
---
Lists and Dictionaries are a powerful part of the Python toolbox. They are useful in any language, but Python has some new twists that make them even more versatile and easier to use. Many languages differ on how they create a List, and whether or not it can be populated upon instantiation. Some languages, like Java, require that a List be created empty and then populated afterwards. Python, however, allows us the luxury of creating a pre-populated list. All we have to do is write out the items in a list format:
```python
alpha = ['a','b','c']
```
### Lists and Loops
But say we want to create a much longer List, or a List based on a more complex pattern? For instance, what if we want all even numbers from 0 to 60? We could just write out every number, but that would take a long time and look very messy. In some languages the solution to this would be a loop, and add the next number in the sequence each iteration. Here’s how that would look:
```python
evens = []
for i in range(0,61,2):
    evens.append(i)
```
### List Comprehension
This is fine for this level of complexity, but Python has shortcuts to simplify these kind of operations. List Comprehension operates a bit like a simplified for-loop, and outputs a new list where each iteration of the loop outputs another item in the output List. For example, the List Comprehension version of the above code would be:
```python
evens = [i for i in range(0,61,2)]
```
The basic syntax for this kind of statement is written as [*expression* for *item* in *list*], where the *list* is an input List, the *item* is each individual item from that list, and the *expression* is how you’re transforming the item. For instance if we wanted every even number multiplied by 3, we would write it as:
```python
evens = [i*3 for i in range(0,61,2)]
```
### Conditionals in List Comprehension
But let’s say we want to get more complex, what if we want to use an if-statement, does that mean we need to go back to a for-loop? Well, let’s start with the for loop again. This time, we’re going to take an input list of numbers, and output 1 for positive numbers, 0 for negative:
```python
input = [1,-2,56,30,-23,-4,52]
output = []
for i in input:
    if i >= 0:
        output.append(1)
    else:
        output.append(0)
```
Now we’re getting several layers of indentation here. So how do we put this into List Comprehension? To start with, let’s talk about inline if-statements. In addition to writing if-statements in the traditional format, Python also allows in-line if-statements, which are written in a single line. While not always appropriate, in-line statements are handy for using with List Comprehension. For example:
```python
#this statement:
if i >= 0:
    output.append(1)
else:
    output.append(0)
#can be re-written as:
output.append(1) if i >= 0 else output.append(0)
```
And so the inline statement is written as *expression* if *condition* else *2nd expression*. But how does this help us? Now that we can write an if-statement in a single line, we can incorporate it into a List Comprehension. If we want to re-write the previous loop, we now have a way to do that:
```python
output = [1 if i >= 0 else 0 for i in input]
```
Notice how we have simply placed the inline statement into the expression part of the List Comprehension, and now it directly returns a list of the correct format. In this example I have used an if-else statement, but when using an if statement with no *else* expression, the structure is slightly different. An if by itself should be written at the end instead, like so:
```python
output = [1 for i in input if i >= 0]
```
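Both placements can also be combined in a single comprehension: a trailing if filters the input, while an inline if-else transforms each surviving item. For example (using a made-up list of numbers):

```python
numbers = [1, -2, 56, 30, -23, -4, 52]
# drop the negatives with a trailing if, then label each survivor
labels = ["big" if n > 40 else "small" for n in numbers if n >= 0]
print(labels)  # ['small', 'big', 'small', 'big']
```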
### Dictionary Comprehension
Now we can create new lists with complex conditions in a single line and simplify our code. In addition to List Comprehension, there is something else allowed in Python 2.7+: Dictionary Comprehension. Similar to List Comprehension, Dictionary Comprehension allows you to create a Dictionary in a single line, based on another collection and using inline statements. For instance, you could create a dictionary with a set of keys, all with an initial value of True:
```python
output_dict = {k : True for k in input}
```
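Inline conditionals work inside Dictionary Comprehension as well. As a small sketch, labelling each number in a list by its sign:

```python
numbers = [1, -2, 56, 30]
# key is the number itself, value depends on an inline if-else
signs = {n: ("pos" if n >= 0 else "neg") for n in numbers}
print(signs)  # {1: 'pos', -2: 'neg', 56: 'pos', 30: 'pos'}
```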
Comprehensions can be very useful and very powerful tools in Python development, and I hope this helps you better understand how comprehensions work, and how they can be useful. | zmbailey | |
179,906 | Java Streams and Spliterators | This article discusses implementing Java 8 Streams and the underlying Spliterator implementation. Th... | 0 | 2019-09-28T22:03:54 | https://blog.hcf.dev/article/2019-03-28-java-streams-and-spliterators | java, stream, spliterator | ---
title: Java Streams and Spliterators
canonical_url: https://blog.hcf.dev/article/2019-03-28-java-streams-and-spliterators
tags:
- Java
- Stream
- Spliterator
---
This article discusses implementing [Java 8][Java 8] [`Stream`s][Stream] and the underlying [`Spliterator`][Spliterator] implementation. The nontrivial implementations described here are [`Permutations`][Permutations] and [`Combinations`][Combinations] streams, both of which provide a stream of [`List<T>`][List] instances representing the combinations of the argument [`Collection<T>`][Collection].
For example, the first [`Combinations`][Combinations] of `5` of a 52-card `Deck` are:
```bash
[2-♧, 3-♧, 4-♧, 5-♧, 6-♧]
[2-♧, 3-♧, 4-♧, 5-♧, 7-♧]
[2-♧, 3-♧, 4-♧, 5-♧, 8-♧]
[2-♧, 3-♧, 4-♧, 5-♧, 9-♧]
[2-♧, 3-♧, 4-♧, 5-♧, 10-♧]
[2-♧, 3-♧, 4-♧, 5-♧, J-♧]
[2-♧, 3-♧, 4-♧, 5-♧, Q-♧]
[2-♧, 3-♧, 4-♧, 5-♧, K-♧]
[2-♧, 3-♧, 4-♧, 5-♧, A-♧]
[2-♧, 3-♧, 4-♧, 5-♧, 2-♢]
[2-♧, 3-♧, 4-♧, 5-♧, 3-♢]
...
```
Complete [javadoc] is provided.
## Stream Implementation
The [`Permutations`][Permutations] stream is implemented in terms of [`Combinations`][Combinations]:
```java
public static <T> Stream<List<T>> of(Predicate<List<T>> predicate,
Collection<T> collection) {
int size = collection.size();
return Combinations.of(size, size, predicate, collection);
}
```
and the [`Combinations`][Combinations] stream relies on a [`Spliterator`][Spliterator] implementation provided through a [`Supplier`][Supplier]:
```java
public static <T> Stream<List<T>> of(int size0, int sizeN,
Predicate<List<T>> predicate,
Collection<T> collection) {
SpliteratorSupplier<T> supplier =
new SpliteratorSupplier<T>()
.collection(collection)
.size0(size0).sizeN(sizeN)
.predicate(predicate);
return supplier.stream();
}
```
The `supplier.stream()` method relies on [`StreamSupport`][StreamSupport]:
```java
public Stream<List<T>> stream() {
return StreamSupport.<List<T>>stream(get(), false);
}
```
The [`Spliterator`][Spliterator] implementation is the subject of the next section.
## Spliterator Implementation
The abstract [`DispatchSpliterator`][DispatchSpliterator] base class provides the implementation of [`Spliterator.tryAdvance(Consumer)`][Spliterator.tryAdvance]. The key logic: the current `Spliterator`'s `tryAdvance(Consumer)` method is tried, and if it returns `false`, the next `Spliterator`<sup id="ref1">[1](#endnote1)</sup> is tried, until there are no more `Spliterator`s to be supplied.
```java
private Iterator<Supplier<Spliterator<T>>> spliterators = null;
private Spliterator<T> spliterator = Spliterators.emptySpliterator();
...
protected abstract Iterator<Supplier<Spliterator<T>>> spliterators();
...
@Override
public Spliterator<T> trySplit() {
if (spliterators == null) {
spliterators = Spliterators.iterator(spliterators());
}
return spliterators.hasNext() ? spliterators.next().get() : null;
}
@Override
public boolean tryAdvance(Consumer<? super T> consumer) {
boolean accepted = false;
while (! accepted) {
if (spliterator == null) {
spliterator = trySplit();
}
if (spliterator != null) {
accepted = spliterator.tryAdvance(consumer);
if (! accepted) {
spliterator = null;
}
} else {
break;
}
}
return accepted;
}
```
Subclass implementors must supply an implementation of [`Iterator<Supplier<Spliterator<T>>> spliterators()`][DispatchSpliterator.spliterators]. In the [`Combinations`][Combinations] implementation, the key [`Spliterator`][Spliterator], [`ForPrefix`][ForPrefix], iterates over every (sorted) prefix and either supplies more `ForPrefix` `Spliterator`s or a single [`ForCombination`][ForCombination] `Spliterator`:
```java
private class ForPrefix extends DispatchSpliterator<List<T>> {
private final int size;
private final List<T> prefix;
private final List<T> remaining;
public ForPrefix(int size, List<T> prefix, List<T> remaining) {
super(binomial(remaining.size(), size),
SpliteratorSupplier.this.characteristics());
this.size = size;
this.prefix = requireNonNull(prefix);
this.remaining = requireNonNull(remaining);
}
@Override
protected Iterator<Supplier<Spliterator<List<T>>>> spliterators() {
List<Supplier<Spliterator<List<T>>>> list = new LinkedList<>();
if (prefix.size() < size) {
for (int i = 0, n = remaining.size(); i < n; i += 1) {
List<T> prefix = new LinkedList<>(this.prefix);
List<T> remaining = new LinkedList<>(this.remaining);
prefix.add(remaining.remove(i));
list.add(() -> new ForPrefix(size, prefix, remaining));
}
} else if (prefix.size() == size) {
list.add(() -> new ForCombination(prefix));
} else {
throw new IllegalStateException();
}
return list.iterator();
}
}
```
Size, supplied as a superclass constructor parameter, is calculated with the [`binomial()`][binomial] method. For an individual combination, the size is `1`.
```java
private class ForCombination extends DispatchSpliterator<List<T>> {
private final List<T> combination;
public ForCombination(List<T> combination) {
super(1, SpliteratorSupplier.this.characteristics());
this.combination = requireNonNull(combination);
}
@Override
protected Iterator<Supplier<Spliterator<List<T>>>> spliterators() {
Supplier<Spliterator<List<T>>> supplier =
() -> Collections.singleton(combination).spliterator();
return Collections.singleton(supplier).iterator();
}
}
```
Implementations should delay as much computation as possible until required in [`Spliterator.tryAdvance(Consumer)`][Spliterator.tryAdvance], allowing callers (including [`Stream`][Stream] through [`StreamSupport`][StreamSupport]) to optimize and avoid computation.
The complete implementation provides a [`Start`][Start] [`Spliterator`][Spliterator] returned by the [`SpliteratorSupplier`][SpliteratorSupplier] and a [`ForSize`][ForSize] spliterator to iterate over combination sizes.
```java
private class Start extends DispatchSpliterator<List<T>> {
public Start() {
super(binomial(collection().size(), size0(), sizeN()),
SpliteratorSupplier.this.characteristics());
}
@Override
protected Iterator<Supplier<Spliterator<List<T>>>> spliterators() {
List<Supplier<Spliterator<List<T>>>> list = new LinkedList<>();
IntStream.rangeClosed(Math.min(size0(), sizeN()),
Math.max(size0(), sizeN()))
.filter(t -> ! (collection.size() < t))
.forEach(t -> list.add(() -> new ForSize(t)));
if (size0() > sizeN()) {
Collections.reverse(list);
}
return list.iterator();
}
...
}
```
```java
private class ForSize extends DispatchSpliterator<List<T>> {
private final int size;
public ForSize(int size) {
super(binomial(collection().size(), size),
SpliteratorSupplier.this.characteristics());
this.size = size;
}
@Override
protected Iterator<Supplier<Spliterator<List<T>>>> spliterators() {
Supplier<Spliterator<List<T>>> supplier =
() -> new ForPrefix(size,
Collections.emptyList(),
new LinkedList<>(collection()));
return Collections.singleton(supplier).iterator();
}
...
}
```
## Honoring the API Predicate Parameter
The API defines a [`Predicate`][Predicate] parameter which provides a way for callers to dynamically short-circuit all or part of the iteration. The [`ForPrefix`][ForPrefix] and [`ForCombination`][ForCombination] `tryAdvance(Consumer)` methods are overridden as follows:
```java
private class ForPrefix extends DispatchSpliterator<List<T>> {
...
@Override
public boolean tryAdvance(Consumer<? super List<T>> consumer) {
Predicate<List<T>> predicate =
SpliteratorSupplier.this.predicate();
return ((prefix.isEmpty()
|| (predicate == null || predicate.test(prefix)))
&& super.tryAdvance(consumer));
}
...
}
private class ForCombination extends DispatchSpliterator<List<T>> {
...
public boolean tryAdvance(Consumer<? super List<T>> consumer) {
Predicate<List<T>> predicate =
SpliteratorSupplier.this.predicate();
return ((combination.isEmpty()
|| (predicate == null || predicate.test(combination)))
&& super.tryAdvance(consumer));
}
...
}
```
If a [`Predicate`][Predicate] is supplied and the current combination does not satisfy the `Predicate`, that *path* is pruned immediately. A [future blog post](/article/2019-10-29-java-enums-as-predicates) will discuss using this feature to quickly evaluate Poker hands.
<b id="endnote1">[1]</b> Obtained by calling the implementation of [`Spliterator.trySplit()`][Spliterator.trySplit]. [↩](#ref1)
[Java 8]: https://www.java.com/en/download/help/java8.html
[Collection]: https://docs.oracle.com/javase/8/docs/api/java/util/Collection.html
[List]: https://docs.oracle.com/javase/8/docs/api/java/util/List.html
[Predicate]: https://docs.oracle.com/javase/8/docs/api/java/util/function/Predicate.html
[Spliterator.tryAdvance]: https://docs.oracle.com/javase/8/docs/api/java/util/Spliterator.html#tryAdvance-java.util.function.Consumer-
[Spliterator.trySplit]: https://docs.oracle.com/javase/8/docs/api/java/util/Spliterator.html#trySplit--
[Spliterator]: https://docs.oracle.com/javase/8/docs/api/java/util/Spliterator.html
[StreamSupport]: https://docs.oracle.com/javase/8/docs/api/java/util/stream/StreamSupport.html
[Stream]: https://docs.oracle.com/javase/8/docs/api/java/util/stream/Stream.html
[Supplier]: https://docs.oracle.com/javase/8/docs/api/java/util/function/Supplier.html
[javadoc]: https://blog.hcf.dev/javadoc/article/2019-03-28-java-streams-and-spliterators/overview-summary.html
[Combinations]: https://blog.hcf.dev/javadoc/article/2019-03-28-java-streams-and-spliterators/ball/util/stream/Combinations.html
[DispatchSpliterator]: https://blog.hcf.dev/javadoc/article/2019-03-28-java-streams-and-spliterators/ball/util/DispatchSpliterator.html
[DispatchSpliterator.spliterators]: https://blog.hcf.dev/javadoc/article/2019-03-28-java-streams-and-spliterators/ball/util/DispatchSpliterator.html#spliterators--
[ForSize]: https://blog.hcf.dev/javadoc/article/2019-03-28-java-streams-and-spliterators/src-html/ball/util/stream/Combinations.SpliteratorSupplier.html#line.177
[Permutations]: https://blog.hcf.dev/javadoc/article/2019-03-28-java-streams-and-spliterators/ball/util/stream/Permutations.html
[Start]: https://blog.hcf.dev/javadoc/article/2019-03-28-java-streams-and-spliterators/src-html/ball/util/stream/Combinations.SpliteratorSupplier.html#line.144
[binomial]: https://blog.hcf.dev/javadoc/article/2019-03-28-java-streams-and-spliterators/ball/util/DispatchSpliterator.html#binomial-long-long-
[ForPrefix]: https://blog.hcf.dev/javadoc/article/2019-03-28-java-streams-and-spliterators/src-html/ball/util/stream/Combinations.SpliteratorSupplier.html#line.208
[ForCombination]: https://blog.hcf.dev/javadoc/article/2019-03-28-java-streams-and-spliterators/src-html/ball/util/stream/Combinations.SpliteratorSupplier.html#line.265
[SpliteratorSupplier]: https://blog.hcf.dev/javadoc/article/2019-03-28-java-streams-and-spliterators/src-html/ball/util/stream/Combinations.html#line.106
| allenball |
219,430 | Forms in React, a tale of abstraction and optimisation | Table of contents The basics Abstraction Optimisation In my example I use the Material-... | 0 | 2019-12-12T16:09:00 | https://dev.to/sabbin/forms-in-react-a-tale-of-abstraction-and-optimisation-3gb2 | react | # Table of contents
[The basics](#chapter-1)
[Abstraction](#chapter-2)
[Optimisation](#chapter-3)
In my example I use the [Material-UI](https://material-ui.com/) library, and mostly the [TextField component](https://material-ui.com/components/text-fields/).
It can be removed and adapted to any library or no library at all.
## The basics <a name="chapter-1"></a>
Below is an example of a basic form with a few inputs _(fullWidth is used for presentation purposes only)_
```javascript
const Form = () => {
return (
<form>
<TextField label="Name" name="name" type="text" fullWidth />
<TextField label="Age" name="age" type="number" fullWidth />
<TextField label="Email" name="email" type="email" fullWidth />
<TextField label="Password" name="password" type="password" fullWidth />
<Button type="submit" fullWidth>
submit
</Button>
</form>
);
}
```
[CodeSandbox example](https://codesandbox.io/s/basic-zw3db)
In order to use the data and do something with it, we would need the following:
##### An object to store the data
For this we will use the `useState` hook from React
```javascript
const [formData, setFormData] = useState({});
```
##### A handler to update the data
- We need a function that takes the `value` and the `name` as a key from the input `event.target` object and updates the `formData` object
```javascript
const updateValues = ({ target: { name, value } }) => {
setFormData({ ...formData, [name]: value });
};
```
- Bind the function to the inputs `onChange` event
```javascript
<TextField ... onChange={updateValues} />
```
* *Extra*: Forms usually contain components that have their own logic and don't update their values via the `event` object, for example an autocomplete component, an image gallery with upload and delete, an editor like CKEditor, etc. For these we use another handler:
```javascript
const updateValuesWithParams = (name, value) => {
setFormData({ ...formData, [name]: value });
};
```
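If the destructuring in these handlers looks dense, a plain-JavaScript sketch (no React involved; `formData` here is just an ordinary object standing in for the state) shows what's happening:

```javascript
// Plain object standing in for the component's state (illustration only).
const formData = {};

// Same nested destructuring as the React handler above:
// pulls name and value out of event.target in one step.
const updateValues = ({ target: { name, value } }) => {
  formData[name] = value; // same idea as setFormData({ ...formData, [name]: value })
};

// A fake event shaped like the one an <input name="email"> would emit.
updateValues({ target: { name: "email", value: "me@example.com" } });
console.log(formData); // { email: 'me@example.com' }
```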
##### A handler to submit the data
- The function that does something with the data. In this case it displays it in the `console`.
```javascript
const submitHandler = e => {
e.preventDefault();
console.log(formData);
};
```
- Bind the function to the form `onSubmit` event
```javascript
<form onSubmit={submitHandler}>
```
Voila, now we have a form that we can use
[CodeSandbox example](https://codesandbox.io/s/basic-with-handlers-wmnhe)
## Abstraction <a name="chapter-2"></a>
The main idea with abstraction for me is not to have duplicate code or duplicate logic in my components, after that comes abstraction of data layers and so on...
Starting with the code duplication the first thing is to get the `inputs` out into objects and iterate them.
We create an `array` with each field as a separate `object`
```javascript
const formFields = [
{
label:'Name',
name:'name',
type:'text'
},
{
label:'Age',
name:'age',
type:'number'
},
{
label:'Email',
name:'email',
type:'email'
},
{
label:'Password',
name:'password',
type:'password'
},
]
```
And just iterate over it in our `form` render
```javascript
const Form = () => {
...
return (
<form onSubmit={submitHandler}>
{formFields.map(item => (
<TextField
key={item.name}
onChange={updateValues}
fullWidth
{...item}
/>
))}
<Button type="submit" fullWidth>
submit
</Button>
</form>
);
}
```
[CodeSandbox example](https://codesandbox.io/s/abstraction-of-fields-3zqpm)
So far so good, but what happens if we have more than one form? What happens with the handlers? do we duplicate them also?
My solution was to create a custom hook to handle this. Basically we move the `formData` object and handlers outside the components.
I ended up with a `useFormData` hook
```javascript
import { useState } from "react";
const useFormData = (initialValue = {}) => {
const [formData, setFormData] = useState(initialValue);
const updateValues = ({ target: { name, value } }) => {
setFormData({ ...formData, [name]: value });
};
const updateValuesParams = (name, value) => {
setFormData({ ...formData, [name]: value });
};
const api = {
updateValues,
updateValuesParams,
setFormData
};
return [formData, api];
};
export default useFormData;
```
Which can be used in our form components as follows
```javascript
const [formData, { updateValues, updateValuesParams, setFormData }] = useFormData({});
```
The hook takes one parameter when called.
- __initialFormData__: An object with initial value for the `formData` state in the hook
The hook returns an array with two values:
- __formData__: The current formData object
- __api__: An object that exposes the handlers outside the hook
Our component now looks like this
```javascript
const Form = () => {
const [formData, { updateValues }] = useFormData({});
const submitHandler = e => {
e.preventDefault();
console.log(formData);
};
return (
<form onSubmit={submitHandler}>
{formFields.map(item => (
<TextField
key={item.name}
onChange={updateValues}
fullWidth
{...item}
/>
))}
<Button type="submit" fullWidth>
submit
</Button>
</form>
);
};
```
[CodeSandbox example](https://codesandbox.io/s/abstraction-with-useformdata-hook-ntbmb)
Can we go even further? __YES WE CAN!__
Let's take the example with two forms, what do we have duplicated now?
Well, for starters we have the `submitHandler` and the actual `<form>` itself. Building on the `useFormData` hook, we can create a `useForm` hook.
```javascript
import React, { useState } from "react";
import { Button, TextField } from "@material-ui/core";
const useForm = (
initialFormDataValue = {},
initalFormProps = {
fields: [],
props: {
fields: {},
submitButton: {}
},
handlers: {
submit: () => false
}
}
) => {
const [formData, setFormData] = useState(initialFormDataValue);
const updateValues = ({ target: { name, value } }) => {
setFormData({ ...formData, [name]: value });
};
const updateValuesParams = (name, value) => {
setFormData({ ...formData, [name]: value });
};
const formFields = initalFormProps.fields.map(item => (
<TextField
key={item.label}
defaultValue={initialFormDataValue[item.name]}
onChange={updateValues}
{...item}
{...initalFormProps.props.fields}
/>
));
const submitForm = e => {
e.preventDefault();
initalFormProps.handlers.submit(formData);
};
const form = (
<form onSubmit={submitForm}>
{formFields}
<Button type="submit" {...initalFormProps.props.submitButton}>
Submit
</Button>
</form>
);
const api = {
updateValues,
updateValuesParams,
setFormData,
getFormFields: formFields
};
return [form, formData, api];
};
export default useForm;
```
It takes the `useFormData` hook from before and adds more components to it. Mainly it adds the `form` component and the `formFields` to the hook.
The hook now has 2 parameters when called.
##### - initialFormData
An object with the value that we want to initialise the `formData` with
##### - initalFormProps
An object with the configurations for the `form`
- __fields__: Array with the fields objects
- __props__: Object with props for the field components (_TextField_ in our case) and the submitButton component
- __handlers__: The handler for submit in this case
The hook is called as follows
```javascript
const Form = () => {
const [form] = useForm(
{},
{
fields: formFields,
props: {
fields: {
fullWidth: true
},
submitButton: {
fullWidth: true
}
},
handlers: {
submit: formData => console.log(formData)
}
}
);
return form;
};
```
[CodeSandbox example](https://codesandbox.io/s/abstraction-with-useform-hook-3cgtd)
The advantage of this custom hook is that you can override all of the methods whenever you need it.
If you need only the fields and not the complete form, you can get them via the `api.getFormFields` method and iterate them as you need.
I will write an article explaining and showing more examples of this custom hook.
## Optimisation <a name="chapter-3"></a>
My most common enemy was the re-rendering of the components each time the `formData` object changed. In small forms that is not an issue, but in big forms it can cause performance problems.
For that we will take advantage of the `useCallback` and `useMemo` hooks in order to optimise as much as we can in our hook.
The main idea was to memoize all the inputs and the form: since each is initialised with a value, it should change only when that value changes and in no other case, so it will not trigger any unnecessary renders.
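The key enabler here is the functional form of the state setter: because the updater receives the previous state as an argument, the callback no longer needs `formData` in its closure, so its `useCallback` dependency array can stay empty. A plain-JavaScript sketch, with a hypothetical stand-in for React's setter, illustrates the idea:

```javascript
// Hypothetical stand-in for React's state setter (illustration only).
let state = {};
const setState = (updater) => {
  state = typeof updater === "function" ? updater(state) : updater;
};

// This updater never reads the outer `state` variable directly,
// so in React the equivalent callback could be memoised with an
// empty dependency array and stay stable across renders.
const updateField = (name, value) =>
  setState((prev) => ({ ...prev, [name]: value }));

updateField("email", "a@b.c");
updateField("age", 30);
console.log(state); // { email: 'a@b.c', age: 30 }
```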
I ended up with the following code for the hook
```js
import React, { useState, useMemo, useCallback } from "react";
import { Button, TextField } from "@material-ui/core";
const useForm = (
initialFormDataValue = {},
initalFormProps = {
fields: [],
props: {
fields: {},
submitButton: {}
},
handlers: {
submit: () => false
}
}
) => {
const [formData, setFormData] = useState(initialFormDataValue);
const updateValues = useCallback(
({ target: { name, value, type, checked } }) => {
setFormData(prevData => ({
...prevData,
[name]: type !== "checkbox" ? value : checked
}));
},
[]
);
const updateValuesParams = useCallback(
(name, value) =>
setFormData(prevData => ({
...prevData,
[name]: value
})),
[]
);
const formFields = useMemo(
() =>
initalFormProps.fields.map(item => (
<TextField
key={item.label}
defaultValue={initialFormDataValue[item.name]}
onChange={updateValues}
{...item}
{...initalFormProps.props.fields}
/>
)),
[updateValues, initalFormProps, initialFormDataValue]
);
const submitForm = useCallback(
e => {
e.preventDefault();
initalFormProps.handlers.submit(formData);
},
[initalFormProps, formData]
);
const formProps = useMemo(
() => ({
onSubmit: submitForm
}),
[submitForm]
);
const submitButton = useMemo(
() => (
<Button type="submit" {...initalFormProps.props.submitButton}>
Submit
</Button>
),
[initalFormProps]
);
const form = useMemo(
() => (
<form {...formProps}>
{formFields}
{submitButton}
</form>
),
[formFields, formProps, submitButton]
);
const api = useMemo(
() => ({
updateValues,
updateValuesParams,
setFormData,
getFormFields: formFields
}),
[updateValues, updateValuesParams, setFormData, formFields]
);
return [form, formData, api];
};
export default useForm;
```
[CodeSandbox example](https://codesandbox.io/s/optimisation-with-useform-hook-do9f0)
##### Above and beyond
If we run the above example we would still have a render issue because of the `submitForm` callback, due to its `formData` dependency.
It's not a perfect solution, but it's a lot better than no optimisation at all.
My solution for this was to move the `formData` into the store. Since my `submitHandler` always just dispatches an action, I was able to access the `formData` directly from Redux Saga and therefore remove `formData` from the hook and from the dependency array of the `submitForm` callback. This may not work for others, so I did not include it in the article.
If someone has any thoughts on how to solve the issue with the `formData` from the `submitForm` I would be glad to hear them | sabbin |
179,942 | Your commit messages matter more than you think | You’ve just finished up a big chunk of code, you run your tests, and you’re ready to push it out the... | 0 | 2019-09-29T01:47:54 | https://dev.to/cathyc93/your-commit-messages-matter-more-than-you-think-3a15 | code, beginners, practices, cleancode | You’ve just finished up a big chunk of code, you run your tests, and you’re ready to push it out the door. In your haste, you type up a quick message and think “this is good enough”. A few weeks later, you’re scanning back through your commits looking for a certain change, and you find yourself more often than not having to glance through the code changes of each commit to figure out what that change *was really doing*. If this sounds familiar to you, you may want to be more reflective with each commit message and think about what you really want to convey.
**Here are a few reasons why you should spend a few more moments on that commit message:**
#1. You likely won’t remember your changes as clearly as you think you will
You may be saying to yourself “These changes are pretty straightforward. I don’t really need to describe what’s happening here.” But in the future, when you weren’t just dabbling in that portion of the codebase minutes before, you may find it more difficult to remember what you were trying to accomplish with those changes. By simply spending a few moments to write a more descriptive commit message, you may save your future self the time (and headaches) that you may have otherwise had to spend figuring out what you were doing in a given commit.
#2. Your teammates should be able to easily understand your changes
While I also strongly recommend having easy-to-decipher & self-documenting code, it is important to give the context of why you are making a change. Your teammates will likely want to glance through your code to understand the code changes more in depth, but having that first bit of context before diving in is invaluable.
#3. Easily find where a bug may have been introduced
By reading a clear commit message, you should be able to easily recall what change was introduced. If you find a bug in your app related to a certain piece of functionality, you can review the changes made to that portion of the app and more easily narrow down where something may have gone awry.
#4. You may need to rollback to a given commit
In the event of an app change that needs to be reverted, commits can be used to determine the last time the app was in a desirable state to roll back to.
#5. You may want to find code from specific changes
Even if you don’t plan to rollback to this given commit, you may be looking for a way in which you solved a given problem in the past. If you aren’t able to piece together all of the changes made for a certain feature just by looking at the current incarnation of the code (or would prefer the efficiency of a Version Control System visually presenting these changes), you can view the contents of a commit to neatly see how those changes were implemented.
I’m not arguing for long, verbose commit messages (because let’s be real — no one wants to read a commit message longer than a sentence or so), I am arguing for being purposeful and concise in our commit messages. As a good rule of thumb, **try to write your commit messages so that they take no more than 30 seconds for your audience to both read and understand the change.**
What are your thoughts on the importance of commit messages?
Let me know in the comments 🎉 | cathyc93 |
179,966 | What is the best serverless platform? | Azure Functions, AWS Lambda, Netlify Function, Google Cloud Function, IBM Cloud Function | 0 | 2019-09-29T04:41:23 | https://dev.to/syuraj/what-is-the-best-serverless-platform-1l48 | serverless, functions, faas | ---
title: What is the best serverless platform?
published: true
description: Azure Functions, AWS Lambda, Netlify Function, Google Cloud Function, IBM Cloud Function
tags: Serverless, Functions, Faas
---
I have used Azure extensively and the summary is it sucks.
UI is terribly complex, quite unresponsive.
CLI is not easy to use either.
And cold starts are terrible, reaching 30 seconds for a simple nodejs app.
Why do I have to deal with storage accounts, resource groups, application insights?
I have used Netlify Functions too, and it has its own issues.
I have listed the differences in this medium article.
https://medium.com/siristechnology/azure-function-vs-netlify-function-1509f1dcec52
What are your thoughts?
Which one is the best one comparatively in your experience?
| syuraj |
180,122 | Redundancy in Ruby: Feature or Bug? | The Ruby language came onto the scene in the mid 90’s thanks to a Yukihiro Matsumoto. He was a C++ pr... | 0 | 2019-09-29T17:19:42 | https://dev.to/mzakzook/redundancy-in-ruby-feature-or-bug-39i4 | ruby, beginners | The Ruby language came onto the scene in the mid 90’s thanks to a Yukihiro Matsumoto. He was a C++ programmer who wanted to have more fun programming and felt that most languages neglected the human experience of coding. He designed Ruby so that experienced programmers could jump in and immediately start writing powerful code.
To allow programmers from different origins to make sense of Ruby, it seems Matsumoto felt it was best to create multiple methods that achieve the same goal, also known as method aliases.
For example, a C# programmer can jump over to Ruby and find parallels like the `find_all` method (a close cousin of C#'s `FindAll`), whereas a SQL coder will be happy to find a familiar `select` method that achieves the same goal.
While it seems the redundancy allows for experienced programmers to make sense of Ruby faster, it also presents challenges for beginners. Is there a difference between Map & Collect? Reduce & Inject? Select & Findall? The list goes on...
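As a quick sketch, here are a few of those alias pairs side by side; each pair produces identical results:

```ruby
numbers = [1, 2, 3, 4]

# map and collect are aliases
doubled = numbers.map { |n| n * 2 }
# select and find_all are aliases
evens = numbers.select(&:even?)
# reduce and inject are aliases
sum = numbers.reduce(:+)

puts doubled == numbers.collect { |n| n * 2 }  # true
puts evens == numbers.find_all(&:even?)        # true
puts sum == numbers.inject(:+)                 # true
```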
I fall into the camp that believes that the redundancy allows for more freedom and should be appreciated. At its worst, it teaches beginners how to look up methods in Ruby docs and clarify questions that will continue to show up in other fashions as one learns to code.
Reference: https://en.m.wikipedia.org/wiki/Ruby_(programming_language) | mzakzook |
180,135 | 10 Visual Studio Code Extensions for Frontend Developers in 2020 | Visual Studio Code had 2.6 millions monthly active users in 2017 (last official number I could find,... | 0 | 2019-09-29T18:16:54 | https://thesmartcoder.dev/10-awesome-visual-studio-code-extensions-for-frontend-developers/ | productivity, vscode, javascript, react | Visual Studio Code had 2.6 millions monthly active users in 2017 (last official number I could find, certainly more by now) and is the arguably the best code editor out there at the moment. One of the best features is the Market Place offering tons of extensions to customize it exactly to your needs and helping you in writing high quality code. In this article I will recommend 10 VS Code extensions for frontend engineers working with HTML, CSS, JavaScript and frameworks like VueJS or ReactJS.
## JavaScript Code Snippets

This extension was created by Charalampos Karypidis and has been downloaded 4.5 million times so far. It provides code snippets for writing JavaScript, TypeScript, React, Vue, HTML, ... and supports ES6 syntax.
## NPM

Every developer knows NPM - the Node Package Manager. This extension helps you manage your `package.json`, provides warnings if dependencies are not installed yet and helps with version control.
## Prettier

Prettier from Esben Petersen is a pretty neat extension that has been downloaded close to 14 million times already. It helps you format your code and provides color keywords for more readable code.
## CSS Peek

CSS Peek helps you when working with markup class strings and IDs by identifying and enumerating the different styles that are already applied. This comes in handy because you no longer have to jump between HTML and CSS files.
## Vetur

Vetur is the official VueJS extension and has been downloaded more than 20 million times already. It provides error checking capabilities, auto-completion features and Vue snippets. This is really cool if you are a Vue developer like me!
## ESLint

ESLint - what can I say. Many people love linting, many do not. But the value linting provides for clean code is hardly arguable, and this extension with 24 million downloads is the best tool for it if you develop with JavaScript.
## Live Sass Compiler

The Live Sass Compiler extension is a small but powerful tool that can compile your SASS/SCSS files to CSS files in real time and provide a live preview of the compiled styles in your browser.
## Debugger for Chrome

Chrome for many developers is the number one browser when it comes to developing, testing and debugging their code. With this official extension for VS Code you can do so directly from Visual Studio Code - how cool is that!
## Live Server

Live Server, by Ritwick Dey (who also created Live Sass Compiler), creates a local development server right in Visual Studio Code to serve your static and dynamic sites. Using the go-live button in your editor you can serve your code right away, and the extension also supports live reloading - neat!
## Beautify

Last but not least in this collection comes Beautify, another great extension for code formatting much like Prettier. Almost 12 million downloads speak for themselves and you can format code written in JavaScript, JSON, CSS, Sass and HTML.
## Conclusion
This collection is far from complete and the extensions are not necessarily the best, but I hope it provides you with some very good tools to help you write high quality code and become a better web developer. Let me know in the comments if you find something useful or have other suggestions for extensions you think are first class. | simonholdorf
180,265 | How to Build a Simple Markdown Plugin for Your Gatsby Site | Learn how to write a simple Gatsby plugin that enables you to easily embed third-party videos into your content using a non-standard embed syntax. | 0 | 2019-09-30T03:07:05 | https://www.danielworsnup.com/blog/how-to-build-a-simple-markdown-plugin-for-your-gatsby-site/ | javascript, gatsby, plugin, tutorial | ---
title: How to Build a Simple Markdown Plugin for Your Gatsby Site
published: true
tags: javascript, gatsby, plugin, tutorial
description: Learn how to write a simple Gatsby plugin that enables you to easily embed third-party videos into your content using a non-standard embed syntax.
canonical_url: https://www.danielworsnup.com/blog/how-to-build-a-simple-markdown-plugin-for-your-gatsby-site/
---
_Cross-posted from [my website's blog](https://www.danielworsnup.com/blog/how-to-build-a-simple-markdown-plugin-for-your-gatsby-site/)._
The majority of software development boils down to automating tasks and processes that would otherwise consume valuable time, require manual effort, and be prone to accidental errors. Whenever you find yourself performing some task or process repeatedly, there are a few questions you should immediately start asking yourself:
1. Can this be automated?
1. Is it worth the time to automate this?
1. Is it worth the financial investment (if any) to automate this?
Most of the time, the answer to all of the above will be a resounding __yes__. There's rarely a reason to waste time doing something that a computer can do for you much more quickly and with a much lower risk of error. Let's consider a few examples of common automations that many of us rely on today:
* **JavaScript transpilers**, such as Babel and TypeScript. These enable us to write modern JavaScript code with bleeding edge language features and maintain confidence that our code will run properly in a broad range of browsers and browser versions.
* **Development tools**, such as IDEs, IntelliSense, bundlers, debuggers, linters, and formatters. This category speaks for itself.
* **Automated testing**, which enables us to deploy our changes with confidence that core user flows have not been impacted.
* **Continuous integration and delivery**, which enable us to build, test, and deploy code to production at the push of a single button.
Because we've become so accustomed to the value of automations such as these, it can be hard to imagine our lives without them. But this is how it used to be! Let's try not to take these things for granted, and instead remind ourselves that we are fortunate to develop software in today's technological ecosystem.
With all of this in mind, it's important to note that there are legitimately _good_ reasons for not building an automation, such as when:
* You have higher priority work that requires more immediate attention, such as critical bug fixes or an imminent feature work deadline.
* You aren't sure yet if the task or process will be needed long-term and need more time to verify that it will be.
Let me know in the comments or on [Twitter](https://twitter.com/worsnupd) of other examples of powerful automations that we tend to take for granted, or other good reasons for postponing an automation!
The remainder of this post focuses on a simple automation I built for [my Gatsby blog](https://danielworsnup.com/blog). We'll take a look at why I built it, what it does, and how it works!
## Cross-Posting Blogs
If you are a blogger then you are more than familiar with the struggle for visibility. You want people to read your work (why else would you have created it?), but there are a lot of other bloggers out there, some of which have the advantage of being sponsored by well-known, respected organizations. Though our motives may differ, we all share a common desire to grow a readership, and so we are forced to figure out how to survive in a competitive market.
Making your content accessible in more places than just your personal website is an easy way to increase your visibility, and you can do so by _cross-posting_ to other blogging platforms, such as [DEV](https://dev.to) and [Medium](https://medium.com/). These platforms let you categorize and tag your content and will automatically point interested readers in your direction. On Medium you can even [get paid](https://help.medium.com/hc/en-us/articles/360018834314-Stories-that-are-part-of-the-metered-paywall)!
>Note: When cross-posting, be sure to use [canonical URLs](https://en.wikipedia.org/wiki/Canonical_link_element) to point back to the original content on your website so that search engines don't penalize you for duplicate content.
If your personal blog is built with [Gatsby's blog starter](https://github.com/gatsbyjs/gatsby-starter-blog) (as mine is), you can cross-post to DEV extremely easily. Both platforms are powered by Markdown (for post content) and [Front Matter](https://www.gatsbyjs.org/docs/adding-markdown-pages/#frontmatter-for-metadata-in-markdown-files) (for post metadata), and although there are a few adjustments necessary, it's about as close to 1:1 as it gets. One notable difference, however, is the syntax each platform uses for third-party video embeds.
## Embedded Videos
Adding videos to your content can be extremely useful for conveying information that is hard to capture in another form, such as text or images. Here's a somewhat meta video demonstrating how it looks in one of my blog posts:
{% youtube zcjuXR8obvI %}
Out-of-the-box, Gatsby's blog starter does not support video embeds, but it's easy to add support by installing the [`gatsby-remark-embed-video` plugin](https://www.gatsbyjs.org/packages/gatsby-remark-embed-video/). With this plugin, you can embed videos into your posts using the following syntax:
```markdown
# An Awesome Video
Check out this awesome video:
`youtube: 12345abcde`
```
This will embed the Youtube video with ID `12345abcde`. On DEV, however, embedding the same Youtube video is done like this:
```markdown
# An Awesome Video
Check out this awesome video:
{% youtube 12345abcde %}
```
This is because DEV's third party embed syntax is based on [Liquid](https://shopify.github.io/liquid/)'s templating language, which DEV also supports in their Markdown.
## A Few Solutions
As with any problem, there are multiple approaches we could take to solve this issue. Two main ideas came to my mind:
1. Write all video embeds using `gatsby-remark-embed-video` syntax. Before cross-posting, go through and update all video embeds to use DEV syntax. These updates could be made manually, but it would be better to _automate_ this with a Regex find/replace, which would mitigate the risk of errors.
1. Write all video embeds using DEV syntax and figure out how to support this syntax in a Gatsby blog.
Option 2 is better for a few reasons:
1. The embed syntax becomes consistent across both platforms.
1. No extra update step is needed when preparing a blog post for cross-posting, which both saves time and prevents errors in the future.
1. I get to learn how to write a Gatsby markdown plugin!
This brings us to the meat of the post: building a custom plugin that lets us embed Youtube videos using DEV's embed syntax. Before diving into the implementation, let's first briefly look at how Gatsby works with your markdown source files.
## Gatsby and Markdown
Thanks to Gatsby's [flexible plugin architecture](https://www.gatsbyjs.org/docs/what-is-a-plugin/), populating a blog from markdown source files is a breeze. For a detailed tutorial on how to do this, check out [Creating a Blog with Gatsby](https://www.gatsbyjs.org/blog/2017-07-19-creating-a-blog-with-gatsby/). There are a few core plugins involved, and the remainder of this post assumes that these plugins are installed and configured:
* [gatsby-source-filesystem](https://www.gatsbyjs.org/packages/gatsby-source-filesystem/) - Reads markdown files from the filesystem and produces one Markdown node for each
* [gatsby-transformer-remark](https://www.gatsbyjs.org/packages/gatsby-transformer-remark/) - Transforms these Markdown nodes into HTML that is ready to be rendered in a browser
By default, `gatsby-transformer-remark`'s HTML output isn't much more than a 1:1 representation of the Markdown input, for example:
* An `h1` for each `#`
* An `li` for each `1.` or `*` in a list
* A wrapping `ol`/`ul` around each set of `li`s
For _most_ types of blog post content this is exactly what we want, but there are situations where the compiled HTML needs to be either 1) more sophisticated or 2) changed completely. Our Youtube video embed case is an example of the latter, but let's briefly take a look at an example of the former! Consider the following Markdown, which renders an image with some alt text:
```markdown

```
By default, `gatsby-transformer-remark` will produce the following HTML output for this Markdown input:
```html
<p>
<img src="my-amazing-image.png" alt="I'm the alt text">
</p>
```
Whilst this output is completely functional, it isn't optimized for the modern web. Instead of producing a simple `img` tag with a single `src` attribute, it would be much better to produce a fully [responsive image](https://developer.mozilla.org/en-US/docs/Learn/HTML/Multimedia_and_embedding/Responsive_images), complete with `srcset` and `sizes` that will ensure the best experience for a broad range of devices:
```html
<p>
<img
srcset="my-amazing-image-320w.jpg 320w,
my-amazing-image-480w.jpg 480w,
my-amazing-image-800w.jpg 800w"
sizes="(max-width: 320px) 280px,
(max-width: 480px) 440px,
800px"
src="my-amazing-image-800w.jpg"
alt="I'm the alt text">
</p>
```
Responsive images are far superior to simple images.
>Instead of the __developer__ deciding at __implementation__ time which image to load, responsive images enable the __browser__ to decide at __run__ time.
The reason we want to hand off this decision to the browser is because the browser is in the best position to make it! The browser has the most information about the user's browsing context and can take into account any or all of the following factors:
* Screen size
* Device orientation
* Pixel density
* Current network conditions
* The user's current data-saver preferences
Ultimately, the user benefits. However, getting Gatsby to render a _functional_ responsive image requires not only an extra transformation step when compiling Markdown to HTML, but also an image processing step that produces all of the required image sizes.
How can we do this? With more plugins!
### Plugins Within Plugins
Customizing the behavior of `gatsby-transformer-remark` requires hooking into its internals. Luckily for us, the `gatsby-transformer-remark` plugin _itself_ can be customized with plugins! For example, we can easily solve the responsive image problem by leveraging the great [`gatsby-remark-images` plugin](https://www.gatsbyjs.org/packages/gatsby-remark-images). In addition to providing the `srcset` and `sizes` attributes and resizing the original image, it also renders an elastic container to prevent layout jumps and supports the "blur up" placeholder loading effect. Amazing!
With all of our responsive image needs not only met but exceeded, we can return our focus to the Youtube video embed problem.
### How a gatsby-transformer-remark Plugin Works
Before jumping into the code for our custom plugin, we need to know a bit more about how plugins for `gatsby-transformer-remark` work.
>Plugins written for `gatsby-transformer-remark` define additional transformations that should be applied to your Markdown _before_ it gets compiled to the final HTML that is rendered on your Gatsby site.
Thankfully, we don't have to apply these transformations to raw Markdown source strings, which would be messy and unperformant.
#### Abstract Syntax Trees
`gatsby-transformer-remark` does the heavy lifting of parsing the raw Markdown source strings into [_Abstract Syntax Trees_](https://en.wikipedia.org/wiki/Abstract_syntax_tree), or ASTs. If you aren't familiar with the concept of an AST, don't be intimidated! It's just a fancy name for a simple idea:
>An Abstract Syntax Tree is a tree representation of a source code string. Each node in the tree represents a construct from the source code.
An AST begins as a 1:1 reflection of the source code string from which it was built, meaning that it could be traversed and compiled back into the original string if needed. Sometimes it's useful to operate on an unaltered AST. For example, our good friend [ESLint](https://eslint.org) examines your source code's unaltered AST—rather than the source code itself—for issues. Other times it's useful to have ASTs undergo mutating transformations to produce new ASTs, which are no longer equivalent to the original source. For example, [many compilers](https://en.wikipedia.org/wiki/Optimizing_compiler) will automatically optimize code by identifying and fixing parts of the code's AST that can perform more efficiently.
Our plugin is an example of scenario #2. We want to transform our Markdown ASTs in such a way that instances of the Youtube video embed string are replaced with embed HTML for the specified video.
#### Markdown ASTs
Internally, `gatsby-transformer-remark` uses the [remark processor](https://github.com/remarkjs/remark) to build ASTs that comply with the [MDAST spec](https://github.com/syntax-tree/mdast) (short for Markdown Abstract Syntax Tree). Among other things, this spec defines the various node types that can exist in a Markdown AST, such as `image`, `text` and `inlineCode`. Consider the following Markdown:
```markdown
I'm a paragraph containing `inline code`!
```
The resulting MDAST tree is as follows (with some irrelevant metadata removed for brevity):
```json
{
"type": "root",
"children": [
{
"type": "paragraph",
"children": [
{
"type": "text",
"value": "I'm a paragraph containing "
},
{
"type": "inlineCode",
"value": "inline code"
},
{
"type": "text",
"value": "!"
},
]
}
]
}
```
Notice how the nodes in the AST map directly to the constructs in the Markdown source: `text` -> `inlineCode` -> `text`, nested together under a `paragraph`.
Writing a transformer plugin for `gatsby-transformer-remark` boils down to traversing Markdown ASTs (such as the one above) and making changes to relevant nodes. Our Youtube video embed plugin simply needs to do the following:
1. Traverse the AST looking for nodes of type `text`
1. If the node's `value` matches the DEV video embed syntax, transform it!
Now that we have an idea of how a `gatsby-transformer-remark` plugin works and what ours needs to do, let's jump into the implementation!
## Building the Plugin
[The Gatsby docs](https://www.gatsbyjs.org/docs/creating-plugins/) do a great job of explaining how to create custom plugins. For simplicity, the plugin we build here will only support Youtube video embeds, but a fun open source project would be a plugin that supports all of DEV's third party embed tags ([they have a lot!](https://dev.to/p/editor_guide#liquidtags)) and possibly even the Liquid templating language. You heard it here first!
We'll create our embed plugin as a _local_ plugin, meaning that it is scoped to a specific Gatsby project and lives in the project's repository under the `plugins` directory of the project root. Create a directory for the Youtube video embed plugin:
```shell
cd path/to/gatsby/project
mkdir plugins # If it doesn't exist already
cd plugins
mkdir youtube-video-embed
```
The only files needed to create a plugin are `package.json` and `index.js`. Create these in the plugin directory:
```shell
cd youtube-video-embed
npm init # You can accept all of the default values
touch index.js
```
A plugin for `gatsby-transformer-remark` is simply a function that receives a Markdown AST as a parameter and alters it. Once configured, `gatsby-transformer-remark` will invoke this function once for each Markdown node, and recall that `gatsby-source-filesystem` produces a Markdown node for each Markdown source file in our project.
We'll implement our plugin function in `index.js`. Let's open it for editing and add the following code:
```javascript
module.exports = ({ markdownAST }) => {
console.log('video embed!', JSON.stringify(markdownAST))
}
```
Notice how `markdownAST` can be conveniently destructured from the first parameter to the plugin function. To make sure things are working, for now we just log the AST of each Markdown node to the build console.
### Configuring the Plugin
Next, we need to configure our Gatsby project to run the new plugin, which we accomplish by listing the plugin in `gatsby-config.js`. Since this is a plugin for `gatsby-transformer-remark`—not Gatsby itself—we list it under `gatsby-transformer-remark`'s own plugin list:
```javascript
module.exports = {
/* ... */
plugins: [
/* ... */
{
resolve: 'gatsby-transformer-remark',
options: {
plugins: [
'youtube-video-embed'
],
},
},
/* ... */
]
}
```
Note: Our plugin will soon render an `iframe` that loads the embedded video. Because of this, if your project also relies on the [`gatsby-remark-responsive-iframe` plugin](https://www.gatsbyjs.org/packages/gatsby-remark-responsive-iframe/), you have to list our plugin first:
```javascript
plugins: [
'youtube-video-embed',
'gatsby-remark-responsive-iframe'
]
```
With the configuration change in place, you should be able to run your Gatsby site (`npm run dev`) and see the AST of each Markdown node logged to the build console. If so, things are working! Now let's make the plugin do something useful.
### Traversing the AST
As mentioned earlier, we need to search `markdownAST` for nodes of type `text` so that we can transform them. We _could_ write our own loops and recursion to do this, but instead let's have the [`unist-util-visit` library](https://github.com/syntax-tree/unist-util-visit) do it for us:
```shell
npm i unist-util-visit
```
This library exposes a `visit` function, which allows us to traverse a Markdown AST by specifying the following:
1. The type of node we want to visit (`text`), and
1. A function to be called once for each node of the specified type
In `index.js`, import the library and call `visit`:
```javascript
const visit = require(`unist-util-visit`)
module.exports = ({ markdownAST }) => {
visit(markdownAST, 'text', (node) => {
// We're at a text node!
})
}
```
The next step is to check the value of each visited `text` node to see if it matches DEV's Youtube video embed syntax. Recall that this syntax is `{% youtube 12345abcde %}`, where `12345abcde` is the ID of the Youtube video to embed.
Let's define a simple regular expression that matches the syntax and use it to check `node.value`. Each time we find a match, we log the video ID (which we can get from a match group) to the console:
```javascript
const YOUTUBE_REGEX = /^{% youtube (\w+) %}$/
module.exports = ({ markdownAST }) => {
visit(markdownAST, 'text', (node) => {
const match = YOUTUBE_REGEX.exec(node.value)
if (match) {
console.log('Found one! The ID is', match[1])
}
})
}
```
Assuming that you have added some Youtube video embeds to your Markdown files, you should see the video IDs logged to the console when you run this:
```shell
Found one! The ID is zcjuXR8obvI
Found one! The ID is Q2CNno4JuJM
Found one! The ID is CpYLXl0Rm74
```
We're so close! We have identified the text nodes we care about, and all that remains is to transform them.
### Transforming the AST
Remember [earlier](#abstract-syntax-trees) when we discussed mutating ASTs? That's exactly what we're going to do! The MDAST spec defines an `html` node type that represents raw HTML within a Markdown source file. Let's change the `type` of each video embed node to `html` and change the `value` to an HTML string that defines an `iframe` pointing to the embedded Youtube video:
```javascript
module.exports = ({ markdownAST }) => {
visit(markdownAST, 'text', (node) => {
const match = YOUTUBE_REGEX.exec(node.value)
if (match) {
const videoId = match[1]
node.type = 'html'
node.value = `
<iframe
type="text/html"
width="640"
height="360"
frameborder="0"
src="https://www.youtube.com/embed/${videoId}"
></iframe>
`
}
})
}
```
And that's it! If you rebuild your Gatsby site and open it in the browser, you will see embedded Youtube videos powered by the DEV embed syntax.
### Possible Improvements
The final plugin implementation above is intentionally minimal in order to be as digestible as possible, but there are many improvements that could be made when integrating this into your project. Here are a few thoughts:
* The HTML string only passes a few parameters to the Youtube `iframe` player, but there are [a variety of parameters](https://developers.google.com/youtube/player_parameters) you can use to configure the embedded video to your liking.
* Whilst this implementation works fine for only supporting Youtube video embeds, it doesn't scale well in its current state. To expand support to other video providers (such as Vimeo or Twitch) or other embed types (such as code snippets, music, tweets, etc), we would want to build a more generic system in which provider-specific details and behavior are abstracted. To see an example, check out the [source code for `gatsby-remark-embed-video`](https://github.com/borgfriend/gatsby-remark-embed-video), which I mentioned earlier in this post.
* This implementation will transform `text` nodes anywhere in the Markdown AST, but we probably only want the transformation to apply to top-level paragraphs that only contain a single text node.
* This implementation does not allow different video embed instances to be customized. For example, we may want some videos to autoplay and others to loop infinitely. In order to enable this, we'd want to extend the embed syntax to allow different parameters to be specified: `{% youtube 12345abcde loop=true autoplay=false %}`, for example.
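To make the third point concrete, here is one way the paragraph-only restriction could be sketched. `transformEmbedNode` is a hypothetical helper (not part of the plugin built above), meant to be called from the `unist-util-visit` callback, which passes each node's parent as its third argument:

```javascript
const YOUTUBE_REGEX = /^{% youtube (\w+) %}$/

// Only transform a text node when it is the sole child of a paragraph,
// so embed-like text buried inside larger content is left alone.
// Returns true when a transformation was applied.
function transformEmbedNode(node, parent) {
  const isSoleParagraphChild =
    parent && parent.type === 'paragraph' && parent.children.length === 1
  if (!isSoleParagraphChild) return false

  const match = YOUTUBE_REGEX.exec(node.value)
  if (!match) return false

  node.type = 'html'
  node.value = `<iframe src="https://www.youtube.com/embed/${match[1]}"></iframe>`
  return true
}

// A text node alone in a paragraph gets transformed...
const embed = { type: 'text', value: '{% youtube 12345abcde %}' }
console.log(transformEmbedNode(embed, { type: 'paragraph', children: [embed] })) // true

// ...but the same text with a sibling in the paragraph is skipped.
const inline = { type: 'text', value: '{% youtube 12345abcde %}' }
const sibling = { type: 'text', value: 'extra text' }
console.log(transformEmbedNode(inline, { type: 'paragraph', children: [inline, sibling] })) // false
```

Separating the check into its own function like this also makes it easy to unit test without spinning up Gatsby at all.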
## Beyond Gatsby Plugins
One of my favorite things about being a programmer is having the ability to make my life and others' lives easier by automating processes with code. Building automations not only saves precious time, but also prevents the silly errors often caused by manual grunt work on repetitive tasks. There are few better feelings than experiencing the fruits of these efforts and then fondly thanking the you-of-the-past for anticipating their value and setting aside time to build them.
The simple Markdown transformer plugin we built is just a small example of this. With the plugin in place, we can embed Youtube videos in our Gatsby blogs and cross-post to DEV without thinking twice about whether the video embeds will work properly.
Tell me in the comments or on [Twitter](https://twitter.com/worsnupd) about an automation that you have written for yourself or have shared with the world!
Happy coding!
## Thanks for reading!
I've been planning on starting a tech blog for several years now, which is a feeling that I'm sure some of you can relate to. Only recently have I started taking it seriously, and so far the reception has been very positive. I'd like to thank everyone for reading, liking, commenting, and (re)tweeting, etc. Let me know what I can do to help you on your journey!
#### Like this post?
Follow me on Twitter where I (re)tweet about frontend things: [@worsnupd](https://twitter.com/worsnupd)
| worsnupd |
180,181 | How to use React Dashboard analytics with an external site FOR FREE | 30+ smart components with included serverless analytics & user auth functions that works with Netlify. | 0 | 2019-09-29T19:54:10 | https://dev.to/dillonraphael/how-to-use-react-dashboard-analytics-with-an-external-site-for-free-3ap3 | react, serverless, netlify, javascript | ---
title: How to use React Dashboard analytics with an external site FOR FREE
published: true
description: 30+ smart components with included serverless analytics & user auth functions that works with Netlify.
tags: react, serverless, netlify, javascript
cover_image: https://cdn.sanity.io/images/axalcmta/posts/888cd6e5452663dddb161da4b6ae4bb3f28095b5-1343x705.png
---
With the latest release of [React Dashboard](https://reactdashboard.com), there is an included analytics function that lets you store your own analytics data. It doesn't store cookies or IP addresses; instead you get the necessities like your most popular pages, referrers, device, browser & country. This is a huge advantage, as it's GDPR compliant.
The only requirements are the free version of Netlify & the sandbox database from mLab.
First create a database on mLab. You can choose the free sandbox version:

When the database is created, click on it and select the User tab. We need to create an admin user for this database:

Then keep note of your mongodb url:

`mongodb://<dbuser>:<dbpassword>@ds031978.mlab.com:31978/dbname`
Replace `<dbuser>` & `<dbpassword>` with the credentials of the user you just created above.
Now push the code in the React Dashboard folder to GitHub or GitLab, then create a new site from Git on Netlify.

These are the required build steps. It should populate automatically.

Then there are 3 required environment variables:

Fill the `MONGODB_URL` with the mlab url above, containing the user credentials.
Then on the external site you want to track analytics, run this API request on page load. This can be done with vanilla javascript, jQuery, React or any frontend framework you prefer.
I created a little API request library that you can copy and paste into your code. It uses Axios, so you'll need to run `npm install --save axios` inside your project directory.
```js
import axios from 'axios'
const createAxios = (token) => {
const instance = axios.create({
headers: {
Accept: 'application/json'
}
});
return instance;
}
export const POST = (url, data, token) => createAxios(token).post(url, data);
export const PATCH = (url, data, token) => createAxios(token).patch(url, data);
export const PUT = (url, data, token) => createAxios(token).put(url, data);
export const DELETE = (url, params, token) => createAxios(token).delete(url, { params: params || { foo: 'bar' } });
export const GET = (url, params, token) => createAxios(token).get(url, { params });
```
Then lets use a React component as an example:
```js
import React from 'react'
import {POST} from '../lib/api'
const Index = () => {
React.useEffect(() => {
POST('https://<yournetlifyurl.com>/.netlify/functions/create-visitor', {
referrer: document.referrer,
path: window.location.pathname
})
}, [])
return (
<div><h1>Analytics Example</h1></div>
)
}
export default Index
```
It's that simple, and you can start tracking your analytics - free, without any tracking cookies. Check out React Dashboard at [https://reactdashboard.com](https://reactdashboard.com)
| dillonraphael |
180,193 | Intro to Graph Data Structures | Data structures are just ways we organize data. The one I'm sure you're familiar with is the list o... | 0 | 2019-09-29T22:27:25 | https://dev.to/amberjones/intro-to-graph-data-structures-abk | javascript, beginners, firstyearincode |
Data structures are just ways we organize data.
The one I'm sure you're familiar with is the **list** or **array**, a **linear** ordered sequence of values. This is your shopping list, your to-do, your reading, whatever.
Let's explore the way more exciting realm of **non-linear** graphs!
#### But first, some basics:
A graph is composed of objects connected by lines.
In JavaScript (and computer science at large), we refer to those objects and lines as **vertices and edges**.
The benefit of a graph structure is that not only can you represent nodes of data but also their _relationship_ to each other through properties assigned to their edges.
Two common properties of edges are **weights** and **direction**.
If a graph has weights, it is considered **weighted** and if it has direction, it is considered **directed**. Direction can go one way or both ways.
Susan can have a crush on Sally, but that doesn't mean Sally has a crush on Susan.

Now, imagine yourself, just floating in space all by your lonesome. You have a lot of knowledge, and no one to share it with.
Another space traveler appears: "Hey friend! Let's keep in contact". You give them your number, and suddenly, you have meaning and cease to be a singular speck of dust in space. You have become a node and you have created a connecting **edge**.
But it costs you.
Each time you call your space friend, you're billed by your telephone company $12393900.00. This is the **weight** of your connecting edge.
#### Let's come back from space and look at IRL graph data structures

The classic example is Google Maps. It's just one big graph!
Street intersections are vertices, and the streets themselves are edges.
They are **weighted** by distance and travel time. The streets also have a **directionality** property...some streets only go one way.
Traversing a Graph refers to finding a path between two nodes, finding the shortest path from one node to another and finding the shortest path that visits all nodes [1].
One of many methods to traverse a graph is **Dijkstra's algorithm** (or Dijkstra's Shortest Path First algorithm, SPF algorithm). This is the one Google used (or a variant of it) to implement their map application. The algorithm was originally conceived by Dijkstra in 1956 in about 20 minutes at a cafe in Amsterdam [2].
Here's what it looks like in Javascript:

#### A note on Tree Graphs...
That family tree you had to make in Kindergarten? Yup, a Tree Graph.
Here's the thing, **Tree Graphs** are a highly specialized form of a Graph, with a root node that all other nodes are descendants of.
It's important to make the distinction between a Tree Graph and a Graph, because they have some overlapping qualities, but their rules on structuring data are completely different.
So in JavaScript, they are considered entirely different data structures.
For an in-depth and entertaining read on Trees, check out [this article](https://dev.to/jillianntish/a-brief-descent-into-javascript-trees-48lm) by fellow DEV community member Jill.
Graphs are non-hierarchical structures of how data relates, connecting our entire world!
Title Image: Social Network Analysis Visualization [Grandjean, M. (2016)]
[1] https://www.jenniferbland.com/the-difference-between-a-tree-and-a-graph-data-structure/
[2] https://www.vice.com/en_us/article/4x3pp9/the-simple-elegant-algorithm-that-makes-google-maps-possible | amberjones
180,194 | Hacktoberfest Mini Competition: Win 50 bucks for open source contribution | In the wake of this years hacktober fest I am giving away 50 bucks for the most meaningful contribution to Super Productivity | 0 | 2019-09-29T21:19:57 | https://dev.to/johannesjo/i-am-giving-away-50-bucks-for-your-hacktoberfest-contribution-472p | hacktoberfest, contributorswanted, opensource, webdev | ---
title: Hacktoberfest Mini Competition: Win 50 bucks for open source contribution
published: true
description: In the wake of this years hacktober fest I am giving away 50 bucks for the most meaningful contribution to Super Productivity
tags: hacktoberfest, contributorswanted, opensource, webdev
cover_image: https://images.unsplash.com/photo-1498931299472-f7a63a5a1cfa?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=crop&w=700&q=60
---
I am excited about open source; that's why I am excited about [Hacktoberfest](https://hacktoberfest.digitalocean.com/). It will hopefully bring new people to face the daring prospect of submitting their first pull request, allowing our wonderful community to grow.
I intend to participate myself, which means I will have less time to spend on my own projects. This is how I came up with the idea to give away 50€ for the most meaningful contribution to my favorite open source app and side project [Super Productivity](https://github.com/johannesjo/super-productivity) during this year's October.
Isn't that great? Not only do you get to make possibly your first open source contribution, you also get paid for it! I'm a little bit jealous. I never get paid for mine...
If you think money and open source don't mix, I am very open to giving the money to a charity of your choosing instead. If you decide to do so, I'll add another 50 bucks on top (well, at least if I don't think the cause is absolutely outrageous or immoral).
There are many [open issues](https://github.com/johannesjo/super-productivity/issues) which could use your help. Earlier I also wrote [an article](https://dev.to/johannesjo/super-productivity-needs-your-help-5dmd) about other areas which could use some improvement. Also please feel free to contribute your own ideas. I want this to be a community driven project.
---
## details of the competition
At the end of the month I will decide which contribution had the biggest impact. Chances are this might feel a bit arbitrary, so please don't get mad if I decide to give the money to someone else.
Only a single person will receive the reward.
The contribution doesn't necessarily have to be a PR, but those probably have the best chances.
The money will be sent using PayPal.
I'll happily help you with any questions that come up along the way. I do recommend having at least some experience with JavaScript/TypeScript, though.
---
P.S.: Why 50 bucks? I actually thought about this for a long time and I am still not 100% convinced that it is the best possible move. I thought about what else people could want, but in the end money gives you the most flexibility. I also thought about offering more, but I wanted this to be a friendly competition and the prize to be just a small incentive rather than something you do in hopes of paying your rent. I also consider this a small birthday present to myself, as I'm proud of my app and I really would like to have more people participating.
Please let me know, if you think this sucks and what I should have done instead!
| johannesjo |
180,251 | Learn to code on your iPhone📱 for FREE 🚀 | Did you know you can learn to code on your iPhone or iPad? You don’t need an expensive laptop or a t... | 0 | 2019-09-30T02:01:07 | https://dev.to/0xbanana/learn-to-code-on-your-iphone-for-free-2fni | ios, reactnative, go, python | Did you know you can learn to code on your iPhone or iPad?
You don't need an expensive laptop or a top-of-the-line desktop to learn how to write usable code. With these FREE apps from the App Store you can start learning RIGHT NOW!
**W3 - Web development and programming tutorials**
W3 is a wonderful resource every developer should have in their app arsenal. It’s your pocket guide to most programming languages. It covers all languages you'd want to learn and a bunch you may not have even heard of.
[W3](https://apps.apple.com/us/app/w3/id987493634)
**Code playground - Learn the basics of your favorite programming language on the go. All the content are organized in bite-sized form for you to learn in no time.**
Code playground is a wonderful app that takes you from "Hello World" to complex language topics in a flash! Each of the 5 languages has over 30 modules to explore and learn, and with its own integrated compiler you have full control over the code. Modify the code and explore the possibilities at your fingertips.
[Code Playground](https://apps.apple.com/us/app/code-playground/id1452106609)
**Pythonista3 - A complete scripting environment for Python on your iOS device.**
In true Python fashion, batteries are included – from popular third-party modules like numpy, matplotlib, requests, and many more, to modules that are tailor-made for iOS. You can write scripts that access motion sensor data, your photo library, contacts, reminders, the iOS clipboard, and much more. You can also use Pythonista to build interactive multi-touch experiences, custom user interfaces, animations, and 2D games. Honestly it's great.
[Pythonista 3](https://apps.apple.com/us/app/pythonista-3/id1085978097)
**React native builder - Design and create React Native apps with ease**
Create React Native apps directly on your iOS device with minimal effort. Design layouts, complex pages, and get into the details of each component to fully flesh out your ideas and application flow. Once designed you can export your code to be integrated into any CI pipeline.
[React Native Builder](https://apps.apple.com/us/app/react-native-builder/id1452492770) | 0xbanana |
181,362 | Docker 101 - What it is and how to start using | Docker is awesome! | 2,568 | 2019-10-04T02:24:02 | https://dev.to/luturol/docker-101-what-it-is-and-how-to-start-using-1d84 | docker, begginers | ---
title: Docker 101 - What it is and how to start using
published: true
description: Docker is awesome!
tags:
- docker
- begginers
series: Docker for Begginers
---
You can see the original post in [my personal blog](https://luturol.github.io/docker/Docker-101)
# What is Docker?
Docker is a tool which manages containers and makes it possible for developers to run their own applications inside them.
By putting your application in a container, you get the guarantee that it will always run in the same environment, because the container is configured with all the specifications the application needs.
Docker has its own standardized interface for all operations, and its own container orchestration.
## What is a container?
It's something that packages up all of an application's dependencies and code so the application runs as reliably as possible. You can read more at [What is a Container? from the Docker page](https://www.docker.com/resources/what-container)
## Running a container
```sh
docker container run -t ubuntu top
```
**docker container run** initializes a container.
**-t** adds a [pseudo-TTY](https://unix.stackexchange.com/questions/21147/what-are-pseudo-terminals-pty-tty) to the container.
**run** first executes a `docker pull` to fetch the Ubuntu image onto the host. Once downloaded, it starts the container from the Ubuntu image.
**top** is a Linux command that shows the active processes and their resource consumption.
Looking at the output in bash, you are going to see only the root process running, which shows that each container is isolated from the others, avoiding conflicts between them.
Even though it uses the same image from Ubuntu, it's important to note that the container does not have its own kernel. It uses the host kernel, and the image is used only to provide the file system and the tools available on Ubuntu.
## Get inside the container
```sh
docker container exec -it CONTAINER_ID bash
```

Allows you to enter the container's terminal.

**-it** opens an interactive session with a pseudo-TTY attached, so you can navigate inside the container.
## Show all containers
```sh
docker container ls
```
Shows all the containers that are running.
## Using the container ID to get into its terminal
```sh
docker container ls
# get the container ID
docker container exec -it CONTAINER_ID bash
# to leave the container
exit
```
The last argument selects the shell to use. You can check on Docker Hub which shells are available in the image.
### Stopping a container
```sh
docker container stop CONTAINER_ID
```
You can use just the first 3 digits of the container ID to identify the container, as long as those digits are unique among your containers.
It's possible to stop more than one container at once:
```sh
docker container stop d67 ead af5
```
### Removing a container
```sh
docker system prune
```

Removes all stopped containers.
Thanks for reading! I've been using Docker for a while now, and it's amazing having all my projects running without conflicts, each with its own database. I was tired of installing and configuring all the different types of databases...
Don't forget to drink water and eat clean. | luturol |
180,341 | Update: Family Feud with React | Let's make a game together | 0 | 2019-09-30T09:59:07 | https://dev.to/thefern/update-family-feud-with-react-dk5 | hacktoberfest, react, game | ---
title: Update: Family Feud with React
published: true
description: Let's make a game together
tags: hacktoberfest, react, game
---
Excited to get this project started. Here's a quick update. I have created a starter react project, and deployed to [https://kodaman2.github.io/family-trivia-react/](https://kodaman2.github.io/family-trivia-react/)
I've added a public trello board, so there is clear and open communication.
[https://trello.com/b/Iy345e2L/family-trivia-react](https://trello.com/b/Iy345e2L/family-trivia-react)
Updated README with project phases on repo. [https://github.com/kodaman2/family-trivia-react](https://github.com/kodaman2/family-trivia-react)
Added an initial design document: [https://github.com/kodaman2/family-trivia-react/tree/master/docs/design](https://github.com/kodaman2/family-trivia-react/tree/master/docs/design). I've exported it as PDF, but if you want to edit and add stuff, I am using Balsamiq; I believe you can use the free version for 30 days, so it's perfect for Hacktoberfest. Ideas are welcomed! | thefern |
180,413 | Constant Enum in Activerecord Rails | 🤔 Situation & Motivation Master data that is not so many records and not need to store... | 0 | 2019-09-30T10:33:51 | https://dev.to/n350071/constant-enum-in-activerecord-rails-4gpe | rails | ---
title: Constant Enum in Activerecord Rails
tags: rails
published: true
---
## 🤔 Situation & Motivation
You have master data with only a few records, which you don't need to store in the RDB for performance reasons.
How can we store it in the code?
## 🦄 General solution
```ruby
class Hoge
  module TYPE
    A = 1
    B = 2

    TYPE_NAMES = {
      A => 'alpha',
      B => 'beta'
    }

    def self.keys
      TYPE_NAMES.keys
    end

    def self.type_names
      TYPE_NAMES.values
    end
  end
end
```
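For completeness, here is a self-contained, runnable version of this pattern together with example calls (freezing the hash is an extra touch not shown above, but it protects the master data from accidental mutation):

```ruby
class Hoge
  module TYPE
    A = 1
    B = 2

    TYPE_NAMES = { A => 'alpha', B => 'beta' }.freeze

    def self.keys
      TYPE_NAMES.keys
    end

    def self.type_names
      TYPE_NAMES.values
    end
  end
end

# The constants are plain integers, so they can be stored in any column:
Hoge::TYPE::A             # => 1
Hoge::TYPE.keys           # => [1, 2]
Hoge::TYPE.type_names     # => ["alpha", "beta"]

# Rendering a label for a persisted integer value:
Hoge::TYPE::TYPE_NAMES[1] # => "alpha"
```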
## 👍 Enum directory solution
This is not the usual way, but it is convenient when migrating between versions (note the `V2` namespace below).
### Code template
```md
app/enums/v2/status_enum.rb
app/enums/v2/position_enum.rb
```
```ruby
module V2::StatusEnum
  GREAT = 1
  NORMAL = 2
  BAD = 3

  STATUS_NAMES = {
    GREAT => 'GREAT',
    NORMAL => 'NORMAL',
    BAD => 'BAD'
  }
end
```
```ruby
module V2::PositionEnum
  BOSS = 1
  NORMAL = 2
  PLAYER = 3

  POSITION_NAMES = {
    BOSS => 'BOSS',
    NORMAL => 'NORMAL',
    PLAYER => 'PLAYER'
  }
end
```
### Usage
```ruby
user = User.where(status: V2::StatusEnum::GREAT)
<%= V2::StatusEnum::STATUS_NAMES[user.status] %>
```
---
## 🔗 Parent Note
{% link n350071/my-rails-note-47cj %}
| n350071 |
180,423 | Dynamically change the iframe height from inside | 🤔 Situation Your page is embedded to an iframe which ID is parent-iframe, and you want to... | 0 | 2019-10-29T00:56:38 | https://dev.to/n350071/dynamically-change-the-iframe-height-from-inside-24en | javascript | ---
title: Dynamically change the iframe height from inside
tags: js
published: true
---
## 🤔 Situation
Your page is embedded in an iframe whose ID is `parent-iframe`, and you want to adjust the iframe's height to fit your page content.
```html
<html>
<head>
</head>
<body>
<div>
<iframe src="https://your.html" id="parent-iframe"></iframe>
</div>
</body>
</html>
```
## 🦄 Solution
Run a script from inside the iframe when the page is ready. Note that this snippet uses jQuery and reaches into `window.parent.document`, so it only works when the parent page and the iframe content share the same origin.
```html
<head>
<script type="text/javascript">
$(document).ready(function() {
$("#parent-iframe", window.parent.document).height(document.body.scrollHeight);
});
</script>
</head>
```
---
## 🔗 Parent Note
{% link n350071/my-jquery-javascript-note-fp7 %}
| n350071 |
180,452 | The ary.map(&:to_s), what is this? | 👍 Instant summary They are equal. # ary = [:jack, :justin, :jojo] ary.map(&:to_s) ==... | 0 | 2019-09-30T12:07:10 | https://dev.to/n350071/the-ary-map-tos-what-is-this-n2i | rails | ---
title: The ary.map(&:to_s), what is this?
tags: rails
published: true
---
## 👍 Instant summary
They are equal.
```ruby
# ary = [:jack, :justin, :jojo]
ary.map(&:to_s) === ary.map{|obj| obj.to_s}
# => true
ary.map(&:to_s)
# => ["jack", "justin", "jojo"]
ary.map{|obj| obj.to_s}
# => ["jack", "justin", "jojo"]
```
## 🦄 Understanding
### 1. The & passes a Proc object as a block.
When & is prefixed to an argument, Ruby passes that Proc object as a block.
📚 [map(&b) (Japanese)](https://docs.ruby-lang.org/ja/latest/doc/symref.html#and)
### 2. automated to_proc
You may wonder: `:to_s` isn't a Proc object. That's right 👍, and Ruby automatically calls `Symbol#to_proc` before the & turns it into a block.
It means that...
```ruby
ary.map(&:to_s) === ary.map(&:to_s.to_proc)
# => true
```
### 3. Symbol#to_proc method
[Symbol#to_proc](https://ruby-doc.org/core-2.5.0/Symbol.html#method-i-to_proc)
> Returns a Proc object which responds to the given method by sym.
```ruby
ary.map(&:to_s.to_proc) === ary.map(&Proc.new{|obj| obj.to_s })
# => true
```
## Summary
They are all the same.
```ruby
ary.map(&:to_s)
ary.map(&:to_s.to_proc)
ary.map(&Proc.new{|obj| obj.to_s })
ary.map{|obj| obj.to_s}
```
---
## 🔗 Parent Note
{% link n350071/my-rails-note-47cj %}
| n350071 |
180,460 | VS Code Extension - Render Process Diagrams with bpmn.io | VS Code Extension for Displaying BPMN 2.0 Files | 0 | 2019-09-30T12:35:17 | https://dev.to/pinussilvestrus/vs-code-extension-render-process-diagrams-with-bpmn-io-39ab | vscode, plugins, bpmn, webdev | ---
title: VS Code Extension - Render Process Diagrams with bpmn.io
published: true
description: VS Code Extension for Displaying BPMN 2.0 Files
tags: #vscode #plugins #bpmn #webdev
---
As another exciting pet project, we created a [Visual Studio Code](https://code.visualstudio.com/) extension for rendering BPMN 2.0 process diagrams. It is built on top of the amazing modeling toolkit [bpmn-io](https://bpmn.io/) and can be [found on GitHub](https://github.com/pinussilvestrus/vs-code-bpmn-io).
You can directly download it from [VS Code Marketplace](https://marketplace.visualstudio.com/items?itemName=bpmn-io.vs-code-bpmn-io) or [follow the setup instructions](https://github.com/pinussilvestrus/vs-code-bpmn-io#development-setup) to get it running locally.
With this extension, it is possible to display a preview of your BPMN 2.0 diagrams. It also refreshes the preview whenever you re-focus it, to pick up changes made to the XML content.

As a next step, we plan to add modeling functionality as well, bringing the powerful BPMN diagram editing capabilities of `bpmn-io` into the extension. This will make it easy to create BPMN process diagrams in Visual Studio Code without writing a single line of XML.

Feel free to [try out the extension](https://github.com/pinussilvestrus/vs-code-bpmn-io) and leave some feedback or a star for our work!
| pinussilvestrus |
180,525 | Analysis of ECS 236th meeting abstracts(2) - word embedding by Word2Vec and SCDV | Introduction This is an serial article about language analysis of ECS 236th meeting abstra... | 0 | 2019-10-07T02:58:58 | https://dev.to/konkon3249/analysis-of-ecs-236th-meeting-abstracts-2-word-embedding-by-word2vec-and-scdv-23i9 | python, nlp, scdv, scientific | # Introduction
This is one of a series of articles about language analysis of the ECS 236th meeting abstracts.
In this series, I've been explaining the techniques used in my webapp ECS Meeting Explorer. An introduction to this app is available in the article below,
[ECS Meeting Explorer - webapp for scientific conference](https://dev.to/konkon3249/ecs-meeting-explorer-webapp-for-scientific-conference-4h7e)
My previous article about data scraping is at the following link,
[Analysis of ECS 236th meeting abstracts(1) - data scraping with BeautifulSoup4](https://dev.to/konkon3249/analysis-of-ecs-236th-meeting-abstracts-1-data-scraping-with-beautifulsoup4-ma8)
In this article, I will explain word embedding: the vectorization of the words used in all abstract texts.
# Preparation
Throughout this series, I will use Python. Please install these libraries:
numpy > 1.14.5
pandas > 0.23.1
matplotlib > 2.2.2
beautifulsoup4 > 4.6.0
gensim > 3.4.0
scikit-learn > 0.19.1
scipy > 1.1.0
Before the analysis, please download all ECS 236th meeting abstracts from the official site. Unzip them and place them in the same directory as the jupyter-notebook.
Data scraping with BeautifulSoup4 was explained in my previous article, so please check it first!
# Word embedding by Word2Vec
Word2Vec (W2V) is a machine learning model used to produce word embeddings, which map words into a vector space.
Word2Vec is a kind of unsupervised learning, so we don't have to label the training data. That is precious to me, because labeling is always hard work.
In this experiment, we use the Word2Vec implementation in Gensim, so we don't have to build the model ourselves. Further information about Word2Vec is below,
[models.word2vec – Word2vec embeddings(Gensim documentation)](https://radimrehurek.com/gensim/models/word2vec.html)
[Word2vec Tutorial | RARE Technologies](https://rare-technologies.com/word2vec-tutorial/)
The original paper of word2vec.
[Distributed Representations of Words and Phrases and their Compositionality](https://arxiv.org/abs/1310.4546)
Now, we have a list containing the details of every abstract (title, authors, affiliations, session name, and contents), as follows,
```python
> dic_all
[{'num': '0001',
'title': 'The Impact of Coal Mineral Matter (alumina and silica) on Carbon Electrooxidation in the Direct Carbon Fuel Cell',
'author': ['Simin Moradmanda', 'Jessica A Allena', 'Scott W Donnea'],
'affiliation': 'University of Newcastle',
'session': 'A01',
'session_name': 'Battery and Energy Technology Joint General Session',
'contents': 'Direct carbon fuel cell DCFC as an electrochemical device...',
'mod_contents': ['direct','carbon','fuel','cell', ... ,'melting'],
'vector': 0,
'url': '1.html'}, ... ]
```
Then, let's get the lists of words preprocessed for language analysis.
```python
# make word list for W2V learning
docs = [i['mod_contents'] for i in dic_all]
```
This is the code for training the Word2Vec model. Only a few lines!
```python
#Word2Vec model learning and save it.
from gensim.models.word2vec import Word2Vec
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
model = Word2Vec(docs, sg=1, size=200, window=5, min_count=30, workers=4, sample=1e-6, negative=5, iter=1000)
print('corpus = ',model.corpus_count)
```
Line 5, 'model = Word2Vec(docs, ...)', performs the Word2Vec training. The parameter '_size_' sets the dimension of the word vectors, in this case 200. Please see the [documentation](https://radimrehurek.com/gensim/models/word2vec.html#gensim.models.word2vec.Word2Vec.trainables) for the other parameters of this function.
After training, build a vocabulary and the word vectors from the Word2Vec model.
The vocabulary is saved as an .npy file in the same directory.
```python
import numpy as np

# make word dictionary
vocab = [i for i in model.wv.vocab]
dictionary = {}
for n, j in enumerate(vocab):
    dictionary[j] = n
np.save('dictionary.npy', np.array([dictionary]))

# make word vectors from the model
word_vectors = [model.wv[i] for i in model.wv.vocab]
word_vectors = np.array(word_vectors)
```
Now we have the word list and the corresponding vectors.
In this vector space, similarity between words is expressed as a distance. We usually use [cosine distance](https://en.wikipedia.org/wiki/Cosine_similarity) for such high-dimensional vector spaces.
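Cosine similarity itself is a one-liner with numpy. For two toy 2-dimensional vectors (the numbers here are made up for illustration):

```python
import numpy as np

def cos_sim(v1, v2):
    # cosine similarity: dot product divided by the product of the norms;
    # 1.0 means the vectors point in exactly the same direction
    return np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))

a = np.array([1.0, 0.0])
b = np.array([1.0, 1.0])
print(cos_sim(a, a))            # 1.0 (identical direction)
print(round(cos_sim(a, b), 4))  # 0.7071 (45 degrees apart)
```

Word vectors whose cosine similarity is close to 1.0 are the ones treated as "similar" in what follows.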
The function for calculating word similarity is below,
```python
import numpy as np
import pandas as pd

def cos_sim(v1, v2):
    # cosine similarity between two vectors
    return np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))

def CalcSim(target, vectors, dictionary):
    target_vec = vectors[dictionary[target]]
    search_results = []
    for n, vector in enumerate(vectors):
        sim = cos_sim(target_vec, vector)
        result = {'num': n, 'value': list(dictionary.keys())[n], 'similarity': sim}
        search_results.append(result)
    summary_pd = pd.io.json.json_normalize(search_results)
    summary_sorted = summary_pd.sort_values('similarity', ascending=False)
    return summary_sorted
```
Okay, let's search for the words similar to '_sustainable_', a recent buzzword.
```python
target='sustainable'
summary_sorted = CalcSim(target, word_vectors, dictionary)
summary_sorted[:10]
```
As the result, the top 10 words look like this:
| num | similarity | value |
|------|------------|--------------|
| 588 | 1 | sustainable |
| 105 | 0.648442 | renewable |
| 100 | 0.552662 | energy |
| 1625 | 0.54727 | fuels |
| 862 | 0.541807 | efficient |
| 1624 | 0.53353 | fossil |
| 13 | 0.521877 | electricity |
| 607 | 0.480525 | technologies |
| 138 | 0.472065 | production |
| 108 | 0.471985 | wind |
The word most similar to '_sustainable_' is '_renewable_'.
It's a satisfactory result, isn't it?
# 2-dimensional visualization of word vectors
As I mentioned, the size of the word vectors is 200.
It is impossible for human beings to imagine such high-dimensional data, so dimension reduction is needed for visualization.
In this case, we will use Principal Component Analysis (PCA) to go from 200 to 100 dimensions, and t-distributed Stochastic Neighbor Embedding (t-SNE) to go from 100 to 2. Both methods are implemented in scikit-learn.
The function for dimension reduction is this,
```python
from sklearn.decomposition import IncrementalPCA
from sklearn.manifold import TSNE
from tqdm import tqdm

def tsne_reduction(dataset):
    n = dataset.shape[0]
    batch_size = 500

    # PCA: 200 -> 100 dimensions, fitted incrementally in batches
    ipca = IncrementalPCA(n_components=100)
    for i in tqdm(range(n // batch_size)):
        ipca.partial_fit(dataset[i * batch_size:(i + 1) * batch_size])
    r_dataset = ipca.transform(dataset)

    # t-SNE: 100 -> 2 dimensions
    r_tsne = TSNE(n_components=2, random_state=0, perplexity=50.0, n_iter=3000).fit_transform(r_dataset)
    return r_tsne

w2v_tsne = tsne_reduction(word_vectors)
```
Now, we can plot 2-dimensional word vectors.

The left panel shows the scatter plot of all word vectors; the right panel shows some highlighted points with their corresponding words.
# Precomputation of word-topic vectors by SCDV
We could estimate the document vector of an abstract by averaging these word vectors with certain weights (such as tf-idf). But in this case, I will apply a method named SCDV (Sparse Composite Document Vectors) to modify the word vectors.
There are 2 steps for SCDV to build a document vector.
* Precomputation of word-topic vectors.
* Building sparse document vectors using the word-topic vectors.
In this section, I will explain the former process.

This is a flow chart of computing word-topic vectors (image from [here](https://dheeraj7596.github.io/SDV/)). It is divided into 3 steps.
1. Word vectors are classified into several clusters with a soft clustering algorithm, which allows each word to belong to every cluster with a certain probability.
2. Word-cluster vectors are made by multiplying each word vector by its probability of belonging to each cluster.
3. All word-cluster vectors are concatenated, with idf (inverse document frequency) weighting, to form the word-topic vector.
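As a toy illustration of steps 2 and 3 (all numbers are made up: one 3-dimensional word vector, 2 clusters, and an arbitrary idf weight):

```python
import numpy as np

word_vec = np.array([1.0, 2.0, 3.0])   # a single word vector (D = 3)
cluster_proba = np.array([0.9, 0.1])   # soft-cluster membership probabilities (K = 2)
idf = 1.5                              # inverse document frequency weight

# One probability-weighted copy of the word vector per cluster,
# concatenated into a single K*D-dimensional word-topic vector.
word_topic = idf * np.concatenate([p * word_vec for p in cluster_proba])
# six numbers: 1.35, 2.7, 4.05, 0.15, 0.3, 0.45
```

This mirrors what the function below does for every word at once, with K = 30 clusters and D = 200.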
This is a function to transform word vectors to word-topic vectors.
```python
import numpy as np
from sklearn.mixture import GaussianMixture
from tqdm import tqdm

def WordTopicVectors(word_vectors):
    # Gaussian mixture modelling (soft clustering of the word vectors)
    num_clusters = 30
    clf = GaussianMixture(n_components=num_clusters, covariance_type="full")
    clf.fit(word_vectors)
    idx_proba = clf.predict_proba(word_vectors)

    # Calculate word idf ('docs' and 'dictionary' come from the earlier steps)
    words = np.array(list(dictionary.keys()))
    word_idf = np.zeros_like(words, dtype=np.uint32)
    for doc in tqdm(docs):
        lim = len(doc)
        for w in doc:
            if lim == 0:
                break
            else:
                idx = np.where(w == words)
                word_idf[idx] += 1
                lim -= 1
    word_idf = np.log(len(docs) / word_idf) + 1

    # Concatenate word-cluster vectors, weighted by cluster probability and idf
    gmm_word_vectors = np.empty((word_vectors.shape[0], word_vectors.shape[1] * num_clusters))
    n = 0
    for vector, proba, idf in zip(word_vectors, idx_proba, word_idf):
        for m, p in enumerate(proba):
            if m == 0:
                cluster_vector = vector * p
            else:
                cluster_vector = np.hstack((cluster_vector, vector * p))
        gmm_word_vectors[n] = idf * cluster_vector
        n += 1
    return gmm_word_vectors

# Calculate word-topic vectors
gmm_word_vectors = WordTopicVectors(word_vectors)
```
In this function, we used a Gaussian mixture model for the clustering. The number of clusters is recommended to be 60 or higher in the original paper, but for now I chose 30 (because of a memory issue in the webapp).
The dimension of the word-topic vectors will be 200 (original word vector) × 30 (number of clusters) = 6000.
Then, visualize it with t-SNE dimension reduction!

Compared to the word vectors from Word2Vec, the clusters of words are separated clearly.
This means that these vectors represent the relationship between words and topics well.
Let's see the details of each cluster and corresponding words.

This figure clearly shows that words on the same topic belong to the same cluster.
# Conclusion
In this article, I demonstrated word embedding by Word2Vec and its modification by SCDV.
Next, I will explain how to build document vectors with these word-topic vectors!
| konkon3249 |
180,558 | Releasing a high-performance, lightweight, non-blocking and event-loop networking library written in pure Go | Github Page: https://github.com/panjf2000/gnet gnet is an Event-Loop network... | 0 | 2019-09-30T16:40:02 | https://dev.to/panjf2000/releasing-a-high-performance-lightweight-non-blocking-and-event-loop-networking-library-written-in-pure-go-4beb | go, networking | <p align="center">
<img src="https://raw.githubusercontent.com/panjf2000/gnet/master/logo.png" alt="gnet">
<br />
<a title="Build Status" target="_blank" href="https://travis-ci.com/panjf2000/gnet"><img src="https://img.shields.io/travis/com/panjf2000/gnet?style=flat-square"></a>
<a title="Codecov" target="_blank" href="https://codecov.io/gh/panjf2000/gnet"><img src="https://img.shields.io/codecov/c/github/panjf2000/gnet?style=flat-square"></a>
<a title="Go Report Card" target="_blank" href="https://goreportcard.com/report/github.com/panjf2000/gnet"><img src="https://goreportcard.com/badge/github.com/panjf2000/gnet?style=flat-square"></a>
<br/>
<a title="" target="_blank" href="https://golangci.com/r/github.com/panjf2000/gnet"><img src="https://golangci.com/badges/github.com/panjf2000/gnet.svg"></a>
<a title="Doc for gnet" target="_blank" href="https://gowalker.org/github.com/panjf2000/gnet?lang=en-US"><img src="https://img.shields.io/badge/api-reference-blue.svg?style=flat-square"></a>
<a title="Release" target="_blank" href="https://github.com/panjf2000/gnet/releases"><img src="https://img.shields.io/github/release/panjf2000/gnet.svg?style=flat-square"></a>
<a title="Mentioned in Awesome Go" target="_blank" href="https://github.com/avelino/awesome-go"><img src="https://awesome.re/mentioned-badge-flat.svg"></a>
</p>
# Github Page:
https://github.com/panjf2000/gnet
`gnet` is an Event-Loop networking framework that is fast and small. It makes direct [epoll](https://en.wikipedia.org/wiki/Epoll) and [kqueue](https://en.wikipedia.org/wiki/Kqueue) syscalls rather than using the standard Go [net](https://golang.org/pkg/net/) package, and works in a similar manner to [libuv](https://github.com/libuv/libuv) and [libevent](https://github.com/libevent/libevent).
The goal of this project is to create a server framework for Go that performs on par with [Redis](http://redis.io) and [Haproxy](http://www.haproxy.org) for packet handling.
`gnet` sells itself as a high-performance, lightweight, non-blocking network library written in pure Go which works at the transport layer with TCP/UDP/Unix-socket protocols, so it allows developers to implement their own application-layer protocols on top of `gnet` to build diversified network applications. For instance, you get an HTTP server or web framework if you implement the HTTP protocol upon `gnet`, a Redis server if you implement the Redis protocol upon `gnet`, and so on.
**`gnet` derives from project `evio` while having higher performance.**
# Features
- [High-performance](#Performance) Event-Loop under multi-threads/goroutines model
- Built-in load balancing algorithm: Round-Robin
- Concise APIs
- Efficient memory usage: Ring-Buffer
- Supporting multiple protocols: TCP, UDP, and Unix Sockets
- Supporting two event-notification mechanisms: epoll in Linux and kqueue in FreeBSD
- Supporting asynchronous write operation
- Flexible ticker event
- SO_REUSEPORT socket option
# Key Designs
## Multiple-Threads/Goroutines Model
### Multiple Reactors Model
`gnet` redesigns and implements a new built-in multiple-threads/goroutines model: 『Multiple Reactors』 which is also the default multiple-threads model of `netty`, Here's the schematic diagram:
<p align="center">
<img width="820" alt="multi_reactor" src="https://user-images.githubusercontent.com/7496278/64916634-8f038080-d7b3-11e9-82c8-f77e9791df86.png">
</p>
and it works as the following sequence diagram:
<p align="center">
<img width="869" alt="reactor" src="https://user-images.githubusercontent.com/7496278/64918644-a5213900-d7d3-11e9-88d6-1ec1ec72c1cd.png">
</p>
### Multiple Reactors + Goroutine-Pool Model
You may ask me a question: what if my business logic in `EventHandler.React` contains some blocking code which blocks the event-loop of `gnet`? What is the solution for this kind of situation?
As you know, there is one all-important tenet when writing code under `gnet`: you should never block the event-loop in `EventHandler.React`, otherwise it will lead to low throughput in your `gnet` server, which is also the most important tenet in `netty`.
And the solution for that is found in the subsequent multiple-threads/goroutines model of `gnet`: 『Multiple Reactors with thread/goroutine pool』, which pulls you out of the blocking mire. It constructs a worker pool with fixed capacity and puts the blocking jobs from `EventHandler.React` into the worker pool to keep the event-loop goroutines unblocked.
This new networking model is under development and about to be delivered soon; the architecture diagram of the new model is here:
<p align="center">
<img width="854" alt="multi_reactor_thread_pool" src="https://user-images.githubusercontent.com/7496278/64918783-90de3b80-d7d5-11e9-9190-ff8277c95db1.png">
</p>
and it works as the following sequence diagram:
<p align="center">
<img width="916" alt="multi-reactors" src="https://user-images.githubusercontent.com/7496278/64918646-a7839300-d7d3-11e9-804a-d021ddd23ca3.png">
</p>
Before this new networking model lands to handle blocking business logic for you, there is still a way to do it yourself: you can utilize an open-source goroutine pool to unblock your blocking code. For that, I present you [ants](https://github.com/panjf2000/ants): a high-performance goroutine pool in Go that allows you to manage and recycle a massive number of goroutines in your concurrent programs.
You can import `ants` into your `gnet` server and submit your blocking code to the `ants` pool inside `EventHandler.React`, which makes your business code non-blocking.
## Auto-scaling Ring Buffer
`gnet` utilizes Ring-Buffer to cache TCP streams and manage memory cache in networking.
<p align="center">
<img src="https://user-images.githubusercontent.com/7496278/64916810-4f8b6300-d7b8-11e9-9459-5517760da738.gif">
</p>
# Getting Started
## Installation
```sh
$ go get -u github.com/panjf2000/gnet
```
## Usage
It is easy to create a network server with `gnet`. All you have to do is make your implementation of the `gnet.EventHandler` interface and register your event-handler functions to it, then pass it to the `gnet.Serve` function along with the binding address(es). Each connection is represented as a `gnet.Conn` interface that is passed to the various events to differentiate the clients. At any point you can close a client or shut down the server by returning a `Close` or `Shutdown` action from an event.
The simplest example to get you started playing with `gnet` would be the echo server. So here you are, a simplest echo server upon `gnet` that is listening on port 9000:
### Echo server without blocking logic
```go
package main
import (
"log"
"github.com/panjf2000/gnet"
)
type echoServer struct {
*gnet.EventServer
}
func (es *echoServer) React(c gnet.Conn) (out []byte, action gnet.Action) {
top, tail := c.ReadPair()
out = top
if tail != nil {
out = append(top, tail...)
}
c.ResetBuffer()
return
}
func main() {
echo := new(echoServer)
log.Fatal(gnet.Serve(echo, "tcp://:9000", gnet.WithMulticore(true)))
}
```
As you can see, this echo server example only sets up the `EventHandler.React` function, where you commonly write your main business code; it is invoked once the server receives input data from a client. The output data is then sent back to the client by assigning it to the `out` variable and returning it after your business code finishes processing the data (in this case, it just echoes the data back).
### Echo server with blocking logic
```go
package main
import (
"log"
"time"
"github.com/panjf2000/ants/v2"
"github.com/panjf2000/gnet"
)
type echoServer struct {
*gnet.EventServer
pool *ants.Pool
}
func (es *echoServer) React(c gnet.Conn) (out []byte, action gnet.Action) {
data := c.ReadBytes()
c.ResetBuffer()
// Use ants pool to unblock the event-loop.
_ = es.pool.Submit(func() {
time.Sleep(1 * time.Second)
c.AsyncWrite(data)
})
action = gnet.DataRead
return
}
func main() {
// Create a goroutine pool.
poolSize := 64 * 1024
pool, _ := ants.NewPool(poolSize, ants.WithNonblocking(true))
defer pool.Release()
echo := &echoServer{pool: pool}
log.Fatal(gnet.Serve(echo, "tcp://:9000", gnet.WithMulticore(true)))
}
```
As I said in the 『Multiple Reactors + Goroutine-Pool Model』 section, if there is blocking code in your business logic, you ought to turn it into non-blocking code one way or another. For instance, you could wrap each blocking call in its own goroutine, but that results in a massive number of goroutines if massive traffic is passing through your server, so I suggest you utilize a goroutine pool like `ants` to manage those goroutines and reduce the cost of system resources.
### I/O Events
Current supported I/O events in `gnet`:
- `EventHandler.OnInitComplete` is activated when the server is ready to accept new connections.
- `EventHandler.OnOpened` is activated when a connection has opened.
- `EventHandler.OnClosed` is activated when a connection has closed.
- `EventHandler.React` is activated when the server receives new data from a connection.
- `EventHandler.Tick` is activated immediately after the server starts and will fire again after a specified interval.
- `EventHandler.PreWrite` is activated just before any data is written to any client socket.
### Ticker
The `EventHandler.Tick` event fires ticks at a specified interval.
The first tick fires immediately after the `Serving` event, and if you intend to set up a ticker, remember to pass the option `gnet.WithTicker(true)` to `gnet.Serve`.
```go
func (es *echoServer) Tick() (delay time.Duration, action gnet.Action) {
	log.Printf("tick")
	delay = time.Second
	return
}
```
## UDP
The `gnet.Serve` function can bind to UDP addresses.
- All incoming and outgoing packets will not be buffered but sent individually.
- The `EventHandler.OnOpened` and `EventHandler.OnClosed` events are not available for UDP sockets, only the `React` event.
## Multi-threads
The `gnet.WithMulticore(true)` option indicates whether the server will be created with multiple event-loops utilizing multiple cores. If so, you must take care to synchronize memory between all event callbacks; otherwise, the server runs on a single thread. The number of threads in the server is automatically set to the value of `runtime.NumCPU()`.
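Since multiple event-loops run in parallel when multicore is enabled, any state shared between callbacks must be synchronized. Here is a minimal, gnet-independent illustration of that requirement using only the standard library (the function name `countHits` is hypothetical):

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
	"sync/atomic"
)

// countHits increments a shared counter from `workers` goroutines running in
// parallel. A plain hits++ here would be a data race; state shared between
// parallel callbacks needs atomics, mutexes, or channels.
func countHits(workers, perWorker int) int64 {
	var hits int64
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < perWorker; j++ {
				atomic.AddInt64(&hits, 1)
			}
		}()
	}
	wg.Wait()
	return hits
}

func main() {
	// Mirror gnet's default worker count of runtime.NumCPU().
	workers := runtime.NumCPU()
	fmt.Println(countHits(workers, 1000) == int64(workers*1000)) // true
}
```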
## Load balancing
The current built-in load balancing algorithm in `gnet` is Round-Robin.
## SO_REUSEPORT
Servers can utilize the [SO_REUSEPORT](https://lwn.net/Articles/542629/) option, which allows multiple sockets on the same host to bind to the same port; the OS kernel takes care of the load balancing for you, waking one socket per `accept` event, which resolves the `thundering herd` problem.
Just use functional options to set up `SO_REUSEPORT` and you can enjoy this feature:
```go
gnet.Serve(events, "tcp://:9000", gnet.WithMulticore(true), gnet.WithReusePort(true))
```
# Performance
## Comparison with similar networking libraries
## On Linux (epoll)
### Test Environment
```powershell
# Machine information
OS : Ubuntu 18.04/x86_64
CPU : 8 Virtual CPUs
Memory : 16.0 GiB
# Go version and configurations
Go Version : go1.12.9 linux/amd64
GOMAXPROCS=8
```
#### Echo Server

#### HTTP Server

## On FreeBSD (kqueue)
### Test Environment
```powershell
# Machine information
OS : macOS Mojave 10.14.6/x86_64
CPU : 4 CPUs
Memory : 8.0 GiB
# Go version and configurations
Go Version : go version go1.12.9 darwin/amd64
GOMAXPROCS=4
```
#### Echo Server

#### HTTP Server

# License
Source code in `gnet` is available under the MIT [License](/LICENSE).
# Thanks
- [evio](https://github.com/tidwall/evio)
- [netty](https://github.com/netty/netty)
- [ants](https://github.com/panjf2000/ants)
# Relevant Articles
- [A Million WebSockets and Go](https://www.freecodecamp.org/news/million-websockets-and-go-cc58418460bb/)
- [Going Infinite, handling 1M websockets connections in Go](https://speakerdeck.com/eranyanay/going-infinite-handling-1m-websockets-connections-in-go)
- [gnet: 一个轻量级且高性能的 Golang 网络库](https://taohuawu.club/go-event-loop-networking-library-gnet)
| panjf2000 |
180,681 | Parte 1 -> Orientación | Conceptos Docker es una plataforma para desarrolladores y administradores de sistemas para desarroll... | 2,489 | 2019-10-01T17:49:33 | https://dev.to/gelopfalcon/parte-1-orientacion-3hlm | docker, espanol, devops, containers |
<h2>Concepts</h2>
Docker is a platform for developers and system administrators to develop, deploy, and run applications with containers. Using Linux containers to deploy applications is called `containerization`. Containers are not new, but their use for easily deploying applications is.
`Containerization` is increasingly popular because containers are:
- Flexible: even the most complex applications can be containerized.
- Lightweight: containers leverage and share the host kernel.
- Interchangeable: you can deploy updates and upgrades on the fly.
- Portable: you can build locally, deploy to the cloud, and run anywhere.
- Scalable: you can automatically increase and distribute container replicas.
- Stackable: you can stack services vertically and on the fly.
<h3>Images and containers</h3>
A container is launched by running an image. An image is an executable package that includes everything needed to run an application: the code, a runtime, libraries, environment variables, and configuration files.
A container is a runtime instance of an image: what the image becomes in memory when executed (that is, an image with state, or a user process). You can see a list of your running containers with the `docker ps` command, just as you would in Linux.
<h2>Containers and virtual machines</h2>
A container runs natively on Linux and shares the host machine's kernel with other containers. It runs a discrete process, taking no more memory than any other executable, which makes it lightweight.
By contrast, a virtual machine (VM) runs a full "guest" operating system with virtual access to host resources through a hypervisor. In general, VMs provide an environment with more resources than most applications need.

<h2>Prepare your working environment</h2>
Install a maintained version of Docker Community Edition (CE) or Enterprise Edition (EE) on a supported platform.
> For full Kubernetes integration
> - Kubernetes on Docker Desktop for Mac is available in 17.12 Edge (mac45) or 17.12 Stable (mac46) and later versions.
> - Kubernetes on Docker Desktop for Windows is available in 18.06.0 CE (win70) and later.
[Install Docker](https://docs.docker.com/install/)
<h3>Test Docker version</h3>
1. Run `docker --version` and make sure you have a supported version of Docker:
`Docker version 17.12.0-ce, build c97c6d6`
2. Run `docker info` to see even more details about your Docker installation:
```
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 17.12.0-ce
Storage Driver: overlay2
...
```
<h3>Test the Docker installation</h3>
1. Verify that your installation works by running the simple Docker image, hello-world:
```
docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
ca4f61b1923c: Pull complete
Digest: sha256:ca0eeb6fb05351dfc8759c20733c91def84cb8007aa89a5bf606bc8b315b9fc7
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation appears to be working correctly.
...
```
2. List the hello-world image that was downloaded to your machine:
```docker image ls```
3. List the hello-world container (spawned by the image), which exits after displaying its message. If it were still running, you would not need the --all option:
```
docker container ls --all
CONTAINER ID IMAGE COMMAND CREATED STATUS
54f4984ed6a8 hello-world "/hello" 20 seconds ago Exited (0) 19 seconds ago
```
<h2>Summary and cheat sheet</h2>
```
## List Docker CLI commands
docker
docker container --help
## Display Docker version and info
docker --version
docker version
docker info
## Execute Docker image
docker run hello-world
## List Docker images
docker image ls
## List Docker containers (running, all, all in quiet mode)
docker container ls
docker container ls --all
docker container ls -aq
```
<h2>Conclusion</h2>
With `containerization`, CI/CD benefits in the following ways:
- Applications have no system dependencies.
- Updates can be pushed to any part of a distributed application.
- Resource density can be optimized.
- With Docker, scaling your application is a matter of spinning up new executables, not running heavyweight VM hosts. | gelopfalcon |
180,803 | Null-Safety vs Maybe/Option - A Thorough Comparison (Part 1/2) | An in-depth and practical comparison between Null-safety and the Maybe/Option type, both used to discard the infamous null pointer error. | 0 | 2019-10-02T12:15:53 | http://www.practical-programming.org/blog/null-safety-vs-maybe-option/index.html | null, maybe, option | ---
title: Null-Safety vs Maybe/Option - A Thorough Comparison (Part 1/2)
published: true
description: An in-depth and practical comparison between Null-safety and the Maybe/Option type, both used to discard the infamous null pointer error.
tags: null, maybe, option
cover_image: https://thepracticaldev.s3.amazonaws.com/i/mea3kjfqq02eco8m32ku.jpg
canonical_url: http://www.practical-programming.org/blog/null-safety-vs-maybe-option/index.html
---
Introduction
============
There are two effective approaches to eliminate the daunting null pointer error:
- The Maybe/Option pattern - mostly used in functional programming languages.
- Compile-time null-safety - used in some modern programming languages.
This article aims to answer the following questions:
- How does it work? How do these two approaches eliminate the null pointer error?
- How are they used in practice?
- How do they differ?
Notes:
- Readers not familiar with the concept of `null` might want to read first: <a href="http://www.practical-programming.org/blog/meaning-of-null/index.html" class="pml-link">A quick and thorough guide to 'null'</a>.
- For an introduction to `Maybe / Option` I recommend: <a href="https://fsharpforfunandprofit.com/posts/the-option-type/" class="pml-link">F#: The Option type</a>. You can also search the net for "haskell maybe" or "f\# option".
Why Should We Care?
===================
> "I call it my billion-dollar mistake. It was the invention of the null reference in 1965. ... This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years. ..."
>
> -- Tony Hoare
In the context of Java, Professor John Sargeant from the Manchester school of computer science puts it like <a href="http://www.cs.man.ac.uk/~johns/npe.html" class="pml-link">this</a>:
> "Of the things which can go wrong at runtime in Java programs, null pointer exceptions are by far the most common."
>
> -- John Sargeant
We can easily deduce:
> "By eliminating the infamous null pointer error, we **eliminate one of the most frequent reasons for software failures**."
That's a big deal!
We should care about it.
Three Approaches
================
Besides showing the *reason* for the null pointer error, this article also aims to demonstrate how the null pointer error can be *eliminated*.
We will therefore compare three different approaches:
- **The language uses `null`, but doesn't provide null-safety.**
In these languages null pointer errors occur frequently.
Most popular languages fall into this category. For example: C, C++, Java, Javascript, PHP, Python, Ruby, Visual Basic.
- **The language doesn't support `null`, but uses `Maybe` (also called `Option` or `Optional`) to represent the 'absence of a value'**.
As `null` is not supported, there are no null pointer errors.
This approach is mostly used in some functional programming languages. But it can as well be used in non-functional languages.
At the time of writing, the most prominent languages using this approach are probably Haskell, F\#, and Swift.
- **The language uses null and provides compile-time-null-safety.**
Null pointer errors cannot occur.
Some modern languages support this approach.
Source Code Examples
====================
In this chapter we'll look at some source code examples of common use cases involving 'the absence of a value'. We will compare the code written in the three following languages representing the three approaches mentioned in the previous chapter:
- **Java (supports null, but not null-safe)**
<a href="https://en.wikipedia.org/wiki/Java_(programming_language)" class="pml-link">Java</a> is one of the industry's leading languages, and one of the most successful ones in the history of programming languages. But it isn't null-safe. Hence, it is well suited to demonstrate the problem of the null pointer error.
- **Haskell (Maybe type)**
<a href="https://en.wikipedia.org/wiki/Haskell_(programming_language)" class="pml-link">Haskell</a> is the most famous one in the category of pure functional languages. It doesn't support `null`. Instead it uses the `Maybe` monad to represent the 'absence of a value'.
Note: I am by no means a Haskell expert. If you see any mistake or need for improvement in the following examples, then please leave a comment so that the article can be updated.
- **PPL (supports null and is null-safe)**
The <a href="http://www.practical-programming.org/" class="pml-link">Practical Programming Language</a> (PPL) supports `null` and has been designed with full support for compile-time-null-safety from the ground up. However, be warned! PPL is just a *work in progress*, not ready yet to write mission-critical enterprise applications. I use it in this article because (full disclosure!) I am the creator of PPL, and I want to initiate some interest for it. I hope you don't mind - after reading this article.
All source code examples are available on <a href="https://github.com/pp-articles/null_vs_maybe/tree/master/examples/" class="pml-link">Github</a>. The Github source code files contain alternative solutions for some examples, not shown in this article.
Null-Safety
-----------
How does null-safety work in practice? Let's see.
### Null Not Allowed
We start with an example of code where `null` is *not allowed*.
Say we want to write a very simple function that takes a positive integer and returns a string. Neither the input nor the output can be `null`. If the input value is 1, we return "one". If it is not 1, we return "not one". What does the code look like in the three languages? And, more importantly, how safe is it?
#### Java
This is the function written in Java:
static String intToString ( Integer i ) {
if ( i == 1 ) {
return "one";
} else {
return "not one";
}
}
We can use the ternary operator and shorten the code a bit:
static String intToString ( Integer i ) {
return i == 1 ? "one" : "not one";
}
Note: I am using type `Integer`, which is a *reference* type. I am not using type `int`, which is a *value* type. The reason is that `null` works only with reference types.
To test the code, we can write a simple Java application like this:
public class NullNotAllowedTest {
static String intToString ( Integer i ) {
return i == 1 ? "one" : "not one";
}
public static void main ( String[] args ) {
System.out.println ( intToString ( 1 ) );
System.out.println ( intToString ( 2 ) );
}
}
If you want to try out this code you can use an online Java Executor like <a href="https://www.tutorialspoint.com/compile_java_online.php" class="pml-link">this one</a>. Just copy/paste the above code in the `Source File` tab, and click `Execute`. It looks like this:

If you have Java installed on your system, you can also proceed like this:
- Save the above code in file `NullNotAllowedTest.java`.
- Compile and run it by typing the following two commands in a terminal:
javac NullNotAllowedTest.java
java NullNotAllowedTest
The output written to the OS out device is:
one
not one
So far so good.
#### Haskell
In Haskell, there are a few ways to write the function. For example:
intToString :: Integer -> String
intToString i = case i of
1 -> "one"
_ -> "not one"
Note: The first line in the above code could be omitted, because Haskell supports type inference for function arguments. However, it's considered <a href="https://wiki.haskell.org/Type_signatures_as_good_style" class="pml-link">good style</a> to include the type signature, because it makes the code more readable. Hence, we will always include the type signature in the upcoming Haskell examples.
The above code uses pattern matching, which is the idiomatic way to write code in Haskell.
We can write a simple Haskell application to test the code:
intToString :: Integer -> String
intToString i = case i of
1 -> "one"
_ -> "not one"
main :: IO ()
main = do
putStrLn $ intToString 1
putStrLn $ intToString 2
As for Java, you can use an <a href="https://www.tutorialspoint.com/compile_haskell_online.php" class="pml-link">online Haskell executor</a> to try out the code. Here is a screenshot:

Alternatively, if Haskell is installed on your system, you can save the above code in file `NothingNotAllowedTest.hs`. Then you can compile and run it with these two commands:
ghc -o NothingNotAllowedTest NothingNotAllowedTest.hs
NothingNotAllowedTest.exe
The output is the same as in the Java version:
one
not one
#### PPL
In PPL the function can be written like this:
function int_to_string ( i pos_32 ) -> string
if i =v 1 then
return "one"
else
return "not one"
.
.
Note: The comparison operator `=v` in the above code is suffixed with a `v` to make it clear we are comparing **v**alues. If we wanted to compare references, we would use operator `=r`.
We can shorten the code by using an if-then-else *expression* (instead of an if-then-else *statement*):
function int_to_string ( i pos_32 ) -> string = \
if i =v 1 then "one" else "not one"
A simple PPL application to test the code looks like this:
function int_to_string ( i pos_32 ) -> string = \
if i =v 1 then "one" else "not one"
function start
write_line ( int_to_string ( 1 ) )
write_line ( int_to_string ( 2 ) )
.
At the time of writing there is no online PPL executor available. To try out code you have to <a href="http://www.practical-programming.org/ppl/downloads/install_PPL.html" class="pml-link">install PPL</a> and then proceed like this:
- Save the above code in file `null_not_allowed_test.ppl`
- Compile and run the code in a terminal by typing:
ppl null_not_allowed_test.ppl
Again, the output is:
one
not one
#### Discussion
As we have seen (and expected), the three languages allow us to write 'code that works correctly'. Here is a reprint of the three versions, so that you can easily compare the three versions:
- Java
static String intToString ( Integer i ) {
return i == 1 ? "one" : "not one";
}
- Haskell
intToString :: Integer -> String
intToString i = case i of
1 -> "one"
_ -> "not one"
- PPL
function int_to_string ( i pos_32 ) -> string = \
if i =v 1 then "one" else "not one"
A pivotal question remains unanswered:
> "What happens in case of a bug in the source code?"
>
> -- The Crucial Question
In the context of this article we want to know: What happens if the function is called with `null` as input? And what if the function returns `null`?
This question is easy to answer in the Haskell world. `null` doesn't exist in Haskell. Haskell uses the `Maybe` monad to represent the 'absence of a value'. We will soon see how this works. Hence, in Haskell it is not possible to call `intToString` with a `null` as input. And we can't write code that returns `null`.
PPL supports `null`, unlike Haskell. However, all types are *non-null by default*. This is a fundamental rule in all effective null-safe languages. A PPL function with the type signature `pos_32 -> string` states that the function cannot be called with `null` as input, and it cannot return `null`. This is enforced at `compile-time`, so we are on the safe side. Code like `int_to_string ( null )` simply doesn't compile.
> "By default all types are *non-null* in a null-safe language."
>
> "By default it is illegal to assign `null`."
>
> -- The 'non-null by default' rule
What about Java?
Java is not null-safe. Every type is *nullable*, and there is no way to specify a non-null type for a reference. This means that `intToString` can be called with `null` as input. Moreover, nothing prevents us from writing code that returns `null` from `intToString`.
So, what happens if we make a function call like `intToString ( null )`? The program compiles, but the disreputable `NullPointerException` is thrown at run-time:
Exception in thread "main" java.lang.NullPointerException
at NullNotAllowedTest.intToString(NullNotAllowedTest.java:4)
at NullNotAllowedTest.main(NullNotAllowedTest.java:10)
Why? The test `i == 1` triggers auto-unboxing: it is evaluated as `i.intValue() == 1`. But `i` is `null` in our case, and executing a method on a `null` reference is impossible and generates a `NullPointerException`.
This is the well-known reason for the infamous *billion-dollar mistake*.
What if `intToString` accidentally returns `null`, as in the following code:
public class NullNotAllowedTest {
static String intToString ( Integer i ) {
return null;
}
public static void main ( String[] args ) {
System.out.println ( intToString ( 1 ) );
}
}
Again, no compiler error. But a runtime error occurs, right? Wrong, the output is:
null
Why?
The reason is that `System.out.println` has been programmed to write the string `"null"` if it is called with `null` as input. The method signature doesn't show this, but it is clearly stated in the Java API documentation: "If the argument is null then the string 'null' is printed.".
What if instead of printing the string returned by `intToString`, we want to print the string's size (i.e. the number of characters). Let's try it by replacing ...
System.out.println ( intToString ( 1 ) );
... with this:
System.out.println ( intToString ( 1 ).length() );
Now the program doesn't continue silently. A `NullPointerException` is thrown again, because the program tries to execute `length()` on a `null` object.
As we can see from this simple example, the result of misusing `null` is inconsistent.
In the real world, the final outcome of incorrect `null` handling ranges from totally harmless to totally harmful, and is often unpredictable. This is a general, and frustrating property of all programming languages that support `null`, but don't provide `compile-time-null-safety`. Imagine a big application with thousands of functions, most of them much more complex than our simple toy code. None of these functions are implicitly protected against misuses of `null`. It is understandable why `null` and the "billion dollar mistake" have become synonyms for many software developers.
We can of course try to improve the Java code and make it a bit more robust. For example, we could explicitly check for a `null` input in method `intToString` and throw an `IllegalArgumentException`. We could also add a `NonNull` annotation that can be used by some static code analyzers or super-sophisticated IDEs. But all these improvements require manual work, might depend on additional tools and libraries, and don't lead to a satisfactory and reliable solution. Therefore, we will not discuss them. We are not interested in *mitigating* the problem of the null pointer error, we want to *eliminate* it. Completely!
### Null Allowed
Let's slightly change the specification of function `int_to_string`. We want it to accept `null` as input and return:
- `"one"` if the input is 1
- `"not one"` if the input is not 1 and not `null`
- `null` if the input is `null`
How does this affect the code in the three languages?
#### Java
This is the new code written in Java:
static String intToString ( Integer i ) {
if ( i == null ) {
return null;
} else {
return i == 1 ? "one" : "not one";
}
}
We could again use the ternary operator and write more succinct code:
static String intToString ( Integer i ) {
return i == null ? null : i == 1 ? "one" : "not one";
}
Whether to choose the first or the second version is a matter of debate. As a general rule, we should value readability more than terseness of code. So, let's stick with version 1.
The crucial point here is that the function's signature has *not changed*, although the function's specification is now different. Whether the function accepts and returns `null` or not, the signature is the same:
String intToString ( Integer i ) {
This doesn't come as a surprise. As we saw already in the previous example, Java (and other languages without null-safety) doesn't make a difference between nullable and non-nullable types. All types are always nullable. Hence by just looking at a function signature we don't know if the function accepts `null` as input, and we don't know if it might return `null`. The best we can do is to document nullability for each input/output argument. But there is no compile-time protection against misuses.
To check if it works, we can write a simplistic test application:
public class NullAllowedTest {
static String intToString ( Integer i ) {
if ( i == null ) {
return null;
} else {
return i == 1 ? "one" : "not one";
}
}
static void displayResult ( String s ) {
String result = s == null ? "null" : s;
System.out.println ( "Result: " + result );
}
public static void main ( String[] args ) {
displayResult ( intToString ( 1 ) );
displayResult ( intToString ( 2 ) );
displayResult ( intToString ( null ) );
}
}
Output:
Result: one
Result: not one
Result: null
#### Haskell
This is the code in Haskell:
intToString :: Maybe Integer -> Maybe String
intToString i = case i of
Just 1 -> Just "one"
Nothing -> Nothing
_ -> Just "not one"
Key points:
- Haskell doesn't support `null`. It uses the `Maybe` monad.
The Maybe type is defined as follows:
data Maybe a = Just a | Nothing
deriving (Eq, Ord)
The <a href="http://hackage.haskell.org/package/base-4.12.0.0/docs/Data-Maybe.html" class="pml-link">Haskell doc</a> states: "The `Maybe` type encapsulates an optional value. A value of type `Maybe a` either contains a value of type `a` (represented as `Just a`), or it is empty (represented as `Nothing`). The `Maybe` type is also a monad."
Note: More information can be found <a href="https://stackoverflow.com/questions/29456824/what-is-the-maybe-type-and-how-does-it-work" class="pml-link">here</a> and <a href="https://wiki.haskell.org/Maybe" class="pml-link">here</a>. Or you can read about the <a href="https://fsharpforfunandprofit.com/posts/the-option-type/" class="pml-link">Option type in F#</a>.
- The function signature clearly states that calling the function with no integer (i.e. the value `Nothing` in Haskell) is allowed, and the function might or might not return a string.
- For string values the syntax `Just "string"` is used to denote a string, and `Nothing` is used to denote 'the absence of a value'. Analogously, the syntax `Just 1` and `Nothing` is used for integers.
- Haskell uses pattern matching to check for 'the absence of a value' (e.g. `Nothing ->`). The symbol `_` is used to denote 'any other case'. Note that the `_` case includes the `Nothing` case. Hence if we forget the explicit check for `Nothing` there will be no compiler error, and `"not one"` will be returned if the function is called with `Nothing` as input.
Here is a simple test application:
import Data.Maybe (fromMaybe)
intToString :: Maybe Integer -> Maybe String
intToString i = case i of
Just 1 -> Just "one"
Nothing -> Nothing
_ -> Just "not one"
displayResult :: Maybe String -> IO()
displayResult s =
putStrLn $ "Result: " ++ fromMaybe "null" s
main :: IO ()
main = do
displayResult $ intToString (Just 1)
displayResult $ intToString (Just 2)
displayResult $ intToString (Nothing)
Output:
Result: one
Result: not one
Result: null
Note the `fromMaybe "null" s` expression in the above code. In Haskell this is a way to provide a default value in case of `Nothing`. It's conceptually similar to the expression `s == null ? "null" : s` in Java.
#### PPL
In PPL the code looks like this:
function int_to_string ( i pos_32 or null ) -> string or null
case value of i
when null
return null
when 1
return "one"
otherwise
return "not one"
.
.
Note: A case *expression* will be available in a future version of PPL (besides the case *statement* shown above). Then the code can be written more concisely as follows:
function int_to_string ( i pos_32 or null ) -> string or null = \
case value of i
when null: null
when 1 : "one"
otherwise: "not one"
Key points:
- In PPL `null` is a regular type (like `string`, `pos_32`, etc.) that has one possible value: `null`.
It appears as follows in the top of PPL's type hierarchy:

- PPL supports union types (also called sum types, or choice types). For example, if a reference can be a string or a number, the type is `string or number`.
That's why we use the syntax `pos_32 or null` and `string or null` to denote nullable types. The type `string or null` simply means that the value can be any string *or* `null`.
- The function clearly states that it accepts `null` as input, and that it might return `null`.
- We use a `case` instruction to check the input and return an appropriate string. The compiler ensures that each case is covered in the `when` clauses. It is not possible to accidentally forget to check for `null`, because (in contrast to Haskell) the `otherwise` clause doesn't cover the `null` case.
A simple test application looks like this:
```
function int_to_string ( i pos_32 or null ) -> string or null
case value of i
when null
return null
when 1
return "one"
otherwise
return "not one"
.
.
function display_result ( s string or null )
write_line ( """Result: {{s if_null: "null"}}""" )
.
function start
display_result ( int_to_string ( 1 ) )
display_result ( int_to_string ( 2 ) )
display_result ( int_to_string ( null ) )
.
```
Output:
Result: one
Result: not one
Result: null
Note the `"""Result: {{s if_null: "null"}}"""` expression used in function `display_result`. We use string interpolation: an expression embedded between a `{{` and `}}` pair. And we use the `if_null:` operator to provide a string that represents `null`. Writing `s if_null: "null"` is similar to `s == null ? "null" : s` in Java.
If we wanted to print nothing in case of `null`, we could code `"""Result: {{? s}}"""`
#### Discussion
Again, the three languages allow us to write code that works correctly.
But there are some notable differences:
- In Haskell and PPL, the functions clearly state that 'the absence of a value' is allowed (i.e. `Nothing` in Haskell, or `null` in PPL). In Java, there is no way to make a difference between nullable and non-nullable arguments (except via comments or annotations, of course).
- In Haskell and PPL, the compiler ensures we don't forget to check for 'the absence of a value'. Executing an operation on a possibly `Nothing` or `null` value is not allowed. In Java we are left on our own.
Here is a comparison of the three versions of function `int_to_string`:
- Java

```java
static String intToString ( Integer i ) {
    if ( i == null ) {
        return null;
    } else {
        return i == 1 ? "one" : "not one";
    }
}
```
- Haskell

```haskell
intToString :: Maybe Integer -> Maybe String
intToString i = case i of
    Just 1  -> Just "one"
    Nothing -> Nothing
    _       -> Just "not one"
```
- PPL

New version (not available yet):

```
function int_to_string ( i pos_32 or null ) -> string or null = \
case value of i
when null: null
when 1 : "one"
otherwise: "not one"
```

Current version:

```
function int_to_string ( i pos_32 or null ) -> string or null
case value of i
when null
return null
when 1
return "one"
otherwise
return "not one"
.
.
```
And here is the function used to display the result:
- Java

```java
static void displayResult ( String s ) {
    String result = s == null ? "null" : s;
    System.out.println ( "Result: " + result );
}
```
- Haskell

```haskell
import Data.Maybe (fromMaybe)

displayResult :: Maybe String -> IO ()
displayResult s =
    putStrLn $ "Result: " ++ fromMaybe "null" s
```
- PPL
```
function display_result ( s string or null )
write_line ( """Result: {{s if_null: "null"}}""" )
.
```
That's it for part 1. In part 2 (to be published soon) we'll have a look at some useful null-handling features used frequently in practice.
Header image by [dailyprinciples](https://pixabay.com/users/dailyprinciples-3836461/) from [Pixabay](https://pixabay.com). | practicalprogramming |
180,845 | Part 2: Classic Encryption Algorithms - Mono-alphabetic Substitution Ciphers | Table of Contents Table of Contents Mathematical Background Greatest common divisor Co... | 2,484 | 2019-10-03T14:28:30 | https://dev.to/kalkwst/part-2-classic-encryption-algorithms-mono-alphabetic-substitution-ciphers-3kkc | computerscience, security, beginners | ## Table of Contents
<!-- TOC -->
- [Table of Contents](#table-of-contents)
- [Mathematical Background](#mathematical-background)
- [Greatest common divisor](#greatest-common-divisor)
- [Coprime Numbers](#coprime-numbers)
- [Modular Multiplicative Inverse](#modular-multiplicative-inverse)
- [Monoalphabetic Substitution Ciphers](#monoalphabetic-substitution-ciphers)
- [Caesar Cipher](#caesar-cipher)
- [Decimation Cipher](#decimation-cipher)
- [Affine Cipher](#affine-cipher)
<!-- /TOC -->
## Mathematical Background
Before discussing some of the most known classical substitution algorithms, we need to set some mathematical foundations, that are used by these algorithms.
### Greatest common divisor
The **Greatest Common Divisor** (or `GCD`) of two numbers, is the largest number that divides them both. For example, the greatest common divisor of `8` and `36` is `4`, since `4` divides both `8` and `36` and no larger number exists that has this property.
The **Euclidean Algorithm** is a technique for quickly finding the **GCD** of two integers. The algorithm is based on the following observation: if *d* divides both *a* and *b*, then *d* also divides *a - b*. This means that the GCD of *a* and *b*, is the same as the GCD of *a - b* and *b*. As a result, we can use the following process to make an algorithm:
- If `a=b` stop. The GCD of `a` and `a` is `a`. Otherwise, go to step 2.
- If `a > b`, replace `a` with `a-b` and go to step 1.
- If `b > a`, replace `b` with `b-a` and go to step 1.
The Khan Academy has a great [article](https://www.khanacademy.org/computing/computer-science/cryptography/modarithmetic/a/the-euclidean-algorithm) explaining the algorithm much better.
The implementation of the above could be the following:
```js
/**
* Calculate the greatest common divisor of two or more numbers.
* @param {...Number} arr The array of numbers to calculate the gcd of.
* @return {Number} The greatest common divisor of the provided numbers.
*/
const gcd = (...arr) => {
  // Recursive helper that calculates the gcd of two numbers.
  const inner = (x, y) => (!y ? x : inner(y, x % y));
  // Reduce the numbers pairwise to the gcd of all of them.
  return [...arr].reduce((a, b) => inner(a, b));
};
```
### Coprime Numbers
Two integers, let's say `a` and `b`, are said to be **coprime** if the only positive integer that divides both of them is 1.
In other words, two numbers are **coprime** when their greatest common divisor is `1`.
Given the above, we can create a utility function to calculate a number of coprimes for a given integer:
```js
/**
* Calculate a list of coprimes for the given `number`. Note that this function can generate only
* positive coprime numbers.
*
* @param {Number} number The number of which to calculate the coprimes.
* @param {Number} [results=5] The number of coprimes to calculate.
* @returns {[Number]} The `results` first coprimes of the given `number`.
*/
const findCoprimesFor = (number, results = 5) => {
let coprimes = [1]; // A list to store all of our coprime numbers.
let idx = 2; // The current number.
// The only coprime of 0 is 1, so there is no need to fire the loop.
if (number === 0) {
return coprimes;
}
// While there are more results to be calculated.
while (coprimes.length !== results) {
// If the gcd of the number and the idx is 1, then these two numbers are coprime.
if (gcd(number, idx) === 1) {
// So add them to the list.
coprimes.push(idx);
}
// Increase to the next natural number.
idx++;
}
return coprimes;
}
```
### Modular Multiplicative Inverse
A multiplicative inverse of a number is something you can multiply it by to get 1. So, if for example we have the number `3`, its multiplicative inverse is `1/3`. But for our purposes, we want an *integer* that when multiplied by `3` gives something that is congruent to `1 (mod 26)`. In our case `9` is such a number, since `3 * 9 = 27 = 1 (mod 26)`.
Again Khan Academy explains this greatly in their [article](https://www.khanacademy.org/computing/computer-science/cryptography/modarithmetic/a/modular-inverses).
In order to calculate the inverse we can use a naive algorithm, as shown below:
```js
const mmi = (a, b) => {
  // Note: returns undefined if `a` and `b` are not coprime.
  a %= b;
for(let i = 1; i < b; i++){
if((a * i) % b === 1){
return i;
}
}
}
```
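As a quick sanity check, here is a standalone snippet (re-defining the same naive search) that confirms the two inverses used later in this post:

```javascript
// Naive modular multiplicative inverse, same approach as above.
const mmi = (a, b) => {
  a %= b;
  for (let i = 1; i < b; i++) {
    if ((a * i) % b === 1) {
      return i;
    }
  }
};

console.log(mmi(3, 26));  // 9, since 3 * 9 = 27 = 1 (mod 26)
console.log(mmi(63, 26)); // 19, the inverse used later to decrypt the Decimation cipher
```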
## Monoalphabetic Substitution Ciphers
In monoalphabetic ciphers, each character of the plaintext is replaced with a corresponding character of ciphertext. A single one-to-one mapping function (*f*) from plaintext to ciphertext character is used to encrypt the entire message using the same key (*k*).
### Caesar Cipher
More than 2000 years ago, the military secrets of the Roman empire were kept secret with the help of cryptography. The 'Caesar cipher' as it is now called, was used by Julius Caesar to encrypt messages by shifting letters alphabetically.
The first step is to assign a number to each letter. So we have the following:
| | | | | | | | | | | | | | | | | | | | | | | | | | | |
| :--------: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| **Letter** | A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z |
| **Index** | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 |
#### Encryption
In order to encrypt a message, we convert its letters to numbers, as we did above, add the key to them, and then convert them back to letters.
In the following example, we are going to set our key `k` as 3, and encrypt the message `MEET AT TEN`.
| | M | E | E | T | | A | T | | T | E | N |
| :----------------------: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| **Original Index** | 12 | 4 | 4 | 19 | | 0 | 19 | | 19 | 4 | 13 |
| **Original Index + key** | 15 | 7 | 7 | 22 | | 3 | 22 | | 22 | 7 | 16 |
| **Cipher Text** | P | H | H | W | | D | W | | W | H | Q |
So our ciphertext will be `PHHW DW WHQ`.
When Caesar used the cipher, he always shifted by 3, but there's no reason for us to stick with this convention. For example, we could have encrypted the message `MEET AT TEN` by shifting the letters by 5 instead of 3:
| | M | E | E | T | | A | T | | T | E | N |
| :----------------------: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| **Original Index** | 12 | 4 | 4 | 19 | | 0 | 19 | | 19 | 4 | 13 |
| **Original Index + key** | 17 | 9 | 9 | 24 | | 5 | 24 | | 24 | 9 | 18 |
| **Cipher Text** | R | J | J | Y | | F | Y | | Y | J | S |
So our ciphertext will be `RJJY FY YJS`.
There's a subtlety to the Caesar cipher that hasn't come up yet. Imagine that we want to encrypt the message `MEET AT TWO` (note the change) with `5` as a key.
| | M | E | E | T | | A | T | | T | W | O |
| :----------------------: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| **Original Index** | 12 | 4 | 4 | 19 | | 0 | 19 | | 19 | 22 | 14 |
| **Original Index + key** | 17 | 9 | 9 | 24 | | 5 | 24 | | 24 | 27 | 19 |
| **Cipher Text** | R | J | J | Y | | F | Y | | Y | (?) | T |
Note the question mark. It doesn't seem like there is a letter corresponding to the number 27. Such a letter would be two places past the letter `Z`. Whenever we are looking for a letter past the letter `Z`, we simply wrap around, and start back at the beginning of the alphabet again. In this way, the letter two past `Z` is `B`; so the encrypted message would be `RJJY FY YBT`.
The implementation of the above algorithm could be as follows:
```js
/**
* Encrypt the provided `plaintext` to a ciphertext using the Caesar's cipher.
*
* @param {String} plaintext The plaintext to be encrypted.
* @param {Number} key The key to be used by the algorithm.
* @return {String} The encrypted message.
*/
const encrypt = (plaintext, key) => {
/**
* Convert the plaintext by removing all non-letter characters and convert it to upper-case.
* This will remove all special characters, numbers and whitespace characters from the original
* string.
*/
plaintext = plaintext.replace(/[^a-zA-Z]/g, "").toUpperCase();
// The alphabet used by the algorithm.
let alphabet = ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", "N", "O", "P",
"Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z"];
// Create an empty string to store the ciphertext.
let ciphertext = "";
/**
* For each letter in the plaintext, calculate the index of the corresponding ciphertext letter
* and append it to the ciphertext string.
*/
for (let i = 0; i < plaintext.length; i++) {
let cipherIdx = (alphabet.indexOf(plaintext[i]) + key) % 26;
ciphertext += alphabet[cipherIdx];
}
return ciphertext;
};
```
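To see the wrap-around in action end to end, here is a compact standalone variant that uses character codes instead of an alphabet array (an equivalent sketch, not a replacement for the implementation above):

```javascript
// Compact Caesar shift: A-Z live at char codes 65-90, so we shift relative to 65.
const caesar = (text, key) =>
  text
    .replace(/[^a-zA-Z]/g, "")
    .toUpperCase()
    .split("")
    .map(c => String.fromCharCode(((c.charCodeAt(0) - 65 + key) % 26) + 65))
    .join("");

console.log(caesar("MEET AT TEN", 3)); // "PHHWDWWHQ"
console.log(caesar("MEET AT TWO", 5)); // "RJJYFYYBT" -- the W wraps past Z to B
```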
#### Decryption
In order to decrypt the message, we just need to shift the letters **back** by the key. This corresponds to subtracting the key when we convert to numbers.
Again, using the `PHHW DW WHQ` example:
|                          |   P   |   H   |   H   |   W   |       |   D   |   W   |       |   W   |   H   |   Q   |
| :----------------------: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
|    **Original Index**    |  15   |   7   |   7   |  22   |       |   3   |  22   |       |  22   |   7   |  16   |
| **Original Index - key** |  12   |   4   |   4   |  19   |       |   0   |  19   |       |  19   |   4   |  13   |
|      **Plain Text**      |   M   |   E   |   E   |   T   |       |   A   |   T   |       |   T   |   E   |   N   |
The implementation of the above algorithm could be as follows:
```js
/**
 * Decrypt the provided `ciphertext` back to plaintext using the Caesar cipher.
 *
 * @param {String} ciphertext The ciphertext to be decrypted.
 * @param {Number} key The key to be used by the algorithm.
 * @return {String} The decrypted message.
 */
const decrypt = (ciphertext, key) => {
  // The alphabet used by the algorithm.
  let alphabet = ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", "N", "O", "P",
                  "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z"];
  // Create an empty string to store the plaintext.
  let plaintext = "";
  /**
   * For each letter in the ciphertext, calculate the index of the corresponding plaintext letter
   * and append it to the plaintext string.
   */
  for (let i = 0; i < ciphertext.length; i++) {
    // Subtract the key and wrap negative values back into the 0-25 range.
    let plainIdx = mod(alphabet.indexOf(ciphertext[i]) - key, 26);
    plaintext += alphabet[plainIdx];
  }
  return plaintext;
};

// True modulo: JavaScript's % operator keeps the sign of the dividend.
const mod = (n, m) => ((n % m) + m) % m;
```
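One detail worth calling out: decryption needs a true modulo, because JavaScript's `%` operator keeps the sign of the dividend. A quick standalone demonstration (value first, modulus second):

```javascript
// JavaScript's % can return negative values, which is never a valid alphabet index.
const mod = (n, m) => ((n % m) + m) % m;

console.log(-3 % 26);     // -3: shifting A (index 0) back by 3 with plain % fails
console.log(mod(-3, 26)); // 23: wraps around correctly, landing on X
```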
### Decimation Cipher
The Decimation cipher is another monoalphabetic substitution cipher. As in the Caesar cipher, each letter is mapped to a new alphabet index, but instead of adding the key to the index, we multiply the index by the key.
#### Encryption
In order to encrypt a message, we once again convert its letters to numbers, multiply the key with them, and then convert them back to letters.
In the following example, we are going to set our key `k` as 63 and encrypt the message `MEET AT TEN`.
|                                 |   M   |   E   |   E   |   T   |       |   A   |   T   |       |   T   |   E   |   N   |
| :-----------------------------: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
|       **Original Index**        |  12   |   4   |   4   |  19   |       |   0   |  19   |       |  19   |   4   |  13   |
|    **Original Index * key**     |  756  |  252  |  252  | 1197  |       |   0   | 1197  |       | 1197  |  252  |  819  |
| **Original Index * key MOD 26** |   2   |  18   |  18   |   1   |       |   0   |   1   |       |   1   |  18   |  13   |
|         **Cipher Text**         |   C   |   S   |   S   |   B   |       |   A   |   B   |       |   B   |   S   |   N   |
So our ciphertext will be `CSSB AB BSN`.
Once again, there is a subtlety to the Decimation cipher that hasn't come up yet. We can't use just **any** number as a key. Some keys cause the cipher alphabet to map several plaintext letters to the same ciphertext letters.
For example, using the key `10` with the standard 26-letter Latin alphabet, we get the following:
| | A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z |
| :---------------------: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| **Original Index** | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 |
| **Ciphertext Index** | 0 | 10 | 20 | 4 | 14 | 24 | 8 | 18 | 2 | 12 | 22 | 6 | 16 | 0 | 10 | 20 | 4 | 14 | 24 | 8 | 18 | 2 | 12 | 22 | 6 | 16 |
| **Ciphertext Alphabet** | A | K | U | E | O | Y | I | S | C | M | W | G | Q | A | K | U | E | O | Y | I | S | C | M | W | G | Q |
As you can see, some letters appear twice, and some letters never appear. To be more precise, the letters `ACEGIKMOQSUWY` appear twice, and the letters `BDFHJLNPRTVXZ` never appear. To avoid this issue, we must select a key that is [**coprime**](#coprime-numbers) with the length of the alphabet.
The implementation of the above algorithm could be as follows:
```js
/*
* Encrypt the provided `plaintext` to a ciphertext using the Decimation cipher.
*
* @param {String} plaintext The plaintext to be encrypted.
* @param {Number} key The key to be used by the algorithm.
* @return {String} The encrypted message.
*/
const encrypt = (plaintext, key) => {
/**
* Convert the plaintext by removing all non-letter characters and convert it to upper-case.
* This will remove all special characters, numbers and whitespace characters from the original
* string.
*/
plaintext = plaintext.replace(/[^a-zA-Z]/g, "").toUpperCase();
// The alphabet used by the algorithm.
// prettier-ignore
let alphabet = ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", "N", "O", "P",
"Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z"];
// Create an empty string to store the ciphertext.
let ciphertext = "";
/**
* For each letter in the plaintext, calculate the index of the corresponding ciphertext letter
* and append it to the ciphertext string.
*/
for (let i = 0; i < plaintext.length; i++) {
let cipherIdx = (alphabet.indexOf(plaintext[i]) * key) % 26;
ciphertext += alphabet[cipherIdx];
}
return ciphertext;
};
```
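Before encrypting, it's worth validating the key against the coprime requirement described above. A standalone check using a two-argument `gcd` (a simplified form of the variadic version from earlier):

```javascript
// Classic two-argument Euclidean gcd.
const gcd = (a, b) => (b === 0 ? a : gcd(b, a % b));

console.log(gcd(10, 26)); // 2 -> 10 is NOT a valid Decimation key for a 26-letter alphabet
console.log(gcd(63, 26)); // 1 -> 63 is coprime with 26, so it is safe to use
```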
#### Decryption
Just like we decrypted Caesar cipher messages by subtracting the encryption key, we can decrypt a message encrypted with the Decimation cipher by multiplying it by the multiplicative inverse of the key. This in essence "reverses" the multiplication operation.
Using our `CSSB AB BSN` message, and since our key was `63`, we need the modular multiplicative inverse of that key, which is `19`. So we will multiply each index by that number in order to decrypt the message.
|                              |   C   |   S   |   S   |   B   |       |   A   |   B   |       |   B   |   S   |   N   |
| :--------------------------: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
|      **Original Index**      |   2   |  18   |  18   |   1   |       |   0   |   1   |       |   1   |  18   |  13   |
| **Original Index * inverse** |  12   |   4   |   4   |  19   |       |   0   |  19   |       |  19   |   4   |  13   |
|        **Plain Text**        |   M   |   E   |   E   |   T   |       |   A   |   T   |       |   T   |   E   |   N   |
The implementation of the above, could be as follows:
```js
/*
 * Decrypt the provided `ciphertext` back to plaintext using the Decimation cipher.
*
* @param {String} ciphertext The ciphertext to be decrypted.
* @param {Number} key The key to be used by the algorithm.
* @return {String} The decrypted message.
*/
const decrypt = (ciphertext, key) => {
// The alphabet used by the algorithm.
let alphabet = ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", "N", "O", "P",
"Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z"];
// Create an empty string to store the plaintext.
let plaintext = "";
let inverse = mmi(key, 26);
/**
* For each letter in the ciphertext, calculate the index of the corresponding plaintext letter
* and append it to the plaintext string.
*/
for (let i = 0; i < ciphertext.length; i++) {
let plainIdx = (alphabet.indexOf(ciphertext[i]) * inverse) % 26;
plaintext += alphabet[plainIdx];
}
return plaintext;
};
```
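Because decryption is just multiplication by the inverse key, a single standalone multiply-and-wrap helper can round-trip the message — a sketch assuming key `63` with inverse `19`:

```javascript
// One helper for both directions: encrypt with the key, decrypt with its inverse.
const decimate = (text, key) =>
  text
    .replace(/[^a-zA-Z]/g, "")
    .toUpperCase()
    .split("")
    .map(c => String.fromCharCode((((c.charCodeAt(0) - 65) * key) % 26) + 65))
    .join("");

console.log(decimate("MEET AT TEN", 63)); // "CSSBABBSN"
console.log(decimate("CSSBABBSN", 19));   // "MEETATTEN", since 63 * 19 = 1 (mod 26)
```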
### Affine Cipher
The Affine cipher works through a combination of modular multiplication and modular addition. In other words, the affine cipher is a combination of a Caesar's cipher and a multiplication cipher.
#### Encryption
In order to encrypt a plaintext with the affine cipher, we need two keys, `a` and `b`. Once again, we convert each letter to a number, multiply it by `a`, add `b` to the result, and finally take the `result modulo 26`.
Let's encrypt the message `MEET AT TEN` with the affine cipher, using the keys `3` and `10`:
| | M | E | E | T | | A | T | | T | E | N |
| :-------------------------: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| **Index** | 12 | 4 | 4 | 19 | | 0 | 19 | | 19 | 4 | 13 |
| `y = (3*index + 10) mod 26` | 20 | 22 | 22 | 15 | | 10 | 15 | | 15 | 22 | 23 |
| **Ciphertext** | U | W | W | P | | K | P | | P | W | X |
The implementation of the above algorithm could be as follows:
```js
/*
 * Encrypt the provided `plaintext` to a ciphertext using the Affine cipher.
*
* @param {String} plaintext The plaintext to be encrypted.
* @param {Number} keyA The first key to be used by the algorithm.
* @param {Number} keyB The second key to be used by the algorithm.
* @return {String} The encrypted message.
*/
const encrypt = (plaintext, keyA, keyB) => {
/**
* Convert the plaintext by removing all non-letter characters and convert it to upper-case.
* This will remove all special characters, numbers and whitespace characters from the original
* string.
*/
plaintext = plaintext.replace(/[^a-zA-Z]/g, "").toUpperCase();
// The alphabet used by the algorithm.
// prettier-ignore
let alphabet = ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", "N", "O", "P",
"Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z"];
// Create an empty string to store the ciphertext.
let ciphertext = "";
/**
* For each letter in the plaintext, calculate the index of the corresponding ciphertext letter
* and append it to the ciphertext string.
*/
for (let i = 0; i < plaintext.length; i++) {
let cipherIdx = ((alphabet.indexOf(plaintext[i]) * keyA) + keyB) % 26;
ciphertext += alphabet[cipherIdx];
}
return ciphertext;
};
```
#### Decryption
As we discussed above, the affine cipher is a combination of the Caesar cipher and the Decimation cipher. Thus, the encryption process is a Caesar cipher merged with a multiplication cipher. In order to decrypt the message we need a combination of a Caesar and a multiplication cipher decryption.
First we need to calculate the modular multiplicative inverse of `keyA`. Then we reverse the operations performed by the encryption algorithm: subtract `keyB` from the index, multiply the result by the inverse of `keyA`, and take the result modulo 26.
| | U | W | W | P | | K | P | | P | W | X |
| :-------------------------: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| **Index** | 20 | 22 | 22 | 15 | | 10 | 15 | | 15 | 22 | 23 |
| `y = 9*(index - 10) mod 26` |  12   |   4   |   4   |  19   |       |   0   |  19   |       |  19   |   4   |  13   |
|        **Plaintext**        |   M   |   E   |   E   |   T   |       |   A   |   T   |       |   T   |   E   |   N   |
The implementation of the above, could be like the following:
```js
/*
* Decrypt the provided `ciphertext` to a plaintext using the Affine cipher.
*
 * @param {String} ciphertext The ciphertext to be decrypted.
* @param {Number} keyA The first key to be used by the algorithm.
* @param {Number} keyB The second key to be used by the algorithm.
* @return {String} The decrypted message.
*/
const decrypt = (ciphertext, keyA, keyB) => {
// The alphabet used by the algorithm.
// prettier-ignore
let alphabet = ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", "N", "O", "P",
"Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z"];
// Create an empty string to store the plaintext.
let plaintext = "";
let inverse = mmi(keyA, 26);
/**
* For each letter in the ciphertext, calculate the index of the corresponding plaintext letter
* and append it to the plaintext string.
*/
for (let i = 0; i < ciphertext.length; i++) {
    // Subtract keyB first, then multiply by the inverse, wrapping negatives back into range.
    let plainIdx = mod((alphabet.indexOf(ciphertext[i]) - keyB) * inverse, 26);
    plaintext += alphabet[plainIdx];
  }
  return plaintext;
}

// True modulo: JavaScript's % operator keeps the sign of the dividend.
const mod = (n, m) => ((n % m) + m) % m;
```
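Putting both directions together in a standalone round-trip sketch — note that decryption subtracts `keyB` *before* multiplying by the inverse, and wraps negative values back into range:

```javascript
// True modulo, since JavaScript's % can return negative values.
const mod = (n, m) => ((n % m) + m) % m;

const affineEncrypt = (text, keyA, keyB) =>
  text
    .replace(/[^a-zA-Z]/g, "")
    .toUpperCase()
    .split("")
    .map(c => String.fromCharCode(mod((c.charCodeAt(0) - 65) * keyA + keyB, 26) + 65))
    .join("");

const affineDecrypt = (text, keyA, keyB) => {
  // Find the modular multiplicative inverse of keyA by naive search.
  let inverse = 1;
  while ((keyA * inverse) % 26 !== 1) inverse++;
  return text
    .split("")
    .map(c => String.fromCharCode(mod((c.charCodeAt(0) - 65 - keyB) * inverse, 26) + 65))
    .join("");
};

console.log(affineEncrypt("MEET AT TEN", 3, 10)); // "UWWPKPPWX"
console.log(affineDecrypt("UWWPKPPWX", 3, 10));   // "MEETATTEN"
```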
On the next part we are going to discuss the evolution of monoalphabetic substitution ciphers, the polyalphabetic substitution ciphers.
Do you have something to add? Leave a comment below, and thanks for reading! | kalkwst |
180,921 | #SQL30 Day 4: Video Game Sales | Welcome to the SQL showdown series! What is this and how does it work? I'm committing t... | 2,589 | 2019-10-06T11:24:43 | https://dev.to/zchtodd/sql30-day-4-video-game-sales-1a2 | sql, database, postgres, challenge | Welcome to the SQL showdown series!
### What is this and how does it work?
I'm committing to publishing a SQL challenge every day for 30 days. In each post I'll describe **my** solution to the last day's challenge. I'll follow that up with a full description of the challenge for the next day.
Write your own solution in the comments! Let's see who can come up with the most creative solutions.
I'll add connection details for a PostgreSQL database containing test data for the challenge. Solutions are by no means limited to PostgreSQL, but it's there if you want an easy way to test your query!
### Challenge #4: Video Game Sales
This challenge uses a data-set that contains aggregated video game sales dating back to 1980. This data-set comes from the [Kaggle open data-sets archive](https://www.kaggle.com/gregorut/videogamesales/).
Here's the challenge:
**Can you produce a report that displays one year per row, and the aggregated global sales by genre for that year as columns?**
There is only one table in the **day4** schema. The **videogame** table contains info on the game title, published year, genre, and sales for that game.
The **videogame** table has the following columns:
* name
* platform
* year
* genre
* publisher
* na_sales
* eu_sales
* jp_sales
* global_sales (in millions)
Here's an example to give you a better idea of the output you're after:

### Sandbox Connection Details
I have a PostgreSQL database ready for you to play with.

The password is the same as the username! To query the **videogame** table:
```sql
SELECT * FROM day4.videogame;
```
### Solution for Challenge #3
This is the question we were trying to answer with yesterday's SQL challenge:
**Can you find the average temperature per month for every county in the United States, as well as the coldest and hottest temperatures that county has experienced over the year?**
Part of this question sounds like a candidate for window functions, but how would we do this **by county**? The **station** and **measurement** tables can be joined through the **idx** column, but there doesn't appear to be a way to join to **county**.
Or is there?
```sql
WITH cte AS
(
SELECT geo_id,
s.idx,
NAME,
value,
taken,
date_trunc('month', taken) AS month
FROM day3.county c
JOIN day3.station s
ON ST_Contains(c.wkb_geometry, ST_SetSRID(St_MakePoint(s.lat, s.long), 4326))
JOIN day3.measurement m
ON s.idx = m.idx
)
SELECT cte.*,
avg(value) OVER geo_m,
min(value) OVER geo_y,
max(value) OVER geo_y
FROM cte WINDOW geo_m AS (PARTITION BY (geo_id, month)),
geo_y AS (PARTITION BY geo_id)
ORDER BY geo_id,
taken;
```
We saw [common table expressions](https://www.postgresql.org/docs/9.1/queries-with.html) and [window functions](https://dev.to/helenanders26/sql-301-why-you-need-sql-window-functions-part-1-6e1) in the solution to challenge #2, so we're getting a little more practice with those here.
The interesting part is the spatial join that connects weather polling stations in the **station** table with their respective county. PostGIS provides the **ST_Contains** function, and I use it here to determine if the county polygon contains the point defined by the station latitude and longitude.
The **ST_SetSRID** function basically tells PostGIS what coordinate system the point we just created is going to be in. 4326 is known as the [World Geodetic System](https://en.wikipedia.org/wiki/World_Geodetic_System) and is one of the most common SRID values that you'll see.
### More about PostGIS
GIS in SQL is a topic that I've barely touched on myself, but there are quite a few great resources out there on the subject.
* [Official PostGIS tutorial](https://postgis.net/workshops/postgis-intro/)
* [YouTube series on PostGIS](https://www.youtube.com/watch?v=tTUM9XfDvqk)
### Good luck!
Have fun, and I can't wait to see what you come up with! I'll be back tomorrow with a solution to this problem and a new challenge. | zchtodd |
181,036 | Pure css - Joker | A post by Ekaterina | 0 | 2019-10-01T14:53:07 | https://dev.to/petrekx/pure-css-joker-oi4 | codepen | {% codepen https://codepen.io/petrek/pen/yLBdXEa %} | petrekx |
181,234 | Communicating Technical Debt | Many developers feel that product management and executive leadership don't "get it" when we talk abo... | 2,551 | 2019-10-02T03:00:53 | https://dev.to/integerman/communicating-technical-debt-3gbd | architecture, communication, codequality, management | Many developers feel that product management and executive leadership don't "get it" when we talk about technical debt. At the same time, if you ask developers about factors vital to the long-term success of a project, paying down technical debt is high on the list. So, how can we communicate technical debt in a way that bridges that gap?
As a development manager, a core part of my job is to act as a bridge between product management and development teams. Let me share with you what's worked well for me so far.
This post is inspired, in part, by bob.js' wonderful article on [Accounting for Technical Debt](https://dev.to/rfornal/accounting-for-technical-debt-2pf0) and is intended as a bit of a companion for that article.
{% link https://dev.to/rfornal/accounting-for-technical-debt-2pf0 %}
## Defining Technical Debt
Let's start from Webster's definition of debt:
> Debt: a state of being under obligation to pay or repay someone or something in return for something received : a state of owing
In this context, we incur technical debt by taking temporary advantage of something while incurring an obligation to pay it back.
### Principal and Interest in Technical Debt
Extending further, we pay additional **interest** for our debt over time, meaning that a small piece of technical debt becomes larger over time.
As an example, if we cut some corners in implementing a feature by duplicating a method and making slight modifications, we pay interest on that decision every time we need to make a change that *should* modify both methods.
Even if only one method is actually modified, the failure to modify the duplicated method likely constitutes a bug, introducing a form of quality debt into the application in the form of a hidden bug waiting to be encountered.
In this example, the **principal** of the technical debt is the time saved during initial implementation while the **interest** is the additional time, quality, and risk costs incurred for that decision until the point that it is resolved.
I particularly like the financial analogy when talking about technical debt because it takes something hidden and unknown to business-oriented professionals and puts it in terms that they work with on a day to day.
This also highlights that while technical debt can be advantageous in the short term (when interest paid is less than the principal borrowed), it can be ruinous in the long-term.
## Penalties for Technical Debt
Above I mentioned the **increased amount of development time** penalty paid on future items when technical debt is in effect. This is likely the largest form of technical debt that we talk about as developers.
Additionally, we touched on the **quality debt** that can be incurred by brittle and unmaintainable code, lack of unit tests, and duplication that results in inconsistent behavior when modifying code. This is a form of risk that we pay off as interest when working with tech-debt heavy areas of code.
Another form of interest we pay on technical debt is **poor application performance**. This is typically paid when technical debt is at the design level as most performance issues ultimately turn out to be poorly designed flows through a system. Yes, you can make tactical improvements to code performance for days and weeks on end, but at a certain point it's hard to get additional improvements until you redesign with performance in mind.
**Security vulnerabilities** are another form of risk that we can incur as interest on technical debt. Uniquely, though, this does not just manifest itself as code is changed, but the risk starts from the day a vulnerability is present and continues on until it is resolved.
Finally, we do pay a **morale penalty** to developers when working on substandard code. This is frequently not isolated to just developers, however, as many developers are vocal about the more *interesting* bits of code they discover and this affects anyone within earshot.
I would further argue that poor code inside of the codebase encourages substandard work on the project because it is shown to be an acceptable level of workmanship, so this form of debt encourages future debt.
## Talking to Business about Technical Debt
Okay, so now we've talked about what debt is and the interest we pay until it is resolved, let's talk about communicating these things to business stakeholders.
First of all, product management and business stakeholders are **not** your adversary. These are your key partners. Any discussion with business on code needs to have trust and respect at its core - from both sides - or friction is guaranteed to occur and success is far less likely.
I would take it a step further and say that establishing a relationship of trust, collaboration, and respect with business partners is **even more important** to the long term success of a project than technical debt.
In order to have a healthy and productive conversation with business stakeholders, you need to do at least the following things:
- Come at the conversation looking to improve *understanding* in both you and them. You want to inform them of current and future obstacles and the prices paid by past decisions and you need to hear and understand their needs.
- You need to have an extreme amount of professionalism. Developers love to have fun, but when we're alien to the business in our conduct, it's not too hard to see why the business might doubt that we're capable of thinking about the things they think about.
- Have both data and anecdotal evidence to back up your claims.
- Have a few key top priority technical debt items already identified.
- Have flexible plans for resolving things that can be modified by business needs.
- Have an idea of how long it will take to resolve things.
- Have ideas for ensuring that technical debt becomes less of a problem in the future.
- Be prepared to give progressive status reports as technical debt is paid down.
When talking to business stakeholders about serious levels of technical debt, you are essentially a doctor giving a patient a warning about health complications and future consequences. You need to tell the truth while not being alarmist, and you need to offer a plan to remedy the situation and monitor it going forward.
We'll talk more about these things in a bit.
## "Isn't Technical Debt the Developer's Fault?"
I hear this question sometimes from business. It's often not asked in a malicious way and it's even asked reluctantly at times. Most business people don't understand the nature of software development or software projects because they haven't developed code for them. As such, it's completely reasonable to assume that technical debt is the developer's fault.
The truth about this assumption, unfortunately, is that sometimes technical debt *is* our fault. Sometimes we didn't warn about the consequences of a decision as it was being made, sometimes we didn't notice it until it was too late, and sometimes developers get lazy, make mistakes, or are still growing the full set of skills they need.
However, I choose to believe that the majority of technical debt is not our fault or is detected too late to change without jeopardizing key business goals.
An analogy I like to make when dealing with business stakeholders who blame development for technical debt is one from farming.
In farming, if you repeatedly farm the same land season after season after season, you systematically render that land less fertile and productive by draining the nutrients from the soil. This is why farmers have adopted techniques such as crop rotation (leaving a field fallow or empty in off seasons to recharge) or to substitute soil-enriching crops for soil-depleting crops every so often to help fields recharge.
In this analogy, it makes it clear that while developers are working on what the business wants, project after project, the lack of time available to weed out the fields and allow the metaphorical soil to recharge is ultimately reducing the crop yield. You ultimately don't have bad farmers (developers), but a farming strategy and schedule that optimizes yield for the first few seasons at the expense of long-term viability.
# The Business Perspective
Sometimes sacrificing long-term health of a codebase is actually acceptable.
Sometimes you critically need to finish a project to stay in business. Sometimes you plan on completely rewriting an application or retiring it after a number of iterations in favor of a replacement. In this case, productivity in the short run *should* be the primary concern.
Other times business simply doesn't understand. They live in the world of looking at the needs of stakeholders, sales goals, deals won and lost, contracts at risk, support incidents, bug counts, and other concrete and understandable things and when they hear technical debt, it can be easy to assume it just means "code I'm not particularly fond of" and not "a massive quality risk waiting to unleash a flood of defects on our users".
This is why they need us to translate our day-to-day into something they can understand.
That means we have to look at metrics. Armed with data from issue tracking systems, time tracking systems, source control, code analysis tools, test coverage results, CI/CD pipelines, performance monitoring tools, and other sources you can put together some interesting figures such as:
- Defects by area of the application
- Time needed to complete a single feature (particularly over time as this shows losses in productivity)
- Time spent on development vs support activities
- % of code that is covered by unit tests over time
- "code smells" by source file
- "code smells" over time
- % of incoming requests that result in errors
Be creative. The exact metrics that are appropriate for your code are going to be unique to your organization and your current flavors of technical debt. If you need ideas on analyzing problems and coming up with ways of representing them, take a look at my post on [the 7 basic tools of software quality](https://dev.to/integerman/the-7-basic-tools-of-software-quality-16i1).
{% link https://dev.to/integerman/the-7-basic-tools-of-software-quality-16i1 %}
# Code Analysis
Code Analysis tools are something I absolutely would lean on. These could be anything from compiler or linter warnings to a dedicated tool that analyzes a codebase and generates recommendations. The exact tools used will vary by the programming language you're using, but I'll share what works well for me.
I use [SonarQube](https://www.sonarqube.org/) / [SonarCloud](https://sonarcloud.io) to scan a wide variety of code in many different languages.

This can also help me prioritize which files need attention most in a very visual manner suitable for communication with business stakeholders:

While SonarQube / SonarCloud is good for simple tracking that works well out of the box, more in-depth and customizable analysis may be necessary and should come from a specific tool suited to your programming language.
I use [NDepend](https://Ndepend.com) to analyze .NET assemblies and get detailed metrics and visualizations as to everything wrong with those projects in order to prioritize and track code smells over time.

NDepend, in particular, can pinpoint methods by size and complexity, code coverage, etc. and generate some very helpful graphs and charts for prioritizing and even potentially communicating technical debt.

*Disclaimer: While I have previously paid for NDepend, my current copy was provided by the developer*
# Tips for Communicating with Business Stakeholders
So, now that you have the metrics and data you need to prioritize and communicate technical debt, let's talk about that conversation.
First, you need to determine how big of a deal this is. This can range from a 30 second elevator pitch of "We'd really be more productive if we could fix X. Can I send you a short E-Mail with some details and get this included in a future sprint?" to "I've been looking at our code quality and I have some concerns I'd like to share with you. I'd like to set up a meeting for later this week to go over them and talk about some possible solutions. What day would work best for you?"
Second, you need to put together the appropriate level of communication. Typically this is going to be anywhere from a paragraph-long E-Mail to a one or two page report or a 5-10 slide deck. Your goal is to concisely communicate the problem to them in a way that they can understand and participate fairly in a discussion for prioritizing and planning a remedy for the issue.
Thirdly, you need to take their communication styles into account. Some people hate E-Mail or phone. Others hate formal meetings. Style matters too - some people want concise confident statements without any preliminaries while others want to really interact with you and shoot the breeze before getting to business. Some people are motivated by hard facts while others are swayed by stories of how individuals are impacted. Know your audience. When in doubt, try a mixture of approaches or have anecdotes ready to share if your data flops.
I would highly recommend keeping your presentation focused on the bare minimum needed to adequately communicate the problem. Do your homework going into the meeting, but don't bore them. If an executive wants details, they'll ask for it. You're not trying to sound smart or win points here - you're trying to bring a partner into a problem solving world that's not their forte.
Present your proposed plan for remedying things to them as part of the presentation. Expect to be asked questions around how many resources you'll need, how long it will take, what the risks are, and what things the business will be unable to do in the short term due to the loss of resources.
The other thing I would caution about going into these meetings - fair or otherwise - is that many at the executive level are focused on people's clock-in and clock-out times. If you're telling them you need to dedicate time to technical debt while the dev team leaves immediately at closing time every day, shows up late, or takes long lunches, the executives may have trouble focusing on what you're saying - right or wrong, this just tends to be how many people think at that level.
# Closing
Now that we know how to communicate technical debt to business stakeholders and, hopefully, get buy in, let's look at [some strategies for paying down technical debt](https://dev.to/integerman/strategies-for-paying-off-technical-debt-2da9).
{% link https://dev.to/integerman/strategies-for-paying-off-technical-debt-2da9 %}
What's worked for you when talking with business stakeholders? What obstacles have you encountered other than what I've shared above?
---
Photo by Sabine Peters on Unsplash | integerman |
181,300 | My first time at JSConf Budapest, how was it? | Sharing my thoughts and experience on attending JSConf Budapest for the first time | 0 | 2019-10-04T11:35:51 | https://www.lirantal.com/blog/2019-10-04_jsconf-budapest-review | jsconf, speaking, devrel, conferences | ---
title: My first time at JSConf Budapest, how was it?
published: true
description: Sharing my thoughts and experience on attending JSConf Budapest for the first time
tags: jsconf, speaking, devrel, conferences
cover_image: https://thepracticaldev.s3.amazonaws.com/i/of0bgqoc2ghd5dpsrj9a.jpeg
canonical_url: https://www.lirantal.com/blog/2019-10-04_jsconf-budapest-review
---
Last week I had the privilege of attending [JSConf Budapest](https://jsconfbp.com) - an incredible experience with a magnificent team behind the program. This was also my first time in Hungary, and as I had a few hours to kill while traveling, I wanted to share some of my experience:
JSConf Budapest - All The Goodness:
* [The People](#the-people)
* [The Talks](#the-talks)
* [The MC](#the-mc)
* [The Conference](#the-conf)
* [The Mozilla Lounge](#the-mozilla)
* [The PWA App](#the-pwa)
* [The City](#the-city)
My (own) Regrets:
* [Short on time](#short-on-time)
* [Re-doing my slides last minute](#slides-redo)
## The People <a name="the-people">
I almost moved this item below to the list of things I regret not doing more of. I had a couple of random chats with the attendees, met new speaker friends and had great conversations after my talk. Everyone I talked to was nice and had a great time.
Make no mistake, this totally extends to the people of Budapest as well!
## The Talks <a name="the-talks">
The agenda was completely colorful in topics, ranging from [brain waves audio in the browser](https://twitter.com/robrkerr/status/1177546518380912640?s=20) by [Braden Moore](https://twitter.com/braden_rm), to [learning from failures](https://twitter.com/jsconfbp/status/1177608040670990338?s=20) by [Isa Silveira](https://twitter.com/silveira_bells) and I loved every minute of it (when I was not practicing my fresh new talk version)
I learned that I really like these varied conference agendas where it's not just 9 hours of technical deep dives staring at IDEs, but where we instead expose ourselves to topics outside of our comfort zone. [JSHeroes](https://jsheroes.io) is another conference that follows this approach.
A side note on the agenda: I was stoked that we had talks on [Security](https://twitter.com/jsconfbp/status/1177572957847216130?s=20), [Accessibility](https://twitter.com/jsconfbp/status/1177130554095034369?s=20), [Performance](https://twitter.com/bogy0/status/1177176293537976321?s=20) and [Testing](https://twitter.com/jsconfbp/status/1177145361263202304?s=20) - all of which are cross-cutting concerns that are usually missed out of in our day to day engineering lives and other confs too.
## The MC <a name="the-mc">
The all-around fashionista [Paul](https://twitter.com/paul_v_m) did a great job keeping things lightweight and rolling as he was orchestrating the whole 2 days on stage, phew!
Thanks to Paul for challenging the status quo; he wished to make everyone feel accepted in their own shoes and appreciated for who they are. ❤️👏
{% twitter 1177481973608017921 %}
{% twitter 1177129309380796418 %}
I'm pretty sure too that Paul had the [best socks](https://twitter.com/liran_tal/status/1177166418938384384?s=20) at the conf.
Let's send him hugs and wish him well for his upcoming talk at [QueerJS Amsterdam](https://queerjs.com) which he also helps organize 🎉
## The Conference <a name="the-conf">
Starting with the venue: a place called Akvarium Klub, and as you might have guessed already - yes, it's a nightclub! More precisely, they actually host rock concerts there.
The atmosphere is quite intense, in a good way, and funny enough - speakers have a couple of quiet and ready-rooms all set up with mirrors, bathrooms, etc. Quite a thing!
This image doesn't do it justice but here you go:
{% twitter 1177124222734409729 %}
The conference organizing team had everything under control. I constantly saw them interacting with speakers during the day to make sure everyone felt well. If someone felt they'd go over or under time by a few minutes, they were super ok with it, making everyone feel comfortable and confident about their talks.
[Daniel](https://twitter.com/daniliptak) for instance, showed me a picture of a place called [Yoda Cave in Iceland](http://www.travelociraptor.com/yoda-cave-iceland/) a few minutes before my talk, and OMG getting there is a life-goal now.
[Malwine](https://twitter.com/malweene) did a great job at capturing every speaker's talk gist, follow up on her twitter profile to see all the artistic summaries ✨
Another personal highlight is perhaps this largest selfie I've ever taken 😆 with all of JSConf family!
{% twitter 1177626787930656768 %}
## The Mozilla Lounge <a name="the-mozilla">
Mozilla supported the conference through setting up a lounge place where attendees could hang out in a bit more ease on sofas and watch the live video stream of the talks while getting some work done on their laptops.
It looks like [Charlie Gerard](https://twitter.com/devdevcharlie/status/1177571665213054976?s=20) was able to capture some great moments from [Eva Ferreira's](https://twitter.com/evaferreira92) talk on animation techniques in the browser
{% twitter 1177571665213054976 %}
{% twitter 1177523863577595905 %}
## A PWA for Conference Agenda <a name="the-pwa">
I installed this progressive web application from day one of looking at the website, well before the conference started, and I can't recommend this highly enough for conference organizers.
Wi-Fi or not, the agenda is right there. Also of note: the PWA is *only* the agenda schedule, so it's super on-point and handy!
<img alt="JSConf Budapest Conference Agenda Progress Web App" src="https://thepracticaldev.s3.amazonaws.com/i/2qsl7y098sciehqwoeh7.png" width="250px" />
## The City <a name="the-city">
Budapest is pretty flat, which I like, and the center area is quite big, extending outwards rather than just revolving around a single focal point. If you're traveling with family (kids), there are so many things to do that it really surprised us!
Here are a few pictures:



# My Regrets
The following are some of my own personal regrets of things I wish would've run "better". Don't mistake that for the conference experience which was solid fun 👌
## Short on time <a name="short-on-time" />
I've been doing a lot of conferences recently and we didn't have a proper family vacation so far this year so I took the opportunity to take my wife and my 5 year old kid with me.
The conference organizers were so kind as to get them both tickets for the whole conference to make sure they weren't left out. Much appreciation and deep gratitude ❤️ My wife ended up wandering across the city with our kid and they both enjoyed every minute of it.
Having them travel with me always left me feeling a bit guilty about getting back to them when the conference day ended, and so I missed out on dinner, conference party activities and such.
I also missed out on me and [Ben's](https://twitter.com/BenedekGagyi) selfie promise to have a ton of them. Ben's a long time friend and also involved in organizing JSConf Budapest which meant he was also quite occupied with making sure everything runs smoothly. I guess it's going to be next time for us Ben to make up for it! 🤗
## Re-doing my slides last minute <a name="slides-redo" />
9pm the evening before my talk on the second day my brain decided I need to re-write the talk contents a bit to add interesting npm security ecosystem insights that I've been [researching](https://snyk.io/blog/why-npm-lockfiles-can-be-a-security-blindspot-for-injecting-malicious-modules/) and [sharing about](https://snyk.io/blog/how-much-do-we-really-know-about-how-packages-behave-on-the-npm-registry/)
{% twitter 1177489575603630082 %}
Then, of course, I used up the morning hours of the 2nd day to go through all the details, do some dry-runs, and joined at noon, just in time to stress out before my talk:
{% twitter 1177564450427166721 %}
And yes, if you had a doubt, I still stress out before a talk. It happens to me more in conferences than in meetups but still takes its toll on me :)
Some moments from the talk below. I'd ❤️ *love* ❤️ to chat with you about open source security in general and more specifically in the space of Node.js and JavaScript so please give me a holla at [@liran_tal](https://twitter.com/liran_tal) 🙌
{% twitter 1177572957847216130 %}
{% twitter 1177577375971860481 %}
# Summary
In closing words, I had such a great time and enjoyed the whole conference experience. Amazing job by the volunteers and organizers to run this event smoothly and with so much empathy for speakers and attendees alike. See you next year I hope ❤️🎉
Did you also attend JSConf Budapest? How did you like it? | lirantal |
181,378 | Containers Developer Summit in 10 pictures | On Friday, September 27 IBM Developer SF hosted a 1-day Containers Developer Summit. We started with... | 0 | 2019-10-02T22:22:49 | https://maxkatz.org/2019/10/01/containers-developer-summit-in-10-pictures/ | conference, containers, events | ---
title: Containers Developer Summit in 10 pictures
published: true
date: 2019-10-02 01:55:55 UTC
tags: Conference,Containers,Events
canonical_url: https://maxkatz.org/2019/10/01/containers-developer-summit-in-10-pictures/
---
On Friday, September 27 [IBM Developer SF](https://www.meetup.com/IBM-Developer-SF-Bay-Area-Meetup) hosted a 1-day [Containers Developer Summit](https://containersdevelopersummit.splashthat.com). We started with a hands-on workshop on how to build your first container-based application followed by talks from Alibaba, Kong, Robin.io and IBM. Here are 10 pictures from the summit.
<figcaption id="caption-attachment-10768">Dave Nugent is kicking off the summit</figcaption>
<figcaption id="caption-attachment-10765">Marek Sadowski is sharing the workshop portion of the summit</figcaption>
<figcaption id="caption-attachment-10770">Marek is showing how to start the workshop</figcaption>
<figcaption id="caption-attachment-10773">Hands-on workshop on how to build and deploy your first container-based application</figcaption>
<figcaption id="caption-attachment-10772">Michael Ka from NeuVector<br>is sharing how to detect and prevent containers and Kubernetes attacks</figcaption>
<figcaption id="caption-attachment-10766">Andy Shi from Alibaba gave a talk on how to secure containers</figcaption>
<figcaption id="caption-attachment-10769">Kevin Chen from Kong shared how to add authentication, load-balancing, traffic throttling, transformations, caching, metrics, and logging across Kubernetes clusters</figcaption>
<figcaption id="caption-attachment-10767">Anthony Amanse from IBM gave a talk on how to deploy containers on OpenShift</figcaption>
<figcaption id="caption-attachment-10771">Marek and Dave from IBM gave a debate-style and fun talk on Kubernetes vs. OpenShift</figcaption>
<figcaption id="caption-attachment-10774">Ravikumar Alluboyina from Robin.io gave a talk on 1-click deployment of Hadoop cluster on IBM IKS Platform</figcaption>
You can also [view pictures from our prior summits](https://maxkatz.org/category/10pictures/). | maxkatz |
181,471 | Smart web design. Part II: Customizable color theme in 10 minutes 🦜⏱ | How to organize colors using CSS variables to make you design customizable. | 2,340 | 2019-10-02T09:47:09 | https://dev.to/rumkin/smart-web-design-part-ii-customizable-color-theme-in-10-minutes-27ho | css, html, web, ui | ---
title: Smart web design. Part II: Customizable color theme in 10 minutes 🦜⏱
published: true
description: How to organize colors using CSS variables to make you design customizable.
tags: css, html, web, ui
series: Smart web design
cover_image: https://thepracticaldev.s3.amazonaws.com/i/m2917692brm7zgcfpjf2.jpg
---
A customizable color palette uses CSS variables for color values to make them easy to change via JS or CSS. But a good theme requires colors with shades.
## HSL vs RGB
The oldest and the most popular color model on the Web is RGB (Red, Green, Blue). It has a pretty simple and short syntax: `#ff0000` (red). But it's really hard to remember all the combinations of colors and manipulate them without a color picker.
But there is one color scheme which is extremely simple to work with! It's [HSL](https://en.wikipedia.org/wiki/HSL_and_HSV) (Hue, Saturation, Lightness):
* Hue tells whether the color will be blue, red, yellow, green or violet; it refers to the color wheel angle.
* Saturation tells how much hue is there, whether it is faded (0%) or bright (100%).
* Lightness tells whether it is dark (0%, black) or light (100%, white).
HSL represents colors in a more human way. Red in HSL is `hsl(0, 100%, 50%)`; to turn it into blue we just need to change the hue param: `hsl(200, 100%, 50%)`.
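As a tiny illustration (not from the article), here's a helper that builds HSL strings, showing that switching the color is a one-argument change:

```javascript
// Build an HSL color string; saturation and lightness default to
// a fully saturated, mid-lightness color.
function hsl(hue, saturation = 100, lightness = 50) {
  return `hsl(${hue}, ${saturation}%, ${lightness}%)`
}

hsl(0)   // 'hsl(0, 100%, 50%)', red
hsl(200) // 'hsl(200, 100%, 50%)', blue: only the hue changed
```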
## Theme Colors
Now we need to define theme colors. We will create a duotone theme, but you can use as many colors as you need, just don't overdo it. Let's call our colors major and minor. To get color shades we will change the lightness parameter. Shades will be named with numeric indexes from lightest to darkest, e.g. `--major-5` lightest and `--major-90` darkest.
```css
:root {
/* Major color is red */
--major-5: hsl(0, 100%, 95%);
--major-10: hsl(0, 100%, 90%);
--major-30: hsl(0, 100%, 70%);
--major-50: hsl(0, 100%, 50%);
--major-70: hsl(0, 100%, 30%);
--major-90: hsl(0, 100%, 10%);
/* Minor color is yellow */
--minor-5: hsl(50, 100%, 95%);
--minor-10: hsl(50, 100%, 90%);
--minor-30: hsl(50, 100%, 70%);
--minor-50: hsl(50, 100%, 50%);
--minor-70: hsl(50, 100%, 30%);
--minor-90: hsl(50, 100%, 10%);
--bg: white; /* background color */
--fg: black; /* text color */
}
```
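If you'd rather not write every shade variable by hand, they can also be generated from JS, since lightness is the only moving part. Here's a sketch (the `shadeVariables` helper name is my own; the steps match the CSS above):

```javascript
// Build a map of CSS custom properties for one hue. The index is
// 100 minus the lightness, so lightness 95% becomes --name-5 (the
// lightest shade) and lightness 10% becomes --name-90 (the darkest).
function shadeVariables(name, hue) {
  const vars = {}
  for (const lightness of [95, 90, 70, 50, 30, 10]) {
    vars[`--${name}-${100 - lightness}`] = `hsl(${hue}, 100%, ${lightness}%)`
  }
  return vars
}

// In the browser, apply them to :root via the CSSOM:
// Object.entries({ ...shadeVariables('major', 0), ...shadeVariables('minor', 50) })
//   .forEach(([prop, value]) =>
//     document.documentElement.style.setProperty(prop, value))
```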
Live demo:
{% jsfiddle https://jsfiddle.net/rumkin/y8pexrbu/60/ result,css,html,js %}
## Advanced theming
You can customize colors using custom css files in combination with media queries. For example printable version could be black and white only.
Basic `theme.css` file could look like this:
```css
:root {
--major-10: hsl(0, 100%, 90%);
/* ... */
--minor-10: hsl(50, 100%, 90%);
/* ... */
--bg: white;
--fg: black;
}
```
### Dark theme
Dark theme section in `theme.css`:
```css
@media (prefers-color-scheme: dark) {
:root {
--major-10: hsl(0, 100%, 90%);
/* ... */
--minor-10: hsl(50, 100%, 90%);
/* ... */
--bg: black;
--fg: white;
}
}
```
### Printable version
Printable theme section in `theme.css`:
```css
@media print {
:root {
--major-10: hsl(0, 0%, 90%);
/* ... */
--minor-10: hsl(50, 0%, 90%);
/* ... */
}
}
```
## Theme switching
There are a lot of options for how to implement a switcher, but the simplest one is a switcher that changes the order of link or style elements, moving the target theme's element to the end of the children list. This is how it is implemented in the live preview:
```js
function applyTheme(name) {
// Find style element with matching id
const style = document.getElementById(name + 'Theme')
// Move it to the end of child list
const parent = style.parentElement
parent.removeChild(style)
parent.appendChild(style)
  // Remember user's choice
localStorage.setItem('theme', name)
}
// Get switcher
const switcher = document.getElementById('themeSwitcher')
// React on select input value changed
switcher.addEventListener('change', (e) => {
  applyTheme(e.target.value)
})
// Select color in select input and apply theme selection
applyTheme(
  switcher.value = localStorage.getItem('theme') || switcher.value // fall back to the default option
)
```
## ⚠️ Accessibility
Please, don't forget to include high contrast themes in your design. It will be very helpful for people with color blindness.
## Conclusion
That's all. Now you know how simple it is to create a customizable CSS design for your site. Using [favicon-switcher](https://github.com/rumkin/favicon-switcher) from my previous article ["Smart web design. Part I: light/dark mode favicon."](https://dev.to/rumkin/smart-web-design-part-i-light-dark-mode-favicon-31f0) you will be able to make a really stunning design.
## Credits
Cover image by [David Clode](https://unsplash.com/@davidclode?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) on [Unsplash.com](https://unsplash.com/?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText). | rumkin |
181,641 | Answer: How do I generate a random 10 digit number in ruby? | answer re: How do I generate a random... | 0 | 2019-10-02T17:06:38 | https://dev.to/khalilgharbaoui/answer-how-do-i-generate-a-random-10-digit-number-in-ruby-5ej | ---
title: Answer: How do I generate a random 10 digit number in ruby?
published: true
---
{% stackoverflow 55379711 %} | khalilgharbaoui | |
181,665 | Puzzle Driven Development + Hacktoberfest = Contributor-friendly project | How Puzzle Driven Development can make your project more contributor friendly | 0 | 2019-10-02T19:09:25 | https://dev.to/dtinth/puzzle-driven-development-hacktoberfest-contributor-friendly-project-3k7a | hacktoberfest, opensource | ---
title: Puzzle Driven Development + Hacktoberfest = Contributor-friendly project
published: true
tags: hacktoberfest, opensource
description: How Puzzle Driven Development can make your project more contributor friendly
---
[**Puzzle Driven Development**](https://www.yegor256.com/2010/03/04/pdd.html), an Agile methodology invented by [Yegor Bugayenko](https://www.yegor256.com), is a way to break down a large task into smaller tasks using **TODO comments** (called “puzzles”) and an automation tool that converts them into GitHub issues where people can discuss and take on these work items.
The generated issues are **automatically closed** when the originating TODO comment is removed from the source code.
When working on each TODO item, **you have 1 hour to complete the task** (some teams prefer *30 minutes*). You do what you can, and the rest you put in more TODO comments as subtasks. (See the [original PDD article](https://www.yegor256.com/2010/03/04/pdd.html) for an example.)
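To make the idea concrete, a puzzle is just a marked TODO comment carrying a parent-task reference and a time estimate. The snippet below is modeled on the examples in the original PDD article; the exact marker syntax varies by tool, so check your tool's documentation:

```text
// @todo #55:30min The placeholder below renders static data.
//  Bind it to the live results store and remove this comment
//  so the tracker can close the generated issue.
```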
I’ve used this methodology with great success in the [ELECT Live](https://github.com/codeforthailand/election-live) project, an open-source website that shows the live score during 2019 Thai general election vote-counting. It is part of the [ELECT initiative](https://elect.in.th/about/). The general election was near and they were looking for contributors, so I volunteered to help out as a front-end architect in that project and decided to try out PDD.
As it turned out, PDD enabled us, in the 8 days before the general election, to work on the project and communicate what needed to be done without the usual overhead of having to track the status of issues manually, to attract tens of contributors, and to close hundreds of issues.

But why did it work so well?
Divide and conquer.
## It lets you break down big tasks into smaller tasks as you go, as needed
Instead of the lengthy process of breaking down tasks upfront (sprint planning anyone?), I quickly created a simple skeleton with a bunch of placeholders.

Now, the webpage has been broken down into multiple sections. Each section has a placeholder content with a TODO marker. **One big task has been broken down into multiple smaller puzzles.**
## Smaller tasks means your project becomes more contributor friendly
Instead of having to spend hours implementing a component according to the specification, we gradually evolve the component towards what we want. The part where we didn't finish, other people can help.

In the image above, you can see a placeholder box turn into a placeholder component, into a working (but ugly) component, and into a more beautiful component. In fact, at least 5 different people worked on this. Some contributors are totally new to the project.
It also allowed me to better focus on the more important things most of the time.
For example, most web developers know how to style a webpage. But only a few people know the architecture of the entire project. It would not be as efficient if these key people had to spend a big chunk of their time adjusting CSS to be pixel perfect.
So, at one point I wanted to focus on data-binding and getting the contents to render. I decided to work on just that part, and leave the aesthetics part as puzzles to be solved later. While I switched to work on another component, other developers jumped in and helped improve the aesthetics. More jumped in and improved the UX. And so on.
# PDD tools
Few tools are available to choose:
- **0pdd**, the original hosted tool: http://www.0pdd.com
> For ease of setup, I highly recommend this tool.
- **todo-actions** (by me): https://github.com/dtinth/todo-actions
> This one integrates with GitHub Actions and requires a MongoDB database. You’re in control of your data but setup is more complicated.
- *(know any other tool? please comment!)*
# And it’s Hacktoberfest!
Having learned about the effectiveness of PDD, I decided to try it on my side project, [**Bemuse**](https://github.com/bemusic/bemuse), a web-based open-source rhythm action game.
This project started in 2015, when `async function`s were not standardized yet. At that time, generators and promises were becoming the ES2015 standard, and so we used the [`co`](https://www.npmjs.com/package/co) library to convert generators into async functions.
Fast-forward to 2019, now we have `async function` built into Node.js and web browsers. So, it’s about time to convert the remaining usage of `co` to `async function`s!
But instead of creating a big and rather tedious issue of having to convert the whole codebase, I grepped for `'co'`, [sprinkled TODO comments into the source code](https://github.com/bemusic/bemuse/commit/a437f8c09d7c1691606522cd34c9d2881d122e55), and let a bot turn them into issues:
[](https://github.com/bemusic/bemuse/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc+label%3Ahacktoberfest)
Hm…… also now that React hooks are stable, maybe some of these components could be made simpler? 😉

---
**Update:** Woke up and here is my inbox:
 | dtinth |
181,668 | Minimize redux boilerplate with these 7 lines of code! | The Setup Have you ever worked with applications where you use redux for state management?... | 0 | 2019-10-03T12:51:42 | https://dev.to/kokaneka/minimize-redux-boilerplate-with-these-4-lines-of-code-5ak0 | redux, react, node, javascript | ### The Setup
Have you ever worked with applications where you use redux for state management? I am sure you have. It's beautiful how the framework lets us use one-way state flow by dispatching actions, making use of pure functions and immutability to provide a nearly perfect state management option for small/medium apps.
But there is an issue I have with redux: the boilerplate that comes associated with it.
### The Issue
Although redux is not opinionated, there is generally a standard way of doing things: write action creators, use functions like `mapStateToProps` and `mapDispatchToProps`, use the `connect` function, use thunk for async actions, etc.
One of those "standards" is the way in which one performs a simple request, success/failure operation on an API.
Here's the drill:
1. Create a 'REQUEST' action and dispatch it.
2. Make the network request.
3. On success, dispatch the 'SUCCESS' action with the payload.
4. On failure, dispatch the 'FAILURE' action with the error.
This is so common that the official Redux documentation has an entire article on how to minimize boilerplate for this pattern:
https://redux.js.org/recipes/reducing-boilerplate
### The actual Issue
But what if your problem statement does not fit into the straitjacket of the pattern mentioned above, and you are unable to use any of the solutions from that article? That was the case with my problem statement, and I wondered how I could still reduce my boilerplate.
Then I stumbled upon my constants.js file that held my action constants. It looked something like this:

and whenever I wanted to import actions, I did this:

Or worse still, in some cases the constants were imported like so:

### A better way
Here's how the constants file can be made smaller, more concise, and easier to read.
First, write a util function and call it something like:

Then, the constants.js file can look something like this:

And the constants can then be used in this manner:

or like this:

So, in this way, we can minimize at least the constants boilerplate that causes files to bloat up and makes the code less understandable. | kokaneka |
321,664 | Icons in a React project | When I'm working on a project that needs icons, I always reach for Nucleo icons. (No, they're not pay... | 0 | 2020-04-28T18:52:09 | https://boldoak.design/blog/icons-in-a-react-project/ | react, gatsby, svg, icons | When I'm working on a project that needs icons, I always reach for [Nucleo](https://nucleoapp.com/) icons. (No, they're not paying me. But they _are_ really good.) Both their native and web apps allow for easy exporting of the SVG, but the native app can also export in JSX, which is perfect for my blog which runs on Gatsby, which itself runs on React.
This website's component structure is pretty straightforward: all the icons are located in `src/components/icons`, each icon having its own file. For example, the "left arrow" icon is named `arrow-left.js`. Being JSX, all the icons have a similar structure. For example purposes, I'm going to use one of their free icons. It is a paid product, after all.
``` jsx
import React from 'react';
function Zoom(props) {
const title = props.title || "zoom";
return (
<svg height="24" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg">
<title>{title}</title>
<g fill="currentColor">
<path d="M23.061,20.939l-5.733-5.733a9.028,9.028,0,1,0-2.122,2.122l5.733,5.733ZM3,10a7,7,0,1,1,7,7A7.008,7.008,0,0,1,3,10Z" fill="currentColor"/>
</g>
</svg>
);
};
export default Zoom;
```
This is fine to start with, but my icon use within the website is often alongside text, like this:
``` jsx
<button type="button">
<Zoom />
Search
</button>
```
In this use case, the icon's default title will result in a screen reader interpreting the button text as "zoom search," which would be confusing. So I removed the `const title` line and modified the title element to include a ternary operator:
``` jsx
{!!props.title &&
<title>{props.title}</title>
}
```
This allows the title to only be written if it's included in the component's use, like this:
``` jsx
<Zoom title="search" />
```
In my above example, though, I also don't want the icon visible to screen readers at all. So I added the `aria-hidden` property, which also looks at the title:
``` jsx
<svg aria-hidden={!props.title}>
```
All of this is well and good for each icon, but I have to make these changes all over again whenever I add a new icon. (Okay, it's not _that_ often, but it's still tedious.) We can improve this and make it a little more DRY, right? Right?
With that in mind, I created a new file: `/src/components/icons.js`. Within this file, a single function returns the SVG icon framework:
``` jsx
const icon = (path, className, title) => {
return (
<svg className={`icon ${className}`} aria-hidden={!title} height="24" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg">
{!!title &&
<title>{title}</title>
}
<g fill="currentColor">
{path}
</g>
</svg>
)
}
```
It uses the default `.icon` class (which my CSS framework styles with default height, color, etc.) and accepts additional classes. It also uses the `title` argument to determine ARIA visibility and the title element. Most importantly, it also accepts a custom `path` which, of course, determines the icon's appearance.
The file exports all the icons used by my website. To do that, it returns the `icon` function call:
``` jsx
export const Zoom = (props) => {
return icon(paths.zoom, `icon--zoom${props.className ? ` ${props.className}` : ''}`, props.title)
}
```
You'll notice that the `path` is not defined here. Instead, I'm calling `paths.zoom` -- the constant `paths` is defined at the top of the file:
``` jsx
const paths = {
zoom: <path d="M23.061,20.939l-5.733-5.733a9.028,9.028,0,1,0-2.122,2.122l5.733,5.733ZM3,10a7,7,0,1,1,7,7A7.008,7.008,0,0,1,3,10Z" fill="currentColor"/>,
}
```
Every time I add a new icon, I copy its `path` and add it to this object and add a new export. It seems to me to be a little less work than adding a new file and making changes to it, but... I don't know. I'm open to suggestions.
The other added benefit to managing icons this way is importing them. With the icons all existing in separate files, including multiple icons looked something like this:
``` jsx
import { Heart } from "@icons/heart"
import { Clock } from "@icons/clock"
import { OpenExternal } from "@icons/open-external"
```
Now, importing multiple icons can be done on a single line:
``` jsx
import { Heart, Clock, OpenExternal } from "@icons"
```
I guess it's all about preference. There are many like it, as they say, and this one is mine. And speaking of preferences, I'm also simplifying my imports with the [`gatsby-plugin-alias-imports`](https://www.gatsbyjs.org/packages/gatsby-plugin-alias-imports/) plugin. I like it. 👍
_This post was originally published on [Bold Oak Design](https://boldoak.design/blog/icons-in-a-react-project/)._ | peiche |
181,884 | Fun with Unicode in Java | Normally developers don't pay much attention to character encoding in Java. However, when we crisscro... | 0 | 2019-10-03T06:14:19 | https://dev.to/maithilish/fun-with-unicode-in-java-47pn | unicode, encoding, java |
Normally developers don't pay much attention to character encoding in Java. However, when we crisscross between byte and char streams, things can get quite confusing unless we know the character set basics. Many tutorials and posts about character encoding are heavy on theory with few real examples. In this post, we try to demystify Unicode with easy-to-follow examples.
read the [Post ... ](https://www.codetab.org/post/java-unicode-basics/)
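The byte-vs-char confusion the post refers to comes down to which charset maps chars to bytes. A minimal illustration (my own example, not taken from the linked post):

```java
import java.nio.charset.StandardCharsets;

public class UnicodeDemo {
    public static void main(String[] args) {
        String s = "caf\u00e9"; // "café" -- 4 chars
        byte[] utf8 = s.getBytes(StandardCharsets.UTF_8);
        byte[] latin1 = s.getBytes(StandardCharsets.ISO_8859_1);
        System.out.println(utf8.length);   // 5: '\u00e9' takes two bytes in UTF-8
        System.out.println(latin1.length); // 4: one byte per char in Latin-1
        // Decoding UTF-8 bytes with the wrong charset garbles the text (mojibake):
        System.out.println(new String(utf8, StandardCharsets.ISO_8859_1));
    }
}
```

The same string yields different byte counts per charset, which is exactly where byte streams and char streams diverge.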
| maithilish |
181,951 | Nebular Hacktoberfest | Hacktoberfest is started a few days ago! And you still have no idea on how to start your amazing jou... | 0 | 2019-10-03T08:55:43 | https://dev.to/nikpoltoratsky/nebular-hacktoberfest-1gca | angular, hacktoberfest | Hacktoberfest started a few days ago! Do you still have no idea how to start your amazing journey into the open-source world and finally get your T-shirt? I have an answer for you. One of the best ways to start open-source contributions is to help us with [Nebular](https://github.com/akveo/nebular).
## What is Nebular?
Nebular is a customizable Angular 8 UI Library with a focus on beautiful design and the ability to adapt it to your brand easily. It comes with 4 stunning visual themes, a powerful theming engine with runtime theme switching and support of custom CSS properties mode. Nebular is based on Eva Design System specifications.
## How to start contributions?
- Open issues at the [GitHub repository](https://github.com/akveo/nebular)
- Search for issues for starters (issues which contain following labels on GitHub):
* help wanted
* good first issue
- Deal with those issues!
- Get your T-Shirt!
---
What's more important here, don't hesitate! Find interesting issues, deal with them and have fun! 🥳
Drop me a line if you have any questions - [@nikpoltoratsky](http://twitter.com/nikpoltoratsky) | nikpoltoratsky |
181,976 | Few Awesome CSS Snippets I Recently Learned | Center the absolute positioned content .center{ top : 50%; left : 50%; transform :... | 0 | 2019-10-31T14:15:18 | https://dev.to/3sanket3/few-awesome-css-snippets-i-recently-learned-33pb | css, beginners, webdev | # Center absolutely positioned content
```css
.center{
  position: absolute; /* required for top/left centering to apply */
  top: 50%;
  left: 50%;
  transform: translate(-50%, -50%);
}
```
Codepen: https://codepen.io/3sanket3/pen/LYYOWwV
# Maintain aspect ratio of the container
```css
.container{
  padding-bottom: 56.25%; /* aspect ratio 16:9 */
}
```
The percentage is calculated as h*100/w, so for 16:9 (w:h): 9*100/16 = 56.25%. Similarly,
- 1:1 => 100%
- 4:3 => 75%
Codepen: https://codepen.io/3sanket3/pen/dyyZWMX
# Truncate the text
```css
.truncate{
display:block;
white-space: nowrap;
overflow: hidden;
text-overflow: ellipsis;
}
```
Codepen : https://codepen.io/3sanket3/pen/OJJOmqB
# Set width to the remaining available space
It is useful when you have two adjacent containers: one has a fixed width, and the other should adjust to the remaining available space.
```css
.fix-width-container{
float: left;
width: 200px;
}
.adjustable-container{
  width: calc(100% - 200px); /* calc() requires spaces around the minus sign */
}
```
### Using CSS Variable
```css
/* Declaring the variable at root level; you can set it at a common parent level */
:root{
--left-pane-width : 200px;
}
.fix-width-container{
float: left;
width: var(--left-pane-width);
}
.adjustable-container{
  width: calc(100% - var(--left-pane-width));
}
```
Codepen: https://codepen.io/3sanket3/pen/qBBVjgy
# Preserve line breaks in the preview
It is useful when we want to show a preview of content the user entered in, say, a `<textarea>`.
```css
.preview{
white-space: pre-line;
}
```
With CSS, there is no end to learning. I would love to hear if you have other interesting snippets or links.
Icon Courtesy: https://thenounproject.com/term/css/60411/
| 3sanket3 |
182,045 | Learn to Program 03: The How | Finally we get to the how. How exactly will this series be structured to help you (the reader) learn... | 0 | 2019-10-03T11:55:42 | https://dev.to/pieterjoubert/learn-to-program-03-the-how-3ad8 | learning, beginners | Finally we get to the _how_. _How_ exactly will this series be structured to help you (the reader) learn programming? _How_ will you be able to measure your progress? _How_ exactly will this work?
The first answer concerns structure. The series will be structured as short posts (typically less than 5 min to read) that cover a _single_ topic, concept, etc. Attached to these posts will be small programs, where appropriate, so you can see and test the concept in question.
My advice would be to use a site like [Repl.it](https://repl.it/) in which to run and practise the programs provided. The examples will not be written in one language only. Different languages have different strengths and will be used in this way to showcase various aspects of programming. This is not a series in which you will learn _python_, or _rust_ or any specific language.
(As an aside, a _proper_ Computer Science professor like [Donald Knuth](https://en.wikipedia.org/wiki/Donald_Knuth) writes a whole new language just to teach programming. I am not anywhere close to that level, so my readers will need to be content with existing languages!)
The idea is for bite-sized pieces of content that you can read while standing in line at your local coffee joint. In the same way programmers will often tell you to keep your _methods_ to a size that fits on one computer screen, so too will I try to fit each post onto one screen (one screen while writing it; your reading experience might differ).
Finally, the entire process will follow (loosely), the _Socratic_ method. In other words each post will tend to end with a question or task, and the following post will start with a possible answer to that question or task.
This nicely leads to Task 1: Update the _rust_ example below to display your own name instead of mine:
```rust
fn main() {
println!("Hello Pieter!");
}
```
---
Link to [Main Content Post](https://dev.to/pieterjoubert/learning-to-program-02-the-what-4n2g). If you are confused about this post start here.
| pieterjoubert |
182,089 | How to Debug Node Serverless Using JetBrains WebStorm | One of the most useful tools in a developer's quiver is the debugger. The debugger allows a developer... | 0 | 2019-10-03T14:46:43 | https://tenmilesquare.com/how-to-debug-node-serverless-using-jetbrains-webstorm/ | javascript, serverless, webstorm, debugging | <!-- wp:paragraph -->
<p>One of the most useful tools in a developer's quiver is the debugger. The debugger allows a developer to not only step through code and track down bugs, but it is useful as a way to profile data structures. I find the ability to profile data structures to be extremely useful when working with scripting languages such as Python and Node. </p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>Recently I was working on a Node serverless project and had no idea what fields existed on the serverless lambda objects (event, context, callback). When I went looking how to debug serverless, I struggled to find a solution that detailed debugging serverless in JetBrains WebStorm. The following will get you started debugging node serverless using JetBrains WebStorm.</p>
<!-- /wp:paragraph -->

<!-- wp:paragraph -->
<p>There are a ton of tutorials on how to install node, serverless, WebStorm, so I've assumed you've already taken care of that. For the purpose of this tutorial, we will be using macOS Mojave. Some locations may vary depending on your OS.</p>
<!-- /wp:paragraph -->
<!-- wp:list {"ordered":true} -->
<ol><li>Create a new node configuration: In the toolbar click <strong>Run</strong> --> <strong>Edit Configurations...</strong></li></ol>
<!-- /wp:list -->

<!-- wp:paragraph -->
<p>2. Create a new node configuration by clicking <strong>+</strong> and then selecting <strong>Node.js</strong> from the dropdown</p>
<!-- /wp:paragraph -->
<!-- wp:image {"id":4828} -->
<figure class="wp-block-image"><img src="https://tenmilesquare.com/wp-content/uploads/2019/10/new-config-1024x633.jpg" alt="" class="wp-image-4828"/></figure>
<!-- /wp:image -->
<!-- wp:list -->
<ul><li>Fill in the configuration details<ul><li>Name: Anything you want</li><li>Working directory: This will default to the root of your project. Be sure it points to the directory with your serverless.js file</li><li>JavaScript file: this should point to the serverless binary: Typically /usr/local/bin/sls<br> If you do not know where sls is installed you can find it by typing <code><em><strong>which sls</strong></em></code> in the terminal</li><li>Application parameters: 'offline'<ul><li>Be sure to add any additional parameters you might need such as '-s local'</li></ul></li></ul></li></ul>
<!-- /wp:list -->
<!-- wp:image {"id":4830} -->
<figure class="wp-block-image"><img src="https://tenmilesquare.com/wp-content/uploads/2019/10/FIllin-config-1024x651.jpg" alt="" class="wp-image-4830"/></figure>
<!-- /wp:image -->
<!-- wp:paragraph -->
<p>If you launch the Configuration as debug, the WebStorm debugger will automatically be hooked into the node process.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>This debug configuration may be obvious to a seasoned node developer, but if you're a language transplant like me, you may need help getting started with debugging serverless using WebStorm. This configuration will definitely help you get started understanding the framework and squashing those pesky scripting bugs.</p>
<!-- /wp:paragraph --> | ryboflavin42 |
182,097 | Remembering that I once was a terrible programmer | I think it's pretty fair to say that as we grow older and wiser, we tend to forget where we came from. And I think this recency-bias shows up frequently in software engineering as well. | 0 | 2019-10-03T13:46:44 | https://browntreelabs.com/i-have-come-a-long-way/ | ruby, programming, life, inspiration | ---
title: Remembering that I once was a terrible programmer
published: true
description: I think it's pretty fair to say that as we grow older and wiser, we tend to forget where we came from. And I think this recency-bias shows up frequently in software engineering as well.
tags: ruby, programming, life, inspiration
canonical_url: https://browntreelabs.com/i-have-come-a-long-way/
---
I think it's pretty fair to say that as we grow older and wiser, we tend to forget where we came from. And I think this recency bias shows up frequently in software engineering as well. In software engineering, you tend to build on layers upon layers of skills that you accumulate over a long period of time. After being a programmer for, let's say, 10 years, you may forget how much you struggled early in the process. As you learn more advanced techniques, you tend to forget how much you may have struggled with even the simplest of programming concepts. I even find myself losing patience with new engineers from time to time, wondering how they couldn't understand some basic techniques that I take for granted every day.
### Checking our Hubris
I think this is a form of hubris -- excessive pride or self-confidence. As we grow more experienced in a given field, we gain more and more hubris. We tend to forget where we started, and feel more confident in the skills we've acquired. Some of you may think *"But Chris! I struggle with imposter syndrome all the time!"*, and I agree with you (I do as well). Software engineering is a very humbling field where one literally cannot learn enough. However, if you had 10+ years of experience and I asked you to write out a simple "fizzbuzz" problem on a whiteboard, you'd scoff at it, wouldn't you? (you know you would).
The point I'm trying to make is that some people are just starting out in their programming careers. And they may struggle with basic algorithms, fizzbuzz-type challenges, and -- as we'll see in a moment -- writing a simple controller in Ruby on Rails. I think its easy to forget where you came from, because I know I have.
> if you had 10+ years of experience and I asked you to write out a simple "fizzbuzz" problem on a whiteboard, you'd scoff at it, wouldn't you? (you know you would).
### On to REAPP -- a little diddy from 10 years ago
Recently, I received a [security vulnerability alert](https://help.github.com/en/articles/about-security-alerts-for-vulnerable-dependencies) for one of my private repositories on GitHub. I recognized the name of the project, but I don't remember the last time I opened the codebase. The vulnerability alert came from a repository called "REAPP" (it stood for real estate application).
This application was my first attempt to build a Property Management platform -- a SaaS product that allows landlords/property managers to accept rent, manage tenants, and more. I have since re-written this application many times over the last 10 years or so, but this was its first iteration. Clicking around in the codebase, I remembered how hard I worked to get this code to do much of anything. Specifically, I can remember a "properties" controller I wrote, and the struggle I had with it. I am pretty sure this controller made me abandon the project, because it was just too hard of a problem to solve.
### The controller that ended this project
Let me just paste the code below, for you all to enjoy. Continue below the fold for my analysis and thoughts.
```ruby
# GET /properties/1
# GET /properties/1.xml
def show
@property = Property.find(params[:id])
#Handing tenants
@tenants = @property.tenants.all
if @tenants.empty?
flash.now[:success] = "Weclome to LivingRoom!\r\n
We noticed you don't have any tenants set up, please find the 'tenants' box on the right hand side of your screen
and be sure to add your first tenant before anything else.
Thanks!"
end
#handling incomes, and the income chart series data
@incomes_all = @property.incomes.all
@incomes_full = @property.incomes
@incomes = @property.incomes.paginate(:page => params[:income_page], :per_page => 5)
@incomes_sum = @incomes_all.sum(&:income_amount)
@income_chart_monthly = incomes_chart_series_monthly(@incomes_full, 4.months.ago)
#handinling expenses and the expenses chart series data
@expenses_all = @property.expenses.all
@expenses_full = @property.expenses
@expenses = @property.expenses.paginate(:page => params[:expense_page], :per_page => 5)
@expenses_sum = @expenses_all.sum(&:expense_value)
@expense_chart_monthly = expenses_chart_series_monthly(@expenses_full, 4.months.ago)
@net_series = net_series_monthly()
#zillow chart stuff
@zillow_chart_url = zillow_chart()
@valuation = zillow_value()
end
def zillow_chart
@rillow = Rillow.new("MY-ZILLOW-CREDS")
@zillow_search = @rillow.get_search_results(@property.street_address,
@property.city+","+ @property.state)
@zillow_property_id = @zillow_search.find_attribute("zpid")
@zillow_property_id = @zillow_property_id.join
@z_chart = @rillow.get_chart(@zillow_property_id, "percent", :width => 200, :height => 130, :chart_duration => "5years")
@result_url = @z_chart.find_attribute("url")
return @result_url.join
end
def zillow_value
@rillow = Rillow.new("MY-ZILLOW-CREDS")
@zillow_search = @rillow.get_search_results(@property.street_address,
@property.city+","+ @property.state)
@zillow_search = @zillow_search.to_hash
@value = @zillow_search.find_attribute("valuationRange")
return @value
end
# GET /properties/1
# GET /properties/1.xml
def show_expenses
@property = Property.find(params[:id])
@tenants = @property.tenants.all
@expenses_full = @property.expenses
@expenses = @expenses_full.paginate(:page => params[:page], :per_page => 10)
@expense_chart_monthly = expenses_chart_series_monthly(@expenses_full, 4.months.ago)
end
# GET /properties/1
# GET /properties/1.xml
def show_incomes
@property = Property.find(params[:id])
@tenants = @property.tenants.all
@incomes_full = @property.incomes
@incomes = @incomes_full.paginate(:page => params[:page], :per_page => 10)
@income_chart_monthly = incomes_chart_series_monthly(@incomes_full, 4.months.ago)
end
# GET /properties/1
# GET /properties/1.xml
def show_tenants
@property = Property.find(params[:id])
respond_to do |format|
format.html # show.html.erb
format.xml { render :xml => @property }
end
end
```
### Let's talk about instance variables
I don't think I knew what the difference was between an instance variable and a regular variable. I definitely felt the need to make *almost everything* an instance variable, regardless of my understanding.
### How many variables are needed for income?
I pasted the `show` action in this controller for a reason: it's god awful. I'm not sure why I needed 5 different instance variables to represent incomes:
```ruby
@incomes_all = @property.incomes.all
@incomes_full = @property.incomes
@incomes = @property.incomes.paginate(:page => params[:income_page], :per_page => 5)
@incomes_sum = @incomes_all.sum(&:income_amount)
@income_chart_monthly = incomes_chart_series_monthly(@incomes_full, 4.months.ago)
```
I *most likely* just wanted to use the paginated incomes for a property, not *all* of the incomes. From there, I could have pulled the chart data, and had it update when a user changed the query param for page. That would have been nice.
### What's a service object?
I like this part of the controller:
```ruby
@zillow_chart_url = zillow_chart()
@valuation = zillow_value()
```
First of all, these names make no sense. Why would I make a method named
`zillow_chart` that returns a url? These, and other methods, should have been placed in their own service objects to encapsulate different behaviors and concerns.
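For illustration, here's roughly what a service object could have looked like. Everything here is hypothetical (the class name, the injected client); the original called the Rillow gem inline with hard-coded credentials:

```ruby
# Hypothetical service object wrapping the Zillow valuation lookup.
# The API client is injected, so credentials live in configuration and the
# controller action shrinks to something like:
#   @valuation = ZillowLookup.new(zillow_client).valuation(@property)
class ZillowLookup
  def initialize(client)
    @client = client
  end

  # Returns the valuation range for a property's address.
  def valuation(property)
    address = "#{property.street_address}, #{property.city}, #{property.state}"
    @client.search(address).fetch("valuationRange")
  end
end
```

Beyond tidier controllers, the injected client makes the behavior trivially testable with a stub instead of a live API call.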
### Conclusion. We were all like this, once
This was a fun trip down memory lane. But I think some lessons can be learned here. Whenever I find myself frustrated reviewing a junior developer's code, I should remember that I once wrote an insane controller with 40+ instance variables, no separation of concerns, and -- oh yeah -- INLINE API CREDENTIALS. I am also pretty sure this was completely untested. Actually let me check ... yup, no tests whatsoever. 😊
| cpow |
182,130 | Looking for Advice: Best Method for Adding Fill-in-the-Blank Functionality to WordPress Plugin | Hi All, I'm building a new WordPress plugin for interactive fill-in-the-blank notes. Goal: Anyone wh... | 0 | 2019-10-03T14:58:12 | https://dev.to/davidshq/looking-for-advice-best-method-for-adding-fill-in-the-blank-functionality-to-wordpress-plugin-26om | wordpress, php, javascript | Hi All,
I'm building a new WordPress plugin for interactive fill-in-the-blank notes. Goal: Anyone who is presenting information and likes to use fill-in-the-blank style handouts could do so using this plugin (without the need for printed copies while retaining interactivity).
My rough plan is to:
1. Create a Custom Post Type.
2. Provide an editor for text notes (and images / other media) to be added by the presenter.(a)
3. Presenters would select a word/phrase they wanted to blank out.(b)
4. Participants at the presentation could pull up the notes on their phone/laptop and fill out the blanks as the presentation occurs.(c)
5. At the end of the presentation participants could enter their email address and have the notes (including their filled out answers and perhaps the correct ones if they differ) emailed to them.
It is a pretty simple/straightforward plugin and this is essentially an MVP. I'd love to add additional features in the future (save to PDF, Evernote, Google Drive, etc.; easy analytics for the presenter to gauge actual participation, etc.).
There are three areas in particular that I'm looking to gain additional perspective on, I've noted them as (a), (b), and (c) above. Below I provide additional details on these along with questions I have.
(a) I could use the Gutenberg block editor to create a block that handles this OR I could use the traditional TinyMCE non-Gutenberg editor. Thoughts on advantages of one over the other? Am I missing any additional (better) options?
(b) Ideally there would be several ways to do this, e.g., selecting the word and using a key combination (say Ctrl+Shift+B) or an editor button. Once this is done, CSS will be used to show the selected word/phrase as blank on the front end.
This could be done using a shortcode, like:
The best programming website is [lqdnotes]dev.to[/lqdnotes]
Or it could be done using an html attribute like:
The best programming website is <span class="lqd_blank">dev.to</span>
I tend to prefer the latter as it doesn't clutter the editor interface notes with shortcodes. Any reason to go with the former or another method altogether?
(c) When the front-end page is displayed, the parser would render the text as-is except when it finds a blanked word, in which case it would replace the blanked word with a form text box styled to look as if it were in the regular flow of the page (e.g., no borders, sized with the rest of the text, no difference in color, etc.).
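As a sketch of that parser step, a naive string replacement could look like this (the `lqd_blank` class comes from the example above; the input markup, `data-answer` attribute, and regex approach are all just my assumptions):

```javascript
// Hypothetical front-end step: swap each blanked span for a text input,
// stashing the correct answer in a data attribute for later checking/emailing.
// (Naive: assumes answers contain no quotes or HTML.)
function renderBlanks(html) {
  return html.replace(
    /<span class="lqd_blank">(.*?)<\/span>/g,
    (_match, answer) =>
      `<input type="text" class="lqd_blank_input" data-answer="${answer}" size="${answer.length}">`
  );
}

const note = 'The best programming website is <span class="lqd_blank">dev.to</span>';
console.log(renderBlanks(note));
// The best programming website is <input type="text" class="lqd_blank_input" data-answer="dev.to" size="6">
```

In practice you'd likely do this server-side in PHP when rendering the custom post type, but the shape of the transformation is the same.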
Are there any better ways to accomplish this? Ideas you'd suggest?
I'd love to hear your thoughts!
| davidshq |
182,145 | Git Workflow: How much change is too much change for one commit for you? | Sometimes we need to fix a typo and sometimes we need to hunt down a particularly ugly bug. How much change to your code is too much change for one commit? What do you use to decide when to commit changes? | 0 | 2019-10-03T15:32:29 | https://dev.to/daveskull81/git-workflow-how-much-change-is-too-much-change-for-one-commit-for-you-2kd6 | git, discuss, opensource | ---
title: "Git Workflow: How much change is too much change for one commit for you?"
published: true
description: Sometimes we need to fix a typo and sometimes we need to hunt down a particularly ugly bug. How much change to your code is too much change for one commit? What do you use to decide when to commit changes?
tags: git,discuss,opensource
cover_image: https://thepracticaldev.s3.amazonaws.com/i/1e4x1hvxcy1g78sdk96a.jpg
---
>Cover Photo by [Yancy Min](https://unsplash.com/@yancymin) on [Unsplash](https://unsplash.com)
As I work more and more with git and GitHub I have become really interested in the version control processes of individuals and teams for developing software.
{% link daveskull81/git-workflow-do-you-commit-to-master-on-your-solo-projects-hi4 %}
Today, I am wondering about how much change is too much change for a single commit? What do you use to decide when to commit? When is it the right time for you to commit changes?
In developing software we are tasked with a variety of changes or fixes to make to an application. Sometimes it is just a typo or changing the color of a button. Sometimes we are tasked with hunting down a really ugly, difficult to reproduce bug that will take a few days to figure out and solve.
The ability to make commits with git allows us to pull the changes we make out of our application when needed. But, if the change is so drastic it can have serious consequences if the code is removed, especially when other code has come to depend on our changes.
How do you decide you have done enough change to a codebase that you should commit those changes? Is there a point for you when there is too much change for a single commit? How do you make that decision? Do you ever think about what would happen if this commit was rolled back and how the codebase would handle that when you make this decision? | daveskull81 |
182,200 | Testing Biometrics in Android Apps | How to automate your test for biometric apps with HeadSpin biometric SDK. | 0 | 2019-10-03T19:19:26 | https://dev.to/pancy/testing-biometrics-in-android-apps-5hji | android, testing, java, kotlin | ---
title: Testing Biometrics in Android Apps
published: true
description: How to automate your test for biometric apps with HeadSpin biometric SDK.
tags: android, testing, java, kotlin
cover_image: https://miro.medium.com/max/11350/1*X5gq7HQ_oTcBjUjPm1EmgA.jpeg
---
**Disclaimer: I’m working at [HeadSpin](https://headspin.io) developing SDKs and developer tools to make app-testing awesome.**
Biometrics have become increasingly vital to the digital economy. In China, some grocery stores offer face recognition at checkout instead of cash or credit card payment. Apps are using biometric authentication to give users a more secure and smoother experience when accessing information.
If you have been writing automated tests for Android apps, chances are you are not new to Appium and the use of XPath API to query app components and simulate users’ interactions.
However, if your app incorporates a biometric authentication, which has become more common even for non-financial apps, it is not possible to automate your way in. Unless you can programmatically simulate a fingerprint impression on the device (you cannot), there is no way to herald your test parade through the biometric gate without **manually pressing your finger on the device**.
One way you can think of is to write a dedicated mock activity that fakes the whole biometric charade. But then you are only testing your own mock, because what you fake in the mock is what you test, not the actual behavior.
## Enter HeadSpin Biometric SDK
At [HeadSpin](https://headspin.io), we make testing mobile apps simple. We think it should be easy to test your apps, because nobody wants to spend as much development time fiddling with the tests as with the app itself. HeadSpin wants developers to focus on building apps and delighting their customers all over the world.
We came up with a developer-friendly solution to testing biometric apps on Android — an Android library! All you have to do is import a component from the library, swap it with whatever you're using in your app code, run on a **real device**, and start remotely controlling the biometric authentication in your app through our provided HTTP endpoints. Yes, the good old REST API everyone knows and loves.
Check out the demo video below.
{% youtube 3tA3Bk3ASfk %}<figcaption>I was able to remotely log into my test app without having to use my fingerprint.</figcaption>
Using HeadSpin SDK's version of `FingerprintManager`, I was able to remotely send an HTTP POST request to one of the REST endpoints provided by HeadSpin's platform to authenticate my app, as shown above.
Here is a snippet of a demo activity using HeadSpin’s `HSFingerprintManager` to enable remote biometric authentication instead of Android’s `FingerprintManager`. It took me only 2–3 lines of code to swap in the HS component, and the app can be authenticated normally as well as remotely.
{% gist https://gist.github.com/jochasinga/d1a1d5a3dfd4ef9bf7eb3ed5c7e04dca %}
<figcaption>DemoFingerprintActivity.java</figcaption>
### Again, it's worth noting this is accomplished without human intervention.🤯
If you are looking into automating tests for biometric apps, look no further than [HeadSpin](https://headspin.io).
P.S. I'm also working on support for the new [BiometricPrompt](https://developer.android.com/reference/android/hardware/biometrics/BiometricPrompt) and [AndroidX](https://developer.android.com/reference/androidx/biometric/BiometricPrompt) APIs for apps targeting Android P and above in the next release of the SDK, so it's looking exciting!
| pancy |
182,236 | How to use MSAL for signup in a web application | MSAL (Microsoft authentication library) is the most modern way to connect with servises, that Misroso... | 0 | 2019-10-03T21:12:06 | https://dev.to/yababay/how-to-use-msal-for-signup-in-a-web-application-5g0p | MSAL (Microsoft Authentication Library) is the most modern way to connect to the services that Microsoft provides. Oh boy... I will be writing this text till morning if I continue in English... Could you guys excuse me if I use my native Russian to finish the article? Thank you! I was sure that this is the friendliest community in the world! To be honest, we all read such texts only for the code examples, don't we? So I guarantee that the code will not be written in Russian (though it is in CoffeeScript, and one can't say which is worse to read :)
So: [MSAL](https://github.com/AzureAD/microsoft-authentication-library-for-js) is a library you can use to connect to Microsoft services. In fact, not just can but should, since Microsoft is abandoning its earlier authentication technologies (such as ADAL) in favor of this fresh one. MSAL is integrated into MS Graph, the system that ties all of Microsoft's services together.
Now about motivation. Why would I, a convinced long-time Linux supporter who is rather skeptical about Microsoft technologies, need one of them? For a simple reason: I don't want to build my own authentication service on my online-learning site. All I need is to send a newly registered user an invitation email. For that there is no need to ask for a login, a password, or any other data. You can simply use ready-made services (Google, Microsoft, Yandex, VK) and, by "passing" the new user through them, obtain an email address. Quite a typical approach these days.
In short, MSAL is one of the technologies I use to register users. It is not the simplest of those I have employed for authentication, but I figured it out in the end, and here I am sharing the experience, which may well come in handy for someone, since there is little sensible documentation on MSAL even in English. The technology is still new, developing rapidly, and, I would even say, somewhat raw.
The first thing you need to do is include MSAL on the page where the authentication process starts. As always, this is done by embedding the corresponding external script into the HTML markup:
``` html
<script src="https://secure.aadcdn.microsoftonline-p.com/lib/1.1.2/js/msal.min.js"></script>
```
By the way, as I already noted, the library is developing rapidly, and there is already at least version `1.1.3`. I can't vouch for it or anything newer; I will describe the code that actually works for me.
Before demonstrating MSAL's capabilities, two clarifications. First, I rarely use modern frameworks like React, Angular or Vue (the latter a bit more often). The web applications I build are simple enough to get by with "vanilla" JavaScript, so I use CoffeeScript, which lets me write more concise code. For the backend, of course, I don't write in it, since there you can't do without the capabilities of ES6. But on the frontend CoffeeScript still remains an excellent tool that speeds up development. (It even seems to me that ES6 features such as arrow functions and a few others were borrowed precisely from CoffeeScript, where they appeared long before 2015.) So, as I warned: those who weren't scared off by the Russian this article was written in now face one more trial:
``` coffeescript
switch @.location.pathname
when '/signup-msal.html'
setTimeout ()->
req = scopes: ['user.read']
msal = new Msal.UserAgentApplication
auth:
clientId: '852e6552-blah-blah-blah'
redirectUri: 'https://blah-blah.ru/json/invite-with-msal'
msal.handleRedirectCallback (err, res)->
msal.acquireTokenSilent req
.then (res)->
headers = new Headers 'Authorization': "Bearer #{res.accessToken}"
options = method: "GET", headers: headers
fetch 'https://graph.microsoft.com/v1.0/me', options
.then (res)-> res.json()
.then (obj)->
query = "to=#{encodeURIComponent obj.userPrincipalName}&name=#{encodeURIComponent obj.displayName}"
window.location = "/json/invite-with-email?#{query}"
if not msal.getAccount()
msal.loginRedirect req
return
, 1000
when '/json/invite-with-msal'
if not /token/.test @.location.hash
alert 'Авторизация через сервис Microsoft не удалась.'
@.location = '/signup.html'
return
new Msal.UserAgentApplication
auth:
clientId: "852e6552-blah-blah-blah"
redirectUri: "https://blah-blah.ru/json/invite-with-msal"
```
We could stop right here: substitute your own application ID for `852e6552-blah-blah-blah` in this code and your own server address for `blah-blah.ru`, and you get a working system that finds out the email of a user who is signing up and sends them a letter inviting them to use your services. Explanations are needed, though, because I myself, having googled a pile of examples of "working code", managed to make it actually work only with great difficulty. I don't think the reason is any mental deficiency on my part; more likely it is the lack of documentation, which I am making up for with this article.
So then. Why are there two (actually three) "entry points" in this script:
```
when '/signup-msal.html'
```
and
```
when '/json/invite-with-msal'
```
?
It's so we "don't make two trips". The same script is used on two web pages. The first one (`/signup-msal.html`) is an ordinary static page. From it, a user who is not logged in is redirected to certain special sites, where it is determined:
* whether they are a registered user of Microsoft-owned services such as Skype, Outlook, etc. If not, they will be offered to register;
* whether the registered user has signed in. If not, they will be offered to log in;
* whether the user trusts your web application. They are using it for the first time, after all, so they have to confirm that they agree to some information about them being handed over to your site.
Having passed through this whole chain, the user is sent by Microsoft's services back to your site, in our example to `/json/invite-with-msal`. Here the situation is a bit more complicated. We load the library and create an MSAL instance here too, but, unlike on `/signup-msal.html`, we don't use its capabilities explicitly; we let it do whatever it considers necessary. And what it does, by all appearances (I haven't verified it exactly, but I can guess), is the following: it extracts the token and other information useful to it from the URL (`location.hash`), saves it to `sessionStorage`, and in general does something after which requesting the email address becomes possible. If you peek into `sessionStorage` after all this frantic activity, you will see a lot of long, complex entries there, which indicates that the authentication finished successfully. By the way, while debugging you can delete the entries from `sessionStorage` and thereby make the user "unregistered" again.
It is easy to see that at the first stage we are dealing with a static `html` file, and at the second with some active part of the backend (in my case written in JavaScript for the Node.js environment). This is a convenient moment to make sure that the request really came from Microsoft's servers (`login.microsoftonline.com`, `account.live.com`, etc.) and not from hackers trying to break into your site. In other words, requests arriving at this URL from anything other than Microsoft's servers should be bounced without hesitation, for example [over here](https://www.youtube.com/watch?v=s23OLr0ZNPI) (Russians will get it :).
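That server-side origin check can be sketched in plain JavaScript (a hypothetical helper, not part of the article's code; the function name and host list are assumptions):

```javascript
// Hosts we expect Microsoft's login flow to redirect from.
const TRUSTED_HOSTS = ['login.microsoftonline.com', 'account.live.com'];

// Returns true only when the Referer header points at a trusted Microsoft host.
function isTrustedReferrer(referrer) {
  try {
    return TRUSTED_HOSTS.includes(new URL(referrer).hostname);
  } catch (err) {
    return false; // missing or malformed Referer header
  }
}

console.log(isTrustedReferrer('https://login.microsoftonline.com/common/oauth2')); // true
console.log(isTrustedReferrer('https://evil.example.com/'));                       // false
```

In an Express handler you would call such a check first and reject the request when it returns false.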
Next, once we know the request came from Microsoft's servers, we prepare the third entry point (`/json/invite-with-email`) to receive the email, for example by allowing requests to it for the next 10 seconds. Meanwhile the script on the `/json/invite-with-msal` page, having done its work and stuffed `sessionStorage` with the necessary innards, hands control back to the beginning of the process, i.e. to `/signup-msal.html`. But the situation there has changed by now. If the browser was not authorized the first time around, it has now received the credentials needed to obtain information about the user (an access token). It is no longer redirected along the chain of Microsoft servers; instead it makes a direct request to `https://graph.microsoft.com/v1.0/me`, receives the necessary information about the user (including the email), and sends a request with the personal data to `/json/invite-with-email`. And that is where the user's data gets written into the server's database, the invitation email is sent to them, and so on.
A working example can be seen [here](https://js-invite.ru).
This is my first article on `dev.to`. I don't know how many Russian-speaking users there are here, but if this format (text in Russian) turns out to be in demand, I am ready to write similar pieces about authentication with Google and Yandex. I don't know English well enough to lay out long arguments in it coherently, sorry — or, as we've been saying here in recent years, "soryan" (from the English sorry). I can read English, and even speak and write a little, but I have little practice communicating, so English is my weak spot. | yababay |
182,661 | Neo - the Savior of Humanity | Very encouraging news: hurrying to replace the veteran Vim is NeoVim - a code editor... | 0 | 2019-10-04T15:25:13 | https://dev.to/ryanlanciaux/neo-vim-for-web-development-56n9 | javascript, programming, coding | ---
title: Neo - the Savior of Humanity
published: true
date: 2019-10-03 16:11:00 UTC
tags: JavaScript,Programming,Coding
canonical_url: https://dev.to/ryanlanciaux/neo-vim-for-web-development-56n9
---
Very [encouraging news](https://dev.to/ryanlanciaux/neo-vim-for-web-development-56n9): hurrying to replace the veteran Vim is NeoVim - a code editor that does all the same things, only "more better". This is good, because only about five people in the world know how Vim works on the inside. Should anything happen to them, the world would be plunged into darkness. And now, it turns out, fresh forces have arrived! I'll have to give it a try. | yababay |
182,836 | Yarn Workspaces does not Honor .npmrc Location Precedence: Implications and Possible Solutions | Yarn Workspaces has a bug that does not respect the location precedence of .npmrc / .yarnrc files to configure registry settings if you run a yarn command in a selected workspace. | 0 | 2019-10-04T14:33:48 | https://doppelmutzi.github.io/yarn-workspaces-bug/ | yarn, yarnworkspaces, troubleshooting | ---
title: Yarn Workspaces does not Honor .npmrc Location Precedence: Implications and Possible Solutions
published: true
description: Yarn Workspaces has a bug that does not respect the location precedence of .npmrc / .yarnrc files to configure registry settings if you run a yarn command in a selected workspace.
canonical_url: https://doppelmutzi.github.io/yarn-workspaces-bug/
tags: yarn,yarnworkspaces,troubleshooting
---
Yarn Workspaces has a bug that does not respect the location precedence of .npmrc / .yarnrc files to configure registry settings if you [run a yarn command in a selected workspace](https://yarnpkg.com/lang/en/docs/cli/workspace/). Consider the following situation:
- A _.npmrc_ file located at home folder specifies a [registry](https://docs.npmjs.com/configuring-your-registry-settings-as-an-npm-enterprise-user) entry to use a private npm registry.
- A _.npmrc_ file located at a project root specifies a registry entry to target a public npm registry like this `registry=https://registry.npmjs.org`.
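For concreteness, the two files from the bullets above could look like this (the private registry URL is a placeholder assumption, not from the original post):

```ini
# ~/.npmrc - hypothetical company-wide setup
registry=https://artifactory.example.com/api/npm/npm-virtual/

# <project-root>/.npmrc - per-project override
registry=https://registry.npmjs.org
```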
In my current project, I have exactly this situation. By default, projects need a registry setup that points to the internal [Artifactory](https://jfrog.com/artifactory/). One project, however, requires a setup targeting the public npm registry. The problem in this project is that adding a dependency to a specific yarn workspaces package with the following command uses the wrong registry setup (the one from `~/.npmrc` instead of the `.npmrc` file located at the project root):
```bash
$ yarn workspace package-a add @rooks/use-previous
```
The result is that wrong URLs (targeting the private registry) are put into the `yarn.lock` file.
However, if you add a dependency globally with the `-W` flag, then the `.npmrc` precedence is honored:
```bash
$ yarn add @rooks/use-previous -W
```
This bug seems to exist [for a very long time](https://github.com/yarnpkg/yarn/issues/4458).
The following workarounds are possible:
- Use the `--registry` flag
```bash
$ yarn workspace package-a add @rooks/use-previous --registry 'https://registry.yarnpkg.com'
```
- Manually add the dependency to the `package.json` of _package-a_ and run `yarn install` from the root folder of the project.
- Copy `~/.npmrc` to the root folder of every project that needs this registry setup and delete `~/.npmrc`. If you have private settings (e.g., your user credentials) in the file, take care not to push the file to version control (add it to `.gitignore`).
- Don't use yarn workspaces. E.g., use [Lerna](https://github.com/lerna/lerna) with npm.
| doppelmutzi |
182,841 | Be more productive with these tools! ☔️ November picks for you | Here we are for another round of interesting libraries!! Let's see what the month of November will b... | 1,536 | 2019-11-06T06:54:08 | https://dev.to/paco_ita/be-more-productive-with-these-tools-november-picks-for-you-13hh | productivity, webdev, javascript, design | Here we are for another round of interesting libraries!! Let's see what the month of November will bring us. :tada:

[Compressorjs](https://fengyuanchen.github.io/compressorjs/) is a library to compress images, as the name suggests :smile:.
It uses the HTMLCanvasElement.toBlob API for the compression process.
A Blob object is created, representing the image contained in the canvas.
**Usage:**
```html
<input type="file" id="file" accept="image/*">
```
```javascript
import axios from 'axios';
import Compressor from 'compressorjs';
document.getElementById('file').addEventListener('change', (e) => {
const file = e.target.files[0];
if (!file) {
return;
}
new Compressor(file, {
quality: 0.6,
success(result) {
const formData = new FormData();
// The third parameter is required for server
formData.append('file', result, result.name);
// Send the compressed image file to server with XMLHttpRequest.
axios.post('/path/to/upload', formData).then(() => {
console.log('Upload success');
});
},
error(err) {
console.log(err.message);
},
});
});
```
There are different [options](https://github.com/fengyuanchen/compressorjs#options) available to set, for instance, maximum sizes or the quality of the output image. The results I tried are pretty good, with compression around 70% and still no significant quality loss.

You can play with the [DEMO](https://fengyuanchen.github.io/compressorjs/) on the website.
-----------------------------

[Pagemap](https://larsjung.de/pagemap/) is an interesting library that lets you create a minimap for your site, similar to the one in some code editors like VS Code. It can be especially useful for pages with a lot of text content.

The usage is pretty straightforward:
- Add a canvas tag to your HTML page:
```html
<canvas id='map'></canvas>
```
- Fix the position on the screen (here top right):
```css
#map {
position: fixed;
top: 0;
right: 0;
width: 200px;
height: 100%;
z-index: 100;
}
```
- Init and style the mini map according to your elements:
```javascript
pagemap(document.querySelector('#map'), {
viewport: null,
styles: {
'header,footer,section,article': 'rgba(0,0,0,0.08)',
'h1,a': 'rgba(0,0,0,0.10)',
'h2,h3,h4': 'rgba(0,0,0,0.08)'
},
back: 'rgba(0,0,0,0.02)',
view: 'rgba(0,0,0,0.05)',
drag: 'rgba(0,0,0,0.10)',
interval: null
});
```
Here a [DEMO](https://larsjung.de/pagemap/latest/demo/text.html).
------------------------------

The Mailgo library automatically opens a modal dialog when we click on `mailto:` and `tel:` links. It can redirect directly to Gmail or Outlook for emails, and to Telegram, WhatsApp or Skype for phone numbers.
**Usage:**
```html
<a href="mailto:mymail@gmail.com">mymail@gmail.com</a>
```
If you are worried about exposing your email address to potential spam, you can split it using the `data-address` and `data-domain` attributes:
```html
<a href="#mailgo" data-address="mymail" data-domain="gmail.com">write me!</a>
```
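For illustration, here is how the two attributes could be recombined into a full address (a sketch of the idea, not mailgo's actual internals; `buildMailto` is a made-up helper):

```javascript
// Hypothetical helper: rebuild the mailto link from the split attributes,
// mirroring what a library could do with data-address and data-domain.
function buildMailto(address, domain) {
  return `mailto:${address}@${domain}`;
}

console.log(buildMailto('mymail', 'gmail.com')); // mailto:mymail@gmail.com
```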
Click on the links of the demo to give it a try:
{% codepen https://codepen.io/manzinello/pen/RmeQEr %}
------------------------------

[Vant](https://youzan.github.io/vant/#/en-US/intro) is a library of UI components built for mobile applications, based on Vue.js. It offers a long list of components, like action sheets, each providing its own methods and options.
Below an example with the Card component:
```html
<!-- Basic Usage -->
<van-card
num="2"
price="2.00"
title="Title"
desc="Description"
thumb="https://img.yzcdn.cn/vant/t-thirt.jpg"
/>
<!-- Discount info -->
<van-card
num="2"
tag="Tag"
price="2.00"
title="Title"
desc="Description"
origin-price="10.00"
thumb="https://img.yzcdn.cn/vant/t-thirt.jpg"
/>
<!-- Custom Card -->
<van-card
num="2"
title="Title"
desc="Description"
price="2.00"
thumb="https://img.yzcdn.cn/vant/t-thirt.jpg"
>
<div slot="tags">
<van-tag plain type="danger">Tag</van-tag>
<van-tag plain type="danger">Tag</van-tag>
</div>
<div slot="footer">
<van-button size="mini">Button</van-button>
<van-button size="mini">Button</van-button>
</div>
</van-card>
```

Besides typical form elements like radio boxes, buttons and input fields, Vant also provides a file uploader, progress bars, a swipe panel and password fields, to mention just some of its components.
It can therefore be very useful to any Vue.js developer.
------------------------------

[Quokka.js](https://quokkajs.com/) is a developer productivity tool for rapid JavaScript / TypeScript prototyping. Runtime values are updated and displayed in your IDE next to your code, as you type.<br>
Currently supported editors are: VS Code, JetBrains, Atom and Sublime Text and it comes in two versions: Community (free) and Pro.

Some of its interesting features are:
#### Live Code Coverage
Once Quokka.js is running, you can see the code coverage on the left side of your editor. The coverage is live, so as you change the code the coverage is automatically updated accordingly. This is a nice feature inherited from the Wallaby.js product (the same team is behind Quokka).

#### Live Feedback
You may create a new Quokka file, or start Quokka on an existing file. The results of the execution are displayed right in the editor.

#### Live Values Display (PRO version)
While the Live Comments feature provides an excellent way to log expression values and will keep displaying values when you change your code, sometimes you may want to display or capture expression values without modifying code. The Show Value and Copy Value features allow you to do exactly that.
To use these features, the expression being logged either needs to be selected, or the cursor position needs to be right after the expression when the command is invoked.

---------------------------------------
This concludes our November list. Come back next month to see some new libraries from the web. :raising_hand:

| paco_ita |
182,857 | How to avoid undefined error when comparing in JavaScript | Hey there people One of the most common errors we encounter in JavaScript is the undefined error whe... | 2,603 | 2019-10-04T15:35:11 | https://dev.to/adnanbabakan/how-to-avoid-undefined-error-when-comparing-in-javascript-15e3 | javascript, beginners, tip | Hey there people
One of the most common errors we encounter in JavaScript is the undefined error when trying to compare two values.
Let me give you an example so you understand it better.
Imagine you have an object and you want to check whether one of its property values is equal to another value; what you are going to do is this:
```javascript
let myObj = {
firstName: "Adnan",
lastName: "Babakan",
age: 19
};
if(myObj.firstName === "Adnan") {
console.log("Yes it is true!");
}
```
This is OK and it will work pretty well, but in a scenario where you don't know whether the variable you are using is an object or not, what will you do?
```javascript
let myObj = undefined;
if(typeof myObj === 'object') {
if(myObj.firstName === "Adnan") {
console.log("Yes it is true!");
}
}
```
You'll do a `typeof` comparison of course, and that is mostly fine (just note that `typeof null` is also `'object'`, so a `null` value would still slip through this check and crash).
There is another thing that can happen: if you want to add `else` statements in order to handle the opposite result, you will end up with code like the one below:
```javascript
let myObj = undefined;
if(typeof myObj === 'object') {
if(myObj.firstName === "Adnan") {
console.log("Yes it is true!");
} else {
console.log("Nope!");
}
} else {
console.log("Not at all!");
}
```
So this is kinda messed up, isn't it?
What is my solution? To use `try` and `catch`, brothers! (Or maybe sisters.)
Using those will give you neater, more solid code.
```javascript
let myObj = undefined;
try {
if(myObj.firstName === "Adnan") {
console.log("Yes it is true!");
} else {
console.log("No");
}
} catch(err) {
console.log("No");
}
```
So what's the point of this, you might ask?
I'll give you a few reasons:
1. It looks cool XD
2. It is way better since you now have access to the error that occurred
3. You have avoided unnecessary conditions
Always remember that conditions are fine when there is no other choice, but you'd better avoid them whenever you can.
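If you use this pattern in several places, the try/catch can be wrapped once in a tiny helper (my own sketch, not from the original article; `safeCheck` is a made-up name):

```javascript
// Hypothetical helper: run any comparison and treat a thrown error as "false".
function safeCheck(fn) {
  try {
    return Boolean(fn());
  } catch (err) {
    return false;
  }
}

const myObj = undefined;
console.log(safeCheck(() => myObj.firstName === "Adnan")); // false, no crash

const other = { firstName: "Adnan" };
console.log(safeCheck(() => other.firstName === "Adnan")); // true
```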
A real-life scenario for me was comparing a property of an object I got from Mongoose (an ODM for MongoDB), like below:
```javascript
import express from 'express'
import userModel from './models/userModel'

const app = express()

// The handler needs the req and res parameters to use res.json below.
app.post('/api/user', function(req, res) {
userModel.findOne({ username: 'adnanbabakan' }).then(user => {
let result = false;
try {
if(user.age == 19) {
result = true;
}
} catch(err) {}
res.json({
result
});
});
});
```
As you might have guessed, this is an API endpoint made with Express, and I wanted to check a simple condition.
My code might have looked really messy with those conditions in it, so I decided to do it like this, setting the result to true only if my condition actually held.
I hope you enjoyed this and let me know if I'm wrong or there are any better ways to accomplish this. I just wanted to share this solution of mine with you. | adnanbabakan |
193,389 | How to design a thorough hiring process for your company? — Part II | “Hire people who are smarter than you are — whose talents surpass yours — and give them opportunities... | 0 | 2019-10-23T07:22:56 | https://dev.to/mrsaeeddev/how-to-design-a-thorough-hiring-process-for-your-company-part-ii-2bai | startup, whoishiring, career, hiring |
*“Hire people who are smarter than you are — whose talents surpass yours — and give them opportunities for growth. It’s the smart thing to do and it is a sign of high personal humility.”― Bruna Martinuzzi*

In the first part of this series, we talked about pre-screening phases. In case you have missed it, you can read the first part by [clicking here](https://medium.com/@saeeddev/how-to-design-a-thorough-hiring-process-for-your-company-part-i-4ea3394b6d2c?source=friends_link&sk=465b444d17d38ef4516343bca99e9330) .
In this part, we will be discussing the screening and interview phases, and then the next steps of the hiring process.
## i) Screening :

Initial screening is done to assess whether the candidate knows the basics of the domain. For example, if you are hiring a software engineer, you may be checking their programming and problem-solving skills.
Nowadays, HackerRank problems are used by most companies. They send a problem to the candidate and then decide, on the basis of the result, whether to call the candidate in or not.
However, some companies instead do some sort of video call with a shared code editor in which the candidate has to solve the given problem. Other companies have on-site screening interviews. You can follow whatever approach suits you best.
## ii) Technical Test :

The technical test is a bit longer than the screening; it includes coding problems plus system design/product design and behavioral questions.
Coding problems in a technical test are a bit harder. System design/product design questions, on the other hand, are often asked by top-tier companies to gauge your know-how in designing scalable distributed systems.
Behavioral questions are aimed at understanding the candidate's perspective on the type of workplace they want to work in, the kind of team they prefer, and maybe the tech stack they would like to start with.
## iii) Offer :

After a candidate has passed all the phases of this process, you may make them an offer. Expected pay has normally been asked about during the previous phases, so you usually offer a trade-off between the average pay at your company and the candidate's expected pay.
## Tips :
If you are preparing for an interview for a tech-related position, keep the following points in mind:
* Always tailor your CV for each position, and maybe even each company.
* Mention your accomplishments and achievements in the form of numbers. That is more impressive.
* Make sure to polish your data structures and algorithms concepts.
* Make sure to get a good understanding of system design/product design questions.
* Practice. Practice & Practice…
**Thanks for reading.**
I hope you enjoyed the article. Let me know in the comments what approach you take at your company to hire the best people.
Cheers!
| mrsaeeddev |
182,872 | How to use Geolocation, Geocoding and Reverse Geocoding in Ionic 4 | In this post, you will learn how to implement Geolocation in Ionic 4 apps using Ionic Native Plugins... | 0 | 2019-10-04T15:02:56 | https://enappd.com/blog/using-geolocation-geocoding-and-reverse-geocoding-in-ionic-4/45/ | ionic, geolocation, geocoding, geofencing | <main role="main"><article class=" u-minHeight100vhOffset65 u-overflowHidden postArticle postArticle--full is-supplementalPostContentLoaded is-withAccentColors" lang="en"><div class="postArticle-content js-postField js-notesSource editable" id="editor_6" g_editable="true" role="textbox" contenteditable="true" data-default-value="Title
Tell your story…" spellcheck="false"><section name="aef9" class="section section--body section--first section--last"><div class="section-divider"><hr class="section-divider"></div><div class="section-content"><div class="section-inner sectionLayout--insetColumn"><p name="d5da" class="graf graf--p graf-after--figure">In this post, you will learn how to implement Geolocation in Ionic 4 apps using Ionic Native Plugins. We will also learn how to Convert Geocode in Location address (Reverse Geocoding) and Location Address into Geocode(Geocoding) in a simple Ionic 4 app and test.</p><blockquote name="0fa7" class="graf graf--blockquote graf-after--p"><em class="markup--em markup--blockquote-em">Complete source code of this tutorial is available in the </em><a href="https://github.com/enappd/Geocoding.git" class="markup--anchor markup--blockquote-anchor" rel="noopener" target="_blank"><em class="markup--em markup--blockquote-em">GeoCoding In IONIC 4 app</em></a></blockquote><h3 name="8578" class="graf graf--h3 graf-after--blockquote">What is Ionic 4?</h3><p name="eaae" class="graf graf--p graf-after--h3 is-selected">You probably already know about Ionic, but I’m putting it here just for the sake of beginners. <strong class="markup--strong markup--p-strong">Ionic</strong> is a complete open-source SDK for hybrid mobile app development. Ionic provides tools and services for developing hybrid mobile apps using Web technologies like CSS, HTML5, and Sass. Apps can be built with these Web technologies and then distributed through native app stores to be installed on devices.</p><p name="de54" class="graf graf--p graf-after--p">In other words — If you create native apps in Android, you code in <strong class="markup--strong markup--p-strong">Java</strong>. If you create native apps in iOS, you code in <strong class="markup--strong markup--p-strong">Obj-C</strong> or <strong class="markup--strong markup--p-strong">Swift</strong>. Both of these are powerful, but complex languages. 
With Cordova (and Ionic) you can write a single piece of code for your app that can run on both iOS and Android (and windows!), that too with the simplicity of HTML, CSS, and JS.</p><h3 name="1ad7" class="graf graf--h3 graf-after--p">What is Geolocation?</h3><p name="8825" class="graf graf--p graf-after--h3">The most famous and familiar location feature — Geolocation is the ability to track a device’s whereabouts using GPS, cell phone towers, WiFi access points or a combination of these. Since devices are used by individuals, geolocation uses positioning systems to track an individual’s whereabouts down to latitude and longitude coordinates, or more practically, a physical address. Both mobile and desktop devices can use geolocation.<br>Geolocation can be used to determine time zone and exact positioning coordinates, such as for tracking wildlife or cargo shipments.</p><p name="b122" class="graf graf--p graf-after--p">Some famous apps using Geolocation are</p><ul class="postList"><li name="43f4" class="graf graf--li graf-after--p">Uber / Lyft — Cab booking</li><li name="e8bd" class="graf graf--li graf-after--li">Google Maps (of course) — Map services</li><li name="90ff" class="graf graf--li graf-after--li">Swiggy / Zomato — Food delivery</li><li name="1801" class="graf graf--li graf-after--li">Fitbit — Fitness app</li><li name="fc18" class="graf graf--li graf-after--li">Instagram / Facebook — For tagging photos</li></ul><h3 name="116c" class="graf graf--h3 graf-after--li">What is <strong class="markup--strong markup--h3-strong">Geocoding and Reverse geocoding</strong>?</h3><p name="d0ef" class="graf graf--p graf-after--h3"><strong class="markup--strong markup--p-strong">Geocoding</strong> is the process of transforming a street address or other description of a location into a (latitude, longitude) coordinate. 
<br><strong class="markup--strong markup--p-strong">Reverse geocoding</strong> is the process of transforming a (latitude, longitude) coordinate into a (partial) address. The amount of detail in a reverse geocoded location description may vary, for example, one might contain the full street address of the closest building, while another might contain only a city name and postal code.</p><h3 name="6d27" class="graf graf--h3 graf-after--p">Post structure</h3><p name="3480" class="graf graf--p graf-after--h3">We will go in a step-by-step manner to explore the anonymous login feature of Firebase. This is my break-down of the blog</p><p name="87e0" class="graf graf--p graf-after--p"><strong class="markup--strong markup--p-strong">STEPS</strong></p><ol class="postList"><li name="6172" class="graf graf--li graf-after--p">Create a simple Ionic 4 app</li><li name="b6ab" class="graf graf--li graf-after--li">Install Plugins for Geocoding and Geolocation and get User Location</li><li name="7e09" class="graf graf--li graf-after--li">Get User Current Location (Geocoding)</li><li name="e24a" class="graf graf--li graf-after--li">Convert User Geolocation into an address (Reverse Geocoding)</li><li name="3666" class="graf graf--li graf-after--li">Convert User Entered Address into Geocode (Geocoding)</li></ol><p name="7e71" class="graf graf--p graf-after--li">We have three major objectives</p><ol class="postList"><li name="3785" class="graf graf--li graf-after--p">Get User Current Location which we will get in latitude and longitude (Geolocation)</li><li name="6f72" class="graf graf--li graf-after--li">Convert that latitude and longitude in Street Address (Reverse Geocoding)</li><li name="4686" class="graf graf--li graf-after--li">And again convert Street address entered by the user into latitude and longitude (Geocoding)</li></ol><p name="5a63" class="graf graf--p graf-after--li"><strong class="markup--strong markup--p-strong">Let’s dive right in!</strong></p><figure tabindex="0" 
contenteditable="false" name="2ab6" class="graf graf--figure graf-after--p"><div class="aspectRatioPlaceholder is-locked" style="max-width: 480px; max-height: 322px;"><div class="aspectRatioPlaceholder-fill" style="padding-bottom: 67.10000000000001%;"></div><img class="graf-image" data-image-id="1*RvWiQig34Ff-E6vQXHJwzg.gif" data-width="480" data-height="322" src="https://cdn-images-1.medium.com/max/720/1*RvWiQig34Ff-E6vQXHJwzg.gif"><div class="crosshair u-ignoreBlock"></div></div><br/><figcaption class="imageCaption" contenteditable="true" data-default-value="Type caption for image (optional)">Woah! where did he dive in 😛</figcaption></figure><h3 name="5996" class="graf graf--h3 graf-after--figure">Step 1 — Create a simple Ionic 4 app</h3><blockquote name="0a81" class="graf graf--blockquote graf-after--h3">I have covered this topic in detail in <a href="https://enappd.com/blog/how-to-create-an-ionic-4-app-for-beginners/13/" class="markup--anchor markup--blockquote-anchor" rel="noopener" target="_blank">this blog</a>.</blockquote><p name="ac57" class="graf graf--p graf-after--blockquote">In short, the steps you need to take here are</p><ul class="postList"><li name="1e8a" class="graf graf--li graf-after--p">Make sure you have Node installed in the system (v10.0.0 at the time of this blog post)</li><li name="1ae0" class="graf graf--li graf-after--li">Install the <strong class="markup--strong markup--li-strong">Ionic CLI </strong>using npm</li><li name="508a" class="graf graf--li graf-after--li">Create an Ionic app using <code class="markup--code markup--li-code">ionic start</code></li></ul><p name="afc5" class="graf graf--p graf-after--li">You can create a <code class="markup--code markup--p-code">blank</code> starter for the sake of this tutorial. On running <code class="markup--code markup--p-code">ionic start blank</code>, the node modules will be installed. 
Once the installation is done, run your app in the browser using</p><pre name="f473" class="graf graf--pre graf-after--p">$ ionic serve</pre><h3 name="3872" class="graf graf--h3 graf-after--pre">Step 2 — Install Plugins for Geocoding and Geolocation and get User Location</h3><h4 name="8eb1" class="graf graf--h4 graf-after--h3">Geolocation</h4><p name="b082" class="graf graf--p graf-after--h4">This plugin provides information about the device’s location, such as latitude and longitude. Common sources of location information include Global Positioning System (GPS) and location inferred from network signals such as IP address, RFID, WiFi and Bluetooth MAC addresses, and GSM/CDMA cell IDs.</p><p name="97fd" class="graf graf--p graf-after--p">This API is based on the W3C Geolocation API Specification, and only executes on devices that don’t already provide an implementation.</p><p name="0859" class="graf graf--p graf-after--p">For iOS you have to add this configuration to your config.xml file</p><pre name="87ab" class="graf graf--pre graf-after--p"><code class="markup--code markup--pre-code"><edit-config file="*-Info.plist" mode="merge" target="NSLocationWhenInUseUsageDescription"><br> <string>We use your location for full functionality of certain app features.</string><br></edit-config></code></pre><h4 name="1e0c" class="graf graf--h4 graf-after--pre">Installation</h4><pre name="a47d" class="graf graf--pre graf-after--h4">ionic cordova plugin add cordova-plugin-geolocation<br>npm install @ionic-native/geolocation</pre><h4 name="a3a9" class="graf graf--h4 graf-after--pre">Geocoding</h4><p name="4abd" class="graf graf--p graf-after--h4">This plugin is used for converting a street address into a geocode and vice versa.</p><h4 name="f366" class="graf graf--h4 graf-after--p">Installation</h4><pre name="5232" class="graf graf--pre graf-after--h4">ionic cordova plugin add cordova-plugin-nativegeocoder<br>npm install @ionic-native/native-geocoder</pre><h3 name="c76f" class="graf 
graf--h3 graf-after--pre">Step 3 — Get User Current Location (Geolocation)</h3><p name="3b1b" class="graf graf--p graf-after--h3">To use this plugin, the first step is to add it to your app’s module.</p><p name="3529" class="graf graf--p graf-after--p">Import this plugin like this</p><pre name="6b2a" class="graf graf--pre graf-after--p">import { Geolocation } from "@ionic-native/geolocation/ngx";</pre><p name="79c4" class="graf graf--p graf-after--pre">and add it to the providers of your app like this</p><pre name="1ce6" class="graf graf--pre graf-after--p">@NgModule({<br>  declarations: [AppComponent],<br>  entryComponents: [],<br>  imports: [BrowserModule, IonicModule.forRoot(), AppRoutingModule],<br>  providers: [<br>    StatusBar,<br>    Geolocation,<br>    SplashScreen,<br>    { provide: RouteReuseStrategy, useClass: IonicRouteStrategy }<br>  ],<br>  bootstrap: [AppComponent]<br>})</pre><p name="bb6b" class="graf graf--p graf-after--pre">After adding it, your app.module.ts will look like this</p><figure tabindex="0" contenteditable="false" name="f38c" class="graf graf--figure graf--iframe graf-after--p is-defaultValue"><div class="aspectRatioPlaceholder is-locked"><div class="aspectRatioPlaceholder-fill" style="padding-bottom: 35.699999999999996%;"></div><div class="iframeContainer">{% gist https://gist.github.com/enappd/e020f33b97b78d2e2ae43f380a155ef5.js %}</div></div></figure><p name="c53b" class="graf graf--p graf-after--figure">Now it is time to import this plugin in your home.ts, in which we are going to get the user’s current location.</p><p name="56ba" class="graf graf--p graf-after--p">To use this plugin in our home.ts, first we will import it like this</p><pre name="255a" class="graf graf--pre graf-after--p"><code class="markup--code markup--pre-code">import { Geolocation } from '@ionic-native/geolocation/ngx';</code></pre><p name="3b1d" class="graf graf--p graf-after--pre">and inject it in our constructor (dependency injection) like this</p><pre name="34a8" 
class="graf graf--pre graf-after--p"><code class="markup--code markup--pre-code">constructor(private geolocation: Geolocation) {}</code></pre><p name="3827" class="graf graf--p graf-after--pre">And use this code to get the user’s location</p><pre name="36d7" class="graf graf--pre graf-after--p"><code class="markup--code markup--pre-code u-paddingRight0 u-marginRight0">this.geolocation.<strong class="markup--strong markup--pre-strong">getCurrentPosition</strong>().then((resp) => {<br> // resp.coords.latitude<br> // resp.coords.longitude<br>}).catch((error) => {<br> console.log('Error getting location', error);<br>});</code></pre><p name="0435" class="graf graf--p graf-after--pre">If you want continuous tracking of the user’s location, you can use this</p><pre name="1eb8" class="graf graf--pre graf-after--p"><code class="markup--code markup--pre-code u-paddingRight0 u-marginRight0">let watch = this.geolocation.<strong class="markup--strong markup--pre-strong">watchPosition</strong>();<br>watch.subscribe((data) => {<br> // data can be a set of coordinates, or an error (if an error occurred).<br> // data.coords.latitude<br> // data.coords.longitude<br>});</code></pre><p name="4d2a" class="graf graf--p graf-after--pre">This function will give you the user’s current latitude and longitude.</p><p name="8bff" class="graf graf--p graf-after--p">After adding this code, your home.ts will look something like this</p><figure tabindex="0" contenteditable="false" name="1563" class="graf graf--figure graf--iframe graf-after--p is-defaultValue"><div class="aspectRatioPlaceholder is-locked"><div class="aspectRatioPlaceholder-fill" style="padding-bottom: 35.699999999999996%;"></div><div class="iframeContainer">{% gist https://gist.github.com/enappd/84a2893d438f8a438e30a87bfdef2d60.js %}</div></div></figure><figure tabindex="0" contenteditable="false" name="cce1" class="graf graf--figure graf-after--figure is-defaultValue"><div class="aspectRatioPlaceholder is-locked" style="max-width: 
700px; max-height: 1400px;"><div class="aspectRatioPlaceholder-fill" style="padding-bottom: 200%;"></div><img class="graf-image" data-image-id="1*fwJY1LdaNP7sv3K080MYYw.jpeg" data-width="1080" data-height="2160" src="https://cdn-images-1.medium.com/max/720/1*fwJY1LdaNP7sv3K080MYYw.jpeg"><div class="crosshair u-ignoreBlock"></div></div></figure><h3 name="beb2" class="graf graf--h3 graf-after--figure">Step 4 — Convert User Geolocation into an address (Reverse Geocoding)</h3><p name="8dac" class="graf graf--p graf-after--h3">In this step we will use the Native Geocoder plugin.</p><p name="6314" class="graf graf--p graf-after--p">To use this plugin, the first step is to add it to your app’s module.</p><p name="0eed" class="graf graf--p graf-after--p">Import this plugin like this</p><pre name="5fed" class="graf graf--pre graf-after--p">import { NativeGeocoder, NativeGeocoderOptions } from '@ionic-native/native-geocoder/ngx';</pre><p name="e9d4" class="graf graf--p graf-after--pre">and add it to the providers of your app like this</p><pre name="d140" class="graf graf--pre graf-after--p">@NgModule({<br>  declarations: [AppComponent],<br>  entryComponents: [],<br>  imports: [BrowserModule, IonicModule.forRoot(), AppRoutingModule],<br>  providers: [<br>    StatusBar,<br>    Geolocation,<br>    NativeGeocoder,<br>    SplashScreen,<br>    { provide: RouteReuseStrategy, useClass: IonicRouteStrategy }<br>  ],<br>  bootstrap: [AppComponent]<br>})</pre><p name="a74a" class="graf graf--p graf-after--pre">After adding it, your app.module.ts will look like this</p><figure tabindex="0" contenteditable="false" name="e3a2" class="graf graf--figure graf--iframe is-defaultValue graf-after--p"><div class="aspectRatioPlaceholder is-locked"><div class="aspectRatioPlaceholder-fill" style="padding-bottom: 35.699999999999996%;"></div><div class="iframeContainer">{% gist https://gist.github.com/enappd/3c9f5a00ef84b17311cf41af5f782e8a.js %}</div></div></figure><p name="9d0b" class="graf graf--p 
graf-after--figure">Now add this plugin to your home.ts and use it like this</p><p name="d5f5" class="graf graf--p graf-after--p">To use this plugin in our home.ts, first we will import it like this</p><pre name="7d09" class="graf graf--pre graf-after--p">import { NativeGeocoder, NativeGeocoderOptions, NativeGeocoderResult } from '@ionic-native/native-geocoder/ngx';</pre><p name="64c7" class="graf graf--p graf-after--pre">and inject it in our constructor (dependency injection) like this</p><pre name="972f" class="graf graf--pre graf-after--p"><code class="markup--code markup--pre-code">constructor(private nativeGeocoder: NativeGeocoder) {}</code></pre><p name="c981" class="graf graf--p graf-after--pre">And use this code to convert your lat/long into a street address</p><pre name="40bc" class="graf graf--pre graf-after--p">reverseGeocode(lat, lng) {<br>  if (this.platform.is('cordova')) {<br>    let options: NativeGeocoderOptions = {<br>      useLocale: true,<br>      maxResults: 5<br>    };<br>    this.nativeGeocoder.reverseGeocode(lat, lng, options)<br>      .then((result: NativeGeocoderResult[]) => this.userLocationFromLatLng = result[0])<br>      .catch((error: any) => console.log(error));<br>  } else {<br>    this.getGeoLocation(lat, lng, 'reverseGeocode');<br>  }<br>}<br><br>async getGeoLocation(lat: number, lng: number, type?) {<br>  if (navigator.geolocation) {<br>    let geocoder = await new google.maps.Geocoder();<br>    let latlng = await new google.maps.LatLng(lat, lng);<br>    let request = { latLng: latlng };<br>    await geocoder.geocode(request, (results, status) => {<br>      if (status == google.maps.GeocoderStatus.OK) {<br>        let result = results[0];<br>        this.zone.run(() => {<br>          if (result != null) {<br>            this.userCity = result.formatted_address;<br>            if (type === 'reverseGeocode') {<br>              this.latLngResult = result.formatted_address;<br>            }<br>          }<br>        })<br>      }<br>    });<br>  }<br>}</pre><p name="2d23" class="graf graf--p graf-after--pre">This function covers two cases: Cordova plugins only work on Cordova devices, so in a PWA they would throw an error. In the case of a PWA, we use the Google Maps geocoder instead.</p><p name="9420" class="graf graf--p graf-after--p">After adding this code, your home.ts will look something like this</p><figure tabindex="0" contenteditable="false" name="18e0" class="graf graf--figure graf--iframe graf-after--p is-defaultValue"><div class="aspectRatioPlaceholder is-locked"><div class="aspectRatioPlaceholder-fill" style="padding-bottom: 35.699999999999996%;"></div><div class="iframeContainer">{% gist https://gist.github.com/enappd/c660a020ba4958d1c908bf2a174565d6.js %}</div></div></figure><figure tabindex="0" contenteditable="false" name="35c7" class="graf graf--figure graf-after--figure is-defaultValue"><div class="aspectRatioPlaceholder is-locked" style="max-width: 700px; max-height: 1400px;"><div class="aspectRatioPlaceholder-fill" style="padding-bottom: 200%;"></div><img class="graf-image" data-image-id="1*iaRaJPp5d578CpCHLw-h2A.jpeg" data-width="1080" data-height="2160" src="https://cdn-images-1.medium.com/max/720/1*iaRaJPp5d578CpCHLw-h2A.jpeg"><div class="crosshair u-ignoreBlock"></div></div></figure><h3 name="e2a3" class="graf graf--h3 graf-after--figure">Step 5 — Convert User Entered Address into Geocode (Geocoding)</h3><p name="302c" 
class="graf graf--p graf-after--h3">This step is the opposite of the previous one, and it uses the same plugin we used in the previous step.</p><p name="c503" class="graf graf--p graf-after--p">Code for forward geocoding</p><pre name="198b" class="graf graf--pre graf-after--p">if (this.platform.is('cordova')) {<br>  let options: NativeGeocoderOptions = {<br>    useLocale: true,<br>    maxResults: 5<br>  };<br>  this.nativeGeocoder.forwardGeocode(address, options)<br>    .then((result: NativeGeocoderResult[]) => {<br>      this.zone.run(() => {<br>        this.lat = result[0].latitude;<br>        this.lng = result[0].longitude;<br>      })<br>    })<br>    .catch((error: any) => console.log(error));<br>} else {<br>  let geocoder = new google.maps.Geocoder();<br>  geocoder.geocode({ 'address': address }, (results, status) => {<br>    if (status == google.maps.GeocoderStatus.OK) {<br>      this.zone.run(() => {<br>        this.lat = results[0].geometry.location.lat();<br>        this.lng = results[0].geometry.location.lng();<br>      })<br>    } else {<br>      alert('Error - ' + results + ' & Status - ' + status)<br>    }<br>  });<br>}</pre><p name="9456" class="graf graf--p graf-after--pre">After adding this code, your home.ts will look something like this</p><figure tabindex="0" contenteditable="false" name="b3a6" class="graf graf--figure graf--iframe is-defaultValue graf-after--p"><div class="aspectRatioPlaceholder is-locked"><div class="aspectRatioPlaceholder-fill" style="padding-bottom: 35.699999999999996%;"></div><div class="iframeContainer">{% gist https://gist.github.com/enappd/c660a020ba4958d1c908bf2a174565d6.js %}</div></div></figure><figure tabindex="0" contenteditable="false" name="b18c" class="graf graf--figure graf-after--figure is-defaultValue"><div class="aspectRatioPlaceholder is-locked" style="max-width: 700px; max-height: 1400px;"><div class="aspectRatioPlaceholder-fill" style="padding-bottom: 200%;"></div><img class="graf-image" data-image-id="1*dULevCFgJFqN_GA1IQoQSg.jpeg" data-width="1080" data-height="2160" 
src="https://cdn-images-1.medium.com/max/720/1*dULevCFgJFqN_GA1IQoQSg.jpeg"><div class="crosshair u-ignoreBlock"></div></div></figure><h3 name="c341" class="graf graf--h3 graf-after--figure">Conclusion</h3><p name="7b07" class="graf graf--p graf-after--h3">In this blog, we learned how to implement geolocation in Ionic 4 apps using Ionic Native plugins. We also learned how to convert a geocode into a location address (reverse geocoding) and a location address into a geocode (geocoding), and tested it in a simple Ionic 4 app.</p><blockquote name="d788" class="graf graf--blockquote graf-after--p">Complete source code of this tutorial is available in the <a href="https://github.com/enappd/Geocoding.git" class="markup--anchor markup--blockquote-anchor" rel="noopener" target="_blank"><em class="markup--em markup--blockquote-em">GeoCoding In IONIC 4 app</em></a></blockquote><h3 name="84b4" class="graf graf--h3 graf-after--blockquote">Next Steps</h3><p name="178a" class="graf graf--p graf-after--h3">Now that you have learned how to implement geolocation and geocoding in Ionic 4, you can also try</p><ul class="postList"><li name="9f48" class="graf graf--li graf-after--p"><a href="https://enappd.com/blog/ionic-4-paypal-payment-integration-for-apps-and-pwa/16" class="markup--anchor markup--li-anchor" rel="nofollow noopener noopener noopener" target="_blank">Ionic 4 PayPal payment integration — for Apps and PWA</a></li><li name="aa95" class="graf graf--li graf-after--li"><a href="https://enappd.com/blog/ionic-4-stripe-payment-integration-with-firebase-for-apps-and-pwa/17" class="markup--anchor markup--li-anchor" rel="nofollow noopener noopener noopener" target="_blank">Ionic 4 Stripe payment integration — for Apps and PWA</a></li><li name="fc67" class="graf graf--li graf-after--li"><a href="https://enappd.com/blog/how-to-integrate-apple-pay-in-ionic-4-apps/21" class="markup--anchor markup--li-anchor" rel="nofollow noopener noopener noopener" target="_blank">Ionic 4 Apple Pay 
integration</a></li><li name="b1fc" class="graf graf--li graf-after--li"><a href="https://enappd.com/blog/twitter-login-in-ionic-4-apps-using-firebase/24" class="markup--anchor markup--li-anchor" rel="noopener" target="_blank">Twitter login in Ionic 4 with Firebase</a></li><li name="57fe" class="graf graf--li graf-after--li"><a href="https://enappd.com/blog/facebook-login-in-ionic-4-apps-using-firebase/25" class="markup--anchor markup--li-anchor" rel="noopener" target="_blank">Facebook login in Ionic 4 with Firebase</a></li><li name="97f4" class="graf graf--li graf-after--li"><a href="https://medium.com/enappd/using-geolocation-and-beacon-plugins-in-ionic-4-754b41304007" class="markup--anchor markup--li-anchor" target="_blank">Geolocation</a> in Ionic 4</li><li name="34a6" class="graf graf--li graf-after--li"><a href="https://medium.com/enappd/qr-code-scanning-and-optical-character-recognition-ocr-in-ionic-4-95fd46be91dd" class="markup--anchor markup--li-anchor" target="_blank">QR Code and scanners</a> in Ionic 4 and</li><li name="de81" class="graf graf--li graf-after--li"><a href="https://medium.com/enappd/how-to-translate-in-ionic-4-globalization-internationalization-and-localization-31ec5807a8bc" class="markup--anchor markup--li-anchor" target="_blank">Translations in Ionic 4</a></li></ul><p name="2930" class="graf graf--p graf-after--li">If you need a base to start your next Ionic 4 app, you can make your next awesome app using <a href="https://store.enappd.com/product/ionic-4-full-app/" class="markup--anchor markup--p-anchor" rel="noopener nofollow noopener noopener nofollow noopener noopener nofollow noopener noopener noopener" target="_blank">Ionic 4 Full App</a></p><figure tabindex="0" contenteditable="false" name="89dc" class="graf graf--figure graf-after--p is-defaultValue"><div class="aspectRatioPlaceholder is-locked" style="max-width: 700px; max-height: 442px;"><div class="aspectRatioPlaceholder-fill" style="padding-bottom: 63.2%;"></div><img 
class="graf-image" data-image-id="1*2BzL8TesnBHuazHr3VA4SQ.jpeg" data-width="760" data-height="480" src="https://cdn-images-1.medium.com/max/720/1*2BzL8TesnBHuazHr3VA4SQ.jpeg"><div class="crosshair u-ignoreBlock"></div></div></figure><p name="6518" class="graf graf--p graf--empty graf-after--figure graf--trailing"><br></p></div></div></section></div></article></main> | bunyy |
183,086 | #Hacktoberfest 2019 my experience | Hacktoberfest It is my first time participating and the truth is surprising how technology can unite... | 0 | 2019-10-04T22:26:17 | https://dev.to/jlzaratec/hacktoberfest-2019-my-experience-45mh | hacktoberfest, spanish, programming | Hacktoberfest: this is my first time participating, and it is truly surprising how technology can unite people and allow us to contribute our grain of sand.
It is a great experience :P
#hacktoberfest smile | jlzaratec |
183,129 | Day 5 : Best Life | liner notes: Professional : Made it through my first week as a JavaScript Developer Advocate. My ma... | 0 | 2019-10-05T01:23:31 | https://dev.to/dwane/day-5-best-life-2dg8 | hiphop, code, blog, lifelongdev | _liner notes_:
- Professional : Made it through my first week as a JavaScript Developer Advocate. My manager said I had a great week, so that made me feel good. I'm just out here trying to live my best life. haha. Worked on wrapping up the projects assigned to me and got my pull requests in. Also did some on-boarding tasks. Pretty productive Friday.
- Personal : One perk of working remotely was that I was able to do my laundry. Right now, I'm putting together tomorrow's radio show. After that, I want to work on my site.

This weekend, like most weekends, will be focused around HIPHOP and CODE activities. Tomorrow is OUR show (https://kNOwBETTERHIPHOP.com). Looking forward to doing that.
Sunday. My plan is to finish, or get close to finishing, my site and watch a bunch of anime. haha
Going to try and get as far ahead as I can now so I can get more done during the weekend.
Have a great day and weekend!
peace piece
Dwane / conshus
https://dwane.io / https://HIPHOPandCODE.com
{% youtube zcloEzJU27E %} | dwane |
189,421 | Kafka: Is it a Topic or a Queue? | Apache Kafka is pretty versatile and it's quite common to hear different names for it. It is referred... | 0 | 2019-10-16T13:25:45 | https://dev.to/itnext/kafka-is-it-a-topic-or-a-queue-4lom | beginners, showdev | [Apache Kafka](https://kafka.apache.org/) is pretty versatile and it's quite common to hear different names for it. It is referred to as a queuing service, message bus (it is way more than that!), streaming platform (this one is accurate by the way), etc. From my discussions with many folks (especially those who are new to Kafka), a common source of confusion tends to be:
**"Is Kafka a Topic or a Queue?"**

This is quite expected because the [official documentation](https://kafka.apache.org/documentation/#introduction) also uses the same terminology - *"The Kafka cluster stores streams of records in categories called topics."*
A Topic is one of the most fundamental concepts in Kafka - think of it as a bucket to which you send data and receive data from.
> *Ok, so it's not a queue then?*
Well, the truth is:
## "Kafka is both a Topic and a Queue"
Let's see how...
### Queue
[Queue-based systems](https://en.wikipedia.org/wiki/Message_queue) are typically designed in a way that there are multiple consumers processing data from a queue, and the work gets distributed such that each consumer gets a different set of items to process. Hence there is no overlap, allowing the workload to be shared and enabling horizontally scalable architectures.
> How it is implemented differs from system to system, e.g. RabbitMQ, JMS, Kafka, etc.
#### Kafka as a Queue

To build an application to process data from Kafka, you can write a consumer (client), point it at a topic (or more than one topic, but let's just assume a single one for simplicity) and consume data from it!
If one consumer is not able to keep up with the rate of production, just start additional instances of your consumer (i.e. scale out horizontally) and the workload will be shared among them. All these instances can be categorized under a single (logical) entity called a **Consumer Group**.
> In the above diagram, `CG1` and `CG2` stand for Consumer Groups 1 and 2, which are consuming from a single Kafka topic with four partitions (`P0` to `P3`).
A Kafka topic is sub-divided into units called `partitions` for fault tolerance and scalability. *Consumer Groups allow Kafka to behave like a Queue*, since each consumer instance in a group processes data from a non-overlapping set of partitions (within a Kafka topic).
Note that the maximum amount of parallelism is limited by the number of partitions of your topic e.g. if you have four partitions in a topic and start off with two consumers (in a group), each consumer will be allocated two partitions each. You can bump up to a maximum of four instances in which case each consumer will be assigned to one partition.
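The partition-to-consumer assignment just described can be made concrete with a small, self-contained sketch. Note this is a toy model, not the Kafka client API: the round-robin strategy and the `assignPartitions` function here are illustrative, and Kafka's real partition assignors are pluggable and may distribute differently.

```typescript
// Toy model of how the partitions of a topic are divided among the
// members of ONE consumer group. Each partition goes to exactly one
// member: no overlap, which is the "queue" behavior.
function assignPartitions(
  numPartitions: number,
  members: string[]
): Map<string, number[]> {
  const assignment = new Map<string, number[]>();
  members.forEach((m) => assignment.set(m, []));
  for (let p = 0; p < numPartitions; p++) {
    // round-robin for illustration; real assignors are configurable
    assignment.get(members[p % members.length])!.push(p);
  }
  return assignment;
}

// Four partitions, two consumers: the work is split two partitions each.
console.log(assignPartitions(4, ["c1", "c2"]).get("c1")); // [ 0, 2 ]
console.log(assignPartitions(4, ["c1", "c2"]).get("c2")); // [ 1, 3 ]

// More members than partitions: the fifth consumer gets nothing,
// which is why the partition count caps the parallelism.
console.log(assignPartitions(4, ["c1", "c2", "c3", "c4", "c5"]).get("c5")); // []
```

Running it with five members against four partitions shows one idle consumer, matching the limit described above.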
### Topic
[Pub-Sub systems](https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern) use topics or channels to broadcast information to all subscribers/consumers. This is different than that of a queue where each consumer (assuming there are multiple consumers of course) gets a different set of data to process.
I used an example of a single application to explain the concept of *Kafka as a Queue*. Imagine you had to build multiple applications that need to process data from the same Kafka topic but do it differently e.g. one application to filter data by applying business rules while the other one needs to store it in a database - the possibilities are endless.
#### Kafka as a Topic
The key part here is the fact that *all* the applications need access to the *same* data (i.e. from the same Kafka topic). Take a look at this diagram (from the Kafka docs). `Consumer Group A` and `Consumer Group B` are different applications and both will receive **all** the data from a topic. It's as simple as that!

> Internally, each application can scale out its processing by using the "queue" mechanism described above.
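Both behaviors can be sketched together in a few lines. Again, this is a toy in-memory model (the `deliver` function and `Msg` type are invented for illustration, not part of any Kafka client): every group receives every record, while within a group each record goes to a single member.

```typescript
// Toy fan-out model: EVERY consumer group receives every record,
// while WITHIN a group each record goes to one member
// (chosen here by partition, as in the queue example).
type Msg = { partition: number; value: string };

function deliver(records: Msg[], groups: string[][]): Map<string, Msg[]> {
  const received = new Map<string, Msg[]>();
  for (const group of groups) {
    for (const msg of records) {
      const member = group[msg.partition % group.length];
      if (!received.has(member)) received.set(member, []);
      received.get(member)!.push(msg);
    }
  }
  return received;
}

const records: Msg[] = [0, 1, 2, 3].map((p) => ({ partition: p, value: `r${p}` }));

// Group A (two members) and group B (one member) BOTH see all four
// records; group A's copy is split between its members.
const out = deliver(records, [["a1", "a2"], ["b1"]]);
console.log(out.get("a1")!.length + out.get("a2")!.length); // 4
console.log(out.get("b1")!.length); // 4
```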
That's it for this blog post. I hope you found it useful and stay tuned for more 😀

I would love to have your feedback and suggestions. Just [tweet/DM](https://twitter.com/abhi_tweeter) or drop a comment 👇👇
| abhirockzz |
193,072 | How to jumpstart a React application | Quickly initialize a React app with create-react-app | 0 | 2019-10-22T14:38:07 | https://dev.to/cesareferrari/how-to-jumpstart-a-react-application-58p7 | reactjsjavascript | ---
title: How to jumpstart a React application
published: true
description: Quickly initialize a React app with create-react-app
tags: reactjs, javascript
cover_image: https://ferrariwebdevelopment.s3.us-east-2.amazonaws.com/assets/create-react-app.jpeg
---
## Quickly initialize a React app with create-react-app
`create-react-app` is an `npm` module that sets up a skeleton React application from scratch. It will quickly and seamlessly create a scaffolding with all the directories, files, and libraries required to jumpstart an application.
You can find the project homepage [here](https://github.com/facebook/create-react-app).
`create-react-app` sets up the environment for developing and running a React application. It creates a project directory and initializes a `package.json` file with all the required dependencies, including Babel, and tools like react-scripts that do the transpiling automatically.
This is the command we run to create a React application. Replace `app-directory` with the name of the directory that contains the application.
```
npx create-react-app app-directory
```
Running this command creates the named directory and a `package.json` file inside of it.
It then downloads all the necessary `npm` modules and adds a `start` script that we can use to start the React application.
We can also run `create-react-app` from inside an existing directory, but we should make sure the directory is empty or we wouldn’t be able to run the command:
```
// run the command from inside a directory
npx create-react-app .
```
We can run the `start` script either with `npm` or `yarn`, from inside the project directory.
```
npm start
// or
yarn start
```
The application will be started and automatically served on port `3000`.
We can open the base application in our browser by navigating to `http://localhost:3000`
`create-react-app` builds a directory structure with a `public` directory and a `src` directory.
Within the `public` directory there's an `index.html` file that is used as the entry point of our application.
`index.html` contains a `div` element with an `id` of `root` that functions as the mount point for the React application.
The whole React application will be contained within this root element, and as
we will see, it will be built up of many components.
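To see what mounting into that root element looks like in code, here is a minimal sketch of what a create-react-app-era `src/index.js` does. This fragment only runs inside a React project, `App` is a placeholder component name, and `React.createElement` is used instead of JSX so the mounting step is explicit:

```javascript
// src/index.js (sketch): render the component tree into the #root div
// declared in public/index.html. "App" is a placeholder root component.
import React from 'react';
import ReactDOM from 'react-dom';

function App() {
  // equivalent to the JSX <h1>Hello from React</h1>
  return React.createElement('h1', null, 'Hello from React');
}

// Everything React renders lives inside <div id="root"></div>.
ReactDOM.render(React.createElement(App), document.getElementById('root'));
```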
Tomorrow we are going to see how to build React functional components.
| cesareferrari |
194,566 | Streamlining the setup of a new user workspace on Ubuntu/Fedora | I’ve been wanting to streamline the process of how I set up a new workspace with all the base package... | 0 | 2019-10-24T14:28:52 | https://www.eddinn.net/2019/10/24/streamlining-the-setup-of-a-new-user-workspace-on-ubuntu-fedora/ | linux, scripting, automation, shell | ---
title: Streamlining the setup of a new user workspace on Ubuntu/Fedora
published: true
date: 2019-10-24 13:40:38 UTC
tags: Linux,Scripting,automation,shell
canonical_url: https://www.eddinn.net/2019/10/24/streamlining-the-setup-of-a-new-user-workspace-on-ubuntu-fedora/
cover_image: https://i2.wp.com/www.eddinn.net/wp-content/uploads/2019/10/initial-package-install.png?fit=1212%2C421&ssl=1
---
I’ve been wanting to streamline the process of how I set up a new workspace with all the base packages, programs, addons and dotfiles I use and need when I set up a new computer, so I decided to write a script that does exactly that for me.
Sure, I could use Ansible or Puppet and even just Git, to store, save and apply all my settings and programs, but you know.. I like shell scripts!
Also, this gives me the advantage to install everything without having to set up Ansible, Puppet or Git beforehand..
- - - - - -
So, let's go over what the scripts do and what they install (*the* ***README.md*** *in the repo goes into more detail, so make sure to read it*)
#### Looking at the initial-package-install.sh and post-initial.sh scripts
The **initial-package-install.sh** script checks what Linux distribution I’m using (*I only use Fedora and Ubuntu, so those are the only options, but it’s easy to add other distros if needed..*), and based on that information, installs the corresponding packagebase that I’ve selected, along with Google Chrome and TeamViewer, so that I have the tools I need without having to install them all by hand.
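The distro branch boils down to a few lines of shell. The following is only a sketch of the idea (the real **initial-package-install.sh** in the repo may detect things differently; the function names here are invented for illustration):

```shell
#!/usr/bin/env bash
# Sketch: read the ID field from /etc/os-release and pick the
# matching package manager. Illustrative only.

get_distro() {
  local file="${1:-/etc/os-release}"
  # Print the ID= value, stripping optional quotes (e.g. ID="ubuntu").
  awk -F= '$1 == "ID" { gsub(/"/, "", $2); print $2 }' "$file"
}

pkg_cmd() {
  case "$(get_distro "$1")" in
    fedora) echo "sudo dnf install -y" ;;
    ubuntu) echo "sudo apt install -y" ;;
    *)      echo "unsupported" ;;
  esac
}

# Usage: $(pkg_cmd) <packagebase...>
```

On an unsupported distro the sketch simply reports `unsupported`, which is where additional distros could be slotted in.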
The **post-initial.sh** script then installs all the snap applications that I want, some Python3 pip modules, my dotfiles (*via* ***stowit.sh*** *which utilizes the* ***stow*** *command in a function*), Oh-My-Zsh for zsh and lastly all the extensions that I use for VSCode and Gnome-Shell.
#### What to take into consideration before running these scripts
1. Read the **README.md** to further familiarize yourself with what the scripts do and how `stowit.sh` works
2. Make sure to look over the packagebase and change out/add any packages that you want/need
3. The same goes for the snaps and Python3 pip packages and VSCode/Gnome-Shell extensions
4. Edit and/or replace the dotfiles that you need/use in the `./dots/` directory
5. Remember, this is not a replacement for a complete configuration; the scripts are just there to get you up and running faster, and you still need to configure individual settings and applications
- - - - - -
The scripts will keep on evolving and I will add new features as I need them to further streamline my workspace setup.
Go ahead and check out the initial-package-install script repository on my Github!
[Check out this repository on GitHub.com (this link opens in a new window)](https://github.com/eddinn/initial-package-install)
[Streamlining the setup of a new user workspace on Ubuntu/Fedora Read More »](https://www.eddinn.net/2019/10/24/streamlining-the-setup-of-a-new-user-workspace-on-ubuntu-fedora/) | eddinn |
196,878 | Whats your dev theme? | mine is the 10x hacker theme, whats yours? | 0 | 2019-10-28T15:52:34 | https://dev.to/fultonbrowne/whats-your-dev-theme-g53 | discuss, healthydebate, meta | mine is the 10x hacker theme, whats yours?
 | fultonbrowne |
197,063 | Managing Emotions As A Programmer | There's going to be times when you'll feel stuck. Your energy, drive, and emotion will go up and down... | 2,745 | 2019-10-28T17:08:19 | https://juniortosenior.substack.com/p/managing-emotions-as-a-programmer | career | There are going to be times when you'll feel stuck. Your energy, drive, and emotion will go up and down over time during your career. Managing your enthusiasm during the day-to-day work as a programmer takes significant practice. There's a good chance you will go through many ups and downs. I've found that it helps to visualize it as a time-series graph of your emotional state over time. It's important to have a general sense of where you are in relation to your historical emotions.

When I talk about emotions I'm talking about how excited you are to wake up and get to work on whatever project you're working on that day. Some days you'll jump out of bed because you're working on an exciting project. Perhaps it's something new you've never worked on before. Maybe you have to solve a difficult algorithmic problem or work on designing a new class or service. These kinds of projects are what get us excited to go do our job every day. Sometimes it might not even be the technical side of things. Many engineers thrive off of the positive feedback they get from the people who use their software, whether they're customers or internal team members. We all love recognition for our work, and it feels great when the software you write gets praised, whether it's for the intuitive user experience, the simplicity of the interface, the challenging problem you solved, the critical bug you fixed, or the magic your code does to save the team hours or even days of effort. There's a real sense of accomplishment when your efforts get recognized by other people. These are good times, and you should enjoy your accomplishments.
The day to day emotions are something you need to learn how to control though. One good day or one bad day at work won't make a difference in the short term. We all have bad days, but it's helpful to be aware of when you string together multiple good or bad days in a row at work. Maybe even one or two bad weeks, and things suddenly feel much duller. It'll get harder to get out of bed in the morning. You start dreading your commute into the office. There may be a coworker who is difficult to work with, and it's hard not to let it affect your mood in the office. There may be a bug in production you just can't seem to figure out. You'll want to bang your head against the wall because you're so frustrated. You've thought through every scenario and read over your code changes multiple times. Why won't the code work?
You'll feel like you're going insane. We've all been there. The faster you can learn to manage your emotional state during frustrating times like these, the faster you'll grow as a programmer. You'll learn sometimes the best thing to do is walk away from the keyboard. For small issues, this could mean going for a quick walk outside to get some fresh air. Other times it may mean calling it quits for the day and heading home. We often do our best thinking away from a computer screen. Some of the hardest algorithms or trickiest solutions have come to us while we're not actually thinking about the problem. It could come to you in the car during your commute, in the shower before work, or even in your sleep.
My point is, do your best not to let your frustrations take over. The work will still be there tomorrow. Walk away from the keyboard and give your brain a chance to recover. Managing your mental and emotional state isn't something you often hear, but it has a big effect on our career.
---
This post was originally published on my newsletter [Junior To Senior](https://juniortosenior.substack.com/) | dpods |
197,065 | Fata Morganas in Accessibility | Sometimes you stumble over things where your first thoughts are, "Hey, that's great for accessibility!" or "Nice, it's always better to solve things in a browser-native way instead of relying on JavaScript". But sometimes things are too good to be true | 0 | 2019-10-28T16:36:12 | https://marcus.io/blog/fata-morganas-in-a11y | a11y | ---
title: Fata Morganas in Accessibility
published: true
description: Sometimes you stumble over things where your first thoughts are, "Hey, that's great for accessibility!" or "Nice, it's always better to solve things in a browser-native way instead of relying on JavaScript". But sometimes things are too good to be true
tags: a11y
cover_image: https://marcus.io/content/1-blog/20191023-fata-morganas-in-a11y/fatamorgana.jpg
canonical_url: https://marcus.io/blog/fata-morganas-in-a11y
---
*Originally posted on [marcus.io](https://marcus.io/blog/fata-morganas-in-a11y).*
Sometimes you stumble over things where your first thoughts are, "Hey, that's great for accessibility!" or "Nice, it's always better to solve things in a browser-native way instead of relying on JavaScript". [And this has happened to me before](https://marcus.io/blog/menu-or-not).
Unfortunately, once you dive deeper into these, at first, perfect solutions, you read more and more about their imperfections or downright disadvantages. Here are some elements, attributes or techniques that initially piqued my interest. But luckily, after that "Using `<details>`/`<summary>` for a menu" episode, I decided to do some research before diving into the phase of building demos and excitedly writing about that new shiny a11y thing. Because sometimes these new techniques turn out to be some kind of fata morgana, or solutions that are not yet ready for prime time.
### No. 1: Using `role="feed"` solves accessibility issues for all users
I was happy when I first read about the `feed` role. To give context: It is meant to help in situations where there is infinite scrolling and a continuous stream of content (like for example Twitter's, Mastodon's or Facebook's status message list). And I wasn't alone. Quote Deque employee Raghavendra Satish Peri:
> Just like many developers and the accessibility professionals, I initially believed that role=”feed” would solve any accessibility-related problems for infinite scrolling.
But the disadvantage of relying on `role="feed"` alone to solve a feed's accessibility problems is that there are more than just screen reader users who are negatively affected by infinite scroll interfaces. Keyboard-only users, speech recognition software users, people using zoom or switch devices, and people with cognitive disabilities also have problems with this pattern.
Aforementioned Raghavendra Satish Peri wrote [an interesting article about the scope of the problem](https://www.deque.com/blog/infinite-scrolling-rolefeed-accessibility-issues/) and the potential misunderstanding that comes with this role. Furthermore, he suggests an accessible infinite scroll design pattern, which looks promising and is worth a look for everybody who plans to implement a feed of this sort themselves in the future (especially since there's no consensus on an "official" usage example by the WAI-ARIA Authoring Practices).
### No. 2: Detecting screen readers in CSS with the `speech` media query
If you look at it superficially, `@media speech` seems to be a way to address screen readers via CSS, and a means to avoid `.visibility-hidden` classes, for example. On a second, more thorough look you'll notice that [there is a discussion ongoing in CSS Working Group (Drafts), regarding its removal](https://github.com/w3c/csswg-drafts/issues/1751).
Because on the one hand, [it is not supported in user agents at all](https://github.com/w3c/csswg-drafts/issues/1751#issuecomment-390288685) and on the other, screen readers use the `screen` media type anyway, the agreement is to add a warning regarding `@media speech`: it "is for pure-audio UAs, **not** screen readers". So in the future it could be a way to detect Alexa, Siri or Google Home. In the present, though, it is no method to recognize screen readers.
Bear in mind that even if it *would* work, it wouldn't be a good idea to use it:
- There are more screen reader users than those who are 100% blind: people who are partially sighted also use this type of software, as do people with good eyesight but cognitive issues
- [You shouldn't aim to detect screen readers for ethical privacy reasons.](https://tink.uk/thoughts-on-screen-reader-detection/)
### No. 3: Using the `<dialog>` element for perfectly accessible modal windows
It's actually quite hard to build an accessible modal dialog, so `<dialog>` appears like a gift at first. To [quote Eric Bailey](https://css-tricks.com/some-hands-on-with-the-html-dialog-element/#comment-1751844):
> I like what dialog represents. HTML should pave the cowpaths of popular UI patterns. They should especially do this when those patterns come with tricky implementation concerns that developers may fail to consider, especially when those considerations include accessibility.
Additionally, most of the time the advice – when it comes to ARIA and JavaScript in general – is: "Don't rebuild elements or widgets when there is a native one at hand" ([ARIA Rule #1](https://www.w3.org/TR/using-aria/#rule1)). And while this is true in most cases (like using and styling a `<select>` instead of trying to re-build one), this advice is out of place when it comes to the `<dialog>` element.
Admittedly, at first glance the native dialog implementation looks rather good: [It is, like Chris Coyier states, "not just a semantic element, it has APIs and special CSS"](https://css-tricks.com/some-hands-on-with-the-html-dialog-element/). It brings `.show()`, `.showModal()` and `.close()` methods (and hitting the `ESC` key closes it), plus a new mode for forms, namely `method="dialog"`. This means that in the following example, a button click would close the modal in the same way a button in a "normal" form would submit the form's data:
<form method="dialog"><button>Close</button></form>
Furthermore, `<dialog>` brings a `::backdrop` CSS pseudo-element to precisely style the often darkening overlay, and it even takes care of focus management (minus implementing a focus trap while a modal is open).
Sounds too good to be true? Well, it kind of is. Part of the reason for that is the well-intended focus algorithm that is built in. Unless there is an `autofocus` attribute present in the dialog, it will focus the first focusable element. But it is not guaranteed that this element will be one of the first items in the dialog. In his article ["Having an open dialog"](https://www.scottohara.me/blog/2019/03/05/open-dialog.html), Scott O'Hara paints the picture of a scenario where a dialog contains a long text, for example our most favorite internet texts (legal ones like terms of service). If the first focusable element were at the end of this long, scrolling text, a user would experience a dialog that starts in a state where the content is already scrolled. Also, one important piece of focus management is still missing from the dialog: returning focus to the dialog-triggering element once it closes. So developers might think they have taken care of accessibility by using `<dialog>` – but in fact they need to programmatically fill these gaps.
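To make that last gap concrete, here is a minimal sketch (my own wiring, not from the article; the element ids in the usage comment are hypothetical) of programmatically returning focus to the trigger: the dialog's `close` event fires for `close()`, the `ESC` key, and `method="dialog"` forms alike.

```javascript
// Sketch: return focus to the triggering element when a <dialog> closes.
function wireDialog(trigger, dialog) {
  trigger.addEventListener('click', function () {
    dialog.showModal();
  });
  // 'close' fires however the dialog was dismissed.
  dialog.addEventListener('close', function () {
    trigger.focus();
  });
}

// In the browser (hypothetical ids):
// wireDialog(document.querySelector('#open-terms'), document.querySelector('#terms'));
```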
Aside from these problems Scott discovered some problems with the dialog in various screen reader and browser combinations. He concludes:
> tldr; I’m just going to say right now that the dialog element and its polyfill are not suitable for use in production. And it’s been that way since the dialog’s earliest implementation in Chrome, six-ish years ago.
### No. 4: Using `aria-label` as a means to describe everything
Maybe you were like me: When I first heard about `aria-label`, I thought: "Wonderful, a way to send more information to screen readers (and screen readers only)."
But don't stop at this thought for three reasons:
- Firstly, don't go over the top and try to explain the usage of standard HTML controls and standard ARIA widgets. Adrian Roselli has summed it up wonderfully in his article ["Stop Giving Control Hints to Screen Readers"](https://adrianroselli.com/2019/10/stop-giving-control-hints-to-screen-readers.html) a few days ago.
- Secondly, you can't just apply `aria-label` to anything. If you aim to add the attribute to static content (minus elements with landmark roles like `<nav>`), be aware that you open Pandora's box and can't rely on consistent screen reader behavior. To learn more, read "[What happens with aria-labelledby, aria-label and aria-describedby on static HTML elements?](http://www.davidmacd.com/blog/does-aria-label-override-static-text.html)" by David MacDonald.
- Thirdly, if you are using `aria-label` with a real-time translation service like Google Translate, its values won't be translated in every browser (only in Google Chrome at the time of writing). To circumvent this issue, you could use text perceivable by everyone, and, if you want to influence the accessible name of a given element, you could point to it via `aria-labelledby`.
| marcush |
198,060 | Remote job php, js | How to find a remote job | 0 | 2019-10-30T02:18:51 | https://dev.to/nghiata/remote-job-php-js-52im | php, javascript, react, laravel | ---
title: Remote job php, js
published: true
description: How to find a remote job
tags: php, js, React, laravel
---
I can code in PHP without a framework and with some frameworks (Laravel, CodeIgniter), as well as in JavaScript (vanilla, React). I want to find a remote job. Is anything available here?
| nghiata |
199,509 | Philosophy of a Good Developer | Going beyond the technical aspects of what makes a good developer by taking a deep look at some fundamental rules. | 0 | 2019-11-28T11:05:44 | https://dev.to/xenoxdev/philosophy-of-a-good-developer-30c2 | productivity, webdev, beginners, career |
---
title: Philosophy of a Good Developer
published: true
description: Going beyond the technical aspects of what makes a good developer by taking a deep look at some fundamental rules.
tags: productivity, webdev, beginners, career
cover_image: https://thepracticaldev.s3.amazonaws.com/i/204ymbrtze33vsh890jd.png
---
It’s been a long time since we talked about something philosophical. So I thought I'd write a new post for you all, this time going back to basics. There's something I've been thinking about a lot lately.
#### Who is actually a good developer?
Well, of course, the answer we're exploring today is not strictly technical, but goes beyond that. Even if you are technically super strong, you can still be a bad developer. I’m sharing my thoughts as a long-time product manager, programmer and techie. To some of you, a few of the points might feel familiar and perhaps even obvious, but there's a reason they're here. I am certain most of you will find something to learn and improve upon, so stay with me until the end, because Rule #6 is the most important of them all.
Let’s explore what it takes to actually be a good developer. 😇😇
### Rule 1: Don’t be a lone wolf 🐺
When you are working on a commercial application or project, the first thing you need to understand is that your code is not just your code; it will be read and reviewed by multiple people. Even if right now, you are the only one working on the project, someone else will have to read and understand it in the future. Your code should be readable, not only for you but also for anyone else who might join the project down the line.
So make sure that you:
***1. Don't use silly variable names***
Oh yeah, I know most of us have been there and done that. To some of us, it might even seem like a bit of harmless fun, but that's far from the truth. Don't use random variable names; that's what rookies do. Your variable name should always make it clear what its purpose is. You need to understand that code will not only be interpreted by the computer but by humans as well.
```javascript
let vx34 = "Something" // ❌
let x = "Something" // ❌
let boxycat = "Something" // ❌
let catCount = 34 // ✅
let user_message = "Something" // ✅
```
***2. Always use comments***
When defining a function, along with giving it a proper name, make sure that you use comments. Of course, you don't have to do it everywhere, but more often than not, it helps to further clarify your intent with code comments. The rule of thumb is *Write a comment if it can save another developer's time to understand the code*. If another dev has to go back and forth in the code just to understand a single function and what it's actually doing, that's bad code. Be foresighted, add appropriate comments where needed.
Using documentation generators like JSDoc can also be efficient. Not only will your comments look good, but they will also support a few cool IDE features like function definition previews.
```javascript
/**
* Represents a book.
* @constructor
* @param {string} title - The title of the book.
* @param {string} author - The author of the book.
*/
function Book(title, author) {
}
```
To learn more about JSdoc, check out [this](https://dev.to/paulasantamaria/document-your-javascript-code-with-jsdoc-2fbf) article.
***3. Make a project wiki***
This is mostly ignored, but it's a highly critical rule to follow. See, the thing is, programming is not a linear process. You might stumble upon a dozen problems while trying to make something work. Whatever it is, write it down. Let's say, even if it's just a problem that you faced while installing MongoDB on your Linux machine, write down every piece of information that can help your teammates solve this same problem if they ever run into it in the future. A well-documented codebase always has a well-documented wiki. How to run the dev env, how to use the design system, how to export env variables locally, whatever it is, **write a wiki!** Trust me, you will save a lot of your (and your team's) time. Here are some useful links that I found:
* [Documenting your projects on GitHub](https://guides.github.com/features/wikis/)
* [How to write wikis in Gitlab](https://docs.gitlab.com/ee/user/project/wiki/)
**And oh, while we're on the topic of documentation, here's a neat little tool you can use to read your node modules' documentation easily.**
{% post teamxenox/moddoc-a-new-way-to-read-documentation-of-node-modules-3ok4 %}
***4. Format code properly***
Last but not the least, indenting code well. I recommend using tools like prettier to make this easy. You can enable the "format on save" feature also; I love this feature. Here is the [full guide](https://dev.to/robertcoopercode/using-eslint-and-prettier-in-a-typescript-project-53jb) on how to do it.
```javascript
// Bad 🙅🏻♂️ ❌
function Book(title, author) {
if(data){
let data = false
}
}

// Good 🤩 ✅
function Book(title, author) {
  if (data) {
    let data = false;
  }
}
```
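If you go the Prettier route, a small shared config is enough to keep the whole team's output consistent. The options below are just one possible sketch, not a recommendation:

```json
{
  "singleQuote": true,
  "semi": true,
  "tabWidth": 2,
  "trailingComma": "es5"
}
```

Drop this into a `.prettierrc` file at the project root, and "format on save" will apply the same rules for everyone.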
So do remember that working on a project is a collaborative effort. Save everyone's time by using the above tips. As Ned Stark said:
>"The lone wolf dies but the pack survives." 🐺☠

### Rule 2: You can't remember it all 🤔
So imagine it's a regular day, you're working on something and then suddenly, you notice that something else is broken. You know how to fix it but really don't have the bandwidth right now. So you decide to make a mental note of it and get back to it later. But you never do.
Sounds familiar? Well, that's just how the human brain works. 🧠 In this particular context, think of your brain as RAM. It remembers certain things during a process and then erases them later, which is exactly why we need to write things down. Computers save things to their hard drives, and in the same manner, we should be writing down important things as well if we want to recall them later. It sounds like such a basic thing to do and yet so many of us struggle with it.
Writing these things down is also supremely helpful in building great software because it lets you focus on the task at hand while also giving you a big picture view when needed. **More on this later in Point #6**!
Using `TODO:` comments is highly effective. This can not only help *you* remember to do certain tasks later but can also encourage your fellow teammates to do those tasks instead if you couldn't get to them for some reason.
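As a tiny sketch of the idea in JavaScript (the function and both comments are hypothetical examples, not from any project mentioned here):

```javascript
// TODO: extract this into a shared helper once more views need it
// FIXME: decide how deleted messages should count (hypothetical example)
function countUnread(messages) {
  // Count messages whose `read` flag is falsy.
  return messages.filter(function (m) { return !m.read; }).length;
}
```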
But if you wanna step up your game, I recommend using a tool like Todo Plus

With this, you can either create a to-do file or track all your `TODO:` comments in a single file. Highly recommend it.
So don't just keep things in your mind palace. **Write them down!**.

### Rule 3: Get into the user's shoes 🥾
As developers, technically you are the first users of the app. Your work is not limited to writing logic or completing features. You also need to ensure that the feature(s) you are building is actually usable. Always ask yourself: If I were the intended user of this app, would I be able to use it? The answer to this should always be a resounding yes. Test all the features to pass *your* standards first, and as a computer engineer, those standards - as well as your expectations from a feature - should be high. Don't wait until the QA team gives you a list of bugs, NO! If you are having trouble understanding the user's perspective, sit down with your PM and understand it. It's okay if you can't fully wrap your mind around it, but you must always try.

This is especially important for full stack developers. If you don't just want to be a full stack developer in name but rather a champion of the people 🏆, you *must* learn how to do many things well, including UI/UX design. I talk about this in detail in my article on how to improve your CSS, which is a powerful tool for any full stack dev.
{% post teamxenox/the-only-way-to-improve-your-css-game-1m2k %}
### Rule 4: No Shortcuts...
...except the application shortcuts. 😜
There are many best practices for devs based on this rule alone. I'm sure you have been in a situation where you have to make the same or similar thing multiple times, and you find yourself copy-pasting the same code over and over again. In a corner of your heart, you *know* that redundancy isn't good, and following the separation-of-concerns principle, you should probably make a function out of it. But your lazy brain tells you it could take a long time, so you decide to skip it. After all, you're saving time you could put into building a new feature, right? Wrong! By doing so, you decrease the performance of the app and end up wasting more time when you revisit it to remove that redundant code. And trust me, it's damn frustrating!
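As a hypothetical before/after sketch of that extraction (the names and data here are made up for illustration):

```javascript
// Before: the same template copy-pasted wherever it's needed.
// console.log('User ' + alice.name + ' scored ' + alice.score);
// console.log('User ' + bob.name + ' scored ' + bob.score);

// After: extracted once, reused everywhere.
function formatUserScore(user) {
  return 'User ' + user.name + ' scored ' + user.score;
}

console.log(formatUserScore({ name: 'Ada', score: 42 })); // "User Ada scored 42"
```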

Anything worth doing, is worth doing right!
You will face many other situations like this on a daily basis, and you will have two choices: the easy one and the right one. So just take that extra leap and do it. **There are no shortcuts.** 🙅🏼♂️🚫
### Rule 5: Don't play the blame game 😡

As a PM, I have faced this problem countless times: whenever something is broken, developers are quick to point fingers at each other. This does nothing but damage the teamwork. We need to understand that it's a team game. If you knew about the existence of a certain issue already and you did nothing to fix it or even bring it to light, you are equally responsible. Always take full responsibility and ownership of the project and don't play the blame game.
Great teams do it on a subconscious level, and I've seen it work so well. Good managers help in fostering this approach while bad ones do the exact opposite (but that's a discussion for another time).
### Rule 6: Look at the bigger picture 🤨🖼
In the end, this is the most important mindset that can differentiate you from others. Of course, you are doing your job just like anyone else, and at the end of the month, you will get your salary for the work you've done. Now, that's great and you can stop there, but I dare you to think beyond that. Whether you've considered it or not, your name is associated with the company or product you are working on. You are not just writing code, my friend, **you are leaving behind a legacy.**
You need to create opportunities for yourself to step back and get an overview of things. Having a big picture view is important to stay aligned with the mission and reassess tasks/priorities with respect to how they help you achieve the ultimate goal, i.e. build great software that helps users be awesome.
At the end of the day, if your creation is garbage, that's on you. Inversely, if your creation is flawless, that's on you as well. Do your best work, and your work will speak for you.

## Conclusion
Being a good developer takes more than just technical skills. If improving technical skills is all you have your sights set upon, that's a low bar. You need to keep in mind that:
* When working on a commercial project, you shouldn't operate like a lone wolf; write easily understandable code and document everything!
* You can't remember everything, so write stuff down; use To-do comments to get back to things you skipped earlier.
* You must put yourself in the user's shoes and build features they can use; judge what you build on your own standards before anything goes to QA.
* Shortcuts will only come back to bite you in the behind; when you do something, do it right.
* Blaming others when things break down will lead you nowhere; take extreme ownership of what you're building and foster teamwork.
* Software is more than just a collection of features, so make sure you step back and look at the big picture every so often. What you build is your legacy.
So expand your mind and be the best developer you can be. I've written at length about how you can improve your focus and increase your brainpower to deliver your best work. 🧠 See the article below to explore this more.
{% post teamxenox/use-the-full-power-of-your-brain-to-be-a-better-developer--27pe %}
**What tips would *you* give to someone to help them become a good developer? I'd love to hear, so write them down in the comments!** 👇🏼💬
---
### P.S. Are you an Open-Source enthusiast?
There are many reasons to love OSS. It's fundamentally collaborative in nature, and everyone involved is building something for the love of it. I've met so many great developers through open-source ever since I started Team XenoX. 🔥 If *you* are also someone who is an open-source enthusiast and looking to build cool products in a collaborative environment and meet awesome people, I welcome you to join me in [XenoX Multiverse](http://bit.ly/xnxmltvrs). Check out some of the [stuff we've made](http://bit.ly/madebyxenox) this year.

***Oh, and if you're looking for work, we're hiring at Skynox Tech! Go ahead and apply [here](http://bit.ly/2CA3qDe).*** 😀💯
Have a great day and I'll see you all again very soon!
| sarthology |
201,518 | How To Build A Twitter Hashtag Tweets Viewing Tool Tutorial | Build a Twitter Hashtag Tweets Viewing Tool Tutorial | 0 | 2019-11-06T20:19:36 | https://www.codewall.co.uk/how-to-build-a-twitter-hashtag-viewing-tool-tutorial/ | node, javascript, twitterapi, api | ---
title: How To Build A Twitter Hashtag Tweets Viewing Tool Tutorial
published: true
description: Build a Twitter Hashtag Tweets Viewing Tool Tutorial
tags: nodejs,javascript,twitterapi,api
canonical_url: https://www.codewall.co.uk/how-to-build-a-twitter-hashtag-viewing-tool-tutorial/
---
Twitter is an incredible social media platform for end users, but it's also immense for data analysts. Twitter offers an API to conduct informative searches and display the results in your own web tools. From there, the world is your oyster, especially for social media marketers.
In this tutorial, we will build a simple website that displays tweets with performance indicators like ‘Retweets’ and ‘Favorites’ for any hashtag we desire. The website will be built on NodeJS with ExpressJS; if you've already got this then great, if not, you can follow my tutorial here – [basic NodeJS & ExpressJS setup](https://www.codewall.co.uk/setting-up-a-local-web-server-with-nodejs-expressjs/).
Here is the final result below

### Prerequisites
The code used in this tutorial will be entirely JavaScript, CSS & HTML, so all you need in place are the following two things.
1. Apply for a [Twitter Developers Account](https://developer.twitter.com/content/developer-twitter/en.html) and wait for approval (This could take up to a couple of weeks)
2. A [basic NodeJS & ExpressJS setup](https://www.codewall.co.uk/setting-up-a-local-web-server-with-nodejs-expressjs/), you can follow my earlier tutorial to get this up and running in less than 30 mins!
### Installing & Configuring Twit
First up, we need to install the beautiful [Twit](https://github.com/ttezel/twit) library which allows us to configure our API credentials and also gives us some pre-defined API functionality. Twit is a neat Twitter API client for Node and saves a boatload of time fleshing out all the code yourself.
Install Twit by running
`npm install twit`
Then, require the library in your server.js file by adding the following code near to the top of the file –
```javascript
const twit = require("twit")
```
Lastly, configure a new Twit instance with your API credentials –
```javascript
let Twitter = new twit({
consumer_key: 'your_consumer_key',
consumer_secret: 'your_consumer_secret',
access_token: 'your_access_token',
access_token_secret: 'your_access_token_secret',
timeout_ms: 60 * 1000, // optional HTTP request timeout to apply to all requests.
strictSSL: true, // optional - requires SSL certificates to be valid.
});
```
### Searching for some tweets
Before we make it all beautiful and user-friendly, we can test searching for tweets from a hashtag by running the API call and logging the response to the console. For this example, I used the ‘#100DaysOfCode’ hashtag for the `q` parameter, which I like to think stands for ‘Query’.
Let’s add the code to search tweets on Twitter, just after the Twit instance setup.
```javascript
Twitter.get('search/tweets', {
q: '#100DaysOfCode',
count: 100,
result_type: "mixed"
}).catch(function (err) {
console.log('caught error', err.stack)
}).then(function (result) {
console.log('data', result.data);
});
```
Now re-run your server.js file and check out the response in the console, it should look similar to below –

As you can see from the screenshot above, each tweet comes with a lot of useful data. Some of it is hidden within the console because the values are nested objects, but it's still really handy data. The most obvious pieces of data are **retweet\_count** and **favorite\_count**.
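To get a feel for the shape of that data, here is a hedged sketch (the `summarizeTweets` helper is mine, not part of Twit) that pulls just the fields we'll display out of the `statuses` array the search endpoint returns:

```javascript
// Reduce each raw tweet object to the fields the page will display.
function summarizeTweets(statuses) {
  return statuses.map(function (tweet) {
    return {
      text: tweet.text,
      retweets: tweet.retweet_count,
      favorites: tweet.favorite_count
    };
  });
}

// e.g. summarizeTweets(result.data.statuses)
```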
#### So, how do we make this user-friendly and ultimately digestible information?
1. Add a single HTML input field to allow submission of hashtags to the backend.
2. Configuring the server.js file to handle post data from the HTML form and use it within the API call.
3. Return the response to our index file.
4. Parse the data and build our beautiful HTML.
Let’s go…
### Adding an HTML form to the index.ejs file
Add the following code to your index.ejs file, for quickness I’ve used the bootstrap and font awesome CDN.
```html
<!DOCTYPE html>
<head>
<meta charset="utf-8">
<title>Twitter Hashtag Viewer</title>
<link href="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css" rel="stylesheet"
type="text/css">
<link href="/css/style.css" rel="stylesheet" type="text/css">
<link href="https://stackpath.bootstrapcdn.com/font-awesome/4.7.0/css/font-awesome.min.css" rel="stylesheet"
type="text/css">
</head>
<body>
<div class="container">
<div class="form mb-2 mt-2">
<fieldset>
<form action="/" method="post">
<div class="input-group">
<input class="form-control" name="hashtag" placeholder="eg. #100DaysOfCode" required type="text">
<input type="submit" value="Analyze!">
</div>
</form>
</fieldset>
</div>
</div>
</body>
</html>
```
With the above code inserted into your file, it should look something like below –

### Configuring Our server.js to handle Post requests
#### Installing and configuring Body-parser
Now we need to write the logic to handle the posting of input values into the form above. First of all, we need to install some middleware which will give us this functionality, namely body-parser. Body-parser has access to the req and res objects giving us the ability to interrogate what data is passed during the post.
Run the following command to install it –
```
npm install body-parser --save
```
Then, at the top of your server.js file, require it, and lastly, tell the app to utilize its power.
```javascript
const bodyParser = require('body-parser')
app.use(bodyParser.urlencoded({ extended: true }));
```
#### Adding our post handler
Add the following JS to your server.js file which will handle a simple posting of the hashtag input form with the name ‘hashtag’.
```javascript
app.post('/', function (req, res) {
console.log(req.body.hashtag);
    if (req.body.hashtag !== undefined) {
        // return so we don't try to render the response twice
        return res.render('index', {hashtag: req.body.hashtag});
    }
    res.render('index', {hashtag: null});
});
```
#### Adjusting the index file to print the hashtag variable passed in from the post handler
Add the following EJS markup to your index.ejs file, somewhere that you want the hashtag to print out after it’s been submitted to the server and returned as a variable.
```html
<% if(hashtag !== null){ %>
<h3>All popular tweets for <%- hashtag %></h3>
<% } %>
```
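One caveat worth flagging: in EJS, `<%- %>` prints its value unescaped, while `<%= %>` HTML-escapes it. Since the hashtag comes straight from user input, the escaped form is the safer choice. A quick plain-JS sketch of the difference (the `escapeHtml` helper is just for illustration, mirroring what EJS does internally):

```javascript
// <%= value %>  -> HTML-escaped (safe for user input like the hashtag)
// <%- value %>  -> raw, unescaped output
function escapeHtml(s) {
  return s.replace(/&/g, '&amp;').replace(/</g, '&lt;')
          .replace(/>/g, '&gt;').replace(/"/g, '&quot;');
}

console.log(escapeHtml('<script>alert(1)</script>'));
// -> &lt;script&gt;alert(1)&lt;/script&gt;
```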
Now, if you reboot your server, navigate to the index file and submit a new hashtag, you should see the value printed to the page! See below, I submitted the hashtag ‘code’

### Putting it all together and displaying tweets
So, we’ve got our Twitter API client ready and the ability to post data from an HTML form; all that's left to do is build the logic for the API call to include the hashtag and return data to the index file. Once that’s done, we can format the data to look good and digestible.
The next pieces of code will need to be completely changed if you want to build more functionality into the project, but for now, their sole purpose is to handle hashtag inputs and query the Twitter API with them.
#### Edit your server.js files post handler
Adjust your Post handler to look the same as below, with your own API credentials –
```javascript
app.post('/', function (req, res) {
  if (req.body.hashtag !== undefined) {
    let Twitter = new twit({
      consumer_key: 'your_consumer_key',
      consumer_secret: 'your_consumer_secret',
      access_token: 'your_access_token',
      access_token_secret: 'your_access_token_secret',
      timeout_ms: 60 * 1000, // optional HTTP request timeout to apply to all requests.
      strictSSL: true, // optional - requires SSL certificates to be valid.
    });
    Twitter.get('search/tweets', {
      q: req.body.hashtag, // use the user posted hashtag value as the query
      count: 100,
      result_type: "mixed"
    }).then(function (result) {
      // Render the index page passing in the hashtag and the Twitter API results
      res.render('index', {
        hashtag: req.body.hashtag,
        twitterData: result.data,
        error: null
      });
    }).catch(function (err) {
      // catch after then, so a failed request renders the error page only once
      console.log('caught error', err.stack);
      res.render('index', {
        hashtag: null,
        twitterData: null,
        error: err.stack
      });
    });
  } else {
    // no hashtag posted - render the empty page rather than leaving the request hanging
    res.render('index', { hashtag: null, twitterData: null, error: null });
  }
});
```
#### Edit your index.ejs file to handle the Twitter Data
Adjust your index.ejs file to look similar to below, which does the following –
* Uses font-awesome for like and retweet icons
* Logic to handle if twitter data is present
* JavaScript to build and append HTML to the page
```html
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>Twitter Hashtag Viewer</title>
<link href="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css" rel="stylesheet"
type="text/css">
<link href="/css/style.css" rel="stylesheet" type="text/css">
<link href="https://stackpath.bootstrapcdn.com/font-awesome/4.7.0/css/font-awesome.min.css" rel="stylesheet"
type="text/css">
</head>
<body>
<div class="container">
<div class="form mb-2 mt-2">
<fieldset>
<form action="/" method="post">
<div class="input-group">
<input class="form-control" name="hashtag" placeholder="eg. #100DaysOfCode" required type="text">
<input type="submit" value="Analyze!">
</div>
</form>
</fieldset>
</div>
<div class="container-fluid">
</div>
<% if(hashtag !== null){ %>
<h3>All popular tweets for <%- hashtag %></h3>
<% } %>
<div id="tweets"></div>
<% if(twitterData !== null){ %>
<script>
let twitterData = <%- JSON.stringify(twitterData) %>;
let tweetHTML = '<div class="row">';
for (let index = 0; index < twitterData.statuses.length; index++) {
var createdDateTime = new Date(twitterData.statuses[index].created_at).toUTCString();
tweetHTML += '<div class="col-sm-4"><div class="card mb-3">' +
'<div class="card-body">' +
'<h5 class="card-title">@' + twitterData.statuses[index].user.screen_name + '</h5>' +
'<h6 class="card-subtitle mb-2 text-muted">' + twitterData.statuses[index].user.name + '</h6>' +
'<p class="card-text">' + twitterData.statuses[index].text + '</p>' +
'<p class="card-text"><i class="fa fa-retweet" aria-hidden="true"></i> ' + twitterData.statuses[index].retweet_count + ' <i class="fa fa-heart" style="color:red;" aria-hidden="true"></i> ' + twitterData.statuses[index].favorite_count + '</p>' +
// '<a class="card-link" href="#">Another link</a>' +
'<p class="card-text"><small class="text-muted">Created on '+createdDateTime.toString()+' </small></p>' +
'</div>' +
'</div>' +
'</div>';
}
tweetHTML += '</div>';
var tweetsContainer = document.getElementById('tweets');
tweetsContainer.insertAdjacentHTML('beforeend', tweetHTML);
</script>
<% } %>
<script src="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/js/bootstrap.min.js"></script>
</body>
</html>
```
Save both files and reboot your Node server, navigate to the index page and search for a tweet. You should now have a very clean HTML page with all of the popular and latest tweets for that hashtag, see example below for #code.

### Summary
This tutorial was written to once again show the power of the Twitter API’s many uses; data like this can be endlessly valuable, especially to businesses looking for trends. Whatever your ideas, this article gives you a strong foundation to get set up quickly and analyze tweets from within your own project. Check out the [Twitter Standard search API documentation](https://developer.twitter.com/en/docs/tweets/search/api-reference/get-search-tweets.html) for further reading on the API method used in this article.
Twitter is an incredible social media platform for end-users, but it’s also immense for data analyzers. Twitter offers an API to conduct informative searches and display the results in your own web tools. From there, the world is your oyster, especially for social media marketers.
Cross Posted From : https://www.codewall.co.uk/
| danenglishby |
201,526 | Ali Spittel talks learning to code without a Computer Science degree | It's a real pleasure to share an interview I did with Ali Spittel! Ali is a great example of a suc... | 0 | 2019-11-08T18:13:46 | https://www.nocsdegree.com/ali-spittel-talks-learning-to-code-without-a-computer-science-degree/ | nocsdegree, javascript, career, inspiration | ---
title: Ali Spittel talks learning to code without a Computer Science degree
published: true
date: 2019-11-06 20:57:00 UTC
tags: nocsdegree, javascript, career, inspiration
canonical_url: https://www.nocsdegree.com/ali-spittel-talks-learning-to-code-without-a-computer-science-degree/
---

It's a real pleasure to share an interview I did with [Ali Spittel](https://www.alispit.tel)! Ali is a great example of a successful developer who didn't need a CS degree to get jobs and be successful. She is also location-independent which is one of the great perks of working in the tech industry and is great at sharing tips for newbies on [Twitter](https://twitter.com/aspittel). Be sure to check out her [Ladybug podcast](https://ladybug.dev/). Enjoy!
## Hey Ali, thanks a lot for doing the interview! Could you give an introduction for coders who want to know more about you?
Hey! I’m Ali, I’m a digital nomad, so I don’t have a permanent location - right now I’m based in New Hampshire, but that’s changing next week! I teach people to be software engineers at General Assembly, a coding bootcamp. Teaching is such a challenge, and I love it. Before that I was a software engineer at a few different startups. Outside of my 9-5, I write a blog geared towards newer programmers, [We Learn Code](https://welearncode.com/) and I have a [podcast](https://ladybug.dev/) with two of my amazing friends. I have worked with React and Python for most of my career, but I’ve also worked with Vue and Rails at varying points.
## What first got you interested in programming?
When I was a sophomore in college, I had an extra course block and an intro to computer science class fit in my schedule. I had no idea what that meant, I thought I was going to learn how to use Microsoft Word better or something! But, the course was taught in Python and I found it super fun -- I could type something into the computer and it would do what I told it to do. I decided that I wanted to double major in computer science, and I even became a teaching assistant for my college. The next semester I took a class on data structures and algorithms in C++, and it was a lot more difficult. A lot of the people in the course had been coding since childhood and I was a total newbie. I ended up doing okay in the class, but I felt like I didn’t fit in as a programmer and so I quit.
## I read that you dropped out of CS at college. How did you get back into coding and back on your feet?
I ended up spending the next semester in DC interning, and I realized that I could automate a lot of the data analysis work that I was assigned to using programming. I realized the real-life application of programming at that point, and I found it really fun again. That summer, I got a software engineering internship as a result of the previous one, which turned into a job. I did finish my degree, but I expedited the process by taking night classes and writing my theses off campus so that I could be a software engineer full time. So it was this super quick cycle of learning to code, then quitting, and then accidentally becoming one full time!
## Are there any tips for people learning to code that you wish you had been told when you were starting?
Getting used to failure and picking the wrong solution at first is a huge part of writing code and it doesn’t mean you’re bad at it. Bugs are inevitable, and error messages are helpful, not terrifying! I am a total perfectionist, and programming makes me break out of that sometimes. At first, it was really difficult to deal with, and I thought it meant I wasn’t a good programmer, but now I know that getting errors and that certain things being difficult is normal.
## Has your lack of Computer Science degree ever been brought up when seeking work?
I’ve never had it brought up, luckily! I think it would be most likely to come up when looking for a first job, and my first job was for a very relaxed early-stage startup that didn’t place a huge value on higher education, which was pretty lucky for me. That being said, I’ve had recruiters reach out from most of the big, brand-name tech companies over the past few years, and none of the ones I’ve talked to have even asked how I learned to code. Honestly, I’d have no interest in working for a company that judged me based on a lack of a computer science degree anyways.
## Do you think employers are getting better at recruiting self-taught developers now?
I think recruiters are good at reaching out to anyone that’s getting recruited a lot. So people with experience or who have computer science degrees. I didn’t get reached out to by recruiters as much when I was at a point in my career where I would have benefitted from them. Now I get reached out to a ton, but I also know enough people personally in the industry that I would be more likely to reach out to them rather than responding to cold recruitment.
## What has been the most satisfying moment in programming for you?
I have two moments that stand out - I remember years ago my boss told me that the best part of programming is when you figure out that with enough time you could probably figure out how to build anything. I started a blog a few years ago where I learned a new technology each week and built something with it. Learning those new things made me realize how similar languages and frameworks really are, and I felt like I realized that I was able to teach myself new things pretty easily.
The second is whenever I have students graduate, it’s awesome to see them be successful -- I can write however many lines of code myself, but the thousands of people I’ve taught can collectively make a much bigger impact than me, and that’s pretty cool.
## As a self-taught developer do you find that you are able to communicate better with coding students as you’ve been in the same position?
I think I’m in this really lucky place from an educational perspective since I have some computer science background, completely self-taught web development, and then have taught at a bootcamp for the last two and a half years. I’ve seen so many different ways of learning to code and their benefits and drawbacks. I think if I were to go back, learning at a bootcamp would have been awesome. The structure and accountability would have been really nice!
## What are you most excited about in terms of web development today?
I’m so excited about the evolution of frontend development - when I started, I was working in AngularJS with gnarly error messages and we had to write custom Webpack configurations instead of using create-react-app. The last five years or so have made frontend development so much easier, and I can’t wait for that to become even more true!
## I know you are one of the founders of the Ladybug podcast. Do you have any big goals or plans for the future you want to share with us?
I have so much fun with [Ladybug podcast](https://ladybug.dev/) since it’s a group project - instead of working alone like I do on my blog, I have two amazing friends that I get to do it with. I’m excited to see it keep growing and to figure out both the podcasting ecosystem (which is super complex in itself) and how to produce episodes that are the most helpful for our audience!
| petecodes |
201,573 | Day 28 : Where do we go now? | liner notes: Professional : Got up early to sit in on some talk prep sessions. Picking up some tips... | 0 | 2019-11-07T00:31:10 | https://dev.to/dwane/day-28-where-do-we-go-now-2kp7 | hiphop, code, coding, lifelongdev | _liner notes_:
- Professional : Got up early to sit in on some talk prep sessions. Picking up some tips. Also did some research to help out someone that had a question in the community. I found some stuff I think will help, but I won't know till maybe tomorrow because of the time difference. Looking to see where we go now that I have this voting application done. I may do a write up or start another project, possibly involving IoT.
- Personal : Finally sat down to finish some training for the radio station. They were pretty long! Looking to go through more tracks for this week's show. Also want to start the new project. I keep saying that. haha

Tomorrow, going to clean up the code for my application some more. Hopefully I was able to help the person in the community. If not, I'll work on that some more and do some research on my next application.
As far as personal work, more radio show tracks and personal side project. Getting back to working on this radio show.
Have a great day!
peace piece
Dwane / conshus
https://dwane.io / https://HIPHOPandCODE.com
{% youtube j_R8JpG2A8g %} | dwane |
201,636 | Skills/learning roadmap | Greetings Dev Community. I am currently enrolled in an introductory web development course and would... | 0 | 2019-11-07T05:27:19 | https://dev.to/ljtea/skills-learning-roadmap-n2n | beginners, bootcamp, discuss, womenintech | <h2>Greetings Dev Community.</h2>
I am currently enrolled in an introductory web development course and would like to further my studies with front end language and skills.
I learn best in classroom environments vs online study, which reflects my choices for current and future learning pursuits.
So far I've learned the following:
<ul>
<li>fundamentals of html5</li>
<li>fundamentals of css + flex-box</li>
</ul>
<img src="https://media.giphy.com/media/SiKqNZqksVYWmQEMjd/source.gif">
I've been exploring learning opportunities to pursue once I complete this introductory course, while I continue practicing my html + css skills.
Some ideas I've been looking into are:
Pursue a two week javascript course.
Then
Pursue front end development bootcamp.
<h3>OR</h3>
Pursue front end development bootcamp in January.
I know many of you have taken similar or different journeys to becoming developers. I would like to hear your insights on your learning journey, from beginner to where you are today.
<ul>
<li>Did you pursue bootcamp? if so, how did you prepare yourself.</li>
<li>Did you work on the fundamentals of html+css and javascript before immersing yourself fully into a bootcamp/ developer program.</li>
<li>Are there certain developer tools you felt were easy to learn yourself?</li>
</ul>
Thank you,
LJTea
| ljtea |
202,077 | Configurando LetsEncrypt con Apache en Debian | Guía para configurar paso a paso LetsEncrypt con Apache en Debian. | 0 | 2019-11-19T07:55:29 | https://dev.to/jpblancodb/configurando-letsencrypt-con-apache-en-debian-6k3 | apache, letsencrypt, debian, servers | ---
title: Setting up LetsEncrypt with Apache on Debian
published: true
description: A step-by-step guide to setting up LetsEncrypt with Apache on Debian.
tags: apache, letsencrypt, debian, servers
---
Following up on the LetsEncrypt with Nginx post, let's see how to configure it with Apache on Debian.
{% link https://dev.to/jpblancodb/configurando-letsencrypt-con-nginx-en-ubuntu-2ini %}
1. Install certbot on the server:
```
echo 'deb http://ftp.debian.org/debian jessie-backports main' | sudo tee /etc/apt/sources.list.d/backports.
sudo apt-get update
sudo apt-get install python-certbot-apache -t jessie-backports
```
2. Configure Apache's ServerName
```
sudo nano /etc/apache2/sites-available/000-default.conf
```
Keep in mind that 000-default.conf is Apache's default configuration; if you use another configuration file, set the ServerName and ServerAlias as appropriate inside the `<VirtualHost></VirtualHost>` tags.
```
ServerName dominio.com
ServerAlias www.dominio.com
```
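For context, a minimal VirtualHost might end up looking like the sketch below (the domain and DocumentRoot are placeholders; adjust them to your own site):

```
<VirtualHost *:80>
    ServerName dominio.com
    ServerAlias www.dominio.com
    # Placeholder path - point this at your actual site files
    DocumentRoot /var/www/html
</VirtualHost>
```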
3. Restart Apache:
```
sudo systemctl restart apache2
```
4. Configure the firewall:
ufw example:
```
sudo ufw allow 'WWW Full'
```
iptables example:
```
sudo iptables -I INPUT -p tcp --dport 443 -j ACCEPT
```
5. Generate the certificate with certbot:
```
sudo certbot --apache
```
Done! Our certificate is now set up with LetsEncrypt and Apache. Remember that the certificate has to be renewed; for instructions on configuring automatic renewal, see step 5 of Setting up LetsEncrypt with Nginx on Ubuntu.
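In short, renewal is a single certbot command. A minimal unattended setup (assuming certbot's default configuration; many newer packages already install a systemd timer or cron job that does this for you) could be a cron entry like:

```
# Run twice a day; certbot only renews certificates that are close to expiry
0 0,12 * * * certbot renew --quiet
```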
If you have any questions, don't hesitate to leave a comment, or you can reach me on [Twitter](https://twitter.com/jpblancodb).
Cheers!
201,719 | How to Make the Most of Office Etiquette as a Programmer | In the software development industry, chances are that you are working in an overall good environment... | 0 | 2019-11-07T09:33:41 | https://dev.to/danilapetrova/how-to-make-the-most-of-office-etiquette-as-a-programmer-364g | office, career | In the software development industry, chances are that you are working in an overall good environment. The companies hold onto their employees, as the demand is high but the supply of competent and reliable programmers is far from sufficient. Providing a good working space is at the top of the list by the IT industry hiring standards. So is supplying other benefits like snacks and sometimes lunch. Not to mention having access to high-quality coffee - let’s be honest this is the developers’ preferred fuel. That comes in addition to a dedicated dining room and a space to relax and take a break.
So in my experience as part of a [java software development company](https://dreamix.eu/technologies/partner-with-java-ee-development-company), the office conditions are not ones to complain about too much. However, this does not in any way mean that the developers do not need to take care of their own behaviour in the office. Etiquette seems to be a bit of an afterthought in modern society, so here are a few tips on what is accepted as appropriate in the workplace.
# Respect personal space
This should go without saying, but respecting the personal space of others is key in a professional environment. So when someone signals that they need you to respect their personal space, you should respect that.
Also, if you have told someone that they are being too friendly and they do not take that remark on board, it is completely valid to raise the situation with your superiors.
Friends are often made at work; however, you should always account for the boundaries others may set, the same way you would like to have yours respected in return. So maybe don't go hugging someone who has clearly shown you they feel uncomfortable when you do that.
# Eating and Consumption in the Working Area
For one, most offices have a dedicated kitchen or dining area. So naturally, it is the place to go when you are enjoying a meal. However, we all know that we enjoy an occasional snack on our desk while working. Especially when we are in a groove with our work, or cramming to meet our deadlines. And there is nothing wrong with that!
There are a few things to be mindful of in order to be a considerate colleague to those around you when it comes to eating on your desk. First up is not being loud. So maybe consider that chips are not the best office food. That goes for the type of package that is loud to open. And so does for loud crunching on your crispy snack.
The next thing we have to mention is eating foods that have a strong smell, as it has a tendency to cling on the fabrics, and is also rude to those who have not yet gone out to eat! Such things would be recently delivered warm food, butter, garlic, popcorn and so on.
Try not to be slurping on liquids too! Stay hydrated, but make sure to remember basic manners as you go about it and not be disruptive. It is rare, to be called out for it, but that does not mean you should let yourself be inconsiderate to the other people in the room who are trying to concentrate.
And the most important thing of all. Sharing! Treat your colleagues to the snack and maybe they will forget all about you interrupting their work.
# Talking in the office
Usually, offices have a dedicated space for breaks and some even have small booths or rooms you can use when you talk on the phone. So you should take advantage of the conditions provided to you. The goal is for you to be able to carry out your needed communication, without it being disruptive to your colleague’s work and concentration.
This is not to take away from chatting with your colleagues as a great way for overall teamwork bonding. But when it comes to the workflow, this sort of interruption could be bad for you. I am in no way saying you should not do that. But rather do it in the break room, or while you are waiting for your coffee to be made. Take your conversation out of the working area to provide better conditions for those still working. There you can laugh, tell jokes or partake in casual chit chat with no negative consequences whatsoever.
Being loud in the office is generally not accepted well, so while you socialize with your colleagues, try to do so, by keeping in mind you are still in a working environment.
# Desk Appearance
The great thing about your desk is that it is your own. Aside from not leaving confidential details around, chances are you can choose how you want it to be decorated.
Still, there are some general tips you should follow for the sake of professional appearance and hygiene. For starters, avoid leaving food around. Or if you keep a snack on your desk, keep it in a lunchbox or a drawer, well-packed so it does not leave crumbs or aroma in your area. That, of course, includes remembering to consume your food while it is fresh or throw it out if you feel it will end up spoiling before you get a chance to.
# Work Area Etiquette
Have you met programmers who have a fidgeting habit of typing something and deleting it while they think? Or other habits that include rhythmically tapping a pen, on the desk or keyboard. If you are one of those people and this helps you focus, consider grabbing a silent fidget cube. Likely you can exercise your habit without being disruptive.
If you are in an open office type of environment chances are you can see your colleague’s monitors. Aside from the times they ask you to check something or work together, you should stick to your own. No need to cause people to feel uneasy.
And speaking of uneasy, the cold-hot office wars seem to be raging at all times. Temperature control can usually be moderated by the employees, however, you should be considerate when tweaking the settings. If you blast it on cold in the summer, you could end up getting yourself or a colleague sick. At the same time raising the temperature too high without confirming with the other people in the room is not much better. Sweating in your warm winter clothes and then heading out will get you sick as easily as the cold current.
Wage your wars diplomatically - talk to your colleagues and come to an acceptable compromise with a temperature that is acceptable to all.
# The Importance of Office Workplace Manners
Most of the tips above seem very common sense. And they can be considered just that. But the purpose of this article is to serve as a reminder to be just a little bit more mindful. While etiquette used to be one of the most important things in society, nowadays it is, in fact, an afterthought. Something people build when they become inclined to, rather than as a necessity to be integrated with others.
Do you think that these kinds of unspoken rules should be upheld in a professional setting? What is your biggest pet peeve when it comes to how employees behave? What is something that people do at work that bothers you?
Make sure to add on to the discussion below in the comments! | danilapetrova |