id int64 5 1.93M | title stringlengths 0 128 | description stringlengths 0 25.5k | collection_id int64 0 28.1k | published_timestamp timestamp[s] | canonical_url stringlengths 14 581 | tag_list stringlengths 0 120 | body_markdown stringlengths 0 716k | user_username stringlengths 2 30 |
|---|---|---|---|---|---|---|---|---|
308,231 | 7 tech trends shaping the future of eCommerce | We have put together a list of top 7 trends that can help your eCommerce business grow and prosper in the future. | 0 | 2020-04-14T05:38:53 | https://dev.to/jignesh_simform/7-tech-trends-shaping-the-future-of-ecommerce-4dgd | ecommerce, ai, ar, voicesearch | ---
title: 7 tech trends shaping the future of eCommerce
published: true
description: We have put together a list of top 7 trends that can help your eCommerce business grow and prosper in the future.
tags: eCommerce, AI, AR, VoiceSearch
---
The eCommerce industry has evolved a great deal over the past few years. We have technological interventions to thank for some of these changes, while a more conscious consumer is responsible for driving the rest of them.
However, innovations and interventions still haven’t come to a halt. You can expect the eCommerce industry to embrace newer ways and get rid of the old ones in the future. We have put together a list of top 7 trends that can help your eCommerce business grow and prosper in the future.
## 1. AR and VR
Contrary to popular perception, AR and VR are used for a lot more than just gaming and entertainment. eCommerce is one of the sectors utilizing these technologies to deliver a much-enhanced user experience to customers.
AR allows customers to get a feel of the product from the comfort of their couch and also while they are on the go. It helps improve the customer satisfaction rate, since people now end up with more realistic expectations of the product. It is expected that [100 million customers](https://www.gartner.com/en/newsroom/press-releases/2019-04-01-gartner-says-100-million-consumers-will-shop-in-augme) might use AR this year. The increased penetration of these technologies is another encouraging sign for eCommerce business owners.
Businesses like IKEA and Sephora are perfect examples of all that can be achieved with the correct implementation of this technology. They made the try-before-you-buy option available to customers, which then resulted in improved sales and increased brand popularity.
## 2. AI
There is probably no other topic hotter than Artificial Intelligence in the tech world right now. It’s no surprise that it branches out into the eCommerce world as well. Whether it be establishing the most efficient delivery routine or predicting market trends to roll out relevant offers, AI can take care of it all.
If it weren't for artificial intelligence, Amazon would not have been able to change its product prices [2.5 million times a day](https://insidebigdata.com/2019/11/30/how-amazon-used-big-data-to-rule-e-commerce/). One can only imagine how much manpower that would have required if it all had to happen manually.
Personalization is one of the many useful applications of AI in eCommerce. You can provide shoppers with personalized suggestions and exclusive offers based on their shopping patterns. It will not only leave the customer a lot more satisfied but also help you boost sales. We merely scratched the surface of what you can do with AI in eCommerce. It doesn’t matter if your business is big or small; you need to leverage AI for better output.
## 3. Payment Methods
Cart abandonment has been haunting eCommerce business owners since time immemorial. Unavailability of the preferred payment method is one of the many reasons people leave the eCommerce portal without buying anything.
There are around [140 online payment methods](https://www.europeanpaymentscouncil.eu/sites/default/files/inline-files/Payment%20Methods%20Report%202019%20-%20Innovations%20in%20the%20Way%20We%20Pay.pdf) available as of today. And you need to strive to include as many of them in your eCommerce portal as possible. 50% of shoppers leave an app if they do not find their preferred payment method, and that should be a good enough motivation for you.
User preferences for the payment method vary from region to region and from niche to niche. All you need to do is identify the right Alternative Payment Methods and make them available on your portal. The options can vary right from credit cards to cryptocurrency. The gist is that the customer experience should not take a hit because of the unavailability of the payment methods.
## 4. Chatbots
Conversational chatbots are elevating customer experience across eCommerce platforms as we speak. They aren't about rolling out canned messages; they are about providing customers with more human, smarter interactions.
More than [1.4 billion](https://acquire.io/blog/chatbots-trends/) people are already using chatbots, so acceptance of this technology isn't an issue at all. Live chat software has a 73% satisfaction rate when it comes to users interacting with businesses.
## 5. Mobile commerce
The USP of eCommerce has been the fact that it allows shoppers to find and buy products from the comfort of their couch. But there is something even better than that. Shopping while you are on the go, waiting in a queue, in the tube, etc. Mobile commerce gives all these freedoms to users, so it must be pretty easy to guess the reason for its popularity.
It is expected that [more than half of online sales will come from mobile commerce by 2021](https://99firms.com/blog/mcommerce-statistics/#gref). The time is ripe for you to make your business ready for this instrumental shift. Some of the boxes you need to check in the process are making sure the UI/UX is user friendly, that the app utilizes the features of the device, and so on. With the mobile commerce side sorted, you can tap into a much bigger user base.
## 6. Voice search
Voice search is the dark horse of eCommerce technology trends. We guess mainstream voice assistants such as Siri, Google Assistant, Alexa, and Cortana are to thank for it. Shoppers are now much more comfortable doing a voice search compared to typing in their query.
There are multiple ways you can leverage this trend to your advantage. You can make your catalog a lot more voice-search friendly so that people reach your platform when they make a voice search query. The other way is to provide a voice search option within your eCommerce application. Thankfully, both of these approaches require you to focus on Natural Language Processing, so it should not be much of a big deal to apply both methods.
## 7. Headless eCommerce
As eCommerce infrastructures grow, it becomes increasingly difficult to roll out changes on the frontend. There is so much to take care of at once, and there is always the possibility of multiple things going wrong. Headless eCommerce solves this problem by making the frontend relatively independent of the backend. It is on the back of this advantage that it is now trending across the eCommerce ecosystem.
With headless eCommerce architecture, it is possible to manage multiple frontends without making many changes in the backend. The feature allows you to stay at the top of your game and keep up with the constantly changing market trends. You can roll out new offers and campaigns with ease and keep the users satisfied.
These eCommerce technology trends can help you take your business to the next level. You will be able to reach and retain a much larger user base, which would also result in increased sales.
*For a more in-depth look into the trends, you can check out our [Future of eCommerce](https://www.simform.com/future-ecommerce-trends-you-need-to-know/) article.*
Do you think we missed any technology trend that is affecting the eCommerce ecosystem? Feel free to share your opinion down in the comments section.
| jignesh_simform |
308,280 | Installing Zimbra Server 8.8.15 on Ubuntu Server 18.04 | System requirements for installing Zimbra; For a Zimbra server supporting at most 50 us... | 0 | 2020-04-14T08:20:29 | https://dev.to/aciklab/ubuntu-server-18-04-uzerine-zimbra-server-8-8-15-kurulumu-2h2e | zimbra, ubuntuzimbra, zimbraserver, acikkaynak | ---
title: Installing Zimbra Server 8.8.15 on Ubuntu Server 18.04
published: true
description:
tags: zimbra, ubuntuzimbra, zimbraserver, acikkaynak
---
##### System requirements for installing Zimbra
Recommended system requirements for a Zimbra server supporting at most 50 users:
- 4 vCPUs or more, depending on your available resources
- 8 GB of RAM or more
- 50 GB of available disk space
- A DNS server
[Click here for more detailed information](http://docs.zimbra.com/docs/shared/8.6.0/system_requirements/wwhelp/wwhimpl/js/html/wwhelp.htm#href=System_Requirements_86.System_Requirements_for_Zimbra_Collaboration.html)
The topology I set up myself looks like this:

##### First, we download **Ubuntu Server 18.04** by clicking the link below.
[Download](https://ubuntu.com/download/server)
During the server installation, we configure the network settings manually, as shown below...

After the network settings, we also enable the **SSH** service when the installer offers it.

### Once our system has booted
We set the hostname to **"mail.pardus.lab"**:
```sh
sudo hostnamectl set-hostname mail.pardus.lab
```
##### - In the /etc/hosts file (`sudo nano /etc/hosts`), I add the IP address of my DNS server as shown below...
###### NOTE: When we run nslookup against **mail.pardus.lab**, it must return the corresponding IP address. That is why, later in this guide, we will create Host and MX records on the DNS server.
```sh
127.0.0.1 localhost
192.168.1.32 mail.pardus.lab pardus.lab
```
##### - In the /etc/resolv.conf file (`sudo nano /etc/resolv.conf`)
###### we check whether the pardus.lab domain is present:
```sh
nameserver 127.0.0.1
nameserver 192.168.1.32
search pardus.lab
```
After all these network settings, we switch over to the DNS server. There we create a **"Host"** record to point the hostname at an IP address, and an **"MX"** record so that mail sent to your e-mail address is routed to the server hosting your mailbox.


We can see the Host and MX records we created like this:

After these settings, we finally **check** the records we created, both from the Windows Server environment and from the Ubuntu Server environment...
To do this, we first open a command prompt in Windows:
I run **"nslookup mail.pardus.lab"** and see that it resolves the domain name.

Then I switch over to the Ubuntu Server side and, on the command line,
type **"dig mail.pardus.lab mx"** to confirm that the record resolves.

##### Once these settings are done, we can move on to the **Zimbra** installation...
First I connect to the Ubuntu server over SSH using PuTTY and switch to the root user.
Then:
Let's download the latest Zimbra release onto the local server.
```sh
wget https://files.zimbra.com/downloads/8.8.15_GA/zcs-8.8.15_GA_3869.UBUNTU18_64.20190918004220.tgz
```
Let's extract the archive:
```sh
tar xvf zcs-8.8.15_GA_3869.UBUNTU18_64.20190918004220.tgz
```
Let's change into the extracted directory:
```sh
cd zcs-8.8.15_GA_3869.UBUNTU18_64.20190918004220
```
To perform the installation, run the ./install.sh script:
```sh
sudo ./install.sh
```
During the installation:
- Type **"Y"** to accept the license terms and start the installation.
```sh
Do you agree with the terms of the software license agreement? [N] Y
```
- Accept using Zimbra's package repository.
```sh
Use Zimbra's package repository [Y] Y
```
- Select the packages to install.
```sh
Select the packages to install
```
- For the packages we are asked about in turn, make the choices below one by one as each prompt appears...
```sh
Install zimbra-ldap [Y]
Install zimbra-logger [Y]
Install zimbra-mta [Y]
Install zimbra-dnscache [Y]
Install zimbra-snmp [Y]
Install zimbra-store [Y]
Install zimbra-apache [Y]
Install zimbra-spell [Y]
Install zimbra-memcached [Y]
Install zimbra-proxy [Y]
Install zimbra-drive [Y]
Install zimbra-imapd (BETA - for evaluation only) [N]
Install zimbra-chat [Y]
```
We accept the system modification.
```sh
The system will be modified. Continue? [N] Y
```
After that, the download and installation of the Zimbra packages will begin.
Once that step is finished, the screen where we set the admin password will appear.

Then, to complete the setup, first press **"a"** to apply the configuration,
type **"yes"** to save it,
and **"yes"** again to modify the system.
With that, we have successfully completed the Zimbra server installation.

To access the admin interface, go to " https://ip-address|hostname:7071 "


As you can see, we have logged in successfully. In our next post we will cover **"adding a domain and creating users"** in Zimbra Mail... Good luck!
[Açık Kaynak Yazılımları](https://www.acikkaynak.blog/ubuntu-server-18-04-uzerine-zimbra-server-8-8-15-kurulumu/)
| ekarabulut |
308,648 | Dark Mode: three Lint checks to help | Three Lint checks to help you developing dark mode on Android | 0 | 2020-04-14T10:43:06 | https://dev.to/dbottillo/dark-mode-three-lint-checks-to-help-b32 | android, darkmode, kotlin, lint | ---
title: Dark Mode: three Lint checks to help
published: true
description: Three Lint checks to help you developing dark mode on Android
tags: android, darkmode, kotlin, lint
---
Implementing Night Mode in Android is pretty straightforward: you have a theme with attributes and you can just define those attributes in the `values-night` qualifier and that's it. Five minutes and you are done. Well...
Most likely, reality is different: you have a project that is 4-5 years old, you don't even have a second theme, you have been using hardcoded colors like `#FFFFFF` everywhere, and maybe at times you felt brave and used a color reference `@color/white` instead. And maybe some other times you had to tint something programmatically with your friend `ContextCompat.getColor(context, R.color.white)` to tint that icon that comes from an API.
So now your code is polluted with things that will be hard to migrate to night: you have white defined directly, as a reference and in code. Guess what, white is not white in night mode anymore :)
And even if you have a brand new project, how do you make sure someone in the future will not do the same? How do you enforce that night mode is implemented in every commit/PR? That's where Lint checks come in to help us!
Most Android developers are familiar with Lint, but as a refresher: Lint is a static analysis tool which looks for different types of bugs; you can find out more at [https://developer.android.com/studio/write/lint](https://developer.android.com/studio/write/lint).
Android comes with some Lint checks already made for you, but you can extend them and add your own! Here I'm going to show you how to add three Lint checks to your project: one to detect the usage of a direct color (eg. `#FFFFFF`), one to detect colors that are missing their night mode equivalent, and one to flag color names that may be problematic (eg. `white, red, green`).
I've already written a post about writing a custom Lint rule ([https://dev.to/dbottillo/how-to-write-a-custom-rule-in-lint-23gf](https://dev.to/dbottillo/how-to-write-a-custom-rule-in-lint-23gf)), so I'm going to assume you know how to write one and focus on the specific content of the three rules.
The first two rules are heavily inspired from Saurabh Arora's post: https://proandroiddev.com/making-android-lint-theme-aware-6285737b13bc
## Direct Color Detector
The first rule is very simple: you shouldn't use any hardcoded color values like `#FFFFFF`.
Let's do a bit of TDD and write a test to validate what we want from the Lint rule:
```kotlin
@Test
fun `should report a direct color violation`() {
val contentFile = """<?xml version="1.0" encoding="utf-8"?>
<View
xmlns:android="http://schemas.android.com/apk/res/android"
android:id="@+id/toolbar"
android:background="#453344"
android:foreground="#667788"
android:layout_width="match_parent"
android:layout_height="wrap_content" />"""
TestLintTask.lint()
.files(TestFiles.xml("res/layout/toolbar.xml", contentFile).indented())
.issues(DIRECT_COLOR_ISSUE)
.run()
.expect("""
|res/layout/toolbar.xml:5: Error: Avoid direct use of colors in XML files. This will cause issues with different theme (eg. night) support [DirectColorUse]
| android:background="#453344"
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|res/layout/toolbar.xml:6: Error: Avoid direct use of colors in XML files. This will cause issues with different theme (eg. night) support [DirectColorUse]
| android:foreground="#667788"
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|2 errors, 0 warnings""".trimMargin())
}
```
That's a lot to get through, let's break it down:
```kotlin
val contentFile = """<?xml version="1.0" encoding="utf-8"?>
<View
xmlns:android="http://schemas.android.com/apk/res/android"
android:id="@+id/toolbar"
android:background="#453344"
android:foreground="#667788"
android:layout_width="match_parent"
android:layout_height="wrap_content" />"""
```
So, here we are defining the content of a hypothetical XML file containing a view tag with a few attributes; both `background` and `foreground` contain a hardcoded color, so we should expect a violation for each of them.
Next, we can build a test lint task, using the class Lint provides for running your rules against test files:
```kotlin
TestLintTask.lint()
.files(TestFiles.xml("res/layout/toolbar.xml", contentFile).indented())
```
With `.files()` you can simulate passing a number of files to Lint. Think of them as all of your project's files; in this case I'm just passing a fake `res/layout/toolbar.xml` whose content is the `contentFile` we defined in the previous section.
```kotlin
TestLintTask.lint()
.files(TestFiles.xml("res/layout/toolbar.xml", contentFile).indented())
.issues(DIRECT_COLOR_ISSUE)
```
With `.issues` you can pass a number of issues lint will use to perform the detection, in this case:
```kotlin
val DIRECT_COLOR_ISSUE = Issue.create("DirectColorUse",
"Direct color used",
"Avoid direct use of colors in XML files. This will cause issues with different theme (eg. night) support",
CORRECTNESS,
6,
Severity.ERROR,
Implementation(DirectColorDetector::class.java, Scope.RESOURCE_FILE_SCOPE)
)
```
So this is how you can create a new issue in Lint: `DirectColorUse` is the id of the issue, then there is a brief description and an explanation, followed by category, priority, severity and the implementation. The `DirectColorDetector.kt` file is the one responsible for detection. If you try to run the test of course it will fail since this file doesn't exist yet. Let’s create an empty one so we can run the test:
```kotlin
class DirectColorDetector : ResourceXmlDetector() {
}
```
Finally at the end of the test we have:
```kotlin
TestLintTask.lint()
.files(TestFiles.xml("res/layout/toolbar.xml", contentFile).indented())
.issues(DIRECT_COLOR_ISSUE)
.run()
.expect("""
|res/layout/toolbar.xml:5: Error: Avoid direct use of colors in XML files. This will cause issues with different theme (eg. night) support [DirectColorUse]
| android:background="#453344"
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|res/layout/toolbar.xml:6: Error: Avoid direct use of colors in XML files. This will cause issues with different theme (eg. night) support [DirectColorUse]
| android:foreground="#667788"
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|2 errors, 0 warnings""".trimMargin())
```
`run()` executes the Lint task and `expect` is a convenient method of the `TestLintTask` to assert the outcome of running Lint on those files. Here we are expecting the two violations for the background and the foreground values. Of course, if you run this test it will fail, since we haven't actually implemented `DirectColorDetector` yet. Let's do that:
```kotlin
class DirectColorDetector : ResourceXmlDetector() {
override fun getApplicableAttributes(): Collection<String>? = listOf(
"background", "foreground", "src", "textColor", "tint", "color",
"textColorHighlight", "textColorHint", "textColorLink",
"shadowColor", "srcCompat")
override fun visitAttribute(context: XmlContext, attribute: Attr) {
if (attribute.value.startsWith("#")) {
context.report(
DIRECT_COLOR_ISSUE,
context.getLocation(attribute),
DIRECT_COLOR_ISSUE.getExplanation(TextFormat.RAW))
}
}
}
```
Extending `ResourceXmlDetector` means we get a chance to act while Lint is going through `XML` files, and by overriding some methods we can perform our check. Here we are using `getApplicableAttributes()` to tell Lint that the detector should run on those attributes of any XML file. In `visitAttribute` we get to look at the attribute value, and if it starts with `#` we report a violation. You do that by using the `context` (which is NOT the Android context but the Lint context) and calling `report` on it with the values of the `DIRECT_COLOR_ISSUE` defined before.
Here we go! Now if you run the test it will be green.
## Missing Night Color
This check is about having a color defined without a night version. And as we did before, let's first write a test to validate our assumption:
```kotlin
@Test
fun `should report a missing night color violation`() {
val colorFile = TestFiles.xml("res/values/colors.xml",
"""<?xml version="1.0" encoding="utf-8"?>
<resources>
<color name="color_primary">#00a7f7</color>
<color name="color_secondary">#0193e8</color>
</resources>""").indented()
val colorNightFile = TestFiles.xml("res/values-night/colors.xml",
"""<?xml version="1.0" encoding="utf-8"?>
<resources>
<color name="color_primary">#224411</color>
</resources>""").indented()
TestLintTask.lint()
.files(colorFile, colorNightFile)
.issues(MISSING_NIGHT_COLOR_ISSUE)
.run()
.expect("""
|res/values/colors.xml:4: Error: Night color value for this color resource seems to be missing. If your app supports night theme, then you should add an equivalent color resource for it in the night values folder. [MissingNightColor]
| <color name="color_secondary">#0193e8</color>
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|1 errors, 0 warnings""".trimMargin())
}
```
Pretty straightforward, the color `color_secondary` doesn't have a night definition so it should be reported. Let's have a look at the new Issue:
```kotlin
val MISSING_NIGHT_COLOR_ISSUE = Issue.create("MissingNightColor",
"Night color missing",
"Night color value for this color resource seems to be missing. If your app supports night theme, then you should add an" +
" equivalent color resource for it in the night values folder.",
CORRECTNESS,
6,
Severity.ERROR,
Implementation(MissingNightColorDetector::class.java, Scope.RESOURCE_FILE_SCOPE)
)
```
This is similar to the previous one, of course we have a different id, description and explanation. I'm re-using the same category, priority and severity but feel free to define your own based on your needs. Last is the implementation that points to a new detector called `MissingNightColorDetector`:
```kotlin
class MissingNightColorDetector : ResourceXmlDetector() {
private val nightModeColors = mutableListOf<String>()
private val regularColors = mutableMapOf<String, Location>()
override fun appliesTo(folderType: ResourceFolderType): Boolean {
return folderType == ResourceFolderType.VALUES
}
override fun getApplicableElements(): Collection<String>? {
return listOf("color")
}
override fun afterCheckEachProject(context: Context) {
regularColors.forEach { (color, location) ->
if (!nightModeColors.contains(color))
context.report(
MISSING_NIGHT_COLOR_ISSUE,
location,
MISSING_NIGHT_COLOR_ISSUE.getExplanation(TextFormat.RAW)
)
}
}
override fun visitElement(context: XmlContext, element: Element) {
if (context.getFolderConfiguration()!!.isDefault)
regularColors[element.getAttribute("name")] = context.getLocation(element)
else if (context.getFolderConfiguration()!!.nightModeQualifier.isValid)
nightModeColors.add(element.getAttribute("name"))
}
}
```
Ok, this is way more complicated than the previous one! The reason is that for this specific detection we can't report violations straight away: Lint goes through all the project's files as if it were a tree. That means that while it's visiting `res/values/colors` we don't know anything yet about `res/values-night/colors`, which will probably be visited later on. So, in this case I'm memorising all the `res/values/colors` entries in a map of color name to location (we need the location to know where the violation is in the file) and all the night colors in a list, and then reporting violations at the end of the project evaluation if they don't match. Let's do it step by step:
```kotlin
private val nightModeColors = mutableListOf<String>()
private val regularColors = mutableMapOf<String, Location>()
override fun appliesTo(folderType: ResourceFolderType): Boolean {
return folderType == ResourceFolderType.VALUES
}
override fun getApplicableElements(): Collection<String>? {
return listOf("color")
}
```
The first two are just internal variables to store the list of night mode colors and the map of colors/location for the regular colors. `appliesTo` lets you specify that, among all the xml files, you are only interested in the `ResourceFolderType.VALUES` so it will skip drawables, layouts, etc. Finally in the `getApplicableElements()` we are telling Lint to call this rule only for the tag `color` since we don't care about `style`, `bool`, etc..
```kotlin
override fun visitElement(context: XmlContext, element: Element) {
if (context.getFolderConfiguration()!!.isDefault)
regularColors[element.getAttribute("name")] = context.getLocation(element)
else if (context.getFolderConfiguration()!!.nightModeQualifier.isValid)
nightModeColors.add(element.getAttribute("name"))
}
```
`visitElement` is the method called when a `color` tag is visited. It first checks the folder configuration (eg. `default` or `night`): if it's default, we add the name and location to the map of regular colors; if it's night, we just save the color name in the night mode colors list.
```kotlin
override fun afterCheckEachProject(context: Context) {
regularColors.forEach { (color, location) ->
if (!nightModeColors.contains(color))
context.report(
MISSING_NIGHT_COLOR_ISSUE,
location,
MISSING_NIGHT_COLOR_ISSUE.getExplanation(TextFormat.RAW)
)
}
}
```
Finally, with `afterCheckEachProject` we get another chance to report violations after the whole project has been visited: here we loop through all the regular colors and, if we don't find the equivalent in the night mode colors list, we report a violation. Please pay attention to the location parameter in the `report` method: we are basically telling Lint where the violation occurred in the file.
## Non Semantic Color
The third check is a bit more project-specific, but the idea is that color names like `white` or `red` shouldn't be used. There is a high probability that `white` is not white in night mode, and that `red` is maybe red, but not that specific red, in night mode. It would be better to use semantic colors: instead of `red` use, say, `error`, and instead of `white` use `surface`. So let's write a new test:
```kotlin
@Test
fun `should report a non semantic color violation`() {
val contentFile = """<?xml version="1.0" encoding="utf-8"?>
<View
xmlns:android="http://schemas.android.com/apk/res/android"
android:id="@+id/toolbar"
android:background="@color/white"
android:foreground="@color/red"
android:layout_width="match_parent"
android:layout_height="wrap_content" />"""
TestLintTask.lint()
.files(TestFiles.xml("res/layout/toolbar.xml", contentFile).indented())
.issues(NON_SEMANTIC_COLOR_ISSUE)
.run()
.expect("""
|res/layout/toolbar.xml:5: Error: Avoid non semantic use of colors in XML files. This will cause issues with different theme (eg. night) support. For example, use primary instead of black. [NonSemanticColorUse]
| android:background="@color/white"
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|res/layout/toolbar.xml:6: Error: Avoid non semantic use of colors in XML files. This will cause issues with different theme (eg. night) support. For example, use primary instead of black. [NonSemanticColorUse]
| android:foreground="@color/red"
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|2 errors, 0 warnings""".trimMargin())
}
```
This is very similar to the first test that we wrote, so I'm not going through the details; we are expecting two violations because the background is using `@color/white` and the foreground is using `@color/red`.
```kotlin
val NON_SEMANTIC_COLOR_ISSUE = Issue.create("NonSemanticColorUse",
"Non semantic color used",
"Avoid non semantic use of colors in XML files. This will cause issues with different theme (eg. night) support. " +
"For example, use primary instead of black.",
CORRECTNESS,
6,
Severity.ERROR,
Implementation(NonSemanticColorDetector::class.java, Scope.RESOURCE_FILE_SCOPE)
)
```
Again, nothing new here, it's just a new issue definition for `NonSemanticColorUse`, let's jump to the detector:
```kotlin
class NonSemanticColorDetector : ResourceXmlDetector() {
override fun getApplicableAttributes(): Collection<String>? = listOf(
"background", "foreground", "src", "textColor", "tint", "color",
"textColorHighlight", "textColorHint", "textColorLink",
"shadowColor", "srcCompat")
override fun visitAttribute(context: XmlContext, attribute: Attr) {
if (checkName(attribute.value)) {
context.report(
NON_SEMANTIC_COLOR_ISSUE,
context.getLocation(attribute),
NON_SEMANTIC_COLOR_ISSUE.getExplanation(TextFormat.RAW))
}
}
private fun checkName(input: String): Boolean {
return listOf("black", "blue", "green", "orange",
"teal", "white", "orange", "red").any {
input.contains(it)
}
}
}
```
This detector is very similar to the first one about hardcoded colors: we just define which attributes we are interested in, and in `visitAttribute` we check the attribute value; if it contains any of `black, blue, green, etc.` we report a violation. I also want to mention that this will work for images as well: if you have something like `app:srcCompat="@drawable/ic_repeat_black_24dp"` it will be reported, and for a good reason! If you don't tint that image, then it may not work in night mode.
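One wiring detail the snippets above assume: the three issues need to be exposed through an `IssueRegistry` so Lint can pick them up. A minimal sketch (the class name here is just illustrative, not from a specific project) could look like this:
```kotlin
import com.android.tools.lint.client.api.IssueRegistry
import com.android.tools.lint.detector.api.CURRENT_API
import com.android.tools.lint.detector.api.Issue

// Registry listing the custom issues; the lint module points to this class
// so that the checks are loaded alongside the built-in ones.
class DarkModeIssueRegistry : IssueRegistry() {
    override val api: Int = CURRENT_API
    override val issues: List<Issue> = listOf(
        DIRECT_COLOR_ISSUE,
        MISSING_NIGHT_COLOR_ISSUE,
        NON_SEMANTIC_COLOR_ISSUE
    )
}
```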
## Conclusion
With the three new checks in place, running Lint will surface them in the reports:

Of course, this is a starting point: you can decide to go through the findings and fix them all in one go, or just use them as a sanity check on how far you are from fixing all the potential issues while implementing night mode. My suggestion is to not make them errors in the beginning, otherwise they will fail all your builds, and to turn them into errors only once you have fixed all of them. Turning them into errors then helps prevent new code and features from being added to the project without night mode in mind.
Tip: you can suppress them as you do with any other lint rule:
```xml
<com.google.android.material.floatingactionbutton.FloatingActionButton
android:id="@+id/button"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
app:srcCompat="@drawable/ic_repeat_black_24dp"
tools:ignore="NonSemanticColorUse" />
```
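And if you follow the suggestion above and start with warnings rather than errors, one way to do it without touching the issue definitions (a sketch, assuming a `lint.xml` file in the module or project root) is:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<lint>
    <!-- Keep the custom checks as warnings until the existing violations are fixed -->
    <issue id="DirectColorUse" severity="warning" />
    <issue id="MissingNightColor" severity="warning" />
    <issue id="NonSemanticColorUse" severity="warning" />
</lint>
```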
Finally, you can find a full implementation on one of my app here: [https://github.com/dbottillo/MTGCardsInfo/commit/9bfd1e051f5c264cb7b5316efd50c6af9723b922](https://github.com/dbottillo/MTGCardsInfo/commit/9bfd1e051f5c264cb7b5316efd50c6af9723b922)
Happy coding! | dbottillo |
308,681 | Exploiting sessionStorage API to design a user-friendly multi-step Lead Form | TL;DR This article breaks down the tiresome task of filling a multi-step form using sessio... | 0 | 2020-04-14T12:38:01 | https://dev.to/zeeshan/exploiting-sessionstorage-api-to-design-a-user-friendly-multi-step-lead-form-42jh | javascript, api, form, userexperience | ##TL;DR
This article breaks down the tiresome task of filling in a multi-step form and shows how the sessionStorage API can ease it. The result? Better UX
## Use Case
One of the websites I developed for a [coffee vending machine business](http://cityvendinguae.com/) has a multi-step quote request form page. I had created a number of links on the home page and other main pages to the multi-step form page, but I didn't have enough leads coming to that page.
So what?
I added a little more style to those link buttons and some micro-interactions to the links on the home page. I still wasn't satisfied.
So I thought of displaying a part of the multi-step form in the home page hero; filling it in redirects to the page where the user can fill in the rest of the form.

<figcaption> Lead Generation Form displayed on the home page</figcaption>
## Choosing the tool
With the design done already, I was searching for the code blocks that would help me implement it. The first thing that came to my mind was using the **[localStorage API](https://developer.mozilla.org/en/docs/Web/API/Window/localStorage)**.
>The read-only localStorage property allows you to access a Storage object for the Document's origin; the stored data is saved across browser sessions.
~ MDN
But I wanted the data to be cleared when the user quits the browser or when the session ends, so this wasn't the perfect fit for me, although it partially fulfilled the idea.
The next line of the localStorage docs on MDN gave me a glimpse of the tool I could use instead:
>localStorage is similar to sessionStorage, except that while data stored in localStorage has no expiration time, data stored in sessionStorage gets cleared when the page session ends — that is when the page is closed.
~ [MDN](https://developer.mozilla.org/en-US/docs/Web/API/Window/sessionStorage)
## Implementing the sessionStorage API
The great thing about sessionStorage is that it survives page reloads and restarts, and only gets deleted when the session ends or the browser is quit.
>The data is kept for the duration of the page session, which typically lasts until the tab or browser is closed
Say, these are the inputs I need to store in sessionStorage

<figcaption> These 4 inputs are to be captured using sessionStorage API</figcaption>
Add an event listener that listens for the page load and performs a function
```javascript
window.addEventListener("load", doFirst, false);
```
So, when the page has loaded, the **doFirst** function is activated, which in turn listens for a button click on the form:
```javascript
function doFirst()
{
var button = document.getElementById("button");
button.addEventListener("click", saveForm, false);
}
```

<figcaption> Form is filled randomly</figcaption>
When the button click is detected, the **saveForm** function is activated, which stores the form values using the **sessionStorage API**.
```javascript
function saveForm()
{
let name = document.getElementById("name").value;
let email = document.getElementById("email").value;
let phone = document.getElementById("phone").value;
let company = document.getElementById("company").value;
// Save data to sessionStorage
sessionStorage.setItem("name", name);
sessionStorage.setItem("email", email);
sessionStorage.setItem("phone", phone);
sessionStorage.setItem("company", company);
document.getElementById("name").value = '';
document.getElementById("email").value = '';
document.getElementById("phone").value = '';
document.getElementById("company").value = '';
}
```
Clicking the button takes the user to the multi-step form page. When this page loads, a load event is fired which activates the function that reads the stored values and sets them into the input fields:
```javascript
window.addEventListener("load", display, false);
function display()
{
// Get saved data from sessionStorage
document.getElementById("name2").value = sessionStorage.getItem('name');
document.getElementById("email2").value = sessionStorage.getItem('email');
document.getElementById("phone2").value = sessionStorage.getItem('phone');
document.getElementById("company2").value = sessionStorage.getItem('company');
}
```

<figcaption> Session Storage in action inside Application Tab > Storage </figcaption>
So that's how I did it!
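One small addition I would suggest on top of this (a sketch; the `submit2` id below is hypothetical and not part of the markup above): once the full multi-step form is finally submitted, remove the temporary values so they don't linger for the rest of the session.
```javascript
function clearLeadData()
{
    // Remove only the keys this flow created...
    ["name", "email", "phone", "company"].forEach(function (key) {
        sessionStorage.removeItem(key);
    });
    // ...or wipe everything for the origin with sessionStorage.clear();
}
document.getElementById("submit2").addEventListener("click", clearLeadData, false);
```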
## Takeaways
The key benefit of such an approach is that it makes filling in a multi-step form, which is often regarded as a tiresome task, easier. Though it doesn't cut any cost as such, it contributes to a better form experience.
Let me know your thoughts! I would be happy to hear your feedback/critique of this approach and what you would have done instead. Also, feel free to leave your tips on designing a better form experience.
Links
[sessionStorage](https://developer.mozilla.org/en-US/docs/Web/API/Window/sessionStorage) - MDN Docs | zeeshan |
308,940 | What’s new in Ember Octane | Written by Anjolaoluwa Adebayo-Oyetoro✏️ Ember.js is an open-source MVC-based JavaScript framework... | 0 | 2020-05-11T13:06:55 | https://blog.logrocket.com/whats-new-in-ember-octane/ | ember, javascript, webdev | ---
title: What’s new in Ember Octane
published: true
date: 2020-04-14 13:00:56 UTC
tags: ember,javascript,webdev
canonical_url: https://blog.logrocket.com/whats-new-in-ember-octane/
cover_image: https://dev-to-uploads.s3.amazonaws.com/i/g18t65hmwox75ymr59ug.png
---
**Written by [Anjolaoluwa Adebayo-Oyetoro](https://blog.logrocket.com/author/anjolaoluwaadebayooyetoro/)**✏️
[Ember.js](https://emberjs.com/) is an open-source MVC-based JavaScript framework suited for building large scale client-side applications. It helps developers be more productive out of the box and comes preconfigured with almost all that you need to get an application up and running.
Its official [website](https://emberjs.com/) describes Ember.js as:
> A productive, battle-tested JavaScript framework for building modern web applications. It includes everything you need to build rich UIs that work on any device.
One of the good things about Ember.js is its backward compatibility. This makes it easy to integrate the latest features of the framework in your apps without having to deal with breaking changes.
In its latest release Ember Octane, which was introduced as Ember 3.15, comes with a lot of features and provides updates to Ember’s components and reactivity system, these changes include:
- Glimmer components
- Glimmer reactivity
- Reusable DOM behavior with modifiers
- Fully refreshed tutorials and component guides
- Improved tooling
## What is Ember Octane?
According to its [documentation](https://emberjs.com/editions/octane/):
> Ember Octane describes a set of new features that, when taken together, represent a foundational improvement to the way you use Ember.js. It has modern, streamlined components, and state management that make it fun to build web applications. With seamless interoperability for existing apps, teams can migrate at their own pace, while developers building new apps start out with the best that Ember has to offer.
Let’s take a look at some of the newest features that got shipped in the latest version of the framework.
[](https://logrocket.com/signup/)
## Glimmer components
Ember used to have a [single component system](https://guides.emberjs.com/v1.10.0/components/) where you had to configure a “root element” using a JavaScript micro syntax:
```jsx
import Component from '@ember/component';
export default Component.extend({
tagName: 'p',
classNames: ["tooltip"],
classNameBindings: ["isEnabled:enabled", "isActive:active"],
})
```
With Glimmer components, you can say goodbye to this: the root element is defined directly in the template like any other element, with no special JavaScript syntax. This makes creating components much easier and eliminates the special cases that come from having a second API just for working with the root element of a component.
Your components can now be rewritten like this:
```jsx
<p class="tooltip {{if @isEnabled 'enabled'}} {{if @isActive 'active'}}">
{{yield}}
</p>
```
You can also create a component with no single root element at all to improve performance, and it will still work, like this:
```jsx
<p>{{yield}}</p>
<hr>
```
## Glimmer reactivity
Reactivity is the way modern JavaScript frameworks detect state changes, and how they efficiently propagate the changes through the system. A very good example is how the DOM is automatically updated whenever data in our application changes.
Reactivity, according to [Wikipedia](https://en.wikipedia.org/wiki/Reactive_programming):
> Is a [programming paradigm](https://en.wikipedia.org/wiki/Programming_paradigm) oriented around [data streams](https://en.wikipedia.org/wiki/Dataflow_programming) and the propagation of change. This means that with this paradigm it is possible to express static or dynamic data streams with ease, and also communicate that an inferred dependency within the associated execution model exists, which facilitates the automatic propagation of the changed data flow
Ember Octane offers a simpler reactivity model called "tracked properties", denoted with the `@tracked` annotation. Adding `@tracked` to a property of a class makes it reactive, such that if the property changes, any part of the DOM that uses it will be updated automatically.
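As a rough sketch of what that looks like in a Glimmer component (the component and property names here are just illustrative):
```jsx
import Component from '@glimmer/component';
import { tracked } from '@glimmer/tracking';
import { action } from '@ember/object';

export default class CounterComponent extends Component {
  // Any template or getter that reads `count` re-renders automatically when it changes.
  @tracked count = 0;

  @action
  increment() {
    this.count = this.count + 1; // a plain assignment is all it takes
  }
}
```
There is no `this.set()` or computed-property bookkeeping involved; assigning to a tracked property is enough to propagate the change.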
## Reusable DOM behavior with modifiers
Another update to the Ember component model is element modifiers, a feature that allows you to build reusable DOM behavior that isn't tied to any specific component. Modifiers cover the same ground as classic mixins and should replace them, since with modifiers you no longer have to deal with issues such as naming conflicts.
For example, let’s say we have a third-party library that exposes `activateTabs` and `deactivateTabs` functions, both of which take an element. In classic Ember, you could write a mixin like this:
```jsx
import Mixin from '@ember/object/mixin';
export default Mixin.create({
didInsertElement() {
this._super();
activateTabs(this.element);
}
willDestroyElement() {
this._super();
deactivateTabs(this.element);
}
})
```
And then you would use it in a component like this:
```jsx
import Component from '@ember/component';
export default Component.extend(Tabs, {
// ...
});
```
With element modifiers, this code block can be reimplemented. This is what our `Tabs` mixin looks like when reimplemented as a modifier:
```jsx
import { modifier } from 'ember-modifier';
export default modifier(element => {
activateTabs(element);
return () => deactivateTabs(element);
});
```
You can use a modifier on any element using element modifier syntax:
```jsx
<div {{tabs}}></div>
```
Element modifiers are really straightforward to use. We simply created a function that takes the element, activates it, and returns a destructor function that would run when Ember tears down the element.
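Note that the `modifier` helper used above comes from the ember-modifier addon rather than from Ember itself; assuming a standard Ember CLI project, you would add it with:
```jsx
ember install ember-modifier
```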
## Fully refreshed tutorial and component guides
The Ember team also overhauled the documentation with the [Super Rentals Tutorial](https://guides.emberjs.com/v3.15.0/tutorial/) as a guide for teaching the Octane way to build Ember apps.
The guides also underwent a major refresh, elevating components and eliminating confusing organization (like the separation between templates and components). The new guides deemphasize controllers, which are less important in Octane.
Before Octane:
<figcaption id="caption-attachment-16978">Before Octane</figcaption>
After Octane:
<figcaption id="caption-attachment-16979">After Octane</figcaption>
## Improved tooling
For Octane, the Ember inspector has been updated to support Octane features in a first-class way, including tracked properties and Glimmer components.
The refreshed inspector eliminates duplicate concepts and outdated language (like "View Tree"). It also has numerous visual improvements, including a new component tooltip that better reflects Octane idioms and fixes a long-standing issue with physically small components.

## Basic usage
Let’s take a look at how we can get started with Ember Octane.
This tutorial assumes the reader has the following:
- [Node.js 10x](https://nodejs.org/en/download/) or higher
- [Yarn](https://yarnpkg.com/lang/en/) / [npm 5.2 or higher installed](https://www.npmjs.com/get-npm) on their PC
Install the [Ember-CLI](https://ember-cli.com/) tool, this toolkit is for Ember.js that helps you bootstrap Ember projects on the fly.
Install the CLI tool with the following command:
```jsx
npm install -g ember-cli
```
Installing the Ember CLI package globally gives us access to the `ember` command in our terminal; the `ember new` command helps us create a new application.
Next, create an ember project with the `new` command:
```jsx
ember new ember-quickstart
```
This command will create a new directory called `ember-quickstart` and set up a new Ember application with all the necessary files and configurations for bootstrapping a project inside of it:

Change directory into the application directory:
```jsx
cd ember-quickstart
```
Start the development server:
```jsx
ember serve
```
You should get something similar to this running on `http://localhost:4200` after running the `ember serve` command:

## Conclusion
Ember Octane brings updates to help you build even more powerful applications. Good news – you do not need to change your whole app to use Octane’s features! All features are available for you to opt into, one piece at a time.
There are more amazing features not covered in this article. For a full list of the updates, read the [release notes](https://blog.emberjs.com/2019/12/20/octane-is-here.html).
Which new features stand out to you? Let me know in the comments section.
* * *
## Plug: [LogRocket](https://logrocket.com/signup/), a DVR for web apps

[LogRocket](https://logrocket.com/signup/) is a frontend logging tool that lets you replay problems as if they happened in your own browser. Instead of guessing why errors happen, or asking users for screenshots and log dumps, LogRocket lets you replay the session to quickly understand what went wrong. It works perfectly with any app, regardless of framework, and has plugins to log additional context from Redux, Vuex, and @ngrx/store.
In addition to logging Redux actions and state, LogRocket records console logs, JavaScript errors, stacktraces, network requests/responses with headers + bodies, browser metadata, and custom logs. It also instruments the DOM to record the HTML and CSS on the page, recreating pixel-perfect videos of even the most complex single-page apps.
[Try it for free](https://logrocket.com/signup/).
* * *
The post [What’s new in Ember Octane](https://blog.logrocket.com/whats-new-in-ember-octane/) appeared first on [LogRocket Blog](https://blog.logrocket.com). | bnevilleoneill |
309,058 | Spinning up a highly available Prometheus setup with Thanos | The Problem Prometheus has become one of the standard tools of any monitoring solutions du... | 0 | 2020-04-15T09:40:44 | https://appfleet.com/blog/progressive-feature-driven-delivery-with-flagger/ | devops, kubernetes | #The Problem#
Prometheus has become one of the standard tools of any monitoring solution due to its simple and reliable architecture and ease of use. Despite this, the tool has some shortcomings when working at a certain scale. When trying to scale Prometheus, one major issue you quickly bump into is the problem of cross-shard visibility.
Prometheus encourages a functional sharding approach. Even a single Prometheus server provides enough scalability to free users from the complexity of horizontal sharding in virtually all use cases.
While this is a great deployment model, you often want to access all the data through the same API or UI – that is, a global view. For example, you can render multiple queries in a Grafana graph, but each query can be done only against a single Prometheus server.
# The Solution
Thanos is an open-source, highly available Prometheus setup with long-term storage capabilities that seeks to act as a "silver bullet" for some of the shortcomings that plague vanilla Prometheus setups. Thanos allows users to aggregate Prometheus data natively by directly querying the Prometheus API, compact it efficiently and, most importantly, de-duplicate it.
[Thanos' architecture](https://www.youtube.com/watch?v=l8syWgJ98sk) introduces a central query layer across all the servers via a sidecar component that sits alongside each Prometheus server and a central Querier component that responds to PromQL queries. This makes up a Thanos deployment.
## Background
Following the [KISS](https://en.wikipedia.org/wiki/KISS_principle) and Unix philosophies, Thanos is made of a set of components with each filling a specific role.
* Sidecar: connects to Prometheus, reads its data for query and/or uploads it to cloud storage.
* Store Gateway: serves metrics inside of a cloud storage bucket.
* Compactor: compacts, downsamples and applies retention on the data stored in the cloud storage bucket.
* Receiver: receives data from Prometheus’ remote-write WAL, exposes it and/or uploads it to cloud storage.
* Ruler/Rule: evaluates recording and alerting rules against data in Thanos for exposition and/or upload.
* Querier/Query: implements Prometheus’ v1 API to aggregate data from the underlying components.
See those components on this diagram:

Thanos integrates with existing Prometheus servers through a [Sidecar process](https://docs.microsoft.com/en-us/azure/architecture/patterns/sidecar#solution), which runs on the same machine or in the same pod as the Prometheus server.
The purpose of the Sidecar is to back up Prometheus data into an object storage bucket, and to give other Thanos components access to the Prometheus metrics via a gRPC API.
The Sidecar makes use of the `reload` Prometheus endpoint. Make sure it’s enabled with the flag `--web.enable-lifecycle`.
## Installing Thanos
### Prerequisites
To install Thanos you'll need:
* One or more [Prometheus](https://prometheus.io/) v2.2.1+ installations with a persistent disk.
* Optional object storage.
The easiest way to deploy Thanos for the purposes of this tutorial is to deploy the Thanos sidecar along with Prometheus using the official Helm chart.
To deploy both, just run the following command, after putting your values into a `values.yaml` file and changing the `--namespace` value:
```
helm upgrade --version="8.6.0" --install --namespace="my-lovely-namespace" --values values.yaml prometheus-thanos-sidecar stable/prometheus
```
Note that you need to replace two placeholders in the values file: `BUCKET_REPLACE_ME` and `CLUSTER_NAME`. Also, adjust all the other values according to your infrastructure requirements.
### External Storage
The following configures the sidecar to write Prometheus’ data into a configured object storage:
```
thanos sidecar \
--tsdb.path /var/prometheus \ # TSDB data directory of Prometheus
--prometheus.url "http://localhost:9090" \ # Be sure that the sidecar can use this url!
--objstore.config-file bucket_config.yaml \ # Storage configuration for uploading data
```
The format of the YAML file depends on the provider you choose. Example configurations and an up-to-date list of the storage types Thanos supports are available [here](https://thanos.io/storage.md/).
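As an illustration, an S3-compatible `bucket_config.yaml` typically looks roughly like this (the bucket name, endpoint and credentials below are placeholders, not values from a real setup):
```
type: S3
config:
  bucket: "my-thanos-bucket"
  endpoint: "s3.eu-west-1.amazonaws.com"
  access_key: "ACCESS_KEY"
  secret_key: "SECRET_KEY"
```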
Rolling this out has little to zero impact on the running Prometheus instance. It is a good start to ensure you are backing up your data while figuring out the other pieces of Thanos.
## Deduplicating data from Prometheus HA pairs
The Query component is also capable of deduplicating data collected from Prometheus HA pairs. This requires configuring Prometheus’s `global.external_labels` configuration block to identify the role of a given Prometheus instance.
A typical choice is simply the label name “replica” while letting the value be whatever you wish. For example, you might set up the following in Prometheus’s configuration file:
```
global:
external_labels:
region: eu-west
monitor: infrastructure
replica: A
# ...
```
In a Kubernetes stateful deployment, the replica label can also be the pod name.
Reload your Prometheus instances, and then, in Query, we will define `replica` as the label we want to enable deduplication to occur on:
```
thanos query \
--http-address 0.0.0.0:19192 \
--store 1.2.3.4:19090 \
--store 1.2.3.5:19090 \
--query.replica-label replica # Replica label for de-duplication
--query.replica-label replicaX # Supports multiple replica labels for de-duplication
```
Go to the configured HTTP address, and you should now be able to query across all Prometheus instances and receive de-duplicated data.
# Next Steps
At this point, you should have an idea of how Thanos approaches the task of solving Prometheus's shortcomings. Thanos extends Prometheus with the sidecar component, introduces a central query layer, acts as a long-term metrics store, and de-duplicates your metric data. I hope this overview has helped you gain valuable context around Thanos and the issues it solves. Thanks for reading!
| jimaek |
309,066 | Top tips for escalating the performance of Android apps
| Launching an application for your online business is the smartest idea you can go with. More than 2.5... | 0 | 2020-04-14T14:20:17 | https://dev.to/poojamakkar87/top-tips-for-escalating-the-performance-of-android-apps-2lkl | android, appdevelopment, mobileapp | Launching an application for your online business is the smartest idea you can go with. More than 2.5 billion people use a smartphone. This huge population can be targeted according to its preferences for your business promotion using an application. So far, the Android operating system is ruling the world with the highest number of users. Hence, an android application will be the best bet for your business ventures.
How can an app become a favorite utility for a user? It can be achieved by providing what the user needs and optimizing the app's performance. How is the performance of an Android app optimized and improved? Ask the leading [Android app development company](https://metadesignsolutions.com/android-app-development-company-in-india) and you will get the following answers.
**Performance optimizing tricks for Android apps**
Users can belong to any generation of smartphone owners. In fact, their smartphones can be old or the latest models. You cannot know or predict this attribute of your audience accurately. It means that the Android application you want to develop should perform uniformly across all genres of smartphones. How can this be done? Here is a list of tips from top-performing Android app development agencies you can consider.
**Performance testing**
The first and most important step is to test the performance of your Android application. Your app should be highly efficient in terms of performing in different networks, multiple devices, APIs, servers, etc. This test should be done beforehand so that you can be sure about its overall performance.
A strategic procedure should be followed while testing your application on different devices. In fact, the test list should be chosen accordingly so that you can measure its performance efficiently and make certain changes if needed. The tests include:
**Device performance**
An Android app development company in India will ensure that your application is compatible with every Android device running in the market. In fact, your app should open faster, require less memory and does not need much power to run. These kinks should be addressed if present. Check for app-background features whether your app is retaining the status of a work proceeding in the right way. Also, the app’s capability to integrate with GPS, Wi-Fi, social media, etc should be judged.
**Network performance**
Your app needs to be network-optimized, as you cannot control which network a user will be on. Users will access Wi-Fi, mobile networks, and hotspots to use your app. Make sure it can perform well in any network condition.
**API/Server performance**
The performance of your application will also depend on the data received and sent to servers. Low loading time will enhance functioning. Data communication is a crucial part of your app’s performance capabilities. Maintaining a database will ensure the smooth performance of an app when the server is down.
**Performance optimization**
Once your Android app is debugged and ready to be launched, you need to run performance optimization tests. Let us dig a little deeper into this aspect. When you hire an Android app developer, consider discussing the following points elaborately.
**CDN acceleration**
The use of a content delivery network or CDN accelerates the speed of communications between APIs. Using the nearest server will reduce latency, payload, round-trip time, and data size.
**Reducing image size**
The volume of an image will decide the loading time of your application. Compressing images without compromising the resolution will escalate the performance of your app. The loading time of your app's content will reduce to a minimum. Using WebP format for images, instead of PNG, will help reduce the app's loading time.
**Exclusion of unnecessary features**
Hire Android app developers India who can find unnecessary features included in the app’s framework. These features will cause an unwanted burden on the API to contact a server, seek data and load. Trimming your app’s weight will automatically enhance its speed.
**Reusing data templates**
New data templates take time to load and reduce the performance of an app. The smart developers suggest reusing data templates so that your app can load faster. This is a brilliant example of resourcefulness.
**Conclusion**
Choosing the best among Android app developers in India takes some insight too. Look for teams that are highly experienced in developing apps in your domain. Hire a skilled team of developers to work on your idea, and apply these tips to improve your Android app’s performance efficiently.
| poojamakkar87 |
309,088 | What Goes Into Log Analysis? | I've talked here before about log management in some detail. And I've talked about log analysis in h... | 0 | 2020-04-14T14:58:01 | https://www.scalyr.com/blog/what-goes-into-log-analysis/ | logging, programming | I've talked here before about <a href="https://www.scalyr.com/blog/log-management-what-is-it-and-why-you-need-it/">log management in some detail</a>. And I've talked about log analysis in high-level terms when <a href="https://www.scalyr.com/blog/calculating-the-roi-of-log-analysis-tools/">making the case for its ROI</a>. But I haven't gone into a ton of detail about log analysis. Let's do that today.
At the surface level, this might seem a little indulgent. What's so hard? You take a log file and you analyze it, right?
Well, sure, but what does that mean, exactly? Do <em>you</em>, as a <em>human,</em> SSH into some server, open a gigantic server log file, and start thumbing through it like a newspaper? If I had to guess, I'd say probably not. It's going to be some interleaving of tooling, human intelligence, and heuristics. So let's get a little more specific about what that looks like, exactly.
<h3>Log Analysis, In the Broadest Terms</h3>
In the rest of this post, I'll explain some of the most important elements of log analysis. But, before I do that, I want to give you a very broad working definition.
Log analysis is the process of turning your log files into data and then making intelligent decisions based on that data.
It sounds simple in principle. But it's pretty involved in practice. Your production operations generate all sorts of logs: server logs, OS logs, application logs, etc. You need to take these things, gather them up, treat them as data, and make sense of them somehow. And it doesn't help matters any that log files have some of the most unstructured and noisy data imaginable in them.
So log analysis takes you from "unstructured and noisy" to "ready to make good decisions." Let's see how that happens.
<img class="aligncenter wp-image-1447" src="https://library.scalyr.com/2018/07/19175312/Magnifying_Glass_Doing_Log_Analysis.png" alt="" width="304" height="245">
<!--more-->
<h3>Collection and Aggregation</h3>
As I just mentioned, your production systems are going to produce all sorts of different logs. Your applications themselves produce them. So, too, do some of the things your applications use directly, such as databases. And then, of course, you have server logs and operating system logs. Maybe you need information from your mail server or other, more peripheral places. The point is, you've got a lot of sources of log data.
<img class=" wp-image-1448 alignleft" src="https://library.scalyr.com/2018/07/19175313/Pull_quote-you_can_start_to_regard_your_production_operations_as_a_more_deliberate_whole.png" alt="" width="238" height="226">So, you need to collect these different logs somehow. And then you need to <a href="https://www.scalyr.com/blog/what-is-log-aggregation-and-how-does-it-help-you/">aggregate them</a>, meaning you gather the collection together into a whole.
By doing this, you can start to regard your production operations not as a hodgepodge collection of unrelated systems but as a more deliberate whole.
<h3>Parsing and Semantic Interpretation</h3>
Let's say you've gathered up all of your log files and kind of smashed them together as your aggregation strategy. That might leave you with some, shall we say, variety.
<pre class="lang:default decode:true">111.222.333.123 HOME - [03/Mar/2017:02:44:19 -0800] "GET /some/subsite.htm HTTP/1.0" 200 198 "http://someexternalsite.com/somepage" "Mozilla/4.01 (Macintosh; I; PPC)"</pre>
<pre class="lang:default decode:true">2015-12-10 04:53:32,558 [10] ERROR WebApp [(null)] - Something happened!
</pre>
<pre class="lang:default decode:true">6/15/16,8:23:25 PM,DNS,Information,None,2,N/A,ZETA,The DNS Server has started.</pre>
As you can see, parsing these three very different styles of log entry would prove interesting. There seems to be a timestamp, albeit in different formats, and then a couple of the messages have kind of a general message payload. But beyond that, what do you do?
That's where the ideas of parsing and semantic interpretation come in. When you set up aggregation of the logs, you also specify different parsing algorithms, and you assign significance to the information that results. With some effort and intelligence, you can start weaving this into a chronological ordering of events that serve as parts of a whole.
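For a feel of what this looks like in practice, here is a rough sketch (purely illustrative, not taken from any particular tool; the field names and types are made up) of parsing the first, Apache-style entry above into a structured record:

```typescript
// Illustrative sketch only: turn one Apache-style access log line into a
// structured record. Real log analysis tools ship configurable parsers for this.
interface ParsedEntry {
  source: string;
  timestamp: string; // kept raw here; a real pipeline would normalize to UTC
  fields: { [name: string]: string };
}

const accessLogPattern =
  /^(\S+) (\S+) (\S+) \[([^\]]+)\] "([^"]*)" (\d{3}) (\d+)/;

function parseAccessLogLine(line: string): ParsedEntry | null {
  const match = accessLogPattern.exec(line);
  if (!match) {
    return null; // lines that don't match get routed to a fallback parser
  }
  const [, ip, host, user, timestamp, request, status, bytes] = match;
  return {
    source: "apache-access",
    timestamp,
    fields: { ip, host, user, request, status, bytes },
  };
}
```

Multiply that by every log format in your ecosystem and you can see why this step is usually configured rather than hand-rolled.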
<h3>Data Cleaning and Indexing</h3>
You're going to need to do more with the data than just extract it and assign it semantic meaning, though. You'll have missing entries where you need default values. You're going to need to apply certain rules and transformations to it. And you're probably going to need to filter some of the data out, frankly. Not every last byte captured by every last logging entity in your ecosystem is actually valuable to you.
In short, you're going to need to "clean" the data a little.
Once you've done that, you're in good shape, storage-wise. But you're also going to want to do what databases do: <a href="https://en.wikipedia.org/wiki/Database_index">index the data</a>. This means storing it in such a way to optimize information retrieval.
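As a toy illustration (again, purely mine and not from any specific product), an inverted index maps each token to the set of entries that contain it, so a later search becomes a lookup instead of a scan:

```typescript
// Illustrative sketch only: a tiny inverted index from tokens to log entry ids.
const invertedIndex = new Map<string, Set<number>>();

function indexEntry(id: number, message: string): void {
  // Tokenize crudely on non-word characters; real indexers are far smarter.
  for (const token of message.toLowerCase().split(/\W+/).filter(Boolean)) {
    if (!invertedIndex.has(token)) {
      invertedIndex.set(token, new Set());
    }
    invertedIndex.get(token)!.add(id);
  }
}

function lookup(term: string): Set<number> {
  return invertedIndex.get(term.toLowerCase()) ?? new Set();
}

// indexEntry(42, "ERROR WebApp Something happened!");
// lookup("error"); // -> Set { 42 }
```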
<h3>High-Powered Search</h3>
The reason you need to index as part of your storage and cleaning process is pretty straightforward. Any good log analysis paradigm is going to be predicated upon search. And not just any search --- <a href="https://www.scalyr.com/blog/searching-1tb-sec-systems-engineering-before-algorithms/"><em>really good </em>search</a>.
This makes sense when you think about it. Logs collect tons and tons of data about what your systems are doing in production. To make use of that data, you're going to need to search it, and the scale alone means that search has to be fast and sophisticated. We're not talking about looking up the customer address in an MS Access table with 100 customer records.
<h3>Visualization Capabilities</h3>
Once you have log files aggregated, parsed, stored, and indexed, you're in good shape. But the story doesn't end there. What happens with the information is just as important for analysis.
First of all, you definitely want good visualization capabilities. This includes relatively obvious things, like seeing graphs of traffic or dashboards warning you about spikes in errors. But it can also mean some <a href="https://www.scalyr.com/blog/surprising-use-cases-for-log-visualization/">relatively unusual or interesting visualization scenarios</a>.
Part of log analysis means having the capability for deep understanding of the data, and visualization is critical for that.
<h3>Analytics Capability</h3>
You've stored and visualized your data, but now you also want to be able to slice and dice it to get a deeper understanding of it. You're going to need analytics capability for your log analysis.
To get a little more specific, analytics involves automated assistance interpreting your data and discovering patterns in it. Analytics is a discipline unto itself, but it can include concerns such as the following:
<ul>
<li>Statistical modeling and assessing the significance of relationships.</li>
<li>Predictive modeling.</li>
<li>Pattern recognition.</li>
<li>Machine learning.</li>
</ul>
To zoom back out, you want to gather the data, have the ability to search it, and be able to visualize it. But then you also want automated assistance with combing through it, looking for trends, patterns, and generally interesting insights.
<h3>Human Intelligence</h3>
Everything I've mentioned so far should be automated in your operation. Of course, the automation will require setup and intervention as you go. But you shouldn't be doing this stuff yourself manually. In fact, you shouldn't even write your own tools for this because <a href="https://www.scalyr.com/product">good ones already exist</a>.
But none of this is complete without human intervention, so I'll close by mentioning that. Log analysis requires excellent tooling with sophisticated capabilities. But it also requires a team of smart people around it that know how to set it up, monitor it, and act on the insights that it provides.
Your systems generate an awful lot of data about what they're doing, via many log files. Log analysis is critical to gathering, finding, visualizing, understanding, and acting on that information. It can even mean the difference in keeping an edge on your competition. | daedtech |
309,095 | Why You Shouldn't Learn C | Knowledge of the C programming language is often touted as the mark of a “true” programmer. You don’t really know programming unless you know this language, or so the wisdom goes. Many aspiring programmers have been advised by senior developers (or gatekeepers) to learn C to up their skills and bring them to the next level. […] | 0 | 2020-04-14T15:04:37 | http://erikscode.space/index.php/2020/04/14/why-you-shouldnt-learn-c/ | c |
---
title: Why You Shouldn't Learn C
published: true
description: Knowledge of the C programming language is often touted as the mark of a “true” programmer. You don’t really know programming unless you know this language, or so the wisdom goes. Many aspiring programmers have been advised by senior developers (or gatekeepers) to learn C to up their skills and bring them to the next level. […]
tags: c
canonical_url: http://erikscode.space/index.php/2020/04/14/why-you-shouldnt-learn-c/
cover_image: https://i2.wp.com/erikscode.space/wp-content/uploads/2020/04/phil-hearing-YbT8wrbPMug-unsplash-scaled.jpg?fit=2560%2C1707
---
*Originally posted on [eriksCode.space](http://erikscode.space/index.php/2020/04/14/why-you-shouldnt-learn-c/)*
Knowledge of the C programming language is often touted as the mark of a “true” programmer. You don’t *really* know programming unless you know this language, or so the wisdom goes. Many aspiring programmers have been advised by senior developers (or gatekeepers) to learn C to up their skills and bring them to the next level. That’s what this blog is all about, leveling up, so let’s discuss why learning C might be a waste of your time.
What’s The Point of This Article?
---------------------------------
This blog is dedicated to helping junior programmers level themselves up, bridging the gap between “Programming 101” and advanced levels of software engineering. There’s a ton of steps in between, is learning this esoteric language one of them?
Conventional wisdom says yes, and if you want to convince yourself to learn the C programming language, you don’t need to look very far. Lots of blog posts are dedicated to telling you to learn C, many questions are asked in online forums “should I learn C” with most answers being a resounding yes.
To me, this is information that has been parroted so many times that no one really thinks about it anymore. The typical boilerplate reasons for learning C are given in every one of these posts or Q&As. At this point, it’s one of those “well, I learned it, so everyone else should too” things.
Personally, I think the advice out there regarding this is wrong and I’ll tell you why. I don’t want you to needlessly waste your time on something that will not pay off in the long run. So here are the myths of learning C, why it might be a waste of your time, and what you can do instead.
Myth #1 – It’s the lingua franca of programming languages
---------------------------------------------------------
This is the idea that C is the inspiration for every (or most) programming language that came after it. This is actually true for the most part; the myth is that this fact is at all meaningful.
This is the same reasoning behind learning Latin to become a better English speaker. Sure, in the process of learning Latin, you will gain a perspective on your native language (if it happens to be English) that you wouldn’t otherwise get. However, this is true with learning ***any*** new language.
Likewise, learning C isn’t going to magically make you a better JavaScript programmer *any more than learning Perl will make you a better PHP programmer*. There is value in learning the ins and outs of a new language and seeing how that language community addresses common problems. The value isn’t in the language itself though, it’s in the problem solving skills you exercise.
Will learning C make you a better programmer in your day-to-day language? If you’re an absolute beginner, yes but so will learning ***any*** language. If you’ve been writing code for about 2 or so years? Maybe marginally, but again, so will learning ***any other language***.
### The better way to level up
Like I’ve said repeatedly now, learning a new language will make you a better programmer and is a fine approach to leveling up your skill set. But let’s put some thought into which language to learn to optimize our time instead of reaching for that old copy of K&R C like everyone on the internet told you to.
If you’re a beginner and you’ve learned one programming language, your second one should be either:
1. A language of a different paradigm
2. A language relevant to your preferred domain
A language of a different paradigm means that if you’ve learned an object-oriented programming language, you should learn a functional programming language now. ***Or better yet***, learn functional programming concepts in the context of your original language.
Many programming languages have both OOP and FP features and you can learn the basics of both in one language. Both Python and JavaScript are great examples of this. Even modern C# and Java support many FP features. If you can grasp both paradigms in your main language, you can really get some cool stuff done.
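As a tiny illustration of that (mine, purely for flavor; the example itself is made up), here is the same small task in both styles using TypeScript:

```typescript
interface Item {
  name: string;
  price: number;
  inStock: boolean;
}

// Object-oriented flavor: state and behavior bundled together in a class.
class Cart {
  constructor(private items: Item[]) {}

  totalInStock(): number {
    let total = 0;
    for (const item of this.items) {
      if (item.inStock) total += item.price;
    }
    return total;
  }
}

// Functional flavor: pure functions over the data, no mutation.
const totalInStock = (items: Item[]): number =>
  items.filter(item => item.inStock).reduce((sum, item) => sum + item.price, 0);
```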
Learning a language relevant to your preferred domain means thinking about what kind of programming you want to do, and learning a prevalent language in that domain. If you’re into data science and you already know Python, learn R. If you like web development and you learned JavaScript, give a back-end language a go (PHP, Ruby, Python). If you want to get into embedded programming, *learn C.*
With that last sentence, please take note that I am **not** saying there is **no reason** to learn C. There are times when it’s relevant and knowing it is a prerequisite for some forms of programming.
Myth #2 – You’ll learn how computers work
-----------------------------------------
This is perhaps the most cited reason the gatekeepers give for learning the C programming language. “C gets you as close to the metal as you can get without learning assembler,” I read one comment say, or “you really learn about computer architecture when you learn C.”
These things just aren’t true. First of all, if getting “as close as you can get to the metal” is important, then why aren’t there as many people advocating for learning assembler as there are for learning C? Wouldn’t that be better by definition? In fact, you will *certainly* learn a lot about how computers work if you learn assembler, so why is C the magic language for learning about a “computer’s architecture?”
To put it bluntly, it’s not. And even if C did bestow some kind of arcane computer architecture knowledge, what good does that do you? If knowing how computers work is that important to programming, why do we stop at C? Why not the basics of electrical engineering? Theory of computation? Physics?
When people say “it gets you close to the metal” they actually mean “you learn about pointers,” which aren’t ***that*** complicated (though certainly not simple). Sure, you need to know C if you want to write device drivers, embedded software, or operating systems, and these are “close to the metal” types of things. But unless those are the kinds of things you want to work on, working on them won’t make you a better programmer *in general*.
If you want to be a web developer, writing a printer driver isn’t going to help you as much as building websites will. If you want to be a data engineer, writing embedded software isn’t going to help you as much as learning about building data pipelines will. If you want to make mobile apps, writing an operating system isn’t going to help you as much as writing more mobile apps will.
For the vast majority of gatekeepers who advocate C as a must-know language, they’ll advise you to learn enough of the language to work with pointers. I think the inherent value in learning about pointers is actually in learning about memory management. For most of us, you can learn all you’ll ever *really* need to know about memory management in an afternoon or two of wikipedia articles, YouTube videos, and blog posts on the topic. The rest you can learn as problems arise.
### The better way to level up
If you want to know how computers work, writing code is really just a start, if relevant at all. You’ll also have to learn about circuits, electronics, physics, and so on. And of course, there is nothing wrong with this if that’s what makes you tick, but these things are not paramount to being a good programmer or software engineer.
Memory management, and to some extent, garbage collection, actually *is* an important thing to know, especially with tricky performance bugs. Do a google search for how memory works in computers, then read up on how your target language handles memory management and garbage collection. This is really the extent of what you need to know as a beginner.
Instead of learning “how computers work,” I think it’s more beneficial to learn different kinds of systems administration. For example, an aspiring web developer should have at least basic skills in administering a Linux server, a data engineer will find value in learning how to work with databases, and just about everyone can benefit from some basic cloud administration skills.
Unless your programming goals are to work in environments that require knowing C, learning the aforementioned skills will pay far greater dividends.
Myth #3 – You’ll be better at debugging
---------------------------------------
This is another reason you’ll see given by the gurus of why you should learn C. The reasoning goes that, because many compilers and interpreters are written in C, the trickier bugs require knowledge of the language to troubleshoot.
For example, the Python interpreter is written in C. Say you’re writing your company’s CRM in Django, and as more and more users adopt it, it gets more and more complicated. Eventually, the complexity and usage will cause some weird behavior that will have you tearing your hair out trying to understand it.
The logic here is that knowing C will somehow magically make you able to interpret this behavior better, but this just isn’t true. Performance bugs are inherently tricky, and knowing C will only make you marginally more capable of figuring out the cause.
### The better way to level up
Learning C to get better at debugging is like taking up jogging to get better at swimming. The best way to get better at debugging is by practicing debugging.
Instead of learning C, learn more about your language’s error messages, or how certain exceptions are triggered. Practice reading logs and error messages. Learn the ins and outs of your language’s profiling capabilities. Better yet, learn how to use a debugging tool. If you’re into web development, getting intimately familiar with Chrome tools will be 100x more valuable than learning C.
> I try not to use anecdotal evidence for anything but in this case, I can’t resist. Recently, I wrote some Ruby code that was SO BAD, I was triggering C compiler errors. I was able to figure out the problem though, not because I knew C, but because I am not an idiot. It’s not like you become illiterate to error messages if you don’t know C, you just have to read them a little closer.
Myth #4 – It just makes you *better*
------------------------------------
The worst gatekeepers love giving this advice, “learning C just *makes you a better programmer.*” As if learning C was the puberty of programming and the pathway to true 10x adulthood.
Again, this is patently absurd. If you’re a PHP developer and your web apps are insecure, knowing C isn’t going to change that. If your favorite code to write is data crunching in Jupyter notebooks, learning C isn’t going to make you any better at it.
Every now and then, I’ll see someone mention that the C language is the best way to learn data structures and algorithms. I think this belief stems from the fact that most of these people probably learned DS&A in C because that’s how their university taught it.
If knowing these things is so important and they are language agnostic, you can learn them in any language.
### The better way to level up
The best way to get better at programming is by programming. In fact, knowing just the syntax of a language doesn’t actually mean you know that language and certainly doesn’t mean it’s time to learn a new one.
Instead, learn design patterns. Learn basic software architecture. Learn how to do TDD in your target language. Learn a popular library or framework in your language. Work on a personal project in your language. All these things will have exponentially more benefits than learning C.
When you should learn C
-----------------------
This article borders on sacrilege and as such, people are likely reading things I am not saying. Let me be clear, I am *not* trashing the C language. C is useful and powerful; it has an important role in the history of computing.
If this article is trashing anything, it’s the gatekeepers that tell impressionable programmers to waste their time learning something just because that’s what the old gatekeepers told them when they were impressionable programmers.
There are times you need to know C. There are times when it’s imperative in fact, and let there be no record showing I ever said anything to the contrary. You should learn C if:
- You’re writing embedded software and the best way to do that is in C
- You’re writing a device driver and the best way to do that is in C
- You’re taking a class in school that requires you to learn C
- You’re contributing to a project that is written in C
- You just want to learn C
If you want to learn C for no other reason than you just want to, then do it. Keeping yourself engaged and interested is what will keep you coming back and trying over and over. This is what leads to skill. | erikwhiting88 |
309,150 | Invonto Announces $5M Program for Businesses Affected by Coronavirus | At Invonto, we are closely monitoring COVID-19 developments. The health and safety of our employees a... | 0 | 2020-04-14T17:15:49 | https://www.invonto.com/insights/covid19-support-initiatives/ | At Invonto, we are closely monitoring COVID-19 developments. The health and safety of our employees and customers are our top priority. Since March 16th, we have been operating virtually and will continue to do so until it is safe to operate normally.
We understand the impact the current crisis is having on small and medium-sized businesses across the USA. Many businesses are finding it difficult to stay operational and support customers remotely. Without the proper technology infrastructure, we fear many businesses will not be able to stay efficient and retain their customers for long. Conversely, we have seen that companies that have invested in digital solutions have supported themselves and even thrived during this difficult time. We believe that the right technology has the power to help businesses meet customer needs, maintain revenue streams, and support employees. In an effort to support struggling US businesses affected by the crisis, Invonto will be launching a $5 million program to partner with businesses in need of improving their technology solutions.
We are extending our support with several key initiatives:
- We will host client applications on our Amazon AWS and Microsoft Azure infrastructure at no cost for one year.
- We will provide development and support resources on an hourly basis without requiring any commitments or retainer fees. No project is too small.
- We will offer a pay-as-you-go model that includes no down payments. Payments can be deferred up to 60 days from the initiation of our services.
- We will substantially reduce our development and consulting fees for new projects.
- We will expand our services to help businesses support and maintain existing systems to ensure they stay operational.
We are using Google Meet, Zoom, and Slack for communication. We also have a suite of solutions required for software design, development, and project collaboration. We are making all of these tools available to our customers so there will be no upfront or additional costs for managing projects.
Interested businesses must meet the following requirements for eligibility:
- US-based companies only
- Businesses with less than 1000 employees
We are not healthcare professionals. We cannot be on the frontlines against the Coronavirus. However, we don’t want to stay on the sidelines either. We still want to do our part.
By taking these measures, we hope to help companies meet their current technology needs and prepare their business for accelerated growth when the current crisis ends. In addition to our $5 million program, we are also talking with biotech and pharmaceutical companies researching solutions for Coronavirus and offering our technical assistance.
If you know someone who has a project in mind, please [fill out this contact form](https://www.invonto.com/insights/covid19-support-initiatives/). If you have any questions, please feel free to contact us at any time.
Stay safe and healthy! | invonto | |
309,172 | TDD by Example: Part 1: The Money Example | Summary Beck opens this part of the book giving a summary of what to expect. He lays out t... | 5,922 | 2020-04-14T17:32:11 | https://emmanuelgenard.com/tdd-by-example-part-1-the-money-example | tdd, kentbeck |
## Summary
Beck opens this part of the book by giving a summary of what to expect. He lays out the rhythm of TDD and what the reader will find surprising. There isn't much to summarize here, so I'll just quote him.
> 1. Quickly add a test.
> 2. Run all tests and see the new one fail.
> 3. Make a little change.
> 4. Run all tests and see them all succeed.
> 5. Refactor to remove duplication.
> The surprises are likely to include
> - How each test can cover a small increment of functionality
> - How small and ugly the changes can be to make the new tests run
> - How often the tests are run
> - How many teensy-weensy steps make up the refactorings
## Commentary
These seem simple enough and self-explanatory. I didn't think I would have any thoughts other than, "Let's start!". But then I remembered how often I've tried to add a test and had no idea what to do.
I don't see:
0. How to know what test to write
I think step 0 could be the most important. It would help answer not just what the next test is, but how many there are and what they cover. It might help answer what kind of test to write. Which would probably lead to:
-1. The many kinds of tests and what they cover.
But it's likely that those two steps would require a different book.
| edgenard |
309,183 | Pros & Cons of Remote Work? | Image credits - https://www.ringover.com/img/blog/big/13-remote-working-2.png With internet coverage... | 0 | 2020-04-19T15:44:21 | https://dev.to/vinayhegde1990/pros-cons-of-remote-work-11do | productivity, career | _Image credits - https://www.ringover.com/img/blog/big/13-remote-working-2.png_
With internet coverage available practically everywhere in the world, every one of us is aware of what Coronavirus a.k.a COVID-19 is and the ripple effects it's having across the IT industry. In most countries, the severity of the situation demanded staying home and practicing social distancing, along with enforced lockdowns to flatten the curve. However, since SaaS businesses have to have a **[Business Continuity Plan](https://www.ibm.com/services/business-continuity/plan)** irrespective of circumstances, the whole ecosystem transitioned into remote working, or rather, work from home.
Now, this led to very interesting insights into the whole work-from-home paradigm, some of which are already documented on Dev.to
{% link https://dev.to/helenanders26/being-kind-while-working-in-lockdown-5912 %}
{% link https://dev.to/ben/what-are-the-hardest-parts-about-working-from-home-3m5i %}
{% link https://dev.to/teamxenox/covid-19-self-isolation-work-from-home-and-developers-22kd %}
However, my post isn't another addition to the WFH best-practices genre (_I've linked to a couple at the bottom, though_ :relieved:) but a little corollary based on my experience. With people gradually coming to terms with remote work being the new normal, this is also an attempt to engage with fellow developers here (_i.e. you_) to understand the mindset, rationale, and viewpoints behind it.
Let me start off by listing some advantages/disadvantages of remote work.
### **Pros**
1. It saves a lot of time otherwise spent commuting to and from the office. For example, the average commute in **Mumbai** takes ~2.5-3 hours.
2. You can focus your energies on your spouse/kids, parents/elderly family members which is one way to achieve a personal work-life balance.
3. There's always an extra hand to help out with household chores.
4. Organizations save on operational expenditure by not having to operate/maintain a dedicated workspace, relocate employee talent across geographies.
5. You're in charge of your own schedule and productivity.
### **Cons**
1. If a strict schedule/discipline isn't adhered to, office hours will most likely spill over into personal time, disrupting the work-life balance outlined above.
2. Remote work requires people to communicate frequently which can be very tedious if there's a lack of high-speed internet connectivity, power outages, and software/hardware failure (_VPN issues anyone?_ :cold_sweat:)
3. Too many meetings/calls since little things that can be explained by walking over to a co-worker's desk will now need to be done virtually.
4. A possible feeling of isolation since there will be little personal interaction which doesn't occur as much when working in a dedicated office.
5. Family members, kids and/or pets can be a distraction sometimes.
**PS**: As promised, here is **[article1](https://blog.stylight.com/working-from-home)** and **[article2](https://toggl.com/out-of-office-why-go-remote)** for getting the best out of yourselves while doing work from home :grinning:
---
#### **What are some of the pros & cons you've experienced in remote work?** | vinayhegde1990 |
309,218 | E2 - Creating the Web Form | Open your index.php file (see Create project folder) and follow these steps: 1. Add an HTML html ele... | 5,954 | 2020-04-14T18:37:00 | https://dev.to/herobank110/e2-creating-the-web-form-1bdd | html, tutorial, php | Open your index.php file (see Create project folder) and follow these steps:
1. Add an HTML html element and a body inside it.
```html
<html><body></body></html>
```
2. Add an HTML form element inside the body with blank action and method attributes (we’ll set them later).
```html
<form action="" method=""></form>
```
3. Add an HTML div element inside the form and add a label and an input element inside it for the first name field.
```html
<div> <label>First Name:</label> <input name="first_name"> </div>
```
4. Repeat step 3 for the last name field.
5. Add an HTML button element inside the form with inner text: submit.
```html
<button>Submit</button>
```
6. Try running the code. Notice it does nothing when submitted as the action and method aren’t set. Set the action to "process_my_form.php" and the method to "get" so we can process it in the next stage.
The final code should look something like this:
```html
<html><body><form action="process_my_form.php" method="get">
<div> <label>First Name:</label> <input name="first_name"> </div>
<div> <label>Last Name:</label> <input name="last_name"> </div>
<button>Submit</button>
</form></body></html>
```
Parent topic: [Example 2](https://dev.to/herobank110/example-2-saving-data-from-a-form-4g33) | herobank110 |
309,231 | A minimal authorization policy builder for NodeJs | auth-policy A minimal authorization policy builder which defines if a viewer can perform a... | 0 | 2020-04-14T18:47:45 | https://dev.to/hereisnaman/a-minimal-authorization-policy-builder-for-nodejs-60d | npm, node, javascript, authorization | # auth-policy
A minimal authorization policy builder which defines if a viewer can perform an action on an entity. The Policy can be defined in a declarative manner and can be consumed at various layers of any application.
**Github**: https://github.com/hereisnaman/auth-policy
**NPM**: https://www.npmjs.com/package/auth-policy
## Usage
```
yarn add auth-policy
```
```javascript
import Policy from 'auth-policy'
// create a new policy
const userPolicy = new Policy();
// register concern
userPolicy.register('update', ({ viewer, entity: user, value }) => {
if(viewer.role === 'Admin') return true;
if(viewer.id === user.id) {
if(value.role === 'Admin') return false;
return true;
}
return false;
});
// verify authorization
userPolicy.can(viewer).perform(':update').having(value).on(user);
```
## Documentation
Name | Description
------------ | -------------
viewer| The user for whom the authorization is being verified.
action| A string which defines the action to be performed by the viewer.
entity| The object against which the action is to be performed.
value| The value associated with the action.
### Concerns
Every policy has multiple concerns, each of which maps to an action performed by the viewer and defines if the viewer is authorized to perform that action. Concerns are added to a policy using the `register` function.
```javascript
import Policy from 'auth-policy';
const userPolicy = new Policy();
// registering a single concern
// associated action = ':read'
userPolicy.register('read', ({ viewer }) => !!viewer);
// registering multiple concerns with same authorization policy
// associated actions = ':update', ':delete'
userPolicy.register(['update', 'delete'], ({ viewer, entity }) =>
viewer.role === 'Admin' || viewer.id === entity.id
);
```
### Child Policies
Any policy can have multiple child policies which can be included using the `include` function. It is recommended to have a single root level policy and nest all the other entity level policies inside it.
A policy can be included in two ways: either by passing a prebuilt instance of `Policy`, or by using a callback function which receives a fresh instance of `Policy` as its argument and defines the concerns inside the function. Policies can be nested as deeply as you need.
```javascript
import Policy from 'auth-policy';
const postPolicy = new Policy();
// associated action = ':read'
postPolicy.register('read', ({ viewer, entity }) =>
entity.isPublished || viewer.id === entity.publisher_id
);
const policy = new Policy();
// including a prebuilt policy
// available actions = 'post:read'
policy.include('post', postPolicy);
// using a callback function to define a new policy
// accociated actions = 'user:read', 'user:email:update', 'user:phone_number:update'
policy.include('user', p => {
p.register('read', ({ viewer }) => !!viewer);
// include another set of nested policies at once
p.include(['email', 'phone_number'], p => {
p.register('update', ({ viewer, entity: user }) => viewer.id === user.id);
});
});
```
### Authorization
Once the policy is defined we can simply use the `can` function chain to verify the access to the viewer for a certain action.
```javascript
import Policy from 'auth-policy';
const policy = new Policy();
policy.include('invite', p => {
p.register('read', () => true);
p.register('update', ({ viewer, entity: invite, value }) => {
if(viewer.id === invite.organiser_id) return true;
if(viewer.id === invite.user_id) {
if(invite.status === 'Requested' && value.status === 'Accepted')
return false;
return true;
}
return false;
});
});
const viewer = { id: 1 };
const organiser = { id: 2 };
const invite = { user_id: 1, organiser_id: 2, status: 'Requested' };
policy.can(viewer).perform('invite:read').on(invite); // true
const updatedValue = { status: 'Accepted' };
/* pass value using `having` function if
* there is any value associated with the action. */
policy.can(viewer).perform('invite:update').having(updatedValue).on(invite) // false
policy.can(organiser).perform('invite:update').having(updatedValue).on(invite) // true
```
| hereisnaman |
309,239 | Relay: the GraphQL client that wants to do the dirty work for you | This series of articles is written by Gabriel Nordeborn and Sean Grove. Gabriel is a frontend develo... | 5,967 | 2020-04-16T20:44:10 | https://dev.to/zth/relay-the-graphql-client-that-wants-to-do-the-dirty-work-for-you-55kd | react, graphql, relay | > This series of articles is written by [Gabriel Nordeborn](https://github.com/zth) and [Sean Grove](https://github.com/sgrove). Gabriel is a frontend developer and partner at the Swedish IT consultancy [Arizon](https://arizon.se) and has been using Relay for a long time. Sean is a co-founder of [OneGraph.com](https://www.onegraph.com), unifying 3rd-party APIs with GraphQL.
This is a series of articles that will dive juuuuust deep enough into Relay to answer - *definitively* - one question:
Why in the world would I care about Relay, Facebook’s JavaScript client framework for building applications using GraphQL?
It’s a good question, no doubt. In order to answer it, we’ll take you through parts of building a simple page rendering a blog. When building the page, we’ll see two main themes emerge:
1. Relay is, in fact, an utter workhorse that *wants* to do the dirty work for you.
2. If you follow the conventions Relay lays out, Relay will give you back a fantastic developer experience for building client side applications using GraphQL.
We’ll also show you that Relay applications are scalable, performant, modular, and resilient to change *by default,* and apps built with it are future proofed for the new features in development for React right now.
Relay comes with a (relatively minor) set of costs, which we’ll examine honestly and up-front, so the tradeoffs are well understood.
## Setting the stage
This article is intended to showcase the ideas and philosophy of *Relay*. While we occasionally contrast how Relay does things against other GraphQL frameworks, this article is not primarily intended as a comparison of Relay and other frameworks. We want to talk about and dive deep into *Relay* all by itself, explain its philosophy and the concepts involved in building applications with it.
This also means that the code samples in this article (there are a few!) are only here to illustrate how Relay works, meaning they can be a bit shallow and simplified at times.
We’ll also focus exclusively on the new [hooks-based APIs for Relay](https://relay.dev/docs/en/experimental/step-by-step), which come fully-ready for React’s Suspense and Concurrent Mode. While the new APIs are still marked as experimental, Facebook is rebuilding facebook.com using Relay and said APIs exclusively for the data layer.
Also, before we start - this article will assume basic familiarity with GraphQL and building client side JavaScript applications. [Here’s an excellent introduction to GraphQL](https://graphql.org/learn/) if you feel you’re not quite up to speed. Code samples will be in TypeScript, so a basic understanding of that will help too.
*Finally*, this article is pretty long. See this as a reference article you can come back to over time.
With all the disclaimers out of the way, let’s get going!
# Quick overview of Relay
> Relay is made up of a compiler that optimizes your GraphQL code, and a library you use with React.
Before we dive into the deep end of the pool, let’s start with a quick overview of Relay. Relay can be divided into two parts:
1. The *compiler*: responsible for all sorts of optimizations, type generation, and enabling the great developer experience. You keep it running in the background as you develop.
2. The *library*: the core of Relay, and bindings to use Relay with React.
At this point, all you need to know about the compiler is that it’s a separate process you start that watches and compiles all of your GraphQL operations. You'll hear more about it soon though.
In addition to this, for Relay to work optimally, it wants your schema to follow three conventions:
- All `id` fields on types should be *globally unique* (i.e. no two objects - even two different *kinds* of objects - may share the same `id` value)*.*
- The `Node` interface, meaning: objects in the graph should be fetchable via their `id` field using a top level `node` field. Read more about globally unique id’s and the `Node` interface (and why it’s nice!) [here](https://dev.to/zth/the-magic-of-the-node-interface-4le1).
- Pagination should follow the connection based pagination standard. Read more about what connection based pagination is and why it is a good idea [in this article](https://dev.to/zth/connection-based-pagination-in-graphql-2588).
We won’t dive into the conventions any deeper at this point, but you’re encouraged to check out the articles linked above if you’re interested.
# At the heart of Relay: the fragment
Let’s first talk about a concept that's at the core of how Relay integrates with GraphQL: Fragments. It’s one of the main keys to Relay (and GraphQL!)'s powers, after all.
Simply put, fragments in GraphQL are a way to group together common selections on a specific GraphQL type. Here’s an example:
```graphql
fragment Avatar_user on User {
avatarUrl
firstName
lastName
}
```
> For the curious: Naming the fragment `Avatar_user` is a convention that Relay enforces. Relay wants all fragment names to be globally unique, and to follow the structure of `<moduleName>_<propertyName>`. You can read more about naming conventions for fragments [here](https://relay.dev/docs/en/experimental/a-guided-tour-of-relay#fragments), and we'll talk about _why_ this is useful soon.
This defines a fragment called `Avatar_user` that can be used with the GraphQL type `User`. The fragment selects what’s typically needed to render an avatar. You can then re-use that fragment throughout your queries instead of explicitly selecting all fields needed for rendering the avatar at each place where you need them:
```graphql
# Instead of doing this when you want to render the avatar for the author
# and the first two who liked the blog post...
query BlogPostQuery($blogPostId: ID!) {
blogPostById(id: $blogPostId) {
author {
firstName
lastName
avatarUrl
}
likedBy(first: 2) {
edges {
node {
firstName
lastName
avatarUrl
}
}
}
}
}
# ...you can do this
query BlogPostQuery($blogPostId: ID!) {
blogPostById(id: $blogPostId) {
author {
...Avatar_user
}
likedBy(first: 2) {
edges {
node {
...Avatar_user
}
}
}
}
}
```
This is convenient because it allows reusing the definition, but more importantly it lets you add and remove fields that are needed to render your avatar as your application evolves *in a single place*.
> Fragments allow you to define reusable selections of fields on GraphQL types.
## Relay doubles down on fragments
To scale a GraphQL client application over time, it’s a good practice to try and co-locate your data requirements with the components that render said data. This will make maintenance and extending your components much easier, as reasoning about your component and what data it uses is done in a single place.
Since GraphQL fragments allow you to define sub-selections of fields on specific GraphQL types (as outlined above), they fit the co-location idea perfectly.
So, a great practice is to define one or more fragments describing the data your component needs to render. This means that a component can say, “I depend on these 3 fields from the `User` type, regardless of who my parent component is.” In the example above, there would be a component called `<Avatar />` that would show an avatar using the fields defined in the `Avatar_user` fragment.
Now, most frameworks let you use GraphQL fragments one way or another. But Relay takes this further. In Relay, almost *everything revolves around fragments*.
# How Relay supercharges the GraphQL fragment
At its core, Relay wants every component to have a complete, explicit list of all of its data requirements listed alongside the component itself. This allows Relay to integrate deeply with fragments. Let’s break down what this means, and what it enables.
## Co-located data requirements and modularity
With Relay, you use fragments to put the component’s data requirements right next to the code that’s actually using it. Following Relay's conventions guarantees that every component explicitly lists every field it needs access to. This means that no component will depend on data it doesn't explicitly ask for, making components modular, self-contained and resilient in the face of reuse and refactoring.
Relay does a bunch of additional things to enable modularity through using fragments too, which we'll visit a bit later in this article.
## Performance
In Relay, components will only re-render when the *exact fields* they're using change - with no work on your part! This is because each *fragment* will subscribe to updates only for the data it selects.
That lets Relay optimize how your view is updated by default, ensuring that performance isn’t unnecessarily degraded as your app grows. This is quite different to how other GraphQL clients operate. Don’t worry if that didn’t make much sense yet, we’ll show off some great examples of this below and how important it is for scalability.
With all that in mind, let’s start building our page!
> Relay doubles down on the concept of fragments, and uses them to enable co-location of data requirements, modularity and great performance.
# Building the page to render the blog post
Here’s a wireframe of what our page showing a single blog post will look like:

First, let’s think of how we’d approach this with getting all the data for this view through a single top-level query. A very reasonable query to fulfill the wireframe’s need might look something like this:
```graphql
query BlogPostQuery($blogPostId: ID!) {
blogPostById(id: $blogPostId) {
author {
firstName
lastName
avatarUrl
shortBio
}
title
coverImgUrl
createdAt
tags {
slug
shortName
}
body
likedByMe
likedBy(first: 2) {
totalCount
edges {
node {
firstName
lastName
avatarUrl
}
}
}
}
}
```
One query to fetch all the data we need! Nice!
And, in turn, the structure of UI components might look something like this:
```jsx
<BlogPost>
<BlogPostHeader>
<BlogPostAuthor>
<Avatar />
</BlogPostAuthor>
</BlogPostHeader>
<BlogPostBody>
<BlogPostTitle />
<BlogPostMeta>
<CreatedAtDisplayer />
<TagsDisplayer />
</BlogPostMeta>
<BlogPostContent />
<LikeButton>
<LikedByDisplayer />
</LikeButton>
</BlogPostBody>
</BlogPost>
```
Let’s have a look at how we’d build this in Relay.
## Querying for data in Relay
In Relay, the root component rendering the blog post would typically look something like this:
```typescript
// BlogPost.ts
import * as React from "react";
import { useLazyLoadQuery } from "react-relay/hooks";
import { graphql } from "react-relay";
import { BlogPostQuery } from "./__generated__/BlogPostQuery.graphql";
import { BlogPostHeader } from "./BlogPostHeader";
import { BlogPostBody } from "./BlogPostBody";
interface Props {
blogPostId: string;
}
export const BlogPost = ({ blogPostId }: Props) => {
const { blogPostById } = useLazyLoadQuery<BlogPostQuery>(
graphql`
query BlogPostQuery($blogPostId: ID!) {
blogPostById(id: $blogPostId) {
...BlogPostHeader_blogPost
...BlogPostBody_blogPost
}
}
`,
{
variables: { blogPostId }
}
);
if (!blogPostById) {
return null;
}
return (
<div>
<BlogPostHeader blogPost={blogPostById} />
<BlogPostBody blogPost={blogPostById} />
</div>
);
};
```
Let’s break down what’s going on here, step by step.
```typescript
const { blogPostById } = useLazyLoadQuery<BlogPostQuery>(
graphql`
query BlogPostQuery($blogPostId: ID!) {
blogPostById(id: $blogPostId) {
...BlogPostHeader_blogPost
...BlogPostBody_blogPost
}
}
`,
{
variables: { blogPostId }
}
);
```
The first thing to note is the React hook `useLazyLoadQuery` from Relay:
`const { blogPostById } = useLazyLoadQuery<BlogPostQuery>`. `useLazyLoadQuery` will start fetching `BlogPostQuery` as soon as the component renders.
For type safety, we’re annotating `useLazyLoadQuery` to explicitly state the type, `BlogPostQuery`, which we import from `./__generated__/BlogPostQuery.graphql`. That file is *automatically* generated (and kept in sync with changes to the query definition) by the Relay compiler, and has all the type information needed for the query - how the data coming back looks, and what variables the query wants.
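To give a feel for it (this is a rough, illustrative sketch rather than the compiler’s exact output), the generated file contains something along these lines:

```typescript
// __generated__/BlogPostQuery.graphql.ts (rough sketch only; the real file is
// emitted and kept in sync by the Relay compiler, so you never edit it by hand)
export type BlogPostQueryVariables = {
  blogPostId: string;
};
export type BlogPostQueryResponse = {
  readonly blogPostById: {
    // opaque references to the fragments spread inside the query
    readonly " $fragmentRefs": unknown;
  } | null;
};
export type BlogPostQuery = {
  readonly variables: BlogPostQueryVariables;
  readonly response: BlogPostQueryResponse;
};
```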
> **Disclaimer time**!: As mentioned, `useLazyLoadQuery` will start fetching the query as soon as it renders. **However, note that** Relay actually doesn’t want you to lazily fetch data on render like this. Rather, Relay wants you to start loading your queries as soon as you can, like right when the user is clicking the link to a new page, instead of as the page renders. Why this is so important is talked about at length in this [blog post](https://reactjs.org/docs/concurrent-mode-suspense.html#approach-3-render-as-you-fetch-using-suspense), and in [this talk](https://www.youtube.com/watch?v=Tl0S7QkxFE4), which we warmly recommend you to read and watch.
> We’re still using the lazy load variant in this article though because it's a more familiar mental model for most people, and to keep things as simple and easy to follow as possible. But, please do note that as mentioned above this isn't how you should fetch your query data when building for real with Relay.
Next, we have our actual query:
```typescript
graphql`
query BlogPostQuery($blogPostId: ID!) {
blogPostById(id: $blogPostId) {
...BlogPostHeader_blogPost
...BlogPostBody_blogPost
}
}`
```
Defining our query, there’s really not a whole lot left of the example query we demonstrated above. Other than selecting a blog post by its id, there’s only two more selections - the fragments for `<BlogPostHeader />` and `<BlogPostBody />` on `BlogPost`.
> Notice that we don’t have to import the fragments we’re using. These are included automatically by the Relay compiler. More about that in a second.
Building your query by composing fragments together like this is very important. Another approach would be to let components define their own *queries* and be fully responsible for fetching their own data. While there are a few valid use cases for this, this comes with two major problems:
- A ton of queries are sent to your server instead of just one.
- Each component making their own query would need to wait until they’re actually rendered to start fetching their data. This means your view will likely load quite a lot slower than needed, as requests would probably be made in a waterfall.
> In Relay, we build UIs by composing components together. These components define what data they need themselves in an opaque way.
## How Relay enforces modularity
Here’s the mental model to keep in mind with the code above:
> As the `BlogPost` component, I only know I want to render two children components, `BlogPostHeader` and `BlogPostBody`. I don’t know what data they need (why would I? That’s their responsibility to know!).
> Instead, they’ve told me all the data they need is in a fragment called `BlogPostHeader_blogPost` and `BlogPostBody_blogPost` on the `BlogPost` GraphQL type. As long as I include their fragments in my query, I know I’m guaranteed to get the data they need, even though I don’t know any of the specifics. And when I have the data they need, I'm allowed to render them.
We build our UI by composing components that define their own data requirements *in isolation*. These components can then be composed together with other components with their own data requirements. However, no component really knows anything about what data other components need, other than from *what GraphQL source (type)* the component needs data. Relay takes care of the dirty work, making sure the right component gets the right data, and that all data needed is selected in the query that gets sent to the server.
This allows you, the developer, to think in terms of *components* and *fragments* in isolation, while Relay handles all the plumbing for you.
Moving on!
## The Relay compiler knows all GraphQL code you’ve defined in your project
Notice that while the query is referencing two fragments, there’s no need to tell it _where_ or in what file those fragments are defined, or to import them manually to the query. This is because Relay enforces *globally unique* names for every fragment, so that the Relay compiler can *automatically* include the fragment definitions in any query that's being sent to the server.
Referencing fragment definitions by hand, another inconvenient, manual, potentially error-prone step, is no longer the developer’s responsibility with Relay.
> Using fragments tightly coupled to components allows Relay to hide the data requirements of a component from the outside world, which leads to great modularity and safe refactoring.
Finally, we get to rendering our results:
```typescript
// Because we spread both fragments on this object
// it's guaranteed to satisfy both `BlogPostHeader`
// and `BlogPostBody` components.
if (!blogPostById) {
return null;
}
return (
<div>
<BlogPostHeader blogPost={blogPostById} />
<BlogPostBody blogPost={blogPostById} />
</div>
);
```
Here we render `<BlogPostHeader />` and `<BlogPostBody />`. Looking carefully, you may see that we render both by passing them the `blogPostById` object. This is the object in the query where *we spread their fragments*. This is the way fragment data is transferred with Relay - passing the object where the fragment has been spread to the component using the fragment, which the component then uses to get the actual fragment data. Don't worry, Relay doesn't leave you hanging. Through the type system Relay will ensure that you're passing the _right_ object with the _right_ fragment spread on it. More on this in a bit.
Whew, that’s a few new things right there! But we’ve already seen and expanded on a number of things Relay does to help us - things that we would normally have to do manually for no additional gain.
> Following Relay’s conventions ensures that a component **cannot** be rendered without having the data it asks for. This means you’ll have a hard time shipping broken code to production.
Let’s continue moving down the tree of components.
## Building a component using fragments
Here's the code for `<BlogPostHeader />`:
```typescript
// BlogPostHeader.ts
import * as React from "react";
import { useFragment } from "react-relay/hooks";
import { graphql } from "react-relay";
import {
BlogPostHeader_blogPost$key,
BlogPostHeader_blogPost
} from "./__generated__/BlogPostHeader_blogPost.graphql";
import { BlogPostAuthor } from "./BlogPostAuthor";
import { BlogPostLikeControls } from "./BlogPostLikeControls";
interface Props {
blogPost: BlogPostHeader_blogPost$key;
}
export const BlogPostHeader = ({ blogPost }: Props) => {
const blogPostData = useFragment<BlogPostHeader_blogPost>(
graphql`
fragment BlogPostHeader_blogPost on BlogPost {
title
coverImgUrl
...BlogPostAuthor_blogPost
...BlogPostLikeControls_blogPost
}
`,
blogPost
);
return (
<div>
<img src={blogPostData.coverImgUrl} />
<h1>{blogPostData.title}</h1>
<BlogPostAuthor blogPost={blogPostData} />
<BlogPostLikeControls blogPost={blogPostData} />
</div>
);
};
```
> Our examples here only define one fragment per component, but a component could define _any number_ of fragments, on _any number_ of GraphQL types, including multiple fragments on the same type.
Let’s break it down.
```typescript
import {
BlogPostHeader_blogPost$key,
BlogPostHeader_blogPost
} from "./__generated__/BlogPostHeader_blogPost.graphql";
```
We import two type definitions from the file `BlogPostHeader_blogPost.graphql`, autogenerated by the Relay compiler for us.
The Relay compiler will extract the GraphQL fragment code from this file and generate type definitions from it. In fact, it will do that for *all* the GraphQL code you write in your project and use with Relay - queries, mutations, subscriptions and fragments. This also means that the types will be kept in sync with any change to the fragment definition automatically by the compiler.
`BlogPostHeader_blogPost` contains the type definitions for the fragment, and we pass that to `useFragment` (`useFragment` which we'll talk more about soon) ensuring that interaction with the data from the fragment is type safe.
But what on earth is `BlogPostHeader_blogPost$key` on line 12 in `interface Props { … }`?! Well, it has to do with the type safety. You really _really_ don’t have to worry about this right now, but for the curious we’ll break it down anyway (the rest of you can just skip to the next heading):
That type definition ensures, via some dark type magic, that you can only pass the right object (where the `BlogPostHeader_blogPost` fragment has been spread) to `useFragment`, or you’ll have a type error at build time (in your editor!). As you can see, we take `blogPost` from props and pass it to `useFragment` as the second parameter. And if `blogPost` does not have the right fragment (`BlogPostHeader_blogPost`) spread on it, we’ll get a type error.
It doesn't matter if another fragment with the _exact same_ data selections has been spread on that object, Relay will make sure it's the _exactly right_ fragment you want to use with `useFragment`. This is important, because it's another way Relay guarantees you can change your fragment definitions without any other component being affected implicitly.
Relay eliminates another source of potential errors: passing the _exact_ right object containing the _right_ fragment.
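For the extra-curious, here's a rough sketch of what the generated file could contain. This is an *illustration only* - the exact shape varies between Relay and compiler versions, so treat the field names below as assumptions rather than the compiler's literal output:
```typescript
// __generated__/BlogPostHeader_blogPost.graphql.ts - simplified, illustrative sketch
import { FragmentRefs } from "relay-runtime";

// The data this fragment selects.
export type BlogPostHeader_blogPost = {
  readonly title: string;
  readonly coverImgUrl: string;
  // Opaque references to the fragments spread inside this one.
  readonly " $fragmentRefs": FragmentRefs<
    "BlogPostAuthor_blogPost" | "BlogPostLikeControls_blogPost"
  >;
};

// The "key" type: only an object where this exact fragment has been spread
// carries a matching reference, which is what makes passing the wrong object a type error.
export type BlogPostHeader_blogPost$key = {
  readonly " $data"?: BlogPostHeader_blogPost;
  readonly " $fragmentRefs": FragmentRefs<"BlogPostHeader_blogPost">;
};
```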
## You can only use data you’ve explicitly asked for
We define our fragment `BlogPostHeader_blogPost` on `BlogPost`. Notice that we explicitly select two fields for this component:
- `title`
- `coverImgUrl`
That’s because *we’re using these fields in this specific component*. This highlights another important feature of Relay - data masking. Even if `BlogPostAuthor_blogPost`, the next fragment we’re spreading, also selects `title` and `coverImgUrl` (meaning they *must* be available in the query on that exact place where we'll get them from), we won’t get access to them unless we *explicitly ask for them* via our own fragment.
This is enforced both at the type level (the generated types won’t contain them) *and* at runtime - the values simply won’t be there even if you bypass your type system.
This can feel slightly weird at first, but it’s in fact another one of Relay’s safety mechanisms. If you know it’s impossible for other components to implicitly depend on the data you select, you can refactor your components without risking breaking other components in weird, unexpected ways. This is *great* as your app grows - again, every component and its data requirements become entirely self-contained.
> Enforcing that all data a component requires is explicitly defined means you can’t accidentally break your UI by removing a field selection from a query or fragment that some other component was depending on.
```typescript
const blogPostData = useFragment<BlogPostHeader_blogPost>(
graphql`
fragment BlogPostHeader_blogPost on BlogPost {
title
coverImgUrl
...BlogPostAuthor_blogPost
...BlogPostLikeControls_blogPost
}
`,
blogPost
);
```
Here we're using the React hook `useFragment` to get the data for our fragment. `useFragment` knows how to take a _fragment definition_ (the one defined inside the `graphql` tag) and an object _where that fragment has been spread_ (`blogPost` here, which comes from `props`), and use that to get the data for this particular fragment.
Just to reiterate that point - no data for this fragment (`title`/`coverImgUrl`) will be available on `blogPost` coming from props - that data will only be available as we call `useFragment` with the fragment definition and `blogPost`, the object where the fragment has been spread.
And, just like before, we spread the fragments for the components we want to render - in this case, `BlogPostAuthor_blogPost` and `BlogPostLikeControls_blogPost`, since we're rendering `<BlogPostAuthor />` and `<BlogPostLikeControls />`.
> For the curious: since fragments only describe what fields to select, `useFragment` won't make an actual request for data to your GraphQL API. Rather, a fragment *must* end up in a query (or other GraphQL operation) at some point in order for its data to be fetched. With that said, Relay has some really cool features which will let you refetch a fragment all by itself. This is possible because Relay can _generate queries automatically_ for you to refetch specific GraphQL objects by their `id`. Anyway, we digress...
> Also, if you know Redux, you can liken `useFragment` to a selector that lets you grab only what you need from the state tree.
```typescript
return (
<div>
<img src={blogPostData.coverImgUrl} />
<h1>{blogPostData.title}</h1>
<BlogPostAuthor blogPost={blogPostData} />
<BlogPostLikeControls blogPost={blogPostData} />
</div>
);
```
We then render the data we explicitly asked for (`coverImgUrl` and `title`), and pass the data for the two child components along so they can render. Notice again that we pass each component the object where its fragment was spread - in this case the root of the `BlogPostHeader_blogPost` fragment this component defines and uses.
## How Relay ensures you stay performant
When you use fragments, each fragment will subscribe to updates only for the data it's actually using. This means that our `<BlogPostHeader />` component above will only re-render by itself if `coverImgUrl` or `title` on the specific blog post it's rendering is updated. If `BlogPostAuthor_blogPost` selects other fields and those update, this component still won't re-render. Changes to data are subscribed to *at the fragment level*.
This may sound a bit confusing and perhaps not that useful at first, but it's incredibly important for performance. Let's take a deeper look at this by contrasting it to how this is typically done when dealing with GraphQL data on the client.
> With Relay, only the components using the data that was updated will re-render when data updates.
## Where does the data come from in your view? Contrasting Relay to other frameworks
All data you use in your views must originate from an actual operation that gets data from the server, like a query. You define a query, have your framework fetch it from the server, and then render whatever components you want in your view, passing down the data they need. The source of the data for most GraphQL frameworks is *the query*. Data flows from the query down into components. Here's an example of how that's typically done in other GraphQL frameworks (arrows symbolize how data flows):

> Note: framework data store is what’s usually referred to as the cache in a lot of frameworks. For this article, assume that `"framework data store" === cache`.
The flow looks something like:
1. `<Profile />` makes the `query ProfileQuery` and a request is issued to the GraphQL API
2. The response is stored in some fashion in a framework-specific data store (read: cache)
3. The data is delivered to the view for rendering
4. The view then continues to pass down pieces of the data to whatever descendant components need it (`Avatar`, `Name`, `Bio`, etc.). Finally, your view is rendered
## How Relay does it
Now, Relay does this quite differently. Let’s look at how this illustration looks for Relay:

What’s different?
- Most of the initial flow is the same - the query is issued to the GraphQL API and the data ends up in the framework data store. But then things start to differ.
- Notice that all components which use data get it *directly from the data store (cache)*. This is due to Relay's deep integration with fragments - in your UI, each fragment gets its own data from the framework data store directly, and *does not* rely on the actual data being passed down to it from the query where its data originated.
- The arrow is gone from the query component down to the other components. We’re still passing some information from the query to the fragment that it uses to look up the data it needs from the data store. But we’re passing no real data to the fragment, all the real data is retrieved by the fragment itself from the data store.
So, that's a fairly in-depth look at how Relay and other GraphQL frameworks tend to work. Why should you care about this? Well, this setup enables some pretty neat features.
> Other frameworks typically use a query as the source of data, and rely on you passing the data down the tree to other components. Relay flips this around, and lets each component take the data it needs from the data store itself.
## Performance for free
Think about it: When the query is the source of the data, any update to the data store that affects any data that query has *forces a re-render for the component holding the query*, so the updated data can flow down to any component that might use it. This means updates to the data store cause re-renders that must cascade through any number of layers of components that don't really have anything to do with the update, other than taking data from parent components in order to pass it on to child components.
Relay's approach of each component getting the data it needs from the store directly, and subscribing to updates only for the exact data it uses, ensures that we stay performant even as our app grows in size and complexity.
This is also important when using subscriptions. Relay makes sure that updated data coming in from the subscription only causes re-renders of the components actually using that updated data.
> Using a `Query` as the source of data means your entire component tree will be forced to re-render when the GraphQL cache is updated.
## Modularity and isolation means you can safely refactor
Removing the responsibility from the developer of routing the data from the query down to whichever component actually *needs* said data also removes another chance for developers to mess things up. There’s simply *no way* to accidentally (or worse, intentionally) depend on data that should just be passing through down the component tree if you can't access it. Relay again makes sure it does the heavy work for you when it can.
> Using Relay and its fragment-first approach means it's really hard to mess up the data flow in a component tree.
It should of course be noted though that most of the cons of the “query as the source of data” approach can be somewhat mitigated by old fashioned manual optimization - `React.memo`, `shouldComponentUpdate` and so on. But that’s both potentially a performance problem in itself, and also prone to mistakes (the more fiddly a task, the more likely humans are to eventually mess it up). Relay on the other hand will make sure you stay performant without needing to think about it.
> Each component receiving its own data from the cache also enables some really cool advanced features of Relay, like partially rendering views with the data that’s already available in the store while waiting for the full data for the view to come back.
## Summarizing fragments
Let’s stop here for a bit and digest what type of work Relay is doing for us:
- Through the type system, Relay is making sure this component *cannot* be rendered without the *exact* right object from GraphQL, containing its data. One less thing we can mess up.
- Since each component using fragments will only update if the exact data it uses updates, updates to the cache are performant by default in Relay.
- Through type generation, Relay is ensuring that any interaction with this fragment's data is type safe. Worth highlighting here is that type generation is a core feature of the Relay compiler.
Relay's architecture and philosophy take advantage of how much information about your components is available to the computer, from the data dependencies of components to the data and its types offered by the server. It uses all this and more to do all sorts of work that we - the developers who have *plenty* to do already - would normally have to deal with ourselves.
> It's easy to underestimate how quickly views become complex. Complexity and performance are handled by default through the conventions Relay forces you to follow.
This brings some real power to you as a developer:
- You can build composable components that are almost completely isolated.
- Refactoring your components will be fully safe, and Relay will ensure you’re not missing anything or messing this up.
The importance of this once you start building a number of reusable components cannot be overstated. It's *crucial* for developer velocity that refactoring components used in large parts of the code base is safe.
> As your app grows, the ease and safety of refactoring becomes crucial to continue moving fast.
# Wrapping up our introduction to Relay
We’ve covered a lot of ground in this article. If you take anything with you, let it be that Relay *forces* you to build scalable, performant, type safe applications that will be easy and safe to maintain and refactor.
Relay really does do your dirty work for you, and while a lot of what we’ve shown will be possible to achieve through heroic effort with other frameworks, we hope we’ve shown the powerful benefits that *enforcing* these patterns can bring. Their importance cannot be overstated.
## A remarkable piece of software
Relay is really a remarkable piece of software, built from the blood, sweat, tears, and most importantly - experience and deep insight - of shipping and maintaining products using GraphQL for a long time.
Even though this article is pretty long and fairly dense, we've barely scratched the surface of what Relay can do. Let's end this article with a list detailing some of what more Relay can do that we haven't covered in this article:
- Mutations with optimistic and complex cache updates
- Subscriptions
- Fully integrated with (and heavily leveraging) Suspense and Concurrent Mode - ready for the next generation of React
- Use Relay to manage your local state, enjoying the general benefits of Relay for local state management as well (like integration with Suspense and Concurrent Mode!)
- Streaming list results via `@stream`
- Deferring parts of the server response that might take a long time to load via `@defer`, so the rest of the UI can render faster
- Automatic generation of queries for refetching fragments and pagination
- Complex cache management; control how large the cache is allowed to get, and if data for your view should be resolved from the cache or the network (or both, or first the cache and then the network)
- A stable, mature and flexible cache that _Just Works (tm)_
- Preload queries for new views as soon as the user indicates navigation is about to happen
- Partially render views with any data already available in the store, while waiting for the query data to arrive
- Define arguments for fragments (think like props for a component), taking composability of your components to the next level
- Teach Relay more about how the data in your graph is connected than what can be derived from your schema, so it can resolve more data from the cache (think "these top-level fields with these variables resolve the same User")
This article ends here, but we really encourage you to go on and read the article on pagination in Relay. Pagination in Relay brings together the powerful features of Relay in a beautiful way, showcasing just how much automation and what incredible DX is possible when you let a framework do all the heavy lifting. [Read it here](https://dev.to/zth/pagination-with-minimal-effort-in-relay-gl4)
Here’s a few other articles you can continue with too:
- [The magic of the `Node` interface](https://dev.to/zth/the-magic-of-the-node-interface-4le1). An article about the `Node` interface, globally unique IDs and what power those things bring.
- [Connection based pagination](https://dev.to/zth/connection-based-pagination-in-graphql-2588). An introduction to why doing connection based pagination is a good idea.
Thank you for reading!
# Special thanks
Many thanks to Xavier Cazalot, Arnar Þór Sveinsson, Jaap Frolich, Joe Previte, Stepan Parunashvili, and Ben Sangster for thorough feedback on the drafts of this article!
| zth |
309,250 | E4 - Using PHP Debugging Tools in Visual Studio Code | If you have not already setup XDebug in Visual Studio Code, please refer to the earlier stage, Settin... | 5,954 | 2020-04-14T19:11:41 | https://dev.to/herobank110/e4-using-php-debugging-tools-in-visual-studio-code-53p9 | php, vscode, tutorial | If you have not already setup XDebug in Visual Studio Code, please refer to the earlier stage, [Setting up PHP debugging in Visual Studio Code](https://dev.to/herobank110/additional-setup-43ko#php-debug). This post assumes you have XDebug installed and the launch configuration is created.
1. Copy this code into a file `index.php` as a starting point.
```php
<?php
function makeMessage($name, $age)
{
// Make the name have a capital first letter
$first = strtoupper($name[0]);
// Make the rest have small letters.
$end = strtolower(substr($name, 1));
// Save the two parts in the name variable.
$name = $first . $end;
// Ensure the age is in the valid range.
assert(0 < $age && $age < 1000, "age must be between 1 and 999");
// Return the final message.
return "$name is $age years old!";
}
$message = makeMessage("david", 99);
echo $message;
?>
```
2. Click on line 18 where $message is set and add a breakpoint from Run > Toggle Breakpoint (hotkey <kbd>F9</kbd>). A red circle will appear on the left side of it.

3. Start debugging on the Run tab by clicking Run (hotkey <kbd>F5</kbd>) in Listen for XDebug mode.

4. Open this page in your browser (make sure Apache is started.) You should see the browser will not show anything and the Visual Studio Code window will gain focus.

5. Use the step into button to investigate inside the `makeMessage()` function.

6. Use the step over button for a few lines. You will see the state of local variables in the Run tab when they are assigned values.


7. Enter a debug command in the Debug tab (View > Debug Console) to check the first letter in $name. You can enter any valid expression here.

8. Press Continue to let the execution return to normal speed.

9. When you return to the browser, you will see that the output is ‘David is 99 years old!’
Parent topic: [Example 4](https://dev.to/herobank110/example-4-debugging-php-in-visual-studio-code-407p) | herobank110 |
309,314 | Drafts and scheduled publishing | A nice feature I had in Jekyll was drafts and the option to schedule publishing of content. In Jekyl... | 0 | 2020-04-14T20:19:29 | https://andeers.com/2020/04/draft-scheduled/ | 11ty | A nice feature I had in Jekyll was drafts and the option to schedule publishing of content.
In Jekyll you placed drafts in a separate folder called `_drafts`, and when you built the site with the flag `--drafts` they were included.
I have gone a different way in my Eleventy site, and keep the drafts in the same folder (`_posts`) as the published content. To mark something as draft I add it to the front matter: `draft: true`.
Combined with that I have the following functions in `.eleventy.js`.
These functions do things based on environment, you can read how to add environment variables to Eleventy in [this other post I wrote](https://andeers.com/2019/03/eleventy-essentials/ "Eleventy Essentials on Andeers.com").
``` js
const drafts = (item) => {
if (!eleventyVars.development && item.data.draft) {
// If not development, and draft is true, skip it.
return false
}
// Return everything by default.
return true;
};
```
This function is used as an array filter callback to skip the drafts if the current environment is not development.
## Scheduled publishing
For scheduling content I do a similar approach, I check the defined date vs the current date, and skip the item in production if it's in the future.
``` js
const future = (item) => {
// If item date is before now it's safe to publish it.
if (item.date <= now) {
return true;
}
// If it's in the future we can publish it in development.
if (eleventyVars.development) {
return true;
}
// In future and not development, skip it.
return false;
};
```
The scheduling check only works when the site is deployed/built. So I have used ifttt.com to automatically rebuild my website hosted on netlify.com each night.
## The complete solution
I have all of this in a helper function I use to generate the collection.
``` js
function getPosts(collectionApi) {
const globs = [
'./_posts/*',
];
const now = new Date();
const drafts = (item) => {
// See function above.
};
const future = (item) => {
// See function above.
};
return collectionApi.getFilteredByGlob(globs)
.filter(item => !!item.data.permalink)
.filter(drafts)
.filter(future)
.reverse();
}
```
It's important that you add these checks to all collections you output, so your drafts are not published on the archive page or in the rss feed.
If you want to look at my whole code you can [check it out on Github](https://github.com/andeersg/andeers.com/blob/master/.eleventy.js "Andeers.com on Github").
## Possible changes
If you want to keep the drafts in a separate folder, the `getPosts` function could be changed to include the drafts folder in the globs-array instead of checking the draft variable, as sketched below. I'm considering doing this myself.
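A minimal sketch of that variant, assuming the same `eleventyVars` environment flag as above and a hypothetical `_drafts` folder:
``` js
function getPosts(collectionApi) {
  const globs = [
    './_posts/*',
  ];
  // Hypothetical: only include the separate drafts folder while developing.
  if (eleventyVars.development) {
    globs.push('./_drafts/*');
  }
  const now = new Date();
  return collectionApi.getFilteredByGlob(globs)
    .filter(item => !!item.data.permalink)
    // Scheduled publishing still applies: future-dated posts only show in development.
    .filter(item => item.date <= now || eleventyVars.development)
    .reverse();
}
```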
| andeersg |
309,319 | LeetCode Challenge: Counting Elements | This is the last problem in Week 1 of the month-long LeetCode challenge. Looks like this is the firs... | 5,835 | 2020-04-15T10:03:33 | https://dev.to/13point5/leetcode-challenge-counting-elements-545o | computerscience, beginners, leetcode, problemsolving | This is the last problem in Week 1 of the month-long LeetCode challenge.
Looks like this is the first of the 5 new questions that were promised to make things interesting.
# Problem
Given an integer array arr, count element x such that x + 1 is also in arr.
If there're duplicates in arr, count them separately.
__Example 1:__
```
Input: arr = [1,2,3]
Output: 2
Explanation: 1 and 2 are counted cause 2 and 3 are in arr.
```
__Example 2:__
```
Input: arr = [1,1,3,3,5,5,7,7]
Output: 0
Explanation: No numbers are counted, cause there's no 2, 4, 6, or 8 in arr.
```
__Example 3:__
```
Input: arr = [1,3,2,3,5,0]
Output: 3
Explanation: 0, 1 and 2 are counted cause 1, 2 and 3 are in arr.
```
__Example 4:__
```
Input: arr = [1,1,2,2]
Output: 2
Explanation: Two 1s are counted cause 2 is in arr.
```
__Constraints:__
```
1 <= arr.length <= 1000
0 <= arr[i] <= 1000
```
# Clarification
The first 3 test cases are kind of misleading because they don't really show what is actually expected from us.
However, the last example gives some info.
From what we can observe, they're asking for the count of x in the array if x+1 also exists in the array.
# Approach 1: Space Optimized

If the array is sorted then x+1 will be right after x. But if there are multiple occurrences of x then the 1st x+1 will come after the last x.
So, we need to count the frequency of each x and if the next element is different we need to check if it is x+1 or not.
__Algorithm:__
1. Initialize variables count to store the final result and currCount to store the count of the current element that is being tracked.
2. Sort the array
3. Iterate through the array
4. If the current element is equal to the previous element then increment currCount
5. Else if the current element is 1 greater than the previous element then add currCount to count. Reset currCount.
6. Return count
__Code:__
```python
def approach1(arr):
    if len(arr) == 1:
        return 0
    arr = sorted(arr)
    count = 0
    currCount = 1
    for i in range(1, len(arr)):
        if arr[i] == arr[i-1]:
            currCount += 1
        else:
            if arr[i] == arr[i-1] + 1:
                count += currCount
            currCount = 1
    return count
```
__Complexity analysis:__
```
Time complexity: O(n*log(n))
Space complexity: O(1)
```
# Approach 2: Time Optimized

Since we're dealing with counts we can use a hash map to maintain the count of each number and then easily check for the presence of the next number.
__Code:__
```python
def approach2(arr):
    count = 0
    freq = {}
    for num in arr:
        freq[num] = freq.get(num, 0) + 1
    for x in freq:
        if x+1 in freq:
            count += freq[x]
    return count
```
__Complexity analysis:__
```
Time complexity: O(n)
Space complexity: O(n)
```
# Summary
Not many people talk about this, but sometimes a programmer needs to consider the tradeoffs between the space and time complexity of different solutions, because not every situation has enough memory to achieve the optimal runtime complexity. The same applies the other way around.
As usual,
{% replit @13point5/Counting-elements %} | 13point5 |
309,407 | What does your week look like? | Weekly routine during Covid-19 | 0 | 2020-04-15T01:20:57 | https://dev.to/klawrow/what-does-your-week-look-like-1ohh | watercooler, webdev, dotnet, freelance | ---
title: What does your week look like?
published: true
description: Weekly routine during Covid-19
tags: watercooler, webdev, dotnet, freelance
---
I'd like to walk you through my current week and would love to hear about yours.
## Variables
1. 2 small children, a toddler and a student which is _now_ being homeschooled.
2. Part-time Development Manager due to scheduling conflicts with variable #1 and company budget.
3. Freelancing to make up for the loss of salary due to variable #2.
## Daily Morning Routine
Most days start the same, hitting snooze 2 or 3 times and _finally_ rolling out of bed at 07:55. I open the blinds in my "office", which was once the guest room, and I start the coffee machine. While I sip away at the fresh cup of joe :coffee:, I get the laptop booted and start clearing out the inbox.
## Monday - Friday
I start the week by planning and setting the goals to achieve for the week. I’ll check in on the team, answer any questions, update Jira tickets and handle the occasional support requests. The rest of the work week is providing guidance, making sure tasks are on track and executing on my action items.
After clocking out around noon we have a family lunch and afterwards we prepare for homeschool. We’ll browse through the class schedule and assignments, so my wife and I can be prepared to guide our child. Luckily our school district is proactive and has provided the necessary resources to help guide us through this _new world_. Depending on the daily chaos of having 2 children at home, my wife and I take turns on homeschooling and entertaining our toddler to not distract the other :stuck_out_tongue:.
On a good day, before sunset, we go on a walk around the neighborhood to help relax and wear down the kids :wink:.
In the evenings, once the kids are in bed, it’s time to clock in for the 3rd shift :clock8:; freelancing and prospecting. Sites like [Upwork](https://www.upwork.com/fl/claro) and [Moonlight Work](https://www.moonlightwork.com/app/users/5137) have made this possible and I’m grateful to have a skill set that has decent freelance pay.
## Weekend
Saturdays are reserved for family, :no_entry_sign: no work allowed :no_entry_sign:! Sundays are for freelancing/prospecting and winding down. If we are lucky enough, we will watch a show or movie together as a family.
## Conclusion
For better or worse, things have changed in our lives during the Covid-19 pandemic. For me it’s brought family unity and the chance to work on freelance projects. Stay safe out there and take care of each other.
| klawrow |
309,467 | How to Make your First Contribution to Open Source, A Step by Step Guide | You want to contribute to Open Source! That’s amazing! The world thanks you! You’ve already thought i... | 0 | 2020-04-15T02:42:56 | https://dev.to/scottstern06/how-to-make-your-first-contribution-to-open-source-a-step-by-step-guide-4hof | opensource, github, javascript, beginners | You want to contribute to Open Source! That’s amazing! The world thanks you! You’ve already thought it was a good idea and some google searches later, you’re here. Congratulations, let’s get started so you can join the army in making the software world, or the real world a better place!
In 2020, Open Source is the most popular it’s ever been! If you’re a developer or want to get into software development you will eventually come across the term “Open Source”, as a consumer of it and possibly a contributor to it.
***Step 1 — Find a project you’re personally invested in!***
My first contributions to Open Source was on [Eslint](https://github.com/eslint/eslint). Am I super passionate about Javascript linting? No, not necessarily, well maybe, but I’m weird. It was a project I used daily and owed a lot of my learning to in the beginning of my frontend development journey. I saw an opportunity to dive deep into a tool I used daily.
***Step 2 — Find an issue to work on***
This one is pretty self explanatory, just go to this issues page of any repository and find an issue you think would be fun to work on. A few good labels to filter by are:
- “Good First Issue”
- “Good First Contribution”
- “Accepting Merge Requests”
- “Beginner Friendly”
These are just suggestions but every repository is a little bit different. The goal is to find issues that are “beginner friendly” and that get you working in the code base, getting used to the development/code review process. It’s entirely possible that the project doesn’t have any of these labels, if that’s the case, reach out to someone or comment on the issue asking if it’s a good first issue to tackle. Sensing a common theme here? More helpful links can be found [here](https://github.com/freeCodeCamp/how-to-contribute-to-open-source#direct-github-searches).
***Step 3 — Claim the issue***
This one sounds obvious but it’s not. Time and time again, I see multiple people working on the same issue. If the issue is unassigned OR the issue has been assigned but there hasn’t been any activity on it for a while, then go ahead and make a comment.
Finally, make a comment, something like:
> “Hey! I am super interested in taking this ticket on, has anyone else picked this up yet?”
You can literally copy and paste this if you'd like, I won't tell. ;)
If someone has claimed the ticket but there hasn’t been any progress on the issue, still go ahead and make a comment asking if that person was still planning to work on the issue.
Then when you get the go ahead that it’s free to work on, it’s yours, go for it, don’t look back.
***Step 4 — Start working!***
**Fork the project**
1. Go to the repository, fork it, and then clone your fork. SSH or HTTPS is fine, it really just depends on your local set up.
2. Open your terminal in a root directory, like Desktop or something fancy.
3. `git clone link-to-repo`
4. Then CD or change directory into `path/to/directory`
5. YOU’RE IN!
**Add upstream to your git remote**
> Why? So here's the deal, big fancy Open Source projects won't let you push directly to their repositories….harsh, I know. There's a way around this...shhhhhh
1. You'll need to add an `upstream` remote to your local clone that references the main repository, so that you can rebase or merge when code changes there. This should help you [set that up](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/configuring-a-remote-for-a-fork) - there's also an example below.
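For example - the URL below is just a placeholder for the repository you forked, and `origin` is assumed to already point at your fork:
```
git remote add upstream https://github.com/original-owner/original-repo.git
# Verify your remotes: origin should be your fork, upstream the main repo
git remote -v
```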
**Make a branch**
1. `git checkout -b your-branch-name` — [More on this](https://git-scm.com/docs/git-checkout#Documentation/git-checkout.txt-emgitcheckoutemltbranchgt)
**Update your local environment to make sure its up to date with the parent repo**
> Why? People are going to keep merging pull requests into the repo while you're working. To make sure you have the latest changes in the main repository, do the following:
1. `git rebase upstream/master` or `git merge upstream/master`
**Do work! Add Your Changes! Commit your changes! Push your work!**
1. Add and commit your changes using `git add .` and `git commit -m 'your message'`.
2. `git push origin your-branch-name`
3. Go to your fork, and open a pull request. You will need to open the pull request from your fork against the main repo like so. Instead of `sstern:master` it’ll be the name of your branch `awesome-reader-of-scotts-blog:your-branch-name`.
***Step 5 — Get Stuck?***
Most Open Source projects will have a gitter, discord or slack channel for questions. Go to the chat and ask your questions and someone will unblock you. You can usually find the url to these in the projects README.
If this is not the case, find someone active on the repo you see commenting on issues and Pull/Merge Requests and message them directly, I’m sure they’ll be happy to help.
Hope you learned something!
Scott | scottstern06 |
309,646 | Get Going With Go | This article is meant to be a quick introduction to installing the Go programming language, running y... | 0 | 2020-04-15T04:25:45 | https://dev.to/ezzy1337/get-going-with-go-1ba3 | go | This article is meant to be a quick introduction to installing the Go programming language, running your first script, and building a binary that can be executed on Windows, Mac OS X, and Linux. I've included links to resources from Google and Digital Ocean that go into much greater detail on these topics but if you are just looking for a quick 5-minute introduction this is the article for you.
## Downloading Go
Google provides an installer for the major operating systems (Windows, macOS X, and Linux). You can also download a tarball of binaries and install directly from source code, but that's out of scope for this article. For this step download and run the installer for your OS, following the prompts.
You've probably heard or seen articles about the GOPATH before. This was a top-level directory where all of your go projects had to be located. Fortunately, it's no longer needed. At least not on Mac OS X. At this point, a little code is needed to test the installation.
## Go Say Hello
Unfortunately, Go doesn't have a REPL (Read Evaluate Print Loop) so you'll be saving and running code from a file.
Copy the code below and save it as `greeting.go`. Golang requires the `.go` extension but the rest of the filename could be anything you want. Without covering Go syntax too much: this simple script, when executed, prompts for a name and says hello to whatever name was entered.
```go
package main
import (
"bufio"
"fmt"
"os"
)
func main() {
reader := bufio.NewReader(os.Stdin)
fmt.Print("What is your name? ")
text, _ := reader.ReadString('\n')
fmt.Println("Hello ", text)
}
```
There are 2 important structural parts to the sample code. The first is `package main` which identifies this file contains the entrypoint to your code. The second is `func main()` which is the actual entrypoint. If either of these is missing or is not named main the code will not execute.
## Go Run V Go Build
There are 2 ways to run the sample script. During development, you'll mostly use `go run` which compiles the code and executes it in one step. When it comes time to deploy the code you should use `go build` which creates a binary that can be executed by calling directly from the terminal.
### Go Run
Go Run is pretty simple and has one required argument, a package name. In this case, you will use the filename. Here is an example using the `greeting.go` file: `go run greeting.go`.
### Go Build
Go build is a little more complex. The simplest form is `go build <package-name>`. This creates a binary named the same as the package name, minus the `.go` extension. So `go build greeting.go` would create a binary named `greeting`. The new binary can be executed with `./greeting`.
The more advanced form is using the `-o` flag which allows you to name the binary produced by `go build`. Rebuilding the `greeting.go` package using the `-o` flag would look like this `go build -o hello.exe greeting.go` which would produce a binary file named `hello.exe`.
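Since the goal is a binary that runs on Windows, Mac OS X, and Linux, it's also worth knowing that the Go toolchain can cross-compile by setting the `GOOS` and `GOARCH` environment variables. A quick sketch, assuming a Unix-like shell (the output file names are just examples):
```bash
# Build the same source file for different target platforms.
GOOS=windows GOARCH=amd64 go build -o hello-windows.exe greeting.go
GOOS=darwin GOARCH=amd64 go build -o hello-macos greeting.go
GOOS=linux GOARCH=amd64 go build -o hello-linux greeting.go
```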
## Conclusion
- Use `go run <package-name>` when testing code in development.
- Use `go build <package-name>` when preparing to ship code.
- The file with `package main` at the top must contain the `func main()` function, which is the entrypoint to the app.
- Your Go code no longer has to go under the GOPATH.
## Additional Resources
[Official Getting Started](https://golang.org/doc/install)
[How To Build And Install Go Programs](https://www.digitalocean.com/community/tutorials/how-to-build-and-install-go-programs) by Digital Ocean
| ezzy1337 |
309,755 | 17 Agile Testing Interview Questions and Answers that you Should Know. | Agile Testing Interview Questions and Answers will help you prepare for Agile methodology and agile p... | 0 | 2020-04-15T08:54:20 | https://dev.to/promode/16-agile-testing-interview-questions-and-answers-that-you-should-know-3j2h | testing, tutorial, beginners, webdev | Agile Testing Interview Questions and Answers will help you prepare for Agile methodology and agile process interviews for testers or developers.
Learn Cypress Tutorial: https://cypresstutorial.com.
Learn API Testing: https://www.learnapitesting.com
Automation Tester Training: https://thetestingacademy.com
## Question 1: What is Agile Testing?
It's a software testing practice that follows the principles of Agile software development. It is an iterative software development methodology where requirements keep changing as per the customer needs.
Testing is done in parallel to development in an iterative model. The test team receives frequent code changes from the development team for testing the application.
> In case you want to learn using video, click on the video below.
[](https://youtu.be/rE_V_xhiajc "Agile Testing Interview Questions and Answers")
## Question 2. What is Agile Manifesto?
Agile manifesto defines 4 key points:
- i. Individuals and interactions over process and tools
- ii. Working software over comprehensive documentation
- iii. Customer collaboration over contract negotiation
- iv. Responding to change over following a plan
## Question 3: How is Agile Testing different from other traditional Software Development Models?
It is one of the common Agile Testing Interview Questions.
In Agile Methodology, testing is not a phase like other traditional models. It is an activity parallel to development in the Agile.
The time slot for the testing is less in the Agile compared to the traditional models.
The testing team works on small features in Agile whereas the test team works on a complete application after development in the traditional models.
## Question 4: When do we use Agile Scrum Methodology?
i. When the client is not so clear on requirements
ii. When the client expects quick releases
iii. When the client doesn’t give all the requirements at a time
## Question 5: What are Product Backlog and Sprint Backlog?
Product Backlog:
The Product Backlog is a repository where the list of Product Backlog Items is stored and maintained by the Product Owner. The Product Backlog Items are prioritized by the Product Owner as high or low, and the Product Owner can also re-prioritize the product backlog constantly.
Sprint Backlog:
A group of user stories which the scrum development team agreed to do during the current sprint (committed Product Backlog items). It is a subset of the product backlog.
## Question 6: What is the difference between Burn-up and Burn-down chart?
Burn-down charts show whether the project is on track or not. Both the burn-up and burn-down charts are graphs used to track the progress of a project.
Burn-up charts represent how much work has been completed in a project whereas Burn-down chart represents the remaining work left in a project.
## Question 7: What are the types of burn-down charts?
i. Product burndown chart
ii. Sprint burndown chart
iii. Release burndown chart
iv. Defect burndown chart
## Question 8: What is Product Burndown Chart?
A graph which shows how many Product Backlog Items (User Stories) implemented/not implemented.
## Question 9: What is Sprint Burndown Chart?
A graph which shows how many Sprints implemented/not implemented by the Scrum Team.
## Question 10: What is Release Burndown Chart?
A graph which shows List of releases still pending, which Scrum Team have planned.
## Question 11: What is Defect Burndown Chart?
A graph which shows how many defects identified and fixed.
## Question 12: What is a Daily Stand-up Meeting?
Daily Stand-up Meeting is a daily routine meeting.
It brings everyone up to date on the information and helps the team to stay organized.
Each team member reports to the peers the following:
- What did you complete yesterday?
- Any impediments in your way?
- What do you commit to today?
- When do you think you will be done with that?
## Question 13: What is a Sprint Planning Meeting?
The first step of Scrum is the Sprint Planning Meeting where the entire Scrum Team attends. Here the Product Owner selects the Product Backlog Items (User Stories) from the Product Backlog.
Most important User Stories at the top of the list and least important User Stories at the bottom. Scrum Development Team decides and provides effort estimation.
## Question 14: What is a Sprint Review Meeting?
In the Sprint Review Meeting, Scrum Development Team presents a demonstration of a potentially shippable product.
Product Owner declares which items are completed and not completed.
Product Owner adds the additional items to the product backlog based on the stakeholder’s feedback.
## Question 15: What is a Sprint Retrospective Meeting?
Scrum Team meets again after the Sprint Review Meeting and documents the lessons learned in the earlier sprint, such as "What went well" and "What could be improved".
## Question 16: What is a Task Board?
A task board is a dashboard which illustrates the progress that an agile team is making in achieving their sprint goals.
- i. User Story: Actual Business Requirement (Description)
- ii. To Do: All the tasks of current sprint
- iii. In Progress: Any task being worked on
- iv. To Verify: Tasks pending for verification
- v. Done: Tasks which are completed
--
Be sure to subscribe for more videos like this!
[](https://www.youtube.com/TheTestingAcademy?sub_confirmation=1 "TheTestingAcademy")
| promode |
309,804 | PowerShell Core as default shell on a Debian devcontainer | Introduction Here we'll cover setting up powershell on a dev container with a debian:buste... | 0 | 2020-04-15T11:35:28 | https://dev.to/eliises/powershell-core-as-default-shell-on-a-debian-devcontainer-36fk | # Introduction
Here we'll cover setting up powershell on a dev container with a `debian:buster` baseimage.
At the bottom of this article you can also find the full [devcontainer.json](#devcontainerjson) and [dockerimage](#dockerimage), which you can skip to.
Credit to: https://www.phillipsj.net/posts/powershell-as-default-shell-on-ubuntu/
All code snippets can be found in [terraform-pester-devcontainer-example](https://github.com/EliiseS/terraform-pester-devcontainer-example) repository.
# Installing Powershell 7
Here's the PowerShell install snippet from our debian dockerfile.
```Dockerfile
# Install PowerShell 7
RUN wget https://packages.microsoft.com/config/debian/10/packages-microsoft-prod.deb \
&& dpkg -i packages-microsoft-prod.deb \
&& rm packages-microsoft-prod.deb \
&& apt-get update \
&& apt-get install -y powershell \
```
# Set PowerShell as default shell
Next, to set PowerShell as our default shell, we must find it in the list of available shells with:
```bash
$ cat /etc/shells
# /etc/shells: valid login shells
/bin/sh
/bin/bash
/bin/rbash
/bin/dash
/usr/bin/pwsh
/opt/microsoft/powershell/7/pwsh
```
The last item in the list is the PowerShell shell location, which we need to use in our `devcontainer.json` file to set it as our default shell.
```json
"settings": {
"terminal.integrated.shell.linux": "/opt/microsoft/powershell/7/pwsh",
},
```
# Optional PowerShell profile set up
If you want to be able to customize your PowerShell like you would with bash, such as to add aliases, you can set up a profile using the below.
```Dockerfile
# Powershell customization
RUN \
## Create PS profile
pwsh -c 'New-Item -Path $profile -ItemType File -Force' \
## Add alias
&& pwsh -c "'New-Alias \"tf\" \"terraform\"' | Out-File -FilePath \$profile"
```
# Complete files
## Dockerimage
```Dockerfile
FROM debian:buster
# Avoid warnings by switching to noninteractive
ENV DEBIAN_FRONTEND=noninteractive
# Configure apt and install packages
RUN apt-get update \
&& apt-get -y install --no-install-recommends apt-utils 2>&1 \
# Verify git, process tools, lsb-release (common in install instructions for CLIs), wget installed
&& apt-get -y install git procps lsb-release wget \
# Install Editor
&& apt-get install vim -y \
# Install PowerShell 7
&& wget https://packages.microsoft.com/config/debian/10/packages-microsoft-prod.deb \
&& dpkg -i packages-microsoft-prod.deb \
&& rm packages-microsoft-prod.deb \
&& apt-get update \
&& apt-get install -y powershell \
#
# Clean up
&& apt-get autoremove -y \
&& apt-get clean -y \
&& rm -rf /var/lib/apt/lists/*
# Powershell customization
RUN \
## Create PS profile
pwsh -c 'New-Item -Path $profile -ItemType File -Force' \
## Add alias
&& pwsh -c "'New-Alias \"tf\" \"terraform\"' | Out-File -FilePath \$profile"
# Switch back to dialog for any ad-hoc use of apt-get
ENV DEBIAN_FRONTEND=dialog
```
## devcontainer.json
```json
{
"name": "Debian 10 & PowerShell",
"dockerFile": "Dockerfile",
// Set *default* container specific settings.json values on container create.
"settings": {
"terminal.integrated.shell.linux": "/opt/microsoft/powershell/7/pwsh",
},
// Add the IDs of extensions you want installed when the container is created.
"extensions": [
"ms-vscode.powershell"
]
}
```
# Known issues
These are the issues that I've run into:
- PowerShell Core has fewer modules and commands available when compared to Powershell
- The `Remove-Item` command has been unusable due to results exacerbated by a known issue: https://github.com/PowerShell/PowerShell/issues/8211
| eliises | |
309,806 | HOW TO SETUP MSSQL ON MAC/LINUX OS USING DOCKER AND AZURE DATA STUDIO | Ensure you have docker setup on your machine. Follow this link to setup docker on your machine docs.... | 0 | 2020-04-15T10:39:37 | https://dev.to/adeyemiadekore2/how-to-setup-mssql-on-mac-linux-os-using-docker-and-azure-data-studio-2p6m | Ensure you have docker setup on your machine. Follow this link to setup docker on your machine [docs](https://docs.docker.com/docker-for-mac/install/).
Pull the mssql ubuntu image from the docker hub.
`sudo docker pull mcr.microsoft.com/mssql/server:2019-CU3-ubuntu-18.04`
After that, enter the following command:
```bash
sudo docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=<YourStrong@Passw0rd>" \
   -p 1433:1433 --name sql223 \
   -d mcr.microsoft.com/mssql/server:2019-CU3-ubuntu-18.04
```
Ensure you replace `<YourStrong@Passw0rd>` with your password
-p indicates your port
--name specifies the name of your container
Run the command below to view all containers currently running on your machine.
`sudo docker ps -a`

The GUI tool we'll be using is [Azure Data Studio](https://docs.microsoft.com/en-us/sql/azure-data-studio/download-azure-data-studio?view=sql-server-ver15)
Follow the link and download the azure data studio.
Currently, these are our details.
Password: this is what we declared in SA_PASSWORD earlier.
Username: this value is `sa`
Server: localhost
Then fill out these details in Azure data studio.

Lastly, we might want to create a new database from within Azure Data Studio.
Enter the following in the New Query section of the dashboard.
```
IF NOT EXISTS (
SELECT name
FROM sys.databases
WHERE name = N'DemoDB'
)
CREATE DATABASE [DemoDB]
GO
```

The above command creates a database called DemoDB. You can connect to this database and run various actions like migrations.
To connect to the DemoDB database we created, below is a typical connection string.
```
String connectionString = @"
Server=127.0.0.1;
Database=DemoDB;
User Id=sa;
Password=yourPassword
";
``` | adeyemiadekore2 | |
309,880 | How to kern lettering with SVG's dx attribute. | Using SVG's <text> element and dx attribute to visually kern letters. | 0 | 2020-04-25T11:29:39 | https://dev.to/makingthings/how-to-kern-lettering-with-svg-s-dx-attribute-3po9 | svg, html, css | ---
title: How to kern lettering with SVG's dx attribute.
published: true
description: Using SVG's <text> element and dx attribute to visually kern letters.
tags: svg, html, css
cover_image: https://res.cloudinary.com/makingthings/image/upload/v1587813538/articles/svg/parade-uneven.jpg
---
## The devil is in the detail
___

Have you ever played the [Kerning Game](https://type.method.ac/#)? I scored 72 out of 100, it's not easy and the scoring is a little subjective (that's my excuse 😀).
To kern a word is to adjust the space between letters and whilst it might be overkill to do this on every blog post `<h1>`, there are occasions where a wordmark might benefit from a little tlk (tender-loving-kerning)
{% codepen https://codepen.io/limitedunlimited/pen/eYpBwxd %}{% gist https://gist.github.com/jamesgrubb/963197b7553c06eff8ec0adbbbbfa562 %}
In the above *Codepen*, the word **Devil** is set in Google's **Akronim** type-face. Even with the browser's default font-kerning setting (`font-kerning: auto`), the first two letters **D** and **e** sit a little awkwardly, *perhaps they are socially distancing?*
### dx to the rescue
> "The dx attribute indicates a shift along the x-axis on the position of an element or its content." *[MDN Web docs](https://developer.mozilla.org/en-US/docs/Web/SVG/Attribute/dx)*
If we add a unitless value to the `<text>` `dx` attribute our Devil will shift along the x-axis by that units amount (and in relation to the svg's viewBox dimensions)
{%codepen https://codepen.io/limitedunlimited/pen/OJyWONv %}
{%gist https://gist.github.com/jamesgrubb/f3039f381a9d418c567ea6355f58a413 %}
If you continue reading the MDN Web docs they describe using multiple dx values
>If there are multiple values, dx defines a shift along the x-axis for each individual glyph relative to the preceding glyph. If there are fewer values than glyphs, the remaining glyphs use a value of 0. If there are more values than glyphs, extra values are ignored.
Let's help the **D** and **e** kiss and makeup by adjusting the individual spaces. I find that once you start tinkering you end up looking at all the letter spaces. This is what I ended up with.
{% codepen https://codepen.io/limitedunlimited/pen/pojRaGx %}
{% gist https://gist.github.com/jamesgrubb/dd1e787dc0b728fb81774ba202298f74 %}
As I mentioned, kerning is quite subjective and if you can, it's worth asking someone to look over your shoulder and get their kerning opinion.
*!!!WARNING!!!* once you start looking at kerning you will see bad kerning everywhere

### Conclusion
I hope you enjoyed this article. Of course, this method relies on using SVG in your markup. If you are using HTML, you could, for example, wrap each letter with a `<span>` with a `display: inline-block` class and adjust margins or relative positions. There are a number of other methods out there.
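As a rough sketch of that HTML approach (the class names and the negative margin value here are made up for the example):
```html
<style>
  .lockup span { display: inline-block; }
  .lockup .tuck { margin-left: -0.06em; } /* pulls this letter closer to the previous one */
</style>

<h1 class="lockup">
  <span>D</span><span class="tuck">e</span><span>v</span><span>i</span><span>l</span>
</h1>
```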
#### Useful links
1 [creating-web-type-lockup](https://css-tricks.com/creating-web-type-lockup/)
2 [11-kerning-tips](https://99designs.co.uk/blog/tips/11-kerning-tips/)
3 [How to manipulate SVG text](https://vanseodesign.com/web-design/how-to-manipulate-svg-text/)
| makingthings |
309,892 | Tutorial: Run a Python Script with an Alexa Voice Command | No better time than quarantine time to play around with Python and Alexa. This tutorial demonstrates... | 0 | 2020-04-16T13:03:34 | https://dev.to/wayscript/tutorial-run-a-python-script-with-an-alexa-voice-command-4k8m | python, tutorial, alexa, beginners | *No better time than quarantine time to play around with Python and Alexa. This tutorial demonstrates how to use WayScript. We say that WayScript gives developers superpowers. We'd love your feedback on the platform we're building. Check it out while learning something new today.*
## The WayScript Alexa Skill
In this tutorial, you will learn how to use WayScript to run your Python script in the cloud using an Alexa voice command.
This is done using the WayScript Alexa Skill, which you will first need to enable on your Amazon Alexa account.
After enabling the skill, you will then need to link it to your WayScript account using the Alexa app. Once successfully linked, you should see a message like this:

## The Alexa Trigger
The next step is to build a WayScript program with an Alexa Trigger. To do so, first go to your WayScript dashboard and select "Build From Scratch".
Give your program a name, like "Run Python with Alexa".

Once your program is created, you'll start by adding an [Alexa Trigger](https://docs.wayscript.com/library/triggers/alexa-trigger).
Click the "+" icon next to "Add Trigger(s)" and select the Alexa Trigger.

You should now have the Alexa Trigger in the "Triggers" section of your program.

## Set up the Alexa Trigger
Now that you have added the trigger to your program, let's get it set up! Start by clicking on the trigger to open it in the settings panel.
You should see something like this:

Set "Activate Trigger" to "On".
Now, notice that the trigger shows the text "Alexa, launch WayScript and run my 'Run Python with Alexa' program".

That's kind of a mouthful, so you'll set the "Program Alias" field to create a phrase that's easier to say and easier for Alexa to understand.
In this case, you'll simply set the Program Alias field to "Python".

Now you have a much nicer phrase!

## Add a Python Script
With the Alexa Trigger in place, it's time to add a Python Script. Start by dragging the Python module into your program.

Then, click on the Python module and paste your script into the code box.

## That's it! You're done!
Run your Python script from any of your Alexa-enabled devices by saying "Alexa, launch WayScript and run my 'Python' program."

Of course, you don't have to stop at Python. You can run any WayScript program - including an ever-growing list of modules - using the Alexa Trigger. | tjmd |
309,905 | Why Mobile Analytics Tool Is A Boon For The Mobile Applications? | From the app idea generation to its development, the only fantasy mobile app owner have is to get a m... | 0 | 2020-04-15T13:40:27 | https://dev.to/jamesjo10097237/why-mobile-analytics-tool-is-a-boon-for-the-mobile-applications-5gc6 | mobiledevelopment, mobileapp, mobileappdevelopment, appdevelopment | From the app idea generation to its development, the only fantasy mobile app owner have is to get a million app downloads.
During app development, UI and UX designing, QA and testing, devising marketing strategies and preparing for the launch, the app owner just yearns for the huge buzz that the app would create, user engagement, positive feedback and rating, and not to forget the grand success.
All this could be known when the app is out in the market. It’s true.
Don't presume that just the app features and great design will bring major user traction. That is partly true, but not completely: analyzing users' data after launching an application gives valuable insights which, if taken into consideration, can be a game-changer for the application.
__How is it possible? Sounds confusing, right?__
The mobile app's success can be skyrocketed even after it is launched, with the proper analysis of users' data. By the users' data, we mean:
– Customers’ retention after downloading the app
– The channels through which user finds your app
– How many customers are sharing the app info on social media?
– Users’ experience
– Number of impressions, views or clicks your app is getting
The data gives a vivid picture of what additions can make the app what users are looking for. Measuring the data is an important step that cannot be overlooked.
But, in real terms, it is translating the data into rewarding decisions that brings the change, and that is where mobile analytics tools come in.
According to the Aberdeen Group report, __“The companies using mobile analytics saw an 11.6 percent increase in brand awareness, while those without a mobile-specific analytics strategy had a 12.9 percent decrease.”__
Let's get on the path of app data measurement and optimization on mobile using popular analytics tools such as Google mobile app analytics, Flurry analytics, Apsalar, Countly, Localytics, Mixpanel and a lot more.
According to the Forrester research, __“46% of the firms have implemented a mobile analytics solution.”__
Take a glance at how the mobile analytics would be advantageous to businesses:
__1) Check what matters the most__
When the first version of the mobile app is released, instead of just aiming to attract more users, analyzing the existing users' experience and engagement throughout the app is of vital importance.
With analytics tools, the complete user journey can be tracked that provides the information about how the app is used and to which features the users had given more importance. It helps in knowing which features in the app should not be changed in the next phase.
The app usage patterns also help in understanding the users’ engagement behavior and enable taking the data-driven decisions accordingly.
__2) Find the exact point that you miss__
Analytics tools are so powerful that they can go into the depths of app usage and find out how long users read the app's reviews before downloading it, or which features users tried after downloading the app and then never returned to.
This helps in finding the root cause or the technical issues that are not letting the mobile app tap its full potential in the market.
Also, the mobile app owner has to constantly monitor the app rank in the app store and make the comparison with the competitors’ app to check the strategies, devices or platforms they have used to reach a large share of the audience.
__For instance:__ if a competitor's app is getting more download counts, revenue and in-app purchases from the Android platform, then you can also launch your app on the Android platform to gain more app downloads and increase the ROI.
__3) Track mobile campaigns at fingertips__
No matter how unique the marketing campaign is, if it is not able to impact the target audience, it will not bring any success in the end.
If the campaign's performance could be known in the middle of a marketing campaign, then making changes to the plan would be possible. But traditionally, assessing a campaign's success in the middle has been impossible.
Mobile analytics tools have made it viable by enabling the app owner to identify how the campaign is working by tracking the traffic, conversion rate and sales data over different channels.
According to the customer surveys, “Nearly 52% of all the mobile app installation decisions are made while browsing the App Store and the other half is based on advertising, blogs, recommendations, and other sources.”
This data helps in analyzing which channel has a high conversion rate and what can be done to increase the transaction size on other channels.
__The last word…__
From mobile app development to the final launch, mobile analytics tools provide great customer insights that help in putting the app customers actually want in front of them.
After the app release, the tools also aid in knowing what upgrades are needed, which bugs need to be fixed, and the status and impact of promotional campaigns, which helps the app publisher take smart decisions and enables them to address issues efficiently. | jamesjo10097237 |
309,912 | A way for managing your API versions with Azure | Hi guys, I've recently been playing for a personal project with Azure Front Door service and found a... | 0 | 2020-04-15T15:55:53 | https://dev.to/jaloplo/a-way-for-managing-your-api-versions-with-azure-17pe | azure, management, webdev, routing | Hi guys,
I've recently been playing, for a personal project, with the [__Azure Front Door__][Azure Front Door Docs] service and found a capability that I think is great. For those of you who don't know what the [__Azure Front Door__][Azure Front Door Docs] service is, let me say that it allows you to define, manage, and monitor the global routing of your web traffic, optimizing for best performance and instant global failover for high availability. This is the first sentence that appears in the [_documentation_][Azure Front Door Docs], but you get many more functionalities like URL-based routing, URL redirection, URL rewriting, multiple-site hosting, session affinity, custom domains, etc.
## You have an API app
So, _I want to show how to maintain and evolve your API app_ by providing access to different versions using the Azure Front Door service. I assume __you have an API app__ that is working perfectly fine and providing services to users worldwide; it can be hosted in Azure or any other web hosting provider, it doesn't matter at this point. Now, __you want to add some more functionalities__ and some of your methods will completely change their response. This could be seen as a __major change__ and implies a version change, so how will you roll this new version out to your whole community of users?
In this case, the best approach should be to maintain the two different versions and recommend to your users to, proactively and progressively, modify their applications to the new API version.
## But, how will you deploy your changes to the internet?
If you modify your current API app, you will have to plan a shutdown window (seconds, minutes, hours or any other amount of time depending on your changes and your deployment process) and inform your users about it.
Another approach could be to deploy your new version as a new app in your hosting provider. This implies a different URL from the current app, confusing your users and making it harder for them to move their apps to the new version.
As we see, there isn't a completely good solution that covers everything. So, Azure Front Door service comes here to resolve our problems.
## Configuring Front Door for URL routing
[__Azure Front Door__][Azure Front Door Docs] provides us with the power of __URL-based routing__. This means that you can route your traffic based on patterns to different pools. In the case of your API app, __we will consider that you deployed the new version as a different API app__ to avoid any disturbance for your users so you have two different pools, __version 1 and version 2 API apps__. It seems that now we are able to keep a _unique domain_ and, depending on the URL parameters, use one or another app.
By default, [__Azure Front Door__][Azure Front Door Docs] provides you with this URL so _you don't have to worry about it_ although you want to use your own domain. Please, follow this [article][Azure Front Door Custom Domain Docs] in order to configure your own domain.
Our pattern strategy to define our API versions will be configured in Azure Front Door, so we have to decide how we want to serve our methods. A common pattern nowadays is to add the version number in the URL like:
```
https://api.domain.com/v1.0/api-method
```
And that's how you will configure your routing rules. Let's take a look at the following screenshot with the most relevant parameters to configure. The first one allows us to set the __URL pattern__ where we describe our version 1 API app calls. We must avoid the protocol and domain name of the URL and set the query like the following `/api/v1.0/*`.

The following parameter is the _route type_ which defines how to __process the request__, in our case, you must select _Forward_ and the proper _Backend Pool_. The third one I want to show you is _"Custom forwarding path"_ parameter where we should __type the URL of our API app__ to match with. If we open the API app for notifications functionality we should write here the relative URL to the notifications methods, `/api/notifications/` as an example.
```
/api/v1.0/messages -> /api/notifications/messages
/api/v1.0/messages?$select(title,desc) -> /api/notifications/messages?$select(title,desc)
```
This is all the stuff you need to configure the service to provide with URL-based routing functionality. Moving forward, you are able to configure new versions of your API adding more and more rules and disabling or removing those that you don't need anymore.
Managing versions of your API app enabling or disabling routing rules is easier than starting or stopping API apps in different web hosting platforms or doing lines of code that reads an URL parameter and decides which response to send.
> _Consider this as a "Single Responsibility" pattern of **SOLID** principles where you have two clearly separated layers one for version management and the other for providing the proper functionality._
## What else it can do
As I said at the beginning of this article, the [__Azure Front Door__][Azure Front Door Docs] service provides lots of different functionalities you can take advantage of. If you want to evolve from here, you can deploy your API app to several data centers and provide geo-affinity and load balancing to your users depending on their locations.
The following list provides links to the official documentation I referenced in this article:
* [Azure Front Door Docs]
* [Azure Front Door Custom Domain Docs]
* [Azure Front Door Routing Methods]
* [Azure Front Door Routing Architecture]
Hope you enjoyed reading this article. Please, leave any comment to help me and the community if you want to share something.
[Azure Front Door Docs]: https://docs.microsoft.com/en-us/azure/frontdoor/front-door-overview
[Azure Front Door Custom Domain Docs]: https://docs.microsoft.com/en-us/azure/frontdoor/front-door-custom-domain
[Azure Front Door Routing Methods]: https://docs.microsoft.com/en-us/azure/frontdoor/front-door-routing-methods
[Azure Front Door Routing Architecture]: https://docs.microsoft.com/en-us/azure/frontdoor/front-door-routing-architecture | jaloplo |
309,976 | Deploying a MERN stack | Solution 1: Heroku (https://www.heroku.com/) Positives: Easy set up. Uses Docker containers within A... | 0 | 2020-04-15T15:27:30 | https://dev.to/stuartcreed/deploying-a-mern-stack-54ai | mern, deploy, heroku, aws | Solution 1: Heroku (https://www.heroku.com/)
Positives: Easy set up. Uses Docker containers within Amazon EC2 instances so that you only pay for what you use (PAAS).
Negatives: Costs £7 a month. Free tier times out your container after 30 mins of inactivity.
Solution 2: Amazon AWS EC2 Linux Instance.
Positives: This is a full machine (not a container) so you can configure it to your heart's content. It is available in the Amazon Free Tier for 12 months.
Negatives: Costs £14 a month after the first 12 months. You have to set up the server yourself, but you would be surprised how easy this is to do with NGINX or Apache. This article shows you how to set one up using NGINX https://link.medium.com/SvBPqJdXH5
Solution 3:
Amazon Lightsail
Positives: £3.50 a month, so cost effective. MERN environment and Apache server already set up.
Negatives: A restricted machine - but it will allow the vast majority of what you need for web development. It is essentially a streamlined EC2 machine with everything you should need for web development already set up.
| stuartcreed |
310,041 | What's the story behind your first money made with software? | Do not be shy! Tell us how you first got into the world of commercial software? | 0 | 2020-04-15T17:17:11 | https://dev.to/binaryforgeltd/what-s-the-story-behind-your-first-money-made-with-software-2a4 | software, discuss, memories, career | ---
title: What's the story behind your first money made with software?
published: true
description: Do not be shy! Tell us how you first got into the world of commercial software?
tags: software, discussion, memories, career
---
Can you remember the very first person **brave enough** to pay for a piece of code made by you? 🙆♂️
What was it that you have done?
Was it really bad?
...or were you bursting with pride? Both perhaps? :)
You simply cannot be bothered with material goods and never made a single penny with software? I want to hear from you *too*.
---
My first time took place roughly around 11 years ago - can you remember when these money-management, space-war browser games like OGame were on fire? No? Well, that's fine. 😬
I was learning PHP at the time and eventually I conceived a dream - my very own OGame clone, only better! 💪
I spent a couple of weeks putting together a truly horrible codebase and went on to publish it on free hosting.
Needless to say, it did not resemble anything like a completed game.
Spamming various message boards I managed to find around 100 players to join the game (and put the free server down as a result) but what happened next shocked me... (no clickbait)
I received an email from someone offering to buy the game from me as it seemed *"promising"*! 🤯
Now, that was something new to me - I thought for a couple of minutes and accepted the whopping offer of the equivalent of **$40**...
...hey, for a regular teenager from Eastern Europe that was a **pile of cash** at the time. It went quick from there - I got paid by wire transfer, sent the source files via email (who would have thought one can have a git repo) and that was it - never heard of it again. 😭
---
🎼 🎷 ...Money for nothing, and chips for free... 🎧
---
Tell me your story! 🔥 I am sure there will be a few fun, *brutal* or unusual memories to share. | binaryforgeltd |
310,049 | How do you handle database migrations ? | At work, we use flyway to manage our database migrations. We have multiple test environnement and mul... | 0 | 2020-04-15T17:25:54 | https://dev.to/iinku/how-do-you-handle-database-migrations-1g62 | database, sql, help, ask | At work, we use Flyway to manage our database migrations. We have multiple test environments and multiple staging versions of our applications. Every Sunday, a script downloads and installs a fresh database backup from prod. How do you handle your migrations in this scenario? Every time you install a new version, do you downgrade to the prod version? | iinku |
310,355 | How to handle surprise changes within the project 😌🙌 | Web development field has been growing over time, the community frequently contributes and... | 0 | 2020-04-16T03:42:11 | https://dev.to/sarl23/how-to-handle-surprise-changes-within-the-project-2871 | webdev, beginners | ---
title: How to handle surprise changes within the project 😌🙌
published: true
description:
tags: help, webdev, beginners
---
_Web development field has been growing over time, the community frequently contributes and strengthens new and early technological generations, each person, each taste is divided into back or front, each a fundamental part of the other, darkness and light as fundamental as the same existence._
If you do front-end development, you must understand that the needs that are established by the design team are the most important to guarantee a correct, excellent and primordial user experience providing clear contents and designs that fit the requirements. This seems easy and as end users or consumers we usually categorize the pages depending on its design and web usability, which includes the ease with which a user navigates through a website; to achieve this you must build a simple, intuitive and fluid navigation. It is even possible to say that it is of little use to have a good design if the texts are very long or not very explanatory.
For independent developers, who are in constant contact with the client, and whose clients often have no knowledge of the design or internal workings of a website, __do not despair__: they simply want both the project they commissioned and their tastes to be reflected in the website. The changes they ask you for will increase as the project develops; some will be simple while others will be tedious and unnecessary. _In these cases patience will be a great companion_: try to mitigate, if not eliminate, the discussions that arise from changes in functionality and design, whether about color, position, size, and so on. These types of changes will be present throughout the life cycle of the project.
I can assure you that those moments will be very trying, but that is why you must be clear from start to end: explain to the client in plain terms the steps you are going to follow and the life cycle the project will go through, show them periodic progress, and let them know what you will develop next after showing what you have already achieved. It is important to establish documents that specify how you will design the page (look and feel) so the client knows how it will look at the end; that way, your process will be much more comfortable. Do not be impatient; at the beginning this can be annoying, but it will help you build a skill that will increase your professional capacity.
## Go for it! 😉
| sarl23 |
310,062 | How do I freeze columns in Data Table? | As the headline says, I would like to freeze columns in my mat-table... I would like to write the fun... | 0 | 2020-04-15T18:00:21 | https://dev.to/anyanx_500v/how-do-i-freeze-columns-in-data-table-2dh1 | As the headline says, I would like to freeze columns in my mat-table... I would like to write the function myself instead of using [sticky] or css.
I already designed a small function where the selected column is set to disabled... I would like to freeze the behaviour and the selected column, can you please help me?
```typescript
// MyInterface
export interface SusaColumn {
  attribute: string;
  name: string;
  mobile: string;
  object?: any;
  frozen?: boolean;
}

private displayedColumns: SusaColumn[] = [
  { attribute: 'accountNumber', name: 'Konto', mobile: 'Konto:', object: null, frozen: false },
  { attribute: 'name', name: 'Erlöse / Kostenarten', mobile: 'Erlöse / Kostenarten:', object: null, frozen: false },
  { attribute: 'kag', name: 'KAG', mobile: 'KAG:', object: null, frozen: false }
];

private freezeColumn(attributeName: string) {
  if (attributeName) {
    const displayedColumn = this.displayedColumns.find((c) => c.attribute === attributeName);
    if (displayedColumn) {
      displayedColumn.frozen = !displayedColumn.frozen; // here i make the column disabled
      const columnIndex = this.columns.findIndex((c) => c === attributeName);
      if (columnIndex > -1) {
        this.columnFilters.controls[columnIndex] = new FormControl({ value: '', disabled: displayedColumn.frozen });
      }
    }
  }
}
```
| anyanx_500v |
310,084 | How to Distribute Secrets for PowerShell Scripts Using Ansible | The Use Case Managing secrets is hard. Everything needs to run under its own username/pass... | 0 | 2020-04-17T05:47:40 | https://dev.to/mieel/how-to-distribute-secrets-for-powershell-scripts-using-ansible-3mne | ansible, powershell, windows |
# The Use Case
Managing secrets is hard. Everything needs to run under its own username/password, and apparently keeping plaintext passwords in scripts is really bad.
To manage secrets better at our company we already implemented the following practices:
1) Avoiding hardcoding credentials (or any other configuration data) in the scripts, and commit scripts and configs separately.
2) Not actually storing real values in the config files: Instead, use placeholders like `#{sqlserverPassword}#` as values, and use a **CI Task** to replace the tokens with the actual values when deploying the script (many CI platforms have this feature out-of-the-box).
So even though we don't store passwords in Source Control anymore, when the scripts are deployed, the files may still contain passwords in plaintext.
You could use `Integrated Security` to avoid using passwords altogether, but this is only an option if your workers and resources are in the same **Windows Domain**.
You can generate `SecureStrings` and have your script users use those instead. It's pretty secure because only users can decrypt the `SecureStrings` that the same user had created on the same machine. But the downside is only users can decrypt the `SecureStrings` that the same user had created on the same machine 😅
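For context, here is a minimal sketch of that `SecureString` approach (the file path and account name are made-up examples); the encrypted file can only be decrypted by the same user on the same machine that created it:
```
# Capture a password and persist it as a DPAPI-protected string (tied to this user + machine)
$secure = Read-Host -AsSecureString "Enter the service account password"
$secure | ConvertFrom-SecureString | Set-Content 'C:\scripts\svc_sql.cred'

# Later, the *same* user on the *same* machine can rebuild the credential
$secure = Get-Content 'C:\scripts\svc_sql.cred' | ConvertTo-SecureString
$cred   = New-Object System.Management.Automation.PSCredential ('domain\svc_sql', $secure)
```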
So how to solve this?
First, we need a method for adding and retrieving secrets: Use Microsofts new [SecretsManagement Module](https://devblogs.microsoft.com/powershell/secrets-management-development-release/).
> **Note**: The Module is still in PreRelease, but it can Add Secrets and Get Secrets, so good enough for our use case for now
So what is this Module?
> The Secrets Management module helps users manage secrets by providing a set of cmdlets that let you store secrets locally, using a local vault provider, and access secrets from remote vaults. This module supports an extensible model where local and remote vaults can be registered and unregistered on the local machine, per user, for use in accessing and retrieving secrets.
Notice that it says *per user*: Users can only access their own local vault (assuming we are not using external vaults like **Azure KeyVault**, which probably would make this whole tutorial unnecessary).
To have the password available for the user to use, we must perform an `Add-Secret` command under the user context, so that the user can do `Get-Secret` to retrieve the password stored in their own vault.
The workflow would be something like: Invoke a command with the User Credentials passed as `-Credential`, executing a Scriptblock `{ Add-Secret -Name MySecret -Secret SuperSecretPassword }` .
🚦 *If there is a better/different way to provide secrets under different user contexts, please let me know.
For now, we continue this route.* 🚦
# The PowerShell Script to Add Secrets
I tried many methods to perform commands under a different user context:
- use `Invoke-Command -ScriptBlock $Scriptblock -Credential $creds`
- why not? This requires **WinRM** to be available for the given user, even if the command is run on `localhost`, so I looked for something else.
- the `Start-Job -ScriptBlock $Scriptblock -Credential $creds | Wait-Job | Receive-Job` Combo
- why not? This worked pretty consistently, up until the point I tried to integrate it in a CI Task. I got stuck getting ❌`2100,PSSessionStateBroken` errors, and it seems to be a [common issue](https://issues.jenkins-ci.org/browse/JENKINS-49159) in CI systems. Apparently it has something to do with credentials not being passed while 'double hopping'; resolving it would again mean setting up WinRM in conjunction with something called CredSSP.
So finally I ended up using the [Invoke-CommandAs](https://github.com/mkellerman/Invoke-CommandAs) Module, which under the hood creates a **Scheduled Task** as the user, carries out your commands you specify in a `$ScriptBlock`, and finally removes the Task.
An example snippet for adding 1 secret for 1 user on 1 machine would be:
```
$userName = "domain\user_account"
$userPassword = "userPassword" # ✋Warning If the password contains $ signs, use single quotes!
# PSCredential needs a SecureString, so convert the plain-text password first
[securestring]$secStringPassword = ConvertTo-SecureString $userPassword -AsPlainText -Force
[pscredential]$Credentials = New-Object System.Management.Automation.PSCredential ($userName, $secStringPassword)
$ScriptBlock = {
If ( -not(Get-Module Microsoft.PowerShell.SecretsManagement -listAvailable) ) {
Write-Host "Secret Module not found, installing.."
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
Install-Module -Name Microsoft.PowerShell.SecretsManagement -RequiredVersion 0.2.0-alpha1 -AllowPrerelease -Repository psgallery -Force -Scope CurrentUser
}
# proof that it worked:
Write-Host $env:computername
Write-Host $env:username
Get-SecretInfo
}
Invoke-CommandAs -Scriptblock $ScriptBlock -AsUser $Credentials
```
> **Note**: Notice the Tls Securtity protocol, I almost flipped my desk when I couldn't figure out why the Module wouldn't download, until I saw [this](https://devblogs.microsoft.com/powershell/powershell-gallery-tls-support/)
**Problem**: The user needs the right set of permissions to pull this off. I haven't fully figured out the exact set of permissions; I thought the user only needs the local `log on as a batch job` permission, but I couldn't get this to work consistently.
**Solution**: So I decided to throw this hack in for now: I ended up temporarily adding the user to the local `Administrators` group, adding the secrets, and then removing it again from the group.
```
NET LOCALGROUP "Administrators" $userName /ADD | Out-Null
Invoke-CommandAs -Scriptblock $ScriptBlock -AsUser $Credentials
NET LOCALGROUP "Administrators" $userName /remove| Out-Null
##📣 If anyone has a better idea, please let me know :halp: 📣
```
🎉 Great, now that we have cobbled together a working script, we can use this for every secret, for each user, on every machine that the user operates on, right? Also, apparently current security conventions suggest that we should run each service as a different user, therefore increasing the number of users we have to manage 🤯. You might see how this can be a bit tedious to manage manually.
But here comes **Ansible**.
# The Ansible Playbook
> Ansible is a radically simple IT automation engine that automates [cloud provisioning](https://www.ansible.com/provisioning?hsLang=en-us), [configuration management](https://www.ansible.com/configuration-management?hsLang=en-us), [application deployment](https://www.ansible.com/application-deployment?hsLang=en-us), [intra-service orchestration](https://www.ansible.com/orchestration?hsLang=en-us), and many other IT needs.
If you haven't played around with Ansible yet, I suggest watching the [live-streams](https://www.youtube.com/watch?v=goclfp6a2IQ&list=PL2_OBreMn7FplshFCWYlaN2uS8et9RjNG&index=4) of this Jeff Geerling guy or check out his [Ansible book](https://leanpub.com/ansible-for-devops) (which is currently free)
> NOTE: If there are more 'ansible' ways to achieve this use-case, please let me know.
The highlevel workflow would be:
- hosts: list of servers
- variables:
- `accounts`:
a dictionary where key = accountname, value = password
- `account_mapped_secrets`:
a dictionary where key = accountname, value= list of secret-keys
- `secrets`:
a dictionary where key = secret-key, value = secret-value (example values are sketched right after this list)
- playbook:
- Get list of `accounts` that have secrets mapped
- Loop over `accounts`, and for each `account`
- construct a secret key/value dictionary for each `account_secret` of the `account`
- use the above `Invoke-CommandAs` powershell snippet under the `account` user context to add each secret-key/value
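Concretely, the variable data behind those three dictionaries could look something like this (account names and secret values are invented for illustration; the files holding them appear in the directory layout further down):
```
# accounts.yaml - account name -> password
accounts:
  'domain\svc_app1': 'S3cretPassw0rd!'

# main.yaml - account name -> list of secret keys it should receive
account_mapped_secrets:
  'domain\svc_app1':
    - sqlserverPassword

# secrets.yaml - secret key -> secret value
secrets:
  sqlserverPassword: 'AnotherS3cret!'
```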
A play would look something like this:
- the `main.yml` file
```
- name: Loop over Accounts that has Secrets to deployed
include_tasks: add_secrets.yml
with_items: "{{ account_mapped_secrets }}"
loop_control:
loop_var: account
extended: yes
```
- the included tasks `add_secrets.yml` file
```
---
- name: "${{ account }}$ -- Create temp Vault dict"
set_fact:
account_secrets_{{ ansible_loop.index }}: {}
- name: "${{ account }}$ -- Populate Secrets"
set_fact:
account_secrets_{{ ansible_loop.index }}: "{{ lookup('vars', 'account_secrets_' ~ ansible_loop.index) |default({}) | combine( {item: secrets[item]} ) }}"
with_items: "{{ account_mapped_secrets[account] }}"
- name: "${{ account }}$ -- Add Secrets to Local Vault"
win_shell: |
$userName = '{{ account }}'
$userPassword = '{{ accounts[account] }}'
[securestring]$secStringPassword = ConvertTo-SecureString $userPassword -AsPlainText -Force
[pscredential]$credObject = New-Object System.Management.Automation.PSCredential ($userName, $secStringPassword)
$ScriptBlock = {
Get-SecretInfo | Remove-Secret -Vault BuiltInLocalVault #✋for demo purposes we delete any existing Secrets.
$json = @"
{{ lookup('vars', 'account_secrets_' ~ ansible_loop.index) | to_nice_json }}
"@
($json | ConvertFrom-Json).psobject.properties | ForEach-Object { Add-Secret -Name $_.Name -Secret $_.Value }
Get-SecretInfo
}
try {
NET LOCALGROUP "Administrators" $userName /ADD | Out-Null
Invoke-CommandAs -ScriptBlock $ScriptBlock -AsUser $credObject -verbose
NET LOCALGROUP "Administrators" $userName /DELETE | Out-Null
} catch {
Write-Error $_
}
register: shellresult
- name: "${{ account }}$ -- Fail if output is not exptected"
fail:
msg: shellresult.stdout_lines
when: shellresult.stdout.find("Vault") == -1
- name: "${{ account }}$ -- Show Errors"
fail:
msg: "{{ shellresult.stderr }}"
when: shellresult.stderr != ""
- name: "${{ account }}$ -- Print Result"
debug:
msg: "{{ shellresult.stdout_lines }}"
```
✋ You need to setup an [Ansible Role](https://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse_roles.html) to make this snippet work as it is.
My directory looks someting like this:
```
/etc/ansible
├── ansible.cfg
├── environs
│ ├── dev # copy this subfolder for each environment (dev,test,prod...) you might have
│ │ ├── group_vars
│ │ │ └── all
│ │ │ ├── main.yaml # contains account_mapped_secrets variable
│ │ │ └── accounts.yaml # contains accounts variable
| | | └── secrets.yaml # contains secrets variable
│ │ ├── hosts
....
├── roles
│ ├── secrets
│ │ ├── tasks
│ │ │ ├── add_secrets.yml # the core of the play
│ │ │ └── main.yml # the main tasks file that includes the above file in a loop
├── play_secrets_role.yaml # a play that calls the secrets role
```
To run this play for the `dev` environment, run: `ansible-playbook -i environs/dev play_secrets_role.yaml`
## Explaining the playbook:
- You can notice that I'm constructing a unique dictionary in each loop, `account_secrets_{{ ansible_loop.index }}`. In the first versions of my playbook I just used `account_secrets` as the variable, and apparently in Ansible, once you set a fact, you can't unset it. So the secrets dictionary would just keep appending between each loop, and the last user would end up having all the secrets.
- `ansible_loop.index` is the current iteration in the loop, but is only available when you have the `extended: yes` option when looping.
- We use a jinja `combine()` expression to dynamically create the `account_secrets_x` dictionary. Note that I refer to the above variable with the `lookup('vars', 'account_secrets_' ~ ansible_loop.index)` syntax instead of the `account_secrets_{{ ansible_loop.index }}`.
This is because the following won't work: `"{{ account_secrets_{{ ansible_loop.index }} | default({}) | combine( {item: secrets[item]} ) }}"` as you can't have double interpolation inside a Jinja expression.
## Wait a minute, you're still storing plaintext passwords in the variable files!
Here comes Ansible again. By separating the `accounts` and `secrets` from the `account_mapped_secrets` into different variable files, we can encrypt the first two with [Ansible Vault](https://docs.ansible.com/ansible/latest/user_guide/vault.html#file-level-encryption).
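As a rough sketch (file names taken from the example directory layout above), the encryption itself is a single command per environment:
```
# Encrypt only the files that hold real credentials
ansible-vault encrypt environs/dev/group_vars/all/accounts.yaml environs/dev/group_vars/all/secrets.yaml
```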
Then when running your `ansible-playbook`, pass in the extra argument `--ask-vault-pass` | mieel |
310,088 | Build an Event Planner App with Vue.js, Firebase, and Auth0's Passwordless | Quickly build an event planner application that utilizes Auth0's Passwordless feature! | 0 | 2020-04-15T18:59:51 | https://auth0.com/blog/build-an-event-planner-app-with-vuejs-firebase-and-auth0s-passwordless/ | javascript, vue, firebase | ---
title: Build an Event Planner App with Vue.js, Firebase, and Auth0's Passwordless
published: true
description: Quickly build an event planner application that utilizes Auth0's Passwordless feature!
tags: #javascript #vuejs #firebase
canonical_url: https://auth0.com/blog/build-an-event-planner-app-with-vuejs-firebase-and-auth0s-passwordless/
---
TL;DR: To date, passwords and passcodes remain the most used means of gaining access to our protected accounts on the online platforms we love. However, if you are like me with loads of accounts across various online platforms, remembering all passwords can quickly become a herculean task mentally especially when certain online platforms require you to provide strong passwords with special characters, capitalized words, and numbers like you're trying to gain access to Fort Knox. Wouldn't it be wonderful not to have to remember passwords when logging into an application? That is the type of application we will be building in this article.
[Read on 📖](https://auth0.com/blog/build-an-event-planner-app-with-vuejs-firebase-and-auth0s-passwordless/?utm_source=dev&utm_medium=sc&utm_campaign=vuejs_passwordless) | bachiauth0 |
310,098 | What are the best practices for architecting API authentication? | Dear Geek, We are building an API and I am confused as to what kind of security we need? There... | 5,965 | 2020-04-16T11:47:11 | https://dev.to/brentonhouse/what-are-the-best-practices-for-architecting-api-authentication-kd5 | > Dear Geek,
>
> We are building an API and I am confused as to what kind of security we need. There are so many options out there being used (_OAuth 1.0a, OAuth 2.0, SAML, username/password, API Key, JWT, and plenty of others_) and I am not sure what the best practices are for implementing authentication for our APIs. What advice do you have?
>
> — OVERWHELMED BY SECURITY OPTIONS
#### Dear Overwhelmed,
There really are a lot of options for security when designing and architecting APIs but I can help you narrow down things and point you to some best practices for choosing authentication options for your APIs!
##### API Strategy
There are several things to take into consideration when looking at security for APIs, and it is important to make sure it aligns with your organization's overall security strategy.
Let's look at a few of the most frequently used methods of API authentication.
#### No Security
You may be thinking about opening up your API to everyone with no security. I would not do this. Even if your data is non-sensitive and you may not care who sees it, you should be thinking about rate limiting in order to protect your resources.
Look instead at using `API Key`, which I talk about next.
#### API Key
This is an option if the data you are presenting is non-sensitive. An `API Key` is a unique value generated for use by an API client. `API Key` is not really authentication as it is a way of filtering requests by client. You still have no idea who is using your API with that `API Key`. Adding an `API Key` requirement to your API will at least allow you to limit the number of requests per registered client.
Allowing the client to reset the API Key is an important feature as the key might become compromised.
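As a quick illustration, here is a minimal sketch of `API Key` filtering as Node.js/Express middleware; the header name, the in-memory key store, and the per-client counter are illustrative assumptions, not a prescribed implementation:

```javascript
const express = require('express');
const app = express();

// Hypothetical key registry; a real API would keep these in a database or API gateway config.
const apiKeys = new Map([
  ['client-a-key', { client: 'client-a', requests: 0 }],
]);

app.use((req, res, next) => {
  const key = req.get('x-api-key'); // a common convention for passing an API Key
  const record = key && apiKeys.get(key);
  if (!record) {
    return res.status(401).json({ error: 'Missing or unknown API key' });
  }
  record.requests += 1; // track usage per client so rate limits can be enforced
  next();
});

app.get('/widgets', (req, res) => res.json([{ id: 1, name: 'widget' }]));

app.listen(3000);
```

Note that this only identifies the calling client; it still tells you nothing about the end user.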
Most APIs will require true authentication, which is when a lot of architects find themselves looking at `OAuth 2.0`, which I cover next.
#### OAuth 2.0
You will see this form of Authentication used on a lot of APIs. This involves an end user authenticating and getting a token that can be used by the client to then authenticate with your API. I won't go into details here as the `OAuth 2.0` process can be challenging to understand if you're new to this. I will tell you that there are several different OAuth flows and you will need to work with your `OAuth 2.0` provider to see which flows they support. `OAuth 2.0` does have flows that support server-to-server communication but not all organizations and providers will have these flows enabled.
The one downside to some `OAuth 2.0` flows is that it can get pretty ugly. You probably have some awesome designs showing a nice branded login flow for your app or website. But the reality is you will get thrown out of that into something much less branded before completing and getting back to your app.
The other downside is the additional screens themselves. Depending on what the timeout is for your app or website, you might have 3-5 screens and a whole lot more clicks just to open a link.
There is not much you can do about the downsides as security is more important than aesthetics.
My only point in pointing out the downsides is that you do want to be aware of which `OAuth 2.0` flows are supported (and enabled) for your API and what it means for your clients if they are turned off.
#### Username/Password
Some APIs authenticate with username and password, often in the form of Basic Auth in the header. Even when combined with SSL, this is not a recommended solution for securing your API. You will often see this with older APIs that were created using a webpage paradigm. This also often led to APIs being created that were session based (or worse, session based with cookies).
Speaking of session-based APIs. Please don't do this! RESTful APIs are designed to be stateless!
If you are thinking about doing this, first see if your API falls into the one exception to this rule:
**THERE IS NO EXCEPTION.**
**Don't do it.**
For the love of all that is good. Just don't.
#### Others
Here are a few of the other authentication methods you might find out there.
| Auth | Comments |
|------------|-------------------------------------------------------------------------------------------------------|
| JWT | Uses a JWT to authenticate. Easy to setup and use but user must manually manage token creation, etc. More secure alternative to API Keys. |
| OAuth 1.0a | Less secure than OAuth 2.0. Just use OAuth 2.0 |
| SAML | Used for some SSO system. Difficult to use and manage for APIs |
### Recommendations
- Use `OAuth 2.0` but with flows enabled to support server-to-server, device authorization, etc. so you can ensure your API Client are secure while also enabling a great user experience!
- Use API Key authentication with caution if publishing non-sensitive data
- Avoid username/password authentication.
- Avoid maintaining state in your API calls.
| About Brenton House |
|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| With 25 years of experience in the development world, Brenton House leads Developer Relations for Axway's API and mobile products, He has worked closely with many clients across various industries including broadcasting, advertising, retail, financial services, transportation, publishing, supply chain, and non-profits. Brenton's passion for everything API and mobile combined with his strategy and design experience, has enabled him to help developers create captivating products that inspire and delight audiences. |
Ask-a-Geek questions are answered by Brenton House, an API and Mobile Geek who has been working in dev community for 25+ years. | brentonhouse | |
310,237 | Aprendendo do zero a criar uma aplicação desktop com JavaScript, Electron Js e Vue.Js | Fonte de Estudos: Casa do Código Livro: VueJs Construa Aplicações Incríveis Instalando o CLI do V... | 0 | 2020-04-15T23:36:28 | https://dev.to/gustavo_nascimento/aprendendo-do-zero-a-criar-um-projeto-com-vue-js-3k1e | beginners, javascript, vue | > Study source: Casa do Código
> Book: VueJs Construa Aplicações Incríveis
- Installing the Vue.js CLI globally
```javascript
npm i -g @vue/cli
```
- Creating a Vue.js project
```javascript
vue create <app-name>
```
- Starting a dev server in Vue.js
```javascript
npm run serve
```
- During the setup, the question below will appear. I usually leave it on "default" and press ENTER.
```sh
Vue CLI v4.3.1
? Please pick a preset: (Use arrow keys)
❯ default (babel, eslint)
Manually select features
```
- Right after that, the git repository initialization and the CLI plugins installation will start, as shown below.
```sh
Vue CLI v4.3.1
✨ Creating project in /mnt/c/Users/Fantasma/Desktop/web/VueJs/teste.
🗃 Initializing git repository...
> ejs@2.7.4 postinstall /mnt/c/Users/Fantasma/Desktop/web/VueJs/teste/node_modules/ejs
> node ./postinstall.js
added 1195 packages from 852 contributors and audited 25249 packages in 292.71s
40 packages are looking for funding
run `npm fund` for details
found 0 vulnerabilities
🚀 Invoking generators...
📦 Installing additional dependencies...
added 54 packages from 39 contributors and audited 25532 packages in 54.138s
42 packages are looking for funding
run `npm fund` for details
found 0 vulnerabilities
⚓ Running completion hooks...
📄 Generating README.md...
🎉 Successfully created project teste.
👉 Get started with the following commands:
$ npm run serve
```
- Go into the folder of the newly created project.
```javascript
$ cd teste
```
- Installing the Vuetify lib (I leave it as Default (recommended))
```javascript
vue add vuetify
```
- Adding Electron to your project
```javascript
vue add electron-builder
```
Note: During the installation it will ask the questions below. I used version 5.0.0.
```javascript
? Still proceed? (y/N) Yes
? Choose Electron Version (Use arrow keys)
4.0.0
5.0.0
6.0.0
```
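Once the plugin is installed, vue-cli-plugin-electron-builder normally adds npm scripts for running and packaging the desktop app (the script names below assume the plugin's default setup):
```sh
npm run electron:serve   # run the desktop app in development mode
npm run electron:build   # package the app into a distributable binary
```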
| gustavo_nascimento |
310,413 | Patrones de diseño en #javascript - Singleton | El patrón Singleton está diseñado para restringir la creación de objetos pertenecientes a una clase.... | 0 | 2020-04-16T06:53:29 | https://dev.to/3rchuss/patrones-de-diseno-en-javascript-singleton-81g | javascript, design, designpatterns, codenewbie | El patrón **Singleton** está diseñado para restringir la creación de objetos pertenecientes a una clase. **instancia única**.
Let's look at an example:
```javascript
const alumnos = {
// All the students
listaAlumnos : [],
// Get one student
get: function(id) {
return this.listaAlumnos[id]
},
// Create a student
crear: function(datos) {
this.listaAlumnos.push(datos);
},
// List all the students
listado: function() {
return this.listaAlumnos;
}
}
const infoAlumno = {
nombre: 'Jesus',
edad: 30
}
const infoAlumno2 = {
nombre: 'Juan',
edad: 20
}
alumnos.crear(infoAlumno);
alumnos.crear(infoAlumno2);
const listado = alumnos.listado();
console.log(listado);
//(2) [{…}, {…}]
//0: {nombre: "Jesus", edad: 30}
//1: {nombre: "Juan", edad: 20}
const alumno = alumnos.get(0);
console.log(alumno);
//{nombre: "Jesus", edad: 30}
```
**Singleton allows objects to be created directly.**
It is one of the most common and easiest patterns to use in small projects.
| 3rchuss |
310,419 | Secrets of choosing the right pricing strategy for your mobile app project | For each App entrepreneur, building a great mobile app becomes the overwhelming focus. Since the appl... | 0 | 2020-04-16T07:07:56 | https://dev.to/ltdsolace/secrets-of-choosing-the-right-pricing-strategy-for-your-mobile-app-project-5fi7 | mobileapps, apps | For each App entrepreneur, building a great mobile app becomes the overwhelming focus. Since the application development process has been more than just software development, a lot of time and effort is spent on discovery, design, and development. Of course, providing value to the end-users is the prime objective behind any application however pricing a digital product is an equally important aspect, one that should not be overlooked. It should be carefully determined to build a beneficial and scalable mobile application business.
The product pricing strategy plays an important role in getting your application the initial traction and visibility in the app store. How you determine the cost of your product decides the success of your application in terms of the number of downloads and user retention over the long haul.
However, it resembles walking on a tightrope- applications that are offered for free or at a generally lower price accumulate more market share and can enter in the market flawlessly while applications that are offered at a greater price have the potential to garner a good ROI.
So, how would you choose the appropriate pricing strategy? A lot of factors are important while choosing the price for your app: market research, the value and usefulness of your application, and the market competition. This, combined with an understanding of customer psychology, is a must to run a successful mobile application business.
If you plan on building an application, or your product is in the development process and you have not given much of an afterthought to this equally essential aspect, don’t worry, we will separate what is a product pricing strategy, the popular pricing models for mobile applications and help you strategically decide the appropriate price for your application.
**What is a product pricing strategy?**
Pricing strategy is the manner by which you make money from your mobile application. Each breakthrough idea has some business model behind it. While at first, you may only be concerned about rapidly launching a Minimum Viable Product, having a pricing strategy in place will just help you with building a robust product. Pricing a digital product may appear to be a simple task, yet it isn’t. Play Store and App store are sprawling with a huge number of mobile apps and the types of pricing strategies have evolved over time. But, before we dive excessively deep into the types of mobile app pricing strategies and discuss their advantages and disadvantages, we would like to share some significant insights that will help you to take the appropriate choice.
**Product Pricing Strategies for your Mobile App-**
*1. Human Behaviour and psychology-*
Human psychology plays a major role in deciding the cost of your product. Consumers are aware of the rapid development in technology. It is the growing rise in consumerism that steers the extent of advancements. Customers today are all well-read, well informed and they generally strive to make a conscious decision. For each product, there are ten different choices available in the market, so how would you assure a customer will choose your product? Simple, you must ensure that the perceived value of your product is greater than its cost.
A customer should consistently feel that the cost of your product is correct and taking advantage of this becomes more simple when they don’t have a reference point, to begin with. Before taking a decision, we humans will in general compare products and their alternatives available in the market and assess them for their advantages and disadvantages. In this way, while deciding the pricing strategy you should consider other products available in the market, characterize your value proposition and ensure you are offering greater value than your rivals.
*2. What the market is willing to pay-*
The perfect pricing strategy should revolve around what the market is willing to pay. And, to confirm that, you should test your product at different price levels and see how the market responds. An application has a recurring revenue stream. To stay competitive you have to use new functionality and offer regular updates. So it is necessary to holistically see the recurring revenue model and not worry about the revenue per price point.
Mobile app developers offer a product for free to accumulate a huge user base and generate revenue from advertisements. But later on, when you offer an updated product for a specific cost you had just devalued the product and can’t generally sell it for more because the perceived value is low. Hence the better approach is to test your product at various price levels and analyze the market reaction towards it.
*3. Build a product that users want-*
A product should be built with a goal to address user concerns and it must solve their concern. When you build something that users really need, it is easier to decide the price. Search for factors that can improve the value of your app: a stellar design, an awesome user journey. To determine the mobile application pricing model you need to know the application development cost, which includes factors like the actual cost, the value the product offers and the money required to flourish it. Having a basic knowledge of the present market trends is similarly significant and rolling out new updates and improved functionality is the key while deciding on a penetrative pricing model.
*4. Study the market and competitive landscape-*
Market-driven pricing is an extraordinary way to deal with an optimum price point for your mobile app. Considering the competitive landscape and analyzing your user base can assist you to discover opportunities where the competition is lacking. For example, launching your application at a generally lower cost than your competitors gives you an advantage in the market. Understanding the market demand for a product and viewing it from the customer’s viewpoint helps a lot in deciding the appropriate price. It is important to establish early on what makes your product special and what value does the competition in the market demand for your app.
*5. Robust marketing strategy-*
In this fast-paced environment, the power and reach of social media is growing each day. In such a situation, backing your pricing strategy with a robust marketing strategy is more like a need. Having an understanding of the present market trends along with a marketing strategy helps in maximum user acquisition. To retain users over the long haul, rolling out upgrades in each quarter is a good strategy. So, use the power of social media to make the buzz around your product and gain initial traction. Try not to be focused on the highest point of the funnel, it is ideal to look at things comprehensively. Curating rich content for social media platforms ensures the word about your application reaches far and wide and creates the maximum number of downloads. You can also know- Tips and tricks to increase your mobile app engagement.
*6. Affordability-*
Other than offering the ideal functionality, your application should be affordable for customers. It is only fair to offer value to your customers at a reasonable price. A penetration pricing strategy helps businesses attract new customers because of the low cost. It is viewed as an exemplary strategy since it uses behavioral psychology to drive customers to buy the product. The pricing for any digital product should also consider the sustainability of your business.
*7. Understand what your product and target customers demand-*
It is important to realize early on that no one size fits all. What may have worked for someone else will not necessarily work for you. There will be new opportunities that you can take advantage of to build an effective mobile application business. Think outside the box, be unconventional in your ways, and look out for alternate models to generate revenue. For example, the wide pool of data generated from your application could be of value to someone. Healthcare applications generate a large amount of data that is helpful for doctors and researchers.
*8. Paid vs free-*
Deciding whether to launch an application as free or paid depends on some factors, such as the required number of users. If you need to get a huge user base at once, free is the go-to decision. With paid applications, users anticipate more features and functionalities. They surely don't like paying every time a new upgrade is rolled out.
Also, when the customers pay to avail in-application purchases, you should ensure that it offers sufficient value. Analyzing from what the competition is doing is a good way to establish yourself in the market. You can’t choose a paid pricing model when others in your tech environment are offering similar functional applications for free. However, if your application has advanced functionality and a unique recommendation which isn’t found in similar applications, users can be happy to pay more.
**Types of Mobile App Pricing Strategies-**
*1. Free-*
Generally, we don’t get a lot of things for free. The ‘Free’ tag isn’t only a pleasing surprise but also a great tool to attract a large pool of customers. The same applies to the free application pricing model. As the name recommends, these applications are allowed to free download, and in most cases, the basic source of income is through promotions. Free applications are often observed as a great tool to attract in a huge pool of users and retain them over the long haul. They are intended to encourage communication and customer service. Depending upon the purpose of your application, there are two types of free pricing strategies to choose from.
The previous is a Completely free strategy, which is used when you already have a well-established product or a service in place. In such a case, the application is an add- on tool for the users. The objective behind totally free applications isn’t to make money directly from the application but redirect potential customers to other revenue streams. For example, free applications include features such as coupons, discount notices and other pertinent information that encourages users to take action. The application is used to spark interest, support the marketing strategy and drive users to other revenue channels.
The latter is an In-app advertisement strategy. In this strategy, applications are offered for free but users see adds while using the app. Primarily, gaming applications use this strategy to create ad revenue. With in-application advertisements, the key is to ensure that the advertisements that are being displayed are important to the users. There are benefits through which you can easily sort out the ads by user relevance and format and ensure the ads in your application actually pique the users’ interest.
*2. Freemium-*
The freemium pricing model is a modified version of the free pricing strategy. This model is more popular and generally used by organizations. Freemium applications can be downloaded for free but include limited functionality and features. To produce revenue from these applications, in-application purchase opportunities are created. There are three freemium strategies to choose from depending upon the levels, features, and incentives.
Two-tiered approach – It includes applications that customers can download and use for free but there is a premium functionality that must be benefited upon payment. This strategy is widely used in gaming applications where users need to buy in-application currencies, extra lives or upgrade to additional levels.
The second model includes applications which offer full functionality but just for a limited timeframe. The thought behind this strategy is that users can see the vision and get familiar with the application’s utility, and after a while, they will benefit the services.
The last model includes applications where all the features and functionalities are free except it accompanies built-in advertisements. The catch here is that a small amount is charged from users to offer an advertisement free app.
Read more at- [https://solaceinfotech.com/blog/secrets-of-choosing-the-right-pricing-strategy-for-your-mobile-app-project/] | ltdsolace |
310,430 | High available Kubernetes cluster with single control plane node | This article was originally published at my SRE blog Why single node control plane? Benef... | 6,001 | 2020-04-16T07:47:44 | https://vorozhko.net/high-available-kubernetes-cluster-with-single-control-plane-node | kubernetes, sre, aws | *This article was originally published at [my SRE blog](https://vorozhko.net/high-available-kubernetes-cluster-with-single-control-plane-node)*
## Why single node control plane?
**Benefits are:**
* Monitoring and alerting are simple and on point. It reduces the number of false positive alerts.
* Setup and maintenance are quick and straightforward. A less complex install process leads to a more robust setup.
* Disaster recovery and its documentation are clearer and shorter.
* Applications will continue to work even if the Kubernetes control plane is down.
* Multiple worker nodes and multiple deployment replicas will provide the necessary high availability for your applications.
**Disadvantages are:**
* Downtime of the control plane node makes it impossible to change any Kubernetes object: for example, to schedule new deployments, update application configuration or add/remove worker nodes.
* If a worker node goes down during control plane downtime, it will not be able to re-join the cluster until the control plane has recovered.
**Conclusions:**
* If you have a heavy load on the Kubernetes API, like frequent deployments from many teams, then you might consider using a multi control plane setup.
* If changes to Kubernetes objects are infrequent and your team can tolerate a bit of downtime, then a single control plane Kubernetes cluster can be a great choice.
## Reliable single node Kubernetes control plane
Let's dig into the details of how to make a single node control plane cluster reliable and highly available.
There are 3 main steps for a single node HA cluster:
* Frequent etcd backups
* Monitoring of main Kubernetes components
* Automated control plane disaster recovery
## Frequent etcd backups
The only stateful component of a Kubernetes cluster is the etcd server. The etcd server is where Kubernetes stores all API objects and configuration.
Backing up this storage is sufficient for complete recovery of Kubernetes cluster state.
### Backup with etcdctl
etcdctl is a command line tool to manage the etcd server and its data.
#### Making a backup
The command to make a backup is:
```
ETCDCTL_API=3 etcdctl --endpoints $ENDPOINT snapshot save snapshot.db
```
The command to restore a snapshot is:
```
ETCDCTL_API=3 etcdctl snapshot restore snapshot.db
```
> Note: You might need to specify paths to certificate keys in order to access etcd server api with etcdctl.
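For example, on a kubeadm-provisioned control plane the call usually looks roughly like this (the certificate paths are typical kubeadm defaults and may differ on your cluster):
```
ETCDCTL_API=3 etcdctl \
  --endpoints https://127.0.0.1:2379 \
  --cacert /etc/kubernetes/pki/etcd/ca.crt \
  --cert /etc/kubernetes/pki/etcd/server.crt \
  --key /etc/kubernetes/pki/etcd/server.key \
  snapshot save snapshot.db
```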
#### Store backup at remote storage
It's important to back up data to remote storage like S3. It guarantees that a copy of the etcd data will be available even if the control plane volume is inaccessible or corrupted.
Step 1: Make an s3 bucket:
```
aws s3 mb s3://etcd-backup
```
Step 2: Copy snapshot.db to s3 with new filename:
```
filename=`date +%F-%H-%M`.db
aws s3 cp ./snapshot.db s3://etcd-backup/etcd-data/$filename
```
Step 3: Setup s3 object expiration to clean up old backup files
```
aws s3api put-bucket-lifecycle-configuration --bucket etcd-backup --lifecycle-configuration file://lifecycle.json
```
Example of lifecycle.json which transitions backups to S3 Glacier:
```json
{
  "Rules": [
    {
      "ID": "Move rotated backups to Glacier",
      "Prefix": "etcd-data/",
      "Status": "Enabled",
      "Transitions": [
        {
          "Date": "2015-11-10T00:00:00.000Z",
          "StorageClass": "GLACIER"
        }
      ]
    },
    {
      "Status": "Enabled",
      "Prefix": "",
      "NoncurrentVersionTransitions": [
        {
          "NoncurrentDays": 2,
          "StorageClass": "GLACIER"
        }
      ],
      "ID": "Move old versions to Glacier"
    }
  ]
}
```
### Simplify etcd backup with Velero
Velero is a powerful Kubernetes backup tool. It simplifies many operational tasks.
With Velero it's easier to:
* Choose what to back up (objects, volumes or everything)
* Choose what NOT to back up (e.g. secrets)
* Schedule cluster backups
* Store backups on remote storage
* Fast disaster recovery process
#### Install and configure Velero
1) Download the latest version of [Velero](https://github.com/vmware-tanzu/velero/releases)
2) Create an AWS credential file:
```
[default]
aws_access_key_id=<your AWS access key ID>
aws_secret_access_key=<your AWS secret access key>
```
3) Create an S3 bucket for the Velero backups
```aws s3 mb s3://kubernetes-velero-backup-bucket```
4) Install Velero into the Kubernetes cluster:
```
velero install --provider aws \
--plugins velero/velero-plugin-for-aws:v1.0.0 \
--bucket kubernetes-velero-backup-bucket \
--secret-file ./aws-iam-creds \
--backup-location-config region=us-east-1 \
--snapshot-location-config region=us-east-1
```
>Note: we use the S3 plugin to access remote storage. Velero supports many different [storage providers](https://velero.io/plugins/). See which one works best for you.
#### Schedule automated backups
1) Schedule daily backups:
```velero schedule create <SCHEDULE NAME> --schedule "0 7 * * *"```
2) Create a backup manually:
```velero backup create <BACKUP NAME>```
#### Disaster Recovery with Velero
>Note: You might need to re-install Velero in case of full etcd data loss.
Once Velero is up, the disaster recovery process is simple and straightforward:
1) Update your backup storage location to read-only mode
```
kubectl patch backupstoragelocation <STORAGE LOCATION NAME> \
--namespace velero \
--type merge \
--patch '{"spec":{"accessMode":"ReadOnly"}}'
```
By default, *<STORAGE LOCATION NAME>* is expected to be named *default*; however, the name can be changed by specifying *--default-backup-storage-location* on the velero server.
2) Create a restore with your most recent Velero Backup:
```
velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>
```
3) When ready, revert your backup storage location to read-write mode:
```
kubectl patch backupstoragelocation <STORAGE LOCATION NAME> \
--namespace velero \
--type merge \
--patch '{"spec":{"accessMode":"ReadWrite"}}'
```
### Conclusions
* A Kubernetes cluster with infrequent changes to the API server is a great fit for a single control plane setup.
* Frequent backups of the etcd cluster will minimize the time window of potential data loss.
### What's coming next:
* Monitoring of main Kubernetes components
* Automated control plane disaster recovery
| vorozhko |
310,541 | Adding custom business logic to Hasura using Dark | Hasura has recently implemented a way to create custom mutations called Actions. Want to handle compl... | 0 | 2020-04-17T06:34:34 | https://hasura.io/blog/dark-and-hasura-actions/ | hasura, darklang | ---
title: Adding custom business logic to Hasura using Dark
published: true
date: 2020-04-16 09:56:14 UTC
tags: hasura, darklang
canonical_url: https://hasura.io/blog/dark-and-hasura-actions/
---
Hasura has recently implemented a way to create custom mutations called Actions. Want to handle complex business logic? Actions are the way to go!

To create a new action, you need to provide a definition, handler (REST endpoint), and specify the kind – sync or async. When you run the custom GraphQL mutation, Hasura makes a `POST` request to the specified handler with the mutation arguments and the session variables. If you want to know more about the machinery behind it, check out [the docs](https://hasura.io/docs/1.0/graphql/manual/actions/index.html) or our [article introducing actions](https://hasura.io/blog/introducing-actions/).
In this article, we're going to explore how Hasura and [Dark](https://darklang.com/) work together by creating a custom mutation and implementing a handler for it using Dark. And all of that will be done without leaving the browser!
## What is Dark?
Developing services is a pretty complicated job. Before you actually start writing code, you need to decide on stuff like hosting, CI/CD pipeline, language, and then you need to stitch it all together. Another problematic thing around it is deployment. Making a commit, creating PR, running CI, actual deploy — it's a lot. So what Dark aspires to do is to take this whole complexity away from us, and leave developers to only worry about writing code.
Being a _setupless_ solution, Dark consists of a language, editor, runtime, and infrastructure, so that you don't need to spend time figuring all of that out on your own. What's more, ease of writing _deployless_ backends is one of Dark's primary concepts, so while you're writing your code, every change is instantly deployed to the cloud!
The language itself is described as _a statically-typed functional/imperative hybrid, based loosely on ML._ The Dark compiler was written in OCaml, and syntax-wise, you may spot a resemblance between these two languages.
With Dark, you write your code in a structured editor that makes sure you won't write syntactically incorrect code. There's no parser included, which means no syntax errors — with every keystroke, you modify the AST directly.

## Creating an action
### Online mafia game
Do you know _the_ [_Mafia_](https://en.wikipedia.org/wiki/Mafia_(party_game)) game? It's an old-timey party game in which the objective is for the mafia to _kill off_ civilians until they are the majority, or the civilians to _kill off_ the entire mafia. As the rules can be easily extended, players could be assigned with many different roles.
Yet, for our example, let's take only three: mafioso, civilian, and a doctor. There are also some constraints regarding roles:
- There can be only one doctor.
- The ratio between mafia players and all players should be around 1/3.
- The minimum number of mafia players is 2.
For example, for eight players, there should be one doctor, two mafiosos, and five civilians.
When a new player enters a game, I want to assign him a role. However, how do I make sure the above constraints aren't violated? How do I know what characters are still available?
In real life, someone probably would need to go through all the cards and choose the characters based on the players count, and then deal the cards to the players.
But what about an online version of the game? There is no game master among the players and I can't have information about other players' roles on the frontend, because it's secret. That's when actions come into play! I'm going to create a custom mutation that performs the following logic:
- Fetch all taken roles from the DB, along with the number of players in the game.
- Based on already present characters, check which are still available.
- From a set of available roles, choose a random one.
- Insert a new user into the database.
_Figure: Hasura custom mutation flow_
### New Hasura Action
The first thing to do is to define the mutation and all the required types. The definitions below mean:
- The name of the custom mutation is `CreateNewPlayer`.
- It accepts two arguments: a string `nickname` and `game_id` of type `CreateNewPlayerUuid`, which is a custom scalar.
- The mutation returns `CreateNewPlayerOutput`, which consists of the newly created player's info (a rough SDL sketch of this definition follows below).
_Figure: New Action definition_
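As a rough, hypothetical sketch, the action definition described above might look something like this in GraphQL SDL (the output fields and nullability here are assumptions for illustration, not taken from the screenshots):

```graphql
type Mutation {
  # custom action exposed by Hasura; the handler below implements it
  CreateNewPlayer(nickname: String!, game_id: CreateNewPlayerUuid!): CreateNewPlayerOutput
}

# custom scalar used for the game id
scalar CreateNewPlayerUuid

# info about the newly created player (field names are illustrative)
type CreateNewPlayerOutput {
  id: CreateNewPlayerUuid!
  nickname: String!
  role: String!
}
```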
Next, I need to provide an HTTP endpoint to which Hasura will make POST requests whenever the `CreateNewPlayer` mutation is called. In my case, I'm putting a link to my Dark canvas with the route being `new_player`.
_Figure: HTTP handler created with Dark_
As the last step, I'm saying that the kind of mutation is `Synchronous`, which means that Hasura will keep the request open until it receives a response from the handler.
_Figure: Kind of the Action_
## Creating REST endpoint in Dark
If you want to create an HTTP handler, you probably follow these four steps:
1. Agree upon request body parameters.
2. Write an implementation.
3. Make endpoint accessible.
4. Test it by sending a request.
Dark took a different approach. One of the core concepts of Dark is _[Trace-Driven-Development](https://darklang.github.io/docs/trace-driven-development)_. It allows you to develop your backends from incoming requests. In other words, you can send a request to the endpoint that doesn't even exist. Then, based on the received request, you can implement your handler. Let's see it in action!
I'm going to call my newly created mutation.

It will result in an error because I haven't implemented it yet in Dark. But I can go to my Dark canvas, check the 404 section and see that Dark captured the request that Hasura made for the `CreateNewPlayer` action.
_Figure: 404 section_
Now, by clicking the `+` button, I can convert the nonexistent HTTP endpoint into a real handler. I can also check what exactly was sent to Dark in the request body.

### Implementation
I will skip some implementation details related to determining a new role. You can find screenshots with code [here](https://gist.github.com/beerose/c96a082d26551c541eed7e6ba8398e47).
As the first step, I'm making a request to Hasura to extract the information I need – all the roles that are already taken and the number of players. As you can see in the screenshot below, there are already three civilians and one doctor.
_Figure: Fetch all needed data from Hasura_
The next step is to randomly choose a role from all available roles.
_Figure: Available roles_
_Figure: New Role_
As I have a new role for the player, now I can make a call to Hasura and insert a new player to the database.
_Figure: Call to Hasura and handler returning data_
All the implementation is done, so now I can start using `CreateNewPlayer` mutation 🎉
_Figure: Call mutation from Hasura Console_
## Connecting Action with the graph
The custom mutation I just created returns information about the newly created player. The next thing I'd need is information about the game and other players. But I don't want to make another call to Hasura. Instead, I want to get all the additional info by calling only `CreateNewPlayer` mutation.
In order to obtain that, I'm going to modify the definition, so that it also returns `gameId` and create a new relationship between the Action and `game` table. I also need to return `gameId` from the handler.
_Figure: Add gameId to type definition_
_Figure: New relationship_
_Figure: Return gameId from the handler_
Thanks to the new relationship, I can now fetch all the data about the game I want! I can fetch info about the players associated with the same game as well.
_Figure: Mutation_
_Figure: Result_
## Summary
We explored how to add custom business logic to Hasura in a few steps, learned how to take advantage of Trace Driven Development, and saw how quickly we can get a REST endpoint up and running with Dark. Moreover, we added a custom mutation with Hasura Actions, and took things to the next level by creating a relationship and connecting it to the graph!
Enjoyed this article? Join us on [Discord](https://discord.gg/hasura) for more discussions on Hasura & GraphQL!
Sign up for our [newsletter](http://eepurl.com/dBUfJ5) to know when we publish new articles. | hasurahq_staff |
310,596 | Introduction: styled-off-canvas | As a big fan of styled-components, I always had the need for an Off-Canvas or Burger-Menu in my proje... | 0 | 2020-05-28T10:49:14 | https://dev.to/marcostreng/introduction-styled-off-canvas-bna | javascript, react, styledcomponents | As a big fan of [styled-components](https://www.styled-components.com/), I always had the need for an Off-Canvas or Burger-Menu in my projects.
When working with styled-components, it feels unpleasant to use one of the plain CSS based menus. You have to import `.css` files, you probably have to overwrite some styling, and your styling is divided into 'two worlds': plain CSS and styled-components. So I wrote [styled-off-canvas](https://github.com/marco-streng/styled-off-canvas).
## Demo
Yes, there is a [DEMO](https://styled-off-canvas.netlify.app/)
## Components
[styled-off-canvas](https://github.com/marco-streng/styled-off-canvas) comes with three components: `<StyledOffCanvas />`,` <Menu />` and `<Overlay />`.
`<StyledOffCanvas />` is a wrapping component which provides all settings/properties.
`<Menu />` is the off-canvas menu itself. You can pass anything you want as children (e.g. a styled list of react-router links).
`<Overlay />` is an optional component which renders a semi-transparent layer above your app content.
## Implementation
This is a simple example of how to use [styled-off-canvas](https://github.com/marco-streng/styled-off-canvas). You can also find a code example [here](https://github.com/marco-streng/styled-off-canvas/tree/master/example).
```jsx
import React, { useState } from 'react'
import { StyledOffCanvas, Menu, Overlay } from 'styled-off-canvas'

const App = () => {
  const [isOpen, setIsOpen] = useState(false)

  return (
    <StyledOffCanvas
      isOpen={isOpen}
      onClose={() => setIsOpen(false)}
    >
      <button onClick={() => setIsOpen(!isOpen)}>Toggle menu</button>
      <Menu>
        <ul>
          <li>
            <a onClick={() => setIsOpen(false)}>close</a>
          </li>
          <li>
            <a href='/about'>About</a>
          </li>
          <li>
            <a href='/contact'>Contact</a>
          </li>
        </ul>
      </Menu>
      <Overlay />
      <div>this is some nice content!</div>
    </StyledOffCanvas>
  )
}

export default App
```
## Customization
There are a lot of properties to customize the menu, like for example: colors, position, size or transition-duration.
Additionally you can use the styled-components `css` property on every component.
## Plans for the future
[styled-off-canvas](https://github.com/marco-streng/styled-off-canvas) should stay lightweight and simple, so I don't want hundreds of options and possibilities. Currently I'm thinking about adding some transition to the page content.
## Suggestions or feedback
If you got any kind of feedback, suggestions or ideas - feel free! Write a comment below this article or fork/clone from GitHub. There is always space for improvement! | marcostreng |
310,603 | How to Build a Successful Mobile App for Your Online Services Business? | Developing a mobile app for your service-based business is now considered essential. It is due to th... | 0 | 2020-04-16T12:57:20 | https://dev.to/ishawnmike/how-to-build-a-successful-mobile-app-for-your-online-services-business-2m4n | mobileappdevelopment, businessapps, onlinebusinessservices | ---
title: How to Build a Successful Mobile App for Your Online Services Business?
published: true
description:
tags: Mobile App Development, Business Apps, Online business services,
---

Developing a mobile app for your service-based business is now considered essential. It is due to the fact that consumer requirements have now changed. Today, consumers want ease and portability. They want to get information about brands whenever and from wherever they want. Also, the use of smartphones has increased drastically and so has the use of mobile apps. The use of these apps is happening on various levels and degrees. Some users reported their addiction to these apps while others found them really helpful in knowing about the features of a business and getting the service with just one click.
With the emerging demand for mobile applications, a service-based business can hardly survive without a compelling app. Building an app is not that difficult. With a clear strategy, even new business owners can now develop great apps for their businesses. Free templates are also available online and can be a good way to start off building the first app for your business.
In this article, we will discuss building an app on your own for your service-based business. Let us see how.
**1- Be clear about the message you want to convey**

The first step is to be clear about what you actually want to give to your clients. What are your core offerings and what will be the look and feel of your app? If you already have a website, a YouTube channel or a Facebook page, try to follow the same theme so that your service business can become a brand.
**2- Problem identification**
The next step is to identify the problem that your potential customers might be facing or even your present customers are facing. This problem identification can be done by doing research about your clients and taking their feedback on what they want to see in your potential services. Figure out how you can give value through your services to your clients.
**3- Confirm the need for your app**
Rethink whether what you intend to offer through your app will really fill a gap left by the other apps available in the market. At times, something that you see as a problem may already have been addressed by others. So you have to do research to identify areas that others have left unresolved.
**4- Set your objectives**
Be clear about your objectives, as they will help you in channelizing your efforts in the right direction. Of course, you want an app as it will benefit your business, but how? Do you want it to drive more sales for your business, or to enhance your customers’ experience, or to establish a competitive edge in the market? These are some examples of goals you might want to achieve through your app.
**5- Research about your target audience**
Now you need to know who your target audience is. Keep in mind that your target group will not be exactly the same as the one you researched when developing your business website. The number of people using apps is also higher than those using websites.
**6- Consider additional features and functionalities**
Create an outline that includes major decisions regarding your app. Decide whether your app will offer a free download or in-app purchasing deals. For instance, an app by [carpet cleaning London](https://www.carpetbright.uk.com/carpet-cleaning/london/) may provide membership for a 50% discount on carpet cleaning as an in-app purchase.
**7- Create a sketch or wireframe**
Once everything is finalized, you now need to [create your app’s wireframe](https://careerfoundry.com/en/blog/ux-design/how-to-create-your-first-wireframe/). You can also decide what will be the color, design, and theme of your app. You can use wireframing websites that can display the real look of your app.
**8- Build your app**
Now, it's time to work on the real functionality of your app. You can either use DIY app builders and adjust the tools as per your business goals, or you can take help from a professional who will delineate the app's APIs, servers, data diagrams, etc. and handle complex backend tasks.
**9- Keep the design on the forefront**
Service business apps need to be designed aptly so that people are drawn to explore them. A service app should have a compelling design that can cause its potential users to stop and check what the app has to offer.
**10- Hire a professional**
Now, you can hire a professional to implement your design and develop your app. Make sure you book the best professional who understands your vision and is willing to turn your ideas into a reality.
**11- Make a developer account**
For selling your business app on platforms like the App Store and [Google Play](https://play.google.com/store?hl=en), you need to make separate accounts. For the [App Store](https://www.apple.com/ios/app-store/), you have to pay an annual fee of $99, while Google Play charges a one-time fee of $25.
**12- Use data analytics**
Use tools like Localytics and [flurry](https://www.flurry.com/) to keep track of your app downloads. You will also get to know the best parts of your app and the parts that are not getting any response.
**13- Beta testing**
Select the right [beta testers](https://www.infoworld.com/article/3191442/the-5-best-beta-testing-tools-for-your-app.html) for testing your app in the real-world environment. It will provide you some valuable feedback for your app.
**14- Launch your app**
Once all the testing has been done, you can launch your app on the desired platforms. Go through the rules and guidelines of each platform. You have to wait for the review process before your app is finally launched.
**15- Take feedback and improvise**
Feedback will help you will know about the response to your app. Try to improve your app based on the feedback of your app users.
**16- Keep updating and testing**
Once the app is created, your work isn't done yet. You have to keep it updated and test each upgrade. In this way, you will be able to serve your customers better and build a good name for your business.
Creating an app requires dedication and a desire to grow. However, it should be kept in mind that a business app should be created after thoroughly researching your competitors’ apps so that it can stand out for your potential clients.
| ishawnmike |
310,640 | Angular Table row-span and col-span based on typescript data object | Angular Table row-span and col-span based... | 0 | 2020-04-16T14:22:25 | https://dev.to/gaurangdhorda/angular-table-row-span-and-col-span-based-on-typescript-data-object-30ca | angular, typescript, javascript | ---
title: Angular Table row-span and col-span based on typescript data object
published: true
tags: #angular #typescript #javascript
---
{% stackoverflow 61249759 %} | gaurangdhorda |
310,662 | A Day of Azure for an AWS User | This post was originally published on the Leading EDJE website in May 2018. This year the Global Az... | 0 | 2020-04-20T14:33:45 | https://dev.to/leading-edje/a-day-of-azure-for-an-aws-user-gle | cloud, azure | ---
title: A Day of Azure for an AWS User
published: true
description:
tags: cloud, azure
---
> This post was originally published on the Leading EDJE website in May 2018.
>
> This year the Global Azure Bootcamp is [going virtual](https://virtual.globalazure.net/) and will be hosted from Apr 23rd to 25th 2020, with plans to return to the original format next year.
At the recent Global Azure Bootcamp event held at the Microsoft office in Columbus, I was the only one with a laptop covered in AWS stickers. Despite a strong feeling of imposter syndrome, I actually felt right at home due to the large number of similarities between AWS and Azure. Perhaps more surprisingly, as someone with a Java and Linux background, the platform appears very welcoming to the code I write.
Most of the topics had a presentation and then a lab giving us the chance to get hands-on with the services discussed.
## Infrastructure as Code using Amazon Resource Manager
This was listed on the agenda as "Advanced IaC with PowerShell and ARM Templates", and I had to do some searching to figure out what this was going to be about. It turns out that this is the Azure equivalent of AWS CloudFormation which I'm very familiar with and was a perfect example of things being similar but not quite the same.
A few things struck me in particular:
* ARM creates resources in a Resource Group (making it seem a bit like a CloudFormation stack), but Resource Groups are used extensively without ARM, and a Resource Group could contain both manually created resources and resources created via ARM (but that is probably not a good idea).
* By default ARM updates run in "incremental mode", and if a resource is removed from the template it will **not** be deleted from the Resource Group. Specifying "complete mode" will delete resources in the group that are not part of the template.
* Templates are in JSON - having switched from JSON to YAML for CloudFormation, I would not want to go back to JSON.
* Templates have a variables section that can store computed values to be referenced within the template, and there is a much larger set of functions available than in CloudFormation (a minimal template skeleton is sketched after this list).
* The example templates for creating VMs in Azure seemed a lot more complicated than their equivalent in CloudFormation. I'm not sure if this is because they were showing every possible option, or if there is less default configuration.
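To make the comparison with CloudFormation a bit more concrete, here is a minimal ARM template that deploys a single storage account. This is just an illustrative sketch, not one of the bootcamp labs, and the parameter name and resource choice are made up:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": { "type": "string" }
  },
  "variables": {
    "location": "[resourceGroup().location]"
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2019-06-01",
      "name": "[parameters('storageAccountName')]",
      "location": "[variables('location')]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2"
    }
  ]
}
```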
## IoT
Everyone has to have an Internet of Things framework and Azure is no different. There appear to be a lot of similarities to the AWS offerings, but not having used them extensively it's hard to compare. Because we did not have any IoT devices to use at the bootcamp we used simulated devices, which is fine but ultimately turns it into a basic messaging demonstration.
This session did inspire me to download Windows 10 IoT core and install it on a spare Raspberry Pi 2 that I had at home. While the Pi 2 is a supported platform, it was so painfully slow that I lost any interest in doing any more with it - I'm sure it would run much better on a Pi 3.
## Kubernetes
Both AWS (Elastic container service for Kubernetes - EKS) and Azure (Azure Kubernetes Service - AKS) have managed Kubernetes services in preview. The difference with Azure is that it's accessible for anyone to use immediately, whereas you have to apply to join the AWS preview. This may be a difference in philosophy between the platforms - there seem to be a lot of services in preview in Azure that anyone can start using, presumably taking on a risk that the service may change significantly before it is fully released.
I've been using AWS EC2 Container Service (ECS) for a couple of years to run containers in production and it's worked pretty well for us but it is fairly basic in terms of scheduling, and everything is AWS specific. The promise of both EKS and AKS is the ability to define container deployments using the standard Kubernetes tools while running on a platform where you don't need to worry about managing (or paying for) master nodes.
I enjoyed the chance to play with AKS and use Kubernetes for the first time, using the generous free trial you get for Azure when you first sign up (we also got $300 in credit for attending the bootcamp which was very nice).
During this lab I had the chance to use the Azure Cloud shell. Running a bash shell in a browser window on Microsoft's cloud (in a tab of Microsoft Edge) shows a great level of support for Linux users in Azure. Interestingly the Powershell version of Azure Cloud shell has just switched to running on Linux instead of Windows.
## Azure Serverless
Azure Functions are similar to AWS Lambda with a different set of supported languages but some overlap. Logic Apps (based upon BizTalk) allow workflows to be defined and systems to be connected without writing any code, and there's nothing really equivalent in AWS.
Because this was the last topic of the day it didn't have a lab so I haven't gone hands on with Azure functions yet to see if Java functions are as painfully slow to start on Azure as they are on AWS.
## Summary
I really enjoyed the Global Azure Bootcamp and I'll probably attend next year as well. Thank-you to the organizers and Microsoft and the other sponsors (although the sponsor who gave away an Amazon Echo dot might want to rethink their choice of prize).
Both AWS and Azure offer such a large number of services that it's hard to compare the platforms. However, both offer a lot of similar building blocks and a lot of applications could be run equally well on either.
| andrewdmay |
310,790 | The Citadel Architecture at AppSignal | DHH just coined the term "Citadel," which finally gives us an excellent way to reference how we appro... | 0 | 2020-04-16T16:15:38 | https://blog.appsignal.com/2020/04/08/the-citadel-architecture-at-appsignal.html | webdev, architecture, ruby | DHH just coined the term "Citadel," which finally gives us an excellent way to reference how we approach tech at AppSignal. We said, "Hey, this is us! Our thing has a name now".
<blockquote class="twitter-tweet" data-conversation="none"><p lang="en" dir="ltr">In addition to the Majestic Monolith, someone should write up the pattern of The Citadel: A single Majestic Monolith captures the majority mass of the app, with a few auxiliary outpost apps for highly specialized and divergent needs.</p>— DHH (@dhh) <a href="https://twitter.com/dhh/status/1247522358908215296?ref_src=twsrc%5Etfw">April 7, 2020</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
To explain how AppSignal uses the Citadel pattern, we'll share a bit on how our system works. AppSignal is a monitoring product that has a user-facing application and an API that the monitoring agent sends data to. This data is then processed and turned into graphs and insights.
## Monolith
The application our customers interact with is a monolithic Rails app, with parts of the front-end written in React. The backend is entirely written in Ruby and talks to a few databases (we split out data from different customers to separate clusters for scaling reasons). This Rails app also handles a bunch of other tasks such as sending out alerts to external services.
When we started, this Rails app processed incoming data from our monitoring agent as well, but we foresaw that data ingestion would turn out to be a bottleneck. So we used a Sinatra app running on a subdomain that ingested data and created Sidekiq jobs that were processed by the Rails app.
## Growing Pains
This architecture worked well for years. As our business grew, it became clear that the specific task of processing incoming data from the agents was going to need special treatment. When you're monitoring billions and billions of requests, you run into hard limits. The main limiting factor wasn't so much that Ruby is slow (we all know that it is not 😉), but that the way we had architected things caused too much locking in our databases.
## An Outpost
We looked at several possibilities and then decided Kafka was the best fit for our situation. We had some experience with Rust and thought that its speed and reliability would be a very good fit for this system. We rewrote our data ingestion and processing system in Rust, using Kafka as a combination of queue and storage system.
We only moved the incoming data processing part of the Rails app to this outpost service. The rest of the systems works well in the form of a monolithic app. We understand it deeply, and we like keeping things simple. The monolith still handles most of the logic and interacts with Kafka heavily. Our wish to keep our monolith led to us writing a [Kafka gem](https://github.com/appsignal/rdkafka-ruby), so the main app can communicate with the outpost easily.
If you want to learn more about how Kafka works at AppSignal [check out the Railsconf talk](https://www.youtube.com/watch?v=-NMDqqW1uCE) I gave about this.
## Life at the Citadel
This brings us to the current situation where we are very happy in our citadel. As DHH said:
>A single Majestic Monolith captures the majority mass of the app with a few auxiliary outpost apps for highly specialized and divergent needs.
In our case, we have a single outpost service for our highly specialized needs. If there were a RailsConf this year, we would have given DHH some extra stroopwafels as appreciation for giving it a name. 🍪
| thijsc |
310,841 | Specify XCode version for GitHub workflows | The build process on GitHub failed when I used the XCTUnwrap function. This feature was added in XCode 11. I updated the workflows yml file to specify the XCode version. | 0 | 2020-04-16T17:02:42 | https://monicagranbois.com/blog/swift/specify-xcode-version-for-github-workflows/ | xcode, github, workflows | ---
title: Specify XCode version for GitHub workflows
published: true
description: The build process on GitHub failed when I used the XCTUnwrap function. This feature was added in XCode 11. I updated the workflows yml file to specify the XCode version.
tags: xcode, github, workflows
canonical_url: https://monicagranbois.com/blog/swift/specify-xcode-version-for-github-workflows/
---
I wrote a unit test that used the [XCTUnwrap](https://developer.apple.com/documentation/xctest/3380195-xctunwrap) function. This feature was added in XCode 11 and
> "Asserts that an expression is not nil and returns the unwrapped value."
However, when I pushed my code to GitHub, the build broke with the following error.
> error: use of unresolved identifier 'XCTUnwrap'
<!--more-->
To fix this I updated my `.github/workflows/swift.yml` file to include the XCode version. I found the solution from [this GitHub Community Forum Post](https://github.community/t5/GitHub-Actions/Selecting-an-Xcode-version/m-p/31105). You can specify the version in the `env` section using the `DEVELOPER_DIR` variable. GitHub maintains a list of available XCode versions [here.](https://github.com/actions/virtual-environments/blob/master/images/macos/macos-10.15-Readme.md#xcode)
Here is what my `swift.yml` file looks like. This is for a swift package that is included in other iOS projects.
```yml
name: Swift

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

env:
  DEVELOPER_DIR: /Applications/Xcode_11.4.app/Contents/Developer

jobs:
  build:
    runs-on: macos-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build
        run: swift build -v
      - name: Run tests
        run: swift test -v
```
| monicag |
311,010 | What to Consider When Picking a New Programming Language | How to prioritize what language to learn when there are so many options. | 0 | 2020-04-27T21:17:18 | https://dev.to/tylerlwsmith/what-to-consider-when-picking-a-new-programming-language-5b2f | languages, careeradvice | ---
title: What to Consider When Picking a New Programming Language
published: true
description: How to prioritize what language to learn when there are so many options.
tags: Languages, Career Advice
---
I spend a lot of time on Twitter, and I regularly listen to a half dozen tech podcasts. Because of this, I hear about a lot of programming languages.
The amount of languages to choose from is overwhelming. On the server, open source favorites like Ruby and Python are still frequently used, but increasingly the attention is awarded to relative newcomers in the space like Go, Elixir, F#, Kotlin and Rust. Functional languages that compile down to JavaScript like Elm, Facebook's ReasonML, and ClojureScript are also increasingly talked about.
In the mobile and application space, newer cross-platform solutions that utilize JavaScript and Dart compete with established native platform-specific solutions like C#, Java and Swift.
With so many languages to choose from, it can feel paralyzing. How do you know you're making a good choice when there are so many options? You can narrow your focus substantially by looking at some practical considerations.
## Understand Why You Want to Learn a New Language
Understanding *why* you want to learn a new language may be the most important step. If you want to learn LISP just because you think LISP is interesting, then skip the rest of this article and start learning LISP.
For many of us, the reason for learning a new language will likely be for a *personal project*, for *employment opportunities*, or to *solve a specific problem*.
**If you want to learn a new language for personal projects, focus on free and open source languages and tools**. The licenses for products like .NET's Visual Studio IDE can be complicated and are subject to change. This could be a hindrance at any point in building your personal project, and might stop the project completely in its tracks.
You won't always be able to use open source all the time (example: Unity game development), but when all other things are equal, open source is worth consideration.
**If you're learning a language to become more employable, then you should find what languages are popular where you live.** Go to [Indeed](https://www.indeed.com/) and [Dice](https://www.dice.com/) and type in your city and the kind of development you're interested in. Find the languages that come up most frequently. If 4 out of 5 companies are looking for C# developers, then you should probably focus on learning C#.
Not every city will have developer jobs. If you're open to moving, find the languages that are popular in cities you're interested in living in. If you're looking to work remote, keep reading for more considerations.
**If you want to learn a new language to solve a specific problem, find the popular languages for the problem you're trying to solve.** Many areas of programming have one-or-two obvious choices. When that's the case, pick one of the obvious choices.
* Are you trying to do professional game development? C++ or C# are your main choices.
* Interested in machine learning? Python is probably your best bet.
* Want to build web applications? Learn JavaScript.
* Want to do hobby robotics with Arduinos? Learn C++.
When there are many solutions in your problem area, other important considerations may be licensing (addressed above) and *popularity*.
## Look at Language Popularity
Assessing popularity can be extremely valuable when deciding what language to learn. It might not be talked about enough.
A popular language will typically have a bigger community for help, more resources to learn from, and more libraries to use in your projects. This lets you spend less time reinventing the wheel and more time writing the unique parts of your applications.
The [TIOBE Index](https://www.tiobe.com/tiobe-index/) is considered an authority on language popularity. Spend some time reviewing it.
**In general, prioritize learning languages in the TIOBE Index's Top 20**. These languages have more jobs, a bigger community, and are less likely to be abandoned by their authors overnight. However, the following are exceptions to this recommendation:
1. None of the top 20 languages address the problem you are trying to solve.
2. Your region's employment opportunities are primarily in languages that aren't in the top 20.
3. The language is in *decline*.
## Consider the Language Trend
While languages rarely die outright, after enough time, many languages will get neglected. This can be particularly true when a language's development is spearheaded by a large corporation like Microsoft or Apple.
At the time that this blog post was published, two versions of Visual Basic occupy the TIOBE Index Top 20, yet [Microsoft recently announced that it will no longer be adding new features to the language](https://devblogs.microsoft.com/vbteam/visual-basic-support-planned-for-net-5-0/). Apple has been neglecting Objective C to focus on its newer language, Swift.
And even though Perl is free, open source, and still in the TIOBE's top 20, its usage has been on the decline for years.
In 2020, I would avoid Visual Basic, Objective C and Perl.
**Favor learning languages that aren't consistently trending towards decline** unless they solve a specific problem of yours, or that language is where your region's opportunities reside.
## Final Thoughts
While there are countless programming languages to choose from, many of us are writing code to address problems that can be solved in nearly any language. Newer languages in particular are created to address problems that these older languages don't solve particularly well. Those problems might not be your problems.
When picking a new language to learn, focus on problems that are more tangible, like what you're trying to build, employment opportunities available, and the popularity/trend of the language to ensure you get the most value out of your learning investment.
| tylerlwsmith |
310,864 | Semantic versioning in JavaScript projects made easy | If you've used a package manager like npm or yarn before, you're probably familiar with a versioning... | 0 | 2020-05-23T16:16:58 | https://dev.to/stijnva/semantic-versioning-in-javascript-projects-made-easy-3h63 | tutorial, productivity, javascript | If you've used a package manager like npm or yarn before, you're probably familiar with a versioning format like X.Y.Z, where X, Y, and Z each represent a number, separated by dots. But what do those numbers mean?
This versioning format is called [Semantic Versioning](https://semver.org/) (or SemVer for short). Those three numbers correspond to: `<MAJOR>.<MINOR>.<PATCH>`. Updating the major version means introducing a breaking change, the minor version is incremented when adding a new feature and the patch version is increased when including backward-compatible bug fixes. Increasing the version number (often called "bumping") also requires an update of the project's changelog. However, managing this manually for every release seems like a tedious task. After all, a developer most likely prefers writing code over documentation. Luckily, there are some tools to help automate this!
## 🛠 Tools
[Standard version](https://github.com/conventional-changelog/standard-version) is a utility that takes care of all these versioning steps. It bumps the version, writes the changes to the changelog, and creates a git tag with the new version. It requires [conventional commit](https://www.conventionalcommits.org/en/v1.0.0/) messages when committing, meaning all commit messages should follow a specific pattern:
```
<type>[optional scope]: <description>
[optional body]
[optional footer]
```
The `fix:` and `feat:` types correlate to the `PATCH` and `MINOR` version respectively. Adding a `BREAKING CHANGE:` prefix to the body or footer of the commit message indicates a bump of the `MAJOR` version.
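For example, a commit message for a new feature that also introduces a breaking change could look like this (the scope and descriptions are made up):

```
feat(api): add bulk export endpoint

BREAKING CHANGE: the old single-item export endpoint has been removed
```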
But how can you make sure contributors stick to this format, to prevent standard version from breaking?
Similar to how a linter like [eslint](https://eslint.org/) can be used to analyze your code, a tool like [commitlint](https://github.com/conventional-changelog/commitlint) can be used to analyze your commit messages. By adding commitlint as a commit-msg git hook, all commit messages can be evaluated against a predefined config, ahead of the actual commit. So if the linter throws an error, the commit fails. An easy way to create those git hooks, is by using a helper like [husky](https://github.com/typicode/husky), which allows you to define your hooks directly inside the `package.json`.
Additionally, using an interactive CLI like [commitizen](https://github.com/commitizen/cz-cli), simplifies writing the commit messages in the conventional commit format by asking questions about your changes and using your answers to structure the message.
## 💿 Setup
Install all the necessary tools.
```bash
npm install --save-dev standard-version commitizen @commitlint/{cli,config-conventional} husky
```
Create a `commitlint.config.js` file in the root of the project. This file defines the rules that all commit messages should follow. By extending the conventional commit config, created by the commitlint team, all conventional commit rules will be added to the config.
```js
module.exports = {extends: ['@commitlint/config-conventional']};
```
Configure the hook in the `package.json`.
```json
{
  ...
  "husky": {
    "hooks": {
      "commit-msg": "commitlint -E HUSKY_GIT_PARAMS"
    }
  }
}
```
A commit not following the conventional commit pattern will now fail and give appropriate feedback regarding what caused the error:
```bash
$git commit -m "non-conventional commit"
husky > commit-msg (node v10.15.3)
⧗ input: non-conventional commit
✖ subject may not be empty [subject-empty]
✖ type may not be empty [type-empty]
✖ found 2 problems, 0 warnings
ⓘ Get help: https://github.com/conventional-changelog/commitlint/#what-is-commitlint
husky > commit-msg hook failed (add --no-verify to bypass)
```
Next, initialize the conventional changelog adapter to make the repo commitizen-friendly:
```bash
npx commitizen init cz-conventional-changelog --save-dev --save-exact
```
Add 2 scripts to the `package.json`: one to run the commitizen cli and one for standard-version:
```json
{
  ...
  "scripts": {
    "cm": "git-cz",
    "release": "standard-version"
  }
}
```
## 💻 Usage
Now, when using `npm run cm` to commit, commitizen's cli will be shown. It asks a series of questions about the changes you're committing and builds the message based on the provided answers. For example, committing a new feature looks like this:

When everything is ready for a new release, use standard-version to update the version number, changelog and create the git tag:
```bash
npm run release
```
Standard version's output shows the bumping of the minor version to 1.1.0, as expected when committing a feature, and that a correct git tag was created.
```bash
✔ bumping version in package.json from 1.0.0 to 1.1.0
✔ outputting changes to CHANGELOG.md
✔ committing package.json and CHANGELOG.md
husky > commit-msg (node v10.15.3)
✔ tagging release v1.1.0
ℹ Run `git push --follow-tags origin master && npm publish` to publish
```
The outputted changes to the `CHANGELOG.md` look like this:
```md
# Changelog
All notable changes to this project will be documented in this file. See [standard-version](https://github.com/conventional-changelog/standard-version) for commit guidelines.
## 1.1.0 (2020-04-13)
### Features
* short desc, added to changelog ([cd9dbc9](https://github.com/Hzunax/semantic-versioning-example/commit/cd9dbc9627b7fc64ba0490e495fd71686a604e57))
```
Each `fix`, `feat`, or `BREAKING CHANGE` commit will show up in the changelog with its short description and a link to the commit on the remote.
Standard version also takes care of committing these changes (with a conventional commit message), so all that's left to do is push the changes to the remote and we're done!
## 📖 Further reading
I made an [example setup](https://github.com/Hzunax/semantic-versioning-example) where I use the tools described in this post. Feel free to check out the commit messages and how they are represented in the changelog.
For more complex configurations and more detailed information on the tools and concepts used in this post, check out the links below.
- [Conventional commits](https://www.conventionalcommits.org/en/v1.0.0/)
- [SemVer](https://semver.org/)
- [Commitizen](https://github.com/commitizen/cz-cli)
- [Standard version](https://github.com/conventional-changelog/standard-version)
- [Commitlint](https://commitlint.js.org/#/)
- [Husky](https://github.com/typicode/husky) | stijnva |
310,903 | Twilio Hackathon Start | Been thinking of a few ideas to do for the Twilio Hackathon. We have recently had some issues at work... | 0 | 2020-04-16T18:34:09 | https://dev.to/jhanna60/twilio-hackathon-start-1lng | twiliohackathon | Been thinking of a few ideas to do for the Twilio Hackathon. We have recently had some issues at work that could be improved with a clever application using the API's provided by Twilio.
Work in progress!
#twiliohackathon | jhanna60 |
310,913 | Free Developer-friendly high-res backgrounds for Microsoft Teams | Last week Microsoft released custom background feature for Microsoft Teams. This was a feature long... | 0 | 2020-04-21T18:16:09 | https://platform.uno/blog/free-developer-friendly-high-res-backgrounds-for-microsoft-teams/ | news | ---
title: Free Developer-friendly high-res backgrounds for Microsoft Teams
published: true
date: 2020-04-16 18:44:51 UTC
tags: News
canonical_url: https://platform.uno/blog/free-developer-friendly-high-res-backgrounds-for-microsoft-teams/
---
Last week Microsoft released the custom background feature for Microsoft Teams. This was a feature long reserved for Microsoft employees only, likely for testing usage. The great news is that the custom background feature is now available to everyone, and our design team at Uno Platform took some time to create developer-friendly, high-resolution custom backgrounds with flags for over 30 of your favorite technologies, programming languages and tools you use. There are three different designs to choose from – enjoy!
> ### About Uno Platform
>
> For those new to Uno Platform – it enables for creation of single-source C# and XAML apps which run natively on iOS and Android, and Web via WebAssembly. Uno Platform is Open Source (Apache 2.0) and [available on GitHub](https://github.com/unoplatform/uno). To learn more about Uno Platform, see [how it works](https://platform.uno/how-it-works/), or [create a small sample app](https://platform.uno/docs/articles/getting-started-tutorial-1.html).
Just right-click and save images below to:
```
%APPDATA%\Microsoft\Teams\Backgrounds\Uploads
```
Here is how they look; we had them hand-drawn for the following technologies: [Uno Platform & WinUI](#winui-uno-background) (of course), [Uno Platform](#uno-background), [WinUI](#winui-background), [Visual Studio](#visual-studio-background), [Windows](#windows-bg), [GitHub](#github-bg), [Xamarin](#xamarin-bg), [Blazor](#blazor-bg), [.NET](#dotnet-bg), [.NET Foundation](#foundation-bg), [Android](#android-bg), [iOS](#ios-bg), [WebAssembly](#webassembly-bg), [Azure](#azure-bg), [Microsoft MVP](#mvp-bg), [C++](#cplusplus-bg), [C#](#csharp-bg), [F#](#fsharp-bg), [JavaScript](#javascript-bg), [HTML](#html-bg), [Python](#python-bg), [Java](#ava-bg), [Ruby](#ruby-bg), [React](#react-bg), [Angular](#angular-bg), [Vue](#vue-bg), [PHP](#php-bg), [Slack](#slack-bg), [Docker](#docker-bg), [Flutter](#flutter-bg).
  
If there is something missing, drop us a tweet at [@UnoPlatform](https://twitter.com/UnoPlatform) and we may be able to create one for you.
### Uno Platform & WinUI



### Uno Platform



### WinUI



### Visual Studio



### Windows



### GitHub



### Xamarin



### Blazor



### .NET



### .NET Foundation



### Android



### iOS



### WebAssembly



### Azure



### MVP



### c++



### C#



### F#



### JavaScript



### HTML



### Python



### Java



### Ruby



### React



### Angular



### Vue



### PHP



### Slack



### Docker
### 


### Flutter



> ### About Uno Platform
>
> For those new to Uno Platform – it enables for creation of single-source C# and XAML apps which run natively on iOS and Android, and Web via WebAssembly. Uno Platform is Open Source (Apache 2.0) and [available on GitHub](https://github.com/unoplatform/uno). To learn more about Uno Platform, see [how it works](https://platform.uno/how-it-works/), or [create a small sample app](https://platform.uno/docs/articles/getting-started-tutorial-1.html).
>
>
Special thanks to our amazing designer team – Jessica, Mark and Xavier for doing these so quickly
Uno Platform Team
The post [Free Developer-friendly high-res backgrounds for Microsoft Teams](https://platform.uno/blog/free-developer-friendly-high-res-backgrounds-for-microsoft-teams/) appeared first on [Uno Platform](https://platform.uno). | unoplatform |
310,929 | Converting from AsciiDoc to Google Docs and MS Word | Updated 16 April 2020 to cover formatting tricks & add import to Google Docs info Short and swee... | 0 | 2020-04-17T09:05:13 | https://rmoff.net/2020/04/16/converting-from-asciidoc-to-google-docs-and-ms-word/ | asciidoc, pandoc, googledocs | ---
title: Converting from AsciiDoc to Google Docs and MS Word
published: true
date: 2020-04-16 00:00:00 UTC
tags: asciidoc,pandoc,google docs
canonical_url: https://rmoff.net/2020/04/16/converting-from-asciidoc-to-google-docs-and-ms-word/
---
_Updated 16 April 2020 to cover formatting tricks & add import to Google Docs info_
Short and sweet this one. I’ve written in the past how [I love Markdown](https://rmoff.net/2017/09/12/what-is-markdown-and-why-is-it-awesome/) but I’ve actually moved on from that and now firmly throw my hat in the [AsciiDoc](http://www.methods.co.nz/asciidoc/) ring. I’ll write another post another time explaining why in more detail, but in short it’s just more powerful whilst still simple and readable without compilation.
So anyway, I use AsciiDoc (adoc) for all my technical (and often non-technical) writing now, and from there usually dump it out to HTML which I can share with people as needed:
```
asciidoctor --backend html5 -a data-uri my_input_file.adoc
```
(`-a data-uri` embeds any images as part of the HTML file, for easier sharing)
But today I needed to generate a MS Word (docx) file, and found a neat combination of tools to do this:
```
INPUT_ADOC=my_input_file.adoc
asciidoctor --backend docbook --out-file - $INPUT_ADOC| \
pandoc --from docbook --to docx --output $INPUT_ADOC.docx
# On the Mac, this will open the generated file in MS Word
open $INPUT_ADOC.docx
```
## Customising code block highlighting
You can customise the syntax highlighting used for code sections by setting `--highlight-style` when calling `pandoc`, e.g.:
```
asciidoctor --backend docbook --out-file - $INPUT_ADOC| \
pandoc --from docbook --to docx --output $INPUT_ADOC.docx \
--highlight-style espresso
```

Use `pandoc --list-highlight-styles` to get a list of available styles. You can also customise a theme by writing it to a file (`pandoc --print-highlight-style pygments > my.theme`), editing the file (`my.theme`) and then passing it as the argument to `--highlight-style` e.g.
```
asciidoctor --backend docbook --out-file - $INPUT_ADOC| \
pandoc --from docbook --to docx --output $INPUT_ADOC.docx \
--highlight-style my.theme
```
## Customising other styles (e.g. inline code / literal)
The above `--highlight-style` works great for code blocks, but what about other styles that you want to customise? Perhaps you want to change the formatting used for code that’s inline in a paragraph too, not just blocks. To do this with `.docx` output from pandoc you use the `--reference-doc` parameter, and pass in a `.docx` file with the styles set up as you want.
To create a `.docx` file with all the styles that pandoc may use in translating your source asciidoc, run:
```
pandoc -o my-custom-styles.docx \
--print-default-data-file reference.docx
```
Open `my-custom-styles.docx` in Word and modify the style definitions as required

Now add this argument to pandoc when you invoke it:
```
asciidoctor --backend docbook --out-file - $INPUT_ADOC| \
pandoc --from docbook --to docx \
--output $INPUT_ADOC.docx \
--highlight-style my.theme \
--reference-doc=my-custom-styles.docx
```

## Converting Asciidoc to Google Docs format
Using the above process is the best way I’ve found to write content in asciidoc and then import it, with embedded images, into Google Docs. It’s not an ideal workflow (it’s solely one-way only), but it does mean that if Google Docs is your preferred collaboration & review tool you can still prepare your content in asciidoc.
Once you’ve got your asciidoc ready, you export it to docx (via the above asciidoctor & pandoc route), and then upload the `.docx` to Google Drive, from where you can "Open in Google Docs"

## References
- [Asciidoctor](https://asciidoctor.org/)
- On the mac: `brew install asciidoctor`
- [Pandoc](https://pandoc.org/)
- On the mac: `brew install pandoc`
- [AsciiDoc extension for VS Code](https://marketplace.visualstudio.com/items?itemName=joaompinto.asciidoctor-vscode)
- VSCode is my new favourite editor (but I still ❤️ emacs for org-mode) | rmoff |
310,961 | Open source in 2020 | Open Source Projects For over 20 years the Open Source Initiative (OSI) has worked to raise awareness... | 0 | 2020-04-16T19:55:14 | https://dev.to/madilraza/open-source-in-2020-2ilj | opensource, in, 2020, githubteam | Open Source Projects
For over 20 years the Open Source Initiative (OSI) has worked to raise awareness and adoption of open source software, and build bridges between open source communities of practice. As a global non-profit, the OSI champions software freedom in society through education, collaboration, and infrastructure, stewarding the Open Source Definition (OSD), and preventing abuse of the ideals and ethos inherent to the open source movement.
Open source works
Inspiration
Today the IT era is running and generation is very interested in working remotely to the home
What it does
The huge platform for the Tech geeks and the developers related to IT and data science and development to work remotely and earn the money
How I built it
It is a large story starting with my whole documentation and the needed and the resources that I want to do that project and what understanding to do .working on the different libraries of the java and scala on a single platform to reduce the hard code and make understandable approach for everyone .and working more efficiently as possible
Challenges I ran into
there were big changes that I faced during the designing face and after the design it was very tuff to implement all the interfaces and use multi-purpose API and lib to do smoothly and the backend was very horrible job that was the biggest challenge after it was the face of testing and reducing the chances of the bug and make as proficient as possible
Accomplishments that I’m proud of
today I have a global platform especially for Pakistani to work remotely and earn as they want on their projects without any difficulties all the payments and the resources and tools are free
What I learned
I can do anything if I want to; nothing is impossible
What’s next for FREEMEET APPLICATION
I will try to give my mega project to the Digital Pakistan program so they can build on it for the development of my beloved country, Pakistan
Built With
javafxml
jquery
location api
mongodb
netbeen
scala
springboot
A description of the project and the works presented.
Let’s build something together.
Get in touch! | madilraza |
321,634 | BCP Calltree | What I built Every company has a Business Continuity Plan (BCP) to deal with unforeseen ev... | 0 | 2020-04-28T20:03:46 | https://dev.to/teamwicket/bcp-calltree-1n1a | twiliohackathon | ## What I built
Every company has a Business Continuity Plan (BCP) to deal with unforeseen events that may disrupt the business' normal activity. The ability to reach all employees in an automated fashion using Twilio APIs dramatically simplifies the process of manually contacting everyone in the organisation by means of traditional phone calls.
In this Calltree model, one employee has the designated role of CHAMPION. The CHAMPION can initiate an event by specifying the target role to send the messages to, in a hierarchical-style (CHAMPION -> MANAGER -> LEADER -> REPORTER), and also a text as body of the SMS.

When an employee replies to a SMS sent by the system, the response is then saved in the database. The champion, through the UI, can check the status of the event(s), terminate it and check the results in the dashboard.
The first level of statistics is an overview of the overall events - number of SMS sent out, number of replies and average response time in minutes. The second level of statistics is more comprehensive and collects data from each employee.
When the number of the replies is equal to the number of SMS sent out during the event initiation, the system will automatically terminate the event for the champion, if the event has not been terminated prematurely by the champion.
#### Category Submission:
Engaging Engagements
## Link to Code
[BCP Calltree on Github](https://github.com/TeamWicket/Twilio-BCP-CallTree)
## How I built it
This is a fairly standard 3-tier application, composed of frontend, backend and database.
The frontend stack is React (with JavaScript and TypeScript - statically typed). The backend is a mixture of Java and Kotlin languages, in combination with Spring Boot, REST APIs are documented using OpenAPI v3 specifications (with SpringDoc and Swagger implementation), embedded JMS and Spring Data.
For demonstration purposes the application is using an in-memory database (H2), but the system is ready to use PostgreSql, just by selecting the appropriate property file at startup.
CircleCI has been used as a building tool, connected to the Github repository for automated build after every code push. | teamwicket |
311,170 | Harp/Jade Debug Snippet | I’m using Harp with Jade recently. At the beginning, it was hard for me to figure out the JSON data s... | 0 | 2020-04-17T04:55:56 | http://english.catchen.me/2018/08/harp-jade-debug-snippet.html | debug, harp, jade | ---
title: Harp/Jade Debug Snippet
published: true
date: 2018-08-31 14:59:00 UTC
tags: debug,harp,jade
canonical_url: http://english.catchen.me/2018/08/harp-jade-debug-snippet.html
---
I’m using [Harp](http://harpjs.com/) with [Jade](http://jade-lang.com/) recently. At the beginning, it was hard for me to figure out the JSON data structure used by Harp at build time. It was also hard to debug JavaScript functions written in Jade and executed at Harp compile time. In the end, I figured out that I could dump that JSON as a string to `console.log` in the browser. Everything is so much easier now.
<script src="https://gist.github.com/CatChen/e2ad53a2050b76e3d15fc5f33ea37ecc.js"></script>
Now I have that [`debug.jade`](https://github.com/CatChen/catchen.me/blob/master/public/_partials/debug.jade) file in [my project](https://github.com/CatChen/catchen.me). Whenever I want to examine some JSON data in Harp, I just call `!= partial('debug', { data: anything })` and pass the right `data`.
 | catchen |
311,182 | `docker run -p 127.0.0.1:8080:8080`? and if not, do I need SSL certificate? | It is this repo, actually; powered by fastify. About SSL, I can try docker run -p 8080:8080, but the... | 0 | 2020-04-17T04:49:55 | https://dev.to/patarapolw/docker-run-p-127-0-0-1-8080-8080-and-if-not-do-i-need-ssl-certificate-54ao | docker, help, devops, fastify | It is [this repo](https://github.com/patarapolw/rep2recall/tree/lessons), actually; powered by fastify.
About SSL, I can try `docker run -p 8080:8080`, but then I would need to access `http://0.0.0.0:8080` which gets permanent redirect (301) to `https`, but then, I don't have local HTTPS. (I rely on Heroku's for online.)
I think the culprit is Chrome's security itself.
Ok, I decided to ask on StackOverflow.
{% stackoverflow 61268560 %} | patarapolw |
311,644 | How to detect a change in HTML5 Local Storage in the same window? | I want to detect if a local storage variable value is changed or not, But I have noticed that if I ad... | 0 | 2020-04-17T08:57:46 | https://dev.to/sayuj/how-to-detect-a-change-in-html5-local-storage-in-the-same-window-33k5 | help, javascript, html | I want to detect if a **local storage variable** value is changed or not, But I have noticed that if I added an event listener on **'storage'** event, I can detect the change only if I open the same link in ** another window** and then change the **local storage value** from that window, then it will show me the change in the **first window**. But I don't want this behavior I just wanted to observe the change in the **same window**. Is there any way to do it? | sayuj |
311,661 | The Blockchain Way of Programming | A fun-to-read technical eBook to expand your programming career. | 0 | 2020-04-17T12:28:43 | https://dev.to/web3coach/the-blockchain-way-of-programming-7h5 | php, java, blockchain | ---
title: The Blockchain Way of Programming
published: true
description: A fun-to-read technical eBook to expand your programming career.
tags: php,java,database,blockchain
cover_image: https://d33wubrfki0l68.cloudfront.net/5a333c1f545f8bdfe6989d56dc4103ec81fccf5e/5f480/images/free_chapter.png
---
Hi dev.to,
**Lukas:** How are you?
**Dev.to:** Great. We grew in the number of users and added many new cool features into the platform. What about you?
**Lukas:** I have seen the new features! The DevToConnect is cool! I am launching a new project [https://web3.coach], and I would like the dev.to community to be a part of it. Therefore, **I will be sharing all my articles on this great platform.**
## What's the project about?
**I am writing an eBook teaching developers how blockchain works and how to program blockchain systems.** The eBook is not specific to any particular blockchain. Opposite. It contains various peer-to-peer, blockchain, and cryptographical design patterns useful for any software developer who wants to expand his programming career. No cryptocurrencies involved!
## What's inside?
This product will contain everything I know about blockchain
development. It will have theory, diagrams as well as the full
source code stored in a private Github repository.
## What will you build?
> You will build a blockchain from scratch in Go.
Don't worry; you don't need to have any prior Go experience to start reading the book. It's a very powerful and beginner-friendly language, and you will pick it up quickly.
## What will you learn?
By learning blockchain, you will explore:
- Peer-to-peer systems software architecture
- Event-based architecture
- How servers can communicate autonomously (BTC, ETH, XRP)
- Go programming language ❤
- Solidity programming language (Turing machines)
- Encoding and secure hashing
- Asymmetric cryptography and general internet security
## Why Go?
Because like blockchain, it's a fantastic technology for your overall
programming career:
- Trendy language
- Better paid than an average PHP/Java/Javascript position
- Optimized for multi-core CPU architecture. You can spawn thousands of light-weight threads (Go-routines) without problems
- Practical for highly parallel and concurrent software such as blockchain networks
- Easy to get started and be productive
- Nearly C++ level of performance out of the box
- Compiles to binary and is very portable
## What's blockchain good for?
I know many developers think blockchain is just hype and has no use-case, but that's a myth!
Blockchain technology has various incredible use-cases transforming major industries as we speak from banking to supply chains and self-sovereign identity.
I have been working on this for more than a year, but I am
finally going to wrap it up. I will be releasing it in a few weeks.
## How can you get started?
**You can download TODAY the first 6 chapters of the book for FREE:**
https://web3.coach

Ready to start a new programming journey?
PS: If you have any question or want to follow the book updates, add me on Twitter: https://twitter.com/Web3Coach | web3coach |
311,673 | Neural Network from Scratch Using PyTorch | In this article I show how to build a neural network from scratch. The example is simple and short to... | 0 | 2020-04-17T09:51:40 | https://dev.to/lankinen/neural-network-from-scratch-using-pytorch-457k | PyTorch, python, machinelearning, neuralnetworks | In this article I show how to build a neural network from scratch. The example is simple and short to make it easier to understand but I haven’t took any shortcuts to hide details.
Looking for Tensorflow version of this same tutorial? [Go here.](https://dev.to/lankinen/neural-network-from-scratch-using-tensorflow-1kc8)
```
import torch
import matplotlib.pyplot as plt
```
First we create some random data. x is just a 1×2 input tensor, and the model will predict one value, y.
```
x = torch.tensor([[1.,2.]])
x.shape
CONSOLE: torch.Size([1, 2])
y = 5.
```
The parameters are initialized using a normal distribution with mean 0 and variance 1.
```
def initalize_parameters(size, variance=1.0):
    return (torch.randn(size) * variance).requires_grad_()
first_layer_output_size = 3
weights_1 = initalize_parameters(
(x.shape[1],
first_layer_output_size))
weights_1, weights_1.shape
CONSOLE: (tensor([[ 0.3575, -1.6650, 1.1152],
[-0.2687, -0.6715, -1.2855]],
requires_grad=True),
torch.Size([2, 3]))
bias_1 = initalize_parameters(1)
bias_1, bias_1.shape
CONSOLE: (tensor([-2.5051], requires_grad=True),
torch.Size([1]))
weights_2 = initalize_parameters((first_layer_output_size,1))
weights_2, weights_2.shape
CONSOLE: (tensor([[-0.9567],
[-1.6121],
[ 0.6514]], requires_grad=True),
torch.Size([3, 1]))
bias_2 = initalize_parameters([1])
bias_2, bias_2.shape
CONSOLE: (tensor([0.2285], requires_grad=True),
torch.Size([1]))
```
The neural network contains two linear functions and one non-linear function between them.
```
def simple_neural_network(xb):
    # linear (1,2 @ 2,3 = 1,3)
    l1 = xb @ weights_1 + bias_1
    # non-linear
    l2 = l1.max(torch.tensor(0.0))
    # linear (1,3 @ 3,1 = 1,1)
    l3 = l2 @ weights_2 + bias_2
    return l3
```
Loss function measures how close the predictions are to the real values.
```
def loss_func(preds, yb):
    # Mean Squared Error (MSE)
    return ((preds-yb)**2).mean()
```
The learning rate scales the gradient, making sure parameters are not changed too much in each step.
```
lr = 10E-4
```
Helper function that updates the parameters and then clears the gradient.
```
def update_params(a):
    a.data -= a.grad * lr
    a.grad = None
```
Training contains three simple steps:
1. Make prediction
2. Calculate how good the prediction was compared to the real value (after computing the loss we call `backward()`, which calculates the gradients for us)
3. Update parameters by subtracting gradient times learning rate
The code continues taking steps until the loss is less than or equal to 0.1. Finally it plots the loss change.
```
losses = []
while(len(losses) == 0 or losses[-1] > 0.1):
    # 1. predict
    preds = simple_neural_network(x)
    # 2. loss
    loss = loss_func(preds, y)
    loss.backward()
    # 3. update parameters
    update_params(weights_1)
    update_params(bias_1)
    update_params(weights_2)
    update_params(bias_2)
    losses.append(loss)
plt.plot(list(range(len(losses))), losses)
plt.ylabel('loss (MSE)')
plt.xlabel('steps')
plt.show()
```

The number of steps it takes to get the loss under 0.1 varies a lot from run to run.
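As a quick sanity check, here is a small optional snippet (my own addition, not part of the original notebook) that prints the final prediction; since training stops once the squared error is at most 0.1, the output should be within roughly 0.3 of the target value 5.

```
# After training, the network's prediction for x should be close to y = 5.
preds = simple_neural_network(x)
print(preds)
```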
[Source Code on Github](https://github.com/RealLankinen/machine-learning-from-scratch/blob/master/neural-network/PyTorch%20Neural%20Network.ipynb) | lankinen |
311,846 | Stateful property-based testing with QuickCheck State Machine | A gentle introduction to quickcheck-state-machine, a Haskell library for testing stateful programs. | 0 | 2020-04-17T12:23:31 | https://meeshkan.com/blog/2020-04-17-quickcheck-state-machine | testing, tutorial, advanced, haskell | ---
title: Stateful property-based testing with QuickCheck State Machine
description: A gentle introduction to quickcheck-state-machine, a Haskell library for testing stateful programs.
author: Mike Solomon
canonical_url: https://meeshkan.com/blog/2020-04-17-quickcheck-state-machine
published: true
tags:
- testing
- tutorial
- advanced
- haskell
---
Property-based testing is a technique where you make assertions about a system's output with respect to its input. For example, if the input to a system (a function, a server, etc) is two numbers, property-based testing could assert that the output of the system should be the sum of these numbers. This type of testing frees you from having to come up with input data. Instead, you define relationships between the system's input and output. Then the test runner verifies that the relationships hold.
**Stateful property-based testing (SPBT)** is another technique for when the tested system retains a state. This is the case, for example, when the system is a database or a queue or a file. If I write an entry to a database and then list all entries in the database, I would expect the entry I wrote to be part of the list. That is a *stateful* property of the database.
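To make that concrete, the database example could be written as a property along these lines, using QuickCheck's `Test.QuickCheck.Monadic` helpers. This is only an illustrative sketch: the `Database` and `Entry` types and the `writeEntry`/`listEntries` helpers are made up for the example and are not part of any particular library.

```haskell
prop_writeThenList :: Database -> Entry -> Property
prop_writeThenList db entry = monadicIO $ do
  run (writeEntry db entry)
  entries <- run (listEntries db)
  assert (entry `elem` entries)
```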
There are libraries available in several different languages for SPBT. In this article, I will use [`quickcheck-state-machine`](https://github.com/advancedtelematic/quickcheck-state-machine). I like `quickcheck-state-machine` for many reasons:
1. It is written in Haskell, which means you get access to Haskell's type safety and fast performance.
1. Its opinionated structure splits SPBT into component parts, which helped my learning process.
1. It builds a state machine, which can be manipulated outside of the test. `quickcheck-state-machine`'s function `prettyCommands` uses the state machine, for example, to make really nice logs after the test is run.
1. Fine-grained control of [generation](https://hackage.haskell.org/package/QuickCheck-2.14/docs/Test-QuickCheck.html#g:8) and [shrinking](https://hackage.haskell.org/package/QuickCheck-2.14/docs/Test-QuickCheck.html#g:6) is possible. This allows you to do more targeted testing.
1. Its use of the [higher-kinded types (HKTs)](https://www.stephanboyer.com/post/115/higher-rank-and-higher-kinded-types) `Symbolic` and `Concrete`. It allows you to extract commands from a state machine using the `Symbolic` HKT and then run it using the `Concrete` HKT.
1. It can test the parallel execution of commands to find bugs arising from race conditions.
This article shows how to use `quickcheck-state-machine` to build a state machine and use it for SPBT. It uses version `0.7.0` of `quickcheck-state-machine`. The system under test will be a [FIFO queue](https://en.wikipedia.org/wiki/Queue_(abstract_data_type)) of integers that uses the file system to store entries.
As the `quickcheck-state-machine` library is under active development, the API is subject to change. I will do my best to revise this article as the API changes.
## Model, Command, Response
The fundamental building blocks of a state machine built with `quickcheck-state-machine` come in three types. One represents a model of the system, one represents the commands that can be issued to the system and one represents responses to the commands.
All three need to be polymorphic in accepting an HKT with the signature `(Type -> Type)`, which I'll call `r`. This polymorphism will never be used directly. But it is used by `quickcheck-state-machine` internally to inject two different HKTs: `Symbolic` and `Concrete`.
The `Symbolic` HKT is used by `quickcheck-state-machine` when generating a series of commands from a state machine. In contrast, the `Concrete` HKT is used when the state machine is executing. In models that can be created in pure contexts like the one below, this distinction is not useful. But when models use types that only exist in monadic contexts, the distinction is important.
```haskell
data Model (r :: Type -> Type) = Model [Int] deriving (Show, Eq, Generic)
deriving anyclass instance ToExpr (Model Concrete)
data Command (r :: Type -> Type)
= Push Int
| Pop
| AskLength
deriving stock (Eq, Show, Generic1)
deriving anyclass (Rank2.Functor, Rank2.Foldable, Rank2.Traversable, CommandNames)
data Response (r :: Type -> Type)
= Pushed
| Popped (Maybe Int)
| TellLength Int
deriving stock (Show, Generic1)
deriving anyclass (Rank2.Foldable)
```
Let's unpack what's going on here. The `Model` is an array of integers that we'll use to simulate a FIFO queue. There are three `Command`s - you can `Push` an integer onto the queue, `Pop` something off of the queue (either nothing or an integer), and `AskLength` to the queue. The `Response`s to these three commands are confirming that a value has been `Pushed`, telling us the integer that has been `Popped` and `TellLength`.
It isn't necessary to have a one-to-one correspondence between commands and responses. Haskell's pattern matching will allow us to define the function for any valid command/response pair.
## Defining the queue
Here is a FIFO queue for integers that reads and writes the queue to the file system. Each integer is separated by a colon:
```haskell
-- push to the head of the queue
pushToQueue :: String -> Int -> IO ()
pushToQueue fname x = do
fe <- doesFileExist fname
if (not fe) then do
withFile fname WriteMode $ \handle -> hPutStr handle $ show x -- write the number
else do
txt <- withFile fname ReadMode $ \handle -> hGetLine handle
let split = splitOn ":" txt
-- append the number to the beginning of the string
withFile fname WriteMode $ \handle -> hPutStr handle $ intercalate ":" (show x : split)
-- pop from the back of the queue
popFromQueue :: String -> IO (Maybe Int)
popFromQueue fname = do
fe <- doesFileExist fname
if (not fe) then return $ Nothing else do
txt <- withFile fname ReadMode $ \handle -> hGetLine handle
let split = splitOn ":" txt
if (length split == 1) then
-- remove the file if queue is empty
removeFile fname
else
-- remove the last element
withFile fname WriteMode $ \handle -> hPutStr handle $ intercalate ":" $ init split
return $ if null split then Nothing else Just (read (last split) :: Int)
-- get the length of the queue
lengthQueue :: String -> IO Int
lengthQueue fname = do
fe <- doesFileExist fname
if (not fe) then return 0 else do
txt <- withFile fname ReadMode $ \handle -> hGetLine handle
let split = splitOn ":" txt
return $ length split
```
## Initializing the model
The first step in creating my state machine is to initialize the model. The initializer function needs to be polymorphic as it will eventually accept the `Symbolic` and `Concrete` HKTs depending on if you are in generation or execution mode. In this case, as I am using an array as the underlying model, the logical initializer is an empty array.
```haskell
initModel :: Model r
initModel = Model []
```
## Transitions
The next thing I need to do for my state machine is to create transitions. The transitions are used to both generate commands and execute the tests, so the function needs to remain polymorphic.
The transition function takes a model, a command, and a response. It then returns the underlying model after the command has been applied. You can think of the model as transitioning from one state to the next.
In the implementation below, I make my own FIFO queue with `Pop` and `Push`. Then `AskLength` will return the length of the model:
```haskell
transition :: Model r -> Command r -> Response r -> Model r
transition (Model m) (Push x) Pushed = Model (x : m)
transition (Model m) Pop (Popped _) = Model (if null m then m else init m)
transition m AskLength (TellLength _) = m
```
## Preconditions
Preconditions are guards that apply to certain commands based on the current state. `Top` represents the precondition always being satisfied. `Bot` is the opposite, the precondition is never satisfied. The `Logic` type contains various boolean operators that can be applied to the model and command. The outcome of the operator determines if the precondition is satisfied or not.
Because the pre-condition is only used when generating lists of programs, it doesn't need to use concrete values. So it doesn't need to be polymorphic and exists only for the `Symbolic` HKT.
In this model, every command can be executed irrespective of the state. So I return `Top`:
```haskell
precondition :: Model Symbolic -> Command Symbolic -> Logic
precondition _ _ = Top
```
## Postconditions
Postconditions are where the correctness of the response is asserted. I like this API because it provides a one-stop-shop for all assertions. In other SPBT libraries, it is easy to litter assertions all over the place, which makes the code more difficult to read. In `quickcheck-state-machine`, the only checks for correct behavior are in the postconditions.
Postconditions only are checked when the state machine is actually running. This means they only exist in the `Concrete` HKT.
Note that the model passed to the postcondition function is the one **before** the command executes. It is often useful to apply the transition to the model when evaluating the response, as I do below:
```haskell
postcondition :: Model Concrete -> Command Concrete -> Response Concrete -> Logic
-- after a push, assert the pushed element is at the head of the new model
postcondition mod cmd@(Push x) resp = x .== head m'
where Model m' = transition mod cmd resp
-- after a pop, assert that the popped element is at the end of the old model
postcondition (Model m) Pop (Popped x) = x .== if null m then Nothing else Just $ last m
-- the length of the model and the length of the SUT should always be aligned
postcondition (Model m) AskLength (TellLength x) = length m .== x
```
## Invariants
Invariants take a model and assert that the model is always in a certain state, irrespective of the command and response. Invariants also run after every step in the state machine, which makes them expensive to run. Because of this, `quickcheck-state-machine` uses a `Maybe` to allow for no invariants to be returned.
As there is no invariant behavior I want to see in this model, I can return `Nothing`:
```haskell
invariant = Nothing
```
## Generator
The generator is one of the places that `quickcheck-state-machine` really shines. You create generators using `QuickCheck` combinators, so any existing `QuickCheck` custom combinators can be repurposed for `quickcheck-state-machine`.
Here, I use the `oneof` combinator, which generates commands with a uniform distribution. Because I am in the command generation phase, the `Symbolic` HKT is used:
```haskell
generator :: Model Symbolic -> Maybe (Gen (Command Symbolic))
generator _ = Just $ oneof [(pure Pop), (Push <$> arbitrary), (pure AskLength)]
```
## Shrinker
Like in `QuickCheck`, the shrinker takes a value and returns an array of new values to test. Most `QuickCheck` programs never use the shrinker directly, but here, I use it to specify what does and doesn't need to be shrunk. This allows the generation to move fast through values that have no logical relationship.
For example, below, I only apply the shrinker to numbers pushed onto the stack, as I want to test if the size of the numbers matters. In all other places, there is no shrinker used:
```haskell
shrinker :: Model Symbolic -> Command Symbolic -> [Command Symbolic]
shrinker _ (Push x) = [ Push x' | x' <- shrink x ]
shrinker _ _ = []
```
## Semantics
Semantics take a command using the `Concrete` HKT, which signifies that it is used only when the tests are actually executing, and returns the result of the execution in the monadic context (here `IO`).
```haskell
semantics :: String -> Command Concrete -> IO (Response Concrete)
semantics fname (Push x) = do
pushToQueue fname x
return Pushed
semantics fname Pop = do
val <- popFromQueue fname
return $ Popped val
semantics fname AskLength = do
val <- lengthQueue fname
return $ TellLength val
```
## Mock
The purpose of Mock is to generate dummy responses when the state machine is in command generation mode (thus the `Symbolic` HKT). It is the foil to [Semantics](#semantics), which creates `Concrete` responses from real commands during test execution mode. The content of the mock responses is thrown away, as all the library uses `mock` for is to create a `Response` used to effectuate a transition between states.
One nice thing about `mock` is that, if you want to, you can create a full-fledged mock of your model, and this can be useful if you'd like to use SPBT to generate `(Command, Response)` pairs that can be used to induce a spec of the model (i.e. to induce a JSON schema or an OpenAPI spec).
```haskell
mock :: Model Symbolic -> Command Symbolic -> GenSym (Response Symbolic)
mock _ (Push _) = pure Pushed
mock _ Pop = pure $ Popped Nothing
mock _ AskLength = pure $ TellLength 0
```
## Cleanup
Cleanup, like the semantics, exists within the monad that the system is executing. It's called after each series of commands is executed.
Since I don't need any cleanup for our queue, I can write an empty function in the `IO` monadic context:
```haskell
cleanup :: Model Concrete -> IO ()
cleanup _ = return ()
```
## Building the state machine
Now that I have all of the ingredients, I can build my state machine.
Because the system under test takes one argument, I also pass that argument to the state machine.
```haskell
sm :: String -> StateMachine Model Command IO Response
sm s = StateMachine initModel transition precondition postcondition
invariant generator shrinker (semantics s) mock cleanup
```
## Testing
Now for the fun part, let's run my tests!
First, I want each test to execute in its own FIFO queue, which means a different file for each queue. I chose the pcg unique random number generator to accomplish this. This guarantees that each number generated will be unique during the run of a program.
```haskell
newRand :: IO Int
newRand = do
g <- create
i <- uniform g
return i
```
Then, I define the property.
`forAllCommands` uses a state machine in its first argument to generate the commands. This is a state machine that will only run for the `Symbolic` HKT, not the `Concrete` HKT, so it won't ever touch `IO`. The next argument is a lower bound for the number of commands in a sequence. I use `Nothing` for the lower bound, meaning no lower bound.
The last argument to `forAllCommands` is a function that accepts a sequence of commands and returns a monadic property. Monadic properties are defined in `QuickCheck.Monadic` and can be used whenever a property exists in a monadic context. The convenience method `monadicIO` can be used to define properties in the `IO` context, and `run` lifts the result of the monadic execution to the `PropertyM` context. `PropertyM` is `QuickCheck`'s monad transformer, and in our case we are transforming the `IO` monad. To learn more about monadic transformations, [`mtl`](https://hackage.haskell.org/package/mtl-2.2.2) is a great place to start.
So, the sequence below is:
1. Create a new random number and lift it to the `PropertyM` monadic context.
1. Create a file name using this number.
1. Create a state machine using this filename.
1. Call `runCommands` from `quickcheck-state-machine`, which already executes in the `PropertyM` context, so there is no need to prefix it with `run`.
1. Use `quickcheck-state-machine`'s pretty printer `prettyCommands` on the histogram generated by run result.
```haskell
state_machine_properties :: Property
state_machine_properties = forAllCommands (sm "") Nothing $ \cmds -> monadicIO $ do
id <- run newRand
let fname = "queues/queue" <> (show id) <> ".txt"
let sm' = sm fname
(hist, _model, res) <- runCommands sm' cmds
prettyCommands sm' hist (checkCommandNames cmds (res === Ok))
```
Lastly, I execute the test and create the `queues` directory if it doesn't exist yet:
```haskell
main :: IO ()
main = do
createDirectoryIfMissing False "queues"
quickCheck state_machine_properties
```
When I run `stack test` from the command line, I see the following:
```bash
quickcheck-state-machine-tutorial> test (suite: quickcheck-state-machine-tutorial-test)
+++ OK, passed 100 tests.
Commands (264 in total):
33.7% Pop
33.7% Push
32.6% AskLength
quickcheck-state-machine-tutorial> Test suite quickcheck-state-machine-tutorial-test passed
Completed 2 action(s).
```
And voila! Our tests pass.
## GitHub repo
All of this is on the github repo ['meeshkan/quickcheck-state-machine-example'](https://github.com/meeshkan/quickcheck-state-machine-example).
## Conclusion
Stateful property based testing is a great way to find bugs in stateful systems. SPBT exists in several other frameworks as well:
- [hypothesis](https://hypothesis.works/) in Python and Java
- [fast-check](https://github.com/dubzzz/fast-check) in JavaScript, TypeScript and PureScript
- [proper](https://github.com/proper-testing/proper) in Erlang and Elixir
- [FsCheck](https://fscheck.github.io/FsCheck/) in F# and C#
I hope you find this technique useful! If you'd like your repos to benefit from _automatic SPBT_, I will take this opportunity to **shamelessly plug the Meeshkan alpha on [meeshkan.com](https://meeshkan.com)**.
## Follow up exercises
Here are three follow up exercises you can do to expand your understanding of `quickcheck-state-machine` and SPBT!
### Novice
Let's create a bug in the queue!
In the implementation of the FIFO queue, instead of adding the number to the head via `show x : split`, add it to the tail using `split ++ [show x]`. See if it's caught.
### Intermediate
Create a new bug where the queue stops accepting new values once there are 50 values.
You can do this in the `pushToQueue` function. If you run the tests as-is, you won't find it. There are three ways you can find the bug:
- Increase the number of times [QuickCheck](https://hackage.haskell.org/package/QuickCheck-2.14) runs
- Change the generator so that the frequency of push is greater than the frequency of pop. For inspiration, check out [QuickCheck generator combinators](https://hackage.haskell.org/package/QuickCheck-2.13.2/docs/Test-QuickCheck.html#g:9) and see if there is one that allows certain outcomes to happen with greater frequency than others.
- Change the lower bound in `forAllCommands` from `Nothing` to something a bit higher.
This may take a long time to run depending on your parameters because of the shrinker. The shrinker will try to find a specific range of values to produce the bug, but because the bug is not linked to the specific value, it won't be able to meaningfully shrink.
For example, here is an excerpt of console output that would happen if you are able to provoke the bug. Here, I see that the postcondition for `AskLength` failed at the barrier of 50 results in the queue.
```bash
Model [+0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0]
== AskLength ==> TellLength 50 [ 0 ]
Model [0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0
,0]
PostconditionFailed "PredicateC (51 :/= 50)" /= Ok
```
### Advanced
In the implementation of the queue, we use the function [`withFile`](https://hackage.haskell.org/package/base-4.12.0.0/docs/System-IO.html#g:5) for all file-based IO. Haskell also has the functions [`writeFile`](https://hackage.haskell.org/package/base-4.12.0.0/docs/System-IO.html#g:7) and [`readFile`](https://hackage.haskell.org/package/base-4.12.0.0/docs/System-IO.html#g:7). Try using these instead and you'll hit a nasty bug!
Can you anticipate what the bug will be? Once you run into the bug, was your guess right? How does this bug show the difference between `withFile` vs `writeFile` and `readFile`?
| mikesol |
311,865 | INTRO TO SIMPLE LINEAR REGRESSION!!! | A sneak peek into what Linear Regression is and how it works. Linear regression is a simple machine... | 0 | 2020-04-17T13:06:06 | https://dev.to/adityaberi8/intro-to-simple-linear-regression-1l96 | machinelearning, datascience, python, jupyter |
A sneak peek into what Linear Regression is and how it works.
Linear regression is a simple machine learning method that you can use to predict an observation's value based on the relationship between the target variable and the independent, linearly related numeric predictive features.
For example: Imagine you have a data-set that describes key characteristics of a set of homes like land acreage, number of storeys, building area, and sales. Based on these features and the relationship with the sales price of these homes, you could build a multivariate linear model that predicts the price a house can be sold for based on its features.
Linear regression is a statistical machine learning method you can use to quantify and make predictions based on relationships between numerical variables which assumes that the data is free from missing values and outliers.
It also assumes that there’s a linear relationship between predictors and predictants & that all predictors are independent of each other.
Lastly, it assumes that residuals are normally distributed.
Ready for a mini-project?
We have all the libraries we need in our Jupyter Notebook. Now let’s set up our plotting parameters. We want matplotlib to plot inline within our Jupyter Notebook, so we will say percentage sign matplotlib inline, and then let’s just set the dimensions of our data visualizations to be 10 inches wide and eight inches high.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sklearn
```
```
from pylab import rcParams
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import scale
%matplotlib inline
rcParams['figure.figsize']=10,8
```
So we’re just going to create some synthetic data in order to do a linear regression. Let’s first create a variable called ‘rooms’. We’re going to set rooms equal to two times a set of random numbers (so we’re going to need to call the random number generator, which in the code below is np.random.rand) and we’ll pass in however many values we want; in this case it will be 100 rows and 1 column. Then we say plus three. This is the equation we’re using to generate random values to populate the rooms field, or to create a synthetic variable that represents the number of rooms in a home.
```
rooms=2*np.random.rand(100,1)+3
rooms[1:10]
array([[4.04467357],
[3.77241135],
[3.14321164],
[4.48142986],
[3.18493126],
[3.8132922 ],
[4.72655406],
[3.08916389],
[3.89772928]])
```
Now, let’s create a synthetic variable called ‘price’. We’ll say that price is equal to 265 plus six times the number of rooms, plus the absolute value (we call the abs function) of some random noise; again we’re going to call a random number generator, this time np.random.randn with 100 rows and 1 column. Then let’s just take a look at the first 10 records, so we’ll say price one through 10 and run this.
```
price=265+6*rooms +abs(np.random.randn(100,1))
price[1:10]
array([[290.20050075],
[287.83631918],
[284.26968068],
[292.46209605],
[285.20161696],
[288.07388113],
[293.77699261],
[284.59783984],
[289.71316513]])
```
Now, let’s create a scatter plot of our synthetic variables just so we get an idea of what they look like and the relationship between them. To do that we’re going to call the plot function, plt.plot, and we’ll pass in rooms and price. price is going to be on our y-axis and rooms is going to be on our x-axis. Let’s also pass the string 'r.', which specifies that a red point plot should be generated instead of the default line plot.
```
plt.plot(rooms,price,'r.')
plt.xlabel("no. of rooms,2020 Average")
plt.ylabel("2020 Avg home price")
plt.show()
```
To see the plot see the cover image:) :)
What this plot says is, as the number of rooms increase, the price of the house increases.
Makes sense, right?
So now, let’s just do a really simple linear regression. So for our model here, we’re going to use rooms as the predictor, so we’re going to say, x is equal to rooms and we want to predict for the price, so y is going to be equal to price. Let’s instantiate a linear regression object, we’ll call it LinReg and then we’ll say LinReg is equal to LinearRegression and then we’ll fit the model to the data. So to do that we will say LinReg.fit and we’ll pass in our variables x and y.
```
X=rooms
y=price
LinReg= LinearRegression()
LinReg.fit(X,y)
print(LinReg.intercept_,LinReg.coef_)
[265.39215904] [[6.10708427]]
```
Holding all other features fixed, a 1 unit increase in Rooms is associated with an increase of 6.10708427 in price.
The intercept (often labeled the constant) is the point where the function crosses the y-axis. In some analyses, the regression model only becomes significant when we remove the intercept, and the regression line reduces to Y = bX + error. A regression without a constant means that the regression line goes through the origin, where both the dependent variable and the independent variable are equal to zero.
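To see the fitted model in action, we can ask it for a prediction. This snippet is my own illustrative addition (it isn't part of the original walkthrough), and the exact output will vary slightly from run to run because the data is random:

```
# Predict the price of a hypothetical 4-room home with the fitted model.
# Roughly intercept + coefficient * 4, i.e. about 265.4 + 6.1 * 4 ≈ 289.8 here.
print(LinReg.predict([[4]]))
```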
```
print(LinReg.score(X,y))
0.9679030603885265
```
Our linear regression model is performing really well! Our R-squared value is close to 1, and that’s a good thing!
This was just a small sneak peek into what Linear Regression is. I hope you got an idea as to how Linear Regression works through the mini-project!
Feel free to respond to this blog below for any doubts and clarifications! | adityaberi8 |
319,325 | CONFLICT MANAGEMENT SYSTEM - PRACTICE IN SOCIAL LIFE AND TECHNOLOGY CULTURE | Talk about complex systems (whatever they are). It takes a strong logical basis to find "bugs" or errors, it may be small and trivial but it will be fatal if the scale is large. Especially the big "bugs"! | 0 | 2020-04-25T15:46:19 | https://dev.to/darkterminal/conflict-management-system-practice-in-social-life-and-technology-culture-4glb | discuss, techtalk | ---
title: CONFLICT MANAGEMENT SYSTEM - PRACTICE IN SOCIAL LIFE AND TECHNOLOGY CULTURE
published: true
description: Talk about complex systems (whatever they are). It takes a strong logical basis to find "bugs" or errors, it may be small and trivial but it will be fatal if the scale is large. Especially the big "bugs"!
tags: discuss, techtalk
---
Talk about complex systems (whatever they are). It takes a strong logical basis to find "bugs" or errors, it may be small and trivial but it will be fatal if the scale is large. Especially the big "bugs"!
When logic is used to "debug" a problem in a system, it produces new reasoning or understanding; this allows us, as humans, to calculate and formulate it with mathematical or physics theory to find solutions.
Unconsciously, the actual working system of the human brain has been able to calculate the formulations that will be generated by logic and produce new values, namely conclusions that we practically call "action".
However, if we determine conclusions or actions that are not in accordance with what we want, then the diction can be seen from the "processor" of a computer that has special specifications depending on usage and needs.
If you can take advantage of this conflict management system that already exists in every human being, then you can face any problem. Even if you are unable to survive in the system, you can leave it, create a new system, and build a "backdoor" to occasionally look back and learn from the "bugs" of the old system.
What do you think? and what is your opinion? | darkterminal |
319,452 | How To Install PostgreSQL On Windows 10 | And Use It From Your Terminal Installation Instructions Go to PostgreSQL Databa... | 0 | 2020-04-25T19:51:52 | https://dev.to/jimmymcbride/how-to-install-postgresql-on-windows-10-3d8d | postgres, database | ## And Use It From Your Terminal
### Installation Instructions
Go to [PostgreSQL Database Download](https://www.enterprisedb.com/downloads/postgres-postgresql-downloads) and download the version of PostgreSQL you want under the "Windows x86-64" column. Once the installer is done downloading, run that bad boy.

Once you run the installer you downloaded, you should see a screen that looks like this:

Click `Next >`. Then choose your installation directory:

Select components. I would leave all these checked and click `Next >`.

Select the data directory. I left its default value.

Set up a password for your PostgreSQL user. Do something that's really easy to remember. If you're not worried about somebody hacking and finding sensitive data in your local PostgreSQL data, you would probably be fine setting it to `password`.

Set up PostgreSQL port. Default is 5432, I recommend leaving it like that.

Choose your locale. I speak English and I'm from the United States, so I choose `English, United States`.

Once we get to the pre-installation, click `Next >`.

Then click `Next >` again and it install will start!

Now, in your Windows search bar, type: `Edit the system environment variables`. Click on that and you should see this:

Then click on the `Environment Variables` button.

Click on PATH under user variables and then `Edit...` and then add the path to your PostgreSQL's bin folder to the list of locations in your PATH variable.

Click `OK` then `OK` and `OK` again. Once you've closed out of everything, open up your terminal and type `psql -U postgres`; it will ask you for PostgreSQL's password. Whatever you set it to during the installer is what you want to type in, and tada! :tada: You can now use PostgreSQL in the terminal! | jimmymcbride |
320,920 | Clean Code for "Adult" | 🌙 Tonight, I decided to do nothing but reading. I just randomly went to some blogs and articles until... | 0 | 2020-04-27T18:59:10 | https://dev.to/ghackdev/clean-code-for-adult-2eh | beginners, discuss, todayilearned | 🌙 Tonight, I decided to do nothing but reading. I just randomly went to some blogs and articles until I found one article written by Dan Abramov.
That article spoke immense wisdom to me. It was talking about Clean Code, a topic we can probably find everywhere, because everyone has been talking about Clean Code for a long time.
But I found this one different. In that article, [Dan Abramov](https://twitter.com/dan_abramov) was looking at Clean Code from the "Adult" perspective. Based on one of his true experiences, he said it took him years to realize that he had been looking at Clean Code from the "Child" perspective.
## The Story Began...
Once upon a time, Dan and his colleague were working on the same project. For more detail you can read the full article [here](https://overreacted.io/goodbye-clean-code/), because I just want to summarize the whole story.
Dan's colleague and the team had finished coding some functionality. Their code worked and everything was fine. But when Dan saw the code, he found that it was repetitive; or maybe we can just say that the code wasn't clean.
Naturally, Dan had an idea to make the code "cleaner" from his perspective at that time. Eventually, he did it. The code was half the total size and the duplication was gone completely. Which is good. It was more than enough to send Dan into a very sound sleep, because it was already late at night. 😴
## The Next Morning...
The plot twisted. Dan's boss and the team politely asked Dan to revert the changes. Dan was aghast. Yeah, the old code was a mess and Dan's code was clean! 😲
It took years for Dan to see they were right.
## 👶 Childish Mindset
Do you remember your reaction when you heard about Clean Code for the first time? Obsessed. Yeah, maybe that's a phase that many of us go through.
Yeah, just like a kid whose Dad comes home and brings the coolest toy among his friends. The kid will always proudly show off his cool new toy to his friends.
Once we acquire Clean Code as our new superpower, we'll always try to apply it everywhere, whenever we feel unconfident about our code. Just like a superhero who always wants to bring justice whenever he encounters bad things.
> “I’m the kind of person who writes clean code” 🦸♂️
## The Turning Point
Eventually, Dan realized that "refactoring" the code at that time was a disaster. There are two points :
- First, Dan didn't talk with the people who wrote the code he refactored. It destroys the trust between each other in the engineering team.
- Second, Dan's code traded the ability to change requirements for reduced duplication, and it was not a good trade.
## So, should we write dirty code instead?
No. From this story, we could learn that we should think deeply before we could decide to do something. It's not only about "clean" or "dirty" code. Because clean code is not the goal.
In the real world, sometimes we need to shrink our ego, pride, or idealism down. Because we should consider seeing the problems from different perspectives.
## 🌱 Coding is a Long Journey
I believe that coding is a long, long journey. When I look back at the first lines of code in a repository I posted about 3 years ago, I see a lot of transformations in many aspects of my coding.
The more I learn about concepts, paradigms, architecture, and looking at problems from different perspectives, the more I realize that we should think deeply before we decide on or use something.
## Conclusions
Remember the story about the kid I told you earlier in this article? Imagine that you are the kid in that story. And right now, you are not a kid anymore, you are an adult now, and you just realized that the toy given by your Dad is not just a toy.
Because you're looking at the gift from your Dad from a child's perspective you saw it like it's a toy. But when you see it from an adult's perspective finally you understand that it's a computer and you realize that you could build something amazing with it!. 👨💻
But, besides that, you should also realize that from another perspective, you could also do bad things that could harm the others with that computer.
So, in my opinion, it's about perspective. Don't let your understanding about Clean Code or other fancy things shackle you to see problems or solutions from different perspectives.
> Let clean code guide you. Then let it go. -- Dan Abramov | ghackdev |
321,208 | Smooth Operator: Concurrent Mode in React | Warning: At the time of writing this, Concurrent Mode is not yet stable and should not be used in p... | 0 | 2020-04-30T05:00:58 | https://medium.com/javascript-in-plain-english/smooth-operator-concurrent-mode-in-react-bf303de4c161 | softwareengineering, programming, react, javascript | ---
title: Smooth Operator: Concurrent Mode in React
published: true
date: 2020-04-27 15:36:32 UTC
tags: software-engineering,programming,react,javascript
canonical_url: https://medium.com/javascript-in-plain-english/smooth-operator-concurrent-mode-in-react-bf303de4c161
---
[](https://medium.com/javascript-in-plain-english/smooth-operator-concurrent-mode-in-react-bf303de4c161?source=rss-5676ca6163ee------2)
Warning: At the time of writing this, Concurrent Mode is not yet stable and should not be used in production.
[Continue reading on JavaScript in Plain English »](https://medium.com/javascript-in-plain-english/smooth-operator-concurrent-mode-in-react-bf303de4c161?source=rss-5676ca6163ee------2) | bbennett7 |
321,356 | React Navigation v5 Example (React Native) | navigation in React native Apps with React Navigation 5 | 0 | 2020-04-28T10:16:58 | https://dev.to/paulobunga/navigating-react-native-app-with-react-navigation-5-4hn8 | reactnative, navigation, android, ios | ---
title: "React Navigation v5 Example (React Native)"
published: true
description: "navigation in React native Apps with React Navigation 5"
tags: reactnative, navigation, android, ios
cover_image: https://thepracticaldev.s3.amazonaws.com/i/u1x7n8mbvor1nq6tcbk0.jpg
---
## Step 1: Set up a blank react-native project
Open terminal in the working directory and run
```BASH
npx react-native init ExampleApp
```
cd into project folder i.e ExampleApp
```BASH
cd ExampleApp
```
## Step 2: Add the necessary dependencies.
```BASH
yarn add @react-navigation/native
```
```BASH
yarn add react-native-reanimated react-native-gesture-handler react-native-screens react-native-safe-area-context @react-native-community/masked-view
```
The libraries we've installed so far are the building blocks and shared foundations for navigators, and each navigator in React Navigation lives in its own library.
So far what we have installed is the foundation for navigation using React Navigation. However, in order to start navigating to different screens/scenes/pages in our app, we will need to install other navigators depending on how we want to navigate.
### The 3 most common ones are
1. Stack Navigator
2. Drawer Navigator (As the name suggests, it provides a navigation drawer navigator)
3. BottomTabs (Provides bottom tabbed navigation).
Install only the ones that you need. Refer to the documentation to understand each in detail.
```BASH
yarn add @react-navigation/stack
yarn add @react-navigation/bottom-tabs
yarn add @react-navigation/drawer
```
All these navigators contain properties (Navigator and Screen) that are essential in setting up the Navigator.
## Step 3: Building our navigation
Let's get started by bootstrapping and creating all the files we are going to work with.
```BASH
mkdir src && cd src && touch package.json
mkdir navigation screens components
cd navigation && touch AppNavigator.js MainTabs.js
cd .. && cd screens && touch Home.js Profile.js Contacts.js
cd .. && cd components && touch ContactListItem.js Avatar.js Icons.js
```
Open our App.js and add the following code to set up the navigation container:
```JSX
// In App.js in a new project
import * as React from 'react';
import { NavigationContainer } from '@react-navigation/native';
import AppNavigator from 'src/navigation/AppNavigator.js';
const App = () => {
  return (
    <NavigationContainer>
      <AppNavigator />
    </NavigationContainer>
  );
}
export default App;
```
Now open the AppNavigator.js file and add code
```JSX
import * as React from 'react';
import { createStackNavigator } from '@react-navigation/stack';
import MainTabs from './MainTabs';
const Stack = createStackNavigator();
const AppNavigator = () => {
  return (
    <Stack.Navigator>
      <Stack.Screen name='MainTabs' component={MainTabs} />
    </Stack.Navigator>
  );
}
export default AppNavigator;
```
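The screen files we created earlier (`Home.js`, `Profile.js`, `Contacts.js`) just need to export a React component each. The article doesn't show their contents, so here is a minimal placeholder you could use for `Home.js` (the other two follow the same pattern); the exact content is up to you.

```JSX
// src/screens/Home.js - minimal placeholder screen (illustrative)
import * as React from 'react';
import { View, Text } from 'react-native';

const Home = () => (
  <View style={{ flex: 1, alignItems: 'center', justifyContent: 'center' }}>
    <Text>Home Screen</Text>
  </View>
);

export default Home;
```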
Next, we open the MainTabs file and edit the code as below
```JSX
//MainTabs.js
import * as React from 'react';
import { createBottomTabNavigator } from '@react-navigation/bottom-tabs';
import Home from 'src/screens/Home';
import Profile from 'src/screens/Profile';
import Contacts from 'src/screens/Contacts';
const Tab = createBottomTabNavigator();
const MainTabs = () => {
  return (
    <Tab.Navigator>
      <Tab.Screen name='Home' component={Home} />
      <Tab.Screen name='Profile' component={Profile} />
      <Tab.Screen name='Contacts' component={Contacts} />
    </Tab.Navigator>
  );
}
export default MainTabs;
``` | paulobunga |
321,408 | Getting started with Spring Security - Adding JWT | This is the second part of the spring security post I started. Json Web Token: standard that defines... | 0 | 2020-05-10T10:47:35 | https://dev.to/jhonifaber/getting-started-with-spring-security-adding-jwt-485c | java, security, jwt, spring | This is the second part of the spring security post I <a href="https://dev.to/jhonifaber/getting-started-with-spring-security-authentication-and-authorization-32de" target="_blank">started</a>.
**Json Web Token:** standard that defines a self-contained way for transmitting information as a JSON object. Consist of three parts separated by dots.
+ **Header:** signing algorithm(SHA256,HS512...) + type of the token
+ **Payload:** contains the claims.
+ **Signature:** header(base64 encoded) + payload(base64 encoded) + a secret and all encoded with the algorithm specified in the header.
**Claim:** piece of information in the body of the token.
First of all, add the <a href="https://mvnrepository.com/artifact/io.jsonwebtoken/jjwt" target="_blank">dependency</a> that allows us to create jwt's and validate them.
```xml
<dependency>
<groupId>io.jsonwebtoken</groupId>
<artifactId>jjwt</artifactId>
<version>0.9.1</version>
</dependency>
```
<span></span><br>
## Authenticate user and return token
We are going to create a class which has all the JWT-related features. In the following example, when creating the token, I'm just adding the subject, the authorities of that user (we just have one, ROLE_SENSEI; check out the *MyUserDetails* class), and the expiration time. You can add custom claims with *claim(key, value)* or pass a map of claims to *setClaims()*. I'm signing the token with the string "key" for this example. In a real project, it could be retrieved from the application configuration file.
To read the JWT, you need to pass the key to validate the signature of the token and call *parseClaimsJws* with the token; then you will be able to get the body.
```java
@Service
public class JwtService {
private static final int EXPIRATION_TIME = 1000 * 60 * 60;
private static final String AUTHORITIES = "authorities";
private final String SECRET_KEY;
public JwtService() {
SECRET_KEY = Base64.getEncoder().encodeToString("key".getBytes());
}
public String createToken(UserDetails userDetails) {
String username = userDetails.getUsername();
Collection<? extends GrantedAuthority> authorities = userDetails.getAuthorities();
return Jwts.builder()
.setSubject(username)
.claim(AUTHORITIES, authorities)
.setExpiration(new Date(System.currentTimeMillis() + EXPIRATION_TIME))
.signWith(SignatureAlgorithm.HS512, SECRET_KEY)
.compact();
}
public Boolean hasTokenExpired(String token) {
return Jwts.parser()
.setSigningKey(SECRET_KEY)
.parseClaimsJws(token)
.getBody()
.getExpiration()
.before(new Date());
}
public Boolean validateToken(String token, UserDetails userDetails) {
String username = extractUsername(token);
return (userDetails.getUsername().equals(username) && !hasTokenExpired(token));
}
public String extractUsername(String token) {
return Jwts.parser()
.setSigningKey(SECRET_KEY)
.parseClaimsJws(token)
.getBody()
.getSubject();
}
public Collection<? extends GrantedAuthority> getAuthorities(String token) {
Claims claims = Jwts.parser().setSigningKey(SECRET_KEY).parseClaimsJws(token).getBody();
return (Collection<? extends GrantedAuthority>) claims.get(AUTHORITIES);
}
}
```
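As mentioned above, in a real project the signing key shouldn't be hard-coded. A minimal sketch of wiring it in from the configuration file instead (this is my own illustrative addition, assuming a `jwt.secret` property in `application.properties`) could look like this:

```java
@Service
public class JwtService {

    private final String SECRET_KEY;

    // application.properties: jwt.secret=some-long-random-string
    public JwtService(@Value("${jwt.secret}") String secret) {
        SECRET_KEY = Base64.getEncoder().encodeToString(secret.getBytes());
    }

    // ... the rest of the class stays the same
}
```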
When the user tries to log in, we expect a username and a password (AuthenticationRequest), and if the authentication goes well, we will respond with the token (AuthenticationResponse).
```java
@Data
@NoArgsConstructor
@AllArgsConstructor
public class AuthenticationRequest {
private String username;
private String password;
}
```
```java
@Data
@NoArgsConstructor
@AllArgsConstructor
public class AuthenticationResponse {
private String token;
}
```
In our controller, we autowired AuthenticationManager so that we can authenticate the passed object.
We need to build an Authentication object by using *UsernamePasswordAuthenticationToken*, which receives two params: the principal (username) and the credentials (password). This object is passed to the AuthenticationProvider, which is responsible for doing the validation.
If the validation is successful, the token is created; otherwise, an exception is thrown.
```java
@RestController
@RequiredArgsConstructor
public class UserController {

    private final AuthenticationManager authenticationManager;
    private final MyUserDetailService myUserDetailService;
    private final JwtService jwtService;

    @PostMapping("/login")
    public AuthenticationResponse createToken(@RequestBody AuthenticationRequest authenticationRequest) throws Exception {
        try {
            UsernamePasswordAuthenticationToken authentication = new UsernamePasswordAuthenticationToken(authenticationRequest.getUsername(), authenticationRequest.getPassword());
            authenticationManager.authenticate(authentication);
        } catch (BadCredentialsException e) {
            throw new Exception("Invalid username or password", e);
        }
        UserDetails userDetails = myUserDetailService.loadUserByUsername(authenticationRequest.getUsername());
        String token = jwtService.createToken(userDetails);
        return new AuthenticationResponse(token);
    }
}
```
My security config class is as follows. You need to override authenticationManagerBean in order to autowire it. Here I'm allowing everyone to reach */login*, but for any other resource you must be authenticated.
```java
@EnableWebSecurity
public class WebSecurity extends WebSecurityConfigurerAdapter {

    @Autowired
    private MyUserDetailService myUserDetailService;

    @Override
    protected void configure(AuthenticationManagerBuilder auth) throws Exception {
        auth.userDetailsService(myUserDetailService);
    }

    @Override
    public void configure(HttpSecurity http) throws Exception {
        http.csrf().disable()
            .authorizeRequests().antMatchers("/login").permitAll()
            .anyRequest().authenticated();
    }

    @Bean
    public PasswordEncoder getPasswordEncoder() {
        return NoOpPasswordEncoder.getInstance();
    }

    @Bean
    @Override
    public AuthenticationManager authenticationManagerBean() throws Exception {
        return super.authenticationManagerBean();
    }
}
```
Remember that I'm using the same UserDetails class from the previous <a href="https://dev.to/jhonifaber/getting-started-with-spring-security-authentication-and-authorization-32de" target="_blank">post</a>, so the password is 'pass' and the username can be anything. If we try to log in, we can see the token returned.
<img src="https://dev-to-uploads.s3.amazonaws.com/i/fu8vl2sd9yoy6i9k9gt8.png" alt="postman log in"/>
<span></span><br>
## Intercept request
**SecurityContext** is used to store the details of the currently authenticated user.
We are now going to extract the token from the Authorization header and validate it. To intercept a request, we use filters.
First, we create JwtAuthorizationFilter, which will be executed once per request and is responsible for user authorization.
We get the token from the header, extract the username, and check that the token is valid. If everything is fine, we build the Authentication object with those user details, set the user in the SecurityContext, and allow the request to move on with *filterChain.doFilter*.
I have defined my constants as class fields, but it would be better to create a 'JwtConstants' class and keep them all there.
```java
@Component
public class JwtAuthorizationFilter extends OncePerRequestFilter {

    private static final String HEADER_TOKEN_PREFIX = "Bearer ";
    private static final String HEADER_AUTHORIZATION = "Authorization";

    @Autowired
    private MyUserDetailService myUserDetailService;

    @Autowired
    private JwtService jwtService;

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response, FilterChain filterChain) throws ServletException, IOException {
        String authorizationHeader = request.getHeader(HEADER_AUTHORIZATION);
        if (authorizationHeader != null && authorizationHeader.startsWith(HEADER_TOKEN_PREFIX)) {
            String token = authorizationHeader.replace(HEADER_TOKEN_PREFIX, "");
            String username = jwtService.extractUsername(token);
            UserDetails userDetails = myUserDetailService.loadUserByUsername(username);
            if (jwtService.validateToken(token, userDetails)) {
                UsernamePasswordAuthenticationToken authentication = new UsernamePasswordAuthenticationToken(userDetails, null, userDetails.getAuthorities());
                SecurityContextHolder.getContext().setAuthentication(authentication);
            }
        }
        filterChain.doFilter(request, response);
    }
}
```
In our WebSecurity class, we will set session management to be stateless, because we don't want Spring to create any session. Secondly, we'll register the filter we just created, so that JwtAuthorizationFilter runs before the UsernamePasswordAuthenticationFilter.
```java
@Autowired
private JwtAuthorizationFilter jwtAuthorizationFilter;

//...

@Override
public void configure(HttpSecurity http) throws Exception {
    http
        .csrf().disable()
        .sessionManagement().sessionCreationPolicy(SessionCreationPolicy.STATELESS)
        .and()
        .authorizeRequests().antMatchers("/login").permitAll()
        .anyRequest().authenticated()
        .and().addFilterBefore(jwtAuthorizationFilter, UsernamePasswordAuthenticationFilter.class);
}
//...
```
Let's try it in Postman. I have created this GET endpoint to test what we've done.
```java
@GetMapping("/test")
public String getTest(){
return "test";
}
```

To check the statelessness, try making the request again without the *Authorization* header; you will see that you get a *403 Forbidden*, because each request is self-contained. | jhonifaber |
321,410 | HTML to DOM (following...) | Following the previous snippet toDom, here is another example String.prototype.toDOM = function ()... | 0 | 2020-04-28T12:01:44 | https://dev.to/artydev/html-to-dom-following-2e2k | html, dom | Following the previous snippet **toDom**, here is another example
```js
String.prototype.toDOM = function () {
  var d = document,
    i,
    a = d.createElement("div"),
    b = d.createDocumentFragment();
  a.innerHTML = this;
  while ((i = a.firstChild)) b.appendChild(i);
  return b;
};

const users = [
  {
    prenom: "Hal",
    age: 45,
  },
  {
    prenom: "Bert",
    age: 54,
  }
];

const mail = ({ prenom, age }) =>
  `<li onclick="alert('${prenom}')">
    Bonjour ${prenom} vous avez ${age} ans
  </li>`;

const style = `
  ul {
    list-style: none;
    padding: 0;
    margin: 0;
  }
  li:hover {
    cursor: pointer;
    background: blue;
    color: white
  }`

document.body.append(
  `
  <style>
    ${style}
  </style>
  <ul>
    ${users.map(mail).join('')}
  </ul>
  `.toDOM()
);
```
You can test it here : [toDom](https://flems.io/#0=N4IgtglgJlA2CmIBcA2AHAOgMwBoQDMIEBnZAbVADsBDMRJEDACwBcxYQ8BjAe0pfj9kIEAF8cVWvUYArUtz4ChDAMosAThEoBzDAAd1PFkYCee+BmMARAPIBZAAQBeB-gCulLiwh8HACgBKB2AAHUoHBwA3anUHKGc4ni43On4cMIiIiHTwzOoEqAwudXhqAQBRBFSWPxCQKAhIuoCczIcAIwKikrL4KySUwRYAMXVqbWrAgG4Mh2oMLUp4dQAJABU7ABkEliYIYhncgHc9hH8-CAT5wnViFgBhU6gAoPaMaj1zSihHoigLgKHCIlFhudThdqHUSHMK8Sh3BxuYjLYgJMizUK5CIGQQ8MBIBx1FbUWB1Vp5bTwAkAFgArOTxBjZtiSpQ8QS6gAhZYsMnMuaUgm06kMsIAXRhnj4CLA1CICT8wAcOLZYBwAvgDlEQScAD5ZgADAA8sEufC4pq4AGsnHUSTy-AByAAkwBVeNEjoCdX1uU5fBkPDBDld7rAoiiQdR1Ei8AAXiHgON4BHqPDZkaAPSm3UGyVwhF3ExnFwGsJuWDBWamu4AWiLCAJbKWQOV1BgWm0BIADK3ZeptFoe1CwqakEweLHYpiIslbjx1AS9DwtAJ1K32tRrdpDB4oAT2rA3PBW7xYAuCScIAIwqIwmXvgNqhh2jwoCZ3p9BP9DRmG-BfTaV1-zvXIs3-QCHCNCtIIiV0kRRDBZT0PxZSIAIMEDLQnS9CMM0zGDDUsHhbDsQIwkBMJOBAZEEC8Hx4WEbskG7WtqTQFixAkEAaDoYQimIeQQDhJQWGEMQxTwXjpGsPExCAA)
| artydev |
321,426 | Apagando o cache do Redis | Se você não usa cache na sua aplicação, considere usar. Otimiza muito a performance e, se bem aplicad... | 0 | 2020-04-28T12:29:46 | https://dev.to/sr2ds/apagando-o-cache-do-redis-37b7 | redis, cache | If you don't use caching in your application, consider using it. It greatly improves performance and, when applied well, it will save you valuable resources and leave your users very happy with how quickly requests are served. But today we are not going to talk about how to implement that; this is just a quick article to help you clear the cache when you need to.
## How to list all cached keys
```
redis-cli keys *
```
## How to delete the entire cache
```
redis-cli FLUSHALL ASYNC
```
## How to filter the list of cached keys
```
redis-cli keys "*ALGUMA_PALAVRA_NO_MEIO*"
```
## How to delete specific entries
```
redis-cli --raw keys "*ALGUMA_PALAVRA_NO_MEIO*" | xargs redis-cli del
```
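One caveat worth adding: `KEYS` blocks Redis while it walks the whole keyspace, so on a large production instance it is usually safer to iterate with `SCAN`, which works in small batches. A rough equivalent of the command above:
```
redis-cli --scan --pattern "*ALGUMA_PALAVRA_NO_MEIO*" | xargs redis-cli del
```
On Redis 4.0 or newer you can also swap `del` for `unlink` so the keys are removed without blocking the server.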
That's it! As I said, this is just a quick reference guide!
See you next time! | sr2ds |
321,441 | Is it okay to build portfolio not using my own ideas? | Hi, I'm beginning to build my own portfolio and I've been searching for ideas and I've found it. So... | 0 | 2020-04-28T12:51:26 | https://dev.to/hycarldev_/is-it-okay-to-build-portfolio-not-using-my-own-ideas-49ik | Portfolio, discuss, frontend | Hi, I'm beginning to build my own portfolio and I've been searching for ideas and I've found it.
So I kinda used that template, <b>BUT</b> I didn't just copy the whole thing and leave it like that. I customized the site to my own liking, like adding some CSS touches such as a typewriter effect, and adding some components to showcase my skills.
The point is, I added some of my own ideas to a template to make the site my portfolio. That's acceptable, right? Or will people judge me, like, "ew, are you even a developer?", "go get a creative mind bro"?
I need some feedback and yeah, if you wanna see it, just let me know, thanks! | hycarldev_ |
321,481 | The Beginning: Why Code? | This post was originally published at https://jessesbyers.github.io./ on October 15, 2019, when I was... | 0 | 2020-04-28T13:42:02 | https://dev.to/jessesbyers/the-beginning-why-code-1n1d | career, codenewbie | *This post was originally published at https://jessesbyers.github.io./ on October 15, 2019, when I was a Software Engineering student at Flatiron School.*
Recently I've been asked by a number of friends and family members:
> "Why code?"
For me, the answer really comes down to kids. As a middle school science teacher, I was always focused on helping my students develop skills around science and engineering practices, including modeling, the design process and experimental design, and computational thinking. I started realizing that at some schools, students are introduced to simple programming and robotics starting as early as Kindergarten...but at my school, since we had virtually no technology program, we were providing almost zero opportunities in this area. I took it upon myself to introduce coding in the context of middle school science class. But first I had to learn a few things before I could teach it.

I started off using [code.org](https://code.org) resources to teach my students how to create their own apps using [App Lab](https://code.org/educate/applab), and then to give students the opportunity to present science projects through an app as opposed to more traditional formats such as slideshows, posters, or essays. Many students loved learning to create apps, but I felt torn because I had spent so much time teaching a skill for students to **present** their science, when they needed to spend more time actually **doing** the science.
The following year, I discovered the [Project GUTS](http://www.projectguts.org) curriculum, which is designed to teach programming skills in the context of a science classroom.
> Project GUTS — Growing Up Thinking Scientifically — is an integrated science and computer science program for middle school students serving schools and districts internationally. Growing up thinking scientifically means learning to look at the world and to ask questions, developing and using computer models that help answer questions through scientific inquiry, and using critical thinking to assess which ideas are reasonable and which are not. To grow up thinking scientifically means knowing science to be a computing-rich, dynamic, creative endeavor, a way of thinking, rather than a body of facts.
> *Source: http://www.projectguts.org/*
Through a series of modules, students learn basic block coding skills using the [StarLogo Nova](https://www.slnova.org) platform, and apply them towards creating scientific models and simulations. Students need to decide what variables they want to manipulate, program those variables, and then run tests using their models to collect data and discover new things about the content. I had 8th graders creating disease transmission models for MRSA, and collecting data around how various factors affected the spread of the disease and survival rates. Every group took a different approach, and the results were inspiring.

Throughout that journey, I had to teach myself basic coding skills so that I could teach the curriculum and stay just a step ahead of the kids through learning new concepts and debugging projects. However, I was hooked. I had not felt so creative or challenged for a long time, and these were two feelings that had been missing in my current work. I also saw the direct impact of my kids being challenged to be creative in new and unfamiliar ways, which had a very positive impact on their science learning.
Fast forward a year, and I have decided to pursue a new career in Software Engineering, and have enrolled as a part-time student in [Flatiron School's](https://flatironschool.com) Software Engineering Course. I want to challenge myself and be creative everyday, and with the first month under my belt, I can say that I am meeting that first goal. After the course, I would either like to work on an engineering team developing science educational software and/or curriculum, or perhaps move into a technology teaching role. Stay tuned as I blog about the challenges I encounter on the journey, my solutions, and my projects. | jessesbyers |
321,507 | AQI | Air Quality Index I'v been experimenting to fetch data related to AQI in China recently.... | 0 | 2020-04-28T14:11:23 | https://www.chuanjin.me/2016/11/10/air-quality-data-md/ | python | # Air Quality Index
I've been experimenting with fetching data related to [AQI](https://en.wikipedia.org/wiki/Air_quality_index) in China recently.
A good resource is http://aqicn.org, which presents air pollution data by city in a convenient way, and I wrote a Python script to crawl the data from it.
{% gist https://gist.github.com/chuanjin/07a0f43530464cabe7ff3cb45107e4d1 %}
Another website is http://www.stateair.net/, which reports AQI readings from the US consulates in major Chinese cities, i.e. Beijing, Shanghai, Guangzhou, Chengdu and Shenyang. Similarly, the code to fetch data from it is shown below.
{% gist https://gist.github.com/chuanjin/4d0307f1dbcb19c2743a348045b137ea %}
Both of the scripts above use Python and handy packages like [requests](http://docs.python-requests.org/en/master/) and [Beautiful Soup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/).
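As a rough illustration of that approach (this is not the exact code from the gists), such a scraper boils down to downloading the page with requests and pulling values out of the parsed HTML with Beautiful Soup. The URL and CSS selector below are only placeholders; you would need to inspect the real page to find the right ones.
```python
import requests
from bs4 import BeautifulSoup

def fetch_value(page_url, selector):
    """Download a page and return the text of the first element matching `selector`."""
    response = requests.get(page_url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    element = soup.select_one(selector)
    return element.get_text(strip=True) if element else None

# Placeholder URL and selector: check the actual page structure before using them
print(fetch_value("http://aqicn.org/city/beijing/", "#placeholder-aqi-value"))
```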
The stateair website also provides historical data for download in CSV format, so I used that data to create a simple visualization page. Check it out at https://chuanjin.github.io/pm25/; the source code is available from the GitHub [repo](https://github.com/chuanjin/pm25).
Last but not least, I found the [pm25.in](http://pm25.in/) website, which provides real-time data for all Chinese cities. Even better, they have opened their API to developers, and I created a Python library based on it. The code is available from https://github.com/chuanjin/python-aqi-zh
| chuanjin |
321,515 | Watershed Monitor: JavaScript and Rails Project | This post was originally published at https://jessesbyers.github.io./ on March 12, 2020, when I was a... | 0 | 2020-04-28T14:22:51 | https://dev.to/jessesbyers/watershed-monitor-javascript-and-rails-project-576c | javascript, googlemaps, ruby, rails | *This post was originally published at https://jessesbyers.github.io./ on March 12, 2020, when I was a Software Engineering student at Flatiron School.*
*I created Watershed Monitor to fill a real environmental need. We currently have the science we need to make the needed changes to protect our water quality, and we have many laws and regulations at every level related to managing and protecting our water quality. However, most government agencies and organizations lack the capacity to effectively monitor and enforce these regulations and support best practices. This application aims to help reduce this capacity problem. By calling on the public to collect and report data on water quality, the data can be used to help agencies and organizations prioritize their enforcement and support where it is most needed.*

[Check out the project on Github](https://github.com/jessesbyers/watershed-monitor) and [Watch a video walkthrough](https://drive.google.com/file/d/1LQ7Do3U6YBOZkHgLxWyPjXvE9GQQYANr/view?usp=sharing).
## Project Requirements
This project required me to create a Single Page Application with a Rails API Backend and JavaScript Frontend. All communication between the frontend and backend was required to happen asynchronously through AJAX with data communicated in JSON format. It needed to organize data through JavaScript Objects and Classes, include a has many relationship, and include at least 3 AJAX calls using fetch to complete CRUD actions. I fulfilled these requirements by integrating the Google Maps Javascript API so that users could use an interactive map interface in order to more easily input geographic data and view data without having to worry about latitude and longitude coordinates.
### Rails API Backend
The Rails component of this project is very straightforward. There is a Category model and an Observation model, and each Category has many Observations, and each Observation belongs to a Category. The Category model allows for easy organization and filtering of the data by category, and users primarily interact with the Observation model.
```
class ObservationsController < ApplicationController

  def new
    observation = Observation.new
  end

  def create
    observation = Observation.new(observation_params)
    observation.save
    render json: ObservationSerializer.new(observation)
  end

  def index
    observations = Observation.all
    render json: ObservationSerializer.new(observations)
  end

  def show
    observation = Observation.find(params[:id])
    render json: ObservationSerializer.new(observation)
  end

  def destroy
    observation = Observation.find(params[:id])
    observation.destroy
  end

  private

  def observation_params
    params.require(:observation).permit(:name, :description, :category_id, :latitude, :longitude)
  end
end
```
The Observations Controller includes logic for the create, read, and destroy actions, and leverages functionality from the fast JSON API gem to create serializers and customize how data is organized for communication with the JavaScript front end.
```
class ObservationSerializer
include FastJsonapi::ObjectSerializer
attributes :name, :description, :latitude, :longitude, :category_id, :created_at, :category
end
```
As a result, observation index data is displayed with associated categories at localhost:3000/observations:

### Google Maps JavaScript API Integration
This application relies heavily on the Google Maps JavaScript API for the frontend display and user interaction. This API is a codebase that includes JavaScript functions and objects such as maps, markers, and info windows. The first step in getting the front end up and running was to research and experiment with how these objects can be created, modified, and deleted. The [documentation](https://developers.google.com/maps/documentation/javascript/tutorial) was very helpful in navigating this exploration.
To integrate the maps API, I needed to add a script to the bottom of the body of my index.html file. This script made a connection to the google maps API, included my access key, and included a callback to the initMap() function which would set up my base map.
```
<script id="api" async defer src="https://maps.googleapis.com/maps/api/js?key=###I&callback=initMap"
type="text/javascript"></script>
```
Each type of object has a constructor function which allows construction of new instances of each object with a variety of options, such as the examples below.
#### Setting up the base map
```
let mapCenter = { lat: 45, lng: -90}
let map = new google.maps.Map(document.getElementById('map'), {zoom: 3, center: mapCenter});
```
This creates a map centered on North America, with a zoom level allowing us to view the entire continent.
#### Constructors for Markers and Info Windows
```
let obsMarker = new google.maps.Marker({
  position: {lat: this.latitude, lng: this.longitude},
  map: map,
  label: {
    text: number.call(this),
    fontSize: "8px"
  },
  icon: iconColor.call(this)
})
```
This creates a new marker object based on geographic coordinates from the database, and it can be customized for icon shape, color, label text, size, etc.
```
let infowindow = new google.maps.InfoWindow({
content: observationDetails
});
```
This creates a new info window, that can be populated with details fetched from the database.
#### Setter and Getter Methods
Beyond these constructors, I also used google's built-in setter and getter methods to obtain and change coordinates, to set or reset markers on a map, and to change specific properties of the markers on the map. For example:
```
function placeMarker(latLng, map) {
  let placeholder = new google.maps.Marker({
    position: latLng,
    map: map
  });
  placeholder.setDraggable(true)
  placeholder.setIcon('http://maps.google.com/mapfiles/ms/icons/blue-pushpin.png')
  let markerCoordinates = [placeholder.getPosition().lat(), placeholder.getPosition().lng()]
  newMarkerArray.push(placeholder)
  this.showNewObservationForm(markerCoordinates, map, placeholder)
}
```
Within this function, the setDraggable() setter method is used to make the marker draggable when creating a new observation for the map, and uses the setIcon() method to change the marker icon from the default shape to a pushpin shape. The getPosition() getter method is used to then collect the exact latitude and longitude coordinates from the pushpin placeholder marker, so they can be stored in an array and later used in the post request to the backend while creating a new observation entry in the database.

#### Event Listeners and Events
Finally, the Google Maps JavaScript API includes many event listeners and events that are similar to vanilla JavaScript events. Since many users are accustomed to using clicks, double clicks, and drags to navigate a map on any site, I needed to carefully plan out how to enable and disable event listeners so that my custom events for adding and deleting database entries did not conflict with regular Google Maps navigation events.
```
addObs.addEventListener('click', function() {
  addObs.disabled = true
  alert("Click on a location on the map to add a new observation.");
  let addMarkerListener = map.addListener('click', function(e) {
    Observation.placeMarker(e.latLng, map);
    google.maps.event.removeListener(addMarkerListener)
  });
})
```
This example shows how I paired a traditional event listener (clicking on the "Add" navbar button) with a google map listener in order to allow users to add a marker to the map as well as add the data to the database. At the end of the function, the event listener is removed to re-enable the default google maps behavior.

### Object Oriented Javascript Frontend
I organized the frontend across two classes, ObservationsAdapter and Observation.
The observation class is responsible for building and rendering markers and info windows using data retrieved from the user or from the database.
The adapter class is responsible for all communication between the frontend and backend, and includes all of the functions related to fetching data.
* A GET fetch request is used to populate the map with all observations from the database when the view button is clicked.
```
fetchObservations(map) {
  fetch(this.baseURL)
    .then(response => response.json())
    .then(json => {
      let observations = json.data
      observations.forEach(obs => {
        let observation = new Observation(obs.id, obs.attributes.name, obs.attributes.description, obs.attributes.category_id, obs.attributes.latitude, obs.attributes.longitude)
        observation.renderMarker(map)
      })
    })
}
```
* A POST fetch request is used to send user-input to the create action in the Observations Controller, which is then used to create and persist an observation instance in the database.
```
addMarkerToDatabase(newObservation, map) {
  let configObj = {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Accept": "application/json"
    },
    body: JSON.stringify(newObservation)
  };
  fetch(this.baseURL, configObj)
    .then(function(response) {
      return response.json()
    })
    .then(json => {
      let obs = json.data
      let observation = new Observation(obs.id, obs.attributes.name, obs.attributes.description, obs.attributes.category_id, obs.attributes.latitude, obs.attributes.longitude)
      observation.renderMarker(map)
    })
    .catch(function(error) {
      alert("ERROR! Please Try Again");
      console.log(error.message);
    });
}
```
* A DELETE fetch request is used to delete an individual observation instance from the database when a user clicks on the marker label for the corresponding observation id.
```
removeObsFromDatabase(marker) {
  let id = parseInt(marker.label.text)
  markersArray.map(marker => {
    google.maps.event.clearListeners(marker, 'dblclick')
  })
  let configObj = {
    method: "DELETE",
    headers: {
      "Content-Type": "application/json",
      "Accept": "application/json"
    },
  };
  fetch(`${this.baseURL}/${id}`, configObj)
    .then(function(json) {
      marker.setVisible(false)
      marker.setMap(null)
    })
}
```
## Future Enhancements
While this project has succeeded in delivering the functionality needed for the public to report water quality observations, more work needs to be done to make it a fully-functioning application. In the future, I would like to add the following features:
* Add user login, and allow users to view all data, but only delete their own observations
* Add an admin role, which allows a government entity or organization to access the database directly and work with the data in more complex ways than the public would.
* Replace the Water Quality Data category with a new class for Water Quality, and fetch quantitative data from a public API to display on the map instead of user input.
If you didn't already, feel free to [check out the project on Github](https://github.com/jessesbyers/watershed-monitor) and [watch a video walkthrough](https://drive.google.com/file/d/1LQ7Do3U6YBOZkHgLxWyPjXvE9GQQYANr/view?usp=sharing). | jessesbyers |
321,518 | Climate Data Dashboard: React-Redux Project | This post was originally published at https://jessesbyers.github.io./ on April 14, 2020, when I was a... | 0 | 2020-04-28T14:27:58 | https://dev.to/jessesbyers/climate-data-dashboard-react-redux-project-1ilb | javascript, react, redux, rails | *This post was originally published at https://jessesbyers.github.io./ on April 14, 2020, when I was a Software Engineering student at Flatiron School.*
*Climate Data Dashboard is a tool for science teachers and students to promote data analysis and productive discussion about data. As a middle school teacher, I was always trying to help my students to examine and compare data across multiple sources. My students needed more practice making observations about the data, as well as generating questions about the data to guide further inquiry. As a teacher, I struggled to find and present appropriate data sources. The Data Climate Dashboard addresses all of these challenges by providing a collection of data sources that can displayed together, and providing opportunities for students to interact with the data as well as interact with the ideas of their classmates, which drives discussion and further inquiry.*

[Check out the project on Github](https://github.com/jessesbyers/climate-data-dashboard-frontend) and [Watch a video walkthrough](https://drive.google.com/file/d/1IVsYRaElQui7Se3lXT8yAWIy3wBS149a/view?usp=sharing).
## Project Overview
This project was created with a Ruby on Rails backend API which manages all of the teacher and student data related to the charts and observations (notices) and questions (or wonders). The frontend was created as a React-Redux application using React-Router to manage RESTful routing and Thunk to manage asynchronous fetch actions.
## Managing State in React-Redux
The most challenging aspect of this project was planning how I would manage my data in the backend API as well as in the frontend. I needed to structure my chart and notice/wonder data in the API based on their has_many/belongs_to relationship, and in the first draft of my project, I set up the initial state in my reducer according to this same belongs_to/has_many nested structure. While I was able to create all of my basic functionality using this deeply nested state, it became clear that a complex state structure would cause more difficulty than efficiency.
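For context, the backend side of that relationship is just standard Rails associations. This is only a rough sketch of what the models would look like (inferred from the description above, not copied from the repo):
```
class Chart < ApplicationRecord
  has_many :notices
  has_many :wonders
end

class Notice < ApplicationRecord
  belongs_to :chart
end

class Wonder < ApplicationRecord
  belongs_to :chart
end
```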
### Version 1: Deeply Nested State and a Single Reducer
#### Initial State in the Reducer
As I mentioned, my first draft included a deeply nested initial state in my reducer. Mirroring the relationships in the backend, the state looked something like this:
```
state = {
  charts: [
    {chart1 object },
    {chart2 object },
    {chart3 object },
    ...
  ]
}
```
However, the nesting became more complex when we consider the array of notices and wonders that belong to each chart object:
```
state.charts[0] = {
  id: chart_id,
  name: chart_name,
  data_url: source_of_raw_data,
  screenshot_url: url_of_image,
  notices: [
    {notice1 object},
    {notice2 object},
    {notice3 object},
    ...
  ],
  wonders: [
    {wonder1 object},
    {wonder2 object},
    {wonder3 object},
    ...
  ]
}
```
Within each notices or wonders array, the objects look like this:
```
state.charts.notices[0] = {
id: notice_id,
content: content_text,
votes: 7,
chart_id: 1
}
```
#### ManageStudentInput Reducer
Putting it all together, although the data was highly structured and organized, it was incredibly difficult to work with in the reducer, especially when trying to add, delete, and edit notices and wonders without mutating state.
The reducer started out simple enough for adding and deleting charts, using the spread operator to make sure the original state is not mutated in the process:
```
export default function manageStudentInput(state = {charts: [], requesting: false}, action) {
  let i
  switch (action.type) {

    case 'START_ADDING_CHARTDATA_REQUEST':
      return {
        ...state,
        requesting: true
      }

    case 'ADD_CHARTDATA':
      return {
        charts: state.charts.concat(action.chart),
        requesting: false
      }

    case 'DELETE_CHART':
      return {
        charts: state.charts.filter(chart => chart.id !== action.id),
        requesting: false
      }
```
However, the complexity increased significantly when I started managing the actions related to notices and wonders. I had to find each object by its index, and break apart each state object and spread each element in order to add, delete, or update a single property.
```
    case 'ADD_WONDER':
      console.log(action)
      i = state.charts.findIndex(chart => chart.id === action.mutatedWonder.chart_id)
      return {
        ...state,
        charts: [...state.charts.slice(0, i),
          {
            ...state.charts[i], wonders: [...state.charts[i].wonders, action.mutatedWonder]
          },
          ...state.charts.slice(i + 1)
        ],
        requesting: false
      }

    case 'DELETE_NOTICE':
      i = state.charts.findIndex(chart => chart.id === action.chartId)
      return {
        ...state,
        charts: [...state.charts.slice(0, i),
          {...state.charts[i], notices: state.charts[i].notices.filter(notice => notice.id !== action.notice_id)
          },
          ...state.charts.slice(i + 1)
        ],
        requesting: false
      }

    case 'UPVOTE_WONDER':
      i = state.charts.findIndex(chart => chart.id === action.updatedWonder.chart_id)
      return {
        ...state,
        charts: [...state.charts.slice(0, i),
          {...state.charts[i], wonders: [...state.charts[i].wonders.filter(wonder => wonder.id !== action.updatedWonder.id), action.updatedWonder]
          },
          ...state.charts.slice(i + 1)
        ],
        requesting: false
      }
```
This is just a taste of the result; you can see the entire 212-line reducer [here](https://github.com/jessesbyers/climate-data-dashboard-frontend/blob/v1-nested_state/src/reducers/manageStudentInput.js). Needless to say, although the app functioned this way, this is not the ideal solution!

### Revised Version: Simple State and Multiple Reducers
#### Initial State in the Reducer
I branched my repository and refactored the entire application with a simplified state, which separated charts, notices, and wonders into separate keys with an array of objects for each. State did not retain the has_many/belongs_to relationships between the models, but it didn't need to since all of the notices and wonders had a foreign key, chart_id.
```
state = {
  charts: [
    {chart1 object },
    {chart2 object },
    {chart3 object },
    ...
  ],
  notices: [
    {notice1 object},
    {notice2 object},
    {notice3 object},
    ...
  ],
  wonders: [
    {wonder1 object},
    {wonder2 object},
    {wonder3 object},
    ...
  ]
}
```
#### CombineReducers: Charts, Notices, and Wonders
I used combineReducers to manage state for the three different models across individual reducers:
```
import { combineReducers } from 'redux'
import chartsReducer from './chartsReducer'
import noticesReducer from './noticesReducer'
import wondersReducer from './wondersReducer'
const rootReducer = combineReducers({
charts: chartsReducer,
notices: noticesReducer,
wonders: wondersReducer
});
export default rootReducer
```
By removing the nesting in the initial state, I was able to organize the actions for each model into its own individual file. Better yet, I was able to add, delete, and edit state without manipulating deeply nested data with spread operators, as in my previous example. Actions that would have had return values of 4 lines, have been reduced to 1-liners!
```
export default function chartsReducer(state = [], action) {
  switch (action.type) {
    case 'START_ADDING_CHARTDATA_REQUEST':
      return state
    case 'ADD_CHARTDATA':
      return [...state, action.chart]
    case 'DELETE_CHART':
      return state.filter(chart => chart.id !== action.id)
    case 'START_ADDING_DATA_SOURCE_REQUEST':
      return state
    case 'ADDING_DATA_SOURCE':
      return state
    default:
      return state
  }
}
```
Manipulating data in the notices and wonders reducers saw a more significant improvement. A complex code snippet involving slicing and dicing an array by index numbers was greatly simplified, using a simple map function and conditional logic:
```
    case 'DELETE_NOTICE':
      let remainingNotices = state.map(notice => {
        if (notice.id === action.notice_id) {
          return action.notice_id
        } else {
          return notice
        }
      })
      return remainingNotices
```
You can see all of the final reducers [here](https://github.com/jessesbyers/climate-data-dashboard-frontend/tree/master/src/reducers).

## Lessons Learned
Needless to say, this refactoring of my working code was a significant time investment, but it was clear that it needed to be done. I had created a lot of extra work by having an overly complicated nested state structure, and it really didn't gain me any efficiency in my containers and components. Simple state structures are definitely the way to go. That said, working through thew challenge of writing reducer logic with a deeply nested initial state was a tremendous learning opportunity for me. My understanding of the spread operator was shaky before tackling this project, and I had to work through multiple instances of breaking apart data and putting it back together again. I refined my debugging skills and developed a sound process for examining the return values of each action. Both of these skills will certainly come in handy in the future...but not while tackling a deeply nested state. I will definitely be using simple states and combining reducers from now on!
[Check out the project on Github](https://github.com/jessesbyers/climate-data-dashboard-frontend) and [Watch a video walkthrough](https://drive.google.com/file/d/1IVsYRaElQui7Se3lXT8yAWIy3wBS149a/view?usp=sharing).
Want to learn more about how the project works under the hood? Check out my second blog post about the project: [React-Redux: How it Works](https://dev.to/jessesbyers/react-redux-how-it-works-5d84).
| jessesbyers |
321,533 | Angular Events vs. Observables | When Angular 2 came out, it adopted the Observable as an intregal part of its architecture. rxJS add... | 3,337 | 2020-04-28T14:47:16 | https://dev.to/jwp/angular-events-2jlk | angular, events, typescript | When Angular 2 came out, it adopted the [Observable](https://www.google.com/search?q=angular+observabl) as an intregal part of its architecture. [rxJS added all of these Observable oriented functions, operators and support](https://rxjs-dev.firebaseapp.com/api). All good; yes very good, except for one thing. It's ramp up time is quite steep.
**Simplicity First**
None of the DOM architecture has Observables built into it the way Angular does. That's because it was built on the [event model](https://www.google.com/search?q=dom+events). Indeed, reading up on the [DOM Living Standard](https://dom.spec.whatwg.org/), we see only a couple of mentions of anything observable-like, both in the context of Events.
**The EventHandler**
The prehistoric 'observable' was the event handler, which registers a function to listen for an event. It is asynchronous because it never knows when the event will happen. The event itself is time-independent, and the event architecture of the DOM is baked in, so deeply that the current standard shows no indication of change. So while the event is prehistoric, in this case that just means it is, was, and will continue to be around the DOM world for a long time to come.
**The EventHandler is its own type of Observable**
I was reading up on [StackOverFlow's bazillion posts on Angular Observables](https://stackoverflow.com/search?tab=votes&q=%5bangular%5d%20observables) the other day. There are over 24 thousand questions on Observables. One of the common comments is ["Don't ever use events in Angular, the support may be pulled at some time, use Observables"](https://stackoverflow.com/questions/36076700/what-is-the-proper-use-of-an-eventemitter/36076701#36076701). This is just [bad, opinionated advice](https://en.wikipedia.org/wiki/Drinking_the_Kool-Aid).
[What's a ba-zillion anyway?](https://www.google.com/search?q=bazillion)
**Computer Programming Event History**
The history of [events in computer programming goes back 50 years or more](https://www.google.com/search?q=computer+programming+events). This is a good thing, in that there are probably no bugs left to find in this architecture.
There is some [criticism of events](https://en.wikipedia.org/wiki/Event-driven_programming#Criticism), but it doesn't seem to be substantial, nor is any source given for those conclusions.
**Event Usage in Angular is Fine**
On your journey to learning about Observables, don't forget about the simpler event pattern. It works quite well and has a rich, solid history. So much so that the Observable architecture [adapts to events](https://www.google.com/search?q=rxjs+fromevent) easily, allowing anyone to transform an event into an Observable.
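To make that concrete, here is a small sketch (assuming a typical Angular 2+ / RxJS 6 setup; the component and event names are made up) showing both styles side by side: a plain `@Output()` EventEmitter, the standard supported way for a component to emit events, and RxJS `fromEvent`, which wraps a native DOM event in an Observable when you want to compose it with operators.
```typescript
import { AfterViewInit, Component, ElementRef, EventEmitter, Output } from '@angular/core';
import { fromEvent } from 'rxjs';
import { debounceTime } from 'rxjs/operators';

@Component({
  selector: 'app-save-button',
  template: `<button (click)="save()">Save</button>`
})
export class SaveButtonComponent implements AfterViewInit {
  // Classic Angular event: a parent listens with (saved)="onSaved($event)"
  @Output() saved = new EventEmitter<string>();

  constructor(private host: ElementRef) {}

  save(): void {
    this.saved.emit('saved!');
  }

  ngAfterViewInit(): void {
    // The same DOM event, adapted into an Observable when operators are useful
    fromEvent(this.host.nativeElement, 'click')
      .pipe(debounceTime(300))
      .subscribe(() => console.log('clicked (debounced)'));
  }
}
```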
JWP2020
| jwp |
321,589 | CLEO.one Review – Trading Automation Made Simple | Every trader with some skin in the game has experienced the impact of emotions and psychology on exec... | 0 | 2020-04-28T16:34:21 | https://blog.coincodecap.com/cleo-one-review-trading-automation-made-simple/?utm_source=rss&utm_medium=rss&utm_campaign=cleo-one-review-trading-automation-made-simple | trading, cryptotrading, tradingbots | ---
title: CLEO.one Review – Trading Automation Made Simple
published: true
date: 2020-04-28 16:10:59 UTC
tags: Trading,crypto-trading,trading-bots
canonical_url: https://blog.coincodecap.com/cleo-one-review-trading-automation-made-simple/?utm_source=rss&utm_medium=rss&utm_campaign=cleo-one-review-trading-automation-made-simple
---
Every trader with some skin in the game has experienced the impact of emotions and psychology on execution. It’s very tough to develop the discipline to stick to your trading plan when the markets get hectic. Fear or greed? Most of us know how overpowering and triggering they can be.
The good news is that [trading automation](https://coincodecap.com/category/trading-automation) is meant to solve this, but not all trading automation is created equal, and not all trading strategies should be automated. Look for a tool that helps you work on your strategy, identify the winners, and execute well. In this article, we will review CLEO.one.
[CLEO.one](https://cleo.one/?utm_source=cpincodecap) has what it takes to do it all with an abundance of data available, millions of ways to combine it, unrivaled backtesting capabilities, and ease of use.
> [**Read CLEO.one customer reviews on CoinCodeCap**](https://coincodecap.com/product/cleo.one-9)
## **Features**
Through simple typing, CLEO.one lets you build flexible strategies that would typically take a lot of programming, backtest them across several asset categories, and run strategies live on your crypto exchange account.
Here is a list of available features in [CLEO.one](https://cleo.one/?utm_source=cpincodecap):
- Create trading strategies using plenty of data and custom parameters
- Backtest trading strategies for crypto, forex, and equities with plenty of analysis data
- Use simultaneous Trailing Take Profit and Stop Loss
- Automate crypto strategies ([crypto trading bots](https://coincodecap.com/category/trading-automation?utm_source=coincodecap_blog)) on the exchange of your choice
- Paper trade to test out strategies in the current condition of the market
- Free access to some profitable strategies
*CLEO.one strategies review*
## **Pros and Cons**
- What sets CLEO.one apart is the ability to create flexible trading strategies, build your own crypto bots, and put them to the test straight away.
- Technical indicators, candlestick patterns, crypto market caps, % volume, and price changes are all available out of the box.
*CLEO.one crypto bot creation review*
- Editing strategies is straightforward by having the option of creating new versions of the strategy before they are deployed as crypto bots.
- It has a user-friendly backtesting tool for crypto trading strategies, where any user can test up to 10 strategies for free.
- Traders that are new to automation can benefit from free strategies that are available free of charge. These strategies can be further edited, backtested, and even implemented on one’s portfolio within the platform.
- Customer success representatives are ready to help and offer free onboarding calls.
CLEO.one is still in Beta, and some much-anticipated features are yet to be added to the platform:
- Leveraged trading is one of the most requested features and is on the roadmap.
- The marketplace is another major missing piece. With so much data available, it will be fascinating to see what kind of strategies the community will come up with.
## **How it Works**
Once you have registered for a free CLEO.one account, you get an email with various options to help you get used to the features of CLEO.one, its portfolio page, and its strategy builder.
Additionally, you can request an onboarding call to go through the platform with one of the Success managers and develop a better understanding of it. You can also browse through the CLEO.one helpdesk documentation at any time.
As soon as you create an account, you can start building your strategies through simple typing.
Anything from candlestick patterns and technical indicators to volume, price movements, and crypto fundamentals is at your disposal.
Creating a strategy like “Buy ETH when BTC price is up by 2% in the last day, and Alts Market Cap is up by 3% in the last 3 days” takes 2 minutes.
*Creating a trading strategy*
All strategies are backtested on historical data with a single click. Once you are happy with the results, another click will have you trading your winning strategy on the exchange of your choice.
## **Comparison with other products**
Crypto bots and tools for trading automation are quite common, yet even the most popular ones lack transparency or require coding. What makes CLEO.one fundamentally different is that the strategy is the cornerstone of the platform; it is well suited for traders that seek transparency in their crypto bots and/or want to build their own bots and put them to the test. You always know what you're trading.
The backtesting tool deserves a special mention as the easiest to operate and most insightful in the crypto space. CLEO.one is the only platform that empowers traders by providing the ultimate testing environment and encouraging them to improve their trading skills rather than blindly giving out questionable signals.
*CLEO.one backtesting review*
To ensure accuracy for all the trading decisions you make, CLEO.one provides data from top banks and financial institutions, unlike many other platforms.
## **User Experience**
CLEO.one does not require a degree in Computer Science or prior coding knowledge to operate. A Strategy can be produced by simple typing, and the interface is intuitive and clean. That strategy can be deployed as a crypto bot with a single click.
Also, there is a choice of free ready-to-use strategies for beginner traders. Customer support is also active and ready to help with any questions.
## **Performance and Security**
CLEO.one does not store your funds; instead, your balance remains with the exchange of your choice. In addition, the bot has only restricted access to your account, through the API keys that your exchange generates.
The bot can only execute trades; it cannot withdraw funds from or deposit funds into your account.
CLEO.one uses bank-level encryption to keep your data safe and sound.
## **Pricing**
- Free Plan – €0
- Starter – €49/month
- Trader – €149/month
- Trader Pro – €249/month
## **Conclusions**
Most experienced traders want to automate their strategies, and CLEO.one has what it takes to become their go-to tool.
If the idea of becoming better at trading, developing strategies, and testing them, using a wealth of indicators, candlesticks, other assets, or price action fits your plan, CLEO.one is a good fit for you.
Let us know what you think about our CLEO.one review in the comment section.
- [Cryptocurrency —Advantage, Disadvantage, and Risk](https://blog.coincodecap.com/cryptocurrency-advantage-disadvantage-risk/)
- [Different Types of Crypto Trading Bots](https://blog.coincodecap.com/different-types-of-crypto-trading-bots/)
- [TrailingCrypto Review – A Multi-Exchange Crypto Trading Platform](https://blog.coincodecap.com/trailingcrypto-review-a-multi-exchange-crypto-trading-platform/)
- [2020 Investor’s Guide to Crypto](https://blog.coincodecap.com/investors-guide-to-crypto/)
- [ChangeNOW review – A reliable way to exchange crypto](https://blog.coincodecap.com/changenow-review-a-secure-crypto-exchange/)
- [TRIBTC Review — A Crypto Option Trading Platform](https://blog.coincodecap.com/tribtc-review-a-crypto-option-trading-platform/)
- [An Overview of Binary Options Trading](https://blog.coincodecap.com/an-overview-of-binary-options-trading/)
- [Understanding Cryptocurrency Trading Bots [2020]](https://blog.coincodecap.com/a-guide-to-cryptocurrency-trading-bots/)
The post [CLEO.one Review – Trading Automation Made Simple](https://blog.coincodecap.com/cleo-one-review-trading-automation-made-simple/) appeared first on [CoinCodeCap Blog](https://blog.coincodecap.com). | coinmonks |
321,599 | How To...Rails: Validations | This post was originally published at https://jessesbyers.github.io./ on January 28, 2020, when I was... | 7,249 | 2020-04-28T16:36:38 | https://dev.to/jessesbyers/how-to-rails-validations-395j | ruby, rails, codenewbie | *This post was originally published at https://jessesbyers.github.io./ on January 28, 2020, when I was a Software Engineering student at Flatiron School.*
*Validations in Rails allow us to protect the data that is entered into our database by checking whether it meets certain criteria or requirements. This is important because savvy hackers could edit the HTML on our form views and create or modify fields that should not be available to them. To prevent this, we can use common validation methods that are included in ActiveRecord, create custom validation methods, and display error messages and redirects when a user tries to enter invalid data.*
### Contact List Example
For this post, I’ll continue using the Contact List app example from my previous posts on [Rails CRUD](https://dev.to/jessesbyers/how-to-rails-basic-crud-restful-routes-and-helper-methods-1h8) and [Complex Associations](https://dev.to/jessesbyers/how-to-rails-complex-associations-nested-forms-and-form-helpers-5g5k). In this example, the data is used by the Fire Department in order to rescue both people and their pets. Therefore, it is vital that only good data is entered into the database so that the data on each household will be useful and accurate.
## Validation Methods in ActiveRecord
### 1. Add the method underneath the class name in the model.
Some common validations let you check that a certain attribute is present, unique, or is a number. You can further customize the validation by including length, whether certain characters are included or excluded, and more. While the methods automatically generate error messages, you can also add a custom error message with the validation if you would like.

### 2. Update the controller to run validations
After the validations are set in the model, they will only be called if you use the .valid? method, or if you try to save or update an instance (by calling .create, .save, or .update). You must update the create action in the controller to check if the instance is valid. If the instance is valid and the save is successful, it will redirect to the contact show page. If not, it will re-render the form.

### 3. Add error messages on form
Whenever an instance is invalid, it generates a hash that includes the attributes of the model with corresponding error messages. You can add a section at the top of the form to print out the content of those messages if, and only if, the record is invalid.

Since the new action is set with the instance variable @contact, it will print out those messages if there are any, and that section will not render at all if there are no errors. The form will also keep the valid values that were already added to the form so they will not need to be re-entered.
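That error section at the top of the form is typically a snippet along these lines (a sketch of the standard Rails pattern, not the exact code from the screenshot):
```erb
<% if @contact.errors.any? %>
  <div id="error_explanation">
    <h3><%= pluralize(@contact.errors.count, "error") %> prevented this contact from being saved:</h3>
    <ul>
      <% @contact.errors.full_messages.each do |message| %>
        <li><%= message %></li>
      <% end %>
    </ul>
  </div>
<% end %>
```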
### 4. Update CSS to highlight error fields

Finally, you can add CSS styling to highlight the error fields so it will be easy for the user to find and correct the values. The re-rendered form will include a list of all of the invalid data, and highlight the fields to be corrected.

## Custom Validation Methods
The validation methods in ActiveRecord cover most of the common scenarios you might want to validate for. However, you can also create custom validation methods within the model to account for less common use cases.
### 1. Create a ruby method within the class
Let's say we want to validate that the address is from the local state and zip code, in order to prevent residents of other states or towns from being entered into the database. We can create an instance method using Ruby logic to identify the condition that is not allowed, and to add the error message that should be printed if that condition is true.
### 2. Add a validation line at the top of the model
In this case, instead of using “validates”, use “validate” with the custom method name.

### 3. Update the controller and views
The rest of the process will be exactly the same as when you use built-in validation methods, as described above.

Validations are pretty straightforward, and more detail can be found in the [Active Record Validations guide](https://guides.rubyonrails.org/active_record_validations.html).
### How To Series
*In this series of How To posts, I will be summarizing the key points of essential topics and illustrating them with a simple example. I’ll briefly explain what each piece of code does and how it works. Stay tuned as I add more How To posts in the series each week!*
* [Post 1: How To...Rails: Basic CRUD, RESTful Routes, and Helper Methods](https://dev.to/jessesbyers/how-to-rails-basic-crud-restful-routes-and-helper-methods-1h8)
* [Post 2: How To...Rails: Complex Associations, Nested Forms, and Form Helpers](https://dev.to/jessesbyers/how-to-rails-complex-associations-nested-forms-and-form-helpers-5g5k)
| jessesbyers |
321,611 | A brief introduction to how Node.js works. | When it comes to web applications, there are some crucial success parameters, such as performance, sc... | 0 | 2020-04-28T16:52:45 | https://www.simform.com/what-is-node-js/ | node, webdev, javascript | When it comes to web applications, there are some crucial success parameters, such as performance, scalability, and latency. Node.js is the javascript runtime environment that achieves low latency with high processing by taking a "non-blocking" model approach. Many leading enterprises like Netflix, Paypal, eBay, IBM, Amazon, and others rely entirely on the flawless performance of Node.js.
The growing maturity of Node.js within companies is strong evidence of the platform's versatility. It is moving beyond being merely a web application platform and is beginning to be used for agile experimentation with business automation, data, and IoT solutions.
__So what exactly is Node.js and how does it work?__
Node.js is an open-source JavaScript runtime environment built on Chrome's V8 engine that lets you effortlessly develop fast and scalable web applications. It utilizes an event-driven, non-blocking I/O model that makes it lightweight, efficient, and excellent for data-intensive, real-time applications that run across distributed devices.
To understand what is so special about Node.js in 2020, we have covered the topic in detail: [What is Node.js? Where, when and how to use it with examples](https://www.simform.com/what-is-node-js/?utm_source=dev.to&utm_medium=inline_cta&utm_campaign=nodejs)
__How does Node.js work?__

Node.js is the epitome of exceptionally customizable and scalable technology. The server engine utilizes an event-based, non-blocking I/O model, and the V8 engine compiles JavaScript down to machine code, which makes execution very fast. Thanks to JavaScript and Node.js, code runs efficiently in the server-to-client direction, which takes the performance of web applications to the next level. To be more precise, web application development in Node.js provides a steady and secure non-blocking I/O model, simplifying the code beautifully.
Node.js runs on Google's V8 JavaScript engine, and web applications built on it are event-based and asynchronous. The Node.js platform uses a "single-threaded event loop."
So how exactly does Node.js handle concurrent requests with a single-threaded model? The traditional "multi-threaded request-response" architecture dedicates a thread to each request, which is much heavier and slower than an event loop and limits how many concurrent requests can be served at a time.

The platform does not follow that multi-threaded, stateless request/response model; instead, it goes with a simplified single-threaded event loop model. A dedicated library called "libuv" provides this event loop mechanism. This Node.js processing model is largely based on the JavaScript event model along with the callback mechanism.
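To make the non-blocking part concrete, here is a tiny sketch (the file name is just a placeholder): the file read is handed off to the system via libuv, the single JavaScript thread keeps going, and the callback runs later when the data is ready.
```javascript
const fs = require('fs');

console.log('1. ask for the file');

// Non-blocking: Node hands the read to libuv and moves on immediately
fs.readFile('./report.txt', 'utf8', (err, data) => {
  if (err) return console.error(err);
  console.log('3. file contents arrive in the callback:', data.length, 'characters');
});

console.log('2. keep handling other work while the disk is busy');
```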
__Conclusion__
Node.js runs on a single-threaded event loop and is famous for its asynchronous, non-blocking model. Due to its many advantages, like scalability, speed, and high performance, it has become a go-to choice for developing modern-day web applications. To dive deeper into Node.js and understand where, when, and how to use it, explore the entire blog.
I would love to answer your questions and discuss the topic at length. Feel free to drop your questions in the comments or let's get in touch @tejaskaneriya
| tejaskaneriya |
321,622 | Competitive Programming | What is Competitive Programming Competitive Programming is an art. Most of the developers... | 0 | 2020-04-28T17:51:46 | https://dev.to/developer_anand/competitive-programming-20e4 | competitiveprogramming, algoritham, datastructures | #What is Competitive Programming
Competitive Programming is an *art*. Most developers don't do it because they think **it's really tough**. But it isn't tough; it's like a game, and once you start playing with problems you start loving it. **CP** is like a game, but it also tests your patience.
## How to start Competitive Programming?
Most beginners don't know how to start **CP**; even I didn't know when I started my journey in the *CP* world. When I started *CP*, most of the time I was unable to solve even basic ***problems***, and that is the moment when we tell ourselves we can't do this. But the only way to get good at it is to ***keep doing*** competitive programming.
### Some rules to start competitive programming
- Learn one programming language completely.
- Learn data structures.
- Strengthen your basics, and for this:
  - Write code daily.
  - Read and apply algorithms.
#### Golden rule of *CP*
The only rule is to **learn before** you **write** code. The problem with us is that the world changes us and we don't want to wait; that's why most developers jump into competitive programming before knowing the basics, and that's why they are not able to solve problems and get frustrated. So first complete your basics and write code daily.
##### How can we do this regularly?
A common problem is that we start something and, after 3 or 4 days, we get bored and drop it.
The **solution** to this problem is to:
- make a commitment to yourself
- write your goal down somewhere (on paper, etc.)
And the best way of doing this is to create a **repo** on GitHub, name it whatever you want, e.g. [100 days of CODE](https://github.com/developer-anand/100-Days-of-code), and push your code daily.
| developer_anand |
321,675 | Reducing Your Database Hosting Costs: DigitalOcean vs. AWS vs. Azure | Compare three of the most popular cloud providers, AWS vs. Azure vs DigitalOcean database hosting costs for your MongoDB® app - ScaleGrid blog | 0 | 2020-04-28T19:12:53 | https://dev.to/scalegrid/reducing-your-database-hosting-costs-digitalocean-vs-aws-vs-azure-bl4 | mongodb, aws, azure, digitalocean | ---
title: Reducing Your Database Hosting Costs: DigitalOcean vs. AWS vs. Azure
published: true
description: Compare three of the most popular cloud providers, AWS vs. Azure vs DigitalOcean database hosting costs for your MongoDB® app - ScaleGrid blog
tags: MongoDB, AWS, Azure, DigitalOcean
---
<link rel="canonical" href="https://scalegrid.io/blog/reducing-your-database-hosting-costs-digitalocean-vs-aws-vs-azure/" />
<p style="text-align: justify;"><a href="https://scalegrid.io/blog/reducing-your-database-hosting-costs-digitalocean-vs-aws-vs-azure/"><img class="alignnone size-full wp-image-5273" src="https://scalegrid.io/blog/wp-content/uploads/2020/04/Reducing-Your-Database-Hosting-Costs-DigitalOcean-vs-AWS-vs-Azure-ScaleGrid-Blog.jpg" alt="Reducing Your Database Hosting Costs: DigitalOcean vs. AWS vs. Azure" width="100%" height="auto" /></a></p>
<p style="text-align: justify;">If you’re hosting your databases in the cloud, choosing the right cloud service provider is a significant decision to make for your long-term hosting costs. This is especially apparent in today's world where organizations are doing whatever they can to optimize and reduce their costs. Over the last few weeks, we have been inundated with requests from SMB customers looking to improve the ROI on their database hosting. In this article, we are going to compare three of the most popular cloud providers, <a style="color: #2da964;" href="https://scalegrid.io/blog/reducing-your-database-hosting-costs-digitalocean-vs-aws-vs-azure/" target="_blank">AWS vs. Azure vs. DigitalOcean</a> for their database hosting costs for <a style="color: #2da964;" href="https://scalegrid.io/mongodb.html" target="_blank">MongoDB® database</a> to help you decide which cloud is best for your business.</p>
<h2 style="padding-top: 15px;">Comparing Cloud Instance Costs</h2>
<p style="text-align: justify; padding-bottom: 10px;">So, which cloud provider provides the most cost-effective solution for database hosting? We compare AWS vs. Azure vs. DigitalOcean using the below instance types:</p>
<table style="background: #def5fe;" width="100%">
<tbody>
<tr style="border-bottom: 2px solid white;">
<td style="padding: 20px 15px;"><b>AWS</b></td>
<td style="padding: 20px 15px; border-left: 2px solid white;"><a style="color: #2da964;" href="https://aws.amazon.com/ec2/pricing/on-demand/" target="_blank" rel="nofollow">EC2 instances</a></td>
</tr>
<tr style="border-bottom: 2px solid white;">
<td style="padding: 20px 15px;"><b>Azure</b></td>
<td style="padding: 20px 15px; ; border-left: 2px solid white;"><a style="color: #2da964;" href="https://azure.microsoft.com/en-us/pricing/details/virtual-machines/linux/" target="_blank" rel="nofollow">VM instances</a></td>
</tr>
<tr>
<td style="padding: 20px 15px;"><b>DigitalOcean</b></td>
<td style="padding: 20px 15px; ; border-left: 2px solid white;"><a style="color: #2da964;" href="https://www.digitalocean.com/pricing/" target="_blank" rel="nofollow">Droplets</a></td>
</tr>
</tbody>
</table>
<p style="text-align: justify;">Since database hosting is more dependent on memory (RAM) than storage, we are going to compare various instance sizes ranging from just 1GB of RAM up to 64GB of RAM so you can see how costs vary across different application workloads.</p>
<p style="text-align: justify; padding-bottom: 10px;">Let’s take a look at the monthly cost (720 hours) of database hosting for standalone, on-demand, <a style="color: #2da964;" href="https://scalegrid.io/pricing.html#section_pricing_dedicated" target="_blank">dedicated instances</a> on AWS, Azure and DigitalOcean. As you can see from the graph below, DigitalOcean database hosting provides significant cost-savings over both AWS and Azure. Additionally, their Droplet pricing is extremely simple and easy to understand - $5/GB.</p>
<p style="text-align: justify;"><a href="https://scalegrid.io/blog/reducing-your-database-hosting-costs-digitalocean-vs-aws-vs-azure/#costs"><img class="alignnone size-full wp-image-5273" src="https://scalegrid.io/blog/wp-content/uploads/2020/04/Monthly-Database-Hosting-Costs-AWS-vs-Azure-vs-DigitalOcean.png" alt="Monthly Database Hosting Costs: AWS vs. Azure vs. DigitalOcean - ScaleGrid Blog" width="100%" height="auto" /></a></p>
<p style="text-align: justify;">As you can see from the above chart, on average, DigitalOcean instance costs are over 28% less expensive than AWS and over 26% less than Azure.</p>
<h2 style="padding-top: 15px;">Comparing ScaleGrid Database Hosting Costs: AWS vs. Azure vs. DigitalOcean</h2>
<p style="text-align: justify; padding-bottom: 10px;">As mentioned above, the reason we decided to write this article is because of a recent increase in questions from customers on how they can reduce their database hosting costs, so we wanted to make sure to compare the costs of our fully managed DBaaS solution across cloud providers as well. Here are the configurations for this comparison:</p>
<table style="background: #def5fe;" width="100%">
<tbody>
<tr style="border-bottom: 2px solid white;">
<td style="padding: 20px 15px;"><b>Plan</b></td>
<td style="padding: 20px 15px; border-left: 2px solid white;"><a style="color: #2da964;" href="https://scalegrid.io/pricing.html#section_pricing_dedicated" target="_blank">Dedicated Hosting</a></td>
</tr>
<tr style="border-bottom: 2px solid white;">
<td style="padding: 20px 15px;"><b>Database</b></td>
<td style="padding: 20px 15px; border-left: 2px solid white;"><a style="color: #2da964;" href="https://scalegrid.io/mongodb.html" target="_blank">MongoDB® Database</a></td>
</tr>
<tr style="border-bottom: 2px solid white;">
<td style="padding: 20px 15px;"><b>Replication Strategy</b></td>
<td style="padding: 20px 15px; border-left: 2px solid white;"><a style="color: #2da964;" href="https://scalegrid.io/pricing.html?db=MONGODB&cloud=cloud_digital_ocean&replica=deployment_mongodb_2+1&instance=Micro#section_pricing_dedicated" target="_blank">2 Replicas + Arbiter</a></td>
</tr>
</tbody>
</table>
<p style="text-align: justify; padding-top: 10px;">Our Dedicated Hosting plans are all-inclusive, including all machine, disk, and network costs, as well as 24/7 support. These plans are fully managed for you across any of these cloud providers, and comes with a comprehensive console to automate all of your database management, monitoring and maintenance tasks in the cloud.</p>
<p style="text-align: justify; padding-bottom: 10px;">Let’s take a look at how ScaleGrid Dedicated Hosting pricing compares across AWS vs. Azure vs. DigitalOcean:</p>
<table style="background: #def5fe;" width="100%">
<tbody>
<tr style="border-bottom: 2px solid white;">
<th style="padding: 20px 15px;" width="40%"><b>ScaleGrid Dedicated Plans</b></th>
<th style="padding: 20px 15px; border-left: 2px solid white;" width="20%"><b>AWS</b></th>
<th style="padding: 20px 15px; border-left: 2px solid white;" width="20%"><b>Azure</b></th>
<th style="padding: 20px 15px; border-left: 2px solid white;" width="20%"><b>DigitalOcean</b></th>
</tr>
<tr style="border-bottom: 2px solid white;">
<td style="padding: 15px;">2GB</td>
<td style="padding: 15px; border-left: 2px solid white;"><a style="color: #2da964;" href="https://scalegrid.io/pricing.html?db=MONGODB&cloud=cloud_aws_standard&replica=deployment_mongodb_2+1&instance=Small#section_pricing_dedicated" target="_blank">$190</a></td>
<td style="padding: 15px; border-left: 2px solid white;"><a style="color: #2da964;" href="https://scalegrid.io/pricing.html?db=MONGODB&cloud=cloud_azure&replica=deployment_mongodb_2+1&instance=Small#section_pricing_dedicated" target="_blank">$187</a></td>
<td style="padding: 15px; border-left: 2px solid white;"><a style="color: #2da964;" href="https://scalegrid.io/pricing.html?db=MONGODB&cloud=cloud_digital_ocean&replica=deployment_mongodb_2+1&instance=Micro#section_pricing_dedicated" target="_blank">$104</a></td>
</tr>
<tr style="border-bottom: 2px solid white;">
<td style="padding: 15px;">4GB</td>
<td style="padding: 15px; border-left: 2px solid white;"><a style="color: #2da964;" href="https://scalegrid.io/pricing.html?db=MONGODB&cloud=cloud_aws_standard&replica=deployment_mongodb_2+1&instance=Medium#section_pricing_dedicated" target="_blank">$330</a></td>
<td style="padding: 15px; border-left: 2px solid white;"><a style="color: #2da964;" href="https://scalegrid.io/pricing.html?db=MONGODB&cloud=cloud_azure&replica=deployment_mongodb_2+1&instance=Medium#section_pricing_dedicated" target="_blank">$374</a></td>
<td style="padding: 15px; border-left: 2px solid white;"><a style="color: #2da964;" href="https://scalegrid.io/pricing.html?db=MONGODB&cloud=cloud_digital_ocean&replica=deployment_mongodb_2+1&instance=Small#section_pricing_dedicated" target="_blank">$140</a></td>
</tr>
<tr style="border-bottom: 2px solid white;">
<td style="padding: 15px;">8GB</td>
<td style="padding: 15px; border-left: 2px solid white;"><a style="color: #2da964;" href="https://scalegrid.io/pricing.html?db=MONGODB&cloud=cloud_aws_standard&replica=deployment_mongodb_2+1&instance=Large#section_pricing_dedicated" target="_blank">$657</a></td>
<td style="padding: 15px; border-left: 2px solid white;"><a style="color: #2da964;" href="https://scalegrid.io/pricing.html?db=MONGODB&cloud=cloud_azure&replica=deployment_mongodb_2+1&instance=Large#section_pricing_dedicated" target="_blank">$750</a></td>
<td style="padding: 15px; border-left: 2px solid white;"><a style="color: #2da964;" href="https://scalegrid.io/pricing.html?db=MONGODB&cloud=cloud_digital_ocean&replica=deployment_mongodb_2+1&instance=Medium#section_pricing_dedicated" target="_blank">$300</a></td>
</tr>
<tr style="border-bottom: 2px solid white;">
<td style="padding: 15px;">16GB</td>
<td style="padding: 15px; border-left: 2px solid white;"><a style="color: #2da964;" href="https://scalegrid.io/pricing.html?db=MONGODB&cloud=cloud_aws_standard&replica=deployment_mongodb_2+1&instance=XLarge#section_pricing_dedicated" target="_blank">$1,164</a></td>
<td style="padding: 15px; border-left: 2px solid white;"><a style="color: #2da964;" href="https://scalegrid.io/pricing.html?db=MONGODB&cloud=cloud_azure&replica=deployment_mongodb_2+1&instance=XLarge#section_pricing_dedicated" target="_blank">$1,250</a></td>
<td style="padding: 15px; border-left: 2px solid white;"><a style="color: #2da964;" href="https://scalegrid.io/pricing.html?db=MONGODB&cloud=cloud_digital_ocean&replica=deployment_mongodb_2+1&instance=Large#section_pricing_dedicated" target="_blank">$500</a></td>
</tr>
<tr style="border-bottom: 2px solid white;">
<td style="padding: 15px;">32GB</td>
<td style="padding: 15px; border-left: 2px solid white;"><a style="color: #2da964;" href="https://scalegrid.io/pricing.html?db=MONGODB&cloud=cloud_aws_standard&replica=deployment_mongodb_2+1&instance=X2XLarge#section_pricing_dedicated" target="_blank">$1,912</a></td>
<td style="padding: 15px; border-left: 2px solid white;"><a style="color: #2da964;" href="https://scalegrid.io/pricing.html?db=MONGODB&cloud=cloud_azure&replica=deployment_mongodb_2+1&instance=X2XLarge#section_pricing_dedicated" target="_blank">$2,025</a></td>
<td style="padding: 15px; border-left: 2px solid white;"><a style="color: #2da964;" href="https://scalegrid.io/pricing.html?db=MONGODB&cloud=cloud_digital_ocean&replica=deployment_mongodb_2+1&instance=XLarge#section_pricing_dedicated" target="_blank">$800</a></td>
</tr>
</tbody>
</table>
<h3 style="padding-top: 15px;">How much can you save migrating to DigitalOcean?</h3>
<p style="text-align: justify; padding-bottom: 10px;">So, are you’re deploying MongoDB® database on <a style="color: #2da964;" href="https://scalegrid.io/mongodb/aws.html" target="_blank">AWS</a> or <a style="color: #2da964;" href="https://scalegrid.io/mongodb/azure.html" target="_blank">Azure</a>, and wondering how you can lower your database hosting costs? Let’s see how much you can save by migrating your hosting for <a style="color: #2da964;" href="https://scalegrid.io/mongodb/digitalocean.html" target="_blank">MongoDB® database to DigitalOcean</a>:</p>
<p style="text-align: justify;"><a href="https://scalegrid.io/blog/reducing-your-database-hosting-costs-digitalocean-vs-aws-vs-azure/#savings"><img class="alignnone size-full wp-image-5274" src="https://scalegrid.io/blog/wp-content/uploads/2020/04/DigitalOcean-Cost-Savings-vs-AWS-and-Azure-Database-Hosting-at-ScaleGrid.png" alt="DigitalOcean Savings Over AWS and Azure for Database Hosting at ScaleGrid" width="100%" height="auto" /></a></p>
<p style="text-align: justify;">ScaleGrid’s Dedicated Hosting service with 2 Replicas + Arbiter for MongoDB® database on DigitalOcean saves you on average 122% on your monthly AWS hosting costs, and 140% on your monthly Azure hosting costs. The above chart outlines the cost savings across different plans, and ranges from around 80% cost-savings for 2GB of RAM, up to 153% cost-savings in our 32GB of RAM plan size.</p>
<h2 style="padding-top: 15px;">DigitalOcean Advantages</h2>
<p style="text-align: justify;">DigitalOcean provides many advantages for database hosting, and you can learn more about them in our <a style="color: #2da964;" href="https://scalegrid.io/blog/the-best-way-to-host-mongodb-on-digitalocean/" target="_blank">The Best Way to Host MongoDB on DigitalOcean</a> blog post. Here’s a quick overview of the key advantages:</p>
<ul>
<li>Developer-friendly</li>
<li>Simple pricing</li>
<li>SSD-based VMs</li>
<li>High performance</li>
</ul>
<h2 style="padding-top: 15px;">DigitalOcean Hosting FAQs</h2>
<h3 style="padding-top: 15px;">Is my database cluster still highly available?</h3>
<p style="text-align: justify;">Yes. All of our high availability options are offered in DigitalOcean, including 2 Replicas + 1 Arbiter, 3 Replicas and custom replica set setups. DigitalOcean does not have the concept of availability zones (AZ), so we distribute the nodes across <a style="color: #2da964;" title="MongoDB DigitalOcean Regions - ScaleGrid" href="https://help.scalegrid.io/docs/mongodb-data-centers#digitalocean-data-centers" target="_blank">different regions</a>. For example, in the US, we distribute nodes across New York 3, New York 2 and New York 1.</p>
<h3 style="padding-top: 15px;">Does it affect latency?</h3>
<p style="text-align: justify;">Yes, you can see an increase in latency. Ideally, we would want to see both the application and the database in the same datacenter. So, if you're hosting your application in AWS or Azure and move your database to DigitalOcean, you will see an increase in latency. However, the average latencies between AWS US-East and the DigitalOcean New York datacenter locations are typically only 17.4 ms round trip time.</p>
<h3 style="padding-top: 15px;">How can I migrate?</h3>
<p style="text-align: justify; padding-bottom: 20px;">ScaleGrid provides an Import wizard to <a style="color: #2da964;" href="https://help.scalegrid.io/docs/mongodb-migrations-between-plans" target="_blank">migrate data from one cluster to another.</a> If you have any special needs for your migration, please contact <a style="color: #2da964;" href="mailto:support@scalegrid.io" target="_blank">support@scalegrid.io</a>.</p>
| scalegridio |
321,794 | A workout routine with Oracle DEV GYM | As developers , we must to find the opportunity to go to the gym and be healthy, count with an... | 0 | 2020-04-29T03:02:37 | https://dev.to/ricdev2/a-workout-routine-with-oracle-dev-gym-14an | java, programming, productivity, sql | ---
title: A workout routine with Oracle DEV GYM
published: true
description:
tags: java, programming, productivity, sql
cover_image: https://images.unsplash.com/photo-1556817411-31ae72fa3ea0?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=crop&w=3150&q=80
---
As developers, we should find the opportunity to go to the gym and stay healthy, with a good routine so we can see results soon. But we shouldn't forget that it's not just the body that needs exercise: the brain does too, and it needs a special gym of its own. The gym I'm going to talk about isn't a real gym with sweat and weights. It's a gym where programmers who know Java, SQL, databases, and logic go to get in shape by solving exercises, improving their logic, and participating in tournaments.
So, Oracle designed a website for practice, with a lot of exercises, tournaments, and classes.

Register.

You can check your score and trophies.

The only thing you need to do is register and start the exercises. I hope this helps you.
Link:
[https://devgym.oracle.com/pls/apex/f?p=10001:2001::::2001::](https://devgym.oracle.com/pls/apex/f?p=10001:2001::::2001::)
| ricdev2 |
321,822 | Aprendi Lógica de Programação. E agora? | Escrito por @maiarquino Aprendi Lógica de Programação. E ago... | 0 | 2020-04-29T01:21:47 | https://dev.to/elasprogramam/aprendi-logica-de-programacao-e-agora-356p | iniciante, desenvolvimento, programar, carreira | Written by @maiarquino
{% link https://dev.to/maiarquino/aprendi-logica-de-programacao-e-agora-419k %} | elasprogramam |
322,543 | Predicting fines for GDPR violations with tidymodels | Recently we on the tidymodels team launched tidymodels.org, a new central location with resources and... | 0 | 2020-07-17T17:15:26 | https://juliasilge.com/blog/gdpr-violations/ | machinelearning, datascience, tutorial, rstats | ---
title: Predicting fines for GDPR violations with tidymodels
published: true
date: 2020-04-22 00:00:00 UTC
tags:
canonical_url: https://juliasilge.com/blog/gdpr-violations/
---
Recently we on the tidymodels team launched [tidymodels.org](https://www.tidymodels.org/), a new central location with resources and documentation for tidymodels packages. There is a TON to explore and learn there! 🚀 You can check out the [official blog post](https://www.tidyverse.org/blog/2020/04/tidymodels-org/) for more details.
I'm publishing here [another screencast demonstrating how to use tidymodels](https://juliasilge.com/category/tidymodels/). This is a good video for folks getting started with tidymodels, using a recent [`#TidyTuesday` dataset](https://github.com/rfordatascience/tidytuesday) on GDPR violations. SCARY!!! 😱
<!--html_preserve-->
<iframe src="https://www.youtube.com/embed/HvODHnXHJf8" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" allowfullscreen title="YouTube Video"></iframe>
<!--/html_preserve-->
Here is the code I used in the video, for those who prefer reading instead of or in addition to video.
## Explore the data
Our modeling goal here is to understand what kind of GDPR violations are associated with higher fines in the [#TidyTuesday dataset](https://github.com/rfordatascience/tidytuesday/blob/master/data/2020/2020-04-21/readme.md) for this week. Before we start, what are the most common GDPR articles actually about? I am not a lawyer, but very roughly:
- **Article 5:** principles for processing personal data (legitimate purpose, limited)
- **Article 6:** lawful processing of personal data (i.e. consent, etc)
- **Article 13:** inform subject when personal data is collected
- **Article 15:** right of access by data subject
- **Article 32:** security of processing (i.e. data breaches)
Let’s get started by looking at the data on violations.
```
library(tidyverse)
gdpr_raw <- readr::read_tsv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2020/2020-04-21/gdpr_violations.tsv")
gdpr_raw
## # A tibble: 250 x 11
## id picture name price authority date controller article_violated type
## <dbl> <chr> <chr> <dbl> <chr> <chr> <chr> <chr> <chr>
## 1 1 https:… Pola… 9380 Polish N… 10/1… Polish Ma… Art. 28 GDPR Non-…
## 2 2 https:… Roma… 2500 Romanian… 10/1… UTTIS IND… Art. 12 GDPR|Ar… Info…
## 3 3 https:… Spain 60000 Spanish … 10/1… Xfera Mov… Art. 5 GDPR|Art… Non-…
## 4 4 https:… Spain 8000 Spanish … 10/1… Iberdrola… Art. 31 GDPR Fail…
## 5 5 https:… Roma… 150000 Romanian… 10/0… Raiffeise… Art. 32 GDPR Fail…
## 6 6 https:… Roma… 20000 Romanian… 10/0… Vreau Cre… Art. 32 GDPR|Ar… Fail…
## 7 7 https:… Gree… 200000 Hellenic… 10/0… Telecommu… Art. 5 (1) c) G… Fail…
## 8 8 https:… Gree… 200000 Hellenic… 10/0… Telecommu… Art. 21 (3) GDP… Fail…
## 9 9 https:… Spain 30000 Spanish … 10/0… Vueling A… Art. 5 GDPR|Art… Non-…
## 10 10 https:… Roma… 9000 Romanian… 09/2… Inteligo … Art. 5 (1) a) G… Non-…
## # … with 240 more rows, and 2 more variables: source <chr>, summary <chr>
```
How are the fines distributed?
```
gdpr_raw %>%
ggplot(aes(price + 1)) +
geom_histogram(fill = "midnightblue", alpha = 0.7) +
scale_x_log10(labels = scales::dollar_format(prefix = "€")) +
labs(x = "GDPR fine (EUR)", y = "GDPR violations")
```

Some of the violations were fined zero EUR. Let’s make a one-article-per-row version of this dataset.
```
gdpr_tidy <- gdpr_raw %>%
transmute(id,
price,
country = name,
article_violated,
articles = str_extract_all(article_violated, "Art.[:digit:]+|Art. [:digit:]+")
) %>%
mutate(total_articles = map_int(articles, length)) %>%
unnest(articles) %>%
add_count(articles) %>%
filter(n > 10) %>%
select(-n)
gdpr_tidy
## # A tibble: 304 x 6
## id price country article_violated articles total_articles
## <dbl> <dbl> <chr> <chr> <chr> <int>
## 1 2 2500 Romania Art. 12 GDPR|Art. 13 GDPR|Art. … Art. 13 4
## 2 2 2500 Romania Art. 12 GDPR|Art. 13 GDPR|Art. … Art. 5 4
## 3 2 2500 Romania Art. 12 GDPR|Art. 13 GDPR|Art. … Art. 6 4
## 4 3 60000 Spain Art. 5 GDPR|Art. 6 GDPR Art. 5 2
## 5 3 60000 Spain Art. 5 GDPR|Art. 6 GDPR Art. 6 2
## 6 5 150000 Romania Art. 32 GDPR Art. 32 1
## 7 6 20000 Romania Art. 32 GDPR|Art. 33 GDPR Art. 32 2
## 8 7 200000 Greece Art. 5 (1) c) GDPR|Art. 25 GDPR Art. 5 2
## 9 9 30000 Spain Art. 5 GDPR|Art. 6 GDPR Art. 5 2
## 10 9 30000 Spain Art. 5 GDPR|Art. 6 GDPR Art. 6 2
## # … with 294 more rows
```
How are the fines distributed by article?
```
library(ggbeeswarm)
gdpr_tidy %>%
mutate(
articles = str_replace_all(articles, "Art. ", "Article "),
articles = fct_reorder(articles, price)
) %>%
ggplot(aes(articles, price + 1, color = articles, fill = articles)) +
geom_boxplot(alpha = 0.2, outlier.colour = NA) +
geom_quasirandom() +
scale_y_log10(labels = scales::dollar_format(prefix = "€")) +
labs(
x = NULL, y = "GDPR fine (EUR)",
title = "GDPR fines levied by article",
subtitle = "For 250 violations in 25 countries"
) +
theme(legend.position = "none")
```

Now let’s create a dataset for modeling.
```
gdpr_violations <- gdpr_tidy %>%
mutate(value = 1) %>%
select(-article_violated) %>%
pivot_wider(
names_from = articles, values_from = value,
values_fn = list(value = max), values_fill = list(value = 0)
) %>%
janitor::clean_names()
gdpr_violations
## # A tibble: 219 x 9
## id price country total_articles art_13 art_5 art_6 art_32 art_15
## <dbl> <dbl> <chr> <int> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 2 2500 Romania 4 1 1 1 0 0
## 2 3 60000 Spain 2 0 1 1 0 0
## 3 5 150000 Romania 1 0 0 0 1 0
## 4 6 20000 Romania 2 0 0 0 1 0
## 5 7 200000 Greece 2 0 1 0 0 0
## 6 9 30000 Spain 2 0 1 1 0 0
## 7 10 9000 Romania 2 0 1 1 0 0
## 8 11 195407 Germany 3 0 0 0 0 1
## 9 12 10000 Belgium 1 0 1 0 0 0
## 10 13 644780 Poland 1 0 0 0 1 0
## # … with 209 more rows
```
We are ready to go!
## Build a model
Let’s preprocess our data to get it ready for modeling.
```
library(tidymodels)
gdpr_rec <- recipe(price ~ ., data = gdpr_violations) %>%
update_role(id, new_role = "id") %>%
step_log(price, base = 10, offset = 1, skip = TRUE) %>%
step_other(country, other = "Other") %>%
step_dummy(all_nominal()) %>%
step_zv(all_predictors())
gdpr_prep <- prep(gdpr_rec)
gdpr_prep
## Data Recipe
##
## Inputs:
##
## role #variables
## id 1
## outcome 1
## predictor 7
##
## Training data contained 219 data points and no missing data.
##
## Operations:
##
## Log transformation on price [trained]
## Collapsing factor levels for country [trained]
## Dummy variables from country [trained]
## Zero variance filter removed no terms [trained]
```
Let’s walk through the steps in this recipe.
- First, we must tell the `recipe()` what our model is going to be (using a formula here) and what data we are using.
- Next, we update the role for `id`, since this variable is not a predictor or outcome but I would like to keep it in the data for convenience.
- Next, we take the log of the outcome (`price`, the amount of the fine).
- There are a lot of countries in this dataset, so let’s collapse some of the less frequently occurring countries into another `"Other"` category.
- Finally, we can create indicator variables and remove variables with zero variance.
Before using `prep()` these steps have been defined but not actually run or implemented. The `prep()` function is where everything gets evaluated.
Now it’s time to specify our model. I am using a [`workflow()`](https://tidymodels.github.io/workflows/) in this example for convenience; these are objects that can help you manage modeling pipelines more easily, with pieces that fit together like Lego blocks. This `workflow()` contains both the recipe and the model (a straightforward OLS linear regression).
```
gdpr_wf <- workflow() %>%
add_recipe(gdpr_rec) %>%
add_model(linear_reg() %>%
set_engine("lm"))
gdpr_wf
## ══ Workflow ═══════════════════════════════════════════════════════════════════════════
## Preprocessor: Recipe
## Model: linear_reg()
##
## ── Preprocessor ───────────────────────────────────────────────────────────────────────
## 4 Recipe Steps
##
## ● step_log()
## ● step_other()
## ● step_dummy()
## ● step_zv()
##
## ── Model ──────────────────────────────────────────────────────────────────────────────
## Linear Regression Model Specification (regression)
##
## Computational engine: lm
```
You can `fit()` a workflow, much like you can fit a model, and then you can pull out the fit object and `tidy()` it!
```
gdpr_fit <- gdpr_wf %>%
fit(data = gdpr_violations)
gdpr_fit
## ══ Workflow [trained] ═════════════════════════════════════════════════════════════════
## Preprocessor: Recipe
## Model: linear_reg()
##
## ── Preprocessor ───────────────────────────────────────────────────────────────────────
## 4 Recipe Steps
##
## ● step_log()
## ● step_other()
## ● step_dummy()
## ● step_zv()
##
## ── Model ──────────────────────────────────────────────────────────────────────────────
##
## Call:
## stats::lm(formula = formula, data = data)
##
## Coefficients:
## (Intercept) total_articles art_13
## 3.76607 0.47957 -0.76251
## art_5 art_6 art_32
## -0.41869 -0.55988 -0.15317
## art_15 country_Czech.Republic country_Germany
## -1.56765 -0.64953 0.05974
## country_Hungary country_Romania country_Spain
## -0.15532 -0.34580 0.42968
## country_Other
## 0.23438
gdpr_fit %>%
pull_workflow_fit() %>%
tidy() %>%
arrange(estimate) %>%
kable()
```
| term | estimate | std.error | statistic | p.value |
| --- | --- | --- | --- | --- |
| art\_15 | -1.5676538 | 0.4651576 | -3.3701564 | 0.0008969 |
| art\_13 | -0.7625069 | 0.4074302 | -1.8715031 | 0.0626929 |
| country\_Czech.Republic | -0.6495339 | 0.4667470 | -1.3916188 | 0.1655387 |
| art\_6 | -0.5598765 | 0.2950382 | -1.8976404 | 0.0591419 |
| art\_5 | -0.4186949 | 0.2828869 | -1.4800789 | 0.1403799 |
| country\_Romania | -0.3457980 | 0.4325560 | -0.7994295 | 0.4249622 |
| country\_Hungary | -0.1553232 | 0.4790037 | -0.3242631 | 0.7460679 |
| art\_32 | -0.1531725 | 0.3146769 | -0.4867613 | 0.6269450 |
| country\_Germany | 0.0597408 | 0.4189434 | 0.1425986 | 0.8867465 |
| country\_Other | 0.2343787 | 0.3551225 | 0.6599939 | 0.5099950 |
| country\_Spain | 0.4296805 | 0.3643060 | 1.1794494 | 0.2395796 |
| total\_articles | 0.4795667 | 0.1656494 | 2.8950705 | 0.0041993 |
| (Intercept) | 3.7660677 | 0.4089156 | 9.2098904 | 0.0000000 |
GDPR violations of more than one article have higher fines.
## Explore results
Lots of those coefficients have big p-values (for example, all the countries) but I think the best way to understand these results will be to visualize some predictions. You can predict on new data in tidymodels with either a model or a `workflow()`.
Let’s create some example new data that we are interested in.
```
new_gdpr <- crossing(
country = "Other",
art_5 = 0:1,
art_6 = 0:1,
art_13 = 0:1,
art_15 = 0:1,
art_32 = 0:1
) %>%
mutate(
id = row_number(),
total_articles = art_5 + art_6 + art_13 + art_15 + art_32
)
new_gdpr
## # A tibble: 32 x 8
## country art_5 art_6 art_13 art_15 art_32 id total_articles
## <chr> <int> <int> <int> <int> <int> <int> <int>
## 1 Other 0 0 0 0 0 1 0
## 2 Other 0 0 0 0 1 2 1
## 3 Other 0 0 0 1 0 3 1
## 4 Other 0 0 0 1 1 4 2
## 5 Other 0 0 1 0 0 5 1
## 6 Other 0 0 1 0 1 6 2
## 7 Other 0 0 1 1 0 7 2
## 8 Other 0 0 1 1 1 8 3
## 9 Other 0 1 0 0 0 9 1
## 10 Other 0 1 0 0 1 10 2
## # … with 22 more rows
```
Let’s find both the mean predictions and the confidence intervals.
```
mean_pred <- predict(gdpr_fit,
new_data = new_gdpr
)
conf_int_pred <- predict(gdpr_fit,
new_data = new_gdpr,
type = "conf_int"
)
gdpr_res <- new_gdpr %>%
bind_cols(mean_pred) %>%
bind_cols(conf_int_pred)
gdpr_res
## # A tibble: 32 x 11
## country art_5 art_6 art_13 art_15 art_32 id total_articles .pred
## <chr> <int> <int> <int> <int> <int> <int> <int> <dbl>
## 1 Other 0 0 0 0 0 1 0 4.00
## 2 Other 0 0 0 0 1 2 1 4.33
## 3 Other 0 0 0 1 0 3 1 2.91
## 4 Other 0 0 0 1 1 4 2 3.24
## 5 Other 0 0 1 0 0 5 1 3.72
## 6 Other 0 0 1 0 1 6 2 4.04
## 7 Other 0 0 1 1 0 7 2 2.63
## 8 Other 0 0 1 1 1 8 3 2.96
## 9 Other 0 1 0 0 0 9 1 3.92
## 10 Other 0 1 0 0 1 10 2 4.25
## # … with 22 more rows, and 2 more variables: .pred_lower <dbl>,
## # .pred_upper <dbl>
```
There are lots of things we can do with these results! For example, what are the predicted GDPR fines for violations of each article type (violating only one article)?
```
gdpr_res %>%
filter(total_articles == 1) %>%
pivot_longer(art_5:art_32) %>%
filter(value > 0) %>%
mutate(
name = str_replace_all(name, "art_", "Article "),
name = fct_reorder(name, .pred)
) %>%
ggplot(aes(name, 10^.pred, color = name)) +
geom_point(size = 3.5) +
geom_errorbar(aes(
ymin = 10^.pred_lower,
ymax = 10^.pred_upper
),
width = 0.2, alpha = 0.7
) +
labs(
x = NULL, y = "Increase in fine (EUR)",
title = "Predicted fine for each type of GDPR article violation",
subtitle = "Modeling based on 250 violations in 25 countries"
) +
scale_y_log10(labels = scales::dollar_format(prefix = "€", accuracy = 1)) +
theme(legend.position = "none")
```

We can see here that violations such as data breaches have higher fines on average than violations about rights of access. | juliasilge |
323,319 | WeWatch - virtual couch to watch videos with people | What I built During COVID-19, people are bound at their homes. Times of watching shows tog... | 0 | 2020-05-01T06:59:01 | https://dev.to/michaelxie/wewatch-virtual-couch-to-watch-videos-with-people-44md | twiliohackathon | ## What I built
During COVID-19, people are stuck at home. Watching shows together with your friends feels like the distant past.
WeWatch is a virtual couch for friends to watch videos together.
1. Invite friends to same room
2. Paste in video link (currently supports YouTube and other sites)
- video is synchronized across each client (play, pause, scrub)
3. React to videos while video streaming with your friends
#### Category Submission:
COVID-19 Communications
## Demo Link
None yet
## Link to Code
https://github.com/Michael-Xie/wewatch
## How I built it (what's the stack? did I run into issues or discover something new along the way?)
- React, Express, Node, Twilio Sync, (Twilio Programmable Video)
- react-player
### WIP
- video streaming for participants
### Issues
- there are a lot of sync updates, even though play or pause was pressed only once
- as play and pause are pressed more, more lag builds up between clients
## Additional Resources/Info | michaelxie |
325,882 | Setting up Multiple Environments on React Native for iOS and Android | Tutorial explaining how to set up multiple environments on React native for iOS and Android and be able to install each version on the same device (spoiler: different bundle identifiers/application suffixes). | 0 | 2020-05-02T17:35:17 | https://dev.to/therealemjy/setting-up-multiple-environments-on-react-native-for-ios-and-android-e5j | reactnative, mobileprogramming, ios, android | ---
title: Setting up Multiple Environments on React Native for iOS and Android
published: true
description: Tutorial explaining how to set up multiple environments on React native for iOS and Android and be able to install each version on the same device (spoiler: different bundle identifiers/application suffixes).
tags: React Native, Mobile Programming, iOS, Android
---
As soon as a project starts scaling, it becomes obvious that the ability to have more than one environment is crucial.
I just published an article talking about just that; check it out!
https://medium.com/@maximejulian/setting-up-multiple-environments-on-react-native-for-ios-and-android-c43f3128754f | therealemjy |
327,145 | Quiz App with React | Made quiz app with React. Pls do give ur valuable feedback on it... https://realquizbee.netlify.app/... | 0 | 2020-05-04T15:41:10 | https://dev.to/gauravsingh9356/quiz-app-with-react-1bcn | react, javascript, devops, css | Made a quiz app with React. Please give your valuable feedback on it...
https://realquizbee.netlify.app/ | gauravsingh9356 |
327,152 | My first blog | Hi my name is hamza and i am from pakistan i love to do seo work and i have small website you can vis... | 0 | 2020-05-04T16:08:11 | https://freewindowsvpslifetime.com | freevps, vps | Hi, my name is Hamza and I am from Pakistan. I love doing SEO work and I have a small <a href="https://freewindowsvpslifetime.com">website</a>. You can visit my <a href="http://freevpslifetime.com">free VPS website</a> any time for amazing tutorials. | windows_vps |
327,232 | Becoming a Developer in 15 weeks | My Journey Through a Coding Bootcamp and Should You Do It? | Today marks the first day of the last week of my 15 week Code Bootcamp and it's been quite the experi... | 0 | 2020-05-04T20:53:33 | https://dev.to/austinoso/becoming-a-developer-in-15-weeks-my-journey-through-a-coding-bootcamp-and-should-you-do-it-2ngj | Today marks the first day of the last week of my 15-week coding bootcamp, and it's been quite the experience. Assuming you're reading this article because you have an interest in coding bootcamps, my hope is that this gives you an inside look at what it's like to attend one.
## **Before Bootcamp And Why I Decided It Was Right For Me**
To be upfront, I wasn't entirely new to web development before starting this course. I had worked on some basic HTML and CSS projects before, and I did take a few Udemy courses. My initial idea was to just learn entirely from home. I figured that with Udemy courses being only $10-12 on sale and the multitude of free online resources available, I shouldn't have any issues learning everything online. However, I found out that it wasn't the best option for me.
While it is entirely possible to learn everything from home, and probably for free, I didn't think it was right for me. Here are a few of my reasons:
- Time: I already really enjoyed programming and I knew it was right for me. But while looking into it, I realized that learning from home, without much of a structured environment or a curriculum pointing me in the right direction, wouldn't be as painless as I thought it would be. Reading a lot of other blogs, I came to realize it might take quite a bit longer than I anticipated. Plus, I wasn't sure at what point I'd be ready to work.
- Environment: One smaller thing bootcamps provided was the environment. Being able to socialize with people in the same position with similar mindsets and having instructors and coaches on standby appealed to me.
- Career Services: Most bootcamps offer some sort of career services like helping you build a great resumé and online presence, mock interviews, and other such support while job searching.
- Soft Skills: One major thing companies look for today is soft skills. Having always taught myself and being more of a shut-in, I didn't want to be lacking any social skills or be in a position where I couldn't explain my code or thought process to my coworkers.
## **After My Decision/Deciding Where I Should Go**
After I figured out I wanted to attend a bootcamp, I spent a lot of time figuring out which one to go to. Even though it was a bit of a commute, I decided I should probably look for schools in San Francisco. I spent hours reading student reviews and blogs and going through each camp's website carefully until it came down to two options. I went through the prep work for both camps, went through a personality and technical interview with both, and was accepted by both schools. Ultimately I decided to go with Flatiron as more of a gut choice, since both schools seemed almost identical on paper. My choice was mainly swayed by the vibe I got. Plus, I was able to check out the Flatiron campus beforehand, whereas with my other option I didn't really have that ability until after I had to make my decision.
## **During Bootcamp**
Unfortunately, towards the middle-to-end point of my program, the COVID-19 pandemic struck, forcing us to work from home. But during my time on campus, the only complaint I had was the commute. Traveling from the Central Valley to San Francisco took about 4 hours each day, 5 days a week. Other than that, I loved every second of it. Of course, at times I struggled, and I tend to be the type who likes to figure things out on my own. But one thing I realized is that asking for help is often the best thing to do once you get too stuck. Often, something that is challenging for you is something someone else has a little more experience with, and they can help you where you get confused. Or you just misspelled a variable and need a fresh set of eyes.
## **After Bootcamp?**
Well, I'm not quite sure yet. Like most bootcamps, my program offers career coaching, and I'm just getting started on that. From just the one meeting I've had with my coach, I feel pretty confident that with their help I'll be able to land a job within 6 months. My plan is to update this part when and if that time comes.
## **Should You Go To A Coding Bootcamp?**
I would say it depends. Before asking that question, I feel you should first ask yourself if you really want to become a developer. Seek out real experiences from actual developers and find out what their daily life is like. While I'm not trying to discredit "Day in the Life" videos, they often don't paint an accurate picture of the actual day-to-day; at least, that's what I've understood from the majority of developers. Also, try practicing a bit at home first. Take a few Udemy courses (courses by Colt Steele are great), or use one of the many free options available.
But my answer is: if your answer to that question is "Yes", you want an accelerated learning experience, and you have the savings or someone who can support you, then go for it. A bootcamp was one of the greatest experiences of my life, and this is a bit of a bittersweet ending. While I'm a bit sad it's all over, I'm excited for the future I've been prepared for.
| austinoso | |
327,444 | Web Surgery - Ep.6 - Adding Netlify CMS to the website | This episode we will start to add Netlify CMS so the website. It can easily be done by adding an html and config.yml under admin. It is promising running on localhost but it is not fully working yet because we still have to figure out the Auth for GitHub, also we may add more functionality in the future, tell me what you suggest at my [Twitch](https://www.twitch.tv/cheukting_ho). | 0 | 2020-05-05T01:08:18 | https://dev.to/cheukting_ho/web-surgery-ep-6-adding-netlify-cms-to-the-website-55la | jamstack, webdev, jekyll, netlifycms | ---
title: Web Surgery - Ep.6 - Adding Netlify CMS to the website
published: true
description: This episode we will start to add Netlify CMS so the website. It can easily be done by adding an html and config.yml under admin. It is promising running on localhost but it is not fully working yet because we still have to figure out the Auth for GitHub, also we may add more functionality in the future, tell me what you suggest at my [Twitch](https://www.twitch.tv/cheukting_ho).
tags: JAMStack, Webdev, Jekyll, NetlifyCMS
---
This episode we will start to add Netlify CMS to the website. It can easily be done by adding an HTML file and a config.yml under admin. It looks promising running on localhost, but it is not fully working yet because we still have to figure out GitHub auth. We may also add more functionality in the future; tell me what you suggest at my [Twitch](https://www.twitch.tv/cheukting_ho).
The code is work in progress but if you want to check it out, it will be uploaded and updated in [this repo](https://github.com/Cheukting/animal-crossing-wishlist)
| cheukting_ho |
327,261 | How to sync data from Coda to Google Sheets (and vice versa) with Google Apps Script tutorial | Keep your data synced across your Coda docs and Google Sheets so you don't have to copy and paste anymore. | 0 | 2020-05-04T20:31:40 | https://coda.io/@atc/how-to-sync-data-from-coda-to-google-sheets-and-vice-versa-with-google-apps-script-tutorial | googlesheets, coda, googleappsscript, tutorial | ---
title: How to sync data from Coda to Google Sheets (and vice versa) with Google Apps Script tutorial
description: Keep your data synced across your Coda docs and Google Sheets so you don't have to copy and paste anymore.
published: true
date: 2020-05-04 17:56:22 UTC
tags: google sheets, coda, google apps script, tutorial
canonical_url: https://coda.io/@atc/how-to-sync-data-from-coda-to-google-sheets-and-vice-versa-with-google-apps-script-tutorial
cover_image: https://p-ZmF7dQ.b0.n0.cdn.getcloudapp.com/items/eDu6gJdK/sync_cover.jpg?v=ac30e3861b0816b32558bc9609efd701
---
## Two new scripts
Last year I published [a tutorial](https://coda.io/@atc/how-to-sync-data-between-coda-docs-and-google-sheets-using-googl) on how to sync data between two Coda docs and data between two Google Sheets. What was missing from the tutorial was how to sync data between a **Coda doc** and a **Google Sheet**. Writing these scripts was definitely more challenging than the original script I wrote for syncing two Coda docs since the data model for Coda is different from Google Sheets. Please read the [caveats](https://coda.io/d/How-to-sync-data-from-Coda-to-Google-Sheets-and-vice-versa-with-_dv4i9X8bdFe/How-to-sync-data-from-Coda-to-Google-Sheets-and-vice-versa-with-_su8ir#_lueHe) below about these scripts to learn about some of the roadblocks I encountered when writing these scripts.
If you are reading this, chances are you have a lot of experience with Google Sheets, Coda, and perhaps the [Coda API](https://coda.io/developers/apis/v1beta1). I’m going to skip the introduction to Coda as I did with the [last tutorial](https://coda.io/@atc/how-to-sync-data-between-coda-docs-and-google-sheets-using-googl) and get straight to the point on how you can:
1. Sync data from [Coda -> Google Sheets](https://coda.io/d/How-to-sync-data-from-Coda-to-Google-Sheets-and-vice-versa-with-_dv4i9X8bdFe/Coda-Google-Sheets_suhDq)
2. Sync data from [Google Sheets -> Coda](https://coda.io/d/How-to-sync-data-from-Coda-to-Google-Sheets-and-vice-versa-with-_dv4i9X8bdFe/Google-Sheets-Coda_suKcq)
_If you want to skip right to using the Google Apps Scripts, go to the other two pages in this doc (mentioned above) or go to_ [_this repo_](https://github.com/albertc44/coda-google-apps-script) _which contains all four scripts for syncing data (PRs welcome). Here are two video tutorials if you prefer a visual tutorial._
### Coda to Google Sheets
{% youtube mAdAe8GVCdA %}
### Google Sheets to Coda
{% youtube xVWu9jdBm_U %}
## Features
There are some limitations to the scripts which I’ll discuss later on in this blog post, but these are the main features for each script:
### [Coda -> Google Sheets](https://coda.io/d/How-to-sync-data-from-Coda-to-Google-Sheets-and-vice-versa-with-_dv4i9X8bdFe/Coda-Google-Sheets_suhDq#_luV4e)
- New rows that get **added or deleted** in your Coda table will also get added or deleted in Google Sheets
- Existing rows that get **updated** in Coda will also get updated in Google Sheets
- You can **re-arrange the columns** in your Google Sheet and the sync will still sync the appropriate columns in your Google Sheet
- You can **add or insert new columns** in your **Google Sheet** and write formulas in these new columns
- You can **add or insert new columns** in your table in **Coda** and these columns won’t get synced to Google Sheets (unless you create a new column in Google Sheets with the same column name as the one in your Coda table)
### [Google Sheets -> Coda](https://coda.io/d/How-to-sync-data-from-Coda-to-Google-Sheets-and-vice-versa-with-_dv4i9X8bdFe/Google-Sheets-Coda_suKcq)
- New rows that get **added or deleted** in your Google Sheet worksheet will also get added or deleted in your Coda table
- Existing rows that get **updated** in your Google Sheet worksheet will also get updated in Coda
- You can **sort and filter** the rows in your target Coda table and the script will still add, delete, and update the appropriate rows in Coda
- You can **add rows** to your Coda table and not get them deleted on the sync by adding a “Do not delete” [checkbox column](https://help.coda.io/en/articles/1235680-overview-of-column-formats#types-of-column-formats) in your Coda table that is set to `true` (more about this later in the post)
Some of the features in the [Coda -> Google Sheets](https://coda.io/d/How-to-sync-data-from-Coda-to-Google-Sheets-and-vice-versa-with-_dv4i9X8bdFe/Coda-Google-Sheets_suhDq) script also apply to the [Google Sheets -> Coda](https://coda.io/d/How-to-sync-data-from-Coda-to-Google-Sheets-and-vice-versa-with-_dv4i9X8bdFe/Google-Sheets-Coda_suKcq) script, but I haven’t fully tested every use case. If you see any bugs, please add them to the repo’s [issues list](https://github.com/albertc44/coda-google-apps-script/issues).
## Setup: Coda to Google Sheets script
In [line 9](https://github.com/albertc44/coda-google-apps-script/blob/master/coda_to_sheets.js#L9) through [line 14](https://github.com/albertc44/coda-google-apps-script/blob/master/coda_to_sheets.js#L14) of the [coda_to_sheets.js](https://github.com/albertc44/coda-google-apps-script/blob/master/coda_to_sheets.js#L10) script, you’ll need to enter some of your own data to make the script work (a rough sketch of the filled-in values follows the list below). Step-by-step:
1. Go to [script.google.com](https://script.google.com/home) and create a new project and give your project a name.
2. Go to Resources, then Libraries, and paste the following string of text/numbers into the library field: `15IQuWOk8MqT50FDWomh57UqWGH23gjsWVWYFms3ton6L-UHmefYHS9Vl`.
3. Click Add and then select a version of the library to use (as of May 2020, version 8 is the latest)
4. Copy and paste the [entire script](https://github.com/albertc44/coda-google-apps-script/blob/master/coda_to_sheets.js) into your Google Apps Script project and click File then Save.
5. Go to your Coda [account settings](https://coda.io/account), scroll down until you see “API SETTINGS” and click Generate API Token. Copy and paste that API token into the value for `YOUR_API_KEY` in the script. _Note: do not delete the single apostrophes around_ `YOUR_API_KEY`.
6. Get the the doc ID from your Coda doc by copying and pasting all the characters after the `_d` in the URL of your Coda doc (should be about 10 characters). You can also use the _Doc ID Extractor_ tool in the [Coda API docs](https://coda.io/developers/apis/v1beta1#section/Using-the-API/Resource-IDs-and-Links). Copy and paste your doc ID into `YOUR_SOURCE_DOC_ID`.
7. Go back to your [account settings](https://coda.io/account) and scroll down to the very bottom until you see “Labs.” Toggle “Enable Developer Mode” to ON.
8. Hover over the table name in your Coda doc and click on the 3 dots that show up next to your table name. Click on “Copy table ID” and paste this value into `YOUR_SOURCE_TABLE_ID`.
9. To get your Google Sheets ID, get all the characters after `/d/` in your Google Sheets file up until the slash and paste this into `YOUR_GOOGLE_SHEETS_ID`. See [this link](https://stackoverflow.com/a/36062068/1110697) for more info.
10. Write in the name of the worksheet from your Google Sheets file where data will be synced into in the `YOUR_GOOGLE_SHEETS_WORKSHEET_NAME` value.
11. In Google Sheets, create a new column name at the end of your column headers called something like `Coda Source Row URL` and make sure there is no data in that column below the header. Write that column name in `YOUR_SOURCE_ROW_URL_COLUMN_NAME`.
12. Go back to Google Apps Script, click on the Select function dropdown in the toolbar, and select `runSync`. Then click the play ▶️ button to the left of the bug 🐞 button. This should copy over all the data from your Coda doc to Google Sheets.
13. To get the script to run every minute, hour, or day, click on the clock 🕒 button to the left of the ▶️ button to create a [time-driven trigger](https://developers.google.com/apps-script/guides/triggers/installable#time-driven_triggers).
14. Click Add Trigger, make sure runSync is set as the function to run, “Select event source” should be `Time-driven`, and play around with the type of time based trigger that fits your needs. I like to set the “Failure notification settings” to `Notify me immediately` so I know when my script fails to run.
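As a reference point, here is a rough sketch of what that filled-in configuration block can look like. The variable names below are illustrative assumptions (only `TARGET_SHEET_SOURCE_ROW_COLUMN` is a name the post itself references); check lines 9-14 of the actual script for the real names, and replace the placeholder strings as described in the steps above.

```javascript
// Illustrative sketch only: variable names are assumptions, values are the
// placeholders the setup steps above tell you to replace.
var CODA_API_KEY = 'YOUR_API_KEY';                                // step 5: token from coda.io/account
var SOURCE_DOC_ID = 'YOUR_SOURCE_DOC_ID';                         // step 6: characters after "_d" in the doc URL
var SOURCE_TABLE_ID = 'YOUR_SOURCE_TABLE_ID';                     // step 8: "Copy table ID" with Developer Mode on
var TARGET_SHEET_ID = 'YOUR_GOOGLE_SHEETS_ID';                    // step 9: characters after "/d/" in the Sheet URL
var TARGET_WORKSHEET_NAME = 'YOUR_GOOGLE_SHEETS_WORKSHEET_NAME';  // step 10
var TARGET_SHEET_SOURCE_ROW_COLUMN = 'Coda Source Row URL';       // step 11
```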
## Setup: Google Sheets to Coda script
Most of the steps above apply to the [sheets_to_coda.js](https://github.com/albertc44/coda-google-apps-script/blob/master/sheets_to_coda.js) script as well, but there are a few extra features; a rough sketch of the extra settings follows the list below.
1. You can follow steps 1–10 above to fill out [line 12](https://github.com/albertc44/coda-google-apps-script/blob/master/sheets_to_coda.js#L12) to [line 18](https://github.com/albertc44/coda-google-apps-script/blob/master/sheets_to_coda.js#L18) in the script (except [line 14](https://github.com/albertc44/coda-google-apps-script/blob/master/sheets_to_coda.js#L14) mentioned in the next step). The main difference is that “SOURCE” and “TARGET” are flipped around since you are now syncing from a _source_ Google Sheet to a _target_ Coda doc.
2. Your Coda table _cannot_ have a column named `Coda Row ID`. If you need to use a column with this name, replace the `TARGET_ROW_ID_COLUMN` variable with another value.
3. If you have _edit access_ to the Google Sheet, follow step 11 above and write in the column name in `YOUR_SOURCE_ROW_URL_COLUMN_NAME`.
4. If you want the ability to add rows to your Coda table and NOT have these rows deleted every time the sync runs, create a column in your Coda table and name it `Do not delete`. This column should be a checkbox column format and you will check the box for every row you manually add to your Coda table that you want to keep in that table. Otherwise, the script will delete that row and always keep the Coda table a direct copy of what’s in your Google Sheets file. If you change the name of this `Do not delete` column, you must edit the value of the `DO_NOT_DELETE_COLUMN` variable in [line 22](https://github.com/albertc44/coda-google-apps-script/blob/master/sheets_to_coda.js#L22) of the script as well.
5. If you want the script to completely delete and re-write the rows in your Coda table each time the script runs, set the `REWRITE_CODA_TABLE` to `true` in [line 23](https://github.com/albertc44/coda-google-apps-script/blob/master/sheets_to_coda.js#L23). This may make the script run faster, but may not be faster for larger tables (few thousand rows). For Google Sheets files where you only have _view-only access_, this setting will automatically get set to `true`.
6. Follow steps 12–14 [above](https://coda.io/d/How-to-sync-data-from-Coda-to-Google-Sheets-and-vice-versa-with-_dv4i9X8bdFe/How-to-sync-data-from-Coda-to-Google-Sheets-and-vice-versa-with-_su8ir) to set up your time-driven trigger.
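Again, purely as an illustrative sketch: the extra settings specific to this direction look roughly like the block below. `TARGET_ROW_ID_COLUMN`, `DO_NOT_DELETE_COLUMN`, and `REWRITE_CODA_TABLE` are variable names the post references directly; the exact values and comments are assumptions based on the steps above.

```javascript
// Illustrative sketch of the sheets_to_coda.js specific settings.
var TARGET_ROW_ID_COLUMN = 'Coda Row ID';      // step 2: change this if your Coda table already uses that column name
var DO_NOT_DELETE_COLUMN = 'Do not delete';    // step 4: checkbox column that protects manually added rows
var REWRITE_CODA_TABLE = false;                // step 5: true wipes and re-writes the Coda table on every run
```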
## Use cases with Google Sheets
Some of the most common use cases for integrating your application with Google Sheets can be found in the [G Suite Marketplace](https://gsuite.google.com/marketplace/category/works-with-spreadsheet) for Google Sheets. From a business perspective, being able to visualize your data in Google Sheets allows you to slice and dice your data in ways you cannot do in on platform like Salesforce, for instance (FYI there’s a Salesforce [add-on](https://support.google.com/docs/answer/9073952?co=GENIE.Platform%3DDesktop&hl=en) for Google Sheets).

The opposite is true too. Your team or company’s data may be stored in a Google Sheet but the data just sits there without being “actionable.” Let’s say you have a bunch of customer information and you want to create mailing labels with your customers’ names and addresses. Being able to “export” your data from Google Sheets into a mail merge application like Avery will make it easy to create the mailing labels you need.
Then there’s the pinnacle of productivity in Google Sheets: _keeping data synced between your application and Google Sheets at all times_.
When Google Sheets first came out, it was a game-changer since changes you make on your browser are instantly reflected in your colleague’s file. We have come to expect this with tools we use in the browser. But having data synced between Google Sheets and your other applications at all times is less common, and this is why the [Google Sheets API](https://developers.google.com/sheets/api) is so important. From a Coda perspective, there are several use cases you might want to keep your Coda doc synced with a Google Sheet (and vice versa):
### Data synced from your Google Sheet
- **HR & recruiting** — All your candidates are stored in a Google Sheet but you want to be able to move candidates through different stages in the interviewing pipeline and Google Sheets isn’t sufficient for your needs. Having all your candidates in a table in Coda means you can use templates like [this one](https://coda.io/@evanatcoda/coordinating-candidates) to manage candidates more effectively.
- **E-commerce and ERP** — Orders, customers, and POs may all be different tabs in a Google Sheet that gets updated through Shopify or some other e-commerce platform. In order to _manage_ your e-commerce business, you may want to see charts, calendar of shipments, and reports that Google Sheets cannot provide easily. Syncing the data from Google Sheets to Coda means you can do ERP properly (see [this template](https://coda.io/@wilson-silva/mini-e-commerce-erp) as an example).
- **Customer Feedback** — You may have a ticketing system like Zendesk or Intercom and all feedback lands in a Google Sheet somewhere. You can do some basic analytics in the Google Sheet but to _reply_ to the feedback means you have to go into Gmail and start replying to customers. If your customer feedback is all in a Coda doc, you can run analytics _and_ send emails using the [Gmail Pack](https://coda.io/packs/gmail) (see [this template](https://coda.io/@hales/customer-feedback-hub)).
### Data synced to your Google Sheet
- **3rd-party vendor reporting** — Your vendors may not be using Coda yet, but you have all your vendor data in Coda and need to send them the data in a format they prefer. While you could [publish your Coda doc](https://help.coda.io/en/articles/3727616-intro-to-publishing), the vendor still wants the data in a Google Sheet you have edit access to.
- **Data “backup”** — Your team may create thousands of rows of data every quarter in a Coda doc and want to start each quarter “fresh.” Coda docs grow with your teams and they may get slow as you add in more functionality, so having a backup of your data in Google Sheets is another reason to sync data from your Coda doc to Google Sheets.
- **Finance & Accounting** — Most internal finance and accounting functions still use Excel and spreadsheets for month-end reporting, taxes, and other business-critical activities. As your data grows in Coda, you can keep your finance counterparts in the loop by having your data synced to a Google Sheet which your finance team can use for their reporting and forecasting purposes.
## Setting up Google Apps Scripts
Before you start using the scripts to sync data from Coda to Google Sheets or vice versa, you need to have Google Apps Script setup correctly. Just navigate to [script.google.com](http://script.google.com) and click on **New Project**. You’ll land in the GAS script editor. At this point, click on **Resources→Libraries** in the toolbar and you’ll want to paste in the following Coda library for Google Apps Script:
```
15IQuWOk8MqT50FDWomh57UqWGH23gjsWVWYFms3ton6L-UHmefYHS9Vl
```
After you add the library, you can pick a version of the library to use (I just picked the latest version to take advantage of all the latest features in Coda’s API):
*Add Coda’s library for Google Apps Script*
## Syncing a Coda doc to Google Sheets
Setting up the script for syncing a table from a Coda doc to a Google Sheet requires a few simple inputs. I walk through how to get some of these inputs in my [previous tutorial](https://coda.io/@atc/how-to-sync-data-between-coda-docs-and-google-sheets-using-googl), so read that if you have any questions on how to get the following inputs:
- **Coda doc ID:** This is the string of characters after the `_d` in the URL of your Coda doc
- **Coda table ID:** The unique ID for the table you want to sync from in Coda. If you have _Enable Developer Mode_ turned on in your account settings, you can get the table ID by simply clicking the 3 dots next to your table:

- **Google Sheet ID** — This is the string of characters after the `/d` in the URL of your Google Sheet (see [documentation here](https://developers.google.com/sheets/api/guides/concepts#spreadsheet_id) on how to get this ID).
- **Google Sheet worksheet name** — Name of the individual worksheet in your Google Sheet you want to sync data _into_ from your Coda doc
- **Source Row Column** — This is the only customization you’ll have to do to your Google Sheet. You’ll need to add a column (typically the last column in your Google Sheet) that’s called something like `Coda Source Row URL`. This is the name used in the [script](https://coda.io/d/How-to-sync-data-from-Coda-to-Google-Sheets-and-vice-versa-with-_dv4i9X8bdFe/Coda-Google-Sheets_suhDq). This is an **important** column to have in your Google Sheet since it will store the unique URL to a row in your Coda table. More about this later.
Once you have these inputs, you’re ready to get started with syncing your data!
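In the script these inputs end up as constants near the top of the file, roughly like this (apart from `TARGET_SHEET_SOURCE_ROW_COLUMN`, the variable names and values here are illustrative, so check the linked script for the exact names):
```
var CODA_DOC_ID = 'aBcDeFgHi';                 // the string after _d in your Coda doc URL
var CODA_TABLE_ID = 'grid-aBcDeFgHi';          // from the table's three-dot menu in developer mode
var GOOGLE_SHEET_ID = '1xYz...';               // the string after /d in your Google Sheet URL
var TARGET_SHEET_NAME = 'Sheet1';              // worksheet to sync data into
var TARGET_SHEET_SOURCE_ROW_COLUMN = 'Coda Source Row URL';
```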
### Column names in Google Sheets
Try to keep the names of the columns in your Google Sheet the same as the columns in your Coda table. Every column you want to sync from your Coda table should have a matching column in the Google Sheet.
The one exception is the `TARGET_SHEET_SOURCE_ROW_COLUMN` variable which you’ll see in the script. Whatever value you put in this variable should also be the name of the column in your Google Sheet. You should put this column at the end of your table in Google Sheets like so:
*Source row column to put in your Google Sheet*
This column will be overwritten by the Google Script with the unique source row URL from Coda (every row in a Coda table has a unique identifier). We need this source row URL column so that the script knows which rows have already been added to the Google Sheet, and so that rows you delete in the _source_ Coda doc can also be deleted in the _target_ Google Sheet. This brings me to a quick aside about the benefits of these source row URLs (these are called `browserLink`s in the [API](https://coda.io/developers/apis/v1beta1)).
### A unique row identifier
If you are a heavy user of Google Sheets, you may find yourself creating a “unique ID” column in a table so that when you reference that row somewhere else in your Google Sheet, you can do a `VLOOKUP` to pull all the data related to that row. Sometimes you can get away with using an existing column of data as the ID (maybe it’s a customer name, task name, or project name). For instance, in this screenshot the unique ID is the `StaffID` column:
*Unique ID column in Google Sheets*
To cover the cases where your table does not have a unique ID, the script puts the unique row URL from Coda into the `TARGET_SHEET_SOURCE_ROW_COLUMN` to act as the unique identifier. The [Google Sheets -> Coda](https://coda.io/d/How-to-sync-data-from-Coda-to-Google-Sheets-and-vice-versa-with-_dv4i9X8bdFe/Google-Sheets-Coda_suKcq) script also utilizes this column (assuming you have edit access to the Google Sheet). Without this unique ID column, there’s no way for the script to know which rows have been added to the Google Sheet from Coda since there’s _no native row ID system in Google Sheets_ (see [this thread](https://stackoverflow.com/questions/38114591/is-it-possible-to-access-row-id-of-a-google-spreadsheet)).
### Fabricating a unique ID in Google Sheets
One alternative is to fabricate this unique identifier yourself by concatenating a bunch of columns together, in the hope that this new column will be the unique ID for that row:
*Creating your own unique ID in Google Sheets*
In the above screenshot, `Feature` is actually a pretty unique column of data. But to be 100% sure, there’s a `Fabricated ID` column which concatenates `Feature`, `Team`, and `Milestone` to create a “more unique” ID in the event there are two `Features` with the same name. This is not a perfect method, for two reasons:
1. The fabricated ID column might not be unique enough and it might be duplicated in other rows (which means you would have to concatenate more columns of data to fabricate that unique ID)
2. The columns you have concatenated may change (in this case, the `Team` or `Milestone` may change which would ruin the uniqueness of the ID)
In a previous life as a financial analyst, I employed this fabricated ID trick quite often but I had to choose the columns wisely. Typically in a report that has a time series, this would involve picking a dimension (e.g. west region), metric (e.g. sales), and the date for that specific row. This worked for static reports where data wasn’t getting deleted or updated too often. It’s a lot riskier to use this strategy in a Google Sheet shared with your team where data is constantly changing. Choose your columns wisely if you go down this path.
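If you do go down this path, the fabrication itself is just concatenation. In the sheet it would be a formula along the lines of `=B2&"|"&C2&"|"&D2`, or the same idea as a small Apps Script helper (column names follow the `Feature`/`Team`/`Milestone` example above):
```
// Joins a few columns into one "fabricated" key, e.g. "Dark mode|Design|Q3".
// The key stops being unique if Team or Milestone are later edited.
function fabricatedId(feature, team, milestone) {
  return [feature, team, milestone].join('|');
}
```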

### Mixing columns in Coda
The advantages of having a unique identifier for the rows in Coda also apply to the columns in Coda (this benefit is realized in the [Google Sheets -> Coda](https://coda.io/d/How-to-sync-data-from-Coda-to-Google-Sheets-and-vice-versa-with-_dv4i9X8bdFe/Google-Sheets-Coda_suKcq) script). For syncing Coda to Google Sheets, the script _has_ to use the actual names of the columns in Google Sheets since there’s also no _native column ID in Google Sheets_. This means if your column in Coda is named `Projects` but you accidentally misspell the column name in Google Sheets to `Project`, the data will not sync over correctly from Coda to Google Sheets.
One feature of the script is that you can re-order the columns in Coda and the data will still sync over correctly based on the column names. So your tables in Coda and Google Sheets could be organized like this, and the sync would still work:

The `sortCodaTableCols()` [function](https://github.com/albertc44/coda-google-apps-script/blob/master/coda_to_sheets.js#L167) re-arranges the columns in Coda to reflect the order of the columns in Google Sheets by simply looking for the column name in Coda:
```
var headerCodaTable = sourceRows[0]['cells'].map(function(row) { return row['column'] });
var sheetsColOrder = [];

// For each column name in the Google Sheets header row, find its index in the Coda table (-1 if missing)
headerRow.map(function(col) {
  sheetsColOrder.push(headerCodaTable.indexOf(col))
})

// Re-order every Coda row's cells to match the Google Sheets column order
var sortedSourceRows = sourceRows.map(function(row) {
  var cells = sheetsColOrder.map(function(col) {
    if (col == -1) {
      return {
        column: null,
        value: null,
      }
    }
    else {
      return {
        column: headerCodaTable[col],
        value: row['cells'][col]['value'],
      }
    }
  });
  return {cells: cells}
})

return sortedSourceRows;
```
This means you can have your own “custom” columns in Coda or Google Sheets which can even contain formulas, and they won’t corrupt the sync from `Task`, `Team`, and `Project` to their respective columns in Google Sheets. As long as these custom column names in Coda or Google Sheets don’t show up in the other platform, then you can do whatever you want with these custom columns:

This could be useful if you work with a vendor who needs to see data in a Google Sheet to perform certain calculations that could be meaningful to them but don’t really matter to you and your Coda doc. As long as there isn’t a column name in the Google Sheet that matches the name of a column in your Coda table, then everything will work as intended.
### Adding and deleting rows
The main `runSync()` function runs two other functions: `addDeleteToSheets()` and `updateSheet()`. The logic here is to _add_ any new rows from Coda to Google Sheets and _delete_ any rows from Google Sheets that were deleted from Coda. As mentioned [above](https://coda.io/d/How-to-sync-data-from-Coda-to-Google-Sheets-and-vice-versa-with-_dv4i9X8bdFe/How-to-sync-data-from-Coda-to-Google-Sheets-and-vice-versa-with-_su8ir), the script uses a `TARGET_SHEET_SOURCE_ROW_COLUMN` to keep track of all the unique rows that need to be synced from Coda to Google Sheets.
An added benefit of using this “source row column” in Google Sheets is that you can add new rows of data to Google Sheets manually and leave the “source row column” blank. When the sync runs, the script essentially skips these new rows because they don’t have a URL that maps to an existing row in Coda. I’m not sure about the exact use case for when you would want to do this, but perhaps your Coda doc keeps track of sales from a store and your accounting team gets the data synced to a Google Sheet like this:

The columns in yellow are the ones that get synced from your Coda doc. The first 3 rows get synced correctly because you see values in the `Source Row URL` column. The accounting team realizes that there are more sales that were not accounted for and don’t exist in your Coda doc. They might manually add rows 5 and 6 and have a column they use internally called `Manual Enter` to keep track of the rows they are manually adding to the Google Sheet. When the sync runs next, rows 5 and 6 won’t get overwritten or deleted because they left the `Source Row URL` column blank.
### Updating rows
The `addDeleteToSheets()` function was relatively simple to write, but `updateSheet()` was much more difficult given that rows in Google Sheets might be sorted in all kinds of ways. Additionally, I felt that scanning the entire Google Sheet for a source row URL and then scanning each column value to see if an update is needed was inefficient. Even if you have only 100 rows in your Coda doc that you want to sync to Google Sheets, that means there could potentially be 10,000 comparisons just for the row URLs alone every time the sync runs.
One option I considered was just blowing up the entire list of data in Google Sheets first (deleting all the rows) and re-writing the data from Coda to Google Sheets. That would remove the need for the `addDeleteToSheets()` function and for the “source row column” in Google Sheets, but it also didn’t feel right: for larger tables it could potentially hit Google Apps Script [rate limits](https://developers.google.com/apps-script/guides/services/quotas), and it wouldn’t allow the user to manually add rows to the Google Sheet because those rows would get wiped out on the sync.

My thinking was to create two 2-D tables that were sorted exactly the same. The first table contains the rows from Coda that also exist in Google Sheets. The second table contains the rows in Google Sheets. The tables would contain the same number of rows and columns so you could then do a sequential comparison between the source Coda table and the target Google Sheet and see if there are any updates that need to be made in the Google Sheet.
The first thing to do was to convert the row objects in Coda to a 2-D table that is more similar to Google Sheets’ row objects. The `convertValues()` [function](https://github.com/albertc44/coda-google-apps-script/blob/master/coda_to_sheets.js#L239) “flattens” the Coda row object so that each row object simply contains an array of column values:
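Roughly speaking, the flattening turns Coda's nested row objects into plain arrays of values. A minimal sketch of the idea (the actual `convertValues()` in the repo does more than this):
```
// From: { cells: [{ column: 'Task', value: 'Ship it' }, { column: 'Team', value: 'Growth' }] }
// To:   ['Ship it', 'Growth']
function flattenRows(sourceRows) {
  return sourceRows.map(function(row) {
    return row['cells'].map(function(cell) {
      return cell['value'];
    });
  });
}
```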

Most of the work in these scripts is actually just data munging so that the data is in a format that is acceptable for Coda and Google Sheets. Once the tables are sorted in the same order in terms of rows and columns, the script can check cell by cell if there are any changes that need to be synced over to Google Sheets.
I felt this sequential comparison of cells between the Coda and Google Sheets table was more performant than scanning for each row URL. The number of comparisons between the source and target tables is limited to the number of “cells” in either table. In this example, the script would only have to make 15 comparisons before figuring out that there are three cells in Coda that have been updated and need to be synced over to Google Sheets:
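A stripped-down sketch of that sequential comparison (the real `updateSheet()` function does more bookkeeping than this):
```
// Both tables are 2-D arrays sorted identically by source row URL.
function findCellUpdates(sourceTable, targetTable) {
  var updates = [];
  for (var r = 0; r < sourceTable.length; r++) {
    for (var c = 0; c < sourceTable[r].length; c++) {
      if (sourceTable[r][c] !== targetTable[r][c]) {
        updates.push({ row: r, col: c, value: sourceTable[r][c] });
      }
    }
  }
  return updates;
}
```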

While this may seem like a performance boost, there is a lot of pre-processing to get the rows sorted correctly, so the net result might be the same in terms of rows and cells scanned. There are much smarter people out there who understand sorting algorithms, so there may be an even more efficient approach here 🤷‍♂️.
### A little helper sort function
In order to get the tables sorted perfectly before doing the cell by cell comparison, I needed to figure out a way to sort an array of arrays by some value. In this case, we have a bunch of arrays of column values that represent our rows, and the unique ID we want to sort on is the source row URL:
*How do we sort each row object by the 7th element (row URL)?*
I created this little `sortArray()` [function](https://github.com/albertc44/coda-google-apps-script/blob/master/coda_to_sheets.js#L239) that’s one of the workhorses in the script. It seems like such a common problem and I was surprised there wasn’t a built in sort function to sort an array of arrays (or maybe I just didn’t search hard enough). So if I want to sort the `targetRows` object below which contains all the rows in my Google Sheet, I run the `sort()` function on it and pass in the `sortArray()` function and the returned `sortedTargetRows` object is…as you expected…sorted by the source row URL:
```
var sortedTargetRows = targetRows.sort(sortArray);

function sortArray(a, b) {
  var x = a[rowURLIndex];
  var y = b[rowURLIndex];
  if (x === y) {
    return 0;
  }
  else {
    return (x < y) ? -1 : 1;
  }
}
```
One thing I learned about the `sort()` [function](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/sort) is that when you compare strings this way in what they call a `compareFunction` (in my case the `sortArray()` function), values don't end up in the alphabetical order you'd expect: all the values starting with _uppercase_ letters sort before the values starting with _lowercase_ letters. Here is a list of values and how you expect them to be sorted versus how the `sort()` function actually sorts them:
*WTF?*
Now if you sort this list of values in a spreadsheet or Coda table, you’ll get the results in the `What you expect` column. I couldn’t figure out why the sorted values didn’t match up with what I expected after sorting the values in Google Sheets. Then after some debugging I realized this is the default behavior of the `sort()` function in [JavaScript](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/sort). A common workaround is to apply the `toUpperCase()` function to the value so that you are doing a case-insensitive sort. Unfortunately, this won’t work for the script because it’s possible for a table in Coda to have two row IDs with the same order of six characters but just be capitalized differently (e.g. a row ID of `NPmgrG` and `NPMGRG` could exist in the same table).
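You can see the behavior with a quick test (illustrative values, not row IDs from the script):
```
var values = ['banana', 'Apple', 'apple', 'Banana'];
values.sort(); // compares character codes, and uppercase letters have lower codes
// => ['Apple', 'Banana', 'apple', 'banana'] rather than the case-insensitive order a spreadsheet gives you
```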

In our case, we need to find a _case-sensitive_ sort to account for the uniqueness of row IDs. I searched for a function like this to no avail. Then I realized it doesn’t matter if the script doesn’t sort the table in the alphabetical order I expect as long as it applies the same “incorrect” sort to both the source and target tables _equally_. This means both tables will still be sorted in the same order just not in the order we expect from a typical sort in Google Sheets or Excel.
## Syncing Google Sheets to a Coda doc
After writing the [Coda -> Google Sheets](https://coda.io/d/How-to-sync-data-from-Coda-to-Google-Sheets-and-vice-versa-with-_dv4i9X8bdFe/Coda-Google-Sheets_suhDq) script, I thought the [Google Sheets -> Coda](https://coda.io/d/How-to-sync-data-from-Coda-to-Google-Sheets-and-vice-versa-with-_dv4i9X8bdFe/Google-Sheets-Coda_suKcq) script would be a breeze since I had written all the functions to convert and sort data. All I would have to do is just switch around some variables and everything would work out just fine. Turns out I was completely wrong since there are a bunch of edge cases to account for in Google Sheets that makes the sync a little more difficult compared to Coda to Google Sheets.
You can follow most of the [steps](https://coda.io/d/How-to-sync-data-from-Coda-to-Google-Sheets-and-vice-versa-with-_dv4i9X8bdFe/How-to-sync-data-from-Coda-to-Google-Sheets-and-vice-versa-with-_su8ir) in the Coda to Google Sheets setup to get the values you need for the script to run, but there are a few caveats and extra options you can set to get similar functionality as the [Coda -> Google Sheets](https://coda.io/d/How-to-sync-data-from-Coda-to-Google-Sheets-and-vice-versa-with-_dv4i9X8bdFe/Coda-Google-Sheets_suhDq) script:
- **Target Row Id Column** — This is a column that stores each row’s unique ID from the Coda table (the last 6 characters of the “source row URL”). By default this variable is set to “Coda Row ID,” so make sure you don’t already have a column in your Coda table with this name.
- **Do Not Delete Column** — Unlike Google Sheets, the script is not written in a way where you can add additional rows to the _target_ Coda table without having them deleted when the sync runs. As mentioned [above](https://coda.io/d/How-to-sync-data-from-Coda-to-Google-Sheets-and-vice-versa-with-_dv4i9X8bdFe/How-to-sync-data-from-Coda-to-Google-Sheets-and-vice-versa-with-_su8ir) for the [Coda -> Google Sheets](https://coda.io/d/How-to-sync-data-from-Coda-to-Google-Sheets-and-vice-versa-with-_dv4i9X8bdFe/Coda-Google-Sheets_suhDq) script, you can add rows to your _target_ Google Sheet and not have them deleted on the sync. You need to create a checkbox column in your Coda table called `Do not delete` and check off the box for that row if you don’t want it to get deleted on the sync. If you prefer a different column name, just change the value for the `DO_NOT_DELETE_COLUMN` variable.
- **Rewrite Coda Table** — Unlike the [Coda -> Google Sheets](https://coda.io/d/How-to-sync-data-from-Coda-to-Google-Sheets-and-vice-versa-with-_dv4i9X8bdFe/Coda-Google-Sheets_suhDq) script, you have the option to completely delete all the rows in your _target_ table and re-write them with all the rows from your _source_ Google Sheet. Set the `REWRITE_CODA_TABLE` variable to true if you want this behavior (may result in a faster sync).
### Column and row limitations
If you have edit access to the Google Sheet, you will need to add a column at the end of your table called something like “Source Row URL” similar to the “Coda Source Row URL” pattern mentioned [above](https://coda.io/d/How-to-sync-data-from-Coda-to-Google-Sheets-and-vice-versa-with-_dv4i9X8bdFe/How-to-sync-data-from-Coda-to-Google-Sheets-and-vice-versa-with-_su8ir) for the [Coda -> Google Sheets](https://coda.io/d/How-to-sync-data-from-Coda-to-Google-Sheets-and-vice-versa-with-_dv4i9X8bdFe/Coda-Google-Sheets_suhDq) script. After writing the data from Google Sheets to Coda for the first time, the unique row URLs from Coda are copied over into this “Source Row URL” column in your editable Google Sheets. Obviously this doesn’t apply to Google Sheets where you only have read-only access (more on that later).
One limitation of the script is that if you add a new column to the Google Sheet, you also need to add that same column name to the Coda table. It’s ok if the column _order_ isn’t the same in Coda, but that column name just needs to exist somewhere in the Coda table. You can just hide the column in Coda to make the table nice and clean. This is actually a limitation caused by the way I structured the script, so hopefully it doesn’t cause you too much inconvenience 😬.

Be careful with empty rows in your data in Google Sheets because those rows also get “synced” over to Coda. Not only will those empty rows show up in your Coda table, they will get their own source URLs. Ideally, the Google Sheet won’t contain any empty rows and this won’t be a problem for you.
A few other “small” things:
1. **Formulas don’t sync** — Probably not a huge surprise as this is also a limitation of the [Coda -> Google Sheets](https://coda.io/d/How-to-sync-data-from-Coda-to-Google-Sheets-and-vice-versa-with-_dv4i9X8bdFe/Coda-Google-Sheets_suhDq) script. Any columns with formulas you sync over to Coda will just be hard-coded to that column in your Coda table.
2. **Resetting column formats** — When your Coda table is blank and you’re syncing over rows for the _first time_ from Google Sheets, you may have to change some of the column formats to the proper format. For instance, if your dates in Google Sheets are in [Zulu format](https://stackoverflow.com/questions/8405087/what-is-this-date-format-2011-08-12t201746-384z), Coda will sometimes interpret these values as a select list. After the sync, just change the column format in Coda to the Date format you want and future syncs will work just fine.
3. **You can’t sort your Google Sheet** — The script looks for empty source row URLs in the `SOURCE_SHEET_SOURCE_ROW_COLUMN` in your Google Sheet and it scans that column until it finds an empty value to start pasting in new source row URLs from Coda. If you sort your table, that column will get all jumbled and the script will break. New rows that you add to the Google Sheet should have the source row URL column blank and these blank cells need to be contiguous.
### Setting a timer for source row URLs
You will notice that the data syncs over pretty quickly to Coda, but the `SOURCE_SHEET_SOURCE_ROW_COLUMN` (aka the “source row URL”) takes a couple seconds to show up in your Google Sheet. The reason this happens is because of the steps that need to happen for this sync to work:
1. Find the rows that need to be added from Google Sheets to Coda
2. Insert those new rows into Coda
3. Coda snapshots the new data added to your table
4. Look to see if the source row URLs have shown up in the Coda table
5. Copy over the source row URLs to Google Sheets once those URLs show up
The key step is #3, since that snapshot can take a few seconds to happen. If we try to copy the source row URLs right after the rows are inserted into the Coda table, the script will come up with nothing and no row URLs will show up in your Google Sheet.
To get around this, I added a little sleep timer to basically check for the source row URLs every two seconds:
```
// Poll every 2 seconds until the newly inserted rows (and their URLs) show up in Coda
while (currentCodaRows.length <= allRows['targetRows'].length) {
  timer += 2;
  if (timer == 60) { break; }
  Utilities.sleep(2000);
  currentCodaRows = retrieveRows();
}
```
The `allRows['targetRows']` object contains all the rows in your Coda table when the script runs for the first time. Every two seconds, the loop retrieves the rows in the Coda table in hopes that the number of `currentCodaRows` has _exceeded_ the number of original rows when the script first ran. The loop also breaks after 60 seconds (30 attempts) if, for some reason, the Coda API cannot see the newly added rows. So far it hasn't taken more than five seconds for the URLs to show up, but this is on a small data set of 5–10 rows being added each time I tested the script.

This sure seems like a heck of a lot of work just to add some new rows to a table in Coda. That's why I put in a `REWRITE_CODA_TABLE` [variable](https://github.com/albertc44/coda-google-apps-script/blob/master/sheets_to_coda.js#L23) to bypass all this source row URL business.
### Deleting and re-writing rows each time
As discussed with [updating rows](https://coda.io/d/How-to-sync-data-from-Coda-to-Google-Sheets-and-vice-versa-with-_dv4i9X8bdFe/How-to-sync-data-from-Coda-to-Google-Sheets-and-vice-versa-with-_su8ir) in the [Coda -> Google Sheets](https://coda.io/d/How-to-sync-data-from-Coda-to-Google-Sheets-and-vice-versa-with-_dv4i9X8bdFe/Coda-Google-Sheets_suhDq) script, I wanted to avoid this pattern of syncing data:
1. Delete all rows in the target
2. Copy all the rows from the source
3. Insert the copied rows into the blank target table
It didn't seem like the right solution especially for a large table of thousands of rows because if you're only changing or adding a few rows, the script has to delete and re-add all these thousands of rows. The simplicity of this approach is tempting, nonetheless. Just like the [Coda -> Google Sheets](https://coda.io/d/How-to-sync-data-from-Coda-to-Google-Sheets-and-vice-versa-with-_dv4i9X8bdFe/Coda-Google-Sheets_suhDq) script, the [Google Sheets -> Coda](https://coda.io/d/How-to-sync-data-from-Coda-to-Google-Sheets-and-vice-versa-with-_dv4i9X8bdFe/Google-Sheets-Coda_suKcq) script is broken down into `addDeleteToCoda()` and `updateCoda()` functions. The former function adds and deletes rows while the latter updates any existing rows in Coda that may have changed in the source Google Sheet.
Blowing up the Coda table each time the sync runs would prevent the need for individual functions that add, delete, and update because the nature of blowing something up is that you can re-build from scratch. I haven’t measured which option is more performant but my hunch is that for smaller tables of data, setting `REWRITE_CODA_TABLE` to `true` may actually make the script run faster at the expense of not having the source URLs in your Google Sheet.
The `REWRITE_CODA_TABLE` option is actually important for Google Sheets files you only have _read-only access_ to. By default, you can’t write source row URLs to a Google Sheet you have view-access to, so there’s no point in using source row URLs to figure out which rows need to be added, deleted, and updated. **Side note:** the script doesn’t work on Google Sheets that have been [published to the web](https://support.google.com/docs/answer/183965?co=GENIE.Platform%3DDesktop&hl=en). You’ll know the Google Sheet is published when the URL has a `2PACX` in the URL like so:

### Getting permissions from Google Sheets
Instead of having to remember if you need to switch the `REWRITE_CODA_TABLE` variable to `true` when you’re syncing from a read-only Google Sheet, I did a little hack to get the permissions you have on the Google Sheet by trying to add the logged in user (you) as an editor to the Google Sheet:
```
function sheetsPermissions() {
  try {
    fromSpreadsheet.addEditor(Session.getActiveUser());
  }
  catch (e) {
    REWRITE_CODA_TABLE = true; // If no access automatically rewrite Coda tables each sync
  }
}
```
If you have _edit-access_ to the Google Sheet, nothing happens since you are already an editor. If there is an error, then that means you don't have permission to add yourself as an editor to the Google Sheet (which means you only have read-only access). In this case, `REWRITE_CODA_TABLE` is set to `true` and the script goes on to blow up the Coda table and replace it with fresh data from your Google Sheet.
## Final Caveats & Notes
There are many other variables to consider before implementing these scripts into your daily business-critical processes, but I think the given feature set should get you 90% of the way there. Having said that, there are a few more things to think about and small limitations about the scripts in general I’ve discovered along the way. This is by no means an exhaustive list.
### Using simple triggers in Google Apps Script
I thought that the [Google Sheets -> Coda](https://coda.io/d/How-to-sync-data-from-Coda-to-Google-Sheets-and-vice-versa-with-_dv4i9X8bdFe/Google-Sheets-Coda_suKcq) script could take advantage of [simple triggers](https://developers.google.com/apps-script/guides/triggers) to fire off the script. Basically you could have the script fire right when you make an edit to any cell, the moment the Google Sheet loads, etc. Unfortunately, there are a few [restrictions](https://developers.google.com/apps-script/guides/triggers#restrictions) to using simple triggers, and it looks like the script has to be entirely contained in Google Sheets to utilize simple triggers. Additionally, I don’t think the script could keep up with the speed of edits if you are looking for near real-time syncing. Data would just get choked as the script waits for source row URLs to appear and data would start pouring into your Coda doc.
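For context, a simple trigger is just a specially named function in a script bound to the Sheet. A rough sketch (this is not part of the sync scripts):
```
// Bound to the Google Sheet: runs automatically on every edit.
function onEdit(e) {
  // Simple triggers can't use services that require authorization
  // (such as UrlFetchApp, which calling the Coda API would need),
  // so the full sync can't simply be dropped in here.
  Logger.log('Edited range: ' + e.range.getA1Notation());
}
```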

### Rate limits
There are rate limits for [Google Apps Script](https://developers.google.com/apps-script/guides/services/quotas) as well as [Coda](https://help.coda.io/en/articles/3370370-are-there-any-size-limitations-on-docs-accessible-via-the-api). I've tried syncing tables with 10,000 rows in both scripts (6 columns) and they both seem to work. I think in one test the [Google Sheets -> Coda](https://coda.io/d/How-to-sync-data-from-Coda-to-Google-Sheets-and-vice-versa-with-_dv4i9X8bdFe/Google-Sheets-Coda_suKcq) sync resulted in some rows missing in the Coda table. For the first time you sync data over, I'd recommend just doing a regular copy and paste instead of relying on the sync to copy all the data over correctly. Most likely, subsequent additions and edits won't be as large, so the sync should run smoothly.
### V8 runtime
If you have existing Google App Scripts, you may have noticed this fun error message at the top of your editor:

These scripts utilize the [V8 runtime](https://developers.google.com/apps-script/guides/v8-runtime) which takes advantage of a bunch of modern JavaScript features. The only change I needed to make to upgrade the scripts was the syntax for `for each` [loops](https://developers.google.com/apps-script/guides/v8-runtime/migration#avoid_for_eachvariable_in_object).
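A representative before and after of that change (not a line taken from the scripts):
```
// Before, on the old Rhino runtime:
// for each (var row in rows) { Logger.log(row); }

// After, on the V8 runtime:
var rows = ['a', 'b', 'c'];
for (const row of rows) {
  Logger.log(row);
}
```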
### Moving off Coda or Google Sheets to a dedicated database
It’s tempting to use a Google Sheet or Coda doc as your de facto database. The interface is familiar, easy to edit and use, and it lives in your browser. The danger is when it feels _so convenient_ that you start putting thousands or hundreds of thousands of rows into your spreadsheet and maybe rely on Zapier or these Google Apps Scripts to sync data in and out of other applications you use every day to get work done.
If the process isn’t business-critical and your team can put up with this annoying little thing:
*Source: Ben Collins*
…then by all means continue doing what you're doing and pass the Google Sheet to the next intern or analyst who has to put up with updating it in the future. Otherwise, I would consider migrating your data to a dedicated database platform (like [Google BigQuery](https://cloud.google.com/bigquery)) which has a nice integration with Google Sheets. Lots more to say about this subject, but I'll just leave it at that.
## Not a programmer
Most of this post is me pretending to know what I'm talking about. I'm not a programmer, and the scripts could probably be improved 10X by someone who actually knows what they're doing and understands how algorithms work. There are unnecessary loops and bugs stamped all over the scripts so please proceed with caution ⛔️. If you happen to be someone who knows more about this stuff than me, consider [contributing](https://github.com/albertc44/coda-google-apps-script) to the code. I just did the bare minimum to get something to work and hopefully these scripts will be sufficient to get you on your merry way of not having to copy and paste between Coda and Google Sheets 🤙.

* * * | albertc44 |
327,263 | Re-introducing JavaScript Objects using Object constructor | If you have been with me since the last two posts, first of all, thank you for coming back! And secon... | 6,405 | 2020-05-05T14:06:05 | https://dev.to/salyadav/re-introducing-javascript-objects-using-object-constructor-1bmk | javascript, oop, beginners, explainlikeimfive | If you have been with me since the last two posts, first of all, thank you for coming back! And second, the contents of this post is a learning curve for me as well. So there are a few questions that I am still seeking answers to, see if you can help me out.
Let's begin...
In the [previous post](https://dev.to/salyadav/re-introducing-javascript-objects-2-h63), we had discussed *Constructor Function* way of creating objects and I had left you wondering about memory optimization using *Prototypes*.
But before we get into the vast concept of *prototypes*, I want to discuss two more ways of creating objects-
1- Using **Object Constructor**
2- Using **Object.create(..)** function (This one has a little surprise in it.🎁)
So without further ado, let's dive in:
### Using Object Constructor
JavaScript offers a class called *Object*, and every object we create, no matter which way, is an instance of this class. And so, we can simply call this class constructor to create an instance of it-
```javascript
var person3 = new Object({
name: 'Maria',
greeting: function() {
console.log('Hola! Mi nombre es ' + this.name + '.');
}
});
```
We basically pass an object literal (what many people loosely call JSON) into the *Object Constructor* to get an object instance.
If you are working hands-on with the above examples (including the one in [#1](https://dev.to/salyadav/re-introducing-javascript-objects-1-3m4h) using *Object Literals*), you will be wondering: WHAT IS THE DIFFERENCE? As far as Object properties and prototypes are concerned, there is absolutely NO difference. And if you are a beginner, let's leave it at that.
> Note (NOT for beginners)-
> Well, after much Googling, I am still boggled as to what indeed is the difference between creating with an Object Literal and with the Object Constructor! Maybe "nothing" is the answer. Maybe not. I have my suspicions on memory usage. Since an object is allocated space in the HEAP instead of the CALL STACK (as primitives are), I am not really sure how these two ways are different. **If you have any idea, do comment below or DM me.**
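One thing you can check quickly is that both forms produce plain objects with the same prototype (a small console experiment, not an answer to the memory question):
```javascript
var literalPerson = { name: 'Maria' };
var constructedPerson = new Object({ name: 'Maria' });

console.log(Object.getPrototypeOf(literalPerson) === Object.prototype);     // true
console.log(Object.getPrototypeOf(constructedPerson) === Object.prototype); // true
console.log(literalPerson instanceof Object);     // true
console.log(constructedPerson instanceof Object); // true
```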
Let's see what our developer console shows for the two cases:
1. Using **Object Literal**

2. Using **Object Constructor**

### Using Object.create function
Before we move on to compare all the ways of creating objects and which way do we go about for different scenarios, we have one last way of creating an object, which I personally find the most exciting one! (Told you, there is a 🎁)...
If you refer the [Mozilla developer docs](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/create), you will see a plethora of methods housed by this *Object Class*. One of which is *.create(...)*. Let's see how to use it.
We created *person3* in the above example. Here, we pass this object reference to create a new object.
```javascript
var person4 = Object.create(person3);
//try calling the greeting function in person3
//to check if person4 actually got created.
person4.greeting();
//What output do you see?
```
The above functioned well, did it not?
Well, you will be surprised when you call `console.log(person4)`. What do you see!
An EMPTY OBJECT! What! 😨
Again, try `person4.greeting()`. Works fine!
Then where on earth is this function getting called from?
The secret lies in its **prototype**.
Now open `__proto__`. What do you see?

The reason for this behavior is clearly mentioned in the mozilla docs I referred you to earlier. Let me lay it for you:
*"What we pass is the object that should be the prototype of the newly-created object."*
And so, the *person4* is actually an empty object with a reference to *person3* in its prototype. This leads us to the next important concept in JS- **Prototype Chain and Inheritance**. Also, this queer behavior makes a great source of many JS Interview questions (which I will share towards the end).
But before we move on, let's play around with this object a bit-
Try these out:
1- We know that *person4* is an empty object. But try `person4.name`. What do you see? Where is this value coming from?
2- Now run `person4.name = "Maria Junior"`. And see what the object holds using `console.log(person4)`.
3- Run `person4.greeting()` and what do you see? Since this function was not in the object when we printed it earlier, where is it getting fetched from? And what is it printing?
4- Now try `person4.__proto__.greeting()`. Can you explain the result?
### Try this actual interview question!
What will be the output? (Answer without running the code)
```javascript
var muffin_order_1 = new Object({
name: 'muffin',
quantity: 10,
delivery: function () {
console.log('Days to deliver: ' + this.quantity*2);
}
});
muffin_order_1.delivery();
var muffin_order_2 = Object.create(muffin_order_1);
delete muffin_order_2.quantity;
muffin_order_2.delivery();
```
By now you should have an abstract idea about what prototypes are and what we are aiming for with them. Nevertheless, I will formalize them for you in future posts. Meanwhile...
You have made it so far! You deserve a 🍰. Go ahead, take a break!
Reference:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/create | salyadav |
327,313 | 13 Weeks Challenge | Hey, everyone. This is my first post, I have considered writing for a long time, in this lock-down pe... | 0 | 2020-05-04T20:01:09 | https://dev.to/javi/13-weeks-challenge-40gf | challenge, computerscience, codenewbie, interview | Hey, everyone. This is my first post, I have considered writing for a long time, in this lock-down period I thought to give it a shot. I am preparing for my upcoming interviews and I was quite weak in Data Structures and Algorithms. Honestly, I never tried so hard and have long given excuses to keep it for the last.
So, I finally thought to start. This lock-down period is giving me many of my firsts.
I was following various threads and including this one -
https://www.reddit.com/r/cscareerquestions/comments/1jov24/heres_how_to_prepare_for_tech_interviews/
So, I have made a 13 week plan on what and how I will cover all the numerous algorithms and data structures, hope I become successful in my venture.
Will tell you about my journey and accomplishments along the way.
Thank you.
| javi |
327,316 | I am going to build 6 products in 12 months | I am going to build 6 products in 12 months on my way to create financial freedom for myself. And I seem to have picked the worst time to start. | 0 | 2020-05-04T20:10:41 | https://dev.to/ericadamski/i-am-going-to-build-6-products-in-12-months-i85 | ---
title: I am going to build 6 products in 12 months
published: true
description: "I am going to build 6 products in 12 months on my way to create financial freedom for myself. And I seem to have picked the worst time to start."
---
Originally posted on [attempts.space](https://attempts.space)
I am going to build 6 products in 12 months on my way to create financial freedom for myself. And I seem to have picked the worst time to start. Right in the middle of COVID-19. The largest economic depression since THE great depression and do I go and find a secure job?
No. I do the exact opposite. I start working for myself.
This all started when the office for my day job closed and we began to work from home. Without a commute I found myself with ample amount of time to do things _just for me_. It was the most empowering and freeing 2 weeks of my life. Working on what I want, feeling like I am making an impact in the lives of people. I haven't felt that energized in ages.
So I started to chase that.
I started a [vlog](https://www.youtube.com/playlist?list=PLCct7zpeN4PFQJ1dfW1NpCfsqBZ9shT-7), producing [YouTube videos](https://www.youtube.com/channel/UC8s7-Eb0vO7BJfudkTQUWyQ), developing and releasing [side projects](https://team-img.now.sh), [writing](https://journal.ericadamski.dev), and reading more. I created whatever, whenever. And it was **AMAZING**. So amazing that I wanted more.
Then the next month passed and I hadn't been able to rekindle that experience. I couldn't find that same passion no matter the amount of searching. I figured out that the more time I spent on my day job and the less time I spent on things I wanted to do the worse I felt.
So I am taking a stand.
I am going to challenge myself into financial independence. Into my dream job, my destiny, into an entrepreneur.
Over the next year I am going to create and launch 6 new products and work to make them profitable. The most successful business I run currently is [Team-Img](https://team-img.now.sh). It is profitable, but nowhere near sustainable for me to live on with only \$7 MRR.
That's right, 7 whole dollars. The math is hard but that is a total of \$84 a year. Not much to live on.
But that is my inspiration. [Team-Img](https://team-img.now.sh) was the catalyst that showed me I could produce something useful.
And so begins my journey. Starting from \$7 MRR I will build a life for myself and my family by working on things I love with people who care. It is going to be an adventure, a challenge but most of all it will be - hard. Also fun. I am thrilled to be going on this journey and it starts today.
With inspiration from [Pulsar](https://www.trypulsar.com/) I am going to count [Flawk](https://flawk.to), which is super early alpha, as my first of 6 products. I have 6 others in the pipeline and over the next year I am going to attempt to make each one a profitable product.
[Flawk](https://flawk.to) isn't even close. But that is where the fun is.
0/6 profitable products and counting.
| ericadamski | |
327,393 | Job Search Week 11 | I started looking at python this week just to get an idea of what it was like to use and I quite enjo... | 5,212 | 2020-05-04T21:45:35 | https://dev.to/kealanheena/job-search-week-11-16de | makers | I started looking at python this week just to get an idea of what it was like to use and I quite enjoyed it but I think from now on I'm gonna focus on using a MERN stack because JavaScript is my favourite language to code in.
Day 1
Naturally, I started with the basics, going over Python's lists (arrays), dictionaries (hashes), etc. I had heard Python was similar to Ruby, and I now see why people said it. But the syntax was all a little bit different, so I'd often catch myself trying to type Ruby, which was a little bit confusing, but I managed.
Day 2
Next, I moved on to making an app in Python. I used the battle app because I think it's a nice thing to start with. I started by building a character class and giving it the basic stats every character would need: health, magic, and a basic attack. I then made the enemy and the player using this class.
Day 3
I then built a class that would run the game and started working on magic, allowing it first to do damage and then to reduce your magic bar, and finally I added healing magic to restore your health.
Day 4
I spent Thursday building usable items in the game: magic potions, health potions, and attack items. I added these to the player's choice of attacks on their turn. After that, I started adding some styling for the health bars and magic bars.
Day 5
Finally, I added multiple enemies and also added some artificial intelligence, giving them a small chance to heal if they dropped below 50% health.
Summary
Python was interesting, but I want to work on JavaScript right now and get really good at it. I feel like I'm better off knowing one thing really well than being a jack of all trades.
| kealanheena |
327,484 | Become a Better Developer By Doing a Few Things | Subscribe to my email list now at http://jauyeung.net/subscribe/ Follow me on Twitter at https://twi... | 0 | 2020-05-05T01:45:04 | https://dev.to/aumayeung/become-a-better-developer-by-doing-a-few-things-5h7o | career, webdev, productivity, codenewbie | **Subscribe to my email list now at http://jauyeung.net/subscribe/**
**Follow me on Twitter at https://twitter.com/AuMayeung**
**Many more articles at https://medium.com/@hohanga**
**Even more articles at http://thewebdev.info/**
To be a good programmer, we should follow some easy to adopt habits to keep our programming career long-lasting.
In this article, we’ll look at some of them and how to adopt them.
Take Responsibility
===================
We should take responsibility for our work. Like with any other task, we make a commitment to ensure that we write software the right way.
We have to accept the responsibility for an outcome that we’re going to produce.
Therefore, we shouldn’t make excuses or blame someone else for anything. It just doesn’t help and make us look bad.
Provide Options
===============
When we run into problems, then we should give the stakeholders a few options on how to move forward.
It’s much better than making one or more crappy excuses that don’t help to solve our problem.
Therefore, finding an alternative way to move forward is much better than making excuses and doing nothing other than that.
Dealing With Software Entropy
=============================
The software can turn into a mess real fast if we don’t have much discipline with our code.
Then we’ll run into problems in the future when we work on it.
Entropy is chaos and disorder. And if we don’t plan and commit to making our code clean, then it’ll turn into a mess real fast.
Therefore, we got to minimize entropy so that everyone can enjoying working with the code.
We just got to clean up our code before it turns into a maintainable mess.
If We See Broken Things, Then We Should Fix It
==============================================
We definitely should find out what's broken and fix it so that people won't be frustrated.
We don’t have to take all our time fixing broken things immediately, but we should make note of them or make a quick fix so that we can deal with them in a better way later.
This way, our code won’t turn into a dumpster that no one wants to use or work with.
Remember the Big Picture
========================
We got to make a note of the big picture. Even though we probably won’t get to work on every part of a big system, we should still be able to see the big picture so that we can make things that work well with the rest of the system.
Without the big picture, we would have problems later on when we find out that what we changed doesn’t work with the rest of the system.
Good-Enough Software
====================
Nothing, including software, is going to be perfect. It's hard to make everything perfect and still deliver within a reasonable timeline.
Therefore, we got to think of the trade-offs that we’ll have to make in order to deliver something.
Ideally, everything is delivered incrementally, so that we can ship something useful now and keep making it better afterward.
Good enough doesn’t mean sloppy code. It means that we make things that meet users' requirements. However, we deliver what we can within the timeline given.
To outline the requirements clearly, we got to involve stakeholders so that they’re going to be happy with what we deliver.
If we’re working on a different product, then we’ll have different constraints. We got to make sure that we outline those clearly so everyone’s on the same page.
Make Quality a Requirements Issue
=================================
Quality is definitely something that we should consider as part of the requirements.
We would never skip quality for any reason because we’ll suffer if we don’t deliver something good to customers.
Also, the code has to be good so we can continue working on it.
Therefore, this is something that we should build into the timeline so that we wouldn't skimp on quality just to release something.
Conclusion
==========
We got to take responsibility for our work and make what’s good in the given amount of time.
The code got to be good, and we need to find a way to move forward given changing requirements.
Also, we should clean up our code and fix issues so that they won’t become a problem for users.
Quality should always be built into our timeline so that we have time to deliver something that’s good for users and developers alike. | aumayeung |
327,528 | looking for answers !, strapi vs nest js for my next project | I have never worked with any of the 2, I come from the world of laravel and typescript. I want to ex... | 0 | 2020-05-05T04:25:33 | https://dev.to/warriordev/looking-for-answers-strapi-vs-next-js-for-my-next-project-nestjs-or-strapi-1dm3 | strapi, nestjs, typescript, backend | I have never worked with any of the 2, I come from the world of laravel and typescript.
I want to explore these options for a new project, but I can't decide.
If you had to compare them, how would you compare them? | warriordev |
327,553 | React performance optimization with useMemo & memo | In this article, I will provide a set of techniques to optimize child components re-rendering. There... | 0 | 2020-05-05T05:54:34 | https://dev.to/max_frolov_/react-performance-optimization-with-usememo-memo-hki | react, webdev, javascript, tutorial | In this article, I will provide a set of techniques to optimize child components re-rendering. There are many circumstances of unnecessary component re-rendering. Usually, it happens because of the parent component inside which the state changes.
Firstly we should note:
>any state change in a parent component re-renders its entire child component tree, regardless of whether the children actually use that state.
If your app is small and has no heavy components, the additional re-rendering is bearable and doesn't affect performance much. The bigger the app and the individual components inside it, the more noticeable the effects of unnecessary re-rendering become: renders take longer and every component carries extra load.
Here is an example of such re-rendering. To track it, I left a <code>console.log</code> in the render of each internal component. The number of each re-rendered item will be displayed in the console.
```
FormContainer
└── ItemComponent1 (console.log)
    └── ItemComponent2 (console.log)
```

###__There are several options to solve this problem:__###
####__№1 - useMemo__####
This hook is mainly designed to optimize calculations. The calculation restarts only if the dependencies specified as a second argument change. Thus, the load on the component is reduced.
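For example, a minimal sketch of memoizing an expensive calculation inside a component (the names here are illustrative, not from this article's demo):
```jsx
const OrderSummary = ({ items }) => {
  // Re-computed only when `items` changes, not on every render
  const total = React.useMemo(
    () => items.reduce((sum, item) => sum + item.price, 0),
    [items]
  )
  return <div>Total: {total}</div>
}
```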
<code>useMemo</code> is also applicable to components: it can return a memoized element. This works as long as the dependencies do not change during the component's lifecycle. If we don't specify any dependencies (we leave an empty array), the component remains as it was at the time of initialization, and all the passed parameters stay closed over in their initial state.
```jsx
import React from 'react'
// local variables
const FIELD_NAMES = {
FIRST_NAME: 'firstName',
LAST_NAME: 'lastName'
}
const FormContainer = () => {
const [formValues, changeFormValues] = React.useState({
[FIELD_NAMES.FIRST_NAME]: '',
[FIELD_NAMES.LAST_NAME]: ''
})
const handleInputChange = fieldName => e => {
const fieldValue = e.target.value
changeFormValues(prevState => ({
...prevState,
[fieldName]: fieldValue
}))
}
return (
<div>
<input
type='text'
onChange={handleInputChange(FIELD_NAMES.FIRST_NAME)}
name={FIELD_NAMES.FIRST_NAME}
value={formValues[FIELD_NAMES.FIRST_NAME]}
/>
<input
type='text'
onChange={handleInputChange(FIELD_NAMES.LAST_NAME)}
name={FIELD_NAMES.LAST_NAME}
value={formValues[FIELD_NAMES.LAST_NAME]}
/>
<ItemComponent1 />
</div>
)
}
const ItemComponent1 = () => {
console.log('ITEM 1 RENDERED')
return React.useMemo(
() => (
<div>
<span>Item 1 component</span>
<ItemComponent2 />
</div>
),
[]
)
}
const ItemComponent2 = () => {
console.log('ITEM 2 RENDERED')
return <div>Item 2 component</div>
}
```
In the example above, we used <code>useMemo</code> inside <code>ItemComponent1</code>. Thus, whatever the component returns is initialized only once. It won't be re-rendered at the time of parent re-rendering.
Below you can see the result of how hook works:

As you see, when the state changes inside the <code>FormContainer</code>, the <code>useMemo</code> does not allow component <code>ItemComponent1</code> to re-render.
One more thing. Let's assume we specified <code>firstName</code> as a dependency passed via props from the parent. In this case, the component will be re-rendered only if <code>firstName</code> value changes.
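A sketch of what that might look like, reusing the components above (the <code>firstNameValue</code> prop is assumed to be passed down from <code>FormContainer</code>):
```jsx
const ItemComponent1 = ({ firstNameValue }) => {
  console.log('ITEM 1 RENDERED')
  return React.useMemo(
    () => (
      <div>
        <span>Item 1 component</span>
        <ItemComponent2 />
      </div>
    ),
    [firstNameValue] // the memoized markup is rebuilt only when firstNameValue changes
  )
}
```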
####__№2 - memo__####
You can reach the same effect using a high order component (<code>HOC</code>) named <code>memo</code>. If you don’t want the component <code>ItemComponent2</code> involved in re-rendering - wrap it in <code>memo</code>. Here we go:
```jsx
const ItemComponent2 = React.memo(() => {
console.log('ITEM 2 RENDERED')
return <div>Item 2 component</div>
})
```
If we pass props to a component wrapped in a <code>HOC memo</code>, we will be able to control the re-rendering of that component when the prop changes. To do this we should pass as a second argument a function which:
1. Compares the props values before and after the change (<code>prevProps</code> and <code>nextProps</code>)
2. Returns a boolean value that tells React whether it can skip re-rendering the component (return <code>true</code> when the props are equal)
```jsx
const ItemComponent1 = ({ firstNameValue, lastNameValue }) => {
console.log('ITEM 1 RENDERED')
return (
<div>
<span>Item 1 component</span>
<ItemComponent2
firstNameValue={firstNameValue}
lastNameValue={lastNameValue}
/>
</div>
)
}
const ItemComponent2 = React.memo(
() => {
console.log('ITEM 2 RENDERED')
return <div>Item 2 component</div>
},
  (prevProps, nextProps) =>
    // returning true tells React the props are equal, so it skips the re-render
    prevProps.firstNameValue === nextProps.firstNameValue
)
```
In the example above we compare the old and new <code>firstName</code> prop. If they are equal, the component will not be re-rendered. Since the comparison ignores <code>lastName</code>, typing in the last name field leaves <code>firstName</code> unchanged, so <code>ItemComponent2</code> is not re-rendered.
You can see the result below:

>By default (without the second argument), <code>React.memo</code> does a shallow comparison of the props: primitives are compared by value, everything else by reference, and it doesn't look any deeper. Props that are objects, functions, or arrays get a new reference on every render and will cause the component to re-render anyway. You can handle such cases by describing the comparison you need in the function passed as the second argument.
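A small sketch of the problem and one way around it (component and prop names here are made up for illustration):
```jsx
const Profile = React.memo(
  ({ user }) => <div>{user.name}</div>,
  // custom comparator: only re-render when the field we care about changes
  (prevProps, nextProps) => prevProps.user.name === nextProps.user.name
)

const Parent = () => {
  // `{ name: 'Maria' }` is a new object reference on every render of Parent,
  // so without the comparator above the default shallow check would re-render Profile each time.
  return <Profile user={{ name: 'Maria' }} />
}
```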
Other hooks tutorials:
[<code>useState</code>](https://dev.to/max_frolov_/react-hooks-usestate-rules-and-tips-for-component-state-manipulation-1gd6) [<code>useReducer</code>](https://dev.to/max_frolov_/react-hooks-usereducer-complex-state-handling-243k)
More tips and best practices on my [twitter](https://twitter.com/max_frolov_).
Feedback is appreciated. Cheers! | max_frolov_ |
327,989 | Streaming data from Kafka to Elasticsearch - video walkthrough | Getting data from Kafka into Elasticsearch is easy with the Kafka Connect sink connector. Check o... | 5,469 | 2020-05-05T08:57:34 | https://dev.to/confluentinc/streaming-data-from-kafka-to-elasticsearch-1348 | apachekafka, elasticsearch, search, kafkaconnect | Getting data from Kafka into Elasticsearch is easy with the Kafka Connect sink connector.

Check out this video tutorial on how to use the connector and a walk through some of the common requirements when using it including
- Updating and deleting documents in Elasticsearch
- Handling schemas and field mappings, including Timestamp fields
- Changing the name of the target index
- Error handling
🎥 Watch it here: https://rmoff.dev/kafka-elasticsearch-video
👾 You can also try it all out for yourself and follow along with the code using Docker at https://rmoff.dev/kafka-elasticsearch
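For reference, a minimal sink connector configuration looks roughly like this (topic, URL, and the ignore flags are placeholder values; the video and repo above show the full setup):
```json
{
  "name": "sink-elastic-orders",
  "config": {
    "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "topics": "orders",
    "connection.url": "http://elasticsearch:9200",
    "key.ignore": "false",
    "schema.ignore": "true"
  }
}
```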
| rmoff |
328,013 | Setting Up Netlify Forms With Gatsby | I recently built a website using Gatsby, choosing to host it on the Netlify free tier. One of the fea... | 0 | 2020-05-05T10:07:07 | https://chrisharding.io/setting-up-netlify-forms-with-gatsby | gatsby, netlify, webdev | I recently built a website using Gatsby, choosing to host it on the Netlify free tier. One of the features I wanted to try was form handling. Typically, handling form posts requires wiring up and hosting a backend to process requests. Netlify forms allows me deploy a simple static site with zero backend, letting them handle the rest for me. You can read more about the functionality here.
##React setup
The setup is relatively straight forwards. Firstly, we need to add in the usual form, input and button tags. Below is a sample component for submitting a user email address.
{% gist https://gist.github.com/wdchris/7d009b67e39ba89d4aa0d4b92bca27a2.js %}
Notice two extra attributes on the form tag. `data-netlify` is used to tell Netlify we want to track this form. `data-netlify-honeypot` is the name of an input within the form we’ll use to counter bot attacks. Note also that we need to specify a `name` attribute on the input we're tracking. This is a key, the input being the value, which will be sent to the Netlify Forms API.
The last pieces of markup we need are two hidden inputs.
{% gist https://gist.github.com/wdchris/e0242e38fdff3e02ad79c9de00ce3c29.js %}
The first wires up the form name to the Netlify API. The second is the aforementioned honeypot input field for tricking bots. It is hidden from users, but will likely be filled in by bots, which tells Netlify to ignore that submission.
## Hooking it up
Now that the markup is in place, we just need to add some standard React code to track the state of the input and send a POST request for Netlify to capture.
{% gist https://gist.github.com/wdchris/36a4a327ed83983eeaef6855d99fbbe8.js %}
Here we use React hooks to track the state. A `fetch` call then sends a POST request, encoding the form name and the email address so they can be captured by the Netlify API.
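Pulling the markup and the submit logic together, a hedged sketch of the whole component could look roughly like this. The form name *register* and the *email* field match the article; the honeypot name `bot-field`, the `encode` helper and the handler names are assumptions rather than the author's exact gist.
```jsx
// Hedged sketch of a register form wired up for Netlify forms.
import React, { useState } from "react"

// Netlify expects submissions encoded as application/x-www-form-urlencoded
const encode = (data) =>
  Object.keys(data)
    .map((key) => encodeURIComponent(key) + "=" + encodeURIComponent(data[key]))
    .join("&")

const RegisterForm = () => {
  const [email, setEmail] = useState("")

  const handleSubmit = (event) => {
    event.preventDefault()
    fetch("/", {
      method: "POST",
      headers: { "Content-Type": "application/x-www-form-urlencoded" },
      body: encode({ "form-name": "register", email }),
    })
      .then(() => console.log("Form submitted"))
      .catch((error) => console.error(error))
  }

  return (
    <form
      name="register"
      method="post"
      data-netlify="true"
      data-netlify-honeypot="bot-field"
      onSubmit={handleSubmit}
    >
      {/* Wires the submission up to the "register" form in the Netlify dashboard */}
      <input type="hidden" name="form-name" value="register" />
      {/* Honeypot input: hidden from users, likely filled in by bots */}
      <input type="hidden" name="bot-field" />
      <input
        type="email"
        name="email"
        value={email}
        onChange={(event) => setEmail(event.target.value)}
        placeholder="Your email"
      />
      <button type="submit">Register</button>
    </form>
  )
}

export default RegisterForm
```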
## That’s all folks
And that’s all it takes. Push this to deploy and you’ll notice submissions coming through on the Netlify forms dashboard. It will track a form called *register*, and will contain a field called *email*. All this built on a static React site with a stateful front-end. You can see the full code sample [here](https://gist.github.com/wdchris/c64309f50add47196c3d743292ef6660). | chrisharding |
328,024 | 10 Tips To Get Out Of Your Own Way And Start That Side Project | Let's START working on our «Dream Projects»! My last 10 years of coding had its ups and downs.... | 0 | 2020-05-05T10:31:44 | https://dev.to/rohovdmytro/start-doing-your-dream-project-here-s-what-works-for-me-4ff8 | productivity, sideprojects, weddev | > Let's START working on our «Dream Projects»!
---
My last 10 years of coding have had their ups and downs. To have more ups, I've implemented tiny habits to be more productive. It's time to share! If one of them makes you 1% more productive every day, I am taking it!
Beware! Some of the tips might be silly (or go against best practices). But in my personal experience they DO make me more productive.
And hey, ultimately, productivity is the speed at which we achieve our dreams. Let's do something about it!
---
## How To Finally Start Doing Your Dream Project
---
I am defining the problem of starting as a problem of mental walls. Let me explain.
I have a medium-sized personal goal: to get 10,000 people using my products that help them become MORE self-aware. THAT is my personal, realistic dream.
But! I am not sure exactly which tools I will build (AND people will use). Not cool. And understanding yourself is not an easy task. But in order to figure out the tool to build, I need to iterate on my ideas (and hypotheses). And in order to iterate I need to... start and to... finish.
To start the WHOLE project and to finish the whole project. Daaamn. :)
I want my week to look like this:

And NOT like this (irony):

Hey, the naming convention of my variables IS important (from some perspective). But looking at the bigger picture, there are more important problems I want to face. For example: am I delivering real value to people with my products?
> Mental bottlenecks are the places where we feel we need to make a good decision, but the decision might not be THAT important from a bigger standpoint.
My opinion:
> One of the most important meta-skills is the ability to start and to finish.
Cause overthinking is not fun. Momentum is!
And hey, hey, hey! I'm sharing my real-life experience. IF you are interested in building side projects that would challenge you to become better & stronger — [join me, join my newsletter](https://blog.swingpulse.com).
To the tips.
# Tips
---
## Picking The Idea
We need a good idea. But good ideas do not come from nowhere; they should be based on something. My current idea is not based on data, because I simply don't have real-life product data yet. That's why it is based on intuition (consider that an inner source of data).
The idea is: tracking daily stuff in a fun way using... emojis.
And then I am telling myself:
> Let's spend some quality time to find the perfect shape for this idea in order to test it in the wild.
Let your first idea be based on intuition. You will get experience and data later.
## Naming is hard. Avoid it!
Naming a project is hard. But it should not be hard to name things while in active development. What I do is give projects temporary technical names. For this project I chose the name «Trackerion».
Sometimes I name projects literally. My previous project was called: «habits-with-feelings».
And that is fine. You might come up with the perfect name while working or talking to people. Just don't get stuck on it initially.
## Biggest Tip To File Organization
All I start with is a single file: App.js. Everything-else.js comes later, on-demand.js. And that is fine. The structure will happen later.
And that's all.
## Don't Get Stuck With Your Tools
I can easily imagine this conversation happening in heaven:
> — What was the last thing that you remember?
— npm install
Sometimes you can grow old while installing all the Android tooling or npm packages. But there is a hack. Don't get stuck with your tools! Do other stuff in the context of the current task or project.
- Visualize the problem
- Plan the code
- Prioritize features

For example, while some script was spinning, I started exploring how to upload my app to the Google Play Store, and this is what I found.

This simple act won me a couple of days, because I realised that I need to upload my app as soon as I can.
Don't get stuck with your tools. There is always something useful to do.
## Expect The Unexpected Things
So I was at the peak of excitement during development when my phone's screen got smashed. And I had just set up everything to develop with it.
Yikes!
How to win in this situation?
> The win is to pay as little attention to the bad news as possible and keep going.
I found my very old phone, set everything up again, and sent my broken phone to the fastest repair service.
Feeling sad, feeling bad, accepting life, moving on.
## Minimise Style Fapping
This one is golden. I think this rule is one of the biggest time savers for me. It has saved me DOZENS of hours.
> Don't. Play. With. Styles.
I've stopped tweaking paddings, margins, colors, etc. For some reason I used to do this during active development.
No-no-no-no.
Instead, I have this rule:
> Put UI blocks onto their places.
Minimise the time spent tweaking styles. Focus on building a clear screen/page structure.

`color: green` is fine, `color: red` is fine.
## Resist The Urge Of Upgrading To The Latest & Greatest
Upgrading to the «latest and greatest» is... tricky. Cause you often run into some compatibility issues...
Upgrading feels good, dealing with issues takes time.
## Landing Page — Just Find The Reference
To complete a basic version of the landing page I usually go with some clear reference.

You can turn building a landing page into rocket science, and it is important. But you are fine starting with the basics.
Done is better than perfect.
## Have a Distraction Paper
When I write on a piece of paper while solving a particular task, I always get somewhat irrelevant thoughts. They feel good, but they are distracting. The solution is to have a «distraction paper»: the paper where you write down «everything else».
It really, really, really helps.
## Manage Technical Challenges
This one is a big one.
I've noticed a tendency to look for technical challenges. Yeah, learning as a programmer is fun. But here is the tricky part: it does not necessarily help you to finish. If you have a clear goal to learn something new, that is fine. But otherwise it's not wise.
While you are chasing some fancy term shared by a speaker who was solving a $100k business problem, time goes by.
Every step can contain a challenge that can take weeks or months. And that is a lot of time!
## Build The Everyday Momentum With Some Sweet Stuff
Most of the time I am quite strict with myself in terms of how I spend my time. But! If you are in a bad mood... If things don't go so well... If it's early morning and one cup of coffee was not enough...
> Gain momentum with some soul healing refactoring.
:)
or...
> Play with styles. A **little** bit!
Cause we are all human beings.
The End.
This is part of a series. I have multiple ideas for the next articles! If you are interested in some of them, let me know in the comments.
- How to overcome the mid-project crisis during development. It's about restoring faith in your project when it starts upsetting you. Basically, it's about the meta-skill of finishing. :)
- Why a coder will NEVER launch a good product. This one is about the different roles (hats) we should wear to move forward with our personal projects, and the high price I've paid to understand that.
Let me know in the comments! And if my words are relevant to you, follow me somewhere (or everywhere).
- [Twitter](https://twitter.com/rg_for_real)
- [YouTube](https://www.youtube.com/channel/UCeyCNhFmy-dhrrbByqBraPQ?view_as=subscriber)
- [Newsletter](https://blog.swingpulse.com)
Cheers!
P.S. What stops you from starting? | rohovdmytro |