Columns (from the dataset viewer):

- `id`: int64 (5 to 1.93M)
- `title`: string, length 0 to 128
- `description`: string, length 0 to 25.5k
- `collection_id`: int64 (0 to 28.1k)
- `published_timestamp`: timestamp[s]
- `canonical_url`: string, length 14 to 581
- `tag_list`: string, length 0 to 120
- `body_markdown`: string, length 0 to 716k
- `user_username`: string, length 2 to 30
129,601
Crypto Exchange clone Scripts to start your exchange like Binance
Get a cryptocurrency clone script to start your exchange like Binance, Coinbase, etc.
0
2019-06-28T12:05:33
https://dev.to/harryzaara/crypto-exchange-clone-scripts-to-start-your-exchange-like-binance-4j9h
bitdealclonescript
---
title: Crypto Exchange clone Scripts to start your exchange like Binance
published: true
description: Get a cryptocurrency clone script to start your exchange like Binance, Coinbase, etc.
tags: bitdeal, clonescript
---

Bitdeal, a leading clone script provider, furnishes the best clone scripts of top crypto exchanges so you can build your own cryptocurrency exchange like OKEx, Coinbase, Upbit, Binance and many more. We develop and deploy the best clone scripts with many new features and plug-ins to make your exchange stand out from the crowd.

15 clone scripts to start your crypto exchange like Binance, Bitstamp and more:

- Binance Clone Script
- Bitstamp Clone Script
- Bithumb Clone Script
- Coinbase Clone Script
- Bitfinex Clone Script
- Poloniex Clone Script
- OKEx Clone Script
- Paxful Clone Script
- Kraken Clone Script
- Upbit Clone Script
- Korbit Clone Script
- bitFlyer Clone Script
- Coincheck Clone Script
- Gemini Clone Script
- BitMax Clone Script

Bitdeal provides the best exchange clone scripts across the world to start your own exchange like Binance, Coinbase, OKEx, Poloniex, Bithumb, etc. We provide features like an automated trading bot, an "unhackable" admin panel and a user-friendly trader panel, with secured APIs. You can start your own online crypto exchange like the popular exchanges and capture a larger portion of the crypto market.

[Book a live demo for any exchange clone script at Bitdeal!!](https://www.bitdeal.net/cryptocurrency-exchange-clone-scripts)
harryzaara
129,667
Explore cosmos with Serverless
Serverless, JavaScript and CosmosDB
0
2019-06-29T11:38:47
https://dev.to/azure/explore-cosmos-with-serverless-422o
serverless, javascript, tutorial, showdev
---
title: Explore cosmos with Serverless
published: true
description: Serverless, JavaScript and CosmosDB
tags: serverless, javascript, tutorial, showdev
cover_image: https://thepracticaldev.s3.amazonaws.com/i/3lz77fmx35f6qoaysxti.gif
---

Follow me on [Twitter](https://twitter.com/chris_noring), happy to take your suggestions on topics or improvements /Chris

> You thought this was about space, didn't you? Yeah, my bad :). But now that I have your attention, let's talk about Serverless and how we can use it with Azure Cosmos DB. It's so easy it's out of this world ;)

![](et.gif)

What we are going to do is show:

- **Serverless**, and how to create a function in the Cloud
- **Create** an Azure Cosmos DB database
- **Show** how you can read and write data to your Azure Cosmos DB

## Serverless - look Ma, no servers

Serverless is *the new black*, the thing that everybody talks about, and for good reason. Gone are the days when you had that server room with no oxygen, full of cables and lord knows what else. Now you can just focus on the code and rest assured that your code lives in someone else's server room ;), that is, *the Cloud*. The point is, it's not a thing you need to care about anymore.

Serverless is, of course, a bit more than just *no servers*: it's fully managed, there is not even a web server to configure. I've covered the *whys* and the available offerings in this [article](https://dev.to/azure/serverless-from-the-beginning-using-azure-functions-azure-portal-part-i-28o1), so if you are interested, go and have a read. We will focus on a specific offering, namely Azure Functions. The reasons it was chosen for this specific article:

- **Easy to get started with**, comes with great extensions for all major IDEs including VS Code that let you scaffold functions, debug and more.
- **Creates a connection** to your Azure Cosmos DB database, so you can just focus on changing the data and not have to bother with instantiating connections

## Resources

- [Free Azure account](https://azure.microsoft.com/en-gb/free/?wt.mc_id=devto-blog-chnoring) You will need to sign up for Azure to use the Azure CLI and deploy Azure Functions
- [How to install the Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest&wt.mc_id=devto-blog-chnoring) For some of the activities we do, we will use the Azure CLI in the terminal. This is a great way to manage your resources.
- [Create an Azure Function in VS Code](https://docs.microsoft.com/en-us/azure/azure-functions/functions-create-first-function-vs-code?wt.mc_id=devto-blog-chnoring) This will show you how to build your first Azure Function in VS Code
- Working with Azure Cosmos DB + Azure Functions:
  - [Working with Azure Cosmos DB and Node.js](https://docs.microsoft.com/en-us/azure/cosmos-db/create-sql-api-nodejs?wt.mc_id=devto-blog-chnoring) This article takes you through building a Node.js app, creating an Azure Cosmos DB database and connecting the two
  - [Create an Azure Cosmos DB using the Azure CLI](https://docs.microsoft.com/en-us/azure/cosmos-db/manage-with-cli?wt.mc_id=devto-blog-chnoring) This shows how you can set up your Cosmos DB database using the Azure CLI
  - [Create an Azure Cosmos DB using the Azure CLI + additional resources](https://docs.microsoft.com/en-us/azure/cosmos-db/scripts/create-database-account-collections-cli?toc=%2fcli%2fazure%2ftoc.json?wt.mc_id=devto-blog-chnoring) This shows all the different resources you might need to create before and during setting up Azure Cosmos DB, a bit longer version than the above link
- [Series I wrote on getting started with Serverless](https://dev.to/azure/serverless-series-16o8)
- [Serverless CRUD with Azure Cosmos DB client](https://github.com/simonaco/pbp-serverless) This is a repository by my colleague Simona Cotin that shows how you can do a full CRUD using the MongoDB client
- [Reference page for Serverless + Azure Cosmos DB at GitHub](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/azure-functions/functions-bindings-cosmosdb-v2.md) This is the most complete page there is on the combination Serverless + JavaScript + Azure Cosmos DB
- [Slack WebHooks](https://slack.com/intl/en-gb/help/articles/115005265063-incoming-webhooks-for-slack)
- [Better looking Slack messages with attachments](https://api.slack.com/docs/message-attachments)

## Steps

Let's look at this from a mile-high view and see what we need to do to accomplish what we want. We need to:

1. **Scaffold** an Azure Cosmos DB database in Azure
2. **Create** a Serverless app and create an Azure Function for each call. We show how you can read data, create it and update it
3. **Configure** our functions to connect to our database, so working with the database is made really easy

## Create an Azure Cosmos DB database

Ok. There are two ways to do this.
We can either:

- **Create** it through the Azure CLI
- **Use** the portal and create the database

Let's show how to do it with the Azure CLI at first and then switch to the portal gradually. Let's face it, it's great to feel like a hacker but sometimes you want to see what you are doing. You also want to make sure it's all there, and some things are more practical to do in a UI, like data entry.

### Using the Azure CLI

To create the database we need to take the following steps:

1. **Resource group**, create a resource group for our database or use an existing group
2. **Azure Cosmos DB account**, create an Azure Cosmos DB account
3. **Database**, create the database
4. **Add** collections and data

**Create the resource group**

```
az group create \
  --name mycosmosgroup \
  --location "West Europe"
```

![](https://thepracticaldev.s3.amazonaws.com/i/g2lbhb61ykxj3bw8oqs7.png)

The output above says `"provisioningState": "Succeeded"`, which means we succeeded in creating our resource group.

**Create the Azure Cosmos DB account**

The next step is to create our account. The account name should be lowercase.

```
az cosmosdb create \
  --resource-group mycosmosgroup \
  --name cosmosaccount-chris-sql \
  --kind GlobalDocumentDB \
  --locations "North Europe"=0 "West Europe"=1 \
  --default-consistency-level "Session" \
  --enable-multiple-write-locations true
```

This might take a little while; all databases take a little time to scaffold. If you are impatient you can go visit it in the portal and it should look something like this:

![](https://thepracticaldev.s3.amazonaws.com/i/ipo9zujz20hn7uuloahp.png)

The last step is to create the database.

**Creating the database**

Ok, we have a database account but we don't have a database yet. To create a database we need to create something called a *container*. Let's head to the portal and find our database account. Now click the tab `Data Explorer`.
It should now look something like:

![](https://thepracticaldev.s3.amazonaws.com/i/lmh3yqezefi6ip1kkdsz.png)

By hitting the indicated button we get a dialog asking us to create a new container, like so:

![](https://thepracticaldev.s3.amazonaws.com/i/xad0l96g3oivm5p14cwr.png)

Fill in values for the highlighted parts. You will notice that the `Partition key` will prepend the value with a `/`, so `id` becomes `/id`. I know what you are thinking: what the heck is a partition key? Let's see what the info icon tells us:

![](https://thepracticaldev.s3.amazonaws.com/i/kj781lrr13a32th5eitl.png)

Partitions are a chapter all to themselves. If you are really interested in knowing more, have a look at this [article](https://blog.maartenballiauw.be/post/2012/10/08/what-partitionkey-and-rowkey-are-for-in-windows-azure-table-storage.html). For now, we are happy knowing it's a column we point out in our table.

Ok, so we created our container and we should have something looking like this:

![](https://thepracticaldev.s3.amazonaws.com/i/6ztcoxyxusf0xmwelaxr.png)

Now what? Now we fill it with data, and we do that by clicking the `New Item` button. It should present you with an edit view like so:

![](https://thepracticaldev.s3.amazonaws.com/i/r3gbi4y1f1ljyp8folsx.png)

So change the item to say:

```
{
  "id": "1",
  "name": "tomato"
}
```

Now hit `Save`, then repeat the procedure and create another item with the value `{ "id": "2", "name": "lettuce" }`. We want to show this data in a Serverless function, right? Right? Yeah, I thought so :) So Serverless is next.

## Serverless - Reading and Writing to/from our database

Ok then, the Serverless part it is. We need to support interacting with our Azure Cosmos DB database. We will build the following:

1. An Azure Function App, this will hold all of our functions
2. Three different Azure Functions, each supporting an HTTP verb
3. *in/out* bindings to Azure Cosmos DB, this is configuration we write that creates a database connection for us

### Create an Azure Function App

To be able to use Azure Functions they need to exist inside of an Azure Function App. We will create it using VS Code, so we need to make sure we have the correct extension installed. Look for `Azure Functions` in the extensions area. It should look like this:

![](https://thepracticaldev.s3.amazonaws.com/i/gkl1rs6toru3ilmnti2m.png)

The next step is to scaffold our app. We do that by opening up the command palette, either by selecting `View > Command Palette` in the menu or by hitting `CMD + SHIFT + P` if you are on a Mac.

![](https://thepracticaldev.s3.amazonaws.com/i/adf6m44bse86yewrzz3z.png)

Select the current directory, JavaScript as the language and HTTP trigger. The latter will ensure that your project contains at least one function. Name the function `ProductsGet` and give it `Anonymous` as the authorization level.

### Create the remaining functions

For this step, we need to bring up our command palette again: `CMD + SHIFT + P`. Select `Azure Functions: Create Function` and select `HttpTrigger`. Name the function `ProductsCreate`. Now repeat this process and create another function called `ProductsUpdate`.

![](https://thepracticaldev.s3.amazonaws.com/i/0w4ytzw2fmft0y23q9sy.png)

We should now configure the allowed HTTP methods for each function. Go into each function directory and open up `function.json`. There you will find `bindings` and specific config looking like this:

```json
"methods": [
  "get",
  "post"
]
```

Ensure you configure it in the following way:

- `ProductsGet` should only support `get`
- `ProductsCreate` should only support `post`
- `ProductsUpdate` should only support `put`

### Configure functions with Azure Cosmos DB

Next up, we need to revisit the `function.json` for each function. This time we need to add a binding that lets us talk to our Azure Cosmos DB database.
Essentially, we need to create something called a *binding* that not only holds a connection to our Azure Cosmos DB database but also easily lets us do CRUD on a specific collection. The configuration looks pretty much like this:

```
{
  "name": "productDocument",
  "type": "cosmosDB",
  "databaseName": "ToDoList",
  "collectionName": "Items",
  "createIfNotExists": true,
  "connectionStringSetting": "CosmosDB",
  "direction": "in"
}
```

Depending on what we are trying to do, we either need a binding with `"direction": "in"` or a binding of type `out`, in which case it would look like this: `"direction": "out"`. If we only want to read data, we need it to say `"direction": "in"`. For creating, we need it to say `"direction": "out"`. Updating is a different matter: in fact, we need two sets of bindings, one that retrieves our record and one to create/replace our record; more on that later.

So what are we saying here? We are saying that we have a `databaseName` called `ToDoList` (change that to whatever you called your database). We also have a `collectionName` called `Items`, which, as you could see in the Azure portal, was the case. We also define `direction` as having the value `in`. This means the binding will set up the connection for us, and the object we are given will contain the data we need.

There is one more thing here: `connectionStringSetting`, pointing to the value `CosmosDB`. This is an AppSetting we set up in our Azure Functions app project. We haven't deployed it to Azure yet, so for now we need to store it somewhere, and that somewhere is the file `local.settings.json`. It should currently look like this:

```json
{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "CosmosDB": "[connection string to our database]"
  }
}
```

You need to go to the portal and our created database, click the tab menu `Keys`, copy the value in the field `Primary Connection String` and set it as the value of the above key `CosmosDB`.

### Additional configuration

We are not quite done yet, I'm afraid. There are two more things we need to do:

1. **Dependencies**, set up dependencies to be installed
2. **Storage account**, set up a storage account

The first bit we solve by opening up the file `host.json` and ensuring it has the following content:

```json
{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[1.*, 2.0.0)"
  }
}
```

The `extensionBundle` key and value instruct Azure to install all the needed libraries so we can talk to queues, databases and pretty much all the cool things we can integrate our Azure Function App with as input and output bindings.

The second and last thing we need to do is create a storage account. We can easily do so in the portal by clicking `Add Resource`, typing `Storage Account` and following the instructions. Then we need to obtain the connection string and add an entry to the file `local.settings.json`, like so:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "[storage account connection string]",
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "CosmosDB": "[database connection string]"
  }
}
```

### Write some code

At this point, we want to write some code in our `ProductsGet` function. So we need to edit `ProductsGet/index.js` to say the following:

```js
module.exports = function (context, req) {
  for (var i = 0; i < context.bindings.productDocument.length; i++) {
    let product = context.bindings.productDocument[i];
    context.log('id', product.id);
    context.log('name', product.name);
  }
  context.res = {
    // status: 200, /* Defaults to 200 */
    body: "Ran ProductsGet"
  };
};
```

Looking at the code above, we see that `context.bindings.productDocument` contains a list of our `Items`. We iterate that list in the code and print out `id` and `name`.
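The binding plumbing only exists inside Azure, but the handler itself is plain JavaScript, so we can smoke-test the loop locally with a hand-rolled `context`. The mock object and sample data below are mine, not part of the Functions runtime; read it as a sketch:

```javascript
// The ProductsGet handler from above, inlined so it runs outside Azure.
const productsGet = function (context, req) {
  for (var i = 0; i < context.bindings.productDocument.length; i++) {
    let product = context.bindings.productDocument[i];
    context.log('id', product.id);
    context.log('name', product.name);
  }
  context.res = {
    body: "Ran ProductsGet"
  };
};

// Hand-rolled stand-in for the context the runtime normally supplies:
// the "in" binding shows up as context.bindings.productDocument.
const context = {
  bindings: {
    productDocument: [
      { id: "1", name: "tomato" },
      { id: "2", name: "lettuce" }
    ]
  },
  log: (...args) => console.log(...args)
};

productsGet(context, {});
console.log(context.res.body); // → Ran ProductsGet
```

Run it with `node` and you should see the two items logged, followed by the response body.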
Because of configuration like this in `function.json`:

```json
{
  "name": "productDocument",
  "type": "cosmosDB",
  "databaseName": "ToDoList",
  "collectionName": "Items",
  "createIfNotExists": true,
  "connectionStringSetting": "CosmosDB",
  "direction": "in"
}
```

we establish a connection to our Azure Cosmos DB database and a specific collection `Items`, and we get a handle `productDocument`. Imagine having to write the connect-to-database code yourself; not fun, right?

### Deploy and test it out

Ok, the next step is to deploy it. To deploy we need to do two things:

1. **Deploy the Azure Function App**, this is as simple as clicking a button in VS Code
2. **Deploy the local app settings** to Azure, this is needed to ensure our AppSettings in Azure correctly point out our Azure Cosmos DB database but also our storage account

Let's start with the deploy part. Ensure you have the following extension installed in VS Code:

![](https://thepracticaldev.s3.amazonaws.com/i/k2kwed7fygebkmia7pte.png)

This will enable you to easily deploy anything to Azure. We started this article talking about another extension, namely `Azure Functions`. Ensure you have them both installed and life will be a lot simpler.

Now for the deployment part. Start by clicking the highlighted Azure icon below:

![](https://thepracticaldev.s3.amazonaws.com/i/c8y12pttws3dood1l76d.png)

Then we come here:

![](https://thepracticaldev.s3.amazonaws.com/i/couf0kfd3ilklg8yjzed.png)

Indicated at the top right is an icon looking like a flash. Clicking that will deploy the app to Azure. Afterwards, it will become part of the list below the flash icon. That's also where we would go to redeploy changes.

The last step is to ensure we upload all the local app settings to Azure. To do that, we need to find our deployed Azure Function App, right-click it and choose to upload them.

It should look like so:

![](https://thepracticaldev.s3.amazonaws.com/i/ubjmhjteub624nkmoyi9.png)

At this point, the app should actually work. So let's head to the Azure Portal and try it out:

![](https://thepracticaldev.s3.amazonaws.com/i/fncix588u4dt2bha0l26.png)

Just click on the correct function `ProductsGet`, then click `Run`. This will produce a terminal window below that prints the following:

![](https://thepracticaldev.s3.amazonaws.com/i/dek7ma9vjobcxf0sl9ue.png)

As you can see from the above image, it is reading from the Azure Cosmos DB database. Awesome, right? We have a working connection. Now to the remaining operations.

## Adding the remaining operations

So far we have been reading, so how do we support the additional operations? Well, we need to configure each `function.json` and give it the correct type of configuration, and we of course need to go into each `index.js` and add the necessary code to either Create or Update.

### Supporting creation

So far we have supported reading data from our database. We accomplished that by creating a binding with a `direction` value of `in`, and we also needed to point out our database and what collection to read from. Creating something is about creating a binding with `direction` having the value `out`. This gives us a handle that we can add data to. Let's start with the configuration part.

In the file `ProductsCreate/function.json` we add the following entry to our bindings array:

```json
{
  "name": "productDocument",
  "type": "cosmosDB",
  "databaseName": "ToDoList",
  "collectionName": "Items",
  "createIfNotExists": true,
  "connectionStringSetting": "CosmosDB",
  "direction": "out"
}
```

Ok, that sets us up nicely so we can focus on the second step, adding code to `ProductsCreate/index.js`:

```js
module.exports = function (context, req) {
  const { id, name } = req.body;
  // creating the doc
  context.bindings.productDocument = JSON.stringify({ id: id, name: name });
  // saving the doc
  context.done();
};
```

Note how the configuration value for the key `name` in `function.json` matches `context.bindings.productDocument`. What we do is assign an object to `productDocument`:

```js
context.bindings.productDocument = JSON.stringify({ id: id, name: name });
```

Above we are creating the record we want inserted. To actually *do the save* we end it all with the call `context.done()`.

We don't believe this until we've tried it, right? The first thing we need to do is redeploy our project to Azure. Because we deployed it once already, it's super simple to redeploy. Click the Azure extension on the left panel, then right-click your Azure Functions app project and select `Deploy to Function App` like below:

![](https://thepracticaldev.s3.amazonaws.com/i/jguzbglb16mmtd12sshb.png)

This will redeploy it all. Wait for it to finish.

> Done? Good, let's continue

For this one, let's head to the Azure Portal and test it (you can use cURL or any client that supports sending REST calls). In the portal, we want to select our function and click the `Test` tab on the right:

![](https://thepracticaldev.s3.amazonaws.com/i/chl9ux18riuyeu59yb64.png)

Now enter a request like so:

![](https://thepracticaldev.s3.amazonaws.com/i/mpxplv3jzash64tpy0eg.png)

The next step is to click the `Run` button.
We should get a result like the below:

![](https://thepracticaldev.s3.amazonaws.com/i/dnqc2j91l1t49nhju2he.png)

To really verify that this worked, head to the resource page for your database and see if you have a third record:

![](https://thepracticaldev.s3.amazonaws.com/i/qqdjwpllrct2w7t0t30d.png)

There it is! :) That's it for creation; let's talk about updating next.

### Supporting update

Updating a record is a different matter. We need to support two bindings. Why is that, you ask? Well, we need a binding of type `in` so we can retrieve the record we need. Then we need a second `out` binding to actually *replace* the record.

> Hmm ok, show me

Ok then, bindings first. They should look like this:

```json
{
  "name": "productDocument",
  "type": "cosmosDB",
  "databaseName": "ToDoList",
  "collectionName": "Items",
  "createIfNotExists": true,
  "connectionStringSetting": "CosmosDB",
  "direction": "out"
},
{
  "name": "productDocumentsIn",
  "type": "cosmosDB",
  "databaseName": "ToDoList",
  "collectionName": "Items",
  "createIfNotExists": true,
  "connectionStringSetting": "CosmosDB",
  "direction": "in"
}
```

Now to the code:

```js
module.exports = function (context, req) {
  const { id, name } = req.body;
  // or you could limit the "in" binding to only return one doc
  const foundDoc = context.bindings.productDocumentsIn.find(p => p.id == id);
  // do the update
  context.bindings.productDocument = foundDoc;
  context.bindings.productDocument.name = name;
  context.res = {
    // status: 200, /* Defaults to 200 */
    body: "Record updated"
  };
  context.done();
};
```

What we can see is that we first retrieve the record from our input binding `productDocumentsIn` and assign it to the variable `foundDoc`. The next thing we do is assign `foundDoc` to our `out` handle found on `context.bindings.productDocument`. We do this to ensure we don't lose any values on this record, should the values on the body only represent a partial update.
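Since both bindings surface as plain properties on `context.bindings`, the retrieve-then-replace dance can be checked locally too. The mock context and sample data below are mine (the runtime normally provides them), so treat this as a sketch rather than how Azure actually wires it up:

```javascript
// The update handler, inlined; context.res is set before context.done()
// so the response is in place when we signal completion.
const productsUpdate = function (context, req) {
  const { id, name } = req.body;
  const foundDoc = context.bindings.productDocumentsIn.find(p => p.id == id);
  context.bindings.productDocument = foundDoc;   // keep all existing fields
  context.bindings.productDocument.name = name;  // apply the partial update
  context.res = { body: "Record updated" };
  context.done();
};

// Hand-rolled mocks: productDocumentsIn plays the "in" binding,
// productDocument receives the "out" document.
const context = {
  bindings: {
    productDocumentsIn: [
      { id: "1", name: "tomato" },
      { id: "2", name: "lettuce" }
    ]
  },
  done: () => {}
};

productsUpdate(context, { body: { id: "2", name: "romaine" } });
console.log(context.bindings.productDocument); // → { id: '2', name: 'romaine' }
```

The record keeps its `id` and only `name` changes, which is exactly the keep-then-overwrite behavior described above.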
The next thing we do is take the `name` that came with our `body` and apply it, like so: `context.bindings.productDocument.name = name`, and of course we finish that off with `context.done()`. That's it, that is how we support updating.

## Summary

We started off by creating an Azure Cosmos DB database. Then we put some data in there. Next, we created some serverless functions. Thereafter we learned about *bindings* and how those could save us time when it came to creating a connection to our database. Thanks to the bindings, we ended up writing very little code to support reading and writing. How about you try this too? :)
softchris
129,851
Around the Web – 20190628
And now for some things to read (or watch) over the weekend, if you have some spare time that is....
359
2019-07-22T13:55:41
https://wipdeveloper.com/around-the-web-20190628/
blog
---
title: Around the Web – 20190628
published: true
tags: Blog
canonical_url: https://wipdeveloper.com/around-the-web-20190628/
cover_image: https://wipdeveloper.com/wp-content/uploads/2017/07/WIPDeveloper.com-logo-black-with-white-background.png
series: Around the Web
---

And now for some things to read (or watch) over the weekend, if you have some spare time that is.

## [Puns and Jokes](https://www.gooddaysirpodcast.com/221)

**Good Day, Sir! Show** – Jeremy Ross – In this episode, we discuss what was learned from scanning 10.2 billion lines of Apex code, Tableau, the top 10 highest-paid CEO’s, Hacker News asks what Salesforce is, and Salesforce Certifications and Trailhead fraud.

## [The cost of JavaScript in 2019 · V8](https://v8.dev/blog/cost-of-javascript-2019)

**v8.dev** – Jun 25 – Note: If you prefer watching a presentation over reading articles, then enjoy the video below! If not, skip the video and read on. One large change to the cost of JavaScript over the last few years has been an improvement in how fast browsers can…

## [Google’s new reCAPTCHA has a dark side](https://www.fastcompany.com/90369697/googles-new-recaptcha-has-a-dark-side)

**Fast Company** – Jun 27, 8:00 AM – We’ve all tried to log into a website or submit a form only to be stuck clicking boxes of traffic lights or storefronts or bridges in a desperate attempt to finally convince the computer that we’re not actually a bot. For many years, this has been…

## [Raspberry Pi 4 on sale now from $35](https://www.raspberrypi.org/blog/raspberry-pi-4-on-sale-now-from-35/)

**Raspberry Pi** – Eben Upton – Jun 23, 11:00 PM – We have a surprise for you today: Raspberry Pi 4 is now on sale, starting at $35. This is a comprehensive upgrade, touching almost every element of the platform. For the first time we provide a PC-like level of performance for most users, while…

## Till Next Week

Want to share something?
Let me know by leaving a comment below, emailing [brett@wipdeveloper.com](mailto:brett@wipdeveloper.com), or following and telling me on [Twitter/BrettMN](https://twitter.com/BrettMN). Don’t forget to sign up for **[The Weekly Stand-Up!](https://wipdeveloper.com/newsletter/)** to receive the free [WIPDeveloper.com](https://wipdeveloper.com/) weekly newsletter every Sunday.

The post [Around the Web – 20190628](https://wipdeveloper.com/around-the-web-20190628/) appeared first on [WIPDeveloper.com](https://wipdeveloper.com).
brettmn
129,894
How to Start Your Day Calmly
As a software developer, it is imperative to have a calm start to the day and remain focused all day. So here is how I found that calm when I followed the 9 steps..
0
2019-06-29T08:32:25
https://dev.to/lucpattyn/how-to-start-your-day-calmly-1mdf
calm, focus, software, development
---
title: How to Start Your Day Calmly
published: true
description: As a software developer, it is imperative to have a calm start to the day and remain focused all day. So here is how I found that calm when I followed the 9 steps..
tags: calm, focus, software, development
---

1) Wake up early
2) Don't touch the mobile, and don't receive any calls either. Make sure you turned off WiFi/data before falling asleep last night.
3) Get fresh
4) Go for a walk
   4.1) Do a few rounds
   4.2) Freehand exercise
   4.3) Meditate/pray, whatever works (order may vary from 4.1 to 4.3)
5) Spend some time with elderly persons, preferably wise ones (in my case, an aged roadside tea-stall owner)
6) Don't touch the mobile yet
7) Do some reading (a newspaper, maybe)
8) Get ready
   8.1) Shower
   8.2) Breakfast (order may vary from 7 to 8.2)
9) Head out

* Now you are ready to turn on WiFi/data and take calls

For someone who is not consistent, this is hard to follow on a daily basis, so I hardly ever have a calm start to the day :|
lucpattyn
129,954
graphql reddit
Postman now supports GraphQL! - https://blog.graphqleditor.com/graphql-postman/ Generate code from yo...
0
2019-06-29T14:01:28
https://dev.to/kkrzeminski/graphql-reddit-5ae9
graphql, materia
Postman now supports GraphQL! - https://blog.graphqleditor.com/graphql-postman/

Generate code from your GraphQL schema with a single function call - https://graphql-code-generator.com/

Introduction to Materia — An API Builder made for Frontend developers - https://medium.com/materia-in-depth/introduction-to-materia-an-api-builder-made-for-frontend-developers-ded3830ab60f

Materia - https://getmateria.com/ and https://github.com/materiahq/materia-server
kkrzeminski
129,999
As Spoken by a Dead Enterprise Developer by the Tree
Programmer said, 1. Like a good shepherd gently corrects the route of a wayward lamb, the shepherd...
0
2019-06-29T17:27:24
https://dev.to/vivainio/as-spoken-by-a-dead-enterprise-developer-by-the-tree-1nh2
programming
---
title: As Spoken by a Dead Enterprise Developer by the Tree
published: true
tags: programming
canonical_url:
---

![](https://cdn-images-1.medium.com/max/1024/1*LjL8YlDyyXfKG0Ilzv4VeA.jpeg)

Programmer said,

**1.** Like a good shepherd gently corrects the route of a wayward lamb, the shepherd of an application steers the code towards safer paths without coercing or blaming the code.

**2.** You have been told that you should use backslash in Windows as a path separator. But know that backslash is an abomination in my house and should only be used as an escape character. Let Windows software that breaks without backslash break, and let us not talk about that software anymore.

**3.** Man that thinks too much looks at an idea and thinks he can do better, and spends his nights laboring for the perfect implementation without typing in the program. But his wise neighbor implements his first idea, sees how it operates and modifies it accordingly. His neighbor is blessed with riches and respect, while the man starves and is ridiculed.

**4.** You say you should depend on interfaces and not concretions. Yet you put the interface in the same dll as the implementation, like a drunkard that soils his pants also on the outside. Know that an interface is of no use if it can not be used without dragging along the implementation.

**5.** If you inject your dependencies, you can have different life cycles and states within those dependencies. But a transient dependency is almost like a static function when you look close enough.

Programmer said,

**6.** You were told to write many unit tests. Yet the man that told you does not write unit tests in the privacy of his own house. Words are wind, and man is judged by his deeds, not his words.

**7.** Foolish young man sees a new library and knows that his elders don’t know it. So in order to acquire wives, the young man peacocks around with his knowledge above his elders, for surely he must be wiser than them.
Instead, a wise man knows that the library was written by another such lovelorn young man, again to impress other people. The young man learns this from the wise man, studies computing outside JavaScript, and earns a respectable wife and a good house in his time.

**8.** Foolish old man wants to teach the youth in his village. But they won’t listen to him because he is old and busted. Do not be that old man. Instead, be the wise old man that listens to the youth, for they get around the block more and have their own stories of war and conquest to tell. And then see what you can use and what you must discard.

Programmer said,

**9.** You were told that Object Oriented is garbage and Functional Programming is too hard. But wise men know that Object Oriented programming is not garbage when done in moderation, and Functional Programming is not hard when done in moderation. And the village idiot, writing simple procedural programs in Go, laughs at both, with belly fat from rich foods and good life.

**10.** But yet know that Haskell is doomed to obscurity, as for obscurity it was written and in obscurity it thrives.

**11.** Text is bytes written as UTF-8. It shall not have a Byte Order Marker in the beginning, for that is an aBOMination. Let no more be said about that.

Programmer said,

**12.** Many men you respect use Rust and you aspire to the same. But you should not use Rust before they fix their editor support, for the editor support is still bad.

**13.** Wise man looks at code searching for expressiveness and clarity. But equally wise man accepts an ugly language if the tooling is built by rich masters with a fanatic drive towards quality. Beautiful program will not keep you warm in the winter breeze, but a fast compiler just may.
vivainio
130,060
Pure CSS hover effect
Pure css button and link hover effect using text-shadow animation, no data-attribute or pseudo...
0
2020-03-07T08:40:16
https://dev.to/4rm/pure-css-hover-effect-479l
codepen
--- title: Pure CSS hover effect published: true tags: codepen --- <p>A pure CSS button and link hover effect using a text-shadow animation; no data attributes or pseudo-classes needed.</p> {% codepen https://codepen.io/fromwireframes/pen/agqNGx %}
4rm
130,082
Card Hover Effect | CSS | Cubic Bezier Updated
Hey there! Here's my new pen, check it out also on my youtube video to see the whole process http...
0
2019-06-30T00:21:01
https://dev.to/kaioalmeidacost/card-hover-effect-css-cubic-bezier-updated-542f
codepen, sass, html, code
{% codepen https://codepen.io/KaioRocha/pen/orpKxq %} Hey there! Here's my new pen. You can also check out my YouTube video to see the whole process: https://youtu.be/Izm8SutcQv4
kaioalmeidacost
130,221
File Creation Time in Linux
The stat utility can be used to retrieve the Unix file timestamps namely atime, ctime and mtime. Of t...
0
2019-06-30T14:28:43
https://www.anmolsarma.in/post/linux-file-creation-time/
linux, filesystem, todayilearned
--- title: File Creation Time in Linux published: true tags: linux, filesystem, todayilearned canonical_url: https://www.anmolsarma.in/post/linux-file-creation-time/ --- The [`stat`](http://man7.org/linux/man-pages/man1/stat.1.html) utility can be used to retrieve the Unix file timestamps namely `atime`, `ctime` and `mtime`. Of these, the benefit of `mtime` which records the last time when the file was modified is immediately apparent. On the other hand, `atime`<sup id="fnref:1"><a href="#fn:1">1</a></sup> which records the last time the file was accessed has been called [“perhaps the most stupid Unix design idea of all times”](https://lore.kernel.org/lkml/20070804210351.GA9784@elte.hu/). Intuitively, one might expect `ctime` to record the creation time of a file. However, `ctime` records the last time when the metadata of a file was changed. Typically, Unices do not record file creation times. While some individual filesystems do record file creation times<sup id="fnref:2"><a href="#fn:2">2</a></sup>, until recently Linux lacked a common interface to actually expose them to userspace applications. As a result, the output of `stat` (GNU coreutils v8.30) on an ext4 filesystem (Which does record creation times) looks something like this: ``` $ stat . File: . Size: 4096 Blocks: 8 IO Block: 4096 directory Device: 803h/2051d Inode: 3588416 Links: 18 Access: (0775/drwxrwxr-x) Uid: ( 1000/ anmol) Gid: ( 1000/ anmol) Access: 2019-06-23 10:49:04.056933574 +0000 Modify: 2019-05-19 13:29:59.609167627 +0000 Change: 2019-05-19 13:29:59.609167627 +0000 Birth: - ``` With the “`Birth`” field, meant to show the creation time, sporting a depressing “`-`”. The fact that `ctime` does not mean creation time but change time coupled with the absence of a real creation time interface does lead to quite a bit of confusion. 
The confusion seems so pervasive that the `msdos` driver in the Linux kernel [happily clobbers](https://elixir.bootlin.com/linux/v5.1.14/source/fs/fat/inode.c#L883) the FAT creation time with the Unix change time! The limitations of the current `stat()` system call have been known for some time. A new system call providing extended attributes was [first proposed in 2010](https://www.spinics.net/lists/linux-fsdevel/msg33831.html) with the new [`statx()`](https://lwn.net/Articles/685791/#statx) interface finally [being merged into Linux 4.11 in 2017](https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=a528d35e8bfcc521d7cb70aaf03e1bd296c8493f). It took so long at least in part because kernel developers quickly ran into one of the hardest problems in Computer Science: [naming things](https://lkml.org/lkml/2010/7/22/249). Because there was no standard to guide them, each filesystem took to calling creation time by a different name. [Ext4](https://elixir.bootlin.com/linux/v5.1.14/source/fs/ext4/ext4.h#L744) and [XFS](https://elixir.bootlin.com/linux/v5.1.14/source/fs/xfs/libxfs/xfs_inode_buf.h#L40) called it `crtime` while [Btrfs](https://elixir.bootlin.com/linux/v5.1.14/source/fs/btrfs/btrfs_inode.h#L187) and [JFS](https://elixir.bootlin.com/linux/v5.1.14/source/fs/jfs/jfs_incore.h#L46) called it `otime`. Implementations also have slightly different semantics with JFS storing creation time only with the [precision of seconds](https://elixir.bootlin.com/linux/v5.1.14/source/fs/jfs/jfs_imap.c#L3166). Glibc took a while to add a wrapper for statx() with support landing in [version 2.28](https://www.sourceware.org/ml/libc-alpha/2018-08/msg00003.html) which was released in 2018. Fast forward to March 2019 when GNU [coreutils 8.31](https://lists.gnu.org/archive/html/coreutils-announce/2019-03/msg00000.html) was released with `stat` finally gaining support for reading the file creation time: ``` $ stat . File: . 
Size: 4096 Blocks: 8 IO Block: 4096 directory Device: 803h/2051d Inode: 3588416 Links: 18 Access: (0775/drwxrwxr-x) Uid: ( 1000/ anmol) Gid: ( 1000/ anmol) Access: 2019-06-23 10:49:04.056933574 +0000 Modify: 2019-05-19 13:29:59.609167627 +0000 Change: 2019-05-19 13:29:59.609167627 +0000 Birth: 2019-05-19 13:13:50.100925514 +0000 ``` * * * 1. The impact of `atime` on disk performance is mitigated by the use of [`relatime`](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/power_management_guide/relatime) on modern Linux systems. <sup>[return]</sup> 2. For ext4, one can get the `crtime` of a file using the `stat` subcommand of the confusingly named [`debugfs`](https://linux.die.net/man/8/debugfs) utility. <sup>[return]</sup>
anmolsarma
130,245
Fixing the World With Software
There are so many big problems the world faces today. We hear about them in the news every day. You want to do something good in the world. You want to make a change. But you like building software. What can a software developer do to make the world a better place?
0
2019-06-30T15:29:35
https://jorin.me/fixing-the-world-with-software/
career, motivation, technology, life
--- title: Fixing the World With Software published: true description: There are so many big problems the world faces today. We hear about them in the news every day. You want to do something good in the world. You want to make a change. But you like building software. What can a software developer do to make the world a better place? tags: career, motivation, technology, life canonical_url: https://jorin.me/fixing-the-world-with-software/ cover_image: https://thepracticaldev.s3.amazonaws.com/i/5mmrcbpd5d7qowx3eqof.jpg --- There are so many big problems the world faces today. We hear about them in the news every day. You want to do something good in the world. You want to make a change. But you like building software. What can a software developer do to _make the world a better place_? Probably most of humanity had big problems in their time. In some way or another we might always fight for survival, for our existence. What is unique nowadays is the global scale of our problems. The challenges we are facing have never been more complex, and as part of our information-driven society, the media constantly keeps reminding us of the threats we are facing. At the same time, our abilities and the reach of our actions have never been greater. I mostly see three obvious challenges nowadays: 1. **Keeping the global human society in order to prevent fatal conflicts between different groups of humans.** Even with free and diverse societies we need to find ways to solve conflicts peacefully. Modern warfare has advanced to a point where any wrong action could make the whole planet uninhabitable. 2. **Finding ways to control human health and prevent epidemics.** We still understand surprisingly little about the biology we are made of. A single virus could wipe out all of humanity. 3. **Managing to live in harmony with our environment.** The impact humans have on the planet has become so big that we have to find ways to keep our planet's ecosystem in balance. 
We need to find sustainable ways to deal with [carbon emissions](http://worrydream.com/ClimateChange/), air pollution, energy consumption, waste, water pollution, food production and many other issues. The most frightening thing with these issues is that they seem too big for anyone to solve and it is difficult to even know where to start. [John Kay](https://www.johnkay.com/) has some great advice for achieving complex goals: {% youtube _BoAtYL3OWU %} But how can software help in approaching these problems? These issues seem to be only solvable by politicians and policy makers, scientists and engineers, company managers and journalists. _And this, I think, is the key:_ **I don't believe software can change the world. Humans can.** I think our role as software developers is to support the ones that can make a change. We build tools and platforms to enable others. **If you are building software and want to tackle critical issues, ask yourself who you are building software for. Think about how you can enable others.** _[(cover photo by NASA)](https://unsplash.com/photos/Q1p7bh3SHj8)_
jorinvo
130,353
You can now update your AWS Lambdas to Node.js 10 LTS
Not long ago, AWS announced that one of the services I use the most (AWS Lambda) would then...
0
2019-06-30T21:36:18
https://malaquias.dev/posts/voce-pode-atualizar-aws-lambdas-nodejs-10-LTS/
node, aws, serverless
Not long ago, [AWS announced that one of the services I use the most (AWS Lambda) would now support the Node.js 10 LTS runtime](https://aws.amazon.com/about-aws/whats-new/2019/05/aws_lambda_adds_support_for_node_js_v10/). This is probably part of the company's plan to support more and more modern platforms across its services, which already run code from platforms such as Python, Java, C#, Ruby, and Go. Are you still using Node.js 6.x? ## You need to move off Node.js 6.x Node.js 6.x had been maintained as an LTS (long-term support) release since 2016, and its life cycle came to an end on April 30, 2019. That version will therefore no longer receive fixes for critical bugs, security fixes, patches, or other important updates. With the addition of Node.js 10.x, AWS Lambda continues to support two runtimes for the JavaScript platform. Node.js 8.10 is still supported; however, it is entering its final maintenance phase, which runs until 2020, while 10.x is currently in its LTS phase. Following the serverless community around this announcement, I noticed that many people are still reluctant to upgrade to 10.x because they believe this new runtime is a beta on AWS. My impression is that many people are wary because they have not yet adjusted to a culture in which your development/production environment is the service provider's responsibility rather than your own. The fact that AWS did not announce this support as a beta is, for me, more than enough justification to update my AWS Lambdas. ## Yes, I'm already using Node.js 10.x Node.js 10.x brings a more recent version of [V8](http://v8.dev). This introduces several changes in how our code is compiled, cached, and executed. Today you can switch to the new version without making any code changes to ensure compatibility; just update your AWS Lambda configuration to the new runtime. In addition, Node.js 10.x has already been rolled out to every region where the service is available. ## Main differences between Node.js 6.x and Node.js 10.x The metrics provided by [Node.js Benchmarking](https://benchmarking.nodejs.org) highlight the performance benefits of upgrading from Node.js 6 to the latest LTS release, Node.js 10.x: * Operations per second are almost twice as high on Node.js 10.x; * Latency decreased by 65% on Node.js 10.x; * The container load footprint is 35% smaller on Node.js 10.x, resulting in better performance in the case of a [cold start](https://docs.aws.amazon.com/pt_br/lambda/latest/dg/running-lambda-code.html); * Node.js 10.x was the first runtime to be upgraded to OpenSSL 1.1.0; * [Native support for HTTP/2, first added in Node.js 8.x LTS, was stabilized in Node.js 10.x; it offers performance improvements over HTTP 1 (including reduced latency and minimized protocol overhead) and adds support for request prioritization and server push](http://hipsters.tech/http2-magia-com-o-novo-protocolo/); * Version 10.x introduces new JavaScript language features, such as Function.prototype.toString() and async/await. ## Updating the runtime through the AWS console ![](https://i.ibb.co/64F7xCT/Screen-Shot-2019-06-23-at-09-20-38.png) ![](https://i.ibb.co/rFjtz2h/Screen-Shot-2019-06-23-at-09-20-50.png) ![](https://i.ibb.co/XWbQ1vJ/Screen-Shot-2019-06-23-at-09-21-00.png) ## Updating the runtime with the Serverless Framework Before updating the runtime in the Serverless Framework, you need Node.js 10.x installed on your machine. For me, the best way to manage several Node.js versions is NVM; I have already covered how to install and use NVM in the article [Como instalar o Node.js corretamente no Linux](https://malaquias.dev/posts/como-instalar-o-nodejs-corretamente-no-linux/). With NVM in place, all that is left is to install version 10.x:

```
nvm install 10
```

Then update the runtime in your **serverless.yml** file as in the example below:

```
provider:
  name: aws
  runtime: nodejs10.x
```

Now just deploy, grab a coffee, and relax. ## Conclusion Most production applications built with Node.js use the LTS versions, so it is highly recommended that you update any application or AWS Lambda currently running Node.js 6.x to Node.js 10.x, the latest LTS release at the moment. Build the habit of keeping not only Node but also your dependencies up to date, avoiding surprises such as version breakage and security issues. **** ## Wrapping up… If you liked this post, don't forget to give it a like and share it 😄 If you want to know what I have been up to or want to ask me anything, feel free to reach out to me on social media as @ [malaquiasdev](https://twitter.com/malaquiasdev). To read more of my posts, visit [MalaquiasDEV | A Vida, o código e tudo mais](http://malaquias.dev).
malaquiasdev
130,356
Decoupling code and secrets
A personal insight on better practices for dealing with secrets management in codebases
0
2019-07-05T18:16:35
https://dev.to/a0viedo/decoupling-code-and-secrets-1562
codebases, secretsmanagement, decoupling, software
--- title: Decoupling code and secrets published: true description: A personal insight on better practices for dealing with secrets management in codebases tags: codebases, secretsmanagement, decoupling, software cover_image: https://www.howtogeek.com/wp-content/uploads/2015/11/encryption.png.pagespeed.ce.r1UYcb9ZXB.png --- I was talking with a colleague about switching from [Vault](https://www.vaultproject.io/) to [Kubernetes secrets](https://kubernetes.io/docs/concepts/configuration/secret/) the other day. To my surprise, a lot of **our application code was dependent on our Vault implementation**. Our migration would require opening a PR for every service we maintain. While trying hard not to grimace, I started wondering how to avoid this from the start. Two alternatives came to my mind, which I'll describe here. # Solution A One alternative is to wrap the logic of dealing with Vault's specifics and bundle that into a library. As a result, when migrating to K8s secrets we could change the library's code once and have the change reach every service using it. But it wouldn't be a cross-language solution, i.e. you would have to maintain one library for Go and another for Node.js. # Solution B Another solution is to inject the secrets into the application process as environment variables. This approach decouples your applications from your secret management approach: the changes happen once for all your applications. Since it allows for more flexibility and portability than solution A, this solution would be a better choice for a long-term scenario. This solution also follows the [12 Factor App](https://12factor.net/config) approach for configuring applications. # In conclusion Handling secrets for applications is a complex subject. Even if you use a solution that facilitates the concerns I covered here - being language-agnostic and having it configured as part of the provisioning of your images/instances - you're not exempt from application logic logging secrets in the wild. You will spend more time thinking, designing and implementing decoupled components. But it will also make your architecture easier to maintain and easier to change. ### Further reading If you want to read on secrets management in detail, I've selected a few resources as follow-up: - [Secrets management guide](https://blog.cryptomove.com/secrets-management-guide-approaches-open-source-tools-commercial-products-challenges-db560fd0584d) - [Managing Passwords and Application Secrets: Common Anti-Patterns](https://blog.envkey.com/managing-passwords-and-secrets-common-anti-patterns-2d5d2ab8e8ca) - [Secret Management Architectures: Finding the balance between security and complexity ](https://medium.com/slalom-technology/secret-management-architectures-finding-the-balance-between-security-and-complexity-9e56f2078e54) Many thanks to [Ruben Bridgewater](http://twitter.com/BridgeAR), [Ben](http://twitter.com/BenedekGagyi), and [Ujjwal Sharma](http://twitter.com/ryzokuken) for reviewing a first draft of this blog post.
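For completeness, a minimal sketch of Solution B from above, with a hypothetical `requireSecret` helper (the name and behavior are illustrative): the application only ever reads environment variables, so whether they were injected by Vault, Kubernetes secrets, or a local `.env` file is invisible to the code.

```javascript
// Hypothetical helper: fail fast at startup when a secret was not
// injected, rather than discovering it mid-request.
function requireSecret(name) {
  const value = process.env[name];
  if (value === undefined || value === '') {
    throw new Error(`Missing required secret: ${name}`);
  }
  return value;
}

// Resolve everything the service needs in one place at boot, e.g.:
// const dbPassword = requireSecret('DB_PASSWORD');

module.exports = { requireSecret };
```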
a0viedo
130,771
Considerations in Migrating From Ember to React
Are you considering migrating your Ember app to React? With explosive popularity and extensive...
0
2019-07-01T23:48:20
https://www.thisdot.co/blog/considerations-in-migrating-from-ember-to-react
ember, react
Are you considering migrating your Ember app to React? With explosive popularity and extensive adoption of React within the JavaScript community, many developers have been exploring the opportunity to migrate to one of the most popular technologies in today’s market. As our Ember friends have begun to explore React as their migration target, some questions we have been asked are: **How does Ember differ from React?** **What are the pros and cons of each framework or library?** **What mistakes can I avoid if I decide to migrate my application?** We hope to shed some light on these questions and make your migration efforts easier. ## Exploring Ember and React — General Differences When comparing Ember and React, it’s important to realize that Ember is an ***application framework.*** React, meanwhile, is just a ***library*** for rendering views. So we’re not dealing with an apples-to-apples comparison here. When making comparisons, you can look at the differences between Ember and React in the following ways. * Ember is an opinionated framework with full features. It is mature and has many Rails similarities. * React is a flexible, “un-opinionated” library that has a low barrier to hiring and learning. Ember has been described as having lots of “magic” — many things work by convention. This can be a godsend to many developers and is without a doubt one of Ember’s strongest attributes. If you’re an Ember developer, you’re also probably used to the first-class support Ember provides for routing, data models, and dependency injection. While you will find that React, the library, does not provide some of these Ember niceties, the React community has stepped in and developed solutions for many of the features Ember developers are used to. In many cases, these solutions have become de facto choices and synonymous with React development. Some examples include [React Router](https://github.com/ReactTraining/react-router) and [Redux](https://redux.js.org/). 
But even these solutions have [popular](https://github.com/reach/router) [alternatives](https://github.com/mobxjs/mobx), in line with the library’s and community’s stance on staying un-opinionated and flexible. ## Porting Your Ember app to React One strong selling point for React is the focused nature of the React library itself. Because it’s so focused on doing *one* thing just right, it becomes very easy to integrate React with other technologies. React is perfect for cases where you want to add dynamic, client-side functionality to an app that uses other technologies. Facebook.com, for example, isn’t a traditional 100% client-side app; it sprinkles in React components where user interaction is required and uses other technologies, like server rendering, to generate other portions. While, admittedly, most of us are not building Facebook, the key here is the ultimate *flexibility of React*. When migrating an existing app from Ember to React, the flexibility of React means you are able to incrementally migrate pages and functionality. At the same time, any new pages can be written in React from the beginning. This approach of mixing Ember and React in the same app has huge benefits, like not needing to drop work on new features to focus on a large rewrite. Still, it comes with its own significant costs. The most obvious cost is that engineers need to maintain code in two UI paradigms. But perhaps even more important are the potential file size increases. If you simply import React in the same bundle as Ember, you’ve just increased your bundle size by about [32kB gzipped](https://gist.github.com/Restuta/cda69e50a853aa64912d). To mitigate this bloat, it’s important to use [code splitting](https://reactjs.org/docs/code-splitting.html), which involves creating multiple bundles from some predefined split points, then lazy loading the individual bundles as needed (or using [prefetching](https://medium.com/webpack/link-rel-prefetch-preload-in-webpack-51a52358f84c)). 
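The lazy-loading behaviour those split points rely on can be illustrated with a small, purely illustrative sketch (a real bundler keys chunks off dynamic `import()` calls rather than a hand-rolled loader like this):

```javascript
// Toy stand-in for a bundle loader: the expensive module is only
// "fetched" the first time it is requested, then served from cache.
const loadedChunks = new Map();

async function lazyLoad(name, loader) {
  if (!loadedChunks.has(name)) {
    loadedChunks.set(name, await loader());
  }
  return loadedChunks.get(name);
}

// In a real app the loader would be `() => import('./react-page.js')`;
// here a plain object stands in for the loaded module.
async function demo() {
  const page = await lazyLoad('react-page', async () => ({
    render: () => 'react page rendered',
  }));
  return page.render();
}
```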
Pages that use React could use this method in order to download React only when they actually use it. ### Rewriting Your App The other approach is to rewrite your entire app in one concerted effort. Depending on your team size and resource allocation, this might mean putting major new product development on hold. Some might think I’m crazy for even mentioning this as a possibility, but it’s actually the most common way people migrate… and for good reason. Why? If you have buy-in and experienced talent, rewriting your app wholesale can potentially give a huge boost to engineer morale and pay down technical debt. These two factors could lead to much faster iteration on future products, particularly if your existing application is many years old and had numerous different maintainers. ### Choosing Your Approach There isn’t one single recommendation for which approach you should take in every case. Every app and team is different, so you’ll need to evaluate the decisions in your particular context. For example, if you already wanted to give your app a completely new design (not just a facelift), a rewrite is more likely. Still, keep reminding yourself that, even if it has shortcomings, *your existing codebase works (hopefully)*. It likely has years of hardening, bug fixes, and performance tuning. While you might have learned from previous mistakes, you will undoubtedly make new ones. ## Learning React Thankfully, there’s no shortage of resources both online and offline for learning React. Aside from the [official documentation](https://reactjs.org/docs/getting-started.html) (which is great), here are some of our top picks: * [The Beginner’s Guide to React](https://egghead.io/courses/the-beginner-s-guide-to-react) | free video series on egghead.io by Kent C. Dodds * [Learn React.js in 5 minutes](https://medium.freecodecamp.org/learn-react-js-in-5-minutes-526472d292f4) | by Per Harald Borgen * [React Training Workshop](https://reacttraining.com/) | by Michael Jackson * [Mentoring, support, and migrations](https://www.thisdot.co/mentors?utm_source=medium&utm_medium=blog&utm_campaign=ember-to-react&utm_content=v1) | by This Dot Labs While there are many differences, modern Ember follows a very similar component-centric model that React popularized. However, instead of using separate templates with a custom language (Glimmer/Handlebars), in React you’ll typically write your markup using JavaScript, or, optionally, with the [JSX syntax extension to JavaScript](https://reactjs.org/docs/introducing-jsx.html), which is actually just sugar for [regular function calls to React.createElement()](https://reactjs.org/docs/react-without-jsx.html). Take a look at this simple comparison:

## Ember

*app/templates/components/example.hbs*

```hbs
<button {{action "sayHello"}}>Click me</button>
```

*app/components/example.js*

```js
import Component from '@ember/component';

const Example = Component.extend({
  actions: {
    sayHello() {
      alert('hello world');
    }
  }
});

export default Example;
```

## React

*src/Example.js*

```js
import React, { Component } from 'react';

export class Example extends Component {
  sayHello = () => {
    alert('hello world');
  };

  render() {
    return (
      <button onClick={this.sayHello}>Click me</button>
    );
  }
}
```

Ember has the concept of computed properties and observers, while React doesn’t. You could also use the [extremely similar](https://mobx.js.org/refguide/computed-decorator.html#-computed) [MobX](https://github.com/mobxjs/mobx) library, but I’d recommend trying to learn idiomatic React first. You may not need [MobX or Redux](https://medium.com/@dan_abramov/you-might-not-need-redux-be46360cf367)! ## Conclusion In summary, the transition from Ember to React is less of a major shift than it used to be now that Ember has embraced components. 
However, choosing React means you may have to also choose from a [large number of community libraries](https://www.robinwieruch.de/essential-react-libraries-framework/) to provide functionality you’ve come to expect. The team at [This Dot](https://www.thisdot.co/labs?utm_source=medium&utm_medium=blog&utm_campaign=ember-to-react&utm_content=v1) has been through a number of these migrations and can help your team succeed in planning and execution. Whether that’s through [stellar mentorships](https://www.thisdot.co/mentors?utm_source=medium&utm_medium=blog&utm_campaign=ember-to-react&utm_content=v1), or providing [clean architecture and hands-on engineering](https://www.thisdot.co/labs?utm_source=medium&utm_medium=blog&utm_campaign=ember-to-react&utm_content=v1) to your team, we’d like to hear from you. [Get in touch](https://www.thisdot.co/contact?utm_source=medium&utm_medium=blog&utm_campaign=ember-to-react&utm_content=v1). ![[Hire the team at This Dot to help](https://www.thisdot.co/contact?utm_source=medium&utm_medium=blog&utm_campaign=ember-to-react&utm_content=v1)](https://cdn-images-1.medium.com/max/2964/1*jTSw9Uf2Qk9I6OuyaG7tEA.png)*[Hire the team at This Dot to help](https://www.thisdot.co/contact?utm_source=medium&utm_medium=blog&utm_campaign=ember-to-react&utm_content=v1)* *Thanks to [Robert Ocel](https://medium.com/@rob_21155)*
thisdotmedia_staff
130,792
Is the Suspense Getting to You?
0
2019-07-31T17:19:25
https://dev.to/westbrook/is-the-suspense-getting-to-you-2k5m
lithtml, litelement, webcomponents, slots
--- title: Is the Suspense Getting to You? published: true description: tags: lit-html, litelement, web components, slots cover_image: https://thepracticaldev.s3.amazonaws.com/i/vh0zvdx2huosert8ajlg.jpeg --- > _DISCLAIMER: While the code discussed in this article is fairly short in lines of code, it has a pretty robust feature set, much of which I do not purport to fully understand. While one of my goals with this article is to better understand the code discussed herein, one which I feel I've achieved (many paragraphs around suggested alterations have gone through a number of versions in and of themselves), I look forward to any insight you’d like to share regarding any functionality I might be missing or misunderstanding, to support a deeper and broader understanding of what's being delivered. In the case that you're interested in sharing, please share in the comments below so that I can update the main body of the article as needed. Thanks!_ <hr/> # And so, it begins... Last year at Chrome Dev Summit, [Justin Fagnani](https://twitter.com/justinfagnani) presented on some really exciting extensions to the currently available [`lit-html`](https://lit-html.polymer-project.org/) and [`LitElement`](https://lit-element.polymer-project.org/) feature set. Including ideas like advanced scheduling for long-running tasks, chunked rendering, and more, it's well worth full and repeated watches. 
If you’ve not already checked it out, I suggest you jump straight to the work presented around async rendering as powered by the `runAsync` directive for `lit-html`: {% youtube ypPRdtjGooc?t=16m54s %} `runAsync` looks like a pretty awesome addition to the set of [directives](https://lit-html.polymer-project.org/guide/template-reference#built-in-directives) you can choose to employ when working with `lit-html` and, while it’s taken me a long time to get it in writing, I’ve been thinking a lot about what the technique would look like in a more declarative context, something more DOM driven. I wanted to take the power that this addition would give `lit-html` and apply it to `LitElement` so that it was easily accessible to the broader web components community. Something like:

```html
<do-something-lazily wait="2000">
  <div slot="success">Success</div>
  <div slot="initial">Initial</div>
  <div slot="error">Error</div>
  <div slot="pending">Pending</div>
</do-something-lazily>
```

You could then push things a little further so that you can have a staged “pending” state via something like:

```html
<do-something-lazily wait="2000">
  <div slot="success">Success</div>
  <div slot="initial">Initial</div>
  <div slot="error">Error</div>
  <staged-pending slot="pending" wait="500">
    <div slot="success">Waiting a lot</div>
    <div slot="pending">Waiting a little</div>
  </staged-pending>
</do-something-lazily>
```

<figcaption>Elements named for specific intent, not for actual usage.</figcaption> # But, really, let's do it! To make this possible, we can apply the `runAsync()` directive, in the most creatively named `<run-async />` element, and it looks like the following: {% stackblitz lit-element-run-async file=demo-el.js %} Making the DOM for each of the possible states of the async action a slot named after the current stage (though `error` seemed more appropriate than `failure`, change my mind!) 
means that with very little work we get a generic version of the example listed above available for us to use. We can take advantage of our `fallbackAction` that translates the `wait` attribute into the milliseconds with which to start a countdown before our "asynchronous action" completes. By supplying an actual `action` this really starts to come to life. The example below takes advantage of [jsonplaceholder.typicode.com](https://jsonplaceholder.typicode.com/) and a little bit of synthetic delay over the pipe to really give you an idea of how this could work: {% glitch honorable-link app file=preview %} For example only, notice the use of the following to help the placeholder JSON take random amounts of extra time to make it back to the client: ```js var wait = ms => new Promise((r, j)=>setTimeout(r, ms)); // ... const simulatedDelay = Math.floor(Math.random() * Math.floor(2000)); await wait(simulatedDelay); ``` This means that an otherwise "immediate" response to the request for content takes a perceivable amount of time and we are allowed to experience the benefits of the nested `<run-async />` element in the "pending" slot. Allowing the UI to be even more communicative with the user based on network conditions is one of the most immediately valuable benefits of this technique. This is much better than keeping your users in suspense as to what's going on in your application as it acquires the content and data with which those users want to interact. # Why can’t I have it now? Why? Indeed. Currently, this feature is sitting on the [`lit-html` repo](https://github.com/polymer/lit-html) as an [open PR](https://github.com/Polymer/lit-html/pull/657) that hasn’t had much love from [Justin and team](https://twitter.com/polymer) for some time. _Maybe_ if everyone reading this is also interested in this functionality we can guilt them into finishing the work and making it available in a production release! 
I’d be very excited to wrap this implementation of the `<run-async />` element with some tests and get it on NPM in short order if it were. That’s not to say that the current implementations (both the directive and my element) are without issue. ## Remaining issues, and open questions ### When is it initialized? As currently proposed the only way for the code to get into the "initial" state is for a `new InitialStateError();` to be thrown, which is not my favorite thing in the world. Firstly, I think the code should be in the "initial" state by default, not by explicit action, so I don't know why we need this interface (pardon the pun) to begin with. Luckily, in the context of our `<run-async />` element, we can hide this implementation detail a bit. However, it still feels a little hinky and whether it's me, you, or the next person to test out `runAsync()`, I think it'll continue to be a point of confusion. Please share your thoughts about this approach in the comments below, _OR_ even better comment directly into the PR about it. You can [agree with me](https://github.com/Polymer/lit-html/pull/657/files#r255359797), possibly suggest a better way forward, or suggest some docs to support a broader understanding of this use case, to help me and the next person be less confused. Whichever way, I'll count it as a win! **When throwing isn't really throwing** Even the code internals of `runAsync()` rely on thrown errors to manage various state transitions. Particularly, this approach is used to reject our lazy action in favor of a new one. Here, the `pendingPromise` stored internally by `runAsync()` also gets rejected with a throw. 
In preparation for this possibility, you will have been able to acquire a reference to that promise when it moves to "pending" via a custom event, at which point you can capture any errors that it runs into: ```js this.addEventListener('pending-state', ({detail}) => { detail.promise.catch(() => {}); }); ``` In the above example, I catch everything that might reject this promise. If this is the path that `runAsync()` releases with, an API for fully managing this state will need to be added to `<run-async />`. The work is never done, amirite? Where this causes an issue is when the same `pendingPromise` is used to announce the state of the action moving from "pending" to "initial". ```js Promise.resolve(promise).then( (value: unknown) => { // ... }, (error: Error) => { const currentRunState = runs.get(part); runState.rejectPending(new Error()); ``` This causes `pendingPromise` to reject even when the `new InitialStateError()` is thrown. AND, the custom event that supplies the `pendingPromise` is only dispatched when `currentRunState.state === 'pending'`, a state that is queried after a microtask in order to mirror the most recently rendered state; that state will be "initial" when `new InitialStateError()` is thrown immediately because your action cannot complete due to a missing key. ```js (async () => { // Wait a microtask for the initial render of the Part to complete await 0; const currentRunState = runs.get(part); if (currentRunState === runState && currentRunState.state === 'pending') { part.startNode.parentNode!.dispatchEvent( new CustomEvent('pending-state', { composed: true, bubbles: true, detail: {promise: pendingPromise} }) ); } })(); ``` This means that you won't have received a reference to `pendingPromise` by the point that it rejects in this context in order to catch the error thrown. This doesn't block any of the later functionality, but having random errors flying around your application is certainly not the sort of thing that we engineers pride ourselves on. 
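To see why those stray rejections matter in isolation, here is a small, framework-free Node sketch (not from the PR, just an illustration of the mechanics): a promise that rejects before anyone has attached a handler triggers an unhandled rejection, while the no-op `catch(() => {})` pattern from the `pending-state` listener keeps things quiet.

```javascript
// Count unhandled rejections so we can observe the difference.
let unhandled = 0;
process.on('unhandledRejection', () => { unhandled += 1; });

// Rejects with nobody listening -- analogous to pendingPromise rejecting
// before the pending-state event has handed out a reference to it.
Promise.reject(new Error('initial-state'));

// Same rejection, but a no-op handler is attached immediately, mirroring
// the `detail.promise.catch(() => {})` pattern shown above.
Promise.reject(new Error('initial-state')).catch(() => {});

setTimeout(() => {
  console.log(`unhandled rejections: ${unhandled}`);
}, 0);
```

Run under Node this logs `unhandled rejections: 1`: only the promise that never got a handler trips the event, which is exactly the noise you see when `pendingPromise` rejects before you could grab a reference to it.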
To work around this issue, I suggest we expand the contexts where the `pending-state` event will be dispatched to include the "initial" state, like so: ```js if ( currentRunState === runState && ( currentRunState.state === 'pending' || currentRunState.state === 'initial' ) ) { ``` I'm not 100% sure whether this captures the whole of the functionality of which `pendingPromise` is supposed to be the basis, but it allows the page to run error-free while supplying the lazily loaded content UI that `runAsync()` is purpose-built to provide. I've [suggested this change in the PR](https://github.com/Polymer/lit-html/pull/657/files#r309194878), so feel free to agree or suggest other paths forward here as well. ### I want to be "pending", again... If you’ve looked closely at my code sample and the PR you’ll notice that I reacquire the `runState` from the `runs` map when testing whether or not to allow the UI to be updated to the “pending” state when a template for that state is available. Currently, the PR outlines the following code: ```js // If the promise has not yet resolved, set/update the defaultContent if ((currentRunState === undefined || currentRunState.state === 'pending') && typeof pending === 'function') { part.setValue(pending()); } ``` However, `currentRunState` is taken from the previous run as `const currentRunState = runs.get(part);` before being later set to “pending” via: ```js const runState: AsyncRunState = { key, promise, state: 'pending', abortController, resolvePending, rejectPending, }; runs.set(part, runState); ``` This means that the check of `currentRunState.state === 'pending'` can never be true when you attempt to supply a new `key` to the directive. In the case of the above example, that means you won’t be able to get back to the “Early Wait” or “Long Wait” messaging when requesting a second (or later) form of data to display. 
I’ve outlined the following to get around this issue: ```js const runState = runs.get(part); // If the promise has not yet resolved, set/update the defaultContent if ((runState === undefined || runState.state === 'pending') && typeof pending === 'function') { ``` While I agree that it’s not the most creative or even informational variable naming, without going back to the `runs` map for the current state you will never be able to find that state to be `'pending'`. Hopefully [this suggestion](https://github.com/Polymer/lit-html/pull/657/files#r307304583) helps this addition to move towards a merge, soon. ### TODOs Beyond the realities that I've run into preparing `<run-async />`, Justin has noted some specific contexts where he'd like to add polish to this PR: [here](https://github.com/Polymer/lit-html/pull/657/files#diff-47605b25f29d88148b59b1949601eb08R59) and [here](https://github.com/Polymer/lit-html/pull/657/files#diff-47605b25f29d88148b59b1949601eb08R66). The ability to customize invalidation of the `key` and the emission of a custom error when aborting the promise would certainly be quality additions to this piece of functionality. However, I feel like they don't need to be blocking the PR by any means. Extending this directive to support those as additional features down the road seems like a decent balance between getting this out soon and ensuring all use cases are covered long term. # What do you think? How do you feel about the `runAsync()` directive? Does it make sense to wrap something like this in a custom element? Have I wrapped the directive in a way that you could see getting benefit from? I'd love to hear your thoughts in the comments below. I can also be found on Twitter as [@westbrookj](https://twitter.com/WestbrookJ) or on the [Polymer Slack Channel](https://join.slack.com/t/polymer/shared_invite/enQtNTAzNzg3NjU4ODM4LTkzZGVlOGIxMmNiMjMzZDM1YzYyMzdiYTk0YjQyOWZhZTMwN2RlNjM5ZDFmZjMxZWRjMWViMDA1MjNiYWFhZWM). 
Hit me up at any of these places if you wanna talk more about lazy UIs, LitElement, the modern web, or improvised music...I'm always down to chat! <hr /> ```html <do-something-lazily wait=${justinMergesThePRInMilliseconds}> <div slot="success">Publish to NPM</div> <div slot="initial">Have the idea...</div> <div slot="error">Learn about issues I've not seen, yet.</div> <div slot="pending">Review the PR</div> <!-- WE ARE HERE! --> </do-something-lazily> ``` <hr/> Cover image by [Tertia van Rensburg](https://unsplash.com/@tertia) on [Unsplash](https://unsplash.com)
westbrook
131,261
Jumpstart your API onboarding using Postman
Jumpstart your API onboarding using Postman
0
2019-07-02T20:27:07
https://www.dwolla.com/updates/jumpstart-your-api-onboarding-using-postman/
api, testing, webdev, postman
--- title: Jumpstart your API onboarding using Postman published: true description: Jumpstart your API onboarding using Postman tags: api, testing, webdev, postman canonical_url: https://www.dwolla.com/updates/jumpstart-your-api-onboarding-using-postman/ cover_image: https://cdn.dwolla.com/com/prod/20190625100458/postman-blog-featured-image-01.png --- ## Postman Background [Postman](https://www.getpostman.com/) is the perfect tool for quickly testing any API, and continues to be my go-to for sending and inspecting calls to the Dwolla API. Postman has grown over the last several years to offer an abundance of features to support every stage of the API lifecycle. Whether you are an API provider (e.g. Dwolla) or an API consumer (e.g. Dwolla client), there are features that can help with creating/designing APIs as well as testing and debugging. Postman offers an easy-to-use user interface that doesn’t require you to read further documentation to understand. It just works. At Dwolla, we use Postman across engineering teams to quickly test new features and functionalities as they are rolled out to [the API](https://docs.dwolla.com/). Since we offer a white-labeled platform by design, it’s often used during internal stakeholder demos to represent the interface displayed to clients to help them visualize how new functionality could be consumed. My team, Developer Relations, offers dedicated developer support to all Dwolla clients. Using Slack we interact directly with developers integrating with the Dwolla API, providing one-on-one collaborative support. Postman is used as a key debugging tool used to help third-party developers work through any roadblocks they may encounter in the integration process. There are many ways we use Postman internally, but what we value most is how it can be used to help onboard external developers! 
## Jumpstart Onboarding to the Dwolla API There are many things that go into offering a quality experience for developers integrating with the Dwolla API. To name a few: Simplified onboarding, clear documentation, a well-designed API, and tools that help you integrate the API quickly and efficiently. Postman is one of these tools that can be used by developers to get to know an API more closely. A while back, I wrote a post about [why you should use Postman](https://discuss.dwolla.com/t/using-postman-to-explore-and-debug-dwollas-v2-api/2758) and published a shared collection of all available calls in the Dwolla API. The intent was to have a holistic view of what you can do with the Dwolla API and jump right in to start making API calls. The collection includes required request details for all endpoints so you don’t have to spend time fiddling around with things like headers and knowing the structure of a certain POST request body. ![Dwolla Postman collection](https://cdn.dwolla.com/com/prod/20190628103604/PostmanBlogScreenshot1-1.png) The Postman collection caters to developers of all skill levels. If you’re a beginner looking to understand how APIs work, the collection doesn’t require you to tweak much out of the box to see how the API behaves. If you’re an experienced developer looking to roll your own [API wrapper](https://developers.dwolla.com/pages/sdks.html), the collection can serve as a spec to help define how to implement each method. In this post, I’m extending the use of collections further and building out a workflow using [Postman’s collection runner](https://learning.getpostman.com/docs/postman/collection_runs/building_workflows). Sign up for a [Sandbox account](https://accounts-sandbox.dwolla.com/sign-up), obtain your API keys and see how easy it is to use the Dwolla ACH payment API to send money! ## Try the Send Money Collection Runner Dwolla has various payment workflows that are tailored to each business’s unique use case. 
Each workflow contains a set of steps that are required prior to actually sending through a payment, which can include tasks like: creation of a Customer record, addition of a bank account and optional bank verification. The Send Money workflow walks through sending a payout, triggering requests along the way to create a receiving Customer record and initiate a payment. Using Postman’s collection runner we can [build a workflow](https://learning.getpostman.com/docs/postman/collection_runs/building_workflows) to run requests in a sequential order to match this specific payment flow. Follow these steps to setup the Send Money collection runner: 1. Click the ‘run with Postman’ button and select Open with to import the collection [![Run in Postman](https://run.pstmn.io/button.svg)](https://app.getpostman.com/run-collection/d49125f7c64c530895e5#?env%5BDwollaSendMoney.template%5D=W3sia2V5IjoiY2xpZW50X2lkIiwidmFsdWUiOiIiLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6ImNsaWVudF9zZWNyZXQiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWV9XQ==) 2. Click the ‘eye’ icon to edit environment variables in the DwollaSendMoney.template ![Edit environment variables](https://cdn.dwolla.com/com/prod/20190628105531/PostmanBlogScreenshot2.png) 3. Once your initial variables are updated, you’re ready to run the collection! ![Run collection](https://cdn.dwolla.com/com/prod/20190628105718/PostmanBlogScreenshot3.png) **Note:** If you haven’t already, sign up for a sandbox account to obtain your client_id and client_secret. **You’ll want to set your client_id and client_secret in both initial and current value.** 4. Verify the Environment is set and click Run ![Verify environment and run](https://cdn.dwolla.com/com/prod/20190628105852/PostmanBlogScreenshot4.png) Needless to say, I’m a big fan of Postman and am excited to see its progression in the API space. Questions about using Postman or how to integrate with the Dwolla API? Please don’t hesitate to reach out!
spencerhunter
131,267
What is Artificial Intelligence?
Revolution of Artificial Intelligence will more impactful than the invention of Personal Computers,...
0
2019-07-02T19:55:49
https://dev.to/jay_tillu/what-is-artificial-intelligence-4038
machinelearning, computerscience, ai
The revolution of Artificial Intelligence will be more impactful than the invention of Personal Computers, the Internet, Smartphones or any other invention ever created by humans. As we all know, there is only one reason behind everything that was ever made by human hands: to automate tedious and repetitive tasks and to live a better and easier life than yesterday. We created vehicles because it was too hard and painful for us to ride on animals, and also not everyone could afford it. We created the bulb because it was too painful for us to live in darkness, and the reasons are the same for every innovation. **We did it because we needed it!!!** Similarly, when we created computers we wanted to automate logical tasks like performing the same calculations over and over, which took too much of our time, and creating, storing, and manipulating data, etc. As time went on, computers became much more powerful and smart, but still they can only do the tasks they are told to do. They cannot perform tasks based on their own logic and understanding, and that is where Artificial Intelligence comes into the picture. >**The idea was very revolutionary: “to create a machine that is intelligent enough to perform any task that human intelligence can perform.”** Planning, Thinking, Learning, Understanding, Knowledge representation, Problem Solving, etc. ###How old is the concept of Artificial Intelligence? --- The dream of artificial intelligence was first thought of in Indian philosophies like that of *Charvaka*, a famous philosophical tradition dating back to 1500 BCE, with surviving documents from around 600 BCE *(according to Wikipedia).* The seed of modern AI was planted by some classical philosophers in the 1940s. But the main boost was given to this idea by our tech giants Google, Microsoft, Amazon, etc. If they hadn’t put their focus on AI, the idea of AI would have remained just a research idea. 
![John McCarthy](https://cdn-images-1.medium.com/max/800/1*USZ0OqlV_ZKGKMabL7LLsQ.jpeg) John McCarthy ![Marvin Minsky](https://cdn-images-1.medium.com/max/800/1*ILr958_uynj8PPk0tSeqQQ.jpeg) Marvin Minsky Marvin Minsky and John McCarthy are considered the fathers of Artificial Intelligence. They defined the basic principles of Artificial Intelligence. ###What are the uses of Artificial Intelligence? --- Today Artificial Intelligence is used everywhere, and many of us also use it quite a lot in our day to day life. Some examples are … * Amazon recommends what you should buy next online. * Google Assistant, Microsoft Cortana, Amazon Alexa, and Apple Siri all use AI to understand and act on what you say. * Google Photos uses AI to recognize faces and objects in photos. * Gmail uses an AI-based mail system to recognize and filter spam emails. * Today many smartphones use AI-based cameras to improve your photos. * Facebook, YouTube, and Twitter use AI to detect and remove malicious content on their platforms. * Gmail and Google Messages use AI to understand your conversation and then predict smart replies. The above are just examples of narrow AI, but still, AI is empowering our lives, and the future implementations are endless… ![Siri (Smart Assistant by Apple)](https://cdn-images-1.medium.com/max/800/1*QSB5Ydc3p-YfAdw90mERjA.jpeg) ![Gmail Smart Replies](https://cdn-images-1.medium.com/max/800/1*g4sW9CP1ubq0j98IfrLCzQ.jpeg) That’s it for today guys. Stay in touch for more such content. Till then Keep Coding, Keep Loving… Jai Hind, Vande Mataram 🇮🇳 Wanna get in touch with me? Here are the links. I’d love to become your friend. 
😊 - [My Site](https://www.jaytillu.in) - [My Blogs](https://blogs.jaytillu.in) - [Twitter](https://twitter.com/jay_tillu) - [Facebook](https://www.facebook.com/jaytillu.1314/) - [Instagram](https://www.facebook.com/jaytillu.1314/) - [Medium](https://medium.com/jay-tillu) - [LinkedIn](https://www.linkedin.com/in/jaytillu/) - [Github](https://www.github.com/jay-tillu) - [Stack overflow](https://stackoverflow.com/users/8509590/jay-tillu) or just mail me at jd.tillu@gmail.com
jay_tillu
131,980
Two Programmers Walk Into a Bar...
Uncaught TypeError: bar is not a function
0
2019-07-03T14:50:41
https://dev.to/bennypowers/two-programmers-walk-into-a-bar-4bp4
jokes
--- title: Two Programmers Walk Into a Bar... published: true tags: jokes --- Uncaught TypeError: bar is not a function
bennypowers
132,013
Continuous Integration Explained
What is Continuous Integration (CI)? A software development practice of merging code changes to a main branch many times per day. Learn how to get started.
0
2019-07-03T16:03:55
https://semaphoreci.com/continuous-integration
cicd, productivity, beginners, devops
--- title: Continuous Integration Explained published: true description: "What is Continuous Integration (CI)? A software development practice of merging code changes to a main branch many times per day. Learn how to get started." tags: cicd, productivity, beginners, devops canonical_url: https://semaphoreci.com/continuous-integration cover_image: https://thepracticaldev.s3.amazonaws.com/i/adlbbaouajmsd5j26wd6.png --- Continuous integration enables iterative software development, reduces risks from defects and makes developers highly productive. ## What is Continuous Integration? Continuous integration (CI) is a software development practice in which developers merge their changes to the main branch many times per day. Each merge triggers an automated code build and test sequence, [which ideally runs in less than 10 minutes](https://dev.to/markoa/what-is-proper-continuous-integration-585c). A successful CI build may lead to further stages of continuous delivery. If a build fails, the CI system blocks it from progressing to further stages. The team receives a report and repairs the build quickly, typically within minutes. All competitive technology companies today practice continuous integration. By working in small iterations, the software development process becomes predictable and reliable. Developers can iteratively build new features. Product managers can bring the right products to market, faster. Developers can fix bugs quickly and usually discover them before they even reach users. Continuous integration requires all developers who work on a project to commit to it. Results need to be transparently available to all team members and build status reported to developers when they are changing the code. In case the main code branch fails to build or pass tests, an alert usually goes out to the entire development team who should take immediate action to get it back to a "green" state. ## Why do we need Continuous Integration? 
In business, especially in new product development, usually **we can't figure everything up front**. Taking smaller steps helps us estimate more accurately and validate more frequently. A shorter feedback loop means having more iterations. And **it’s the number of iterations, not the number of hours invested, that drives learning**. For software development teams, working in long feedback loops is risky, as it increases the likelihood of errors and the amount of work needed to integrate changes into a working version of the software. Small, controlled changes are safe to happen often. And by automating all integration steps, developers avoid repetitive work and human error. Instead of having people decide when and how to run tests, a CI tool monitors the central code repository and runs all automated tests on every commit. Based on the total result of tests, it either accepts or rejects the code commit. ## Extension with Continuous Delivery Once we automatically build and test our software, **it gets easier to release it**. Thus Continuous Integration is often extended with Continuous Delivery, a process in which code changes are also automatically prepared for a release (CI/CD). In a fine-tuned CI/CD process, all code changes are deployed to a staging environment, a production environment, or both after the CI stage has been completed. Continuous delivery can be a fully automated workflow. In that case, it's usually referred to as **Continuous Deployment**. Or, it can be partially automated with manual steps at critical points. What's common in both scenarios is that developers always have a release artifact from the CI stage that has gone through a standardized test process and is ready to be deployed. ## CI and CD pipeline CI and CD are often represented as a pipeline, where new code enters on one end, flows through a series of stages (build, test, staging, production), and is published as a new production release to end users on the other end. 
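Such a pipeline is typically described in a YAML file committed to the repository alongside the code. Here's a minimal sketch in Semaphore 2.0's syntax (the block name, machine type, and `make` commands are illustrative placeholders, not a prescription):

```yaml
# .semaphore/semaphore.yml -- a minimal, illustrative CI pipeline
version: v1.0
name: Build and test
agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu1804
blocks:
  - name: Build and test
    task:
      jobs:
        - name: Unit tests
          commands:
            - checkout     # fetch the pushed revision
            - make build   # compile / install dependencies
            - make test    # run the automated test suite
```

A failing command in any job fails the block, which is what stops a broken commit from progressing to later stages.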
Each stage of the CI/CD pipeline is a logical unit in the delivery process. Developers usually divide each unit into a series of subunits that run sequentially or in parallel. ![ci/cd pipeline](https://thepracticaldev.s3.amazonaws.com/i/sp55zyim5y5u1l1xczqy.png) I shared a more detailed post on CI/CD pipelines here on dev.to: {% link https://dev.to/markoa/ci-cd-pipeline-a-gentle-introduction-2n8k %} ## Prerequisites for doing Continuous Integration The basic prerequisites for implementing continuous integration include: - Automating builds; - Automating testing; - More frequent commits to a single source code repository, and - Providing visibility of the process and real-time access to CI status to the team. Teams that don't practice CI yet should take small steps, continuously improve, and iterate on code and process in a way that helps the organization grow. On every step in the journey to full CI/CD, the development team's productivity will rise, as well as the velocity of the entire business. ## A typical development workflow You can apply continuous integration in most software projects, including web applications, cloud-native microservices, mobile apps, system software, IoT / embedded systems and more. For example, [Semaphore](https://semaphoreci.com) integrates with GitHub, bringing CI/CD into the standard pull request-based development process. Here's a typical continuous integration workflow that Semaphore users practice on a daily basis: - A developer creates a new branch of code in GitHub, makes changes in the code, and commits them. - When the developer pushes her work to GitHub, Semaphore builds the code and then runs the automated test suite. - If Semaphore detects any errors in the CI pipeline (status: red), the developer gets a Slack notification or sees a message on her personal dashboard on Semaphore. - If the developer has opened a pull request, Semaphore also reports the CI status on the pull request page on GitHub. 
- Otherwise, the user gets a notification that CI has passed (status green). Semaphore automatically initiates the next pipeline which deploys a new version of the application to a staging server. This allows QA or anyone else on the team to test the changes in a production-like environment. - Once another developer has verified the changes in a peer review, the author can merge the new branch of code into the master branch. - Semaphore runs one more build and test pipeline on the master branch, and when it passes it deploys a new version of the code to production. The team gets a notification about a new release via Slack. ## Continuous Integration best practices **Treat master build as if you're going to make a release at any time**. Which implies some team-wide don'ts: - Don't comment out failing tests. File an issue and fix them instead. - Don't check-in on a broken build and never go home on a broken build. **Keep the build fast: up to 10 minutes**. Going slower is good but doesn't enable [a fast-enough feedback loop](https://dev.to/markoa/what-is-proper-continuous-integration-585c). **Parallelize tests**. Start by splitting by type (eg. unit and integration), then adopt tools that can parallelize each. **Have all developers commit code to master at least 10 times per day**. Avoid long-running feature branches which result in large merges. Build new features iteratively and use feature flags to hide work-in-progress from end users. **Wait for tests to pass before opening a pull request**. Keep in mind that a pull request is by definition a call for another developer to review your code. Be mindful of their time. Test in a clone of the production environment. For example, you can [define your CI environment with a Docker image](https://semaphoreci.com/product/docker), and make the CI environment match production 100%. An alternative is to customize the CI environment so that bugs due to difference with production almost never happen. 
**Use CI to maintain your code**. For example, run scheduled workflows to detect newer versions of your libraries and upgrade them. **Keep track of key metrics**: total CI build time (including queue time, which your CI tool should maintain at zero) and how often your master is red. *** Did you find the post useful? Let me know by ❤️-ing or 🦄-ing it below! What would you like to learn next about CI/CD? Let me know in the comments. Thanks for reading. 🙏
markoa
132,054
Stages of learning
I realized there are five stages of learning: Awareness, Panic, Avoidance, Acceptance, and finally Learning
0
2019-07-03T17:47:40
https://zellwk.com/blog/stages-of-learning
learning, thoughts
--- title: Stages of learning description: "I realized there are five stages of learning: Awareness, Panic, Avoidance, Acceptance, and finally Learning" canonical_url: https://zellwk.com/blog/stages-of-learning tags: learning, thoughts published: true --- Over time, I realized there are five stages of learning. 1. Awareness 2. Panic 3. Avoidance 4. Acceptance 5. Learning ## Awareness - "Oh! This is possible?!" - "Ah, so that's how you solve this". - "This is good. I need to learn this". In the awareness stage, you learn about a problem. And you realize you need to find a solution. ## Panic Panic might come to some people. This depends on how much pressure you put on yourself. If you pressure yourself hard, you'll get into panic mode. If you set a deadline for learning, you're giving yourself pressure. Most people don't realize this. They set an ambitious deadline for themselves and they fail hard. If you set a deadline to learn something, that deadline you set is probably ambitious. It's ambitious because learning usually takes more time and effort than you account for. If you put too much pressure on yourself, you may get overwhelmed. You may look for shortcuts instead of actually learning what you're supposed to learn. ## Avoidance When panic/overwhelm sets in, we tend to avoid what we're doing. - "I can't do this right now" - "I'm not smart enough" - "I need a break" - "Life happens" We give ourselves <span class="strikeout"> excuses </span> permission to drop the thing we're learning. It's okay to pause for a breather if you can't catch your breath. We all need a breather sometimes. But it's not okay to give up. (Unless you decide it's something you never wanted to do for the rest of your life. In which case, giving up is a good choice). The unfortunate thing is: Some people never realize they're avoiding. They search the internet, hoping to find "good tutorials" that'll teach them everything they need to know. 
And they use "I can't find any good tutorials" as an excuse not to learn. ## Acceptance This is where you accept you bit off more than you can chew, and decide to chew it anyway. You accept the tough challenge ahead. And you prepare to face it head-on. For most people, it's when they say "I'm going to fucking learn this no matter what". This is when we dedicate the necessary resources, time, and energy to learn the thing we need to learn. If you get into this mode, anything you learn stays with you for a long time. Before this stage, you don't actually learn. You may remember something for a short while, but you'll forget about it quickly. ## Learning And so learning begins. We read everything we need to read. We do everything we need to do. We code if we have to. We think if we have to. We get our hands dirty if we have to. Learning is like a marathon. There's no best pace. All we have is the pace we're comfortable with. And the pace changes according to our state. - If we go too slow, we get bored. - If we go too fast, we get into an overwhelmed or panic state. So what's important is to pace yourself. Take it step by step. Go slow if you're running out of breath, and run faster if you're getting bored. At a certain point, we may decide we learned enough. And learning stops. ## Mastery Mastery is a continuous process where you learn more and more things about the same subject. You get deeper into the subject and you're able to sieve out the nuances. Mastery comes with repeated learning. It comes with going through the five stages over and over again. 1. Awareness 2. Panic 3. Avoidance 4. Acceptance 5. Learning With enough time, you'll become a master at one thing. <hr> Thanks for reading. This article was originally posted on [my blog](https://zellwk.com/blog/stages-of-learning ). Sign up for [my newsletter](https://zellwk.com) if you want more articles to help you become a better frontend developer.
zellwk
133,014
Freeing up space on a Ubuntu server
This is my first DEV.to post. If I'm using wrong tags or doing some weird formatting, please tell me....
0
2019-07-05T17:35:53
https://dev.to/redcreator37/freeing-up-space-an-a-ubuntu-server-526c
ubuntu, server
<sup>This is my first DEV.to post. If I'm using wrong tags or doing some weird formatting, please tell me. It'd be much appreciated.</sup> I have a small Ubuntu Server-powered server. A nettop<sup>[1]</sup> with a 40 GB disk drive which I use for experimenting with Linux stuff (because I don't care that much if it breaks) and hosting a GitLab instance. The 40 GB disk is actually an old laptop one, this nettop came with a 300 GB one which I put into my old laptop (perhaps not that good an idea) and now I'm too lazy to replace the drives again and reinstall all software (especially on my laptop). A side effect is that the server is way slower than before because it's running off that old SATA I drive that's really starting to show its age (not to mention it's also overheating all the time). *[1] Nettops are small PCs meant to serve as a multimedia storage device or a set top box. They were very common about 10 years ago before streaming services went mainstream and people wanted to be able to watch movies on their TVs without having to burn them to DVDs.* --- So this is going to be a writeup about the things I usually do when attempting to free up some space on Ubuntu-powered servers. Maybe it'll be useful to someone 😉. The first thing I'll do is to SSH into the server and check the space available: ```bash ssh redcreator37@192.168.xxx.xxx df -h ``` Here I'm checking the disk space with the *df -h* command. The -h switch displays sizes in human-readable units (like GBs) instead of raw blocks. Here's the output: ![](https://thepracticaldev.s3.amazonaws.com/i/yc8jbwmdb8ny6dw05o3i.png) If you look closer, you can see that there are 26 GBs of free space available on the disk: ``` /dev/sda2 37G 9.5G 26G 28% / ``` You've probably noticed that the full disk capacity is 37 GBs although the disk is labeled as a 40 GB one (mainly due to the decimal-vs-binary gigabyte difference and filesystem overhead). While 26 GBs is still a lot of storage, I want to get rid of old files and packages I don't use anymore. 
Many of these are libraries I've installed to compile a program or two and then forgot to remove after I stopped playing with those programs. The first thing we will do here is update the package cache and remove unused packages (the autoremove command).

![](https://thepracticaldev.s3.amazonaws.com/i/wl4hn330x155hc540d9m.png)

Apparently there are no unused packages to remove, but 44 packages can be upgraded. I'll do that later because I'll remove the things I don't need anymore first (so I won't be upgrading the stuff and removing it just a moment later). If you've been paying attention, you've probably noticed the -y switch at the end of the command. This switch just tells apt to skip any confirmation prompts, which can sometimes do more harm than good, so use it wisely (and feel free to modify the apt commands I'm using here, especially if there's something wrong with them). At this point we can either figure out all the names of the packages we want to remove and do it with *sudo apt remove*, or use a tool like Aptitude to search for packages and also check for any broken dependencies. Run Aptitude with *sudo aptitude* (you can install it with *sudo apt install aptitude -y*). Navigate through the package list with the arrow keys and mark packages for deletion with - (hyphen). I've selected some packages that I know I won't use anymore and now there are 6 broken packages... 😐 I've chosen to examine the packages and I'm starting to notice I should really put the original drive back - it started going through the database checking for package relations and that took a good amount of time.

![](https://thepracticaldev.s3.amazonaws.com/i/rbnlxjcp4qkdpg9nzgos.png)

To make it all worse, it even hung at some point. Guess I'm just going to let it remove the packages, broken dependencies or not. I've tried pressing g to start the operations (remove the packages) and nothing. It just put a message "Trying to fix broken packages..."
in front of me and froze completely. Even the little progress indicator on the right side of that red status bar at the bottom stopped working, the cursor disappeared and the menu shortcut (CTRL + T) stopped responding... Well, time to restart it... Surprisingly, pressing CTRL + C a few times did work and terminated the Aptitude process. I've started Aptitude again and this time disabled automatic package resolving in the settings (CTRL + T, choose the Options menu, then Preferences).

![](https://thepracticaldev.s3.amazonaws.com/i/wfcq8uxblj802qceie4k.png)

Now I can just hope this solves the problem. Let's try again (**in case your computer isn't as slow as this one and still responds, press g to start the operations**). That did work - this time instead of just displaying the message, there was an Apply button which I've pressed to skip the scanning process. Now I at least know which packages are causing the problems:

![](https://thepracticaldev.s3.amazonaws.com/i/v151v82ffg1ostwb7yp6.png)

I've told it to remove the broken packages by selecting each one and pressing - (minus). As I'm removing all GUI-related stuff here, these are going to be useless anyway. You can also choose other actions from the Package menu (menus are accessed through CTRL + T in case you've missed it). Now we're ready to go (press g again and it should work). You can usually just let it do its stuff unless there's something really bad going on with your package configuration (in which case you may need to confirm a few things). After waiting for a while it should work and remove all selected packages (and, of course, do everything else you've instructed it to). You can run *df -h* again if you want to see how much space we've freed, but don't be surprised if there isn't any huge improvement just yet. We can now (finally) upgrade the packages. Do a quick *sudo apt upgrade && sudo apt dist-upgrade*.
After that, run *sudo apt clean && sudo apt autoclean && sudo apt autoremove* (you can also append the -y switch to those to skip any confirmation prompts, or run them separately if you want). At this point we're pretty much done with the package management part. Something else you can do is run *dpigs* (from the debian-goodies package) to find out which packages take up the most disk space. Last but not least, we can free up a significant amount of space by hunting down huge files and directories. A good utility to find such space hogs is ncdu (an ncurses version of the du command). Run *ncdu /* to start scanning for big files. This can take a while as it has to scan the entire / drive.

![](https://thepracticaldev.s3.amazonaws.com/i/u1q3b7qm9z7gn49cgvft.png)

You can navigate through the directory structure to find the biggest files in each directory. If you find a really huge file you want to delete, just press d, select yes and it should be gone. Some of the most common space hogs are old versions of the Linux kernel and logs, which can pile up if you don't have some neat script checking and deleting them. As for kernels, always keep one or two old versions as a fallback in case there are problems with the latest one. Any proper package manager should take care of that, but if it doesn't for some reason and you end up with many outdated kernels, you may delete them (you're doing this **at your own risk**, so don't blame Linux for magically not working anymore) and make sure you reconfigure GRUB afterwards to avoid boot issues. I hope this post will be helpful to someone although it's not much. This is my first post on DEV.to and I'm open to any suggestions. If you have a question or just found an error in my article, feel free to leave a comment 🙂
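Since I keep comparing *df -h* output before and after cleanup, that comparison could also be scripted. A small sketch (df_used_percent is a made-up helper name, not a standard tool) that pulls the Use% column out of a df line:

```shell
# df_used_percent: extract the "Use%" column from a single df output line.
# Hypothetical helper for scripting before/after comparisons.
df_used_percent() {
  # $1 is one df line, e.g. "/dev/sda2 37G 9.5G 26G 28% /"
  printf '%s\n' "$1" | awk '{gsub(/%/, "", $5); print $5}'
}

# Example usage: df_used_percent "$(df -h / | tail -n 1)"
```

You could run it before and after the apt cleanup and subtract the two numbers to see how much the cleanup actually helped.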
redcreator37
132,062
Was not able to comment out jsx in react, so I made my own snippets
While I was on a react project, I was not able to comment out jsx which was really frustrating. The...
0
2019-07-04T14:47:12
https://dev.to/raisaugat/was-not-able-to-comment-out-jsx-in-react-so-i-made-my-own-snippets-2853
vscode, productivity, react, tutorial
While I was on a react project, I was not able to comment out jsx, which was really frustrating. The way we comment out jsx is ` {/* comment */}`, but the default comment function in vscode does not comment out jsx. So, I made my own snippet to comment out jsx.

First, open the command palette ![](https://thepracticaldev.s3.amazonaws.com/i/twr7gox7txgou6dg393c.png) Search for configure user snippets. But before choosing, check the language mode of the file you are working in. ![](https://thepracticaldev.s3.amazonaws.com/i/rkg2czlkqde414wb5hnx.png) Then choose javascript or javascriptreact. ![](https://thepracticaldev.s3.amazonaws.com/i/bwgcm1sqlz0gjod4hh4c.png) After that you will see some examples of how you can make a snippet. Copy the code below and paste it in:

```json
"Comment out jsx": {
  "prefix": "jsx",
  "body": [
    "{/*",
    "${TM_SELECTED_TEXT}",
    "*/}"
  ],
  "description": "Comment out jsx"
}
```

Save the file and you are good to go. Select the code you want to comment out and insert the snippet.
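If you trigger the snippet often, VS Code can also bind it to a key through keybindings.json using the built-in `editor.action.insertSnippet` command, whose `args` can reference a snippet by its name (the key combination below is just an example, pick any free one):

```json
// keybindings.json entry; "name" must match the snippet's key defined above
{
  "key": "ctrl+shift+/",
  "command": "editor.action.insertSnippet",
  "when": "editorTextFocus",
  "args": { "name": "Comment out jsx" }
}
```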
raisaugat
132,484
Problem Solving with Project Euler, Part One: Multiples of 3 and 5
This post will contain potential spoilers/solutions for the first Project Euler problem, so read with...
0
2019-07-04T21:18:51
https://dev.to/dylanesque/problem-solving-with-project-euler-part-one-multiples-of-3-and-5-53bl
javascript, showdev, programming, beginners
--- title: Problem Solving with Project Euler, Part One: Multiples of 3 and 5 published: true description: tags: javascript, showdev, programming, beginners --- *This post will contain potential spoilers/solutions for the first Project Euler problem, so read with caution if you're actively working on it!* ## What is Project Euler? According to [freeCodeCamp](https://www.freecodecamp.org/), where I first encountered these problems: > Project Euler (pronounced Oiler) is a series of challenging mathematical/computer programming problems meant to delve into unfamiliar areas and learn new concepts in a fun and recreational manner. > The problems range in difficulty and for many the experience is inductive chain learning. That is, by solving one problem it will expose you to a new concept that allows you to undertake a previously inaccessible problem. > Although mathematics will help you arrive at elegant and efficient methods, the use of a computer and programming skills will be required to solve most problems. > *from the Project Euler home page* I'll be working through these to detail how to approach problem-solving, so let's dive into the first problem! ## Multiples of 3 and 5 The problem on [FCC](https://learn.freecodecamp.org/coding-interview-prep/project-euler/problem-1-multiples-of-3-and-5) states: > If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. > Find the sum of all the multiples of 3 or 5 below the provided parameter value `number`. There is a definite test for reading comprehension here, as the problem states that we need to sum all numbers **below** the number passed into the function as an argument, so I'm taking that into consideration. What data do I need to solve this problem? I simply need a list of the aforementioned numbers, and then to add them up. 
This is a very straightforward problem, but I'll be detailing a more exhaustive process to follow in future blog entries that will be helpful for much more difficult problems. First, I'm setting up variables in the form of an empty array to push the relevant numbers into:

```
let numbers = [];
```

There are a few ways that I could populate the array with the necessary data, and what comes to my mind now is to:

+ Set up a `for` loop with the iterator value set to `number` minus 1, that runs while the iterator is above 2 (this was initially set to zero, but I realized while typing this that there are obviously no positive multiples of 3 or 5 beneath 3, so there's no point in running unnecessary iterations), and subtracts one from the iterator with every pass.
+ The loop will run a check with each pass to see if the [modulus](https://en.wikipedia.org/wiki/Modulo_operation) of the iterator value and (3 or 5) equals zero. If so, that value gets [pushed](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/push) (read: added to the end of the array) to the `numbers` array.

This looks like:

```
for (let i = number - 1; i > 2; i--) {
  if (i % 3 == 0 || i % 5 == 0) {
    numbers.push(i);
  }
}
```

Finally, I'm going to run the [.reduce](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/Reduce) method on the `numbers` array, and return that value. `Array.reduce()` is one of the hairier built-in JavaScript methods to learn, but the short version is that it runs a process over an array to *reduce it down to a single value*. So the completed code looks like this:

```
function multiplesOf3and5(number) {
  let numbers = [];
  for (let i = number - 1; i > 2; i--) {
    if (i % 3 == 0 || i % 5 == 0) {
      numbers.push(i);
    }
  }
  return numbers.reduce((a, b) => a + b, 0);
}

multiplesOf3and5(1000);
```

...and it works!
## Final Thoughts I can do more work here, including analyzing the [Big O](https://en.wikipedia.org/wiki/Big_O_notation) result of the algorithm, and using that information to improve the runtime. Did you also work on this problem? If so, what did your solution look like?
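On the runtime point: one possible constant-time alternative (a sketch of a different technique, not the solution FCC expects, and sumDivisibleBy is a helper name I made up) uses the arithmetic series formula, where the sum 1 + 2 + ... + m equals m(m + 1)/2:

```javascript
// O(1) alternative: sum the multiples of k below `limit` with a closed formula.
function sumDivisibleBy(k, limit) {
  const m = Math.floor((limit - 1) / k); // how many multiples of k are below limit
  return (k * m * (m + 1)) / 2;          // k * (1 + 2 + ... + m)
}

function multiplesOf3and5(number) {
  // multiples of 15 are counted by both terms, so subtract them once
  return (
    sumDivisibleBy(3, number) +
    sumDivisibleBy(5, number) -
    sumDivisibleBy(15, number)
  );
}

multiplesOf3and5(10);   // 23
multiplesOf3and5(1000); // 233168
```

It gives the same answers as the loop version without iterating at all, which matters once `number` gets large.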
dylanesque
132,771
laravel
0
2019-07-05T09:26:40
https://dev.to/usmanard/laravel-6fl
laravel, designpattern
--- title: laravel published: true description: tags: laravel, designpattern --- How can we use the factory pattern in Laravel or PHP without using switch or if/else statements?
usmanard
132,938
Why we wrote (yet) another state management tool
Redux vs. MobX? Most current state management solutions don't let you manage state using h...
0
2019-07-05T15:13:56
https://dev.to/adamklein/why-we-wrote-yet-another-state-management-tool-el3
javascript, react, frontend, redux
## Redux vs. MobX?

Most current state management solutions don't let you manage state using hooks, which forces you to manage local and global state differently and makes the transition between the two costly. This brought us to look for solutions that use React hooks. The problem is that hooks only run inside React components.

## What about Context?!

Using plain React Context is not the best solution for state management:

- When managing global state using Context in a large app, you will probably have many small, single-purpose providers. Soon enough you'll find yourself in Provider wrapper hell.
- When you order the providers vertically, they can't dynamically depend on each other without changing the order, which might break things.
- Context doesn't support selectors, render bailout, or debouncing.

## Our guidelines

To have global state management, we need a top-level provider. We wanted the developer to manage immutable state using hooks. We also wanted to allow for selectors and render bailout for performance reasons. And lastly, we wanted to make sure there is no initialization code so that packages that use state management are easily pluggable into your app. Iterating over and over on the API got us to a solution that we feel is easy and powerful. We called the library Reusable. Here is the API:

### Provider:

```jsx
const App = () => (
  <ReusableProvider>
    ...
  </ReusableProvider>
)
```

### Define a store:

```javascript
const useCounter = createStore(() => {
  const [counter, setCounter] = useState(0);
  return {
    counter,
    increment: () => setCounter(prev => prev + 1)
  }
});
```

### Subscribe to the store:

```javascript
const Comp1 = () => {
  const something = useCounter();
}

const Comp2 = () => {
  const something = useCounter(); // same something
}
```

### Use a selector:

```javascript
const Comp1 = () => {
  const isPositive = useCounter(state => state.counter > 0);
  // Will only re-render if switching between positive and negative
}
```

## Find out more

To find out more and start experimenting with Reusable, visit the repo:
[https://github.com/reusablejs/reusable](https://github.com/reusablejs/reusable)

You can also check out the video from ReactNext, where Maayan Glikser and I present the library:

{% youtube oy-6urveWzo %}
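For readers curious what the selector/render-bailout idea boils down to, here is a framework-free sketch (createTinyStore is a name I made up; this is not Reusable's actual implementation): subscribers register a selector and are only notified when its result actually changes.

```javascript
// Minimal external store with selector-based notification.
function createTinyStore(initialState) {
  let state = initialState;
  const subscribers = [];
  return {
    getState: () => state,
    setState(partial) {
      state = { ...state, ...partial };
      for (const sub of subscribers) {
        const next = sub.selector(state);
        if (next !== sub.lastValue) { // bailout: skip unchanged selections
          sub.lastValue = next;
          sub.notify(next);
        }
      }
    },
    subscribe(selector, notify) {
      subscribers.push({ selector, lastValue: selector(state), notify });
    },
  };
}
```

With a selector like `state => state.counter > 0`, bumping the counter from 1 to 2 produces no notification at all, which is exactly the re-render a component gets to skip.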
adamklein
132,940
Javascript String Methods You Must Know to Become an Algorithms Wizard
In this article I want to talk about few basic string methods which are most commonly used in Javascr...
1,610
2019-07-06T12:07:20
https://dev.to/uptheirons78/javascript-string-methods-you-must-know-to-become-an-algorithms-wizard-c84
javascript, codenewbie, beginners, webdev
In this article I want to talk about a few basic string methods which are most commonly used in JavaScript and very useful when it comes to solving problems and algorithms. I've been working on solving algorithms on both FreeCodeCamp and CodeWars for the last 4 weeks and found out that I used these methods a lot. If you are interested in Arrays Methods you can check my post about them: [Javascript Arrays Methods You Must Know to Become an Algorithms Wizard](https://dev.to/uptheirons78/javascript-arrays-methods-you-must-know-to-become-an-algorithms-wizard-2ec7)

1) Getting the length of a string with **_.length_**

```javascript
let str = "i am a string";
console.log(str.length); //13
```

2) Getting an array from a string with **_.split()_**. Remember it is possible to use a specified separator string to determine where to make each split.

```javascript
const str = "Luke, I am your Father";
console.log(str.split());//[ 'Luke, I am your Father' ]
console.log(str.split(''));//["L", "u", "k", "e", ",", " ", "I", " ", "a", "m", " ", "y", "o", "u", "r", " ", "F", "a", "t", "h", "e", "r"]
console.log(str.split(' '));//[ 'Luke,', 'I', 'am', 'your', 'Father' ]
console.log(str.split(','));//[ 'Luke', ' I am your Father' ]
```

Let's see an easy algorithm I solved on CodeWars where, given a string of words, the function must return an array of integers equal to the lengths of the words.

```javascript
function wordsLength(str) {
  return str.split(' ') //first split the string at spaces to have an array of words;
    .map(word => word.length); //second use array map() to transform any array element to its length with .length;
}

wordsLength('Luke, I am your father'); //[ 5, 1, 2, 4, 6 ]
```

3) Convert a string into upper case with **_toUpperCase()_**

```javascript
const str = 'I find your lack of faith disturbing.';
console.log(str.toUpperCase()); //I FIND YOUR LACK OF FAITH DISTURBING.
```

4) Convert a string into lower case with **_toLowerCase()_**

```javascript
const str = 'Help me, Obi-Wan Kenobi. You’re my only hope.';
console.log(str.toLowerCase()); //help me, obi-wan kenobi. you’re my only hope.
```

5) Check if a string contains specified characters with **_includes()_**. It will return a boolean value (true or false). It is possible to add the position within the string at which to begin searching.

```javascript
const str = 'The Force will be with you. Always.';
console.log(str.includes('Force')); //true
//Attention: it is case sensitive!
console.log(str.includes('force')); //false
//Often it will be better to transform the given string to lowercase
//and then check if it includes or not what you are looking for.
const newStr = str.toLowerCase();
console.log(newStr.includes('force')); //true
//Add the position where to start searching
console.log(str.includes('w', 0)); //true
console.log(str.includes('T', 1)); //false
```

6) Check if a string starts with specified characters with **_startsWith()_**. It will return a boolean value and it is possible to add the position where to start searching. It is case sensitive.

```javascript
const str = 'Never tell me the odds!';
console.log(str.startsWith('Never')); //true
console.log(str.startsWith('Never', 1)); //false
console.log(str.startsWith('never', 0)); //false
```

7) Check if a string ends with specified characters with **_endsWith()_**. It will return a boolean value and it is possible to add the length parameter (optional). It is case sensitive.

```javascript
const str = 'Do. Or do not. There is no try.';
console.log(str.endsWith('try.')); //true
console.log(str.endsWith('Try.')); //false
console.log(str.endsWith('try', 30)); //true
console.log(str.endsWith('try.', 30)); //false
```

8) Check for the first occurrence of a specified value in a string with **_.indexOf()_**. If the value is not in the string it will return -1. It is possible to add a second parameter to start the search at a specified index.

```javascript
const str = 'When gone am I, the last of the Jedi will you be. The Force runs strong in your family. Pass on what you have learned.';
console.log(str.indexOf('h')); //1
console.log(str.indexOf('H')); //-1
console.log(str.indexOf('h', 2)); //17
console.log(str.indexOf('J', str.length)); //-1
```

9) Check for the last occurrence of a specified value in a string with **_.lastIndexOf()_**. If the value is not in the string it will return -1. It is possible to add the index of the last character in the string to be considered as the beginning of a match.

```javascript
const str = 'When gone am I, the last of the Jedi will you be. The Force runs strong in your family. Pass on what you have learned.';
console.log(str.lastIndexOf('h')); //105
console.log(str.lastIndexOf('h', 100)); //97
console.log(str.lastIndexOf('.')); //117
console.log(str.lastIndexOf('.', 0)); //-1
```

10) Repeat a given string with **_.repeat()_**.

```javascript
const str = 'There’s always a bigger fish.';
console.log(str.repeat(2));//There’s always a bigger fish.There’s always a bigger fish.
//Attention: count will be converted to an integer!
console.log(str.repeat(5.5));//There’s always a bigger fish.There’s always a bigger fish.There’s always a bigger fish.There’s always a bigger fish.There’s always a bigger fish.
```

11) Replace a pattern in a given string with **_replace()_**. The pattern can be a string or a regex and the replacement can be a string or a function to be called on each match. Attention: if the pattern is a string, only the first occurrence will be replaced.

```javascript
const string = 'Fear is the path to the dark side.';
console.log(string.replace('Fear', 'Tears')); //Tears is the path to the dark side.
console.log(string.replace(/a/gi, 'A'));//FeAr is the pAth to the dArk side.
```

12) Get a specific character from a string using **_charAt()_**. A string representing the character (exactly one UTF-16 code unit) at the specified index is returned; an empty string if the index is out of range!

```javascript
const string = 'Fear leads to anger';
console.log(string.charAt(1));//e
console.log(string.charAt(string.length - 1));//r
console.log(string.charAt(string.length));//'' Index out of range!
//Attention: if no index is provided the default one is 0;
console.log(string.charAt());//F
```

13) Get the UTF-16 code of the letter at the given index in a string with **_charCodeAt()_**. This method is very useful with algorithms like ROT13 or the Caesar cipher. If no index is provided the default one is 0.

```javascript
const string = 'We must keep our faith in the Republic.';
console.log(string.charCodeAt(0));//87
console.log(string.charCodeAt(5));//115
//If you want all the UTF-16 values of every letter in a string,
//split the string to have an array of letters,
//then map the array and change every letter to its UTF-16 value with charCodeAt();
const utfValuesArr = string.split('').map(letter => letter.charCodeAt());
console.log(utfValuesArr);
//[87, 101, 32, 109, 117, 115, 116, 32, 107, 101, 101, 112, 32, 111, 117, 114, 32, 102, 97, 105, 116, 104, 32, 105, 110, 32, 116, 104, 101, 32, 82, 101, 112, 117, 98, 108, 105, 99, 46]
```

14) Get a string created from the specified sequence of UTF-16 code units with the static **_String.fromCharCode()_** method.

```javascript
console.log(String.fromCharCode(65));//A
console.log(String.fromCharCode(105, 106, 107));//ijk
console.log(String.fromCharCode(32));//'' empty space!
const arr = [77, 97, 121, 32, 116, 104, 101, 32, 70, 111, 114, 99, 101, 32, 66, 101, 32, 87, 105, 116, 104, 32, 89, 111, 117];
const quote = arr.map(n => String.fromCharCode(n));
console.log(quote.join('')); //May the Force Be With You
```

15) Get a section of a string, returned in a new string, without modifying the original one, with **_slice()_**. It takes two parameters: beginIndex, or where to start slicing the string, and the optional endIndex, where to stop slicing it. If no endIndex is provided it will slice the string to the end. Attention: a negative index counts backwards from the end of the string.

```javascript
const string = 'I’m just a simple man trying to make my way in the universe.';
console.log(string.slice(1));//’m just a simple man trying to make my way in the universe.
console.log(string.slice(0,10));//I’m just a
console.log(string.slice(-3));//se.
```

16) Get the part of the string between the start and end indexes, or to the end of the string, with **_substring()_**. Attention: any argument value that is less than 0 or greater than stringName.length is treated as if it were 0 and stringName.length respectively. Any argument value that is NaN is treated as if it were 0.

```javascript
const string = 'Darth Vader';
console.log(string.substring(0));//Darth Vader
console.log(string.substring(6));//Vader
console.log(string.substring(1,6));//'arth '
```

17) Remove whitespace from both ends of a string with **_trim()_**.

```javascript
const string = '     Yoda     ';
console.log(string.trim());//Yoda
```

This isn't meant to be an exhaustive list of all JavaScript string methods, but a list of the ones I found to be the most important when it comes to solving problems and algorithms. To get better at JS and problem solving I suggest "playing" a lot with all these methods and subscribing to [FreeCodeCamp](https://www.freecodecamp.org/) or [Codewars](https://www.codewars.com/), where you can find a lot of algorithms to work with and brush up your JavaScript knowledge. On [Codewars](https://www.codewars.com/) you can look for 7kyu or 6kyu algorithms about "strings" and train with them. It will be fun! I will update this article with new information and some algorithms on strings, based on the reactions and comments.

**_Code Long And Prosper_**
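Since charCodeAt() and String.fromCharCode() came up in connection with ROT13, here is a hedged sketch of that cipher combining split(), map(), charCodeAt() and fromCharCode() (rot13 is my own helper name, not a built-in):

```javascript
// ROT13: shift every letter 13 places in the alphabet, leave everything else alone.
function rot13(str) {
  return str
    .split('')
    .map(ch => {
      const code = ch.charCodeAt(0);
      if (code >= 65 && code <= 90) {  // A-Z
        return String.fromCharCode(((code - 65 + 13) % 26) + 65);
      }
      if (code >= 97 && code <= 122) { // a-z
        return String.fromCharCode(((code - 97 + 13) % 26) + 97);
      }
      return ch;                       // digits, spaces, punctuation untouched
    })
    .join('');
}

rot13('May the Force be with you'); // 'Znl gur Sbepr or jvgu lbh'
```

A nice property to test yourself on: applying rot13 twice gives back the original string, since 13 + 13 = 26 wraps the whole alphabet.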
uptheirons78
133,021
Scripting Docker Commands With Spinup.sh
In Day 3, I included a blurb from my DevOps-y friends about the natural progression of abstractions o...
1,373
2019-07-05T17:51:08
https://blog.henryneeds.coffee/scripting-docker-commands-with-spinup-sh/
devops, docker, productivity, bash
In Day 3, I included a blurb from my DevOps-y friends about the natural progression of abstractions on top of containers:

> You usually start with docker run CLI commands and graduate to tools with more layers of abstraction as you need them. Docker-compose comes next, followed by automating several commands with Bash scripts, which is eventually followed by Kubernetes.

I also shared with you a better way to handle switches in Bash scripts. Today I'll show you how I moved from running my own Docker commands to running off of one shell script with a handful of flags.

---

For the first few months of learning how to use Docker, and then how to utilize it in projects, I was running A LOT of individual Docker CLI commands every day. I then moved on to having Docker Compose manage things for me. However, even that started to be a burden for some edge cases I was dealing with. After thinking back on my conversations with other friends in the DevOps field, I remembered they all told me that you'll hit a certain point of wanting to automate your workflows... so I started diving into Bash scripts. I knew I would want my script to do a few different things:

- Spin up all of my container infrastructure
  - One version with explicit commands for each piece of infrastructure (dev)
  - One version running off docker-compose (prod)
- Tear down all of my container infrastructure (teardown)

First thing I did was to make sure my `docker-compose.yaml` file was built out. That would be my source of truth - if running `docker-compose up -d` made everything work correctly, then the rest of this script would be based on what was written in that file.

```yaml
version: "3.7"

services:
  app:
    container_name: hquinn-app
    image: "hquinn_app:latest"
    networks:
      - hquinn-net
    ports:
      - "8080:8080"
    restart: always
    volumes:
      - type: volume
        source: hquinn_app_home
        target: /var/www/html

networks:
  hquinn-net:

volumes:
  hquinn_app_home:
```

> I felt that I should point out that this is just an example project.
> `hquinn-app` isn't a real image, container, or project. Use this as a template to plug in your own information. It's a good training exercise!

Running `docker-compose up -d` seems to work with this yaml configuration. Good! Now we can move on to creating our Bash script. We're going to call this `spinup.sh`. Let's start by setting up the improved switch statement that we learned about yesterday. We'll include four flags (help, dev, prod, teardown) as well as a catchall for errors.

```bash
#!/bin/bash

while getopts ":hdpt" opt; do
  case ${opt} in
    h )
      printf "USAGE: ./spinup.sh [OPTION]... \n\n"
      printf "-h for HELP, -d for DEV, -p for PROD, or -t for TEARDOWN \n\n"
      exit 1
      ;;
    d )
      exit 1
      ;;
    p )
      exit 1
      ;;
    t )
      exit 1
      ;;
    \? )
      printf "Invalid option: %s\n" "$OPTARG" 1>&2
      exit 1
      ;;
  esac
done
shift $((OPTIND -1))

printf "USAGE: ./spinup.sh [OPTION]... \n\n"
printf "-h for HELP, -d for DEV, -p for PROD, or -t for TEARDOWN \n\n"
exit 1
```

Solid. This switch is going to make it really easy to just plug in the commands we want to run for each flag. Production will probably be the easiest since we'll be leaning on the `docker-compose.yaml` file we already built out. Let's fill those commands into the `p )` case:

```bash
p )
  # Rebuild image
  docker-compose build
  # Spin up container
  docker-compose up -d
  exit 1
  ;;
```

As you can see, we're really just having this bash script run the same commands we would run ourselves to start up our containers, volumes, and networks. We're just splitting up the different jobs into different flags so we can utilize the same script to accomplish a number of different tasks. Our dev case [`d )`] isn't going to be much different.
We're just manually creating a network and running one long `docker run` command to start up our container:

```bash
d )
  # Rebuild image hquinn_app
  docker-compose build --no-cache
  # Create hquinn-net bridge network for container(s) to communicate
  docker network create --driver bridge hquinn-net
  # Spin up hquinn-app container
  docker run -d --name hquinn-app --restart always -p 8080:80 -v hquinn_app_home:/var/www/html --network hquinn-net hquinn_app:latest
  exit 1
  ;;
```

> Henry, what's the actual difference between your dev and prod builds here?

Great question, reader! This is part of the fun (pain?) of initially learning about containers. There are a lot of different ways of dealing with the same tasks and you learn best practices as you go. When I initially wrote this script, I was working on that ColdFusion, Informix, and MySQL project. Due to the way it was initially built before it was handed to me, we needed to run different sets of commands to spin it up depending on whether we were running it locally for development or running it in production for actual use by judges. As I dug deeper into Docker, I had all kinds of sources telling me what should have been obvious:

> One of the main tenets of containers is that your code should run the same everywhere. It's the same containers, just running on different engines.

That's to say that I should be running the same commands to run the same containers everywhere. Since I wasn't, I was still falling prey to the whole `but, it worked on MY machine` gotcha. Since then I've trimmed this script down a bit. I still like having the longer commands in my `d )` case, though. It allows me to quickly test changes to the way I stand up my infrastructure, which I can then solidify in the `docker-compose.yaml` files that I run in production environments. This is another tenet of containers: we can treat our infrastructure as code.
Once our `docker-compose.yaml` is fine-tuned to our liking, we can check it into version control and know that it's safe for all time.

Now, the `t )` case is meant to tear down all of our infrastructure: kill containers, and remove containers, images, volumes, and networks. That way we can get a clean slate to spin up and test out new changes we made to our infrastructure. We're going to accomplish this with a number of `if/then` blocks:

```bash
# If hquinn-app container is running, turn it off.
running_app_container=`docker ps | grep hquinn-app | wc -l`
if [ $running_app_container -gt "0" ]
then
  docker kill hquinn-app
fi
```

For this particular block, we're setting a variable named `running_app_container` to the output of `docker ps | grep hquinn-app | wc -l`, which means that if the container `hquinn-app` is up and running, `running_app_container` is set to the number of matching lines returned by that command. The `if/then` block then checks to make sure the controlling variable is greater than 0. If true, it runs the command `docker kill hquinn-app` to kill the container. We'll use a series of these blocks to manage our containers, images, volumes, and networks. Let's see the entire `spinup.sh` script, with all of the parts plugged in:

```bash
#!/bin/bash

while getopts ":hdpt" opt; do
  case ${opt} in
    h )
      printf "USAGE: ./spinup.sh [OPTION]... \n\n"
      printf "-h for HELP, -d for DEV, -p for PROD, or -t for TEARDOWN \n\n"
      exit 1
      ;;
    d )
      # Rebuild image hquinn_app
      docker-compose build --no-cache
      # Create hquinn-net bridge network for container(s) to communicate
      docker network create --driver bridge hquinn-net
      # Spin up hquinn-app container
      docker run -d --name hquinn-app --restart always -p 8080:80 -v hquinn_app_home:/var/www/html --network hquinn-net hquinn_app:latest
      exit 1
      ;;
    p )
      # Rebuild image
      docker-compose build
      # Spin up container
      docker-compose up -d
      exit 1
      ;;
    t )
      # If hquinn-app container is running, turn it off.
      running_app_container=`docker ps | grep hquinn-app | wc -l`
      if [ $running_app_container -gt "0" ]
      then
        docker kill hquinn-app
      fi

      # If turned off hquinn-app container exists, remove it.
      existing_app_container=`docker ps -a | grep hquinn-app | grep Exit | wc -l`
      if [ $existing_app_container -gt "0" ]
      then
        docker rm hquinn-app
      fi

      # If image for hquinn_app exists, remove it.
      existing_app_image=`docker images | grep hquinn_app | wc -l`
      if [ $existing_app_image -gt "0" ]
      then
        docker rmi hquinn_app
      fi

      # If hquinn_app_home volume exists, remove it.
      existing_app_volume=`docker volume ls | grep hquinn_app_home | wc -l`
      if [ $existing_app_volume -gt "0" ]
      then
        docker volume rm hquinn_app_home
      fi

      # If hquinn-net network exists, remove it.
      existing_hquinnnet_network=`docker network ls | grep hquinn-net | wc -l`
      if [ $existing_hquinnnet_network -gt "0" ]
      then
        docker network rm hquinn-net
      fi
      exit 1
      ;;
    \? )
      printf "Invalid option: %s\n" "$OPTARG" 1>&2
      exit 1
      ;;
  esac
done
shift $((OPTIND -1))

printf "USAGE: ./spinup.sh [OPTION]... \n\n"
printf "-h for HELP, -d for DEV, -p for PROD, or -t for TEARDOWN \n\n"
exit 1
```

All in all, this is looking pretty tight. You can add more commands in if you need anything more complicated. You can add more flags to handle more edge cases, too. This script (and a handful of others like it) really helped me through the last six months of my job with the courts. However, with the projects I'm working on now, the number of these scripts I would need to remain productive is going to be a burden to maintain. We need more power and more control over what we're doing. Hence, my deep dive into Kubernetes. I haven't forgotten about it. I'm starting to dig into the books that I bought. As far as Kubernetes The Hard Way goes, [Christian Corbin](https://dev.to/christiancorbin) pointed out that the tutorial might be out of date.
To that end, I think I'm going to drop K8s The Hard Way and just focus on the books that I bought and the [Kubernetes.io](https://kubernetes.io/docs/tutorials/) when I need some hands-on practice. DevOps is all about iterating on and improving processes. Happy to change things here as better opportunities come up! --- It's a holiday weekend and I'm headed to Maine. Time to `spinDOWN.sh`. /rimshot I'll try to write some more while I'm on vacation, though you might not hear from me until next week. Stay frosty. [https://henryneeds.coffee](https://henryneeds.coffee) [Blog](https://henryneeds.coffee/blog) [LinkedIn](https://linkedin.com/in/henryquinniv) [Twitter](https://twitter.com/quinncuatro)
quinncuatro
133,177
POP's cloud-based security services
As part of our general security automation toolkit we use a number of third-party cloud-based...
0
2019-07-11T06:32:23
https://dev.to/wegotpop/pop-s-cloud-based-security-services-43ej
security, devops, automation
---
title: POP's cloud-based security services
published: true
description:
tags: security, devops, automation
---

As part of our general security automation toolkit we use a number of third-party cloud-based services that allow us to take advantage of expertise and specialisation in those partners. The services we use are:

* Probely
* Ghost Inspector
* Buildkite
* Sentry

## Probely

[Probely](https://probely.com/) is a security scanning service that offers a variety of different scans. Lightning scans check endpoints for basic issues and we run these against production and staging environments before the work day starts and just before it ends. We use the OWASP extension to the more in-depth scan and we run that several times a week. This scan is more akin to a manual penetration test (and there is a human element to the process so it isn't fully automated) and when we are asked for the result of our last penetration test I now supply the most recent PDF export of the Probely report. So far this has kept auditors and researchers happy.

When we first ran Probely it discovered a problem in our AWS and application setup and we felt it was important to get a clean report, so we resolved the issue, and generally we try to resolve any issues it raises within two weeks of their discovery. Recently though we found that very few Probely customers have zero issues in their scans and clients have been equally suspicious. If I had known that then I might have kept in a low-priority issue.

Having corrected various HTTP proxy issues, the most pernicious issues that Probely now finds for us are XSS-style issues. This has changed some of the ways we use our features now.

## Ghost Inspector

We started using [Ghost Inspector](https://ghostinspector.com/) as a way of improving our integration testing and removing some of the headless tests that happen in our test runner system.
It has been such a life saver in terms of catching regressions that it would be terrifying not to have it or an equivalent system. It's rare now that we ship a regression all the way to production that we have to revert. The vast majority are all caught in the testing process.

Ghost Inspector allows you to capture test scenarios via a browser plugin, but the resulting CSS selectors tend to be very specific and brittle. The system comes into its own if you actually program the steps yourself. Test steps can be abstracted into modules and reshared. The system has a pretty good screenshot comparison tool, a complete history of runs and failures (again providing a great audit trail), and videos are available of what happened during the test run. It also provides a fake email system, which is something we thought we were going to have to build ourselves.

Ghost Inspector has changed the way we've approached some of the ways we now develop the system. We have screens that are specifically designed to expose information to Ghost Inspector for verification. We've massively improved the structure of our DOM to be more machine readable. We also have some classic hot points in our code, like primary keys used as page parameters, where it would be too hard to switch to opaque identifiers, so instead we have regression tests that try to access different pages by direct access to confirm that the access control system is working as intended. This is the only way, barring a big rewrite, that we could guarantee this potential security hole is not in fact a problem.

We've also seen that a lot of our clients use manual testing processes when checking our platform for vulnerabilities. Often they share their process when they do discover bugs and we've been able to turn those into Ghost Inspector tests. We've been able to build a suite of tests based on the different expertise of the various companies we've worked with.
It is now like we have a little team of virtual security researchers on our side!

## Buildkite

[Buildkite](https://buildkite.com/) is the glue that sticks all our automation together. While advertised as a continuous integration (CI) platform, it is actually a general kind of task automation system and is useful for any kind of scheduled or triggered task. Buildkite does manage all our CI builds but it also handles deployments, refreshes of our pre-production environment and other tasks. Within its pipelines it will deploy software and then use Ghost Inspector to verify the results of the deployment.

Before we switched to Buildkite we were using Jenkins. This is a venerable tool but there were two major issues for us with trying to run our own CI service. Firstly, the truth is that running Jenkins effectively is hard. It relies on filesystem access; to get auditable controls you need to run plugins which need to be updated and maintained; the pipeline functionality is a late addition to the system rather than a core element. Secondly, after attending a few secops conferences it was clear that Jenkins was one of the top targets people look for when hunting for bug bounties (the number one is, unsurprisingly, Wordpress). This means we had a high-profile target that we didn't have confidence in our ability to secure. By not using Jenkins, a number of drive-by hackers will just pass you by and move on to other organisations that are using it. We needed to make a change.

One of the key differences between Buildkite and CI systems like CircleCI is that you bring your own agents to Buildkite. We can run jobs in our own AWS accounts and assign our own security policies to the agents. This means we can choose to completely isolate one cluster of agents without affecting the more privileged permissions of clusters that do deployments to ECS and need a lot of permissions. We do also use CircleCI when what we want is pure CI or lightweight automation and the contents of the repo are low risk.
## Sentry [Sentry](https://sentry.io/) is primarily an exception reporting and aggregation service. However because it also records who in your team has looked at an exception it means it can be used to confirm that you make periodic reviews of the errors that occur on your system. You should probably use an error aggregation service anyway but it is worth looking at how you can use them to also provide a way of delivering security objectives, for example explicitly ignoring common but irrelevant errors and looking for unexpected issues.
rrees
133,264
my friend site contact form is not working
Hello everyone i am new to this website. i have try almost each and everything which available at go...
0
2019-07-06T06:39:29
https://dev.to/samsuna/my-friend-site-contact-form-is-not-working-1eo8
wordpress
Hello everyone, I am new to this website. I have tried almost everything available on Google, but my contact form is not sending data to my email. Here is my friend's site: http://www.fridaypic.com/contact/
samsuna
133,462
Video: Disagreements are learning opportunities 📹
When people passionately debate something, I'm immediately curious and interested. Some of my best learning moments have come out of observing an intense debate.
0
2019-07-06T15:09:22
https://dev.to/conw_y/video-disagreements-are-learning-opportunities-2g3l
learning, debate, arguments, disagreements
--- title: Video: Disagreements are learning opportunities 📹 published: true description: When people passionately debate something, I'm immediately curious and interested. Some of my best learning moments have come out of observing an intense debate. tags: learning, debate, arguments, disagreements cover_image: https://thepracticaldev.s3.amazonaws.com/i/v5ga9d5cugyk1gvix0fz.png --- When people passionately debate something, I'm immediately curious and interested. Some of my best learning moments have come out of observing an intense debate.
conw_y
133,542
Container with the Most Water - Code Challenge
Leetcode Problem 11 This problem is pretty straight forward. Given an array of heights, find the tw...
1,220
2019-07-06T21:33:30
https://dev.to/rygelxvi/container-with-the-most-water-code-challenge-34ff
ruby, javascript, beginners, challenge
[Leetcode Problem 11](https://leetcode.com/problems/container-with-most-water/)

This problem is pretty straightforward. Given an array of heights, find the two indices that could contain the most water between them. Generally there are two ways of solving this: the brute force method and the two pointer method.

### Brute Force

This method will calculate every possible combination to determine the answer. This requires a nested loop, resulting in a complexity of O(n^2).

Javascript
```
var maxArea = function(height) {
    let mostWater = 0
    for (let l = 0; l < height.length - 1; l++) {
        for (let r = height.length - 1; r > l; r--) {
            mostWater = Math.max(mostWater, (Math.min(height[l], height[r])*(r-l)))
        }
    }
    return mostWater
}
```

A better way of doing this is to utilize two pointers.

### Two Pointer Method

For this method we utilize two pointers that are placed on opposite ends of the array to iterate through it once, resulting in a time complexity of O(n). The area between the current left (l) and right (r) indices is calculated, and if it is larger than the current max, it is set as the max. The pointer at the smaller height is then moved inward (the left one when they are equal, in this implementation; which pointer is checked in the if statement is arbitrary).

Javascript
```
var maxArea = function(height) {
    let max = 0
    let l = 0
    let r = height.length - 1
    while (l < r) {
        if (height[l] > height[r]) {
            max = Math.max(height[r] * (r-l), max)
            r--
        } else {
            max = Math.max(height[l] * (r-l), max)
            l++
        }
    }
    return max
};
```

In Ruby...

```
def max_area(height)
    max_area = 0
    l_idx = 0
    r_idx = height.length - 1
    while (r_idx > l_idx)
        if height[r_idx] >= height[l_idx]
            max_area = [(height[l_idx] * (r_idx - l_idx)), max_area].max
            l_idx += 1
        else
            max_area = [(height[r_idx] * (r_idx - l_idx)), max_area].max
            r_idx -= 1
        end
    end
    return max_area
end
```

For more readability you can always separate out the max_area (ruby) or max (javascript) into multiple lines.
Javascript
```
if (height[l] > height[r]) {
    maybeMax = height[r] * (r-l)
    r--
} else {
    maybeMax = height[l] * (r-l)
    l++
}
max = Math.max(maybeMax, max)
```

Ruby
```
if height[r_idx] >= height[l_idx]
    maybe_max = height[l_idx] * (r_idx - l_idx)
    l_idx += 1
else
    maybe_max = height[r_idx] * (r_idx - l_idx)
    r_idx -= 1
end
max_area = maybe_max if maybe_max > max_area
```

Decided to mix up the syntax on the last line for variety since the Ruby and JS solutions look so similar.
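To sanity-check either version, here's the two-pointer JavaScript solution run against Leetcode's sample input, where the best container is formed by the walls of height 8 (index 1) and 7 (index 8), giving min(8, 7) * 7 = 49:

```javascript
// Two-pointer solution from above, with a quick check on Leetcode's sample input
var maxArea = function(height) {
    let max = 0
    let l = 0
    let r = height.length - 1
    while (l < r) {
        if (height[l] > height[r]) {
            max = Math.max(height[r] * (r-l), max)
            r--
        } else {
            max = Math.max(height[l] * (r-l), max)
            l++
        }
    }
    return max
};

console.log(maxArea([1, 8, 6, 2, 5, 4, 8, 3, 7])) // 49
console.log(maxArea([1, 1]))                      // 1
```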
rygelxvi
133,582
Try Reinforcement Learning with Donkey Car
1 create virtual env I'm using pyenv $ python -m virtualenv py37 --python=python3.7...
0
2019-07-10T01:21:57
https://dev.to/0xkoji/try-reinforcement-learning-with-donkey-car-5e4a
machinelearning, python
### 1 create virtual env

I'm using pyenv

```zsh
$ python -m virtualenv py37 --python=python3.7
```

Activate py37 and install packages

```zsh
$ pip install python-socketio flask eventlet pygame numpy pillow h5py scikit-image opencv-python gym
```

I created a `dCar` folder for the following repos

### 2 Clone donkeycar

```zsh
$ mkdir dCar
$ cd dCar
$ git clone https://github.com/wroscoe/donkey donkeycar
$ pip install -e donkeycar
```

### 3 Clone self-driving sandbox

```zsh
$ git clone https://github.com/tawnkramer/sdsandbox.git
$ cd sdsandbox
$ pip install -r requirements.txt
```

### 4 Clone donkey_gym

```zsh
$ git clone https://github.com/tawnkramer/donkey_gym
$ pip install -e donkey_gym
```

Almost there. We need one more thing for running the simulator. Download the binary from https://github.com/tawnkramer/donkey_gym/releases

Then, store it into `Applications`

### 5 Run simulator

In this case, we will run the simulator from the `dCar` directory

```zsh
$ python donkey_gym/examples/reinforcement_learning/ddqn.py --sim=/Applications/donkey_sim.app/Contents/MacOS/donkey_sim
```

Then you will see the simulator like below

![](https://thepracticaldev.s3.amazonaws.com/i/rra6fjfctbabgcbuo3dj.png)

Hit `Play!` to start learning.

```zsh
/Users/koji.kanao/Documents/py37/lib/python3.7/site-packages/skimage/viewer/__init__.py:6: UserWarning: Viewer requires Qt
  warn('Viewer requires Qt')
WARNING: Logging before flag parsing goes to stderr.
W0709 21:17:37.700056 4583351744 deprecation_wrapper.py:119] From ddqn.py:26: The name tf.keras.initializers.normal is deprecated. Please use tf.compat.v1.keras.initializers.normal instead.
W0709 21:17:37.701951 4583351744 deprecation_wrapper.py:119] From ddqn.py:218: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.
W0709 21:17:37.702128 4583351744 deprecation_wrapper.py:119] From ddqn.py:220: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.
2019-07-09 21:17:37.712400: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA W0709 21:17:37.712966 4583351744 deprecation_wrapper.py:119] From ddqn.py:221: The name tf.keras.backend.set_session is deprecated. Please use tf.compat.v1.keras.backend.set_session instead. starting DonkeyGym env donkey subprocess started binding to ('0.0.0.0', 9091) waiting for sim to start.. 2019-07-09 21:17:37.883 donkey_sim[69097:904450] Could not find image named 'ScreenSelector'. waiting for sim to start.. waiting for sim to start.. waiting for sim to start.. waiting for sim to start.. waiting for sim to start.. 2019-07-09 21:17:54.684 donkey_sim[69097:904450] Color LCD preferred device: AMD Radeon Pro 560 (high power) 2019-07-09 21:17:54.684 donkey_sim[69097:904450] Metal devices available: 2 2019-07-09 21:17:54.684 donkey_sim[69097:904450] 0: Intel(R) HD Graphics 630 (low power) 2019-07-09 21:17:54.684 donkey_sim[69097:904450] 1: AMD Radeon Pro 560 (high power) 2019-07-09 21:17:54.684 donkey_sim[69097:904450] Using device AMD Radeon Pro 560 (high power) waiting for sim to start.. got a new client ('127.0.0.1', 58233) SceneSelectionReady connection dropped waiting for sim to start.. got a new client ('127.0.0.1', 58234) W0709 21:18:01.760221 4583351744 deprecation.py:506] From /Users/koji.kanao/Documents/py37/lib/python3.7/site-packages/tensorflow/python/ops/init_ops.py:1251: calling VarianceScaling.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version. 
Instructions for updating: Call initializer instance with the dtype argument instead of passing it to the constructor Episode: 0 EPISODE 0 TIMESTEP 30 / ACTION [0.70797646, 0.3] / REWARD 0.8670342954954999 / EPISODE LENGTH 30 / Q_MAX 0 fps 14.5105030403251 EPISODE 0 TIMESTEP 60 / ACTION [-0.5889623, 0.3] / REWARD -0.4946022760490001 / EPISODE LENGTH 60 / Q_MAX 0 EPISODE 0 TIMESTEP 90 / ACTION [0.40704942, 0.3] / REWARD -1.9541489999891999 / EPISODE LENGTH 90 / Q_MAX 0 fps 20.018557585621934 episode: 0 memory length: 107 epsilon: 0.9895139999999955 episode length: 107 Episode: 1 episode: 1 memory length: 108 epsilon: 0.9894159999999954 episode length: 1 Episode: 2 EPISODE 2 TIMESTEP 120 / ACTION [0.884501, 0.3] / REWARD 0.772067625056 / EPISODE LENGTH 12 / Q_MAX 26.828781 episode: 2 memory length: 147 epsilon: 0.9855939999999938 episode length: 39 Episode: 3 episode: 3 memory length: 148 epsilon: 0.9854959999999937 episode length: 1 Episode: 4 EPISODE 4 TIMESTEP 150 / ACTION [0.49025267, 0.3] / REWARD 0.86992308065572 / EPISODE LENGTH 2 / Q_MAX 26.773907 EPISODE 4 TIMESTEP 180 / ACTION [0.06281003, 0.3] / REWARD -1.0 / EPISODE LENGTH 32 / Q_MAX 24.477978 episode: 4 memory length: 180 epsilon: 0.9823599999999924 episode length: 32 Episode: 5 EPISODE 5 TIMESTEP 210 / ACTION [-0.56007874, 0.3] / REWARD 4.7462194558588 / EPISODE LENGTH 30 / Q_MAX 26.608269 episode: 5 memory length: 211 epsilon: 0.979321999999991 episode length: 31 Episode: 6 episode: 6 memory length: 212 epsilon: 0.979223999999991 episode length: 1 Episode: 7 EPISODE 7 TIMESTEP 240 / ACTION [0.34383777, 0.3] / REWARD 1.068961473664648 / EPISODE LENGTH 28 / Q_MAX 26.291294 episode: 7 memory length: 251 epsilon: 0.9754019999999893 episode length: 39 Episode: 8 episode: 8 memory length: 252 epsilon: 0.9753039999999893 episode length: 1 Episode: 9 EPISODE 9 TIMESTEP 270 / ACTION [0.60506356, 0.3] / REWARD -0.38350943820499994 / EPISODE LENGTH 18 / Q_MAX 27.504139 episode: 9 memory length: 294 epsilon: 
0.9711879999999875 episode length: 42 Episode: 10 episode: 10 memory length: 295 epsilon: 0.9710899999999875 episode length: 1 Episode: 11 EPISODE 11 TIMESTEP 300 / ACTION [-0.45778775, 0.3] / REWARD 1.26087653748968 / EPISODE LENGTH 5 / Q_MAX 34.701992 ```
0xkoji
133,684
React Navigation - A Light Overview
Introduction One of the most important tasks while building a react native app that need...
0
2019-07-08T12:52:08
https://dev.to/kpose/react-navigation-a-light-overview-fc3
react, reactnative, reactnavigation
---
title: React Navigation - A Light Overview
published: true
description:
tags: react, reactnative, reactnavigation
---

### **Introduction**

![React Navigation](https://thepracticaldev.s3.amazonaws.com/i/ku1w0hylc6xpyxpflyrl.png)

One of the most important tasks while building a react native app that needs some navigation is selecting the perfect navigation library for your project. React Navigation is a standalone library that allows a developer to implement this functionality easily. At the end of the tutorial, you should have a pretty good knowledge of the various navigators from React Navigation and how to implement them.

### **Project Setup**

Assuming that you have Node 10+ installed, you can use npm to install the Expo CLI command line utility:

`npm install -g expo-cli`

Then run the following commands to create a new React Native project called "NavOptions":

`expo init NavOptions`

`cd NavOptions`

`npm start # you can also use: expo start`

This will start a development server for you. The next step is to install the react-navigation library in your React Native project:

`yarn add react-navigation`

We will be exploring three Navigation options:

- Stack Navigation
- Tab Navigation
- Drawer Navigation

### **Using Stack Navigator**

First let's create a new folder, **components**, in our root directory. After that, create two files, _HomeScreen.js_ and _AboutScreen.js_.
Our project folder should look like the image below:

![](https://thepracticaldev.s3.amazonaws.com/i/g65j9eswgjbrqbz39y0j.png)

Add the block of code below to _HomeScreen.js_

```javascript
//With ES7 syntax, you could type 'rcn' to bootstrap a react native component skeleton
import React, { Component } from 'react'
import { Text, View, Button, StyleSheet } from 'react-native'
import { createStackNavigator, createAppContainer } from 'react-navigation';

export default class Homescreen extends Component {
  render() {
    return (
      <View style={styles.container}>
        <Text> Welcome To Home Screen </Text>
        <Button
          title = "Go to About Page"
          onPress={() => this.props.navigation.navigate('About')}
        />
      </View>
    )
  }
}

const styles = StyleSheet.create({
  container: {
    flex : 1,
    alignItems: 'center',
    justifyContent: 'center'
  },
});
```

```javascript
//AboutScreen.js
import React, { Component } from 'react'
import { Text, View, Button, StyleSheet } from 'react-native'
import { createStackNavigator, createAppContainer } from 'react-navigation';

export default class Aboutscreen extends Component {
  render() {
    return (
      <View style = {styles.container}>
        <Text> This is the About Screen. </Text>
      </View>
    )
  }
}

const styles = StyleSheet.create({
  container: {
    flex : 1,
    alignItems: 'center',
    justifyContent: 'center'
  },
});
```

Now, let's also make some changes to _App.js_. We'll import what we need from _react-navigation_ and implement our navigation there. It is useful to implement our navigation in the root __App.js__ file because the component exported from __App.js__ is the entry point (or root component) for a React Native app, and every other component is a descendant. As you will see, we will encapsulate every other component inside the navigation functions.
```javascript
//App.js
import React from 'react';
import { StyleSheet, Text, View } from 'react-native';
import { createStackNavigator, createAppContainer } from "react-navigation";
import HomeScreen from './components/HomeScreen';
import AboutScreen from './components/AboutScreen';

export default function App() {
  return (
    <AppContainer />
  );
}

const AppNavigator = createStackNavigator({
  Home : { screen : HomeScreen },
  About: { screen: AboutScreen }
});

const AppContainer = createAppContainer(AppNavigator);

const styles = StyleSheet.create({
  container: {
    flex: 1,
    backgroundColor: '#fff',
    alignItems: 'center',
    justifyContent: 'center',
  },
});
```

_createStackNavigator_ provides a way for our app to transition between screens, where each new screen is placed on top of a stack. It is configured to have the familiar iOS and Android look and feel: new screens slide in from the right on iOS and fade in from the bottom on Android.

Above we passed a route configuration object to the _createStackNavigator_ function. The __Home__ route corresponds to the _HomeScreen_, and the **About** route corresponds to _AboutScreen_. If we wanted to indicate which is the initial route (the first screen to be shown), we can add a separate object:

```javascript
//App.js
const AppNavigator = createStackNavigator({
  Home: { screen: HomeScreen },
  About: { screen: AboutScreen }
},{
  initialRouteName: "Home"
});
```

>Note that the _Home_ and _About_ route name-value pairs are enclosed by an overall route object.

To run our app, you'll need to download the Expo client app, make sure your command line is pointed to the project folder, and your computer and phone are connected to the same network, then run the following command:

`npm start`

### **Using Tab Navigator**

One of the most common styles of navigation in mobile apps is tab-based navigation. This can be tabs on the bottom of the screen or on the top below the header (or even instead of a header).
Here we will focus on how to implement tab navigation using _createBottomTabNavigator_. Let's add another screen in our app by creating a **ProductScreen.js** file under _/components_. Add the following to your ProductScreen.js

```javascript
//ProductScreen.js
import React, { Component } from 'react'
import { Text, View, StyleSheet } from 'react-native'

export default class ProductScreen extends Component {
  render() {
    return (
      <View style = {styles.container}>
        <Text> Welcome to Product's page </Text>
      </View>
    )
  }
}

const styles = StyleSheet.create({
  container: {
    flex : 1,
    alignItems: 'center',
    justifyContent: 'center'
  },
});
```

Next, we will import our _ProductScreen_ into _App.js_. Also, we will implement our Tab Navigation by importing _createBottomTabNavigator_. Replace _createStackNavigator_ with _createBottomTabNavigator_ in the _AppNavigator_ object. Our _App.js_ should be looking like this now:

```javascript
//App.js
import React from 'react';
import { StyleSheet, Text, View } from 'react-native';
import { createBottomTabNavigator, createAppContainer } from "react-navigation";
import HomeScreen from './components/HomeScreen';
import AboutScreen from './components/AboutScreen';
import ProductScreen from './components/ProductScreen';

export default function App() {
  return (
    <AppContainer />
  );
}

const AppNavigator = createBottomTabNavigator({
  Home : { screen : HomeScreen },
  About: { screen: AboutScreen },
  Product: { screen: ProductScreen }
}, {
  initialRouteName: "Home"
});

const AppContainer = createAppContainer(AppNavigator);

const styles = StyleSheet.create({
  container: {
    flex: 1,
    backgroundColor: '#fff',
    alignItems: 'center',
    justifyContent: 'center',
  },
});
```

If we run our app, we should see our new navigator tabs.
### **Drawer Navigation**

Like we did while implementing Tab Navigation, we will replace _createBottomTabNavigator_ in our _App.js_ with _createDrawerNavigator_, but first we will import the Navigator:

`import { createDrawerNavigator, createAppContainer } from "react-navigation";`

Then update our _AppNavigator_ variable:

```javascript
const AppNavigator = createDrawerNavigator({
  Home: { screen: HomeScreen },
  About: { screen: AboutScreen },
  Product: { screen: ProductScreen }
}, {
  initialRouteName: "Home"
});
```

We can also decide to add icons beside the route names. To do this, I added a few images to our assets folder, then added _navigationOptions_ to the different screens/routes.

![](https://thepracticaldev.s3.amazonaws.com/i/un03y56okv2dlpo4f3ct.png)

Make the following changes to our _HomeScreen.js:_

```javascript
//With ES7 syntax, you could type 'rcn' to bootstrap a react native component skeleton
import React, { Component } from 'react'
import { Text, View, Button, Image, StyleSheet } from 'react-native'
import { createStackNavigator, createAppContainer } from 'react-navigation';

export default class Homescreen extends Component {
  static navigationOptions = {
    drawerLabel: 'Home',
    drawerIcon: ({tintColor}) => (
      <Image
        source = {require('../assets/home-icon.png')}
        style= {[styles.icon, {tintColor: tintColor}]}
      />
    )
  }

  render() {
    return (
      <View style={styles.container}>
        <Text> Welcome To Home Screen </Text>
        <Button
          title = "Go to About Page"
          onPress={() => this.props.navigation.navigate('About')}
        />
      </View>
    )
  }
}

const styles = StyleSheet.create({
  container: {
    flex : 1,
    alignItems: 'center',
    justifyContent: 'center'
  },
  icon: {
    width: 24,
    height: 24,
  }
});
```

Make the same changes to our _AboutScreen.js_ and _ProductScreen.js_; *make sure* to use the appropriate icon directory path. The _tintColor_ prop lets us apply any color based on active or inactive states of navigation tabs and labels.
For example, we can change the active state color for our nav drawer labels. Go to the _AppNavigator_ variable and add to the options object:

```javascript
const AppNavigator = createDrawerNavigator({
  Home: { screen: HomeScreen },
  About: { screen: AboutScreen },
  Product: { screen: ProductScreen }
}, {
  initialRouteName: "Home",
  contentOptions: {
    activeTintColor: '#136207'
  }
});
```

### **Conclusion**

I hope you were able to learn a few things from this article; you can as well leave some claps and spread some love. Next, we will be building a full application and will be centering on exploring React Navigation to the fullest. You can also check out the [final code](https://github.com/kpose/NavOptions) on my github repo.
kpose
133,706
Publish a modern JavaScript (or TypeScript) library
Did you ever write some library code together and then wanted to publish it as an NPM package but rea...
1,422
2019-07-07T11:47:10
https://tobias-barth.net/blog/Publish-a-modern-JavaScript-or-TypeScript-library/
javascript, typescript, library, howto
--- title: Publish a modern JavaScript (or TypeScript) library series: Publish a modern JavaScript (or TypeScript) library canonical_url: https://tobias-barth.net/blog/Publish-a-modern-JavaScript-or-TypeScript-library/ published: true description: cover_image: https://thepracticaldev.s3.amazonaws.com/i/k9yi2eiqfetm2iodn7dn.jpg tags: - javascript - typescript - library - howto --- Did you ever write some library code together and then wanted to publish it as an NPM package but realized you have no idea what is the technique du jour to do so? Did you ever wonder "Should I use Webpack or Rollup?", "What about ES modules?", "What about any other package format, actually?", "How to publish Types along with the compiled code?" and so on? Perfect! You have found the right place. In this series of articles I will try to answer every one of these questions. With example configurations for most of the possible combinations of these tools and wishes. ### Technology base This is the set of tools and their respective version range for which this tutorial is tested: - ES2018 - Webpack >= 4 - Babel >= 7.4 - TypeScript >= 3 - Rollup >= 1 - React >= 16.8 ( code aimed at other libraries like Vue or Angular should work the same ) Some or even most of that what follows could be applied to older versions of these tools, too. But I will not guarantee or test it. ### Creation The first thing to do before publishing a library is obviously to write one. Let's say we have already done that. In fact, it's [this one](https://github.com/4nduril/library-starter/tree/init). It consists of several source files and therefore, modules. We have provided our desired functionality, used our favorite, modern JavaScript (or TypeScript) features and crafted it with our beloved editor settings. What now? What do we want to achieve in this tutorial? 1. Transpile modern language features so that every browser in one of the last 2 versions can understand our code. 1. 
Avoid duplicating compile-stage helpers to keep the library as small as possible. 1. Ensure code quality with linting and tests. 1. Bundle our modules into one consumable, installable file. 1. Provide ES modules to make the library tree-shakable. 1. Provide typings if we wrote our library in TypeScript. 1. Improve collaborating with other developers (from our team or, if it is an open source library, from the public). Wow. That's a whole lot of things. Let's see if we can make it. Note that some of these steps can be done with different tools or maybe differ depending on the code being written in TypeScript or JavaScript. We'll cover all of that. Well, probably not all of that, but I will try to cover the most common combinations. The chapters of this series will not only show configurations I think you should use, but also will I explain the reasoning behind it and how it works. If you aren't interested in these backgrounds, there will be a link right at the top of each post down to the configurations and steps to execute without much around. ### Go! We will start with the first points on our list above. As new articles arrive, I will add them here as links and I will also try to keep the finished articles updated when the tools they use get new features or change APIs. If you find something that's not true anymore, please give me a hint. 1. [Transpile modern language features – With Babel](https://dev.to/4nduril/transpile-modern-language-features-with-babel-4fcp). 2. [Compiling modern language features with the TypeScript compiler](https://dev.to/4nduril/compiling-modern-language-features-with-the-typescript-compiler-36m3). 3. [Building your library: Part 1](https://dev.to/4nduril/building-your-library-part-1-5cii) 4. [Check types and emit type declarations](https://dev.to/4nduril/check-types-and-emit-type-declarations-1i29) 5. [How to bundle your library and why](https://dev.to/4nduril/how-to-bundle-your-library-and-why-1gao) 6. 
[Bundling your library with Webpack](https://dev.to/4nduril/bundling-your-library-with-webpack-12ig) Oh and one last thing™: I'll be using `npm` throughout the series because I like it. If you like `yarn` better, just exchange the commands.
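Goals 4–6 above (bundling, ES modules, typings) usually end up reflected in the published package's `package.json`. A minimal sketch of the relevant fields — the file names here are illustrative, not taken from the series:

```json
{
  "name": "my-library",
  "main": "dist/index.cjs.js",
  "module": "dist/index.esm.js",
  "types": "dist/index.d.ts",
  "sideEffects": false
}
```

`main` points Node and older bundlers at the CommonJS build, `module` exposes the ES-module build so bundlers can tree-shake it, `types` ships the declarations for TypeScript consumers, and `sideEffects: false` tells Webpack it is safe to drop unused modules.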
4nduril
133,930
6 Months at Microsoft
I started off 2019 with my 2018 summary where I announced that I had left Readify and joined...
0
2019-07-08T06:03:27
https://www.aaron-powell.com/posts/2019-07-08-6-months-at-microsoft/
career, microsoft, devrel
--- title: 6 Months at Microsoft published: true tags: career,microsoft,meta,devrel canonical_url: https://www.aaron-powell.com/posts/2019-07-08-6-months-at-microsoft/ --- I started off 2019 with my [2018 summary](https://dev.to/aaronpowell/2018---a-year-in-review-mi0-temp-slug-5290213) where I announced that I had left Readify and [joined Microsoft](https://dev.to/aaronpowell/starting-2019-with-a-new-job-2cgm-temp-slug-6541646). Well, today is 6 months since I started and I wanted to take some time to reflect on my first 6 months at Microsoft and the first 6 months in a Developer Relations role, aka DevRel. ## So, like, what _do_ you do for a job? This is probably the most common question I get asked; well, I've _always_ been asked it, it's just been a bit harder to define lately. It's especially tricky when talking to someone outside of the tech industry because "DevRel" doesn't make sense to people who aren't in tech and even then it's no guarantee! 🤣 6 months ago I really didn't have a clue what my job would entail and part of that is because everyone's approach to the job is different, some people travel the world presenting at conferences, some people live stream on twitch, some people do podcasts, some people write. So, what **do** I do? **I produce content.** This is how I look at my job; my job is to produce content. Now, this takes different forms for different people but for me, I do a lot of my content production in the form of blogs. But blogging isn't the end of content production; once I've written a blog I might extract a talk out of it to submit to some events, turn it into official documentation, propose it to user groups, present it internally or appear on a pod/vod-cast. Because of this, I spend a lot more time writing code than I have for a very long time; the bulk of my days is spent writing code. After all, if I didn't write code then I'd have nothing to produce content on! 
In fact, I've created 30 posts already this year which is the most blogging I've done since 2013 (34 posts) but still shy of my busiest year of 2010 (with 80 posts, but I'm not sure if they are date-tagged correctly from the many years of rebuilding my site). I really enjoy writing, I've been blogging for over a decade now and the fact that it's my job to do it makes me happy. I don't do much in the way of conferences, I don't really subscribe to that style of DevRel. Sure, I do conferences, but spending every other week on an aircraft is exhausting, so I'm more selective about the travel I do. ## Learning new things As I'm always looking for new ideas for content and to engage with different audiences I've been able to spend a lot of time picking up new technologies. I did my first bit of [Golang](https://www.aaron-powell.com/tags/golang), learnt [WebAssembly](https://www.aaron-powell.com/tags/webassembly) and built an [IoT project in F#](https://www.aaron-powell.com/tags/iot) (which I still have more content to come!). ## Being at Microsoft Microsoft is a fascinating organisation to work for as there's simply nothing else on the same scale to compare it to. Having come from a large Microsoft partner I had some thoughts on what it'd be like to work with Microsoft tools, but in reality, it's nothing like that (I've joked that Readify is more Microsoft than Microsoft). And being in a completely distributed team is very much a change for me. I'm still quite in the mindset of "I go to the office to work", but since my office is my home office, I get up of a morning and my main task for the day is done! Quite often I'll head into the Microsoft Reactor to work rather than working at home, partially because when my kids aren't in daycare it's a bit distracting being home with them and partially to get me out of the house. 
But because we're distributed we do everything online; we chat through Slack for pretty much everything, we organise video calls at a random time to inconvenience different groups of people each time (there's no single time that works for **everyone**!) and ever so occasionally an email is sent. This is a double-edged sword though as I found myself early on checking Slack on Saturdays (overlap with the US Friday) for example. This isn't the best, you need to disconnect from work a bit, and over the last few months, I've got better at the weekend being non-work time (my laptop rarely gets touched between Friday evening and Monday morning). ## It's fun Honestly, I'm having so much fun, probably the most fun I've had at work for a while now. I don't say that intending to throw shade at Readify or anything, but I'd forgotten what it was like to be writing code all the time, building experiments, that sort of stuff. Here's to another 6 months. Oh, and most importantly... **I survived my first reorg!** 🤣
aaronpowell
134,213
Pros of Using Django for Web Development
0
2019-07-08T16:44:36
https://djangostars.com/blog/top-14-pros-using-django-web-development/
django, python, webdev, programminglanguages
--- title: Pros of Using Django for Web Development cover_image: https://djangostars.com/blog/uploads/2019/01/Cover-1-11-.png published: true description: tags: django, python, web development, programming languages canonical_url: https://djangostars.com/blog/top-14-pros-using-django-web-development/ --- **Django is one of the [top frameworks for web development](https://hackernoon.com/7-best-web-development-backend-frameworks-in-2018-22a5e276cdd), but why is it so popular among developers and business owners? Let’s review the reasons why so many applications and features are being developed with Django.** **1. Django is simple** Django’s documentation is exemplary. It was initially launched with high-quality docs, and they are still maintained at the same level, which makes it easy to use. More than that, one of Django’s main purposes is to simplify the development process: it covers the basics, so you can focus on the more unique and/or complex features of your project. **2. Django works on Python** The framework is based on Python — a high-level, dynamic, and interpreted programming language, well-loved by developers. Although it’s hard to find a language that can cover most programming tasks and problems, Python is a great choice for many of them. It’s one of the [most popular languages of 2018](https://www.economist.com/graphic-detail/2018/07/26/python-is-becoming-the-worlds-most-popular-coding-language), competing with C/C++ and Java. ![IMG_1-5.png](https://ucarecdn.com/dc214a80-0d85-4f0b-a3f5-d3c3c385cf31/) Python is: - Portable. Your code can be ported to many platforms, from PC and Linux to PlayStation. - Multi-paradigm. It supports object-oriented programming, which is a simple way to code, as well as imperative programming. - More interactive than most other languages. It resembles a pseudo-code language and helps you focus on solving a task, rather than on syntax. 
[Python web application development with Django](https://djangostars.com/services/python-django-development/) requires less code and less effort. Also, Python has extensive libraries, which make it easy to learn or switch to this language from another one. Customers like Python since it usually takes less time to write the code and, thus, less money to complete the technical part of a project. **3. Django has many useful features and extras to simplify development** Django has adopted Python’s “batteries included” approach — the framework has everything necessary to develop a fully fledged application out of the box. ![IMG_2-3.png](https://ucarecdn.com/1955b26f-4dbc-4216-9c96-0324738365ac/) You don’t need to spend hours customizing it to build a simple application or a prototype since all of the essentials are already available. But if you need additional features for a more complex app, there are well over 4,000 packages for Django to cover profiling, testing, and debugging. The framework also has tool packages for working with cutting-edge technology such as data analysis, AI, and machine learning. They are easy to set up and use in your project. Plus, they’re great if you’re [using Django for FinTech](https://djangostars.com/industries/fintech/) or other math-heavy industries. **4. Django is time-effective** Is Django development good for MVPs and prototypes? Yes, thanks to multiple features that make it time- and cost-effective. Let’s sum them up: - There’s a flexible, well-structured admin panel, better than Laravel or Yii’s, for example. - It allows you to reuse code from current or other projects (there is also a library of reusable apps, tools, and features). - It has great templates and forms; they were even copied by other projects. - It has many out-of-the-box libraries and tools that allow you to assemble a good prototype in record time. **5. 
Django suits any kind of project** Django is not an enterprise solution like C# or Java, yet it suits most types of projects, no matter their size. For example, if you’re building a social media type web application, Django can handle the growth at any scale and capacity, be it heavy traffic or volumes of information. But if you want to make something simple, using Django for web development of a blog or a book database, for instance, is an excellent choice as well since it has everything you need to quickly assemble a working application. In addition to that, Django is: - Cross-platform. You can create applications that will run on Windows, as well as on Mac or Linux. - Compatible with most major databases. You can use one or several different databases in one project thanks to Django’s ORM, and switch between the databases with only one line of code. ![IMG_3-3.png](https://ucarecdn.com/9f9bcd46-e609-48d7-ac46-736f54ae194f/) **6. Django is DRY and KISS compliant** Django follows the DRY (Don’t Repeat Yourself) principle, which means you can replace frequently repeated software patterns with abstractions, or use data normalization. This way, you avoid redundancy and bugs. Plus, reusing the code simplifies development so you can focus on coding unique features. KISS means “Keep It Short and Simple”, among its many variations. In Django, it means simple, easy to read, and understandable code. For example, methods shouldn’t be longer than 40-50 lines. **7. Django is secure and up-to-date** Django is always kept up to a high standard, following the latest trends in website security and development. That definitely answers the question “Is Django good for web development?” — as security is a priority in any project. Django is updated regularly with security patches, and even if you’re using an older version of the framework, its security is still maintained with new patches. It’s no wonder since Django has an LTS (Long-term Support) version. **8. 
Django is backward-compatible** You can use the interface of Django’s older versions, and most of its features and formats. In addition, it has an understandable roadmap and descriptions — the release notes contain all the information you need to know about changes and, more importantly, when new changes become incompatible with previous releases. **9. Django is compatible with DevOps** You can also enhance your project using the DevOps methodology, which aims to shorten lifecycles while maintaining business objectives. It’s especially good if you’re using Django for banking web applications since they are quite complex. It’s great because you can: - Solve problems faster with improved operational support. - Use the continuous delivery approach (an app is produced in short cycles to ensure it is reliable enough to be released at any time). - Increase the productivity of your team through collaborative working. **Read More**: [What is DevOps and Why You Should Have It](https://djangostars.com/blog/what-is-devops/) ![IMG_4-2.png](https://ucarecdn.com/6529cc2d-619e-458e-8adf-a67fe73274f6/) **10. Django has its own infrastructure** Django doesn’t depend on any outside solutions. It has pretty much everything, from a web server and a templating engine to an Object Relational Mapper (ORM), which allows the framework to use different databases and switch between them within one project. Plus, Django has libraries and tools for building forms to receive input from users. That’s important for any website that’s supposed to do more than just publish content. **11. Django has a REST framework for building APIs** The benefits of using Django for web development also include its Representational State Transfer (REST) framework — a popular toolkit for building web APIs. Django’s REST is powerful enough to build a ready-to-use API in just three lines of code. 
One of its key advantages is that it’s extremely flexible: data is not tied to any methods or resources, so REST can return different data formats and handle multiple types of calls. As a result, it can meet the requirements of different customers. ![IMG_5-1.png](https://ucarecdn.com/cd2ffc04-904e-41a9-8e20-ec3f3945ae2d/) **12. Django is time-tested** The Django framework has been around for more than a decade, and during that time, it has become the choice of many companies for creating their web applications. A few of the famous examples are: - Instagram; - Spotify; - NASA; - Disqus. **13. Django has a big, supportive, and professional community** Advantages of Django also include its big, professional community. It’s quite easy to find good developers who know Django inside out and have experience coding with it. That’s a good testament to the framework’s popularity – but it also means that: - You can find help or, at least, the right direction in solving harder programming cases; - The Django community is quick to respond to bugs and fix them; - As an open source framework, Django is constantly improving – by means of new libraries, for example. **14. It’s easy to find Django developers to hire** A huge advantage of the large Django community is that it’s easy to find good developers for your team. Moreover, you can extend an existing team, since all [Django developers](https://djangostars.com/) use the same documentation, code pretty much the same way, and can easily read each other’s code. ## Bottom Line The numerous advantages of web development using Python and Django framework can be summarized in three short phrases: less effort, less time, and less money. You can use Django to start a small, simple project, and continue using it when the project grows, ensuring its high quality, functionality, and security. You can also use it to test an idea and save a lot of money if you find the project won’t be worth investing in. 
On the other hand, Django allows you to build a complex web application that can handle heavy traffic and huge volumes of information. It also has numerous packages with additional tools to power cutting-edge technology such as data analysis and machine learning. Django could be the best fit for your next business idea regardless of what type of software project it is. This article about [pros of using Django for web development](https://djangostars.com/blog/top-14-pros-using-django-web-development/) is originally posted on Django Stars blog.
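The database-flexibility point above (switching databases "with only one line of code", section 5) boils down to Django's `DATABASES` setting. A minimal sketch — database name and credentials here are illustrative:

```python
# settings.py sketch — switching databases is essentially a matter of
# changing the ENGINE line; names and credentials here are illustrative.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',  # or .sqlite3 / .mysql / .oracle
        'NAME': 'mydb',
        'USER': 'myuser',
        'PASSWORD': 'secret',
        'HOST': 'localhost',
        'PORT': '5432',
    }
}
```

Because the ORM sits on top of this setting, application code stays the same regardless of which backend is configured.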
djangostars
134,281
My desktop setup - Part 3: The VS Code
I already discussed at length about my OS look and my terminal configuration. Now it is time to talk...
1,336
2019-07-08T18:18:39
https://manoel.tech/03-my-desktop-setup-3/
I already discussed at length about my [OS look](/01-my-desktop-setup) and my [terminal configuration](/02-my-desktop-setup-2). Now it is time to talk about my main daily companion, the VS Code editor. --- When I first started to learn how to do web pages, I used the Notepad from Windows to edit HTML code (good old times!). When I first saw Macromedia Dreamweaver (now an Adobe product) and had the chance to work with syntax highlighting, auto-completion, file management and a lot of snippets, my then 15-year-old mind stood in awe. Fast forward almost two decades[^1]. I've spent this time going in and out of the dev world. Tried full-fledged IDEs (Eclipse, NetBeans), light-weight editors (Sublime), sheer minimalism (Vim). But I believe Microsoft's Visual Studio Code has hit a sweet spot for me. <img src="https://thepracticaldev.s3.amazonaws.com/i/f3u9u4lv7xo99wpxt6n1.png" alt="The Visual Studio Code editor with the Night Owl theme"> ## Themes The first theme which captured my attention was Wes Bos' Cobalt2. It strikes a nice balance between shades of blue and yellow, pretty comfortable for continuous usage. Recently, I have also been using Sarah Drasner's Night Owl. Besides being absolutely gorgeous, it also provides a light version, considerably useful for when the room is extremely sunbathed. ## Fonts I love ligatures. They allow me to reduce the cognitive load: I look at a single symbol instead of a sequence of characters which must be read sequentially to grasp a special meaning[^2]. Being someone who is quite financially spartan, I searched for good free alternatives. My main contenders are Space Mono and Fira Code. It is worth selecting the Nerd Fonts patched versions, which include extra glyphs to decorate the command line and to use in your daily tasks. The fonts are available at the [Nerd Fonts site](https://www.nerdfonts.com) or at the [GitHub repo](https://github.com/ryanoasis/nerd-fonts/tree/master/patched-fonts/). 
Another alternative, quite new actually, is the [Victor Mono](https://rubjo.github.io/victor-mono/) font. It provides a set of cursive-like italics, which may add some more flair to your code, if it is to your liking. It can be downloaded for free, but if you mean to use it, a donation to its creator is a kind gesture for those who can afford it. ## Extensions Some extensions are useful if you work with a specific tool or language, but these are some wonderful ones that are quasi-content-agnostic: - [Bracket Pair Colorizer 2](https://marketplace.visualstudio.com/items?itemName=CoenraadS.bracket-pair-colorizer-2): for when you don't know what is inside of what; - [Color Highlight](https://marketplace.visualstudio.com/items?itemName=naumovs.color-highlight): See if #068910 is the green you are thinking of - directly in your code, as you type; - [Git History](https://marketplace.visualstudio.com/items?itemName=donjayamanne.githistory), [Git Indicators](https://marketplace.visualstudio.com/items?itemName=lamartire.git-indicators) & [GitLens](https://marketplace.visualstudio.com/items?itemName=eamodio.gitlens): if you use git, you can cover your bases with these; - [Polacode](https://marketplace.visualstudio.com/items?itemName=pnp.polacode): When the code is so beautiful that it must be shared with the world in a perfectly arranged frame; - [Settings Sync](https://marketplace.visualstudio.com/items?itemName=Shan.code-settings-sync): keep your doubts away from which setting to choose in each of your different computers - by making them the same; - [WakaTime](https://marketplace.visualstudio.com/items?itemName=WakaTime.vscode-wakatime): log where your time is going in your projects. --- And that's all folks! Thanks for reading! [^1]: "Time, Time, Time / See what's become of me" [^2]: e.g. A "fat arrow" instead of an "equal followed by a greater than"
manoeltlobo
134,298
Never use array_merge in a loop in PHP
array_merge in loop is a performance killer.
0
2019-07-08T19:13:09
https://dev.to/klnjmm/never-use-arraymerge-in-a-for-loop-in-php-5go1
performance, php, loop
--- title: Never use array_merge in a loop in PHP published: true description: array_merge in loop is a performance killer. tags: performance, php, loop --- I often see people using the `array_merge` function in a `for`/`foreach`/`while` loop 😱 like this: ```php $arraysToMerge = [ [1, 2], [2, 3], [5, 8] ]; $arraysMerged = []; foreach($arraysToMerge as $array) { $arraysMerged = array_merge($arraysMerged, $array); } ``` It's a very bad practice because it's a performance killer (especially in memory): on every iteration, `array_merge` copies the whole accumulated array into a brand-new one, so the loop does a quadratic amount of copying overall. Since PHP 5.6, there is a better way: the spread operator (argument unpacking) ```php $arraysToMerge = [ [1, 2], [2, 3], [5, 8] ]; $arraysMerged = array_merge([], ...$arraysToMerge); ``` The leading `[]` also guards against an empty `$arraysToMerge`, since `array_merge` required at least one argument before PHP 7.4. * No more performance problem * BONUS: no more `for`/`foreach`/`while` loop * BONUS: process in *one line* Now look at your code base to find code that you can improve 👩‍💻👨‍💻! --- ## Thank you for reading, and let's stay in touch! If you liked this article, please share. You can also find me on [Twitter/X](https://twitter.com/enjimmyklein) for more PHP tips.
klnjmm
134,753
Run Puppeteer/Chrome Headless on EC2 Amazon Linux AMI
— augmented on June 2019 This article is based closely on MockingBot’s article of the same title. Th...
0
2019-07-09T16:08:22
https://dev.to/kerion7/run-puppeteer-chrome-headless-on-ec2-amazon-linux-ami-2ae8
headless, puppeteer, aws, tutorial
— augmented in June 2019 This article is based closely on MockingBot’s [article of the same title](https://medium.com/mockingbot/run-puppeteer-chrome-headless-on-ec2-amazon-linux-ami-6c9c6a17bee6). The original article helped a great deal, but some parts were outdated due to new libraries’ and packages’ versions. Hence, I’m jotting down what needs to be augmented from the original article here, for my own reference and for others who encounter the same situation too! When you have reached this part of the article: > Now only 6 left, as Amazon Linux don’t have gtk builtin, we need borrow packages from other distributions: Pay heed that some of the packages no longer exist. ![](https://thepracticaldev.s3.amazonaws.com/i/0kaktzh7jni9zztct7e7.png) ## Tip 💡: Go to the URL (up till the last trailing slash) For example, for: http://mirror.centos.org/centos/7/os/x86_64/Packages/atk-2.22.0-3.el7.x86_64.rpm Navigate to: http://mirror.centos.org/centos/7/os/x86_64/Packages/ And search for **atk-** : ![Perhaps by the time you read this article, the version has increased again!](https://cdn-images-1.medium.com/max/1600/1*iW1gnS_tQRgLr5DXhPLnLw.png) Hence, the nearest match would be **atk-2.28.1–1.el7.x86_64.rpm** . Go through the list of items to install with this technique, and you should be almost there! *Almost.* ![Unlike what’s mentioned, there are still missing deps after installing the list of packages!](https://cdn-images-1.medium.com/max/1600/1*FFkWmu90BQoC7CdHbvUQEg.png) Alas, there are still some missing dependencies being listed when you run `ldd chrome | grep not` . 
After googling around, [this seals the deal](https://github.com/GoogleChrome/puppeteer/issues/391#issuecomment-325420271): ![The number of emojis speak volumes about this comment!](https://cdn-images-1.medium.com/max/1600/1*7lxwDWIzaLh4iKp-r3Y_Bg.png) ![After the long yum install, one last dep!](https://cdn-images-1.medium.com/max/1600/1*rXgvxL5km7vXiHg_6JFODQ.png) After running the long `yum install` command, you will be left with one last dependency: **libpng12.so.0** . Same thing, look for the **x86_64** version of the libpng12 package in the Centos packages listing page and download it. ![](https://cdn-images-1.medium.com/max/1600/1*ZXuKZhTpdcDZv42a8MXRJQ.png) And finally, all the dependencies have been assembled. ![](https://media1.tenor.com/images/f83bcd6317336c7c262fdb76d314e006/tenor.gif?itemid=14098175) ## Ending note Hopefully the tips mentioned above will/have helped in your attempt to install Puppeteer on the AMI! If there are any parts that are unclear here, and it is also unclear in the original post, do give a shoutout. Similarly, if you have tried and noticed my tips are outdated, do let me know too.
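As a small helper for the `ldd` step above, you can extract just the missing library names so each one can be looked up on the CentOS mirror. A sketch — the `ldd` output below is sample data; in practice you would capture the real output with `ldd chrome > ldd.txt` first:

```shell
# Sample linker output for illustration; replace with the real
# `ldd chrome` result saved via: ldd chrome > ldd.txt
cat > ldd.txt <<'EOF'
  libpng12.so.0 => not found
  libatk-1.0.so.0 => not found
  libc.so.6 => /lib64/libc.so.6 (0x00007f6e2f000000)
EOF

# Print only the unresolved library names, one per line, de-duplicated.
awk '/not found/ {print $1}' ldd.txt | sort -u
```

Each printed name is a search term for the Packages listing page, the same way **atk-** was searched for above.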
kerion7
134,901
New "How to build an App" series by Tom Scott
Everything you didn't know you needed to know
0
2019-07-10T01:01:52
https://dev.to/turnerj/new-how-to-build-an-app-series-by-tom-scott-39c3
beginners, app, business
--- title: New "How to build an App" series by Tom Scott published: true description: Everything you didn't know you needed to know tags: beginner, app, business --- Tom Scott is a [popular YouTuber](https://www.youtube.com/user/enyay) who (amongst many other things) made [Emojli](https://en.wikipedia.org/wiki/Emojli) - [an instant messaging app where you could only use emoji to communicate](https://www.youtube.com/watch?v=GsyhGHUEt-k). His new series titled "How to build an App: Everything you didn't know you needed to know" is 15 videos where he talks about everything from finding out if your idea is any good, making money from your app to surviving on the app store. Each video goes for about 10-15 minutes. Here's the trailer for [the series that is all available right now, for free, on his channel](https://www.youtube.com/playlist?list=PL96C35uN7xGJu6skU4TBYrIWxggkZBrF5). {% youtube tO8aJ-TUtJY %} Personally while I'm not planning on building an app, there are definitely a few videos I'm wanting to check out which can apply to other businesses and projects in general. If you're still struggling to come up with an idea for an app but still want to build something, check out my previous article ["So you want to launch a product? First you need an idea."](https://dev.to/turnerj/so-you-want-to-launch-a-product-first-you-need-an-idea-5ib).
turnerj
134,927
Next-level visualizations with ExploreTrees.SG
Last year, I wrote about one of my most ambitious side project ever, ExploreTrees.SG. It was simply b...
0
2019-07-12T00:42:28
https://cheeaun.com/blog/2019/07/next-level-visualizations-exploretrees-sg/
visualizations, trees, singapore
--- title: Next-level visualizations with ExploreTrees.SG published: true tags: visualizations, trees, singapore canonical_url: https://cheeaun.com/blog/2019/07/next-level-visualizations-exploretrees-sg/ --- Last year, I [wrote about one of my most ambitious side projects ever](https://cheeaun.com/blog/2018/04/building-exploretrees-sg/), [ExploreTrees.SG](https://exploretrees.sg/). It was simply **breath-taking**. ![Tree families legend on layers panel, on ExploreTrees.SG](https://cheeaun.com/blog/images/screenshots/web/tree-families-legend-layers-panel-exploretrees-sg@2x.png) Revisiting the masterpiece --- I haven’t touched it since then. March this year, I *suddenly* had an itch to update the dataset and see if anything’s changed. I ran the script to fetch the latest data and [got this](https://twitter.com/cheeaun/status/1108010887984472069): ![Terminal showing a script generating trees data](https://cheeaun.com/blog/images/screenshots/software/terminal-script-generating-trees-data@2x.jpg) It’s a script that scrapes data from [National Parks Board](https://www.nparks.gov.sg/)’s [Trees.sg](http://trees.sg/), showing the total count of trees and species, and generates a GeoJSON file. From the count, I can compare the previous year’s number of trees to this year’s. March last year, **564,266** trees. March this year, **564,678** trees. It increased by a few hundred! Looking back previously, I attempted to render **all** trees on the map *in* the web browser, but failed due to exceedingly large file size and slow performance. I ended up uploading the data to [Mapbox Studio](https://www.mapbox.com/mapbox-studio/) as a [vector tileset](https://docs.mapbox.com/help/glossary/vector-tiles/), to be served back on the map. It’s not a pure client-side solution, but a back-end supported one, which makes it no different from Trees.sg *except* it’s faster 🤷‍♂️. Ultimately, I still want to achieve this pure client-side solution because I *love* to push the limits 😉. 
A year has passed, technologies have improved, right? I took a hard look at `trees-everything.geojson` which contains *all* trees data. It’s **197.6 MB** in size, which is *insane* for any web site. I came up with two ideas: 1. Give up on the GeoJSON format. Embrace normal JSON, remove all the keys and only store values. Convert, for example `{"id": "123", "height": 200}` into `["123", 200]`. Keys will be hardcoded somewhere else in the code. 2. Group *all* coordinates of trees into an array, technically like a line, and convert into [Encoded Polyline Algorithm Format](https://developers.google.com/maps/documentation/utilities/polylinealgorithm). It’s a lossy compression algorithm that allows you to store a series of coordinates as a single string. It’s lossy, with a precision of 5 decimal places, roughly [1 m in distance near the equator](https://en.wikipedia.org/wiki/Decimal_degrees#Precision). For example, `[[1.27612,103.84744], [1.28333,103.85945]]` will be encoded into a shorter string: `wfxFouyxRal@ajA`. Here’s the code: ```js const data = JSON.parse(fs.readFileSync('data/trees-everything.geojson')); const props = data.features.map(f => Object.values(f.properties).map(v => v === null ? '' : v)); const points = data.features.map(f => f.geometry.coordinates); const line = polyline.encode(points); const finalData = { props, line }; ``` One is the `props` variable storing all the values and one more `line` variable for the encoded polyline string. The final result is a **37.7 MB** JSON file. That’s **5.2× smaller** than the GeoJSON file! 😱 After I compress the file with [gzip](https://en.wikipedia.org/wiki/Gzip), it becomes [**5.7 MB**](https://twitter.com/cheeaun/status/1108020569251827712), **almost 35× smaller**! 😱😱 ![macOS Finder window showing trees data files, in GeoJSON, JSON and Gzip formats](https://cheeaun.com/blog/images/screenshots/software/finder-trees-data-files@2x.jpg) I think 5.7 MB is not bad. 
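For completeness, the reverse step — turning the compact `{ props, line }` payload back into GeoJSON-like features — can be sketched like this. The key names below are examples, not the real schema, and `decodeLine` stands in for a polyline decoder such as the one in the `polyline` package:

```javascript
// Rebuild GeoJSON-like features from the compact { props, line } payload.
// KEYS mirrors whatever key order the encoder used; these names are examples.
const KEYS = ['id', 'height'];

function rebuildFeatures({ props, line }, decodeLine) {
  const coords = decodeLine(line); // e.g. polyline.decode(line)
  return props.map((values, i) => ({
    type: 'Feature',
    geometry: { type: 'Point', coordinates: coords[i] },
    properties: Object.fromEntries(KEYS.map((k, j) => [k, values[j]])),
  }));
}
```

The important invariant is that `props[i]` and the i-th decoded coordinate belong to the same tree, which holds because both arrays were built from the same `data.features` in the same order.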
According to this [2017 article on SpeedCurve](https://speedcurve.com/blog/web-performance-page-bloat/), the average web page size was 3 MB and it predicted that by 2019, it will be 4 MB. > According to the HTTP Archive, **almost 16% of pages today – in other words, about 1 out of 6 pages – are 4 MB or greater in size**. That “today” was in 2017. Compared to those sites that load 4 MB of *junk* plus few bytes of real content, I’m building an *awesome* site with 5 MB of useful data plus few hundred kilobytes of map tiles and JavaScript files. Obviously at this point, I’m trying really hard to justify my actions 😅. I’m very excited that I manage to squeeze the bytes out of the dataset, but it’s still not small enough to be *lower* than the average web page size, so I try to deceive myself that I can do this for a good reason 😂. As I look at the dataset, I noticed a few changes. I investigated and found that: - Some flowering trees are gone, which I have no idea what happened until now. - The API URL has changed from `https://imaven.nparks.gov.sg/arcgis/rest/services/maven/PTMap/FeatureServer/2/query` to `https://imaven.nparks.gov.sg/arcgis/rest/services/maven/Hashing_UAT/FeatureServer/0/query`. The returned response has changed too. The most significant one is the **tree girth** data. It used to be precise measurements like 1.1 meters, but it became ranges like `0.0 - 0.5` and `> 1.5`. I don’t know the exact reason but I guess tree girths are pretty complicated stuff. While re-fetching the data from scratch, I decided to restructure the grids. The fetching works by constructing a list of grids that will be passed as boundaries for the API calls. In other words, every box is equivalent to one API request. The previous one looks like this: ![Square grids around Singapore boundary](https://cheeaun.com/blog/images/screenshots/web/square-grid-singapore-boundary.png) 60 boxes, 60 API requests. 
The right and bottom side are sort of *neglected*, and luckily there are no trees data in those areas. The new grid looks like this: ![The new square grids around Singapore boundary](https://cheeaun.com/blog/images/screenshots/web/new-square-grid-singapore-boundary@2x.jpg) 630 boxes, 630 API requests. Higher coverage over the whole Singapore *but* not the further southern parts. Fortunately, most trees are covered, as in the trees in other areas are not recorded by National Parks *yet*. So, I got **all** the new data. I plotted them on the map with [Deck.gl](https://deck.gl/#/), a large-scale WebGL-powered data visualization library by Uber. It has better performance when it comes to large quantities of map features as I’ve tried in [my previous side project](https://cheeaun.com/blog/2019/02/building-busrouter-sg/). It was… **slow**. 😰 Nope, it’s not the fault of the library or Mapbox. It’s Chrome. My Chrome desktop browser was taking a *long* time decoding the JSON response. Turns out JSON parsing is pretty darn slow for super large files. It’s also a synchronous blocking operation. It took **more than 10 seconds** on my Macbook Air (2019). 😰 As I start to switch from my home-grown build scripts to [Parcel](https://parceljs.org/), even the Parcel build step fails with “Allocation failed - JavaScript heap out of memory” errors. I guess it tries to read the JSON file, probably doing something to it and Node keeps running out of memory 😅. I *could* fix the build step but let’s not get sidetracked here. Maybe I need a faster `JSON.parse`. Maybe I could run it in a [web worker](https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Using_web_workers), but it could be even slower due to the huge payload in message passing. I had a different idea in mind, that is to use a *different* data format. Mapbox uses [Protocol buffers](https://developers.google.com/protocol-buffers/) for the map tiles. 
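Before moving on to data formats — the grid construction described above can be sketched as a function that splits a bounding box into columns × rows of smaller boxes, each of which becomes one bounded API request. The bounds and counts in the test below are illustrative, not the real values:

```javascript
// Split a [minLng, minLat, maxLng, maxLat] bounding box into a grid of
// smaller boxes; each box becomes one bounded API request.
function makeGrid([minLng, minLat, maxLng, maxLat], cols, rows) {
  const w = (maxLng - minLng) / cols;
  const h = (maxLat - minLat) / rows;
  const boxes = [];
  for (let x = 0; x < cols; x++) {
    for (let y = 0; y < rows; y++) {
      boxes.push([
        minLng + x * w,
        minLat + y * h,
        minLng + (x + 1) * w,
        minLat + (y + 1) * h,
      ]);
    }
  }
  return boxes;
}
```

Increasing `cols` and `rows` is how the coverage went from 60 boxes to 630: more, smaller boxes mean more requests but a finer fit around the boundary.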
Deck.gl supports [some form of binary data](https://deck.gl/#/documentation/developer-guide/performance-optimization?section=on-using-binary-data) with an upcoming [RFC](https://github.com/uber/deck.gl/blob/master/dev-docs/RFCs/v7.x-binary/binary-data-rfc.md), which looks… quite complicated to me. I ended up using [MessagePack](https://msgpack.org/) because:

1. Protocol buffers need types (`double`, `int64`, etc.) and it’s quite troublesome to do a quick conversion from JSON. I’ve tried converting the GeoJSON file using [`Geobuf`](https://github.com/mapbox/geobuf) and the file size still seems bigger than MessagePack’s (with my combo compression ideas mentioned above).
2. Deck.gl’s binary data thing doesn’t seem to be stable *yet* and needs manual manipulation of the data.
3. MessagePack just works™.

There are two libraries available that can perform encoding and decoding of the MessagePack format:

- [msgpack-lite](https://www.npmjs.com/package/msgpack-lite) (listed on the official site)
- [@ygoe/msgpack](https://www.npmjs.com/package/@ygoe/msgpack)

I tried both and chose the latter because it’s [smaller in bundle size, according to Bundlephobia](https://bundlephobia.com/scan-results?packages=msgpack-lite,@ygoe/msgpack). I’m not choosing based on performance because I’ve confirmed that both are [way faster](https://twitter.com/cheeaun/status/1109818670711078912) than `JSON.parse`. Instead of more than 10 seconds, the data is decoded in **about 3 seconds** or less. The file size is smaller too, at around **30 MB**, but after gzipping it becomes the same as the gzipped JSON file at roughly **5 MB**. Too bad, I was hoping it’d be even smaller 😅.

In the previous dataset, I noticed a few discrepancies, such as two or more trees, with different IDs, located at the *exact* same coordinates. This time, I tried to clean that up and remove the duplicates, partially in the hope of reducing the file size.
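The duplicate check I had in mind is nothing fancy: key each tree by its exact coordinate pair and keep only the first occurrence. A sketch (the `{ id, position: [lng, lat] }` record shape is an assumption):

```js
// Remove trees that sit at the exact same coordinates, keeping the
// first one seen. Assumes records shaped like { id, position: [lng, lat] }.
function dedupeByCoords(trees) {
  const seen = new Set();
  return trees.filter((tree) => {
    const key = tree.position.join(',');
    if (seen.has(key)) return false; // exact duplicate, drop it
    seen.add(key);
    return true;
  });
}
```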
It might sound silly, but I’m actually performing a *strict* comparison of *exact* coordinates. I’m not kidding: there are a few trees with *exactly* the same coordinates, down to every single decimal place. Imagine two trees with the coordinates of 103.702059 longitude and 1.406719 latitude, with not a single difference in the decimals. 😅 Someone told me before that there could be a tree growing on top of another tree, but I quite… doubt it. 🤨

{% youtube 0WLrI2djC3g %}

I checked the data and could find other similarities in the name and species name. Okay, I could be wrong, but I [decided to remove the potential duplicates](https://twitter.com/cheeaun/status/1110175312225030144) anyway, since this affects the map user interface. Two overlapping tree dots on a map pose quite a challenge, especially for visualization and user interactions. I have a high suspicion that the original dataset actually comes from multiple datasets, via different agencies, which could explain this phenomenon.

The final gzipped file size remains the same 😅.

Pushing the limits, again
---

With the dataset finalised and decent loading and decoding performance, I proceeded to reimplement most of what I did in the first version, which used [Mapbox GL JS](https://docs.mapbox.com/mapbox-gl-js/api/), with Deck.gl instead.

In the original implementation with Mapbox GL JS:

- Every tree dot is styled via the [`circle`](https://docs.mapbox.com/mapbox-gl-js/style-spec/#layers-circle) layer.
- A pure client-side solution is impossible if there are [over 500,000 data points](https://docs.mapbox.com/help/troubleshooting/working-with-large-geojson-data/#even-bigger-data), hence the server-side solution via Mapbox Studio.
- Not all 500,000+ trees are rendered on the map in higher zoom levels, because that’s how vector tiles work: dropping or coalescing features on every zoom level if the limits on the tiles are exceeded.
Here’s a code example:

```js
map.addLayer({
  id: 'trees',
  type: 'circle',
  source: 'trees-source',
  'source-layer': 'trees',
  paint: {
    'circle-color': [
      'case',
      ['all',
        ['to-boolean', ['get', 'flowering']],
        ['to-boolean', ['get', 'heritage']]
      ], 'magenta',
      ['to-boolean', ['get', 'flowering']], 'orangered',
      ['to-boolean', ['get', 'heritage']], 'aqua',
      'limegreen'
    ],
    'circle-opacity': [
      'case',
      ['to-boolean', ['get', 'flowering']], 1,
      ['to-boolean', ['get', 'heritage']], 1,
      .5
    ],
    'circle-radius': [
      'interpolate', ['linear'], ['zoom'],
      8, .75,
      14, [
        'case',
        ['to-boolean', ['get', 'flowering']], 3,
        ['to-boolean', ['get', 'heritage']], 3,
        1.25
      ],
      20, [
        'case',
        ['to-boolean', ['get', 'flowering']], 10,
        ['to-boolean', ['get', 'heritage']], 10,
        6
      ]
    ],
    'circle-stroke-width': [
      'interpolate', ['linear'], ['zoom'],
      11, 0,
      14, 1,
    ],
    'circle-stroke-color': 'rgba(0,0,0,.25)',
  },
});
```

For the new implementation with Deck.gl:

- The dots are rendered with [`ScatterPlotLayer`](https://deck.gl/#/documentation/deckgl-api-reference/layers/scatterplot-layer). May look the same as the former but the styles are written differently.
- Pure client-side solution becomes possible. The [documentation](https://deck.gl/#/documentation/developer-guide/performance-optimization?section=general-performance-expectations) quotes:
  > On 2015 MacBook Pros with dual graphics cards, most basic layers (like `ScatterplotLayer`) renders fluidly at 60 FPS during pan and zoom operations up to about 1M (one million) data items, with framerates dropping into low double digits (10-20FPS) when the data sets approach 10M items.
- **All** trees are rendered in **all** zoom levels.

The new code example:

```js
new MapboxLayer({
  id: 'trees',
  type: ScatterplotLayer,
  opacity: 1,
  radiusMinPixels: .1,
  radiusMaxPixels: 5,
  lineWidthUnits: 'pixels',
  getLineWidth: 1,
  getLineColor: [0, 0, 0, 200],
  getRadius: (d) => (d.flowering || d.heritage) ? 100 : 3,
  getFillColor: (d) => {
    if (d.flowering && d.heritage) return colorName2RGB('magenta');
    if (d.flowering) return colorName2RGB('orangered');
    if (d.heritage) return colorName2RGB('aqua');
    return colorName2RGB('green');
  },
});
```

Yeap, shorter. I have to create a new function called `colorName2RGB` to convert color names (`orangered`, `aqua`, etc) to RGB values in array form (`[R,G,B]`), because Deck.gl doesn’t support them. This function is surprisingly simple because it uses `canvas`’s magic [`fillStyle`](https://developer.mozilla.org/en-US/docs/Web/API/CanvasRenderingContext2D/fillStyle) property to convert a CSS color, instead of a lookup table, thanks to this [StackOverflow answer by *JayB*](https://stackoverflow.com/a/47355187/20838).

Everything became [crazy *smooth*](https://twitter.com/cheeaun/status/1114532995799478272).

{% youtube 1F2qgbvRwpQ %}

The reimplementation didn’t take long. There were some tiny differences in how the circle scales for different zoom levels but not significant enough for anyone to notice. Deck.gl has yet again *amazed* me with its powerful features and performance. Honestly, I always get wowed every single time by how much I can achieve with this library!

And I didn’t stop there 😉. This was my previous **failed** attempt:

![3D trees rendered based on girth and height, on a map, in Singapore](https://cheeaun.com/blog/images/screenshots/software/3d-trees-girth-height-map-singapore@2x.png)

It’s a failed attempt mainly due to performance. It was **too darn laggy** rendering *all* the 3D tree trunks! I also suspect that some girth measurements were wrong, which you can see from that one huge hexagon in the image above, at Fort Canning Park. This was done with Mapbox GL JS’s 3D extrusion features.
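For the record, the canvas part of `colorName2RGB` mentioned above is browser-only, but the whole idea fits in a few lines. A sketch (my reconstruction of the trick, not the exact site code):

```js
// In the browser, assigning any CSS color to a canvas context's fillStyle
// makes the browser normalize it; reading it back yields a hex string
// (e.g. 'orangered' comes back as '#ff4500' for opaque colors):
//
//   const ctx = document.createElement('canvas').getContext('2d');
//   ctx.fillStyle = colorName;
//   const hex = ctx.fillStyle;
//
// The remaining step is converting that hex string to [R, G, B]:
function hex2RGB(hex) {
  return [1, 3, 5].map((i) => parseInt(hex.slice(i, i + 2), 16));
}
```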
This is my [second attempt](https://twitter.com/cheeaun/status/1114696975515963398), with Deck.gl:

![ExploreTrees.SG showing 3D tree trunks](https://cheeaun.com/blog/images/screenshots/web/exploretrees-sg-3d-tree-trunks-1.jpg)

Those orange dots are the tree trunks in 3D. Let’s zoom in.

![ExploreTrees.SG showing 3D tree trunks, zoomed-in](https://cheeaun.com/blog/images/screenshots/web/exploretrees-sg-3d-tree-trunks-2.jpg)

![ExploreTrees.SG showing 3D tree trunks, further zoomed-in](https://cheeaun.com/blog/images/screenshots/web/exploretrees-sg-3d-tree-trunks-3.jpg)

![ExploreTrees.SG showing 3D tree trunks, much further zoomed-in](https://cheeaun.com/blog/images/screenshots/web/exploretrees-sg-3d-tree-trunks-4.jpg)

This attempt is about **3 to 4 times faster** at rendering *everything* 😱 (based on my perception). **But** it still lags when panning around at higher zoom levels 😢. Ugh, I feel like I’m on a roller coaster ride when things become fast, super fast, then slow again, then fast again, and then end up slow again 😖. Technically it’s my own fault. Repeatedly, I try to make it fast and *then* purposely make it slow again. I have no idea why I keep doing this 😂

Anyway, I had an idea on how to make it fast 😏. I limit this 3D mode to higher zoom levels and then filter the list of trees based on the map’s geographical bounds. Like this:

```js
const bounds = map.getBounds();
const results = ... // magically filter the list of trees based on bounds
trees3DLayer.setProps({
  data: trees3Dify(results),
});
```

The returned value of `map.getBounds()` is the smallest bounds that encompasses the visible region of the map. Instead of rendering 500,000+ trees, it can be made to render only a few thousand instead. Basically, this not only solves the performance problem, but also makes sense. There’s no point rendering 3D trees at lower zoom levels anyway, since they’ll all look like small dots.
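Before the spatial-index trick that shows up later, that “magically filter” step can be approximated with a plain linear scan. A naive sketch (the tree shape is assumed; `bounds` mimics Mapbox GL’s `LngLatBounds` accessors):

```js
// Naive bounds filter: keep only trees whose coordinates fall inside
// the visible map bounds. `bounds` follows Mapbox GL's LngLatBounds API
// (getWest/getEast/getSouth/getNorth); trees are { position: [lng, lat] }.
function filterByBounds(trees, bounds) {
  const w = bounds.getWest();
  const e = bounds.getEast();
  const s = bounds.getSouth();
  const n = bounds.getNorth();
  return trees.filter(
    ({ position: [lng, lat] }) => lng >= w && lng <= e && lat >= s && lat <= n
  );
}
```

A linear scan over 500,000+ items on every map move is exactly the kind of work a spatial index avoids, which is where `geokdbush` comes in further down.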
Users still have to zoom in to see the 3D trunks in detail, which is the same reason why 3D buildings are only visible at higher zoom levels on maps like Google Maps and Apple Maps. Thanks to this, it’s [fast again](https://twitter.com/cheeaun/status/1114833009289469952) 😬.

{% youtube KPMsFXVsC20 %}

The right side is the Developer Tools Console, logging the number of 3D trees rendered every time the map is zoomed, panned or rotated. It’s quite noticeable that there’s no lag at all when panning around. The 3D trees outside of the bounds only start rendering after the pan, zoom or rotate ends. **Perfect**.

What’s left for me is to finish up the remaining reimplementation, remove all the old code, choose a better color for these tree trunks and wrap up!

![Thinking about 3D tree trunks](https://cheeaun.com/blog/images/figures/diagram/thinking-3d-tree-trunks@2x.jpg)

At first, I wanted to color the trunks brown, but it’s too dark and doesn’t contrast well with the dark map tiles. This explains why I chose a brighter brown or orange for my first attempt. And then I changed the color to white because it still didn’t *feel* right somehow…

Yes… 3D tree trunks look kind of weird, right? I thought to myself, **what was I trying to do again?** What’s the purpose of the 3D renderings? Isn’t 2D enough for this visualization? I guess drawing 3D trees on a map would be cool, but I didn’t really think much beyond that.

But what makes them look weird? Oh! The tree [crowns](https://en.wikipedia.org/wiki/Crown_(botany))! They look weird because they’re *incomplete*. Wait, the problem is that I don’t have the crown data, so how am I going to draw the tree crowns? Depending on the tree species or families, I might need to draw different shapes of crowns, which can be *a lot* of work 😅. So I started to think: what’s the simplest form of a tree crown that is achievable?

Of course, **another cylinder**.
I did a few [quick sketches](https://twitter.com/cheeaun/status/1119443171031703552) during my free time:

![ExploreTrees.SG logo and tree trunk with crown sketches](https://cheeaun.com/blog/images/photos/objects/exploretrees-sg-logo-tree-trunk-crown-sketches.jpg)

I have the height information of the trees, but it doesn’t necessarily mean the height of the trunks themselves. I could “chop off” the trunk to about 75% and leave the rest for the crown. The crown could take up 50% of the height so that it looks like it’s *covering* the trunk. As for the radius of the crown, I can roughly derive it from the height as well. From my trials and errors, I found the sweet spot for the radius to be roughly 40% of the height.

Sounds like *complicated* math, but… **[voilà!](https://twitter.com/cheeaun/status/1114866910682681344)**

![ExploreTrees.SG faux 3D trees](https://cheeaun.com/blog/images/screenshots/web/exploretrees-sg-faux-3d-trees-1.jpg)

![ExploreTrees.SG faux 3D trees, zoomed-in](https://cheeaun.com/blog/images/screenshots/web/exploretrees-sg-faux-3d-trees-2.png)

![ExploreTrees.SG faux 3D trees, further zoomed-in](https://cheeaun.com/blog/images/screenshots/web/exploretrees-sg-faux-3d-trees-3.png)

This is the moment when I felt that my efforts had finally become fruitful. Little did I know that the crowns would actually make such a huge difference! To be honest, I got super excited when this actually worked.

I tried to finish up the work, ensuring feature parity with the old implementation. It reached a point where it was [*almost* complete](https://twitter.com/cheeaun/status/1114919843541573632).

{% youtube gA213scyW_s %}

I’ve also enabled [3D buildings](https://twitter.com/cheeaun/status/1115060661401182208), for a more *complete* picture.

{% youtube Sbh8vS_DKYg %}

…and [more photos](https://twitter.com/cheeaun/status/1115630822764105728)!
![ExploreTrees.SG faux 3D trees — overlapping tree crowns](https://cheeaun.com/blog/images/screenshots/web/exploretrees-sg-faux-3d-trees-4.png)

![ExploreTrees.SG faux 3D trees — cute tiny trees](https://cheeaun.com/blog/images/screenshots/web/exploretrees-sg-faux-3d-trees-5.png)

![ExploreTrees.SG faux 3D trees — huge trees and tiny trees together](https://cheeaun.com/blog/images/screenshots/web/exploretrees-sg-faux-3d-trees-6.png)

Look at those cute little trees! 😍

![ExploreTrees.SG faux 3D trees — long queue of trees](https://cheeaun.com/blog/images/screenshots/web/exploretrees-sg-faux-3d-trees-7.png)

![ExploreTrees.SG faux 3D trees — curved line of trees](https://cheeaun.com/blog/images/screenshots/web/exploretrees-sg-faux-3d-trees-8.png)

![ExploreTrees.SG faux 3D trees — tree formations in context with surrounding 3D buildings](https://cheeaun.com/blog/images/screenshots/web/exploretrees-sg-faux-3d-trees-9.png)

Nearby 3D buildings tend to give a pretty [realistic context](https://twitter.com/cheeaun/status/1115632732481019916) to the 3D trees around them. In a way, it feels like there’s a pattern on how these trees are planted based on the surrounding geography 🤔.

![ExploreTrees.SG faux 3D trees — trees on Changi Beach Park](https://cheeaun.com/blog/images/screenshots/web/exploretrees-sg-faux-3d-trees-10.png)

![ExploreTrees.SG faux 3D trees — trees clashing with buildings](https://cheeaun.com/blog/images/screenshots/web/exploretrees-sg-faux-3d-trees-11.png)

The image above shows one of the data mismatch cases, near Cecil Street, where the trees are no longer there and an interim food centre is built in this area.

Alright, I’m almost done. The 2D tree markers are working. Visualization filters work. 3D trees perform really well. Just *one* thing left: the **laggy** blue *marker highlighter* and tree information *hover cards*, as rendered in the videos above.
On desktop browsers, when the mouse cursor hovers over a tree, the blue marker highlighter appears around it, with the hover card popping out from the bottom right of the page. There’s a significant lag when the cursor moves across multiple trees on the map as the user interface tries to keep up.

![ExploreTrees.SG marker highlighter and hover card](https://cheeaun.com/blog/images/figures/diagram/exploretrees-sg-marker-highlighter-hover-card@2x.png)

I made three attempts. My **first** piece of code was using the [`mousemove` event](https://docs.mapbox.com/mapbox-gl-js/api/#map.event:mouseover) from Mapbox GL JS. When the event is fired, a specific layer on the map can be queried and the marker or feature information can be extracted to be displayed in the hover card. The query is done by converting the pixel coordinates of the cursor to the actual map coordinates in latitude & longitude, and then finding the nearest map feature to those coordinates. This operation is quite tedious: the `mousemove` event fires too often, there are way too many features under the cursor, and every single call blocks the UI thread, thus affecting the rendering speed of the marker highlighter and the hover card.

My **second** attempt was using Deck.gl’s powerful [picking engine](https://deck.gl/#/documentation/developer-guide/adding-interactivity), which uses something called the [Color Picking Technique](https://deck.gl/#/documentation/developer-guide/writing-custom-layers/picking):

> Rather than doing traditional ray-casting or building octrees etc in JavaScript, deck.gl implements picking on the GPU using a technique we refer to as "color picking". When deck.gl needs to determine what is under the mouse (e.g. when the user moves or clicks the pointer over the deck.gl canvas), all pickable layers are rendered into an off-screen buffer, but in a special mode activated by a GLSL uniform.
> In this mode, the shaders of the core layers render picking colors instead of their normal visual colors.

Honestly, I have no idea how this works. I roughly get that the engine tries to pick the color under the cursor and somehow manages to find the relevant feature which matches the color?!? But how? Anyway, I’ve tried it and this method doesn’t work either. Everything’s *still* slow 😭.

So, lo and behold, my **third** attempt: using a spatial index for points, with a library called [`geokdbush`](https://github.com/mourner/geokdbush):

> A geographic extension for [kdbush](https://github.com/mourner/kdbush), the fastest static spatial index for points in JavaScript.
>
> It implements fast [nearest neighbors](https://en.wikipedia.org/wiki/Nearest_neighbor_search) queries for locations on Earth, taking Earth curvature and date line wrapping into account. Inspired by [sphere-knn](https://github.com/darkskyapp/sphere-knn), but uses a different algorithm.

I think it’s pretty cool because it’s written by [Vladimir Agafonkin](https://github.com/mourner), the creator of [Leaflet](https://leafletjs.com/). Here’s a rough code snippet:

```js
const index = new KDBush(data, (p) => p.position[0], (p) => p.position[1]);
...
const nearestPoints = geokdbush.around(index, point.lng, point.lat, 1, radius);
```

The `geokdbush.around` method returns an array of the closest points from a given location in order of increasing distance. It has optional arguments to set a custom maximum number of results, a maximum distance in kilometers to search within, and even an additional function to further filter the results! 🤯 In fact, these methods are so useful that I also use them for filtering the rendering of the 3D trees based on the map bounds (mentioned earlier, for the `trees3Dify` call). The result is [**insanely fast**](https://twitter.com/cheeaun/status/1116960702118318082).
⚡️

{% youtube SxsmT9trxaM %}

It even works in [3D mode](https://twitter.com/cheeaun/status/1116960752122777601):

{% youtube 4YmBeA35fD0 %}

I’m quite satisfied with this. But I think I can **do more**. 💪

Plus ultra
---

Initially, I had an idea: get a listing of all tree species mapped to their own crown shape patterns. Unfortunately, I couldn’t find such a list 😭. My research led me to a few subjects like [tree crown classes](https://openoregon.pressbooks.pub/forestmeasurements/chapter/5-3-crown-classes/) and I especially like this diagram from [Structures of Forest](https://www.toppr.com/guides/science/forest-our-lifeline/structure-of-forest/):

![Crown of tree diagram, showing Pyramidal, Full-crowned, Vase, Fountain, Spreading, Layered, Columnar and Weeping](https://cheeaun.com/blog/images/figures/diagram/crown-of-tree.jpg)

These are the various crown shapes of trees, and I think I’ve only covered the “full-crowned” type, with blocky cylinders. The other crowns would be a bit difficult to model in 3D…

Similar to how the tree trunks are rendered, the tree crowns are rendered with deck.gl’s [SolidPolygonLayer](https://deck.gl/#/documentation/deckgl-api-reference/layers/solid-polygon-layer), which works like [PolygonLayer](https://deck.gl/#/documentation/deckgl-api-reference/layers/polygon-layer) but without strokes. Extrusion is enabled with the `extruded` option, combined with z-indices in the coordinates. The circle shape is formed with [@turf/circle](https://turfjs.org/docs/#circle). Elevation is set via the `getElevation` accessor.

![Coordinates forming a polygon, with extrusion and elevation](https://cheeaun.com/blog/images/figures/diagram/exploretrees-sg-polygon-extrusion-elevation@2x.png)

These are **the basics**. Creating complex polygons for all the different tree crowns would be too complicated for me, both computationally *and* mathematically 😵.
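To make the cylinder construction concrete, here’s a rough, hand-rolled stand-in for what `@turf/circle` produces: a closed ring of points around a center, which `SolidPolygonLayer` can then extrude. The meters-to-degrees conversion below is a flat-Earth approximation that’s fine at tree scale near the equator:

```js
// Approximate a circle as a closed polygon ring of `steps` points.
// Rough stand-in for @turf/circle; ignores Earth curvature and the
// cos(latitude) correction, which is negligible near Singapore (~1.3°N).
const METERS_PER_DEGREE = 111320; // approximate length of 1° at the equator

function circleRing([lng, lat], radiusMeters, steps = 16) {
  const r = radiusMeters / METERS_PER_DEGREE;
  const ring = [];
  for (let i = 0; i <= steps; i++) {
    const angle = (i / steps) * 2 * Math.PI;
    ring.push([lng + r * Math.cos(angle), lat + r * Math.sin(angle)]);
  }
  return ring; // first and last points coincide, closing the ring
}
```

Each tree would then contribute one such ring as its polygon, and the layer’s `extruded` option plus the `getElevation` accessor turn the flat circle into a cylinder.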
While developing this project, I’ve been following the updates on the then-upcoming [version 7.0 of Deck.gl](https://github.com/uber/deck.gl/blob/master/CHANGELOG.md#deckgl-v70). The changes that caught my attention were the (re-)introduction of [`SimpleMeshLayer`](https://github.com/uber/deck.gl/issues/2890) and [`ScenegraphLayer`](https://github.com/uber/deck.gl/issues/2871), which allow the rendering of actual 3D models in formats such as [OBJ](https://en.wikipedia.org/wiki/Wavefront_.obj_file), [PLY](https://en.wikipedia.org/wiki/PLY_(file_format)) and [glTF](https://en.wikipedia.org/wiki/GlTF).

This means… I can put *real* tree models on the map! 😱

Wait a minute, I don’t have a species-to-crown-type mapping yet, so rendering a super realistic full-crowned tree for every single tree would be kind of weird, especially for those tiny little trees. Even if I had the mapping, it would take a gargantuan effort for me to code the species-specific crowns for all 500,000+ trees, manage all the 3D model files, *and* tune the asset loading & map rendering performance at the same time!

There *has* to be some compromise here. I want to replace the cylinder tree crowns with something more realistic, **but** it cannot be *too* realistic. So it should be **semi-realistic**, right?

I read through the [documentation](https://deck.gl/#/documentation/deckgl-api-reference/layers/simple-mesh-layer) and noticed this interesting piece of code from the example:

```js
import {CubeGeometry} from 'luma.gl'
```

That’s one of the scenegraph model nodes provided by [luma.gl](https://luma.gl/#/). Pretty neat. They are like pre-made 3D objects that can be used, as described, like WebGL components. One of the geometries that I found is [SphereGeometry](https://luma.gl/#/documentation/api-reference/geometry-nodes/sphere), which looks like what I need. Or perhaps a sphere would be a better alternative to a cylinder, right?
🤔

Not only can I create a sphere object with this, I can also apply a texture to it with the `texture` property of `SimpleMeshLayer`. Here’s a code snippet:

```js
const treesCrownLayer = new MapboxLayer({
  id: 'trees-crown',
  type: SimpleMeshLayer,
  texture: document.getElementById('leaves'),
  mesh: new SphereGeometry(),
  ...
});
```

The `texture` property accepts one of these 3 types of value:

- A luma.gl `Texture2D` instance
- An `HTMLImageElement`
- A URL string to the texture image

So the `document.getElementById('leaves')` code above is a reference to an `img` element in the HTML page. I use this method to *preload* the image instead of lazy-loading it at runtime.

As for the image, I *googled* and managed to sift through *hundreds* of images to find **one** “leaves” texture image from [Dim's environmental and architectural textures](https://opengameart.org/content/dims-enviromental-and-architectural-textures). Honestly, it’s not easy to find a *good* texture image at all 😂, but luckily there are existing resources available, mostly from game development 😀.

Here’s the moment of truth 🤞:

![ExploreTrees.SG 3D realistic trees](https://cheeaun.com/blog/images/screenshots/web/exploretrees-sg-3d-realistic-trees-1.jpg)

**Oh my god, it works!!** 😱😱😱

![ExploreTrees.SG 3D realistic trees — tree crowns with less vertices](https://cheeaun.com/blog/images/screenshots/web/exploretrees-sg-3d-realistic-trees-2.jpg)

For this image, I tried playing around with the number of vertices. If it’s reduced, the sphere will have fewer triangular faces and look less spherical. I kind of suspected that reducing the number of vertices might improve performance, but it doesn’t seem to help much.

Anyway, before I got too happy with this result, there were a few problems:

- The trees look a bit too dark, so they might need some sort of lighting. Maybe brighter colors too.
- The tree crowns need to be “see-through”, to simulate the empty spaces between the leaves.
- The leaves on the crown are too big, especially when zoomed in. I’ll need to make them smaller.

Using [Affinity Photo](https://affinity.serif.com/en-gb/photo/) and [Affinity Designer](https://affinity.serif.com/en-gb/designer/), I made some adjustments to the original texture image (top left):

![Tree leaves texture image being masked and flipped into a repeated pattern](https://cheeaun.com/blog/images/figures/diagram/tree-leaves-texture-image-masked-flipped-repeated-pattern@2x.jpg)

First, I select the dark areas between the leaves and make them alpha-transparent. To reduce the size of the leaves, I make the image bigger instead, by flipping it 3 times, creating a repeatable leaves image pattern.

The results:

![ExploreTrees.SG 3D realistic trees, zoomed out with surrounding 3D buildings](https://cheeaun.com/blog/images/screenshots/web/exploretrees-sg-3d-realistic-trees-3.jpg)

![ExploreTrees.SG 3D realistic trees — trees at road intersection](https://cheeaun.com/blog/images/screenshots/web/exploretrees-sg-3d-realistic-trees-4.jpg)

![ExploreTrees.SG 3D realistic trees — highlighted tree and zoomed in](https://cheeaun.com/blog/images/screenshots/web/exploretrees-sg-3d-realistic-trees-5.jpg)

Not bad. The leaves are smaller and it’s possible to see the tree trunk hiding *inside* the crown. 👀

As for the lighting, I use the new lighting effects system [introduced in deck.gl v7.0](https://medium.com/vis-gl/introducing-deck-gl-v7-0-c18bcb717457), particularly the [`AmbientLight`](https://deck.gl/#/documentation/deckgl-api-reference/lights/ambient-light) source and the new experimental [`SunLight`](https://deck.gl/#/documentation/deckgl-api-reference/lights/sun-light) source. `SunLight` is a variation of `DirectionalLight` which is automatically set based on a UTC time and the current viewport.
In other words, it **simulates the sun** by calculating the sun’s position with a JavaScript library called [SunCalc](https://github.com/mourner/suncalc) (again, created by Vladimir Agafonkin, creator of Leaflet 🤩). Coincidentally, as I was [researching how the code and formula work](https://www.aa.quae.nl/en/reken/zonpositie.html), I saw Vladimir [tweeting](https://twitter.com/mourner/status/1120748294190141440) about his tiny [900-byte function to calculate the position of the Sun](https://observablehq.com/@mourner/sun-position-in-900-bytes)! 😱 If I’m not mistaken, it’s like a simplified version of SunCalc.

Without further ado, I implemented them all, this way:

```js
const phaseColor = getPhaseColor(timestamp);
const ambientLight = new AmbientLight({
  intensity: phaseColor === 'dark' ? 1 : 1.5,
});
const sunLight = new SunLight({
  timestamp,
  intensity: phaseColor === 'dark' ? .5 : 2,
});
const lightingEffect = new LightingEffect({
  ambientLight,
  sunLight,
});
treesCrownLayer.deck.setProps({
  effects: [lightingEffect],
});
```

The light intensities are reduced when the sun phase is “dark”, to simulate night time.

Before 🔅:

![ExploreTrees.SG 3D realistic trees — the lighting before](https://cheeaun.com/blog/images/screenshots/web/exploretrees-sg-3d-realistic-trees-lighting-before.jpg)

After 🔆:

![ExploreTrees.SG 3D realistic trees — the lighting after](https://cheeaun.com/blog/images/screenshots/web/exploretrees-sg-3d-realistic-trees-lighting-after.jpg)

Looks much better!

Here’s [a time-lapse of the sun lighting](https://twitter.com/cheeaun/status/1124981208427810816) on the trees, to show how the (sun) light direction changes based on time. Also, it’s not affected by the user’s local time zone and will always reflect actual Singapore time, since the formula depends on UTC time and location coordinates.

{% youtube n21C4BcQ7l8 %}

Unfortunately, the lighting transition from night to day and day to night is kind of abrupt for now.
Just in case someone stays on the site for too long, I’ve also added a timer to update the lighting every 10 minutes.

Along the way, I’ve added a few useful POIs (Points of Interest) that are [listed on NParks](https://www.nparks.gov.sg/gardens-parks-and-nature), which include these categories of places:

- Parks
- Community gardens
- Heritage roads
- Skyrise greenery

On their [site](https://www.nparks.gov.sg/gardens-parks-and-nature), it looks like this:

![NParks site showing parks, community gardens, heritage trees, heritage road and skyrise greenery](https://cheeaun.com/blog/images/screenshots/web/nparks-site-parks-community-gardens-heritage-trees-heritage-road-skyrise-greenery@2x.png)

On ExploreTrees.SG:

![ExploreTrees.SG showing parks, community gardens and skyrise greenery](https://cheeaun.com/blog/images/screenshots/web/exploretrees-sg-parks-community-gardens-skyrise-greenery@2x.png)

Besides the POIs, I’ve also made the train stations and bus stops more prominent, to guide navigation and exploration. These markers only appear at higher zoom levels, where they are actually useful, and to prevent clutter on the map.

As a side quest, I took this chance to try out [lit-html](https://lit-html.polymer-project.org/) for the hover card, replacing the constant `innerHTML` layout-thrashing (destroying and rebuilding the content on every hover event).

The final boss
---

Despite the fact that I’ve ticked off *all* the checkboxes in my todos for this project, there’s one problem that I’ve been ignoring since the beginning of this exercise.

**It doesn’t work well on mobile.** 😞

There are a few issues:

- MessagePack decoding crashes on Mobile Safari. When I tried switching to the JSON format, the browser didn’t crash, but it took a significant amount of time to load. Either way, it’s a pretty bad user experience.
- Even after everything loads successfully, Mobile Safari will randomly crash after a few minutes of panning and zooming on the map.
  I’ve tried asking a friend to try the site on an iPad, which I suspect has more memory and processing power than an iPhone, yet the site still crashes 😭.
- The site doesn’t crash on Chrome for Android, but it’s still kind of laggy. 🐌

The most likely suspect is **memory pressure**, as highlighted by Deck.gl’s [documentation on performance](https://deck.gl/#/documentation/developer-guide/performance-optimization):

> Modern phones (recent iPhones and higher-end Android phones) are surprisingly capable in terms of rendering performance, but are considerably more sensitive to memory pressure than laptops, resulting in browser restarts or page reloads. They also tend to load data significantly slower than desktop computers, so some tuning is usually needed to ensure a good overall user experience on mobile.

I guess loading a 5 MB compressed file, which uncompresses into a 30 MB file, that loads 500,000+ trees on a WebGL-powered 3D-rendered map is just… too much? 😝

I’ve tried dozens of ways to fix this, like:

- Reducing the number of layers on the map.
- Reducing the number of polygons on the map.
- Removing some features and code that might use a lot of memory.
- Applying micro-optimisations that I thought would help.
- Reducing the number of trees by using a cluster mode with [`supercluster`](https://github.com/mapbox/supercluster), which kind of defeats the purpose of why I built this in the first place.

Nothing works. It’s pretty daunting 😩. This is the only thing that blocks me from launching this new version of ExploreTrees.SG, and I have to make the call.

![ExploreTrees.SG visitor flow for version 1 and version 2](https://cheeaun.com/blog/images/photos/objects/exploretrees-sg-visitor-v1-v2.jpg)

After days of contemplating, I decided to *not* replace version 1.0 of ExploreTrees.SG and instead slot in version 2.0 **alongside** it. The old version 1.0 works fine and seems more stable on mobile browsers. The new version works fine on desktop browsers but *not* the iPad.
I’m not sure about Android tablets, so I’ll assume they’re slightly better or worse than the iPad. 🤷‍♂️

On version 1.0, all the tree data is served from [Mapbox Studio](https://www.mapbox.com/mapbox-studio/), as vector [tilesets](https://docs.mapbox.com/studio-manual/reference/tilesets/). They are *partially* loaded based on zoom levels, so not everything is shown at once. This probably helps in reducing memory pressure, using less bandwidth, and having better performance.

On version 2.0, where I kept pushing the limits, **all** the tree data is loaded in the web browser, so there are no more round trips to the server. Technically it’s pure client-side, and it uses a lot of bandwidth and all the power of the user’s machine to render everything nicely.

I needed a way to switch between the versions based on certain device capabilities. There’s no way for me to detect the device’s or browser’s ability to handle high memory pressure. There’s also no way to detect possible crashes of a web app *before* it could crash. User agent string detection won’t work here either. Since there’s no reliable way to detect these conditions, I applied touch detection instead:

```js
const isTouch = 'ontouchstart' in window || navigator.msMaxTouchPoints;
const hqHash = /#hq/.test(location.hash);
const renderingMode = !hqHash && isTouch ? 'low' : 'high';
```

I made a few assumptions:

- Most modern mobile phones have touch screens.
- Older phones won’t be able to load the site anyway since it uses WebGL (sorry 🙇‍♂️).
- Even though the iPad is quite powerful, it also has a touch screen, so this detection will rule it out.
- The only exception would be desktop computers with touch screens.

I call version 2 “High-quality mode”, and provide an option for users who are routed to version 1 to switch to this mode.
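The routing decision can also be expressed as a pure function of the URL hash and touch support, which makes the assumptions easy to check in isolation. This is my own sketch, not the actual site code, and the function name is hypothetical:

```js
// Hypothetical refactor (not the actual site code): the touch/`#hq`
// routing decision as a pure function.
function pickRenderingMode(hash, hasTouch) {
  const hqHash = /#hq/.test(hash);
  return !hqHash && hasTouch ? 'low' : 'high';
}

console.log(pickRenderingMode('', true));    // 'low'  (phone/tablet gets version 1)
console.log(pickRenderingMode('', false));   // 'high' (desktop gets version 2)
console.log(pickRenderingMode('#hq', true)); // 'high' (touch device opting in via #hq)
```

A desktop with a touch screen would still be routed to the low mode, which matches the last assumption in the list.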
![ExploreTrees.SG showing a link “Try high-quality mode?”](https://cheeaun.com/blog/images/screenshots/web/exploretrees-sg-try-high-quality-mode@2x.jpg)

This option reloads the page and switches to version 2 by appending `#hq` to the URL. After that, if the site crashes, the user can still go back to version 1.

To further optimise the performance of the site, I make good use of the `renderingMode` variable to conditionally enable or disable certain features on the map. I went one step further by applying [code splitting](https://parceljs.org/code_splitting.html) with dynamic imports. I extracted all the version-2-related dependencies into a separate bundle and load it asynchronously on the page.

```js
const {
  MapboxLayer,
  AmbientLight,
  SunLight,
  LightingEffect,
  ScatterplotLayer,
  SolidPolygonLayer,
  SimpleMeshLayer,
  SphereGeometry,
  msgpack,
  polyline,
  KDBush,
  geokdbush,
  circle,
  throttle,
} = await import('./hq.bundle');
```

Besides that, I did an interesting trick for the MessagePack data file. I realised that MessagePack doesn’t have an [official MIME type](https://github.com/msgpack/msgpack/issues/194). Even though I started using Parcel for bundling files, the site is still deployed to GitHub Pages via the [GitHub Pages Deploy Action](https://github.com/JamesIves/github-pages-deploy-action). This means gzip compression is only available for certain file extensions. For example, `.js` files will be served with gzip compression, but not `.png` files. I tried using the `.mp` extension for the MessagePack file and… it’s not compressed, unfortunately 😟. Since the site traffic is also handled by Cloudflare, I looked through [the list of content types that Cloudflare will compress](https://support.cloudflare.com/hc/en-us/articles/200168396). I needed a file format that…

- Is whitelisted in the server's content type compression list, from either GitHub Pages or Cloudflare.
I’m aware that [Netlify allows custom headers](https://www.netlify.com/docs/headers-and-basic-auth/), which may be able to set gzip headers, but I’m not moving the site there for now.
- Bypasses Parcel’s [intelligent asset handling](https://parceljs.org/assets.html). For example, if `.json` is used, Parcel will include the file content *into* the JavaScript bundle.
- Is not confusing for me to revisit in the future. If I use `.png`, which could work, the future me would be confused and wonder why it’s used for a non-image file 😅.

I tried pre-compressing the file into `.gz`, but weird things happened when trying to read the file in JS. 😅

In the end, I chose `.ico` because it’s one of the formats that is an (image) binary *and* can be gzip-compressed at the same time. It is also rarely used, unlike normal image formats, which could prevent conflicts with other files. I could use the `.ttf` or `.otf` extensions too, but they could be confused with actual font files. I named the file `trees.min.mp.ico`, tested it, and it works. 🤩

The launch and the aftermath
---

On [30 April 2019](https://twitter.com/cheeaun/status/1123053445546512384), I finally relaunched [ExploreTrees.SG V2](https://exploretrees.sg).

![GIF screenshot of ExploreTrees.SG](https://cheeaun.com/blog/images/screenshots/web/exploretrees-sg-screenshot.gif)

Despite all my effort rewriting the code and pushing the limits, I’m especially proud of the 3D trees. It feels like an **achievement-unlocked** moment for me. 🤩

![ExploreTrees.SG showing an overview of all 3D trees](https://cheeaun.com/blog/images/screenshots/web/exploretrees-sg-3d-trees-overview@2x.jpg)

It got [featured on Reddit /r/singapore](https://www.reddit.com/r/singapore/comments/bj0xze/my_friend_took_nparkss_data_and_built_a_site_for/), thanks to a friend 😉.
![Reddit post on /r/singapore titled “My friend took NParks’s data and built a site for people to visualise the various trees around Singapore”](https://cheeaun.com/blog/images/screenshots/web/reddit-friend-took-nparks-data-built-site-visualise-trees-singapore@2x.png)

Feedback has been positive so far. I’m surprised that the conditional routing for versions 1 and 2 seems smooth enough that no one noticed it. 😝

A day after the launch, I [received a pretty interesting email](https://twitter.com/cheeaun/status/1123589545230880769) at 2 AM.

![Email titled “Map Building Help”, received at 2:14 AM](https://cheeaun.com/blog/images/screenshots/software/email-map-building-help@2x.jpg)

So this person asked me for my *advice* on creating a map of all **durian trees in Singapore**! I was reading this in the morning, had a good laugh and ignored it. 😂 It was Labor Day and there was no work.

In the afternoon, I had a hunch that this person might **not** be joking after all. So, I did some research on durians, found a Wikipedia page on the [List of *Durio* species](https://en.wikipedia.org/wiki/List_of_Durio_species), looked through the dataset and found **286 Durio species**! 😱

![Terminal showing a node script for durian listing down the durio species](https://cheeaun.com/blog/images/screenshots/software/terminal-node-script-durian-durio-species@2x.png)

I didn’t know that there are durian trees in Singapore! Even more surprising, [durian](https://en.wikipedia.org/wiki/Durian) trees have [flowers](http://www.wildsingapore.com/wildfacts/plants/fruittrees/durio/zibethinus.htm)! 😱

The next logical step was to quickly load them up [on a map](https://gist.github.com/cheeaun/1d68f6acb589a30335e1c7c927153d18).

![Durio species trees plotted on a map](https://cheeaun.com/blog/images/screenshots/web/durio-species-trees-map@2x.png)

Looking at this, I thought to myself: are there any other fruit trees in Singapore? I searched through the Internet and found a list of fruit tree species names.
I was thinking of how to visualise the fruits on a map: maybe *still* colorful circles, or fruit icons instead? There could be too many exotic fruits to show, so maybe they need to be filtered down to only the “popular” ones? Hmm, yeah… Lots of ideas, yet so little time.

From my experience in [building side projects](https://cheeaun.com/blog/2016/01/building-side-projects/), it’s always good to hold on to an idea for some time before executing it. This time, I’ve already had **so much fun** rebuilding everything, squeezing every inch of performance, and even modelling 3D trees on a map. Most importantly, I managed to launch my [open-source](https://github.com/cheeaun/exploretrees-sg) work to the public before I got bored of it. 😆

This has been fun and I’ve learnt a lot. So, till next time then. 😉
cheeaun
135,024
Blockchain No-Brainer: Ownership in the Digital Era
We discuss how the meaning and perception of ownership is changing, while the second will discuss how Smart Contract automation can help deliver safe, fair, fast, low-cost, transparent and auditable transactional interoperability.
0
2019-07-10T09:44:20
https://dev.to/erlangsolutions/blockchain-no-brainer-ownership-in-the-digital-era-1dch
blockchain, smartcontracts, dlt
---
title: "Blockchain No-Brainer: Ownership in the Digital Era"
published: true
description: We discuss how the meaning and perception of ownership is changing, while the second will discuss how Smart Contract automation can help deliver safe, fair, fast, low-cost, transparent and auditable transactional interoperability.
tags: Blockchain, Smart Contracts, DLT
cover_image: https://i.imgur.com/BQFwHYG.jpg
---

# Introduction

Seven months of intense activity have passed since the release of the Blockchain 2018 Myth vs. Reality [article](https://www2.erlang-solutions.com/bcmdev). As a follow-up to that blog post, I would like to take the opportunity to analyse in further detail the impact that this new technology has on our perceptions of asset ownership and value, how we are continuously exploring new forms of transactional automation, and then conclude with the challenge of delivering safe and fair governance.

Since the topic to cover is vast, I have decided to divide it into two separate blog posts, the first of which will cover how the meaning and perception of ownership is changing, while the second will discuss how Smart Contract automation can help deliver safe, fair, fast, low-cost, transparent and auditable transactional interoperability.

My intention is to provide an abstract and accessible summary that describes the state of the art of blockchain technology and the motivations that have led us to the current development stage. While these posts will not focus on future innovation, they will serve as a prelude to bolder publications I intend to release in the future.

# Digital Asset Ownership, Provenance and Handling

### How we value Digital vs. Physical Assets

In order to understand how the notion of ownership is currently perceived in society, I propose to briefly analyse the journey that has brought us to the present stage and the factors which have contributed to the evolution of our perceptions.
Historically people have been predominantly inclined to own and trade physical objects. This is probably best explained by the fact that physical objects stimulate our senses and don’t require the capacity to abstract, as opposed to services for instance. Ownership was usually synonymous with possession. Let us try to break down and extract the fundamentals of the economy of physical goods: we originally came to this world and nothing was owned by anyone; possession by individuals then gave rise to ownership ‘rights’ (obtained through the expenditure of labour - finding or creating possessions); later we formed organisations that exercised territorial control and supported the notion of ownership (via norms and mores that evolved into legal frameworks), as a form of protection of physical goods. Land and raw materials are the building blocks of this aspect of our economy. When we trade (buy or sell) commodities or other physical goods, what we own is a combination of the raw material, which comes with a limited supply, plus the human/machine work required to transform it to make it ready to be used and/or consumed. Value was historically based on a combination of the inherent worth of the resource (scarcity being a proxy) plus the cost of the work required to transform that resource into an asset. Special asset classes (e.g. art) soon emerged where value was related to intangible factors such as provenance, fashion, skill (as opposed to the quantum of labour) etc. We can observe that even physical goods contain an abstract element: the design, the capacity to model it, package it and make it appealing to the owners or consumers. In comparison, digital assets have a stronger element of abstraction which defines their value, while their physical element is often negligible and replaceable (e.g. software can be stored on disk, transferred or printed). 
These types of assets typically stimulate our intellect and imagination, as our senses get activated via a form of rendering which can be visual, acoustic or tactile. Documents, paintings, photos, sculptures and music notations have historical equivalents that predate any form of electrically-based analog or digital representations. The peculiarity of digital goods is that they can be copied exactly at very low cost: for example, they can be easily reproduced in multiple representations on heterogeneous physical platforms or substrates thanks to the discrete nature in which we store them (using a simplified binary format). The perceivable form can be reconstructed and derived from these equal representations an infinite number of times. This is a feature that dramatically influences how we value digital assets.

The opportunity to create replicas implies that it is not the copy nor the rendering that should be valued, but rather the original digital work. In fact, this is one of the primary achievements that blockchain has introduced via the hash lock inherent to its data structure. If used correctly, the capacity to clone a digital item can increase confidence that it will exist indefinitely and therefore maintain its value. However, as mentioned in my previous blog post (Blockchain 2018 - [Myth vs. Reality](https://www2.erlang-solutions.com/bcmdev)), despite their supposed immutability and perpetual existence, digital goods are not immune to destruction, as at present there is a dependence on a physical medium (e.g. hard disk storage) that is potentially subject to alteration, degradation or obsolescence.

A blockchain, such as that of the Bitcoin network, represents a model for vast replication and reinforcement of digital information via so-called Distributed Ledger Technology (DLT). Repair mechanisms can intervene in order to restore integrity in the event that data gets corrupted by a degrading physical support (i.e. a hard disk failure) or a malicious actor.
The validity of data is agreed upon by a majority (the level of majority varying across different DLT implementations) of peer-to-peer actors (ledgers) through a process known as consensus. This is a step in the right direction, although the exploration of increasingly advanced platforms to preserve digital assets is expected to evolve further. As genetic evolution suggests, clones with equal characteristics can all face extinction through the introduction of an actor that makes the environment unfit for survival in a particular form. Thus, it might be sensible to introduce heterogeneous types of ledgers to ensure their continued preservation on a variety of physical platforms and therefore enhance the likelihood of survival of information.

## The evolution of services and their automation

In the previous paragraph, we briefly introduced a distinction between physical assets and goods where the abstraction element is dominant. Here I propose to analyse how we have started to attach value to services and how we are becoming increasingly demanding about their performance and quality.

Services are a form of abstract value commonly traded on the market. They represent the actions bound to the contractual terms under which a transformation takes place. This transformation can apply to physical goods, digital assets, other services, or to individuals. What we trade, in this case, is the potential to exercise a transformation, which in some circumstances might have been applied already. For instance, a transformed commodity, such as refined oil, has already undergone a transformation from its original raw form. Another example is an artefact whose particular shape can either be of use or trigger emotional responses, such as artefacts with artistic value.
Service transformation in the art world can be highly individualistic, depending on the identity of the person doing the transforming (the artist, the critic, the gallery, etc.) or on the audience for the transformed work. Thus, Duchamp’s elevation (or, possibly, degradation) of a porcelain urinal to artwork relied on a number of connected elements (i.e. transformational actions by actors in the art world and beyond) for the transformation to be successful - these elements are often only recognised and/or understood after the transformation has been effected. Even rendering an abstract form, such as music notation or a record, into actual sound is a type of transformation that we consider valuable and commonly trade. These transformations can be performed by humans or machinery.

With the surge of interest in digital goods, there is a corresponding increase in interest in acquiring services to transform them. As these transformations are being automated more and more, and the human element is progressively being removed, even services are gradually taking the shape of automated algorithms that are yet another form of digital asset, as is the case with Smart Contracts. Note, however, that in order to apply the transformation, an algorithm is not enough; we need an executor such as a physical or virtual machine. In Part 2 we will analyse how the automation of services has led to the evolution of Smart Contracts, as a way to deliver efficient, transparent and traceable transformations.

## Sustainability and Access to resources

Intellectual and imagination stimulation is not the only motivator that explains the increasing interest in digital goods and consequently their rising market value. Physical goods are known to be quite costly to handle. In order to create, trade, own and preserve them, there is a significant expenditure required for storage, transport, insurance, maintenance, extraction of raw materials, etc.
There is a competitive and environmental cost involved, which makes access to physical resources inherently non-scalable and occasionally prohibitive, especially in concentrated urban areas. As a result, people are incentivised to own and trade digital goods and services, which turns out to be a more sustainable way forward. For example, let us think about an artist who lives in a densely populated city and needs to acquire a canvas, paint, brushes, and so on, plus studio and storage space, in order to create a painting. Finding that these resources are difficult or impossible to access, they decide to produce their artwork in a digital form.

Services traditionally require resources to be delivered (e.g. raw material processing). However, a subset of these (such as those requiring non-physical effort, for instance stock market trading, legal or accounting services) are ideally suited to being carried out at a significantly lower cost via the application of algorithmic automation.

Note: this analysis assumes that the high carbon footprint required to drive the ‘Proof of Work’ consensus mechanism used in many DLT ecosystems can be avoided; otherwise the sustainability advantage can be legitimately debated.

## The Generative Approach

Affordable access to digital resources, combined with the creation of consistently innovative algorithms, has also contributed to the rise of generative production of digital assets. This includes partial generation, typically obtained by combining and assembling pre-made parts: e.g. [Robohash](https://robohash.org/text) derives a hash from text added to the URL, which maps to a fixed combination of mouths, eyes, faces, bodies and accessories. Other approaches involve Neural Net Deep Learning: e.g.
[ThisPersonDoesNotExist](https://thispersondoesnotexist.com/image) uses a technology known as a [Generative Adversarial Network](https://www.lyrn.ai/2018/12/26/a-style-based-generator-architecture-for-generative-adversarial-networks/) (GAN), released by NVidia Research Labs, to generate faces of people who do not exist; Magenta uses a Google TensorFlow library to generate music and art, while [DeepArt](https://deepart.io/) uses a patented neural net implementation based on the [19-layer VGG network](http://www.robots.ox.ac.uk/~vgg/research/very_deep/). In the gaming industry we should mention [No Man’s Sky](https://www.nomanssky.com/), a mainstream console and PC game that shows a successful use of [procedural generation](https://nomanssky.fandom.com/wiki/Procedural_generation). Project [DreamCatcher](https://autodeskresearch.com/projects/dreamcatcher) also uses a generative design approach, leveraging a wide set of simulated solutions that respond to predefined requirements a material or shape should satisfy.

When it comes to Generative Art, it is important to ensure scarcity by restricting the creation of digital assets to limited editions, so an auto-generated item can be traded without the danger that an excess of supply triggers deflationary repercussions on its price. In Blockchain 2019 Part 2 we will describe techniques to register Non-Fungible Tokens (NFTs) on the blockchain in order to track each individual replica of an object while ensuring that there are no illegal ones. Interesting approaches directly linked to Blockchain Technology have been launched recently, such as the AutoGlyphs from LarvaLabs, although this remains an open area for further exploration.
Remarkably successful is the case of [Obvious Art](https://obvious-art.com/), where another application of the GAN approach resulted in a [generated artwork being auctioned off for $432,500](https://medium.com/datadriveninvestor/machine-learning-generated-artwork-auctions-off-for-432-500-c377be74146f).

## What prevents mass adoption of digital goods

Whereas it is sensible to forecast a significant expansion of the digital assets market in the coming years, it is also true that, at present, there are still several psychological barriers to overcome in order to get broader traction in the market.

The primary challenge relates to trust. A purchaser wants some guarantee that traded assets are genuine and that the seller owns them or acts on behalf of the owner. DLT provides a solid way to work out the history of a registered item without interrogating a centralised trusted entity. Provenance and ownership are inferable and verifiable from a number of replicated ledgers, while block sequences can help ensure there is no double spending or double sale taking place within a certain time frame.

The second challenge is linked to the meaning of ownership outside of the context of a specific market. I would like to cite as an example the [closure of Microsoft’s eBook store](https://www.bbc.com/news/technology-47810367). Microsoft’s decision to pull out of the ebook market, presumably motivated by a lack of profit, could have an impact on all ebook purchases that were made on that platform. The perception of the customer was obviously that owning an ebook was the same as owning a physical book. What Microsoft might have contractually agreed through its End-User License Agreement (EULA), however, is that this is true only within the contextual existence of its platform. This has also happened in video games, where enthusiast players perceive the acquisition of a sword or armour as if they were real objects. Even without the game closing down its online presence (e.g.
when its maintenance costs become unsustainable), a lack of interest or reduced popularity might result in a digital item losing its value. There is a push, in this sense, towards forms of ownership that can break out from the restrictions of a specific market and be maintained in a broader context. Blockchain’s DLT, in conjunction with Smart Contracts, which exist potentially indefinitely, can be used to serve this purpose, allowing people to effectively retain the use of their digital items across multiple applications. Whether those items will have a utility or value outside the context and platform in/on which they were originally created remains to be seen.

Even the acquisition of digital art requires a substantial paradigm shift. Compared to what happens with physical artefacts, there is not an equivalent tangible sense of taking home (or to one’s secure storage vault) a purchased object. This has been substituted by a verifiable trace on a distributed ledger that indicates to whom a registered digital object belongs. Sensory forms can also help in adapting to this new form of ownership. For instance, a digital work of art could be printed, or a 3D model could be rendered for a VR or AR experience or 3D printed.

In fact, controlling what you can do with a digital item is per se a form of partial ownership, which can be traded. This is different from the concept of fractional ownership, where your ownership comes in a general but diluted form. It is more a functional type of ownership. This is a concept which exists in relation to certain traditional, non-digital assets, often bound by national laws and the physical form of those assets. For instance, I can own a classic Ferrari and allow someone else to race it; I can display it in my museum and charge an entry fee to visitors; but I will be restricted in how I am permitted to use the Ferrari name and badge attached to that vehicle.
The transition to these new notions of ownership is particularly demanding when it comes to digital non-fungible assets. Meanwhile, embracing fungible assets, such as a cryptocurrency, has been somewhat easier for customers who are already used to relating to financial instruments. This is probably because fungible assets serve the unique function of paying for something, while in the case of non-fungible assets there is a range of functions that define their meaning in the digital or physical space.

# Conclusion

In this post we have discussed a major emerging innovation that blockchain technology has influenced dramatically over the last two years - the ownership of digital assets. In Blockchain 2019 - Part 2 we will expand on how the handling of assets gets automated via increasingly powerful Smart Contracts. What we are witnessing is a new era that is likely to revolutionise the perception of ownership and the reliance on trusted and trustless forms of automation. This is driven by the need to increase interoperability, cost compression, sustainability, performance (as in the speed at which events occur) and customisation, which are all aspects where traditional centralised fintech systems have not provided a sufficient solution. It is worthwhile, however, to remind ourselves that the journey towards providing a response to these requirements should not come at the expense of safety and security.

Privacy and sharing are also heavily debated areas. Owners of digital assets often prefer their identity to remain anonymous, while the benefit of socially shared information is widely recognised. An art collector, for instance, might not want to disclose his or her personal identity. Certainly, a lot more still remains to be explored, as we are clearly just at the beginning of a wider journey that is going to reshape global digital and physical markets.
At Erlang Solutions we are collaborating with partners in researching innovative and performant services to support a wide range of clients. This ranges from building core blockchain technologies to more specific distributed applications supported by Smart Contracts. Part of this effort has been shared on our website, where you can find some information on who we work with in the [fintech](https://www2.erlang-solutions.com/bcdevfin) world and some interesting case studies, while others remain under the scope of NDAs.

This post intentionally aims at providing a state-of-the-art analysis. We soon expect to be in a position to release more specific and possibly controversial articles where a bolder vision will be illustrated. [Get notifications](https://www2.erlang-solutions.com/blockchaindev) when more content gets published - you know the drill, we need your contact details - but we are not spammers!
erlangsolutions
139,483
Shawn Wang on His Involvement in Open Source: I Look for Projects That Will Die if I Don't Get Involved
We talked to Shawn Wang, a full-stack developer who works on Developer Experience at...
0
2019-07-17T06:45:15
https://dev.to/gitnation/shawn-wang-on-his-involvement-in-open-source-i-look-for-projects-that-will-die-if-i-don-t-get-involved-5fn3
react, reactnative, typescript
### We talked to Shawn Wang, a full-stack developer who works on Developer Experience at Netlify, helps moderate [/r/reactjs](https://reddit.com/r/reactjs), and teaches React and TypeScript at Egghead.io

*Shawn Wang, a proud full-stack developer and, as he calls himself, an infinite builder from Netlify, has talked to React Advanced about his web development career, projects in open source, the decision to study Machine Learning, and the importance of building and being active in the community. Shawn is coming to London to give a talk at [React Advanced Conference](https://reactadvanced.com/?utm_source=blogpost&utm_medium=devto&utm_campaign=interview), Oct 25, 2019.*

![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/ipnp76f2g1gplvy2o7yh.png)

#### Hello Shawn, and welcome to the interview with React Advanced! Please share your story. How did you get passionate about web development?

I used to work in finance and was basically an "Excel monkey": I wrote financial models in Excel, ran the numbers, and tried to make decisions based on those numbers. Eventually, handwriting formulas got too much, so I learned VBA. Then my spreadsheets got so big that Excel started crashing, so I learned Python and Haskell to do the number crunching. All of it was informal, learn-on-the-job type of stuff.

In the end, I realized that I enjoyed the coding part of the job the most; however, I was also the bottleneck - if people needed some analysis done, they yelled at me and I hit the button. I finally thought that if I only learned to make user interfaces, I could take myself out of the equation and start writing actual software products people would buy and use. That was my seven-year journey towards realizing I wanted to do web development.

#### Can you please describe your previous work experience culminating in your current position, working on Developer Experience at Netlify?

I had held only one dev job prior, maintaining a design system at a large hedge fund in NYC.
It was good, but not great. I started focusing on the React community a lot more seriously in 2018, becoming active in meetups and on Twitter/Reddit. I became a moderator on /r/reactjs and got accepted for my first conference talk in August. I did all of it in my free time. Eventually, that got me noticed by Netlify, who were looking for this kind of community involvement and React expertise.

{% twitter 1122992717884215296 %}

#### How did you get involved with egghead.io? Do you think mentoring and teaching is your ultimate calling?

I got invited by Joel, one of the egghead.io founders. I simply took a project I was working on and turned that into a course on Storybook, React and TypeScript, and that did very well. I'm not sure teaching is my ultimate calling since it requires a lot of patience, but I enjoy doing a bit of it, and egghead is a fantastic place to do it.

#### How would you describe your involvement in Open Source? How many projects have you contributed to?

Open Source is important because it lets us learn for free and also dramatically lowers the cost of development. My first big contribution was to React, and I documented the process in a talk [now featured in the React docs](https://reactjs.org/docs/how-to-contribute.html#introductory-video). I have no idea how many projects I have contributed to. What matters to me now is that I go deep rather than broad. I also look for projects where no one else is involved (so nothing would happen, or the project would die, if I didn't get involved), rather than projects that don't need me (like React).

{% youtube rPuwZJEA-9U %}

#### What's behind React TypeScript Cheatsheet? Why did you feel compelled to write it? Why TypeScript as opposed to JavaScript?

I felt compelled to make it because I was learning TypeScript for work and I felt the official docs did not serve my needs very well. So I just made my own cheatsheet of tips I picked up, because I found myself constantly looking stuff up.
Eventually, other people started contributing and now it has blown up into a whole set of cheatsheets. I think the tagline is very appropriate: TypeScript is JavaScript that scales. The common criticism of TS is that it requires a build step and it may be replaced by official JS types in the future. For my purposes, these costs are small, and the benefits far outweigh the costs. [38% of production bugs at Airbnb could have been prevented if they used TypeScript](https://twitter.com/swyx/status/1093670844495089664). People who think this could have been addressed with more tests seriously underestimate Airbnb testing culture, and also discount how types can complement tests. In 2019, the burden of proof is no longer on TypeScript advocates. #### Are you studying Machine Learning? Why did you decide to study the subject? I am indeed taking some ML courses. I think my impact is enhanced by leverage. There are many forms of leverage, but the software, in particular, offers leverage through automation and scalability. This is very attractive to make use of. I don't intend to be a professional ML engineer but I think its anticipated importance in my lifetime warrants some study now. In particular, I am interested in computer vision ([which is unreasonably effective](http://karpathy.github.io/2015/05/21/rnn-effectiveness/)) and [generative adversarial networks](https://thispersondoesnotexist.com/). Additionally, I am doing it via the GATech OMSCS, which will help give me some formal qualifications for this second career. (Completely worthless except for immigration bureaucrats to check a box.) #### What talks have you given in recent years? Why do you think it's important to participate and organize conferences within the communities? 
These are all the talks I've given in recent years, so if anyone's interested, please take a look: https://www.swyx.io/talks/ I love to participate in conferences to meet people and to Learn in Public, but I would probably never organize conferences because it is so much work! {% youtube -f-rZepNKW0 %} #### If you could organize the world in one of three ways - no scarcity, no problems, or no rules - which way would you do it? No scarcity. Life would be boring with no problems, and chaos with no rules. At least with no scarcity, our problems would merely be "first world problems." But no child has to go hungry. #### Are you excited about the upcoming conference in London? What are you going to talk about and what are your expectations from the event? Yes! Very! I'll be speaking about React Hooks under the hood, where we will live code a React clone from scratch to practice closures and establish a great mental model for understanding how Hooks work. {% twitter 1144686353113866241 %} ### [Get a Regular ticket for the conference](https://reactadvanced.com/?utm_source=blogpost&utm_medium=devto&utm_campaign=interview) --- *The interview was prepared with the assistance of Marina Vorontsova, a copywriter from [Soshace.com](https://soshace.com/). Soshace is a hiring platform for web developers: hire a developer or apply for a remote job.* *** ## About GitNation [GitNation](https://gitnation.org/#about) is a foundation contributing to the development of the technological landscape by organizing events which focus on the open source software. We organize meaningful and entertaining JavaScript conferences and meetups, connecting talented engineers, researchers, and core teams of important libraries and technologies. Besides offering [single conference tickets](https://gitnation.org/#events), the organization also sells a GitNation Multipass providing discounted access to multiple [remote JavaScript conferences and workshops](https://portal.gitnation.org/multipass).
gitnation
139,598
DISCUSS: What are you loving about "on-call" in your role?
Hey there! Jay here, host of On-Call Nightmares Podcast and Azure Cloud Advocate. Are you current...
0
2019-07-12T13:45:27
https://dev.to/azure/discuss-what-are-you-loving-about-on-call-in-your-role-1pcp
mentoring, beginners, infra, discuss
Hey there! ![](https://thepracticaldev.s3.amazonaws.com/i/3k39qaepqkpql4wl2pa4.gif) Jay here, host of [On-Call Nightmares Podcast](https://oncallnightmares.podomatic.com/) and [Azure Cloud Advocate](https://developer.microsoft.com/en-us/advocates/index.html?WT.mc_id=devto-blog-jagord). Are you currently on-call in your role? Let's discuss some of the aspects of being on-call that you find are the most valuable. What are you loving right now about the on-call practice in your current role? I believe on-call is an incredible learning resource for teams in technology because it requires you to create some structure in how to handle failure. Share with me and let's discuss how we can lift up on-call work and not make it feel like it's such a "nightmare." Leave your comments below! While you're at it, make sure you check out the [free Azure $200 in credits and 12 months of free services.](https://azure.microsoft.com/en-us/free/?WT.mc_id=devto-blog-jagord) That's a ton of cool resources to start demoing that app you might have! The tutorial, [Create a Node.js web app in Azure](https://docs.microsoft.com/en-us/azure/app-service/app-service-web-get-started-nodejs?WT.mc_id=devto-blog-jagord), is a great way to start learning.
jaydestro
139,889
Hacking 101
What does it actually mean to be a hacker? Well here's a clip of a dark web hacke...
0
2019-07-24T01:02:33
https://dev.to/spencerlindemuth/hacking-101-122c
hacking, security
# What does it actually mean to be a hacker? #### Well, here's a clip of a dark web hacker in action: ![Hacker](https://media.giphy.com/media/bv4U1HR7W2XQc/giphy.gif) Hacking is clearly the precise combination of finding the right ski mask and clever use of your banana hand. The people behind the Equifax data breach in 2017 penetrated the firewall by using a technique called multi-port exploitation, where they cleverly used a whole _bushel_ of bananas to target numerous ports at once. All jokes aside, the Equifax breach, which leaked the names, social security numbers, addresses, birth dates, and driver's license numbers of over _145 million_ Americans alone, was carried out by exploiting a flaw in [Apache Struts](https://en.wikipedia.org/wiki/Apache_Struts_2), bad encryption techniques by Equifax, and insufficient breach detection mechanisms. So is this what __hacking__ is? Finding __flaws__ in __systems__, and __exploiting__ them using __advanced__ programming techniques? It certainly can be! While those are a lot of buzzwords we hear on TV about hacking, it can also be a plethora of other things! It really boils down to the age-old question... Is posting to your friend's Facebook feed while they use the bathroom hacking? ![Hacked](http://images.junkee.com/wp-content/uploads/2014/07/hack.jpg) Hacking is defined as: "The gaining of unauthorized access to data in a system or computer". By definition this means that posting on your friend's account _is_ hacking! Wow! That means we are all hackers! So with this newfound knowledge about ourselves, it should be _pretty easy_ to steal some data from one of the world's largest credit reporting agencies! We just have to wait for the system administrator to get up to use the bathroom! 
While this may sound facetious and far-fetched, it brings us to the most overlooked __hacking__ technique: ## Social Engineering Social engineering is: The use of deception to manipulate individuals into divulging confidential or personal information that may be used for fraudulent purposes. That sounds really vague, so let's break it down into a couple of examples. #### Phishing: Phishing is the use of deceptive websites and emails to coerce personal information out of people to gain _unauthorized access_ to their accounts. ![Imgur](https://imgur.com/axJNvDp.jpg) If we dig a little deeper into this email we gain some insight on just what phishing is. First we notice that the address we received this email from is _close_ to wellsfargo.com but snuck an extra 's' in there to try and get the recipients to glaze over it without a second glance. Next we have an embedded hyperlink with the _text_: "wellsfargo.com/account", but if you hover over the link, you see that it actually links to "http://www.some-sketchy-site-that-looks-like-wells-fargo.com". A good phisher will send you to a site that looks exactly like the target site's login page, then after you type your credentials into an input field, log them in plain text and tell the user there was an error logging in, to get _even more_ password variations out of the user, or redirect to the actual target site, and high-five themselves for successfully stealing information. Next we see a phone number for a customer service line. Well, if we are being _smart about our security_ we will call the customer service line just to make sure this email is legit! So we pick up our landline and dial the number here on the screen, and get a very nice voice on the other end of the line, who is almost _too_ eager to have us verify some account information, before telling us the link is indeed legit! Now you feel safer! Now you've given enough information to the __hacker__ to gain access to your bank... And probably all your social media accounts... 
And your email... _And your work email..._ And your work computer... And now this is a full-blown corporate attack. #### Front-door social engineering: Front-door social engineering is spoofing RFID tags, following people through doors and onto elevators, or pretending to be someone else to gain access to a system illegitimately. There's a famous saying: - "If you have a clipboard and a safety vest, you can walk in anywhere" The same thing goes for I.T. people, and system administrators, and janitors, and anyone that works in a corporate office really. Someone wearing a white button-up and a tie could walk into a corporate office and say, "Did someone on this floor open an I.T. ticket?" Someone raises their hand, then the imposter comments, "This might take a few minutes if you want to go grab a coffee." Now this imposter is working unsupervised on a workstation copying user data and emails to a flash drive. This sounds far-fetched but it's a surprisingly common method of "hacking". EY’s Global Information Security Survey 2017 found 74% of cyber attack sources are careless or unaware employees. [[1]](https://www.ey.com/Publication/vwLUAssets/GISS_report_2017/$FILE/REPORT%20-%20EY%20GISS%20Survey%202017-18.pdf) This is an exploit of a system, but not one that reads 1's and 0's and communicates over Ethernet. This is the exploitation of people and their central nervous system. People are desperate to please, help, and avoid consequences, and _hackers_ exploit this daily, because it's much easier than filtering through millions of lines of code on the internet to find a hole. ## Cyber attacks Don't fear! This isn't one of those articles where I lure you in with a topic like hacking and then tell you about how it isn't what you think it is, thus leaving you high and dry. There is still a large number of cyber attacks carried out on a daily basis. This is _hacking_ in a more traditional sense. 
This is the person in an abandoned warehouse surrounded by servers with 9 monitors in a grid suspended in the air, typing in a bash terminal, that you see in the movies. Hacking in this sense, when broken down into smaller bits, is pretty simple and straightforward, although the techniques are not. To put it in layman's terms, hacking in this sense is finding an error in some code and using it to gain unauthorized access to data or execute code on a remote machine. This is also a vague description, so let's break it down. #### Remote Code Execution Remote code execution is every hacker's dream. A hacker finds a way to send code to a machine, which runs the code, returning information or opening up doors for bigger exploits. A good example is the 2015 Android [Stagefright](https://en.wikipedia.org/wiki/Stagefright_(bug)) media server exploit, or the 2014 [Shellshock](https://en.wikipedia.org/wiki/Shellshock_(software_bug)) hack, which is described as "causing Bash to unintentionally execute commands when the commands are concatenated to the end of function definitions stored in the values of environment variables." The technical explanation is that when Bash opens a new instance, it reads a list of predefined functions from environment variables and executes them without verifying they were created legitimately. This was exploited through many different vectors, such as OpenSSH and DHCP requests, where a hacker could append specialized privilege-escalation functions onto a request to the server, which would open Bash to execute the initial command, fail to verify the appended code, and run all the commands haphazardly. #### Code Injection Code injection is the exploitation of a computer bug that is caused by processing invalid data, and it is part remote code execution and part system manipulation. This is the category that script kiddies fall into, and it lands high on the list of most common attacks. 
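The "processing invalid data" idea above is easiest to see with the most common flavor of code injection, SQL injection. Here is a minimal, self-contained sketch against a throwaway in-memory SQLite table (all the table and column names are made up for illustration):

```python
# Hypothetical sketch: the same user input is harmless when bound as a
# parameter, and destructive when pasted directly into the SQL string.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

evil = "' OR '1'='1"  # classic injection payload

# Vulnerable: input concatenated into the query -> condition is always true,
# so every row comes back.
leaked = db.execute(
    "SELECT name FROM users WHERE name = '" + evil + "'"
).fetchall()

# Safe: input bound as a parameter -> treated as a literal string, matches nothing.
safe = db.execute("SELECT name FROM users WHERE name = ?", (evil,)).fetchall()

print(leaked)  # [('alice',)]
print(safe)    # []
```

The single-character difference between the two queries (string concatenation versus a `?` placeholder) is exactly the "lack of input validation" that makes this class of attack so common.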
Code injection covers SQL injection, scripting, and packet spoofing. This is the kind of hacking that takes advantage of poorly written code and a lack of input validation. An example is the [Heartbleed](https://en.wikipedia.org/wiki/Heartbleed) bug (introduced in 2012, disclosed in 2014), which was eventually patched with _just one line of code_! The Heartbleed attack was an exploit carried out against OpenSSL, the library used to encrypt web traffic across the world. To verify the SSL connection was still open, the client would send a "heartbeat" packet to the server, which would then respond with a copy of the message using the length the client claimed, but the server wouldn't verify whether that claimed length matched the actual payload, instead leaking adjacent memory back to the client. A simplified explanation: ![heartbleed](https://upload.wikimedia.org/wikipedia/commons/thumb/1/11/Simplified_Heartbleed_explanation.svg/480px-Simplified_Heartbleed_explanation.svg.png) This exploit was fixed by this single line of code (or 56 characters for the curious): ```c if (1 + 2 + payload + 16 > s->s3->rrec.length) return 0; ``` That line isn't hard for even a beginner to read and understand, yet the effect radius of the exploit will forever be unknown. This brings us to Spencer's top tips for avoiding getting hacked or fired for being "careless and unaware": ### 1: Always lock your computers when you walk away from them! This seems obvious, but unlocked, unattended machines are more common than they should be! It's simple. ###### (⊞ Win + L) on PC, or (⌘ Command + ^ Control + Q) on Mac. Don't get data jacked while you're getting caffeine jacked! ### 2: This may be a hard reality to take in... But __NO__... That Nigerian Prince does _NOT_ want to share his fortune with you. Things in life that sound too good to be true almost always are. Especially when they are communicated over email! 
No one will send you an email telling you how much money you've just won, or about an all-expenses-paid cruise for 2 that you got so lucky to be selected for! Look at Publishers Clearing House! They at least have the decency to show up to your front door with a giant check! Banks and other institutions __DO NOT__ communicate private account information over email either. They may send you an information update verification to alert you if someone is tampering with your account, but they will not ask for any change to your personal information, or account verification over email, to avoid this exact scenario. When in doubt, call your bank _from the number on Google_, or in the _yellow pages_ if you are still on AOL. ### 3: Use a password manager! There are tons available at tons of different price points. (Again here, you get what you pay for. You might not think it's important, but would you buy your car from Wal-Mart, or your prescription meds from 7-11? Don't be cheap with your security!) Google has a password manager built into their programs now, iCloud has Keychain, and well-known third-party managers include [Keeper](https://keepersecurity.com/), [LastPass](https://www.lastpass.com/hp), and [DashLane](https://www.dashlane.com/). They can help you generate long, unique passwords for every account, while never forgetting or typing one again thanks to autofill, and also protecting you from brute-force attacks, or plain and simple weak password techniques like hunter2 (although all I see is ********). Even a super-computer will have a hard time guessing: __LdjgfdksjhgJDLRKJZSKLFJKL:hjt4io4wr8iro1euwrewpt8o43tuO*U$#*OU$O#I$J#_E)(R#$%*UEOIJDKLGSJDKFAJSDGFKJHFGUYT*U(RU#)$IU#(ru30tujsdklgjdkgjfskgjeoir#UR5q9oRU__ (I would have guessed Passw0rd first, but that string would have been my second guess) ### 4: Number four sucks. It's a tip that isn't really a tip, but maybe something to turn some gears in your mind to help deal with the inevitable reality. 
Sometimes you just can't avoid attacks. The identities compromised in the Equifax breach were unavoidable by the people actually affected. Equifax was collecting your information with the "consent" you gave by being born, or signing up for that low-limit/high-interest credit card at 18 so you could get 5% off at Macy's and buy cigarettes without your parents finding out. The tip here is __BE READY__ for when your information _is_ leaked. Put a credit freeze on your account to prevent criminals from opening lines of credit under your name. Change your passwords. Enable two-step verification for all of your accounts. Even accounts like your Snapchat, Instagram, and iCloud are still tied to other accounts, can provide verification for important accounts, or leak things to the public you didn't want getting out. It's a scary, hacked-together world out there... Don't get caught with your naughty pics on a Reddit thread, or the FBI going through your work computer at the Department of Labor because your computer gave hackers access to sensitive Department of Energy nuclear data.
spencerlindemuth
140,261
The CLI part 2: Interacting with Heroku in the command line
How to use the CLI to deploy to heroku
0
2019-07-16T15:20:47
https://dev.to/heroku/the-cli-part-2-interacting-with-heroku-in-the-command-line-4p6l
cli, bash, tips, webdev
--- title: The CLI part 2: Interacting with Heroku in the command line published: true description: How to use the CLI to deploy to Heroku tags: cli, bash, tips, web-dev --- A large part of what makes Heroku special is the user’s ability to execute a variety of tasks within the command line. Any user can create & scale apps, manage packages, adjust settings, and check the health of their Heroku app without ever loading the Heroku Dashboard. Now that we’ve taken a dive into the basics of the Unix command line (which was covered in [Part One](https://dev.to/heroku/the-cli-for-beginners-63f)), we can start using the Heroku CLI with it. Let’s dive in! > Before we get too far, please note that this article was written on a macOS laptop and some of the commands may be slightly different if you’re on a Windows or Linux machine. ### Installing the Heroku CLI and Logging In Before we do anything too exciting, we have to install the Heroku CLI in the command line. This is an easy task if you have Git and Homebrew installed. You will need to [sign up for Heroku](https://signup.heroku.com/) to get started :) You can use the Heroku installer to add the Heroku CLI to your system. [The Heroku CLI documentation](https://devcenter.heroku.com/articles/heroku-cli) goes over the installation options in more detail. > If the above method doesn’t work, you can use Homebrew on *most* computers. If you don’t have Homebrew installed, the instructions are [on the Homebrew site](https://brew.sh/). Run `brew tap heroku/brew && brew install heroku` inside of the command line and everything should be ready to go. ### Initialize your app You’ll need to initialize a git repository in the folder with your code. Navigate to the directory of a project that you’d like to host on Heroku and run `git init`. > A common point of confusion for most early developers is between git and GitHub. 
The software tool git can be used locally on your own machine, or with many different online (generally called “remote”) repositories. GitHub is the largest and most popular option for a remote repository service. In this tutorial we’re going to use git to create a local repository, and Heroku as our remote repository. ### Authenticating with Heroku Once you’ve got the CLI installed, you’ll need to log in to your Heroku account before you can make the most out of the tool. Run `heroku login` and enter your credentials to log in this way. Next, navigate to the directory of a project that you’d like to host on Heroku and run `heroku create` to create a new app. Doing this will generate a random name for your app, or you can assign a name by running `heroku create [app-name-here]`. ### Deploying your app to Heroku If you browse to the URL that was generated for you when you created the app, you just get a default page that says that your app is under construction. The next phase of deployment takes just a few minutes. We need to populate our app with some code. If your app isn’t already set up as a git repository, you’ll want to run `git init`. If you’re unfamiliar with git from the command line, this is [a solid guide](https://medium.com/@george.seif94/a-full-tutorial-on-how-to-use-github-88466bac7d42) that doesn’t go too deep into its complexity. Once you’ve created your Heroku app, run `git push heroku master` after committing to push your project directory to the URL that was established when you ran `heroku create`. Do note that there are some other configuration settings that must be adjusted for this to work depending on the language that your app is written in. Luckily, you can find information on working with a variety of languages in the Heroku [documentation](https://devcenter.heroku.com/categories/language-support). Heroku has beginner guides and “starter apps” available for seven different languages [in the Dev Center](https://devcenter.heroku.com/start). 
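Putting the steps above together, a typical first deploy from an existing project folder might look something like the following transcript (the app name `my-demo-app` is just a placeholder, and your output will differ):

```shell
# One-time setup inside your project folder
git init                      # create a local git repository
git add .
git commit -m "Initial commit"

heroku login                  # authenticate the CLI with your Heroku account
heroku create my-demo-app     # create the app and add a "heroku" git remote

# Deploy: push your committed code to the heroku remote
git push heroku master

heroku open                   # open the deployed app in your browser
```

This is a sketch of the workflow rather than a script to run blindly: the language-specific configuration mentioned above (a `Procfile`, dependency manifests, and so on) still has to be in place before the push will build successfully.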
### Running your app locally It takes perhaps half a minute to send your code up to the Heroku cloud. Pushing an app to Heroku just to have it break because of a configuration error can cause a headache. You’ll need to install the dependencies for your application locally before this step. With a Node app you can do this with `npm install`. The Heroku CLI offers functionality to run your app locally so that you can avoid this problem altogether. Run `heroku local` to launch the app and navigate to [http://localhost:5000](http://localhost:5000) in your browser to view the site. ### Other CLI features - Rename apps, view apps, get logs, and more The ability to create and deploy applications is an important aspect of the Heroku CLI, but it can do a variety of other things as well. If you want to rename your app, run `heroku apps:rename [newname]` in the app folder and swap out `newname` with the name that you want to change to. You can also see a complete list of Heroku apps that you’re a creator or contributor to by running `heroku apps`. If you’re running into issues and want to see the logs associated with your project, run `heroku logs` from the app directory. A complete list of commands and their functions can be found by running `heroku help`. The Heroku CLI is a versatile and powerful tool with a variety of use cases. Working with the client inside of the command line is a breeze once you’ve established some basic familiarity with it. If you’re ever looking to utilize a particular feature or need a refresher on some of the capabilities that the CLI has, you can always view the Heroku documentation [here](https://devcenter.heroku.com/categories/reference).
nocnica
140,460
Optimized Script loading for Browser
for more info- https://jasonformat.com/modern-script-loading/
1,485
2019-07-14T13:54:05
https://dev.to/64bitdev/optimized-script-loading-for-browser-1o9j
javascript, scriptloading, browser, perf
For more info: https://jasonformat.com/modern-script-loading/
64bitdev
140,484
Create Page in React
Page in React
0
2019-07-16T08:32:22
https://dev.to/aziziyazit/create-page-in-react-1mj6
react, reactpage, reactarchitecture
--- title: Create Page in React published: true description: Page in React tags: react, reactpage, react-architecture --- ## Page A Page can be defined as a container of modules. The Page is the closest common ancestor for its modules (container children). One of the important techniques when creating a page is [Lifting State Up](https://reactjs.org/docs/lifting-state-up.html). The reason we are not using the Context API in the Page is that we want to limit re-rendering at the page level. > When the nearest `<MyContext.Provider>` above the component updates, this Hook (useContext) will trigger a rerender with the latest context value passed to that MyContext provider. (React docs) [You can refer to the "Create Module in React" article here](https://dev.to/aziziyazit/create-module-in-react-5gg) ### Page as the closest common ancestor of ModuleA and ModuleB ~~~jsx function PageA() { const [stateA, setStateA] = useState('') const [stateB, setStateB] = useState('') return ( <div> <ModuleA setStateA={setStateA} stateB={stateB} /> <ModuleB setStateB={setStateB} stateA={stateA} /> </div> ) } ~~~ ### ModuleA ~~~jsx function ModuleA(props) { const { setStateA, stateB } = props const moduleAContextValue = {setStateA, stateB} return ( <ModuleAContext.Provider value={moduleAContextValue}> <ComponentA1 /> <ComponentA2 /> </ModuleAContext.Provider> ) } ~~~ ### ModuleB ~~~jsx function ModuleB(props) { const { setStateB, stateA } = props const moduleBContextValue = {setStateB, stateA} return ( <ModuleBContext.Provider value={moduleBContextValue}> <ComponentB1 /> <ComponentB2 /> </ModuleBContext.Provider> ) } ~~~ ### A component might look like this ~~~jsx function ComponentA1() { const moduleAContext = useContext(ModuleAContext) const { stateB, setStateA } = moduleAContext function handleClick() { setStateA('state A') } return ( <div onClick={handleClick}>{stateB}</div> ) } ~~~ The Page state and Module context can be illustrated like below: 
![](https://thepracticaldev.s3.amazonaws.com/i/g0iesl40ah2gvsr7e0if.png) > The Page manages the communication between modules using a technique called Lifting State Up, while a Module manages the communication between its components using the Context API (createContext & useContext) and useReducer. ### Next series In the next article, we will discuss managing communication between pages, where the "page communication manager" is either Redux, the Router, or a [Custom Hook](https://github.com/dai-shi/reactive-react-redux). ### Code All the code samples can be viewed here: [Codesandbox](https://codesandbox.io/s/react-architecture-c1rx2?fontsize=14)
aziziyazit
140,490
Typescript HOCs with Apollo in React - Explained.
Typescript HOCs with Apollo is pretty tricky. I don't know maybe it is just me but this… This is i...
0
2019-07-14T16:22:57
https://medium.com/@mickeyboston/apollo-typescript-hocs-in-react-explained-93032db3e89a?sk=ec5b1010055762efcfe69e4c95c9c1ea
react, typescript, graphql, apollo
TypeScript HOCs with Apollo are pretty tricky. I don't know, maybe it is just me, but this… ![example of graphql function from react-apollo](https://thepracticaldev.s3.amazonaws.com/i/d1ds615vyh10injr4xgc.png) This is intimidating, but it is undoubtedly necessary. How else do you want the compiler to check your props in and out of the wrapped component? This is how the VSCode helper describes the graphql function from react-apollo. TypeScript wants no harm, and to protect you from yourself. I will elaborate on and extend the examples from the Apollo GraphQL docs because they lack some use cases, like chaining HOCs or creating a query HOC with *config.name* and *config.props*. ### Let's first dive into graphql, the HOC creator {% gist https://gist.github.com/georgeshevtsov/33aacb7e3f74090a923db4ef231a49b0 %} 1. _TProps_ - an interface/type describing the so-called _InputProps_; notice it extends _TGraphQLVariables_. 2. _TData_ - the type for the response from the query. 3. _TGraphQLVariables_ - the type for the variables needed for the query/mutation. 4. _TChildProps_ - this one is generated for you based on _TData_ and _TGraphQLVariables_, unless you want it customized. **DataProps** are props generated for you from _TData_ and _TGraphQLVariables_, unless you provide a custom type. This is used for the query type of action. It packs all the properties useful for query control into the data object. {% gist https://gist.github.com/georgeshevtsov/0a6fc5903abb3749b370eab43b5501de %} **DataValue** is the one that raised questions for me, mainly because it wraps _TData_ with _Partial_. This design makes you check that data is not undefined in each and every consumer of the HOC. To avoid that, you can provide your own _TChildProps_. 
{% gist https://gist.github.com/georgeshevtsov/c78753626e18a1bdb1f6b6abe2fb36bd %} **QueryControls** are props that usually come packed into data; among them there is (once the request is resolved) the prop we typed with the response type. This is done via an intersection of _Partial<TData>_ and _QueryControls<TData, TGraphQLVariables>_. {% gist https://gist.github.com/georgeshevtsov/ed3e6c91be43fd18c5b4fe34d0de5998 %} I will not go into further detail dissecting _QueryControls_, because this information is enough to get by for the purposes of this article. In case you are prone to more exploration, feel free to clone [react-apollo](https://github.com/apollographql/react-apollo) and dig deeper. ## Let's get into the simplest query example with TS. Following the [official Apollo GraphQL doc](https://www.apollographql.com/docs/react/recipes/static-typing/) pattern, I want to fill in all the gaps that were not obvious to me the first couple of times I worked with these structures. Hopefully this article will help you not go crazy, and after reading it you will get one step closer to TypeScript mastery. ### Query HOC without Variables Let's first review the shortest variant possible. {% gist https://gist.github.com/georgeshevtsov/882c51b117a000c6f52686323805be7f %} **withBurgersNoChildProps** gets * _TProps_ - {} * _TData_ - _Response_ * _TGraphQLVariables_ - {} by default * _TChildProps_ - if omitted, will be generated for you as _Partial_ from the previous three types. This means data is an optional value, and you have to use the non-null assertion operator (**!**) everywhere. There is a way to avoid these checks. {% gist https://gist.github.com/georgeshevtsov/4784aa908acbca183ac188e4993ea6a5 %} **withBurgersWithChildProps** gets * _TProps_ - {} * _TData_ - Response * _TGraphQLVariables_ - {} by default * _TChildProps_ gets _ChildProps_; see how it uses _ChildDataProps_ from react-apollo. Let's see what is under the hood of the _ChildDataProps_ type. 
{% gist https://gist.github.com/georgeshevtsov/806116259c9b426c176b254ffee4e954 %} _ChildDataProps_ makes an intersection between _TProps_ and _DataProps_ using _TData_ and _TGraphQLVariables_. Notice that there is no _Partial_ around _DataProps_ this time, as there was in the graphql definition example in the default value for _TChildProps_. This means that data is definitely going to be present in the wrapped component, which is handy. ### Query HOC with Variables Here's an example of how to pass props to your wrapped component and be able to validate them. {% gist https://gist.github.com/georgeshevtsov/8ae3ffdbeeedc8e48caa62cb8383b0a5 %} To get the right burger, the API needs a name; otherwise the request is going to fail. I described _InputProps_ as an interface and extended it from _Variables_ to avoid code duplication. It is mandatory to have _Variables_ joined with _InputProps_; otherwise the TS compiler will not know which props - the _Variables_ - you need for your request in a graphql HOC. ### Query HOC with config.options *Options* is a way to map your incoming props in the HOC. For example, you can map props that are named in their own way to be treated as _variable props_ useful for the query request. {% gist https://gist.github.com/georgeshevtsov/f21a4e4b0000b9654675c154dd5de6fa %} Now there is no need to extend _InputProps_ from _Variables_, because the query request is going to be satisfied with a substitution. TS also checks types inside the options object declaration, so it will not let you use a property of a different type than a given _Variable_. ### Query HOC with Options.name The purpose of this: when you have multiple Query HOCs wrapped around one single component, the _data_ prop returned from all of them will eventually conflict, so you give every Query a specific _name_. For this example, all the knowledge from above has to be put to the test, because now we will write the result of the query into a custom specified property name. 
The **graphql** function is not going to type this one for us, so we need to type the result of the query request ourselves. {% gist https://gist.github.com/georgeshevtsov/e6e14b2fdf7029eb86cccc5bf4b28c79 %} **withBurgerWithName** - gets the name of _burgerRequest_. _burgerRequest_ will now store everything which was previously stored in data. To be able to type it, we need to remember how apollo types the data prop for us. We need to mimic the _ChildDataProps_ type; here is a shortened version of it. ``` type ChildDataProps = TProps & { data: DataValue<TData, TGraphQLVariables> } ``` Notice how the manually created _ChildProps_ reflects the structure of _ChildDataProps_ with data renamed to _burgerRequest_. ### Chained Query HOCs Now comes the fun part - typing the chain of HOCs. Many of you might know about the compose function from apollo. In TypeScript it throws all your typings out the window. Here's the definition of the compose function. ``` function compose(...funcs: Function[]): (...args: any[]) => any; ``` According to this, compose is a _curried function_. The first invocation accepts functions, the second invocation accepts anything in any quantity, and the return value is any. ![the error when using compose function with graphql hoc in typescript](https://thepracticaldev.s3.amazonaws.com/i/k29ub8lt9qfzziy0v8az.png) The aftermath: 1. _Props_ passed from outside are not validated (see _BurgerFactory_) 2. _Props_ passed from the HOC to the wrapped component are not typed (they have the any type) Let's fix the second point first. ![fix for incorrect prop typing in wrapped component](https://thepracticaldev.s3.amazonaws.com/i/qa6av99zg0oba9m3ncm8.png) All we need to do is type the props in the wrapped component explicitly, which you would naturally do should you use a standalone component instead of an arrow function. To fix the first point we just have to give up using compose with TypeScript. Giving up compose leads to the most straightforward option.
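Before the real example, the type loss can be sketched with toy stand-ins. The functions below are mine and purely illustrative: each just injects one prop, standing in for a graphql HOC, but they show how plain nesting keeps the prop types that an `any`-typed compose would throw away.

```typescript
// Toy stand-ins for graphql(...) HOCs. Sketch only, not react-apollo.
type Render<P> = (props: P) => string;

interface BurgerProps { burger: string }
interface BeverageProps { beverage: string }

// Each "HOC" injects one prop and removes it from the outer signature.
const withBurger =
  (inner: Render<BurgerProps & BeverageProps>): Render<BeverageProps> =>
  (props) => inner({ ...props, burger: "cheeseburger" });

const withBeverage =
  (inner: Render<BeverageProps>): Render<{}> =>
  (props) => inner({ ...props, beverage: "cola" });

// Plain nesting preserves every prop type along the way:
// `meal` is Render<{}>, and the innermost function sees both injected props.
const meal = withBeverage(withBurger((p) => `${p.burger} + ${p.beverage}`));

// A compose typed as (...funcs: Function[]) => (...args: any[]) => any
// would instead collapse `meal` to `any`, silencing all prop validation.
```

With nesting, forgetting a prop anywhere in the chain becomes a compile-time error instead of a runtime surprise.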
Let's review the example: we have two HOCs prepared. One fetches the beverage and goes by the trivial name _withBeverage_; the other one is our good old friend _withBurger_. I will show you how to jam them together. **withBurger** - this time no salad, things got really serious. {% gist https://gist.github.com/georgeshevtsov/5c04b2b12734aec69cdcdcdf8370ef8d %} **withBeverage** is familiar in outline but it satisfies the example. {% gist https://gist.github.com/georgeshevtsov/7d561939b75bf0dcd183b3f84fcfa1ce %} The combination without compose will look somewhat like this ``` withBeverage(withBurger(MealComponent)) ``` With the configuration of the above-described HOCs, the compiler will give us this error ![example of a typical error when combining two hocs in TS ](https://thepracticaldev.s3.amazonaws.com/i/43xo9tsef7643e7nj4ze.png) We are interested in the first paragraph. ``` 'ComponentClass<BurgerInputProps, any>' is not assignable to parameter of type 'ComponentType<BeverageInputProps & BeverageVariables & { beverageRequest: DataValue<BeverageResponse, BeverageVariables>; }>' ``` The line starting with _ComponentType<BeverageInputProps & …_ describes the component and its _PropTypes_ returned after the invocation of _withBeverage_. It conflicts with the _PropTypes_ of the component returned from the invocation of _withBurger_. To fix this compiler error, we need to intersect the returned type from _withBeverage_ with the incoming props type from _withBurger_. {% gist https://gist.github.com/georgeshevtsov/8b346da57c63e9d34e4cd27f6a30414a %} First, on line 2, I created an intersection of _BeverageInputProps_ and _BurgerInputProps_; this is needed to validate incoming props and get both requests running correctly. On line 9, I created an intersection of _BurgerInputProps_ & _BeverageChildProps_; by now you should understand that this one goes in the placeholder of _TProps_.
Remember the earlier conflict between the props returned from _withBeverage_ and the props received by _withBurger_: this way _withBurger_ will know that it expects not only the variables specific to the burger query but also some data from _withBeverage_. Well, that's all folks. I am thinking about doing the same explanation article about HOCs for Mutation, but I am not sure if I will make it before react-apollo releases their version with hooks for everything, because then everybody, including me, will completely forget about HOCs. Please feel free to ask questions, give pieces of advice, and suggest better approaches in the comments.
georgeshevtsov
140,509
Learning a New Language?
Is anyone currently learning a new language? What language and why? How are you going about...
0
2019-07-14T17:47:14
https://dev.to/dinocodes/learning-a-new-language-1ag6
watercooler
--- title: Learning a New Language? published: true description: tags: watercooler --- Is anyone currently learning a new language? What language and why? How are you going about learning it? I am currently learning Italian with my wife. My great-grandfather on my dad's side came from Italy and my grandfather would speak some Italian to us growing up. Currently, I am using Duolingo (42-day streak!) every day and practicing a little with my wife.
dinocodes
140,863
Incremental improvements can lead to significant gains
While reading the book Atomic Habits by James Clear, I was reflecting that my choice of embracing Ema...
0
2019-07-15T14:59:50
https://dev.to/shrysr/incremental-improvements-can-lead-to-significant-gains-48k
emacs, habits, productivity
--- title: Incremental improvements can lead to significant gains published: true tags: emacs, habits, productivity canonical_url: --- While reading the book [Atomic Habits by James Clear](https://jamesclear.com/atomic-habits), I was reflecting that my choice of embracing [Emacs](https://www.gnu.org/software/emacs/) and progressively gaining mastery over it was intimately connected with the philosophy preached in the book. My efforts initially started out with a craving for a system to quantify and manage my tasks, habits, notes, blog writing, job applications and projects in a custom environment, and to be able to build toolkits of code to perform repetitive tasks. As mentioned in an [earlier blog post](https://shrysr.github.io/post/2b0b2c79-3f6e-4079-a07d-9e382fda8954/), I tried several approaches before settling on Emacs. The idea was to find or create a single system to track everything of importance in my life (with ease and efficiency). This was instead of a fragmented approach of using multiple tools and techniques, for example, Sublime Text / Atom as a text editor and [Todoist](https://todoist.com/?lang=en) as a task management tool. I started with a vanilla configuration of Emacs and painstakingly borrowed (and eventually modified) lisp snippets to implement desired ‘features’ or behaviors. It was just a couple of features every week, initially focused on Org mode’s behavior alone. That was nearly 3 years ago. As of now, I am able to manage my blog [hugo], view my email [mu4e], browse the web [w3m], seamlessly capture notes / ideas / tasks from (almost) anywhere [Org mode], chat on IRC, and build multi-language code notebooks with ease [Org babel]. All the above provide me significant advantages in speed and efficiency, which still have plenty of potential to improve. Sure, I certainly appear closer to my goal today... however, I did not know if it was a pipe dream when I started out.
It was often extremely frustrating, right from memorizing the ‘crazy’ keybindings in Emacs, to struggling with getting a lisp snippet to work as expected. Choosing Emacs had unexpected rewards as well. For example, the need to synchronize my notes and Emacs configuration across multiple machines led me to Git. [Magit’s](https://magit.vc/) easily accessible commands and relatively visual interface have been a massive help in getting things done with Git, despite my not having any deep technical knowledge of how Git works. My journey with Emacs is a testament that an incremental, compounding improvement over time can ultimately result in significant gains. It is also important to acknowledge that I am standing on the shoulders of giants, and the awesome [Scimax](https://github.com/jkitchin/scimax) is a cornerstone in my toolkit.
shrysr
141,199
Displaying Geofences and Polygons on Google Maps
Well, the last few weeks I had been working in the redesign of LOCALLY Engage which will be launched...
0
2019-07-16T08:02:21
https://dev.to/danvoyce/displaying-geofences-and-polygons-on-google-maps-3lio
geofences, googlemaps, location
Well, for the last few weeks I have been working on the redesign of <a href="https://locally.io/products/engage">LOCALLY Engage</a>, which will be launched in the next few days! This feature was quite challenging and required me to use things I haven’t used for a long time! One of the things I had to do was plot a map to display the Beacons and Geofences of a certain customer. The scope was pretty simple in our case, as Geofences were always polygons or circles, while the beacons were represented by markers. The proposal for this dashboard was to make everything dynamic, which means that all the data had to be pulled with AJAX requests. (We decided against moving to React at this time because the legacy system we inherited was already built using CakePHP's templating language) The map info also needed to be pulled via AJAX and then displayed on the client side. After building all the functions in the back end, my decision wouldn’t have been different than to use the Maps JavaScript API, which is a very simple way to achieve this!
##Let’s do some coding!## **The Server response** Imagining that you have a JSON response like that for Beacons: ```javascript { "beacons":[ { "lat":-37.829996, "lng":144.967707, "address":"65 Coventry Street, Southbank, Victoria 3006, Australia" }, { "lat":-37.829857, "lng":144.967851, "address":"1021 / 65 Coventry street, Southbank, Victoria 3006, Australia" } ] } ``` And a JSON response like that for Geofences: ```javascript { "geofences": [{ "type": "POLYGON", "object": "POLYGON((-76.94644698125 38.74057911875,-76.33609541875 38.74057911875,-76.33609541875 39.35093068125,-76.94644698125 38.74057911875))", "description": "A nice Polygon!", "lat": 38.94403, "lng": -76.539546, "coordinates": [ { "lat": 38.74057911875, "lng": -76.94644698125 }, { "lat": 38.74057911875, "lng": -76.33609541875 }, { "lat": 39.35093068125, "lng": -76.33609541875 }, { "lat": 38.74057911875, "lng": -76.94644698125 } ] }, { "type": "RADIUS", "object": "POINT(-83.4885590000 42.5722669000)", "description": "An amazing circle!", "lat": 42.5722669, "lng": -83.488559, "radius": 0.010840289 }] } ``` ##Creating a Google Map## Let’s create a very basic map to display our objects: ```html <!DOCTYPE html> <html> <head> <meta name="viewport" content="initial-scale=1.0, user-scalable=no"> <meta charset="utf-8"> <title>Beacons and Geofences Map</title> </head> <body> <div id="map"></div> <script> function initMap() { map = new google.maps.Map(document.getElementById('map'), { center: {lat: -34.397, lng: 150.644}, zoom: 5, mapTypeId: google.maps.MapTypeId.ROADMAP }); } </script> <script async defer src="https://maps.googleapis.com/maps/api/js?key=YOUR_API_KEY&callback=initMap"> </script> </body> </html> ``` ##Displaying Beacons and Geofences## Now, displaying the Beacons, looping the response object and: ```javascript var markers = []; //Creating the icon var icon = new google.maps.MarkerImage('/path/to/icon/image.png', new google.maps.Size(32, 32)); //Let's display the Beacon address when clicked on 
the arrow var markerContent = '<p>' + beacon.address + '</p>'; var infoWindowMap = new google.maps.InfoWindow({ content: markerContent }); //Now the marker, our Beacon var marker = new google.maps.Marker({ position: { lat: beacon.lat, lng: beacon.lng }, draggable: false, icon: icon }); // When we click on the marker, show the window with the address marker.addListener('click', function() { infoWindowMap.open(map, marker); }); //Closing the info window when the mouse is out of the beacon google.maps.event.addListener(marker, 'mouseout', function() { infoWindowMap.close(); }); //Setting the beacon in the map marker.setMap(map); //Let's store these beacons in an array, so we can cluster them! markers.push(marker); //Clustering the beacons var options = {imagePath: '/path/to/cluster/image'}; var cluster = new MarkerClusterer(map, markers, options); ``` To display the geofences, it is as simple as it can be. Looping over the geofences object, we need to know what type of object we’ll create, as the API has different options and functions for polygons and circles: ```javascript //Configuration for objects of type POLYGON if (geofence.type == 'POLYGON') { var paths = geofence.coordinates; var polygon = new google.maps.Polygon({ paths: paths, strokeColor: '#FF0000', strokeOpacity: 0.8, strokeWeight: 2, fillColor: '#FF0000', fillOpacity: 0.35 }); //Set the polygon in the map polygon.setMap(map); //Plot the bounds var bounds = new google.maps.LatLngBounds(); for (var i = 0; i < paths.length; i++) { bounds.extend(paths[i]); } map.fitBounds(bounds); } else { //It is a circle!
var circle = new google.maps.Circle({ strokeColor: '#FF0000', strokeOpacity: 0.8, strokeWeight: 2, fillColor: '#FF0000', fillOpacity: 0.35, map: map, center: { lat: geofence.lat, lng: geofence.lng }, radius: geofence.radius * 1609.344, }); //Set the circle in the map circle.setMap(map); //Plot the bounds var bounds = new google.maps.LatLngBounds(); bounds.union(circle.getBounds()); map.fitBounds(bounds); } ``` ##Final Result, Map done!## ![](https://thepracticaldev.s3.amazonaws.com/i/kkdp8e8ymxvug4ouaxdf.png) The map should look as cool as that! The result is a very nice map combining Geofences and Beacons, and this info is handy for many things! The Google Maps JavaScript API really makes the task easy and reliable: with a few lines of code, you can show where your geofences are and what beacons you have around! <table data-number-column="false"><tbody><tr><td width="300px"><img src="https://media.licdn.com/dms/image/C5603AQHgKkH-76SKkw/profile-displayphoto-shrink_800_800/0?e=1568851200&v=beta&t=bESUPMv4-7uXiC8-fHZrNKuSG3wDv4ilmPbCvOdK1FM" ><p>&nbsp;</p></td> <td rowspan="1" colspan="1" data-colwidth="0"><p><strong>Lorena Santana - Platform Developer</strong></p><p>System analyst/Web developer for the last 8+ years with different technologies and methodologies. Experience with ERPs, CMSs and frameworks such as CakePHP, Bootstrap and Wordpress. Full stack developer, self-taught, with expertise in database modeling and maintenance, as well as requirements gathering. Experience with agile methodologies and working well under pressure. Experience with interns.</p><p>&nbsp;</p><p>&nbsp;</p></td></tr></tbody></table>
danvoyce
141,349
Angular 2+ vs ReactJS vs Vue.js
Intro Sometimes to make a choice regarding the Javascript framework might be not that easy...
0
2019-07-16T15:35:01
https://2muchcoffee.com/blog/angular-react-vue/
angular, react, javascript
##Intro <p align="justify">Sometimes making a choice regarding a JavaScript framework might not be that easy. Popular frameworks like Angular and React have gotten a serious competitor in VueJS. Of course, there are other frameworks; however, this article will focus on these 3 frameworks as the most well-known. A lot of developers at the beginning of their career ask themselves: what framework should I choose: React, Angular or Vue? Which one is better?</p> <p align="justify">As a developer at a web and mobile app development company, I have a lot of experience working with these frameworks and will share some useful information with you. This article will focus on an objective comparative analysis that incorporates the opinions of developers who code with the mentioned frameworks. They share why they chose a particular framework and what they like or dislike about it.</p> <p align="justify">Here is a table showing some general information on Vue vs ReactJS vs Angular.</p> ![general information on Vue vs Reactjs vs Angular](https://thepracticaldev.s3.amazonaws.com/i/lysxlkkpzp7hfcofj0w9.png) <p align="justify"><b>As a source we took Upwork, and here is a pie chart showing the open positions for Angular, Vue and React developers for October 2018:</b></p> ![diagram-showing-the-open-positions-for-Angular-Vue-and-React-developers](https://thepracticaldev.s3.amazonaws.com/i/b0z2b02tqnyrgafoq7gf.png) <p align="justify">Let’s consider all the benefits and drawbacks of each framework/library more specifically.</p> ## Advantages and disadvantages of Angular in mobile app programming **Pros of Angular:** <ol type="1"> <li><p align="justify">It can be scaled to large teams.</p></li> <li><p align="justify">Thanks to its structure, your mistakes can hardly cause catastrophic failure.</p></li> <li><p align="justify">There is a rich set of “out of the box” functions.</p></li> <li><p align="justify">MVVM (Model-View-ViewModel) allows many devs to work on the same application section and
use the same data set.</p></li> <li><p align="justify">Detailed documentation which helps your team (but it takes some time for devs to review it).</p></li> <li><p align="justify">The great community provides support and the core team creates new features and new versions of Angular.</p></li> </ol> **Cons of Angular:** <ol type="1"> <li><p align="justify">You need to learn TypeScript, which can be rather difficult.</p></li> <li><p align="justify">It’s not as flexible as other frameworks or libraries.</p></li> <li><p align="justify">You have to learn many new concepts, e.g. modules, directives, components, bootstrapping, services, dependency injection, etc.</p></li> <li><p align="justify">Angular performs manipulation directly on the DOM, while React and Vue do that through a virtual DOM.</p></li> </ol> **Comment of our Angular developer:** <p align="justify"><i>Vadim from 2muchcoffee:</i> When I started with Angular, the choice of front-end frameworks was not big. At first I disliked Angular’s difficulty. But now I’m glad to code on this framework. As for me, Vue and React are libraries, while Angular is a real framework with many features, and it’s enough for full-fledged app development. If one really gets acquainted with it, the development process goes fast. And there is one more pro: by choosing Angular you will surely find a job because of the strong demand.</p> ## Advantages and disadvantages of ReactJS **Pros of ReactJS:** <ol type="1"> <li><p align="justify">Its syntax is easy, so one can learn it quickly. </p></li> <li><p align="justify">React is much more flexible than Angular: it bundles together many useful concepts.</p></li> <li><p align="justify">It simplifies working with the DOM (using a virtual DOM) because it allows optimizing the eventual number of changes before updating and rendering the DOM.
It can search, remove and modify elements from the DOM tree quickly.</p></li> <li><p align="justify">React is an open source library that gets many updates from many developers.</p></li> <li><p align="justify">Developers can easily move from older versions to the latest ones.</p></li> </ol> **Cons of React:** <ol type="1"> <li><p align="justify">Sometimes React offers too much choice.</p></li> <li><p align="justify">For saving data you need to use Redux or another library, because you can hardly keep data in React itself.</p></li> <li><p align="justify">You won’t find “out of the box” functions and have to bundle additional functions yourself.</p></li> </ol> **Comment of our (2muchcoffee) React developer:** <p align="justify"><i>Alex from 2muchcoffee:</i> I started development with React 2 years ago, and you know, I learned it quickly. Before that, I had tried to learn Angular, but that framework had been rather difficult. I like React because it’s flexible and fast. The rendering process is much quicker. Well, the only thing I don’t like is that I always have to search for additional components.</p> ## Advantages and disadvantages of Vue **Pros of Vue:** <ol type="1"> <li><p align="justify">VueJS is easy to learn and work with.</p></li> <li><p align="justify">Its documentation is easy to understand.</p></li> <li><p align="justify">Developers rarely need extra libraries because this framework already has great built-in functionality.</p></li> <li><p align="justify">For creating SPAs (single-page applications), Vue is a good solution. Besides, smaller interactive parts are easily integrated into the existing infrastructure.</p></li> <li><p align="justify">Vue doesn’t weigh much but works quickly and stays flexible enough.</p></li> <li><p align="justify">It’s also based on a virtual DOM, just like React.
Nevertheless, it handles references to each node of the tree with finer control.</p></li> </ol> **Cons of Vue:** <ol type="1"> <li><p align="justify">The community is not very big, although it is quickly growing. The largest part of Vue’s community is from China. That’s why you’ll have to solve emerging issues on your own.</p></li> <li><p align="justify">As for me, job offers are important. And you’ll find a small percentage of Vue job offers.</p></li> </ol> **Comment of a freelance worker coding on Vue:** <p align="justify"><i>Anna:</i> I like Vue because it took all the good things from Angular and React. The docs are really clear, and it took me about one day to get the hang of Vue. I found a job as a freelance Vue.js developer and that’s enough for me. And yeah, I heard about all the problems with the lack of job offers and so on… but if you are good at coding you’ll find work anyway.</p> ## Conclusion <p align="justify">Now you can compare these frameworks by both their advantages and disadvantages. In short, Angular is the most convenient for complex big-data projects, React is the best for those who dislike constraints because it imposes hardly any, and Vue is like a golden mean between the previous two frameworks. As for us, they are all good enough, and which one to use is up to you, according to your requirements, goals and resources.</p> <p align="justify">Remember that there is no “bad” or “good” framework; there is the framework that is most suitable for a given project.</p> <p align="justify">If you plan to develop a startup idea yourself, first of all, you should find out which framework fits your project. You have to find the right tool for the right job. As a beginning developer, you may choose a framework depending on its functions, difficulty and the number of job offers. Try to work with each framework and decide which one fits you.</p> Liked that? We’ve done our best!
Go to our <u><a href="https://2muchcoffee.com/blog/"> blog </a></u> to find more useful articles.
2muchcoffeecom
141,548
Markdown Image Alignment
I'm having a problem with markdown images. It seems markdown only center align images. How do I left...
0
2019-07-16T23:33:05
https://dev.to/johntelford/markdown-image-alighment-3dek
gatsby
I'm having a problem with markdown images. It seems markdown ![]() only center-aligns images. How do I left-align markdown images?
johntelford
141,580
Removing Short Imports in NativeScript, the Easy Way (with VS Code)
As NativeScript continues to evolve, some of the features that made sense in 2015 are creating more t...
0
2019-07-17T03:42:58
https://dev.to/toddanglin/removing-short-imports-in-nativescript-the-easy-way-with-vs-code-660
nativescript, vscode, regex, javascript
As NativeScript continues to evolve, some of the features that made sense in 2015 are creating more trouble than they're worth in 2019. One such feature is the so-called "short import." Short imports were intended to improve the developer experience by saving a few keystrokes. Instead of this: ```typescript import { device } from 'tns-core-modules/platform' ``` A developer could simply use: ```typescript import { device } from 'platform' ``` It's convenient, but it has created unintended side effects. And with great tooling like VS Code that can automatically add imports, the value of this feature is low. So, now it's deprecated (as of {N} 5.2). [Read all about it on the NativeScript blogs](https://www.nativescript.org/blog/say-goodbye-to-short-imports-in-nativescript). ## Now what? While short imports still work, they will become a real problem with NativeScript 6.0, when webpack becomes the default and only available {N} build system. Webpack expects the full import path, so short imports quickly cause webpack problems. But if you are maintaining an app that has been around for a while, removing short imports could be quite a chore. In my case, the app I was updating had more than 275 short imports to fix! I started to fix the imports manually and _quickly_ realized that was a mistake. If only there were a way to automatically fix all these imports... ## VS Code (and RegEx) to the rescue Everyone knows that [VS Code has great search and replace tools](https://code.visualstudio.com/docs/editor/codebasics?WT.mc_id=devto-blog-toanglin#_search-across-files), but what may not be as obvious is that you can use regular expressions in the search _and_ replace. Simply toggle "Use Regular Expressions" on and use standard JavaScript regex in the Search, and use `$1`, `$2`, etc. to include matching groups in the Replace. _NOTE: To run really advanced regex searches with backreferences and lookaround (like we'll do today), you do need a setting change in VS Code.
Go to Settings > Features > Search > Use PCRE2 and check the box to enable these extra capabilities. [See the VS Code docs for more context](https://code.visualstudio.com/docs/editor/codebasics?WT.mc_id=devto-blog-toanglin#_search-across-files)._ And I know, I know. If you have a problem and use RegEx to solve it, now you have two problems. But it beats the tedious alternative. ### RegEx to find short imports The trick to building the right regular expression for this task is trying to find short imports for things that _should_ have the `tns-core-modules` prefix while _avoiding_ imports for "non-core" modules AND imports that already have the correct long format. Fortunately, most NativeScript modules follow a similar convention: - Core modules use "plain" names with no prefix (like `application` or `ui/layout`) - Non-core modules usually prefix the module name with `nativescript-` or `ns-` - Local modules referenced by path usually start with `./` or `../` or `~/` For example, we want to FIND imports that look like this: - `import * as imageSource from "image-source";` - `import * as util from "utils/utils";` - `import { Image } from "ui/image";` While AVOIDING imports that look like this: - `import { Settings } from "../models/app-models";` - `import * as firebase from "nativescript-plugin-firebase";` - `import { Color } from "tns-core-modules/color/color";` AND THEN, when we have a match, we want to insert "`tns-core-modules/`" in front of the import path while keeping the rest of the line the same. Put it all together, and we get this search regex: ```javascript (from \")(?!nativescript)(?!ns)(?!tns-core-modules)((?:[\w\-\/])+)(\") ``` Since regex always looks like a cat walked across your keyboard, let's explain in English: - Start finding imports by searching for `from "` (including the quote) (and store in matching group 1)... - Followed by any number of word characters OR `-` (dashes) OR `/` (whacks) (and store in matching group 2)...
- BUT ignore if the matching group contains the words `nativescript` or `ns` or `tns-core-modules` - These are likely non-core modules OR imports that already have the correct long format - And finally, match the trailing `"` (quote) This combo does a remarkably accurate job of finding the imports we want while ignoring everything else. You can [play with a working version of this regex using Regexr in your browser](https://regexr.com/4hj93) to experiment with what it matches and misses. When running the search in VS Code, to avoid false positives, you obviously also want to exclude files and folders that are not part of your code, like `node_modules`, `platforms`, `*.js` (assuming you're using TypeScript), `.git`. ### Replace syntax When the search finds a match to this pattern, it will create three matching groups with these values: 1. `from "` 2. `[name of matched module]` 3. `"` For example, when the following line is matched: ```typescript import { Image } from "ui/image"; ``` The matching groups will contain: 1. `from "` 2. `ui/image` 3. `"` We want to insert `tns-core-modules/` between group 1 and group 2. Using VS Code, our replace syntax is: ```javascript $1tns-core-modules/$2$3 ``` Boom! Job done. Almost... ## Another search and replace pass If you're using the plain JavaScript `require()` syntax to import modules, you clearly need a different search pattern. Even in my TypeScript-based app, I had a handful of imports done with `require()` for reasons specific to module implementation. Conceptually, the search regex we need is similar to above: ```javascript (require\(\")(globals)+(\"\)) ``` This pattern matches: 1. `require("` 2. `globals` 3. `")` You could modify the "globals" pattern if you need a wider search. In my case, this was the only `require()` in my app that needed the `tns-core-modules` prefix. The replace pattern is the same as before: ```javascript $1tns-core-modules/$2$3 ``` ## Wrap up Hopefully you've learned two things in this post: 1. 
Eliminating short imports in your NativeScript app doesn't have to be a tedious manual process 2. VS Code search and replace is REALLY powerful when combined with regex and matching groups
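As a final sanity check, the same search/replace can be previewed outside VS Code with a few lines of TypeScript; the `fixShortImports` helper name is mine, but the pattern and replacement are exactly the ones shown above.

```typescript
// Preview of the VS Code search/replace, run programmatically.
// Matches short imports while skipping plugins and already-long paths.
const SHORT_IMPORT =
  /(from ")(?!nativescript)(?!ns)(?!tns-core-modules)((?:[\w\-\/])+)(")/g;

function fixShortImports(source: string): string {
  return source.replace(SHORT_IMPORT, "$1tns-core-modules/$2$3");
}

// Core-module import gains the prefix; plugin and relative imports are untouched.
console.log(fixShortImports('import { device } from "platform";'));
console.log(fixShortImports('import * as fb from "nativescript-plugin-firebase";'));
console.log(fixShortImports('import { Settings } from "../models/app-models";'));
```

Running a quick script like this over a few representative files is a cheap way to confirm the pattern behaves before letting Replace All loose on the whole workspace.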
toddanglin
141,589
A Simple Introduction to git
Version Control Systems(VCS) are one of the most important things a developer needs to know about. Si...
0
2019-07-23T22:00:01
https://dev.to/dagasatvik10/a-simple-introduction-to-git-55df
beginners, productivity, git
*Version Control Systems (VCS) are one of the most important things a developer needs to know about. Since its release in 2005, git has continued to be one of the most widely used VCSs. So without further delay, let's get started with the basics of git.* The first thing to do is install git on whatever OS you are using. Let's see how it's done on Ubuntu. ```console $ sudo apt-add-repository ppa:git-core/ppa $ sudo apt-get update $ sudo apt-get install -y git ``` Check that git is installed on your system. You should see something like this. ```console $ git --version git version 2.20.1 (Apple Git-117) ``` **Great!** Let's make your first commit. Create a new project folder and initialize it as a git repository. ```console $ mkdir git-tutorial $ cd git-tutorial $ git init Initialized empty Git repository in /home/sa.daga/src/demo/git-tutorial/.git/ ``` There are 3 different areas where code can exist in git: * Unstaged * Staged * Committed Run the following command on the terminal in the git-tutorial directory. ```console $ git status On branch master No commits yet nothing to commit (create/copy files and use "git add" to track) ``` The `git status` command tells you which code is staged and which is unstaged. Create a new file in the git-tutorial directory and again run `git status`. ```console $ echo "test1" > a.txt $ cat a.txt test1 $ git status On branch master No commits yet Untracked files: (use "git add <file>..." to include in what will be committed) a.txt nothing added to commit but untracked files present (use "git add" to track) ``` As you can see, it shows that you have a file named **a.txt** in your directory and it is untracked. Untracked is git's way of saying that it currently does not care about the changes in this file. Changes in **a.txt** are currently in the unstaged area. *Note: Don't confuse untracked with unstaged. Files are untracked but changes are unstaged.* Now add this file to the staged area and again run `git status`.
```console $ git add a.txt $ git status On branch master No commits yet Changes to be committed: (use "git rm --cached <file>..." to unstage) new file: a.txt ``` **a.txt** has now moved to the staged area. Let's modify **a.txt** and again run `git status`. ```console $ echo "test2" >> a.txt $ cat a.txt test1 test2 $ git status On branch master No commits yet Changes to be committed: (use "git rm --cached <file>..." to unstage) new file: a.txt Changes not staged for commit: (use "git add <file>..." to update what will be committed) (use "git checkout -- <file>..." to discard changes in working directory) modified: a.txt ``` As you can see, `git status` tells us that **a.txt** has been modified. So the **a.txt** that contains *test1* is in the staged area, whereas the change *test2* is in the unstaged area. *Note: Output of git commands can contain information about what commands you can run next. Like in the output of the above command, git displays commands that you can run if you want to unstage a staged change, add unstaged changes to the staged area or discard changes in a file.* You can even see the modifications in each file by running `git diff`. ```console $ git diff diff --git a/a.txt b/a.txt index 9daeafb..8e042fb 100644 --- a/a.txt +++ b/a.txt @@ -1 +1,2 @@ test1 +test2 ``` Output of `git diff` shows that *test2* has been appended to **a.txt** on the line after *test1*. Let's add your first commit. Only staged changes are committed. ```console $ git commit -m "some commit message" [master (root-commit) 4d75ac6] some commit message 1 file changed, 1 insertion(+) create mode 100644 a.txt ``` The `git commit` command is used to add a commit. The **-m** option is used to pass a commit message which describes what changes are being committed. Replace *some commit message* with a custom message describing the changes. *Note: A commit message is necessary to add a commit.
If __-m__ option is not passed along with* `git commit`*, an editor(usually vim or nano) is opened where you have to write a commit message.* Again run `git status` followed by `git diff`. ```console $ git status On branch master Changes not staged for commit: (use "git add <file>..." to update what will be committed) (use "git checkout -- <file>..." to discard changes in working directory) modified: a.txt no changes added to commit (use "git add" and/or "git commit -a") $ git diff diff --git a/a.txt b/a.txt index 9daeafb..8e042fb 100644 --- a/a.txt +++ b/a.txt @@ -1 +1,2 @@ test +git ``` `git status` no longer shows *no commits yet* message. Also since the staged changes were committed, only unstaged changes are displayed in `git status`. `git diff` displays the changes in each file with respect to staged changes if they exist else with respect to committed changes. To see history of commits, you can run `git log`. ```console $ git log commit 4d75ac6ce5c533b2ec7bac5e0e00e112b2d5d417 (HEAD -> master) Author: dagasatvik10 <dagasatvik10@gmail.com> Date: Wed Jul 24 02:13:23 2019 +0530 some commit message ``` By default, with no arguments, `git log` lists the commits made in the project in reverse chronological order; that is, the most recent commits show up first. ## Wrap up Congrats on working your way through this tutorial! In it, we covered basic git commands like `git init`, `git status`, `git add`, `git diff`, `git commit` and `git log`. Keep exploring more about git and if you have any issues, don't hesitate to ask. *This was my very first blog, so please provide me with feedback on what can I do to improve.*
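P.S. For quick reference, the entire flow covered in this tutorial fits in a handful of commands. This is just a recap script of the steps above; the `git config` lines are only needed if you haven't already set your identity globally.

```shell
# Recap of the tutorial: init, stage, commit, inspect history.
mkdir -p git-tutorial && cd git-tutorial
git init
git config user.email "you@example.com"  # only needed if not set globally
git config user.name "Your Name"
echo "test1" > a.txt                     # new file: untracked, change is unstaged
git add a.txt                            # change moves to the staged area
git commit -m "some commit message"      # staged change becomes a commit
git log --oneline                        # compact one-line-per-commit history
```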
dagasatvik10
141,630
Design Systems (Part I: Foundations)
The web was built as a set of interconnected pages, and blossomed from how content was historically c...
1,622
2019-07-17T08:02:17
https://dev.to/emmabostian/design-systems-part-i-foundations-45hd
design
The web was built as a set of interconnected pages, and blossomed from how content was historically consumed: through books. Since books format content in a series of pages, it was only natural for web pages to leverage the familiar paradigm. Thus, web pages were born. Other technology terminology stems from printed books: bookmarks and pagination are two such examples.

And while the traditional concept of web pages worked for decades, we've quickly realized that this paradigm is no longer viable for building sustainable web applications. Many companies are in the throes of a paradigm shift towards more modular web applications through the use of reusable components. And while modular web applications are more scalable and testable, they can also provide some challenges.

Building component libraries is good practice, so long as all teams within an organization are developing and consuming one library; unfortunately this often isn't the case. More likely you'll see multiple teams building the same component in different places; this is a catalyst for application inconsistency.

This is where design systems shine. Design systems allow teams throughout an organization to define their own identity and bake it into accessible and consistent components. These are subsequently consumed by product teams and can have an invaluable impact on the success of a product. Let's delve into some of the foundational knowledge of design systems, and learn how your team can adopt them to build accessible, scalable, and consistent products.

![Question](https://cdn-images-1.medium.com/max/1600/1*u6sHZc6oUiNkK-KEm_4CNg.png)

# What Is A Design System?

A design system is a set of reusable standards and components which reinforce a brand's identity. These standards and components allow teams to efficiently build user interfaces with respect to accessibility, performance, consistency, and brand.

While the industry hasn't officially defined a design system, in general it's composed of three facets: a *design language*, a *component library*, and a *style guide*. We'll dive into each of these areas in-depth a bit later.

![Man](https://cdn-images-1.medium.com/max/1600/1*q9EwW7QMFCf1Onj5aWnCjw.png)

# Benefits Of A Design System

There are many benefits to establishing a design system. Here are some of the most notable.

### Accessibility

Design systems bake accessibility into the design language and component library, ultimately ensuring that every customer can use your product, or products, and achieve the same results. Through the design language we can ensure that the color palette has sufficient contrast, the typography scale is legible, and the content is digestible. These design language patterns are the foundation for building the component library, which ensures we leverage semantic HTML elements. And when HTML isn't sufficient, we incorporate WAI-ARIA to fill in the gaps. Accessibility is no longer an after-thought.

### Trickle-Down Style Updates

When a design pattern is updated, developers no longer have to struggle to update the components in multiple places. Design systems provide one source-of-truth for components and patterns. Thus, styles must only be updated in one place. The changes are then propagated to products with just a quick update to the library package version.

### Responsiveness

Component libraries are built to be responsive. They account for varying screen resolutions and viewports.

### Consistency

Arguably one of the most vital benefits of a design system is consistency. As the number of teams working on a product scales, the UI remains consistent.

### Easy On-Boarding

Having a design system significantly reduces the on-boarding time of new team members by providing one source-of-truth for them to learn. By providing comprehensive documentation in one easy-to-find location, we reduce the overwhelm of starting a new job.
Additionally, developers and designers can easily collaborate cross-team when there is one component library being used.

### Improved Development Speed

Once the component library has had a stable release, development speed will be drastically improved. Developers will no longer be burdened with building components from scratch and ensuring that they're accessible and responsive. They will simply be able to import and leverage these components.

![Clock](https://cdn-images-1.medium.com/max/1600/1*21XQR8XXHSkhHhxMOluZ4w.png)

# Drawbacks Of A Design System

While there are many benefits of a design system, there are also some drawbacks. Below are some of the most common.

### Time Consumption

Design systems aren't built overnight. Often they take many months to years to build a stable version. Additionally, design systems are never "done." There will be points within a system life cycle where the focus is primarily on maintenance instead of active development. And while these periods of maintenance take fewer designer and developer resources, the ultimate reality is that design systems are products, not projects; they must be nurtured to survive.

### Large, Up-Front Commitment

For a system to succeed, there must be an up-front investment of designers and developers. Often a lack of dedicated resources leads to system failure.

### Product Team Buy-In

Product teams are a design system's primary stakeholders. Without their buy-in and support a system cannot succeed.

![Credit cards](https://cdn-images-1.medium.com/max/1600/1*5DM3w5HAwMVgU21OjESMYQ.png)

# Who Is Building Design Systems?

There are many revered companies adopting the paradigm of design systems. The benefits of systems are being more widely acknowledged, causing industry leaders to pave the way for adoption. Below are some of the most widely known design systems being developed.

### Mailchimp

Mailchimp is a popular tool for marketing email campaigns. They have developed a comprehensive [design system](https://ux.mailchimp.com/patterns/color). They've even published a separate [content style guide](https://styleguide.mailchimp.com/) to help their employees build products with the persona of Mailchimp.

![Mailchimp](https://cdn-images-1.medium.com/max/1600/1*6it8yZpb4K5kt4u0pw4yYw.png)

### Material Design

Google [Material Design](https://material.io/design/) is one of the most notable systems to date. Not only does the style guide include foundations such as color and iconography, but their [component library](https://material.io/develop/) serves iOS, web, Android, and Flutter.

![Google](https://cdn-images-1.medium.com/max/1600/1*QayqhTIbeNgXCX2eMddCxA.png)

### IBM Carbon

IBM [Carbon](https://www.carbondesignsystem.com/) is another revered design system. They provide components in React, Vue, Angular, and of course, vanilla JavaScript.

![Carbon](https://cdn-images-1.medium.com/max/1600/1*kv-2MEKe0y21LJ5z5j3JDg.png)

### Atlassian

Atlassian's [design system](https://www.atlassian.design/guidelines/brand/color) is housed within a beautiful style guide. They provide many resources for brand, identity, and iconography.

![Atlassian](https://cdn-images-1.medium.com/max/1600/1*q1CK-2wA1UWxt7WIUrIQEw.png)

---

I hope part one served as a solid foundation of design systems. Part two will cover Design Language in-depth. Feel free to let me know what you think about design systems down below.

_All graphics are courtesy of [unDraw](https://undraw.co/illustrations)._
emmabostian
141,904
End to End testing with Selenium — Retrospective
Original article here: End to End testing with Selenium — Retrospective I was working on the QA auto...
0
2019-07-17T20:22:28
https://dev.to/neetjn/end-to-end-testing-with-selenium-retrospective-3o07
selenium, testing, qa, automation
Original article here: [End to End testing with Selenium — Retrospective](https://medium.com/p/f7673ce5035b)

I was working on the QA automation team at Transparent Language for roughly a year. I helped establish the team, as well as develop and mature their infrastructure, tools, and training processes. The following is an outlook on my greatest challenges and solutions to common misconceptions regarding testing with Selenium, as well as common problems with infrastructure.

# What is Selenium?

Selenium is a software testing framework commonly used for developing unit tests for web applications, end to end tests, or automating otherwise redundant tasks against websites or web applications. Selenium was originally developed in-house at ThoughtWorks, and was later polished and released as its most well known and widely used implementation, "Selenium WebDriver". The idea behind WebDriver is quite simple: an official API is maintained by the Selenium team; you download bindings for your favorite programming language (C#, Python, Java, etc.); you acquire a webdriver for the browser you would like to test against; and finally you have your Selenium bindings talk to your webdriver, which acts as a marionette for your target browser. The concept is quite clever, and Selenium allows developers to adequately test their applications with ease against any and all browsers.

Naturally, our development teams did utilize Selenium for basic unit tests, but we had numerous products that spoke with each other, and tasking our developers with writing true end to end tests while simultaneously developing new features and fixing bugs didn't make sense; especially because we had multiple development teams for our in-house projects. The time investment for learning how different components reacted with each other and creating all new tests that run against these assumptions would hinder further growth. Our solution was to create a dedicated team to develop all-encompassing tests.

# First hurdle - writing reusable code

My team and I decided to write our tests in Python, as this was the dominant programming language being used by our company for web services. My first hurdle when I began working with Selenium was designing clean and re-usable code. It's "standard practice" to follow the Page Object Model when leveraging Selenium, regardless of the bindings being used, and that makes sense. We began constructing generic objects for each page of our target web application that contained simple getters to find DOM elements and construct Selenium WebElement instances. This greatly helped reduce our lead time for developing new tests, as we could now simply instantiate defined page objects and leverage the robust Selenium bindings.

```python
class HomePage:

    def __init__(self, webdriver):
        self.wd = webdriver

    @property
    def notification_box(self):
        return self.wd.find_element_by_css_selector('div.notif')
```

Soon after establishing our page object methodology, however, we ran into a brick wall. We were developing a new suite of tests for a new application, B, but to access this application we had to navigate through the application we'd previously written tests for, A. To solve this we began creating client packages for each product we tested against, which we would export to be consumed by other projects that required definitions for foreign web pages.

```python
from unittest import TestCase

from selenium import webdriver

from products.barfoo.pages import UserPage
from pages.foobar import HomePage


class AppTests(TestCase):

    def setUp(self):
        self.webdriver = webdriver.Chrome()

    def tearDown(self):
        self.webdriver.quit()

    def test_user_page(self):
        home_page = HomePage(self.webdriver)
        user_page = UserPage(self.webdriver)
        home_page.user_profile.click()
        self.assertEqual(user_page.username.text, "john")
```

# Growing pains

As my team picked up the pace, and our tests became more intricate and complex, we realized our setup made state management quite tricky. Our page objects became bloated with routine functionality, e.g. creating a new user, logging in, etc. We also had to monkey patch a lot of the core Selenium bindings to work uniformly with different browsers, due to discrepancies in adherence to the Selenium webdriver specification between webdriver maintainers. This led to inconsistencies across the board with all of our projects. We had to design a leaner codebase, a framework for designing our client packages and test suites. My initial approach was to create another layer of abstraction. I called this the PMC model (Page, Model, Controller).

![https://cdn-images-1.medium.com/max/800/1*huzV4q5IXmAp12pgXgp_jw.png](https://cdn-images-1.medium.com/max/800/1*huzV4q5IXmAp12pgXgp_jw.png)

An incredibly simple concept: Pages and Modals would be separate entities that only define elements; any auxiliary functionality or stateful logic would be handled by a Controller. Each product would have its own controller that serves as a marionette for its attributed project, allowing us to directly reference its attributed page or modal entities. When a new controller is instantiated, it trickles down a webdriver instance, as well as any configurations, to its child pages or modals. This webdriver instance is automatically patched to work uniformly across any browser.
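For illustration, the PMC layering described above might be sketched like this. The class names, selectors, and `login` helper are hypothetical assumptions for the sketch, not the team's actual code:

```python
# Hypothetical PMC sketch: pages/modals only define elements,
# the controller owns the (patched) webdriver and all stateful logic.

class HomePage:
    """Page: element definitions only, no business logic."""
    def __init__(self, webdriver):
        self.wd = webdriver

    @property
    def login_button(self):
        return self.wd.find_element_by_css_selector('button.login')


class LoginModal:
    """Modal: same idea as a page, scoped to a dialog."""
    def __init__(self, webdriver):
        self.wd = webdriver

    @property
    def username_field(self):
        return self.wd.find_element_by_css_selector('input#username')


class FoobarController:
    """Controller: marionette for one product."""
    def __init__(self, webdriver, base_url):
        self.wd = webdriver  # the webdriver instance trickles down
        self.base_url = base_url
        self.home_page = HomePage(self.wd)
        self.login_modal = LoginModal(self.wd)

    def login(self, username):
        # auxiliary, stateful behaviour lives here, not in the pages
        self.home_page.login_button.click()
        self.login_modal.username_field.send_keys(username)
```

A consuming test suite would then import and instantiate a single controller per product instead of n arbitrary page objects.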
The PMC model helped us develop tests faster, kept our business logic and web application models separated, and simplified our client packages. Instead of having to import and instantiate n arbitrary page objects, we now only had to import and instantiate a single controller.

Note: Our PMC model was later replaced by a framework I'd developed explicitly for Selenium testing with Python, py-component-controller.

# Infrastructure

As our test suites grew exponentially larger, we needed a leaner build pipeline. We were using Bamboo as our CD/CI tool of choice (later Concourse, to promote infrastructure as code), but our tests were bogging down our Bamboo agents, and cleaning up after ourselves proved to be much more tedious than expected. We also had to ensure our Bamboo agents were provisioned with our specific version of Python and other modules and services to run as intended. We resolved this by leveraging containers, namely Docker. Using Docker, we were able to ensure that our tests ran inside an isolated environment we had complete control over, and cleaning up was as simple as deleting containers after they had served their purpose.

Testing against multiple browsers also became increasingly difficult because we had decided to leverage the Selenium Grid and host our grid internally, as opposed to using services such as Sauce Labs which offer scalable Selenium infrastructure. We ran into a myriad of network related issues, and the manual intervention whenever an operating system or browser had an update was a drastic time investment. After coming to the conclusion that there was virtually no difference between testing against a virtualized browser versus a baremetal machine, we eventually made the decision to explicitly test against only Chrome and Firefox, which allowed us to ditch our grid setup and run tests even faster than we ever could have. We again leveraged Docker to run both Chrome and Firefox headlessly within a virtual container.

One other large challenge was ensuring consistency between both development and "production" test environments. To this effect, we used Vagrant to ensure each developer had a predictable development environment, so we could resolve issues as quickly as possible. We also leveraged Ansible to help automate redundant tasks such as installing or updating specific browsers or webdrivers.

# Decreasing lead time

With our infrastructure being almost entirely automated, and our code base adhering to proper standards with a single framework for developing tests - after months of maturing our processes, we had presumed our lead time would have dived significantly. We were wrong. Though these factors did help, and were ultimately the reason why we were able to reach a state where our tests and results could prove useful, a major factor in ensuring a smooth lead time from newly added feature to test in production was properly understanding the scope of our end to end tests. After a certain point, we began testing anything and everything we possibly could, considering how simple and streamlined it was to toss a new test into our existing suites. We had strayed away from the "happy path", or testing regular user interactions, and began creating ever more complex tests for edge cases that could and would otherwise be discovered by manual QA. We were using a waterfall approach, which had helped us develop our infrastructure and processes, but this also allowed for malleable tangents into unimportant features, which resulted in wasted time investment through over-engineering. The move to an agile workflow greatly helped structure and stabilize the team once we reached MVP.

It can be quite easy to get carried away when trying to establish projects like automated testing, because you must figure out what does and doesn't work, as well as the right tools to use. However, it's integral for overall productivity and value to understand that your project, however intricate and robust, is at the end of the day simply a suite of tests that will not be used or consumed by an end user. It is a tool to simply validate business logic and performance.

# Maintenance and cost

Once we reached MVP, maintaining our tests became quite simple. We had triggers to run tests against our internal test sites, as well as our live sites. Our tests would run during non-office hours, to give us feedback as soon as we entered the office. We also leveraged git web hooks in our build pipeline to trigger any particular test cases affected by any code base changes, which helped maintain a fast feedback loop. Utilizing Atlassian's Jira, the team was also able to coordinate with other development teams to ensure any breaking or major changes were promptly responded to.

Given our infrastructure, our only bottleneck was our given resources for running our test containers. Our containers ran both Chrome and Firefox, which made them rather CPU intensive - however, maintaining a single EC2 instance to run our tests on was much more cost efficient than relying on services like SauceLabs, which can become quite pricey when running tests in parallel.

# Key takeaways

Summarizing this post, the following is what I learned while on the automation team at Transparent Language:

* Automated tests are code. Properly maintaining large scale end to end tests requires DevOps practices, and a range of skills from basic web development to SysOps. Downplaying automated QA will lead to mediocre products, and can ultimately be a loss of revenue for your company if you spend months developing infrastructure that can't scale or offer you insightful feedback in a timely manner.
* The payoff is there, but it may not always be necessary. Automated end to end testing can be an incredibly useful resource. It can provide instant feedback when your application has been either deployed or updated. However, there is a significant time investment to handle this correctly. You must be prepared to not receive valuable feedback for months, depending on the size of the given product(s) you're testing against.
* Automating QA does not mean manual QA is not required or shouldn't be performed. As valuable as automated quality assurance can be, it can't account for use cases that you haven't explicitly defined. When new features are introduced into applications,
neetjn
141,912
The Name's CORS. Rack::CORS.
My latest project required using Rails to create an API backend. Ruby has a nifty --api flag that cre...
0
2019-07-17T22:41:42
https://dev.to/aidiri/the-name-s-cors-rack-cors-27og
beginners, ruby, rails, todayilearned
My latest project required using Rails to create an API backend. Rails has a nifty `--api` flag that creates a repo with only the folders you would need. It cuts down a lot of the extra folders you would get when normally creating a new Rails app. It also automatically adds a line into your gemfile to include Rack-CORS.

Following instructions, I uncommented `rack-cors` in my gemfile to make sure it was "working" and copy-pasted some extra code into my `application.rb` file "to allow CORS to work". Done. Cool.

As usual, I was absolutely baffled. What had I just done, what was anything actually doing, what did any of it mean and what even *was* the meaning of life?? I moved on and hoped I'd understand later on. I never really got that explanation. So I decided to research it myself. I regretted it instantly.

---

##What is CORS?

Anytime you want to bring resources into your web app that come from different [origins](https://developer.mozilla.org/en-US/docs/Web/Security/Same-origin_policy#Definition_of_an_origin), you'd implement Cross-Origin Resource Sharing, or CORS for short. CORS has standardized the way we retrieve cross-origin resources. How? …

>["...it uses additional HTTP headers to tell a browser to let a web application running at one origin (domain) have permission to access selected resources from a server at a different origin."](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS)

What this means is that CORS is going ahead and telling your browser which resources your app is allowed to access when it's trying to access a *resource that is not at its original domain* (aka cross-origin). And the only reason CORS even needs to do that is because of the Same-Origin Policy, which states that only resources from a single origin can be requested. Browsers abide by this policy by default for obvious security reasons. But that means we need to somehow figure out how to allow the cross-origin requests we do want to go through.
We do that by including the right CORS headers into our request.

*Side note:* JSONP (JSON with Padding) is another way to work around the same-origin policy. Because HTML script tags are an exception to the cross-origin restrictions, JSONP simply turns data into an HTML script tag to bypass the same-origin policy. It's admittedly a bit “hacky” so not really used anymore unless a browser doesn't support CORS. [*](https://dev.socrata.com/docs/cors-and-jsonp.html)

---

##What's Rack::CORS?

Rack::CORS is a middleware gem for Rack compatible web apps that helps you more easily implement CORS. It breaks things down and makes those cross-domain requests a cleaner and easier process. Plus, the error messages are super helpful.

###How To Use It

The [documentation](https://github.com/cyu/rack-cors) for Rack CORS is helpful in explaining what to add where, but if you don't know much about CORS in general, it can be difficult to understand what you're looking at.

First, as the documentation explains, you want to open your `config/application.rb` file and add something like this into your code:

```ruby
config.middleware.insert_before 0, Rack::Cors do
  allow do
    origins 'example.com', 'localhost:3000'
    resource '/publicStuff/*', headers: :any, methods: [:get, :post]
    resource '/myStuff/*', headers: :any, methods: :any
  end
end
```

- The first line uses `insert_before 0` so Rack CORS can run before any other middleware that might interfere with it.
- Line three is where you define which origins your app will accept resources from. This will be the domain name of the origin(s) you want to let through.
- Line four is doing the same thing except we're defining the resource(s), *or path(s)*, that we want to allow through.
- In the same line you've defined your allowed resource path(s), you'll define which request parameters to allow. Most likely you'll be using the methods and headers parameters. If you want to allow multiple methods, you'd use the bracket notation to list your methods: `methods: [:get, :post, :options]`

Let's look at the above example again and break things down.

```ruby
config.middleware.insert_before 0, Rack::Cors do
  allow do
    origins 'example.com', 'localhost:3000'
    resource '/publicStuff/*', headers: :any, methods: [:get, :post]
    resource '/myStuff/*', headers: :any, methods: :any
  end
end
```

- It allows requests from the `example.com` domain or from `localhost:3000`.
- The paths allowed are:
  - any path that starts with `/publicStuff/` and
  - the `/myStuff/` path.
  - These possible paths could look something like `example.com/publicStuff/records/` or `localhost:3000/myStuff/`.
- For the `'/publicStuff/'` path it allows requests with any header, but only allows requests with the GET and POST methods.
- Requests for the `'/myStuff/'` path are allowed with any method *and* any header.

---

If you're creating a small app you'll only ever run on your own computer or localhost, you might not need to worry about defining specific origins and routes:

You might have noticed the `'*'` wildcard is a sort of 'all things go' symbol. Using the `'*'` wildcard when defining the origin would allow all requests from any origin. Same would apply when using the wildcard for your resource paths. With resource paths, you can either allow any and all resource paths (`resource '*'`) or allow any extension of a specific path root (`resource '/examplepath/*'`). When defining parameters, you'll instead want to use something like `headers: :any` and not the `'*'` wildcard.

So if you were to exclusively use the `'*'` wildcard for your small localhost app it might look something like this:

```ruby
config.middleware.insert_before 0, Rack::Cors do
  allow do
    origins '*'
    resource '*', headers: :any, methods: :any
  end
end
```

That's nice and simple.

---

There's a lot more to CORS than what's written here, and it could be a whole blog post, or two, on its own.
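Concretely, what configurations like these translate to on the wire are response headers. Here's an illustrative sketch of the three headers CORS negotiation revolves around — the header names are real, but the values are hypothetical examples, and this is not how Rack::CORS is implemented internally:

```ruby
# Illustrative only: the CORS response headers the browser inspects.
# The values below are made-up examples, not Rack::CORS internals.
cors_headers = {
  'Access-Control-Allow-Origin'  => 'http://localhost:3000', # which origin may read the response
  'Access-Control-Allow-Methods' => 'GET, POST',             # which HTTP methods are allowed
  'Access-Control-Allow-Headers' => 'Content-Type'           # which request headers are allowed
}

cors_headers.each { |name, value| puts "#{name}: #{value}" }
```

When the browser sees headers like these on a response (or on a preflight response), it knows the cross-origin request your app made is allowed to go through.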
So if the concept still feels a little shaky, that's ok! There's a lot going on there that you don't necessarily need to understand to be a good programmer, but if you can understand what these few lines are doing, it'll make some of those fetch request errors make a lot more sense.

Happy Coding!

---

[Same-origin policy - MDN Web Docs](https://developer.mozilla.org/en-US/docs/Web/Security/Same-origin_policy#Definition_of_an_origin)

[Cross-Origin Resource Sharing (CORS) - MDN Web Docs](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS)

[CORS & JSONP](https://dev.socrata.com/docs/cors-and-jsonp.html)

[Rack CORS Documentation](https://github.com/cyu/rack-cors)
aidiri
141,963
Iris version 11.2 released
It is my pleasure and honor to spread the news about the new Iris release. As open-source proje...
1,532
2019-07-19T19:27:10
https://dev.to/kataras/iris-version-11-2-released-22bc
go, webdev, opensource, github
It is my pleasure and honor to spread the news about the new [Iris](https://github.com/kataras/iris) release.

<a href="https://iris-go.com"> <img align="right" width="169px" src="https://iris-go.com/images/icon.svg?v=a" title="logo created by @merry.dii" /> </a>

As open-source project authors and/or managers, we owe a lot to our users - our fellow end-developers who learn and work with our project, no matter how big or small it is, spreading its potential to their co-workers and learning the deep and nice parts of a whole programming language through our guidelines. We all get excited when one of our feature requests goes live in a popular project, right? This happens almost every week with Iris: every week a new user feature request is discussed, accepted and finally implemented.

Iris is not just another open source web framework written in Go. It is a Community. This release is not an exception to that long-term tradition.

Iris version 11.2 is done through [130 new commits](https://github.com/kataras/iris/compare/v11.1.1...v11.2.3) to the main repository and [151 commits](https://github.com/kataras/neffos/commits/v0.0.8) to its new websocket implementation, the [neffos](https://github.com/kataras/neffos) repository:

- 17 bugfixes and minor improvements
- 13 of those 17 bugfixes and improvements were reported and requested by its end-users themselves!
- 5 new features and major improvements
- all examples and middlewares are updated and tested with go 1.12

---------------

Let's start with the easiest-to-use feature for your daily development with Iris.

## Automatic Public Address with TLS

Wouldn't it be great to test your web application server in a more "real-world environment", like a public, remote address, instead of localhost? There are plenty of third-party tools offering such a feature, but in my opinion the [ngrok](https://github.com/inconshreveable/ngrok) one is the best among them. It's popular and tested for years, like Iris; in fact, it has ~600 stars more than Iris itself. Great job [@inconshreveable](https://github.com/inconshreveable/)!

Iris v11.2 offers ngrok integration. This feature is simple yet very powerful. It really helps when you want to quickly show your development progress to your colleagues or the project leader at a remote conference.

Follow the steps below to, temporarily, convert your local Iris web server to a public one.

1. Go ahead and [download ngrok](https://ngrok.io), and add it to your $PATH environment variable,
2. Simply pass the `WithTunneling` configurator to your `app.Run`,
3. You are ready to [GO](https://www.facebook.com/iris.framework/photos/a.2420499271295384/3261189020559734/?type=3&theater)!

[![tunneling_screenshot](https://user-images.githubusercontent.com/22900943/61413905-50596300-a8f5-11e9-8be0-7e806846d52f.png)](https://www.facebook.com/iris.framework/photos/a.2420499271295384/3261189020559734/?type=3&theater)

- `ctx.Application().ConfigurationReadOnly().GetVHost()` returns the public domain value. Rarely useful, but it's there for you. Most of the time you use relative url paths instead of absolute ones (or you should).
- It doesn't matter if ngrok is already running or not, the Iris framework is smart enough to use ngrok's [web API](https://ngrok.com/docs) to create a tunnel.
Full `Tunneling` configuration:

```go
app.Run(iris.Addr(":8080"), iris.WithConfiguration(
    iris.Configuration{
        Tunneling: iris.TunnelingConfiguration{
            AuthToken:    "my-ngrok-auth-client-token",
            Bin:          "/bin/path/for/ngrok",
            Region:       "eu",
            WebInterface: "127.0.0.1:4040",
            Tunnels: []iris.Tunnel{
                {
                    Name: "MyApp",
                    Addr: ":8080",
                },
            },
        },
    }))
```

## Routing: Handle different parameter types on the same path

Something like this works now without any issues (order: top as fallback)

```go
app.Get("/u/{username:string}", func(ctx iris.Context) {
    ctx.Writef("before username (string), current route name: %s\n", ctx.RouteName())
    ctx.Next()
}, func(ctx iris.Context) {
    ctx.Writef("username (string): %s", ctx.Params().Get("username"))
})

app.Get("/u/{id:int}", func(ctx iris.Context) {
    ctx.Writef("before id (int), current route name: %s\n", ctx.RouteName())
    ctx.Next()
}, func(ctx iris.Context) {
    ctx.Writef("id (int): %d", ctx.Params().GetIntDefault("id", 0))
})

app.Get("/u/{uid:uint}", func(ctx iris.Context) {
    ctx.Writef("before uid (uint), current route name: %s\n", ctx.RouteName())
    ctx.Next()
}, func(ctx iris.Context) {
    ctx.Writef("uid (uint): %d", ctx.Params().GetUintDefault("uid", 0))
})

app.Get("/u/{firstname:alphabetical}", func(ctx iris.Context) {
    ctx.Writef("before firstname (alphabetical), current route name: %s\n", ctx.RouteName())
    ctx.Next()
}, func(ctx iris.Context) {
    ctx.Writef("firstname (alphabetical): %s", ctx.Params().Get("firstname"))
})

/*
/u/abcd     maps to :alphabetical (if :alphabetical registered otherwise :string)
/u/42       maps to :uint (if :uint registered otherwise :int)
/u/-1       maps to :int (if :int registered otherwise :string)
/u/abcd123  maps to :string
*/
```

## Content Negotiation

**Sometimes a server application needs to serve different representations of a resource at the same URI**. Of course this can be done by hand, manually checking the `Accept` request header and pushing the requested form of the content.
However, as your app manages more resources and different kinds of representations, this can become very painful, as you may need to check `Accept-Charset` and `Accept-Encoding` too, apply some server-side priorities, handle the errors correctly, and so on.

Some web frameworks in Go have already tried to implement a feature like this, but they don't do it correctly:

- they don't handle accept-charset at all
- they don't handle accept-encoding at all
- they don't send an error status code (406 Not Acceptable) as the RFC proposes

and more...

But, fortunately for us, **Iris always follows the best practices and the Web standards**.

Based on:

- https://developer.mozilla.org/en-US/docs/Web/HTTP/Content_negotiation
- https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Accept
- https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Accept-Charset
- https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Accept-Encoding

```go
type testdata struct {
    Name string `json:"name" xml:"Name"`
    Age  int    `json:"age" xml:"Age"`
}
```

Render a resource with the "gzip" encoding algorithm as application/json or text/xml or application/xml:

- when the client's accept header contains one of them, or JSON (the first declared) if accept is empty,
- and when the client's accept-encoding header contains "gzip" or it's empty.

```go
app.Get("/resource", func(ctx iris.Context) {
    data := testdata{
        Name: "test name",
        Age:  26,
    }

    ctx.Negotiation().JSON().XML().EncodingGzip()

    _, err := ctx.Negotiate(data)
    if err != nil {
        ctx.Writef("%v", err)
    }
})
```

**OR** define them in a middleware and call `Negotiate` with nil in the final handler.

```go
ctx.Negotiation().JSON(data).XML(data).Any("content for */*")
ctx.Negotiate(nil)
```

```go
app.Get("/resource2", func(ctx iris.Context) {
    jsonAndXML := testdata{
        Name: "test name",
        Age:  26,
    }

    ctx.Negotiation().
        JSON(jsonAndXML).
        XML(jsonAndXML).
        HTML("<h1>Test Name</h1><h2>Age 26</h2>")

    ctx.Negotiate(nil)
})
```

[Read the full example](https://github.com/kataras/iris/blob/8ee0de51c593fe0483fbea38117c3c88e065f2ef/_examples/http_responsewriter/content-negotiation/main.go#L22).

The [Context.Negotiation](https://github.com/kataras/iris/blob/8ee0de51c593fe0483fbea38117c3c88e065f2ef/context/context.go#L3342) method creates once and returns the negotiation builder to build server-side available prioritized content for specific content type(s), charset(s) and encoding algorithm(s).

```go
Context.Negotiation() *context.NegotiationBuilder
```

The [Context.Negotiate](https://github.com/kataras/iris/blob/8ee0de51c593fe0483fbea38117c3c88e065f2ef/context/context.go#L3402) method is used for serving different representations of a resource at the same URI. It returns `context.ErrContentNotSupported` when it doesn't match the mime type(s).

- The "v" can be a single [iris.N](https://github.com/kataras/iris/blob/8ee0de51c593fe0483fbea38117c3c88e065f2ef/context/context.go#L3298-L3309) struct value.
- The "v" can be any value that completes the [context.ContentSelector](https://github.com/kataras/iris/blob/8ee0de51c593fe0483fbea38117c3c88e065f2ef/context/context.go#L3272) interface.
- The "v" can be any value that completes the [context.ContentNegotiator](https://github.com/kataras/iris/blob/8ee0de51c593fe0483fbea38117c3c88e065f2ef/context/context.go#L3281) interface.
- The "v" can be any value of struct (JSON, JSONP, XML, YAML) or string (TEXT, HTML) or []byte (Markdown, Binary) or []byte with any matched mime type.
- If the "v" is nil, the `Context.Negotiation()` builder's content will be used instead, otherwise "v" overrides the builder's content (server mime types are still retrieved by its registered, supported, mime list).
- Set mime type priorities by [Negotiation().MIME.Text.JSON.XML.HTML...](https://github.com/kataras/iris/blob/8ee0de51c593fe0483fbea38117c3c88e065f2ef/context/context.go#L3500-L3621).
- Set charset priorities by [Negotiation().Charset(...)](https://github.com/kataras/iris/blob/8ee0de51c593fe0483fbea38117c3c88e065f2ef/context/context.go#L3640).
- Set encoding algorithm priorities by [Negotiation().Encoding(...)](https://github.com/kataras/iris/blob/8ee0de51c593fe0483fbea38117c3c88e065f2ef/context/context.go#L3652-L3665).
- Modify the accepted by [Negotiation().Accept./Override()/.XML().JSON().Charset(...).Encoding(...)...](https://github.com/kataras/iris/blob/8ee0de51c593fe0483fbea38117c3c88e065f2ef/context/context.go#L3774-L3877).

```go
Context.Negotiate(v interface{}) (int, error)
```

## The new Websocket package

There are times when you simply can't improve something without a breaking change. After a year and a half without breaking changes, this version of Iris introduces two breaking changes for the best. The first one is the websocket module, which was fully re-written, and the second has to do with how you serve system (or embedded) directories.

The new websocket package, which is self-hosted at <https://github.com/kataras/neffos>, is the work of 4 months of daily designing, coding, re-designing and refactoring. Even there, from day zero, **users immediately started participating** by asking questions and making proposals.
Of course, as is our tradition, they were discussed (a lot) and are all available by now:

[Broadcast message to a Connection ID](https://github.com/kataras/neffos/issues/1#issuecomment-498013666)

![](https://thepracticaldev.s3.amazonaws.com/i/kzywdg6s1ku8uvsn8c0v.png)

[IDGenerator for Iris](https://github.com/kataras/neffos/issues/1#issuecomment-508689819)

![](https://thepracticaldev.s3.amazonaws.com/i/iad8w2370gh1s7d1couk.png)

[Server Ask method like Conn.Ask](https://github.com/kataras/neffos/issues/1#issuecomment-509024562)

![](https://thepracticaldev.s3.amazonaws.com/i/rhgavv5stehk8i80tuof.png)

![](https://thepracticaldev.s3.amazonaws.com/i/q2ozhqlxh9k8nz650nmt.png)

[Add a cron example](https://github.com/kataras/neffos/issues/1#issuecomment-511046580)

[Adapters support for scalability](https://github.com/kataras/neffos/issues/3)

![](https://thepracticaldev.s3.amazonaws.com/i/5j7r5unu2gu4uazaj504.png)

The new websocket implementation is far better and faster in all use cases than what we had previously, and without the bugs and the compromises we had to accept because of the no-breaking-changes rule of the previous versions.

Unlike the previous one, which had only a simple Go client, the new one provides clients for Go and Typescript/Javascript (both nodejs and browser-side), and anyone can make a client for any other language, C++ for example, with ease.

I can say that our new websocket module is very unique but feels like home, with a lot of preparation and prototyping under the hood. The result is worth the days and nights I spent on this thing -- of course, you - as a community - will prove that point, based on your feedback at the end of the day.

Let's see what the new version of the websocket package offers that the previous v11.1.x one couldn't handle.
| Feature | v11.1.x | v11.2.x (neffos) |
|---------------------|:----------------------|---------:|
| Scale-out using Nats or Redis | NO | YES |
| Gorilla Protocol Implementation | YES | YES |
| Gobwas/ws Protocol Implementation | NO | YES |
| Acknowledgements | YES | YES |
| Namespaces | NO | YES |
| Rooms | YES | YES |
| Broadcast | YES (but slow) | YES (faster than socket.io and everything else we've tested) |
| Event-Driven architecture | YES | YES |
| Request-Response architecture | NO | YES |
| Error Awareness | NO | YES |
| Asynchronous Broadcast | NO | YES |
| Timeouts | YES | YES |
| Encoding | YES (only JSON) | YES |
| Native WebSocket Messages | YES | YES |
| Reconnection | NO | YES |
| Modern client for Browsers, Nodejs and Go | NO | YES |

Besides the new self-hosted [neffos repository](https://github.com/kataras/neffos), the [kataras/iris/websocket](https://github.com/kataras/iris/tree/v11.2.0/websocket) subpackage now contains (only) Iris-specific migrations and helpers for the neffos websocket framework.

For example, to gain access to the request's `Context` you can call `websocket.GetContext(Conn)` from inside an event message handler/callback:

```go
// GetContext returns the Iris Context from a websocket connection.
func GetContext(c *neffos.Conn) Context
```

To register a websocket `neffos.Server` to a route use the `websocket.Handler` function:

```go
// IDGenerator is an iris-specific IDGenerator for new connections.
type IDGenerator func(Context) string

// Handler returns an Iris handler to be served in a route of an Iris application.
// Accepts the neffos websocket server as its first input argument
// and optionally an Iris-specific `IDGenerator` as its second one.
func Handler(s *neffos.Server, IDGenerator ...IDGenerator) Handler
```

**Usage**

```go
import (
    "github.com/kataras/neffos"
    "github.com/kataras/iris/websocket"
)

// [...]

onChat := func(ns *neffos.NSConn, msg neffos.Message) error {
    ctx := websocket.GetContext(ns.Conn)
    // [...]
    return nil
}

app := iris.New()
ws := neffos.New(websocket.DefaultGorillaUpgrader, neffos.Namespaces{
    "default": neffos.Events{
        "chat": onChat,
    },
})
app.Get("/websocket_endpoint", websocket.Handler(ws))
```

<!-- The `iris/websocket` subpackage also contains type aliases and function shortcuts for the neffos package, which makes your coding experience easier by not importing more packages than you need in your Go source files, e.g. all `neffos.Conn, neffos.NSConn, neffos.New, neffos.Dial, neffos.Namespaces, neffos.Events, neffos.Struct ...` can be written as `websocket.Conn, websocket.NSConn, websocket.Dial, websocket.Namespaces, websocket.Events, websocket.Struct ...`, `neffos/exchange/redis.NewStackExchange` as `websocket.NewRedisStackExchange`, `neffos/gorilla.DefaultUpgrader, Dialer` as `websocket.DefaultGorillaUpgrader, Dialer` and so on. In fact you don't even need to import the `github.com/kataras/neffos` package at all if you don't want to, just keep using the `kataras/iris/websocket` subpackage and you will be totally fine. -->

### MVC | The new Websocket Controller

The neffos package contains a feature to create events from Go struct values: its `NewStruct` package-level function. In addition, Iris has its own `iris/mvc/Application.HandleWebsocket(v interface{}) *neffos.Struct` to register controllers in existing Iris MVC applications (offering a fully featured dependency injection container for request values and static services), like the regular HTTP Controllers you are used to.

```go
// HandleWebsocket handles a websocket specific controller.
// Its exported methods are the events.
// If a "Namespace" field or method exists then namespace is set,
// otherwise empty namespace will be used for this controller.
//
// Note that a websocket controller is registered and ran under
// a connection connected to a namespace
// and it cannot send HTTP responses on that state.
// However all static and dynamic dependencies behave as expected.
func (*mvc.Application) HandleWebsocket(controller interface{}) *neffos.Struct
```

Let's see a usage example: we want to bind the `OnNamespaceConnected`, `OnNamespaceDisconnect` built-in events and a custom `"OnChat"` event to our controller's methods.

**1.** We create the controller by declaring a NSConn type field as `stateless` and write the methods we need.

```go
type websocketController struct {
    *neffos.NSConn `stateless:"true"`
    Namespace string
    Logger    MyLoggerInterface
}

func (c *websocketController) OnNamespaceConnected(msg neffos.Message) error {
    return nil
}

func (c *websocketController) OnNamespaceDisconnect(msg neffos.Message) error {
    return nil
}

func (c *websocketController) OnChat(msg neffos.Message) error {
    return nil
}
```

Iris is smart enough to catch the `Namespace string` struct field and use it to register the controller's methods as events for that namespace. Alternatively, you can create a controller method `Namespace() string { return "default" }` or use the `HandleWebsocket` return value to `.SetNamespace("default")`, it's up to you.

**2.** We initialize our MVC application that targets a websocket endpoint, as we used to do with regular HTTP Controllers for HTTP routes.

```go
import (
    // [...]
    "github.com/kataras/iris/mvc"
)

// [app := iris.New...]
mvcApp := mvc.New(app.Party("/websocket_endpoint"))
```

**3.** We register our dependencies, if any.

```go
mvcApp.Register(
    &prefixedLogger{prefix: "DEV"},
)
```

**4.** We register one or more websocket controllers; each websocket controller maps to one namespace (just one is enough, as in most cases you don't need more, but that depends on your app's needs and requirements).

```go
mvcApp.HandleWebsocket(&websocketController{Namespace: "default"})
```

**5.** Next, we continue by mapping the mvc application as a connection handler to a websocket server (you may use more than one mvc application per websocket server via `neffos.JoinConnHandlers(mvcApp1, mvcApp2)`).
```go
websocketServer := neffos.New(websocket.DefaultGorillaUpgrader, mvcApp)
```

**6.** And the last step is to register that server to our endpoint through a normal `.Get` method.

```go
mvcApp.Router.Get("/", websocket.Handler(websocketServer))
```

We will not cover the whole neffos package here; there are a lot of new features. Don't be afraid: you can still do all the things you did previously without a long learning process, but as you go further into more advanced applications you can achieve more by reading its [wiki](https://github.com/kataras/neffos/wiki) page. In fact there are so many new things that they are written in an e-book to which you can request direct online access, 100% free.

### Examples

* [Websocket Controller](https://github.com/kataras/iris/tree/v11.2.0/_examples/mvc/websocket)
* [Basic](https://github.com/kataras/iris/blob/v11.2.0/_examples/websocket/basic)
* [Server](https://github.com/kataras/iris/blob/v11.2.0/_examples/websocket/basic/server.go)
* [Go Client](https://github.com/kataras/iris/blob/v11.2.0/_examples/websocket/basic/go-client/client.go)
* [Browser Client](https://github.com/kataras/iris/blob/v11.2.0/_examples/websocket/basic/browser/index.html)
* [Browser NPM Client (browserify)](https://github.com/kataras/iris/blob/v11.2.0/_examples/websocket/basic/browserify/app.js)

Interesting? Continue reading by navigating to the **[learning neffos section](https://github.com/kataras/neffos#learning-neffos)**.

## The new FileServer

We will continue by looking at the new `FileServer` package-level function and the `Party.HandleDir` method.

Below is a list of the functions and methods we were using so far (as of v11.1.x):

1. `Party.StaticWeb(requestPath string, systemPath string) *Route` [*](https://github.com/kataras/iris/blob/6564922661686d43954b8bcf1d4f469a3d2c3fa3/core/router/api_builder.go#L806) (the most commonly used)
2. `func NewStaticHandlerBuilder(dir string) StaticHandlerBuilder` [*](https://github.com/kataras/iris/blob/6564922661686d43954b8bcf1d4f469a3d2c3fa3/core/router/fs.go#L194)
3. `func StaticHandler(systemPath string, showList bool, gzip bool) Handler` [*](https://github.com/kataras/iris/blob/6564922661686d43954b8bcf1d4f469a3d2c3fa3/core/router/fs.go#L133)
4. `Party.StaticHandler(systemPath string, showList bool, gzip bool) Handler` [*](https://github.com/kataras/iris/blob/6564922661686d43954b8bcf1d4f469a3d2c3fa3/core/router/api_builder.go#L640)
5. `Party.StaticServe(systemPath string, requestPath ...string) *Route` [*](https://github.com/kataras/iris/blob/6564922661686d43954b8bcf1d4f469a3d2c3fa3/core/router/api_builder.go#L648)
6. `func StaticEmbeddedHandler(vdir string, assetFn func(name string) ([]byte, error), namesFn func() []string, assetsGziped bool) Handler` [*](https://github.com/kataras/iris/blob/6564922661686d43954b8bcf1d4f469a3d2c3fa3/core/router/fs.go#L28)
7. `Party.StaticEmbeddedGzip(requestPath string, vdir string, gzipAssetFn func(name string) ([]byte, error), gzipNamesFn func() []string) *Route` [*](https://github.com/kataras/iris/blob/6564922661686d43954b8bcf1d4f469a3d2c3fa3/core/router/api_builder.go#L704)
8. `Party.StaticEmbedded(requestPath string, vdir string, assetFn func(name string) ([]byte, error), namesFn func() []string) *Route` [*](https://github.com/kataras/iris/blob/6564922661686d43954b8bcf1d4f469a3d2c3fa3/core/router/api_builder.go#L688)
9. `Application.SPA(assetHandler Handler) *router.SPABuilder` [*](https://github.com/kataras/iris/blob/6564922661686d43954b8bcf1d4f469a3d2c3fa3/iris.go#L481)

**That is a hell of a lot of functions doing slightly different things but all resulting in the same functionality that an Iris developer wants in the end**.
Also, the embedded file server was missing an important feature that a (physical) system's file server had: serving by content range. (To be fair to ourselves, we weren't alone; the rest of the third-party tools and frameworks don't even have, or think about, half the features that we provided to our users for embedded files, including this one.)

So, I was wondering, _in the spirit that we are free of the no-breaking-changes rule for this release on the websocket level_, whether to bring some breaking changes outside of the websocket module too, by not just replacing but also removing all existing static handler functions. However, I came to the decision that it's better to let them exist for one major version more ~and call the new methods under the hood but~ with a _deprecation_ warning that will be logged to the dev's terminal.

Suppose you had a `main.go` and on its line 18 `app.StaticWeb("/static", "./assets")` exists; the error will look like this:

![deprecation_output_example](https://user-images.githubusercontent.com/22900943/59934728-e4e6b780-9454-11e9-8494-af050fbd4cb6.png)

> Note the hover: most code editors will navigate you to the source of the problem. The deprecation log takes the parameter values of the deprecated method, in this case of `StaticWeb`, and suggests the new way.

All those functions can be replaced with a single package-level function and one `Party` method. The package-level function gives you a `Handler` to work with, and the `Party` method registers routes on a subdomain, subrouter etc. At the time of writing the issue I had already completed this feature locally; it was not yet pushed, but it would be soon.
It looks like this:

```go
FileServer(directory string, options ...DirOptions) Handler
```

```go
Party.HandleDir(requestPath string, directory string, options ...DirOptions) *Route
```

Where the `DirOptions` are:

```go
type DirOptions struct {
    // Defaults to "/index.html", if request path is ending with **/*/$IndexName
    // then it redirects to **/*(/) which another handler is handling it,
    // that another handler, called index handler, is auto-registered by the framework
    // if end developer wasn't managed to handle it manually/by hand.
    IndexName string
    // Should files served under gzip compression?
    Gzip bool

    // List the files inside the current requested directory if `IndexName` not found.
    ShowList bool
    // If `ShowList` is true then this function will be used instead
    // of the default one to show the list of files of a current requested directory(dir).
    DirList func(ctx Context, dirName string, dir http.File) error

    // When embedded.
    Asset      func(name string) ([]byte, error)
    AssetInfo  func(name string) (os.FileInfo, error)
    AssetNames func() []string

    // Optional validator that loops through each found requested resource.
    AssetValidator func(ctx Context, name string) bool
}
```

If you used one of the above methods, **refactoring** your project's static file serving code blocks **is highly recommended**. It's quite easy, in fact; here is how you can do it:

### Party.StaticWeb and Party.StaticServe

**v11.1.x**

```go
app.StaticWeb("/static", "./assets")
```

**v11.2.x**

```go
app.HandleDir("/static", "./assets")
```

> If you used `StaticWeb/StaticServe`, just make a replace-in-all-files to `HandleDir` operation in your code editor and you're done.
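The `AssetValidator` hook deserves a small illustration. With the standard library alone, the same per-request filtering idea might look like the sketch below; the `allowed` and `filteredFileServer` names are invented for this example and are not part of Iris.

```go
package main

import (
	"net/http"
	"path"
	"strings"
)

// allowed mimics the idea behind DirOptions.AssetValidator: a hook that
// decides, per requested resource name, whether it may be served.
// Here it hides dotfiles such as ".env" or anything under ".git".
func allowed(name string) bool {
	return !strings.HasPrefix(path.Base(name), ".")
}

// filteredFileServer wraps a stdlib file server with that validator,
// answering 403 Forbidden for rejected names.
func filteredFileServer(root http.Dir) http.Handler {
	fs := http.FileServer(root)
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if !allowed(r.URL.Path) {
			http.Error(w, "Forbidden", http.StatusForbidden)
			return
		}
		fs.ServeHTTP(w, r)
	})
}

func main() {
	// Roughly comparable to app.HandleDir("/static", "./assets",
	// iris.DirOptions{AssetValidator: ...}) in spirit:
	http.Handle("/static/", http.StripPrefix("/static", filteredFileServer(http.Dir("./assets"))))
	// http.ListenAndServe(":8080", nil) // left out of this sketch
}
```

The difference in Iris is that the validator also runs for embedded assets, which plain `http.FileServer` cannot serve at all.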
### StaticHandler

**v11.1.x**

```go
handler := iris.StaticHandler("./assets", true, true)
```

**v11.2.x**

```go
handler := iris.FileServer("./assets", iris.DirOptions{ShowList: true, Gzip: true})
```

### StaticEmbeddedHandler

**v11.1.x**

```go
handler := iris.StaticEmbeddedHandler("./assets", Asset, AssetNames, true)
```

**v11.2.x**

```go
handler := iris.FileServer("./assets", iris.DirOptions{
    Asset:      Asset,
    AssetInfo:  AssetInfo,
    AssetNames: AssetNames,
    Gzip:       true,
})
```

### Party.StaticEmbedded and Party.StaticEmbeddedGzip

**v11.1.x**

```go
app.StaticEmbedded("/static", "./assets", Asset, AssetNames)
```

**v11.2.x**

```go
app.HandleDir("/static", "./assets", iris.DirOptions{
    Asset:      Asset,
    AssetInfo:  AssetInfo,
    AssetNames: AssetNames,
    Gzip:       true, // or false
})
```

### Application.SPA

**v11.1.x**

```go
app.RegisterView(iris.HTML("./public", ".html"))

app.Get("/", func(ctx iris.Context) {
    ctx.ViewData("Page", page)
    ctx.View("index.html")
})

assetHandler := app.StaticHandler("./public", false, false)
app.SPA(assetHandler)
```

**v11.2.x**

```go
app.RegisterView(iris.HTML("./public", ".html"))

// Overrides the file server's index route.
// Order of this route registration does not matter.
app.Get("/", func(ctx iris.Context) {
    ctx.ViewData("Page", page)
    ctx.View("index.html")
})

app.HandleDir("/", "./public")
```

The above changes are not only syntactical. Unlike the standard net/http design, we give the end-developer the chance and the features to use different handlers for index files, to customize the middlewares and any other options and code required when designing Single Page Applications. Previously, something like `/static/index.html` -> `/static` had to be manually handled by the developer through `app.Get` to serve a directory's `index.html` file. Now, if a handler like this is missing, the framework registers it automatically; the order of route registration does not even matter, Iris handles them at build state.
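The index/SPA behavior described above boils down to a tiny fallback rule: serve the file if it exists, otherwise hand control to the SPA's index so client-side routing takes over. A stdlib-flavored sketch of just that rule (the `spaTarget` helper is illustrative, not Iris code; the `exists` probe is injected so the rule stays testable without touching disk):

```go
package main

import (
	"fmt"
)

// spaTarget decides which file to serve for a request path: the file
// itself when it exists, otherwise the SPA's index.html fallback.
func spaTarget(requestPath string, exists func(string) bool) string {
	if requestPath != "/" && exists(requestPath) {
		return requestPath
	}
	return "/index.html"
}

func main() {
	onDisk := map[string]bool{"/app.js": true, "/style.css": true}
	exists := func(p string) bool { return onDisk[p] }

	fmt.Println(spaTarget("/app.js", exists))   // /app.js
	fmt.Println(spaTarget("/users/42", exists)) // /index.html (client-side route)
	fmt.Println(spaTarget("/", exists))         // /index.html
}
```

Iris now wires this fallback (and the `/static/index.html` -> `/static` redirect) for you at build state, which is why the explicit `app.SPA(...)` builder could be retired.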
Another new feature is that the file server can now handle content-range for embedded files and also show a list of the files of an embedded directory via `DirOptions.ShowList`, exactly like for system directories.

The `FileServer` function and the `HandleDir` method above handle every case in a single spot; all previous and new features live inside those two. As a result, from the 9 (**nine**) functions and methods we had, we **end up with** just 2 (**two**), with less code, more improvements and new features. That fact gives any user, experienced or newcomer, an ideal place to start working without searching and reading more than they need to.

## New Jet View Engine

This version contains a new `View Engine` for the `jet` template parser, as requested at: https://github.com/kataras/iris/issues/1281

![](https://thepracticaldev.s3.amazonaws.com/i/c8h82fi59rc0ndxqaacm.png)

```go
tmpl := iris.Jet("./views", ".jet")
app.RegisterView(tmpl)
```

## Bugfixes and minor improvements

Let's continue by listing the minor bugfixes, improvements and new functions. For more information check the links after each function or method declaration.

**1**. `Context.FullRequestURI()` - as requested at: https://github.com/kataras/iris/issues/1167.

**2**. `NewConditionalHandler(filter func(ctx Context) bool, handlers ...Handler) Handler` as requested at: https://github.com/kataras/iris/issues/1170.

**3**. `Context.ResetRequest(newReq *http.Request)` as requested at: https://github.com/kataras/iris/issues/1180.

![](https://thepracticaldev.s3.amazonaws.com/i/io3q7d7hlxasq5jn82o4.png)

**4**. Fix: `Context.StopExecution()` wasn't respected by MVC controllers' methods, as reported at: https://github.com/kataras/iris/issues/1187.

**5**. Give the ability to modify the whole session's cookie on `Start` and `Update/ShiftExpiration` methods and add a `StartWithPath` helper, as requested at: https://github.com/kataras/iris/issues/1186.
![](https://thepracticaldev.s3.amazonaws.com/i/c0vx7r2034yl7jwtrpi9.png)

**6**. Add `Context.ResponseWriter().IsHijacked() bool` to report whether the underlying connection is hijacked or not.

**7**. Add the ability to intercept the default error handler by setting a custom `ErrorHandler` at the MVC application level or per controller, as requested at: https://github.com/kataras/iris/issues/1244:

![](https://thepracticaldev.s3.amazonaws.com/i/y9gokutivkj8qdbecctb.png)

```go
mvcApp := mvc.New(app)
mvcApp.HandleError(func(ctx iris.Context, err error) {
    ctx.HTML(fmt.Sprintf("<b>%s</b>", err.Error()))
})

// OR

type myController struct { /* [...] */ }

// Overrides the mvcApp.HandleError function.
func (c *myController) HandleError(ctx iris.Context, err error) {
    ctx.HTML(fmt.Sprintf("<i>%s</i>", err.Error()))
}
```

**8**. Extract the `Delim` configuration field for the redis sessiondb, as requested at: https://github.com/kataras/iris/issues/1256. And replace the underlying redis client library with the [radix](https://github.com/mediocregopher/radix) one.

![](https://thepracticaldev.s3.amazonaws.com/i/xzgthdshrhgdr5n71s7t.png)

**9**. Fix: hero/mvc rendered nil maps, structs and slices as null in JSON responses, as reported at: https://github.com/kataras/iris/issues/1273.

![](https://thepracticaldev.s3.amazonaws.com/i/gwynvz0shtkyljrsqzg9.png)

**10.** Enable `view.Django` pongo2 `addons`, as requested at: https://github.com/kataras/iris/issues/1284.

**11.** Add `mvc#Before/AfterActivation.HandleMany` and `GetRoutes` methods, as requested at: https://github.com/kataras/iris/issues/1292.

**12.** Fix: the `WithoutBodyConsumptionOnUnmarshal` option was not respected on `Context.ReadForm` and `Context.FormValues`, as reported at: https://github.com/kataras/iris/issues/1297.
**13.** Fix the [jwt](https://github.com/iris-contrib/middleware/tree/master/jwt), [casbin](https://github.com/iris-contrib/middleware/tree/master/casbin) and [go-i81n](https://github.com/iris-contrib/middleware/tree/master/go-i81n) middlewares.

**14.** Fix https://github.com/kataras/iris/issues/1298.

**15.** Add `ReadQuery`, as requested at: https://github.com/kataras/iris/issues/1207.

**16.** Easy way to register a session as a middleware: add `sessions/Sessions#Handler` and the package-level `sessions.Get` function (examples below).

### Debugging

1. Warning messages for invalid registration of MVC dependencies and controllers' fields or method input arguments.
2. Print information when an MVC Controller's method maps to a websocket event.
3. `Context.RouteName()`, which returns the current route's name.
4. `Context.HandlerFileName()`, which returns the exact program source code position of the current handler function being executed (file:line).

## Examples

Iris offers more than 110 examples for both experienced and new gophers.
## New Examples

* [Serve using HTTP/3 Quic](https://github.com/kataras/iris/blob/v11.2.0/_examples/http-listening/http3-quic) as requested at: https://github.com/kataras/iris/issues/1295

![](https://thepracticaldev.s3.amazonaws.com/i/wa944fg9cka7b3zwu8t8.png)

* [Public domain address](https://github.com/kataras/iris/blob/v11.2.0/_examples/http-listening/listen-addr-public/main.go)
* [Build RESTful API with the official MongoDB Go Driver and Iris](https://github.com/kataras/iris/tree/4cfdc6418532f49ff0159045a06080db0457eb55/_examples/tutorial/mongodb)
* [Yet another dependency injection example and good practices in general](https://github.com/kataras/iris/blob/v11.2.0/_examples/hero/smart-contract/main.go)
* [MVC Regexp](https://github.com/kataras/iris/blob/v11.2.0/_examples/mvc/regexp/main.go)
* [Jet View Engine](https://github.com/kataras/iris/blob/v11.2.0/_examples/view/template_jet_0) and [Embedded Jet templates](https://github.com/kataras/iris/blob/v11.2.0/_examples/view/template_jet_1_embedded)
* [Websocket](https://github.com/kataras/iris/blob/v11.2.0/_examples/websocket/basic)
* [Server](https://github.com/kataras/iris/blob/v11.2.0/_examples/websocket/basic/server.go)
* [Go Client](https://github.com/kataras/iris/blob/v11.2.0/_examples/websocket/basic/go-client/client.go)
* [Browser Client](https://github.com/kataras/iris/blob/v11.2.0/_examples/websocket/basic/browser/index.html)
* [Browser NPM Client (browserify)](https://github.com/kataras/iris/blob/v11.2.0/_examples/websocket/basic/browserify/app.js)
* [GORM](https://github.com/kataras/iris/pull/1275)
* [ReadQuery](https://github.com/kataras/iris/tree/master/_examples/http_request/read-query)
* [Sessions Middleware](https://github.com/kataras/iris/blob/master/_examples/sessions/middleware/main.go)
* [Content Negotiation](https://github.com/kataras/iris/blob/master/_examples/http_responsewriter/content-negotiation)
* [Read YAML](https://github.com/kataras/iris/blob/master/_examples/http_request/read-yaml)
## Updated Examples

* [Custom Router Wrapper](https://github.com/kataras/iris/blob/v11.2.0/_examples/routing/custom-wrapper/main.go)
* [FileServer Basics](https://github.com/kataras/iris/blob/v11.2.0/_examples/file-server/basic/main.go)
* [Embedding Files Into App Executable File](https://github.com/kataras/iris/blob/v11.2.0/_examples/file-server/embedding-files-into-app/main.go)
* [Embedding Gziped Files Into App Executable File](https://github.com/kataras/iris/blob/v11.2.0/_examples/file-server/embedding-gziped-files-into-app/main.go)
* [Single Page Application](https://github.com/kataras/iris/blob/v11.2.0/_examples/file-server/single-page-application/basic/main.go)
* [Embedded Single Page Application](https://github.com/kataras/iris/blob/v11.2.0/_examples/file-server/single-page-application/embedded-single-page-application/main.go)
* [Embedded Single Page Application with other routes](https://github.com/kataras/iris/blob/v11.2.0/_examples/file-server/single-page-application/embedded-single-page-application-with-other-routes/main.go)
* [Websocket Native Messages](https://github.com/kataras/iris/blob/v11.2.0/_examples/websocket/native-messages/main.go)
* [Websocket Controller](https://github.com/kataras/iris/tree/v11.2.0/_examples/mvc/websocket)
* [Using the Redis Session Database](https://github.com/kataras/iris/blob/v11.2.0/_examples/sessions/database/redis/main.go)
---
title: Weekly Coding Challenge - Week #18 - Social Media Buttons
published: true
description: Week #18 of the Weekly Coding Challenge program - Social Media Buttons
tags: challenge, html, css
canonical_url: https://www.florin-pop.com/blog/2019/07/social-media-buttons/
cover_image: https://www.florin-pop.com/static/1e69ded3b40ce1e1ac1f06accd030bb1/ff993/social-media-buttons.png
series: Weekly Coding Challenge
---

Theme of the week: **Social Media Buttons**

### Description

We all know how important Social Media is nowadays for a business (be it a blog or a product/service-selling business) because of the number of people using it. You can have the best product in the world, but if no one has heard about it, you are pretty much doomed, as you won't make any sales. On the other hand, you can have a "decent" product and, by marketing it well (on Social Media), end up with a lot of sales, which means a lot of profit for your business.

## Useful Resources

Check out my submission for this challenge: [Social Media Buttons](https://www.florin-pop.com/blog/2019/07/social-media-buttons/).

See all the submissions in this [Codepen Collection](https://codepen.io/collection/DZJJoE/).

Are you interested in joining the challenge? :smiley: Read [The Complete Guide](https://www.florin-pop.com/blog/2019/03/weekly-coding-challenge/) to find out how.

Don't forget to share your creation! :wink:

Happy Coding! :innocent:
--- title: Creating a Domain Model rapidly with Java and Spring Boot published: true description: In this short article, we are creating a domain model for a simple application. All the examples are in Java language. We use Spring Boot to accelerate the application startup. tags: Java, Spring cover_image: https://thepracticaldev.s3.amazonaws.com/i/nqhni36nv8u1na2azhv1.png --- **Overview** In this short article, we are creating a domain model for a simple application. We illustrate how we create the classes/objects of the model based on the application's business description, and we focus only on the entities that are part of the model. Here we talk about domain modeling in the case of a Web application with a client-server architecture where we have Java on the server-side (backend). All the examples are in the Java language. We use Spring Boot to accelerate the application startup. **Start the implementation** Tools we are using: - [Java 8 JDK from AdoptOpenJDK](https://adoptopenjdk.net/) installed with [SdkMan](https://sdkman.io/) - [SpringBoot](https://spring.io/projects/spring-boot) - just for rapidly starting an app with an integrated build using [Spring Starter](https://start.spring.io/) - [Maven](https://maven.apache.org/) - used internally for the build - [JUnit](https://mvnrepository.com/artifact/junit/junit) - used for playing with the domain - [Intellij Idea](https://www.jetbrains.com/idea/) - for code editing (you can use [VS Code](https://code.visualstudio.com/) or [Sublime](https://www.sublimetext.com/)) If you are on Windows, the Java JDK 8 and Maven can be installed manually, and Spring Initializr - [https://start.spring.io](https://start.spring.io) - can be used to download the Spring Boot starter app. But on macOS and Linux, the fastest way is to use the terminal commands shown below.
Starting our app (we name it **cindykat** - a name very close to **syndicate**) and running it is as simple as this:

```shell
# install SDKMAN
curl -s "https://get.sdkman.io" | bash

# install JDK
sdk install 8.0.212.j9-adpt

# create app
curl https://start.spring.io/starter.zip -d name=CindyKat -d groupId=com.colaru -d artifactId=cindykat -d packageName=com.colaru.cindykat -d dependencies=web -d javaVersion=8 -o cindykat-springboot.zip

# unzip, then run it
unzip cindykat-springboot.zip -d cindykat-springboot
cd cindykat-springboot && ./mvnw spring-boot:run
```

Now our app is up and running on port 8080 - it is a web application! We don't actually need the web part yet, but it's good to have something up and running.

![](https://thepracticaldev.s3.amazonaws.com/i/t02cv9gfijdson5jyya0.png)

We have this project structure:

![](https://thepracticaldev.s3.amazonaws.com/i/we6h0n6qvufymv0avawx.png)

**What is the application domain?**

The domain model is the place in our code where we model the business of our app. Usually, an application model has to be a direct reflection of the business it implements. Our code has to tell the story of the reality we are modelling, and we have to use a ubiquitous language shared by all people working on the project. There's no need to introduce abstractions different from what we have in the reality we are modelling. The nouns and verbs from the business have to appear in the names of our classes, fields, and methods. When we introduce other things, we can be suspected of over-engineering.

![Domain model](https://thepracticaldev.s3.amazonaws.com/i/s5xur8r8awydi8dttbs3.png)

Image source: [https://www.slideshare.net/Dennis_Traub/dotnetcologne2013-ddd](https://www.slideshare.net/Dennis_Traub/dotnetcologne2013-ddd)

Why is the model so important? Because all the rest of the application will be in contact with the model. It is hard to maintain separate packages of DTOs, WS REST resources, or persistence entities because of the marshalling/un-marshalling complications.
We will be forced to do this if we don't want to expose the internal model outside the system, so we will use the model in all the layers of the application. You can be sure that the UI, persistence, reporting, WS, and messaging integrations all use the domain model classes. The model can become more and more complicated over time (hundreds of classes), and what is in the model will evolve and will influence the entire application. This is the hardest part of any app for a new team member to understand. It is not something like a library or framework reused across applications; it is something specific to a business. And businesses can be very, very complicated.

**The DDD book**

There are a few ways of starting a Java Web application:

- Backend first - play with the Java domain using tests to create a domain model that is the core of the server-side
- Frontend first - start with UX/UI with HTML/CSS/JavaScript and create some mockups that will illustrate the user interaction - you can play with a domain model in JSON or TypeScript here as well

It is common to have one team working on the frontend and a different team working on the backend, with the contract between them being a Swagger/OpenAPI spec. The first approach, modelling the domain, is the subject of this article. It is also the subject of a very well-known book: [Domain-Driven Design](https://dddcommunity.org/book/evans_2003/) by Eric Evans. That book introduces more topics beyond the graph of entities described in this article.
![](https://thepracticaldev.s3.amazonaws.com/i/yo6e7bamp6rdc61bvomk.png)

Image from the [Domain Driven Design Quickly](https://www.infoq.com/minibooks/domain-driven-design-quickly/#minibookDownload,%20/minibooks/domain-driven-design-quickly/#minibookDownload/) book

**Our application business explained**

The application we want to implement is an analytics system for the data provided by [Google Trending Searches](https://trends.google.com/trends/trendingsearches/daily?geo=US) - the history of searches from Google Search, reported per day and country by Google. We want to import some data from Google Trends, then store it in our system and make it possible to analyse it, show it in different forms, etc.

![](https://thepracticaldev.s3.amazonaws.com/i/rmivpmsneidd0zubyxjw.png)

In this case, it will be simple to model the domain used in our analytics system, because we distinguish our entities from the format used to deliver the searches data. See this Google Trends **Atom Syndication Format** XML snippet:

```xml
<item>
  <title>Women's World Cup 2019</title>
  <ht:approx_traffic>1,000,000+</ht:approx_traffic>
  <description>2019 Women, Women World Cup, WWC</description>
  <link>https://trends.google.com/trends/trendingsearches/daily?geo=US#Women's%20World%20Cup%202019</link>
  <pubDate>Mon, 10 Jun 2019 22:00:00 -0700</pubDate>
  <ht:picture>https://t2.gstatic.com/images?q=tbn:ANd9GcTW4UzPHNC9qjHRxBr6kCUEns71l8XK6HYcmLpJbhlfZWUbeBQPiia1GDzN3Ehl7nfD-HPbgnG_</ht:picture>
  <ht:picture_source>CBSSports.com</ht:picture_source>
  <ht:news_item>
    <ht:news_item_title>&lt;b&gt;2019 Women&amp;#39;s World Cup&lt;/b&gt; scores, highlights: Canada squeaks by, Japan underwhelms, Argentina gets historic point</ht:news_item_title>
    <ht:news_item_snippet>Day 4 of the &lt;b&gt;2019&lt;/b&gt; FIFA &lt;b&gt;Women&amp;#39;s World Cup&lt;/b&gt; in France featured a two-game slate with two potential contenders opening their campaigns against slightly inferior opponents. When the dust settled, neither team looked particularly sharp as only one goal was&amp;nbsp;...</ht:news_item_snippet>
    <ht:news_item_url>https://www.cbssports.com/soccer/world-cup/news/2019-womens-world-cup-scores-highlights-canada-squeaks-by-japan-underwhelms-argentina-gets-historic-point/</ht:news_item_url>
    <ht:news_item_source>CBSSports.com</ht:news_item_source>
  </ht:news_item>
  <ht:news_item>
    <ht:news_item_title>&lt;b&gt;2019 Women&amp;#39;s World Cup&lt;/b&gt; scores, highlights, recap: Japan underwhelms in opener as Argentina gets historic point</ht:news_item_title>
    <ht:news_item_snippet>Day 4 of the &lt;b&gt;2019 World Cup&lt;/b&gt; has a small slate of action with just two games, but two contenders to win the tournament were scheduled to play their opener. After the first match, we may just be talking about one contender. With talented Canada set to play&amp;nbsp;...</ht:news_item_snippet>
    <ht:news_item_url>https://www.cbssports.com/soccer/world-cup/news/2019-womens-world-cup-scores-highlights-recap-japan-underwhelms-in-opener-as-argentina-gets-historic-point/</ht:news_item_url>
    <ht:news_item_source>CBSSports.com</ht:news_item_source>
  </ht:news_item>
</item>
```

So it is simple! We have an **Item** which has a list of **NewsItem**. We separately have a **Source** entity that will be reused between **NewsItem** instances. Also, a **Country** is needed for the **Item** to specify the country/language. That's all.

![](https://thepracticaldev.s3.amazonaws.com/i/zymr3toj8b4ktrxg4y4r.png)

**Creating the graph of entities**

The domain model will be a separate package in our application. We choose to create a sub-domain named **newsfeed** (complete name **com.colaru.cindykat.domain.newsfeed**) for the package name because it is possible to have other aggregates (groups of entities) in the future.
It is a good idea to create an entire module for the domain (in a multi-module Maven/Gradle project) because in this way we can depend on it from any other module of the application - all the other modules will use the domain model. First, using the IDE, we will create an **Item** class, which is the primary entity (aggregate root in DDD terminology), and then the rest of the classes:

```java
package com.colaru.cindykat.domain.newsfeed;

import java.util.Date;
import java.util.List;

public class Item {
    private String title;
    private List<Tag> description;
    private String link;
    private String picture;
    private Date pubDate;
    private String pubDateAsString;
    private String approxTraffic;
    private Long approxTrafficAsNumber;
    private List<NewsItem> items;
    private Country country;
    // generate getters/setters using the IDE
}

public class NewsItem {
    private String title;
    private String snippet;
    private String url;
    private Source source;
}

public class Country {
    private String name;
    private String countryCode;
    private String languageCode;
    private String flag;
}

public class Source {
    private String name;
    private String url;
}

public class Tag {
    private String name;
}
```

In the end, we will have this graph of entities:

![Entities graph](https://thepracticaldev.s3.amazonaws.com/i/nqhni36nv8u1na2azhv1.png)

Now the project filesystem looks like this:

![Project structure](https://thepracticaldev.s3.amazonaws.com/i/p40ikwb4k3vn1501m6da.png)

**Anemic Domain**

A common problem for a domain is an anti-pattern named [Anemic Domain by Martin Fowler](https://martinfowler.com/bliki/AnemicDomainModel.html) - the classes have just state and no behaviour. We take from the real world only the **nouns** and not the **verbs**.
In this case, the entities are **data structures** as in functional programming languages and not real **Java objects** as [Uncle Bob describes in this article](https://blog.cleancoder.com/uncle-bob/2019/06/16/ObjectsAndDataStructures.html). And this is quite normal, because in general the entities are our mappers to database tables used by an ORM persistence framework. We expect that the **verbs** will live in the services layer of our application, outside the domain. But in that case another discussion starts: are these services part of the domain? There is also no problem with introducing some logic directly in the entities - we just have to be aware that business logic will then live in two places: the services and the domain model. We will introduce some business methods in the **Item** entity (converters from String to Date and from String to Long):

```java
public class Item {
    private Date pubDate;
    private String pubDateAsString;
    private String approxTraffic;
    private Long approxTrafficAsNumber;
    // other private fields

    public Date convertStringToDate(String pubDateAsString) throws ParseException {
        // e.g. Wed, 21 Dec 2016 13:00:00 +0200
        SimpleDateFormat parser = new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss Z");
        return parser.parse(pubDateAsString);
    }

    public Long convertStringToLong(String approxTraffic) {
        return Long.valueOf(approxTraffic.replaceAll(",", "").replace("+", ""));
    }
}
```

**Testing the domain**

Even if the domain is simple, it is a good idea to start playing with it by creating some tests with JUnit.
First, we have to include the JUnit library as part of our Maven dependencies in pom.xml:

```xml
<!-- https://mvnrepository.com/artifact/org.junit.jupiter/junit-jupiter-api -->
<dependency>
    <groupId>org.junit.jupiter</groupId>
    <artifactId>junit-jupiter-api</artifactId>
    <version>5.3.2</version>
    <scope>test</scope>
</dependency>
```

Now we can create a first test to play with our simple graph of objects and to verify some small functionalities. This is a first working skeleton, good enough to start TDD development guided by tests for the next features we will add to this app:

```java
class DomainModelTests {
    private Item item;

    @BeforeEach
    void setUp() throws ParseException {
        item = new Item();
        item.setTitle("Women's World Cup 2019");
        item.setLink("https://trends.google.com/trends/trendingsearches/daily?geo=US#Women's%20World%20Cup%202019");
        item.setPicture("https://t2.gstatic.com/images?q=tbn:ANd9GcTW4UzPHNC9qjHRxBr6kCUEns71l8XK6HYcmLpJbhlfZWUbeBQPiia1GDzN3Ehl7nfD-HPbgnG_");

        // tags
        Tag tag = new Tag();
        tag.setName("Word cup");
        List<Tag> tags = new ArrayList<>();
        tags.add(tag);
        item.setDescription(tags);

        NewsItem newsItem = new NewsItem();
        // source
        Source source = new Source();
        source.setName("USA TODAY");
        newsItem.setSource(source);
        newsItem.setTitle("&lt;b&gt;2019 Women&amp;#39;s World Cup&lt;/b&gt; scores, highlights: Canada squeaks by, Japan underwhelms, Argentina gets historic point");
        List<NewsItem> items = new ArrayList<>();
        items.add(newsItem);
        item.setItems(items);
    }

    @Test
    void buildNewItemTest() {
        Assert.assertEquals(1, item.getItems().size());
        Assert.assertEquals(1, item.getDescription().size());
    }

    @Test
    void convertStringToLongTest() {
        String approxTraffic = "900,000+";
        item.setApproxTraffic(approxTraffic);
        item.setApproxTrafficAsNumber(item.convertStringToLong(approxTraffic));
        Assert.assertEquals(900000, item.getApproxTrafficAsNumber().longValue());
    }

    @Test
    void convertStringToDateTest() {
        String pubDateAsString = "Mon, 1 Jun 2020 09:00:00 -0700";
        item.setPubDateAsString(pubDateAsString);
        try {
            item.setPubDate(item.convertStringToDate(pubDateAsString));
        } catch (ParseException e) {
            e.printStackTrace();
        }
        Calendar cal = new Calendar.Builder().setCalendarType("iso8601")
                .setFields(YEAR, 2020, DAY_OF_MONTH, 1, MONTH, 5, HOUR, 18, MINUTE, 0, SECOND, 0)
                .build();
        Assert.assertEquals(cal.getTime(), item.getPubDate());
    }
}
```

When we run all the tests, the bar is green (I write the test first, then the tested method; the test run fails, I add the implementation, I run the test again, and now it passes):

![](https://thepracticaldev.s3.amazonaws.com/i/rwxmsnd8qsw3b42m9t75.png)

**Conclusion**

In this article, we've shown how we created a simple model for a small application. As we can see, the domain model is the core part of the application. The classes composing the domain model are technology agnostic (concurrency and persistence are not our concern now) - they describe just the reality of the business, nothing regarding the technologies used in the project.
**Git Repository**

We published the sources on GitHub:

```shell
git clone https://github.com/colaru/cindykat-springboot.git
cd cindykat-springboot
mvn clean install # the tests will be executed successfully
```

---

**You can [follow me on Twitter](https://twitter.com/colaru) where I continue to document my journey.**

---

**Inspiration links for this article**

- Value objects vs entities: [https://enterprisecraftsmanship.com/2016/01/11/entity-vs-value-object-the-ultimate-list-of-differences/](https://enterprisecraftsmanship.com/2016/01/11/entity-vs-value-object-the-ultimate-list-of-differences/)
- Anemic domain: [https://martinfowler.com/bliki/AnemicDomainModel.html](https://martinfowler.com/bliki/AnemicDomainModel.html)
- Domain services: [https://stackoverflow.com/questions/2268699/domain-driven-design-domain-service-application-service](https://stackoverflow.com/questions/2268699/domain-driven-design-domain-service-application-service)
- Domain model: [https://martinfowler.com/eaaCatalog/domainModel.html](https://martinfowler.com/eaaCatalog/domainModel.html)
- Service layer: [https://martinfowler.com/eaaCatalog/serviceLayer.html](https://martinfowler.com/eaaCatalog/serviceLayer.html)
- Objects and data structures by Uncle Bob: [https://blog.cleancoder.com/uncle-bob/2019/06/16/ObjectsAndDataStructures.html](https://blog.cleancoder.com/uncle-bob/2019/06/16/ObjectsAndDataStructures.html)
- Domain Driven Design book by Eric Evans: [https://dddcommunity.org/book/evans_2003/](https://dddcommunity.org/book/evans_2003/)
- Domain Driven Design Quickly book: [https://www.infoq.com/minibooks/domain-driven-design-quickly/#minibookDownload,%20/minibooks/domain-driven-design-quickly/#minibookDownload/](https://www.infoq.com/minibooks/domain-driven-design-quickly/#minibookDownload,%20/minibooks/domain-driven-design-quickly/#minibookDownload/)
- 10 Myths About Java in 2019: [https://developer.okta.com/blog/2019/07/15/java-myths-2019](https://developer.okta.com/blog/2019/07/15/java-myths-2019)
- Which Java SDK Should You Use: [https://developer.okta.com/blog/2019/01/16/which-java-sdk](https://developer.okta.com/blog/2019/01/16/which-java-sdk)
colaru
142,091
One interface to rule them all: the Adapter pattern
This creature has been implementing the pattern for millions of years. Continuing with the series...
0
2019-07-18T10:29:43
https://medium.com/all-you-need-is-clean-code/una-interfaz-para-controlarlos-a-todos-patr%C3%B3n-adapter-a9073f3460b
designpatterns, ddd, diseñodepatrones, portsandadapters
--- title: One interface to rule them all: the Adapter pattern published: true tags: design-patterns,ddd,diseño-de-patrones,ports-and-adapters canonical_url: https://medium.com/all-you-need-is-clean-code/una-interfaz-para-controlarlos-a-todos-patr%C3%B3n-adapter-a9073f3460b --- ![](https://cdn-images-1.medium.com/max/1024/1*Ecgrp-oZm4n_jhfnGt-WYA.jpeg)<figcaption>This creature has been implementing the pattern for millions of years</figcaption> Continuing with the [design patterns](https://dev.to/mangelsnc/patrones-de-diseno-de-software-1lk5-temp-slug-6227082) series, today we are going to look at a structural pattern: the **Adapter pattern.** ### Introduction As befits a structural pattern, the purpose of the Adapter pattern is related to the way objects interact with each other. To explain what it does, it's best to think of objects from our everyday life, for example a plug adapter. We all know that the pins of a European plug and an American plug are different, and not only that: there are many more plug types: ![](https://cdn-images-1.medium.com/max/520/1*nn1e-1dEltXiSh_g446MEw.png)<figcaption>Different types of plug pins</figcaption> When we travel to the United States, our phone battery starts running low, and we decide to charge it, we can be in for a surprise if we weren't cautious enough to bring a European-to-American adapter: ![](https://cdn-images-1.medium.com/max/450/1*xy5PfFVZf4XcPxzDu6jfiA.jpeg)<figcaption>European/American adapter</figcaption> Thanks to this little gadget we can charge our phone without any problem. This small device is the perfect picture of what the pattern does: 1. The flat pins of the adapter represent the known interface. 2. The black part represents the adapter class that makes the mismatched element (the charger's plug) behave as expected.
All of this is possible, of course, because both plugs (the American and the European one) serve the same or a similar function: conducting electricity. ### Implementation example Implementing this pattern is quite simple; you just have to follow a few easy steps: 1. **Identify the actors:** determine who the client is and which parts need to be "adapted". 2. **Design a common interface,** one the client knows and that all future adaptees can implement. 3. **Create an adapter that implements that interface.** This is done for each of the parts to adapt. Each of these adapters will contain an instance of the adapted class and will "force" its use so that it conforms to the new interface. 4. **Decouple the client from the concrete class.** Instead of receiving a concrete class, from now on it will receive any object that implements the new interface. Next we'll look at an example of how this pattern would be implemented in a more or less realistic case. Suppose that in our project we constantly need to log everything we do, and a single type of logger is not enough: we have several, specifically three: - **FileLogger:** writes the log to local files. The method used to record a log entry is addLine, which receives a string as a parameter. - **DatabaseLogger:** writes the log as records in a database table. The method used to record a log entry is insertRecord, which receives a string as a parameter. - **NetworkLogger:** sends the logs to a third-party service that then shows us reports with very nice panels and charts. The method used to record a log entry is sendLog, which receives an object of type LogLine as a parameter.
On the other side, we have a service that uses one of these loggers: {% gist https://gist.github.com/mangelsnc/8dd0c99e82b2b7877489c0a8110d00d6 %} As you can see, although these 3 services perform practically the same task, they have 3 very different APIs. If we wanted to replace the logger our service uses with another one, we would not only need to inject the right service, we would also need to rewrite how it is used, since the way of calling the logger would change. This produces a strong coupling between the service and the logger... How can we avoid it? By using the Adapter pattern, obviously ;) {% gist https://gist.github.com/mangelsnc/0fa37ee8f2e7ed0e32267baf118264e9 %} As you can see, all the adapters now follow the agreed interface (LoggerInterface), and the service can trust that any logger it receives implements, by contract, the log method. That is enough for our service, since the implementation details of each concrete logger are irrelevant to it. ### Ports & Adapters You may have heard the term ports & adapters in reference to hexagonal architecture, and indeed this is largely what hexagonal architecture is based on. Remember that in hexagonal architecture we have 3 main layers: 1. Application 2. Domain 3. Infrastructure Usually the Domain layer contains the interfaces that bridge our logic (Application) and third-party services (Infrastructure). From this we deduce that the "ports" are the interfaces defined in the domain, and the "adapters" are the implementations of those interfaces that we write in the infrastructure layer to accommodate third-party services.
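The actual implementations live in the linked gists and are not reproduced here. As an illustration only, here is a minimal sketch of the same idea in TypeScript; the names (`LoggerInterface`, `FileLogger`, `addLine`, `insertRecord`) follow the article's description, while the method bodies are hypothetical stand-ins:

```typescript
// Third-party loggers with incompatible APIs (the "adaptees").
class FileLogger {
  lines: string[] = [];
  addLine(line: string): void {
    this.lines.push(line); // stand-in for appending to a local file
  }
}

class DatabaseLogger {
  records: string[] = [];
  insertRecord(record: string): void {
    this.records.push(record); // stand-in for inserting a DB row
  }
}

// The common interface the client knows (the "port" in hexagonal terms).
interface LoggerInterface {
  log(message: string): void;
}

// One adapter per adaptee: each wraps an instance and translates the call.
class FileLoggerAdapter implements LoggerInterface {
  constructor(private logger: FileLogger) {}
  log(message: string): void {
    this.logger.addLine(message);
  }
}

class DatabaseLoggerAdapter implements LoggerInterface {
  constructor(private logger: DatabaseLogger) {}
  log(message: string): void {
    this.logger.insertRecord(message);
  }
}

// The client depends only on LoggerInterface, so loggers become swappable
// by changing nothing but the injection.
class Service {
  constructor(private logger: LoggerInterface) {}
  doSomething(): void {
    this.logger.log('something happened');
  }
}
```

Swapping `FileLoggerAdapter` for `DatabaseLoggerAdapter` requires no change in `Service`, which is exactly the decoupling described above.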
### Conclusion As you can see, this pattern is very present in our daily lives, and it is also a very powerful tool that allows us to: - Decouple our code from the implementations of third-party services, forcing their adaptation to our domain. - Improve changeability (we can replace the service by changing only the injection). - Simplify our tests (we only need to mock the interface, with no need to know the implementation details of third-party code). See you in the next post! **References:** - [Head First Design Patterns](http://amzn.to/2jPwUag) - [Refactoring to patterns](http://amzn.to/2iGokav) * * *
mangelsnc
142,349
Interesting facts in JavaScript
There is a great deal of fun to be had when working in JavaScript. Even for engineers...
0
2019-07-18T19:52:28
https://dev.to/shafikshaon/interesting-facts-in-javascript-22nk
javascript
There is a great deal of fun to be had when working in JavaScript. Even for engineers who work with it daily, some parts of the language remain unexplored. I'm going to highlight a few things you may not know about JavaScript.

# NaN is a number

`NaN` (Not a Number) is a number. Also, `NaN` is not equal to itself; in fact, `NaN` is not equal to anything. The only way to check whether a value is `NaN` is `isNaN()`.

```javascript
> typeof(NaN)
"number"
> NaN === NaN
false
```

# null is an object

`null` is an object. Sounds odd, right? But it's a fact.

```javascript
> typeof(null)
"object"
```

However, `null` has no value, so `null` is not an instance of `Object`.

```javascript
> null instanceof Object
false
```

# undefined can be defined

`undefined` is not a reserved keyword in JavaScript. You can assign a value to it without getting a syntax error; the assignment simply has no effect, and `undefined` stays undefined.

```javascript
> var some_var;
undefined
> some_var == undefined
true
> undefined = 'i am undefined'
```

# 0.1 + 0.2 is not equal to 0.3

In JavaScript, `0.1 + 0.2 == 0.3` returns false. The reason is how JavaScript stores floating-point numbers in binary.

```javascript
> 0.1 + 0.2
0.30000000000000004
> 0.1 + 0.2 == 0.3
false
```

# Math.max() smaller than Math.min()

The fact that `Math.max() > Math.min()` returns `false` sounds wrong, but it actually makes a lot of sense.

```javascript
> Math.max() > Math.min()
false
```

If no arguments are passed, `max()` and `min()` return the following values.

```javascript
> Math.max()
-Infinity
> Math.min()
Infinity
```

# 018 minus 045 equals -19

In JavaScript, the prefix `0` converts a number to octal. However, `8` is not used in octal, and any number containing an `8` will be silently converted to a regular decimal number.

```javascript
> 018 - 045
-19
```

Therefore, `018 - 045` is in fact equivalent to the decimal expression `18 - 37`, because `045` is octal but `018` is decimal.
# Functions can execute themselves

Just create a function and immediately call it, as we call other functions, with the `()` syntax.

```javascript
> (function() { console.log('I am self executing'); })();
I am self executing
```

# Parenthesis position matters

When the `{` is placed on the line after `return`, the return statement "does not see" that it has something to return, so it returns nothing: JavaScript automatically inserts a `;` after the `return`.

```javascript
> function foo() {
    return
    {
      foo: 'bar'
    }
  }
> foo();
undefined

> function foo() {
    return {
      foo: 'bar'
    }
  }
> foo();
{foo: "bar"}
```

# Missing parameter default value

In JavaScript, you can set a default value for a missing parameter in the following way.

```javascript
> function missingParamerCheck(name, age){
    var name = name || 'John Doe'
    var age = age
    console.log(name)
    console.log(age)
  }
> missingParamerCheck('', 23)
John Doe
23
> missingParamerCheck('Mike', 18)
Mike
18
```

# Doesn't have an integer data type

In JavaScript, there is no `int` (integer) data type. All numbers are of type `Number`; at the memory level, even integer values are stored as floats.

# sort() function automatic type conversion

The `sort()` function automatically converts values to strings, which is why something weird happens.

```javascript
> [1,5,20,10].sort()
(4) [1, 10, 20, 5]
```

It can be fixed by passing a compare function.

```javascript
> [1,5,20,10].sort(function(a, b){return a - b});
(4) [1, 5, 10, 20]
```

# Sum of Arrays and Objects

```javascript
> !+[]+!![]+!![]+!![]+!![]+!![]+!![]+!![]+!![]
9
> {} + []
0
> [] + {}
"[object Object]"
> [] + []
""
> {} + {}
"[object Object][object Object]"
> {} + [] == [] + {}
true
```

Hopefully you learned something new, or at least got a better understanding of these JavaScript pearls. What other unusual JavaScript quirks do you know? Share them in the comments. This post is also available [here](https://shafik.xyz/posts/interesting-facts-in-javascript/)
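An addendum to the "Missing parameter default value" fact above (not part of the original post): the `||` idiom replaces every falsy argument, including intentional ones like `0` or `''`, while ES2015 default parameters only kick in when the argument is `undefined`. A small sketch:

```javascript
// The || idiom: any falsy argument is replaced, even an explicit 0.
function withOr(count) {
  var c = count || 10;
  return c;
}

// ES2015 default parameter: applied only when the argument is undefined.
function withDefault(count = 10) {
  return count;
}

console.log(withOr(0));      // 10 - the explicit 0 is lost
console.log(withDefault(0)); // 0  - the explicit 0 is kept
console.log(withDefault());  // 10
```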
shafikshaon
142,612
Laravel Dockerization out of the box!
Docker is an open-source platform that makes it possible for software developers to Build, Ship, and...
0
2019-07-21T20:34:19
https://dev.to/nedsoft/laravel-dockerization-out-of-the-box-4aej
laravel, docker, devops
--- title: "Laravel Dockerization out of the box!" tags: ['laravel', 'docker', 'DevOps'] published: true cover_image: https://thepracticaldev.s3.amazonaws.com/i/er5dfy5gr5scx952x242.png --- [Docker](https://www.docker.com/what-docker) is an open-source platform that makes it possible for software developers to Build, Ship, and Run any App anywhere. In a high-level explanation, a `dockerized` App can be likened to a flower growing in a bucket, whereby the bucket contains all that the flower needs to survive regardless of the environment. It basically places (_containerizes_) the App in a container that houses all that the App needs to run. This article assumes that the reader already understands how to use Laravel; our business here is how to `dockerize` it.

> This article was originally published on [medium](https://medium.com/@Oriechinedu/laravel-dockerization-out-of-the-box-c2214f6a6af8)

### Requirements

To get a Laravel app running on Docker, the following must be put in place:

- Laravel installed on your local machine. See the official documentation for steps to install Laravel.
- Docker installed on your local machine. Check here to install the appropriate distribution for your operating system.
- Docker Compose installed on your local machine.
- Knowledge of the command line; if you're a Linux user, you should check out this Linux tutorial to understand basic Linux commands.

If you've met the requirements above, then you're ready and good to go. Let's dive into the main business.

> The sample project used in this article can be cloned [here](https://github.com/oriechinedu/laravel-dockerization)

## Getting started

Before we proceed, it's important we explain some basic concepts that we'll be using in this tutorial. Most prominent among them are images and containers.

>*Images*: The filesystem and metadata needed to run containers.
They can be thought of as an application packaging format that includes all of the dependencies to run the application, and default settings to execute that application. The metadata includes defaults for the command to run, environment variables, labels, and the healthcheck command.

>*Containers*: An instance of an isolated application. A container needs the image to define its initial state and uses the read-only filesystem from the image along with a container-specific read-write filesystem. A running container is a wrapper around a running process, giving that process namespaces for things like filesystem, network, and PIDs.

> [Stackoverflow.com](https://stackoverflow.com/questions/21498832/in-docker-whats-the-difference-between-a-container-and-an-image)

Another analogy that explains images and containers is classes and objects in object-oriented programming: an image is a class, while a container is an object of that class. Explaining further: in order to run a Laravel app on our local machine, we install LEMP, WAMP, XAMPP, or MAMP as the case may be, as well as phpmyadmin if it's needed. These pieces of software are bundled as images in the docker environment. This means that to dockerize a Laravel app, we need to create one image that contains each of those packages, or we create different images for each of them and network them together inside the docker setup. In this article, we'll not be creating images from scratch; we are going to pull already built images from [docker hub](http://hub.docker.com). Below are the images we are going to pull from docker hub to enable us to `dockerize` our app:

- [creativitykills/nginx-php-server](https://hub.docker.com/r/creativitykills/nginx-php-server) - php and nginx, maintained by `Neo Ighodaro`
- [mysql:5.7](https://hub.docker.com/_/mysql)
- [phpmyadmin/phpmyadmin](https://hub.docker.com/r/phpmyadmin/phpmyadmin)

Click on each of the images to read details about them.
Having established the necessary theories about what we are going to do, let us begin the main processes of dockerizing the app following the steps below. **Step 1 - Create the Laravel Project to be dockerized** Create the Laravel project that you want to dockerize. Open your terminal and cd into the root of the project. ``` ned@Ned-PC:/var/www/html/laravel-dockerization$ ``` **Step 2 - Create the docker-compose.yaml file** ``` $ touch docker-compose.yaml ``` Next up, edit the `docker-compose.yaml` file as follows: ```yaml version: '3' services: core_services: image: creativitykills/nginx-php-server container_name: core ports: - "44678:80" volumes: - ./:/var/www networks: - backend mysql: container_name: db_mysql image: mysql:5.7 ports: - "33062:3306" environment: - "MYSQL_ROOT_PASSWORD=${DB_PASSWORD}" volumes: - "./db/mysql/data:/var/lib/mysql" - "./db/mysql/initdb.d:/docker-entrypoint-initdb.d" networks: - backend pma: container_name: pma image: phpmyadmin/phpmyadmin ports: - "44679:80" environment: - "PMA_HOST=db_mysql" - "MYSQL_DATABASE=${DB_DATABASE}" - "MYSQL_ROOT_PASSWORD=${DB_PASSWORD}" networks: - backend networks: backend: driver: bridge ``` > You can view the snippet above on [GitHub Gist](https://gist.github.com/oriechinedu/9b64f876304325677109c1148291c9ae#file-docker-compse-yaml) for the line numbers. We’ve finished what we need to do, but before we move on, let me explain what is happening in the YAML file above. On line 1, we specified the version of docker-compose to be used. On line 2, we declared the beginning of the definition of our services; services here literally means images. On line 3, we declared the first service name `core_services`; note that the name is up to you to decide. This service would be an image comprising Nginx, the PHP interpreter and other dependencies such as composer. On line 4, we specified the name of the image to be pulled from the docker hub. 
On line 5, we specified the container name, that is, the name for any instance of the `core_services` image. Note that the name is up to you to decide. On line 6–7, we specified the port through which we can access the app on the browser. It means we can access the app by visiting 0.0.0.0:44678 in the browser. On line 8–9, we mapped the directory of our project to `/var/www` inside the docker container. What it means is, copy the entire project located in `/var/www/html/laravel-dockerization` to `/var/www` inside the container. With this, any change we make externally in the project directory will be reflected inside the docker container and vice versa. On line 10–11, we specified the network to be used by the container so that any other container connected to the same network can communicate with it. If you do not specify a network, docker creates one and attaches the container to it. It’s advisable you specify the network to be used so that you can have full control of its behavior. From line 13–23, we created the mysql service. We specified the version to be used as 5.7; this can change depending on your choice. We specified the `MYSQL_ROOT_PASSWORD`. This `${DB_PASSWORD}` means replace this with the value of `DB_PASSWORD` in the .env file. So, ensure that the value of `DB_PASSWORD` in the .env is the password you want to use for mysql. On line 20–21, we mapped the volume from the external directory to the mysql container volume. This volume ensures that data stored in the database is persisted on the local machine even if the container is deleted. You can name the directory what you choose to, but ensure you create it before the mapping. On line 22, we mapped the external directory `./db/mysql/initdb.d` to `docker-entrypoint-initdb.d`. This volume is used for importing data into the database. 
For instance, if you want to import an SQL dump into the database, you need to copy the sql dump file into the `db/mysql/initdb.d` and then access the file from docker-entrypoint-initdb.d inside the db_mysql container. We’ll demonstrate how to do this later in this article. You can read more about `docker-entrypoint` [here](https://hub.docker.com/_/mysql). From line 24–34, we created the phpmyadmin service with the container name pma. We exposed port 44679, which means we can visit `0.0.0.0:44679` to access the phpmyadmin. On line 30, you’ll notice we specified the `PMA_HOST` as the MySQL container `db_mysql`, this is very important. If the `PMA_HOST` is not in sync with the mysql container, you’ll not be able to access your phpmyadmin as the `PMA_HOST` will default to localhost. We also specified the `MYSQL_ROOT_PASSWORD` and `MYSQL_DATABASE` to use the `DB_PASSWORD` and `DB_DATABASE` in the .env respectively. Note that `MYSQL_DATABASE` is optional. From line 36–38, we defined the backend network to which we connected the containers in the previous lines. When docker encounters the backend network, it will look for where it’s defined and create it first before continuing. Next up, we confirm that the correct values are passed to docker-compose from the `.env` by running the command below: ``` $ docker-compose config ``` The output of the command above should look like the screenshot below: ![](https://thepracticaldev.s3.amazonaws.com/i/kgujrq9w8r3mstfwhfgh.png) Below is the look of my .env file: ``` ... DB_CONNECTION=mysql DB_HOST=db_mysql DB_PORT=3306 DB_DATABASE=laravel_dockerization DB_USERNAME=root DB_PASSWORD=secret ... ``` Notice that the `DB_HOST` is set to the name of the mysql container defined in the docker-compose.yaml. **Step 3 — Building the containers** Now that we have set up the docker-compose.yaml, we are set to build the containers. 
Run the command below at the root of your project: ``` $ docker-compose up -d ``` The command above will build the containers and start them up. The flag `-d` starts the containers in detached mode: the processes run in the background and you’ll be able to run other commands subsequently, but if you’d want to monitor the build processes, then just run: ``` $ docker-compose up ``` Next up, we need to give full permission to the storage directory so that docker can read and write to it; run the command below: ``` $ sudo chmod 777 -R ./storage ``` Now, if the docker-compose command above built successfully, we confirm that the containers are running as expected by running the command below: ``` $ docker ps ``` The output of the command above should look like the screenshot shown below: ![Docker ps screenshot](https://thepracticaldev.s3.amazonaws.com/i/tenjt2u6cbf00eo6rj74.png) The screenshot above shows that all the containers are running as expected. If all of the three containers are not running, it means something has gone wrong. You need to drop the containers and rebuild again, removing the -d flag. This will make the build process verbose and you can easily pinpoint what went wrong and get it fixed. To do this run the command below: ``` $ docker-compose down && docker-compose up ``` Now head to the browser and visit `http://0.0.0.0:44678` and voila! our app is up! ![Screenshot of the running app](https://thepracticaldev.s3.amazonaws.com/i/cj4jybu4ahreo859f81s.png) To view the `phpmyadmin` dashboard visit `http://0.0.0.0:44679`: ![screenshot of phpmyadmin dashboard](https://thepracticaldev.s3.amazonaws.com/i/11z3e1aadmmlgpppph0q.png) > Note: if the database is not created automatically, you can manually create it using the PMA GUI or via the MySQL command line. 
**Step 4 — Interacting with the containers** Now that the app is up and running, you might want to interact with the containers to perform actions like running migrations or importing an SQL dump into the database. To enter the container, run the command below: ``` $ docker exec -it core bash ``` Remember that core is the name of the php and nginx container. You can now run migrations inside the container. ``` $ php artisan migrate ``` To enter the mysql container, run: ``` $ docker exec -it db_mysql bash ``` --- ## How to import SQL dump file into the database There are situations where you might want to import a large SQL dump file into your database; in this case, you might want to do it via the terminal. Follow the steps below to achieve that: - First copy the SQL dump into `db/mysql/initdb.d` - Enter the mysql container using `docker exec -it db_mysql bash` - run `cd docker-entrypoint-initdb.d` - `$ ls ` // to be sure the sql file is there - `$ mysql -h localhost -u root -p <database_name> < sql_dump_file.sql` - You’ll be prompted to enter your mysql password. > Note that, when copying the SQL dump file into initdb.d, if you don’t have root access on your machine, you might want to use the sudo command to copy the file via the terminal as shown below: ``` $ sudo cp /path/to/sql_dump_file.sql db/mysql/initdb.d ``` The screenshot below shows the steps above being applied. ![](https://thepracticaldev.s3.amazonaws.com/i/mtjl17iddo3ga2qhy2py.png) ### Conclusion In this article, we have demonstrated how to dockerize a Laravel application using already built docker images from the docker hub. We tried to explain what goes on in the docker-compose.yaml, showed how we can interact with the containers, and demonstrated how to import an SQL dump into the MySQL container. Should you have any contribution or question for me, reach out here or via [twitter](https://twitter.com/iam_nedsoft).
nedsoft
142,635
Time Distort in Super Intergalactic Gang (and in video games in general?)
When I started to develop SIG I had only a couple of things clear: It was going to be a Shoot em'...
0
2019-07-19T14:08:56
https://dev.to/martin_cerdeira/time-distort-in-super-intergalactic-gang-and-in-video-games-in-general-1hgb
gamedevelop, showdev, beginners
When I started to develop SIG I had only a couple of things clear: - It was going to be a Shoot em' up. - It was going to have a slow-motion mechanic, as if it were Neo and the famous "bullet time". <em>Video demo</em> [![IMAGE ALT TEXT HERE](https://img.youtube.com/vi/4zkn1-7K1c0/0.jpg)](https://www.youtube.com/watch?v=4zkn1-7K1c0) We are usually used to seeing things in slow motion as a way of replaying an event that happens at high speed so we can see it clearly. In fast actions such as a sport, a fight or a car accident, seeing things at a slower speed than normal helps us best appreciate what happens. It is very attractive, for example, to see Messi in a slow-motion play, and to notice how he seems to be affected less than his rivals by this time distortion, i.e. he moves relatively faster than his rivals. Of course, this is not so; only his greater speed and skill in real time achieve this effect. So, since the time-distort mechanic was the main thing in my game, it had to feel good and be gratifying, and make the player feel greatly skillful, able to dodge bullets, slip between enemies and shoot them in the face. That's why it was clear to me that I couldn't just implement it as a time slowdown, and that it had to affect different objects in particular ways in order to maximize the effect. The first thing I did was to define a global variable in my game that would represent the speed of time: <code>global TIME_SPEED = 1;</code> ![](https://thepracticaldev.s3.amazonaws.com/i/0ax8vp5yl3js3i9ly7v8.png) That is, it is used as a multiplier, with 1 being the normal value; for example, if I wanted to make everything go at half speed the value would be 0.5. Then, I used that multiplier in every place where there was some process that had something to do with time. For example: Enemies advance and move at a given speed that is linked to time because, to travel a given distance, less time means greater speed. 
Also, for the most part, they have attacks based on a timer that dictates the frequency of the shot or attack. In the case of bosses, the timers are involved in changing states, for example: <code>passive >> attack 1 >> passive >> attack 2</code> The speed of bullets, frames of animations and movement of the character are also linked to time. With the global variable multiplying everywhere, the only thing left is to change its value and everything automatically goes slower in "time distort mode". What remains is to make the player experience it as a real time distortion and use it to their benefit. Slowing everything down evenly would be intuitive but, through experimentation and a lot of playtesting, what I determined was that it is perceived as mere slowness and nothing else. It doesn't give that feeling of over-power; it just moves everything slower, and it doesn't even give a major advantage because, although the bullets go slower, so does the character. Here I think there can be thousands of ways to solve this problem and I don't think there's one that's optimal, but I think the one I used worked quite well, and it was: **Enemies:** All enemies, bosses and enemy bullets are affected by slow motion equally, using a value of 0.3 as the multiplier, i.e. less than half the normal speed. This affects their moving speed, their animations, their fire rate, and any timer that modifies behavior or reaction to an event. That means that enemies "think" slower than in normal mode. **The player:** Characters are affected in their movements to a lesser extent than enemies, using a value of 0.5. The speed of the bullets is also affected, but not the fire rate, which remains the same as at normal speed. The latter sounds a bit strange, but what this achieves is that the player can continue shooting and the bullets accumulate, forming a denser row, and when time distorts and accelerates they produce a "matrix" effect. 
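The two-tier multiplier described above can be sketched in a plain frame-update loop. This is a hypothetical sketch, not the game's actual code: `distortActive` stands in for checking the global TIME_SPEED, and the 0.3/0.5 factors are the ones described above.

```javascript
// Hypothetical sketch of the time-distort update described above -- not the
// game's real code. Enemies are slowed to 0.3, the player to 0.5, and the
// player's fire-rate timer is deliberately left unscaled.
let distortActive = true; // true while "time distort mode" is on

const ENEMY_TIME_SCALE = 0.3;
const PLAYER_TIME_SCALE = 0.5;

function updateEnemy(enemy, dt) {
  const scale = distortActive ? ENEMY_TIME_SCALE : 1;
  enemy.x += enemy.speed * scale * dt; // movement slows down
  enemy.attackTimer -= scale * dt;     // ...and so does "thinking"
}

function updatePlayer(player, dt) {
  const scale = distortActive ? PLAYER_TIME_SCALE : 1;
  player.x += player.speed * scale * dt; // movement slows down, but less
  player.fireTimer -= dt;                // fire rate stays at full speed
}

// Simulate one second (60 frames) of distorted time:
const enemy = { x: 0, speed: 100, attackTimer: 1 };
const player = { x: 0, speed: 100, fireTimer: 1 };
for (let frame = 0; frame < 60; frame++) {
  updateEnemy(enemy, 1 / 60);
  updatePlayer(player, 1 / 60);
}
// enemy.x is now ~30 while player.x is ~50, so the player moves relatively
// faster, and fireTimer has fully elapsed, so shots keep coming at the
// normal rate and bunch up into that denser row.
```

Because the per-class scale is applied at the single point where `dt` enters each update, the rest of the game logic never needs to know whether time is distorted.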
In short, during a time distort, the player has more time to react, feels more agile and is able to literally slip between enemy bullets. Indirectly, it's like having a boost of speed and more fire rate, but in a slightly more interesting and covert way. Then, to accentuate the feeling of time distortion, I generated a kind of ghost trail that follows the movements of the players with a delay, and changed the pitch of the SFX and music to half. I think this method is generic enough to use in any other game or genre, so I consider that sharing these thoughts and techniques could be useful to any game developer trying to achieve something similar. ![](https://thepracticaldev.s3.amazonaws.com/i/ex57u99kedve5gm6jwsm.gif)
martin_cerdeira
142,653
How to easily implement new In-App update feature to your Android App
Working all night to fix a bug or add a groundbreaking feature to your Android app and seeing only a...
0
2019-07-19T14:30:57
https://dev.to/sanojpunchihewa/how-to-easily-implement-new-in-app-update-feature-to-your-android-app-9e9
android
Working all night to fix a bug or add a groundbreaking feature to your Android app and seeing only a few users have updated their apps? No worries now: Android has released a new feature called [in-app updates](https://developer.android.com/guide/app-bundle/in-app-updates) where you can prompt the user to update the app. ![](https://thepracticaldev.s3.amazonaws.com/i/5mg3mnkl2dmrxnf6wgly.png) _source: https://developer.android.com/guide/app-bundle/in-app-updates_ >Although some users enable background updates when their device is connected to an unmetered connection, other users may need to be reminded to update. In-app updates is a Play Core library feature that introduces a new request flow to prompt active users to update your app. — https://developer.android.com/guide/app-bundle/in-app-updates As an Android developer, I grew tired of adding all this code to my apps. Thus I developed a [library](https://github.com/SanojPunchihewa/InAppUpdater) which puts the in-app update feature in place, cutting it down to 5 lines of code. 
**Let’s get started!** ###Implementation ####Step 1: Add jitpack to your root level build.gradle at the end of repositories ```Gradle allprojects { repositories { maven { url "https://jitpack.io" } } } ``` ####Step 2: Add the dependency to app level build.gradle ```Gradle dependencies { implementation 'com.github.SanojPunchihewa:InAppUpdater:1.0.2' } ``` ####Step 3: Initialize the UpdateManager in your onCreate method of the Activity ```java @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); // Initialize the UpdateManager UpdateManager.Builder().mode(UpdateManagerConstant.IMMEDIATE).start(this); } ``` There are two update modes, Flexible and Immediate: - Flexible (`UpdateManagerConstant.FLEXIBLE`) (default) — The user can use the app during the update download; installation and restart need to be triggered by the user - Immediate (`UpdateManagerConstant.IMMEDIATE`) — The user will be blocked until download and installation are finished; restart is triggered automatically ####Step 4: Call continueUpdate method in your onResume method to install waiting updates ```java @Override protected void onResume() { super.onResume(); UpdateManager.continueUpdate(this); } ``` **_Tadaa! That’s it. Now you have in-app updates in your Android app_** If this helped you please give some :heart: to this article and ⭐️ the [library](https://github.com/SanojPunchihewa/InAppUpdater) Since I released the [library](https://github.com/SanojPunchihewa/InAppUpdater) recently 😉, if you find any bugs or have a suggestion please raise an [issue](https://github.com/SanojPunchihewa/InAppUpdater/issues) _Also if you find it hard to get working, leave a comment, I am always happy to help you_ 😃
sanojpunchihewa
142,680
Understanding RxJS Observables and why you need them
What is RxJS? RxJS is a framework for reactive programming that makes use of...
0
2019-07-19T17:07:51
https://blog.logrocket.com/understanding-rxjs-observables/
rxjs
--- title: Understanding RxJS Observables and why you need them published: true tags: rxjs canonical_url: https://blog.logrocket.com/understanding-rxjs-observables/ --- ![](https://thepracticaldev.s3.amazonaws.com/i/fcteb9li2egicl093xwt.png) ## What is RxJS? [RxJS](https://rxjs-dev.firebaseapp.com/) is a framework for reactive programming that makes use of [Observables](https://rxjs-dev.firebaseapp.com/guide/observable), making it really easy to write asynchronous code. According to the official [documentation](https://rxjs-dev.firebaseapp.com/), this project is a kind of reactive extension to JavaScript with better performance, better modularity, better debuggable call stacks, while staying mostly backwards compatible, with some breaking changes that reduce the API surface. It is the [official library used by Angular](https://angular.io/guide/observables) to handle reactivity, converting pull operations for callbacks into Observables. ## Prerequisites To follow along with this article’s demonstration, you should have: - [Node version 11.0 installed](https://nodejs.org/) on your machine. - [Node Package Manager version 6.7](https://nodejs.org/) (usually ships with Node installation). - [Angular CLI](https://cli.angular.io/) version 7.0 - The latest version of Angular (version 7) ```jsx // run the command in a terminal ng version ``` Confirm that you are using version 7, and [update to 7](https://angular.io/cli/update) if you are not. - Download this tutorial’s starter project [here](http://github.com/viclotana/ng_canvas) to follow along with the demonstrations - Unzip the project and initialize the node modules in your terminal with this command ```jsx npm install ``` Other things that will be nice to have are: - Working knowledge of the Angular framework at a beginner level ## Understanding Observables: pull vs push To understand Observables, you first have to understand the pull and push context. 
In JavaScript, there are two systems of communication called push and pull. A **pull system** is basically a function. A function is usually first defined (a process called _production_) and then somewhere along the line called (a process called _consumption_) to return the data or value in the function. For functions, the producer (which is the definition) does not have any idea of when the data is going to be consumed, so the function call literally pulls the return value or data from the producer. In a **push system**, on the other hand, control rests with the producer; the consumer does not know exactly when the data will get passed to it. A common example is promises in JavaScript: promises (producers) push an already-resolved value to callbacks (consumers). Another example is RxJS Observables: Observables produce multiple values called a stream (unlike promises that return one value) and push them to observers which serve as consumers. [![LogRocket Free Trial Banner](https://i0.wp.com/blog.logrocket.com/wp-content/uploads/2017/03/f760c-1gpjapknnuyhu8esa3z0jga.png?resize=1200%2C280&ssl=1)](https://logrocket.com/signup/) ## What is a Stream? [A stream](https://angular.io/guide/observables) is basically a sequence of data values over time; this can range from a simple increment of numbers printed in 6 seconds (0,1,2,3,4,5) to coordinates printed over time, and even the data values of inputs in a form or chat texts passed through web sockets or API responses. These all represent data values that will be collected over time, hence the name stream. ## What are Observables? Streams are important to understand because they are facilitated by RxJS Observables. An Observable is basically a function that can return a stream of values to an observer over time, either synchronously or asynchronously. The data values returned can range from zero to an infinite number of values. 
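Stripped of Angular and the RxJS library itself, the push model described above can be hand-rolled in a few lines of plain JavaScript. This is a sketch for illustration only; `createObservable` and `numbers$` are made-up names, not RxJS APIs.

```javascript
// A hand-rolled sketch of the push model -- not the real RxJS
// implementation, just enough to show a producer pushing a stream of
// values to a consumer (the observer) through subscribe.
function createObservable(producer) {
  return {
    subscribe(observer) {
      // Execution only starts on subscription; the producer then decides
      // when (and how many) values get pushed to the observer.
      producer(observer);
    }
  };
}

const numbers$ = createObservable(observer => {
  [0, 1, 2].forEach(n => observer.next(n)); // a synchronous stream
  observer.complete();
});

const received = [];
numbers$.subscribe({
  next: value => received.push(value),
  complete: () => received.push('done')
});
// received is now [0, 1, 2, 'done']
```

Note that nothing happens until `subscribe` is called: the producer function only runs once a consumer attaches, which is exactly the laziness Observables are known for.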
## Observers and subscriptions For Observables to work there need to be observers and subscriptions. Observables are data source wrappers, and the observer executes some instructions when there is a new value or a change in data values. The Observable is connected to the observer who does the execution through a subscription: with the subscribe method, the observer connects to the Observable to execute a code block. ## Observable lifecycle With some help from observers and subscriptions, the Observable instance passes through these four stages throughout its lifetime: - Creation - Subscription - Execution - Destruction ## Creating Observables If you followed this post from the start, you must have opened the Angular starter project in VS Code. To create an Observable, you have to first import Observable from RxJS in the `.ts` file of the component you want to create it in. The creation syntax looks something like this: ```jsx import { Observable } from "rxjs"; var observable = Observable.create((observer:any) => { observer.next('Hello World!') }) ``` Open your `app.component.ts` file and copy the code block below into it: ```jsx import { Component, OnInit } from '@angular/core'; import { Observable } from "rxjs"; @Component({ selector: 'app-root', templateUrl: './app.component.html', styleUrls: ['./app.component.css'] }) export class AppComponent implements OnInit{ title = 'ngcanvas'; ngOnInit(): void { var observable = Observable.create() } } ``` ## Subscribing to Observables To tell RxJS to execute the code block on the Observable, or in simpler terms, to call the Observable to begin execution, you have to use the subscribe method like this: ```jsx export class AppComponent implements OnInit{ title = 'ngcanvas'; ngOnInit(): void { var observable = Observable.create((observer:any) => { observer.next('Hello World!') }) observable.subscribe(function logMessage(message:any) { console.log(message); }) } } ``` This subscribe method will cause “Hello World!” to be logged 
in the console. ## Executing Observables The observer is in charge of executing instructions in the Observable, so each observer that subscribes can deliver three values to the Observable: 1. **Next value:** With the next value, the observer sends a value that can be a number, a string or an object. There can be more than one next notification set on a particular Observable 2. **Error value:** With the error value, the observer sends a JavaScript exception. If an error is found in the Observable, nothing else can be delivered to the Observable 3. **Complete value:** With the complete value, the observer sends no value. This usually signals that the subscription for that particular Observable is complete. If the complete value is sent, nothing else can be delivered to the Observable. This can be illustrated with the code block below: ```jsx export class AppComponent implements OnInit{ title = 'ngcanvas'; ngOnInit(): void { var observable = Observable.create((observer:any) => { observer.next('I am number 1') observer.next('I am number 2') observer.error('I am number 3') observer.complete('I am number 4') observer.next('I am number 5') }) observable.subscribe(function logMessage(message:any) { console.log(message); }) } } ``` If you run the application at this point in the dev server with ```jsx ng serve ``` and open up the console in the developer tools, your log will look like this: ![error in console](https://i2.wp.com/blog.logrocket.com/wp-content/uploads/2019/07/angularerrorconsole.png?resize=1362%2C422&ssl=1) You will notice that either the error value or the complete value automatically stops execution, and so the number 5 never shows up in the console. This is a simple synchronous exercise. To make it asynchronous, let us wrap timers around some of the values. 
```jsx export class AppComponent implements OnInit{ title = 'ngcanvas'; ngOnInit(): void { var observable = Observable.create((observer:any) => { observer.next('I am number 1') observer.next('I am number 2') setInterval(() => { observer.next('Random Async log message') }, 2000) observer.next('I am number 3') observer.next('I am number 4') setInterval(() => { observer.error('This is the end') }, 6001) observer.next('I am number 5') }) observable.subscribe(function logMessage(message:any) { console.log(message); }) } } ``` This will appear like this in your browser console: ![console errors](https://i1.wp.com/blog.logrocket.com/wp-content/uploads/2019/07/consoleerrors.gif?resize=566%2C270&ssl=1) Notice that the values were displayed asynchronously here, with the help of the setInterval function. ## Destroying an Observable To destroy an Observable is to essentially remove it from the DOM by unsubscribing from it. Normally, for asynchronous logic, RxJS takes care of unsubscribing, and immediately after an error or a complete notification your Observable gets unsubscribed. For completeness, you can manually trigger unsubscribe with something like this: ```jsx return function unsubscribe() { clearInterval(observable); }; ``` ## Why Observables are so vital - Emitting multiple values asynchronously is very easily handled with Observables - Error handling can also easily be done inside Observables, unlike constructs such as promises - Observables are considered lazy, so with no subscription there will be no emission of data values - Observables can be resolved multiple times, as opposed to functions or even promises ## Conclusion We have had a thorough introduction to Observables, observers and subscriptions in RxJS. We have also walked through the lifecycle process of Observables with practical illustrations. More RxJS posts can be found on the blog, happy hacking! 
* * * ## Plug: [LogRocket](https://logrocket.com/signup/), a DVR for web apps   ![LogRocket Dashboard Free Trial Banner](https://i2.wp.com/blog.logrocket.com/wp-content/uploads/2017/03/1d0cd-1s_rmyo6nbrasp-xtvbaxfg.png?resize=1200%2C677&ssl=1)   [LogRocket](https://logrocket.com/signup/) is a frontend logging tool that lets you replay problems as if they happened in your own browser. Instead of guessing why errors happen, or asking users for screenshots and log dumps, LogRocket lets you replay the session to quickly understand what went wrong. It works perfectly with any app, regardless of framework, and has plugins to log additional context from Redux, Vuex, and @ngrx/store.   In addition to logging Redux actions and state, LogRocket records console logs, JavaScript errors, stacktraces, network requests/responses with headers + bodies, browser metadata, and custom logs. It also instruments the DOM to record the HTML and CSS on the page, recreating pixel-perfect videos of even the most complex single-page apps.   [Try it for free](https://logrocket.com/signup/). * * * The post [Understanding RxJS Observables and why you need them](https://blog.logrocket.com/understanding-rxjs-observables/) appeared first on [LogRocket Blog](https://blog.logrocket.com).
bnevilleoneill
142,767
List of Variable Fonts
OpenType variable fonts were introduced in 2016 as an extension to the OpenType specification. Techni...
0
2019-07-21T06:31:32
https://vuild.com/variable-fonts
tutorials
--- title: List of Variable Fonts published: true tags: Tutorials canonical_url: https://vuild.com/variable-fonts --- OpenType variable fonts were introduced in 2016 as an extension to the OpenType specification. Technically they allow a single font file to store a continuous range of design variants. In simple terms it means fonts that can dance. Being able to make your words literally come to life is quite the novelty & when used properly it’s bound to get attention. [Download as PDF](https://vuild.com/variable-fonts?pdf=1) 1. 1001 Fonts [https://www.1001fonts.com/variable-fonts.html](https://www.1001fonts.com/variable-fonts.html) 2. Agrandir [https://pangrampangram.com/products/agrandir](https://pangrampangram.com/products/agrandir) 3. Axis-Praxis [https://www.axis-praxis.org](https://www.axis-praxis.org) 4. Clother [https://black-foundry.com/clother](https://black-foundry.com/clother) 5. Compressa [https://compressa.preusstype.com](https://compressa.preusstype.com) 6. Creative Market [https://creativemarket.com](https://creativemarket.com/search?q=variable+font&categoryIDs=0) 7. Dinamo Pipeline [https://dinamopipeline.com](https://dinamopipeline.com) 8. DSType Foundry [https://www.dstype.com/variable-fonts](https://www.dstype.com/variable-fonts) 9. ETC Trispace [https://www.etc.supply/trispace](https://www.etc.supply/trispace) 10. Fit [https://djr.com/fit](https://djr.com/fit) 11. Fonts Arena [https://fontsarena.com/tag/variable-font](https://fontsarena.com/tag/variable-font) 12. FontSpace [https://www.fontspace.com/category/variable](https://www.fontspace.com/category/variable) 13. Gingham [http://koe.berlin/variablefont](http://koe.berlin/variablefont) 14. Grafier [https://pangrampangram.com/products/grafier](https://pangrampangram.com/products/grafier) 15. Handjet [https://www.rosettatype.com/Handjet](https://www.rosettatype.com/Handjet) 16. 
Innschbruck [https://danielstuhlpfarrer.com/project/innschbruck](https://danielstuhlpfarrer.com/project/innschbruck) 17. Inter [https://rsms.me/inter](https://rsms.me/inter) 18. Jabin [http://www.fridamedrano.com/jabin.html](http://www.fridamedrano.com/jabin.html) 19. Jost [https://indestructibletype.com/Jost.html](https://indestructibletype.com/Jost.html) 20. Lab DJR Font [https://djr.com/lab-variable](https://djr.com/lab-variable) 21. Marvin [https://www.readvisions.com/marvin](https://www.readvisions.com/marvin) 22. Movement [http://www.nmtype.com/movement](http://www.nmtype.com/movement) 23. Secuela [https://pinspiry.com/secuela-free-variable-font](https://pinspiry.com/secuela-free-variable-font) 24. Spezia [https://luzi-type.ch/spezia](https://luzi-type.ch/spezia) 25. TINY [http://velvetyne.fr/fonts/tiny](http://velvetyne.fr/fonts/tiny) 26. Variable Fonts [https://v-fonts.com](https://v-fonts.com) 27. @variablefonts on Twitter [https://twitter.com/variablefonts](https://twitter.com/variablefonts) 28. Venn [https://www.daltonmaag.com/library/venn](https://www.daltonmaag.com/library/venn) 29. Very Able Fonts [http://www.very-able-fonts.com](http://www.very-able-fonts.com) 30. Vinila [https://plau.co/vinila](https://plau.co/vinila) Let us know if we missed your favorite variable font (they are not easy to find). This data is from [Vuild’s variable font list](https://vuild.com/variable-fonts). Please visit [vuild.com](https://vuild.com) for more.
vuild
142,785
[Suggestion Needed] : Generate complex PDF using puppeteer
I am working on generating a PDF with a complex header, footer & page border
0
2019-07-19T20:50:03
https://dev.to/irfaan008/suggestion-needed-generate-complex-pdf-using-puppeteer-28h5
headless, pdf, puppeteer, node
--- title: [Suggestion Needed] : Generate complex PDF using puppeteer published: true description: I am working on generating a PDF with a complex header, footer & page border tags: headless, pdf, puppeteer, node --- Headless Chrome is one of the best open-source options on the market for generating PDFs, screenshots, or almost anything else Chrome itself can do, so I tried using it to generate a PDF from my HTML. This is the end output I wanted: <a href="https://ibb.co/BLSbTG7"><img src="https://i.ibb.co/v1KpLhr/header.png" alt="header" border="0"></a><br /> Page 2 : <a href="https://ibb.co/48gKz5G"><img src="https://i.ibb.co/zXHQp0Y/page2.png" alt="page2" border="0"></a> But here are the challenges I ran into after a couple of hours: 1. No support for loading external resources in the headerTemplate (we can't use external CSS) - This can be worked around with inline CSS, but in my case the header was complex enough (images, SVG, and positioning styles) that writing it inline was very difficult. And not to mention, this blue highlighted box should auto-repeat on every page. 2. No support for loading images by URL in the header & footer - the suggested workaround is to use base64 images. 3. I want a 5px border on the pages. Here is the output I am able to achieve with puppeteer: 1. The border breaks and doesn't respect the page. Here I haven't set a separate header template, so the header is missing on page 2. <a href="https://ibb.co/3mbY6LN"><img src="https://i.ibb.co/TMGqjF8/Screenshot-2019-07-20-at-2-11-27-AM.png" alt="Screenshot-2019-07-20-at-2-11-27-AM" border="0"></a> 2. If I set the header template separately, the border behaves oddly: it starts after the header template finishes. <a href="https://ibb.co/QQrTK1P"><img src="https://i.ibb.co/1Q8Pmwr/Screenshot-2019-07-20-at-2-13-53-AM.png" alt="Screenshot-2019-07-20-at-2-13-53-AM" border="0"></a> At this stage I read around the web and found the following suggestions: 1.
I should give a top, left, and right border to my headerTemplate and a left, right, and bottom border to my body; this way the final output will look like the first image. 2. Use PDF merging: generate a one-page PDF containing only the header, then generate the other pages (with enough top margin to fit the header) without a header, and finally merge the header PDF onto each of the other pages using a PDF utility. <strong>Before jumping into the approach above, I would like to know if the dev community has come across a similar case and whether someone can point me to the right way to solve it. You can reach me on irfaan.aa@gmail.com too.</strong>
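For context, the header/footer behaviour above comes from Puppeteer's `page.pdf` options. Here is a minimal sketch of the kind of options object involved — the pixel values and the base64 placeholder are illustrative, not my actual code:

```javascript
// Sketch of a page.pdf options object (Puppeteer API, circa 1.x).
// Pixel values and the base64 string are illustrative placeholders.
const pdfOptions = {
  format: "A4",
  printBackground: true,
  displayHeaderFooter: true,
  // Header/footer templates only support inline CSS; external
  // stylesheets are not loaded, and images must be data: URIs.
  headerTemplate: `
    <div style="width: 100%; font-size: 10px; padding: 4px 16px;
                border-bottom: 1px solid #ccc;">
      <img src="data:image/png;base64,PLACEHOLDER" style="height: 20px;" />
      <span class="pageNumber"></span> / <span class="totalPages"></span>
    </div>`,
  footerTemplate: `
    <div style="font-size: 8px; width: 100%; text-align: center;">
      <span class="pageNumber"></span>
    </div>`,
  // Reserve room for the repeated header/footer on every page.
  margin: { top: "80px", bottom: "40px", left: "5px", right: "5px" },
};

// Usage, inside an async function with a Puppeteer page open:
//   await page.pdf({ path: "out.pdf", ...pdfOptions });
```

The `pageNumber`/`totalPages` class names are special values Puppeteer injects into the templates; the margins are what keep the body from overlapping the repeated header, which is also where my border problems start.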
irfaan008
145,532
How to get started with Selenium and Python
Step one, you need to get chromedriver. What is chromedriver? It is a separate binary that you must...
0
2019-07-22T12:54:48
https://dev.to/tonetheman/how-to-get-started-with-selenium-and-python-7p
selenium, python, automation
--- title: How to get started with Selenium and Python published: true description: tags: selenium,python,automation --- Step one, you need to get chromedriver. What is chromedriver? It is a separate binary that you must run to get Selenium and Chrome working. See this post for a brief explanation of what chromedriver does: https://dev.to/tonetheman/chromedriver-role-in-the-world-c06 To download chromedriver (which you must have to use Chrome with Selenium) go to this link: http://chromedriver.chromium.org/downloads Mainly, note the version of Chrome you are using. I am using Windows 10 and this Chrome: Version 75.0.3770.142 (Official Build) (64-bit) So I will pick this version of chromedriver: https://chromedriver.storage.googleapis.com/index.html?path=75.0.3770.140/ You need to save chromedriver.exe to a directory that you will remember, or to your working directory. You need to know where you saved chromedriver.exe because you will use the location in the Python script that you are about to write.

```python
from selenium import webdriver  ## note 1

driver = None
try:
    cpath = "e:\\projects\\headless\\chromedriver.exe"  ## note 2
    driver = webdriver.Chrome(cpath)  ## note 3
    driver.get("https://google.com")  ## note 4
    import time  ## note 5
    time.sleep(3)
finally:  # note 6
    if driver is not None:
        driver.quit()
```

Note 1 - this is where you are loading the Python webdriver binding. This is a fancy way of saying we are telling Python about Selenium. If you do not have this line, none of your Selenium scripts will run. Note 2 - this is the path where I put chromedriver.exe; your directory name will be different. The name does not matter either, just pick somewhere on your disk. Note 3 - this is where Chrome will start up. Chrome and chromedriver.exe are both started on this line. If you looked at your process list at the instant that line executes, you would see a new Chrome instance start along with a chromedriver.exe.
If you look closely, chromedriver.exe starts first and it starts Chrome.exe. Note 4 - this line navigates to Google. Not exciting, but at this point you will see your Selenium-driven Chrome navigate to a web page. woooo!!!! Note 5 - at this point I put in a sleep so you can actually see what is happening. In general, sleeps are bad when you are writing scripts, but there are times when you are debugging when time.sleep is useful. This is one of those cases. Note 6 - this is shutting down chromedriver.exe and Chrome. You need this for cleanup. If you did not run that line, Chrome.exe would continue to run until you stopped it manually. And that is it. Your first Selenium script with Python.
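The version-matching rule above (Chrome 75.x needs a 75.x chromedriver) can be sketched as a tiny helper. The function here is my own illustration, not part of Selenium:

```python
def chrome_major_version(version_string):
    """Return the major version from a Chrome version string.

    Chrome and chromedriver releases are matched on this major
    number, e.g. Chrome 75.0.3770.142 needs a 75.x chromedriver.
    """
    return version_string.split(".")[0]


# Chrome "Version 75.0.3770.142" should be paired with a 75.x driver.
print(chrome_major_version("75.0.3770.142"))  # 75
```

A mismatch between these two major versions is one of the most common reasons a Selenium script refuses to start Chrome at all.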
tonetheman
145,778
A Quick Intro to Apollo Client
A few months ago, we posted an article about how to fetch data from a GraphQL API. In it, we looked a...
0
2019-07-20T18:05:41
https://dev.to/eveporcello/a-quick-intro-to-apollo-client-1247
graphql, javascript, tutorial
A few months ago, we posted an article about how to [fetch data from a GraphQL API](https://moonhighway.com/fetching-data-from-a-graphql-api). In it, we looked at the [Snowtooth API](https://snowtooth.moonhighway.com), a fake ski resort with a real GraphQL API, and we were able to send queries to get data and send mutations to change data. We all had a great time. Now that a few months have passed, our hair is a little longer. Our beard a bit more full. And now we might be thinking: is it time to incorporate Apollo Client? Or more importantly, why might you want to incorporate Apollo Client? Apollo Client handles network requests and efficient caching. It will handle the network interface with the GraphQL API by creating a link. The cache will save queries and their resulting objects. A huge benefit of using REST is the ease with which you can handle caching. With REST, you can save the response data from a request in a cache under the URL that was used to access that request. For example, if Snowtooth was a REST API, you'd go to the lifts route, `/lifts`, to get lift data, and then you'd cache the data under that URL. With GraphQL, we're dealing with a single endpoint, so having a localized caching solution is essential as we build fast, efficient apps. We could create our own cache, but leaning on the vetted Apollo Client is a great place to get started. ## Apollo Client Setup Let's create a simple Apollo Client that will get data from the Snowtooth API. Then, we'll set up the project using Create React App. Using React isn't required for working with Apollo Client, but running this will generate a project shell for us that might be useful when extending this into a real application. Start by running this command in the Terminal or Command Prompt: ``` npx create-react-app snowtooth-client ``` `npx` will execute the `create-react-app` command with the name of the project, in this case, `snowtooth-client`.
Then make sure you're in the `snowtooth-client` directory and start the app: ``` cd snowtooth-client npm start ``` This will run the starter app on `localhost:3000`. Next, we'll install some dependencies: ``` npm i apollo-client graphql ``` All of the changes we'll make to our mini app will happen in the `src/index.js` file. The first change is that we'll highlight everything in the file and delete it. Then we'll create the client and log whatever the client is to the console: ```javascript import { ApolloClient } from "apollo-client"; const client = new ApolloClient(); console.log(client); ``` At this point, we should be seeing an error: `Uncaught TypeError: Cannot read property 'cache' of undefined`. Remember earlier that we said that Apollo Client gives us two things: the link to the GraphQL data and the cache? Well, those are the two things that need to be provided to the client constructor when setting it up. Start by installing a few more helpful packages: ``` npm i apollo-link-http apollo-cache-inmemory ``` These packages are going to help us connect to our GraphQL API and set up a local cache. Once installed, we'll import and use them in the `index.js` file. Let's start with the imports: ```javascript import { createHttpLink } from "apollo-link-http"; import { InMemoryCache } from "apollo-cache-inmemory"; ``` Then we'll use these helpers in the client constructor: ```javascript const client = new ApolloClient({ link: createHttpLink({ uri: "https://snowtooth.moonhighway.com" }), cache: new InMemoryCache() }); ``` `createHttpLink` takes in an object with our GraphQL API provided on the `uri` key and will fetch GraphQL results from a GraphQL API over an http connection. `InMemoryCache` is the default cache implementation for Apollo Client 2.0. We can check out `localhost:3000` again and open the console. This is now logging the client. There's not too much here yet, but we can see some familiar values here like the `cache` and `link`. 
![Client Log](https://moonhighway.com/static/ccf983072f960061d8fb18f67841b929/0fafd/clientlog.png) At this point, we're not sending a request though. In order to send a request (a query) to the GraphQL endpoint, we'll need to incorporate one last package: `graphql-tag`. ``` npm install graphql-tag ``` Once installed, we'll import a function called `gql` from `graphql-tag` and wrap the query that we want to send to the API in this tag: ```javascript import gql from "graphql-tag"; const query = gql` query { allLifts { name } } `; ``` The tag function `gql` that wraps the query will parse the query string and turn it into an AST, an abstract syntax tree. An AST is a representation of the query string as an object. Once defined, we'll use the `client.query` function to send a query to the GraphQL API: ```javascript client .query({ query }) .then(console.log); ``` Calling `client.query` will send the query to the API that is defined in our client instantiation. Once you run this, you should see the JSON response logged to the console: ![Object Response](https://moonhighway.com/static/97a91f1e6ee73e71d49e7da96e61dbdc/db64a/objresponse.png) This object gives us all of the data from the query on the key called `data`. We also see whether the data is loading, if the request is stale, and the network status code. If successful, the network status code will be 7. Another thing we can take a look at with a console log message is the cache: ```javascript console.log("Cache", client.cache); ``` This will show the entire client cache object. We can drill down into the data response as well with: ```javascript console.log("Cache", client.extract()); ``` The data is cached on a key called `ROOT_QUERY`. Then we see the data added to an index on the query value. For example, if we ask for `allLifts`, the first lift will be cached at `allLifts.0`, the second lift cached at `allLifts.1`, and so on. 
If you want to be able to view the state of the cache in a nicer user interface, I'd recommend installing the [Apollo Dev Tools Chrome Extension](https://chrome.google.com/webstore/detail/apollo-client-developer-t/jdkknkkbebbapilgoeccciglkfbmbnfm?hl=en-US). Any time you're on a page that is using Apollo Client, you'll be able to open the Dev Tools. This will give you visibility into the cache, which is a little nicer than logging these details to the console: ![DevTools](https://moonhighway.com/static/50fb33756f6f30da7c270ec6c6c141db/20783/devtools.png) The fact that you can see the state of the cache is cool, but it's also really useful to have the GraphiQL explorer available. If the page you're viewing has an instance of the Apollo Client, you'll be able to use GraphiQL to introspect the schema and send queries, mutations, and subscriptions. Killer. Apollo Client doesn't require you to use any sort of specific UI library. Everything we've done in this article has used good old vanilla JavaScript. If you want to learn more about how to integrate Apollo Client with React though, check out our other article on [Apollo React Hooks](https://moonhighway.com/apollo-hooks)!
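To make the `ROOT_QUERY` layout described above concrete, here is a plain-JavaScript sketch of roughly what `client.extract()` returns for the `allLifts` query. The lift names are placeholders, and the exact shape can vary between Apollo Client versions:

```javascript
// Plain-object sketch of the normalized cache shape described above.
// Lift names are placeholders; real data comes from the Snowtooth API,
// and exact keys can differ between Apollo Client versions.
const cacheSnapshot = {
  ROOT_QUERY: {
    // Each element of the allLifts list is stored under an indexed key.
    "allLifts.0": { name: "Example Lift A" },
    "allLifts.1": { name: "Example Lift B" },
  },
};

// Reading a single cached lift back out of the snapshot:
const firstLift = cacheSnapshot.ROOT_QUERY["allLifts.0"];
console.log(firstLift.name); // Example Lift A
```

This normalization is what lets Apollo answer a repeated query from the cache instead of hitting the network again.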
eveporcello
145,863
The Greatest Learning Technique For Learning to Code
I started programming when I was 18 years old, but I did it just because it was required in school, I...
0
2019-07-21T04:28:02
https://dev.to/stevenanthony/the-greatest-learning-technique-for-learning-to-code-1ha5
javascript, career, beginners, codenewbie
I started programming when I was 18 years old, but only because it was required in school. I just did what was required, that's it. I didn't retain any information, nothing at all. When I started university, I began to take things a bit more seriously. A lot of my curriculum was focused on Object Oriented programming, which was cool, but wasn't what I wanted to learn. I liked building web applications, so I tried to figure out some way to learn online. I bought a Udemy class for full stack JavaScript, watched YouTube tutorials, and followed internet guides. And the results were... horrendous. I had been too busy following guides and just copy pasta coding. When it came time to build something from scratch, I had no idea what I was doing, because I wasn't actually learning. I was given everything. In reality, coding is about reading documentation, applying techniques shared by other developers, and problem solving (Stack Overflow is a good friend). That is how you learn to code. The biggest problem with following guides to build projects is that you're given everything. A large part of software development is problem solving, and by following guides that aspect is eliminated. The greatest advice I could give is to simply start something. Whatever you want to build, whether it be a web application or a mobile app, open the docs, read the "getting started" section, and voila: you're on your way to creating and finishing your first real project. Also a nice side tip that helped me learn: try to code for at least 1 hour per day. This really helps you retain things that you've learned on previous days and keeps your mind fresh. Nothing sucks more than returning to a 3 week old poorly documented code base :P
stevenanthony
145,965
Docker Containers vs VMs
Virtual Machines (VMs) and Docker containers are two technologies used to improve resource utilization. This post will help you understand the background of both as well as how they differ.
0
2019-07-21T15:26:11
https://dev.to/npentrel/docker-containers-vs-vms-257i
docker, containerization, containers, vms
--- title: Docker Containers vs VMs published: true description: Virtual Machines (VMs) and Docker containers are two technologies used to improve resource utilization. This post will help you understand the background of both as well as how they differ. tags: docker, containerization, containers, VMs cover_image: https://thepracticaldev.s3.amazonaws.com/i/ashm2ss6vc6ool7he92p.png --- **Virtual Machines (VMs)** and **Docker containers** are two technologies that can help with resource utilization, enabling you to run multiple apps in isolated environments on the same infrastructure. Given a terminal on a VM or a container, either will behave much as though it was in fact a dedicated machine. This can make it hard to differentiate between VMs and containers, especially if you are just getting started with Docker. To help with that, this blog post will show you how both function under the hood, while pointing out their similarities and differences. No previous knowledge of VMs or containers is necessary for this post. #### Virtual Machines and Containers VMs have been around for a long time, allowing physical machines to run multiple operating systems and multiple isolated apps. VMs are similar to "regular" physical computers: they run an operating system, have a lot of libraries installed, and run applications. The difference is that a virtual machine does not have its own dedicated hardware. Instead, VMs use the physical machine's underlying infrastructure through software that imitates dedicated hardware. The software that virtualizes hardware for VMs is called a hypervisor. ![Multiple VMs that run on top of a hypervisor which regulates access to underlying infrastructure.](https://thepracticaldev.s3.amazonaws.com/i/b7ln74cl0po6wrd75ja4.png) <figcaption>Source: https://docs.docker.com/get-started/#containers-and-virtual-machines</figcaption> The fact that each VM has its own operating system makes VMs require quite a bit of storage and other resources. 
To reduce the needed resources per running application, people came up with containers. While containerization had been around for a while on various operating systems, the community rallied behind Docker which was first released in 2013. During that time, the tech world (and the DevOps world in particular) was looking for solutions that would make deploying and scaling microservices more efficient. Docker managed to address that need by making Linux's LXC containerization easier to use and more accessible. They took the heavy lifting and planning that usually went into configuring a container and turned it into writing a Docker file which then did that work for you. This ease of use is what made Docker so popular; the fact that Docker was open source from the start certainly also helped. Containers allow developers to minimize the resources they need by only packaging an application and its exact dependencies. Instead of running on a hypervisor the containers run on top of a container runtime environment which is installed on an operating system. While VMs use virtualized hardware, containers use underlying resources of the host operating system. ![Multiple containers that run on top of Docker which is installed on the host operating system which regulates access to underlying infrastructure.](https://thepracticaldev.s3.amazonaws.com/i/hz62tbfvkjntiqq4778h.png) <figcaption>Source: https://docs.docker.com/get-started/#containers-and-virtual-machines</figcaption> Comparing VMs and containers to houses and apartments can help make these differences more apparent: A house has its own water pipes, its own internet cable, and its own garbage disposal system. Apartments on the other hand have these resources managed by their apartment complex and can access the resources of the entire building without having to each have their own separate version. House rules ensure that apartment residents do not irritate each other and get a fair share of all resources. 
Virtual Machines are like houses in this comparison. They each have their own operating system with its own dedicated resources (e.g. kernel, filesystem, process tree, network stack). The VM accesses CPU, storage, RAM etc. through a hypervisor which virtualizes the hardware. Containers are much more like apartments. They only have exactly what they need to run their application. They share the underlying operating system's resources (e.g. kernel, filesystem, process tree, network stack) which are isolated by the container runtime environment which enforces rules around the resources particular containers have access to. Overall, this model makes containers a much more lightweight solution. These differences in how containers and VMs function under the hood make containers a lot faster to provision and means they are often (and generally should be) used as immutable, non-persistent constructs. Containers live and die and are largely not something you ought to care about (unless they all die at the same time!). VMs, on the other hand, often have a longer life span, you spend more time configuring them and you probably care if something happens to them. On a high level, the differences between containers and VMs can be summarized by saying that VMs are considered to be a virtualization technology and containers an application delivery technology. #### Learn more This was our concise overview of VMs and containers. 
If you are interested in learning more, check out: * [Docker Jargon: FROM Dockerfile to Container](https://dev.to/npentrel/docker-jargon-from-dockerfile-to-container-942) * [Docker tutorial](https://www.youtube.com/watch?v=wjvyN_r-zkk) * [Docker cheatsheet](https://github.com/npentrel/SmoothDevOps/raw/master/cheatsheets/01%20Docker%20Cheatsheet.pdf) * [Docker's get started guide](https://docs.docker.com/get-started/) * [Nigel Poulton's Docker Deep Dive](https://www.amazon.com/Docker-Deep-Dive-Nigel-Poulton-ebook/dp/B01LXWQUFF) * [orhandogan.net/docker/](http://orhandogan.net/docker/)
npentrel
145,986
WebGL month. Day 21. Rendering a minecraft terrain
In this tutorial we'll render a minecraft terrain
0
2019-07-21T16:40:55
https://dev.to/lesnitsky/webgl-month-day-21-rendering-a-minecraft-terrain-24b5
beginners, javascript, webgl
--- title: WebGL month. Day 21. Rendering a minecraft terrain published: true description: In this tutorial we'll render a minecraft terrain tags: beginners, javascript, webgl cover_image: https://thepracticaldev.s3.amazonaws.com/i/zf3sev83jaggq3n990rj.jpg --- This is a series of blog posts related to WebGL. A new post will be available every day [![GitHub stars](https://img.shields.io/github/stars/lesnitsky/webgl-month.svg?style=social&hash=day21)](https://github.com/lesnitsky/webgl-month) [![Twitter Follow](https://img.shields.io/twitter/follow/lesnitsky_a.svg?label=Follow%20me&style=social&hash=day21)](https://twitter.com/lesnitsky_a) [Join mailing list](http://eepurl.com/gwiSeH) to get new posts right to your inbox [Source code available here](https://github.com/lesnitsky/webgl-month) Built with [![Git Tutor Logo](https://git-tutor-assets.s3.eu-west-2.amazonaws.com/git-tutor-logo-50.png)](https://github.com/lesnitsky/git-tutor) --- Hey 👋 Welcome to WebGL month. [Yesterday](https://dev.to/lesnitsky/webgl-month-day-20-rendering-a-minecraft-dirt-cube-5ag3) we rendered a single minecraft dirt cube, let's render a terrain today!
We'll need to store each block's position in a separate transform matrix 📄 src/3d-textured.js ```diff gl.viewport(0, 0, canvas.width, canvas.height); + const matrices = []; + function frame() { mat4.rotateY(cube.modelMatrix, cube.modelMatrix, Math.PI / 180); ``` Now let's create 10k blocks, iterating over the x and z axes from -50 to 50 📄 src/3d-textured.js ```diff const matrices = []; + for (let i = -50; i < 50; i++) { + for (let j = -50; j < 50; j++) { + const matrix = mat4.create(); + } + } + function frame() { mat4.rotateY(cube.modelMatrix, cube.modelMatrix, Math.PI / 180); ``` Each block has a size of 2 (vertex coordinates are in the [-1..1] range), so positions should be divisible by two 📄 src/3d-textured.js ```diff for (let i = -50; i < 50; i++) { for (let j = -50; j < 50; j++) { const matrix = mat4.create(); + + const position = [i * 2, (Math.floor(Math.random() * 2) - 1) * 2, j * 2]; } } ``` Now we need to create a transform matrix. Let's use `mat4.fromTranslation` 📄 src/3d-textured.js ```diff const matrix = mat4.create(); const position = [i * 2, (Math.floor(Math.random() * 2) - 1) * 2, j * 2]; + mat4.fromTranslation(matrix, position); } } ``` Let's also rotate each block around the Y axis to make the terrain look more random 📄 src/3d-textured.js ```diff gl.viewport(0, 0, canvas.width, canvas.height); const matrices = []; + const rotationMatrix = mat4.create(); for (let i = -50; i < 50; i++) { for (let j = -50; j < 50; j++) { const position = [i * 2, (Math.floor(Math.random() * 2) - 1) * 2, j * 2]; mat4.fromTranslation(matrix, position); + + mat4.fromRotation(rotationMatrix, Math.PI * Math.round(Math.random() * 4), [0, 1, 0]); + mat4.multiply(matrix, matrix, rotationMatrix); } } ``` and finally push each block's matrix to the matrices collection 📄 src/3d-textured.js ```diff mat4.fromRotation(rotationMatrix, Math.PI * Math.round(Math.random() * 4), [0, 1, 0]); mat4.multiply(matrix, matrix, rotationMatrix); + + matrices.push(matrix); } } ``` Since our blocks are static, we don't need
a rotation transform in each frame 📄 src/3d-textured.js ```diff } function frame() { - mat4.rotateY(cube.modelMatrix, cube.modelMatrix, Math.PI / 180); - gl.uniformMatrix4fv(programInfo.uniformLocations.modelMatrix, false, cube.modelMatrix); gl.uniformMatrix4fv(programInfo.uniformLocations.normalMatrix, false, cube.normalMatrix); ``` Now we'll need to iterate over matrices collection and issue a draw call for each cube with its transform matrix passed to uniform 📄 src/3d-textured.js ```diff } function frame() { - gl.uniformMatrix4fv(programInfo.uniformLocations.modelMatrix, false, cube.modelMatrix); - gl.uniformMatrix4fv(programInfo.uniformLocations.normalMatrix, false, cube.normalMatrix); + matrices.forEach((matrix) => { + gl.uniformMatrix4fv(programInfo.uniformLocations.modelMatrix, false, matrix); + gl.uniformMatrix4fv(programInfo.uniformLocations.normalMatrix, false, cube.normalMatrix); - gl.drawArrays(gl.TRIANGLES, 0, vertexBuffer.data.length / 3); + gl.drawArrays(gl.TRIANGLES, 0, vertexBuffer.data.length / 3); + }); requestAnimationFrame(frame); } ``` Now let's create an animation of rotating camera. Camera has a position and a point where it is pointed. So to implement this, we need to rotate focus point around camera position. 
Let's first get rid of static view matrix 📄 src/3d-textured.js ```diff const viewMatrix = mat4.create(); const projectionMatrix = mat4.create(); - mat4.lookAt(viewMatrix, [0, 4, -7], [0, 0, 0], [0, 1, 0]); - mat4.perspective(projectionMatrix, (Math.PI / 360) * 90, canvas.width / canvas.height, 0.01, 100); gl.uniformMatrix4fv(programInfo.uniformLocations.viewMatrix, false, viewMatrix); ``` Define camera position, camera focus point vector and focus point transform matrix 📄 src/3d-textured.js ```diff - import { mat4 } from 'gl-matrix'; + import { mat4, vec3 } from 'gl-matrix'; import vShaderSource from './shaders/3d-textured.v.glsl'; import fShaderSource from './shaders/3d-textured.f.glsl'; } } + const cameraPosition = [0, 10, 0]; + const cameraFocusPoint = vec3.fromValues(30, 0, 0); + const cameraFocusPointMatrix = mat4.create(); + + mat4.fromTranslation(cameraFocusPointMatrix, cameraFocusPoint); + function frame() { matrices.forEach((matrix) => { gl.uniformMatrix4fv(programInfo.uniformLocations.modelMatrix, false, matrix); ``` Our camera is located in 0.0.0, so we need to translate camera focus point to 0.0.0, rotate it, and translate back to original position 📄 src/3d-textured.js ```diff mat4.fromTranslation(cameraFocusPointMatrix, cameraFocusPoint); function frame() { + mat4.translate(cameraFocusPointMatrix, cameraFocusPointMatrix, [-30, 0, 0]); + mat4.rotateY(cameraFocusPointMatrix, cameraFocusPointMatrix, Math.PI / 360); + mat4.translate(cameraFocusPointMatrix, cameraFocusPointMatrix, [30, 0, 0]); + matrices.forEach((matrix) => { gl.uniformMatrix4fv(programInfo.uniformLocations.modelMatrix, false, matrix); gl.uniformMatrix4fv(programInfo.uniformLocations.normalMatrix, false, cube.normalMatrix); ``` Final step – update view matrix uniform 📄 src/3d-textured.js ```diff mat4.rotateY(cameraFocusPointMatrix, cameraFocusPointMatrix, Math.PI / 360); mat4.translate(cameraFocusPointMatrix, cameraFocusPointMatrix, [30, 0, 0]); + mat4.getTranslation(cameraFocusPoint, 
cameraFocusPointMatrix); + + mat4.lookAt(viewMatrix, cameraPosition, cameraFocusPoint, [0, 1, 0]); + gl.uniformMatrix4fv(programInfo.uniformLocations.viewMatrix, false, viewMatrix); + matrices.forEach((matrix) => { gl.uniformMatrix4fv(programInfo.uniformLocations.modelMatrix, false, matrix); - gl.uniformMatrix4fv(programInfo.uniformLocations.normalMatrix, false, cube.normalMatrix); gl.drawArrays(gl.TRIANGLES, 0, vertexBuffer.data.length / 3); }); ``` That's it! This approach is not very performant though, as we're issuing 2 gl calls for each object, so it is a 20k of gl calls each frame. GL calls are expensive, so we'll need to reduce this number. We'll learn a great technique tomorrow! --- [![GitHub stars](https://img.shields.io/github/stars/lesnitsky/webgl-month.svg?style=social&hash=day21)](https://github.com/lesnitsky/webgl-month) [![Twitter Follow](https://img.shields.io/twitter/follow/lesnitsky_a.svg?label=Follow%20me&style=social&hash=day21)](https://twitter.com/lesnitsky_a) [Join mailing list](http://eepurl.com/gwiSeH) to get new posts right to your inbox [Source code available here](https://github.com/lesnitsky/webgl-month) Built with [![Git Tutor Logo](https://git-tutor-assets.s3.eu-west-2.amazonaws.com/git-tutor-logo-50.png)](https://github.com/lesnitsky/git-tutor)
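Stripped of gl-matrix and WebGL, the grid-generation loop above boils down to computing one position per block. Here is a plain-JavaScript sketch — the function name and the injectable `random` parameter are mine, added for testability:

```javascript
// Plain-JS sketch of the block-position loop described above:
// a 100x100 grid on x/z, each block 2 units wide, with a random
// y of either -2 or 0 to roughen the terrain.
function generateTerrainPositions(half = 50, random = Math.random) {
  const positions = [];
  for (let i = -half; i < half; i++) {
    for (let j = -half; j < half; j++) {
      // (Math.floor(random() * 2) - 1) * 2 is either -2 or 0.
      positions.push([i * 2, (Math.floor(random() * 2) - 1) * 2, j * 2]);
    }
  }
  return positions;
}

const positions = generateTerrainPositions();
console.log(positions.length); // 10000
```

Each of these positions is what gets fed to `mat4.fromTranslation` in the tutorial, one transform matrix per block.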
lesnitsky
146,251
Preparing For My First Tech Conference
Today I will be attending Codeland Conference in NYC. This is my first ever tech conference, and I'm ner...
0
2019-07-22T10:40:58
https://dev.to/sarahscode/preparing-for-my-first-tech-conference-59m6
codenewbie, conference
--- title: Preparing For My First Tech Conference published: true tags: codenewbie, conference canonical_url: --- Today I will be attending Codeland Conference in NYC. This is my first ever tech conference, and I'm nervous but excited. Codeland is organized by [CodeNewbie](https://www.codenewbie.org/), a podcast and super-supportive community aimed towards those new to coding (but super welcoming of people at all levels). I had heard about the conference from the podcast and on twitter (and in fact had at one point considered submitting a CFP), but didn't register, partially because I had never been to a conference before and didn't know if I would enjoy it, and partially because I've been in full-time job searching mode for a bit and didn't know if I'd be starting a new job that may not be keen on me taking off for a day so early in my time there. A few weeks ago I finally made the decision to register. The new job thing looked like it wasn't going to be happening before the conference (and on the off chance that it did, I was willing to just eat the money if my company wouldn't let me take off). And as for not knowing if I like conferences ... the conference is a 30-minute (or less) subway ride from my apartment. If I hate it, I can just bail. There aren't very many conferences where I have that opportunity ... so I figured I might as well take the chance and try it out now. Once I decided to register, the next step was buying my ticket. Codeland offers "Talks Only" and "Talks and Choice of Workshop" tickets. I was really most interested in the talks, but I talked it over with a family member and decided that since some of the workshops did look interesting, I should spend the extra $20 for the workshop ticket. I figured I'd decide closer to the event (or at the event) what I wanted to do. #### What To Learn? I received an email a few weeks before the event inviting me to choose a workshop (and track).
I didn't know I'd have to choose in advance (I don't know if that's a normal conference thing that I just don't know about because I'm a conference newbie or if it's unique to this conference) and I didn't know how quickly workshops filled up, so I spent some time the morning I got the email looking at the workshop descriptions and requirements, and narrowed it down to an IoT workshop and an accessibility workshop, but since I'm not a huge IoT person (other than using IoT things ... excuse me while I go ask Google to turn on my fan ... very useful in this heat wave) and was mostly interested in the take-home toy, I ended up signing up for the "Building with Accessibility in Mind" workshop led by Luisa Morales. Accessibility is something that's so important in web development and I don't put as much thought into it as I should, so I'm really hoping I learn a lot from this workshop and take what I learn and apply it to all of my future work. Although really, I couldn't have gone wrong with any workshop, they all looked really interesting. This workshop (unlike the IoT one I was looking at) is only offered in one of the two tracks, so I officially landed myself in Track B. There are some talks in Track A that I'm a little sad to be missing, but there are some good talks in Track B that I'm looking forward to hearing. I'm especially interested in Jo-Wayne Josephs's talk "An Immigrant's Journey Into Tech", but I think all of the talks sound interesting and I'm sure I'll enjoy most or all of them. #### Advance Planning My Meals To make a long story short ... going places can be hard for me because I can't eat the food. I'm strictly kosher observant, which means that not only are there restrictions on what food I can eat, but (in my case) all of my food has to come from a kosher-certified kitchen.
When I registered for the event, there was an option to select kosher food, but after a recent situation where someone didn't realize that I wouldn't eat food from a non-certified kitchen (an understandable mistake for someone who didn't grow up surrounded by kosher-observant people), I wanted to make sure that there would be food that was acceptable for my family's standards. I tweeted at the official conference account to ask about the food, and they encouraged me to email a particular email address to ask. I kept not doing that, and finally late last week I sent them an email. The answer was that yes, they would have kosher food, but since NYU (the venue) didn't have their kosher kitchen set up at the moment, it would be coming from an outside vendor, and they'd tell me who once they knew. A few days later I got an email telling me where the food would come from (Fresko), and since I'm familiar with the company (they sell sandwiches in my local 7-Eleven and at Staten Island Yankees baseball games), I knew that their kosher certification and their food are perfectly acceptable to me.

I also wasn't sure if they make anything other than sandwiches and I figured I might not want sandwiches all day, so I decided to pack myself a bag of Cheerios to have for breakfast and/or snack. I also packed fruit as a snack because it's never a bad idea to have fruit. Which reminds me ... I wonder if they have snacks during the day?

#### What Does My Day Look Like?

One of the good things about signing up for a conference taking place in NYC is that I don't have to worry about flights or hotel ... I just roll out of my apartment, hop on a subway (and only one train!), and I'm there. But there are still a few things that needed to be figured out...

1. **What Time To Show Up** - Registration and Breakfast start at 8. Now that I know that there is definitely food for me, I want to be there for breakfast (I'm hoping there's decent coffee too ...
if not I may have to hop out in the middle of breakfast to grab something), but I don't think I need to be there right when it starts. Right now I'm targeting an arrival between 8:15 and 8:30, which will hopefully get me there with enough time to register and eat breakfast, but not so much extra time that I feel super awkward. If things get too awkward, I can always stand in a corner on my phone and pretend I'm super busy.
2. **Workshops/Talks** - The day starts with a welcome and two talks before the first break. After that break, I'll head into my accessibility workshop, which hopefully won't be too uncomfortable (I don't know if there will be interaction with other people involved, and I'm not so good with that). After the workshop is lunch break. If the morning was too much for me, that might be when I call it a day. Or I might consider going for a walk to get some fresh air before making that decision. But if I'm feeling okay, it'll be lunch time, and hopefully I'll be able to find some people to eat with. After lunch are the talks, and I'll be sitting there with my notebook and a pen (note to self: pack notebook and pen) taking notes on all the interesting things I'm learning. If I'm still feeling good after that, I'll browse around the exhibit hall (and hopefully take home some free swag) before listening to the closing keynote, which covers a topic of personal interest to me.
3. **Afterparty** - Ideally, I'd like to go to the afterparty. I'm not a big party person, and I have no idea what "afterparty" entails in this context, but I think it's part of the experience, so I want to try it. But if the day has felt super draining and I don't think I have the energy to get through it, I'll just skip the afterparty and head home.

#### What Do I Look Like?

Picking out my outfit is a big thing for me.
I like to look relatively nice in general (although I pull that off with varying amounts of success), but for semi-special occasions, like my first day of school or my first tech conference, I like to buy a new outfit and/or pick out something special. I went shopping last week for two outfits - one for a networking event, and one for the conference - but only ended up buying something for the networking event, so I'm back at square one.

The weather is forecasted to be 81° with scattered thunderstorms, which means I need to wear rain-appropriate shoes, and since I don't want to wear my rain boots, I'll probably end up in sneakers. Which means I can't count on my shoes to make my outfit look nicer; that will have to come from the outfit itself. After going through what I had available, I settled on the outfit that I wore for my first day at my last job - it's a comfortable outfit, has the pockets I need, and looks decently good. After I laid it out, I remembered something else about the outfit - my first day at my last job was an unintentional Mickey Mouse [Disneybound](https://disneybound.co/), so I tried to find my best Mickey Mouse accessories (mostly it's my Mickey Mouse earrings ... unless I want to walk around with my marathon finisher ears, which I don't). I toyed with the idea of another Disneybound, and thought of something that I wasn't sure would work, so I decided to set out both outfits and then decide in the morning.

I'm very glad that I thought of having my outfit be a Disneybound, because I love finding outfit inspiration from my favorite characters, and even if nobody else knows that's what I'm going for, it always makes me smile a little. And I think that wearing an outfit that makes me feel good will help me feel more comfortable.

#### What Are My Goals For The Conference?

My primary goal for the conference is to get through the day. Mostly, I'm going so I know that I can do it. If I learn some new things, cool.
If I meet some awesome people, even more cool. If I get some fun new swag, awesome. If I make it through the entire day without feeling completely out of place and like I shouldn't have gone ... mission accomplished.

#### What's Next?

This conference is something of a test for me. I've really wanted to get more involved with tech communities, and I've wanted to go to conferences, but crowded rooms and new situations can be very difficult for me. I want to enjoy the conference for what it is, but I also want to see if it inspires me to attend other conferences. Stay tuned for next week's blog to see how all this went!
sarahscode
146,264
A Comprehensive Guide to CSM
Know Why CSM Certification is taking the World by Storm Are you stuck at the bottom of the corporate...
0
2019-07-22T11:25:42
https://dev.to/samsuna/a-comprehensive-guide-to-csm-5hdg
Know Why CSM Certification is taking the World by Storm

Are you stuck at the bottom of the corporate ladder? Are you finding it difficult to get noticed? And is it not your style to brag about your skills to get ahead? Then you might want to consider opting for recognised certification programs. There are several of these out there. Currently, programming languages like Python are on the move. The growth of artificial intelligence has created many opportunities for professionals. However, certifications like these are extremely expensive and not everyone’s cup of tea. These programs can be quite time-consuming as well. Hence, most professionals tend to stay away.

If you are someone who can’t dedicate time or does not want to shift focus to learning a new language, consider opting for a [CSM course](https://www.knowledgehut.com/agile-management/csm-certification-training-washington). CSM or Certified Scrum Master Certification is an industry-wide recognised certification program. Additionally, it can be completed in 2 days. So, you can simply dedicate a weekend to become a more efficient and productive member of your organization. Moreover, this program is applicable for various roles. Hence, whether you are working on Artificial Intelligence or managing client accounts, this program is bound to get you a swankier paycheck. Currently, quality courses like [Certified Scrum Master Certification](https://www.knowledgehut.com/agile-management/csm-certification-training-houston) are on offer in over 70 countries, including major US cities like Washington DC, Houston, etc. This brings us to the next question.

What is a Scrum Master Certification?

Scrum master certification is designed to train professionals with the knowledge to further their understanding of project management. The certification program emphasizes the core values of accountability, teamwork, and iterative process.
The program also clearly defines end-goals for participants to synchronize their motivation with the larger goals of an organization.

What are the Immediate Benefits of the Program?

Apart from the immediate benefits mentioned above, Scrum masters are extremely valued members of large organizations. CEOs like Sundar Pichai of Google recognised the importance of programs like Agile in furthering the growth of large organizations. Core Agile principles are also part of the Scrum framework. Hence, Scrum programs are likely to get you noticed at large organizations. Moreover, Scrum masters help motivated employees reach their goals. Additionally, Scrum masters are also taught to manage the flow of information within organizations. Hence, CSM certification is really important from the perspective of organizations. It is likely to get you ahead in your career, but not without a sound practical application rooted in genuine concern for the well-being of your team members and organization.

How Do I Avail the CSM Certification?

As mentioned earlier, CSM certification is a two-day course. It includes training for 16 hours by a certified Scrum trainer. Once your training is over, you can appear for a multiple-choice exam. The exam is relatively easy, wherein you will be expected to get 24 answers correct out of 35 total questions. You would not require any prerequisites to appear for this exam as it is a course aimed at beginners.

What is the cost of the CSM Certification?

Generally, Scrum programs are paid for by organizations. So, if you are interested in this program, you should definitely consult your HR manager before taking on the responsibility on a personal level. The costs of these programs start at around 25,000 rupees or US $370. Due to the industry-wide recognition, training institutes for these programs have cropped up all over the world. A recognised global institute like KnowledgeHut is a good place to start.
What are the long term benefits of the CSM Certification?

You may choose to see Scrum certification as the next meal ticket. However, please understand that Scrum certification is intended to be a never-ending commitment to learning and professional growth. Scrum prepares professionals to become lifelong learners. For IT professionals who have a keen interest in the business side of things, Scrum is bound to be tremendously beneficial for growth. For example, Scrum certification will teach you how to supervise projects, keep an eye on priorities, and get work done through a plethora of problems and distractions. Additionally, it will also teach members the different priorities of various stakeholders, especially clients.

Why Is CSM Certification Taking the World by Storm?

Scrum certification is gaining tremendous traction among professionals. Additionally, it is bound to prepare you for long term professional growth, as it is designed keeping the key needs of organizations in mind. You are probably aware that today we are moving towards a knowledge-based economy all over the world. Additionally, once professionals get comfortable in their roles, their learning tends to go downhill. This is where large organizations suffer tremendously. On one hand, they have a large group of employees who are loyal and productive members of the organization. On the other hand, these employees and their way of thinking are set in stone. This, combined with different individual motivations and a different set of difficulties and challenges, can sometimes result in a regressive attitude towards the growth of organizations. Hence, it is essential for organizations to prepare professionals with a positive, team-oriented, and goal-driven discipline. It is also important that every member of the team is on board when it comes to development. This is why Scrum certification is the answer to various project development and other professional challenges in the 21st century.
samsuna
146,281
Beginner’s Guide to Learning JavaScript as a WordPress Developer
If you have been following the happenings in the WordPress community for the past couple of months, y...
0
2019-07-22T12:19:20
https://www.codeinwp.com/blog/learning-javascript-for-wordpress/
wordpress, webdev, javascript, beginners
---
title: Beginner’s Guide to Learning JavaScript as a WordPress Developer
published: true
description:
tags: wordpress, webdev, javascript, beginners
canonical_url: https://www.codeinwp.com/blog/learning-javascript-for-wordpress/
cover_image:
---

If you have been following the happenings in the WordPress community for the past couple of months, you might already be aware of the rising importance of JavaScript. First up, the WordPress.com desktop app, Calypso, departed from the standard PHP route and has been coded using JavaScript (something CodeinWP [talked about recently](https://www.codeinwp.com/blog/december-2015-wordpress-news/)). And if that was not enough, in his annual “State of the Word” address in December 2015, Matt Mullenweg kept it straight and simple:

_"Learn JavaScript, deeply."_

Naturally, more and more WordPress developers are now turning towards JavaScript, and for all practical purposes, 2016 is going to be the year of JavaScript, at least from the perspective of WordPress developers.

![Alt text of image](https://www.codeinwp.com/wp-content/uploads/2016/01/learn-js.jpg)

If you are an existing WordPress developer, how should you begin your journey with JavaScript? More importantly, what aspects of this diverse coding language should you care the most about? In this post, I will attempt to answer these questions.

**Your beginner’s guide to learning JavaScript as a WordPress developer**

###**Introducing JavaScript to WordPress developers**

Once again, if you have been active in the WordPress community, you might have already familiarized yourself with the basics of JavaScript and as such, we probably do not need to remind ourselves that JavaScript is different from Java. Now, moving on to another question: **is JavaScript really that unknown for WordPress?** Not really.
In fact, even before WordPress 4.4, JavaScript has had a role in WordPress – the js folder within _wp-includes_ and _wp-admin_ stands for JavaScript, which has, so far, been mostly employed in theme development. Thus, even though JS has been around in the world of WordPress for quite a while, now with the first half of the REST API on board and applications such as Calypso, its role has grown manifold.

Any WordPress developer starting off with JavaScript might feel confused, simply because JavaScript has a world of its own. For example, take three very common terms that are associated with JS: [jQuery](https://jquery.com/), AngularJS and Node.js.

![Alt text of image](https://www.codeinwp.com/wp-content/uploads/2016/01/jQuery-AngularJS-Node.js_.png)

All of the three above entities, even though related to JavaScript, are totally different from each other. jQuery is a JavaScript library, whereas AngularJS is a JavaScript framework, and Node.js is a JavaScript runtime environment. There are some coders who believe it is wiser to start off with plain vanilla JS, while others prefer relying on a framework or a library. And the talk doesn’t end there. If you decide to start with a JavaScript library, you will also have to learn about its add-ons or plugins (yes, there are plugins that extend the core functionality of libraries, much like libraries extend the core features of JavaScript).

Confused already? Read on!

###**Know the fundamentals**

If you are really serious about your JavaScript skills, you must focus heavily on the fundamentals. This is especially important because as a WordPress developer, you must already be aware of PHP, and JS is different from PHP in various aspects. Most PHP developers are accustomed to PHP’s functional approach, and while PHP does have a lot of object-oriented principles, it is, for the most part, a functional language. JavaScript, on the other hand, is a thoroughly object-oriented language.
Mozilla Developer documentation talks about the [OOP basics of JavaScript](https://developer.mozilla.org/en-US/docs/Learn/JavaScript/Objects) in detail. Once you have mastered the fundamentals of JavaScript, you can get started with frameworks.

###**Pick a JavaScript framework**

Learning a JavaScript framework is vital if you wish to get started with building JS applications and projects as quickly as possible. If you have had some experience with theme development in WordPress, you might already be familiar with HTML and CSS, and more importantly, you will be aware of the ease of coding that a good framework can bring to the table.

Experts and coders are divided when it comes to picking the perfect JavaScript framework for beginners. Personally, for users who are familiar with the MVC pattern, such as concrete5 developers, I have always recommended [AngularJS](https://angularjs.org/) as the better choice. Angular supports the MVC structure and is really easy to get started with. However, WordPress coders, at times, are not very familiar with the MVC structure and as such, React seems to be an easier and more apt option. Furthermore, since [React](https://github.com/reactjs) uses just the View component, it is popular among designers as well. There are other options as well, but for the most part, any journey in the world of JavaScript frameworks should begin with either React or AngularJS. I once wrote a comparative article about the two frameworks, and for greater details, you can find that article [here](https://torquemag.io/2015/08/comparing-angularjs-react/).

###**Decide what you need**

Much like any other skill, learning a new programming language is a matter of needs and requirements. After all, you are trying to learn JavaScript because you need it. In JavaScript, the choice of libraries and frameworks is defined by your needs. For example, Node.js is ideal for folks who have to deal with server-side programming.
Similarly, while React is a great choice for web development, it is lacking when it comes to mobile app development. For WordPress developers, the needs can basically be split into backend and frontend.

**If you are dealing with frontend development, frameworks such as [AngularJS](https://angularjs.org/), [React](https://reactjs.org/), or [Backbone.js](https://backbonejs.org/) will suffice for your needs**. You can find a longer list with more options [here](https://github.com/collections/front-end-javascript-frameworks).

![Alt text of image](https://www.codeinwp.com/wp-content/uploads/2016/01/angular.png)

**For backend coders, however, especially ones who are already well-versed with PHP, [Node.js](https://nodejs.org/en/) is a skill worth acquiring.**

![Alt text of image](https://www.codeinwp.com/wp-content/uploads/2016/01/node.png)

###**Where to learn**

Thanks to the internet, there is no shortage of resources for anyone willing to learn something new, and JavaScript is no exception either. You can discuss your doubts at Stack Overflow, follow the tutorials at [Udemy](https://www.udemy.com/) or [Lynda.com](https://www.lynda.com/), and of course, refer to the great literature produced by several WordPress blogs out there.

[JavaScript is Sexy](http://javascriptissexy.com/) is a good resource for learning JavaScript, whether you are a beginner or an advanced learner. Unfortunately, this site has not been updated for quite a while, so you’ll need to proceed with caution. That said, certain threads, such as [OOP in JavaScript](http://javascriptissexy.com/oop-in-javascript-what-you-need-to-know/), are still useful and relevant. Codecademy’s [JavaScript Track](https://www.codecademy.com/learn/introduction-to-javascript) is quite popular among beginners, and you should give it a spin. Similarly, if you are keen on learning jQuery, Code School’s [Try jQuery](https://jquery.com/) course is a good pick.
For learning React, I have found [React for Beginners](https://reactforbeginners.com/) to be the most well-planned and structured resource. [Superhero.js](http://superherojs.com/) is another amazing collection of resources and tutorials that you can learn from. In terms of books, the options are plentiful, and you can find some of the most useful and popular ones on [this GitHub thread](https://github.com/wesbos/ama/issues/61).

###**Conclusion**

As a WordPress developer, your journey towards learning JavaScript will, more or less, be identical to the way you began with PHP. Practical knowledge of PHP is sufficient to tweak and get more out of WordPress and themes or plugins, but if you wish to seriously build something big, knowledge of PHP fundamentals is essential. Similarly, while mastering a framework in JavaScript will give you a quick start, you will also need to master the fundamentals of JS in order to get the most out of it.

**Have you started learning JavaScript? How has your experience been so far?**

_The article was originally published on [CodeinWP.com](https://www.codeinwp.com/blog/learning-javascript-for-wordpress/)_
fitzchris
146,367
what is a locator in Selenium (and Python)
In order to interact with a web page in Selenium you have to locate the element. As a human you do...
0
2019-07-25T12:24:48
https://dev.to/tonetheman/what-is-a-locator-in-selenium-and-python-45o8
selenium, python, automation
---
title: what is a locator in Selenium (and Python)
published: true
description:
tags: selenium,python,automation
---

In order to interact with a web page in Selenium you have to locate the element. As a human you do this without much thought. In Selenium it requires more thought (sadly).

There are lots of ways to find elements on a page. The most specific way is by id, meaning there is an element on the page that has a unique id on it. If you looked at the HTML source, an element with a unique id would look like this:

```html
<button id="mybutton">click me</button>
```

When you have a button that has an id on it, you can locate it in Python like this:

```python
e = driver.find_element_by_id("mybutton")
```

Here is another way to do exactly the same thing as above; only the code is slightly different.

```python
from selenium.webdriver.common.by import By # note 1
e = driver.find_element(By.ID, "mybutton")
```

Note 1 - you need another import for this code to work.

If you want to use an XPath locator the code will look like this:

```python
driver.find_element(By.XPATH, '//button[text()="click me"]')
```

There is also a way to find multiple elements that match the same locator by using the find_elements method instead of find_element.

See here for lots more information and all of the other locator types: https://selenium-python.readthedocs.io/locating-elements.html
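One thing worth noticing about the XPath example above is that a locator is just a string, so you can build locators programmatically. As a small aside (`button_with_text` below is a hypothetical helper of mine, not part of the Selenium API), here is one way to construct the exact XPath string used above:

```python
def button_with_text(text):
    """Build an XPath locator matching a <button> with the given exact text.

    Hypothetical helper, not part of Selenium itself. The returned string
    is what you would pass as the second argument to
    driver.find_element(By.XPATH, ...).
    """
    return '//button[text()="{}"]'.format(text)

# The locator string for the button in the example above:
locator = button_with_text("click me")
print(locator)  # //button[text()="click me"]
```

Building locators this way keeps the quoting in one place, which helps when the same kind of element has to be located with many different labels.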
tonetheman
146,400
Wordpress HTML to Markdown for Gatsby
I am currently in the process of creating my blog using WordPress as the backend and Gatsby for the f...
0
2019-07-22T16:34:37
https://www.shubho.dev/coding/wordpress-html-to-markdown-for-gatsby/
gatsby, wordpress, javascript
I am currently in the process of creating my blog using WordPress as the backend and Gatsby for the frontend. One of the most enticing features of Gatsby is plugins. Almost every feature you might want on your blog is available as a plugin, or you can create one for yourself. As a developer who has dabbled with WordPress plugins (but is not proficient in PHP) and knows JavaScript, I feel creating plugins for Gatsby is way easier. Of course, that is a biased opinion coming from me.

## Gatsby source plugin for WordPress

Gatsby has many official plugins. Their structure is similar, but Gatsby does provide some standard terminology to make it easy to recognize each plugin's purpose: https://www.gatsbyjs.org/docs/naming-a-plugin/. Initially, I decided to use Contentful for my backend, the plugin being `gatsby-source-contentful` (see how naming it following the standard convention helps). The Contentful plugin provides all the posts as Markdown nodes in GraphQL, and as a result, all “transformation” plugins for “Remark” can be used on them. Now the transformation plugins for “Remark”, for “transforming” Markdown data, are fantastic. And working on the Contentful data using them is a pleasure.

For getting data from WordPress into Gatsby, we use a “source” plugin, `gatsby-source-wordpress`. I will discuss my reason for using WordPress in another post. But the main issue I faced with this plugin was that it queries the data from the WordPress REST API and then creates the GraphQL schema for use within Gatsby. But the WordPress REST API by default returns the content only as HTML. So even if you write your posts as Markdown using some WordPress plugin (I use [WP Githuber MD]), the REST API will return the final rendered content. However, this makes sense for WordPress, as the output for their themes is always HTML. But I needed Markdown, as I wanted to use those transformer plugins and they only work on Markdown nodes.
There are multiple GitHub issues on this, for example https://github.com/gatsbyjs/gatsby/issues/6799. Even if a WordPress Markdown plugin exposes a separate REST endpoint, the Gatsby source plugin would need to support it. I didn’t want to find such a plugin or hack the official source plugin for Gatsby. 😀

## Turndown - Convert HTML to Markdown

So I wanted to look for a solution which can convert HTML to Markdown. Since I am always a DIY guy, I started reading on ASTs and started writing a conversion from HTML to Markdown by myself. I spent three days and had a working version. But there were lots of bugs. I realized this was silly of me. There must be some package already. Enter [Turndown]. It was awesome. The conversion was almost perfect. So I junked my conversion library and instead went to write a local Gatsby plugin that takes a WordPress Post (or Page) node and creates a Markdown node out of it using Turndown.

## The plugin `gatsby-transformer-wordpress-markdown`

I named the plugin as per the Gatsby naming standards. The folder “gatsby-transformer-wordpress-markdown” goes under the plugins folder of your root Gatsby project. The folder has 3 files:

```bash
├── gatsby-node.js
├── index.js
└── package.json
```

`index.js` only contains a line `// noop`. `package.json` contains the name of the plugin and the `turndown` packages as dependencies (`yarn add turndown` and `yarn add turndown-plugin-gfm`). The main workhorse is the `gatsby-node.js`.

{% gist https://gist.github.com/shubhojyoti/f6225d29a698baeff50dea9bd0803375 %}

In my main `gatsby-config.js`, I call the plugin as follows:

```javascript
module.exports = {
  siteMetadata: {
    ...
  },
  plugins: [
    ...
    {
      resolve: `gatsby-transformer-remark`,
      options: {
        plugins: [
          { resolve: `gatsby-remark-reading-time` },
          { resolve: `gatsby-remark-embed-gist` },
          {
            resolve: `gatsby-remark-prismjs`,
            options: {
              classPrefix: "language-",
              aliases: { javascript: 'js' },
              inlineCodeMarker: '>>',
              showLineNumbers: false,
              noInlineHighlight: false,
              showLanguage: true
            }
          }
        ]
      }
    },
    ...
    {
      resolve: `gatsby-transformer-wordpress-markdown`,
      options: {
        turndownPlugins: ['turndown-plugin-gfm']
      }
    }
  ],
};
```

I haven’t added any tests, as this is my local plugin. I might need to clean it up a bit. But here are a couple of points:

1. The plugin needs to tie in during the `onCreateNode` lifecycle of the Gatsby build. In the current case, the plugin executes during the creation of a WordPress Post or Page node.
2. Turndown, by itself, has a plugin system. I am using the `turndown-plugin-gfm` plugin, which enables GitHub-specific Markdown features, like tables, in the Markdown output. Line nos 26-35 are options you can pass to the local plugin. I am using all the defaults from the main `turndown` package.
3. For each WordPress Post and Page node created, the plugin extracts the HTML `content`, runs `TurndownService` against it and creates a Markdown child node of type `MarkdownWordpress`.
4. Since a new node of mediaType `text/markdown` is created, `gatsby-transformer-remark` and its sub-plugins are run over it.

## Caveats

In pure Markdown nodes, the Markdown content is as you have written it. However, note that in this case, WordPress has already created HTML out of your post, and you are converting it back to Markdown. So if you use any special Markdown syntax, it will be lost. I did work around some of these cases as they were specific to my use case (I will write more on these in a future post), but YMMV.
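To make the core idea of the conversion concrete: Turndown (a JavaScript package) walks the HTML and emits the Markdown equivalent of each element. As a language-agnostic illustration only, and nothing like Turndown's real rule-based conversion, a toy version of the transformation can be sketched with a few regex substitutions (here in Python, since the idea is independent of the language):

```python
import re

# Toy HTML -> Markdown conversion: a handful of regex substitutions.
# Real converters like Turndown parse the HTML properly; this sketch
# only illustrates the shape of the transformation, and the rule set
# here is my own invention for the example.
RULES = [
    (re.compile(r"<h2>(.*?)</h2>"), r"## \1"),
    (re.compile(r"<strong>(.*?)</strong>"), r"**\1**"),
    (re.compile(r"<em>(.*?)</em>"), r"_\1_"),
    (re.compile(r'<a href="(.*?)">(.*?)</a>'), r"[\2](\1)"),
    (re.compile(r"</?p>"), ""),  # drop paragraph tags entirely
]

def html_to_markdown(html):
    for pattern, replacement in RULES:
        html = pattern.sub(replacement, html)
    return html.strip()

print(html_to_markdown(
    '<p>Read the <strong>docs</strong> at '
    '<a href="https://example.com">example</a>.</p>'))
# Read the **docs** at [example](https://example.com).
```

This also makes the caveat above obvious: the converter can only reproduce Markdown for constructs it recognizes, so anything WordPress rendered into HTML that has no rule simply passes through (or is lost).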
shubho
146,412
TPP Topic 10: Orthogonality
This post originally appeared on steadbytes.com See the first post in The Pr...
0
2019-07-22T16:24:01
https://dev.to/steadbytes/tpp-topic-9-orthogonality-kcd
thepragmaticprogrammer, book, programming, education
---
categories:
- The Pragmatic Programmer
published: true
tags:
- The Pragmatic Programmer
- book
- programming
- education
title: 'TPP Topic 10: Orthogonality'
---

> This post originally appeared on [steadbytes.com](https://steadbytes.com/blog/the-pragmatic-programmer-20th/topic-10-challenges/)

<div></div>

> See the [first post](https://dev.to/steadbytes/the-pragmatic-programmer-20th-anniversary-edition-series-1e2l) in The Pragmatic Programmer 20th Anniversary Edition series for an introduction.

## Challenge 1

> Consider the difference between tools which have a graphical user interface and small but combinable command-line utilities used at shell prompts. Which set is more orthogonal, and why? Which is easier to use for exactly the purpose for which it was intended? Which set is easier to combine with other tools to meet new challenges? Which set is easier to learn?

Command-line utilities are more orthogonal than GUIs. The command-line utilities are independent of each other - separate programs, with separate codebases and developers. They can be independently updated or replaced entirely without affecting the others.

_Generally_, command-line utilities are easier to use for exactly the purpose for which they were intended. This is because (often) that is _all_ the program does and nothing more - as such it can be difficult to use for anything _but_ the intended purpose.

Command-line utilities are **far easier to combine** with other tools to meet new challenges. These programs typically use [standard streams](https://en.wikipedia.org/wiki/Standard_streams) for I/O, allowing them to be composed through [redirection](<https://en.wikipedia.org/wiki/Redirection_(computing)#Piping>) to accomplish a larger task. GUIs, on the other hand, are _almost impossible_ to combine as they do not provide standard methods for I/O - one cannot easily pipe the output of a textbox in one GUI to a textbox in another, for example.
## Challenge 2

> C++ supports multiple inheritance, and Java allows a class to implement multiple interfaces. Ruby has mixins. What impact does using these facilities have on orthogonality? Is there a difference in impact between using multiple inheritance and multiple interfaces? Is there a difference between using delegation and using inheritance?

These facilities improve orthogonality. Multiple inheritance, for example, allows common functionality to be implemented within multiple smaller classes and added into other classes without re-implementation within those classes. Changes made in the inherited classes will be present in any class inheriting from them - _potentially_ removing the need to change multiple classes.

Multiple inheritance allows 'sharing' of _implementation_ - the actual code to perform some action (which can of course be overridden). Multiple interfaces allow 'sharing' of _specification_ - the intended API to which a class must adhere, without the actual implementation. There's a lot more detail and nuance to this of course, but I think these are the main points.

Delegation allows the _implementation_ of certain functionality to be kept within a class specifically for that functionality, yet enables another class to use or expose that functionality. This maintains orthogonality by avoiding adding extra behaviour to a single class.

## Exercise 1

> You’re asked to read a file a line at a time. For each line, you have to split it into fields. Which of the following sets of pseudo class definitions is likely to be more orthogonal?
```
class Split1 {
  constructor(fileName)  # opens the file for reading
  def readNextLine()     # moves to the next line
  def getField(n)        # returns the nth field in the current line
}
```

or

```
class Split2 {
  constructor(line)  # splits a line
  def getField(n)    # returns nth field in current line
}
```

`Split2` is more orthogonal - it _only_ implements the behaviour for splitting a line into fields and is not concerned with fetching the line from some source (i.e. a file). This allows the source of the lines to be changed without needing to alter the field splitting class, and allows it to be used in other contexts. For example, one could re-use the class for splitting lines of logs streamed over a network from a log aggregation API (i.e. [AWS CloudWatch `GetLogEvents`](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_GetLogEvents.html)) or for splitting lines of a web page fetched via an HTTP request.

## Exercise 2

> What are the differences in orthogonality between object-oriented and functional languages? Are these differences inherent in the languages themselves, or just in the way people use them?

In general, the enemy of orthogonality is **high coupling** between supposedly independent modules/sections of a software system. Both object-oriented and functional languages provide ways to increase and decrease this, depending on how they are used.

Functional programming tends to use a large number of small, usually pure, independent functions and compose them to build up larger modules of functionality. This decreases coupling and improves orthogonality, as each function is independent and can in theory be re-used and changed without affecting the larger module. However, these functions operate on data structures, transforming them and producing results which are fed to other functions. This can produce **hard to spot** coupling between the functions - changing the data can lead to change across many functions.
Object-oriented programming provides classes which encapsulate modules of functionality, _in theory_ making them independent of one another. However (as previously discussed) multiple inheritance, interfaces, subclassing/overriding and a host of other language features can lead to increased coupling and decreased orthogonality - classes inheriting unneeded methods from a parent, changing a parent class method implementation and breaking all the subclasses, etc.

In summary, I think the level of orthogonality between the two paradigms primarily comes down to the way people use them. However, in my experience the way in which functional languages are used _tends_ to be more orthogonal than object-oriented.
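To make the Exercise 1 answer concrete, here is a small Python sketch (my own illustration, not from the book) of the orthogonal `Split2` design being reused with two different line sources:

```python
from io import StringIO

class Split2:
    """Splits a single line into whitespace-separated fields; knows nothing about files."""
    def __init__(self, line):
        self._fields = line.split()

    def get_field(self, n):
        return self._fields[n]

# Source 1: lines read from a file-like object
fake_file = StringIO("alice 42\nbob 7\n")
names = [Split2(line).get_field(0) for line in fake_file]

# Source 2: lines arriving from anywhere else - here, a plain list,
# standing in for e.g. a log stream fetched over the network
log_lines = ["ERROR disk full", "INFO started"]
levels = [Split2(line).get_field(0) for line in log_lines]

print(names)   # ['alice', 'bob']
print(levels)  # ['ERROR', 'INFO']
```

The class never needs to change when the line source does - exactly the property `Split1` lacks.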
steadbytes
146,422
Building Svelte 3 Budget Poll App [2]
Where we ended In my last post I covered basics of Svelte3 installation and usage. We crea...
1,559
2019-07-22T16:47:42
https://dev.to/corvusetiam/building-svelte-3-budget-poll-app-2-1cid
javascript, webdev, svelte, tutorial
# Where we ended

In my last [post](https://dev.to/corvusetiam/how-to-build-budget-poll-app-in-svelte3-13c8) I covered the basics of Svelte 3 installation and usage. We created a git repository and made a multipage form component and a panel component. Now, we will try to design our content panels.

# Starting up

Open up our repo in your favourite editor and add a few files. We will need a component for each panel.

1. Initial poll data component, like name and full amount -> `PollHeader.svelte`
2. People joining up to the poll -> `PollContent.svelte`
3. Final table containing computed amounts for each person -> `Balance.svelte`

Please create those 3 files now. We will also need to add 2 new stores into `globals.js`. We will name them `peopleStore` and `pollStore`.

```js
export const peopleStore = writable({ people: [] })
export const pollStore = writable({ name: null, amount: 0, currency: null })
```

Now, we will import and use our first component containing app logic: `PollHeader`. Inside `App.svelte` you have to import, just like last time, both the new `PollHeader` component and the new store. Your HTML template within `App.svelte` should look like this (stripped of script, style, and wrapper elements for brevity):

```html
<FormPanel index={0}>
  <PollHeader />
</FormPanel>
```

Now let's design our PollHeader.
## PollHeader Component

```html
<script></script>

<style>
  .container {
    display: flex;
    flex-direction: column;
  }
  .group {
    display: flex;
    flex-direction: row;
  }
  .group label {
    width: 30%;
    margin-right: auto;
  }
  .group input {
    flex: 1;
  }
  .group select {
    flex: 1;
  }
  .push-right {
    margin-left: auto;
  }
</style>

<fieldset>
  <legend>Poll General Data</legend>
  <p>Please provide name of the poll and amount</p>
  <div class="container">
    <div class="group">
      <label for="pollName">Poll Name: </label>
      <input type="text" id="pollName" required>
    </div>
    <div class="group">
      <label for="pollAmount">Poll Amount: </label>
      <input type="number" id="pollAmount" required>
      <select id="pollCurrency">
        <option value="" selected disabled>Select Currency</option>
      </select>
    </div>
    <div class="group">
      <button class="push-right" type="button">Save</button>
    </div>
  </div>
</fieldset>
```

First, we made our basic component template. One thing you may notice is that we use the `.container` class again here; we used it before. This is pretty important: Svelte 3 compiles your CSS into modules, e.g. it replaces normal class names with hashes, so you don't have to worry about clashes between components. If you want to use some global CSS in Svelte, that is also possible: just define your CSS in `public/global.css` and use it as any other CSS file.

Now, I don't want to spend too much time on CSS. You can read about it here:

* [CSS Tricks Guide to FlexBox](https://css-tricks.com/snippets/css/a-guide-to-flexbox/)
* [MDN on margin, including the section about `margin: auto`](https://developer.mozilla.org/en-US/docs/Web/CSS/margin)

Now, we need to make sure that:

1. Our button is disabled if all fields are not filled in
2. We find a way to get values from fields into variables

In React land that would involve writing a large amount of accessors, functions, and JSX. Here we will achieve it in a much quicker manner.
Svelte follows the path paved by Vue with custom two-way binding. Here it is even easier, thanks to the nature of Svelte.

```html
<script>
  import { pollState } from "./globals.js";

  let name = "";
  let amount = 0;
  let currency = null;
  let changed = false;

  $: filled_in = (name !== "") && (amount > 0) && (currency != null) && changed;
  $: disabled = !filled_in;

  function save() {
    $pollState.poll = { name, amount, currency };
  }
</script>

<!-- parts of html omitted for brevity -->
<input type="text" id="pollName" required bind:value={name}>
<input type="number" id="pollAmount" required bind:value={amount}>
<select id="pollCurrency" bind:value={currency} on:change={ev => { changed = true; }}></select>

{#if filled_in }
  <button class="push-right" type="button" on:click={save} {disabled}>Save</button>
{/if}
```

We are lacking one thing: our select has only one option and, what's more, that option is a disabled one with a message to the user. Time to change that.

```html
<select id="pollCurrency" bind:value={currency} on:change={ev => { changed = true; }}>
  <option value="" selected disabled>Select Currency</option>
  {#each Object.entries(CURRENCY) as entry }
    <option value={entry[0]}>{entry[1].name}</option>
  {/each}
</select>
```

Is it better than React and normal JS functions? Hard to say. Here we get a pretty simple `{#each}{/each}` statement, which gives us basic iteration. Of course, our iterated element, here `CURRENCY`, can be any JS expression. If you need something different, you can write a function for it.

And time to define the currencies, which you should next import inside the PollHeader script tag:

```js
// place it in globals.js and import inside PollHeader
export const CURRENCY = {
  "PLN": { name: "złoty", format: "{} zł" },
  "USD": { name: "dollar", format: "$ {}" },
  "EUR": { name: "euro", format: "{} EUR" }
}
```

Of course, you could always provide them via a property.

## PollContent component

Time for a classic CRUD app. Let me start with how this part will look.
It should contain a few key parts:

* Table representing our current data
* Set of controls
* Block with edit component

The best idea for now would be to wrap those parts into components. Let us try this approach. Inside `PollContent.svelte` we will put:

```html
<script>
  import { pollState } from "./globals.js";
</script>

<div>
  <h2>{ $pollState.poll.name }</h2>
  <!-- Our table with data -->
</div>
<div>
  <button type="button">Add Entry</button>
  <button type="button">Update Entry</button>
  <button type="button">Delete Selected Entries</button>
</div>
<div>
  <!-- Our data input part -->
</div>
```

OK, why do we need the pollState store? I will try to follow this naming convention: if something is in globals and ends with `state`, you should think about it as a store. To make all of this easier, let me define a few more components here. The first one will be `PollTable.svelte`, with `PollInput.svelte` next.

```html
<script>
  import { pollState } from "./globals.js";
  import PollTable from "./PollTable.svelte";
  import PollInput from "./PollInput.svelte";
</script>

<div>
  <h2>{ $pollState.poll.name }</h2>
  <PollTable />
</div>
<div>
  <button type="button">Add Entry</button>
  <button type="button">Update Entry</button>
  <button type="button">Delete Selected Entries</button>
</div>
<div>
  <PollInput />
</div>
```

### PollTable

Generally it should be understandable enough. The only hard part here is the ternary if inside `<td>`. You should remember that you can put any JS expressions inside braces. The `person.amount > 0` expression checks whether a person paid money or owes all of it, and sets the class based on that. The function `get_person_by_timestamp` does one thing: iterate over our dataset and find the person with a matching timestamp. Why not an index?

> Question for later: How would you add sorting later on?
```html
<script>
  import { format_currency, pollState } from "./globals.js";

  function get_person_by_timestamp(ts) {
    for (let i = 0; i < $pollState.people.length; i++) {
      if ($pollState.people[i].timestamp === ts) {
        return i;
      }
    }
  }

  function select_checkbox(ts) {
    let index = get_person_by_timestamp(ts);
    $pollState.people[index].selected = !$pollState.people[index].selected;
  }
</script>

<style>
  .paid { color: green; }
  .owe { color: red; }
</style>

<table>
  <thead>
    <tr>
      <th>-</th>
      <th>No.</th>
      <th>Person</th>
      <th>Paid</th>
    </tr>
  </thead>
  <tbody>
    {#each $pollState.people as person, index }
      <tr>
        <td><input type="checkbox" on:change={ ev => select_checkbox(person.timestamp) } /></td>
        <td>{index + 1}.</td>
        <td>{person.name}</td>
        <td class="{ person.amount > 0 ? 'paid' : 'owe' }">{ format_currency(person.amount, person.currency) }</td>
      </tr>
    {/each}
  </tbody>
</table>
```

We want to keep track of which checkbox was selected. It is probably easiest to do this by just adding a boolean `selected` field in globals.js to each object representing a person.

Now let's finally open our browser and use some buttons. Fill in the values in the first panel, then click *Save* and *Next*. The problem is, if you click previous, you will see everything disappear - or rather, no values are being kept. Why? The reason is that our `{#if <smth>}` template will remove and re-add parts of the DOM. How to solve it?

### Going back to our FormPanel

We have two solutions here. The first one is to swap our `{#if}` template for good old CSS `display: none;`. It is pretty good and works fine. Our code may look like this now:

```html
<style>
  .multiform-panel {
    display: block;
  }
  .multiform-panel.hidden {
    display: none;
  }
</style>

<div class="multiform-panel { index !== $controllerState.current ? 'hidden' : '' }">
  <slot></slot>
</div>
```

But let me show you a second way and introduce you to the `onMount` lifecycle hook.
Inside our `PollHeader.svelte` we will do something like this:

```html
<script>
  /* let me import onMount */
  import { onMount } from "svelte";
  import { CURRENCY, pollState } from "./globals.js";

  onMount(() => {
    name = $pollState.poll.name || "";
    amount = $pollState.poll.amount || 0;
    currency = $pollState.poll.currency || null;
  })

  let name;
  let amount;
  let currency;
  /* rest goes here */
</script>
```

The `onMount` lifecycle hook is run every time the component is... guess what: mounted into the DOM. Which way is better? I think the `onMount` one is a bit cleaner.

### PollInput

```html
<script>
  import { onMount, createEventDispatcher } from "svelte";
  import { CURRENCY, pollState } from "./globals.js";

  export let currency;

  let dispatch = createEventDispatcher();
  let name;
  let amount;
  let timestamp = null;
  let is_update = false;

  function get_person_to_update() {
    for (let i = 0; i < $pollState.people.length; i++) {
      if ($pollState.people[i].selected) {
        return $pollState.people[i];
      }
    }
    return null;
  }

  onMount(() => {
    let updated = get_person_to_update();
    currency = $pollState.poll.currency;
    if (updated !== null) {
      timestamp = updated.timestamp;
      name = updated.name;
      amount = updated.amount;
    } else {
      name = "";
      amount = 0;
      timestamp = null;
    }
  });

  function dispatch_save() {
    dispatch('save', {
      name,
      amount,
      currency,
      timestamp: timestamp
    });
  }
</script>

<div>
  <div>
    <label for="name">Name: </label>
    <input id="name" bind:value={name}>
  </div>
  <div>
    <label for="amount">Amount: </label>
    <input id="amount" bind:value={amount}>
    <span>{ CURRENCY[currency].name }</span>
  </div>
  <div>
    <button type="button" on:click={ ev => { dispatch_save() } }>Save</button> <!-- [3] -->
  </div>
</div>
```

OK, what is going on here? As you can see, we are now creating custom events. This is another mechanism used to pass state between components. If your component looks like an input, custom events are a good way to pass data. It resembles normal DOM operation and is pretty easy to use later on.
In our code we do that with:

1. Creating a new event dispatcher with `createEventDispatcher`, which gives us back a function.
2. Writing a small helper function. An important detail is that we don't need to write two components for updating and creating items - I will use the timestamp inside the object as a marker. This object is accessible to the event handler via something like `ev => ev.detail`.
3. Binding our helper to the click event on the button.

Now, we need to fill out PollContent.

### Going back to PollContent

The rest should be at least understandable. We will update the PollInput tag with a few props. First, we want our input box to show up on a click of the button. We need a variable named `open` inside the script block and to edit our control buttons in this way (logging was added to make things clearer in the console, but can be freely removed):

```html
<div class="group horizontal">
  <button type="button" on:click={ev => {
    console.log(ev.target.innerText);
    open = true;
  }}>Add Entry</button>
  <button type="button" on:click={ev => {
    console.log("%s -- not implemented yet", ev.target.innerText);
  }}>Remove Selected</button>
  <button type="button" on:click={ev => {
    console.log(ev.target.innerText);
    open = true;
  }}>Update Selected</button>
</div>

{#if open }
  <PollInput on:save={ ev => { create_or_update(ev.detail) }} currency={$pollState.poll.currency}/>
{/if}
```

I added an `on:save` event handler to the `PollInput` tag. This is how you can listen to custom events - in exactly the same way as before. And this is how I implemented the `create_or_update` function:
```javascript
function create_or_update(obj) {
  let { name, amount, currency, timestamp } = obj;
  if (timestamp == null) {
    let people = $pollState.people;
    people.push(create_person(name, amount));
    $pollState.people = people;
    open = false;
  } else {
    for (let i = 0; i < $pollState.people.length; i++) {
      if ($pollState.people[i].timestamp == timestamp) {
        $pollState.people[i].name = name;
        $pollState.people[i].amount = amount;
        $pollState.people[i].currency = currency;
        $pollState.people[i].selected = false;
      }
    }
    open = false;
  }
}
```

Pretty simple. Those additional assignments are not necessary, but I like to keep them, because in Svelte assignment is special: assignment runs the whole reactive machinery. This is why, if you modify stores or reactive properties, you want to assign on each change.

The only part left is the `Remove Selected` button, but I will leave it as an exercise for the reader.

Again, if something isn't clear, please ask or read up on the [Svelte docs and API](https://svelte.dev). The examples are pretty cool too. See you soon!
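One loose end: `PollTable` imports a `format_currency` helper from `globals.js` that we never wrote. Here is a minimal sketch of my own, assuming the `{}` placeholder convention from the `CURRENCY` table above:

```javascript
// Hypothetical helper for globals.js (add `export` when placing it there).
// Assumes each CURRENCY entry carries a "format" template where "{}"
// marks the amount's position, as in the CURRENCY table from this post.
const CURRENCY = {
  "PLN": { name: "złoty", format: "{} zł" },
  "USD": { name: "dollar", format: "$ {}" },
  "EUR": { name: "euro", format: "{} EUR" }
};

function format_currency(amount, currency) {
  const entry = CURRENCY[currency];
  if (!entry) return String(amount); // unknown code: fall back to the raw number
  return entry.format.replace("{}", amount.toFixed(2));
}

console.log(format_currency(12.5, "USD")); // $ 12.50
```

Keeping the formatting rules next to the currency data means adding a new currency touches only one object.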
corvusetiam
146,633
Continuous Testing | Definition, Benefits & How to Perform
Due date: 23:59 p.m Oct 25th, 2021 The State of Quality Report 2021 Wouldn’t it be great to know how...
0
2019-07-23T04:41:50
https://dev.to/katalon/continuous-testing-definition-benefits-how-to-perform-1kio
cicd, continuous, automationtesting, softwaretesting
---
title: Continuous Testing | Definition, Benefits & How to Perform
published: true
tags: cicd, continuous, automationtesting, softwaretesting
---

>**Due date: 23:59 p.m Oct 25th, 2021**

>**The State of Quality Report 2021**
>_Wouldn't it be great to know how peers and experts are staying ahead of their quality assurance game? That's why we're creating The State of Quality Report 2021 to collect the best QA practices from you, professionals of all levels. We are happy to offer **the first 100 respondents a $30 Amazon gift card**, along with the final report from the survey results. [Raise your voice!](https://www.research.net/r/medium-devto)_

##**Overview**

We are living in a competitive era where meeting customers' expectations and demands is the key to winning over your competitors. As the need to release quality software products in a short amount of time continues to accelerate, incorporating continuous testing into your organization is a great way to ensure your product is released to the market at the quality customers expect.

![continuous testing](https://thepracticaldev.s3.amazonaws.com/i/j31mqftyqx10b1jknazq.png)

##**What is Continuous Testing?**

Continuous Testing is a software testing type in which the product is evaluated early, often, and throughout the entire Continuous Delivery (CD) process. It enables constant feedback for developers to fix bugs before they are released to production. Incorporating continuous testing into your organization's testing strategy not only accelerates your time-to-market but also improves the quality your customers expect. Read a more detailed definition [here](https://www.katalon.com/resources-center/blog/continuous-testing-introduction/).

##**Benefits of Continuous Testing**

Let's break down a few benefits of continuous testing!
* **Increase release rate:** Speed up delivery to production and release faster
* **Communication transparency:** Eliminate silos between the development, testing, and operation teams
* **Accelerate testing:** Run parallel performance tests to increase testing execution speed
* **Find errors:** Ensure as many errors as possible are found before being released to production
* **Reduce business risks:** Assess potential problems before they become an actual problem

##**Key Components of Continuous Testing**

###**Continuous Integration**

Continuous integration (CI) helps ensure that software components work together. It gathers code from developers working on one project and places it into a code repository. Integrating different developers' code into one project can generate a lot of bugs, and this is where continuous testing comes into play.

###**Continuous Delivery**

Continuous delivery is the process that helps deploy all code changes in a build to the testing or staging environment. It is an integral part of continuous testing, as it makes it possible to release builds to the production environment when needed.

Want to learn more about CI/CD? Check out my previous article, [The Complete Introduction to CI/CD](https://dev.to/testingnews1/ci-cd-101-all-you-need-to-know-4j04).

###**Test Automation**

Continuous testing can't be done without test automation. While manual testing is laborious and time-intensive, automation gives time back to your engineers to actually fix the bugs found during testing. Automating your test executions each time the code is integrated will allow you to find bugs as early as possible and fix them faster. Find bugs before they're released to production and you can save yourself a lot of time, money and effort fixing them at a later date.

##**How to Perform Continuous Testing**

Continuous testing should be implemented at every stage of your CI/CD pipeline. It works best by using the most recent build in an isolated environment.
You can also set up test suites at every point where code changes, merges or releases happen. This will help reduce the time and effort spent on testing while still reaping quality rewards. Below are some of the best practices to help you implement continuous testing to best serve your organization's needs.

* **Adopt more test automation:** Automation increases the speed and error coverage at which testing can function. Automating as much as you can in the development lifecycle will help you achieve faster releases.
* **Track metrics:** Use quantifiable metrics to keep track of your success or failure rate during testing.
* **Keep communication transparent:** Keep your communication lines transparent to prevent the testing pipeline from becoming siloed. Active communication is the key to achieving the balance necessary to effectively carry out continuous testing.
* **Integrate performance testing into the delivery cycle:** Performance testing is an integral part of continuous testing, as it helps check the speed, responsiveness, and stability of your application.

##**Continuous Testing Tools**

Tools are very useful to help make continuous testing even faster. Below are some of the best tools for your specific requirements.

**[Travis CI](https://travis-ci.org/)**
Travis CI is a continuous testing tool hosted on GitHub, offering hosted and on-premise variants.

**[Jenkins](https://jenkins.io/)**
Jenkins is a continuous integration tool written in Java and is configurable via both a GUI interface and console commands.

**[Katalon Studio](https://www.katalon.com/)**
Developed by Katalon Inc., Katalon Studio offers a comprehensive platform to perform automated testing for Web UI, [web services](https://www.katalon.com/web-testing/), [API services](https://www.katalon.com/api-testing/), and [mobile](https://www.katalon.com/mobile-testing/).

**[Selenium](https://www.seleniumhq.org/)**
Selenium is an open-source software testing tool.
It supports most mainstream browsers such as Chrome, Firefox, Safari, and Internet Explorer. Selenium WebDriver is used to automate web application testing.

##**Conclusion**

Successful continuous testing is a competitive advantage, enabling businesses to deliver value to their customers faster and with higher quality. However, it is not an easy task to make the jump to continuous testing, and if you're not aware of its basic tenets, you may find yourself headed for disaster. So make sure you have a strategic planning process in place before incorporating continuous testing into your organization.

**Resources: www.katalon.com**
testingnews1
146,634
Angular CDK Virtual Scroll
What is Virtual Scrolling? Modern web application are complex. They have a lot of moving p...
0
2019-07-27T09:26:39
https://dev.to/hassam7/angular-cdk-virtual-scroll-4ldj
angular, cdk, virtualscroll, material
# What is Virtual Scrolling?

Modern web applications are complex. They have a lot of moving parts and complexity, and oftentimes they have to deal with huge amounts of data. Consider that you have an application in which you have to display a list of users - a very common use case. As the number of items in the list increases, so does the number of elements in the DOM, resulting in more memory and CPU usage.

We can reduce memory and CPU consumption by rendering only a limited set of all the items. We can determine this limited set by looking at the height of the container, the scroll position and the height of an individual item, and then perform some calculations which tell us which items from the list should be rendered in the DOM. As soon as the user scrolls, we perform this calculation again, remove previously rendered items and render new items according to the calculation. All of this sounds very complex, and it is, but the good news is that Angular Material CDK Virtual Scroll does all of this for you, and then some. From now on I will be referring to Angular Material CDK Virtual Scroll as Virtual Scroll.

So let's get started!

# Prerequisites

Before we begin we need a sample application with a list of data, so we can play with it and later on add Angular virtual scroll to it. I have created a [small example](https://stackblitz.com/edit/ngfor-faker). This example is built using Angular version 8. It uses faker to generate fake data, adds it to an array and uses `*ngFor` to render the items of the array as DOM elements in the template.

# Installation

Installing Material CDK is pretty straightforward. If you are following along: in the stackblitz demo click on dependencies (it is just beneath the window where project files are listed, see the screenshot below), type `@angular/cdk` and hit enter; this will install Material CDK.
If you want to install it inside your angular cli project simply type `npm i --save @angular/cdk`

![Installing CDK](https://i.imgur.com/ScWpEbn.png)

Once you have installed the CDK, we are ready to move towards the next step: using it!

# Usage

Before we can begin using the Virtual Scroll we need to import the module for it. Open your `app.module.ts` and add this line after the last import:

`import { ScrollingModule } from '@angular/cdk/scrolling';`

This will import the `ScrollingModule`; now we need to tell our `app.module.ts` to import the contents of this module. For this, add `ScrollingModule` inside the imports array. Your `app.module.ts` will look like this:

```
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { FormsModule } from '@angular/forms';
import { AppComponent } from './app.component';
import { ScrollingModule } from '@angular/cdk/scrolling';

@NgModule({
  imports: [ BrowserModule, FormsModule, ScrollingModule ],
  declarations: [ AppComponent ],
  bootstrap: [ AppComponent ]
})
export class AppModule { }
```

Now we are ready to use Virtual Scroll! We will modify our application to use it. Here is a link to the [stackblitz app](https://stackblitz.com/edit/ngfor-faker-material-cdk?file=src/app/app.component.html) with the desired state. This is the same as our initial app, but with Angular CDK installed and imported.

Open `app.component.ts`; inside the constructor of `AppComponent` you will see:

```
constructor() {
  this.persons = Array(100).fill(0).map(() => {
    return {
      name: faker.name.findName(),
      bio: faker.hacker.phrase(),
      avatar: faker.image.business()
    }
  })
}
```

What this does is create an array of 100 objects; each object contains a name, bio and avatar generated by `faker`. This array is assigned to an instance variable called `persons`. Now open the template file for this component (`app.component.html`).
```
<div class="search-wrapper cf">
  <input type="text">
  <button (click)="undefined">Go To</button>
</div>
<div *ngFor="let person of persons;let i = index">
  <div class="card">
    <img src="https://i.imgur.com/63S0RAq.png" alt="Avatar">
    <div class="container">
      <h4><b>{{person.name}}</b></h4>
      <h4><b>ID: {{i}}</b></h4>
      <p>{{person.bio.substr(0, 30)}}</p>
    </div>
  </div>
</div>
```

The template consists of a button with a click handler which, as of now, does nothing. Below the button we are using `*ngFor` to iterate over the persons array; for each person we create a div with the class card. This `card` div consists of the person's name, id, bio and avatar. We are displaying only the first 30 characters of the bio.

Now let's modify our template to use Virtual Scroll. For this, first we need to wrap our ngFor with `<cdk-virtual-scroll-viewport>` and then replace `*ngFor` with `*cdkVirtualFor`. `cdk-virtual-scroll-viewport` has a required input called `[itemSize]`. `itemSize` represents the height, in pixels, of the items we are rendering; in our case this should be the exact height of our card div, which is `141px`. Another thing to note is that the height of all of our card components should be the same. As of now, Virtual Scroll does not fully support variable heights. We will look into the matter of variable heights later.

So let's wrap our `*ngFor` with `cdk-virtual-scroll-viewport` and replace our `*ngFor` with `*cdkVirtualFor`. Another thing which needs to be done before we see any visual change is giving a height to our `cdk-virtual-scroll-viewport`. I want to show two user cards at once, so I will give `cdk-virtual-scroll-viewport` a height of `282px` *(the size of a single card is 141, 141 * 2 = 282)*; you can either give the height in a stylesheet or add a style tag.
I will be adding the height via a style tag:

```
<cdk-virtual-scroll-viewport [itemSize]="141" style="height: 282px">
  <div *cdkVirtualFor="let person of persons;let i = index">
    <div class="card">
      <img src="https://i.imgur.com/63S0RAq.png" alt="Avatar">
      <div class="container">
        <h4><b>{{person.name}}</b></h4>
        <h4><b>ID: {{i}}</b></h4>
        <p>{{person.bio.substr(0, 30)}}...</p>
      </div>
    </div>
  </div>
</cdk-virtual-scroll-viewport>
```

Now we can see that our user list items are rendered. Let's verify that the DOM items are created dynamically and reused. Open your browser's developer tools and select the `Elements` panel. Find the DOM element shown in the screenshot below and expand it.

![Virtual Scroll in dev tools](https://i.imgur.com/0dPEjTI.png)

Now if you scroll through the list you will see that a limited number of DOM elements are created, which leads to reduced memory and CPU usage.

# Items with variable heights

As of now, Virtual Scroll does not support variable heights, but it is being worked on and is in an experimental phase, which means you should not use it in production as its API may change. For items with variable heights we need to install `@angular/cdk-experimental`. You can install it into stackblitz by clicking on dependencies, typing `@angular/cdk-experimental` and hitting enter. It will install the experimental CDK for you. Users of angular cli can install it with `npm i @angular/cdk-experimental`.

Now we need to import it into our app module. Just like before, add the following line in `app.module.ts`:

```
import { ScrollingModule as ExperimentalScrollingModule} from '@angular/cdk-experimental/scrolling';
```

It will import `ScrollingModule` and rename it as `ExperimentalScrollingModule`. We need both `ScrollingModule` and `ExperimentalScrollingModule` for variable heights to work. Now add `ExperimentalScrollingModule` into the imports array.
Your `app.module.ts` should look like this:

```
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { FormsModule } from '@angular/forms';
import { AppComponent } from './app.component';
import { ScrollingModule } from '@angular/cdk/scrolling';
import { ScrollingModule as ExperimentalScrollingModule} from '@angular/cdk-experimental/scrolling';

@NgModule({
  imports: [ BrowserModule, FormsModule, ScrollingModule, ExperimentalScrollingModule ],
  declarations: [ AppComponent ],
  bootstrap: [ AppComponent ]
})
export class AppModule { }
```

Now open `app.component.html` and replace `[itemSize]` with `autosize`. That's it! Now you can have elements of variable height in your Virtual Scroll but, as said before, this feature is experimental and should not be used in production. You can find an example for autosize [here](https://stackblitz.com/edit/cdk-virtual-scroll-autosize?file=src/app/app.module.ts).
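To demystify what the viewport does under the hood, here is a simplified sketch of the windowing calculation described at the start of this post. It is my own illustration, not the CDK's actual implementation (which also adds buffering and templating):

```javascript
// Given the scroll offset, viewport height and fixed item height,
// compute which items need to exist in the DOM.
function visibleRange(scrollTop, viewportHeight, itemSize, totalItems) {
  const first = Math.floor(scrollTop / itemSize);
  const last = Math.min(
    totalItems - 1,
    Math.ceil((scrollTop + viewportHeight) / itemSize) - 1
  );
  return { first, last };
}

// With our 141px cards in a 282px viewport, only two cards are live at a time:
visibleRange(0, 282, 141, 100);   // { first: 0, last: 1 }
visibleRange(141, 282, 141, 100); // { first: 1, last: 2 }
```

Everything outside `[first, last]` can be detached from the DOM, which is why memory and CPU usage stay flat no matter how long the list grows.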
hassam7
146,638
Creating Trimmed Self Contained Executables in .NET Core
A great new way to create simple self contained executables in .NET Core
2,159
2019-07-23T05:29:35
https://dev.to/pluralsight/creating-trimmed-self-contained-executables-in-net-core-4m08
csharp, programming, dotnet, tutorial
---
title: Creating Trimmed Self Contained Executables in .NET Core
published: true
description: A great new way to create simple self contained executables in .NET Core
tags: csharp,programming,dotnet,tutorial
series: Getting to know .NET Core
---

I'm going to show you a cool new feature in [.NET Core 3.0](https://docs.microsoft.com/en-us/dotnet/core/whats-new/dotnet-core-3-0). Let's say you want to create a simple, lean executable you can build and drop on to a server. For an example we'll create a console app that opens a text file, reads it line by line and displays the output.

First, let's create a new .NET Core app:

```
dotnet new console
```

This will scaffold a new console application. Now run it:

```
dotnet run
```

It should look something like this:

![](https://thepracticaldev.s3.amazonaws.com/i/u0qd94cn03d8sk5ycxtr.png)

I'm on a Mac here, but it doesn't matter as long as your development box has the .NET Core CLI installed. This displays "Hello World" on the console. Now, let's create a file called test.txt:

```
this is a file!
With some lines
whatever
```

It doesn't matter what you put in here, as long as it has some text in it. Next we'll create something that will read those lines and display them. Remove the "Hello World!" code and replace it with this:

```
string[] lines = System.IO.File.ReadAllLines(@"test.txt");

foreach (string line in lines)
{
    Console.WriteLine("\t" + line);
}

Console.WriteLine("Press any key to exit.");
System.Console.ReadKey();
```

This is pretty much your basic cookie cutter code for:

* opening up a file
* reading it into a string array
* looping through the array line by line
* printing each line
* exiting

Pretty simple stuff. When I run it on my machine it looks like this:

![](https://thepracticaldev.s3.amazonaws.com/i/6cedelzdo0mwuzafp6mq.png)

And that's great. But I'm on a Mac here; what if I want it to run on a Windows machine? Linux? No problem, this is .NET Core right? We'll just publish it to multiple targets.
But what if .NET Core isn't installed on the machine? What if I just want a simple executable I can run to read this file without a pile of files or .NET Core installed? ### Publishing in .NET Core Let's back up a little. .NET Core has had publishing profiles for a long time. The idea behind "target" publishing is one of the biggest selling points of the platform. Build your app, then publish it for a specific target: Windows, OSX, or Linux. You can publish it a few different ways * **Framework Dependent Deployment** - This relies on a shared version of .NET Core that's installed on the computer/server. * **Self Contained Deployment** - This doesn't rely on .NET Core being installed on the server. All components are included with the package (tons of files, usually). * **Framework Dependent Executables** - This is very similar to a framework dependent deployment, but it creates platform-specific executables that still require the .NET Core libraries. Ok, so what's this cool new feature I'm going to show? Well, when you do a self contained deployment it's cool because you don't need the runtime installed, but it ends up looking something like this: ![](https://thepracticaldev.s3.amazonaws.com/i/gj26daq7x2ro8ozpyuse.png) This is the application we just built published as a Self Contained Deployment for Windows. Yikes. Let's say you wanted to share this file reader application: asking someone to copy all these files into a folder just to run something that reads a text file is silly. ### New Feature: Self Contained Executables So to build that self contained executable I ran the following command: ``` dotnet publish -c release -r win10-x64 ``` This should look pretty familiar to you if you've done it before. 
But .NET Core 3.0 has a cool new feature: ``` dotnet publish -r win-x64 -c Release /p:PublishSingleFile=true ``` Using this flag, it will build something a little different: ![](https://thepracticaldev.s3.amazonaws.com/i/3jft97xof4uoxfcubb68.png) That's MUCH better. It's now a single .exe and .pdb. If we look at the info, it's not super small: ![](https://thepracticaldev.s3.amazonaws.com/i/unl7heb67jv5oajh9m22.png) But it includes the .NET Core runtime with it as well. And here's another cool feature. ``` dotnet publish -r win-x64 -c Release /p:PublishSingleFile=true /p:PublishTrimmed=true ``` In our example it doesn't change the size, but if you have a large, complex application with a lot of libraries and you publish it to a single file, it can get HUGE. By adding the PublishTrimmed flag, the publish will only include the libraries you need to run the application. So when we copy the files to a Windows 10 machine, we have a nice small package: ![](https://thepracticaldev.s3.amazonaws.com/i/rdeeg878qx6kb6vufpxc.png) And we run it and it works! Without .NET Core! ![](https://thepracticaldev.s3.amazonaws.com/i/zzjrrx6wkhlvharn1od0.png) And if I change my target: ``` dotnet publish -r linux-x64 -c Release /p:PublishSingleFile=true /p:PublishTrimmed=true ``` I can run it on a Linux server without .NET Core just as easily: ![](https://thepracticaldev.s3.amazonaws.com/i/sj2wuc4zjj9kelqq2x3j.png) Just remember, on a Linux machine you won't need the .NET Core runtime, but you will need the [Prerequisites for .NET Core on Linux](https://docs.microsoft.com/en-us/dotnet/core/linux-prerequisites?tabs=netcore2x) installed. ### Conclusion So this is a cool feature of .NET Core 3.0. If you want to build trimmed down self-contained executables for any of the platforms, you can do it easily with a couple of flags. This is great for those stupid simple things: console apps, data readers, microservices, or whatever you want to build and just drop on a machine with no hassle. 
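As a side note, if you don't want to pass the `/p:` switches on every publish, the same settings can be baked into the project file. Here's a minimal sketch of what that could look like (`PublishSingleFile`, `PublishTrimmed`, and `RuntimeIdentifier` are the real MSBuild property names; the rest of the project file is just the console-app template):

```xml
<!-- Example .csproj: bakes in the single-file and trimming options,
     so a plain `dotnet publish -c Release` produces the trimmed exe -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.0</TargetFramework>
    <RuntimeIdentifier>win-x64</RuntimeIdentifier>
    <PublishSingleFile>true</PublishSingleFile>
    <PublishTrimmed>true</PublishTrimmed>
  </PropertyGroup>
</Project>
```

With that in place, `dotnet publish -c Release` alone gives you the same trimmed single file.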
I thought it was a cool feature to show. - [Jeremy](http://bit.ly/2JWE2uV) ### What's your .NET Core IQ? ![](https://thepracticaldev.s3.amazonaws.com/i/pv5cvzxdg7qjyxzsfe71.png) My ASP.NET Core Skill IQ is 200. Not bad, [can you beat it? Click here to try](http://bit.ly/2Gsr6Mq)
jeremycmorgan
166,700
How we get awesome testimonials from our customers
Customer testimonials are important to us for both marketing and social validation. We feature them a...
0
2019-09-30T10:54:55
https://blog.leavemealone.app/how-we-get-testimonials/
testimonials, airtable
--- title: How we get awesome testimonials from our customers published: true tags: Testimonials, Airtable canonical_url: https://blog.leavemealone.app/how-we-get-testimonials/ cover_image: https://images.unsplash.com/photo-1510034696085-597d716bd162?ixlib=rb-1.2.1&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=1080&fit=max&ixid=eyJhcHBfaWQiOjExNzczfQ --- Customer testimonials are important to us for both marketing and social validation. We feature them all over our website and we have a dedicated Wall of Love for all of the awesome things our customers have said about us. Many founders don't ask for them because they are scared to ask their customers for feedback, but customers often want to help. ## Starting simple When we built and launched Leave Me Alone we received an incredibly positive response including a lot of love on Twitter! After the launch madness died down we realised that we could make use of all of these lovely tweets. Inspired by [Baremetrics Wall of Love](https://baremetrics.com/wall-of-love) we put together the first version of our very own Wall of Love! <!--kg-card-begin: image--> ![How we get awesome testimonials from our customers](https://lh3.googleusercontent.com/HV1H9VsqbpOdkhJJIchS6PqqPgGTYTOVRsOPvR-2HNp6eXtcyARkz59ck_j0Bpd7RQo275yTYTVzEM5SJZy6DKPgZQLnNW8L87-2vC1bsi1Oy3wHtmYSIOanNhqPv8kdo6HcCINx)<figcaption>The old Wall of Love with embedded tweets - and several profile pictures being Holiday themed!</figcaption> <!--kg-card-end: image--> This was a really good start and people started asking how we built this page. It was very simple: we just used the Twitter embed code for the tweets. However, there were several downsides to this approach. The main issue was that because embedded tweets are loaded from Twitter when someone visits the page, they are not available to search engines, which meant we were losing out on potential SEO benefits. 
We also experienced issues with the tweets taking a long time to show or appearing on the page without their styles, which looked terrible! We wanted the testimonials to be more customisable, match our brand, and for our customers to be able to specify how their information would be displayed rather than using their name and avatar from Twitter. ## Asking for a testimonial We wanted to make it as simple as possible to write a testimonial for Leave Me Alone. Gathering testimonials is not easy; it takes time and follow-ups to get people to provide feedback. The first thing we needed was a short and sweet message that we could send to our customers. If you use Intercom or another chat CRM then this could be automated, but we are sending these out manually at the moment. <!--kg-card-begin: image--> ![How we get awesome testimonials from our customers](https://lh5.googleusercontent.com/hcrKFTuXvZkYwH_EMpz0mz7JlwzXPzaZP2IunPFWs_mQCzBejGCcDwTUFJqCDKX4lyf1ICEYiF9JfZVoGNlJDH26aGBTKLqKdXjU4dEkRYxGvr-yfyn9OMbGpLg-Q4ZprTIFiiAs)<figcaption>A short message asking our customers for help :)</figcaption> <!--kg-card-end: image--> We have tried and failed with this kind of thing in the past because we were overthinking what to say. We are getting responses with this message because the request is genuine. We really are so grateful to every single one of our customers, and if they have a few minutes to write us a testimonial it helps us so, so much. 
It provides us with a hosted form for customers to complete and the response goes straight into a table. We already use Airtable for other things at Leave Me Alone - [bug reports](https://leavemealone.app/bugs), [feedback](https://leavemealone.app/feedback), to track [news coverage](https://leavemealone.app/news), and to record our [expenses](https://leavemealone.app/open). We ask for some basic information (name, company, and an avatar) and one or two sentences. We also ask for the main reason they used Leave Me Alone which we don't use anywhere right now but is valuable information for us! We are asking for them to write about how we have helped, which prompts them to think and write about something specific. This is much better than "write us a testimonial" since that results in lower quality and non-specific responses. <!--kg-card-begin: image--> ![How we get awesome testimonials from our customers](https://blog.leavemealone.app/content/images/2019/09/3-airtable-feedback-form2.png) <!--kg-card-end: image--> Airtable even has this gallery feature to preview the responses, but we tend to use the spreadsheet view most of the time. <!--kg-card-begin: image--> ![How we get awesome testimonials from our customers](https://lh5.googleusercontent.com/Ojq4HsTpfRazce2JdB5Ft7w3Z1i3DwkfvTHJbp7kB_UDLerW797oMlFQNvdQvbCtHvnEeNnUjFAQjEQMLa2f04-IZVL9FKi9xQUJ7PXLcfZmUTxu87wxmy-oRPOiNhuA7IROLmsX)<figcaption>Some of our testimonials in Airtable - the gallery view makes it really nice to view them!</figcaption> <!--kg-card-end: image--> We found that Baremetrics also used Airtable at some point for their testimonials (maybe they still do!) and they have created [this Airtable template](https://airtable.com/shrDuPs3Wv2p9lNnz/tbl7pMEoXVyAW9KxH/viwqYBvlaC8odjOZo?blocks=hide) called "base" which you can copy to start collecting testimonials for your product straight away. This is a [preview of the form](https://airtable.com/shrufrcitHKs6q694) which customers will use. 
## The final result Once we had enough testimonials we replaced the Tweets and styled them to match our branding. It looks much more professional and cleaner than our previous tweets solution. It's unlikely that visitors are going to read all of the testimonials, so we took inspiration from Baremetrics (again) and made it so they load in a different order each time. This way every single one of our amazing customers gets a chance to be at the top! You can [check out our new Wall of Love here](https://leavemealone.app/wall-of-love)! <!--kg-card-begin: image--> ![How we get awesome testimonials from our customers](https://lh6.googleusercontent.com/bdPcY1uyCS47lvFNmCR3azjYmpRPeYuwbgXi7fToOsbM2e0co2OFui59rR9QJUU8OXqOmsxyzIpEiTCVQ7L59bk6DuXil_fjQcMYWmqvT52qdK0x4WrD7wmienL_C-3Q3RwyeBqa)<figcaption>✨ Our brand new Wall of Love! ✨</figcaption> <!--kg-card-end: image--> We are incredibly happy with our new Wall of Love. We hope you agree that it looks much better, and we now have a super organised way of collecting and managing our testimonials for the future! <!--kg-card-begin: hr--> * * * <!--kg-card-end: hr--> We hope you enjoyed this post and hope you feel inspired to create a Wall of Love for yourself. Even if you aren't ready to display them yet, it is never too soon to start collecting social proof! Let us know if you have any comments or questions. Also, we would love to see your Walls of Love so please share them with us on [Twitter](https://twitter.com/LeaveMeAloneApp)!
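As an aside, the "different order each time" trick only takes a few lines on the client. A minimal sketch (not necessarily our exact implementation; `testimonials` is a stand-in for whatever array the page renders from) using a Fisher–Yates shuffle:

```javascript
// Shuffle a copy of the testimonials so each page load shows a
// different order, without mutating the original array.
function shuffle(items) {
  const result = [...items]; // work on a copy
  for (let i = result.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [result[i], result[j]] = [result[j], result[i]]; // swap
  }
  return result;
}

const testimonials = ['Alice', 'Bob', 'Carol', 'Dave'];
console.log(shuffle(testimonials)); // same names, random order
```

Render from the shuffled copy and every customer gets their turn at the top.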
dinkydani21
166,788
Browser API's localization: why can't you do this, Chrome?
Hey, devs! I'd like to share an interesting thing I've found, and ask your thoughts about it. I was...
0
2019-09-06T13:24:59
https://dev.to/room_js/browser-api-s-localization-why-can-t-you-do-this-chrome-1epk
chrome, google, javascript, browser
Hey, devs! I'd like to share an interesting thing I've found, and ask your thoughts about it. I was playing with `Date`'s `toLocaleString()` method in order to get a month name translated with the browser's API only. The code looks like this: ```javascript const date = new Date(); date.toLocaleString('en-GB', { month: 'long' }); ``` I tested it in my Chrome browser with a few different languages, and everything was perfect. The interesting part appeared when I posted it on Instagram: {% instagram B1_r-2RCKcW %} I got a comment from one of my followers trying to apply this technique to get an Armenian translation of the month name. According to the [ISO Language Code Table](http://www.lingoes.net/en/translator/langcode.htm) the language code for Armenian is `hy-AM`. I tried it myself and discovered that it doesn't really work in the Chrome browser (you can see it in the cover picture). For some reason, it returns `M09` (no idea what that means). But at the same time, it works as expected in Mozilla Firefox: ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/pf5mrbcupm1r8fmz90l9.png) I've asked the Google Chrome team on Twitter already: {% twitter 1169965679392514048 %} Let's see if we get any thoughts on this issue from them. Please, feel free to share your ideas too... Thanks for reading and have an amazing weekend!
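If you want to be defensive about this in application code, one option is to ask the runtime which locales it claims to support before formatting. A minimal sketch (the `monthName` helper is hypothetical; `Intl.DateTimeFormat.supportedLocalesOf` is a standard API, but note it only reports what the browser *claims* to support, so it won't catch a case like Chrome's `M09` fallback where the locale is accepted but the month data is missing):

```javascript
// Hypothetical helper: resolve the requested locale against what the
// runtime says it supports, falling back to a known-good locale.
function monthName(date, locale, fallback = 'en-GB') {
  const [supported] = Intl.DateTimeFormat.supportedLocalesOf([locale]);
  return date.toLocaleString(supported || fallback, { month: 'long' });
}

const date = new Date(2019, 8, 6); // 6 September 2019
console.log(monthName(date, 'en-GB')); // "September"
console.log(monthName(date, 'hy-AM')); // Armenian month name where available
```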
room_js
166,848
Dojo Widget Middleware
The newest features of Dojo 6 include the new function based widgets and widget middleware. Class ba...
0
2019-09-06T15:43:58
https://learn-dojo.com/dojo-widget-middleware/
dojo, webdev, typescript
The newest features of [Dojo 6](https://dojo.io/blog/version-6-dojo) include the new function based widgets and widget middleware. Class based widgets come with decorators to [watch for property changes](https://learn-dojo.com/watch-for-property-changes-in-widgets) and work with [metas](https://learn-dojo.com/dojo-from-the-blocks) which allow you to get information about your widget. With the introduction of function based widgets, those patterns have been replaced by the new [middleware](https://dojo.io/learn/middleware/introduction) system. ## Manage local state There are two middlewares available for managing local state in a widget. * [cache](https://dojo.io/learn/middleware/available-middleware#cache) - persists data * [icache](https://dojo.io/learn/middleware/available-middleware#icache) - works like cache, but also invalidates the widget when data changes. ### cache You might use `cache` for some fine grained state management, because if you do use it, it's up to you to manually invalidate the widget using the [`invalidator`](https://dojo.io/learn/middleware/core-render-middleware#invalidator) middleware, so that it will render with the updated `cache` values. ```tsx // src/widgets/Parrot/Parrot.tsx import { create, invalidator, tsx } from "@dojo/framework/core/vdom"; import cache from "@dojo/framework/core/middleware/cache"; import * as css from "./Parrot.m.css"; // use `cache` and `invalidator` as middleware // in render factory const factory = create({ cache, invalidator }); export const Parrot = factory(function Parrot({ middleware: { cache, invalidator } }) { const name = cache.get<string>("name") || ""; return ( <virtual> <h3 classes={[css.root]}>{`Polly: ${name}`}</h3> <input classes={[css.input]} placeholder="Polly want a cracker?" 
type="text" onkeyup={event => { // update cache data with input value cache.set( "name", (event.target as HTMLInputElement).value ); // invalidate widget to render // with new data invalidator(); }} /> </virtual> ); }); export default Parrot; ``` You can see this demo in action here. {% codesandbox dojo-middleware-cache-3wc5n %} This is fine, _but it could be easier_. ### icache The `icache` is designed specifically to work like `cache`, but to also run an `invalidator()` on each update to render the widget with updated data. It also comes with an extra method, `icache.getOrSet()` that will return the current value or a specified default value if none available. ```tsx // src/widgets/Parrot/Parrot.tsx import { create, tsx } from "@dojo/framework/core/vdom"; import icache from "@dojo/framework/core/middleware/icache"; import * as css from "./Parrot.m.css"; const factory = create({ icache }); export const Parrot = factory(function Parrot({ middleware: { icache } }) { // get the current name value or an empty string const name = icache.getOrSet("name", ""); return ( <virtual> <h3 classes={[css.root]}>{`Polly: ${name}`}</h3> <input classes={[css.input]} placeholder="Polly want a cracker?" type="text" onkeyup={event => { // when the cache is updated, it will // handle calling the invalidator icache.set( "name", (event.target as HTMLInputElement).value ); }} /> </virtual> ); }); export default Parrot; ``` This would be equivalent to the [`@watch`](https://github.com/dojo/framework/tree/master/src/core#internal-widget-state) decorator that you can use with class based widgets. I would guess that 99% of the time, you would use `icache` to manage local state in your widgets. {% codesandbox dojo-middleware-icache-n6ktf %} ## Application Store There are a number of ways you could work with [stores](https://dojo.io/learn/stores/introduction) in Dojo. 
You could use [containers](https://learn-dojo.com/dojo-containers) or a [provider](https://github.com/dojo/framework/tree/master/src/stores#advanced). _Or_, you could use a [store](https://dojo.io/learn/stores/introduction#store-middleware) middleware! We can create a `store` middleware that will hold a list of users. ```ts // src/middleware/store.ts import createStoreMiddleware from "@dojo/framework/core/middleware/store"; import { User } from "../interfaces"; export default createStoreMiddleware<{ users: User[] }>(); ``` Now, we need a way to retrieve a list of users. We could do that via a [process](https://github.com/dojo/framework/tree/master/src/stores#processes), which is how you can manage application behavior. We can build a process that will fetch some user data. ```ts // src/processes/userProcess.ts import { createCommandFactory, createProcess } from "@dojo/framework/stores/process"; import { replace } from "@dojo/framework/stores/state/operations"; const commandFactory = createCommandFactory(); const fetchUsersCommand = commandFactory(async ({ path }) => { const response = await fetch("https://reqres.in/api/users"); const json = await response.json(); return [replace(path("users"), json.data)]; }); export const fetchUsersProcess = createProcess("fetch-users", [ fetchUsersCommand ]); ``` With a `store` and a `process` ready to go, we can use them in a widget that will display our list of users. 
```tsx // src/widgets/Users/Users.tsx import { create, tsx } from "@dojo/framework/core/vdom"; import * as css from "./Users.m.css"; import store from "../../middleware/store"; import { fetchUsersProcess } from "../../processes/userProcess"; import { User } from "../../interfaces"; // pass store to render factory // as middleware const render = create({ store }); // helper method to render list of Users const userList = (users: User[]) => users.map(user => ( <li key={user.id} classes={[css.item]}> <img classes={[css.image]} alt={`${user.first_name} ${user.last_name}`} src={user.avatar} /> <span classes={[css.title]}> {user.last_name}, {user.first_name} </span> </li> )); export default render(function Users({ middleware: { store } }) { // extract helper methods from the store in widget const { get, path, executor } = store; // get current value of Users const users = get(path("users")); if (!users) { // if no Users, run the `executor` against // the process to fetch a list of Users executor(fetchUsersProcess)(null); // since the process to fetch Users does not need // any arguments, execute with null // if the network is slow, return // a loading message return <em>Loading users...</em>; } return ( <div classes={[css.root]}> <h1>Users</h1> <ul classes={[css.list]}>{userList(users)}</ul> </div> ); }); ``` The key here is that the `store` middleware has an `executor` method that can be used to execute processes directly from your widget. ```ts executor(fetchUsersProcess)(null); ``` In this case, the `fetchUsersProcess` does not expect a payload, so we can pass `null` to it. If it needed to do pagination for example, we could pass which page we wanted as an argument and use it in our process. You can see this demo in action here. 
{% codesandbox dojo-function-based-widgets-94eyy %} ## Summary There's more [middleware](https://dojo.io/learn/middleware/available-middleware) available that we didn't cover in this post, related to theming, i18n, the DOM, and interacting with the render method. We'll cover most of these in future blog posts! I'm really excited about all the new features in this latest release of Dojo, about working with the available middleware, and even about what I could do with a [custom middleware](https://dojo.io/learn/middleware/middleware-fundamentals#creating-middleware)!
odoenet
169,594
Embracing a mentality of prediction – evaluation – exploration
A while ago I started to play a little game when doing test-driven development: Every time I run my...
0
2019-09-12T22:43:22
https://cleandatabase.wordpress.com/2019/09/12/embracing-a-mentality-of-prediction-evaluation-exploration/
reflection, learning, life
--- title: Embracing a mentality of prediction – evaluation – exploration published: true tags: Reflection,Learning,Life canonical_url: https://cleandatabase.wordpress.com/2019/09/12/embracing-a-mentality-of-prediction-evaluation-exploration/ --- A while ago I started to play a little game when doing test-driven development: Every time I run my microtest, I predict the outcome. Will the test fail or pass? When my prediction is right, I continue. When it is not, I investigate and explore why my code behaved differently than I thought it would. I will not simply go on and poke around (though I do that frequently, too) but really try to understand why my expectation of the code's behavior was different from reality. When I find the cause of the wrong prediction, I try to learn from reality and grow my knowledge about the code. Sometimes I not only find gaps in my knowledge but also flaws in my work process or in the way I do things. I found this game to be a very powerful tool for learning. I'm still not very seasoned in doing it, but it's something I want to practice more often. Today I realized that I can use this technique not only for TDD, but for many different things in software development and also in life. We are constantly predicting what will happen, what consequences our actions will have: Will my code work after merging that branch? What will change when I change this setting? Will I catch the bus if I go now? Will I need an umbrella today? How will my co-worker react to the feedback I give him? We constantly predict, but I usually don't evaluate the outcome in reality or explore the reasons when my prediction was wrong. In not doing the evaluation and exploration, I probably miss a powerful tool for conscious learning and instead stick with a very unconscious and probably tedious trial-and-error approach. 
That’s something I want to change: What will happen when I start to notice the predictions I’m doing all the time, when I consciously evaluate their outcome and explore why my prediction was wrong? My prediction is that I will not only learn faster and become more aware of myself and the things around me, but might also find flaws in my thinking, in the way I approach things and in beliefs I hold. Sounds like quite some effort and also a bit scary, but I want to try it out. I would love to hear your thoughts on that!
pesse
171,344
In the Guts of a Shiv App - Building SaaS #24
In this episode, we got our hands dirty with the Django Shiv app that we build to work out issues...
2,058
2020-01-21T18:02:51
https://www.mattlayman.com/building-saas/guts-shiv-app/
python, django, saas
{% youtube 0kS-uPMOLlg %} In this episode, we got our hands dirty with the Django Shiv app that we build to work out issues with finding templates and other settings problems. Show notes for this stream are at [Episode 24 Show Notes](https://www.mattlayman.com/building-saas/guts-shiv-app/). To learn more about the stream, please check out [Building SaaS with Python and Django](https://www.mattlayman.com/building-saas/).
mblayman
173,900
Cloud and OpenStack: A brief introduction
Cloud Computing Cloud Computing is a technology developed to...
0
2019-09-22T00:53:27
https://dev.to/opendevufcg/cloud-e-openstack-uma-breve-introducao-49cb
cloud, openstack, distributedsystems, ptbr
# Cloud Computing Cloud Computing is a technology developed to make the resources available to an application or system more flexible. When we talk about resources here, we mean RAM, storage, CPU, and the like. Through the Cloud, it is possible to create an **elastic and dynamic infrastructure**, that is, the system can grow or shrink more easily, since resources can be used **on demand**. ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/2p1nw7s7280cb08ut0rl.png) **Why is Cloud Computing so important for the future of computing?** Let's take a practical example: an application. An application's growth is **erratic**. We can try to estimate, based on data, how much an application tends to grow or shrink, but even so it is a complex prediction. There will be periods when we need a larger share of resources, because the number of users will be higher, and likewise periods when those resources (bought by the application's developers, for example) **will sit idle**. That is not good at all, since we want to make the most of all the money invested in the idea. Using the Cloud, the user **controls their resources dynamically**, taking into account the demand of the moment. This makes the **system elastic**, which in turn saves resources and investment. ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/2dp8do1nhi21x8bhr45i.png) There is an important concept in all of this, one that is also crucial to understanding how this resource administration works: **resource provisioning**. One definition of provisioning is: *"supplying the resources needed for the management system to work properly"*. Distributing resources on demand. 
![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/x276oqgxosc546nnvjw1.png) To administer Cloud resources there are several technologies, one of the most widely used being [**OpenStack**](https://www.openstack.org/). OpenStack can be defined as a **manager for the components of multiple virtualized infrastructures**. Think of OpenStack as a platform of components, a kind of operating system for the Cloud. OpenStack uses virtual resources to run a combination of tools, its components. The components serve to create a cloud environment that meets the National Institute of Standards and Technology's five criteria for the Cloud: **networking, pooled resources, a user interface, provisioning of features, and automatic allocation/control of resources.** ---- # OpenStack: Concepts ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/n58izzmom8bp808rmeyq.png) Now that you have an idea of what Cloud Computing is and of its importance to computing, and have been introduced to OpenStack and its components, there are some concepts that are important to understand when dealing with this kind of system: **Instance** An instance is nothing more than a virtual machine, already with its image, security groups, and other settings, managed by OpenStack. **Images** An instance's image is the operating system it uses. Just as we have the famous .iso format for the images we normally use, the most widely used standard format for the Cloud in OpenStack is qcow2. Images in this format can be found both on the websites of most distributions and on the OpenStack site itself. **Network** OpenStack's Network functionality is quite complete. As the name itself suggests, this area focuses on network connections, allowing the creation of virtual networks for the instances. These networks can be either public or private. 
Still within Networking, it is possible to create virtual routers to connect networks to one another, which makes it possible to create public networks, that is, networks connected to the internet. **Security Groups** Security groups can be defined as a set of firewall rules applied to an instance. These rules are defined by users according to the instance's needs. If the user does not create a security group, OpenStack assigns a default group that, by default, blocks everything. This strategy of blocking everything and then gradually opening access through security groups is extremely important for the instance's security and for full control over access to it. **Volumes** Volumes are virtual disks that persist an instance's data. Think of a volume as a hard drive installed in your machine. That is where the data lives, even if the instance is restarted. If your instance is not booted with a volume, its data will not be saved. **Identity** This part concerns access control for your OpenStack. In the Horizon interface, it is possible to create projects and users, set passwords, and thereby control users' access and privileges. **Key Pairs** By default, to access your instance via SSH, you usually need to register a user-generated public key in OpenStack. Once registered, you just apply it when creating an instance. **Flavors** Flavors are a set of predefined settings that determine an instance's characteristics, such as memory and storage capacity. They can also be understood as the "hardware configurations" available for a server. Several flavors can be created and applied to different instances. ---- # OpenStack: Components Since OpenStack acts as a component manager, it is important to understand a little about what each one does: each has its own responsibility in managing the Cloud's infrastructure. 
Since there are many of them, we don't need to get attached to the details of each component, so here is an overview of the ones used most frequently: ## OpenStack Compute (Nova) ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/kxnbyrceret9m5km22zg.png) Responsible for the virtualization infrastructure. It is a tool for full access to and management of OpenStack's compute resources, including scheduling, creation, and deletion. It is hypervisor-agnostic (the hypervisor being the software layer between the hardware and the operating system), and is compatible with most of them. ## OpenStack Block Storage (Cinder) ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/cju49zb8kk1sbhe7nyqt.png) Responsible for the storage service behind OpenStack Nova. With Cinder it is possible to manage volumes through the Horizon interface or via the CLI. ## OpenStack Object Storage (Swift) ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/d8o495zf1kvbaztwd4kl.png) A highly fault-tolerant service that stores and retrieves unstructured data objects. While Cinder is responsible for volumes, Swift is responsible for objects. ## OpenStack Image Service (Glance) ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/t8r73kudrehh7fdwswlu.png) Responsible for virtual machine image resources. It stores and retrieves virtual machine disk images from a variety of locations. ## OpenStack Networking (Neutron) ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/6fynnrvuni20iy2dc32y.png) Responsible for the networking service resources of OpenStack. It connects networks to other OpenStack services. ## OpenStack Identity (Keystone) ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/kxnbyrceret9m5km22zg.png) Responsible for authentication and authorization, defining rules for access to OpenStack resources. It is also the endpoint catalog for all services. 
## OpenStack Dashboard (Horizon)

![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/9rkdqe3hu46uw345v7a1.png)

Provides a web interface for accessing OpenStack services. Depending on the needs of the cloud service, the infrastructure administrator can add other components, such as OpenStack Monasca, whose role is to act as a monitoring system for the machines (to learn more, see the OpenStack [components](https://www.openstack.org/openstack-components/) page).

----

## OpenStack: First Steps

It is possible to take your first steps with OpenStack without having a cloud infrastructure made up of several computers; for that, there is a solution: [**OpenStack AIO**](https://docs.openstack.org/openstack-ansible/latest/user/aio/quickstart.html) (All-In-One). With it, you can create images, launch instances, define security groups, integrate public keys and use other features, just as in a full OpenStack deployment; the difference is that everything runs on a single machine (a virtual machine, for example), which is why it is considered a laboratory for OpenStack. The OpenStack site has an area dedicated to this solution, with a complete step-by-step guide for its installation and configuration.

It is also worth saying that OpenStack AIO is integrated with [**Ansible**](https://pt.wikiversity.org/wiki/Ansible), an automation tool created to manage multiple machines at once. The most important part of Ansible to understand right now is its Playbooks. Playbooks are sets of Plays (instructions) through which Ansible can lay out, step by step, a configuration process; the structure is quite similar to a shell script. They are basically easy-to-read instructions for how configurations will be applied, making it possible to carry out tasks on remote machines or different hosts.
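To make the playbook idea concrete, here is a minimal hypothetical example (the host group, package and task names are invented for illustration): it describes, in readable steps, what should happen on a set of remote machines.

```yaml
# site.yml - a hypothetical playbook with a single play
- name: Prepare web servers
  hosts: webservers        # group of machines defined in the Ansible inventory
  become: true             # run tasks with elevated privileges
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present

    - name: Ensure nginx is running
      service:
        name: nginx
        state: started
```

Running `ansible-playbook site.yml` would then apply these steps to every host in the `webservers` group.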
For administering and managing OpenStack, we have two options: the web interface already mentioned, Horizon, or the [**OpenStack CLI**](https://docs.openstack.org/cli/command-list.html) (Command Line Interface). The Horizon dashboard has several tabs related to the concepts covered in this article.

To wrap up: when we think about the future of computing, it is impossible not to think about the cloud. One of the most limiting factors in our field is resources, which is why we need to use them in the smartest and most conscious way possible. Tools like OpenStack, together with the entire open source community that drives the project, give us a vision in which, in the future, limits will be just terms used in calculus.

---

Thank you very much for reading! Stay tuned: soon there will be new articles from OpenDevUFCG contributors here on **dev.to**. Follow OpenDevUFCG on [Twitter](https://twitter.com/OpenDevUFCG), [Instagram](https://instagram.com/OpenDevUFCG) and, of course, [GitHub](https://github.com/OpenDevUFCG).
martalais
175,053
Git hooks and HuskyJS
When using Git, like most of projects today do, code goes through 4 stages. Those stages are: Untra...
0
2019-09-22T18:59:55
https://dev.to/hi_iam_chris/git-hooks-and-huskyjs-13gb
husky, git, githooks, javas
When using Git, as most projects do today, code goes through 4 stages. Those stages are:

- Untracked - Files/changes are visible, but they will not be committed.
- Staged - Changes get into this state by executing the git add command. Once changes are staged, they will be committed with the next commit.
- Committed - After executing the git commit command, these changes are committed and will be sent to the main repository with git push. If changes are made after a commit, they have to be added again with the git add command.
- Pushed - By executing git push, changes are sent to the remote server.

This all sounds very clear and simple but sometimes, just going from stage to stage is not enough and some other action might be needed. That can be anything, like sending a mail notification to the rest of the team that code is pushed, running tests before a push or performing code analysis before each commit. This is where Git hooks come in very useful.

### What are git hooks?

> Git hooks are scripts that Git executes before or after events such as commit, push or merge.

To simplify it, hooks are actions that get automatically triggered. These actions are bash scripts and, once a git project gets initialized, they can be found in the .git/hooks folder. In this folder there is already a sample script for each event, and those can be used as examples of hooks. Events for which hooks can be defined are:

- applypatch-msg
- pre-applypatch
- post-applypatch
- pre-commit
- prepare-commit-msg
- commit-msg
- post-commit
- pre-rebase
- post-checkout
- post-merge
- pre-receive
- update
- post-receive
- post-update
- pre-auto-gc
- post-rewrite
- pre-push

This is a long list of different events, and most of them you will never need to use. The reason for that can be found in the two categories hooks are divided into: client-side and server-side hooks. Client-side hooks are the ones executed on your local machine, like pre-commit and post-commit; server-side hooks are the ones executed on the central machine, like pre-receive.
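As a minimal sketch of what a client-side hook looks like, here is a pre-commit script (it would live at `.git/hooks/pre-commit` with the executable bit set). The `check` command below is a hypothetical stand-in so the example is self-contained; a real hook would run a linter or test suite instead.

```shell
#!/bin/sh
# Sketch of a client-side pre-commit hook. "check" is a hypothetical
# stand-in command; replace it with a real linter or test runner.
check="true"

echo "Running pre-commit checks..."
if ! $check; then
  echo "Checks failed, aborting commit"
  exit 1  # any non-zero exit status aborts the commit
fi
echo "Checks passed, commit continues"
```

The exit status is the whole interface: 0 lets Git continue to the next stage, anything else stops it, which is exactly the contract the custom-hook example later in this post relies on.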
There are sadly not a lot of materials around on git hooks, but probably the best and most complete one is githooks.com. This site contains examples for all hooks and links to many other libraries and projects using them or helping to work with them. While that all sounds great, there are some problems with hooks that need to be considered.

- Hooks are bash scripts, and most developers today are not comfortable with writing bash.
- Hooks are located in the .git/hooks folder, which means they are not cloneable and each developer would need to manually add them to the hooks folder.

### Husky

As a solution for these problems we can use the Husky library. Husky is an npm library that makes it easy to create and manage hooks.

### Requirements:

Node >= v8.6.0, Git >= v2.13.2

Installation:

```bash
# npm
npm install husky --save-dev

# yarn
yarn add husky
```

Adding a first hook. To use hooks with husky, the following steps need to be done:

1. Create a .huskyrc file in the root of your project
2. Inside of this file create an object with a property named hooks that will contain all hooks
3. Add your hooks under that object, where the key is the event name and the value is the command to be executed.

### Example (.huskyrc):

```json
{
  "hooks": {
    "pre-commit": "echo \"Hello, pre-commit hook!\""
  }
}
```

In the example above, before each commit, we should see the text "Hello, pre-commit hook!" in the command line. This is a very simple example that won't be of much use. However, we could also run different things: an npm task, another bash script or a node script.

NPM task example: `"pre-commit": "npm run test"`

NodeJS example: `"pre-commit": "node script.js"`

Bash script: `"pre-commit": "sh script.sh"`

When these configurations are added, husky will create the hooks for us. Also, since the configuration file is added to the project root, it can be tracked with Git and is available to all team members. This is far simpler than writing bash scripts in the hooks folder and sending them to each individual person.

### Custom hooks

When writing custom hooks, the return value of the process is important.
If your custom hook needs to prevent going to the next stage, it needs to exit with any non-zero value. If it exits with the value 0, the stage will be treated as successful and Git will proceed to the next one. Below is an example of a bash script that would prevent a commit on the master branch, and otherwise allow it.

```bash
#!/bin/sh
branch="$(git rev-parse --abbrev-ref HEAD)"

# a branch name where you want to prevent commits. In this case, it's "master"
if [ "$branch" = "master" ]; then
  echo "You can't commit directly to the '"${branch}"' branch"
  exit 1
fi
```

### Problems with Husky

Husky also has a few issues to take into consideration. One of them is adding it to an existing project. There are situations where hooks wouldn't be properly created when adding husky to an existing npm project. The solution I found to work was to delete node_modules and the yarn.lock and/or package.lock files and reinstall all dependencies.

The second problem that could happen is duration. While hooks add only a few milliseconds, the process they trigger can last a long time. An example would be running tests before each commit. Triggering them is short and trivial, but executing them can take a long time.

### Conclusion

As you could see above, hooks can help to improve the development process a lot. With husky on top, this is even easier. There are a few limitations to be aware of; however, the possibilities are only limited by the developer's creativity. A small demo project with a few samples can be found in [this github repository](https://github.com/kristijan-pajtasev/git-husky-demo).
hi_iam_chris
175,087
That was quite an episode...
Rails + React + Redux - Pt 5 This post is going to focus on creating an Appearance for each queen...
3,838
2019-09-22T22:08:21
https://dev.to/jaredharbison/dragnet-2nmf
ruby, rails
<h1><center>Rails + React + Redux - Pt 5</center></h1>

*********************************************************

This post is going to focus on creating an Appearance for each queen. I'll iterate through each Episode's contestants' ids to find_or_create_by the contestants' drag_names, then create the Appearance using the Episode and Queen ids.

*********************************************************

<h1><center> Let's get started! </center></h1>

*********************************************************

*1. Def get_seasons in season.rb to scrape the list of season names from Fandom, concatenate each season name into an array of URLs for each Season's Wikipedia page, then iterate through the array to .create!() an instance of each Season.*

<center>__see previous posts for the gist__</center>

*2. Def get_queens in queen.rb to scrape the list of queens' names from Fandom, concatenate each queen's name into an array of URLs for each Queen's Fandom page, then iterate through the array to .create!() an instance of each Queen and her attributes (including associations for Quotes and Trivia).*

<center>__see previous posts for the gist__</center>

*3. With Seasons and Queens instantiated, iterate through the Seasons and .create!() an appearance for each episode per Queen and her appropriate appearance attributes.*

{% gist https://gist.github.com/JaredHarbison/03fd7f86b43369ac97adeb03eba06e31 %}

A subsequent post will include the last in the scraping segment of the project, where I'll gather the queens' stats from each episode and store them with her appearance! Then I'll have enough data to start creating the user interface.

*********************************************************

<h1><center> That's all folks! </center></h1>
jaredharbison
175,111
Half Done...
As a mother to a very active child I have a habit of leaving things half finished. Shelves of half re...
0
2019-09-22T23:57:20
https://dev.to/nerdmom630/half-done-1of9
blogging, mom
As a mother to a very active child, I have a habit of leaving things half finished. Shelves of half read books and folders on my computer full of half completed projects. There just never seems to be time to finish, especially if I would like to sleep like a normal functioning human. So how do you get over this problem? I still haven't figured out how to get through my overwhelming stack of books. That is a mystery that may never be solved. My advice for getting through that stack of half finished projects while you have a small human in tow is as follows:

1. Let them see what you are working on. You will never have the free time to work alone unless they are in school, but if you are employed during that time frame, your free time to learn that new skill is as elusive as seeing a unicorn. If you let them see what you are working on, whether they understand why it works or not, you bring them in instead of pushing them away. Truthfully, the minute I start to explain what I'm working on to my son, he stops crawling up my face to get attention because he feels included in the experience. Then slowly, as if by magic, he wanders away and gives me the five minutes I need to figure out why the damn click function won't work no matter which way I write it (more often than not it's because I'm distracted and missed a semicolon).

2. Build things you might need in your day to day life. As a content creator and everyday human, my life needs organization. I realized when I became part of the Twitch community many moons ago that being a content creator is a special kind of job with special requirements on your time. So I wondered if people like me felt that the regular day planner wasn't organized in a way that would allow them to accomplish everything they needed. So I began to build a day planner specific to what I felt the needs of a content creator were.
Then yesterday I realized what a cool idea it would be to have an app that allowed me to keep track of good behavior and chores for my son so that maybe he could earn an allowance. So I started work on said app. Both of these projects are open tabs on my laptop. I work a little on one and, when I get stuck, I move to the other. This means that no matter what, I have accomplished something and feel like the time I took for myself meant something. I feel like, as wives and mothers, trying to learn a new skill or just doing something for ourselves is hard. There is always somebody to take care of and you are normally last on that list. If coding is your newfound passion, as it is for me, then you desperately need to take those five minutes for yourself. Even if you can't code: read somebody else's blog post, do one practice exercise or follow one coder on your favorite social media spot. Do one thing every day to further your goal. Finish your projects one piece at a time. There is a light at the end of the tunnel and I swear it isn't just the glare of your computer screen.
nerdmom630
175,180
Securing Microservices with Auth0 Pt. 2 (Resource Service)
This is the second part to a series of posts called Securing Microservices with Auth0. If you missed...
2,373
2019-09-25T21:59:36
https://dev.to/bbenefield89/securing-microservices-with-auth0-pt-2-2c23
react, spring, microservices, tutorial
This is the second part to a series of posts called **Securing Microservices with Auth0**. If you missed the [previous post](https://dev.to/bbenefield89/securing-your-microservices-with-auth0-20e3), I would suggest you go back and read that post first. # Overview In this part of the **Securing Microservices with Auth0** series, we're going to be creating the **Resource Service** microservice. The **Resource Service** will be our applications **REST API** and will perform **CRUD** operations on specific users **Todos**. By this I mean we will be able to: - **C**: Create (POST) - **R**: Read (GET) - **U**: Update (PATCH) - **D**: Delete (DELETE) At first, this service will be insecure and will not need any sort of authentication. It's important that we see the issues that come with insecure applications both from the developer perspective and the user perspective. After creating our **Auth Service**, which will be a different **microservice** in another post in this series, we will then perform **authorization** on requests being sent to our **Resource Service**. You can also go ahead and play around with [the code](https://github.com/bbenefield89/SpringTodo/tree/bbenefield89/tutorial_pt2) for this post. This branch, `bbenefield89/tutorial_pt2`, is the UI portion and the insecure RESTful API (**Resource Service**). # Creating the Resource Service For this series, I've decided to go with the [Spring Framework](https://spring.io/) to create our backend. **Microservices** are **not** **Java/Spring Framework** specific and we can just as easily create our **microservices** in any language that has the ability to create a web server and make HTTP requests. This means we could potentially create our **Resource Service** using [Express the Node Web Framework](https://expressjs.com/) and then turn around and create our **Auth Service** using [Django the Python Web Framework](https://www.djangoproject.com/). 
This is one of the many advantages of going with a **microservice architecture** when creating your applications. Enough talk, it's time for action! Let's head over to [Spring Initializr](https://start.spring.io), where you can quickly create the boilerplate code for your **Spring** application. When you land on the **Spring Initializr** page, go ahead and enter the basic information for your project. As an example, my project's information will look like this:

<img src="https://thepracticaldev.s3.amazonaws.com/i/vb2qa20lubwgjts0s6b9.PNG" width="100%">

And my chosen dependencies will be:

<img src="https://thepracticaldev.s3.amazonaws.com/i/x2cbg1tvyffv7907m4r7.PNG" width="100%">

Go ahead and click on the green button at the bottom that says `Generate the project`. This will prompt you to download your project as a **zip** folder.

<img src="https://thepracticaldev.s3.amazonaws.com/i/eth1pxnsg9b233njwjpw.PNG" width="100%">

Unzip your project, feel free to discard the **zipped** folder, and let's open up our project in our favorite IDE and get to work.

# Inside our Resource Service

Now that we're ready to go, let's find our first file at `TodoApp_API/src/main/resources/application.properties` and rename it to `application.yml`, as I'm a fan of `YAML` when it comes to **Spring's** configuration properties. Inside our `application.yml` file you'll notice it's empty. Go ahead and place the following text inside:

```yml
server:
  port: 8080
```

It's not much and, to be honest, **Spring** defaults its **PORT** to **8080**, but I like to be as clear as possible, especially when we have multiple services for the same application.

## Creating the `Todo` **Entity**

We've already discussed the application and yes, this is going to be yet *another* todo app, but I believe creating something you're familiar with is best when learning about new technology. Might as well focus on the technology instead of the logic.
Create a new **package** at `TodoApp_API/src/main/java/${}/${}/TodoApp_API` and name it **Entities** (`TodoApp_API/src/main/java/${}/${}/TodoApp_API/Entities`). This package is where we're going to create all of our **Entities**, which are basically just a **Java** representation of a row in our DB.

Inside the **Entities** folder, create a new **Java** file, name it `Todo.java` and place the following code inside (filling in the ${} with your own path). Be sure to read the comments as I'll explain some of the code as we go.

**Todo.java**

```java
package ${}.${}.TodoApp_API.Entities;

import lombok.Data;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

/**
 * This annotation comes from "Lombok" and allows us to forget about writing
 * a lot of boilerplate code like "Constructors/Getters/Setters"
 */
@Data
// Creates this class as a Bean to be picked up by Spring
@Entity
public class Todo {
    // Lets JPA know this is the unique identifier for our DB
    @Id
    // Sets the value that should be automatically generated for our ID in the DB
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String title;

    // We'll use the user's email address to find a user's todos
    private String userEmailAddress;

    /**
     * Notice we don't have to write anything else
     * Lombok will take care of this for us
     */
}
```

## Creating the `TodoRepository` "Repository"

The **Repository** for an **Entity** is going to be an interface that extends another interface that comes with a ton of helpful methods to perform all of our **CRUD** operations. Create another package named `Repositories` and place it at `TodoApp_API/src/main/java/${}/${}/TodoApp_API/Repositories`.
Inside, create a new file named `TodoRepository.java` and place the following code inside:

**TodoRepository.java**

```java
package ${}.${}.TodoApp_API.Repositories;

import ${}.${}.TodoApp_API.Entities.Todo;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.stereotype.Repository;

import java.util.List;

/**
 * Sets this interface up to be found by Spring
 * Later on we'll be taking advantage of the @Autowired annotation where this interface will then become a
 * concrete class
 */
@Repository
/**
 * Our repository interface needs to extend the JpaRepository interface and pass along two arguments
 * 1. The Entity class that this repository is responsible for
 * 2. The id data type we chose for the Entity this Repository is responsible for
 *    In this example, we've chosen to create our id as a Long data type
 */
public interface TodoRepository extends JpaRepository<Todo, Long> {
    /**
     * This is a custom method we'll be using to get a list of todos depending on the user's email address
     * JPA supports a type of DSL where we can create methods that relate to an Entity by using keywords
     * 1. "findAll": returns a List of Todo
     * 2. "By": This signifies that we are going to be giving something specific to look for in the DB
     * 3. "UserEmailAddress": Find a Todo that contains the correct "userEmailAddress" in the DB
     */
    public List<Todo> findAllByUserEmailAddress(String userEmailAddress);

    /**
     * Another custom method. This method will take the ID of a Todo and the user's email address to return a
     * single Todo
     */
    public Todo findByIdAndUserEmailAddress(Long todoId, String userEmailAddress);

    /**
     * This custom method will delete a single Todo depending on the ID and the userEmailAddress
     */
    public void deleteByIdAndUserEmailAddress(Long todoId, String userEmailAddress);
}
```

That's it for our **Repository**. We've only added a few methods, but **JpaRepository** will still give us access to a lot more of the inner methods that we haven't defined.
## Creating the `TodoService` "Service"

The idea behind a **Service** in this context is to bridge the gap between a **Controller** and a **Repository**. This is also where you will write your business logic. Splitting up your code like this keeps things small and typically easier to reason about. Go ahead and create another package named `Services` and place it at `TodoApp_API/src/main/java/${}/${}/TodoApp_API/Services`. Inside, create a file named `TodoService.java`.

**TodoService.java**

```java
package ${}.${}.TodoApp_API.Services;

import ${}.${}.TodoApp_API.Entities.Todo;
import ${}.${}.TodoApp_API.Repositories.TodoRepository;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import java.util.List;

/**
 * Lets Spring know to pick this up at runtime
 * You've probably noticed that so far we haven't really told Spring when to use any of our classes and that's
 * because of "Component Scanning". To learn more about Component Scanning go to the following URL
 * https://www.baeldung.com/spring-component-scanning
 */
@Service
public class TodoService {
    TodoRepository todoRepository;

    /**
     * The @Autowired annotation sets this constructor to be called when booting our application and will automagically
     * inject any dependencies that we specify in the arguments
     * This is also known as "Dependency Injection" and is one of the more attractive aspects of the Spring Framework
     */
    @Autowired
    public TodoService(TodoRepository todoRepository) {
        this.todoRepository = todoRepository;
    }

    // Returns a List of all of a user's Todos
    public List<Todo> findAllByUserEmailAddress(String userEmailAddress) {
        return todoRepository.findAllByUserEmailAddress(userEmailAddress);
    }

    // Returns a single Todo
    public Todo findByIdAndUserEmailAddress(Long todoId, String userEmailAddress) {
        return todoRepository.findByIdAndUserEmailAddress(todoId, userEmailAddress);
    }

    // Creates/Updates a Todo and returns that Todo
    public Todo save(String userEmailAddress, Todo todo) {
        todo.setUserEmailAddress(userEmailAddress);
        return todoRepository.save(todo);
    }

    // Deletes a Todo
    public void deleteByIdAndUserEmailAddress(Long todoId, String userEmailAddress) {
        todoRepository.deleteByIdAndUserEmailAddress(todoId, userEmailAddress);
    }
}
```

## Creating the `TodoController` "Rest Controller"

Okay, we're almost finished with our first pass on our **Resource Service**. We just need to create the **Controller** that will determine our **service's** URL endpoints. Create your final package named `Controllers` and place it at `TodoApp_API/src/main/java/${}/${}/TodoApp_API/Controllers`. Inside, create yet another file, name it `TodoController.java` and place the following code inside.

**TodoController.java**

```java
package io.github.bbenefield89.TodoApp_API.Controllers;

import io.github.bbenefield89.TodoApp_API.Entities.Todo;
import io.github.bbenefield89.TodoApp_API.Services.TodoService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;

import java.util.List;

@RestController
@RequestMapping("/api/todos")
public class TodoController {
    private TodoService todoService;

    @Autowired
    public TodoController(TodoService todoService) {
        this.todoService = todoService;
    }

    // Returns a List of Todos
    @GetMapping("/{userEmailAddress}")
    public List<Todo> findAllByUserEmailAddress(@PathVariable String userEmailAddress) {
        return todoService.findAllByUserEmailAddress(userEmailAddress);
    }

    // Returns a single Todo
    @GetMapping("/{userEmailAddress}/{todoId}")
    public Todo findByIdAndUserEmailAddress(@PathVariable String userEmailAddress, @PathVariable Long todoId) {
        return todoService.findByIdAndUserEmailAddress(todoId, userEmailAddress);
    }

    // Creates a new Todo
    @PostMapping("/{userEmailAddress}")
    public Todo save(@PathVariable String userEmailAddress, @RequestBody Todo todo) {
        return todoService.save(userEmailAddress, todo);
    }

    // Deletes a single Todo
    @DeleteMapping("/{userEmailAddress}/{todoId}")
    public void deleteByIdAndUserEmailAddress(@PathVariable String userEmailAddress, @PathVariable Long todoId) {
        todoService.deleteByIdAndUserEmailAddress(todoId, userEmailAddress);
    }
}
```

# Manually testing our endpoints

Now that we've written our endpoints, it's time to test them and make sure everything works. I would suggest downloading [Postman](https://www.getpostman.com/downloads/) for API testing. Let's go ahead and start making some HTTP requests.

### POST `localhost:8080/api/todos/user@gmail.com` (Create Todo)

**Example Request**

```json
{
  "title": "Get a haircut",
  "userEmailAddress": "user@gmail.com"
}
```

**Example Response**

```json
{
  "id": 1,
  "title": "Get a haircut",
  "userEmailAddress": "user@gmail.com"
}
```

<img src="https://thepracticaldev.s3.amazonaws.com/i/yf6ef9m05zuefzbvaq5h.PNG" width="100%" />

### GET `localhost:8080/api/todos/user@gmail.com` (Get All Todos)

**Example Request**

```
Nothing required
```

**Example Response**

```json
[
  {
    "id": 1,
    "title": "Get a haircut",
    "userEmailAddress": "user@gmail.com"
  }
]
```

<img src="https://thepracticaldev.s3.amazonaws.com/i/tmp5rkjznq5zvageph6s.PNG" width="100%" />

### GET `localhost:8080/api/todos/user@gmail.com/1` (Get a Single Todo)

**Example Request**

```
Nothing required
```

**Example Response**

```json
{
  "id": 1,
  "title": "Get a haircut",
  "userEmailAddress": "user@gmail.com"
}
```

<img src="https://thepracticaldev.s3.amazonaws.com/i/tqz4o49c2z6mcxpdcl81.PNG" width="100%" />

### DELETE `localhost:8080/api/todos/user@gmail.com/1` (Delete a Single Todo)

**Example Request**

```
Nothing required
```

**Example Response**

```
Nothing returned
```

<img src="https://thepracticaldev.s3.amazonaws.com/i/4ky5mdktnswi3ce0pj3z.PNG" width="100%" />

Great, everything works! The only problem now is that our endpoints aren't secured (to be fair we don't *really* have any users either).
This means that you, as the user `hacker_man@gmail.com`, could easily access my data and vice versa.

# Conclusion

In this post, you didn't learn much about **Spring** or **Auth0**, but you did learn about creating RESTful endpoints, which is an important step in the process. Not to mention, you now see how easy it is for insecure endpoints to be accessed by the wrong people. In the next section of this series (link coming soon), you'll get an introduction on how to create a very simple **Auth Service** that uses:

- **Spring Security**: Prevents access for users that are not authed
- **Prehandle**: A method to intercept requests to endpoints that we can use to run logic before all requests (the secret sauce of our **auth**)
bbenefield89
175,195
React Ionic Framework and Hooks
building a sample application using react hooks api, useState & useReducer
3,623
2019-09-24T04:04:04
https://dev.to/ionic/react-ionic-framework-and-hooks-5135
javascript, react, reacthooks, ionic
---
title: React Ionic Framework and Hooks
published: true
description: building a sample application using react hooks api, useState & useReducer
tags: #javascript #react #reacthooks #ionic
cover_image: https://thepracticaldev.s3.amazonaws.com/i/nty8zwujft3ydudvu58m.png
series: Ionic Framework & Firebase React Hooks
---

> Please check out and subscribe to my video content on YouTube. Feel free to leave comments and suggestions for what content you would like to see. [YouTube Channel](https://www.youtube.com/channel/UCMCcqbJpyL3LAv3PJeYz2bg)

### Overview

A simple application with a list of things and the ability to add, edit and delete things. We will use the `useReducer` hook to manage the state of the array of things. We will use the `useState` hook to manage the state of the modal dialog we are using to input the information for the thing we are editing or updating, and we use the `useState` hook to manage the state of the input field in the modal dialog.

## Let's Start with the useReducer API

```javascript
// useThings.js
// --
import React from "react";

const useThings = () => {
  // handle the specific action dispatched
  const reducer = (state, action) => {
    switch (action.type) {
      case "ADD_THING": {
      }
      case "DELETE_THING": {
      }
      case "EDIT_THING": {
      }
      default: {
        return state;
      }
    }
  };

  // here we set things up to use the reducer
  const [state, dispatch] = React.useReducer(reducer, { things: [] });

  // the function returns everything the caller needs to
  // dispatch specific actions and get the updated state changes
  return { state, dispatch };
};

export default useThings;
```

### Modify values in the state

**Add An Item:** Add the `action.data` to the end of the array and set the state properties.

```javascript
case "ADD_THING": {
  return { ...state, things: [...state.things, action.data] };
}
```

**Deleting An Item:** Using `action.index`, slice the array to get the things before the thing specified by the index and everything after the item specified by the index.
This in turn is used to create a new array, which we set `state.things` with.

```javascript
case "DELETE_THING": {
  return {
    ...state,
    things: [
      ...state.things.slice(0, action.index),
      ...state.things.slice(action.index + 1)
    ]
  };
}
```

**Editing An Item:** Using `action.index`, slice the array to get the things before the thing specified by the index and everything after the item specified by the index. Next we use the `action.data` as the new element to replace the element that was previously there. This in turn is used to create a new array, which we set `state.things` with.

```javascript
case "EDIT_THING": {
  return {
    ...state,
    things: [
      ...state.things.slice(0, action.index),
      action.data,
      ...state.things.slice(action.index + 1)
    ]
  };
}
```

## Displaying a Modal for User Input

![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/drmn0kvamdgvedgnuz9c.png)

We use the `useState` functionality to manage displaying the modal dialog for inputting data for new things or editing things. The state has two keys, `isVisible` and `value`. `isVisible` will be set to true to show the dialog and false to hide it. The `value` property will be set when we are actually editing an object. We will also add an additional property called `index` when editing a thing so we can find it in the state array to update it.
```javascript // ThingsList.js // -- // using the useState functionality to manage displaying the modal // dialog for inputting data for new things or editing things const [modalInfo, setModalInfo] = useState({ isVisible: false, value: "" }); ``` ## Managing the Input Value Using useState ```javascript // ThingEdit.js // -- const [inputValue, setInputValue] = useState(); ``` How we use this in the `render` method of the component; when there is an input event in the input element, we update the state with the value entered by the user ```jsx <IonInput value={inputValue} onInput={e => setInputValue(e.target.value)} /> ``` So when the user is finished in the modal they will click on of two buttons to call the `handleClick` method ```jsx <IonButton onClick={() => handleClick(true)}>Save</IonButton> <IonButton onClick={() => handleClick(null)}>Cancel</IonButton> ``` If `handleClick` is called with a `true` value, then we need to return the value from the input form which is saved in our state, if the value is passed to `handleClick` is null, then we just need to exit the function and not return any data ```javascript // ThingEdit.js // -- const handleClick = _save => { handleFormSubmit({ isVisible: false, value: _save && inputValue }); }; ``` Back in the `ThingsList` component we need to handle the call from the `ThingEdit` component to process the data received from the modal. Get the response from the modal/form so we can update or create a new item. if the `_formResponse.value` is empty then ignore because the user selected the cancel button. If there is a `_formResponse.value` & `modalInfo.index` has a value, then edit the item; the `modalInfo.index` variable tells us which item in t he array to update; if no `modalInfo.index` then create a new things with the `_formResponse.value` ```javascript // ThingsList.js // -- const handleFormSubmit = _formResponse => { if (_formResponse.value) { modalInfo.index != null ? 
editEntry(modalInfo.index, _formResponse.value) : addNewEntry(_formResponse.value); } // reset the modalInfo state setModalInfo({ ...modalInfo, isVisible: false, value: "" }); }; ``` ## Displaying the List Of Things ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/5rup1puza36vxrzzxts2.png) We render the list of things from the component's custom hook, `useThings`, which we mentioned at the start of the post. ```javascript // get the function from my custom hook to manage the list // of things let { state, dispatch } = useThings(); ``` This gives us access to the state object, and the state object contains `state.things`. We loop through the array of values using the `Array.map()` function. ```jsx <IonList> {state.things.map((_thing, _index) => ( <IonItem key={_index}> <IonLabel className="ion-text-wrap">{_thing}</IonLabel> <IonButton onClick={() => modalInfoWithEntry(_thing, _index)}> Edit </IonButton> <IonButton color="danger" onClick={() => deleteEntry(_index)}> Delete </IonButton> </IonItem> ))} </IonList> ``` We have all of the base functions, which are wrappers for calling the reducer methods with `dispatch`. ```javascript // ThingsList.js //- /** * add entry to the list using `dispatch` from custom hook */ const addNewEntry = _data => { dispatch({ type: "ADD_THING", data: _data }); }; /** * remove entry from the list using `dispatch` and index in the array * to call custom hook * @param {*} _index */ const deleteEntry = _index => { dispatch({ type: "DELETE_THING", index: _index }); }; /** * update an existing entry in the list based on data * and the index of the entry * @param {*} _index * @param {*} _data */ const editEntry = (_index, _data) => { let payload = { index: _index, data: _data }; dispatch({ type: "EDIT_THING", ...payload }); }; ``` ## Wrapping It All Up All of the code for this project is available to you on the CodeSandbox.io website, listed below. 
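Because the reducer behind `useThings` is a pure function, the wrapper functions above are easy to reason about: each one just dispatches an action object. The reducer cases covered in this post can be exercised as plain JavaScript, outside of React entirely — here is a sketch that mirrors those cases (the sample data is made up):

```javascript
// Minimal re-creation of the reducer cases shown above, runnable in plain Node.
function thingsReducer(state, action) {
  switch (action.type) {
    case "ADD_THING":
      // append the new thing to a fresh array
      return { ...state, things: [...state.things, action.data] };
    case "DELETE_THING":
      // everything before the index + everything after it
      return {
        ...state,
        things: [
          ...state.things.slice(0, action.index),
          ...state.things.slice(action.index + 1)
        ]
      };
    case "EDIT_THING":
      // everything before the index + the new data + everything after it
      return {
        ...state,
        things: [
          ...state.things.slice(0, action.index),
          action.data,
          ...state.things.slice(action.index + 1)
        ]
      };
    default:
      return state;
  }
}

let demoState = { things: ["one", "two", "three"] };
demoState = thingsReducer(demoState, { type: "EDIT_THING", index: 1, data: "TWO" });
demoState = thingsReducer(demoState, { type: "DELETE_THING", index: 0 });
console.log(demoState.things); // [ 'TWO', 'three' ]
```

Each call returns a brand-new state object, which is what lets React detect the change and re-render.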
React hooks such as `useState` and `useReducer` allow your whole application to be built from functional components whose state is managed with the hooks API. Here is a link to a great video to give you some of the reasons why you might want to give hooks a try in your application. {% youtube eX_L39UvZes %} {% codesandbox tovwv %}
aaronksaunders
175,237
Angular: Build more dynamic components with ngTemplateOutlet 🎭
Introduction To build reusable and developer-friendly components, we need to make them...
0
2019-10-15T11:48:35
https://dev.to/mustapha/angular-build-more-dynamic-components-with-ngtemplateoutlet-3nee
angular, webdev, beginners, tutorial
# Introduction To build reusable and developer-friendly components, we need to make them more dynamic (read more adaptable). Great news, Angular comes with some great tools for that. For instance, we could inject content into our components using `<ng-content>`: ```typescript @Component({ selector: 'child-component', template: ` <div class="child-component"> <ng-content></ng-content> </div> `, }) export class ChildComponent {} @Component({ selector: 'parent-component', template: ` <child-component> Transcluded content </child-component> `, }) export class ParentComponent {} ``` <figcaption>Snippet 1: Transclusion</figcaption> Although this transclusion technique is great for simple content projection, what if you want your projected content to be context-aware? For example, while implementing a list component you want the items template to be defined in the parent component while being context-aware (aware of the current item it hosts). For those kinds of scenarios, Angular comes with a great API called `ngTemplateOutlet`! In this post, we will define what `ngTemplateOutlet` is, then we will build the list component we mentioned above as well as a card component to see the two most common `ngTemplateOutlet` use-cases. We will do the implementation of these components step-by-step, so by the end of this post, you should feel comfortable using this in your Angular components :) # Definition From the current Angular documentation `ngTemplateOutlet` is a directive that: **Inserts an embedded view from a prepared TemplateRef**. This directive has two properties: - ngTemplateOutlet: the template reference (type: `TemplateRef`). - ngTemplateOutletContext: A context object to attach to the EmbeddedViewRef. Using the key `$implicit` in the context object will set its value as default. What this means is that in the child component we can get a template from the parent component and we can inject a context object into this template. 
We can then use this context object in the parent component. If you find this too abstract, here is an example of how to use it: ```html <!-- Child component --> <child-component> <ng-container [ngTemplateOutlet]="templateRefFromParentComponent" [ngTemplateOutletContext]="{ $implicit: 'Joe', age: 42 }" > </ng-container> </child-component> <!-- Parent component --> <parent-component [templateRefFromParentComponent]="someTemplate"> <ng-template #someTemplate let-name let-age="age"> <p>{{ name }} - {{ age }}</p> </ng-template> </parent-component> ``` <figcaption>Snippet 2: ngTemplateOutlet usage</figcaption> In the code above, the child component will have a paragraph containing 'Joe - 42'. <b>Note</b> that for the name (`let-name`) we did not specify which property of the context object we had to use because the name was stored in the `$implicit` property. On the other hand, for the age (`let-age="age"`) we did specify the name of the property to use (in this case it was `age`). Well, enough with the definitions. Let's start coding. > The code that will be displayed in this article could be found [in this Github repository](https://github.com/TheAngularGuy/ngTemplateOutletTutorial) # Use case #1: Context-aware template Let's build a list component that takes two inputs from its parent: 1. data: A list of objects. 2. itemTemplate: a template that will be used to represent each element of the list. > run `ng new templateOutletTutorial --minimal` to generate a small Angular project to code along Let's generate the list component using the Angular schematics (`ng g c components/list`). Once that's done let's implement the component which will display every item of the data property (the inputted list). On every iteration of the `ng-for`, it will insert an embedded view that the parent component gives us in the itemTemplate property. While doing so, the component should attach a context object containing the current item. 
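Before wiring this up in Angular, it can help to see the outlet/context contract stripped of the framework entirely. In plain JavaScript, a template is just a function of a context object, `$implicit` supplies the default `let-` binding, and the outlet simply calls the template with the context. This is only a conceptual sketch of the idea, not how Angular implements it:

```javascript
// A "template" modelled as a function of a context object.
// `let-name` (no value) reads context.$implicit; `let-age="age"` reads context.age.
const personTemplate = context => `<p>${context.$implicit} - ${context.age}</p>`;

// The "outlet" instantiates the template with the provided context,
// mirroring [ngTemplateOutletContext]="{ $implicit: 'Joe', age: 42 }".
function renderOutlet(templateRef, context) {
  return templateRef(context);
}

console.log(renderOutlet(personTemplate, { $implicit: "Joe", age: 42 }));
// <p>Joe - 42</p>
```

The key point the sketch captures is that the child decides *when* and *with what data* the template runs, while the parent decides *what it looks like*.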
At the end the list component should look like this: ```typescript @Component({ selector: 'app-list', template: ` <ul class="list"> <li class="list-item" *ngFor="let item of data"> <ng-container [ngTemplateOutlet]="itemTemplate" [ngTemplateOutletContext]="{ $implicit: item }" ></ng-container> </li> </ul> `, styleUrls: ['list.component.scss'], changeDetection: ChangeDetectionStrategy.OnPush, }) export class ListComponent { @Input() data: any[]; @Input() itemTemplate: TemplateRef<HTMLElement>; // a template reference of a HTML element } ``` <figcaption>Snippet 3.1: List component implementation</figcaption> Then in the parent component, we need to call the list component with a list (of objects) and a template reference: ```html <app-list [itemTemplate]="customItemTemplate" [data]="[{ id: 4, name: 'Laptop', rating: 3 }, { id: 5, name: 'Phone', rating: 4 }, { id: 6, name: 'Mice', rating: 4 }]" > <ng-template #customItemTemplate let-item> <div style="display: flex; justify-content: space-between;"> <span> {{ item.id }} - <b>{{ item.name }}</b> </span> <mark> Stars: {{ item.rating }} </mark> </div> </ng-template> </app-list> ``` <figcaption>Snippet 3.2: Parent component template</figcaption> <b>Note</b> that we placed the ng-template (item template) inside the app-list component tags. This is only for readability, you could place the item template anywhere you want in the parent template. Also, I put some inline styles in the item template, but you could also give it a class and style it in the parent component style file. # Use case #2: Template overloading We saw how `ngTemplateOutlet` could help us to project context-aware templates, let's see another great use-case: template overloading. For this, we will build a card component that consists of two parts: 1. title: A title for the card. 2. content: The main content of the card. For the title, we will pass a simple string, and for the content, we can inject it using content projection. 
Let's do just that after creating the card component with the Angular schematics (`ng g c components/card`), the component should look like this: ```typescript @Component({ selector: 'app-card', template: ` <div class="card"> <header>{{ title }}</header> <article> <ng-content></ng-content> </article> </div> `, styleUrls: ['card.component.scss'], changeDetection: ChangeDetectionStrategy.OnPush, }) export class CardComponent { @Input() title: string; } ``` <figcaption>Snippet 4.1: Card component with a string *title*</figcaption> We call it in the parent component template: ```html <app-card [title]="'hello there'"> <p>i'm an awesome card.</p> </app-card> ``` <figcaption>Snippet 4.2: Parent component template with a string *title*</figcaption> Now let's say we want to put an image (`<img>`) in the title, or use another component in the title template. We would be stuck because the title property only takes a string. To solve this problem, we could implement a new behavior in our card component. We could say that the title could be a string or a TemplateRef. In case it is a string we will use string interpolation to bind it to the template, otherwise, we will use `ngTemplateOutlet`. 
After implementing the changes, the new card component should then look like this: ```typescript @Component({ selector: 'app-card', template: ` <div class="card"> <header *ngIf="isTitleAString(); else titleTemplateWrapper">{{ title }}</header> <ng-template #titleTemplateWrapper> <ng-container [ngTemplateOutlet]="title"></ng-container> </ng-template> <article> <ng-content></ng-content> </article> </div> `, styleUrls: ['card.component.scss'], changeDetection: ChangeDetectionStrategy.OnPush, }) export class CardComponent { @Input() title: string | TemplateRef<HTMLElement>; isTitleAString = () => typeof this.title == 'string'; } ``` <figcaption>Snippet 4.3: Card component with a string/TemplateRef *title*</figcaption> We call it in the parent component template like this: ```html <app-card [title]="title"> <ng-template #title> <h2>Hello there</h2> </ng-template> <p>i'm an awesome card.</p> </app-card> ``` <figcaption>Snippet 4.4: Parent component template with a TemplateRef *title*</figcaption> # Use case #3: Tree {% twitter 1253270445329170435 %} # Wrapping up ![Wrapping up gif](https://media.giphy.com/media/S6PJSHMftMyPK/giphy.gif) So, we saw what `ngTemplateOutlet` is and how we could take advantage of it. We saw 3 of the most common use-cases, but now that you know about this technique maybe you will find another great use-case! --- That's it for this post. I hope you liked it. If you did, please share it with your friends and colleagues. Also, you can follow me on Twitter at [@theAngularGuy](https://twitter.com/TheAngularGuy) as it would greatly help me. Have a good day! --- ### What to read next? {% link https://dev.to/mustapha/all-you-need-to-know-about-angular-animations-1c09 %}
mustapha
175,342
Why You Should Use Windows VPS Hosting for a Growing Website
A VPS or virtual private server hosting plan is responsible for providing the user site with own set...
0
2019-09-23T13:10:23
https://dev.to/rishabhsinha/why-you-should-use-windows-vps-hosting-for-a-growing-website-21j2
vpshosting
A VPS or virtual private server hosting plan is responsible for providing the user's site with its own set of resources, like an operating system, disk space, and bandwidth. This is a huge contrast compared to other hosting plans, where most of the resources are shared with other users present on the server. VPS hosting gives users a more optimized service for their website visitors, and this can be essential as the user looks to grow and expand his site. Here, in this article, I'll be sharing the top reasons that a website owner needs to consider when he decides to upgrade his website using Windows VPS hosting. **Why Use Windows VPS Hosting?** 1. Cost-Effective Solution As the business website grows with time, budgeting for the site becomes more and more challenging. The website owner needs to dedicate his time and money to growing his site. If he decides to invest in a shared hosting system, then this can prove to be a waste of money, as it is a cost-ineffective solution. Also, if the user is looking to avoid overspending while still getting a dedicated server for his site, then VPS hosting is the best available hosting solution. 2. No Sharing or Draining of Resources One of the major pitfalls of shared hosting is that the resources of the server are being accessed by multiple websites. If any other website encounters an abrupt spike in traffic, then the user's site might go down, with only limited resources available in hand. This would result in slower loading times, with a reduction in traffic and conversions. It is not an ideal solution for websites that are looking to expand, and it doesn't give a professional look and feel to the business website. Having VPS hosting gives website owners their own set of resources, so they are not at all affected by excessive resource usage by others. Thus, they can offer a faster, smoother experience to their visitors. 3. Higher Security VPS hosting is considered to be more secure than any shared hosting service available. This is mainly due to the fact that the apps and data present on the virtual server are fully isolated from other users on the server. The separated storage available on virtual servers means that it is very difficult for viruses and other infections to spread across users. 4. More Control Over the Site As a VPS hosting plan is free of other user accounts that use the same physical server, website owners have more control over their partition. Each user gets his own operating system along with full access to the available files and resources. VPS hosting plans are an incredibly simple upgrade for businesses that are growing. The website owner needs to pay for only what he uses.
rishabhsinha
175,366
Migrating to the cloud but without screwing it up, or how to move house
For an application that's ready to scale, not using managed cloud architecture these days is like ins...
0
2019-09-23T14:51:07
https://victoria.dev/verbose/migrating-to-the-cloud-but-without-screwing-it-up-or-how-to-move-house/
serverless, webdev, devops, beginners
--- title: Migrating to the cloud but without screwing it up, or how to move house published: true tags: ["serverless","webdev","devops","beginners"] canonical_url: https://victoria.dev/verbose/migrating-to-the-cloud-but-without-screwing-it-up-or-how-to-move-house/ cover_image: https://victoria.dev/verbose/migrating-to-the-cloud-but-without-screwing-it-up-or-how-to-move-house/cover_hu9516d7166362a6834fd7ca185e602fb0_827135_640x0_resize_box_2.png --- For an application that's ready to scale, not using managed cloud architecture these days is like insisting on digging your own well for water. It's far more labour-intensive, requires buying all your own equipment, takes a lot more time, and there's a higher chance you're going to get it wrong because you don't personally have a whole lot of experience digging wells, anyway. That said - let's just get this out of the way first - there is no cloud. It's just someone else's computer. Of course, these days, cloud services go far beyond the utility we'd expect from a single computer. Besides being able to quickly set up and utilize the kind of computing power that previously required a new office lease agreement to house, there are now a multitude of monitoring, management, and analysis tools at our giddy fingertips. While it's important to understand that the cloud isn't a better option in every case, for applications that can take advantage of it, we can do more, do it faster, and do it for less money than if we were to insist on building our own on-premises infrastructure. That's all great, and easily said; moving to the cloud, however, can look from the outset like a pretty daunting task. How, exactly, do we go about shifting what may be years of on-premises data and built-up systems to _someone else's computer?_ You know, without being able to see it, touch it, and without completely screwing up our stuff. 
While it probably takes less work and money than setting up or maintaining the same architecture on-premise, it does take some work to move to the cloud initially. It's important that our application is prepared to migrate, and capable of using the benefits of cloud services once it gets there. To accomplish this, and a smooth transition, preparation is key. In fact, it's a whole lot like moving to a new house. ![Cartoon](https://victoria.dev/verbose/migrating-to-the-cloud-but-without-screwing-it-up-or-how-to-move-house/cover_hu9516d7166362a6834fd7ca185e602fb0_827135_640x0_resize_box_2.png) In this article, we'll take a high-level look at the general stages of taking an on-premise or self-hosted application and moving it to the cloud. This guide is meant to serve as a starting point for designing the appropriate process for your particular situation, and to enable you to better understand the cloud migration process. While cloud migration may not be the best choice for some applications - such as ones without scalable architecture or where very high computing resources are needed - a majority of modular and modern applications stand to benefit from a move to the cloud. It's certainly possible, as I discovered at a recent event put on by [Amazon Web Services](https://aws.amazon.com/) (AWS) Solutions Architects, to migrate smoothly and efficiently, with near-zero loss of availability to customers. I'll specifically reference some services provided by AWS, however, similar functionality can be found with other cloud providers. I've found the offerings from AWS to be pleasantly modular in scope, which is why I use them myself and why they make good examples for discussing general concepts. To have our move go as smoothly as possible, here are the things we'll want to consider: 1. The type of move we're making; 2. The things we'll take, and the things we'll clean up; 3. How to choose the right type and size for the infrastructure we're moving into; and 4. 
How to do test runs to practice for the big day. ## The type of move we're making While it's important to understand why we're moving our application to cloud services, we should also have an idea of what we'd like it to look like when it gets there. There are three main ways to move to the cloud: re-host, re-platform, or re-factor. ### Re-host A re-host scenario is the most straightforward type of move. It involves no change to the way our application is built or how it runs. For example, if we currently have Python code, use PostgreSQL, and serve our application with Apache, a re-host move would mean we use all the same components, combined in just the same way, only now they're in the cloud. It's a lot like moving into a new house that has the exact same floor plan as the current one. All the furniture goes into the same room it's in now, and it's going to feel pretty familiar when we get there. The main draw of a re-host move is that it may offer the least amount of complication necessary in order to take advantage of going to the cloud. Scalable applications, for example, can gain the ability to automatically manage necessary application resources. While re-hosting makes scaling more automatic, it's important to note that it won't in itself make an application scalable. If the application infrastructure is not organized in such a way that gives it the ability to scale, a re-factor may be necessary instead. ### Re-platform If a component of our current application setup isn't working out well for us, we're probably going to want to re-platform. In this case, we're making a change to at least one component of our architecture; for example, switching our database from Oracle to MySQL on [Amazon Relational Database Service](https://aws.amazon.com/rds/) (RDS). Like moving from a small apartment in Tokyo to an equally small apartment in New York, a re-platform doesn't change the basic nature of our application, but does change its appearance and environment. 
In the database change example, we'll have all the same data, just organized or formatted a little differently. In most cases, we won't have to make these changes manually. A tool such as [Amazon Database Migration Service](https://aws.amazon.com/dms/) (DMS) can help to seamlessly shift our data over to the new database. We might re-platform in order to enable us to better meet a business demand in the future, such as scaling up, integrating with other technological components, or choosing a more modern technology stack. ### Re-factor A move in which we re-factor our application is necessarily more complicated than our other options, however, it may provide the most overall benefit for companies or applications that have reason to make this type of move. As with code, refactoring is done when fundamental changes need to be made in order for our application to meet a business need. The specifics necessarily differ case-by-case, but typically involve changes to architectural components or how those components relate to one another. This type of move may also involve changing application code in order to optimize the application's performance in a cloud environment. We can think of it like moving out from our parent's basement in the suburbs and getting a nice townhouse in the city. There's no way we're taking that ancient hand-me-down sofa, so we'll need some new furniture, and for our neighbour's sake, probably window dressings. Refactoring may enable us to modernize a dated application, or make it more efficient in general. With greater efficiency, we can better take advantage of services that cloud providers typically offer, like bursting resources or attaining deep analytical insight. If a re-factor is necessary but time is scarce, it may be better to re-host or re-platform first, then re-factor later. That way, we'll have a job well done later instead of a hasty, botched migration (and more problems) sooner. 
## What to take, and what to clean up Over the years of living in one place, stuff tends to pile up unnoticed in nooks and crannies. When moving house, it's usually a great opportunity to sort everything out and decide what is useful enough to keep, and what should be discarded or given away. Moving to the cloud is a similarly great opportunity to do the same when it comes to our application. While cloud storage is inexpensive nowadays, there may be some things that don't make sense to store any longer, or at least not keep stored with our primary application. If data cannot be discarded due to policy or regulations, we may choose a different storage class to house data that we don't expect to need anytime soon outside of our main application. In the case of [Amazon's Simple Storage Service](https://aws.amazon.com/s3/) (S3), we can choose to use different [storage classes](https://aws.amazon.com/s3/storage-classes/) that accomplish this goal. While the data that our business relies on every day can take advantage of the Standard class's 99.99% availability, data meant for long-term cold storage such as archival backups can be put into the Glacier class, which has longer retrieval time and lower cost. ## The right type and size Choosing the type and size of cloud infrastructure appropriate for our business is usually the part that can be the most confusing. How should we predict, in a new environment or for a growing company, the computing power we'll need? Part of the beauty of not procuring hardware on our own is that we won't have to make predictions like these. Using cloud storage and instances, expanding or scaling back resources can be done in a matter of minutes, sometimes seconds. With managed services, it can even be done automatically for us. With the proper support for scalability in our application, it's like having a magical house that instantly generates any type of room and amenity we need at that moment. 
The ability to continually ensure that we're using appropriate, cost-effective resources is at our fingertips, and often clearly visualized in charts and dashboards. For applications new to the cloud, some leeway for experimentation may be necessary. While cloud services enable us to quickly spin up and try out different architectures, there's no guarantee that all of those setups will work well for our application. For example, running a single instance may be [less expensive than going serverless](http://einaregilsson.com/serverless-15-percent-slower-and-eight-times-more-expensive/), but we'd be hard pressed to know this until we tried it out. As a starting point, we simply need enough storage and computing power to support the application as it is currently running, today. For example, in the case of storage, consider the size of the current database - the actual database data, not the total storage capacity of hardware on-premises. For a detailed cost exploration, AWS even offers a [Simple Monthly Calculator](https://calculator.s3.amazonaws.com/index.html) with use case samples to help guide expectations. ## Do test runs before the big day Running a trial cloud migration may be an odd concept, but it is an essential component to ensuring that the move goes as planned with minimal service interruption. Imagine the time and energy that would be saved in the moving house example if we could automate test runs! Invariably, some box or still-hung picture is forgotten and left out of the main truck, necessitating additional trips in other vehicles. With multiple chances to ensure we've got it down pat, we minimize the possibility that our move causes any break in normal day-to-day business. Generally, to do a test run, we create a duplicate version of our application. The more we can duplicate, the more thorough the test run will be, particularly if our data is large. 
Though duplication may seem tedious, working with the actual components we intend to migrate is essential to ensuring the migration goes as planned. After all, if we only did a moving-house test run with one box, it wouldn't be very representative. Test runs can help to validate our migration plan against any challenges we may encounter. These challenges might include: - Downtime restrictions; - Encrypting data in transit and immediately when at rest on the target; - Schema conversion to a new target schema (the [AWS Schema Conversion Tool](https://aws.amazon.com/dms/schema-conversion-tool/) can also help); - Access to databases, such as through firewalls or VPNs; - Developing a process to ensure that all the data successfully migrated, such as by using a hash function. Test runs also help to give us a more accurate picture of the overall time that a migration will take, as well as affording us the opportunity to fine-tune it. Factors that may affect the overall speed of a migration include: - The sizes of the source and target instances; - Available bandwidth for moving data; - Schema configurations; and - Transaction pressure on the source, such as changes to the data and the volume of incoming transactions. Once the duplicate application has been migrated via one or more [options](https://aws.amazon.com/cloud-data-migration/), we test the heck out of the application that's now running in the cloud to ensure it performs as expected. Ideally, on the big day, we'd follow this same general process to move up-to-date duplicate data, and then seamlessly point the "real" application or web address to the new location in the cloud. This means that our customers experience near-zero downtime; essentially, only the amount of time that the change in location-pointing would need to propagate to their device. 
In the case of very large or complex applications with many components or many teams working together at the same time, a more gradual approach may be more appropriate than the "Big Bang" approach, and may help to mitigate risk of any interruptions. This means migrating in stages, component by component, and running tests between stages to ensure that all parts of the application are communicating with each other as expected. ## Preparation is essential to a smooth migration I hope this article has enabled a more practical understanding of how cloud migration can be achieved. With thorough preparation, it's possible to take advantage of all the cloud has to offer, with minimal hassle to get there. My thanks to the AWS Solutions Architects who presented at Pop-Up Loft and shared their knowledge on these topics, in particular: Chandra Kapireddy, Stephen Moon, John Franklin, Michael Alpaugh, and Priyanka Mahankali. One last nugget of wisdom, courtesy of John: "Friends don't let friends use DMS to create schema objects."
victoria
175,440
Testing Jasmine Speed
I was told that a single IT within a DESCRIBE was an order of magnitude faster than multiple ITS within the same DESCRIBE. To say I was dubious would be an understatement.
2,154
2019-12-09T13:32:19
https://dev.to/rfornal/testing-jasmine-speed-38e9
jasmine, javascript, unittesting, performance
--- title: Testing Jasmine Speed published: true description: I was told that a single IT within a DESCRIBE was an order of magnitude faster than multiple ITS within the same DESCRIBE. To say I was dubious would be an understatement. series: Front-End Testing tags: Jasmine, JavaScript, Unit Testing, Performance cover_image: https://thepracticaldev.s3.amazonaws.com/i/u5lwn23ezqafdgz4jct7.png --- I was told, "a single it within a describe is an order of magnitude faster than multiple its within the same describe." I thanked the individual and walked away. Then, what he said to me sank in ... and, to say I was dubious would be an understatement. This was something I had to test ... The code supporting this article is [HERE](https://github.com/bob-fornal/jasmine-speed-tests). ## Initial Testing Initial Results: | Type | Random Run | Result | |------|------------|--------| | Multiple IT within DESCRIBE | TRUE | 4.294s | | Multiple IT within DESCRIBE | FALSE | 4.375s | | Single IT within many DESCRIBES | TRUE | 8.426s | | Single IT within many DESCRIBES | FALSE | 8.380s | The multiple IT within DESCRIBE code looks like this ... ```javascript describe('speed testing', function() { const maxRuns = 10000; function multipleIt() { it('something simple', function() { expect(true).toEqual(true); }); } describe('multiple its per describe', function() { describe('wrapper', function() { for (let i = 0, len = maxRuns; i < len; i++) { multipleIt(); } }); }); }); ``` ... and the single IT within many DESCRIBES code looks like this ... ```javascript describe('speed testing', function() { const maxRuns = 10000; function singleIt() { describe('single it run', function() { it('something simple', function() { expect(true).toEqual(true); }); }); } describe('single it per describe', function() { for (let i = 0, len = maxRuns; i < len; i++) { singleIt(); } }); }); ``` Then, I got to thinking that maybe the for-loop in the tests above was in some way impacting the test runs. 
So, I reworked the two test suites. I also added in testing that covered multiple nested DESCRIBE ... ## Non-Iterative Testing | Number of Tests | Type | Random Run | Result | |-----------------|------|------------|--------| | 1,000 | Multiple DESCRIBE | TRUE | 5.182s | | 1,000 | Multiple DESCRIBE | FALSE | 5.109s | | 1,000 | Multiple IT within DESCRIBE | TRUE | 0.534s | | 1,000 | Multiple IT within DESCRIBE | FALSE | 0.545s | | 1,000 | Single IT within many DESCRIBES | TRUE | 0.943s | | 1,000 | Single IT within many DESCRIBES | FALSE | 0.959s | The deeply nested DESCRIBE code looks like this ... ```javascript describe('wrapper 1', function() { describe('wrapper 2', function() { describe('wrapper 3', function() { describe('wrapper 4', function() { describe('wrapper 5', function() { describe('wrapper 6', function() { describe('wrapper 7', function() { describe('wrapper 8', function() { describe('wrapper 9', function() { describe('wrapper 10', function() { it('something simple', function() { expect(true).toEqual(true); }); }); }); }); }); }); }); }); }); }); }); ``` The multiple IT within DESCRIBE code looks like this ... ```javascript describe('speed testing', function() { describe('multiple its per describe 1000', function() { describe('wrapper', function() { it('something simple', function() { expect(true).toEqual(true); }); // ... 998 ITS like the one above (or below) it('something simple', function() { expect(true).toEqual(true); }); }); }); }); ``` ... and the single IT within many DESCRIBES code looks like ... ```javascript describe('speed testing', function() { describe('single it per describe 1000', function() { describe('single it run', function() { it('something simple', function() { expect(true).toEqual(true); }); }); // ... 998 DESCRIBES like the one above (or below) describe('single it run', function() { it('something simple', function() { expect(true).toEqual(true); }); }); }); }); ``` ## Conclusion ***Common sense won out*** ... 
The data here supports what common sense told me: that having multiple ITS within a single DESCRIBE is inherently faster within Jasmine than having a single IT within many DESCRIBE statements. Additionally, the slowest of the types of tests is the deeply nested DESCRIBE.
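For anyone who wants to repeat this kind of comparison, the measurement itself can be reduced to a tiny wall-clock harness. This is a generic sketch for timing any two functions in Node (it is not Jasmine-specific, and the toy workloads below are stand-ins, not real suites):

```javascript
// Time a function and report the elapsed wall-clock milliseconds.
function timeRun(label, fn) {
  const start = process.hrtime.bigint();
  fn();
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`${label}: ${elapsedMs.toFixed(3)}ms`);
  return elapsedMs;
}

// Toy stand-ins for "many ITs in one DESCRIBE" vs "one IT per DESCRIBE".
const flat = () => {
  const specs = [];
  for (let i = 0; i < 10000; i++) specs.push(() => true);
};
const wrapped = () => {
  const suites = [];
  for (let i = 0; i < 10000; i++) suites.push({ specs: [() => true] });
};

timeRun("flat registration", flat);
timeRun("wrapped registration", wrapped);
```

Absolute numbers will vary by machine; as with the tables above, it is the relative comparison across repeated runs that matters.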
rfornal