| column | dtype | min | max |
| --- | --- | --- | --- |
| id | int64 | 5 | 1.93M |
| title | stringlengths | 0 | 128 |
| description | stringlengths | 0 | 25.5k |
| collection_id | int64 | 0 | 28.1k |
| published_timestamp | timestamp[s] | | |
| canonical_url | stringlengths | 14 | 581 |
| tag_list | stringlengths | 0 | 120 |
| body_markdown | stringlengths | 0 | 716k |
| user_username | stringlengths | 2 | 30 |
220,272
List Comprehension in D
This post is a language comparison coming out of this great article on list comprehension. D does not...
2,209
2019-12-13T08:12:31
https://dev.to/jessekphillips/list-comprehension-in-d-4hpi
dlang, tutorial, ranges
This post is a language comparison coming out of this great article on list comprehension. D does not have list comprehension.

{% link https://dev.to/dvirtual/list-comprehensions-om4 %}

Since D can generally operate on ranges rather than allocated arrays, if you need an array just add `.array`. For more on arrays, review:

{% link https://dev.to/jessekphillips/slicing-and-dicing-arrays-5akg %}

My explanation will be limited to D-specific differences, but please ask for further details if something is not clear.

# List

```dlang
import std;

void main() {
    // arr = [i for i in range(10)]
    writeln(iota(10)); // [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
}
```

`iota` is an uncommon name, but it is the equivalent of `range` in Python.

# Dictionary

```dlang
// y = {i:v for i,v in enumerate(x)}
auto x = [2,45,21,45];
auto y = enumerate(x).assocArray;
writeln(y); // [0:2, 3:45, 2:21, 1:45]
```

As mentioned, D prefers unallocated range manipulation, which tends to mean no index. `enumerate` creates a tuple with a count and value, and `assocArray` takes a range of tuples to build an associative array (dictionary).

# Conditionals

```dlang
// arr = [i for i in range(10) if i % 2 == 0]
auto arr = iota(10)
    .filter!(i => i % 2 == 0);
writeln(arr); // [0, 2, 4, 6, 8]
```

```dlang
// arr = ["Even" if i % 2 == 0 else "Odd" for i in range(10)]
auto arr2 = iota(10)
    .map!(x => x % 2 ? "Odd" : "Even");
writeln(arr2); // ["Even", "Odd", "Even", "Odd", "Even", "Odd", "Even", "Odd", "Even", "Odd"]
```

# Nested for loop

```dlang
// arr = [[i for i in range(5)] for j in range(5)]
auto arr3 = iota(5)
    .map!(j => iota(5));
writeln(arr3); // [[0, 1, 2, 3, 4], [0, 1, 2, 3, 4], [0, 1, 2, 3, 4], [0, 1, 2, 3, 4], [0, 1, 2, 3, 4]]
```

```dlang
// arr = [(i,j) for j in range(2) for i in range(2)]
auto arr4 = iota(2).cartesianProduct(iota(2));
writeln(arr4); // [Tuple!(int, int)(0, 0), Tuple!(int, int)(0, 1), Tuple!(int, int)(1, 0), Tuple!(int, int)(1, 1)]
```

I find that D makes this behavior very clear.
Tuples being a library-provided type, their string representation is a little more verbose.

# Flatten 2D Array

```dlang
// arr = [i for j in x for i in j]
auto x2 = [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]];
auto arr5 = x2.joiner;
writeln(arr5); // [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

# Conclusion

D is a typed language; since I did not convert everything back into an array, a unique `arr` variable was needed for each example. Personally, I think D represents the behavior more clearly than Python's list comprehension. Even the conditional output selection, which was more concise in D, was represented reasonably. This does not touch on the other algorithms D provides that can be applied to ranges.

Python is praised for its clear syntax and readability; well, I must be spoiled, because most everything I see makes me cringe.
jessekphillips
220,334
Angular counter directive
https://stackblitz.com/edit/angular-p9xny6?file=src%2Fapp%2Fapp.component.ts
0
2019-12-13T09:16:52
https://dev.to/anhdung11cdt2/angular-counter-directive-1j2h
https://stackblitz.com/edit/angular-p9xny6?file=src%2Fapp%2Fapp.component.ts
anhdung11cdt2
220,345
Guessing all of the passwords! Advent of Code 2019 - Day 4
JavaScript walkthrough of Advent of Code 2019 (day 4)
3,910
2019-12-13T09:57:07
https://dev.to/thibpat/guessing-all-of-the-passwords-advent-of-code-2019-day-4-36i5
challenge, adventofcode, javascript, video
---
title: Guessing all of the passwords! Advent of Code 2019 - Day 4
published: true
description: JavaScript walkthrough of Advent of Code 2019 (day 4)
tags: challenge, AdventOfCode, javascript, video
series: Advent of Code 2019 in Javascript
---

{% youtube 8ruAKdZf9fY %}
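The walkthrough itself is in the video above. For reference, the day-4 validity rules (going by the published part-one puzzle statement: six digits, digits never decrease left to right, and at least one pair of adjacent identical digits) can be sketched as:

```javascript
// Check a candidate password against the day-4 (part one) rules.
function isValid(password) {
  const s = String(password);
  let hasPair = false;
  for (let i = 1; i < s.length; i++) {
    if (s[i] < s[i - 1]) return false;     // digits must never decrease
    if (s[i] === s[i - 1]) hasPair = true; // need at least one adjacent pair
  }
  return hasPair;
}

console.log(isValid(111111)); // true
console.log(isValid(223450)); // false (5 -> 0 decreases)
console.log(isValid(123789)); // false (no adjacent pair)
```

The three sample passwords are the examples given in the puzzle statement itself.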
thibpat
220,354
Check Object equality in javascript
Check whether two objects are equal or not in javascript function isDeepEqual(obj1, obj2, testPro...
0
2019-12-13T10:19:08
https://dev.to/isamrish/check-object-equality-in-javascript-ph8
javascript, objects
---
title: Check Object equality in javascript
published: true
description:
tags: javascript, objects, js
---

Check whether two objects are equal or not in javascript:

```
function isDeepEqual(obj1, obj2, testPrototypes = false) {
  if (obj1 === obj2) {
    return true
  }

  if (typeof obj1 === "function" && typeof obj2 === "function") {
    return obj1.toString() === obj2.toString()
  }

  if (obj1 instanceof Date && obj2 instanceof Date) {
    return obj1.getTime() === obj2.getTime()
  }

  if (
    Object.prototype.toString.call(obj1) !==
      Object.prototype.toString.call(obj2) ||
    typeof obj1 !== "object"
  ) {
    return false
  }

  const prototypesAreEqual = testPrototypes
    ? isDeepEqual(
        Object.getPrototypeOf(obj1),
        Object.getPrototypeOf(obj2),
        true
      )
    : true

  const obj1Props = Object.getOwnPropertyNames(obj1)
  const obj2Props = Object.getOwnPropertyNames(obj2)

  return (
    obj1Props.length === obj2Props.length &&
    prototypesAreEqual &&
    obj1Props.every(prop => isDeepEqual(obj1[prop], obj2[prop]))
  )
}
```
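For a quick sanity check of the semantics, here is a trimmed-down standalone rewrite (plain objects, arrays, and dates only; the function-comparison and `testPrototypes` branches of the full version above are omitted):

```javascript
// Trimmed-down deep-equality check, repeated here so the snippet runs standalone.
function isDeepEqual(a, b) {
  if (a === b) return true;
  if (a instanceof Date && b instanceof Date) return a.getTime() === b.getTime();
  if (typeof a !== "object" || a === null || b === null ||
      Object.prototype.toString.call(a) !== Object.prototype.toString.call(b)) {
    return false;
  }
  // compare own property names recursively
  const aProps = Object.getOwnPropertyNames(a);
  const bProps = Object.getOwnPropertyNames(b);
  return aProps.length === bProps.length &&
         aProps.every(k => isDeepEqual(a[k], b[k]));
}

console.log(isDeepEqual({ a: 1, b: [1, 2] }, { a: 1, b: [1, 2] })); // true
console.log(isDeepEqual({ a: 1 }, { a: "1" }));                     // false
console.log(isDeepEqual(new Date(0), new Date(0)));                 // true
```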
isamrish
220,361
Connecting to ODBC databases from Python with pyodbc
Steps to connect to ODBC database in Python with pyodbc module Import the pyodbc module and create...
0
2019-12-13T10:37:43
https://dev.to/andreasneuman/connecting-to-odbc-databases-from-python-with-pyodbc-1p5j
odbc, software, python, driver
Steps to connect to an ODBC database in Python with the pyodbc module:

1. Import the pyodbc module and create a connection to the database.
2. Execute an INSERT statement to test the connection to the database.
3. Retrieve a result set from a query, iterate over it, and print out all records.

Key features:

- Direct mode
- SQL data type mapping
- Secure connection
- ANSI SQL-92 standard support for cloud services

Learn more at [https://www.devart.com/odbc/python/](https://www.devart.com/odbc/python/)
andreasneuman
220,482
How to build a Twitter bot with NodeJs
twitter bot tutorial
0
2019-12-13T14:37:49
https://dev.to/codesource/how-to-build-a-twitter-bot-with-nodejs-1nlh
node
---
title: How to build a Twitter bot with NodeJs
published: true
description: twitter bot tutorial
tags: Nodejs
---

Building a Twitter bot using their [API](https://codesource.io/how-to-consume-restful-apis-with-axios/) is one of the fundamental applications of the Twitter API. To build a Twitter bot with Nodejs, you’ll need to take these steps below before proceeding:

- Create a new account for the bot.
- Apply for API access at developer.twitter.com.
- Ensure you have NodeJS and NPM installed on your machine.

We’ll be building a Twitter bot with Nodejs to track a specific hashtag, then like and retweet every post containing that hashtag.

## Getting up and running

Firstly you’ll need to initialize your node app by running `npm init` and filling in the required parameters. Next, we install Twit, an NPM package that makes it easy to interact with the Twitter API.

~~~bash
$ npm install twit --save
~~~

Now, go to your Twitter developer dashboard to create a new app so you can obtain the consumer key, consumer secret, access token key and access token secret. After that, you need to set up these keys as environment variables to use in the app.

## Building the bot

Now in the app’s entry file, initialize Twit with the secret keys from your Twitter app.

~~~js
// index.js
const Twit = require('twit');

const T = new Twit({
  consumer_key: process.env.APPLICATION_CONSUMER_KEY_HERE,
  consumer_secret: process.env.APPLICATION_CONSUMER_SECRET_HERE,
  access_token: process.env.ACCESS_TOKEN_HERE,
  access_token_secret: process.env.ACCESS_TOKEN_SECRET_HERE
});
~~~

### Listening for events

Twitter’s streaming API gives access to two streams: the user stream and the public stream. We’ll be using the public stream, which is a stream of all public tweets; you can read more on them in the documentation. We’re going to be tracking a keyword from the stream of public tweets, so the bot is going to track tweets that contain “#JavaScript” (not case sensitive).
### Tracking keywords

~~~js
// index.js
const Twit = require('twit');

const T = new Twit({
  consumer_key: process.env.APPLICATION_CONSUMER_KEY_HERE,
  consumer_secret: process.env.APPLICATION_CONSUMER_SECRET_HERE,
  access_token: process.env.ACCESS_TOKEN_HERE,
  access_token_secret: process.env.ACCESS_TOKEN_SECRET_HERE
});

// start stream and track tweets
const stream = T.stream('statuses/filter', {track: '#JavaScript'});

// event handler
stream.on('tweet', tweet => {
  // perform some action here
});
~~~

### Responding to events

Now that we’ve been able to track keywords, we can perform some magic with tweets that contain those keywords in our event handler function. The Twitter API allows interacting with the platform as you would normally: you can create new tweets, like, retweet, reply, follow, delete and more. We’re going to be using only two functionalities, like and retweet.

~~~js
// index.js
const Twit = require('twit');

const T = new Twit({
  consumer_key: APPLICATION_CONSUMER_KEY_HERE,
  consumer_secret: APPLICATION_CONSUMER_SECRET_HERE,
  access_token: ACCESS_TOKEN_HERE,
  access_token_secret: ACCESS_TOKEN_SECRET_HERE
});

// start stream and track tweets
const stream = T.stream('statuses/filter', {track: '#JavaScript'});

// use this to log errors from requests
function responseCallback (err, data, response) {
  console.log(err);
}

// event handler
stream.on('tweet', tweet => {
  // retweet
  T.post('statuses/retweet/:id', {id: tweet.id_str}, responseCallback);
  // like
  T.post('favorites/create', {id: tweet.id_str}, responseCallback);
});
~~~

### Retweet

To retweet, we simply post to the `statuses/retweet/:id` endpoint, passing in an object which contains the id of the tweet. The third argument is a callback function that gets called after a response is sent; though optional, it is still a good idea so you get notified when an error comes in.
### Like

To like a tweet, we send a post request to the `favorites/create` endpoint, also passing in the object with the id and an optional callback function.

### Deployment

Now the bot is ready to be deployed. I use Heroku to deploy node apps, so I’ll give a brief walkthrough below.

Firstly, you need to download the Heroku CLI tool; here’s the documentation. The tool requires git in order to deploy. There are other ways, but deployment from git seems easier; here’s the documentation.

There’s a feature in Heroku where your app goes to sleep after some time of inactivity. This may be seen as a bug to some; see the fix here.

You can read more in the Twitter documentation to build larger apps; it has all the information you need to know.

Here is the [source code](https://github.com/Dunebook/js-bot) in case you might be interested.

Source - [CodeSource.io](https://codesource.io/how-to-build-a-twitter-bot-with-nodejs/)
codesource
220,554
smart contract machine learning DEV
good morning , i want to implement a new smart contract for clustering data over blockchain. about...
0
2019-12-13T17:10:12
https://dev.to/gaviotamina/smart-contract-machine-learning-dev-52jk
Good morning, I want to implement a new smart contract for clustering data over a blockchain. For the clustering algorithm, I want to start with a simple one, K-means for example. The smart contract gets its data from an IPFS node. What architecture can my smart contract have, and what should I start with? Regards.
gaviotamina
220,564
How to test if an element is in the viewport
This article was originally posted on pelumicodes.com There can be several scenarios that may requir...
0
2019-12-13T17:41:52
https://pelumicodes.com/post/how-to-test-if-an-element-is-in-the-viewport/
jquery, javascript, angular, browser
This article was originally posted on [pelumicodes.com](https://pelumicodes.com)

There can be several scenarios that may require you to determine whether an element is currently in the viewport of your browser; you might want to implement lazy loading, or animate divs whenever they show up on the user's screen.

> We will not be using jQuery's `:visible` selector, because it selects elements based on the `display` CSS property or opacity.

To do this with jQuery, we first have to define a function `isInViewport` that checks if the element is in the browser's viewport:

```javascript
$.fn.isInViewport = function() {
  var elementTop = $(this).offset().top;
  var elementBottom = elementTop + $(this).outerHeight();

  var viewportTop = $(window).scrollTop();
  var viewportBottom = viewportTop + $(window).height();

  return elementBottom > viewportTop && elementTop < viewportBottom;
};

$(window).on('resize scroll', function() {
  if ($('.foo').isInViewport()) {
    // code here
  }
});
```

The `elementTop` returns the top position of the element; `offset().top` returns the distance of the current element relative to the top of the [offsetParent](https://developer.mozilla.org/en-US/docs/Web/API/HTMLelement/offsetParent) node. To get the bottom position of the element we add the height of the element to `offset().top`. The [outerHeight](https://api.jquery.com/outerheight/) allows us to find the height of the element including the border and padding.

The `viewportTop` returns the top of the viewport, i.e. the current scroll position of the window. To get the `viewportBottom` we add the height of the window to the `viewportTop`.

The `isInViewport` function returns `true` if the element is in the viewport, so you can run whatever code you want if the condition is met.

***

Plugins that do the job for you:

* [Vanilla JS helper function](https://vanillajstoolkit.com/helpers/isinviewport/)
* https://github.com/customd/jquery-visible
* http://www.appelsiini.net/projects/viewport

cc: [stackoverflow answer](https://stackoverflow.com/questions/20791374/jquery-check-if-element-is-visible-in-viewport/33979503#33979503)
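The heart of the jQuery check above is a pure interval-overlap test, which can be isolated into a framework-free function (the function and argument names here are illustrative, not part of the article's code):

```javascript
// An element overlaps the viewport when its bottom is below the viewport's top
// and its top is above the viewport's bottom -- the same condition as above.
function overlapsViewport(elementTop, elementBottom, viewportTop, viewportBottom) {
  return elementBottom > viewportTop && elementTop < viewportBottom;
}

console.log(overlapsViewport(100, 200, 0, 800));  // true: fully inside the viewport
console.log(overlapsViewport(900, 1000, 0, 800)); // false: below the fold
console.log(overlapsViewport(-50, 10, 0, 800));   // true: partially visible at the top
```

Because the function is pure, it can be unit-tested without a browser, unlike the jQuery plugin form.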
pelumicodes
220,596
Three Small Relational Efforts That Have Big Impacts
The interactions we have with other people help form the environment we work in. If we treat people w...
0
2019-12-13T20:35:18
https://dev.to/thebuffed/three-small-relational-efforts-that-have-big-impacts-5do4
career
The interactions we have with other people help form the environment we work in. If we treat people with respect, care, and interest, the environment will inevitably become an attraction. There are a few things I try to do on my quest to be a better human to help others feel comfortable and cared for. I know these things will not resonate with everyone, but I have experienced better relationships with my peers due in part to these habits.

## 1. Physically take note of things people mention

When someone mentions something about themselves, I try to *write it down*. It's okay to study your relationships with others, and taking notes will make sure that you can follow up on their interests. For example, if someone at work mentions they're leaving early on Friday because their daughter has a soccer game, I'll make a note and ask about it the following week.

## 2. Make sure everyone has a direct opportunity to speak

The opportunity part of this is important, because some people are more comfortable sitting back and listening, myself included. The issue is that some people inadvertently get pushed out of the conversation without a chance to get back in. Be direct and supportive. If someone gets interrupted, address the original speaker immediately after and ask them what they were saying. If a story causes someone else to go down a tangent, make sure to loop back around to the original storyteller. If someone simply hasn't spoken in a while, try to steer the conversation to a place they'll feel more comfortable.

## 3. Show more excitement than expertise

If someone shares information with you about a hobby or activity, give them your support in the form of excitement and questions rather than advice. Not every conversation needs to be a competition on who knows more about what. Even if you do know more, just enjoy the fact that you have something in common without intimidating them or belittling their interests and accomplishments.
Hopefully these short tips are helpful for building better relationships. They are subtle things but can be really powerful when they are applied with a mentality of care.
thebuffed
220,756
Install Ghost with Caddy on Ubuntu
TL; DR Get a server with Ubuntu up and running Set up a non-root user and add it to...
0
2019-12-14T05:08:23
https://dev.to/alexkuang0/install-ghost-with-caddy-on-ubuntu-3flf
ubuntu, node
## TL; DR

1. Get a server with Ubuntu up and running
2. Set up a non-root user and add it to the superuser group
3. Install MySQL and Node.js
4. Install Ghost-CLI and start
5. Install Caddy as a service and write a simple Caddyfile
6. Get everything up and running!

## Foreword (Feel-Free-to-Ignore-My-Nonsense™️)

This article is about how I built **this** blog with Ghost, an open-source blog platform based on Node.js.

I used to use WordPress and static site generators like Hexo and Jekyll for my blog, but they turned out to be either too heavy or too light. Ghost seems like a perfect balance between them. It's open source; it's elegant out of the box. Zero configuration is required, yet it's configurable from top to bottom.

The Ghost project is actually very well documented: it has a decent [official installation guide](https://ghost.org/docs/install/ubuntu) for Ubuntu with Nginx. But as you can see from the title of this article, I am going to ship it together with my favorite web server, Caddy! It's a lightweight, easy-to-configure yet powerful web server. To someone like me, who hates to write or read either Nginx `conf` files or Apache `.htaccess` files, Caddy is like an oasis in the desert of tedious web server configuration.

Web technologies change rapidly, especially for open-source projects like Ghost and Caddy. From my observation, I would say neither Ghost nor Caddy is going to be backward compatible, meaning a newer version of the software may not work as expected in an older environment. So I **recommend** that you always check whether this tutorial is outdated or deprecated before moving on. You can go to their official websites by clicking their names in the next section. Also, if you are running the application in production, use a fixed version, preferably with LTS (*long-term support*).
## Environments and Software

- [Ubuntu 18.04.3 LTS](https://ubuntu.com/download/server)
- [Node.js v10.17.0 LTS](https://nodejs.org/en/about/releases/) (the highest version Ghost supports as of Dec. 2019)
- [Caddy 1](https://caddyserver.com/v1/) (**NOT** Caddy 2, which is still in **beta** as of Dec. 2019)
- MySQL 5.7 (It's gonna consume **A LOT** of memory! Use a lower version if you're running on a server with <1GB RAM.)

Let's get started! 👨‍💻👩‍💻

## Step 1: Get a server

Get a server with Ubuntu up and running! Almost every cloud hosting company provides an Ubuntu 18.04 LTS image as of now.

## Step 2: Set up a non-root superuser

```bash
# connect to the server with root credentials
ssh root@<server_ip> -p <ssh_port> # Default port: 22

# create a new user
adduser <username>

# add that user to the superuser group
usermod -aG sudo <username>

# log in as the new user
su <username>
```

## Step 3: Install MySQL and set it up

```bash
sudo apt update
sudo apt upgrade

# install MySQL
sudo apt install mysql-server

# Set up the MySQL password; it's required on Ubuntu 18.04!
sudo mysql

# Replace 'password' with your password, but keep the quote marks!
ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'password';

# Then exit MySQL
quit
```

## Step 4: Install Node.js

### Method 1: Use apt / apt-get

```bash
curl -sL https://deb.nodesource.com/setup_10.x | sudo -E bash
sudo apt-get install -y nodejs
```

### Method 2: Use nvm (Node Version Manager) to facilitate switching between Node.js versions

```bash
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.1/install.sh | bash
```

The script clones the nvm repository to `~/.nvm`, and adds the source lines from the snippet below to your profile (`~/.bash_profile`, `~/.zshrc`, `~/.profile`, or `~/.bashrc`):

```bash
export NVM_DIR="$([ -z "${XDG_CONFIG_HOME-}" ] && printf %s "${HOME}/.nvm" || printf %s "${XDG_CONFIG_HOME}/nvm")"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
```

Install Node.js v10.17.0:

```bash
# source your profile
source ~/.bash_profile # change to your profile

# check if nvm is properly installed
command -v nvm # output will be `nvm` if it is

nvm install v10.17.0
```

## Step 5: Install Ghost-CLI

### Method 1: Use npm

```bash
sudo npm install ghost-cli@latest -g
```

### Method 2: Use yarn

```bash
# install yarn if you don't have it
curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -
echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list
sudo apt update && sudo apt install yarn

sudo yarn global add ghost-cli@latest
```

## Step 6: Get Ghost up and running

```bash
sudo mkdir -p /var/www/ghost
sudo chown <username>:<username> /var/www/ghost
sudo chmod 775 /var/www/ghost
cd /var/www/ghost
ghost install
```

### Install questions

During install, the CLI will ask a number of questions to configure your site. It will probably throw an error or two about Nginx not being installed; just ignore that.

#### Blog URL

Enter the exact URL your publication will be available at, including the protocol for HTTP or HTTPS. For example, `https://example.com`.

#### MySQL hostname

This determines where your MySQL database can be accessed from. When MySQL is installed on the same server, use `localhost` (press Enter to use the default value). If MySQL is installed on another server, enter the name manually.

#### MySQL username / password

If you already have an existing MySQL database, enter the username. Otherwise, enter `root`. Then supply the password for your user.

#### Ghost database name

Enter the name of your database. It will be automatically set up for you, unless you're using a **non**-root MySQL user/pass. In that case the database must already exist and have the correct permissions.

#### Set up a ghost MySQL user?

(Recommended) If you provided your root MySQL user, Ghost-CLI can create a custom MySQL user that can only access/edit your new Ghost database and nothing else.

#### Set up systemd?

(Recommended) `systemd` is the recommended process manager tool to keep Ghost running smoothly. We recommend choosing `yes`, but it's possible to set up your own process management.

#### Start Ghost?

Choosing `yes` runs Ghost on the default port `2368`.

## Step 7: Get Caddy up and running

Caddy has an awesome collection of plugins. Go to the [Download Caddy](https://caddyserver.com/v1/download) page. First, select the correct platform; then add the plugins you're interested in. After that, don't click `Download`; instead, copy the link in the `Direct link to download` section, and head back to your ssh terminal.

![](https://blog.alex0.dev/content/images/2019/12/Screenshot_2019-12-13_03-17-32.png)

```bash
mkdir -p ~/Downloads
cd ~/Downloads

# download the caddy binary; the link may differ if you added plugins
# (quote the URL so the shell does not treat `&` as a control operator)
curl "https://caddyserver.com/download/linux/amd64?license=personal&telemetry=off" --output caddy

sudo cp ./caddy /usr/local/bin
sudo chown root:root /usr/local/bin/caddy
sudo chmod 755 /usr/local/bin/caddy
sudo setcap 'cap_net_bind_service=+ep' /usr/local/bin/caddy

sudo mkdir /etc/caddy
sudo chown -R root:root /etc/caddy
sudo mkdir /etc/ssl/caddy
sudo chown -R root:<username> /etc/ssl/caddy
sudo chmod 770 /etc/ssl/caddy
```

### Run Caddy as a service

```bash
wget https://raw.githubusercontent.com/caddyserver/caddy/master/dist/init/linux-systemd/caddy.service
sudo cp caddy.service /etc/systemd/system/
sudo chown root:root /etc/systemd/system/caddy.service
sudo chmod 644 /etc/systemd/system/caddy.service
sudo systemctl daemon-reload
sudo systemctl start caddy.service
sudo systemctl enable caddy.service
```

### Create `Caddyfile`

```bash
sudo touch /etc/caddy/Caddyfile
sudo chown root:root /etc/caddy/Caddyfile
sudo chmod 644 /etc/caddy/Caddyfile

# edit the Caddyfile with your preferred editor; here I use vi
sudo vi /etc/caddy/Caddyfile
```

We're gonna set up a simple reverse proxy to Ghost's port (2368). Here are two sample `Caddyfile`s, for Auto SSL enabled and disabled respectively.

```bash
# auto ssl
example.com, www.example.com {
  proxy / 127.0.0.1:2368
  tls admin@example.com
}

# no auto ssl
http://example.com, http://www.example.com {
  proxy / 127.0.0.1:2368
}
```

If you want Auto SSL issued by Let's Encrypt, put your email after the `tls` directive on the 3rd line; otherwise, use the second part of this `Caddyfile`. (For me, I was using Cloudflare's flexible SSL mode, so I just built a reverse proxy over plain HTTP here.)

### Fire it up 🔥

```bash
sudo systemctl start caddy.service
```

## References

- [https://ghost.org/docs/install/ubuntu](https://ghost.org/docs/install/ubuntu/#overview)
- [https://github.com/caddyserver/caddy/tree/master/dist/init/linux-systemd](https://github.com/caddyserver/caddy/tree/master/dist/init/linux-systemd)
- [https://github.com/nvm-sh/nvm](https://github.com/nvm-sh/nvm)
alexkuang0
220,769
Do Stacked-PRs require re-review after merge?
Is the sum of two approved PR's also an approved PR?
0
2019-12-14T06:06:17
https://dev.to/jlouzado/do-stacked-prs-require-re-review-after-merge-522f
ask, help, git, codereview
---
title: Do Stacked-PRs require re-review after merge?
published: true
description: Is the sum of two approved PR's also an approved PR?
tags: ask, help, git, code-review
---

## Scenario

- I'm working on a big feature that I know can be broken into two parts; these parts are, as is often the case, dependent on each other.
- I branch off `master` and create branch `feat/part1`.
- I finish coding part 1, and create a PR with this branch:
  - `feat/part1` -> `master`
  - this is `PR-1`
- While I'm still checked out in `feat/part1`, I create a new branch `feat/part2`.
- I finish coding part 2 and create a second, "stacked" PR:
  - `feat/part2` -> `feat/part1`
  - this is `PR-2`

Let's say both PRs get reviewed and approved. I now merge PR-2.

- PR-1 now contains all the changes.

## Question

- Can I assume that PR-1 is now approved and merge-able, or do I need to re-review it?

## Background

- As for why someone would do this: this technique is a way to break large changes up into more manageable chunks.
- That way the rest of the team can give feedback more easily.
- You can read more about it here: [Stacked PRs To Keep Github Diffs Small | graysonkoonce.com](https://graysonkoonce.com/stacked-pull-requests-keeping-github-diffs-small/)
jlouzado
220,778
Answer: Sonarqube 7.9.1 community troubleshooting
answer re: Sonarqube 7.9.1 community...
0
2019-12-14T06:31:47
https://dev.to/ftechnix/answer-sonarqube-7-9-1-community-troubleshooting-bm6
{% stackoverflow 58045999 %}
ftechnix
220,784
Laravel 6 REST API with Passport Tutorial with Ecommerce Project
Rest Api Development In this tutorial, we’ll explore the ways you can build—and test—a robust API usi...
0
2019-12-14T07:20:48
https://dev.to/techmahedy/laravel-6-rest-api-with-passport-tutorial-with-ecommerce-project-1965
restapi, laravel, laravelpassport
REST API Development

In this tutorial, we'll explore the ways you can build, and test, a robust API using Laravel. We'll be using Laravel 6, and all of the code is available for reference on GitHub.

We are going to develop an ecommerce REST API with a product table and a review table. Using those two tables, we will build our ecommerce REST API.

RESTful APIs

First, we need to understand what exactly is considered a RESTful API. REST stands for REpresentational State Transfer and is an architectural style for network communication between applications, which relies on a stateless protocol (usually HTTP) for interaction.

https://codechief.org/article/laravel-6-rest-api-with-passport-tutorial-with-ecommerce-project
techmahedy
220,795
From old PHP/MySQL to the world's most modern web app stack with Hasura and GraphQL
This is the history of Nhost. Ever since 2007, I have been into programming and web development. Bac...
0
2019-12-14T07:59:03
https://blog.nhost.io/from-old-php-mysql-to-the-worlds-most-modern-web-app-stack-with-hasura-and-graphql/
graphql, postgres, react, productivity
This is the history of [Nhost](https://nhost.io/).

Ever since 2007, I have been into programming and web development. Back then it was all PHP and MySQL websites, and everything was great fun!

Around 2013, SPAs ([Single Page Applications](https://en.wikipedia.org/wiki/Single-page_application)) started to emerge. Instead of letting your web server render the whole page, the backend just provided data (as [JSON](https://www.json.org/), for example) to your front end. Your front end then had to take care of rendering your website with the data from the back end. And I wanted to learn more!

I went through multiple frameworks, like [MeteorJS](https://www.meteor.com/) and [Firebase](https://firebase.google.com/). I did not feel comfortable with the NoSQL databases that these projects were based on. In retrospect, I am really happy I did not jump on the NoSQL hype train.

I also built a large enterprise project using React & Redux with a regular REST backend. The developer experience was somewhat OK. You could still use a SQL database and provide a REST API or a GraphQL API to your front end. That is an OK approach: no NoSQL, which is good, but no real-time, which is bad.

By November 2018 I was about to rebuild a CRM/business system from PHP/MySQL into a modern SPA web app. At this time, I decided I would do it with React & Redux, with a MySQL database and a REST API. This was pretty much standard at the time.

Then something happened. I was about to create a VPS at DigitalOcean for my new database and REST API. For no obvious reason, I clicked on the "marketplace" tab, where something drew my attention.

![](https://blog.nhost.io/content/images/2019/12/do-marketplace.png)

GraphQL? A lambda sign? This looks interesting. Let's start a Hasura Droplet and see what it is!
**60 minutes later my jaw was on the floor.**

**This is amazing! This is it!**

![](https://blog.nhost.io/content/images/2019/12/photo-1490730141103-6cac27aaab94.jpeg)

Hasura comes with:

* PostgreSQL (relational database)
* GraphQL
* Real-Time
* Access control
* Blazing Fast™

I could not ask for more! I was so enthusiastic about Hasura that I called an emergency meeting for all developers in my co-working office ([DoSpace CoWorking](https://www.dospace.se/)).

{% twitter 1068145251267895300 %}

Now, Hasura is great and everything, but... what about auth and storage for your app?

## **Auth and Storage**

Hasura is great at handling your data and your API. But Hasura does not care how you handle authentication or storage.

![](https://blog.nhost.io/content/images/2019/12/auth-storage.png)

> With Hasura, you need to handle Auth and Storage yourself.

### **Auth**

When it comes to authentication, Hasura recommends that you use some other auth service like [Auth0](https://auth0.com/) or [Firebase Auth](https://firebase.google.com/docs/auth). I do not like any of those solutions 100%. I like to have full control over my users and not rely on third-party services.

### **Storage**

For storage, there is no recommended solution from Hasura.

So... I decided to build my own auth and storage backend for Hasura.

## **Hasura-Backend-Plus**

I built [Hasura Backend Plus (HB+)](https://github.com/elitan/hasura-backend-plus). Hasura Backend Plus provides auth and storage for any Hasura project.

![](https://blog.nhost.io/content/images/2019/12/logo.png)

# **Visiting Hasura in Bangalore, India**

I was helping out Hasura a bit during late 2018/early 2019. I gave small local talks about Hasura. I created Hasura Backend Plus. I was active in their Discord server helping other developers. Because of this, I got the chance to visit the Hasura team in Bangalore. They were hosting the very first [GraphQL Asia](https://www.graphql-asia.org/) and I was invited. And off I went!
{% twitter 1117332824757948417 %}

# **Back to nhost.io**

[nhost.io](https://nhost.io) helps every developer with the quick deployment of Hasura and Hasura-Backend-Plus. Get your next web project going with the world's most modern web stack:

* PostgreSQL
* GraphQL
* Real-Time Subscriptions (just like Firebase)
* Authentication
* Storage

[Get started with nhost.io](https://console.nhost.io/)!
elitan
220,835
Chrome Dev Summit 2019: Everything you need to know
"As the largest open ecosystem in history, the Web is a tremendous utility, with more than 1.5B...
0
2019-12-14T10:07:30
https://www.ghosh.dev/posts/chrome-dev-summit-2019-everything-you-need-to-know/
webdev, webperf, techtalks, webplatform
---
title: "Chrome Dev Summit 2019: Everything you need to know"
published: true
date: 2019-12-14 00:00:00 UTC
tags: webdev, webperf, techtalks, webplatform
canonical_url: https://www.ghosh.dev/posts/chrome-dev-summit-2019-everything-you-need-to-know/
cover_image: https://www.ghosh.dev/static/media/chrome-dev-summit-2019.jpg
---

> "As the **largest open ecosystem** in history, the Web is a tremendous utility, with more than 1.5B active websites on the Internet today, serving nearly 4.5B web users across the world. This kind of diversity (geography, device, content, and more) can only be facilitated by the **open web platform**."
>
> <footer><cite>- from <a href="https://blog.chromium.org/2019/11/chrome-dev-summit-2019-elevating-web.html" target="_blank">blog.chromium.org</a></cite></footer>

This would be the uber pitch for [Chrome Dev Summit](https://developer.chrome.com/devsummit/) this year: **elevating the web platform**.

Last month I was in San Francisco for CDS 2019, my first time attending the conference in person. For those of you who couldn't attend CDS this year or haven't yet gotten around to watching all the sessions on their [YouTube channel](https://www.youtube.com/watch?v=F1UP7wRCPH8&list=PLNYkxOF6rcIDA1uGhqy45bqlul0VcvKMr), here are my notes and thoughts about almost everything that was announced that you need to know!

Almost everything, you say? Well then, buckle up, for this is going to be a long article!

# Bridging the "app gap"

The web is surely a powerful platform.
But there's still so much that native apps can do that web apps today can not. Think about even some simple things that naturally come to mind when picturing installed native applications on your phones or computers: accessing contacts or working directly on files on your device, background syncs at regular intervals, system-level cross-app sharing capabilities and such.

This is what we call the "native app gap". And we're betting that there's a real need for closing it.

> In their natural progression, browsers have become viable, full-fledged application runtimes.

Over the last decade, "the browser" has become so much more than a simple tool for accessing information on the web. It is perhaps the most widely-known publicly-used piece of software that there is, yet so heavily underrated in its sophistication.

In their natural progression, browsers have become viable, full-fledged application runtimes - containers capable of not only delivering but also deploying and running a very large variety of cross-platform applications. [Technologies](https://en.wikipedia.org/wiki/Node.js) [arising](https://en.wikipedia.org/wiki/Blink_layout_engine) out of browser engines have been used for a while to build [massively successful desktop applications](https://code.visualstudio.com/); browsers themselves have been used as the basis of [entire operating system UIs](https://en.wikipedia.org/wiki/Chrome_OS) and even run [complex real-time 3D games](https://blog.mozilla.org/blog/2014/03/12/mozilla-and-epic-preview-unreal-engine-4-running-in-firefox/) at a performance that is getting closer and closer to native speeds every day.

With that train of thought, it seems almost inevitable that the _browserverse_ would sooner or later tackle the problem of imparting the average-joe web app the power to do things native apps can.
![I am inevitable](https://www.ghosh.dev/static/media/thanos-1.jpg)
<figcaption>Source: <a href="https://imgflip.com/i/3j3vbb">imgflip.com</a></figcaption>

If I had to bet, the web, combined with its reach and ubiquitous platform support, is probably going to become one of the best and perhaps most popular mechanisms of software delivery for a very large subset of application use-cases in the coming years.

## Project Fugu

The goal of [Project Fugu](https://www.chromium.org/teams/web-capabilities-fugu) is to make the "app gap" go away. Simply put, the idea is to bake the right set of APIs into the browser, such that over time web apps become capable of doing almost everything that native apps can - with the right levers of permissions and access control, of course, for all your rightly-raised privacy and security concerns!

![They called me a madman](https://www.ghosh.dev/static/media/thanos-2.jpg)
<figcaption>Source: <a href="https://imgflip.com/i/3j3vtu">imgflip.com</a></figcaption>

Enough with the Thanos references, but I wouldn't be very surprised if someone in the web community called this crazy. And if I absolutely had to, I will admit that even so, this is definitely _my_ kind of crazy. I like to think that I somehow saw this coming, and I wanted this to happen.
For years, browsers _have_ in some capacity been trying to bring bits and pieces of native power to the web, with [experimental APIs](https://developer.mozilla.org/en-US/docs/Web/API) for things like hardware sensors and such. With Fugu, all this endeavour to impart native-level power to the web has at least been formally unified under one banner and picked up as an umbrella initiative for building and standardising into the open web by the most popular browser project there is. And I'm really excited to see this succeed for the long term! So I'd love to see this turn out like Iron Man (the winning, _not_ the dying part) rather than Thanos in the end!

The advent and success of [Progressive Web Apps](https://developer.mozilla.org/en-US/docs/Web/Progressive_web_apps) (PWA) has definitely acted as a catalyst, exhibiting that with the right pedigree, web apps can really triumph. Making web apps easily [installable](https://developers.google.com/web/fundamentals/app-install-banners) or bringing [push notifications](https://developers.google.com/web/fundamentals/push-notifications) to the web were perhaps just the first few pieces of this bigger puzzle we have been staring at for a while.

Keeping aside whatever Google's strategies and business motivations as a company may be for investing so heavily in everything web: if at the end of the day it benefits the entire web user and developer community by making the web platform more capable, and I choose to believe that it will, I am surely going to sleep happier at night.
![Project Fugu](https://www.ghosh.dev/static/media/fugu-logo-1.png)
<figcaption>Source: <a href="https://slides.com/mhadaily/hardware-connectivity-on-pwa#/0/7">slides.com</a></figcaption>

In case you've been wondering, the word "Fugu" is Japanese for [pufferfish](https://en.wikipedia.org/wiki/Tetraodontidae), one of the most toxic and poisonous species of vertebrates in the world, incidentally also prepared and consumed as a delicacy - which, as you can guess, can be extremely dangerous if not prepared right. _See what they did there?_

Some of the upcoming and interesting capabilities that were announced as part of Project Fugu are:

- [Native Filesystem API](https://web.dev/native-file-system/), which enables developers to build web apps that interact with files on the user's local device, like IDEs, photo and video editors or text editors.
- [Contact Picker API](https://web.dev/contact-picker/), an on-demand picker that allows users to select entries from their contact list and share limited details of the selected entries with a website.
- [Web Share](https://web.dev/web-share/) and [Web Share Target](https://web.dev/web-share-target/) APIs, which together allow web apps to use the same system-provided share capabilities as native apps.
- [SMS Receiver API](https://web.dev/sms-receiver-api-announcement/), which web apps can use to auto-verify phone numbers with SMS.
- [WebAuthn](https://developers.google.com/web/updates/2018/05/webauthn), to let web apps use hardware tokens (e.g. [YubiKey](https://en.wikipedia.org/wiki/YubiKey)) or biometrics (like fingerprint or facial recognition) for identifying users on the web.
- [getInstalledRelatedApps() API](https://web.dev/get-installed-related-apps/), which allows your web app to check whether _your_ native app is installed on a user's device, and vice versa.
- [Periodic Background Sync API](https://web.dev/periodic-background-sync/), for syncing your web app's data periodically in the background and enabling more powerful and creative offline use-cases for a more native-app-like experience.
- [Shape Detection API](https://web.dev/shape-detection/), to easily detect faces, barcodes, and text in images.
- [Badging API](https://web.dev/badging-api/), which allows installed web apps to set an application-wide badge on the app icon.
- [Wake Lock API](https://web.dev/wakelock/), providing a mechanism to prevent devices from dimming or locking the screen when a web app needs to keep running.
- [Notification Triggers](https://www.chromestatus.com/feature/5133150283890688), for triggering notifications using timers or events apart from a server push.

The list goes on. These features are either in early experimentation through [Origin Trials](https://github.com/GoogleChrome/OriginTrials/blob/gh-pages/explainer.md) or targeted to be built in the future. Here is an open [tracker](https://goo.gle/fugu-api-tracker) for the APIs and their progress that have been captured so far under the banner of Project Fugu.

![Project Fugu process](https://www.ghosh.dev/static/media/fugu-process-1.jpg)
<figcaption>Source: <a href="https://developers.google.com/web/updates/capabilities#process">developer.google.com</a></figcaption>

Also worth noting, [webwewant.fyi](https://webwewant.fyi/) was announced as well - a great place for anyone in the web community to go and provide feedback about the state of the web and things that we want the web to do!
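Since these Fugu capabilities ship incrementally, behind flags and origin trials, feature detection and graceful fallbacks are essential in practice. A minimal sketch of such detection for two of the APIs above - a navigator-like object is passed in (rather than the browser global) purely so the logic can be illustrated and tested outside a browser:

```javascript
// Hypothetical helpers illustrating progressive enhancement around Fugu APIs.
// In a real page you'd call these with the global `navigator`.
function supportsWebShare(nav) {
  // Web Share exposes navigator.share(data)
  return typeof nav?.share === "function";
}

function supportsContactPicker(nav) {
  // Contact Picker exposes navigator.contacts.select(props, opts)
  return typeof nav?.contacts?.select === "function";
}

// Example usage: fall back to e.g. a copy-to-clipboard button when unsupported.
function shareOrFallback(nav, data, fallback) {
  return supportsWebShare(nav) ? nav.share(data) : fallback(data);
}
```

The browser-global `navigator` is the real entry point; the fallback function here is a placeholder for whatever degraded experience makes sense for your app.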
## On PWAs… wait, now we have TWAs?

Progressive Web Apps, with their installability and native-like fullscreen immersive experiences, have been Google's [showcase fodder](https://developers.google.com/web/showcase/tags/progressive-web-apps) over multiple years of Google I/O and Chrome Dev Summit. Major consumer-facing brands like Flipkart, Twitter, Spotify, Pinterest, Starbucks, Airbnb, Alibaba, BookMyShow, MakeMyTrip, Housing, Ola and OYO have all built great PWA-based experiences and shown that they can be hard to distinguish from native apps by the average user. By this point in time, I think as a community we generally understand and agree that PWAs can be awesome when done right. So what next?

![TWA](https://www.ghosh.dev/static/media/twa-logo-1.jpg)
<figcaption>Source: <a href="https://medium.com/@firt/google-play-store-now-open-for-progressive-web-apps-ec6f3c6ff3cc">medium.com</a></figcaption>

A key development for installable web apps has been the emergence of [Trusted Web Activities](https://developers.google.com/web/updates/2019/02/using-twa) (TWA), which provide a way to integrate full-screen web content into Android apps using a protocol called Custom Tabs - in our case, [Chrome Custom Tabs](https://developer.chrome.com/multidevice/android/customtabs).
To quote from the [Chromium Blog](https://blog.chromium.org/2019/02/introducing-trusted-web-activity-for.html), TWAs have access to all Chrome features and functionality, including many which are not available to a standard Android WebView, such as [web push notifications](https://developers.google.com/web/fundamentals/push-notifications/), [background sync](https://developers.google.com/web/updates/2015/12/background-sync), [form autofill](https://support.google.com/chrome/answer/142893?co=GENIE.Platform%3DDesktop&hl=en), [media source extensions](https://www.w3.org/TR/media-source/) and the [sharing API](https://developers.google.com/web/updates/2016/09/navigator-share). A website loaded in a TWA shares stored data with the Chrome browser, including cookies. This implies shared session state, which for most sites means that if a user has previously signed into your website in Chrome, they will also be signed into the TWA.

Basically, think of how PWAs work today, except that you can install them from the Play Store, with possibly some other added benefits of app-like privileges. It was briefly mentioned that with TWAs the general permission model of certain web app capabilities (such as having to ask for push notification permissions) may go away and become as simple and elevated as for true native Android apps (considering they _are_, in fact, native apps). As an application of all this, TWAs are now Google's recommended way for web app developers to surface their _PWA listings on the Play Store_. Here's their [showcase](https://youtu.be/Hp_dQvQyYEI?list=PLNYkxOF6rcIDA1uGhqy45bqlul0VcvKMr&t=1516) on OYO.

> Imagine having native apps that almost never need to go through painful or slow update cycles and consume very little disk space.

I believe TWAs can open up some really interesting avenues.
Imagine having native apps that almost never need to go through painful or slow update cycles (or hence deal with all the typical complexities of releasing and maintaining native apps and their codebases), because, like everything web, the actual UI and content always update on the fly! These apps get installed on a user's device, consume very little disk space compared to full-blown native apps because of an effectively shared runtime host (the browser), and work at the full capacity of that browser, say Chrome. This is quite different from embedding WebViews into hybrid native applications, for a number of reasons - the main browser on a user's device can do more powerful things, and has access to information, that WebView components embedded into individual native apps can not.

If today you maintain a native app, a _lite_ version of your native app, _and_ a mobile website (like Facebook does), then, if you want, your lite app can simply be your mobile website distributed via the Play Store wrapped in a TWA. One less codebase to maintain. Phew.

I haven't yet played around with deploying a TWA first-hand myself, so some of the low-level processes are still unclear to me. But from what I could gather talking to Google engineers at CDS who have been working on TWAs, because of several Play Store policies and mechanisms in how it operates today by design, it looks like as developers we still need to handcraft a TWA from a PWA, and manually upload and release it onto the Play Store. What would be amazing is being able to hook up some sort of a pipeline onto the Play Store itself that auto-vends a PWA as a TWA, given a valid config.
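For context on the "trusted" part of Trusted Web Activities: the link between the Android app and the website is declared via Digital Asset Links, with the website hosting a `/.well-known/assetlinks.json` file that names the app's package and signing-key fingerprint. A sketch of what that file looks like (the package name and fingerprint below are placeholders):

```json
[
  {
    "relation": ["delegate_permission/common.handle_all_urls"],
    "target": {
      "namespace": "android_app",
      "package_name": "com.example.twa",
      "sha256_cert_fingerprints": ["AA:BB:CC:DD:EE:..."]
    }
  }
]
```

If verification fails, the TWA falls back to showing browser UI instead of the full-screen app experience.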
Other improvements coming to PWAs include the [shortcuts](https://www.w3.org/TR/appmanifest/#shortcuts-member) member in the Web App Manifest, which will allow us to register a list of static shortcuts to key URLs within the PWA. The browser exposes these shortcuts via interactions consistent with how an application icon's context menu is exposed in the host operating system (e.g., right-click, long press), similar to how native apps do. Also, Chrome intends to play with the ["Add to Homescreen" verbiage](https://www.youtube.com/watch?v=Hp_dQvQyYEI&feature=youtu.be&list=PLNYkxOF6rcIDA1uGhqy45bqlul0VcvKMr&t=82), and it might simply be called "Install" in the future.

# Because Performance. Obviously!

## Not all devices _are_ made equal. And neither is the experience of your web app.
Users access web experiences from a large variety of network conditions (WiFi, LTE, 3G, 2G, optional data-saver modes) and device capabilities (CPU, memory, screen size and resolution), which causes a large performance gap to exist across the spectrum of network types and devices.

Multiple Web APIs are available today that can collectively give us network and device information, which can be used to understand, classify and target users with an adaptive experience, providing them with an optimal journey through our website given the state of their network and device performance. Some APIs that can help us here are the [Network Information API](https://developer.mozilla.org/en-US/docs/Web/API/Network_Information_API), which reports effective connection type, downlink speed, RTT and data-saver information, the [Device Memory API](https://developers.google.com/web/updates/2017/12/device-memory), the CPU [hardwareConcurrency API](https://developer.mozilla.org/en-US/docs/Web/API/NavigatorConcurrentHardware/hardwareConcurrency), and a mechanism to communicate such information as [Client Hints](https://developers.google.com/web/fundamentals/performance/optimizing-content-efficiency/client-hints).

![Adaptive Loading](https://www.ghosh.dev/static/media/adaptive-loading-1.jpg)
<figcaption>Source: <a href="https://youtu.be/puUPpVrIRkc?list=PLNYkxOF6rcIDA1uGhqy45bqlul0VcvKMr&amp;t=454">Adaptive Loading - improving web performance on slow devices</a></figcaption>

Facebook [described](https://youtu.be/puUPpVrIRkc?list=PLNYkxOF6rcIDA1uGhqy45bqlul0VcvKMr&t=1438) at a very high level their classification models for devices, for both mobiles and desktops (the latter being relatively harder to do), and how they have been effectively using this information to derive the best user experience for targeted segments.
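The kind of network- and device-aware branching described above can be sketched as a small pure function. The thresholds here are illustrative, not anyone's actual classification rules; `hints` mirrors the fields exposed by `navigator.connection` and `navigator.deviceMemory`:

```javascript
// Decide between a lightweight and a full experience based on client hints.
// Defaults assume a capable device when a hint is unavailable.
function chooseExperience(hints) {
  const { effectiveType = "4g", saveData = false, deviceMemory = 8 } = hints;
  // Respect an explicit data-saver preference above everything else.
  if (saveData) return "lite";
  // Slow networks get the lightweight bundle and low-res media.
  if (effectiveType === "slow-2g" || effectiveType === "2g") return "lite";
  // Low-memory devices skip heavy features like autoplaying video.
  if (deviceMemory < 2) return "lite";
  return "full";
}
```

In the browser, `hints` would be assembled from `navigator.connection` and `navigator.deviceMemory`, with fallbacks, since not all browsers expose these APIs.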
All this comes paired with the release of [Adaptive Hooks](https://github.com/GoogleChromeLabs/react-adaptive-hooks), which make it easy to target device and network specs for patterns around resource loading, data fetching, code splitting or disabling certain features in your React web app.

## Better to hang by more than a thread.

> The more work JS execution is required to do, the more it queues up, blocks and slows everything down, effectively causing the web app to suffer from jank and feel sluggish.

To deliver web experiences that feel smooth, a lot of work needs to be done by the browser: the initial downloading, parsing and execution of HTML, CSS & JavaScript; all the successive work required to eventually [paint pixels](https://developers.google.com/web/fundamentals/performance/rendering) on the screen, which includes styling, layout, painting and compositing; then, to make things interactive, handling multiple events and actions; and in response to those, frequently _redoing_ most of these tasks to update content on the page.

A lot of these tasks need to happen in sequence every time, as frequently as required, within a cumulative span of a _few milliseconds_ to keep the experience smooth and responsive: about 16ms to deliver 60 frames per second. Some new devices support even higher refresh rates like 90Hz or 120Hz, which push the available time to paint a frame even lower if you have to keep up with the refresh rate.
Combine that with the truths that all devices are not made equal and that JavaScript is single-threaded by nature: the more work that is required during JS execution, the more it queues up, blocks and slows everything down, effectively causing your web app to suffer from [jank](https://www.afasterweb.com/2015/08/29/what-the-jank/) and feel sluggish.

![Pixel pipeline](https://www.ghosh.dev/static/media/main-thread-1.png)
<figcaption>Source: <a href="https://youtu.be/7Rrv9qFMWNM?list=PLNYkxOF6rcIDA1uGhqy45bqlul0VcvKMr&amp;t=485">The main thread is overworked &amp; underpaid</a></figcaption>

Patterns that distribute client-side computational work across multiple threads by leveraging [web workers](https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Using_web_workers) to perform **Off-Main-Thread** (OMT) work - keeping the main (UI) thread reserved for only the work that absolutely needs to be done by it (DOM & UI work) - can help significantly to alleviate this problem.

Such patterns are about _reducing risks_ of delivering poor user experiences: although the overall time to completion may, in fact, be slowed marginally by message-passing overheads across multiple worker threads, the main (UI) thread is instead free to do any UI work required in the interim (even handle new user interactions like touches or scrolls) and keep delivering a smooth experience throughout. This works out great, since the margin of error for dropping a frame is in the order of milliseconds, while making the user wait for an overall task to complete can go into the order of hundreds of milliseconds.

Web workers have existed for a while, but, for probable reasons around their wonkiness, they haven't really seen great adoption. Libraries like [Comlink](http://npm.im/comlink) can help a lot to that end. [Proxx.app](https://proxx.app/) is a great example of all of this in action.
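Even without workers, a related main-thread-friendly tactic is to split one long job into sub-50ms slices and yield to the event loop between them, so no single slice turns into a "long task". The budgeting logic behind such a scheduler can be sketched as a pure function (the 50ms budget matches the long-task threshold; task durations are assumed estimates):

```javascript
// Group tasks (with estimated durations in ms) into batches that each fit
// within a frame-ish budget. Tasks longer than the budget get their own batch.
function batchTasks(durations, budgetMs = 50) {
  const batches = [];
  let current = [];
  let spent = 0;
  for (const d of durations) {
    // Close the current batch if adding this task would blow the budget.
    if (current.length > 0 && spent + d > budgetMs) {
      batches.push(current);
      current = [];
      spent = 0;
    }
    current.push(d);
    spent += d;
  }
  if (current.length > 0) batches.push(current);
  return batches;
}
```

A real scheduler would then run one batch per `setTimeout(…, 0)` or `requestAnimationFrame` tick, letting input handling and painting happen in between.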
## Lighthouse shines brighter!

![Lighthouse](https://www.ghosh.dev/static/media/lighthouse-logo-1.jpg)
<figcaption>Source: <a href="https://developers.google.com/web/tools/lighthouse/">google.com</a></figcaption>

Not to throw any shade at [Lighthouse](https://developers.google.com/web/tools/lighthouse) _(all puns intended)_ - so far this web performance auditing tool had always fallen short of my expectations for any practical large-scale use-case. It seemed a little too simple and superficial to be of much use for driving meaningful insights in complex, real-world production applications. Don't get me wrong, Lighthouse has always been a decent product on its own, but building web performance test tooling that is robust, powerful _and_ predictable is inherently a hard problem to solve. Lighthouse had always been somewhat useful to me, but mostly as a basic in-browser audit panel that could give me some generic "Performance 101" insights, rather than something I would be excited to hook up as a powerful synthetic performance testing tool in a production pipeline, catering to deeper performance auditing needs.

Hopefully, that changes now. With the release of [Lighthouse CI](https://github.com/GoogleChrome/lighthouse-ci) - an extension of the toolset for automated assertions, saving and retrieval of historical data, and actionable insights for improving web app performance through continuous integration - the future seems a bit brighter.
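To give a flavour of the workflow, Lighthouse CI is driven by a small config file. A minimal sketch of what a `lighthouserc.json` can look like (the URL is a placeholder, and exact key names should be checked against the lighthouse-ci docs):

```json
{
  "ci": {
    "collect": {
      "url": ["https://example.com/"],
      "numberOfRuns": 3
    },
    "assert": {
      "assertions": {
        "categories:performance": ["error", { "minScore": 0.9 }]
      }
    },
    "upload": {
      "target": "temporary-public-storage"
    }
  }
}
```

With something like this in place, a CI job can fail a pull request whose performance score regresses below the asserted threshold.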
What's even more exciting is the introduction of [stack packs](https://github.com/GoogleChrome/lighthouse-stack-packs), which detect what platform a site is built on (such as WordPress) and display stack-specific recommendations, and [plugins](https://github.com/GoogleChrome/lighthouse/blob/master/docs/plugins.md), which provide mechanisms to extend the functionality of Lighthouse with things such as domain-specific insights and scoring - for example, to cater to the bespoke needs of an e-commerce website.

## … and comes with new Performance Metrics.

![Lighthouse scores](https://www.ghosh.dev/static/media/lighthouse-scores-1.jpg)
<figcaption>Source: <a href="https://youtu.be/iaWLXf1FgI0?list=PLNYkxOF6rcIDA1uGhqy45bqlul0VcvKMr&amp;t=509">Speed tooling evolutions: 2019 and beyond</a></figcaption>

With Lighthouse 6, some important changes are coming to how it scores web page performance, focusing on some of the _new_ metrics being introduced:

- [Largest Contentful Paint](https://web.dev/lcp/) (LCP), which measures the render time of the largest content element visible in the viewport.
- [Total Blocking Time](https://web.dev/lighthouse-total-blocking-time/) (TBT), a measure of the total amount of time that a page is blocked from responding to user input, such as mouse clicks, screen taps, or keyboard presses.
The sum is calculated by adding the _blocking portion_ of all [long tasks](https://web.dev/long-tasks-devtools) between [First Contentful Paint](https://web.dev/first-contentful-paint/) and [Time to Interactive](https://web.dev/interactive/). Any task that executes for more than 50ms is a long task; the amount of time beyond 50ms is the blocking portion. For example, if Chrome detects a 70ms long task, the blocking portion would be 20ms.
- [Cumulative Layout Shift](https://web.dev/cls/) (CLS), which measures the sum of the individual _layout shift scores_ for each _unexpected layout shift_ that occurs between when the page starts loading and when its [lifecycle state](https://developers.google.com/web/updates/2018/07/page-lifecycle-api) changes to hidden. Layout shifts are defined by the [Layout Instability API](https://github.com/WICG/layout-instability); they occur any time an element that is visible in the viewport changes its start position (for example, its top and left position in the default [writing mode](https://developer.mozilla.org/en-US/docs/Web/CSS/writing-mode)) between two frames. Such elements are considered _unstable elements_.

Metrics being deprecated are [First Meaningful Paint](https://web.dev/first-meaningful-paint/) (FMP) and [First CPU Idle](https://web.dev/first-cpu-idle/) (FCI).

## Visually marking slow vs. fast websites

Chrome has expressed plans to visually indicate what it thinks is a slow vs. a fast-loading website.
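The TBT arithmetic described above is simple enough to sketch directly. The inputs are assumed to be main-thread task durations in milliseconds, as something like the Long Tasks API would report them:

```javascript
// Total Blocking Time: sum of the portion of each task beyond the 50ms
// long-task threshold. A 70ms task contributes 20ms; a 30ms task contributes 0.
const LONG_TASK_THRESHOLD_MS = 50;

function totalBlockingTime(taskDurationsMs) {
  return taskDurationsMs
    .map((d) => Math.max(0, d - LONG_TASK_THRESHOLD_MS))
    .reduce((sum, blocking) => sum + blocking, 0);
}
```

For example, `totalBlockingTime([70, 30, 120])` returns `90` (20 + 0 + 70). Lighthouse additionally restricts the window to tasks between FCP and TTI, which this sketch leaves out.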
The exact form the UI is going to take is still uncertain, but we can expect a bunch of trials and experiments from Chrome on this.

![Visual indicators for slow vs. fast websites](https://www.ghosh.dev/static/media/visual-slow-fast-1.png)
<figcaption>Source: <a href="https://blog.chromium.org/2019/11/moving-towards-faster-web.html">Moving towards a faster web</a></figcaption>

I guess that this is going to be mired in some controversy when it happens. How does the browser judge what is right for my website? Why does it get to decide where exactly to draw the line? And how exactly does it do that for the type of website, target audience and content I have? Surely there are a lot of difficult questions with no easy answers yet. But there's one thing for sure: if this does happen, it will force the average website to take web performance considerations more seriously - somewhat like when browsers collectively decided to start enforcing HTTPS by visually penalizing non-secure websites, and it worked out well eventually.

Honestly, with so much that is unclear for now, I still lean towards the side of more free will; but as a self-appointed advocate of web performance, I am very tempted to think that if executed right, this might just be a good thing.
## Web Framework ecosystem improvements

Google has been partnering with popular web framework developers to make under-the-hood improvements to those frameworks, such that sites built and run on top of them get visible performance improvements without, so to speak, having to lift a finger.

> Choosing to deliver modern JavaScript code to modern browsers can visibly improve performance.

The prime example of this was a partnership with [Next.js](https://nextjs.org/), a popular web framework based on [React](https://reactjs.org/), where a lot of development has happened around improved chunking, differential loading, JS optimisations and capturing better performance metrics.

An interesting (though quite obvious in retrospect) takeaway from this: choosing to deliver _modern_ JavaScript code to _modern_ browsers (as opposed to lengthy transpiled or polyfilled code based on some lowest common denominator of browsers you need to support) can visibly improve performance by drastically reducing the amount of code that is shipped. This was coupled with the announcement of [Babel preset-modules](https://github.com/babel/preset-modules), which can help you achieve this.

If you are interested, the [Framework Fund](https://opencollective.com/chrome) run by Chrome was also announced multiple times across this CDS.
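A well-known way to ship modern code only to modern browsers is the module/nomodule pattern: browsers that understand ES modules skip the `nomodule` script, while legacy browsers ignore `type="module"`. A markup sketch (the file names are placeholders):

```html
<!-- Modern browsers load this ES2017+ bundle and skip the nomodule one. -->
<script type="module" src="/app.modern.js"></script>
<!-- Legacy browsers don't understand type="module" and fall back to this. -->
<script nomodule src="/app.legacy.js" defer></script>
```

Differential loading setups like Next.js's build on this same idea, producing two bundles from one codebase.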
## <svg aria-hidden="true" focusable="false" height="16" version="1.1" viewbox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg>How awesome is WebAssembly now?

Spoiler alert: pretty awesome! 💯

![WebAssembly](https://www.ghosh.dev/static/media/wasm-logo-1.svg)<figcaption> Source: <a href="https://commons.wikimedia.org/wiki/File:Web_Assembly_Logo.svg">Wikimedia Commons</a> </figcaption>

[WebAssembly](https://developer.mozilla.org/en-US/docs/WebAssembly) (WASM) is a new language for the web that is designed to run alongside Javascript and as a compilation target for other languages (such as C, C++, Rust, …), enabling performance-intensive software to run within the browser at near-native speeds that were not possible to attain with JS. While WebAssembly has been out there being developed and improved upon for a few years, it saw major announcements this time on performance improvements that bring it closer to being on par with high-performance native code execution, enabled through WASM engine improvements in the browser such as [implicit caching](https://dzone.com/articles/webassembly-caching-when-using-emscripten), the introduction of support for [threads](https://developers.google.com/web/updates/2018/10/wasm-threads), and [SIMD](https://www.chromestatus.com/feature/6533147810332672) (Single Instruction Multiple Data), a core capability of modern CPU architectures that enables instructions to be executed multiple times faster.
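To get a feel for how approachable the JS side of this is, here is a minimal sketch (runnable in Node or any modern browser) that instantiates a tiny hand-encoded WASM module exporting an `add` function; the byte array is the classic minimal "add two i32s" module often used in WebAssembly tutorials:

```javascript
// A minimal WebAssembly module, hand-encoded: exports add(a, b) -> a + b.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic "\0asm" + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type section: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add" -> function 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, one body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b                    // local.get 0, local.get 1, i32.add, end
]);

const module = new WebAssembly.Module(bytes);
const instance = new WebAssembly.Instance(module);
console.log(instance.exports.add(2, 3)); // 5
```

Real-world usage would of course compile C/C++/Rust with a toolchain like Emscripten rather than hand-encoding bytes, but this shows how thin the JS boundary is.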
Multiple OpenCV-based [demos](https://riju.github.io/WebCamera/samples/) were showcased, illustrating real-time high-FPS feature recognition, card reading and image information extraction, and facial expression detection or replacement within the web browser, all of which have been made possible by the recent WASM improvements. Check them out, some of them are really cool!

## <svg aria-hidden="true" focusable="false" height="16" version="1.1" viewbox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg>The cake is still a lie, but a delicious one at that!

_Here, if you didn’t get that [reference](https://knowyourmeme.com/memes/the-cake-is-a-lie)._

![Portals](https://www.ghosh.dev/static/media/portals-logo-1.png)<figcaption> Source: <a href="https://dev.to/tomayac/hands-on-with-portals-seamless-navigation-on-the-web-4ho0-temp-slug-198334">web.dev</a> </figcaption>

One of the unique new web platform capabilities that garnered a lot of interest was [Portals](https://dev.to/tomayac/hands-on-with-portals-seamless-navigation-on-the-web-4ho0-temp-slug-198334), which aims to enable seamless, animation-capable transitions across _page navigations_ for multi-page architecture (MPA) based web applications, effectively affording such MPAs a _creative lever_ to provide the kind of smooth, potentially "instantaneous" browsing experiences that native apps or single-page applications (SPAs) can offer. Portals can deliver great app-like or SPA-like behaviour without the complexity of building a SPA, which often becomes cumbersome to scale and maintain for large websites with complex and dynamic use-cases.
Portals are essentially a new type of HTML element that can be instantiated and injected in a page to load another page inside it (similar in some ways to iframes, but different in many others), keep it hidden or animate it using CSS animations, and, when required, navigate _into_ it, thus performing page navigations in an instant when used effectively.

An interesting pattern with Portals is that they can also be made to load the page structure (skeleton or stencil, as you may call it) preemptively, even if the exact data is not prefetched. Most of the major computation that contributes to web page performance (parsing, execution, styling, layout, painting, compositing) can then happen for most of the document structure out-of-band of critical page load latencies, such that when a portal navigation happens, only the data is fetched and filled in onto the page. This still incurs some rendering cost, but typically much less than an entire page's worth of work. Several [demos](https://www.youtube.com/watch?v=X2zqwMBBvIs&feature=youtu.be&list=PLNYkxOF6rcIDA1uGhqy45bqlul0VcvKMr&t=956) are available and this API can be seen behind experimental flags in Chrome at the moment.
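For a flavour of what this looks like in markup, here is a rough sketch based on the experimental API described in the hands-on article linked above (the API is behind flags and may change; the URL here is only a placeholder):

```html
<!-- Preload the next page inside a portal; it renders but stays non-interactive -->
<portal id="next" src="https://example.com/next-page"></portal>

<script>
  // When the user clicks the preview, the portal content becomes the top-level page.
  document.querySelector('#next').addEventListener('click', (evt) => {
    evt.target.activate(); // returns a promise; the browser swaps pages in place
  });
</script>
```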
# <svg aria-hidden="true" focusable="false" height="16" version="1.1" viewbox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg>Web Bundles

![Web Bundles](https://www.ghosh.dev/static/media/web-bundles-logo-1.jpg)<figcaption> Source: <a href="https://web.dev/web-bundles/">web.dev</a> </figcaption>

In the simplest terms, a [Web Bundle](https://web.dev/web-bundles/) is a file format for encapsulating one or more HTTP resources in a single file. It can include one or more HTML files, Javascript files, images or stylesheets. Also known as [Bundled HTTP Exchanges](https://wicg.github.io/webpackage/draft-yasskin-wpack-bundled-exchanges.html), it's part of the [Web Packaging](https://github.com/WICG/webpackage) proposal (and as someone wise would say, not to be confused with [webpack](https://webpack.js.org/)).

The idea is to enable offline distribution and usage of web apps. Imagine sharing web apps as a single `.wbn` file over anything like Bluetooth, Wi-Fi Direct or USB flash drives, and then being able to run them offline on another device in the web application's origin's context! In a way, this again veers into the territory of giving the web more native-app-like powers: easy distribution and offline execution.

> In such countries, a large portion of apps that exist on people’s phones get side-loaded over peer-to-peer mechanisms rather than from a first-party distribution source like the Play Store.
If you research how native apps get distributed in countries with emerging markets, such as India, the Middle East or Africa, which are heavy on mobile users but generally deprived of public Wi-Fi and stuck with predominantly poor, patchy or congested cellular networks, you'll find that peer-to-peer file-sharing apps like [Share-It](https://play.google.com/store/apps/details?id=com.lenovo.anyshare.gps&hl=en_IN) or [Xender](https://play.google.com/store/apps/details?id=cn.xender&hl=en_IN) are extremely popular, and that a large portion of apps that exist on people's phones get side-loaded over peer-to-peer mechanisms rather than from a first-party distribution source like the Play Store. It seems only natural that this would catch on for web apps as well!

I must confess that the premise of Web Bundles does sort of remind me of the age-old [MHTML](https://en.wikipedia.org/wiki/MHTML) file format (`.mhtml` or `.mht` files, if you remember those) that used to be popular a decade back and, in fact, is still [supported](https://en.wikipedia.org/wiki/MHTML#Browser_support) by all major browsers. MHTML is a web page archive format that could contain HTML and associated assets like stylesheets, javascript, audio, video and even the Flash and Java applets of the day, with a content-encoding inspired by the MIME email protocol (hence the name MIME HTML) to combine everything into a single file.
For what it's worth though, from the limited knowledge that I have so far, I do believe that what we'll have with Web Packaging is going to be much more complex and powerful, catering to the needs of the Web of _this_ generation, with key differences like being able to run in the browser using the web application's origin's context (by being verified using signatures, similar to how [Signed HTTP Exchanges](https://developers.google.com/web/updates/2018/11/signed-exchanges) may work) rather than being treated as locally saved content like with MHTML. But yeah… can't deny that it does feel a bit like listening to [Backstreet Boys](https://www.youtube.com/watch?v=4fndeDfaWCg) again from those days! Hashtag Nostalgia.

# <svg aria-hidden="true" focusable="false" height="16" version="1.1" viewbox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg>More cool stuff with CSS!

Yeah! I didn't even try to come up with a better headline for this section, because this is simply how I genuinely feel.
## <svg aria-hidden="true" focusable="false" height="16" version="1.1" viewbox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg>New capabilities

A quick list of some of the new coolness that's landing on browsers (or even better, has already landed):

- [`scroll-snap`](https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Scroll_Snap) that introduces scroll snap positions, which enforce the scroll positions that a [scroll container’s](https://developer.mozilla.org/en-US/docs/Glossary/scroll_container) [scrollport](https://developer.mozilla.org/en-US/docs/Glossary/scrollport) may end at after a scrolling operation has completed.
- [`:focus-within`](https://developer.mozilla.org/en-US/docs/Web/CSS/:focus-within) to represent an element that has received focus or _contains_ an element that has received focus.
- `@media (prefers-*)` queries, namely the [`prefers-color-scheme`](https://developer.mozilla.org/en-US/docs/Web/CSS/@media/prefers-color-scheme), [`prefers-contrast`](https://developer.mozilla.org/en-US/docs/Web/CSS/@media/prefers-contrast), [`prefers-reduced-motion`](https://developer.mozilla.org/en-US/docs/Web/CSS/@media/prefers-reduced-motion) and [`prefers-reduced-transparency`](https://developer.mozilla.org/en-US/docs/Web/CSS/@media/prefers-reduced-transparency), which in [Adam Argyle’s words](https://youtu.be/-oyeaIirVC0?list=PLNYkxOF6rcIDA1uGhqy45bqlul0VcvKMr&t=396), can together enable you to serve user preferences like, _“I prefer a high contrast dark mode motion when in dim-lit environments!”_ 😎
- [`position: sticky`](https://css-tricks.com/position-sticky-2/) which is a hybrid of relative and fixed positioning, where the element is treated as `relative` positioned until it crosses a specified threshold, at which point it is treated as `fixed` positioned.
- [CSS logical properties](https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Logical_Properties) to provide the ability to control layout through logical, rather than physical, direction and dimension mappings. Think of dynamic directionality based on whether you’re serving `ltr`, `rtl` or `vertical-rl` content. Here’s a [nice article](https://webdesign.tutsplus.com/tutorials/how-to-use-css-logical-properties--cms-33024) that talks about it.
  ![CSS Logical Properties](https://www.ghosh.dev/static/media/css-logical-properties-1.jpg)<figcaption> Source: <a href="https://www.youtube.com/watch?v=-oyeaIirVC0&amp;list=PLNYkxOF6rcIDA1uGhqy45bqlul0VcvKMr&amp;index=2">Next-generation web styling</a> </figcaption>
- [`:is()` selector](https://css-tricks.com/almanac/selectors/i/is/), the new name for the [Matches-Any Pseudo-class](https://www.w3.org/TR/selectors-4/#matches).
- [`backdrop-filter`](https://developer.mozilla.org/en-US/docs/Web/CSS/backdrop-filter) which lets you apply graphical effects such as blurring or colour shifting to _anything_ in the area behind an element!

## <svg aria-hidden="true" focusable="false" height="16" version="1.1" viewbox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg>Houdini

> ...creating new CSS features without waiting for them to be implemented natively in browsers.

CSS Houdini has perhaps for some time been at the top of my _hot-new-things_ list when thinking about the exciting future of the web. As [MDN](https://developer.mozilla.org/en-US/docs/Web/Houdini) describes it, Houdini is a group of APIs being built to give developers direct access to the [CSS Object Model](https://developer.mozilla.org/en-US/docs/Web/API/CSS_Object_Model) (CSSOM), enabling them to write code the browser can parse as CSS, thereby creating new CSS features _without waiting for them to be implemented_ natively in browsers. In other words, it's an initiative to open up the browser in a way that gives web developers more direct hooks into the styling and layout process of the browser's rendering engine. And lets creativity (with performance) run amok!

If this is the first time you are hearing about CSS Houdini, take a few moments to let that sink in. Believe me.
Then if you like, read a few [other](https://www.smashingmagazine.com/2016/03/houdini-maybe-the-most-exciting-development-in-css-youve-never-heard-of/) [great](https://www.qed42.com/blog/building-powerful-custom-properties-CSS-houdini) [articles](https://medium.com/front-end-field-guide/how-to-be-houdini-and-escape-the-limits-of-css-460af7307d41) on it.

It's still relatively early days for Houdini in terms of cross-browser support, but a few announcements of good progress were made. The [Paint API](https://developers.google.com/web/updates/2018/01/paintapi), [Typed OM](https://developers.google.com/web/updates/2018/03/cssom) and [Properties and Values API](https://web.dev/css-props-and-vals/) have shipped in Chrome. The [Layout API](https://github.com/w3c/css-houdini-drafts/blob/master/css-layout-api/EXPLAINER.md) is in Canary.

![CSS Houdini](https://www.ghosh.dev/static/media/css-houdini-1.jpg)<figcaption> Source: <a href="https://youtu.be/-oyeaIirVC0?list=PLNYkxOF6rcIDA1uGhqy45bqlul0VcvKMr&amp;t=1258">Next-generation web styling</a> </figcaption>

Check out the cool [Houdini Spellbook](https://houdini.glitch.me/) or the [ishoudinireadyyet.com](https://ishoudinireadyyet.com/) website for more.
# <svg aria-hidden="true" focusable="false" height="16" version="1.1" viewbox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg>Showing some ❤ to HTML ## <svg aria-hidden="true" focusable="false" height="16" version="1.1" viewbox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg>Form elements In HTML, the visual redesign, accessibility upgrade and extensibility of [form elements](https://www.youtube.com/watch?v=ZFvPLrKZywA&list=PLNYkxOF6rcIDA1uGhqy45bqlul0VcvKMr&index=5) such as `<select>` or `<date>` is a welcome development! A joint session by Google and Microsoft described what to expect with the customizability of form elements in the future, or even the possibility of new elements that don’t exist yet, like table-views or toggle-switches. 
## <svg aria-hidden="true" focusable="false" height="16" version="1.1" viewbox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg>Display Locking Web content is predominantly large amounts of text and images, and if you think about it, often just a collection of _list views_ of a few common types of components repeating over and over again. Think about your Facebook or Twitter feed, your Google or Amazon search results, a lengthy blog (like this one?) or a Wikipedia article you’re reading, or pretty much every other popular website that you use. There is a lot of scrolling involved! This demands the notion of [Virtual Scrolling](https://github.com/WICG/virtual-scroller), and one of the possible prototypes is a new web platform primitive called [Display Locking](https://github.com/WICG/display-locking), which is a set of API changes that aim to make it straightforward for developers and browsers to easily scale to a large amount of content and control when rendering work happens. 
![Display Locking](https://www.ghosh.dev/static/media/display-locking-1.jpg)<figcaption> Source: <a href="https://www.youtube.com/watch?v=ZFvPLrKZywA&amp;list=PLNYkxOF6rcIDA1uGhqy45bqlul0VcvKMr&amp;index=5">HTML isn’t done!</a> </figcaption>

To quote Nicole Sullivan from the [session](https://youtu.be/ZFvPLrKZywA?list=PLNYkxOF6rcIDA1uGhqy45bqlul0VcvKMr&t=1100) on this, Display Locking allows you to do batched rendering updates to avoid paying the performance costs of handling large amounts of DOM; it also means that locked sub-trees are not rendered immediately, which means you can keep more stuff in the DOM; and that's great because it becomes searchable both by Find in Page as well as by assistive technologies.

# <svg aria-hidden="true" focusable="false" height="16" version="1.1" viewbox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg>Safety and Privacy

## <svg aria-hidden="true" focusable="false" height="16" version="1.1" viewbox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg>Visual updates to URL display

![URL security](https://www.ghosh.dev/static/media/privacy-url-1.jpg)<figcaption> Source: <a href="https://youtu.be/WnCKlNE52tc?list=PLNYkxOF6rcIDA1uGhqy45bqlul0VcvKMr&amp;t=146">Protecting users on a thriving web</a> </figcaption>

Let's talk about important stuff.
In the dimension of safety and privacy, some notable callouts included Chrome's experiments around the general web consumer's [perception of security models](https://ai.google/research/pubs/pub48199) related to how URL information is displayed by the browser. Chrome's past attempts at using the [extended validation indicator](https://en.wikipedia.org/wiki/Extended_Validation_Certificate), which indicates the legal entity tied to a domain name, did not turn out to be successful, simply because people don't notice when it's missing.

![URL security](https://www.ghosh.dev/static/media/privacy-url-2.jpg)<figcaption> Source: <a href="https://youtu.be/WnCKlNE52tc?list=PLNYkxOF6rcIDA1uGhqy45bqlul0VcvKMr&amp;t=637">Protecting users on a thriving web</a> </figcaption>

This also calls out studies where the average web user has been found to not clearly understand visual cues like [what the HTTPS padlock icon on the URL bar means](https://publications.sba-research.org/publications/2019-Pfeffer-HTTPS_Mental_Models.pdf). All of this comes with a degree of potential controversy though, where Chrome as a browser will start hiding complete URLs and showing only the origin information that it feels is most relevant for the average web user. This is probably not going to make more experienced users very happy. In its defence, Chrome announced the [Suspicious Site Reporter](https://chrome.google.com/webstore/detail/suspicious-site-reporter/jknemblkbdhdcpllfgbfekkdciegfboi) extension that will allow the power user to see the full unedited URL, as well as report malicious sites to Google's Safe Browsing service.

While the intentions are great, this still feels somewhat stop-gap and a little unfulfilling to me. Simply because mobile has always been so hugely important, and non-desktop versions of Chrome have so far, to the best of my knowledge, had literally zero history of supporting extensions, this sounds like a hasty half-measure and an uncanny oversight.
Wouldn't something as simple as a browser setting have made things easier? Perhaps there are motivations here that I don't fully understand.

## <svg aria-hidden="true" focusable="false" height="16" version="1.1" viewbox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg>Combating sophisticated spoofing attacks

Sophisticated spoofing attacks like [IDN spoofing](https://en.wikipedia.org/wiki/IDN_homograph_attack) are extremely difficult for the average web user to detect and, for that matter, often likely to defeat the technically experienced user as well, unless they are specifically looking for them.

![IDN spoofing](https://www.ghosh.dev/static/media/idn-attack-1.png)<figcaption> Source: <a href="https://youtu.be/WnCKlNE52tc?list=PLNYkxOF6rcIDA1uGhqy45bqlul0VcvKMr&amp;t=274">Protecting users on a thriving web</a> </figcaption>

Chrome is introducing what it calls [lookalike warnings](https://www.zdnet.com/article/google-chrome-to-get-warnings-for-lookalike-urls/) to inform the user of potential attacks and then try to redirect them to the possibly intended website instead.

## <svg aria-hidden="true" focusable="false" height="16" version="1.1" viewbox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg>Committed to privacy, but with ads?
🤷‍♀️ Following a brief detour on [why we should all look at ads as a necessary evil](https://youtu.be/WnCKlNE52tc?list=PLNYkxOF6rcIDA1uGhqy45bqlul0VcvKMr&t=1175), it was announced that restrictions are coming to third-party cookies in Chrome. The new cookie classification spec that's landing requires `SameSite=None` to be explicitly set to designate cookies intended for cross-site access, without which they will by default be accessible only in first-party contexts. Coupled with the `Secure` attribute, this makes such cookies accessible only over HTTPS connections. Read more about it on [blog.chromium.org](https://blog.chromium.org/2019/10/developers-get-ready-for-new.html).

In a world where other, more privacy-focused browsers like Firefox or Safari have been blocking third-party cookies by default and have shown [committed](https://www.zdnet.com/article/firefox-to-add-tor-browser-anti-fingerprinting-technique-called-letterboxing/) [interest](https://gizmodo.com/apple-declares-war-on-browser-fingerprinting-the-sneak-1826549108) in fighting fingerprinting, all this seems arguably feeble. Then again, [not that](https://www.eff.org/deeplinks/2019/08/dont-play-googles-privacy-sandbox-1) it has had a great track record.
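To make the new classification concrete, here is a small sketch of the `Set-Cookie` attributes involved; `buildSetCookie` is a hypothetical helper for illustration, not a browser or server API:

```javascript
// Sketch of Set-Cookie attribute handling under the new classification rules.
// buildSetCookie is a hypothetical helper, shown only to illustrate the attributes.
function buildSetCookie(name, value, { crossSite = false } = {}) {
  const parts = [`${name}=${value}`];
  if (crossSite) {
    // Cookies intended for cross-site access must now opt in explicitly
    // with SameSite=None, and Secure restricts them to HTTPS connections.
    parts.push('SameSite=None', 'Secure');
  } else {
    // Without an explicit opt-in, cookies get first-party-only treatment.
    parts.push('SameSite=Lax');
  }
  return parts.join('; ');
}

console.log(buildSetCookie('sid', 'abc123', { crossSite: true }));
// sid=abc123; SameSite=None; Secure
```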
# <svg aria-hidden="true" focusable="false" height="16" version="1.1" viewbox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg>Chrome Extensions

## <svg aria-hidden="true" focusable="false" height="16" version="1.1" viewbox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg>…making progress towards the way they should ideally always have been.

Themes of privacy and safety continued into the presentation of the newer mechanisms for building Chrome extensions, which are being designed to be more protective about security and access control. The introduction of Extension [Manifest V3](https://developer.chrome.com/extensions/migrating_to_manifest_v3), along with other overhauls to the extension ecosystem architecture, will hopefully deprecate the existing permission models and background code executions, which have traditionally been overly open and are also known to cause performance bottlenecks. Stricter models for both permissions and execution limits through [background service workers](https://developer.chrome.com/extensions/migrating_to_service_workers) are being introduced.
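As a sketch of what the service-worker-based background model looks like, and assuming the shape documented in the Manifest V3 migration guide (the extension name and file name here are placeholders), the relevant `manifest.json` bits are roughly:

```json
{
  "manifest_version": 3,
  "name": "Example extension",
  "version": "1.0",
  "background": {
    "service_worker": "background.js"
  }
}
```

Unlike the old persistent background pages, the service worker is spun up on demand and torn down when idle, which is where the execution limits come from.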
A notable mention here was the addition of the [declarative net request](https://blog.chromium.org/2019/06/web-request-and-declarative-net-request.html) model that will drastically change the behaviour of request interception and blocking as done today by Chrome extensions (such as [AdBlock](https://chrome.google.com/webstore/detail/adblock-%E2%80%94-best-ad-blocker/gighmmpiobklfepjocnamgkkbiglidom)), providing more restrictive and performant models to achieve the same.

![Chrome Extensions: declarative net request](https://www.ghosh.dev/static/media/chrome-ext-1.jpg)<figcaption> Source: <a href="https://youtu.be/7-ALJiZCI6w?list=PLNYkxOF6rcIDA1uGhqy45bqlul0VcvKMr&amp;t=1630">Chrome extensions and the world of tomorrow</a> </figcaption>

# <svg aria-hidden="true" focusable="false" height="16" version="1.1" viewbox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg>SEO

Google [announced](https://www.youtube.com/watch?v=4pOH8Smd0Xs&list=PLNYkxOF6rcIDA1uGhqy45bqlul0VcvKMr&index=6) that the Googlebot crawler user-agent will be updated soon to reflect the [new evergreen Googlebot](https://webmasters.googleblog.com/2019/05/the-new-evergreen-googlebot.html). A much-needed improvement over the previous rendering engine used by Googlebot, which was based on Chrome 41 and had probably been facing a lot of difficulties keeping up with the new Javascript-heavy websites of today.
![Googlebot user-agent](https://www.ghosh.dev/static/media/googlebot-1.png)<figcaption> Source: <a href="https://www.youtube.com/watch?v=4pOH8Smd0Xs&amp;list=PLNYkxOF6rcIDA1uGhqy45bqlul0VcvKMr&amp;index=6">How to make your content shine on Google Search</a> </figcaption>

What this basically means is that with every release of Chrome, the Googlebot rendering engine will also keep up and get updated. Consequently, the Googlebot user-agent string gets updated as well, and will no longer always match an exact version of Chrome, as used to be the case before. Benefits include better support for modern ways of building websites, such as Javascript-rendered pages, lazy-loading and [web components](https://developer.mozilla.org/en-US/docs/Web/Web_Components). Also, going ahead, images set as `background-image` URLs may not be indexed.

# <svg aria-hidden="true" focusable="false" height="16" version="1.1" viewbox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg>Other stuff to call out…

## <svg aria-hidden="true" focusable="false" height="16" version="1.1" viewbox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg>The Blink shipping process

I wouldn't really bother trying to explain this considering there's a great
[video](https://www.youtube.com/watch?v=y3EZx_b-7tk&list=PLNYkxOF6rcIDA1uGhqy45bqlul0VcvKMr&index=17) and [write-up](https://blog.chromium.org/2019/11/intent-to-explain-demystifying-blink.html) that already do a much better job at it. If you are interested in learning how the web platform works in terms of new feature proposals, the addition and review of web standards, how features are shipped to [Blink](http://www.chromium.org/blink) (Chromium's rendering engine) and how that complements the web standards process, check them out.

## <svg aria-hidden="true" focusable="false" height="16" version="1.1" viewbox="0 0 16 16" width="16"><path fill-rule="evenodd" d="M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z"></path></svg>Origin Trials

This isn't really news, but to the uninitiated, "Origin Trials" are Chrome's way to quickly experiment with and gather feedback on new and potentially upcoming web features, by providing some partner websites early access to try out early implementations and give feedback on usability, practicality and effectiveness that eventually makes its way to the web standards community. APIs exposed through Origin Trials are unstable by intent (prone to changes) and, at one point before the end of the trials, briefly go out of availability. They also have inbuilt safeguarding mechanisms to get disabled if 0.5% of Chrome's page loads start accessing the API, to prevent such experimental APIs from being exposed or baked too much into the open web before they are done right and become standards.
For more information, see the [explainer](https://github.com/GoogleChrome/OriginTrials/blob/gh-pages/explainer.md), [developer guide](https://github.com/GoogleChrome/OriginTrials/blob/gh-pages/developer-guide.md) or [blink documentation](https://www.chromium.org/blink/origin-trials/running-an-origin-trial). If you are interested in playing around with a hot new API that’s landing in Chrome but far from general availability, this is your way to go!

## Content Indexing API

Here comes that one-off API that I didn’t quite know where to put. The idea is that if you have offline content you can present to a user, you can ask the browser to index it so that it can be surfaced in prominent places like Chrome’s landing page, collectively known as the “Discovery” experience. Check out the [proposal](https://github.com/rayankans/content-index) to learn more.

_That’s all folks!_
abhishekcghosh
220,837
Building my next HTTP server, part 2
This is the second post of a series about my HTTP server On my first post of this series I explained...
3,333
2020-02-18T20:14:49
https://dev.to/andrepiske/building-my-next-http-server-part-2-466g
ruby, webdev, showdev
_This is the second post of a series about my HTTP server_

On my first post of this series I explained how the plan for my HTTP server is for it to be asynchronous, so it makes the best use of I/O and CPU at the same time.

In order to be asynchronous, the problem I have to solve is that a call to [`TCPServer#accept`](https://ruby-doc.org/stdlib-2.5.0/libdoc/socket/rdoc/TCPServer.html#method-i-accept) or to [`IO#read`](https://ruby-doc.org/core-2.5.0/IO.html#method-i-read) is a blocking call. This means that the method call will block the execution flow until there is something to be read from the other side. If there are two clients connected to the server and one of them gets stuck, it may block the whole server. A single client must never be able to block a whole server!

There are a few technologies that can be used to solve this issue, but all of them share the same basic idea. If we think about a blocking read operation, we can break it down into two pieces. Let's use those pieces to picture what the implementation of the `read` method could look like:

```ruby
class TCPServer
  def read(length)
    wait_until_bytes_available(length)
    read_available_bytes(length)
  end
end
```

Of course, those two methods are fictitious and only serve to illustrate the idea. The `wait_until_bytes_available` method waits until `length` bytes are available to be read from the wire. This is where the actual blocking occurs, as this method will only return when there are enough bytes to be read. If the client never sends more data, the method would never return. In reality, the method would return with an error state if the connection is broken.

After that waiting, the `read_available_bytes` method call then does the actual reading. It reads `length` bytes from the wire without blocking and then returns those bytes.

Now, in order to have it working asynchronously, it's just a matter of removing the waiting part.
That is done by removing the `wait_until_bytes_available` method call. Now we're only left with the actual reading method. And Ruby, in fact, has a method just for that: [`IO#read_nonblock`](https://ruby-doc.org/core-2.5.0/IO.html#method-i-read_nonblock).

The `#read_nonblock` method takes one argument that says how many bytes should be read _at maximum_. That is, the method will never read more bytes than what was passed in the argument, but it can read fewer in case there just aren't enough bytes available to be read.

Now, `read` is not the only method that will block. We also have to solve the issue with the `accept` method. And that is very easy, because Ruby has the [`TCPServer#accept_nonblock`](https://ruby-doc.org/stdlib-2.5.0/libdoc/socket/rdoc/TCPServer.html#method-i-accept_nonblock) method! Also, which will be needed later, there is the non-blocking counterpart for the `write` method, which is the, you guessed it, [`IO#write_nonblock`](https://ruby-doc.org/core-2.5.0/IO.html#method-i-write_nonblock).

Those `*_nonblock` methods are the way to go from here. But they introduce a lot of other difficulties to be dealt with. For instance, what should be done if there is nothing to read from the wire right now? That doesn't mean there won't be anything in the near future. Also, since the `read_nonblock` method can read fewer bytes than specified in its argument, the data can arrive in small pieces. How do we manage those pieces and process them together afterwards?

I intend to cover those issues in the next post of this series, so see you there!
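To make the `*_nonblock` behavior above concrete, here is a small runnable sketch. It uses a local socket pair instead of a real HTTP client, purely for illustration:

```ruby
require 'socket'

# A connected pair of sockets stands in for a server-side socket and a client.
reader, writer = Socket.pair(:UNIX, :STREAM)

# Nothing has been written yet, so instead of blocking,
# read_nonblock raises IO::WaitReadable.
begin
  reader.read_nonblock(16)
rescue IO::WaitReadable
  puts "nothing to read yet"
end

writer.write("hello")

# Now there is data, so the call returns immediately. Note that it may
# return fewer bytes than requested: we asked for 16, only 5 are buffered.
puts reader.read_nonblock(16) # => "hello"
```

Rescuing `IO::WaitReadable` is exactly the "what if there is nothing to read right now?" situation mentioned above: instead of blocking, the caller gets a chance to go do something else and retry later.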
andrepiske
221,894
Do u use Tailwind in React?
Hearing a lot about Tailwind CSS recently. Is there a React implementation and is it better than Mate...
0
2019-12-16T15:05:45
https://dev.to/bamboriz/do-u-use-tailwind-in-react-46m8
css, react, discuss
Hearing a lot about Tailwind CSS recently. Is there a React implementation, and is it better than Material-UI?
bamboriz
220,843
JavaScript's Async + Await in 5 Minutes
Bye Bye Promise inception and callback fury! 👋🎉 It’s likely that you’ve encountered Promises in yo...
0
2019-12-14T13:22:38
https://dev.to/jh3y/javascript-s-async-await-in-5-minutes-3e6d
javascript, webdev, typescript, beginners
> Bye Bye Promise inception and callback fury! 👋🎉

It’s likely that you’ve encountered Promises in your JavaScript (_if you haven’t, check out this [guide](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Using_promises) quickly 👍_). They allow you to hook into the completion of asynchronous calls. They make it simple to chain asynchronous operations or even group them together. There is one tiny downside: when consuming Promises, the syntax isn’t always the prettiest.

Introducing __async__ + __await__ 🎉

For those in camp __TL;DR__: `async` + `await` are syntactic sugar for consuming your `Promise`s 🍭 They aid in understanding the flow of your code. There are no new concepts, it’s `Promise`s with nicer shoes 👟 Scroll down for a `gist` ⌨️

## Baking a cake with code 🍰

We are going to bake a cake 🍰 yum! To bake the cake, we first need to get the ingredients. I’m sorry, it’s a plain sponge 😅

* Butter
* Flour
* Sugar
* Eggs 🥚

In our code, getting each ingredient requires an asynchronous operation. For example, here is the method `getButter`:

```javascript
const getButter = () =>
  new Promise((resolve, reject) => {
    setTimeout(() => resolve('Butter'), 3000)
  })
```

These operations will become part of a `getIngredients` method. When we bake the cake, we will need to invoke `getIngredients` before mixing, etc.

## With Promises

Let’s assume we need to chain each asynchronous operation. `getIngredients` is a journey around a supermarket picking up one ingredient at a time 🛒 In most cases, we only need to chain operations if they are dependent on each other. For example, if the second operation needs the return value from the first operation, and so on. In our example, it may be that we can only add one item to our shopping basket at a time. That means we need to progress through the ingredients one by one. Remember, the code here is hypothetical and only shows the use of Promises 😉

How might `getIngredients` look with Promises?
I’ve certainly seen nested Promises like this before 👀

```javascript
const getIngredients = () =>
  new Promise((resolve, reject) => {
    getButter().then((butter) => {
      updateBasket(butter)
      getFlour().then((flour) => {
        updateBasket(flour)
        getSugar().then((sugar) => {
          updateBasket(sugar)
          getEggs().then((eggs) => {
            updateBasket(eggs)
            resolve(basket)
          })
        })
      })
    })
  })
```

This works but doesn’t look great 👎 It would look better with a Promise chain.

```javascript
const getIngredients = () =>
  getButter()
    .then(updateBasket)
    .then(getFlour)
    .then(updateBasket)
    .then(getSugar)
    .then(updateBasket)
    .then(getEggs)
    .then(updateBasket)
```

If we were doing our grocery shopping online, we could use `Promise.all` 🤓

```javascript
const getIngredients = () =>
  Promise.all([
    getButter(),
    getFlour(),
    getSugar(),
    getEggs(),
  ])
```

These look much tidier but we still need to use a callback to get those ingredients.

```javascript
getIngredients().then(ingredients => doSomethingWithIngredients(ingredients))
```

## Tidying it up with async + await

Let’s sprinkle on that syntactic sugar 🍭 To use the `await` keyword, we must first declare a method as asynchronous with the `async` keyword. It’s important to note that an `async` method _will always_ return a `Promise`. That means there is no need to explicitly return a `Promise` 🎉

Let’s declare `getIngredients` as async:

```javascript
const getIngredients = async () => {}
```

Now, how might those `Promise`s look with sugar? The `await` keyword allows us to wait for a `Promise` and define a variable with the return value of that `Promise`. It's a little verbose for this example, but let’s apply that sugar to `getIngredients`.

```javascript
const getIngredients = async () => {
  const butter = await getButter()
  const flour = await getFlour()
  const sugar = await getSugar()
  const eggs = await getEggs()
  return [
    butter,
    flour,
    sugar,
    eggs,
  ]
}
```

The code isn't smaller, but it's flatter and easier to follow 👍 No more callbacks.
It's when we consume a `Promise` that the syntactic sugar comes into play.

```javascript
const bakeACake = async () => {
  const ingredients = await getIngredients()
  // do something with the ingredients, no more ".then" 🙌
}
```

Wow! 😎 How much cleaner is that? The use of `async` and `await` makes our code read procedurally, top to bottom. It looks cleaner and does exactly the same thing. It’s important to remember here that we aren’t replacing `Promise`s, we're still using them under the hood. Now we're using them with a new, cleaner syntax.

And yes, this works with `Promise.all` too. So if we had done the shopping online, our code gets even smaller.

```javascript
const getIngredients = async () => {
  const ingredients = await Promise.all([
    getButter(),
    getFlour(),
    getSugar(),
    getEggs(),
  ])
  return ingredients
}
```

We don't need that wrapper function anymore!

```javascript
const getIngredients = async () =>
  await Promise.all([getButter(), getFlour(), getSugar(), getEggs()]);
```

## Awaiting a non-Promise

How about if the value you `await` is not a `Promise`? In our example, the asynchronous functions are returning a `String` after a `setTimeout`.

```javascript
const egg = await 🥚
```

There will be no error; the value is simply wrapped in a resolved `Promise` 😅

## What about rejections?

Up until now, we’ve dealt with the happy path 😃 But how about the case where a `Promise` rejects? For example, what if there are no eggs in stock? Our asynchronous function `getEggs` would reject with a potential error.
To accommodate for this, a simple `try`/`catch` statement will do the trick 👍

```javascript
const getIngredients = async () => {
  try {
    const butter = await getButter()
    const flour = await getFlour()
    const sugar = await getSugar()
    const eggs = await getEggs()
    return [
      butter,
      flour,
      sugar,
      eggs,
    ]
  } catch(e) {
    return e
  }
}
```

We could wrap at this level or higher up where we invoke `getIngredients` 👍

## Consuming our function and baking the cake 🍰

If you’ve got this far, we’ve created our function for `getIngredients` with the new `async` + `await` keywords. What might the rest of it look like?

```javascript
const bakeACake = async () => {
  try {
    // get the ingredients
    const ingredients = await getIngredients()
    // mix them together
    const cakeMix = await mix(ingredients)
    // put in the oven at 180C, gas mark 4, for 20-25 minutes
    const hotCake = await cook(cakeMix)
    // allow to stand before serving
    const cake = await stand(hotCake)
    return cake
  } catch (e) {
    return e
  }
}
```

Much cleaner than what we might have done previously with `Promise`s 🎉

## That’s it!

Baking a cake with async + await in 5 minutes 🍰 If you’ve got this far, thanks for reading 😃 I’ve put together a gist with some possible example code that can be seen below along with some further resources on `async` + `await`.

The important takeaways ⚠️:

* `async` functions will always return a `Promise`
* `await` will in most cases be used against a `Promise` or a group of `Promise`s
* Handle any potential errors with a `try`/`catch` statement 👍
* We haven’t touched on this, but you can `await` an `await`. Making a `fetch` request, you might `await` the request and then `await` the `json` function.

```javascript
const data = await (await fetch(`${dataUrl}`)).json()
```

As always, any questions or suggestions, please feel free to leave a response or [tweet me 🐦](https://twitter.com/@jh3yy)!
Be sure to follow me on the socials 😎 {% gist https://gist.github.com/jh3y/c7870ca6fb01d5e8577a9f1474a2c162 %} ### Further resources * [`await`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/await) — MDN * [`async` function](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/async_function) — MDN * [Async + Await podcast](https://syntax.fm/show/028/async-await) — Syntax.fm
jh3y
220,865
Firefox: Installing Self-Signed certificate on Ubuntu
While developing webapps you may need to use HTTPS to match production environment. For local purpos...
0
2019-12-16T13:12:34
https://dev.to/lmillucci/firefox-installing-self-signed-certificate-on-ubuntu-4f11
linux, ubuntu, firefox, ssl
While developing webapps you may need to use HTTPS to match the production environment. For local purposes you may not need a real certificate, and a self-signed SSL certificate could be enough. A self-signed certificate is an SSL certificate which is not signed by any of the recognized certification authorities. If you want, you can create [your self-signed certificate in 2 minutes](https://dev.to/kauresss/2-minute-self-signed-ssl-certificate-for-localhost-5817).

Browsers do not trust self-signed certificates by default and display an error message that warns the user that the site you are connecting to does not have a recognized certificate and that this could be a potential security risk. For local development these warnings can be pretty annoying, so you may want to ensure that your browser recognizes your certificate as valid. Let's see how to add a self-signed certificate to Firefox!

# Finding the Firefox profile folder

All the customizations you make in Firefox are stored in a special folder called a `profile`. To add a certificate, the first thing to do is to find out where your profile is stored. You can find it simply by typing `about:profiles` in Firefox’s address bar, and then pressing Enter.

![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/226fvl2ze0mhsnil5neu.jpg)

The folder you are looking for is the one labeled `Root Directory`. For example, my profile is stored at `/home/lorenzo/.mozilla/firefox/57w4ghfg.default-1394286602246-1560669052441`

# Installing the certificate

To install the certificate you have to ensure that `certutil` is installed on your system.
In case it is missing you can install it with:

```
sudo apt install libnss3-tools
```

Now you are ready to add the certificate:

```
certutil -A -n "<CERT_NICKNAME>" -t "TC,," -i <PATH_FILE_CRT> -d sql:<FIREFOX_PROFILE_PATH>
```

where:
- CERT_NICKNAME: an alias for the certificate
- PATH_FILE_CRT: the path of the certificate you want to add
- FIREFOX_PROFILE_PATH: the path where your Firefox profile is stored

NOTE: if you want to know what `trustargs` are, you can [read the documentation](https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/Reference/NSS_tools_:_certutil).

For example, to add the certificate on my PC I have to use:

```
certutil -A -n "slope" -t "TC,," -i ~/Downloads/slope.crt -d sql:/home/lorenzo/.mozilla/firefox/57w4ghfg.default-1394286602246-1560669052441
```

NOTE: after you install the certificate, you have to restart Firefox for the changes to take effect.

# Displaying installed certificates

To ensure that the certificate was added correctly, you can display all the installed certificates using:

```
certutil -d sql:<FIREFOX_PROFILE_PATH> -L
```

# Removing a certificate

If you need to remove an installed certificate you can do it with:

```
certutil -D -n "<CERT_NICKNAME>" -d sql:<FIREFOX_PROFILE_PATH>
```

And voilà! You're all done and up and running!

---

Feel free to reach out to me! [Blog (in italian)](https://www.lorenzomillucci.it/) || [Twitter](https://twitter.com/LorenzoMillu) || [GitHub](https://github.com/lmillucci) || [LinkedIn](https://www.linkedin.com/in/lorenzo-millucci/)
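If you don't yet have a certificate to import, a quick way to generate one for local development is with `openssl`. This is a hedged sketch, not from the article itself; the file names and the `localhost` subject are illustrative:

```shell
# Generate a self-signed certificate and private key for localhost,
# valid for one year (file names are illustrative).
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout localhost.key -out localhost.crt \
  -days 365 -subj "/CN=localhost"

# Inspect the subject of the certificate we just created.
openssl x509 -in localhost.crt -noout -subject
```

The resulting `localhost.crt` is what you would then pass to `certutil -i` as shown above.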
lmillucci
220,876
Build WordPress App with React Native #25 : Setup Firebase Push Notification [Android]
This series intends to show how I build app to serve content from my WordPress blog by using react na...
3,611
2019-12-14T12:18:12
https://kriss.io/build-wordpress-client-app-with-react-native-25-setup-firebase-push-notification-android/
firebase, reactnative, reactnativewordpress, android
---
title: Build WordPress App with React Native #25 : Setup Firebase Push Notification [Android]
published: true
date: 2019-12-14 03:57:29 UTC
tags: Firebase,React Native,React native Wordpress,android
canonical_url: https://kriss.io/build-wordpress-client-app-with-react-native-25-setup-firebase-push-notification-android/
cover_image: https://cdn-images-1.medium.com/max/1024/1*0qWyQ1BHUL6FoFfa-uPTYg.png
series: Build Wordpress Client App with React Native
---

This series intends to show how I built an app to serve content from my WordPress blog using React Native. Since my blog talks about React Native, the series and the articles are interconnected. We will learn how to set up many packages that make our lives comfortable and learn how to deal with WordPress APIs. The most prominent features covered in this series are the dark theme, offline mode, infinite scroll and many more. You can discover much more throughout the series. The inspiration for this tutorial series came from the [React Native Mobile Templates](https://www.instamobile.io) from Instamobile.

In case you want to learn from the beginning, all the previous parts of this tutorial series are available below:

1. [Build WordPress Client App with React Native #1: Overview](https://kriss.io/build-wordpress-client-app-with-react-native-1-overview/)
2. [Build WordPress Client App with React Native #2: Setting Up Your Environment](https://kriss.io/build-wordpress-client-app-with-react-native-2-setting-up-your-environment/)
3. [Build WordPress Client App with React Native #3: Handle Navigation with React navigation](https://kriss.io/build-wordpress-client-app-with-react-native-3-handle-navigation-with-react-navigation/)
4. [Build WordPress Client App with React Native #4: Add Font Icon](https://kriss.io/build-wordpress-client-app-with-react-native-4-add-font-icon/)
5. 
[Build WordPress Client App with React Native #5 : Home Screen with React native paper](https://kriss.io/build-wordpress-client-app-with-react-native-5-using-react-native-paper/) 6. [Build WordPress Client App with React Native #6 : Using Html renderer and Moment](https://kriss.io/build-wordpress-client-app-with-react-native-6-using-html-renderer-and-moment/) 7. [Build WordPress Client App with React Native #7: Add pull to refresh and Infinite scroll](https://kriss.io/build-wordpress-client-app-with-react-native-7-add-pull-to-refresh-and-infinite-scroll/) 8. [Build WordPress Client App with React Native #8: Implementing SinglePost Screen](https://kriss.io/build-wordpress-client-app-with-react-native-8-implementing-singlepost-screen/) 9. [Build WordPress Client App with React Native #9: implement simple share](https://kriss.io/build-wordpress-client-app-with-react-native-9-implement-simple-share/) 10. [Build WordPress Client App with React Native #10: Setup and save bookmark](https://kriss.io/build-wordpress-client-app-with-react-native-10-setup-and-save-bookmark/) 11. [Build WordPress Client App with React Native #11: Remove and Render Bookmark](https://kriss.io/build-wordpress-client-app-with-react-native-11-remove-and-render-bookmark/) 12. [Build WordPress Client App with React Native #12: Categories screen](https://kriss.io/build-wordpress-client-app-with-react-native-12-categories-screen/) 13. [Build WordPress Client App with React Native #13: Configuring firebase in contact screen](https://kriss.io/build-wordpress-client-app-with-react-native-13-configuring-firebase-in-contact-screen/) 14. [Build WordPress Client App with React Native #14 : Implementing Settings Screen](https://kriss.io/build-wordpress-client-app-with-react-native-14-implementing-settings-screen/) 15. 
[Build WordPress Client App with React Native #15 : Forwarding message to inbox with Cloud function](https://kriss.io/build-wordpress-client-app-with-react-native-15-forwarding-message-to-inbox-with-cloud-function/) 16. [Build WordPress Client App with React Native #16 : Dark theme](https://kriss.io/build-wordpress-client-app-with-react-native-16-dark-theme/) 17. [Build WordPress Client App with React Native #17 : Fix react-native-render-html to change theme](https://kriss.io/build-wordpress-client-app-with-react-native-17-fix-react-native-render-html-to-change-theme/) 18. [Build WordPress Client App with React Native #18 : changing Theme](https://kriss.io/build-wordpress-client-app-with-react-native-18-manually-changing-theme/) 19. [Build WordPress Client App with React Native #19 : Notify user when offline](https://kriss.io/build-wordpress-client-app-with-react-native-19-notify-user-when-offline/) 20. [Build WordPress Client App with React Native #20 : Saving data to cache](https://kriss.io/build-wordpress-client-app-with-react-native-20-saving-data-to-cache/) 21. [Build WordPress Client App with React Native #21 : Splash Screen on iOS](https://kriss.io/build-wordpress-client-app-with-react-native-21-splash-screen-on-ios/) 22. [Build WordPress Client App with React Native #22 : Splash Screen on Android](https://kriss.io/build-wordpress-client-app-with-react-native-22-splash-screen-on-android/) 23. [Build WordPress Client App with React Native #23 : Setup Firebase Push notification [iOS]](https://kriss.io/build-wordpress-client-app-with-react-native-23-setup-firebase-push-notification-ios/) To write your Firebase Cloud Messaging Android client app, use the `FirebaseMessaging`API and [Android Studio 1.4 or higher](https://developer.android.com/sdk/index.html?authuser=0) with Gradle. The instructions in this page assume that you have completed the steps for [adding Firebase to your Android project](https://firebase.google.com/docs/android/setup?authuser=0). 
FCM clients require devices running Android 4.1 or higher that also have the Google Play Store app installed, or an emulator running Android 4.1 with Google APIs. Note that you are not limited to deploying your Android apps through the Google Play Store.

## Set up Firebase and the FCM SDK

1. In your project-level **`build.gradle`** file, make sure to include Google’s Maven repository in both your **`buildscript`** and **`allprojects`** sections.
2. Add the dependency for the Cloud Messaging Android library to your module (app-level) Gradle file (usually **`app/build.gradle`**):

```
implementation 'com.google.firebase:firebase-messaging:20.1.0'
```

like this image

![](https://kriss.io/wp-content/uploads/2019/12/img_5df45c61c1346.png)

## Edit your app manifest

Add a service that extends **`FirebaseMessagingService`**. This is required if you want to do any message handling beyond receiving notifications on apps in the background. To receive notifications in foregrounded apps, to receive data payloads, to send upstream messages, and so on, you must extend this service.
```
<service
    android:name=".java.MyFirebaseMessagingService"
    android:exported="false">
    <intent-filter>
        <action android:name="com.google.firebase.MESSAGING_EVENT" />
    </intent-filter>
</service>
```

like this image

![](https://kriss.io/wp-content/uploads/2019/12/img_5df45d5de4924.png)

## Install the RNFirebase Messaging package

Add the `RNFirebaseMessagingPackage` to your `android/app/src/main/java/com/[app name]/MainApplication.java`:

```
import io.invertase.firebase.messaging.RNFirebaseMessagingPackage;
```

then activate the package:

```
@Override
protected List<ReactPackage> getPackages() {
  @SuppressWarnings("UnnecessaryLocalVariable")
  List<ReactPackage> packages = new PackageList(this).getPackages();
  // Packages that cannot be autolinked yet can be added manually here, for example:
  packages.add(new AsyncStoragePackage());
  packages.add(new RNFirebaseDatabasePackage());
  packages.add(new RNFirebaseAdMobPackage());
  packages.add(new RNFirebaseMessagingPackage());
  return packages;
}
```

like this image

![](https://kriss.io/wp-content/uploads/2019/12/img_5df45d5de4924.png)

## Summary

We have now completed the setup on the Android side and learned how to activate Firebase messaging for Android.

The post [Build WordPress Client App with React Native #25 : Setup Firebase Push Notification [Android]](https://kriss.io/build-wordpress-client-app-with-react-native-25-setup-firebase-push-notification-android/) appeared first on [Kriss](https://kriss.io).
kris
220,895
Reading Snippets [10]
Pure functions are functions that satisfy two conditions: Deterministic No Side Effects They ar...
0
2019-12-14T12:31:21
https://dev.to/calvinoea/reading-snippets-10-518j
beginners, javascript
Pure functions are functions that satisfy two conditions:

- Deterministic
- No Side Effects

They are deterministic, which means that they will always return the same output for any given input. They also do not have side effects, which means they cannot change anything outside their own scope, such as global variables. Not only do pure functions not have side effects, but they also do not rely on code that has side effects.
calvinoea
220,915
PHP form 20: input text textarea
Happy Coding Previous In index.php &lt;form method="post" action="p...
0
2019-12-14T13:28:25
https://dev.to/antelove19/php-form-20-input-text-textarea-17df
php, form
___

### <center>Happy Coding</center>

---

{% replit @antelove19/PHP-form-20 %}

---

[`Previous`](https://dev.to/antelove19/php-form-10-input-text-1ep8)

In **index.php**:

```php
<form method="post" action="process.php" >

  Firstname: <input type="text" name="firstname" />
  <br />

  Lastname: <input type="text" name="lastname" />
  <br />
```

Add a **textarea** input for the description:

```php
  Description: <textarea name="description" rows="10" cols="50"></textarea>
```

---

```php
  <br />
  <hr />

  <input type="submit" name="submit" value="Submit" />

</form>
```

---

In **process.php**:

```php
echo "<pre>";
var_dump($_POST);
```

---

[`Source Code`]()
[`Next`](https://dev.to/antelove19/php-form-30-input-text-radio-textarea-4kf4)

---

<center>Thanks for reading :)</center>

---
antelove19
220,945
PHP form 45: input text textarea select-multiple
Happy Coding Previous We will create form with type select multiple input...
0
2019-12-14T14:41:41
https://dev.to/antelove19/php-form-45-input-text-textarea-select-multiple-322m
php, form
___

### <center>Happy Coding</center>

---

{% replit @antelove19/PHP-form-45 %}

---

[`Previous`](https://dev.to/antelove19/php-form-40-input-text-textarea-select-27mc)

We will create a form with a **select multiple** input:

```php
<form method="post" action="process.php" >

  Firstname: <input type="text" name="firstname" />
  <br />

  Lastname: <input type="text" name="lastname" />
  <br />

  Description: <textarea name="description" rows="10" cols="50"></textarea>
  <br />
```

Add an input of type **select** with:

1. Its name in array form with the *[]* suffix
2. The *multiple* attribute set
3. The *size* attribute set; the default is *size='4'* (visible options)

```php
  Programming Language (multi): (hold ctrl + click item)
  <select name="languages[]" multiple size="5" >
    <option value="c" >C</option>
    <option value="c++" >C++</option>
    <option value="java" >Java</option>
    <option value="javascript" >Javascript</option>
    <option value="php" selected >PHP</option>
  </select>
```

---

```php
  <br />
  <hr />

  <input type="submit" name="submit" value="Submit" />

</form>
```

---

And, the process:

```php
echo "<pre>";
var_dump($_POST);
```

---

[`Source Code`]()
[`Next`](https://dev.to/antelove19/php-form-50-input-text-textarea-checkbox-4gni)

---

<center>Thanks for reading :)</center>

---
antelove19
220,948
PHP form 55: input text textarea checkbox-multiple
Happy Coding Previous We will create form with type checkbox multi input:...
0
2019-12-14T14:49:05
https://dev.to/antelove19/php-form-55-input-text-textarea-checkbox-multiple-381j
php, form
___

### <center>Happy Coding</center>

---

{% replit @antelove19/PHP-form-55 %}

---

[`Previous`](https://dev.to/antelove19/php-form-50-input-text-textarea-checkbox-4gni)

We will create a form with multiple **checkbox** inputs:

```php
<form method="post" action="process.php" >

  Firstname: <input type="text" name="firstname" />
  <br />

  Lastname: <input type="text" name="lastname" />
  <br />

  Description: <textarea name="description" rows="10" cols="50"></textarea>
  <br />
```

Add inputs of type **checkbox** that all share the same name, in array form with the *[]* suffix:

```php
  Hobbies (multiple):
  <input type="checkbox" name="hobbies[]" value="studying" /> Studying
  <input type="checkbox" name="hobbies[]" value="reading" checked /> Reading
  <input type="checkbox" name="hobbies[]" value="writing" /> Writing
  <input type="checkbox" name="hobbies[]" value="sleeping" checked /> Sleeping
```

---

```php
  <br />
  <hr />

  <input type="submit" name="submit" value="Submit" />

</form>
```

---

And, the process:

```php
echo "<pre>";
var_dump($_POST);
```

---

[`Source Code`]()
[`Next`](https://dev.to/antelove19/php-form-60-input-text-textarea-file-4okj)

---

<center>Thanks for reading :)</center>

---
antelove19
220,994
Kotlin type Extensions
Assumptions General understanding of kotlin The kotlin type system is extremely powerful. It prov...
0
2019-12-14T17:18:34
https://dev.to/dgoetsch/kotlin-type-extensions-1i88
kotlin, functional
*Assumptions*

* General understanding of Kotlin

The Kotlin type system is extremely powerful. It provides a wealth of useful tools: data classes, interfaces with default implementations, sealed classes, object types, final-by-default on concrete types - the list goes on. However, there is one feature that, in my mind, stands out as overlooked but incredibly valuable - extension types.

### What is an extension type?

Let's take a simple class as an example:

```kotlin
import java.util.Queue
import java.util.concurrent.LinkedBlockingQueue

class Context<T> {
    private val items: Queue<T> = LinkedBlockingQueue()

    fun add(item: T) {
        items.add(item)
    }

    fun drainAll(): List<T> {
        val result = items.toList()
        items.clear()
        return result
    }
}
```

Let's say you had some common methods that you had to run on this class, for example a "merge" method. Coming from Java, you would reach for a static method; the Kotlin equivalent is a top-level function like so:

```kotlin
fun <T> merge(context1: Context<T>, context2: Context<T>): Context<T> {
    val newContext = Context<T>()
    context1.drainAll().forEach { newContext.add(it) }
    context2.drainAll().forEach { newContext.add(it) }
    return newContext
}
```

You could then invoke this method like so:

```kotlin
val context1 = Context<String>()
val context2 = Context<String>()
merge(context1, context2)
```

This implementation works; even if it isn't the most elegant, it is pragmatic. It's easy to test, and the semantics are clear.
Extension methods allow us to express the exact same code a little more elegantly:

```kotlin
fun <T> Context<T>.merge(context: Context<T>): Context<T> {
    val newContext = Context<T>()
    this.drainAll().forEach { newContext.add(it) }
    context.drainAll().forEach { newContext.add(it) }
    return newContext
}
```

You now invoke `merge` on a context object like you would any normal method on the `Context<T>` class:

```kotlin
val context1 = Context<String>()
val context2 = Context<String>()
context1.merge(context2)
```

The function definition `fun <T> Context<T>.merge(...)` tells the compiler that, within this method, the keyword `this` references the `Context<T>` on which `merge` was invoked.

### Extension type implementation and limitations

At compile time, an extension method becomes a static method that takes the receiver as an extra first argument. In fact, in the above example, the static-method version is conceptually what the compiler creates from the extension-method version.

Extension methods have a few limitations:

* extension methods cannot access protected or private members
* extension methods cannot override existing methods
* *extensions can be messy and can make code much much worse if misused.*

### Extension Functions

Above we saw a method with an extension type. Kotlin takes this one step further by providing the ability to apply this same principle to functions.

```kotlin
fun <T> withContext(body: Context<T>.() -> Unit): List<T> {
    val context = Context<T>()
    context.body()
    return context.drainAll()
}
```

```kotlin
val result: List<String> = withContext<String> {
    add("hello")
    add("goodbye")
}
```

Just like extension methods, extension functions expose the type on the left side of the function parameter declaration as `this` within the function.
The following screenshot shows the IDE identifying `this` in the block inside of withContext: ![IDE extension function](https://thepracticaldev.s3.amazonaws.com/i/pu86eb2wgx3r8jabg8nk.png) ### Extension methods on type parameters The Kotlin standard library is full of excellent extension methods like the [kotlin use function](https://kotlinlang.org/api/latest/jvm/stdlib/kotlin.io/use.html) First, let's naively attempt to implement use: ```kotlin inline fun <R> Closeable?.uselessUse(block: (Closeable?) -> R): R { ... } var stream: BufferedInputStream = BufferedInputStream(FileInputStream(File("my-file.txt"))) stream.uselessUse { closable: Closeable? -> //doesn't know closable is a BufferedInputStream false } ``` In this implementation, the `block` loses context of what type of closeable `uselessUse` was called on. This makes it difficult to interact with the `Closeable` inside of `block`, which undermines the utility of this method. ```kotlin inline fun <T : Closeable?, R> T.use(block: (T) -> R): R { ... } var stream: BufferedInputStream = BufferedInputStream(FileInputStream(File("my-file.txt"))) stream.use { bufferedInputStream: BufferedInputStream -> //knows that the closable was a BufferedInputStream String(bufferedInputStream.readAllBytes()) } ``` Notice how the `use` function extends a type parameter `T : Closeable?` instead of a concrete type `Closeable?`. This allows the `block` to have context of what type of `Closeable` is being used, and allows the user to take full advantage of auto-closeable functionality. ## Okay, so what? In Java, automatic resource closing arrived as a language feature (try-with-resources) in Java 7. It worked, and it provided some syntactic sugar, but it was incredibly limited in scope. With extension functions, the Kotlin standard library provides an implementation for auto-closeable resources and describes a reproducible pattern for extending the language itself. Extension functions are the main mechanism through which Kotlin enables DSL implementation. 
In my next post I'll go into detail about how this can be done.
dgoetsch
221,031
Day 10 – Mastering EventEmitter - Learning Node JS In 30 Days [Mini series]
Go to https://nodejs.org/dist/latest-v12.x/docs/api/fs.html then you can see the official documentation of fs...
0
2019-12-14T18:09:29
https://blog.nerdjfpb.com/day10-nodejsin30days/
node, javascript, codenewbie
Go to https://nodejs.org/dist/latest-v12.x/docs/api/fs.html and you can see the official documentation of fs ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/1o844vmsn79m5j94rjnw.png) Now, first we need to require fs and store it in a const by writing `const fs = require('fs')` ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/mqsw2ss1xbzkg65s9tv4.png) We are going to use `fs.readFileSync` ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/cx30ijdhdwg8sk73xmkj.png) I am using a `song.txt` file in the same folder; we read the file's text using `readFileSync` and log it to the console ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/o031isdkr46q1eu5xknk.png) Run it and see the magic ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/bm0j1obsjsr5k67dxmfe.png) We are going to use writeFileSync for writing to the song.txt file ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/lb37wdqtfycxkanoar1j.png) Using fs.writeFileSync(path, dataForWrite) we can write whatever we need into the file. ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/f6rbhjg8hr15dqdaozqc.png) After writing, we need to read the data again so that we can see whether we were able to change it - ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/jk92pt1s4iptakraxcgd.png) Have you used the fs module before? You can see the graphical version here {% instagram B6BF9-_AzXa %} It was originally published on [nerdjfpbblog](https://blog.nerdjfpb.com/day10-nodejsin30days/). You can connect with me on [twitter](https://twitter.com/nerdjfpb) or [linkedin](https://www.linkedin.com/in/nerdjfpb/)! 
You can read the old posts from here -- {% post nerdjfpb/day-1-learning-node-js-in-30-days-mini-series-55e7 %} {% post nerdjfpb/day-2-learning-node-js-in-30-days-mini-series-5023 %} {% post nerdjfpb/day-3-learning-node-js-in-30-days-mini-series-24i4 %} {% post nerdjfpb/day-4-learning-node-js-in-30-days-mini-series-1koc %} {% post nerdjfpb/day-5-learning-node-js-in-30-days-mini-series-21jm %} {% post nerdjfpb/day-6-learning-node-js-in-30-days-mini-series-758 %} {% post nerdjfpb/day-7-learning-node-js-in-30-days-mini-series-3023 %} {% post nerdjfpb/day-8-var-vs-let-vs-const-learning-node-js-in-30-days-mini-series-1i72 %} {% post nerdjfpb/day-9-mastering-eventemitter-learning-node-js-in-30-days-mini-series-2dfe %}
nerdjfpb
221,057
Is Multitasking Effective for Your Work as a Developer?
Multitasking is a fairly common concept. It is something that everyone has done to a certain degree a...
0
2019-12-14T19:23:34
https://dev.to/danilapetrova/is-multitasking-effective-for-your-work-as-a-developer-45
productivity, career, multitasking, discuss
Multitasking is a fairly common concept. It is something that everyone has done to a certain degree, and it is also one that many disagree on. Some believe it raises the pace of getting things done. Others think that, on the contrary, it causes you to complete two tasks more slowly if you do them at the same time rather than one after the other. ##What is multitasking? We are all very familiar with the term multitasking: it is simple, right? Performing multiple tasks at the same time. For example, listening to music while washing dishes, or maybe reading a blog post while waiting in line. Those are some of the ways you can do two things at once; however, it becomes more difficult when it comes to two highly intellectual tasks, as opposed to combining an intellectually taxing one with a mechanical one that is performed largely by muscle memory. ##Only some can multitask Do you know people who seem to be great at multitasking? They show you how efficient they are and act surprised when you cannot keep up with their speed. While your brain prefers consecutive actions to the constant switch, there are, in fact, people who have found ways to make the most of it in a way that works for them. They are better at quickly switching their attention from one task to another. The risk comes with the fact that while those people switch quickly, they are also susceptible to distractions, which is exactly why multitasking is considered to be damaging to productivity in terms of attention span. ##The pitfalls of multitasking Research in [neuroscience](https://www.psychologytoday.com/us/basics/neuroscience) has determined that the human brain doesn't really do tasks simultaneously. It is, however, fairly efficient at switching between tasks quickly; the switch in our heads is a quick consecutive start and stop command. And this is exactly the part of multitasking that has the potential to become tiring and, in fact, more time-consuming. 
Each switch costs us a few microseconds, and these add up, slowing your overall performance down by up to 50 percent depending on multiple factors, including your environment. You can check out more fun facts about multitasking in the [infographic below](https://www.dailyinfographic.com/the-high-cost-of-multitasking-infographic). ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/r2tvs8j095nhmyb1rmgj.jpg) In addition to being potentially more time-consuming, the process costs not only valuable time but also energy. The more tasks you try to string together, say listening to music, washing dishes and speaking to a family member, the more resources the switching costs. One thing you can do is attempt to do two tasks consecutively and time how much you can do in a certain amount of time. Then put them together, timing the process again, and compare the results. This is a good way to test for yourself whether multitasking is taking away from or adding to your productivity. ##Multitasking in software development Now let's be completely honest: people who rely heavily on digital tools have to be able to switch between tasks in order to complete an assignment. So it would be not only impractical but quite ignorant to advise developers to only do one thing and use one tool at a time. Coding and problem solving require obtaining information, processing it, changing the process or the code, and then testing how efficient it is. All of those steps, while done consecutively, are tightly intertwined, and you have to switch from one to another in the order that is most beneficial to your current task. So here are two of the main definitions of multitasking: 1. Multitasking means switching back and forth from one thing to another 2. Multitasking can also involve performing a number of tasks in rapid succession. Both of those are key in software development. 
Switching between tasks is not simply a programmer's choice but can quite literally be a necessity in order to code and create adequate software. So cutting out the idea of multitasking completely is no more productive than deciding to multitask as a constant practice at all times. ##When and how to multitask? As you may have realized by now, the key to effective productivity is neither avoiding it nor abusing it, but rather *moderation*. >"Everything that exceeds the bounds of moderation has an unstable foundation" >- Seneca You need to know when it is key to your assignment, when you are leaning on it as if it were a crutch out of habit, and when you are avoiding it because you think it is all bad. Assess the situation and use it when needed, as long as you are aware that it means simply switching from one task to another rather than doing them at the same time, and that it can be taxing.
danilapetrova
221,137
HTML5 Canvas Basics
You may have heard of HTML5 Canvas, but what exactly is it? Let's figure it out together! Simply put...
0
2019-12-15T22:00:37
https://dev.to/jadejdoucet/html5-canvas-basics-ee6
html, javascript, beginners
You may have heard of HTML5 Canvas, but what exactly is it? Let's figure it out together! Simply put, `<canvas>` is used to draw graphics on a web page. This tag is just a container for graphics, but this is good news if you're a JavaScript developer, because the drawing is done with the power of JavaScript! ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/hy3wyz8obk17wb0lvqkt.jpg) ##Getting Started Canvas has many methods for drawing; you can make loads of things like squares, boxes, paths, text, images, and more! Luckily, canvas is also fully supported by most modern browsers, even Microsoft Edge, if that's your thing. To create a canvas, you'd start out with something like this: ``` <canvas id="gameScreen" width="800" height="600"></canvas> ``` It's important to note here that this canvas needs to have an id; this is used for reference within your JavaScript. A border is also probably nice to have, so adding some style can help to visualize this a bit better. ``` <canvas id="gameScreen" width="800" height="600" style="border:1px solid black;"> </canvas> ``` That would result in something like this ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/siq3nujdavqffugx44e5.png) ##Drawing On this canvas, you can venture in many directions. If you wanted to simply draw a line across, you could do this ``` const canvas = document.getElementById("gameScreen"); const context = canvas.getContext("2d"); context.moveTo(0, 0); context.lineTo(800, 600); context.stroke(); ``` It looks like there's a lot going on here, so I'll break it down line by line. - On the first line, we are grabbing our canvas from our HTML page, using the id we set earlier, so we can have access to it within our JavaScript file. - The next line invokes the *getContext()* method on our canvas, which returns an object that provides methods for drawing on our canvas! In this case, I pass in the argument "2d", which is recognized by the method and returns the correct object which allows us to draw in our 2d space. 
There are other ways of utilizing tools for drawing in 3D spaces as well; check out [WebGL](https://developer.mozilla.org/en-US/docs/Web/API/WebGL_API/Tutorial/Getting_started_with_WebGL) for more on that! These last 3 lines are just invoking methods on our context. Side note: many developers will shorten context to "ctx", so keep that in mind when googling more about it all. - *context.moveTo* takes two parameters here: the X and Y position on our canvas to start drawing from. Web pages start with (0, 0) as the top-left-most corner. This is **very** important to remember, as most methods need to know your X and Y position. - *context.lineTo* is again taking an X and Y position, and it's simply creating our line to follow, from our "moveTo" position to our "lineTo" position. Think of this like drawing with pencil and paper: you move your hand to the top left, then draw down to the corner. Since the size of our canvas is 800 X 600, top left is (0, 0), so bottom right is our (800, 600). - *context.stroke* is what makes the physical line that you see, by following the path from the moveTo position to the lineTo position. ##Conclusion This is a very basic example of using canvas, but I plan to dive deeper into this soon. Something that inspired me to start learning to use canvas is actually [Cross Code](http://www.cross-code.com/en/home). ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/qvc53ld6rq50u7f4abtn.jpg) This game is entirely 100% written using HTML5 canvas with **regular JavaScript**! That's very exciting for someone like me with a long history in video games and a background in JavaScript; I can't wait to see what other games come from this. Thanks for reading; if you've created anything really cool with canvas, feel free to leave a comment, I'd love to check it out! For a really great walk-through of developing a block breaking game, I highly recommend this [freeCodeCamp video](https://www.youtube.com/watch?v=3EMxBkqC4z0).
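Beyond lines, the same context object can fill shapes and draw text. Here is a small, hedged sketch (the `drawScene` helper, the color, and the coordinates are my own illustration, not from the original post); writing it as a function that takes a context also makes it easy to exercise outside a browser:

```javascript
// Draws a filled square and a text label on a 2d canvas context.
function drawScene(ctx) {
  ctx.fillStyle = "steelblue";            // fill color for shapes
  ctx.fillRect(50, 50, 100, 100);         // x, y, width, height
  ctx.font = "24px sans-serif";
  ctx.fillText("Hello canvas!", 50, 200); // text, x, y
}

// In the browser you would wire it up like this:
// const canvas = document.getElementById("gameScreen");
// drawScene(canvas.getContext("2d"));
```

Like moveTo/lineTo, fillRect and fillText both use the same (0, 0)-at-top-left coordinate system.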
jadejdoucet
221,191
Awesome Terminal upgrades - Part Two: Upgrade and use a newer version of ZSH on macOS
Awesome Terminal upgrades - Part Two: Upgrade and use a newer version of ZSH on macOS
3,781
2019-12-15T02:36:00
https://dev.to/0xdonut/awesome-terminal-upgrades-part-two-upgrade-and-use-a-newer-version-of-zsh-on-macos-4hfg
opensource, productivity, zsh, bash
--- title: Awesome Terminal upgrades - Part Two: Upgrade and use a newer version of ZSH on macOS published: true description: Awesome Terminal upgrades - Part Two: Upgrade and use a newer version of ZSH on macOS tags: opensource, productivity, zsh, bash series: Awesome Terminal upgrades --- This is part two in the **"Awesome Terminal upgrades"** series. In this short tutorial we will cover ZSH on macOS and how you can upgrade and manage it using Homebrew. ## Install [Homebrew](https://brew.sh/) > The missing package manager for macOS (or Linux) We will be using Homebrew to manage everything for us; if you don't have it installed, go ahead and do it now! ``` /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)" ``` By default macOS has ZSH installed, but we're going to use Homebrew to install ZSH so we can upgrade it and keep up to date with new releases. ### Confirm the current active zsh version: ``` zsh --version ``` ### Confirm the location of zsh: ``` which zsh ``` The output should look like `/bin/zsh` ### Confirm the shell that’s set for your user: ``` dscl . -read /Users/$USER UserShell ``` The output should look like `UserShell: /bin/zsh` ### Install ZSH: ``` brew install zsh ``` ### Use brew ZSH: ``` sudo dscl . -create /Users/$USER UserShell /usr/local/bin/zsh ``` After that, restart your Terminal to have it take effect. Now if you run `which` again, you’ll see the system is recognizing the one you installed: ``` which zsh ``` The output should look like `/usr/local/bin/zsh` ### Confirm you're running brew ZSH: ``` dscl . -read /Users/$USER UserShell ``` The output should look like `UserShell: /usr/local/bin/zsh` ### Change your shell to the right ZSH ``` sudo chsh -s $(which zsh) ``` Sometimes switching ZSH shells can throw up some unexpected errors, so you might want to add this to your `.zshrc` ``` export PATH=$HOME/bin:/usr/local/bin:$PATH ``` Awesome, your shell is now using ZSH managed by Homebrew! 
You can update ZSH whenever you like, as well as your other Homebrew installs by running: ``` brew update && brew upgrade ```
0xdonut
221,202
My own journey (Part 2)
High School was about to be over And one of my friends who wanted to see the recently esta...
1,919
2019-12-15T03:36:51
https://dev.to/ackzell/my-own-journey-part-2-pi0
career
# High School was about to be over And one of my friends, who wanted to see the recently established campus of a fancy private school, took a bunch of us in his car and we basically had a group trip to get to know the place. I will forever be grateful to my friend for that trip in his car. Not only was it a really good time with my friends, but it made me curious about _just having the possibility_ of studying there! This was a really expensive place to study at. Tuition fees were waaaaay waaaaaay waaaaaay beyond my family's financial capabilities. It was certainly a long shot, but being nudged by my friend to take the test and see if I could score a scholarship there, I tried. # University times So there I was; after my parents agreed to a plan, and taking up a credit with the school, I would become an Engineer there... Stuff wasn't really easy during the first 2 years. I lived pretty far away from school, and being there early, doing homework and trying to do some extracurricular activities was kind of draining. I wasn't the brightest, nor was I the one with the highest quality of assignments being turned in. And this wasn't easy to swallow. I wasn't used to "not being able to deliver". At that time I was also pretty active in the LDS Church, and I was at an age at which you become a Missionary and get sent for 24 months to serve. So I did. I spent 24 of the most fantastic months of my life in the Boise, Idaho area. I am no longer an active member of the Church, but that is another story I guess 😬. # University times, take two So I came back, the school took me in again and we started our credit once more. This time though, something was different: for starters, I felt like I was a tiny bit more mature, and I think the Mission actually helped a little with my classes and study habits. On the other hand was this agreement with my dad's employer. 
The company would pay for part of my tuition fees and I would basically be an intern with them as long as I was a student. # One of the IT guys So now I was 2 years older, basically had a part-time job and was a full-time student. I learned so much from the people I worked with there. I was part of the IT team (yeah, including the whole "have you tried turning it off and on again?" 😅). But I also really liked programming, and expressed that to my boss. So he tasked me with a change to the in-house, kind-of-ERP system they had built from scratch in PHP. I did that one and got another assignment, then another more difficult one... and so on and so forth. I started rewriting parts of the system little by little and learning a ton of stuff while doing so (like just how difficult it was to keep browser compatibility in the UI if you were targeting IE6, Chrome and Firefox at the time _shivers_). VCS was not implemented there, but I had learned about SVN on a side project I was also working on at the time, so I pitched it and we started using it! This was a huge win for me at the time, and it felt so nice to be able to improve the development of their system even if by a little bit. A lot of other stuff was going on in there, and I learned about creating computer images to speed up the setup of new machines for the staff, a little bit of Active Directory configuration, giving remote support, Windows Server configs... good times indeed.
ackzell
221,276
Send message as a Telegram bot. What may go wrong?
Last month I've worked on @hltvFeatured – it's a Telegram bot to get notifications about upcoming Cou...
0
2019-12-17T13:45:11
https://dev.to/mbelsky/send-message-as-a-telegram-bot-what-may-go-wrong-1adf
telegram, bots, node, javascript
Last month I worked on [@hltvFeatured](https://t.me/hltvFeaturedBot) – it's a Telegram bot to get notifications about upcoming Counter-Strike: Global Offensive matches featured by HLTV.org. ![Example of @hltvFeatured message](https://thepracticaldev.s3.amazonaws.com/i/zb9ihaoxjfca6rbgsd7r.png) After a few weeks in production I got an alert that the bot fails on sending notifications to subscribers. I had no access to my PC, so it was nerve-racking. I didn't know what might have gone wrong. When I got back home, first of all I opened the IDE and started debugging. There were no issues with the database, the app's code or the network. But the Telegram API returned the error `400: Bad Request: can't parse entities`. I started analyzing what was wrong with the messages and why it hadn't failed before. The Telegram API allows you to format messages in two styles: [Markdown and HTML](https://core.telegram.org/bots/api#formatting-options). I had selected Markdown as the less verbose one and wrote a small function to convert a match data entity to a Markdown string: ```javascript function convertToMessage({ event, href, stars, title, unixTimestamp }) { const when = new Date(unixTimestamp).toUTCString() const date = formatUTCString(when).replace(/\s/g, NBSP) return ` [${title.replace(/\s/g, NBSP)}](${href}) Rating: ${'☆'.repeat(stars) || '–'} _${date} @ ${event}_ `.trim() } ``` That day the matches had an interesting event name: _cs_summit 5_, and I immediately noticed it. As you can see, the convert function makes the date and event _italic_: `_${date} @ ${event}_`. So the message contained three underscores. I didn't expect that the Telegram API couldn't parse a message like that, so I started searching for a dependency to escape symbols like `_`, `*` and `[` in match data before injecting it into the message template. You have no idea how surprised I was when the Telegram API answered with the same error for a message with escaped symbols. This time I went to google it. The solution was a suggestion to use HTML markup and escape symbols. 
I updated my function and... it works! ```javascript function convertToMessage({ event, href, stars, title, unixTimestamp }) { const when = new Date(unixTimestamp).toUTCString() const date = formatUTCString(when).replace(/\s/g, NBSP) return ` <a href="${href}">${escapeHtml(title).replace(/\s/g, NBSP)}</a> Rating: ${'☆'.repeat(stars) || '–'} <i>${date} @ ${escapeHtml(event)}</i> `.trim() } ``` I can't imagine a case where Markdown is a good choice to mark up messages delivered by a bot. And if you can, please share it in the comments :)
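The post doesn't show `escapeHtml` itself; a minimal sketch of such a helper (my own assumption about its behavior, covering the characters that are significant in Telegram's HTML parse mode) could look like this:

```javascript
// Escape the characters that are significant in Telegram's HTML parse mode.
function escapeHtml(text) {
  return text
    .replace(/&/g, '&amp;')  // must run first, so we don't double-escape
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;');
}

console.log(escapeHtml('cs_summit 5'));  // cs_summit 5
console.log(escapeHtml('<Dream> & co')); // &lt;Dream&gt; &amp; co
```

Notice how the underscores in _cs_summit 5_ pass through untouched: in HTML mode they have no special meaning, which is exactly what fixed the bot.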
mbelsky
221,375
Everything you should know about Javascript functions
This article was originally published at JavaScript functions Function in programming is one of t...
0
2019-12-25T06:28:52
https://www.blog.duomly.com/essential-knowledge-about-javascript-functions-with-examples/
javascript, programming, beginners, codenewbie
This article was originally published at <a href="https://www.blog.duomly.com/essential-knowledge-about-javascript-functions-with-examples">JavaScript functions</a> --- A function in programming is one of the most basic elements. It is a set of statements that performs some activity to get a result. In lots of cases, the action is performed using the data which is provided as input. The statements in the function are executed every time the function is invoked. Functions are used to avoid repeating the same code. The idea is to gather tasks that are executed more than once into a function and then call the function wherever you want to run that code. Taking into consideration that the function is such an important concept in Javascript, I'm going to take a look at: - defining a function, - calling a function, - the return statement, - parameters and arguments, - arrow functions, - self-invoking functions. _* To check the code execution, open the console in the browser and try to execute the code (if you are using Google Chrome, right-click on the page and select Inspect)_ <h4>Defining a function</h4> We may define functions in two different ways. Defining a function as a function declaration always starts with the function keyword. Then we set the name of the function, followed by the parameters in parentheses, or empty parentheses if no parameters are needed. Next comes the statement, enclosed in curly braces ({}). Let's take a look at a code example: ```javascript function sayHi(name) { return 'Hi ' + name; } ``` In the example above the function name is sayHi, and the parameter is (name). It's also worth knowing that a function defined by declaration can be used before it's defined, because it is hoisted. The other way to define a function is known as a function expression. This way, it's possible to define a named as well as an anonymous function. Also, hoisting doesn't work in this case, so the function has to be defined first, and then it can be used. 
Most functions created with this method are assigned to a variable. Let's take a look at the code example: ```javascript var sayHi = function (name) { return 'Hi ' + name; } ``` In the example above the function is assigned to the variable sayHi, but the function itself doesn't have a name, so we may call this function anonymous. <h4>Calling a function</h4> Now that we know how we can define a function in Javascript with two methods, let's find out how we can execute this function. Instead of calling the function, we may say invoke the function, which is the term for the process of execution. So, how to call or invoke the function? To call the function from the previous example, we have to start with the name of the function followed by parentheses with the arguments: ```javascript function sayHi(name) { return 'Hi ' + name; } sayHi('Peter'); ``` In the code above we can see the name of the function sayHi followed by the expected argument (Peter). Now the function should run and return the _Hi Peter_ string. <h4>Return</h4> In the example above, our function returned a string with the parameter. Every function returns a result; if there isn't any return statement defined, the function will return undefined. Let's check it on an example: ```javascript // With return function calc(a, b) { return a + b; } calc(1, 4) // returns 5 // Without return function calc(a, b) { a + b; } calc(1, 4) // returns undefined ``` In the example above the first function returns the result of a math operation, and the other one doesn't have the return statement, which means it will return _undefined_. <h4>Parameters and arguments</h4> Parameters and arguments are very often used interchangeably, but there is a difference between those two. **Parameters** are the names which we put into the parentheses when defining a function, for example: ```javascript function name(param1, param2, param3) { // statement } ``` In the example above the parameters are param1, param2, and param3. 
And in this case, there are no arguments yet. **Arguments** are the values which are brought into the function through the params. They are what we put inside the function while invoking it. Let's see the example: ```javascript name('Mark', 'Peter', 'Kate'); ``` In the example above the function from the previous example is called with arguments now, and our arguments are param1 = Mark, param2 = Peter, param3 = Kate. There is one more thing worth saying while we are on the topic of parameters and arguments. Sometimes it happens that we are not sure how many arguments we are going to pass to our function. Then we may use the arguments object and pass as many arguments as we need. Let's take a look at how it works in real examples: ```javascript // Define a function with one param function calc(num) { return 2 * num; } // Invoke the function with more arguments calc(10, 5, 2); // returns 20, the extra arguments are ignored ``` In the example above, we defined a function with one parameter num and invoked it with three arguments. The function only uses num, the first passed argument, but inside the function we can still reach all of them through the array-like arguments object: ```javascript // Define a function without named params, using the arguments object function calc() { return 2 * arguments[0] * arguments[1]; } // Invoke the function with more arguments calc(10, 5, 2); // returns 100 ``` In this case, we defined a function without named parameters, and now we can use all the passed arguments through the arguments object. Following the example, the function will do the calculation 2*10*5, taking the first and second arguments. <h4>Arrow functions</h4> In ES6 **arrow functions (=>)** were introduced. An arrow function is mainly a shorter syntax for declaring a function expression. It also keeps the surrounding `this` context, so we can avoid binding. Let's take a look at the code example: ```javascript sayHi = (name) => { // statement } ``` In the code example above, we defined an arrow function sayHi with a name parameter, without using the function keyword. 
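To make the shorthand rules concrete, here are a few hedged variants of the same arrow function (the names are illustrative):

```javascript
// Block body: needs an explicit return
const greetLong = (name) => { return 'Hi ' + name; };

// Expression body: the result is returned implicitly
const greetShort = (name) => 'Hi ' + name;

// A single parameter may drop its parentheses
const double = n => n * 2;

console.log(greetLong('Peter')); // Hi Peter
console.log(greetShort('Kate')); // Hi Kate
console.log(double(21));         // 42
```

Note that arrow functions, unlike the functions above, do not get their own `arguments` object.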
In fact, with only one parameter, you can skip the parentheses. <h4>Self-invoking functions</h4> There is also one more type of function in Javascript, the **self-invoking functions**. These are anonymous functions that are invoked immediately after the definition is completed. A self-invoking function is wrapped in an additional pair of parentheses, with an extra pair of parentheses at the end. Let's take a look at the code: ```javascript (function (num1, num2) { return num1 + num2; })(1, 2); // invoked immediately with the arguments 1 and 2 ``` In the example above, you can see that the self-invoking function is a normal anonymous function with an additional two pairs of parentheses. <h4>Conclusion</h4> In this article, I went through essential things about functions, like defining functions using two different methods and invoking functions. I also explained the difference between parameters and arguments and described the usage of the arguments object. Besides that, I went through arrow functions and self-invoking functions. I hope this article will be useful to you. Try to create your own functions and play with different methods of defining and invoking. Have fun with coding! <a href="https://www.duomly.com"> ![Duomly - Programming Online Courses](https://thepracticaldev.s3.amazonaws.com/i/8lura54kk6uiof3dg6zf.png) </a> Thank you for reading, Anna from Duomly
duomly
221,385
Why do you write code??
I get asked this question a lot because it normally looks like I enjoy what I do. Actually, the real q...
0
2019-12-15T12:45:54
https://dev.to/manyeya/why-do-you-write-code-2p4l
discuss
I get asked this question a lot because it normally looks like I enjoy what I do. Actually, the real question is: why do I write code? I personally code because it gives me enjoyment. I don't think I have ever had a job more satisfying than coding and solving problems. Creating web sites and web applications feels like something I was always meant to do; I don't see myself doing anything else. ##What are your reasons??
manyeya
221,424
How to Configure ASP.NET Core 3.1 AngularJS SPA and Identity Server 4 Authentication with PostgreSQL DB
In this tutorial we will see how to configure an ASP.NET Core 3.1 web application with AngularJS SPA as front end and PostgreSQL as database with Secure User membership implementation using Identity Server 4.
0
2019-12-15T15:10:30
https://dev.to/windson/configuring-asp-net-core-3-1-angularjs-spa-and-identity-server-4-authentication-with-postgresql-db-jgk
aspnetcore31, postgres, angular, identityserver4
--- cover_image: https://thepracticaldev.s3.amazonaws.com/i/ec2sgdrsjbmuoxfg5cx2.png title: How to Configure ASP.NET Core 3.1 AngularJS SPA and Identity Server 4 Authentication with PostgreSQL DB published: true description: In this tutorial we will see how to configure an ASP.NET Core 3.1 web application with AngularJS SPA as front end and PostgreSQL as database with secure user membership implementation using Identity Server 4. tags: ASP.NET Core 3.1, postgresql, AngularJS, Identity Server 4 --- In this tutorial we will see how to configure an ASP.NET Core 3.1 web application with an AngularJS SPA as front end and PostgreSQL as the database, with secure user membership implemented using Identity Server 4. You can follow along with the video tutorial here (http://bit.ly/2EkotL5), as it goes step by step, provided you have the following prerequisites satisfied. Prerequisites: Visual Studio 2019 (Community/Professional/Enterprise Edition) v16.4.0 or higher PostgreSQL Database Server installed, version 11 or higher How to install PostgreSQL on Windows PC http://bit.ly/2LYxk9z A browser, preferably Chrome, Firefox or IE 11 or higher PgAdmin to access the PostgreSQL Server database Windows 10 PC ## Install PostgreSQL Install PostgreSQL on your PC or access a remote PostgreSQL db. To install PostgreSQL on your Windows 10 PC, check out the link in the description below that details how to install and configure PostgreSQL 11 on your Windows 10 PC. Once the installation is done, open pgAdmin. Once the whole setup is complete, a database with the ASP.NET Core 3.1 membership authentication schema will appear here. ## Update Visual Studio 2019 with .NET Core 3.1 To get the latest .NET Core 3.1, open the Visual Studio Installer, update the installer itself if it prompts for an update, and then update your Visual Studio to have .NET Core 3.1. Now click on update your Visual Studio to the version 16.3.10. 
You can do this on Community edition too; ensure you have the .NET Core 3.1 version installed, which was released in December 2019.

## What Next?

Now let's create an ASP.NET Core 3.1 application that will talk to this PostgreSQL database. We will configure the application at creation time to use Individual User Accounts authentication, and then generate migration scripts that will create the authentication-related tables in the database.

## Create ASP.NET Core 3.1 Angular application with Individual User Accounts

Now launch the Microsoft Visual Studio 2019 IDE. From the Get Started section, select the option to Create a new project. Look for and select ASP.NET Core Web Application and click Next to proceed. Give the project a name and location of your choice. For example, name the project ASPNET-Core3-1-Angular-PostgreSQL-Identity-Server and click Create. In the Create a new ASP.NET Core web application step, choose the Angular template, because we want our application to have an Angular-based front end. You may also choose React.js, Web Application (which uses ASP.NET Razor Pages) or Web Application (Model-View-Controller). Now, in the right pane, click the Change hyperlink under the Authentication label to configure our application to use Individual User Accounts, and click OK. Now click Create. This template configuration will generate all the code related to authentication, including the migration schema that creates tables for users, roles, claims, logins, etc. The template also has Angular screens automagically generated for Register, Login, Forgot Password and the Login Menu. This scaffolded template lets the developer focus on the application functionality rather than setting up user authentication from scratch.

## Delete the Migrations directory

As the default database this template is configured to work with is MSSQL, we want to change it to work with PostgreSQL.
For that, delete the Migrations directory, which holds the MSSQL-related migration schema and is located under the Data directory.

## Update the connection string to point to the PostgreSQL database

Also update the default connection string present in appsettings.json to point to the local or remote PostgreSQL database. For a local PostgreSQL server, the connection string goes like

Server=localhost;Port=5432;Database=aspnetmembership;User Id=postgres;Password=mysupersecret!

## Update Dependencies

### Remove Microsoft.EntityFrameworkCore.SqlServer

Now remove the reference to Microsoft.EntityFrameworkCore.SqlServer, which is located under Dependencies -> Packages, as we want to configure our application to target PostgreSQL instead of MSSQL. In the solution explorer, navigate to Dependencies -> Packages, right-click on Microsoft.EntityFrameworkCore.SqlServer and remove it. Now install the PostgreSQL package for EntityFrameworkCore. To do that, right-click on the project ASPNET-Core3-1-Angular-PostgreSQL-Identity-Server in the solution explorer and click Manage NuGet Packages... This will open a window to manage the NuGet packages for our project. Navigate to the Browse menu, search for Npgsql.EntityFrameworkCore.PostgreSQL, choose the latest stable version 3.1.0 from the version selector and click Install. Click OK on the Preview Changes prompt. Ensure that all the dependent packages are version-compatible with .NET Core 3.1.

## Update Startup.cs

Now open the Startup.cs file and locate the ConfigureServices method. Find and replace UseSqlServer with UseNpgsql in order to point our DbContext to our PostgreSQL database. Also set the lambda expression option options.SignIn.RequireConfirmedAccount to false. This will disable the requirement for registered users to confirm their email before they can log in. This setting is suitable for development purposes only; do not forget to set it back to true once you have an email server configured for customer-facing applications.
## Update the malformed .csproj file

Now open the .csproj file of ASPNET-Core3-1-Angular-PostgreSQL-Identity-Server and ensure the execution commands are properly written. If you see repeated double hyphens, like npm run build -- --prod and npm run build:ssr -- --prod, change them to npm run build --prod and, for SSR, npm run build:ssr --prod.

## Install the Angular CLI

Now install the Angular CLI. In the search box at the top of the IDE, type "developer command prompt" and open it. A command prompt window will open with the current directory defaulted to the project's root directory. Set the current directory to ClientApp with the command

> cd ClientApp

and then run the command

> npm install --save-dev @angular/cli@latest

This will install the latest version of the Angular CLI and save it as a dev dependency in the package.json located in the ClientApp directory. This operation will take a while.

## Add-Migration scripts

Now we will generate the PostgreSQL-based migration schema. Open the Package Manager Console by navigating to Tools -> NuGet Package Manager -> Package Manager Console and type in the command

> add-migration PostgreSQLIdentitySchema

This will generate a Migrations directory adjacent to the Data directory. The Migrations directory will have two C# files: one named datetimestamp_migrationname.cs that creates the identity schema, and ApplicationDbContextModelSnapshot.cs, which has a snapshot of the models that get migrated with this schema.

## Update-Database

Now that we have the migration schema with ASP.NET Core 3.1 authentication auto-generated for PostgreSQL, let's create our new database aspnetmembership with the authentication-related tables. For that, run the command

> update-database

This command will create a database named aspnetmembership, as configured in the connection string located in appsettings.json. Based on the auto-generated schema located in the Migrations directory, the ASP.NET Core 3.1 membership tables will be created in the aspnetmembership database.
To view the newly created database, go to pgAdmin and refresh the Databases node under Servers -> PostgreSQL 11. You will find the newly created aspnetmembership database. Look for the Tables node under the Schemas node of the aspnetmembership database.

## Build the solution

Now, in Visual Studio, go to the Build menu and click Build Solution. This process will take a while as we are building the solution for the first time: all the dependent packages for the front-end Angular app will be downloaded into the node_modules directory, and then the entire solution will be built.

## Run the application

Click on the dropdown next to the Run button and select your favorite browser from which you want to launch the web app. Let's proceed with Chrome. Now click on IIS Express and wait for the application to open in the default browser configured for your PC. You will notice the default home page with a navigation menu containing links for the Register and Login pages.

## Access a non-authenticated page

The Counter navigation menu item is a non-authenticated page. Let's click on it to access it.

## Access an authenticated page

The Fetch Data navigation menu item is an authenticated page. As we have not yet registered, clicking on the Fetch data menu will redirect the page to login. In order for users to log in, they must be registered.

## Register

So navigate to the registration page, provide an email, password and confirm password, and click Register. Next click on Login. As we disabled email confirmation for registered users, clicking on Login will automatically redirect us to the authenticated app. Now that we are authenticated, click on Fetch data and you will be shown the default dummy weather data, which is an authenticated page.

## Manage Profile

Notice the email in the navigation menu; click on it to customize your profile, change your email or password, and also download all your personal data.
## Logout

Click Logout and you will see the navigation menu showing the Register and Login links again.

## Check registered users in the database

Now, in order to check the registered users in the PostgreSQL database, go back to pgAdmin, expand the nodes aspnetmembership -> Schemas -> Tables, right-click on the AspNetUsers table and select the option View/Edit Data. You will notice the user entry with the details we registered with.
windson
221,470
A Modeling Editor and Code Generator for AsyncAPI
IIoT (Industrial IoT) architectures are typically distributed and asynchronous, with communication be...
0
2019-12-15T17:23:12
https://modeling-languages.com/asyncapi-modeling-editor-code-generator/
asyncapi, api, showdev, ide
IIoT (Industrial IoT) architectures are typically distributed and asynchronous, with communication being event-driven, such as the publication (and corresponding subscription) of messages. These asynchronous architectures enhance scalability and tolerance to changes, but raise interoperability issues as <strong>the explicit knowledge of the internal structure of the messages and their categorization (topics) is diluted</strong> among the elements of the architecture.  In fact, this was also a problem for REST APIs, until the industry came together and proposed a standard way to define the structure and schema of synchronous APIs: <a href="https://www.openapis.org/" target="_blank" rel="noopener noreferrer">OpenAPI</a> (derived from <a href="https://modeling-languages.com/modeling-web-api-comparing/" target="_blank" rel="noopener noreferrer">Swagger</a>). For asynchronous architectures, and inspired by OpenAPI, the <a href="https://www.asyncapi.com/" rel="nofollow">AsyncAPI</a> has appeared recently: <blockquote><em>AsyncAPI provides a specification that allows you to define Message-Driven APIs in a machine-readable format. It's protocol-agnostic, so you can use it for APIs that work over Kafka, MQTT, AMQP, WebSockets, STOMP, etc. The spec is very similar to OpenAPI/Swagger so, if you're familiar with it, AsyncAPI should be easy for you.</em></blockquote> In AsyncAPI, the specifications of an API can be defined in YAML or JSON, which allows specifying, for example, the message brokers, the topics of interest, or the different message formats associated with each one of the topics, among other aspects. AsyncAPI is, however, in the early stages of development, and the <strong>AsyncAPI tool market is underdeveloped</strong>, mainly limited to the generation of documentation to be consumed by humans. 
Similarly to what we've done for OpenAPI (see our <a href="https://modeling-languages.com/rest-api-composer/" target="_blank" rel="noopener noreferrer">API Composer</a> or our <a href="https://modeling-languages.com/automatic-discovery-web-api-specifications/" target="_blank" rel="noopener noreferrer">API Discoverer</a>), we believe <strong>a model-based approach would facilitate the modeling of AsyncAPI specifications and the development of Message-Driven APIs from them.</strong> The overall view of our approach is illustrated in the following figure.

<a href="http://modeling-languages.com/wp-content/uploads/2019/12/asyncapitoolkit.png"><img src="http://modeling-languages.com/wp-content/uploads/2019/12/asyncapitoolkit.png" alt="AsyncAPI editor" width="801" height="309" class="aligncenter size-full wp-image-7294" /></a>

The <a href="https://github.com/SOM-Research/asyncapi-toolkit" target="_blank" rel="noopener noreferrer">AsyncAPI toolkit is available on GitHub</a>, so make sure you star/watch it to follow its evolution!

<h2>Importing / Modeling an AsyncAPI specification</h2>

First, based on the AsyncAPI specification, we created an <a href="https://www.eclipse.org/Xtext/" target="_blank" rel="noopener noreferrer">Xtext</a> grammar. From this grammar, an Ecore metamodel is automatically derived, together with a set of editors and Eclipse-based tools. These editors allow creating JSON-based specifications of message-driven APIs using AsyncAPI. Specifications created using these editors are automatically parsed and reified as instances of the AsyncAPI metamodel.
<a href="http://modeling-languages.com/wp-content/uploads/2019/12/AsyncApiMetamodel.png"><img class="wp-image-7296 size-large" src="https://modeling-languages.com/wp-content/uploads/2019/12/AsyncApiMetamodel-1024x524.png" alt="AsyncAPI metamodel" width="1024" height="524" /></a>

<h2> Generating code to easily process messages from an AsyncAPI specification </h2>

Additionally, the prototype is able to generate Java code supporting the creation and serialization of JSON-based message payloads according to the modeled AsyncAPI, including nested JSON objects. However, arrays are not supported yet at this point. The excerpt below shows an example of an AsyncAPI specification supported by the prototype:

<pre>
{
  "asyncapi": "1.2.0",
  "info": {
    "title": "Sample AsyncAPI specification",
    "version": "0.1.0"
  },
  "servers": [
    {
      "url": "broker.url:{port}",
      "scheme": "mqtt",
      "description": "This is an example description",
      "variables": {
        "port": {
          "default": "1883",
          "enum": [ "1883", "8883" ]
        }
      }
    }
  ],
  "topics": {
    "messages/device2controller": {
      "publish": {
        "$ref": "#/components/messages/request"
      }
    }
  },
  "components": {
    "schemas": {
      "protocol_version": {
        "title": "Protocol version",
        "type": "integer",
        "default": 2,
        "x-friendly-name": "ProtocolVersion"
      },
      "id": {
        "title": "ID",
        "type": "string",
        "format": "XXXXXX YY ZZZZZZ W"
      },
      "status": {
        "title": "Status",
        "type": "string",
        "enum": [ "OK", "ERROR" ],
        "x-friendly-name": "Status"
      },
      "environment": {
        "title": "Environment",
        "type": "string",
        "enum": [ "DEV", "STAG", "PROD" ],
        "x-friendly-name": "Environment"
      }
    },
    "messages": {
      "request": {
        "summary": "Request connectivity.",
        "description": "Request connectivity when status changes",
        "payload": {
          "type": "object",
          "properties": {
            "P": { "$ref": "#/components/schemas/protocol_version" },
            "ID": { "$ref": "#/components/schemas/id" },
            "E": { "$ref": "#/components/schemas/environment" },
            "M": {
              "x-friendly-name": "Message",
              "properties": {
                "S": { "$ref": "#/components/schemas/status" },
                "C": {
                  "title": "Content",
                  "type": "string",
                  "x-friendly-name": "Content"
                }
              }
            }
          }
        }
      }
    }
  }
}
</pre>

A specification like the above allows generating messages as follows:

<pre>
package tests;

import messages.device2controller.Request;
import messages.device2controller.Request.Payload.Environment;
import messages.device2controller.Request.Payload.Message;
import messages.device2controller.Request.Payload.PayloadBuilder;
import messages.device2controller.Request.Payload.Message.Status;

public class Test {
    public static void main(String[] args) {
        PayloadBuilder builder = Request.payloadBuilder();
        Request.Payload payload = builder
            .withProtocolVersion(2)
            .withEnvironment(Environment.DEV)
            .withID("id")
            .withMessage(
                Message.newBuilder()
                    .withStatus(Status.OK)
                    .withContent("Content")
                    .build()
            ).build();
        System.out.println(payload.toJson(true));
        System.out.println(Request.Payload.fromJson(payload.toJson()).toJson(true));
    }
}
</pre>

The code generated by our toolkit also makes it easy to publish the messages built as explained above, and to subscribe to them using the servers configured in the AsyncAPI specification. <a href="https://github.com/SOM-Research/asyncapi-toolkit/blob/master/README.md#quick-start-guide">Check our online documentation for an example!</a>

<h2> Generating a new AsyncAPI from an Ecore model </h2>

Until now, we assumed that either you already had an AsyncAPI file to import or you would be using our AsyncAPI editor to create one. In fact, there is a third alternative: <strong>take an existing Ecore model you already have available and generate a skeleton AsyncAPI specification from it</strong>. The generator will create a reusable JSON Schema for each domain class. Channels will be created out of annotated EClasses.
Moreover, host information can also be specified via EAnnotations (<a href="https://github.com/SOM-Research/asyncapi-toolkit#generating-an-asyncapi-specification-from-an-ecore-model" rel="noopener noreferrer" target="_blank">more details</a>).
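As an aside (this is not part of the toolkit), the payload shape defined in the spec excerpt above can be sanity-checked in a few lines of plain JavaScript; the field names (`P`, `ID`, `E`, `M`, `S`, `C`) come from the example specification:

```javascript
// Build a payload matching the "request" message schema from the example spec.
const payload = {
  P: 2,                         // protocol_version (schema default)
  ID: "XXXXXX YY ZZZZZZ W",     // id, per the schema's "format" hint
  E: "DEV",                     // environment enum value
  M: { S: "OK", C: "Content" }  // nested message: status + content
};

// Round-trip through JSON, as the generated toJson/fromJson helpers do.
const roundTripped = JSON.parse(JSON.stringify(payload));
console.log(roundTripped.M.S); // "OK"
```

The generated Java builder shown earlier produces exactly this kind of JSON document, just with compile-time checking of the enum values.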
jcabot
221,502
Enhance your macOS terminal
This post was originally published at thbe.org. Personally, I use quite often the terminal when I us...
4,027
2019-12-15T19:07:58
https://www.thbe.org/posts/2019/07/23/Enhance_your_macOS_terminal.html
macos, iterm2, zsh, ohmyzsh
*This post was originally published at [thbe.org](https://www.thbe.org/posts/2019/07/23/Enhance_your_macOS_terminal.html).*

Personally, I quite often use the terminal when I use my computer, laptop or whatever. As a result, I have modified my terminal quite heavily to ease my work and to get the best out of it. In the past, I did this mostly manually, which required a lot of attention from my side and regular upgrades when e.g. OS updates were performed. So I tried to reduce at least the effort I have to spend enhancing the terminal. The result was the combination of **iTerm2** + **zsh** + **oh-my-zsh** + **powerline** + **powerlevel9k**. This combination covers roughly 95% of my requirements and reduced the effort I have to spend maintaining my terminal significantly. In this blog post, I’ll show you how you can get the same terminal that I use:

![Customized iTerm2](https://thepracticaldev.s3.amazonaws.com/i/stzo7jq8fmtmscuju8gc.png)

So, let’s start! I assume you have Homebrew installed on your macOS. If you don’t have Homebrew installed, I strongly recommend installing it; it’s a must-have when you work with macOS. You find the installation instructions on their homepage [**https://brew.sh/**](https://brew.sh/). With Homebrew you can install most of the required packages. But before we do this, let’s download the current stable iTerm2 version: [**https://www.iterm2.com/downloads.html**](https://www.iterm2.com/downloads.html)

Extract the ZIP file and move the app to your program folder. You can now start iTerm2. Once this is done you can install **zsh**:

```bash
brew install zsh zsh-autosuggestions zsh-syntax-highlighting
```

The next step is to install **oh-my-zsh**. This is fortunately also quite easy, just use this command:

```bash
sh -c "$(curl -fsSL https://raw.github.com/robbyrussell/oh-my-zsh/master/tools/install.sh)"
```

Most of the actions are now already completed.
Next, we need to install **powerline**:

```bash
brew install python3
pip3 install powerline-status
```

I use **powerline** primarily for vim, which needs to be configured in the .vimrc file:

```conf
" powerline
set rtp+=/usr/local/lib/python3.6/site-packages/powerline/bindings/vim
set laststatus=2
set t_Co=256
```

The last step is to install **powerlevel9k**. This can be done again with **Homebrew**:

```bash
brew tap sambadevi/powerlevel9k
brew install powerlevel9k
```

We now have all the required packages installed and we can start with the configuration. First things first: the built-in fonts do not fully support this configuration, so you need to install an appropriate font first. I used the **FiraCode** light font. To install the font, you need to download it to the font library:

```bash
mkdir ~/Downloads/FiraCode && cd ~/Downloads/FiraCode
wget https://github.com/tonsky/FiraCode/releases/download/2/FiraCode_2.zip
unzip FiraCode_2.zip
cp ttf/*.ttf ~/Library/Fonts/
cd ~/Downloads && rm -rf FiraCode/
```

The next step is the iTerm2 color scheme. I use **Cobalt2** provided by Wes Bos at [https://github.com/wesbos/Cobalt2-iterm](https://github.com/wesbos/Cobalt2-iterm). The color scheme needs to be downloaded and then imported into **iTerm2**:

```bash
cd ~/Downloads
curl https://raw.githubusercontent.com/wesbos/Cobalt2-iterm/master/cobalt2.itermcolors --output cobalt2.itermcolors
```

The color scheme can now be imported by opening the iTerm2 preferences and choosing profiles -\> colors -\> color preset -\> import. Last but not least, you need to modify the zsh configuration file to match your needs.
My .zshrc looks like this:

```bash
# zsh configuration file
#
# Author: Thomas Bendler <code@thbe.org>
# Date:   Tue Sep 24 20:28:27 UTC 2019

# Add powerline support
POWERLINE_ZSH="/usr/local/lib/python3.7/site-packages/powerline/bindings/zsh/powerline.zsh"
[ -e "${POWERLINE_ZSH}" ] && source "${POWERLINE_ZSH}"

# If you come from bash you might have to change your $PATH.
export PATH="/usr/local/sbin:${PATH}"

# Path to your oh-my-zsh installation.
export ZSH="${HOME}/.oh-my-zsh"

# Uncomment the following line to use case-sensitive completion.
CASE_SENSITIVE="true"

# Uncomment the following line to change how often to auto-update (in days).
export UPDATE_ZSH_DAYS=7

# Uncomment the following line to enable command auto-correction.
ENABLE_CORRECTION="true"

# Uncomment the following line to display red dots whilst waiting for completion.
COMPLETION_WAITING_DOTS="true"

# Uncomment the following line if you want to change the command execution time
# stamp shown in the history command output.
# You can set one of the optional three formats:
# "mm/dd/yyyy"|"dd.mm.yyyy"|"yyyy-mm-dd"
# or set a custom format using the strftime function format specifications,
# see 'man strftime' for details.
# HIST_STAMPS="mm/dd/yyyy"

# Which plugins would you like to load?
# Standard plugins can be found in ~/.oh-my-zsh/plugins/*
# Custom plugins may be added to ~/.oh-my-zsh/custom/plugins/
# Add wisely, as too many plugins slow down shell startup.
plugins=(
  ansible
  battery
  brew
  bundler
  colorize
  docker
  dotenv
  git
  git-flow-avh
  iterm2
  nmap
  osx
  rake
  ruby
  sudo
  zsh-navigation-tools
)

ZSH_THEME="powerlevel9k"

source "${ZSH}/oh-my-zsh.sh"

# User configuration

# Load Zsh tools for syntax highlighting and autosuggestions
HOMEBREW_FOLDER="/usr/local/share"
source "${HOMEBREW_FOLDER}/zsh-autosuggestions/zsh-autosuggestions.zsh"
source "${HOMEBREW_FOLDER}/zsh-syntax-highlighting/zsh-syntax-highlighting.zsh"

# Powerlevel9k configuration
#POWERLEVEL9K_MODE="compatible"

# Left prompt - Configure indicator when working as root
POWERLEVEL9K_ROOT_INDICATOR_BACKGROUND="clear"
POWERLEVEL9K_ROOT_INDICATOR_FOREGROUND="red"

# Left prompt - Configure context (user@hostname)
POWERLEVEL9K_CONTEXT_DEFAULT_BACKGROUND="clear"
POWERLEVEL9K_CONTEXT_DEFAULT_FOREGROUND="magenta"

# Left prompt - Configure display of current directory
POWERLEVEL9K_DIR_HOME_BACKGROUND="clear"
POWERLEVEL9K_DIR_HOME_FOREGROUND="white"
POWERLEVEL9K_DIR_HOME_SUBFOLDER_BACKGROUND="clear"
POWERLEVEL9K_DIR_HOME_SUBFOLDER_FOREGROUND="white"
POWERLEVEL9K_DIR_ETC_BACKGROUND="clear"
POWERLEVEL9K_DIR_ETC_FOREGROUND="red"
POWERLEVEL9K_DIR_WRITABLE_FORBIDDEN_BACKGROUND="clear"
POWERLEVEL9K_DIR_WRITABLE_FORBIDDEN_FOREGROUND="red"
POWERLEVEL9K_DIR_DEFAULT_BACKGROUND="clear"
POWERLEVEL9K_DIR_DEFAULT_FOREGROUND="white"
POWERLEVEL9K_SHORTEN_DIR_LENGTH="3"
POWERLEVEL9K_SHORTEN_STRATEGY="truncate_middle"

# Right prompt - Configure command execution status indicator
POWERLEVEL9K_STATUS_OK_BACKGROUND="clear"
POWERLEVEL9K_STATUS_OK_FOREGROUND="green"
POWERLEVEL9K_STATUS_ERROR_BACKGROUND="clear"
POWERLEVEL9K_STATUS_ERROR_FOREGROUND="red"
POWERLEVEL9K_STATUS_CROSS="true"
POWERLEVEL9K_STATUS_VERBOSE="true"

# Right prompt - Configure command execution time measurement
POWERLEVEL9K_COMMAND_EXECUTION_TIME_BACKGROUND="clear"
POWERLEVEL9K_COMMAND_EXECUTION_TIME_FOREGROUND="white"

# Right prompt - Configure version control system
POWERLEVEL9K_VCS_CLEAN_BACKGROUND="clear"
POWERLEVEL9K_VCS_CLEAN_FOREGROUND="green"
POWERLEVEL9K_VCS_MODIFIED_BACKGROUND="clear"
POWERLEVEL9K_VCS_MODIFIED_FOREGROUND="darkorange"
POWERLEVEL9K_VCS_UNTRACKED_BACKGROUND="clear"
POWERLEVEL9K_VCS_UNTRACKED_FOREGROUND="red"
POWERLEVEL9K_SHOW_CHANGESET="true"
POWERLEVEL9K_CHANGESET_HASH_LENGTH="12"

# Right prompt - Configure display of running background jobs
POWERLEVEL9K_BACKGROUND_JOBS_BACKGROUND="clear"
POWERLEVEL9K_BACKGROUND_JOBS_FOREGROUND="green"

# Right prompt - Configure RAM settings
POWERLEVEL9K_RAM_BACKGROUND="clear"
POWERLEVEL9K_RAM_FOREGROUND="white"

# Right prompt - Configure load settings
POWERLEVEL9K_LOAD_CRITICAL_BACKGROUND="clear"
POWERLEVEL9K_LOAD_WARNING_BACKGROUND="clear"
POWERLEVEL9K_LOAD_NORMAL_BACKGROUND="clear"
POWERLEVEL9K_LOAD_CRITICAL_FOREGROUND="red"
POWERLEVEL9K_LOAD_WARNING_FOREGROUND="darkorange"
POWERLEVEL9K_LOAD_NORMAL_FOREGROUND="green"

# Right prompt - Configure battery status
POWERLEVEL9K_BATTERY_CHARGING_BACKGROUND="clear"
POWERLEVEL9K_BATTERY_CHARGING_FOREGROUND="white"
POWERLEVEL9K_BATTERY_CHARGED_BACKGROUND="clear"
POWERLEVEL9K_BATTERY_CHARGED_FOREGROUND="green"
POWERLEVEL9K_BATTERY_DISCONNECTED_BACKGROUND="clear"
POWERLEVEL9K_BATTERY_DISCONNECTED_FOREGROUND="darkorange"
POWERLEVEL9K_BATTERY_LOW_THRESHOLD="10"
POWERLEVEL9K_BATTERY_LOW_BACKGROUND="clear"
POWERLEVEL9K_BATTERY_LOW_FOREGROUND="red"
POWERLEVEL9K_BATTERY_VERBOSE=false

# Right prompt - Configure disk usage
POWERLEVEL9K_DISK_USAGE_NORMAL_BACKGROUND="clear"
POWERLEVEL9K_DISK_USAGE_NORMAL_FOREGROUND="green"
POWERLEVEL9K_DISK_USAGE_WARNING_BACKGROUND="clear"
POWERLEVEL9K_DISK_USAGE_WARNING_FOREGROUND="darkorange"
POWERLEVEL9K_DISK_USAGE_CRITICAL_BACKGROUND="clear"
POWERLEVEL9K_DISK_USAGE_CRITICAL_FOREGROUND="red"

# Right prompt - Configure IP address
POWERLEVEL9K_IP_BACKGROUND="clear"
POWERLEVEL9K_IP_FOREGROUND="white"

# Configure multiline prompt
POWERLEVEL9K_PROMPT_ON_NEWLINE="true"
POWERLEVEL9K_SHOW_CHANGESET="true"
POWERLEVEL9K_MULTILINE_FIRST_PROMPT_PREFIX=""
POWERLEVEL9K_MULTILINE_LAST_PROMPT_PREFIX="$ "
POWERLEVEL9K_LEFT_SEGMENT_SEPARATOR=""
POWERLEVEL9K_RIGHT_SEGMENT_SEPARATOR=""
POWERLEVEL9K_LEFT_SUBSEGMENT_SEPARATOR=""
POWERLEVEL9K_RIGHT_SUBSEGMENT_SEPARATOR=""

# Configure the prompt content
POWERLEVEL9K_LEFT_PROMPT_ELEMENTS=(root_indicator context dir vcs)
POWERLEVEL9K_RIGHT_PROMPT_ELEMENTS=(status command_execution_time ram disk_usage ip)
#POWERLEVEL9K_RIGHT_PROMPT_ELEMENTS=(status command_execution_time load ram disk_usage ip)

# Local custom snippets
for item in $(ls -1 ${HOME}/.profile.d/*.profile); do
  [ -e "${item}" ] && source "${item}"
done
```

With these add-ons and configuration in place, your terminal should look like mine and, if you don’t like it, should at least be a good starting point for your own configuration.
thbe
221,525
Dev Christmas Songs
The festive period is upon us, let’s hear your cheesy developer versions of classic songs!
0
2019-12-15T20:27:31
https://dev.to/thatonejakeb/dev-christmas-songs-5eo8
discuss, offtopic, watercooler, christmas
The festive period is upon us, let’s hear your cheesy developer versions of classic songs!
thatonejakeb
221,577
CSV generation from JSON in Svelte
Generate CSV without third party library and fully supported by all browser and mobile devices.
0
2019-12-16T04:01:26
https://dev.to/karkranikhil/csv-generation-from-json-in-svelte-5cgf
svelte, tutorial, javascript, compiler
---
title: CSV generation from JSON in Svelte
published: true
description: Generate CSV without a third-party library, fully supported by all browsers and mobile devices.
tags: Svelte, tutorial, JavaScript, compiler
cover_image: https://thepracticaldev.s3.amazonaws.com/i/9nlofmvbn3zol8mu4p0g.PNG
---

Svelte is the new big thing in the market, and I decided to try one common use case: CSV generation from JSON. For those who don't know Svelte: "*Svelte is a radical new approach to building user interfaces. Whereas traditional frameworks like React and Vue do the bulk of their work in the browser, Svelte shifts that work into a compile step that happens when you build your app.*"

There are several ways to set up a Svelte project. You can read more about the many ways to get started here. For the purpose of this demo, we will be working with `degit`, which is a software scaffolding tool. To start, run the following command:

```bash
npx degit sveltejs/template svelte-CSV-demo
```

Now go inside the project directory using the following command:

```bash
cd svelte-CSV-demo
```

Let's install the project dependencies using the following command:

```bash
npm install
```

Now our Svelte base project is ready. Let's start writing our code. Our project has six parts:

1. Load the JSON from a REST API
2. Integrate the JSON with the template
3. Add style to the project
4. CSV generation utility
5. End to end integration
6. Deploying to the web with [now](https://zeit.co/now)

If you are interested only in the code, you can check it out at the URL below:

`https://github.com/karkranikhil/svelte-csv-demo`

**1.
load the JSON from REST API**

Go to the `App.svelte` file and replace the existing code with the code below:

```javascript
<script>
  import { onMount } from "svelte";
  let tableData = [];
  onMount(async () => {
    const res = await fetch(`https://jsonplaceholder.typicode.com/posts`);
    tableData = await res.json();
    console.log(tableData);
  });
</script>
```

As shown above, we have imported `onMount` from the svelte package. `onMount` is fired after the component is rendered. After that, we initialized the variable `tableData` with an empty array. Then we defined the `onMount` callback, within which we used async & await:

* `async` functions return a promise.
* `async` functions use an implicit Promise to return their result. Even if you don't return a promise explicitly, an `async` function makes sure that your code is passed through a promise.
* await blocks the code execution within the `async` function of which it (the await statement) is a part.

We have used the *Fetch API* to get the JSON from the service. The Fetch API is a promise-based JavaScript API for making asynchronous HTTP requests in the browser. On a successful REST API call we store the JSON in `tableData` and print it to the console. Let's run the project and see the console. To start the project, run the following command:

```bash
npm run dev
```

Once the above command runs successfully, navigate to http://localhost:5000/. Open your developer console and you will see the following output.

![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/sfdp7eyb0z2k7fpzw51t.PNG)

As the image above shows, we were able to get the data successfully. Now we will go to the next step and see how to integrate it with HTML markup.

**2. Integrate the JSON with template**

Now that we have our API data in the `tableData` variable, we will integrate the data using the `#each` iterator.
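Conceptually, Svelte's `#each` block plays the role of `Array.prototype.map` in plain JavaScript. As a rough, purely illustrative analogue (this is not Svelte code), turning records shaped like the jsonplaceholder posts into table rows looks like this:

```javascript
// Sample records shaped like the posts fetched above (illustrative data).
const tableData = [
  { userId: 1, id: 1, title: "first post", body: "hello" },
  { userId: 1, id: 2, title: "second post", body: "world" }
];

// What the template's #each effectively does: one row of cells per record.
const rows = tableData.map(item => [item.userId, item.id, item.title, item.body]);
console.log(rows.length); // 2
```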
Add the following code to `App.svelte` below the `script` tag:

```html
<div class="container">
  <div class="header">
    <h1>CSV generation from JSON in Svelte</h1>
  </div>
  <div class="main">
    <table>
      <thead>
        <tr>
          {#each tableHeader as header}
            <th>{header}</th>
          {/each}
        </tr>
      </thead>
      <tbody>
        {#each tableData as item}
          <tr>
            <td>{item.userId}</td>
            <td>{item.id}</td>
            <td>{item.title}</td>
            <td>{item.body}</td>
          </tr>
        {/each}
      </tbody>
    </table>
  </div>
</div>
```

Above, we have created a `div` with class `container` that holds two children: one with the `header` class and another with the `main` class. In the `div` with the `header` class we only show the header of our app. In the `div` with the `main` class we create the table, and within the table we create the table header and table body using `#each` blocks. `#each` loops over the data in markup. We use two loops: one for the header and another for the body. For the table body we use `tableData`, which contains the REST API response, and for the header we use the `tableHeader` variable, which we will create now under the `script` tag. Let's define `tableHeader` below `tableData`, initializing it with an array of custom header names as shown below:

```javascript
let tableHeader = ["User Id", "ID", "Title", "Description"];
```

Let's run the project again if it's stopped; otherwise go to the browser and you will see the following output.

![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/zodszgx9d3hbtj64lytt.PNG)

**3. Add style to project**

I have defined some CSS to make our page look better;
you can use it by adding the `style` tag after the markup:

```css
<style>
  .container {
    max-width: 1140px;
    margin: auto;
  }
  .header {
    display: flex;
    justify-content: space-between;
    background: orange;
    padding: 10px;
  }
  table {
    font-family: arial, sans-serif;
    border-collapse: collapse;
    width: 100%;
  }
  td, th {
    border: 1px solid #dddddd;
    text-align: left;
    padding: 8px;
  }
  tr:nth-child(even) {
    background-color: #dddddd;
  }
  button {
    border: none; /* Remove borders */
    color: white; /* Add a text color */
    padding: 14px 28px; /* Add some padding */
    cursor: pointer; /* Add a pointer cursor on mouse-over */
    background-color: #4caf50;
    height: fit-content;
  }
  h1 {
    margin: 0px;
  }
</style>
```

Now if you look at the output, it will look as shown below.

![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/njc3ktwfpytkzhfjubfj.PNG)

**4. CSV generation utility**

Here is the key step, in which we write a utility that generates the CSV based on a few parameters. It works in all browsers and even on mobile phones. So, let's create a new file `csvGenerator.js` inside the src folder and paste the code below into it:

```javascript
export const csvGenerator = (totalData, actualHeaderKey, headerToShow, fileName) => {
  let data = totalData || null;
  if (data == null || !data.length) {
    return null;
  }
  let columnDelimiter = ",";
  let lineDelimiter = "\n";
  let keys = headerToShow;
  let result = "";
  result += keys.join(columnDelimiter);
  result += lineDelimiter;
  data.forEach(function(item) {
    let ctr = 0;
    actualHeaderKey.forEach(function(key) {
      if (ctr > 0) result += columnDelimiter;
      if (Array.isArray(item[key])) {
        let arrayItem =
          item[key] && item[key].length > 0 ? '"' + item[key].join(",") + '"' : "-";
        result += arrayItem;
      } else if (typeof item[key] == "string") {
        let strItem = item[key] ? '"' + item[key] + '"' : "-";
        result += strItem ? strItem.replace(/\s{2,}/g, " ") : strItem;
      } else {
        let strItem = item[key] + "";
        result += strItem ? strItem.replace(/,/g, "") : strItem;
      }
      ctr++;
    });
    result += lineDelimiter;
  });
  if (result == null) return;
  var blob = new Blob([result]);
  if (navigator.msSaveBlob) {
    // IE 10+
    navigator.msSaveBlob(blob, fileName);
  } else if (navigator.userAgent.match(/iPhone|iPad|iPod/i)) {
    var hiddenElement = window.document.createElement("a");
    hiddenElement.href = "data:text/csv;charset=utf-8," + encodeURI(result);
    hiddenElement.target = "_blank";
    hiddenElement.download = fileName;
    hiddenElement.click();
  } else {
    let link = document.createElement("a");
    if (link.download !== undefined) {
      // Browsers that support the HTML5 download attribute
      var url = URL.createObjectURL(blob);
      link.setAttribute("href", url);
      link.setAttribute("download", fileName);
      link.style.visibility = "hidden";
      document.body.appendChild(link);
      link.click();
      document.body.removeChild(link);
    }
  }
};
```

As shown above, we have created a function called csvGenerator that takes four parameters:

- *totalData* - the JSON data to write to the CSV sheet
- *actualHeaderKey* - an array of JSON key names used to pick values from totalData
- *headerToShow* - an array of custom names to show in the header row of the CSV file
- *fileName* - the name of the downloaded file, with a `.csv` extension

The `csvGenerator` function takes this input and generates the CSV output by looping over the data and adding a comma between values.

**5. End to end integration**

Up to now we have the table and csvGenerator ready. Let's connect them. First we need to import the `csvGenerator` file into our `App.svelte`. Add the following line below the `onMount` import statement:

```javascript
import { csvGenerator } from "./csvGenerator";
```

Now we need a handler that will be called on click of a button in the markup and invoke our `csvGenerator` utility.
Add the following code below the `onMount` function

```javascript
function downloadHandler() {
  let tableKeys = Object.keys(tableData[0]); // extract key names from the first object
  csvGenerator(tableData, tableKeys, tableHeader, "svelte_csv_demo.csv");
}
```

As shown above, we have created a function called `downloadHandler` that will be called on click of the button and generate the CSV file of the table data.

Now let's create a button in our template. Add the following code below the `h1` tag

```html
<button on:click={downloadHandler}>Download</button>
```

Run the project and you will see the below output in your browser.

![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/0kkxxmoxqpbvrigeznps.PNG)

Clicking the download button will download the CSV to your machine.

**6. Deploying to the web with [now](https://zeit.co/now)**

Install `now` if you haven't already:

```bash
npm install -g now
```

Then, from within your project folder:

```bash
cd public
now deploy --name svelte-csv-demo
```

`now` will deploy your code and generate a URL.

Deployed Url - https://svelte-csv-demo.karkranikhil.now.sh
Github - https://github.com/karkranikhil/svelte-csv-demo

---

## References

https://svelte.dev/
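As an aside, the row-building core of a utility like `csvGenerator` can be exercised outside the browser in plain Node. The sketch below is a hypothetical, simplified version (no download logic, and simpler quoting than the full utility above) that shows the same idea: pick values by key, wrap strings in double quotes, and join rows with newlines.

```javascript
// Minimal sketch of the CSV-building idea (illustration only, not the full utility).
// Inputs mirror the article's parameters: data rows, keys to pick, custom headers.
function buildCsv(rows, keys, headers) {
  const wrap = (v) => (typeof v === "string" ? '"' + v + '"' : String(v));
  const lines = [headers.join(",")];
  for (const row of rows) {
    lines.push(keys.map((k) => wrap(row[k])).join(","));
  }
  return lines.join("\n");
}

const csv = buildCsv(
  [{ userId: 1, id: 1, title: "sunt aut", body: "quia et" }],
  ["userId", "id", "title", "body"],
  ["User Id", "ID", "Title", "Description"]
);
console.log(csv);
// User Id,ID,Title,Description
// 1,1,"sunt aut","quia et"
```

Wrapping string values in double quotes is what keeps commas inside a field (common in post titles and bodies) from being read as column separators.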
karkranikhil
221,673
The perfect React's component doesn't exist. 🤯
Let's talk about how we can optimize our React components using this simple rule: "Less re-renders == More performance"
0
2019-12-16T06:58:54
https://bassemmohamed.me/post/the-perfect-reacts-component-doesnt-exist
react, javascript, optimization
---
title: The perfect React's component doesn't exist. 🤯
published: true
description: Let's talk about how we can optimize our React components using this simple rule "Less re-renders == More performance"
tags: react, javascript, optimization
cover_image: https://thepracticaldev.s3.amazonaws.com/i/4em9wjblzhsh4f9vav1x.png
canonical_url: https://bassemmohamed.me/post/the-perfect-reacts-component-doesnt-exist
---

Hey devs from all over the world 😊 In today's post, I want to tell you all about React's performance. How can we optimize our React components to reduce the number of undesired re-renders? I will be talking about React's `PureComponent` class, `Memos` and the truly awesome `shouldComponentUpdate` method.

Oki, as most of you know, React uses the virtual DOM 🔥 to reduce the costly real DOM manipulation operations. This virtual DOM is a representation of the actual DOM but built with JavaScript. **When a component updates,** React builds the new virtual DOM then compares it with the previously rendered one to decide whether an actual DOM update is required or not. 👨‍⚖️ That's what makes React stand out from other frontend frameworks out there.

🥇 Now, let's talk about how to make **your React components stand out**. 💪

## The perfect React's component doesn't exist. 🤯

Ohh yeah! I love [minimalism](https://www.google.com/#q=Minimalism) and I like to think that we are applying its concepts here. Think about it for a second. **LESS CODE == LESS TROUBLE**, isn't it? 🤯 We can discuss this in another article though.

In today's article, it is more like **LESS RE-RENDERS == MORE PERFORMANCE**. We want to stabilize our components as much as we can, because every re-render means that React will **at least** check for the difference between the new and old virtual DOM. If we don't need that re-render in the first place, that just means computations down the drain, which is obviously **a big no-no** when it comes to performance.
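To make that cost concrete, here is a toy sketch of the kind of tree comparison a virtual DOM diff performs. This is a hypothetical illustration, not React's actual reconciler (which adds keys, fibers, bailouts and much more): even when nothing changed, the whole tree still has to be walked.

```javascript
// Toy virtual DOM diff: returns true if two virtual nodes differ.
// Heavily simplified illustration -- not React's real implementation.
function changed(a, b) {
  if (a === b) return false;
  if (typeof a !== typeof b) return true;
  if (typeof a === "string" || typeof a === "number") return a !== b;
  if (a.type !== b.type) return true;
  const aChildren = a.children || [];
  const bChildren = b.children || [];
  if (aChildren.length !== bChildren.length) return true;
  // Recurse into every child pair -- this traversal is the "at least" cost.
  return aChildren.some((child, i) => changed(child, bChildren[i]));
}

const a = { type: "div", children: [{ type: "h1", children: ["Hello"] }] };
const b = { type: "div", children: [{ type: "h1", children: ["Hello"] }] };
const c = { type: "div", children: [{ type: "h1", children: ["Hello!"] }] };
console.log(changed(a, b)); // false -- equal trees, but the whole tree was walked
console.log(changed(a, c)); // true  -- the text child differs
```

The `changed(a, b)` call is the wasted-work case: nothing differs, yet every node was visited. Skipping the re-render entirely avoids even this traversal.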
🙅‍♂️

## shouldComponentUpdate to the rescue 🚀

I am sure most of you guys know about `shouldComponentUpdate`, but if you don't, let me give a quick introduction. It is a component lifecycle method that tells React whether to continue updating the component or not. It runs every time there is a change in the props or the state, and it defaults to true. So for example, if we have a component with a `shouldComponentUpdate` like this:

```javascript
shouldComponentUpdate(nextProps, nextState) {
  return false;
}
```

It will basically never ever update without forcing it. `shouldComponentUpdate` doesn't get called for the initial render or when `forceUpdate()` is used.

> Wait a sec! Are you saying we should write a shouldComponentUpdate method by hand for every component just to prevent a couple of undesired renders?! 🤯 Nobody got time for this! 😠

Not exactly! 🙄

## What is React's PureComponent? 🤔

It is similar to React's component class but it implements `shouldComponentUpdate` with a **shallow** prop and state comparison by default. In other words, every prop/state update in a PureComponent will not trigger a re-render unless there is a **shallow** difference between the current & previous props or the current & previous state.

This **shallow** part is a little tricky, as it could lead to false negatives (not updating when we actually want a re-render) in the case of complex data structures like arrays or objects. Let's go for an example.

```javascript
state = {
  itemsArray: []
}

onSomeUserAction = (item) => {
  const itemsArray = this.state.itemsArray;
  itemsArray.push(item);
  this.setState({ itemsArray })
}
```

Now imagine this scenario where we have an array in the state and we want to push an item into that array on some user action. This will actually produce a false negative if it is a `PureComponent`.
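To see why, it helps to sketch what a shallow comparison actually does. The `shallowEqual` below is a simplified illustration (not React's actual source): only the top-level references are compared, so mutating an array in place slips right past it.

```javascript
// Simplified sketch of a shallow equality check, similar in spirit to
// the one PureComponent relies on (illustration only).
function shallowEqual(objA, objB) {
  const keysA = Object.keys(objA);
  const keysB = Object.keys(objB);
  if (keysA.length !== keysB.length) return false;
  // Compare only top-level references -- nothing is compared deeply.
  return keysA.every((key) => objA[key] === objB[key]);
}

const itemsArray = [1, 2];
const prevState = { itemsArray };
itemsArray.push(3); // mutating in place keeps the same reference
const nextState = { itemsArray };

console.log(shallowEqual(prevState, nextState)); // true  -> no re-render: false negative
console.log(shallowEqual(prevState, { itemsArray: [...itemsArray] })); // false -> new reference
```

The second call hints at the fix we'll apply shortly: creating a fresh array gives the comparison a new reference to notice.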
After this `setState`, `shouldComponentUpdate` will **shallowly** compare the old state to the new one, conceptually checking `this.state.itemsArray === nextState.itemsArray`. And because our `itemsArray` reference is exactly the same, this check passes and the PureComponent will not re-render. The same applies to objects, as in this example.

```javascript
state = {
  user: {}
}

onSomeUserAction = (name) => {
  const user = this.state.user;
  user.name = name;
  this.setState({ user })
}
```

### Immutable everywhere 🙌

We could fix this issue by using `forceUpdate()`, but that's not exactly elegant and it goes against everything we just said, so scrap that! What we should do is create a new object/array every time, like this:

```javascript
state = {
  itemsArray: []
}

onSomeUserAction = (item) => {
  const itemsArray = this.state.itemsArray;
  this.setState({ itemsArray: [...itemsArray, item] })
}
```

Or in the case of objects:

```javascript
state = {
  user: {}
}

onSomeUserAction = (name) => {
  const user = this.state.user;
  this.setState({ user: {...user, name} })
}
```

Using some not-so-new JavaScript features like destructuring and the spread operator, it doesn't only look cleaner, but the result is also considered a whole new object. Now the shallow comparison is no longer true and the `shouldComponentUpdate` is no longer producing a false negative.

Now, what about functional components? Well, you should use `Memo` for that, like this:

```javascript
const MyComponent = React.memo(function MyComponent(props) {
  /* render using props */
});
```

`Memo` is just like `PureComponent` but for functional components instead of classes.

With `PureComponent` or `Memo` and creating new objects/arrays with `setState`, we can now safely celebrate our better performing components. Give yourselves a great round of applause. 👏👏 You made it all the way here! Thanks for reading and I really hope you enjoyed it.
If you did, don't forget to let me know, and if you really liked it, [follow me on twitter](https://twitter.com/BassemMohamed94) to never miss a future post. 😊

As always, **Happy coding 🔥🔥** “**كود بسعادة”**
bassemibrahim
221,695
Beginner's Guide to Requests and APIs
The requests module is your portal to the open web.🤯 Basically any API that you have access to, you c...
0
2019-12-16T09:21:19
https://dev.to/ejbarba/beginner-s-guide-to-requests-and-apis-41dk
python, requests, beginners
The **requests** module is your portal to the open web. 🤯 Basically any API that you have access to, you can pull data from it (though your mileage may vary).

## Getting Started ✊

We'll install requests with [pipenv](https://pypi.org/project/pipenv/), so go ahead and install that first if you don't have it. If you do, simply run this command:

```bash
pipenv install requests
```

### Let's go ahead and pick an API to use. 🤔

Here is a great resource for public APIs:

{% github public-apis/public-apis %}

Let's use the **kanye.rest** API for this quick example: https://kanye.rest/

## Diving In 🙌

Let's import the all mighty:

```python
import requests
```

And **get**-ting the data is pretty straightforward:

```python
kanye_url = requests.get('https://api.kanye.rest')
```

Print it in JSON format:

```python
print(kanye_url.json())
# gets a random quote
# {'quote': 'Tweeting is legal and also therapeutic'}
```

It is important to read the docs of each API, because each API is unique. 🧐

Let's say we want to get a Chuck Norris joke from the **chucknorris.io** API. From the docs, it uses a different URL, so let's go ahead and code that:

```python
chuck_url = requests.get('https://api.chucknorris.io/jokes/random')
print(chuck_url.json())
```

...
will output something like this:

```
{'categories': [], 'created_at': '2016-05-01 10:51:41.584544', 'icon_url': 'https://assets.chucknorris.host/img/avatar/chuck-norris.png', 'id': 'DLqW_fuXQnO1LtveTTAWRg', 'updated_at': '2016-05-01 10:51:41.584544', 'url': 'https://api.chucknorris.io/jokes/DLqW_fuXQnO1LtveTTAWRg', 'value': 'Chuck Norris Lost his virginity before his Dad...'}
```

...not exactly pretty to look at, so let's *pretty-print* it: 💅

```python
# 👇 Add this import at the beginning of your file
import json

print(json.dumps(chuck_url.json(), indent=2))
```

Now it looks like this:

```
{
  "categories": [
    "science"
  ],
  "created_at": "2016-05-01 10:51:41.584544",
  "icon_url": "https://assets.chucknorris.host/img/avatar/chuck-norris.png",
  "id": "izjeqnjzteeqms8l8xgdhw",
  "updated_at": "2016-05-01 10:51:41.584544",
  "url": "https://api.chucknorris.io/jokes/izjeqnjzteeqms8l8xgdhw",
  "value": "Chuck Norris knows the last digit of pi."
}
```

Now that's much more readable! 💪💪💪

Sources:
🐍 https://3.python-requests.org/
📖 https://github.com/public-apis/public-apis
🌊 https://kanye.rest/
🤠 https://api.chucknorris.io/
ejbarba
221,724
Breaking an app up into modules
As apps grow larger and larger, their complexity tends to increase too. And quite often, the problems...
0
2019-12-16T09:17:49
https://www.donnywals.com/breaking-an-app-up-into-modules/
ios, swift
As apps grow larger and larger, their complexity tends to increase too. And quite often, the problems you're solving become more specific and niche over time as well. If you're working on an app like this, it's likely that at some point, you will notice that there are parts of your app that you know like the back of your hand, and other parts you may have never seen before. Moreover, these parts might end up somehow talking to each other even though that seems to make no sense.

As teams and apps grow, boundaries in a codebase begin to grow naturally while at the same time the boundaries that grow are not enforced properly. And in turn, this leads to a more complicated codebase that becomes harder to test, harder to refactor and harder to reason about.

In today's article, I will explain how you can break a project up into multiple modules that specialize in performing a related set of tasks. I will provide some guidance on how you can implement this in your team, and how you identify parts of your app that would work well as a module. While the idea of modularizing your codebase sounds great, there are also some caveats that I will touch upon. By the end of this article, you should be able to make an informed decision on whether you should break your codebase up into several modules, or if it's better to keep your codebase as-is.

Before I get started, keep in mind that I'm using the term module interchangeably with target, or framework. Throughout this article, I will use the term module unless I'm referring to a specific setting in Xcode or text on the screen.

## Determining whether you should break up your codebase

While the idea of breaking your codebase up into multiple modules that you could, in theory, reuse across projects sounds very attractive, it's important that you make an educated decision. Blindly breaking your project up into modules will lead to a modularized codebase where every module needs to use every other module in order to work.
If that's the case, you're probably better off not modularizing at all because the boundaries between modules are still unclear, and nothing works in isolation. So how do you decide whether you should break your project up? Before you can answer that question, it's important to understand the consequences and benefits of breaking a project up into multiple modules. I have composed a list of things that I consider when I decide whether I should break something out into its own module:

* Does this part of my code have any (implicit) dependencies on the rest of my codebase?
* Is it likely that I will need this in a different app (for example a tvOS counterpart of the iOS app)?
* Can somebody work on this completely separate from the app?
* Does breaking this code out into a separate module make it more testable?
* Am I running into any problems with my current setup?

There are many more considerations you might want to put into your decision, but I think if you answer "yes" to at least three out of the five bullet points above it might make sense to break part of your code out into a separate module. Especially if you're running into problems with your current setup.

I firmly believe that you shouldn't attempt to fix what isn't broken. So breaking a project up into modules for the sake of breaking it up is always a bad idea in my opinion. Any task that you perform without a goal or underlying problem is essentially doomed to fail or at least introduce problems that you didn't have before.

As with most things in programming, the decision to break your project up is not one that's clear cut. There are always trade-offs and sometimes the correct path forward is more obvious than other times. For example, if you're building an app that will have a tvOS flavor and an iOS flavor, it's pretty clear that using a separate module for shared business logic is a good idea.
You can share the business logic, models and networking client between both apps while the UI is completely different. If your app will only work on the iPhone, or even if it works on the iPhone and iPad, it's less clear that you should take this approach.

The same is true if your team works on many apps and you keep introducing the same boilerplate code over and over. If you find yourself doing this, try to package up the boilerplate code in a framework and include it as a separate module in every project. It will allow you to save lots of time, and bug fixes become automatically available to all apps. Beware of app-specific code in your module though. Once you break something out into its own module, you should try to make sure that all code that's in the module works for all consumers of that module.

## Identifying module candidates

Once you've decided that you have a problem, and modularizing your codebase can fix this problem, you need to identify the scope of your modules. There are several obvious candidates:

* Data storage layers
* Networking clients
* Model definitions
* Boilerplate code that's used across many projects
* ViewModels or business logic that's used on tvOS and iOS
* UI components or animations that you want to use in multiple projects

This list is in no way exhaustive, but I hope it gives you an idea of what things might make sense as a specialized module. If you're starting a brand new project, I don't recommend defaulting to creating modules for the above components. Whether you're starting a new project or refactoring an existing one, you need to think carefully about whether you need something to be its own module or not. Successfully breaking code up into modules is not easy, and doing so prematurely makes the process even harder.

After identifying a good candidate for a module, you need to examine its code closely. In your apps, you will usually use the default access level of objects, properties, and methods.
The default access level is `internal`, which means that anything in the same module (your app) can access them. When you break code out into its own module, it will have its own `internal` scope. This means that by default, your application code cannot access any code that's part of your module. When you want to expose something to your app, you must explicitly mark that thing as `public`. Examine the following code for a simple service object and try to figure out what parts should be `public`, `private` or `internal`: ```swift protocol Networking { func execute(_ endpoint: Endpoint, completion: @escaping (Result<Data, Error>) -> Void) // other requirements } enum Endpoint { case event(Int) // other endpoints } struct EventService { let network: Networking func fetch(event id: Int, completion: @escaping (Result<Event, Error>) -> Void) { network.execute(.event(id)) { result in do { let data = try result.get() let event = try JSONDecoder().decode(Event.self, from: data) completion(.success(event)) } catch { completion(.failure(error)) } } } } ``` There's a good chance that you immediately identified the `network` property on `EventService` as something that should be private. You're probably used to marking things as `private` because that's common practice in any codebase, regardless of whether it's split up or not. Deciding what's `internal` and `public` is probably less straightforward. I'll show you my solution first, and then I'll explain why I would design it like that. 
```swift // 1 internal protocol Networking { func execute(_ endpoint: Endpoint, completion: @escaping (Result<Data, Error>) -> Void) // other requirements } // 2 internal enum Endpoint { case event(Int) // other endpoints } // 3 public struct EventService { // 4 private let network: Networking // 5 public func fetch(event id: Int, completion: @escaping (Result<Event, Error>) -> Void) { network.execute(.event(id)) { result in do { let data = try result.get() let event = try JSONDecoder().decode(Event.self, from: data) completion(.success(event)) } catch { completion(.failure(error)) } } } } ``` Note that I explicitly added the `internal` access level. I only did this for clarity in the example, it's the default access level so in your own codebase it's up to you whether you want to add the `internal` access level explicitly. Let's go over the comments one by one so I can explain my choices: 1. `Networking` is marked `internal` because the user of the `EventService` doesn't have any business using the `Networking` object directly. Our purpose is to allow somebody to retrieve events, not to allow them to make any network call they want. 2. `Endpoint` is marked `internal` for the same reason I marked `Networking` as `internal`. 3. `EventService` is public because I want users of my module to be able to use this service to retrieve events. 4. `network` is `private`, nobody has any business talking to the `EventService`'s `Networking` object other than the service itself. Not even within the same module. 5. `fetch(event:completion:)` is public because it's how users of my module should interact with the events service. Identifying your module's public interface helps you to identify whether the code you're abstracting into a module can stand on its own, and it helps you decide whether the abstraction would make sense. A module where everything is public is typically not a great module. 
The purpose of a module is that it can perform a lot of work on its own and that it enforces a natural boundary between certain parts of your codebase.

## Creating new modules in Xcode

Once you've decided that you want to pull a part of your app into its own module, it's time to make the change. In Xcode, go to your project's settings and add a new target using the **Add Target** button:

<img src="https://www.donnywals.com/wp-content/uploads/add-target-button.jpg" alt="Add Target Button" width="161" height="484" class="aligncenter size-full wp-image-783" />

In the next window that appears, scroll all the way down and select the **Framework** option:

<img src="https://www.donnywals.com/wp-content/uploads/Screen-Shot-2019-12-11-at-11.13.10-1024x743.png" alt="Select Add Framework" width="750" height="544" class="aligncenter size-large wp-image-784" />

In the next step, give your module a name, choose the project that your module will belong to, and the application that will use your module:

<img src="https://www.donnywals.com/wp-content/uploads/Screen-Shot-2019-12-11-at-11.34.34-1024x749.png" alt="Configure new module window" width="750" height="549" class="aligncenter size-large wp-image-785" />

This will add a new target to the list of targets in your project settings, and you will now have a new folder in the project navigator.
Xcode also adds your new module to the **Frameworks, Libraries, and Embedded Content** section of your app's project settings:

<img src="https://www.donnywals.com/wp-content/uploads/Screen-Shot-2019-12-11-at-11.40.10-1024x147.png" alt="Framework added to app" width="750" height="108" class="aligncenter size-large wp-image-787" />

Drag all files that you want to move from your application to your module into the new folder, and make sure to update the **Target Membership** for every file in the **File Inspector** tab on the right side of your Xcode window:

<img src="https://www.donnywals.com/wp-content/uploads/Screen-Shot-2019-12-11-at-11.36.54.png" alt="Target Membership settings" width="516" height="162" class="aligncenter size-full wp-image-786" />

Once you have moved all files from your app to your new module, you can begin to apply the `public`, `private` and `internal` modifiers as needed. If your code is already loosely coupled, this should be a trivial exercise. If your code is a bit more tightly coupled, it might be harder to do this. When everything is done, you should be able to (eventually) run your app and everything should work.

Keep in mind that depending on the size of your codebase this task might be non-trivial and even a bit frustrating. If this process gets too frustrating you might want to take a step back and try to split your code up without modules for now. Try to make sure that objects are clean, and that they exist in as much isolation as possible.

## Maintaining modules in the long run

When you have split your app up into multiple modules, you are now maintaining several codebases. This means that you might often refactor your app, but some of your modules might remain untouched for a while. This is not a bad thing; if everything works, and you have no problems, you might not need to change a thing.
Depending on your team size, you might even find that certain developers spend more time in certain modules than others, which might result in slightly different coding styles in different modules. Again, this is not a bad thing. What's important to keep in mind is that you're probably growing to a point where it's unreasonable to expect that every developer in your team has equal knowledge about every module. What's important is that the public interface for each module is stable, predictable and consistent.

Having multiple modules in your codebase introduces interesting possibilities for the future. In addition to maintaining modules, you might find yourself completely rewriting, refactoring or swapping out entire modules if they become obsolete or if your team has decided that an overhaul of a module is required. Of course, this is highly dependent on your team, projects, and modules but it's not unheard of to make big changes in modules over time.

Personally, I think this is where separate modules make a large difference. When I make big changes in a module I can do this without disrupting other developers. A big update of a module might take weeks and that's okay. As long as the public API remains intact and functional, nobody will notice that you're making big changes.

If this sounds good to you, keep in mind that the smaller your team is, the more overhead you will have when maintaining your modules. Especially if every module starts maturing and living a life of its own, it becomes more and more like a full-blown project. And if you use one module in several apps, you will always have to ensure that your module remains compatible with those apps. Maintaining modules takes time, and you need to be able to put in that time to utilize modularized projects to their fullest.

## Avoiding pitfalls when modularizing

Let's say you've decided that you have the bandwidth to create and maintain a couple of modules.
And you've also decided that it absolutely makes sense for your app to be cut up into smaller components. What are things to watch for that I haven't already mentioned? First, keep in mind that application launch times are impacted by the number of frameworks that need to be loaded. If your app uses dozens of modules, your launch time will be impacted negatively. This is true for external dependencies, but it's also true for code that you own. Moreover, if you have modules that depend on each other, it will take iOS even longer to resolve all dependencies that must be loaded to run your app. The lesson here is to not go overboard and create a module for every UI component or network service you have. Try to keep the number of modules you have low, and only add new ones when it's needed. Second, make sure that your modules can exist in isolation. If you have five modules in your app and they all import each other in order to work, you haven't achieved much. Your goal should be to write your code so it's flexible and separate from the rest of your app and other modules. It's okay for a networking module to require a module that defines all of your models and some business logic, or maybe your networking module imports a caching module. But when your networking code has to import your UI library, that's a sign that you haven't separated concerns properly. And most importantly, don't modularize prematurely or if your codebase isn't ready. If splitting your app into modules is a painful process where you're figuring out many things at once, it's a good idea to take a step back and restructure your code. Think about how you would modularize your code later, and try to structure your code like that. Not having the enforced boundary that modules provide can be a valuable tool when preparing your code to be turned into a framework. ## In summary In today's article, you have learned a lot about splitting code up into modules. 
Everything I wrote in this post is based on my own experiences and opinions, and what works for me might not work for you. Unfortunately, this is the kind of topic where there is no silver bullet. I hope I've been able to provide you some guidance to help you decide whether a modularized codebase is something that fits your team and project, and I hope that I have given you some good examples of when you should or should not split your code up. You also saw how you can create a new framework in Xcode, and how you can add your application code to it. In addition to creating a framework I briefly explained how you can add your existing code to your framework and I told you that it's important to properly apply the `public`, `private` and `internal` access modifiers. To wrap it up, I gave you an idea of what you need to keep in mind in regards to maintaining modules and some pitfalls you should try to avoid. If you have any questions left, if you have an experience to share or if you have feedback on this article, don't hesitate to reach out to me on [Twitter](https://twitter.com/donnywals).
donnywals
221,745
How to connect Power BI to ODBC data source and access data in 3 steps
How to connect Power BI to ODBC data source and access data in 3 steps Install the driver and confi...
0
2019-12-16T10:14:24
https://dev.to/andreasneuman/how-to-connect-power-bi-to-odbc-data-source-and-access-data-in-3-steps-25b
odbc, powerbi, software, odbcdriver
How to connect Power BI to ODBC data source and access data in 3 steps - Install the driver and configure an ODBC data source. Start Power BI and choose Get Data > Other > ODBC. - Choose the DSN that you configured for the ODBC Driver and enter your credentials in the next step. - Now Power BI database connection is established: you can load a table for business intelligence reporting. Devart drivers provide Direct access to your databases and clouds from Power BI, which eliminates the use of database client libraries, simplifies the deployment process, and extends your application capabilities. Learn more at [https://www.devart.com/odbc/powerbi/](https://www.devart.com/odbc/powerbi/)
andreasneuman
221,788
My PWA made with Clojure/ClojureScript exceeded 400 users 🎉
This is the 19th article at Clojure Advent Calendar. Hello there! :) I'm a Japanese Clojurian. I...
0
2019-12-18T15:23:06
https://dev.to/boxp/my-pwa-made-with-clojure-clojurescript-exceeded-400-users-5co7
clojure, showdev, pwa, react
This is the 19th article at [Clojure Advent Calendar](https://qiita.com/advent-calendar/2019/clojure).

{% youtube DvJC8z_H_ro %}

Hello there! :) I'm a Japanese Clojurian. I released a Progressive Web App made using Clojure & ClojureScript, called "Hito Hub". So I will write about why I made this app and why I chose Clojure.

## About "Hito Hub"

"Hito Hub" is an online pairing service just for avatars living in virtual worlds such as [VRChat](https://www.vrchat.com/), [VirtualCast](https://virtualcast.jp/about/), [YouTube](https://www.youtube.com/) or other platforms. Usage of "Hito Hub" is very simple. The process of finding other avatars is as follows.

![Usage of "Hito Hub"](https://thepracticaldev.s3.amazonaws.com/i/aavuuzb70jj5dplc4jvq.png)

1. Create your avatar's account using the wizard.
2. Swipe through other avatars to favorite or skip.
3. Several hours later, your avatar may get favorites from other avatars.
4. When your avatar matches with another avatar, you can share your avatar's virtual world accounts :tada:

As of writing, over 400 avatars have joined and over 8,000 favorites have been sent. You can also use "Hito Hub" from the following URL, so please take a look!

![https://hitohub.boxp.tk](https://thepracticaldev.s3.amazonaws.com/i/yna466jzdunm4zn525xh.png)

https://hitohub.boxp.tk

## Motivations

The biggest reason for creating "Hito Hub" was to solve the challenges felt by the Japanese community using the most popular virtual world, VRChat. VRChat is a service that allows you to interact with people around the world using purchased or custom avatars. Recently however, most Japanese users play within a private space, so it is very rare to see Japanese people in public spaces. Recently, an exploit designed to steal avatars was circulated on VRChat.

{% twitter 960577378505379840 %}

Because of this, many Japanese users are choosing to stay in closed communities for protection against these tools.
"Hito Hub" was developed to enable interaction between avatars across closed communities, and to help Japanese people using VRChat for the first time find people to play with. Posters advertising "Hito Hub" were also published at [Virtual Market](https://www.v-market.work/), VRChat's biggest market festival.

![Poster advertising "Hito Hub" in Virtual Market](https://thepracticaldev.s3.amazonaws.com/i/y301qty5ycsd52nqvvxb.png)
<figcaption>Poster advertising "Hito Hub" in Virtual Market</figcaption>

## Why Clojure?

Everything in "Hito Hub" is developed in Clojure, across both the web front end and the API server. I usually write TypeScript and Go at work, but I love Clojure and personally do almost everything with Clojure/ClojureScript.

Normally I would develop a PWA like "Hito Hub" using TypeScript + React + Redux, but I had wanted to implement one in my favorite language, Clojure, for a while. So, after creating a [simple sample implementation](https://github.com/boxp/sample-github-spa), "Hito Hub" became my first prototype to test Clojure/ClojureScript in production. "Hito Hub" has a PageSpeed Insights score of over 90 points, and I was able to achieve performance equivalent to a more traditional PWA architecture.

!["Hito Hub"'s PageSpeed Insight score](https://thepracticaldev.s3.amazonaws.com/i/fk6j4w7re3pux05dqbma.png)
<figcaption>"Hito Hub"'s PageSpeed Insight score</figcaption>

## Next Step

Although "Hito Hub" is an app with a very limited audience of virtual-world avatars, I am going to continue development, because the number of users is steadily increasing even after exceeding 400 users. For now, the "Hito Hub" concept booth will be on display at the next Virtual Market, and by the time this article is posted, I'm probably fighting with Blender and Unity :sweat_smile:

That's all! Have a nice year :)

Thanks to [@jonymul](https://twitter.com/jonymul) for proofreading :pray:
boxp
221,805
7 Points to Consider for Making a Hybrid Mobile App in 2020
The number of smartphone users shows an increasing trend. As per Statista, there will be approximatel...
0
2019-12-16T11:45:14
https://dev.to/cliffex/7-points-to-consider-for-making-a-hybrid-mobile-app-in-2020-3595
mobileapp, appdevelopment, hybridapp
The number of smartphone users shows an increasing trend. As per <b>Statista</b>, there will be approximately 7 billion mobile users by 2020. As per the App Annie report covering 2017 to 2022, annual mobile app downloads will reach 258 billion, a 45% increase over 2017. With this kind of growth in the <a href="https://yourstory.com/mystory/how-popular-are-fantasy-sports-mobile-apps-in-india" rel="nofollow">mobile app industry</a>, it becomes difficult for mobile app developers to decide whether to go for hybrid or native app development. Deciding is only possible after considering the pros and cons of each type of mobile app. In this post, we will highlight the points to consider before making a hybrid mobile app.

<h1>What is a Hybrid App?</h1>

A hybrid app is a blend of both native and web solutions. The core of the application is written using web technologies such as HTML, CSS, and JavaScript, and is encapsulated within a native application. The app runs inside the native application's embedded browser, which is invisible to the user.

Hybrid apps use a web view control (WebView on Android and UIWebView on iOS) to present HTML and JavaScript files in full-screen format, using the native browser rendering engine (WebKit on iOS; modern Android WebViews are Chromium-based). The code is embedded in a native application wrapper using a solution like Apache Cordova (also known as PhoneGap) or Ionic Capacitor. Such a solution creates a native shell application whose main component is a WebView that loads the web application.

Hybrid apps comprise two main parts: the application code written in the web languages mentioned above, and the native shell that makes the app downloadable. Developers choose hybrid mobile apps so they can use a single codebase for multiple mobile platforms.
Developers make use of web technologies such as HTML, CSS, and JavaScript and prefer not to write native code separately for multiple platforms. Now we arrive at the important points to weigh when deciding on hybrid mobile app development.

<h2>What are the Points to Consider for a Hybrid Mobile App Development Approach in 2020?</h2>

<b>1. Consider Native Functionality</b>

It is possible to have a hybrid app with native functionality, but only after careful consideration. The wrapper in a hybrid app lets you package the app with the right framework, such as Cordova or PhoneGap. Beyond the frameworks, developers can use packages and libraries to incorporate native functionality in a hybrid app. So an app that needs native behavior requires writing your own code or finding a suitable library. There is a downside to consider here: it takes time to write a native library for each targeted platform.

<b>2. Reduce Application Development Delay</b>

The development time for hybrid apps is much shorter than for native apps, as hybrid apps use web technologies such as HTML, CSS, and JavaScript. Native app development, on the other hand, uses languages like Swift for iOS and Java for Android.

<b>3. Choose the Right Framework</b>

There is a wide range of hybrid app frameworks, and not all of them have a sure future ahead. The frameworks are usually similar, and it is unusual to build an app with one framework and then rebuild it with another, so the decision must be taken wisely.

<b>4. Know the Performance Pitfalls</b>

Here are some of the key performance issues for hybrid apps; the development company must be well aware of these pitfalls.

<b>Animations:</b> animations are less fluid when used in hybrid apps.

<b>Memory Usage:</b> the WebView used by hybrid apps can cause substantial memory issues.
<b>App Fluidity:</b> page and stage transitions in hybrid apps can feel sluggish, and slide-tray open animations perform notoriously badly.

<b>5. Consider Employee Skill Sets</b>

A native app requires developers with an Objective-C or Java programming background; hybrid app development requires fewer specialized skill sets.

<b>6. Faster Time to Market</b>

Hybrid apps have a faster time to market compared with native app development.

<b>7. Allow Enterprise Agility</b>

The WebViews in hybrid apps help enterprises remain agile for content and features that require constant iteration. Native apps involve an arduous process of submitting app updates for even the smallest changes.

<b>Conclusion</b>

Hybrid app development does not require separate versions for Android and iOS, which reduces the development cost. <a href="https://cliffex.com/native-app-development-company">Cliffex</a> is a renowned <a href="https://cliffex.com/hybrid-app-development-company">Hybrid Mobile App Development</a> and Web Application Development Studio. If your organization's primary concerns are an easier development approach, compatibility across platforms, and cost-effectiveness, then Cliffex is the right choice.
cliffex
221,876
Frustrations in Python
Written by Nikita Sobolev✏️ Dark forces cast their wicked spells to leak into our realm of precious...
0
2019-12-19T14:49:54
https://blog.logrocket.com/frustrations-in-python/
python, tutorial
---
title: Frustrations in Python
published: true
date: 2019-12-16 14:10:48 UTC
tags: python,tutorial
canonical_url: https://blog.logrocket.com/frustrations-in-python/
cover_image: https://thepracticaldev.s3.amazonaws.com/i/lj562eqgl1wyij98lssn.png
---

**Written by [Nikita Sobolev](https://blog.logrocket.com/author/nikita-sobolev/)**✏️

Dark forces cast their wicked spells to leak into our realm of precious Python programs. They spam their twisted magic uncontrollably and pollute our readable code. Today I am going to reveal several chthonic creatures that might already live inside your codebase, having accustomed themselves enough to start making their own rules. We need a hero to protect our peaceful world from these evil entities. And you will be this hero to fight them!

![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/wxmlaq3d36tso0frbf5z.jpeg)

All heroes need weapons enchanted with light magic to serve them well in their epic battles. [wemake-python-styleguide](https://github.com/wemake-services/wemake-python-styleguide) will be your sharp weapon and your best companion. Let’s start our journey!

[![LogRocket Free Trial Banner](https://i0.wp.com/blog.logrocket.com/wp-content/uploads/2017/03/f760c-1gpjapknnuyhu8esa3z0jga.png?resize=1200%2C280&ssl=1)](https://logrocket.com/signup/)

### Space invaders

Not so long ago, space invaders were [spotted in Python](https://twitter.com/raymondh/status/1131103570856632321). They take bizarre forms.
```text
5:5    E225   missing whitespace around operator
x -=- x
    ^
5:5    WPS346 Found wrong operation sign
x -=- x
    ^
10:2   E225   missing whitespace around operator
o+=+o
 ^
14:10  E225   missing whitespace around operator
print(3 --0-- 5 == 8)
         ^
14:10  WPS346 Found wrong operation sign
print(3 --0-- 5 == 8)
         ^
14:11  WPS345 Found meaningless number operation
print(3 --0-- 5 == 8)
          ^
14:12  E226   missing whitespace around arithmetic operator
print(3 --0-- 5 == 8)
           ^
14:13  WPS346 Found wrong operation sign
print(3 --0-- 5 == 8)
            ^
```

This is how our code base should look afterward:

```python
x = 1
x += x

o = 2
o += o

print(3 + 5 == 8)
```

Readable and clean!

### Mystical dots

[Some citizens report](https://stackoverflow.com/a/43487979/4842742) that some strange code glyphs are starting to appear. Look, here they are!

```python
print(0..__eq__(0))
# => True

print(....__eq__(((...))))
# => True
```

What is going on here? Looks like a partial `float` and an `Ellipsis` to me, but better to be sure.

```text
21:7   WPS609 Found direct magic attribute usage: __eq__
print(0..__eq__(0))
      ^
21:7   WPS304 Found partial float: 0.
print(0..__eq__(0))
      ^
24:7   WPS609 Found direct magic attribute usage: __eq__
print(....__eq__(((...))))
      ^
```

Ouch! Now we are sure. It is indeed a partial `float` with dot attribute access, and an `Ellipsis` with the same dot access. Let’s reveal all the hidden things now:

```python
print(0.0 == 0)
print(... == ...)
```

And still, it is better not to provoke wrath and not to compare constants in other places.

### Misleading path

We have a new incident: some values are never returned from a function. Let’s find out what is going on.

```python
def some_func():
    try:
        return 'from_try'
    finally:
        return 'from_finally'

some_func()
# => 'from_finally'
```

We are missing `'from_try'` due to a broken entity in our code. How can this be addressed?

```text
31:5   WPS419 Found `try`/`else`/`finally` with multiple return paths
    try:
    ^
```

Turns out `wemake-python-styleguide` knew it all along!
It teaches us to never return from `finally`. Let’s obey it.

```python
def some_func():
    try:
        return 'from_try'
    finally:
        print('now in finally')
```

### The C-ursed legacy

An ancient creature is awakening. It hasn’t been seen for decades. And now it has [returned](https://twitter.com/dabeaz/status/1199376319961849861).

```python
a = [(0, 'Hello'), (1, 'world')]

for ['>']['>'>'>'], x in a:
    print(x)
```

What is going on here? One can implicitly unpack values inside loops, and the target for unpacking can be almost any valid Python expression. But there are a lot of things in this example that we should not do:

```text
44:1   WPS414 Found incorrect unpacking target
for ['>']['>'>'>'], x in a:
^
44:5   WPS405 Found wrong `for` loop variable definition
for ['>']['>'>'>'], x in a:
    ^
44:11  WPS308 Found constant compare
for ['>']['>'>'>'], x in a:
          ^
44:14  E225   missing whitespace around operator
for ['>']['>'>'>'], x in a:
             ^
44:21  WPS111 Found too short name: x
for ['>']['>'>'>'], x in a:
                    ^
```

Looks like `['>']['>'>'>']` is just `['>'][0]`, because `'>' > '>'` is `False`. This case is solved.

### Signature of the black sorcerer

[How complex](https://sobolevn.me/2019/10/complexity-waterfall) can an expression be in Python? The Black Sorcerer leaves his complex mark on all classes he touches:

```python
class _:
    # There are four of them, do you see it?
    _: [(),...,()] = {((),...,()): {(),...,()}}[((),...,())]

print(_._)  # this operator also looks familiar 🤔
# => {(), Ellipsis}
```

How can this signature be read and evaluated? It consists of several parts:

- Declaration and type annotation: `_: [(),...,()] =`
- Dictionary definition with a set as a value: `= { ((),...,()): {(),...,()} }`
- Key access: `[((),...,())]`

While it does not make any sense to human beings from this world, it is still valid Python code that can be used for something evil.
Let’s remove it:

```text
55:5   WPS122 Found all unused variables definition: _
    _: [(),...,()] = {((),...,()): {(),...,()}}[((),...,())]
    ^
55:5   WPS221 Found line with high Jones Complexity: 19
    _: [(),...,()] = {((),...,()): {(),...,()}}[((),...,())]
    ^
55:36  WPS417 Found non-unique item in hash: ()
    _: [(),...,()] = {((),...,()): {(),...,()}}[((),...,())]
                                   ^
57:7   WPS121 Found usage of a variable marked as unused: _
print(_._)  # this operator also looks familiar
      ^
```

And now this complex expression (with a Jones Complexity of 19) can be removed or refactored, and the Signature of the Black Sorcerer is lifted from this poor class. Let’s leave it in peace.

### Metamagic

Our regular classes are starting to hang out with some shady types. We need to protect them from this bad influence. Currently, their output is really strange:

```python
class Example(type((lambda: 0.)())):
    ...

print(Example(1) + Example(3))
# => 4.0
```

Why is `1 + 3` equal to `4.0` and not `4`? To find out, let’s unwrap the `type((lambda: 0.)())` piece:

- `(lambda: 0.)()` is just `0.`, which is just `0.0`
- `type(0.0)` is `float`
- When we write `Example(1)`, it is converted to `Example(1.0)` inside the class
- `Example(1.0) + Example(3.0)` uses `float` addition and evaluates to `4.0`

Let’s be sure that our weapon is as sharp as always:

```text
63:15  WPS606 Found incorrect base class
class Example(type((lambda: 0.)())):
              ^
63:21  WPS522 Found implicit primitive in a form of lambda
class Example(type((lambda: 0.)())):
                    ^
63:29  WPS304 Found partial float: 0.
class Example(type((lambda: 0.)())):
                            ^
64:5   WPS428 Found statement that has no effect
    ...
    ^
64:5   WPS604 Found incorrect node inside `class` body
    ...
    ^
```

We have found all the possible issues here. Our classes are safe. Time to move on.

### Regenerators

So similar, and yet so different. A regenerator [was found](https://stackoverflow.com/questions/32139885/yield-in-list-comprehensions-and-generator-expressions) in our source code.
It looks like an average generator expression, but it’s something totally different.

```python
a = ['a', 'b']

print(set(x + '!' for x in a))
# => {'b!', 'a!'}

print(set((yield x + '!') for x in a))
# => {'b!', None, 'a!'}
```

This was a bug in Python (yes, they do exist), and since `python3.8` it is a `SyntaxError`: one should not use `yield` and `yield from` outside of generator functions. Here’s our usual report about the incident:

```text
73:7   C401   Unnecessary generator - rewrite as a set comprehension.
print(set(x + '!' for x in a))
      ^
76:7   C401   Unnecessary generator - rewrite as a set comprehension.
print(set((yield x + '!') for x in a))
      ^
76:11  WPS416 Found `yield` inside comprehension
print(set((yield x + '!') for x in a))
          ^
```

Also, let’s write the comprehension correctly, as suggested:

```python
print({x + '!' for x in a})
```

This was a hard one to solve. But in the end the Regenerator is gone, and so are the incorrect comprehensions. What’s next?

### Email evil clone

If one needs to write an email address, a string is used. Right? Wrong! There are unusual ways to do regular things, and there are evil clones of regular datatypes. We are going to discover them.

```python
class G:
    def __init__(self, s):
        self.s = s

    def __getattr__(self, t):
        return G(self.s + '.' + str(t))

    def __rmatmul__(self, other):
        return other + '@' + self.s

username, example = 'username', G('example')
print(username@example.com)
# => username@example.com
```

How does it work?

- `@` is an operator in Python; its behavior can be modified via the `__matmul__` and `__rmatmul__` magic methods
- `.com` is access to the attribute `com`; it can be modified via `__getattr__`

One big difference between this code and the other examples is that this one is actually valid, just unusual. We should probably not use it. But let’s write this into our knowledge quest book.

### Fallacy of the walrus

The darkness has fallen onto Python: the one that split the friendly developer community, the one that brought the controversy.
You have gained the power to program in strings:

```python
from math import radians

for angle in range(360):
    print(f'{angle=} {(th:=radians(angle))=:.3f}')
    print(th)

# => angle=0 (th:=radians(angle))=0.000
# => 0.0
# => angle=1 (th:=radians(angle))=0.017
# => 0.017453292519943295
# => angle=2 (th:=radians(angle))=0.035
# => 0.03490658503988659
```

What is going on here?

- `f'{angle=}'` is a new (python3.8+) way to write `f'angle={angle}'`
- `(th:=radians(angle))` is an assignment expression; yes, you can do assignments in strings now
- `=:.3f` is the formatting part; it prints the expression and its rounded result value
- `print(th)` works because `(th:=radians(angle))` assigns `th` in the local scope

Should you use assignment expressions? Well, that’s up to you. Should you assign values inside strings? Absolutely not.

And here’s a friendly reminder of things that you can (but also probably should not) do with `f` strings themselves:

```python
print(f"{getattr(__import__('os'), 'eman'[None:None:-1])}")
# => posix
```

Just a regular module import inside a string. Move on, nothing to see here. Luckily, we are not allowed to write this line in our real code:

```text
105:1   WPS221 Found line with high Jones Complexity: 16
print(f"{getattr(__import__('os'), 'eman'[None:None:-1])}")
^
105:7   WPS305 Found `f` string
print(f"{getattr(__import__('os'), 'eman'[None:None:-1])}")
      ^
105:18  WPS421 Found wrong function call: __import__
print(f"{getattr(__import__('os'), 'eman'[None:None:-1])}")
                 ^
105:36  WPS349 Found redundant subscript slice
print(f"{getattr(__import__('os'), 'eman'[None:None:-1])}")
                                   ^
```

And one more thing: `f` strings cannot be used as docstrings:

```python
def main():
    f"""My name is {__file__}/{__name__}!"""

print(main().__doc__)
# => None
```

### Conclusion

We fought many ugly monsters that spawned inside our code and made the land of Python a better place to live. You should be proud of yourself, hero! That was an epic journey.
And I hope you learned something new: to be stronger for the next battles to come. The world needs you!

That’s it for today. Stay safe, traveler.

### Useful links

- [Python code disasters](https://github.com/sobolevn/python-code-disasters)
- [wtf, python?](https://github.com/satwikkansal/wtfpython)
- [wemake-python-styleguide](https://github.com/wemake-services/wemake-python-styleguide)

* * *

**Editor's note:** Seeing something wrong with this post? You can find the correct version [here](https://blog.logrocket.com/frustrations-in-python/).

## Plug: [LogRocket](https://logrocket.com/signup/), a DVR for web apps

![LogRocket Dashboard Free Trial Banner](https://i2.wp.com/blog.logrocket.com/wp-content/uploads/2017/03/1d0cd-1s_rmyo6nbrasp-xtvbaxfg.png?resize=1200%2C677&ssl=1)

[LogRocket](https://logrocket.com/signup/) is a frontend logging tool that lets you replay problems as if they happened in your own browser. Instead of guessing why errors happen, or asking users for screenshots and log dumps, LogRocket lets you replay the session to quickly understand what went wrong. It works perfectly with any app, regardless of framework, and has plugins to log additional context from Redux, Vuex, and @ngrx/store.

In addition to logging Redux actions and state, LogRocket records console logs, JavaScript errors, stacktraces, network requests/responses with headers + bodies, browser metadata, and custom logs. It also instruments the DOM to record the HTML and CSS on the page, recreating pixel-perfect videos of even the most complex single-page apps.

[Try it for free](https://logrocket.com/signup/).

* * *

The post [Frustrations in Python](https://blog.logrocket.com/frustrations-in-python/) appeared first on [LogRocket Blog](https://blog.logrocket.com).
bnevilleoneill
221,913
LAST Part - teach your kids to build their own game with Python.
a tutorial that teaches kids/beginners how to develop the famous Space Invaders game with Python.
0
2019-12-16T15:37:57
https://dev.to/mustafaanaskh99/last-part-teach-your-kids-to-build-their-own-game-with-python-33m2
python, turtle, games, kids
---
title: LAST Part - teach your kids to build their own game with Python.
published: true
description: a tutorial that teaches kids/beginners how to develop the famous Space Invaders game with Python.
tags: python, turtle, games, kids
cover_image: https://thepracticaldev.s3.amazonaws.com/i/os7ug14fnguu2oy70lls.png
---

So, without further ado, let's pick up from where we left off last time. (Nope, wait! If you haven't, go check [part 1](https://dev.to/mustafaanaskh99/teach-your-kids-to-build-their-own-game-with-python-rocket-1-3159) and [part 2](https://dev.to/mustafaanaskh99/teach-your-kids-to-build-their-own-game-with-python-2-d5l), then get back here. We'll be waiting!)

So far our code creates the main player and allows us to move it, creates the enemies, and randomly places them in the battle field. (Want to jump ahead and see the final outcome of this lesson? Feel free to [visit the original repo](https://github.com/MustafaAnasKH99/Space-Invaders---Python) or jump to the end of this post.)

The code by now looks like this:

```python
import turtle
from random import randint

pen = turtle.Turtle()
turtle.register_shape("ship.gif")
turtle.register_shape("invador.gif")
pen.penup()
pen.setposition(-300, 300)
pen.pendown()  # this line puts the pen on the paper

for side in range(3):  # see number three? its what reminds Python, THREE times!
    pen.forward(600)
    pen.right(90)
    pen.forward(600)
pen.hideturtle()

player = turtle.Turtle()
player.shape('ship.gif')
player.penup()

enemies = []
for i in range(10):
    enemies.append(turtle.Turtle())

for enemy in enemies:
    x = randint(-300, 300)
    y = randint(0, 300)
    enemy.penup()
    enemy.setposition(x, y)
    enemy.shape('invador.gif')

def moveRight():
    x = player.xcor()
    x += 10
    player.setx(x)

def moveLeft():
    x = player.xcor()
    x -= 10
    player.setx(x)

def moveForward():
    y = player.ycor()
    y += 10
    player.sety(y)

def moveBackward():
    y = player.ycor()
    y -= 10
    player.sety(y)

wn = turtle.Screen()
wn.listen()
wn.onkey(moveRight, 'd')
wn.onkey(moveLeft, 'a')
wn.onkey(moveForward, 'w')
wn.onkey(moveBackward, 's')
wn.bgpic("bg.gif")

turtle.done()  # this just keeps the window open until we close it
```

Great. Now let's plan what we still have to do before we move on:

- Keep on moving the enemies towards us as long as they are not dead
- Allow the player to fire
- We die if an enemy touches us
- Display our score on the screen
- Sure enough, enemies die if our bullet hits them

Let's tackle this step by step. First let's work on the enemies' movement. You can plan this differently if you wish, but for this tutorial we will keep it simple. Enemies will keep moving towards the right side until they reach the white line. When they reach the white line, they will step down towards us, start again from the left side, and move towards the right side. Each time an enemy reaches the right white line, it will step closer to us and start over from the left side. If it reaches the bottom of the field without us killing it or it touching us, it will start all over again from the top of the field. If we manage to hit it with a bullet, we will also make it start all over again from the top. Sounds good? Let's get this happening 🚀.
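Before wiring those rules into the game, they can be sketched as a tiny pure function and checked with plain numbers. (The helper name `next_position` is mine, not part of the game; the boundary values match the ones we will use below.)

```python
ENEMY_SPEED = 30  # same speed value the game code will use

def next_position(x, y):
    """Apply one step of the enemy movement rules described above."""
    if x > 270:      # hit the right white line:
        x = -270     # restart from the left side...
        y -= 40      # ...one step closer to us
    if y < -260:     # reached the bottom white line:
        y = 300      # start over from the top of the field
    return x + ENEMY_SPEED, y

# moving right normally
assert next_position(0, 100) == (30, 100)
# at the right edge: step down and restart from the left
assert next_position(280, 100) == (-240, 60)
# past the bottom line: start over from the top
assert next_position(0, -270)[1] == 300
```

Once this logic makes sense on paper, translating it into turtle calls is straightforward.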
> Move the enemies

Notice that our aim is to _keep_ on moving the enemies in one direction, *unless* they hit the white line, in which case we will make them start from the left side again. So the first step is to _keep_ on moving the enemy. This means that we need to write a piece of code that will *always* run. In code, we call this an *infinite loop*. Anything inside an infinite loop keeps on running until we tell it to stop. To create an infinite loop in Python we write:

```python
while True:
    # this is the body of the loop
```

Now we will write the code that moves the enemies in the _body_ of the infinite loop. Remember the _methods_ `xcor()` and `ycor()` we used to move our player? We will use the same _methods_ to move the enemies. Let's define the speed of the enemies and move them by writing the body of the loop:

```python
enemy_speed = 30  # this is the speed of our enemies - you can change it to make them faster/slower

while True:
    for enemy in enemies:  # go through the enemies one by one
        x = enemy.xcor()   # get the current x location of the enemy
        x += enemy_speed   # change the x location of the enemy by its speed, which is 30
        enemy.setx(x)      # move the enemy to its new x location
```

Now if you run this code, you will see that the enemies keep on moving to the right until they are out of the screen. This is good, but we still need to tell the enemy: *if* you hit one of the white lines, stop, go a step closer to the player, then start moving again from the other side. To do so, we will use an _if statement_. This is a statement that only runs if the condition we choose is true. Our condition is *if the enemy hits one of the sides*. How do we define such a condition in the code? Easy. We check if the current x location (`xcor()`) of the enemy has passed `300 - enemy_speed`. WAIT ... what?? OK, let's take a second here. Check the code you wrote previously to draw the battle field. Do you see `pen.setposition(-300,300)`?
This 300/-300 is the x location of the two sides of the field. The reason we subtract the speed of the enemy is to make sure it stays in the battle field: when the enemy is at 270, it will still move by 30, which puts it at 300. THEN we can make it start again from the other side, but a step closer to us.

> Before we write the code: so far we talked about x to move the enemy to the right. But how do we move it towards us? ... You are right, we should decrease the y, because the bottom of the screen is at a negative y.

Now inside the for loop (which is inside our infinite loop), write this *if statement*:

```python
enemy_speed = 30  # this is the speed of our enemies - you can change it to make them faster/slower

while True:
    for enemy in enemies:
        if enemy.xcor() > 270:  # check if the enemy hit the white line
            y = enemy.ycor()    # get the current y location of the enemy
            y -= 40             # change the y location of the enemy by 40
            enemy.sety(y)       # move it closer to us by 40
            enemy.speed(0)      # this makes the transition seem instant
            enemy.setx(-270)    # move the enemy to the left side again
        x = enemy.xcor()
        x += enemy_speed
        enemy.setx(x)
```

GREAT! Now we have the enemies moving in the right direction. If you run the code and wait, you will see them moving slowly towards the bottom of the battle field. *But* the problem is, they keep on going down until they are out of the screen. So we need to stop them when they hit the bottom white line and make them start again from the top.
_We need another if statement_, *but* for checking the y location this time:

```python
enemy_speed = 30  # this is the speed of our enemies - you can change it to make them faster/slower

while True:
    for enemy in enemies:
        if enemy.xcor() > 270:
            y = enemy.ycor()
            y -= 40
            enemy.sety(y)
            enemy.speed(0)
            enemy.setx(-270)
        if enemy.ycor() < -260:  # check if the enemy hit the bottom white line
            y = 300              # set y to the new location at the top of the screen
            enemy.speed(0)       # this makes the transition seem instant
            enemy.sety(y)        # move the enemy to the new location
        x = enemy.xcor()
        x += enemy_speed
        enemy.setx(x)
```

Yaay 🎆 now our enemies move perfectly! Feel free to change the values of x, y, and enemy_speed to choose the speed that best fits your game.

> Keep on moving the enemies towards us as long as they are not dead ✔

Now we still have some work to do with the enemies, since we want to change their position if they get hit by our bullet, but this will have to wait until we make the player fire. So let's move to that now. There are a few things we need to do:

- First, since all objects in our game are actually turtle objects, we will need to create a bullet turtle and hide it from the screen.
- Second, we will create a function that runs when we press the firing button (the space bar in this case), which displays the bullet we created, moves it to where our player is, then makes it move forward.
- Third, we will create a score variable, display it on the screen, and then, if the bullet touches any enemy, increase the score by one and move that enemy to the top of the battle field to start again.
(True, our game only ends if we die `:]` )

To create the bullet, write the following code right before our `moveRight()` function:

```python
bullet = turtle.Turtle()
bullet.color('yellow')
bullet.penup()
bullet.speed(0)
bullet.setheading(90)  # this just makes the triangular turtle head towards the top of the screen
bullet.shapesize(2, 2)
bullet.hideturtle()

bullet_speed = 50
bullet_state = 'ready'  # ???
```

Most of the code above looks familiar, right? But why exactly did we add the bullet_state thing? Since there is more than one function responsible for the bullet, this helps us know when we can fire and when we cannot. The goal is to fire one bullet every time we hit the space bar, so we will change bullet_state once we hit the bar, and change it again once our bullet reaches the end of the battle field. (I understand that this might not make a lot of sense right now. Do not worry about it. You will understand more later when we use it.)

Now if you run this code, you will not really see anything new. That is because our bullet is hidden and we have not shown it yet. Let's create the function that fires. Right below our `moveBackward()` function, add this:

```python
def fire_bullet():
    # the global keyword makes bullet_state assignable from within the function;
    # if you remove it, the code breaks
    global bullet_state
    if bullet_state == 'ready':
        bullet_state = 'fire'
        x = player.xcor()         # get the x location of the player
        y = player.ycor() + 10    # get the y location of the player, plus a little
        bullet.setposition(x, y)  # move the bullet to where the player is
        bullet.showturtle()       # show the bullet
```

Great. The function is now ready, but it does not run yet. We need to tell Python to run it when we press space. You know how to do that, right? Add this below its `wn.onkey` friends.
```python
wn.onkey(fire_bullet, 'space')
```

This is how our game code looks now:

```python
import turtle
from random import randint

pen = turtle.Turtle()
turtle.register_shape("ship.gif")
turtle.register_shape("invador.gif")
pen.penup()
pen.setposition(-300,300)
pen.pendown() #this line puts the pen on the paper
for side in range(3): #see number three? its what reminds Python, THREE times!
    pen.forward(600)
    pen.right(90)
pen.forward(600)
pen.hideturtle()

player = turtle.Turtle()
player.shape('ship.gif')
player.penup()

enemies = []
for i in range(10):
    enemies.append(turtle.Turtle())

for enemy in enemies:
    x = randint(-300, 300)
    y = randint(0, 300)
    enemy.penup()
    enemy.setposition(x, y)
    enemy.shape('invador.gif')

bullet = turtle.Turtle()
bullet.color('yellow')
bullet.penup()
bullet.speed(0)
bullet.setheading(90) #this just makes the triangular turtle head towards the top of the screen
bullet.shapesize(2, 2)
bullet.hideturtle()

bullet_speed = 50
bullet_state = 'ready' #???

def moveRight():
    x = player.xcor()
    x += 10
    player.setx(x)

def moveLeft():
    x = player.xcor()
    x -= 10
    player.setx(x)

def moveForward():
    y = player.ycor()
    y += 10
    player.sety(y)

def moveBackward():
    y = player.ycor()
    y -= 10
    player.sety(y)

def fire_bullet():
    # the global word makes bullet_state accessible from within the function. If you remove it, the code breaks
    global bullet_state
    if bullet_state == 'ready':
        bullet_state = 'fire'
        x = player.xcor() # get the x location of the player
        y = player.ycor() + 10 # get the y location of the player
        bullet.setposition(x, y) # move the bullet to where the player is
        bullet.showturtle() # show the bullet

wn = turtle.Screen()
wn.listen()
wn.onkey(moveRight, 'd')
wn.onkey(moveLeft, 'a')
wn.onkey(moveForward, 'w')
wn.onkey(moveBackward, 's')
wn.onkey(fire_bullet, 'space')
wn.bgpic("bg.gif")

enemy_speed = 30 # this is the speed of our enemies - You can change it to make them faster/slower

while True:
    for enemy in enemies: # go through the enemies one by one
        if enemy.xcor() > 270: # check if the enemy hit the white line
            y = enemy.ycor() # get the current y location of the enemy
            y -= 40 # change the y location of the enemy by 40
            enemy.sety(y) # move it closer to us by y which is 40
            enemy.speed(0) # this makes the transition seem instant
            enemy.setx(-270) # move the enemy to the left side again
        if enemy.ycor() < -260: # check if the enemy hit the bottom white line
            y = 300 # set y to the new location at the top of the screen
            enemy.speed(0) # this makes the transition seem instant
            enemy.sety(y) # move the enemy to the new location
        x = enemy.xcor() # get the current x location of the enemy
        x += enemy_speed # change the x location of the enemy by its speed which is 30
        enemy.setx(x) # move the enemy to its new x location

turtle.done() #this just keeps the window open until we close it.
turtle.close() #this just fixes issues related to closing the window
```

Try running the code and pressing the space bar. What happens? Do you see the yellow bullet? Does it move? If you think we missed something or did something wrong, then think again 😉 This function should not be responsible for moving the bullet. It only fires the bullet. That's why we have the `bullet_state` variable. When we press the space bar and fire the bullet, we change its state from 'ready' to 'fire'.
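Stripped of all the turtle drawing, the ready/fire idea is just a variable acting as a switch. Here is a tiny stand-alone sketch (plain functions and prints, no game window, and the return strings are made up just for this demo) you can run on its own to see why pressing space a second time does nothing until the bullet resets:

```python
bullet_state = 'ready'

def fire_bullet():
    # only fire if we are allowed to
    global bullet_state
    if bullet_state == 'ready':
        bullet_state = 'fire'
        return 'fired!'
    return 'still flying...'

def bullet_reached_top():
    # pretend the bullet hit the top white line
    global bullet_state
    bullet_state = 'ready'

print(fire_bullet())   # fired!
print(fire_bullet())   # still flying... (state is 'fire', so nothing happens)
bullet_reached_top()   # the bullet leaves the screen
print(fire_bullet())   # fired!
```

The real game does exactly this, except that "fire" means showing and moving the bullet turtle instead of returning a string.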
But moving it should happen in the *infinite loop* (the infinite loop is responsible for moving stuff. Remember moving the enemies?). What we want to do now is the following: if the bullet state is 'fire', move the bullet towards the top. Keep on moving the bullet up until it either hits the top white line or one of the enemies. If so, change its state to 'ready' and hide it again.

So we need two new _if statements_ in the body of the infinite while loop: one that moves the bullet if the state is 'fire', and another that changes the state back to 'ready' if the bullet hits the top white line (don't worry about hitting the enemies for now. We will create a new function for that). Change the while loop to this:

```python
while True:
    for enemy in enemies:
        if enemy.xcor() > 270:
            y = enemy.ycor()
            y -= 40
            enemy.sety(y)
            enemy.speed(0)
            enemy.setx(-270)
        if enemy.ycor() < -260:
            y = 300
            enemy.speed(0)
            enemy.sety(y)
        if bullet_state == 'fire': # check if the state of the bullet is fire
            y = bullet.ycor() # get the y location of the bullet
            y += bullet_speed # increase the y location of the bullet by its speed
            bullet.sety(y) # move the bullet to the new location
        if bullet.ycor() > 275: # check if the bullet hit the top white line
            bullet.hideturtle() # hide the bullet
            bullet_state = 'ready' # change its state to ready to enable the player to fire again
        x = enemy.xcor()
        x += enemy_speed
        enemy.setx(x)
```

Yayy 🎆 Now we can actually fire! Let's now hurt our enemies 😠 If the bullet hits an enemy, we want to move the enemy back to the top and increase the score. To know if a collision between the bullet and an enemy happened, we will make a function called `isCollosion()` and run it in the infinite loop. Add the function below right above this line: `wn = turtle.Screen()`

```python
def isCollosion(t1, t2):
    distance = math.sqrt(math.pow(t1.xcor() - t2.xcor(), 2) + math.pow(t1.ycor() - t2.ycor(), 2))
    if distance < 35:
        return True
    else:
        return False
```

The function above is a bit complex.
You are not expected to understand exactly how this works, but you might want to ask your parent about it if you are a math geek 😄 In short, the function imagines a circle of a certain size around the turtle. It then checks if the distance between t1 and t2 (when we call the function, we can make t1 the bullet and t2 the enemy) is less than 35. If so, it means they are close enough, so they touched. Also, since this is math stuff, we need to tell Python to bring the math tools (you know, calculators and stuff), so at the top of your code add `import math` (did you notice something? Changing 35 to a smaller number makes it more difficult to hit an enemy, as you would need to be more accurate, whereas making it a bigger number makes killing easier)

Now it is time to run the function inside the infinite loop. Somewhere in the body of the for loop which is inside the while loop, write this:

```python
while True:
    for enemy in enemies: # go through the enemies one by one
        if isCollosion(bullet, enemy):
            bullet.hideturtle()
            bullet_state = 'ready' #make the state ready to allow firing again
            enemy.setposition(-300, 300)
```

EVERYTHING IS WORKING REALLY PERFECTLY 🙌

> Allow the player to fire ✔
> Sure enough, enemies die if our bullet hits them ✔

You are way closer to becoming a real game developer! Only a few things are left. We need somewhere to show how many enemies we killed, and we need to stop the game if an enemy hits us. Let's start with the first.

At the top of our code, let's make a variable called *score* and make it equal to zero (no enemies killed at the beginning!)

`score = 0`

At the beginning of the game, let's create a turtle that writes the score to the screen:

```python
turtle.color("white")
turtle.penup()
turtle.setposition(-300, 250)
turtle.write("Your score is: {}".format(score), move=False, align="left", font=("Arial", 18, "normal"))
turtle.hideturtle()
```

Great. Now we need to increase the score every time a collision happens between a bullet and an enemy.
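The `turtle.write` line above leans on Python's `format` method to drop the score number into the message. You can try that part by itself, away from the game window, to see exactly what text ends up on the screen:

```python
score = 0
print("Your score is: {}".format(score))   # Your score is: 0

# every kill bumps the number before we rewrite the text
score += 1
print("Your score is: {}".format(score))   # Your score is: 1
```

The `{}` is a placeholder, and `format` swaps it for whatever value you give it, so the same message works no matter how big the score gets.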
To do so, we will have to clear what we wrote, increase the score, and write the score to the screen again. So let's update the `isCollosion()` check in the infinite loop to the following:

```python
if isCollosion(bullet, enemy):
    bullet.hideturtle()
    bullet_state = 'ready' #make the state ready to allow firing again
    enemy.setposition(-300, 300)
    score += 1
    turtle.clear()
    turtle.color("white")
    turtle.penup()
    turtle.setposition(-300, 250)
    turtle.write("Your score is: {}".format(score), move=False, align="left", font=("Arial", 18, "normal"))
    turtle.hideturtle()
```

Amazing! Now we can see how many enemies we have killed so far. We are almost there!

> Display our score on the screen ✔

Our code currently looks like this:

```python
import turtle
import math
from random import randint

score = 0

pen = turtle.Turtle()
turtle.register_shape("ship.gif")
turtle.register_shape("invador.gif")
pen.penup()
pen.setposition(-300,300)
pen.pendown() #this line puts the pen on the paper
for side in range(3): #see number three? its what reminds Python, THREE times!
    pen.forward(600)
    pen.right(90)
pen.forward(600)
pen.hideturtle()

player = turtle.Turtle()
player.shape('ship.gif')
player.penup()

enemies = []
for i in range(10):
    enemies.append(turtle.Turtle())

for enemy in enemies:
    x = randint(-300, 300)
    y = randint(0, 300)
    enemy.penup()
    enemy.setposition(x, y)
    enemy.shape('invador.gif')

bullet = turtle.Turtle()
bullet.color('yellow')
bullet.penup()
bullet.speed(0)
bullet.setheading(90) #this just makes the triangular turtle head towards the top of the screen
bullet.shapesize(2, 2)
bullet.hideturtle()

bullet_speed = 50
bullet_state = 'ready' #???
turtle.color("white")
turtle.penup()
turtle.setposition(-300, 250)
turtle.write("Your score is: {}".format(score), move=False, align="left", font=("Arial", 18, "normal"))
turtle.hideturtle()

def moveRight():
    x = player.xcor()
    x += 10
    player.setx(x)

def moveLeft():
    x = player.xcor()
    x -= 10
    player.setx(x)

def moveForward():
    y = player.ycor()
    y += 10
    player.sety(y)

def moveBackward():
    y = player.ycor()
    y -= 10
    player.sety(y)

def fire_bullet():
    # the global word makes bullet_state accessible from within the function. If you remove it, the code breaks
    global bullet_state
    if bullet_state == 'ready':
        bullet_state = 'fire'
        x = player.xcor() # get the x location of the player
        y = player.ycor() + 10 # get the y location of the player
        bullet.setposition(x, y) # move the bullet to where the player is
        bullet.showturtle() # show the bullet

def isCollosion(t1, t2):
    distance = math.sqrt(math.pow(t1.xcor() - t2.xcor(), 2) + math.pow(t1.ycor() - t2.ycor(), 2))
    if distance < 35:
        return True
    else:
        return False

wn = turtle.Screen()
wn.listen()
wn.onkey(moveRight, 'd')
wn.onkey(moveLeft, 'a')
wn.onkey(moveForward, 'w')
wn.onkey(moveBackward, 's')
wn.onkey(fire_bullet, 'space')
wn.bgpic("bg.gif")

enemy_speed = 30 # this is the speed of our enemies - You can change it to make them faster/slower

while True:
    for enemy in enemies: # go through the enemies one by one
        if isCollosion(bullet, enemy):
            bullet.hideturtle()
            bullet_state = 'ready' #make the state ready to allow firing again
            enemy.setposition(-300, 300)
            score += 1
            turtle.clear()
            turtle.color("white")
            turtle.penup()
            turtle.setposition(-300, 250)
            turtle.write("Your score is: {}".format(score), move=False, align="left", font=("Arial", 18, "normal"))
            turtle.hideturtle()
        if enemy.xcor() > 270: # check if the enemy hit the white line
            y = enemy.ycor() # get the current y location of the enemy
            y -= 40 # change the y location of the enemy by 40
            enemy.sety(y) # move it closer to us by y which is 40
            enemy.speed(0) # this makes the transition seem instant
            enemy.setx(-270) # move the enemy to the left side again
        if enemy.ycor() < -260: # check if the enemy hit the bottom white line
            y = 300 # set y to the new location at the top of the screen
            enemy.speed(0) # this makes the transition seem instant
            enemy.sety(y) # move the enemy to the new location
        if bullet_state == 'fire': # check if the state of the bullet is fire
            y = bullet.ycor() # get the y location of the bullet
            y += bullet_speed # increase the y location of the bullet by its speed
            bullet.sety(y) # move the bullet to the new location
        if bullet.ycor() > 275: # check if the bullet hit the top white line
            bullet.hideturtle() # hide the bullet
            bullet_state = 'ready' # change its state to ready to enable the player to fire again
        x = enemy.xcor() # get the current x location of the enemy
        x += enemy_speed # change the x location of the enemy by its speed which is 30
        enemy.setx(x) # move the enemy to its new x location

turtle.done() #this just keeps the window open until we close it.
turtle.close() #this just fixes issues related to closing the window
```

We are only a step away from getting this game done! We need to make sure that if any of the enemies touches us, we will stop the game and print out the final score. Can you guess how we are gonna know if we touched an enemy?? Aah right, we will use the `isCollosion()` function!
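The distance check inside `isCollosion()` is plain math, so you can play with it outside the game too. This stand-alone version takes x/y numbers instead of turtles (a small rewrite just for experimenting, not the game code itself):

```python
import math

def is_close(x1, y1, x2, y2, radius=35):
    # same square-root-of-squares distance formula the game uses
    distance = math.sqrt(math.pow(x1 - x2, 2) + math.pow(y1 - y2, 2))
    return distance < radius

print(is_close(0, 0, 10, 10))            # True  (distance is about 14.1)
print(is_close(0, 0, 40, 0))             # False (distance is exactly 40)
print(is_close(0, 0, 40, 0, radius=50))  # True  (a bigger circle makes hitting easier)
```

Try different numbers for `radius` here to get a feel for why changing the 35 in the game makes enemies easier or harder to hit.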
Let's insert this check inside the for loop in our infinite loop:

```python
if isCollosion(player, enemy):
    player.hideturtle() #hide player from the arena
    turtle.color('red')
    turtle.penup()
    turtle.setposition(0, 0)
    turtle.write("Game Over!", move=False, align="center", font=("Arial", 35, "normal"))
    turtle.setposition(0, -50) #Go to a new line
    turtle.write("Your score is: {}".format(score), move=False, align="center", font=("Arial", 35, "normal"))
    turtle.done() #Stop Game
    break
```

AND this was our last touch 🙋 Let's review the whole code one last time:

```python
import turtle
import math
from random import randint

score = 0

pen = turtle.Turtle()
turtle.register_shape("ship.gif")
turtle.register_shape("invador.gif")
pen.penup()
pen.setposition(-300,300)
pen.pendown() #this line puts the pen on the paper
for side in range(3): #see number three? its what reminds Python, THREE times!
    pen.forward(600)
    pen.right(90)
pen.forward(600)
pen.hideturtle()

player = turtle.Turtle()
player.shape('ship.gif')
player.penup()

enemies = []
for i in range(10):
    enemies.append(turtle.Turtle())

for enemy in enemies:
    x = randint(-300, 300)
    y = randint(0, 300)
    enemy.penup()
    enemy.setposition(x, y)
    enemy.shape('invador.gif')

bullet = turtle.Turtle()
bullet.color('yellow')
bullet.penup()
bullet.speed(0)
bullet.setheading(90) #this just makes the triangular turtle head towards the top of the screen
bullet.shapesize(2, 2)
bullet.hideturtle()

bullet_speed = 50
bullet_state = 'ready' #???
turtle.color("white")
turtle.penup()
turtle.setposition(-300, 250)
turtle.write("Your score is: {}".format(score), move=False, align="left", font=("Arial", 18, "normal"))
turtle.hideturtle()

def moveRight():
    x = player.xcor()
    x += 10
    player.setx(x)

def moveLeft():
    x = player.xcor()
    x -= 10
    player.setx(x)

def moveForward():
    y = player.ycor()
    y += 10
    player.sety(y)

def moveBackward():
    y = player.ycor()
    y -= 10
    player.sety(y)

def fire_bullet():
    # the global word makes bullet_state accessible from within the function. If you remove it, the code breaks
    global bullet_state
    if bullet_state == 'ready':
        bullet_state = 'fire'
        x = player.xcor() # get the x location of the player
        y = player.ycor() + 10 # get the y location of the player
        bullet.setposition(x, y) # move the bullet to where the player is
        bullet.showturtle() # show the bullet

def isCollosion(t1, t2):
    distance = math.sqrt(math.pow(t1.xcor() - t2.xcor(), 2) + math.pow(t1.ycor() - t2.ycor(), 2))
    if distance < 35:
        return True
    else:
        return False

wn = turtle.Screen()
wn.listen()
wn.onkey(moveRight, 'd')
wn.onkey(moveLeft, 'a')
wn.onkey(moveForward, 'w')
wn.onkey(moveBackward, 's')
wn.onkey(fire_bullet, 'space')
wn.bgpic("bg.gif")

enemy_speed = 30 # this is the speed of our enemies - You can change it to make them faster/slower

while True:
    for enemy in enemies: # go through the enemies one by one
        if isCollosion(player, enemy):
            player.hideturtle() #hide player from the arena
            turtle.color('red')
            turtle.penup()
            turtle.setposition(0, 0)
            turtle.write("Game Over!", move=False, align="center", font=("Arial", 35, "normal"))
            turtle.setposition(0, -50) #Go to a new line
            turtle.write("Your score is: {}".format(score), move=False, align="center", font=("Arial", 35, "normal"))
            turtle.done() #Stop Game
            break
        if isCollosion(bullet, enemy):
            bullet.hideturtle()
            bullet_state = 'ready' #make the state ready to allow firing again
            enemy.setposition(-300, 300)
            score += 1
            turtle.clear()
            turtle.color("white")
            turtle.penup()
            turtle.setposition(-300, 250)
            turtle.write("Your score is: {}".format(score), move=False, align="left", font=("Arial", 18, "normal"))
            turtle.hideturtle()
        if enemy.xcor() > 270: # check if the enemy hit the white line
            y = enemy.ycor() # get the current y location of the enemy
            y -= 40 # change the y location of the enemy by 40
            enemy.sety(y) # move it closer to us by y which is 40
            enemy.speed(0) # this makes the transition seem instant
            enemy.setx(-270) # move the enemy to the left side again
        if enemy.ycor() < -260: # check if the enemy hit the bottom white line
            y = 300 # set y to the new location at the top of the screen
            enemy.speed(0) # this makes the transition seem instant
            enemy.sety(y) # move the enemy to the new location
        if bullet_state == 'fire': # check if the state of the bullet is fire
            y = bullet.ycor() # get the y location of the bullet
            y += bullet_speed # increase the y location of the bullet by its speed
            bullet.sety(y) # move the bullet to the new location
        if bullet.ycor() > 275: # check if the bullet hit the top white line
            bullet.hideturtle() # hide the bullet
            bullet_state = 'ready' # change its state to ready to enable the player to fire again
        x = enemy.xcor() # get the current x location of the enemy
        x += enemy_speed # change the x location of the enemy by its speed which is 30
        enemy.setx(x) # move the enemy to its new x location

turtle.done() #this just keeps the window open until we close it.
turtle.close() #this just fixes issues related to closing the window
```

> Keep on moving the enemies towards us as long as they are not dead ✔
> Allow the player to fire ✔
> We die if an enemy touches us ✔
> Display our score on the screen ✔
> Sure enough, enemies die if our bullet hits them ✔

MISSION ACCOMPLISHED

Now you just run it and enjoy 🎆 🎆 🎆 !!!

Throughout this series, I got a lot of support messages from parents and I really appreciate that I was able to get any kid's interest.
If there are any suggestions/requests you have for me, please feel free to DM me.

<a href="https://www.buymeacoffee.com/MustafaAnas" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/default-orange.png" alt="Buy Me A Coffee" style="height: 51px !important;width: 217px !important;" ></a>

> I am on a lifetime mission to support and contribute to the general knowledge of the web community as much as possible. Some of my writings might sound too silly, or too difficult, but no knowledge is ever useless. If you like my articles, feel free to help me keep writing by getting me coffee :)
mustafaanaskh99
221,932
Server Side Rendering and Feature Flag Management
While doing SSR, you want the greatest amount of your code as possible to be shared between the server and the client.
0
2019-12-16T16:40:09
https://rollout.io/blog/server-side-rendering-and-feature-flag-management/
ssr, devops, programming, monitoring
---
title: Server Side Rendering and Feature Flag Management
published: true
description: While doing SSR, you want the greatest amount of your code as possible to be shared between the server and the client.
tags: ssr, devops, programming, monitoring
canonical_url: https://rollout.io/blog/server-side-rendering-and-feature-flag-management/
---

CloudBees Rollout has been offering JavaScript SDKs for years. On one hand, [rox-node](https://www.npmjs.com/package/rox-node) can be used on the server side and, on the other, [rox-browser](https://www.npmjs.com/package/rox-browser) is there for client-side, in-browser feature flags.

While this covers many use cases, the boundary between server and client is becoming fuzzier. React applications are no longer Single Page Applications running in the browser only. Everyone cares about SEO, and Server Side Rendering (SSR) is becoming a standard. And if you use feature flags to show or hide some functionality or data, SSR also makes sense from a security perspective: do not send the browser data it's not supposed to get!

While doing SSR, you want as much of your code as possible to be shared between the server and the client. Obviously, having two separate SDKs makes this difficult.

We are excited to introduce the [rox-ssr](https://www.npmjs.com/package/rox-ssr) SDK for CloudBees Rollout that will simplify your logic. Simply use rox-ssr and it will do all the magic for you (see below). And because rox-ssr is written in TypeScript, you automatically get all the TypeScript definitions, allowing you to learn the CloudBees Rollout API even faster and remain type-safe.

# rox-ssr in details

From a developer's perspective, only rox-ssr is used. In the background, rox-ssr automatically switches between rox-node and rox-browser depending on the environment. We know developers care about the size of their applications, so we kept that in mind such that server-specific code is not bundled into your final client bundle.
We successfully tested that with Webpack!

We also know that developers care not only about the look and feel, but also about performance. CloudBees Rollout does not fetch flag values from its servers at every evaluation, but instead fetches the flag configuration once and then uses it to evaluate the flags. Calling flag.isEnabled() is a local, constant-time operation. This was always true; rox-ssr makes it even better. On the server side, rox-ssr will fetch that configuration and refresh it regularly. Rox-ssr allows you to directly pass this configuration to the browser, making the initialization of Rollout on the client side immediate: no delay due to fetching data from the CloudBees Rollout server and no potentially annoying re-rendering of elements on the page due to flag configuration being received a few milliseconds too late. See the example below.

# How to use rox-ssr

1. Register and create your new app on [rollout.io](https://rollout.io?utm_source=devto&utm_medium=blog&utm_content=server_side_rendering_blog&utm_campaign=rollout_trial_devto)

2. Initialize CloudBees Rollout - this should be done once on the server side and once on the client side

```
import {Flag, Rox} from 'rox-ssr'

export const featureFlags = {
  myFirstFlag: new Flag(true),
  mySecondFlag: new Flag(false),
}

Rox.register('myTestNamespace', featureFlags)
await Rox.setup(<YOUR API KEY>)
```

3. In your `render()` method, server-side, add a `<script>` within the `<head>` tag to send the Rollout configuration from the server to the clients:

```
import {Rox} from 'rox-ssr'

// ...
<script
  type='text/javascript'
  dangerouslySetInnerHTML={{__html: `window.rolloutData = ${JSON.stringify(Rox.rolloutData)};`}}
/>
```

4. Using the flags

```
featureFlags.myFirstFlag.isEnabled()
```

# How to migrate from rox-node and/or rox-browser to rox-ssr

You might have had something like:

```
let Rox = null

if (process.browser) {
  Rox = require('rox-browser')
} else {
  Rox = require('rox-node')
}
```

Replace it with:

```
import {Rox} from 'rox-ssr'
```

There is a small change to apply when declaring a new Flag / Configuration / Variant.

Before:

```
const Rox = require('rox-browser')

export const container = {
  flag1: new Rox.Flag(true),
  configuration1: new Rox.Configuration(''),
  variant1: new Rox.Variant('', [])
}
```

After:

```
import {Flag, Configuration, Variant} from 'rox-ssr'

export const container = {
  flag1: new Flag(true),
  configuration1: new Configuration(''),
  variant1: new Variant('', [])
}
```

Add (or update) the way to send configuration data to clients:

```
import {Rox} from 'rox-ssr'

// ...
<script
  type='text/javascript'
  dangerouslySetInnerHTML={{__html: `window.rolloutData = ${JSON.stringify(Rox.rolloutData)};`}}
/>
```

Want to try CloudBees Rollout? Check out the [14 day free trial](https://rollout.io?utm_source=devto&utm_medium=blog&utm_content=server_side_rendering_blog&utm_campaign=rollout_trial_devto).
gguirado
222,194
PHP Vue form formData 10: input text using ajax (axios)
Happy Coding Add external script in head tag. First for vue, and second for axios ajax. &lt;h...
0
2019-12-17T04:55:12
https://dev.to/antelove19/php-vue-form-formdata-10-input-text-using-ajax-axious-469e
php, vue, form, axios
<hr />
<center>Happy Coding</center>
<hr />

Add two external scripts in the head tag: the first for Vue, the second for axios (ajax).

    <head>
        <script src="https://cdn.jsdelivr.net/npm/vue"></script>
        <script src="https://cdnjs.cloudflare.com/ajax/libs/axios/0.19.0/axios.js"></script>
    </head>

In the body, add a div tag with id="myApp" for the Vue virtual DOM:

    <div id="myApp" >
        <!-- v-on:submit.prevent -->
        <form method="post" action="process.php" @submit="submit" ref="formHTML" >
            Firstname: <input type="text" name="firstname" v-model="form.firstname" /> <br />
            Lastname: <input type="text" name="lastname" v-model="form.lastname" /> <br />
            <hr />
            <input type="submit" value="Submit" />
        </form>
    </div>

The Vue script. The `submit` method builds a <a href="https://developer.mozilla.org/en-US/docs/Web/API/FormData/Using_FormData_Objects" >formData Object</a> from the form element, and the `ajax` method sends it with <a href="https://github.com/axios/axios" >axios</a> (from <a href="https://www.npmjs.com/package/axios" >npm</a>):

    <script>
    let vm = new Vue({
        el: "#myApp",
        data: {
            form: {},
            result: {}
        },
        methods: {
            submit: async function (event) {
                event.preventDefault();
                var formHTML = event.target; // this.$refs.formHTML
                console.log( formHTML ); // formHTML element
                var formData = new FormData( formHTML );
                console.log( formData );
                /* AJAX request */
                this.ajax( formHTML, formData ); // ajax( form, data, destination = null )
            },
            ajax: async function ( form, data, destination = null ) {
                await axios( {
                    method: form.method,
                    url: form.action,
                    data: data,
                    config: { headers: { "Content-Type": "multipart/form-data" } }
                } )
                /* handle success */
                .then( result => {
                    this.result = result.data;
                    console.log(result);
                    console.log(result.data);
                } )
                /* handle error */
                .catch( error => { console.error(error) } );
            }
        }
    });
    </script>

<hr />

process.php

    var_dump($_POST);

<hr />

Demo on <a href="https://repl.it" >repl.it</a>
<ul>
    <li><a href="https://repl.it/@antelove19/PHP-React-form-formData-10" >Editor</a></li>
    <li><a href="https://Vue-form-formData-10.antelove19.repl.co" >Live</a></li>
</ul>

<hr/>
<center>Thank you for reading :)</center>
<hr/>
antelove19
222,204
Create a Secure Azure Sphere App using the Grove Shield Sensor Kit
This tutorial is a step by step guide to building your first Azure Sphere application using the Seeed Studio Grove Shield and Grove Sensors targeting Azure Sphere SDK 19.11 or better.
0
2019-12-17T05:46:59
https://dev.to/azure/create-a-secure-azure-sphere-app-using-the-grove-shield-sensor-kit-37bg
azuresphere, visualstudio, security, iot
---
title: Create a Secure Azure Sphere App using the Grove Shield Sensor Kit
published: true
description: This tutorial is a step by step guide to building your first Azure Sphere application using the Seeed Studio Grove Shield and Grove Sensors targeting Azure Sphere SDK 19.11 or better.
tags: #AzureSphere #VisualStudio #Security #IoT
---

![Azure Sphere with shield](https://raw.githubusercontent.com/gloveboxes/Create-a-Secure-Azure-Sphere-App-using-the-Grove-Shield-Sensor-Kit/master/resources/azure-sphere-shield.png)

Follow me on Twitter [@dglover](https://twitter.com/dglover)

|Author|[Dave Glover](https://developer.microsoft.com/en-us/advocates/dave-glover?WT.mc_id=devto-blog-dglover), Microsoft Cloud Developer Advocate |
|:----|:---|
|Target Platform | Seeed Studio Azure Sphere MT3620 |
|Developer Platform | Windows 10 or Ubuntu 18.04 |
|Azure SDK | Azure Sphere SDK 19.11 or better |
|Developer Tools| [Visual Studio (The free Community Edition or better)](https://visualstudio.microsoft.com/vs/?WT.mc_id=devto-blog-dglover) or [Visual Studio Code (Free OSS)](https://code.visualstudio.com?WT.mc_id=devto-blog-dglover)|
|Hardware | [Seeed Studio Grove Shield](https://www.seeedstudio.com/MT3620-Grove-Shield.html), and the [Grove Temperature and Humidity Sensor (SHT31)](https://www.seeedstudio.com/Grove-Temperature-Humidity-Sensor-SHT31.html) |
|Language| C|
|Date|As of December, 2019|

## What is Azure Sphere

Azure Sphere is a secured, high-level application platform with built-in communication and security features for internet-connected devices. It comprises a secured, connected, crossover microcontroller unit (MCU), a custom high-level Linux-based operating system (OS), and a cloud-based security service that provides continuous, renewable security.
## Hardware Required

This tutorial requires the [Seeed Studio Azure Sphere](https://www.seeedstudio.com/Azure-Sphere-MT3620-Development-Kit-US-Version-p-3052.html), the [Seeed Studio Grove Shield](https://www.seeedstudio.com/MT3620-Grove-Shield.html), and the [Grove Temperature and Humidity Sensor (SHT31)](https://www.seeedstudio.com/Grove-Temperature-Humidity-Sensor-SHT31.html). These parts are available from many online stores including [Seeed Studio](www.seeedstudio.com).

Be sure to plug the Grove Temperature Sensor into one of the I2C connectors on the Grove Shield.

## Set up your Development Environment

This tutorial assumes Windows 10 and [Visual Studio (The free Community Edition or better)](https://visualstudio.microsoft.com/vs/?WT.mc_id=devto-blog-dglover). For now, Azure Sphere templates are only available for Visual Studio. However, you can clone and open this solution on Windows and Ubuntu 18.04 with [Visual Studio Code](https://code.visualstudio.com/?WT.mc_id=devto-blog-dglover).

Follow the Azure Sphere [Overview of set up procedures](https://docs.microsoft.com/en-au/azure-sphere/install/overview?WT.mc_id=devto-blog-dglover) guide.

## Azure Sphere SDK

This tutorial assumes you are using the [Azure Sphere SDK 19.11](https://docs.microsoft.com/en-us/azure-sphere/resources/release-notes-1911?WT.mc_id=devto-blog-dglover) or better, which uses the CMake build system.

This tutorial uses a fork of the Seeed Studio [Grove Shield Library](https://github.com/Seeed-Studio/MT3620_Grove_Shield) that has been updated to support Azure Sphere SDK 19.11.

## Clone the MT3620 Grove Shield Library

1. Create a folder where you plan to build your Azure Sphere applications.
2. Clone the MT3620 Grove Shield Library. Open a command window and change to the directory where you plan to build your Azure Sphere applications.
```bash
git clone https://github.com/gloveboxes/MT3620_Grove_Shield.git
```

## Create a new Visual Studio Azure Sphere Project

Start Visual Studio and create a new project in the same directory you cloned the MT3620 Grove Shield Library into. It is important to create the Visual Studio project in the same folder you cloned the MT3620 Grove Shield into, as there are relative links to this library in the application you will create.

```text
azure-sphere
|- MT3620_Grove_Shield
|- YourAzureSphereApplication
```

![](https://raw.githubusercontent.com/gloveboxes/Create-a-Secure-Azure-Sphere-App-using-the-Grove-Shield-Sensor-Kit/master/resources/vs-create-new-project.png)

### Select Azure Sphere Project Template

Type **sphere** in the search box and select the Azure Sphere Blink template.

![](https://raw.githubusercontent.com/gloveboxes/Create-a-Secure-Azure-Sphere-App-using-the-Grove-Shield-Sensor-Kit/master/resources/vs-select-azure-sphere-blink.png)

### Configure new Azure Sphere Project

Name the project and set the save location.

![](https://raw.githubusercontent.com/gloveboxes/Create-a-Secure-Azure-Sphere-App-using-the-Grove-Shield-Sensor-Kit/master/resources/vs-configure-new-project.png)

### Open the CMakeLists.txt file

CMakeLists.txt defines the build process, the files and locations of libraries, and more.

![](https://raw.githubusercontent.com/gloveboxes/Create-a-Secure-Azure-Sphere-App-using-the-Grove-Shield-Sensor-Kit/master/resources/vs-open-cmakelists.png)

### Add a Reference to MT3620_Grove_Shield_Library

Two items need to be added:

1. The source location of the MT3620 Grove Shield library. Note, this is the relative path to the Grove Shield library.
2. Add MT3620_Grove_Shield_Library to the target_link_libraries definition. This is equivalent to adding a reference.
![](https://raw.githubusercontent.com/gloveboxes/Create-a-Secure-Azure-Sphere-App-using-the-Grove-Shield-Sensor-Kit/master/resources/vs-configure-cmakelists.png)

## Set the Application Capabilities

The application manifest defines what resources will be available to the application. Define the minimum set of privileges required by the application. This is core to Azure Sphere security and is also known as the [Principle of least privilege](https://en.wikipedia.org/wiki/Principle_of_least_privilege).

1. Review the [Grove Shield Sensor Capabilities Quick Reference](#grove-shield-sensor-capabilities-quick-reference) to understand what capabilities are required for each sensor in the library.
2. Open **app_manifest.json**
3. Add Uart **ISU0** - Note, access to the I2C SHT31 temperature/humidity sensor via the Grove Shield was built before Azure Sphere supported I2C. Hence calls to the sensor are proxied via the Uart.
4. Note, GPIO 9 is used to control an onboard LED.

```json
{
  "SchemaVersion": 1,
  "Name": "AzureSphereBlink1",
  "ComponentId": "a3ca0929-5f46-42b0-91ba-d5de1222da86",
  "EntryPoint": "/bin/app",
  "CmdArgs": [],
  "Capabilities": {
    "Gpio": [ 9 ],
    "Uart": [ "ISU0" ],
    "AllowedApplicationConnections": []
  },
  "ApplicationType": "Default"
}
```

### Update the Code

The following code includes the Grove Sensor headers, opens the Grove Sensor, and then loops, reading the temperature and humidity and writing this information to the debugger log.
Replace all the existing code in the **main.c** file with the following:

```c
#include <signal.h>
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <errno.h>

#include <applibs/log.h>
#include <applibs/gpio.h>

// Grove Temperature and Humidity Sensor
#include "../MT3620_Grove_Shield/MT3620_Grove_Shield_Library/Grove.h"
#include "../MT3620_Grove_Shield/MT3620_Grove_Shield_Library/Sensors/GroveTempHumiSHT31.h"

static volatile sig_atomic_t terminationRequested = false;

static void TerminationHandler(int signalNumber)
{
    // Don't use Log_Debug here, as it is not guaranteed to be async signal safe
    terminationRequested = true;
}

int main(int argc, char* argv[])
{
    Log_Debug("Application starting\n");

    // Register a SIGTERM handler for termination requests
    struct sigaction action;
    memset(&action, 0, sizeof(struct sigaction));
    action.sa_handler = TerminationHandler;
    sigaction(SIGTERM, &action, NULL);

    // Change this GPIO number and the number in app_manifest.json if required by your hardware.
    int fd = GPIO_OpenAsOutput(9, GPIO_OutputMode_PushPull, GPIO_Value_High);
    if (fd < 0) {
        Log_Debug(
            "Error opening GPIO: %s (%d). Check that app_manifest.json includes the GPIO used.\n",
            strerror(errno), errno);
        return -1;
    }

    // Initialize Grove Shield and Grove Temperature and Humidity Sensor
    int i2cFd;
    GroveShield_Initialize(&i2cFd, 115200);
    void* sht31 = GroveTempHumiSHT31_Open(i2cFd);

    const struct timespec sleepTime = { 1, 0 };

    while (!terminationRequested) {
        GroveTempHumiSHT31_Read(sht31);
        float temp = GroveTempHumiSHT31_GetTemperature(sht31);
        float humi = GroveTempHumiSHT31_GetHumidity(sht31);

        Log_Debug("Temperature: %.1fC\n", temp);
        Log_Debug("Humidity: %.1f%%\n", humi);

        GPIO_SetValue(fd, GPIO_Value_Low);
        nanosleep(&sleepTime, NULL);
        GPIO_SetValue(fd, GPIO_Value_High);
        nanosleep(&sleepTime, NULL);
    }
}
```

## Deploy the Application to the Azure Sphere

1. Connect the Azure Sphere to your computer via USB
2.
Ensure you have [claimed](https://docs.microsoft.com/en-au/azure-sphere/install/claim-device?WT.mc_id=devto-blog-dglover), [connected](https://docs.microsoft.com/en-au/azure-sphere/install/configure-wifi?WT.mc_id=devto-blog-dglover), and [developer enabled](https://docs.microsoft.com/en-au/azure-sphere/install/qs-blink-application?WT.mc_id=devto-blog-dglover) your Azure Sphere. 3. Select **GDB Debugger (HLCore)** from the **Select Startup** dropdown. ![](https://raw.githubusercontent.com/gloveboxes/Create-a-Secure-Azure-Sphere-App-using-the-Grove-Shield-Sensor-Kit/master/resources/vs-start-application.png) 4. From Visual Studio, press **F5** to build, deploy, start, and attach the remote debugger to the Azure Sphere. ### View the Debugger Output Open the _Output_ window to view the output from **Log_Debug** statements in _main.c_. You can do this by using the Visual Studio **Ctrl+Alt+O** keyboard shortcut or by clicking the **Output** tab found along the bottom/right of Visual Studio. ![Visual Studio View Output](https://raw.githubusercontent.com/gloveboxes/Create-a-Secure-Azure-Sphere-App-using-the-Grove-Shield-Sensor-Kit/master/resources/vs-view-output.png) ### Set a Debug Breakpoint Set a debugger breakpoint by clicking in the margin to the left of the line of code you want the debugger to stop at. In the **main.c** file, set a breakpoint in the margin of the line that reads the Grove temperature and humidity sensor, **GroveTempHumiSHT31_Read(sht31);**. ![](https://raw.githubusercontent.com/gloveboxes/Create-a-Secure-Azure-Sphere-App-using-the-Grove-Shield-Sensor-Kit/master/resources/vs-set-breakpoint.png) ### Stop the Debugger **Stop** the debugger by using the Visual Studio **Shift+F5** keyboard shortcut or by clicking the **Stop Debugging** icon. 
![](https://raw.githubusercontent.com/gloveboxes/Create-a-Secure-Azure-Sphere-App-using-the-Grove-Shield-Sensor-Kit/master/resources/vs-stop-debugger.png) ## Azure Sphere Application Cloud Deployment Now that you have learnt how to "Side Load" an application onto Azure Sphere, it is time to learn about the [Deployment Basics]() and how to _Cloud Deploy_ an application. ## Finished 完了 fertig finito ख़त्म होना terminado Congratulations, you created a secure Internet of Things Azure Sphere application. ![](https://raw.githubusercontent.com/gloveboxes/Create-a-Secure-Azure-Sphere-App-using-the-Grove-Shield-Sensor-Kit/master/resources/finished.jpg) ## Appendix ### Grove Shield Sensor Capabilities Quick Reference | Sensors | Socket | Capabilities | | :------------- | :------------- | :----------- | | Grove Light Sensor | Analog | "Gpio": [ 57, 58 ], "Uart": [ "ISU0"] | | Grove Rotary Sensor | Analog | "Gpio": [ 57, 58 ], "Uart": [ "ISU0"] | | Grove 4 Digit Display | GPIO0 or GPIO4 | "Gpio": [ 0, 1 ] or "Gpio": [ 4, 5 ] | | Grove LED Button | GPIO0 or GPIO4 | "Gpio": [ 0, 1 ] or "Gpio": [ 4, 5 ] | | Grove Oled Display 96x96 | I2C | "Uart": [ "ISU0"] | | Grove Temperature Humidity SHT31 | I2C | "Uart": [ "ISU0"] | | Grove UART3 | UART3 | "Uart": [ "ISU3"] | | LED 1 | Red <br/> Green <br/> Blue | "Gpio": [ 8 ] <br/> "Gpio": [ 9 ] <br/> "Gpio": [ 10 ] | | LED 2 | Red <br/> Green <br/> Blue | "Gpio": [ 15 ] <br/> "Gpio": [ 16 ] <br/> "Gpio": [ 17 ] | | LED 3 | Red <br/> Green <br/> Blue | "Gpio": [ 18 ] <br/> "Gpio": [ 19 ] <br/> "Gpio": [ 20 ] | | LED 4 | Red <br/> Green <br/> Blue | "Gpio": [ 21 ] <br/> "Gpio": [ 22 ] <br/> "Gpio": [ 23 ] | For more pin definitions see __mt3620_rdb.h__ in the MT3620_Grove_Shield/MT3620_Grove_Shield_Library folder. 
### Azure Sphere Grove Kit | Azure Sphere | Image | | ---- | ---- | | [Azure Sphere MT3620 Development Kit](https://www.seeedstudio.com/Azure-Sphere-MT3620-Development-Kit-US-Version-p-3052.html)| | [Azure Sphere MT3620 Development Kit Shield](https://www.seeedstudio.com/Grove-Starter-Kit-for-Azure-Sphere-MT3620-Development-Kit.html). <br/> Note, you can also purchase the parts separately. | ![](https://raw.githubusercontent.com/gloveboxes/Create-a-Secure-Azure-Sphere-App-using-the-Grove-Shield-Sensor-Kit/master/resources/seeed-studio-grove-shield-and-sensors.jpg) | ### Azure Sphere MT3620 Developer Board Pinmap The full Azure Sphere MT3620 Board Pinmap can be found on the [Azure Sphere MT3620 Development Kit](https://www.seeedstudio.com/Azure-Sphere-MT3620-Development-Kit-US-Version-p-3052.html) page. ![](https://raw.githubusercontent.com/gloveboxes/Create-a-Secure-Azure-Sphere-App-using-the-Grove-Shield-Sensor-Kit/master/resources/mt3620-dev-board-pinmap.png)
gloveboxes
222,237
What tools do you all use for multiple monitor setup?
I recently got my setup to 5 monitors. So far it's been good and I am getting used to a few little qu...
0
2019-12-17T07:09:19
https://dev.to/nickyoung/what-tools-do-you-all-use-for-multiple-monitor-setup-4nje
productivity, discuss
I recently got my setup to 5 monitors. So far it's been good and I am getting used to a few little quirks of the new flow, but I am curious if anyone out there has any cool programs or tips for being more productive on a setup like this. I am on Windows 10 if that matters.
nickyoung
222,434
Digital nomads, remote work, travelling... What makes you love/hate this?
Hey 👀✨ I'm thinking about writing an article about how to travel and work remotely cheap, having fun...
0
2019-12-17T09:46:00
https://dev.to/brownio/digital-nomads-remote-work-travelling-what-makes-you-love-hate-this-cep
discuss, travel, motivation, career
Hey 👀✨ I'm thinking about writing an article about how to travel and work remotely cheap, having fun, and having a comfortable stay. I haven't got much experience of working in different countries, so I thought it may be a good idea to see **what you people think about these topics**, and also to **collect information, links and any kind of help you'd like to share with the world.** So... **what makes you love || hate travelling, working remotely etc...?** Personally, I cannot find any blockers for me to do this (yet), so I'm eager to read what you have to say 🙈✨ ![Travelling](https://media.giphy.com/media/l3V0uMsvphowDr17O/giphy.gif)
brownio
222,458
How to convince your engineering lead to adopt Flutter
By Salvatore Giordano At the moment I'm not really into writing Flutter code, and I miss it. I've c...
0
2019-12-17T10:41:35
https://dev.to/packtpartner/how-to-convince-your-engineering-lead-to-adopt-flutter-3ka9
flutter, node, javascript, webdev
--- title: How to convince your engineering lead to adopt Flutter published: true description: tags: flutter, Node, js, webdev --- By Salvatore Giordano At the moment I'm not really into writing Flutter code, and I miss it. I changed jobs more or less 10 months ago. Now I'm a backend microservices developer using Node.js as a primary tool, but after all this time I'm starting to miss Flutter, Dart and that great community. So, my new mission is to convince my engineering lead to let me rewrite our main application using Google's cross-platform framework - Flutter. I succeeded one year ago with my former employer, but everyone is different in this life. **What is Flutter** Flutter is an application development framework made by Google, used for creating cross-platform mobile applications (on iOS and Android). As mentioned on the [official website](https://flutter.io/), it aims to make development as easy, quick, and productive as possible. Flutter’s features, including Hot Reload, a vast widget catalog, powerful performance, and a solid community, contribute to meeting that objective and make Flutter a pretty good framework. **Why use Flutter** What makes Flutter approachable to developers is that it requires no prior mobile experience and little programming skill. If you are familiar with object-oriented concepts (classes, methods, variables, etc.) and imperative programming concepts (loops, conditionals, etc.), you are good to get started. Flutter uses neither WebView nor the OEM widgets that ship with a mobile device, instead using its own rendering engine to draw widgets. Flutter provides a set of widgets (including Material Design and Cupertino (iOS-styled) widgets), managed and rendered by Flutter’s framework and engine. It has only a thin layer of C/C++ code, implementing most of its system in Dart, which developers can easily read, change, replace, or remove. 
Unlike JavaScript, where the UI is Just-In-Time compiled, Flutter provides a native experience by being Ahead-Of-Time compiled. Flutter also provides straightforward integration with Firebase, making your infrastructure instantly serverless, redundant and scalable. Flutter also increases developer productivity by allowing developers to see changes they make to the state of an app in less than one second. This is done using Flutter’s “hot reload” feature, which reloads the application UI while keeping the application state in memory. Not just that, at [2019 Google I/O](https://hub.packtpub.com/google-i-o-2019-flutter-ui-framework-now-extended-for-web-embedded-and-desktop/), Google made a major overhaul to its Flutter UI framework, expanding it from mobile to multi-platform. The company released the first technical preview of Flutter for Web. In September at GDD, the team announced the [successful integration of Flutter’s web support](https://hub.packtpub.com/google-releases-flutter-1-9-at-gdd-google-developer-days-conference/) into the main Flutter repository, which will allow developers to write for desktop, mobile, and the Web with the same codebase. **My journey to convince my lead for Flutter** At first, I tried saying something about this wonderful framework now and then: * We could try Flutter to write our app! * We have only one Android and one iOS developer, maybe we'll benefit in productivity! * Mhhh, we have this brand new feature to implement: using a cross-platform framework that lets you save and look at the result without recompiling everything every time may help us implement it quicker! * Hey! Look at that bird! Reminds me of Dash, do you know him? The Flutter mascot! Every moment is good to remind my teammates and my CTO to take a look at Flutter. I'm becoming worse than those subliminal messages in old movies so loved by the conspiracy guys over the web. 
But nothing could scratch that bad feeling that people have about cross-platform applications. Also, they had already had a bad experience using Cordova. In my prior job, I had more free time than now, and I decided to rewrite one of our applications in Flutter from scratch over the weekend. The application was loved by my entire team. Since then, they never looked back at native applications: Flutter was more comfortable and easier to use. So, what is the next step in my evil plan for Flutterization? We are an electric scooter sharing company. Apart from our main app, we have another application (at the moment an Angular web app, but we want to rewrite it using a cross-platform framework) used by the service team, which is responsible for changing batteries and maintaining our scooter fleets. My idea is to write the service app using Flutter, and there is a strong probability that it will be a success: everybody will love it, and it will be better than maintaining two different (but functionally equal) applications. **How to convince your team to move to Flutter?** Summarizing, here is my advice to convince your tech lead/product manager to consider Flutter as your next application framework: 1. Tell him about Flutter, the community, and its benefits. Try to convince him and your team by pointing to real-world applications using Flutter. 2. Take the risky choice of investing your personal free time to learn Flutter and bring the results to your boss. 3. Try to rebuild some application, written in some other framework, in Flutter. 4. For starters, use Flutter to craft a side application, not your main application or an application for your clients. I hope your boss appreciates your efforts and Flutter will eventually be your new daily companion. **How to learn Flutter?** 
If you want to take a brief journey into the Flutter world you can find my book on Packt Publishing, [Google Flutter Mobile Development Quick Start Guide](https://www.packtpub.com/application-development/google-flutter-mobile-development-quick-start-guide?utm_source=dev.to&utm_medium=&utm_campaign=OutreachB11253$5). In this book, you will understand the fundamentals of Flutter and get started with cross-platform mobile app development. You will learn about different widgets in Flutter and understand the concepts of Routing and Navigating. You will also work with Platform-specific code to use Native features and deploy your application on iOS and Android. **Author Bio** Salvatore Giordano is a 23-year-old software engineer from Italy. He currently works as a mobile and backend developer in Turin, where he attained a bachelor's degree in computer engineering. He is a member of the Google Developer Group of Turin, where he often gives talks regarding his experiences. He has written many articles on Flutter and contributed to the development of a number of plugins and libraries for the framework.
packtpartner
222,484
Vue CLI - Set up and getting started
Vue CLI is an all in one solution for getting started with a Vuejs app. Newbies and experts alike can...
0
2019-12-17T12:02:00
https://dev.to/grahammorby/vue-cli-set-up-and-getting-started-1kj2
vue, tutorial, npm, codenewbie
Vue CLI is an all-in-one solution for getting started with a Vue.js app. Newbies and experts alike can jump straight into the framework, hit the ground running with the CLI, and have a working app straight away. I myself started using it at the tail end of last year and it is now my go-to when I set up a new project of any type. I spin up a Vue CLI instance, crack open a Lumen API, and off I go. So how do we get set up? I'm going to assume you are using a Mac, and for this exercise I will be using npm. ### Step 1 We need to make sure we have npm installed. But what is npm? Ok, so I grabbed this from the npm website - *'npm makes it easy for JavaScript developers to share and reuse code, and makes it easy to update the code that you’re sharing, so you can build amazing things.'* So we need to get it installed: head over to https://nodejs.org/en/, download the version of your choice and follow the installer. ### Step 2 Next we need to load up our terminal. I myself use iTerm2 on Mac as I find it a really nice alternative to the Terminal on macOS. You can get a download here https://iterm2.com/ ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/nvdpc3bar5014jfal751.png) Once we have that loaded, run the following command ``` npm install -g @vue/cli ``` ### Step 3 Once that is installed we can type *'vue'* into the command line, which should give us a list of the available commands that the CLI offers. ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/7ztxnz0h00qg96l056ad.png) For this exercise we want to use the create command, as follows ``` vue create testingapp ``` So we are saying: Vue, please use the create command and name the app, in this case testingapp. Please feel free to use any name you like. ### Step 4 Once we run the command we are given some options. We have a default preset, and we can manually select features to match how we are building our app. The default features are Babel and ESLint. 
Babel is a JavaScript compiler, and ESLint will find and fix problems in your JavaScript code. My main build always uses vue-router, Vuex, Babel, and ESLint. So we have a clue what the last two do, but what are Vue Router and Vuex? Ok, so Vue Router really is what it says it is: a way for us to build routes to new pages and components in our app. I will explain this more in a future post. - https://router.vuejs.org/ Vuex is state management, and over on their website they explain it as follows - *'Vuex is a state management pattern + library for Vue.js applications. It serves as a centralized store for all the components in an application, with rules ensuring that the state can only be mutated in a predictable fashion.'* - https://vuex.vuejs.org/ ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/uk558w10qtc9zmw6ak6y.png) So for this series and exercise, this is what I will use. Select those options from the prompt that asked you to manually select features and go ahead and create your app. ### Step 5 Ok, so we are all done and the CLI has built our app. What do we do now? Well, there are two commands at the bottom of the dialog in the terminal, which are as follows: ``` cd testingapp npm run serve ``` The first command moves us into the new directory for the app we just spun up. Once inside, we run the second command, and we are then given a localhost address which npm has kindly generated for us to use in our browser. Go ahead and pop that into your browser and hey presto, you should now be greeted with the Vue CLI home page: our new app is built. ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/yw06m61bkjv0sp0nwajj.png) And you have just set up the Vue CLI and we are ready to get developing. In my next post on this exercise we will explore the file system, see what we have to work with, and create our first page and route. 
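Before moving on, the Vuex quote above ("a centralized store... state can only be mutated in a predictable fashion") can be illustrated with a tiny plain-JavaScript sketch of the pattern. To be clear, this is not the Vuex API, just the underlying idea; the `createStore` helper here is a made-up stand-in:

```javascript
// Minimal illustration of the Vuex idea: one central store whose state
// may only change through registered mutation functions.
function createStore({ state, mutations }) {
  return {
    state,
    // commit(type, payload) looks up a mutation by name and applies it
    commit(type, payload) {
      const mutation = mutations[type];
      if (!mutation) throw new Error(`Unknown mutation: ${type}`);
      mutation(state, payload);
    },
  };
}

const store = createStore({
  state: { count: 0 },
  mutations: {
    increment(state, amount) {
      state.count += amount;
    },
  },
});

store.commit('increment', 2);
console.log(store.state.count); // 2
```

Every component reading `store.state` sees the same data, and because changes funnel through `commit`, they stay predictable and traceable; that is the whole point of the pattern.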
*This is my first real attempt at an exercise tutorial and would welcome any feedback or tips to help me write this whole feature moving forward*
grahammorby
222,514
Lost in Translation (An Alexa Skill)
"Alexa, launch Translation Lost" Lost in Translation was the first Alexa Skill that me (and a friend...
0
2019-12-17T13:11:54
https://dev.to/viltaria/lost-in-translation-an-alexa-skill-4dbh
alexa, aws
*"Alexa, launch [Translation Lost](https://smile.amazon.com/dp/B082R3YQ3B/ref=sr_1_1)"* Lost in Translation was the first Alexa Skill that I (and a friend) published on Amazon's Alexa Skill Store. This post will give an overview of what the game does, how it does its stuff, and some of the problems we ran into. ## What It Do Lost in Translation is a speech-based game in which Alexa says a phrase in an accent and the player has to guess what Alexa said. For example, Alexa might say something like "May the Force be With You", but in a Korean accent, after which the player should respond with that same phrase. By default, the player has five minutes to score as many points as they can, with each correct phrase being worth a point. If they need help, the player can ask Alexa for a hint, or just give up on the phrase if it's too hard (sometimes it's near impossible). ## How It Do [Code Repo](https://github.com/Viltaria/Translation-Lost) This project was written using Amazon's [Litexa](https://litexa.com/), a literate-style programming language for Alexa Skills. Litexa provides an easy interface for interacting with AWS services such as DynamoDB, S3, and Polly. You can view the entire workflow of the Skill in `litexa/main.litexa`. ## Problems - Limited Alexa/Polly [Voice Support](https://developer.amazon.com/en-US/docs/alexa/custom-skills/speech-synthesis-markup-language-ssml-reference.html#voice) - Alexa doesn't support a lot of the voices that AWS Polly offers. This led us to hardcode a lot of the voice phrases, as we couldn't dynamically generate most of the ones we wanted (the Asianic Language ones are just hilarious) [Try it out!](https://smile.amazon.com/dp/B082R3YQ3B/ref=sr_1_1) [More Info](keygolem.com/lost-in-translation/)
viltaria
222,541
Practical Puppeteer: How to emulate timezone
Hi everybody! Today Puppeteer topic will be about emulating timezone when accessing a web page. This...
0
2019-12-17T14:33:39
https://dev.to/sonyarianto/practical-puppeteer-how-to-emulate-timezone-8d5
puppeteer, javascript, timezone
Hi everybody! Today's Puppeteer topic will be about emulating a timezone when accessing a web page. This feature has been available since Puppeteer version 2.0.0, and I think this API is very useful for testing and other use cases. My use case would be testing data that has datetime information related to the user's timezone when he/she accesses my website. Now I just want to create a small script to test this nice feature. In Puppeteer, the API to emulate a timezone is `page.emulateTimezone(timezoneId)`. The scenario: we will set the timezone to `Asia/Makassar` (which is GMT+8) and we will go to the website https://whatismytimezone.com to check whether the timezone emulation is correct or not. Simple, right? Let's start. ## Preparation Install Puppeteer ``` npm i puppeteer ``` ## The code File `emulate_timezone.js` ```js const puppeteer = require('puppeteer'); (async () => { // set some options (set headless to false so we can see // this automated browsing experience) let launchOptions = { headless: false, args: ['--start-maximized'] }; const browser = await puppeteer.launch(launchOptions); const page = await browser.newPage(); // set viewport and user agent (just in case for nice viewing) await page.setViewport({width: 1366, height: 768}); await page.setUserAgent('Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36'); // emulate to Asia/Makassar a.k.a GMT+8 await page.emulateTimezone('Asia/Makassar'); // go to the web await page.goto('https://whatismytimezone.com'); // close the browser // await browser.close(); })(); ``` As usual, I set the `headless` option to `false` so we can see the browser in action, and I do not close the browser at the end of the script. ## Run it ``` node emulate_timezone.js ``` If everything is OK, it will show information like below. 
![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/ot23bm5pqcgce4d26adl.png) My timezone is `GMT+7`, and the screenshot above shows `GMT+8` (since we set the timezone emulation to `Asia/Makassar`). This means the timezone emulation works perfectly. Thank you and I hope you enjoy it. Source code for this article is available at https://github.com/sonyarianto/emulate-timezone-in-puppeteer.git ## Reference - https://pptr.dev - https://whatismytimezone.com
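As a small bonus, sites like whatismytimezone.com simply read the timezone that JavaScript itself reports. The sketch below shows the values involved: inside a page where `page.emulateTimezone('Asia/Makassar')` is active, they would report `Asia/Makassar` and GMT+8; run standalone in Node, they report your machine's timezone instead.

```javascript
// Read the timezone information JavaScript exposes to a page.
// These are the same values that page.emulateTimezone() overrides in-page.
const timeZone = Intl.DateTimeFormat().resolvedOptions().timeZone;

// getTimezoneOffset() returns minutes *behind* UTC, so GMT+8 yields -480.
const offsetMinutes = new Date().getTimezoneOffset();
const gmtOffsetHours = -offsetMinutes / 60;

console.log(`Time zone: ${timeZone} (GMT${gmtOffsetHours >= 0 ? '+' : ''}${gmtOffsetHours})`);
```

You could pass this same snippet to `page.evaluate()` to assert programmatically that the emulation took effect, instead of eyeballing the website.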
sonyarianto
222,606
Chord Diagram 🔥
Chord diagram is a visually appealing way to represent weighted relationships in a radial layout. Ent...
0
2019-12-17T16:25:34
https://dev.to/sikrigagan/chord-diagram-35c4
codepen, fusioncharts, charts, chord
--- title: Chord Diagram 🔥 published: true tags: codepen, FusionCharts, charts, chord --- <p>A chord diagram is a visually appealing way to represent weighted relationships in a radial layout. Entities (or categories) among which the weighted relationships exist are drawn on the circumference, whereas the actual weighted relationships are drawn as links in the circular space of the chord diagram. Most of the time this weighted relationship is a flow or a movement, like migration of people from one country to another, or subscribers switching telecom operators. The essence of a chord diagram is in identifying the dominant relationships, and one can toggle to narrow the focus to the most important ones.</p> <p>Learn more about chord diagrams <a href="https://www.fusioncharts.com/charts/chord-diagram" target="_blank">here</a>.</p> {% codepen https://codepen.io/fusioncharts/pen/OJPbwxJ %}
sikrigagan
222,643
Higher-Order Functions in ReasonML
In a previous post, I developed a recursive function that found the index of the first negative value...
0
2019-12-18T17:35:43
https://dev.to/jdeisenberg/higher-order-functions-in-reasonml-1150
reason, programming, functional
[In a previous post](https://dev.to/jdeisenberg/taming-recursion-with-tail-recursion-3dd6), I developed a recursive function that found the index of the first negative value in a list of quarterly balances: ```reasonml let debitIndex = (data) => { let rec helper = (index) => { if (index == Js.Array.length(data)) { -1; } else if (data[index] < 0.0) { index; } else { helper(index + 1); } }; helper(0); }; let balances = [|563.22, 457.81, -309.73, 216.45|]; let result = debitIndex(balances); ``` What if I wanted to find the first value that was 1000.00 or more? Then I’d need to write another function like this: ```reasonml let goodQuarterIndex = (data) => { let rec helper = (index) => { if (index == Js.Array.length(data)) { -1; } else if (data[index] >= 1000.0) { index; } else { helper(index + 1); } }; helper(0); }; ``` It’s the same as the first function, except for the `if` test for the data value. What about finding the first value that’s equal to zero? I’d need to write yet another function that looks exactly the same as the preceding two, except for the `if` test. There must be a way to write a generic `findFirstIndex` function that can handle any of these cases without having to duplicate most of the code. We can do this by writing a *higher-order function*&mdash;a function that takes another function as its argument. Let’s put the tests for “we found it!” (the only code that’s different) into their own functions that take an item as input and return a `true` or `false` value. 
```reasonml let isDebit = (item) => {item < 0.0}; let isGoodQuarter = (item) => {item >= 1000.00}; let isZero = (item) => {item == 0.0}; ``` > *These functions that return boolean are sometimes called __predicate functions__ .* Then change the recursive function as follows: ```reasonml let findFirstIndex = (testingFunction, data) => { let rec helper = (index) => { if (index == Js.Array.length(data)) { -1; } else if (testingFunction(data[index])) { index; } else { helper(index + 1); } }; helper(0); }; ``` Now you can find the first negative balance, the first quarter with a good balance, and the first zero-balance quarter with these calls: ```reasonml let firstDebit = findFirstIndex(isDebit, balances); let firstGoodQuarter = findFirstIndex(isGoodQuarter, balances); let firstZero = findFirstIndex(isZero, balances); ``` When the first call is made, this is what it looks like: ![diagram showing isDebit function box pointing to testingFunction and balances pointing to data](http://langintro.com/dev.to/hof_diagram1.png) As a result, the call `testingFunction(data[index])` will pass `balances[index]` to the `isDebit()` function. The second time we call `findFirstIndex`, `testingFunction(data[index])` will pass `balances[index]` to the `isGoodQuarter()` function. The third time we call `findFirstIndex`, `testingFunction(data[index])` will pass `balances[index]` to the `isZero()` function. This ability to plug in a function as an argument to another function gives you great flexibility and helps you avoid a lot of duplicated code. # Example: Rate of Change Not all the functions you give to a higher-order function (HOF) need to be predicate functions. Consider this graph of the function *f*(*x*) = *x*^2 ![graph of parabola with horizontal and vertical lines in range 0-2 and 4-6](http://langintro.com/dev.to/hof/rate_of_change.png) In this diagram, you can see that the *y* value increases more slowly when *x* is between 0 and 2 than when *x* is between 4 and 6. 
We want to write a function that calculates the rate of change for some function *f*. If you have two points *x*<sub>1</sub> and *x*<sub>2</sub>, the formula for rate of change between those points is (*f*(*x*<sub>2</sub>)&nbsp;&ndash;&nbsp;*f*(*x*<sub>1</sub>))&nbsp;/&nbsp;(*x*<sub>2</sub>&nbsp;&ndash;&nbsp;*x*<sub>1</sub>). Here’s the rate of change function: ```reasonml let rateOfChange = (f, x1, x2) => { (f(x2) -. f(x1)) /. (x2 -. x1) }; ``` > *We are presuming here that function* `f` *returns a floating point value. ReasonML is very strict about not mixing integer and floating values; it’s so strict that it has separate arithmetic operators for each type. To add, subtract, multiply, or divide floating point numbers, you have to follow the operator with a dot.* Now let’s define the *x*^2 function and find the rates of change: ```reasonml let xSquared = (x) => {x *. x}; let rate1 = rateOfChange(xSquared, 0.0, 2.0); let rate2 = rateOfChange(xSquared, 4.0, 6.0); Js.log2("Rate of change from 0-2 is", rate1); // 2.0 Js.log2("Rate of change from 4-6 is", rate2); // 10.0 ``` It’s possible to find the rate of change for any sort of mathematical function: ```reasonml let tripleSine = (x) => {sin(3.0 *. x)}; let polynomial = (x) => { 5.0 *. x ** 3.0 +. 8.0 *. x ** 2.0 +. 4.0 *.x +. 27.0 }; let rate3 = rateOfChange(tripleSine, 0.0, Js.Math._PI /. 4.0); let rate4 = rateOfChange(polynomial, -3.0, 2.0); Js.log2("sine from 0-45 degrees:", rate3); // 0.9003... Js.log2("polynomial from -3 to 2:", rate4); // 31 ``` # Anonymous Functions There’s a way to specify short functions to pass to a HOF right on the spot without having to create a new, named function (as we have done so far). Here’s how we’d do it for the debit problem: ```reasonml let firstDebit = findFirstIndex( (item) => {item < 0.0}, balances); ``` The anonymous predicate function is `(item) => {item < 0.0}`. 
It’s exactly the same as the right-hand side of the binding starting `let isDebit =...` Similarly, we can use an anonymous function for finding rate of change of the function *x*^3 in the range 0 to 4: ```reasonml let cubeChange = rateOfChange((x) => {x ** 3.0}, 0.0, 4.0); ``` Many programmers like to use anonymous functions, and you will see a lot of them as you read other people’s ReasonML code. Should you use them too? My strong suggestion is that you don’t. * Named functions make things easier to read. `isDebit` is more immediately meaningful than having to parse `(item) => {item < 0.0}` * As you find bugs or need features, your one-line anonymous function might grow to a multi-line anonymous function. That makes the code more difficult to read, and you tend to lose the big picture. At that point, you will probably pull out the code into a separate function. So why not do it now? * Anonymous functions aren’t reusable. If you need the same function in a different place, you will have to copy it. That’s my viewpoint; your mileage may vary. # Summary Higher-order functions (HOFs) are functions that take other functions as arguments. HOFs can also return a function as their value, though we haven’t covered that aspect in this post. HOFs let you reuse code that would otherwise require writing near-duplicated code. While you might not have need to *write* HOFs yourself, you will often find yourself using them. For example, when you want to manipulate lists and arrays you’ll use the `map()`, `keep()`, and `reduce()` HOFs. But those three functions are the subject of a future post.
jdeisenberg
222,680
Ship Accessible [React] Apps
Humans with disabilities have a right to access the same wealth of knowledge and utility the internet provides able-bodied folks.
0
2019-12-17T19:01:30
https://dev.to/egghead/ship-accessible-react-apps-lbk
a11y, webdev, react, javascript
--- title: Ship Accessible [React] Apps published: true description: Humans with disabilities have a right to access the same wealth of knowledge and utility the internet provides able-bodied folks. tags: a11y, webdev, react, javascript cover_image: https://og-image-egghead-course.now.sh/develop-accessible-web-apps-with-react?v=20191209devto --- Assuming for a moment that you can see... Have you ever blindfolded yourself and tried to use your computer or phone? When was the last time you navigated your current project with a screen reader? Maybe that's somebody else's problem? 🙄 A large number of people with impairments or disabilities are unable to use the web effectively. This is not their fault. It's unacceptable to ignore this fact. As web developers, we have a responsibility to try harder. To do more. To learn what it takes to make our applications as accessible as possible. This means we need to understand the modern tools and techniques to make this happen and deliver a consistent user experience to as many folks as possible. Listen, we're not here to lecture you, but as a society, we have to come together. This isn't about wagging our fingers or posturing for internet points. Accessibility matters! **Humans with disabilities have a right to access the same wealth of knowledge and utility the internet provides able-bodied folks.** We have a *responsibility* to do the right thing and take a stand for accessible web applications. None of us are perfect, but we can choose to do better. Erin Doyle is an expert in creating accessible React applications and has put together a course for you that presents a concrete process to test and develop with accessibility in mind. After watching her new course, you’ll be prepared to audit and fix issues in your existing apps and have a better understanding of how to approach development from the perspective of your users. 
Even if you aren't building React apps right now, these concepts carry over, and the course is an exploration of common patterns and solutions for accessible modern web applications.

[![](https://res.cloudinary.com/dg3gyk0gu/image/upload/c_scale,dpr_auto,e_unsharp_mask:100,w_244/v1576543637/email-images/AccessibleReact_1000.png)](https://egghead.io/courses/develop-accessible-web-apps-with-react)

-> [Building Accessible React Apps](https://egghead.io/courses/develop-accessible-web-apps-with-react) (*this course is egghead members only, but the first couple of lessons are free to view*)

Erin and I also [recorded a podcast](https://egghead.io/podcasts/test-driven-accessibility-with-erin-doyle) on this subject!
jhooks
222,692
Top 5 DEV Comments from the Past Week
Highlighting some of the best DEV comments from the past week.
0
2019-12-17T19:51:56
https://dev.to/devteam/top-5-dev-comments-from-the-past-week-5bl8
bestofdev
---
title: Top 5 DEV Comments from the Past Week
published: true
description: Highlighting some of the best DEV comments from the past week.
tags: bestofdev
cover_image: https://res.cloudinary.com/practicaldev/image/fetch/s--7VrAA2ln--/c_imagga_scale,f_auto,fl_progressive,h_420,q_auto,w_1000/https://thepracticaldev.s3.amazonaws.com/i/qmb5wkoeywj06pd7p8ku.png
---

This is a weekly roundup of awesome DEV comments that you may have missed. You are welcome and encouraged to boost posts and comments yourself using the **[#bestofdev](/t/bestofdev)** tag.

The discussion of **[How many programming languages do you know?](https://dev.to/ben/how-many-programming-languages-do-you-know-50op)** generated two particularly great replies. First, @deciduously offers a good-hearted response:

{% devcomment j1n5 %}

@kenbellows follows up later in the thread with three "themes" across different programming languages:

{% devcomment j213 %}

@isaacdlyman offers a thoughtful response to **[What’s the most under-appreciated software?](https://dev.to/ben/what-s-the-most-under-appreciated-software-ce4)**:

{% devcomment in00 %}

@thatonejakeb adds their suggestion to **[What's the best thing to do when you've run into a debugging dead end?](https://dev.to/ben/what-s-the-best-thing-to-do-when-you-ve-run-into-a-debugging-dead-end-39jg)**:

{% devcomment j3d5 %}

@samuraiseoul chimed in to the **[9 Extremely Useful HTML Tricks](https://dev.to/razgandeanu/9-extremely-useful-html-tricks-463a)** thread to talk more about the datalist tag, and to offer a cool trick of their own:

{% devcomment io27 %}

See you next week for more great comments ✌
peter
222,694
Guide: Rails Development with Docker
Originally posted on Hint's blog. As a software consultancy, we switch between many projects through...
0
2019-12-17T20:18:32
https://dev.to/hint/rails-development-with-docker-13np
rails, docker, showdev
*Originally posted on [Hint's blog](https://hint.io/blog/rails-development-with-docker).*

As a software consultancy, we switch between many projects throughout the year. A critical factor in delivering value is the ease with which we are able to move between projects. Over the years, we have used many tools to manage dependencies needed to run and develop our clients' projects. The problem with most tools has been maintaining consistent, reproducible development environments across our team.

About two years ago, we discovered that Docker was a viable option for building consistent development environments. Since then, we have continued to iterate on our configuration as we learn new ways to handle the complexity of the projects while simplifying the setup process for our team. In this guide, we will cover the basics of our Docker development environment for Rails.

## Getting started

If you would like to follow along, [install Docker CE](https://hub.docker.com/search/?type=edition&offering=community) and create, [clone](https://github.com/hintmedia/base-rails-app), or have a working Rails app.

We will be using a combination of Dockerfiles, Docker Compose, and bash scripts throughout this guide, so let's make a place for most of those files to live. Start by creating a `docker` folder in the root of the Rails project. Here we will store Dockerfiles and bash scripts to be referenced from the `docker-compose.yml` file we will be creating later.

Inside the `docker` folder, create another folder named `ruby`. Inside the newly created `ruby` folder, create a file named `Dockerfile`. This file will contain commands to build a custom image for your Rails app.
![initial Docker directories](https://thepracticaldev.s3.amazonaws.com/i/6gzhijjsx37uumq7cy61.png) ## Hello Dockerfile ```bash ARG RUBY_VERSION=2.6 FROM ruby:$RUBY_VERSION ARG DEBIAN_FRONTEND=noninteractive ``` In this block, we set the `RUBY_VERSION` and `DEBIAN_FRONTEND` build arguments and specify the docker image we will use as our base image. The first `ARG` sets a default value for `RUBY_VERSION`, but passing in a value from the command line or a `docker-compose` file will override it, as you'll see later in the guide. An `ARG` defined before `FROM` is only available for use in `FROM`. `FROM` in a `Dockerfile` is the base image for building our image and the start of the build stage. In our case, we are using the official Ruby image from Docker Hub, defaulting to Ruby 2.6. The next `ARG` is used to set the build stage shell to `noninteractive` mode, which is the general expectation during the build process. We don't want this environment variable to carry over to when we are using the images in our development environment, which is why we are using `ARG` instead of `ENV`. ### Base Software Install ```bash ARG NODE_VERSION=11 RUN curl -sL https://deb.nodesource.com/setup_$NODE_VERSION.x | bash - RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - && \ echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list RUN apt-get update && apt-get install -y \ build-essential \ nodejs \ yarn \ locales \ git \ netcat \ vim \ sudo ``` Here we are setting up the necessary software and tools for running a modern Rails app. Like in the previous section, we are setting a default for our `NODE_VERSION` build argument. The next two `RUN` lines set up the defined version of the node apt repository and the latest stable yarn apt repo. Since Webpacker started being officially supported in Rails 5.1, it is important we have a recent version of node and yarn available in the image we will be running Rails on. 
The third `RUN` will look familiar if you have used any Debian based OS. It updates the apt repositories, which is important since we just installed two new ones, and then it installs our base software and tools. I want to point out `netcat` and `sudo` specifically. `netcat` is a networking tool we will use to verify the other services are up when we are bringing up our Rails app through Docker Compose. We install `sudo` since by default it is not installed on the Debian based Docker images, and we will be using a non-root user in our Docker image. ### Non-root User ```bash ARG UID ENV UID $UID ARG GID ENV GID $GID ARG USER=ruby ENV USER $USER RUN groupadd -g $GID $USER && \ useradd -u $UID -g $USER -m $USER && \ usermod -p "*" $USER && \ usermod -aG sudo $USER && \ echo "$USER ALL=NOPASSWD: ALL" >> /etc/sudoers.d/50-$USER ``` Docker containers generally use the root user, which is not inherently bad, but is problematic for file permissions in a development environment. Our solution for this is to create a non-root user and pass in our `UID`, `GID`, and username as build arguments. If we pass in our `UID` and `GID`, all files created or modified by the user in the container will share the same permissions as our user on the host machine. You will notice here that we use the build arguments to set the environment variable (via `ENV`) since we will also want these variables available when we bring up the container. In the `RUN` instruction we add a standard Linux user/group and then we add the new user to the `sudoers` file with no password. This gives us all the benefits of running as root while keeping file permissions correct. 
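As an aside (my addition, not part of the original guide): the `UID` and `GID` build arguments are simply your host user's IDs, normally obtained with `id -u` and `id -g`. If you prefer to script the lookup, the same values are available from Python's standard library:

```python
import os

# The values to pass as the UID and GID build arguments, so files
# created inside the container stay owned by your host user.
uid = os.getuid()
gid = os.getgid()
print(f"UID={uid} GID={gid}")
```

Passing these in at build time, rather than hard-coding them, is what keeps bind-mounted files writable on both sides of the container boundary.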
### Ruby, RubyGems, and Bundler Defaults ```bash ENV LANG C.UTF-8 ENV BUNDLE_PATH /gems ENV BUNDLE_HOME /gems ARG BUNDLE_JOBS=20 ENV BUNDLE_JOBS $BUNDLE_JOBS ARG BUNDLE_RETRY=5 ENV BUNDLE_RETRY $BUNDLE_RETRY ENV GEM_HOME /gems ENV GEM_PATH /gems ENV PATH /gems/bin:$PATH ``` Explicitly setting the `LANG` environment variable specifies the fallback locale setting for the image. `UTF-8` is a sane fallback and is what locale defaults to when it's working properly. We will be using a `volume` with Compose, so we need to point RubyGems and Bundler to where that `volume` will mount in the file system. We also set the gem executables in the path and set some defaults for Bundler, which are configurable via build arguments. ### Optional Software Install ```bash #----------------- # Postgres Client: #----------------- ARG INSTALL_PG_CLIENT=false RUN if [ "$INSTALL_PG_CLIENT" = true ]; then \ apt-get install -y postgresql-client \ ;fi ``` Here is an example of how to set up optional software installs in the Dockerfile. In this scenario, `postgresql-client` will not be installed by default, but will be installed if we pass a build argument set to true (`INSTALL_PG_CLIENT=true`). ### Dockerfile Final Touches ```bash RUN mkdir -p "$GEM_HOME" && chown $USER:$USER "$GEM_HOME" RUN mkdir -p /app && chown $USER:$USER /app WORKDIR /app RUN mkdir -p node_modules && chown $USER:$USER node_modules RUN mkdir -p public/packs && chown $USER:$USER public/packs RUN mkdir -p tmp/cache && chown $USER:$USER tmp/cache USER $USER RUN gem install bundler ``` To wrap up the `Dockerfile`, we create and set permissions on needed directories that were referenced previously in the file or that we will be setting up as volumes with Docker Compose. We call `USER $USER` here at the end of the file, which will set the user for the image when you boot it as a container. The last `RUN` command installs Bundler, which may not be required depending on the version of Ruby. 
Next, we'll take a look at our `docker-compose.yml` file.

## Docker Compose

Docker's definition of Compose is a great place to start.

> Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application's services. Then, with a single command, you create and start all the services from your configuration.

That is what we are going to be doing in this section of the guide, adding support for a multi-container Docker application. The first thing we will need is a `docker-compose.yml` file at the root of the Rails app. If you would like to copy and paste the following YAML into a `docker-compose.yml` file, I'll break down its parts below.

```yaml
version: '3.7'

services:
  rails:
    build:
      context: ./docker/ruby
      args:
        - RUBY_VERSION=2.6
        - BUNDLE_JOBS=15
        - BUNDLE_RETRY=2
        - NODE_VERSION=12
        - INSTALL_PG_CLIENT=true
        - UID=500
        - GID=500
    environment:
      - DATABASE_USER=postgres
      - DATABASE_HOST=postgres
    command: bundle exec rails server -p 3000 -b '0.0.0.0'
    entrypoint: docker/ruby/entrypoint.sh
    volumes:
      - .:/app:cached
      - gems:/gems
      - node_modules:/app/node_modules
      - packs:/app/public/packs
      - rails_cache:/app/tmp/cache
    ports:
      - "3000:3000"
    user: ruby
    tty: true
    stdin_open: true
    depends_on:
      - postgres

  postgres:
    image: postgres:11
    environment:
      - POSTGRES_HOST_AUTH_METHOD=trust
    volumes:
      - postgres:/var/lib/postgresql/data

volumes:
  gems:
  postgres:
  node_modules:
  packs:
  rails_cache:
```

In this file, we are defining two services: the `rails` service, which our Rails app will run in, and the `postgres` service, which will accommodate PostgreSQL. The names for these services are arbitrary and could easily be `foo` and `bar`, but since the service names are used for building, starting, stopping, and networking, I would recommend naming them close to the actual service they are running. We also are setting our Compose file compatibility to `version: '3.7'`, which is the latest at the time of this writing.
### The App Service Broken Down

```yaml
version: '3.7'

services:
  rails:
    build:
      context: ./docker/ruby
      args:
        - RUBY_VERSION=2.6
        - BUNDLE_JOBS=15
        - BUNDLE_RETRY=2
        - NODE_VERSION=12
        - INSTALL_PG_CLIENT=true
        - UID=500
        - GID=500
```

The first part of the `rails` service sets up the build of the image from the `Dockerfile` we created earlier. Once built, it uses the local image from that point forward. We set the `context`, which is the directory path where our `Dockerfile` is stored. The `args:` key specifies the build arguments we set up in the `Dockerfile` earlier in the guide. Notice that we are overriding some of our earlier defaults here.

```yaml
services:
  rails:
    . . .
    environment:
      - DATABASE_USER=postgres
      - DATABASE_HOST=postgres
    command: bundle exec rails server -p 3000 -b '0.0.0.0'
    entrypoint: docker/ruby/entrypoint.sh
    volumes:
      - .:/app:cached
      - gems:/gems
      - node_modules:/app/node_modules
      - packs:/app/public/packs
      - rails_cache:/app/tmp/cache
    ports:
      - "3000:3000"
```

In the next part, we set up `environment` variables for connecting to the `postgres` service, which will require updating your `database.yml` file ([example](https://gist.github.com/nvick/c4e80964c7a7990b3bf4387de2f2b5b6)) to take advantage of these variables. We also set the default `command` that will run when `docker-compose up` runs, which in this case starts the Rails server on port 3000 and binds it to all IP addresses. Next, we point to our `entrypoint` script, which we'll revisit after breaking down the rest of the Compose file.

After the `entrypoint`, we set up `volumes`. The first volume mounts the current directory (`.`) to `/app` in the container. The mapping for this one is `HOST:CONTAINER`. If you are on Mac, I would recommend using `.:/app:cached` since file sharing on Docker for Mac is CPU bound. Setting `:cached` on the mount point allows files to be out of sync, with the host being authoritative.
[Here are more details](https://docs.docker.com/docker-for-mac/osxfs-caching/) about file sharing performance on Docker for Mac. In our testing, `:cached` has been the best option when weighing speed against tradeoffs.

The next volumes are named volumes. They are created when we bring up the containers for the first time and then persist across `up` and `down` cycles. These volumes are native to the Docker environment, so they operate at native speeds. The persistence and speed are why we have chosen to use them for `gems`, `node_modules`, `packs`, and `rails_cache` storage; otherwise, we would have to reinstall gems and node modules every time we bring our environment back up. Lastly, there is a top-level `volumes` key at the very bottom of the `docker-compose.yml` file, which is where they are defined.

The last key in this section is `ports`. Ports **only** need to be defined to access the containers from the host; otherwise, container-to-container communication happens on the internal Docker network, generally using service names. The mapping `"3000:3000"` allows us to connect to the Rails server at [localhost:3000](http://localhost:3000). The mapping is `HOST:CONTAINER`, just like the volume mount, and it is recommended to pass ports in as strings because YAML parses numbers in `xx:yy` format as base-60.

```yaml
services:
  rails:
    . . .
    user: ruby
    tty: true
    stdin_open: true
    depends_on:
      - postgres
```

This last section explicitly sets the `user` to `ruby`, which was set up in our `Dockerfile` above. The next two keys, `tty` and `stdin_open`, are used for debugging with `binding.pry` while hitting the Rails server. [Check out this gist](https://gist.github.com/nvick/b4458670f9395de7d19a8e2d06894d9b) for more info. One thing to call out from that gist is to use **ctrl-p + ctrl-q** to detach from the Docker container and leave it in a running state. The `depends_on` key is used to declare other containers that are required to start with the service. Currently, it is only dependent on `postgres`.
No other health checks or validations are run; we will handle those in the `entrypoint`. ### The Postgres Service ```yaml services: rails: . . . postgres: image: postgres:11 environment: - POSTGRES_HOST_AUTH_METHOD=trust volumes: - postgres:/var/lib/postgresql/data ``` We define our `postgres` service next with pretty minimal configuration compared to what we went through for the `rails` service. The first key, `image`, will check locally first, then download the official postgres image from Docker Hub with a matching tag if needed. In this case, it will pull in `11.5`. I chose 11 for this example because that is now the default on Heroku, but there are lots of image options available on Docker Hub. We are using a named volume here as well for our `postgres` data. And, that wraps up the Compose file. ## Enter The Entrypoint There are a few requirements for starting a Rails server, e.g., the database running and accepting connections. We use the entrypoint, which is a bash script, to fulfill those requirements. Looking back at the compose file we should create this bash script at docker/ruby/entrypoint.sh and make it executable. ```bash #! /bin/bash set -e : ${APP_PATH:="/app"} : ${APP_TEMP_PATH:="$APP_PATH/tmp"} : ${APP_SETUP_LOCK:="$APP_TEMP_PATH/setup.lock"} : ${APP_SETUP_WAIT:="5"} # 1: Define the functions to lock and unlock our app container's setup # processes: function lock_setup { mkdir -p $APP_TEMP_PATH && touch $APP_SETUP_LOCK; } function unlock_setup { rm -rf $APP_SETUP_LOCK; } function wait_setup { echo "Waiting for app setup to finish..."; sleep $APP_SETUP_WAIT; } # 2: 'Unlock' the setup process if the script exits prematurely: trap unlock_setup HUP INT QUIT KILL TERM EXIT # 3: Wait for postgres to come up echo "DB is not ready, sleeping..." until nc -vz postgres 5432 &>/dev/null; do sleep 1 done echo "DB is ready, starting Rails." 
# 4: Specify a default command, in case it wasn't issued: if [ -z "$1" ]; then set -- bundle exec rails server -p 3000 -b 0.0.0.0 "$@"; fi # 5: Run the checks only if the app code is executed: if [[ "$3" = "rails" ]] then # Clean up any orphaned lock file unlock_setup # 6: Wait until the setup 'lock' file no longer exists: while [ -f $APP_SETUP_LOCK ]; do wait_setup; done # 7: 'Lock' the setup process, to prevent a race condition when the # project's app containers will try to install gems and set up the # database concurrently: lock_setup # 8: Check if dependencies need to be installed and install them bundle install yarn install # 9: Run migrations or set up the database if it doesn't exist # Rails >= 6 bundle exec rails db:prepare # Rails < 6 # bundle exec rake db:migrate 2>/dev/null || bundle exec rake db:setup # 10: 'Unlock' the setup process: unlock_setup # 11: If the command to execute is 'rails server', then we must remove any # pid file present. Suddenly killing and removing app containers might leave # this file, and prevent rails from starting-up if present: if [[ "$4" = "s" || "$4" = "server" ]]; then rm -rf /app/tmp/pids/server.pid; fi fi # 12: Replace the shell with the given command: exec "$@" ``` I will only call out a few things specific to working with Rails from this file. Below, we are using `nc`([netcat](https://www.digitalocean.com/community/tutorials/how-to-use-netcat-to-establish-and-test-tcp-and-udp-connections-on-a-vps#how-to-use-netcat-for-port-scanning)) to verify that `postgres` is up before running any database commands. `nc` doesn't just ping the server; it is checking that the service is responding on a specific port. This check runs every second until it boots. **Note:** We are using the service name `postgres` to connect to the container. ```bash echo "DB is not ready, sleeping..." until nc -vz postgres 5432 &>/dev/null; do sleep 1 done echo "DB is ready, starting Rails." 
```

Next, we run our bundle and yarn install commands to make sure we have the latest dependencies. Once both of those have run, we will use the new Rails 6 `db:prepare` task to either set up the database or run migrations. This can be handled in many different ways, but my preference is to use Rails tools. (Note: the comment shows how you would do it on Rails 5 or older.)

```bash
bundle install
yarn install

# Rails >= 6
bundle exec rails db:prepare

# Rails < 6
# bundle exec rake db:migrate 2>/dev/null || bundle exec rake db:setup
```

Lastly, we check for any dangling PID file left from killing and removing the app containers. If a PID file is left behind, our `rails server` command will fail.

```bash
if [[ "$4" = "s" || "$4" = "server" ]]; then rm -rf /app/tmp/pids/server.pid; fi
```

## Start It Up

With all of this in place, we run `docker-compose up`, which will build and start our containers. It is also a great time to get coffee because building the app container will take some time. Once everything is built and up, point your browser at [localhost:3000](http://localhost:3000) to see your homepage, or, if this is a new app, the default Rails homepage. If you need to run specs or any other Rails command, running `docker-compose exec rails bash` in a separate terminal will launch a Bash session on the running `rails` container.

## Wrapping Up

We now have a base-level Rails Docker development environment. [Here is a gist](https://gist.github.com/nvick/96c22b18b6cb31fa458fafba32fa000f) of all three files referenced in this guide. Now that you have an understanding of how to set up a Docker development environment, check out [Railsdock](https://github.com/hintmedia/railsdock)! It is a CLI tool we are working on that will generate most of this config for you. PRs appreciated!

In the next part of the series, we will set up more services and use Compose features to share config between similar services.
natevick
222,735
Charming the Python: RegEx
If coding tutorials with math examples are the bane of your existence, keep reading. This series uses...
3,374
2020-01-10T20:15:33
https://dev.to/vickilanger/charming-the-python-regex-2mf
python, beginners
*If coding tutorials with math examples are the bane of your existence, keep reading. This series uses relatable examples like dogs and cats.*

---

## Regular Expression (RegEx)

Regular Expression is often referred to as RegEx or regex. Regex is used to search for and find patterns. Regular expressions are a sequence of characters used to define search patterns. If you've ever used `ctrl+f` to `find` or `find + replace`, you've used regex.

Take a look at this screenshot. I have searched for every instance of ' dog' in the file. Including the space at the beginning made sure we only got words that start with 'dog'. Everything after the search sequence is ignored, since I wasn't interested in it. In this case, I could use `replace` to swap in 'canine' or 'snuggle bug' for 'dog'.

![a short paragraph about dogs and using the find and replace option to find and highlight all instances of the sequence ' dog'](https://thepracticaldev.s3.amazonaws.com/i/z88q2whs3iqefnbjlien.png)

---

To use RegEx, start by importing the `re` module

```python
import re
```

### Common Functions

There are quite a few functions in the `re` module. I'll dig into a few of the more common ones.

#### re.match()

Returns only the first thing that matches at the beginning of the first line of a string

```python
# syntax
re.match(pattern, string, flags=0)

# example
>>> import re
>>> dating_site_hobbies = "long walks on the beach and dogs"
>>> match = re.match("long walks", dating_site_hobbies, re.I) # re.I makes the search case insensitive
>>> print(match)
<re.Match object; span=(0, 10), match='long walks'>

# fails because it's not the beginning of the string
>>> match_fail = re.match("dogs", dating_site_hobbies, re.I)
>>> print(match_fail)
None
```

In this case, if I were searching for similar likes on a dating site, I wouldn't care if the word was the first thing in the string of hobbies.
#### re.search()

Returns anything that matches, no matter where it is in a string

```python
# syntax
re.search(pattern, string, flags=0)

# example
>>> import re
>>> dating_site_hobbies = "long walks on the beach and dogs"
>>> search = re.search("dogs", dating_site_hobbies, re.I) # re.I makes the search case insensitive
>>> print(search)
<re.Match object; span=(28, 32), match='dogs'>
# span tells us which characters the search term spans

# fails because the search term isn't in the hobbies
>>> search_fail = re.search("cats", dating_site_hobbies, re.I)
>>> print(search_fail)
None
```

#### re.findall()

Returns a list with all matches

```python
# syntax
re.findall(pattern, string, flags=0)

# example
>>> import re
>>> research_paper_submission = "all the things that were written down to show that I know what I was learning in school"
>>> matches = re.findall("that", research_paper_submission, re.I) # re.I makes the search case insensitive
>>> print(matches)
['that', 'that']

# fails because the search term isn't in the paper
>>> matches_fail = re.findall("dog", research_paper_submission, re.I)
>>> print(matches_fail)
[] # prints an empty list because the paper wasn't about dogs
```

#### re.sub()

Replaces one or many matches with a string

Like `ctrl+f` then `replace`

In school, I had a professor suggest I eliminate the use of the word 'that'. To do this after I've written my paper, I could use the following code.
```python
# syntax
re.sub(pattern, repl, string, count=0, flags=0)

# example
>>> import re
>>> research_paper_submission = "the things that were written down to show that I know what I was learning at school"
>>> matches = re.sub(' that', '', research_paper_submission)
>>> print(matches)
the things were written down to show I know what I was learning at school
```

### MetaCharacters

(adapted from [Programiz](https://www.programiz.com/python-programming/regex))

| MetaCharacter | What it does | Example |
|--|--|--|
| `[]` | matches any one character in the set | `[act]` matches 'cat' and 'tack', but not 'dog' |
| `.` | wildcard | `...` matches 'dog' and 'mouse', but not 'on' |
| `^` | starts with | `^cat` matches 'catastrophic' but not 'locate' |
| `$` | ends with | `cat$` matches 'bobcat' but not 'catch' |
| `\|` | or | `r\|b` matches 'bat' and 'rat' but not 'cat' |

I highly suggest [RegexOne](https://regexone.com) for practicing your understanding of regex.

---

Some reference stuff

---

{% twitter 1163584420986662913 %}

{% post https://dev.to/bachnxhedspi/verbalexpressions---regularexpression-made-easy-27a8 %}

As always, refer to the Python docs for [more detailed information](https://docs.python.org/3/library/re.html)

Here's a [cheat sheet worth bookmarking](https://www.debuggex.com/cheatsheet/regex/python) and another [thing worth looking at](http://www.pyregex.com).

---

Series loosely based on

{% post https://dev.to/asabeneh/30daysofpyton-challenge-43ii %}
vickilanger
222,766
Tips ORM Laravel that can help you
The Eloquent ORM Laravel is too difference ORM that all others Framework that I used, he really help...
0
2019-12-18T01:50:10
https://dev.to/diogom/tips-orm-laravel-that-helps-you-most-5apn
Laravel's Eloquent ORM is quite different from the ORMs in the other frameworks I've used, and it really helps us work productively. Let's go through some tips I've put together for you.

### 1. Order by

```php
$items = Item::orderBy('created_at', 'DESC')->get();
```

### 2. Where

```php
$items = Item::where('category_id', $id)->orderBy('created_at', 'DESC')->get();
```

### 3. Where month (Ex.: January)

```php
$items = Item::whereMonth('created_at', '1')->get();
```

### 4. Where year

```php
$items = Item::whereYear('created_at', '1999')->get();
```

### 5. Where between

```php
$users = DB::table('users')->whereBetween('votes', [1, 100])->get();
```

You can also use aggregates such as _sum, max, min, count, avg_, for example:

### 6. Aggregate

```php
$users = DB::table('users')->count();
```

Sometimes you may not want to select all columns, so, using the _select_ method, you can specify the columns you need.

### 7. Select specific columns

```php
$users = DB::table('users')->select('name', 'email as user_email')->get();
```

### 8. Select random row

```php
$randomUser = DB::table('users')
    ->inRandomOrder()
    ->first();
```

### 9. Union queries

```php
$first = DB::table('users')
    ->whereNull('first_name');

$users = DB::table('users')
    ->whereNull('last_name')
    ->union($first)
    ->get();
```

### 10. Join tables

```php
DB::table('users')
    ->join('contacts', function ($join) {
        $join->on('users.id', '=', 'contacts.user_id');
    })
    ->get();
```

That's it! I hope you liked these ten Eloquent tips. If you want to read more on this topic, check out the Laravel documentation:

- https://laravel.com/docs/5.8/queries
- https://laravel.com/docs/5.8/eloquent

See you later!
diogom
222,844
Posting Nested Resources to Your Rails API
For the fourth project of the Flatiron School curriculum, I had to build an app that used a Ruby on R...
0
2019-12-18T01:41:18
https://dev.to/lberge17/posting-nested-resources-to-your-rails-api-he8
ruby, rails, beginners
For the fourth project of the Flatiron School curriculum, I had to build an app that used a Ruby on Rails API backend and a vanilla Javascript frontend. The API needed to have at least one has-many relationship. The frontend also needed to make at least one other type of request than a 'GET.' I chose to use a POST request, so I found myself needing to figure out how to make a POST request to my API along with Rails' `accepts_nested_attributes`. We hadn't learned the standard way to combine these two in the curriculum, so I read through a few blogs to see what others had done.

A little disclaimer: I am not sure if this is considered the best practice for putting a fetch request together with Rails `nested_attributes`. This is just the way I coded my app to make the request successful.

<h2>The Backend</h2>

I started with the API since I have more experience in Ruby at this point. I began by adding `accepts_nested_attributes_for` to my primary model like so...

```ruby
# models/vacation.rb
class Vacation < ApplicationRecord
  has_many :stays
  accepts_nested_attributes_for :stays
end
```

Then all that's left is to include those attributes in the controller's white-listed params. In a normal Rails app, `form_with` sends through the secondary model's `fields_for` parts of the form under the key `modelname_attributes`.
```ruby
# controllers/vacations_controller.rb
class VacationsController < ApplicationController

  def vacation_params
    # in the array value, declare all the attributes of the secondary model you need
    params.fetch(:vacation, {}).permit(:city, :state, :country, :etc, stays_attributes: [:name, :address, :etc])
  end
end
```

<h2>The Frontend</h2>

While constructing the data being sent in the fetch request, I knew the API was expecting to receive data about a new vacation in the format of:

```js
const data = {
  city: 'NYC',
  state: 'NY',
  country: 'USA',
  etc: 'more info',
  stays_attributes: [{
      name: 'The Hilton',
      address: '21367 Some St',
      etc: 'more data'
    },
    {
      name: 'Holiday Inn',
      address: '35261 Some St',
      etc: 'more data'
    }
  ]
}
```

So, that is how I formatted my data from the HTML form. Once I had grabbed all the values from the HTML and formatted the object as such, I passed that object into the body of the fetch request.

```js
const configObject = {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "Accept": "application/json"
  },
  body: JSON.stringify(data)
};

fetch(url, configObject)
  .then(resp => resp.json())
  .then(json => console.log(json))
  .catch(error => alert(error.message));
```

<h3>Aside to Myself</h3>

Looking at how little code this all requires, it seems very simple to me now, but it definitely took me a bit of time and blog reading to figure out. One of the most powerful things I've learned from my journey thus far is how simple something that at first seemed impossible to comprehend can become. That realization has made me a lot kinder to myself as a beginner and learner, as well as more understanding of others who don't yet understand something I might find self-explanatory. I'm now only four months into learning to code, and at the beginning, writing a simple line of HTML seemed so complex. Now basic HTML seems almost like second nature. Hopefully, soon Javascript will begin to feel simple--or at least more readable--as well.
lberge17
222,864
VS Code for Haskell in 2020
01 – VS Code for Haskell in 2020 02 – Your first Stack project (build, test, dist) UPD So, for now,...
0
2020-01-20T05:35:30
https://dev.to/egregors/vscode-for-haskell-in-2020-5dn8
haskell, vscode
01 – VS Code for Haskell in 2020 02 – [Your first Stack project (build, test, dist)](https://dev.to/egregors/your-first-stack-project-build-test-dist-10fa) **UPD** So, for now, you don't need any of the mess below! Thanks to the folks making `hie-server`, you can finally just download the VS Code plugin and that's all: please check this out: https://mpickering.github.io/ide/posts/2020-07-10-ghc-libdir.html Looks like `vscode-hie-server` will be hosted under the Haskell organization, and it's just working :D _I have left this guide here, just to remember how tough it was._ **Outdated** 👇 If you're trying to find some Haskell IDE guides, most of them will probably be about `emacs` and `haskell-mode`. But in that case, you will need to deal with `emacs` first. And if you have no experience with this kind of IDE it will take a while. Also, you might wanna fool around with Haskell on Windows with the same "good enough" experience as on macOS or Linux. That's why I believe `vscode` with a few very useful extensions will be a better solution. So this little guide lets you get absolutely the same Haskell programming experience, independently of your OS. ## TL;DR * get [vscode](https://code.visualstudio.com/) and [stack](https://haskellstack.org) * Install stack dependencies: `stack install intero QuickCheck hlint brittany ghcid` * add `$HOME/.local/bin` to your $PATH * Install VS Code extensions (IDE, linter, formatter, hoogle). You should have [code](https://code.visualstudio.com/docs/setup/mac#_launching-from-the-command-line) in your $PATH: `code \ --install-extension ucl.haskelly \ --install-extension hoovercj.haskell-linter \ --install-extension maxgabriel.brittany \ --install-extension jcanero.hoogle-vscode ` * restart vscode ... profit! ## Setup with details ### Haskell Platform | Stack First of all, you should install Haskell and a few tools. I recommend using the [stack](https://www.haskellstack.org) installer for this purpose. 
Stack is a kind of package manager for Haskell. Like [cabal](https://www.haskell.org/cabal/), but [not quite](https://stackoverflow.com/a/30922706). It will install GHC automatically as well. It will also be a useful tool for working with isolated dependencies, the `ghci` REPL, and building and testing your projects. The simplest way is just to execute the install script from the official site on a unix-like OS: ``` curl -sSL https://get.haskellstack.org/ | sh ``` or download the [installer for Шindows](https://get.haskellstack.org/stable/windows-x86_64-installer.exe) ### Haskelly Next, we'll install one of the popular vscode [extensions](https://marketplace.visualstudio.com/items?itemName=UCL.haskelly) for Haskell. It contains: * highlighting * snippets * hovers * jump to definition * find references * code completion * integrated REPL * Build, Test and Run commands Besides the vscode extension, you should install a few Haskell packages (don't forget to add `$HOME/.local/bin` to your $PATH): ``` stack install intero QuickCheck # for a global installation stack build intero QuickCheck # for a local installation ``` I especially like the `Load GHCi` feature. When you're starting with Haskell you'll spend a lot of time in GHCi, and eventually, it becomes a kind of habit. You know, you are writing a beautiful function in the context of your current file, and you wanna test it within context. `Load GHCi` starts GHCi and automatically loads all the stuff from your current file. And you get a useful way of playing around with your functions. Make some changes, save the file, and just type `:r` to reload the new version into GHCi. We are almost there. This is the minimum setup for pleasant Haskell development, but we'll make it even better. Much better! ### Hlint `hlint` is an absolute must-have for any beginner or intermediate Haskell developer. This is the linter that teaches you Haskell even better than books! From the docs: `HLint is a tool for suggesting possible improvements to Haskell code. 
These suggestions include ideas such as using alternative functions, simplifying code and spotting redundancies.` Install it with `stack`: ``` stack install hlint ``` Next, we need to install the vscode integration for `hlint`: [vscode-haskell-linter](https://github.com/hoovercj/vscode-haskell-linter). After that, `hlint` will suggest some code improvements. But keep in mind, sometimes `hlint` may be kind of rude :3 ![use head](https://s3-eu-west-1.amazonaws.com/egregors.com/devto/hlint-use-head.png) ### Brittany Next one. Indentation is always a difficult topic in Haskell. That's why you definitely need some code formatting tool. Personally, I prefer [brittany](https://github.com/lspitzner/brittany) ``` stack install brittany ``` And [brittany-vscode-extension](https://github.com/MaxGabriel/brittany-vscode-extension) as well. This tool can be set as the default "Format Document" command or shortcut. ### ghcid It does not look like a big deal but in fact, this tool becomes an absolutely [necessary thing](https://github.com/ndmitchell/ghcid) for a smooth workflow. `To a first approximation, it opens ghci and runs :reload whenever your source code changes, formatting the output to fit a fixed height console.` So, usually, I split the window into three parts: source code, GHCi with the current code loaded, and ghcid as a realtime indicator that all is good. ![flow](https://s3-eu-west-1.amazonaws.com/egregors.com/devto/flow.png) To run `ghcid` you should use the `stack exec ghcid YOUR_FILE_NAME.hs` command ### Hoogle Just a shortcut for [Hoogle](https://hoogle.haskell.org) search. All of this should be enough for comfortable Haskell learning/development, independently of your platform
egregors
222,880
Seeking any help/resources for MacOS logging using Splunk
We need to index system logs from about 100 Macs using Splunk. I have more experience with iOS mobile...
0
2019-12-18T04:40:14
https://dev.to/skyandsand/seeking-any-help-resources-for-macos-logging-using-splunk-487g
security
We need to index system logs from about 100 Macs using Splunk. I have more experience with iOS mobile device management than with logging on the Mac. If anyone has any pointers I'll post updates here. Thanks! Update (January 2020): Apple has an entirely new binary database format for logging in their OS. This prevents other parties (like Splunk) from reading logs, and the daily log can exceed several GB in size with 20 million log entries! Solution: script tasks using the native `log` utility to extract the logs you need. I'm still not able to get this in a human-readable format but slow progress is better than none I suppose. We will only be able to use bash scripts so if anyone has a hobby of working with bash on Mac I'm all ears🤗
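For anyone in the same spot, here is a hedged sketch of the kind of bash task I mean. The `log show` flags come from the macOS man page (`--style syslog` is what I'm currently trying for readable output); the helper function, time window, and output path are placeholders of mine, not a working Splunk setup:

```bash
# Hypothetical sketch (macOS only): dump recent unified logs in a
# syslog-like, more human-readable style for Splunk to pick up.
build_log_cmd() {
  # build the command string so the time window is easy to tweak
  echo "log show --last $1 --style syslog"
}

CMD=$(build_log_cmd 1h)
echo "$CMD"   # → log show --last 1h --style syslog
# On a Mac you would then run: $CMD > /var/tmp/unified_logs.txt
```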
skyandsand
223,039
Frameworks can be frustrating sometimes
Frameworks, like wordpress, joomla, cake php, spring, cordova and so on, can really be a pain in the...
0
2019-12-18T12:41:01
https://dev.to/webmaster_chuks/frameworks-can-be-frustrating-sometimes-16hi
wordpress, joomla, cakephp, cordova
Frameworks, like wordpress, joomla, cake php, spring, cordova and so on, can really be a pain in the ass. They don't give you enough flexibility to adjust what you need to. You are typically at the mercy of their community. You always need to work within the frame, unless you are a good programmer who can study the files and make code changes to your taste. The whole idea of a framework should be speed, organisation and FLEXIBILITY. It would be nice to get some frameworks that allow developers to customize the code easily to their taste, without having to send a series of mails to support or check countless files to trace where each function came from.
webmaster_chuks
223,118
How to Use GitHub Pages
How to Use GitHub Pages GitHub Pages can serve static content for free using your GitHub a...
0
2019-12-18T15:18:38
https://dev.to/brunodrugowick/how-to-use-github-pages-j7d
github, jekyll, ruby
# How to Use GitHub Pages GitHub Pages can serve static content for free using your GitHub account. I use Jekyll to theme and blog on [drugo.dev](https://drugowick.dev) (and the irony is that I'm moving to [dev.to](https://dev.to) now - more on that later). Whenever you commit to your GitHub Pages repository, [Jekyll](https://jekyllrb.com/) runs to rebuild the pages in your site, from the content in your Markdown (or HTML) files. ## Quick Start For free GitHub accounts, create a public repository called `<username>.github.io`. The `<username>` must be your GitHub username. Create an `index.md` file with the following content: ``` # Hello World ``` Now your website is up and running at `https://<username>.github.io`. ## Theming You can theme GitHub pages simply by going to your repository Settings page and selecting a theme using the Theme Chooser on the GitHub Pages section. What this does is create a file on the root of your project... so, if you create the file yourself, you're done. Follow the steps: 1. Create a file named `_config.yml` on the root of your GitHub Pages repository. 2. Add the following to the file: ``` theme: jekyll-theme-cayman ``` 3. Commit the file and push it to GitHub. You're themed! If you want to use other themes, you can either go to your repository Settings page or [learn more here](https://help.github.com/en/articles/adding-a-jekyll-theme-to-your-github-pages-site). ## Blog Posts ### Create a file Blog posts are files with the following name convention under `_posts` directory: ``` YEAR-MONTH-DAY-title.md Example: 2019-04-24-HelloWorld.md ``` ### Add a header The following header must be added to every blog post, although you might want to customize title and other things. It's up to you, really. The HOME link for example is totally a personal choice. ``` --- layout: default title: Hello World --- [HOME]({{ site.url }}) ``` You then start writing your post using Markdown syntax. 
## Developing locally You don't want to edit everything in your web browser, do you? Especially after you start to learn [everything Jekyll can do](https://jekyllrb.com/docs/). To build the website locally, follow the instructions [here](https://help.github.com/en/articles/setting-up-your-github-pages-site-locally-with-jekyll). You'll need Ruby. Build and run locally with `bundle exec jekyll serve` after configuring everything. ## Finally... There's way more to learn about GitHub Pages and Jekyll, but I'd like to suggest just one more piece of documentation: [Jekyll Data Files](https://jekyllrb.com/docs/datafiles/) Jekyll Data Files are very useful for organizing the information on your GitHub Page. They give you the ability to edit sections of your page without touching the markup file (.md or .html), just like I showed you with the posts, but for your own data structures. If you want to take a look, this website uses, so far, Data Files for the "Useful Links" and "Active Projects" sections.
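As an illustration (the file name and fields here are hypothetical, not the ones this site actually uses), a data file is just YAML under `_data/`:

```yaml
# _data/links.yml -- hypothetical data file backing a "Useful Links" section
- title: DEV Community
  url: https://dev.to
- title: Jekyll Docs
  url: https://jekyllrb.com/docs/
```

Any page can then render it with a Liquid loop, so adding a link only means touching the YAML:

```html
<ul>
{% for link in site.data.links %}
  <li><a href="{{ link.url }}">{{ link.title }}</a></li>
{% endfor %}
</ul>
```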
brunodrugowick
223,246
No More Tears, No More Knots: Arena-Allocated Trees in Rust
An overview of region-based memory management for constructing trees and graphs in Rust
0
2019-12-19T00:02:13
https://dev.to/deciduously/no-more-tears-no-more-knots-arena-allocated-trees-in-rust-44k6
rust, beginners, tutorial, devjournal
--- title: No More Tears, No More Knots: Arena-Allocated Trees in Rust published: true description: An overview of region-based memory management for constructing trees and graphs in Rust cover_image: https://thepracticaldev.s3.amazonaws.com/i/0qfm7joaxu4bvp13w618.jpg tags: rust, beginners, tutorial, devjournal --- ## Enter The Arena When programming in Rust, it's not always straightforward to directly translate idioms you know. One such category is tree-like data structures. These are traditionally built out of `Node` structs that refer to other `Node`s in the tree. To traverse through your structure, you'd use these references, and changing the structure of the tree means changing which nodes are referred to inside each one. Rust *hates* that. Quickly you'll start running into problems - for instance, when iterating through nodes, you'll generally need to borrow the structure. After doing so, you'll have a bad time doing anything else with the structure inside. Trees are a fact of life, though, and very much a useful one at that. It doesn't have to hurt! We can use [region-based memory management](https://en.wikipedia.org/wiki/Region-based_memory_management) to pretty much forget about it. ### The Desert I'll briefly mention a few of the other methods I've bashed my head against before trying this today. The simplest is to use `unsafe`, which allows you to use raw pointers like you would in C. This forfeits a lot of the benefits we get from using safe Rust, as one use of `unsafe` will infect your whole crate. Now part of the code is only deemed safe because you, the programmer, have deemed it to be, and not the Rust compiler. To stick to statically safe Rust, you can wrap your pointers in `Rc<RefCell<T>>` types, which are reference-counted smart pointers with interior mutability. 
When you call `Rc::clone(&ptr)`, you get back a brand new pointer to the same data, that can be owned separately from any existing pointer, and when all such `Rc`s have been dropped the data itself will get dropped. This is a form of static garbage collection. The `RefCell` allows you to take mutable borrows of things that aren't mutable, and enforces the borrow rules at runtime instead of statically. This lets you cheat, and will `panic!()` if you screw up, so, hooray I guess. You need to use methods like `data.borrow_mut()` but then can, for example, change the pointer in a `next` field using an otherwise immutable borrow of the node during your traversal. Alternatively you can use `Box` smart pointers and clone them around, performing a lot of extra work for no reason - this involves deep-copying whole subtrees to make small edits. You do you, but that's not really my thing. You can even use plain references and introduce explicit lifetimes: ```rust struct Node<'a> { val: i32, next: &'a Node<'a>, } ``` Yippee, you're probably sprinkling `'a` all over the place now, and there's gonna be a part of you that wants to start getting friendly with `'b`, and whoa there. That's gross, and you're solving a much simpler problem than that requires. All of these options mean pain, and often compromise. At least in my experience, while you often can get to a successful compile your code gets unreadable and unmaintainable fast, and should you ever need to make a different choice you're pretty much back to square one trying to fit it all together. It's also the only way I've ever managed to actually produce a segfault in Rust. I was pretty impressed with myself for screwing up that hard and I wish I had kept better notes about how I got there, but I know it was some nonsense like the above. The problem is that Rust is keeping a close eye on who owns your nodes and what lifetime each has, but as you build a structure it's not always easy for the compiler to understand what it is you're trying to do. 
You end up with inferred lifetimes that are too small or not accurate for your structure and no way to efficiently traverse or edit the map. You end up needing to do manual work to convince the compiler you're right, which sucks. ### The Oasis What if your nodes could all have the SAME lifetime? I mean, they essentially do, right? Sure, some may get created after one another, but for all intents and purposes within this program you just care that they're all owned by your top-level tree structure. There's a super easy way - pop 'em in a `Vec<T>`: ```rust #[derive(Debug, Default)] struct ArenaTree<T> where T: PartialEq { arena: Vec<Node<T>>, } ``` Boom. Tree. It's generic for any type that can be compared with `==`, and the lifetime problems are solved. You want a node? Use `self.arena[idx]`. Instead of storing actual references to other nodes, just give 'em each an index: ```rust #[derive(Debug)] struct Node<T> where T: PartialEq { idx: usize, val: T, parent: Option<usize>, children: Vec<usize>, } ``` In this tree, each node has zero or one parent and zero or more children. New ones will require an ID specified, as well as a value to store, and will not connect to any other nodes: ```rust impl<T> Node<T> where T: PartialEq { fn new(idx: usize, val: T) -> Self { Self { idx, val, parent: None, children: vec![], } } } ``` You could go on and store as many indices as you want - it's your graph. This is just the example tree I used for [Day 6 of AoC](https://adventofcode.com/2019/day/6) (and why we're here). This is pretty easy to use. When you want a value, you can just ask for its index: ```rust impl<T> ArenaTree<T> where T: PartialEq { fn node(&mut self, val: T) -> usize { //first see if it exists for node in &self.arena { if node.val == val { return node.idx; } } // Otherwise, add new node let idx = self.arena.len(); self.arena.push(Node::new(idx, val)); idx } } ``` Whether or not it was there previously, you now have an index for that value in your tree. 
If it wasn't already there, a new node was allocated with no connections to any existing nodes. It will automatically drop when the `ArenaTree` goes out of scope, so all your nodes will always live as long as any other and all will clean up at the same time. This snippet shows how easy traversal becomes - you just walk the vector with, e.g., `for node in &self.arena`. Certain operations become trivial - want the number of nodes? Ask for it: ```rust fn size(&self) -> usize { self.arena.len() } ``` What about counting how many edges there are? Nothing fancy here either, count them: ```rust fn edges(&self) -> usize { self.arena.iter().fold(0, |acc, node| acc + node.children.len()) } ``` It's still pretty easy to do your standard recursive data structure stuff, though. You can see how deep a node is: ```rust fn depth(&self, idx: usize) -> usize { match self.arena[idx].parent { Some(id) => 1 + self.depth(id), None => 0, } } ``` Search for a value from the root, returning its depth: ```rust fn depth_to_target(&self, idx: usize, target: &T) -> Option<usize> { // are we here? If so, Some(0) if target == &self.arena[idx].val { return Some(0); } // If not, try all children for p in &self.arena[idx].children { if let Some(x) = self.depth_to_target(*p, &target) { return Some(1 + x); } } // If it can't be found, return None None } ``` You can of course traverse iteratively as well. 
This method finds the distance between the parents of two nodes using both iterative and recursive traversal to perform a series of depth-first searches: ```rust fn distance_between(&mut self, from: T, target: T) -> usize { // If it's not in the tree, this will add a new unconnected node // the final function will still return None let start_node = self.node(from); let mut ret = 0; // Start traversal let mut trav = &self.arena[start_node]; // Explore all children, then hop up one while let Some(inner) = trav.parent { if let Some(x) = self.depth_to_target(inner, &target) { ret += x; break; } trav = &self.arena[inner]; ret += 1; } // don't go all the way to target, just orbit ret - 1 } ``` This repeats a little work on each backtrack, but even at puzzle scale computes nearly instantly. It's quite concise and readable, not words I'm used to using for Rust trees! Inserting will depend on the domain, but this application received input as `PARENT)CHILD`, so my `insert` looked like this: ```rust fn insert(&mut self, orbit: &str) { // Init nodes let split = orbit.split(')').collect::<Vec<&str>>(); let inner = self.node(split[0]); let outer = self.node(split[1]); // set the child's parent match self.arena[outer].parent { Some(_) => panic!("Attempt to overwrite existing orbit"), None => self.arena[outer].parent = Some(inner), } // set the parent's children self.arena[inner].children.push(outer); } ``` To recap, whenever you want to manipulate a given node you just need its index to do so. These are handily `Copy`, so don't worry too much about manipulating them. To get a node's index, call `tree.node(val)`. It will always succeed by performing a lookup first and then allocating it to your tree's arena if it wasn't already there. Then it's up to you to manipulate the node's fields to the indices where it belongs: `self.arena[idx].children.push(outer);`. You never need to worry about the memory again, your `Vec` will drop itself when it can. 
You define the structure of the tree yourself by what indices are stored in each node and what happens when you insert a new one. Basically, it's a tree like you want it to be, but it's in Rust and you don't even have to fight about it, and it's great. Here's a [playground link](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=7cccabd269fd1ee8f61ff23fd79117e7) to poke and prod at. *cover image by Amanda Flavell on Unsplash*
deciduously
223,285
More Holiday Gifts for DEVs
A gifting guide focused on supporting coders for the 2019 holiday season.
3,526
2019-12-19T23:42:12
https://dev.to/devteam/more-holiday-gifts-for-devs-24d6
2019holiday, gift
--- title: More Holiday Gifts for DEVs published: true series: 2019 Holiday Gift Guides description: A gifting guide focused on supporting coders for the 2019 holiday season. tags: 2019holiday, gift --- Treat yourself and loved ones this holiday season with more gift ideas from DEV! This guide is themed to offer up supportive gifts for coders, because everyone can benefit from tools that promote healthy habits. *** ### Support your Body **Self-care is crucial to writing clean code.** What happens outside the computer is just as important as what happens inside. Despite your experience, poor posture or inadequate sleep can hinder your programming experience worse than any malware could. **Good posture can improve circulation and make you more efficient.** It can also help you avoid fatigue, or worse - burnout. If you know a dev that is working away at a desk, maybe think about picking up some [back support](https://amzn.to/2M60FyY) or a [footrest](https://amzn.to/2Z7rGqT). The chair you sit in has a measurable impact on your work. If you’re looking for a larger gift to give, upgrading your loved one’s chair for them would earn you a great deal of thanks. **Everyone has heard of the dangers of working with your laptop on your lap.** Not only is it bad for the alignment of your spine, laptops also have a tendency to get hot under a heavier workload. Although the threat of actually burning yourself with your laptop is likely low, it could help to invest in a [laptop stand](https://amzn.to/2EBSkP3). Keeping your laptop’s internal heat down is better for the components and will help them to last. *** ### Support your Mind **If you’re looking for more practical advice, we recently published a book called _Your First Year in Code_.** ![Your First Year in Code img](https://d2sofvawe08yqg.cloudfront.net/firstyearincode/hero?1567549243) The ebook comes with practical advice on topics like code reviews, resume writing, fitting in, ethics, and finding your dream job. 
You can buy it on LeanPub on a pay-what-you-can model. Check out Ben’s post for more details. {% link https://dev.to/devteam/the-dev-community-published-a-book-your-first-year-in-code-1ejk %} **Paid courses and ebooks can be a great gift to give to someone who wants to advance their CS career.** The holiday season is the perfect time to look for discounts on your favorite programming resources. Get a head start on your next project with these courses provided by DEV Community members! @kevinpowell teamed up with [Scrimba](www.scrimba.com) to create the Responsive Web Design Bootcamp. With 173 lessons and over 15 hours of content, you are sure to have a responsive website up and running by the end. Check out Kevin's post for more information on how Scrimba makes learning easier. {% link https://dev.to/kevinpowell/learn-to-master-css-with-the-responsive-web-design-bootcamp-2i4o %} These next posts were posted by our community members on [DEV Listings](https://dev.to/listings). While there are free courses and ebooks online that can get you started with the foundations of programming, paid materials are often able to look more in depth on topics such as web dev, AWS, and cloud computing. {% listing https://dev.to/listings/education/learn-how-to-make-a-website-3k7d %} {% listing https://dev.to/listings/education/12-days-of-badass-courses-on-egghead-io-58-off-memberships-c0l %} {% listing https://dev.to/listings/education/intro-to-aws-for-newbies-ebook-3f9e %} *** I hope some of these resources were helpful to you! Keep an eye out for our Last Minute Shopping guide, coming soon. Feel free to leave your gift suggestions in the comments below! If you were looking to advertise your product, course, or job listing, feel free to check out [DEV Listings](https://dev.to/listings) for more info.
devencourt
223,291
How I Got 10k stars on my Github Repository?
When I was in University, I missed a lot of opportunities like hackathons, conferences, internships a...
0
2019-12-19T04:27:13
https://dev.to/dipakkr/how-i-got-10k-stars-on-my-github-repository-13dj
github, beginners, sideprojects
When I was in University, I missed a lot of opportunities like hackathons, conferences, internships and many global events due to lack of awareness. Being a freshman, I was not aware of what kind of opportunities were available for students. Of course, we get to know about these things with time and active involvement in the community. I didn't want other students/juniors to suffer the same problem. Hence, I created a hand-curated list of all the resources for students/developers. ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/whfmju6848p88o7pohie.PNG) - The list has got more than 1M+ views/impressions on Github and other publications. - Almost 10k :star: - 3k forks - 590+ Contributors from across the world. ### How I got such massive traffic? > Well, it was never about getting stars :star: on the repo but more about the value it would provide. I published the list on GitHub, and people from all across the world helped me make this resource list better and better. For the next 2-3 months, I reviewed every single Pull Request on the repository and closed more than 800 Pull requests. ### Community is Key Students/professionals/Developers from all across the world contributed and made the resources list richer. We have listed most of the contributors with their name, social links, and country name. ![Contributors](https://thepracticaldev.s3.amazonaws.com/i/5b27wgcc25m8dg3dh752.png) So, let me tell you briefly what we have added in the list, known as `A-to-Z-Resources-for-Students`. The list contains brief and important resources in the following categories: - **Coding Resource** - **Hackathons and Events** - **Student Benefits and Programs** - Campus Ambassador Programs - Student Benefits and Packs - Student Fellowship Programs - Scholarships - **Open Source Programs** - **Startup Programs and Incubators** - **Internship Portal** - **Developer Club and Meetups** - **Conferences** - **Bootcamps** - **Top People to follow(In Different Category)**. 
Yes, the list was huge and it literally took me more than a week to search and compile the whole list. ### Conclusion 1. **Create and build things which you think can be helpful for others. Finding your own problem and solving it is the key. There may be someone who is facing the same problem** 2. **Focus on creating valuable content; traffic and audience will be the byproduct**. Finally, you can check out the repository below. {% github dipakkr/A-to-Z-Resources-for-Students no-readme %} I am glad you are still here. I hope you liked the post. Please don't forget to give your feedback in the comments. Getting feedback helps me improve. I write about Javascript, Web, and new stuff almost daily. You can follow me on [Twitter](https://twitter.com/diipakkr) | [Instagram](https://instagram.com/diipakkr) [Subscribe to my email newsletter and stay updated!](https://dipakkr.substack.com/) If you liked the post, please give some :heart:!! Cheers !!
dipakkr
223,371
Bash Variables: To quote, or not to quote...
echo "$VARIABLE" Should you ALWAYS wrap your variables in quotes??? Some would say, "Yes! Absolute...
0
2019-12-18T19:09:54
https://dev.to/jimmymcbride/bash-variables-to-quote-or-not-to-quote-3c6o
bash, productivity, discuss
> echo "$VARIABLE" Should you ALWAYS wrap your variables in quotes??? Some would say, "Yes! Absolutely, always wrap your variables in quotes. Just to be safe!" Well, I say, "You do you, dawg". Fortunately, I'm not just going to leave you hanging there. Let's explore what wrapping our variables in quotes actually does and see what the differences are. Then, we can determine for ourselves when is the best time to use quotes around our variables. All the time, or some of the time? Decide for yourself! ## $HOME and $PATH ```bash echo $HOME /home/jimmy echo "$HOME" /home/jimmy ``` Well, it doesn't seem like there's much of a difference here! Let's create a variable with some spaces in it and see what happens. ```bash NAME="Jimmy   McBride" echo $NAME Jimmy McBride echo "$NAME" Jimmy   McBride ``` Now, we're getting somewhere! It seems like if we have more than 1 empty space in a variable and if those spaces are important we need to wrap our variable in quotes! > **Rule of thumb:** If your variable contains more than 1 consecutive white space and that white space is important for any reason then you DEFINITELY want to wrap your variable in quotes. Let's create a variable with a wildcard (*) and see what happens... ```bash NAME="Jimmy McBride *" echo $NAME Jimmy McBride bin calc Desktop Documents Downloads Games indicator-bulletin Music Pictures Postman Public Templates Videos ``` Uh-oh! Looks like, instead of printing out "Jimmy McBride *" It printed out "Jimmy McBride" plus all of our files in our home folder. That's most likely not what we wanted. ```bash NAME="Jimmy McBride *" echo "$NAME" Jimmy McBride * ``` Now that looks a whole lot better! > **Rule of thumb:** If you have a wildcard character (*) in your code and you want it to act as a regular character you DEFINITELY want to wrap your variable in quotes. However, if you intend for it to be and act as an actual wildcard, then you should NOT wrap your variable in quotes. 
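The spaces don't just change what gets printed, either: unquoted expansion also changes how many arguments a command receives, thanks to word splitting. A quick sketch (the `count_args` helper is just for illustration):

```bash
# Helper that reports how many arguments it was called with.
count_args() { echo $#; }

NAME="Jimmy   McBride"
count_args $NAME     # unquoted: word splitting yields 2 arguments, prints 2
count_args "$NAME"   # quoted: the whole value is 1 argument, prints 1
```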
Let's play around a bit with putting variables in quotes... ```bash NAME="Jimmy   McBride" echo $NAME $HOME Jimmy McBride /home/jimmy echo "$NAME" "$HOME" Jimmy   McBride /home/jimmy echo "$NAME $HOME" Jimmy   McBride /home/jimmy echo $NAME * $HOME Jimmy McBride bin calc Desktop Documents Downloads Games indicator-bulletin Music Pictures Postman Public Templates Videos /home/jimmy echo "$NAME" * "$HOME" Jimmy   McBride bin calc Desktop Documents Downloads Games indicator-bulletin Music Pictures Postman Public Templates Videos /home/jimmy echo "$NAME * $HOME" Jimmy   McBride * /home/jimmy ``` ## Conclusion If your variable contains consecutive white spaces that are important, or a wildcard symbol that you don't want to behave as a wildcard, you definitely want to wrap your variable in quotes. If you have a variable with a wildcard and you DO want it to behave as a wildcard, then you definitely do not want to wrap that puppy in quotes. If the variable you are calling doesn't have important consecutive white spaces or contain a wildcard character then it doesn't seem to make a difference whether you use quotes or not. In that situation, especially when I know what the variables are, I would save myself a few keystrokes by not adding quotes. However, if you are ever uncertain when using a variable, wrapping it in quotes just to be safe might be a really good idea! What are your thoughts? Did I miss anything? What's your opinion? Let me know in the comments! Thanks! :smile:
jimmymcbride
223,382
This is my new favorite (free) programming tool – GitHub Actions Tutorial
I've tinkered with a ton of programming tools this year, but none of them has captured my heart like...
0
2019-12-19T15:41:46
https://bytesized.xyz/github-actions-tutorial/
webdev, tutorial, showdev
I've tinkered with a ton of programming tools this year, but none of them has captured my heart like [GitHub Actions](https://www.github.com/features/actions). It's a continuous integration tool, yes, but it's also a general purpose code execution platform, and it's built into the website that I use to manage my projects every day. In this video, we'll explore the what, the why, and the how of GitHub Actions, building a very typical weather bot with a not-so-typical deployment strategy: it's deployed and run daily using GitHub Actions. In building the project, we'll understand how to get up and running with GitHub Actions, how to securely store environment variables and configuration details in our repositories, and much more. {% youtube J4EhgEskSZA %} If you enjoyed the video, give it a thumbs-up, and [subscribe to our channel](https://www.youtube.com/channel/UC046lFvJZhiwSRWsoH8SFjg?sub_confirmation=1) for more web dev content every week. We also have a newsletter where we send out what's new and cool in the web development world, every Tuesday – [join here](https://www.bytesized.xyz/newsletter)!
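For a taste of what a daily-run bot on GitHub Actions looks like, here is a rough sketch of a scheduled workflow file. The file path, script name, and secret name below are illustrative assumptions, not taken from the video:

```yaml
# .github/workflows/weather-bot.yml (hypothetical path)
name: weather-bot

on:
  schedule:
    - cron: "0 12 * * *"  # run once a day at 12:00 UTC

jobs:
  run:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Run the bot
        env:
          # Stored as an encrypted secret in the repository settings,
          # never committed to the repo itself
          WEATHER_API_KEY: ${{ secrets.WEATHER_API_KEY }}
        run: python bot.py
```

The `schedule` trigger is what turns the repository into a cron-style execution platform, and the `secrets` context is how configuration details stay out of the code.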
signalnerve
229,450
Testing in 2019, according to my Twitter bookmarks
I’m not a particularly active twitter user, but I do find it a great way to get a wider perspective...
0
2020-01-01T00:06:42
https://gerg.dev/2019/12/testing-2019-on-twitter/
testing, twitter, yearinreview
--- title: Testing in 2019, according to my Twitter bookmarks published: true date: 2019-12-31 15:00:35 UTC tags: testing, twitter, yearinreview canonical_url: https://gerg.dev/2019/12/testing-2019-on-twitter/ --- I’m not a particularly active twitter user, but I do find it a great way to get a wider perspective on software testing and development. I’ll often bookmark interesting tweets or threads as I come across them so I can remember to come back to them, either to read later or to think more about. To give you a snapshot of some of what’s been influencing my thoughts around testing this year, here’s a roundup of (almost) everything I bookmarked on twitter in 2019. In automation and development: - [Russel Johnson highlighted Angie Jones’s article](https://twitter.com/RussellJQA/status/1085206412035346432) about blurring the lines between UI and API testing to [bring the DRY principle to test automation](http://angiejones.tech/hybrid-tests-automation-pyramid). - A [quick lesson on xpath](https://twitter.com/dpaulmerrill/status/1085177333873545216) to counteract all the hate for it. - Gerard K. Cohen aptly points out that [the best way to learn accessibility in React is to learn HTML](https://twitter.com/gerardkcohen/status/1089942713716232193). - [T.J. Maher tipped me off](https://twitter.com/tjmaher1/status/1097870917252927493) to [The Internet](https://the-internet.herokuapp.com/), a good playground for experimenting with automation. - Ministry of Testing started a thread on [testing tools](https://twitter.com/ministryoftest/status/1117756010217623553). - I learned about [generative or property-based testing via Lisa Crispin](https://twitter.com/lisacrispin/status/1119940717420449792). - Manuel Matuzovic points out the limitations of Google Lighthouse with an example of a site that [passes all accessibility tests but remains woefully inaccessible](https://twitter.com/mmatuzo/status/1134380331694460928). 
- A [tip from Jon Kuperman to check out web.dev](https://twitter.com/jkup/status/1152019045891186688) pages for help with various frameworks.
- [This tweet spurred me to dig more into the Chrome DevTools Protocol](https://twitter.com/shs96c/status/1156345850844667906), realizing both that I was already using it without knowing it and that I could leverage it more.
- A [checklist of things to look for when testing web pages](https://twitter.com/LenaPejgan/status/1156660767388839937) from Lena Pejgan Wiberg.
- Samuel Nitsche, for developers: “[If you don’t know how to test something you wrote, you did it wrong](https://twitter.com/Der_Pesse/status/1168535331819466752)”
- This tip from Rick Scott on evaluating your test automation: “[take away the software [and] see what still passes](https://twitter.com/shadowspar/status/1185198034352852992)”. My old boss used to call those “coffee break tests”.

In product management/ownership, John Cutler continues to be a significant influence:

- A [good definition of done can still lead to a bad product](https://twitter.com/johncutlefish/status/1085378750185795584),
- [15 things to know about product managers](https://twitter.com/johncutlefish/status/1090294193585455104),
- [An illustration on how trends in development practice live and die](https://twitter.com/johncutlefish/status/1126692138618478592) that helps me cut through both hype and hate
- On how [the key to getting something done in 6 months is getting the first version done in 6 days](https://twitter.com/johncutlefish/status/1134917251335954433).
- A thread on how [a shared infra/ops (and test?) team can grind things to a halt](https://twitter.com/johncutlefish/status/1135686190231744512).
- On [working fast and slow](https://twitter.com/johncutlefish/status/1138343908729970688) that made me think about what level I tend to work on, and [this great thread](https://twitter.com/johncutlefish/status/1182784029478703105) to go with it.
- A [maturity model for maturity models](https://twitter.com/johncutlefish/status/1156727594210881538) that I’ll pull from the next time someone gripes about maturity models. - His take on [how to define an MVP](https://cutle.fish/blog/what-is-an-mvp). - How to deal with [top-down solutioning and taming big egos](https://twitter.com/johncutlefish/status/1171292899138359296): assume the idea is good and test it. - [Tips on writing good one-pagers](https://twitter.com/johncutlefish/status/1201886531004186626), On the DevOps front, there’s always fun to be found in threads for or against deploying on Fridays: - Prompted by Kelly Sommers, Jez Humble points out that config changes are more likely to break production than code changes, [so configs should be part of the pipeline too](https://twitter.com/jezhumble/status/1106294766587641856). - Charity Majors says if you can’t push on Fridays, [you need to prioritize improving your CI/CD and observability until you can](https://twitter.com/mipsytipsy/status/1117858830136664067). I’ve been using something similar as a litmus test for a team’s technical agility. - Matt Long made [a mind map about CI pipelines as applications](https://twitter.com/burythehammer/status/1121035089419341829). - Or, you can think of [CI pipelines as part of the application](https://twitter.com/lisacrispin/status/1130142114241757184) itself. - On the [skills testers need in DevOps](https://twitter.com/lisacrispin/status/1137012098746261504). - “[Testing is the feedback we seek](https://twitter.com/DanAshby04/status/1151940263721537536)“, from Ashley Hunsberger via Dan Ashby. - The [2019 Accelerate State of DevOps report](https://twitter.com/jezhumble/status/1164576150942732289) came out. And speaking of frequency and agility: - Jason Yip asks “[How do we get better so we can do this faster](https://twitter.com/jchyip/status/1134123212231643137)” instead of “how do we do this faster”. 
- A [word of caution on measuring lead and cycle time](https://twitter.com/johncutlefish/status/1138286678466973696) from John Cutler, as well as pointing out [things that feel like going faster versus actually making us faster](https://twitter.com/johncutlefish/status/1029757026895720449).
- Forget agile, just ask “[where’s your feedback loop?](https://twitter.com/krisajenkins/status/1151149517548478464)”
- [Rebuttals to “Agile won’t work here”](https://twitter.com/jboogie/status/1172813244240670720) based on the excuse and who’s giving it, from Jeff Gothelf.

In UX, design, and customer-first approaches:

- A thread from Dan Abramov that started about [a drawback of TDD](https://twitter.com/dan_abramov/status/1086418722124906497), but caught my attention because he emphasized that [nothing beats actually using a product to get a good understanding of it](https://twitter.com/dan_abramov/status/1086459618480529408).
- Nate Sanders talked about [the importance of involving engineers in discovery](https://twitter.com/nlsanders/status/1100556577394720768).
- Andy Budd points out that [design research is about finding the problems your users face](https://twitter.com/andybudd/status/1100682589860491264), not just solutions.
- A [poem illustrating designer perspective vs user perspective](https://twitter.com/GenevieveDezso/status/1120544436096851968).
- A fun illustration of [how users see unlabelled icons](https://twitter.com/DougCollinsUX/status/1146781072429989889).
- I like this fill-in-the-blank exercise from Cindy Alvarez to [make the _why_ of what you’re developing clear](https://twitter.com/cindyalvarez/status/1182373668577370112).
On how teams work, communication, and collaboration: - [Liz Keogh describes how it takes three tries to work with WIP limits](https://twitter.com/lunivore/status/1089567842515775488): first people ignore them, then they feel bad about ignoring them, then they actually start staying within them. I wouldn’t be surprised if other changes evolve similarly. - A reminder [not to get too caught up in pedantics from @vandroidhelsing](https://twitter.com/vandroidhelsing/status/1085000055264694273). - A [simple development maturity model](https://twitter.com/rickasaurus/status/1089919150024286208) from Richard Minerich - On the value of [stating the wrong answer](https://twitter.com/Massawyrm/status/1100539089131110400) instead of asking a question - [Elisabeth Hendrickson asked a question about wide band delphi](https://twitter.com/mstine/status/1103868887727202305) and 68% of people (including myself) didn’t know what it was - There’s [a nice one-pager on agile testing](https://twitter.com/AgileTFellow/status/1123232641292095490) from Janet Gregory and Lisa Crispin. - [How to run a Lean Coffee](https://twitter.com/mheusser/status/1148747960961961984) from Matt Heusser - [Some commentary on how agile started technical and has ended up as a project management thing](https://twitter.com/jfairbairn/status/1158910251178926081). - I love [this scale for describing complexity](https://twitter.com/t_magennis/status/1180266360825204742) from Liz Keogh. Other miscellaneous testing stuff: - In March I bookmarked [a tweet about Janet Gregory’s Whole Team Testing course](https://twitter.com/AgileTFellow/status/1111275533244026880), and I’m now going to be taking the course in February. - Ministry of Testing asked about testing tools - The concept of “[qualtability” raised by Richard Bradshaw](https://twitter.com/FriendlyTester/status/1152274722006155264) but originally from [Anne-Marie Charrett](https://www.youtube.com/watch?v=46QFPrs5dT0&feature=youtu.be). 
- Is going live with the least testing possible our goal? [Beware of throwing away things that aren’t actually waste](https://twitter.com/LenaPejgan/status/1160478580427632640). - On Mob Programming – “[quality goes down as the number of people touching the code](https://twitter.com/mfeathers/status/1159495761714769921)_[independently](https://twitter.com/mfeathers/status/1159495761714769921)_[goes up](https://twitter.com/mfeathers/status/1159495761714769921).” (emphasis mine) - Related to that, evidently [company org structure is the biggest predictor of the number of bugs in code](https://twitter.com/Carnage4Life/status/1207411078658842624), not the code itself. - Ministry of Testing asked “[what should not be automated](https://twitter.com/ministryoftest/status/1197578181961756675)“, which I’ve been known to ask in job interviews. Several things made it into my bookmarks but are still on my “read/watch/do later” list: - [Myles Lewando drew my attention](https://twitter.com/codemacabre/status/1086711963927961604) to [Design for Developers by Sarah Drasner](https://frontendmasters.com/workshops/design-for-devs/). - [Vladimir Tarasov suggested](https://twitter.com/NetRat_eu/status/1089827884443860992) this article on [How Complex Systems Fail](http://web.mit.edu/2.75/resources/random/How%20Complex%20Systems%20Fail.pdf) should be required reading for all testers. - “[An Elegant Puzzle](https://lethain.com/elegant-puzzle/)“, thanks to [the recommendation from Charity Majors](https://twitter.com/mipsytipsy/status/1135881974424358913). - An [executive crash course on the DevOps phenomenon](https://twitter.com/nicolefv/status/1137067091725508608) from [Nicole Forsgren](https://twitter.com/nicolefv/status/1137067091725508608). - This article on [testing structure changes](https://twitter.com/KentBeck/status/1146126321057091584) from Kent Beck. 
- The “[Shape Up](https://twitter.com/jasonfried/status/1148705684663480320)” way of working [from Jason Fried](https://twitter.com/jasonfried/status/1148705684663480320?s=19). [Alan Page highlights](https://twitter.com/alanpage/status/1211388455504101377?s=19) this quote: “Therefore we think of QA as a level-up, not a gate or a check-point that all work must go through. We’re much better off with QA than without it. But we don’t depend on QA to ship quality features that work as they should.” - “Escaping the Build Trap” because of [the connection Richard Bradshaw pointed out with effective testing](https://twitter.com/FriendlyTester/status/1153008516522815488). - “The North Star Playbook”, yet another way of working, [this one from John Cutler](https://twitter.com/johncutlefish/status/1202136707178450951) and co. - “Leading Quality” thanks to [this slew of recommendations](https://twitter.com/TheTestingMuse/status/1159860418678067201). - I want to try Armstrong, an app for exploratory testing [that Richard Bradshaw highlighted](https://twitter.com/FriendlyTester/status/1174002895433474051). - “[Operations Anti-Patterns with DevOps Solutions](https://livebook.manning.com/book/operations-anti-patterns-devops-solutions)” [from Jeff Smith](https://twitter.com/DarkAndNerdy/status/1209591491305361409). Finally, just for fun, [there was this figure of a black hole published at 1:1 scale](https://twitter.com/jaredhead/status/1177737536040390657). And that, folks, was my year on Twitter.
gpaciga
230,437
How to prepare for DevOps & SRE interviews?
Note: the following is opinionated. Note2: the most updated version of this post is located in my Git...
0
2020-01-02T16:41:31
https://dev.to/abregman/how-to-prepare-for-devops-sre-interviews-enj
devops, career, linux
Note: the following is opinionated. Note2: the most updated version of this post is located in my GitHub [project](https://github.com/bregman-arie/devops-interview-questions/blob/master/prepare_for_interview.md) ### Skills you should have #### Linux Every DevOps Engineer should have a deep understanding of at least one operating system and if you have the option to choose then I would say it should definitely be Linux as I believe it's a requirement of at least 90% of the DevOps jobs postings out there. Usually, the followup question is "How extensive should my knowledge be?" Out of all the DevOps skills, I would say this, along with coding, should be your strongest skills. Be familiar with OS processes, debugging tools, filesystem, networking, ... know your operating system, understand how it works, how to manage issues, etc. Not long ago, I've created a list of Linux resources right [here](https://dev.to/abregman/collection-of-linux-resources-3nhk). There are some good sites there that you can use for learning more about Linux. #### Coding My personal belief is that any DevOps engineer should know coding, at least to some degree. Having this skill you can automate manual processes, improve some of the open source tools you are using today or build new tools & projects to provide a solution to existing problems. Knowing how to code = a lot of power. When it comes to interviews you'll notice that the level of knowledge very much depends on the company or position you are interviewing for. Some will require you just to be able to write simple scripts while others will deep dive into common algorithms, data structures, etc. It's usually clear from the job requirements or phone interview. The best way to practice this skill is by doing some actual coding - scripts, online challenges, CLI tools, web applications, ... 
just code :) Also, the following is probably clear to most people but let's still clarify it: when given the chance to choose any language for answering coding tasks/questions, choose the one you have experience with! Some candidates prefer to choose the language they think the company is using and this is a huge mistake since giving the right answer is always better than a wrong answer, no matter which language you have used :) I recommend the following sites for practicing coding: * [HackerRank](https://www.hackerrank.com) * [LeetCode](https://leetcode.com) * [Exercism](https://exercism.io) <div style="text-align:center" markdown="1"> <img src="https://i.kym-cdn.com/photos/images/original/001/525/884/7ed.jpg" width="350"> </div> #### Architecture and Design This is also an important aspect of DevOps. You should be able to describe how to design different systems, workflows, and architectures. Also, the scale is an important aspect of that. A design which might work for a dozen of hosts or X amount of data, will not necessarily work well with bigger scale. Some ideas for you to explore: * How to design and implement a CI pipeline (or pipelines) for verifying PRs, run multiple different types of tests, package the project and deploy it somewhere * How to design and implement secured ELK architecture which will get logs from 10,000 apps and will display the data eventually to the user * Microservices designs are also quite popular these days I recommend going over the following GitHub projects as they are really deep-diving into System Design: * https://github.com/donnemartin/system-design-primer #### Tools Some interviews will focus on specific tools or technologies. Which tools? this is mainly based on a combination of what you mentioned in your C.V & those that are mentioned in the job posting and used in the company. Here are some questions I believe anyone should know to answer regarding the tools he/she is familiar with: * What the tool does? 
What does it allow us to achieve that we couldn't without it?
* What are its advantages over other tools in the same area, with the same purpose? Why did you choose to use it?
* How does it work?
* How do you use it?

Let's deep dive into practical preparation steps.

### Scenarios || Challenges || Tasks

This is a very common way to interview today for DevOps roles. The candidate is given a task which represents a common task of DevOps Engineers or a piece of common knowledge, and the candidate has several hours or days to accomplish the task.<br>
This is a great way to prepare for interviews and I recommend trying it out before actually interviewing. How? Take requirements from job posts and convert them into scenarios. Let's see an example: "Knowledge in CI/CD" -> Scenario: create a CI/CD pipeline for a project.

At this point, some people ask: "but what project?" and the answer is: what about GitHub? It has only 9125912851285192 projects...and a free way to set up CI for any of them (also a great way to learn how to collaborate with others :) )

Let's convert another scenario: "Experience with provisioning servers" -> Scenario: provision a server (to make it more interesting: create a web server).

And the last example: "Experience with scripting" -> Scenario: write a script. Don't waste too much time thinking "what script should I write?". Simply automate something you are doing manually, or even implement your own version of common small utils.

### Start your own DevOps project

Starting a DevOps project is a good idea because:

* It will make you practice coding
* It will be something you can add to your resume and talk about with the interviewer
* Depending on its size and complexity, it can teach you something about design in general
* Depending on adoption, it can teach you about managing open source projects

Same here, don't overthink what your project should be about.
Just go and build something :)

### Sample interview questions

Make a sample list of interview questions on various topics/areas like technical, company, role, ... and try to answer them. See if you can manage to answer them in a fluent, detailed way. I've gathered "a couple" of questions [here](https://github.com/bregman-arie/devops-interview-questions).

Better yet, ask a good friend/colleague to challenge you with some questions. Your self-awareness might be an obstacle in objective self-review of your knowledge :)

<div style="text-align:center" markdown="1">
<img src="https://i.imgflip.com/3l06t0.jpg" width="350">
</div>

### Networking

For those who attend technical meetups and conferences, these can be a great opportunity to chat with people from other companies about their interviewing process. But don't start with it, it can be quite awkward. Say at least hello first... (:

Doing so can give you a lot of information on what to expect from an interview at some companies, or how to better prepare.

### Know your resume

It may sound trivial but the idea here is simple: be ready to answer any question regarding any line you included in your resume. Sometimes candidates are surprised when they are asked about a skill or line which seems unrelated to the position, but the simple truth is: if you mentioned something on your resume, it's only fair to ask you about it.

### Know the company

Be familiar with the company you are interviewing at. Some ideas:

* What does the company do?
* What products does it have?
* Why are its products unique (or better than other products)? This can also be a good question for you to ask

### Books

From my experience, this is not done by many candidates but it's one of the best ways to deep dive into topics like operating systems, virtualization, scale, distributed systems, etc.
In most cases, you will do fine without reading books, but for the AAA interviews (hardest level) you'll want to read some books, and overall, if you aspire to be a better DevOps Engineer, books (along with articles and blog posts) are a great way :)

### Consider starting in a non-DevOps position

While not a preparation step, you should know that landing a DevOps role as a first position can be challenging. No, it's not impossible, but still, since DevOps covers many different practices, tools, ... it can be quite challenging and also overwhelming for someone to try and achieve it as a first position.<br>
A possible path to becoming a DevOps engineer is to actually start with a different (but related) position and switch from there after 1-2 years or more. Some ideas:

* System Administrator - This is perfect because every DevOps Engineer should have a solid understanding of the OS, and sysadmins know their OS :)
* Software Developer/Engineer - A DevOps engineer should have coding skills and this position will provide more than the required knowledge in most cases
* QA Engineer - This is a more tricky one because IMHO there are fewer overlapping areas/skills with a DevOps Engineer. Sure, DevOps engineers should have some knowledge about testing, but usually their solid skills/background are mainly composed of system internals and coding skills.

### What to expect from a DevOps interview?

DevOps interviews can be very different. Some will include design questions, some will focus on coding, others will include short technical questions, and you might even have an interview where the interviewer only goes over your resume and discusses your past experience. There are a couple of things you can do about it so it will be a less overwhelming experience:

1. You can and probably should ask the HR (in some cases even the team lead) what the interview process looks like. Some will be kind enough to even tell you how to prepare.
2.
Usually, the job posting gives more than a hint on where the focus will be and what you should focus on in your preparations, so read it carefully.
3. There are plenty of sites that have notes or a summary of the interview process at different companies, especially big enterprises.

### Don't forget to be an interviewer as well

Some people tend to look at interviews as a one-way road of "determining whether a candidate is qualified", but in reality, a candidate should also determine whether the company he/she is interviewing at is the right place for him/her.

* Do I care about team size? More specifically, do I care about being a one-man show or being part of a bigger team?
* Do I care about work-life balance?
* Do I care about personal growth and how it's practically done?
* Do I care about knowing what my responsibilities are as part of the role?

If you do, you should also play the interviewer role :)

### One Last Thing

[Good luck](https://youtu.be/Xz-UvQYAmbg?t=29) :)
abregman
230,598
Using Python to create PostgreSQL tables with random schema
Having a large amount of test data sometimes take a lot of effort, and to simulate a more realistic...
4,109
2020-01-02T18:20:28
https://dev.to/mesmacosta/using-python-to-create-postgresql-tables-with-random-schema-2100
postgres, database, metadata, python
Having a large amount of test data sometimes takes a lot of effort, and to simulate a more realistic scenario, it’s good to have a large number of tables with distinct column types. This script generates random table schemas for PostgreSQL.

If you want to set up a PostgreSQL environment for dev and test purposes, take a look at: https://dev.to/mesmacosta/quickly-set-up-a-postgresql-environment-on-gcp-758

## Environment

##### Activate your virtualenv
```bash
pip install --upgrade virtualenv
python3 -m virtualenv --python python3 env
source ./env/bin/activate
```

##### Install the requirements for the metadata generator
```bash
pip install -r requirements.txt
```

## Code

{% gist https://gist.github.com/mesmacosta/00d1f4854c7a9475dce307b7fc0a8258 %}

## Execution

```bash
export POSTGRESQL_SERVER=127.0.0.1
export POSTGRESQL_USERNAME=postgres
export POSTGRESQL_PASSWORD=postgresql_pwd
export POSTGRESQL_DATABASE=postgres

python metadata_generator.py \
  --postgresql-host=$POSTGRESQL_SERVER \
  --postgresql-user=$POSTGRESQL_USERNAME \
  --postgresql-pass=$POSTGRESQL_PASSWORD \
  --postgresql-database=$POSTGRESQL_DATABASE
```

And that's it! If you have difficulties, don’t hesitate to reach out. I would love to help you!
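To give an idea of the core trick — the real code lives in the gist above — here is a minimal, hypothetical sketch that builds `CREATE TABLE` statements with random column names and types. The type pool and helper names are my own invention for illustration, not the script's actual API:

```python
import random
import string

# Illustrative subset of PostgreSQL column types to draw from.
COLUMN_TYPES = ["integer", "bigint", "text", "varchar(255)",
                "boolean", "timestamp", "numeric(10,2)", "date"]

def random_identifier(length=8):
    """Random lowercase identifier, usable as a table or column name."""
    return "".join(random.choices(string.ascii_lowercase, k=length))

def random_create_table(max_columns=10):
    """Build a CREATE TABLE statement with a randomly generated schema."""
    table = random_identifier()
    columns = ", ".join(
        f"{random_identifier()} {random.choice(COLUMN_TYPES)}"
        for _ in range(random.randint(1, max_columns))
    )
    return f"CREATE TABLE {table} ({columns});"

print(random_create_table())
```

Each call produces a different statement, so executing a few hundred of them against a scratch database yields tables with a realistic spread of column types.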
mesmacosta
328,769
Working with Data
2. Working with Data 2.1. Lists We’ve already seen quick introduction to lists in the previous articl...
6,570
2020-05-06T13:08:53
https://dev.to/estherwavinya/working-with-data-5ec0
python, beginners, codenewbie, writing
**2. Working with Data**

**2.1. Lists**

We’ve already seen a quick introduction to lists in the previous article.

```
>>> [1, 2, 3, 4]
[1, 2, 3, 4]
>>> ["hello", "world"]
['hello', 'world']
>>> [0, 1.5, "hello"]
[0, 1.5, 'hello']
```

A list can contain another list as a member.

```
>>> a = [1, 2]
>>> b = [1.5, 2, a]
>>> b
[1.5, 2, [1, 2]]
```

The built-in function `range` can be used to create a sequence of consecutive integers. The `range` function returns a special range object that behaves like a list. To get a real list from it, you can use the `list` function.

```
>>> x = range(1, 4)
>>> x
range(1, 4)
>>> x[0]
1
>>> len(x)
3
>>> list(x)
[1, 2, 3]
```

The built-in function `len` can be used to find the length of a list.

```
>>> a = [1, 2, 3, 4]
>>> len(a)
4
```

The `+` and `*` operators work even on lists.

```
>>> a = [1, 2, 3]
>>> b = [4, 5]
>>> a + b
[1, 2, 3, 4, 5]
>>> b * 3
[4, 5, 4, 5, 4, 5]
```

A list can be indexed to get individual entries. The value of an index can go from 0 to (length of list - 1).

```
>>> x = [1, 2]
>>> x[0]
1
>>> x[1]
2
```

When a wrong index is used, Python gives an error.

```
>>> x = [1, 2, 3, 4]
>>> x[6]
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
IndexError: list index out of range
```

Negative indices can be used to index the list from the right.

```
>>> x = [1, 2, 3, 4]
>>> x[-1]
4
>>> x[-2]
3
```

We can use list slicing to get part of a list.

```
>>> x = [1, 2, 3, 4]
>>> x[0:2]
[1, 2]
>>> x[1:4]
[2, 3, 4]
```

Even negative indices can be used in slicing. For example, the following example strips the last element from the list.

```
>>> x[0:-1]
[1, 2, 3]
```

Slice indices have useful defaults; an omitted first index defaults to zero, an omitted second index defaults to the size of the list being sliced.

```
>>> x = [1, 2, 3, 4]
>>> x[:2]
[1, 2]
>>> x[2:]
[3, 4]
>>> x[:]
[1, 2, 3, 4]
```

An optional third index can be used to specify the increment, which defaults to 1.
```
>>> x = list(range(10))
>>> x
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> x[0:6:2]
[0, 2, 4]
```

We can reverse a list just by providing -1 for the increment.

```
>>> x[::-1]
[9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
```

List members can be modified by assignment.

```
>>> x = [1, 2, 3, 4]
>>> x[1] = 5
>>> x
[1, 5, 3, 4]
```

Presence of a value in a list can be tested using the `in` operator.

```
>>> x = [1, 2, 3, 4]
>>> 2 in x
True
>>> 10 in x
False
```

Values can be appended to a list by calling the `append` method on the list. A method is just like a function, but it is associated with an object and can access that object when it is called. We will learn more about methods when we study classes.

```
>>> a = [1, 2]
>>> a.append(3)
>>> a
[1, 2, 3]
```

**Problem 1:** What will be the output of the following program?

```
x = [0, 1, [2]]
x[2][0] = 3
print(x)
x[2].append(4)
print(x)
x[2] = 2
print(x)
```

**2.1.1. The for Statement**

Python provides the `for` statement to iterate over a list. A `for` statement executes the specified block of code for every element in a list.

```
for x in [1, 2, 3, 4]:
    print(x)

for i in range(10):
    print(i, i*i, i*i*i)
```

The built-in function `zip` takes two lists and returns pairs of corresponding elements. In Python 3 it returns an iterator, so pass the result to `list` to see the pairs.

```
>>> list(zip(["a", "b", "c"], [1, 2, 3]))
[('a', 1), ('b', 2), ('c', 3)]
```

It is handy when we want to iterate over two lists together.

```
names = ["a", "b", "c"]
values = [1, 2, 3]
for name, value in zip(names, values):
    print(name, value)
```

**Problem 2:** Python has a built-in function `sum` to find the sum of all elements of a list. Provide an implementation for `sum`.

```
>>> sum([1, 2, 3])
6
```

**Problem 3:** What happens when the above `sum` function is called with a list of strings? Can you make your `sum` function work for a list of strings as well?

```
>>> sum(["hello", "world"])
'helloworld'
>>> sum(["aa", "bb", "cc"])
'aabbcc'
```

**Problem 4:** Implement a function `product`, to compute the product of a list of numbers.
```
>>> product([1, 2, 3])
6
```

**Problem 5:** Write a function `factorial` to compute the factorial of a number. Can you use the `product` function defined in the previous example to compute the factorial?

```
>>> factorial(4)
24
```

**Problem 6:** Write a function `reverse` to reverse a list. Can you do this without using list slicing?

```
>>> reverse([1, 2, 3, 4])
[4, 3, 2, 1]
>>> reverse(reverse([1, 2, 3, 4]))
[1, 2, 3, 4]
```

**Problem 7:** Python has built-in functions `min` and `max` to compute the minimum and maximum of a given list. Provide an implementation for these functions. What happens when you call your `min` and `max` functions with a list of strings?

**Problem 8:** Cumulative sum of a list `[a, b, c, ...]` is defined as `[a, a+b, a+b+c, ...]`. Write a function `cumulative_sum` to compute the cumulative sum of a list. Does your implementation work for a list of strings?

```
>>> cumulative_sum([1, 2, 3, 4])
[1, 3, 6, 10]
>>> cumulative_sum([4, 3, 2, 1])
[4, 7, 9, 10]
```

**Problem 9:** Write a function `cumulative_product` to compute the cumulative product of a list of numbers.

```
>>> cumulative_product([1, 2, 3, 4])
[1, 2, 6, 24]
>>> cumulative_product([4, 3, 2, 1])
[4, 12, 24, 24]
```

**Problem 10:** Write a function `unique` to find all the unique elements of a list.

```
>>> unique([1, 2, 1, 3, 2, 5])
[1, 2, 3, 5]
```

**Problem 11:** Write a function `dups` to find all duplicates in the list.

```
>>> dups([1, 2, 1, 3, 2, 5])
[1, 2]
```

**Problem 12:** Write a function `group(list, size)` that takes a list and splits it into smaller lists of the given size.

```
>>> group([1, 2, 3, 4, 5, 6, 7, 8, 9], 3)
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
>>> group([1, 2, 3, 4, 5, 6, 7, 8, 9], 4)
[[1, 2, 3, 4], [5, 6, 7, 8], [9]]
```

**2.1.2. Sorting Lists**

The `sort` method sorts a list in place.

```
>>> a = [2, 10, 4, 3, 7]
>>> a.sort()
>>> a
[2, 3, 4, 7, 10]
```

The built-in function `sorted` returns a new sorted list without modifying the source list.
```
>>> a = [4, 3, 5, 9, 2]
>>> sorted(a)
[2, 3, 4, 5, 9]
>>> a
[4, 3, 5, 9, 2]
```

The behavior of the `sort` method and the `sorted` function is exactly the same, except that `sorted` returns a new list instead of modifying the given list.

The `sort` method works even when the list has different types of objects, and even lists.

```
>>> a = ["hello", 1, "world", 45, 2]
>>> a.sort()
>>> a
[1, 2, 45, 'hello', 'world']
>>> a = [[2, 3], [1, 6]]
>>> a.sort()
>>> a
[[1, 6], [2, 3]]
```

We can optionally specify a function as the sort key.

```
>>> a = [[2, 3], [4, 6], [6, 1]]
>>> a.sort(key=lambda x: x[1])
>>> a
[[6, 1], [2, 3], [4, 6]]
```

This sorts all the elements of the list based on the value of the second element of each entry.

**Problem 13:** Write a function `lensort` to sort a list of strings based on length.

```
>>> lensort(['python', 'perl', 'java', 'c', 'haskell', 'ruby'])
['c', 'perl', 'java', 'ruby', 'python', 'haskell']
```

**Problem 14:** Improve the `unique` function written in the previous problems to take an optional key function as argument and use the return value of the key function to check for uniqueness.

```
>>> unique(["python", "java", "Python", "Java"], key=lambda s: s.lower())
["python", "java"]
```

**2.2. Tuples**

A tuple is a sequence type just like `list`, but it is immutable. A tuple consists of a number of values separated by commas.

```
>>> a = (1, 2, 3)
>>> a[0]
1
```

The enclosing parentheses are optional.

```
>>> a = 1, 2, 3
>>> a[0]
1
```

The built-in function `len` and slicing work on tuples too.

```
>>> len(a)
3
>>> a[1:]
(2, 3)
```

Since parentheses are also used for grouping, tuples with a single value are represented with an additional comma.

```
>>> a = (1)
>>> a
1
>>> b = (1,)
>>> b
(1,)
>>> b[0]
1
```

**2.3. Sets**

Sets are unordered collections of unique elements.

```
>>> x = set([3, 1, 2, 1])
>>> x
set([1, 2, 3])
```

Python 2.7 introduced a new way of writing sets.

```
>>> x = {3, 1, 2, 1}
>>> x
set([1, 2, 3])
```

New elements can be added to a set using the `add` method.
```
>>> x = set([1, 2, 3])
>>> x.add(4)
>>> x
set([1, 2, 3, 4])
```

Just like lists, the existence of an element can be checked using the `in` operator. However, this operation is faster in sets compared to lists.

```
>>> x = set([1, 2, 3])
>>> 1 in x
True
>>> 5 in x
False
```

**Problem 15:** Reimplement the `unique` function implemented in the earlier examples using sets.

**2.4. Strings**

Strings also behave like lists in many ways. The length of a string can be found using the built-in function `len`.

```
>>> len("abrakadabra")
11
```

Indexing and slicing on strings behave similar to that of lists.

```
>>> a = "helloworld"
>>> a[1]
'e'
>>> a[-2]
'l'
>>> a[1:5]
'ello'
>>> a[:5]
'hello'
>>> a[5:]
'world'
>>> a[-2:]
'ld'
>>> a[:-2]
'hellowor'
>>> a[::-1]
'dlrowolleh'
```

The `in` operator can be used to check if a string is present in another string.

```
>>> 'hell' in 'hello'
True
>>> 'full' in 'hello'
False
>>> 'el' in 'hello'
True
```

There are many useful methods on strings. The `split` method splits a string using a delimiter. If no delimiter is specified, it uses any whitespace character as the delimiter.

```
>>> "hello world".split()
['hello', 'world']
>>> "a,b,c".split(',')
['a', 'b', 'c']
```

The `join` method joins a list of strings.

```
>>> " ".join(['hello', 'world'])
'hello world'
>>> ','.join(['a', 'b', 'c'])
'a,b,c'
```

The `strip` method returns a copy of the given string with leading and trailing whitespace removed. Optionally, a string can be passed as argument to remove characters from that string instead of whitespace.

```
>>> ' hello world\n'.strip()
'hello world'
>>> 'abcdefgh'.strip('abdh')
'cdefg'
```

Python supports formatting values into strings. Although this can include very complicated expressions, the most basic usage is to insert values into a string with the `%s` placeholder.

```
>>> a = 'hello'
>>> b = 'python'
>>> "%s %s" % (a, b)
'hello python'
>>> 'Chapter %d: %s' % (2, 'Data Structures')
'Chapter 2: Data Structures'
```

**2.5. Working With Files**

Python provides a built-in function `open` to open a file, which returns a file object.

```
f = open('foo.txt', 'r')  # open a file in read mode
f = open('foo.txt', 'w')  # open a file in write mode
f = open('foo.txt', 'a')  # open a file in append mode
```

The second argument to `open` is optional, and defaults to `'r'` when not specified. Unix does not distinguish binary files from text files, but Windows does. On Windows, `'rb'`, `'wb'`, `'ab'` should be used to open a binary file in read, write and append mode respectively.

The easiest way to read the contents of a file is by using the `read` method.

```
>>> open('foo.txt').read()
'first line\nsecond line\nlast line\n'
```

Contents of a file can be read line-wise using the `readline` and `readlines` methods. The `readline` method returns an empty string when there is nothing more to read in a file.

```
>>> open('foo.txt').readlines()
['first line\n', 'second line\n', 'last line\n']

>>> f = open('foo.txt')
>>> f.readline()
'first line\n'
>>> f.readline()
'second line\n'
>>> f.readline()
'last line\n'
>>> f.readline()
''
```

The `write` method is used to write data to a file opened in write or append mode.

```
>>> f = open('foo.txt', 'w')
>>> f.write('a\nb\nc')
>>> f.close()
>>> f = open('foo.txt', 'a')
>>> f.write('d\n')
>>> f.close()
```

The `writelines` method is convenient to use when the data is available as a list of lines.

```
>>> f = open('foo.txt', 'w')
>>> f.writelines(['a\n', 'b\n', 'c\n'])
>>> f.close()
```

**2.5.1. Example: Word Count**

Let's try to compute the number of characters, words and lines in a file.

The number of characters in a file is the same as the length of its contents.

```
def charcount(filename):
    return len(open(filename).read())
```

The number of words in a file can be found by splitting the contents of the file.

```
def wordcount(filename):
    return len(open(filename).read().split())
```

The number of lines in a file can be found from the `readlines` method.
```
def linecount(filename):
    return len(open(filename).readlines())
```

**Problem 17:** Write a program `reverse.py` to print lines of a file in reverse order.

```
$ cat she.txt
She sells seashells on the seashore;
The shells that she sells are seashells I'm sure.
So if she sells seashells on the seashore,
I'm sure that the shells are seashore shells.

$ python reverse.py she.txt
I'm sure that the shells are seashore shells.
So if she sells seashells on the seashore,
The shells that she sells are seashells I'm sure.
She sells seashells on the seashore;
```

**Problem 18:** Write a program to print each line of a file in reverse order.

**Problem 19:** Implement the unix commands `head` and `tail`. The `head` and `tail` commands take a file as argument and print its first and last 10 lines respectively.

**Problem 20:** Implement the unix command `grep`. The `grep` command takes a string and a file as arguments and prints all lines in the file which contain the specified string.

```
$ python grep.py she.txt sure
The shells that she sells are seashells I'm sure.
I'm sure that the shells are seashore shells.
```

**Problem 21:** Write a program `wrap.py` that takes a filename and width as arguments and wraps the lines longer than `width`.

```
$ python wrap.py she.txt 30
I'm sure that the shells are s
eashore shells.
So if she sells seashells on t
he seashore,
The shells that she sells are
seashells I'm sure.
She sells seashells on the sea
shore;
```

**Problem 22:** The above wrap program is not so nice because it breaks lines in the middle of words. Can you write a new program `wordwrap.py` that works like `wrap.py`, but breaks lines only at word boundaries?

```
$ python wordwrap.py she.txt 30
I'm sure that the shells are
seashore shells.
So if she sells seashells on
the seashore,
The shells that she sells are
seashells I'm sure.
She sells seashells on the
seashore;
```

**Problem 23:** Write a program `center_align.py` to center align all lines in the given file.

```
$ python center_align.py she.txt
  I'm sure that the shells are seashore shells.
   So if she sells seashells on the seashore,
The shells that she sells are seashells I'm sure.
      She sells seashells on the seashore;
```

**2.6. List Comprehensions**

List comprehensions provide a concise way of creating lists. Many times a complex task can be modelled in a single line.

Here are some simple examples for transforming a list.

```
>>> a = range(10)
>>> a
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> [x for x in a]
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> [x*x for x in a]
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
>>> [x+1 for x in a]
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```

It is also possible to filter a list using `if` inside a list comprehension.

```
>>> a = range(10)
>>> [x for x in a if x % 2 == 0]
[0, 2, 4, 6, 8]
>>> [x*x for x in a if x % 2 == 0]
[0, 4, 16, 36, 64]
```

It is possible to iterate over multiple lists using the built-in function `zip`.

```
>>> a = [1, 2, 3, 4]
>>> b = [2, 3, 5, 7]
>>> zip(a, b)
[(1, 2), (2, 3), (3, 5), (4, 7)]
>>> [x+y for x, y in zip(a, b)]
[3, 5, 8, 11]
```

We can use multiple `for` clauses in a single list comprehension.

```
>>> [(x, y) for x in range(5) for y in range(5) if (x+y)%2 == 0]
[(0, 0), (0, 2), (0, 4), (1, 1), (1, 3), (2, 0), (2, 2), (2, 4), (3, 1), (3, 3), (4, 0), (4, 2), (4, 4)]
>>> [(x, y) for x in range(5) for y in range(5) if (x+y)%2 == 0 and x != y]
[(0, 2), (0, 4), (1, 3), (2, 0), (2, 4), (3, 1), (4, 0), (4, 2)]
>>> [(x, y) for x in range(5) for y in range(x) if (x+y)%2 == 0]
[(2, 0), (3, 1), (4, 0), (4, 2)]
```

The following example finds all Pythagorean triplets using numbers below 25. `(x, y, z)` is called a Pythagorean triplet if `x*x + y*y == z*z`.
```
>>> n = 25
>>> [(x, y, z) for x in range(1, n) for y in range(x, n) for z in range(y, n) if x*x + y*y == z*z]
[(3, 4, 5), (5, 12, 13), (6, 8, 10), (8, 15, 17), (9, 12, 15), (12, 16, 20)]
```

**Problem 24:** Provide an implementation for the `zip` function using list comprehensions.

```
>>> zip([1, 2, 3], ["a", "b", "c"])
[(1, "a"), (2, "b"), (3, "c")]
```

**Problem 25:** Python provides a built-in function `map` that applies a function to each element of a list. Provide an implementation for `map` using list comprehensions.

```
>>> def square(x): return x * x
...
>>> map(square, range(5))
[0, 1, 4, 9, 16]
```

**Problem 26:** Python provides a built-in function `filter(f, a)` that returns the items of the list `a` for which `f(item)` returns true. Provide an implementation for `filter` using list comprehensions.

```
>>> def even(x): return x % 2 == 0
...
>>> filter(even, range(10))
[0, 2, 4, 6, 8]
```

**Problem 27:** Write a function `triplets` that takes a number `n` as argument and returns a list of triplets such that the sum of the first two elements of the triplet equals the third element, using numbers below `n`. Please note that `(a, b, c)` and `(b, a, c)` represent the same triplet.

```
>>> triplets(5)
[(1, 1, 2), (1, 2, 3), (1, 3, 4), (2, 2, 4)]
```

**Problem 28:** Write a function `enumerate` that takes a list and returns a list of tuples containing `(index, item)` for each item in the list.

```
>>> enumerate(["a", "b", "c"])
[(0, "a"), (1, "b"), (2, "c")]
>>> for index, value in enumerate(["a", "b", "c"]):
...     print(index, value)
0 a
1 b
2 c
```

**Problem 29:** Write a function `array` to create a 2-dimensional array. The function should take both dimensions as arguments. The value of each element can be initialized to None:

```
>>> a = array(2, 3)
>>> a
[[None, None, None], [None, None, None]]
>>> a[0][0] = 5
>>> a
[[5, None, None], [None, None, None]]
```

**Problem 30:** Write a python function `parse_csv` to parse csv (comma separated values) files.
```
>>> print(open('a.csv').read())
a,b,c
1,2,3
2,3,4
3,4,5

>>> parse_csv('a.csv')
[['a', 'b', 'c'], ['1', '2', '3'], ['2', '3', '4'], ['3', '4', '5']]
```

**Problem 31:** Generalize the above implementation of the csv parser to support any delimiter and comments.

```
>>> print(open('a.txt').read())
# elements are separated by ! and comment indicator is #
a!b!c
1!2!3
2!3!4
3!4!5

>>> parse('a.txt', '!', '#')
[['a', 'b', 'c'], ['1', '2', '3'], ['2', '3', '4'], ['3', '4', '5']]
```

**Problem 32:** Write a function `mutate` to compute all words generated by a single mutation on a given word. A mutation is defined as inserting a character, deleting a character, replacing a character, or swapping 2 consecutive characters in a string. For simplicity, consider only letters from `a` to `z`.

```
>>> words = mutate('hello')
>>> 'helo' in words
True
>>> 'cello' in words
True
>>> 'helol' in words
True
```

**Problem 33:** Write a function `nearly_equal` to test whether two strings are nearly equal. Two strings `a` and `b` are nearly equal when `a` can be generated by a single mutation on `b`.

```
>>> nearly_equal('python', 'perl')
False
>>> nearly_equal('perl', 'pearl')
True
>>> nearly_equal('python', 'jython')
True
>>> nearly_equal('man', 'woman')
False
```

**2.7. Dictionaries**

Dictionaries are like lists, but they can be indexed with non-integer keys also. Unlike lists, dictionaries are not ordered.

```
>>> a = {'x': 1, 'y': 2, 'z': 3}
>>> a['x']
1
>>> a['z']
3
>>> b = {}
>>> b['x'] = 2
>>> b[2] = 'foo'
>>> b[(1, 2)] = 3
>>> b
{(1, 2): 3, 'x': 2, 2: 'foo'}
```

The `del` keyword can be used to delete an item from a dictionary.

```
>>> a = {'x': 1, 'y': 2, 'z': 3}
>>> del a['x']
>>> a
{'y': 2, 'z': 3}
```

The `keys` method returns all keys in a dictionary, the `values` method returns all values in a dictionary, and the `items` method returns all key-value pairs in a dictionary.
```
>>> a.keys()
['x', 'y', 'z']
>>> a.values()
[1, 2, 3]
>>> a.items()
[('x', 1), ('y', 2), ('z', 3)]
```

The `for` statement can be used to iterate over a dictionary.

```
>>> for key in a: print(key)
...
x
y
z
>>> for key, value in a.items(): print(key, value)
...
x 1
y 2
z 3
```

Presence of a key in a dictionary can be tested using the `in` operator or the `has_key` method.

```
>>> 'x' in a
True
>>> 'p' in a
False
>>> a.has_key('x')
True
>>> a.has_key('p')
False
```

Other useful methods on dictionaries are `get` and `setdefault`.

```
>>> d = {'x': 1, 'y': 2, 'z': 3}
>>> d.get('x', 5)
1
>>> d.get('p', 5)
5
>>> d.setdefault('x', 0)
1
>>> d
{'x': 1, 'y': 2, 'z': 3}
>>> d.setdefault('p', 0)
0
>>> d
{'y': 2, 'x': 1, 'z': 3, 'p': 0}
```

Dictionaries can be used in string formatting to specify named parameters.

```
>>> 'hello %(name)s' % {'name': 'python'}
'hello python'
>>> 'Chapter %(index)d: %(name)s' % {'index': 2, 'name': 'Data Structures'}
'Chapter 2: Data Structures'
```

**2.7.1. Example: Word Frequency**

Suppose we want to find the number of occurrences of each word in a file. A dictionary can be used to store the number of occurrences for each word.

Let's first write a function to count the frequency of words, given a list of words.

```
def word_frequency(words):
    """Returns frequency of each word given a list of words.

    >>> word_frequency(['a', 'b', 'a'])
    {'a': 2, 'b': 1}
    """
    frequency = {}
    for w in words:
        frequency[w] = frequency.get(w, 0) + 1
    return frequency
```

Getting words from a file is trivial.

```
def read_words(filename):
    return open(filename).read().split()
```

We can combine these two functions to find the frequency of all words in a file.

```
def main(filename):
    frequency = word_frequency(read_words(filename))
    for word, count in frequency.items():
        print(word, count)

if __name__ == "__main__":
    import sys
    main(sys.argv[1])
```

**Problem 34:** Improve the above program to print the words in the descending order of the number of occurrences.
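Problem 34 above is a natural place to combine `items` with `sorted`. One possible sketch (just one approach, not the only correct answer; `word_frequency` is repeated from above so the snippet is self-contained) sorts the key-value pairs by count, in reverse order, before printing:

```python
def word_frequency(words):
    """Returns frequency of each word given a list of words."""
    frequency = {}
    for w in words:
        frequency[w] = frequency.get(w, 0) + 1
    return frequency


def print_by_frequency(words):
    """Print words and their counts, most frequent first."""
    frequency = word_frequency(words)
    # sort the (word, count) pairs by count, largest first
    pairs = sorted(frequency.items(), key=lambda item: item[1], reverse=True)
    for word, count in pairs:
        print(word, count)


print_by_frequency(['a', 'b', 'a', 'c', 'a', 'b'])
# a 3
# b 2
# c 1
```

The same idea carries over to the file-based `main` program: sort `frequency.items()` before looping over it.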
**Problem 35:** Write a program to count the frequency of characters in a given file. Can you use character frequency to tell whether the given file is a Python program file, a C program file or a text file?

**Problem 36:** Write a program to find anagrams in a given list of words. Two words are called anagrams if one word can be formed by rearranging the letters of another. For example 'eat', 'ate' and 'tea' are anagrams.

```
>>> anagrams(['eat', 'ate', 'done', 'tea', 'soup', 'node'])
[['eat', 'ate', 'tea'], ['done', 'node'], ['soup']]
```

**Problem 37:** Write a function `valuesort` to sort the values of a dictionary based on the key.

```
>>> valuesort({'x': 1, 'y': 2, 'a': 3})
[3, 1, 2]
```

**Problem 38:** Write a function `invertdict` to interchange keys and values in a dictionary. For simplicity, assume that all values are unique.

```
>>> invertdict({'x': 1, 'y': 2, 'z': 3})
{1: 'x', 2: 'y', 3: 'z'}
```

**2.7.2. Understanding the Python Execution Environment**

Python stores the variables we use in a dictionary. The `globals()` function returns all the global variables in the current environment.

```
>>> globals()
{'__builtins__': <module '__builtin__' (built-in)>, '__name__': '__main__', '__doc__': None}
>>> x = 1
>>> globals()
{'__builtins__': <module '__builtin__' (built-in)>, '__name__': '__main__', '__doc__': None, 'x': 1}
>>> x = 2
>>> globals()
{'__builtins__': <module '__builtin__' (built-in)>, '__name__': '__main__', '__doc__': None, 'x': 2}
>>> globals()['x'] = 3
>>> x
3
```

Just like `globals`, Python also provides a function `locals` which gives all the local variables in a function.

```
>>> def f(a, b): print(locals())
...
>>> f(1, 2)
{'a': 1, 'b': 2}
```

One more example:

```
>>> def f(name):
...     return "Hello %(name)s!" % locals()
...
>>> f("Guido")
'Hello Guido!'
```
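To tie the dictionary methods back to the exercises, here is one possible sketch for Problem 36 (a hint of mine, not the book's official solution): two words are anagrams exactly when their sorted letters are equal, so the sorted letters make a natural dictionary key, and `setdefault` collects the groups.

```python
def anagrams(words):
    """Group words that are anagrams of each other (one possible approach)."""
    groups = {}
    for word in words:
        # anagrams share the same letters, hence the same sorted-letter key
        key = ''.join(sorted(word))
        groups.setdefault(key, []).append(word)
    return list(groups.values())


print(anagrams(['eat', 'ate', 'done', 'tea', 'soup', 'node']))
# [['eat', 'ate', 'tea'], ['done', 'node'], ['soup']] on Python 3.7+,
# where dictionaries preserve insertion order
```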
estherwavinya
230,617
How to Use Make Auth in Laravel 6
In this article, we will discuss “how to use make auth in Laravel 6”. Officiall...
0
2020-01-02T18:47:07
https://codebriefly.com/how-to-use-make-auth-in-laravel-6/
laravel, laravel6, auth, laravelauthenticati
---
title: How to Use Make Auth in Laravel 6
published: true
date: 2019-09-25 19:43:18 UTC
tags: Laravel,Laravel 6,Auth,Laravel Authenticati
canonical_url: https://codebriefly.com/how-to-use-make-auth-in-laravel-6/
---

![how-to-use-make-auth-in-laravel-6](https://codebriefly.com/wp-content/uploads/2019/09/how-to-use-make-auth-in-laravel-6.jpg)

In this article, we will discuss “how to use make auth in Laravel 6”. Officially, Laravel 6 is available now, with some changes, as expected. Because you are reading this article, I assume you are still not familiar with Laravel 6. Note that it’s an LTS version of Laravel. If […]

The post [How to Use Make Auth in Laravel 6](https://codebriefly.com/how-to-use-make-auth-in-laravel-6/) appeared first on [Code Briefly](https://codebriefly.com).
erpankajsood
230,618
Laravel Scout with TNTSearch Driver
In this article, we will discuss “Laravel Scout with TNTSearch Driver”. As we k...
0
2020-01-02T18:51:46
https://codebriefly.com/laravel-scout-with-tntsearch-driver/
laravel, laravel6, laravelcodesnippet
---
title: Laravel Scout with TNTSearch Driver
published: true
date: 2019-10-03 20:00:04 UTC
tags: Laravel,Laravel 6,Laravel Code Snippet
canonical_url: https://codebriefly.com/laravel-scout-with-tntsearch-driver/
---

![Laravel Scout with TNTSearch Driver](https://codebriefly.com/wp-content/uploads/2019/10/laravel-scout-with-tntsearch-driver.jpg)

In this article, we will discuss “Laravel Scout with TNTSearch Driver”. As we know, Laravel provides rich features to make development easy and smooth. Laravel provides an official package, “Laravel Scout”, which helps to create full-text search functionality in your application. Prerequisites: here, we need a Laravel installation to make this Laravel Scout functionality. […]

The post [Laravel Scout with TNTSearch Driver](https://codebriefly.com/laravel-scout-with-tntsearch-driver/) appeared first on [Code Briefly](https://codebriefly.com).
erpankajsood
230,626
Brief Understanding Laravel Scopes
In this article, we will discuss “Brief Understanding Laravel Scopes”. I will t...
0
2020-01-02T18:49:20
https://codebriefly.com/brief-understanding-laravel-scopes/
laravel, laravel57, laravel58, laravel6
---
title: Brief Understanding Laravel Scopes
published: true
date: 2019-11-25 15:21:09 UTC
tags: Laravel,Laravel 5.7,Laravel 5.8,Laravel 6
canonical_url: https://codebriefly.com/brief-understanding-laravel-scopes/
---

![Brief Understanding Laravel Scopes](https://codebriefly.com/wp-content/uploads/2019/11/brief-understanding-laravel-scopes.jpg)

In this article, we will discuss “Brief Understanding Laravel Scopes”. I will try to explain to you how you can use this in your Laravel application. As we know, Laravel provides lots of rich features to make development easy and simple. You can check the official documentation here. Now, the first question is: Why we […]

The post [Brief Understanding Laravel Scopes](https://codebriefly.com/brief-understanding-laravel-scopes/) appeared first on [Code Briefly](https://codebriefly.com).
erpankajsood
230,636
Organized Rails application with interactors and utilities
This article was originally published here. If you have ever used Rails, you know that there is no...
0
2020-01-02T19:08:37
https://dev.to/stojakovic99/organized-rails-application-with-interactors-and-utilities-3m9l
ruby, rails, codequality
*This article was originally published [here](https://nikolastojakovic.com/2020/01/02/organized-rails-application-with-interactors-and-utilities/).*

If you have ever used Rails, you know that there is no better thing for quick prototyping. It doesn’t matter how easy your favorite framework is for that. Rails will be dozens of times easier, especially given the fact that it is now a mature solution. That’s the biggest reason why so many startups have used it for building their applications.

I’m not going to talk about Rails today. There are hundreds of articles which have explained it better than I ever could. What I’m going to talk about is the situation when your application becomes complex. If you have used Rails for a long time, you definitely know what I’m talking about. You just know the feeling when that point comes. Rubocop yells at you. Your significant other yells at you. Your dog yells at you. Suddenly, the whole world is against you. You’re trying to keep controllers lean but you end up with fat models. Ruby doesn’t seem so shiny anymore. You know where the problem lies, but you don’t know where to start. Besides having cats instead of dogs, I was in exactly the same situation recently.

## What happened?

At that time, I was trying to solve the problem of fat controllers by introducing services. Our logic was more complex than in the example illustrated here, but you’ll get the point:

```rb
module Services
  class UserService
    extend self

    def create(params)
      User.create!(params)
    end
  end
end
```

Which would be called in a controller:

```rb
class UsersController < BaseController
  def create
    ::Services::UserService.create(params)
  end
end
```

The problem emerged when services became too general. For example, instead of creating separate services for handling user creation, update and deletion, I put everything in `UserService`. I managed to keep my controllers and models lean. But now I ended up with fat services.
![good job](https://thumbs.gfycat.com/QueasyReliableCuckoo-size_restricted.gif)

Now you may ask why I just didn’t split them? Well, in fact, I was thinking about that. Unfortunately, we had some issues shortly after this. They weren’t related to the problem discussed in this article, but they required us to start from scratch. Starting from scratch is generally not recommended (and for the right reason), but in this case it gave us the chance to put things in place.

## Interactors and utilities to the rescue

Before we started working on a new version of the application, a colleague introduced me to the idea of interactors and utilities. Interactors are basically just an abstraction over some small part of the application’s business logic. Simply said, they’re single-purpose objects. If you’re familiar with design patterns, you probably already know this as the [Command pattern](https://en.wikipedia.org/wiki/Command_pattern).

There are a few approaches you can use for organizing your code base with interactors. You can use a [gem](https://github.com/collectiveidea/interactor) or you can implement them on your own. We chose to do the latter, as our use case was quite simple. Let’s see how it looks on an example of creating a user.

```rb
module Users
  module Interactors
    module Create
      module_function

      def call(params)
        ::Users::Utils::Create.call(params)
      end
    end
  end
end
```

And that’s it. You now have an interactor which calls a utility with the same name. What the heck is a utility now, and where is all the logic? Well, I lied a bit. The interactor won’t do anything on its own. It will rather serve as a *middleman* between the controller and utilities. Together, the utilities are going to do the process of creating a new user. After all, this process can have many parts – sanitizing parameters, authentication, sending e-mail and so on. Let’s see how the utilities for the interactor above could look.
```rb
module Users
  module Utils
    module Create
      module_function

      def call(params)
        user_params = ::Users::Utils::SanitizeParams.call(params)
        user = User.create!(user_params)

        ::Users::Utils::AuthenticateUser.call(user)
        ::Users::Utils::SendWelcomeEmail.call(user)
      end
    end
  end
end
```

```rb
module Users
  module Utils
    module SanitizeParams
      module_function

      def call(params)
        params.require(:user).permit(
          :username,
          :password,
          :email
        )
      end
    end
  end
end
```

```rb
module Users
  module Utils
    module AuthenticateUser
      module_function

      def call(user)
        session[:user_id] = user.id
      end
    end
  end
end
```

```rb
module Users
  module Utils
    module SendWelcomeEmail
      module_function

      def call(user)
        UserMailer.with(user: user).welcome_email.deliver_later
      end
    end
  end
end
```

In the end, you will just have to call the interactor from the controller.

```rb
class UsersController < BaseController
  def create
    ::Users::Interactors::Create.call(params)
  end
end
```

This has multiple advantages:

* it will ensure that each part does only one thing
* you’ll be able to add more features later more easily
* it will help you to keep your models and controllers lean
* you won’t have to use `concerns` directories

## Okay, where should I put all these files?

Wherever you want, but if you ask me, you should put them in `app/lib`. This way Rails will automatically require them. Here is the directory structure for the example above:

```
app/
  lib/
    users/
      interactors/
        create.rb
      utils/
        create.rb
        sanitize_params.rb
        authenticate_user.rb
        send_welcome_email.rb
```

## Learn more

* [Keynote: Architecture The Lost Years by Robert Martin](https://www.youtube.com/watch?v=WpkDN78P884)
* [Command Design Pattern](https://sourcemaking.com/design_patterns/command)
stojakovic99
236,674
Execute Django tests in your github actions
When you are trying github actions with your django project, and your database is for example, Postgr...
0
2020-01-12T15:31:25
https://dev.to/gamorales/execute-django-tests-in-your-github-actions-ckg
django, testing, github, actions
When you are trying GitHub Actions with your Django project, and your database is, for example, PostgreSQL, running tests in the workflow can be a problem (personal experience 😋).

The problem is, you don't have an environment or DB engine running inside the GitHub Actions platform, nor a server where GitHub Actions could run tests. So, when it's the turn of the "run tests" step, you'll get this error.

![Running django tests with connection error](https://thepracticaldev.s3.amazonaws.com/i/ebn90vzctslhe2869kbt.jpg)

So, what do we do? Well, an easy way to fix this problem is to check whether your project is running in test mode and, if so, change your DB engine to SQLite inside settings.py. This way, GitHub Actions can run all the tests.

{% gist https://gist.github.com/gamorales/6f6b77d6e89606f8dfdba4c171c2f949 %}

![Running django tests Ok](https://thepracticaldev.s3.amazonaws.com/i/hieac53fy2757dahl3uo.jpg)

This is my first post on dev.to. I hope it can help somebody with the same problem I had.

[Github actions workflow](https://gist.github.com/gamorales/31b41565764dbea4acbb888c1f3a3678)
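For reference, the settings.py idea described above looks roughly like the sketch below. This is a minimal sketch of my own, not a copy of the linked gist: the `DB_*` environment variable names are placeholders I chose for illustration.

```python
import os
import sys

# `python manage.py test` puts 'test' in sys.argv, so we can detect test runs
TESTING = 'test' in sys.argv

if TESTING:
    # In-memory SQLite: no external database service needed in the workflow
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.sqlite3',
            'NAME': ':memory:',
        }
    }
else:
    # Regular PostgreSQL settings for development/production
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql',
            'NAME': os.environ.get('DB_NAME', 'app'),
            'USER': os.environ.get('DB_USER', 'postgres'),
            'PASSWORD': os.environ.get('DB_PASSWORD', ''),
            'HOST': os.environ.get('DB_HOST', 'localhost'),
            'PORT': os.environ.get('DB_PORT', '5432'),
        }
    }
```

If you would rather keep testing against real PostgreSQL, an alternative is to declare a postgres service container in the workflow file, but the SQLite switch is the simplest fix.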
gamorales
230,638
Uno Platform 2.0 RELOADED – General Availability, Hot Reload and more!
In September 2019 at our inaugural UnoConf, we had released the preview of Uno Platform 2.0. Today...
0
2020-02-20T14:07:25
https://platform.uno/uno-platform-2-0-reloaded-general-availability-hot-reload-and-more/
news, android, csharp, ios
---
title: Uno Platform 2.0 RELOADED – General Availability, Hot Reload and more!
published: true
date: 2020-01-02 18:06:48 UTC
tags: News,Android,csharp,iOS
canonical_url: https://platform.uno/uno-platform-2-0-reloaded-general-availability-hot-reload-and-more/
---

![](https://s3.amazonaws.com/uno-website-assets/wp-content/uploads/2019/12/19153613/Uno-Platform-2-Available-Now.jpg)

In September 2019 at our inaugural UnoConf, we released the preview of Uno Platform 2.0. Today we are announcing general availability of Uno Platform 2.0, which includes not only the features announced at UnoConf, but also many others.

One of the 2.0 release highlights is the fully functional **XAML Hot Reload, which works across Web (via WebAssembly), Android and iOS.** Maybe best of all, you can do it **ALL AT ONCE** across all those platforms. This first blog post in the Uno Platform 2.0 _Reloaded_ series will give a lot of detail about XAML Hot Reload as well as the rest of the release. In addition, over the next few days we will drip-feed dedicated blog posts for a few other release highlights. That said, if you’d like to jump to the bulleted list of all new features, bug fixes and known issues, you can see it at [our GitHub](https://github.com/unoplatform/uno/releases).

We are very proud of the 2.0 release. **Not only are we releasing a handful of major improvements, but we have also introduced over 70 smaller functionality improvements and we have closed over 80 bugs reported by the community.** Most importantly, over the past few months we saw massive contributions by the community, especially during Hacktoberfest. As of today we are humbled to have a very active contributor community of **over 130 contributors**! THANK YOU ALL for helping us make #WinUIEverywhere.

With that said, let’s look at what is in the release and also spend a lot more time on XAML Hot Reload.
**What is XAML Hot Reload Anyway?**

Simply put, XAML Hot Reload with Uno Platform and Visual Studio gives you the ability to design your app’s UI while the app is running, without the need to re-build your project. Here it is in action. For this example, we are using the Uno Platform sample app UADO (Universal Azure DevOps Organizer). Note the simultaneous UI changes on both the web client as well as the native application running on iOS (/Android).

<iframe title="Uno Platform XAML Hot Reload" width="500" height="281" src="https://www.youtube.com/embed/c_2p7_tqPN8?feature=oembed" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

**How does it Work Behind the Scenes?**

The architecture of Uno XAML Hot Reload is built from two parts: the RC Server side (Remote Control Server) and the RC Client side (Remote Control Client). The developer side is a .NET Core app that sits in between Visual Studio (or VS Code in the future) and the application. The app side is a WebSockets client which connects back to the RC server, after having been provided with the addresses of the server through the Visual Studio add-in. WebSockets was chosen because it is a communication channel that works on iOS, Android and WebAssembly without restrictions, particularly around CORS and browsers.

Once the application has started, and the connection to the RC server has been established, every time a modification is made to one of the XAML files that the app uses, the new content is sent back to the app. Every location where that file is being loaded in the app is replaced by a new instance of that new XAML, showing you the new content. This means that reloading a UserControl does not reload the whole page that contains that control, making the whole process faster!
There are some limitations to the current implementation:

- The code of a Page or UserControl constructor is not executed on reload
- Events in code-behind are not reassigned
- Standalone ResourceDictionary files cannot be updated

Code modifications are not reloaded, only XAML files are. A more comprehensive list of known issues and limitations [**can be found here**](https://platform.uno/docs/articles/features/working-with-xaml-hot-reload.html#known-issues).

**Next Steps**

If you are new to the Uno Platform, you may want to run through the [Getting Started tutorial.](https://platform.uno/docs/articles/getting-started-tutorial-1.html) This will ensure you have the correct environment set up and that you have the latest release of Uno. If you have already worked with Uno, you may want to update your Uno package via your Visual Studio NuGet package manager.

If you’d like to try Hot Reload yourself, but with a smaller app, you can try our QuickStart app. Try these simple steps:

1. Import the Uno QuickStart solution from our repo – [https://github.com/unoplatform/Uno.QuickStart](https://github.com/unoplatform/Uno.QuickStart) and verify it runs on your machine
2. Build and run the WebAssembly app without the debugger. Keep it running.
3. Build and run the iOS app without the debugger. Keep it running.
4. Make changes:
   1. Go to the file MainPage.xaml and change the Text attribute content
   2. Save the file
   3. Observe that both iOS and WebAssembly got updated

If you need more information, please consult the [XAML Hot Reload docs](https://platform.uno/docs/articles/features/working-with-xaml-hot-reload.html), including the list of known issues.

Stay tuned for more blog posts in the Uno Platform 2.0 Reloaded series.

Jerome Laban, on behalf of the complete Uno Platform team
CTO, Uno Platform. Microsoft MVP.
[https://twitter.com/jlaban](https://twitter.com/jlaban) The post [Uno Platform 2.0 RELOADED – General Availability, Hot Reload and more!](https://platform.uno/uno-platform-2-0-reloaded-general-availability-hot-reload-and-more/) appeared first on [Uno Platform](https://platform.uno).
unoplatform
230,642
An Inclusive Cross-Device Testing Checklist
Here’s my web compatibility checklist that includes comprehensive accessibility testing. Ph...
0
2020-01-02T19:48:56
https://dev.to/mpuckett/an-inclusive-cross-device-testing-checklist-2h6n
webdev, a11y, qa, html
---
title: An Inclusive Cross-Device Testing Checklist
published: true
description:
tags: #webdev #a11y #qa #html
---

Here’s my web compatibility checklist that includes comprehensive accessibility testing.

## Physical Devices

I’m testing with the following real devices:

* MacBook
* iPhone with a notch
* Pixel with a notch

That’s it! I’m able to use macOS Bootcamp to dual boot to native Windows 10. You can, of course, emulate both iOS and Android. This is useful for inspecting layouts without the notch. But an emulator sometimes doesn’t behave quite the same way. If you are on a Windows or Linux PC, there won’t be a way to test any proprietary Apple software. I’ll look at how to emulate Safari later — WebKit is open source.

## Browsers

You should look at your analytics to see what kind of coverage you want to support for older browsers. In most cases, unless I want to charge more for the engineering time, the following list is my baseline. I get to use ES6, CSS Grid, CSS variables, Web Components, and other modern features. When testing across browsers, I usually like to have a common set of workflows using various amounts of data. The list:

* Latest Chrome (79)
  * macOS
  * Windows
  * Android
* N-10 Latest Chrome (69)
  * macOS
  * Windows
  * Android
* Latest Firefox (71)
  * macOS
  * Windows
  * Android
* Latest Firefox Enterprise (ESR 68)
  * macOS
  * Windows
* Latest Desktop Safari (13)
  * macOS
* N-1 Latest Desktop Safari (12)
  * macOS
* Latest Mobile Safari (13)
  * iOS
* N-1 Latest Mobile Safari (12)
  * iOS

What about Internet Explorer and Edge? IE is present but buried in Windows 10, and Edge Beta with Chromium is available for download. It will soon replace the existing Edge. If you have a high number of Edge users, add it to the browser support list, or consider providing a download link to Edge Beta.

If you want to emulate Safari on Windows, you’ll need to install Ubuntu from the Windows Store. You’ll need to install an X server and have it running.
Then run `apt-get update && apt-get install epiphany-browser` in the Ubuntu shell. Add `export DISPLAY=:0` to your bash profile. Then you can run `epiphany-browser` to test WebKit on a PC.

## Assistive Technologies

Are you testing for accessibility? You should be! People with disabilities, such as blind people, need to be able to interact with every app and website. That’s done with semantic HTML and ARIA. All operating systems now come with a screen reader that interacts with the underlying Accessibility API. You can use ARIA to modify the Accessibility API. When using a screen reader, you use a keyboard to navigate between items using a virtual cursor (essentially the same as the focus cursor) and interact with the web page. There is usually a developer setting to show the text on screen as it would be spoken.

Here are the assistive technologies I test with:

* Windows Narrator (Windows 10)
* VoiceOver (macOS and iOS)
* TalkBack (Android)
* NVDA (Windows 10) is a free download

## Manual and Automated Accessibility Testing

I test the markup at various stages (adding items to cart, deleting items from cart, etc.) using three helpful developer tools:

* [validator.nu](https://validator.w3.org/nu/) HTML/ARIA Validator
* [axe Chromium Extension](https://chrome.google.com/webstore/detail/axe-web-accessibility-tes/lhdoppojpmngadmnindnejefpokejbdd)
* [Accessibility Insights Chromium Extension](https://chrome.google.com/webstore/detail/accessibility-insights-fo/pbjjkligggfmakdaogkfomddhfmpjeni) - This will walk you through many different accessibility scenarios

## Conclusion

Of course, none of the time spent testing comes for free. Make sure you work with your client or project manager to integrate your inclusive testing plan into the overall project plan.

So, did I miss any? Or go overboard? Tell me about your testing plan!
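Automated checkers like axe work by walking the page and flagging rule violations. As a toy illustration only (a real tool implements hundreds of rules against the live DOM, and this is not the axe API), here is the shape of one classic rule, "images must have alternative text", applied to a plain object tree:

```javascript
// Toy accessibility rule: every <img> node needs a non-empty alt attribute.
// Real tools (axe, Accessibility Insights) run many such rules against the
// live DOM; this walks a plain object tree purely for illustration.
function findImagesMissingAlt(node, path = 'root', violations = []) {
  if (node.tag === 'img' && !(node.attrs && node.attrs.alt)) {
    violations.push(path); // record where the violation lives
  }
  for (const [i, child] of (node.children || []).entries()) {
    findImagesMissingAlt(child, `${path} > ${child.tag}[${i}]`, violations);
  }
  return violations;
}
```

Running this kind of check at each workflow stage (cart with zero items, cart with many items) is exactly why the automated tools above are worth wiring into your plan: the markup changes as the data changes.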
mpuckett
230,683
Node, Express, SSL Certificate: Run HTTPS Server from scratch in 5 steps
Node, Express, SSL Certificate: Run HTTPS Server from scratch in 5 steps I've decided to write abo...
0
2020-01-02T21:17:06
https://dev.to/omergulen/step-by-step-node-express-ssl-certificate-run-https-server-from-scratch-in-5-steps-5b87
beginners, tutorial, javascript, node
> Node, Express, SSL Certificate: Run HTTPS Server from scratch in 5 steps

I've decided to write this tutorial after I struggled while coding one of my web apps for a customer. It was a simple chart for the web, but it collected data from a Telegram Bot. I held the data in MongoDB and prepared a quick API for fetching it, but ran into many problems along the way, and the SSL Certificate was one of them.

So in this tutorial, I will go through my mistakes & problems and my solutions to them. If you want to skip straight to the short version, you can [continue from here](#1-ssh-into-the-server).

***In this article I will not mention MongoDB-related code or problems.***

## 1. Creating My Basic API Server with Express

In my projects, I prefer creating an `npm` or `yarn` environment after creating the project folder. So, I've done it with the following commands:

```bash
mkdir my-project && cd my-project
yarn init
```

I just spammed `Enter` after `yarn init` and created the project environment with default settings. *(I prefer `yarn` over `npm` if there are no obstacles to using it.)*

Then, I installed `express` into my project, locally with:

```bash
yarn add express
```

You can also use:

```bash
npm install express
```

Then, I created my single source file `index.js` and inserted these lines below:

```js
// import express
const express = require('express');

// create new express app and assign it to `app` constant
const app = express();

// server port configuration
const PORT = 8080;

// create a route for the app
app.get('/', (req, res) => {
  res.send('Hello dev.to!');
});

// server starts listening on the `PORT`
app.listen(PORT, () => {
  console.log(`Server running at: http://localhost:${PORT}/`);
});
```

So far, I imported the `express` package, created an instance of it and assigned it to the `app` constant.
I set my `PORT` variable, created a route for `endpoint` handling in my API server, and called the `app.listen(PORT, callback())` method to start my server listening on the specified port.

I went back to my terminal and executed the command below in my project directory:

```bash
node index.js
```

which starts my server and logs to the console as below:

```
Server running at http://localhost:8080/
```

Then, I switched to my browser and browsed to `http://localhost:8080/` and the following page appeared:

![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/y50os9hzwtzqdq1q86e8.png)

So far so good. My app is correctly listening on my port.

Afterwards, having confirmed my initial trial works, I wanted to test whether I could handle more endpoints. So I just added another `route` to my code.

```js
app.get('/omergulen', (req, res) => {
  res.send('Hello Omer! Welcome to dev.to!');
});
```

I expect this to respond only when I enter the `/omergulen` endpoint in my browser. So, I stopped my running server with `Control+C` and re-started it, since hot-reloading is not inherent in how I run my app. I switched to my browser and visited `http://localhost:8080/omergulen` and it was working; to be sure, I re-visited `http://localhost:8080/` and it was also working as expected.

![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/ou7jn5bg7xzvrzavgwjg.png)

## 2. Why and how to use middleware with Express?

After my first API server deploy, I switched to my web app project and sent a fetch request to my API endpoint.

```js
fetch('MY_API_URL')
  .then(function (response) {
    console.log(response);
    return response.json();
  })
  .then(...);
```

Nothing was happening in my DOM, but the console message was frustrating.

```
Access to fetch at 'MY_API_URL' from origin 'http://localhost:3000' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.

App.js:34 Cross-Origin Read Blocking (CORB) blocked cross-origin response MY_API_URL with MIME type application/json. See https://www.chromestatus.com/feature/5629709824032768 for more details.
```

After doing some quick research, I realized I needed to configure my API server according to the `CORS policy`. First, I added `mode: 'cors'` to my fetch request:

```js
fetch('MY_API_URL', { mode: 'cors' })
  .then(function (response) {
    console.log(response);
    return response.json();
  })
  .then(...);
```

Alone, it was no use for my problem. Then, I added the `cors` middleware to my API server, with only two lines actually. After installing the `cors` package with:

```bash
yarn add cors
```

I just added these lines to my code:

```js
// import `cors` package
const cors = require('cors');

// use middleware
app.use(cors());
```

And after running with these configurations, my problem was solved, for now.

## 3. How to serve Express API Server as HTTPS?

To deploy, I moved my project to my VPS and redirected my `my_api_url` domain to this VPS. That way I put a small layer of abstraction over my server IP; also, I wouldn't need to type my IP everywhere, and could instead use my own domain with fancy subdomains like `api.omergulen.com`.

In this step, I first tried to deploy it without certification, on HTTP.

```
[blocked] The page at 'https://my_web_app' was loaded over HTTPS but ran insecure content from 'http://my_api_url': this content should also be loaded over HTTPS.
```

My web app was being served over HTTPS on Firebase Hosting, and sending a request from `HTTPS to HTTP` is called [Mixed Content](https://developers.google.com/web/fundamentals/security/prevent-mixed-content/what-is-mixed-content), and it is not allowed. So, I just put an `s` at the beginning of the URL :)) `https://my_api_url` and, as you can guess, it didn't work either.
```
GET https://my_api_url net::ERR_SSL_PROTOCOL_ERROR
```

Then, after doing focused research, I realized that I needed to create a certificate with a Certificate Authority. Many Certificate Authorities are paid, but not [Let's Encrypt](https://letsencrypt.org/).

*Let’s Encrypt is a free, automated, and open Certificate Authority.*

If you have shell access to your server, it suggests you use [certbot](https://certbot.eff.org/). On the `certbot` website, I chose: My HTTP website is running `None of the above` on `Ubuntu 16.04 (xenial)`, which fit my case.

Before starting, they want you to be sure you have:

- comfort with the command line
- an HTTP website (an API server in my case)
- which is `online`
- and serving on the HTTP port (`80`)
- which is hosted on a `server`
- which you can access via `SSH`
- with the ability to `sudo`

Then just apply the steps below:

### 1. SSH into the server

SSH into the server running your HTTP website as a user with sudo privileges.

### 2. Add Certbot PPA

You'll need to add the Certbot PPA to your list of repositories. To do so, run the following commands on the command line on the machine:

```bash
sudo apt-get update && sudo apt-get install software-properties-common && sudo add-apt-repository universe && sudo add-apt-repository ppa:certbot/certbot && sudo apt-get update
```

### 3. Install Certbot

Run this command on the command line on the machine to install Certbot.

```bash
sudo apt-get install certbot
```

### 4. Choose how you'd like to run Certbot

Are you ok with temporarily stopping your website?

***Yes, my web server is not currently running on this machine.***

Stop your web server, then run this command to get a certificate. Certbot will temporarily spin up a webserver on your machine.
```bash
sudo certbot certonly --standalone
```

***No, I need to keep my webserver running.***

If you have a web server that's already using port 80 and don't want to stop it while Certbot runs, run this command and follow the instructions in the terminal.

```bash
sudo certbot certonly --webroot
```

In this step, you need to enter your domain into the terminal, such as `dev.to`. After that, it will check your web server and look for the specific files it creates, and in case of success it should print out something like this:

```
Performing the following challenges:
http-01 challenge for my_api_url
Waiting for verification...
Cleaning up challenges

IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at:
   /etc/letsencrypt/live/my_api_url/fullchain.pem
   Your key file has been saved at:
   /etc/letsencrypt/live/my_api_url/privkey.pem
   Your cert will expire on 2020-04-01. To obtain a new or tweaked
   version of this certificate in the future, simply run certbot again.
   To non-interactively renew *all* of your certificates, run
   "certbot renew"
 - If you like Certbot, please consider supporting our work by:
   Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
   Donating to EFF: https://eff.org/donate-le
```

**Important Note:** To use the webroot plugin, your server must be configured to serve files from hidden directories. If `/.well-known` is treated specially by your webserver configuration, you might need to modify the configuration to ensure that files inside `/.well-known/acme-challenge` are served by the webserver.

## 4. Installing signed certificate to Express API Server

You'll need to install your new certificate in the configuration file for your API Server.
First, you need to import some modules. The `https` and `fs` modules ship with Node.js, so there is nothing extra to install:

```js
// import packages (built into Node.js)
const https = require('https');
const fs = require('fs');

// serve the API with the signed certificate on the 443 (SSL/HTTPS) port
const httpsServer = https.createServer({
  key: fs.readFileSync('/etc/letsencrypt/live/my_api_url/privkey.pem'),
  cert: fs.readFileSync('/etc/letsencrypt/live/my_api_url/fullchain.pem'),
}, app);

httpsServer.listen(443, () => {
  console.log('HTTPS Server running on port 443');
});
```

If you also want to keep serving `HTTP` requests alongside the `HTTPS` requests, you can add the following lines, too:

```js
const http = require('http');

// serve the API on the 80 (HTTP) port
const httpServer = http.createServer(app);

httpServer.listen(80, () => {
  console.log('HTTP Server running on port 80');
});
```

In the end, your final API server code will be something like this (note that `app` is created only once):

```js
// import required packages
const express = require('express');
const cors = require('cors');
const https = require('https');
const http = require('http');
const fs = require('fs');

// create new express app and save it as "app"
const app = express();
app.use(cors());

// create a route for the app
app.get('/', (req, res) => {
  res.send('Hello dev.to!');
});

// another route
app.get('/omergulen', (req, res) => {
  res.send('Hello Omer! Welcome to dev.to!');
});

// Listen on both http & https ports
const httpServer = http.createServer(app);
const httpsServer = https.createServer({
  key: fs.readFileSync('/etc/letsencrypt/live/my_api_url/privkey.pem'),
  cert: fs.readFileSync('/etc/letsencrypt/live/my_api_url/fullchain.pem'),
}, app);

httpServer.listen(80, () => {
  console.log('HTTP Server running on port 80');
});

httpsServer.listen(443, () => {
  console.log('HTTPS Server running on port 443');
});
```

## 5. Automatic Renewal and Test of the Certificate

The Certbot packages on your system come with a cron job or systemd timer that will renew your certificates automatically before they expire. You will not need to run Certbot again, unless you change your configuration. You can test automatic renewal for your certificates by running this command:

```bash
sudo certbot renew --dry-run
```

The command to renew certbot is installed in one of the following locations:

```bash
/etc/crontab/
/etc/cron.*/*
systemctl list-timers
```

If you needed to stop your webserver to run Certbot, you'll want to edit the built-in command to add the `--pre-hook` and `--post-hook` flags to stop and start your web server automatically. For example, if your webserver is HAProxy, add the following to the `certbot renew` command:

```bash
--pre-hook "service haproxy stop" --post-hook "service haproxy start"
```

More information is available in the [Certbot documentation on renewing certificates](https://certbot.eff.org/docs/using.html?highlight=hooks#renewing-certificates).

***Confirm that Certbot worked***

To confirm that your site is set up properly, visit `https://yourwebsite.com/` in your browser and look for the lock icon in the URL bar. If you want to check that you have a top-of-the-line installation, you can head to https://www.ssllabs.com/ssltest/.

Well done! You have come to the end of this long tutorial. After applying these steps you can finally go to your API server URL and you should be seeing `Hello dev.to!`.

## *Thanks for reading*

I hope this tutorial was helpful enough. You can check my last article here:

{% link https://dev.to/omergulen/how-to-add-system-call-syscall-to-the-kernel-compile-and-test-it-3e6p %}

Feel free to reach out to me at omrglen@gmail.com. I'm open to suggestions & requests for future articles, cya 😃

Happy New Year! 🥳🥳🥳
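One extra renewal-related sanity check: a `tls` socket in Node reports the certificate's expiry as a `valid_to` date string via `getPeerCertificate()`, and you can turn that into a days-remaining number. This small helper is hypothetical and not part of the tutorial above:

```javascript
// Hypothetical helper: days remaining until a certificate expires, given
// the `valid_to` string from tls.TLSSocket#getPeerCertificate()
// (certificate dates look like 'Apr  1 12:00:00 2020 GMT').
function daysUntilExpiry(validTo, now = new Date()) {
  const expires = new Date(validTo);
  const msPerDay = 24 * 60 * 60 * 1000;
  return Math.floor((expires - now) / msPerDay);
}
```

If the number ever dips toward zero, the automatic renewal is not firing; check the cron/systemd locations listed above.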
omergulen
230,744
DevOps engineer trainee
A post by Terry
0
2020-01-02T22:24:29
https://dev.to/cosky5/devops-engineer-trainee-hfh
cosky5
230,749
Acoustic Atlas @ Web Audio Conference
Art Piece at Web Audio Conference 2019
0
2020-01-02T22:33:38
https://dev.to/guergana/acoustic-atlas-web-audio-conference-59le
webaudio, acoustics, javascript, tonejs
---
title: Acoustic Atlas @ Web Audio Conference
published: true
description: Art Piece at Web Audio Conference 2019
tags: webaudio, acoustics, javascript, tonejs
---

Last year I participated in the Web Audio Conference with [Cobi van Tonder](https://www.otoplasma.com/), presenting a website called [Acoustic Atlas](https://acousticatlas.de). This project allows the visitor to virtually experience the acoustics of natural and cultural world heritage sites. Please make sure you have access to a microphone and/or headphones when exploring it.

![alt text](https://static.wixstatic.com/media/a86de2_17dd8d92a9ed49348c4a5d6d7182aed1~mv2_d_2240_1296_s_2.png/v1/fill/w_925,h_536,fp_0.50_0.50,q_90/a86de2_17dd8d92a9ed49348c4a5d6d7182aed1~mv2_d_2240_1296_s_2.webp "Acoustic Atlas Cathedral")

You can read more about [Acoustic Atlas here](https://www.otoplasma.com/post/acoustic-atlas-premieres-wac2019-trondheim). You can also find the paper abstract here [(PDF)](https://www.ntnu.edu/documents/1282113268/1290817924/WAC2019-CameraReadySubmission-50.pdf/).
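The post doesn't show the implementation, but virtual acoustics of this kind are typically produced by convolving a dry input signal with an impulse response measured at the actual site; in the browser, Web Audio's `ConvolverNode` performs that convolution in real time. As an illustration of the underlying math only (not Acoustic Atlas code), here is a naive offline convolution in plain JavaScript:

```javascript
// Naive offline convolution: each input sample triggers a scaled, delayed
// copy of the impulse response (the "echo fingerprint" of a space).
// Web Audio's ConvolverNode does the same thing, efficiently and live.
function convolve(signal, impulseResponse) {
  const out = new Array(signal.length + impulseResponse.length - 1).fill(0);
  for (let i = 0; i < signal.length; i++) {
    for (let j = 0; j < impulseResponse.length; j++) {
      out[i + j] += signal[i] * impulseResponse[j];
    }
  }
  return out;
}
```

Feeding a unit impulse `[1]` through `convolve` returns the impulse response itself, which is why recording a clap or sine sweep in a cathedral captures its reverb.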
guergana
235,503
Jan. 10, 2020: What did you learn this week?
It's that time of the week again. So wonderful devs, what did you learn this week? It could be progra...
3,902
2020-01-10T05:14:27
https://dev.to/nickytonline/jan-10-2020-what-did-you-learn-this-week-2b9c
weeklylearn, discuss, weeklyretro
It's that time of the week again. So wonderful devs, what did you learn this week? It could be programming tips, career advice etc. ![Two hockey players reading a play book](https://media.giphy.com/media/OqJdvB5VyUUqNoZdjO/giphy-downsized-large.gif) Feel free to comment with what you learnt and/or reference your TIL post to give it some more exposure. {%tag todayilearned %} And remember, if something you learnt was a big win for you, then you know where to drop it as well.👇👇🏻👇🏼👇🏽👇🏾👇🏿 {% link https://dev.to/jess/what-was-your-win-this-past-week-4ac4 %} ![Victory!](https://media.giphy.com/media/K3RxMSrERT8iI/giphy.gif)
nickytonline
231,582
I implemented a sample Django app written in Hylang.
I wrote the app. The above is written in Hylang with Django and Django REST framework. ...
0
2020-01-04T07:35:01
https://dev.to/tqm1990/an-app-as-a-sample-implementation-using-django-rest-and-hylang-1h4o
hy, hylang, django, lisp
---
title: I implemented a sample Django app written in Hylang.
published: true
description:
tags: hy, hylang, django, lisp
---

I wrote [the app](https://github.com/charlie-browns/django-app-with-hylang). It is written in Hylang with Django and Django REST framework.

## TODO

Hylang is a Lisp dialect, so of course it can use Lisp macros, but the app doesn't actually use any Lisp macro powers yet.
tqm1990
232,720
What I learned about my brain from playing 300+ hours of Stardew Valley
Yes, I got the Switch at the start of November. This skin is my aesthetic. It’s from decalgirl.com...
0
2020-01-06T17:39:50
https://heidiwaterhouse.com/2020/01/05/what-i-learned-about-my-brain-from-playing-300-hours-of-stardew-valley/
life, add, adhd, burnout
--- title: What I learned about my brain from playing 300+ hours of Stardew Valley published: true date: 2020-01-05 19:02:17 UTC tags: Life,add,adhd,burnout canonical_url: https://heidiwaterhouse.com/2020/01/05/what-i-learned-about-my-brain-from-playing-300-hours-of-stardew-valley/ --- Yes, I got the Switch at the start of November. ![A Nintendo Switch with a colorful soapbubble vinyl skin](https://heidiwaterhouse.com/wp-content/uploads/2020/01/img_1113-2-1.jpg) This skin is my aesthetic. It’s from decalgirl.com Yes, that is a lot, so many hours. Many of them were on planes and in waiting rooms, but many more were not. Yes, I do have a lot of internalized shame/misogyny/self-criticism about playing vidya games that much, and I’m working really hard on not listening to that shame, because my value as a human is not related to productivity. BUT — what’s been interesting is what I’ve learned about how my particular mind works by being able to watch it. # Flow and feedback I (my Stardew Valley character is “I” for the purposes here) get up in the morning and I do all the farm chores in a clockwise direction. Sometimes that only takes a gametime hour, and sometimes four, depending on season. But doing them in the same order every time means: 1) I remember to do them all, because I have locational memory. If I get distracted by real life in the middle, and I’m standing in my barn, I know that the next thing is the fishing pond. 2) I don’t have to think about it. That sounds like a trivial/lazy detail, but I think it’s actually key to flow. Adam Cuppy gave a talk called [Mechanical Confidence at RubyConf 2018](https://youtu.be/6xnealmhEnM) and it was really eye-opening for me. He talked about a book called _The Power of Habit_, but he also had some observational data that asking interns to configure their desktop the same way for a week made them feel more confident in knowing how to find answers. 
You should go watch the recording, but if you don’t, just think about this: short-term memory is in a different part of your brain than routine/habit, and we can leverage that. (Also, the video has a cameo by Emily Freeman.)

I was thinking about this talk when I organized all my tools so that they are in the same location all the time, instead of just letting them randomly fall wherever they happened to get slotted into my inventory. And Adam is right. I feel more confident in combat if I know exactly how many clicks I am away from the weapon I want.

The other thing I’ve been doing a ton of on this vacation is jigsaw puzzles. I realized the connection between my two downtime obsessions is that I got excellent flow from them with small but reliable dopamine hits/rewards. I need rewards, and I love flow, but detaching it from work has been an interesting way to untangle how much of my self I use as a work identity, and what I enjoy in the absence of work-feedback.

![Jigsaw puzzle of circles that look a bit like millefiore glass paperweights, but aren’t.](https://heidiwaterhouse.com/wp-content/uploads/2020/01/20191214_203608_original-3-1.jpg)

This one was pretty fiendish, but it was the first one I did, and I was pretty burnt out.

![Jigsaw puzzle of three kittens in Christmas stockings](https://heidiwaterhouse.com/wp-content/uploads/2020/01/20191227_131737_original-2-1.jpg)

<figcaption>The fur. Dear god, the fur.</figcaption>

![A jigsaw puzzle of a Colin Thompson painting, with many many crafting elements](https://heidiwaterhouse.com/wp-content/uploads/2020/01/20200101_204149_original-2-1.jpg)

<figcaption>This is an artist called Colin Thompson.
This was probably my favorite puzzle to complete</figcaption> ![Jigsaw puzzle of an idealized sewing/crafting room](https://heidiwaterhouse.com/wp-content/uploads/2020/01/20200104_171508_original.jpg) I enjoyed this puzzle, but spent a lot of time wanting to buy the crafter better light bulbs My son informs me that the genres of Stardew Valley are “farming simulator” (hah!) and “Skinner Box game”, which is a different kind of hah, a hah of recognition. Because yes, I, like a pigeon, feel rewarded by tapping in the correct place at the correct time. I crave that feeling of competence/excellence/correctness. But because it’s a video game that I am in control of, it’s more like internal validation/motivation than external. I can only do so much about how a talk or a post is received by the world, because it depends on other humans, but this? Purely me and the gamecode. # Directional/locational tasks I remember things by location or orientation. You know that feeling when you walk into a room and you can’t remember what you’re doing there? This is like the opposite of that, in that I have a little arrow pointing in a direction of the thing I was thinking about doing. In Stardew Valley, that’s actually a skill you can get, so that there are literally arrows pointing to things you can pick up for money. ![A screencap from Stardew Valley, with directional arrows to the right of the screen](https://heidiwaterhouse.com/wp-content/uploads/2020/01/img_1137-1.jpg) This is not my screencap, because I haven’t bothered to figure that out yet, but it’s representative. See those little yellow arrows to the right of the screen? Good stuff that direction, an unknown distance and maybe not accessible, but it’s there. That’s literally how my life looks inside my head, but also it applies to browser tabs and mental maps of the place I actually live. 
In another video game/real life crossover, I explain that my distractible-type ADD manifests as little Sims-like task bubbles hovering over all the household chores that need to be done, and I can’t ever dismiss them, so please, kid, do your chore so I stop thinking about it every time I walk into the room. I can totally leverage this, too, the little arrow of intent. I just need to point it at the things that need doing, not the ones that are distracting. # Ritual and change resistance There are four kinds of “tasks” that I think of spending my day on in Stardew Valley, once my chores are done. - Fishing - Mining - Farming - Win friends and influence people It doesn’t matter which one I did yesterday, it is easiest for me to do it again today. Even if it would be advantageous to do one of the others, it takes a conscious effort of will to switch tracks from, say, mining to fishing. And it’s always hard for me to stop farming long enough to go to any of the other three. Not because it’s more enjoyable, but because it’s what I’m doing now and therefore has the lowest cognitive cost, and also, all the farm chores are very similar to farming. I think this is about the cognitive cost of task switching and also the slightly increased effort of swapping out some tools for other tools. Sometimes I solve this by inefficiently carrying all the tools around, in case I need them, which sure is a metaphor for programming. # Memetic contagion One of the interesting things I noticed after I unlocked a minecart system that gave me access to destinations is that I would keep getting this oldies song “[Bus Stop](https://www.youtube.com/watch?v=It75wQ0JypA)” stuck in my head, followed by “[Umbrella](https://www.google.com/url?sa=t&source=web&cd=1&ved=2ahUKEwj64qWflejmAhVHR60KHT8BCa8QwqsBMAB6BAgGEAU&url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3Db0nNTklOKRA&usg=AOvVaw1RXbyQfAq8MNuD2nk96m-m)“. You know, the one sung by Rihanna and slain by Tom Holland. 
I finally figured out that merely reading the words “Bus Stop” was enough to set off this memetic trigger. EVERY TIME. Also when I picked up driftwood (What’s brown and sticky? A stick!), and when my fences rotted (Home Burial, by Robert Frost — “Three foggy mornings and one rainy day/Will rot the best birch fence a man can build.”) It makes me aware of how everything in my head is stored and filed by keywords, but I don’t actually have access to clear up the index tags or make sure that they are being accessed relevantly. We can’t really refactor our unconscious associations, more’s the pity. # In conclusion It is useful to me to take something I’m intimate with and get some perspective on it. That’s why I did a talk about dog training and people motivation. Somehow if we can get enough distance from something, the patterns become more apparent. What I learned from Stardew Valley and some time off work is that I crave flow as relaxing, that I need to budget extra time and energy to shifting tasks and modes, that making my work environment predictable helps me, and that my quirky ADD brain is working on workarounds I hadn’t even noticed.
wiredferret
235,296
Stream in 2020
2019 has been an exciting year for Stream. The team doubled, our customers doubled, and we launched o...
0
2020-01-09T16:36:55
https://getstream.io/blog/stream-in-2020/
recap, programming, chat, feeds
2019 has been an exciting year for Stream. The team doubled, our customers doubled, and we launched our second product – [Stream Chat](https://getstream.io/chat/). It’s been great to see how well our chat product has been received and how quickly it has grown. This would not have been possible without the trust and support of our customers and the hardworking team here at Stream. On that note, I would like to start this blog post about Stream in 2020 with a big thanks to all of you. It’s particularly exciting to see new startups leverage Stream and grow to millions of users without ever experiencing scaling issues on their chat and feeds. Only a few years ago, it was still typical for companies that experienced rapid growth to be bogged down by scaling challenges — companies such as Friendster, Tumblr, Hyves, Twitter, and Facebook all experienced significant scaling issues and an abundant amount of technical debt. Here at Stream, we’re passionate about building reusable components for applications that are easy to integrate and keep working regardless of how much traffic you throw at them. For the 2020 roadmap we’re planning the following improvements: ## 1. React, iOS & Android SDKs The React, iOS and Android SDKs are receiving major updates. You can join the discussion on GitHub for [React](https://github.com/GetStream/stream-chat-react/issues/99), [iOS](https://github.com/GetStream/stream-chat-swift/issues/54), and [Android](https://github.com/GetStream/stream-chat-android/issues/400). For React, we’re adopting hooks and are adding built-in support for message search. We’re also working on performance improvements. On iOS, the main focus is on stability, documentation, and better support for apps that don’t use RXSwift. For Android, we’re aiming to separate the low-level client and use an adapter system to support all three common ways of handling asynchronous requests: Callbacks, RXJava and Kotlin coroutines. ## 2. 
Ranking on Aggregated Feeds At the moment, Crunchbase is live with a ranked aggregated feed. In Q1 of this year, we expect to make this generally available to all customers with ranked feeds. ![](https://stream-blog-v2.imgix.net/blog/wp-content/uploads/5cdf7b26aff07ff4596ed8de376895b7/image1.png?auto=compress%2Cformat&ixlib=php-1.2.1) ## 3. Feeds With the rapid growth in 2019, we never managed to ship some smaller features on our wishlist for feeds. While that’s unfortunate, uptime and stability always take precedence. In 2020 we want to offer: * Push for feeds * Reaction counts for ranking * Follow counts * A CLI for feeds * Merge support for feeds * Muting & moderation for feeds ## 4. Docs & Support Stream’s documentation for feeds and chat is going through a significant overhaul. In addition to docs, we are increasing the size of our support team to make sure all customers get the help they deserve as quickly as possible. ![](https://stream-blog-v2.imgix.net/blog/wp-content/uploads/731913ef1ab34b745d76aa97d25fe4b6/image3.png?auto=compress%2Cformat&ixlib=php-1.2.1) ## 5. MML & Custom Commands There’s a sneak preview available of Message Markup Language (MML) in the [docs](https://getstream.io/chat/docs/js/#overview). The idea here is to allow for interactive messaging. MML is not generally available just yet. We would love to talk to more product teams that are adding interactive messaging so we can finetune the MML feature set based on your feedback. ![](https://stream-blog-v2.imgix.net/blog/wp-content/uploads/cc01c33fd568c2a4111183c3c4549bd6/image4.png?auto=compress%2Cformat&ixlib=php-1.2.1) ## 6. Live Location Sharing We’re going to add support for live location sharing to Stream’s chat offering. ![](https://stream-blog-v2.imgix.net/blog/wp-content/uploads/e723957986e66894b18b7899645a4c88/image2.png?auto=compress%2Cformat&ixlib=php-1.2.1) ## 7. 
Unifying Chat & Feeds

The majority of performance and storage technology for Stream is shared between our chat and feed products. However, there are many small differences between the two products that bring up unique edge cases. We plan to do the following to ensure that we have feature parity between the two products:

* Firehose support for webhooks, SQS, and WebSockets for chat
* Truncate support for chat
* API operation log for chat
* Push for feeds
* CLI for feeds

## 8. Small Chat Features

There are also a few small features that we’re adding to chat:

1. Improved message search: enable filtering on message attributes in addition to searching on the message text. This makes it easier to search for messages that mention a user, or messages that have files attached to them.
2. System message support for all endpoints that modify the channel. All endpoints that modify a channel or change channel members should support sending a system message. For example: “Thierry changed the channel color to green” or “Tommaso joined the channel”
3. Better SDK support for loading messages from a specific point in time and viewing historical messages
4. Ability to replay messages from a past livestream while you’re watching it
5. Broadcast feature to write one message to many channels at once (for marketing and announcement types of use cases)
6. Custom data support for channel members. Right now, we support custom data on the channel, the message, the attachment, the user, and reactions. We don’t support custom data on channel members. If you have some member-specific settings, that can be a little cumbersome. One example is making it easier to allow users to split channels into tabs.

## All the Best in 2020!

Again, I want to say thank you to all of the customers out there that have helped Stream become a success. I would also like to thank our team for the passion and drive they put towards building reusable components.
If you have feature requests for Stream, please don’t hesitate to reach out via email at [support@getstream.io](mailto:support@getstream.io).

Happy New Year! 🙌

**_This post is sponsored by:_**

[![Stream Chat](https://i.imgur.com/5DoQL0j.jpg)](https://getstream.io/chat/)
nickparsons
235,387
NoSql but with SQL
Couchbase I am so much comfortable with PHP and MariaDB &amp;&amp; MySql || Sqlite3 I ❤️ c...
0
2020-01-09T19:51:45
https://dev.to/cemkaanguru/day16-2p7o
100daysofcode, couchbase, nosql, sql
# Couchbase

I am very comfortable with PHP and MariaDB && MySQL || SQLite3, but I ❤️ Couchbase in JS. Enterprise-Class, Multicloud To Edge NoSQL Database.

### And it is NoSQL but with SQL

[![View on vgy.me](https://i.vgy.me/yk2R9f.png "View yk2R9f.png on vgy.me")](https://vgy.me/u/yk2R9f)

https://www.couchbase.com/

https://db-engines.com/en/system/CouchDB%3bCouchbase%3bMongoDB

day16
cemkaanguru
235,488
Redux 101
Intro to redux
0
2020-01-11T17:04:21
https://irian.to/blogs/redux-101
redux, react, javascript, frontend
---
title: Redux 101
published: true
description: Intro to redux
tags: redux, react, javascript, frontend
canonical_url: https://irian.to/blogs/redux-101
---

This is part one of a two-part Redux mini-series.

- *Part one*: Understanding Redux
- [*Part two*: Setting up Redux and React app](https://dev.to/iggredible/redux-101-connecting-react-with-redux-195m)

# Why I wrote this

Redux is a big name if you are a React developer. When I first tried to learn it, I struggled to understand it. Even the basic tutorials were hard to follow because they contained terms I didn't know then: reducers, actions, store, pure functions, etc. 🤷‍♂️🤷‍♀️

Now that I have used it for some time (big thanks to my coworkers who guided me), I want to help people understand Redux. My hope is that by the end of this article, you will know:

1. The problem with React without state management
2. What Redux solves
3. What a reducer, store, initial state, and action are

The concept also applies to any state management library, like Vuex. So even if you are not a React / Redux developer, this post may help you.

# The problem with React without state management

The first question I had when I learned about Redux was, *"Why do I even need it?"* It is helpful to know what problem Redux solves in order to understand it.

Redux helps you manage your application states. The [Redux site](https://redux.js.org/) says that Redux is "A Predictable State Container for JS Apps". What does that even mean?

Imagine a page in a React app that has a form and a button. You fill out the form, then you click the button. A few things happen: the button turns red and the form hides.

![Pic1 - image of form and button](https://thepracticaldev.s3.amazonaws.com/i/amqg8u9fk1ebeqyr5rle.png)

This page is made out of two React components: `Form.jsx` and `Button.jsx`. Remember, components are reusable. It is important to keep them separate so we can reuse `Button.jsx` in different places when we need it. Back to our app.
Here we have a problem: how will our button tell our form to hide? They are neither siblings nor parent and child. 🤷‍♂️

This is the problem we face working with a component-based framework like React. It has many components that don't know about each other, and it can get really tricky to make one component change the state of another.

# The problem Redux solves

Redux is a state management library. Using Redux, the button can now access and change the `isHidden` state that the form uses.

How does Redux do it? Redux is a command center. This command center has a storage that STORES states. Among these states are our `buttonColor` and `isHidden`. Our app may have initial states like this:

```
{
  buttonText: 'Submit',
  buttonColor: 'blue',
  isHidden: false,
  awesomeNotes: [
    {title: 'awsome!', id: 1},
    {title: 'awesomer!', id: 2}
  ]
  ...
}
```

Every component that is CONNECTED to our store has access to it. Our form can see everything in the store, including `isHidden` and `buttonColor`. Our button can see everything in the store, including `isHidden` and `buttonColor`. Because all important states are centralized, they can be shared with different components to be used and updated.

When we click the button, imagine the button submitting a request to the command center: "Hey command center, can you CHANGE_BUTTON_COLOR to red and TOGGLE_FORM_IS_HIDDEN?"

![Pic2 - image of button dispatching actions](https://thepracticaldev.s3.amazonaws.com/i/jmr729mizada82rly7fd.png)

When the command center receives the request, it processes the request from the button. It updates `buttonColor` to `red` and `isHidden` to `true` in the store.

![Pic3 - image of command center processing](https://thepracticaldev.s3.amazonaws.com/i/c2xed1gaeep7uk92zc5e.png)

Our state in our store now looks like this:

```
{
  buttonText: 'Submit',
  buttonColor: 'red',
  isHidden: true,
  awesomeNotes: [
    {title: 'awsome!', id: 1},
    {title: 'awesomer!', id: 2}
  ]
  ...
}
```

When the state changes, since the button and form are CONNECTED to the store, they re-render with the new states. The button is now visibly red and the form is now hidden.

I skipped a step here. Earlier I mentioned that the button made a request to the command center. When the command center receives the request, it doesn't quite know what to do with it. Imagine the button only speaks Spanish and the command center only speaks English. We need someone in the command center who knows English AND Spanish to translate the request into something that the command center can understand!

This translation from the button's request into something that the command center can understand is done by the REDUCER.

In React, the request from the button may look like this:

```
{
  type: 'TOGGLE_FORM_IS_HIDDEN',
}
```

A request may contain argument(s):

```
{
  type: 'CHANGE_BUTTON_COLOR',
  color: 'red'
}
```

This request, in Redux's terms, is called an ACTION.

Back to our analogy: the command center finally receives something he understands. Thanks to our translator, the requests "TOGGLE_FORM_IS_HIDDEN" and "CHANGE_BUTTON_COLOR" can be understood by the command center. He knows exactly what to do. For example, when receiving the request 'TOGGLE_FORM_IS_HIDDEN', the command center does the following:

1. He finds `isHidden` in the state
2. He toggles it to the opposite of whatever it was before
3. The remaining states on the button and awesomeNotes are not part of 'TOGGLE_FORM_IS_HIDDEN', so he leaves them alone
4. When the command center is done executing the request, it returns the states with `isHidden` updated

The button sees that `buttonColor` has a new value (`"red"`) and the form sees that `isHidden` has a new value (`true`). Since the state is updated, React re-renders. That's why we see the button change color and the form go into hiding.

![Pic4 - image of button turning red and form hiding](https://thepracticaldev.s3.amazonaws.com/i/7onepplecih0333z2nzw.png)

That's the basic analogy for how Redux works.
# Four important concepts about Redux

There are four concepts about Redux, mentioned above, that are important for it to work:

- Initial states
- Actions
- Reducers
- Store

## Initial States

Initial states are the states at the start of our application. For example, the button was initially blue and the form was not hidden (`isHidden: false`). When we refresh the app, Redux loses all updated states and reverts back to the initial states.

## Actions

The requests from the button were actions. Actions are events. An action is nothing but an object. At minimum, an action must contain a `type`.

```
{
  type: "CHANGE_BUTTON_COLOR",
  color: "red"
}
```

When the button requests "CHANGE_BUTTON_COLOR" on click, we call it *dispatching* an action.

## Reducers

The reducer is the guy who speaks Spanish and English and helps the command center understand the requests. Technically, a reducer also performs the action (the reducer is both the translator AND the command center). It took me longer to grasp what a reducer was, so I will elaborate more here:

Let's say our reducer understands two actions: "ADD_NOTE" and "DELETE_NOTE". Our reducer looks like this (using a switch statement is the norm):

```
function notesReducer(state = [], action) {
  switch (action.type) {
    case "ADD_NOTE":
      return [...state, action.note]
    case "DELETE_NOTE":
      // return logic to delete note
    default:
      return state
  }
}
```

The action might look like this:

```
{
  type: "ADD_NOTE",
  note: "This is my awesome note!",
}
```

Our reducer checks the type (`action.type`) and finds a match (`"ADD_NOTE"`). It returns an updated state: `[...state, action.note]` (the previous state plus the newest note).

If you send this reducer an "UPDATE_NOTE" action, it does not know what to do. It will simply return the current state (the `default` case). You can add as many different case scenarios as you want here.

In short, a reducer has a set of instructions. When it receives an action, it looks for a matching `type`. When it finds a match, it does whatever that instruction is set to and returns the modified state.

Keep in mind that state is immutable.
We are not directly changing the states array. We are returning a new array of notes consisting of the old notes plus the new note. Again, do **not** mutate the actual states. Return a copy with the updates.

## Store

The store is where the states are being stored. Think of a giant storage unit with a security guard outside. We cannot directly go to the store and modify the values; the security guard won't let us. You need a special permit. That permit is called *dispatching* an action. Only by dispatching an action will the security guard let you update the store.

A store might look something like this:

```
{
  buttonText: 'Submit',
  buttonColor: 'blue',
  isHidden: false,
  awesomeNotes: [
    {title: 'awsome!', id: 1},
    {title: 'awesomer!', id: 2}
  ]
  ...
}
```

This should cover basic Redux. There is still more to Redux that I haven't covered, but hopefully this is enough to get you started. Next time, we will create a simple React / Redux app! You can find the next tutorial 👉[here](https://dev.to/iggredible/redux-101-connecting-react-with-redux-195m)👈.

Thank you so much for reading this far. I really appreciate it. Please let me know if you find any mistakes or have suggestions on how I can make this better! 👍
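To make the four concepts concrete, here is a tiny hand-rolled sketch in plain JavaScript. The `createStore` below is a simplified stand-in written only for illustration, not the real Redux library; it merely mimics the `getState` / `dispatch` / `subscribe` shape that Redux's store exposes:

```javascript
// A miniature stand-in for Redux's store (NOT the real library) to show the mechanics.
function createStore(reducer, initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    dispatch(action) {
      state = reducer(state, action); // the reducer returns a NEW state
      listeners.forEach((fn) => fn()); // connected components re-render here
    },
    subscribe: (fn) => listeners.push(fn),
  };
}

// The reducer: translates actions into state updates, without mutating.
function appReducer(state, action) {
  switch (action.type) {
    case "CHANGE_BUTTON_COLOR":
      return { ...state, buttonColor: action.color };
    case "TOGGLE_FORM_IS_HIDDEN":
      return { ...state, isHidden: !state.isHidden };
    default:
      return state;
  }
}

const store = createStore(appReducer, { buttonColor: "blue", isHidden: false });
store.dispatch({ type: "CHANGE_BUTTON_COLOR", color: "red" });
store.dispatch({ type: "TOGGLE_FORM_IS_HIDDEN" });
console.log(store.getState()); // { buttonColor: 'red', isHidden: true }
```

With the real library you would `import { createStore } from 'redux'` and pass in the same reducer; the flow of dispatch → reducer → new state → re-render is identical.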
iggredible
235,547
I Asked Hiring Managers How to Answer Common Behavioral Questions
Anyone making a career change soon? I’ve asked a handful of senior FAANG employees heavily involved w...
0
2020-01-10T06:38:02
https://dev.to/rooftopslushie/hiring-managers-share-how-to-answer-behavioral-interview-questions-52a1
career, webdev, beginners
Anyone making a career change soon? I’ve asked a handful of **senior FAANG employees heavily involved with the hiring process** to hopefully help you with your next behavioral interview round! This post might get a little long, so please feel free to click on the hyperlink that you're more interested in. 😇

## Common Behavioral Questions

* [Tell me about a project you worked on](#tell-me-about-a-project-you-worked-on)
* [What do you do if you disagree with your boss?](#what-do-you-do-if-you-disagree-with-your-boss?)
* [Why do you want to come to our company?](#why-do-you-want-to-come-to-our-company?)
* [Why do you want to leave your company?](#why-do-you-want-to-leave-your-company?)
* [What's your weakness?](#what's-your-weakness)

## Tell me about a project you worked on

<a name="tell-me-about-a-project-you-worked-on"></a>

For all significant projects you’ve worked on, **explicitly highlight what YOU contributed and what you built, even if it was as part of a team.** We want to understand YOUR contributions and what value you delivered to the project. Have concrete metrics, an understanding of the timelines involved, as well as the trade-offs made. The best answers also reflect on what you could have done differently to improve the outcome of the project.

— sHvN64, Senior Software Engineer at Uber

Make sure you focus on what the challenges were and how you overcame them. The tricky part is to describe your project in such a way that it is easy to follow and at the same time sounds complex enough. In my personal experience, many candidates’ challenges were either hard to follow or did not sound challenging enough. Give enough context and avoid acronyms—the hiring manager may not be an expert in the area of your project! In the end, talk about what your takeaway is. **What did you learn from the experience?**

— Max, Software Engineer at Google

## What do you do if you disagree with your boss?
<a name="what-do-you-do-if-you-disagree-with-your-boss?"></a>

**Use data to win arguments.** Can you tell a story about using product data to bring objective data to a subjective debate?

**Express empathy.** Good answers will sound more like ‘I think doing it this way is more in-line with our goals, and here’s why’, and less like ‘I think you’re wrong and that’s a bad idea.’

**Highlight that debate and disagreement are healthy and improve teams.** Of course, there is healthy debate and unhealthy debate. Make it really clear that you use disagreements as an opportunity to learn OR influence others, not to win arguments.

**Step back and understand everyone’s point of view.** Perhaps there’s a good reason for the disagreement that you can learn from. Maybe there are other factors worth considering. This will help take some of the emotion out of the conversation and bring you back to a rational place.

— Dios, Engineering Manager at Facebook

This is a time to show how you are able to resolve conflicts at work and drive consensus with someone who is a power figure. To answer this question, you should first set the context on **(1) where you disagreed with your manager’s position, (2) how you didn’t confront them immediately and instead ran a small pilot test to verify your hypothesis, and (3) how you used data** to convince them that your idea’s worth exploring.

For example, the manager says to focus on sales and acquisition of new customers when you want to woo existing customers as well. You notice an attrition of customers after the first few days, so you try an experiment to engage the existing customer base through product feature emails. You observe that these customers have a much higher retention and revenue in the long run. Ultimately, this allows you to convince your manager that the strategy should focus on both acquisition as well as engagement.

— A, Product Leader at Amazon

## Why do you want to come to our company?
<a name="why-do-you-want-to-come-to-our-company?"></a>

**The one thing that you need to show is how much you know about the company.** A lot of times, managers ask this to gauge the interest of a candidate. If a candidate is highly interested in the company, the person should know a lot about what the company does and the latest news of the company. Make sure you read through the company’s website, social media page and blogs. You can pick certain examples from what you read and expand around those examples.

— Max, Software Engineer at Google

Aim to have a specific and rehearsed answer. **What about our company uniquely and specifically interests you?** Are we using a cool piece of technology at scale? Building a product that you care about better than anybody else?

— Dios, Engineering Manager at Facebook

## Why do you want to leave your company?

<a name="why-do-you-want-to-leave-your-company?"></a>

My strategy to answer this question is always very simple: **stay positive and avoid the negative.** I tell them that I do not want to leave my current company as I am enjoying my job a lot. Having said that, I am keen to learn more about the bigger scope, higher complexity and the interesting challenges in the role offered by the other company. I always mention my excitement after reading the job description, how it felt like a great next step in my career and how I am using the interview opportunity to better understand the details. Finally, even if there are other genuine reasons to make the switch (higher pay, better location, etc.), this is not the right time to mention them.

— A, Product Leader at Amazon

## What's your weakness?

<a name="what's-your-weakness"></a>

You need to pick just one weakness, and when thinking about your answer, keep in mind these two overarching principles:

(1) **Your weakness is fixable** and you need to let the manager know that you know how and **are actively working on improvement.
(2) Your weakness should hint at your strength.** A weakness does not necessarily mean you are not good; it can be because you are so good at something at the expense of another, which is your weakness. Simple examples: productivity vs. quality; great team player vs. speaking up.

— Max, Software Engineer at Google

Please don’t take the Michael Scott approach and turn this into a backhanded compliment (‘My biggest weakness is that I care too much’). **I’d highly recommend giving an honest answer, and one that is as specific as possible and not handwavy.** This shows that you’re self-aware and are able to independently identify areas for personal growth. Everyone has areas they need to work on and the interviewer knows this.

Some potentially good answers:

(1) I’ve been happy with my unit tests for regression coverage, but would like to invest more in systems-level integration testing to protect against things like service downtime and API version changes.

(2) As a manager, I’m learning how to strike a balance between delegating enough, and not too much or too little. I want to stay hands-on, and trust in my own ability to deliver, but it’s also important that I give my team technical opportunities. On the other hand, I don’t want to delegate too much since I like to lead from the front.

— Dios, Engineering Manager at Facebook

## In Summary 👻

I'm currently helping out at a startup (Rooftop Slushie) where verified tech employees provide career advice. Everyone mentioned in this post is an active user there. If you like the sort of answers listed above, feel free to check out more here (open to all upon sign-up!):

* [How rough is Uber?](https://www.rooftopslushie.com/request/Uber%20performance%20evaluations-1584)
* [Amazon Leadership Principles Interview Questions](https://www.rooftopslushie.com/request/Amazon%20Leadership%20Principles%20Interview%20Questions-149)
* [Junior Software Developer Resume Review](https://www.rooftopslushie.com/request/Junior%20Software%20Developer%20Resume%20Review-1469)

**More importantly, hope this post helps job hunters who have interviews soon - you got this!** 💪
rooftopslushie
235,592
Fixing My Embargo: Setting up a Cron Job
I wrote previously about setting up a simple embargo on my Gatsby site in the past. Even when I first...
0
2020-02-19T15:18:24
https://stephencharlesweiss.com/blog/2020-01-10/fix-embargo-cron-github-actions/
cron, githubactions
---
title: "Fixing My Embargo: Setting up a Cron Job"
published: true
date: 2019-12-27 00:00:00 UTC
tags: cron, github actions
canonical_url: https://stephencharlesweiss.com/blog/2020-01-10/fix-embargo-cron-github-actions/
---

I wrote previously about [setting up a simple embargo](https://stephencharlesweiss.com/blog/2019-10-16/gatsby-simple-embargo) on my Gatsby site. Even when I first implemented it, however, I knew that there’d be a better solution in the future.

As a reminder: Gatsby sites are static. In my case, the builds included all of the posts for my blog. Consequently, as soon as I wrote a post, added it to my site, and the site built, the post was visible. The purpose of the embargo was to create space between when I wrote a new post and when it would appear. The original solution solved this by simply hiding the posts on the front end. I wanted a better solution, however. For that, I reached for an old standby: the cron job.

## What Is A Cron Job

Cron is a utility that is used for scheduling commands or programs to execute at a specific time. Named for Chronos, the deity of time, these scheduled commands are referred to as “Cron Jobs”. Common uses of cron jobs include backups, monitoring, and maintenance.
The syntax for cron jobs is a little peculiar, but there’s a logic to it that just takes some familiarity:

```
┏━━━━━━━━━━━━ minute (0 - 59)
┃ ┏━━━━━━━━━━ hour (0 - 23)
┃ ┃ ┏━━━━━━━━ day of month (1 - 31)
┃ ┃ ┃ ┏━━━━━━ month (1-12)
┃ ┃ ┃ ┃ ┏━━━━ day of week (0 - 6) or use names;
┃ ┃ ┃ ┃ ┃     0 and 7 are Sunday, 1 is Monday,
┃ ┃ ┃ ┃ ┃     2 is Tuesday, etc.
┃ ┃ ┃ ┃ ┃
* * * * * <command to execute>
```

Some great examples of different schedules can be found on [ostechnix.com](https://www.ostechnix.com/a-beginners-guide-to-cron-jobs/).

## Netlify Build Hooks

Now that we know a bit about cron jobs, we want to talk about which command to execute. In this particular case, we’ll be using [Netlify Build Hooks](https://docs.netlify.com/configure-builds/build-hooks/#app) as I’m using Netlify to build and deploy my site. Triggering a build requires only a POST request sent to the URL specified by a token.

To add a build hook, we log into Netlify, select the site and choose Settings. Then, on the left, select “Build & Deploy” and scroll down to “Build Hooks”. Select “Add build hook”. From here, we can name the hook.

![](./build-hooks-configure.png)

Once saved, Netlify will produce a token we can use to trigger a build. The token should be considered sensitive and should be kept private (similar to an API key).

![](./build-hooks-result.png)

Now that we have a build hook, we need to schedule the invocation of it. For that, we’ll use a cron job via GitHub Actions.
## GitHub Actions

Recently out of beta, [GitHub Actions](https://github.com/features/actions) are a new feature from GitHub to automate software workflows. To get the action set up, we will need to:

1. Add a workflow `.yml`
2. Add the secret

### Adding GitHub Workflows

GitHub Actions are managed by the presence of a `.yml` file in the `.github/workflows` directory of your project. Here’s my `nightly-build.yml` file:

```
name: Daily Build

on:
  schedule:
    - cron: '0 8 * * *' # 0 minute, 8th hour, every day of the month, every month, every day of the week (UTC)*

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Trigger Netlify Build Hook
        run: curl -s -X POST -d {} "https://api.netlify.com/build_hooks/${TOKEN}"
        env:
          TOKEN: ${{ secrets.NETLIFY_DAILY_CRON_HOOK }}
```

Remember that a cron job has two parts: a schedule and a command.
In this yaml file, I’ve defined a schedule as the 0th minute of the 8th hour, every day of the month, every month, and every day of the week (in other words, daily at 8:00).<sup>[1](#footnotes)</sup><a id="fn1"></a> The job is to run the `curl` and send the POST to Netlify.

Two things to note:

1. Regarding yaml files generally: they’re space sensitive. My first attempt resulted in an error, “YAML mapping values are not allowed in this context”. Investigating it led me to this [StackOverflow conversation](https://stackoverflow.com/questions/31313452/yaml-mapping-values-are-not-allowed-in-this-context) and ultimately [http://www.yamllint.com/](http://www.yamllint.com/) - a great tool for validating yaml files.
2. The `curl` is the same as what Netlify suggested - though I specified a TOKEN which takes the value `secrets.NETLIFY_DAILY_CRON_HOOK`. That secret hasn’t yet been defined. In fact, that’s the final step.

## Adding A Secret

Though we’ve referenced a variable in our `.yml` file, we now need to define it. To do so, in the repository, select settings and then “Secrets” on the left.

[![github secrets](https://stephencharlesweiss.com/static/f7aa1ec8982e730c1b67b4af357de963/b9e4f/github-secrets.png)](/static/f7aa1ec8982e730c1b67b4af357de963/4117f/github-secrets.png)

In this case, the secret is taken directly from Netlify as the string that follows `/build_hooks/`.
## Conclusion

And with that, I have a new build process. Suddenly, my website builds every day at 8am, which frees me to focus on the parts of writing and running this site that I enjoy and begin to automate the other stuff.

## Footnotes

- <sup><a href="#fn1">1</a></sup> Though it’s scheduled at 8am, it’s important to remember that GitHub Actions runs scheduled workflows in UTC, not in your local time zone.
## Resources

Additional resources that I found helpful include:

- [Scheduling Netlify Deploys With Github Actions | Voorhoede](https://www.voorhoede.nl/en/blog/scheduling-netlify-deploys-with-github-actions/) - This was a great tutorial that proved a great starting point to understanding these actions.
- [About GitHub Actions - GitHub Help](https://help.github.com/en/articles/about-github-actions)
- [Automating your workflow with GitHub Actions - GitHub Help](https://help.github.com/en/categories/automating-your-workflow-with-github-actions)
- [Events that trigger workflows - GitHub Help](https://help.github.com/en/articles/events-that-trigger-workflows)
- [Automate your Netlify sites with Zapier | Netlify](https://www.netlify.com/blog/2018/11/07/automate-your-netlify-sites-with-zapier/)
stephencweiss
235,607
Create a Blog in Less Than 20 Lines of Code Using 11ty
In this blog post, I will show you how easy it is to create a personal website with a blog page using 11ty.
0
2020-01-10T09:36:45
https://dev.to/oohsinan/create-a-blog-in-less-than-20-lines-of-code-using-11ty-3oh0
beginners, tutorial, webdev, 11ty
---
title: Create a Blog in Less Than 20 Lines of Code Using 11ty
published: true
description: In this blog post, I will show you how easy it is to create a personal website with a blog page using 11ty.
tags: beginners, tutorial, webdev, 11ty
cover_image: https://i.imgur.com/rfruTyZ.png
---

<small>This was originally posted on my personal blog at https://omarsinan.com/blog/the-eleventy-11ty-hype-explained</small>

In this blog post, I'll go over what's so special about 11ty and why you should check it out. I also hope that this could serve as a motive for you to create a personal website with a blog page using 11ty 😄

## 🤯 Possibly the simplest installation process

If you open the 11ty website (11ty.dev), the first thing you see is the quick start section. With 3 commands (could be done in 2 as well), you can set up your first website.

```bash
npm install -g @11ty/eleventy
echo '# Hello, world!' > index.md
eleventy
```

You might have to run the `npm install` command with `sudo` in order to have the correct permissions to install 11ty. Finally, to see your website running, run `eleventy --serve` and magically, you have a running static website made from a single .md file! 🤯

## 📝 You want a blog? Have a blog.

It's supeeeer easy to set up a blog for your personal website. The first step is probably to create a layout file. Layouts are basically templates that are used in other files. Create a directory called `_includes` and a file called `layout.njk` inside `_includes` and copy-paste the following:

```html
---
title: A blog about cats
---
<!doctype html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>{{ title }}</title>
  </head>
  <body>
    {{ content | safe }}
  </body>
</html>
```

Now let's create a directory called `blog` in the root directory and inside it, create a directory called `post-1`. Inside the `post-1` directory, create a file called `index.md`.
This will serve as our first blog post:

```md
---
layout: layout.njk
title: This is my first ever post!!
date: 2020-01-08
---

# {{ title }}

Hello, world!
```

You can use the Liquid templating language within your .md files!!! (whaaaat?)

Open your browser and go to `http://localhost:8080/blog/post-1/` and guess what?

![Preview of result](https://i.imgur.com/qnNYyDm.png)

Let's add an image to our blog post. Create a folder inside the `post-1` directory called `images` and inside it place an image of a cat (yes, only cats are allowed).

```md
---
layout: layout.njk
title: This is my first ever post!!
date: 2020-01-08
---

# {{ title }}

Hello, world!

![A picture of a cat](./images/cat.jpg)
```

Refresh the page annnnd, oh no! It didn't work. 11ty by default will scan for .md files only. So in order to tell it to copy over the images as well, we need to specify which file formats it should scan for. So in the terminal, run `eleventy --formats=md,jpg` and then `eleventy --serve --formats=md,jpg`.

Aaaaand voilà! Check it out:

![Preview of result](https://i.imgur.com/ZFt1Zwu.png)

Alright, that's cool and all, but how do we display a link to all the individual blog posts on our main page? It's very simple. In your `index.md` file in the `post-1` directory, add another attribute in the frontmatter: `tags: post`. So it should look like this:

```md
---
layout: layout.njk
title: This is my first ever post!!
date: 2020-01-08
tags: post
---

# {{ title }}

Hello, world!

![A picture of a cat](./images/cat.jpg)
```

The tags attribute will help us create a collection that we can iterate over. Now your `index.md` in the root directory should have the following:

```liquid
# Hello, world!

{% for post in collections.post %}
- {{ post.data.title }}
{% endfor %}
```

So now if you head over to `http://localhost:8080/`, you can see that our first post shows up. How do we link it, though? Easy peasy, just add `post.url` as a link. So your file should look like this:

```liquid
# Hello, world!

{% for post in collections.post %}
- [{{ post.data.title }}]({{ post.url }})
{% endfor %}
```

Perfect, it works! Let's confirm it by creating another post. Again, same drill. Create a directory called `post-2` in the `blog` directory and create a file inside it called `index.md`:

```md
---
layout: layout.njk
title: Woohoo! My second post
date: 2020-01-09
tags: post
---

# {{ title }}

Hey there!
```

It's as simple as that!

![Preview of result](https://i.imgur.com/HbzOgi6.png)

By default, 11ty will sort in ascending order of dates. Let's sort it so that it shows the newest posts first. Edit your `index.md` in the root directory by assigning `collections.post | reverse` to `posts` and change `collections.post` to `posts` in the for loop. You can leave the rest as is.

```liquid
# Hello, world!

{%- assign posts = collections.post | reverse -%}
{% for post in posts %}
- [{{ post.data.title }}]({{ post.url }})
{% endfor %}
```

This will make sure that we reverse the default sorting of 11ty, which results in descending order of dates.

![Preview of result](https://i.imgur.com/WBhmvc6.png)

## 🤔 What's next?

There are soooo many more powerful features provided by 11ty. This post was only to give you an idea of how easy it is to get a blog up and running. 11ty already provides a blog website base project which you can check out at https://www.11ty.dev/docs/starter/ in case you don't want to build everything from scratch (although if it is your first time using 11ty, I suggest you learn by creating it yourself).

I also suggest you check out the documentation. In my opinion, 11ty has one of the most impressive docs I've seen. The docs provide a lot of content for you to get a proper website up and running.

You could also check out Jason Lengstorf's Let's Learn Eleventy! (with Zach Leatherman) at https://www.youtube.com/watch?v=j8mJrhhdHWc (it's an amazing video; I highly recommend you check it out).
oohsinan
235,646
My Vanilla JavaScript Built-In Web Elements Collection
A collection of 10 built-in web elements
0
2020-01-10T18:27:15
https://dev.to/felipperegazio/my-vanilla-javascript-built-in-web-elements-collection-3moo
showdev, frontend, javascript, webdev
---
title: My Vanilla JavaScript Built-In Web Elements Collection
published: true
description: A collection of 10 built-in web elements
tags: showdev, frontend, javascript, webdev
---

So I wrote this elements collection. The built-in web elements are part of the Web Components specs. They allow us to extend standard HTML elements and give them new superpowers. I wrote them in a way that I don't need to keep repeating common UI elements.

I focused on the elements' behavior, keeping a minimalistic approach:

1. **Almost no style** (behavior-driven style), leaving you free to add your visual identity.
2. **Single-purpose elements**. Every element must do only one thing. Commonly it encapsulates complex behaviors.
3. **Simplicity**. The code is very simple. The largest component source has 100 lines of code. There is no complex, multi-purpose, largely configurable element.
4. **Agnostic**. That means no frameworks required. You can use them anywhere; just add the `is="el-name"` attribute to give new powers to a standard HTML element.

As said by MDN: "Web Components is a suite of different technologies allowing you to create reusable custom elements — with their functionality encapsulated away from the rest of your code — and utilize them in your web apps".

## Documentation and Usage

I put some effort into documenting the collection and each element separately. The elements are really EASY to use. You load the CSS and JS files, then use the `is` attribute to specify which behavior you want to give to your tag. For example:

```html
<div is="el-accordion" data-summary="Hello World">
  <p>I'm an accordion that shows this message when expanded</p>
</div>
```

You can check live examples and docs here: https://felippe-regazio.github.io/web-elements

And here: https://github.com/felippe-regazio/web-elements

All elements have a rich API with events and methods. I still need to add accessibility (needing help here).
## Element List

* ⚡️ **Accordion**: Extends the div element, giving it an accordion structure and behavior.
* ⚡️ **Card**: Extends the div element, giving it a card structure and behavior.
* ⚡️ **Header**: Extends the header element, giving it a set of features such as staying fixed on top and auto-hiding on scroll.
* ⚡️ **Image Viewer**: Extends the img element, giving it a full-screen view; the image will be shown in a lightbox when clicked.
* ⚡️ **Lazy Load IMG**: Extends the img element, giving it a lazy load behavior. The images will only be loaded when visible.
* ⚡️ **Lightbox**: Extends the div element, giving it a lightbox behavior.
* ⚡️ **Mustache Template Div**: Extends the div element, giving it template engine capabilities. The div will be capable of parsing JSON to fill its content.
* ⚡️ **Read More**: Extends the div element, giving the content inside a Read More feature.
* ⚡️ **Sidebar**: Extends the div element, giving it a sidebar structure and behavior.
* ⚡️ **Spinner**: Extends the div element, giving it different configurations to act like a loading spinner.

---

Contributions, criticism, suggestions, hints... are VERY welcome! :)
felipperegazio
235,753
How to Add an Item to a List in Python: Append, Slice Assignment, and More
Lately, I’ve been thinking about new Python topics to write about, so I took to Google. When I search...
433
2020-07-26T03:59:12
https://therenegadecoder.com/code/how-to-add-an-item-to-a-list-in-python/
python, challenge, performance
Lately, I’ve been thinking about new Python topics to write about, so I took to Google. When I searched “Python how to”, “Python how to add to a list” popped up first. Since this must be a popular search term, I decided it was worth an article. In other words, today we’ll learn how to add an item to a list in Python.

**To save you some time, you can start adding items to a list right now with the `append()` method: `my_list.append(item)`. If you have more complex needs, consider using `extend()`, `insert()`, or even slice assignment. See the rest of the article for more details.**

## Problem Description

For people who come from other programming languages, tasks like creating and adding to a list can be daunting. After all, almost every language supports lists in one form or another (e.g. arrays, lists, etc.), but not all languages have the same syntax. For instance, here’s an example of an array in Java:

```java
int[] myArray = new int[10];
myArray[0] = 5;  // [5, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```

Here, we’ve created a fixed-size array of 10 elements, and we’ve set the first element to 5. In other words, we don’t really add elements to arrays in Java. Instead, we modify existing elements.

Meanwhile, in a language like Rust, arrays are declared a little differently:

```rust
let mut my_array: [i32; 10] = [0; 10];
my_array[0] = 5;  // [5, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```

In Rust, we have to explicitly declare the array as mutable with `mut`. That way, we can modify the array just like in Java. In addition, the syntax is quite a bit different. For example, we set the type to `i32`, the size to 10, and all elements to 0.

Of course, there are languages with built-in lists much like what you might find in Python. For example, Ruby’s (dynamic) arrays can be created and modified as follows:

```ruby
my_array = []
my_array << 5  # [5]
```

Unlike the previous solution, we didn’t have to set up our array with a certain size. Instead, we’re able to start with an empty array.
Then, we pushed a 5 into the array and called it a day. Oddly enough, in Python, the syntax for creating a list is quite similar:

```python
my_list = []
```

But how do we add anything to this list? That’s the topic of this article.

## Solutions

In this section, we’ll take a look at various ways to add an item to a list in Python. Since this task is pretty straightforward, there aren’t very many options. In fact, that’s sort of by design in Python (i.e. “There should be one– and preferably only one –obvious way to do it.”). That said, I’ve included a few silly solutions to keep this piece interesting.

### Add an Item to a List Statically

To be honest, this is sort of a non-answer. That said, if you want to populate a list, you can declare the elements statically:

```python
my_list = [2, 5, 6]
```

Rather than adding the items one at a time, we’ve decided to initialize the list exactly as we want it to appear. In this case, we’ve created a list with three items in it: 2, 5, and 6. In the next section, we’ll look at our first way to modify an existing list.

### Add an Item to a List by Slice Assignment

In Python, there is this very peculiar piece of syntax that I only just recently learned about called slice assignment. While slicing can be used to return a section of a list, slice assignment can be used to replace a section of a list. In other words, it’s possible to write an expression which adds a value to the end of a list:

```python
my_list = []
my_list[len(my_list):] = [5]
```

Here, we have an expression which replaces the end of the list, which is an empty list, with a list containing a single value. In essence, we’ve added an item to the end of the list. Interestingly enough, this syntax can be used to replace any part of a list with any other list.
For example, we could add an item to the front of the list:

```python
my_list = [1, 2, 3, 4]
my_list[:0] = [0]  # [0, 1, 2, 3, 4]
```

Likewise, we can replace any sublist with any other list of any size:

```python
my_list = [1, 2, 3, 4]
my_list[:2] = [6, 7, 8, 9]  # [6, 7, 8, 9, 3, 4]
```

Fun fact: we aren’t restricted to lists with this syntax. We can assign **any** iterable to the slice:

```python
my_list = []
my_list[:] = "Hello"  # ['H', 'e', 'l', 'l', 'o']
my_list[:] = (1, 2, 3)  # [1, 2, 3]
my_list[:] = (i for i in range(5))  # [0, 1, 2, 3, 4]
```

When I first added this solution to the list, I thought it was kind of silly, but now I’m seeing a lot of potential value in it. Let me know if you’ve ever used this syntax and in what context. I’d love to see some examples of it in the wild.

### Add an Item to a List with Append

On a more traditional note, folks who want to add an item to the end of a list in Python can rely on `append()`:

```python
my_list = []
my_list.append(5)
```

Each call to `append()` will add one additional item to the end of the list. In most cases, this sort of call is made in a loop. For example, we might want to populate a list as follows:

```python
my_list = []
for i in range(10):
    my_list.append(i)
```

Of course, if you’re going to do something like this, and you’re not using an existing list, [I recommend using a list comprehension](https://therenegadecoder.com/code/how-to-write-a-list-comprehension-in-python/):

```python
my_list = [i for i in range(10)]
```

At any rate, `append()` is usually the go-to method for adding an item to the end of a list. Of course, if you want to add more than one item at a time, this isn’t the solution for you.

### Add an Item to a List with Extend

If you’re looking to combine two lists, `extend()` is the method for you:

```python
my_list = []
my_list.extend([5])
```

In this example, we append a list of a single element to the end of an empty list.
Naturally, we can append a list of any size:

```python
my_list = [3, 2]
my_list.extend([5, 17, 8])  # [3, 2, 5, 17, 8]
```

Another great feature of `extend()` is that we’re not restricted to lists; we can use **any** iterable. That includes tuples, strings, and generator expressions:

```python
my_list = []
my_list.extend("Hello")  # ['H', 'e', 'l', 'l', 'o']
my_list.clear()
my_list.extend((1, 2, 3))  # [1, 2, 3]
my_list.clear()
my_list.extend(i for i in range(5))  # [0, 1, 2, 3, 4]
```

Of course, like `append()`, `extend()` doesn’t return anything. Instead, it modifies the existing list. Also like `append()`, `extend()` only adds the iterable to the end of the other list. There’s no way to specify where the input iterable goes. If you want more control, I suggest slice assignment or our next method, `insert()`.

### Add an Item to a List with Insert

If `append()` and `extend()` aren’t doing it for you, I recommend `insert()`. It allows you to add an item to a list at any index:

```python
my_list = []
my_list.insert(0, 5)
```

In this case, we inserted a 5 at index 0. Naturally, we can choose any valid index:

```python
my_list = [2, 5, 7]
my_list.insert(1, 6)  # [2, 6, 5, 7]
```

And just like with regular list syntax, we can use negative indices:

```python
my_list = [2, 5, 7]
my_list.insert(-1, 9)  # [2, 5, 9, 7]
```

How cool is that?! Unfortunately, however, we can’t really insert an entire list. Since Python lists don’t restrict type, we can add any item we want. As a result, inserting a list will literally insert that list:

```python
my_list = [4, 5, 6]
my_list.insert(1, [9, 10])  # [4, [9, 10], 5, 6]
```

Luckily, slice assignment can help us out here!

## Performance

With all the solutions ready to go, let’s take a look at how they compare in terms of performance. Since each solution doesn’t do exactly the same thing, I’ll try to be fair in how I construct my examples.
For instance, all the following examples will append a value at the end of each of the sample lists (ignoring the static assignment solutions):

```python
setup = """
empty_list = []
small_list = [1, 2, 3, 4]
large_list = [i for i in range(100000)]
"""

static_list_empty = "empty_list = []"
static_list_small = "small_list = [1, 2, 3, 4, 5]"

slice_assignment_empty = "empty_list[len(empty_list):] = [5]"
slice_assignment_small = "small_list[len(small_list):] = [5]"
slice_assignment_large = "large_list[len(large_list):] = [5]"

append_empty = "empty_list.append(5)"
append_small = "small_list.append(5)"
append_large = "large_list.append(5)"

extend_empty = "empty_list.extend([5])"
extend_small = "small_list.extend([5])"
extend_large = "large_list.extend([5])"

insert_empty = "empty_list.insert(len(empty_list), 5)"
insert_small = "small_list.insert(len(small_list), 5)"
insert_large = "large_list.insert(len(large_list), 5)"
```

Now, let’s take a look at all the empty list examples:

```python
>>> import timeit
>>> min(timeit.repeat(setup=setup, stmt=static_list_empty))
0.06050460000005842
>>> min(timeit.repeat(setup=setup, stmt=slice_assignment_empty))
0.4962195999996766
>>> min(timeit.repeat(setup=setup, stmt=append_empty))
0.17979939999986527
>>> min(timeit.repeat(setup=setup, stmt=extend_empty))
0.27297509999971226
>>> min(timeit.repeat(setup=setup, stmt=insert_empty))
0.49701270000059594
```

As expected, the most straightforward solution performs the best. Let’s see how that plays out as we grow our list:

```python
>>> min(timeit.repeat(setup=setup, stmt=static_list_small))
0.1380927000000156
>>> min(timeit.repeat(setup=setup, stmt=slice_assignment_small))
0.5136848000001919
>>> min(timeit.repeat(setup=setup, stmt=append_small))
0.1721136000005572
>>> min(timeit.repeat(setup=setup, stmt=extend_small))
0.28814950000014505
>>> min(timeit.repeat(setup=setup, stmt=insert_small))
0.5033762000002753
```

Again, `append()` gets the job done the quickest.
Now, let’s take a look at an enormous list:

```python
>>> min(timeit.repeat(setup=setup, stmt=slice_assignment_large))
0.5083946000004289
>>> min(timeit.repeat(setup=setup, stmt=append_large))
0.18050170000060461
>>> min(timeit.repeat(setup=setup, stmt=extend_large))
0.28858020000006945
>>> min(timeit.repeat(setup=setup, stmt=insert_large))
0.5108225000003586
```

Amazingly, all these solutions seem to scale really well. That said, `append()` takes the cake. After all, [it is amortized O(1)](https://wiki.python.org/moin/TimeComplexity). In other words, appending to a list is a constant time operation as long as we don’t run out of space.

That said, take these performance metrics with a grain of salt. After all, not every solution is going to be perfect for your needs.

## Challenge

Now that we know how to add an item to a list, let’s try writing a simple sorting algorithm. After all, that’s the perfect task for someone who wants to get familiar with the various list manipulation methods.

Personally, I don’t care which sorting algorithm you implement (e.g. bubble, insertion, merge, etc.) or what type of data you choose to sort (e.g. numbers, strings, etc.). In fact, I don’t even care if you sort the data in place or create an entirely separate list. All I care about is that you use one or more of the methods described in this article to get it done.

When you think you have a good solution, feel free to share it in the comments. As always, I’ll share an example in the comments.
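In that spirit, here is one possible take on the challenge (just a sketch; any of the methods covered above would do): an insertion sort that builds a brand-new sorted list using `insert()`.

```python
def insertion_sort(values):
    """Return a new sorted list, built one item at a time with insert()."""
    result = []
    for value in values:
        # Find the first index whose element is >= value;
        # default to the end of the list (equivalent to an append).
        index = len(result)
        for i, existing in enumerate(result):
            if existing >= value:
                index = i
                break
        # Slice assignment would work just as well here:
        # result[index:index] = [value]
        result.insert(index, value)
    return result

print(insertion_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```

Since it copies into a fresh list, the input is left untouched; sorting in place would be just as valid an answer.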
## A Little Recap

With all that out of the way, let’s take a look at all our solutions again:

```python
# Statically defined list
my_list = [2, 5, 6]

# Appending using slice assignment
my_list[len(my_list):] = [5]  # [2, 5, 6, 5]

# Appending using append()
my_list.append(9)  # [2, 5, 6, 5, 9]

# Appending using extend()
my_list.extend([-4])  # [2, 5, 6, 5, 9, -4]

# Appending using insert()
my_list.insert(len(my_list), 3)  # [2, 5, 6, 5, 9, -4, 3]
```

By far, this has been one of my favorite articles to write in a while. There’s nothing quite like learning something new while writing a short response to the question “How do I add an item to a list?”

If you liked this article, help me get it in front of more eyes by giving it a share. In addition, you can show your support by [hopping on my mailing list](https://newsletter.therenegadecoder.com/), [joining me on Patreon](https://www.patreon.com/TheRenegadeCoder), or [subscribing to my YouTube channel](https://www.youtube.com/channel/UCpyoVwOqYRlSAEUPEn7P9hw). Otherwise, check out some of these related articles:

- [How to Print on the Same Line in Python](https://therenegadecoder.com/code/how-to-print-on-the-same-line-in-python/)
- [How to Get the Last Item of a List in Python](https://therenegadecoder.com/code/how-to-get-the-last-item-of-a-list-in-python/)
- [How I Display Code in My Python Videos](https://therenegadecoder.com/blog/how-i-display-code-in-my-python-videos/) (Patron-only)

Other than that, that’s all I’ve got! Thanks again for your support. See you next time!

The post [How to Add an Item to a List in Python: Append, Slice Assignment, and More](https://therenegadecoder.com/code/how-to-add-an-item-to-a-list-in-python/) appeared first on [The Renegade Coder](https://therenegadecoder.com).
renegadecoder94
235,802
31 character password length limit in Mojave
OSX, at least Mojave, has a 31 character password limit.
0
2020-01-10T17:06:44
https://dev.to/apetro/31-character-password-length-limit-in-mojave-1gp4
mac, security
---
title: 31 character password length limit in Mojave
published: true
description: OSX, at least Mojave, has a 31 character password limit.
tags: mac, security
cover_image: https://thepracticaldev.s3.amazonaws.com/i/l7c88rbejoa148mqddlm.jpg
---

Today I learned that [Mojave has a 31 character password limit](https://www.mac-forums.com/forums/macos-operating-system/317066-maximum-length-passwords.html).

In general I like long random passwords managed in a password manager. However, there are a few passwords that I find I have to type frequently: the password to access the password manager itself, and the single sign-on password. These I like to be [long passwords easy to remember and type but difficult to brute force](https://xkcd.com/936/).

While the single sign-on system at issue generally supports a longer maximum password length, it turns out using a length longer than 31 characters is a way to lock myself out of my Mac. Trimming the password down to 31 characters fixed it.

Mostly leaving this post as a note to myself for next time.

[Cover image credit](https://unsplash.com/photos/-LFe6Prglw4)
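To put rough numbers on that tradeoff, here is a back-of-the-envelope sketch (the pool sizes are my own assumptions, not anything measured on macOS) comparing the entropy of an xkcd-style passphrase with a 31-character random password:

```python
import math

def entropy_bits(pool_size, length):
    # length independent choices from a pool of pool_size equally likely options
    return length * math.log2(pool_size)

# Four random words from a 2048-word list (the xkcd-style scheme):
print(entropy_bits(2048, 4))  # 44.0

# 31 random characters drawn from ~95 printable ASCII characters:
print(round(entropy_bits(95, 31), 1))  # 203.7
```

In other words, the 31-character cap still leaves plenty of room for a random password; it mainly pinches multi-word passphrases, which spend many characters per bit of entropy.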
apetro
235,803
How to authenticate with Google Oauth in Electron
Last month, we started to get concerning bug reports from our users that they couldn’t authenticate w...
0
2020-01-10T17:44:31
https://pragli.com/blog/how-to-authenticate-with-google-in-electron/
development
---
title: How to authenticate with Google Oauth in Electron
published: true
date: 2020-01-10 16:45:50 UTC
tags: Development
canonical_url: https://pragli.com/blog/how-to-authenticate-with-google-in-electron/
cover_image: https://pragli.com/blog/content/images/size/w640/2019/12/desktop-authentication-communication-1.png
---

Last month, we started to get concerning bug reports from our users that they couldn’t authenticate with Google directly inside of the Pragli desktop application. Here’s the error message that they were seeing:

> You are trying to sign in from a browser or app that doesn’t allow us to keep your account secure. Try using a different browser.

![How to authenticate with Google Oauth in Electron](https://pragli.com/blog/content/images/2019/12/desktop-authentication-warning-message.png)

We knew that Google had deprecated OAuth authentication for non-standard browsers a year ago, but we had no idea that Electron (based on Chromium) was on the chopping block as well. This means that our desktop application could no longer authenticate with Google directly inside of the application. We now needed to navigate the user to a supported browser to properly authenticate.

I couldn’t find a resource to help me solve this problem, so I ended up experimenting with different authentication architectures until I resolved the issue. In this post, I show how to implement Google authentication for your Electron application without running into any browser restrictions. I use Firebase, React and Google Cloud Functions, but other technologies should have similar implementations.

## General idea

Here’s a rough outline of how to authenticate your desktop application:

1. Generate a one-time random code within the Electron application when a user initiates a sign in
2. Listen for an authentication code in a publicly accessible database, using that one-time random code (more on this later)
3. Open the user’s default browser, passing along the one-time code
4. Once in the user’s browser, redirect to Google OAuth
5. Once you’re redirected back from Google OAuth with the session information, generate an authentication token for arbitrary clients to sign in as the user
6. Associate the authentication token with your one-time code in your datastore
7. The Electron application receives the authentication token via its listener in Step #2 and signs the user in

## Step 1: Generating a random token

There’s no easy way to directly pass data from a browser to an Electron application. Therefore, we need to use a datastore (Firebase in our case) that is accessible by both the browser and the Electron application to pass the authentication information from the browser, which is handling the authentication, to the Electron application, which needs to sign in.

![How to authenticate with Google Oauth in Electron](https://pragli.com/blog/content/images/2019/12/desktop-authentication-communication.svg)
<figcaption>Architecture for desktop authentication</figcaption>

To uniquely identify a sign-in attempt, we also need to generate a random code that the supported browser can use to send the Electron application the user session information/authentication code to sign in. Within the Electron application, generate this one-time code with the `uuid()` module.

```javascript
const signIn = () => {
  const id = uuid()
  // Rest of implementation
}
```

## Step 2: Listen for the authentication code

Listen for any authentication information that is passed through Firebase with the `.on()` function.

**Note:** The `ot-auth-codes/${id}` route needs to be readable/writeable by any unauthenticated client. For Pragli, we judged that the security concerns for this approach were minimal.
```javascript
const signIn = () => {
  const id = uuid()
  const oneTimeCodeRef = firebase.database().ref(`ot-auth-codes/${id}`)

  oneTimeCodeRef.on('value', async snapshot => {
    const authToken = snapshot.val()
    // Rest of implementation
  })
}
```

## Step 3: Redirect the user to the browser

Navigate the user to the browser to complete the Google authentication. We created a simple route `/desktop-sign-in` to initiate the authentication. Make sure that you pass along your one-time use code, so the browser can pass the authentication details back to Electron once the authentication finishes.

```javascript
const signIn = () => {
  const id = uuid()
  const oneTimeCodeRef = firebase.database().ref(`ot-auth-codes/${id}`)

  oneTimeCodeRef.on('value', async snapshot => {
    const authToken = snapshot.val()
    // Rest of implementation
  })

  const googleLink = `/desktop-sign-in?ot-auth-code=${id}`
  window.open(googleLink, '_blank')
}
```

## Step 4: Google authentication

At the `/desktop-sign-in` page, redirect to Google for authentication. In our case, Firebase made this dead simple with the `signInWithRedirect()` function. The service transparently handles setting the OAuth redirect URL and creating a fully [populated User object](https://firebase.google.com/docs/reference/js/firebase.User).

```javascript
componentDidMount() {
  const provider = new firebase.auth.GoogleAuthProvider()
  firebase.auth().signInWithRedirect(provider)
}
```

## Step 5: Generate the authentication token

Once Google redirects back to your application, recover the Firebase user object with the `getRedirectResult()` function. To generate an authentication token that Electron can use to sign in as the user, you need to use the admin Firebase library.
Unfortunately, you can’t use the regular client-side Firebase library to generate a token - even if you’re already authenticated as the user. To use the admin Firebase library, you need a secure server environment. I prefer to use Google Cloud Functions to keep our infrastructure simple. But regardless of the server that you select, you also need to send the backend the following information:

1. The one-time use code
2. A user ID token for the server to validate that the sending request comes from the correct, authenticated user

**Browser Client**

```javascript
async componentDidMount() {
  const result = await firebase.auth().getRedirectResult()

  if (!result) {
    // provider is constructed as in Step 4
    firebase.auth().signInWithRedirect(provider)
  } else {
    console.log("Grabbed the user", result.user)

    if (!result.user) {
      return
    }

    const params = new URLSearchParams(window.location.search)
    const token = await result.user.getIdToken()
    const code = params.get("ot-auth-code")

    const response = await fetch(`${getFirebaseDomain()}/create-auth-token?ot-auth-code=${code}&id-token=${token}`)
    await response.json()
  }
}
```

**Server (Google Cloud Function)**

Since you’re executing a cross-origin request from a client to a server in Google Cloud Functions, make sure that you wrap the request with the CORS node module to ensure that the correct `Access-Control-Allow-Origin` headers are set.
```javascript
const functions = require('firebase-functions')
const admin = require('firebase-admin')
const cors = require('cors')({ origin: true })

admin.initializeApp()

exports.createAuthToken = functions.https.onRequest((request, response) => {
  cors(request, response, async () => {
    const query = request.query
    const oneTimeCode = query["ot-auth-code"]
    const idToken = query["id-token"]

    const decodedToken = await admin.auth().verifyIdToken(idToken)
    const uid = decodedToken.uid
    const authToken = await admin.auth().createCustomToken(uid)

    console.log("Authentication token", authToken)
    await admin.database().ref(`ot-auth-codes/${oneTimeCode}`).set(authToken)
    return response.status(200).send({ token: authToken })
  })
})
```

## Step 6: Associate the auth token with the one-time code

Once you’ve generated the auth token, you can easily associate the token with your one-time code with the `.set()` function using the Firebase admin library.

```javascript
exports.createAuthToken = functions.https.onRequest((request, response) => {
  // Other implementation
  const authToken = await admin.auth().createCustomToken(uid)
  await admin.database().ref(`ot-auth-codes/${oneTimeCode}`).set(authToken)
  return response.status(200).send({ token: authToken })
})
```

## Step 7: Use the auth token to sign in as a user in Electron

In Step #2, you set your Electron application to listen for updates to the one-time use token route in Firebase. Once the server sets the auth token at that Firebase route, simply grab the auth token and use it to sign in.
```javascript
const signIn = () => {
  const id = uuid()
  const oneTimeCodeRef = firebase.database().ref(`ot-auth-codes/${id}`)

  oneTimeCodeRef.on('value', async snapshot => {
    const authToken = snapshot.val()
    const credential = await firebase.auth().signInWithCustomToken(authToken)
  })

  const googleLink = `/desktop-sign-in?ot-auth-code=${id}`
  window.open(googleLink, '_blank')
}
```

## Another authentication strategy

We used a publicly accessible datastore with Firebase to communicate from the browser to the Electron application. You can also facilitate the communication between the browser and the desktop app by **spinning up a local server** upon boot of the Electron app. Then, the local server simply writes to a known filesystem location, and the Electron app can poll for changes at that location to get the authentication token.

![How to authenticate with Google Oauth in Electron](https://pragli.com/blog/content/images/2019/12/desktop-authentication-communication-local.png)

## Concluding thoughts

The recent Google authentication restrictions have been a huge pain for desktop developers. The trickiest part is managing the communication channel from the browser back to the Electron application. Hopefully this tutorial helped in planning your migration to browser-based authentication.

## What’s Pragli?

I built a virtual office for remote teams to frictionlessly chat and feel present with each other. If you’re interested in using the product with your team, [learn more here](https://pragli.com) or reach out to me on Twitter.
virtuallyvivek