| column | type | min | max |
| --- | --- | --- | --- |
| id | int64 | 5 | 1.93M |
| title | string (length) | 0 | 128 |
| description | string (length) | 0 | 25.5k |
| collection_id | int64 | 0 | 28.1k |
| published_timestamp | timestamp[s] | | |
| canonical_url | string (length) | 14 | 581 |
| tag_list | string (length) | 0 | 120 |
| body_markdown | string (length) | 0 | 716k |
| user_username | string (length) | 2 | 30 |
359,887
Responsive coffee site lesson
Use rem (root) unit for font sizes for easy scaling For background-image, set it to center so that c...
0
2020-06-20T22:00:38
https://dev.to/heggy231/responsive-coffee-site-lesson-2n1c
css
- Use the `rem` (root) unit for font sizes for easy scaling.
- For `background-image`, set `background-position: center` so the center of the image stays centered when the page is resized.

```css
.banner-img {
  /* # is a placeholder for the image location */
  background-image: url('#');
  background-position: center;
}
```

- When reducing an image's size, change only its width to keep the aspect ratio.

```css
.fun-img {
  width: 20%;
}
```

- For the mobile-screen media query rule set, use `max-width: 470px`. Set the image to fill the full width of the mobile screen and stack the images on top of each other.

```css
@media only screen and (max-width: 470px) {
  img {
    width: 100%;
    float: left;
    display: block;
  }
}
```
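The first point can be sketched like this: sizes declared in `rem` are relative to the root element's font size, so changing one value rescales all text (the pixel values here are just examples, not from the lesson):

```css
html { font-size: 16px; }  /* 1rem = 16px; change this one value to rescale everything */
h1   { font-size: 2rem; }  /* 32px while the root is 16px */
p    { font-size: 1rem; }  /* 16px */
```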
heggy231
360,049
What is Readiness & Liveness Probes in Kubernetes?
Kubernetes uses Readiness & Liveness probes to manage pod lifecycle. Readiness probe is used to d...
7,023
2020-06-21T04:34:45
https://developersthought.in/kubernetes/2020/06/21/k8s-session-04.html
devops, kubernetes, opensource, beginners
Kubernetes uses `Readiness` & `Liveness` probes to manage the pod lifecycle. The readiness probe determines whether a pod is ready to accept traffic, and the liveness probe determines whether a pod is functioning properly. Read more about them [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/). In this blog I add readiness & liveness probes to a phpMyAdmin application.

## Architecture

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/0fe50x78pk2frhoeizlr.JPG)

## Prerequisites

### Deploy the phpMyAdmin application

Follow the [Deploy phpMyAdmin application on kubernetes](https://dev.to/sagarjadhv23/what-is-deployment-service-secret-and-configmap-in-kubernetes-1pk5) blog.

### Go to the session_4 directory

```
cd ../session_4/
```

## Step 1: Delete the phpMyAdmin deployment

```
kubectl delete deployment phpmyadmin
```

## Step 2: Deploy phpMyAdmin with Readiness & Liveness probes

```
kubectl create -f phpmyadmin-deployment.yaml
```

```
kubectl get pods --watch
```

Exit once the pod goes into the running state.

## Step 3: Browse the phpMyAdmin application

Go to http://IP_ADDRESS:30030 in your browser and log in with user `root` and password `test`, where IP_ADDRESS is the IP address of the virtual machine where Kubernetes is running.

## Demo

{% youtube c9cnPuD6QgU %}
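As a rough sketch of what such a deployment adds (the paths, ports and timings below are illustrative assumptions, not taken from the actual `phpmyadmin-deployment.yaml`), probes are declared per container in the deployment spec:

```yaml
# Illustrative container spec with both probes (values are examples only)
containers:
  - name: phpmyadmin
    image: phpmyadmin/phpmyadmin
    ports:
      - containerPort: 80
    readinessProbe:            # gate traffic until the pod can serve requests
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:             # restart the container if it stops responding
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 20
```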
sagarjadhv23
360,101
So hard to make table header sticky
It is classic to display data with a table, each row is a record, each column is a data field, and th...
0
2020-06-21T08:46:08
https://dev.to/jennieji/so-hard-to-make-table-header-sticky-14fj
react, table, header, sticky
It is classic to display data with a table: each row is a record, each column is a data field, and the page could show quite a few records, which requires the user to scroll vertically to look over a large amount of data. This often requires keeping the table header in sight all the time, aligned with the columns, so we can easily tell what each cell means.

My first reaction was to try `position: sticky` on `<thead>`, and it didn't work.

{% codepen https://codepen.io/jennieji/pen/MWKpWQm default-tab=css,result %}

Then I found the blog "[Position Sticky and Table Headers](https://css-tricks.com/position-sticky-and-table-headers/)" by [Chris Coyier](https://css-tricks.com/author/chriscoyier/), and he explained this clearly:

> The issue boils down to the fact that stickiness requires `position: relative` to work, and that doesn't apply to `<thead>` and `<tr>` in the CSS 2.1 spec.

And he provided one solution:

> You can’t `position: sticky;` a `<thead>`. Nor a `<tr>`. But you can sticky a `<th>`

And a decent example:

{% codepen https://codepen.io/chriscoyier/pen/PrJdxb default-tab=css,result %}

Then I tried this on the platform I was working on. It turns out it does not work either. Why??? It turned out to be thanks to this killer, my dear `overflow: hidden;`.

{% codepen https://codepen.io/jennieji/pen/LYGbwgN default-tab=css,result %}

MDN explains why this happens:

> Note that a sticky element "sticks" to its nearest ancestor that has a "scrolling mechanism" (created when `overflow` is `hidden`, `scroll`, `auto`, or `overlay`), even if that ancestor isn't the nearest actually scrolling ancestor. This effectively inhibits any "sticky" behavior (see the [Github issue on W3C CSSWG](https://github.com/w3c/csswg-drafts/issues/865)).

Well, this sounds like a case the CSS standard forgot to cover. Then you may think: OK, in this case, let's try to avoid wrapping tables in an `overflow: hidden` element.
But if you maintain this page for the long term, or a team works on the same page, can you make sure your `sticky` element will never be wrapped in an element with `overflow: hidden`? I bet not. So I kept searching for a sustainable solution, and what I found just suggested giving up on the `<table>` tag, or giving up the table display and using flex instead, like this:

{% codepen https://codepen.io/tfzvang/pen/WQBwVo default-tab=css,result %}

You know, unlike table cells, `flex` elements will not automatically align to each other. To align the "cells", you need to set a width on each "cell" element. That's totally fine for one or two tables, I have to say. But what if I am working on a data management platform, which constantly adds new tables like that? And sometimes a new column is added to a long-lived table, breaking the perfect size settings it had? That would be a disaster if you don't have a GUI tool like the classic Dreamweaver to help.

Now I think it's time to use some JavaScript. I recall that before `position: sticky` was introduced, it was popular to use a jQuery plugin to clone a component, hide it by default, and display it when the user scrolls into a calculated range. Like [this one](https://github.com/jmosbech/StickyTableHeaders/blob/master/js/jquery.stickytableheaders.js). It works perfectly in a jQuery app, which uses CSS selectors to bind elements to events, and the cloned elements keep the original attributes; all you have to keep in mind is to write the event-binding selectors carefully to make sure the cloned header still responds to the events you need. But in a framework like React, it is tricky to do this. Imagine that a designer designed this kind of fancy table:

{% codepen https://codepen.io/jennieji/pen/BajWypp default-tab=result %}

How do you make sure the cloned header works and looks exactly the same as the original header?
So I thought: instead of cloning, why don't I just fix the size of each header cell when the user scrolls the table in and out of the viewport, and make them `position: fixed` to avoid being affected by `overflow: hidden`, while still enjoying the flexible cell width? Although it will be affected by `position: relative`, it's still a lot better. And here's what I came up with:

{% codepen https://codepen.io/jennieji/pen/pogeEXx default-tab=js,result %}

Instead of listening to the `scroll` event, I tried the [IntersectionObserver API](https://developer.mozilla.org/en-US/docs/Web/API/Intersection_Observer_API) for better performance, and modern browsers support `IntersectionObserver` quite well:

![canIuse IntersectionObserver](https://dev-to-uploads.s3.amazonaws.com/i/9d004v5jwuyrqthjr0gv.png)

Unlike the `scroll` event, it's a class that accepts a callback and options:

```javascript
const observer = new IntersectionObserver(callback, options);
observer.observe(targetElement);
observer.unobserve(targetElement);
```

And it only calls the callback function when the target element crosses a given ratio of the viewport.

> Rather than reporting every infinitesimal change in how much a target element is visible, the Intersection Observer API uses **thresholds**. When you create an observer, you can provide one or more numeric values representing percentages of the target element which are visible. Then, the API only reports changes to visibility which cross these thresholds.

Here's a blog explaining `IntersectionObserver` in detail: [An Explanation of How the Intersection Observer Watches](https://css-tricks.com/an-explanation-of-how-the-intersection-observer-watches/). Check it out!

Because of this special setting, I observe 2 empty helper elements as a start point and an end point. When the observer callback is triggered, I check the top offset of the start point and the end point via `element.getBoundingClientRect()`.
If the top of the start point becomes negative, it means the table header is starting to leave the viewport. By contrast, if the top of the end point becomes negative, it means the whole table has almost left the viewport.

```javascript
const startEl = React.useRef(null);
const endEl = React.useRef(null);

React.useEffect(() => {
  const states = new Map();
  const observer = new IntersectionObserver(
    entries => {
      entries.forEach(e => {
        states.set(e.target, e.boundingClientRect);
      });
      const { top } = states.get(startEl.current) || {};
      const { top: bottom } = states.get(endEl.current) || {};
      if (top < 0 && bottom > 0) {
        show();
      } else {
        hide();
      }
    },
    {
      threshold: [0],
    }
  );
  observer.observe(startEl.current);
  observer.observe(endEl.current);
}, []);
```

The scrolling-down experience looks like this:

![scrolling down experience](https://dev-to-uploads.s3.amazonaws.com/i/sgfljdhwvxyj5spy7d99.png)

The scrolling-up experience looks like this:

![scrolling up](https://dev-to-uploads.s3.amazonaws.com/i/ioxfoa5g20nptchvdjes.png)

The start point is simply placed at the top of the table, but the end point is somewhere above the end of the table to create a better user experience, since it looks weird when the last row is more than half covered by the sticky header at the end. That's why you see this calculation:

```javascript
const thead = el.current.querySelectorAll('thead');
const rows = el.current.querySelectorAll('tr');
const theadHeight = (thead && thead[0].getBoundingClientRect() || {}).height || 0;
const lastRowHeight = (rows && rows[rows.length - 1].getBoundingClientRect() || {}).height || 0;
endEl.current.style.top = `-${theadHeight + lastRowHeight / 2}px`;
```

Working with CSS:

```css
.end-buffer-area {
  z-index: -1;
  position: relative;
}
```

Then we toggle a CSS class `.stickyHeader` on the wrapper to control the display of the sticky header:

```css
.header-column {
  ...
}
.stickyHeader .header-column {
  position: fixed;
  top: 0;
}
```

The first thing you may notice is that after the header cell becomes `position: fixed`, it no longer aligns with the other cells; everything gets messed up. So I needed to find a way to keep the header cell's size and position at the same time.

What I did was first wrap the header cell content with a div:

```html
<thead>
  <tr>
    <th><div className="header-column">Name</div></th>
    <th><div className="header-column">Age</div></th>
    <th><div className="header-column">Address</div></th>
  </tr>
</thead>
```

When it shows, I calculate the sizes and set them on both the `th` and `.header-column` to maintain the table alignment:

```javascript
const show = () => {
  el.current.querySelectorAll('.header-column').forEach(col => {
    if (!col.parentElement) { return; }
    const { width, height } = col.parentElement.getBoundingClientRect() || {};
    col.style.width = col.parentElement.style.width = `${width}px`;
    col.style.height = col.parentElement.style.height = `${height}px`;
  });
  el.current.classList.add("stickyHeader");
};
```

And some CSS to ensure they look the same:

```css
thead th {
  padding: 0;
}
.header-column {
  height: auto !important;
  padding: 10px;
  box-sizing: border-box;
}
.stickyHeader .header-column {
  background: inherit;
}
```

Next you may notice a weird jumping behaviour that makes the sticky header's appearance look a bit unnatural. This is because when the user scrolls fast, we see the header leave the viewport before `IntersectionObserver` triggers the callback. Right, our workarounds can never achieve the effect of the browser's native integration. But we can make it feel better via animation. So I added this simple CSS animation as a finishing touch:

```css
.stickyHeader .header-column {
  top: 0;
  animation: slideDown 200ms ease-in;
}
@keyframes slideDown {
  0% { transform: translateY(-50%); }
  100% { transform: translateY(0%); }
}
```

Here it goes. But you can tell this solution is still very rough.
It still has some restrictions, like:

- you need to style the header carefully
- it is not responsive

These can be fixed with more careful checks and event handling. Hope you enjoyed exploring new solutions with me :).
jennieji
360,119
A simple extension to add buttons above the keyboard
It is quite common when you have an input view on an iOS app to show some actions above the keyboard...
0
2020-06-21T09:48:21
https://diamantidis.github.io/tips/2020/06/21/ios-keyboard-toolbar-extension
ios, swift
It is quite common when you have an input view on an iOS app to show some actions above the keyboard to improve the user experience. Those actions can range from a `Done` button on a number pad, or a `Cancel` button to dismiss the keyboard, to literally anything. Though there are various options you can use to add those actions, the most common and probably the easiest is to set the `inputAccessoryView` property of the `UITextField` to an instance of `UIToolbar` whose items are the buttons you would like to add. Since it's such a common practice, I am using an extension to make this operation a little bit nicer to use:

```swift
extension UITextField {
    typealias ToolbarItem = (title: String, target: Any, selector: Selector)

    func addToolbar(leading: [ToolbarItem] = [], trailing: [ToolbarItem] = []) {
        let toolbar = UIToolbar()
        let flexibleSpace = UIBarButtonItem(barButtonSystemItem: .flexibleSpace, target: nil, action: nil)

        let leadingItems = leading.map { item in
            return UIBarButtonItem(title: item.title, style: .plain, target: item.target, action: item.selector)
        }

        let trailingItems = trailing.map { item in
            return UIBarButtonItem(title: item.title, style: .plain, target: item.target, action: item.selector)
        }

        var toolbarItems: [UIBarButtonItem] = leadingItems
        toolbarItems.append(flexibleSpace)
        toolbarItems.append(contentsOf: trailingItems)

        toolbar.setItems(toolbarItems, animated: false)
        toolbar.sizeToFit()

        self.inputAccessoryView = toolbar
    }
}
```

---

Now, with that extension in place, you can add the toolbar in the following way:

```swift
let textField = UITextField()

let cancelButton = UITextField.ToolbarItem(title: "Cancel", target: self, selector: #selector(cancelPressed))
let resetButton = UITextField.ToolbarItem(title: "Reset", target: self, selector: #selector(resetPressed))
let doneButton = UITextField.ToolbarItem(title: "Done", target: self, selector: #selector(donePressed))

textField.addToolbar(leading: [cancelButton, resetButton], trailing: [doneButton])
```

---

If
you run the app, the keyboard will look like the following screenshot 🤓 ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/2d4kzvyu3dcn495rp0g9.png) --- ➡️ This post was originally published on my [blog](https://diamantidis.github.io).
diamantidis
360,134
How to automate your Postgres database backups
If you've got a web app running in production, then you'll want to take regular database backups, or...
7,414
2020-06-21T10:43:16
https://mattsegal.dev/postgres-backup-automate.html
postgres, django, bash, database
If you've got a web app running in production, then you'll want to take [regular database backups](https://mattsegal.dev/postgres-backup-and-restore.html), or else you risk losing all your data. Taking these backups manually is fine, but it's easy to forget to do it. It's better to remove the chance of human error and automate the whole process.

To automate your backup and restore you will need three things:

- A safe place to store your backup files
- A script that creates the backups and uploads them to the safe place
- A method to automatically run the backup script every day

### A safe place for your database backup files

You don't want to store your backup files on the same server as your database. If your database server gets deleted, then you'll lose your backups as well. Instead, you should store your backups somewhere else, like a hard drive, your PC, or in the cloud.

I like using cloud object storage for this kind of use-case. If you haven't heard of "object storage" before: it's just a kind of cloud service where you can store a bunch of files. All major cloud providers offer this service:

- Amazon's AWS has the [Simple Storage Service (S3)](https://aws.amazon.com/s3/)
- Microsoft's Azure has [Storage](https://azure.microsoft.com/en-us/services/storage/)
- Google Cloud also has [Storage](https://cloud.google.com/storage)
- DigitalOcean has [Spaces](https://www.digitalocean.com/products/spaces/)

These object storage services are _very_ cheap at around 2c/GB/month, you'll never run out of disk space, they're easy to access from command-line tools, and they have very fast upload/download speeds, especially to/from other services hosted with the same cloud provider. I use these services a lot: this blog is being served from AWS S3. I like using S3 simply because I'm quite familiar with it, so that's what we're going to use for the rest of this post.
If you're not already familiar with using the AWS command line, then check out this post I wrote about [getting started with AWS S3](https://mattsegal.dev/aws-s3-intro.html) before you continue.

### Creating a database backup script

In my [previous post on database backups](https://mattsegal.dev/postgres-backup-and-restore.html) I showed you a small script to automatically take a backup using PostgreSQL:

```bash
#!/bin/bash
# Backs up mydatabase to a file.
TIME=$(date "+%s")
BACKUP_FILE="postgres_${PGDATABASE}_${TIME}.pgdump"
echo "Backing up $PGDATABASE to $BACKUP_FILE"
pg_dump --format=custom > $BACKUP_FILE
echo "Backup completed for $PGDATABASE"
```

I'm going to assume you have set up your Postgres database environment variables (`PGHOST`, etc) either in the script, or elsewhere, as mentioned in the previous post. Next we're going to get our script to upload all backups to AWS S3.

### Uploading backups to AWS Simple Storage Service (S3)

We will be uploading our backups to S3 with the `aws` command-line (CLI) tool. To get this tool to work, we need to set up our AWS credentials on the server by either using `aws configure` or by setting the environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`. Once that's done we can use `aws s3 cp` to upload our backup files. Let's say we're using a bucket called "`mydatabase-backups`":

```bash
#!/bin/bash
# Backs up mydatabase to a file and then uploads it to AWS S3.

# First, dump database backup to a file
TIME=$(date "+%s")
BACKUP_FILE="postgres_${PGDATABASE}_${TIME}.pgdump"
echo "Backing up $PGDATABASE to $BACKUP_FILE"
pg_dump --format=custom > $BACKUP_FILE

# Second, copy file to AWS S3
S3_BUCKET=s3://mydatabase-backups
S3_TARGET=$S3_BUCKET/$BACKUP_FILE
echo "Copying $BACKUP_FILE to $S3_TARGET"
aws s3 cp $BACKUP_FILE $S3_TARGET
echo "Backup completed for $PGDATABASE"
```

You should be able to run this multiple times and see a new backup appear in your S3 bucket's webpage every time you do it.
As a bonus, you can add a little one-liner at the end of your script that checks for the last file uploaded to the S3 bucket:

```bash
BACKUP_RESULT=$(aws s3 ls $S3_BUCKET | tail -n 1)
echo "Latest S3 backup: $BACKUP_RESULT"
```

Once you're confident that your backup script works, we can move on to getting it to run every day.

### Running cron jobs

Now we need to get our server to run this script every day, even when we're not around. The simplest way to do this on a Linux server is with [cron](https://en.wikipedia.org/wiki/Cron). Cron can automatically run scripts for us on a schedule. We'll be using the `crontab` tool to set up our backup job. You can read more about how to use crontab [here](https://linuxize.com/post/scheduling-cron-jobs-with-crontab/). If you find that you're having issues setting up cron, you might also find this [StackOverflow post](https://serverfault.com/questions/449651/why-is-my-crontab-not-working-and-how-can-i-troubleshoot-it) useful.

Before we set up our daily database backup job, I suggest trying out a test script to make sure that your cron setup is working. For example, this script prints the current time when it is run:

```bash
#!/bin/bash
echo $(date)
```

Using `nano`, you can create a new file called `~/test.sh`, save it, then make it executable as follows:

```bash
nano ~/test.sh
# Write out the time printing script in nano, save the file.
chmod +x ~/test.sh
```

Then you can test it out a little by running it a couple of times to check that it is printing the time:

```bash
~/test.sh
# Sat Jun 6 08:05:14 UTC 2020
~/test.sh
# Sat Jun 6 08:05:14 UTC 2020
~/test.sh
# Sat Jun 6 08:05:14 UTC 2020
```

Once you're confident that your test script works, you can create a cron job to run it every minute. Cron uses a special syntax to specify how often a job runs. These "cron expressions" are a pain to write by hand, so I use [this tool](https://crontab.cronhub.io/) to generate them.
The cron expression for "every minute" is the inscrutable string "`* * * * *`". This is the crontab entry that we're going to use:

```bash
# Test crontab entry
SHELL=/bin/bash
* * * * * ~/test.sh &>> ~/time.log
```

- The `SHELL` setting tells crontab to use bash to execute our command
- The "`* * * * *`" entry tells cron to execute our command every minute
- The command `~/test.sh &>> ~/time.log` runs our test script `~/test.sh` and then appends all output to a log file called `~/time.log`

Enter the text above into your user's crontab file using the crontab editor:

```bash
crontab -e
```

Once you've saved your entry, you should then be able to view your crontab entry using the list command:

```bash
crontab -l
# SHELL=/bin/bash
# * * * * * ~/test.sh &>> ~/time.log
```

You can check that cron is actually trying to run your script by watching the system log:

```bash
tail -f /var/log/syslog | grep CRON
# Jun 6 11:17:01 swarm CRON[6908]: (root) CMD (~/test.sh &>> ~/time.log)
# Jun 6 11:17:01 swarm CRON[6908]: (root) CMD (~/test.sh &>> ~/time.log)
```

You can also watch your logfile to see that the time is being written every minute:

```bash
tail -f time.log
# Sat Jun 6 11:34:01 UTC 2020
# Sat Jun 6 11:35:01 UTC 2020
# Sat Jun 6 11:36:01 UTC 2020
# Sat Jun 6 11:37:01 UTC 2020
```

Once you're happy that you can run a test script every minute with cron, we can move on to running your database backup script daily.

### Running our backup script daily

Now we're nearly ready to run our backup script using a cron job. There are a few changes that we'll need to make to our existing setup. First we need to write our database backup script to `~/backup.sh` and make sure it is executable:

```bash
chmod +x ~/backup.sh
```

Then we need to update our crontab entry to run every day, which will be "[`0 0 * * *`](https://crontab.cronhub.io/)", and update our cron command to run our backup script.
Our new crontab entry should be:

```bash
# Database backup crontab entry
SHELL=/bin/bash
0 0 * * * ~/backup.sh &>> ~/backup.log
```

Update your crontab with `crontab -e`. Now we wait! This script should run every night at midnight (server time) to take your database backups and upload them to AWS S3. If this isn't working, then change your cron expression so that it runs the script every minute, and use the steps I showed above to try and debug the problem. Hopefully it all runs OK and you will have plenty of daily database backups to roll back to if anything ever goes wrong.

### Automatic restore from the latest backup

When disaster strikes and you need your backups, you could manually view your S3 bucket, download the backup file, upload it to the server and manually run the restore, which I documented in my [previous post](https://mattsegal.dev/postgres-backup-and-restore.html). This is totally fine, but as a bonus I thought it would be nice to include a script that automatically downloads the latest backup file and uses it to restore your database. This kind of script would be ideal for dumping production data into a test server. First I'll show you the script, then I'll explain how it works:

```bash
#!/bin/bash
echo -e "\nRestoring database $PGDATABASE from S3 backups"

# Find the latest backup file
S3_BUCKET=s3://mydatabase-backups
LATEST_FILE=$(aws s3 ls $S3_BUCKET | awk '{print $4}' | sort | tail -n 1)
echo -e "\nFound file $LATEST_FILE in bucket $S3_BUCKET"

# Restore from the latest backup file
echo -e "\nRestoring $PGDATABASE from $LATEST_FILE"
S3_TARGET=$S3_BUCKET/$LATEST_FILE
aws s3 cp $S3_TARGET - | pg_restore --dbname $PGDATABASE --clean --no-owner
echo -e "\nRestore completed"
```

I've assumed that all the Postgres environment variables (`PGHOST`, etc) are already set elsewhere.
There are three tasks done in this script:

- finding the latest backup file in S3
- downloading the backup file
- restoring from the backup file

So the first part of this script is finding the latest database backup file. The way we know which file is the latest is the Unix timestamp which we added to the filename. The first command we use is `aws s3 ls`, which shows us all the files in our backup bucket:

```bash
aws s3 ls $S3_BUCKET
# 2019-04-04 10:04:58 112309 postgres_mydatabase_1554372295.pgdump
# 2019-04-06 07:48:53 112622 postgres_mydatabase_1554536929.pgdump
# 2019-04-14 07:24:02 113484 postgres_mydatabase_1555226638.pgdump
# 2019-05-06 11:37:39 115805 postgres_mydatabase_1557142655.pgdump
```

We then use `awk` to isolate the filename. `awk` is a text processing tool which I use occasionally, along with `cut` and `sed`, to mangle streams of text into the shape I want. I hate them all, but they can be useful.

```bash
aws s3 ls $S3_BUCKET | awk '{print $4}'
# postgres_mydatabase_1554372295.pgdump
# postgres_mydatabase_1554536929.pgdump
# postgres_mydatabase_1555226638.pgdump
# postgres_mydatabase_1557142655.pgdump
```

We then run `sort` over this output to ensure that each line is sorted by time. The aws CLI tool seems to sort this data by upload time, but we want to use _our_ timestamp, just in case a file was manually uploaded out of order:

```bash
aws s3 ls $S3_BUCKET | awk '{print $4}' | sort
# postgres_mydatabase_1554372295.pgdump
# postgres_mydatabase_1554536929.pgdump
# postgres_mydatabase_1555226638.pgdump
# postgres_mydatabase_1557142655.pgdump
```

We use `tail` to grab the last line of the output:

```bash
aws s3 ls $S3_BUCKET | awk '{print $4}' | sort | tail -n 1
# postgres_mydatabase_1557142655.pgdump
```

And there's our filename!
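Because the 10-digit Unix timestamps are all the same width, a plain lexicographic `sort` also happens to be a chronological sort here; you can convince yourself without touching AWS (the filenames below are the same examples as above):

```shell
# Sorting the bare filenames puts the newest (largest timestamp) last
printf '%s\n' \
  postgres_mydatabase_1557142655.pgdump \
  postgres_mydatabase_1554372295.pgdump \
  postgres_mydatabase_1554536929.pgdump | sort | tail -n 1
# postgres_mydatabase_1557142655.pgdump
```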
We use the `$()` [command substitution](http://www.tldp.org/LDP/abs/html/commandsub.html) thingy to capture the command output and store it in a variable:

```bash
LATEST_FILE=$(aws s3 ls $S3_BUCKET | awk '{print $4}' | sort | tail -n 1)
echo $LATEST_FILE
# postgres_mydatabase_1557142655.pgdump
```

And that's part one of our script done: find the latest backup file. Now we need to download that file and use it to restore our database. We use the `aws` CLI to copy the backup file from S3 and stream the bytes into stdout. This literally prints out your whole backup file into the terminal:

```bash
S3_TARGET=$S3_BUCKET/$LATEST_FILE
aws s3 cp $S3_TARGET -
# xtshirt9.5.199.5.19k0ENCODINENCODING
# SET client_encoding = 'UTF8';
# false00
# ... etc ...
```

The `-` symbol is commonly used in shell scripting to mean "write to stdout". This isn't very useful on its own, but we can send that data to the `pg_restore` command via a pipe:

```bash
S3_TARGET=$S3_BUCKET/$LATEST_FILE
aws s3 cp $S3_TARGET - | pg_restore --dbname $PGDATABASE --clean --no-owner
```

And that's the whole script!

### Next steps

Now you can set up automated backups for your Postgres database. Hopefully having these daily backups will take a weight off your mind. Don't forget to do a test restore every now and then, because backups are worthless if you aren't confident that they actually work.

If you want to learn more about the Unix shell tools I used in this post, then I recommend having a go at the [Over the Wire Wargames](https://overthewire.org/), which teaches you about bash scripting and hacking at the same time.
mattdsegal
360,147
Uncaught SyntaxError: Unexpected token < in a script tag
So I have a PHP file which contains some JS code and I'm trying to pass an array from PHP into JS &...
0
2020-06-21T11:38:35
https://dev.to/galactum/uncaught-syntaxerror-unexpected-token-in-a-script-tag-4a6k
help, php, javascript
So I have a PHP file which contains some JS code, and I'm trying to pass an array from PHP into JS:

```
<?php include('autocomplete.js'); ?>
<script type="text/javascript">
// any valid JS code can reproduce this issue because the syntax error is on the line above
var people = <?php echo json_encode($people)?>
</script>
```

I get an "unexpected token '<'" error, and the line referenced is always the one with the script tag in it.

Not really sure what to try, so if anyone can help out that would be great. Thank you
galactum
360,175
Automate the end-to-end AutoML lifecycle with Amazon SageMaker Autopilot and Amazon Step Functions…
Automate the end-to-end AutoML lifecycle with Amazon SageMaker Autopilot and Amazon Step Fun...
0
2020-06-21T11:53:56
https://dev.to/oelesin/automate-the-end-to-end-automl-lifecycle-with-amazon-sagemaker-autopilot-and-amazon-step-functions-g7o
automl, aws, awsstepfunctions, machinelearning
### Automate the end-to-end AutoML lifecycle with Amazon SageMaker Autopilot and Amazon Step Functions — CD4AutoML

![](https://cdn-images-1.medium.com/max/1024/1*MHfdrKCO_gz9r8KYzXbXcg.png)<figcaption>CD4AutoML with Amazon SageMaker Autopilot — Olalekan Elesin</figcaption>

In my previous posts (linked below), I wrote about automating machine learning workflows on Amazon Web Services (AWS) with Amazon SageMaker and Amazon Step Functions. In those posts, I only provided GitHub Gists and minor code snippets but no fully working solutions, which left a lot of readers asking questions about the technical solutions, either privately or via comments. I had kind of solved a problem, but created more problems. This led me to ask myself:

> **How might I help my readers better achieve the job they wanted done anytime they employed my technical blog posts?**

The answer is what you're now reading. In this post, I provide a working project that [automates an end-to-end AutoML lifecycle with Amazon SageMaker Autopilot and Amazon Step Functions](https://github.com/OElesin/sagemaker-autopilot-step-functions), which I now call **CD4AutoML**. By end-to-end, I am not referring to calling the Amazon SageMaker Endpoint from a notebook. I am talking about having a publicly available serverless REST API (Amazon API Gateway) connected to your Amazon SageMaker Endpoint in a fully automated way. This means that you can serve predictions in your applications with a fully automated workflow requiring no developer input apart from committing code to GitHub. Enough talk; you are already asking: how do I get started?
Everything to get you started is available in the GitHub repository below:

[OElesin/sagemaker-autopilot-step-functions](https://github.com/OElesin/sagemaker-autopilot-step-functions)

### Architecture

![](https://cdn-images-1.medium.com/max/1024/1*1RBNjcCy0_D7xApSL6fx_A.jpeg)<figcaption>Automate the end-to-end AutoML lifecycle with Amazon SageMaker Autopilot on Amazon Step Functions — CD4AutoML</figcaption>

This project is designed to get you up and running with [CD4AutoML](https://github.com/OElesin/sagemaker-autopilot-step-functions) (**I coined this term**), much like [CD4ML](https://martinfowler.com/articles/cd4ml.html) from [Martin Fowler's blog](https://martinfowler.com/articles/cd4ml.html). This project indeed completes the "Automated" part of AutoML.

#### Technologies:

- AWS CloudFormation
- [Amazon Step Functions](https://aws.amazon.com/step-functions/)
- [Amazon SageMaker Autopilot](https://aws.amazon.com/sagemaker/autopilot/)
- Amazon CodeBuild
- [AWS Step Functions Data Science SDK](https://aws-step-functions-data-science-sdk.readthedocs.io/)
- AWS Serverless Application Model
- AWS Lambda
- Amazon API Gateway
- AWS SSM Parameter Store

With this project, you move out of play/lab mode with Amazon SageMaker Autopilot into running real-life applications with it.

#### State machine workflow

The entire workflow is managed with the AWS Step Functions Data Science SDK. Amazon Step Functions does not have out-of-the-box service integration with Amazon SageMaker Autopilot. To manage this, I leveraged the AWS Lambda integration with Step Functions to periodically poll for the Amazon SageMaker Autopilot job status. Once the AutoML job is completed, a model is created using the Amazon SageMaker Autopilot inference containers, and an Amazon SageMaker Endpoint is deployed.
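The polling Lambda can be sketched roughly like this (the function and event field names are my own illustration, not taken from the repository; only `describe_auto_ml_job` is the real SageMaker API call). It reports the job status so the state machine can loop until the job reaches a terminal state:

```python
# Hypothetical sketch of a Step Functions polling Lambda for an Autopilot job
TERMINAL_STATUSES = {"Completed", "Failed", "Stopped"}

def poll_autopilot_job(job_name, client=None):
    """Return the job status plus a flag telling the state machine to stop waiting."""
    if client is None:
        import boto3  # imported lazily so the logic is testable without AWS
        client = boto3.client("sagemaker")
    response = client.describe_auto_ml_job(AutoMLJobName=job_name)
    status = response["AutoMLJobStatus"]
    return {"status": status, "done": status in TERMINAL_STATUSES}

def handler(event, context):
    # Step Functions passes the AutoML job name through the state input
    return poll_autopilot_job(event["AutoMLJobName"])
```

A Choice state in the state machine can then branch on `done`, looping back through a Wait state until the job finishes.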
But there is more… On completion of the deployment of the Amazon SageMaker Endpoint, an Amazon CodeBuild Project state machine task is triggered, which deploys our Amazon API Gateway with the AWS Serverless Application Model. See the workflow image below: ![CD4AutoML: Continuous Delivery for AutoML with Amazon SageMaker Autopilot and Amazon Step Functions](https://cdn-images-1.medium.com/max/374/0*W-XSK7MB9R4ATNmi.png)<figcaption>CD4AutoML: Continuous Delivery for AutoML with Amazon SageMaker Autopilot and Amazon Step Functions</figcaption> ### Future Work I have plans to abstract away all deployment details and convert this into a Python module or, better put, AutoML-as-a-Service. Users can either provide their Pandas DataFrame or local CSV/JSON data, and the service takes care of the rest. Users will get a secure REST API with which they can make predictions in their applications. If you’re interested in working on this together, feel free to reach out. Also feel free to extend this project as it suits you. If you experience any challenges getting started, create an issue and I will have a look as soon as I can. 
### Further Reading - Part 1: [Automating Machine Learning Workflows with AWS Glue, Amazon SageMaker and AWS Step Functions Data Science SDK](https://medium.com/@elesin.olalekan/automating-machine-learning-workflows-with-aws-glue-sagemaker-and-aws-step-functions-data-science-b4ed59e4d7f9) - Part 2: [Automating Machine Learning Workflows Pt2: Amazon SageMaker Processing and AWS Step Functions Data Science SDK](https://dev.to/oelesin/automating-machine-learning-workflows-pt2-sagemaker-processing-sagemaker-and-aws-step-functions-3an5-temp-slug-2013947) - [Amazon Step Functions](https://aws.amazon.com/step-functions/) - [Amazon Step Functions Developer Guide](https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html) - [AWS Step Functions Data Science SDK](https://aws-step-functions-data-science-sdk.readthedocs.io/) Kindly share your thoughts and comments — looking forward to your feedback. You can reach me via [email](mailto:elesin.olalekan@gmail.com), follow me on [Twitter](https://twitter.com/elesinOlalekan) or connect with me on [LinkedIn](https://www.linkedin.com/in/elesinolalekan/). Can’t wait to hear from you!!
oelesin
360,213
Create your own RAC Cluster with VirtualBox + Vagrant
Hi Arthur, before you start your journey, it is important to have a good test environment where we can t...
0
2020-06-21T13:21:36
https://dev.to/project42/create-your-own-rac-cluster-with-virtualbox-vagrant-9m8
oracle, database, vagrant, virtualbox
Hi Arthur, before you start your journey, it is important to have a good test environment where we can try any endeavour we please. As important as having a stable test system is the possibility to recreate it in a few simple steps. This chapter of our guide will show you how to make Marvin create an Oracle RAC on Oracle Linux 7 with 18c Grid + 18c DB (CDB + 1 PDB). Try to be patient, and remember that Marvin will complain and won't take any enjoyment from this or any task, but I'm sure he will eventually comply. Let's go and start exploring this new place!! We will be using [Tim Hall](https://oracle-base.com/misc/site-info)'s RAC creation process, so please make sure you first visit his guide to get the necessary software and tools on your system before we start the process: ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/75or2194obdoqj94nvb2.png) > If you want to know more about Vagrant, please visit the following links: > > https://oracle-base.com/articles/vm/vagrant-a-beginners-guide > https://semaphoreci.com/community/tutorials/getting-started-with-vagrant You will need to download the 18c DB and Grid software: https://www.oracle.com/technetwork/database/enterprise-edition/downloads/oracle18c-linux-180000-5022980.html ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/o9zb4zv8fn6udvqiq7yc.png) In our case, we are running Vagrant and VirtualBox on Linux, but you can do this on Windows or Mac; it will just change a bit which commands to use, but the general idea is the same. The system requirements are the following: -- At least 6GB of RAM (but 10GB are recommended) per DB node (2 nodes in this installation) -- At least 1GB of RAM for the DNS node -- Around 150GB of space on disk (80GB for the default DATA diskgroup creation, but you can reduce the size of the disks if you need to) > 19c is here already > > This guide was created to build an 18c test cluster, but you can already > execute the same process to have your new 19c system running. 
Tim Hall > already updated his repository with the new version > https://github.com/oraclebase/vagrant > > The steps are the same, you just need to download the 19c software and > make sure you use the new folder "vagrant/rac/ol7_19/" instead of > "vagrant/rac/ol7_183" > > More info in his blog post: > https://oracle-base.com/blog/2019/02/28/oracle-database-19c-installations-rac-data-guard-and-upgrades/ First, let's clone Tim's Git repository (a lot of good stuff there.. not only this RAC): ``` arthur@marvin:~/# git clone https://github.com/oraclebase/vagrant.git ``` Doing this, we will get the following tree structure on our server: ``` arthur@marvin:~/# cd vagrant/ arthur@marvin:~/vagrant# tree --charset=ascii . |-- dataguard | |-- ol7_121 | | |-- config | | | |-- install.env | | | `-- vagrant.yml | | |-- node1 | | | |-- scripts | | | | |-- oracle_create_database.sh | | | | |-- oracle_user_environment_setup.sh | | | | |-- root_setup.sh | | | | `-- setup.sh | | | `-- Vagrantfile [.....] | |-- rac | | |-- ol7_122 [....] 
| | |-- ol7_183 | | | |-- config | | | | |-- install.env | | | | `-- vagrant.yml | | | |-- dns | | | | |-- scripts | | | | | |-- root_setup.sh | | | | | `-- setup.sh | | | | |-- Vagrantfile | | | | `-- vagrant.log | | | |-- node1 | | | | |-- ol7_183_rac1_u01.vdi | | | | |-- scripts | | | | | |-- oracle_create_database.sh | | | | | |-- oracle_db_software_installation.sh | | | | | |-- oracle_grid_software_config.sh | | | | | |-- oracle_grid_software_installation.sh | | | | | |-- oracle_user_environment_setup.sh | | | | | |-- root_setup.sh | | | | | `-- setup.sh | | | | |-- software | | | | | `-- put_software_here.txt | | | | |-- start | | | | |-- Vagrantfile | | | | `-- vagrant.log | | | |-- node2 | | | | |-- scripts | | | | | |-- oracle_user_environment_setup.sh | | | | | |-- root_setup.sh | | | | | `-- setup.sh | | | | |-- Vagrantfile | | | | `-- vagrant.log | | | |-- README.md | | | |-- shared_disks | | | `-- shared_scripts | | | |-- configure_chrony.sh | | | |-- configure_hostname.sh | | | |-- configure_hosts_base.sh | | | |-- configure_hosts_scan.sh | | | |-- configure_shared_disks.sh | | | |-- install_os_packages.sh | | | `-- prepare_u01_disk.sh | | `-- README.md | `-- README.md |-- vagrant_2.2.3_x86_64.rpm |-- vagrant.lo `-- Videos ``` Make sure you copy the 18c DB and Grid software to the RAC node1: ``` arthur@marvin:~/vagrant# cd rac/ol7_183/node1/software/ arthur@marvin:~/vagrant/rac/ol7_183/node1/software# pwd /home/arthur/vagrant/rac/ol7_183/node1/software arthur@marvin:~/vagrant/rac/ol7_183/node1/software# ls -lrth total 9.3G -rw-r--r--. 1 arthur arthur 4.3G Jul 25 19:33 LINUX.X64_180000_db_home.zip -rw-r--r--. 1 arthur arthur 5.1G Jul 25 19:41 LINUX.X64_180000_grid_home.zip -rw-rw-r--. 1 arthur arthur 104 Jan 11 22:27 put_software_here.txt arthur@marvin:~/vagrant/rac/ol7_183/node1/software# ``` Now that we have that, we need to modify the RAC shared disks location. 
To do that, modify the file vagrant.yml inside the folder 'rac/ol7_183/config/'. In my case, I just left them in "/home/arthur/rac_shared_disks"; just make sure you have enough space (each disk is 20GB by default, but you can change that via the "asm_disk_size" variable in that same file). ``` arthur@marvin:~/vagrant/rac/ol7_183/shared_disks# cat ../config/vagrant.yml shared: box: bento/oracle-7.6 non_rotational: 'on' asm_disk_1: /home/arthur/rac_shared_disks/asm_disk_1.vdi asm_disk_2: /home/arthur/rac_shared_disks/asm_disk_2.vdi asm_disk_3: /home/arthur/rac_shared_disks/asm_disk_3.vdi asm_disk_4: /home/arthur/rac_shared_disks/asm_disk_4.vdi asm_disk_size: 20 dns: vm_name: ol7_183_dns mem_size: 1024 ``` OK, let's start the fun part and get the RAC created and deployed! One of the advantages of using Vagrant is that it will check each box and install any required service, or even the OS if needed, so if you need Oracle Linux 7.6, it will download it (if available) and will do the same with each component needed. It is important to follow the order below so everything gets completed correctly. To start the RAC, all we have to do is the following, in this same order: > Start the DNS server. > > cd dns vagrant up > > Start the second node of the cluster. This must be running before you > start the first node. > > cd ../node2 vagrant up > > Start the first node of the cluster. This will perform all of the > installation operations. Depending on the spec of the host system, > this could take a long time. On one of my servers it took about 3.5 > hours to complete. > > cd ../node1 vagrant up > > However, since the first deployment will take some time, I > recommend using the following command, so you also get a log to > check in case there is any issue: > > > arthur@marvin:~/vagrant/rac/ol7_183/node2# nohup vagrant up & First, start the DNS server. 
Notice how Vagrant will download the Oracle Linux image directly, avoiding any extra steps downloading ISO images. Also, notice how it will install any necessary packages, like dnsmasq in this case: ``` arthur@marvin:~/vagrant/rac/ol7_183/dns# nohup vagrant up & Bringing machine 'default' up with 'virtualbox' provider... ==> default: Box 'bento/oracle-7.6' could not be found. Attempting to find and install... default: Box Provider: virtualbox default: Box Version: >= 0 ==> default: Loading metadata for box 'bento/oracle-7.6' default: URL: https://vagrantcloud.com/bento/oracle-7.6 ==> default: Adding box 'bento/oracle-7.6' (v201812.27.0) for provider: virtualbox default: Downloading: https://vagrantcloud.com/bento/boxes/oracle-7.6/versions/201812.27.0/providers/virtualbox.box default: Download redirected to host: vagrantcloud-files-production.s3.amazonaws.com default: Progress: 39% (Rate: 5969k/s, Estimated time remaining: 0:01:25) [....] default: Running: inline script default: ****************************************************************************** default: /vagrant_config/install.env: line 62: -1: substring expression < 0 default: Prepare yum with the latest repos. Sat Jan 12 16:57:48 UTC 2019 default: ****************************************************************************** [....] default: ****************************************************************************** default: Install dnsmasq. 
Sat Jan 12 16:57:50 UTC 2019 default: ****************************************************************************** default: Loaded plugins: ulninfo default: Resolving Dependencies default: --> Running transaction check default: ---> Package dnsmasq.x86_64 0:2.76-7.el7 will be installed default: --> Finished Dependency Resolution default: default: Dependencies Resolved default: default: ================================================================================ default: Package Arch Version Repository Size default: ================================================================================ default: Installing: default: dnsmasq x86_64 2.76-7.el7 ol7_latest 277 k default: default: Transaction Summary default: ================================================================================ default: Install 1 Package default: default: Total download size: 277 k default: Installed size: 586 k default: Downloading packages: default: Running transaction check default: Running transaction test default: Transaction test succeeded default: Running transaction default: Installing : dnsmasq-2.76-7.el7.x86_64 1/1 default: default: Verifying : dnsmasq-2.76-7.el7.x86_64 1/1 default: default: default: Installed: default: dnsmasq.x86_64 0:2.76-7.el7 default: default: Complete! default: Created symlink from /etc/systemd/system/multi-user.target.wants/dnsmasq.service to /usr/lib/systemd/system/dnsmasq.service. ``` Start the node2: ``` -- Start the second node of the cluster. This must be running before you start the first node. arthur@marvin:~/vagrant/rac/ol7_183/node2# nohup vagrant up & Bringing machine 'default' up with 'virtualbox' provider... ==> default: Checking if box 'bento/oracle-7.6' version '201812.27.0' is up to date... ==> default: Clearing any previously set forwarded ports... ==> default: Fixed port collision for 22 => 2222. Now on port 2200. ==> default: Clearing any previously set network interfaces... 
==> default: Preparing network interfaces based on configuration... default: Adapter 1: nat default: Adapter 2: intnet [....] ``` And finally, let's start node1. This will perform all of the installation operations. As you can see below, it took ~3.5 hours for this first start, but that won't be the case on subsequent reboots since all the installation tasks will already have been completed. > Node1 failure > > If node1 creation or the CRS/DB deployment fails for any reason (lack of > space, session disconnected during the process, etc.), it is better to > destroy both nodes and start the process again, starting node2 and then > node1. > > If not, you may face issues if you retry the process again just for > node1 (see how to destroy each component at the end of this guide). ``` arthur@marvin:~/vagrant/rac/ol7_183/node1# nohup vagrant up & Bringing machine 'default' up with 'virtualbox' provider... ==> default: Importing base box 'bento/oracle-7.6'... ^MESC[KProgress: 10%^MESC[KProgress: 30%^MESC[KProgress: 50%^MESC[KProgress: 60%^MESC[KProgress: 80%^MESC[KProgress: 90%^MESC[K==> default: Matching MAC address for NAT networking... ==> default: Checking if box 'bento/oracle-7.6' version '201812.27.0' is up to date... ==> default: Setting the name of the VM: ol7_183_rac1 ==> default: Fixed port collision for 22 => 2222. Now on port 2201. ==> default: Clearing any previously set network interfaces... ==> default: Preparing network interfaces based on configuration... default: Adapter 1: nat default: Adapter 2: intnet default: Adapter 3: intnet ==> default: Forwarding ports... default: 1521 (guest) => 1521 (host) (adapter 1) default: 5500 (guest) => 5500 (host) (adapter 1) [....] default: your host and reload your VM. default: default: Guest Additions Version: 5.2.22 default: VirtualBox Version: 6.0 ==> default: Configuring and enabling network interfaces... ==> default: Mounting shared folders... 
default: /vagrant => /home/arthur/vagrant/rac/ol7_183/node1 default: /vagrant_config => /home/arthur/vagrant/rac/ol7_183/config default: /vagrant_scripts => /home/arthur/vagrant/rac/ol7_183/shared_scripts ==> default: Running provisioner: shell... default: Running: inline script default: ****************************************************************************** default: Prepare /u01 disk. Sat Jan 12 17:12:07 UTC 2019 default: ****************************************************************************** [....] default: ****************************************************************************** default: Output from srvctl status database -d cdbrac Sat Jan 12 20:48:00 UTC 2019 default: ****************************************************************************** default: Instance cdbrac1 is running on node ol7-183-rac1 default: Instance cdbrac2 is running on node ol7-183-rac2 default: ****************************************************************************** default: Output from v$active_instances Sat Jan 12 20:48:04 UTC 2019 default: ****************************************************************************** default: default: SQL*Plus: Release 18.0.0.0.0 - Production on Sat Jan 12 20:48:04 2019 default: Version 18.3.0.0.0 default: default: Copyright (c) 1982, 2018, Oracle. All rights reserved. 
default: default: Connected to: default: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production default: Version 18.3.0.0.0 default: SQL> default: default: INST_NAME default: -------------------------------------------------------------------------------- default: ol7-183-rac1.localdomain:cdbrac1 default: ol7-183-rac2.localdomain:cdbrac2 default: default: SQL> default: Disconnected from Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production default: Version 18.3.0.0.0 ``` Let's check the RAC. If you are in the directory from where we started the VM, you can simply use "vagrant ssh" to connect to it: ``` Vagrant SSH arthur@marvin:~/vagrant/rac/ol7_183/node1# vagrant ssh Last login: Mon Jan 14 19:48:50 2019 from 10.0.2.2 [vagrant@ol7-183-rac1 ~]$ w 19:49:02 up 11 min, 1 user, load average: 1.51, 2.02, 1.31 USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT vagrant pts/0 10.0.2.2 19:49 2.00s 0.07s 0.06s w [vagrant@ol7-183-rac1 ~]$ ``` You should now be able to see the RAC resources running (allow 5-10 minutes for CRS/DB to start after the cluster deployment is completed): ``` arthur@marvin:~/vagrant/rac/ol7_183/node1# vagrant ssh Last login: Sun Jan 20 15:28:30 2019 from 10.0.2.2 [vagrant@ol7-183-rac1 ~]$ sudo su - oracle Last login: Sun Jan 20 15:45:51 UTC 2019 on pts/0 [oracle@ol7-183-rac1 ~]$ ps -ef |grep pmon oracle 9798 1 0 15:33 ? 00:00:00 asm_pmon_+ASM1 oracle 10777 1 0 15:33 ? 00:00:00 ora_pmon_cdbrac1 oracle 25584 25543 0 15:56 pts/1 00:00:00 grep --color=auto pmon [oracle@ol7-183-rac1 ~]$ [oracle@ol7-183-rac1 ~]$ echo $ORACLE_SID cdbrac1 [oracle@ol7-183-rac1 ~]$ sqlplus / as sysdba SQL*Plus: Release 18.0.0.0.0 - Production on Sun Jan 20 16:00:04 2019 Version 18.3.0.0.0 Copyright (c) 1982, 2018, Oracle. All rights reserved. 
Connected to: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production Version 18.3.0.0.0 SQL> DB_NAME INSTANCE_NAME CDB HOST_NAME STARTUP DATABASE_ROLE OPEN_MODE STATUS --------- -------------------- --- ------------------------------ ---------------------------------------- ---------------- -------------------- ------------ CDBRAC cdbrac1 YES ol7-183-rac1.localdomain 20-JAN-2019 15:33:51 PRIMARY READ WRITE OPEN CDBRAC cdbrac2 YES ol7-183-rac2.localdomain 20-JAN-2019 15:33:59 PRIMARY READ WRITE OPEN INST_ID CON_ID NAME OPEN_MODE OPEN_TIME STATUS ---------- ---------- -------------------- ---------- ---------------------------------------- ---------- 1 2 PDB$SEED READ ONLY 20-JAN-19 03.35.25.561 PM +00:00 NORMAL 2 2 PDB$SEED READ ONLY 20-JAN-19 03.35.58.937 PM +00:00 NORMAL 1 3 PDB1 MOUNTED NORMAL 2 3 PDB1 MOUNTED NORMAL NAME TOTAL_GB Available_GB REQ_MIR_FREE_GB %_USED_SAFELY ------------------------------ ------------ ------------ --------------- ------------- DATA 80 52 0 35.2803282 [oracle@ol7-183-rac1 ~]$ . oraenv ORACLE_SID = [cdbrac1] ? +ASM1 ORACLE_HOME = [/home/oracle] ? 
/u01/app/18.0.0/grid The Oracle base remains unchanged with value /u01/app/oracle [oracle@ol7-183-rac1 ~]$ [oracle@ol7-183-rac1 ~]$ crsctl stat res -t -------------------------------------------------------------------------------- Name Target State Server State details -------------------------------------------------------------------------------- Local Resources -------------------------------------------------------------------------------- ora.ASMNET1LSNR_ASM.lsnr ONLINE ONLINE ol7-183-rac1 STABLE ONLINE ONLINE ol7-183-rac2 STABLE ora.DATA.dg ONLINE ONLINE ol7-183-rac1 STABLE ONLINE ONLINE ol7-183-rac2 STABLE ora.LISTENER.lsnr ONLINE ONLINE ol7-183-rac1 STABLE ONLINE ONLINE ol7-183-rac2 STABLE ora.chad ONLINE ONLINE ol7-183-rac1 STABLE ONLINE ONLINE ol7-183-rac2 STABLE ora.net1.network ONLINE ONLINE ol7-183-rac1 STABLE ONLINE ONLINE ol7-183-rac2 STABLE ora.ons ONLINE ONLINE ol7-183-rac1 STABLE ONLINE ONLINE ol7-183-rac2 STABLE -------------------------------------------------------------------------------- Cluster Resources -------------------------------------------------------------------------------- ora.LISTENER_SCAN1.lsnr 1 ONLINE ONLINE ol7-183-rac1 STABLE ora.LISTENER_SCAN2.lsnr 1 ONLINE ONLINE ol7-183-rac2 STABLE ora.LISTENER_SCAN3.lsnr 1 ONLINE ONLINE ol7-183-rac2 STABLE ora.MGMTLSNR 1 ONLINE ONLINE ol7-183-rac2 169.254.14.104 192.1 68.1.102,STABLE ora.asm 1 ONLINE ONLINE ol7-183-rac1 Started,STABLE 2 ONLINE ONLINE ol7-183-rac2 Started,STABLE 3 OFFLINE OFFLINE STABLE ora.cdbrac.db 1 ONLINE ONLINE ol7-183-rac1 Open,HOME=/u01/app/o racle/product/18.0.0 /dbhome_1,STABLE 2 ONLINE ONLINE ol7-183-rac2 Open,HOME=/u01/app/o racle/product/18.0.0 /dbhome_1,STABLE ora.cvu 1 ONLINE ONLINE ol7-183-rac2 STABLE ora.mgmtdb 1 ONLINE ONLINE ol7-183-rac2 Open,STABLE ora.ol7-183-rac1.vip 1 ONLINE ONLINE ol7-183-rac1 STABLE ora.ol7-183-rac2.vip 1 ONLINE ONLINE ol7-183-rac2 STABLE ora.qosmserver 1 ONLINE ONLINE ol7-183-rac2 STABLE ora.scan1.vip 1 ONLINE ONLINE ol7-183-rac1 
STABLE ora.scan2.vip 1 ONLINE ONLINE ol7-183-rac2 STABLE ora.scan3.vip 1 ONLINE ONLINE ol7-183-rac2 STABLE -------------------------------------------------------------------------------- [oracle@ol7-183-rac1 ~]$ ``` If you want to check the current status of the VMs, you can use the following commands from our host marvin: ``` arthur@marvin:~# vagrant global-status id name provider state directory --------------------------------------------------------------------------- 40d79c9 default virtualbox running /home/arthur/vagrant/rac/ol7_183/dns 94596a3 default virtualbox running /home/arthur/vagrant/rac/ol7_183/node2 87fd155 default virtualbox running /home/arthur/vagrant/rac/ol7_183/node1 The above shows information about all known Vagrant environments on this machine. This data is cached and may not be completely up-to-date (use "vagrant global-status --prune" to prune invalid entries). To interact with any of the machines, you can go to that directory and run Vagrant, or you can use the ID directly with Vagrant commands from any directory. For example: "vagrant destroy 1a2b3c4d" arthur@marvin:~# arthur@marvin:~# VBoxManage list runningvms "ol7_183_dns" {43498f5d-85b4-404e-8720-caa38de6b496} "ol7_183_rac2" {21c779b5-e2da-44af-a121-89a8c2bfc3c6} "ol7_183_rac1" {1f5718bc-140a-4470-aa0a-4a9ddb9215d7} arthur@marvin:~# ``` To show you how quickly the system restarts, here is the time it takes to start each element after the first time the RAC is provisioned: ``` arthur@marvin:~/vagrant/rac/ol7_183/node2# start_rac Bringing machine 'default' up with 'virtualbox' provider... ==> default: Checking if box 'bento/oracle-7.6' version '201812.27.0' is up to date... ==> default: Clearing any previously set forwarded ports... ==> default: Clearing any previously set network interfaces... ==> default: Preparing network interfaces based on configuration... default: Adapter 1: nat default: Adapter 2: intnet ==> default: Forwarding ports... 
default: 22 (guest) => 2222 (host) (adapter 1) ==> default: Running 'pre-boot' VM customizations... ==> default: Booting VM... ==> default: Waiting for machine to boot. This may take a few minutes... default: SSH address: 127.0.0.1:2222 default: SSH username: vagrant default: SSH auth method: private key default: Warning: Remote connection disconnect. Retrying... default: Warning: Remote connection disconnect. Retrying... default: Warning: Remote connection disconnect. Retrying... default: Warning: Remote connection disconnect. Retrying... default: Warning: Remote connection disconnect. Retrying... default: Warning: Remote connection disconnect. Retrying... default: Warning: Remote connection disconnect. Retrying... default: Warning: Remote connection disconnect. Retrying... ==> default: Machine booted and ready! ==> default: Checking for guest additions in VM... default: The guest additions on this VM do not match the installed version of default: VirtualBox! In most cases this is fine, but in rare cases it can default: prevent things such as shared folders from working properly. If you see default: shared folder errors, please make sure the guest additions within the default: virtual machine match the version of VirtualBox you have installed on default: your host and reload your VM. default: default: Guest Additions Version: 5.2.22 default: VirtualBox Version: 6.0 ==> default: Configuring and enabling network interfaces... ==> default: Mounting shared folders... default: /vagrant => /home/arthur/vagrant/rac/ol7_183/dns default: /vagrant_config => /home/arthur/vagrant/rac/ol7_183/config default: /vagrant_scripts => /home/arthur/vagrant/rac/ol7_183/shared_scripts ==> default: Machine already provisioned. Run `vagrant provision` or use the `--provision` ==> default: flag to force provisioning. Provisioners marked to run always will still run. real 2m46.422s user 0m33.583s sys 0m38.177s Bringing machine 'default' up with 'virtualbox' provider... 
==> default: Checking if box 'bento/oracle-7.6' version '201812.27.0' is up to date... ==> default: Clearing any previously set forwarded ports... ==> default: Fixed port collision for 22 => 2222. Now on port 2200. ==> default: Clearing any previously set network interfaces... ==> default: Preparing network interfaces based on configuration... default: Adapter 1: nat default: Adapter 2: intnet default: Adapter 3: intnet ==> default: Forwarding ports... default: 1521 (guest) => 1522 (host) (adapter 1) default: 5500 (guest) => 5502 (host) (adapter 1) default: 22 (guest) => 2200 (host) (adapter 1) ==> default: Running 'pre-boot' VM customizations... ==> default: Booting VM... ==> default: Waiting for machine to boot. This may take a few minutes... default: SSH address: 127.0.0.1:2200 default: SSH username: vagrant default: SSH auth method: private key default: Warning: Remote connection disconnect. Retrying... default: Warning: Remote connection disconnect. Retrying... default: Warning: Remote connection disconnect. Retrying... default: Warning: Remote connection disconnect. Retrying... default: Warning: Remote connection disconnect. Retrying... default: Warning: Remote connection disconnect. Retrying... default: Warning: Remote connection disconnect. Retrying... ==> default: Machine booted and ready! ==> default: Checking for guest additions in VM... default: The guest additions on this VM do not match the installed version of default: VirtualBox! In most cases this is fine, but in rare cases it can default: prevent things such as shared folders from working properly. If you see default: shared folder errors, please make sure the guest additions within the default: virtual machine match the version of VirtualBox you have installed on default: your host and reload your VM. default: default: Guest Additions Version: 5.2.22 default: VirtualBox Version: 6.0 ==> default: Configuring and enabling network interfaces... ==> default: Mounting shared folders... 
default: /vagrant => /home/arthur/vagrant/rac/ol7_183/node2 default: /vagrant_config => /home/arthur/vagrant/rac/ol7_183/config default: /vagrant_scripts => /home/arthur/vagrant/rac/ol7_183/shared_scripts ==> default: Machine already provisioned. Run `vagrant provision` or use the `--provision` ==> default: flag to force provisioning. Provisioners marked to run always will still run. real 2m49.464s user 0m28.160s sys 0m31.663s Bringing machine 'default' up with 'virtualbox' provider... ==> default: Checking if box 'bento/oracle-7.6' version '201812.27.0' is up to date... ==> default: Clearing any previously set forwarded ports... ==> default: Fixed port collision for 22 => 2222. Now on port 2201. ==> default: Clearing any previously set network interfaces... ==> default: Preparing network interfaces based on configuration... default: Adapter 1: nat default: Adapter 2: intnet default: Adapter 3: intnet ==> default: Forwarding ports... default: 1521 (guest) => 1521 (host) (adapter 1) default: 5500 (guest) => 5500 (host) (adapter 1) default: 22 (guest) => 2201 (host) (adapter 1) ==> default: Running 'pre-boot' VM customizations... ==> default: Booting VM... ==> default: Waiting for machine to boot. This may take a few minutes... default: SSH address: 127.0.0.1:2201 default: SSH username: vagrant default: SSH auth method: private key default: Warning: Remote connection disconnect. Retrying... default: Warning: Remote connection disconnect. Retrying... default: Warning: Remote connection disconnect. Retrying... default: Warning: Remote connection disconnect. Retrying... default: Warning: Remote connection disconnect. Retrying... default: Warning: Remote connection disconnect. Retrying... default: Warning: Remote connection disconnect. Retrying... ==> default: Machine booted and ready! ==> default: Checking for guest additions in VM... default: The guest additions on this VM do not match the installed version of default: VirtualBox! 
In most cases this is fine, but in rare cases it can default: prevent things such as shared folders from working properly. If you see default: shared folder errors, please make sure the guest additions within the default: virtual machine match the version of VirtualBox you have installed on default: your host and reload your VM. default: default: Guest Additions Version: 5.2.22 default: VirtualBox Version: 6.0 ==> default: Configuring and enabling network interfaces... ==> default: Mounting shared folders... default: /vagrant => /home/arthur/vagrant/rac/ol7_183/node1 default: /vagrant_config => /home/arthur/vagrant/rac/ol7_183/config default: /vagrant_scripts => /home/arthur/vagrant/rac/ol7_183/shared_scripts ==> default: Machine already provisioned. Run `vagrant provision` or use the `--provision` ==> default: flag to force provisioning. Provisioners marked to run always will still run. real 2m50.626s user 0m27.728s sys 0m32.275s arthur@marvin:~/vagrant/rac/ol7_183/node1# ``` If you don't want to use "vagrant ssh", you can use "vagrant port" to check the different ports the VMs have open and connect using ssh directly. This also allows you to connect using a different user than vagrant (User/Pass details are in vagrant/rac/ol7_183/config/install.env): ``` arthur@marvin:~/vagrant/rac/ol7_183/node1# vagrant port The forwarded ports for the machine are listed below. Please note that these values may differ from values configured in the Vagrantfile if the provider supports automatic port collision detection and resolution. 22 (guest) => 2201 (host) 1521 (guest) => 1521 (host) 5500 (guest) => 5500 (host) arthur@marvin:~/vagrant/rac/ol7_183/node1# arthur@marvin:~/vagrant/rac/ol7_183/node1# ssh -p 2201 oracle@localhost The authenticity of host '[localhost]:2201 ([127.0.0.1]:2201)' can't be established. ECDSA key fingerprint is SHA256:/1BS9w7rgYxHc6uPe4YFvTX7oJcNFKjOpS8yWZRgXK8. ECDSA key fingerprint is MD5:6d:3f:4b:82:6a:80:3c:15:3b:d4:dd:c1:42:f1:95:5f. 
Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added '[localhost]:2201' (ECDSA) to the list of known hosts. oracle@localhost's password: Last login: Sat Jan 26 16:15:58 2019 [oracle@ol7-183-rac1 ~]$ arthur@marvin:~# scp -P2201 -pr p13390677_112040_Linux-x86-64_* oracle@localhost:/u01/Installers/ oracle@localhost's password: p13390677_112040_Linux-x86-64_1of7.zip 11% 157MB 6.5MB/s 03:00 ETA ``` In case you want to delete the cluster, instead of deleting the whole directory, you can just "destroy" each component: ``` arthur@marvin:~/vagrant/rac/ol7_183# du -sh * 8.0K config 80K dns 22G node1 22G node2 arthur@marvin:~/vagrant/rac/ol7_183/node2# vagrant destroy -f ==> default: Destroying VM and associated drives... arthur@marvin:~/vagrant/rac/ol7_183/node2# arthur@marvin:~/vagrant/rac/ol7_183# du -sh * 8.0K config 80K dns 22G node1 28K node2 ``` Well, that's it for this guide, at least for now. I'm sure if you build one of these, you will be able to find more interesting things in it, and like every good destination, we will end up coming back and adding some extra nuggets on how to personalize the cluster to our taste.
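One last note: the `start_rac` command used for the restart timings earlier is not part of Tim Hall's repository — it is just a convenience wrapper on the host. A minimal sketch of what such a helper could look like (the `RAC_DIR` default below is an assumption; point it at wherever you cloned the repo):

```shell
#!/bin/bash
# Hypothetical helper, not part of the oraclebase/vagrant repository.
# Brings the three VMs up in the required order: DNS first, then node2,
# and node1 last (node1 drives the Grid/DB installation on first boot).
RAC_DIR="${RAC_DIR:-$HOME/vagrant/rac/ol7_183}"

start_rac() {
  local vm
  for vm in dns node2 node1; do
    ( cd "$RAC_DIR/$vm" && vagrant up ) || return 1   # stop on the first failure
  done
}
```

A matching `stop_rac` could walk the same directories in reverse order calling `vagrant halt`.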
project42
360,232
How many X you drink coffee in a day?
Other than energy drinks, coffee is also a great alternative to get that adrenaline running, keeping us...
0
2020-06-21T13:59:23
https://dev.to/penandpapers/how-many-x-you-drink-coffee-in-a-day-bn
webdev
Other than energy drinks, coffee is also a great alternative to get that adrenaline running, keeping us awake and active when working late at night before our project deadlines. So how many cups of coffee do you drink in a day? Do you make your own coffee or buy one at the store?
penandpapers
360,948
Good practices for starting with containers
So I really hate the saying “best practices” mainly because it creates a belief that there is only on...
0
2020-06-22T18:19:14
https://dev.to/documentednerd/good-practices-for-starting-with-containers-570d
codeproject, technology, containers, docker
--- title: Good practices for starting with containers published: true date: 2020-06-02 09:00:00 UTC tags: CodeProject,Technology,Containers,Docker canonical_url: --- So I really hate the saying “best practices”, mainly because it creates a belief that there is only one right way to do things. But I wanted to put together a post around some ideas for strengthening your micro-service architectures. As I’ve previously discussed, micro-service architectures are more complicated to implement but have a lot of huge benefits for your solution. And some of those benefits are: - Independently deployable pieces, no more large-scale deployments. - More focused testing efforts. - Using the right technology for each piece of your solution. - Increased resiliency from cluster-based deployments. But for a lot of people, including myself, the hardest part of this process is how do you structure a micro-service? How small should each piece be? How do they work together? So here are some practices I’ve found helpful if you are starting to leverage this in your solutions. ### One service = one job One of the first questions is how small should my containers be. Is there such a thing as too small? A good rule of thumb to focus on is the idea of separation of concerns. If you take every use-case and start to break it down to a single purpose, you’ll find you get to a good micro-service design pretty quickly. Let’s look at an example: I recently worked on a solution with a colleague of mine that ended up pulling from an API, and then extracting that information to put it into a data model. In the monolith way of thinking, that would have been 1 API call. Pass in the data and then cycle through and process it. But the problem was throughput: if I had pulled the 67 different regions, and the 300+ records per region, and processed this as a batch, it would have been one gigantic, messy API call.
So instead, we had one function that cycled through the regions, pulled them all to JSON files in blob storage, and then queued a message. Then we had another function that, when a message is queued, will take that message, read in the records for that region, and process them, saving them to the database. This separate function is another micro-service. Now there are several benefits to this approach, but chief among them, the second function can scale independently of the first, and I can respond to queued messages as they come in, using asynchronous processing. ### Three words… Domain driven design For a great definition of Domain-Driven Design, see [here](https://en.wikipedia.org/wiki/Domain-driven_design). The idea is pretty straightforward: the structure of your application should mirror the business logic that is being implemented. So for example, your micro-services should mirror what they are trying to do. Let’s take the most straightforward example…e-commerce. If we have to track orders, the process once an order is submitted might be the following: - Orders are submitted. - Inventory is verified. - Order payment is processed. - Notification is sent to the supplier for processing. - Confirmation is sent to the customer. - Order is fulfilled and shipped. Looking at the above, one way to implement this would be to do the following: - OrderService: Manages the orders from start to finish. - OrderRecorderService: Records the order in a tracking system, so you can track the order throughout the process. - OrderInventoryService: Takes the contents of the order and checks it against inventory. - OrderPaymentService: Processes the payment of the order. - OrderSupplierNotificationService: Interacts with a 3rd-party API to submit the order to the supplier. - OrderConfirmationService: Sends an email confirming the order is received and being processed. - OrderStatusService: Continues to check the 3rd-party API for the status of the order.
If you notice above, outside of an orchestration they match exactly what the steps were according to the business. This provides a streamlined approach that makes it easy to make changes, and easy to understand for new team members. More than likely, communication between services is done via queues. For example, let’s say the company above wants to expand to accept Venmo as a payment method. Really that means you have to update the OrderPaymentService to be able to accept the option, and process the payment. Additionally, OrderPaymentService might itself be an orchestration service between different micro-services, one per payment method. ### Make them independently deployable This is the big one: if you really want to see the benefit of micro-services, they MUST be independently deployable. This means that if we look at the above example, I can deploy each of these separate services and make changes to one without having to do a full application deployment. Take the scenario above: if I wanted to add a new payment method, I should be able to update the OrderPaymentService, check in those changes, and then deploy that from dev through production without having to deploy the entire application. Now, the first time I heard that I thought that was the most ridiculous thing I ever heard, but there are some things you can do to make this possible. - Each service should have its own data store: If you make sure each service has its own data store, that makes it much easier to manage version changes. Even if you are going to leverage something like SQL Server, make sure that the tables leveraged by each micro-service are used by that service, and that service only. This can be accomplished using schemas. - Put layers of abstraction between service communication: A great common example is queuing or eventing. If you have a message being passed through, then as long as the outgoing message format doesn’t change, there is no need to update the receiver.
- If you are going to do direct API communication, use versioning. If you do have to have APIs connecting these services, leverage versioning to allow micro-services to be deployed and changed without breaking other parts of the application. ### Build with resiliency in mind If you adopt this approach to micro-services, one of the biggest things you will notice quickly is that each micro-service becomes its own black box. And as such, I find it’s good to build each of these components with resiliency in mind. Things like leveraging Polly for retry, or circuit breaker patterns. These are great ways of making sure that your services will remain resilient, and it will have a cumulative effect on your application. For example, take our OrderPaymentService above: we know that queue messages should be coming in, with the order and payment details. We can take a microscope to this service and ask how it could fail; it’s not hard to get to a list like this: - Message comes through in a bad format. - Payment service can’t be reached. - Payment is declined (for any one of a variety of reasons). - Service fails while waiting on payment to be processed. Now for some of the above, it’s just some simple error handling, like checking the format of the message for example. We can also build logic to check if the payment service is available, and do an exponential retry until it’s available. We might also consider implementing a circuit breaker that says if we can’t process payments after so many tries, the service switches to an unhealthy state and triggers a notification workflow. And in the final scenario, we could implement a state store that indicates the state of the payment being processed, should a service fail and need to be picked up by another. ### Consider monitoring early This is the one that everyone forgets, but it dove-tails nicely out of the previous one. It’s important that there be a mechanism for tracking and monitoring the state of your micro-service.
I find too often it’s easy to say “Oh the service is running, so that means it’s fine.” That’s like saying just because the homepage loads, a full web application is working. You should build into your micro-services the ability to track their health and enable a way of doing so for operations tools. Let’s face it, at the end of the day, all code will eventually be deployed, and all deployed code must be monitored. So for example, looking at the above: if I build a circuit breaker pattern into OrderPaymentService, every failure updates a status stored in the service’s memory that says it’s unhealthy. I can then expose an HTTP endpoint that returns the status of that breaker. - Closed: Service is running fine and healthy. - Half-Open: Service is experiencing some errors but still processing. - Open: Service is taken offline for being unhealthy. I can then build out logic so that when it gets to Half-Open, and even Open, specific events will occur. ### Start small, don’t boil the ocean This one seems kind of ironic given the above. But if you are working on an existing application, you will never be able to convince management to allow you to junk it and start over. So what I have done in the past is to take an application, and when you find it’s time to make a change to some part of the application, take the opportunity to rearchitect and make it more resilient. Deconstruct the pieces and implement a micro-service approach to resolving the problem. ### Stateless over stateful Honestly, this is just a good practice to get used to: most container technologies, like Docker or Kubernetes, really favor the idea of elastic scale and the ability to start or kill a process at any time. This becomes a lot harder if you have to manage state within a container. If you must manage state, I would definitely recommend using an external store for that information.
Now I know not every one of these might fit your situation, but I’ve found that these practices make it much easier to transition to creating micro-services for your solutions and to see the benefits of doing so.
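The retry and circuit-breaker ideas above can be sketched in a few lines. This is a minimal illustration only, not Polly or any specific library; the `CircuitBreaker` class and its `exec`/`status` methods are names invented for this sketch:

```javascript
// Minimal circuit-breaker sketch. The states mirror the health model described
// above: 'closed' (healthy), 'half-open' (some errors), 'open' (taken offline).
class CircuitBreaker {
  constructor(failureThreshold = 3) {
    this.failureThreshold = failureThreshold
    this.failures = 0
    this.state = 'closed'
  }

  // Run the wrapped call; trip the breaker after too many consecutive failures.
  exec(fn) {
    if (this.state === 'open') {
      throw new Error('circuit open: service marked unhealthy')
    }
    try {
      const result = fn()
      this.failures = 0
      this.state = 'closed'
      return result
    } catch (err) {
      this.failures += 1
      this.state = this.failures >= this.failureThreshold ? 'open' : 'half-open'
      throw err
    }
  }

  // What a health endpoint could return for operations tooling.
  status() {
    return this.state
  }
}
```

A production implementation would add timed recovery (Open automatically moving back to Half-Open after a cooldown) and exponential backoff between retries; this sketch only shows the state transitions.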
documentednerd
360,343
AWS: EC2 Hibernation
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute ca...
7,421
2020-06-22T15:02:26
https://dev.to/exampro/aws-ec2-hibernation-2fj4
aws, beginners, career, 100daysofcloud
[Amazon Elastic Compute Cloud (Amazon EC2)](https://aws.amazon.com/ec2/) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale computing easier for developers. ### What is EC2 Hibernation? Have you ever put your computer to "sleep"? When you close the computer your work is still saved and available as soon as you open the computer back up. In the same way, we can put our EC2 instances "to sleep" through hibernation. ### Hibernation Flow: ![EC2 Hibernation flow](https://dev-to-uploads.s3.amazonaws.com/i/vbgiffwdnlsh8ql0kexc.png) ### Benefits **[Hibernation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Hibernate.html)** saves the contents from the instance memory (RAM) to your [Amazon EBS](https://aws.amazon.com/ebs/?ebs-whats-new.sort-by=item.additionalFields.postDateTime&ebs-whats-new.sort-order=desc) root volume. This leads to a **faster** boot-up time for your instances. AWS persists the instance's Amazon EBS root volume and any attached Amazon EBS data volumes. #### When an instance is started again: - Your Amazon EBS volume is restored to its previous state. (*Like when opening a computer that has been "put to sleep"*) - Your **RAM contents** are reloaded and **the processes** that previously were running on the instance are **resumed.** ### Important Information EC2 hibernation is available for [On-Demand](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-on-demand-instances.html) and [Reserved](https://aws.amazon.com/ec2/pricing/reserved-instances/) Instances. An instance can only be put into hibernation if it meets the [hibernation prerequisites.](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Hibernate.html#hibernating-prerequisites) #### Examples of prerequisites include: - The root **EBS** volume must be **encrypted**.
- Supported instance families include: C3, C4, C5, M3, M4, M5, R3, R4, and R5 - Instance RAM size must be less than 150 GB - The Amazon Machine Images it currently supports are Amazon Linux 2, Linux AMI, Ubuntu and Windows AWS does not charge for a hibernated instance when it is in a **stopped** state. There are limitations and actions that are not supported with hibernation. For a complete list of limitations, check out the [AWS documentation.](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Hibernate.html#hibernating-not-supported) #### Use Cases for Hibernation: - Apps with long-running processing - To save RAM state for quicker boot-up - For services that take a lot of time to initialize This topic is addressed in the new [AWS Solutions Architect Exam](https://aws.amazon.com/certification/coming-soon/)**[SAA-C02]** If you want to learn even more and prepare for the AWS Certified Solutions Architect Associate Exam then check out our course on [ExamPro.](https://www.exampro.co/aws-exam-solutions-architect-associate)
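To make the prerequisite checks concrete, here is a small illustrative pre-flight helper. It is not an AWS SDK or CLI call; the function name and the input shape are assumptions made up for this sketch, mirroring the prerequisites listed above:

```javascript
// Illustrative only: a local pre-flight check mirroring the hibernation
// prerequisites described above. This is NOT an AWS API.
const SUPPORTED_FAMILIES = ['c3', 'c4', 'c5', 'm3', 'm4', 'm5', 'r3', 'r4', 'r5']

function meetsHibernationPrereqs(instance) {
  // e.g. 'm5.large' -> family 'm5'
  const family = instance.instanceType.split('.')[0].toLowerCase()
  return (
    instance.rootVolumeEncrypted === true && // root EBS volume must be encrypted
    instance.ramGiB < 150 &&                 // RAM must be under 150 GB
    SUPPORTED_FAMILIES.includes(family)      // supported instance families only
  )
}

console.log(meetsHibernationPrereqs({
  instanceType: 'm5.large',
  rootVolumeEncrypted: true,
  ramGiB: 8,
})) // true
```

In practice, always check the full prerequisite list in the AWS documentation linked above; it covers more conditions (such as supported AMIs) than this sketch does.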
coffeecraftcode
360,377
What is Golang?
Hello, this is my first blog post and I'm writing it with excitement. In this first post, I will cover the Go programming language, or...
0
2020-06-21T19:48:07
https://dev.to/go/golang-nedir-5fd6
go, turkish, turkce
![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/ntdfnje8ozaq5ckui82i.png) Hello, this is my first blog post and I'm writing it with excitement. In this first post I will cover the Go programming language, or Go for short. ##### *(It is also referred to as Go, Golang, Google Go, or the Go programming language.)* First, I would like to mention how I met Go: at the Antalya GDG DevFest'19 event held in Antalya on December 1, 2019, I had the opportunity to get to know it through Mr. Ecir Uğur Küçüksille's talk on the Go language. Although I wasn't very interested at first, my interest in Go has grown considerably recently. As my interest grew, I noticed my enthusiasm for sharing what I learn grew just as much, and right now I am writing these lines. Without further ado, let's get into the topic. 🙂 ### What is Golang? Golang is an [open-source](https://github.com/golang/go), statically typed programming language designed by **Rob Pike**, **Robert Griesemer**, and **Ken Thompson**, originally with the goal of making the systems Google runs internally faster, safer, and more efficient. These three accomplished engineers began designing Go in 2007, and its first version was released in 2009. Google's goal was to take the best aspects of other languages, resolve their problematic sides, and combine them into a single language, and it partly succeeded. Many companies choose Go for their subsystems, but that doesn't mean Go is used only for subsystems. Go contains just 25 keywords, which keeps the language simple. One of the factors behind the language's recent popularity is definitely its mascot, a cute gopher named **Gopher**. Developers who use the Go language are also called Gophers. Alright, we know a few things about Go, so why should you choose it? ### Why should you choose Go? - Go gives you performance very close to what you get with the C language.
- Go is a language that supports concurrency natively. - It is backward compatible, meaning a program written for Go's first version will also run without problems on later versions. - It lives up to the saying "more work in less time". This means: since Go is a <u>compiled</u> rather than an interpreted language, it is translated directly into native machine code without needing a virtual machine, which saves you time. - Go is a flexible language that lets you build products in many areas. With Go you can build products for systems and network programming, machine learning, big data, web, [mobile](https://github.com/golang/go/wiki/Mobile), CLI, and desktop. (Even though Go doesn't ship a few of these itself, it is possible to build products in all the areas I listed.) - Go manages allocated memory properly. - It has its own garbage collector (the Go GC). ### What is Go most often chosen for? - Scalable, high-performance applications. - The back-end side of web development. - Reducing heavy workloads inside many media companies (Netflix, YouTube, SoundCloud, etc.). - Internal analytics services. - Cloud computing. and it is used in many more areas. ### Companies using Go - Google - Uber - Medium - Trendyol - SoundCloud - Dropbox - Netflix - YouTube - Peak Games You can find more in the [GoUsers](https://github.com/golang/go/wiki/GoUsers) section of Go's GitHub wiki. ## My personal take on Go Go is an ideal language for me; compared to the other languages I have used, I have warmed up to Go the most, and it feels like part of the family. I can say that Go's comfortable and easy usage also drew me in. Although I haven't used it for serious work yet, it is now certain that I will. In short, I am now a Gopher too.
😊 **If you noticed any missing, incorrect, or problematic parts in this post, you can reach me through my [personal site](https://metecan.dev/).** ## References - [https://go.kaanksc.com/](https://go.kaanksc.com/) - [https://yalantis.com/blog/why-use-go/](https://yalantis.com/blog/why-use-go/) - [Statically-Typed and Dynamically-Typed languages](https://stackoverflow.com/questions/1517582/what-is-the-difference-between-statically-typed-and-dynamically-typed-languages) - [http://devnot.com/2017/go-programlama-diline-genel-bakis/](http://devnot.com/2017/go-programlama-diline-genel-bakis/) - [Why you should learn go? by Keval Patel](https://medium.com/@kevalpatel2106/why-should-you-learn-go-f607681fad65#:~:text=Go%20runs%20directly%20on%20underlying,Processors%20understand%20binaries.) - [https://www.yakuter.com/go-dilinde-concurrency/](https://www.yakuter.com/go-dilinde-concurrency/) - [Interpreted vs Compiled Programming Languages: What's the Difference?](https://www.freecodecamp.org/news/compiled-versus-interpreted-languages/#:~:text=Interpreted%20vs%20Compiled%20Programming%20Languages%3A%20What's%20the%20Difference%3F,-Every%20program%20is&text=In%20a%20compiled%20language%2C%20the,reads%20and%20executes%20the%20code.)
metecan
360,391
Big-O For The Non-CS Degree - Part 1
Ever wonder why some algorithms are faster than others? Yeah me neither, but Big-O Notation is the li...
7,426
2020-06-21T20:12:26
https://www.travislramos.com/blog/big-o-for-the-non-cs-degree-part-1
tutorial, javascript, beginners, computerscience
Ever wonder why some algorithms are faster than others? Yeah me neither, but Big-O Notation is the likely source of explanation, and in this two-part series you will learn why! ## So what the heck is Big-O Notation? It's a way of measuring how long an algorithm will take to execute, and how well it scales based on the size of the dataset. Basically, it measures algorithmic efficiency. Let's say for example we have a list of 15 people, and we would like to sort through these 15 people to find every person whose first name starts with the letter T. Well, there are different algorithms you could use to sort through this list all ranging in different levels of complexity, with some performing better than others. Now let us pretend that list just jumped up to 1 million names. How do you think this will affect the performance and complexity? The answers to these questions can be found using Big-O notation. ## Many Flavors Big-O comes in different forms: - O(1) - O(log n) - O(n) - O(n log n) - O(n^2) - O(2^n) - O(n!) In this post, I will be discussing the first three variations with the last four discussed in the next post, so stay tuned for that! #### O(1) - Constant Time Constant time complexity doesn’t care about the size of the data being passed in. The execution time will remain the same regardless of the dataset. Whether our list contained 5 items or 1,000 items, it doesn't matter. This means that this notation is very scalable, since it is independent of the input size. Let’s say for example we have an array of numbers and we want to find the second number in that list. No matter what the size of the list is, finding the second number will always take the same amount of time.

```js
let smallList = [0, 1, 2]
let largeList = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

let logSecondNumber = (list) => {
  console.log(list[1]);
}

logSecondNumber(smallList)
logSecondNumber(largeList)
```

Both calls to the function will execute in the same amount of time even though one list is larger than the other.
#### O(log n) - Logarithmic Time Logarithmic time complexity is the time it takes to execute depending on the logarithm of the input size. A good example of this would be a binary search. You divide the dataset continuously until you get to the point you want. In our example below I am looping through the list of numbers and checking whether our middle position in the array is equal to our target value. If it isn’t, we divide the list of numbers accordingly, calculate our new middle position, and check again. This will continue until either we find the number we are looking for, or we run out of numbers in our array.

```js
function binarySearch(array, targetValue) {
  let minIndex = 0;
  let maxIndex = array.length - 1;
  let middleIndex = Math.floor((maxIndex + minIndex) / 2);

  while (array[middleIndex] != targetValue && minIndex < maxIndex) {
    if (targetValue < array[middleIndex]) {
      maxIndex = middleIndex - 1;
    } else if (targetValue > array[middleIndex]) {
      minIndex = middleIndex + 1;
    }
    middleIndex = Math.floor((maxIndex + minIndex) / 2);
  }

  return (array[middleIndex] != targetValue) ? -1 : middleIndex;
}

let numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
binarySearch(numbers, 7);
```

#### O(n) - Linear Time Linear time complexity means that the time to execute the algorithm has a direct relationship with the size of n. As more items are added to the dataset, the time to execute will scale up proportionally. Looking at our example below, we are using a for loop to print out each item in our array. Each item added to this array increases the time it takes to execute proportionally.

```js
let junkFood = ['pizza', 'cookie', 'candy', 'icecream']

function loopThroughOurJunkFood(junkFood) {
  for (let i = 0; i < junkFood.length; i++) {
    console.log(junkFood[i]);
  }
}

loopThroughOurJunkFood(junkFood)
```

If we were to add another item to our junkFood array, the time it takes to execute our function will increase linearly.
#### More to come… In the next post in this series, we will go over the rest of our Big-O notation flavors so stay tuned for that! If you like what you see and want to read more, head over to my [blog](https://travision.dev/blogposts/) where I write more about software development, along with personal development!
travislramos
360,444
Learning new skills while breaking into tech
I hope by reading this, you can avoid these painful mistakes and get into the tech industry faster.
0
2020-06-21T23:23:48
https://dev.to/jcsmileyjr/learning-new-skills-while-breaking-into-tech-o09
motivation, developer, blog, beginners
--- title: Learning new skills while breaking into tech published: true description: I hope by reading this, you can avoid these painful mistakes and get into the tech industry faster. tags: #motivation #developer #blog #beginners cover_image: https://dev-to-uploads.s3.amazonaws.com/i/qoncv7hpelfgamop2og1.png --- I have been trying to get into tech for a long time and have made every classic mistake possible. I hope by reading this, you can avoid these painful mistakes and get into the tech industry faster. ###What I did Wrong Software development isn’t just one language or skill. A developer needs a myriad of technologies and industry-standard coding practices to develop useful working applications. I’ve made the mistake of learning one thing at an introductory level and no established coding practices. The second mistake was using one or two technologies to build all types of applications. The classic scenario of “If all you have is a hammer, then everything looks like a nail”. ![Hammer and Nail](https://dev-to-uploads.s3.amazonaws.com/i/kz48528vtwwl8evoer5k.png) ###Year One: Moving too Slow In 2016, I learned basic HTML, CSS, jQuery, and a sliver of JavaScript. I thought I was the best thing since fried chicken. With that in mind, I built a monstrosity family reunion website in mainly jQuery. Looking back, I only knew the basics of HTML elements and CSS styles. This led to two months of pain and suffering building something relatively simple. ![Burning House](https://dev-to-uploads.s3.amazonaws.com/i/axsn2fd9hjhyd244xquf.jpg) ###Year Two: Building for the Sake of Building In 2017, I learned an introductory level of AngularJS, Electron, GitHub, and a sliver of PHP. I learned just enough to build something. I was able to develop several applications; however, not every application was suited as an AngularJS web app or Electron desktop app. The code was a scary beast of spaghetti code with no documentation and minimal version control. 
In two years, I was still limited in what I could accomplish. I had developed 11 applications that no one wanted to use and were horribly designed. All of my applications looked the same because I had no experience with design or user experience. Each application was different in purpose and platform. I was basically restarting my learning curve with each project. My back-end code was copied and pasted from previous projects. I found out that AngularJS was being scrapped for a newer and totally different Angular 2. ###What I Should have Done A much better strategy was to commit two months to basic web technologies, two months to master JavaScript, two months each for basic AngularJS and PHP, and the remaining four months for UI design theory, hosting, database, testing, and creating an API. During that time period I should have mastered basic coding practices and version control. The second year I should have been doing one project every two to three months with a focus on a gradual complexity. An additional goal should have been to find ways to work with other developers. Fast-forward to mid 2019. I had the opportunity to get paid for a freelance React Native mobile app with a colleague from my local tech community. Several weeks were spent learning the technology using tutorials and documentation. Four additional weeks were devoted to building four projects of increasing complexity, but simple enough to be done in a week. The main idea was to determine what complementary knowledge I would need to research before working on the contract work. Ultimately, we were able to complete the contract work. My colleague, a UI/UX specialist, was able to help me with concepts I didn’t know I didn’t know. ![Two men building a house](https://dev-to-uploads.s3.amazonaws.com/i/qoncv7hpelfgamop2og1.png) ###Take-away The take-away is to plan your learning journey with having a set of complementary skills as the end goal in the shortest time period. 
* Pick an area of expertise and get good at it. * Take the time to advance past the basics of each technology. * Research what technology is hot in your area, what the maintainers for a technology have planned for it, and how it complements your current knowledge. Then wash and repeat with another complementary area of expertise. The goal is to get good at one to two things while having knowledge on everything else. You can follow my journey doing #100DaysOfCode on Twitter at JCSmiley4 or connect with me on LinkedIn at JC Smiley Jr. As always, let’s have fun and do lots of victory dancing. ![Animated Developer](https://dev-to-uploads.s3.amazonaws.com/i/b46pz3lbin1onq25gd69.png) Icons by: * Icon made by Freepik from www.flaticon.com * Photo by Stephen Radford on Unsplash
jcsmileyjr
360,480
Post Commit: Ensuring Quality Before Release
How do we release software responsibly to make sure it is bug free and users are satisfied.
0
2020-06-23T20:03:24
https://dev.to/justinctlam/post-commit-ensuring-quality-before-release-38dj
productivity, testing, career, engineering
--- title: Post Commit: Ensuring Quality Before Release published: true description: How do we release software responsibly to make sure it is bug free and users are satisfied. tags: #productivity #testing #career #engineering cover_image: https://dev-to-uploads.s3.amazonaws.com/i/0jkza11s0whlmbqbul7j.jpg --- # Story Time You just spent the last few weeks writing the latest and greatest feature for your users. You did your due diligence and wrote an entire suite of unit tests and integration tests. Your fellow colleagues reviewed your code and gave their thumbs up. You press commit, your CI and CD processes are green and your feature officially releases. You go home eagerly anticipating the praises and accolades in the following days. Then it happens. The complaints start pouring in, tweets about how your new feature doesn't really work, even worse, it broke other features in your app. Things don't look good and you spend the next few days to a week fighting fires. You are starting to lose customer satisfaction. How would we have handled this better? I will describe how committing your feature is just the beginning of your journey to releasing your software. # Let's take a journey #### Know What Success Means Hopefully before you begin writing your feature you know what success looks like and what metrics you are anticipating to move. We don't build things in a vacuum, and new features don't automatically imply happy users. Are you looking for an increase in new users, high engagement with existing users, better performance, etc... make sure you know what you want and how you will measure it. #### Have Internal Builds Some people might refer to this as an experimental build, sandbox build, alpha build, etc... This is a version of your build for internal use only. The internal build allows developers to safely commit code for internal testing and experimentation. This is the build other engineers, PMs, designers, etc...
will go to for an early feel of what the feature is going to do. #### Dogfood Your Software Once you have an internal build deployment process, your next phase for ensuring quality is to have dogfooding sessions. As the developer of your feature, set up some time with your team to test your new feature and do your best to break it. Report any bugs, fix, and repeat the dogfooding process a few times. Also, this is a good time to evaluate design decisions, user interaction, performance, etc... #### Formal QA Testing If your team is fortunate enough to have a QA team, this is the time to officially submit a build to them along with a write up on how to use it. Having dedicated QA professionals will help immensely in finding edge cases. The QA team should have additional resources, like multiple devices or configurations to test on. Examples include testing on many different screen sizes, all variations of mobile devices, different deployment environments, etc... #### Use Feature Flags One of the best ways to save yourself in an emergency is to be able to turn your feature off after deployment. Building feature flags into your code before release will give you that safety net. #### Deploy Incrementally Internal dogfooding and QA testing can still miss some edge cases. Sometimes the best way to get feedback is from real users. It is best to release your software incrementally. Start with 5% of your users and evaluate any feedback you might get back. Once things are stable, start deploying to 10%, 25%, 50%, 75%, etc... until you finally release to 100% of your users. Find a good cadence you feel necessary for your business. This helps ensure you don't upset too many users if something does go horribly wrong. #### Measure Success Through Data Circling back to the first point, this is the time to measure if your feature is providing the value you intended. This is also the time to make sure there aren't any performance issues and regressions through data.
That means you need to have dashboards with metrics to monitor the health of your software. Don't always take anecdotes or customer feedback as your only source of feedback. Sometimes the data will tell you a different story. # Release Is Only The Beginning Following these processes will help you release your software in a safe way. But all the processes in the world won't guarantee 100% bug free and user satisfied software. Keep an eye on your feature and continue to address any feedback and issues as they appear. The success of your software really depends on the amount of love you give it through its life time after committing your code.
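The feature-flag and incremental-rollout ideas above can be sketched in a few lines. This is only a minimal illustration; the function name and the hash-the-user-id bucketing scheme are my own assumptions, not something from the article:

```python
import hashlib

def is_enabled(feature: str, user_id: str, rollout_pct: int) -> bool:
    """Deterministically bucket a user into [0, 100) and enable the
    feature only if their bucket falls under the rollout percentage."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_pct

# A kill switch is just a rollout of 0%; a full release is 100%.
# Because the bucketing is deterministic, a given user keeps the same
# experience as you ramp from 5% to 10% to 25% and onward.
```

The deterministic hash is the design choice that makes incremental deployment painless: ramping the percentage up only ever adds users to the enabled set, it never flips anyone back and forth.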
justinctlam
360,575
Proxy Component
A proxy component is a placeholder component that can be rendered to or from another component. In sh...
0
2020-06-22T07:45:18
https://dev.to/reactpatterns/proxy-component-349e
react, beginners, javascript, webdev
A proxy component is a placeholder component that can be rendered to or from another component. In short, a proxy component is a reusable component. Example:

```js
import React from 'react'

class Button extends React.Component {
  render() {
    return <button type="button">My Button</button>
  }
}

class App extends React.Component {
  render() {
    return <Button />
  }
}

export default App
```

<hr/>

### React Patterns

To learn more React patterns, visit: <a href="https://reactpatterns.js.org/">https://reactpatterns.js.org</a>

To get the latest React patterns, follow/star/like us on:

* Twitter: <a href="https://twitter.com/reactjspatterns">@reactjspatterns</a>
* Github: <a href="https://github.com/reactpatterns/reactpatterns">reactpatterns</a>
* Facebook: <a href="https://www.facebook.com/reactjspatterns">reactjspatterns</a>
reactpatterns
360,604
You can try AiSara for Free !
Grab the opportunity Malaysia's Next Generation Artificial Intelligence Upcoming...
0
2020-06-22T09:22:42
https://dev.to/aisaraenquiry/you-can-try-aisara-for-free-nb5
machinelearning
# **Grab the opportunity**

[Malaysia's Next Generation Artificial Intelligence](https://www.aisara.ai/)

### **Upcoming Demo**

The AiSara team is going to launch a demo version of AiSara soon, where users can upload their own numerical data and run their own predictions.

**1. General AiSara**
**2. Oil & Gas Production Forecast, History Matching**
**3. Hyperparameter Tuning**

AiSara can predict any kind of numerical data. No model design is needed; the only things you need are your input and output data, and AiSara can easily find the pattern!

Visit our website or email us directly at support@aisara.ai for the latest updates regarding the demo launch date.
aisaraenquiry
360,614
Writing a swagger.json file
Swagger is a tool that you can use to document and consume API. The document can be in JSON or YAML f...
0
2020-06-22T10:08:57
https://dev.to/zeeshanahmad/getting-started-with-swagger-3bbc
php, node, python, javascript
Swagger is a tool that you can use to document and consume APIs. The document can be in `JSON` or `YAML` format. In this tutorial, we will document [JSONPlaceholder](https://jsonplaceholder.typicode.com/) endpoints using Swagger, and finally, we will consume JSONPlaceholder endpoints using Swagger UI.

## Initial Setup

I recommend using Visual Studio Code as your editor for writing the Swagger file, with the below-mentioned extension, as it helps with autocompletion:

* [OpenAPI (Swagger) Editor](https://marketplace.visualstudio.com/items?itemName=42Crunch.vscode-openapi)

Let's start by creating a new file. You can name it whatever you want, but I will call it `swagger.json`. Now open that file in Visual Studio Code and put the text below inside of it:

```json
{
  "openapi": "3.0.3",
  "info": {
    "title": "JSONPlaceholder",
    "description": "Fake Online REST API for Testing and Prototyping",
    "version": "0.0.1"
  },
  "paths": {}
}
```

Let's break down the above JSON into multiple parts:

* `openapi`: Swagger uses the OpenAPI specification, which defines the Swagger file structure
* `info`: Information about `JSONPlaceholder`
* `title`: Our API name
* `description`: Short description of our API
* `version`: Version of the swagger file
* `paths`: All endpoints of any API

## JSONPlaceholder `/posts` endpoint

Now navigate to `swagger.json` and put the following content in the `paths` key:

```json
"/posts": {
  "get": {
    "description": "List all posts",
    "responses": {}
  }
}
```

Let's break down the above JSON into multiple parts for your understanding:

* `"/posts"`: Endpoint to JSONPlaceholder which returns the list of posts
* `"get"`: HTTP method of the `/posts` endpoint
* `"description"`: Short description of this endpoint
* `"responses"`: List of possible responses that could come from this endpoint

Open the following [link](https://jsonplaceholder.typicode.com/posts) in your browser. You will see the array of posts. Now we know that when everything is working fine on the `JSONPlaceholder` side, we receive the list of posts. Now go back to our `swagger.json` file and replace `"responses": {}` with the following text:

```json
"responses": {
  "200": {
    "description": "Successfully fetched all posts from JSONPlaceholder",
    "content": {
      "application/json": {
        "schema": {
          "type": "array",
          "items": {
            "type": "object",
            "properties": {
              "userId": { "type": "number" },
              "id": { "type": "number" },
              "title": { "type": "string" },
              "body": { "type": "string" }
            },
            "example": {
              "userId": 1,
              "id": 1,
              "title": "sunt aut facere repellat provident occaecati excepturi optio reprehenderit",
              "body": "quia et suscipit\nsuscipit recusandae consequuntur expedita et cum\nreprehenderit molestiae ut ut quas totam\nnostrum rerum est autem sunt rem eveniet architecto"
            }
          }
        }
      }
    }
  }
}
```

Let's break down the above JSON:

* `"200"`: HTTP status code of the success response
* `"description"`: Short description of this response
* `"content"`: Response that is coming from the server
* `"application/json"`: Type of the response from the server
* `"schema"`: Data structure of the response from the server
* `"type"`: Data type of the received data structure: `object`, `number`, `string`, `boolean`
* `"items"`: Array item structure
* `"type"`: Type of the array item: `object`, `number`, `string`, `boolean`
* `"properties"`: Properties inside the post object
* `"property"`: Property inside the post object
* `"type"`: Type of the property: `object`, `number`, `string`, `boolean`
* `"example"`: Example of the structure of a post item

Here is the full example of the `swagger.json` file:

```json
{
  "openapi": "3.0.3",
  "info": {
    "title": "JSONPlaceholder",
    "description": "Fake Online REST API for Testing and Prototyping",
    "version": "0.0.1"
  },
  "paths": {
    "/posts": {
      "get": {
        "description": "List all posts",
        "responses": {
          "200": {
            "description": "Successfully fetched all posts from JSONPlaceholder",
            "content": {
              "application/json": {
                "schema": {
                  "type": "array",
                  "items": {
                    "type": "object",
                    "properties": {
                      "userId": { "type": "number" },
                      "id": { "type": "number" },
                      "title": { "type": "string" },
                      "body": { "type": "string" }
                    },
                    "example": {
                      "userId": 1,
                      "id": 1,
                      "title": "sunt aut facere repellat provident occaecati excepturi optio reprehenderit",
                      "body": "quia et suscipit\nsuscipit recusandae consequuntur expedita et cum\nreprehenderit molestiae ut ut quas totam\nnostrum rerum est autem sunt rem eveniet architecto"
                    }
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}
```
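Before pointing a Swagger UI instance at the file, it can be handy to sanity-check its structure. The snippet below is only a minimal illustration (not a full OpenAPI validation), and the inline spec is a trimmed copy of the file above:

```python
import json

# Trimmed copy of the swagger.json built above.
spec = json.loads("""
{
  "openapi": "3.0.3",
  "info": {"title": "JSONPlaceholder", "version": "0.0.1"},
  "paths": {
    "/posts": {"get": {"description": "List all posts", "responses": {}}}
  }
}
""")

# The three top-level keys Swagger tooling expects.
assert {"openapi", "info", "paths"} <= spec.keys()
assert "/posts" in spec["paths"]
print("swagger.json structure looks OK")
```

In practice you would `json.load()` the file from disk instead of an inline string; the checks stay the same.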
zeeshanahmad
360,679
React Lifecycle
One of my favorite parts of learning React so far has been understanding the React component lifecycl...
0
2020-06-22T11:33:38
https://dev.to/irenejpopova/react-lifecycle-30ja
react, javascript
One of my favorite parts of learning React so far has been understanding the React component lifecycle. The component lifecycle goes through the following phases:

> * Mounting
> * Updating
> * Unmounting

## Mounting

The component is rendered to the DOM for the first time. This is called mounting. These methods are called in the following order when an instance of a component is being created and inserted into the DOM:

`constructor()`
`static getDerivedStateFromProps()` (rarely used)
`render()`
`componentDidMount()`

## Updating

An update can be caused by changes to props or state. These methods are called in the following order when a component is being re-rendered:

`static getDerivedStateFromProps()` (rarely used)
`shouldComponentUpdate()` (rarely used)
`render()`
`getSnapshotBeforeUpdate()` (rarely used)
`componentDidUpdate()`

## Unmounting

When a component is removed from the DOM, it is unmounted. The method below is called in this phase:

`componentWillUnmount()`

## Lifecycle Methods

`constructor()`

The constructor for a React component is called before it is mounted. The constructor is called only once in the whole lifecycle, and it is where you can set initial values for the component. Constructors are only used for two purposes:

1. Initializing local state by assigning an object to `this.state`
2. Binding event handler methods to an instance

```javascript
constructor(props) {
  super(props);
  this.state = { qty: this.props.qty };
  this.clickHandling = this.clickHandling.bind(this);
}
```

Of all the lifecycle methods in React.js, `render()` is the most used. If a React component has to display any data, it uses JSX; React uses JSX for templating instead of regular JavaScript. `render()` is the most used method of any React-powered component because it returns the JSX with backend data. It looks like a normal function, but `render()` has to return something, even if that is `null`.

When the component file is called, it calls the `render()` method by default, because the component needs to display the HTML markup, or we can say JSX syntax. This method is the only required method in a class component. The `render()` function should be pure, meaning that it does not modify component state and returns the same output each time it is invoked.

```javascript
render() {
  return (
    <div>
      <h2>Cart Items ({this.state.qty})</h2>
    </div>
  );
}
```

It is good to keep in mind that we must return something. If there is no JSX to return, then `null` would be perfect, but we must return something. In that scenario, you can do something like this:

```javascript
import { Component } from 'react';

class App extends Component {
  render() {
    return null;
  }
}

export default App;
```

Another thing to keep in mind is that `setState()` cannot be called inside the `render()` function. Because `setState()` changes the state of the application, and a change in state calls the `render()` function again, the call stack would grow to infinity and the application would crash. You can define some variables and perform some operations inside the `render()` function, but never use the `setState()` function. In general, we only log out some variable's output in the `render()` method. It is the function that is called in the mounting lifecycle methods.

`componentDidMount()`

This method is called after all the elements of the page are rendered correctly. Once the markup is set on the page, this method is called by React itself to either fetch data from an external API or perform some unique operations which need the JSX elements. The `componentDidMount()` method is the perfect place to call the `setState()` method to change the state of our application and `render()` the updated, data-loaded JSX.

For example, if we are going to fetch any data from an API, then the API call should be placed in this lifecycle method; when we get the response, we can call the `setState()` method and render the element with the updated data.

```javascript
import React, { Component } from 'react';

class App extends Component {
  constructor(props) {
    super(props);
    this.state = {
      data: 'Irie Dreams'
    };
  }

  getData() {
    setTimeout(() => {
      console.log('The data is fetched');
      this.setState({
        data: 'Hello Dreams'
      });
    }, 1000);
  }

  componentDidMount() {
    this.getData();
  }

  render() {
    return (
      <div>
        {this.state.data}
      </div>
    );
  }
}

export default App;
```

Here an API call is simulated with a `setTimeout` function to fetch the data. After the component is rendered correctly, the `componentDidMount()` function is called, and that calls the `getData()` function. So the method is invoked immediately after the component is mounted. If you load data using an API, this is the right place to request it.

`componentWillUnmount()`

This method is executed immediately before the component is unmounted and destroyed, that is, when a component is being removed from the DOM.

#### `componentDidUpdate()`

This method is executed immediately after the component has been updated in the DOM. Updates occur through changes to state and props. This method is not called for the initial render. It is a good place to compare the current props to the previous props. The method `componentDidUpdate()` is called after `componentDidMount()` and can be useful for performing some action when the state changes. It takes as its first two arguments the previous props and the previous state.

### When is `componentDidUpdate()` good to use?

`componentDidUpdate()` is good to use when we need to call an external API, on the condition that the previous state and the current state have changed. The call to the API is conditional on the state having changed; if there is no state change, no API call is made. In order to avoid an infinite loop, the API call needs to be inside a conditional statement.
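One way to keep that conditional honest is to factor the comparison into a small pure function that `componentDidUpdate(prevProps)` calls. This is a sketch; the `userId` prop and the function names are illustrative assumptions, not from the article:

```javascript
// Pure helper: decide whether an update warrants a new API call.
// Only refetch when the prop the request depends on actually changed.
function shouldRefetch(prevProps, props) {
  return prevProps.userId !== props.userId;
}

// Inside the component it would be used like this:
//
//   componentDidUpdate(prevProps) {
//     if (shouldRefetch(prevProps, this.props)) {
//       this.fetchUser(this.props.userId); // hypothetical fetch method
//     }
//   }
```

Because the helper is pure, it is trivial to unit test, which makes the "no state change, no API call" rule easy to enforce.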
irenejpopova
360,740
One Month with the TEX Shinobi
It has been a month since I started using the TEX Shinobi keyboard. I like it a lot.
0
2020-06-22T14:22:09
https://dev.to/kingori/tex-shinobi-1-21f6
tex, shinobi, keyboard, trackpoint
---
title: One Month with the TEX Shinobi
published: true
description: It has been a month since I started using the TEX Shinobi keyboard. I like it a lot.
tags: tex, shinobi, keyboard, trackpoint
cover_image: https://dev-to-uploads.s3.amazonaws.com/i/9wyougs0axz12qj4rcg3.png
---

I placed my order on November 19, 2019, and finally received TEX's [Shinobi](https://tex.com.tw/products/shinobi?variant=16969883648090) on May 14, 2020. For an unboxing, someone else has already put together a good video on YouTube, so I'll let that stand in for mine.

{% youtube DNKqLQtVadc %}

I [bought the SK-8855 back in 2011](http://kingori.egloos.com/4533865), which has the same layout, and had been using it happily, but after nearly 10 years it got a little(?) boring. While looking around for a different keyboard, I happened to hear that this one was being released and quickly placed a pre-order. Now that I've used it for about a month and the thrill of first unboxing has mostly faded, here is a brief review.

I bought the BLE module / red switch / US layout configuration.

# Layout

The layout is exactly the same as the old SK-8855, so adapting was no problem. But since the key action is different and the keycaps are a different size, it is slightly awkward for someone with small hands like me. In particular, when scrolling with the TrackPoint (the red nub) or left-clicking, I have to shift my right hand slightly. With the old 8855 I could keep my hands completely planted while typing; with this keyboard my hands feel a bit busier. It was very awkward and uncomfortable at first, but after about a month I've made my peace with it.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/dqtzqgv66nvrunszhjdd.png)

I type like this, and

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/8mixqa25fswqj25lb60t.png)

I have to move my hand like *this* to be able to click.

# Bluetooth Connection

The BLE module has a few big issues.

1. When connected over BLE on a Mac, the keyboard's function key settings don't work.
2. Horizontal scrolling doesn't work over a BLE connection.
3. If you stop typing for a while and then start again, the TrackPoint takes about 3 seconds to respond.

The first issue is solved simply by remapping the keys outright. The keymap configuration method is very impressive: you [configure it on a web page](https://shinobi.tex-design.com.tw/#layout), download the configuration as a file, flip the keyboard's DIP switch so the keyboard is recognized as a kind of USB storage device, and copy the file onto it. Wow! I wasn't sure I really needed to touch the keymap, but once I started tweaking it to turn caps lock into cmd over BLE, I got a bit greedy and ended up using the changes below, which I'm very happy with.

* caps lock -> cmd
* backward / forward -> home / end
* fn + backward / forward -> pgup / pgdn

As for the second issue, the broken horizontal scrolling, the manufacturer told me it needs a firmware update, which is in preparation. There has been no news since, though. The second issue is inconvenient too, but horizontal scrolling isn't needed that often in development work, so I can mostly let it slide; the delay in the third issue, however, was a real problem. These days there are a lot of video calls: while talking and watching the screen, I press a key and then try to move the TrackPoint; the key input works immediately, but the TrackPoint only starts moving after I count "one, two, three" in my head. This gets quite annoying. In the end, even though I bought the BLE module, I just use the keyboard over USB.

For the record, there are no problems at all over a USB connection. But occasionally, when I need to use a Mac and a Windows PC together, I use the BLE connection; being able to switch between three devices instantly with fn + the volume keys is extremely convenient.

# Key Feel?

When placing the order there are a great many switch options. I had heard of blue, brown, and red switches, but clear, silver, and green switches were new to me. A keyboard enthusiast I know said reds should be fine for office use, so I ordered reds. But perhaps because I pound the keyboard rather hard, it is louder than I expected, which worries me a bit. I have been working from home the whole time, so I haven't tried it at the office yet; I wonder whether I'd get kicked out. The silent reds were $10 more, so I went with plain reds, but rather than worrying like this, I now regret not spending the extra $10.

Long keys like Enter, Space, and Shift make a hollow thunk when pressed. Presumably to counter this, a few O-rings are included in the box; fitting them on the space bar and such does seem to help a bit, but there aren't enough O-rings to cover all the long keys, so the keys without O-rings still thunk. Then again, after about a month of use I stopped noticing it entirely. But if I bring it to the office, the people sitting next to me might notice... Right now it sits directly on a wooden desk, which may be making it louder. At the office I have a desk mat laid out, so I'm hoping it will be a bit quieter there.

# Summary

If you want a 7-row TrackPoint keyboard, now that the SK-8855 is discontinued this keyboard is the only choice left. Strictly speaking there is also the [Kodachi](https://www.tex-design.com.tw/us-en/products.php?act=view&id=87) from the same TEX, but it costs well over 400,000 KRW. So just buy this one. If you don't insist on 7 rows, there is Lenovo's [Trackpoint Keyboard II](https://www.lenovo.com/us/en/accessories-and-monitors/new-arrivals/KBD-BO-TrackPoint-KBD-US-Eng/p/4Y40X49493), which is the "genuine article" here, but having used 7 rows, 6 rows felt so uncomfortable that I don't think I could ever go back.

I just hope the BLE issues get fixed soon. One more thing: for people with small hands like me, it may be slightly uncomfortable at first. But I think you'll quickly get used to it and stop noticing entirely. Of course, this will vary from person to person.

Finally, here are a few comparison shots with the SK-8855 that has been with me for nearly 10 years.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/7qrhsll6z0da80npqtao.jpeg)

The width is almost identical. Vertically, the Shinobi is larger because of the palm rest.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/5qlzg1wxu7tmsyollgwk.jpeg)

The height differs noticeably because of the construction. The SK-8855 feels like simply resting your hands on it, while the Shinobi feels like dipping your fingertips into the keycaps.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/avh0xivijep98lzrwl2z.jpeg)

The keycaps are quite concave, and on the G and H keys around the TrackPoint the lower part of the keycap is cut away so you can see inside. Quite distinctive.
kingori
360,770
How to run multi-tenant Kafka
Running Kafka in production can be a tough task. What if you have to run thousands of instances, some multi-tenant?
0
2020-06-22T14:33:47
https://dev.to/heroku/how-to-run-multi-tenant-kafka-3i6j
---
title: How to run multi-tenant Kafka
published: true
description: Running Kafka in production can be a tough task. What if you have to run thousands of instances, some multi-tenant?
tags:
//cover_image: https://dev-to-uploads.s3.amazonaws.com/i/kao51hqb0dx3m8v0htgs.jpg
---

Apache Kafka is a beast. Forget for a moment what it actually does. I'm talking about running it in production. Even experienced teams find that getting the most out of Apache Kafka can become a serious time sink.

![Railway tracks diverging](https://dev-to-uploads.s3.amazonaws.com/i/kao51hqb0dx3m8v0htgs.jpg)

At Heroku, their DevOps team looks after Kafka on behalf of thousands of developers through the [Apache Kafka on Heroku](https://www.heroku.com/kafka) service. Not everyone can justify the expense of their own dedicated Kafka cluster, though. To make Kafka available for testing, development, and smaller production use cases, Heroku offers access to multi-tenant clusters.

If running just one Kafka instance is a full-time job, though, what does it take to operate multi-tenant Kafka clusters at Heroku's scale?

## Making the right trade-offs

Running anything multi-tenant is about making the right trade-offs. But some compromises are off the table. Security, as you'd expect, isn't up for discussion. Performance, too, must be good enough that someone could use a multi-tenant Apache Kafka on Heroku cluster as part of their production stack.

So, what about functionality? Perhaps Heroku could reduce the number of brokers or ZooKeeper instances. But that, too, is a compromise too far. There's little use in a testing or development instance that behaves differently from a production cluster.

So, what's left? How do you run secure, performant, fully functional multi-tenant Kafka at a fraction of the price of a [dedicated Kafka cluster](https://devcenter.heroku.com/articles/kafka-on-heroku#plans-and-configurations)?

By using Kafka's own functionality, plus some of Heroku's, to securely divide a single cluster between multiple customers. Oh, and lots of planning, testing, and automation.

## Security through isolation

Security in multi-tenant environments usually boils down to, "Can other people see my stuff?"

![Key in hand](https://dev-to-uploads.s3.amazonaws.com/i/9q7alio90xaibpez6lfy.jpg)

The Heroku Kafka team handles this problem in two main ways.

The first solution is to use Kafka's access control lists (ACLs). They're enforced at the cluster level and specify which users can perform which actions on which topics.

Second, Heroku uses namespaces to separate each tenant's resources. Let's say you add multi-tenant Kafka to your Heroku account. The Heroku provisioning system automatically generates a name -- something like wabash-58799 -- and associates it with your account when it creates the Kafka resource. After that, Heroku verifies that your account is associated with the right resource name each time you perform an action on Kafka. That way, only your account can access any activity on that resource, providing another level of security that is unique to Heroku.

## Staying on top of noisy neighbors

Just as one tenant must not be able to access another's data, all tenants of a cluster must have fair access. So, even if another customer is processing huge numbers of events, it should not disturb your usage.

Heroku uses Kafka's built-in support for quotas on producers and consumers, meaning that there is a fixed limit on the number of bytes each tenant can read or write per second. That way, every user gets their fair share of the computing resources available.

## Maintaining availability

Noisy neighbors are a solvable problem. However, some multi-tenant services make it almost impossible to avoid them. Think about traditional shared hosting offerings, where they promise the Earth for $3 a month. Much of the time, they're overprovisioning.

Squeezing a thousand customers each expecting 100 GB of disk space onto a machine with a 1 TB hard drive works only if most customers use only a fraction of their full allocation.

Heroku's multi-tenant Kafka immediately provisions the full set of resources purchased. So, you don't end up with a hundred people all trying to use the same gigabyte of disk space. And even if a customer does go beyond their disk quota, for example, Heroku will automatically expand their limit while emailing a notification that they need to upgrade their service.

Availability is basically about setting sane defaults, like this. Have the system behave in the way that maximizes its usefulness. Often that means provisioning more than the Kafka defaults. For example, higher partition settings (from one to eight, to maximize throughput), additional replicas (from one to three, to ensure data is not lost), and more in-sync replicas (from one to two, to truly confirm that a replica received a write).

## Testing for the real world

The saying goes that being prepared is half the battle. Knowing what could go wrong enables you to avoid those problems before they happen. The Heroku team have run extensive tests on their multi-tenant Kafka offering to simulate real-world usage, failure scenarios, and extreme workloads. For example, hammering a cluster with a million messages, then taking one of the brokers offline to see what happens. Or operating a cluster normally, then stopping and restarting a server to check that failover works.

Those one-off tests have developed into a test suite that creates an empty cluster, then generates fifty users. Those users attach the Kafka add-on to their application and then create several producers and consumers each. From there, realistic usage profiles are assigned, such as having 10% of the test users generate very small amounts of traffic, while 20% send very large messages at slow speeds, and so on.

Then, the tests gradually increase the number of users to determine a multi-tenant cluster's operational limits. Through that testing, the Heroku team identified issues before they became a problem for real users. There's more detail in a talk called "[Running Hundreds of Kafka Clusters with Five People](https://www.confluent.io/kafka-summit-nyc17/running-hundreds-of-kafka-clusters-with-5-people/inv/)."

## Kafka in more places

For most development teams, getting the benefits of Kafka without having to actually run the Kafka cluster is the ideal situation. You're free to focus on building your product without worrying about learning the ins and outs of yet another platform. Multi-tenant Kafka takes that a step further by making it affordable for situations where a dedicated cluster is overkill and yet where Kafka can have a benefit.

There's more about [what the Heroku team have learned from working with Kafka](https://blog.heroku.com/tags/kafka) over on the Heroku blog.

<small><i>Cover photo by [Sophie Dale](https://unsplash.com/@allthestars) Stormtrooper photo by [Liam Tucker](https://unsplash.com/@itsliamtucker)</i></small>
matthewrevell
361,046
python list directory files
Do you want to find out how to display all directory's files using Python? You can use os.walk() or...
0
2020-06-22T21:19:52
https://dev.to/importostar/python-list-directory-files-4h5d
python
Do you want to find out how to display all of a directory's files using Python? You can use `os.walk()` or `glob.glob()` in the Python programming language. To list files and folders you can use `os.walk()`. From there you can do specific <a href="https://python-commandments.org/python-file-handling/">file handling</a>. Did you know Python can <a href="https://pythonspot.com/http-download-file-with-python/">download files</a> from the web?

## List files and folders

## os.walk()

List all the Python files in the directory and subdirectories:

```python
import os

files = []
cwd = os.getcwd()

# (r)oot, (d)irectory, (f)iles
for r, d, f in os.walk(cwd):
    for file in f:
        if '.py' in file:
            files.append(os.path.join(r, file))

for f in files:
    print(f)
```

List folders and subfolders. You can also use `os.walk()` to get the folders and sub-folders:

```python
import os

cwd = os.getcwd()
print("folder {}".format(cwd))

folders = []
for r, d, f in os.walk(cwd):
    for folder in d:
        folders.append(os.path.join(r, folder))

for f in folders:
    print(f)
```

You can <a href="https://pythonprogramminglanguage.com/how-to-run/">run</a> this both in the terminal and on the web.

## glob.glob()

The function `glob.glob()` can be used for the same purpose. To list all files with the `.py` extension:

```python
import glob

cwd = "/home/you/Desktop/"
files = [f for f in glob.glob(cwd + "**/*.py", recursive=True)]
for f in files:
    print(f)
```

To list all folders and subfolders:

```python
import glob

cwd = "/home/you/Desktop/"
folders = [f for f in glob.glob(cwd + "**/", recursive=True)]
for f in folders:
    print(f)
```

If you are new to Python, I suggest this <a href="https://gumroad.com/l/dcsp">Python book and course</a>.
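The same listings can also be written with the standard-library `pathlib` module, which yields `Path` objects instead of plain strings. This is an extra sketch, not from the original article; the helper names are my own:

```python
from pathlib import Path

def list_files(root, pattern="*.py"):
    """Recursively collect files under root that match pattern."""
    return sorted(str(p) for p in Path(root).rglob(pattern))

def list_folders(root):
    """Recursively collect all sub-directories under root."""
    return sorted(str(p) for p in Path(root).rglob("*") if p.is_dir())
```

`Path.rglob(pattern)` is the recursive counterpart of `glob.glob(..., recursive=True)` with a `**/` prefix.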
importostar
361,174
Tailwind CSS & Svelte on Snowpack - Svelte Preprocess
Snowpack is a tool for building web applications with less tooling and 10x faster iteration. Tailwind...
0
2020-06-23T05:41:31
https://blog.agney.dev/tailwind-snowpack-svelte/
svelte, tailwindcss, snowpack, webdev
[Snowpack](https://www.snowpack.dev/) is a tool for building web applications with less tooling and 10x faster iteration. [Tailwind CSS](https://tailwindcss.com/) is a utility-first CSS framework for rapidly building custom designs. [Svelte](https://svelte.dev/) is a radical new approach to building user interfaces. This article talks about how to use all three of them in combination. It will also interest you if you want to add `svelte-preprocess` when using a Snowpack app.

# Template

A premade template is available on [Github](https://github.com/agneym/svelte-tailwind-snowpack). You can use the template with the command:

```bash
npx create-snowpack-app dir-name --template svelte-tailwind-snowpack
```

## Setup Svelte and Snowpack

Snowpack provides an official template for Svelte that can be initialised with:

```bash
npx create-snowpack-app dir-name --template @snowpack/app-template-svelte
```

[Template Source](https://github.com/pikapkg/create-snowpack-app/tree/master/templates/app-template-svelte)

## Svelte Preprocess

If you wanted to add PostCSS to your Svelte application, [`svelte-preprocess`](https://github.com/sveltejs/svelte-preprocess) would probably be the plugin you think of. It functions as an automatic processor for PostCSS, SCSS, Less and a lot more.

But since we are using Snowpack's custom plugin, none of the usual loaders would work. Luckily, the Snowpack plugin has a secret hatch for pushing in plugins. It's a config file named `svelte.config.js`. You can create one in your root folder and export your preprocessor.

```javascript
module.exports = {
  preprocess: () => console.log('Preprocess Script'),
};
```

To add `svelte-preprocess`, you would need to install it with:

```bash
npm i svelte-preprocess
```

Modify the `svelte.config.js` with:

```javascript
const sveltePreprocess = require("svelte-preprocess");

const preprocess = sveltePreprocess({
  // options to preprocess here
});

module.exports = {
  preprocess,
};
```

## Configuring PostCSS

As [suggested in the Tailwind docs](https://tailwindcss.com/docs/installation), we will use Tailwind CSS with PostCSS. To add this to preprocess:

```javascript
const sveltePreprocess = require("svelte-preprocess");

const preprocess = sveltePreprocess({
  postcss: {
    plugins: [
      // Plugins go in here.
    ]
  }
});

module.exports = {
  preprocess,
};
```

## Adding Tailwind

Tailwind is available as a PostCSS plugin, so you can add it with:

```javascript
const sveltePreprocess = require("svelte-preprocess");

const preprocess = sveltePreprocess({
  postcss: {
    plugins: [
      require('tailwindcss')
    ]
  }
});

module.exports = {
  preprocess,
};
```

After installing the `tailwindcss` package, of course.

---

and you are good to go. You can find the complete template on Github:

{% github agneym/svelte-tailwind-snowpack %}

It is also listed on the page for community templates on the Snowpack website.

Have Fun 🎉
boywithsilverwings
361,262
Loading CSV data into Kafka - video walkthrough
For whatever reason, CSV still exists as a ubiquitous data interchange format. It doesn’t get...
5,469
2020-06-23T09:27:40
https://rmoff.net/2020/06/17/loading-csv-data-into-kafka/
apachekafka, tutorial, csv, dataengineering
--- title: Loading CSV data into Kafka - video walkthrough published: true date: 2020-06-17 00:00:00 UTC tags: apachekafka,tutorial,csv,dataengineering canonical_url: https://rmoff.net/2020/06/17/loading-csv-data-into-kafka/ series: Kafka Connect examples --- {% youtube N1pseW9waNI %} For whatever reason, CSV still exists as a ubiquitous data interchange format. It doesn’t get much simpler: chuck some plaintext with fields separated by commas into a file and stick `.csv` on the end. If you’re feeling helpful you can include a header row with field names in. ``` order_id,customer_id,order_total_usd,make,model,delivery_city,delivery_company,delivery_address 1,535,190899.73,Dodge,Ram Wagon B350,Sheffield,DuBuque LLC,2810 Northland Avenue 2,671,33245.53,Volkswagen,Cabriolet,Edinburgh,Bechtelar-VonRueden,1 Macpherson Crossing ``` In this article we’ll see how to load this CSV data into Kafka, without even needing to write any code Importantly, we’re not going to reinvent the wheel by trying to write some code to do it ourselves - [Kafka Connect](https://docs.confluent.io/current/connect/index.html) (which is part of Apache Kafka) already exists [to do all of this for us](https://rmoff.dev/ljc-kafka-02); we just need the appropriate connector. ## Schemas? Yeah, schemas. CSV files might not care about them much, but the users of your data in Kafka will. **Ideally** we want a way to define the schema of the data that we ingest so that it can be stored and read by anyone who wants to use the data. 
To understand why this is such a big deal check out: - [Streaming Microservices: Contracts & Compatibility](https://www.infoq.com/presentations/contracts-streaming-microservices/) (InfoQ talk) - [Yes, Virginia, You Really Do Need a Schema Registry](https://www.confluent.io/blog/schema-registry-kafka-stream-processing-yes-virginia-you-really-need-one) (blog) - [Schemas, Contracts, and Compatibility](https://www.confluent.io/blog/schemas-contracts-compatibility) (blog) - [Confluent Platform Now Supports Protobuf, JSON Schema, and Custom Formats](https://www.confluent.io/blog/confluent-platform-now-supports-protobuf-json-schema-custom-formats/) (blog) If you are going to define a schema at ingest (and I hope you do), use Avro, Protobuf, or JSON Schema, as described [here](https://www.confluent.io/blog/kafka-connect-deep-dive-converters-serialization-explained). | Note | You don’t **have** to use a schema. You can just ingest the CSV data as-is, and I cover this below too. | ## Kafka Connect SpoolDir connector The Kafka Connect SpoolDir connector supports various flatfile formats, including CSV. Get it from [Confluent Hub](https://www.confluent.io/hub/jcustenborder/kafka-connect-spooldir), and check out the [docs here](https://docs.confluent.io/current/connect/kafka-connect-spooldir/). Once you’ve installed it in your Kafka Connect worker make sure you restart the worker for it to pick it up. 
You can check by running:

```
$ curl -s localhost:8083/connector-plugins|jq '.[].class'|egrep 'SpoolDir'
"com.github.jcustenborder.kafka.connect.spooldir.SpoolDirCsvSourceConnector"
"com.github.jcustenborder.kafka.connect.spooldir.SpoolDirJsonSourceConnector"
"com.github.jcustenborder.kafka.connect.spooldir.SpoolDirLineDelimitedSourceConnector"
"com.github.jcustenborder.kafka.connect.spooldir.SpoolDirSchemaLessJsonSourceConnector"
"com.github.jcustenborder.kafka.connect.spooldir.elf.SpoolDirELFSourceConnector"
```

### Loading data from CSV into Kafka and applying a schema

If you have a header row with field names you can take advantage of these to define the schema at ingestion time (which is a **good** idea). Create the connector:

```
curl -i -X PUT -H "Accept:application/json" \
    -H "Content-Type:application/json" http://localhost:8083/connectors/source-csv-spooldir-00/config \
    -d '{
        "connector.class": "com.github.jcustenborder.kafka.connect.spooldir.SpoolDirCsvSourceConnector",
        "topic": "orders_spooldir_00",
        "input.path": "/data/unprocessed",
        "finished.path": "/data/processed",
        "error.path": "/data/error",
        "input.file.pattern": ".*\\.csv",
        "schema.generation.enabled":"true",
        "csv.first.row.as.header":"true"
        }'
```

| Note | When you create the connector with this configuration, you need `"csv.first.row.as.header":"true"` set and a file with headers already in place, waiting to be read. |

Now head over to a Kafka consumer and observe our data.
Here I’m using kafkacat cos it’s great :)

```
$ docker exec kafkacat \
    kafkacat -b kafka:29092 -t orders_spooldir_00 \
             -C -o-1 -J \
             -s key=s -s value=avro -r http://schema-registry:8081 | \
    jq '.payload'
{
  "order_id": {
    "string": "500"
  },
  "customer_id": {
    "string": "424"
  },
  "order_total_usd": {
    "string": "160312.42"
  },
  "make": {
    "string": "Chevrolet"
  },
  "model": {
    "string": "Suburban 1500"
  },
  "delivery_city": {
    "string": "London"
  },
  "delivery_company": {
    "string": "Predovic LLC"
  },
  "delivery_address": {
    "string": "2 Sundown Drive"
  }
}
```

What’s more, in the header of the Kafka message is the metadata from the file itself:

```
$ docker exec kafkacat \
    kafkacat -b kafka:29092 -t orders_spooldir_00 \
             -C -o-1 -J \
             -s key=s -s value=avro -r http://schema-registry:8081 | \
    jq '.headers'
[
  "file.name",
  "orders.csv",
  "file.path",
  "/data/unprocessed/orders.csv",
  "file.length",
  "39102",
  "file.offset",
  "501",
  "file.last.modified",
  "2020-06-17T13:33:50.000Z"
]
```

### Setting the message key

Assuming you have a header row to provide field names, you can set `schema.generation.key.fields` to the name of the field(s) you’d like to use for the Kafka message key. If you’re running this after the first example above, remember that the connector relocates your file, so you need to move it back to the `input.path` location for it to be processed again.

| Note | The connector name (here it’s `source-csv-spooldir-01`) is used in tracking which files have been processed and the offset within them, so a connector of the same name won’t reprocess a file of the same name at a lower offset than has already been processed. If you want to force it to reprocess a file, give the connector a new name.
|

```
curl -i -X PUT -H "Accept:application/json" \
    -H "Content-Type:application/json" http://localhost:8083/connectors/source-csv-spooldir-01/config \
    -d '{
        "connector.class": "com.github.jcustenborder.kafka.connect.spooldir.SpoolDirCsvSourceConnector",
        "topic": "orders_spooldir_01",
        "input.path": "/data/unprocessed",
        "finished.path": "/data/processed",
        "error.path": "/data/error",
        "input.file.pattern": ".*\\.csv",
        "schema.generation.enabled":"true",
        "schema.generation.key.fields":"order_id",
        "csv.first.row.as.header":"true"
        }'
```

The resulting Kafka message has the `order_id` set as the message key:

```
docker exec kafkacat \
    kafkacat -b kafka:29092 -t orders_spooldir_01 -o-1 \
             -C -J \
             -s key=s -s value=avro -r http://schema-registry:8081 | \
    jq '{"key":.key,"payload": .payload}'
{
  "key": "Struct{order_id=3}",
  "payload": {
    "order_id": {
      "string": "3"
    },
    "customer_id": {
      "string": "695"
    },
    "order_total_usd": {
      "string": "155664.90"
    },
    "make": {
      "string": "Toyota"
    },
    "model": {
      "string": "Avalon"
    },
    "delivery_city": {
      "string": "Brighton"
    },
    "delivery_company": {
      "string": "Jacobs, Ebert and Dooley"
    },
    "delivery_address": {
      "string": "4 Loomis Crossing"
    }
  }
}
```

### Changing the schema field types

The connector does a fair job at setting the schema, but maybe you want to override it. You can declare the whole thing upfront using the `value.schema` configuration, but perhaps you are happy with it inferring the whole schema except for a couple of fields.
Here you can use [Single Message Transform](https://docs.confluent.io/current/connect/transforms/index.html) to munge it:

```
curl -i -X PUT -H "Accept:application/json" \
    -H "Content-Type:application/json" http://localhost:8083/connectors/source-csv-spooldir-02/config \
    -d '{
        "connector.class": "com.github.jcustenborder.kafka.connect.spooldir.SpoolDirCsvSourceConnector",
        "topic": "orders_spooldir_02",
        "input.path": "/data/unprocessed",
        "finished.path": "/data/processed",
        "error.path": "/data/error",
        "input.file.pattern": ".*\\.csv",
        "schema.generation.enabled":"true",
        "schema.generation.key.fields":"order_id",
        "csv.first.row.as.header":"true",
        "transforms":"castTypes",
        "transforms.castTypes.type":"org.apache.kafka.connect.transforms.Cast$Value",
        "transforms.castTypes.spec":"order_id:int32,customer_id:int32,order_total_usd:float32"
        }'
```

If you go and look at the schema that’s been created and stored in the Schema Registry you can see the field data types have been set as specified:

```
➜ curl --silent --location --request GET 'http://localhost:8081/subjects/orders_spooldir_02-value/versions/latest' |jq '.schema|fromjson'
{
  "type": "record",
  "name": "Value",
  "namespace": "com.github.jcustenborder.kafka.connect.model",
  "fields": [
    {
      "name": "order_id",
      "type": ["null", "int"],
      "default": null
    },
    {
      "name": "customer_id",
      "type": ["null", "int"],
      "default": null
    },
    {
      "name": "order_total_usd",
      "type": ["null", "float"],
      "default": null
    },
    {
      "name": "make",
      "type": ["null", "string"],
      "default": null
    },
    {
      "name": "model",
      "type": ["null", "string"],
      "default": null
    },
    {
      "name": "delivery_city",
      "type": ["null", "string"],
      "default": null
    },
    {
      "name": "delivery_company",
      "type": ["null", "string"],
      "default": null
    },
    {
      "name": "delivery_address",
      "type": ["null", "string"],
      "default": null
    }
  ],
  "connect.name": "com.github.jcustenborder.kafka.connect.model.Value"
}
```

### Just gimme the plain text! 😢

All of this schema stuff seems like a bunch of fuss really, doesn’t it?
Well, not really. But if you absolutely must just have CSV in your Kafka topic then here’s how. Note that we’re using a [different connector class](https://docs.confluent.io/current/connect/kafka-connect-spooldir/connectors/line_delimited_source_connector.html) and we’re using `org.apache.kafka.connect.storage.StringConverter` to write the values. If you want to learn more about serialisers and converters [see here](https://www.confluent.io/blog/kafka-connect-deep-dive-converters-serialization-explained).

```
curl -i -X PUT -H "Accept:application/json" \
    -H "Content-Type:application/json" http://localhost:8083/connectors/source-csv-spooldir-03/config \
    -d '{
        "connector.class": "com.github.jcustenborder.kafka.connect.spooldir.SpoolDirLineDelimitedSourceConnector",
        "value.converter":"org.apache.kafka.connect.storage.StringConverter",
        "topic": "orders_spooldir_03",
        "input.path": "/data/unprocessed",
        "finished.path": "/data/processed",
        "error.path": "/data/error",
        "input.file.pattern": ".*\\.csv"
        }'
```

The result? Just CSV.

```
➜ docker exec kafkacat \
    kafkacat -b kafka:29092 -t orders_spooldir_03 -o-5 -C -u -q
496,456,80466.80,Volkswagen,Touareg,Leeds,Hilpert-Williamson,96 Stang Junction
497,210,57743.67,Dodge,Neon,London,Christiansen Group,7442 Algoma Hill
498,88,211171.02,Nissan,370Z,York,"King, Yundt and Skiles",3 1st Plaza
499,343,126072.73,Chevrolet,Camaro,Sheffield,"Schiller, Ankunding and Schumm",8920 Hoffman Place
500,424,160312.42,Chevrolet,Suburban 1500,London,Predovic LLC,2 Sundown Drive
```

## Side-bar: Schemas in action

So we’ve read some CSV data into Kafka. That’s not the end of its journey. It’s going to be used for something! Let’s do that.
Here’s [ksqlDB](https://ksqldb.io/quickstart.html), in which we declare the orders topic we wrote to with a schema as a stream:

```
ksql> CREATE STREAM ORDERS_02 WITH (KAFKA_TOPIC='orders_spooldir_02',VALUE_FORMAT='AVRO');

 Message
----------------
 Stream created
----------------
```

Having done that—and because there’s a schema that was created at ingestion time—we can see all of the fields available to us:

```
ksql> DESCRIBE ORDERS_02;

Name             : ORDERS_02
 Field            | Type
-------------------------------------------
 ROWKEY           | VARCHAR(STRING)  (key)
 ORDER_ID         | INTEGER
 CUSTOMER_ID      | INTEGER
 ORDER_TOTAL_USD  | DOUBLE
 MAKE             | VARCHAR(STRING)
 MODEL            | VARCHAR(STRING)
 DELIVERY_CITY    | VARCHAR(STRING)
 DELIVERY_COMPANY | VARCHAR(STRING)
 DELIVERY_ADDRESS | VARCHAR(STRING)
-------------------------------------------
For runtime statistics and query details run: DESCRIBE EXTENDED <Stream,Table>;
ksql>
```

and run queries against the data that’s in Kafka:

```
ksql> SELECT DELIVERY_CITY, COUNT(*) AS ORDER_COUNT, MAX(CAST(ORDER_TOTAL_USD AS DECIMAL(9,2))) AS BIGGEST_ORDER_USD FROM ORDERS_02 GROUP BY DELIVERY_CITY EMIT CHANGES;
+---------------+-------------+---------------------+
|DELIVERY_CITY  |ORDER_COUNT  |BIGGEST_ORDER_USD    |
+---------------+-------------+---------------------+
|Bradford       |13           |189924.47            |
|Edinburgh      |13           |199502.66            |
|Bristol        |16           |213830.34            |
|Sheffield      |74           |216233.98            |
|London         |160          |219736.06            |
```

What about our data that we just ingested into a different topic as straight-up CSV? Because, like, schemas aren’t important?

```
ksql> CREATE STREAM ORDERS_03 WITH (KAFKA_TOPIC='orders_spooldir_03',VALUE_FORMAT='DELIMITED');
No columns supplied.
```

Yeah, no columns supplied. No schema, no bueno. If you want to work with the data, whether to query in SQL, stream to a data lake, or do anything else with it—at some point you’re going to have to declare that schema. Hence why CSV, as a schemaless-serialisation method, is a bad way to exchange data between systems.
If you really want to use your CSV data in ksqlDB, you can, but you need to enter the schema yourself—which is error-prone and tedious. You enter it each time you use the data, and every other consumer of the data enters it each time too. Declaring it once at ingest and having it available for all to use makes a lot more sense.

## Regex and JSON

If you’re using the REST API to submit configuration you might hit up against errors sending regex values within the JSON. For example, if you want to set `input.file.pattern` to `.*\.csv` and you put that in your JSON literally:

```
"input.file.pattern": ".*\.csv",
```

You’ll get this error back if you submit it as inline data with `curl`:

```
com.fasterxml.jackson.core.JsonParseException: Unrecognized character escape '.' (code 46)
 at [Source: (org.glassfish.jersey.message.internal.ReaderInterceptorExecutor$UnCloseableInputStream); line: 7, column: 36]
```

The solution is to escape the escape character (the backslash):

```
"input.file.pattern": ".*\\.csv",
```

## Streaming CSV data from Kafka to a database (or anywhere else…)

Since you’ve got a schema for the data, you can easily sink it to a database, such as Postgres:

```
curl -X PUT http://localhost:8083/connectors/sink-postgres-orders-00/config \
    -H "Content-Type: application/json" \
    -d '{
        "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
        "connection.url": "jdbc:postgresql://postgres:5432/",
        "connection.user": "postgres",
        "connection.password": "postgres",
        "tasks.max": "1",
        "topics": "orders_spooldir_02",
        "auto.create": "true",
        "auto.evolve":"true",
        "pk.mode":"record_value",
        "pk.fields":"order_id",
        "insert.mode": "upsert",
        "table.name.format":"orders"
        }'
```

| Note | This **only** works if you have a schema in your data. See [here](https://rmoff.dev/jdbc-sink-schemas) to understand why and how to work with this requirement.
|

```
postgres=# \dt
         List of relations
 Schema |  Name  | Type  |  Owner
--------+--------+-------+----------
 public | orders | table | postgres
(1 row)

postgres=# \d orders;
                Table "public.orders"
      Column      |  Type   | Collation | Nullable | Default
------------------+---------+-----------+----------+---------
 order_id         | integer |           | not null |
 customer_id      | integer |           |          |
 order_total_usd  | real    |           |          |
 make             | text    |           |          |
 model            | text    |           |          |
 delivery_city    | text    |           |          |
 delivery_company | text    |           |          |
 delivery_address | text    |           |          |
Indexes:
    "orders_pkey" PRIMARY KEY, btree (order_id)

postgres=# SELECT * FROM orders FETCH FIRST 10 ROWS ONLY;
 order_id | customer_id | order_total_usd | make       | model          | delivery_city | delivery_company         | delivery_address
----------+-------------+-----------------+------------+----------------+---------------+--------------------------+--------------------------
 1        | 535         | 190899.73       | Dodge      | Ram Wagon B350 | Sheffield     | DuBuque LLC              | 2810 Northland Avenue
 2        | 671         | 33245.53        | Volkswagen | Cabriolet      | Edinburgh     | Bechtelar-VonRueden      | 1 Macpherson Crossing
 3        | 695         | 155664.9        | Toyota     | Avalon         | Brighton      | Jacobs, Ebert and Dooley | 4 Loomis Crossing
 4        | 366         | 149012.9        | Hyundai    | Santa Fe       | Leeds         | Kiehn Group              | 538 Burning Wood Alley
 5        | 175         | 63274.18        | Kia        | Sportage       | Leeds         | Miller-Hudson            | 6 Kennedy Court
 6        | 37          | 97790.04        | BMW        | 3 Series       | Bristol       | Price Group              | 21611 Morning Trail
 7        | 644         | 76240.84        | Mazda      | MPV            | Leeds         | Kihn and Sons            | 9 Susan Street
 8        | 973         | 216233.98       | Hyundai    | Elantra        | Sheffield     | Feeney, Howe and Koss    | 07671 Hazelcrest Terrace
 9        | 463         | 162589.1        | Chrysler   | Grand Voyager  | York          | Fay, Murazik and Schumm  | 42080 Pawling Circle
 10       | 863         | 111208.24       | Ford       | Laser          | Leeds         | Boehm, Mohr and Doyle    | 0919 International Trail
(10 rows)
```

To learn more about writing data from Kafka to a database see [this tutorial](https://rmoff.dev/kafka-jdbc-video).
<iframe src="https://www.youtube.com/embed/b-3qN_tlYR4" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" allowfullscreen title="YouTube Video"></iframe>

For more tutorials on Kafka Connect see [🎥 this playlist](https://www.youtube.com/playlist?list=PL5T99fPsK7ppB_AbZhBhTyKHtHWZLWIJ8).

## Try it out!

All [the code for this article is on GitHub](https://github.com/confluentinc/demo-scene/tree/master/csv-to-kafka), and you just need Docker and Docker Compose to spin it up and give it a try. The commandline examples quoted below are based on the Docker environment. To spin it up, clone the repository, change to the correct folder, and launch the stack:

```
git clone https://github.com/confluentinc/demo-scene.git
cd csv-to-kafka
docker-compose up -d
```

Wait for Kafka Connect to launch and then off you go!

```
bash -c ' \
echo -e "\n\n=============\nWaiting for Kafka Connect to start listening on localhost ⏳\n=============\n"
while [ $(curl -s -o /dev/null -w %{http_code} http://localhost:8083/connectors) -ne 200 ] ; do
  echo -e "\t" $(date) " Kafka Connect listener HTTP state: " $(curl -s -o /dev/null -w %{http_code} http://localhost:8083/connectors) " (waiting for 200)"
  sleep 5
done
echo -e $(date) "\n\n--------------\n\o/ Kafka Connect is ready! Listener HTTP state: " $(curl -s -o /dev/null -w %{http_code} http://localhost:8083/connectors) "\n--------------\n"
'
```

The examples in this article are based on the `data` folder mapped to `/data` on the Kafka Connect worker.

----

{% youtube N1pseW9waNI %}
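A footnote to the Regex and JSON section: if you build the connector payload programmatically rather than typing the JSON by hand, the backslash escaping is handled for you. Here's a minimal Python sketch (the config key is taken from the examples above; how you PUT the payload to the REST API is up to you):

```python
import json

# In the Python source this is a single backslash...
config = {"input.file.pattern": r".*\.csv"}

# ...and json.dumps escapes it to the double backslash that
# Kafka Connect's REST API expects on the wire
payload = json.dumps(config)
print(payload)  # → {"input.file.pattern": ".*\\.csv"}
```

Sending `payload` as the request body sidesteps the `JsonParseException` shown earlier.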
rmoff
361,287
Building an Intelligent QA System With NLP and Milvus
The question answering system is commonly used in the field of natural language processing. It is use...
0
2020-06-23T10:09:06
https://dev.to/milvusio/building-an-intelligent-qa-system-with-nlp-and-milvus-599i
githunt, datascience, database, tutorial
The question answering system is commonly used in the field of natural language processing. It is used to answer questions in the form of natural language and has a wide range of applications. Typical applications include: intelligent voice interaction, online customer service, knowledge acquisition, personalized emotional chatting, and more. Most question answering systems can be classified as: generative or retrieval question answering systems, single-round or multi-round question answering systems, and open or domain-specific question answering systems. This article mainly deals with a QA system designed for a specific field, which is usually called an intelligent customer service robot.

In the past, building a customer service robot usually required converting the domain knowledge into a series of rules and knowledge graphs. The construction process relied heavily on “human” intelligence. Whenever the scenario changed, a lot of repetitive work was required. With the application of deep learning in natural language processing (NLP), machine reading can automatically find answers to matching questions directly from documents. The deep learning language model converts the questions and documents to semantic vectors to find the matching answer. This article uses Google’s open source BERT model and Milvus, an open source vector search engine, to quickly build a Q&A bot based on semantic understanding.

## **Overall Architecture**

This article implements a question answering system through semantic similarity matching. The general construction process is as follows:

1. Obtain a large number of questions with answers in a specific field (a standard question set).
2. Use the BERT model to convert these questions into feature vectors and store them in Milvus. Milvus will assign a vector ID to each feature vector at the same time.
3. Store these representative question IDs and their corresponding answers in PostgreSQL.
When a user asks a question:

1. The BERT model converts it to a feature vector.
2. Milvus performs a similarity search and retrieves the ID of the question most similar to it.
3. PostgreSQL returns the corresponding answer.

The system architecture diagram is as follows (the blue lines represent the import process and the yellow lines represent the query process):

![img](https://miro.medium.com/max/642/0*wwtEvlq7Cg99V9Se)

Next, we will show you how to build an online Q&A system step by step.

## **Steps to Build the Q&A System**

Before you start, you need to install Milvus and PostgreSQL. For the specific installation steps, see the Milvus official website.

**1. Data preparation**

The experimental data in this article comes from: https://github.com/chatopera/insuranceqa-corpus-zh

The data set contains question and answer pairs related to the insurance industry. In this article, we extract 20,000 question and answer pairs from it. With this question and answer data set, you can quickly build a customer service robot for the insurance industry.

**2. Generate feature vectors**

This system uses a pre-trained BERT model. Download it from the link below before starting the service:

https://storage.googleapis.com/bert_models/2018_10_18/cased_L-24_H-1024_A-16.zip

Use this model to convert the question set to feature vectors for future similarity search. For more information about the BERT service, see https://github.com/hanxiao/bert-as-service.

![img](https://miro.medium.com/max/681/1*WUN1TDbP5YggVefAopaIqg.png)

**3. Import to Milvus and PostgreSQL**

Normalize the generated feature vectors and import them into Milvus, then import the IDs returned by Milvus and the corresponding answers into PostgreSQL. The following shows the table structure in PostgreSQL:

![img](https://miro.medium.com/max/626/0*gnBIpn3vPLnnOXOR)

**4.
Retrieve Answers**

The user inputs a question; after BERT converts it to a feature vector, the system can find the most similar question in the Milvus library. This article uses the cosine distance to represent the similarity between two sentences. Because all vectors are normalized, the closer the cosine distance of the two feature vectors is to 1, the higher the similarity. In practice, your system may not have a perfectly matched question in the library. To handle this, you can set a threshold of 0.9. If the greatest similarity retrieved is less than this threshold, the system will prompt that it does not include a related question.

![img](https://miro.medium.com/max/622/0*xdOoVwfj7dbPgQnH)

## **System Demonstration**

The following shows an example interface of the system:

![img](https://miro.medium.com/max/1278/1*E688A3vSkyrMqzqEx-tY0A.png)

Enter your question in the dialog box and you will receive a corresponding answer:

![img](https://miro.medium.com/max/1275/0*OuoVSDAm55daLHuI)

## **Summary**

After reading this article, we hope you find it easy to build your own Q&A system. With the BERT model, you no longer need to sort and organize the text corpora beforehand. At the same time, thanks to the high performance and high scalability of the open source vector search engine Milvus, your QA system can support a corpus of up to hundreds of millions of texts.

Milvus has officially joined the Linux AI (LF AI) Foundation for incubation. You are welcome to join the Milvus community and work with us to accelerate the application of AI technologies!

=> Try our online demo here: https://www.milvus.io/scenarios
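As a recap of the retrieval logic described above, here is a small pure-Python sketch: because the vectors are normalized, cosine similarity reduces to a dot product, and a best match scoring below the 0.9 threshold is treated as "no related question". The tiny vectors and the `find_answer` helper are made up for illustration; in the real system the embeddings come from BERT and the search runs inside Milvus:

```python
import math

THRESHOLD = 0.9  # same cut-off as described above

def normalize(v):
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def cosine_similarity(a, b):
    # For normalized vectors, cosine similarity is just the dot product
    return sum(x * y for x, y in zip(a, b))

def find_answer(question_vec, library):
    # library: list of (vector_id, normalized_vector) pairs;
    # the answer text itself lives in PostgreSQL, keyed by vector_id
    best_id, best_score = max(
        ((vid, cosine_similarity(question_vec, vec)) for vid, vec in library),
        key=lambda pair: pair[1],
    )
    if best_score < THRESHOLD:
        return None  # prompt that no related question exists
    return best_id

# Toy vectors standing in for BERT embeddings
library = [(1, normalize([0.2, 0.9, 0.4])), (2, normalize([0.9, 0.1, 0.1]))]
query = normalize([0.21, 0.88, 0.41])
print(find_answer(query, library))  # prints 1
```

The returned ID would then be used to look up the answer text in PostgreSQL.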
milvusio
361,331
Learn With Me 3
Hey developers.How are you feeling today?So this is the 3rd part of my Learn with meIn my 2nd blog I...
0
2020-06-23T10:50:42
https://dev.to/osmang670/learn-with-me-3-5209
html, css, javascript, beginners
Hey developers. How are you feeling today? So this is the 3rd part of my <h3><i>Learn with me</i></h3> series. In my 2nd blog I said I would talk about CSS today. But thinking about it and reading the 2nd blog again, I didn't see any HTML topics discussed in my post. So I wanted to discuss HTML topics: what aspects of HTML you should care more about, and also some tips to learn HTML.

OK now, a quick overview of HTML. HTML isn't vastly spread all over the place. It is a very small and tidy markup language, so it's easy to read it all. Whoever is trying to master HTML should remember explicitly that HTML cannot do everything on its own. So he/she had better be ready to learn the extensions for doing anything with HTML. The extensions are not that hard to learn, but they're not that easy either. It's like the <b>English</b> language (for people who are **not** native English speakers): they have to learn it. It's not easy at first, but once you get hold of it, it's easy to get around. Or maybe not! I have only been exposed to CSS so far. But it's fine; I will let you people know about it once I cover all the extensions.

About the extensions, I will list some of the ones that I think are important:

<ul><li>CSS</li><li>JavaScript</li><li>Bootstrap</li></ul>

Before moving on to the next topic, I would love to tell you about this movement that was started some time ago. It's a big movement for new coders. It's called the #100dayschallenge. What happens is you basically code consistently for 100 days, for at least an hour a day. I joined it yesterday. You can join it if you want. Just tweet once saying <q>I start the #100dayschallenge</q>. Here is the link: https://twitter.com/osmang670/status/1275068165811101698?s=19.
Now, about the topics you should give more attention to. First of all, there aren't many topics, so it's easy to remember them all. But what happened with me is that I started CSS and now I keep forgetting some important things, like linking a URL from HTML. That is why you should learn how to link a URL to any button, image, or word, and practice it. Other tags and small things are easy to remember, or you can google them when you need to. But you always need to remember the process of how to make a form, a table, and how to link an image/word/button. Also learn how to link an email in a form (so that the admin gets the message). That is something I forgot too.

Now, about learning HTML. Actually, you don't have to do much. You just have to be consistent enough to take all the demotivation in and kick it out. For materials, just search YouTube for <em>learn html</em> and filter to **playlist**. You will see 100s of playlists. If you need a written document, I would suggest www.w3schools.com.

So this is all about HTML: what I learned and where I learned it from. Hope it helps you guys. I will write a blog after I finish CSS. It will take some time, maybe a week or more, no idea. If I can, I will write another blog in the meanwhile.

**Thanks for reading till the end**
osmang670
361,416
How To Test NPM Packages Locally
When creating NPM packages, it's much better to test them locally before publishing. Let's take a...
0
2020-06-23T16:18:01
https://www.jamesqquick.com/blog/how-to-test-npm-packages-locally
javascript, npm
---
title: How To Test NPM Packages Locally
published: true
date: 2020-06-23 15:00:36 UTC
tags: javascript, npm
canonical_url: https://www.jamesqquick.com/blog/how-to-test-npm-packages-locally
cover_image: https://cdn.sanity.io/images/rx426fbd/production/ec2a77c87b63089a7cba9d4590ef9a9a8bc34c79-1920x1080.png
---

{% youtube Knw8U5XyHaM %}

When creating NPM packages, it's much better to test them locally before publishing. Let's take a look at how to do that.

> Shoutout to [Brad Garropy](https://bradgarropy.com/) for sharing his knowledge on NPM packages with me.

## TLDR

- link your package locally
- create a test application
- link the NPM package in your test application
- do the test stuff

## Getting Started

You'll need an NPM package local to your machine. If you've never created an NPM package before, you can learn how to create one by following this article, [Creating and Publishing NPM Packages](https://jamesqquick.com/blog/how-to-create-and-publish-npm-packages).

> For clarity, let's assume the name of the package we are working on is named `jqq-package`. You'll need to replace this with the name of your actual package.

You'll also need an application to test your package with. For this, create a new folder and open it inside of your text editor. I recommend VS Code 😀. Then, initialize this test project by running `npm init`.

## Let's Test It

With your NPM package local to your machine, you'll need a way to reference/install it in the test application. **Inside of the original NPM package directory**, run `npm link` from the command line. This command will allow us to simulate installing this NPM package without it actually being published.

From there, we need to link to this package from inside of the **test directory**. You can do this by running `npm link` followed by the name of the local package.
In this demo, the name of the package we want to test is `jqq-package` so you would run `npm link jqq-package`, but make sure to use the specific name of the package you are testing.

Now, you should be able to test the package in whatever way makes sense. I won't go into detail here because this varies significantly based on what your package does, but hopefully, this sets you up to run whatever tests you think make sense.

## Wrap Up

I've been really pleased with how easy it is to create, test, and publish NPM packages. Hopefully, this helps you in testing your packages. If you have any awesome NPM packages to share or additional questions, please reach out on [Twitter](https://www.twitter.com/jamesqquick).

> This article was first posted on my [blog](https://www.jamesqquick.com/blog). You can also follow me on [Twitter](https://www.twitter.com/jamesqquick).
jamesqquick
363,663
Using Event Decorators with lit-element and Web Components
When Web Components need to communicate state changes to the application, it uses Custom Events, just...
0
2020-06-29T02:24:26
https://coryrylan.com/blog/using-event-decorators-with-lit-element-and-web-components
webcomponents, typescript
---
title: Using Event Decorators with lit-element and Web Components
published: true
date: 2020-06-22 05:00:00 UTC
tags: webcomponents, typescript
canonical_url: https://coryrylan.com/blog/using-event-decorators-with-lit-element-and-web-components
---

When Web Components need to communicate state changes to the application, they use Custom Events, just like native events built into the browser. Let’s take a look at a simple example of a component emitting a custom event.

```typescript
const template = document.createElement('template');
template.innerHTML = `<button>Emit Event!</button>`;

export class Widget extends HTMLElement {
  constructor() {
    super();
    this.attachShadow({ mode: 'open' });
    this.shadowRoot.appendChild(template.content.cloneNode(true));
    this.shadowRoot.querySelector('button').addEventListener('click', () => {
      this.dispatchEvent(new CustomEvent('myCustomEvent', { detail: 'hello there' }));
    });
  }
}

customElements.define('my-widget', Widget);
```

Our widget is a basic custom element containing a single button. With our widget, we can listen to the click event from the component template to trigger a custom event.

```typescript
this.dispatchEvent(new CustomEvent('myCustomEvent', { detail: 'hello there' }));
```

With custom events, we pass an event name and a configuration object. This configuration object allows us to pass a value using the `detail` property. Once we have the event set up, we can listen to our new custom event.

```html
<my-widget></my-widget>
```

```typescript
import './widget';

const widget = document.querySelector('my-widget');
widget.addEventListener('myCustomEvent', (event) => {
  alert(`myCustomEvent: ${event.detail}`);
});
```

Just like any other DOM event, we can create an event listener to get notified when our event is triggered. We can improve the reliability of our custom events by using TypeScript decorators to create a custom `@event` decorator.
## Example Alert Web Component

For our example, we will be making a simple alert component to show messages to the user. This component will have a single `property` to determine if the alert can be dismissed and a single `event` to notify the application when the user has clicked the dismiss button.

![A simple alert Web Component](https://coryrylan.com/assets/images/posts/2020-06-22-using-event-decorators-with-lit-element-and-web-components/example-alert-web-component.png)

Our Web Component is using [lit-element](https://lit-element.polymer-project.org/). Lit Element is a lightweight library for making it easy to build Web Components. Lit Element and its templating library lit-html provide an easy way to bind data and render HTML within our components. Here is our example component:

```typescript
import { LitElement, html, css, property } from 'lit-element';
import { event, EventEmitter } from './event';

class Alert extends LitElement {
  @property() dismiss = true;

  render() {
    return html`
      <slot></slot>
      ${this.dismiss ? html`<button aria-label="dismiss" @click=${() => this.dismissAlert()}>&times;</button>` : ''}
    `;
  }

  dismissAlert() {
    this.dispatchEvent(new CustomEvent('dismissChange', { detail: 'are you sure?' }));
  }
}

customElements.define('app-alert', Alert);
```

Our alert component can show or hide a dismiss button. When a user clicks the dismiss button, we emit a custom event `dismissChange`.

```typescript
this.dispatchEvent(new CustomEvent('dismissChange', { detail: 'are you sure?' }));
```

By using TypeScript, we can improve handling our custom events. Custom events are dynamic, so it’s possible to make a mistake emitting different types on the same event.

```typescript
this.dispatchEvent(new CustomEvent('dismissChange', { detail: 'are you sure?' }));
this.dispatchEvent(new CustomEvent('dismissChange', { detail: 100 }));
```

I can emit a string or any other value and make the event type value inconsistent.
This will make it hard to use the component in our application. By creating a custom decorator, we can catch some of these errors at build time.

## TypeScript Decorator and Custom Events

Let’s take a look at what our custom decorator looks like in our alert Web Component.

```typescript
import { LitElement, html, css, property } from 'lit-element';
import { event, EventEmitter } from './event';

class Alert extends LitElement {
  @property() dismiss = true;
  @event() dismissChange: EventEmitter<string>;

  render() {
    return html`
      <slot></slot>
      ${this.dismiss ? html`<button aria-label="dismiss" @click=${() => this.dismissAlert()}>&times;</button>` : ''}
    `;
  }

  dismissAlert() {
    this.dismissChange.emit('are you sure?');
  }
}

customElements.define('app-alert', Alert);
```

Using the TypeScript decorator syntax, we can create a property in our class, which will contain an event emitter to manage our component's events. The `@event` decorator is a custom decorator that will allow us to emit events with type safety easily. We leverage TypeScript [Generic Types](https://www.typescriptlang.org/docs/handbook/generics.html) to describe what type we expect to emit. In this use case, we will be emitting string values.

```typescript
@event() dismissChange: EventEmitter<string>;

...

dismissAlert() {
  this.dismissChange.emit('are you sure?');
  this.dismissChange.emit(100); // error: Argument of type '100' is not assignable to parameter of type 'string'
}
```

So let’s take a look at how we make a `@event` decorator. First, we are going to make a small `EventEmitter` class.

```typescript
export class EventEmitter<T> {
  constructor(private target: HTMLElement, private eventName: string) {}

  emit(value: T, options?: EventOptions) {
    this.target.dispatchEvent(
      new CustomEvent<T>(this.eventName, { detail: value, ...options })
    );
  }
}
```

Our `EventEmitter` class defines a generic type `<T>` so we can ensure we always provide a consistent value type when emitting our event.
Our decorator will create an instance of the `EventEmitter` and assign it to the decorated property. Because decorators are not yet standardized in JavaScript, we have to check whether the decorator is being used by TypeScript or Babel. If you are not writing a library but an application, this check may not be necessary.

```typescript
export function event() {
  return (protoOrDescriptor: any, name: string): any => {
    const descriptor = {
      get(this: HTMLElement) {
        return new EventEmitter(this, name !== undefined ? name : protoOrDescriptor.key);
      },
      enumerable: true,
      configurable: true,
    };

    if (name !== undefined) {
      // legacy TS decorator
      return Object.defineProperty(protoOrDescriptor, name, descriptor);
    } else {
      // TC39 Decorators proposal
      return {
        kind: 'method',
        placement: 'prototype',
        key: protoOrDescriptor.key,
        descriptor,
      };
    }
  };
}
```

Decorators are just JavaScript functions that can append behavior to an existing property or class. Here we create an instance of our `EventEmitter` service and can start using it in our alert.

```typescript
import { LitElement, html, css, property } from 'lit-element';
import { event, EventEmitter } from './event';

class Alert extends LitElement {
  @property() dismiss = true;
  @event() dismissChange: EventEmitter<string>;

  render() {
    return html`
      <slot></slot>
      ${this.dismiss ? html`<button aria-label="dismiss" @click=${() => this.dismissAlert()}>&times;</button>` : ''}
    `;
  }

  dismissAlert() {
    // type safe event decorator, try adding a non string value to see the type check
    this.dismissChange.emit('are you sure?');
  }
}

customElements.define('app-alert', Alert);
```

If you want an in-depth tutorial about TypeScript property decorators, check out [Introduction to TypeScript Property Decorators](https://coryrylan.com/blog/introduction-to-typescript-property-decorators).

The full working demo can be found here: https://stackblitz.com/edit/typescript-xuydzd
coryrylan
372,667
Trying out Vim
Some basic Vim commands to get up and running with Vim.
0
2020-06-28T15:58:24
https://dev.to/shuv1824/trying-out-vim-33gf
webdev, vim, tools
---
title: Trying out Vim
published: true
description: Some basic Vim commands to get up and running with Vim.
tags: #productivity #webdev #vim #tools
//cover_image: https://camo.githubusercontent.com/537458336e8bb85785642c057a60a21d5dba5164/68747470733a2f2f646c2e64726f70626f7875736572636f6e74656e742e636f6d2f732f35327571667463637068756a32726c2f76696d2d73686f7274637574732d6461726b5f31323830783830302e706e67
---

[Vim](https://www.vim.org/) (**V**i **IM**proved) is a highly configurable text editor built to make creating and changing any kind of text very efficient. It is included as "vi" with most UNIX systems and with Apple OS X. I attempted to learn Vim quite a few times before but could not cope with it. But recently I thought I should at least know very basic usage of Vim. So I started learning the absolute basics. Here I am putting down all the basic commands and usage of Vim for any absolute beginner like me. This can be helpful to get started with Vim, I think. But there are a lot of things to learn about this handy tool that will take much time and practice. One will get proficient in using Vim only by using it consistently.

Vim has two basic modes:

1. One is `INSERT` mode, in which you write text as in a normal text editor.
2. Another is `NORMAL` mode, which provides you efficient ways to navigate and manipulate text. This is also called `Command` mode because we can run various Vim commands in this mode.

To change between modes, use `esc` for normal mode and `i` for insert mode.

## VIM Commands

|Command | Action |
|---------------|-------------------------------|
|:e *filename* | Open *filename* for editing |
|:w | Save file |
|:q | Exit vim |
|:q! | Quit without saving |
|:x | Save and Exit |
|:sav *name* | Save current file as *name* |
|. | Repeat last change made in `NORMAL` mode |

---

## Cursor Movements

* `h` to move cursor left (&#8592;)
* `l` to move cursor right (&#8594;)
* `k` to move cursor up (&#8593;)
* `j` to move cursor down (&#8595;)
* `b` moves to the beginning of the word
* `e` moves to the end of the word
* `w` moves to the beginning of the next word

## Number Based Movements

Using a `number` before a command executes that command that many times, e.g. `3w` will move forward three words.

## Insert Text Repeatedly

To insert the same text multiple times use `<number>i<text><esc>`, e.g. `3igo<esc>` will write `gogogo`.

## Find a Character

To find and move to the next (or previous) occurrence of a character, use `f` and `F`, e.g. `fo` finds the next 'o'. You can combine `f` with a number, e.g. you can find the 3rd occurrence of 'q' with `3fq`.

## Go to Matching Parenthesis

In text that is structured with parentheses or brackets, `(` or `{` or `[`, use `%` to jump to the matching parenthesis or bracket.

## Start/End of Line

To reach the beginning of a line, use `0`. For the end of the line, use `$`.

## Find Word Under Cursor

Find the next occurrence of the word under the cursor with `*`, and the previous with `#`.

## Go to Line

`gg` takes you to the beginning of the file; `G` to the end. To jump directly to a specific line, give its line number along with `G`, e.g. `5G` will take you to the fifth line of the file.

## Search for Text

Searching text is a vital part of any text editor. In Vim, press `/`, and give the text to search for. The search can be repeated for next and previous occurrences with `n` and `N` respectively. For advanced use cases, it's possible to use regexps that help to find text of a particular form.

## Insert New Line

To insert text into a new line **after** the current line use `o`, and to insert a new line **before** the current line use `O`. After the new line is created, the editor is set to `insert` mode.
## Removing a Character

`x` and `X` delete the character under the cursor and to the left of the cursor, respectively. Adding a `number` before `x` or `X` performs the action that many times, e.g. `5x` removes the **next** five characters **including** the character under the cursor, and `5X` removes the **previous** five characters **excluding** the character under the cursor.

## Replace a Character

Use `r` to replace only the one character under the cursor.

## Deleting

`d` is the delete command. It can be combined with movements, e.g. `dw` deletes the characters to the right of the cursor up to the beginning of the next word. `de` deletes all the characters of the word to the right of the cursor, **including** the character under the cursor. `db` deletes the previous word if the cursor is on the first letter of a word, or else it deletes the characters to the left of the cursor up to the beginning of the word. It also **copies** the content, so that you can **paste** it with `p` at another location. `dd` deletes the whole line.

## Repeat Command

To repeat the previous command, just use `.` (period).

## Replace Mode

Use `R` to enter `REPLACE` mode. In this mode the characters under the cursor are replaced as you type.

## Visual Mode

Use `v` to enter `VISUAL` mode and `V` to enter `VISUAL LINE` mode. In these modes the text can be selected with the movement keys before deciding what to do with it. Selected text can be **deleted/cut** using `d` or **copied** using `y`. It can then be **pasted** after the cursor using `p` or before the cursor using `P`.
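Several of these commands compose with counts and motions. As a quick illustration, each line below assumes the starting text `foo bar baz` with the cursor on the 'b' of 'bar' (the worked results are my own examples, not from the original post):

```
dw    " -> foo baz      (delete from the cursor to the start of the next word)
x     " -> foo ar baz   (delete the character under the cursor)
2fa   " jump to the second 'a' to the right of the cursor, the 'a' in 'baz'
dd    " delete (and copy) the whole line
.     " repeat the last change made in NORMAL mode
```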
shuv1824
374,320
I build Stadia clone with Vuetify.
I made a fully responsive template for Stadia Homepage, in a short time
0
2020-06-29T17:06:00
https://dev.to/benyou1324/i-build-stadia-clone-with-vuetify-5ck0
vue, webdev, showdev, javascript
--- title: I build Stadia clone with Vuetify. published: true description: I made a fully responsive template for Stadia Homepage, in a short time tags: vuejs, webdev, showdev, javascript cover_image: https://dev-to-uploads.s3.amazonaws.com/i/6ioxjdho8t46uvu8chlf.png --- I made a fully responsive template for Stadia Homepage, in a short time 🔥🔥 with vuetify. check it out 😊 [Github Repo](https://github.com/benyou1969/vuetify-stadia-clone) The Site demo: [stadia.netlify.app](https://stadia.netlify.app) [Vuetifyjs.com](http://vuetifyjs.com/)
benyou1324
374,349
My first codewar solved
the Question : You are going to be given a word. Your job is to return the middle character of the w...
0
2020-06-29T17:30:52
https://dev.to/g33knoob/my-first-codewar-solved-171m
The question: You are going to be given a word. Your job is to return the middle character of the word. If the word's length is odd, return the middle character. If the word's length is even, return the middle 2 characters.

#Examples:

Kata.getMiddle("test") should return "es"
Kata.getMiddle("testing") should return "t"
Kata.getMiddle("middle") should return "dd"
Kata.getMiddle("A") should return "A"

#Input

A word (string) of length 0 < str < 1000 (In javascript you may get slightly more than 1000 in some test cases due to an error in the test cases). You do not need to test for this. This is only here to tell you that you do not need to worry about your solution timing out.

#Output

The middle character(s) of the word represented as a string.

My solution:

```javascript
function getMiddle(s) {
  var length = s.length;
  var middle = length / 2;
  let floor = Math.floor(middle);
  const middleNumber = Number(middle);
  const floorNumber = Number(floor);
  const mathNumber = middleNumber - floorNumber;
  if (mathNumber > 0) {
    // odd length: the division left a remainder, take the single middle character
    let finalMiddle = s[floorNumber];
    return finalMiddle;
  } else {
    // even length: take the two characters around the midpoint
    let finalMiddle1 = s[floorNumber - 1];
    let finalMiddle2 = s[floorNumber];
    return finalMiddle1 + finalMiddle2;
  }
}
```

Of course I know my solution is not the most concise, but I'm really happy to have solved it.
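For comparison, the same idea can be written more compactly by slicing around the midpoint (this is just an alternative sketch, not part of the original kata solution):

```javascript
// Same algorithm, condensed: find the midpoint and slice around it.
function getMiddleConcise(s) {
  const mid = Math.floor(s.length / 2);
  // Odd length: single middle character; even length: the two middle characters.
  return s.length % 2 ? s[mid] : s.slice(mid - 1, mid + 1);
}
```

The parity check replaces the `middle - floor` subtraction: `s.length % 2` is truthy exactly when the division leaves a remainder.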
g33knoob
374,574
Understanding the Mobility Adoption, Trends, and Its Future Outlook
Mobility and Mobile Technology is one of the prominent aspects that has largely influenced our daily...
0
2020-06-29T20:51:55
https://dev.to/iamarpitabiswas/understanding-the-mobility-adoption-trends-and-its-future-outlook-42bi
Mobility and mobile technology are among the prominent factors that have strongly influenced our daily lives and, most importantly, our professional lives. The sight of a modern workforce using advanced smart technologies is now quite common, and entrepreneurs can expect this new way of working to shape the future even more extensively. Mobility is a growing trend, with business sectors adapting to workforce mobility.

The infographic gives a closer look at the mobile ecosystem and insight into the adoption of mobility; its influence on 21st-century organizations, employees, and customers who are looking for a high level of connectivity and productivity; and what the future has in store as enterprise mobility transforms businesses and redefines how they work.

Infographic source: https://blog.scalefusion.com/the-magnified-mobile-ecosystem-infographic/
iamarpitabiswas
374,712
Manipulate Colors With Sass
Sass Before jumping into the code, I'd first like to provide a brief description of Sass for anyone...
0
2020-06-30T00:42:17
https://dev.to/aryaziai/manipulate-colors-in-sass-1mdo
sass, scss, color
<h2>Sass</h2>

Before jumping into the code, I'd first like to provide a brief description of Sass for anyone who isn't familiar with this specific stylesheet language. Sass (Syntactically Awesome Style Sheets) is a CSS preprocessor. It adds additional features such as variables, nested rules and mixins to CSS. Sass allows developers to write code with these features and then compile it into a single CSS file (commonly through <a href="https://marketplace.visualstudio.com/items?itemName=ritwickdey.live-sass">this</a> extension). Sass is currently the most popular preprocessor; other well-known CSS preprocessors include Less and Stylus.

<h4>Syntax Example:</h4>
<img src="https://dev-to-uploads.s3.amazonaws.com/i/ryzba8onw9sv5no8m103.png" />
<i>As you can see, it's almost identical to CSS!</i>

<h2>Manipulating Colors</h2>

<h3>1. Lighten()</h3>

``` scss
$btn-primary: red;

.btn {
  background: $btn-primary;
  &:hover {
    background: lighten($btn-primary, 10%)
  }
}
// Lightens button by 10% in hover state
```

Creates a lighter version of <i>color</i>, with an amount between 0% and 100%. The amount parameter increases the HSL lightness by that percentage.

<h3>2. Darken()</h3>

``` scss
$btn-primary: red;

.btn {
  background: $btn-primary;
  &:hover {
    background: darken($btn-primary, 10%)
  }
}
// Darkens button by 10% in hover state
```

Creates a darker version of <i>color</i>, with an amount between 0% and 100%. The amount parameter decreases the HSL lightness by that percentage.

<h3>3. Mix()</h3>

``` scss
body {
  background: mix(blue, green, 50%);
}
// Body background is a mix between blue and green
```

Creates a color that is a mix of <i>color1</i> and <i>color2</i>. The weight parameter must be between 0% and 100%. A larger weight means that more of color1 is used; a smaller weight means that more of color2 is used. The default is 50%.

<h3>4. Transparentize()</h3>

``` scss
.btn {
  padding:10px;
  background: green;
  color: white;
  &:hover {
    background: transparentize(green, 0.5);
  }
}
// Button background becomes 50% transparent in hover state
```

Creates a more transparent version of <i>color</i>, with an amount between 0 and 1. The amount parameter decreases the alpha channel by that amount.

<h3>5. Saturate()</h3>

``` scss
.btn {
  padding:10px;
  color: white;
  background-color: saturate(#333,40%);
}
// Button background is a reddish-black color
```

Creates a more saturated version of <i>color</i>, with an amount between 0% and 100%. The amount parameter increases the HSL saturation by that percentage.

<hr>

To see the full list of color manipulation functions, please visit the <a href="https://w3schools.unlockfuture.com/sass/sass_functions_color.html">Color Function Documentation</a> at <a href="https://www.w3schools.com">W3Schools</a>.
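These functions combine nicely inside mixins. As a small sketch (the <code>button-theme</code> mixin is my own illustrative example, not from the documentation above), one base color can drive a whole set of button states:

``` scss
// Hypothetical mixin: derive hover/active/disabled states from one base color.
@mixin button-theme($base) {
  background: $base;
  &:hover { background: darken($base, 10%); }
  &:active { background: darken($base, 20%); }
  &:disabled { background: transparentize($base, 0.5); }
}

.btn-danger { @include button-theme(red); }
```

Changing the single <code>$base</code> argument re-derives every state consistently.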
aryaziai
374,764
What are ECMAScript, ECMA-262, and JavaScript?
Throughout my career, I have heard many people talk about ECMA, ECMAScript, ES5, ES6, ESNext, ES an...
0
2020-06-29T23:35:50
https://dev.to/rembrandtreyes/what-are-ecmascript-ecma-262-and-javascript-139j
javascript, ecmascript, ecma, ecma262
![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/u1ec516c7ylqok6fzsvb.jpeg) Throughout my career, I have heard many people talk about ECMA, ECMAScript, ES5, ES6, ESNext, ES and beyond. What are all these things and how do they point to JavaScript? Let's trace back to 1960 when a "Committee was nominated to prepare the formation of the Association and to draw up By-laws and Rules."([ECMA history](https://www.ecma-international.org/memento/history.htm)) which would be called, **European Computer Manufacturers Association** or ECMA for short. Before you get into JavaScript you should understand the history of it all. So let's take a trip down history lane to learn more about ECMA, ECMAScript, ECMA-262, and TC39 and how they all play a role in the JavaScript we love today. ECMA - On 17th May 1961 the Association officially came into being and all those Companies which attended the original meeting became members. The constituent assembly was held on 17th June 1961 ([ECMA history](https://www.ecma-international.org/memento/history.htm)). ECMA-262 - Each specification comes with a standard and a committee. You can find all the ECMA standards [here](https://www.ecma-international.org/publications/standards/Standard.htm). In JavaScript's case, its standard is associated with [ECMA-262](https://www.ecma-international.org/publications/standards/Ecma-262.htm) and its committee is [TC39](https://tc39.es/ecma262/) ECMAScript or ES - is a general-purpose programming language, standardized by ECMA International according to the document ECMA-262. ECMAScript is a programming language itself, specified in the document ECMA-262. Although ECMAScript is inspired by JavaScript, they are different programming languages, and ECMAScript is not the specification of JavaScript ([wiki](https://en.wikipedia.org/wiki/ECMAScript#:~:text=ECMAScript%20(or%20ES)%20is%20a,help%20foster%20multiple%20independent%20implementations.)). Now let's talk about JavaScript. 
The standard for JavaScript is [ECMAScript](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Language_Resources). You might see some other popular references like ECMAScript 5 or ES5 and ECMAScript 6 or ES6. These are all editions of ECMAScript, and they relate to how we implement JavaScript in the browser. As of mid-2017, most common browsers fully support ES6.

Fascinating! Now you know a bit of web development history. So go out there and tell all your friends the differences between all these ECMA*'s and put that knowledge to good use.

But wait, how do all these standards get passed for JavaScript? Good question! You can find out more about [proposals](https://github.com/tc39/proposals) for ECMAScript and how they go through [stages](https://tc39.es/process-document/).
rembrandtreyes
374,858
How to setup XAMPP-VM in MacOS
If you’re someone just getting started with web development and you’re leveling up your learning to i...
0
2020-07-13T18:09:21
https://brandonb.ca/how-to-setup-xampp-vm-in-macos
---
title: How to setup XAMPP-VM in MacOS
published: true
date: 2020-06-29 07:00:00 UTC
tags:
canonical_url: https://brandonb.ca/how-to-setup-xampp-vm-in-macos
---

If you're someone just getting started with web development and you're leveling up your learning to include PHP and other server-side languages, and you have an Apple Mac product of some kind running macOS 10.15, it's very likely that you've run into a wall trying to get your first `index.php` file to run. You could just open the Terminal.app and type `php index.php`, which would run your file through the built-in PHP interpreter, but let's say you want to view it in your web browser; how do you do that? Well, this is where [XAMPP](https://www.apachefriends.org/index.html) comes in.

<!-- break -->

XAMPP is a suite of tools delivered to you as a single package that you download and install on your computer. Bundled inside it are Apache (a web server), MariaDB (an open source MySQL-compliant database server), PHP (a server-side programming language), and Perl (a programming language that many people love). The XAMPP project is a great tool to use for developing websites as it is cross-platform (Windows and Linux packages are available for it too) and allows you to build your project before you move on to other more advanced ways of serving your content (like using other macOS tools, or even configuring a Docker-based setup).

The newer version of XAMPP for macOS is called [XAMPP-VM](https://www.apachefriends.org/download.html#download-apple). Unfortunately the [documentation](https://www.apachefriends.org/faq_stackman.html) for XAMPP-VM is a little lacking for beginners, so in this post I intend to fill in the gaps of knowledge.

Note: It is expected that you understand the basics of how to configure an Apache server before moving on to the rest of this post.

* * *

## Installation

The first step to getting XAMPP-VM installed can be done one of two ways:

1. Install XAMPP-VM by downloading it [directly from the website](https://www.apachefriends.org/download.html#download-apple), opening the DMG file, and dragging the XAMPP.app file to the Applications folder.
2. Download and install it with homebrew (much easier).

```
brew install xampp-vm
```

Once installed, run `XAMPP.app` from the Applications folder. You should be presented with the application containing some buttons and an empty "status" area. The first thing you want to do is click the `Start` button here; this fires up the VM and gets the server running so we can complete the rest of the configuration.

Next you want to click on the `Volumes` tab at the top, then `Mount`, then `Explore` once the `Mount` button is greyed out. If you clicked `Explore`, Finder should open to the location where the XAMPP server config is available to browse. Click on the `etc` folder here, then go to the next section.

## Apache

There's a few tweaks you need to make to the Apache config before moving forward:

### Main config `httpd.conf`

Find and edit the `httpd.conf` file in the `etc` folder, and uncomment the line containing `#Include etc/extra/httpd-vhosts.conf` by removing the `#`. This will allow us to configure Apache using [VirtualHosts](https://httpd.apache.org/docs/2.4/vhosts/) without having to edit the main Apache config file anymore.

### VirtualHosts `httpd-vhosts.conf`

From the `etc` folder click on the `extra` folder and then open the `httpd-vhosts.conf` file in your editor. You should be presented with a file containing two dummy entries; feel free to delete the last entry and edit the first one to match the example below:

```
<VirtualHost *:80>
    DocumentRoot "/opt/lampp/htdocs/testfolder"
    ServerName testserver
</VirtualHost>
```

In Finder, create a new folder called `testfolder` inside the `htdocs` folder that sits beside the `etc` folder we've been doing our edits in.

### Hosts file `/etc/hosts`

Open the `/etc/hosts` file and add a new entry like so:

```
127.0.0.1 testserver
```

Make sure the value you used in the vhosts config file matches the server name in the hosts file (in this case we use `testserver`).

### Testing in a browser

Open your browser and navigate to `http://testserver:8080`. You should be presented with an index file listing page that should be blank. If yes, you've successfully configured Apache to serve your PHP files! Now all you need to do is start writing your PHP files inside the `/opt/lampp/htdocs/testfolder` folder and you should be able to view them in your browser.

## Developing a PHP application in XAMPP-VM

The core piece of this post is this: **all development work needs to happen inside of the mounted `lampp` folder**. This means you can't symlink your Dropbox folder or any other folder from your local drive into the XAMPP-VM drive, but instead have to copy-paste your work from elsewhere in, or work directly inside the VM drive. The XAMPP blog has a [great post](https://www.apachefriends.org/blog/xampp_vm_cakephp_20170711.html) covering this.

The reason for this: when you clicked the `Mount` button above, this triggered XAMPP.app to "mount a network drive" and symlink it in Finder so you could access it easily. When you "Unmount" the drive or shut down the VM, you can no longer access your server content because it resides within the mounted network drive.
brandonb927
375,008
Why Discord Bot Development is Flawed.
Before I complain about my experience with Discord bots, let me preamble with this: I enjoy...
0
2020-06-30T04:45:01
https://chand1012.dev/WhatsWrongWithDiscordBots/
discord, bot, chatbot, serverless
---
title: Why Discord Bot Development is Flawed.
published: true
date: 2020-06-30 00:00:00 UTC
tags: discord, bot, chatbot, serverless
canonical_url: https://chand1012.dev/WhatsWrongWithDiscordBots/
---

Before I complain about my experience with Discord bots, let me preamble with this: I enjoy developing the bots. I enjoy making bots that entertain people and that everyone uses for fun and memes. I like my Discord bots, I don't regret developing them, and I will continue development of them. I do not think Discord's current system for bot development should be replaced; it is too prevalent and there are too many bots currently using it.

## Issues, Summarized.

My biggest complaint, and the one that I have seen handled in what I would call a better way elsewhere, is how a Discord bot connects to their servers and gets data. Currently, Discord uses a persistent connection to their servers, giving the bot a stream of messages which it is the programmer's responsibility to correctly sort through and turn into behaviors. This is all fine when your bot is only on a few channels, but the second you start getting big, you have the problem of an endless stream of data coming into your bot. Every message must be scanned for your command, and there is no way of guaranteeing that two bots won't have the same commands. This can cause issues, such as when one of your users is attempting to use another bot, but because your bot and the other bot on the server have the same command, confusion and frustration occur. There is no system implemented to determine which command the user meant, and it is the server owner's responsibility to verify that the installed bots don't conflict with each other.

Finally, there is no "official" Discord bot repository or "store". This means that the task of installing bots falls to websites like [top.gg](https://top.gg/) or [discord.bots.gg](https://discord.bots.gg/), which, while good websites, are not official.
## Connection Issues

I get why Discord decided to use a persistent connection over another system, I do. It's because the bot is essentially emulating a Discord client: it gets all messages sent in the channels that the "user", in this case the bot, is in, and rather than putting them aside for the user to read later, streams them to the bot. The main issue that I do not like is the requirement for the bot to be running 100% of the time on _some_ sort of computer. Now I _do_ like that this computer can be anything with an internet connection, and that no ports have to be forwarded. Memory and CPU use heavily depend on the library that you use, along with the language. The variance between libs and languages can be **orders of magnitude**. I originally wrote my [DiscordQuickMeme](https://github.com/chand1012/Discord-Quick-Meme) bot in Python using [discord.py](https://discordpy.readthedocs.io/en/latest/), and at peak hours with around 40k users RAM use was well over 1GB. The same Discord bot, with RAM caching added, i.e. post titles, links, and scores from Reddit all cached in RAM, peaked at 100MB of use when coded in Golang with Discordgo. Literally all of my programmer friends attributed this to Python's inefficient RAM use and to Golang's great garbage collection system.

## Command Overlap

Another issue that I only actually came across once was command overlap. It seems that most server admins are good about making sure that command overlap is minimal, but it has happened before. The command in question was called `!search` by both my bot and the other one, so I just changed my command to `!revsearch` as it was a reverse image search command. I think I have a solution to both this and the previous issue, but I will talk about that later.

## Bot Sources

Discord does not have an official store for bots or extensions.
While this isn't an immediate problem, I think that they would benefit greatly from one, as there are users who wouldn't know what to trust or not to trust. Simple as that.

## The Solution

My solution already exists in another popular instant messaging service, one that is more oriented at businesses: Slack. Slack's method for creating a bot is very different in its structure. Rather than having the programmer scan each message for the commands, Slack sends a REST request to a web app with a matching API key. This means two things: 1) the app doesn't have a persistent connection to the server at any given time and 2) you can develop the app to be serverless.

Serverless? Why? Because Functions-as-a-Service exist! For the most part, the average Slack bot can be hosted for free using AWS Lambda or Cloudflare Workers, using webhooks to trigger commands. If this was used for Discord, bots could be designed using a simple web framework such as [Flask](https://flask.palletsprojects.com/en/1.1.x/) or [Sinatra](http://sinatrarb.com/), both of which are supported by AWS Lambda. Cloudflare Workers only supports JavaScript and WASM, but that may be all you need depending on the complexity of the Discord bot.

Now, as stated before, I don't think Discord should just throw away their current method for bot development; that would upset their bot devs and their users too much. Rather, I think that they should offer this development cycle as an option that can be used in place of the current one. What about conflicting commands? Well, this would be handled by the Discord client on the user's side, allowing the client to differentiate which bot they meant through some sort of context prompt or similar. That way the users will always reach the bot they meant.
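To make the Slack-style model above concrete, here is a minimal sketch of a webhook-driven command handler as a plain function (the payload shape, the `token` field, and the `/meme` command are illustrative assumptions, not Discord's or Slack's actual API):

```javascript
// Hypothetical shared secret the chat service would send with each request.
const VERIFICATION_TOKEN = "example-token";

// Handle one incoming webhook request body and return an HTTP-style response.
// In a Cloudflare Worker or AWS Lambda this would be called from the
// platform's request handler instead of keeping a persistent connection open.
function handleWebhook(rawBody) {
  const payload = JSON.parse(rawBody);
  if (payload.token !== VERIFICATION_TOKEN) {
    return { status: 403, body: "invalid token" };
  }
  switch (payload.command) {
    case "/meme":
      return { status: 200, body: "here is your meme" };
    default:
      return { status: 200, body: `unknown command: ${payload.command}` };
  }
}
```

The key property is that the bot only ever receives requests for its own registered commands, so the endless message-scanning loop disappears entirely.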
Another possible solution would be to disallow bots with the same command altogether, by making each bot on a server use a unique prefix (kind of like how my Discord bot has a `quickmeme` command for system commands), one that could be aliased if commands conflict on a specific server.

## Conclusion

This was my rant about Discord, the platform that I use literally every day to communicate with my friends, classmates, and coworkers. I love the platform, and I love developing new ways for users to use Discord to interact with their friends and the internet. However, I think that their bot development cycle could use some work, and I just felt like complaining today. That is all; hope you enjoyed my complaining. Feel free to discuss anything about basically anything I just said!
chand1012
375,152
Gmail configuration with mail server
I have an admin console for a website in Google. I wanted to configure new users to have access to Gm...
0
2020-06-30T09:12:28
https://dev.to/ssupdoc/gmail-configuration-with-mail-server-3gmj
help
I have an admin console for a website in Google. I wanted to configure new users to have access to Gmail under my domain. I added the user in the admin console and found that I could send emails from the new account but could not receive mail; the sender would get a "550 virtual mailbox not found" error. Thinking it had something to do with my mail server, I logged into OpenHost, which hosts the email service, and added the same email address there. Now if I send an email, I don't get any error, nor do I receive any email.

On further lookup, I found that we would need to configure the server to forward mail to the Gmail client. I was advised to add Gmail's MX records to the mail server, so I added them from the OpenHost console. Now I find that there are existing MX records pointing to a site called webhost, which doesn't ring a bell to me. I don't have any association with the person who set up the server for me two years ago. I am advised that it may take up to 48 hours for the MX records to propagate and for the mail to be set up properly.

I am not sure if I have done the steps correctly. Also, I am still worried that I would need to add each user in both G Suite and OpenHost every time I add another user. Is there any way to just use G Suite to add users and receive mail with relatively less effort?
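For reference, the MX record set that Google's documentation recommended for G Suite domains at the time looked like the following (verify against Google's current documentation before using it, since Google has since moved new setups to a single `smtp.google.com` record):

```
@  IN  MX  1   ASPMX.L.GOOGLE.COM.
@  IN  MX  5   ALT1.ASPMX.L.GOOGLE.COM.
@  IN  MX  5   ALT2.ASPMX.L.GOOGLE.COM.
@  IN  MX  10  ALT3.ASPMX.L.GOOGLE.COM.
@  IN  MX  10  ALT4.ASPMX.L.GOOGLE.COM.
```

Any other MX records on the domain (such as the ones pointing at "webhost") would compete with these, so they generally need to be removed for mail to route to Gmail.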
ssupdoc
375,539
Create a 3D Image Slideshow using HTML, CSS & JS
In this article, you will learn how to create an amazing 3D image slideshow on the web using just HTM...
0
2020-06-30T12:53:24
https://dev.to/kgprajwal/create-a-3d-image-slideshow-using-html-css-js-13fm
webdev, css, html, javascript
In this article, you will learn how to create an amazing 3D image slideshow on the web using just HTML, CSS and JavaScript, which you can use in your personal websites to render images beautifully.

# **HTML**

In our HTML file, we will primarily have a *container* div wrapping a *viewport* div, which in turn encloses three divs that will mark the three rotating sections of our image. A *cube* class is assigned to each of these divs, which will behave as independent cubes rotated along a common axis (like a Rubik's cube, but cut only along one side). The *face* class will depict the regions of the cube where the image is held.

```html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <meta http-equiv="X-UA-Compatible" content="ie=edge">
    <link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.3.1/css/all.css" integrity="sha384-mzrmE5qonljUremFsqc01SB46JvROS7bZs3IO2EmfFsd15uHvIt+Y8vEf7N7fWAU" crossorigin="anonymous">
    <link rel="stylesheet" href="style.css">
    <title>Cube</title>
</head>
<body>
    <div class="container">
        <div class="viewport">
            <div class="control left-arrow"><i class="fas fa-arrow-left"></i></div>
            <div class="cube cube-1">
                <div class="face front"></div>
                <div class="face back"></div>
                <div class="face top"></div>
                <div class="face bottom"></div>
                <div class="face left"></div>
                <div class="face right"></div>
            </div>
            <div class="cube cube-2">
                <div class="face front"></div>
                <div class="face back"></div>
                <div class="face top"></div>
                <div class="face bottom"></div>
                <div class="face left"></div>
                <div class="face right"></div>
            </div>
            <div class="cube cube-3">
                <div class="face front"></div>
                <div class="face back"></div>
                <div class="face top"></div>
                <div class="face bottom"></div>
                <div class="face left"></div>
                <div class="face right"></div>
            </div>
            <div class="control right-arrow"><i class="fas fa-arrow-right"></i></div>
            <div class="control play-pause"><i class="fas fa-play"></i></div>
        </div>
    </div>
    <script
src="app.js"></script> </body> </html> ``` # **CSS** We will provide our basic styles to the body, background and the viewport. Here is where the high-tech CSS wizard skills come into picture along with some amount of maths. The CSS file is provided below and I think it is pretty much self-explanatory. Create a folder called *img* and save some images in it with the name used in the CSS file. ```css * { margin: 0; padding: 0; } .container { width: 100%; height: 100vh; background: linear-gradient(rgba(0, 0, 0, .6), rgba(0, 0, 0, .8)), url(images/bg.jpg) no-repeat; background-size: cover; } .viewport { height: 21vw; width: 42vw; top: 50%; left: 50%; position: absolute; transform: translate(-50%, -50%); perspective: 1300px; } .viewport::after { width: 120%; height: 20%; top: 140%; left: -10%; content: ''; position: absolute; background-color: #000; z-index: -1; filter: blur(50px); } .cube { width: 100%; height: 33.3333333%; transform-style: preserve-3d; position: relative; } .cube-1 { transition: transform .4s; } .cube-2 { z-index: 10; transition: transform .4s .2s; } .cube-3 { transition: transform .4s .4s; } .face { width: 100%; height: 100%; position: absolute; } .front { transform: translateZ(21vw); } .cube-1 .front { background: url(images/slide-img-1.jpg) no-repeat 50% 0; background-size: cover; } .cube-2 .front { background: url(images/slide-img-1.jpg) no-repeat 50% -7vw; background-size: cover; } .cube-3 .front { background: url(images/slide-img-1.jpg) no-repeat 50% -14vw; background-size: cover; } .back { transform: translateZ(-21vw) rotateY(180deg); } .cube-1 .back { background: url(images/slide-img-2.jpg) no-repeat 50% 0; background-size: cover; } .cube-2 .back { background: url(images/slide-img-2.jpg) no-repeat 50% -7vw; background-size: cover; } .cube-3 .back { background: url(images/slide-img-2.jpg) no-repeat 50% -14vw; background-size: cover; } .left { transform: translateX(-21vw) rotateY(-90deg); } .cube-1 .left { background: 
url(images/slide-img-3.jpg) no-repeat 50% 0; background-size: cover; } .cube-2 .left { background: url(images/slide-img-3.jpg) no-repeat 50% -7vw; background-size: cover; } .cube-3 .left { background: url(images/slide-img-3.jpg) no-repeat 50% -14vw; background-size: cover; } .right { transform: translateX(21vw) rotateY(90deg); } .cube-1 .right { background: url(images/slide-img-4.jpg) no-repeat 50% 0; background-size: cover; } .cube-2 .right { background: url(images/slide-img-4.jpg) no-repeat 50% -7vw; background-size: cover; } .cube-3 .right { background: url(images/slide-img-4.jpg) no-repeat 50% -14vw; background-size: cover; } .top { height: 42vw; background-color: #111; transform: translateY(-21vw) rotateX(90deg); } .bottom { height: 42vw; background-color: #111; transform: translateY(-14vw) rotateX(-90deg); } .control { width: 40px; height: 40px; align-items: center; color: #fff; position: absolute; border-radius: 100%; transform: translate(-50%, -50%); border: 1px solid #fff; background-color: rgba(59, 52, 50, .7); display: flex; justify-content: center; cursor: pointer; z-index: 100; transition: background-color .3s; } .control:hover { background-color: rgba(42, 38, 36, .8); } .control i { pointer-events: none; } .left-arrow { top: 50%; left: -35%; } .right-arrow { top: 50%; left: 135%; } .play-pause { top: 140%; left: 120%; } ``` # **JavaScript** The javascript part of this project will handle the initiation of the rotation and the little play and pause button in the bottom of the page that is responsible for automating the slide show after a set amount of time interval. 
```javascript
const rotate = () => {
  const cubes = document.querySelectorAll(".cube");
  Array.from(cubes).forEach(
    (cube) => (cube.style.transform = `rotateY(${x}deg)`)
  );
};

const changePlayPause = () => {
  const i = document.querySelector(".play-pause i");
  const cls = i.classList[1];
  if (cls === "fa-play") {
    i.classList.remove("fa-play");
    i.classList.add("fa-pause");
  } else {
    i.classList.remove("fa-pause");
    i.classList.add("fa-play");
  }
};

let x = 0;
let bool = false;
let interval;

const playPause = () => {
  if (!bool) {
    interval = setInterval(() => {
      x -= 90;
      rotate();
    }, 3000);
    changePlayPause();
    bool = true;
  } else {
    clearInterval(interval);
    changePlayPause();
    bool = false;
  }
};

document.querySelector(".left-arrow").addEventListener("click", () => {
  x += 90;
  rotate();
  if (bool) {
    playPause();
  }
});
document.querySelector(".left-arrow").addEventListener("mouseover", () => {
  x += 25;
  rotate();
});
document.querySelector(".left-arrow").addEventListener("mouseout", () => {
  x -= 25;
  rotate();
});
document.querySelector(".right-arrow").addEventListener("click", () => {
  x -= 90;
  rotate();
  if (bool) {
    playPause();
  }
});
document.querySelector(".right-arrow").addEventListener("mouseover", () => {
  x -= 25;
  rotate();
});
document.querySelector(".right-arrow").addEventListener("mouseout", () => {
  x += 25;
  rotate();
});
document.querySelector(".play-pause").addEventListener("click", () => {
  playPause();
});
```

# **Result**

By now, I know you are all wondering, "What could this enormous set of code possibly do?" Please have a look at the gif below to get a clear-cut idea of how it all works together.

<img src="https://dev-to-uploads.s3.amazonaws.com/i/6q4s6oyrimwrsn6y9j9t.gif">

Thank you for reading this article. The entire project is available on [GitHub](https://github.com/K-G-PRAJWAL/HTML-CSS-Projects/tree/master/Cube%20SlideShow). For any queries feel free to contact me on my [LinkedIn](https://www.linkedin.com/in/k-g-prajwal-a6b3b517a/).
***Thank You!***
kgprajwal
375,691
Difference in Asynchronous & Synchronous JavaScript Code
Hello beautiful people on the internet 🙋‍♂️ This blog points out difference betwe...
0
2020-06-30T14:00:52
https://dev.to/mannypanwar/difference-in-asynchronous-synchronous-javascript-code-4gcb
javascript
### Hello beautiful people on the internet 🙋‍♂️

## This blog points out the difference between Asynchronous & Synchronous JavaScript code

### Every developer eventually has to know about these two in order to write good code

### Let's get to it then 🚀

- **`Synchronous Programming`** ▶ `Synchronous basically means that you can only execute one thing at a time`
  - In JavaScript, the code runs from top to bottom, executing a single line of code at a time
- **`Asynchronous Programming`** ▶ `Asynchronous means that you can execute multiple things at a time and you don't have to finish executing the current thing in order to move on to the next one`

### Why does it even matter 🤔

#### Now that you know about this, why does it even matter?

It is important because code that might take more time **(like API calls)** should be written asynchronously; otherwise the rest of the code has to wait until the data is fetched.

#### In simple words 💁‍♂️

- If we make API calls or fetch data **synchronously**, the code written after the call has to wait until the call completes
- Assuming that fetching the data takes `200ms`, JavaScript will wait for `200ms` and only then execute the rest of your code.
- If the data fetching is written **asynchronously**, the `200ms` wait is gone; the rest of the code runs without waiting for the data, making the program finish faster.

### Now how to write code asynchronously 🤔

There are various ways; the most preferred are

- `promises` - you fetch data inside a promise. Read more [🔗](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise)
- `async await` - syntax that lets you write promise-based, asynchronous code in a synchronous-looking style. Read more [🔗](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/async_function)

### Thank you for reading 💙👨‍💻
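To make the difference concrete, here is a minimal sketch — the `fetchData` function and its 200ms delay are invented for illustration — showing that the code after an asynchronous call runs immediately instead of blocking:

```javascript
// fetchData simulates a slow network call by returning a Promise
// that resolves ~200ms later, so nothing blocks while we "wait".
function fetchData() {
  return new Promise((resolve) => {
    setTimeout(() => resolve("data from server"), 200);
  });
}

const order = [];
order.push("before call");
fetchData().then((data) => order.push(data)); // runs ~200ms later
order.push("after call"); // runs immediately, without waiting

console.log(order); // at this point: ["before call", "after call"]
```

If `fetchData` were synchronous, `"after call"` would only run once the 200ms wait was over.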
mannypanwar
375,736
Angular code editor locally through web app
Hai All, i needed some suggestions, will be able to create stackblitz like editor using Angular, whil...
0
2020-06-30T14:44:16
https://dev.to/vesivagu/angular-code-editor-locally-through-app-14ig
Hi all, I need some suggestions: would I be able to create a StackBlitz-like editor using Angular? While running on localhost, the app should show the source code (html, css, ts) and the corresponding result dynamically whenever a change happens. Thanks in advance!
vesivagu
377,739
What is AWS Lambda | Serverless Application using Python(part-1)| Hands-on
In this tutorial, we will learn what is serverless and AWS Lambda with the "Hello World" function....
0
2020-07-01T17:56:37
https://dev.to/techparida/what-is-aws-lambda-serverless-application-using-python-part-1-hands-on-4l9a
serverless, aws, tutorial, codenewbie
{% youtube q8WUijxrnpM %}

In this tutorial, we will learn what serverless and AWS Lambda are, starting with a "Hello World" function. This is a series of tutorials on serverless, in which we will learn how to develop a serverless application using AWS Lambda, API Gateway, and DynamoDB.

Prerequisites

1. You should have an AWS account.
2. Basics of Python

Follow on:

Twitter: https://twitter.com/TechParida
Linkedin: https://www.linkedin.com/company/codingx/
Facebook: https://www.facebook.com/codingx
Instagram: https://www.instagram.com/coding.x/

Leave any questions in the comments section and don't forget to subscribe to be notified of new content! :)
techparida
375,754
Binding translations to views
This post is last in the series of react i18n integration. In the previous post we went over how our...
7,473
2020-06-30T15:23:57
https://dev.to/devabhijeet/binding-translations-to-views-397i
react, i18n, localisation, translation
This post is the last in the series on react i18n integration. In the previous post we went over how our `TranslationServiceProvider` exposes API methods for our components to consume.

Now, an ideal place for `TranslationServiceProvider` is inside the root component. While our application is being initialised, react-i18next exposes a `ready` boolean flag which can be accessed using the `useTranslation` hook.

```jsx
import React from "react";
import "./styles.css";
import "antd/dist/antd.css";
import { Spin } from "antd";
import { useTranslation } from "react-i18next";
import { Router } from "react-router-dom";
import { TranslationServiceProvider } from "./providers/TranslationServiceProvider";
import { AppRoutes } from "./routes/index";
import { createHashHistory } from "history";
import "./i18n";

const BrowserHistory = createHashHistory();

export default function App() {
  const { ready } = useTranslation();
  return (
    <>
      {!ready ? (
        <Spin className="suspense-spinner" size="large" />
      ) : (
        <TranslationServiceProvider>
          <Router history={BrowserHistory}>
            <AppRoutes />
          </Router>
        </TranslationServiceProvider>
      )}
    </>
  );
}
```

Once we have wrapped our main component with `TranslationServiceProvider`, we are able to access all the API methods it provides.

> If you remember, in the previous post I'd mentioned different ways to consume provider methods for `class`-based and `functional` components. We will have a look at those in this post.

When we say we want to translate the textual content embedded inside our components, we essentially mean binding the text inside the component to the corresponding entry in the `.json` file.
`react-i18next` lets us do that with a set of helpers, namely:

- `t` (`t` is short for translate)
- `<Trans></Trans>` (to translate content containing HTML tags)
- `<Translation></Translation>` (comes in handy when translating standalone text that is not inside any component)

Let's have a look at how we'd inject these helper methods into our `class` and `functional` components.

```jsx
import { withTranslation } from 'react-i18next';
import { TranslationServiceHelper } from 'HOC/TranslationServiceHelper';

class ToBeTranslated extends React.PureComponent {
  render() {
    const { t, currentLanguage } = this.props;
    return (
      <p>{t('helloworld')}</p>
    );
  }
}

ToBeTranslated = withTranslation()(ToBeTranslated);

export default compose(TranslationServiceHelper)(ToBeTranslated);
```

In the above example, notice the empty parentheses `()` adjacent to `withTranslation`. The parentheses are where you specify which namespace the component should look into when resolving translations. If no namespace is passed to `withTranslation`, the default namespace is used for the lookup. Alternatively, we can also specify the namespace inside the `t` helper function. The `helloworld` inside the `t` helper function is the `key` inside the JSON file that binds the translated content.

```jsx
render() {
  const { t, currentLanguage } = this.props;
  return (
    <p>{t('home:helloworld')}</p>
  );
}
```

That's how you'd do it for `class` components; what if you have a functional component? In that case we can make use of the `useTranslation` hook.

```jsx
import { useTranslation } from 'react-i18next';

const ToBeTranslated = ({}) => {
  const { t } = useTranslation('home');
  return (
    <p>{t('helloworld')}</p>
  );
};

export default ToBeTranslated;
```

The `t` helper function suffices only for plain textual content. What if you have to translate the content below?

```jsx
<div>
  It's <span style={{color:'red'}}>free</span> Forever!
</div>
```

Translating the above content is one thing, but how do we define keys for such content inside the JSON file?

```json
"key": "Sus <1>GRATIS</1> Siempre!"
```

See how `<1></1>` is substituted in place of `<span></span>`. For every `html` tag we use incrementing numbers inside the `json` file: `<1></1>`, `<2></2>`. For the translation itself we can use the `Trans` component provided by `react-i18next`.

```jsx
<Trans i18nKey="key">
  {"It's"} <span style={{color:'red'}}>FREE</span> Forever!
</Trans>
```

### Bonus

If you plan to scale `i18n` in your application by including multiple `locales`, you may wish to load translations based on the `route`. Instead of downloading all the `translations` in one go, it makes sense to download them per route.

Let's start by adding a `useEffect` to watch the `location`:

```jsx
useEffect(() => {
  const pathNameTerms = locationPath.includes("/terms");
  const pathNameHome = locationPath.includes("/home");
  if (pathNameTerms) {
    props.loadNameSpaces("terms");
  } else if (pathNameHome) {
    props.loadNameSpaces("home");
  }
}, [props.location]);
```

The above piece of code watches for route changes and loads the appropriate namespace. It also changes the default namespace to `terms` or `home` respectively.

That's it for this series. You can see the above lessons in action below. You can also find the GitHub repo containing all the code mentioned in this series [here](https://github.com/devAbhijeet/react-i18n-app)

{% codesandbox blue-brook-xgg5v %}
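As an aside, the namespaced-key lookup described above (`home:helloworld`) can be illustrated with a tiny, dependency-free sketch. This is not react-i18next's real implementation — the library also handles interpolation, plurals and fallbacks — it only shows the lookup idea:

```javascript
// Toy per-namespace resources, mirroring the .json files discussed above.
const resources = {
  home: { helloworld: "Hello world" },
  terms: { accept: "I accept the terms" },
};

// Resolve "ns:key" against resources; bare keys use the default namespace.
// Unknown keys fall back to the key itself, which i18next also does by default.
function t(key, defaultNamespace = "home") {
  const [ns, k] = key.includes(":") ? key.split(":") : [defaultNamespace, key];
  return (resources[ns] && resources[ns][k]) || key;
}

console.log(t("helloworld"));   // "Hello world" (default namespace)
console.log(t("terms:accept")); // "I accept the terms"
console.log(t("missing"));      // "missing" (fallback to the key)
```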
devabhijeet
375,807
Rebasing in Git to maintain history's health
Have you ever felt that the Git history of your project is becoming a big mess and when you need to...
0
2020-06-30T16:09:44
https://dev.to/d4vsanchez/rebasing-in-git-to-maintain-history-s-health-310c
git, tutorial, tools, productivity
Have you ever felt that the Git history of your project is becoming a big mess, and when you need to find out why a change was made you encounter a lot of small commits that just add tiny bits of code you forgot to include in previous commits? Well, this also happened to me.

I'm a passionate learner, so I'm always trying to find new ways of doing things and questioning the current knowledge I have about a tool. When I noticed that the Git history of my project had a lot of meaningless commits, I started asking myself if there was a better way to add those changes to previously made commits so they maintain cohesion and, in the future, another developer doesn't need to solve a puzzle by searching through multiple commits to find the reason for a change.

This is when I found out about **[Rebasing](https://git-scm.com/docs/git-rebase)** in Git. There are two ways we can use Rebase: as a "replacement" for `git merge`, and to change, drop, and squash previously made commits. Gotcha — one of the use cases of the Rebase command fits the use case I'm searching for perfectly. I'm not going to talk about `git rebase` as a "replacement" for `git merge` in this article, but if you are curious how that works, I recommend you [read this article from the people at Atlassian](https://www.atlassian.com/git/tutorials/merging-vs-rebasing) where they explain it.

To explain how to do Interactive Rebase in Git, I've created this simple example repository that has some meaningless commits that I want to get rid of.

![Tig showing the commits in the repository](https://i.imgur.com/gG9wSD4.png)

By using `tig` we can observe multiple things:

1. I'm doing these changes in a separate branch called *feature-branch*. It's very important that you do not use `git rebase` on public/shared branches, as it will overwrite history. This means that if you force-push the changes, anyone else who's also working on your branch will go out of sync.
2. There are two meaningless commits in my `feature-branch` that only fix some accents I forgot to add in the *Adding my personal information* commit. We will be fusing both of them with the original commit.
3. I made them all quickly in my example, but you can extrapolate what we're going to do here to bigger projects.

## Let's do it

First, we'll take the SHA-1 of the oldest commit that we want to retain as-is. In this case, the last commit we want to keep as-is is the *Adding my initial description file* commit. This is because we are going to fuse the two *"Forgot ..."* commits with the *Adding my personal information* commit, hence not retaining it as-is.

![Tig showing that the hash of the oldest commit we want to retain is: 6c8e26c67111b0cab4ce388f4899b3b879152f55](https://i.imgur.com/2kmsF5q.png)

Now that we have the hash of the commit, we're going to execute the following command, which runs Rebase in interactive mode so we can easily pick which actions we want to perform on the commits.

```bash
git rebase -i 6c8e26c^
```

You may notice two things in the command:

1. We're using only a few characters of the hash: Git allows us to refer to a commit using just a few leading characters. You can [check that out in Git's documentation](https://git-scm.com/book/en/v2/Git-Tools-Revision-Selection).
2. The caret at the end: in order to have our current commit available in the rebase, we need to start from the parent of the *6c8e26c* commit. This will let us do the fusion operation between the 3 commits.
Executing this command will open the editor that we have configured in Git with the following text: ``` pick d7ce763 Adding my personal information pick b049848 Forgot to add accent in my last name pick b215b9e Forgot to add accent in my city pick b544998 Adding software development skills # Rebase 6c8e26c..b544998 onto b544998 (4 commands) # # Commands: # p, pick <commit> = use commit # r, reword <commit> = use commit, but edit the commit message # e, edit <commit> = use commit, but stop for amending # s, squash <commit> = use commit, but meld into previous commit # f, fixup <commit> = like "squash", but discard this commit's log message # x, exec <command> = run command (the rest of the line) using shell # b, break = stop here (continue rebase later with 'git rebase --continue') # d, drop <commit> = remove commit # l, label <label> = label current HEAD with a name # t, reset <label> = reset HEAD to a label # m, merge [-C <commit> | -c <commit>] <label> [# <oneline>] # . create a merge commit using the original merge commit's # . message (or the oneline, if no original merge commit was # . specified). Use -c <commit> to reword the commit message. # # These lines can be re-ordered; they are executed from top to bottom. # # If you remove a line here THAT COMMIT WILL BE LOST. # # However, if you remove everything, the rebase will be aborted. 
#
# Note that empty commits are commented out
```

Reading the commands, the best fit of actions for what we want to do is the following:

- Pick the *Adding my personal information* commit (leave it as-is)
- Fixup the *Forgot to add accent in my last name* commit
- Fixup the *Forgot to add accent in my city* commit
- Pick the *Adding software development skills* commit (leave it as-is)

By using the Fixup action instead of the Squash action, we're telling Git to fold the changes of both commits into the commit above (*Adding my personal information*) and not ask us to rewrite the commit message or append their commit messages to the one we had originally. This is how our file will look after making the changes:

```
pick d7ce763 Adding my personal information
fixup b049848 Forgot to add accent in my last name
fixup b215b9e Forgot to add accent in my city
pick b544998 Adding software development skills

# Rebase 6c8e26c..b544998 onto b544998 (4 commands)
#
# Commands:
# p, pick <commit> = use commit
# r, reword <commit> = use commit, but edit the commit message
# e, edit <commit> = use commit, but stop for amending
# s, squash <commit> = use commit, but meld into previous commit
# f, fixup <commit> = like "squash", but discard this commit's log message
# x, exec <command> = run command (the rest of the line) using shell
# b, break = stop here (continue rebase later with 'git rebase --continue')
# d, drop <commit> = remove commit
# l, label <label> = label current HEAD with a name
# t, reset <label> = reset HEAD to a label
# m, merge [-C <commit> | -c <commit>] <label> [# <oneline>]
# .       create a merge commit using the original merge commit's
# .       message (or the oneline, if no original merge commit was
# .       specified). Use -c <commit> to reword the commit message.
#
# These lines can be re-ordered; they are executed from top to bottom.
#
# If you remove a line here THAT COMMIT WILL BE LOST.
#
# However, if you remove everything, the rebase will be aborted.
#
# Note that empty commits are commented out
```

Now we only need to save and quit the editor, and Git will perform these operations instantly. This is the final result of running our rebase command with the fixup.

![Tig showing that the meaningless commits are gone!](https://i.imgur.com/ImbWscd.png)

There are no more meaningless commits; both have been fused with the commit that added the personal information. This way no one has to solve the puzzle of multiple commits fixing stuff that we mistyped or forgot to do in previous commits that are not yet merged into the stable branch or shared with a coworker.

You can also use [Interactive Rebase to edit something](https://hackernoon.com/beginners-guide-to-interactive-rebasing-346a3f9c3a6d) in a previous commit that you just happened to notice right now: use the edit action and Git will take you to that moment in history so you can modify it. The only difference when you finish making the change is that you don't run a plain `git commit` but `git commit --amend`.

## Conclusion

Maintaining meaningful commits in your Git history is important: it helps other developers flow through changes in a more natural way and waste less time reading meaningless changes that were added just to comply with Git's rule to always write a commit message.

Rebase helps us achieve this goal by giving us tools to traverse the history of our repository and change our commits, so we don't need to create a new commit to fix something insignificant; we can edit the commit in which the problem was introduced.

We do need to take care when rebasing, though. Only use it on branches that are not shared with anyone else and whose commits are not merged into a public branch. After our branch is merged into the main branch of the project, it is not a good idea to rebase it; at that point we'll have to create a new commit.
---

I hope to read your comments about this method, and if you like the post I'd really appreciate it if you share it and add a reaction!
d4vsanchez
375,996
Develop Charts in Angular with NGX-CHARTS
Charts helps us to visualize large amount of data in an easy to understand and interactive way. Th...
0
2020-06-30T18:54:02
https://www.ngdevelop.tech/how-to-use-ngx-charts-in-angular/
angular, ngxcharts, charts
![Charts in Angular with NGX-CHARTS](https://dev-to-uploads.s3.amazonaws.com/i/0vqj4jsszaqboblyaeks.png)

Charts help us visualize large amounts of data in an easy-to-understand and interactive way. This helps businesses grow by making important decisions based on the data. For example, an e-commerce site can have charts or reports for product sales, with various categories like product type, year, etc.

In Angular, we have various charting libraries to create charts. Ngx-charts is one of them.

> Check out the list of [best angular chart libraries](https://www.ngdevelop.tech/best-angular-chart-libraries/).

## In this article, we will see data visualization with ngx-charts and how to use ngx-charts in an angular application

We will see:

- How to install ngx-charts in angular
- How to create a vertical bar chart

> 👉 This article was originally published at [NgDevelop Blogs](https://www.ngdevelop.tech/blog/). **Check out the complete article [How to use ngx-charts in angular ?](https://www.ngdevelop.tech/how-to-use-ngx-charts-in-angular/)**. It includes other important chart examples, like the pie chart, advanced pie chart and pie chart grid.

## Introduction

[ngx-charts](https://swimlane.gitbook.io/ngx-charts/) is an open-source, declarative charting framework for Angular 2+. It is maintained by [Swimlane](https://swimlane.com/).

It uses Angular to render and animate the SVG elements with all of its binding and speed goodness, and uses d3 for the excellent math functions, scales, axis and shape generators, etc. By having Angular do all of the rendering, it opens us up to the endless possibilities the Angular platform provides, such as AoT, Universal, etc.

ngx-charts supports various chart types like bar charts, line charts, area charts, pie charts, bubble charts, doughnut charts, gauge charts, heatmaps, treemaps and number cards. It also supports features like autoscaling, timeline filtering, line interpolation, configurable axes, legends, real-time data support and so on.
## Ngx-charts Installation

1. Create a new angular application using the following command (Note: skip this step if you want to add ngx-charts to an existing angular application. At the time of writing this article I was using Angular 9).

   ```
   ng new ngx-charts-demo
   ```

2. Install the `ngx-charts` package in the angular application using the following command.

   ```
   npm install @swimlane/ngx-charts --save
   ```

3. If during installation you get the following error

   ```
   ERROR in The target entry-point "@swimlane/ngx-charts" has missing dependencies:
    - @angular/cdk/portal
   ```

   we need to add `@angular/cdk` using the following

   ```
   npm install @angular/cdk --save
   ```

4. Import `NgxChartsModule` from 'ngx-charts' in `AppModule`.

5. ngx-charts also requires the `BrowserAnimationsModule`. Import it in `AppModule`.

So our final `AppModule` will look like:

```js
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { AppComponent } from './app.component';
import { NgxChartsModule } from '@swimlane/ngx-charts';
import { BrowserAnimationsModule } from '@angular/platform-browser/animations';

@NgModule({
  declarations: [AppComponent],
  imports: [BrowserModule, BrowserAnimationsModule, NgxChartsModule],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule {}
```

Great, the installation steps are done. Now let's develop charts using the `ngx-charts` components.

In `AppComponent` we will create the following sales data array. We will use this object to generate charts.
```js
import { Component } from '@angular/core';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent {
  saleData = [
    { name: "Mobiles", value: 105000 },
    { name: "Laptop", value: 55000 },
    { name: "AC", value: 15000 },
    { name: "Headset", value: 150000 },
    { name: "Fridge", value: 20000 }
  ];
}
```

Now let's use this data to create a vertical bar chart with the ngx-charts bar chart component.

## Vertical Bar Chart

To generate a vertical bar chart, `ngx-charts` provides the `ngx-charts-bar-vertical` component; add it to the template as below:

```html
<ngx-charts-bar-vertical
  [view]="[1000, 400]"
  [results]="saleData"
  [xAxisLabel]="'Products'"
  [legendTitle]="'Product Sale Chart'"
  [yAxisLabel]="'Sale'"
  [legend]="true"
  [showXAxisLabel]="true"
  [showYAxisLabel]="true"
  [xAxis]="true"
  [yAxis]="true"
  [gradient]="true">
</ngx-charts-bar-vertical>
```

![Vertical Bar Chart Generated with Ngx-Charts](https://dev-to-uploads.s3.amazonaws.com/i/yk8qhgeezw68wu7trg9w.png)

*Vertical Bar Chart of Sales Data*

👉 Important properties of the `ngx-charts-bar-vertical` component:

- `results`: the data to render; to chart `saleData` we assign this data object to `results`
- `view`: sets the width and height of the chart view
- `xAxisLabel`: sets the x-axis label
- `legendTitle`: sets the legend title
- `legend`: set it to `true` to show the legend; by default it is `false`
- `showXAxisLabel`: set `true` to show the x-axis label
- `showYAxisLabel`: set `true` to show the y-axis label
- `xAxis` / `yAxis`: set `true` to show the specific axis
- `gradient`: set it to `true` to draw bars with a gradient background

### Pie Chart, Advanced Pie Chart and Pie Chart Grid

👉 Check out the complete article: [How to use ngx-charts in angular ?](https://www.ngdevelop.tech/how-to-use-ngx-charts-in-angular/)

## Summary

In this article, we have seen data visualization with ngx-charts in an angular application.
We have installed the ngx-charts library and created a vertical bar chart using sample sales data. I hope you like this article; please share your valuable feedback and suggestions in the comment section below 🙂.

For more updates, follow us 👍 on the [NgDevelop Facebook page](https://www.facebook.com/ngdevelop.tech/).

### Check out the other [best angular chart libraries](https://www.ngdevelop.tech/best-angular-chart-libraries/).
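One practical detail worth noting: backend data rarely arrives already in the `{ name, value }` shape that ngx-charts reads from `results`. A small mapping helper can aggregate raw records into that format — a sketch in plain TypeScript, where the `SaleRecord` shape and the `toChartData` helper are hypothetical, not part of the ngx-charts API:

```typescript
// Hypothetical raw records, e.g. as returned by an API.
interface SaleRecord { product: string; amount: number; }

// The { name, value } item shape used by ngx-charts series data.
interface ChartItem { name: string; value: number; }

// Sum amounts per product and emit ngx-charts-ready items.
function toChartData(records: SaleRecord[]): ChartItem[] {
  const totals = new Map<string, number>();
  for (const r of records) {
    totals.set(r.product, (totals.get(r.product) ?? 0) + r.amount);
  }
  const items: ChartItem[] = [];
  totals.forEach((value, name) => items.push({ name, value }));
  return items;
}

const raw: SaleRecord[] = [
  { product: "Mobiles", amount: 100000 },
  { product: "Mobiles", amount: 5000 },
  { product: "Laptop", amount: 55000 },
];

console.log(toChartData(raw));
// e.g. [ { name: "Mobiles", value: 105000 }, { name: "Laptop", value: 55000 } ]
```

The resulting array can then be assigned to the component field that the template's `[results]` binding reads.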
ankitprajapati
376,122
Find_or_create_by: ActiveRecord Ruby on Rails
I am working on a project that I am very excited about! I am creating a social media platform for a...
0
2020-06-30T20:31:58
https://dev.to/brookebachman/findorcreateby-activerecord-ruby-on-rails-2ifm
ruby, rails, codenewbie
I am working on a project that I am very excited about! I am creating a social media platform for animals. I like to keep my project ideas silly and fun, because it is hard to learn a new skill if you are working on a project that doesn't thrill you. I am using a Rails backend with a JavaScript frontend. The relationships in my model are set up such that a Post cannot be created without a User. To make things easier, my project allows a person to create both a post and a user at the same time.

When I was attempting to create a new post in my application, I ran into an error on my backend. My post was created; however, I was not able to create a user or find a user properly. My database would attempt to create a new user record, but would then fail. My code looked like this originally:

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/hnvvaxjsrrfa59me7okb.png)

I am using the `find_or_create_by` method. This method searches for a record matching the attributes you pass in and, if no matching record is found in your database, creates a new one. The first argument is the attribute name and its value. `find_or_create_by` also accepts a block of code. In the block that starts on line 20, I am specifying the values that I want each attribute to be set to if this user is not found in the database. Now the database is able to create a new user with all of the attributes passed in through the form where the data was entered.

I kept getting an error where the database would INSERT a row into the table, but then roll back the transaction. The database was very confused because I was only passing it a name attribute, so if it could not find the user record, it was unable to create a new user in the table with only the name attribute given. In my posts controller, I have strong params set for users. What strong params do is make sure a user can only be created with the parameters I specified.
I only want to create a new user record in my database if the name attribute that was typed into my form does not match any of the names currently in my database. When I create a User through my form, I am passing six attributes. However, the only attribute I do not want repeated is the user's name. I rewrote my code like so:

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/jakur8j7cj4bxxhd61bs.png)

Here, in line 19, I am still looking for the name attribute; however, I am specifying what the database should compare that name to: the name attribute that gets entered into my form. Now the database is able to create an entire User, with all of its params passed in, as well as look for the name attribute on the whole object.

I am excited about the `find_or_create_by` method. I have built two JavaScript frontend with Rails backend projects, and I am constantly searching through my database, so now there is an easier way to accomplish this task. Hope you enjoyed this!
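For readers who cannot see the screenshots, the lookup-then-create behavior can be sketched in plain Ruby. Note this is a hypothetical stand-in, not ActiveRecord itself: the `users` array plays the role of the database table, and the `find_or_create_by` helper below mimics the method's semantics (search by the given attribute first; run the block to fill in the remaining attributes only when nothing matches).

```ruby
# Hypothetical in-memory stand-in for a users table.
users = [{ name: "Milo", species: "cat" }]

# Mimics ActiveRecord's find_or_create_by: look up by the given
# attribute first; only run the block and create when nothing matches.
def find_or_create_by(records, name:)
  found = records.find { |r| r[:name] == name }
  return found if found

  new_record = { name: name }
  new_record = yield(new_record) if block_given? # set the other attributes
  records << new_record
  new_record
end

existing = find_or_create_by(users, name: "Milo") { |u| u.merge(species: "dog") }
created  = find_or_create_by(users, name: "Rex")  { |u| u.merge(species: "dog") }

puts existing[:species] # prints "cat" -- record was found, so the block never ran
puts users.length       # prints 2 -- "Rex" was created
```

The key behavior this illustrates is the same one from the article: the block only runs on the *create* path, so an existing record is returned untouched.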
brookebachman
376,725
Understanding proc objects
What is a proc? Proc objects are blocks of code that have been bound to a set of local var...
0
2020-07-01T06:42:47
https://dev.to/abeidahmed/understanding-proc-objects-1bnm
ruby
### What is a proc?

Proc objects are blocks of code that have been bound to a set of local variables. That means we can pass them as method arguments, push them inside an array, and do any other things that we normally do to objects. Before moving on, I would like to clarify that `Proc` will denote the class and `proc` will denote an instance of the `Proc` class.

### Calling a proc

```ruby
pr = Proc.new { puts "hello world, once" }
pr.call # hello world, once
```

`Proc.new` is an alias for the `Kernel#proc` method.

```ruby
pr = Kernel.proc { puts "hello world, twice" }
pr.call # hello world, twice
```

We can also write it like this.

```ruby
pr = proc { puts "hello world, thrice" }
pr.call # hello world, thrice
```

### Capturing a code block

A method can capture a block and objectify it using the special `&` operator. Without the `&` operator, Ruby has no way of understanding that you want to perform a block-to-proc conversion rather than pass a regular argument.

```ruby
def call_a_proc(&block)
  block.call
end

call_a_proc { puts "Hello world" } # Hello world
```

Similarly, we can also place a `Proc.new` to serve in place of the code block.

```ruby
p = Proc.new { |x| puts x.upcase }
%w(abeid ahmed).each(&p)
```

It outputs to:

```ruby
ABEID
AHMED
=> ["abeid", "ahmed"]
```

Here `p` is an instance of the `Proc` class. You can verify this using `p.class` in your terminal. A `Proc.new` takes in a code block that tells it what to do when the `proc` is called, and that is exactly what we are doing in the above code. If you have been writing Ruby for some time, you must be familiar with the `each` keyword. The `each` method takes in a block of some kind with an argument. The `&p` convinces `each` that it doesn't need an actual code block; instead, it should convert the `proc` object into one and use it.
The above code is the same as saying:

```ruby
%w(abeid ahmed).each { |x| puts x.upcase }
```

### Block-proc conversion

```ruby
def capture_block(&block)
  block.call
end

capture_block { puts "Beautiful day" }
```

We've seen this type of code in one of the examples above. So, here's how this method gets called.

##### Step 1

The `capture_block` method with the code block is called first.

```ruby
capture_block { puts "Beautiful day" }
```

##### Step 2

A new `proc` object is created using `Proc.new`.

```ruby
Proc.new { puts "Beautiful day" }
```

##### Step 3

The `proc` created in step 2 is captured, or bound, to the `block` argument in the `capture_block` method.

```ruby
capture_block(&block)
```

So, these are the three steps that are completed to call the `capture_block` method. Wait, there's more to this `proc`. Keen readers may already have spotted it, and even if you didn't, that is completely fine. Remember the `each` example that we talked about? We passed in a `Proc.new`, like,

```ruby
%w(abeid ahmed).each(&p)
```

We can do the same with the `capture_block` method. Instead of calling the block like,

```ruby
capture_block { puts "hello world" }
```

we can do exactly the same as the `each` method. First we create a new `Proc.new`, and then we pass it on.

```ruby
my_proc = Proc.new { puts "Another way" }
capture_block(&my_proc) # Another way
```

### Conclusion

In this article we have accomplished quite a lot. We learned how to create `proc` objects, how to capture them, and how the conversion from `block` to `proc` takes place. If you are interested in learning more about `Proc`, head over to the official [ruby docs](https://docs.ruby-lang.org/en/2.0.0/Proc.html). This is my first article on this platform, so go easy on me :smile:. If you found this article helpful, you know the drill, and if there are any typos or general errors, let me know in the comments. Good day and happy learning!
Credits: [The Well Grounded Rubyist](https://www.amazon.com/Well-Grounded-Rubyist-David-Black/dp/1617295213/ref=sr_1_1?crid=2EPQNR1IOSJ97&dchild=1&keywords=well+grounded+rubyist&qid=1593584433&sprefix=Well+grounded%2Caps%2C536&sr=8-1)
abeidahmed
376,753
We need React Native code for SQLite Sync with MySql for offline Working
We need React Native code for SQLite Sync with MySql database With User Base Sync for offline Working...
0
2020-07-01T07:02:16
https://dev.to/rupeshghar/we-need-react-native-code-for-sqlite-sync-with-mysql-for-offline-working-5h26
We need React Native code for syncing SQLite with a MySQL database, with user-based sync, for offline working in mobile and web applications
rupeshghar
376,851
[Beginner HTML&CSS] Duomly coding challenge #3: Shopping cart
Hey guys, we created a coding challenge that you can complete to test your skills and build a coding...
7,562
2020-07-01T08:47:06
https://dev.to/duomly/beginner-html-css-duomly-coding-challenge-3-shopping-cart-23il
challenge, html, codenewbie, beginners
Hey guys, we created a coding challenge that you can complete to test your skills and build a coding portfolio.

### Today's info:
- Code the design into an HTML&CSS layout
- You can use Bootstrap or another framework that you know
- Create a git repository named Duomly challenge #3
- Push the code to GitHub and share a link to the repository in the comments

### Layout to code:
![Duomly challenge](https://dev-to-uploads.s3.amazonaws.com/i/tbcjzkijn9dnikgcjdey.png)

### HELP:
- Bootstrap tutorial: https://www.blog.duomly.com/bootstrap-tutorial/
- Build e-commerce with Bootstrap: https://www.duomly.com/course/build-ecommerce-with-bootstrap
- HTML&CSS course: https://www.duomly.com/course/html-and-css-basics-course/

Let's start!
Radek from Duomly
duomly
377,033
Does dev.to have dark mode? How can I activate it?
Dark mode has become the latest feature in almost every website, app, and even operating systems. Wi...
0
2020-07-02T23:38:54
https://dev.to/b_hantsi/does-dev-to-has-dark-mode-how-can-i-activate-it-51d9
Dark mode has become the latest feature in almost every website, app, and even operating system. With dev.to being my favorite website, I am really curious about this feature. Do we have it? How can I activate it?

Thanks
b_hantsi
377,097
Measure your Create React App build.
Measuring the build speed of your Create react app
0
2020-07-01T13:33:29
https://dev.to/avinash8847/mesure-your-create-react-app-build-331h
react, cra, webpack
## How to tweak the webpack config?

Create React App comes with a pre-configured webpack. To add your own config to Create React App, you have to either eject it or [fork react-scripts](https://create-react-app.dev/docs/alternatives-to-ejecting/).

### Other ways to modify your webpack config

You can use libraries like [react-app-rewired](https://github.com/timarney/react-app-rewired) or [customize-cra](https://github.com/arackaf/customize-cra)

> :warning: **"Stuff can break"**: - Dan Abramov

---

## Measuring speed

That being said, what I wanted was to just see which webpack loader or plugin took how much time. So I used [react-app-rewired](https://github.com/timarney/react-app-rewired). It is easy to configure and use. You can go through the [docs](https://github.com/timarney/react-app-rewired#how-to-rewire-your-create-react-app-project) to see how it is done.

Now once you are done with the setup, you need a webpack plugin: [speed-measure-webpack-plugin](https://www.npmjs.com/package/speed-measure-webpack-plugin). Install it by running this command

`npm install --save-dev speed-measure-webpack-plugin`

or, if you are using yarn,

`yarn add -D speed-measure-webpack-plugin`

Now we need to add this code in `config-overrides.js`

```js
const SpeedMeasurePlugin = require("speed-measure-webpack-plugin");
const smp = new SpeedMeasurePlugin({
  outputFormat: "human",
  outputTarget: './measur.txt'
});

module.exports = function (config, env) {
  config = smp.wrap({
    ...config,
  })
  return config
}
```

Once you run your build command, a `measur.txt` file with the timing report will be generated.
avinash8847
377,377
State in React for Designers
React "state" should be the least difficult concept to understand but it isn't. Or is it? Open Figma...
0
2020-07-01T14:37:55
https://dev.to/networkaaron/state-in-react-for-designers-50i2
react, beginners
React "state" should be the least difficult concept to understand but it isn't. Or is it? Open Figma or Sketch. Create a button and label it 'Buy Now'. Duplicate the button and make it look as if it were disabled. Your UI kit now consists of two states. You're done. Now pass the UI kit over to the developer. Here's where it gets difficult. **"State" is not so simple for developers.** For instance, the developer may have to connect to the inventory API to determine the state of the 'Buy Now' button. And, this has to be done before the button appears on the webpage. If available, show the 'Buy Now' and if out of stock show the disabled state. Only a couple hours of programming if all goes well. Hold on. The product was sold out before the shopper had an opportunity to click the "Buy Now" button. The developer needs to take that into consideration, connect to the inventory API once again, and then provide a pop up which informs the shopper it is not available. Here's where it gets extremely difficult. **"State" is not so simple for designers.** For instance, the pop up is hideous. The UI kit did not include the state(s) for the pop up. Oops. Open up Figma or Sketch again. Design the states for the pop up. Watch as the developer goes into a "state" of rage after finding out it needs to be redeveloped. In summary, state is what an element visually looks like at any given moment. Designers plan for each state while developers figure out how to make it happen using APIs, JavaScript, HTML, JSX, CSS, Sass, React, etc.
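To make that summary concrete, here is a minimal JavaScript sketch of the idea (the `buyButtonState` function and the inventory shape are hypothetical, not from any real inventory API): the designer's two button states become the possible return values, and the developer's job is deriving which one applies from the data.

```javascript
// Derive the button's visual state from inventory data.
// In React, this return value would live in component state
// (e.g. via useState) and drive what gets rendered.
function buyButtonState(inventory) {
  if (inventory.count > 0) {
    return { label: "Buy Now", disabled: false };
  }
  return { label: "Out of Stock", disabled: true };
}

console.log(buyButtonState({ count: 3 })); // { label: 'Buy Now', disabled: false }
console.log(buyButtonState({ count: 0 })); // { label: 'Out of Stock', disabled: true }
```

The point of the sketch: the UI kit defines the set of possible states, while the code decides which state is active at any given moment.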
networkaaron
377,576
Introducing isBusy: A Free Personal Status Page for Remote Work
I wanted to make things a little bit easier for my family and myself with all the adjustments happening, so I started on a little nights & weekends project: https://isbusy.app. The idea is simple: come up with an easy way for my family to understand if I'm busy and have it automatically sync with all my work and personal calendars.
0
2020-07-01T16:37:19
https://dev.to/markthethomas/introducing-isbusy-a-personal-status-page-for-remote-work-52hb
productivity, webdev, javascript, serverless
I wanted to write a little post to announce a side project/fledgling app to the world: [isBusy](https://isbusy.app).

With all that's been going on in the world these last few months, a lot has changed. Very little of our lives seems to have been left untouched by the Covid-19 pandemic. One area that has been changed dramatically for many people is their work life and situation. Tragically, many people have lost their jobs or face the challenge of a radically different work environment, remote or otherwise.

I've found myself adjusting to working from home and now working from home on a permanent basis. And it's been...interesting. Some parts of it have been really, really good. I find myself having more time for the most important parts in life: family, friends, and being outside in nature. But there have also been some harder parts, too. My family has had to adjust to me being home all the time and I had to adjust my working habits to a new rhythm. I found it harder at times to focus because of everything on my mind.

## "Hey! Are you free? Oh sorry 🏃‍♂️💨"

I wanted to make things a little bit easier for my family and myself with all the adjustments happening, so I started on a little nights & weekends project: [isBusy](https://isbusy.app). The idea is simple: come up with an easy way for my family to understand if I'm busy and have it automatically sync with all my work and personal calendars.
I've had more than a few "oops sorry" moments as different family members made their way into my office (accidentally or not). It's honestly a welcome addition to the day every time, but I also know it can disrupt my day in subtler ways. I also know that my situation is pretty close to ideal (I have a space I can go to, a door I can close, etc.) and that this is not the case for everyone. So I thought about setting up a "flag" system outside my office door, but that would invariably get out of date fast and be annoying to update constantly. I want a way for my family to easily see if I'm busy and know the nuances between "Mark is working" and "Mark is on a call and absolutely cannot be interrupted except for emergencies."

I'm also planning to make [isBusy](https://isbusy.app) free for everyone and forever. I've [set up a Patreon](https://www.patreon.com/isbusy) if people want to donate to reduce costs. But fundamentally I want to make sure I can keep making tools like these for folks to help them thrive, especially in these hard times.

Some quick looks at the app:

![isbusy explainer](https://ifelse.io/static/isbusy-1.jpg)

![isbusy status screenshots](https://ifelse.io/static/isbusy-2.jpg)

![isbusy screenshots](https://ifelse.io/static/isbusy-3.jpg)

## Features

This was a side project worked on over the last few weeks, so the feature list isn't too crazy:

- isBusy is launching with calendar support for Google Calendar and Microsoft Outlook. If there are more calendars people want, I'll add those in, too.
- Users get a personal status link they can easily share with family or others so they can easily see whether you are busy or available.
- You can set up "working hours" so people can more easily understand what your schedule looks like
- Change your display name
- Automatically display in your local timezone
- Profile management (reset password etc.)

Hoping to add more if there's demand!
## Fin [I'm launching this on Product Hunt](https://www.producthunt.com/posts/isbusy?utm_source=badge-featured&utm_medium=badge&utm_souce=badge-isbusy) and a few other places to get the word out. Give it a spin at [https://isbusy.app](https://isbusy.app). Let me know if you have any feedback [on Twitter](https://twitter.com/markthethomas) 🤗
markthethomas
377,752
Databases for Windows Desktop Development: Welcome to the Jungle
I was a Microsoft enthusiast. And I’m a Windows desktop developer. You can judge, and stop reading he...
0
2020-07-01T18:17:55
https://dev.to/vicobiscotti/databases-for-windows-desktop-development-welcome-to-the-jungle-23k3
dotnet, database, sql, windows
I was a Microsoft enthusiast. And I'm a Windows desktop developer. You can judge, and stop reading here. Or you might be in the same boat and wondering why Microsoft decided to abandon development for Windows.

I'm about to ship a [desktop app](https://www.xplan-taskmanager.com/en/). I've been far from swdev for a few years, but I've written code for the previous thirty years. A dozen years with .NET. So, for my new application, I decided on the excellent Windows Presentation Foundation. WinForms are gone, and UWP looks orphaned and confined to Windows 10.

I know people who buy Windows desktop apps (I'm one of them, of course). But I know nobody who buys them on Microsoft Store, or even checks it once in a while, and I fear that this is not going to get better anytime soon. I still keep my old Windows Phone on a shelf, as a warning.

Without saying, WPF goes with Entity Framework. Not necessarily, of course, but preferably. As the local db, I started with an mdf (so, SQL Server). I told myself: "We'll see what will be best, maybe SQL Compact or something. That's not the problem." Being a personal app, it was mandatory to have a single setup, all included, but that has never been a problem in my life.

The moment of the final touches came, and what seemed a "final touch" turned out to be a pain in the ass. I've been away from swdev for a while. Too much, maybe.

# Microsoft forgot a detail

Or is knowingly telling us something. Remember that [Nadella](https://en.wikipedia.org/wiki/Satya_Nadella) is a cloud man. Everything is cloud nowadays — I get it — but a lot of business and productivity still runs offline. And Microsoft was rather good at it.

Surprise — my bad — [SQL Server Express](https://www.microsoft.com/it-it/sql-server/sql-server-editions-express) needs a separate setup in any case. I should have imagined.
I was betting on [LocalDb](https://docs.microsoft.com/it-it/sql/database-engine/configure-windows/sql-server-2016-express-localdb?view=sql-server-2017), the simplified flavor of SQL Server Express. [From Microsoft](https://docs.microsoft.com/it-it/sql/sql-server/editions-and-components-of-sql-server-2017?view=sql-server-2017): *"SQL Server Express LocalDB is a lightweight version of Express that has all of its programmability features, runs in user mode and has a fast, zero-configuration installation and a short list of prerequisites."*

User instances. Excellent. I was hoping to deliver also to users without admin rights. But…

You can get an msi file of LocalDb (so you can more or less have a silent install), but admin rights are required in any case for the installation (user mode, not user permissions…). Also, it cannot be password protected nor encrypted. It's intended to be used by developers, but it's not for developers. It's for design time.

In a few words, LocalDb is not exactly the file-based database you could expect from Microsoft 26 years after the introduction of Access.

But the story doesn't end here. Windows 7 and x86 are [not supported](https://docs.microsoft.com/it-it/sql/sql-server/install/hardware-and-software-requirements-for-installing-sql-server?view=sql-server-2017) by SQL Server Express 2017. Microsoft decided that half of my possible customers have an obsolete OS (Windows 7 still serves a good third of the global desktop/laptop users) and cannot be addressed.

Also, try to install a previous SQL Server Express — for Windows 7 support — and… it can't open your mdf, because you've created it with Visual Studio 2017 or the most recent SQL Server. No downgrade tool is available for such a common problem, nor any option to create an "older" mdf. In the hope of reading your mdf on Windows 7, you have to manually migrate it to a previous version — using an "older" SQL Server tool — and renounce using Visual Studio to manage it. Wow.
They really should have had developers in mind when designing this trap.

OK, let's roll back. **SQL Compact.** [Private install](https://www.codeproject.com/Articles/33661/%2FArticles%2F33661%2FCreating-a-Private-Installation-for-SQL-Compact), no admin privileges required, password protection. Perfect.

Well, no. SQL Compact still lives in a dark corner of Microsoft servers but, as of February 2013, it's been deprecated. You can go with it, at your own risk, but Windows 8 and 10 are not in the specifications, and skipping the external setup requires some workaround. Also, Visual Studio and SQL Server Management Studio clearly tell you that SQL Compact is stone dead and forgotten. You're on your own.

More rollback. **Access**? (Oh, my…) Entity Framework does not support Access. Yes, you read that correctly. Microsoft does not support its own popular database with its own recommended ORM. Implementations are there, for example [JetEntityFrameworkProvider](https://github.com/bubibubi/JetEntityFrameworkProvider), but it supports only code first. And it's from [bubibubi](https://github.com/bubibubi). I'm sure Bubi is a great developer, but I hoped for something more… official. I once wrote an ORM for Access and .NET, but my new app is already all based on EF.

Using [Linq to DataSet](https://docs.microsoft.com/en-us/dotnet/framework/data/adonet/linq-to-dataset-overview) with Access? Never used it, but I'm sure it takes a significant toll on performance — among other issues, you have to entirely populate the DataSet — and I already have to be careful on that side. It doesn't seem a mainstream solution. I'm not ready to put in a lot of work only to get other surprises. You can guess that I started sweating.

Okay, let's step out of Microsoft territory. What's the most renowned self-contained SQL database engine coming with EF support? **[SQLite](https://www.sqlite.org/index.html)**, of course. But SQLite comes with a few limitations.
[You can't drop a column](https://www.sqlite.org/faq.html#q11), for example, or add a constraint. Imagine supporting full database copies through every schema change. And password protection/encryption are not natively supported by SQLite. Of course, someone will tell you that a plugin is there for everything. With some limitations. But then again there's the other plugin, which works differently, or fades into the void after a couple of years…

**[MySQL](https://www.mysql.com/it/)**? Oracle, the fiercest competitor of Microsoft. I'm sure they do their best to [support .NET](https://dev.mysql.com/downloads/connector/net/) against Java — and that I won't have further surprises — but let me worry. Then, you have to find your way to a silent install, crossing fingers.

And you can go on, and on, and on… Sailing many seas and dreaming of endless discoveries and refactoring.

# So?

So, we're left with a lot of options and no solution. Microsoft has two clear answers: SQL Express on the desktop (with a separate and bulky installer, no password, no encryption), or Azure (that means cloud, and it's not appropriate for too many desktop apps). Microsoft forgot — or wants to forget — desktop apps and their need for integrated file-based databases. You can find many options, but no straight and complete solution is there, clearly and fully supported by Microsoft now and in the long term.

# What will I do?

First of all, I have to accept that I do not belong to this Cloud&Chaos era. I'll likely struggle with that for the rest of my life.

Now what, with my desktop app? Well, I plan to expand my product to teams. So, SQL Server is on my path, given that I prefer to stay in Microsoft territory, if possible. Better to start with that tech since the beginning. I'll stay with an mdf file, migrating it to an older version (so, using an older SQL Server) and renouncing managing it from Visual Studio.
Then I thought of deploying LocalDb silently, using the msiexec command line, embedded in my setup (given that the msi is provided):

*msiexec /i c:\temp\SqlLocalDB2017.msi*

But… an msi cannot run inside another msi (and I'm using a Visual Studio Installer project, which I prefer to other options for many reasons). So, I decided to run it at the first run of the application (otherwise, I could have written a setup wrapper, or used a different setup builder).

Of course, I have to

- detect possible previous SQL Server installations (by checking the registry and then files),
- decide which engine to install (LocalDb 2014 on Windows 7, x86 or x64; otherwise LocalDb 2017, x64),
- prompt the user (who will surely be happy about the extra setup, especially likely being a consumer),
- and likely distribute the SQL setup/s together with the application, or as an optional fat setup (otherwise, a separate LocalDb download will be required).

***No comment, Microsoft.***

No encryption, and users can be added only by application logic or at SQL Server level. It works, but I'm not happy. Also, it took me days to make my decision, implement, and test.

Now, if you'll excuse me, I have some backlog to work on, since I didn't expect to waste so much time on the local database after 22 years from the first Visual Studio.
vicobiscotti
377,870
Setting up our own VPN 🇹🇷
Let's Set Up a VPN Why should we set up our own VPN system? Server Settings User Settings Security...
0
2020-07-01T21:07:16
https://mrturkmen.com/vpn-kuralim/
blog, security, linux, macosx
---
title: Setting up our own VPN 🇹🇷
published: true
date: 2020-07-01 10:00:00 UTC
tags: blog,security,linux,macosx
canonical_url: https://mrturkmen.com/vpn-kuralim/
---

- [Let's Set Up a VPN](#vpn-kural%C4%B1m)
- [Why should we set up our own VPN system?](#neden-kendi-vpn-sistemi-kurmal%C4%B1y%C4%B1z-)
- [Server Settings](#sunucu-ayarlar%C4%B1)
- [User Settings](#kullan%C4%B1c%C4%B1-ayarlar%C4%B1)
- [Firewall Settings](#g%C3%BCvenlik-duvar%C4%B1-ayarlar%C4%B1)

# Let's Set Up a VPN

Today I want to show you how to set up a VPN of your own. I [published this](https://mrturkmen.com/setup-free-vpn/) in English before, but I thought a Turkish-language resource could also be useful. Everything described here has been tested on servers from the Ubuntu family (16.04, 18.04).

First, you rent a server from a cloud provider; this could be DigitalOcean, Google Cloud, Microsoft Azure, or Amazon. The cheapest and most reasonable option, I would say, is the [$5/month](https://www.digitalocean.com/pricing/) server offered by DigitalOcean. Once you have rented the server and established an SSH connection, we can move on to the VPN setup.

For readers who do not know exactly what a VPN is, it can be summarized like this: think of it as a virtual connection point created just for you. Once you connect to the VPN, all network traffic leaving and entering your computer is processed in encrypted form, which protects you against third-party software and attacks such as MITM.

## Why should we set up our own VPN system?

Because every VPN service that exists today, even the ones offered for free, records your data so that it can be sold, archived, and handed over to the relevant authorities when required. What harm can this cause? Let's list some examples together:

- Being exposed to phishing attacks built on information that only you should know.
- Being bombarded with ads by the sites you visit.
- Your personal information being sold to advertising agencies. Many people do not fully grasp what this means: person A, who shops online, believes their data is worthless to whoever might sell it, and keeps using the internet without any privacy protection. Even if this never harms A directly, it can harm the people A talks to, meets with, or works with.

What is listed here is not even the tip of the iceberg. Today's data-processing techniques and approaches have advanced so far that the job is done before you are even aware that something of yours exists :). For these reasons and many more, I would say using a VPN is a must.

So how will we do this? From this point on, I assume you have rented a server from a cloud provider and established an SSH connection. This post uses the WireGuard VPN application. WireGuard is open source, and thanks to what it offers, it is much faster and more reliable than other VPN applications (OpenVPN and the rest).
## Server Settings

**Let's install the VPN application on the server we rented.**

```
$ sudo apt-get update && sudo apt-get upgrade -y
$ sudo add-apt-repository ppa:wireguard/wireguard
$ sudo apt-get update
$ sudo apt-get install wireguard
```

**Let's enter the command required to keep the application updated along with kernel updates.**

```
$ sudo modprobe wireguard
```

**The expected result when the command below is entered.**

```
$ lsmod | grep wireguard
wireguard 217088 0
ip6_udp_tunnel 16384 1 wireguard
udp_tunnel 16384 1 wireguard
```

**Let's generate the keys**

```
$ cd /etc/wireguard
$ umask 077
$ wg genkey | sudo tee privatekey | wg pubkey | sudo tee publickey
```

**Let's set up the VPN configuration file at `/etc/wireguard/wg0.conf`.**

```
[Interface]
PrivateKey = <the-private-key-generated-earlier>
Address = 10.120.120.2/24
Address = fd86:ea04:1111::1/64
SaveConfig = true
PostUp = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o ens3 -j MASQUERADE; ip6tables -A FORWARD -i wg0 -j ACCEPT; ip6tables -t nat -A POSTROUTING -o ens3 -j MASQUERADE
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o ens3 -j MASQUERADE; ip6tables -D FORWARD -i wg0 -j ACCEPT; ip6tables -t nat -D POSTROUTING -o ens3 -j MASQUERADE
ListenPort = 51820
```

An important point here is `ens3`: the interface name used in the iptables commands can differ from server to server, so you should enter whatever your server's network interface is actually called. You can find it out with the `ifconfig` command. Another important point is that the contents of the `privatekey` file generated earlier in step 4 must be entered in the `PrivateKey` field.

**Forwarding network traffic**

Enter the information below into the `/etc/sysctl.conf` file and save it.
```
net.ipv4.ip_forward=1
net.ipv6.conf.all.forwarding=1
```

**After the information has been saved to the file, the following commands must be entered in order.**

```
$ sysctl -p
$ wg-quick up wg0
```

**If the commands run without any problems, you will see output similar to the one below after entering the `wg` command in the terminal.**

```
$ wg
interface: wg0
  public key: loZviZQpT5Sy4gFKEbk6Vc/rcJ3bH84L7TUj4qMB918=
  private key: (hidden)
  listening port: 51820
```

If you have not run into any problems up to this step, it means the server side is done for now; all that is left is to connect our computer, phone, etc. to the VPN server.

## User Settings

You can download the applications that users can run on their own computers, phones, tablets, or other servers from [here](https://www.wireguard.com/install/). After downloading the application to your own device, all you need to do is connect to the VPN we configured on the server side, and for that all that is required is entering the configuration correctly.

On the user side, you need to set up a configuration similar to the one below in the application (the private key and IP address will differ according to the VPN you set up).

```
[Interface]
Address = 10.120.120.2/32
Address = fd86:ea04:1111::2/128
# note that privatekey value is just a place holder
PrivateKey = KIaLGPDJo6C1g891+swzfy4LkwQofR2q82pFR6BW9VM=
DNS = 1.1.1.1

[Peer]
PublicKey = <your-server-public-key>
Endpoint = <your-server-external-ip-address>:51820
AllowedIPs = 0.0.0.0/0, ::/0
```

Once the necessary steps are done on the user side as well, what remains is granting this user connection permission on the server side, which you can do with the command below.
``` $ wg set wg0 peer <kullanici-public-anahtari> allowed-ips 10.120.120.2/32,fd86:ea04:1111::2/128 ``` Sunucu tarafindan kullanicinin VPN baglantisi sağladığını aşağıda verilen komut ile teyit edebilirsiniz. ``` $ wg interface: wg0 public key: loZviZQpT5Sy4gFKEbk6Vc/rcJ3bH84L7TUj4qMB918= private key: (hidden) listening port: 51820 peer: Ta9esbl7yvQJA/rMt5NqS25I/oeuTKbFHJu7oV5dbA4= allowed ips: 10.120.120.2/32, fd86:ea04:1111::2/128 ``` Daha sonrasinda, wireguard tarafından oluşturulan ağ kartını aktivate edelim. ``` $ wg-quick up wg0 ``` ## Güvenlik Duvarı ayarları Bazen sunucu tarafında yapmanız gereken bazı güvenlik duvarı ayarları bulunmakta, bunlar VPN bağlantısını başarılı bir şekilde sağlamanız için kritik öneme sahiptir. ``` $ ufw enable ``` **VPN uygulamasına bağlanmamızı sağlayacak portu açıyoruz.** ``` $ ufw allow 51820/udp ``` **IP tabloları ile 51820 portu için bazı ayarlamalar yapıyoruz.** ``` $ iptables -A INPUT -p udp -m udp --dport 51820 -j ACCEPT $ iptables -A OUTPUT -p udp -m udp --sport 51820 -j ACCEPT ``` Burada önemli olan kısımlardan biriside bütün komutlar ROOT, yani yönetici yetkisi ile yapılmalı, aksi takdirde hata verecektir. Bu noktadan sonra, bilgisayarınıza, tabletinize veya telefonunuza kurduğunuz WireGuard uygulaması sayesinde sorunsuz ve güvenlikli bir şekilde internetinizi kullanabilirsiniz.
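If you later want to connect more than one device, the same pattern repeats: each device gets its own key pair and its own `[Peer]` entry on the server. As an illustrative sketch only (the key names and addresses below are placeholders, not values from this setup), a server `wg0.conf` with two peers could look like:

```ini
[Interface]
PrivateKey = <server-private-key>
Address = 10.120.120.1/24
ListenPort = 51820

# first client, e.g. a laptop
[Peer]
PublicKey = <laptop-public-key>
AllowedIPs = 10.120.120.2/32

# second client, e.g. a phone
[Peer]
PublicKey = <phone-public-key>
AllowedIPs = 10.120.120.3/32
```

Each client would then use its own address (matching its `AllowedIPs` entry on the server) in the `Address` line of its local `[Interface]` section.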
mrturkmen
377,978
🔑 OAuth 2.0 flows explained in GIFs
In this post, we will be covering all OAuth 2.0 flows using GIFs that are simple and easier to unders...
0
2020-07-14T05:46:03
https://dev.to/hem/oauth-2-0-flows-explained-in-gifs-2o7a
oauth, security, computerscience, design
In this post, we will be covering all OAuth 2.0 flows using GIFs that are simple and easier to understand. This post can be used as a cheat-sheet for future reference as well!

> **Note**: Feel free to [⏩ skip](#toc) to the flows directly if you are already aware of OAuth.

---

## Hold on, what's OAuth❓

OAuth (Open Authorization) enables third-party websites or apps to access a user's data without requiring them to share their credentials. It is a set of rules that makes access delegation possible. The user gets to _authorize_ which resources an app can access and limits access accordingly.

## Terminologies 🧱

Now that we know what OAuth is about, let us quickly cover the terminologies before we dive in.

| Term | Description |
| :--------------------- | ------------------------------------------------------------ |
| Client 📦 | The application that seeks access to resources. Usually the third-party. |
| Resource Owner 👤 | The user who owns the resources. It can also be a machine 🤖 (E.g. Enterprise scenarios). |
| Resource 🖼 | Could be images, data exposed via APIs, and so on. |
| Resource Server 📚 | Server that hosts protected resources. Usually, an API server that serves resources if a proper token is furnished. |
| Authorization Server 🛡 | Server responsible for authorizing the client and issuing access tokens. |
| User-Agent 🌐 | The browser or mobile application through which the resource owner communicates with our authorization server. |
| Access Token 🔑 | A token which is issued as a result of successful authorization. An access token can be obtained for a set of permissions (scopes) and has a pre-determined lifetime after which it expires. |
| Refresh Token 🔄 | A special type of token that can be used to replenish the access token. |

Now, let us relate these terminologies in an abstract OAuth flow based on an example [(1)](https://tools.ietf.org/html/rfc6749#section-1).

> An end-user (resource owner 👤) grants a printing service (app 📦) access to their photo (resource 🖼) hosted in a photo-sharing service (resource server 📚), without sharing their username and password. Instead, they authenticate directly with a server trusted by the photo-sharing service (authorization server 🛡), which issues the printing service delegation-specific credentials (access token 🔑).

Note that our app (client) has to be _registered_ with the authorization server in the first place. A Client ID is returned as a result. A client secret is also optionally generated depending on the scenario. This secret is known only to the authorization server and the application.

Okay, enough theory! It is time for some GIFs to understand the authorization scenarios/flows!

---

## Flows in OAuth 2.0 <a name="toc"></a>

1. [Authorization Code Grant](#authorization-code-flow)
2. [Authorization Code Grant with PKCE](#authorization-code-flow-with-PKCE)
   - [Code Transformation Method (CM) - Meet SHA256](#pkce-sha256)
   - [Code verifier (CV) & Code Challenge (CC)](#code-verifier-and-code-challenge)
3. [Client Credentials Grant](#client-credentials)
4. [Resource Owner Password Credentials Grant](#resource-owner-password-credentials)
5. [Implicit Grant](#implicit-grant)

---

### 1. Authorization Code Grant flow <a name="authorization-code-flow"></a>

It is a popular browser-based authorization flow for Web and mobile apps. You can directly relate it with the above-mentioned example. In the diagram below, the flow starts from the Client redirecting the user to the authorization endpoint.

![1 Authorization Code Grant Flow - confidential clients](https://dev-to-uploads.s3.amazonaws.com/i/2j7kqc7qabtfpl250jf2.gif)

This flow is optimized for _confidential clients_. Confidential clients are apps that can guarantee the secrecy of `client_secret`. A part of this flow happens in the front-channel (until the authorization code is obtained). As you can see, the `access_token` 🔑 exchange step happens confidentially via back-channel (server-to-server communication).

Now you might naturally wonder, 'What about public clients?!'

### 2. Authorization Code Grant with PKCE <a name="authorization-code-flow-with-PKCE"></a>

Public clients using the authorization code grant flow have security concerns. Be it single-page JS apps or native mobile apps, it is impossible to hide the `client_secret` as the entire source is accessible (via DevTools or App Decompilation). Also, in native apps where there is a custom URI scheme, there is a possibility of malicious apps intercepting the authorization code via similar redirect URIs.

To tackle this, the Authorization Code Grant flow makes use of Proof Key for Code Exchange (PKCE). This enables the generation of a secret at runtime which can be safely verified by the Authorization Server. So how does this work?

#### 2.1. Transformations - Meet SHA256 <a name="pkce-sha256"></a>

Basically, the client generates a random string named `Code Verifier` (CV). A `Code transformation method` (CM) is applied to the CV to derive a `Code Challenge` (CC) ✨ As of now, there are two transformation methods: `plain` or `S256` (SHA-256).

![2.1 Authorization Code Grant Flow - PKCE - How does SHA256 work?](https://dev-to-uploads.s3.amazonaws.com/i/tw9nu7l1yufx3bczy7fi.gif)

In plain transformation, CC will be equal to CV. This is not recommended for obvious security reasons & hence S256 is recommended.

SHA256 is a hash function. On a high level, it takes input data and outputs a string. However, there are special characteristics to this output string:

- This string is unique to the input data and any change in the input will result in a different output string! One can say, it is a signature of the input data.
- The input data _cannot_ be recovered from the string and it is a one-way function (Refer to the GIF above).
- The output string is of fixed length.

#### 2.2. How do we generate the Code Challenge (CC) from the Code Verifier (CV)? <a name="code-verifier-and-code-challenge"></a>

![2.2 Authorization Code Grant Flow - PKCE - What is a Code Challenge vs. Code Verifier](https://dev-to-uploads.s3.amazonaws.com/i/tfuwug010zprz84p702o.gif)

You might have already guessed it. Take a look at the diagram to understand how CC is generated from CV. In this case, since SHA256 is used, CV cannot be generated from CC (Remember, it is a one-way transformation). Only CC can be generated from CV (given that the transformation method is known - `S256` in our case).

#### PKCE flow

Note that the flow starts with CV and CC generated first, replacing the `client_secret`.

![Authorization Code Grant with PKCE](https://dev-to-uploads.s3.amazonaws.com/i/odkf14kzlb5gcbvrmuvx.gif)

Now that we know how CV and CC are generated, let us take a look at the complete flow. Most parts are similar to the authorization code grant flow. The CV and CC are generated even before the flow starts. Initially, only the CC and CM are passed to obtain an authorization code. Once the authorization code is obtained, the CV is sent along to obtain the access token.

In order for the authorization server to confirm the legitimacy, it applies the transformation method (CM, which is SHA256) on the received CV and compares it with the _previously-obtained_ CC. If it matches, a token is provided!

Note that even if someone intercepts the authorization code, they will not have the CV. Thanks to the nature of the one-way function, a CV cannot be recovered from CC. Also, CV cannot be found from the source or via decompilation as it is only generated during runtime!

### 3. Client Credentials Grant flow <a name="client-credentials"></a>

In the above flows, a resource owner (user) had to provide consent. There can also be scenarios where a user's authorization is not required every time. Think of machine-to-machine communication (or app-to-app). In this case, the client is confidential by nature and the apps may need to act on behalf of themselves rather than that of the user.

![4 Client Credentials Grant](https://dev-to-uploads.s3.amazonaws.com/i/gp4n79x84xujj8mn625w.gif)

Also, this is the simplest of all flows.

### 4. Resource Owner Password Credentials Grant flow <a name="resource-owner-password-credentials"></a>

This flow is to be used only when there is a _high degree of trust_ between the resource owner and the client. As you can see, initially, the username & password are obtained from the R.O and are used to fetch the `access_token`. The username & password details will be discarded once the token is obtained.

![3 Resource Owner Password Credentials grant](https://dev-to-uploads.s3.amazonaws.com/i/6hsfukc7f4rnopbsy04f.gif)

This flow is _not recommended_ for modern applications and is often only used for legacy or migration purposes. It carries high risk compared to other flows as it follows the password anti-pattern that OAuth wants to avoid in the first place!

### 5. Implicit Grant flow <a name="implicit-grant"></a>

This flow is _no longer recommended_ officially! Implicit grant was considered an alternative to Authorization Code Grant for public clients. As you can notice, the public client does not contain the `client_secret` and there is no back-channel involved. In fact, the access token is obtained immediately after the consent is obtained.

![5 Implicit Grant](https://dev-to-uploads.s3.amazonaws.com/i/90t3te63144tcdven41w.gif)

However, the token is passed in the URL fragment (begins with `#`) which will never be sent over the network to the redirect URL. Instead, the fragment part is accessed by a script that is loaded in the frontend (as a result of redirection). The `access_token` will be extracted in this manner and subsequent calls are made to fetch the resources. As you can already see, this flow is susceptible to access token leakage and replay attacks. It is [recommended](https://tools.ietf.org/html/draft-ietf-oauth-security-topics-09#section-2.1.2) to use the Authorization Code Grant or any other suitable grant types instead of Implicit.

---

Hope this post was useful. Check out [RFC 6749](https://tools.ietf.org/html/rfc6749) and [RFC 7636](https://tools.ietf.org/html/rfc7636) to dig deeper into the steps involved in each of the flows. Feel free to share your thoughts as well.

Don't forget to share this post if you found it useful 🚀 Let me know what you would like to see next as a part of the GIF series! Until then, stay OAuthsome!✨

| [🐥 Twitter](https://twitter.com/HemSays) | [💼 LinkedIn](https://www.linkedin.com/in/hems23/) |
|---|---|

---
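As a small appendix to the PKCE section above: the CV → CC derivation is easy to try out yourself. Here is a minimal Python sketch of the `S256` transformation described in RFC 7636 (the helper names are my own, not from any official SDK):

```python
import base64
import hashlib
import secrets


def make_code_verifier() -> str:
    # A high-entropy random string; 32 bytes -> 43 base64url characters,
    # which satisfies RFC 7636's 43-128 character length requirement.
    return base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")


def make_code_challenge(verifier: str) -> str:
    # S256 transformation: CC = BASE64URL(SHA256(ASCII(CV))), '=' padding stripped.
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")


cv = make_code_verifier()
cc = make_code_challenge(cv)
# The authorization server stores CC with the auth request, then later
# recomputes make_code_challenge(received_cv) and compares it to the stored CC.
```

The challenge is deterministic — the same verifier always yields the same challenge — which is exactly what lets the authorization server verify the token request without ever having seen the verifier up front.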
hem
378,100
Tutorial: Setting up HarperDB Cloud
Overview The goal of this tutorial is to give a very simple explanation of how to set up a...
0
2020-07-01T20:48:08
https://dev.to/harperdb/tutorial-setting-up-harperdb-cloud-10i4
database, tutorial, sql, microservices
## Overview

The goal of this tutorial is to give a very simple explanation of how to set up a HarperDB Cloud account and a free instance of HarperDB.

### Signing Up

* Go to https://studio.harperdb.io/sign-up
* Fill out the form with your details

![](https://paper-attachments.dropbox.com/s_694AC5CA9308096771EF7B00CE87C94A0B19247B0A8F1B3279168F6A6C25589F_1593631741126_image.png)

* Click Sign Up For Free
* You will then receive an email that looks like this:

![](https://paper-attachments.dropbox.com/s_694AC5CA9308096771EF7B00CE87C94A0B19247B0A8F1B3279168F6A6C25589F_1593631864036_image.png)

* Make sure to copy your temporary password and then click Log into HarperDB Studio
* Type in your email and temporary password
* You will then be asked to create a new password
* Note: This is only for the management studio. Your database creds are separate for each db instance. Additionally, you will be able to invite other users to your organization once you login.
* Now click on your org

![](https://paper-attachments.dropbox.com/s_694AC5CA9308096771EF7B00CE87C94A0B19247B0A8F1B3279168F6A6C25589F_1593632079387_image.png)

### Creating an Instance

* Now click on Create New HarperDB Cloud Instance

![](https://paper-attachments.dropbox.com/s_694AC5CA9308096771EF7B00CE87C94A0B19247B0A8F1B3279168F6A6C25589F_1593632119706_image.png)

* Click on Create HarperDB Cloud Instance

![](https://paper-attachments.dropbox.com/s_694AC5CA9308096771EF7B00CE87C94A0B19247B0A8F1B3279168F6A6C25589F_1593632166435_image.png)

* Enter your Credentials

*Note: these are NOT the same as your Studio account. This is for the super user you are creating on this database instance. Username should NOT be in the form of an email. Instance Name is for reference only.*

![](https://paper-attachments.dropbox.com/s_694AC5CA9308096771EF7B00CE87C94A0B19247B0A8F1B3279168F6A6C25589F_1593632304933_image.png)

* Pick your instance specs. You can start with a free version and always upgrade later. Also pick the AWS region closest to you to reduce latency.

![](https://paper-attachments.dropbox.com/s_09C1E3EC4CD39FB8BA6BE455CA9A3F593F112E47AFE1C751F7352036819262EE_1593636252765_image.png)

* Click Confirm Instance Details
* Review your instance details

*Note: The Instance URL is how you will access HarperDB with REST calls*

![](https://paper-attachments.dropbox.com/s_694AC5CA9308096771EF7B00CE87C94A0B19247B0A8F1B3279168F6A6C25589F_1593632429266_image.png)

* Select I agree
* Click Add Instance. **Note: Your HarperDB Cloud instance will be provisioned immediately, but it takes a few minutes to set up**
* In 5 to 10 minutes you will receive an email letting you know that your instance is ready. In the background we are spinning up a CloudFormation template, a network load balancer, a VPN, and an EC2 instance, all with HarperDB running on it.

![](https://paper-attachments.dropbox.com/s_09C1E3EC4CD39FB8BA6BE455CA9A3F593F112E47AFE1C751F7352036819262EE_1593636645125_image.png)

* Log back into the Studio
* Click back into your org
* Click on your instance

![](https://paper-attachments.dropbox.com/s_09C1E3EC4CD39FB8BA6BE455CA9A3F593F112E47AFE1C751F7352036819262EE_1593636945627_Screen+Shot+2020-07-01+at+2.53.26+PM.png)

* You are all set! Make sure to check out the example code for help getting started!

![](https://paper-attachments.dropbox.com/s_09C1E3EC4CD39FB8BA6BE455CA9A3F593F112E47AFE1C751F7352036819262EE_1593636931911_Screen+Shot+2020-07-01+at+2.55.00+PM.png)
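Once the instance is up, talking to it via the Instance URL mentioned above is just JSON over HTTP: you POST an operation object with Basic auth. As a hedged sketch (the URL and credentials below are placeholders, `build_harperdb_request` is a helper name I made up, and the exact operation payload should be checked against the example code in the Studio), here is roughly what assembling such a REST call could look like in Python:

```python
import base64
import json
import urllib.request


def build_harperdb_request(instance_url, username, password, operation):
    # HarperDB-style REST calls send a single JSON body describing the
    # operation, authenticated with an HTTP Basic auth header.
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return urllib.request.Request(
        instance_url,
        data=json.dumps(operation).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {token}",
        },
        method="POST",
    )


req = build_harperdb_request(
    "https://my-instance.harperdbcloud.com",  # placeholder Instance URL
    "HDB_ADMIN",                              # placeholder credentials
    "password123",
    {"operation": "sql", "sql": "SELECT 1"},
)
# urllib.request.urlopen(req) would actually send it; here we only build it.
```

Swapping in your real Instance URL and the super-user credentials you chose above is all that should be needed to start experimenting.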
stephengoldberg
378,144
ng-package vs. package.json
If we hang around building libraries in Angular we're bound to run into how these two files work toge...
0
2020-07-01T22:01:08
https://dev.to/jwp/ng-package-vs-package-json-2amp
angular, typescript
If we hang around building libraries in Angular we're bound to run into how these two files work together.

If our library package.json looks like this:

**package.json**

```json
{
  "name": "msd",
  "version": "0.0.5",
✔️"peerDependencies": {
    "@angular/common": "^8.2.0",
    "@angular/core": "^8.2.0",
    "hammerjs": "^2.0.8",
    "install": "^0.13.0",
    "npm": "^6.14.5",
    "rxjs": "^6.5.5",
    "zone.js": "^0.9.1",
    "@fortawesome/angular-fontawesome": "^0.5.0",
    "@fortawesome/fontawesome-free": "^5.13.0",
    "@fortawesome/fontawesome-svg-core": "^1.2.27",
    "@fortawesome/free-regular-svg-icons": "5.13.0",
    "@fortawesome/free-solid-svg-icons": "5.13.0"
  },
✔️"devDependencies": {
    "@angular/animations": "^8.2.14",
    "@angular/cdk": "^8.2.3",
    "@angular/common": "^8.2.14",
    "@angular/compiler": "^8.2.14",
    "@angular/core": "^8.2.14",
    "@angular/forms": "^8.2.14",
    "@angular/material": "^8.2.3",
    "@angular/platform-browser": "^8.2.14",
    "@angular/platform-browser-dynamic": "^8.2.14",
    "@angular/router": "^8.2.14",
    "@microsoft/signalr": "^3.1.5"
  }
}
```

We have two sections of dependencies, peer and dev.

If we compile our library and see this:

![No name was provided for external module](https://dev-to-uploads.s3.amazonaws.com/i/vcx588ps6a1e7smp2uy8.png)

We have to dig a bit deeper into understanding how the Angular (npm) Packager configuration can stop these messages.

**ng-package.json**

```json
{
  "$schema": "../../node_modules/ng-packagr/ng-package.schema.json",
  "dest": "../../dist/msd",
  "lib": {
    "entryFile": "src/public-api.ts",
    "umdModuleIds": {
      "@fortawesome/angular-fontawesome": "angularFontAwesome",
      "@fortawesome/free-solid-svg-icons": "freeSolidSvgIcons",
      "@fortawesome/free-regular-svg-icons": "freeRegularSvgIcons",
      "@microsoft/signalr": "signalr"
    }
  },
  "whitelistedNonPeerDependencies": ["@angular/*"]
}
```

If we think of package.json as the front-end pre-compile configuration, and ng-package.json as the post-compilation interface to webpack, we begin to see the relationship. The *whitelistedNonPeerDependencies* are called out by the compiler errors; those errors tell us exactly what to put into the configuration file. Why? I don't know and right now don't care. I just want to focus on publishing the package!

One last tip: we must always bump the package.json version number each time we publish.

JWP 2020 NPM Publish, Package.Json Version, ng-package.json umdModuleIds
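To make the relationship concrete: as far as I can tell, each "No name was provided for external module 'X'" warning maps to one `umdModuleIds` entry, where the value is the global variable name the module's UMD bundle exposes. So if the compiler complained about a hypothetical dependency `some-umd-dependency` (a made-up name for illustration), the fix would be one more line under `lib.umdModuleIds`:

```json
{
  "lib": {
    "umdModuleIds": {
      "@microsoft/signalr": "signalr",
      "some-umd-dependency": "someUmdDependency"
    }
  }
}
```

Recompile after each addition and the corresponding warning should disappear.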
jwp
378,184
Up and running with Factory Bot in Rails 5
This post is a short getting-up-and-running style of post. It expects that you have Rails 5 setup an...
0
2020-07-01T23:24:53
https://blog.dennisokeeffe.com/blog/2020-07-02-up-and-running-with-factory-bot/
beginners, ruby, rails, testing
This post is a short getting-up-and-running style of post. It expects that you have Rails 5 setup and ready to roll.

## Why Factory Bot?

From the world's most reliable resource [Wikipedia](<https://en.wikipedia.org/wiki/Factory_Bot_(Rails_Testing)>):

> Factory Bot is often used in testing Ruby on Rails applications; where it replaces Rails' built-in fixture mechanism. Rails' default setup uses a pre-populated database as test fixtures, which are global for the complete test suite. Factory Bot, on the other hand, allows developers to define a different setup for each test and thus helps to avoid dependencies within the test suite.

There is more info on the why on the [Why Factories](https://thoughtbot.com/blog/why-factories) article. This is simply a quick start to get up and going to test model validation.

## Quick start

```s
rails new <project> --api
cd <project>
gem install rspec-rails factory_bot_rails
```

## Update Gemfile config

In the Gemfile:

```ruby
group :development, :test do
  gem 'factory_bot_rails', '~>6.0'
  gem 'rspec-rails', '>= 3.9.0'
end
```

Run `bundle install`.

## Automatic factory definition loading

From the docs:

> By default, factory_bot_rails will automatically load factories defined in the following locations, relative to the root of the Rails project:

```s
factories.rb
test/factories.rb
spec/factories.rb
factories/*.rb
test/factories/*.rb
spec/factories/*.rb
```

If you want to, you can set custom configuration in `config/application.rb` or the appropriate env config.

```s
config.factory_bot.definition_file_paths = ["custom/factories"]
```

This will cause factory_bot_rails to automatically load factories in `custom/factories.rb` and `custom/factories/*.rb`.

## Config

Add the following configuration to `test/support/factory_bot.rb`:

```rb
# test/support/factory_bot.rb
require "factory_bot_rails"

RSpec.configure do |config|
  config.include FactoryBot::Syntax::Methods
end
```

Be sure to require that file in `test/test_helper.rb`:

```rb
# test/test_helper.rb
ENV["RAILS_ENV"] ||= "test"
require_relative "../config/environment"
require_relative "./support/factory_bot"
require "rails/test_help"
require "rspec/rails"

class ActiveSupport::TestCase
  # Setup all fixtures in test/fixtures/*.yml for all tests in alphabetical order.
  fixtures :all

  # Add more helper methods to be used by all tests here...
end
```

## Create a model

From the [guides](https://guides.rubyonrails.org/getting_started.html#creating-the-article-model), we are going to generate a new model.

```s
rails generate model Article title:string text:text

# run the migration
rails db:migrate
```

If successful, the migration should return:

```s
== CreateArticles: migrating ==================================================
-- create_table(:articles)
   -> 0.0019s
== CreateArticles: migrated (0.0020s) =========================================
```

## Update Ruby

Update `app/models/article.rb` to look like the following:

```rb
class Article < ApplicationRecord
  validates :title, presence: true, length: {minimum: 5}
  validates :text, presence: true, length: {minimum: 5}
end
```

## Add the following to the factories directory

```rb
# test/factories/articles.rb
FactoryBot.define do
  factory :article do
    title { "MyString" }
    text { "MyText" }
  end
end
```

## Add an RSpec test for the model

```rb
# test/models/article_test.rb
require "./test/test_helper"

class ArticleTest < ActiveSupport::TestCase
  describe "article model" do
    before(:all) do
      @article1 = FactoryBot.create(:article)
    end

    it "is valid with valid attributes" do
      expect(@article1).to be_valid
    end

    it "is not valid without a title" do
      article2 = FactoryBot.build(:article, title: nil)
      expect(article2).to_not be_valid
    end

    it "is not valid without text" do
      article2 = FactoryBot.build(:article, text: nil)
      expect(article2).to_not be_valid
    end

    it "is not valid without a title of min length 5" do
      article2 = FactoryBot.build(:article, title: "Min")
      expect(article2).to_not be_valid
    end

    it "is not valid without text of min length 5" do
      article2 = FactoryBot.build(:article, text: "Min")
      expect(article2).to_not be_valid
    end
  end
end
```

## Running the test

```s
rspec test/models/article_test.rb
```

We should get something like the following out:

```s
.....

Finished in 0.04765 seconds (files took 0.90722 seconds to load)
5 examples, 0 failures

Run options: --seed 18801

# Running:

Finished in 0.001607s, 0.0000 runs/s, 0.0000 assertions/s.
0 runs, 0 assertions, 0 failures, 0 errors, 0 skips
```

## Resources and Further Reading

1. [thoughtbot/factory_bot_rails](https://github.com/thoughtbot/factory_bot_rails)
2. [thoughtbot/factory_bot](https://github.com/thoughtbot/factory_bot)
3. [Why Factories?](https://thoughtbot.com/blog/why-factories)
4. [Creating an Article model in Rails](https://guides.rubyonrails.org/getting_started.html#creating-the-article-model)
5. [Testing RSpec](https://semaphoreci.com/community/tutorials/how-to-test-rails-models-with-rspec)

_Image credit: [Alex Knight](https://unsplash.com/@agkdesign)_

_Originally posted on my [blog](https://blog.dennisokeeffe.com/blog/2020-07-02-up-and-running-with-factory-bot/). Follow me on Twitter for more hidden gems [@dennisokeeffe92](https://twitter.com/dennisokeeffe92)._
okeeffed
378,320
Creating GitHub Action to Deploy Projects into a Private Maven Repository
I have been heavily using other CI/CD platforms like GitLab, BitBucket, Jenkins, and Azure DevOps but...
0
2020-07-02T19:35:04
https://dev.to/fk/creating-github-action-to-deploy-projects-into-a-private-maven-repository-2jf0
java, kotlin, git
---
title: Creating GitHub Action to Deploy Projects into a Private Maven Repository
published: true
date: 2020-07-01 22:32:55 UTC
tags: java,kotlin,git
canonical_url:
---

I have been heavily using other CI/CD platforms like GitLab, BitBucket, Jenkins, and Azure DevOps but I can definitely say GitHub nailed it. It has a different and easy-to-use approach and I am surprised that you can add custom runners for your CI/CD pipeline.

I will create a very simple Kotlin application to show how to deploy a library in a private Maven repository. I am going to use [Repsy](https://repsy.io) for Maven repository hosting. It gives 3 GB of free storage and should be sufficient for most cases.

Let's create a Gradle project. Your IDE will help with this, or you can install the Gradle CLI from the official [gradle](https://gradle.org/) site, or you can use [SdkMan](https://sdkman.io/) for this purpose.

```shell
sdk i gradle
```

Gradle CLI will help you to bootstrap your project.

```shell
mkdir hello-actions-kotlin-lib
cd hello-actions-kotlin-lib
gradle init
```

Gradle will ask us a bunch of questions. I will select the `library` type for project generation.

```
Select the type of project to generate:
  1: basic
  2: application
  3: library
  4: Gradle plugin
Enter selection (default: basic) [1..4] 3
```

Kotlin for programming language selection. I'm happy to see plenty of options there.

```
Select implementation language:
  1: C++
  2: Groovy
  3: Java
  4: Kotlin
  5: Scala
  6: Swift
Enter selection (default: Java) [1..6] 4
```

And going to use Kotlin DSL instead of Groovy.

```
Select build script DSL:
  1: Kotlin
  2: Groovy
Enter selection (default: Kotlin) [1..2] 1
```

Just press enter for the default project name or you can enter another one suitable for you.

```
Project name (default: hello-actions-kotlin-lib):
```

For the source package selection, I am going to use my GitHub TLD but feel free to use your package structure.

```
Source package (default: hello.actions.kotlin.lib): com.github.firatkucuk.hello_actions_kotlin_lib
```

As a final question, Gradle will ask you the Java version. You may want to select the latest LTS version (21) or one of the safest ones, `8` or `11`. I'll choose 17.

```
Enter target version of Java (min. 7) (default: 21): 17
```

It seems a little bit long but that's OK. Gradle will create a simple `Library.kt` file. Let's rename it `Hello.kt` and change the content.

```kotlin
package com.github.firatkucuk.hello_actions_kotlin_lib

class Hello {

    fun sayHello(text: String): String {
        return "Hello $text"
    }
}
```

And same for the test file `LibraryTest.kt`. Let's rename it to `HelloTest.kt` and change the test file content to:

```kotlin
package com.github.firatkucuk.hello_actions_kotlin_lib

import kotlin.test.Test
import kotlin.test.assertEquals

class HelloTest {

    @Test
    fun sayHelloMethodReturnsHello() {
        val classUnderTest = Hello()
        assertEquals("Hello World", classUnderTest.sayHello("World"), "sayHello method should return 'Hello World'")
    }
}
```

Now, we can test the build operation:

```shell
gradle build
```

The next phase is publishing our library to our private maven repository at [repsy.io](https://repsy.io). After signup, [repsy](https://repsy.io) creates a default repository that can be used with the account password. So you only need to sign up, and that's all.

Let's modify our `build.gradle.kts` file for publishing the artifact. The most important part is adding the `maven-publish` plugin and a publishing section with the credentials from repsy.

```kotlin
version = "1.0.0"

plugins {
    // Apply the org.jetbrains.kotlin.jvm Plugin to add support for Kotlin.
    id("org.jetbrains.kotlin.jvm") version "1.9.10"

    // Apply the java-library plugin for API and implementation separation.
    `java-library`

    `maven-publish`
}

repositories {
    // Use Maven Central for resolving dependencies.
    mavenCentral()
}

val repsyUrl: String by project
val repsyUsername: String by project
val repsyPassword: String by project

publishing {
    publications {
        create<MavenPublication>("maven") {
            from(components["java"])
        }
    }
    repositories {
        maven {
            url = uri(repsyUrl)
            credentials {
                username = repsyUsername
                password = repsyPassword
            }
        }
    }
}

dependencies {
    // Use the Kotlin JUnit 5 integration.
    testImplementation("org.jetbrains.kotlin:kotlin-test-junit5")

    // Use the JUnit 5 integration.
    testImplementation("org.junit.jupiter:junit-jupiter-engine:5.9.3")

    testRuntimeOnly("org.junit.platform:junit-platform-launcher")

    // This dependency is exported to consumers, that is to say found on their compile classpath.
    api("org.apache.commons:commons-math3:3.6.1")

    // This dependency is used internally, and not exposed to consumers on their own compile classpath.
    implementation("com.google.guava:guava:32.1.1-jre")
}

// Apply a specific Java toolchain to ease working on different environments.
java {
    toolchain {
        languageVersion.set(JavaLanguageVersion.of(17))
    }
}

tasks.named<Test>("test") {
    // Use JUnit Platform for unit tests.
    useJUnitPlatform()
}
```

Our project implementation is completed. Let's create the git integration of our project. First, create an empty git project called `hello-actions-kotlin-lib` on GitHub. After that, we commit the changes and push the files to GitHub.

```shell
git init
git add .
git commit -m "initial commit"
git remote add origin git@github.com:YOUR_GITHUB_USER_NAME/hello-actions-kotlin-lib.git
git push origin main
```

The next step is integrating your project with GitHub Actions. Go to `https://github.com/YOUR_USER_NAME/hello-actions-kotlin-lib/actions/new`

Please find the **Java with Gradle** Action. Then you can click the **Set up this workflow** button. An online editor will be opened up. Let's directly commit the changes and we will see our Action run immediately. The GitHub Action will fail, so please ignore this; it requires a properties file, and we're going to add the properties as a GitHub secret.

We need to create a GitHub secret for our `gradle.properties` file. But because of limitations, GitHub cannot use file content as secrets. GitLab had the same restriction once upon a time; later they introduced file-type variables. So we need to create a text file on our local computer. The content should be something like this:

```properties
repsyUrl=https://repo.repsy.io/mvn/YOUR_REPSY_USERNAME/default
repsyUsername=YOUR_REPSY_USERNAME
repsyPassword=YOUR_REPSY_PASSWORD
```

We need to encode this with base64. There are many online base64 converters. If you're using Mac/Linux it should be as easy as this:

```bash
base64 -w0 gradle.properties
```

Let's copy the output and create a secret. Go to `https://github.com/YOUR_USER_NAME/hello-actions-kotlin-lib/settings/secrets`

Add a GitHub secret called GRADLE_PROPERTIES and paste the encoded string into it. Now we can open our pipeline configuration file and make the latest changes. Go to `https://github.com/YOUR_USER_NAME/hello-actions-kotlin-lib/blob/master/.github/workflows/gradle.yml` and click the edit button. We will add some extra steps for injecting the Gradle properties.

```yaml
name: Java CI with Gradle

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v4.1.1
    - name: Set up JDK 17
      uses: actions/setup-java@v3.13.0
      with:
        distribution: 'temurin'
        java-version: 17
    - name: Grant execute permission for gradlew
      run: chmod +x gradlew
    - name: Build with Gradle
      env:
        GRADLE_PROPERTIES: ${{ secrets.GRADLE_PROPERTIES }}
      run: mkdir -p ~/.gradle && echo $GRADLE_PROPERTIES | base64 -d > ~/.gradle/gradle.properties && cat ~/.gradle/gradle.properties && ./gradlew build
    - name: publish
      run: ./gradlew publish
```

That's all! You can find the source code [here](https://github.com/firatkucuk/hello-actions-kotlin-lib).
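The secret-injection trick above hinges on the base64 encode/decode round-trip reproducing `gradle.properties` byte-for-byte. If you want to sanity-check that logic without the `base64` CLI, here is a small Python equivalent (the file content is the same placeholder properties as above):

```python
import base64

properties = (
    "repsyUrl=https://repo.repsy.io/mvn/YOUR_REPSY_USERNAME/default\n"
    "repsyUsername=YOUR_REPSY_USERNAME\n"
    "repsyPassword=YOUR_REPSY_PASSWORD\n"
)

# Equivalent of `base64 -w0 gradle.properties`: a single line with no
# wrapping, safe to paste into a GitHub secret.
encoded = base64.b64encode(properties.encode("utf-8")).decode("ascii")

# Equivalent of `echo $GRADLE_PROPERTIES | base64 -d` on the runner.
decoded = base64.b64decode(encoded).decode("utf-8")

assert decoded == properties  # the round-trip is lossless
```

The `-w0` flag matters for the same reason the code encodes to one line: GNU `base64` wraps output at 76 characters by default, and a wrapped value pasted into a secret can break the decode step.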
fk
378,509
Learning apache kafka
SVR Technologies works with an operation to be one of the finest apache kafka online training cent...
0
2020-07-02T06:30:13
https://dev.to/svrtechnologie/learning-apache-kafka-2een
kafkacourse
SVR Technologies strives to be one of the finest <a href="https://svrtechnologies.com/kafka-training/"> apache kafka online training </a> centers, offering the latest updated Kafka course training in various environments.

What is Kafka? Kafka is open-source software that provides a framework for storing, reading and analysing streaming data. It is fundamentally free to use, and it has a large network of users and developers who contribute updates, new features and support for new users, and it has high reliability and replication characteristics. Kafka is designed to run in a distributed environment: rather than sitting on one user's computer, it runs across many servers. This brings additional processing power and storage capacity.

Why learn Apache Kafka? Kafka training helps you gain expertise in Kafka architecture, installation, configuration, performance tuning, Kafka client APIs such as the Producer, Consumer and Streams APIs, Kafka administration, the Kafka Connect API, and Kafka integration with Hadoop, Storm and Spark, using a Twitter streaming use case.
svrtechnologie
378,636
What is all that stuff in this frontend repo?
Introduction You are getting ready for your next assignment. Should be an easy job, just...
0
2020-07-02T11:32:28
https://dev.to/justusromijn/what-is-all-that-stuff-in-this-frontend-repo-4o02
javascript, git, npm, beginners
# Introduction

> You are getting ready for your next assignment. Should be an easy job, just update some templates to implement a new menu design, so let's get down to it. Clone this git repo, alright! Wait... wut... what is all this stuff?

![Huh?](https://media.giphy.com/media/l3q2K5jinAlChoCLS/giphy.gif)

My assumption is that a lot of developers have gone through such a moment, where you look a new project in the face and think: what is all this stuff? To help you get back down in your seat again and approach this with some confidence, I will drill down some of the more common frontend setups that you will encounter anno 2020.

_Note: this is (of course) not a full, exhaustive list. Every project is different, and I've seen some rare custom setups over time. This article is aimed at helping starting developers find their way in most projects._

# Anatomy of frontend repositories

### Files

Independent of framework or type of project, there's going to be a bunch of files in the root folder.

- `README.md` ([make a readme](https://www.makeareadme.com/))
  Always start here. A lot of tools by default open a README file if they find it in the root. Most of the time, this is the best place to find documentation written by the actual maintainers of this project about how to get started, requirements to be able to run it, and possible other details that are relevant.
- `LICENSE` ([license help](https://choosealicense.com/))
  Some legal information about usage, warranty and sharing of the code. Often refers to standard software licenses like MIT, GNU, etc.
- `package.json` ([npm docs](https://docs.npmjs.com/files/package.json))
  This is also important to peek into. A package.json file indicates that this project relies on the `npm` ecosystem. Whether or not this project is actually exposed publicly, besides details like the name/description/author of this project, it usually also lists dependencies (other packages from npm).
  Another important aspect is that it often lists a couple of npm scripts that perform certain tasks within a project, like installing dependencies, starting a development environment, testing the codebase and building/bundling for production. For Node projects, the `main` field in the package.json is rather important, as it marks the entry point for the package. For website packages, this is not relevant.
- `package-lock.json` ([npm docs](https://docs.npmjs.com/configuring-npm/package-lock-json.html))
  The package lockfile holds metadata about which dependencies were installed in node_modules. This is very useful for reproducing a specific situation exactly, as by design dependency versions can differ depending on when you run your install command (by allowing patch and minor updates, see [semver](https://semver.org/)).
- `.gitignore` ([git on gitignore](https://git-scm.com/docs/gitignore))
  This file has instructions on what to exclude from version control. It can be specific files, as well as entire directories. Think about the build output of your project, temporary folders or dependencies. Common items include `node_modules`, `tmp`, `dist`, `www`, `build` and so on.
- `.editorconfig` ([editorconfig docs](https://editorconfig.org/))
  To avoid unneeded clashes in handling character sets and whitespace, this file helps editors pick (among other things) tabs vs spaces, the level of indentation and how to handle newlines, based on filename/extension.
- `.[SOMETHING]rc`
  What exactly `RC` stands for is [not entirely clear](https://stackoverflow.com/questions/11030552/what-does-rc-mean-in-dot-files), but all those RC files are basically configurations for anything that runs in your project and supports it. Often you will find these: `.npmrc`, `.babelrc`, etc.
- `[SOMETHING].config.js` `[SOMETHING].config.json`
  Configuration settings for the specified... thing.
  Think of linters (`eslint`, `prettier`), transpilers (`babel`, `traceur`), bundlers (`rollup`, `parcel`, `webpack`), typescript (`ts`), etc.

### Folders

- `node_modules` ([npm on folders](https://docs.npmjs.com/configuring-npm/folders.html))
  All installed dependencies go in here. Usually this folder is created once you run `npm install` or `yarn install`, as it is almost always ignored by git (see `.gitignore`).
- `scripts` (undocumented convention)
  Command-line actions from the package.json often refer to executing files in this folder. Building, linting, testing: the instructions for performing these tasks are often in here.
- `src` (undocumented convention)
  The real meat: the source code of this project. Probably 90% or more of the repo activity has its place in this folder.
- `assets` (undocumented convention)
  Any audio, image, font, video or other static assets are often stored together here.
- `build` | `dist` (undocumented convention, [Stack Overflow question](https://stackoverflow.com/questions/22842691/what-is-the-meaning-of-the-dist-directory-in-open-source-projects))
  The bundled or optimized output of the source code. Depending on the goal of this repo, this may or may not be included in `git`, so you might have to run some build script first before this will be summoned into existence.
- `tmp` | `.tmp` (undocumented convention)
  When running projects in development mode, they often need a temporary workspace to serve to the browser, or a place for intermediate files. Either way, this folder is, as it states, temporary. Don't expect it to be there for long.
- `bin` (convention, probably originating in [linux](http://www.linfo.org/bin.html) and other operating systems)
  Any command-line executables are defined here. In the scope of frontend projects, it is mostly limited to some command-line utilities like scaffolding tools (for example, to generate new pages or components).
- `lib` | `vendor` (undocumented convention)
  Sometimes you need libraries that you cannot, or do not want to, rely on through npm. Third-party assets are often stored in this folder.
- `test` (undocumented convention)
  For tests that you don't want next to your source code, there is a separate directory. Direct page testing is often written in this folder.

### Enjoy your journey

This is just scratching the surface; hopefully this gives beginning developers a clue of what to expect when starting with projects. Basically my advice usually is:

- Start with the `README`! Other maintainers want you to read this first before getting your hands dirty;
- Next up: `package.json`: see what script instructions there are for installation, development, testing and building.
- Let's get to it! `src`: look at how this folder is organised; you will probably start recognising things here and get a clue of how to get things done.

I know that those instructions sound almost blatantly straightforward, but how often did I have someone at my desk asking how to get a project up and running, where I would reply... Did you read the README?

![kid licking wrong finger to turn the newspaper page](https://media.giphy.com/media/myub4wKKdR0Wc/giphy.gif)

Some follow-up for this could be a repository which holds a lot of those files with comments and READMEs, as a community-driven effort to explain what it all does in a nice, kind-of interactive way. Let me know in the comments if you would like to see such an initiative!
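PS: as a tiny illustration of the `package.json` tip above, here is a (hypothetical) helper that lists the npm scripts in a manifest, which is usually the first thing I check in a new repo. The manifest content here is made up for the example:

```typescript
// A minimal package.json as you might find in a fresh repo (illustrative content)
const manifest = JSON.stringify({
  name: 'some-frontend-project',
  main: 'dist/index.js',
  scripts: {
    start: 'webpack serve --mode development',
    build: 'webpack --mode production',
    test: 'jest',
  },
  dependencies: { react: '^17.0.0' },
});

// List the npm scripts a newcomer can run with `npm run <name>`
function listScripts(packageJson: string): string[] {
  const parsed = JSON.parse(packageJson) as { scripts?: Record<string, string> };
  return Object.keys(parsed.scripts ?? {});
}

console.log(listScripts(manifest)); // → [ 'start', 'build', 'test' ]
```

In real life you'd just open the file, of course, but the point stands: the `scripts` block tells you how the maintainers expect the project to be run.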
justusromijn
378,654
Building a Modern Web Application with Neo4j and NestJS
This article is the introduction to a series of articles and a Twitch stream on the Neo4j Twitch chan...
0
2020-07-02T08:52:46
https://dev.to/adamcowley/building-a-modern-web-application-with-neo4j-and-nestjs-38ih
neo4j, typescript, nestjs
This article is the introduction to a series of articles and a Twitch stream on the [Neo4j Twitch channel](https://twitch.tv/neo4j_) where I build an application on top of Neo4j with NestJS and a yet-to-be-decided front end. This week I built a Module and Service for interacting with Neo4j.

<iframe width="560" height="315" src="https://www.youtube.com/embed/9sNgCiPnhZE" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

**TL;DR: I've pushed the [code to Github](https://github.com/adam-cowley/twitch-project) and created a [Neo4j module for NestJS](https://github.com/adam-cowley/nest-neo4j) to save you some time.**

Over the past few weeks I have been spending an hour live streaming something that I have found interesting that week, but from this week I thought I would change things up and start to build out a project on Neo4j.

## Tech Stack

### Neo4j

If you're subscribed to this channel then you are likely familiar with Neo4j, but if not: Neo4j is the world's leading graph database. Rather than tables or documents, Neo4j stores its data in nodes - those nodes are categorised by labels and contain properties as key/value pairs. Those nodes are connected together by relationships, which are categorised by a type and can also contain properties as key/value pairs.

```
(a:Person {name: "Adam"})-[:USES_DATABASE {since: 2015}]->(neo4j:Database:GraphDatabase {name: "Neo4j", homepage: "neo4j.com"})
```

What sets Neo4j apart from other databases is its ability to query connected datasets. Where traditional databases build up joins between records at read time, Neo4j stores the connections as relationships at write time, so they can be traversed quickly when reading.

Neo4j is schema-optional - meaning that you can enforce a schema on your database if necessary by adding unique or exists constraints on nodes and relationships.

### Typescript

I've been experimenting with [Typescript](https://www.typescriptlang.org/) for a while now, and the more I use it the more I like it.
Typescript is essentially Javascript with additional static typing. Under the hood it compiles down to plain Javascript, but it improves the developer experience a lot and allows you to identify problems in real time as you are writing your code.

### NestJS

By far the best framework I have seen that supports Typescript is NestJS. NestJS is an opinionated framework for building server-side applications. It includes features you'd expect in a mature framework like Spring Boot or Laravel - mainly Dependency Injection.

## Week 1 - Nest fundamentals & Neo4j Integration

Nest comes with a CLI with many helpers for starting and developing a project. You can install it by running:

```
npm i --global @nestjs/cli
nest --help
```

Once it's installed, you can use the `new` or `n` command to create a new project.

```
nest new api
```

After selecting the package manager of your choice, the CLI command will generate a new project and install any dependencies. Once it's done, you can `cd` into the directory and then run `npm run start:dev` to fire up the development server.

In the generated `src/` folder, you'll see:

- `main.ts` - The main entry point of the application; this creates a Nest application instance
- `app.module.ts` - This is the root module of the application, where you define the child modules that are used in the application
- `app.controller.ts` - This is a basic controller, where you can define REST endpoints on the server

### Nest modules

Functionality in Nest is grouped into *modules*; the [official documentation uses Cats](https://docs.nestjs.com/modules) as its example. Modules are a way of grouping related functionality together. In the Cats example, the module *provides* a CatsService which handles the application's interactions with Cats, and a CatsController which registers routes that define how the Cats are accessed.
Module classes are defined by a `@Module` annotation, which in turn defines which child modules are imported into the module, any controllers that are defined in the module, and any classes that are exported from the module and made available for dependency injection. Take the annotation on the [Cats example in the documentation](https://docs.nestjs.com/modules#module-re-exporting): this says that the CatsModule registers a single controller, `CatsController`, and provides the `CatsService`.

```
@Module({
  controllers: [CatsController],
  providers: [CatsService],
})
```

The CatsService is registered with the Nest instance and can then be *injected* into any class.

### `@Injectable()` classes

Classes annotated with `@Injectable()` are automatically injected into a class using some under-the-hood Nest "magic". For example, by defining the `CatsService` in the constructor for the `CatsController`, Nest will automatically resolve this dependency and inject it into the class without any additional code. This is identical to how things work in more mature frameworks like [Spring](https://spring.io) and [Laravel](https://laravel.io).

```ts
import { Controller } from '@nestjs/common';
import { CatsService } from './cats.service';

@Controller()
export class CatsController {
  constructor(private catsService: CatsService) {}
}
```

Dependency Injection is a software technique where a class is "injected" with instances of other classes that it depends on. This makes the testing process easier, because dependencies can be replaced with mocks instead of instantiating the real classes. It also promotes the principles of DRY (don't repeat yourself) and SOLID. Each class should have a single responsibility - for example, a `User` service should only be concerned with acting on a User's record, not with how that record is persisted to a database.
### Nest Integration for Neo4j

In order to use [Neo4j](https://neo4j.com) in services across the application, we can define a Neo4jService for interacting with the graph through the JavaScript driver. This service should provide the ability to interact with Neo4j without the consumer needing to know any of the internals. This service should be wrapped in a module which can be registered in the application.

The first step is to install the Neo4j driver.

```sh
npm i --save neo4j-driver
```

Then, we can use the CLI to generate a new module with the name Neo4j.

```sh
nest g mo neo4j # shorthand for `nest generate module neo4j`
```

The command will create a `neo4j/` folder with its own module. Next, we can use the CLI to generate the service:

```sh
nest g s neo4j # shorthand for `nest generate service neo4j`
```

This command will generate `neo4j.service.ts` and append it to the `providers` array in the module so it can be injected into any application that uses the module.

#### Configuration & Dynamic Modules

By default, these modules are registered as static modules. In order to add configuration to the driver, we'll have to add a static method which accepts the user's Neo4j credentials and returns a `DynamicModule`.

The first thing to do is generate an interface that will define the details allowed when instantiating the module.

```sh
nest g interface neo4j-config
```

The driver takes a connection string and an authentication method. I like to split the connection string into parts; this way we can validate the scheme. The scheme (or protocol) at the start of the URI should be a string, and one of the following options:

```ts
export type Neo4jScheme = 'neo4j' | 'neo4j+s' | 'neo4j+ssc' | 'bolt' | 'bolt+s' | 'bolt+ssc'
```

The host should be a string, the port should be either a number or a string, and the username and password should be strings.
The database should be an optional string; if the driver connects to a 3.x version of Neo4j then this isn't a valid option, and if none is supplied then the driver will connect to the default database (as defined in neo4j.conf - `dbms.default_database`).

```ts
export interface Neo4jConfig {
  scheme: Neo4jScheme;
  host: string;
  port: number | string;
  username: string;
  password: string;
  database?: string;
}
```

Next, for the static method which registers the dynamic module, the documentation recommends using the naming convention of `forRoot` or `register`. The function should return a [`DynamicModule`](https://docs.nestjs.com/fundamentals/dynamic-modules) - this is basically an object that contains metadata about the module. The `module` property should return the type of the module - in this case `Neo4jModule`. This module will provide the `Neo4jService`, so we can add the class to the `providers` array.

```ts
// ...
export class Neo4jModule {
  static forRoot(config: Neo4jConfig): DynamicModule {
    return {
      module: Neo4jModule,
      providers: [
        Neo4jService,
      ],
    }
  }
  // ...
}
```

Because we are providing a configuration object, we'll need to register it as a provider so that it can be injected into the `Neo4jService`. For providers that are not defined globally, we can define a unique reference to the provider and assign it to a variable. We will use this later on when injecting the config into the service. The `useValue` property instructs Nest to use the config value provided as the first argument.

```ts
// Reference for Neo4j connection details
const NEO4J_OPTIONS = 'NEO4J_OPTIONS'

export class Neo4jModule {
  static forRoot(config: Neo4jConfig): DynamicModule {
    return {
      module: Neo4jModule,
      providers: [
        {
          // Injected into a class with @Inject(NEO4J_OPTIONS)
          provide: NEO4J_OPTIONS,
          useValue: config,
        },
        Neo4jService,
      ],
    }
  }
  // ...
}
```

If the user supplies incorrect credentials, we don't want the application to start.
We can create an instance of the driver and verify the connectivity using an [asynchronous provider](https://docs.nestjs.com/fundamentals/async-providers). An async provider is basically a function that, given a set of configuration parameters, returns an instance that is configured at runtime.

In a new file, `neo4j.utils.ts`, create an `async` function to create an instance of the driver and call `verifyConnectivity()` to verify that the connection has been successful. If this function throws an error, the application will not start.

```ts
import neo4j from 'neo4j-driver'
import { Neo4jConfig } from './interfaces/neo4j-config.interface'

export const createDriver = async (config: Neo4jConfig) => {
  // Create a Driver instance
  const driver = neo4j.driver(
    `${config.scheme}://${config.host}:${config.port}`,
    neo4j.auth.basic(config.username, config.password)
  )

  // Verify the connection details or throw an Error
  await driver.verifyConnectivity()

  // If everything is OK, return the driver
  return driver
}
```

The function accepts the `Neo4jConfig` object as the only argument. Because the config has already been defined as a provider, we can list it in the `inject` array when defining the driver provider.

```ts
// Import the factory function
import { createDriver } from './neo4j.utils'

// Reference for the Neo4j driver
const NEO4J_DRIVER = 'NEO4J_DRIVER'

export class Neo4jModule {
  static forRoot(config: Neo4jConfig): DynamicModule {
    return {
      module: Neo4jModule,
      providers: [
        {
          provide: NEO4J_OPTIONS,
          useValue: config,
        },
        {
          // Define a key for injection
          provide: NEO4J_DRIVER,
          // Inject NEO4J_OPTIONS defined above as an argument to the factory
          inject: [NEO4J_OPTIONS],
          // Use the factory function created above to return the driver
          useFactory: async (config: Neo4jConfig) => createDriver(config),
        },
        Neo4jService,
      ],
    }
  }
}
```

Now that the driver has been defined, it can be injected into any class in its own right by using the `@Inject()` annotation.
But in this case, we will add some useful methods to the `Neo4jService` to make it easier to read from and write to Neo4j. Because we have defined `NEO4J_DRIVER` in the `providers` array of the dynamic module, we can pass `NEO4J_DRIVER` as a single parameter to the `@Inject()` decorator in the constructor.

```ts
import { Injectable, Inject } from '@nestjs/common';
import { NEO4J_CONFIG, NEO4J_DRIVER } from './neo4j.constants'

@Injectable()
export class Neo4jService {
  constructor(
    @Inject(NEO4J_CONFIG) private readonly config,
    @Inject(NEO4J_DRIVER) private readonly driver
  ) {}
}
```

Each Cypher query run against Neo4j takes place through a session, so it makes sense to expose this as an option on the service. The default access mode of the session allows the driver to route the query to the right member of a Causal Cluster - this can be either `READ` or `WRITE`. There is also an optional parameter for the database when using [multi-tenancy in Neo4j 4.0](https://adamcowley.co.uk/neo4j/multi-tenancy-neo4j-4.0/). As I mentioned earlier, if none is supplied then the query is run against the default database.

So that the user doesn't need to worry about the specifics of read or write transactions, we should create a method for each mode - both with an optional parameter for the database. There is also a database specified in the `Neo4jConfig` object, so we should fall back to this if none is explicitly specified.

```ts
import { Driver, Session, session, Result } from 'neo4j-driver'

// ...
export class Neo4jService {
  constructor(
    @Inject(NEO4J_CONFIG) private readonly config: Neo4jConfig,
    @Inject(NEO4J_DRIVER) private readonly driver: Driver
  ) {}

  getReadSession(database?: string): Session {
    return this.driver.session({
      database: database || this.config.database,
      defaultAccessMode: session.READ,
    })
  }

  getWriteSession(database?: string): Session {
    return this.driver.session({
      database: database || this.config.database,
      defaultAccessMode: session.WRITE,
    })
  }
}
```

These methods make use of the `NEO4J_CONFIG` and `NEO4J_DRIVER` values injected into the constructor.

With that in mind, it would be useful to create a method to read data from Neo4j. The driver accepts parameterised queries as a string (i.e. queries with literal values replaced with parameters - `$myParam`) and an object of parameters, so these will be the arguments for the method. Optionally, we may want to specify which database this query is run against, so it makes sense to include that as an optional third parameter. The query then returns a `Result`, which includes the records and some additional statistics.

```ts
read(cypher: string, params: Record<string, any>, database?: string): Result {
  const session = this.getReadSession(database)
  return session.run(cypher, params)
}
```

Over the course of the application, this will save us a few lines of code. The same can be done for a write query:

```ts
write(cypher: string, params: Record<string, any>, database?: string): Result {
  const session = this.getWriteSession(database)
  return session.run(cypher, params)
}
```

### Using the Service in the Application

Now we have a service, registered in the main application through the `Neo4jModule`, that can be injected into any class in the application. As an example, let's modify the controller that was generated by the initial command. By default, the route at `/` returns a hello world message; instead, let's use it to return the number of nodes in the database.
To do this, we should first inject the `Neo4jService` into the controller:

```ts
import { Controller, Get } from '@nestjs/common';
import { Neo4jService } from './neo4j/neo4j.service'

@Controller()
export class AppController {
  constructor(private readonly neo4jService: Neo4jService) {}

  // ...
}
```

Now we can modify the `getHello` method to return a string. The constructor will automatically assign the `Neo4jService` to the class, so it is accessible through `this.neo4jService`. From there, we can use the `read()` method that we've just created to execute a query against the database.

```ts
async getHello(): Promise<any> {
  const res = await this.neo4jService.read(`MATCH (n) RETURN count(n) AS count`)

  return `There are ${res.records[0].get('count')} nodes in the database`
}
```

Navigating in the browser to http://localhost:3000 should now show a message including the number of nodes in the database.

## Tune in next week!

Tune in to the [Neo4j Twitch channel](https://twitch.tv/neo4j_) on Tuesdays at 13:00 BST / 14:00 CEST for the next episode.
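As a bonus: the URI interpolation inside `createDriver` is easy to unit test in isolation, without a running database. Here is an illustrative, driver-free sketch; the `buildUri` helper is mine for demonstration purposes and is not part of the module itself:

```typescript
type Neo4jScheme = 'neo4j' | 'neo4j+s' | 'neo4j+ssc' | 'bolt' | 'bolt+s' | 'bolt+ssc';

interface Neo4jConfig {
  scheme: Neo4jScheme;
  host: string;
  port: number | string;
  username: string;
  password: string;
  database?: string;
}

// Build the URI string exactly as createDriver() interpolates it,
// with a runtime guard for config coming from untyped sources (e.g. env vars)
function buildUri(config: Neo4jConfig): string {
  const valid: Neo4jScheme[] = ['neo4j', 'neo4j+s', 'neo4j+ssc', 'bolt', 'bolt+s', 'bolt+ssc'];
  if (!valid.includes(config.scheme)) {
    throw new Error(`Invalid scheme: ${config.scheme}`);
  }
  return `${config.scheme}://${config.host}:${config.port}`;
}

console.log(buildUri({ scheme: 'neo4j', host: 'localhost', port: 7687, username: 'neo4j', password: 'neo', database: 'neo4j' }));
// → neo4j://localhost:7687
```

Pulling pure logic like this out of the async factory keeps `createDriver` itself trivially thin, which is exactly what you want for code that is hard to test without a live connection.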
adamcowley
378,697
Apache Spark On DataProc vs Google BigQuery
Introduction When it comes to Big Data infrastructure on Google Cloud Platform, the most p...
0
2020-07-02T10:13:54
https://dev.to/sigmoidinc/apache-spark-on-dataproc-vs-google-bigquery-4654
## Introduction

When it comes to Big Data infrastructure on Google Cloud Platform, the most popular choices data architects need to consider today are Google BigQuery - a serverless, highly scalable and cost-effective cloud data warehouse, Apache Beam based Cloud Dataflow, and Dataproc - a fully managed cloud service for running [Apache Spark](https://spark.apache.org/) and [Apache Hadoop](https://hadoop.apache.org/) clusters in a simpler, more cost-efficient way.

This variety also presents challenges to architects and engineers looking at moving to Google Cloud Platform: selecting the best technology stack based on their requirements, to process large volumes of data in a cost-effective yet reliable manner.

In the following sections, we look at the research we had undertaken to provide interactive business intelligence reports and visualizations for thousands of end users. Furthermore, as these users can concurrently generate a variety of such interactive reports, we need to design a system that can analyze billions of data points in real time.

. . .

## Requirements

For technology evaluation purposes, we narrowed down to the following requirements:

1. Raw dataset of 175TB size: this dataset is quite diverse, with scores of tables and columns consisting of metrics and dimensions derived from multiple sources.
2. Catering to 30,000 unique users.
3. Serving up to 60 concurrent queries to the platform users.

The problem statement, due to the size of the base dataset and the requirement for a highly real-time querying paradigm, requires a solution in the Big Data domain.

. . .

## Salient Features of Proposed Solution

The solution took into consideration the following 3 main characteristics of the desired system:

1. Analyzing and classifying expected user queries and their frequency.
2. Developing various pre-aggregations and projections to reduce data churn while serving various classes of user queries.
3.
Developing a state-of-the-art 'Query Rewrite Algorithm' to serve the user queries using a combination of aggregated datasets. This allows the query engine to serve the maximum number of user queries with the minimum number of aggregations.

. . .

## Tech Stack Considerations

For benchmarking performance and the resulting cost implications, the following technology stacks on Google Cloud Platform were considered:

1. Cloud Dataproc + Google Cloud Storage
   For distributed processing - Apache Spark on Cloud Dataproc
   For distributed storage - Apache Parquet file format stored in Google Cloud Storage
2. Cloud Dataproc + Google BigQuery using the Storage API
   For distributed processing - Apache Spark on Cloud Dataproc
   For distributed storage - BigQuery Native Storage (Capacitor file format over Colossus storage) accessible through the BigQuery Storage API
3. Native Google BigQuery for both storage and processing - on-demand queries
   Using BigQuery Native Storage (Capacitor file format over Colossus storage) and execution on BigQuery's native MPP (Dremel query engine). All the queries were run in on-demand fashion; the project is billed on the total amount of data processed by user queries.
4. Native Google BigQuery with the fixed-price model
   Using BigQuery Native Storage (Capacitor file format over Colossus storage) and execution on BigQuery's native MPP (Dremel query engine). Slot reservations were made and slot assignments were done to dedicated GCP projects. All the queries and their processing are done on the fixed number of BigQuery slots assigned to the project.

. . .

## Tech Stack Performance Comparison

After analyzing the dataset and expected query patterns, a data schema was modeled. The dataset was segregated into various tables based on various facets. Several layers of aggregation tables were planned to speed up the user queries. All the user data was partitioned in time-series fashion and loaded into the respective fact tables.
Furthermore, various aggregation tables were created on top of these tables. All the metrics in these aggregation tables were grouped by frequently queried dimensions. In the next layer, on top of this base dataset, various aggregation tables were added where the metrics data was rolled up on a per-day basis.

All the probable user queries were divided into 5 categories:

1. Raw data, lifting over 3 months of data
2. Aggregated data, lifting over 3 months of data
3. Aggregated data over 7 days
4. Aggregated data over 15 days
5. Raw data over 1 month

The total data processed by an individual query depends on the time window being queried and the granularity of the tables being hit.

#### Query response times for large datasets - Spark and BigQuery

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/wyf3fiwd1dgb63wwd402.png)

Test configuration: total threads = 60, test duration = 1 hour, cache OFF

1) Apache Spark cluster on Cloud Dataproc: total nodes = 150 (20 cores and 72 GB), total executors = 1200
2) BigQuery cluster: BigQuery slots used = 1800 to 1900

#### Query response times for aggregated datasets - Spark and BigQuery

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/1mwj87obzyc11eadzfrg.png)

Test configuration: total threads = 60, test duration = 1 hour, cache OFF

1) Apache Spark cluster on Cloud Dataproc: total machines = 250 to 300, total executors = 2000 to 2400, 1 machine = 20 cores, 72GB
2) BigQuery cluster: BigQuery slots used = 2000

#### Performance testing on 7 days of data - BigQuery native & Spark BQ connector

It can be seen that BigQuery native has a processing time that is ~1/10 that of the Spark + BQ options.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/jmvjqrt2jymbo5kmgarz.png)

#### Performance testing on 15 days of data - BigQuery native & Spark BQ connector

It can be seen that BigQuery native has a processing time that is ~1/25 that of the Spark + BQ options.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/cry3mrfeg8s40zf1296i.png)
The relative processing time seems to reduce further as the data volume increases.

#### Longevity tests - BigQuery native REST API

Once it was established that BigQuery native outperformed the other tech stack options in all aspects, we also ran extensive longevity tests to evaluate the response time consistency of data queries on the BigQuery native REST API.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/jbos02eeq2sv4k0qm7ag.png)

It is evident from the above graph that over long periods of running the queries, the query response time remains consistent, and the system performance and responsiveness don't degrade over time.

. . .

## ETL Performance - BigQuery Native

To evaluate the ETL performance and infer various metrics with respect to the execution of ETL jobs, we ran several types of jobs at varied concurrency.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/61amkibkgwz7vavz9jlb.png)

In BigQuery, similar to interactive queries, the ETL jobs running in batch mode were very performant and finished within the expected time windows. This should allow all the ETL jobs to load hourly data into user-facing tables and complete in a timely fashion. Running the ETL jobs in batch mode has another benefit: jobs running in batch mode do not count against the maximum number of allowed concurrent BigQuery jobs per project.

. . .

## Comparing Costs - BigQuery & Spark

Here we capture the comparison undertaken to evaluate the cost viability of the identified technology stacks.

Actual data size used in exploration:
Two months' billable dataset size in BigQuery: 59.73 TB.
Two months' billable dataset size of Parquet stored in Google Cloud Storage: 3.5 TB.

The Parquet file format follows columnar storage, resulting in great compression and reducing the overall storage costs.
![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/tk4bf91g0gp7jie16yih.png)

Actual data size used in exploration:
- BigQuery 2 Months Size (Table): 59.73 TB
- Spark 2 Months Size (Parquet): 3.5 TB

In BigQuery – even though on-disk data is stored in Capacitor, a columnar file format – storage pricing is based on the amount of data stored in your tables when it is uncompressed. Hence, the data storage size in BigQuery is ~17x higher than that in Spark on GCS in Parquet format.

. . .

## Conclusion

1. For both small and large datasets, user queries’ performance on the BigQuery Native platform was significantly better than that on the Spark Dataproc cluster.
2. Query cost for both on-demand queries with BigQuery and Spark-based queries on Cloud DataProc is substantially high.
3. Using BigQuery with the flat-rate pricing model resulted in sufficient cost reduction with minimal performance degradation.
sigmoidinc
378,707
Curious case of Git delete with Jenkins pipeline
Hi Guys, Today i want to share a unique use case of delete remote git branch command with jenkins pi...
0
2020-07-02T10:30:30
https://dev.to/developerhelp/curious-case-of-git-delete-with-jenkins-pipeline-13a
devops, git, showdev, cheatsheet
Hi guys, today I want to share a unique use case of the delete-remote-git-branch command with a Jenkins pipeline. The job does some work, checks it in to a branch, merges it with master, and wants to delete the branch once everything is done. From the command line this is simply captured with the below set of commands:

```bash
git checkout branch
git commit
git push
git checkout master
git merge branch
git push
git branch -d branch             # local branch delete
git push origin --delete branch
```

This is simple if you are doing it from your dev machine, but not so with a Jenkins pipeline running in the cloud with some cloud provider. One of the key things to note here is that the git protocol is not secure and you can't directly communicate on top of it: the cloud provider's rules can see the content and will block the delete command. So we have to use HTTPS to communicate with the remote Git repo. I modified the command like this:

```groovy
withCredentials([[
  $class: 'UsernamePasswordMultiBinding',
  credentialsId: 'my-git-credential-id',
  usernameVariable: 'GIT_USERNAME',
  passwordVariable: 'GIT_PASSWORD'
]]) {
  sh 'git push origin --delete branch'
}
```

considering this would go through, but this command kept hanging without any clues. The second attempt was to change it like this, circumventing the whitelisting of the cloud provider:

```groovy
withCredentials([[
  $class: 'UsernamePasswordMultiBinding',
  credentialsId: 'my-git-credential-id',
  usernameVariable: 'GIT_USERNAME',
  passwordVariable: 'GIT_PASSWORD'
]]) {
  sh 'git push origin --delete branch https://${GIT_USERNAME}:${GIT_PASSWORD}@${GIT_URL_WITHOUT_HTTPS}'
}
```

But it still didn't work, with all the thought being put into why. Then finally came the Eureka moment, where I just interchanged the order of the arguments: with an HTTPS URL in a git command, the flags are the last thing on the command line, with the arguments preceding them.
```groovy
withCredentials([[
  $class: 'UsernamePasswordMultiBinding',
  credentialsId: 'my-git-credential-id',
  usernameVariable: 'GIT_USERNAME',
  passwordVariable: 'GIT_PASSWORD'
]]) {
  sh 'git push https://${GIT_USERNAME}:${GIT_PASSWORD}@${GIT_URL_WITHOUT_HTTPS} branch --delete'
}
```
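The argument-order behavior can be reproduced locally with plain `git` and no Jenkins at all. The sketch below (all paths are throwaway temporary directories) creates a bare repository standing in for the remote, pushes a branch, and then deletes it with the URL spelled out explicitly:

```shell
set -e
tmp=$(mktemp -d)

# Bare repo standing in for the remote, plus a working clone
git init -q --bare "$tmp/remote.git"
git clone -q "$tmp/remote.git" "$tmp/work" 2>/dev/null
cd "$tmp/work"
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m init

# Publish two branches on the "remote"
git push -q origin HEAD:refs/heads/master
git push -q origin HEAD:refs/heads/feature

# With an explicit URL, the URL comes right after `push`;
# the branch argument and the --delete flag follow it
git push -q "$tmp/remote.git" feature --delete

git ls-remote --heads "$tmp/remote.git"
```

The same ordering is what makes the Jenkins `sh` step above work: URL first, then the ref and the flag.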
developerhelp
378,812
Time Complexity for Interview Preparation
A post by Anant Rungta
0
2020-07-02T11:11:49
https://dev.to/coolrocks/time-complexity-for-interview-prepration-38pj
tutorial, computerscience, todayilearned, techtalks
{% youtube yqcspkyYyRQ %}
coolrocks
378,853
Writing a compelling resume
Your resume will be the first point of contact with the company you’re applying for. It’s absolutely...
0
2020-07-02T11:54:25
https://madsbrodt.com/blog/resume-writing/
career, codenewbie
Your resume will be the first point of contact with the company you’re applying for. It’s absolutely **crucial** that your resume creates a good first impression. The receiver of your resume can either be a recruiter or someone directly from the company. Either way, the entire goal of your resume is to get the recruiter or company to advance you to the next step in the hiring process.

## Adapt to the position

There’s a lot of stuff that goes into creating a compelling resume, but most importantly: **you have to adapt your resume to the job you’re applying for**. One of the most common mistakes is to send the same resume to 500 companies and hope to hear back - this doesn’t work. Instead, you need to personalize your resume to match the job spec. If you’re applying for a React job, put your React projects on top of your project list. If you’re applying for a data science job, highlight your strong Python capabilities. Make sure you include the most important **keywords** from the job spec.

This does not just apply to technical skills either. When you’re researching the company website and reading through the job specification, take note of **which values are important to the company**. If they talk a lot about the importance of learning and self-improvement among employees, mention how you learned to code by doing tutorials at night. Or if their mission revolves around creating transparent, customer-centric solutions, talk about how you did that while working at X previous job.

There are several reasons for doing this. Firstly, you show that you’ve done your due diligence. You’ve read the job specification carefully and researched the company. This proves that you’ve spent some time on your application, and definitely more than the people who throw the same resume at every company.
It also proves that you understand and care about the company mission, which makes you more relatable to hiring managers and indicates that you will fit in well with the other employees and the company culture as a whole.

## Highlight skills learned from other fields

A great resume is especially important early in your career, when you don’t have a lot of relevant experience. This is where you will need to rely entirely on your resume to move on to the next round. Later on, your previous work experience will do a lot of the heavy lifting - but that’s not the case if you’re applying for your first developer job.

However, even if you don’t have a lot of related developer experience, you can still show your previous workplaces. Use these to highlight skills you learned **that are also relevant for this job**. If you’ve worked as a cashier, write how that has helped you interact with customers and stay polite. Or how being a football coach has taught you the importance of teamwork and leadership. Whatever your previous experience has been, highlight the key learnings from each.

You can also use this to showcase some of your favourite moments from previous jobs, developer related or not. Things like “got promoted to manager after 2 years” or “launched a non-profit website on a super tight deadline” or “learned a new language for a specific project on the job”. If you’ve done awesome things that you’re proud of, don’t be afraid to show it. It feels more human than just listing the responsibilities you had.

## Practical quick wins

Keeping all of this in mind, there are also some simple, quick-win tips you can apply to make your resume more compelling:

- Keep it at **1-2 pages maximum**. The receiver will simply not have time to read long walls of text. There might be potentially better candidates applying for this job, but by keeping your resume concise, you’ll get an advantage over them by actually having your resume read.
Stick to a few (2-4) bullet points for each previous job or project.
- **Put your experience and skills at the top.** This is the most important part of the resume, so make sure the receiver doesn’t accidentally miss it, or discard your resume before getting to these sections.
- **Show some personality.** You don’t need to go crazy with colors and images, but adding small visual design elements will help your resume to stand out.
- **Use a template.** There are tons of good CV template builders on the internet. Using a template will make your resume look more professional at a glance.
- **Proofread.** And have a friend proofread as well. You want your resume to be as free of spelling mistakes and poor wording as possible.

## Final notes

Your resume might be your only chance at landing a particular job. The receiver of your resume will have to make a very quick judgement call of whether or not there’s a chance of you providing value to the company. If there is, you’ll move to the next round. If not, your application will get discarded.

This may seem very cutthroat - and that’s because it is. But it’s how the world works. When reviewing your resume, try to put yourself in the shoes of the company or recruiter receiving it - would you pass yourself on to the next round? Even if you had as little as [7.4 seconds](https://www.hrdive.com/news/eye-tracking-study-shows-recruiters-look-at-resumes-for-7-seconds/541582/) to decide? If not, your resume could still use some work.

Thanks for reading! If you liked this article, check out my [blog](https://madsbrodt.com/blog) for more!
madsbrodt
378,856
Qvault Classroom Launches Golang Crash Course
The post Qvault Classroom Launches Golang Crash Course appeared first on Qvault. We just launched...
0
2020-07-02T12:07:43
https://qvault.io/2020/07/02/qvault-classroom-launches-golang-crash-course/
engineering, go, programming, tutorial
![](https://qvault.io/wp-content/uploads/2020/07/Ebn8FSCUcAENHop-300x181.png)

The post [Qvault Classroom Launches Golang Crash Course](https://qvault.io/2020/07/02/qvault-classroom-launches-golang-crash-course/) appeared first on [Qvault](https://qvault.io).

We just launched [Qvault Classroom](https://classroom.qvault.io/#/) and couldn’t be more excited. Our first crash course in Go, “_Go Mastery_”, is now available! We teach students by allowing them to write, compile, and run backend code directly in the browser.

Qvault Classroom: [https://classroom.qvault.io/](https://classroom.qvault.io/#/)

## Our Difference

Education as an industry is unbelievably far behind when it comes to technological innovation. We are humbled to be a part of pushing its boundaries. We have three core goals with Qvault content:

- **Gamify Learning** – Learning online should feel like a game, not a chore. We use a rewards system for unlocking content, which is earned by completing achievements in the app.
- **Focus on Mastery** – Clumping students together in classes and moving them through grade levels even when concepts aren’t mastered is an artifact of the past. Timed tests and due dates don’t exist in Qvault Classroom. Students move at their own pace and can’t move on until a concept is mastered.
- **Code In-Browser** – Hands-on is king when learning to code. Our courses are ~75% coding assignments that can be completed right in the browser, even in backend languages like Go.

## Gamify Learning

There is no reason learning shouldn’t feel more like a game. The current “learn as you go” courses often don’t incentivize students to go _fast_.
As a result, many students become disinterested and lose motivation, or end up going so slow that they don’t achieve their goals. By treating courses like videogames, we keep students **engaged**.

<iframe title="Elon Musk on Education and the need to Gamify Learning" width="828" height="621" src="https://www.youtube.com/embed/ctvib19wL4E?feature=oembed" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

## Focus on Mastery

_Mastery-based learning_ focuses on allowing each student to master a concept before moving on to the next one. Contrast this with traditional schools, where students pass with a “C” and are forced to move to the next course, where they will likely do _even worse_. Advanced subjects like Computer Science require solid fundamentals, and mastery-based learning is the best way to achieve that.

Sal Khan from Khan Academy has a great video about mastery-based learning, and spells out exactly what we are aiming for with [Qvault Classroom](https://classroom.qvault.io/#/):

<iframe title="Let's teach for mastery -- not test scores | Sal Khan" width="828" height="466" src="https://www.youtube.com/embed/-MTRxRO5SRA?feature=oembed" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

## Code In Browser

Programming courses make the most sense as **hands-on**, **code-as-you-go** style tutorials. Qvault Classroom has 2 exercise types:

- Code Completion (~75%)
- Multiple Choice Questions (~25%)

Both kinds of exercises are accompanied by instructions in an easy-to-follow text format. We believe videos are one of the _**worst**_ mediums for learning to code. Students get stuck listening to things they already know or don’t care about, and lose the ability to skim through instructions and move fast.
We use WebAssembly compilers to allow students to learn and run backend languages right in the browser, something few online learning environments offer.

If you want to run Go in your browser, try it out here: [https://classroom.qvault.io/#/playground/go](https://classroom.qvault.io/#/playground/go)

Or jump right into a course: [https://classroom.qvault.io/](https://classroom.qvault.io/#/)

## Thanks For Reading

Hit me up on twitter [@wagslane](https://twitter.com/wagslane) if you have any questions or comments.

Take your coding career to the next level with courses on [Qvault Classroom](https://classroom.qvault.io/#/)

Follow me on Dev.to: [wagslane](https://dev.to/wagslane)
wagslane
378,873
The Concept of Domain-Driven Design Explained
Using microservices means creating applications from loosely coupling services. The application consi...
0
2020-07-02T12:39:32
https://dev.to/microtica/the-concept-of-domain-driven-design-explained-1ccn
microservices, devops, webdev, architecture
Using [microservices](https://microtica.com/everything-about-microservices/?utm_source=devto&utm_medium=referral_link&utm_campaign=domain_driven_design) means **creating applications from loosely coupled services.** The application consists of several small services, each representing a separate business goal. [They can be developed and easily maintained individually](https://microtica.com/deploy-your-first-microservice-on-kubernetes-in-10-mins/?utm_source=devto&utm_medium=referral_link&utm_campaign=domain_driven_design), after which they are joined into a complex application.

Microservices is an architecture design model with a **specific bounded context, configuration, and dependencies.** These result from the architectural principles of domain-driven design and DevOps.

**Domain-driven design is the idea of solving problems of the organization through code.** Each business goal is important to the business users and has a **clear interface and functions.** This way, the microservice can run independently from other microservices. Moreover, the team can also work on it independently, which is, in fact, the point of the microservice architecture.

Many developers claim microservices have made them more efficient. This is due to the ability to work in small teams. This allows them to develop different small parts that will later be merged as a large app. They spend less time coordinating with other developers and more time on developing the actual code. Eventually, this creates more value for the end-user.

# The Complexity Challenge

Complexity is a relative term. What’s complex for one person is simple for another. However, **complexity is the problem that domain-driven design should solve.** In this context, complexity means interconnectedness, many different data sources, different business goals, etc.

The domain-driven approach is here to solve the complexity of software development. On the other hand, you can use emergent design when the challenge is simple.
However, when your application is complex, the complexity will only grow, and so will your problems. **Domain-driven design is based on the business domain.** Modern business environments are very complex and wrong moves can lead to fatal outcomes. Domain-driven design solves complex domain models, connecting to the core business concepts.

Eric Evans introduced the concept in 2004 in his book *Domain-Driven Design: Tackling Complexity in the Heart of Software*. According to the book, it focuses on three principles:

- The primary focus of the project is the core **domain** and **domain logic**.
- Complex designs are based on **models of the domain.**
- Collaboration between technical and **domain experts** is crucial to creating an application model that will solve particular domain problems.

# Important terms in Domain-Driven Design

In DDD, it’s important to pay attention to the following terms:

### Domain logic

Domain logic is **the purpose of your modeling.** Most commonly, it’s referred to as the **business logic**. This is where your business rules define the way data gets created, stored, and modified.

### Domain model

The domain model includes the **ideas, knowledge, data, metrics, and goals** that revolve around the problem you’re trying to solve. It contains all the rules and patterns that will help you deal with complex business logic. Moreover, they will be useful to meet the requirements of your business.

### Subdomain

A domain consists of several subdomains that refer to **different parts of the business logic.** For example, an online retail store could have a product catalog, inventory, and delivery as its subdomains.

### Design patterns

Design patterns are all about **reusing code**. No matter the complexity of the problem you encounter, someone who’s been doing object-oriented programming has probably already created a pattern that will help you solve it. Breaking down your problem into its initial elements will lead you to its solution.
Everything you learn through patterns, you can later use for any object-oriented language you start to program in.

### Bounded context

Bounded context is a **central pattern** in domain-driven design that contains the complexity of the application. It handles large models and teams. This is where you implement the code, after you’ve defined the **domain** and the **subdomains**.

Bounded contexts actually represent **boundaries in which a certain subdomain is defined and applicable.** Here, the specific subdomain makes sense, while others don’t. One entity can have different names in different contexts. When a subdomain within the bounded context changes, the entire system doesn’t have to change too. That’s why developers use adapters between contexts.

### The Ubiquitous Language

The Ubiquitous Language is a methodology that refers to **the same language domain experts and developers use** when they talk about the domain they are working on. This is necessary because projects can face serious issues with a disrupted language. This happens because domain experts use their own jargon. At the same time, tech professionals use their own terms to talk about the domain. There’s a gap between the terminology used in daily discussions and the terms used in the code. That’s why it’s necessary to define a set of terms that everyone uses. All the terms in the ubiquitous language are structured around the domain model.

### Entities

Entities are a combination of **data and behavior**, like a user or a product. They have identity, but represent data points with behavior.

### Value objects and aggregates

Value objects have **attributes**, but can’t exist on their own. For example, the shipping address can be a value object. Large and complicated systems have countless entities and value objects. That’s why the domain model needs some kind of structure. This will put them into logical groups that will be easier to manage. These groups are called **aggregates**.
They represent a collection of objects that are connected to each other, with the goal to treat them as units. Moreover, they also have an **aggregate root**. This is the only entity that any object outside of the aggregate can reference.

### Domain service

The domain service is **an additional layer that also contains domain logic.** It’s part of the domain model, just like entities and value objects. At the same time, the application service is another layer that doesn’t contain business logic. However, it’s there to coordinate the activity of the application, placed above the domain model.

### Repository

The repository pattern is a **collection of business entities** that simplifies the data infrastructure. It releases the domain model from infrastructure concerns. The layering concept enforces the separation of concerns.

# Example of Domain-Driven Design

![Example of domain-driven design](https://mk0microtica2di3k2co.kinstacdn.com/wp-content/uploads/2020/07/topac-01.png)

If we take an e-commerce app, for example, the business domain would be to process an order. When a customer wants to place an order, they first need to go through the products. Then, they choose their desired ones, confirm the order, choose a shipping type, and pay. The app then processes the data the client provides. So, a user app would consist of the following layers:

### User Interface

This is where the customer can find **all the information needed to place an order.** In an e-commerce case, this is where the products are. This layer presents the information to the client and interprets their actions.

### Application layer

This layer doesn’t contain business logic. It’s the part that **leads the user from one UI screen to another.** It also interacts with the application layers of other systems. It can perform simple validation, but it contains no domain-related logic or data access. Its purpose is to organize and delegate domain objects to do their job.
Moreover, it’s the only layer accessible to other bounded contexts.

### Domain layer

This is where the **concepts of the business domain are**. This layer has all the information about the business case and the business rules. Here’s also where the **entities** are. As we mentioned earlier, entities are a combination of data and behavior, like a user or a product. They have a unique identity guaranteed via a unique key, which remains even when their attributes change. For example, in an e-commerce store, every order has a unique identifier. It has to go through several actions like confirming and shipping to be considered an entity.

On the other hand, **value objects** don’t have unique identifiers. They represent attributes that various entities can share. For example, this could be the same last name of different customers.

This part also contains services with **defined operational behavior** that don’t have to be a part of any domain. However, they are still part of the business domain. The services are named according to the ubiquitous language. They shouldn’t deprive entities and value objects of their clear accountability and actions. Customers should be able to use any given service instance; the history of that instance during the lifetime of the application shouldn’t be a problem.

Most importantly, the domain layer is at the center of the business application. This means that it should be separated from the rest of the layers. It shouldn’t depend on the other layers or their frameworks.

### Infrastructure layer

This layer supports **communication between other layers** and can contain supporting libraries for the UI layer.

# Advantages of Domain-Driven Design

- **Simpler communication.** Thanks to the Ubiquitous Language, communication between developers and teams becomes much easier. As the ubiquitous language is likely to contain simpler terms developers refer to, there’s no need for complicated technical terms.
- **More flexibility.** As DDD is object-oriented, everything about the domain is modeled as objects, modular and encapsulated. Thanks to this, the entire system can be modified and improved regularly.
- **The domain goes before UI/UX.** As the domain is the central concept, developers will build applications suited for the particular domain. This won’t be another interface-focused application. Although you shouldn’t leave out UX, using the DDD approach means that the product targets exactly the users that are directly connected to the domain.

# Downsides of Domain-Driven Design

- **Deep domain knowledge is needed.** Even for the most technologically advanced teams working on development, there has to be at least one domain specialist on the team who understands the precise characteristics of the subject area that’s the center of the application. Sometimes there’s a need for several team members who thoroughly know the domain to be incorporated into the development team.
- **Contains repetitive practices.** Although many would say this is an advantage, domain-driven design involves many repetitive practices. DDD encourages the use of [continuous integration](https://microtica.com/cracking-the-continuous-deployment-code/?utm_source=devto&utm_medium=referral_link&utm_campaign=domain_driven_design) to build strong applications that can adapt themselves when necessary. Many organizations may have difficulties with these methods, particularly if their previous experience is generally tied to less flexible models of growth, like the waterfall model.
- **It might not work best for highly technical projects.** Domain-driven design is perfect for applications that have complex business logic. However, it might not be the best solution for applications with minor domain complexity but high technical complexity. Applications with great technical complexity can be very challenging for business-oriented domain experts.
This could cause many limitations that might not be solvable for all team members.

# Conclusion

Domain-driven design is a software engineering approach to solving a specific domain model. The solution **circles around the business model** by connecting execution to the key business principles.

Common terminology between the domain experts and the development team includes **domain logic, subdomains, bounded contexts, context maps, domain models, and ubiquitous language** as a way of collaborating and improving the application model and solving any domain-related challenges.

With this article, we wanted to define the core concepts around domain-driven design, explain them, and lay out the advantages and downsides of the approach. This way, we hope to help you decide whether this is the right approach for your business and your application.

[Microservices](https://microtica.com/everything-about-microservices/?utm_source=devto&utm_medium=referral_link&utm_campaign=domain_driven_design) offer some serious advantages over traditional architectures, providing scalability, accessibility, and flexibility. Moreover, this approach keeps developers focused, as each microservice is a loosely coupled service with a single idea of accountability.
saramiteva
379,041
What problems did active_support concerns solve?
In the Rails project some of the most commonly used methods are extracted and generally placed in...
0
2020-07-02T15:15:00
https://dev.to/ihavecoke/what-problems-did-activesupport-concerns-solve-56oe
ruby, rails
![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/65670e41o57mxgrj2zq0.jpg)

In Rails projects, some of the most commonly used methods are extracted and generally placed in the corresponding `concerns` directory. Different concern files are stored in these directories; each file is a module, for example:

```ruby
module UserPlan
  extend ActiveSupport::Concern

  included do
    belongs_to :plan
  end

  class_methods do
    def cached_all_plan
      # bala bala
    end
  end
end
```

Note that `extend ActiveSupport::Concern` is used in this module. We will explore its benefits and why this module exists. Let's start with the following example:

```ruby
module Foo
  def self.included(base)
    base.class_eval do
      def self.method1
        ...
      end
    end
  end
end

module Bar
  def self.included(base)
    base.method1
  end
end

class Host
  include Foo # The Foo module was introduced because the Bar module depends on it
  include Bar # Bar is included in order to call its method1 method
end
```

If you want to use the `method1` method in the `Host` class, you must include the `Bar` module, but the `Bar` module depends on the `Foo` module. Did you find any problems? The `Host` class does not care about the `Foo` module, but in order to call the `method1` method when `Bar` is included, you have to introduce the `Foo` module.

Here is improved code along the same logic:

```ruby
module Bar
  include Foo

  def self.included(base)
    base.method1
  end
end

class Host
  include Bar
end
```

`Host` only cares about the reference to `Bar`, so the `Foo` module is put inside `Bar`. It seems that there is no problem, but when you actually run the code, you will find that it cannot work. In the `included` hook method of the `Bar` module, the `base` object points to `Host`, but `include Foo` in `Bar` defines `method1` on `Bar` itself, not on `Host`.
As a result, `method1` never ends up on `Host`, and when `include Bar` runs, `base.method1` throws a method-not-found exception. How do we fix it?

```ruby
require 'active_support/concern'

module Foo
  extend ActiveSupport::Concern

  included do
    def self.method1
      ...
    end
  end
end

module Bar
  extend ActiveSupport::Concern
  include Foo

  included do
    self.method1
  end
end

class Host
  include Bar
end
```

By extending `ActiveSupport::Concern`, the problem of interdependence between modules is solved. Comparing the two versions, the code is much simpler, the calls are very convenient, and the readability of the code has become very good and clear. `ActiveSupport::Concern` is now quite widely used, and many good gems use it. After all, it was put in the **ActiveSupport** gem.

Hope it can help you :)
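To see what `ActiveSupport::Concern` is doing under the hood, here is a minimal, self-contained sketch of the dependency-resolution trick. `MiniConcern` is a simplified, hypothetical stand-in, not the real implementation:

```ruby
# MiniConcern: a stripped-down illustration of how ActiveSupport::Concern
# defers a concern's dependencies until a real class includes it.
module MiniConcern
  def self.extended(base)
    base.instance_variable_set(:@_dependencies, [])
  end

  def append_features(base)
    if base.instance_variable_defined?(:@_dependencies)
      # base is another concern: record the dependency, don't include yet
      base.instance_variable_get(:@_dependencies) << self
      false
    else
      # base is a real class: pull in our dependencies first, then ourselves
      @_dependencies.each { |dep| base.include(dep) }
      super
      base.class_eval(&@_included_block) if @_included_block
    end
  end

  def included(base = nil, &block)
    if base.nil?
      @_included_block = block # the `included do ... end` form
    else
      super
    end
  end
end

module Foo
  extend MiniConcern
  included do
    def self.method1
      "method1 on #{name}"
    end
  end
end

module Bar
  extend MiniConcern
  include Foo # recorded as a dependency, not included yet
  included do
    method1   # self here is the including class, so this works
  end
end

class Host
  include Bar
end

puts Host.method1 # => "method1 on Host"
```

When `Host` includes `Bar`, `MiniConcern` first includes `Foo` into `Host` (running Foo's `included` block against `Host`), then includes `Bar`, so `method1` is available on `Host` itself.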
ihavecoke
379,061
Practical Functional Programming in JavaScript - Side Effects and Purity
Edit: This article doesn't do such a great job at communicating what I originally intended, so it has...
0
2020-07-02T16:09:55
https://dev.to/richytong/practical-functional-programming-in-javascript-side-effects-and-purity-1838
Edit: This article doesn't do such a great job at communicating what I originally intended, so it [has a revision](https://dev.to/richytong/practical-functional-programming-in-javascript-side-effects-and-purity-revised-420h). I recommend you read the revised version, though I've left this original for historical purposes.

Hello 🌍. You've arrived at the nth installment of my series on functional programming: Practical Functional Programming in JavaScript. On this fine day I will talk about a two-pronged approach to problem solving that makes life easy: **Side Effects and Purity**.

Let's talk about purity. A function is said to be **pure** if it has the following properties:

* Its return value is the same for the same arguments
* Its evaluation has no side effects

([source](https://en.wikipedia.org/wiki/Pure_function))

Here's **side effect** [from stackoverflow](https://softwareengineering.stackexchange.com/questions/40297/what-is-a-side-effect):

> A side effect refers simply to the modification of some kind of state - for instance:

* Changing the value of a variable;
* Writing some data to disk;
* Enabling or disabling a button in the User Interface.

Here are some more instances of side effects:

* reading data from a file
* making a request to a REST API
* writing to a database
* reading from a database
* logging out to console

Basically, all interactions of your function with the world outside its scope are side effects. You have likely been using side effects this whole time. Even the first "hello world" you logged out to the console is a side effect.

In a world full of side effects, your goal as a functional programmer should be to **isolate those side effects to the boundaries of your program**. Purity comes into play when you've isolated the side effects. At its core, **purity is concerned with data flow**, as in how your data transforms from process to process. This is in contrast to side effects, which are only concerned with doing external stuff.
The structure of your code changes for the clearer when you separate your programming concerns by side effects and purity. Here is an impure function `add10`: ```javascript let numCalls = 0 const add10 = number => { console.log('add10 called with', number) numCalls += 1 console.log('add10 called', numCalls, 'times') return number + 10 } add10(10) /* > add10 called with 10 > add10 called 1 times > 20 */ ``` `add10` has the side effects of logging out to the console, mutating the variable `numCalls`, and logging out again. The console logs are side effects because they're logging out to the console, which exists in the world outside `add10`. Incrementing `numCalls` is also a side effect because it refers to a variable in the same script but outside the scope of `add10`. `add10` is not pure. By taking out the console logs and the variable mutation, we can have a pure `add10`. ```javascript let numCalls = 0 const add10 = number => number + 10 console.log('add10 called with', 10) // > add10 called with 10 numCalls += 1 console.log('add10 called', numCalls, 'times') // > add10 called 1 times add10(10) // > 20 ``` Ah, sweet purity. Now `add10` is pure, but our side effects are all a mess. We'll need the help of some higher order functional programming functions if we want to clean this up. You can find these functions in functional programming libraries like [rubico](https://github.com/a-synchronous/rubico) (authored by yours truly), Ramda, or RxJS. If you don't want to use a library, you can implement your own versions of these functions in vanilla JavaScript. For example, you could implement minimal versions of the functions we'll be using, `pipe` and `tap`, like this ```javascript const pipe = functions => x => { let y = x for (const f of functions) y = f(y) return y } const tap = f => x => { f(x); return x } ``` We'll use them to make it easy to think about side effects and purity. 
* **pipe** takes an array of functions and chains them all together, calling the next function with the previous function's output. Since `pipe` creates a flow of data in this way, we can use it to think about **purity**. You can find a runnable example in [pipe's documentation](https://doc.rubico.land/#pipe). * **tap** takes a single function and makes it always return whatever input it was passed. When you use `tap` on a function, you're basically saying "don't care about the return from this function, just call the function with input and give me back my input". Super useful for **side effects**. You can find a runnable example in [tap's documentation](https://doc.rubico.land/#tap). Here's a refactor of the first example for purity while accounting for side effects using `pipe` and `tap`. If the example is looking a bit foreign, see my last article on [data last](https://dev.to/richytong/practical-functional-programming-in-javascript-data-last-1gjo). ```javascript const logCalledWith = number => console.log('add10 called with', number) let numCalls = 0 const incNumCalls = () => numCalls += 1 const logNumCalls = () => console.log('add10 called', numCalls, 'times') const add10 = number => number + 10 pipe([ tap(logCalledWith), // > add10 called with 10 tap(incNumCalls), tap(logNumCalls), // > add10 called 1 times add10, ])(10) // > 20 ``` We've isolated the console log and variable mutation side effects to the boundaries of our program by defining them in their own functions `logCalledWith`, `incNumCalls`, and `logNumCalls`. We've also kept our pure `add10` function from before. The final program is a composition of side effecting functions and a pure function, with clear separation of concerns. With `pipe`, we can see the flow of data. With `tap`, we designate and isolate our side effects. That's organized. 
![thumbs-up-chuck.gif](https://raw.githubusercontent.com/a-synchronous/assets/master/gifs/thumbs-up-chuck.gif) Life is easy when you approach problems through side effects and purity. I'll leave you today with a rule of thumb: _if you need to console log, use tap_. Next time, I'll dive deeper into data transformation with `map`, `filter`, and `reduce`. Thanks for reading! You can find the rest of the series on rubico's [awesome resources](https://github.com/a-synchronous/rubico#awesome-resources). See you next time for _Practical Functional Programming in JavaScript - Intro to Transformation_
richytong
379,402
Day 5: Loops
Salutations! Welcome to another day of exploration and my endeavours with the 30 days of c...
0
2020-07-02T21:45:09
https://dev.to/gradius93/day-5-loops-2lm0
python
### Salutations!

Welcome to another day of exploration and my endeavours with the 30 days of code challenge, Day 5. **Loops** are the centre of today's attention.

**Loops** are a fundamental part of coding, featuring in almost all programming languages, so understanding them is paramount. Python contains two different types of loop: the **for** loop and the **while** loop. **For** loops can iterate over a sequence of numbers using the range function. **While** loops repeat as long as a certain boolean condition is met.

Today's challenge involved using a loop to iterate over a variable 10 times and multiply it by the iteration number. I decided to do this in both styles of loop.

Here is my code for the **for** loop:

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/wikxywwmil8myp7tsx79.png)

And here is the output:

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/vm0ilwvu8h8fv7sppg9p.png)

Here is my code for the **while** loop:

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/xsp7wh26gefxhj4nx01d.png)

And here is the output:

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/5nydr3kear4fxudl7tw1.png)
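For anyone who would rather copy-paste than retype from the screenshots, the same exercise can be sketched in plain text. The function names and the example value `3` here are my own illustration, not necessarily what the screenshots use:

```python
# Multiply a variable by each iteration number from 1 to 10,
# once with a for loop and once with a while loop.

def times_table_for(n):
    lines = []
    for i in range(1, 11):          # range(1, 11) yields 1..10
        lines.append(f"{n} x {i} = {n * i}")
    return lines

def times_table_while(n):
    lines = []
    i = 1
    while i <= 10:                  # loop as long as the condition holds
        lines.append(f"{n} x {i} = {n * i}")
        i += 1
    return lines

for line in times_table_for(3):
    print(line)                     # 3 x 1 = 3 ... up to 3 x 10 = 30
```

Both loops produce identical output; the while version just manages the counter by hand.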
gradius93
379,551
For developers: 20 online meetup recordings you should watch from June 2020
IBM Developer hosts weekly online meetups on various topics. Online events are one of the best ways...
0
2020-07-06T22:33:59
https://maxkatz.org/2020/07/02/for-developers-20-online-meetup-recordings-you-should-watch-from-june-2020/
ibmcloud, meetup, tutorial, video
---
title: "For developers: 20 online meetup recordings you should watch from June 2020"
published: true
date: 2020-07-02 22:48:36 UTC
tags: IBMCloud, Meetup, Tutorial, Video
canonical_url: https://maxkatz.org/2020/07/02/for-developers-20-online-meetup-recordings-you-should-watch-from-june-2020/
---

[IBM Developer](http://crowdcast.io/ibmdeveloper) hosts weekly online meetups on various topics. [Online events are one of the best ways to scale](https://dev.to/ibmdeveloper/using-online-meetups-to-scale-your-developer-relations-program-17li) your Developer Relations program and reach developers anywhere, anytime and for a long time after the event.

🎟 Register for our [upcoming events](http://crowdcast.io/ibmdeveloper)

#### Node-RED Series

- Node-RED Series: Getting Started with Node-RED Essentials [watch replay](https://www.crowdcast.io/e/node-red-series)
- Node-RED Series: Node-RED Dashboard and UI Techniques [watch replay](https://www.crowdcast.io/e/node-red-series-2)
- Node-RED Series: Building end to end Node-RED Application [watch replay](https://www.crowdcast.io/e/node-red-series-3)

#### Single Events

- Containers Developer Summit – Online [Watch replay](https://www.crowdcast.io/e/containers-developer-summit-online)
- Domain-Driven Design: Lessons Learned & Useful Patterns [Watch replay](https://www.crowdcast.io/e/domain-driven-design)
- IBM Cloud Series – Getting Started with Edge Computing [Watch replay](https://www.crowdcast.io/e/ibmcloud-edge)
- Kubernetes vs Red Hat OpenShift [Watch replay](https://www.crowdcast.io/e/kubernetes-vs-red-hat-2)
- Online Hands-on Lab: Build a Smart Bot with Slack, Block Kit and Watson [Watch replay](https://www.crowdcast.io/e/online-hands-on-lab)
- IBM Cloud Series – Getting started with Serverless [Watch replay](https://www.crowdcast.io/e/ibm-cloud-series-serverless)
- Deep Learning Hands-On Series – Monitoring [Watch replay](https://www.crowdcast.io/e/dl-hands-on-3)
- Have the world of IoT Data at your fingertips to play [Watch replay](https://www.crowdcast.io/e/play-with-iot-data)
- [Pride Edition] IBM Visual Insights: Deep Learning & AI Vision at Blazing Speed [Watch replay](https://www.crowdcast.io/e/pride-edition-ibm-visual)
- Kubernetes hands-on with OpenShift on IBM Cloud [Watch replay](https://www.crowdcast.io/e/kubernetes-hands-on-with-2)
- IBM Cloud Series – Getting started with AI [Watch replay](https://www.crowdcast.io/e/ibm-cloud-series-ai)
- Deep Dive into Containers and Enterprise Kubernetes with Red Hat OpenShift [Watch replay](https://www.crowdcast.io/e/deep-dive-into)
- Build a Virtual Agent in JavaScript with Twilio Autopilot and IBM Watson Natural Language Understanding [Watch replay](https://www.crowdcast.io/e/build-a-smart-chatbot)
- [PRIDE EDITION] Transgender 101 Panel Discussion [Watch replay](https://www.crowdcast.io/e/pride-edition)
- Serverless Swift – Serverless Mobile Backend as a Service [Watch replay](https://www.crowdcast.io/e/serverless-swift-mbaas)
- Predict your insurance premium cost with Auto AI [Watch replay](https://www.crowdcast.io/e/predict-your-insurance-autoai)
- Decision Optimization for Disaster Response [Watch replay](https://www.crowdcast.io/e/decision-optimization)

🎟 Register for our [upcoming events](http://crowdcast.io/ibmdeveloper)
maxkatz
379,939
Live demo on RBAC on OAuth2 scopes
Hey guys, we're hosting a live demo session next week on role based access control on OAuth2 scopes.W...
0
2020-07-03T06:33:25
https://dev.to/fishfaceishi/live-demo-on-rbac-on-oauth2-scopes-i7b
security, javascript, devlive
Hey guys, we're hosting a live demo session next week on role-based access control on OAuth2 scopes. We will be taking questions during the demo too, so feel free to add it to your calendar: https://bit.ly/Identityin15July8
fishfaceishi
380,054
Plan like a Pro with Automatic Scheduling in Taskjuggler
The Need for Automatic Project Scheduling? Every one of us has done some project planning...
0
2020-07-03T10:09:37
https://dev.to/turbopape/plan-like-a-pro-with-automatic-scheduling-in-taskjuggler-3a15
productivity, project, management, tasks
# The Need for Automatic Project Scheduling?

Every one of us has done some project planning in one way or another. Some people would have used fancy GANTT editing tools for that; others would just go ahead and dump their brains onto a backlog. But very few are those who follow an automatic approach to "schedule" their projects.

More often than not, people go manually about deciding the gameplan to adopt facing their daunting projects. They try to envision the best possible plan while mentally trying to sort what we call "precedence constraints," i.e., one can't work on Task A before Tasks B and C have been achieved. They do so all while finding feasible and commercially viable resource allocation schemes. Also, they need to conform to time-bound constraints, like deadlines and seasonal phenomena.

All that makes a perfect case for an automatic scheduling tool, and [Taskjuggler](https://taskjuggler.org/) might be the most powerful - if not the only - mature alternative you have at hand. But this is not a shiny Kanban-y drag-and-drop thingy: this is a full-fledged enterprise-grade automatic project planning and estimating solution, so much so that it takes some learning curve to get fluent in it, but believe me, the effort is worth it.

# Taskjuggler: Thinking in Work Breakdown Structure

Without drowning too deep in detail, **Taskjuggler** lets you focus on your project structure. Rather than giving you GANTT drawing tools and letting you figure the plan out yourself, **Taskjuggler** asks that you give it your **Work Breakdown Structure**, and it does the scheduling for you. Simply put, a Work Breakdown Structure is a graph depicting the tasks you need to achieve, along with the relationships between them and the resources necessary to make them happen.

**Taskjuggler** is also able to follow the realization of the project and will adjust the plan according to the actual project roll-out.
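To make "precedence constraints" concrete: at its core, scheduling must order the task graph so that every dependency completes first. Here is a minimal illustration in Python — purely my own sketch of the idea, not how Taskjuggler is actually implemented; the task names just anticipate the example built later in this post:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# A Work Breakdown Structure as a graph: task -> the tasks it depends on
wbs = {
    "Database Design": set(),
    "Web App Design": {"Database Design"},
    "Deployment": {"Web App Design"},
}

# Any valid schedule must list each task after all of its predecessors
order = list(TopologicalSorter(wbs).static_order())
print(order)  # Database Design first, Deployment last
```

A real scheduler like Taskjuggler layers resource allocation, effort, and calendar constraints on top of this ordering.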
Besides, it offers you the possibility to simulate multiple scenarios.

**Taskjuggler** is also finance aware, and sports an accounting module that can track the costs incurred by the allocated resources against revenues that you'd record as you're working on your project. I have used this accounting module so many times as my secret weapon to come up with detailed financial quotes and win the commercial race!

# Taskjuggler in action

First of all, you'll need to install **Taskjuggler** [following the instructions on its website](https://taskjuggler.org/tj3/manual/Installation.html#Installation).

**Taskjuggler** comes as a command-line tool, *tj3*, that compiles your project descriptions into the project reports you specify. Let's run through a simple example.

Suppose you want to specify a Software Development project. We first need the data engineer to design the database, then have the web designers design the application, then have the Devops deploy it.

Let's begin by creating a file for our project. We'll name it project.tjp. We'll begin by specifying our project name, plus some general information about it, like its start date, and what would be the deadline for it (1 month from the start, hence the `+1m`):

```tjp
project softdev "Software Development" 2020-08-16 +1m {
    timezone "Europe/Paris"
    currency "USD"
}
```

Let's then declare the financial accounts we'll be using to track the finances of our project:

```tjp
account cost "cost"
account rev "payments"

balance cost rev
```

We now declare the resources we need for our tasks:

```tjp
resource data_engineer "Data Engineer" {
    rate 200
}

resource web_engineer "Web Engineer" {
    rate 200
}

resource devops_engineer "Devops Engineer" {
    rate 200
}
```

*rate* will be used to give a financial estimate of our project. We are using the simplest form, but you can organize them into teams, flag them, assign managers...

Let's now get into the actual business.
Next, we'll describe our Work Breakdown Structure, that is, tasks, constraints, and resources.

We'll organize our tasks into two categories: **Work**, where we'll describe our actual tasks to be done, and **Milestones**, to track important steps and deliveries of our project. For this, we'll use the possibility to nest tasks in **Taskjuggler**.

First, let's see the **Work** part.

```tjp
task work "Work" {
    task data "Database Design" {
        chargeset cost
        allocate data_engineer
        effort 3d # This task takes 3 days of effort
    }
    task web "Web App Design" {
        chargeset cost
        allocate web_engineer
        effort 3d # This task takes 3 days of effort
        depends !data # This task needs task data to be complete
    }
    task deploy "Deployment" {
        chargeset cost
        allocate devops_engineer
        effort 3d # This task takes 3 days of effort
        depends !web # This task needs task web to be complete
    }
}
```

The **chargeset** clause is used to add the costs incurred by a resource working on a task to a particular account, which is **cost** here. We'll use this account to get a quote for the whole work needed for this project.

The **depends** construct tells **Taskjuggler** that a task can't be worked on BEFORE the referred task has been achieved. This is what we call a *"Precedence Constraint"* in Work Breakdown Speak.

The *depends* clause uses the exclamation mark **'!'** to refer to tasks that are one level higher in the hierarchy.
For example, ***deploy*** depends on ***!web*** means that it depends on the task **web** under a direct parent (one level up), yielding its direct sibling **web** (their common parent task being **work**).

Now for the **Milestones** section:

```tjp
task milestones "Milestones" {
    task db_milestone "Database Finish" {
        # A milestone has effort 0, it's represented as a little black square
        depends !!work.data
    }
    task web_milestone "Web Finish" {
        # A milestone has effort 0, it's represented as a little black square
        depends !!work.web
    }
    task deploy_milestone "Deploy Finish" {
        # A milestone has effort 0, it's represented as a little black square
        depends !!work.deploy
    }
    task project_end_milestone "Project End" {
        depends !!work.deploy
    }
}
```

Note how **depends** now uses a double exclamation mark. Look at task **deploy_milestone** for instance. It is meant to track the completion of the **deploy** task under the **work** container task. At this level, we are under **milestones**, so we need one **!** to get to the level of the direct parent **milestones**, then one more to get to the grand-parent level, under which we get to the aunt task **work** and refer to **deploy** by using a familiar dotted syntax, hence the **!!work.deploy** token.

At this point, our project description is finished, but if you run it through the **Taskjuggler** compiler, you won't get any output:

```shell
tj3 project.tjp
#... Truncated Output
Warning: This project has no reports defined. No output data will be generated.
Warning: None of the reports has a 'formats' attribute. No output data will be generated.
```

Indeed, you need to specify reports to get the **Taskjuggler** engine to compile the results under the format you want. And this is presumably the most complicated part. Here we go. We'll generate a static website.
The entry point to the reports is the **Overview** report page, with a link to detailed resource allocation by task under the **Development** subreport, and a link to a resource Utilization under the **Resource Graph** subreport. This post won't dive into the **Taskjuggler** report syntax, but if you're interested, you can check it [here](https://taskjuggler.org/manual/general_usage.html). This being said, you can use the following as a template to start your pro scheduling journey - this is actually the way I followed (without necessarily being fluent in the syntax). The main report **Overview** is specified like so: ```tjp navigator navbar { hidereport @none } macro TaskTip [ tooltip istask() -8<- '''Start: ''' <-query attribute='start'-> '''End: ''' <-query attribute='end'-> ---- '''Resources:''' <-query attribute='resources'-> ---- '''Precursors: ''' <-query attribute='precursors'-> ---- '''Followers: ''' <-query attribute='followers'-> ->8- ] textreport frame "" { header -8<- == Toy Software Development Project == <[navigator id="navbar"]> ->8- footer "----" textreport index "Overview" { formats html center '<[report id="overview"]>' } textreport development "Development" { formats html center '<[report id="development"]>' } textreport "ResourceGraph" { formats html title "Resource Graph" center '<[report id="resourceGraph"]>' } } taskreport overview "" { header -8<- === Project Overview === The project is structured into 3 phases. # <-reportlink id='frame.development'-> === Original Project Plan === ->8- columns bsi { title 'WBS' }, name, start, end, effort, cost, revenue, chart { ${TaskTip} } # For this report we like to have the abbreviated weekday in front # of the date. %a is the tag for this. timeformat "%a %Y-%m-%d" loadunit days hideresource @all balance cost rev caption 'All effort values are in man days.' footer -8<- === Staffing === All project phases are properly staffed. See [[ResourceGraph]] for detailed resource allocations. 
    ->8-
}
```

The report for "Development" is as follows:

```tjp
taskreport development "" {
    headline "Development - Resource Allocation Report"
    columns bsi { title 'WBS' }, name, start, end, effort { title "Work" },
            duration, chart { ${TaskTip} scale day width 500 }
    timeformat "%Y-%m-%d"
    hideresource ~(isleaf() & isleaf_())
    sortresources name.up
}
```

and finally, the report for "Resource Graph" is like so:

```tjp
resourcereport resourceGraph "" {
    headline "Resource Allocation Graph"
    columns no, name, effort, rate, weekly { ${TaskTip} }
    loadunit shortauto
    # We only like to show leaf tasks for leaf resources.
    hidetask ~(isleaf() & isleaf_())
    sorttasks plan.start.up
}
```

Now, save the project definition and compile it:

```shell
tj3 project.tjp
```

After successful scheduling, you'll see that a set of HTML files and other assets has been created. Head over to **Overview.html** in your browser:

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/4itzb55kh6ppc2a7x6b7.jpg)

You see **Taskjuggler** has computed a schedule for you. Verify that **precedence** has been respected. Note the cost by task and for the overall project (we have no revenue at this point). Also, see how the **Work** and **Milestones** umbrella tasks make our project schedule clearly presented by separating work to be done from the expected deliveries or milestones.

Let's explore the detailed task description by clicking on the **Development** link:

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/mespdqr2y2zwtgbmw2kr.jpg)

You see in this report how tasks are being worked on by each resource. Now head over to the **Resource Graph** report to see how resources are being used overall in this project:

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/5ign47qvhefpnskbu0qf.jpg)

You can notice how this report shows how heavily every resource is loaded during the execution of its assigned tasks.

# Conclusion

We just scratched the surface of what's possible with **Taskjuggler**.
It is really a unique scheduling beast; one that can accomplish so much, but which is unfortunately not well served by its looks. First, the only viable option to interact with it now is the proprietary syntax we've covered. This is showing its limits, as very poor tooling exists, and for large projects, with complex task hierarchies and report schemes, one can easily feel lost. Second, although it's possible to track the progress of projects, **Taskjuggler** lacks advanced enterprise collaboration features. I think I've read somewhere it has a server or something, but I don't think it's of any practical use right now. All of these problems make this tool only fit for the vim-savvy engineer, unfortunately. But once mastered, this is a very sharp one to run accurate time and money estimates, and to effectively track costs, and I think it can perfectly sit next to other collaborative agile Trellos and Jiras.
turbopape
380,177
Reading Analog Input with MaixPy on the MaixDuino
The MaixDuino reads the ADC through its ESP32; see the official example script. Current testing seems to hit a problem, however, always producing the following error: [MaixPy]: esp32 read adc failed! Acc...
0
2020-07-03T09:33:22
https://dev.to/codemee/maixduino-maixpy-o68
maixduino, maixpy, micropython, adc
The MaixDuino reads ADC values through its ESP32; see the official [example script](https://github.com/sipeed/MaixPy_scripts/blob/master/network/demo_esp32_read_adc.py). At the moment, however, testing seems to hit a problem, always producing the following error:

```
[MaixPy]: esp32 read adc failed!
```

According to [the explanation in the issue tracker](https://github.com/sipeed/MaixPy/issues/148), this appears to be a problem with the firmware on the ESP32; I'll test again later.

If you use [a specific version of MaixPy and the ESP32 firmware](https://github.com/sipeed/MaixPy/files/3514633/fixed_adc_fw.tar.gz), it works correctly. Use the kflash tool to flash the maixpy.bin extracted from the download:

![](https://i.imgur.com/SdC7axj.png)

Then use flash_download_tools to flash the extracted NINA_W102-1.3.1.bin to the ESP32 on the MaixDuino:

![](https://i.imgur.com/ugDAFjl.png)

Remember that the ESP32 appears as a separate serial port. Once done, you can test the ADC with the following program (the K210 chip on the MaixDuino and the ESP32 exchange data over SPI; see the schematic for the relevant pins):

```python
import network
import utime
from Maix import GPIO
from fpioa_manager import *

#iomap at MaixDuino
fm.register(25,fm.fpioa.GPIOHS10)#cs
fm.register(8,fm.fpioa.GPIOHS11)#rst
fm.register(9,fm.fpioa.GPIOHS12)#rdy
fm.register(28,fm.fpioa.GPIOHS13)#mosi
fm.register(26,fm.fpioa.GPIOHS14)#miso
fm.register(27,fm.fpioa.GPIOHS15)#sclk

nic = network.ESP32_SPI(cs=fm.fpioa.GPIOHS10,rst=fm.fpioa.GPIOHS11,rdy=fm.fpioa.GPIOHS12, mosi=fm.fpioa.GPIOHS13,miso=fm.fpioa.GPIOHS14,sclk=fm.fpioa.GPIOHS15)

adc = nic.adc()
print(adc)
```

adc() returns the values of the six ADC pins as a tuple:

```python
>>> %Run -c $EDITOR_CONTENT
ESP32_SPI init over
(839, 1169, 192, 16, 0, 0)
```
codemee
380,735
PreciousChickenToken: A guided example of OpenZeppelin's ERC20 using Ethers, Truffle and React
Introduction This guide is a step-by-step demonstration of ERC20 Tokens in React using a...
0
2020-07-03T15:35:21
https://www.preciouschicken.com/blog/posts/openzeppelin-erc20-using-ethers-truffle-and-react/
ethereum, react, solidity, erc20
---
title: "PreciousChickenToken: A guided example of OpenZeppelin's ERC20 using Ethers, Truffle and React"
published: true
date: 2020-07-01 16:08:04 UTC
tags: #ethereum #react #solidity #ERC20
canonical_url: https://www.preciouschicken.com/blog/posts/openzeppelin-erc20-using-ethers-truffle-and-react/
---

## Introduction

This guide is a step-by-step demonstration of ERC20 Tokens in React using a local Truffle Ethereum blockchain. It is not, nor is intended to be, a best practice study on how to write ERC20s. It is intended to produce familiarisation and working code, which can be the basis for further education.

## ERWhat?

ERC20 is a standard for tokens that applies on the Ethereum network (ERC standing for Ethereum Request for Comments) which ensures interoperability of these assets across the network. Having a standard for tokens is a big deal, as tokens are a central feature of Ethereum as laid out in the original [whitepaper](https://ethereum.org/en/whitepaper/#token-systems):

> On-blockchain token systems have many applications ranging from sub-currencies representing assets such as USD or gold to company stocks, individual tokens representing smart property, secure unforgeable coupons, and even token systems with no ties to conventional value at all, used as point systems for incentivization. Token systems are surprisingly easy to implement in Ethereum. The key point to understand is that a currency, or token system, fundamentally is a database with one operation: subtract X units from A and give X units to B, with the provision that (1) A had at least X units before the transaction and (2) the transaction is approved by A.

Using this standard therefore ensures that everyone is following a common set of rules, so allowing a token developed by one person to be traded across the system. There are a number of [other token standards](https://crushcrypto.com/ethereum-erc-token-standards/), however I'm using ERC20 primarily as it is the most popular.
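The whitepaper's "database with one operation" can be sketched in a few lines. This is illustrative only, and in Python rather than Solidity — the names `balances` and `transfer` are my own, not part of the ERC20 standard:

```python
# A token system is fundamentally a ledger with one operation:
# move X units from A to B, provided (1) A holds at least X units
# and (2) A approves the transaction.
balances = {"A": 100, "B": 0}

def transfer(sender, recipient, amount, approved_by):
    if approved_by != sender:               # (2) the sender must approve
        raise PermissionError("transfer not approved by sender")
    if balances.get(sender, 0) < amount:    # (1) the sender must hold enough
        raise ValueError("insufficient balance")
    balances[sender] -= amount
    balances[recipient] = balances.get(recipient, 0) + amount

transfer("A", "B", 30, approved_by="A")
print(balances)  # {'A': 70, 'B': 30}
```

Everything ERC20 adds — allowances, events, decimals — is bookkeeping and interface conventions layered on top of this core operation.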
Although it is possible to roll your own ERC20 by implementing the interface provided in the standard, it makes sense to use one that has been created and thoroughly tested by a specialised third party, in this case [OpenZeppelin](https://openzeppelin.com). ## Prerequisites - [NodeJS](https://nodejs.org) - If you haven't installed it before, I found installing it using the Node Version Manager (nvm) as suggested on this [Stack Overflow answer](https://stackoverflow.com/a/24404451/6333825) to cause less aggravation than downloading via the official website. - If you've previously installed [create-react-app](https://create-react-app.dev) globally via `npm install -g create-react-app`, then uninstall it with the command `npm uninstall -g create-react-app` so you are using the latest version as below. ## Ganache [Ganache](https://www.trufflesuite.com/ganache), part of the Truffle suite, positions itself as a "one-click blockchain." It allows a developer to host an Ethereum blockchain quickly on a local machine and provides a number of accounts (or wallets) already full of pretend Ether. This is much easier and quicker than deploying on either a testnet (e.g. Ropsten) or the main Ethereum network (which costs actual money). Download the Appimage from their [homepage](https://www.trufflesuite.com/ganache) and run (I've blogged previously on how [I like to install Appimages](https://www.preciouschicken.com/blog/posts/where-do-i-put-appimages/), but clearly install how you want). Once the application is running select the _Quickstart Ethereum_ option when you are invited to 'Create a workspace.' You should be presented with something similar to the following: [![Ganache](https://www.preciouschicken.com/blog/images/ganache.png)](https://www.preciouschicken.com/blog/images/ganache.png) which gives a list of accounts (or wallets) Ganache has created for you and the pretend Ether they contain. 
There are other non-GUI ways of running a local blockchain, and although I'm generally a fan of the terminal, this is such a neat solution I'm going to use it. ## Truffle Although Ganache is part of the Truffle suite; the main course, if you'll excuse the pun, is [Truffle](https://www.trufflesuite.com/truffle) itself. Truffle is a development framework for Ethereum which allows you to deploy and test smart contracts quickly. Create a directory that will hold our project: ```bash mkdir erc20-pct cd erc20-pct ``` Now we (globally) install and then initialise Truffle which will install a number of [default files and folders](https://www.trufflesuite.com/tutorials/getting-started-with-drizzle-and-react#directory-structure): ```bash npm install -g truffle truffle init ``` ## Install OpenZeppelin ERC20 As we will be using OpenZeppelin's implementation of ERC20 we need to install with npm (initialising this first): ```bash npm init -y npm install @openzeppelin/contracts ``` ## Smart Contract: PreciousChickenToken The ERC20 we will be creating for this demo will be called the PreciousChickenToken (Symbol: PCT), and to do so we need to create a smart contract. 
Using your favourite text editor (mine happens to be vim) create and edit the following file: ```bash vim contracts/PreciousChickenToken.sol ``` Copy and paste the following text (or alternatively download it from my [github repository](https://github.com/PreciousChicken/openzeppelin-erc20-using-ethers-truffle-and-react)): ```sol // SPDX-License-Identifier: Unlicense pragma solidity ^0.6.2; import "@openzeppelin/contracts/token/ERC20/ERC20.sol"; contract PreciousChickenToken is ERC20 { // In reality these events are not needed as the same information is included // in the default ERC20 Transfer event, but they serve as demonstrators event PCTBuyEvent ( address from, address to, uint256 amount ); event PCTSellEvent ( address from, address to, uint256 amount ); address private owner; mapping (address => uint256) pendingWithdrawals; // Initialises smart contract with supply of tokens going to the address that // deployed the contract. constructor(uint256 _initialSupply) public ERC20("PreciousChickenToken", "PCT") { _mint(msg.sender, _initialSupply); _setupDecimals(0); // Sets PCTs as integers only owner = msg.sender; } // A wallet sends Eth and receives PCT in return function buyToken(uint256 _amount) external payable { // Ensures that correct amount of Eth sent for PCT // 1 ETH is set equal to 1 PCT require(_amount == ((msg.value / 1 ether)), "Incorrect amount of Eth."); transferFrom(owner, msg.sender, _amount); emit PCTBuyEvent(owner, msg.sender, _amount); } // A wallet sends PCT and receives Eth in return function sellToken(uint256 _amount) public { pendingWithdrawals[msg.sender] = _amount; transfer(owner, _amount); withdrawEth(); emit PCTSellEvent(msg.sender, owner, _amount); } // Using the Withdraw Pattern to remove Eth from contract account when user // wants to return PCT // https://solidity.readthedocs.io/en/latest/common-patterns.html#withdrawal-from-contracts function withdrawEth() public { uint256 amount = pendingWithdrawals[msg.sender]; // Pending refund 
zeroed before sending to prevent re-entrancy attacks
        pendingWithdrawals[msg.sender] = 0;
        msg.sender.transfer(amount * 1 ether);
    }
}
```

This contract when called generates a number of 'PreciousChickenTokens' and allocates them to the address that deployed the contract - in our case that will be the first address listed in Ganache (marked as 'Index 0'). It then has a function to allow users to buy PCTs in exchange for Ether (with the conversion rate set at one Eth equal to one PCT), and to sell those PCTs back in return for Ether. Any Ether collected goes to the contract address itself.

Again it is worth re-iterating that this code is intended as a demonstrator only and is not intended to be a model for anything production-ready that is exposed to financial risk.

Next is the code that deploys the contract we've just written. Create the following file:

```bash
vim migrations/2_deploy_contract.js
```

Copy and paste the following into the file you've just created:

```javascript
var PreciousChickenToken = artifacts.require("PreciousChickenToken");

module.exports = function(deployer) {
  // Arguments are: contract, initialSupply
  deployer.deploy(PreciousChickenToken, 1000);
};
```

This code is responsible for telling Truffle to deploy the smart contract we've just written above. It also serves to provide arguments to the constructor method within _PreciousChickenToken.sol_: this has one argument called *\_initialSupply*, which sets the amount of tokens created on deployment of the contract (i.e. one thousand).
Lastly we need to make some amends to our existing truffle configuration file:

```bash
vim truffle-config.js
```

Delete the entire contents and replace with this:

```javascript
const path = require("path");

module.exports = {
  contracts_build_directory: path.join(__dirname, "client/src/contracts"),
  networks: {
  },
  mocha: {
  },
  compilers: {
    solc: {
      version: "^0.6.2", // Fetch exact version from solc-bin (default: truffle's version)
    }
  }
}
```

So a number of changes have been made to the original:

- Comments have been deleted. Clearly normally we'd keep them in, but for the purposes of this guide it is easier to see what's happening if we delete them.
- A build directory has been set in a sub-directory. We are going to be putting our React front end in a subdirectory called *client*. As the front end needs to have access to the smart contract and can't view the root directory, we need to tell Truffle to build the files in a directory it does have access to, i.e. *client/src/contracts*. An alternative way to do this would be to create a soft link (e.g. `ln -s ../../build/contracts/ contracts`) and let Truffle build the files where it normally would.
- We've changed the Solidity compiler to minimum version 0.6.2. This is different to the Truffle default, which would fail to compile the OpenZeppelin ERC20 (3.1.0) due to dependencies. If we weren't using React or the latest version of OpenZeppelin's contracts therefore, we could keep this as is.
## Build the front end

As we are viewing this in React we now need to generate the front end; to do this we are going to use create-react-app to create a new sub-folder:

```bash
npx create-react-app client
```

Once finished you will have your standard create-react-app files and folders, but there are a couple of additional installations we need to do:

```bash
cd client
npm install ethers react-bootstrap bootstrap
npm audit fix
```

(**Update 25 November 2020**: Currently running the above command with npm version 7.0.8 creates a `npm ERR! Cannot read property 'length' of undefined` error. Using version 6.14.9 however does not. If this command generates an error therefore, check your npm version with `npm -v`.)

The first of these packages, [ethers.js](https://github.com/ethers-io/ethers.js/), is the most important - it aims to be a "complete and compact library for interacting with the Ethereum Blockchain and its ecosystem"; the second two are for the purposes of UI. The primary alternative to ethers.js is [web3.js](https://github.com/ethereum/web3.js/); [Adrian Li](https://github.com/adrianmcli/web3-vs-ethers) and [infura.io](https://blog.infura.io/ethereum-javascript-libraries-web3-js-vs-ethers-js-part-i/) have written more on the difference between the two.
Edit the following file with your text editor:

```bash
vim src/App.js
```

Delete all the content, and replace with:

```javascript
import React, { useState } from 'react';
import './App.css';
import { ethers } from "ethers";
import PreciousChickenToken from "./contracts/PreciousChickenToken.json";
import { Button, Alert } from 'react-bootstrap';
import 'bootstrap/dist/css/bootstrap.min.css';

// Needs to change to reflect current PreciousChickenToken address
const contractAddress ='0xa8dC92bEeF9E5D20B21A5CC01bf8b6a5E0a51888';

let provider;
let signer;
let erc20;
let noProviderAbort = true;

// Ensures metamask or similar installed
if (typeof window.ethereum !== 'undefined' || (typeof window.web3 !== 'undefined')) {
  try {
    // Ethers.js set up, gets data from MetaMask and blockchain
    window.ethereum.enable().then(
      provider = new ethers.providers.Web3Provider(window.ethereum)
    );
    signer = provider.getSigner();
    erc20 = new ethers.Contract(contractAddress, PreciousChickenToken.abi, signer);
    noProviderAbort = false;
  } catch(e) {
    noProviderAbort = true;
  }
}

function App() {
  const [walAddress, setWalAddress] = useState('0x00');
  const [pctBal, setPctBal] = useState(0);
  const [ethBal, setEthBal] = useState(0);
  const [coinSymbol, setCoinSymbol] = useState("Nil");
  const [transAmount, setTransAmount] = useState('0');
  const [pendingFrom, setPendingFrom] = useState('0x00');
  const [pendingTo, setPendingTo] = useState('0x00');
  const [pendingAmount, setPendingAmount] = useState('0');
  const [isPending, setIsPending] = useState(false);
  const [errMsg, setErrMsg] = useState("Transaction failed!");
  const [isError, setIsError] = useState(false);

  // Aborts app if metamask etc not present
  if (noProviderAbort) {
    return (
      <div>
        <h1>Error</h1>
        <p><a href="https://metamask.io">Metamask</a> or equivalent required to access this page.</p>
      </div>
    );
  }

  // Notification to user that transaction sent to blockchain
  const PendingAlert = () => {
    if (!isPending) return null;
    return (
      <Alert key="pending" variant="info" style={{position: 'absolute', top: 0}}>
        Blockchain event notification: transaction of {pendingAmount}
        &#x39e; from <br />
        {pendingFrom} <br />
        to <br />
        {pendingTo}.
      </Alert>
    );
  };

  // Notification to user of blockchain error
  const ErrorAlert = () => {
    if (!isError) return null;
    return (
      <Alert key="error" variant="danger" style={{position: 'absolute', top: 0}}>
        {errMsg}
      </Alert>
    );
  };

  // Sets current balance of PCT for user
  signer.getAddress().then(response => {
    setWalAddress(response);
    return erc20.balanceOf(response);
  }).then(balance => {
    setPctBal(balance.toString())
  });

  // Sets current balance of Eth for user
  signer.getAddress().then(response => {
    return provider.getBalance(response);
  }).then(balance => {
    let formattedBalance = ethers.utils.formatUnits(balance, 18);
    setEthBal(formattedBalance.toString())
  });

  // Sets symbol of ERC20 token (i.e. PCT)
  async function getSymbol() {
    let symbol = await erc20.symbol();
    return symbol;
  }
  let symbol = getSymbol();
  symbol.then(x => setCoinSymbol(x.toString()));

  // Interacts with smart contract to buy PCT
  async function buyPCT() {
    // Converts integer as Eth to Wei,
    let amount = await ethers.utils.parseEther(transAmount.toString());
    try {
      await erc20.buyToken(transAmount, {value: amount});
      // Listens for event on blockchain
      await erc20.on("PCTBuyEvent", (from, to, amount) => {
        setPendingFrom(from.toString());
        setPendingTo(to.toString());
        setPendingAmount(amount.toString());
        setIsPending(true);
      })
    } catch(err) {
      if(typeof err.data !== 'undefined') {
        setErrMsg("Error: "+ err.data.message);
      }
      setIsError(true);
    }
  }

  // Interacts with smart contract to sell PCT
  async function sellPCT() {
    try {
      await erc20.sellToken(transAmount);
      // Listens for event on blockchain
      await erc20.on("PCTSellEvent", (from, to, amount) => {
        setPendingFrom(from.toString());
        setPendingTo(to.toString());
        setPendingAmount(amount.toString());
        setIsPending(true);
      })
    } catch(err) {
      if(typeof err.data !== 'undefined') {
        setErrMsg("Error: "+ err.data.message);
      }
      setIsError(true);
    }
  }

  // Sets state for value to be transacted
  // Clears extant alerts
  function valueChange(value) {
    setTransAmount(value);
    setIsPending(false);
    setIsError(false);
  }

  // Handles user buy form submit
  const handleBuySubmit = (e: React.FormEvent) => {
    e.preventDefault();
    valueChange(e.target.buypct.value);
    buyPCT();
  };

  // Handles user sell form submit
  const handleSellSubmit = (e: React.FormEvent) => {
    e.preventDefault();
    valueChange(e.target.sellpct.value);
    sellPCT();
  };

  return (
    <div className="App">
      <header className="App-header">
        <ErrorAlert />
        <PendingAlert />
        <img src="https://upload.wikimedia.org/wikipedia/commons/thumb/6/6f/Ethereum-icon-purple.svg/512px-Ethereum-icon-purple.svg.png" className="App-logo" alt="Ethereum logo" />
        <h2>{coinSymbol}</h2>
        <p>
          User Wallet address: {walAddress}<br/>
          Eth held: {ethBal}<br />
          PCT held: {pctBal}<br />
        </p>
        <form onSubmit={handleBuySubmit}>
          <p>
            <label htmlFor="buypct">PCT to buy:</label>
            <input type="number" step="1" min="0" id="buypct" name="buypct"
              onChange={e => valueChange(e.target.value)} required
              style={{margin:'12px'}}/>
            <Button type="submit">Buy PCT</Button>
          </p>
        </form>
        <form onSubmit={handleSellSubmit}>
          <p>
            <label htmlFor="sellpct">PCT to sell:</label>
            <input type="number" step="1" min="0" id="sellpct" name="sellpct"
              onChange={e => valueChange(e.target.value)} required
              style={{margin:'12px'}}/>
            <Button type="submit">Sell PCT</Button>
          </p>
        </form>
        <a title="GitR0n1n / CC BY-SA (https://creativecommons.org/licenses/by-sa/4.0)"
          href="https://commons.wikimedia.org/wiki/File:Ethereum-icon-purple.svg">
          <span style={{fontSize:'12px',color:'grey'}}>
            Ethereum logo by GitRon1n
          </span>
        </a>
      </header>
    </div>
  );
}

export default App;
```

## Deploy the contract

With everything built we are going to deploy the smart contract. At the terminal we change directory back to the one containing our contract and deploy:

```bash
cd ..
truffle deploy
```

If everything works you should see output similar to this:

[![Truffle deploying PreciousChickenToken](https://www.preciouschicken.com/blog/images/truffle_deploy.png)](https://www.preciouschicken.com/blog/images/truffle_deploy.png)

I've highlighted the contract address in a yellow box on the above - this is the address on the blockchain that your contract has been deployed to (your address will be similar but different). Copy this address to clipboard. Now we know this address we have to change the client src code to reflect this. Therefore edit *App.js*:

```bash
vim client/src/App.js
```

Find the relevant line of code, in my example it is:

```javascript
const contractAddress ='0xa8dC92bEeF9E5D20B21A5CC01bf8b6a5E0a51888';
```

and replace the string within the single quotes with the address you copied from the yellow box above.

If you switch to Ganache you will see that the first account (Index 0) no longer has a balance of 100 Eth, this is because a small amount of Eth has been consumed in deploying the contract. This account now also owns 1000 PreciousChickenTokens, although we don't know that looking at Ganache.

## Approve the transacting account

As the first account in Ganache (Index 0) is now set as the *owner* by the smart contract, e.g. it owns the 1000 ERC20 tokens, we'll be using the account at Index 1 to transact on. The ERC20 specification says that when the [transferFrom](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-20.md#transferfrom) method is used authorisation has to be given specifically so that Account A can pass Account B tokens. We will do this using truffle console. Therefore at the terminal:

```bash
truffle console
```

Your prompt should now change to `truffle(ganache)>` or similar. Enter:

```javascript
token = await PreciousChickenToken.deployed()
```

If successful this should return `undefined`.
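To make the allowance step that follows less mysterious, here is a toy plain-JavaScript model of the ERC20 balance and allowance bookkeeping that `transferFrom` relies on. All names here (`createToken` and friends) are illustrative - this is not the OpenZeppelin implementation, just a sketch of the behaviour the specification describes:

```javascript
// Toy model of ERC20 allowance bookkeeping (not real contract code).
function createToken(ownerAddr, initialSupply) {
  const balances = { [ownerAddr]: initialSupply };
  const allowances = {}; // allowances[owner][spender] = amount spender may move

  return {
    balanceOf: (addr) => balances[addr] || 0,
    allowance: (owner, spender) => (allowances[owner] || {})[spender] || 0,
    increaseAllowance(owner, spender, amount) {
      allowances[owner] = allowances[owner] || {};
      allowances[owner][spender] = (allowances[owner][spender] || 0) + amount;
    },
    // spender moves tokens out of owner's balance, consuming allowance
    transferFrom(spender, owner, to, amount) {
      const allowed = (allowances[owner] || {})[spender] || 0;
      if (allowed < amount) throw new Error("transfer amount exceeds allowance");
      if ((balances[owner] || 0) < amount) throw new Error("transfer amount exceeds balance");
      allowances[owner][spender] = allowed - amount;
      balances[owner] -= amount;
      balances[to] = (balances[to] || 0) + amount;
    },
  };
}

const token = createToken("acc0", 1000);          // deployer holds all 1000 PCT
token.increaseAllowance("acc0", "acc1", 1000);    // like the truffle console step below
token.transferFrom("acc1", "acc0", "acc1", 5);    // acc1 "buys" 5 PCT from acc0
console.log(token.balanceOf("acc1"));             // 5
console.log(token.allowance("acc0", "acc1"));     // 995
```

Without the `increaseAllowance` step, the `transferFrom` inside `buyToken` would revert - which is exactly why the console command that follows is needed.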
We can now increase the allowance:

```javascript
token.increaseAllowance(accounts[1], 1000, {from: accounts[0]})
```

This code should output a transaction log to the console. The command states that accounts[0] (e.g. the Ganache account at Index 0 - defined as the *owner* within the smart contract) authorises accounts[1] (e.g. Index 1) to hold up to 1000 tokens.

Lastly exit the console:

```javascript
.exit
```

## Start React

Time to fire up React:

```bash
cd client
npm start
```

Your browser should now point to `localhost:3000`. If you do not have [Metamask](https://metamask.io) installed, or some equivalent software for accessing the Ethereum blockchain, then you will see the error message: *Metamask or equivalent required to access this page*. The remainder of this guide will assume you have a fresh install of Metamask; I haven't tested other options (e.g. [Brave](https://brave.com/) browser), so they may not work.

If your browser is suitably enabled you should otherwise see the rotating Ethereum symbol, and a number of blank fields:

[![Pre-wallet import PreciousChickenToken splash screen](https://www.preciouschicken.com/blog/images/metamask_gc_blank.png)](https://www.preciouschicken.com/blog/images/metamask_gc_blank.png)

## Import wallet into Metamask

This process will assume your starting point is a fresh install of Metamask on Google Chrome (v83.0.4103.116); other browsers are likely going to be different; and if you have already used Metamask previously you will need to logout etc. Assuming you do have that fresh install, selecting the Metamask extension icon will result in:

[![Pre-wallet import PreciousChickenToken splash screen](https://www.preciouschicken.com/blog/images/metamask_gc_welcome.png)](https://www.preciouschicken.com/blog/images/metamask_gc_welcome.png)

Selecting the *Get Started* option will take us to our set up options.
Here we want to select *No, I already have a seed phrase*:

[![Metamask: Choose seed](https://www.preciouschicken.com/blog/images/metamask_gc_new.png)](https://www.preciouschicken.com/blog/images/metamask_gc_new.png)

We now need our seed phrase that Ganache has created for us; at the top of the screen we will find twelve words under the heading *Mnemonic*. Copy and paste them into the *Wallet Seed* field below, and add a password of your choosing:

[![Metamask: import seed](https://www.preciouschicken.com/blog/images/metamask_gc_import.png)](https://www.preciouschicken.com/blog/images/metamask_gc_import.png)

There will be a number of congratulatory / analytics screens to click through, after which you will see your account. Currently blank, as we haven't connected it to our local blockchain instance. Therefore select *Custom RPC* from the *Networks* drop-down menu accessed by selecting *Main Ethereum Network* (which is the currently selected network):

[![Metamask: Networks](https://www.preciouschicken.com/blog/images/metamask_gc_networks.png)](https://www.preciouschicken.com/blog/images/metamask_gc_networks.png)

We now need to enter the details of where Metamask can find Ganache on the network. Returning to Ganache, copy the *RPC Server* details (in my instance this is `HTTP://127.0.0.1:7545`) and copy them into the *New RPC URL* field on the add network screen, give it a sensible *Network Name* (e.g. `Ganache`), and select *Save*:

[![Metamask: Custom RPC](https://www.preciouschicken.com/blog/images/metamask_gc_customrpc.png)](https://www.preciouschicken.com/blog/images/metamask_gc_customrpc.png)

Confirm that *Ganache* (or whatever you called your network) is now displayed in the network dropdown menu at the top of the Metamask window, as opposed to the default of *Main Ethereum Network*. This ensures the application is talking to your local blockchain (where we have deployed the smart contract), rather than the real-world Ethereum network (where we have not).
<!---From the left hand menu select *Connections*: this will allow us to add our React site to the list of allowed sites. Add `localhost` and select *Connect*:

[![Metamask: Connect to localhost](https://www.preciouschicken.com/blog/images/metamask_gc_connections.png)](https://www.preciouschicken.com/blog/images/metamask_gc_connections.png)

Returning to our React tab and refreshing the screen we see that Metamask has successfully connected and values appear:

[![Chrome: Account 0 Details](https://www.preciouschicken.com/blog/images/metamask_gc_success0.png)](https://www.preciouschicken.com/blog/images/metamask_gc_success0.png) -->

We now need to return to the page serving our React app (typically this is [http://localhost:3000/](http://localhost:3000/)). A pop up should now appear asking to 'Connect with Metamask':

[![Chrome: Connect with Metamask](https://www.preciouschicken.com/blog/images/metamask_gc_connect_with_m.png)](https://www.preciouschicken.com/blog/images/metamask_gc_connect_with_m.png)

We don't want to select *Next* to proceed yet though. Slightly confusingly, as Ganache uses zero-based numbering, when it asks to connect to Account 1, this is actually Ganache Account 0 - which contains all of our PCT. We want to connect on Ganache Account 1 (which Metamask refers to as Account 2). Therefore select the *New Account* link, *save* the default option of *Account 2*; then de-select Account 1 and select Account 2 as shown:

[![Chrome: Connect with Metamask to Account 1](https://www.preciouschicken.com/blog/images/metamask_gc_connect_with_newaccount.png)](https://www.preciouschicken.com/blog/images/metamask_gc_connect_with_newaccount.png)

<!---Slightly confusingly although Metamask refers to this as Account 1, within Ganache this account is Account, or Index, 0 - which currently owns all thousand PCT minted. We will be transacting on Account 1; so we now need to import this account into Metamask.

Returning to Ganache select the key icon to the right of the text *Index 1* and then copy the Private Key that appears:

[![Ganache: private key reveal](https://www.preciouschicken.com/blog/images/ganache_privatekey.png)](https://www.preciouschicken.com/blog/images/ganache_privatekey.png)

Returning to Chrome and the Metamask extension select the Accounts menu, by selecting the multicoloured circular icon on the top-right, and then the *Import account* option:

[![Metamask: import account](https://www.preciouschicken.com/blog/images/metamask_gc_importaccount.png)](https://www.preciouschicken.com/blog/images/metamask_gc_importaccount.png)

And paste the Private Key from Ganache in:

[![Metamask: import private key](https://www.preciouschicken.com/blog/images/metamask_gc_privatekey.png)](https://www.preciouschicken.com/blog/images/metamask_gc_privatekey.png) -->

Once done select *Next*. We now get another confirmation from Metamask as to whether we want to connect Account 2:

[![Chrome: Connect with Metamask to Account 1](https://www.preciouschicken.com/blog/images/metamask_gc_connect_to_a2.png)](https://www.preciouschicken.com/blog/images/metamask_gc_connect_to_a2.png)

Select *Connect* and after a browser refresh we should now have the details of Ganache Account 1 on screen:

[![Chrome: Account 1 Details](https://www.preciouschicken.com/blog/images/metamask_gc_success1.png)](https://www.preciouschicken.com/blog/images/metamask_gc_success1.png)

Success! A bit of a tortuous process, that varies across browsers. For instance other browsers might not produce a pop-up, but rather have a far less obvious notification on the Metamask icon itself.
<!--- For instance if you are using Firefox (v78.0.1) then connecting to a local site is handled differently using a *Connected sites* menu option:

[![Firefox: Connected sites](https://www.preciouschicken.com/blog/images/metamask_ff_connectedsites.png)](https://www.preciouschicken.com/blog/images/metamask_ff_connectedsites.png)

[![Firefox: Connect with Metamask](https://www.preciouschicken.com/blog/images/metamask_ff_localsiteconnect.png)](https://www.preciouschicken.com/blog/images/metamask_ff_localsiteconnect.png) -->

## Buy, buy, buy; sell, sell, sell!

At this point we can go ahead, test the application, and buy and sell some PCT. Putting an order to buy PCT will result in a request for authorisation from Metamask (this might pop up, or remain in the background, in which case you will have to manually select the Metamask icon):

[![Metamask authorisation](https://www.preciouschicken.com/blog/images/metamask_gc_authorisation.png)](https://www.preciouschicken.com/blog/images/metamask_gc_authorisation.png)

Authorisation will lead to the relevant block being mined on the local blockchain and if successful a pop up will appear:

[![Google Chrome event success](https://www.preciouschicken.com/blog/images/metamask_gc_eventsuccess.png)](https://www.preciouschicken.com/blog/images/metamask_gc_eventsuccess.png)

Likewise failure (in this case trying to sell more PCT than the Account holds) will lead to an error message:

[![Google Chrome event fail](https://www.preciouschicken.com/blog/images/metamask_gc_eventfail.png)](https://www.preciouschicken.com/blog/images/metamask_gc_eventfail.png)

Go ahead and try and break things.

## Configuration control

If this hasn't worked it might be due to different software versions having been released since this post was written. This is especially the case due to the number of moving parts plus the frequency of breaking changes in the Ethereum world.
So for reference, components used are: node v14.4.0, truffle v5.1.30, ganache v2.4.0, metamask v7.7.9, openzeppelin/contracts v3.1.0, ethers v5.0.3, create-react-app v3.4.1, solidity v0.6.2 and my OS is Ubuntu 20.04 LTS ([Regolith](https://regolith-linux.org) flavour).

## Conclusions

So my aim here was to achieve working code, rather than to develop the next hot ICO (if such things even exist anymore). There is no testing (__bad__), it doesn't handle decimals, and there are oodles of other things to refactor - however it does give a good idea of how the building blocks stack up. I also got some great feedback on the [OpenZeppelin forum](https://forum.openzeppelin.com/t/preciouschickentoken-a-guided-example-of-openzeppelins-erc20-using-ethers-truffle-and-react/3257/2?u=preciouschicken) as to how the structure of the smart contract could be improved:

> I would keep any non-core functionality separate from the token (such as purchasing). The token should ideally just be the functionality to use the token. If you have purchasing functionality you can put this in a separate contract.
>
> You could also transfer an amount of tokens to the purchasing functionality contract rather than having to set an allowance for the token contract to be able to use some of the deployer of the token contracts tokens.

Definitely worth bearing in mind if this guide is being used as a jumping off point (or I revisit this). If you have feedback, observations, etc, I'd love to read them in the comments section.
## Further reading

### Documentation

- [OpenZeppelin contracts documentation](https://docs.openzeppelin.com/contracts/3.x/), particularly the [ERC20](https://docs.openzeppelin.com/contracts/3.x/erc20) section
- [Ethers documentation](https://docs.ethers.io/)

### Tutorials

- [Code Your Own Cryptocurrency on Ethereum (How to Build an ERC-20 Token and Crowd Sale website)](https://www.dappuniversity.com/articles/code-your-own-cryptocurrency-on-ethereum) by Dapp University
- [How to Connect a React App to the Blockchain](https://www.publish0x.com/blockchain-developer/how-to-connect-a-react-app-to-the-blockchain-xvveoe)
- [How to create an ERC20 token the simple way](https://www.toptal.com/ethereum/create-erc20-token-tutorial)
- [Ethereum Dapps with Truffle, Ganache, Metamask, OppenZippelin and React](https://www.techiediaries.com/ethereum-truffle-react/)

### Additional background

- [Points to consider when creating a fungible token (ERC20, ERC777)](https://forum.openzeppelin.com/t/points-to-consider-when-creating-a-fungible-token-erc20-erc777/2915) by OpenZeppelin
- [Ethereum smart contract security best practices](https://consensys.github.io/smart-contract-best-practices/)
preciouschicken
380,821
Why Backwards Compatibility is Critical
Backwards compatibility is not something I see discussed much in tech circles. It’s all new-new-new,...
0
2020-07-08T14:09:26
https://joshghent.com/backwards-compatible/
---
title: Why Backwards Compatibility is Critical
published: true
date: 2020-07-03 12:31:03 UTC
tags:
canonical_url: https://joshghent.com/backwards-compatible/
---

Backwards compatibility is not something I see discussed much in tech circles. It’s all new-new-new, fast-fast-fast. Piling features on top of one another and tightly coupling releases between services. Previously Facebook, the [4th most popular site on the internet](https://www.alexa.com/topsites) no less, had the mantra of “move fast and break things”. These are the kinds of sentiments I see all around me, particularly from startup and SaaS companies. I’ve always felt that coupling releases too closely was insanity inducing, and I’ve seen first hand how corrosive it is to a customer’s experience of the product.

But the web hasn’t always been like this. The core backbone of the internet, in the form of TCP/IP, DNS, HTTP and even HTML and CSS, has been unchanged for many years - or at least changed in a manner that doesn’t break previous versions. As an example, both [Space Jam’s website](http://spacejam.com) and [Million Dollar Homepage](http://www.milliondollarhomepage.com/) still function on modern browsers, having been created in 1996 and 2005 respectively.

So what happened? There isn’t one conclusive answer to this, more a prevailing zeitgeist amongst developers and product managers. But in my view, it’s due to the large investment that technology has seen over the past 20 years. It’s grown exponentially. With that, we have seen bad business practices, ill thought out ideas and customers that are keeping a company afloat. These things existed before, but now manifest themselves in the technology that these organisations build. Additionally, products are architected around small services given a single responsibility. Previously, the web was simple - throw a LAMP stack on a server somewhere and Bob’s your uncle. There were a lot less moving parts.
Now this isn’t going to be a nostalgic post where we reminisce about the days of the “good ol’ web” or something, because I find that all a bit petty. I want to discuss how we need to build things to last, and practical ways to do that in the face of “moving fast” (side note: watch Bryan Cantrill’s fantastic talk on the principles of tech leadership [here](https://www.youtube.com/watch?v=9QMGAtxUlAc)).

# But why do developers avoid making things backwards compatible?

This article wouldn’t exist if there wasn’t at least one answer to this question.

Primarily it boils down to “it’s more effort”. If you’re making a major change to a service, and you need all the other teams that consume or otherwise use this service to also make the required change, then it’s reasonable to assume you don’t really need the old system - and it’s potentially more work to maintain it.

Your team may also not even have a “versioning” strategy in place. I’ve sat in well over 5 hours of meetings about how to version services with no outcome. A lot of people have opinions about this, and often developers seem more intent on arguing the other’s point rather than accomplishing the objective behind the change.

Furthermore, there have arguably been a number of failures in attempting to preserve backwards compatibility, such as with Java and SQLite3.

These challenges can be major roadblocks in creating stability and backwards compatibility in your product’s services.

# Why is it important then?

First we need to clarify that preserving backwards compatibility is not about holding onto legacy. If something is old, busted, broken or unused, then by all means pave over it and start afresh. There’s no need to attach infinite eels to yourself to support absolutely every single use case of your service. Things change as software changes. It’s natural.

On the other hand, backwards compatibility is about not creating unnecessary work for hundreds of your users every time you make a change.
Or coupling releases so tightly that everything has to be deployed at exactly the same time and caches flushed in sync. Or having no deprecation plan and changing external interfaces constantly.

Stripe treads this line very carefully (however, I am unaware of the overall experience as a developer there). Being a payments processor, there have to be certain guarantees about how things will be handled. To accomplish this, Stripe use a date-versioned API system. You get assigned the latest versioned API when you create an account and can easily update the API if you so wish. But you also have the option to leave it completely. In fact, there are still websites I built a few years ago with now old Stripe integrations that tick along fine. They have a great post about their versioning mechanism [here](https://stripe.com/blog/api-versioning).

# How to do it

You might assume that because you use a /v1 and /v2 in your endpoints you’re all set, right? Well, not so fast. Ultimately, as humans we are prone to reason about things that we ourselves cannot fully understand. Therefore, what constitutes a major version bump for some may not for others.

So how can you do it?

## 1. General coding practice

If you’re changing something minor, like the name of a parameter on an interface or its type, then there is the possibility that you can support the old method by casting it to the new type and so forth. There are many general coding practices that allow your code to be bug-hardened whilst not introducing lots of spaghetti.

Additionally, a good starting point for all backwards compatible changes is to mark the “old” code with a “deprecation” warning of some kind, so that other developers in your team know not to use that code any more.

## 2. Documentation and Deprecations

If you’re going to make a breaking change to something, you need a way to communicate that to the consumers of your product, and you need to tell them how to update if they absolutely cannot be held on the previous version for some reason.

You can do this by giving the customer plenty of warning via email, an account manager or a deprecation warning in the response. You could have a system whereby, when a deprecated API method is called, it logs it to a table. Each day, the table can then be scanned and you can tell the customer “You called X route which has been deprecated and will no longer receive updates, please see N website for documentation on how to update”.

Hand-in-hand with this goes a clear policy on how long you will support “deprecated” routes. Depending on the market you’re in, that could be a few months or a few years. Either way, be clear to your customers. Again, Stripe does a pretty good job of saying “you can use this near indefinitely” and including that as part of its marketing to developers.

## 3. Pivoting

If making something backwards compatible has become so incredibly painful that you’d rather play hopscotch on a floor of hot coals, then you need to ask if the service has pivoted to a point where it’s potentially a whole new thing.

As an example of this, I created a service for a messaging application that kept track of when a customer had last read a particular message. However, it then needed the functionality to manage if the customer had left a group message, and then if the customer had muted the messages, and so on. Before you knew it, it was no longer an API for managing if the customer had read a message or not but more a fully fledged notifications API.

In retrospect, I should have seen this inevitability. But the service had devolved to a point where it wasn’t anything like the original.
Although it was an internal-only service, it’s something that, looking back, I should have redone and gradually migrated over to the new service. Although this may not be preserving backwards compatibility in a true sense, as long as you provide a sensible upgrade path and don’t immediately shut down the old service then it’s OK in my books.

## 4. Versioning

We’ve touched on this a few times in this post, and arguments about different versioning strategies have raged since time began.

I’m not going to provide any guidance on which one is best or which one you should choose - simply decide on one as a team and agree upon clear definitions about what constitutes a version upgrade. Then include this as part of your release strategy. If you work with continuous deployment then perhaps look at something similar to Stripe, or a Semver strategy that goes beyond a “/v1” and “/v2” route structure (although it may include it).

It will depend on a few factors:

* Expectations of the market you are in - how long do your customers expect to use your product and forget about the implementation? Hint: it’s often longer than you think
* What is your release cycle like? If it’s daily then you need something to automate the process; if it’s each decade where you pack your software onto a disc then it can be something manual
* Do you have a lot of third party consumers? If there are consumers of your service beyond your company then you will have different requirements about deprecation etc.

TL;DR - Pick one and go with it. You can even pick different mechanisms for different services!

## 5. Limit dependencies

> Mo’ Dependencies, mo’ problems - Notorious D.E.V

By limiting the number of dependencies you use, versioning will be easier because you no longer need to provide constant security updates and check that every last version of your software works with each one. The Node community is particularly bad at this one, and often doesn’t provide security and bug fixes downstream, instead just forcing everyone to upgrade to the latest version. We can do better than this. And we make our lives a lot easier by reducing dependencies.

# Conclusion

There are lots of arguments against backwards compatibility and I can understand why. Personally speaking, I want to build to last. I’d like to think that in 10 years time I could still use my products without having to change the integration. Something about seeing the Space Jam website just sort of fills me with a warm glow of a moment in time that is accessible at any point.
joshghent
380,865
[React] Improve your portfolio for FAANG
If anyone is interested in improving their portfolio and standing out to FAANG (and other large compa...
0
2020-07-03T16:47:07
https://dev.to/ipzard/improve-your-portfolio-for-faang-23lp
If anyone is interested in improving their portfolio and standing out to FAANG (and other large companies), we could use some new members in our open-source React GitHub organization! Email me your GitHub ID (or post it here) and I'll send you an invite.

Docs: https://default.services
GitHub: https://github.com/default-services
Email: support@default.services
ipzard
380,946
Test behaviour, not implementation
Introduction Last year I spent a lot of time with writing and also practicing how to write...
0
2020-07-03T17:10:03
https://devz.life/blog/test-behaviour-not-implementation/
testing, junit5, mockito, java
---
title: Test behaviour, not implementation
published: true
date: 2020-06-20 09:30:00 UTC
cover_image: https://dev-to-uploads.s3.amazonaws.com/i/0jplgtz4scbzcz761s9l.jpg
tags: testing, Junit5, Mockito, Java
canonical_url: https://devz.life/blog/test-behaviour-not-implementation/
---

## Introduction

Last year I spent a lot of time writing, and practicing how to write, good unit/integration tests in AEM (Adobe Experience Manager). Now I would like to share with you what I have learned so far. What I have learned is not only AEM related; you can apply it to any programming language or framework.

Before that time I had some "experience" with unit testing. I had written several unit tests, so I could say "I have experience with it". But to be honest, I didn't like writing them. Even though I knew what benefits tests bring to the product we are building, to the team members, and to me, I didn't care. Typical excuses were:

- "I don't have time or we don't have time for it"
- "I can't test or mock that"
- "You can't write tests, pick up some new feature or fix a bug"

and in the end nobody pushed me to write them. Sadly, writing tests wasn't part of the development process. Thinking about it now, I didn't know how to write tests. Let's face it, writing tests is not easy, like many other things when you don't have experience with them. Luckily that changed at some point, and I would like to try to convince all of you who still think like the "old" me. I would like us all to start thinking more about quality, not quantity.

## Writing tests as part of the development process

Most of us work in an "Agile" way (whatever that means) and use the DDD (Deadline-Driven Development) methodology (I could write about that in a separate post). This usually means that there is no time for writing tests.
This needs to change: developers and all other technical team members should convince the rest of the team that writing tests should be part of the development process. Writing tests should be part of any estimation. Period. Why? There are a lot of benefits, but I will point out the most important of them:

1. Bug prevention
2. Better code quality
3. Provides some kind of documentation
4. Time saving
5. Money saving
6. Feeling "safe"

Now let's see the typical "disadvantages":

1. Time consuming
2. Money consuming
3. Tests are slow to write
4. Tests are slow to run
5. Changing implementation requires changing tests

You have probably noticed that I mentioned **time and money** as both an advantage and a disadvantage. If you are thinking in the short term, then yes, it's a waste of time and money; but if you think in the long term, it's not true. In the end it actually saves you time and money.

A lot of people think tests are a waste of time and money. I think it's because they don't see a visible outcome from them, the way they do when you build a feature. Try to think of it like this: with tests you can prevent a lot of bugs and a lot of ping-pong between developers and QAs. Very often we get change requests during development, and it happens that we implemented something the wrong way. The new request just doesn't fit the existing implementation anymore. That means we need to refactor our old code or reimplement it from scratch. Here tests give you a feeling of safety, because you know whether or not you broke behaviour.

Another example could be big, never-ending projects where several different teams have worked before you. Such a project probably has poorly written documentation, and you need to deal with legacy code and implement new features on top of it. Having tests is gold here. Also, a lot of projects start as an MVP which turns into a core/base project with several subprojects. Not having test coverage there is total nonsense.

The last three disadvantages are also not true.
- Tests are slow to write - yes, if you don't know how to write them and don't have experience: practice
- Tests are slow to run - again yes, if you don't know how to write them
- Changing implementation requires changing tests - yes, because you are testing the wrong things: test behaviour, not implementation

You don't believe me? Take an hour of your time and watch the talk **"TDD, Where Did It All Go Wrong"** by **Ian Cooper.** For me this was an eye opener. Before this talk I had read a few books about testing and I was not so convinced. In my opinion this is definitely the best talk about it.

{% youtube EZ05e7EMOLM %}

**tl;dr**

- **Test behaviour / requirements, not implementation** - with this kind of approach you will eliminate the previously mentioned disadvantages
- Test the public API of a module, not classes, methods or technical details
- A unit test shouldn't be focused on classes and methods; it should be focused on modules and user stories
- Tests give you a promise of what the expected result/behaviour should be, so when you are refactoring an implementation, use the tests to make sure that the implementation still yields the expected results
- Write tests to cover the use cases or stories
- Use the "Given When Then" model
- Avoid mocks

This testing approach helps you to build the **right product**. A negative point could be that it doesn't help you to build the **product right**. Another downside is that you don't see exactly what is wrong when a test fails. So the classic unit-testing approach pushes you to write cleaner, higher-quality code than "behaviour testing" does. In my opinion, strict code reviews and static code analysis tools are a better way to achieve the same result. The second downside is a really minor thing for me, since with debugging you can quickly find out what is happening.

I hope that you are still following me and that I'm starting to change your thinking about testing a little. Now let's stop with theory and see how it works in practice.
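As noted above, this idea works in any stack. Here is a minimal sketch in plain JavaScript (a hypothetical cart module, not part of the AEM example below): the test exercises only the module's public API in a Given/When/Then shape, so the internal representation is free to change without breaking the test.

```js
// Public API of the hypothetical module under test; how it stores items
// internally is an implementation detail we deliberately don't test.
function createCart() {
  const items = []
  return {
    add(name, price) { items.push({ name, price }) },
    total() { return items.reduce((sum, item) => sum + item.price, 0) },
  }
}

// GIVEN a cart with two items
const cart = createCart()
cart.add('coffee', 3)
cart.add('bagel', 2)

// WHEN we ask for the total
const total = cart.total()

// THEN we assert the behaviour (the requirement), not the internals
if (total !== 5) throw new Error(`expected total 5, got ${total}`)
console.log('behaviour test passed')
```

If `createCart` were later refactored to keep a running sum instead of an item list, this test would still pass unchanged - which is exactly the point.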
## Testing in AEM

Because I have been working with AEM for the last few years, I will show you how to test behaviours in your AEM projects. You can apply the same things in any other programming language or framework. Depending on testing library support, this can be easier or harder to achieve.

As an example, let's say we need to implement a Product Details API which is consumed by the client side. To build a Product Details API in, say, Spring, you would probably create several classes like a Product Controller, Service, Repository, DTOs and so on. In the AEM world this means you need to create a Sling Servlet, an OSGi Service, a Sling Model and some DTO classes.

Product Details acceptance criteria:

- show product details information (id, name, description, category id, images and variants)
- product variants need to be available for a specific country
- product variants are available from a specific date
- product variants need to be sorted by sort order
- name and description of the product need to be localized (depending on market), with English as the fallback

The implementation you will see here is not perfect; it's simplified and hardcoded. In the real world this is more complex. But here the implementation is not important; instead we should focus on how to test the requirements of this API.
I will add here just the 3 most important classes; the other implementations you can see on [Github](https://github.com/mkovacek/how-to-test-aem-demo)

#### [ProductDetails Sling Servlet](https://github.com/mkovacek/how-to-test-aem-demo/blob/develop/core/src/main/java/com/mkovacek/aem/core/servlets/products/ProductDetailsServlet.java)

- handles the request
- does some request validation
- uses ProductDetailsService to get all information about the requested product

```
package com.mkovacek.aem.core.servlets.products;

import com.mkovacek.aem.core.models.products.ProductDetailsModel;
import com.mkovacek.aem.core.records.response.Response;
import com.mkovacek.aem.core.services.products.ProductDetailsService;
import com.mkovacek.aem.core.services.response.ResponseService;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.apache.sling.api.SlingHttpServletRequest;
import org.apache.sling.api.SlingHttpServletResponse;
import org.apache.sling.api.resource.Resource;
import org.apache.sling.api.servlets.HttpConstants;
import org.apache.sling.api.servlets.SlingSafeMethodsServlet;
import org.apache.sling.servlets.annotations.SlingServletResourceTypes;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

import javax.servlet.Servlet;
import javax.servlet.ServletException;
import java.io.IOException;

@Slf4j
@Component(service = Servlet.class)
@SlingServletResourceTypes(
        resourceTypes = ProductDetailsServlet.RESOURCE_TYPE,
        selectors = ProductDetailsServlet.ALLOWED_SELECTOR,
        extensions = ProductDetailsServlet.JSON,
        methods = HttpConstants.METHOD_GET)
public class ProductDetailsServlet extends SlingSafeMethodsServlet {

    public static final String ALLOWED_SELECTOR = "productdetails";
    static final String RESOURCE_TYPE = "demo/components/productdetails";
    static final String JSON = "json";

    @Reference
    private transient ResponseService responseService;

    @Reference
    private transient ProductDetailsService productDetailsService;

    @Override
    public void doGet(final SlingHttpServletRequest request, final SlingHttpServletResponse response) throws ServletException, IOException {
        try {
            this.responseService.setJsonContentType(response);
            final String selector = request.getRequestPathInfo().getSelectorString();
            final String productId = this.responseService.getSuffix(request);
            if (this.responseService.areSelectorsValid(selector, ALLOWED_SELECTOR) && StringUtils.isNotBlank(productId)) {
                final Resource resource = request.getResource();
                final Response<ProductDetailsModel> data = this.productDetailsService.getProductDetails(productId, resource);
                this.responseService.sendOk(response, data);
            } else {
                this.responseService.sendBadRequest(response);
            }
        } catch (final Exception e) {
            log.error("Exception during handling request", e);
            this.responseService.sendInternalServerError(response);
        }
    }
}
```

#### [ProductDetails OSGi Service](https://github.com/mkovacek/how-to-test-aem-demo/blob/develop/core/src/main/java/com/mkovacek/aem/core/services/products/impl/ProductDetailsServiceImpl.java)

- searches for the requested product in the repository / database
- does some product validation
- maps the product resource to the ProductDetails model
- returns the product details

```
package com.mkovacek.aem.core.services.products.impl;

import com.day.cq.wcm.api.PageManager;
import com.mkovacek.aem.core.models.products.ProductDetailsModel;
import com.mkovacek.aem.core.records.response.Response;
import com.mkovacek.aem.core.records.response.Status;
import com.mkovacek.aem.core.services.products.ProductDetailsService;
import com.mkovacek.aem.core.services.resourceresolver.ResourceResolverService;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.apache.sling.api.resource.Resource;
import org.apache.sling.api.resource.ResourceResolver;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

import java.util.Locale;
import java.util.Optional;

@Slf4j
@Component(service = ProductDetailsService.class, immediate = true)
public class ProductDetailsServiceImpl implements ProductDetailsService {

    private static final String PIM_READER = "pimReader";
    private static final Response<ProductDetailsModel> notFoundResponse = new Response<>(new Status(true, "Product Details not found"), null);
    private static final Response<ProductDetailsModel> errorResponse = new Response<>(new Status(false, "Error during fetching product details"), null);

    @Reference
    private ResourceResolverService resourceResolverService;

    @Override
    public Response<ProductDetailsModel> getProductDetails(final String id, final Resource resource) {
        try (final ResourceResolver resourceResolver = this.resourceResolverService.getResourceResolver(PIM_READER)) {
            final Locale locale = resourceResolver.adaptTo(PageManager.class).getContainingPage(resource).getLanguage(false);
            //usually this would be implemented with a query
            final String productPath = StringUtils.join("/var/commerce/products/demo/", id);
            return Optional.ofNullable(resourceResolver.getResource(productPath))
                    .map(productResource -> productResource.adaptTo(ProductDetailsModel.class))
                    .map(productDetailsModel -> productDetailsModel.setLocale(locale))
                    .filter(ProductDetailsModel::isValid)
                    .map(productDetailsModel -> new Response<>(new Status(true), productDetailsModel))
                    .orElse(notFoundResponse);
        } catch (final Exception e) {
            log.error("Exception during fetching product details", e);
        }
        return errorResponse;
    }
}
```

#### [ProductDetails Sling Model](https://github.com/mkovacek/how-to-test-aem-demo/blob/develop/core/src/main/java/com/mkovacek/aem/core/models/products/ProductDetailsModel.java)

- representation of the product resource in the repository / database
- used as the response in JSON format

```
package com.mkovacek.aem.core.models.products;

import com.fasterxml.jackson.annotation.JsonIgnore;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.mkovacek.aem.core.services.products.ProductLocalizationService;
import com.mkovacek.aem.core.services.products.ProductValidatorService;
import lombok.Getter;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.apache.sling.api.resource.Resource;
import org.apache.sling.api.resource.ValueMap;
import org.apache.sling.models.annotations.Default;
import org.apache.sling.models.annotations.DefaultInjectionStrategy;
import org.apache.sling.models.annotations.Model;
import org.apache.sling.models.annotations.injectorspecific.ChildResource;
import org.apache.sling.models.annotations.injectorspecific.OSGiService;
import org.apache.sling.models.annotations.injectorspecific.Self;
import org.apache.sling.models.annotations.injectorspecific.ValueMapValue;

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Locale;

@Slf4j
@Model(adaptables = {Resource.class}, defaultInjectionStrategy = DefaultInjectionStrategy.OPTIONAL)
public class ProductDetailsModel {

    @ValueMapValue
    @Default(values = StringUtils.EMPTY)
    @Getter
    private String id;

    @ValueMapValue
    @Default(values = StringUtils.EMPTY)
    @Getter
    private String categoryId;

    @ChildResource
    @Getter
    private List<ImageModel> images;

    @ChildResource
    private List<VariantsModel> variants;

    @Self
    private ValueMap valueMap;

    @OSGiService
    private ProductLocalizationService productLocalizationService;

    @OSGiService
    private ProductValidatorService productValidatorService;

    @Getter
    @JsonProperty("variants")
    private List<VariantsModel> validVariants = new ArrayList<>();

    @Getter
    private String name = StringUtils.EMPTY;

    @Getter
    private String description = StringUtils.EMPTY;

    @JsonIgnore
    public boolean isValid() {
        return !this.validVariants.isEmpty();
    }

    @JsonIgnore
    public ProductDetailsModel setLocale(final Locale locale) {
        this.setLocalizedValues(locale);
        this.validateAndSortVariants(locale);
        return this;
    }

    private void setLocalizedValues(final Locale locale) {
        this.name = this.productLocalizationService.getLocalizedProductDetail(this.valueMap, "name.", locale);
        this.description = this.productLocalizationService.getLocalizedProductDetail(this.valueMap, "description.", locale);
    }

    private void validateAndSortVariants(final Locale locale) {
        this.validVariants = this.productValidatorService.getValidVariants(this.variants, locale);
        this.validVariants.sort(Comparator.comparing(VariantsModel::getSortOrder));
    }
}
```

Besides those 3 classes I needed to create several more:

- [ImageModel](https://github.com/mkovacek/how-to-test-aem-demo/blob/develop/core/src/main/java/com/mkovacek/aem/core/models/products/ImageModel.java), [VariantsModel](https://github.com/mkovacek/how-to-test-aem-demo/blob/develop/core/src/main/java/com/mkovacek/aem/core/models/products/VariantsModel.java)
- [BlobStorageService](https://github.com/mkovacek/how-to-test-aem-demo/blob/develop/core/src/main/java/com/mkovacek/aem/core/services/blobstorage/impl/BlobStorageServiceImpl.java), [ProductValidatorService](https://github.com/mkovacek/how-to-test-aem-demo/blob/develop/core/src/main/java/com/mkovacek/aem/core/services/products/impl/ProductValidatorServiceImpl.java), [ProductLocalizationService](https://github.com/mkovacek/how-to-test-aem-demo/blob/develop/core/src/main/java/com/mkovacek/aem/core/services/products/impl/ProductLocalizationServiceImpl.java), [ResourceResolverService](https://github.com/mkovacek/how-to-test-aem-demo/blob/develop/core/src/main/java/com/mkovacek/aem/core/services/resourceresolver/impl/ResourceResolverServiceImpl.java), [ResponseService](https://github.com/mkovacek/how-to-test-aem-demo/blob/develop/core/src/main/java/com/mkovacek/aem/core/services/response/impl/ResponseServiceImpl.java)
- [Response](https://github.com/mkovacek/how-to-test-aem-demo/blob/develop/core/src/main/java/com/mkovacek/aem/core/records/response/Response.java) and
[Status](https://github.com/mkovacek/how-to-test-aem-demo/blob/develop/core/src/main/java/com/mkovacek/aem/core/records/response/Status.java) records

You saw that we have a lot of classes to build this user story. What a developer would usually test here are the OSGi services. I'm not saying this is a bad approach, but it takes more time, and every time you refactor your code or add something new, it's very likely that you will need to change your tests as well. Instead of that, let's test only the Servlet, because it is the public API of this user story.

So what do we need to test in the Servlet? First of all, we need to cover all the requirements from the acceptance criteria; additionally we can cover some technical details of the servlet implementation.

## Test libraries in AEM

At the moment, in my opinion, the best library you can use is [AEM Mocks](https://wcm.io/testing/aem-mock/). AEM Mocks supports the most common mock implementations of AEM APIs and contains Apache Sling and OSGi mock implementations. For other, not yet implemented mocks you will need to implement them yourself or use [Mockito](https://site.mockito.org). Besides those two I will use JUnit 5.

Some tips before we start:

- Try to keep test classes as clean as possible; they should contain just tests
- Move mocks into separate classes
- Create some Util classes with common helper methods if you repeat yourself in multiple places
- Use the @BeforeAll / @AfterAll and @BeforeEach / @AfterEach JUnit 5 annotations so you don't repeat yourself in every test method and to speed up your tests
- Create a common AEM context in a separate class if you repeat yourself in several test classes
- Don't programmatically create complex resources in the AEM context; instead, export them from a real AEM instance as JSON resources and load them into the AEM context
- Use the ResourceResolverMock type whenever possible to speed up your tests

#### [ProductDetailsServletTest](https://github.com/mkovacek/how-to-test-aem-demo/blob/develop/core/src/test/java/com/mkovacek/aem/core/servlets/products/ProductDetailsServletTest.java)

You will see that this test class is more or less clean and focused only on tests. There is no mocking here; a separate mock example you can see [here](https://github.com/mkovacek/how-to-test-aem-demo/blob/develop/core/src/test/java/com/mkovacek/aem/core/context/mocks/MockExternalizer.java). I'm using @BeforeAll and @BeforeEach to do some common setup, like setting up market pages/resources and common request information. I also needed a helper class to more easily register all the necessary classes into the [AEM context](https://github.com/mkovacek/how-to-test-aem-demo/blob/develop/core/src/test/java/com/mkovacek/aem/core/context/AppAemContextBuilder.java). All resources are exported as JSON from a real AEM instance and imported into the AEM context, so that we test on [real data](https://github.com/mkovacek/how-to-test-aem-demo/tree/develop/core/src/test/resources/jcr_root).
In this test class I'm testing both technical details and requirements:

- technical details
  - request validation
- requirements
  - response for a non-existing product id
  - product details in different markets, to cover localization
  - product variant validation for specific markets
  - product variant availability from a specific date
  - product variant sorting

```
package com.mkovacek.aem.core.servlets.products;

import com.day.cq.wcm.api.Page;
import com.mkovacek.aem.core.context.AppAemContextBuilder;
import com.mkovacek.aem.core.context.constants.TestConstants;
import com.mkovacek.aem.core.context.utils.ResourceUtil;
import com.mkovacek.aem.core.services.blobstorage.impl.BlobStorageServiceImpl;
import com.mkovacek.aem.core.services.products.impl.ProductDetailsServiceImpl;
import com.mkovacek.aem.core.services.products.impl.ProductLocalizationServiceImpl;
import com.mkovacek.aem.core.services.products.impl.ProductValidatorServiceImpl;
import com.mkovacek.aem.core.services.resourceresolver.impl.ResourceResolverServiceImpl;
import com.mkovacek.aem.core.services.response.impl.ResponseServiceImpl;
import io.wcm.testing.mock.aem.junit5.AemContext;
import io.wcm.testing.mock.aem.junit5.AemContextExtension;
import org.apache.commons.lang3.StringUtils;
import org.apache.sling.api.resource.ResourceResolverFactory;
import org.apache.sling.testing.mock.sling.servlet.MockRequestPathInfo;
import org.apache.sling.testing.resourceresolver.MockResourceResolverFactory;
import org.junit.jupiter.api.*;
import org.junit.jupiter.api.extension.ExtendWith;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;
import java.util.Collections;

import static org.junit.jupiter.api.Assertions.assertAll;
import static org.junit.jupiter.api.Assertions.assertEquals;

@ExtendWith(AemContextExtension.class)
class ProductDetailsServletTest {

    private static final AemContext context = new AppAemContextBuilder()
            .loadResource(TestConstants.HR_HR_LANDING_PAGE_JSON, TestConstants.HR_HR_LANDING_PAGE_PATH)
            .loadResource(TestConstants.DE_AT_LANDING_PAGE_JSON, TestConstants.DE_AT_LANDING_PAGE_PATH)
            .loadResource(TestConstants.FR_FR_LANDING_PAGE_JSON, TestConstants.FR_FR_LANDING_PAGE_PATH)
            .loadResource(TestConstants.PRODUCTS_JSON, TestConstants.PRODUCTS_PATH)
            .registerService(ResourceResolverFactory.class, new MockResourceResolverFactory())
            .registerInjectActivateService(new ResourceResolverServiceImpl())
            .registerInjectActivateService(new ResponseServiceImpl())
            .registerInjectActivateService(new BlobStorageServiceImpl(), Collections.singletonMap("productImagesFolderPath", "https://dummyurl.com/images/products/"))
            .registerInjectActivateService(new ProductValidatorServiceImpl())
            .registerInjectActivateService(new ProductLocalizationServiceImpl())
            .registerInjectActivateService(new ResponseServiceImpl())
            .registerInjectActivateService(new ProductDetailsServiceImpl())
            .build();

    private static final MockRequestPathInfo requestPathInfo = context.requestPathInfo();
    private final ProductDetailsServlet servlet = context.registerInjectActivateService(new ProductDetailsServlet());

    private static final String CONTENT_RESOURCE_PATH = "root/productdetails";
    private static String NOT_FOUND_RESPONSE;
    private static String BAD_REQUEST_RESPONSE;

    @BeforeAll
    static void setUpBeforeAllTests() throws IOException {
        context.addModelsForPackage(TestConstants.SLING_MODELS_PACKAGES);
        requestPathInfo.setExtension("json");
        NOT_FOUND_RESPONSE = ResourceUtil.getExpectedResult(ProductDetailsServlet.class, "responses/not-found-response.json");
        BAD_REQUEST_RESPONSE = ResourceUtil.getExpectedResult(ProductDetailsServlet.class, "responses/bad-request-response.json");
    }

    @BeforeEach
    void setupBeforeEachTest() {
        context.response().resetBuffer();
        requestPathInfo.setSelectorString(ProductDetailsServlet.ALLOWED_SELECTOR);
        requestPathInfo.setSuffix("123456789");
        final Page page = context.pageManager().getPage(TestConstants.HR_HR_LANDING_PAGE_PATH);
        context.request().setResource(page.getContentResource(CONTENT_RESOURCE_PATH));
    }

    @Test
    @DisplayName("GIVEN landing page (en-HR) WHEN servlet is called with not valid selector THEN it returns bad request response in JSON format")
    void testNotValidSelector() throws ServletException, IOException {
        requestPathInfo.setSelectorString(ProductDetailsServlet.ALLOWED_SELECTOR + ".test");
        this.servlet.doGet(context.request(), context.response());
        assertAll(
                () -> assertEquals(HttpServletResponse.SC_BAD_REQUEST, context.response().getStatus()),
                () -> assertEquals(BAD_REQUEST_RESPONSE, context.response().getOutputAsString())
        );
    }

    @Test
    @DisplayName("GIVEN landing page (en-HR) WHEN servlet is called without productId suffix THEN it returns bad request response in JSON format")
    void testNoProductId() throws ServletException, IOException {
        requestPathInfo.setSuffix(StringUtils.EMPTY);
        this.servlet.doGet(context.request(), context.response());
        assertAll(
                () -> assertEquals(HttpServletResponse.SC_BAD_REQUEST, context.response().getStatus()),
                () -> assertEquals(BAD_REQUEST_RESPONSE, context.response().getOutputAsString())
        );
    }

    @Test
    @DisplayName("GIVEN landing page (en-HR) WHEN servlet is called with not existing productId THEN it returns not found response in JSON format")
    void testNotExistingProductId() throws ServletException, IOException {
        requestPathInfo.setSuffix("123abc");
        this.servlet.doGet(context.request(), context.response());
        assertAll(
                () -> assertEquals(HttpServletResponse.SC_OK, context.response().getStatus()),
                () -> assertEquals(NOT_FOUND_RESPONSE, context.response().getOutputAsString())
        );
    }

    @Test
    @DisplayName("GIVEN landing page (en-HR) WHEN servlet is called with existing productId THEN it returns an expected localized (fallback) product details response in JSON format")
    void testProductDetailsInCroatianMarket() throws ServletException, IOException {
        this.servlet.doGet(context.request(), context.response());
        final String expectedProductDetails = ResourceUtil.getExpectedResult(this.getClass(), "responses/product-123456789-hr-HR.json");
        assertAll(
                () -> assertEquals(HttpServletResponse.SC_OK, context.response().getStatus()),
                () -> assertEquals(expectedProductDetails, context.response().getOutputAsString())
        );
    }

    @Test
    @DisplayName("GIVEN landing page (de-AT) WHEN servlet is called with existing productId THEN it returns an expected localized product details response in JSON format")
    void testProductDetailsInAustrianMarket() throws ServletException, IOException {
        this.setPageResource(TestConstants.DE_AT_LANDING_PAGE_PATH);
        this.servlet.doGet(context.request(), context.response());
        final String expectedProductDetails = ResourceUtil.getExpectedResult(this.getClass(), "responses/product-123456789-at-DE.json");
        assertAll(
                () -> assertEquals(HttpServletResponse.SC_OK, context.response().getStatus()),
                () -> assertEquals(expectedProductDetails, context.response().getOutputAsString())
        );
    }

    @Test
    @DisplayName("GIVEN landing page (fr-FR) WHEN servlet is called with existing productId which is not valid for French market THEN it returns not found response in JSON format")
    void testProductDetailsInFrenchMarket() throws ServletException, IOException {
        this.setPageResource(TestConstants.FR_FR_LANDING_PAGE_PATH);
        this.servlet.doGet(context.request(), context.response());
        assertAll(
                () -> assertEquals(HttpServletResponse.SC_OK, context.response().getStatus()),
                () -> assertEquals(NOT_FOUND_RESPONSE, context.response().getOutputAsString())
        );
    }

    private void setPageResource(final String path) {
        final Page page = context.pageManager().getPage(path);
        context.request().setResource(page.getContentResource(CONTENT_RESOURCE_PATH));
    }
}
```

With this testing approach I have covered 87% of the lines of code. The other 13% that is not covered is exception handling.
![](https://cdn.sanity.io/images/0a8atbln/production/a57dd21ea6b63fc5013a99c8e2eb56973a34e33c-1757x213.png)

![](https://cdn.sanity.io/images/0a8atbln/production/078a9109aa65cb5ea3ef6ea72293e1fe1feb6ebb-1225x559.png)

Other good examples for testing in AEM would be components. For every component you have requirements. To fulfil those requirements you will probably create several classes like an OSGi service, some Utils and Records, and you will publicly expose those requirements through a Sling model to the view layer. Ideal candidates for testing.

## Sum up

- If you don't write tests, start writing them
- Test requirements, not implementation
- Developers should have time for writing tests
mkovacek
381,056
JavaScript: How to Remove Duplicate Values from Arrays
Originally posted on Will's blog In a previous post we saw how to determine whether a JavaScript...
0
2020-07-03T18:15:06
https://dev.to/will_devs/javascript-how-to-remove-duplicate-values-from-arrays-lf0
javascript, webdev, tutorial
*Originally posted on [Will's blog](https://www.willharris.dev/garden/remove-array-duplicates)*

---

In a [previous post](https://bikesandbytes.net/check-for-array-duplicates) we saw how to determine whether a JavaScript array contains duplicate values. Today, I want to show a few different methods I've found for removing duplicate values from an array.

## Using the `Array.prototype.filter()` & `Array.prototype.indexOf()` methods

```js
let originalArray = [1, 2, 3, 4, 1, 2, 3, 4]

let uniqueArray = originalArray.filter((item, index, array) => {
  return array.indexOf(item) === index
})

// uniqueArray === [1, 2, 3, 4]
```

The basic strategy here is to iterate through `originalArray` and check to see if the index of the item we are currently examining is the same as the index of that item in `originalArray`. Because `indexOf` returns the first index that it finds for a given value, if the item isn't a duplicate then its index must be the same!

Note that this method is not the most efficient: it executes in quadratic time. So if the arrays you're checking are very large, you may want to use a different method.

Another thing worth noting is that we can use the same method to return only the duplicate values by inverting the comparison:

```js
let duplicateArray = originalArray.filter((item, index, array) => {
  return array.indexOf(item) !== index
})
```

## Using `Array.prototype.reduce()` & `Array.prototype.includes()`

```js
let originalArray = [1, 2, 3, 4, 1, 2, 3, 4]

let uniqueArray = originalArray.reduce((unique, item) => {
  return unique.includes(item) ? unique : [...unique, item]
}, [])

// uniqueArray === [1, 2, 3, 4]
```

Here the strategy is to keep a running list of the unique items in our reducer function's 'accumulator' (`unique`). For each item in `originalArray` we check to see if the accumulator includes the item under consideration.
- If it does contain the item, return the accumulator without making any changes, in effect 'skipping over' that item.
- If it does not contain the item, spread the values in the accumulator into a new array, and add the item under consideration.

`Array.prototype.includes` returns a boolean value -- `true` if the value is found in the array, `false` if not. This boolean value drives our conditional, determining what to do with each value.

I find this approach less intuitive and harder to read, but it works. Also note that the empty array passed in after the reducer function is the starting value for the accumulator, so on the first pass through the `reduce`, `unique` is an empty array.

## ⚡ Using the ES6 `Set` object ⚡

```js
let originalArray = [1, 2, 3, 4, 1, 2, 3, 4]

let uniqueArray = [...new Set(originalArray)]

// or

let uniqueArray2 = Array.from(new Set(originalArray))

// uniqueArray === [1, 2, 3, 4]
```

This approach harnesses the power of the `Set` object, introduced in ES6. Sets are guaranteed to preserve the order of the inserted items, and to only contain unique values. Therefore it is by definition impossible for a set to contain duplicates!

Here we call the `Set` object's constructor, passing it the array we'd like to construct a `Set` from. Then, once we've trimmed out all the duplicates and stored the remaining values in our `Set`, we convert back to an array and return the result.

I've seen some discussion of this approach being a bit less performant if the array under consideration is very large and contains many duplicate values. However, the same discussion found that this approach is very efficient in a scenario where the data has very few duplicates. Personally I think the conciseness of this last approach is enough of a benefit to warrant using the `Set` object approach, unless there's a compelling performance reason not to.
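As a quick sanity check, all three approaches can be run side by side on the same sample array and should agree:

```js
// Sanity check: the three de-duplication approaches from this post.
const originalArray = [1, 2, 3, 4, 1, 2, 3, 4]

// filter + indexOf: keep an item only at its first occurrence
const viaFilter = originalArray.filter((item, index, array) => array.indexOf(item) === index)

// reduce + includes: accumulate items not yet seen
const viaReduce = originalArray.reduce(
  (unique, item) => (unique.includes(item) ? unique : [...unique, item]),
  []
)

// Set: uniqueness by construction
const viaSet = [...new Set(originalArray)]

const allAgree =
  JSON.stringify(viaFilter) === JSON.stringify(viaReduce) &&
  JSON.stringify(viaReduce) === JSON.stringify(viaSet)

console.log(viaSet, allAgree) // [ 1, 2, 3, 4 ] true
```

All three also preserve the order of first occurrence, which is worth knowing if order matters to you.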
will_devs
381,205
I Created a Responsive Portfolio Website Using HTML, CSS, Bootstrap, and JavaScript
I already had a portfolio but it was created by others. But now have created my portfolio from...
0
2020-07-03T22:30:43
https://dev.to/mjmaurya/i-created-a-responsive-portfolio-website-using-html-css-bootstrap-and-javascript-2in9
html, css, webdev, javascript
I already had a portfolio, but it was created by others. Now I have created my portfolio from scratch using HTML, CSS, Bootstrap, and JavaScript.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/oaa09vbgr9dqhbt3pq94.png)

Tell me, what amazes you? What should I change?

Here is the link to the portfolio: https://mjmaurya.github.io OR https://manojcse.netlify.app

Your feedback will be very important to me. It will help to enhance my development skills.
mjmaurya
381,240
Top 3 Things to Drop From Resumes - Discuss Additional!
As always, I'm helping people put together resumes and get re-employed during this whole pandemic induced depression we're barreling into. In light of that I've noticed a bunch of stuff on resumes that aren't really needed anymore.
0
2020-07-03T23:41:24
https://dev.to/adron/top-3-things-to-drop-from-resumes-discuss-additional-3obh
discuss, career
--- title: Top 3 Things to Drop From Resumes - Discuss Additional! published: true description: As always, I'm helping people put together resumes and get re-employed during this whole pandemic induced depression we're barreling into. In light of that I've noticed a bunch of stuff on resumes that aren't really needed anymore. tags: #discussion #career cover_image: https://dev-to-uploads.s3.amazonaws.com/i/wamv0b49az8ejk8bqxi7.png --- As always, I'm helping people put together resumes and get re-employed during this whole pandemic induced depression we're barreling into. In light of that I've noticed a bunch of stuff on resumes that aren't really needed anymore. The physical address is not really needed anymore these days. I've noticed it's often on the resumes still. No need, take it off, and I'd even argue it perpetuates several unneeded tropes and paradigms about geographic location. The other thing, unless there is a specific reason, no need to put the physical location you existed in for various past jobs. The only thing that matters is if you'll be able to be in the geographic space required for jobs you're applying to. Another, if you're past a solid decade of experience, it's rarely necessary in tech to keep listing every position for the last gazillion years. Just the last ~5-10 years plus a note that you have more and can discuss if interested. That'll help prevent the 20+ page resumes. Any others you can think of this day and age? Any advice I should add to my list of advice I provide as I review people's resumes that might be helpful?
adron
381,553
I'm hosting a workshop at CodeLand:Distributed
I will host a workshop at the CodeLand:Distributed conference. In Rocking the Gamepad API we will explore the Gamepad API and use it to build a small game.
0
2020-07-04T15:36:42
https://dev.to/alvaromontoro/i-m-hosting-a-workshop-at-codeland-distributed-4oo2
codeland, watercooler, conference
--- title: I'm hosting a workshop at CodeLand:Distributed published: true description: I will host a workshop at the CodeLand:Distributed conference. In Rocking the Gamepad API we will explore the Gamepad API and use it to build a small game. tags: codeland,watercooler,conference cover_image: https://dev-to-uploads.s3.amazonaws.com/i/bzz04hbzr1dltctfkgvb.png --- A few months back, I submitted a couple of workshop ideas for the [Codeland conference](https://codelandconf.com/). Since then, the world has changed quite a bit, the conference moved online... and **one of the ideas got picked!** This will be **the first time that I speak or host a workshop at a conference**, and I am excited –and nervous– beyond belief. (**Please, share any tips you may have!**) The title of the workshop is "*Rocking the Gamepad API*" and it is aimed at people who do web development and like exploring different web technologies (no gaming experience is required.) We will: - Talk about **Web APIs** - Learn the basics of the **Gamepad API** - Explain how to use it - Explore extensions and the future of the API - **Build a small game with HTML and JS** It won't be too advanced, but some JavaScript knowledge is required. Also, some type of game controller that can be connected to the computer will be necessary to follow along. I look forward to meeting you at CodeLand! Whether in the talks, this workshop, or in any of the other amazing workshops that are available. --- The conference is on July 23 and 24, with **free talks on the 23<sup>rd</sup>** and many workshops on the 24<sup>th</sup>. The talks are free and the workshops are $25. [Register today, spots are limited!](https://codelandconf.com/#tickets)
alvaromontoro
381,638
Basic RegEx in Javascript for beginners 🔥
What is regular expression This is a sequence of character that define a search pattern in...
0
2020-07-04T20:07:23
https://dev.to/tracycss/basic-regex-in-javascript-for-beginners-1dnn
javascript, beginners, webdev, codenewbie
## What is a regular expression

A regular expression is a sequence of characters that defines a search pattern in a piece of text. It is used in popular languages like JavaScript, Go, Python, Java and C#, which support regex fully. Text editors like Atom, Sublime and VS Code use it to find and replace matches in your code.

Example in the VS Code editor. Press (ALT + R) to use regex

![regex view in vs code](https://i.ibb.co/sb0xqWt/Screenshot-163.png)

### Applications

- Grabbing HTML tags
- Trimming white spaces
- Removing duplicate text
- Finding or verifying card numbers
- Form validation
- Matching IP addresses
- Matching a specific word in a large block of text

#### Literal characters

A literal character matches a single character. For example, matching the character 'e' in 'bees' and 'cats'.

#### Meta characters

Meta characters match a range of characters. For example, let's write an easy regex to find the specific number 643 in a series of numbers. It will only match 643, not the rest of the numbers. I am using [Regex101](https://regex101.com/)

![simple regex](https://i.ibb.co/6H9wFXB/643.png)

##### Two ways of writing regex

```JS
1) const regex = /[a-z]/gi;
2) const regex = new RegExp(/[a-z]/, 'gi');
```

#### Different types of meta characters include:

##### 1) Single characters

```JS
let regex;
// shorthand for the single characters
regex = /\d/; // Matches any digit character
regex = /\w/; // Matches any word character [a-zA-Z0-9_]
regex = /\s/; // Matches any whitespace
regex = /./;  // Matches any character except line terminators
regex = /\W/; // Matches any non-word character.
              // Anything that's not [a-zA-Z0-9_]
regex = /\S/; // Matches any non-whitespace
regex = /\D/; // Matches any non-digit character [^0-9]
regex = /\b/; // Asserts position at a word boundary
regex = /\B/; // Matches any non-boundary position

// Single characters
regex = /[a-z]/; // Matches lowercase letters between a-z (char codes 97-122)
regex = /[A-Z]/; // Matches uppercase letters between A-Z (char codes 65-90)
regex = /[0-9]/; // Matches digits between 0-9 (char codes 48-57)
regex = /[a-zA-Z]/; // Matches both lower and uppercase letters
regex = /\./ ; // matches literal character . (char code 46)
regex = /\(/ ; // matches literal character (
regex = /\)/ ; // matches literal character )
regex = /\-/ ; // matches literal character - (char code 45)
```

##### 2) Quantifiers

They measure how many times you want the single characters to appear.

```JS
* : 0 or more
+ : 1 or more
? : 0 or 1
{n,m} : between n and m times
{n} : exactly n times

/^[a-z]{5,8}$/; // Matches 5-8 letters between a-z
/.+/; // Matches one or more of any character
const regex = /^\d{3}-\d{3}-\d{4}$/; // Matches 907-643-6589
const regex = /^\(?\d{3}\)?$/g // matches (897) or 897
const regex = /\.net|\.com|\.org/g // matches .com or .net or .org
```

##### 3) Position

```JS
^ : asserts position at the start
$ : asserts position at the end
\b : word boundary

const regex = /\b\w{4}\b/; // Matches a four letter word.
```

If you want to look for whole four-letter words, use \b. Without the boundary it will select any four consecutive word characters.

![regex with boundary](https://i.ibb.co/qr5Tvyz/Screenshot-164.png)

#### Character Classes

These are characters that appear within square brackets [...]

```JS
let regex;
regex = /[-.]/; // match a literal . or - character
regex = /[abc]/; // match character a or b or c
regex = /^\(?\d{3}\)?[-.]\d{3}[-.]\d{4}$/; // matches (789)-876-4378, 899-876-4378 and 219.876.4378
```

#### Capturing groups

Capturing groups are used to separate characters within a regular expression and are enclosed in parentheses (....)

The below regex pattern captures different groups of the numbers

![regex pattern](https://i.ibb.co/dgx0yM0/capturing-groups.png)

Capturing groups are useful when you want to find and replace some characters. For example, you can capture a phone number or a card number and replace it, showing only the first 3-4 digits. Take a look at the example below.

![regex pattern](https://i.ibb.co/DKkb48C/carbon-2.png)

```JS
// How to create a regex pattern for an email address
const regex = /^(\w+)@(\w+)\.([a-z]{2,8})([\.a-z]{2,8})?$/
// It matches janetracy@jsninja.co.uk or janetracy@hey.com
```

#### Back references

You can refer back to a captured group within a [regex pattern](https://regex101.com/library/Py5VMF) by using \1

```JS
const regex = /^\b(\w+)\s\1\b$/; // This will capture repeated words in a text.
```

Back references can be used to replace markdown text with HTML.

![regex pattern](https://i.ibb.co/DWR2ZKQ/carbon-3.png)

### Types of methods that use regular expressions

#### 1) Test method

This is a method that you call on a regular expression, passing a string as the argument; it returns a boolean as the result. True if a match was found and false if no match was found.

```JS
const regex = /^\d{4}$/g;
regex.test('4567'); // output is true
```

#### 2) match method

It is called on a string with a regular expression as the argument and returns an array that contains the results of that search, or null if no match is found.

```JS
const s = 'Hello everyone, how are you?';
const regex = /how/;
s.match(regex); // output ["how"]
```

#### 3) exec method

It executes a search for a match in a specified string. Returns a result array or null. Both the full match and captured groups are returned.
```JS
const s = '234-453-7825';
const regex = /^(\d{3})[-.](\d{3})[.-](\d{4})$/;
regex.exec(s);
// output ["234-453-7825", "234", "453", "7825"]
```

#### 4) replace method

Takes in two arguments: the regex, and the string or callback function you want to replace the matches with. This method is really powerful and can be used to create different projects like games.

```JS
const str = 'Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.';
const regex = /\b\w{4,6}\b/g;
const results = str.replace(regex, replace)

function replace(match){
  return 'replacement';
}

// output replacement replacement replacement sit replacement, consectetur adipiscing replacement, sed do eiusmod replacement incididunt ut replacement et replacement replacement replacement.
```

#### 5) split method

Splits a string wherever the given pattern matches. You call the method on a string and it takes a regular expression as an argument.

```JS
const s = 'Regex is very useful, especially when verifying card numbers, forms and phone numbers';
const regex = /,\s+/;
s.split(regex);
// output ["Regex is very useful", "especially when verifying card numbers", "forms and phone numbers"]
// Splits the text wherever there is a , followed by whitespace
```

### Let's make a small fun project

We want to make a textarea where you can write any text, and when you click the submit button the text is turned into individual span tags. When you hover over a span, its background color changes and its text becomes (Yesss!!). Let's do this!!!!!
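As a quick warm-up before the project, here is the find-and-replace masking idea from the capturing groups section as runnable code (a sketch of my own; the card number and exact pattern are made up for illustration, not taken from the screenshots above):

```JS
// Mask a card number, keeping only the first group of digits visible.
const card = '4556-0120-9435-7689';
const regex = /^(\d{4})[-.](\d{4})[-.](\d{4})[-.](\d{4})$/;

// $1 refers to the first captured group; the remaining groups are replaced with asterisks.
const masked = card.replace(regex, '$1-****-****-****');
console.log(masked); // 4556-****-****-****
```

The same pattern works for phone numbers: capture the pieces you want to keep and rewrite the rest in the replacement string.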
```Html <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Regex expression</title> <link rel="stylesheet" href="style.css"> </head> <body> <h1>Regex expression exercises</h1> <div class="text-container"> <textarea name="textarea" id="textarea" class = "textarea" cols="60" rows="10"> Coronavirus disease (COVID-19) is an infectious disease caused by a newly discovered coronavirus. Most people 234-9854 infected with the COVID-19 virus will experience mild to moderate respiratory illness and recover without requiring special treatment. Older people, and those with underlying medical problems like cardiovascular disease, diabetes, chronic respiratory disease, and cancer are more likely to develop serious illness. The best way to prevent and slow down 456-2904 transmission is be well informed about the COVID-19 virus, the disease it causes and how it spreads. Protect yourself and others from infection by washing your hands or using an alcohol based rub frequently and not touching your face. The COVID-19 virus spreads 860-8248 primarily through droplets of saliva or discharge from the nose when an infected person coughs or sneezes, so it’s important that you also practice respiratory etiquette (for example, by coughing into a flexed elbow). 
</textarea> <div class="result-text"> </div> <button type="submit">Submit</button> </div> <script src="regex.js"></script> </body> </html> ``` Let's write the Javascript part ```JS const button = document.querySelector('button'); const textarea = document.querySelector('textarea'); const resultText = document.querySelector('.result-text'); function regexPattern (){ const regex = /(\W+)/g; const str = textarea.value; const results = str.split(regex); console.log(results); results.forEach(result =>{ if(result != null){ const span = document.createElement('span'); span.innerHTML = result; resultText.appendChild(span); span.addEventListener ('mouseover', () => { const randomColour = Math.floor(Math.random()* 255); const randomColour1 = Math.floor(Math.random()* 255); const randomColour2 = Math.floor(Math.random()* 255); span.style.backgroundColor = `rgba(${randomColour}, ${randomColour1}, ${randomColour2})`; span.textContent = 'Yesss!' }); } }); }; button.addEventListener('click', () => { resultText.innerHTML += `<p class ='text-info'>This is what I matched</P>`; regexPattern(); }); ``` ####results ![regex project](https://i.ibb.co/0q9j4LK/screencapture-127-0-0-1-5500-regex-exercise-index-html-2020-07-05-10-22-29.png) [Source code in my GitHub](https://muchresultinge.github.io/regex-word-generator/) [Watch the result video](https://vimeo.com/user118963371/review/435428478/e6d146cb59) #### Websites resources for learning regex in Js + 💻[Regular expression info](https://www.regular-expressions.info/quickstart.html) + 💻[Regex.com](https://regexr.com/) + 💻[Regexone](https://regexone.com/) + 💻[Regex101](https://regex101.com/) #### Youtube videos + 🎥[Regular Expressions (Regex) Mini Bootcamp by Colt Steele](https://www.youtube.com/watch?v=EiRGUNrz9MY) + 🎥[Learn Regular Expressions In 20 Minutes by Web Dev Simplified](https://www.youtube.com/watch?v=rhzKDrUiJVk) + 🎥[Regular Expressions (RegEx) Tutorial by 
NetNinja](https://www.youtube.com/watch?v=r6I-Ahc0HB4&list=PL4cUxeGkcC9g6m_6Sld9Q4jzqdqHd2)
+ 🎥[Regular Expressions (Regex) in JavaScript by FreecodeCamp](https://www.youtube.com/watch?v=909NfO1St0A)

#### Books

+ 📖[Mastering Regular Expressions by Jeffrey E. F. Friedl](https://www.amazon.com/Mastering-Regular-Expressions-Understand-Productive-ebook/dp/B007I8S1X0)
+ 📕[Regular Expressions Cookbook by Jan Goyvaerts](https://www.amazon.com/Regular-Expressions-Cookbook-Solutions-Programming/dp/1449319432/ref=pd_bxgy_img_3/139-9415231-5476500?_encoding=UTF8&pd_rd_i=1449319432&pd_rd_r=ef24aa36-51bb-4a12-b0d4-2cc69b9fed66&pd_rd_w=FOQLm&pd_rd_wg=73GRS&pf_rd_p=4e3f7fc3-00c8-46a6-a4db-8457e6319578&pf_rd_r=6GP5DZ9VB8174P09BB41&psc=1&refRID=6GP5DZ9VB8174P09BB41)
+ 📙[Introducing Regular Expressions by Michael Fitzgerald](https://www.amazon.com/Introducing-Regular-Expressions-Step-Step/dp/1449392687/ref=sr_1_2?dchild=1&keywords=regex+books&qid=1593950860&s=books&)

### Conclusion

As a code newbie I was terrified when I first saw what regex looks like, but this week I decided to learn it and write about it. To be honest, I will use this post as a future reference, and I hope you will too.

Now you know how powerful regex is and where it can be applied, especially in form validation or card number validation. I hope this helps any beginner understand how powerful regex can be and how to use it.
tracycss
381,755
Delete Node in a Linked List(in-place)
Problem on leetcode.com We are asked to delete a node from a linked list. And we have to do this wit...
0
2020-07-04T10:27:07
https://dev.to/hunterheston/delete-node-in-a-linked-list-in-place-1eba
javascript, leetcode, linkedlist, computerscience
[Problem on leetcode.com](https://leetcode.com/problems/delete-node-in-a-linked-list)

We are asked to delete a node from a linked list. And we have to do this without knowing anything about this node's parent or the root of the linked list.

Assuming a node structure that looks like this:

```javascript
function ListNode(value) {
  this.value = value;
  this.next = null;
}
```

Let's go through the solution looking at this example: `A->B->C->D->E->null` and assume we are asked to delete `C`. We will not be able to see `A->B`, so our effective list is `C->D->E->null`.

Since we can't see `B`, we need to make `C` look like `D` without damaging the link that `B` already has to `C`.

Here are the steps to solve this problem:

1. Copy D.value into C.value
2. Copy D.next into C.next

Here is the JS code (note: LeetCode's own node type names the field `val`; here we use `value` to match the structure defined above):

```JavaScript
function deleteNode(node) {
  node.value = node.next.value
  node.next = node.next.next
}
```

Thanks for reading!
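To see the trick end to end, here is a small usage sketch of my own (not part of the original problem) that builds the example list, deletes `C`, and traverses the result:

```javascript
function ListNode(value) {
  this.value = value;
  this.next = null;
}

function deleteNode(node) {
  // Copy the next node's value and link into the current node.
  node.value = node.next.value;
  node.next = node.next.next;
}

// Build A -> B -> C -> D -> E
const values = ['A', 'B', 'C', 'D', 'E'];
const head = new ListNode(values[0]);
let tail = head;
for (const v of values.slice(1)) {
  tail.next = new ListNode(v);
  tail = tail.next;
}

// Grab a reference to C (without touching A or B) and delete it in place.
const c = head.next.next;
deleteNode(c);

// Traverse from the head to confirm the result.
const result = [];
for (let n = head; n !== null; n = n.next) result.push(n.value);
console.log(result.join('->')); // A->B->D->E
```

Notice that `B` still points at the same node object; that node has simply taken on `D`'s value and link.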
hunterheston
381,808
Problem-solving and the dirty dishes principle of software development
In this story, I would like to explore with you the cost of solving various tasks and proble...
0
2020-07-06T08:22:40
https://bornfight.com/blog/problem-solving-and-the-dirty-dishes-principle-of-software-development/
engineeringmonday, career
# In this story, I would like to explore with you the cost of solving various tasks and problems. More specifically, let’s explore how the cost of solving a problem grows with respect to the size of the problem. Some types of problems have a linear cost function. This means that if a problem is doubled, the cost to solve it is also doubled. An example would be reading a book, or travelling — if it takes you one hour to read fifty pages of a book, then you can conclude that it would take you twice as long to read twice as many pages, or if it takes you one hour to travel a fixed distance, then it will take you twice as long to travel twice the distance (providing that you maintain the same average speed). **However, there are many problems where the cost function is nonlinear. In practice, this means that as the problem increases in size, the solution cost becomes disproportionately higher.** ### Dirty dishes principle? Take for example washing dishes. If all you need to wash is one dirty plate right after lunch, it’s no big deal. But as soon as you have a sink full of dirty plates laying around for weeks, the problem becomes much more difficult than simply cleaning every single plate individually (unless you are using a dishwasher, but that is cheating). All of a sudden, you have less free space in your sink, you need to be extra careful not to damage the rest of the dishes, you will probably need to do much more scrubbing and soaking as the dirt on the plates has hardened, and that in turn will require more soap, bacteria can form, etc, etc. **The goal of cleaning dishes suddenly looks unachievable and the task is not fun for anyone.** A bunch of new problems (which need to be solved, and now have a cost) emerged just by deferring and accumulating the problem and letting it increase in size. These problems did not exist when there was only one dirty plate. These problems would never have showed up had you washed every dish as soon as it became dirty. 
**I call this the dirty dishes principle and it maps very well to software engineering.**

### How does it transfer to software development?

Take for example the [exponential cost of fixing bugs](https://deepsource.io/blog/exponential-cost-of-fixing-bugs/). If you detect a bug early on, as early as in the inception phase, the cost of fixing it is very low. The bug was not around for enough time to cause any real damage and you already know which part of the code needs fixing — the context is well known and understood, which all contributes to a quick and easy solution.

But if you fail to find the bug, and time passes — the problem becomes harder and more costly to solve. Now all of a sudden, you need to invest time just to figure out what is wrong and why. The bug probably caused some issues with customers, which damaged the product image and will negatively impact future revenue. The original author of the bug left for a different company months ago, documentation is scarce, no one knows how the code works, and you need a hero to save the day.

A bunch of new problems (which need to be solved, and now have a cost) emerged just by deferring and accumulating the problem and letting it increase in size. These problems did not exist when there was only one dirty ~~plate~~ bug. These problems would never have showed up had you ~~washed every dish~~ fixed every bug as soon as possible.

This is exactly what happened to the pile of dirty dishes, and the best part is that it could have easily been avoided by [investing in QA](https://www.bornfight.com/blog/how-qa-expenses-can-be-cheaper-than-having-no-qa-expenses/) and in software development best practices. For example, by using a [static analysis checker](https://dev.to/bornfightcompany/how-to-improve-your-code-quality-with-psalm-a-static-analysis-tool-419d).
Another infamous example that clearly explains this principle is the [Broken Windows Theory](https://medium.com/@matryer/broken-windows-theory-why-code-quality-and-simplistic-design-are-non-negotiable-e37f8ce23dab), which states that leaving small problems unattended will contribute to the overall situation becoming worse and thus more costly to solve. This applies well to software engineering and highlights the importance of refactoring technical debt and maintaining a high quality bar.

### The point is — handle problems on time…

Software engineering is absolutely filled with these costly nonlinear problems. Be it code maintenance, system design, bug fixing, [algorithm analysis](https://dev.to/jainroe/the-ultimate-beginners-guide-to-analysis-of-algorithm-84h) and many many others. That's why it's important to know which problems have a nonlinear solving cost, detect them early and address them [as soon as possible](https://blog.noisli.com/what-it-means-to-eat-the-frog/).

**Do not let your problems accumulate, as the price of solving them could end up costing you more than the sum of their parts.**

Happy dishwashing! 🙂
krukru
381,833
Advance Mongodb queries - cheatsheet
Here are few less used but important Mongodb queries I use now and then. Will keep updating this page...
0
2020-07-04T16:27:40
https://dev.to/mrkanthaliya/advance-mongodb-queries-cheatsheet-30on
mongodb, database, nosql
Here are a few less used but important MongoDB queries I use now and then. Will keep updating this page if I get to use different less used queries.

### Copy database

```javascript
// Note: db.copyDatabase() was deprecated in MongoDB 4.0 and removed in 4.2;
// on newer versions use mongodump/mongorestore instead.
db.copyDatabase("dbname", "dbnamenew")
```

### Drop database

```javascript
use dbname
db.dropDatabase()
```

### Active operations

```javascript
db.currentOp()
// or
db.adminCommand( { lockInfo: 1 } )
```

### Kill operation

```javascript
db.killOp(opid)
```

### Query stats

```javascript
db.collection.find({"status":"single"}).explain("executionStats")
```

### DB stats (in GB)

```javascript
db.stats(1024*1024*1024)
```

### Create index

```javascript
db.collection.createIndex( { status: 1 } )
```

### Get indexes on collection

```javascript
db.collection.getIndexes()
```

### Create read only access for user on mongodb

```javascript
db.createUser({user: "example_read", pwd: "12345", roles: [{role: "read", db: "dbname"}]})
```

### Get user roles on mongodb

```javascript
db.getUser("user_name")
```

### Export collection

```shell
mongoexport --host=10.0.0.1:27017 --username=example --authenticationDatabase=admin --collection=example_mapper --db=dbname --out=example_mapper.json
```

### Import from collection

```shell
mongoimport --host=10.0.0.1:27017 --username=example --authenticationDatabase=admin --collection=example_mapper --db=dbname --file=example_mapper.json
```

### Dump mongodb

```shell
mongodump --host=10.0.0.1:27017 --username=example --authenticationDatabase=admin --db db_example --out dumps/
```

### Restore mongodb from dump

```shell
mongorestore --host=10.0.0.1:27017 --username=example --authenticationDatabase=admin dumps/
```

### Shell connect with SSL

```shell
mongo --ssl --host docdb.amazonaws.com:27017 --sslCAFile rds.pem --username example --password asddasdasd
```
mrkanthaliya
381,877
Telefoonnummer voor technische ondersteuning van Dell
Of uw Dell-computer nu niet werkt of uw printer niet goed presteert, u moet het Dell klantenservicenu...
0
2020-07-04T14:12:21
https://dev.to/hplooij/telefoonnummer-voor-technische-ondersteuning-van-dell-1lp3
Whether your Dell computer isn't working or your printer isn't performing well, you should <a href="https://dell.klantenservicebelgium.be/">call the Dell customer service number</a>. With the help of troubleshooting from customer service executives, you can enjoy working on your computer or printer again. This number is available 365x24x7, so you don't have to worry about problems that occur with Dell products.
hplooij
381,915
Think like a man, UI typography, whimsical websites — and more UX this week
Your weekly list of curated design resources, brought to you by your friends at the UX Colle...
0
2020-07-04T14:49:39
https://uxdesign.cc/to-all-the-people-who-told-me-to-think-like-a-man-9db5e7a77eda
productivity, hotthisweek, marketing, design
--- title: Think like a man, UI typography, whimsical websites — and more UX this week published: true date: 2020-07-04 12:35:53 UTC tags: productivity,hotthisweek,marketing,design canonical_url: https://uxdesign.cc/to-all-the-people-who-told-me-to-think-like-a-man-9db5e7a77eda --- #### _Your weekly list of curated design resources, brought to you by your friends at the UX Collective._ [**View this email in your browser**](https://mailchi.mp/uxdesign/to-the-people-who-told-me-to-think-like-a-man?e=%5BUNIQID%5D) ![](https://cdn-images-1.medium.com/max/1024/0*Gxk4YC7GY6cz7yqe.jpg) Remember to breathe. - [**Connected dots**](https://alistapart.com/article/creative-culture-excerpt/) → Healthier benchmarks for creative culture. - [**Think like a man**](https://uxdesign.cc/to-the-people-who-told-me-to-think-like-a-man-a7ed0ad468b5?source=friends_link&sk=23b5f0526f24188696b16d48a0d4d219) → To the people who told me to “think like a man”. By [Jess Vergara](https://medium.com/u/a902bffa65e5). - [**Your Black friends are busy**](https://www.yourblackfriendsarebusy.com/) → A growing resource for learning and getting informed about anti-racism. The UX Collective is a platform to elevate unheard design voices all over the world, reaching over 368,700 designers every week. Curated by [Fabricio Teixeira](http://twitter.com/fabriciot) and [Caio Braga](http://twitter.com/caioab). 
### Stories from the community ![](https://cdn-images-1.medium.com/max/900/0*FDeZ18ayvZR-MOZk.png) [**Does the design community perpetuate impostor syndrome?**](https://uxdesign.cc/does-the-design-community-perpetuate-imposter-syndrome-71496a3d3c86?source=friends_link&sk=6d78abf60a6d0600df5c7dfef106955c) → By [Trish Willard](https://medium.com/u/6ed1b6e8e585) ![](https://cdn-images-1.medium.com/max/900/0*P7zbQQ3iJ4eOJv-_.png) [**Designers, own your feedback**](https://uxdesign.cc/designers-own-your-feedback-d091f765985e?source=friends_link&sk=a41294f6bc24d32c2735fa0b6d92519b) → By [Fabricio Teixeira](https://medium.com/u/50e39baefa55) ![](https://cdn-images-1.medium.com/max/900/0*jTDelk4v1ejBV_xh.png) [**Where are the Black designers?**](https://uxdesign.cc/where-are-the-black-designers-17929127ccb2?source=friends_link&sk=07c55c20172cdd09948d11ec9f40ee2f) → By [Zariah Cameron](https://medium.com/u/9948ce5bf331) Top stories this week: - [**No, Apple is not moving towards neumorphism**](https://uxdesign.cc/no-apple-is-not-moving-towards-neumorphism-3c144d74c53b?source=friends_link&sk=480cdb353d6e9c7b36f74fab3e6718fa) **→** By [Rubens Cantuni](https://medium.com/u/d7ca111e7984) - [**Pixel-snapping in icon design**](https://uxdesign.cc/pixel-snapping-in-icon-design-a-rendering-test-6ecd5b516522?source=friends_link&sk=75203215dfc1fd5335b52559038b1ebc) **→** By [Helena Zhang](https://medium.com/u/a81b0a6a418e) - [**What the Apple Newton taught us about UX 27 years ago**](https://uxdesign.cc/what-the-apple-newton-taught-us-about-ux-27-years-ago-427c6be66a59?source=friends_link&sk=53974557e6f3f3c84948f2c0ed7c9ed9) **→** By [Jesse Freeman](https://medium.com/u/1300f3ac22c0) - [**47 key lessons for UI & UX designers**](https://uxdesign.cc/47-key-lessons-for-ui-ux-designers-3cb296c1945b?source=friends_link&sk=78626934a49f272ef5fb00b6a8d3b711) **→** By [Danny Sapio](https://medium.com/u/ea241d603214) - [**A few thoughts on Dribbble 
designs**](https://uxdesign.cc/my-thoughts-on-dribbble-4013c4e9a3b5?source=friends_link&sk=e78f1b8f1f329167b1c76c72a4bf71df) **→** By [Frank Huang](https://medium.com/u/5a14072929f4) - [**A designer’s guide to successful user testing**](https://uxdesign.cc/a-beginners-guide-to-user-testing-for-usable-products-fa049df82f28?source=friends_link&sk=60f267f8a7b4a62f638fdd015d578669) **→** By [Michelle Chiu](https://medium.com/u/caf9edec5c88) - [**Specialisation is for insects — be curious!**](https://uxdesign.cc/specialisation-is-for-insects-be-curious-297bf2f0cd2?source=friends_link&sk=f195081598b3d65fbbcdcbc4307a75ef) **→** By [Michal Malewicz](https://medium.com/u/fde1eb3eb589) - [**3 ready-to-use templates for your next testing with users**](https://uxdesign.cc/usability-testing-templates-9b79b40eb481?source=friends_link&sk=1dc046bfaf72376978205ae1368d9628) **→** By [Slava Shestopalov](https://medium.com/u/8b46234a12db) > “It is an extraordinary truth of my life that I am biologically more than half white, and yet I have no white people in my genealogy in living memory. No. Voluntary. Whiteness. I am more than half white, and none of it was consensual. White Southern men — my ancestors — took what they wanted from women they did not love, over whom they had extraordinary power, and then failed to claim their children.” [**You want a confederate monument? My body is a confederate monument**](https://www.nytimes.com/2020/06/26/opinion/confederate-monuments-racism.html) → ### News & ideas - [**Lights & shadows**](https://ciechanow.ski/lights-and-shadows/) → To talk about light we have to start in darkness. - [**Responsible design**](https://cennydd.com/blog/responsible-design-a-process-attempt) → Embed ethical considerations into your process. - [**Web dark ages**](https://pavellaptev.github.io/web-dark-ages/) → Notes on the old internet’s design and front-end. 
- [**UI typography**](https://developer.apple.com/videos/play/wwdc2020/10175/) → Achieving exceptional typography in your product, by Apple. ![](https://cdn-images-1.medium.com/max/1000/0*83Z-awFbMnWrqp5p.jpg) ![](https://cdn-images-1.medium.com/max/1000/0*xVIhi6KISgAZt0MV.jpg) ![](https://cdn-images-1.medium.com/max/1000/0*gxzqf1lKhb5wcEIn.jpg)<figcaption><a href="https://www.juxtapoz.com/news/magazine/features/calida-rawles-wade-in-the-water/"><strong>Featured work: Calida Rawles</strong></a> →</figcaption> ### Tools & resources - [**Good service scale**](https://good.services/the-good-services-scale) → Assessing the quality of your service. - [**Human sounds**](https://failflow.com/humansounds) → UI sounds made by humans for humans. - [**Quotebacks**](https://quotebacks.net/) → Grab and embed snippets of text from the web. - [**Whimsical club**](https://whimsical.club/) → A collection of websites that spark joy. We believe designers are thinkers as much as they are makers. So we created the [**design newsletter**](https://newsletter.uxdesign.cc/) we have always wanted to receive. * * *
fabriciot
382,124
Briefing 10
Original Post Date: 4/22/2020 Author: IvanCoHe Hey people! So I'm going to start doing the...
0
2020-07-04T16:58:39
https://dev.to/smleaks/briefing-10-1nai
*Original Post Date: 4/22/2020* *Author: IvanCoHe* ## Hey people! So I'm going to start doing the Briefings like this now. It's easier for me to format and it serves the same purpose. Now, for the news:<br> <br> **The internal survival build was last updated 4 days ago.** This might mean nothing but it most likely means that this is a pretty stable build. I'd expect some updates in the coming days but this might even be the last one (improbable, but still a possibility)<br> <br> **A new seat is visible in the new gif.** It's basically an egg seat.<br> <br> **There's now pipe chests in survival.** Seen in the last gif, it looks like there will be a chest variety that allows you to connect it to pipes and create automatic sorting systems. It's also needed for the Packing Stations.<br> <br> **Thanks for reading Mechanics, see you next briefing!**<br> <br> Chest and seat<br> ![](https://cdn.discordapp.com/attachments/685994642768265235/702639725260963951/unknown.png)
trbodev
403,856
#2 Saying No - Tips from The Clean Coder
Some tips from the book The Clean Coder, by Uncle Bob.
0
2020-07-19T12:06:49
https://dev.to/yuricosta/2-saying-no-tips-from-the-clean-coder-2pp1
tips, softwaredevelopment, habits, begginers
---
title: #2 Saying No - Tips from The Clean Coder
published: true
description: Some tips from the book The Clean Coder, by Uncle Bob.
tags: #tips #softwaredevelopment #habits #beginners
//cover_image: https://direct_url_to_image.jpg
---

#####*This is the second article from the series "Tips from The Clean Coder". Here we gathered and summarized the main tips from the second chapter.*

Professionals speak truth to power. Professionals have the courage to say no to their managers.

But how do you say no to your boss? After all, it's your boss! Aren't you supposed to do what your boss says?

**No. Not if you are a professional.**

![arnold](https://media.giphy.com/media/3o6nURg0zspR7R4oY8/giphy.gif)

### Adversarial roles

Managers are people with a job to do, and most managers know how to do that job pretty well. Part of that job is to pursue and defend their objectives as aggressively as they can. By the same token, programmers are also people with a job to do. If they are professionals, they will pursue and defend their objectives as aggressively as they can too.

When your manager tells you that the login page has to be ready by tomorrow, he is pursuing and defending one of his objectives. He's doing his job. If you know full well that it's impossible to be done by tomorrow, **then you are not doing your job if you say "OK, I'll try".** The only way to do your job, at that point, is to say **"No, that's impossible"**.

![No](https://media.giphy.com/media/d2ZcfODrNWlA5Gg0/giphy.gif)

But don't you have to do what your manager says? No, your manager is counting on you to defend your objectives as aggressively as he defends his. That's how the two of you are going to get to **the best possible outcome**, which is the goal that you and your manager share. The trick is to **find the goal**, and that usually takes **negotiation**.
![krust](https://media.giphy.com/media/l2Je9g1ZSAVm0bCdq/giphy.gif)

### High stakes

The most important time to say no is when the stakes are highest. **The higher the stakes, the more valuable no becomes**. This should be self-evident. When the cost of failure is so high that the survival of your company depends upon it, **you must be absolutely determined to give your managers the best information you can**. And that often means saying no.

![honest](https://media.giphy.com/media/xT1XGv4aPEDIVblaik/giphy.gif)

### Being a "team player"

Being a team player means playing your position as well as you possibly can, and helping out your teammates when they get into a jam. A team player communicates frequently, keeps an eye out for his or her teammates, and executes his or her own responsibilities as well as possible.

![skinner](https://media.giphy.com/media/xT5LMQCiaKVLAbp3dC/giphy.gif)

**A team player is not someone who says yes all the time.**

#### Trying

![Yoda](https://media.giphy.com/media/pvDp7Ewpzt0o8/giphy.gif)

Yoda is right. Perhaps you don't like that idea? Perhaps you think trying is a positive thing to do. After all, would Columbus have discovered America if he hadn't tried?

The word try has many definitions. The definition I take issue with here is "to apply extra effort". The promise to try is an admission that you've been holding back, that you have a reservoir of extra effort that you can apply. The promise to try is an admission that the goal is attainable through the application of this extra effort; moreover, it is a commitment to apply that extra effort to achieve the goal. Therefore, by promising to try you are committing to succeed. This puts the burden on you. If your "trying" does not lead to the desired outcome, you will have failed.
![Abe](https://media.giphy.com/media/uKPkgG7Utcd6E/giphy.gif)

If you are not holding back some energy in reserve, if you don't have a new plan, if you aren't going to change your behavior, and if you are reasonably confident in your original estimate, then promising to try is fundamentally dishonest. **You are lying**. **And you are probably doing it to save face and avoid a confrontation**.

### The cost of saying yes

Most of the time we want to say yes. Indeed, healthy teams strive to find a way to say yes. Managers and developers in well-run teams will negotiate with each other until they come to a mutually agreed-upon plan of action. But, as we've seen, **sometimes the only way to get to the right yes is to be unafraid to say no.**

### Is good code impossible?

Every feature a client asks for will always be more complex to write than it is to explain. Clients and managers have figured out how to get developers to write code quickly (not effectively, but quickly):

- **Tell developers the app is simple**
- **Add features by faulting the team for not recognizing their necessity**
- **Push the deadline. Over and over**

In this dysfunction, it isn't only the code that will suffer.

![gilfoyle](https://media.giphy.com/media/eHdCatwjMRBvFeUkLy/giphy.gif)

Professionals are often heroes, but not because they try to be. Professionals become heroes when they get a job done well, on time, and on budget. **By trying to become the man of the hour, the savior of the day, you are not acting like a professional**.

Saying yes to dropping our professional disciplines is not the way to solve problems. Dropping those disciplines is the way you create problems. Is good code impossible? Is professionalism impossible? **I say no**.

##### Next article: [#3 Saying Yes](https://dev.to/yurishenrique/3-saying-yes-tips-from-the-clean-coder-24ni)
##### Previous article: [#1 Professionalism](https://dev.to/yurishenrique/1-professionalism-tips-from-the-clean-coder-ba0)
yuricosta
403,978
To be a Full Stack Engineer in 2020
Things to learn to be a full stack engineer in 2020
0
2020-07-19T14:47:15
https://dev.to/freakomonk/to-be-a-full-stack-engineer-in-2020-3n2d
react, node, graphql, javascript
---
title: To be a Full Stack Engineer in 2020
published: true
description: Things to learn to be a full stack engineer in 2020
tags: react, node, graphql, javascript
//cover_image: https://direct_url_to_image.jpg
---

This is a follow-up to the blog post I wrote last year about being a Full Stack Engineer in [2019](https://dev.to/freakomonk/to-be-a-full-stack-engineer-in-2019-97m). I have since joined an amazing [company](https://www.fanatics.com) and picked up a few more skills on being a full stack engineer. I have tried to be as concise as possible, yet exhaustive in the skills to be learnt. Starting from the front-end.

HTML, CSS
--------------

Well, nothing can be done on the web without a basic understanding of HTML and CSS. Developers have long moved on from writing actual HTML and CSS with the advent of UI libraries, but one should still learn the basic building blocks of the web.

1. Mozilla Developer Network is the best resource out there for anything related to the web (mostly!). [https://developer.mozilla.org/en-US/docs/Web/HTML](https://developer.mozilla.org/en-US/docs/Web/HTML)
2. [https://www.w3schools.com/html/](https://www.w3schools.com/html/)
3. Freecodecamp offers what might be the best learning roadmap for HTML and CSS out there: [https://www.freecodecamp.org/learn/](https://www.freecodecamp.org/learn/)

Javascript
-------------

Javascript is probably the most important skill a web developer or a full stack engineer can have, just because of the varied applications of the language. It can be used on the browser and also server-side. The Freecodecamp [track](https://www.freecodecamp.org/learn/) also covers Javascript, but my favorite way of learning JS would be to read @getify 's "You don't know JS" [series](https://github.com/getify/You-Dont-Know-JS). He even recently launched the "You don't know JS yet" [series](https://github.com/getify/You-Dont-Know-JS).
React
-------------

Next we dive into the UI libraries that one must learn. There is still a debate on which is more popular, React or Angular, but since more and more companies are adopting React, let's go with it. Kent C Dodds has an excellent [video tutorial](https://egghead.io/courses/the-beginner-s-guide-to-react) for React beginners on egghead.io. Also, it's recommended to go through the [Official docs](https://reactjs.org) for more advanced topics.

Redux/Mobx/Context/Recoil
-----------------------------

State management is a major problem when designing component based web applications. Each one of Redux/Mobx/Context/Recoil solves the problem in its own way, and having an idea of at least one of them is imperative.

**Redux**: [Getting Started with Redux](https://redux.js.org/tutorials/essentials/part-1-overview-concepts)
**Mobx**: [Intro to Mobx](https://mobx.js.org/getting-started)
**Context**: This is natively supported state management in React - [What is React Context](https://reactjs.org/docs/context.html)
**Recoil**: [What is Recoil](https://recoiljs.org/)

REST
----------------

We make tons of API calls daily and a majority of them are powered by REST. It only makes sense to understand the basic principles behind REST and the corresponding HTTP status codes. [Introduction to RESTful APIs](https://dzone.com/articles/introduction-to-rest-api-restful-web-services)

GraphQL
----------------

GraphQL is a recent contender to REST but has its own applications. Knowing when to use REST vs GraphQL is important for optimising application performance. [Learn GraphQL](https://graphql.org/learn/) [How to GraphQL](https://www.howtographql.com/)

Node.js
--------------

Node.js is the server side runtime for JS which enables you to build APIs and host them using servers. Being able to code in the same language on both browser and server speeds up developer velocity.
[Intro to Node.js](https://nodejs.org/en/docs/guides/)

Golang/Java
----------------

There are certain limitations to what a Nodejs application can achieve, and so for such use-cases we use another OO language like Golang or Java. Java is the most popular one, but Golang is fast rising.

**Java**: [Java Intro](https://www.w3schools.com/java/java_intro.asp)
**Golang**: [A Tour of Go](https://tour.golang.org/welcome/1)

Databases
----------------

There are two types of databases, SQL and NoSQL. The differences between them should be learnt, and only then can we decide when to use which type of database.

**SQL**: There are several popular SQL databases. We have Oracle, MySQL etc, but I will go with [Postgres](https://www.postgresql.org/docs/online-resources/) simply because of its rise and performance.

**NoSQL**: NoSQL databases are used when there are not many interdependencies among your tables ( putting it very very simply, you should go read the differences ). Both [MongoDB](https://university.mongodb.com/) and [Cassandra](https://cassandra.apache.org/doc/latest/getting_started/index.html) are good candidates.

Cache
-----------

More often than not, you end up using a cache to store data that is needed frequently from the database. Again, noting down the popular ones: [Redis](https://university.redislabs.com/) & [Memcached](https://memcached.org/)

----------------------------------------------------------------------------

Apart from this, a full stack engineer should know the basics of cloud ( Azure, AWS or Google Cloud) and [Web design](https://web.dev/learn/)

There are a few options I deliberately skipped from this list, such as Typescript and Deno, to not confuse new engineers entering the realm. Let me know if you see anything amiss or you want to know about any particular tech.
freakomonk
404,120
Environmental Variables in Python
Environmental Variables in Python Before we begin let's look at our little vocabulary ENV...
0
2020-07-19T16:34:22
https://dev.to/igmrrf/environmental-variables-in-python-5a2k
python, env, environmental, decouple
# Environmental Variables in Python

Before we begin, let's look at our little vocabulary:

ENV = Environmental Variable
ENVs = Environmental Variables

I haven't really built any major project with Python except writing scripts or solving simple algorithms. Today I was making changes to my [Facebook_autopost_bot](https://github.com/igmrrf/Facebook_autopost_post) when I realized I'm gonna need to set this up to enable others to use the code and also share it openly on Github without letting out my passwords and configurations.

Maybe you're already using **ENVs** in your Python scripts or applications, but if you haven't started, then now is a good time to consider a change. I believe that prior to reading this, you already know what an ENV is, so no need to make this post longer.

NOTE: ENVs exist outside of your code as part of your server environment; they can help you by both streamlining and making more secure the process of running your scripts and applications. Automation and security are the major reasons for adopting ENVs.

## Let's Start

In Python, environment variables are accessed using the _os_ module.

Sample code:

```python
import os
print(os.environ)
```

Result: This will show you all the ENVs existing on your machine (an object containing a lot of information about your machine, OS, services etc.)

Note: This is an edited (shortened) output for the purpose of the blog post length.
```python
>>> os.environ
environ({'SHELL': '/bin/bash', 'LSCOLORS': 'ExFxBxDxCxegedabagacad', 'SESSION_MANAGER': 'local/igmrrf:@/tmp/.ICE-unix/2554,unix/igmrrf:/tmp/.ICE-unix/2554', 'QT_ACCESSIBILITY': '1', 'APPLICATION_INSIGHTS_NO_DIAGNOSTIC_CHANNEL': 'true', 'LANGUAGE': 'en_NG:en', 'QT4_IM_MODULE': 'ibus', 'GNOME_SHELL_SESSION_MODE': 'ubuntu', 'SSH_AUTH_SOCK': '/run/user/1000/keyring/ssh', 'XMODIFIERS': '@im=ibus', 'DESKTOP_SESSION': 'ubuntu', 'SSH_AGENT_PID': '2467', 'NO_AT_BRIDGE': '1', 'GTK_MODULES': 'gail:atk-bridge', 'DBUS_STARTER_BUS_TYPE': 'session', 'PWD': '/home/igmrrf/Desktop/Writings/ENVs in Python', 'TERM_PROGRAM': 'vscode', '_': '/usr/bin/python3'})
```

### COMMANDS for Reading and Writing environment variables:

### READING

```python
os.environ.get('USER')
os.environ['USER']
os.getenv('USER')

>>> igmrrf
>>> igmrrf
>>> igmrrf
```

The commands will print out your current username.

Note: If there is no environment variable matching the key, `os.environ.get()` and `os.getenv()` return _None_, while `os.environ['KEY']` raises a _KeyError_.

### WRITING

To change an ENV:

```python
os.environ['USER'] = 'tldo'

os.environ['USER']
>>> tldo
```

### To Clear an ENV

```python
os.environ.pop('USER')
```

When trying to access that ENV with `.get()`, you'll get _None_:

```python
os.environ.get('USER')
>>> None
```

### To Clear All ENVs

```python
os.environ.clear()
```

When trying to access any ENV with subscript notation, you'll get a _KeyError_:

```python
os.environ['USER']
>>> KeyError: 'USER'
```

NOTE: I was scared at first about this clear function, but don't worry: the settings you apply in your Python projects and scripts don't affect other projects outside that specific process or affect machine-wide ENVs. If you wish to effect a machine-wide change, you'll need to run these commands from _bash_ with _sudo_ privileges.

## Using ENVs

In order to use these variables as we keep on building scripts, and since efficiency, speed and optimization are major criteria for programmers and developers, we need to assign the function of handling these variables to an external file.
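Before we hand things off to an external file, here's how the reading and writing commands above combine into one small runnable sketch (the `APP_MODE` variable name is just an example for illustration):

```python
import os

# Read with a safe fallback, so the script still runs on machines
# where the variable is not set
user = os.environ.get("USER", "unknown")

# os.environ behaves like a dict, so writing and deleting work the same way
os.environ["APP_MODE"] = "debug"          # set (affects this process only)
mode = os.environ.pop("APP_MODE", None)   # remove, returning the old value

print(user, mode)
```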
A package that does this effortlessly is _python-decouple_. Open your terminal and run:

```shell
pip install python-decouple
```

If you use Ubuntu Linux and installed Python using _sudo apt install python3_, then run:

```shell
pip3 install python-decouple
```

It's a useful package for handling ENVs locally instead of accessing our os (_import os_) and manipulating it directly, which is a bit complex :wink:

If you've already got it installed, you'll get:

```
Requirement already satisfied: python-decouple in /home/your_name/.local/lib/python3.8/site-packages (3.3)
```

else it will be installed in a few seconds.

## Using Python-decouple

### Let's get started by creating and opening our .env file at the root of your project

```shell
$ touch .env
$ code .env
```

Note: _code_ is a command that comes with [VS Code](https://code.visualstudio.com). Only run it if you have VS Code installed and configured correctly on your machine.

Then configure the file as follows (keys are case-sensitive, so they must match what you pass to `config()`):

```
USERNAME=igmrrf
PASSWORD=12345
URL=https://api.igmrrf.com
```

Then import python-decouple in your Python script where you need these variables:

```python
from decouple import config

print(config('URL'))
print(config('USERNAME'))

>>> https://api.igmrrf.com
>>> igmrrf
```

Wasn't that easy? :smile: Python provides a package for almost everything; that's one of the reasons Python is so popular and getting more recognition. If you've got a second, either tweet about this or go check out a python package :wink:
igmrrf
404,136
Visualizing Merge Sort
I find algorithms fascinating. I've recently been focusing on the awesomeness of sorting algorithms....
0
2020-07-19T18:22:36
https://dev.to/jameseaster/visualizing-merge-sort-3mnc
algorithms, javascript, vue, async
I find algorithms fascinating. I've recently been focusing on the awesomeness of sorting algorithms. This is no coincidence, as I dove head first into a personal project with the aim of accomplishing two things: become familiar with Vue.js and grow a deeper understanding/appreciation for sorting algorithms.

My idea for this project was to create a sorting algorithm visualizer that displayed the moment to moment operations that occur inside of each algorithm. This absolutely helped me achieve the two goals previously mentioned (utilize Vue.js & learn more sorting algorithms).

While building this visualizer I came across several challenges. The first challenge was simply diving deeper into each algorithm and writing my own version. The next challenge was dissecting each operation in order to pick and choose what I needed to visualize. Several of the algorithms lent themselves to being slowed down by async/await functionality, flipping some colors and values, and then letting the algorithm do the rest. I go over an example of this with bubble sort in this [blog post](https://dev.to/jameseaster/visualizing-bubble-sort-vue-js-10g3). However, merge sort was not so straightforward. If you are not familiar with how merge sort operates, check out [this blog](https://dev.to/jameseaster/merge-sort-talk-2g50) and my [visualizer](https://jameseaster.github.io/algo-visualization/) so we can dive into the killer inner workings of animating this dude.

Cutting to the chase: merge sort has several steps that require recursive function calls; because of this, I found it increasingly difficult to know when and where to pause the code and color and move the data appropriately in order to visualize the algorithm's operations. In fact, I never got it to work... I would slow one part of the algorithm down to visualize it, which would then cause another part of the algorithm to get all jumbled up. I knew I needed another solution.
My algorithm worked excellently and sorted data quickly, but I was having a heck of a time trying to visualize any piece of it without messing up its entire functionality. So, brainstorm, brainstorm, brainstorm... I decided to not change anything about the algorithm. Instead, I would let it run like normal and add another parameter which would take an array that recorded the operations as they happened! In other words: at each operation inside of merge sort I would create an object that would record the current action (comparing or overwriting), the index, and the value of each piece of data being sorted.

Example of one of the objects:

```javascript
{
  // record what action was happening
  action: "overwrite",
  // which index it was occurring at
  idx1: k,
  // the value at that index
  value: arrayCopy[i].value,
}
```

Because Vue.js cannot pick up on the updating of an array or a property of an object without calling Vue.set(), I could let my merge sort algorithm run, record each computation in an object, then store that object in my animations array. Once the merge sort was finished sorting (which is just about instantly) the DOM looked the exact same, and I had an array of objects that held the information from each computation. All I had to do then was iterate over this array of animations and slowly animate these changes using Vue.set(), and then voila!

Once my merge sort algorithm ran, I ran this method to visualize each animation on the DOM.
```javascript
// defined as a Vue component method, so `this` refers to the component instance
async function animate(animations) {
  for (let todo of animations) {
    if (todo.action === "compare") {
      // changes the color of the two indexes being compared
      let { value: val1, color: col1 } = this.numbers[todo.idx1];
      let { value: val2, color: col2 } = this.numbers[todo.idx2];
      this.$set(this.numbers, todo.idx1, {
        value: val1,
        color: this.compare,
      });
      this.$set(this.numbers, todo.idx2, {
        value: val2,
        color: this.compare,
      });
      // waits 20ms so the comparison is visible before moving on
      await new Promise((resolve) => setTimeout(resolve, 20));
      // changes the colors back to the original color
      this.$set(this.numbers, todo.idx1, {
        value: val1,
        color: col1,
      });
      this.$set(this.numbers, todo.idx2, {
        value: val2,
        color: col2,
      });
    } else {
      // waits 20ms so the overwrite is visible
      await new Promise((resolve) => setTimeout(resolve, 20));
      // overwrite idx1 with the recorded value, change color to sorted
      this.$set(this.numbers, todo.idx1, {
        value: todo.value,
        color: this.sorted,
      });
    }
  }
}
```

It's no secret that merge sort can be very tricky. There are several ways of implementing it, but an important note is that in order to utilize this visualizing approach it is necessary to record the index of the values that merge sort is manipulating. That being said, if you ever so slightly have a hankering to tackle a visualizing project, do it! There is so much to learn and appreciate from each algorithm. I hope this has been helpful and interesting. Best of luck and share your sorting visualizations - we'd all love to see them!
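P.S. For anyone who wants to experiment with the recording idea outside of Vue, here is a minimal, framework-free sketch of it: a merge sort that pushes `compare` and `overwrite` records into an `animations` array as it runs. The record shapes mirror the objects above, but this simplified version sorts plain numbers rather than the `{value, color}` objects my component uses, so it's an illustration, not the exact code from my visualizer.

```javascript
// A merge sort that records every comparison and overwrite into an
// `animations` array instead of touching the DOM while it sorts
function mergeSortWithAnimations(array) {
  const animations = [];
  const main = array.slice();
  const aux = array.slice();

  function sort(lo, hi) {
    if (hi - lo < 2) return;
    const mid = Math.floor((lo + hi) / 2);
    sort(lo, mid);
    sort(mid, hi);
    merge(lo, mid, hi);
  }

  function merge(lo, mid, hi) {
    for (let k = lo; k < hi; k++) aux[k] = main[k];
    let i = lo;
    let j = mid;
    for (let k = lo; k < hi; k++) {
      if (i < mid && j < hi) {
        // record which two indexes are being compared
        animations.push({ action: "compare", idx1: i, idx2: j });
      }
      if (i < mid && (j >= hi || aux[i] <= aux[j])) {
        // record the overwrite at position k with the winning value
        animations.push({ action: "overwrite", idx1: k, value: aux[i] });
        main[k] = aux[i++];
      } else {
        animations.push({ action: "overwrite", idx1: k, value: aux[j] });
        main[k] = aux[j++];
      }
    }
  }

  sort(0, main.length);
  return { sorted: main, animations };
}

const { sorted, animations } = mergeSortWithAnimations([5, 2, 4, 1, 3]);
console.log(sorted); // the sorted copy; `animations` holds every recorded step
```

Replaying `animations` through a method like `animate` above is then just a loop, with the sort itself long finished.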
jameseaster
404,227
Using Google's OAuth, Passport.js , and Express for Authorization - Part 2
Okay, so last week we started the process of implementing user authentication with the help of Google...
0
2020-07-19T23:34:04
https://dev.to/mlittle17/using-google-s-oauth-passport-js-and-express-for-authorization-part-2-gam
node, beginners, googlecloud, javascript
Okay, so last week we started the process of implementing user authentication with the help of Google's OAuth API and Passport.js. On the server side, we're using Node.js and Express for middleware. We covered some basics like how to get our Google Client Id and Client Secret, and then we set up our Google Strategy within Passport to handle some of the functionality under the hood. Just as a reminder, here's what it looked like:

```javascript
passport.use(new GoogleStrategy({
  // options for the google strategy
  callbackURL: '/googleRedirect',
  clientID: process.env.GOOGLECLIENTID,
  clientSecret: process.env.GOOGLECLIENTSECRET,
}, callback));
```

What we didn't cover was the callback function inside of that Passport object, so let's discuss that in a little more detail. But to do that, let's first visualize this entire authentication process a little bit with the help from [Google's OAuth documentation](https://developers.google.com/identity/protocols/oauth2):

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/1rodbnxbclbmqtjzkduq.png)

These arrows can be a little confusing so let's break them down step by step:

1. Our user visits our application and wants to login. For our application, we're only giving the user the option to sign in through Google.
2. Google informs the user that our application is asking for their information and by signing in, they are giving Google permission to pass their data back to us.
3. Once the user signs in, Google redirects the user back to our application but within that redirect, the user is also carrying something important: an authorization code.
4. When the user returns to our site, we aren't immediately given their info. Instead, we're given this authorization code, which we then have to go back to Google and say "Hey, we're good, they came back with this code, can we get their info now?" Google obliges.
5.
Once we have that user data from Google, we can do two things: save that user to our database if they've never visited our website before or, if they have, render the application with any additional data they've saved within our application before.

### Our Callback Function

While that seems like a lot of steps, the callback function we've been talking about manages almost all of these for us, so let's finally take a look at that:

```javascript
(accessToken, refreshToken, profile, done) => {
  // passport callback function
  const {
    id: googleId,
    displayName: username,
    given_name: firstName,
    family_name: lastName,
    picture: photo,
    email,
  } = profile;
  const user = {
    googleId,
    username,
    firstName,
    lastName,
    photo,
    email,
  };
  getUser(googleId)
    .then(currentUser => {
      // if the response includes a user object from our database
      if (currentUser.length) {
        done(null, currentUser[0]);
      } else {
        // if not, create a new user in the database
        createUser(user);
        getUser(googleId)
          .then(newUser => {
            done(null, newUser[0]);
          })
          .catch(err => console.log(err));
      }
    });
};
```

Wow, that's a doozy! But again, by breaking this down with the steps we listed before, this can make a lot more sense.

### Breaking down the callback

What's not in this function are steps 1 through 3: our user has signed in and Google has delivered what they call their "profile", the object that contains all the user info we've requested. But we probably aren't saving all of that profile info to our database, and we probably aren't going to name it the same thing they do. For example, Google saves what is typically considered someone's last name as the key of "family_name", so we'll need to take the value stored there but then rename the key to whatever our database is expecting.
All of that is done in this part here:

```javascript
// destructuring the profile object from Google, creating new variable names to be stored in our user object
const {
  id: googleId,
  displayName: username,
  given_name: firstName,
  family_name: lastName,
  picture: photo,
  email,
} = profile;

// creating our user object with all of our new user variables stored as keys
const user = {
  googleId,
  username,
  firstName,
  lastName,
  photo,
  email,
};
```

Next we need to handle step 5 to determine if this user is new (in which case we need to save them to our database) or, if they've been here before, we need to load our application with their previously entered data. Since we're storing the user's Google ID, that's a perfect thing to look for since we can be sure that it's unique.

One note about this section: this could look different depending on what database you're using and how your database returns data you're searching for, but the overall logic will be similar. For this project, we're using PostgreSQL and PG Promise, which returns an array when searching for a user. If the user is new, you'll get an empty array. If not, that user object will be stored at index 0 in the array.

```javascript
// get the user with this Google ID stored in our database
getUser(googleId)
  .then(currentUser => {
    // if the response includes a user object from our database
    if (currentUser.length) {
      // call done with that user
      done(null, currentUser[0]);
    } else {
      // if not, create a new user in the database
      createUser(user);
      // once created, retrieve that newly created user
      getUser(googleId)
        .then(newUser => {
          // call done with that newly created user
          done(null, newUser[0]);
        })
        .catch(err => console.log(err));
    }
  });
```

See, that wasn't so bad! To be frank, the hardest part about this function is building your database methods like getUser or createUser.
Once those are operating like you designed them to, it's just a matter of chaining some .then's to your functions (well, in this case, since PG Promise returns a Promise) to complete the cycle.

### Looking at our App.js file thus far

Alright, so we've added our callback to our promise object, so let's do a quick review of our app.js file so far. Like I mentioned last week, it's generally better to separate parts that don't directly have to do with your app's server into other files, but we're keeping it on one page for simplicity.

```javascript
// bringing express into our project
const express = require('express');
// bringing passport into our project
const passport = require('passport');
// bringing a Google "plugin" or Strategy that interacts with Passport
const GoogleStrategy = require('passport-google-oauth20').Strategy;

// initializing our app by invoking express
const app = express();

passport.use(new GoogleStrategy({
  // options for the google strategy
  callbackURL: '/googleRedirect',
  clientID: process.env.GOOGLECLIENTID,
  clientSecret: process.env.GOOGLECLIENTSECRET,
}, (accessToken, refreshToken, profile, done) => {
  // passport callback function
  const {
    id: googleId,
    displayName: username,
    given_name: firstName,
    family_name: lastName,
    picture: photo,
    email,
  } = profile;
  const user = {
    googleId,
    username,
    firstName,
    lastName,
    photo,
    email,
  };
  getUser(googleId)
    .then(currentUser => {
      // if the response includes a user object from our database
      if (currentUser.length) {
        done(null, currentUser[0]);
      } else {
        // if not, create a new user in the database
        createUser(user);
        getUser(googleId)
          .then(newUser => {
            done(null, newUser[0]);
          })
          .catch(err => console.log(err));
      }
    });
}));

// assigning the port to 8000
const port = 8000;

// calling the listen method on app with a callback that will execute if the server is running and tell us what port
app.listen(port, () => {
  console.log(`Server listening on port ${port}`);
});
```

### Next week

In the last part
of this series, we'll wrap everything up by setting up our routes, which are essentially the strike of the match that gets this authentication process started. Plus, these routes are crucial both when the user goes to Google and when that user comes back with that access code. And finally, there are some other functions that Passport gives us that we need to use to help our user avoid logging in every time they visit our page.

Just like last week, here are some of the functions that we'll be talking about. Notice something interesting? These functions use a done method just like our callback. Might be important to figure out what exactly that method does, right?

```javascript
passport.serializeUser((user, done) => {
  // calling done method once we get the user from the db
  done(null, user.googleid);
});

passport.deserializeUser((id, done) => {
  getUser(id)
    .then(currentUser => {
      done(null, currentUser[0]);
    });
});
```
mlittle17
733,973
Intro to Kafka - Consumer groups
Consumers can form groups, aptly named “consumer groups”. These consumer groups determine what...
13,985
2021-08-06T19:39:01
https://lankydan.dev/intro-to-kafka-consumer-groups
kafka, kotlin
--- title: Intro to Kafka - Consumer groups published: true date: 2021-06-20 00:00:00 UTC tags: kafka, kotlin canonical_url: https://lankydan.dev/intro-to-kafka-consumer-groups series: Intro to Kafka --- Consumers can form groups, aptly named “consumer groups”. These consumer groups determine what records a consumer receives. I know that sounds quite vague, but honestly, I can’t think of any other short and snappy way to summarise this topic. To be more specific, consumer groups do the following: - Groups consumers by function - Shares a topic’s partitions between consumers in the same group The following sections expand on these points. ## Groups consumers by function Consumer groups allow you to relate a set of consumers working together to perform a single function, process or task. Every consumer group will process every record in a topic independently from other groups. For example, suppose you have a topic representing sales data. In that case, one group could maintain an aggregate of the sale values while another is passing it to a downstream system for later processing. Both of these groups will run independently and manage their offsets, even when subscribing to the same topic. The diagram below has two consumer groups who receive all the records stored in the topic: [![Kafka consumers in different groups receive all records independent of other groups](https://lankydan.dev/static/60929ff451b39bc3c21653e235d668bb/b13e1/kafka-consumer-groups-receive-all-records.png "Kafka consumers in different groups receive all records independent of other groups")](https://lankydan.dev/static/60929ff451b39bc3c21653e235d668bb/7d7af/kafka-consumer-groups-receive-all-records.png) ## Shares a topic’s partitions between consumers in the same group Every consumer inside a consumer group is assigned a number of a topic’s partitions, ensuring that records are shared between the group while also keeping each partition’s records in order. 
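As a toy illustration (in JavaScript rather than Kafka's own implementation language), the sharing described above can be modeled as dividing a topic's partitions among the group's consumers, with the first consumers taking any remainder. This is not Kafka's real assignor — just the idea:

```javascript
// Toy model of sharing a topic's partitions across one consumer group:
// divide evenly, with the first consumers (in sorted order) taking any extra.
// This is an illustration only, not Kafka's actual assignment code.
function assignPartitions(partitionCount, consumers) {
  const sorted = [...consumers].sort();
  const base = Math.floor(partitionCount / sorted.length);
  const extra = partitionCount % sorted.length;
  const assignment = {};
  let next = 0;
  sorted.forEach((consumer, i) => {
    const take = base + (i < extra ? 1 : 0);
    assignment[consumer] = Array.from({ length: take }, (_, j) => next + j);
    next += take;
  });
  return assignment;
}

// 3 partitions shared by 2 consumers: C0 gets one extra partition.
console.log(assignPartitions(3, ['C0', 'C1']));
// → { C0: [ 0, 1 ], C1: [ 2 ] }
```

Calling the function again with an extra consumer in the list models what a rebalance computes after a new consumer joins the group.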
This mechanism is made more efficient by keeping the assigned partitions the same until introducing a new consumer to the group. At this point, Kafka reassigns the partitions between the group’s consumers, and this process is called “rebalancing”. Consumers can also be assigned multiple partitions if there are more partitions than consumers within a group. The diagram shows the sharing of partitions within a single consumer group: [![Single Kafka consumer group with multiple consumers receiving all records](https://lankydan.dev/static/55fc33f26ad42e2f712227842d73da84/b13e1/kafka-consumer-groups-multiple-consumers-in-group.png "Single Kafka consumer group with multiple consumers receiving all records")](https://lankydan.dev/static/55fc33f26ad42e2f712227842d73da84/aff31/kafka-consumer-groups-multiple-consumers-in-group.png) Introducing another consumer group does not affect the assignment of existing groups, as shown below: [![Multiple Kafka consumer groups with multiple consumers with each group receiving records independently from the other](https://lankydan.dev/static/d8bcf333f3332a064d194df5e45fa540/b13e1/kafka-consumer-groups-multiple-consumers-in-multiple-groups.png "Multiple Kafka consumer groups with multiple consumers with each group receiving records independently from the other")](https://lankydan.dev/static/d8bcf333f3332a064d194df5e45fa540/56d9c/kafka-consumer-groups-multiple-consumers-in-multiple-groups.png) The system will rebalance when a new consumer is added to one of the groups, visualised below: [![A consumer group with two consumers reassigns partitions when a new consumer is added to the group](https://lankydan.dev/static/280b35ae41ff95591ffd284748862c86/b13e1/kafka-consumer-groups-rebalance.png "A consumer group with two consumers reassigns partitions when a new consumer is added to the group")](https://lankydan.dev/static/280b35ae41ff95591ffd284748862c86/56d9c/kafka-consumer-groups-rebalance.png) A new consumer was added to `group 1`, which 
caused the assigned partitions to rebalance between them. ## Creating a consumer group You don’t have to create a group explicitly. Instead, it’s done as part of creating a consumer. When constructing a consumer, pass in the `group.id` property of the group. In fact, this field is required, so any Kafka example or tutorial you’ve followed will have already included this piece of code. ```kotlin fun createConsumer(): Consumer<String, String> { val props = Properties() props["bootstrap.servers"] = "localhost:9092" // The important stuff is below! props["group.id"] = "test" // That's the end of the important stuff! props["enable.auto.commit"] = "true" props["auto.commit.interval.ms"] = "1000" props["key.deserializer"] = "org.apache.kafka.common.serialization.StringDeserializer" props["value.deserializer"] = "org.apache.kafka.common.serialization.StringDeserializer" return KafkaConsumer(props) } ``` Initialising more consumers adds them to the existing group without you having to do anything else. Kafka makes this work exceptionally smoothly since the Broker knows about all the consumers in the group and can therefore handle consumers joining (or leaving) a group. ## How partitions are assigned The chosen “assignment strategy” determines the assignment of partitions within a group. The default strategy is the `RangeAssignor`, which I would rather not explain in my own words, so I’ll quote [Kafka’s JavaDocs](https://kafka.apache.org/27/javadoc/org/apache/kafka/clients/consumer/RangeAssignor.html) instead. > The range assignor works on a per-topic basis. For each topic, we lay out the available partitions in numeric order and the consumers in lexicographic order. We then divide the number of partitions by the total number of consumers to determine the number of partitions to assign to each consumer. If it does not evenly divide, then the first few consumers will have one extra partition. 
> > For example, suppose there are two consumers `C0` and `C1`, two topics `t0` and `t1`, and each topic has 3 partitions, resulting in partitions `t0p0`, `t0p1`, `t0p2`, `t1p0`, `t1p1`, and `t1p2`. > > The assignment will be: > > C0: [`t0p0`, `t0p1`, `t1p0`, `t1p1`] C1: [`t0p2`, `t1p2`] You can configure a consumer group’s assignment strategy by using the `partition.assignment.strategy` property and defining the qualified class name of the assignor to use. > More information on the different assignors can be found in [Kafka’s documentation](https://kafka.apache.org/documentation/#consumerconfigs_partition.assignment.strategy). ## Summary Consumer groups in Kafka allow you to: - Group consumers by their function in a system. - Split the processing load of a topic by sharing its partitions between consumers in a group. You’ve seen how to create a consumer and place it in a group or add one to an existing group by specifying its `group.id` during initialisation. You will hopefully also recall that Kafka does the rest of the work and that selecting the `group.id` is all you have to do. By finishing this post, we have now covered the core parts of Kafka (at least in my opinion), and you should have enough knowledge to start building applications using it. Probably nothing fancy, but you’ll at least have the foundational understanding to get started. I’ll look at particular topics more closely in future posts that I might have brushed over previously, or I’ll cover new material as I learn it myself.
lankydandev
404,243
Configuring routes in NodeJS with Typescript
In the previous post, we provided an overview of the use of typescript in NodeJS, navigating to the f...
7,868
2020-07-19T20:33:27
https://dev.to/aryclenio/configuring-routes-in-nodejs-with-typescript-2281
node, typescript, javascript, api
In the previous post, we provided an overview of the use of typescript in NodeJS, navigating to the following points: * Yarn installation * Configuration of dependencies * Configuration of Express and TS-NODE-DEV Today, we will continue the project by configuring our routes now, we will understand the HTTP methods and their use on the node through Typescript. Here we go? ### **Part 1: Understanding routes** In a REST API, routes are responsible for providing data to a Web application. When accessing a **route**, the server is responsible for **creating, reading, changing or removing** data within the database . Imagine, for example, a user registration application on a system. Our front-end application should normally have screens for registering, viewing, changing and removing users, and each of these screens makes an * HTTP request * to the server and waits for a response from it. ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/3sewf22rdsrjuznrp5kz.png) Shall we understand how to create and view one? ### **Part 2: Creating the first route** In the previous post, we created the file **server.ts** that was responsible for keeping the express on port 3333. Now, let's make it respond to that. Let's create a folder called **routes** and in it, create the file **user.routes.ts**. This file will be responsible for telling the server how it should respond if the web application requests something related to the user. For that, we will need to use in this file, an express module called Router and initialize it inside the file, as shown below. ```typescript import { Router } from 'express'; const usersRouter = Router(); ``` With that, we start a routing on our server, however, it is still necessary to indicate the methods and the response to be made, and for that, before continuing, we need to understand what HTTP methods are. 
### **Part 3: Getting to know HTTP methods**

Basically, applications that talk to our backend must identify their requests using HTTP methods. Most applications are based on the so-called CRUD operations (Create, Read, Update and Delete), and for each type of action requested there is an HTTP method that needs to be mentioned at the time of the request: **POST, GET, PUT and DELETE** respectively. There are several other HTTP methods, but for our application we will use only the most common ones.

![http methods](https://dev-to-uploads.s3.amazonaws.com/i/z2rcgmyvx56hhddw2h8c.jpeg)

### **Part 4: Creating the GET route**

Returning to our **user.routes.ts** file, we will create our first GET route using the Router instance we initialized. In it, in addition to indicating the path of the request, we also need to include a callback (a function that is called back with the request data) that produces the response. A route necessarily has a request and a response. The first carries the data coming from the request; for example, if a user were being registered, the request would contain all the data for creating this user. The response holds the data returned by the server, such as confirmation messages, errors, or the data itself. See the construction of the GET route below:

```typescript
usersRouter.get('/', (request, response) => {
  return response.json("OK");
});
```

That's it, a route was created on the Express server. However, it is not yet activated. For this, we must export our routes, so our file will look like this:

```typescript
import { Router } from 'express';

const usersRouter = Router();

usersRouter.get('/', (request, response) => {
  return response.json("OK");
});

export default usersRouter;
```

### **Part 5: Activating routes**

To activate our routes, we create in the same folder a file called **index.ts** that will be responsible for uniting all the routes of our application.
It will only import our user route and make it respond when the application accesses localhost:3333/users. See below:

```typescript
import { Router } from 'express';
import usersRouter from './user.routes';

const routes = Router();

routes.use('/users', usersRouter);

export default routes;
```

Note that we have imported the Express Router again to indicate that this file will concentrate all the routes of our application. In addition, on our server, we must indicate that it should use these routes, importing the index.ts file and using the **app.use()** that we saw earlier.

```typescript
import express from 'express';
import routes from './routes';

const app = express();

app.use(express.json());
app.use(routes);

app.listen(3333, () => {
  console.log('Server started');
});
```

We see some changes in our file. The first is *app.use(express.json())*, which simply allows Express to receive data as JSON in our requests, in addition to *app.use(routes)*, already mentioned above, which activates our routes.

### **Part 6: Testing our application**

Activate the server using the command below, which will start ts-node-dev and put our server online:

```console
yarn dev
```

Now, access [localhost:3333/users](//localhost:3333/users) in your browser and you will see that an **OK** is returned, as defined when we created our route. This shows that the server worked and that we made a GET request to our API. In the next articles, we will continue with the creation of new routes and understand what **Repository and Model** are and how TypeScript can be superior to JavaScript in building these processes. Thanks for reading!
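To build intuition for what `routes.use('/users', usersRouter)` is doing, here is a toy model in plain JavaScript (deliberately not Express itself, so it runs standalone) of mounting a child router under a path prefix:

```javascript
// Toy model (NOT Express) of mounting a child router under a path prefix,
// the way routes.use('/users', usersRouter) does: a request for GET /users
// is delegated to the child router's '/' handler.
function createRouter() {
  const handlers = {}; // keyed by "METHOD path"
  return {
    get(path, handler) { handlers[`GET ${path}`] = handler; },
    handle(method, path) {
      const handler = handlers[`${method} ${path}`];
      return handler ? handler() : 'not found';
    },
    mount(prefix, child) {
      // Requests for the prefix are delegated to the child's root handler.
      handlers[`GET ${prefix}`] = () => child.handle('GET', '/');
    },
  };
}

const usersRouter = createRouter();
usersRouter.get('/', () => 'OK');

const routes = createRouter();
routes.mount('/users', usersRouter);

console.log(routes.handle('GET', '/users')); // → 'OK'
```

Express does the same delegation far more generally (path parameters, all HTTP methods, middleware chains), but the prefix-plus-child-router composition is the core idea.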
aryclenio
404,265
Package and remotely deploy your Kubernetes cluster and its applications easily with Gravity…
Gravity (from the company Gravitational) is an open source tool that allows developers to...
0
2020-07-19T20:48:28
https://medium.com/@deep75/empaqueter-et-t%C3%A9l%C3%A9-d%C3%A9ployer-son-cluster-kubernetes-et-ses-applications-simplement-avec-gravity-fcc2cecae34d
kubernetes, docker, hetzner, digitalocean
--- title: Package and remotely deploy your Kubernetes cluster and its applications easily with Gravity… published: true date: 2020-07-19 20:29:03 UTC tags: kubernetes,docker,hetzner,digitalocean canonical_url: https://medium.com/@deep75/empaqueter-et-t%C3%A9l%C3%A9-d%C3%A9ployer-son-cluster-kubernetes-et-ses-applications-simplement-avec-gravity-fcc2cecae34d ---

![](https://cdn-images-1.medium.com/max/2658/1*i33_JOxG1S_7aD4zhk0D6A.jpeg)

Gravity (from the company Gravitational) is an open source tool that lets developers bundle multiple Kubernetes applications into an easily distributable *.tar* file called a "cluster image".

[**gravitational/gravity** *Gravity is an upstream Kubernetes packaging solution that takes the drama out of deploying and running applications in…*github.com](https://github.com/gravitational/gravity)

A cluster image contains everything an application needs and can be used to quickly create Kubernetes clusters preloaded with applications from scratch, or to load the applications contained in an image into an existing Kubernetes cluster such as OpenShift or GKE.

![](https://cdn-images-1.medium.com/max/2640/1*ACzD80OkCTREn79PWTFW4A.jpeg)

[**Run Cloud-native Applications in Uncharted Territory | Gravity** *Make cloud-native applications work in any Linux environment, even air-gapped data centers or edge devices. Meet strict…*gravitational.com](https://gravitational.com/gravity)

> Gravity can be used by organizations that need to deploy their Kubernetes applications into "uncharted territories" such as their customers' infrastructure (cloud or on-premises), in an Edge Computing context, or to package applications for easy internal distribution across different clouds…

It appears in the famous CNCF (Cloud Native Computing Foundation) landscape.
[**CNCF Cloud Native Interactive Landscape** *Filter and sort by GitHub stars, funding, commits, contributors, hq location, and tweets. Updated: 2020-07-19 00:23:28Z*landscape.cncf.io](https://landscape.cncf.io/selected=gravity)

![](https://cdn-images-1.medium.com/max/2730/1*KsUSosn-D89zm8Frk37ifA.jpeg)

![](https://cdn-images-1.medium.com/max/2000/0*TokK-AzMsnfEmhC_.jpg)

The recent 7.0 release brought the ability to:

* Deploy Gravity cluster images into existing Kubernetes clusters (including nodes hardened with SELinux in particular).
* Make it easier to run Gravity clusters in environments where security and compliance are paramount (notably via Teleport, an integrated SSH/Kubernetes gateway that synchronizes role-based access control for both protocols and provides unified authentication).

[**Announcing Gravity 7.0** *Today, we are excited to announce the release of Gravity 7.0! Gravity is a tool for developers to package multiple…*gravitational.com](https://gravitational.com/blog/announcing-gravity-7-0/)

[**Secure Access for Developers that Doesn't Get in the Way** *" Gravitational reduces the operations and the support burden normally associated with on-premises software. The…*gravitational.com](https://gravitational.com/teleport/)

For this new test, I launch a droplet running Ubuntu 20.04 LTS on DigitalOcean:

![](https://cdn-images-1.medium.com/max/2298/1*FwBpRCySqwjUZtmJDtBHdA.jpeg)

with the Docker engine installed beforehand:

![](https://cdn-images-1.medium.com/max/2456/1*7Dj-ioaN7kudS4h1DqQldg.png)

I grab the package containing Gravity from the Gravitational website (which exists in a community and a commercial edition):

[**Download Gravity - Community Edition** *Gravity is open-source, meets open standards and is built with cloud-native expertise. Open source and open standards…*gravitational.com](https://gravitational.com/gravity/download/)

![](https://cdn-images-1.medium.com/max/2718/1*CmrNc3nP2mbmpunc3nVtdg.jpeg)

This package contains the binaries Gravity needs in order to work properly…

![](https://cdn-images-1.medium.com/max/3668/1*wujPWeKkfTZDtSUKeN5PzQ.png)

I fetch the Git repository of examples for the Gravity solution…

![](https://cdn-images-1.medium.com/max/3264/1*fPIXlEejBgTSYpmO8Uy1iQ.png)

which contains the Helm charts needed to deploy WordPress:

![](https://cdn-images-1.medium.com/max/2610/1*8GkXcJYxJ_0KZFx1Dqa7Gg.jpeg)

as well as the YAML manifests covering cluster installation and upgrade:

![install.yaml](https://cdn-images-1.medium.com/max/2556/1*vRu53bVnxcKDr7oV5LNMug.png)

![upgrade.yaml](https://cdn-images-1.medium.com/max/2524/1*N3Wfo-gSNCIhdKExgSm5KA.png)

So let's start packaging a K8s cluster with this embedded WordPress solution using this command:

![](https://cdn-images-1.medium.com/max/3824/1*ACs5AX38O0GXJKvoSWsFbQ.png)

which relies on this YAML file describing the entire deployment process of the Kubernetes cluster with WordPress (I chose single-node mode for the cluster; in this case it is also possible to opt for a medium three-node cluster or a highly available six-node one):

![app.yaml](https://cdn-images-1.medium.com/max/4096/1*bYRmgYUWcmUcLnOmVNmZBQ.png)

Just before that, I stopped the local Docker engine:

![](https://cdn-images-1.medium.com/max/2000/1*JI_ZvjOcogx2NIMCE6JcDg.png)

> When Gravity creates a Kubernetes cluster from a cluster image, it installs a special system container, or "Master Container", on each host. This container is called "planet" and is visible to Gravity as a daemon.
> The master container holds all of the Kubernetes services, manages them automatically, and isolates them from any pre-existing daemons running on the cluster hosts.
[**Building Cluster Images** *This section covers how to build a Gravity Cluster Image. There are two use cases we'll cover here: Building an "empty"…*gravitational.com](https://gravitational.com/gravity/docs/pack/#custom-system-container)

> Planet is a Docker image maintained by Gravitational. For now, the planet image is based on **Debian 9**. The planet base image is published in a public Docker registry at **quay.io/gravitational/planet**, so it is possible to customize the planet environment for your clusters using Gravitational's image as a base.

I end up with an overall package of about **2.7 GB**. I will use this droplet to unpack the package locally and deploy my Kubernetes cluster:

![](https://cdn-images-1.medium.com/max/3400/1*Jp7QaCxJhaeFv3_OJGclRQ.png)

Once unpacked, I can start its local installation:

![](https://cdn-images-1.medium.com/max/4096/1*K_8ab_XRFcKfp1EdexlElg.png)

I get the endpoints needed to access WordPress and to manage the Kubernetes cluster:

![](https://cdn-images-1.medium.com/max/3368/1*4jIulJZ-bvyPxbBCf_LlOw.png)

WordPress, with its database, is reachable on TCP port 30080:

![](https://cdn-images-1.medium.com/max/4008/1*_qCobgfOS_GZSONTOBPvZA.png)

![](https://cdn-images-1.medium.com/max/2700/1*vdwvsYra6K2sEWoywFFv_g.jpeg)

![](https://cdn-images-1.medium.com/max/2730/1*noj1O6YUxUB-xPHUR4qghg.jpeg)

with its administration console:

![](https://cdn-images-1.medium.com/max/2724/1*-d5uAA50PVO4u4BkkH3PZw.jpeg)

![](https://cdn-images-1.medium.com/max/2702/1*83SFSwt5Cqdcnguajaw3Zw.jpeg)

and its default web page:

![](https://cdn-images-1.medium.com/max/2710/1*xI6edwk7QNP5cqX-MI0ivQ.jpeg)

Via the Teleport solution (an SSH/Kubernetes gateway with RBAC), I create a user with administration rights:

![](https://cdn-images-1.medium.com/max/2000/0*dqzahWFoKJ3vCkZK.png)
![](https://cdn-images-1.medium.com/max/4076/1*ZWzQAsQT8LuZcgg5W4KpRQ.png)

This user can then access the administration and monitoring console provided by Gravity…

![](https://cdn-images-1.medium.com/max/2710/1*AhIs6-R7fHYl8gH8tbpoIw.jpeg)

where I can see the droplet I used for this remote deployment:

![](https://cdn-images-1.medium.com/max/2680/1*gdsfGAhTJp3XCa73nE2YSg.jpeg)

the logs section:

![](https://cdn-images-1.medium.com/max/2688/1*Ft2ogjNPKV4s_hqrnBHjyQ.jpeg)

an SSH console:

![](https://cdn-images-1.medium.com/max/2718/1*gkAXXDuSmeLst4-5tQE3xg.jpeg)

and the local Kubernetes cluster:

![](https://cdn-images-1.medium.com/max/2688/1*wk1DlbSLYig-ugu8MSAMWw.jpeg)

not to mention the monitoring provided by the famous "Prometheus + Grafana" duo:

![](https://cdn-images-1.medium.com/max/2674/1*723MAOvhsqkY7A9vi0m5Qg.jpeg)

![](https://cdn-images-1.medium.com/max/2688/1*9k3waogpnr4rjkWTqLTh-w.jpeg)

I decide to add two new external nodes to this cluster via two new droplets:

![](https://cdn-images-1.medium.com/max/2292/1*KH7Ri_6Dqbfis6zAShumzQ.jpeg)

One will be dedicated to the WordPress front end, with the corresponding containers:

![](https://cdn-images-1.medium.com/max/2730/1*Pd18lF4WuxgUrgCF-3V8eQ.jpeg)

with the generation of these Gravity installation scripts:

![](https://cdn-images-1.medium.com/max/2716/1*0cygSeJAvdgDlsT7ZNIVnA.jpeg)

which I run in the droplet:

![](https://cdn-images-1.medium.com/max/4096/1*k4t2ILZyI0JubL1pq6DueQ.png)

The node appears and is active in the administration console:

![](https://cdn-images-1.medium.com/max/2688/1*Ybcm3R2eQOM_ct8_TCuxig.jpeg)

The second droplet will, logically, be dedicated to the database part, with the corresponding containers:

![](https://cdn-images-1.medium.com/max/2674/1*9-Osnn-2J8WhQj07cNTIuA.jpeg)

![](https://cdn-images-1.medium.com/max/2700/1*mmGimqiaEuV7mVXP1hzL1w.jpeg)

Launching the two scripts:
![](https://cdn-images-1.medium.com/max/4096/1*MgPZday-taJzIFogQiGk_Q.png)

The three cluster nodes are active, with WordPress embedded:

![](https://cdn-images-1.medium.com/max/2694/1*wv8TatEieGNlUlL7NpUZYQ.jpeg)

I confirm this via Teleport and the SSH console on the master node:

![](https://cdn-images-1.medium.com/max/2728/1*l4f0aZGmIrSzHfVmpiEwNA.jpeg)

There is also a pure web-deployment model via a "Wizard" mode. Let's test it, again with WordPress as the backend, by reusing the previously generated .tar archive. Here I start from three Ubuntu 20.04 LTS instances on Hetzner Cloud, which recently started offering instances with AMD EPYC processors, and from a droplet on DigitalOcean that holds the archive:

![](https://cdn-images-1.medium.com/max/2656/1*tkqPDaGw8iGt3uAIUspXoQ.jpeg)

![](https://cdn-images-1.medium.com/max/2702/1*akXtCu9d0v0Wf3pYsihORA.jpeg)

Launching the remote deployment of the Kubernetes cluster with the embedded WordPress solution via this Wizard mode from the Ubuntu droplet on DigitalOcean:

![](https://cdn-images-1.medium.com/max/2722/1*_oDK2HjwAVzal1Ij_2AWig.jpeg)

An endpoint with a web console becomes available for the rest of the operations:

![](https://cdn-images-1.medium.com/max/2728/1*so5nQizn9hutPrektXf5Xw.jpeg)

Since Gravitational recommends using an FQDN as the name of this future Kubernetes cluster, I pick one on a wildcard domain.
Then I choose a classic three-node architecture for the Kubernetes cluster (a general-purpose master node and two worker nodes, one dedicated as before to the WordPress front end and the other to the database part):

![](https://cdn-images-1.medium.com/max/2730/1*KAsHOSNBXzFE7FtNZW0zIg.jpeg)

And the installation scripts to run on these Hetzner Cloud instances are provided once again:

![](https://cdn-images-1.medium.com/max/2730/1*PxZbQwI0A-0CW0kPzB9PEw.jpeg)

Running them on the Ubuntu instances:

![](https://cdn-images-1.medium.com/max/2726/1*6pQz79__e4p1w0oVIe2S1Q.jpeg)

![](https://cdn-images-1.medium.com/max/2722/1*HT-BVJREj1xP8osXluhAFQ.jpeg)

![](https://cdn-images-1.medium.com/max/2720/1*qrEwE_OUBiiXf8VnyEq2Rg.jpeg)

IP addresses and hostnames appear in the web console as these scripts execute:

![](https://cdn-images-1.medium.com/max/2722/1*nEAahpbslusKXvEaXoWmKA.jpeg)

I then start the remote deployment of the Kubernetes cluster in web mode:

![](https://cdn-images-1.medium.com/max/2728/1*0UVf7nZ54hXyfZ7RIpc9Sw.jpeg)

![](https://cdn-images-1.medium.com/max/2724/1*eGZNAuYEtskIFXMPIkutEA.jpeg)

![](https://cdn-images-1.medium.com/max/2726/1*SvTWVSQ088tM0CiwAcYkhg.jpeg)

After a while, the installation completes…

![](https://cdn-images-1.medium.com/max/2726/1*dA_ON6Xya9X29Q32VNOw1A.jpeg)

And I reach the cluster dashboard provided by Gravity:

![](https://cdn-images-1.medium.com/max/2706/1*jHwGeR4ii2anWxQpe_BEug.jpeg)

![](https://cdn-images-1.medium.com/max/2698/1*s0cU7Wi4i_l4sB4gnHCkYQ.jpeg)

![](https://cdn-images-1.medium.com/max/2694/1*AyFUXoFd9Ds8W-VJl2qpCQ.jpeg)

The WordPress solution is deployed and active:

![](https://cdn-images-1.medium.com/max/2730/1*1SslIDxGef3OJtXI_JyL_Q.jpeg)

![](https://cdn-images-1.medium.com/max/2708/1*TYgTfWgGQ2yLM-_9OG-_rQ.jpeg)

As well as the monitoring part via Grafana + Prometheus:
![](https://cdn-images-1.medium.com/max/2694/1*pBUcHQRUoFrFRdgetiepXw.jpeg)

Via Teleport, I can control this cluster with the kubectl client:

![](https://cdn-images-1.medium.com/max/2726/1*9lLDjO1eQUGCmacsPX0LoA.jpeg)

WordPress is again reachable, by default in NodePort mode, on TCP port 30080 of every node in the Kubernetes cluster:

![](https://cdn-images-1.medium.com/max/2690/1*ccKmNZsCmjqrMTmuSnAZhg.jpeg)

![](https://cdn-images-1.medium.com/max/2712/1*_0ytP5ZZz57ZwO93tlcang.jpeg)

![](https://cdn-images-1.medium.com/max/2702/1*fod_cV5OuGl6vgsqpsABfA.jpeg)

with its default blog page…

![](https://cdn-images-1.medium.com/max/2704/1*hqSULnfm20wsG-7omViRBQ.jpeg)

In beta, I can test access to this page on port 80 using the new load-balancing service offered by Hetzner Cloud:

![](https://cdn-images-1.medium.com/max/2710/1*QjAReGkfptoSUzg28XNENw.jpeg)

![](https://cdn-images-1.medium.com/max/2700/1*Rp5VIn_iVaNSf4GhRrEWCw.jpeg)

Kubernetes clusters created with Gravity are now automatically configured to run OpenEBS. This makes it considerably easier to package databases and other stateful services into Gravity cluster images. Gravity includes the '*gravity status*' command, which does a good job of explaining what, if anything, is wrong with your K8s cluster. It also offers a "timeline view" that lets you see how a cluster's state has changed over time.
The commercial version of Gravity offers the option of acquiring a Gravity Hub to "pull or push" your Kubernetes cluster and its applications the way you would with containers…

![](https://cdn-images-1.medium.com/max/2560/0*K82bOfZ1jHbd9omw.png)

Very recently, it was the Teleport gateway's turn to get an improved user interface…

[**Teleport 4.3 Product Release Notes: A New UI & Approval Workflow Plugins** *This is a major Teleport release with a focus on new features, functionality, and bug fixes. It's a substantial release…*gravitational.com](https://gravitational.com/blog/4-point-3-release-notes/)

[**Teleport Demo Video - Modern SSH** *We recently launched Teleport 4.3 and received an overwhelming response from newer members of the community. They have…*gravitational.com](https://gravitational.com/blog/teleport-demo-video/)

There are still plenty of exciting things to discover on Gravity's roadmap!

![](https://cdn-images-1.medium.com/max/2000/0*FLFo2WoQBjgS6KzI.png)

[**Gravity Overview - Gravitational Gravity** *Gravity is an open source toolkit that provides true portability for cloud-native applications. It allows developers to…*gravitational.com](https://gravitational.com/gravity/docs/)

![](https://cdn-images-1.medium.com/max/2000/0*7SM755OMjE2-SkVR.jpg)

To be continued!
deep75
404,304
Building a Serverless (UK) Property Helper using Zoopla - Part 1: Not Serverless Yet
UPDATE Annoyingly, whilst continuing to work on this, I have gotten stuck due to the details listed h...
0
2020-07-19T22:19:57
https://dev.to/jcts3/building-a-serverless-uk-property-helper-using-zoopla-part-1-not-serverless-yet-9nd
javascript, serverless, node, zoopla
**UPDATE** *Annoyingly, whilst continuing to work on this, I have gotten stuck due to the details listed here: https://medium.com/@mariomenti/how-not-to-run-an-api-looking-at-you-zoopla-bda247e27d15. To cut a long story short, Zoopla no longer actively supports the API, which means that the API keys just randomly stop working with no warning. Back to the drawing board I suppose...*

# Introduction

For a little holiday project, I wanted to try and build something to help with a non-work thing I've found has been inefficient - property searching. My partner and I are hoping to buy a place soon, and we've found it frustrating trying to keep up with property pages. Plus, who doesn't like finding something that annoys them a little, then spending hours trying to get rid of that itch! This article is the first of a few (depending on how far I take it), and I'll add links to subsequent docs at the bottom. Things that will be covered later include integrating with the Airtable API for a good spreadsheet + images view, using Lambda and CloudFormation to make the work repeatable for others, then using CloudWatch Events and EventBridge to automate the process!

## Initial Research

My first thought was that a web scraper might be a good option, and the site I use most frequently when looking at properties is Rightmove. However, a little googling quickly led me to this section of Rightmove's T's & C's:

> You must not use or attempt to use any automated program (including, without limitation, any spider or other web crawler) to access our system or this Site. You must not use any scraping technology on the Site. Any such use or attempted use of an automated program shall be a misuse of our system and this Site. Obtaining access to any part of our system or this Site by means of any such automated programs is strictly unauthorised.
> *From [Rightmove's terms of use](https://www.rightmove.co.uk/termsUp.html)*

So that was the end of that train of thought...
Zoopla is another commonly-used property site in the UK, so this was the next option. They have a refreshing attitude towards third-party development and [an extensive API](https://developer.zoopla.co.uk/home) with what has so far seemed to be a good set of docs, so this seemed like a good place to start!

## Broad Plan

The first version of this won't be serverless, just to make sure I'm getting things right. I want to be able to query Zoopla using a script, via the Node REPL, and then be able to send the results to a suitable place for viewing. I've toyed with a few ideas, but I think to begin with Airtable (on the free version!) should be good enough. This does mean this won't be a totally serverless solution, but we can potentially replace it further down the line. From there, I'll use Lambda to do the communicating with Zoopla/Airtable, and then set it to run at a certain frequency using a CloudWatch cron event.

# Development

## Step 1: Zoopla Test

The first step here was to register for the [Zoopla Developer API](https://developer.zoopla.co.uk/member/register/) for the API key required to make queries. This was fairly simple, just needing some standard details and an indication of what you want to do with it. Then, to see what results I would get, I quickly tested the [Property Listings Endpoint](https://developer.zoopla.co.uk/docs/read/Property_listings) using Postman. Adding a bunch of fields which I felt would be useful (postcode, radius, listing_status, maximum_price, minimum_beds and of course the api_key) revealed a quite extensive result (see below).

## Step 2: Prototype Code

Though I want to get this into Lambda, I thought it best to start by running a JavaScript script locally to repeat what I've done with Postman, which also lets me separate out the params for the request.
In order to separate the logic of the params being used for a specific query from the querying itself, for now I have written the params I want to test with to a local `params.json` file that looks like this:

```json
{
  "api_key": "INSERTAPIKEYHERE",
  "radius": 1,
  "listing_status": "sale",
  "maximum_price": 1221000,
  "minimum_beds": 1,
  "postcode": "NW1 6XE",
  "page_size": 10
}
```

*(I have of course changed the demo params here to be those for Sherlock Holmes, who due to house price rises will now have to make do with a budget of 0.001221b£ for a property on Baker St.)*

These params can then be used, along with axios, to query the Zoopla endpoint like so:

```js
const axios = require("axios");
const fs = require("fs");

const propertyListingUrl =
  "https://api.zoopla.co.uk/api/v1/property_listings.js";

// Read the query params in from the local JSON file
const getParams = async () => {
  const params = await fs.promises.readFile("./params.json");
  return JSON.parse(params);
};

// Build the axios request config, pulling in the params
const buildConfig = async () => {
  return {
    params: await getParams(),
    url: propertyListingUrl,
    headers: {},
    method: "get"
  };
};

// Fire the request off to Zoopla
const axiosRequest = async config => {
  const result = await axios(config);
  return result;
};

const queryZooplaPropertyListing = async () => {
  const config = await buildConfig();
  const result = await axiosRequest(config);
  return result.data;
};

module.exports = {
  queryZoopla: queryZooplaPropertyListing
};
```

Here, in the main `queryZooplaPropertyListing` function, we are building our config, which involves reading in our params from `./params.json`, then using this built config with axios to request the property listings from the Zoopla URL. (Note I've appended `.js` to the URL in order to receive a JSON response!) This uses Node's async-await functionality, as both the file-reading and the Zoopla request itself are asynchronous processes.
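Just to make the request transparent: axios turns the `params` object from the config into a query string on the listings URL. Here's a small sketch of what the final URL ends up looking like, using Node's built-in `URLSearchParams` purely for illustration (axios does this serialisation internally, so this isn't part of the actual script):

```javascript
// The same params as in params.json above
const params = {
  api_key: "INSERTAPIKEYHERE",
  radius: 1,
  listing_status: "sale",
  maximum_price: 1221000,
  minimum_beds: 1,
  postcode: "NW1 6XE",
  page_size: 10
};

// Roughly what axios builds from { url, params }
const url =
  "https://api.zoopla.co.uk/api/v1/property_listings.js?" +
  new URLSearchParams(params).toString();

console.log(url);
```

This is handy for eyeballing a request in the Node REPL (or pasting into Postman) before wiring it up properly.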
After the promises have resolved, the exported `queryZoopla` function should then return an object which looks like this:

```json
{
  "country": "England",
  "result_count": 196,
  "longitude": -0.158541,
  "area_name": " NW1",
  "listing": [
    {
      "country_code": "gb",
      "num_floors": 0,
      "image_150_113_url": "https://lid.zoocdn.com/150/113/2cd80711fb52d57e85068b025920836abb906b89.jpg",
      "listing_status": "sale",
      "num_bedrooms": 2,
      "location_is_approximate": 0,
      "image_50_38_url": "https://lid.zoocdn.com/50/38/2cd80711fb52d57e85068b025920836abb906b89.jpg",
      "latitude": 51.525627,
      "furnished_state": null,
      "agent_address": "24 Charlotte Street, London",
      "category": "Residential",
      "property_type": "Flat",
      "longitude": -0.162988,
      "thumbnail_url": "https://lid.zoocdn.com/80/60/2cd80711fb52d57e85068b025920836abb906b89.jpg",
      "description": "360' virtual tour available. A very well presented second floor apartment set within a popular gated development located just moments away from the open spaces of Regent's Park, Baker Street & Marylebone stations and numerous shops, bars and restaurants. The property is quietly located overlooking the courtyard gardens comprising two bedrooms, two bathrooms, a reception room, seperate kitchen with integrated appliances and 2 x private balconys. The apartment is sold with an underground parking space. As a resident of the building benefits include concierge, access to a communal gym, a swimming pool and landscaped communal gardens. Alberts Court is set within the modern Palgrave Gardens development to the west of Regent's Park. The Leaseholders are currently in the process of purchasing the freehold.The building provides easy access to the West End, The City and various transport links around and out of London.",
      "post_town": "London",
      "details_url": "https://www.zoopla.co.uk/for-sale/details/55172443?utm_source=v1:8aaVEj3AGALC-xWzf7867y2rJwMs0-2Y&utm_medium=api",
      "short_description": "360' virtual tour available. 
A very well presented second floor apartment set within a popular gated development located just moments away from the open spaces of Regent's Park, Baker Street &amp; Marylebone stations and numerous shops, bars (truncated)", "outcode": "NW1", "image_645_430_url": "https://lid.zoocdn.com/645/430/2cd80711fb52d57e85068b025920836abb906b89.jpg", "county": "London", "price": "1200000", "listing_id": "55172443", "image_caption": "Picture No. 13", "image_80_60_url": "https://lid.zoocdn.com/80/60/2cd80711fb52d57e85068b025920836abb906b89.jpg", "status": "for_sale", "agent_name": "Hudsons Property", "num_recepts": 1, "country": "England", "first_published_date": "2020-07-09 08:44:51", "displayable_address": "Alberts Court, 2 Palgrave Gardens, Regent's Park, London NW1", "floor_plan": [ "https://lc.zoocdn.com/4cb0366075b14e99efe3a1a7b24a608f4c7a92f0.jpg" ], "street_name": "Alberts Court", "num_bathrooms": 2, "agent_logo": "https://st.zoocdn.com/zoopla_static_agent_logo_(62918).jpeg", "price_change": [ { "direction": "", "date": "2020-06-28 22:30:07", "percent": "0%", "price": 1200000 } ], "agent_phone": "020 3641 7089", "image_354_255_url": "https://lid.zoocdn.com/354/255/2cd80711fb52d57e85068b025920836abb906b89.jpg", "image_url": "https://lid.zoocdn.com/354/255/2cd80711fb52d57e85068b025920836abb906b89.jpg", "last_published_date": "2020-07-09 08:44:51" } ], "street": "", "radius": "0.5", "town": "", "latitude": 51.523659, "county": "London", "bounding_box": { "longitude_min": "-0.170158861045769", "latitude_min": "51.5164304665016", "longitude_max": "-0.146923138954231", "latitude_max": "51.5308875334984" }, "postcode": "NW1 6XE" } ``` And voila. A swanky 2 bed, 2 bath property near Baker St for Sherlock to relocate to! With a whole heap of extra data to boot. Evaluating this will be the first part of the next step, as we aim to get this data into Airtable, so stay tuned! You can see this code in full at https://github.com/jcts3/serverlessPropertyHelper/tree/workingLocalQuery
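Looking ahead to the Airtable step, most of this payload won't be needed, so one option is a small mapping function that flattens a listing down to just the fields worth keeping. This is only a sketch of my own — the field choice here is a guess at what's useful, not anything either API requires:

```javascript
// Flatten one Zoopla listing into a simple record (hypothetical field choice)
const toRecord = listing => ({
  id: listing.listing_id,
  price: Number(listing.price), // Zoopla returns price as a string
  beds: listing.num_bedrooms,
  baths: listing.num_bathrooms,
  address: listing.displayable_address,
  url: listing.details_url,
  image: listing.image_url
});

// Trying it on a trimmed-down version of the Baker St result above:
const sample = {
  listing_id: "55172443",
  price: "1200000",
  num_bedrooms: 2,
  num_bathrooms: 2,
  displayable_address: "Alberts Court, 2 Palgrave Gardens, Regent's Park, London NW1",
  details_url: "https://www.zoopla.co.uk/for-sale/details/55172443",
  image_url: "https://lid.zoocdn.com/354/255/2cd80711fb52d57e85068b025920836abb906b89.jpg"
};

console.log(toRecord(sample));
```

Mapping each item of the `listing` array through something like this should give rows that drop straight into an Airtable table.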
jcts3
404,307
Leetcode - Missing Number (with JavaScript)
Today I am going to show how to solve the Missing Number algorithm problem. Here is the problem: S...
0
2020-07-20T02:07:05
https://dev.to/urfan/leetcode-missing-number-with-javascript-3nd3
Today I am going to show how to solve the Missing Number algorithm problem. Here is the problem:

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/d7z611jiswzzvub3r34m.png)

Solution:

I solved this problem in two different ways. The first one is easier and simpler. I used a Set data structure to store the array's values. Then I iterate through the expected range to find which number is missing.

```javascript
var missingNumber = function(nums) {
    // A Set gives O(1) lookups for each candidate number
    let seen = new Set(nums);
    let expectedLength = nums.length + 1;
    for (let number = 0; number < expectedLength; number++) {
        if (!seen.has(number)) {
            return number;
        }
    }
    return -1;
};
```

For the second solution I use Gauss' formula, which helps to sum sequences of numbers. You can see the formula below:

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/j09q4he0mkwm9x791vd3.png)

Essentially, all we have to do is find the expected sum by using Gauss' formula and then subtract the actual sum of the array from it.

```javascript
var missingNumber = function(nums) {
    // Sum of 0..n via Gauss' formula, where n = nums.length
    let expectedSum = nums.length * (nums.length + 1) / 2;
    // Actual sum of the numbers we were given
    let actualSum = nums.reduce((a, b) => a + b, 0);
    let missingNumber = expectedSum - actualSum;
    return missingNumber;
};
```
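As a quick sanity check that the two approaches agree, here's a small self-contained version of both (the function names are mine, renamed so they can live side by side):

```javascript
// Approach 1: Set lookup over the expected range 0..n
const missingWithSet = nums => {
  const seen = new Set(nums);
  for (let n = 0; n <= nums.length; n++) {
    if (!seen.has(n)) return n;
  }
  return -1;
};

// Approach 2: expected sum (Gauss) minus actual sum
const missingWithGauss = nums => {
  const expectedSum = (nums.length * (nums.length + 1)) / 2;
  const actualSum = nums.reduce((a, b) => a + b, 0);
  return expectedSum - actualSum;
};

console.log(missingWithSet([3, 0, 1]));   // → 2
console.log(missingWithGauss([3, 0, 1])); // → 2
console.log(missingWithGauss([9, 6, 4, 2, 3, 5, 7, 0, 1])); // → 8
```

Both run in O(n) time, but the Gauss version uses O(1) extra space, while the Set version needs O(n).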
urfan