id | title | description | collection_id | published_timestamp | canonical_url | tag_list | body_markdown | user_username |
|---|---|---|---|---|---|---|---|---|
610,681 | What's in a name? | There is one very important problem in software engineering: naming accurately. How many times have... | 0 | 2021-05-27T12:31:19 | https://dev.to/shivsan/what-s-in-a-name-24p9 | architecture, software, design | There is one very important problem in software engineering: naming accurately.
How many times have you stumbled upon a GitHub repository, a class, or a variable and wondered what it is used for? Would a more precise and accurate name have eliminated that ambiguity?
Naming things is as old as humanity itself. It has always been an art to name anything, be it humans, inanimate objects, or abstract ideas. The process of naming varies based on its purpose.
For humans, we name ourselves after positive emotions or the gods, even if we do not possess the qualities our names indicate. This is fine for naming people, as people are generally complex and a name isn't supposed to define what or who they are.
Naming in software serves an entirely different purpose: to indicate clearly what the owner of the name (a class, variable, or component) is supposed to represent; the complete opposite of what human names are used for. Stereotyping software components has big advantages in the engineering industry, as no one wants to maintain software that is as complex as a human being. | shivsan |
638,591 | Point-Free Style (in Javascript) | All the cool-kids are talking about point-free style. They brag about how clean and declarative their... | 0 | 2021-03-18T18:23:35 | https://dev.to/justin_m_morgan/point-free-style-in-javascript-43o9 | javascript, pointfree, functional | All the cool-kids are talking about `point-free style`. They brag about how `clean` and `declarative` their code is and look down at lowly `imperative` code. You glean that it has something to do with `functional programming` and clever use of `functions as first-class values`, but what does it all mean? You don't want to be the last one picked for the coder kick-ball team, do you? So let's dive in and see what it's all about.
In an earlier entry ([A Deeper Dive into Function Arity](https://dev.to/justinmmorgan/a-deeper-dive-into-function-arity-with-a-focus-on-javascript-ae)), I alluded to `data-last signatures` and a `point-free style`. Although there were occasionally examples, I feel it would be of value to go into greater detail about what these terms mean and what advantages they afford us. I will not rely too greatly on the contents of that article.
As an introductory definition, `point-free style` is passing `function references` as arguments to other functions. A function can be passed as an argument in two ways. Firstly, an anonymous function expression (or declaration) can be provided inline:
```js
// Traditional anonymous function expression
function (arg1, arg2) { ... }
// Newer (ES2015) arrow function expression
(value) => { ... }
// Example
doSomeThingThatResolvesToPromise
.then((valueFromPromiseResolution) => {...})
.catch((errorFromPromiseRejection) => {...})
```
While this works, it isn't `point-free` style: a function expression has been declared inline to the function which will consume it. Instead, we can declare our function separately, assign it a name, and provide it `by reference` to another function:
```js
function somePromiseValueResolutionHandler(value) { ... }
function somePromiseValueErrorHandler(error) { ... }
// Or, using function expressions:
// const somePromiseValueResolutionHandler = value => {...}
// const somePromiseValueErrorHandler = error => {...}
doSomeThingThatResolvesToPromise
.then(somePromiseValueResolutionHandler)
.catch(somePromiseValueErrorHandler)
```
With these examples, you're only seeing the bare minimum requirement of `point-free style`. A function is being passed `by reference` as an argument to a function where it expects a callback. The referenced function's signature matches the function signature expected by the callback, and thereby allows us to pass the function reference directly. This allows our function chains to have a lot of noise removed, as functions are not defined inline and the arguments from one function are passed implicitly to the referenced function. Consider:
```js
someAsynchronousAction(arg1, arg2, (error, successValue) => {...})
// versus
function thenDoTheThing (error, successValue) { ... }
someAsynchronousAction(arg1, arg2, thenDoTheThing)
```
At this point, you may be thinking "yeah, that looks a little nicer, but is it really worth the effort?" Broadly speaking, this style of code flourishes when you embrace:
1. knowledge and patterns of function arity, and
2. utility functions.
## Function Arity Patterns
I have written [elsewhere](https://dev.to/justinmmorgan/a-deeper-dive-into-function-arity-with-a-focus-on-javascript-ae) more substantively on the topic of `function arity`. For the purposes of this discussion, it is sufficient to know that the term `arity` refers to the number of parameters a function signature contains. Functions can be said to have a strict `arity` when they have a fixed number of parameters (often given a latin-prefixed name such as `unary` and `binary`) or `variadic` when they can receive a variable number of arguments (such as `console.log`, which can receive any number of arguments and will log each argument separated by a space).
In Javascript, all functions technically behave as `variadic` functions. Although scoped variables can capture argument values in the function signature, any number of arguments are collected in the [`arguments` array-like object](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions/arguments) (or captured under another name using the `rest` syntax) without any additional steps taken. (Arrow functions are the one exception: they do not have their own `arguments` object.)
```js
function variadicFunction1() {
console.log("===Arguments Object===");
Array.from(arguments).forEach((arg) => console.log(arg));
return null
}
function variadicFunction2(a, b) {
console.log("===Declared Parameters===");
console.log(a);
console.log(b);
console.log("===Arguments Object===");
Array.from(arguments).forEach((arg) => console.log(arg));
return null
}
variadicFunction1("a", "b", "c")
// ===Arguments Object===
// a
// b
// c
// null
variadicFunction2("a", "b", "c")
// ===Declared Parameters===
// a
// b
// ===Arguments Object===
// a
// b
// c
// null
variadicFunction2("a")
// ===Declared Parameters===
// a
// undefined
// ===Arguments Object===
// a
// null
```
> The reason the functions return `null` is to distinguish the final return of `undefined` referencing the `b` argument in `variadicFunction2` from the otherwise default `undefined` return value of the function (which is a `void` function, i.e. returning `undefined` as Javascript does by default when no value is explicitly returned).
Related to this point, and essential for the topic at hand, is that in Javascript all function references are technically `variadic` (i.e. accepting any number of arguments without erroring), though their behaviour remains constrained by however the function signature is defined. That is, we can pass functions `by reference` as arguments, without writing out the execution/assignment of arguments, like so:
```js
function add(a, b) { return a + b }
function subtract(a, b) { return a - b }
function multiply(a, b) { return a * b }
function divide(a, b) { return a / b }
function operation(operator) {
// Take all but the first argument
let implicitArguments = Array.from(arguments).slice(1)
// Same thing using rest operator
// let [operator, ...implicitArguments] = [...arguments]
// spread the array arguments into the function execution
return operator(...implicitArguments)
}
operation(add, 10, 20)
// operation executes add(10, 20)
// 30
operation(multiply, 10, 20)
// operation executes multiply(10, 20)
// 200
operation(multiply, 10, 20, 40, 50, 20, 50)
// operation executes multiply(10, 20, 40, 50, 20, 50)
// but the multiply function ignores all
// but the first two arguments
// 200
```
This behaviour does pose a challenge, as function arity is not strictly enforced: you can do unusual things and your code will continue to run without erroring. Many developers exploit this characteristic, but doing so requires mentally retaining more implicit knowledge of the system than if the function arity were explicitly stated and enforced.
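A classic illustration of this hazard (a well-known Javascript gotcha, not an example from the article) is passing `parseInt` by reference to `map`:

```javascript
// map supplies (value, index, array) to its callback, and parseInt's
// second parameter is a radix: an accidental arity mismatch.
const buggy = ["10", "10", "10"].map(parseInt);
// [10, NaN, 2] (parsed with radix 0, then 1, then 2)

// A `unary` wrapper pins the arity down to a single argument:
const unary = fn => arg => fn(arg);
const fixed = ["10", "10", "10"].map(unary(parseInt));
// [10, 10, 10]
```

Utility libraries such as Ramda and Lodash ship a `unary` helper for exactly this reason.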
An example where this behaviour is exploited is in the `Express` framework middleware/callback function, which can have multiple signatures. [See Express documentation for `app.use`](https://expressjs.com/en/5x/api.html#app.use)
```js
// `Express` callback signatures
(request, response) => {...}
(request, response, next) => {...}
(error, request, response, next) => {...}
// From the Express documentation
// Error-handling middleware
// Error-handling middleware always takes four arguments. You
// must provide four arguments to identify it as an error-
// handling middleware function. Even if you don’t need to use
// the next object, you must specify it to maintain the
// signature. Otherwise, the next object will be interpreted
// as regular middleware and will fail to handle errors. For
// details about error-handling middleware, see: Error handling.
// Define error-handling middleware functions in the same way
// as other middleware functions, except with four arguments
// instead of three, specifically with the signature (err, req, res, next):
```
Employing this pattern, we can see that we can write our middleware/callback function outside of the site where it will be consumed, so long as we match the arity/function signature properly. Refactoring the example from the [`Express` documentation](https://expressjs.com/en/5x/api.html#app.use):
```js
app.use(function (req, res, next) {
console.log('Time: %d', Date.now())
next()
})
// ...can be re-written as
function logTime(req, res, next) {
console.log('Time: %d', Date.now())
next()
}
// ..then hidden away in a supporting file and imported
// --or hoisted from the bottom of the file--
// and passed by reference at the call-site
app.use(logTime)
```
In currently popular libraries and frameworks such as Express, we already consider the impact of `function arity` implicitly and become familiar with the resulting patterns. `Point-free style` goes further: it requires designing with `function arity` as a central concern.
### Data-Last Functions
A pattern that is central to `point-free style` is that of `data-last` function signatures. This pattern emerges from the practice of `currying` a function. A `curried function` is a function that always takes and applies one argument at a time. Instead of thinking of a function as taking multiple arguments and then producing a single output, we must think of our function as a series of steps before finally arriving at a "final" value.
For example, consider that we are talking about a function which concatenates two strings:
```js
function concat(string1, string2) {
return string1 + string2
}
```
The desired behaviour of this function is to take two arguments (both strings) and return a string. This is a functional unit and it may be difficult to conceive of why you would ever need to pause in the middle, but bear with me. To curry this function, we need to allow it to receive each argument one at a time, returning a new function at each step.
```js
function concat(string1) {
return function (string2) {
return string1 + string2
}
}
// or using a cleaner function expression syntax
const concat = string1 => string2 => string1 + string2
// Executing this function to "completion" now looks like:
concat("string1")("string2")
```
> It is typical practice to define Javascript functions in a non-curried form and to consume functions from external libraries which are provided in a non-curried form. If you wish to use currying in your code, there are `curry` utility functions in most of the popular functional programming utility libraries, such as [Ramda](https://ramdajs.com/docs) and [Lodash/fp](https://github.com/lodash/lodash/wiki/FP-Guide). These utilities simply wrap your existing (non-curried) function so that it can receive subsequent arguments individually or in groups until all of the required arguments have been supplied. This offers flexibility: you can treat the function as truly `curried`, or just provide all the necessary arguments at once as if it were not curried.
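As a sketch of what such a `curry` utility does under the hood (a minimal toy version, far less robust than the Ramda or Lodash/fp implementations):

```javascript
// Collect arguments until the wrapped function's declared arity (fn.length)
// is satisfied, then invoke it; otherwise return a function awaiting the rest.
const curry = fn =>
  function collect(...args) {
    return args.length >= fn.length
      ? fn(...args)
      : (...more) => collect(...args, ...more);
  };

const concat = curry((string1, string2) => string1 + string2);
concat("string1")("string2"); // "string1string2"
concat("string1", "string2"); // "string1string2" (grouped call also works)
```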
Imagine for a moment that you stuck with the original `concat` function. You are asked to write a function which takes a list of string values and prefixes each with a timestamp.
```js
// ...without currying
function prefixListWithTimestamp(listOfValues) {
return [...listOfValues].map(value => concat(`${Date.now()}: `, value))
}
// ...with currying
const prefixListWithTimestamp = map(concat(timestamp()))
```
Okay, what just happened? I did cheat (a little). We included a `map` function (rather than using the method on the array prototype), probably imported from a utility library, but we'll write it out below. It behaves in exactly the same way as the prototype method, but it is a curried function which obeys the `data-last` signature.
```js
const map = mappingFunction => array => array.map(value => mappingFunction(value))
// Equivalent to
const map = mappingFunction => array => array.map(mappingFunction)
// Or some iterative implementation, the details of which are unimportant to our main logic
```
Additionally, we created a small utility around our timestamp value to hide the implementation details.
What is important is that `map` is a curried function which receives first a mapping function (a function to be applied to each value in an array). Providing the mapping function returns a new function which anticipates an array as its sole argument. So our example follows these steps:
```js
const prefixStringWithTimestamp = value => concat(`${Date.now()}: `)(value)
// We can pare this down to...
const prefixStringWithTimestamp = concat(`${Date.now()}: `) // a function which expects a string
const mapperOfPrefixes = array => map(prefixStringWithTimestamp)(array)
// We can pare this down to...
const mapperOfPrefixes = map(prefixStringWithTimestamp) // a function which expects an array of strings
// prefixStringWithTimestamp is functionally equivalent to concat(`${Date.now()}: `)
map(concat(`${Date.now()}: `))
// Perhaps our timestamp implementation can be a utility.
// We make timestamp a nullary function, `timestamp()`
const timestamp = () => `${Date.now()}: `
map(concat(timestamp())) // A function which expects an array of strings.
```
This pattern encourages you to design your functions in such a way that the parameters are arranged from least specific to most specific (said another way, from general to concrete). The `data-last` name implies that your data is the most concrete detail which will be given to the function. This allows for greater function re-use (via function composition) and is necessary to accomplish a `point-free style`.
> Not sure what the proper ordering of parameters is? Don't worry! You will get a feel for it with time. But if you find a function just isn't working, `flip` it.
> `const flip = fn => a => b => fn(b)(a)`
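To see why the `data-last` ordering pays off, here is a sketch built around a hypothetical two-line `pipe` utility (Ramda and Lodash/fp provide real, more robust versions):

```javascript
// Left-to-right function composition: each function feeds the next.
const pipe = (...fns) => input => fns.reduce((acc, fn) => fn(acc), input);

// Curried, data-last helpers:
const map = fn => array => array.map(fn);
const filter = predicate => array => array.filter(predicate);

// All configuration happens up front; the data arrives last, exactly once.
const shoutLongWords = pipe(
  filter(word => word.length > 3),
  map(word => word.toUpperCase())
);
shoutLongWords(["hi", "point", "free"]); // ["POINT", "FREE"]
```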
## Utility Functions
Embracing utility functions is critical to realizing the value of `point-free style`. By doing so, you will notice that a lot of the code you write is a variant of repetitive patterns that are easily generalizable, and that this repeated boilerplate adds a lot of noise to your code.
For example, it is becoming increasingly popular to "destructure" objects and arrays. In many ways, this is an improvement over prior access patterns and itself removes a lot of noise from your logic. If we take that notion a step further, the same can be accomplished by "picking" properties from an object or "taking" from an array.
```js
const obj1 = { a: 1, b: 2, c: 3, d: 4 }
// Destructuring
const {a, c, d} = obj1
// versus "Pick"
// `pick` (from Ramda): Returns a partial copy of an object
// containing only the keys specified.
// If the key does not exist, the property is ignored.
R.pick(["a", "d"], obj1); //=> {a: 1, d: 4}
R.pick(["a", "e", "f"], obj1); //=> {a: 1}
```
That little definition already exposes a behaviour that isn't matched by the destructuring approach but is critical: `pick` accounts (in a particular way) for when the property doesn't exist. Say instead you wanted to change the behaviour such that a default value is supplied if the property doesn't exist on the original object. Suddenly the destructuring approach gets a lot messier. With utility functions (especially pre-written libraries), we can get accustomed to using different utilities that already provide the behaviour we desire while keeping this edge-case code out of our main logic.
```js
const obj1 = { a: 1, b: 2, c: 3, d: 4 }
const {
a: a = "Nope, no 'a'",
c: c = "No 'c' either",
e: e = "I'm such a disappointing object"
} = obj1
// versus
// `pipe` (from Ramda)
// Performs left-to-right function composition.
// The first function may have any arity; the remaining functions must be unary.
// In some libraries this function is named sequence.
// Note: The result of pipe is not automatically curried.
const f = R.pipe(Math.pow, R.negate, R.inc);
f(3, 4); // -(3^4) + 1
// `merge` (from Ramda):
// Create a new object with the own properties
// of the first object
// merged with the own properties of the second object.
// If a key exists in both objects,
// the value from the second object will be used.
R.merge({ name: "fred", age: 10 }, { age: 40 });
//=> { 'name': 'fred', 'age': 40 }
// Our own derivative utility, `pickWithDefaults`
const pickWithDefaults = (keys, defaults) => R.pipe(R.pick(keys), R.merge(defaults));
// Notice: Our data source is omitted, which if included would be written as
const pickWithDefaults = (keys, defaults) => (object) => R.pipe(R.pick(keys), R.merge(defaults))(object);
const defaultValues = { a: "default a", c: "default c", e: "default e" }
pickWithDefaults(["a", "c", "e"], defaultValues)(obj1); //=> { a: 1, c: 3, e: "default e" }
```
Now imagine that the destructuring approach taken above is employed throughout the codebase, but you don't realize that it contains a bug and this bug emerges only in a subset of the use cases. It would be pretty challenging to do a textual search of the project and modify/correct them. Now instead consider whether our object property access had been done using a function like `pick`/`pickAll`. We now have two courses of corrective action.
The first is to "correct" the behaviour in our implementation by implementing our own version, and then update the imports throughout our project to use the fixed version of the function. This is easy because we are simply searching for a reference to the function label (`R.pick`, or `pick` in the import section of the project files).
The second, which perhaps we should have considered doing at the outset, is to create a [facade](https://en.wikipedia.org/wiki/Facade_pattern) for our library. In our utility function, we create delegate functions for the Ramda utilities we use and then we use our delegates throughout the project. Our `pick` function from our `utils` file delegates to `R.pick`. If we decide to move to a different library in the future, "correct" its behaviour, or hand-roll our own versions of these functions, we do so from a single location and our changes propagate to all use-cases.
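A minimal facade sketch (the file name and hand-rolled implementation are assumptions for illustration): the rest of the project imports `pick` from a local `utils` module rather than from Ramda directly, so the delegate below could be swapped for `R.pick` or a patched version without touching any call sites. Note that this sketch is written curried and data-last, in keeping with the style discussed above.

```javascript
// utils.js: the single place the rest of the codebase gets `pick` from.
// Export it with `module.exports = { pick }` (or `export { pick }`).
const pick = keys => obj =>
  keys.reduce(
    (acc, key) => (key in obj ? { ...acc, [key]: obj[key] } : acc),
    {}
  );

pick(["a", "d"])({ a: 1, b: 2, d: 4 }); // { a: 1, d: 4 }
pick(["a", "e"])({ a: 1 });             // { a: 1 } (missing keys ignored)
```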
As an added bonus, extracting utility work out of your main logic allows you to extract that logic right out of the file and into utility files, drastically cleaning up main-logic files. In the example just provided, Ramda provides `pipe` and `merge`, meaning they already exist outside of this hypothetical file. Our derivative `pickWithDefaults` can exist in our own utility file, meaning that only the `defaultValues` and final `pickWithDefaults` function execution line are actually in the final code--everything else can be imported. At the very least, utility functions can be moved to a portion of the file that seems appropriate. With function declarations (using the `function` keyword), the declaration can exist at the bottom of the file and be [hoisted](https://developer.mozilla.org/en-US/docs/Glossary/Hoisting) to the place of execution. `Function expressions` (using the arrow syntax), sadly, cannot be `hoisted` and need to be declared above the point of execution.
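A quick sketch of that hoisting difference (a generic example, not taken from the article):

```javascript
// A `function` declaration can sit below its call site because it is hoisted:
const viaDeclaration = addOne(41); // 42

function addOne(n) {
  return n + 1;
}

// An arrow-function expression cannot be used before its definition:
// addTwo(40); // ReferenceError: Cannot access 'addTwo' before initialization
const addTwo = n => n + 2;
```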
> Without allowing it to overwhelm you, I encourage you to look at the documentation for the function utility library [RamdaJS](https://ramdajs.com/docs/) or [Lodash](https://lodash.com/docs/4.17.15). All that I wish to highlight is the abundance of utility functions that these libraries have identified as worthy of inclusion in your arsenal.
> Unfortunately, Lodash has two variants: standard and "functional programming"/fp. The fp variant is necessary for what we are discussing here, but the main documentation cannot be viewed in the fp-style, `data-last` form. The biggest difference between the two versions is the order of parameters/arguments: the "data being acted on" comes last. The fp guide does discuss other differences.
> Another library that is thematically linked to this topic is [RxJS](https://rxjs-dev.firebaseapp.com), which is particularly useful for converting "events" (often real-world events) into data. This allows for the normalizing of event-handling in the manner just described. For example, "double-click" can be defined as a utility which is consumed throughout a project in a unified manner. `Operators` are functions which transform this data and eventually these transformed events are either consumed or ignored.
## Conclusion
I genuinely believe that `point-free style` is helpful in making my projects' main logic cleaner and more condensed. But this benefit comes at a cost, or at least with some cautions.
If working with others who do not use `point-free style`, it can be jarring if done in excess. In several of the examples above, we created utility functions which omitted the data source (in order to avoid having to create a superfluous wrapping function).
```js
const pickWithDefaults = (keys, defaults) => R.pipe(R.pick(keys), R.merge(defaults));
// Notice: Our data source is omitted,
// which if included would be written as
const pickWithDefaults = (keys, defaults) => (object) => R.pipe(R.pick(keys), R.merge(defaults))(object);
```
For your colleagues' benefit, consider including the data source parameter for documentation's sake. You can still partially apply the function without supplying the data immediately, so the style retains its desired impact.
Similarly, it is possible to chain a tremendous number of utilities together in a single block. There are even utility functions in libraries which replace the typical imperative operators, such as: `if`, `ifElse`, `tryCatch`, `forEach`, etc. Chaining together too many of these will result in your code looking pretty similar to a block of imperative code. Instead, try to think of functional blocks and define them such that they expose a simple interface. That way, chaining the pieces together documents your intent and reduces the chance that you get lost in your control flow.
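For instance, here is a hand-rolled `ifElse` sketch (Ramda ships a real `R.ifElse`; this toy version just shows the shape of such operator-replacing utilities):

```javascript
// Branching as a function: configure the predicate and both branches,
// and get back a unary function that is ready for composition.
const ifElse = (predicate, onTrue, onFalse) => value =>
  predicate(value) ? onTrue(value) : onFalse(value);

const describeNumber = ifElse(
  n => n >= 0,
  n => `${n} is non-negative`,
  n => `${n} is negative`
);
describeNumber(5);  // "5 is non-negative"
describeNumber(-3); // "-3 is negative"
```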
While it can seem overwhelming at first, a utility library such as `Ramda` can be approached incrementally to great effect. Additionally, there are Typescript typings available for `Ramda`, though the README does admit certain limitations they have encountered in fully typing the library.
Lastly, as you split up your logic into utilities, you are inherently creating abstractions. There is a popular adage within the coding community: AHA (avoid hasty abstractions). To an extent, this risk can be reduced by standing on the shoulders of existing library authors. The abstractions present in libraries such as [RamdaJS](https://ramdajs.com/docs/) are not hasty, but rather longstanding ideas battle-tested in the fields of functional programming and category theory. But in organizing your own code, consider restraining yourself from writing abstractions that don't come intuitively. Instead, write some code and then reflect on whether you see opportunities to clean it up. In time you will accumulate wisdom that will guide your future `point-free` efforts.
| justin_m_morgan |
610,808 | DEV VS Hashnode VS Medium: Where Should You Start Your Tech Blog | This article was originally posted on my personal blog Let me start by saying that I recommend you s... | 0 | 2021-02-18T12:05:14 | https://dev.to/shahednasser/dev-vs-hashnode-vs-medium-where-should-you-start-your-tech-blog-91i | tips, advice, writing, technology | _This article was originally posted on [my personal blog](https://blog.shahednasser.com/dev-vs-hashnode-vs-medium-which-platform-should-you-use-for-your-blog/)_
Let me start by saying that I recommend you start a tech blog. By sharing your knowledge and writing it down, you learn more. It also encourages you to try new things so that you can write about them as well.
But if you're here then you are probably already considering it. So where should you start your tech blog? Should you use [DEV](https://dev.to/), [Hashnode](https://hashnode.com/), or [Medium](https://medium.com/)? Or should you consider starting a personal blog of your own, without using any of these platforms?
---
## [DEV](https://dev.to/)
The charm of DEV, in my own experience, is the community. Everyone on DEV is respectful and encouraging. Whether you're a junior or a senior, you'll find your voice and place there. You can share experiences, tutorials, advice, or show off what you are learning. It's basically a "no judgment" space, which is great if you're still unsure about how to write or what to write about.
At DEV you'll also have a great source of traffic. Out of the three platforms, DEV drove the most traffic to my blog posts, and I think the reason behind that is simple yet very important: everyone is "featured" in a sense. On other platforms in general (not just the three mentioned here), veterans or big writers/bloggers tend to have their articles seen more, and it would take you time to build an audience before gaining readers. With DEV, however, people can easily see your articles on their feed whether you have 0 followers or hundreds. The reason I find that important is that it gives everyone a chance to be heard.
I think DEV only has two drawbacks. The first is that you can't have your own blog on their platform, with your own name and customization, or with a subdomain or custom domain. The second is that their editor is fully in [Markdown](https://www.markdownguide.org/). So, if you're not familiar with it you might find it a hassle.
---
## [Hashnode](https://hashnode.com/)
Hashnode is another great platform for blogging. Having a blog on Hashnode can be very similar to hosting your own personal blog. You can change the name, use a subdomain or your custom domain, and customize the blog with many options: you can change the colors, enable dark mode, enable newsletter opt-in for your blog visitors, and even integrate with Google Analytics, Hotjar, Facebook Pixel, and much more. You can do almost anything you would do on a personal blog with Hashnode.
Hashnode's editor is very easy to use. It also relies on Markdown, however, you also have a toolbar with options that you can use, so you don't really have to know Markdown to use it. The community at Hashnode is similar to that at DEV which is also a plus.
However, from my own experience, it's not easy to get much traffic on Hashnode. I'm not sure if it's because your articles aren't shown often in people's feeds, but you won't generate the same kind of traffic you would at DEV. On the other hand, writing on Hashnode can get you featured on [daily.dev](https://daily.dev/), which can get more people to see your article.
---
## [Medium](https://medium.com/)
Medium is probably the most famous of the three, and not just in the technical world. Medium is a big platform that offers an easy-to-use editor, your own blog with a subdomain or a custom domain, customization of your blog from colors to the look and feel in general, and newsletter opt-in for your visitors.
Getting traffic on Medium, however, is tough. Medium prioritizes articles from big publications on its platform or from authors enrolled in its [Partner Program](https://medium.com/earn). If you're neither, your articles probably won't be seen much by anyone. Another way people can see your articles is if you submit them to a big publication, meaning your article appears as part of another blog; but this requires a special invite from the publication itself. I personally have been submitting mine to [gitconnected](https://levelup.gitconnected.com/)'s Level Up Coding publication, as I was invited as an author on it before, and that's how I mainly get traffic on Medium.
Another thing is that on Medium you don't really have much interaction with others in the community. Generally, readers on Medium just read your article, and if they like it enough they'll give it "claps", but not much interaction happens (at least from my own experience).
---
## Should You Start A Personal Blog?
Starting a personal blog definitely gives you more freedom. However, depending on the kind of CMS platform or blogging experience you are going for, there are setups and costs that you need to consider. Especially if you're still a beginner, you might find it hard to manage your own hosting.
Personal blogs have a big perk: you have a better chance at monetizing them. But even that can take some time, as it is not easy.
If you are interested in starting your own blog and you're looking for ways to do that cost-free, here are some suggested reads you can look through:
1. [The Things You Can Do For Free: The Ultimate Guide](https://blog.shahednasser.com/the-things-you-can-do-for-free-the-ultimate-guide/)
2. [Deploy a Free Website With Jekyll and GitHub Pages](https://blog.shahednasser.com/deploy-a-website-with-jekyll-and-github-pages)
---
## Conclusion
All 3 platforms are great choices. It mostly depends on what kind of blogging experience you are going for.
1. If you're looking for a place where you can write freely and also interact with others in the community, start at [DEV](https://dev.to/).
2. If you're looking for a place where you can also do that, with less traffic but more freedom, go for [Hashnode](https://hashnode.com/).
3. If you're looking for a platform where you can write easily and also have some freedom in your blog, go for [Medium](https://medium.com/). | shahednasser |
610,849 | How to Install Drupal 9 on Ubuntu 20.04 | In this article, we’ll explain how to install Drupal 9 on Ubuntu 20.04. The tutorial will guide you t... | 0 | 2021-02-18T13:46:59 | https://dev.to/hostnextra/how-to-install-drupal-9-on-ubuntu-20-04-4hpj | In this article, we’ll explain how to install Drupal 9 on Ubuntu 20.04. The tutorial will guide you to install and configure Nginx as web server, PHP, MariaDB as a database.
Drupal is a free and open-source content management system. With robust content management tools, sophisticated APIs for multichannel publishing, and a track record of continuous innovation—Drupal is ready to stand as the hub of your digital presence.
Learn more: <a href="https://www.hostnextra.com/kb/how-to-install-drupal-9-on-ubuntu-20-04/">Install Drupal 9 on Ubuntu 20.04</a>
| hostnextra | |
610,877 | Announcing Strapi v3.5 with the Sentry plugin, SSO authentication, and more | This month, we’re excited to announce the release of Strapi 3.5 with the Sentry plugin, the Single-Si... | 0 | 2021-02-18T14:43:11 | https://strapi.io/blog/v3-5-sentry-plugin-sso-authentication | This month, we’re excited to announce the release of Strapi 3.5 with the Sentry plugin, the Single-Sign-On Authentication feature, and a bunch of improvements. More details in the post!
## Announcing a new Sentry plugin
Everyone loves plugins: they help you connect your favorite tools, extend your software capabilities while keeping its core light and clean. In short, plugins help us build apps in a better, simpler, and more efficient way!
We at Strapi are not an exception. This year, we're planning to release more plugins and we are starting to bring the plan to life with a highly requested **Sentry plugin** 🎉
[Sentry](https://sentry.io/) allows developers to ensure the quality of their applications. It logs errors and gives you all the context you need to identify the source of the problem. Here's a sneak peek of what a Strapi error may look like in Sentry.

This plugin lets you benefit from all these Sentry features with your Strapi app. Once you install it, the Sentry plugin will:
- Initialize a Sentry instance when your Strapi app starts
- Send errors encountered in your application's end API to Sentry
- Attach useful metadata to Sentry events, to help you with debugging
- Expose a global Sentry service
Interested? Check out the instructions on how to use the Strapi Sentry plugin in [the article written by Rémi](https://strapi.io/blog/introducing-sentry-plugin).
## Join us for the live Sentry plugin demo
In this [online meetup](https://app.livestorm.co/p/b325983d-dc01-4dc7-88da-fece280aa552), Rémi de Juvigny, Software Engineer at Strapi, and Rohit Kataria, Customer Solutions Engineer at Sentry, will walk you through the best practices for an end to end JAMstack Monitoring.
Join us on February 25th, at 18:00 CET!
<iframe width="100%" height="360" frameborder="0" src="https://app.livestorm.co/p/b325983d-dc01-4dc7-88da-fece280aa552/form" title="JAMstack Monitoring with Strapi and Sentry | Strapi"></iframe>
## SSO authentication in v3.5
We’re striving to make Strapi a robust and secure tool for all kinds of projects and users. Version 3.5 introduces an SSO authentication feature for the Strapi admin panel, which lets enterprises connect Strapi to their authentication providers and protocols such as [Active Directory](https://azure.microsoft.com/en-gb/services/active-directory/), [Okta](https://www.okta.com/), [Auth0](https://auth0.com/), [Keycloak](https://www.keycloak.org/), [OAuth](https://oauth.net/) etc. Employees will be able to use the credentials of a third-party app to log in to the admin panel. Please note that version 3.5 includes only SSO authentication and not authorization.

The SSO Authentication feature for the admin panel is available in a [Gold Enterprise Edition plan](https://strapi.io/pricing). There are several reasons why we made this decision:
- Like any other company, Strapi has to be profitable and we need to find a sustainable business model to continue investing in the product, the ecosystem, and the community. It's also a very common practice in the ecosystem to charge for an enterprise-oriented feature such as SSO.
- SSO authentication feature does not block anyone from using Strapi for its main purpose - quickly and efficiently building new projects and managing content with ease. We’re including support and some extra features in paid plans to be able to provide Strapi core functionalities for free to millions of users.
- Based on analysis & user research, SSO authentication is a specific requirement that is relevant to most big enterprises.
We appreciate every commit and every bit of input the community has made and continues to make for Strapi, and we respect all feedback. Our mission is to empower millions of people to share & manage content in tomorrow’s world, and thanks to you we’re getting one step closer to it every day.
## What else in the latest Strapi releases?
We’re also releasing many smaller improvements and fixes contributed by awesome community members. Other highlights from v3.5 and previous releases include:
- [Add custom logic to react to events ](https://strapi.io/documentation/developer-docs/latest/concepts/configurations.html#available-options)
This improvement lets you set up custom logic to react to particular authentication events. For example, if someone connects to the Strapi application using either the Strapi login or a 3rd-party service login via SSO, you can set up a reaction to this event - such as sending a notification or a message to a third-party service. Events are also emitted if an error occurs during the connection process or if a user is auto-registered through SSO.
Here are some examples:
```
events: {
// Emitted when an admin user has successfully logged in to the administration panel
onConnectionSuccess(e) {
const { user, provider } = e;
console.log(`Connection successful for user n°${user.id} using ${provider}`);
},
// Emitted when an admin user failed to log in to the administration panel
onConnectionError(e) {
const { error, provider } = e;
console.log(`An error has occurred as someone tried to log in using ${provider}`);
console.log(error);
},
// Emitted when a user has been registered thanks to the auto-register feature added by SSO.
onSSOAutoRegistration(e) {
const { user, provider } = e;
console.log(
`A new user (${user.id}) has been automatically registered using ${provider}`
);
},
},
```
- Add the support for Auth0 [#8362](https://github.com/strapi/strapi/pull/8362) and Reddit [#8537](https://github.com/strapi/strapi/pull/8537) to the Users and Permissions plugin
- Handle duplicate error in the database [#8367](https://github.com/strapi/strapi/pull/8367)
- Configurable default _limit parameter [#8377](https://github.com/strapi/strapi/pull/8377)
- Read responsive breakpoints from config instead of being hardcoded [#9002](https://github.com/strapi/strapi/pull/9002)
Have a look at more fixes and enhancements [here](https://github.com/strapi/strapi/releases), perhaps the bug that has been bugging you is fixed now 🤭
## Introducing a new changelog
We have a detailed [changelog on Github](https://github.com/strapi/strapi/releases) which includes all the new features, bug fixes, and improvements. However, while it’s quite detailed and thorough, it doesn’t provide a quick overview of what’s new in Strapi.
That’s why we’re introducing a [new changelog](https://strapi.io/changelog) on the Strapi website, which lets you navigate through the history of major Strapi updates in one scroll. You can also access the materials which explain each new feature and release in more detail.
Now you know how to quickly find out what’s new in Strapi 🤓

## New to Strapi? Give it a try
You can check out our [hosted demo](https://strapi.io/demo) with sample data to see what Strapi is like. We also have a [template library](https://strapi.io/starters) where you can find starters for a blog, corporate website, portfolio, or e-commerce website using different frontend technologies.
You can also create a new Strapi project on your computer by running the quickstart command:
```
npx create-strapi-app my-project --quickstart
```
Follow the [migration guides](https://strapi.io/documentation/developer-docs/latest/migration-guide/#instructions) to update your Strapi version and access more features.
## Get involved
Strapi is an open-source product and everyone can contribute to it. The Strapi community helps us move forward and we are making Strapi better with our users in mind.
Here’s how you can help us to improve the product:
- Contribute to the project on [Github](https://github.com/strapi/strapi/blob/master/CONTRIBUTING.md)
- Share what features you’d love to have in our [public roadmap](https://portal.productboard.com/strapi/1-roadmap/tabs/2-under-consideration)
- [Create a Strapi template](https://strapi.io/documentation/v3.x/concepts/templates.html#creating-a-template) for a specific use case
- Share the plugins and providers you built for Strapi in this [repository](https://github.com/strapi/awesome-strapi)
- Showcase the projects you built with Strapi in [Awesome Strapi](https://strapi.io/showcases)
## Thanks to all contributors
Strapi is a product built together with more than 600 community members. We would love to say thank you to people who contributed to the v3.5 release as well to the previous v3.4 enhancements and bug fixes:
[@rlvk-vk](https://github.com/rlvk-vk), [@gh640](https://github.com/gh640), [@Heziode](https://github.com/Heziode), [@Zeranoe](https://github.com/Zeranoe), [@jozefcipa](https://github.com/jozefcipa), [@pimsomeday](https://github.com/pimsomeday), [@darron1217](https://github.com/darron1217), [@Igloczek](https://github.com/Igloczek), [@croatian91](https://github.com/croatian91), [@pr0gr8mm3r](https://github.com/pr0gr8mm3r), [@NgyAnthony](https://github.com/NgyAnthony), [@leroydev](https://github.com/leroydev), [@shtelzerartem](https://github.com/shtelzerartem), [@iicdii](https://github.com/iicdii), [@shiningnova57](https://github.com/shiningnova57), [@jorrit](https://github.com/jorrit), [@jonmol](https://github.com/jonmol), [@csandven](https://github.com/csandven), [@gh0stsh0t](https://github.com/gh0stsh0t), [@centogram](https://github.com/centogram), [@dappiu](https://github.com/dappiu), [@ertrzyiks](https://github.com/ertrzyiks), [@cwray-tech](https://github.com/cwray-tech), [@meck93](https://github.com/meck93), [@taylor-work](https://github.com/taylor-work), [@bglidwell](https://github.com/bglidwell), [@florianmarkusse](https://github.com/florianmarkusse), [@MattieBelt](https://github.com/MattieBelt), [@blefevre](https://github.com/blefevre), [@PaulWeinsberg](https://github.com/PaulWeinsberg), [@chris-makaio](https://github.com/chris-makaio), [@tunasakar](https://github.com/tunasakar), [ThewBear](https://github.com/ThewBear), [@yusufisl](https://github.com/yusufisl), [@acalvino4](https://github.com/acalvino4), [@avdeyev](https://github.com/avdeyev) and [@ngjoni](https://github.com/ngjoni).
Join the community! Come chat with us on [Forum](https://forum.strapi.io/) or [Slack](https://slack.strapi.io/), or jump in on [Github](https://github.com/strapi/strapi) directly. We're always happy to meet new members of the Strapi family ❤️
| strapijs | |
610,887 | ReactJS Pinterest Clone | Browse our Teachable courses. We'll build our mock-up in steps. Create a... | 0 | 2021-02-19T16:10:43 | https://dev.to/anobjectisa/reactjs-pinterest-clone-509e | react, ux, webdev, javascript | #### <center>Browse our [Teachable](http://anobject.teachable.com "Teachable") courses.</center>
------------

### We'll build our mock-up in steps.
1. Create a mock-up of a single **Pinterest Pin**.
2. Create a mock-up of the Pinterest "**Add a Pin Screen**".
3. Merge the above; Use the "**Add Pin Screen**" to generate a single **Pinterest Pin**.
4. Transfer the logic of our merge in **step 3** into our **Pinterest Layout**.
------------
### "Pinterest Pin" Mock-Up
#### Each Pin is made up of 3 sections.
1. **Pin title** - the user won't see this; you can use this on the back-end as a way to store the Pin in a database
2. **Pin modal** - this is the overlay for each card; this won't be functional for this tutorial, just cosmetic

3. **Pin image** - the actual image the user uploads

#### Each pin has a "header" and "footer".

```
<div>
<div className="card">
<div className="pin_title"></div>
<div className="pin_modal"></div>
<div className="pin_image"></div>
</div>
</div>
```
------------
### "Add a Pin" Modal Mock-Up
#### The Modal is made up of 9 sections.
1. **Modal overlay** - the transparent black background
2. **Add Pin container** - the main interface of the screen
3. **Left side** - the left half of our interface
4. **Right side** - the right half of our interface
5. **Left side Header** - a simple button leading to options
6. **Left side Body** - the user's image shows up here
7. **Left side Footer** - an option to upload an image from the web
8. **Right side Header** - select the Pin size (small, med, large)
9. **Right side Body** - information about the Pin is entered here

```
<div className="add_pin_modal">
<div className="add_pin_container">
<div className="side" id="left_side">
<div className="section1">
</div>
<div className="section2">
</div>
<div className="section3">
</div>
</div>
<div className="side" id="right_side">
<div className="section1">
</div>
<div className="section2">
</div>
</div>
</div>
</div>
```
------------
### Merge our Pin and "Add Pin Screen".
#### When we click save, we create a Pin.
```
function save_pin(pinDetails, add_pin) {
const users_data = {
...pinDetails,
author: 'Jack',
board: 'default',
title: document.querySelector('#pin_title').value,
description: document.querySelector('#pin_description').value,
destination: document.querySelector('#pin_destination').value,
pin_size: document.querySelector('#pin_size').value,
}
add_pin(users_data);
}
```

------------
### The Final Pinterest Board
#### We'll take our Pinterest Layout from a different tutorial.
In a previous tutorial, we created the Pinterest Layout using CSS Grids.
We'll import that code and use that layout as our **pin_container** for this project.
You can find that tutorial [here](https://youtu.be/baBvJDmziGQ "here").
This merge is very simple. There is no new HTML here.
The major change comes in our CSS and JSX.
In our CSS, we create three new class definitions; the small, medium, and large Pin options.
```
.card_small {
grid-row-end: span 26;
}
.card_medium {
grid-row-end: span 33;
}
.card_large {
grid-row-end: span 45;
}
```
Then based on which **pin size** the user chooses, we add that class to our **Pin** component.
```
<div className={`card card_${props.pin_size}`}>
```

------------
### There is much more nuance to this project.
You can get the source files [here](https://github.com/an-object-is-a/reactjs-pinterest-clone "here") and you can follow the video tutorial down below.
------------
#### <center>If you want a more in-depth guide, check out my full video tutorial on YouTube, **[An Object Is A](https://www.youtube.com/c/anobjectisa "An Object Is A")**.</center>
### <center>ReactJS Pinterest Clone</center>
{% youtube XvmQbEziy4Y %} | anobjectisa |
610,935 | product card | A post by TechAkhil | 0 | 2021-02-18T15:49:01 | https://dev.to/techakhilc47/product-card-njj | codepen | {% codepen https://codepen.io/techakhil-me/pen/yLaNGmx %} | techakhilc47 |
611,160 | Tiny eink Dashboard | Lately I'm been pinning for new and exciting things to build. When I came across PiHut's case for the... | 0 | 2021-02-18T20:30:02 | https://dev.to/gmemstr/tiny-eink-dashboard-29a4 | python, eink, raspberrypi, showdev | Lately I've been pining for new and exciting things to build. When I came across [PiHut's case](https://thepihut.com/products/pi-zero-case-for-waveshare-2-13-eink-display) for the Raspberry Pi Zero and Waveshare Epaper Display, I quickly snapped it up, knowing I could put it to use - at the very least I would have my own eink display, which is a technology I am very keen on.
_sidenote: I will be using eink and epaper interchangeably in this post. "E Ink" is a trademarked term but is typically used in a more generic form of "eink"._
Having never done much, if anything, with the Raspberry Pi's hardware and "hat" capabilities, the first step was figuring out how the two interacted. Thankfully, Waveshare has a [repository containing examples](https://github.com/waveshare/e-Paper) in both C and Python, the latter being the language I opted for due to my familiarity. Setup is relatively straightforward, requiring one or two libraries fetched via `apt-get` followed by a `git clone` of the repository.
Hold on though! The Pi Zero I have on hand does not have WiFi! How does one connect it to the internet? It's relatively straightforward, truth be told, when using Windows or macOS (Linux is another story, but we'll get to that). The Pi Zero (I am unsure if this extends to the full fat Pi) has a "gadget mode", wherein it can appear as a network adapter and allow your computer to connect over this. From there you can share your actual internet connection with it. Adafruit has a nifty little guide [here](https://learn.adafruit.com/turning-your-raspberry-pi-zero-into-a-usb-gadget/ethernet-gadget).
With the internet connection sorted and the repository cloned and tested, it was time to create a minimal viable product. This was more or less taking the example code and extracting the valuable bits, giving it a smaller footprint and easier to navigate layout. This simpler version [has been tagged](https://github.com/gmemstr/epaper-dashboard/tree/1.0) for your viewing pleasure.
One interesting note about this particular display is that the Python library draws on the display upside down. Why, I am not particularly sure, but it's a relatively easy thing to compensate for until I can figure out the root cause. This is why in the code the image is flipped at the end!
This ran fine, but a few days later I made the leap to Linux full time (again) and left my Windows install behind. I was under the impression that sharing the connection would be just as straightforward, but it turned out [it was a bit more complex than that](https://bbs.archlinux.org/viewtopic.php?id=216968) (it's entirely possible I was trying to rush through it and missed a critical piece) and I quickly gave up after many failed attempts to configure the necessary rules. Without this internet connection, the Pi was unable to sync the date and time, which was a small issue for a device showing the time. Since my desktop and Pi did, however, have a solid network connection, I figured I may as well leverage it to the best of my ability, and set out to rework the script in a more traditional webserver/client configuration, with my desktop generating the image. This has the added advantage of lessening the work the Pi itself has to do, hopefully lengthening the lifespan of it or the eink display.
There's nothing particular to note about [the new codebase](https://github.com/gmemstr/epaper-dashboard/tree/2b2259eae4093036e1692513bba8b9dc22e303c2), beyond the minimal dependencies and routing system that I borrowed from a previous project, which looks for properly named files and functions in the `lib/` directory before returning the 404. It was especially important to keep dependencies on the Pi-side minimal since the lack of internet connection would require me to manually fetch them and sftp them up. This lack of connection also constrained me to Python 2, since the dependencies I had previously installed were done under that version. To combat the largest issue regarding the time and date of the Pi, I ran a small snippet over SSH that synced the time with my desktop's with a small offset to account for lag, and hope to integrate this sync directly into the script. While this single snippet could have solved the lack of internet problem without needing to rewrite the application, I think it was a worthwhile time investment for future expansion.
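As an illustration, that kind of convention-based lookup could be sketched like this (all names here are hypothetical; the real routing lives in the linked repository):

```python
import importlib


def resolve(endpoint):
    """Look in the lib/ package for a module named after the endpoint,
    exposing a handler function of the same name; otherwise 404."""
    try:
        module = importlib.import_module("lib." + endpoint)
        handler = getattr(module, endpoint)
    except (ImportError, AttributeError):
        return 404, "Not Found"
    return 200, handler()
```

Anything not present in `lib/` falls through to the 404 branch without needing an explicit route table.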
_update: since this post, I have set up an ntp server local to my desktop to sync the Pi with, so a lot of the problems regarding time being out of sync are solved for good! This sort of invalidates part of the need for the client <-> server, but oh well._
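For reference, on the Pi side such a sync can be as small as pointing the NTP daemon at the desktop. A hypothetical config fragment (the address and file location depend on your daemon and on how the USB network is configured):

```
# /etc/ntp.conf on the Pi; 192.168.137.1 stands in for the desktop's address
server 192.168.137.1 iburst
```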
Overall, I'm very happy with the end product, especially with the new client/server structure. It will allow me to do some more complex things without worrying about bogging down the smaller CPU. The clock can lag behind by a few seconds, but that's not a deal breaker in my case. Porting the client to another Waveshare eink display should be as easy as switching out the display library and adjusting the parameters in the script, and I hope to obtain a larger display in the future to have more real estate for information.
_What would you do with such a device connected to your computer? What information would you find useful? I'd love to gain some more inspiration for this!_ | gmemstr |
611,160 | Data Type and Type Casting | DailyJS There are 5 different data types that contain values. 1) String 2) Number... | 0 | 2021-02-18T17:59:40 | https://dev.to/mubasshir00/data-type-and-type-casting-4emb | javascript | #DailyJS
There are 5 different data types that contain values.
1) String
2) Number
3) Boolean
4) Object
5) Function
There are 5 types of objects.
1) Object
2) Date
3) Array
4) String
5) Boolean
There are two types that don't contain values.
1) Null
2) Undefined
### Type Casting
There are two types of typecasting.
1) implicit conversion.
2) Explicit conversion.
### implicit conversion :
In certain situations, JavaScript automatically converts one data type to another.
```javascript
var res ;
res = '3' + 5 //output '35'
res = '3' + true //output : '3true'
res = '3' + null //output : '3null'
res = null + '3' //output : 'null3'
res = null + null //output : 0
res = '10' - '3' // output : 7
res = '10' - 3 // output : 7
res = '10' * 3 // output : 30
res = '10' / 2 // output : 5
res = 'hello' - 5 // output : NaN
res = '7' - true // output : 6
res = '7' + true // output : '7true'
```
### Explicit Conversion
In explicit conversion, one data type is converted to another manually.
```javascript
// string to number
res = Number('324') // 324
res = parseInt('20.01') // 20
res = parseFloat('20.01') // 20.01
//convert to string
res = String(324) // output : '324'
``` | mubasshir00 |
611,185 | Web development roadmap for 2021 and beyond | If you found value in this thread you will most likely enjoy my tweets too so make sure you follow me... | 0 | 2021-02-18T18:56:33 | https://dev.to/pascavld/web-development-roadmap-for-2021-and-beyond-1a3 | webdev | If you found value in this thread, you will most likely enjoy my tweets too, so make sure you follow me on [Twitter](https://twitter.com/VladPasca5) for more information about web development and how to improve as a developer. This article was first posted on my [Blog](https://vladpasca.hashnode.dev)
## 1. Learn how the internet works
[Zero to mastery playlist](http://youtube.com/playlist?list=PL2HX_yT71umBgUzdKDfbuXnysZWqiGX4L)
[MDN Introduction](http://developer.mozilla.org/en-US/docs/Learn/Common_questions/How_does_the_Internet_work)
## 2. Learn HTML
[HTML Crash Course For Absolute Beginners](http://youtube.com/watch?v=UB1O30fR-EE&t=5s)
[Basic HTML and HTML5](https://www.freecodecamp.org/learn/responsive-web-design/)
## 3. Learn CSS
[CSS Tutorial - Zero to Hero (Complete Course)](http://youtube.com/watch?v=1Rs2ND1ryYc)
[Basic CSS](https://www.freecodecamp.org/learn/responsive-web-design/)
## 4. Learn a CSS framework (optional but good to know)
[Bootstrap](http://youtube.com/watch?v=5GcQtLDGXy8)
[Tailwind](https://youtu.be/UBOj6rqRUME)
## 5. Learn JavaScript
[Learn JavaScript - Full Course for Beginners](http://youtube.com/watch?v=PkZNo7MFNFg)
[Basic and advanced JavaScript](https://www.freecodecamp.org/learn/javascript-algorithms-and-data-structures/)
## 6. Learn Git and GitHub
[Git and GitHub for Beginners Crash Course](http://youtu.be/RGOj5yH7evk)
[An intro to Git and GitHub for beginners](https://t.co/h6ceyV3f1G?amp=1)
## 7. Learn NPM
[NPM Crash Course](http://youtu.be/jHDhaSSKmB0)
[An Absolute Beginner's Guide to Using NPM](https://t.co/DbALy70lNZ?amp=1)
## 8. Learn a Front-End framework
[React](http://youtu.be/DLX62G4lc44)
[Vue](http://youtu.be/4deVCNJq3qc)
[Angular](https://youtu.be/2OHbjep_WjQ)
## 9. Learn Node.js
[Full Tutorial for beginners](http://youtu.be/RLtyhwFtXQA)
[Introduction to Node.js](https://nodejs.dev/)
## 10. Learn Database
[SQL](http://youtu.be/HXV3zeQKqGY)
[MySQL](https://youtu.be/7S_tz1z_5bA)
## 11. Build projects
This is the most important thing you need to do before going to the next step.
Learned HTML? Build a project.
Learned a front-end framework? Build a project.
## 12. Keep learning
Being a developer is a lifelong learning journey.
It might look hard at first to learn all these things but take one step at a time.
You can learn all of these in 6 to 12 months if you are effective and consistent.
## End
I hope you found this useful, and if you did, please let me know. If you have any questions, feel free to DM me on [Twitter](https://twitter.com/VladPasca5) | pascavld |
611,337 | Angular Components Unit Test – Common Use Cases | In this article, I will provide a collection of some important statements used to unit test angular... | 0 | 2021-02-18T22:16:41 | https://dev.to/this-is-angular/angular-components-unit-test-common-use-cases-4j3p | angular, testing, webdev, typescript | In this article, I will provide a collection of some important statements used to unit test angular components. You can use any of the following examples directly in your project, or you may prefer to extract some of them into separate helper functions and reuse them all over your project. This article covers testing the following scenarios:
* [Text interpolation](#text-interpolation)
* [User Input Value Change](#user-input-value-change)
* [Clicking HTML Element](#clicking-html-element)
* [Access Child (nested) Component](#access-child-component)
* [Content projection](#content-projection)
* [Component inputs and outputs](#component-input-output)
* [Component Dependencies](#component-dependencies)
For this purpose, let's assume we have the following simple example component, generated using the Angular CLI (<code>ng g c ExampleComponent</code>):
{% gist https://gist.github.com/NicholaAlkhouri/4cc7cf0536c642ecffb4343df4a324fe.js %}
<p>A very basic component: it consists of one input <code>header</code>, one property <code>name</code> displayed in the template using direct interpolation, a form with one input field and a submit button, and one output <code>nameChange</code> which emits an event when the user submits the form.</p>
<p>When you create the above component using the Angular CLI, you automatically get a unit test file in the same directory as your component. All the following sections in this article are based on this file, especially the fixture object <code>let fixture: ComponentFixture&lt;ExampleComponent&gt;;</code>. If you don't use the Angular CLI to generate your component file, you may copy the above file into your project and replace <code>ExampleComponent</code> with your component class name.</p>
<hr />
## Text interpolation: <a name="text-interpolation"></a>
<p>Here we make sure that our component will bind the correct values in the template. Don't forget to call <code>fixture.detectChanges()</code>, which forces the TestBed to perform data binding and update the view.</p>
{% gist https://gist.github.com/NicholaAlkhouri/f43e571eb9a72c8dacddaf4b818528c3.js %}
<hr />
## User Input Value Change: <a name="user-input-value-change"></a>
<p>Here we test that the user interaction with the text input is reflected correctly in our component class. Notice the use of <code>fakeAsync</code> and <code>tick</code> here, because the forms binding involves some asynchronous execution.</p>
{% gist https://gist.github.com/NicholaAlkhouri/07f7016ff8c51471102a6438bfcbc771.js %}
<hr />
## Clicking HTML Element: <a name="clicking-html-element"></a>
{% gist https://gist.github.com/NicholaAlkhouri/5849021558d8a6592a2dc8be07c48d59.js %}
<hr />
## Access Child (nested) Component: <a name="access-child-component"></a>
<p>Let's assume that our component contains a nested child component:</p>
<pre class="wp-block-code"><code>&lt;app-nested-component&gt;&lt;/app-nested-component&gt;</code></pre>
<p>You can access the child component and interact with it as follows:</p>
{% gist https://gist.github.com/NicholaAlkhouri/7453c8bd1ce49401c76bed849cba9c82.js %}
<hr/>
## Content projection: <a name="content-projection"></a>
<p>Testing content projection is not straightforward: we need to add a wrapper component around the component being tested and use this wrapper component to pass content through projection. Let's add the following projected content to the view of our component:</p>
<pre class="wp-block-code"><code>&lt;div class="projected-content"&gt;
  &lt;ng-content select="[description]"&gt;&lt;/ng-content&gt;
&lt;/div&gt;</code></pre>
<p>And we can test it by adding a wrapper <code>ExampleWrapperComponent</code> as follows:</p>
{% gist https://gist.github.com/NicholaAlkhouri/8f88362e80307c09fa964846ce6c6b1a.js %}
<hr/>
## Component inputs and outputs: <a name="component-input-output"></a>
<p>You can test a component input similar to any normal component property. The outputs, on the other hand, can be spied on to check if they emit the correct values.</p>
{% gist https://gist.github.com/NicholaAlkhouri/b1fc3a3aa4eb63df987bbd8621d07819.js %}
<hr/>
## Component Dependencies: <a name="component-dependencies"></a>
<p>Components usually have dependencies (services) that help the component to function correctly, and the component needs to interact with these dependencies. When testing a component, we need to provide our tests with those dependencies in order to run correctly. Here we need to distinguish between two ways of providing a dependency:</p>
<h3>Dependencies provided in the root injector:</h3>
<p>When the component has a dependency on a service that is provided in the root injector, you need to provide this service in the TestBed configuration so that it is available to the component while running the tests:</p>
{% gist https://gist.github.com/NicholaAlkhouri/c301ba395db3db3cda5f049f22f80c46.js %}
<p>Notice that we are using a mock service here since it is easier and safer to interact with. After that, you will be able to access that service in your tests by calling the <code>inject</code> method of the <code>TestBed</code>.</p>
<h3>Dependencies provided in the component injector:</h3>
<p>When you have a dependency provided in your component, you cannot access it using the TestBed, since it will be available only on the component level of the injection tree. In this case, we need to override the component providers to provide this dependency, and then you can use the component injector to access it.</p>
{% gist https://gist.github.com/NicholaAlkhouri/13ddf1a8c9d96c3b7de9cc524bf9197b.js %}
<hr/>
Do you have or need a specific testing scenario that is not covered by this article? Feel free to add it in the comments section and we will add a use case for you :) | nicholaalkhouri |
611,343 | 8 Data Structures every Python programmer needs to know | When solving real-world coding problems, employers and recruiters are looking for both runtime and re... | 0 | 2021-02-18T22:39:55 | https://www.educative.io/blog/8-python-data-structures | python, career, tutorial, algorithms | When solving real-world coding problems, employers and recruiters are looking for both runtime and resource efficiency.
Knowing which data structure best fits the current solution will increase program performance and reduce the time required to make it. For this reason, most top companies require a strong understanding of data structures and heavily test them in their coding interview.
**Here’s what we’ll cover today:**
* [What are data structures?](#what)
* [Arrays in Python](#arrays)
* [Queues in Python](#queues)
* [Stacks in Python](#stacks)
* [Linked lists in Python](#linked-list)
* [Circular linked lists in Python](#circular-ll)
* [Trees in Python](#tree)
* [Graphs in Python](#graph)
* [Hash tables in Python](#hashtable)
* [What to learn next](#next)
<br>
<h4><b> Ace the Python interview the first time</b></h4>
Brush up on your Python programming skills like data structures, recursion, and concurrency with 200+ hands-on practice problems.
<b>[Ace the Python Coding Interview](https://www.educative.io/path/ace-python-coding-interview)</b>
<br>
<a name="what"></a>
## What are data structures?
Data structures are code structures for storing and organizing data that make it easier to modify, navigate, and access information. Data structures determine how data is collected, the functionality we can implement, and the relationships between data.
Data structures are used in almost all areas of computer science and programming, from operating systems, to [front-end development](https://www.educative.io/blog/web-development-in-python), to machine learning.
> **Data structures help to:**
>
> - Manage and utilize large datasets
> - Quickly search for particular data from a [database](https://www.educative.io/blog/database-design-tutorial)
> - Build clear hierarchical or relational connections between data points
> - Simplify and speed up data processing
Data structures are **vital building blocks** for efficient, real-world problem-solving. Data structures are proven and optimized tools that give you an easy frame to organize your programs. After all, there's no need for you to reinvent the wheel (or structure) every time you need it.
Each data structure has a task or situation it is **most suited to solve**. Python has four built-in data structures: lists, dictionaries, tuples, and sets. These built-in data structures come with default methods and behind-the-scenes optimizations that make them easy to use.
Most data structures in Python have modified forms of these or use the built-in structures as their backbone.
- **List**: Array-like structures that let you save an ordered, mutable sequence of objects to a variable.
- **Tuple**: Tuples are immutable lists, meaning the elements cannot be changed. They're declared with parentheses instead of square brackets.
- **Set**: Sets are unordered collections, meaning that elements are unindexed and have no set sequence. They're declared with curly braces.
- **Dictionary (dict)**: Similar to hashmap or hash tables in other languages, a dictionary is a collection of key/value pairs. You initialize an empty dictionary with empty curly braces and fill it with colon-separated keys and values. All keys are unique, immutable objects.
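The four built-ins above are easiest to compare side by side. A quick sketch (the variable names are my own, chosen for illustration):

```python
fruits = ["apple", "banana", "apple"]   # list: ordered and mutable
point = (3, 4)                          # tuple: ordered but immutable
unique = {"apple", "banana", "apple"}   # set: unordered, duplicates collapse
ages = {"Ada": 36, "Alan": 41}          # dict: unique keys mapped to values

fruits.append("cherry")   # lists resize in place
print(len(unique))        # 2 -- the duplicate "apple" was dropped
print(ages["Ada"])        # 36 -- constant-time lookup by key
```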
Now, let's see how we can use these structures to create all the advanced structures interviewers are looking for.
<br>
<a name="arrays"></a>
## Arrays (Lists) in Python
Python does not have a built-in [array](https://www.educative.io/blog/data-structures-arrays-javascript-tutorial) type, but you can use lists for all of the same tasks. An array is a collection of values of the **same type saved under the same name**.
Each value in the array is called an "element" and has an index that represents its position. You can access specific elements by calling the array name with the desired element's index. You can also get the length of an array using the `len()` method.

Unlike programming languages like Java that have static arrays after declaration, Python's arrays **automatically scale up or down** when elements are added/subtracted.
For example, we could use the `append()` method to add an additional element on the end of an existing array instead of declaring a new array.
This makes Python arrays particularly easy to use and adaptable on the fly.
```py
cars = ["Toyota", "Tesla", "Hyundai"]
print(len(cars))
cars.append("Honda")
cars.pop(1)
for x in cars:
    print(x)
```
**Advantages:**
- Simple to create and use data sequences
- Automatically scale to meet changing size requirements
- Used to create more complex data structures
**Disadvantages:**
- Not optimized for scientific data (unlike NumPy's array)
- Insertion and deletion are only efficient at the rightmost end of the list
**Applications:**
- Shared storage of related values or objects, i.e. `myDogs`
- Data collections you'll loop through
- Collections of data structures, such as a list of tuples
<br>
### Common arrays interview questions in Python
- Remove even integers from a list
- Merge two sorted lists
- Find the minimum value in a list
- Maximum sum sublist
- Print products of all elements
<br>
<a name="queues"></a>
## Queues in Python
[Queues](https://www.educative.io/blog/data-structures-stack-queue-java-tutorial) are a linear data structure that store data in a "first-in, first-out" (FIFO) order. Unlike arrays, you cannot access elements by index and instead can **only pull the next oldest element**. This makes it great for order-sensitive tasks like online order processing or voicemail storage.
You can think of a queue as a line at the grocery store; the cashier does not choose who to check out next but rather processes the person who has stood in line the longest.

We could use a Python list with `append()` and `pop()` methods to implement a queue. However, this is inefficient, because lists must shift every element by one index whenever you insert or remove at the beginning.
Instead, it's best practice to use the `deque` class from Python's `collections` module. Deques are optimized for append and pop operations at both ends. The deque implementation also allows you to create double-ended queues, which can access both ends of the queue through the `append()`/`pop()` methods on the right and the `appendleft()`/`popleft()` methods on the left.
```py
from collections import deque
# Initializing a queue
q = deque()
# Adding elements to a queue
q.append('a')
q.append('b')
q.append('c')
print("Initial queue")
print(q)
# Removing elements from a queue
print("\nElements dequeued from the queue")
print(q.popleft())
print(q.popleft())
print(q.popleft())
print("\nQueue after removing elements")
print(q)
# Uncommenting q.popleft()
# will raise an IndexError
# as queue is now empty
```
**Advantages:**
- Automatically orders data chronologically
- Scales to meet size requirements
- Time-efficient with `deque` class
**Disadvantages:**
- Can only access data on the ends
**Applications:**
- Operations on a shared resource like a printer or [CPU core](https://www.educative.io/blog/beginners-guide-to-computers-and-programming)
- Serve as temporary storage for batch systems
- Provides an easy default order for tasks of equal importance
<Br>
### Common queue interview questions in Python
- Reverse first k elements of a queue
- Implement a queue using a linked list
- Implement a stack using a queue
<br>
<a name="stacks"></a>
## Stacks in Python
[Stacks](https://www.educative.io/blog/data-structures-stack-queue-java-tutorial) are a sequential data structure that acts as the Last-in, First-out (LIFO) version of queues. The last element inserted in a stack is considered at the **top of the stack** and is the only accessible element. To access a middle element, you must first remove enough elements to make the desired element at the top of the stack.
Many developers imagine stacks as a stack of dinner plates; you can add or remove plates to the top of the stack but must move the whole stack to place one at the bottom.

Adding elements is known as a **push,** and removing elements is known as a **pop**. You can implement stacks in Python using the built-in list structure. With list implementation, push operations use the `append()` method, and pop operations use `pop()`.
```py
stack = []
# append() function to push
# element in the stack
stack.append('a')
stack.append('b')
stack.append('c')
print('Initial stack')
print(stack)
# pop() function to pop
# element from stack in
# LIFO order
print('\nElements popped from stack:')
print(stack.pop())
print(stack.pop())
print(stack.pop())
print('\nStack after elements are popped:')
print(stack)
# uncommenting print(stack.pop())
# will cause an IndexError
# as the stack is now empty
```
**Advantages:**
- Offers LIFO data management that arrays and queues don't provide
- Automatic scaling and object cleanup
- Simple and reliable data storage system
**Disadvantages:**
- Stack memory is limited
- Too many objects on the stack leads to a stack overflow error
**Applications:**
- Used for making highly reactive systems
- Memory management systems use stacks to handle the most recent requests first
- Helpful for questions like parenthesis matching
<br>
### Common stacks interview questions in Python
- Implement a queue using stacks
- Evaluate a Postfix expression with a stack
- Next greatest element using a stack
- Create a `min()` function using a stack
<br>
<a name="linked-list"></a>
## Linked lists in Python
[Linked lists](https://www.educative.io/blog/data-structures-linked-list-java-tutorial) are a sequential collection of data that uses **relational pointers on each data node** to link to the next node in the list.
Unlike arrays, linked lists do not have objective positions in the list. Instead, they have relational positions based on their surrounding nodes.
The first node in a linked list is called the **head node**, and the final node is called the **tail node**, which has a `null` pointer.

Linked lists can be singly or doubly linked depending if each node has just a single pointer to the next node or if it also has a second pointer to the previous node.
You can think of linked lists like a chain; individual links only have a connection to their immediate neighbors but all the links together form a larger structure.
Python does not have a built-in implementation of linked lists and therefore requires that you implement a `Node` class to hold a data value and one or more pointers.
```py
class Node:
    def __init__(self, dataval=None):
        self.dataval = dataval
        self.nextval = None

class SLinkedList:
    def __init__(self):
        self.headval = None

list1 = SLinkedList()
list1.headval = Node("Mon")
e2 = Node("Tue")
e3 = Node("Wed")

# Link first Node to second node
list1.headval.nextval = e2

# Link second Node to third node
e2.nextval = e3
```
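The snippet above builds a list but never walks it, so here is a small traversal helper to pair with it. The `to_list` name is my own, and the tiny `Node` class is repeated so the sketch runs on its own; it simply follows `nextval` pointers from the head until it hits the `None` that marks the tail:

```python
class Node:
    def __init__(self, dataval=None):
        self.dataval = dataval
        self.nextval = None

def to_list(head):
    # Follow nextval pointers until the tail's None marker
    values = []
    node = head
    while node is not None:
        values.append(node.dataval)
        node = node.nextval
    return values

# Build Mon -> Tue -> Wed and traverse it
head = Node("Mon")
head.nextval = Node("Tue")
head.nextval.nextval = Node("Wed")
print(to_list(head))  # ['Mon', 'Tue', 'Wed']
```

Notice the traversal always starts at the head, which is exactly the limitation circular linked lists address.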
Linked lists are primarily used to create advanced data structures like graphs and trees or for tasks that require frequent addition/deletion of elements across the structure.
**Advantages:**
- Efficient insertion and deletion of new elements
- Simpler to reorganize than arrays
- Useful as a starting point for advanced data structures like graphs or trees
**Disadvantages:**
- Storage of pointers with each data point increases memory usage
- Must always traverse the linked list from Head node to find a specific element
**Applications:**
- Building block for advanced data structures
- Solutions that call for frequent addition and removal of data
<BR>
### Common linked list interview questions in Python
- Print the middle element of a given linked list
- Remove duplicate elements from a sorted linked list
- Check if a singly linked list is a palindrome
- Merge K sorted linked lists
- Find the intersection point of two linked lists
<br>
<a name="circular-ll"></a>
## Circular linked lists in Python
The primary downside of the standard linked list is that you always have to start at the Head node. The circular linked list fixes this problem by replacing the Tail node's `null` pointer with a pointer back to the Head node. When traversing, the program will follow pointers until it reaches the node it started on.

The advantage of this setup is that you can start at any node and traverse the whole list. It also allows you to use linked lists as a loopable structure by setting a desired number of cycles through the structure. Circular linked lists are great for processes that loop for a long time, like CPU allocation in operating systems.
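Python has no built-in circular list either, so one way to sketch the idea (the `make_circular` and `cycle` names are my own) is to build an ordinary chain of nodes and then close the loop manually:

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

def make_circular(values):
    # Link nodes in a ring: the tail points back to the head
    head = Node(values[0])
    current = head
    for value in values[1:]:
        current.next = Node(value)
        current = current.next
    current.next = head  # instead of None, close the loop
    return head

def cycle(start, steps):
    # Traverse `steps` nodes starting from any node in the ring
    out = []
    node = start
    for _ in range(steps):
        out.append(node.data)
        node = node.next
    return out

head = make_circular(["a", "b", "c"])
print(cycle(head, 5))  # ['a', 'b', 'c', 'a', 'b'] -- wraps past the "tail"
```

Because the ring never hits a `None`, the caller decides how many steps to take — which is why circular lists suit looping workloads like schedulers.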
**Advantages:**
- Can traverse the whole list starting from any node
- Makes linked lists more suited to looping structures
**Disadvantages:**
- More difficult to find the Head and Tail nodes of the list without a `null` marker
**Applications:**
- Regularly looping solutions like CPU scheduling
<Br>
### Common circular linked list interview questions in Python
- Detect loop in a linked lists
- Reverse a circular linked list
- Reverse circular linked list in groups of a given size
<br>
<div style="background-color:#F1F1FF;padding-left: 10px;padding-top:1px;padding-bottom:1px;">
#### Keep brushing up on Python Data Structures
Practiced knowledge of data structures is essential for any interviewee. Educative's text-based courses give you hundreds of hands-on practice problems to ensure you're ready when the time comes.
<b>[Ace the Python Coding Interview](https://www.educative.io/path/ace-python-coding-interview)</b>
</div>
<br>
<a name="tree"></a>
## Trees in Python
[Trees](https://www.educative.io/blog/data-structures-trees-java) are another relation-based data structure, which specialize in representing hierarchical structures. Like a linked list, they're populated with `Node` objects that contain a data value and one or more pointers to define its relation to immediate nodes.
Each tree has a **root** node that all other nodes branch off from. The root contains pointers to all elements directly below it, which are known as its **child nodes**. These child nodes can then have child nodes of their own. Binary trees cannot have nodes with more than two child nodes.
Any nodes on the same level are called **sibling nodes**. Nodes with no connected child nodes are known as **leaf nodes**.

The most common application of the binary tree is a **binary search tree**. Binary search trees excel at searching large collections of data, as the time complexity depends on the depth of the tree rather than the number of nodes.
>**Binary search trees have four strict rules:**
>
> - The left subtree contains only nodes with elements lesser than the root.
> - The right subtree contains only nodes with elements greater than the root.
> - Left and right subtrees must also be binary search trees, following the above rules relative to their own root.
> - There can be no duplicate nodes, i.e. no two nodes can have the same value.
```py
class Node:
    def __init__(self, data):
        self.left = None
        self.right = None
        self.data = data

    def insert(self, data):
        # Compare the new value with the parent node
        if self.data:
            if data < self.data:
                if self.left is None:
                    self.left = Node(data)
                else:
                    self.left.insert(data)
            elif data > self.data:
                if self.right is None:
                    self.right = Node(data)
                else:
                    self.right.insert(data)
        else:
            self.data = data

    # Print the tree in sorted (in-order) order
    def PrintTree(self):
        if self.left:
            self.left.PrintTree()
        print(self.data)
        if self.right:
            self.right.PrintTree()

# Use the insert method to add nodes
root = Node(12)
root.insert(6)
root.insert(14)
root.insert(3)

root.PrintTree()
```
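The insert code above never shows the payoff of those four rules: the lookup itself. This standalone `search` function is my own addition (with a minimal `Node` mirroring the class above so it runs on its own); each comparison discards one whole subtree, which is where the O(height) cost comes from:

```python
class Node:
    # Minimal node mirroring the article's class: left, right, data
    def __init__(self, data):
        self.left = None
        self.right = None
        self.data = data

def search(node, target):
    # Each comparison discards one subtree, so cost is O(height)
    while node is not None:
        if target == node.data:
            return True
        # Rules 1 and 2 tell us which subtree could contain the target
        node = node.left if target < node.data else node.right
    return False

# Same shape as the insert example: 12 with children 6 and 14, leaf 3
root = Node(12)
root.left = Node(6)
root.right = Node(14)
root.left.left = Node(3)
print(search(root, 14))  # True
print(search(root, 7))   # False
```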
**Advantages:**
- Good for representing hierarchical relationships
- Dynamic size, great at scale
- Quick insert and delete operations
- In a binary search tree, inserted nodes are placed in sorted position immediately
- Binary search trees are efficient at searches; the cost is only `O(height)`
**Disadvantages:**
- Time expensive, `O(log n)`, to modify or "balance" trees or retrieve elements from a known location
- Child nodes hold no information on their parent node and can be hard to traverse backward
- Efficiency depends on the tree staying balanced; a badly skewed tree degrades into linear search
**Applications:**
- Great for storing hierarchical data such as a file location
- Used to implement top searching and sorting algorithms like binary search trees and binary heaps
<br>
### Common tree interview questions in Python
- Check if two binary trees are identical
- [Implement level order traversal of a binary tree](https://www.educative.io/blog/tree-traversal-algorithms)
- Print the perimeter of a binary search tree
- Sum all nodes along a path
- Connect all siblings of a binary tree
<br>
<a name="graph"></a>
## Graphs in Python
[Graphs](https://www.educative.io/blog/graph-algorithms-tutorial) are a data structure used to represent a visual of relationships between data **vertices** (the Nodes of a graph). The links that connect vertices together are called **edges**.
Edges define which vertices are connected but do not indicate a direction of flow between them. Each vertex has connections to other vertices, which are saved on the vertex as a comma-separated list.

There are also special graphs called **directed graphs** that define a direction of the relationship, similar to a linked list. Directed graphs are helpful when modeling one-way relationships or a flowchart-like structure.

They're primarily used to convey visual web-structure networks in code form. These structures can model many different types of relationships like hierarchies, branching structures, or simply be an unordered relational web. The versatility and intuitiveness of graphs makes them a favorite for [data science](https://www.educative.io/blog/python-pandas-tutorial).
When written in plain text, graphs have a list of vertices and edges:
```
V = {a, b, c, d, e}
E = {ab, ac, bd, cd, de}
```
In Python, graphs are best implemented using a dictionary with the name of each vertex as a key and the edges list as the values.
```py
# Create the dictionary with graph elements
graph = {
    "a": ["b", "c"],
    "b": ["a", "d"],
    "c": ["a", "d"],
    "d": ["e"],
    "e": ["d"]
}
# Print the graph
print(graph)
```
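With the adjacency dictionary in hand, a classic interview task like "check if a path exists between two vertices" becomes a short breadth-first search. The `has_path` helper below is my own illustration, not part of the article's code:

```python
from collections import deque

graph = {
    "a": ["b", "c"],
    "b": ["a", "d"],
    "c": ["a", "d"],
    "d": ["e"],
    "e": ["d"]
}

def has_path(graph, start, goal):
    # Breadth-first search over the adjacency dict
    seen = {start}
    queue = deque([start])
    while queue:
        vertex = queue.popleft()
        if vertex == goal:
            return True
        for neighbor in graph[vertex]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return False

print(has_path(graph, "a", "e"))  # True: a -> b -> d -> e
```

Note that with the edge lists as given, "e" only points back to "d", so `has_path(graph, "e", "a")` is `False` — the direction of the stored edges matters.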
**Advantages:**
- Quickly convey visual information through code
- Usable for modeling a wide range of real-world problems
- Simple to learn the syntax
**Disadvantages:**
- Vertex links are difficult to understand in large graphs
- Time expensive to parse data from a graph
**Applications:**
- Excellent for modeling networks or web-like structures
- Used to model social network sites like [Facebook](https://www.educative.io/blog/cracking-top-facebook-coding-interview-questions)
<br>
### Common graph interview questions in Python
- Detect cycle in a directed graph
- Find a "Mother Vertex" in a directed graph
- Count the number of edges in an undirected graph
- Check if a path exists between two vertices
- Find the shortest path between two vertices
<br>
<a name="hashtable"></a>
## Hash tables in Python
[Hash tables](https://www.educative.io/blog/data-strucutres-hash-table-javascript) are a complex data structure capable of storing large amounts of information and retrieving specific elements efficiently.
This data structure uses key/value pairs, where the key is the name of the desired element and the value is the data stored under that name.

Each input key goes through a **hash function** that converts it from its starting form into an integer value, called a **hash**. Hash functions must always produce the same hash from the same input, must compute quickly, and produce fixed-length values. Python includes a built-in `hash()` function that speeds up implementation.
The table then uses the hash to find the general location of the desired value, called a **storage bucket**. The program then only has to search this subgroup for the desired value rather than the entire data pool.
Beyond this general framework, hash tables can be very different depending on the application. Some may allow keys from different data types, while some may have differently setup buckets or different hash functions.
Here is an example of a hash table in Python code:
```py
import pprint

class Hashtable:
    def __init__(self, elements):
        self.bucket_size = len(elements)
        self.buckets = [[] for i in range(self.bucket_size)]
        self._assign_buckets(elements)

    def _assign_buckets(self, elements):
        for key, value in elements:
            hashed_value = hash(key)  # calculates the hash of each key
            index = hashed_value % self.bucket_size  # positions the element in a bucket using the hash
            self.buckets[index].append((key, value))  # adds a (key, value) tuple to the bucket

    def get_value(self, input_key):
        hashed_value = hash(input_key)
        index = hashed_value % self.bucket_size
        bucket = self.buckets[index]
        for key, value in bucket:
            if key == input_key:
                return value
        return None

    def __str__(self):
        return pprint.pformat(self.buckets)  # pformat returns a printable representation of the object

if __name__ == "__main__":
    capitals = [
        ('France', 'Paris'),
        ('United States', 'Washington D.C.'),
        ('Italy', 'Rome'),
        ('Canada', 'Ottawa')
    ]
    hashtable = Hashtable(capitals)
    print(hashtable)
    print(f"The capital of Italy is {hashtable.get_value('Italy')}")
```
**Advantages:**
- Can convert keys in any form to integer indices
- Extremely effective for large data sets
- Very effective search function
- Constant number of steps for each search and constant efficiency for adding or deleting elements
- Optimized in [Python 3](https://www.educative.io/blog/python3-guide)
**Disadvantages:**
- Two keys hashing to the same bucket causes a collision, which must be handled explicitly
- A poorly chosen hash function produces frequent collisions and degrades lookups toward linear search
- Difficult to build for beginners
**Applications:**
- Used for large, frequently-searched databases
- Retrieval systems that use input keys
<Br>
### Common hash table interview questions in Python
- Build a hash table from scratch (without built-in functions)
- Word formation using a hash table
- Find two numbers that add up to "k"
- Implement open addressing for collision handling
- Detect if a list is cyclical using a hash table
<br>
<a name="next"></a>
## What to learn next
There are dozens of interview questions and formats for each of these 8 data structures. The best way to prepare yourself for the interview process is to keep trying hands-on practice problems.
To help you become a data structure expert, Educative has created the [**Ace the Python Coding Interview Path**](https://www.educative.io/path/ace-python-coding-interview). This curated Learning Path includes hands-on practice material for all the most discussed concepts like data structures, recursion, and concurrency.
By the end, you'll have completed over 250 practice problems and have the hands-on experience to crack any interview question.
*Happy learning!*
<br>
### Continue reading about Python interviews
- [50 Python Interview Questions and Answers](https://www.educative.io/blog/python-interview-questions)
- [Top 5 Concurrency Interview Questions for Software Engineers](https://www.educative.io/blog/top-five-concurrency-interview-questions-for-software-engineers)
- [Master Algorithms with Python for Coding Interviews](https://www.educative.io/blog/python-algorithms-coding-interview)
| ryanthelin |
629,931 | Our Favorite Raspberry Pi (Day) Projects | The Blues Wireless team got together to pick their favorite projects to celebrate Raspberry Pi Day! | 0 | 2021-03-11T14:35:05 | https://dev.to/blues/our-favorite-raspberry-pi-day-projects-210b | raspberrypi, iot, pie | ---
title: Our Favorite Raspberry Pi (Day) Projects
published: true
description: The Blues Wireless team got together to pick their favorite projects to celebrate Raspberry Pi Day!
tags: raspberrypi, iot, pie
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gode09glfb02jb6i4xli.jpg
---
Happy Pi Day everyone! 🥧
Let's start off by reciting as many digits of pi as we can, ok? I'll start:
3.1415...9?

So anyway, those of us in the Raspberry Pi community have fully co-opted Pi Day as way to celebrate the best of the RPi ecosystem. From the robust [Raspberry Pi 4](https://www.raspberrypi.org/products/raspberry-pi-4-model-b/) single board computer, all the way to the new [Raspberry Pi Pico](https://www.raspberrypi.org/products/raspberry-pi-pico/) microcontroller board, there are more options than ever to create IoT solutions with RPi hardware.
> **FUN FACT:** Did you know Raspberry Pi *wasn't* in fact named for the number pi? Initially RPi was designed to only run **Py**thon. We've collectively decided to ignore that for Pi Day though!
Like many of you, I have a personal affinity towards the Raspberry Pi as it was key to kicking off my interest in IoT. I was delighted to see it also spark my kid's "aha moment" when using Scratch to program a Sense HAT.
*Totally-not-staged photo of said event:*

## The Best of Raspberry Pi
Needless to say, all of us at [Blues Wireless](https://blues.io/?utm_source=dev.to&utm_medium=blog&utm_campaign=piday) are massive fans of Raspberry Pi. On day one of our launch, we announced the [Notecarrier-Pi HAT](https://shop.blues.io/products/raspberry-pi-starter-kit?utm_source=dev.to&utm_medium=blog&utm_campaign=piday), giving your RPi easy access to pre-paid global cellular connectivity.
Without further ado, here are some of our favorite Raspberry Pi projects:
## Vertical Hydroponic Farm 🌱
This elaborate project is an IoT-enabled hydroponic farm, powered by a Raspberry Pi. The system itself is maintained with a series of Arduino controllers, but they are all networked to a Raspberry Pi using I2C. This allows all system parameters to be monitored and updated in real time.
**Check out the full project on [hackster.io](https://www.hackster.io/bltrobotics/vertical-hydroponic-farm-44fef9).**
## Remote ML Bird Identification 🐦
Machine Learning is becoming a killer angle for broad IoT adoption, due to the new opportunities provided in edge computing. While identifying birds is a relatively common ML task, this project tackles the same problem in a remote environment by adding cellular connectivity to send SMS messages with Twilio (and a solar/battery powered option).
Not to mention, the entire code base is about 100 lines of Python!

**View the [complete build instructions](https://www.hackster.io/rob-lauer/remote-birding-with-tensorflow-lite-and-raspberry-pi-8c4fcc) to create your own remote ML solution.**
## Pneumonia Classification & Detection Using EdgeML 🧑⚕️
As fun as it is for us to tinker on relatively impractical IoT projects, we are continually impressed by those in the community who are able to address significant real world problems.
Our friend [Arijit Das](https://twitter.com/Arijit_Student) recently released a Raspberry Pi-based project that uses Machine Learning to analyze chest x-rays to help diagnose bacterial vs viral pneumonia.
**Take a look at [an overview of Arijit's project](https://www.hackster.io/arijit_das_student/pneumonia-classification-detection-using-edgeml-991e18).**
## Machine Learning on Raspberry Pi Pico 🤖
As previously mentioned, the Raspberry Pi Pico MCU is the new kid on the block in the Raspberry Pi ecosystem.
In this project, Dmitry Maslov dives into using a neural network trained with [Edge Impulse](https://www.edgeimpulse.com/) on the Pico!

**Read through Dmitry's tutorial [right here](https://www.hackster.io/dmitrywat/machine-learning-inference-on-raspberry-pico-2040-e6e874).**
## Adding Cellular to the Raspberry Pi Pico 📞
And speaking of the Raspberry Pi Pico, how about adding cellular to your next Pico project?
Brandon Satrom provides a beginner-friendly walkthrough of adding cellular connectivity to the Pico using MicroPython and the [Notecard](https://blues.io/products/?utm_source=dev.to&utm_medium=blog&utm_campaign=piday).
**Follow along with Brandon in his [tutorial](https://www.hackster.io/brandonsatrom/adding-cellular-to-the-raspberry-pi-pico-b8a4b6).**
## Energy Monitoring through a Raspberry Pi ⚡
And finally, the folks from [Control Everything](https://www.hackster.io/ControlEverything) created an extensive walkthrough for setting up a current monitor with a Raspberry Pi and displaying circuit readings in a web UI.

This is a great tutorial as it also covers how to set up Apache on your RPi for serving web content.
**Check out the [complete how-to](https://www.hackster.io/ControlEverything/energy-monitoring-through-a-raspberry-pi-190a2a).**
## Remember: It's Also "Pie" Day
Go get yourself a piece of pie to celebrate, you deserve it. 🥧🥰

| rdlauer |
634,727 | Project 60 of 100 - Lil' Context API Demo | Hey! I'm on a mission to make 100 React.js projects ending March 31st. Please follow my dev.to profil... | 0 | 2021-03-15T04:46:18 | https://dev.to/jameshubert_com/project-60-of-100-lil-context-api-demo-8f7 | react, javascript, 100daysofcode | *Hey! I'm on a mission to make 100 React.js projects ending March 31st. Please follow my dev.to profile or my [twitter](https://www.twitter.com/jwhubert91) for updates and feel free to reach out if you have questions. Thanks for your support!*
Link to the deployed project: [Link](https://100-react-projects-day-60-lil-context-demo.netlify.app/)
Link to the repo: [github](https://github.com/jwhubert91/100daysofreact/tree/master/day-60-lil-context-demo)
I'm going back to my Scrimba (thank you Scrimba 🙏) React tutorial and starting from the beginning of the Context API section I abandoned quite a while ago. It's funny that I've built so many React projects without Context or Redux. I guess it just shows I haven't built many production-level web applications with tens or hundreds of components, but even the full-stack applications I have built have avoided complex state management tools like these by passing props 🤔
So this is one of the simplest projects you could do with Context, but it's a worthwhile exercise for someone new to it because it's so demonstrative of the tool, and how it is working.
We start out with a `create-react-app` project and remove all of the contents of the `App` component. Next I create a React component and call it `Prompt`. This is where we'll ask for some user input. I actually stored my state in the `App` component despite `Prompt` being where we take in data, which just goes to show how used to the props way of doing things I am. Apparently, any component can serve as the provider of data.
```JS
import React, {useState} from 'react'
import Prompt from './components/Prompt'
import InnerOne from './components/InnerOne'
import NameContext from './nameContext'

function App() {
  const [name, setName] = useState('')

  const handleNameChange = (e) => {
    setName(e.target.value)
  }

  return (
    <div className="app">
      <Prompt handleNameChange={handleNameChange} />
      <NameContext.Provider value={name}>
        <InnerOne />
      </NameContext.Provider>
    </div>
  );
}

export default App;
```
According to React master [Kent C. Dodds](https://kentcdodds.com/blog/how-to-use-react-context-effectively) all we need to do is "use the Provider and expose a component that provides a value".
To actually begin using the Context API it's a best practice to have a separate file where you initialize the context so that it can be imported and used anywhere. We can do this in just two lines of code by importing {createContext} from the React node module and initializing a new context. Then you'll have to export it.
```JS
import {createContext} from 'react'
const NameContext = createContext()
export default NameContext;
```
As you can see above in the `App` component we then import this context to create a Provider.
```JS
import NameContext from './nameContext'
...
<NameContext.Provider value={name}>
<InnerOne />
</NameContext.Provider>
```
We can then pass any information we want to other components by creating props on the Provider. I then create a component called InnerOne. This is basically just a div with a little styling but the fact that we're creating a separate component will demonstrate what's going on with Context. I will also create an `InnerTwo` component with the same structure.
```JS
import React from 'react'
import InnerTwo from './InnerTwo'

const InnerOne = () => {
  return (
    <div className='innerOne inner-container'>
      Inner One - I have no context
      <InnerTwo />
    </div>
  )
}

export default InnerOne
```
`InnerThree` is where the action is. Here is where we actually create a consumer to use the data provided by the provider. It has access to the data in the provider despite being nested two levels deep and not having any props!
```JS
import React from 'react'
import NameContext from '../nameContext'

const InnerThree = () => {
  return (
    <NameContext.Consumer>
      {(name) => (
        <div className='innerThree inner-container'>
          Inner three - Do I have some context? 🤔
          <div className='innerThree__nameText'>{name}</div>
        </div>
      )}
    </NameContext.Consumer>
  )
}

export default InnerThree
```
Like I said, not the fanciest project but I do feel it's deeply illustrative of the power of React Context. You can extrapolate this relationship to any depth. 100 levels deep and you could still pass that data from the provider without props.
Neat! More context tomorrow. If you like projects like this don't forget to follow me on [the Twitters](https://www.twitter.com/jwhubert91) :) | jameshubert_com |
638,173 | Answer: How to generate an .apk file from Xamarin.Forms Project using Visual Studio? | answer re: How to generate an .apk fi... | 0 | 2021-03-18T07:18:14 | https://dev.to/maxangelo987/answer-how-to-generate-an-apk-file-from-xamarin-forms-project-using-visual-studio-25i9 | {% stackoverflow 60184629 %} | maxangelo987 | |
638,213 | The Implementation of HDB, the _hyperscript debugger | The 0.0.6 release of the _hyperscript hypertext UI scripting language introduces HDB, an interactive... | 0 | 2021-03-18T07:44:13 | https://denizaksimsek.com/2021/the-implementation-of-hdb/ | webdev, hyperscript, javascript | The 0.0.6 release of the [_hyperscript][] hypertext UI scripting language introduces HDB, an interactive debugging environment. In this article I discuss how the hyper-flexible hyperscript runtime allowed me to implement the first release of HDB with ease. If you'd like to see what HDB is like, I have a a demo on [my website][].
## Implementation
HDB lives in a [single JavaScript file][hdb-src].
### Turning the keys
In the hyperscript runtime (which is a tree walking interpreter), each command has an `execute()` method which either returns the next command to be executed, or a `Promise` thereof. The execute method for the breakpoint command creates an HDB environment and assigns it to the global scope (usually `window`):
_[hdb.js ln. 20](https://github.com/bigskysoftware/_hyperscript/blob/7740c7eccfe3fe4f09443ec0adb961c72eb27a7b/src/lib/hdb.js#L20)_
```js
var hdb = new HDB(ctx, runtime, this);
window.hdb = hdb;
```
The `HDB` object keeps hold of the current command and context as we step through. (The context is the object holding the local variables for the hyperscript code, and some other things the runtime keeps track of). We call its `break()` method:
_[hdb.js ln. 35](https://github.com/bigskysoftware/_hyperscript/blob/7740c7eccfe3fe4f09443ec0adb961c72eb27a7b/src/lib/hdb.js#L35)_
```js
HDB.prototype.break = function(ctx) {
    var self = this;
    console.log("%c=== HDB///_hyperscript/debugger ===", headingStyle);
    self.ui();
    return new Promise(function (resolve, reject) {
        self.bus.addEventListener("continue", function () {
            if (self.ctx !== ctx) {
                // Context switch
                for (var attr in ctx) {
                    delete ctx[attr];
                }
                Object.assign(ctx, self.ctx);
            }
            delete window.hdb;
            resolve(self.runtime.findNext(self.cmd, self.ctx));
        }, { once: true });
    })
}
```
There are a few things to unpack here. We call `self.ui()` to start the UI, which we'll get to later. Remember how a command can return the next command to execute as a promise? The break method resolves after the [internal event bus][] receives a `"continue"` event, whether by the user pressing "Continue" or simply reaching the end of the debugged code.
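The bus itself is just an `EventTarget`. As a standalone sketch (the function and value names here are illustrative, not HDB internals), the resolve-on-`"continue"` pattern boils down to:

```javascript
// Sketch of HDB's "park a promise until continue" pattern.
// A plain EventTarget serves as the internal event bus; the
// pending promise resolves once "continue" is dispatched.
const bus = new EventTarget();

function breakUntilContinue() {
    return new Promise((resolve) => {
        // { once: true } mirrors hdb.js: the listener fires at most once.
        bus.addEventListener("continue", () => resolve("resumed"), { once: true });
    });
}

const pending = breakUntilContinue();
bus.dispatchEvent(new Event("continue")); // e.g. the user presses "Continue"
pending.then((state) => console.log(state)); // logs "resumed"
```

Dispatching `"continue"` from anywhere (a button handler, or the end of the debugged code) is enough to resume execution.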
The "context switch" is the dirtiest part of it all. Because we can step out of functions, we might finish debugging session with a different context than before. In this case, we just wipe the old context and copy the current context variables over. Honestly, I thought I'd have to do a lot more of this kind of thing.
Speaking of stepping out of functions...
### Stepping Over and Out
Firstly, if self.cmd is null, then the previous command was the last one, so we just stop the debug process:
_[hdb.js ln. 58](https://github.com/bigskysoftware/_hyperscript/blob/7740c7eccfe3fe4f09443ec0adb961c72eb27a7b/src/lib/hdb.js#L58)_
```js
HDB.prototype.stepOver = function() {
    var self = this;
    if (!self.cmd) return self.continueExec();
```
If not, then we do a little dance to execute the current command and get the next one:
_[hdb.js ln. 61](https://github.com/bigskysoftware/_hyperscript/blob/7740c7eccfe3fe4f09443ec0adb961c72eb27a7b/src/lib/hdb.js#L61)_
```js
var result = self.cmd && self.cmd.type === 'breakpointCommand' ?
    self.runtime.findNext(self.cmd, self.ctx) :
    self.runtime.unifiedEval(self.cmd, self.ctx);
```
We perform a useless check that I forgot to take out (`self.cmd &&`). Then, we special-case the `breakpoint` command itself and don't execute it (nested debug sessions don't end well...), instead finding the subsequent command ourselves with the `runtime.findNext()` in hyperscript core. Otherwise, we can execute the current command.
Once we have our command result, we can step onto it:
_[hdb.js ln. 64](https://github.com/bigskysoftware/_hyperscript/blob/7740c7eccfe3fe4f09443ec0adb961c72eb27a7b/src/lib/hdb.js#L64)_
```js
if (result.type === "implicitReturn") return self.stepOut();
if (result && result.then instanceof Function) {
    return result.then(function (next) {
        self.cmd = next;
        self.bus.dispatchEvent(new Event("step"));
        self.logCommand();
    })
} else if (result.halt_flag) {
    this.bus.dispatchEvent(new Event("continue"));
} else {
    self.cmd = result;
    self.bus.dispatchEvent(new Event("step"));
    this.logCommand();
}
```
If we returned from a function, we step out of it (discussed below). Otherwise, if the command returned a Promise, we await the next command, set `cmd` to it, notify the event bus and log it with some fancy styles. If the result was synchronous and is a [HALT][]; we stop debugging (as I write this, I'm realizing I should've called [`continueExec()`][continue-exec] here). Finally, we commit the kind of code duplication hyperscript is meant to help you avoid, to handle a synchronous result.
To step out, we first get our hands on the context from which we were called:
_[hdb.js ln. 80](https://github.com/bigskysoftware/_hyperscript/blob/7740c7eccfe3fe4f09443ec0adb961c72eb27a7b/src/lib/hdb.js#L80)_
```js
HDB.prototype.stepOut = function() {
    var self = this;
    if (!self.ctx.meta.caller) return self.continueExec();
    var callingCmd = self.ctx.meta.callingCommand;
    var oldMe = self.ctx.me;
    self.ctx = self.ctx.meta.caller;
```
Turns out _hyperscript function calls already keep hold of the caller context (`callingCommand` was added by me though). After we change context, we do something a little odd:
_[hdb.js ln. 92](https://github.com/bigskysoftware/_hyperscript/blob/7740c7eccfe3fe4f09443ec0adb961c72eb27a7b/src/lib/hdb.js#L92)_
```js
self.cmd = self.runtime.findNext(callingCmd, self.ctx);
self.cmd = self.runtime.findNext(self.cmd, self.ctx);
```
Why do we call `findNext` twice? Consider the following hyperscript code:
```
transition 'color' to darkgray
set name to getName()
log the name
```
We can't execute the command to set `name` until we have the name, so when `getName()` is called, the current command is still set to the `transition`. We call `findNext` once to find the `set`, and again to find the `log`.
Finally, we're done stepping out:
_[hdb.js ln. 95](https://github.com/bigskysoftware/_hyperscript/blob/7740c7eccfe3fe4f09443ec0adb961c72eb27a7b/src/lib/hdb.js#L95)_
```js
self.bus.dispatchEvent(new Event('step'))
```
### HDB UI
What did I use to make the UI for the hyperscript debugger? Hyperscript, of course!
_[hdb.js ln. 107](https://github.com/bigskysoftware/_hyperscript/blob/7740c7eccfe3fe4f09443ec0adb961c72eb27a7b/src/lib/hdb.js#L107)_
```html
<div class="hdb" _="
    on load or step from hdb.bus send update to me
    on continue from hdb.bus remove #hyperscript-hdb-ui-wrapper-">
```
There are a lot of elements listening to `load or step from hdb.bus`, so I consolidated them under `update from .hdb`. `#hyperscript-hdb-ui-wrapper-` is the element whose Shadow DOM this UI lives in --- using shadow DOM to isolate the styling of the panel cost me later on, as you'll see.
-------------
We define some functions.
_[hdb.js ln. 112](https://github.com/bigskysoftware/_hyperscript/blob/7740c7eccfe3fe4f09443ec0adb961c72eb27a7b/src/lib/hdb.js#L112)_
```html
def highlightDebugCode
    set start to hdb.cmd.startToken.start
    set end to hdb.cmd.endToken.end
    set src to hdb.cmd.programSource
    set beforeCmd to escapeHTML(src.substring(0, start))
    set cmd to escapeHTML(src.substring(start, end))
    set afterCmd to escapeHTML(src.substring(end))
    return beforeCmd+"<u class='current'>"+cmd+"</u>"+afterCmd
end
```
Now, I wasn't aware that we had [template literals][] in hyperscript at this point, so that's for the next release. The `escapeHTML` helper might disappoint some:
_[hdb.js ln. 122](https://github.com/bigskysoftware/_hyperscript/blob/7740c7eccfe3fe4f09443ec0adb961c72eb27a7b/src/lib/hdb.js#L122)_
```html
def escapeHTML(unsafe)
    js(unsafe) return unsafe
        .replace(/&/g, "&amp;")
        .replace(/</g, "&lt;")
        .replace(/>/g, "&gt;")
        .replace(/\\x22/g, "&quot;")
        .replace(/\\x27/g, "&#039;") end
    return it
end
```
Unfortunately, hyperscript's regex syntax isn't decided yet.
------------
And we have the most broken part of HDB, the prettyPrint function. If you know how to do this better, feel free to send a PR.
Having defined our functions we have a simple toolbar and then the **eval panel**:
_[hdb.js ln. 158](https://github.com/bigskysoftware/_hyperscript/blob/7740c7eccfe3fe4f09443ec0adb961c72eb27a7b/src/lib/hdb.js#L158)_
```html
<form class="eval-form" _="
    on submit call event.preventDefault()
    get the first <input/> in me
    then call _hyperscript(its.value, hdb.ctx)
    then call prettyPrint(it)
    then put it into the <output/> in me">
    <input type="text" id="eval-expr" placeholder="e.g. target.innerText">
    <button type="submit">Go</button>
    <output id="eval-output"><em>The value will show up here</em></output>
```
Why do I use weird selectors like `<input/> in me` when these elements have good IDs? Because `#eval-expr` in hyperscript uses `document.querySelector`, which doesn't reach Shadow DOM.
------------
A panel to show the code being debugged:
_[hdb.js ln. 170](https://github.com/bigskysoftware/_hyperscript/blob/7740c7eccfe3fe4f09443ec0adb961c72eb27a7b/src/lib/hdb.js#L170)_
```html
<h3 _="on update from hdbUI
    put 'Debugging <code>'+hdb.cmd.parent.displayName+'</code>' into me"></h3>
<div class="code-container">
    <pre class="code" _="on update from hdbUI
        if hdb.cmd.programSource
            put highlightDebugCode() into my.innerHTML
            scrollIntoView({ block: 'nearest' }) the
                first .current in me"></pre>
</div>
```
------------
Finally, a context panel that shows the local variables.
_[hdb.js ln. 186](https://github.com/bigskysoftware/_hyperscript/blob/7740c7eccfe3fe4f09443ec0adb961c72eb27a7b/src/lib/hdb.js#L186)_
```html
<dl class="context" _="
    on update from hdbUI
        set my.innerHTML to ''
        repeat for var in Object.keys(hdb.ctx) if var != 'meta'
            get '<dt>'+var+'<dd>'+prettyPrint(hdb.ctx[var])
            put it at end of me
        end end
    on click
        get closest <dt/> to target
        log hdb.ctx[its.innerText]"></dl>
```
That loop could definitely be cleaner. You can see the hidden feature where you can click a variable name to log it to the console (useful if you don't want to rely on my super-buggy pretty printer).
Some CSS later, we're done with the UI! To avoid CSS interference from the host page, we create a wrapper and put our UI in its shadow DOM:
_[hdb.js ln. 350](https://github.com/bigskysoftware/_hyperscript/blob/7740c7eccfe3fe4f09443ec0adb961c72eb27a7b/src/lib/hdb.js#L350)_
```js
HDB.prototype.ui = function () {
    var node = document.createElement('div');
    var shadow = node.attachShadow({ mode: 'open' });
    node.style = 'all: initial';
    node.id = 'hyperscript-hdb-ui-wrapper-';
    shadow.innerHTML = ui;
    document.body.appendChild(node);
    window.hdbUI = shadow.querySelector('.hdb');
    _hyperscript.processNode(hdbUI);
}
```
## The End
In just 360 lines, we have a basic debugger. This speaks volumes to the flexibility of the hyperscript runtime, and I hope HDB serves as an example of what's possible with the hyperscript extension API. Like the rest of hyperscript, it's in early stages of development --- feedback and contributors are always welcome!
[_hyperscript]: https://hyperscript.org
[hdb-src]: https://github.com/bigskysoftware/_hyperscript/blob/7740c7eccfe3fe4f09443ec0adb961c72eb27a7b/src/lib/hdb.js
[continue-exec]: https://github.com/bigskysoftware/_hyperscript/blob/7740c7eccfe3fe4f09443ec0adb961c72eb27a7b/src/lib/hdb.js#L54
[HALT]: https://github.com/bigskysoftware/_hyperscript/blob/7740c7eccfe3fe4f09443ec0adb961c72eb27a7b/src/lib/core.js#L1221
[template literals]: https://hyperscript.org/expressions/string/
[internal event bus]: https://github.com/bigskysoftware/_hyperscript/blob/7740c7eccfe3fe4f09443ec0adb961c72eb27a7b/src/lib/hdb.js#L10
[my website]: https://denizaksimsek.com/2021/the-implementation-of-hdb/ | dz4k |
638,446 | TypeScript videos from February | Greetings, Here's our TypeScript roundup, covering the most relevant TS videos from February. You ca... | 0 | 2021-03-18T13:40:18 | https://dev.to/meetupfeedio/typescript-videos-from-february-5bjk | typescript | Greetings,
Here's our TypeScript roundup, covering the most relevant TS videos from February. You can either read our article or scroll through the videos and pick your favorites below.
https://blog.meetupfeed.io/typescript-videos-february-2021/
Mastering TypeScript state management using just React | Jack Herrington
Continuing on with Typescript we are starting a series on React state management where we take the same To-Do list and implement it using a bunch of different state managers. And to kick that off we are looking at using just the React basics to start; useState, useContext and createContext.
Link: https://meetupfeed.io/talk/mastering-typescript-state-management-using-just-react-jack-herrington
Build a Glassmorphism React component – TypeScript & Material-UI
In this video Arjan explains how to create a customizable glassmorphism React component. Glassmorphism is a visual design for glass-like surfaces in user interfaces. Next to showing how to build a glassmorphic element in React, he also shows you how to manipulate the style through properties, and how to use the useStyles hook from the Material-UI library to create fully customizable styles. Finally, Arjan adds a few nice motion effects to finish the example and give it that sheen that every user desires!
Link: https://meetupfeed.io/talk/build-a-glassmorphism-react-component-typescript-material-ui
How to configure TypeScript in NodeJS
In this video you will learn how to configure TypeScript in NodeJS and express API.
Link: https://meetupfeed.io/talk/how-to-configure-type-script-in-node-js
UseContext & UseReducer with TypeScript = No more Redux!
This video will show you an effective pattern to make sure you no longer have to fight with types and Redux!
Link: https://meetupfeed.io/talk/use-context-use-reducer-with-typescript-no-more-redux
Starting with TypeScript in Cypress | Filip Hric
Cypress.io is a JS-based front end testing tool aiming to make the lives of QA engineers much easier. And, guess what: it also works with TypeScript. Learn how!
Link: https://meetupfeed.io/talk/starting-with-type-script-in-cypress-filip-hric | meetupfeedio |
638,637 | How social media is best for startups to find web projects? | Social networking is the most powerful platform & is a perfect way to communicate with new people... | 0 | 2021-03-18T15:06:19 | https://dev.to/polestartechno/how-social-media-is-best-for-startups-find-web-projects-id5 | startup, webdev | Social networking is the most powerful platform & is a perfect way to communicate with new people or new businesses, a way to find some potential prospects, marketing products & services, and engage with communities and groups.
Networking is really valuable for startups because it helps connect with the prospects they are looking for, allows you to share knowledge and develop professional and social relationships for better business. The social media site serves as the gateway to marketing for startups and reaches out to more people.
## Helpful Social Media Platforms for Technology Startups

### 1. LinkedIn
One of the best social networking platforms, and a great place to showcase your expertise. LinkedIn targets the business community, and as a result, more than 80% of leads generated from social media come from LinkedIn. This platform has the benefit of low expectations, as its users do not expect you to post daily updates. Adding an industry-related post every couple of days is a great start to get some exposure.
### 2. Twitter
Twitter is a free microblogging platform that lets users broadcast short posts of up to 280 characters, called tweets. It's a place for viewpoints, news, and offering solutions. You can search hashtags for your industry on Twitter and follow the trends leading back to your target audience. Sharing your expertise on Twitter is sure to get you noticed.
### 3. Facebook
One of the biggest social networks for increasing brand awareness and building a fan following. Facebook groups deliver value to audiences who are interested in and need your products and services. You can establish credibility, trust, and a loyal audience of clients. The way you manage your Facebook pages or groups will determine how many of those audience members become paying clients. If you're a startup that wants to run social media events or post the latest updates for clients, creating your own Facebook page or group is essential.
### 4. Instagram
One of the most creative platforms. Whatever you do, you can capture the moment wherever you are and create content that grabs your audience's attention. Give your audience a good reason to follow you on this platform, and if they appreciate your creativity, they may even contact you with work and deals.
### 5. Pinterest
Pinterest is now an incredibly successful network where lots of people search for inspiration and creativity every day. With Pinterest, you can add text alongside your photos to tell your followers about your offerings and services, but you have to be transparent about what you offer and whom you are attracting, because if you reach the right audience, more people will discover you.
## Useful Social Media Tools for Startups and Their Clients

### 1. Buffer
Batching similar tasks together and doing everything in one sitting can be a great way to save time. Instead of interrupting the rest of your day to make regular social media posts, Buffer allows you to write your posts in advance and put them in a queue to be published automatically on a particular date. It is among the most popular social media tools available to startups.
Writing all your weekly social media posts at once might only take a couple of minutes, whereas manually creating one post at a time throughout the day can be a huge time sink that pulls you away from your work.
### 2. Hootsuite
Besides having a name that is very fun to say, Hootsuite is another great tool for handling social media. Like Buffer, Hootsuite lets you schedule posts across multiple social media accounts, but it also has additional features. From within the app, you can engage with customers and followers and reply to comments with a single click.
Hootsuite also offers a fairly comprehensive collection of analytical reports to help you monitor how your social media posts perform and how engaged your audience is. It also offers useful insight into which kinds of posts are being shared by your audience, and which are not.
### 3. Agorapulse
Agorapulse is an up-and-coming Hootsuite competitor that has made a favorable impression on social media pros around the world. It provides most of the features Hootsuite does and builds on them, making it a good all-around social media platform for startups. The app offers an "inbox style" view of comments and direct messages that helps you easily clear backlogs and make sure nothing is missed.
### 4. Tweetdeck
Tweetdeck is different from the other social media tools we have discussed because it focuses on supporting the Twitter community only. If you're a Twitter power user focused on it rather than other networks, then Tweetdeck might be the perfect tool for you.
It is a web-based platform that gives you control over all of the Twitter accounts you manage. The dashboard shows a column for each of your Twitter accounts, so it's easy to keep them apart. It can also split out specific streams, such as direct messages and updates, into columns of their own. It's highly customizable, so you can add, remove, or reorder columns.
### 5. BuzzSumo
Unlike the other tools we've talked about so far, BuzzSumo is not a tool for consolidating or scheduling your social media posts. Think of it more as an analytics tool for social media that gives you access to information about what's hot and trending right now.
BuzzSumo allows you to quickly search content from all major social media networks over the last 12 months. You can use it to display backlinks, collaborate with influencers, evaluate your competitors, and more. It's a great tool to use as part of a B2B social media campaign.
## Guide to Using Social Media as a Startup

Startups can clearly benefit greatly from using social media. Growing a loyal following on Twitter or Instagram, or sharing your work there, can be a great way to find new clients. Social media is also useful for maintaining relationships with your existing clients. However, the vast range of social networks means making effective use of any of them can be very difficult. If you've been thinking about how to grow as a startup, you can use the following strategies to reach your audience and monetize your skills.
### Make a content plan
Once you've selected your social media platforms, it is important to determine what content to post on them. Without a plan, you could muddle things so your audience doesn't understand what you're trying to say. With a schedule, you'll have a message structure and be able to categorize your posts. Over time, your audience will become accustomed to this rhythm and look forward to your posts.
### Regularly promote yourself
Social media is mostly free to use, so you can promote yourself widely. The more active you are, the more visible you will become. You can also cross-promote your accounts across the various platforms. Collaborating with other bloggers or tech agencies can also help you capture an audience with related interests. Analytics tools like Google Analytics can show which of your channels are more popular and thus more useful to you. On the other hand, too much self-promotion can actively turn customers away from your brand. Combining paid ads with insightful content is the strongest approach.
### Engage in broader activities
You can also promote yourself indirectly by answering questions in thematic discussions within your area of expertise. Look for potential clients using hashtags and comments under other posts on your topics. If you simply answer their questions, customers will appreciate your expertise, or at least learn of your existence. This is also reflected in how social media platforms structure their algorithms. For example, Instagram and Facebook favor accounts that participate actively in conversations with other users and earn the most 'likes' on their comments. Social media marketing has long been about creating informative, engaging content.
### Use social media as job posting sites
While this suggestion does not apply to all social media, many platforms have functionality that significantly enhances job searches. For example, Reddit has a variety of sub-forums specific to startups in particular fields (e.g., r/webdevelopment). Although these smaller communities are not as broad as Instagram or Twitter, you are virtually assured that someone who sees your posts will know your field or be a potential client. One thing to bear in mind is that sub-forums and image boards usually have different rules than other social media platforms. To avoid disputes with moderators, it's best to become familiar with these rules and keep a mental checklist of what you're allowed to do on any given network.
**Must Check** [How web is better than mobile app for enterprise software?](https://polestartechno.com/blog/web-is-better-than-mobile-app-for-enterprise-software)
Becoming a startup means taking responsibility for the success of your work. Social networking is an excellent communication tool for promoting yourself and attracting new clients. Nevertheless, using these networks well requires an optimized marketing strategy and a pulse on the new functionality added to the platforms.
638,687 | Fun Tilting Form Validation Idea. | I was playing around on codepen trying to find playful ways of animating UI — without using animation... | 0 | 2021-03-18T17:05:28 | https://dev.to/shadowfaxrodeo/fun-tilting-form-validation-idea-35mm | webdev, css, html, ux | I was playing around on `codepen` trying to find playful ways of animating UI — without using `animation` or `transition` properties. Just tiny movements.
Came up with this and thought I'd share it:
*Try typing more than 12 characters.*
{% codepen https://codepen.io/shadowfaxrodeo/pen/eYBwdMO %}
## How it works.
First, we use the `pattern` attribute alongside `required`.
This allows us to target the input with the `:valid` and `:invalid` pseudo-classes.
The pattern attribute has a regular expression that matches any string from 1 to 12 characters long. If the input's value is in that range we can select it in our `css` using the `:valid` pseudo-class, otherwise use the `:invalid` pseudo-class.
```html
<input required pattern="(\S|\s){1,12}">
```
Now we style the input to tilt to the right if it's invalid. If it's valid it stays level:
```css
input:invalid {
    transform: rotate(1deg);
}
```
But, when the input is empty it's also invalid, and will tip right. We want to give the impression the text has weight. So if the input is empty it should tip the opposite way.
To select the empty input we need to do **something a bit hacky.**
We can use the pseudo-class `:placeholder-shown` — it allows you to style an input only when it is displaying a placeholder.
It requires a valid placeholder value to work. But if you don't want a placeholder you can use a space:
```html
<input required pattern="(\S|\s){1,12}" placeholder=" ">
```
Then we add the styles to get it to tilt to the left when empty:
```css
input:invalid {
    transform: rotate(1deg);
}
input:placeholder-shown {
    transform: rotate(-1deg);
}
```
That works, but it makes the page a **bit disorienting**. Better to only style the input once the user has interacted with it.
Here we'll add the tilt only if the input has been focused or already filled:
```css
input:placeholder-shown:invalid {
    transform: rotate(0deg);
}
input:placeholder-shown:invalid:focus {
    transform: rotate(-1deg);
}
input:invalid {
    transform: rotate(1deg);
}
```
That's it, except for some basic styles added to the codepen.
I like this, it really makes me want to balance out the text.
It could definitely be improved though:
- You could make it more universal by reversing the tilt when the text-direction is right-to-left with `:dir(rtl)`
- It only works with a range starting from 1. Finding a `css` only solution for say, *choose a password between 10 and 16 characters long* would be interesting. Otherwise it could use `js`.
- Add `IE 10-11` support with `:-ms-input-placeholder`
| shadowfaxrodeo |
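For the `js` route, one rough sketch (the helper name, degree values, and the 10-to-16 range are all made up for illustration) maps the value length to a tilt:

```javascript
// Hypothetical helper for a "between 10 and 16 characters" rule:
// negative tips left (too short or empty), positive tips right
// (too long), and 0 keeps the input level.
function tiltFor(length, min, max) {
    if (length < min) return -1; // underweight: tip left
    if (length > max) return 1;  // overweight: tip right
    return 0;                    // balanced
}

// In the browser you would wire it up roughly like this:
// input.addEventListener('input', () => {
//     input.style.transform = `rotate(${tiltFor(input.value.length, 10, 16)}deg)`;
// });

console.log(tiltFor(3, 10, 16));  // -1
console.log(tiltFor(12, 10, 16)); // 0
console.log(tiltFor(20, 10, 16)); // 1
```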
638,768 | Crack the top 50 Golang interview questions | The Go programming language, or Golang, is an open-source programming language similar to C but is op... | 0 | 2021-03-18T19:02:03 | https://www.educative.io/blog/50-golang-interview-questions | go, career, tutorial, learning | The Go programming language, or [Golang](https://www.educative.io/blog/golang-tutorial), is an open-source programming language similar to C but is optimized for quick compiling, seamless concurrency, and developer ease of use.
This language was created and adopted by Google but has been gaining popularity in other companies in recent years as the demand for concurrent, networked programs is increasing.
Whether you're preparing for a [Google job interview](https://www.educative.io/blog/google-coding-interview) or just want to remain a cutting edge developer, Go is the right choice for you. Today, we'll help you practice your Go skills with 50 of the most important Go questions and answers.
**Here’s what we’ll cover today:**
* [Questions on Golang Basics](#basics)
* [Intermediate Golang Questions](#intermediate)
* [Coding challenges in Golang](#challenges)
* [Golang Concurrency Questions](#concurrency)
* [25 More Golang Questions](#more)
* [Next steps for your learning](#next)
<br>
<h4><b> Learn Go in half the time</b></h4>
Go is known for its concurrency capabilities. Stand out from the crowd with hands-on experience using the latest Go concurrency techniques.
<b>[Mastering Concurrency in Go](https://www.educative.io/courses/mastering-concurrency-in-go)</b>
<br>
<a name="basics"></a>
## Questions on Language Basics
<bR>
### 1. What are the benefits of using Go compared to other languages?
- Unlike other languages which started as academic experiments, Go code is pragmatically designed. Every feature and syntax decision is engineered to make life easier for the programmer.
- Golang is optimized for concurrency and works well at scale.
- Golang is often considered more readable than other languages due to a single standard code format.
- Automatic garbage collection is notably more efficient than Java or Python because it executes concurrently alongside the program.
<bR>
### 2. What are string literals?
A string literal is a string constant formed by concatenating characters. The two forms of string literal are raw and interpreted string literals.
Raw string literals are written within backticks (`foo`) and are filled with uninterpreted UTF-8 characters. Interpreted string literals are what we commonly think of as strings, written within double quotes and containing any character except newline and unescaped double quotes.
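For instance, a small sketch showing how the same `\n` sequence behaves in each form:

```go
package main

import "fmt"

func main() {
	raw := `a\nb`    // raw literal: backslash and n stay as two literal characters
	interp := "a\nb" // interpreted literal: \n becomes a single newline byte

	fmt.Println(len(raw), len(interp)) // prints: 4 3
	fmt.Println(raw)                   // a\nb on one line
	fmt.Println(interp)                // a and b on separate lines
}
```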
<bR>
### 3. What data types does Golang use?
Golang uses the following types:
- Method
- Boolean
- Numeric
- String
- Array
- Slice
- Struct
- Pointer
- Function
- Interface
- Map
- Channel
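For illustration, here is a quick sketch declaring several of these types side by side:

```go
package main

import "fmt"

func main() {
	var ok bool = true            // Boolean
	var n int = 42                // Numeric
	var s string = "go"           // String
	arr := [2]int{1, 2}           // Array (fixed length)
	sl := arr[:]                  // Slice (a view over the array)
	type point struct{ x, y int } // Struct
	pt := point{3, 4}
	m := map[string]int{"a": 1}   // Map
	ch := make(chan int, 1)       // Channel
	var p *int = &n               // Pointer

	fmt.Println(ok, n, s, arr, sl, pt, m, p != nil, cap(ch))
}
```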
<bR>
### 4. What are packages in a Go program?
Packages (`pkg`) are directories within your Go workspace that contain Go source files or other packages. Every function, variable, and type from your source files is stored in the linked package. Every Go source file belongs to a package, which is declared at the top of the file using:
```go
package <packagename>
```
You can import and export packages to reuse exported functions or types using:
```go
import <packagename>
```
Golang's standard package is `fmt`, which contains formatting and printing functionalities like `Println()`.
<bR>
### 5. What form of type conversion does Go support? Convert an integer to a float.
Go supports explicit type conversion to satisfy its strict typing requirements.
```go
i := 55 //int
j := 67.8 //float64
sum := i + int(j) //j is converted to int
```
<bR>
### 6. What is a goroutine? How do you stop it?
A [goroutine](https://www.educative.io/blog/golang-tutorial#goroutine) is a function or method that executes concurrently alongside any other goroutines using a special goroutine thread. Goroutine threads are more lightweight than standard threads, with most Golang programs using thousands of goroutines at once.
To create a goroutine, add the `go` keyword before the function call.
```go
go f(x, y, z)
```
You can stop a goroutine by sending it a signal over a channel. Goroutines can only respond to signals if they're told to check for them, so you'll need to include checks in logical places, such as at the top of your `for` loop.
```go
package main

func main() {
	quit := make(chan bool)

	go func() {
		for {
			select {
			case <-quit:
				return
			default:
				// …
			}
		}
	}()

	// …
	quit <- true
}
```
### 7. How do you check a variable type at runtime?
The Type Switch is the best way to check a variable's type at runtime. The Type Switch evaluates variables by type rather than value. Each Switch contains at least one `case`, which acts as a conditional statement, and a `default` case, which executes if none of the cases are true.
For example, you could create a Type Switch that checks if interface value `i` contains the type `int` or `string`:
```go
package main

import "fmt"

func do(i interface{}) {
	switch v := i.(type) {
	case int:
		fmt.Printf("Double %v is %v\n", v, v*2)
	case string:
		fmt.Printf("%q is %v bytes long\n", v, len(v))
	default:
		fmt.Printf("I don't know type %T!\n", v)
	}
}

func main() {
	do(21)
	do("hello")
	do(true)
}
```
<bR>
### 8. How do you concatenate strings?
The easiest way to [concatenate strings](https://www.educative.io/blog/concatenate-string-c) is to use the concatenation operator (`+`), which allows you to add strings as you would numerical values.
```go
package main

import "fmt"

func main() {
	// Creating and initializing strings
	// using var keyword
	var str1 string
	str1 = "Hello "

	var str2 string
	str2 = "Reader!"

	// Concatenating strings
	// Using + operator
	fmt.Println("New string 1: ", str1+str2)

	// Creating and initializing strings
	// Using shorthand declaration
	str3 := "Welcome"
	str4 := "Educative.io"

	// Concatenating strings
	// Using + operator
	result := str3 + " to " + str4
	fmt.Println("New string 2: ", result)
}
```
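When concatenating many pieces in a loop, each `+` allocates a fresh string; the standard library's `strings.Builder` writes into a single growing buffer instead. A brief sketch of that alternative (the `join` helper is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// join concatenates parts using a single growing buffer
// instead of allocating a new string per + operation.
func join(parts []string) string {
	var b strings.Builder
	for _, p := range parts {
		b.WriteString(p)
	}
	return b.String()
}

func main() {
	fmt.Println(join([]string{"Hello ", "Reader!"})) // Hello Reader!
}
```

For one-off joins of a couple of strings, `+` remains the clearest choice.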
<br>
<a name="intermediate"></a>
## Intermediate Golang Questions
<bR>
### 9. Explain the steps of testing with Golang.
Golang supports automated testing of packages with custom testing suites.
To create a new suite, create a file that ends with `_test.go` and includes a `TestXxx` function, where `Xxx` is replaced with the name of the feature you're testing. For example, a function that tests login capabilities would be called `TestLogin`.
You then place the testing suite file in the same package as the file you wish to test. The test file will be skipped on regular execution but will run when you enter the `go test` command.
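As an illustration, a hypothetical `add` function and its test could look like this (the `add_test.go` file name and function names are assumptions; normally the function under test lives in a separate non-test file in the same package):

```go
// add_test.go — picked up by `go test`, ignored by `go build` and `go run`.
package main

import "testing"

// add is the function under test.
func add(a, b int) int {
	return a + b
}

func TestAdd(t *testing.T) {
	if got := add(2, 3); got != 5 {
		t.Errorf("add(2, 3) = %d; want 5", got)
	}
}
```

Running `go test` in the package directory executes `TestAdd` and reports any failures.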
<bR>
### 10. What are function closures?
A function closure is a function value that references variables from outside its body. The function may access and assign values to the referenced variables.
For example, `adder()` returns a closure, each instance of which is bound to its own `sum` variable.
```go
package main

import "fmt"

func adder() func(int) int {
	sum := 0
	return func(x int) int {
		sum += x
		return sum
	}
}

func main() {
	pos, neg := adder(), adder()
	for i := 0; i < 10; i++ {
		fmt.Println(
			pos(i),
			neg(-2*i),
		)
	}
}
```
<bR>
### 11. How do we perform inheritance with Golang?
This is a bit of a trick question: there is no [inheritance](https://www.educative.io/blog/java-inheritance-tutorial) in Golang because it does not support classes.
However, you can mimic inheritance behavior using composition to use an existing struct object to define a starting behavior of a new object. Once the new object is created, functionality can be extended beyond the original struct.
```go
type Animal struct {
	// …
}

func (a *Animal) Eat() { /* … */ }
func (a *Animal) Sleep() { /* … */ }
func (a *Animal) Run() { /* … */ }

type Dog struct {
	Animal
	// …
}
```
The `Animal` struct contains `Eat()`, `Sleep()`, and `Run()` functions. These functions are embedded into the child struct `Dog` by simply listing the struct at the top of the implementation of `Dog`.
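A runnable version of this pattern (with a small `Eat` body standing in for the elided ones) shows the embedded method being promoted to `Dog`:

```go
package main

import "fmt"

type Animal struct {
	Name string
}

// Eat is defined on Animal but promoted to any struct that embeds it.
func (a *Animal) Eat() string {
	return a.Name + " is eating"
}

type Dog struct {
	Animal // embedded: composition, not inheritance
	Breed  string
}

func main() {
	d := Dog{Animal: Animal{Name: "Rex"}, Breed: "Collie"}
	fmt.Println(d.Eat()) // Rex is eating
}
```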
<bR>
### 12. Explain Go interfaces. What are they and how do they work?
Interfaces are a special type in Go that define a set of method signatures but do not provide implementations. A value of an interface type can hold any value that implements those methods.
Interfaces essentially act as placeholders for methods that will have multiple implementations based on what object is using them.
For example, you could implement a `geometry` interface that defines that all shapes that use this interface must have an implementation of `area()` and `perim()`.
```go
type geometry interface {
	area() float64
	perim() float64
}
```
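Because Go interfaces are satisfied implicitly, any type that defines both methods is a `geometry` — no `implements` keyword needed. A sketch with a rectangle type (the `rect` and `measure` names are illustrative):

```go
package main

import "fmt"

type geometry interface {
	area() float64
	perim() float64
}

// rect satisfies geometry implicitly by implementing both methods.
type rect struct {
	width, height float64
}

func (r rect) area() float64  { return r.width * r.height }
func (r rect) perim() float64 { return 2*r.width + 2*r.height }

// measure accepts any value whose type satisfies geometry.
func measure(g geometry) string {
	return fmt.Sprintf("area=%v perim=%v", g.area(), g.perim())
}

func main() {
	fmt.Println(measure(rect{width: 3, height: 4})) // area=12 perim=14
}
```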
<bR>
### 13. What are Lvalue and Rvalue in Golang?
**Lvalue**
- Refers to a memory location
- Represents a variable identifier
- Mutable
- May appear on the left or right side of the `=` operator
> For example: in the statement `x = 20`, `x` is an lvalue and `20` is an rvalue.
**Rvalue**
- Represents a data value stored in memory
- Represents a constant value
- Always appears on the `=` operator's right side.
> For example, the statement `10 = 20` is invalid because an rvalue (`10`) appears on the left side of the `=` operator.
<bR>
### 14. What are the looping constructs in Go?
Go has only one looping construct: the `for` loop. The `for` loop has 3 components separated by semicolons:
- The `Init` statement, which is executed before the loop begins. It's often a variable declaration only visible within the scope of the `for` loop.
- The condition expression, which is evaluated as a Boolean before each iteration to determine if the loop should continue.
- The post statement, which is executed at the end of each iteration.

```go
package main

import "fmt"

func main() {
	sum := 0
	for i := 0; i < 10; i++ {
		sum += i
	}
	fmt.Println(sum)
}
```
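The init and post statements are optional; dropping them leaves just the condition, which is how Go writes a `while` loop (and dropping the condition too gives an infinite loop). For example:

```go
package main

import "fmt"

func main() {
	// "while" loop: init and post statements dropped, condition kept
	n := 1
	for n < 100 {
		n *= 2
	}
	fmt.Println(n) // 128

	// for { … } with no condition at all would loop forever
}
```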
<bR>
### 15. Can you return multiple values from a function?
Yes. A Go function can return multiple values, each separated by commas in the `return` statement.
```go
package main

import "fmt"

func foo() (string, string) {
	return "two", "values"
}

func main() {
	fmt.Println(foo())
}
```
<br>
#### Keep practicing Go.
Hands-on practice is essential to learning Golang.
Educative's text-based courses feature live coding environments and professional Go developer tips to help you pick up the language in half the time.
<b>[Mastering Concurrency in Go](https://www.educative.io/courses/mastering-concurrency-in-go)</b>
<br>
<a name="coding"></a>
## Coding challenges with Golang
<Br>
### 16. Implement a Stack (LIFO)
Implement a stack structure with pop, append, and print top functionalities.
**Solution**
You can implement a stack using a slice object.
```go
package main

import "fmt"

func main() {
	// Create
	var stack []string

	// Push
	stack = append(stack, "world!")
	stack = append(stack, "Hello ")

	for len(stack) > 0 {
		// Print top
		n := len(stack) - 1
		fmt.Print(stack[n])

		// Pop
		stack = stack[:n]
	}
	// Output: Hello world!
}
```
First, we use the built-in `append()` function to implement the append behavior. Then we use `len(stack)-1` to select the top of the stack and print.
For pop, we set the new length of the stack to the position of the printed top value, `len(stack)-1`.
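One caveat: popping an empty slice this way panics with an index error, so production code often wraps the operation in a helper that also reports whether an element existed. A sketch (the `pop` helper is an assumption, not part of the original solution):

```go
package main

import "fmt"

// pop removes and returns the top element,
// reporting via ok whether one existed.
func pop(stack []string) (rest []string, top string, ok bool) {
	if len(stack) == 0 {
		return stack, "", false
	}
	n := len(stack) - 1
	return stack[:n], stack[n], true
}

func main() {
	stack := []string{"world!", "Hello "}
	stack, top, ok := pop(stack)
	fmt.Printf("top=%q ok=%v remaining=%d\n", top, ok, len(stack))
}
```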
<bR>
### 17. Print all permutations of a slice characters or string
Implement the `perm()` function that accepts a slice or string and prints all possible permutations of its characters.
**Solution**
```go
package main

import "fmt"

// Perm calls f with each permutation of a.
func Perm(a []rune, f func([]rune)) {
	perm(a, f, 0)
}

// Permute the values at index i to len(a)-1.
func perm(a []rune, f func([]rune), i int) {
	if i > len(a) {
		f(a)
		return
	}
	perm(a, f, i+1)
	for j := i + 1; j < len(a); j++ {
		a[i], a[j] = a[j], a[i]
		perm(a, f, i+1)
		a[i], a[j] = a[j], a[i]
	}
}

func main() {
	Perm([]rune("abc"), func(a []rune) {
		fmt.Println(string(a))
	})
}
```
We use rune types to handle both slices and strings. Runes are Unicode code points and can therefore parse strings and slices equally.
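The distinction matters for non-ASCII input: indexing a string directly yields bytes, while converting to `[]rune` yields whole characters. For example:

```go
package main

import "fmt"

func main() {
	s := "héllo"
	fmt.Println(len(s))         // 6: byte length (é takes 2 bytes in UTF-8)
	fmt.Println(len([]rune(s))) // 5: number of Unicode code points
}
```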
<bR>
### 18. Swap the values of two variables without a temporary variable
Implement `swap()` which swaps the value of two variables without using a third variable.
**Solution**
```go
package main

import "fmt"

func main() {
	fmt.Println(swap())
}

func swap() []int {
	a, b := 15, 10
	b, a = a, b
	return []int{a, b}
}
```
While this may be tricky in other languages, Go makes it easy.
We can simply include the statement `b, a = a, b`, which swaps what data each variable references without needing a temporary variable.
<bR>
### 19. Implement min and max behavior
Implement `Min(x, y int)` and `Max(x, y int)` functions that take two integers and return the lesser or greater value, respectively.
**Solution**
By default, Go only supports min and max for floats using `math.Min` and `math.Max`. You'll have to create your own implementations to make them work for integers.
```go
package main

import "fmt"

// Min returns the smaller of x or y.
func Min(x, y int) int {
	if x > y {
		return y
	}
	return x
}

// Max returns the larger of x or y.
func Max(x, y int) int {
	if x < y {
		return y
	}
	return x
}

func main() {
	fmt.Println(Min(5, 10))
	fmt.Println(Max(5, 10))
}
```
<bR>
### 20. Reverse the order of a slice
Implement function `reverse` that takes a slice of integers and reverses the slice in place without using a temporary slice.
**Solution**
```go
package main

import "fmt"

func reverse(sw []int) {
	for a, b := 0, len(sw)-1; a < b; a, b = a+1, b-1 {
		sw[a], sw[b] = sw[b], sw[a]
	}
}

func main() {
	x := []int{3, 2, 1}
	reverse(x)
	fmt.Println(x)
}
```
Our `for` loop swaps pairs of elements as the two indexes slide toward each other from the left and right ends of the slice. Eventually, all elements are reversed.
<bR>
### 21. What is the easiest way to check if a slice is empty?
Create a program that checks if a slice is empty. Find the simplest solution.
**Solution**
The easiest way to check if a slice is empty is to use the built-in `len()` function, which returns the length of a slice. If `len(slice) == 0`, then you know the slice is empty.
For example:
```go
package main

import "fmt"

func main() {
	r := []int{1, 2, 3}
	if len(r) == 0 {
		fmt.Println("Empty!")
	} else {
		fmt.Println("Not Empty!")
	}
}
```
<bR>
### 22. Format a string without printing it
Find the easiest way to format a string with variables without printing the value.
**Solution**
The easiest way to format without printing is to use the `fmt.Sprintf()`, which returns a string without printing it.
For example:
```go
package main

import "fmt"

func main() {
	s := fmt.Sprintf("Size: %d MB.", 85)
	fmt.Println(s)
}
```
<br>
<a name="concurrency"></a>
## Golang Concurrency Questions
<br>
### 23. Explain the difference between concurrency and parallelism in Golang
[Concurrency](https://www.educative.io/blog/multithreading-and-concurrency-fundamentals) is when your program can *handle* multiple tasks at once while parallelism is when your program can *execute* multiple tasks at once using multiple processors.
In other words, concurrency is a property of a program that allows you to have multiple tasks in progress at the same time, but not necessarily executing at the same time. Parallelism is a runtime property where two or more tasks are executed at the same time.
Parallelism can therefore be a means to achieve the property of concurrency, but it is just one of many means available to you.
The key tools for concurrency in Golang are goroutines and channels. Goroutines are concurrent lightweight threads while channels allow goroutines to communicate with each other during execution.
<Br>
### 24. Merge Sort
Implement a concurrent [Merge Sort](https://www.educative.io/blog/algorithms-101-merge-sort-quicksort) solution using goroutines and channels.
You can use this sequential Merge Sort implementation as a starting point:
```go
package main

import "fmt"

func Merge(left, right []int) []int {
	merged := make([]int, 0, len(left)+len(right))
	for len(left) > 0 || len(right) > 0 {
		if len(left) == 0 {
			return append(merged, right...)
		} else if len(right) == 0 {
			return append(merged, left...)
		} else if left[0] < right[0] {
			merged = append(merged, left[0])
			left = left[1:]
		} else {
			merged = append(merged, right[0])
			right = right[1:]
		}
	}
	return merged
}

func MergeSort(data []int) []int {
	if len(data) <= 1 {
		return data
	}
	mid := len(data) / 2
	left := MergeSort(data[:mid])
	right := MergeSort(data[mid:])
	return Merge(left, right)
}

func main() {
	data := []int{9, 4, 3, 6, 1, 2, 10, 5, 7, 8}
	fmt.Printf("%v\n%v\n", data, MergeSort(data))
}
```
**Solution**
```go
package main

import "fmt"

func Merge(left, right []int) []int {
	merged := make([]int, 0, len(left)+len(right))
	for len(left) > 0 || len(right) > 0 {
		if len(left) == 0 {
			return append(merged, right...)
		} else if len(right) == 0 {
			return append(merged, left...)
		} else if left[0] < right[0] {
			merged = append(merged, left[0])
			left = left[1:]
		} else {
			merged = append(merged, right[0])
			right = right[1:]
		}
	}
	return merged
}

func MergeSort(data []int) []int {
	if len(data) <= 1 {
		return data
	}
	done := make(chan bool)
	mid := len(data) / 2

	var left []int
	go func() {
		left = MergeSort(data[:mid])
		done <- true
	}()

	right := MergeSort(data[mid:])
	<-done
	return Merge(left, right)
}

func main() {
	data := []int{9, 4, 3, 6, 1, 2, 10, 5, 7, 8}
	fmt.Printf("%v\n%v\n", data, MergeSort(data))
}
```
First, in merge sort, we keep dividing our array recursively into a `left` side and a `right` side, and call the `MergeSort` function on both halves.
Now we have to make sure that `Merge(left, right)` is executed only after we get return values from both recursive calls, i.e. both `left` and `right` must be populated first. Hence, we introduce the `done` channel of type `bool` and send `true` on it as soon as the goroutine has computed `left = MergeSort(data[:mid])`.
The `<-done` operation blocks just before the `Merge(left, right)` statement, so execution does not proceed until our goroutine has finished. After the goroutine has finished and we receive `true` on the `done` channel, the code proceeds forward to the `Merge(left, right)` statement.
<br>
### 25. Sum of Squares
Implement the `SumOfSquares` function which takes an integer, `c` and returns the sum of all squares between 1 and `c`. You'll need to use `select` statements, goroutines, and channels.
For example, entering `5` would return `55` because $1^2 + 2^2 + 3^2 + 4^2 + 5^2 = 55$
You can use the following code as a starting point:
```go
package main
import "fmt"
func SumOfSquares(c, quit chan int) {
	// your code here
}
func main() {
	mychannel := make(chan int)
	quitchannel := make(chan int)
	sum := 0
	go func() {
		for i := 0; i < 6; i++ {
			sum += <-mychannel
		}
		fmt.Println(sum)
	}()
	SumOfSquares(mychannel, quitchannel)
}
```
**Solution**
```go
package main
import "fmt"
func SumOfSquares(c, quit chan int) {
	y := 1
	for {
		select {
		case c <- (y * y):
			y++
		case <-quit:
			return
		}
	}
}
func main() {
	mychannel := make(chan int)
	quitchannel := make(chan int)
	sum := 0
	go func() {
		for i := 1; i <= 5; i++ {
			sum += <-mychannel
		}
		fmt.Println(sum)
		quitchannel <- 0
	}()
	SumOfSquares(mychannel, quitchannel)
}
```
Take a look at our `SumOfSquares` function. First, on **line 4**, we declare a variable `y` and then jump to the `For-Select` loop. We have two cases in our select statements:
- `case c <- (y*y)`: This is to send the square of `y` through the channel `c`, which is received in the goroutine created in the main routine.
- `case <-quit`: This is to receive a message from the main routine that will return from the function.
<br>
<a name="more"></a>
## 25 More Golang Questions
**26.** What is a workspace?
**27.** What is CGO? When would you want to use it?
**28.** What is shadowing?
**29.** What is the purpose of a GOPATH environment variable?
**30.** How are pointers used in Go?
**31.** What types of pointers does Go have?
**32.** Why is Go often called a "Post-OOP" language?
**33.** Does Go have exceptions? How does Go handle errors?
**34.** When would you use a break statement in Go?
**35.** How do untyped constants interact with Go's typing system?
**36.** What is the difference between `=` and `:=` in Go?
**37.** What is the difference between C arrays and Go slices?
**38.** Does Go support method overloading?
**39.** What makes Go so fast?
**40.** How do you implement command-line arguments in Go?
**41.** How does Go handle dependencies?
**42.** What is a unique benefit of Go's compiler?
**43.** What is in the src directory?
**44.** Name one Go feature that would be helpful for DevOps.
**45.** What does GOROOT point to?
**46.** What makes Go compile quickly?
**47.** Implement a binary search tree data structure in Go.
**48.** What does it mean when people say Go has a "rich standard library"?
**49.** What is an advantage of Go evaluating implicit types at compile time?
**50.** Describe the crypto capabilities of Go.
<br>
<a name="next"></a>
## Next steps for your learning
Great job on those practice questions! Go is a rising language and hands-on practice like this is the key to picking it up fast. To best prepare for interviews, you'll want to:
* Develop a detailed study plan
* Practice Go problems on a whiteboard
* Learn how to articulate your thought process aloud
* Prepare for behavioral interviews
To help you learn even faster, Educative has created [**Mastering Concurrency in Go**](https://www.educative.io/courses/mastering-concurrency-in-go). This course includes in-depth explanations and practice projects to show you how to get the most out of Go's awesome concurrency capabilities.
By the end of the course, you'll have the practical Go experience you'll need to pick up this language in half the time.
*Happy learning!*
<Br>
### Continue reading about Golang
- [How to become a Golang developer: 6 step career guide](https://www.educative.io/blog/become-golang-developer)
- [Getting started with Golang: a tutorial for beginners](https://www.educative.io/blog/golang-tutorial)
- [The 6 Best Programming Languages to Learn in 2021](https://www.educative.io/blog/best-programming-language-learn-2021)
| ryanthelin |
638,925 | A look back at my first published npm library 5 years ago | I recently looked back at some npm packages I first published 5 years ago, and thought it would be an... | 0 | 2021-07-13T17:34:43 | https://www.antoniovdlc.me/a-look-back-at-my-first-published-npm-library-5-years-ago/ | javascript, typescript, npm | ---
title: A look back at my first published npm library 5 years ago
published: true
description:
tags: javascript, typescript, npm
canonical_url: https://www.antoniovdlc.me/a-look-back-at-my-first-published-npm-library-5-years-ago/
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cjc3c7txbq8yh01278ww.png
---
I recently looked back at some npm packages I first published 5 years ago, and thought it would be an interesting exercise to bring them up to 2021 standards.
For the sake of this article, we will be focusing on the library https://github.com/AntonioVdlC/html-es6cape by first looking at the original code that has been published for the past 5 years, then we will look at some of the changes I recently made on that project and finally reflect a bit on the tooling current landscape.
{% github AntonioVdlC/html-es6cape no-readme %}
---
## 5 years ago
This was my first npm package, which I built following Kent C. Dodds' course ["How to Write a JavaScript Library"](https://egghead.io/series/how-to-write-an-open-source-javascript-library).
The library per se is just 10 lines of code, so nothing really interesting there, but the tools around the code are very ... 2015!
`index.js`
```js
// List of the characters we want to escape and their HTML escaped version
const chars = {
  "&": "&amp;",
  ">": "&gt;",
  "<": "&lt;",
  '"': "&quot;",
  "'": "&#39;",
  "`": "&#96;"
};

// Dynamically create a RegExp from the `chars` object
const re = new RegExp(Object.keys(chars).join("|"), "g");

// Return the escaped string
export default (str = "") => String(str).replace(re, match => chars[match]);
```
`package.json`
```json
{
  "name": "html-es6cape",
  "version": "1.0.5",
  "description": "Escape HTML special characters (including `)",
  "main": "dist/index.js",
  "jsnext:main": "src/index.js",
  "scripts": {
    ...
  },
  "devDependencies": {
    "babel": "^5.8.29",
    "chai": "^3.4.0",
    "codecov.io": "^0.1.6",
    "istanbul": "^0.4.0",
    "mocha": "^2.3.3",
    "uglify-js": "^2.5.0"
  },
  ...
}
```
As this was 2015, all the hype was around ES6! But because it was 2015, using directly ES6 syntax in the wild was not really an option, hence `babel` being a centerpiece of the toolchain.
Rollup just came about the same time with native support for ES modules. As most npm packages were built around CommonJS (and still are), they started to promote a `jsnext:main` field to link to code using ES modules, as their tooling was optimised for that.
For testing purposes, you had pretty much the default setup of Mocha, Chai and Istanbul, with reports being pushed to CodeCov.
Another interesting aspect is the use of TravisCI, which was also pretty much the default in open source at the time:
`.travis.yml`
```yml
language: node_js
cache:
  directories:
    - node_modules
branches:
  only:
    - master
node_js:
  - iojs
before_install:
  - npm i -g npm@^2.0.0
before_script:
  - npm prune
script:
  - npm run test
  - npm run coverage:check
after_success:
  - npm run coverage:report
```
> Yes, that says `iojs` and `npm@^2.0.0` ... living on the bleeding edge of technology!
---
## Today
So, looking at the code from 5 years ago, there were a few things that needed to be dusted off, and some niceties to add in, because why not:
- Using TypeScript
- Supporting both ES modules and CommonJS
- Migrating the tests to Jest (which provides coverage out-of-the-box)
- Moving from TravisCI to GitHub Actions
- Adding `prettier` for code formatting (+ pre-commit hooks)
### Using TypeScript
As part of modernizing the project, I thought it would be a good idea to port those 10 lines of code to TypeScript. The benefits of using TypeScript for an open-source library are that you have an extra layer of static analysis on potential contributions, and, more importantly, it generates types by default which leads to a better developer experience using the library in some IDEs.
`chars.ts`
```typescript
// List of the characters we want to escape and their HTML escaped version
const chars: Record<string, string> = {
  "&": "&amp;",
  ">": "&gt;",
  "<": "&lt;",
  '"': "&quot;",
  "'": "&#39;",
  "`": "&#96;",
};

export default chars;
```
`index.ts`
```typescript
import chars from "./chars";

// Dynamically create a RegExp from the `chars` object
const re = new RegExp(Object.keys(chars).join("|"), "g");

// Return the escaped string
function escape(str: string = ""): string {
  return String(str).replace(re, (match) => chars[match]);
}

export default escape;
```
### Supporting both ES modules and CommonJS
Supporting both ES modules and CommonJS deliverables from a TypeScript codebase meant a fair amount of changes to the build tooling as well:
`package.json`
```json
{
  "name": "html-es6cape",
  "version": "2.0.0",
  "description": "Escape HTML special characters (including `)",
  "main": "dist/index.cjs.js",
  "module": "dist/index.esm.js",
  "types": "dist/index.d.ts",
  "files": [
    "dist/index.cjs.js",
    "dist/index.esm.js",
    "dist/index.d.ts"
  ],
  "scripts": {
    ...
    "type:check": "tsc --noEmit",
    ...
    "prebuild": "rimraf dist && mkdir dist",
    "build": "npm run build:types && npm run build:lib",
    "build:types": "tsc --declaration --emitDeclarationOnly --outDir dist",
    "build:lib": "rollup -c",
    ...
  },
  ...
  "devDependencies": {
    ...
    "@rollup/plugin-typescript": "^8.2.0",
    ...
    "rimraf": "^3.0.2",
    "rollup": "^2.41.2",
    "rollup-plugin-terser": "^7.0.2",
    "tslib": "^2.1.0",
    "typescript": "^4.2.3"
  }
}
```
Of note is the type-checking step `type:check`, which is paired with other static analysis tools (like ESLint) to ensure the soundness of the source code.
To be able to publish code that would work both for ES modules and CommonJS, I've leveraged Rollup, and after some trial and error, arrived at the following configuration:
`rollup.config.js`
```js
import typescript from "@rollup/plugin-typescript";
import { terser } from "rollup-plugin-terser";

export default [
  {
    input: "src/index.ts",
    output: {
      file: "dist/index.cjs.js",
      format: "cjs",
      exports: "default",
    },
    plugins: [typescript(), terser()],
  },
  {
    input: "src/index.ts",
    output: {
      file: "dist/index.esm.js",
      format: "es",
    },
    plugins: [typescript(), terser()],
  },
];
```
### Migrating tests to Jest
While improving the tooling around writing and building the library, the existing test setup looked a bit too complex for the simple needs of such a small open-source project. Luckily, there exists one tool that provides a test runner, an assertion library and code coverage out-of-the-box: Jest.
`test/index.test.js`
```js
import chars from "../src/chars.ts";
import escape from "../src/index.ts";

describe("html-es6cape", () => {
  it("should coerce the argument to a String (if not null or undefined)", () => {
    expect(escape(true)).toEqual("true");
    expect(escape(27)).toEqual("27");
    expect(escape("string")).toEqual("string");
    expect(escape(undefined)).not.toEqual("undefined");
    expect(escape()).not.toEqual("undefined");
  });

  it("should return an empty string if null or undefined", () => {
    expect(escape()).toEqual("");
    expect(escape(undefined)).toEqual("");
  });

  Object.keys(chars).forEach((key) => {
    it('should return "' + chars[key] + '" when passed "' + key + '"', () => {
      expect(escape(key)).toEqual(chars[key]);
    });
  });

  it("should replace all the special characters in a string", () => {
    expect(
      escape(
        `Newark -> O'Hare & O'Hare <- Hartfield-Jackson ... "Whoop" \`whoop\`!`
      )
    ).toEqual(
      "Newark -&gt; O&#39;Hare &amp; O&#39;Hare &lt;- Hartfield-Jackson ... &quot;Whoop&quot; &#96;whoop&#96;!"
    );
  });

  it("should work as a template tag on template literals", () => {
    expect(
      escape`Newark -> O'Hare & O'Hare <- Hartfield-Jackson ... "Whoop" \`whoop\`!`
    ).toEqual(
      "Newark -&gt; O&#39;Hare &amp; O&#39;Hare &lt;- Hartfield-Jackson ... &quot;Whoop&quot; &#96;whoop&#96;!"
    );
  });
});
```
The code in itself isn't particularly interesting, but to be able to test TypeScript code with Jest required some heavy lifting!
`package.json`
```json
{
  "name": "html-es6cape",
  "version": "2.0.0",
  "description": "Escape HTML special characters (including `)",
  ...
  "scripts": {
    ...
    "test": "jest",
    ...
  },
  ...
  "devDependencies": {
    "@babel/core": "^7.13.10",
    "@babel/preset-env": "^7.13.10",
    "@babel/preset-typescript": "^7.13.0",
    ...
    "@types/jest": "^26.0.20",
    "babel-jest": "^26.6.3",
    ...
    "jest": "^26.6.3",
    ...
  }
}
```
For Jest to understand TypeScript, it needs to compile it first. This is where Babel comes in and produces JavaScript out of the TypeScript source code.
`babel.config.js`
```js
module.exports = {
  presets: [
    ["@babel/preset-env", { targets: { node: "current" } }],
    "@babel/preset-typescript",
  ],
};
```
### Moving from TravisCI to GitHub Actions
After spending way more time than I originally planned on this simple migration, the last piece of the puzzle was to move from TravisCI to GitHub Actions and still have CI/CD working as before (automatic tests + publishing).
`.github/workflows/test.yml`
```yml
name: test
on: push
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [12.x, 14.x, 15.x]
    steps:
      - uses: actions/checkout@v2
      - name: Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v2
        with:
          node-version: ${{ matrix.node-version }}
      - run: npm ci
      - run: npm run format:check
      - run: npm run type:check
      - run: npm test
```
`.github/workflow/publish.yml`
```yml
name: publish
on:
  release:
    types: [created]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: 12
      - run: npm ci
      - run: npm test
  publish-npm:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: 12
          registry-url: https://registry.npmjs.org/
      - run: npm ci
      - run: npm run build
      - run: npm publish
        env:
          NODE_AUTH_TOKEN: ${{secrets.NPM_TOKEN}}
```
With that in place, I had pretty much replicated the CI/CD pipeline I previously had on TravisCI.
---
There are still a few topics we haven't touched upon (pre-commit hooks, auto-formatting, ...), but I am quite pleased with the new setup and I will probably be using a similar one moving forward whenever I'd write a small npm package. | antoniovdlc |
638,927 | Reconmap release notes (0.9.0) | Hi everyone! It is my pleasure to share the greatest and latest from Reconmap, the open source and Sa... | 0 | 2021-03-18T21:16:40 | https://dev.to/reconmap/reconmap-release-notes-0-9-0-2p15 | security, pentesting, opensource, releases | Hi everyone! It is my pleasure to share the greatest and latest from Reconmap, the open source and SaaS **[pentest automation and reporting platform](https://reconmap.com)**, in its version 0.9.0 (maturity version is around the corner!).
Before diving into the details I am going to start thanking our early adopters for the magnificent feedback shared. Their input is helping us shape our roadmap and validate our ideas. ♥
# New features
## New vulnerability statuses
Up until now vulnerabilities had two possible statuses: Open and closed. That was modest to say the least. Since version 0.9.0 the possible options are now many more:
- Open (reported, unresolved)
- Confirmed (unexploited, exploited)
- Resolved (remediated, mitigated)
- Closed (remediated, mitigated, rejected)

Special kudos to [GlitchWitch](https://glitchwitch.io/) for her suggestion on new statuses.
## Command line improvements
The Reconmap CLI (rmap) is a key part of our solution. This release added many things to the REST API to support our command line command. One such feature is the command search shown in the example:

## Document library
Many security projects involve sharing documents internally and with colleagues, but there wasn't a clear place in Reconmap to store those. This version introduces a simple library for sharing documents such as NDAs (non-disclosure agreements) and security questionnaires.

## Other new features
Apart from these features we added the following:
* Archive option for projects
* Bulk transition and deletion of tasks
* New task due date (with reminders)
* New client role
* Automatic password generation for new users
## Bugfixes
As usual, there were a number of bug fixes pushed to different parts of the system, and the test coverage has seen an increase for the REST API code.
# Next release
If you liked 0.9.0 stay tuned for the next version, stay safe! | santiago |
638,972 | Hacker BI: Airtable + Metabase | You manage what you measure...right? But at a startup, just measuring your business can be hard.... | 0 | 2021-03-18T23:53:21 | https://docs.sequin.io/airtable/playbooks/metabase | startup, analytics, postgres, tutorial | You manage what you measure...right?
But at a startup, just measuring your business can be hard. Setting up analytics, cleaning up your data, structuring it properly.
Luckily, [Airtable](https://airtable.com) can make it really easy to pull together your data from all sorts of sources. Then Metabase can help you ask the hard questions of your data so you can become *data driven*.
A tool I helped build, [Sequin](https://sequin.io) makes it easy for you to connect Airtable to Metabase so you can standup an analytics stack in no time. In this tutorial, we'll walk through it step by step.
## What is Metabase
Airtable needs no introduction, but you may be new to Metabase.
[Metabase](https://metabase.com) is a powerful platform for asking questions of data. Sequin allows you to connect all your Airtable data to Metabase.
While Metabase doesn’t have native support for Airtable, it does come with first-class support for Postgres. So we’re going to use [Sequin](https://sequin.io) to turn your Airtable base into a Postgres database that plugs right into Metabase.

Then, in the background, Sequin will do all the hard work to keep the data current so your metrics are always up to date.
## Airtable Setup
For this tutorial, we'll use [Airtable's inventory tracking template](https://airtable.com/templates/local-business/expDrHGuyjSQlrKTq/inventory-tracking):

This base contains simple data around inventory, orders, and sales that almost any business selling products or services might work with. You'll use this data to build a dynamic dashboard in Metabase that shows how sales and gross profits are trending:

First, add the Airtable inventory tracking template to your Airtable workspace:
1. Log into your [Airtable workspace](https://airtable.com/) and then open the [inventory tracking template](https://airtable.com/templates/local-business/expDrHGuyjSQlrKTq/inventory-tracking) in a new tab.
2. Click the **Use Template** button to add the inventory tracking template to your workspace.

## Sequin Setup
Now, use Sequin to provision a Postgres database that contains all the data in the inventory tracker base:
1. Go to https://app.sequin.io/signup and create a Sequin account:

2. Connect your base to Sequin using the tutorial or check out the [Quickstart guide](https://docs.sequin.io/get-started). It's as easy as copying and pasting your API Key into Sequin, selecting the inventory tracker base you just added to your workspace, and clicking **Create**:
<img alt="Add resource" src="https://docs.sequin.io/assets/metabase/006_sync_flow.gif" style=" box-shadow: 0px 24px 24px rgba(99, 99, 99, 0.165264);" />
3. Sequin will immediately provision you a Postgres database and begin syncing all the data in the inventory tracker base. You'll be provided with credentials for you new database. Keep these handy as you'll use them to connect your Sequin database to Metabase:

## Metabase Setup
With an Open Source license, you can choose to [install Metabase](https://www.metabase.com/start/) for free if you wish. Or, you can pay to use the hosted version of Metabase - known as [Metabase cloud](https://www.metabase.com/start/).
In this tutorial, we'll use Metabase Cloud (which comes with a nice 14-day trial).
Simply go to https://www.metabase.com/ and create a **Metabase Cloud** account by clicking **Get Metabase** and selecting to start a free trial:
<img alt="Create metabase account" src="https://docs.sequin.io/assets/metabase/008_metabase_flow.gif" style=" box-shadow: 0px 24px 24px rgba(99, 99, 99, 0.165264);" />
You'll go through several steps to create an account, select your cloud url, and add payment information (you can cancel at any time in the trial).
Metabase will then spin up your cloud instance and email you in a couple minutes when everything is ready.
Then, just login to your new Metabase account.
## Connect Sequin to Metabase
You'll add your Sequin Postgres database to Metabase just as you would any other Postgres database:
1. Click the gear icon in the top right corner and select **Admin**:

2. On the Metabase Admin page, select the **Databases** tab and click the blue **Add database** button:
3. On the add database page, select **PostgreSQL** as the database type and give your new database a name - in this case, something like "Airtable - Inventory Manager." Then, enter the **Host**, **Port**, **Database Name**, **Username**, and **Password** for your Sequin database (in case you closed the tab, you can find all this information by clicking the **Connect** button next to the resource in the [Sequin console](https://app.sequin.io)). Lastly, toggle on **SSL** and click the blue **Save** button at the bottom of the page:

4. Metabase will confirm it can connect to your Sequin Postgres database and present you with a success modal. Click the **I'm good thanks** link to close the modal:

5. You'll see that your new database has been added! You can now exit the admin page by clicking the gear icon in the top right corner and selecting **Exit admin**:

## Create a new dashboard
All your Airtable data is now accessible in Metabase. Now, you'll use Metabase to ask questions of your data and build a dashboard.
Before you start querying your Airtable data and building visualizations, set up your dashboard:
1. Click the plus icon in the top right and select **New dashboard**:

2. In the modal that appears, give your new dashboard a name - something like "Sequin Tutorial." Then click the blue **Create** button:

You'll now see an empty dashboard. Let's fill it with some helpful insights.
## Create your first *question*
Metabase calls metrics and visualizations you derive from your data *questions*. While this nomenclature feels a little casual compared to other BI tools, I like the mental model it creates. It encourages you to consider, "what questions do I need to answer from my data?"
Since you are working with inventory data from a small business, the first question you might want to answer is "is the store profitable?" In business language, this metric is called the *gross profit* of the store. It would also be helpful to inspect the gross profit at different time periods to see how it changes.
To calculate gross profit you'll need to sum up all the sales of the store and then subtract any costs. To then inspect the gross profit over different time periods, you'll want to add a date filter that only includes sales and costs from specified time periods.
With your question defined, you can now *ask* it in Metabase.
To do so, click **Ask question** in the navigation bar and then select **Native query** to create a new question. On the new question page, select the **Airtable - Inventory Manager** database that you just added:
<img alt="Ask question" src="https://docs.sequin.io/assets/metabase/015_add_question.gif" style=" box-shadow: 0px 24px 24px rgba(99, 99, 99, 0.165264);" />
> You might be wondering why you are using the **Native query** builder to create your question instead of the *Simple question* or *Custom question* options. You'll often want to use native queries to build questions on your Airtable data because, as you'll see, your Airtable data will contain more complex data structures like [Postgres arrays](https://www.postgresql.org/docs/9.1/arrays.html) and [JSON](https://www.postgresql.org/docs/10/datatype-json.html). SQL is an easy way to work with these objects and format your results. You can learn more about how to query your Airtable data in SQL by reading the Sequin [Cheat Sheet](https://docs.sequin.io/cheat-sheet) and [Reference](https://docs.sequin.io/reference#querying-airtable-with-sql).
You can now use SQL to calculate the gross profit. To get started, calculate the gross profit without the ability to filter by date:
```sql
select
sum(sales_orders.revenue::numeric - (product_inventory.manufacturer_price * sales_orders.quantity)) as "Gross Profit"
from sales_orders
join product_inventory on sales_orders.product[1] = product_inventory.id;
```
There are several flourishes in this query that are worth unpacking:
1. First, you are selecting the `revenue` column from the `sales_orders` table. Because `revenue` is a calculated field in Airtable, it appears in your Sequin Postgres database as type `text`. Since you can't add and subtract `text` values, you are then casting this to type numeric using `::numeric`.
2. Next, you are calculating the `Gross Profit` by taking the sum of all the revenue and subtracting the costs (i.e. `product_inventory.manufacturer_price * sales_orders.quantity`)
3. Finally, you are joining the `sales_orders` table to the `product_inventory` table. In Airtable, `sales_orders.product` is a linked record to the `product_inventory` table. Linked records appear in your Sequin Postgres database as arrays, because linked records can contain multiple values. So to complete the join, you use `sales_orders.product[1]` to extract the first (and only) value of the array to match it to the corresponding record in the `product_inventory` table.
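As an aside: if a linked-record field held more than one value, `product[1]` would silently drop the extras. Here is a sketch of how you might instead expand every linked product into its own row with Postgres's `unnest()` (not needed for this base, where each order links a single product):

```sql
-- One output row per linked product (hypothetical multi-value case)
select
  sales_orders.date,
  unnest(sales_orders.product) as product_id
from sales_orders;
```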
When you run the query by clicking the blue **play** button, you'll see that the small business is indeed making money:

Since gross profit is a dollar value, adjust the settings for this question to add a dollar sign as well as two decimal places:

That looks more like it! As a last step, recall that you want to be able to inspect the gross profit over different time periods. To do this in Metabase, you'll edit your underlying SQL query to add a `WHERE` clause with a **variable**:
```sql
select
sum(sales_orders.revenue::numeric - (product_inventory.manufacturer_price * sales_orders.quantity)) as "Gross Profit"
from sales_orders
join product_inventory on sales_orders.product[1] = product_inventory.id
[[where {{date}}]];
```
Breaking this down: the `{{date}}` syntax creates the variable. Then, you wrap the entire `WHERE` clause in double brackets (`[[where {{date}}]]`) to make the statement optional in case no `date` is provided (you can learn more about variables in the [Metabase documentation](https://www.metabase.com/docs/latest/users-guide/13-sql-parameters.html)).
Metabase will detect the `{{date}}` variable in the query and open the variable settings to the right. Since the gross profit question you are building will live in a dashboard - you will configure the date variable to be a **Field Filter**.
To do so, you'll select **Field Filter** as the variable type. Since your `date` variable is actually filtering the `date` column of the `sales_orders` table, you'll then map the variable accordingly. Last, since you want to be able to select different time periods, you'll set the widget type to **Date Range**:
<img alt="Configure variable" src="https://docs.sequin.io/assets/metabase/018_configure_variable.gif" style=" box-shadow: 0px 24px 24px rgba(99, 99, 99, 0.165264);" />
You'll now see a date selector in the question, and if you select a new date range the value of the gross profit will update:

> Keep in mind that in the Airtable template we are using, the date range is from February 17, 2017 to February 27, 2017 :)
With everything looking good, save your question by clicking **Save** and naming your new question something like "Gross Profit." After saving, you'll return to the Metabase home page.
## Add your question to the dashboard
Finally, add the gross profit question as well as a date range filter to your dashboard:
<img alt="Add question to dashboard" src="https://docs.sequin.io/assets/metabase/020_add_to_dash.gif" style=" box-shadow: 0px 24px 24px rgba(99, 99, 99, 0.165264);" />
1. First, open up your dashboard by clicking **Browse all items** from the home screen and then selecting the **Sequin Tutorial** dashboard.
2. Edit the dashboard by clicking the pencil icon in the top right.
3. Add your **Gross Profit** question to the dashboard by clicking the plus icon and selecting it from the modal. Resize and position as you would like.
4. Then, add a date range filter to the dashboard by clicking the **Add a filter** icon, selecting **Time**, and then selecting **Date Range**.
5. Finally, **map the date range filter to the date variable** in your gross profit question and then click the **Done** button.
6. Last but not least, click **Save**.
You've now created a new dashboard with a **Gross Profit** question that you can filter by date. The foundations of your dashboard are now in place.
## Add a graph
Now, add some graphs to your dashboard to reveal more about the small business.
Starting with a Metabase *question*, it would be nice to know how many sales happen each day and on which sales platform. A stacked bar chart might tell this story well.
To build the question, click the **Ask a question** button and then select **Native query** just as you did before. Select your **Airtable - Inventory Manager** database.
Now write the SQL that answers this question:
```sql
select
sales_orders.date::DATE,
sales_orders.revenue::NUMERIC,
sales_orders.sale_platform
from
sales_orders
[[where {{date}}]];
```
This query should look more familiar to you now:
* You'll notice that in the `SELECT` statement you are again using `CAST` (i.e. the `::`) to ensure that the data returned is the right type.
* You are then adding an optional `WHERE` clause with the `{{date}}` variable so that your date range picker will also affect this data as well.
When you add the SQL to your question, you'll first configure the `{{date}}` variable just as you did with your gross profit question:

Finally, let's present this data as a stacked bar chart. Metabase makes this pretty easy:
<img alt="Set visualization" src="https://docs.sequin.io/assets/metabase/022_set_visual.gif" style=" box-shadow: 0px 24px 24px rgba(99, 99, 99, 0.165264);" />
1. Click the **Visualize** button in the lower right corner and select the **Bar** option.
2. The settings pane will open and Metabase will detect the data types to auto format the axis. All you need to do is toggle the chart to be a *stacked* bar chart by clicking the **Display** tab and selecting **Stack**.
The chart looks great. Save it by naming it something like "Sales by Date and Platform." Then repeat the steps you performed with the gross profit question to add it to your **Sequin Tutorial** dashboard.
As a last step, you'll need to map the **Date Range** filter to the **Date Variable** in your new graph by clicking the **Date Range Filter** and then selecting the **Date** variable in the "Sales by Date and Platform" graph:

When you're done, click the **Save** button and you'll see the beginnings of a dashboard that quickly tells the story of this small business:

## Conclusion
In this tutorial you've learned how to build a dynamic dashboard in Metabase using Sequin and Airtable. From this starting point, you can use SQL to answer almost any question hiding in your Airtable base and present it in a clean dashboard that you can easily share. | thisisgoldman |
639,037 | JS Set Object (2 handy usages) | Hello there guys. Today i will talk for the SET object which is storing new unique values... | 0 | 2021-03-19T11:27:37 | https://dev.to/feco2019/js-using-set-3-handy-usages-2bgl | functional, javascript, fundementals, algorithms | # Hello there guys.
Today I will talk about the **Set** object, which stores unique values of any type, whether primitive values or object references.
This can be handy in some cases; I will show you two of them in this thread.
## Cases
1. Removing duplicate records from arrays
2. Using the **add()** method to add values to the **Set** object
So first, let's create our array and try our first case. In my example we will add some values twice; this way we will see how **Set()** helps us find and remove the duplicates.
```js
let myArray = ['Jim','Jhon','Grace','Felice','Jhon','Sylia','Grace'] ;
let myArrayClear = [...new Set(myArray)]
console.log(myArrayClear)
```
The result, without the duplicate records.

Check the results in your console: the duplicate records should be gone. Job done!
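The deduplication above can also be wrapped in a small reusable helper. A quick sketch (the function name `unique` is my own, not from the post):

```javascript
// Return a new array with duplicate values removed,
// preserving the order of first appearance.
function unique(values) {
  return [...new Set(values)];
}

const names = ['Jim', 'Jhon', 'Grace', 'Felice', 'Jhon', 'Sylia', 'Grace'];
console.log(unique(names)); // ['Jim', 'Jhon', 'Grace', 'Felice', 'Sylia']
```

Because `Set` preserves insertion order, the first occurrence of each value wins.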
Next, we will see how a new instance of the **Set** object works nicely with the **add()** method, adding values while still avoiding duplicates. Let's take a look.
```js
const mySetObject = new Set()
mySetObject.add(2)
mySetObject.add('Hello Word')
mySetObject.add(4)
mySetObject.add({a: 1, b: 2})
mySetObject.add(2)
```
After our additions (number, string, object) we can iterate through our set, which contains different values and data types. You will notice that we added the number **2** twice; when we loop through, you will see that it appears only once, because of the rule that **"a value in the Set may only occur once"**. So let's use **for** to see what we get.
```js
const mySetObject = new Set()
mySetObject.add(2)
mySetObject.add('Hello Word')
mySetObject.add(4)
mySetObject.add({a: 1, b: 2})
mySetObject.add(2)
for (let item of mySetObject) console.log(item)
```

That's all for today!
Have a nice workday, guys. If you need further explanation, do not hesitate to contact me or find me on GitHub or LinkedIn.
GitHub : https://github.com/feco2019
Linkedin : https://www.linkedin.com/in/dimitris-chitas-930285191/ | feco2019 |
639,097 | Mining Thai Food Text with R | This is part 3 in the Thai Food Dishes series. Text Mining Which raw material(s)... | 11,788 | 2021-03-19T05:27:00 | https://paulapivat.com/post/thai_dishes_project/#text_mining | textmining, r, datascience | This is part 3 in the Thai Food Dishes series.
### Text Mining
#### Which raw material(s) are most popular?
One way to answer this question is to use text mining to **tokenize** the dish names by word and count the words by frequency as one measure of popularity.
In the below bar chart, we see the frequency of words across all Thai dishes. **Mu** (หมู), which means pork in Thai, appears most frequently across all dish types and sub-groupings. Next we have **kaeng** (แกง), which means curry. **Phat** (ผัด) comes in third, suggesting "stir-fry" is a popular cooking mode.
As we can see **not** all words refer to raw materials, so we may not be able to answer this question directly.

```r
library(tidytext)
library(scales)
library(tidyverse) # for read_csv() and ggplot2
# new csv file after data cleaning (see above)
df <- read_csv("../web_scraping/edit_thai_dishes.csv")
df %>%
select(Thai_name, Thai_script) %>%
# can substitute 'word' for ngrams, sentences, lines
unnest_tokens(ngrams, Thai_name) %>%
# to reference thai spelling: group_by(Thai_script)
group_by(ngrams) %>%
tally(sort = TRUE) %>% # alt: count(sort = TRUE)
filter(n > 9) %>%
# visualize
# pipe directly into ggplot2, because using tidytools
ggplot(aes(x = n, y = reorder(ngrams, n))) +
geom_col(aes(fill = ngrams)) +
scale_fill_manual(values = c(
"#c3d66b",
"#70290a",
"#2f1c0b",
"#ba9d8f",
"#dda37b",
"#8f5e23",
"#96b224",
"#dbcac9",
"#626817",
"#a67e5f",
"#be7825",
"#446206",
"#c8910b",
"#88821b",
"#313d5f",
"#73869a",
"#6f370f",
"#c0580d",
"#e0d639",
"#c9d0ce",
"#ebf1f0",
"#50607b"
)) +
theme_minimal() +
theme(legend.position = "none") +
labs(
x = "Frequency",
y = "Words",
title = "Frequency of Words in Thai Cuisine",
subtitle = "Words appearing at least 10 times in Individual or Shared Dishes",
caption = "Data: Wikipedia | Graphic: @paulapivat"
)
```
We can also see words common to both Individual and Shared Dishes. We see other words like **nuea** (beef), **phrik** (chili) and **kaphrao** (basil leaves).

```r
# frequency for Thai_dishes (Major Grouping) ----
# comparing Individual and Shared Dishes (Major Grouping)
thai_name_freq <- df %>%
select(Thai_name, Thai_script, major_grouping) %>%
unnest_tokens(ngrams, Thai_name) %>%
count(ngrams, major_grouping) %>%
group_by(major_grouping) %>%
mutate(proportion = n / sum(n)) %>%
select(major_grouping, ngrams, proportion) %>%
spread(major_grouping, proportion) %>%
gather(major_grouping, proportion, c(`Shared dishes`)) %>%
select(ngrams, `Individual dishes`, major_grouping, proportion)
# Expect warming message about missing values
ggplot(thai_name_freq, aes(x = proportion, y = `Individual dishes`,
color = abs(`Individual dishes` - proportion))) +
geom_abline(color = 'gray40', lty = 2) +
geom_jitter(alpha = 0.1, size = 2.5, width = 0.3, height = 0.3) +
geom_text(aes(label = ngrams), check_overlap = TRUE, vjust = 1.5) +
scale_x_log10(labels = percent_format()) +
scale_y_log10(labels = percent_format()) +
scale_color_gradient(limits = c(0, 0.01),
low = "red", high = "blue") + # low = "darkslategray4", high = "gray75"
theme_minimal() +
theme(legend.position = "none",
legend.text = element_text(angle = 45, hjust = 1)) +
labs(y = "Individual Dishes",
x = "Shared Dishes",
color = NULL,
title = "Comparing Word Frequencies in the names Thai Dishes",
subtitle = "Individual and Shared Dishes",
caption = "Data: Wikipedia | Graphics: @paulapivat")
```
#### Which raw materials are most important?
We can only learn so much from frequency, so text mining practitioners have created **term frequency - inverse document frequency** to better reflect how important a word is in a document or corpus (further details [here](https://en.wikipedia.org/wiki/Tf%E2%80%93idf)).
Again, the words don't necessarily refer to raw materials, so this question can't be fully answered directly here.

#### Could you learn about Thai food just from the names of the dishes?
The short answer is "yes".
We learned just from frequency and "term frequency - inverse document frequency" not only the most frequent words, but the relative importance within the current set of words that we have tokenized with `tidytext`. This informs us of not only popular raw materials (Pork), but also dish types (Curries) and other popular mode of preparation (Stir-Fry).
We can even examine the **network of relationships** between words. Darker arrows suggest a stronger relationship between pairs of words, for example "nam phrik" is a strong pairing. This means "chili sauce" in Thai and suggests the important role that it plays across many types of dishes.
We learned above that "mu" (pork) appears frequently. Now we see that "mu" and "krop" are more related than other pairings (note: "mu krop" means "crispy pork"). We also saw above that "khao" appears frequently in Rice dishes. This alone is not surprising as "khao" means rice in Thai, but we see here "khao phat" is strongly related suggesting that fried rice ("khao phat") is quite popular.

```r
# Visualizing a network of Bi-grams with {ggraph} ----
library(igraph)
library(ggraph)
set.seed(2021)
thai_dish_bigram_counts <- df %>%
select(Thai_name, minor_grouping) %>%
unnest_tokens(bigram, Thai_name, token = "ngrams", n = 2) %>%
separate(bigram, c("word1", "word2"), sep = " ") %>%
count(word1, word2, sort = TRUE)
# filter for relatively common combinations (n > 2)
thai_dish_bigram_graph <- thai_dish_bigram_counts %>%
filter(n > 2) %>%
graph_from_data_frame()
# polishing operations to make a better looking graph
a <- grid::arrow(type = "closed", length = unit(.15, "inches"))
set.seed(2021)
ggraph(thai_dish_bigram_graph, layout = "fr") +
geom_edge_link(aes(edge_alpha = n), show.legend = FALSE,
arrow = a, end_cap = circle(.07, 'inches')) +
geom_node_point(color = "dodgerblue", size = 5, alpha = 0.7) +
geom_node_text(aes(label = name), vjust = 1, hjust = 1) +
labs(
title = "Network of Relations between Word Pairs",
subtitle = "{ggraph}: common nodes in Thai food",
caption = "Data: Wikipedia | Graphics: @paulapivat"
) +
theme_void()
```
Finally, we may be interested in word relationships *within* individual dishes.
The below graph shows a network of word pairs with moderate-to-high correlations. We can see certain words clustered near each other with relatively dark lines: kaeng (curry), pet (spicy), wan (sweet), khiao (green curry), phrik (chili) and mu (pork). These words represent a collection of ingredient, mode of cooking and description that are generally combined.

```r
library(widyr) # for pairwise_cor()
set.seed(2021)
# Individual Dishes
individual_dish_words <- df %>%
select(major_grouping, Thai_name) %>%
filter(major_grouping == 'Individual dishes') %>%
mutate(section = row_number() %/% 10) %>%
filter(section > 0) %>%
unnest_tokens(word, Thai_name) # assume no stop words
individual_dish_cors <- individual_dish_words %>%
group_by(word) %>%
  filter(n() >= 2) %>% # looking for co-occurring words, so must be 2 or greater
pairwise_cor(word, section, sort = TRUE)
individual_dish_cors %>%
filter(correlation < -0.40) %>%
graph_from_data_frame() %>%
ggraph(layout = "fr") +
geom_edge_link(aes(edge_alpha = correlation, size = correlation), show.legend = TRUE) +
geom_node_point(color = "green", size = 5, alpha = 0.5) +
geom_node_text(aes(label = name), repel = TRUE) +
labs(
title = "Word Pairs in Individual Dishes",
subtitle = "{ggraph}: Negatively correlated (r = -0.4)",
caption = "Data: Wikipedia | Graphics: @paulapivat"
) +
theme_void()
```
#### Summary
We have completed an exploratory data project where we scraped, cleaned, manipulated and visualized data using a combination of Python and R. We also used the `tidytext` package for basic text mining tasks to see if we could gain some insights into Thai cuisine using words from dish names scraped off Wikipedia.
For more content on data science, R, Python, SQL and more, [find me on Twitter](https://twitter.com/paulapivat). | paulapivat |
639,168 | Startup Founder Tips for Conventions – Real & Virtual | Five Reasons to go to conferences This focuses on attending conferences, not setting up exhibits.... | 0 | 2021-03-19T07:03:37 | https://perceptionbox.io/business/startup-founder-tips-for-conventions-real-virtual/ | perceptionbox, outsourcing, startup | ---
title: Startup Founder Tips for Conventions – Real & Virtual
published: true
description:
tags: PerceptionBox, Outsourcing, startups
canonical_url: https://perceptionbox.io/business/startup-founder-tips-for-conventions-real-virtual/
---

<h2>Five Reasons to go to conferences</h2>
<p>This focuses on attending conferences, not setting up exhibits. Running an exhibit at a conference is more expensive and requires more resources to support. An exhibit could be useful for you, but let’s make that a separate discussion. There are a lot of reasons to simply attend tech conferences, summits, and tradeshows, including:</p>
<ol start="1"><li>Find prospective investors and customers</li>
<li>Make connections and share experiences</li>
<li>Learn from industry experts and get inspired</li>
<li>Get some free press</li>
<li>Observe and learn the ropes for future events</li>
</ol>
<h2><a></a>Research the venue, vendors and sponsors</h2>
<p>Ideally, define the events you want to attend 2-3 months (or more) in advance so you have ample time to prepare. Make sure the event will be useful for you. As the founder or partner in a startup, your time is valuable. If the event fits your niche, very few things could be more valuable than attending it with a plan to get the most from it.</p>
<ul><li>How large is the venue?</li>
<li>What kind of reviews did it receive in previous years?</li>
<li>How much do the tickets cost?</li>
<li>Are its vendors and sponsors closely aligned to your target market?</li>
</ul>
<p>If you decide the convention is a good match for your startup, identify specific companies you want to meet with – then, identify who will be representing them. This may take some research on LinkedIn and/or social channels. You can also write the company directly if all else fails and try to arrange a meeting with their representative, in advance.</p>
<h2><a></a>What are your goals?</h2>
<p>As a startup, you probably have a lengthy shopping list of things that could be useful for you. Make that shopping list – and prioritize the items on it. Be realistic with your expectations. Merely attending a conference won’t bring you investors, but it may get you a foot in the door. Some objectives might be:</p>
<ul><li>Enter any startup pitch competitions or contests for which you feel you can be prepared.</li>
<li>Meet and talk with representatives from angel and venture capital firms to see if there’s a match between your startup and what they invest in.</li>
<li>Connect with prospective clients and customers who may be interested in what your startup aims to offer. Find out what software they’re using now, how happy they are with it, features they’d like to see, etc.</li>
<li>Connect with groups and organizations that represent your industry. See if they have an events calendar of their own. This can range from programmer groups to professional associations or lobbyist groups on the high-end.</li>
<li>Talk with vendors you will need in the future like providers of cloud or hosting services, advertising agencies, distributors, and retailers, as appropriate to lay the foundation for future deals.</li>
<li>See how others have handled obstacles and problems that you’re still trying to resolve.</li>
<li>Try to get some free publicity with journalists or influencers that you follow regularly.</li>
</ul>
<h2><a></a>Define in advance what you can offer on the spot</h2>
<p>Most important is knowing what you are able to offer based upon where you are in development. Know what kind of deals you can make and be ready to offer them on the spot. This can range from promoting a free webinar to explain your project after the conference to early access or a free trial. Of course, promote sales or subscriptions if you’re ready to provide at least basic customer support.</p>
<p>If funding is your primary objective, be able to state concisely how much you need and what you need it for, should someone ask. You don’t want to waste their time or yours; suffice it to say most investors know and expect that the details are more complex. Your answer will tell them whether they should take further interest in your startup or not.</p>
<p>Your startup may need other things – so know your budget and be ready to wheel and deal, because that’s what all of the vendors are there to do, too.</p>
<p>Also, if you have questions, write them out in advance. You’ll have direct access to industry experts and they’ll be happy to answer most questions for free.</p>
<h2><a></a>Bring Promotional materials</h2>
<p>Whether you’re attending physical events or virtual ones, you’ll want promotional materials. In the physical, you’ll want plenty of nice business cards at the very least. From there, you may step it up with a promotional brochure, cool (but cheap) swag like a pen with your logo, tagline, and website URL. It can also be a good idea to carry at least a few <a href="https://www.google.com/url?q=https://www.entrepreneur.com/article/242202&sa=D&source=editors&ust=1616140508237000&usg=AOvVaw0ep_zvaH5WlAVKH1Ea_caN" target="_blank">press kits</a> with you. If you’re doing virtual conventions, you can use PDF versions of the same.</p>
<p>Practice and memorize your <a href="https://www.google.com/url?q=https://www.forbes.com/sites/ryanrobinson/2017/09/05/elevator-pitch-tips-making-impression/?sh%3D643b33fe7234&sa=D&source=editors&ust=1616140508237000&usg=AOvVaw1XXpuLoUdCiCUwOpW05Ww-" target="_blank">elevator pitch</a> so that it sounds natural. This can be based, at least in part, upon <a href="https://www.google.com/url?q=https://perceptionbox.io/business/your-product-vision-as-a-first-step-for-outsourcing/&sa=D&source=editors&ust=1616140508237000&usg=AOvVaw0NIFDLmulHnmuqOCV1WbRA" target="_blank">your product vision statement</a>. It’s useful to have a short but memorable 15-second version that segues into a longer version according to listener interest.</p>
<h2><a></a>The more you network, the easier it is to network more</h2>
<p>Buddy-up! It can be awkward trying to socialize alone. Ideally, bring others from your startup with you to make as many contacts as you can. Alternatively, if you have friends or colleagues in other startups see if they’d like to tag-team the event. Share your goals and keep each other informed of opportunities throughout the conference.</p>
<p>The real goal is to create opportunities for introductions. Each and every introduction you make has a chance to lead to more introductions. Or to put it another way, “The more you network, the easier it gets to network more.” Qualified introductions, where one party is in the market for what the second party is offering are always best. Networking, where one party is in the market for something the second party knows someone else is able to offer also works.</p>
<h2><a></a>Keep an eye open for journalists</h2>
<p>As noted in promotional materials, it’s always a good idea to have press kits on hand and available to hand out. Tech and business journalists, television and magazine reporters attend conventions, too. They might make videos or write articles; suffice it to say they’re constantly on the lookout for an interesting story, new technologies, cool quotes, and industry insight.</p>
<p>A press kit makes it easy for journalists to write a story about your startup. If one journalist picks up your story, you dramatically increase your chances of getting picked up by others. It’s also “news” that you can publish on your website, press kit, or blog.</p>
<h5><a></a>Get world-class developers with no stress</h5>
<p>We build and scale software teams and R&D centres. Flexible staffing, full-time staffing, and R&D centres in Kyiv, Ukraine.</p>
<p style="text-align: center;"><a href="https://www.google.com/url?q=https://perceptionbox.io/business/startup-founder-tips-for-conventions-real-virtual/%23&sa=D&source=editors&ust=1616140508238000&usg=AOvVaw3rRDaono9n7emjmarfiWm2" target="_blank">Contact us</a></p>
<h2><a></a>Take notes on the vendor exhibits</h2>
<p>If you have reason to think that your startup will want its own exhibit in the vendor area, take stock of the most impressive setups. What’s the first thing you noticed about them? Where were they located on the floor plan? What was the foot traffic like and what portion of people paused to at least look at what they were offering? How many people do they have? Are they dressed in suits, casual, wearing company t-shirts or other branded attire? What do they have in the way of promotional brochures or swag? What devices are they using? Were they using videos?</p>
<h2><a></a>Follow-up with the people you met</h2>
<p>After the convention, make sure to follow-up with the people you met if you managed to get their contact info. This can be a quick letter thanking them for their time and the information they shared. Then see if they’d like to schedule a short chat on Zoom, Skype, or whatever’s convenient for them. Try to have a couple of questions at the intersection of what they do and what you do. This may require some thought, but odds are if they’re in tech, they’ve interacted with other startups.</p>
<p>Of course, your discussion could immediately jump into the details of your startup or finalizing a deal that you already discussed. That’s likely to be with a minority of your contacts – but you never know when other contacts you made can be useful. If you had a meaningful conversation and they were friendly, take that as a win and try to touch base with them every so often. If your contacts seemed particularly keen on social media – like, share, retweet them on those channels, too.</p>
<p>Again, the more you network – the easier it is to network more. We might have to discuss that in a lot more depth one of these days!</p>
<p>There are a lot of other things you can do to get the most out of the conferences and summits you attend – here are <a href="https://www.google.com/url?q=https://www.activecampaign.com/blog/conference-tips&sa=D&source=editors&ust=1616140508240000&usg=AOvVaw1pLZPoWlPK8hE8opDRmYP6" target="_blank">thirty conference</a> tips to help you cover everything down to the hand sanitizer.</p>
<p> </p>
</body>
</html>
| _perceptionbox_ |
639,257 | 10 Days of JS | Recently completed 10 Days of JS on HackerRank. https://www.linkedin.com/feed/update/urn:li:activity:... | 0 | 2021-03-19T09:25:37 | https://dev.to/mesaurabhtekam/10-days-of-js-4jid | javascript | Recently completed 10 Days of JS on HackerRank.
https://www.linkedin.com/feed/update/urn:li:activity:6778605295544479744/
The importance of JavaScript as a web technology can be determined from the fact that it is currently used by 94.5% of all websites. As a client-side programming language, JavaScript helps web developers to make web pages dynamic and interactive by implementing custom client-side scripts. At the same time, the developers can also use cross-platform runtime engines like Node.js to write server-side code in JavaScript. They can even combine JavaScript, HTML5, and CSS3 to create web pages that look good across browsers, platforms, and devices. There are also a number of reasons why each modern web developer must know how to leverage all benefits of JavaScript. | mesaurabhtekam |
639,258 | Serverless Event Driven Speeding Infraction Management System using Azure Event Grid, Functions and Cognitive Services | I have always been a fan of movies. When I think of fast cars, comedy, and swag, Rowan Atkinson's Jo... | 0 | 2021-03-21T18:09:59 | https://dev.to/fullstackmaddy/serverless-event-driven-speeding-infraction-management-system-using-azure-event-grid-functions-and-cognitive-services-4bb8 | azure, serverless, dotnet, eventdriven | I have always been a fan of movies. When I think of fast cars, comedy, and swag, Rowan Atkinson's Johnny English driving a Rolls Royce or Aston Martin always comes to mind. He always ends up speeding and getting captured by a speed-detecting camera, and believe me, his facial expressions are always hilarious. Have a look!

Though Johnny English destroys the camera with his awesome car-fired missile, this gives us a real-life example where we can use the power of serverless computing to process captured images of moving vehicles.
In this article, we will discuss how we can implement an event-driven system to process the images that cameras upload to a cloud environment. We will give it an Azure flavor.
# Problem Statement
The people of Abracadabra country are speed aficionados, and sometimes, in their zeal for the thrill of high speed, they cross the legal speed limit laid down by the local government. Hence the Minister of Transport has decided to install speed-triggered cameras at major intersections in all the major cities across Abracadabra. The cameras will capture images of the vehicles and upload them to the cloud. The process is:
1. The cameras upload captured images to cloud storage as blobs, along with details like the district where the infraction occurred.
2. The in-cloud system detects the registration number and notifies the registered owner of the speeding infraction.
**Caveat:** In the past, citizens had sued the government over privacy infringement due to the presence of faces in government-captured photos. So the minister wants all faces in the captured images blurred out.
# Azure Based Solution
Let us have a look at how we can implement the process laid out by the Transport Minister of Abracadabra government.
As is the case with software engineering, we can implement a system in multiple ways. Today we will implement the requirement using event-driven architecture principles. Various components in the system will react to various events, perform their intended tasks, and if required emit events of their own. To learn more about event-driven architecture, read Martin Fowler's bliki article
[What do you mean by “Event-Driven”?](https://martinfowler.com/articles/201701-event-driven.html).
I digress, let us get back to the task at hand.
Let us break down each requirement and have a look at what is available to us.
| Task | Azure Tech Available|
| ------------- |:-------------:|
| Upload Image to Cloud | Azure Blob Storage container |
| Extract Registration Number | Azure Cognitive Service: Computer Vision : OCR |
| Maintain a Registration Database | Azure CosmosDb collection|
|Maintain Created tickets against offenders| Azure CosmosDb Collection|
|Send Notification Emails | Send Grid(Not Azure)|
|Blur Faces| Azure Cognitive Service: Face API|
|Code blocks to execute gelling logic for all components| Azure Functions|
|Create event sinks for code to tap onto | Azure Event Grid Custom and System topic|
|Log End to End Transactions | Application Insights|
Based on these components, the following architecture can be visualized.

The logical flow of control in the system can be represented as below.

Based on the architecture the following is a list of the events consumed or emitted by components of the system.
| Event | Type| When
| ------------- |:-------------:|:-------------:|
| Blob Created | System Event |When a blob is created in a container in storage account.|
| NumberExtractionCompleted | Custom Event| When the registration number of the vehicle in image is extracted out successfully.|
| SpeedingTicketCreated | Custom Event | When a record is successfully created inside the `SpeedingInfractions` collection. |
|Exceptioned | Custom Event | Whenever the system experiences an irrecoverable exception. |
# Understanding The Architecture
Let us look in depth at each component and why it is present in our architecture.
## Azure Cognitive Service: Computer Vision
Azure Computer Vision is a Cognitive Service offered by Microsoft which caters to image analysis. It is trained and managed by Microsoft using predefined algorithms.
Some salient features of the Computer Vision service are:
1. Analyze images using pre-defined, trained models and extract rich information from them.
2. Perform OCR, which enables us to extract printed and handwritten text from images with very high accuracy.
3. Recognize famous personalities, places, landmarks, etc.
4. Generate thumbnails.
Computer Vision supports REST-based API calls or the use of SDKs for development using .NET. The service can process an image passed to the underlying API as a byte array or via a URL to the image. The billing model is pay-per-usage, and the generous free tier offerings can easily be used for experimentation.
**In Our Case:** The Computer Vision API will be used to extract the alphanumeric registration number.
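As a rough illustration, OCR against the Computer Vision service could look like the sketch below. This is a minimal, hypothetical example: the endpoint and subscription key are placeholders, and the flattening of regions/lines/words into one string is a simplification of what real number-plate extraction would need.

```csharp
// Hypothetical sketch: extracting printed text from an image URL with the
// Computer Vision OCR API. Endpoint and key are placeholders.
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;

public static class OcrSketch
{
    public static async Task<string> ExtractTextAsync(string imageUrl)
    {
        var client = new ComputerVisionClient(
            new ApiKeyServiceClientCredentials("<subscription-key>"))
        {
            Endpoint = "https://<resource-name>.cognitiveservices.azure.com/"
        };

        // OCR returns regions -> lines -> words; flatten them into one string.
        OcrResult result = await client.RecognizePrintedTextAsync(
            detectOrientation: true, url: imageUrl);

        return string.Join(" ",
            result.Regions
                  .SelectMany(r => r.Lines)
                  .SelectMany(l => l.Words)
                  .Select(w => w.Text));
    }
}
```

In the actual solution, this kind of call sits behind the ***IComputerVisionHandler*** abstraction described later, which keeps the OCR provider swappable.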
## Azure Cognitive Service: Face API
Azure Face API is a Cognitive Service offered by Microsoft which caters to face analysis. It is trained and managed by Microsoft using predefined algorithms. Some salient features of the Face API are:
1. Detect faces
2. Recognize faces, etc.
The Face API supports REST-based API calls or the use of SDKs for development using .NET. The service can process an image passed to the underlying API as a byte array or via a URL to the image. The billing model is pay-per-usage, and the generous free tier offerings can easily be used for experimentation.
**In Our Case:** The Face API will be used to detect the presence of faces in the captured images.
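A minimal, hypothetical sketch of face detection with the Face API SDK might look like this; the endpoint and key are placeholders, and error handling is omitted.

```csharp
// Hypothetical sketch: detecting faces in an image URL with the Face API.
// Endpoint and key are placeholders.
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Vision.Face;
using Microsoft.Azure.CognitiveServices.Vision.Face.Models;

public static class FaceDetectionSketch
{
    public static async Task<IList<DetectedFace>> DetectAsync(string imageUrl)
    {
        IFaceClient client = new FaceClient(
            new ApiKeyServiceClientCredentials("<subscription-key>"))
        {
            Endpoint = "https://<resource-name>.cognitiveservices.azure.com/"
        };

        // Each DetectedFace carries a FaceRectangle we can later use for blurring.
        return await client.Face.DetectWithUrlAsync(imageUrl);
    }
}
```

The returned `FaceRectangle` coordinates are exactly what the blurring step later in this article needs.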
There are two ways to create Cognitive Services in Azure:
1. Create a bundle of services called Cognitive Services, which deploys all the services so that we can use any of them with the same subscription key.
2. Create individual subscriptions for each Cognitive Service.
I generally prefer the latter option as it gives me granular control over the keys in case a key is compromised. Beginners can refer to [Quickstart: Create a Cognitive Services resource using the Azure portal](https://docs.microsoft.com/en-us/azure/cognitive-services/cognitive-services-apis-create-account?tabs=multiservice%2Cwindows), which lists both ways we just discussed.
## Azure Cosmos DB
Azure Cosmos DB is a fully managed NoSQL database suitable for modern-day applications. It provides single-digit-millisecond response times and is highly scalable. It supports multiple data APIs like SQL, Cassandra, the MongoDB API, the Gremlin API, the Table API, etc. Since Cosmos DB is fully managed by Azure, we do not have to worry about database administration, updates, patching, etc. It also handles capacity management, offering a serverless model or automatic scaling. To try Azure Cosmos DB for free, please refer to [Try Azure Cosmos DB for free](https://azure.microsoft.com/en-us/try/cosmosdb/).
**In Our Case:**
1. We will store the records of the registered vehicle owners in a collection, partitioned by the district under which the vehicle is registered.
2. We will store the created tickets in a collection, partitioned by the district where the infraction occurred.
Setting such partition keys ensures that we have a well-partitioned collection and that no single partition behaves as a hot partition.
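To show how the partition key pays off, here is a hedged sketch of an owner lookup using the `DocumentClient` this solution registers later. The database name, the `VehicleOwnerInfo` property names, and the query shape are assumptions for illustration; only the `RegisteredOwners` collection name comes from the article.

```csharp
// Hypothetical sketch: looking up a registered owner by partition key
// (district). "DmvDb" and the VehicleOwnerInfo shape are assumed names.
using System.Linq;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

public static class OwnerLookupSketch
{
    public static VehicleOwnerInfo GetOwner(
        IDocumentClient client, string district, string registrationNumber)
    {
        var collectionUri = UriFactory.CreateDocumentCollectionUri(
            "DmvDb", "RegisteredOwners");

        // Supplying the partition key keeps the query on a single partition
        // instead of fanning out across all of them.
        return client.CreateDocumentQuery<VehicleOwnerInfo>(
                collectionUri,
                new FeedOptions { PartitionKey = new PartitionKey(district) })
            .Where(o => o.VehicleRegistrationNumber == registrationNumber)
            .AsEnumerable()
            .FirstOrDefault();
    }
}
```

A cross-partition query would still work, but it costs more request units; scoping by district keeps reads cheap and predictable.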
## Azure Event Grid
Azure Event Grid is a fully managed event-handling service provided by Azure. It allows us to create systems which work on the principles of event-driven architecture. It is tightly baked into Azure, and hence a lot of activities done on Azure produce events onto which we can tap. E.g. when a blob is created, an event notification is generated and put onto a system topic in Event Grid. We can react to this event using Logic Apps or Azure Functions very easily. (This is what we will do, by the way :wink:). We can also create custom topics and push custom events native to our application. (Another thing we will do.) To learn more about how to work with Azure Event Grid, please refer to [What is Azure Event Grid?](https://docs.microsoft.com/en-us/azure/event-grid/overview)
**In Our Case:**
1. We will tap into the "*Blob Created*" event emitted when the images are uploaded to the blob container. This event kick-starts the flow.
2. Other systems will subscribe to or publish custom events, e.g. "*NumberExtractionCompleted*". Each system publishes a custom event containing the state change in case there were side effects in the system (a valuable pattern called `Event-Carried State Transfer`).
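Publishing one of these custom events to the custom topic could be sketched as follows. The topic hostname is a placeholder, and the `CustomEventData` type is the one defined later in this article; the subject string mirrors the sample event shown further down.

```csharp
// Hypothetical sketch: publishing a custom event (e.g.
// NumberExtractionCompleted) to the Event Grid custom topic.
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.EventGrid;
using Microsoft.Azure.EventGrid.Models;

public static class EventPublishSketch
{
    public static Task PublishAsync(IEventGridClient client, CustomEventData data)
    {
        var events = new List<EventGridEvent>
        {
            new EventGridEvent
            {
                Id = Guid.NewGuid().ToString(),
                Subject = "speeding.infraction.management.customevent",
                EventType = data.CustomEvent,  // e.g. "NumberExtractionCompleted"
                Data = data,                   // event-carried state transfer
                EventTime = DateTime.UtcNow,
                DataVersion = "1.0"
            }
        };

        // The topic hostname is the custom topic endpoint without the scheme.
        return client.PublishEventsAsync(
            "<topic-name>.<region>-1.eventgrid.azure.net", events);
    }
}
```

Because the full state change rides along in `Data`, downstream functions do not need to call back into the publisher to learn what happened.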
## Azure Functions
Azure Functions provides a code-based way of creating small pieces of code that perform a task or a list of tasks. Azure Functions are useful when we want to write stateless, idempotent functions. In case complex orchestrations are required, we can use their cousin, Durable Functions. Durable Functions work with the Durable Task Framework and provide a way to implement long-running stateful orchestrations. To learn more about Azure Functions, refer to [Introduction to Azure Functions](https://docs.microsoft.com/en-us/azure/azure-functions/functions-overview). To learn about Durable Functions, [What are Durable Functions?](https://docs.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-overview?tabs=csharp) is a good starting point.
**In Our Case:** We will use Azure Functions to create pieces of the workflow which react to system and custom events and work together to produce the desired output from the system. We could use Durable Functions here as well, but to reduce the complexity of the code, I have decided to stick with Azure Functions.
## Azure Blob Storage
Azure Blob Storage is the Azure solution for storing massive amounts of unstructured data in the cloud. Blob storage is useful when we want to stream images, videos, or static files, store log files, etc. We can access blob storage through language-specific SDKs or through the platform-agnostic REST APIs. To learn more about blob storage, [What is Azure Blob storage?](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blobs-overview) is a good starting point.
**In Our Case:**
1. The system managing the cameras uploads the images to the `sourceimages` container.
2. The serverless system detects faces and uploads images with blurred faces to the `blurredimages` container.
3. If the system encounters an irrecoverable error, the uploaded image is moved to the `exceptions` container for someone to process manually.
## SendGrid
SendGrid will be used to send notifications to the registered users. SendGrid is a third-party service. To set up a SendGrid account on the free plan, register at [Get Started with SendGrid for Free](https://sendgrid.com/free/).
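Sending the notification email through the SendGrid SDK could look like the hedged sketch below; the from-address, subject, and body are placeholders, not the values used in the repository.

```csharp
// Hypothetical sketch: notifying the registered owner via SendGrid.
// Addresses, subject, and body text are placeholders.
using System.Threading.Tasks;
using SendGrid;
using SendGrid.Helpers.Mail;

public static class NotificationSketch
{
    public static async Task NotifyAsync(ISendGridClient client, string toAddress)
    {
        var message = new SendGridMessage
        {
            From = new EmailAddress("noreply@abracadabra-dmv.example", "Abracadabra DMV"),
            Subject = "Speeding infraction notice"
        };
        message.AddTo(new EmailAddress(toAddress));
        message.AddContent(MimeType.Text,
            "A speeding ticket has been issued against your vehicle.");

        // SendEmailAsync returns the HTTP response from the SendGrid API.
        Response response = await client.SendEmailAsync(message);
    }
}
```

In the solution, this call is wrapped behind the ***IOwnerNotificationHandler*** interface described later, so the email provider could be swapped without touching the functions.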
# Building the Solution
As you can see, most of the components used in the architecture are already deployed and available for us to consume. So let us now look at how the Azure Functions are built.
***Note:*** *I am deliberately going to avoid adding the actual logic in the article, as there are over a thousand lines of code and the article would grow out of proportion and lose sight of the main topic. I highly urge you to consult the GitHub repository in order to understand the code.*
## Prerequisites
In order to build an Azure Functions solution, we need the following tools and NuGet packages.
### Tools and Frameworks
1. .NET Core 3.1, as the Azure Functions project targets this framework ([Download .NET Core 3.1](https://dotnet.microsoft.com/download/dotnet/3.1))
2. Azure Functions core tools ([Work with Azure Functions Core Tools](https://docs.microsoft.com/en-us/azure/azure-functions/functions-run-local?tabs=windows%2Ccsharp%2Cbash))
3. Visual Studio Code or Visual Studio 2019 or any IDE or Editor capable of running .NET core applications.
Optional:
1. Azure Storage Explorer([Download](https://azure.microsoft.com/en-in/features/storage-explorer/))
2. Azure CosmosDB Emulator ([Work locally with Azure CosmosDB](https://docs.microsoft.com/en-us/azure/cosmos-db/local-emulator?tabs=cli%2Cssl-netstd21))
### NuGet Packages
The following screenshot shows all the NuGet packages required in the solution.

### Solution Structure
Once set up, the solution looks like the following.

## Understanding the Solution
**Full Disclosure:** Azure Functions has a plethora of bindings which allow us to import or push data out to many resources like Cosmos DB, Event Grid, etc. It is my personal preference to write interfaces to establish communication with external entities. So in this solution, I have implemented an interface for each external system with which the solution communicates.
### Interfaces
#### CosmosDB
In order to create a speeding ticket or query data from the ticketing and vehicle registration collections, I have created an ***IDmvDbHandler*** interface; its contract is shown below.
```csharp
public interface IDmvDbHandler
{
/// <summary>
/// Get the information of the registered owner for the vehicle using the vehicle registration number
/// </summary>
/// <param name="vehicleRegistrationNumber">Vehicle registration number</param>
/// <returns>Owner of the registered vehicle</returns>
public Task<VehicleOwnerInfo> GetOwnerInformationAsync(string vehicleRegistrationNumber);
/// <summary>
/// Create a speeding infraction ticket against a vehicle registration number
/// </summary>
/// <param name="ticketNumber">Ticket number</param>
/// <param name="vehicleRegistrationNumber">Vehicle registration number</param>
/// <param name="district">The district where the infraction occured</param>
/// <param name="date">Date of infraction</param>
/// <returns></returns>
public Task CreateSpeedingTicketAsync(string ticketNumber, string vehicleRegistrationNumber, string district, string date);
/// <summary>
/// Get the ticket details
/// </summary>
/// <param name="ticketNumber">Ticket number</param>
/// <returns>Speeding Ticket details</returns>
public Task<SpeedingTicket> GetSpeedingTicketInfoAsync(string ticketNumber);
}
```
This interface is implemented using the Cosmos DB SDK in the ***CosmosDmvDbHandler*** class.
#### Face API
To communicate with the Face API, the following ***IFaceHandler*** interface is created.
```csharp
public interface IFaceHandler
{
/// <summary>
/// Detect all the faces in the image specified using an url
/// </summary>
/// <param name="url">Url of the image</param>
/// <returns>List of all the detected faces</returns>
public Task<IEnumerable<DetectedFace>> DetectFacesWithUrlAsync(string url);
/// <summary>
/// Detect all the faces in an image specified using a stream
/// </summary>
/// <param name="imageStream">Stream containing the image</param>
/// <returns>List of all the detected faces</returns>
public Task<IEnumerable<DetectedFace>> DetectFacesWithStreamAsync(Stream imageStream);
/// <summary>
/// Blur faces defined by the detected faces list
/// </summary>
/// <param name="imageBytes">The byte array containing the image</param>
/// <param name="detectedFaces">List of the detected faces in the image</param>
/// <returns>Processed stream containing image with blurred faces</returns>
public Task<byte[]> BlurFacesAsync(byte[] imageBytes, List<DetectedFace> detectedFaces);
}
```
This interface is implemented with the help of the Face API SDK available on NuGet, in the ***FaceHandler*** class.
#### ComputerVision
To extract the vehicle registration number from the image using optical character recognition, the ***IComputerVisionHandler*** interface is created.
```csharp
public interface IComputerVisionHandler
{
/// <summary>
/// Extract registration number from image specified using its url
/// </summary>
/// <param name="imageUrl">Url of the image</param>
/// <returns>Extracted registration number</returns>
public Task<string> ExtractRegistrationNumberWithUrlAsync(string imageUrl);
/// <summary>
/// Extract registration number from image specified using the stream
/// </summary>
/// <param name="imageStream">Stream containing the image</param>
/// <returns>Extracted registration number</returns>
public Task<string> ExtractRegistrationNumberWithStreamAsync(Stream imageStream);
}
```
This interface is implemented with the help of the Computer Vision SDK, in the ***ComputerVisionHandler*** class.
#### Sending Notification
In order to send out notifications to the users, the ***IOwnerNotificationHandler*** interface is used.
```csharp
public interface IOwnerNotificationHandler
{
/// <summary>
/// Notify the Owner of the Vehicle
/// </summary>
/// <param name="ownerNotificationMessage">Information of the vehicle Owner</param>
/// <returns></returns>
public Task NotifyOwnerAsync(OwnerNotificationMessage ownerNotificationMessage);
}
```
This interface is implemented using the SendGrid SDK, in the ***SendGridOwnerNotificationHandler*** class.
#### Publishing Events
The ***IEventHandler*** interface describes the methods used when publishing messages to the event sink.
```csharp
public interface IEventHandler
{
/// <summary>
/// Publish a custom event to the event sink
/// </summary>
/// <param name="customEventData">Custom event data</param>
/// <returns></returns>
public Task PublishEventToTopicAsync(CustomEventData customEventData);
}
```
This interface is implemented using the Azure Event Grid SDK in the ***EventGridHandler*** class.
The Azure Functions in the solution emit events which conform to the Event Grid schema. In order to emit custom solution data, instances of the ***CustomEventData*** class are serialized into the data node. The custom data published by the functions follows the contract below.
```csharp
public class CustomEventData
{
[JsonProperty(PropertyName = "ticketNumber")]
public string TicketNumber { get; set; }
[JsonProperty(PropertyName = "imageUrl")]
public string ImageUrl { get; set; }
[JsonProperty(PropertyName = "customEvent")]
public string CustomEvent { get; set; }
[JsonProperty(PropertyName = "vehicleRegistrationNumber")]
public string VehicleRegistrationNumber { get; set; }
[JsonProperty(PropertyName = "districtOfInfraction")]
public string DistrictOfInfraction { get; set; }
[JsonProperty(PropertyName = "dateOfInfraction")]
public string DateOfInfraction { get; set; }
}
```
A sample event emitted when number extraction completes looks as follows.
```json
{
"id": "3c848fdc-7ad6-47e9-820d-b9f346ba7f7a",
"subject": "speeding.infraction.management.customevent",
"data": {
"ticketNumber": "Test",
"imageUrl": "https://{storageaccount}.blob.core.windows.net/sourceimages/Test.png",
"customEvent": "NumberExtractionCompleted",
"vehicleRegistrationNumber": "ABC6353",
"districtOfInfraction": "wardha",
"dateOfInfraction": "14-03-2021"
},
"eventType": "NumberExtractionCompleted",
"dataVersion": "1.0",
"metadataVersion": "1",
"eventTime": "2021-03-14T14:51:41.2448544Z",
"topic": "/subscriptions/{subscriptionid}/resourceGroups/rg-dev-stories-dotnet-demo-dev-01/providers/Microsoft.EventGrid/topics/ais-event-grid-custom-topic-dev-01"
}
```
#### Blob Management
In order to access blobs from the solution, the ***IBlobHandler*** interface is used to describe the contracts.
```csharp
public interface IBlobHandler
{
/// <summary>
/// Retrieve the metadata associated with a blob
/// </summary>
/// <param name="blobUrl">Url of the blob</param>
/// <returns>Key value pairs of the metadata</returns>
public Task<IDictionary<string, string>> GetBlobMetadataAsync(string blobUrl);
/// <summary>
/// Upload the stream as a blob to a container
/// </summary>
/// <param name="containerName">Name of the container where the stream is to be uploaded</param>
/// <param name="stream">Actual Stream representing object to be uploaded</param>
/// <param name="contentType">Content type of the object</param>
/// <param name="blobName">Name with which the blob is to be created</param>
/// <returns></returns>
public Task UploadStreamAsBlobAsync(string containerName, Stream stream, string contentType,
string blobName);
/// <summary>
/// Download Blob contents and its metadata using blob url
/// </summary>
/// <param name="blobUrl">Url of the blob</param>
/// <returns>Blob information</returns>
public Task<byte[]> DownloadBlobAsync(string blobUrl);
/// <summary>
/// Copy the blob from one container to another in same storage account using the url of the source blob
/// </summary>
/// <param name="sourceBlobUrl">Url of the source blob</param>
/// <param name="targetContainerName">Destination container name</param>
/// <returns></returns>
public Task CopyBlobAcrossContainerWithUrlsAsync(string sourceBlobUrl, string targetContainerName);
}
```
This interface is implemented in the solution using the Azure Blob Storage SDKs, in the ***AzureBlobHandler*** class.
### Options
Azure Functions supports the `Options` pattern to access a group of related configuration items. ([Working with options and settings in Azure functions](https://docs.microsoft.com/en-us/azure/azure-functions/functions-dotnet-dependency-injection#working-with-options-and-settings))
The following is a sample options class to access the email settings from the application settings of the Azure Function App.
```csharp
public class SendGridOptions
{
public string EmailSubject { get; set; }
public string EmailFromAddress { get; set; }
public string EmailBodyTemplate { get; set; }
}
```
Below is an example of ***local.settings.json*** which contains SendGridOptions
```json
{
"IsEncrypted": false,
"Values": {
"AzureWebJobsStorage": "UseDevelopmentStorage=true",
"FUNCTIONS_WORKER_RUNTIME": "dotnet",
"BlobOptions:BlurredImageContainerName":"",
"BlobOptions:CompensationContainerName":"",
"Bloboptions:UploadContentType":"",
"BlobStorageConnectionKey":"",
"ComputerVisionEndpoint":"",
"ComputerVisionSubscriptionKey":"",
"CosmosDbOptions:DatabseId":"",
"CosmosDbOptions:InfractionsCollection":"",
"CosmosDbOptions:OwnersCollection":"",
"DmvDbAuthKey":"",
"DmvDbUri":"",
"EventGridOptions:TopicHostName":"",
"EventGridTopicSasKey":"",
"FaceApiEndpoint":"",
"FaceApiSubscriptionKey":"",
"SendGridApiKey":"",
"SendGridOptions:EmailBodyTemplate":"",
"SendGridOptions:EmailFromAddress":"",
"SendGridOptions:EmailSubject":""
}
}
```
### Registering Dependencies
Azure Functions supports dependency injection to make our code more testable and loosely coupled. As with .NET Core-based web apps, dependency injection in Azure Functions is implemented in the ***Startup*** class of the project. Refer to [Dependency Injection in Azure Functions](https://docs.microsoft.com/en-us/azure/azure-functions/functions-dotnet-dependency-injection).
The following code snippet shows the skeleton of the ***Startup*** class and the ***Configure*** method where we register all of our dependencies.
```csharp
[assembly: FunctionsStartup(typeof(Speeding.Infraction.Management.AF01.Startup))]
namespace Speeding.Infraction.Management.AF01
{
public class Startup : FunctionsStartup
{
public override void Configure(IFunctionsHostBuilder builder)
{
}
}
}
```
The following code snippet shows how we configure options in the ***Startup*** class, again using ***SendGridOptions*** as the example.
```csharp
builder.Services.AddOptions<SendGridOptions>()
.Configure<IConfiguration>((settings, configuration) =>
{
configuration.GetSection(nameof(SendGridOptions)).Bind(settings);
});
```
All the major SDK clients are registered as singletons so that the same instance of each client is used throughout the lifetime of the application. The code snippet below shows how to do this.
```csharp
builder.Services.AddSingleton<IDocumentClient>(
x => new DocumentClient(
new Uri(
Environment.GetEnvironmentVariable("DmvDbUri")
),
Environment.GetEnvironmentVariable("DmvDbAuthKey")
)
);
builder.Services.AddSingleton<IComputerVisionClient>(
x => new ComputerVisionClient(
new Microsoft.Azure.CognitiveServices.Vision.ComputerVision.ApiKeyServiceClientCredentials(
Environment.GetEnvironmentVariable("ComputerVisionSubscriptionKey")
)
)
{
Endpoint = Environment.GetEnvironmentVariable("ComputerVisionEndpoint")
}
);
builder.Services.AddSingleton<IFaceClient>(
x => new FaceClient(
new Microsoft.Azure.CognitiveServices.Vision.Face.ApiKeyServiceClientCredentials(
Environment.GetEnvironmentVariable("FaceApiSubscriptionKey")
)
)
{
Endpoint = Environment.GetEnvironmentVariable("FaceApiEndpoint")
}
);
builder.Services.AddSingleton<BlobServiceClient>(
new BlobServiceClient(
Environment.GetEnvironmentVariable("BlobStorageConnectionKey")
)
);
builder.Services.AddSingleton<IEventGridClient>(
new EventGridClient(
new TopicCredentials(
Environment.GetEnvironmentVariable("EventGridTopicSasKey")
)
)
);
builder.Services.AddSingleton<ISendGridClient>(
new SendGridClient(
Environment.GetEnvironmentVariable("SendGridApiKey")
)
);
```
Other interfaces and their implementations are registered as follows.
```csharp
builder.Services.AddAutoMapper(AppDomain.CurrentDomain.GetAssemblies());
builder.Services.AddTransient<IBlobHandler, AzureBlobHandler>();
builder.Services.AddTransient<IDmvDbHandler, CosmosDmvDbHandler>();
builder.Services.AddTransient<IFaceHandler, FaceHandler>();
builder.Services.AddTransient<IComputerVisionHandler, ComputerVisionHandler>();
builder.Services.AddTransient<IEventHandler, EventGridHandler>();
builder.Services.AddTransient<IOwnerNotificationHandler, SendGridOwnerNotificationHandler>();
```
### Azure Functions
The following is the list of all the functions created as part of the solution. As dependency injection is used, we can safely create non-static classes and inject the necessary dependencies through the constructor of the class containing the Azure Function(s).
#### ExtractRegistrationNumber
This Azure Function is triggered when a blob is uploaded to the `sourceimages` container. Once the function completes, it emits either a `NumberExtractionCompleted` event on success or an `Exceptioned` event if an exception occurs. The skeleton of the function is shown below.
```csharp
public class NumberPlateController
{
private readonly IComputerVisionHandler _computerVisionHandler;
private readonly IEventHandler _eventHandler;
private readonly IBlobHandler _blobHandler;
public NumberPlateController(IComputerVisionHandler computerVisionHandler,
IEventHandler eventHandler,
IBlobHandler blobHandler)
{
_computerVisionHandler = computerVisionHandler ??
throw new ArgumentNullException(nameof(computerVisionHandler));
_eventHandler = eventHandler ??
throw new ArgumentNullException(nameof(eventHandler));
_blobHandler = blobHandler ??
throw new ArgumentNullException(nameof(blobHandler));
}
[FunctionName("ExtractRegistrationNumber")]
public async Task ExtractRegistrationNumber(
[EventGridTrigger] EventGridEvent eventGridEvent,
ILogger logger
)
{
}
}
```
#### CreateSpeedingTicket
This function is triggered when a `NumberExtractionCompleted` event occurs. It creates the speeding ticket in the `SpeedingInfractions` collection in Cosmos DB. The following is a sample ticket created by the function.
```json
{
"id": "4f81c43a-53a8-42c0-a61a-10a40680f836",
"ticketNumber": "2f4e63a3-b9c5-4fe2-ab5d-745920b905f2",
"vehicleRegistrationNumber": "MLK 6353",
"district": "nagpur",
"date": "15-03-2021",
"_rid": "oaMUAJlZ1PwMAAAAAAAAAA==",
"_self": "dbs/oaMUAA==/colls/oaMUAJlZ1Pw=/docs/oaMUAJlZ1PwMAAAAAAAAAA==/",
"_etag": "\"13008756-0000-2000-0000-604f39e60000\"",
"_attachments": "attachments/",
"_ts": 1615804902
}
```
The function emits a `SpeedingTicketCreated` event if successful and an `Exceptioned` event if an exception occurs.
The skeleton for the function looks as follows.
```csharp
public class TicketController
{
private readonly IDmvDbHandler _dmvDbHandler;
private readonly IEventHandler _eventHandler;
public TicketController(IDmvDbHandler dmvDbHandler,
IBlobHandler blobHandler,
IEventHandler eventHandler)
{
_dmvDbHandler = dmvDbHandler ??
throw new ArgumentNullException(nameof(dmvDbHandler));
_eventHandler = eventHandler ??
throw new ArgumentNullException(nameof(eventHandler));
}
[FunctionName("CreateSpeedingTicket")]
public async Task CreateTicket(
[EventGridTrigger] EventGridEvent eventGridEvent,
ILogger logger
)
{
}
}
```
#### DetectAndBlurFaces
This function is triggered when a `SpeedingTicketCreated` event occurs. It uses the Face API to detect the presence of faces in the image by passing the URL of the image blob. If a face is detected, the function blurs it using the `ImageProcessorCore` SDK and then uploads the blurred image as a blob to the `blurredimages` container. This function emits an `Exceptioned` event if an exception occurs.
Following is the skeleton for the function.
```csharp
public class FaceController
{
private readonly IFaceHandler _faceHandler;
private readonly IBlobHandler _blobHandler;
private readonly IEventHandler _eventhandler;
private readonly BlobOptions _options;
public FaceController(IFaceHandler faceHandler,
IBlobHandler blobHandler,
IEventHandler eventHandler,
IOptions<BlobOptions> settings)
{
_faceHandler = faceHandler ??
throw new ArgumentNullException(nameof(faceHandler));
_blobHandler = blobHandler ??
throw new ArgumentNullException(nameof(blobHandler));
_eventhandler = eventHandler ??
throw new ArgumentNullException(nameof(eventHandler));
_options = settings.Value;
}
[FunctionName("DetectAndBlurFaces")]
public async Task DetectAndBlurFaces(
[EventGridTrigger] EventGridEvent eventGridEvent,
ILogger logger
)
{
}
}
```
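As a rough illustration of the blurring step, the sketch below uses SixLabors.ImageSharp (the successor of the `ImageProcessorCore` package mentioned above) rather than the exact library in the repository, so treat the API surface and the blur strength as assumptions; `DetectedFace` and its `FaceRectangle` come from the Face API SDK.

```csharp
// Hypothetical sketch of blurring detected face regions. Uses
// SixLabors.ImageSharp as a stand-in for ImageProcessorCore; the blur
// sigma of 20 is an arbitrary illustrative value.
using System.Collections.Generic;
using System.IO;
using Microsoft.Azure.CognitiveServices.Vision.Face.Models;
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.Processing;

public static class BlurSketch
{
    public static byte[] BlurFaces(byte[] imageBytes, IEnumerable<DetectedFace> faces)
    {
        using var image = Image.Load(imageBytes);

        foreach (var face in faces)
        {
            var r = face.FaceRectangle;
            // Blur only the rectangle the Face API reported for this face.
            image.Mutate(ctx => ctx.GaussianBlur(
                20f, new Rectangle(r.Left, r.Top, r.Width, r.Height)));
        }

        using var output = new MemoryStream();
        image.SaveAsPng(output);
        return output.ToArray();
    }
}
```

The result is then handed to ***IBlobHandler.UploadStreamAsBlobAsync*** to land in the `blurredimages` container, which in turn triggers the notification function.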
#### NotifyRegisteredOwner
This function is triggered by the `BlobCreated` event which occurs when a blob is uploaded to the `blurredimages` container. It queries speeding ticket data from the `SpeedingInfractions` collection, collects the registered owner's details from the `RegisteredOwners` collection, and then sends out an email using SendGrid. This function emits an `Exceptioned` event if an exception occurs.
The skeleton of the function is shown below.
```csharp
public class NotificationController
{
private readonly IDmvDbHandler _dmvDbHandler;
private readonly IBlobHandler _blobHandler;
private readonly IOwnerNotificationHandler _ownerNotificationHandler;
private readonly IEventHandler _eventHandler;
public NotificationController(IDmvDbHandler dmvDbHandler,
IBlobHandler blobHandler,
IOwnerNotificationHandler ownerNotificationHandler,
IEventHandler eventHandler)
{
_dmvDbHandler = dmvDbHandler ??
throw new ArgumentNullException(nameof(dmvDbHandler));
_blobHandler = blobHandler ??
throw new ArgumentNullException(nameof(blobHandler));
_ownerNotificationHandler = ownerNotificationHandler ??
throw new ArgumentNullException(nameof(ownerNotificationHandler));
_eventHandler = eventHandler ??
throw new ArgumentNullException(nameof(eventHandler));
}
[FunctionName("NotifyRegisteredOwner")]
public async Task NotifyRegisteredOwner(
[EventGridTrigger] EventGridEvent eventGridEvent,
ILogger logger
)
{
}
}
```
#### ManageExceptions
This function provides the grace for the entire solution: it is a one-stop shop for managing the exceptions that occur in any of the other functions. The function copies the blob from the `sourceimages` container to the `exception` container so that someone can retrace where the failure occurred and what remedy needs to be applied. The skeleton of the function is shown below.
```csharp
public class ExceptionController
{
private IBlobHandler _blobHandler;
private readonly BlobOptions _options;
public ExceptionController(IBlobHandler blobHandler,
IOptions<BlobOptions> settings)
{
_blobHandler = blobHandler ??
throw new ArgumentNullException(nameof(blobHandler));
_options = settings.Value;
}
[FunctionName("ManageExeceptions")]
public async Task ManageExceptions(
[EventGridTrigger] EventGridEvent eventGridEvent,
ILogger logger
)
{
}
}
```
# Testing
Let us check out two scenarios
## 1. Exception Flow
When an image without a vehicle is uploaded, it will land in the `exception` container.
I am uploading the following image, with the name `NoVehicleImage.png`, to the `sourceimages` container.

Since the Azure Function cannot detect a registration number in this picture, it emits an `Exceptioned` event, and the exception management flow copies the image to the `exceptionfolder` as shown below.

## 2. Working Flow
There were no fast vehicles in the vicinity of my residence, so I decided to improvise: I tested the flow using a capture of me driving a two-wheeler.
I am using the following image, with the name `15271b93-e416-4dde-9430-4994ee9cd360.png`.

And soon enough an email pops up in my inbox as shown below

And the attachment has my face blurred out.

The blurred image is present in the `blurredimages` container as well.

***YAY, IT WORKS***

# Challenges
Building event-driven systems comes with the perks of loose coupling and easy replaceability of parts, because the parts do not communicate with each other directly but via events. This, however, creates multiple problems.
1. Since there is no direct communication between the parts of the system, we cannot create a `system map` that gives a nice flow of data from one part of the system to another.
2. There is no orchestration workflow to visualize. This poses a major problem when debugging is required. When an event-driven system is not implemented correctly, it can be a major source of worry when something bad happens in the production environment. Handling exceptions gracefully is very important for retracing the flow of messages through the system.
3. Testing event-driven systems requires a penchant for patience. Many unanticipated things can happen when testing such systems. Since there is no orchestration engine to control the flow, testers and developers can often be seen banging their heads against the wall.
The challenges all point to one thing: *an event-driven system absolutely needs well-thought-out correlated logging that spans all the working parts of the event flow.*
Luckily, Azure Functions supports logging to Application Insights by default. Each function has access to an implementation of `ILogger`, which provides multiple ways of logging to Application Insights. It supports ***structured logging***, where system-specific custom properties can be logged. This solution implements correlated logging based on the name of the uploaded blob (which is a GUID, by the way :wink:).
The application tracks the function where the statements are logged, a custom-defined event for each activity, and its status.
The following class shows the logging template and the custom logging events defined for the application.
```csharp
public class LoggingConstants
{
public const string Template
= "{EventDescription}{CorrelationId}{ProcessingFunction}{ProcessStatus}{LogMessage}";
public enum ProcessingFunction
{
ExtractRegistrationNumber,
DetectAndBlurFaces,
CreateSpeedingTicket,
NotifyRegisteredOwner,
ManageExeceptions
}
public enum EventId
{
ExtractRegistrationNumberStarted = 101,
ExtractRegistrationNumberFinished = 102,
DetectAndBlurFacesStarted = 301,
DetectAndBlurFacesFinished = 302,
CreateSpeedingTicketStarted = 201,
CreateSpeedingTicketFinished = 202,
NotifyVehicleOwnerStarted = 401,
NotifyVehicleOwnerFinished = 402,
ManageExeceptionsStarted = 501,
ManageExeceptionsFinished = 502
}
public enum ProcessStatus
{
Started,
Finished,
Failed
}
}
```
Logging can be done in any Azure Function as shown in the example below.
```csharp
logger.LogInformation(
new EventId((int)LoggingConstants.EventId.ExtractRegistrationNumberStarted),
LoggingConstants.Template,
LoggingConstants.EventId.ExtractRegistrationNumberStarted.ToString(),
blobName,
LoggingConstants.ProcessingFunction.ExtractRegistrationNumber.ToString(),
LoggingConstants.ProcessStatus.Started.ToString(),
"Execution Started"
);
```
Exceptions can be logged as:
```csharp
logger.LogError(
new EventId((int)LoggingConstants.EventId.ExtractRegistrationNumberFinished),
LoggingConstants.Template,
LoggingConstants.EventId.ExtractRegistrationNumberFinished.ToString(),
blobName,
LoggingConstants.ProcessingFunction.ExtractRegistrationNumber.ToString(),
LoggingConstants.ProcessStatus.Failed.ToString(),
"Execution Failed. Reason: Failed to extract number plate from the image"
);
```
Once this is done in each function, we have end-to-end correlated logging.
The following is an example of how the correlated logging looks when the results are queried in Application Insights.
### Query
```sql
traces
| sort by timestamp desc
| where customDimensions.EventId > 1
| where customDimensions.prop__CorrelationId == "{blob name goes here}"
| order by toint(customDimensions.prop_EventId) asc
| project Level = customDimensions.LogLevel
, EventId = customDimensions.EventId
, EventDescription = customDimensions.prop__EventDescription
, ProcessingWorkflow = customDimensions.prop__ProcessingFunction
, CorrelationId = customDimensions.prop__CorrelationId
, Status = customDimensions.prop__Status
, LogMessage = customDimensions.prop__LogMessage
```
### Vanilla Flow

### Exception Tracked Gracefully

And that is it: we have a trace of what is going on in the system.
# Blob Storage Cleanup
Cleaning up blob storage is an important maintenance task. If the blobs are left as they are, they accumulate and add to the cost of running the solution.
Blob storage account cleanup is accomplished by defining a `Life Cycle Management` policy on the account to `Delete all blobs older than 1 day`. It is implemented as shown below.



# Conclusion
In this article, we discussed how to design and implement an event-driven system to process images uploaded to blob containers. This is just the basic design. An actual production system will contain many other components, for example to create manual tasks for the employees handling the exception workflow.
# Repository
The code implemented as part of this small project is available under MIT License on my GitHub Repository.
{% github fullstackmaddy/speeding-infraction-management %} | fullstackmaddy |
639,413 | Game Dev Digest — Issue #86 - Awesome Effects | Issue #86 - Awesome Effects How to re-create some awesome effects! Plus the GDC Showcase... | 4,330 | 2021-03-19T12:33:11 | https://gamedevdigest.com/digests/issue-86-awesome-effects.html | gamedev, unity3d, csharp, news | ---
title: Game Dev Digest — Issue #86 - Awesome Effects
published: true
date: 2021-03-19 12:33:11 UTC
tags: gamedev,unity,csharp,news
canonical_url: https://gamedevdigest.com/digests/issue-86-awesome-effects.html
series: Game Dev Digest - The Newsletter About Unity Game Dev
---
### Issue #86 - Awesome Effects

How to re-create some awesome effects! Plus the GDC Showcase is going on, check out what is coming up with Unity and much more. Enjoy!
---
[**Unity at GDC Showcase 2021: Visual scripting, new releases, and other moments from the keynote**](https://blogs.unity3d.com/2021/03/16/unity-at-gdc-showcase-2021-visual-scripting-new-releases-and-other-moments-from-the-keynote/) - “Unity for All 2021: Tech and Creator Showcase” offered an overview of recent and upcoming features and behind-the-scenes stories from the teams who created Oddworld: Soulstorm, Humankind and Crash Bandicoot: On the Run!
[_Unity_](https://blogs.unity3d.com/2021/03/16/unity-at-gdc-showcase-2021-visual-scripting-new-releases-and-other-moments-from-the-keynote/)
[**Particle Metaballs in Unity using URP and Shader Graph Part 3**](https://bronsonzgeb.com/index.php/2021/03/13/particle-metaballs-in-unity-using-urp-and-shader-graph-part-3/) - This post is part 3 in a series of articles that explain how to draw Metaballs in Unity using the Universal Render Pipeline (URP) and Shader Graph.
[_bronsonzgeb.com_](https://bronsonzgeb.com/index.php/2021/03/13/particle-metaballs-in-unity-using-urp-and-shader-graph-part-3/)
[**Soft Foliage Shader Breakdown**](https://www.cyanilux.com/tutorials/soft-foliage-shader-breakdown/) - This is a foliage shader that is applied to mesh consisting of intersecting quads generated from a particle system. It uses adjusted normals to provide a soft shading, alpha clipping with foliage texture and a small amount of vertex displacement to simulate wind.
[_Cyanilux_](https://www.cyanilux.com/tutorials/soft-foliage-shader-breakdown/)
[**ShaderQuest Part 4: Shader Environment Architecture**](https://halisavakis.com/shaderquest-part-4-shader-environment-architecture/) - The purpose of this ShaderQuest part is to familiarize you with the architecture of coded shaders in Unity as well as with the visual shader creation environments found in both Unity and UE4.
[_Harry Alisavakis_](https://halisavakis.com/shaderquest-part-4-shader-environment-architecture/)
[**Changing Materials on the Fly**](https://www.stellargameassets.com/post/changing-materials-on-the-fly) - There are a million reasons you might want to change a material in runtime. This is a really easy tutorial that will show you how. Better yet, it can be easily extended!
[_stellargameassets.com_](https://www.stellargameassets.com/post/changing-materials-on-the-fly)
[**Unity Releases**](https://unity3d.com/unity/beta/2021.1.0b12) - Unity [2021.1.0 Beta 12](https://unity3d.com/unity/beta/2021.1.0b12) and [2021.2.0 Alpha 9](https://unity3d.com/unity/alpha/2021.2.0a9) have been released.
[_unity3d.com_](https://unity3d.com/unity/beta/2021.1.0b12)
## Videos
[](https://www.youtube.com/watch?v=FR618z5xEiM)
[**Splatoon's Ink System | Mix and Jam**](https://www.youtube.com/watch?v=FR618z5xEiM) - This project is a take on the Ink System from one of my favorite games: Splatoon! Let’s explore game dev techniques and try to achieve a similar effect!
[_Mix and Jam_](https://www.youtube.com/watch?v=FR618z5xEiM)
[**VFX Graph Tutorial - Magical Library**](https://www.youtube.com/watch?v=h4SBACYb26k) - A tutorial on creating a magical library using VFX Graph.
[_A Beginner's Dev Vlog_](https://www.youtube.com/watch?v=h4SBACYb26k)
[**Adding Shadow Casting to Grass Baked with Compute Shaders | Unity URP Game Dev Tutorial**](https://www.youtube.com/watch?v=IPoHY_yJxMc) - Grass is one of those things that's deceptively difficult to implement. I've taken a stab at it in this series of videos, showing how to generate grass blades using a compute shader, and animate it using the vertex function of a graphics shader. In this video, I modify previous scripts to add shadow casting to the grass blades, as well as show a trick to lighten shadows without global illumination.
[_Ned Makes Games_](https://www.youtube.com/watch?v=IPoHY_yJxMc)
[**Procedural Animation: Tail, Wings, Hair, Tentacles! (Unity Tutorial)**](https://www.youtube.com/watch?v=9hTnlp9_wX8) - A procedural animation tutorial.
[_Blackthornprod_](https://www.youtube.com/watch?v=9hTnlp9_wX8)
[**How to Create a Flexible Dialogue System - Unity #1**](https://www.youtube.com/watch?v=RfLCzDzkvb0) - Welcome to the first part of this tutorial series in which we build a generalized dialogue system that you can use in your game, regardless of whether it is 2D or 3D!
[_Semag Games_](https://www.youtube.com/watch?v=RfLCzDzkvb0)
## Assets
[](https://assetstore.unity.com/browse/new-to-unity?aid=1011l8NVc)
[**Unity New Purchasers Sale**](https://assetstore.unity.com/browse/new-to-unity?aid=1011l8NVc) - We have a new promotion starting today for new Unity purchasers! For a limited time new purchasers can save up to 90% on a selection of bestsellers and grab their first asset from just $9.99 with the code below. This promotion ends on March 31
Coupon Code: MARCHWELCOME
[_Unity_](https://assetstore.unity.com/browse/new-to-unity?aid=1011l8NVc) **Affiliate**
[**3D Math Primer for Graphics and Game Development (Free Ebook)**](https://gamemath.com/) - _[as described by [Game From Scratch](https://www.youtube.com/watch?v=PcKWD-fpP-o)...]_ Hands down the most readable book I've ever encountered for learning math for game development is now free online. 3D Math Primer for Graphics and Game Development by Fletcher Dunn and Ian Parbery is now free.
[_gamemath.com_](https://gamemath.com/)
[**unity-shell**](https://github.com/marijnz/unity-shell) - Write and execute code in an intuitive "shell" with autocompletion, for the Unity Editor.
[_marijnz_](https://github.com/marijnz/unity-shell) *Open Source*
[**Unity Flocking**](https://github.com/CristianQiu/Unity-Flocking-CPU-GPU) - The project contains different CPU and GPU flocking implementations in Unity's game engine, using a single thread and multithread approach, as well as another using compute shaders. All of them are mainly based on Unity's ECS implementation.
[_CristianQiu_](https://github.com/CristianQiu/Unity-Flocking-CPU-GPU) *Open Source*
[**Project Auditor**](https://github.com/Unity-Technologies/ProjectAuditor) - Project Auditor is a static analysis tool for Unity Projects. Project Auditor analyzes scripts and settings of a Unity project and reports a list of potential problems that affect performance.
[_Unity-Technologies_](https://github.com/Unity-Technologies/ProjectAuditor) *Open Source*
[**UITK Editor Aid**](https://github.com/OscarAbraham/UITKEditorAid) - Elements and scripts that help in making Unity editors with UIToolkit.
UIToolkit (UITK) allows for interfaces that are more dynamic and performant than IMGUI. Its web-like API makes creating complex Editor UI (i.e. node graphs) a lot easier.
There's a problem, though: the editor part of UITK currently lacks some of the IMGUI features required for easily creating anything more than basic stuff.
This package contains some of the stuff I use to solve the problem with a pure UIToolkit approach.
[_OscarAbraham_](https://github.com/OscarAbraham/UITKEditorAid) *Open Source*
[**Try Unity ArtEngine today**](https://80.lv/articles/try-unity-artengine-today/) - Unity ArtEngine $19/mo promotion brings AI material generation to the masses.
[_80.lv_](https://80.lv/articles/try-unity-artengine-today/)
[**HUMBLE SOFTWARE BUNDLE: INTRO TO CODE 2021**](https://www.humblebundle.com/software/intro-to-code-2021-software?partner=unity3dreport) - With this brand new bundle, you’ll master the world’s most popular languages by building real, portfolio-ready projects. From building games in popular styles - including RPGs, strategy games, idle games, and more - to web development, mobile apps, data science, and machine learning, the skills you’ll learn can be expanded upon to form strong foundations for your future projects.
$1,400 WORTH OF AWESOME STUFF
PAY $1 OR MORE
_[You may also like the [Learn You More Code](https://www.humblebundle.com/books/learn-you-more-code-no-starch-press-books?partner=unity3dreport) bundle from Humble as well]_
[_Humble Bundle_](https://www.humblebundle.com/software/intro-to-code-2021-software?partner=unity3dreport) **Affiliate**
## Spotlight
[](https://www.cardgamesimulator.com/)
[**Card Game Simulator**](https://www.cardgamesimulator.com/) - Create & Share Custom Games With CGS, users can create and share their own custom card games!
_[[Source](https://github.com/finol-digital/Card-Game-Simulator) also available on Github]_
[_Finol Digital_](https://www.cardgamesimulator.com/)
---
You can subscribe to the free weekly newsletter on [GameDevDigest.com](https://gamedevdigest.com)
This post includes affiliate links; I may receive compensation if you purchase products or services from the different links provided in this article. | gamedevdigest |
639,451 | Leetcode 841. Keys and Rooms [Solution] | The question is pretty easy, it's all about just clicking the approach. Difficulty: Medium Jump To:... | 11,734 | 2021-03-19T13:30:53 | https://dev.to/shivams136/leetcode-841-keys-and-rooms-solution-6oe | cpp, algorithms, programming, tutorial | The question is pretty easy, it's all about just clicking the approach.
**Difficulty:** Medium
**Jump To:**
* [Problem Statement](#problem-statement)
* [Explanation](#explanation)
* [Solution](#solution)
* [Implementation](#implementation)
---
## Problem Statement <a name="problem-statement"></a>
**Question:** [https://leetcode.com/problems/keys-and-rooms/](https://leetcode.com/problems/keys-and-rooms/)
There are `N` rooms and you start in room `0`. Each room has a distinct number in `0, 1, 2, ..., N-1`, and each room may have some keys to access the next room.
Formally, each room `i` has a list of keys `rooms[i]`, and each key `rooms[i][j]` is an integer in `[0, 1, ..., N-1]` where `N = rooms.length`. A key `rooms[i][j] = v` opens the room with number `v`.
Initially, all the rooms start locked (except for room `0`).
You can walk back and forth between rooms freely.
Return `true` if and only if you can enter every room.
**Example 1:**
> **Input:** [[1],[2],[3],[]]
**Output:** true
**Explanation:**
We start in room 0, and pick up key 1.
We then go to room 1, and pick up key 2.
We then go to room 2, and pick up key 3.
We then go to room 3. Since we were able to go to every room, we return true.
**Example 2:**
> **Input:** [[1,3],[3,0,1],[2],[0]]
**Output:** false
**Explanation:** We can't enter the room with number 2.
**Note:**
1. `1 <= rooms.length <= 1000`
2. `0 <= rooms[i].length <= 1000`
3. The number of keys in all rooms combined is at most `3000`.
---
## Explanation <a name="explanation"></a>
So the problem is simple: one room is open, and in it we may find keys to some other rooms. We can then visit those rooms and find more keys there. A room may also contain no keys at all. We cannot visit a room until we have its key. We need to find out whether we can visit all the rooms or not.
---
## Solution <a name="solution"></a>
Doesn't it look like a graph traversal problem? Rooms are like nodes, and the keys to other rooms present in a room are like edges to other nodes. We just need to check whether we can traverse all the nodes or not. So we can use **Breadth First Search (BFS)** or **Depth First Search (DFS)**. I am using a stack-based DFS approach.
A little enhancement is that I don't want to revisit a node we have already visited; for that we can use a **bitset** or a boolean array. One more enhancement: we can avoid collecting keys for rooms for which we already have keys. This can be done using another bitset.
So the solution is easy:
1. Mark `Room-0` visitable using the bitset `keysCollected` and push it onto stack `s`.
2. Now perform the operations below until the stack `s` becomes empty:
    1. Pop a room number `t` from the stack.
    2. If `Room-t` has not been visited yet, visit it and mark it visited using the bitset `visited`.
    3. Pick only those keys from this room that open a room which is not yet visitable, which can be checked via `keysCollected`.
    4. For the picked keys, mark the associated rooms visitable using `keysCollected` and push them onto stack `s`.
3. If the count of visited rooms per `visited` equals the total number of rooms `rooms.size()`, return `true`, else `false`.
---
## Implementation <a name="implementation"></a>
**C++ Code:**
```cpp
class Solution {
public:
bool canVisitAllRooms(vector<vector<int>>& rooms)
{
stack<int> s;
bitset<1000> visited;
bitset<1000> keysCollected;
keysCollected[0] = 1;
s.push(0);
while (!s.empty()) {
int t = s.top();
s.pop();
if (!visited[t]) {
visited[t] = 1;
for (auto i = rooms[t].begin(); i != rooms[t].end(); i++) {
if (!keysCollected[*i]) {
keysCollected[*i] = 1;
s.push(*i);
}
}
}
}
return visited.count() == rooms.size();
}
};
```
**Runnable C++ code:**
{% replit @shivams136/Leetcode-841-Keys-and-Rooms %}
Cover Photo by <a href="https://unsplash.com/@jaye_haych?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Jaye Haych</a> on <a href="/s/photos/keys?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Unsplash</a>
| shivams136 |
639,927 | sysctlbyname-improved v20210223 | sysctlbyname-improved version 20210223 is out! The FreeBSD Operating System maintains a Management I... | 0 | 2021-03-19T21:27:01 | https://alfonsosiciliano.gitlab.io/posts/2021-03-04-sysctlbyname-improved-20210223.html | freebsd, unix, kernel, sysctl | **sysctlbyname-improved version 20210223 is out!**
The [FreeBSD](https://www.freebsd.org) Operating System maintains a Management Information Base ("MIB") where an object represents a parameter of the system, the [sysctl](https://man.freebsd.org/sysctl/3) system call explores the MIB to find an object by its Object Identifier ("OID") to get or set the value of the parameter.
An OID is a series of numbers; it is possible to replace a number with a string to obtain an object name, for example [1.1] -\> "*kern.ostype*". The *sysctlbyname* system call finds an object by its name.
The purpose of [sysctlbyname-improved](https://gitlab.com/alfix/sysctlbyname-improved) is to allow *sysctlbyname* to handle:
- a name with an empty string level, example "**security.jail.param.allow.mount.**"
- an extended name for a CTLTYPE\_NODE with a defined handler, example "**kern.proc.pid.\<pid\>**"
Example with "*kern.proc.pid.1*" (the implementation of *sysctlbyname\_improved()* is in the project repo):
```c
#include <sys/types.h>
#include <sys/sysctl.h>
#include <sys/user.h>
#include <stdio.h>
#include <string.h>
int main()
{
size_t valuelen;
struct kinfo_proc kp;
printf(" ## sysctlbyname ##\n");
valuelen = sizeof(kp);
if (sysctlbyname("kern.proc.pid.1", &kp, &valuelen, NULL, 0) == 0)
printf("effective user id: %d\n", kp.ki_uid);
else
printf("kern.proc.pid.1: error\n");
printf(" ## sysctlbyname_improved ##\n");
valuelen = sizeof(kp);
if (sysctlbyname_improved("kern.proc.pid.1", &kp, &valuelen, NULL, 0) == 0)
printf("kern.proc.pid.1: (effective user id) %d\n", kp.ki_uid);
else
printf("kern.proc.pid.1: error\n");
return 0;
}
```
Output:
```default
## sysctlbyname ##
kern.proc.pid.1: error
## sysctlbyname_improved ##
kern.proc.pid.1: (effective user id) 0
```
Furthermore this project can be useful to convert an extended name in the corresponding OID, example "*kern.proc.pid.2021*" -> [1.14.1.2021], this feature is used by *sysctlmif\_oidextendedbybame()* function of the [sysctlmibinfo2](https://gitlab.com/alfix/sysctlmibinfo2) library.
-----------------------------------------------------
Properly speaking, *sysctlbyname-improved* is an extension of *sysctlinfo*, so this new version takes advantage of the improvements (mainly efficiency) in [sysctlinfo 20210222](alfonsosiciliano/sysctlinfo-20210222-b10).
To install the port [sysutils/sysctlbyname-improved-kmod](https://www.freshports.org/sysutils/sysctlbyname-improved-kmod/):
```default
# cd /usr/ports/sysutils/sysctlbyname-improved-kmod/ && make install clean
```
To add the package:
```default
# pkg install sysctlbyname-improved-kmod
```
**To know more**: <https://gitlab.com/alfix/sysctlbyname-improved>.
| alfonsosiciliano |
640,115 | test test | test content | 0 | 2021-03-20T03:56:25 | https://dev.to/tdnam/test-test-3gkl | test content | tdnam | |
640,552 | AI | There was a time, not so long ago, when artificial intelligence, machine learning, and natural langua... | 0 | 2021-03-20T13:59:14 | https://dev.to/raghavtroublemaker/ai-1d38 | There was a time, not so long ago, when artificial intelligence, machine learning, and natural language processing were only found in science fiction novels. Artificial intelligence, on the other hand, has been around for over 70 years. We were still working on a model of the most basic component of artificial intelligence at the time: artificial neurons. Artificial intelligence (AI) and machine learning had crept into the public's blind spot for a long time after that, all the while becoming smarter. It wasn't until IBM's Deep Blue defeated Garry Kasparov in 1997 that AI technologies gained traction and the data race began.
Of course, today's digital imperative includes AI, machine learning, and natural language processing. The benefits of using AI technologies in all shapes and sizes have prompted companies to invest heavily in their growth and application. Transportation, entertainment, and manufacturing operations have all seen tremendous growth and result of AI adoption.
AI's ability to automate sales and marketing activities has shown great promise in terms of lowering human effort and costs while still boosting top-line growth. Artificial intelligence (AI) reduces the time spent on administrative tasks, improves a brand's ability to provide customised services to consumers, and creates previously untapped revenue streams. This is why AI is at the heart of the most promising marketing automation initiatives. | raghavtroublemaker | |
640,791 | Self-Made-Robot: Review Robots Projects | Building a robot from scratch can be a daunting task. To understand what is possible, and to gain ins... | 0 | 2021-03-22T05:43:10 | https://dev.to/admantium/self-made-robot-review-robots-projects-1ka7 | robots | Building a robot from scratch can be a daunting task. To understand what is possible, and to gain inspiration, I researched various projects from the community. This article presents a list of robots with different actuators, sensors and capabilities. I also detail the hardware and software that is used in these projects.
_This article originally appeared at [my blog](https://admantium.com/blog/robo02_community_projects/)_.
## Two Wheel Self-Balancing Robot
_Source: <https://electricdiylab.com/diy-self-balancing-robot/>_

### Features
- Self balancing when external force is applied
- Can be controlled via App
### Hardware
- Arduino Nano
- 2 Nema 17 Stepper Motors
- 2 A4988 Motor Control Units
- Bluetooth Module HC-05
- Gyro Sensor MPU6050
### Software
- Custom [balancing firmware](https://github.com/mahowik/BalancingWii) based on firmware for [quad-copter/multi-rotor flying](http://www.multiwii.com/)
- [Android app for Bluetooth connection](http://ez-gui.com/) (not maintained anymore)
## Self-Moving Line Following Robot
_Source: <https://www.electronicshub.org/arduino-line-follower-robot/>_

### Features
- Follows a printed black line on the ground
### Hardware
- Arduino UNO or Nano
- L293D Motor Driver
- Custom Build IR Sensor Module (IR LED, Photo Diode)
- Geared Motors
### Software
- Custom Arduino code for [Motor controller](https://gist.github.com/elktros/df71b575474f6ae96e3edcc149d30302#file-arduino-line-follower-robot)
## Gesture Controlled Self-Moving Robot
_Source: <https://rootsaid.com/spinelcrux-gesture-controlled-robot-1/>_

### Features
- Gesture Controlled
- Selfmade controlling glove
- Moves by Tracks
### Hardware
- Glove
- Arduino MKR1000
- MMA7361 accelerator
- Flex sensor
- Breadboard
- Robot
- DIY tank Kit
- Raspberry Pi 3
- Motor controller: Breadboard & L293D driver board
- Raspberry Pi camera module V2-8 Megapixel
### Software
- Custom Arduino code for [motor controller](https://github.com/rootsaid/gesture_robot/blob/master/arduino_robot_code.ino)
- Custom Arduino code for [glove gestures](https://github.com/rootsaid/gesture_robot/blob/master/gesture_sensor.ino)
## Pickup & Deliver Voice Controlled
_Source: <https://www.instructables.com/Object-Finding-Personal-Assistant-Robot-Ft-Raspber/>_

### Features
- Voice controlled
- Object recognition
- Distance measuring with ultra-sonic waves
- Moves by tracks
- Picks objects with shovels
### Hardware
- [Rover 5 platform](https://solarbotics.com/product/50858/)
- Raspberry Pi 3b
- Geared Brush DC
- USB Webcam
- Google Coral USB
- Ultrasonic distance sensor
- Stepdown converter
### Software
- Voice recognition: [Snowboy Hotword Detection](https://github.com/Kitt-AI/snowboy/) - closed 2020-12-31
- Robo claw Controller: Basicmicro Python library.
## Self-Navigating Robot
_Source: <https://github.com/RBinsonB/Nox_robot>_

### Features
- Controlled by shell commands
- Automapping of its environment
- Self-Navigating
### Hardware
- Raspberry Pi 3b
- Arduino MEGA 2560, Adafruit motor shield
- Kinect
- Custom 2-wheel, 2-caster
### Software
- ROS
- Navigation: [ROS navigation](https://wiki.ros.org/navigation)
- Surrounding mapping: [Simultaneous Localization and Mapping (SLAM)](http://wiki.ros.org/gmapping)
- Self-Navigation: [TEB local planner](https://wiki.ros.org/teb_local_planner)
- Camera: [Freenect](http://wiki.ros.org/freenect_launch)
## Self Navigating Image Recognizing Robot
_Source: <https://www.youtube.com/watch?v=U0--ZJfmUEM>_

### Features
- Controlled by Custom Remote Controller (Touch Screen/Joysticks)
- Automapping of its environment
- Self-Navigating
- Object recognition
### Hardware
- Chassis
- [robotis Turtle Bot 3](https://emanual.robotis.com/docs/en/platform/turtlebot3/features/#specifications)
- Raspberry Pi 3B+ microcontroller
- Raspberry Pi Camera
- Dynamixel wheels
- LiDAR Sensor
- Image recognition
- NVidia Jetson Nano
- Additionally
- Raspberry Pi4 based custom remote control
### Software
- ROS
- Navigation: [ROS navigation](https://wiki.ros.org/navigation)
- Surrounding Mapping: [Simultaneous Localization and Mapping (SLAM)](http://wiki.ros.org/gmapping)
- Laser Sensor
## Conclusion
This article showed various robot projects: self-navigating robots that detect objects and humans, and robots that can be controlled with apps, voice, or gestures. These projects are a great inspiration, and with the software and hardware they use, you can build a robot of your own too.
| admantium |
640,899 | JSON in PostgreSQL | PostgreSQL has included support for the JSON data type since version "9.2". In this tutorial we will see... | 0 | 2021-03-20T23:07:45 | https://dev.to/__alexander_/json-en-postgresql-27di | postgres | ---
title: JSON in PostgreSQL
published: true
description:
tags: postgresql
summary: PostgreSQL has included support for the JSON data type since version "9.2". This tutorial shows examples of how to work with inserts and queries on JSON fields.
---
PostgreSQL has included support for the JSON data type since version "9.2". In this tutorial we will look at examples of how to work with inserts and queries on JSON fields.
## What is JSON?
JSON is the acronym for "JavaScript Object Notation". It is a simple text format used for data interchange.
Its structure is similar to the following example:
```json
{
    "autor": "Isaac Asimov",
    "pais": "Rusia",
    "libros": [
        {
            "titulo": "Yo robot",
            "isbn": "9780739312698"
        },
        {
            "titulo": "El hombre bicentenario",
            "isbn": "9788440677778"
        }
    ]
}
```
## Creating a table with a JSON column
Declaring a JSON column only requires the field name plus the "json" data type, for example:
```sql
CREATE TABLE book (
id serial,
info json
);
```
In this table we will store information about groups of books by author.
## Inserting records
```sql
INSERT INTO book (info)
VALUES ('{"autor": "Isaac Asimov","pais": "Rusia","libros":[{"titulo": "Yo robot","isbn": "9780739312698"}]}');
```
or, to insert several records at once:
```sql
INSERT INTO book (info)
VALUES ('{"autor": "Patrick Rothfuss","pais": "EEUU","libros":[{"titulo": "El nombre del viento","isbn": "9788499082479"}]}'),
('{"autor": "Arthur C. Clarke","pais": "Reino Unido","libros":[{"titulo": "Cánticos de la lejana Tierra","isbn": "9788401322013"}]}');
```
## Queries
To view the information we have already stored, we can use:
```sql
SELECT id, info FROM book;
```
and to show a single attribute of the JSON, we can use the "->" operator:
```sql
SELECT info->'autor' FROM book;
```
| autor (json) |
| ----------- |
| "Isaac Asimov" |
| "Patrick Rothfuss" |
| "Arthur C. Clarke" |
Note that the operator returns a "json" type, but if we use the "->>" variant it is converted to a "text" type:
```sql
SELECT info->>'autor' as autor, info->'pais' as pais FROM book;
```
| autor (text) | pais (json) |
| ----------- | ----------- |
| Isaac Asimov | "Rusia" |
| Patrick Rothfuss | "EEUU" |
| Arthur C. Clarke | "Reino Unido" |
## Queries with conditions (WHERE)
For example, if we want to get the records whose authors belong to a given country, we can run a query similar to:
```sql
SELECT info FROM book WHERE info->>'pais' = 'Rusia';
```
Note that to compare a JSON value against a "text" value we must use the "->>" operator.
- - -
More complex queries and operations are possible, but at least as an introduction, I hope this post has been helpful.
## References
* <a href="https://www.postgresql.org/docs/12/datatype-json.html" target="_blank">JSON Datatype</a>
* <a href="https://www.postgresql.org/docs/12/functions-json.html" target="_blank">JSON Functions and Operators</a>
This post first appeared on: [JSON en Postgresql](https://alexanderae.com/json-postgresql.html)
| __alexander_ |
641,015 | Ultimate Express & Mongo Reference | This is not a guide on how to use express and mongo, but a useful reference especially for those star... | 0 | 2021-03-21T03:38:45 | https://dev.to/alexmercedcoder/ultimate-express-mongo-reference-kb3 | mongodb | This is not a guide on how to use express and mongo, but a useful reference especially for those starting to learn these technologies. This guide will serve as documentation of the basics of all the main functions and patterns in these libraries.
## Express
- **Installation Command:** `npm install express`
- **Importing:** `const express = require("express")`
#### express()
The express function returns a new application object. The application object has several different functions.
- can register middleware and routes
- can initiate the http listener on a particular port
**Typical use**
```js
// import express
const express = require("express")
// create an application object
const app = express()
// create new router
const router = express.Router()
// register middleware with the application, (each request will run through each of these one by one)
app.set("view engine", "ejs") // specify the view engine for view rendering
app.use(express.static("public")) // serve a folder called "public" as static
app.use(express.json()) // parse json bodies based on Content-Type header
app.use(express.urlencoded({extended: false})) // parse urlencoded form data
app.use("/", router) // register the router with the application
// register routes with the router
router.get("/path", getHandlerFunction) // invokes function for matching get request
router.post("/path", postHandlerFunction) // invokes function for matching post request
router.put("/path", putHandlerFunction) // invokes function for matching put request
router.delete("/path", deleteHandlerFunction) // invokes function for matching delete request
// Initiate the server listening on a port with a function that runs when server starts listening
app.listen(3000, () => console.log("Listening on Port 3000"))
```
#### The Request and Response Objects
When the listener receives a request, two objects are generated, Request and Response (req, res), which are passed to each middleware function and route one by one until a response is sent. These request and response objects have several useful properties.
**REQUEST**
- req.params: an object with any defined URL parameters (defined in router paths using colons... `"/:these/:are/:params"`)
- req.query: an object of any url queries in the request url
- req.headers: an object of all the headers in the request
- req.method: a string detailing the method of the request ("get", "post", etc.)
- req.body: object containing the data from the request, must be parsed by middleware that can parse the type of data sent. (express.json for json data, express.urlencoded for form data)
**RESPONSE**
- res.send("string"): function for sending a text response, will auto-send arrays and objects as json responses. Will send html strings as html responses.
- res.json({data}): send a json response
- res.render("template", {data}): tells view engine to render the specified template in the views folder with the data in the second argument. The resulting HTML file is sent as a response.
- res.set("header-name","header-value"): function for setting a response header
- res.redirect("/path"): redirect the browser to a different path
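As an illustrative sketch of how these properties combine, here is a hypothetical handler (the route, handler name, and data are made up for this example; they are not part of Express itself):

```javascript
// Hypothetical handler for a route like router.get("/books/:id", getBookHandler)
const getBookHandler = (req, res) => {
  const { id } = req.params;        // url parameter from "/books/:id"
  const format = req.query.format;  // url query like "?format=short"
  if (!id) {
    // set a status code and send a json error body
    return res.status(400).json({ error: "missing id" });
  }
  res.set("Cache-Control", "no-store"); // set a response header
  res.json({ id, format: format || "full" });
};
```

Registering it with `router.get("/books/:id", getBookHandler)` would wire it to matching GET requests.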
#### Popular 3rd Party Middleware
- `npm install cors` : middleware for setting cors headers
- `npm install method-override`: override request methods based on a url query
- `npm install morgan`: request logging
- `npm install express-session`: use session cookies with express
- `npm install connect-mongo`: session cookie store using mongodb
#### Popular View Engines
- EJS
- Handlebars
- Mustache
- Nunjucks
- Pug
- Marko
- Express-React-Views
- Liquid
## Mongoose
ODM (Object Document Mapper/Manager) for Mongo databases. Allows you to connect to mongo databases and create model objects.
- `npm install mongoose`
#### Connecting to Mongo
```js
const mongoose = require("mongoose")
// connect to the database, 1st argument the connection string, 2nd argument a configuration object
mongoose.connect(MongoURI, ConfigObject)
//set responses to database events
mongoose.connection
.on("open", () => console.log("connected to mongo"))
.on("close", () => console.log("disconnected from mongo"))
.on("error", (error) => console.log(error))
```
#### Creating a Model Object
```js
const {Schema, model} = require("mongoose")
// create schema
const ModelSchema = new Schema({
property1: String,
property2: Number,
property3: Boolean
}, {timestamps: true})
// create model specifying model/collection name and schema
const Model = model("Model", ModelSchema)
```
#### 3 Ways to Write Queries with Your Model (with Error Handling)
```js
// Callback Syntax
Model.find({}, (error, data) => {
if (error){
console.log(error)
} else {
console.log(data)
}
})
// .then syntax
Model.find({})
.then((data) => console.log(data))
.catch((error) => console.log(error))
// using async/await
const queryFunction = async() => {
try{
const data = await Model.find({})
console.log(data)
} catch(error){
console.log(error)
}
}
queryFunction()
```
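The three styles differ only in how the asynchronous result is consumed. As a framework-free sketch (using a stand-in `fakeFind()` promise in place of a real Mongoose model, so it runs without a database), the `.then` and `async/await` versions behave identically:

```javascript
// Stand-in for Model.find({}): resolves with fake documents
const fakeFind = () =>
  Promise.resolve([{ name: "doc1" }, { name: "doc2" }]);

// .then syntax
fakeFind()
  .then((data) => console.log(data.length))
  .catch((error) => console.log(error));

// async/await syntax: same behavior, different shape
const queryDocs = async () => {
  try {
    const data = await fakeFind();
    return data.length;
  } catch (error) {
    console.log(error);
    return 0;
  }
};
```

Whichever style you pick, keep it consistent so error handling never gets silently dropped.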
#### Model Functions
```js
Model.deleteMany()
Model.deleteOne()
Model.find()
Model.findById()
Model.findByIdAndDelete()
Model.findByIdAndRemove()
Model.findByIdAndUpdate()
Model.findOne()
Model.findOneAndDelete()
Model.findOneAndRemove()
Model.findOneAndReplace()
Model.findOneAndUpdate()
Model.replaceOne()
Model.updateMany()
Model.updateOne()
```
## Further Reading
- [Express Documentation](https://expressjs.com/en/4x/api.html)
- [Mongoose Documentation](https://mongoosejs.com/docs/guides.html) | alexmercedcoder |
641,197 | VSCode terminal & meta key | I had the same issue as described here: https://superuser.com/questions/1549993/how-to-map-option-shi... | 0 | 2021-03-21T09:17:52 | https://dev.to/mikkom/vscode-terminal-meta-key-273b | vscode, terminal, zsh | I had the same issue as described here: https://superuser.com/questions/1549993/how-to-map-option-shift-digit-keys-in-zsh-using-bindkey
I went with the same solution that disables `terminal.integrated.macOptionIsMeta` and uses bindkey to map the traditional readline meta key commands. I added `insert-last-word` and had to modify the bound characters to some extent. Still need to figure out how to best handle dead keys (`option-t` and `option-u` for example)...
```
bindkey '' accept-and-hold # option-a
bindkey 'ƒ' forward-word # option-b
bindkey 'ç' fzf-cd-widget # option-c
bindkey 'ð' delete-word # option-d
bindkey '›' backward-word # option-f
bindkey '¸' get-line # option-g
bindkey '˛' run-help # option-h
bindkey 'fi' down-case-word # option-l
bindkey '‘' history-search-forward # option-n
bindkey 'π' history-search-backward # option-p
bindkey '•' push-line # option-q
bindkey 'ß' spell-word # option-s
bindkey '†' transpose-words # option-t
bindkey 'ˀ' up-case-word # option-u
bindkey 'Ω' copy-region-as-kill # option-w
bindkey '≈' execute-named-cmd # option-x
bindkey 'µ' yank-pop # option-y
bindkey '…' insert-last-word # option-.
```
Would be nice if VSCode supported choosing just the left option key as meta, same as iTerm does. | mikkom |
641,318 | Separation of concerns with custom React hooks | React is without a doubt one of the most popular front-end JavaScript frameworks / UI libraries aroun... | 0 | 2021-03-21T13:55:06 | https://dev.to/areknawo/separation-of-concerns-with-custom-react-hooks-3aoe | react, javascript, webdev | **React** is without a doubt one of the **most popular** front-end JavaScript frameworks / UI libraries around. However, it doesn't mean that it's the best or that everyone likes it.
Among some of the more **technical reasons** behind people disliking React is, surprisingly, one of its biggest features as well - **[JSX](https://facebook.github.io/jsx/)**. An extension to standard JavaScript that allows you to use **HTML-like syntax** in your React components.
How such a recognizable part of React, one that clearly stands to improve readability, and ease-of-writing one's code can be turned into a con? Well, it all comes down to the **separation of concerns**.
# Separation of concerns
Before we dive in, I'd like to explain exactly what separation of concerns is, not to leave out any nuances.
So, separation of concerns means having **clear lines** between **different concepts**/pieces of something. In programming, JSX is a clear example of ignoring this rule. No longer do we have a *"template"* describing component **structure** in a separate HTML file and its **logic** in a JS one, but both (or more if you're using CSS-in-JS) are mixed together to form what some consider *perfect harmony*, and others - *uncontrolled chaos*.
## Personal preference
Alright, so mixing the *"view"* and the *"logic"* together brings about the disruption of the separation of concerns. But is that really bad and does that mean that you always have to keep your component's view and logic separately?
No and no. First off, a lack of separation of concerns isn't necessarily a bad thing. It's a matter of **personal preference** of a developer or a team, and other guidelines. You don't have to keep your logic and view separately. But if you do, it still doesn't mean that each one of them needs a separate file. Perfect examples of that are [Vue Single File Components](https://vuejs.org/v2/guide/single-file-components.html#What-About-Separation-of-Concerns) (SFCs) or simply pure HTML file with `<script>` and `<style>` tags inside them.
# React hooks
Separation of concerns is one thing, and **React hooks** the other.
So, **[React hooks](https://reactjs.org/docs/hooks-intro.html)** have been around for quite a while now (almost 2 years since [stable release](https://reactjs.org/blog/2019/02/06/react-v16.8.0.html)), so they are rather well-known and already *"covered to death"* by many other blogs and devs alike. But let's have a brief overview one more time.
React hooks allow developers to add **state** and use other special **React features**, inside **functional components**, as opposed to the prior requirement of class-based ones. There are 10 of them built-in (*v17.0.1*), each for handling different React features, from which only 4 are commonly-used (`useState()`, `useEffect()`, `useContext()`, and `useRef()`) and you can naturally **[create your own](https://reactjs.org/docs/hooks-custom.html)**. And it's this one last bit of information that we're most interested in.
# Custom hooks
While React hooks themselves should be somewhat well-known, the process of **creating a hook** of your own is a bit less likely.
You see, the built-in hooks are *"more than enough"* to built solid React components, and if not, there's almost certainly an open-source library of some kind in the immense **React ecosystem** that *"hookifies"* the exact functionality you seek. So, why bother with learning more about custom hooks if this isn't necessary?
## Creating a hook
That's a fair point. Custom hooks aren't necessary to do anything, but they can certainly make your life easier - especially if you like separation of concerns.
But everything will come in time. First - how to make a custom hook? Well, it couldn't be easier. A custom hook is just a **function** that uses **other hooks**. It's really that simple. It should also follow the ["rules of the hooks"](https://reactjs.org/docs/hooks-rules.html), which can be easily done if you're using **ESLint** and [proper official config](https://www.npmjs.com/package/eslint-plugin-react-hooks), but that's it.
To be honest, you don't even have to do any of those things - using other hooks is not required (but rather common), and if your code is of good quality, custom hook name starts with *use,* and you use hooks as intended (at the very top-level of React component), then you should be fine.
## Examples
Here's a very simple hook that runs the provided callback every second (because I couldn't think of anything better 🙃):
```javascript
const useTick = (callback) => {
const handle = setInterval(() => {
callback();
}, 1000);
return () => {
clearInterval(handle);
};
};
```
...and here's how you can use it:
```javascript
const Component = () => {
const stopTick = useTick(() => {
console.log("Tick");
});
return <button onClick={stopTick}>Stop ticking</button>;
};
```
As for a hook that depends on another hook, here's one that forces your component to update without noticeable state change by using `useState()` *"in the background"*.
```javascript
const useForceUpdate = () => {
const [value, setValue] = useState(true);
return () => {
setValue(!value);
};
};
```
...and here's a usage example:
```javascript
const Component = () => {
const forceUpdate = useForceUpdate();
return <button onClick={forceUpdate}>Update component</button>;
};
```
As a side note, it's worth saying that such a *force update* usually shouldn't be used. Most of the time it's either pointless or indicates potential errors in your code. The only exception to this rule is uncontrolled components.
# Solution proposal
By now I think you see where this is going. No matter how pointless my examples were, both of them still share one advantage - they **abstract logic** away from the main component function, making it look cleaner as result.
Now, it's only a matter of scaling this idea up, potentially moving the resulting hook away from the component file itself, and voila! You've got yourself a pretty good separation of concerns - in React!
It might seem like a simple revelation, but I only came to it a while ago, and having used it in my React projects since then, I must admit - it's a pretty nice solution.
You might agree with me on this idea or not (leave your comments down below), but it doesn't really matter. I'm just presenting a potential strategy to arrange your code that I find pretty nice, in hopes that it'll help you as well.
## Best practices
So, if you end up at least trying out such an approach in one of your projects, then I do have some *"best practices"* that I personally follow and that might be of interest to you:
- only apply this tactic if your component's logic takes **\>10 lines** or has a lot of smaller hook calls;
- put your hook in a separate file, which ideally should have **no JSX** in it (`.js` vs `.jsx` files);
- keep your **naming consistent** - e.g. hook in `logic.js` or `hook.js` (with appropriate hook naming as well, e.g. `useComponentNameLogic()`) and the component itself in `view.jsx` or `index.jsx` under a single folder, with optional `index.js` file (if it isn't reserved for the component already) for re-exporting the necessary bits;
- keep only the simplest callbacks and event listeners in the JSX file, and move the rest to the hook;
- if using **CSS-in-JS library** that deals with hooks (e.g. `useStyles()`) then place it in a separate file, or at the top of the component file if it's not too big;
- remember to **organize your hook's code** correctly - separate some of it to outer functions, and maybe even smaller hooks, if the logic is reused across different components.
# What do you think?
That's my proposal for implementing separation of concerns in React. Is this the best approach that you must use? Definitely not, besides there's no *"best approach"* at all. Again, I just discovered that this one fits my needs, and I wanted to share it with you in hopes that it might help you as well.
So, what are your thoughts on such an approach? Would you like to see more posts where I share some personal **code style tips** in the future? If so, let me know in the **comment section** below.
As always, for more content like this, be sure to follow me on [Twitter](https://twitter.com/areknawo), [Facebook](https://facebook.com/areknawoblog), or through [my newsletter](https://areknawo.com#newsletter). Thanks for reading and happy coding!
> This post was written with ease, made grammatically-correct, and cross-posted here within 1 click thanks to **[CodeWrite](https://codewrite.io/)** with its great editor, smooth Grammarly integration, and "one-click publishing". Try it for free, and use the code `first100` to get **20% off** your subscription (only **$2.40/month**!) | areknawo |
641,810 | Blocking Time for Tasks with Toggl | For people doing mainly project work, planning daily tasks is a constant struggle, which can cause se... | 0 | 2021-03-22T07:05:39 | https://nikoheikkila.fi/blog/blocking-time-for-tasks-with-toggl/ | productivity, tooling, go, showdev | For people doing mainly project work, planning daily tasks is a constant struggle, which can cause severe issues with productivity and mental well-being.
As a software developer with multiple projects on my daily schedule, I used to waddle in a sea of chaos. Attempting to swim and survive throughout my days impacted my work in many ways. The loss of context made it difficult to focus on tasks, which caused my productivity to halt and gave a diminishing sense of accomplishment in return.
Furthermore, when I needed to recall what I had been doing on a given day, I had to resort to checking my browsing history. As you might guess, it had little relevance to what I had actually worked on. Daily standup meetings were mainly a burden where I was supposed to figure out what I did yesterday and what I would do today. Similarly, the reported hours contained some time for this project and some time for that.
The technique of surviving with simple to-do lists (where **Todoist** is still the king) that I had learned previously was suitable for single-project environments, where I could split my days between writing and reviewing code in one domain. It didn't scale at all for multiple sequential tasks. Fortunately, I later discovered the magnificent technique called [_time blocking_](https://todoist.com/productivity-methods/time-blocking), where every minute of the day is given a purpose.
Let me tell you how.
## Basics of Time Blocking
Time blocking means saving time for important — not necessarily urgent — tasks in your calendar. With your chosen tool, start by dividing a day into slices and decide **one and only one theme** you're working on during a particular time block.
Don't worry. With time blocking, you get to keep your regular breaks and days off. This post is not one of those foolish blurbs hailing 60+ hour work weeks and waking up 4:30 every day to solve productivity issues.
After dividing your days into slices, your day might look a bit like in the picture below. Note that the themes are only examples, and I usually write those down precisely to help me later recall what I worked on.

## How Do I Split the Day?
My typical day is as follows. Due to the dynamic nature of my profession, some variations may occur, but mostly it's easy to stick with the plan.
### From Morning to Noon
I start working between 8–9 in the morning and sketch my time blocks for the day. Next, I begin responding to urgent stuff like emails and messages before clearing out the path for focusing on tasks. Then I grab a few planned tasks and complete them at my own pace.
After the first round of intensive focus, I attend a daily standup with my team. Here I leverage the time blocks from yesterday and today while explaining my status and possible blockers. If any meetings have been scheduled, they are usually held around mid-day.
### Lunch Break / Time Off
With the first half of the day completed, I allow myself to take a decent amount of time off eating, drinking and relaxing. Together with the daily shutdown, it's one of the most important events of the day, and you shouldn't skip it.
### From Noon to Afternoon
When I return to focus work, I then go over the pending code reviews or help team members debugging issues. This kind of work fits my afternoon mental state as most of the urgent stuff has already been done and now there's time to shine some creative light.
To call it a day, I begin slowing the pace down around 16–17 and make note of all the tasks I've achieved. Shutting down at regular times is essential for the mind to function correctly.
## Toggl Track to the Rescue
I first tried using traditional paper and the **Remarkable** tablet for time blocking, but having my schedules tied to one device quickly drove me to use [**Toggl Track**](https://track.toggl.com). It comes with excellent desktop, mobile, and web apps allowing me to access and edit my schedule from virtually anywhere. I don't use Toggl's flagship timer feature that much. The very intention of time blocking is to plan and control my time spent instead of spontaneous tracking.
Toggl web app has a nifty calendar view which is ideal for drawing blocks of time with a mouse and marking them to specific projects. You can also extend it with your calendar. I've hooked my Google work calendar into it, which allows me to duplicate each calendar event as a time block. Fun fact: when you're involved in a project with all-day meetings the time-blocking is automatically there, but it will hurt you in other ways.
## Why Command-Line? Why Not?
{% github https://github.com/nikoheikkila/hours %}
However, one thing I was missing from Toggl was a performant terminal app. So, as a hobby project and because my current employer sponsors the personal time spent in open-source, I started writing one. Enter [**Hours**](https://github.com/nikoheikkila/hours). It's a lightweight, single-binary, open-source terminal companion for Toggl written in **Go**. At the moment, Hours ships the following features:
- listing the saved time entries in various formats using the public Toggl API
- styling the plain text output with `termenv`
- converting the output to Markdown, JSON, and CSV allowing you to post-process data with other scripts
Fortunately, the Toggl API is convenient to work with, and I've been delighted by how easy it is to marshal Go structures to JSON and vice versa. Thus, more features are definitely on the roadmap. I'm thinking of implementing features such as starting and stopping timers, without forgetting the basic time entry CRUD operations (create, read, update, and delete). Since **Hours** is still in the very early stages of development, you can help me improve it!
## Conclusion
Blocking time ensures I can complete essential tasks while still reacting to urgent tasks. Having a clear daily plan helps me to recall individual days back to mind. I can also enjoy the rewarding sensation of accomplishing tasks. Most importantly, it helps me to stay away from unnecessary distractions. By the way, I don't have any desktop or web notifications enabled while working, so deal with it. Naturally, time blocking can't eliminate all the stress that a busy world with many projects brings forth, but it guarantees continuous calmness throughout the day.
Throw me with a message if you need help organizing your day with time blocking.
_Photo by **Aron Visuals** on Unsplash_
| nikoheikkila |
641,930 | What is Cloud Computing and What are the benefits of using it | Cloud computing is a term used to portray the utilization of equipment and programming conveyed by me... | 0 | 2021-03-22T08:59:24 | https://dev.to/sherihans123/what-is-cloud-computing-and-what-are-the-benefits-of-using-it-1280 | cloud, cloudcomputing | Cloud computing is a term used to portray the utilization of equipment and programming conveyed by means of the organization (normally the Internet). The term comes from the utilization of cloud molded image that addresses deliberation of rather a complex foundation that empowers crafted by programming, equipment, calculation, and far off administrations.
Basically, cloud computing will be computing dependent on the web. Previously, individuals would run applications or projects from programming downloaded on an actual PC or worker in their structure. Cloud computing permits individuals admittance to similar sorts of utilizations through the web.
Cloud computing depends on the reason that the primary computing happens on a machine, regularly far off, that isn't the one at present being utilized. Information gathered during this cycle is put away and prepared by distant workers (likewise called cloud workers). This implies the gadget getting to the cloud doesn't have to fill in as hard.
By facilitating programming, stages, and information bases distantly, the cloud workers let loose the memory and computing force of individual PCs. Clients can safely get to cloud administrations utilizing certifications got from the cloud computing supplier. If you are a Cloud aspirant and preparing for any [Cloud computing certification](https://intellipaat.com/course-cat/cloud-computing-courses/), this article will be helpful for you. Considering that, let us jump into our conversation on the benefits of Cloud Computing.
Cloud computing benefits
Here's a list of key advantages an enterprise can expect to achieve when adopting a cloud infrastructure.
1. Efficiency/cost reduction
By using cloud infrastructure, you don't have to spend huge amounts of money on purchasing and maintaining equipment. This drastically reduces CAPEX costs. You don't have to invest in hardware, facilities, utilities, or building out a large data center to grow your business. You don't need large IT teams to handle your cloud data center operations, as you can rely on the expertise of your cloud provider's staff.
Cloud also reduces costs related to downtime. Since downtime is rare in cloud systems, you don't have to spend time and money fixing potential issues related to it.
2. Information security
One of the significant worries of each business, paying little mind to measure and industry, is the security of its information. Information breaks and different cybercrimes can crush an organization's income, client reliability, and brand situating.
Cloud offers many progressed security includes that ensure that information is safely put away and taken care of.
Cloud stockpiling suppliers carry out gauge assurances for their foundation and the information they measure, such as confirmation, access control, and encryption. From that point, most ventures supplement these insurances with added safety efforts of their own to reinforce cloud information security and fix admittance to delicate data in the cloud.
3. Adaptability
Various organizations have diverse IT needs - a huge undertaking of 1000+ workers will not have similar IT prerequisites as a beginning up. Using the cloud is an extraordinary arrangement since it empowers endeavor to effectively - and rapidly - scale up/down their IT offices, as per business requests.
Cloud-based arrangements are ideal for organizations with developing or fluctuating transfer speed requests. On the off chance that your business requests increment, you can undoubtedly build your cloud limit without putting resources into the actual framework. This degree of dexterity can give organizations utilizing cloud computing a genuine benefit over contenders.
This versatility limits the dangers related to in-house operational issues and upkeep. You have superior assets available to you with proficient arrangements and zero in advance speculation. Versatility is most likely the best benefit of the cloud.
4. Portability
Cloud computing permits portable admittance to corporate information through cell phones and gadgets, which is an extraordinary method to guarantee that nobody is at any point avoided with regards to the circle. Staff with occupied timetables, or who live far away from the corporate office, can utilize this component to stay up with the latest with customers and collaborators.
Assets in the cloud can be effortlessly put away, recovered, recuperated, or handled with only a few snaps. Clients can gain admittance to their chips away at the-go, day in and day out, by means of any gadgets of their decision, in any side of the world as long as you stay associated with the web. What's more, every one of the overhauls and updates is done consequently, off-sight by the specialist co-ops. This saves time and collaboration in keeping up the frameworks, immensely diminishing the IT group jobs.
5. Disaster recovery
Data loss is a major concern for all organizations, along with data security. Storing your data in the cloud guarantees that it is always available, even if your equipment, such as laptops or PCs, is damaged. Cloud-based services provide quick data recovery for all kinds of emergency scenarios - from natural disasters to power outages.
Cloud infrastructure can also help you with loss prevention. If you rely on the traditional on-premises approach, all your data will be stored locally, on office computers. Despite your best efforts, computers can malfunction for various reasons - from malware and viruses to age-related hardware deterioration, to simple user errors.
But if you upload your data to the cloud, it remains accessible from any computer with an internet connection, even if something happens to your work computer.
6. Control
Having control over sensitive data is vital to any company. You never know what can happen if a document gets into the wrong hands, even if those are just the hands of an untrained employee.
Cloud gives you complete visibility and control over your data. You can easily decide which users have what level of access to what data. This gives you control, but it also streamlines work, since staff will easily know which documents are assigned to them. It will also increase transparency and collaboration, since multiple people can work on a single version of a document, and there's no need to have copies of the same document in circulation.
7. Competitive edge
Not every organization will move to the cloud, at least not yet. However, organizations that adopt the cloud find that the many advantages it offers positively impact their business.
Cloud adoption grows every year as organizations realize that it gives them access to world-class enterprise technology. And if you implement a cloud solution now, you'll be ahead of your competitors.
Conclusion
Cloud computing adoption is on the rise every year, and it doesn't take long to see why. Enterprises recognize the benefits of cloud computing and see how it affects their production, collaboration, security, and revenue.
By using a cloud-based solution, an enterprise can prevent a lot of the problems that plague organizations relying on on-premises infrastructure.
| sherihans123 |
641,932 | 5 creative ideas for virtual team building events for startups 🎭 | Creating a place where people love to work and collaborate can be challenging in a fast growing start... | 0 | 2021-03-22T09:15:55 | https://dev.to/n8n/5-creative-ideas-for-virtual-team-building-events-for-startups-26bl | hackathon, automation, startup, team | Creating a place where people love to work and collaborate can be challenging in a fast growing startup. A constant high number of new joiners and a remote-first work culture are not the best circumstances for team building activities in startups. Luckily, there are still many opportunities to connect your remote team members without meeting in person.
Here are five creative ideas for remote team building events for startups.
1\. One-Day Hackathon
---------------------
Organise a one-day hackathon with your team to automate tasks or build low-code products.

If you want to bring your team together, enhance cross-functional collaboration, and increase your startup's efficiency at the same time, this is the must do event. To get this started you need to set up teams of 2-3 people. It's important that the team has mixed technical skill levels to ensure that everyone can build a product or automate processes. There are also many no-code tools available (e.g. n8n, Webflow, Airtable, MailChimp etc.) which make it possible to run the event for non-developers. In preparation, it is advisable to organise team meetings in advance to give everyone time to decide what to build or which processes to automate. There are many options, like building a dashboard to automatically track your KPIs or automating your customer onboarding processes. You can find some ideas for workflows [here](https://n8n.io/blog/).
As for the schedule, you can divide the day into five sections:
1\. Welcome and icebreaker
2\. Hacking Part I
3\. Virtual team lunch
4\. Hacking Part II
5\. Presentation, celebration, and drinks.
**Group size:** teams of 2-3, no limit for the number of teams
**Duration:** whole day
**Best for:** dev focused teams, cross-functional collaboration, mixed technical skill levels
2\. Art night
-------------
Your office walls could use some color or you just want to take some time out and get creative? Then this will be the perfect team event for you.

You could use the art night to paint a collage for your office, where every team member creates one piece of the picture. Even if it takes a bit of effort to draw the first lines, it is an excellent activity to get everyone out of their stressful everyday life and have a lot of fun together. It's even nicer when you all listen to the same music and have a tasty drink while painting. It is amazing to see how the event can strengthen your team affiliation!
If you want someone to organise the event and guide your painting session, you should find a local art teacher or school to run the session, ideally one which can also provide paint and materials to the team.
**Group size:** no limit
**Duration:** 2-3 hours
**Best for:** team affiliation, creativity, relaxing
3\. Among Us
------------
30 min coffee chats, Thursday after-work or after your next lunch break? Honestly, there is always time for a quick round of Among Us.

If you haven't heard about it, don't worry you will learn it easily. You can play the game with up to 10 people. The game takes place on a spaceship where all crew members need to complete different tasks. Unfortunately, up to 3 team members are Impostors that sabotage and kill crew mates. The goal of the crew is to identify the Impostor(s); the goal of the Impostor(s) is to kill everyone before completing their tasks. Prep for this quick team event is pretty simple: You only need a video meeting and the Among Us app on your smartphone. I swear your team will love this.
**Group size:** 3-10
**Duration:** 15 min
**Best for:** taking a break from a long day, new teams, fun
4\. Fitness activities
----------------------
With so many of us working from home, it can be difficult to get motivated to get up and get moving. Why not take the opportunity to boost the team spirit and work out together?

This will not only be really beneficial for everyone's physical and mental health, but also bond your team. There are many activities you can do together, like yoga or HIT (high intensity training). In any case, it should be something that everyone would enjoy. You can make a poll where team members can vote for their favourite activity. If you don't have a sporty volunteer in your team, it's a great idea to ask a professional trainer for a private online session.
**Group size:** no limit
**Duration:** 60 min
**Best for:** active teams, mental break
5\. Escape Room
---------------
Another online game, but so different! You probably know about Escape Rooms: those sometimes more or less spooky rooms where you need to team up and solve different riddles to get out.

The team's goal is to solve all riddles in a predetermined time. Though this game is commonly played offline at a real location, there are also online versions, just as exciting. Working together under time pressure will make it a lasting experience for everyone, so this is the perfect opportunity to improve your team collaboration skills and have a lot of fun together.
**Group size:** no limit
**Duration:** 60 min
**Best for:** problem solving, team bonding, collaboration
Out of all these fun ideas, our team is most excited about the automation hackathon. If you want to learn more about how we run it and how to organise a remote one for your team, feel free to contact us on [Twitter](https://twitter.com/n8n_io) or [email](mailto:hello@n8n.io). If you have any questions about n8n, drop by our [forum](https://community.n8n.io) 🧡.
*This article was originally published on the [n8n blog](https://n8n.io/blog/five-creative-ideas-for-virtual-team-building-events-for-startups/).* | le_bruch |
642,020 | Solution: Vowel Spellchecker | This is part of a series of Leetcode solution explanations (index). If you liked this solution or fou... | 11,116 | 2021-03-22T11:34:08 | https://dev.to/seanpgallivan/solution-vowel-spellchecker-22o | algorithms, javascript, java, python | *This is part of a series of Leetcode solution explanations ([index](https://dev.to/seanpgallivan/leetcode-solutions-index-57fl)). If you liked this solution or found it useful,* ***please like*** *this post and/or* ***upvote*** *[my solution post on Leetcode's forums](https://leetcode.com/problems/vowel-spellchecker/discuss/1121848).*
---
#### [Leetcode Problem #966 (*Medium*): Vowel Spellchecker](https://leetcode.com/problems/vowel-spellchecker/)
---
#### ***Description:***
<br />(*Jump to*: [*Solution Idea*](#idea) || *Code*: [*JavaScript*](#javascript-code) | [*Python*](#python-code) | [*Java*](#java-code) | [*C++*](#c-code))
>
Given a `wordlist`, we want to implement a spellchecker that converts a query word into a correct word.
>
For a given `query` word, the spell checker handles two categories of spelling mistakes:
- Capitalization: If the query matches a word in the wordlist (__case-insensitive__), then the query word is returned with the same case as the case in the wordlist.
- Example: `wordlist = ["yellow"]`, `query = "YellOw"`: `correct = "yellow"`
- Example: `wordlist = ["Yellow"]`, `query = "yellow"`: `correct = "Yellow"`
- Example: `wordlist = ["yellow"]`, `query = "yellow"`: `correct = "yellow"`
- Vowel Errors: If after replacing the vowels `('a', 'e', 'i', 'o', 'u')` of the query word with any vowel individually, it matches a word in the wordlist (__case-insensitive__), then the query word is returned with the same case as the match in the wordlist.
- Example: `wordlist = ["YellOw"]`, `query = "yollow"`: `correct = "YellOw"`
- Example: `wordlist = ["YellOw"]`, `query = "yeellow"`: `correct = ""` (no match)
- Example: `wordlist = ["YellOw"]`, `query = "yllw"`: `correct = ""` (no match)
>
In addition, the spell checker operates under the following precedence rules:
- When the query exactly matches a word in the wordlist (__case-sensitive__), you should return the same word back.
- When the query matches a word up to capitlization, you should return the first such match in the wordlist.
- When the query matches a word up to capitalization, you should return the first such match in the wordlist.
- If the query has no matches in the wordlist, you should return the empty string.
>
Given some `queries`, return a list of words `answer`, where `answer[i]` is the correct word for `query = queries[i]`.
---
#### ***Examples:***
>
Example 1:||
|---:|---|
Input:|wordlist = ["KiTe","kite","hare","Hare"],<br>queries = ["kite","Kite","KiTe","Hare",<br>"HARE","Hear","hear","keti","keet","keto"]
Output:|["kite","KiTe","KiTe","Hare","hare","","","KiTe","","KiTe"]
---
#### ***Constraints:***
>
- `1 <= wordlist.length <= 5000`
- `1 <= queries.length <= 5000`
- `1 <= wordlist[i].length <= 7`
- `1 <= queries[i].length <= 7`
- All strings in `wordlist` and `queries` consist only of english letters.
---
#### ***Idea:***
<br />(*Jump to*: [*Problem Description*](#description) || *Code*: [*JavaScript*](#javascript-code) | [*Python*](#python-code) | [*Java*](#java-code) | [*C++*](#c-code))
This problem can be broken up into a couple steps of increasing difficulty. The first step is to check whether or not the words in the query list (**Q**) exists in the word list (**W**). For that, we can use the simplest form of value-lookup data structure, which is a **Set**.
Next, we need to check if each query has a case-insensitive match in **W**. For case-insensitive matching, the easiest thing to do is to **lowercase** (or **uppercase**) both terms before comparing. In this case, since we want to match one term, but return another, we should use a **Map** data structure, where the **key** is the lowercased term and the **value** is the matching **word**.
But here we encounter an issue, as it is possible for two words to have the same lowercase form. Per the rules, we want to favor the one that appears first in **W**, so we can either iterate through **W** forwards and repeatedly check to make sure we're not overwriting an existing entry, or we can simply iterate through **W** backwards and just automatically overwrite entries. This will force the first occurrence to be the one that "sticks".
For the third check, we need to match the **word** except for the vowels. Whenever you need to selectively match strings by only a portion, the easiest way to do it is with a **mask**. In this case, we can use **regex** to replace all vowel occurrences with a **character mask**, such as **"#"**. For example, we can check if **"tail"** and **"tool"** would match by applying the character masks to both terms and seeing that **"t##l" == "t##l"**.
This calls for another map structure. We could technically reuse the earlier one, as there will be no overlaps, but navigating two separate, smaller maps is generally more efficient than one large one. Since we'll also want to iterate backwards through **W** for this map, we might as well do it at the same time as the other one.
Then we can just iterate through **Q** and check for matches in the correct order. As is generally the case with query lists, we can replace the queries in **Q** with their result in order to save on **space complexity**.
Then, when we're done, we just **return Q**.
---
#### ***Implementation:***
Javascript can use **logical OR** chaining to shorten the assignment of the proper result in **Q**.
Regex is much slower in Java and C++, so we can use a helper function to do the same thing for us.
C++ will also need a helper to lowercase the words.
---
#### ***Javascript Code:***
<br />(*Jump to*: [*Problem Description*](#description) || [*Solution Idea*](#idea))
```javascript
const regex = /[aeiou]/g
var spellchecker = function(W, Q) {
let orig = new Set(W), lower = new Map(), mask = new Map()
for (let i = W.length - 1; ~i; i--) {
let word = W[i], wlow = word.toLowerCase()
lower.set(wlow, word)
mask.set(wlow.replace(regex, "*"), word)
}
for (let i in Q) {
let query = Q[i], qlow = query.toLowerCase(),
qmask = qlow.replace(regex, "*")
if (orig.has(query)) continue
else Q[i] = lower.get(qlow) || mask.get(qmask) || ""
}
return Q
};
```
---
#### ***Python Code:***
<br />(*Jump to*: [*Problem Description*](#description) || [*Solution Idea*](#idea))
```python
class Solution:
def spellchecker(self, W: List[str], Q: List[str]) -> List[str]:
orig, lcase, mask = set(W), defaultdict(), defaultdict()
regex = r'[aeiou]'
for i in range(len(W)-1,-1,-1):
word = W[i]
wlow = word.lower()
lcase[wlow] = word
mask[re.sub(regex, '*', wlow)] = word
for i in range(len(Q)):
query = Q[i]
qlow = query.lower()
qmask = re.sub(regex, '*', qlow)
if query in orig: continue
elif qlow in lcase: Q[i] = lcase[qlow]
elif qmask in mask: Q[i] = mask[qmask]
else: Q[i] = ""
return Q
```
---
#### ***Java Code:***
<br />(*Jump to*: [*Problem Description*](#description) || [*Solution Idea*](#idea))
```java
class Solution {
public String[] spellchecker(String[] W, String[] Q) {
Set<String> orig = new HashSet<>(Arrays.asList(W));
Map<String, String> lower = new HashMap<>(), mask = new HashMap<>();
for (int i = W.length - 1; i >= 0; i--) {
String word = W[i], wlow = word.toLowerCase();
lower.put(wlow, word);
mask.put(vmask(wlow), word);
}
for (int i = 0; i < Q.length; i++) {
String query = Q[i], qlow = query.toLowerCase(),
qmask = vmask(qlow);
if (orig.contains(query)) continue;
else if (lower.containsKey(qlow)) Q[i] = lower.get(qlow);
else if (mask.containsKey(qmask)) Q[i] = mask.get(qmask);
else Q[i] = "";
}
return Q;
}
public String vmask(String str) {
StringBuilder sb = new StringBuilder();
for (int i = 0; i < str.length(); i++) {
char c = str.charAt(i);
if (c == 'a' || c == 'e' || c == 'i' || c == 'o' || c == 'u') c = '*';
sb.append(c);
}
return sb.toString();
}
}
```
---
#### ***C++ Code:***
<br />(*Jump to*: [*Problem Description*](#description) || [*Solution Idea*](#idea))
```c++
class Solution {
public:
vector<string> spellchecker(vector<string>& W, vector<string>& Q) {
set<string> orig (W.begin(), W.end());
unordered_map<string, string> lower, mask;
for (int i = W.size() - 1; ~i; i--) {
string word = W[i], wlow = lcase(word);
lower[wlow] = word, mask[vmask(wlow)] = word;
}
for (string &query : Q) {
string qlow = lcase(query), qmask = vmask(qlow);
if (orig.count(query)) continue;
else if (lower.count(qlow)) query = lower[qlow];
else if (mask.count(qmask)) query = mask[qmask];
else query = "";
}
return Q;
}
static string vmask(string str) {
for (char &c : str)
if (c == 'a' || c == 'e' || c == 'i' || c == 'o' || c == 'u')
c = '*';
return str;
}
static string lcase(string str) {
for (char &c : str) c = tolower(c);
return str;
}
};
``` | seanpgallivan |
642,032 | Solve Leetcode Problems and Get Offers From Your Dream Companies||Maximum Points You Can Obtain from Cards | Maximum Points You Can Obtain from Cards In this series, I am going to solve Leetcode medium probl... | 11,797 | 2021-03-22T12:11:21 | https://www.simplecoding.dev/articles/solve-leetcode-problems-and-get-offers-from-your-dream-companies-maximum-points-you-can-obtain-from-cards-47m6 | leetcode, algorithms, programming, codenewbie |
1423. **Maximum Points You Can Obtain from Cards**
In this series, I am going to solve Leetcode medium problems live with my friends, which you can see on our youtube channel, Today we will do Problem Leetcode: 1423. **Maximum Points You Can Obtain from Cards.**

A little bit about me: I have received offers from **Uber India** and **Amazon India** in the past, and I am currently working for **Booking.com** in Amsterdam.
## Motivation to learn algorithms
[**Solve Leetcode Problems and Get Offers From Your Dream Companies**](https://medium.com/leetcode-simplified/solve-leetcode-problems-and-get-offers-from-your-dream-companies-2786415be0b7)
## Problem Statement
There are several cards **arranged in a row**, and each card has an associated number of points. The points are given in the integer array cardPoints.
In one step, you can take one card from the beginning or from the end of the row. You have to take exactly k cards.
Your score is the sum of the points of the cards you have taken.
Given the integer array cardPoints and the integer k, return the *maximum score* you can obtain.
**Example 1:**
Input: cardPoints = [1,2,3,4,5,6,1], k = 3
Output: 12
Explanation: After the first step, your score will always be 1. However, choosing the rightmost card first will maximize your total score. The optimal strategy is to take the three cards on the right, giving a final score of 1 + 6 + 5 = 12.
**Example 2:**
Input: cardPoints = [2,2,2], k = 2
Output: 4
Explanation: Regardless of which two cards you take, your score will always be 4.
**Example 3:**
Input: cardPoints = [9,7,7,9,7,7,9], k = 7
Output: 55
Explanation: You have to take all the cards. Your score is the sum of points of all cards.
**Example 4:**
Input: cardPoints = [1,1000,1], k = 1
Output: 1
Explanation: You cannot take the card in the middle. Your best score is 1.
**Example 5:**
Input: cardPoints = [1,79,80,1,1,1,200,1], k = 3
Output: 202
## **Youtube Discussion**
{% youtube ZxcA4rVQBbE %}
## **Solution**
We will discuss two approaches: the first one will give **TIME LIMIT EXCEEDED**, and the second one will be accepted.
According to the question, we can choose a card from either the beginning or the end and we need to repeat this step till we get k cards.
**Base Cases:**
1. If k is less than 1, then no cards can be chosen.
2. i should always be less than or equal to j.
3. If i is equal to j and k=1, then we have to choose the i-th card.
**Recursion Step:**
So for an array from i to j, with k cards to be drawn:
if we choose the card from the beginning, the score is ar[i] + choose from (i+1) to j
if we choose the card from the end, the score is ar[j] + choose from (i) to (j-1)
To avoid repeating the same calculations, we can use dynamic programming. We can store the value for i to j with k cards to be drawn in a dictionary.
{% gist https://gist.github.com/sksaikia/10ea71990b58e8733d740824ed3a8774.js %}
This solution is not that efficient and will give **TIME LIMIT EXCEEDED.**
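As an illustration, the memoized top-down recursion described above could be sketched like this in Python (a sketch with my own function names, not the exact code from the gist):

```python
from functools import lru_cache

def max_score_recursive(card_points, k):
    """Top-down recursion: for ar[i..j] with k cards left, take either end."""
    @lru_cache(maxsize=None)
    def choose(i, j, k):
        if k < 1:            # base case 1: no cards can be chosen
            return 0
        if i == j:           # base case 3: one card left, we must take it
            return card_points[i]
        # take from the beginning or from the end, keep the better score
        return max(card_points[i] + choose(i + 1, j, k - 1),
                   card_points[j] + choose(i, j - 1, k - 1))

    return choose(0, len(card_points) - 1, k)
```

Even with memoization there are O(n²) possible (i, j) states, which is why this approach exceeds the time limit on large inputs.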
Let’s talk about the optimal solution for this problem. We have to take k cards either from the beginning or the end. Therefore, we can choose:
0 cards from left and k cards from right
1 card from left and (k-1) cards from right
2 cards from left and (k-2) cards from right
.............
k cards from left and 0 cards from right
Therefore, we need to find the maximum of these combinations. To make our calculations simpler, we can store the cumulative sums of the first k cards from the beginning and from the end. We can then find the maximum over all these combinations:
left[i]+right[k-i] for all possible values of i (k is given)
{% gist https://gist.github.com/sksaikia/8007504665d3e8b2f505b49568693af1.js %}
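The prefix-sum idea above can be sketched as follows (an illustrative version; the author's gist may differ in details):

```python
def max_score(card_points, k):
    """Pick i cards from the left and k - i from the right; try every i."""
    n = len(card_points)
    left = [0] * (k + 1)   # left[i]  = sum of the first i cards
    right = [0] * (k + 1)  # right[i] = sum of the last i cards
    for i in range(1, k + 1):
        left[i] = left[i - 1] + card_points[i - 1]
        right[i] = right[i - 1] + card_points[n - i]
    return max(left[i] + right[k - i] for i in range(k + 1))
```

After the two prefix arrays are built, the final scan is O(k), so this easily fits within the time limits.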
The code for this problem can be found in the following repository.
[**webtutsplus/LeetCode**](https://github.com/webtutsplus/LeetCode/tree/main/src/LC1423_MaximumPointsObtainedFromCards)
## Thank you for reading, and follow this publication for more LeetCode problems!😃
[**LeetCode Simplified**](https://medium.com/leetcode-simplified)
| nilmadhabmondal |
642,221 | Free Figma wireframe kits for your next project | In Creating the design, the first and one of the most important steps is prototyping. If you are not... | 0 | 2021-03-22T14:43:27 | https://dev.to/harshptl14/free-figma-wireframe-kits-for-your-next-project-meg | figma, design | In Creating the design, the first and one of the most important steps is prototyping. If you are not a designer, and you are working on a project which needed prototyping, then in this situation may be ready-to-use Wireframe kits are very useful. They will save a lot of time for you. And if you are a beginner, then also it will be useful for getting started. In this article, I have collected the top 5 Wireframe kits from the Figma community with which you will be satisfied.
1. Sanity Sketching Kit
Everything is black&white and mostly outline-based. All the templates fit great into an A4/letter format. You can easily print them and play with them on paper or use them in workshops and meetings afterward.
All basic components are included: container styles you can easily copy/paste to make containers, layouts, and other elements. You can also find window/grid templates to get you started, with everything based on an 8px grid for easier nudging/resizing/moving around.
https://www.figma.com/community/file/898186441853776318

2. Greyhound Flowcharts 2
Greyhound Flowcharts 2 comes with fully customizable, ready-to-use flowcharts. There are 200+ cards in 11 categories for your web and application projects. Both mobile and web are covered in this kit. You can easily copy/paste components into your existing project wherever you need them.
https://www.figma.com/community/file/900373866162206210

3. Figma wireframe kit
Figma wireframe kit is full of UI elements, sections, and components for easy prototyping. These wireframing templates are low fidelity, made for brainstorming and planning your designs. If you want to speed up your wireframing in Figma, this kit is here to help you save time throughout your creative process. Create fast, modern, responsive prototypes using a Bootstrap grid and premade, editable components to build your layouts.
https://www.figma.com/community/file/809483562248762502

4. Sketchy Wireframes
A simple kit for mocking up applications and websites. Free yourself from pixel perfection and use this kit to quickly mock up your ideas. Build up your own components from the primitives provided (lines & rectangles) to create anything!
https://www.figma.com/community/file/820762933996665437

5. MPS Wireframe Toolkit
The MPS Wireframe Toolkit is a complete collection of elements and a system to rapidly create consistent wireframes. With over 50 handmade elements in a minimal style, you can focus completely on the UX for a website, webshop, or app.
https://www.figma.com/community/file/877360315674687828

I hope that this article helped you and made your workflow much easier and faster. If you liked the content, do check out my social media. I will try to write more useful articles to help you in every aspect. Good luck fellows!
Thank You, Happy Hacking!! | harshptl14 |
642,376 | System Design Analysis of Instagram | How do you design a photo-sharing service like Instagram? Performance, availability, consistency,... | 0 | 2021-03-22T18:17:28 | https://towardsdatascience.com/system-design-analysis-of-instagram-51cd25093971 | programming, softwareengineering, systemdesignintervie, architecture | ---
title: System Design Analysis of Instagram
published: true
date: 2021-03-22 17:40:16 UTC
tags: programming,softwareengineering,systemdesignintervie,softwarearchitecture
canonical_url: https://towardsdatascience.com/system-design-analysis-of-instagram-51cd25093971
---
How do you design a photo-sharing service like Instagram?
Performance, availability, consistency, scalability, reliability, etc., are important quality requirements in system design. We need to analyze these requirements for the system.
As a system designer, we might want to have a design that will be highly available, very high performant, top-notch consistency in the system, highly secured system, etc. But it’s not possible to achieve all these targets in one system. We need to have requirements that will work as restrictions on the design of a system. So, let’s define our NFRs:
Our system should be highly available. In the case of any web service, it’s a mandatory requirement. Home page generation latency should be at most 200 msec. If the home page generation takes too long, users will be dissatisfied, which is not acceptable.
As we choose for the system’s high availability, we should keep in mind that may hamper consistency across the system. The system should also be highly reliable, which means any uploaded photo or video by users should never be lost.
In this system, photos search and views would be more than uploading. As the system would have more read-heavy operations, we will focus on building a system that can retrieve photos quickly. While viewing photos, latency needs to be as low as possible.
### Data flow:
If you are not sure where to start in a system design, always start with the data storage system. It will help keep your focus aligned with the requirements of the system.
At a high level, we need to support two scenarios: uploading photos and viewing/searching photos. Our system would need some object storage servers to store photos and some database servers to store metadata information.
Defining the database schema is the first phase of understanding the data flow between different components of the system. We need to store user profile data, like the follower list and the photos uploaded by each user. There should be a table that stores all data regarding photos.
### Load balancer:
As we use multiple copies of a server, we need to distribute user traffic to those servers efficiently. The load balancer will distribute user requests to the various servers uniformly. We can use IP-based routing for the Newsfeed service, so that requests from the same user go to the same server and caching can be used to get a faster response.
If there are no such requirements, round-robin should be a simple and good solution for the server selection strategy of load balancers.
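As a toy illustration, the round-robin selection strategy can be as simple as cycling through the server pool (the server names here are made up):

```python
from itertools import cycle

class RoundRobinBalancer:
    """Hands out backend servers one after another, wrapping around."""

    def __init__(self, servers):
        self._servers = cycle(servers)

    def next_server(self):
        return next(self._servers)
```

Each incoming request asks the balancer for `next_server()`, so traffic spreads uniformly across the pool.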
### API Gateway:
We have a lot of services for our system. Some will generate newsfeed, some help storing photos, some viewing the photos, etc. We need to have a single entry point for all clients. This entry point is API Gateway.
It will handle all the requests by sending them to multiple services. And for some requests, it will just route to the specific server. The API gateway may also implement security, such as verifying the client’s permission to perform the request.
### Pagination:
A user's newsfeed can be a large response. So, we may design the API to return a single page of the feed at a time. Let’s say we send at most 50 posts every time a feed request is made.
The user may then send another request for the next page of the feed. And in the meantime, if there is not enough feed left, the Newsfeed service may generate the next page. This technique is called pagination.
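A minimal sketch of such a paginated feed endpoint might look like this in Python (all names are hypothetical, not Instagram's actual API):

```python
def get_feed_page(feed, page, page_size=50):
    """Return one page of at most `page_size` posts plus the next page number.

    `feed` is the user's full list of posts, newest first; the client asks
    for page 0, then page 1, and so on until `next_page` comes back as None.
    """
    start = page * page_size
    posts = feed[start:start + page_size]
    next_page = page + 1 if start + page_size < len(feed) else None
    return {"posts": posts, "next_page": next_page}
```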
[Continue reading on Towards Data Science for detailed design analysis»](https://towardsdatascience.com/system-design-analysis-of-instagram-51cd25093971?source=rss-577e2119c7f2------2) | ashchk |
642,568 | AWS Certified Solutions Architect - Professional a few thoughts | For what reason did I attend this exam at all? After a few years of working on plenty of c... | 0 | 2021-03-22T22:23:07 | https://dev.to/aws-builders/aws-certified-solutions-architect-professional-a-few-thoughts-1m5n | aws, architecture |
## Why did I take this exam at all?
After a few years of working on plenty of cloud projects, where AWS made up around 95% of my daily work, I just wanted to give it a try. No other reasons that I'm aware of ;).
## The Exam
AWS states on the official [Exam Guide](https://d1.awsstatic.com/training-and-certification/docs-sa-pro/AWS-Certified-Solutions-Architect-Professional_Exam-Guide.pdf):
"This exam validates advanced technical skills and experience in designing distributed applications and systems on the AWS platform."
You are going to be tested on five domains:
Domain 1: Design for Organizational Complexity
Domain 2: Design for New Solutions
Domain 3: Migration Planning
Domain 4: Cost Control
Domain 5: Continuous Improvement for Existing Solutions
On my [LinkedIn](https://www.linkedin.com/feed/update/urn:li:activity:6778623026612228096/) post, I wrote.
"A very nice exam, but you have to rush, really - for me, the time was short. It covers all possible solutions available on the AWS cloud, so you should have hands-on experience with most of them. Most of the questions are migration-related, so again you have to have experience."
And above is a nutshell of what you can expect and what is expected from you.
The exam itself is a "Blitzkrieg," if you know what I mean. Questions are quite long, so you have to keep the focus on the details. Answers are not so tricky, but the devil is in the details - again. What you can expect is well shown [here](https://d1.awsstatic.com/training-and-certification/docs-sa-pro/AWS-Certified-Solutions-Architect-Professional_Sample-Questions.pdf).
It lasts for 220 mins. Due to Covid-19, the only way to get it done is online. I had done online exams before, but never for 4h in a row, so bear in mind that you are stuck at your desk without an option to move or use a bathroom (that's insane).
220 mins for 75 questions gives you about 3 minutes per question. It means that you have to be confident about the topics. There is not much time for deliberation. At least for me, the time was short, even though the language is quite straightforward (I'm not a native English speaker).
## Preparation
The best preparation is to use the cloud and work with multiple projects for few years.
Before the cloud, I was responsible for building multi-branches secure networks, highly available and resilient solutions, clustered virtualized environments, that supported the companies I worked for.
What helped me a lot was that I've conducted commercial "AWS driven" training for multiple groups of professionals in the area of Development on AWS, Architecture, and dedicated Security training for the last two years.
If you can, try to teach someone, for sure you will have a very good recap for all services :). You can pick someone from your surrounding, maybe a colleague who starts his/hers journey with the cloud. You will be able to find gaps in how you understand particular services.
Creating materials, labs, and writing articles for my newly created [site](https://poznajaws.pl) was extremely helpful too.
You might be surprised that I don't write much about certain services. This is for a reason, as you will find all possible solutions on the exam that fits within 5 domains.
Most questions were around the fifth domain, so I was asked about possible improvements to the existing solutions: a few questions involved AWS DMS and the overall migration process. Also, I got a few technical questions about VPC, like AWS Transit Gateway or Peering connections. Of course, you will find services like Kinesis or SQS (remember, decoupling). But there are few questions, that ask about 4-5 services to be implemented in a row.
A list of services you might find on the exam can be found [here](https://github.com/swiatchmury/aws-egzaminy/blob/master/materials/AWS_Certified_Solutions_Architect_Professional/README.md) even it's in Polish.
I also used [Adrian Cantrills](https://learn.cantrill.io/) AWS Certified Solutions Architect - Professional course to do a quick re-cap around services, which I haven't touched for a while.
The last service I've used was a [Whizlabs](https://www.whizlabs.com/aws-solutions-architect-professional/), as they have very good tests, so you can practice and find your gaps also.
The [TutorialsDojo](https://tutorialsdojo.com/category/aws-cheat-sheets/) site has well-prepared AWS cheat sheets, which might also be helpful for you.
## My last thoughts around the exam
Build your own lab and use as many services as you can. Play a lot. Imagine you have a three-tier application in your on-premises environment: how would you migrate it to the cloud when you have 10 GB, 100 GB, or 100 TB of data to move? What services are involved, what are your options for transfer, and so on?
Take your time to get a good understanding of the costs of migration: what costs more, and what is better if you are short on time.
Bear in mind, that cost is not always the determinant.
What if you want to implement something quickly without disrupting the production environment?
How would you secure the application when attacks come from distributed sources with random IPs? Which services can help you with that?
If you haven't worked across multiple domains yet, wait and come back later. I mean it.
Try to teach someone; it's the best way I've found to stay on track with the services and how you understand them.
Take advantage of the previous exam discount (if you still have one) - AWS gives you a 50% discount coupon for each exam you pass.
| donkoyote |
643,000 | We partnered with Bauman Moscow State Technical University to teach biomedical programming | The Evrone team doesn’t just work on commercial projects. We also actively support the open-source co... | 0 | 2021-03-23T11:06:14 | https://evrone.com/bauman-mstu-course | beginners, writing, programming | The Evrone team doesn’t just work on commercial projects. We also actively support the open-source community, sharing our tools and holding events for developers of all different levels. Because of this, Bauman Moscow State Technical University has invited one of our specialists to teach the “Algorithmization and Programming” course for students of the Faculty of Biomedical Engineering.
Through our partnership with the University, we will teach freshmen to write and understand code and use it to solve scientific and technological challenges. [In this article,](https://evrone.com/bauman-mstu-course) we explain why we chose to teach the Julia programming language, and how this knowledge will help students prepare for their future careers. | evrone |
643,263 |
SREview Issue #11 March 2021 | Is it spring yet? Or spring still? Time sure is strange nowadays. At least we have a ton to look forw... | 0 | 2021-03-23T15:01:42 | https://dev.to/blameless/sreview-issue-11-march-2021-2idg | sre, devops | Is it spring yet? Or spring still? Time sure is strange nowadays. At least we have a ton to look forward to in the next few weeks! Here are some of the most exciting Tweets, content, and events happening in the SRE and resilience engineering community this month.

# Tweets that have us twittering
{% twitter 1371338187352645634 %}
{% twitter 1370883721319026690 %}
{% twitter 1367618267246858247 %}
# SREading
**[SRE2AUX: How Flight Controllers were the first SREs](https://www.blameless.com/blog/how-flight-controllers-were-the-first-sres)**: Geoff White writes about what vintage space lore has to do with site reliability engineering in the 21st century.
**[The Netflix Cosmos Platform](https://netflixtechblog.com/the-netflix-cosmos-platform-35c14d9351ad)**: This article explains why the Netflix team built Cosmos, how it works, and shares some of the things the team learned along the way.
**[SRE as Organizational Transformation: Lessons from Activist Organizers](https://www.blameless.com/blog/sre-as-organizational-transformation-lessons-from-activist-organizers?utm_content=155300039&utm_medium=social&utm_source=linkedin&hss_channel=lcp-28625893)**: Chris Hendrix writes about how we can learn from activist organizers while driving company-wide change.
**[What is a Canary Deployment?](https://launchdarkly.com/blog/what-is-a-canary-deployment)**: This post contains a thorough description of canary releases including benefits, visual examples, and how it fits into an effective deployment strategy.
**[How We Built and Use Runbook Documentation](https://www.blameless.com/blog/how-we-built-and-use-runbook-documentation-at-blameless)**: Alicia Li and Lucas Bartroli write about runbooks. “Even if you don’t notice, you are executing runbooks everyday, all the time.”
**[Increment’s Reliability Issue](https://increment.com/)**: This issue contains articles on reliability from thought leaders such as Tanya Reilly, Mads Hartmann, Ana Margarita Medina, and more.
# Events
**[SRE Thought Leader Panel](https://blameless.zoom.us/webinar/register/WN_vN5XzMblQrW4jYKEfvz34g)**: SRE Adoption as Organizational Transformation March 25, 11 AM PDT: Hear from experts Kurt Andersen, Vanessa Yiu, and Tony Hansmann. Hosted by Chris Hendrix.
**[Blameless Bi-Weekly Demo](https://blameless.zoom.us/webinar/register/WN_xiwvRmJjRLeXGYaJeObnNg)** March 30 at 8 AM PDT: Check out a live demo of Blameless as we walk you through operations best practices, and get your questions answered.
**[DevOps Online Summit](https://devopsonlinesummit.com/)** April 26-30: DevOps professionals throughout the world come together and share their learnings.
**[Deserted Island DevOps](https://desertedisland.club/)** April 30: A single-day virtual event streamed on Twitch. All presentations will take place in the world of Animal Crossing: New Horizons. | kludyhannah |
643,285 | Implementing factory pattern for dependency injection in .NET core | ASP.NET Core supports the dependency injection (DI) software design pattern, which is a technique for... | 0 | 2021-03-23T15:53:40 | https://chaitanyasuvarna.wordpress.com/2021/03/21/factory-pattern-di-in-net-core/ | ASP.NET Core supports the dependency injection (DI) software design pattern, which is a technique for achieving [Inversion of Control (IoC)](https://docs.microsoft.com/en-us/dotnet/architecture/modern-web-apps-azure/architectural-principles#dependency-inversion) between classes and their dependencies.
A *dependency* is an object that another object depends on. If there is a hardcoded dependency in a class, i.e. the object is instantiated inside the class, this can create problems. If the dependency ever needs to be replaced with another object, the class itself has to be modified. It also makes the class difficult to unit test.
By using dependency injection we move the creation and binding of the dependent objects outside of the class that depends on them. Thus, solving the problems that we face with hardcoded dependency and making the classes loosely coupled.
### What is Factory Pattern?
The Factory Pattern was first introduced in the book “*Design Patterns: Elements of Reusable Object-Oriented Software*” by the “Gang of Four” (Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides).
The Factory Pattern is a design pattern which defines an interface for creating an object but lets the classes that depend on the interface decide which class to instantiate. This abstracts the process of object creation, so that the type of object to be instantiated can be determined at run-time by the class that wants to instantiate it.
### When is Factory Pattern for DI required?
Factory Pattern is useful when there are multiple classes that implement an interface and there is a class that has a dependency on this interface. But this class will determine at run-time, based on user input, which type of object does it want to instantiate for that interface.
To understand this scenario further, let’s take an example of an interface called `IShape`.
{% gist https://gist.github.com/chaitanya-suvarna/7196804f1ec25ca664490ac24ec7522c %}
And we also have two classes called Cube and Sphere that implement `IShape`.
{% gist https://gist.github.com/chaitanya-suvarna/9e102fc9787461d58591654b1634e40e %}
Now, imagine we have a service class called `ShapeCalculationService` which has a dependency on `IShape`. It takes input from the user to choose either Cube or Sphere, and based on that input it has to instantiate the `Cube` or `Sphere` object at runtime. How would that be possible?
The service class needs to use this corresponding Shape object to Get the input and display the SurfaceArea and Volume for the shape.
This scenario where multiple classes implement an interface and the object that needs to be instantiated is decided at runtime, is where Factory Pattern comes to use.
### How to implement Factory Pattern for DI?
Continuing with above example, we will try to implement factory pattern in this scenario.
To abstract the instantiation of the correct Shape object at runtime, we will create a `ShapeFactory` class whose responsibility is to resolve which concrete class needs to be instantiated for a given user selection.
{% gist https://gist.github.com/chaitanya-suvarna/d0921dd0acc38e148922f6d38ea717d3 %}
Here you can see that the `ShapeFactory` class has a dependency on `IServiceProvider`, so that `ShapeFactory` only resolves which concrete class needs to be instantiated, and asks the built-in service container of .NET Core to create the instance and resolve its dependencies.
This is because we want to rely on the **IoC container of .NET Core** to resolve our dependencies and don't want to make changes in our factory every single time a new dependency is introduced in either the `Cube` or `Sphere` class.
This further **decouples** the code and makes it easier to manage.
The place where you set up the ServiceCollection for your project should look like below, so that the .NET Core DI container can figure out the dependencies required by either service and resolve them at run time.
In my example, since this is a simple console application, this code resides in `Program.cs`
{% gist https://gist.github.com/chaitanya-suvarna/2a7742177e2ee09bcc82fdb4604388ee %}
Finally, I’d also like you to see the actual `ShapeCalculationService` that takes an input from the user and uses `ShapeFactory` to get the `Shape` class at runtime which is the used to display the surface area and the volume.
{% gist https://gist.github.com/chaitanya-suvarna/d0fef4ed96038e57e94f030f03d8ec5e %}
### Conclusion
Thus we have seen how we can inject a factory to take total control of the creation of our dependencies at runtime, while still using .NET Core's IoC container to resolve them.
If you are like me, who needs a small yet complete demo solution to clearly understand how this works, I have created a demo project in my github repository which would help you understand this better.
You can have a look at it here : [DotNetCoreFactoryPattern](https://github.com/chaitanya-suvarna/DotNetCoreFactoryPattern)
I hope you found this interesting. Thanks for reading! | chaitanyasuvarna | |
643,309 | Voices from Women in the digital industry Pt. 1 | Illustration by Gloria Shugleva As part of International Women's Day, we at SinnerSchrader created a... | 11,901 | 2021-03-24T15:40:58 | https://dev.to/studio_m_song/voices-from-women-in-the-digital-industry-d6f | career, interview, womenintech, motivation | *Illustration by [Gloria Shugleva](https://dribbble.com/shots/7045097-Female-leaders/attachments/45347?mode=media)*
*As part of International Women's Day, we at SinnerSchrader created an internal website that was filled with content from our employees. Among other things, we asked various women in leadership positions the same three questions. The answers were so inspiring that we didn't want to withhold them from the rest of the world. We start with Daniela Kirchner (Director Strategy).*

## What does female leadership mean to you?
Daring greatly.
These two words pointed out by Brené Brown are the essence of what Female Leadership means to me:
"Working hard towards a healthier future, taking courage and pride in addressing things that should be changed, deciding based on common sense, accepting the existence of emotions (to mention the infamous word here, but passion for great results comes with emotions), constant attention, integrity, acknowledgement for our fears of failure, showing up anyways and having a backbone."
During my studies and my professional career so far I have always entered an industry or zone dominated by men. My first three leaders were women. And outstanding role models. It just feels great to be surrounded and inspired by strong women. And I met awesome men who feel the same and who are also daring greatly.
## Would you say that being a woman has influenced your way of leading and/or did you have to acquire (or drop) certain qualities to be successful?
My way of leading is influenced by who I am, my beliefs and the experiences I made. Leading means having tangible and intangible interactions in various directions. I adapt the style based upon the person in front of me while staying true to myself. Continuous learning is mandatory to do so. I refuse focusing on "female/male only" qualities. That ends up in copying, tightening boundaries or limiting oneself.
## What would you like to tell young women who aim to take a leadership role in the future?
Trust yourself! Reach out to other women in charge and connect with authentic leaders of any gender who help you to grow. | annika_h |
643,311 | Where (and how) can a web developer learn more about cyber security? | The world is getting scary. In the last few years I've been getting increasingly uneasy about the i... | 0 | 2021-03-23T16:34:45 | https://dev.to/shadowfaxrodeo/where-and-how-can-a-web-developer-learn-more-about-cyber-security-2791 | discuss, webdev, cybersecurity, security | The world is getting scary.
In the last few years I've been getting increasingly uneasy about the internet. Things like…
- Data Breaches
- Identity Theft
- Scams
- Surveilance capitalism
- Nations destroying other nations' digital infrastructure
- Oppressive regimes tracking dissidents
When these things occur, you can bet it's the most vulnerable people in society who are hardest hit.
Several people close to me have been hacked — one even had her entire savings stolen (she is fine now).
**Where can we learn to protect ourselves, the people around us, and the people who rely on our products?**
…and how can we get really deep into cyber security — with the goal of building tools to help people?
| shadowfaxrodeo |
643,363 | My Azure Notebook: Key Things | Organisation— An organization represents a business entity that is using Microsoft cloud offerings. T... | 0 | 2021-03-23T17:23:02 | https://dev.to/arunksingh16/my-azure-notebook-key-things-2d6m | azure, devops | **Organisation**— An organization represents a business entity that is using Microsoft cloud offerings. The organization is a container for subscriptions.
**Azure subscription**— A subscription is an agreement with Microsoft to use one or more Microsoft cloud platforms or services
**User Account**— User accounts for all of Microsoft's cloud offerings are stored in an Azure Active Directory (Azure AD) tenant. An Azure AD tenant can be synchronized with your existing Active Directory Domain Services. Multiple subscriptions of an organization can use the same Azure AD tenant.
**Azure AD tenant**— A dedicated and trusted instance of Azure AD that’s automatically created when your organization signs up for a Microsoft cloud service subscription, such as Microsoft Azure, Microsoft Intune, or Microsoft 365. An Azure tenant represents a single organization.
**Resource**— A manageable item that is available through Azure. Virtual machines, storage accounts, web apps, databases, and virtual networks are examples of resources. Resource groups, subscriptions, management groups, and tags are also examples of resources.
**Resource group**— A container that holds related resources for an Azure solution. The resource group includes those resources that you want to manage as a group. You decide which resources belong in a resource group based on what makes the most sense for your organization. See Resource groups.
**Resource provider**— A service that supplies Azure resources. For example, a common resource provider is Microsoft.Compute, which supplies the virtual machine resource. Microsoft.Storage is another common resource provider. See Resource providers and types.
**Azure Stock Keeping Unit (SKU)**— A SKU represents a purchasable Stock Keeping Unit under a product. It can be of different types depending on the product. In short, an item for sale!
**Azure compute unit (ACU)**— The Azure Compute Unit (ACU) is used in understanding the relative compute performance between different Azure compute series VMs. Microsoft uses the ACU concept to help you compare computing performance across the different types of compute VM SKUs.
**Azure Service principal**— An Azure service principal is an identity created for use with applications, hosted services, and automated tools to access Azure resources.
**Azure App Registration**- In order to delegate Identity and Access Management functions to Azure AD, an application must be registered with an Azure AD tenant. When you register your application with Azure AD, you are creating an identity configuration for your application that allows it to integrate with Azure AD.
| arunksingh16 |
649,426 | Backstage architecture | Originally published on mccricardo.com. In Backstage intro we talked about what kinds of problems Ba... | 12,059 | 2021-03-29T21:57:16 | https://dev.to/mccricardo/backstage-architecture-9cb | backstage | Originally published on [mccricardo.com](https://mccricardo.com/backstage-architecture/).
In [Backstage intro](https://mccricardo.com/backstage-intro/) we talked about what kinds of problems Backstage aims to solve and got our first taste of how to launch and play with it. Although we're now ready to start getting our hands dirty we'll take a moment to understand the pieces that make Backstage the great piece of software that it is. This will give us a better feel for what's under the hood and how everything fits together.
## What it looks like
The setup we did in [Backstage intro](https://mccricardo.com/backstage-intro/) hides some of the complexity so that we could get up-and-running quickly. When deployed with the goal of being used by our target users, Backstage will be composed of:
- the core UI
- the UI plugins and their backing services
- databases
The image below shows an example of a Backstage deployment with a few plugins.

<sub>(image source: Backstage official documentation)</sub>
The UI depicted above is a client-side wrapper around a set of plugins. It is responsible for providing some core UI components and libraries for shared activities.
Plugins can be of three types:
- standalone - run entirely in the browser
- service backed - make API requests to a service within the Backstage installation
- third-party backed - make API requests to a third-party service (through a proxy)
Each plugin will make itself available on the UI on a dedicated URL since they are client-side applications. Plugins are written in TypeScript or JavaScript and live in their own directory in **backstage/plugins**. For more information on how to build and integrate plugins check [Intro to plugins](https://backstage.io/docs/plugins/).
Backstage uses the [Knex](http://knexjs.org/) library which supports a multitude of databases. Despite that, Backstage has been tested to work with SQLite (for testing purposes) and PostgreSQL (for production workloads).
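For example, pointing the backend at PostgreSQL is done in `app-config.yaml`. A minimal sketch; the environment variable names are placeholders you would define in your deployment:

```yaml
backend:
  database:
    client: pg
    connection:
      host: ${POSTGRES_HOST}
      port: ${POSTGRES_PORT}
      user: ${POSTGRES_USER}
      password: ${POSTGRES_PASSWORD}
```

Swapping `client` back to SQLite keeps local testing lightweight while production uses the same configuration shape.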
Backstage can also be [containerized](https://backstage.io/docs/getting-started/deployment-docker) as well as deployed on [Kubernetes](https://backstage.io/docs/getting-started/deployment-k8s) (you can even use [Helm](https://backstage.io/docs/getting-started/deployment-helm)).
This was a quick overview of how Backstage looks under the hood. Next, we will see some of the pieces that make Backstage (even more) valuable. | mccricardo |
693,488 | C Preprocessor cheatsheet | The C preprocessor is the macro preprocessor for the C, Objective-C, and C++ computer programming lan... | 0 | 2021-05-10T08:06:08 | https://dev.to/hoanganhlam/c-preprocessor-cheatsheet-44c0 | cpreprocessor, cheatsheet | The C preprocessor is the macro preprocessor for the C, Objective-C, and C++ computer programming languages. The preprocessor provides the ability for the inclusion of header files, macro expansions, conditional compilation, and line control. [C Preprocessor Cheat Sheet](https://cheatsheetmaker.com/c-preprocessor) is quick reference for the C macro preprocessor, which can be used independently of C/C++.
## File and line
```
#define LOG(msg) console.log(__FILE__, __LINE__, msg)
#=> console.log("file.txt", 3, "hey")
```
## Stringification
```
#define STR(name) #name
char * a = STR(object); #=> char * a = "object";
```
## Token concat
```
#define DST(name) name##_s name##_t
DST(object); #=> object_s object_t;
```
## Macro
```
#define DEG(x) ((x) * 57.29)
```
## Error
```
#if VERSION == 2.0
#error Unsupported
#warning Not really supported
#endif
```
## [\[Reference\] If](/post/c-preprocessor_reference_if)
```
#ifdef DEBUG
console.log('hi');
#elif defined VERBOSE
...
#else
...
#endif
```
## Defines
```
#define FOO
#define FOO "hello"
#undef FOO
```
## Includes
```
#include "file"
```
## Compiling
```
$ cpp -P file > outfile
```
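Several of the entries above can be checked in one translation unit. A small sketch (the `deg_of`/`deg_sum` helpers are illustrative, not part of the cheat sheet):

```c
/* DEG parenthesizes both the argument and the whole body, so that
   DEG(1 + 1) expands to ((1 + 1) * 57.29) rather than 1 + 1 * 57.29. */
#define DEG(x) ((x) * 57.29)

/* # turns the unexpanded argument into a string literal. */
#define STR(name) #name

double deg_of(double radians) { return DEG(radians); }
double deg_sum(double a, double b) { return DEG(a + b); }
```

Running `cpp -P` over the file (as in the Compiling section) shows the expanded source; `gcc -E` does the same through the compiler driver.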
## Reference
* https://en.wikipedia.org/wiki/C\_preprocessor
* https://cheatsheetmaker.com | hoanganhlam |
652,974 | What's the difference between Continuous Delivery and Continuous Deployment? | Continuous Delivery and Continous Deployment are frequently conflated with each other. This is at... | 0 | 2021-04-05T10:59:14 | https://jhall.io/archive/2021/04/02/whats-the-difference-between-continuous-delivery-and-continuous-deployment/ | cicd, continuousdelivery, continuousdeployment, devops | ---
title: What's the difference between Continuous Delivery and Continuous Deployment?
published: true
date: 2021-04-02 00:00:00 UTC
tags: cicd, continuousdelivery, continuousdeployment, devops
canonical_url: https://jhall.io/archive/2021/04/02/whats-the-difference-between-continuous-delivery-and-continuous-deployment/
---
Continuous Delivery and Continous Deployment are frequently conflated with each other. This is at least in part because they have the same common abbreviation: CD.
And while these concepts are technically related, it is important to understand their differences and be clear about which one we mean when communicating.
In short, **Continuous Delivery** is the practice of continuously and automatically building and delivering a software package whenever a code change is committed. The end result is typically a Docker image, .zip file, or APK package uploaded to a centralized server for access by users or testers.
**Continuous Deployment** goes one step further, and not only builds the software, but also _deploys it_ into _production_.
In a technical sense, the difference between continuous _delivery_ and _deployment_ is conceptually as small as running `dpkg -i` or `brew install`. In human terms, there can be a huge psychological difference between the two.
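The distinction shows up directly in pipeline configuration. A sketch in generic CI-style YAML (the job names, `build.sh`/`deploy.sh` scripts, registry URL, and variables are all hypothetical):

```yaml
# Continuous Delivery: every commit produces a published artifact.
deliver:
  script:
    - ./build.sh                                        # compile, test, package
    - docker push registry.example.com/app:${GIT_SHA}   # publish for users/testers

# Continuous Deployment adds one automated step: the same pipeline
# also rolls the artifact into production, with no human gate.
deploy:
  needs: [deliver]
  script:
    - ./deploy.sh production registry.example.com/app:${GIT_SHA}
```

Removing the `deploy` job (or putting a manual approval in front of it) turns continuous deployment back into continuous delivery.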
Which of these is your team using, and why? If you have a moment hit REPLY, and let me know. If you’re doing neither, what is stopping you? | jhall |
657,940 | 10 Topics to Prepare for Java and Spring Boot Interviews | Gumroad is celebrating GumroadDay and my both books, Grokking the Java Interview and Grokking the Spring Boot Interviews are "pay what you want", minimum of $1 | 0 | 2021-04-07T15:17:27 | https://dev.to/javinpaul/on-gumroadday-my-books-are-pay-what-you-want-minimum-1-59m5 | java, books, programming, springframework | ---
title: 10 Topics to Prepare for Java and Spring Boot Interviews
published: true
description: Gumroad is celebrating GumroadDay and my both books, Grokking the Java Interview and Grokking the Spring Boot Interviews are "pay what you want", minimum of $1
tags: java, books, programming, springframework
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xg6ntt1wb5qimzq9om4z.png
---
Hello devs, I just wanted to write a short post about Gumroad Day and an excellent opportunity to buy my books for just $1 (minimum). For #GumroadDay, both my books, **[Grokking the Java Interview](https://gumroad.com/l/QqjGH)** and **[Grokking the Spring Boot Interview](https://gumroad.com/l/hrUXKY)**, are 'Pay What You Want', $1 minimum. Today only.
*- Grokking the Java Interview, $19.9 (normal price)*
*- Grokking the Spring Boot Interview, $19.9 (normal price)*
More than 580+ Java developers have bought so far. Go see what they look like!
Here is the link - **<https://gumroad.com/javinpaul/>**
More than 150+ people have already bought my books in a few hours, and given only a few hours are left before #GumroadDay is over, I suggest that all of my blog readers, fans, and followers use this opportunity.
Gumroad Day is a special day organized by Gumroad where creators keep all their earnings; all the fees, like the transaction fee and the PayPal fee, are paid by Gumroad. Many creators are offering their courses at huge discounts so that more people can benefit from their work.
[](https://gumroad.com/l/hrUXKY)
### What does Grokking the Java Interview cover?
This is my first book in 10 years of blogging and it contains all the essential core Java topics you need to prepare to crack a Java developer interview. This is a 150+ pages book that contains frequently asked Java interview questions and their answers.
Here is the list of core Java topics covered in this book:
1\. [Object-Oriented Programming](https://www.java67.com/2018/02/5-free-object-oriented-programming-online-courses.html)
2\. [Java Fundamentals](https://www.java67.com/2018/08/top-10-free-java-courses-for-beginners-experienced-developers.html)
3\. [Java Collections](https://javarevisited.blogspot.com/2011/11/collection-interview-questions-answers.html#axzz6sDGFFop5)
4\. [Java Multithreading](https://javarevisited.blogspot.com/2018/06/top-5-java-multithreading-and-concurrency-courses-experienced-programmers.html)
5\. [Garbage Collection](https://medium.com/javarevisited/7-best-courses-to-learn-jvm-garbage-collection-and-performance-tuning-for-experienced-java-331705180686)
6\. [JDBC](https://medium.com/javarevisited/top-5-courses-to-learn-jdbc-and-database-connectivity-for-java-developers-free-and-best-of-lot-7945156fcc3)
7\. [Generics](https://www.java67.com/2019/07/top-50-java-generics-and-collection-interview-questions.html)
8\. [Design Patterns](https://medium.com/javarevisited/7-best-online-courses-to-learn-object-oriented-design-pattern-in-java-749b6399af59)
9\. [Telephonic Interview Questions](https://www.java67.com/2015/03/top-40-core-java-interview-questions-answers-telephonic-round.html)
You can use this book to prepare for the Java interview in a guided and structured way. Java is vast, and it's tough to crack Java interviews without proper preparation; this book helps you there. Whether you are a beginner looking for your first job or an experienced Java developer looking for your next one, this book and its questions will help you.
Here is the link to learn more about the book - **[Grokking the Java Interview](https://gumroad.com/l/QqjGH)**
[](https://gumroad.com/l/QqjGH)
### What does Grokking the Spring Boot Interview cover?
This is my second book to help Java developers in their interview preparation and this book covers [Spring Framework](https://medium.com/javarevisited/10-best-spring-framework-books-for-java-developers-360284c37036), the most important skill for Java developers.
When I released my first book, a lot of my readers and followers messaged me to write a similar book on the Spring framework. Since Spring is as vast as Java, it took me 5 months to write this book, but I am very happy that it turned out to be a great resource for anyone preparing for a Java + Spring Boot interview as well as for [Spring certification](https://medium.com/javarevisited/top-5-spring-professional-certification-exam-resources-for-java-developers-3ef9fa42fe13).
This is a 250+ pages book that contains frequently asked Spring, Spring Boot, and Spring Security interview questions and their answers.
Here is the list of Spring Framework topics covered in this book:
1. [Container, Dependency, and IOC](https://javarevisited.blogspot.com/2012/12/inversion-of-control-dependency-injection-design-pattern-spring-example-tutorial.html)
2\. Spring Bean Lifecycle
3\. [Aspect-Oriented Programming (AOP)](https://javarevisited.blogspot.com/2021/03/spring-aop-interview-questions-answers.html#axzz6nwXUSoGH)
4\. [Spring MVC](https://javarevisited.blogspot.com/2020/08/top-5-courses-to-learn-spring-mvc-for.html)
5\. [Spring Boot Intro](https://medium.com/javarevisited/top-10-courses-to-learn-spring-boot-in-2020-best-of-lot-6ffce88a1b6e)
6\. [Spring Boot Auto Configuration](https://www.java67.com/2018/05/difference-between-springbootapplication-vs-EnableAutoConfiguration-annotations-Spring-Boot.html)
7\. Spring Boot Starter Dependency
8\. [Spring Boot Actuator](https://www.java67.com/2021/02/spring-boot-actuator-interview-questions-answers-java.html)
9\. Spring Boot CLI
10\. [Spring Boot Testing](https://javarevisited.blogspot.com/2021/02/-spring-boot-testing-interview-questions-answers-java.html)
11\. [Spring Cloud Questions](https://www.java67.com/2021/01/spring-cloud-interview-questions-with-answers-java.html)
12\. [Spring Data JPA](https://www.java67.com/2021/01/spring-data-jpa-interview-questions-answers-java.html)
13\. [Spring Security](https://javarevisited.blogspot.com/2021/02/spring-security-interview-questions-answers-java.html#axzz6lIcZ8tnd)
You can use this book to prepare for the Spring Boot interview in a guided and structured way. Just like Java, the Spring Framework is vast, and it's tough to crack Spring Boot interviews without proper preparation; this book helps you there.
Whether you are a beginner looking for your first job or an experienced Java developer looking for your next one, this book and its questions will help you.
Here is the link to learn more about the book - **[Grokking the Spring Boot Interview](https://gumroad.com/l/hrUXKY)**
[](https://gumroad.com/l/hrUXKY)
That's all, guys. This post is meant to be short because Gumroad Day lasts just one day and the prices will revert to normal tomorrow. If you are preparing for Java and Spring Boot interviews, then this is your best chance to grab a copy of my books.
My other **Books and Courses** you may like
1. [Java SE 11 Certification Practice Test on Udemy](https://www.udemy.com/course/java-se-11-certification-exam-1z0-819-practice-tests/?referralCode=6A43D9FD2DD560081062)
2. [Spring Certification Test on Udemy](https://www.udemy.com/course/spring-professional-practice-test-questions-vmware-edu-certification/?referralCode=7419B0A2C8AB79F0520E)
3. [AZ-900 Test on Udemy](https://www.udemy.com/course/az-900-practice-test-azure-fundamentls-certification-exam/?referralCode=C335B28D838A48DEDFA1)
Thank you all for your support, and I hope that many of you find value in these books.
| javinpaul |
659,755 | A Next.js + Firebase TDD Environment Example | Recently I've been working with a start-up that leverages Next.js and Firebase. This is a very intere... | 0 | 2021-04-09T05:30:04 | https://dev.to/ellioseven/a-next-js-firebase-tdd-environment-example-3ojp | nextjs, firebase, testing, react |
Recently I've been working with a start-up that leverages Next.js and Firebase. This is a very interesting stack, as it lends itself to a very fast development lifecycle. As a result, it's been tempting to let testing take a back seat, especially when the initial development cost for a testing environment with a range of testing strategies is quite large.
I've spent some time creating a TDD environment for a Next.js and Firebase application, so I thought I'd share my results here to reduce that time cost and help avoid some of the confusion and traps.
The rest of the article outlines a basic overview, features, technologies used, architecture notes and strategies covered.
[Check out the code](https://github.com/ellioseven/next-firebase-testing), illustrating how I've created the environment.
## Overview
The repo contains a simple application that allows a user to enter a score, which may appear on the top score board. The application attempts to include a surface area of functionality you'd find in a typical Next.js application. See the [README](https://github.com/ellioseven/next-firebase-testing/blob/master/README.md) for instructions how to install and run the application.
### Features
- Emulated Firebase: Firebase offers local emulated environments, including Firestore and Functions
- Component Tests: Simple tests with React Testing Library
- API Integration Tests: API endpoint tests that verify Firestore data
- Firestore Functions Unit Tests: Tests that consume and verify emulated Firestore Function logic
- Application E2E Tests: End to end tests with Cypress.js
- CircleCI Integration: A simple example showcasing how to set up tests suites into a CI pipeline
### Technologies
- Docker: Virtualised environments for application & Firebase runtimes
- Next.js: Popular React application runtime
- Firebase: Popular database and serverless function infrastructure
- Cypress: Automated browser simulation for integration tests
- Jest: Automated testing framework
- React Testing Library: Automated React testing library
- MSW: API mocking library
- CircleCI: Continuous integration & delivery SaaS
- Husky: Bootstrap local development with Git hooks to run tests on Git events
### Structure
- `.circleci` - CircleCI configuration
- `.docker` - Docker configuration and storage for images
- `cypress` - Cypress E2E configuration and assertions
- `packages/app` - Next.js application
- `packages/firebase` - Firebase services and Firestore data collections
- `packages/functions` - Firebase Functions logic
## Testing Architecture
The goal is to create an environment that solves complexities for test strategies, so that any area in the stack can be covered by a test, breadth over depth. This helps developers figure out "how" to create tests.
Docker is used to make it as easy as a simple command to build all the system dependencies, such as Node, Cypress, Java, the Firebase CLI and emulators, etc. This makes it extremely easy to pull down and configure the testing environment. There are two environments, `dev` and `test`, which provide the different services and configuration required.
During local development, seed data is injected to create controllable and reliable test data for tests and local development. When the application boots, a history of scores and a leaderboard is already created. This provides consistent data across the development and testing team. This is done in a Docker service, which will wait for the Firebase emulators to become healthy before migrating the data.
Firebase provides emulators that mimic some of their cloud services, such as Firestore and Functions. This is extremely helpful, but getting the environment set up can be confusing and time consuming (system dependencies, environment variables, configuration, etc.). Grokking how to assert Firestore data and test serverless functions can be difficult. This repository attempts to help solve that.
I've also included CircleCI integration to show how the test environment can be built in a CI process. I use the [machine type executor](https://circleci.com/docs/2.0/executor-types/#using-machine) which provides a VM with full network management and Docker utilities. This makes it easy to use Docker's "host network mode", which simplifies container networking.
Mono-repositories are a popular pattern, so I have implemented this approach with [Lerna](https://github.com/lerna/lerna) to show how the structure might look.
## Testing Strategies
### React Unit Tests
There is a huge amount of resources on how to run unit tests against React components, and so isn't the focus of this repository. I have included some basic Next.js/React tests that assert component and API interaction to depict how they can be structured.
### API/Firebase Integration Tests
Examples include how to pre-populate and tear down the emulated Firebase environment for each API integration test. Be aware that Firestore interaction (e.g. pre-populating data) will trigger any built Firebase functions. If possible, it's best to keep interaction to a minimum to prevent a high frequency of triggers.
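One common way to tear down emulated Firestore between tests is the emulator's REST wipe endpoint. Here is a minimal sketch; the host, port and project id are assumptions you would match to your own `firebase.json` and `.firebaserc`:

```javascript
// Build the Firestore emulator's "delete everything" endpoint.
// Host and project id are assumptions -- adjust them to your setup.
function emulatorClearUrl(host, projectId) {
  return `http://${host}/emulator/v1/projects/${projectId}/databases/(default)/documents`;
}

// Wipe all emulated Firestore data so each test starts from a known state.
async function clearFirestore(host = "localhost:8080", projectId = "demo-project") {
  const res = await fetch(emulatorClearUrl(host, projectId), { method: "DELETE" });
  if (!res.ok) throw new Error(`Failed to clear Firestore emulator: ${res.status}`);
}
```

Calling `clearFirestore()` from a Jest `beforeEach` (followed by re-seeding) keeps each test independent, though the re-seeding writes will themselves fire any built functions, as noted above.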
### Firebase Functions Tests
Firebase comes with testing libraries which help interact with emulated environments. I've included some examples that pre-populate Firestore and run simulated snapshots. Testing functions can be tricky, as they run as asynchronous background tasks, meaning they can't simply be called and asserted directly. This can also cause potential race conditions. To overcome this problem, I've provided a simple solution that waits and retries the assertion.
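A wait-and-retry assertion of that kind can be sketched as a small helper; the name and defaults here are illustrative, not taken from the repository:

```javascript
// Re-run an async assertion until it passes or the timeout elapses.
// Useful when a Firestore trigger runs in the background and its
// writes only become visible after an unknown delay.
async function eventually(assertion, { timeout = 5000, interval = 250 } = {}) {
  const deadline = Date.now() + timeout;
  for (;;) {
    try {
      return await assertion(); // stops as soon as the assertion passes
    } catch (error) {
      if (Date.now() >= deadline) throw error; // out of time: surface the last failure
      await new Promise((resolve) => setTimeout(resolve, interval));
    }
  }
}
```

In a Jest test you could then write something like `await eventually(async () => expect(await readLeaderboard()).toHaveLength(10))`, where `readLeaderboard` is whatever query your test uses.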
### E2E Tests
End to end tests are managed with Cypress. Before Cypress can be launched, packages are built, the emulators are run, data is seeded, then the Next.js app is booted in production mode. This prevents any problems with having to wait for pages to compile, which can cause timeout issues. The timing sequence is managed by Docker Compose, which will check for healthy services before running appropriate tasks. | ellioseven |
660,319 | Day 52 | Day 52/100 of #100DaysOfCode Codewars | Node js Hours coded: 2.5 Lines of code: 232 Keystrokes: 346... | 11,311 | 2021-04-09T16:24:06 | https://dev.to/rb_wahid/day-52-44hf | 100daysofcode, programming, javascript | Day 52/100 of #100DaysOfCode
Codewars | Node js
Hours coded: 2.5
Lines of code: 232
Keystrokes: 3461
via @software_hq's #vscode extension | rb_wahid |
660,502 | What's your favorite programming channel? | What's your favorite programming YouTube channel? Mine are Web Dev, Web Dev Simplified, Codingflag, a... | 0 | 2021-04-09T18:41:18 | https://dev.to/cristoferk/what-s-your-favorite-programming-channel-2113 | webdev, youtube, programming, discuss | What's your favorite programming YouTube channel?
Mine are Web Dev, Web Dev Simplified, Codingflag, and Online Tutorials.
Also, I am making programming tutorials too! Here is the link to my channel
https://www.youtube.com/channel/UCFzeA3xC-_i4ZT-XwcwsJxQ/featured
Please Subscribe! | cristoferk |
660,580 | Day 342 : RIP DMX | liner notes: Professional : Pretty chill day to start the weekend. Had a meeting with the team and... | 0 | 2021-04-09T21:27:12 | https://dev.to/dwane/day-342-rip-dmx-381o | hiphop, code, coding, lifelongdev | _liner notes_:
- Professional : Pretty chill day to start the weekend. Had a meeting with the team and then I worked on a demo to help with a question that we got in the community Slack channel. The original poster said that it helped with what they were looking for. Cool. Pretty much that was it. The day went by pretty quick.
- Personal : Last night, I worked on the layout for my side project. Worked on tabs and some buttons. Also added some elements into details so that things take up less space. Also got it so that it can be navigable with the keyboard. Came out pretty good.

It's the weekend! This week flew by!!! Working on tracks for the radio show tomorrow. Learned that DMX passed away. Damn. Sucks. Rest in Power DMX. May do some more coding. Really want to watch the new episode of "Falcon and Winter Soldier" and maybe an anime. We'll see how soon I can get the show set up.
Have a great night and weekend!
peace piece
Dwane / conshus
https://dwane.io / https://HIPHOPandCODE.com
{% youtube sB2_MmtMoIc %} | dwane |
660,981 | Array of object | A post by swarnaparvathi | 0 | 2021-04-10T08:05:51 | https://dev.to/swarnaparvathi/array-of-object-297j | swarnaparvathi | ||
660,991 | Web Application Penetration Test Checklist | Part - 01 | In this article I am going to share a checklist which you can use when you are doing a penetration te... | 12,165 | 2021-04-10T08:42:07 | https://thehackedsite.netlify.app/bug/bounty/2021/04/10/web-application-penetration-test-checklist-part-01 | linux, security, beginners, github | **In this article I am going to share a checklist which you can use when you are doing a penetration test on a website, you can also use this list as a reference in bug bounties. This is beginner’s friendly list, so they can look it for reference.**
*Before starting the list I want to make something clear: before you start using this list for finding bugs/vulnerabilities, make sure that you have already completed the first step, which is **Reconnaissance**. Otherwise you will find it hard to find bugs/vulnerabilities.*
*You are not a genius! Remember this: if you don't understand something, just Google it and do some research. I also don't know everything, and there could be things that I have missed, so don't worry and keep learning.*
#### General things to do
1. Create 2 accounts on the same website if it has login functionality. You can use this [extension]( https://addons.mozilla.org/en-US/firefox/addon/multi-account-containers/) to use same browser for creating different accounts on the same website.
2. Try directory brute forcing using tools like **Dirsearch**, **FeroxBuster**, or **Ffuf**; some directories may reveal sensitive information.
#### Login page
1. Session expiration
2. Improper session validation
3. OAuth bypass *(it includes features like login with Google, Microsoft, Instagram or any)*
- OAuth token stealing
- Authentication bypass
- Privilege escalation
- SQLi
#### Registration page
1. XML file upload using SVG *(if website asks for documents upload or profile upload then you can try this)*
2. Bypassing limitation on file types to upload *(if they just allow jpg, png then try to upload `.php` or `.py`)*
3. Bypassing mobile or email verification
4. Brute forcing OTP sent
5. Try inserting XSS payloads wherever possible *(e.g. if you can enter a payload in the first name/last name/address text boxes, make sure to do so, because sometimes it may reflect somewhere else or may even be stored XSS)*.
#### Forgot password page
1. Password reset poisoning *(kind of similar way we do host header injection)*
2. Reset token/link expiring *(maybe they pay)*
3. Reset token leaks *(this can happen when a website interacts with third-party services; the password reset token may be sent in the Referer header and can leak)*
4. Check for sub-domain takeover.
5. Check for older version of service is being used by your target and if they so try to find existing exploit for the target.
*So this was all about some basic things to check while doing penetration test on a website or in a bug bounty program. Hope you liked it and learned something new from it.*
If you have any doubts, questions, or queries related to this topic, or just want to share something with me, please feel free to contact me.
### 🖥 My personal blog
[The Hacked Site](https://thehackedsite.netlify.app/)
### 📱 Contact Me
[Twitter](https://twitter.com/r_mishra10),
[LinkedIn](https://www.linkedin.com/in/rahul-mishra-66210b185),
[Telegram](https://t.me/rahul_mishra10),
[Instagram](https://www.instagram.com/rahul_mishra10/?hl=en),
### 📧 Write a mail
<rahulmishra102000@gmail.com>
#### 🚀 Other links
[GitHub](https://github.com/rahulMishra05),
[HackerRank](https://www.hackerrank.com/rahulmishra10201),
[Tryhackme](https://tryhackme.com/p/rahulMishra05)
| rahulmishra05 |
661,059 | Build your own Shakespeare Translation Web App with JavaScript Fetch API | Shakespeare may have been a genius, but one thing's for sure: he wasn't speaking our language. His... | 0 | 2021-04-10T20:50:54 | https://dev.to/nayyyhaa/build-your-own-shakespeare-translation-web-app-with-javascript-fetch-api-loo | javascript, webdev, codenewbie, tutorial | Shakespeare may have been a genius, but one thing's for sure: he wasn't speaking our language. His ever-popular works (dramas and poems) make his unique language style live even today.
I've always been curious about how Shakespeare would've expressed my thoughts in his words. _Have you been too??_
**Then you've come to the right place, my friend!**
This is a vanillaJS project which uses API from https://funtranslations.com/ to **translate English text into Shakespeare English.**
| Prerequisites | Basic understanding of HTML and CSS, an understanding of what JavaScript is. |
| ------------- |:-------------:|
This is what we'll build:

>Try out the app [here](https://iamshakespeare.netlify.app/)
## Source Code
In case you get lost while following along, you can grab the [source code from here](https://github.com/nayyyhaa/shakespeare-translator).
Let's begin!
## Getting Started
To get started, we'll be using VSCode for writing our code. Create your first file with the name **index.html** for writing out HTML code.
In our Application, we have 3 basic elements:
1. Input Field - to accept the user's input
2. Translate Button - to trigger an event when the user clicks the translate button.
3. Output Field - to preview the translated text.
These 3 elements can be created as follows in HTML:
###### HTML code snippet - index.html
```html
<body>
<input type="textarea" id="inputTxt" placeholder="insert your text">
<button id="translateButton">Translate!</button>
<p id="outputTxt"></p>
<script src='/scripts/app.js'></script>
</body>
```
> _Note: < script > tag is being used to bind this HTML file with the JavaScript file **app.js**._
## Initialising variables to store our data
This section of the code sets up the variables we need to store the data our program will use.
In your **app.js** file, create the following variables:
###### JS code snippet - app.js
```js
let inputElement = document.querySelector("#inputTxt"); // input element
let translateBtnElement = document.querySelector("#translateButton"); // button element
let outputElement = document.querySelector("#outputTxt"); // output element
let url="https://shakespeare.p.mashape.com/shakespeare.json"; //API URL
```
The first three variables `inputElement`, `translateBtnElement`, `outputElement` are each made to store a reference to the form text input, translate button and output element in our HTML.
Our final variable `url` is used to store the server's API call URL from where we obtain the translated data.
Here, we've used `.querySelector()` function for selecting the particular **id** that we've already set in our index.html file.
To listen to the button click event we need to define an event handler function.
```js
translateBtnElement.addEventListener("click", translateFunction);
```
Here,
- `click` - is the event
- `translateBtnElement` - is the event listener
- `translateFunction` - is the event handler/callback function.
After the `click` event has been fired on `translateBtnElement`, the `addEventListener()` method handles it by calling `translateFunction()`.
Before defining the `translateFunction()` we need to get some basic knowledge about APIs.
###What is an API?
API stands for **Application Programming Interface**, is a set of functions that allows applications to access data and interact with external software components, operating systems, or microservices.
_WOAH! What?!_
OK! Let me explain this to you in easy words. Suppose you are in a restaurant and you are dying to have that chocolate cake. You don't go straight to the chef for placing the order, right? The waiter does that for you. That's what API is. **It's an interface that communicates between applications.**
Here,
- You/Customer: Client
- Waiter: API
- Chef: Server
Hence, in order to get the data from the web servers, we need APIs.
In our example, we are using [FunTranslationAPI](https://funtranslations.com/shakespeare) to fetch the data in JSON format(key - value pair).
Let's call the API then!
## Fetch API
The Fetch API is a modern interface that allows you to make HTTP requests to servers from web browsers to given URL.
Basic syntax involved:
```js
fetch(url)
.then(response => {
// handle the response
})
.then(data => console.log(data))
.catch(error => {
// handle the error
});
```
Here in the `fetch()` function we pass the URL of the resource from which we are requesting the data. This will return a `response` object. The `response` object is the API wrapper for the fetched resource, with a number of useful properties and methods to inspect the response. This is then passed to the `data` variable (you can give any name to this) for printing the output.
Now, it's time to define the functions.
## Defining Functions() _for some action_
To get our code into some action, we need to define some functions.
```js
function translateFunction(event){
let inputValue = inputElement.value; //fetching input value
fetch(url) //Fetch API call
.then(response => response.json())
.then(data => {
outputElement.innerText = data;
})
.catch(() => alert("Shakespeare(Server) is busy! Try after sometime"))
```
Now, let's break it down:
1. We'll extract `inputElement` value into `inputValue` variable.
2. Making the `fetch` API call using the given `url` and then extracting the `response` object. This is just an HTTP response, not the actual JSON. To extract the JSON body content from the response, we use the `json()` method inside an [arrow](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions/Arrow_functions) function.
3. Setting `data` variable's value to the `outputElement` variable.
4. Finally, error handling with `catch()` function.
Let's try our application. Go to the browser, input your text & click on the translate button. You'll get the following output.
###### In console
```json
{
"error": {
"code": 400,
"message": "Bad Request: text is missing."
}
}
```
That's not the output that we were expecting. That's because we have to pass the text to our URL. For that, we'll define another function `translatedURL()`.
```js
function translatedURL(inputValue){
return `${url}?text=${inputValue}`;
}
```
Let's try our app with sample text _Hi. How are you?_, calling `fetch(translatedURL(inputValue))` instead of the previous `fetch(url)` to concatenate the text message onto our server API's URL. We'll get output like this:
```json
{
"success": {
"total": 1
},
"contents": {
"translated": "Good morrow. How art thee?",
"text": "Hi. How are you?",
"translation": "shakespeare"
}
}
```
Success! Well, not quite. Notice that the output text doesn't look pretty. This output is JSON data and we need to extract the translated value from it.
Here,
- translated: translated text
- text: input text
- translation: language of translation being used from FunTranslation API
We refer it by `json.contents.translated`. Now our code should look something like this:
```js
function translatedURL(inputValue){
return `${url}?text=${inputValue}`;
}
function translateFunction(event){
let inputValue = inputElement.value;
let finalURL = translatedURL(inputValue);
fetch(finalURL)
.then(response => response.json())
.then(json => {
outputElement.innerText = json.contents.translated;
})
.catch(() => alert("Shakespeare(Server) is busy! Try after sometime"))
}
```
and we get the following output:

**Voila!** We've built our very own Shakespeare Translation Web App with JavaScript Fetch API.
>Note: FunTranslation APIs are free to use, but they have a limit of 5 calls/hour. Once a request exceeds this limit, it will result in a failure with an error, which we handle in the `catch()` block.
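One subtlety worth knowing: `fetch()` only rejects on network failures, so an HTTP error response (such as hitting that rate limit) still resolves normally. Here is a hedged sketch of how you could surface both HTTP errors and the API's error envelope; the function names are my own, not from the tutorial:

```javascript
// Pull the translated sentence out of the API's JSON envelope,
// turning the documented error shape into a thrown Error.
function parseTranslation(json) {
  if (json.error) throw new Error(json.error.message); // e.g. rate-limit message
  return json.contents.translated;
}

// fetch() resolves even for 4xx/5xx responses, so check response.ok
// explicitly before parsing the body.
async function translateText(finalURL) {
  const response = await fetch(finalURL);
  if (!response.ok) throw new Error(`HTTP ${response.status}`);
  return parseTranslation(await response.json());
}
```

With this, both kinds of failure funnel into one place, so a single `catch()` (or `try`/`catch`) can show the user a meaningful message.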
## Finished for now...
Congrats on making it this far! We've gained a basic understanding of DOM scripting (i.e. JS in the browser), calling servers and getting data from them, taking user input and showing user output, and many more things.
Now all that's left for you is to design your own styling with CSS. You can also check out funtranslation [site](https://funtranslations.com) for a similar app with different translation languages.
>[Click here](https://iamshakespeare.netlify.app/) to check out the live project.
Give it a try, create your version of the same and share your experience and feedback on the comments section.
*_Thanks for reading!_* | nayyyhaa |
661,064 | Hello World | Greetings! | 0 | 2021-04-10T11:28:08 | https://dev.to/khalelthecoder/hello-world-317n | Greetings! | khalelthecoder | |
661,966 | Renaming the Master Branch to Main On GitHub | Update: You can now rename it directly from GitHub 🎉. Check it out: Renaming a branch on... | 0 | 2021-04-11T12:28:48 | https://dev.to/jonathanbcsouza/how-to-rename-the-master-branch-on-github-540a | github | **Update:** You can now rename it directly from GitHub 🎉.
Check it out: [Renaming a branch on GitHub](https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/managing-branches-in-your-repository/renaming-a-branch)!
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/chfo7dvjmani0nxa8jr7.png" width="400">
**Original Post:**
Since October 1, 2020, all new repositories on GitHub have defaulted to using `main` as the branch name. If you created repositories before this date, your branches will be named `master`. GitHub, along with the broader Git community, is gradually transitioning the default branch name from `master` to `main`, a convention that's slowly being adopted across the community. In this guide, we'll walk you through the process of renaming your branch, step by step.
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wtk61dqisixmsr1bc6c0.png" width="400">
### Step 1: Select Your Repository
1. Navigate to the main page of your repository on [GitHub](https://github.com)
2. Copy the URL provided
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n2p8i1vjcsjw9jje3sm7.png" width="400">
### Step 2: Clone the Repository to Your Local Machine
1. Open your terminal
2. Change the current working directory to the desired location where you want to save the project.
3. Type `git clone`, followed by the URL you copied earlier.
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/btchyrw19nt8ybbwyfze.png" width="400">
### Step 3: Modify the Branch Name
While still in your terminal, navigate to the root folder of your project using `cd <your-project-folder>` and execute the following command:
`git branch -m master main`
This command renames the local `master` branch to `main`; the `-m` (move) flag carries all the commit history from `master` over to your new `main` branch.
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sk50z7zmvg5w66owc0z9.png" width="400">
### Step 4: Remove the “master” Branch
Now it's time to remove the old branch.
Execute the following command to set your local machine to track the new branch and update your remote GitHub repository:
`git push -u origin main`.
Next, point the HEAD to the current branch reference with: `git symbolic-ref refs/remotes/origin/HEAD refs/remotes/origin/main`
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j14pg5l3agasjddhudaq.png" width="400">
You can now verify your tree using:
`git branch -a`
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9fu5c5i3cp3qk03m4kfx.png" width="400">
4. Navigate to your GitHub repository, go to the Settings section, find the branches section, and switch the default branch to “main”, as shown below.
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g64754rfrkry5f6eqpwq.png" width="400">
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ugc7gqkfmftkqwcxxg6d.png" width="400">
5. Return to your terminal and delete the old branch both locally and remotely with:
`git push origin --delete master`
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j3on5dlry02vaidb141y.png" width="400">
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c0t8pyiirof1srgkix9n.png" width="400">
### Step 5: Celebrate!
You've done a fantastic job! 😎
| jonathanbcsouza |
661,214 | If you were to uninstall all secondary extensions in your text editor and leave just Three(3) what will they be? | Introduction : A code editor simply put is a playground that aids you write your code in a... | 0 | 2021-06-19T20:33:51 | https://dev.to/yunweneric/if-you-were-to-uninstall-all-secondary-extensions-in-your-text-editor-and-leave-just-four-4-what-will-they-be-1725 | discuss, codenewbie, dev | ## Introduction :
A code editor simply put is a playground that aids you write your code in a nice, easy and quicker way.
There are several code editors, both free and paid, out there. They all share some commonalities, one of which is the ability to install additional helpers (extensions) to facilitate your development. Built-in or primary extensions come pre-installed with the editor, while secondary extensions are installed by users when needed.
In the comments, drop your 3 secondary extensions (extensions you find absolutely necessary) as well as the code editors respectively
I'll start:
1. Live server --Vscode
2. Beautify --VScode
3. ES6/ES7 JavaScript Snippets --Vscode | yunweneric |
661,261 | Why I'm writing a blog every week this year and why you should write more too! | Contents Intro Documentation Summary Intro This year I will be writin... | 0 | 2021-04-10T16:42:25 | https://jordanfinners.dev/blogs/why-im-writing-a-blog-every-week-this-year-and-why-you-should-write-more-too.html | webdev, documentation, beginners |
## Contents
1. [Intro](#intro)
2. [Documentation](#documentation)
3. [Summary](#summary)
## Intro <a name="intro"></a>
This year I will be writing 52 blogs, one for each week of the year. Now I'm a realist, in that holidays and life will get in the way of writing, so my aim is for the number of blogs to be 52 or more, not necessarily one for each calendar week.
So that is quite a lot of blogs, you might be asking why.
I've never been a particularly good writer, I guess thats why I've always tended to more numeric courses/roles. But I realise that it is a critical part of any project and life as a whole.
Writing is still one of the best ways of transferring knowledge between people and generations.
I also think that in tech it is often undervalued, which leads me on to why I'm writing.
## Documentation <a name="documentation"></a>
Documentation is as important as the code of a project. I fundamentally believe that.
If you think back to projects you've enjoyed working on, APIs you've integrated with ease or dependencies you've enjoyed using. I bet they all have good documentation in common.
Likewise if you've had issues using a tool or dependency or API etc, it probably wasn't that well documented, or, like I have on so many occasions, not read the README! 🙈
Documentation can come in many forms:
* READMEs
* Architectural Diagrams
* Background information about what it solves, why it exists
* API Documentation
* Usage Information
* Commit Messages
* Github Issues
* Pull Requests and Review Comments
* Schema Registries
* Types
All these and many more forms of documentation, thread together to provide you with the story, the full picture of a project.
> Documentation gives you the who, what, why and how of a project.
Now I'm not saying that every project should have every tiny detail documented, it's very much specific to your circumstances.
For example, on my personal projects there is very little documentation; it's just me working on them, so I can keep most of it in my head, or at least throw it into a simple README.
Whereas at work I like to ensure I've got really thorough READMEs, containing links to any background information about why a project exists, a README with getting started commands, API documentation with example curl requests and responses.
This is because I work as part of a larger and ever growing team and so not everyone will have the same context as I will when I was writing the code, and anyone should be able to pick up the project and understand it practically as well as the person who wrote the code.
I also like to think about *future me* or successors who will take over the project after me. To ensure that they aren't looking at it thinking what they heck is this, and also partly what was this guy doing! 😂
Understanding what good documentation looks like is really hard, and writing it equally tough.
This is why I'm aiming to write so much this year, to get better with practice! Both reading and writing documentation consistently will help me to produce better documentation, waffle less and make life a bit easier for *future me*!
I also think writing about a topic can help solidify your understanding of a topic.
Something I've picked up is if documentation is lacking in an area and someone new asks you to explain it, encourage them to document it and then review it afterwards. This not only helps delegate documentation writing, which is often a large task for the more experienced team members. But also helps solidify the new team members understanding, see the situation from fresh eyes and give the new members ownership of a part of the teams work which helps embed them.
I want to just give a shout out to [Remark](https://github.com/remarkjs/remark-lint#install) which is a linter for your READMEs which makes so much sense to me. Our code should be of a consistent standard, so why not the READMEs too!
## Summary <a name="summary"></a>
> Documentation is hard. But it is also vital.
I aim to write consistently to help me create better documentation and speed up the process of writing the documentation too! I can already see this happening with how long it takes me to write a blog.
I'd love to hear your thoughts on what good documentation looks like, and example of good docs! Let me know on <a href="https://twitter.com/JordanFinners" target="_blank" rel="me noopener noreferrer">Twitter</a>.
| jordanfinners |
661,457 | Encrypting cron emails with GPG | Introduction You might have your server setup in such a way that it runs a few tasks with... | 0 | 2021-04-10T21:12:57 | https://www.philipp-mayr.de/2017/03/25/encrypting-cron-emails | cron, gpg, linux | ## Introduction
You might have your server set up in such a way that it runs a few tasks with cron so you don't have to worry about them. Except... you should. That is, if the scheduled tasks send mission-critical information over the internet. Now assume you have some kind of security audit software running, like, say, lynis. You sure don't want that report in the wrong hands, since an attacker could use that information to break into your server far more easily than otherwise.
## Assumptions
* You are familiar with GPG
* You have root access to your linux web server
* Your server runs on a recent Ubuntu
* Cron is already configured to send emails
## Encrypting
There are basically two ways of encrypting emails one is GPG and the other S/MIME. We will be using GPG. Further this article assumes you are familiar with GPG.
1. Upload your public key (ending in .asc) to your server /home is a good place.
2. that the key can actually be read by the command we will be using, it has to be slightly modified. To be precise the ASCII amor has to be removed we need the key in binary form. This is archived by the following command.
```
gpg --dearmor < /home/YOURPUBLICKEY.asc > /home/YOURPUBLICKEY.asc.gpg
```
3. Add this line at the top of your /etc/crontab just after MAILTO=you@example.de. You need to replace the email address and the public key path.
```
GPG_CMD = "ifne /usr/bin/gpg --batch --armor --trust-model always --no-default-keyring --keyring /home/YOURPUBLICKEY.asc.gpg --recipient you@example.de --encrypt"
```
4. For this command to work we need the program ifne installed. Usually, if a command has no output to /dev/stdout or /dev/stderr, gpg would encrypt an empty string and you would receive an encrypted email that has no content once decrypted. This would be annoying; ifne prevents this. To install it, run.
```
apt-get install moreutils
```
5. Now in /etc/crontab you can simply pipe the output to gpg and enjoy encrypted emails.
```
* * * * * root /bin/echo "gpg test" | $GPG_CMD
```
I got the inspiration for this from [this](https://www.zulius.com/how-to/send-encrypted-email-from-shell-scripts-cron/) a site that is unfortunately offline for a few days now (checked April 2021). | philippmayrth |
661,714 | 49 Days of Ruby: Day 19 - Exceptions | Join us on a 49 day learning journey through Ruby! | 10,268 | 2021-04-15T04:38:12 | https://dev.to/bengreenberg/49-days-of-ruby-day-19-exceptions-1em8 | ruby | ---
title: 49 Days of Ruby: Day 19 - Exceptions
published: true
description: Join us on a 49 day learning journey through Ruby!
series: 49 Days of Ruby
tags: #ruby
cover_image: https://dev-to-uploads.s3.amazonaws.com/i/9m73qaetuse3nm3y3it8.png
---
**Welcome to day 19 of the 49 Days of Ruby! 🎉**
Today we are going to discuss a topic that can strike fear in the hearts of developers everywhere! 😱 We are talking about exceptions. Why are they so scary?
Take a look at this:
```ruby
irb(main):001:0> puts "Hi, #{name}"
Traceback (most recent call last):
4: from /Users/bengreenberg/.asdf/installs/ruby/2.7.2/bin/irb:23:in `<main>'
3: from /Users/bengreenberg/.asdf/installs/ruby/2.7.2/bin/irb:23:in `load'
2: from /Users/bengreenberg/.asdf/installs/ruby/2.7.2/lib/ruby/gems/2.7.0/gems/irb-1.2.6/exe/irb:11:in `<top (required)>'
1: from (irb):1
NameError (undefined local variable or method `name' for main:Object)
```
We start seeing a lot of things that don't make a lot of sense to us. Items like `Traceback (most recent call last)` and `NameError`. It's no wonder that exceptions are intimidating.
What specifically is an exception?
An exception is the language, in this case Ruby, telling us that something is not quite right.
Exceptions are actually trying to be helpful. Let's break down the example above to see what we mean.
## Exceptions in Ruby
In the above example, I tried to output the following interpolated string: `"Hi, #{name}"`. That looks okay, what could be wrong? What does the exception say?
```ruby
NameError (undefined local variable or method `name' for main:Object)
```
If we look closely we see it says that there is an undefined local variable or method called `name`. This is called a `NameError`. What do you think that means?
Perhaps you think it means Ruby is trying to tell us we don't have a variable called `name`? You would be right!
We can view exceptions as guides along our development journey. They can help us return back to the path when we veered off course.
There are many different sub-classes of the main `Exception` class. Each sub-class communicates a different type of error. The `NameError` class is an example of one of those sub-classes.
The [Ruby docs](https://ruby-doc.org/core-2.7.2/Exception.html) list all the different sub-classes, and you can even click through to read about each one!
The more you familiarize yourself with exceptions, the more helpful they will be in debugging what went wrong when things inevitably go wrong in your code.
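Once you can read an exception, the natural next step is handling one. Here is a small sketch (reusing the same undefined `name` from above) of how `begin`/`rescue` catches a `NameError` instead of crashing:

```ruby
# Interpolating an undefined variable raises NameError,
# which we rescue instead of letting the program crash.
message =
  begin
    "Hi, #{name}"
  rescue NameError => e
    "Caught #{e.class}: #{e.message}"
  end

puts message
```

Rescuing exceptions is a bigger topic than today's installment, but this shows why knowing the sub-class names pays off: you rescue exactly the error you expect.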
**Come back tomorrow for the next installment of 49 Days of Ruby! You can join the conversation on Twitter with the hashtag [#49daysofruby](https://twitter.com/hashtag/49daysofruby).**
| bengreenberg |
661,871 | Starwalt is an Internet Company, We help to develop your #Business
https://starwalt.github.io | A post by santhosh kumar | 0 | 2021-04-11T09:16:21 | https://dev.to/sandysanthosh/starwalt-is-an-internet-company-we-help-to-develop-your-business-https-starwalt-github-io-2ai5 | sandysanthosh | ||
662,060 | Intro to Angular - Part 1 | In this article we will start a journey to learn Angular. This Part 1 of our series to cover Angu... | 0 | 2021-04-11T13:59:58 | https://dev.to/moe23/intro-to-angular-part-1-428o | angular, typescript, beginners, javascript | In this article we will start a journey to learn Angular.
This is Part 1 of our series covering Angular from many different angles. We will be building an application which will connect to an API and pull data, validate input, authenticate users, and much more.
You can watch the full video on YouTube:
{% youtube YcatzwXWQ64 %}
And you get the source code from GitHub:
https://github.com/mohamadlawand087/v24-AngularPart1
In this first part we will be focusing on Angular fundamentals, as well as building our first component.
So what we will cover today:
- What is Angular
- Why Angular
- Anatomy of Angular application
- Ingredients
- Start Coding
As always you will find the source code in the description down below. Please like, share and subscribe if you like the video; it will really help the channel.
### What is Angular
Angular is a JavaScript framework for building client-side applications using HTML, CSS and TypeScript.
Angular in its current form was released in 2016 and has been updated ever since; currently we are on version 11.
It's a very popular JS framework for building client-side applications.
### Why Angular:
- Angular makes HTML more expressive, featuring if conditions, loops, and local variables
- Data binding, track changes and process updates from the users
- modular design, create building blocks and reuse them across the applications
- Modern, takes the latest features of JS and supports both legacy and new browsers
- Simplified API
- Built for speed: faster load and rendering times
- Built-in support for communication with backend services
- Enhanced productivity
### Anatomy of an Angular application
An Angular application is composed of a set of components, as well as services which provide functionality across these components.

### What is an Component:
- Template: the HTML for the UI, defining a view for the component
- Class: the code associated with the view; a class contains the properties and data elements available to the view, and methods which perform actions in the view, like handling a button click
- Metadata: provides additional information about the component to Angular; it is the metadata that identifies the class as a component

When building a lot of components, how do we combine all of these components into a single application?
### What is a Service:
A service is typically a class with a narrow, well-defined purpose. It should do something specific and do it well.
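As a rough sketch in plain TypeScript (leaving out Angular's `@Injectable()` decorator so the snippet stays self-contained; `LoggerService` is a made-up example, not part of our app), a service is just a focused class:

```typescript
// A service: one narrow, well-defined job. Here, collecting log messages.
class LoggerService {
  private entries: string[] = [];

  log(message: string): void {
    this.entries.push(message);
  }

  count(): number {
    return this.entries.length;
  }
}

const logger = new LoggerService();
logger.log("app started");
console.log(logger.count()); // 1
```

In a real Angular app you would decorate the class with `@Injectable({ providedIn: 'root' })` so it can be injected into components.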
### What is a Module:
Angular modules help us organise our application into blocks of functionality. Every Angular application has at least 1 module, which is called the root Angular module.
An application can contain additional modules, which can provide additional features.

### Ingredients
- Visual Studio Code ([https://code.visualstudio.com](https://code.visualstudio.com/))
- Node ([https://nodejs.org/en/download/](https://nodejs.org/en/download/))
## Setting up the project
We need to make sure we have Node installed on our machine. To verify that you have it, type the following command in your terminal (npm ships with Node, so checking npm's version confirms both):
```bash
npm -v
```
Then we will need to install the Angular CLI (command-line interface), which will allow us to build Angular applications:
```bash
npm i -g @angular/cli
```
Now it's time to create our own application.
```bash
ng new my-app
```
This will take around a minute to complete. It will ask if we want to enable routing; we said yes, since this functionality will allow us to navigate between different components.
Once our setup is complete, let us run the application to see what we get out of the box, and to make sure everything has been generated successfully.
```bash
ng serve -o # opens the app in the default browser
```
The command above also gives us hot reloading: whenever we change the code, it is directly recompiled and the result reflected in the browser.
Let us now discuss the folder structure of our Angular application.
- e2e: end-to-end testing
- node_modules: npm dependencies
- src: where our code lives
- app: where we are going to put all of the Angular code
- index.html: app-root is the entry point of our application (we are not really going to touch it)
- styles.css: where we are going to put global styles for the application
### Let us start coding
Let us create our own custom component and show it. Inside our src ⇒ app folder we will create a new file called hello.component.ts; this is the simple component we will use for our hello world.
```tsx
import { Component } from '@angular/core';
@Component({
selector: 'hello-world',
template: '<h2>{{title}}</h2>'
})
export class HelloWorldComponent {
title = 'hello world from component';
}
```
After adding all of the code that we need inside our component, how will Angular know about it? How are we going to display this component's output?
To fix this we need to:
- First, add the selector "hello-world" into our app.component.html page, as the app component is the entry point to our Angular application. We will delete all of the generated code, leave the router-outlet (which we will discuss later), and add the selector. When we use the selector in the HTML it is called a directive, meaning the custom element we created.
```tsx
<hello-world></hello-world>
```
- Then we need to update the app.module.ts to inform Angular that we have a new component
```tsx
import { HelloWorldComponent } from './hello.component';
@NgModule({
declarations: [
AppComponent,
HelloWorldComponent
],
imports: [
BrowserModule,
AppRoutingModule
],
providers: [],
bootstrap: [AppComponent]
})
```
Now let us start developing our application. We are going to start with a list of users, and in order to make our UI nicer we will be utilising Bootstrap.
We will start by installing bootstrap into our application by following these steps. Open the terminal in your project and type the following
```bash
npm install bootstrap
```
Now that we have installed it, we need to import it into our global style sheet "styles.css".
```css
@import url(~bootstrap/dist/css/bootstrap.min.css);
```
Now let us start creating our user components. By convention, every feature of our application will have its own folder, so we will create a folder called users inside our app folder.
Inside the users folder we will create the template for our user list component, user-list.component.html. Once we have created this file, let us start building the UI:
```html
<div class="card">
<div class="card-header">
User List
</div>
<div class="card-body">
<div class="row">
<div class="col-md-2">Filter by:</div>
<div class="col-md-4">
<input type="text" />
</div>
</div>
</div>
<div class="row">
<div class="col-md-6">
<h4>Filtered by:</h4>
</div>
</div>
<div class="table-responsive">
<table class="table">
<thead>
<tr>
<th>
Name
</th>
<th>
Email
</th>
<th>
Phone
</th>
<th>
Country
</th>
</tr>
</thead>
<tbody>
</tbody>
</table>
</div>
</div>
```
Now we build our component: inside our users folder we will create a new file called user-list.component.ts and add the following:
```tsx
import { Component } from '@angular/core'
@Component({
selector: 'pm-users',
templateUrl: './user-list.component.html'
})
export class UserListComponent {
}
```
Now let us update our app.module.ts so we can inform Angular about our new component.
```tsx
import { UserListComponent } from './users/user-list.component';
@NgModule({
declarations: [
AppComponent,
HelloWorldComponent,
UserListComponent
],
imports: [
BrowserModule,
AppRoutingModule
],
providers: [],
bootstrap: [AppComponent]
})
export class AppModule { }
```
The final step is to add our new component to the app.component.html
```html
<pm-users></pm-users>
```
Now let us run the application using the following command
```bash
ng serve -o
```
Now that our application is running and we have created the main structure for our components, let us discuss binding and how we can utilise it to build our view.
### Bindings

Binding coordinates communication between the component's class and its template, and often involves passing data. We can provide values from the class to our template, and our template raises actions back to the class.
Binding happens in 2 ways
- from class ⇒ template : to display information
- from template ⇒ class : to raise events and values
Binding is always in the template
We will cover one-way binding now, which is interpolation; we will cover the rest as we go.
Let us start with the implementation by making the page title dynamic. We add the following to our user-list.component.ts class:
```tsx
pageTitle: string = "User list";
```
and then updating the user-list.component.html to the following
```html
<div class="card-header">
{{pageTitle}}
</div>
```
### Directives
Directives are custom HTML elements or attributes used to extend HTML's functionality. We can build our own custom directives or utilise Angular's built-in ones.
When we created our component and utilised it inside app.component.html, we utilised our own directive.
Built-in directives: *ngIf and *ngFor.
Let us start utilising the built-in directive *ngIf. We are going to update our table to only show the list if there are records available. To do that, let us update our component class as follows:
```tsx
export class UserListComponent {
pageTitle: string = "User list";
users: any[] = [
{
"userId": 1,
"fullName": "Mohamad Lawand",
"email": "mohamad@email.com",
"phone": "1231123",
"country": "lebanon"
},
{
"userId": 2,
"fullName": "Richard Feynman",
"email": "richard@email.com",
"phone": "333333",
"country": "US"
},
{
"userId": 3,
"fullName": "Neil Degrasse Tyson",
"email": "neil@email.com",
"phone": "44444444",
"country": "US"
}
]
}
```
and now we update our template with the following
```html
<table class="table" *ngIf="users.length">
```
Now let us populate the table with the user list that we have. In order to do that, we are going to utilise the *ngFor directive, which allows us to iterate through the array to display information. To do that we need to update our template with the following:
```html
<tbody>
<tr *ngFor='let user of users'>
<td>{{ user.fullName }}</td>
<td>{{ user.email }}</td>
<td>{{ user.phone }}</td>
<td>{{ user.country }}</td>
</tr>
</tbody>
```
A component listens to user actions via event binding. Event binding allows us to bind an event, like a click or hover, to a method in our component.
We will be updating our code to show and hide the users' phone numbers based on a button click event. To accomplish this we need to update the component class and the template as follows.
We start by updating our class:
```tsx
showNumber:boolean = false;
showPhone(): void {
this.showNumber = !this.showNumber;
};
```
And then our template
```html
<div class="col-md-6">
<button (click)='showPhone()' class="btn btn-primary btn-sm">
{{showNumber ? 'Hide' : 'Show'}} Phone numbers
</button>
</div>
<!-- We update the td element in our table to the following -->
<td> <span *ngIf='showNumber'>{{ user.phone }}</span></td>
```
The next step is for us to enable two-way binding by adding the filter option on our table. To do that we need to utilise the FormsModule that Angular provides. We don't have that module in our current application, so we will start by adding it. Inside our app.module.ts we need to add the following:
```tsx
import { FormsModule } from '@angular/forms';
imports: [
BrowserModule,
AppRoutingModule,
FormsModule
],
```
Then in the user-list component we need to update our class to the following:
```tsx
listFilter: string = '';
```
And then we need to update our template with the following
```html
<div class="col-md-4">
<!-- the ngModel is only available from the FormsModule that angular provides
Its not available anywhere else -->
<input type="text" [(ngModel)]='listFilter' />
</div>
<h4>Filtered by: {{listFilter}}</h4>
```
Now, as we can see, our filtration is not working, since we haven't implemented the logic in our class. To implement that logic we will need to update our component.
One of the main benefits of using TypeScript is that it is strongly typed, and from what we have written so far everything is strongly typed except the user list, which is of type any.
To fix this we need to specify a custom type, which is an interface. An interface is a specification identifying a related set of properties and methods. We will start by creating the interface inside the users folder: we create a new file called user.ts and update it as follows.
```tsx
export interface IUser {
userId: number,
fullName: string,
email: string,
phone: number,
country: string
}
```
Once we have added our interface, we need to update our component class to take advantage of it:
```tsx
// We import the interface
import { IUser } from './user'
// We update the list to take advantage of our interface
users: IUser[] = [
{
"userId": 1,
"fullName": "Mohamad Lawand",
"email": "mohamad@email.com",
"phone": 1231123,
"country": "lebanon"
},
{
"userId": 2,
"fullName": "Richard Feynman",
"email": "richard@email.com",
"phone": 333333,
"country": "US"
},
{
"userId": 3,
"fullName": "Neil Degrasse Tyson",
"email": "neil@email.com",
"phone": 44444444,
"country": "US"
}
];
```
Before completing the filter functionality, we are going to discuss the Angular component lifecycle, and then based on that we will complete the filtration.
A component has a lifecycle managed by Angular:
Angular creates the component ⇒ renders the component ⇒ creates and renders the component's children ⇒ processes any changes to the component

A lifecycle hook is an interface we implement to run code when a component lifecycle event occurs. The 3 main lifecycle hooks we are going to use:
OnInit: perform component initialisation and retrieve data. The best place to do API calls to gather data (we will cover API calls in Part 2)
OnChanges: perform actions after changes to input properties
OnDestroy: perform cleanup
To use a lifecycle interface we need to implement it onto our class like the following
```tsx
// We need to update the import
import { Component, OnInit } from '@angular/core';
// Update our class
export class UserListComponent implements OnInit {
// Add the ngOnInit functionality
ngOnInit(): void {
console.log("Init");
}
```
Now that we have understood the lifecycle, we need to discuss one more thing: getters and setters.
There are 2 ways to define a property in TypeScript. The simple way:
```tsx
name: string = "Mohamad"; // which is an easy and fast way to initialise
```
The advanced way uses a getter and a setter. The main reason to use this is to execute code whenever the variable is read or written:
```tsx
private _name: string = "";
get name(): string {
return this._name;
}
set name(value: string) {
this._name = value;
}
```
Now we can resume the implementation of our filtration functionality by updating our component class to the following
```tsx
private _listFilter: string = '';
get listFilter(): string {
return this._listFilter;
}
set listFilter(value: string) {
this._listFilter = value;
this.filteredUsers = this.performFilter(value);
}
filteredUsers: IUser[] = [];
ngOnInit(): void {
this.listFilter = '';
};
performFilter(filterBy: string): IUser[] {
filterBy = filterBy.toLowerCase();
return this.users.filter((user: IUser) =>
user.fullName.toLowerCase().includes(filterBy));
}
```
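If you want to sanity-check the filter logic outside of Angular first, the same method works as a standalone TypeScript sketch (using two of the users from above):

```typescript
// Standalone version of the component's filter logic, kept
// self-contained: the interface and sample data are repeated here.
interface IUser {
  userId: number;
  fullName: string;
  email: string;
  phone: number;
  country: string;
}

const sampleUsers: IUser[] = [
  { userId: 1, fullName: "Mohamad Lawand", email: "mohamad@email.com", phone: 1231123, country: "lebanon" },
  { userId: 2, fullName: "Richard Feynman", email: "richard@email.com", phone: 333333, country: "US" },
];

function performFilter(filterBy: string): IUser[] {
  filterBy = filterBy.toLowerCase();
  return sampleUsers.filter((user: IUser) =>
    user.fullName.toLowerCase().includes(filterBy));
}

const filtered = performFilter("fey");
console.log(filtered.map(u => u.fullName)); // [ 'Richard Feynman' ]
```

The case-insensitive `includes` check is why typing "fey" in the input matches "Richard Feynman" in the running app.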
Finally, we need to update our template to utilise the filteredUsers array instead of the users list:
```html
<tr *ngFor='let user of filteredUsers'>
```
Thank you for reading. Part 2 will be released in the upcoming week. | moe23 |
662,090 | Cross-platform building instructions, using GitHub workflows (for webview/webview) | I adapted from tauri-action, but for Golang. patarapolw /... | 0 | 2021-04-11T14:45:43 | https://dev.to/patarapolw/cross-platform-building-instructions-using-github-workflows-for-webview-webview-5e05 | github, desktop, go, webdev | I adapted from [tauri-action](https://github.com/tauri-apps/tauri-action#creating-a-release-and-uploading-the-tauri-bundles), but [for Golang](https://github.com/patarapolw/go-webview-launcher/blob/main/.github/workflows/go.yml).
{% github patarapolw/go-webview-launcher %}
Why GitHub Actions? Because xgo not only doesn't always work or is outdated (it failed with gojieba), but `webkit2gtk-4.0` is also required in addition to xgo.
I need to do this because [Neutralino.js](https://github.com/neutralinojs/neutralinojs) documentation is currently broken, [webview/webview](https://github.com/webview/webview) documentation sucks, and I don't know how to maximize in [Tauri](https://github.com/tauri-apps/tauri).
Otherwise, [the guidance on security from Tauri](https://tauri.studio/en/docs/usage/patterns/about-patterns) is quite understandable. | patarapolw |
662,123 | Angular as your first JavaScript framework? | Does the following sound relatable? I started coding not that long ago. I am going through... | 0 | 2021-04-11T16:14:22 | https://dev.to/florianbracke/angular-as-your-first-javascript-framework-3gg3 | angular, beginners |
### Does the following sound relatable?
I started coding not that long ago.
I am going through the four horsemen of Web Development
(HTML, CSS, JS & PHP).
I want to achieve more so now it is time for me to invest some effort into a nice and exciting framework!
**Check? Keep reading!**
___________________________________
> ->Be me
> ->Bad version of columbus the framework explorer
> ->Dive towards first framework in sight
> ->Dive deeper than expected
> ->No mermaids
> ->No backing out now
> ->Play 'Oxygen Not Included'
> ->Should play 'Subnautica' instead
> ->Wrong again
> ->Should stick to coding, really
> ->To late now
So here you are, doing your framework research. Good job!
I recently went through my first "big" project.
A 'tinder meets dogwalking'-app. It was fun and I am fairly happy with the result. In the process of coding this thing I did stumble upon some findings. Findings that I like to tell you in a short summary.
**Synopsis**
As a beginner I wish I learned React or Vue instead.
I had one month for the project and spent almost two weeks simply understanding Angular and getting started with the basics. It's darn hard.
Truth be told, I might not be the best coder but odds are that I am at least close to the average Joe, and odds are that you are as well.
Still, Angular is quite amazing to work with and is definitely worth your time but I think the benefits of the framework are for those who already have a more advanced understanding of code.
___________________________________
## Angular,
a pretty impressive framework.
___________________________________
**Two-way Binding**
Angular is a Single Page Application, so it is all about updating the view with components.
Two-way binding gives components in your application a way to share data. The app listens to an event and updates your data simultaneously. The page never gets reloaded, only updated. It makes the app super fast. Big benefit apparently, but is that extra half second of speed important to you and your coding project?
___________________________________
**MVC**
Angular provides MVC architecture, which auto updates 'imports' and is generally speaking "plug-and-play". Not being familiar with the concept is not a problem in Angular since the framework guides you through it. To use Angular is to use its architecture.
I personally learned a lot about MVC, just from working with Angular.
___________________________________
**Angular.io**
I got the best information about the framework on Angular.io.
For me it was the first time that I learned something without YouTube.
This is both a pro and a con. The documentation is solid but it takes a while to digest of course.
There is a "heroes" tutorial provided on angular.io that is splendid and takes you over the basics.
For me it was not enough to completely understand Angular, but maybe it works like a charm for you. Information was exponentially harder to find when I had a problem unrelated to, or beyond the scope of, the tutorial.
But really, that tutorial... wow!
___________________________________
**TypeScript**
Angular works with TypeScript. Although not the hardest, it can offer some extra resistance. I think it is definitely not a bad thing, but it does elevate that already steep learning curve some more.
I suggest a question to consider: would you prefer learning React-JsX over Angular-Typescript?
___________________________________
**Angular Material**
Like most common frameworks, Angular has a build in way (after some installs) of styling:
'Angular Material – A Comprehensive and Modern UI'.
It is very neat and allows you to develop extremely fast but I missed some documentation, especially on the forms part which to me sounds crucial.
___________________________________
**Tests**
All code in Angular is required to go through a series of tests. This convenience allows you to develop and test everything at the same time. Combined with the power of TypeScript, you automatically know what went wrong and where it happened. They have a very smooth error system. Every time something is wrong, even in different files, the issue just gets underlined in red and it saves you a whole bunch of time! I think this is one of my favorite things about Angular, it gives you a comfortable space to test things out and see if they work.
___________________________________
**Second conclusion**
So overall I really enjoyed working with the framework! It was just a "female dog" to get started with and perhaps my life would have been easier if I started out with React or Vue. All three frameworks have a lot in common. The differences seem minor to me (a beginner), so I recommend other beginners to start with an easier one to learn!
___________________________________
[^1]: As a beginner I am prone to making mistakes. Any comments and suggestions on this article are welcome! :) | florianbracke |
662,156 | Home widgets in iOS | Introduction During the first couple of days of this last week, we had hack days in my job... | 0 | 2021-04-11T16:37:21 | https://dev.to/fmo91/home-widgets-in-ios-o7o | ios, widgetkit, swiftui, swift | # Introduction
During the first couple of days of this last week, we had hack days at my job. Everyone in the team could work on whatever they preferred, and we'd then integrate the results into the project if everyone agreed on the positive outcome we'd get from them.
I love it (and my teammates too), so I chose to create a home Widget for the app.
For obvious reasons I can't tell you what is the app I'm working on, but I can tell you that's an important newspaper, so the widget was intended to show the top articles for the current day.
In this article, I won't cover a step-by-step tutorial on how you should create a home widget, but I will tell you what I liked of WidgetKit and what I suffered, since I'd never worked on it before, and link to useful tutorials in the process.
# Home Widgets?
[`WidgetKit`](https://developer.apple.com/widgets/) is a framework Apple released in WWDC 2020 and that allows developers to create widgets for their apps which their users could then add to their home screens.

You can define as many widgets as you'd like for your app in three different sizes: small/medium/large.

# The good parts
I liked working in WidgetKit so far, so I'll start by telling you the good things I've seen about it.
## SwiftUI
Widgets are created using SwiftUI. If you'd like to start learning and experimenting using SwiftUI, then WidgetKit is a good place to do it.
You can define your widgets using SwiftUI, the experience is nice and the framework overall feels nice and natural to do it using that technology.
Previews are also your best friends in the process.
Widgets are pretty limited (I'll expand on this later), so you won't be using state management and the other parts of the job that could be interesting. You'll be creating static views, updating them as needed, and linking the user to the app using deep links. So most of the work you do while creating widgets is static view design. A nice place to start learning SwiftUI, I repeat.
## Documentation
Or maybe not [the docs](https://developer.apple.com/documentation/widgetkit/) themselves which aren't exactly great, but the WWDC videos. I really enjoyed them.
Here are my favorites:
- The code-along sessions [part 1](https://developer.apple.com/videos/play/wwdc2020/10034/), [part 2](https://developer.apple.com/videos/play/wwdc2020/10035) and [part 3](https://developer.apple.com/videos/play/wwdc2020/10036): they are a great place to learn and practice developing widgets. You can download an initial project and you can see how the code is evolving on each step. These videos are short (between 9 and 15 minutes) and pure gold to start with.
- [Build SwiftUI views for widgets](https://developer.apple.com/videos/play/wwdc2020/10033): tips and tricks on how to create optimized and good looking SwiftUI views that you can integrate into your widgets.
- [Add configuration and intelligence to your widgets](https://developer.apple.com/videos/play/wwdc2020/10194/): Sirikit based intents can be used to let the user configure their widgets. I haven't done that, but it's possible. This session explains how you should do it.
## It's easy!
If you compare the benefits vs. effort, I think adding a widget to your app is a no brainer. You can create a good looking widget in just a couple of hours. You can of course use more time to improve it, and add configuration and different sizes, and it can take a couple of days (or maaaybe weeks?) but it isn't something that will add a big amount of complexity to your project.
# The bad parts
## They are static views
Widgets consist of static SwiftUI views that can't be animated, scrolled, or interacted with.
While creating the widget, I opened and compiled the Android version of the project. It had a scroll view and a button to refresh the widget. That was just impossible to do in iOS.
The iOS widget can only use `Link` as a way of interaction. Whenever the user taps on your widget, the app opens, and you can then route the user to different parts of the app using deep links.
In the case of the widget I developed, the app opens in an article detail view.
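As a rough sketch of what that looks like (the `myapp://` deep-link scheme and the view below are hypothetical, not the actual app's code):

```swift
import SwiftUI
import WidgetKit

// Hypothetical widget view: the headline is wrapped in a Link,
// so tapping it opens the app at a deep-linked article.
// Note: Link works in medium/large widgets; for the small size
// you attach .widgetURL(_:) to the view instead.
struct TopArticleWidgetView: View {
    let headline: String
    let articleID: Int

    var body: some View {
        Link(destination: URL(string: "myapp://article/\(articleID)")!) {
            Text(headline)
                .font(.headline)
                .padding()
        }
    }
}
```

The app then receives the URL and decides where to navigate.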
## Sharing code
There are several ways of sharing your main components from your app with your widget. Creating a module with the shared core code is a great idea, considering that there are many app extensions your app can support.
However, if your app isn't architected in such a way, this could be a pain point.
# To sum up
Having built a widget (and I'm sure there are a lot of things I still don't know), I think creating widgets is a great experience if you haven't done much SwiftUI yet, and it's a nice addition to your app.
Here is my advice for working with widgets.
- Start by splitting your app's core functionality (networking, persistence, etc.) into a separate "core" module using Swift Package Manager, Carthage, CocoaPods, or just a dynamic framework in Xcode. This will allow you to include and use your app's functionality in your widget target in a clean way. Of course you can just add the files to both targets, and it will work, but it just doesn't look clean at all.
- Don't be afraid of the time it will take you in order to create a widget, it won't be so long and you'll end with a nice modern feature your user will enjoy.
- Take time to learn and watch the WWDC videos. Widgets are a new feature, so it makes sense to spend some time learning on the experiences it will enable you to create.
- Add analytics to the `openURL` function in your `AppDelegate`, so that whenever the app opens from your widget, you can register that event. | fmo91 |
662,307 | SCSS make life more easier . | What Scss ? Scss is Sassy Cascading Style Sheets. It wraps the CSS to allow you to use fun... | 0 | 2021-04-11T20:03:24 | https://dev.to/fatimaalmashhor/scss-make-life-more-easier-9h8 | scss, javascript, css, html | # What Scss ?
SCSS is Sassy Cascading Style Sheets. It wraps CSS to allow you to use functions, variables, etc., making it more like a programming language such as JavaScript.
Previously, when we styled our projects, we got repeated code and sometimes needed a lot of work to design things.
Then SCSS appeared and made styling cleaner, easier to read, and reusable. I am not here to explain what SCSS is, how to get started, and all those blah blah things, so let's jump into how to use it.
Just Second before we dev deep into the example ?! Would you ever feel **confused between the Scss and Sass**?
Sass is stand from (Syntactically Awesome Style Sheets) ,language that will be compiled into CSS . SassScript is itself a scripting language whereas SCSS is the main syntax for the SASS which builds on top of the existing CSS syntax.SASS has more developer community and support than SCSS
let jump in into the basic syntax
### Variables
Variables are the most useful feature. They let you write a value once and reuse it all over the project, and they help you avoid forgetting the values of colors, font sizes, and even breakpoints.
```
// Colors
$color-primary : #333333;
$color-secondary : #4F4F4F;
$color-orange : #F2994A;
$color-green : #B0C2AC ;
```
### Functions
The second thing that we absolutely expect from a scripting language is functions, which structure our code and remove repetition. In SCSS there are two common ways to do that: one is `@mixin` and the other is `@function`.
So the question is: which one is better? Let me tell you the main differences between them first. Functions are blocks of code that return a single value of any Sass data type.
```
@function pow($base, $exponent) {
$result: 1;
@for $_ from 1 through $exponent {
$result: $result * $base;
}
@return $result;
}
```
And invoke it like this
```
.sidebar {
float: left;
margin-left: pow(4, 3) * 1px;
}
```
A mixin, on the other hand, compiles directly into CSS styles; there is no need to return any value.
For example:
```
@mixin reset-list {
margin: 0;
padding: 0;
list-style: none;
}
@mixin horizontal-list {
@include reset-list;
li {
display: inline-block;
margin: {
left: -2px;
right: 2em;
}
}
}
nav ul {
@include horizontal-list;
}
```
By using the `@include`
### Import
Sometimes we need to split the code into multiple files and then load some of them into other SCSS files. SCSS makes this possible with statements like `@import` and `@use`.
The main difference is how they handle members. `@import` makes everything globally accessible in the target file. The Sass team discourages continued use of the `@import` rule because it allows names to overlap and makes it difficult to trace back why your perfectly good CSS breaks.
Like `@import`, the `@use` rule enables us to break our stylesheet into smaller, more practical sections and load them inside other stylesheets. The key difference is how you access the original files' members.
You can access variables, functions, and mixins from another module by writing `<namespace>.<variable>`:
```
// src/_corners.scss
$radius: 3px;
@mixin rounded {
border-radius: $radius;
}
```
```
// style.scss
@use "src/corners";
.button {
@include corners.rounded;
padding: 5px + corners.$radius;
}
```
### Extend
One more thing I'd like to add here is `@extend`. Use it when one class should have all the styles of another class, as well as its own specific styles.
```
.error {
border: 1px #f00;
background-color: #fdd;
&--serious {
@extend .error;
border-width: 3px;
}
}
```
After compiling, it will look like this:
```
.error, .error--serious {
border: 1px #f00;
background-color: #fdd;
}
.error--serious {
border-width: 3px;
}
```
I will keep updating this post until it covers the most helpful features in SCSS. I HOPE YOU ENJOY IT!
| fatimaalmashhor |
662,313 | Jumble Solver multiple Words For Any Puzzle | Utilizing The Word Jumble Solver to Solve Word Jumbles Many people are unaware that Jumble Solvers a... | 0 | 2021-04-11T20:25:43 | https://dev.to/robert29105427/jumble-solver-multiple-words-for-any-puzzle-3cbb | gamedev | Utilizing The Word Jumble Solver to Solve Word Jumbles
Many people are unaware that Jumble Solvers actually exist. Jumble Solver can unjumble a set of characters, revealing possible words that could be produced from them. Our Jumble Solver will be able to carry out rapid dictionary research in order to do this.
Wildcard characters and blank character tiles are also accepted. All you need to do is type a * in place of a character, and the Jumble Solver knows to swap it, trying every single letter of the alphabet instead.
Based on the style of word game that you're participating in or attempting to resolve, the Jumble Solver can be quite an excellent benefit. Word jumbles and phrase anagrams are perfect.
Just How Do I Utilize Word Jumble Solver?
You will make use of the jumble solver to solve word jumble games such as those utilized in newspapers. Simply enter each letter you have available into the box and hit enter. Your jumble solver can easily develop a list of all of the terms that may be formed making use of these letters.
Just what technique will the word jumble solver make use of to organize words?
All the words for your jumble solver will be ordered in order of word count. The phrases that have the longest lengths are going to be demonstrated first.
The Jumble Solver operates on your mobile phone in a non-public and very discreet fashion. You will see that the screen may change instantly to show the best layout to match your device as well as look excellent on phones and tablets.
The application starts up quickly and needs hardly any bandwidth to operate and is incredibly simple to use. The application is backed by promotions however these are not intrusive and nonetheless leave the Jumble Solver as being the Internet's speediest Jumble Solver.
It will resolve any characters which you type in solving all of them into words and phrases, which makes it a great tool for an extremely number of word games. You must have the ability to find the word that you need as it has an extensive dictionary to call upon. Regardless of how many word games which you submit nor how complex they are, the Jumble Solver will be capable of enabling you to out of a difficult position.
There is a reason why this isn't a multi-word Jumble Solver: the complexity of sorting the answers into a manageable format prevents it. So know that the software depends upon what you put into it. The more you type in, the more it'll return. This makes it well suited for word scramble puzzles, but it is extremely useful when used in other sorts of word puzzles too.
So long as you tend to be smart with the number of letters that you enter, the word-finding algos are very speedy. The anagram resolver consists of the exact same algos.
Multi-level jumble puzzles have grown to be very popular, in which they resolve an expression as opposed to just an individual word. As you can imagine this specific contributes a bit of complexity since you need to solve these types of brain teasers as well as solve the disorderly sentences. To unscramble your letters, simply mix up some disorderly characters.
Is this particularly suitable for a Scrabble game?
Indeed, this word jumble solver is a great scrabble word finder (and text twist). Jumble Cheat, yay! The easy word creator engine takes on the same performance. With regard to blank tiles, make use of the * as a wild card. Anagrams can even be resolved using our term un-scrambling method (exactly the same process anyone can make use of to be able to resolve jumbled words). In this specific website, we have a specific scrabble resolver with scrabble points (with regard to word values).
<a href="https://thejumblesolver.net/">Jumble Solver multiple Words</a> tool can be found at thejumblesolver.net
| robert29105427 |
662,316 | How to get started with programming | If you found value in this thread you will most likely enjoy my tweets too so make sure you follow me... | 0 | 2021-04-15T12:49:14 | https://blog.vlddev.live/how-to-get-started-with-programming | beginners, programming, webdev | _If you found value in this thread you will most likely enjoy my tweets too so make sure you follow me on [Twitter](https://twitter.com/VladPasca5) for more information about web development and how to improve as a developer. This article was first published on my [Blog](https://vladpasca.hashnode.dev/)_
### 1. Do some research
First, you need to know what you want to do and go from there
Research fields like web development, machine learning, and game development
Watch some videos or read articles and see which one you would like to work in
### 2. Choose your first programming language
Now that you know what field you want to get into it's the time to choose your first programming language
If you choose Web Development learn HTML, CSS and JavaScript
If you choose Machine Learning learn Python
### 3. Choose your resources
You need to take your time with this
Do some research and find the best resources for the programming language you want to learn
Then choose video courses, docs, interactive lessons and see which one works the best for you
### 4. Join a community
This is one of the most important steps
But why should you join a community of developers?
-You'll stay more motivated
-You'll help and get help from others
-You'll get more job opportunities
### 5. Use coding platforms like Codewars
After you learned the basics in the programming language you choose it's important to practice that syntax
The best way to do it is by doing coding problems on websites like Codewars
Don't focus on writing "smart" code, just practice
### 6. Start Building projects
Now that you have more experience you should be able to build some small projects
Start with something simple and improve every day
There is no shame if it's bad, as long as you built it on your own it's great
### 7. Contribute to open source
This is the step that will prepare you for jobs
Why?
Because at a job you will code on top of existing projects and this is exactly what you do when contributing to an open-source project
Start with an easy project and go from there
### 8. Keep learning
From here you should already know what path you want to take and what technologies you want to learn next
Keep learning those
Build projects
Contribute to open source
When you feel confident in your skills go to the next step
### 9. Build a portfolio and apply for jobs
Now it's the time to build a portfolio and showcase all your projects and open source contributions you made
After you built it start applying for jobs
Use LinkedIn, Twitter, or other communities you joined in the past
### The end
_I hope found this useful and if you did please let me know. If you have any question feel free to DM me on [Twitter](https://twitter.com/VladPasca5) ._ | pascavld |
662,491 | Ultimate Reference on Javascript Functions 2021 | Functions are one of the most important concepts in programming, and Javascript gives functions first... | 0 | 2021-04-12T01:50:04 | https://dev.to/alexmercedcoder/ultimate-reference-on-javascript-functions-2021-40a5 | javascript | Functions are one of the most important concepts in programming, and Javascript gives functions first-class support meaning there is a lot to learn but a lot of great ways to use functions in javascript. This article is a reference on functions in javascript. Enjoy.
## What is Function?
Think of functions like a wizard has spells. Whenever a wizard wants to conjure some creature he looks in his book of spells and casts a spell. Another analogy is a Chef with their book of recipes.
Whether you are a chef or a wizard, you must write down your spell/recipe before you can use it; this is referred to as defining your function.
```js
function wizardSpell (){
// what the spell does
}
```
The code above is one of three ways we can write down our spell/recipe, also known as defining our function. Once our function is defined we can use it anytime we want like so.
```js
wizardSpell()
```
So to cast our spell, cook our recipe and invoke our function we write the function's name with a parenthesis after it. (If there is no parenthesis then you are not using the function, but just referring to the function itself).
To notice the difference between invoking the function, and the value of the function, try this.
```js
function returnOne(){
//this function will give you the number one
return 1
}
// logging the result of invoking the function
console.log("Invoking: ", returnOne())
// logging the value of the function
console.log("Function Value ", returnOne)
```
## Function Declarations
As I mentioned the syntax above was one of two main ways we can define our function. The method above is a function declaration. Just as a refresher...
```js
// defining our function with a function declaration
function someFunction(){
}
// invoking the function we defined
someFunction()
```
Function declarations are hoisted, which means the javascript engine before executing any code will scour your code for all function declarations and read them into memory. This means you can invoke a function in a line prior to its declaration. For example, the following code is confusing, but works.
```js
// invoking that is defined later on
someFunction()
// defining our function with a function declaration
function someFunction(){
}
```
This is certainly more confusing and having all possible functions loaded into the global space can also hamper performance, so most modern Javascript development has moved towards function expressions.
## Function Expressions
Function expressions take advantage of the fact that functions have first-class support in javascript, which means a function is a value that can be used in any way any other datatype can be used.
- Functions can be assigned to variables, stored in arrays, or be the value of object properties
- Functions can be passed as an argument to other functions
- Function can be returned by functions
So instead of declaring a function, a function expression defines a variable in which a function is stored. Variable assignments are not hoisted, so invocation must occur after the definition, which avoids the memory pollution of function declarations.
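That non-hoisting is easy to demonstrate. A quick sketch (the function name `tooEarly` is just for illustration; the early call is wrapped in try/catch so the script keeps running):

```js
try {
  // the const binding is not initialized yet at this point,
  // so calling it here throws a ReferenceError
  tooEarly()
} catch (err) {
  console.log(err.name) // "ReferenceError"
}

const tooEarly = function () {
  return 'too early'
}

console.log(tooEarly()) // "too early" (fine after the definition)
```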
#### Ways to write function expressions
1. Named function stored in a variable
```js
// define the function via function expression
const someFunction = function funcName(){
}
// invoke the function
someFunction()
```
2. Function expression using an anonymous function (has no name) with the function keyword
```js
// define the function via function expression
const someFunction = function(){
}
// invoke the function
someFunction()
```
3. Function expression using an anonymous function (has no name) using arrow functions
```js
// define the function via function expression
const someFunction = () => {
}
// invoke the function
someFunction()
```
#### Parameters & Arguments
Functions become really powerful when you can pass in data to customize what happens each time you invoke a function. Parameters and arguments allow us to do just this. Parameters allow us to define a placeholder for data that will be passed in when the function is invoked. Arguments are the data that is passed in when the function is invoked/called.
```js
// cheese and bread are parameter, acting as a placeholder for data we don't have yet
const someFunction = function(cheese, bread){
console.log(cheese)
console.log(bread)
}
// we will pass the string "gouda" as the first argument which gets stored in cheese as the function runs, we also pass "rye" as the second argument which gets stored as bread during the run.
someFunction("gouda", "rye")
```
## Functions Return Values
Think of a function as a task given to a butler. Usually, a task involves the butler getting something and bringing it back. In the function world, this is called a return value.
The benefit of a return value...
- can be assigned to a variable
- can be used in expressions
- can be passed as arguments to other functions (callbacks)
Try out the below to see the difference
```js
// function that logs instead of returning a value, kind of like a butler showing the bottle of wine you asked for but never bringing it to you.
const noReturn = () => {
console.log("Hello World")
}
const result1 = noReturn() //no return value, so the variable gets nothing
console.log(result1) // undefined is logged, since the variable has no value
//////////////////////////////////
//////////////////////////////////
// function that returns a value, this is like the wine being brought and placed in your hands
const returnSomething = () => {
return "Hello World"
}
const result2 = returnSomething() // the variable will hold the return value of "Hello World"
console.log(result2) // this will log "Hello World"
```
## Cool Function Tricks
#### Parameter Default Values
```js
// we assign 4 & 6 as default value to x & y
const someFunction = (x = 4, y = 6) => {
return x + y
}
console.log(someFunction()) // log 10
console.log(someFunction(2,2)) // log 4
```
#### Variable Number of Arguments
There are two ways of doing this. In a function defined with the `function` keyword, there is a magical iterable `arguments` object you can access; you can then use a for-of loop to loop over it or use the spread operator to turn it into an array.
```js
const someFunction = function(){
// log the arguments object
console.log(arguments)
// loop over the arguments object
  for (const arg of arguments){
console.log(arg)
}
// turn it into a proper array
const argArray = [...arguments]
}
someFunction(1,2,3,4,5,6,7)
```
The more explicit way, which works with all methods of defining functions, is using the rest operator to capture all remaining arguments in an array.
```js
// function that adds up all the numbers
const someFunction = (x, y, ...args) => {
// add the first two arguments
let sum = x + y
// add in the remaining arguments
  for (const num of args){
sum += num
}
return sum
}
console.log(someFunction(1,2,3,4,5,6,7,8))
```
#### Closure
Each function has its own scope, and if you define a function inside of a function, the inner function has access to the parent function's scope. This can be an interesting way of hiding data, which is key to how React Hooks work. Examine the example below.
```js
const parentFunction = (startingValue) => {
  // creating a variable with an initial value (let, not const, since setValue reassigns it)
  let value = startingValue
// define a function that returns the value
const getValue = () => { return value }
// define a function that alters the value
const setValue = (newValue) => { value = newValue }
// return both functions in an array
return [getValue, setValue]
}
// destructure the return value of the parent function
const [getValue, setValue] = parentFunction(1)
console.log(getValue()) // logs 1
setValue(2)
console.log(getValue()) // logs 2
```
In this example, getValue and setValue still have access to parentFunction's scope after it has returned, since they were defined inside of it.
#### Currying
This is breaking up a function that needs multiple arguments into a chain of functions taking advantage of closure.
Let's curry this function.
```js
const addAndMultiply = (x, y, z) => {
  return (x + y) * z
}
console.log(addAndMultiply(2,3,4)) // (2+3)*4 = 20
```
Granted, this example is simple enough that it probably doesn't need to be curried, but to illustrate how it would work...
```js
const addAndMultiply = (x) => (y) => (z) => {
  return (x + y) * z
}
//invoking the functions back to back
console.log(addAndMultiply(2)(3)(4)) // 20
// doing it step by step
const add = addAndMultiply(2)
const multiply = add(3)
const result = multiply(4)
console.log(result)//20
```
#### Destructuring Arguments
If you know a function will be passed an object or an array as an argument you can use destructuring.
```js
// For Objects
const myFunction = ({name, age}) => {
console.log(name)
console.log(age)
}
myFunction({name: "Alex Merced", age: 35})
```
```js
// For Arrays
const myFunction = ([name, age]) => {
console.log(name)
console.log(age)
}
myFunction(["Alex Merced", 35])
```
#### Arrow Function Shorthand
- If there is only one parameter, no parentheses are needed
- If you plan on returning the value of a single expression, you can exclude the curly brackets; the return keyword will be implied
- If the expression is long you can wrap it in parentheses
```js
const quickFunction = x => x + 1
const longExpression = y => (y + y * y - y * y)
```
## Good Function Design Tips
- A function should not mutate or alter variables outside of its scope
- Anything it needs from outside its scope should be passed in as arguments
- If you need to transform data, have the function return a copy with the transformed data instead of mutating the original
- If you need lots of arguments, use an object; this allows you to give arguments names and also to add new arguments without much refactoring
- Long, complex functions should be broken down into many smaller ones (think currying)
- As you get more comfortable with writing functions, look into memoization, a technique that allows a function to cache previously calculated results to minimize redundant processing.
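To illustrate that last tip, here is a minimal memoization sketch (the `memoize` helper and the JSON.stringify cache key are illustrative choices of mine, suitable only for arguments that serialize cleanly):

```js
const memoize = (fn) => {
  const cache = new Map()
  return (...args) => {
    // build a cache key from the arguments
    const key = JSON.stringify(args)
    if (!cache.has(key)) {
      cache.set(key, fn(...args))
    }
    return cache.get(key)
  }
}

// count how many times the underlying function actually runs
let calls = 0
const slowSquare = (n) => {
  calls += 1
  return n * n
}
const fastSquare = memoize(slowSquare)

console.log(fastSquare(9)) // 81, computed (calls is 1)
console.log(fastSquare(9)) // 81, served from the cache (calls is still 1)
```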
| alexmercedcoder |
662,820 | Starting Block chain as a beginner | i am beginner in block chain kindly guide me how to start my career in block chain. | 0 | 2021-04-12T09:25:12 | https://dev.to/bilal69621/starting-block-chain-as-a-beginner-4nj4 | i am beginner in block chain kindly guide me how to start my career in block chain.
| bilal69621 | |
662,907 | Repeated Capturing Group | Regular expression can be used to check a string or a pattern is repeated in a string. For example, i... | 0 | 2020-04-04T00:00:00 | https://dev.to/ethanzxlee/repeated-capturing-group-5f5e | java, regex, code, note | ---
title: 'Repeated Capturing Group'
date: '2020-04-04'
author: 'Ethan Lee'
tags: 'java, regex, code, note'
published: true
---
Regular expressions can be used to check whether a string or a pattern is repeated in a string. For example, if you want to check if the string 'abc' is repeated exactly 3 times in a string, you can use the following regex: `(abc)\1{2}`, or it would look like this in Java after adding the escape characters:
```java
Pattern.compile("(abc)\\1{2}");
```
The `\1` in the regex matches the first capturing group in the regex. If you want it to match the second capturing group, you can use `\2` and so on.
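Backreferences use the same syntax in JavaScript regular expressions, so as a quick aside, here is a runnable sketch of the same idea in JS (in Java, `pattern.matcher(input).matches()` behaves analogously for a full-string match; the anchors `^` and `$` below are added to force an exact count):

```javascript
// ^(abc)\1{2}$ captures "abc", then requires the captured text two more times
const threeTimes = /^(abc)\1{2}$/

console.log(threeTimes.test('abcabcabc'))    // true  (exactly three repetitions)
console.log(threeTimes.test('abcabc'))       // false (only two)
console.log(threeTimes.test('abcabcabcabc')) // false (four)
```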
It is also possible to check if a capturing group is repeated at least `n` times or fewer than `n` times. For example,
- to check if `abc` is repeated in a string for at least 5 times, `(abc)\1{4,}`
- to check if `abc` is repeated in a string fewer than 5 times, `(abc)\1{0,3}` | ethanzxlee
662,913 | Create Video And Photo Thumbnail Images With Two Parameters | If you have ever tried to create thumbnail images on your computer, then you probably know that the p... | 0 | 2021-04-12T10:01:08 | https://dev.to/georgenevis404/create-video-and-photo-thumbnail-images-with-two-parameters-4m5c | gawdo, gawdocom | If you have ever tried to <a class="c4" href="https://www.google.com/url?q=https://gawdo.com/products/create-thumbnail-images?_pos%3D1%26_sid%3D2d3bba0d5%26_ss%3Dr&sa=D&source=editors&ust=1618225203940000&usg=AOvVaw1lFS2SdyVRGbVddGXz_4FB">create thumbnail images</a> on your computer, then you probably know that the process is not a very pleasant one. It is something that is much easier if you have already a fairly decent photo collection to work with. There are several ways that you can go about thawing your images if you would like to create a nicer album.
The first thing that we will discuss is the method of having a digital camera and trying to snap a photo of a real live creature. This might sound difficult, but there are several digital cameras that are designed specifically for taking such pictures. For example, there is a brand called Nikon that has developed a camera called the Nimage series. These series of digital cameras were created in response to the need of someone trying to create larger than life looking photos of things such as wildlife. There is actually a whole series of digital cameras based on the Nimage series. When you go looking for a way to create thumbnail images, this may be the easiest option.
Another way to create thumbnail images is to use an editing program such as Photoshop. The great thing about Photoshop is that there are lots of different filters that you can apply to your images. This allows you to create a variety of different looks, both bright and dark. You can also use an image mask tool to make sure that only certain portions of your image are visible. You can also use a variety of filters with watercolors and oils.
If you do not want to mess with any digital equipment, you can also create your own video thumbnail. Video is one of those things that tends to retain its images much better than text or images. It can take some practice to master the creation of these videos, but it is not impossible. There are software programs available that make it very easy to create professional looking video thumbnails.
A simple way to create thumbnail images is to save the images you want to display as a JPEG file. When you save an image as a JPEG it will be stored on your computer in the JPEG folder. You can then open the file in an imaging software program to create the perfect image next time.
One other method of saving the image next to your article is to use the "crop" tool in Photoshop. To crop an image, simply move the mouse cursor over the image and choose "cropped." You will then see a new window appear with the image size being the original size for that portion of the page. With the mouse, you will choose the "scale" option and move the cursor to the right. This will make the image next to your article larger in size.
If you would rather save the JPEG with only one parameter, you can pass two parameters to Photoshop. Click on "Tools" and then choose" presets," and then click "OK." In the "Settings" dialog box, choose "Original Images" and click the "load" button. Two new thumbnails will be shown in the Photoshop Window. Use the "shape" pull down to adjust the two parameters and then click on "OK."
Creating videos and photos that are in JPEG format is quite easy when using one of the two parameters described above. These parameters are useful if you are looking for a quick and easy way to create a great video or photo thumbnail. They can also be used with social media sites like Facebook and MySpace. If you take a lot of digital photos and video, you may find that these two methods are very useful. Try both of these methods and create the perfect thumbnail image for your social media profiles.
| georgenevis404 |
662,967 | Real world example of compose function and currying. | Another currying article Using Javascript, you can decide to write your code based on FP o... | 0 | 2021-04-22T18:00:09 | https://dev.to/pegahsafaie/real-world-example-of-compose-function-and-currying-3ofl | functional, currying, composition, javascript |
## Another currying article
Using Javascript, you can decide to write your code based on FP or OOP principles. When you decide on FP, there are some concepts you need to understand in order to make the most out of FP principles. These include concepts like currying and compose functions. For me it took a while to understand what **currying** is and **when** and **how** I should use it in my code. Here, I have tried to explain what I found in a simple way, hoping to make the learning process quicker and smoother for you.
- [When to use compose functions?](#when)
- [How to use compose functions?](#how)
- [How to enhance compose functions using currying?](#how-2)
- [Homework](#homework)
- [Your opinion](#opinion)
## <a name="when"></a>When should we use compose functions in our code?
We want to model the following ice cream production line using JavaScript functions.

We see a sequence of 3 actions following one another:
- **Mix** the ice cream with something like 🍓, 🍒 and 🍇.
- **Decorate** the ice cream with something like 🍫.
- **Form** the ice cream scoops.
All actions take ice cream as input, modify it with some settings (berries or chocolate), and send the modified ice cream to the output to be used by the next function.
Here are the atomic functions for each action.
```javascript
function mix(ice, tastes) {
  return tastes.join(', ') + ' ' + ice;
}
function decorate(ice, taste) {
  return ice + ' decorated with ' + taste;
}
function form(ice) {
return 'scooped ' + ice;
}
```
For a berry ice cream with chocolate topping, you might write:
```javascript
form(decorate(mix('ice cream', ['🍓', '🍒', '🍇']), '🍫'))
// output: "scooped 🍓, 🍒, 🍇 ice cream decorated with 🍫"
```
I'm sure you've seen this pattern in your code:
Modifying a single data (ice cream) by a couple of operations to create the desired outcome (scooped berry ice cream with chocolate).
But this way of writing function sequences is not quite nice. The brackets are too many, and the execution order is from right to left.
To write it better, we can use the **Composition Function** concept in math:
> Having
> - f: x -> y
> - g: y -> z
>
> we can create a third function which receives a single input(x) and creates an output(z)
> - h: x -> z
>
> it looks like
> - h(x) = g(f(x))
## <a name="how"></a>3 steps to write a better function sequence using the composition function in JS
**1. Create a new compose function**
For me the simplest compose function would be a wrapper function, which receives all required inputs and returns the results of the function sequence execution.

```javascript
const compose = (ice, tastes, decorateTaste) =>
form(decorate(mix(ice, tastes), decorateTaste));
// call compose
compose('ice cream', ['🍓', '🍒', '🍇'], '🍫');
// output: "scooped 🍓, 🍒, 🍇 ice cream decorated with 🍫"
```
**2. Reduce the compose function's input parameters**
The compose function should take only one single input. This is the data that gets modified through the function sequence and comes out as output. In our example the ice cream is this data.
It matters to keep the compose function unary because, when calling the compose function, we only want to focus on the data that is sent to it and not care about the setting parameters.

As you see in the above picture, each action (mix, decorate) can be customized by its corresponding setting parameters (berries and chocolate):
```javascript
// Customized version of mix function using berries
const mixWithBerries = ice => mix(ice, ['🍓', '🍒', '🍇']);
// Customized version of decorate function using chocolate
const decorateWithChoclate = ice => decorate(ice, '🍫');
// Compose function accepts just one single input
const compose = (ice) => form(decorateWithChoclate(mixWithBerries(ice)));

// Call compose. Looks nicer!
compose('ice cream');
```
**3. A more elegant generic way of creating compose functions**
In this section we write a compose function **generator**. Why? Because it is more convenient to use a compose function generator rather than to write a compose function every time if you use compose functions a lot.
> You can skip this section if you want to use composeGenerator function available in lodash/fp and ramda libraries.
We also implement our compose function generator in a more elegant fashion than our previous implementation of compose function, where we still have a lot of brackets and the execution order is still from right to left.
Then compose function generator is a function that takes a series of functions(fn1, fn2, ..., fnN) as input parameters and returns a new function(compose). The returned compose function receives data and executes functions(fn1, fn2, ..., fnN) in a given order.

That looks like this:
```javascript
const composeGenerator = (fn1, fn2, fn3) => data => fn1(fn2(fn3(data)))
// create compose function using composeGenerator
const compose = composeGenerator(form, decorateWithChoclate, mixWithBerries)
compose('ice cream')
// or
composeGenerator(form, decorateWithChoclate, mixWithBerries)('ice cream')
```
The double arrow in the code above indicates a function `composeGenerator(fn1, fn2, fn3)` which returns another function `compose(data)`.
This implementation of composeGenerator is limited to 3 functions. We need something more generic to compose as many functions as we want:
```javascript
const composeGenerator = (...fns) => data =>
fns.reduceRight((y, fn) => fn(y), data)
const compose = composeGenerator(form, decorateWithChoclate, mixWithBerries)
compose('ice cream')
// or
composeGenerator(form, decorateWithChoclate, mixWithBerries)('ice cream')
```
It's not easy but at least you define it once, and then you don't have to worry about the complexity anymore. Let's break it down into a group of smaller parts to make it easier to understand.

And here is how `reduceRight` works when we call `composeGenerator` with our pipeline functions.

## <a name="how-2"></a>Enhance your compose function with currying
Our solution to remove the setting parameter from our compose function is not good, since we would have to write a new custom function every time we wish to add a new flavor to our pipeline:
```javascript
// Change the production line to decorate with 🍓
const decorateWithStrawberry = (ice) => decorate(ice, ['🍓']);
composeGenerator(form, decorateWithStrawberry, mixWithChocolate)('ice');
// Change the production line to decorate with 🍓 and 🍫
const decorateWithChocAndStrawberry = (ice) => decorate(ice, ['🍓', '🍫'])
composeGenerator(form, decorateWithChocAndStrawberry, mixWithChocolate)('ice')
```
Our solution is to implement the **curry** function, which accepts the tastes and returns the decorate function with one single argument.
``` javascript
// Currying decorate function
const curriedDecorate = (tastes) => (ice) => decorate(ice, tastes);
// Currying mix function
const curriedMix = (taste) => (ice) => mix(ice, taste);
composeGenerator(
  form,
  curriedDecorate(['🍓', '🍒', '🍇']),
  curriedMix('🍫'))('ice')
```

Like compose functions, we may write our curried functions ourselves or create a generic function that returns a curried version of a function.
> You can skip this section if you want to use curry function available in lodash/fp and ramda libraries.
A curry function receives a function `fn` as input. If the number of arguments passed (`args.length`) is at least equal to the number of arguments `fn` expects (`fn.length`), it executes `fn`; otherwise it returns a partially bound callback.
```javascript
const curry = fn =>
  function curried() {
    const args = Array.prototype.slice.call(arguments)
    return args.length >= fn.length ?
      fn.apply(null, args) :
      curried.bind(null, ...args)
  }
curry(decorate)('ice') //output: a function which just needs the tastes as input
```
When we execute a curried function (like `curriedDecorate`) with all the setting parameters (the tastes), it returns a new function which only needs one data parameter, and we can use it in our compose function.
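Putting currying and the compose generator together, here is a self-contained sketch. Note two assumptions of mine: the step functions are simplified stand-ins that just tag a string, and the "setting" parameter is placed first in each signature so a left-to-right curry can fix it:

```javascript
const composeGenerator = (...fns) => (data) =>
  fns.reduceRight((y, fn) => fn(y), data)

const curry = (fn) =>
  function curried(...args) {
    return args.length >= fn.length
      ? fn.apply(null, args)
      : curried.bind(null, ...args)
  }

// Stand-ins for the factory steps (setting parameter first):
const form = (ice) => `form(${ice})`
const decorate = (tastes, ice) => `decorate(${ice}, [${tastes}])`
const mix = (taste, ice) => `mix(${ice}, ${taste})`

// Currying fixes the settings, leaving unary functions for compose:
const line = composeGenerator(form, curry(decorate)('🍓,🍫'), curry(mix)('🍒'))
console.log(line('ice'))
// → form(decorate(mix(ice, 🍒), [🍓,🍫]))
```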
## <a name="homework"></a>A homework for you:
Generally, remember that currying is used to decrease the number of parameters of a function. In our last example, we saw that reducing the inputs to a single one is beneficial when using a compose function, but unary functions are useful in any case where we only require a single argument. For example, in arrow functions we can remove the parentheses when the function has just one parameter:
```javascript
// 👎
[1,2,3].map(function(digit) {
return digit * 2
})
// 👍
[1,2,3].map(digit => digit * 2)
```
As a practice, try to improve this code using currying.
```javascript
const pow = (base, exponent) => Math.pow(base, exponent)
const digits = [1,2,3];
const exponent = 2;
digits.map(function(digit) {
return pow(digit, exponent)
})
```
You can find the solution in this [video](https://www.youtube.com/watch?v=fvJ9yWqXcZI) from Derick Bailey.
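If you want to check your attempt against something, here is one possible currying-based answer (the `curriedPow` name is just my illustration; the video above may solve it differently):

```javascript
const pow = (base, exponent) => Math.pow(base, exponent)

// Put the fixed setting (the exponent) first, so it can be applied once:
const curriedPow = (exponent) => (base) => pow(base, exponent)

const digits = [1, 2, 3]
console.log(digits.map(curriedPow(2))) // → [1, 4, 9]
```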
## <a name="opinion"></a>Your opinion
What is your favorite example of using currying in your code? And generally, do you like using it, or do you think it makes the code unnecessarily complicated? | pegahsafaie
663,237 | Simplest Firebase Analytics Guide for your project | Note: This post does not include how to create a web app on firebase console, since I am focusing onl... | 0 | 2021-04-12T15:53:19 | https://dev.to/vikirobles/simplest-firebase-analytics-guide-for-your-project-2kj6 | firebase, webdev, typescript, analytics | Note: This post does not include how to create a web app on firebase console, since I am focusing only on how to show on the Firebase Analytics console the events that you want to create for your project.
The result of the process should lead you on a view like this

I spent quite a lot of time today trying to figure out how this works, and I thought I would share my experience, since there are many posts out there that will only waste your time.
- I am using Vite framework and TypeScript.
- I have added the app on the Firebase console, along with the firebase.ts configuration file.
In the picture above, on the right side, you can see the login event.
Below is an example of how I wrapped the ``signInWithEmailAndPassword()`` method from the firebase library and how I call the analytics method after signing in.
```js
const signIn = (
email: string,
password: string,
): Promise<firebase.auth.UserCredential> => {
return auth.signInWithEmailAndPassword(email, password).then((user) => {
analytics.logEvent('login')
return user
})
}
```
In that part of the code, the ``login`` event name comes from the firebase analytics library and its documentation:
[https://developers.google.com/gtagjs/reference/event#page_view]()
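Beyond the built-in `login` event, `logEvent` also accepts an event-parameters object. The `trackEvent` wrapper and the `sign_up` event below are my own illustrative sketch (not part of the Firebase API); the wrapper just centralizes the calls and returns what it logged so it is easy to inspect:

```javascript
// `analytics` is whatever your firebase.ts exported (e.g. firebase.analytics())
const trackEvent = (analytics) => (name, params = {}) => {
  analytics.logEvent(name, params)
  return { name, params } // returned only to make calls easy to inspect/test
}

// Hypothetical usage in a sign-up flow:
// const track = trackEvent(analytics)
// track('sign_up', { method: 'email' })
```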
Then I realised that events will not show up on the console until up to 24 hours have passed, so you will have to watch them in the debug view.
- First, download the Google Analytics Debugger extension for Chrome:
[https://chrome.google.com/webstore/detail/google-analytics-debugger/jnkmfdileelhofjcijamephohjechhna?hl=en]()
and activate it.
- Then, in your Firebase project, go to the Dashboard and click on the ``DebugView Report``

[https://firebase.google.com/docs/analytics/debugview#web]()
This will open another page, and the last thing you need to do is copy-paste the localhost link; then it will connect.
That's a simple and easy example so that you can create more and more events for your analytics.
Enjoy!
| vikirobles |
663,278 | #1 of 100DaysOfCode | Today I started my 100 days of code journey. Right now I don't know much about the topic that I am le... | 12,237 | 2021-04-12T17:01:00 | https://dev.to/icecoffee/1-of-100daysofcode-19mm | 100daysofcode, react | Today I started my 100 days of code journey.
Right now I don't know much about the topic that I am learning so can't specify what should be my daily goal or at least how many topics I'm gonna cover each day.
My today's learnings:
What is react?
Why use it?
Possible use cases.
Shortcomings and strengths...
Just theory nothing too fancy.
Key takeaways:
1) React is a library
2) It makes js more powerful, unlike frameworks.
3) With the help of appropriate third-party libraries react can do anything that a framework can.
4) After zipping up its total size falls in the range of 30-35 kb, so super small.
5) To get better at react you need to get better at javascript.
6) React doesn't violate the separation of concerns, it just gives it a new definition.
7) Principle of React: "Do one thing, but do it right"
8) React gives an overwhelming number of options, so until you become experienced enough, stick to the recommendations.
I know there is nothing to learn from this article but just trying to build a habit here. 😀
Hope I may find interested peers on my way.
Wish me luck. 🤞
Thanks for reading, Have a beautiful day.
| icecoffee |
663,352 | Tips in JavaScript | Index Format the output of JSON Stringify Get the index of an iteration in a... | 0 | 2021-04-12T22:51:30 | https://dev.to/corteshvictor/tips-o-trucos-javascript-16o8 | javascript, tips | ## **Index**
- [Format the output of JSON Stringify](#format-the-output-of-json-stringify)
- [Get the index of an iteration in a for-of loop](#get-the-index-of-an-iteration-in-a-for-of-loop)
- [Swap variables](#swap-variables)
- [Sort arrays](#sort-arrays)
- [Edit web pages directly in the browser without touching the HTML elements](#edit-web-pages-directly-in-the-browser-without-touching-the-html-elements)
- [Copying objects from the developer tools](#copying-objects-from-the-developer-tools)
- [Use the properties-methods-events of an HTML element through its id](#use-the-properties-methods-events-of-an-html-element-through-its-id)
- [Scroll to a specific element with a smooth scroll animation](#scroll-to-a-specific-element-with-a-smooth-scroll-animation)
- [Adding dynamic properties to an object](#adding-dynamic-properties-to-an-object)
- [Remove duplicates from an array](#remove-duplicates-from-an-array)
- [Filter out falsy values](#filter-out-falsy-values)
- [Arguments in traditional or normal functions](#arguments-in-traditional-or-normal-functions)
- [Update state through function composition in React](#update-state-through-function-composition-in-react)
- [Use object literals instead of nested if or switch](#use-object-literals-instead-of-nested-if-or-switch)
## Format the output of JSON Stringify
Classic usage of `JSON.stringify()`, and usage with formatting: `JSON.stringify(object, null, 2)`
```js
const object = {
  firstName: "firstName",
  lastName: "lastName",
  birthDate: "1986-01-01",
  homeAddress: {
    state: "state",
    address: "Address 34 56 apt 501",
    city: "city",
    zipCode: "zipCode"
  }
}
// Classic usage
console.log(JSON.stringify(object))
/* output
'{"firstName":"firstName","lastName":"lastName","birthDate":"1986-01-01","homeAddress":{"state":"state","address":"Address 34 56 apt 501","city":"city","zipCode":"zipCode"}}'
*/
// Passing the number 2 as the third parameter or argument formats the output with 2 spaces of indentation.
console.log(JSON.stringify(object, null, 2))
/* output
'{
  "firstName": "firstName",
  "lastName": "lastName",
  "birthDate": "1986-01-01",
  "homeAddress": {
    "state": "state",
    "address": "Address 34 56 apt 501",
    "city": "city",
    "zipCode": "zipCode"
  }
}'
*/
```
## Get the index of an iteration in a for-of loop
A for...of loop, introduced in ES6, is an excellent way to iterate over an array:
```js
const arr = [ 'a', 'b', 'c' ]
for (const value of arr) {
  console.log(value)
}
```
How can you get the index of an iteration?
The loop does not offer any syntax to do this, but you can combine the destructuring syntax introduced in ES6 with a call to the `entries()` method, [Array.prototype.entries()](https://developer.mozilla.org/es/docs/Web/JavaScript/Reference/Global_Objects/Array/entries):
```js
const arr = [ 'a', 'b', 'c' ]
for (const [index, value] of arr.entries()) {
  console.log(index, value)
}
```
## Swap variables
The values of two variables can be swapped in a destructuring expression
```js
let a = 12;
let b = 6;
[b, a] = [a, b]
console.log(a, b) //output: 6, 12
```
## Sort arrays
If you try to sort arrays with the `sort()` method, you will notice that it does not give the expected result.
```js
const numbers = [1, 4, 7, 2, 3, 896, 2334, 400, 100]
numbers.sort()
//output: [1, 100, 2, 2334, 3, 4, 400, 7, 896]
```
Here is a small way to do it and get the result in the correct order.
```js
const numbers = [1, 4, 7, 2, 3, 896, 2334, 400, 100]
numbers.sort((a, b) => a - b)
//output: [1, 2, 3, 4, 7, 100, 400, 896, 2334]
```
## Edit web pages directly in the browser without touching the HTML elements
- Open your browser.
- Go to the web page you want to edit.
- Open the developer tools (right click, Inspect, or the F12 key).
- Go to the Console tab.
- Type the command to turn editing on or off: `document.designMode='on'` or `document.designMode='off'`
<img width="700" alt="imagen" src="https://user-images.githubusercontent.com/17968316/114419681-41aae900-9b79-11eb-8e1a-9b2835b7f8f4.png">
## Copying objects from the developer tools
- Open your browser.
- Go to the web page.
- Open the developer tools (right click, Inspect, or the F12 key).
- Go to the Console tab.
- Suppose we have a `console.log(object)` in our code, and when we go to the console we see it.
- You can copy it by right-clicking on the object and choosing Copy object.
<img width="315" alt="imagen" src="https://user-images.githubusercontent.com/17968316/114424611-e92a1a80-9b7d-11eb-8b61-bf6e15d83041.png">
- Or you can use Store object as global variable and then the `copy` method, as follows:
<img width="315" alt="imagen" src="https://user-images.githubusercontent.com/17968316/114424848-1f679a00-9b7e-11eb-99e9-fd66b3aa1db6.png">
## Use the properties-methods-events of an HTML element through its id
If you have an element in the DOM with an id, it is stored on window, and you can get this element with JavaScript or from the console, as shown in the following image.
<img width="895" alt="imagen" src="https://user-images.githubusercontent.com/17968316/114426790-07911580-9b80-11eb-9e79-8fa683892ef4.png">
- `window.app` returns the html element.
- `window.hi.getAttribute('for')` uses the getAttribute method to get the value of the for attribute of the `label` element.
- `window.hi.textContent` gets the value of the textContent property of the `label` element.
## Scroll to a specific element with a smooth scroll animation
Did you know that you can trigger a scroll to a specific element using a single function call in JavaScript?
You can even add a behavior to get a nice smooth scrolling animation.
```js
const element = document.getElementById('elementId')
element.scrollIntoView({
  behavior: "smooth"
});
```
**Note:** It does not work in IE11.
## Adding dynamic properties to an object
```js
const dynamic = 'model'
const vehicle = {
  type: 'car',
  [dynamic]: 2021
}
console.log(vehicle) //output: { type: 'car', model: 2021 }
```
## Remove duplicates from an array
Using Set and the spread operator
```js
const arr = [ 'Victor', 'Cortes', 'Victor', 'Hugo' ]
const uniqueArr = [ ... new Set(arr) ]
console.log(uniqueArr) //output: [ 'Victor', 'Cortes', 'Hugo' ]
```
## Filter out falsy values
```js
const arr = [ 0, 'Truthy', false, null, 'values', undefined, true, 3 ]
const filtered = arr.filter(Boolean)
console.log(filtered) //output: [ 'Truthy', 'values', true, 3 ]
```
## Arguments in traditional or normal functions
When you use a traditional or normal function, it includes an arguments object that is similar to an array. I say similar because it has a numeric index and the `length` property, but it is not really an array, since it does not have all the array manipulation methods.
This can be very useful, because you can call the function passing it more parameters than you formally declared, or perhaps you declared none at all; that is, at first glance the function receives no parameters or arguments.
With the spread operator `(...)` we can copy the contents of the arguments object into a variable, and this new variable can now be manipulated.
```js
function getArguments() {
  console.log(arguments) //output below
  const array = [...arguments]
  console.log(array) //output: [ 'V', 'H', 'C' ]
}
getArguments('V','H','C')
/* Output of console.log(arguments)
{
  '0': 'V',
  '1': 'H',
  '2': 'C',
  length: 3,
  callee: ƒ getArguments(),
  __proto__: {...}
}
*/
```
**Note:** This is one of the main differences between arrow functions and normal functions: arrow functions do not have arguments.
## Update state through function composition in React
If you use function composition, it can be very useful for different purposes.
In the following example, a function is composed to create different [setter](https://developer.mozilla.org/es/docs/Web/JavaScript/Reference/Functions/set) functions to update the state.
```js
import { useState } from "react";
export default function App() {
  const [firstName, setFirstName] = useState("");
  const [lastName, setLastName] = useState("");
  //Set State using function composition
  const setState = (set) => (event) => set(event.target.value);
  const handleSubmit = (event) => {
    event.preventDefault();
    console.log(firstName, lastName);
    setFirstName("");
    setLastName("");
  };
  return (
    <div className="App">
      <h2>Enter user data</h2>
      <form onSubmit={handleSubmit}>
        <label htmlFor="first-name">firstName:</label>
        <input
          id="last-name"
          value={firstName}
          onChange={setState(setFirstName)}
        />
        <label htmlFor="last-name">lastName:</label>
        <input
          id="last-name"
          value={lastName}
          onChange={setState(setLastName)}
        />
        <button disabled={!firstName || !lastName}>add</button>
      </form>
    </div>
  );
}
```
## Use object literals instead of nested if or switch
In JavaScript we are used to using objects for almost everything, so when there are several conditions, I think object literals are the most readable way to structure the code.
Let's imagine we have a function that returns a phrase depending on the weather.
**Note**: For our example I want to use uppercase (`.toUpperCase()`) to highlight the weather, but you could use lowercase (`.toLowerCase()`).
If we use the `if/else` statement, it would look something like this:
```js
function setWeather(climate) {
  const weather = climate.toUpperCase();
  if (weather === 'SUNNY') {
    return 'It is nice and sunny outside today';
  } else if (weather === 'RAINY') {
    return `It's raining heavily`;
  } else if (weather === 'SNOWING') {
    return 'The snow is coming down, it is freezing!';
  } else if (weather === 'OVERCAST') {
    return `It isn't raining, but the sky is grey and gloomy`;
  } else {
    return 'Weather not found!';
  }
}
```
I definitely think this is not very readable, so we consider using `switch` to improve it:
```js
function setWeather(weather) {
  switch (weather.toUpperCase()) {
    case 'SUNNY':
      return 'It is nice and sunny outside today';
    case 'RAINY':
      return `It's raining heavily`;
    case 'SNOWING':
      return 'The snow is coming down, it is freezing!';
    case 'OVERCAST':
      return `It isn't raining, but the sky is grey and gloomy`;
    default:
      return 'Weather not found!';
  }
}
```
It is starting to look a little better, but a problem can arise: for example, if we forget to put the `break` or `return` (depending on the case), it will keep executing the following lines of code, and this can be a problem. That said, it may be much better to use object literals, since it would look like this:
```js
function setWeather(weather) {
  const atmosphericWeather = {
    SUNNY: 'It is nice and sunny outside today',
    RAINY: `It's raining heavily`,
    SNOWING: 'The snow is coming down, it is freezing!',
    OVERCAST: `It isn't raining, but the sky is grey and gloomy`,
    default: 'Weather not found!'
  }
  return atmosphericWeather[weather.toUpperCase()] || atmosphericWeather['default'];
}
```
Or you can use [nullish coalescing](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Nullish_coalescing_operator) to assign a default response:
```js
function setWeather(weather) {
  const atmosphericWeather = {
    SUNNY: 'It is nice and sunny outside today',
    RAINY: `It's raining heavily`,
    SNOWING: 'The snow is coming down, it is freezing!',
    OVERCAST: `It isn't raining, but the sky is grey and gloomy`
  }
  return atmosphericWeather[weather.toUpperCase()] ?? 'Weather not found!';
}
```
---
- If you want to collaborate to add more tips, here I share the [repository](https://github.com/corteshvictor/tips-javascript).
- You can also read this at this [link](https://corteshvictor.github.io/tips-javascript).
| corteshvictor |
663,676 | Attention, freelance dev📢 Taking job interviews every year keeps you in a good shape | Hi there! I'm Arisa, a freelance Full Stack Developer living in Germany🇩🇪 I'm developing Lilac, an... | 0 | 2021-04-13T01:40:18 | https://dev.to/arisa_dev/attention-freelance-dev-taking-job-interviews-every-year-keeps-you-in-a-good-shape-22i3 | job, interview | Hi there!
I'm Arisa, a freelance Full Stack Developer living in Germany🇩🇪
I'm developing [Lilac](https://note.com/frontendlifeinde/m/m9b8feda1d547), an online school with hands-on Frontend e-books and tutoring👩💻
Normally, I write what I learned from my everyday developer experience though this time is an interview experience I just completed recently.
The thing I want to tell people in here is super simple.
**"I take job interviews every year although I really like the way I work as a freelance."**
It's not that I'm fooling HR and companies (don't get me wrong!)
All I want to tell is that job interviews are so great to keep you being updated in a field.
I'm a freelance dev and mostly build things on my own or on a small-scale team.
In order to keep being updated as a freelance, you gotta find any chance to grab anyone who gives you feedback and evaluate you 👓
So my message is **to keep taking job interviews to make yourself being updated.**
Not too complicated, right? :)
From here, I'll talk about more details about my experience in the most recent one I had.
# What kind of company approached me?
It was a startup software company in Germany.
The great news for a non-German speaker is that it's not so hard anymore to find English-speaking companies.
When I just started my career as a developer back in 2017, there were very few companies that agreed to offer full remote work.
I live somewhere in the south of Germany where not many people know the name of the city.
I love living in the countryside but all the attractive tech companies in Germany are mostly in Berlin...
Basically, it was not common to work from home around 2017.
It means less opportunity if you choose to live in the countryside.
But since 2020, things have changed.
Companies started to approach candidates from everywhere in the world because the full remote is a new standard we have nowadays.

# How the process started?
I only apply the one I was approached by HR or the head of Talent.
No agencies or recruiting companies.
Why?
I'm tired of getting their messages saying they have "great" opportunities in "Java" projects.
I mean, Java projects to a JavaScript dev 😑

It's faster to talk with people who know what they're looking for.
Anyway, and I was approached by this company.
After I decided to apply, the process started with a 15 minutes call.
# What was the process like?
Well, this one was not a lot.
Just the first interview with HR and coding challenge.
The first interview was for 15 minutes.
And the coding challenge was an assignment style to complete within 7 days.
Although we had Eastern during the process, in general, the process was about 3 weeks.
The first interview was very short and I didn't have much time left to ask questions from my side. (I had a few questions though)
But I was asked these questions.
- Q0: Can you introduce yourself?
- Q1: How are you going to do with your own project, Lilac after you're employed?
- Q2: Why do you want to work with us?
- Q3: What do you expect in your job position(senior) in our company?
- Q4: Are you willing to learn new tools?
Before we finished, I asked one question about which language I can choose for my coding challenge.
This was the problem we had because this company is running projects mainly in Vue and Nuxt for Frontend.
They knew I'm from React and Next background.
Also, the backend was Python only.
It ended up taking just the frontend part as they recommended and I agreed.
I already had a feeling from this moment that they will eventually find someone with already equipped Vue, Nuxt, and Python skillsets.
But I'm taking job interviews to make myself keep fitting in a field.
It didn't bother me anyway.
(Ofc, I'll take the offer if the companies I apply offers me a job. But these things are the stuff normally to think later🍵)
# What kind of coding interview was it?
Can't tell what exactly I had but I had a mock web app which this company deals with.
Getting API to display logs in a nice style with an on and off feature to show/hide logs for human operators.
And displaying logs in certain ways.
It definitely required arrays and loop in Vue and axios module from Nuxt.
I had to start from scratch to adopt Vue and Nuxt in the first 2 days and 1 day for Tailwind CSS.
Tailwind CSS was not that hard to adopt but I had 3 tools to adopt from a scratch.
The rest of the 4 days are committing to work on the project.
The submission was through GitHub private repo.
Honestly, it wasn't easy to do these things in 1 week but I learned Vue, Nuxt, and Tailwind CSS in a short time💪
# How long was the process?
I guess the first interview was very quick.
It was done within a week since they approached me.
And the coding challenge was 1 week.
The review took a week.
Couldn't reach out to them while the Eastern holidays but I was busy with the coding challenge anyways 😌
So, it didn't take too long in my opinion.
# What did I learn from this time?
Literally, A LOT.
I repeat, A LOT.

I learned the journey to gain my skills will never be over 😎
Even it was quite packed in a week to learn 3 new tools, I really liked the process I learned new.
Also, even they couldn't give me feedback because of legal reasons, I could tell the senior position requires soft skills as well as high dev skills.
If I don't go for the job interviews every year to gain my skills, I can't imagine how many gems to gain my skills I'm missing.
Every single time of the interview process is new discovery.
There's nothing worthless interview experience I had.
You heard me, I had none of the waste from my past interview experience.

If you're afraid of failing, don't be!
You don't have to tell anyone about that and even if you did, people know that it's not easy to get a job.
People who look very successful must have had way more failure experience than we do.
Don't miss the treasure 💎
Hope my story helps you! | arisa_dev |
663,892 | Wave - The Open-source Software as a Service Starter Kit | Introduction Laravel Wave is an open-source Software as a Service Starter Kit that can hel... | 0 | 2021-04-13T07:30:51 | https://github.com/thedevdojo/wave | laravel, php, tailwindcss, saas | # Introduction
Laravel [Wave](https://devdojo.com/wave) is an open-source Software as a Service Starter Kit that can help you build your next great idea 💰.
Wave is built with [Laravel](https://laravel.com), [Voyager](https://voyager.devdojo.com), [TailwindCSS](https://tailwindcss.com), and a few other awesome technologies.
# Features
Here are some of the awesome features that Wave provides out of the box ✨:
- [Authentication](https://wave.devdojo.com/docs/features/authentication)
- [User Profiles](https://wave.devdojo.com/docs/features/user-profiles)
- [User Impersonation](https://wave.devdojo.com/docs/features/user-impersonation)
- [Subscriptions](https://wave.devdojo.com/docs/features/billing)
- [Subscription Plans](https://wave.devdojo.com/docs/features/subscription-plans)
- [User Roles](https://wave.devdojo.com/docs/features/user-roles)
- [Notifications](https://wave.devdojo.com/docs/features/notifications)
- [Announcements](https://wave.devdojo.com/docs/features/announcements)
- [Fully Functional Blog](https://wave.devdojo.com/docs/features/blog)
- [Out of the Box API](https://wave.devdojo.com/docs/features/api)
- [Voyager Admin](https://wave.devdojo.com/docs/features/admin)
- [Customizable Themes](https://wave.devdojo.com/docs/features/themes)
# GitHub Repository
You can get a copy of Laravel Wave here:
{% github https://github.com/thedevdojo/wave %}
# Demo
View a live [demo here](https://wave.devdojo.com), or deploy your own instance to DigitalOcean, by clicking the button below.
<a href="https://cloud.digitalocean.com/apps/new?repo=https://github.com/thedevdojo/wave/tree/main" target="_blank"><img src="https://www.deploytodo.com/do-btn-blue.svg" width="240" alt="Deploy to DO"></a>
# Installation
To install Wave, you'll want to clone or download this repo:
```
git clone https://github.com/thedevdojo/wave.git project_name
```
Next, we can install Wave with these **4 simple steps**:
### 1. Create a New Database
During the installation we need to use a MySQL database. You will need to create a new database and save the credentials for the next step.
### 2. Copy the `.env.example` file
We need to specify our Environment variables for our application. You will see a file named `.env.example`, you will need to duplicate that file and rename it to `.env`.
Then, open up the `.env` file and update your *DB_DATABASE*, *DB_USERNAME*, and *DB_PASSWORD* in the appropriate fields. You will also want to update the *APP_URL* to the URL of your application.
```bash
APP_URL=http://wave.test
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=wave
DB_USERNAME=root
DB_PASSWORD=
```
### 3. Add Composer Dependencies
Next, we will need to install all our composer dependencies by running the following command:
```php
composer install
```
### 4. Run Migrations and Seeds
We need to migrate our database structure into our database, which we can do by running:
```php
php artisan migrate
```
<br>
Finally, we will need to seed our database with the following command:
```php
php artisan db:seed
```
<br>
🎉 And that's it! You will now be able to visit your URL and see your Wave application up and running.
# Watch, Learn, and Build
We've also got a full video series on how you can setup, build, and configure Wave. 🍿 You can watch the first few videos for free, and additional videos will require a [DevDojo Pro](https://devdojo.com/pro?ref=bobbyiliev) subscription. By subscribing to a [DevDojo Pro](https://devdojo.com/pro) subscription you will also be supporting the ongoing development of this project. It's a win win! 🙌
[Click here to watch the Wave Video Series](https://devdojo.com/course/wave).
## Documentation
Checkout the [official documentation here](https://wave.devdojo.com/docs).
# Conclusion
With Laravel Wave you can save time and focus on the functionality of your SaaS.
If you like the project make sure to star it on GitHub 🙌
Any feedback is also going to be highly appreciated! | bobbyiliev
663,956 | Passing React.forwardRef to child's child | In short in this post, I want to show how to forward refs if its needs to be passed more than one... | 0 | 2021-04-13T08:55:49 | https://dev.to/srikanthkyatham/using-react-forwardref-42m9 | react, javascript, functional | In short, in this post I want to show how to forward refs when they need to be passed more than one level
In the [React forwardRef guide](https://reactjs.org/docs/forwarding-refs.html) the instructions tell us how to pass a ref one level. But what if it needs to be passed more than one level?
In my case it was a custom button
```jsx
const LinkButton = (props) => {
  return <button {...props} />;
}
```
I had to use this button inside another component which was passing ref to this button.
The usage was
```jsx
const ShowInfoBox = () => {
  const infoRef = React.useRef(null);
  const props = {};
  return (
    <InfoBox
      referenceElement={<LinkButton {...props} />}
      ref={infoRef}
    >
      {content}
    </InfoBox>
  );
}
```
When I used it like the above, React complained
> Warning: Function components cannot be given refs. Attempts to access this ref will fail. Did you mean to use React.forwardRef()?
To solve this I had to create a wrapper component
```jsx
const LinkButtonWithRef = React.forwardRef((props, ref) => (
  <LinkButton {...props} ref={ref} />
));
```
Since I cannot use a prop named "ref", I had to rename it to "innerRef". The subsequent changes were
```jsx
const LinkButton = ({ innerRef, ...rest }) => {
  return <button ref={innerRef} {...rest} />;
}
```
```jsx
const LinkButtonWithRef = React.forwardRef((props, ref) => (
  <LinkButton {...props} innerRef={ref} />
));
```
Hope it helps someone who is facing a similar issue.
| srikanthkyatham |
664,113 | 🧟♂️ adventures in software development: automatic, scheduled wsl2 backups | As we rejoin our brave adventurer, we find that he has hurdled another piece of the backup puzzle, to... | 0 | 2021-08-15T15:18:16 | https://dev.to/jlcummings/adventures-in-software-development-automatic-scheduled-wsl2-backups-h12 | docker, wsl2, powershell, backup | As we rejoin our brave adventurer, we find that he has hurdled another piece of the backup puzzle, to be faced by yet even more hurdles when automatically scheduling Windows Subsystem for Linux (WSL2) backups. He has scoured and squirmed his way to timely exports of the distribution of concern and beat back weird interactions with the foul, yet, beloved beast, Docker.
The first step was creating an archive of the instance. That is made possible by the ‘wsl’ command using the ‘export’ sub-command.
```powershell
# Get a list of WSL `distributions` and their status
$ wsl -l -v
NAME STATE VERSION
* Ubuntu Running 2
docker-desktop Running 2
docker-desktop-data Running 2
# Note the current directory/drive and change to a location to save the exported distribution to; I recommend not saving this to a location that is cloud synchronized, because it will be large and extremely time consuming to upload (and then sync to all connected devices).
$ pwd
Path
----
C:\Users\justi\Documents
# Export a distribution for backup
# Note that this takes it offline and for several minutes, but ensures absolute consistency
$ wsl --export Ubuntu Ubuntu.tar
$ wsl -l -v
NAME STATE VERSION
* Ubuntu Converting 2
docker-desktop Running 2
docker-desktop-data Running 2
# Verify it completed once the 'State' shows as 'Stopped' for the given distribution.
$ ls
Directory: C:\Users\justi\Documents
Mode LastWriteTime Length Name
---- ------------- ------ ----
-a--- 4/12/2021 2:52 PM 12001935360 Ubuntu.tar
```
Now that we are confident that we can archive the distribution, the next step is to do it under automatic execution.
After fiddling around from near-zero PowerShell or programmatic task-scheduler knowledge, I can now run the backup under the Task Scheduler for recurring, time-of-day and day-of-week based execution. Personally, I set it to the late-evening-to-very-early-morning hours, like 2:30 AM locally, but only weekly, because that is my risk-vs-cost threshold and I don't feel more frequent backups would be helpful; however, you might. I could register the task through the 'Task Scheduler' GUI, but what fun is that? Let's look a little more 😎.
Initial registration of the task is roughly prepared using a PowerShell script. The archives are date-tagged in the filename scheme to easily identify when they were created and to make sorting and visualizing other aspects like growth easy to see.
```powershell
# .\Register-ScheduledBackup-WSL2.ps1
...
# this is the meat; a lot of sides surround the main course, but this is how you add a scheduled task
Register-ScheduledTask -Action $action -Trigger $trigger -TaskPath $TaskPath -TaskName $taskName -Description $taskDescription -Principal $principle -Settings $settingsSet
...
# using a subscription to the task scheduler log
$subscription = @"
<QueryList>
<Query Id="0" Path="Microsoft-Windows-TaskScheduler/Operational">
<Select Path="Microsoft-Windows-TaskScheduler/Operational">
*[EventData[@Name="ActionSuccess"][Data[@Name="TaskName"]="\$TaskPath\$taskName"][Data[@Name="ResultCode"]="0"]]
</Select>
</Query>
</QueryList>
"@
...
# add a task that fires based on that subscription
Register-ScheduledTask -Action $dockerRestartAction -Trigger $dockerRestartTrigger -TaskPath $TaskPath -TaskName 'Restart Docker' -Description "Restart Docker on completion of '$taskName'" -Principal $dockerRestartPrinciple -Settings $dockerRestartSettings
...
```
Note: the above example only creates the scheduled task. Updating the scheduled task would require a different script; that is not shown, and it is not implemented. Instead, I just unregister, and register anew. The actual backup script that is run via the above registration process is:
```powershell
# .\Backup-WSL2.ps1
# parameter handling and logging omitted; here is the crux
$command = (Get-Command wsl).Definition
$dateExecuted = $(Get-Date -Format FileDate)
$commandArgs = "--export $Distribution $DestinationPath\$Distribution-$dateExecuted.tar"
# execute the backup
Invoke-Expression "& $command $commandArgs"
```
There is a problem with this export step. For some reason, when the export/archive of the WSL distribution finishes, the connection to the host's docker socket is lost or corrupted or something, so Docker needs to be restarted before your distribution can interact with the host docker system: We 😵
I could reboot the entire machine, but just restarting docker seems to be sufficient. I could use the Docker Desktop GUI/menus, but automatic restart would be nice while the underlying challenge remains. TLDR; Be 😎.
Note: restarting all of docker has its own consequences, so proceed cautiously, as any other running containers on the same host (your machine) will also be stopped.
The process to accomplish a docker restart via powershell is something that I am less comfortable showing, as it is based on a general instruction demonstrated in the following link:
[Stackoverflow: Restart docker Windows 10 command line](https://stackoverflow.com/a/57560043/549306)
The reason I am less comfortable showing how I do it is that:
1. Licensing attached to Stackoverflow user-contributed code may impede personal or professional efforts.
2. The example solution does not work cleanly `as is`. So modification is required, and then I refer back to '#1' 😧.
But I will highlight the relevant commands here (with parameter handling, logging, and sleep or await commands omitted):
```powershell
...
# kill the underlying Windows services used by docker, as gracefully as possible
Stop-Service -InputObject $_ -ErrorAction Continue -Confirm:$false -Force
...
# kill the user facing docker desktop app without delay (the app has little consequence on the underlying docker system, and is mostly just a dashboard; not a criticism)
Stop-Process -InputObject $_
...
# start the desktop app because it generally takes the longest to completely start and won't complain terribly if the underlying docker system isn't ready or available
Start-Process -FilePath $clientAppPath -PassThru |
...
# start the underlying Windows services for docker
$_.Start()
...
# execute a `docker info` command and parse the results for indicators of readiness
$healthCheckResult = $($ServiceHealthCommand)
...
```
The complete script used to restart docker is:
```
.\Restart-Suite.ps1
```
In any case, once Docker is restarted, I am able to rejoin/restart my now safely archived distribution.
The complete effort to-date is found here:
[backup-wsl2](https://github.com/jlcummings/backup-wsl2)
Repeating the disclaimer in the repository README: this is very rookie stuff you will find here, and more importantly, it may be a terrible approach to managing wsl2 backups at the end of the day; but it is an option and maybe inspirational to someone with more foresight, talent, and time.
Insert Tim Allen as Tim the Toolman Taylor trope. I connected a service restart to the end of the backup script to automatically restart Docker after exporting the wsl instance at a regular time of day when I wasn't heavily using the machine. I should add more steps:
- modify my process to also copy a complete, intact archive periodically to another location; one other option might be to ‘rsync’ the archive chunks to remote storage for isolation from the source (likely you want to stage it to `local` network storage first and let that NAS/SAN sync to offsite during off-peak periods)
- test the recovery process; if you can't recover from it, the backup is not worth making
- shrink the live distribution somehow, as the growth of the image has a negative effect on the archive process in terms of time and space (experience shows this is going to be fruitless in most cases)
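The ‘rsync’ staging idea from the first item above could be sketched from inside the WSL distribution roughly like this. Every path here is a placeholder assumption (temp directories stand in for the Windows export folder and a mounted NAS share), so treat it as a hedged sketch, not part of my actual scripts:

```shell
# Stage only the date-tagged .tar archives to (locally mounted) NAS storage.
# Both directories are stand-ins: SRC would be something like
# /mnt/c/Users/justi/Documents, DEST the mounted NAS share that later
# syncs offsite during off-peak hours.
SRC="$(mktemp -d)"
DEST="$(mktemp -d)/wsl-backups"
touch "$SRC/Ubuntu-20210412.tar" "$SRC/notes.txt"
mkdir -p "$DEST"
# copy .tar archives only, preserving attributes; everything else is excluded
rsync -av --include='*.tar' --exclude='*' "$SRC"/ "$DEST"/
ls "$DEST"
```

A cron entry inside the distribution (or yet another scheduled task) could run the real version of this against the actual export directory.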
| jlcummings |
664,169 | Robotic Process Automation with Automation Anywhere | Robotic Process Automation (RPA) has attracted significant investment from many corporate organizat... | 0 | 2021-04-20T14:51:26 | https://dev.to/packtpub/robotic-process-automation-with-automation-anywhere-1o30 | rpa, clourpa, automation360, processautomation | 
**Robotic Process Automation (RPA)** has attracted significant investment from many corporate organizations in recent years. This has opened up many opportunities for using RPA, whether you are an experienced developer wanting to gain additional valuable skills or you're thinking about starting your career as an RPA developer.
In this overview, we explain what Robotic Process Automation is. You’ll learn about Automation Anywhere (AA) and what it does, and get some initial insights into AA’s RPA tool. A number of versions of AA are available, and you will learn about their differences. Our focus will be on the latest Community Edition A2019. Besides being the latest version, there are several other reasons for learning RPA with this version. We will explain why this version is ideal for gaining actual hands-on experience and starting your journey in building software robots (bots).
Along with building bots, AA has a number of additional features and components. These include IQ Bot, Bot Insight, Bot Store, Mobile Bot, and Automation Anywhere University. We will show how you can benefit from these features and components.
# Technical requirements
To use AA A2019 Community Edition, you will need the following:
* Windows OS version 7 or higher
* A processor with a minimum speed of 3 GHz
* A minimum of 4 GB RAM
* Internet Explorer v10 or higher, or Chrome v49 or higher
* An internet connection with a minimum speed of 10 Mb/second
# What is robotic process automation?
You probably already know what RPA is, but we will quickly review it here. The words *automation* or *robot* usually conjure up images of a physical machine performing repetitive tasks. We began to see this type of robotic automation years ago, particularly in manufacturing. Physical robotic machines were built to help automate tasks usually done by humans. This form of industrial manufacturing automation was later adopted by many other industries including logistics, distribution, and packaging. This also led to automation being taught in universities at the postgraduate level. Many new technology jobs were created from this, including roles such as robotics engineer, designer, and maintenance operative, as well as automated programmable manufacturing tools such as CNC machinery. Since the widespread adoption of the internet, we have seen the concept of web-controlled automation being introduced. As an example, large buildings often deploy internet-enabled CCTV, heating controls, and security systems, where all these systems can be managed remotely over the internet. You could have a very fulfilling career as a developer or engineer working in automation.
We can see the same thing happening with RPA. RPA is specifically designed to automate tasks that are performed by humans on desktops. Most jobs have an element that involves tasks that are high-volume, repetitive, and tedious. Such tasks tend to drain the enjoyment out of our jobs. RPA can be applied to automate these types of tasks.
We can build bots to perform these types of tasks, and this is specifically what RPA bots have been designed for. Having a bot can give you more time to spend on the tasks that you actually enjoy and excel at. This in turn would deliver more job satisfaction.
You may be thinking, *well, what's the difference between RPA and traditional software development?* Well, with traditional development, the developer needs to be proficient in developing the application with it being automated as well. For example, to automate a task in Excel you would expect the developer to have skills in VBA. To develop web applications, the developer may need skills in Java or HTML. The developer needs to understand how the application is executing the tasks as well as what the user needs to do. It would usually also involve a greater learning curve to master these skills and would involve writing lines of code to build the solutions. RPA is different. It doesn't really matter what application you are working with as it interacts with the user interface. The user only needs to understand how to operate the application they are working with without necessarily understanding how the application executes the task, and this is all that RPA needs to know. So, no specific expertise is needed to work on multiple applications. It also does not require writing lines of code, as you can build a solution by designing a workflow or using pre-defined drag and drop commands. This makes it an ideal technology to rapidly learn how to build bots; it doesn't require years of learning to become a bot developer. See the following comparison:

Figure 1 – Comparison of traditional automation against RPA
You can clearly see the benefits of having an RPA bot as opposed to building a new traditional-style software solution. So, what sorts of tasks can a bot perform? Bots can pretty much do most tasks that involve a human using the desktop. This includes the automation of the tasks shown in the following diagram:

Figure 2 – Tasks that can be performed with RPA
You should now have a good understanding of what RPA is. This is a growing market with great demand for RPA skills. We know we can learn these skills far more quickly and easily than those required for traditional development. The range of tasks that can be automated with RPA is vast and not limited to specific industries.
The number of RPA vendors on the market is growing. As in most industries, only a few become recognized and reputable as market leaders, although we have seen a handful of industry leaders emerging over the last few years. One of the key players has been Automation Anywhere.
# Overview of Automation Anywhere
The list of vendors that provide RPA tools is growing constantly. There are three main leaders in this automation technology. These are UiPath, Blue Prism, and **Automation Anywhere (AA)**. All these vendors provide RPA tools with pretty much the same functionality. You can see the top 10 RPA vendors of 2020 at the following link, created by Horses for Sources: https://www.horsesforsources.com/RPA_Top10_2020_012920.
Although the aforementioned top three do provide similar functionalities, there are some key differences. The following table shows a breakdown of the features available from each provider:

Figure 3 – Top vendors' features comparison
We can see that AA and UiPath have the most comprehensive tools and features when compared to Blue Prism.
We will use AA, as they were the first to release a fully cloud-based RPA tool. This eliminates the need to install AA on your desktops to build, manage, and deploy bots. AA has won a number of prestigious technology awards and was recently named the *market leader* in RPA by a Forrester report.
AA also runs a number of annual events, including the *Bot Games*. Here, developers from around the world are challenged against each other to build specific bots. Maybe, once you have gained enough confidence in your own bot development skills, you can be part of these Bot Games.
The mission statement of AA, as published on their website at https://www.automationanywhere.com/company/about-us, is:
> To enable companies to operate with unprecedented productivity and efficiency by automating any part of the enterprise that can be automated with the most intelligent and intuitive robotic process automation platform we call - The Intelligent Digital Workforce
We can break this statement down into three distinct elements:
* **What AA offers**: Giving organizations the opportunity to increase productivity and efficiency.
* **How they can offer this**: Creating the opportunity to automate any process within the organization by the deployment of intelligent RPA.
* **The outcome**: This results in building bots that make up the **Digital Workforce**.
When designing and building an RPA solution, it is essential that a statement relates to why RPA is needed. The Digital Workforce has to add value within the organization. This can be measured in terms of cost savings, time reduction, or the reduction of effort. As a developer, understanding why automation is needed can help in designing a robust, intelligent automation solution.
We will now take a closer look at some of the additional features and components available with AA. This will show how AA stands out from the crowd of its competitors. We will look at the following features and components:
* The Digital Workforce
* IQ Bot
* Bot Insight
* Bot Store
* Mobile Bot
* Automation Anywhere University
Let's take a look at these in more detail.
## The Digital Workforce
A bot is referred to by AA as a **Digital Worker** as it clones the actions of a human to perform a given task. A Digital Worker is a member of the team designed to carry out a process just the same as any human worker. As more bots are built within an organization, you can see a Digital Workforce being created. These bots can work side by side with a human or can be deployed to run on their own. Decision-making is a key aspect when using RPA. RPA has the ability to perform condition-based decisions when the outcome is purely based on a single condition or set of conditions.
For example, a condition-based decision could be, *do we order some keyboards?*
We would check our stock levels in the stock database, and if it is below our re-ordering threshold, then yes, we do; otherwise, we don't.
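A rule like that is simple enough to sketch in a few lines of ordinary code (Python here purely for illustration — in AA you would assemble it from drag-and-drop IF commands rather than write code, and the threshold value is an assumption):

```python
# A purely condition-based decision: the outcome follows entirely from the rule.
REORDER_THRESHOLD = 20  # hypothetical re-ordering threshold


def should_reorder(stock_level: int, threshold: int = REORDER_THRESHOLD) -> bool:
    """Return True when stock has fallen below the re-ordering threshold."""
    return stock_level < threshold


print(should_reorder(12))  # True: below threshold, so place an order
print(should_reorder(35))  # False: stock is sufficient
```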
In some cases, condition-based decisions are not sufficient to get the correct outcome. There are occasions when decisions have to be made using **Artificial Intelligence (AI)** or by applying machine learning algorithms. This is where RPA needs to be used in conjunction with AI. AA allows us to train an RPA bot to perform complex decisions involving AI and machine learning algorithms. This is achievable using the IQ Bot feature of AA.
## IQ Bot
As well as utilizing condition-based decisions, more and more processes require a certain level of cognitive intelligence to make decisions. An example of this would be when dealing with unstructured data. A common scenario involves invoices, which all tend to have the same type of data such as supplier, items, costings, and dates, but the layout and format vary between different suppliers. AA has developed a product called **IQ Bot**. This bot uses cognitive automation with RPA to learn how to handle unstructured data. This enables such processes to be automated from end to end without human intervention. It integrates AI technologies such as fuzzy logic, **Natural Language Processing (NLP)**, computer vision, and **Machine Learning (ML)**, all without the help of data scientists or highly trained experts.
## Bot Insight
Designing and building bots is not the complete story. AA has also developed a platform that produces real-time analytics about your Digital Workforce, its processes, and business-level processes. This is all part of Bot Insight, the RPA analytics tool for AA. Bot Insight is broken down into two categories: operational analytics and business intelligence.
As bots are deployed, as well as executing tasks, they also process data. This data is related to each specific process and can provide valuable insight. Bot Insight analyzes this data and transforms it into meaningful insights. It also captures operational data such as how well the bot is performing, tracking data as it is being processed. All this data can be presented in various formats including graphs, charts, and tables. It can also predict possible bot failures. It can be integrated seamlessly with other leading business intelligence platforms such as Tableau, ThoughtSpot, and QlikView. As an independent tool, Bot Insight provides a complete analytics solution without the need to integrate with other tools. It's simple to use; all it requires is tagging the data items that need to be analyzed and Bot Insight will do the rest for you.
> **Note**
> You can learn more about Bot Insight at https://www.automationanywhere.com/products/bot-insight.
## Bot Store
AA is the first RPA vendor to have a fully operational Bot Store. Bot Store is an online store with a collection of Digital Workers. The bots available here are built by independent developers from all around the world, as well as by AA themselves. AA Bot Store won the Silver award in the 2019 Edison Awards for developing the world's first and largest enterprise automation marketplace.
These are complete bots out of the box that will perform a specific task or role. They are available as bots for specific applications, categories, or business processes. These applications include Microsoft, Google Cloud, CyberArk, and LinkedIn. You can pick specific bots for particular tasks, such as NLP bots for converting speech to text or bots for converting a QR code image to text. The bots on offer are continuously growing as more of them are added. Many of these bots are available for free, but there are some you will have to pay for.
Once you have mastered bot development, maybe you can submit your bots to be hosted on Bot Store. This is a great way to promote your skills as well as having the opportunity to sell your bots.
> **Note**
> You can learn more about Bot Store at https://www.automationanywhere.com/products/botstore.
## Mobile Bot
AA has also released a mobile app to work with your bots. It allows you to manage your Digital Workers from your mobile device. Bot Insight is available on the mobile app. This app will give you live alerts on bot performance as well as business insights on bot data. You can control your bots from the app including starting and stopping them. It also provides a platform for you to connect with the wider AA RPA community.
> **Note**
> You can learn more about Mobile Bot at https://www.automationanywhere.com/products/apps.
## Automation Anywhere University
AA also has an online university that provides many learning paths and opportunities to earn a globally recognized certificate. You can gain many accreditation badges approved by AA by completing the online assessments. These assessments usually consist of multiple-choice questionnaires. To gain the Certified Master Professional accreditation, you will have to build three bots and submit them to the university. These will then be assessed to determine whether you qualify or not. There are many areas of AA that you can gain accreditation badges for, including Bot Developer, Business Analyst, IQ Bot Developer, Control Room Administrator, Solutions Architect, Technical Support Specialist, and RPA Program Manager.
You can attempt the accreditation badge assessments for free, but there is a cost for certification. These range from 50 USD to 100 USD depending on the certificate. These certifications are great ways to promote your RPA skills; I would recommend the Automation Anywhere Certified Advanced RPA Professional certification.
> **Note**
> You can learn more about the AA University at https://university.automationanywhere.com/.
Hopefully, you now have a better insight into AA’s features. There is a distinct advantage to using AA for RPA over its competitors. We know that the AA platform offers far more than just bot development. It allows data analytics, a platform to showcase and generate revenue from our bots, and a tool specifically designed to incorporate AI in our bots, as well as a path to gain recognized certifications for our skills.
Along with these features, three versions of AA are available. We will now look at the differences between them.
# Automation Anywhere versions
Each AA version is designed with a different user in mind. The following table summarizes the main differences:

Figure 4 – AA versions
Community Edition A2019 is totally free. The other two versions come with a 30-day free trial, after which you have to purchase an AA license to continue using them. Community Edition A2019 is specifically designed for students and developers. There is no limit to the number of bots you can build, and nor is there any limited functionality.
You can now see the benefits of using Community Edition A2019, as well as understand what additional capabilities the other versions have to offer. In the next section, we’ll take a closer look at Community Edition A2019 as well as walk through how to register with AA in order to start using it.
## Community Edition A2019
AA Community Edition A2019 is the latest free version and was released in November 2019. The version prior to this, AA v11.x, used a client-server architecture where the management was done through the web-based Control Room app while bot development was done through a client application installed on the desktop.
Community Edition A2019 is a fully cloud-based solution. Bot management and building are all done through the web application; no development client is installed on your desktop. You need to download and install a **Bot agent** on each device that is to run a bot. Once installed, you build your bot, connect to your device using the Bot agent, and then deploy.
**Registration with Automation Anywhere**
As Community Edition A2019 is free, you can start using it once you’ve registered with AA.
To register, follow these instructions:
1. Navigate to https://www.automationanywhere.com/products/community-edition
2. Complete the appropriate details, including your **First Name**, **Last Name**, **Email Address**, **Country**, **Phone Number**, and **Company Name**.
3. Then submit your details.
You will shortly get a welcome email including your login credentials. The key details to note are the following:
* Your Control Room URL
* Your username
* Your password
You will need these credentials every time you launch AA so keep a note of them. You need to change the password when you first log in.
You are now ready to start your RPA journey using AA.
# In conclusion
You now have a good understanding of what RPA is and how AA is positioned in the RPA space. You also have some understanding of AA’s capabilities. Having registered with AA to use the free Community Edition A2019, you are all set to get AA up and running on your machine.
This article on robotic process automation with Automation Anywhere is part of Husan Mahey's book of the same name. To continue reading about RPA and AA and to learn more about the recent developments in process automation, check out the book [here](https://packt.live/3cARTW5).
| packtpub |
665,062 | Ember Apollo Client + @use | A real-world application for @pzuraq's new `@use` API in Ember, using it to wrap some common ember-apollo-client methods | 0 | 2021-04-13T22:36:48 | https://dev.to/chrismllr/ember-apollo-client-use-5h3o | graphql, ember, typescript | ---
title: Ember Apollo Client + @use
published: true
description: A real-world application for @pzuraq's new `@use` API in Ember, using it to wrap some common ember-apollo-client methods
tags: graphql, ember, typescript
---
I've recently spun up my first Ember app using GraphQL, and as I would do when approaching any new functionality in my Ember app, I reached for the community supported addon [`ember-apollo-client`](https://github.com/ember-graphql/ember-apollo-client).
`ember-apollo-client` provides a really nice wrapper around everything I'd want to do with the `@apollo/client`, without making too many assumptions/ abstractions. It nicely wraps the `query`, `watchQuery`, and `subscribe` methods, and provides a `queryManager` for calling those methods, which quite nicely cleans them up for you as well.
Ember traditionally has many ways to set up/ clean up data-fetching methods, and you usually fall into two camps; I find myself choosing a different path almost every time I write an ember app.
## 1. Use the `model` hook
`ember-apollo-client` first suggests using your model hook, illustrated here:
```js
// app/routes/teams.js
import Route from '@ember/routing/route';
import { queryManager } from 'ember-apollo-client';
import query from '../gql/queries/teams';
export class TeamsRoute extends Route {
@queryManager apollo;
model() {
return this.apollo.watchQuery({ query }, 'teams');
}
}
```
**Pros:** This method is well supported by the framework, and allows for utilizing `error` and `loading` substates to render something while the model is reloading.
**Drawbacks:** `query parameters`. Say we have a `sort` parameter. We would then set up an additional `observable` property within our model hook, and likely use the `setupController` hook to set that on our controller for re-fetching data when `sort` changes. This is fine, but includes extra code which could become duplicative throughout your app, leading to potential bugs if a developer misses something.
## 2. Utilize `ember-concurrency`
Based on a suggestion I found while digging through their issues and documentation, I gave `ember-concurrency` a shot:
```js
// app/routes/teams.js
import Route from '@ember/routing/route';
import { unsubscribe } from 'ember-apollo-client';
export class TeamsRoute extends Route {
setupController(controller, model) {
controller.fetchTeams.perform();
}
resetController(controller) {
controller.fetchTeams.cancelAll();
unsubscribe(controller.fetchTeams.lastSuccessful.value.result);
}
}
// app/controllers/teams.js
import Controller from '@ember/controller';
import { tracked } from '@glimmer/tracking';
import { action } from '@ember/object';
import { task } from 'ember-concurrency';
import { queryManager, getObservable } from 'ember-apollo-client';
import query from '../gql/queries/teams';
export class TeamsController extends Controller {
@queryManager apollo;
@tracked sort = 'created:desc';
@task *fetchTeams() {
const result = yield this.apollo.watchQuery({
query,
variables: { sort: this.sort }
});
return {
result,
observable: getObservable(result)
};
}
@action updateSort(key, dir) {
this.sort = `${key}:${dir}`;
this.fetchTeams.lastSuccessful.value.observable.refetch({ sort: this.sort });
}
}
```
**Pros:** This feels a little more ergonomic. Within the `ember-concurrency` task `fetchTeams`, we can set up an observable which will be exposed via `task.lastSuccessful`. That way, whenever our sort property changes, we can access the underlying observable and `refetch`.
`ember-concurrency` also gives us some great metadata and contextual state for whether our task's `perform` is running, or if it has errored, which allows us to control our loading/ error state.
**Drawbacks**: In order to perform, and subsequently clean this task up properly, we're going to need to utilize the route's `setupController` and `resetController` methods, which can be cumbersome, and cleanup especially is easily missed or forgotten.
This also requires the developer writing this code to remember to `unsubscribe` to the watchQuery. As the controller is a singleton, it is not being torn down when leaving the route, so the queryManager unsubscribe will not be triggered. _Note: if this is untrue, please let me know in the comments!_
Either way, we will still need to cancel the task. This is a lot to remember!
## Enter `@use`
Chris Garrett (@pzuraq) and the Ember core team have been working towards the `@use` API for some time now. Current progress can be read about [here](https://www.pzuraq.com/introducing-use/).
While `@use` is not yet a part of the Ember public API, the article explains the low-level primitives which, as of Ember version 3.25+, are available to make `@use` possible. In order to test out the proposed `@use` API, you can try it out via the [`ember-could-get-used-to-this`](https://github.com/pzuraq/ember-could-get-used-to-this) package.
> ⚠️ Warning -- the API for `@use` and `Resource` could change, so keep tabs on the current usage!
## How does this help us?
Remember all of those setup/ teardown methods required on our route? Now, using a helper which extends the `Resource` exported from `ember-could-get-used-to-this`, we can handle all of that.
Let's go `ts` to really show some benefits we get here.
```ts
// app/routes/teams.ts
import Route from '@ember/routing/route';
export class TeamsRoute extends Route {}
// app/controllers/teams.ts
import Controller from '@ember/controller';
import { tracked } from '@glimmer/tracking';
import { action } from '@ember/object';
import { use } from 'ember-could-get-used-to-this';
import GET_TEAMS from '../gql/queries/teams';
import { GetTeams } from '../gql/queries/types/GetTeams';
import { WatchQuery } from '../helpers/watch-query';
import valueFor from '../utils/value-for';
export class TeamsController extends Controller {
@tracked sort = 'created:desc';
@use teamsQuery = valueFor(new WatchQuery<GetTeams>(() => [{
query: GET_TEAMS,
variables: { sort: this.sort }
}]));
@action updateSort(key, dir) {
this.sort = `${key}:${dir}`;
}
}
```
And voila! No more setup/ teardown; our `WatchQuery` helper handles all of this for us.
> Note: `valueFor` is a utility function which helps reflect the type of the "value" property exposed on the Resource. More on that below. This utility should soon be exported directly from `ember-could-get-used-to-this`.
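A minimal sketch of what such a `valueFor` helper could look like — this is my own illustration under the assumption that the helper is type-level only, not the exact implementation that ends up exported:

```typescript
// Hypothetical sketch of `valueFor`: at runtime it is just an identity
// function; its only job is to re-type the Resource instance as the type
// of its `value` property, so `@use teamsQuery` is typed as
// { result, observable, isRunning } rather than as the Resource class.
function valueFor<T extends { value: unknown }>(resource: T): T['value'] {
  // The `@use` decorator performs the actual unwrapping of `.value`
  // at property-access time, so this cast is purely for the compiler.
  return resource as unknown as T['value'];
}
```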
So what's going on under the hood?
```ts
// app/helpers/watch-query.ts
import { tracked } from '@glimmer/tracking';
import { Resource } from 'ember-could-get-used-to-this';
import { queryManager, getObservable, unsubscribe } from 'ember-apollo-client';
import { TaskGenerator, keepLatestTask } from 'ember-concurrency';
import ApolloService from 'ember-apollo-client/services/apollo';
import { ObservableQuery, WatchQueryOptions } from '@apollo/client/core';
import { taskFor } from 'ember-concurrency-ts';
interface WatchQueryArgs {
  positional: [WatchQueryOptions];
}
export class WatchQuery<T> extends Resource<WatchQueryArgs> {
@queryManager declare apollo: ApolloService;
@tracked result: T | undefined;
@tracked observable: ObservableQuery | undefined;
get isRunning() {
return taskFor(this.run).isRunning;
}
get value() {
return {
result: this.result,
observable: this.observable,
isRunning: this.isRunning,
};
}
@keepLatestTask *run(): TaskGenerator<void> {
const result = yield this.apollo.watchQuery<T>(this.args.positional[0]);
this.result = result;
this.observable = getObservable(result);
}
setup() {
taskFor(this.run).perform();
}
update() {
this.observable?.refetch(
this.args.positional[0].variables
);
}
teardown() {
if (this.result) {
unsubscribe(this.result);
}
taskFor(this.run).cancelAll({ resetState: true });
}
}
```
There's a lot going on, so let's break it down:
We've brought in some libraries to help with using `typescript`, including `ember-concurrency-ts`.
The `Resource` class gives us a way to perform our task upon initialization:
```ts
setup() {
taskFor(this.run).perform();
}
```
And a way to clean up after ourselves when we're done:
```ts
teardown() {
if (this.result) {
unsubscribe(this.result);
}
taskFor(this.run).cancelAll({ resetState: true });
}
```
And remember how we declaratively called `refetch` after updating sort? Well, now we can rely on Ember's tracking system: since `sort` is consumed in the arguments thunk we passed to the constructor, updating it reliably triggers the `update` hook:
```ts
update() {
this.observable?.refetch(
this.args.positional[0].variables
);
}
```
## Where do we go from here
From here, you can use the same paradigm to build out Resources for handling `apollo.subscribe` and `apollo.query`, with few code changes.
As our app is very new, we plan on tracking how this works for us over time, but not having to worry about setting up/ cleaning up queries for our application should greatly improve the developer experience right off the bat.
An important thing to note: this article focuses on wrapping the `ember-apollo-client` methods, but the pattern can easily be extrapolated to support any data-fetching API you want to use, including Ember Data.
Thanks for reading! Please let me know what ya think in the comments 👋
| chrismllr |
665,351 | Run pgsql migrations on WSL | First you must run: rake db:create Then: rake db:migrate | 0 | 2021-04-14T02:41:16 | https://dev.to/taoliu12/run-pgsql-migrations-on-wsl-5g54 | First you must run: rake db:create
Then: rake db:migrate
| taoliu12 | |
665,844 | Dynamic import - recipe for a lightning fast application | In this article we will delve into the dynamic import of JavaScript modules and the lazy loading of R... | 0 | 2021-04-14T13:40:16 | https://www.sensenet.com/blog/2021-04-07-dynamic-import-recipe-for-a-lightning-fast-application | javascript, typescript, react, performance | In this article we will delve into the dynamic import of JavaScript modules and the lazy loading of React components. We will examine through a real example how they work and how we can make our web application faster by reducing our initial bundle size. It is common to use TypeScript for its static type system. We often need types from our dependencies, but if we don't pay attention it can ruin our hardly achieved code splitting. I will show you a fairly new syntax to avoid it.
## Dynamic import
Dynamic import has reached stage 4 of the TC39 process and is included in the ECMAScript 2020 language specification. Webpack, currently the most popular JavaScript module bundler, has supported it since v2, which was released in 2017. It makes it possible to load parts of your application at runtime. Maybe you use a heavy dependency only in specific cases, or you want to load only the desired localization files on a multi-language page based on the user's preferences. This way you can make your site more performant and lightweight at the same time.
The syntax of dynamic import is quite simple: it extends the `import` keyword so it can be followed by parentheses containing the path of your dependency.
```javascript
import('module/example').then((example) => console.log(example.default));
```
> This syntax looks like a function call, but it is not. `import` is not defined as a function; it is a specific operator.
The code above loads the module at runtime and logs its default export to the console. This is just a basic example: you can use anything exported by the module in the callback function, or load multiple modules at once with `Promise.all`.
All popular modern bundlers support it, and they automatically split dynamically imported modules into a separate bundle. All of the import statements of a given module or dependency should be dynamic across your project for the splitting to work as expected.
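As a quick sketch of loading multiple modules at once, the snippet below pulls in two Node.js built-ins in parallel — in a real app these would stand in for your own heavy dependencies:

```javascript
// Load two modules in parallel; the Promise resolves once both are ready.
// Each resolved entry is a module namespace object exposing the module's exports.
async function loadTools() {
  const [path, url] = await Promise.all([
    import('node:path'),
    import('node:url'),
  ]);
  return path.join('pages', 'index.html');
}
```

Bundlers apply the same code-splitting rules here: each dynamically imported module ends up in its own chunk.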
## React.lazy
It is also possible to import React components dynamically since React 16.6. `React.lazy` is a function which will handle your dynamic import and make a renderable React component from it. It has one parameter, which is a function returning the import:
```javascript
const MyComponent = React.lazy(() => import('./MyComponent'))
```
Module bundlers will handle dynamic imports as `React.lazy` parameter the same as described above.
It is important to know that the component must be the default export of the imported module. If it is not (e.g. a third-party library exports it by name), you can create a module in your application to re-export it as the default:
```javascript
export { Component as default } from 'react-library'
```
You can wrap the lazy loaded component by React Suspense with a fallback component. It will render the fallback while the dynamic component is loading.
```jsx
<Suspense fallback={<Loader />}>
<MyComponent />
</Suspense>
```
## Importing types
Previously, TypeScript tried to omit type-only imports from the compiled JavaScript code, but they could not always be accurately recognized and removed. In some edge cases an import ended up in your compiled code even though it was only used as a type. TypeScript 3.8 added a new syntax to the language to prevent this problem:
```javascript
import type { SomeType } from "external-dependency";
```
This way you can use external types confidently without pulling in a new dependency to your main bundle. You can read more about this in the [TypeScript release note](https://devblogs.microsoft.com/typescript/announcing-typescript-3-8-beta/#type-only-imports-exports).
## Real life example
At Sense/Net we are developing a headless CMS called sensenet. One part of our product is the admin-ui that makes content management easy for the customers. It is a complex React application with a lot of internal and external dependencies. Over time our bundle became huge, so we started to optimize it with multiple techniques. One of these is the better usage of lazy loading pages and dependencies.
The biggest improvement was achieved by lazy loading Monaco Editor. It is a code editor which powers Visual Studio Code. It is around 2 MB of parsed JavaScript code and only used on 3 or 4 pages by our application. You definitely don't want to load it for pages where it is not used.
We applied all the above methods to separate its code to a chunk and load it only on-demand. We use it in multiple isolated parts of our application so we had to make these changes for each import of the editor.
An interesting part was the usage of imported functions. We created a new React state which stores the return value of the function. We load and call the function inside a useEffect and show a loader until the state gets a value.
```javascript
// `EditorWrapper` wraps the lazily loaded editor; `Loader` below is a separate spinner component
export const EditorWrapper = (props) => {
const [uri, setUri] = useState()
useEffect(() => {
;(async () => {
const { monaco } = await import('react-monaco-editor')
setUri(monaco.Uri.parse(`sensenet:File`))
})()
}, [])
if (!uri) {
return <Loader />
}
...
}
```
## Final thoughts
In conclusion, JavaScript and its ecosystem give us a lot of opportunities to improve the performance of our applications. One of the most important aspects of user experience is speed, so it is definitely worth the effort. Hopefully in the future it will be even easier to achieve this kind of optimization.
If you need help or have any feedback, feel free to comment here.
Thanks for reading my article! If you enjoyed it give a star to [sensenet](https://github.com/SenseNet/sn-client) on GitHub. I hope that you'll [give a try to our headless CMS for free](https://www.sensenet.com/tryit), we are eager to hear your feedback.
| taki9 |
666,028 | Kotlin – SpringBoot MongoDB – Model One-to-One, One-to-Many Relationships Embedded Documents | Kotlin – SpringBoot MongoDB – Model One-to-One, One-to-Many Relationships Embedded Documents https:/... | 0 | 2021-04-14T16:30:37 | https://dev.to/loizenai/kotlin-springboot-mongodb-model-one-to-one-one-to-many-relationships-embedded-documents-11hd | kotlin, springboot, mongodb | Kotlin – SpringBoot MongoDB – Model One-to-One, One-to-Many Relationships Embedded Documents
https://grokonez.com/spring-framework/spring-boot/kotlin-spring-boot/kotlin-springboot-mongodb-model-one-one-one-many-relationships-embedded-documents
With MongoDB, we can structure related data by embedded documents. In general, embedding gives better performance for read operations. So in this tutorial, <a href="https://grokonez.com">JavaSampleApproach</a> will show you how to work with Embedded Documents using SpringBoot with the Kotlin language.
<!--more-->
<h2>I. Technologies</h2>
– Kotlin 1.2.20
– Apache Maven 3.5.2
– Spring Tool Suite – Version 3.9.0.RELEASE
– Spring Boot – 1.5.10.RELEASE
– MongoDB: v3.4.1
<h2>II. MongoDB – Embedded Documents</h2>
Embedded Documents are generally known as denormalized models. It is a way to structure related data in a single document. See below diagram:
<img src="https://grokonez.com/wp-content/uploads/2018/02/kotlin-mongodb-embedded-document-structure.png" alt="kotlin mongodb-embedded-document-structure" width="690" height="388" class="alignnone size-full wp-image-10628" />
In general, we decide to design data as embedded models in cases where the data contains one-to-one or one-to-many relationships between entities.
With an embedded data model we generally get better performance for read operations, and we can retrieve or update related data with a single atomic database operation.
However, a weak point is that embedding related data may lead to situations where documents keep growing after creation.
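To make the structure concrete, an embedded customer document in MongoDB could look like the sample below. The field names are invented for illustration: the "address" sub-document models a one-to-one relationship, while the "phones" array models a one-to-many relationship.
<pre><code class="language-json">{
  "_id": "5a9f1c2e3b4d5e6f7a8b9c0d",
  "name": "Jack Smith",
  "address": {
    "street": "123 Main St",
    "city": "Springfield"
  },
  "phones": [
    { "type": "home", "number": "555-0100" },
    { "type": "work", "number": "555-0101" }
  ]
}</code></pre>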
<h2>III. Practice</h2>
In the tutorial, we use SpringToolSuite to create a Kotlin SpringBoot project for MongoDB embedded documents:
<img src="https://grokonez.com/wp-content/uploads/2018/02/kotlin-mongodb-embedded-document-project-structure.png" alt="kotlin mongodb-embedded-document-project structure" width="411" height="217" class="alignnone size-full wp-image-10629" />
<strong>Step to do</strong>
– Create Kotlin SpringBoot project
– Create Document models
– Create MongoRepository
– Implement Client
– Run and check results
<h3>1. Create Kotlin SpringBoot project</h3>
Using Spring Tool Suite, create a Kotlin SpringBoot project. Then open pom.xml file, add dependencies:
<pre><code class="language-html"><dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-mongodb</artifactId>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
</dependency></code></pre>
More at:
Kotlin – SpringBoot MongoDB – Model One-to-One, One-to-Many Relationships Embedded Documents
https://grokonez.com/spring-framework/spring-boot/kotlin-spring-boot/kotlin-springboot-mongodb-model-one-one-one-many-relationships-embedded-documents | loizenai |
666,227 | Avoid Procrastination and Improve Focus | Notes What is Procrastination? People who procrastinate are often mislabeled as... | 11,415 | 2021-04-14T18:31:43 | https://dev.to/javidjms/avoid-procrastination-and-improve-focus-3l63 | productivity, zen, beginners, webdev | ## Notes
### What is Procrastination?
People who procrastinate are often mislabeled as lazy. Many of us even engage in self talk about how lazy or unfocused we are when we engage in procrastination. But procrastination is not a reflection of someone’s work ethic or their ability to focus. There’s more to it than that.
> When we procrastinate, we typically put off something that we find difficult, challenging, or uncomfortable in favor of something easier or more appealing.
Chronic procrastinators know that waiting will cause more harm than good. We know that this choice will ultimately lead to a worse outcome for us physically, emotionally, and otherwise, but we do it anyway
### The Real Reason We Procrastinate
Dr. Tim Pychyl is also a professor of psychology and a member of the Procrastination Research Group at Carleton University in Ottawa. He said, “Procrastination is an emotion regulation problem, not a time management problem.” Sirois and Pychyl teamed up in 2013 to research the notion that people place a priority on their immediate emotional needs over those of their future selves via procrastination. Here’s what they concluded.
> People engage in this irrational cycle of chronic procrastination because of an inability to manage negative moods around a task.
When faced with an “aversive” task, i.e. something that we find “boring, frustrating, lacking in meaning and/or structure,” we react with negative feelings and moods. Then we have a choice. We can get through those feelings and moods via “self-regulation” or we can succumb to the immediately protective choice of procrastination.
### The Problem with Procrastination
We believe that when we sit down to work tomorrow, or next week, or next month before the big deadline, we will feel like doing it. But we are wrong. When we choose the temporary reprieve from boredom, frustration, or challenge and kick the can down the road to our future selves, we are only making matters cumulatively worse.
> We aren’t dismissing our future selves as being unimportant or anything. We are just absolutely convinced that our future self will be better able to handle the given task.
For one thing, a constant current of anxiety and tension will be along for the entire procrastination ride. This nagging looming deadline will be churning in the background, coloring our daily mood and impacting our health and well being. The negative feelings we have about the task itself make us procrastinate. That, in turn, leads to ruminating negative thoughts about the act of procrastination itself. It’s a cycle that has a snowball effect, gathering more self blame, shame, anxiety, and stress along the way.
## Techniques
### 1. Daily Todo List (Don't forget to read/update it daily !)
Make a list of 3-6 things that you want to get done during the day. Put them in order from the most important or time-sensitive to the least. Start working on the first task until it is finished. Check it off, mark it out, and move to the next item. Keep going until the end of the day. Move any tasks that are left undone to the new list you will create for the next day.
<h5>Remember to start the day by reading the current list of tasks and update it if it is needed.</h5>
### 2. Digitally Declutter
Let’s face it. Distractions are very easily found these days. If you tend to wander into social media or headline news or your personal inbox instead of working on the more pressing task at hand, it’s time to digitally declutter your workspace.
The idea here is to make it harder to get distracted by removing the devices that hold those distractions. So, let’s say you need to create an outline for an upcoming presentation and you plan to work at the breakfast counter in the kitchen. Take only your laptop into the workspace. Put your phone, tablet, and any other devices you may have in another room and make sure they are on silent.
Close all tabs other than the document you are actively writing into. If you find that you cannot stop opening new windows to browse online, go old school and pull out an actual pad of paper and pen to write your outline in ink.
To take this a step further, declutter your actual devices so that distractions are not as easily accessible. Remove social media apps from your devices, delete games, and create folders to organize essential apps.
<h5>Remember to keep a specific workspace based on your task in order to avoid distractions</h5>
### 3. Bundle Up
We all love a package deal. Bring the bundle up benefit into your life to get things done. This technique works very well for self care and health habits as well as household chores and responsibilities that we all find so easy to put off.
We are generally only accountable to ourselves for things like working out, mowing the grass, cleaning the house, cooking healthy meals, or doing the laundry. Make these tasks less tiresome and more appealing by bundling them with something you really love.
If audiobooks are your jam, only listen to them when you are cleaning the house. Catch up on your favorite podcasts only while you cut the grass or cook dinner. Watch the latest binge-worthy show only when you are on the treadmill.
<h5>Find the trick to do annoying chores in a funny way.</h5>
### 4. Set a Timer (Ex: Custom, Pomodoro, Flow)
People can accomplish staggering volumes of work simply by committing to show up and do the work for a set period of time, no matter what. Writers past and present have found success with time techniques but it works with a wide variety of tasks. Here’s how it works.
Pick a task that you want to get done. This can be a routine, daily responsibility or a special project or work product you need to produce on a particular deadline. Decide how much time you have to work on the task each day or in this particular work period. It could be 15 minutes, 2 hours, or 60 seconds...pick a time period that makes sense for the task at hand. Set a timer for that time period and don’t stop until the timer goes off. No matter what!
But, what if your kid comes in the room and needs help with their lesson? What if you need to take a bathroom break? What if the doorbell rings or your mom calls or the dog starts barking madly to go out? Look, life happens. We know that. Hit the pause button on the timer, take care of the immediate need, and get right back to it.
<h5>Remember to start the timer each time you pick a task.</h5>
### 5. Worst Thing First
This is a little psychological trick that is both effective and super rewarding. Think about all the things you want to do today or that should happen everyday. Take care of the task that is the least appealing as soon as humanly possible. Get it over with and move on to the things that are more engaging, easy, and fun, or just less frustrating, dull, or challenging.
Once that “worst thing” is finished and done, there will be an immediate lift in spirit and a real sense of accomplishment. Ride that wave of success forward knowing that things will only get better from there! Worst Thing First is motivating, rewarding, and really works.
<h5>Remember to not pick the easiest task first</h5>
## Links
* [https://blog.doit.io/procrastinate/](https://blog.doit.io/procrastinate/) | javidjms |
666,563 | I got a 50$ gift from dev.to, and here it is 😍 | I did not imagine that a simple article like sending an email would perform so well. Similar tutorial... | 0 | 2021-04-15T03:15:21 | https://dev.to/aahnik/i-got-a-50-gift-from-dev-to-and-here-it-is-ggh | watercooler, discuss, news | I did not imagine that a simple article like sending an email would perform so well. Similar tutorials are in n no. of places on the internet. But still, people loved my article.
{% link https://dev.to/aahnik/how-to-send-emails-with-python-simply-explained-for-beginners-hea %}
This article 👆 reached the [top seven](https://dev.to/devteam/the-7-most-popular-dev-posts-from-the-past-week-4njl) for the week, and dev.to sent me this email.

I am excited to have this badge,

I thought, let me give the code back to the community. So here it is
`dev-top7-acmewh` ,
the first person to redeem it, gets 50$ worth of stuff from the [shop.dev.to](https://shop.dev.to)
If you have redeemed the code, please comment below, with what you bought. I would love to know.
Thank you for the love.
I am [@aahnik](https://github.com/aahnik) on GitHub. If you are feeling bored, do check out my projects, and hit a few stars.
Thank you again.
| aahnik |
666,858 | Flutter: Build Circular Progress with CustomPaint and Animation | Hey, you are on the right way if you come with a question about how to draw something in Flutter. For... | 0 | 2021-04-15T11:41:31 | https://medium.com/litslink-mobile-development/flutter-build-your-custom-painter-with-animation-795490acb156 | flutter, tutorial, dart | Hey, you are on the right way if you come with a question about how to draw something in Flutter. For example when you need to draw something like a Progress indicator in a circle shape.
I’ll show you how to do that on the following platforms in Flutter:
Android, iOS, Web, macOS | alexmelnyk |
667,065 | SocketCluster. The most underrated framework. Part 3: A Pub/Sub example and middleware | maarteNNNN / sc-underrated-framework-pubsub... | 11,955 | 2021-04-15T16:41:22 | https://dev.to/maartennnn/socketcluster-the-most-underrated-framework-part-3-a-pub-sub-example-and-middleware-a10 | socketcluster, opensource, node, javascript | {% github maarteNNNN/sc-underrated-framework-pubsub %}
## Introduction
In this part we will make a simple chat example to understand how Pub/Sub works in SocketCluster. The app can be tested across multiple browser windows. We will add some simple middlewares. A chat history and censorship for bad-words.
## Setup
Let's set up a blank project by running `socketcluster create sc-pubsub` and `cd sc-pubsub`. Let's install nodemon to restart the server automatically: `npm i -D nodemon`. For our bad-words censorship we will use a package called [bad-words](https://www.npmjs.com/package/bad-words) from NPM: `npm i -s bad-words`. The server can be run with `npm run start:watch`.
### Client code setup (don't give much attention to this, just copy and paste)
As in [part 2](https://dev.to/maartennnn/socketcluster-the-most-underrated-framework-part-2-setting-up-5b3o), we will use vanilla JavaScript in the HTML page shipped with SocketCluster in `public/index.html`. Let's delete everything inside the `style` tag and replace it with:
```css
* {
margin: 0;
padding: 0;
}
html {
height: 100vh;
width: 100vw;
}
.container {
height: 100vh;
display: flex;
align-items: center;
justify-content: center;
flex-direction: column;
}
.chat-history {
height: 70vh;
width: 75%;
border: 1px solid #000;
display: flex;
flex-direction: column;
overflow-y: auto;
}
.chat-input {
width: 75%;
height: 5vh;
border-left: 1px solid #000;
border-bottom: 1px solid #000;
border-right: 1px solid #000;
}
input {
box-sizing: border-box;
width: 100%;
height: 100%;
border: none;
padding: 0 1em;
}
strong,
small {
font-size: 11px;
color: gray;
}
.message {
padding: 0.25rem 1rem;
}
```
and delete everything inside `<div class="container">` tag and replace it with:
```html
<div id="chat-history" class="chat-history"></div>
<div class="chat-input">
<input placeholder="message" onkeyup="sendMessage(event)" />
</div>
```
Okay, now we have a basic chat page, nothing too fancy. We can focus on the actual logic of our chat application.
## The Pub/Sub functionality
### Client
Pub/Sub in SocketCluster is something that can work without writing any backend logic. We can create a channel on the client and the server makes this channel available for other clients.
```js
(async () => {
for await (const data of socket.subscribe('chat')) {
console.log(data);
}
})();
```
and we should create the function that listens for the `enter` key on the input to publish the message.
```js
const sendMessage = async (event) => {
if (event.keyCode === 13) {
try {
await socket.transmitPublish('chat', {
timestamp: Date.now(),
message: event.target.value,
socketId: socket.id,
});
event.target.value = '';
} catch (e) {
console.error(e);
}
}
};
```
The `transmitPublish` method does not expect a return value. If you do want a response you can look at [`invokePublish`](https://socketcluster.io/docs/api-ag-client-socket/#methods).
The `transmitPublish` sends an object with a `timestamp`, `message` and the `socketId`. The `socket.subscribe('chat')` [async iterable](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/for-await...of) will log any new data being pushed. Open two browser windows next to each other and open the Developer Tools in both windows. If you send a message in one window it should output it in both consoles.
We will display the messages in the `#chat-history` `div` by creating a function that creates an element, changes the text, adds a class and appends the element.
```js
const createMessage = ({ socketId, timestamp, message }) => {
const chatHistoryElement = document.getElementById('chat-history');
const messageElement = document.createElement('div');
messageElement.className = 'message';
messageElement.innerHTML = `<strong>${socketId}</strong> <small>${timestamp}:</small> ${message}`;
chatHistoryElement.appendChild(messageElement);
// Always scroll to the bottom
chatHistoryElement.scrollTop = chatHistoryElement.scrollHeight
};
```
change the previous `console.log(data)` inside the `socket.subscribe('chat')` loop to `createMessage(data)`.
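One optional tweak — not required for the example to work — is to make the raw `Date.now()` value readable before interpolating it. A small formatter could look like:

```javascript
// Convert an epoch-milliseconds timestamp into a readable local time string.
const formatTimestamp = (timestamp) => new Date(timestamp).toLocaleTimeString();
```

You could then use `formatTimestamp(timestamp)` instead of `timestamp` inside `createMessage`.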
Now if we send messages it should display them in the HTML instead of the Developer Tools. Pretty neat, huh? Upon to this point we still didn't do any server-side code.
### Server-side
There's only one problem with our app: a new window does not receive any older messages. This is where the server comes in. We will create a middleware that pushes every message to an array, for simplicity's sake. Another thing the middleware picks up is bad words. We can filter them and replace their characters with `*`.
```js
const Filter = require('bad-words');
const filter = new Filter();
...
const history = []
agServer.setMiddleware(
agServer.MIDDLEWARE_INBOUND,
async (middlewareStream) => {
for await (const action of middlewareStream) {
if (action.type === action.PUBLISH_IN) {
try {
// Censor the message
action.data.message = filter.clean(action.data.message);
} catch (e) {
console.error(e.message);
}
// Push to the array for history
history.push(action.data);
}
// Allow the action
action.allow();
}
},
);
...
```
We set an inbound middleware and pass it an async iterable stream. On every `action` of the stream we check whether `action.type` equals the constant provided by SocketCluster, `action.PUBLISH_IN`. If it does, we filter the message and allow the action. Alternatively we could block the action with `action.block()` if we don't want it to go through. [More on middleware here](https://socketcluster.io/docs/middleware-and-authorization/)
To implement the history it's pretty simple: we just create a constant `const history = []` and push every `action.data` to it, as shown in the code above.
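Note that a plain array grows without bound for as long as the server runs. A small, optional hardening step — not part of the original example, and the `MAX_HISTORY` value here is an arbitrary choice — is to cap it:

```javascript
// Keep at most MAX_HISTORY messages in memory, dropping the oldest first.
const MAX_HISTORY = 100;
const history = [];

function addToHistory(entry) {
  history.push(entry);
  if (history.length > MAX_HISTORY) {
    history.shift(); // drop the oldest message
  }
}
```

Inside the middleware you would then call `addToHistory(action.data)` instead of pushing directly.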
To initially get the history we `transmit` the data upon a socket connection (e.g. a new browser window).
```js
(async () => {
for await (let { socket } of agServer.listener('connection')) {
await socket.transmit('history', history);
}
})();
```
And create a receiver on the client which uses a loop to create the messages.
```js
(async () => {
for await (let data of socket.receiver('history')) {
for (let i = 0; i < data.length; i++) {
const m = data[i];
createMessage(m);
}
}
})();
```
I will try to add an article every two weeks.
| maartennnn |
667,466 | 250+ JS Resources to Master Programming 💥 Cheat Sheet | Hello World! I felt bored after completing the Ultimate Cheat Sheet Compilation, so I just decided to... | 11,340 | 2021-04-20T13:30:49 | https://dev.to/worldindev/200-js-resources-to-master-programming-3aj6 | javascript, webdev, productivity, beginners | `Hello World!` I felt bored after completing the [Ultimate Cheat Sheet Compilation](https://dev.to/worldindev/the-ultimate-compilation-of-cheat-sheets-100-268g), so I just decided to create another one! The most complete javascript cheat sheet and resource compilation you can find online!
🔖 - Waaait, don't leave this page without bookmarking it!! <a name="top"></a>
Read also:
{% link https://dev.to/worldindev/400-javascript-interview-questions-with-answers-2fcj %}
---
⚡ Giveaway ⚡
We are giving away any course you need on Udemy. Any price any course.
Steps to enter the giveaway
--> React to this post
--> [Subscribe to our Newsletter](https://worldindev.ck.page/) <-- Very important
PS: It took me around 10 hours to complete the article - So please remember the **like ❤️** and **super like 🦄**
{% collapsible Table of content %}
* [My cheat Sheet](#delta)
* [Projects ideas to become a javascript master ](#projects)
* [Other resources](#alpha)
1. [Complete Javascript cheat sheets](#1da)
2. [JS promises](#1db)
3. [JS Arrays](#1dc)
4. [JS loops](#1dd)
5. [Preprocessor](#1de)
* [CoffeScript](#1df1)
* [EJS](#1df2)
* [Babel](#1df3)
6. [JS Frameworks & Libraries](#frameworks)
* [Angular](#2f)
* [Vue](#2g)
* [React](#2g+)
* [JQuery](#2g+++)
* [Others](#2g++)
* [Node](#2c)
7. [Other resources](#oother)
* **Remember the like ❤️ and maybe super like🦄**
{% endcollapsible %}
---
{% collapsible For beginners %}
## What is JS (Javascript)
> JavaScript is a scripting or programming language that allows you to implement complex features on web pages — every time a web page does more than just sit there and display static information for you to look at — displaying timely content updates, interactive maps, animated 2D/3D graphics, scrolling video jukeboxes, etc. — you can bet that JavaScript is probably involved. It is the third layer of the layer cake of standard web technologies. [MDN](https://developer.mozilla.org/en-US/docs/Learn/JavaScript/First_steps/What_is_JavaScript)

## What it used for?
> To put things simply, JavaScript is an object orient programming language designed to make web development easier and more attractive. In most cases, JavaScript is used to create responsive, interactive elements for web pages, enhancing the user experience. Things like menus, animations, video players, interactive maps, and even simple in-browser games can be created quickly and easily with JavaScript. JavaScript is one of the most popular programming languages in the world. [BitDegree - What Is JavaScript Used For And Why You Should Learn It](https://www.bitdegree.org/tutorials/what-is-javascript-used-for/#:~:text=To%20put%20things%20simply%2C%20JavaScript,pages%2C%20enhancing%20the%20user%20experience.)
## Hello World In Javascript:
```js
alert("Hello World"); // Output data in an alert box in the browser window
confirm("Hello World"); // Opens up a yes/no dialog and returns true/false depending on user click
console.log("Hello World"); // Writes information to the browser console, good for debugging purposes
document.write("Hello World"); // Write directly to the HTML document
prompt("Remember the like!"); // Creates a dialogue for user input
```
## Resources to learn it:
- [Mozilla’s JavaScript Guide](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide)
- JavaScript track on Codecademy: Interactive tutorials for beginners.
- JavaScript for Cats by Max Ogden
- Eloquent JavaScript by Marijn Haverbeke
- Wikibooks’ JavaScript book
- JavaScript Lectures by Douglas Crockford
- You Don't Know JS - Possibly the best book written on modern JavaScript, completely readable online for free, or can be bought to support the author.
- braziljs/js-the-right-way - An easy-to-read, quick reference for JS best practices, accepted coding standards, and links around the Web.
- JSbooks - Directory of free JavaScript ebooks.
- Superhero.js - A collection of resources about creating, testing and maintaining a large JavaScript code base.
- SJSJ - Simplified JavaScript Jargon is a community-driven attempt at explaining the loads of buzzwords making the current JavaScript ecosystem in a few simple words.
- How to Write an Open Source JavaScript Library - A comprehensive guide through a set of steps to publish a JavaScript open source library.
- JavaScript Tutorials - Learn Javascript online from a diverse range of user ranked online tutorials.
- Functional-Light JavaScript - Pragmatic, balanced FP in JavaScript.
- Clean Code JavaScript - Clean Code concepts adapted for JavaScript.
[List at GitHub - Awesome Javascript - By Alexandru Gherasim](https://github.com/ShinobiWPS/awesome-javascript#worth-reading)
### At Reddit - [What 10 Things Should a Serious Javascript Developer Know Right Now?](https://www.reddit.com/r/javascript/comments/6mlc9d/what_10_things_should_a_serious_javascript/)
- **Scope.** If you don't understand this intimately then you aren't that serious about this language. This is the number one point intentionally and I cannot stress it enough.
- **Architecture.** You don't have to be a master software architect, but if you cannot perform some basic planning and put pieces together without massive layers of tooling you are an imposter. Expecting frameworks and other tools to simply do it for you isn't very impressive.
- **DOM.** It is very common to see developers hiding from the DOM by layers of abstractions and other stupid crap. querySelectors are great, but are also 2800x slower than the standard DOM methods. That isn't trivial. These methods are super simple, so there is no valid excuse for developers fumbling over this or hiding in fear. http://prettydiff.com/guide/unrelated_dom.xhtml
- **Node.js** If you are a serious developer should have a pretty solid grasp of how to walk the file system. You should understand how to conveniently read files as text or less conveniently read files as bit for bit binary buffers.
- **Timing and asynchronous operations.** Events, timers, network requests are all asynchronous and separate from each other and exist both in Node and in the browser. You have to be able to understand how to work with callbacks or promises.
- **Accessibility.** The interactions imposed by JavaScript can present accessibility barriers. A serious JavaScript developer is already familiar with WCAG 2.0 and knows how to work within its recommendations or when to push back on violating business requirements.
- **Security.** You need to have at least a basic understanding of security violations, security controls, and privacy. You don't need to be a CISSP, but you need to be able to supply recommendations and avoid obvious failures. If you cannot get this right in the most basic sense you aren't a serious developer.
- **Data structures.** You need to understand how to organize data in a way that allows the fastest possible execution without compromising maintenance. This is something that is learned through academic study and repeated experience writing applications.
- **Presentation and semantics.** You really need to have a basic understanding how to properly organize the content your users will consume and how to present in a consumable way efficiently. This is something almost completely learned from experience only. You might think CSS and HTML are simple skills that can be picked up when needed, but you would be absolutely wrong.
- **Knowing when to avoid the bullshit.** Many developers lack the years of experience to be confident in their performance.... so some of these developers will try to fake it. Don't be an imposter, because everybody will see straight through it. Hoping mountains of abstractions, tooling, frameworks, compilers, and other bullshit will save you just bogs down your application and screws over your teammates. If you aren't confident then be honest about that and seek mentorship or get involved with open source software outside of work.

[Source](https://www.mindmeister.com/283065198/getting-started-with-javascript?fullscreen=1)
{% endcollapsible %}
## JS Cheat Sheet: <a name="delta"></a>
--> Download the PDF Version of this Cheat Sheet [here](https://worldindev.ck.page/)
### Include Javascript:
```js
<script type="text/javascript"></script>
// or Include it in an external file (this is a comment)
/* This is also another way you can insert comments,
Multiline normally */
<script src="myscript.js"></script>
// PS: Remember to sub to our newsletter for the Giveaway!
```
---
### Variables:
```js
var myVariable = 22; //this can be a string or number. var is globally defined
let myVariable = 22; //this can be a string or number. let can be reassigned
const myVariable = 22; //this can be a string or number. can't be reassigned
```
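A short sketch of how the three declarations differ in practice (the variable names are just placeholders):

```js
// var is function-scoped and can even be redeclared
var a = 1;
var a = 2; // allowed

// let is block-scoped: it can be reassigned, but not redeclared in the same scope
let b = 1;
b = 2; // allowed

// const can be neither redeclared nor reassigned
const c = 1;
// c = 2; // would throw: TypeError: Assignment to constant variable
```

As a rule of thumb, prefer `const`, use `let` when you need to reassign, and avoid `var` in modern code.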
[JavaScript Variables - w3schools](https://www.w3schools.com/js/js_variables.asp)
{% link https://dev.to/devlorenzo/js-hide-and-show-32og %}
---
### Data Types:
```js
//string
let string = 'ASCII text';
//int
let integer = 123456789;
//float
let float = 123.456;
//boolean, can be true or false
let t = true;
let f = false;
//undefined
let undef;//defaults to undefined
let undef = undefined;//not common, use null
//null
let nul = null;
//array
let arr = ['Hello','my','name','is','Dr.Hippo',123,null];
//object
let person = {'name':'John Smith','age':27};
//function
let fun = function(){
return 42;
}
```
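Note that JavaScript has a single `number` type (no separate int/float), and `typeof` shows how these values are classified at runtime:

```js
// typeof reports the runtime type of a value
console.log(typeof 'ASCII text');   // "string"
console.log(typeof 123.456);        // "number" (ints and floats share one type)
console.log(typeof true);           // "boolean"
console.log(typeof undefined);      // "undefined"
console.log(typeof null);           // "object" (a long-standing language quirk)
console.log(typeof [1, 2, 3]);      // "object" (use Array.isArray to detect arrays)
console.log(typeof function () {}); // "function"
```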

[Source - Datatypes In JavaScript - c-sharpcorner.com](https://www.c-sharpcorner.com/article/datatypes-in-javascript/)
---
### Operators
Basic Operators
```js
+ — Addition
- — Subtraction
* — Multiplication
/ — Division
(...) — Grouping operator, operations within brackets are executed earlier than those outside
% — Modulus (remainder )
++ — Increment numbers
-- — Decrement numbers
```
{% link https://dev.to/devlorenzo/js-change-text-on-hover-3945 %}
Comparison Operators
```js
== Equal to
=== Equal value and equal type
!= Not equal
!== Not equal value or not equal type
> Greater than
< Less than
>= Greater than or equal to
<= Less than or equal to
? Ternary operator
```
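The difference between `==` and `===`, plus the ternary operator, in a small example:

```js
// == compares after type coercion, === compares value AND type
console.log(5 == '5');   // true  (the string '5' is coerced to a number)
console.log(5 === '5');  // false (different types)
console.log(0 == false); // true
console.log(0 === false); // false

// The ternary operator is a compact if/else expression
const age = 20;
const status = age >= 18 ? 'adult' : 'minor';
console.log(status); // "adult"
```

Prefer `===` unless you have a specific reason to rely on coercion.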
Logical Operators
```js
&& Logical and
|| Logical or
! Logical not
```
Bitwise Operators
```js
& AND statement
| OR statement
~ NOT
^ XOR
<< Left shift
>> Right shift
>>> Zero fill right shift
```
---
### Loops
for - loops through a block of code a number of times.
```js
for (statement 1; statement 2; statement 3) {
  // Coooode
}
```
for/in - loops through the properties of an object.
for/of - loops through the values of an iterable object.
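For example (the `person` and letters values here are just sample data):

```js
const person = { name: 'John Smith', age: 27 };

// for...in iterates over an object's enumerable property names (keys)
const keys = [];
for (const key in person) {
  keys.push(key); // "name", then "age"
}

// for...of iterates over the values of an iterable (array, string, Map, ...)
const values = [];
for (const letter of ['a', 'b', 'c']) {
  values.push(letter);
}

console.log(keys);   // ["name", "age"]
console.log(values); // ["a", "b", "c"]
```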
while - loops through a block of code while a specified condition is true.
```js
let i = 0;
while (i < 10) {
  console.log(i);
  i++;
}
```
Break and Continue
> When you use break without a label, it terminates the innermost enclosing while, do-while, for, or switch immediately and transfers control to the following statement.
When you use break with a label, it terminates the specified labeled statement.
> When you use continue without a label, it terminates the current iteration of the innermost enclosing while, do-while, or for statement and continues execution of the loop with the next iteration. In contrast to the break statement, continue does not terminate the execution of the loop entirely. In a while loop, it jumps back to the condition. In a for loop, it jumps to the increment-expression.
When you use continue with a label, it applies to the looping statement identified with that label.
[Source - Loops and iteration - MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Loops_and_iteration#break_statement)
---
{% link https://dev.to/devlorenzo/js-on-scroll-events-4232 %}
### Strings

[dev.to Article - 10 JavaScript string methods you should know - by @frugencefidel ](https://dev.to/frugencefidel/10-javascript-string-methods-you-should-know-4l76)
Escape characters
```js
\' — Single quote
\" — Double quote
\\ — Backslash
\b — Backspace
\f — Form feed
\n — New line
\r — Carriage return
\t — Horizontal tabulator
\v — Vertical tabulator
```
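A quick example of the most common escapes in action:

```js
// \' lets a single quote live inside a single-quoted string
const quote = 'It\'s a "quoted" string';
console.log(quote); // It's a "quoted" string

// \n starts a new line, \t inserts a horizontal tab
const multi = 'line one\nline two\twith a tab';
console.log(multi);
```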
---
### Array and array methods

[Top 10 JavaScript Array Methods You Should Know - By Rachel Cole at morioh.com](https://morioh.com/p/3ba421a8a63d)
```js
concat(arr1,[...]) // Joins two or more arrays, and returns a copy of the joined arrays
copyWithin(target,[start],[end]) // Copies array elements within the array, to and from specified positions
entries() // Returns a key/value pair Array Iteration Object
every(function(currentval,[index],[arr]),[thisVal]) // Checks if every element in an array pass a test
fill(val,[start],[end]) // Fill the elements in an array with a static value
filter(function(currentval,[index],[arr]),[thisVal]) // Creates a new array with every element in an array that pass a test
find(function(currentval,[index],[arr]),[thisVal]) // Returns the value of the first element in an array that pass a test
findIndex(function(currentval,[index],[arr]),[thisVal]) // Returns the index of the first element in an array that pass a test
forEach(function(currentval,[index],[arr]),[thisVal]) // Calls a function for each array element
from(obj,[mapFunc],[thisVal]) // Creates an array from an object
includes(element,[start]) // Check if an array contains the specified element
indexOf(element,[start]) // Search the array for an element and returns its position
isArray(obj) // Checks whether an object is an array
join([seperator]) // Joins all elements of an array into a string
keys() // Returns a Array Iteration Object, containing the keys of the original array
lastIndexOf(element,[start]) // Search the array for an element, starting at the end, and returns its position
map(function(currentval,[index],[arr]),[thisVal]) // Creates a new array with the result of calling a function for each array element
pop() // Removes the last element of an array, and returns that element
push(item1,[...]) // Adds new elements to the end of an array, and returns the new length
reduce(function(total,currentval,[index],[arr]),[initVal]) // Reduce the values of an array to a single value (going left-to-right)
reduceRight(function(total,currentval,[index],[arr]),[initVal]) // Reduce the values of an array to a single value (going right-to-left)
reverse() // Reverses the order of the elements in an array
shift() // Removes the first element of an array, and returns that element
slice([start],[end]) // Selects a part of an array, and returns the new array
some(function(currentval,[index],[arr]),[thisVal]) // Checks if any of the elements in an array pass a test
sort([compareFunc]) // Sorts the elements of an array
splice(index,[quantity],[item1,...]) // Adds/Removes elements from an array
toString() // Converts an array to a string, and returns the result
unshift(item1,...) // Adds new elements to the beginning of an array, and returns the new length
valueOf() // Returns the primitive value of an array
```
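Several of these methods shine when chained. For example, combining `filter`, `map`, and `reduce` (the sample numbers are arbitrary):

```js
const numbers = [1, 2, 3, 4, 5, 6];

// filter keeps elements passing a test, map transforms each element,
// reduce folds the result into a single value
const doubledEvens = numbers
  .filter(n => n % 2 === 0) // [2, 4, 6]
  .map(n => n * 2);         // [4, 8, 12]

const sum = doubledEvens.reduce((total, n) => total + n, 0); // 24

console.log(doubledEvens, sum);
```

None of these three methods mutate the original array, which makes chains like this safe to compose.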
---
### Functions
Syntax
```js
function name(parameter1, parameter2, parameter3) {
  // code to be executed
}
```
Examples
```js
function myFunction(p1, p2) {
  return p1 * p2; // The function returns the product of p1 and p2
}
let x = myFunction(4, 3); // Function is called, return value will end up in x

function myFunction(a, b) {
  return a * b; // Function returns the product of a and b
}

// Convert Fahrenheit to Celsius:
function toCelsius(fahrenheit) {
  return (5/9) * (fahrenheit-32);
}
document.getElementById("demo").innerHTML = toCelsius(77);
```
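Besides function declarations, functions can also be written as expressions or arrow functions; since functions are values, they can be stored in variables and passed around (the names here are illustrative):

```js
// Function expression
const multiply = function (a, b) {
  return a * b;
};

// Arrow function: shorter syntax, implicit return for single expressions
const add = (a, b) => a + b;

console.log(multiply(4, 3)); // 12
console.log(add(4, 3));      // 7
```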
[Source - JavaScript Functions - w3schools](https://www.w3schools.com/js/js_functions.asp)
---
### Maths
Methods

{% link https://dev.to/worldindev/finally-how-to-understand-math-awesome-resource-list-4end %}
Properties
```js
E — Euler’s number
LN2 — The natural logarithm of 2
LN10 — Natural logarithm of 10
LOG2E — Base 2 logarithm of E
LOG10E — Base 10 logarithm of E
PI — The number PI
SQRT1_2 — Square root of 1/2
SQRT2 — The square root of 2
```
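These constants combine with the `Math` methods, for example:

```js
console.log(Math.PI);           // 3.141592653589793
console.log(Math.round(4.7));   // 5
console.log(Math.max(3, 9, 2)); // 9

// A common pattern: a random integer from 0 to 9
console.log(Math.floor(Math.random() * 10));
```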
---
### Date
{% link https://dev.to/devlorenzo/js-how-to-get-current-date-1km %}
Javascript date objects allow us to work with date and time. We can retrieve information from one by creating a date and assigning it to a variable:
```js
let d = new Date(); // We usually call it d or date
```
The Date object provides a lot of different methods; the most used cover year, month, day, hours, minutes, seconds, and milliseconds. Remember that you always have to specify the full year (1950, not just 50), that counts often start at 0 (so, for example, December is month 11 and seconds run from 0 to 59), and that hours use a 24-hour format.
You can then retrieve a lot of different info from a date:
```js
d.getDate() Returns the day of the month (from 1-31)
d.getDay() Returns the day of the week (from 0-6)
d.getFullYear() Returns the year
d.getHours() Returns the hour (from 0-23)
d.getMilliseconds() Returns the milliseconds (from 0-999)
d.getMinutes() Returns the minutes (from 0-59)
d.getMonth() Returns the month (from 0-11)
d.getSeconds() Returns the seconds (from 0-59)
```
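For instance, constructing a specific date (note the zero-indexed month) and reading it back:

```js
// new Date(year, monthIndex, day) — month 4 is May because months start at 0
const d = new Date(2021, 4, 27);

console.log(d.getFullYear()); // 2021
console.log(d.getMonth());    // 4 (May)
console.log(d.getDate());     // 27
```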
We can also set things... [Open the article](https://dev.to/devlorenzo/js-how-to-get-current-date-1km) to continue reading
---
### Events
{% details Mouse %}
onclick - The event occurs when the user clicks on an element
oncontextmenu - User right-clicks on an element to open a context menu
ondblclick - The user double-clicks on an element
onmousedown - User presses a mouse button over an element
onmouseenter - The pointer moves onto an element
onmouseleave - Pointer moves out of an element
onmousemove - The pointer is moving while it is over an element
onmouseover - When the pointer is moved onto an element or one of its children
onmouseout - User moves the mouse pointer out of an element or one of its children
onmouseup - The user releases a mouse button while over an element
{% enddetails %}
{% details Keyboard %}
onkeydown - When the user is pressing a key down
onkeypress - The moment the user starts pressing a key
onkeyup - The user releases a key
{% enddetails %}
{% details Frame %}
onabort - The loading of a media is aborted
onbeforeunload - Event occurs before the document is about to be unloaded
onerror - An error occurs while loading an external file
onhashchange - There have been changes to the anchor part of a URL
onload - When an object has loaded
onpagehide - The user navigates away from a webpage
onpageshow - When the user navigates to a webpage
onresize - The document view is resized
onscroll - An element’s scrollbar is being scrolled
onunload - Event occurs when a page has unloaded
{% enddetails %}
{% details Form %}
onblur - When an element loses focus
onchange - The content of a form element changes (for <input>, <select> and <textarea>)
onfocus - An element gets focus
onfocusin - When an element is about to get focus
onfocusout - The element is about to lose focus
oninput - User input on an element
oninvalid - An element is invalid
onreset - A form is reset
onsearch - The user writes something in a search field (for <input type="search">)
onselect - The user selects some text (for <input> and <textarea>)
onsubmit - A form is submitted
{% enddetails %}
{% details Drag %}
ondrag - An element is dragged
ondragend - The user has finished dragging the element
ondragenter - The dragged element enters a drop target
ondragleave - A dragged element leaves the drop target
ondragover - The dragged element is on top of the drop target
ondragstart - User starts to drag an element
ondrop - Dragged element is dropped on the drop target
{% enddetails %}
{% details Clipboard %}
oncopy - User copies the content of an element
oncut - The user cuts an element’s content
onpaste - A user pastes content in an element
{% enddetails %}
{% details Media %}
onabort - Media loading is aborted
oncanplay - The browser can start playing media (e.g. a file has buffered enough)
oncanplaythrough - When browser can play through media without stopping
ondurationchange - The duration of the media changes
onended - The media has reached its end
onerror - Happens when an error occurs while loading an external file
onloadeddata - Media data is loaded
onloadedmetadata - Metadata (like dimensions and duration) is loaded
onloadstart - Browser starts looking for specified media
onpause - Media is paused either by the user or automatically
onplay - The media has been started or is no longer paused
onplaying - Media is playing after having been paused or stopped for buffering
onprogress - Browser is in the process of downloading the media
onratechange - The playing speed of the media changes
onseeked - User is finished moving/skipping to a new position in the media
onseeking - The user starts moving/skipping
onstalled - The browser is trying to load the media but it is not available
onsuspend - Browser is intentionally not loading media
ontimeupdate - The playing position has changed (e.g. because of fast forward)
onvolumechange - Media volume has changed (including mute)
onwaiting - Media paused but expected to resume (for example, buffering)
animationend - A CSS animation is complete
animationiteration - CSS animation is repeated
animationstart - CSS animation has started
{% enddetails %}
{% details Other %}
transitionend - Fired when a CSS transition has completed
onmessage - A message is received through the event source
onoffline - Browser starts to work offline
ononline - The browser starts to work online
onpopstate - When the window’s history changes
onshow - A <menu> element is shown as a context menu
onstorage - A Web Storage area is updated
ontoggle - The user opens or closes the <details> element
onwheel - Mouse wheel rolls up or down over an element
ontouchcancel - Screen touch is interrupted
ontouchend - User finger is removed from a touch screen
ontouchmove - A finger is dragged across the screen
ontouchstart - Finger is placed on touch screen
{% enddetails %}
---
### Asynchronous JS and Error handling
{% link https://dev.to/devlorenzo/js-settimeout-and-setinterval-1pbf %}
setTimeout will wait a given delay and then execute the action once. setInterval will execute the same action repeatedly, once every interval.
Both can be written inline or multiline; I recommend using multiline 99% of the time. It's important to notice that they work in milliseconds.
SetTimeout:
```js
setTimeout(function(){
alert("Hello World!");
}, 2000); // 2 seconds
setTimeout(function(){ alert("The fifth episode of the series"); }, 3000);
```
SetInterval:
```js
setInterval(function() {
alert("I want to show you another Javascript trick:");
}, 1000);
setInterval(function() {alert("How to work with SetTimeout and SetInterval");}, 1000);
```
* If you want to remove the initial delay, you have to run the code once outside the interval function first. I recommend you save this code in a separate function you can call whenever you need. [Continue reading here](https://dev.to/devlorenzo/js-settimeout-and-setinterval-1pbf)
---
First, it's important to notice that the majority of backend actions have an unknown result: we don't know if they will work when we write our code. So we always have to write two different codes, one if the action works, another if the action results in an error. This is exactly how a try/catch works: we submit a code to try; if it works, the code continues, and if it doesn't, we catch the error (avoiding an app crash) and run another code. This is a very common pattern used beyond web development (also in Android app development with Java, for example).
#### Try / Catch
```js
try {
// Try to run this code
// For example make a request to the server
}
catch(e) {
console.log(e)
// if any error, Code throws the error
// For example display an error message to the user
}
```
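A concrete example of this pattern: parsing JSON that may or may not be valid (the `safeParse` name is just illustrative):

```js
function safeParse(text) {
  try {
    return JSON.parse(text); // may throw a SyntaxError on invalid JSON
  } catch (e) {
    console.log(e.name); // "SyntaxError"
    return null;         // fall back instead of crashing the app
  }
}

console.log(safeParse('{"ok": true}')); // { ok: true }
console.log(safeParse('not json'));     // null
```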
#### Promises
The big problem with try/catch is that when you have to nest it (and you will have to), it gets really messy and difficult to read and write. So JavaScript supports promises (on which async functions are built):
Syntax: `new Promise(executor)` where
`executor = (accept, reject) => {}`
```js
let asynchronous_function = (number) => {
  return new Promise((accept, reject) => {
    // accept or reject here depending on the outcome
  });
}
```
This function returns a promise object.
If the function ends well we call accept(), otherwise reject()
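A small sketch of producing and consuming a promise (resolve/reject are the conventional parameter names; "accept" above plays the role of resolve, and `divide` is just an illustrative example):

```js
const divide = (a, b) =>
  new Promise((resolve, reject) => {
    if (b === 0) reject(new Error('Division by zero'));
    else resolve(a / b);
  });

// .then handles the fulfilled case, .catch the rejected one
divide(10, 2)
  .then(result => console.log(result)) // 5
  .catch(err => console.log(err.message));

divide(1, 0)
  .then(result => console.log(result))
  .catch(err => console.log(err.message)); // "Division by zero"
```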
[More here](https://dev.to/devlorenzo/js-how-to-handle-errors-fi6)
[Back to Top - 🔝](#top)
---
## Projects ideas to become a javascript master <a name="projects"></a>
{% link https://dev.to/worldindev/10-projects-to-become-a-javascript-master-giveaway-2o4k %}
a) [General (for beginners)](https://dev.to/worldindev/10-projects-to-become-a-javascript-master-giveaway-2o4k)
1. Converters
2. Word Counter
3. Timer / Clock
4. Random password generator
5. Calculator
b) [Games](https://dev.to/worldindev/10-projects-to-become-a-javascript-master-giveaway-2o4k)
1. Guess the number
2. Math time!
3. Other Games
c) [Social & Websites](https://dev.to/worldindev/10-projects-to-become-a-javascript-master-giveaway-2o4k)
1. Log-in, Sign-up
1. Filter
2. To-Do List
3. Social
3. Portfolio
[Open the post](https://dev.to/worldindev/10-projects-to-become-a-javascript-master-giveaway-2o4k) for more info about each project!
[Back to Top - 🔝](#top)
---
## Other resources:<a name="alpha"></a>
{% collapsible Table of content %}
* [My cheat Sheet](#delta)
* [Projects ideas to become a javascript master ](#projects)
* [Other resources](#alpha)
1. [Complete Javascript cheat sheets](#1da)
2. [JS promises](#1db)
3. [JS Arrays](#1dc)
4. [JS loops](#1dd)
5. [Preprocessor](#1de)
* [CoffeeScript](#1de1)
* [EJS](#1de2)
* [Babel](#1de3)
6. [JS Frameworks & Libraries](#frameworks)
* [Angular](#2f)
* [Vue](#2g)
* [React](#2g+)
* [JQuery](#2g+++)
* [Others](#2g++)
* [Node](#2c)
{% endcollapsible %}
### Complete JS cheat sheets:<a name="1da"></a>

[By dev hints](https://devhints.io/es6)

Incredible resource --> [By website setup](https://websitesetup.org/javascript-cheat-sheet/)
> [PDF Version](https://websitesetup.org/wp-content/uploads/2020/09/Javascript-Cheat-Sheet.pdf)
Two Others:
[By overapi](https://overapi.com/javascript)
[By HTML cheat sheet.com - Interactive](https://htmlcheatsheet.com/js/)
---
#### JS promises (Asynchronous JS):<a name="1db"></a>
[Dev.to article](https://dev.to/devlorenzo/js-how-to-handle-errors-fi6)
{% link https://dev.to/devlorenzo/js-how-to-handle-errors-fi6 %}
[Dev.to article](https://dev.to/devlorenzo/js-settimeout-and-setinterval-1pbf)
{% link https://dev.to/devlorenzo/js-settimeout-and-setinterval-1pbf %}

[By codecademy](https://www.codecademy.com/learn/introduction-to-javascript/modules/javascript-promises/cheatsheet)
---
#### JS Arrays: <a name="1dc"></a>

[By dev hints](https://devhints.io/js-array)
---
#### JS Loops:<a name="1dd"></a>

[By codecademy](https://www.codecademy.com/learn/introduction-to-javascript/modules/learn-javascript-loops/cheatsheet)
---
#### JS preprocessor:<a name="1de"></a>
##### CoffeeScript:<a name="1de1"></a>
[CoffeeScript website](https://coffeescript.org/)

Others:
[At karloespiritu.github.io](https://karloespiritu.github.io/cheatsheets/coffeescript/)
[Quick reference - By autotelicum - PDF Version](https://autotelicum.github.io/Smooth-CoffeeScript/CoffeeScript%20Quick%20Ref.pdf)
[JS to CoffeeScript](https://awsm-tools.com/code/coffee2js)
---
##### EJS:<a name="1de2"></a>
[EJS website](https://ejs.co/)
[EJS docs](https://ejs.co/#docs)

[At one compiler](https://onecompiler.com/cheatsheets/ejs-embedded-javascript-templates)
[Or at GitHub](https://gist.github.com/cmugla/daf2ab86937b9983c30f3724914644da)
---
##### Babel:<a name="1de3"></a>
[Babel website](https://babeljs.io/)
[Babel docs](https://babeljs.io/docs/en/)

[By karloespiritu.io](https://karloespiritu.github.io/cheatsheets/babel/)
[Or at Medium](https://medium.com/@housecor/babel-6-cheat-sheet-7344f7936f2d)
---
## JavaScript-based Frameworks & Libraries:<a name="frameworks"></a>
[Article Angular vs vue vs react at codeinwp.com](https://www.codeinwp.com/blog/angular-vs-vue-vs-react/)

[Best Javascript Frameworks - article at hackr.io](https://hackr.io/blog/best-javascript-frameworks)
### Angular<a name="2f"></a>

[By angular.io](https://angular.io/guide/cheatsheet)

[By dev hints](https://devhints.io/angularjs)
---
### Vue<a name="2g"></a>

[By vue mastery](https://www.vuemastery.com/pdf/Vue-Essentials-Cheat-Sheet.pdf)

[By dev hints](https://devhints.io/vue)
[Other - By marozed](https://marozed.com/vue-cheatsheet)
---
### React<a name="2g+"></a>

[By dev hints](https://devhints.io/react)
Others:
[By react cheat sheet.com](https://reactcheatsheet.com/)
[At GitHub: React + Typescript cheat sheet](https://github.com/typescript-cheatsheets/react)
---
### JQuery<a name="2g+++"></a>
[AJAX intro + cheat sheet at GitHub](https://gist.github.com/joelrojo/c54765a748cd87a395a2b865359d6add)

[By oscarotero.com](https://oscarotero.com/jquery/) - Really well done

[By Website Setup - PDF Version](https://websitesetup.org/wp-content/uploads/2017/01/wsu-jquery-cheat-sheet.pdf)

[By make website hub](https://makeawebsitehub.com/jquery-mega-cheat-sheet/)
> [PDF Version](https://makeawebsitehub.com/wp-content/uploads/2015/09/jquery-mega-cheat-sheet-Printable.pdf)

[Article about top 10 jquery cheat sheets](https://blog.templatetoaster.com/jquery-cheat-sheet/)
[Or by over API](https://overapi.com/jquery)
---
### Others<a name="2g++"></a>
##### Ember.js:

[Website](https://www.emberjs.com/)
##### Meteor:

[Website](https://www.meteor.com/)
##### Mithril:

[Website](https://mithril.js.org/)
##### [Node](#2c)

[Website](https://nodejs.org/en/)
---
## Other Resources: <a name="oother"></a>
### Advanced Topics
* How Browsers Work: Behind the scenes of modern web browsers
* Learning Advanced JavaScript by John Resig
* JavaScript Advanced Tutorial by HTML Dog
* WebGL Fundamentals
* Learning JavaScript Design Patterns by Addy Osmani
* Intro to Computer Science in JavaScript
* Immutable data structures for JavaScript
### Libraries/Frameworks/Tools
* Introduction to React video
* React Interview Questions
* JavaScript Promises: A Tutorial with Examples
* Khan Academy: Making webpages interactive with jQuery
* A Beginner’s Guide To Grunt: Build Tool for JavaScript
* Getting Started with Underscore.js
* jQuery Course by Code School
* Thinkster.io Courses on React and Angular
* The Languages And Frameworks You Should Learn In 2016
* ES6 Tools List on GitHub
* Getting Started with Redux
### Server-side JavaScript
* Real-time Web with Node.js Course by Code School
* NodeSchool Course
* Node.js First Look on Lynda
* All about NodeJS Course on Udemy
* Server-side Development with NodeJS Course on Coursera
[Source (with links) - 50 resources to help you start learning JavaScript - By Daniel Borowski - At Medium](https://medium.com/coderbyte/50-resources-to-help-you-start-learning-javascript-in-2017-4c70b222a3b9)
Read also:
{% link https://dev.to/worldindev/10-projects-to-become-a-javascript-master-giveaway-2o4k %}
{% link https://dev.to/worldindev/finally-how-to-understand-math-awesome-resource-list-4end %}
{% link https://dev.to/worldindev/25-udemy-courses-worth-your-money-time-2e5j %}
Thanks for reading and Happy coding ❤
---
Full Compilation of cheat sheets:
{% link https://dev.to/devlorenzo/the-ultimate-compilation-of-cheat-sheets-100-268g %}
---
## ⚡Giveaway ⚡
We are giving away any course you need on Udemy. Any price any course.
Steps to enter the giveaway
--> React to this post
--> [Subscribe to our Newsletter](https://worldindev.ck.page/) <-- Very important
--> [Follow me on Twitter](https://twitter.com/DevLorenzo1) <-- x2 Chances of winning
The winner will be announced on May 1, Via Twitter
---
## **[Subscribe to my Newsletter!](https://worldindev.ck.page/)**
* The PDF version of this article!!!
* Monday: Weekly digeeeeeests!!!
* Wednesday: Discussions and dev insights - We debate around developer lives - Examples: The importance of coffee behind development / If you weren't a dev, you'd be a...
* Gifts, lots of 🎁. We send cheat sheets, coding advice, productivity tips, and many more!
* That's --> free <-- and you help me!
---
PS: It took me around 10 hours to complete the article - So please remember the **like ❤️** and **super like 🦄** - Let's reach the top of the month this time 🚀
[Back to Top - 🔝](#top)
---
# Replicate AWS CodeCommit Repositories Between Regions Using CodeBuild And CodePipeline
*Published 2021-04-16 · [dev.to/apatil88](https://dev.to/apatil88/replicate-aws-codecommit-repositories-between-regions-using-codebuild-and-codepipeline-5fh1) · tags: aws, devops, programming*

Replicating code repositories from one AWS region to another is a commonly used DevOps task. This article demonstrates how to set up continuous replication of an AWS CodeCommit repository among multiple AWS regions using AWS CodeBuild and AWS CodePipeline. This approach can be useful for maintaining backups of CodeCommit repositories in different regions.
A major benefit of using this approach is that the replication process can be easily set up to trigger based on events, such as commits made to the repository.
**Note:** You need to have a basic understanding of CodeCommit, CodeBuild, CodePipeline, and Identity Access Management (IAM). Please refer to the AWS Documentation in case you are not familiar with these AWS Services.
We will be replicating a repository in us-east-1 (N. Virginia) to the us-east-2 (Ohio) region.
Let's get started
### Step 1: Setting up Build Project in CodeBuild
* Under Project Configuration, enter a name for the CodeBuild Project. In our case, it is demoappreplication

* Under Source, select the Source Provider as **AWS CodeCommit** and select the repository and branch within the repository you wish to replicate to another region. In our case, we will replicate the **test** branch under **demoapp** repository.

* Under Environment, select Operating System as **Amazon Linux 2**, Runtime(s) as **Standard**, Image as **aws/codebuild/amazonlinux2-x86_64-standard:3.0**, Image Version as **Always use the latest image for this runtime version**, and Environment type as **Linux**

* Under Buildspec, select Insert build commands and click on Switch to editor.

* Enter the following commands.
```
version: 0.2
env:
#variables:
# key: "value"
# key: "value"
#parameter-store:
# key: "value"
# key: "value"
#secrets-manager:
# key: secret-id:json-key:version-stage:version-id
# key: secret-id:json-key:version-stage:version-id
#exported-variables:
# - variable
# - variable
git-credential-helper: yes
#batch:
#fast-fail: true
#build-list:
#build-matrix:
#build-graph:
phases:
#install:
#If you use the Ubuntu standard image 2.0 or later, you must specify runtime-versions.
#If you specify runtime-versions and use an image other than Ubuntu standard image 2.0, the build fails.
#runtime-versions:
#nodejs: 12
# name: version
#commands:
# - command
# - command
#pre_build:
#commands:
#- ls -lt
build:
commands:
- git config --global --unset-all credential.helper
- git config --global credential.helper '!aws codecommit credential-helper $@'
- git config --global credential.UseHttpPath true
- git clone --mirror https://git-codecommit.us-east-1.amazonaws.com/v1/repos/demoapp LocalRepository
- cd LocalRepository
- git remote set-url --push origin https://git-codecommit.us-east-2.amazonaws.com/v1/repos/demoapp
- git config --global credential.helper '!aws codecommit credential-helper $@'
- git config --global credential.UseHttpPath true
- git fetch -p origin
- git push --mirror
post_build:
commands:
- echo Build completed
#reports:
#report-name-or-arn:
#files:
# - location
# - location
#base-directory: location
#discard-paths: yes
#file-format: JunitXml | CucumberJson
#artifacts:
#files:
# - location
# - location
#name: $(date +%Y-%m-%d)
#discard-paths: yes
#base-directory: location
#cache:
#paths:
# - paths
```
* Let CodeBuild automatically create a service role for us. In our case, CodeBuild will create a new service role named **codebuild-demoappreplication-service-role**

* Leave all the default options under Batch Configuration, Artifacts, Logs steps in CodeBuild and click Create Build Project button.
### Step 2: Configuring CodeBuild Service Role in IAM
* Navigate to IAM and add the following **codeCommit:GitPush** permissions for **us-east-2 region** resource to the service role CodeBuild created for us. In our case, we will update the permissions and resource for the **codebuild-demoappreplication-service-role**


### Step 3: Setting up CodePipeline
You can further extend this setup to trigger code replication when a code merge happens in the CodeCommit repository in us-east-1, by triggering a CodeBuild within a CodePipeline.
* Enter a name for the pipeline

* Under Source provider, select **AWS CodeCommit**. In our case, we will trigger a build when CodeCommit detects a code change in the **test** branch under **demoapp** repository.

* Under the Build step, select Build provider as **CodeBuild** and our project name **demoappreplication**

* Click on Skip deploy stage

* Review the pipeline details and click Create Pipeline.
* Once the Code Pipeline runs successfully, you should see the following:

### Summary
This article demonstrates setting up a repository replication of an AWS CodeCommit repository across multiple regions using CodeBuild and CodePipeline.
### References
* https://aws.amazon.com/blogs/devops/replicate-aws-codecommit-repository-between-regions-using-aws-fargate/