| id | title | description | collection_id | published_timestamp | canonical_url | tag_list | body_markdown | user_username |
|---|---|---|---|---|---|---|---|---|
915,289 | Publish your site assets with the Netlify CLI | Learn how the Netlify command line interface (CLI) can publish your web site assets to the Netlify hosting infrastructure for you with a single command. | 15,788 | 2021-12-02T10:26:07 | https://www.netlify.com/blog/2021/12/01/highlighting-a-different-netlify-feature-each-day-in-december/ | netlify, jamstack, cli, tools | ---
title: Publish your site assets with the Netlify CLI
published: true
description: Learn how the Netlify command line interface (CLI) can publish your web site assets to the Netlify hosting infrastructure for you with a single command.
tags: Netlify, Jamstack, CLI, Tools
cover_image: https://www.netlify.com/img/blog/og-cli-deploy.png
canonical_url: https://www.netlify.com/blog/2021/12/01/highlighting-a-different-netlify-feature-each-day-in-december/
series: Netlify - a feature a day 2021
---
_Learn about [a different Netlify feature each day](https://www.netlify.com/blog/2021/12/01/highlighting-a-different-netlify-feature-each-day-in-december/?utm_campaign=featdaily21&utm_source=devto&utm_medium=blog&utm_content=cli-deploy) throughout December!
Here's one..._
Did you know that you can publish your sites to the web without leaving the comfort of your command line?
The open source [Netlify CLI](https://github.com/netlify/cli) includes lots of helpful features (expect to learn more about them in upcoming posts this month!) and deploying production or preview versions of your site can be done with the handy command: `netlify deploy`
> 💡 Install the Netlify CLI for all sorts of web development helpers and utilities with this command: `npm install -g netlify-cli`
The first time you run `netlify deploy` from a local project folder, it will give you the option of deploying to an existing site or creating a new site in a project team, and then deploy your specified folder to Netlify.
For added safety, the default target for a `netlify deploy` is a unique preview URL, which the CLI helpfully returns to you as a link to inspect and share. Ready to deploy to production? Then you’ll want to use the `prod` flag:
```
netlify deploy --prod
```
More powerful deployment pipelines await you via [Netlify’s build infrastructure](https://www.netlify.com/products/build/?utm_campaign=featdaily21&utm_source=devto&utm_medium=blog&utm_content=cli-deploy), but this command is very useful for sending your assets to Netlify where we’ll publish them to the [global hosting infrastructure](https://www.netlify.com/products/edge/?utm_campaign=featdaily21&utm_source=devto&utm_medium=blog&utm_content=cli-deploy) for you.
Happy deploying!
## More information
- [Getting started with the Netlify CLI](https://docs.netlify.com/cli/get-started/?utm_campaign=featdaily21&utm_source=devto&utm_medium=blog&utm_content=cli-deploy)
- [Netlify CLI 2.0 rebuilt from the ground up](https://www.netlify.com/blog/2018/09/10/netlify-cli-2.0-now-in-beta/?utm_campaign=featdaily21&utm_source=devto&utm_medium=blog&utm_content=cli-deploy) | philhawksworth |
915,315 | Why you should make your tests fail | Let's face it, most of us developers don't necessarily love writing tests. We sometimes end up... | 0 | 2021-12-02T13:58:36 | https://cathalmacdonnacha.com/why-you-should-make-your-tests-fail | testing, javascript, webdev, beginners | Let's face it, most of us developers don't necessarily love writing tests. We sometimes end up rushing through them, and once we see that green tick next to a passing test, we're generally pretty happy to move on. However, an enemy is lurking amongst us.
## False positive test
The enemy I'm talking about here is otherwise known as a false positive test. Let's take a look at what this beast looks like.
Here we have a `select` element with some countries as options:
```tsx
<select>
<option value="">Select a country</option>
<option value="US">United States</option>
<option value="IE">Ireland</option>
<option value="AT">Austria</option>
</select>
```
Here's my test:
```tsx
it('should allow user to change country', () => {
render(<App />)
userEvent.selectOptions(
screen.getByRole('combobox'),
screen.getByRole('option', { name: 'Ireland' } ),
)
expect(screen.getByRole('option', { name: 'Ireland' })).toBeInTheDocument();
})
```
The test passes, isn't that great? ✅ I'm afraid not. 😭 Let's see why after we intentionally make it fail.
## Making your test fail
Here's a real example of a false positive test situation I ran into recently:
```tsx
it('should allow user to change country', () => {
render(<App />)
userEvent.selectOptions(
screen.getByRole('combobox'),
screen.getByRole('option', { name: 'Ireland' } ),
)
// Changed expected country from "Ireland" to "Austria" - this should fail.
expect(screen.getByRole('option', { name: 'Austria' })).toBeInTheDocument();
})
```
I was expecting the check for "Austria" to fail because it wasn't the selected country, and I was pretty surprised to see that it was still passing. Looks like we have just identified a false positive test.
Let us take a step back. The purpose of my test is to ensure that when changing a country, it is indeed the now-selected option. However, after debugging for a while I eventually realised that the test above only checks that the country "Ireland" exists, instead of checking if it's selected.
Here's how I eventually fixed it:
```tsx
it('should allow user to change country', () => {
render(<App />)
userEvent.selectOptions(
screen.getByRole('combobox'),
screen.getByRole('option', { name: 'Ireland' } ),
)
// Now checking if the option is selected
expect(screen.getByRole('option', { name: 'Ireland' }).selected).toBe(true);
})
```
Now, I am correctly checking that the option is selected and all is good. I wouldn't have found this unless I intentionally made my test fail, so I'm glad my persistence has paid off and I avoided a potential bug.
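The difference between the two assertions can be sketched in plain JavaScript, outside of any testing library. This toy `options` array is just an illustration of mine, not real DOM code:

```javascript
// Toy model of the <select> above: every option exists,
// but only the one the user picked is actually selected.
const options = [
  { name: 'Select a country', selected: false },
  { name: 'United States', selected: false },
  { name: 'Ireland', selected: true },
  { name: 'Austria', selected: false },
];

// What my broken assertion effectively checked: "does the option exist?"
const isInDocument = (name) => options.some((o) => o.name === name);

// What the test should check: "is the option the selected one?"
const isSelected = (name) => options.some((o) => o.name === name && o.selected);

console.log(isInDocument('Austria')); // true  -> the false positive
console.log(isSelected('Austria'));   // false -> fails, as it should
```

Existence and state are different questions; a passing existence check tells you nothing about which option is selected.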
## Final thoughts
I've been burnt enough times in the past by false positive tests that I have vowed to always intentionally make my tests fail before moving on to the next one. Since doing this, I've become a lot more confident in my tests, knowing that they'll only pass in the correct circumstances.
That's about all I have to share with you today. Let me know in the comments if you found this article useful. 🙌
## Want to follow along?
I mainly write about real tech topics I face in my everyday life as a Frontend Developer. If this appeals to you then feel free to follow me on Twitter: [https://twitter.com/cmacdonnacha](https://twitter.com/cmacdonnacha)
Bye for now 👋 | cathalmacdonnacha |
915,347 | Free Bootstrap 5 Admin Dashboard Template - Dash UI | Dash-UI is a Bootstrap 5 Admin & Dashboard Theme. Dash UI Kit a free and open source components... | 0 | 2021-12-03T08:08:09 | https://dev.to/imjituchauhan/free-bootstrap-5-admin-template-dash-ui-19j2 | webdev, opensource, css, javascript | Dash-UI is a [Bootstrap 5 Admin & Dashboard](https://dashui.codescandy.com/) Theme. Dash UI Kit is a free and open source components and templates kit fully coded with Bootstrap 5.
Demo - [Dash UI](https://codescandy.com/dashui/index.html)
Support - [Join the Community](https://github.com/codescandy/Dash-UI/discussions)
## What's in the package
- 50+ Components
- **Bootstrap 5.1.3**
- HTML5 & SCSS
- **Gulp Based Workflow**
- 100% Responsive
- Authentication Pages
- **Collapsible Sidebar**
- Font Icons
- Google Fonts
- Developer [Docs](https://codescandy.com/dashui/docs/index.html)
## Pages
- [Project Dashboard](https://codescandy.com/dashui/index.html)
- [Profile](https://codescandy.com/dashui/pages/profile.html)
- [Account Settings](https://codescandy.com/dashui/pages/settings.html)
- [Billing](https://codescandy.com/dashui/pages/billing.html)
- [Pricing](https://codescandy.com/dashui/pages/pricing.html)
- Authentication Pages
- [Sign In](https://codescandy.com/dashui/pages/sign-in.html)
- [Sign Up](https://codescandy.com/dashui/pages/sign-up.html)
- [Forgot Password](https://codescandy.com/dashui/pages/forget-password.html)
- 404 Error
- Menu Level
- Layout
**Thank you for reading.**
If you like this product, please support us by sharing our GitHub repository! [Github](https://github.com/codescandy/Dash-UI).
| imjituchauhan |
915,600 | 6 Days Left until BotMeNot’s beta launch on the 7th of December, 2021! | In case you’ve missed it, we’ve recently announced BotMeNot’s beta launch! BotMeNot (Beta) will go... | 0 | 2021-12-02T15:58:06 | https://dev.to/botmenot/6-days-left-until-botmenots-beta-launch-on-the-7th-of-december-2021-48ik | discuss, security, testing | In case you’ve missed it, we’ve recently announced BotMeNot’s beta launch!
BotMeNot (Beta) will go live on the 7th of December 2021 and if you decide to register as a beta user you’ll receive 100 free credits!

What is BotMeNot about and why should you become a beta user?
- BotMeNot is a tool that tests websites’ bot protection levels.
- It can be used by website owners or anyone interested in bot protection.
- Currently, there are three types of tests you can choose from: Starter, Light, and Smart Tests.
- By knowing how well protected a website is, you’ll know the chances of bots slowing down the server’s response time.
- You’ll get to know how easy it is to scrape content from your website.
- You’ll be able to understand if your website is at risk of being targeted by spambots.
- You’ll get to know how effective your current bot protection solution is (if you’re using one).
Interested?
Head over to https://botmenot.com/beta-users-wanted/!
Register and claim your 100 free credits!
| botmenot |
915,614 | Are you a video fan?🚀 | How many videos do you watch a week? I'm sure a lot. Some of the videos for pleasure, and some of... | 0 | 2021-12-02T16:14:31 | https://dev.to/liadshviro/are-you-a-video-fan-k44 | codereview, programming, productivity, video | How many videos do you watch a week? I'm sure a lot.
Some of the videos for pleasure, and some of them for educational reasons.

About two years ago (COVID-19), video usage increased dramatically.
We have learned that it is much more convenient to transfer and share knowledge with each other via video instead of physical meetings. As a video consumer, you can understand the content much faster, enjoy it more, and can always stop and continue in your free time from the same point where you stopped watching.
And even better, you can watch at 1.5x speed and save time.

In the development world 👨💻👩💻, you write thousands of lines of code every week (!)
How many times have you found yourself repeatedly explaining what you did to someone else?
How much easier would it be if you could record 📷 your screen and explain what you have developed in your own words, instead of explaining it 10 times? 😀

Meet Speacode, a cool tool developed by developers for developers that allows you, in just 2 clicks, to record yourself from the IDE to explain exactly what you wrote so you can share it with the relevant people.
You can share the video within the relevant lines of code or, if necessary, even share it with friends via a link 😀
If you want to give it a try, you can download it from the JetBrains marketplace here - [Speacode Video Screen Recorder for Code | Python Java JS PHP etc](https://plugins.jetbrains.com/plugin/15672-speacode-video-screen-recorder-for-code--python-java-js-php-etc)

| liadshviro |
915,620 | Getting started with scheduled pipelines | CircleCI’s scheduled pipelines let you run pipelines at regular intervals; hourly, daily, or weekly.... | 0 | 2021-12-14T23:24:02 | https://circleci.com/blog/using-scheduled-pipelines/ | circleci, pipelines, cicd | ---
title: Getting started with scheduled pipelines
published: true
date: 2021-12-14 14:00:00 UTC
tags: circleci, pipelines, cicd
cover_image: https://production-cci-com.imgix.net/blog/media/Tutorial-Beginner-RP.jpg?ixlib=rb-3.2.1&auto=format&fit=max&q=60&ch=DPR%2CWidth%2CViewport-Width%2CSave-Data&w=750
canonical_url: https://circleci.com/blog/using-scheduled-pipelines/
---
CircleCI’s scheduled pipelines let you run [pipelines](/blog/what-is-a-ci-cd-pipeline/) at regular intervals; hourly, daily, or weekly. If you have used [scheduled workflows](/blog/manual-job-approval-and-scheduled-workflow-runs/), you will find that replacing them with scheduled pipelines gives you much more power, control, and flexibility. In this tutorial, I will guide you through how scheduled pipelines work, describe some of their cool use cases, and show you how to get started setting up scheduled pipelines for your team. I will demonstrate using both the API and the UI and how you can set up scheduled pipelines from scratch or migrate to them from scheduled workflows.
## Prerequisites
Here is what you will need to follow the tutorial:
- [CircleCI account](https://circleci.com/signup/) with a project configured
- Some Node.js knowledge
You can use the [sample project](https://github.com/zmarkan/android-espresso-scrollablescroll) as you go through the steps.
## Scheduling regular builds and tests
There are many reasons to schedule CI/CD work instead of just executing when you push code to a repository. The first and most obvious reason to schedule is to have regular builds of software. Often folks outside of the development team (product managers, QA engineers, and other stakeholders) need access to the latest build of software you and your team are working on. Of course, you can always trigger a build manually on demand, but that can be distracting for you and other developers on the team. It is so much easier to automate this process and point all the stakeholders to where they can find the newest build. This applies to everything from web apps and mobile apps, to backend services, libraries, and anything in between.
Scheduled builds automatically build software at night or off-hours when there is no development going on. These nightly builds (as they are sometimes called) take the latest revision off your main branch and produce a staging or beta build of the software.
You can use scheduled pipelines to run the entire test suite so the nightly build also verifies that your software works and is ready and waiting when you start your next working day. Scheduling allows you to run those expensive and time-consuming functional or integration tests you do not want to run on every commit. The build does not need to be run nightly. You can run it at a cadence that suits your team, every few hours, or even several times per hour.
## Scheduling other kinds of work
But why stop at builds? You might also schedule work that is not necessarily a build but needs to happen regularly, like a batch process, database backups, or restarting services. If it can be added to a script on a machine that has access to your environments, it can be done in a CI/CD system.
## Legacy way of scheduling — scheduled workflows
The ability to schedule work isn’t exactly new to CircleCI. Development teams have been able to include scheduling as part of the configuration, with schedules defined using cron syntax on the workflow level. There are a few downsides to this approach, though:
- It requires development work to make any changes to the schedule, or even to review schedules already in place.
- In CircleCI, pipelines are triggered, not workflows.
- Fine-tuning permissions was difficult because it was not always clear who scheduled the work that was triggered.
## How scheduled pipelines improve developer experience
Here are some of the benefits of scheduled pipelines:
- You can schedule entire pipelines, including any pipeline parameters you want to pass.
- With scheduling handled outside the configuration file, you can query and set schedules using both the API and the UI. This ability provides more flexibility with who manages scheduling and execution.
- Scheduled pipelines work with [contexts](https://circleci.com/docs/2.0/contexts/). Contexts give you fine-grained control of who has access to perform and schedule certain jobs. You can gate your deployment credentials to only the engineers with sufficient permissions, so no one else can set up those schedules. Contexts can also be used with dynamic configuration to unlock even more flexibility for your CI/CD setup.
Now that we have covered some basic facts about scheduled pipelines, we can implement one.
## Implementing a scheduled pipeline
First, we need a pipeline to schedule. Luckily, I have a [project that previously used scheduled workflows](https://github.com/zmarkan/android-espresso-scrollablescroll). It is an open source Android library project that runs nightly deployments, so it is ideal for scheduling use cases. I recommend that you fork the project on GitHub and set it up as a project on CircleCI.
For our example schedule, we want to run a build every night and deploy it to the Sonatype snapshots repository. This build makes the repository available for anyone to use and get the freshest code.
There is a workflow already defined for it in our `.circleci/config.yml` - `nightly-snapshot`:
```
parameters:
run-schedule:
type: boolean
default: false
nightly-snapshot:
when: << pipeline.parameters.run-schedule >>
jobs:
- android/run-ui-tests:
name: build-and-test
system-image: system-images;android-23;google_apis;x86
test-command: ./gradlew assemble sample:connectedDebugAndroidTest
- deploy-to-sonatype:
name: Deploy Snapshot to Sonatype
requires:
- build-and-test
```
The workflow has two jobs: one to run the tests and another to deploy the snapshot. There is no scheduling logic in the pipeline itself, apart from a `when` statement that checks for the `run-schedule` pipeline parameter. Our scheduler will set the parameter when triggering the pipeline.
## Using the API to schedule pipelines
To get started with scheduling using the API, you need the API token. To get the token, log in to CircleCI and click your avatar in the bottom left corner. Clicking your avatar opens User Settings. Navigate to Personal API Tokens, create a new token, and save it somewhere safe. In the sample project there is a `build-scheduling` directory, with a file called `.env.sample`. You can copy that file to `.env` and replace the placeholder token with yours. You should do the same with other parts of the `.env` file: `PROJECT_ID` and `ORG_NAME`.
```
CIRCLECI_TOKEN=YOUR_CIRCLECI_TOKEN
PROJECT_ID=YOUR_PROJECT_ID
ORG_NAME=YOUR_VCS_USERNAME
VCS_TYPE=github
```
With environment variables set, we can make the API calls. Scheduled pipelines use the [CircleCI API v2](/blog/introducing-circleci-api-v2/). To post a new schedule, you need to make a POST request to the endpoint. The endpoint consists of your CircleCI project, your username, and your VCS provider. Here is an example: `https://circleci.com/api/v2/project/github/zmarkan/android-espresso-scrollablescroll/schedule`
This example POST call from the `setup_nightly_build.js` script uses the Axios JavaScript library:
```
const axios = require('axios')

const token = process.env.CIRCLECI_TOKEN
axios.post("https://circleci.com/api/v2/project/github/zmarkan/android-espresso-scrollablescroll/schedule", {
name: "Nightly build",
description: "Builds and pushes a new build to Sonatype snapshots every night. Like clockwork.",
"attribution-actor": "system",
parameters: {
branch: "main",
"run-schedule": true
},
timetable: {
per_hour: 1,
hours_of_day: [1],
days_of_week: ["TUE", "WED", "THU", "FRI", "SAT"]
}
},{
headers: { 'circle-token': token }
}
)
```
The body payload contains the schedule name, which must be unique, and an optional description. It includes an attribution actor, which can be either `system` for a neutral actor or `current`, which takes your current user’s permissions (as per the token you use). The payload also includes parameters like which branch to use when triggering the pipeline and any other parameters you have set up. This tutorial uses `run-schedule` in the config file. The last part of the body is the `timetable`, which is where we define when and how frequently to run our scheduled pipelines. The fields to use here are `per_hour`, `hours_of_day`, and `days_of_week`. Note that this does not take a cron expression, which makes it easier for humans to read and reason about when working with the API.
In the headers, we will pass a `circle-token` using the token we generated earlier in the CircleCI UI.
In the timetable, we set everything to run at 1:00 am (UTC), from Tuesday to Saturday, the night after the work has finished. We do not need to run on Sunday and Monday, because in our example project there is no one working over the weekend. The codebase will not change on those days.
In addition to the POST method, the API exposes other methods such as GET, DELETE, and PUT for retrieving, deleting, and updating schedules. There are samples for `get_schedules.js` and `delete_schedule.js` in the repository.
## Using the GUI
Instead of using the API, you can set up scheduled pipelines from right in the CircleCI dashboard. From your project in CircleCI, go to Project Settings, and select **Triggers** from the menu on the left.
Click **Add Scheduled Trigger** to open the page where you can set up a new scheduled pipeline. The form has the same options as the API: trigger name, days of the week, start time, parameters, attributed user, and the others.
Click **Save Trigger** to activate it. The trigger will be ready to start scheduling your pipelines at the dates and times you specified.

## Migrating from scheduled workflows
So far, we have explored setting up the pipelines and reviewing them using both the API and the GUI. Now we can focus on migrating your existing scheduled workflows to the more advantageous scheduled pipelines.
This example shows a scheduled workflow where everything is defined in the config file. It includes the trigger configuration to the workflow definition, passing in the schedule with the cron expression:
```
nightly-snapshot:
triggers: #use the triggers key to indicate a scheduled build
- schedule:
cron: "0 0 * * *" # use cron syntax to set the schedule
filters:
branches:
only:
- main
jobs:
- android/run-ui-tests:
name: build-and-test
system-image: system-images;android-23;google_apis;x86
test-command: ./gradlew assemble sample:connectedDebugAndroidTest
- deploy-to-sonatype:
name: Deploy Snapshot to Sonatype
requires:
- build-and-test
```
In our case, this is run at midnight every day, and we want to trigger it only on the main branch.
To migrate there are a few steps to complete:
1. Trigger the existing `nightly-snapshot` workflow in the scheduled pipeline.
2. Introduce a new pipeline variable called `run-schedule`, as we did in the first example.
3. For all workflows, add `when` expressions so that the scheduled workflows run only when `run-schedule` is `true`, and all other workflows run only when `run-schedule` is `false`.
```
parameters:
run-schedule:
type: boolean
default: false
workflows:
build-test-deploy:
when:
not: << pipeline.parameters.run-schedule >>
jobs:
...
nightly-snapshot:
when: << pipeline.parameters.run-schedule >>
jobs:
...
```
The rest of the process is the same as when setting up from scratch:
1. Resolve the cron expression; you can use an [online tool like this](https://crontab.guru/#0_0_*_*_*).
2. Set up the schedule using either the API or the GUI with the new timetable configuration, and the `run-schedule` pipeline parameter.
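For the simple cron shapes you are most likely to migrate, resolving the expression can even be scripted. The converter below is a rough sketch of my own: it only understands the common `minute hour * * days` form, not the full cron grammar, so resolve anything fancier by hand:

```javascript
const DAYS = ['SUN', 'MON', 'TUE', 'WED', 'THU', 'FRI', 'SAT'];

// Convert a simple "m h * * dow" cron expression into the timetable
// shape used above (per_hour / hours_of_day / days_of_week).
// The minute field is ignored and only a single hour is handled.
function cronToTimetable(cron) {
  const [, hour, , , dow] = cron.trim().split(/\s+/);
  return {
    per_hour: 1,
    hours_of_day: [Number(hour)],
    days_of_week: dow === '*' ? DAYS : dow.split(',').map((d) => DAYS[Number(d)]),
  };
}

console.log(cronToTimetable('0 0 * * *'));
// { per_hour: 1, hours_of_day: [ 0 ], days_of_week: [ 'SUN', ..., 'SAT' ] }
```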
## Conclusion
Scheduled pipelines are a versatile component of your CI/CD toolkit, allowing you to:
- Trigger your builds on a recurring basis
- Utilize pipeline parameters
- Fine-tune control of who can set up pipelines
- Clearly define the permissions needed when they run using CircleCI contexts
In this tutorial, you have learned about how scheduled pipelines work in CircleCI and how to set them up, either from scratch or porting from scheduled workflows. We also covered how to use the API and the web UI, and reviewed some use cases for them to get you started.
Let us know how you and your team fare with scheduled pipelines, porting your scheduled workflows, or if you would like to see any additions to the UI or API features.
If you have any feedback or suggestions about topics I should cover next, reach out to me on [Twitter - @zmarkan](https://twitter.com/zmarkan/). | zmarkan |
915,624 | Implement JavaScript Array Methods From Scratch | Table of Contents Introduction prototype this Array Methods Resources ... | 0 | 2021-12-05T00:24:06 | https://dev.to/zagaris/implement-javascript-array-methods-from-scratch-2pl1 | javascript, webdev, beginners |
## Table of Contents
1. [Introduction](#introduction)
2. [prototype](#what-is-prototype)
3. [this](#what-is-this)
4. [Array Methods](#lets-implement-the-array-methods)
5. [Resources](#resources)
## Introduction
The **JavaScript Array class** is a global object that is used in the construction of arrays. Array is a special type of object that is mutable and it is used to store multiple values.
In this article, we will implement our own array methods from scratch. These implementations <u>don't intend to replace the existing methods</u> but to provide a better understanding of how these methods work and their uses.
| Methods | Description |
| ----------- | -----------|
| [indexOf()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/indexOf) |Returns the first index at which a given element can be found in the array, otherwise returns -1. |
|[lastIndexOf()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/lastIndexOf)|Returns the last index at which a given element can be found in the array, otherwise returns -1.|
| [reverse()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/reverse) | Returns the reversed array.|
| [forEach()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/forEach) |Executes a provided function once for each array element.|
| [map()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map) |Creates a new array with the results of calling a provided function on every element in the calling array.|
| [filter()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/filter) | Creates a new array with all elements that pass the test implemented by the provided function.|
| [reduce()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/reduce) |Applies a function against an accumulator and each element in the array to reduce it to a single value.|
For a better understanding of Higher Orders Functions and specifically `map()`, `filter()` and `reduce()` methods you can check this [article](https://dev.to/zagaris/understanding-map-filter-and-reduce-in-javascript-293i).
Before we start to implement these methods, we will take a quick look on how `prototype` and `this` work.
## What is prototype?
In JavaScript, every function and object has a property named **prototype** by default. **Prototypes** are the mechanism by which JavaScript objects inherit methods and properties with each other. **Prototypes** are very useful when we want to add new properties to an object which will be shared across all the instances.
```javascript
function User () {
this.name = 'George',
this.age = 23
}
User.prototype.email = 'george@email.com';
User.prototype.userInfo = function () {
console.log('[User name]: ', this.name, ' [User age]: ', this.age);
}
const user = new User();
console.log(user.email); // george@email.com
user.userInfo(); // [User name]: George [User age]: 23
```
In the example above, we create the function object `User` that has the properties `name` and `age`. Then, we access the `User` function object with `prototype` property and we add the property `email` and the function `userInfo()` to it.
## What is this?
The value of `this` is determined by the object that currently owns the space that `this` keyword is in (runtime binding).
```javascript
function User () {
this.name = 'George',
this.age = 23,
this.printInfo = function() {
console.log(this);
}
this.orders = {
orderId: '12345',
printOrderId: function() {
console.log(this);
}
}
}
const user = new User();
user.printInfo(); // User { name: 'George', age: 23, printInfo: [Function], orders: { orderId: '12345', printOrderId: [Function: printOrderId] } }
user.orders.printOrderId(); // { orderId: '12345', printOrderId: [Function: printOrderId] }
```
In the example above, we use again the function object `User` and add the object `orders` to it. The `user.printInfo()` prints the `this` value and in this case it contains all the properties of the `User` function object. The `user.orders.printOrderId()` prints only the properties of the `orders` object and that happens because the method `printOrderId()` is called through the `orders` object.
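Because the binding happens at call time, the same function can report different `this` values depending on how it is invoked. Here is a small self-contained illustration using `Function.prototype.call`:

```javascript
function whoAmI() {
  return this.name;
}

const alice = { name: 'Alice', whoAmI };
const bob = { name: 'Bob' };

console.log(alice.whoAmI());   // 'Alice' -> called through the alice object
console.log(whoAmI.call(bob)); // 'Bob'   -> `this` explicitly bound to bob
```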
## Let's implement the Array Methods
In order to implement the methods, we will access the `Array` object via `prototype` property and then we will add our new methods. The `this` keyword inside the methods has the value of the array that is calling the corresponding array method.
_**Custom indexOf**_
```javascript
Array.prototype.customIndexOf = function (value) {
for (let i = 0; i < this.length; i++) {
    if (this[i] === value)
return i;
}
return -1;
}
const output = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
console.log(output.customIndexOf(2)); // 1
```
In the example above, the `customIndexOf` method takes as a parameter a value and then we iterate the array until we find the corresponding value and return its index.
_**Custom lastIndexOf**_
```javascript
Array.prototype.customLastIndexOf = function (value) {
for (let i = this.length - 1; i >= 0; i--) {
    if (this[i] === value)
return i;
}
return -1;
}
const output = [1, 2, 3, 4, 5, 9, 7, 9, 9, 10];
console.log(output.customLastIndexOf(9)); // 8
```
In the example above, the `customLastIndexOf` method takes a value as a parameter and then we iterate the array from the end until we find the corresponding value and return its index.
_**Custom reverse**_
```javascript
Array.prototype.customReverse = function () {
let left = 0;
let right = this.length - 1;
while(left < right) {
let temp = this[left];
this[left] = this[right];
this[right] = temp;
left++;
right--;
}
return this;
}
const output = [1, 'b', 'abc', { name: 'Jonh' }, 10];
console.log(output.customReverse()); // [10, { name: 'Jonh' }, 'abc', 'b', 1]
```
In the example above, the `customReverse` method reverses the array in place and returns it.
_**Custom forEach**_
```javascript
Array.prototype.customForEach = function (callback) {
for (let i = 0; i < this.length; i++) {
callback(this[i], i, this);
}
}
const output = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
output.customForEach(elem => {
console.log(elem);
}); // 1 2 3 4 5 6 7 8 9 10
```
In the example above, the `customForEach` method takes a callback function as a parameter, and the callback is applied to every element in the array. The callback function additionally receives the index and the array itself in case they are needed.
_**Custom map**_
```javascript
Array.prototype.customMap = function map(callback) {
const results = [];
for (let i = 0; i < this.length; i++) {
results.push(callback(this[i], i, this));
}
return results;
}
let output = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
output = output.customMap(elem => {
return 3*elem;
});
console.log(output); // [ 3, 6, 9, 12, 15, 18, 21, 24, 27, 30]
```
In the example above, the `customMap` method takes a callback function as a parameter; for each element in the array we apply the callback function and return the results in a new array. Again, the callback function additionally receives the index and the array itself in case they are needed.
_**Custom filter**_
```JavaScript
Array.prototype.customFilter = function (callback) {
const results = [];
for (let i = 0; i < this.length; i++) {
if(callback(this[i], i, this))
results.push(this[i]);
}
return results;
}
let output = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
output = output.customFilter((elem) => {
return elem % 2 === 0;
});
console.log(output); // [ 2, 4, 6, 8, 10 ]
```
In the example above, the `customFilter` method takes a callback function as a parameter, applies it to each element in the array, and returns a new array containing only the elements that pass the callback's test.
_**Custom reduce**_
```JavaScript
Array.prototype.customReduce = function (callback, initialValue) {
let value = initialValue;
for (let i = 0; i < this.length; i++) {
value = callback(value, this[i]);
}
return value;
}
const output = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
const sum = output.customReduce((acc, elem) => {
  return acc + elem;
}, 0);
console.log(sum); // 55
```
In the example above, the `customReduce` method takes a callback function and an initial value as parameters, and applies the callback against the accumulator for each element in the array, reducing it to a single value.
You can check my github repository [here](https://github.com/zagaris/Array-Methods-JavaScript).
## Resources
* [MDN: JavaScript Array](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array)
* [MDN: Prototype](https://developer.mozilla.org/en-US/docs/Learn/JavaScript/Objects/Object_prototypes)
* [MDN: This - JavaScript](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/this)
| zagaris |
915,626 | Anatomy of a high-velocity CI/CD pipeline | If you’re going to optimize your development process for one thing, make it speed. Not the kind of... | 0 | 2021-12-02T16:35:17 | https://medium.com/developing-koan/anatomy-of-a-high-velocity-ci-cd-pipeline-43a1ae3b798b | ci, continuousdelivery, devops, tdd | If you’re going to optimize your development process for one thing, make it speed. Not the kind of speed that racks up technical debt on the team credit card or burns everyone out with breathless sprints, though. No, the kind of speed that treats time as your most precious resource, which it is.
_Speed is the startup’s greatest advantage_. Speed means not wasting time. Incorporating new information as soon as it’s available. Getting products to market. Learning from customers. And responding quickly when problems occur. But speed with no safeguards is simply recklessness. Moving fast requires systems for ensuring we’re still on the rails.
We’ve woven many such systems into the sociotechnical fabric of our startup, but maybe the most crucial among them are the continuous integration and continuous delivery processes that keep our work moving swiftly towards production.
## The business case for CI/CD
Writing in 2021 it’s hard to imagine building web applications without the benefits of continuous integration, continuous delivery, or both. Running an effective CI/CD pipeline won’t score points in a sales pitch or (most) investor decks, but it can make significant strategic contributions to both business outcomes and developer quality of life. The virtuous cycle goes something like this:
* faster feedback
* fewer bugs
* increased confidence
* faster releases
* more feedback (even faster this time)
Even on teams (like ours!) that haven’t embraced the dogma (or overhead) of capital-A-Agile processes, having the confidence to release early and often still unlocks shorter development cycles and reduces time to market.
As a developer, you’re probably already bought into this idea. If you’re feeling resistance, though, here’s a quick summary for the boss:

The business case for continuous integration and delivery
## Is CI/CD worth the effort?
Nobody likes a red build status indicator, but the truth is that builds fail. That’s why status dashboards exist, and a dashboard glowing crimson in the light of failing builds is much, much better than no dashboard at all.
Still, that dashboard (nevermind the systems and subsystems it’s reporting on) is pure overhead. Not only are you on the hook to maintain code and release a dozen new features by the end of the week, but also the litany of scripts, tests, configuration files, and dashboards needed to build, verify, and deploy it. When the server farm of Mac Minis in the basement hangs, you’re on the hook to restart it. That’s less time available to actually build the app.
This is a false dilemma, though. You can solve this problem by throwing resources at it. Managed services eliminate much of the maintenance burden, and when you’ve reached the scale where one-size-fits-all managed services break down you can likely afford to pay a full-time employee to manage Jenkins.
So, there are excuses for not having a reliable CI/CD pipeline. They just aren’t very good ones. The payoff — in confidence, quality, velocity, learning, or _whatever_ you hope to get out of shipping more software — is well worth any pain the pipeline incurs.
Yes, even if it has to pass through Xcode.
## A guiding principle
Rather than prescribing the ultimate CI/CD pipeline in an edict from on-high, we’ve taken guidance from one of our team principles and evolved our practices and automation from there. It reads:
> **Ship to Learn**. We release the moment that staging is better than prod, listen early and often, and move faster because of it.
Continuous integration is a big part of the story, of course, but the same guidance applies back to the pipeline itself.
1. **Releasing the moment that staging is better than prod** is easy to do. This is nearly always the case, and keeping up with it means having both a lightweight release process and confidence in our work. Individual investment and a reasonably robust test suite are all well and good; better is having a CI/CD pipeline that makes them the norm (if not the rule).
2. **Listening early and often** is all about gathering feedback as quickly as we possibly can. The sooner we understand whether something is working or not, the faster we can know whether to double down or adapt. Feedback in seconds is better than in minutes (and certainly better than hours).
3. **Moving faster** includes product velocity, of course, but also the CI/CD process itself. Over time we’ve automated what we reasonably can; still, several exception-heavy stages remain in human hands. Here, “moving fast” means enabling manual review and acceptance testing rather than replacing them, and we don’t expect that to change any time soon.
## So, our pipeline
Product velocity depends on the pipeline that enables it. With that in mind, we’ve constructed our pipeline to address the hypothesis that _issues uncovered at any stage are exponentially more expensive to fix than those solved at prior stages_. Issues _will_ happen, but checks that uncover them early on drastically reduce friction at the later, more extensive stages of the pipeline.
Here’s the boss-friendly version:

Test early, test often
## Local development
Continuous integration starts immediately. If you disagree, consider the feedback time needed to integrate and test locally versus anywhere else. It’s seconds (rebasing against our `main` branch or acting on feedback from a pair-programming partner), minutes (a full run of our test suite) or less.
We’ve made much of it automatic. Our editors are configured to take care of [styles and formatting](https://prettier.io/); [TypeScript provides a first layer of testing](/developing-koan/porting-koans-150-000-line-javascript-codebase-to-typescript-b4818ccc42ac); and [shared git hooks](https://rjzaworski.com/2018/01/keeping-git-hooks-in-sync) run project-specific static checks.
One check we don’t enforce is to run our full test suite. Run time goes up linearly with the size of a test suite, and — while we’re culturally averse to writing tests for their own sake — running our entire suite on every commit would be prohibitively expensive. _What needs testing_ is up to individual developers’ discretion, and we avoid adding redundant or pointless tests to the test suite just as we avoid redundant test _runs_.
Make it fast, remember? That applies to local checks, too. Fast checks get run. Slow checks? No-one has time for that.
## Automated CI
Changes pushed from local development to our central repository trigger the next layer of checks in the CI pipeline. Feedback here is slower than in local development but still fairly fast, requiring about 10 minutes to run all tests and produce a viable build.
Here’s what it looks like in Github:

Green checks are good checks.
There are several things going on here: repeats of the linting and static analysis run locally, a run through our complete `backend` test suite, and deployment of artifacts used in manual QA. The other checks are variations on this theme: different scripts poking and prodding the commit from different angles to ensure it's ready for merging into `main`. Depending on the nature of the change, we may require up to a dozen checks to pass before the commit is greenlit for merge.
## Peer review
In tandem with the automated CI checks, we require manual review and sign-off before changes can be merged into `main`.
“Manual!?” I hear the purists cry, and yes — the “M” word runs counter to the platonic ideal of totally automated CI. Hear me out. The truth is that every step in our CI/CD pipeline existed as a manual process first. Automating something before truly understanding it is a sure path to inappropriate abstractions, maintenance burden, and at least a few choice words from future generations. And it doesn’t always make sense. For processes that are and always will be dominated by exceptions (design review and acceptance testing, to pick two common examples) we’ve traded any aspirations at full automation for tooling that _enables_ manual review. We don’t expect to change this any time soon.
Manual review for us consists of (required) code review and (optional) design review. Code review covers a [checklist](https://github.com/rjz/code-review-checklist) of logical, quality, and security concerns, and we (plus Github [branch protection](https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/defining-the-mergeability-of-pull-requests/about-protected-branches)) require _at least_ two team members to believe a change is a good idea before we ship it. Besides collective ownership, it’s also a chance to apply a modicum of QA and build shared understanding around what’s changing in the codebase. Ideally, functional issues that weren’t caught locally get caught here.
## Design review
Design review is typically run in tandem with our counterparts in product and design, and aims to ensure that designs are implemented to spec. We provide two channels for reviewing changes before a pull request is merged:
1. preview builds of [a “live” application](/developing-koan/routing-on-the-edge-913eb00da742) that reviewers can interact with directly
2. [storybook](https://storybook.js.org/) builds that showcase specific UI elements included within the change
Both the preview and storybook builds are linked from Github’s pull request UI as soon as they’re available. They also nicely illustrate the type of tradeoffs we’ve frequently made between complexity (neither build is trivial to set up and maintain), automation (know what would be trickier? Automatic visual regression testing, that’s what) and manual enablement (the time we _have_ decided to invest has proven well worth it).
The bottom line is that — just like with code review — we would prefer to catch design issues while pairing up with the designer during initial development. But if something slipped through, design review lets us respond more quickly than at stages further down the line.
The feedback from manual review steps is still available quickly, though: generally within an hour or two of a new pull request being opened. And then it’s on to our staging environment.
## Continuous delivery to staging
Merging a pull request into our `main` branch finally flips the coin from continuous integration to continuous delivery. There's one more CI pass first, however: since we identify builds by the commit hash they're built from, a merge commit in `main` triggers a new CI run that produces the build artifact we deliver to our staging environment.
The process for vetting a staging build is less prescriptive than for the stages that precede it. Most of the decision around how much QA or acceptance testing to run in staging rests with the developer on-call (who doubles as [our de-facto release manager](/developing-koan/making-the-most-of-our-startups-on-call-rotation-8769a110c24c)), who will review a list of changes and call for validation as needed. A release consisting of well-tested refactoring may get very little attention. A major feature may involve multiple QA runs and pull in stakeholders from our product, customer success, and marketing teams. Most releases sit somewhere in the middle.
Every staging release receives at least passing notice, for the simple reason that we use Koan ourselves — and specifically, an instance hosted in the staging environment. We eat our own dogfood, and a flavor that’s always slightly ahead of the one our customers are using in production.
Staging feedback isn’t without hiccups. At any time we’re likely to have 3–10 feature flags gating various in-development features, and the gap between staging and production configurations can lead to team members reporting false positives on features that aren’t yet ready for release. We’ve also invested in internal tooling that allows team members to adopt a specific production configuration in their local or staging environments.

The aesthetics are edgy (controversial, even), but the value is undeniable. We’re able to freely build and test features prior to production release, and then easily verify whether a pre-release bug will actually manifest in the production version of the app.
If you’re sensing that issues caught in staging are more expensive to diagnose and fix than those caught earlier on, you’d be right. Feedback here is much slower than at earlier stages, with detection and resolution taking up to several hours. But issues caught in staging are still much easier to address before they’re released to production.
## Manual release to production
The “I” in CI is unambiguous. Different teams may take “integration” to mean different things — note the inclusion of critical-if-not-exactly-continuous manual reviews in our own integration process — but “I” always means “integration.”
The “D” is less straightforward, standing in (depending on who you’re talking to, the phase of the moon, and the day of the week) for either “Delivery” or “Deployment,” and [they’re not quite the same thing](https://www.atlassian.com/continuous-delivery/principles/continuous-integration-vs-delivery-vs-deployment). We’ve gained enormous value from Continuous Delivery. We haven’t made the leap (or investment) to deploy directly to production.
That’s a conscious decision. Manual QA and acceptance testing have proven tremendously helpful in getting the product right. Keeping a human in the loop ahead of production helps ensure that we connect with relevant stakeholders (in product, growth, and even key external accounts) prior to our otherwise-frequent releases.
## Testing in production
As the joke goes, we test comprehensively: all issues missed by our test suite will be caught in production. There aren’t many of these, fortunately, but a broad enough definition of testing ought to encompass the instrumentation, monitoring, alerting, and customer feedback that help us identify defects in our production environment.
We’ve previously shared an outline of our [cherished (seriously!) on-call rotation](/developing-koan/making-the-most-of-our-startups-on-call-rotation-8769a110c24c), and the instrumentation beneath it is a discussion for another day, but suffice to say that an issue caught in production takes much longer to fix than one caught locally. Add in the context-switching required from team members who have already moved on to other things, and it’s no wonder we’ve invested in catching issues earlier on!
## Revising the pipeline
Increasing velocity means adding people, reducing friction, or (better yet) both. [Hiring is a general problem](/@rjzaworski/sharing-our-startups-hiring-manual-efe2094a180c). Friction is specific to the team, codebase, and pipeline in question. We [adopted TypeScript](/developing-koan/porting-koans-150-000-line-javascript-codebase-to-typescript-b4818ccc42ac) to shorten feedback cycles (and [save ourselves runtime exceptions](https://rjzaworski.com/2019/05/making-the-case-for-typescript) and [pagerduty incidents](/developing-koan/making-the-most-of-our-startups-on-call-rotation-8769a110c24c)). That was an easy one.
A less obvious bottleneck was how much time our pull requests were spending waiting for code review — on average, around 26 hours prior to merge. Three and a half business days. On _average_. We were still deploying several times per day, but with several days’ worth of work-in-process backed up in the queue and plenty of context switching whenever it needed adjustment.
Here’s how review times tracked over time:

This chart is fairly cyclical, with peaks and troughs corresponding roughly to the beginning and end of major releases — big, controversial changes as we’re trailblazing a new feature; smaller, almost-trivial punchlist items as we close in on release day. But the elephant in the series lands back around March 1st. That was the start of Q2, and the day we added “Code Review Vitals” to our dashboard.
It’s been said that sunlight cures all ills, and simply measuring our workflow had the dual effects of revealing a significant bottleneck _and_ inspiring the behavioral changes needed to correct it.
Voilà! More speed.
## Conclusion
By the time you read this post, odds are that our CI/CD pipeline has already evolved forward from the state described above. Iteration applies as much to process as to the software itself. We’re still learning, and — just like with new features — the more we know, and the sooner we know it, the better off we’ll be.
With that, a humble question: what have you learned from your own CI/CD practices? Are there checks that have worked (or totally flopped) that we should be incorporating ourselves?
We’d love to hear from you! | rjz |
915,641 | Typing Effect by using CSS | As you may have already seen some website which has some kind of typing animation. It looks cool... | 19,763 | 2021-12-03T12:35:11 | https://dev.to/j471n/typing-effect-by-using-css-50p | css, webdev, beginners, tutorial | You may have already seen some websites with a typing animation. It looks cool, right? But what if I tell you it is very easy to do, and you can do it with CSS only, without using any JS?
First of all, let's visualize what I am talking about -
### Preview

Now let's look at the code, how can we make that happen
### HTML
```html
<h1 class="typing">You had me at 'hello'.</h1>
```
The HTML is very simple: we only need one element to make this work.
### CSS
```css
/* Typing Class */
.typing {
color: #fff;
overflow: hidden;
white-space: nowrap;
letter-spacing: 0.15em;
border-right: 0.15em solid orangered;
animation: typing 3.5s steps(40, end) infinite,
cursor-blink 0.75s step-end infinite;
}
/* The typing effect for the text */
@keyframes typing {
from {
width: 0;
}
to {
width: 100%;
}
}
/* The cursor blinking effect */
@keyframes cursor-blink {
from,
to {
border-color: transparent;
}
50% {
border-color: orangered;
}
}
```
{% codepen https://codepen.io/j471n/pen/JjrdRmL %}
[](https://codepen.io/j471n/pen/JjrdRmL)
### Conclusion
It is as simple as that. Now you can use this in your projects wherever you want. You can also achieve this effect with JS, but that's another story for another time.
> You can now extend your support by buying me a Coffee.😊👇
[](https://www.buymeacoffee.com/j471n)
#### Also Read
- [Curved Timeline in CSS](https://dev.to/j471n/curved-css-timeline-5ab3)
- [How to use Web Storage API?](https://dev.to/j471n/how-to-use-web-storage-api-3o28)
- [Video as Text background using CSS](https://dev.to/j471n/video-as-text-background-using-css-58im) | j471n |
915,649 | Typing Effect with typed.js | As you may have already seen some website which has some kind of typing animation. It looks cool... | 19,762 | 2021-12-09T04:26:24 | https://dev.to/j471n/typing-effect-with-js-34b8 | javascript, webdev, beginners, tutorial | You may have already seen some websites with a typing animation. It looks cool, right? But what if I tell you it is very easy to do.
I have already written an article about how to create this type of effect with CSS, but today we will build something different. In this effect, you can provide multiple strings and it will display them one by one.
First of all, let's visualize what I am talking about -
### Preview

To make this work we need to use a library called [typed.js](https://mattboldt.github.io/typed.js/), so first add the following script to your project.
```html
<script src="https://cdn.jsdelivr.net/npm/typed.js@2.0.12"></script>
```
Now let's look at the rest of the code to see how this works.
### HTML
```html
<h1>Hi, I am <span class="title"></span></h1>
```
The HTML is very simple: we only need one element to make this work.
### JS
```js
var options = {
strings: ["Jatin Sharma", "React Developer", "Python Developer"],
typeSpeed: 40,
backSpeed: 40,
loop: true
};
var typed = new Typed(".title", options);
```
{% codepen https://codepen.io/j471n/pen/qBPdXdm %}
[](https://codepen.io/j471n/pen/qBPdXdm)
### Conclusion
It is as simple as that. Now you can use this in your projects wherever you want. To learn how to create a typing effect with CSS only, see the link in the Also Read section below.
> You can now extend your support by buying me a Coffee.😊👇
[](https://www.buymeacoffee.com/j471n)
#### Also Read
- [Curved Timeline in CSS](https://dev.to/j471n/curved-css-timeline-5ab3)
- [How to use Web Storage API?](https://dev.to/j471n/how-to-use-web-storage-api-3o28)
- [Typing Effect by using CSS](https://dev.to/j471n/typing-effect-by-using-css-50p) | j471n |
915,666 | Day 78/100 Npm Vs Yarn | Tackling the details more deeply towards topics often neglected the difference between their... | 15,249 | 2021-12-02T17:42:17 | https://dev.to/riocantre/day-78100-npm-vs-yarn-5ck5/ | 100daysofcode, javascript, programming, motivation | Tackling the details more deeply towards topics often neglected the difference between their functions and primary purpose. Somehow, reviewing minor information would make it more understandable and easy to remember.

Here are the differences between the corresponding commands:
- Install dependencies from package.json:
```
npm install == yarn
```
- Install a package and add to package.json:
```
npm install package --save == yarn add package
```
- Install a devDependency to package.json:
```
npm install package --save-dev == yarn add package --dev
```
- Upgrade a package to its latest version:
```
npm update --save == yarn upgrade
```
- Install a package globally:
```
npm install package -g == yarn global add package
```
- Remove a dependency from package.json:
```
npm uninstall package --save == yarn remove package
```
## Extensions to consider
These are some extensions which I think would be helpful to make your coding more interactive.
- [Rainbow Brackets](https://plugins.jetbrains.com/plugin/10080-rainbow-brackets)
- [Reactjs code snippets](https://marketplace.visualstudio.com/items?itemName=xabikos.ReactSnippets)
- [Snazzy Operator](https://marketplace.visualstudio.com/items?itemName=aaronthomas.vscode-snazzy-operator)
- [Spirited Away Color Theme](https://marketplace.visualstudio.com/items?itemName=MaxfieldWalker.vscode-color-theme-spirited-away)
- [Bitter Sweet Theme](https://marketplace.visualstudio.com/items?itemName=gerane.Theme-Bittersweet)
- [Mark down all in One](https://marketplace.visualstudio.com/items?itemName=yzhang.markdown-all-in-one)
## Code snippet
```
npx cowsay hello
Ok to proceed? (y) y
_______
< hello >
-------
\ ^__^
\ (oo)\_______
(__)\ )\/\
||----w |
|| ||
```
Stay in touch with me along with my coding journey. Don't hesitate to reach out and let's venture together!
[](https://twitter.com/CantreRio)
[](https://medium.com/@smittenaf301icu)
[](https://dev.to/riocantre)
| riocantre |
915,722 | Understanding filters in Office 365 triggers | This article explains how to use filters provided for Office365 triggers. In Office 365 triggers you... | 0 | 2021-12-06T09:26:11 | https://tech.forums.softwareag.com/t/understanding-filters-in-office-365-triggers/253665 | webmethods, integration, connectors | ---
title: Understanding filters in Office 365 triggers
published: true
date: 2021-12-02 15:06:50 UTC
tags: #webmethods, #integration, #connectors
canonical_url: https://tech.forums.softwareag.com/t/understanding-filters-in-office-365-triggers/253665
---
This article explains how to use filters provided for Office365 triggers.
In Office 365 triggers you can use the Email and Subject filter to add a condition based on a particular email address or a particular subject term respectively.
[](https://aws1.discourse-cdn.com/techcommunity/original/3X/3/7/37daa58a91fe298bec6b5e48b7bd925ae24dab3a.png "image")
You can click on the “Add Email” and “Add Subject” buttons to add filters.
**Let’s see how to add filters for Email ids.**
[](https://aws1.discourse-cdn.com/techcommunity/original/3X/4/f/4f0634ef739ab930dce658c432d3a57b65c51616.png "image")
Once you click on Add Email button, you will see a field to add email ID.
Once you add the email ID in the Email ID field above, you are configuring the trigger to execute when the email contains the mentioned email ID.
**And** condition for Email ID.
[](https://aws1.discourse-cdn.com/techcommunity/original/3X/1/b/1bfa4d80e7f6667f9fd751a3baf8a9900b1427e4.png "image")
To set **AND** condition click on the “Add Email ID’s” button, shown in the above screenshot.
In the above screenshot, we have set an **AND** condition, where the trigger will execute only when the email contains both of the mentioned email IDs.
**OR** condition for Email ID’s
To add an **OR** condition see the below image.
[](https://aws1.discourse-cdn.com/techcommunity/original/3X/1/f/1f2d79b872379cff9724df1aea445d53c513b876.png "image")
To add an **OR** condition click on the “Add Email” button on the above image.
[](https://aws1.discourse-cdn.com/techcommunity/original/3X/7/2/7294e68654c5a5e46db35a154032bd3ccaa75b75.png "image")
The above screenshot depicts an example of the **OR** condition. The trigger will execute only when the email contains an email ID from either Email Ids 1 or Email Ids 2.
**Now let us see how to use the filter for both the Email ID and Subject field.**
You can set a filter when you want the trigger to get executed if it has a particular Email ID and a particular subject.
For this, you need to add an email ID in the Email field and a subject in the Subject field.
See the attached image:
[](https://aws1.discourse-cdn.com/techcommunity/original/3X/1/e/1e4d61028019887c9265720211d09677e02867df.png "image")
Here we have set a condition so that the trigger will execute when the email has [john@example.com](mailto:john@example.com) as one of the email IDs and the subject contains the word “demo”.
<small>1 post - 1 participant</small>
[Read full topic](https://tech.forums.softwareag.com/t/understanding-filters-in-office-365-triggers/253665) | techcomm_sag |
915,861 | Image next to the text | How can I put that text next to the image? | 0 | 2021-12-02T19:18:48 | https://dev.to/angeles_kerwin/imagen-al-lado-del-texto-57k |  | How can I put that text next to the image? | angeles_kerwin |
916,025 | 100 days of code: 30, progress in The Odin Project. | Hey hey hey! Welcome to day 30 of the challenge. I really haven't made much progress because I practiced the... | 0 | 2021-12-03T00:15:27 | https://dev.to/darito/100-dias-de-codigo-30-avance-en-the-odin-project-46cb | spanish, 100daysofcode, webdev | Hey hey hey!
Welcome to day 30 of the challenge. I really haven't made much progress because I practiced using prototypes in JavaScript.
I feel really motivated ever since I chose the path to follow in The Odin Project, which was JavaScript Full Stack.
### Yesterday:
- Practiced 30 minutes of touch typing.
- Started learning TypeScript with the official documentation, which you can find [here](https://www.typescriptlang.org/).
- Finished the entire Foundations section of The Odin Project.
- Researched which was the best path to follow as a full-stack developer and in the end chose JavaScript Full Stack.
### Today:
- Advanced to the library project section of the JavaScript path.
- Practiced 30 minutes of touch typing.
- Did some HackerRank exercises.
I won't write much today because it wasn't an especially varied day and I want to keep making progress on the exercise, so see you soon!
I hope you have great success with your projects.
Goodbye world! | darito |
916,030 | Guide to model training: Part 4 — Ditching datetime | TLDR Apply feature engineering by converting time series data to numerical values for... | 0 | 2021-12-03T01:09:35 | https://dev.to/mage_ai/guide-to-model-training-part-4-ditching-datetime-2eg6 | programming, machinelearning, tutorial, python | ## TLDR
Apply feature engineering by converting time series data to numerical values for training machine learning models.
## Outline
- Recap
- Before we begin
- The datetime data type
- Converting to date
- What’s next?
## Recap
In our series so far, we’ve gone over scaling data to prepare for model training. We started with a dataset filled with categorical and numerical values and scaled them so that a computer could understand them. For the remainder of our dataset, we’re almost ready to begin model training; we just need to scale our dates.
## Before we begin
In this section, we’ll be revisiting the datatypes of numerical and categorical values. Please read [part 1](https://www.mage.ai/blog/qualitative-data) and [part 2](https://www.mage.ai/blog/scaling-numerical-data) before proceeding if you’re unfamiliar with those terms. We’ll be using the same [big_data](https://app.box.com/s/ktd1t87fl925hjxkzsclp1343eq822f1) dataset used throughout the model training guides.
## Importance of dates
When collecting data to feed into machine learning models, it’s common to have data on when a user signed up. The model can use this information to find hidden correlations between users. Maybe there was a sign-up bonus or event for users when creating an account. The data would reflect that campaign’s success or failure and would be considered when reviewing the model.
## Modern day standards
Dates are important and critical to success, especially when collaborating across different locations or countries. Dates can be written in so many ways, across multiple time zones, so the internet agreed on a standard, [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601), last updated in 2019. It standardizes dates into what’s known as the datetime format, representing dates using numerical values.
## The datetime data types
Our dates are formatted as 2021-11-30, for example. It follows a year, month, day format. But when you think about what data type it is, it’s hard to say for sure. A computer thinks of it as an object or string at first. But when humans look at it, it’s obviously a number. So what is the actual data type?
### strftime format
In Pandas, there is a to_datetime function that will convert the datatype to a datetime value. This usually requires a formatter that specifies how to parse the input by year, month, day, day of week, month name, hour, minute, second, and even account for 12 hour time or time zones. Datetimes in Pandas follow the strftime format used in UNIX.
<center>_Datetime abbreviations and outputs cheat sheet (Source: [DevHints](https://devhints.io/datetime))_</center>
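To see the strftime directives in action, here is a small sketch using Python's standard library; Pandas' to_datetime accepts these same directives in its format argument:

```python
from datetime import datetime

# %d = zero-padded day, %m = zero-padded month, %Y = 4-digit year
dt = datetime.strptime("21-08-2021", "%d-%m-%Y")
print(dt)  # 2021-08-21 00:00:00

# The same directives work in reverse to format output
print(dt.strftime("%Y-%m-%d"))  # 2021-08-21
```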
## Converting dates
Our current dataset has one date column, Dt_Customer, logged when a user first signs up for an account. Upon inspection, it’s a string or object data type.
### String to datetime
Looking at the output, we see 21-08-2021, which shows that it is in day, month, year format. Comparing with the cheat sheet, we’ll format it with %d-%m-%Y.
<center>_The output standard is YYYY-MM-DD_</center>
### Datetime to Integer
But we aren’t done yet. Even though we have it in datetime format, machines still can’t use it directly. To finish the conversion, we’ll break the datetime down into separate columns for year, month, and day.
Because datetime values follow the ISO standard, Pandas provides functions to parse out specific portions. We’ll be using the dt.year, dt.month, and dt.day accessors.
Once we are sure that the values match, let’s remove the original column so the dataset contains only machine-readable values.
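Putting the steps above together, here is a minimal sketch on a toy frame (the Dt_Customer column name comes from our dataset; the values are made up):

```python
import pandas as pd

df = pd.DataFrame({"Dt_Customer": ["21-08-2021", "03-01-2020"]})

# String -> datetime, using the day-month-year format seen in the data
df["Dt_Customer"] = pd.to_datetime(df["Dt_Customer"], format="%d-%m-%Y")

# Datetime -> integer columns the model can use
df["Year"] = df["Dt_Customer"].dt.year
df["Month"] = df["Dt_Customer"].dt.month
df["Day"] = df["Dt_Customer"].dt.day

# Drop the original column so only machine-readable values remain
df = df.drop(columns=["Dt_Customer"])
print(df)
```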
## What’s next
Now all of our data has been transformed into a simple form that a computer can understand and use to generate models. Throughout the series we’ve covered scaling data, filling in missing values, and now converting dates to datetime. For our finale, we’ll take the finished datasets from parts 1 through 4 and combine them to begin training a classification model for remarketing: whether or not to send another email to our customers.
| mage_ai |
916,050 | Elixir Circuits.I2C with Mox | This is written in Japanese. I might convert it to English later, maybe. ... | 0 | 2021-12-03T02:34:36 | https://dev.to/mnishiguchi/elixir-circuitsi2c-with-mox-3186 | elixir, i2c, mox, mock | [Mox.verify_on_exit!/1]: https://hexdocs.pm/mox/Mox.html#verify_on_exit!/1
[Mox.stub_with/2]: https://hexdocs.pm/mox/Mox.html#stub_with/2
[mox]: https://hexdocs.pm/mox/Mox.html
[José Valim]: https://twitter.com/josevalim
[mocks-and-explicit-contracts]: https://dashbit.co/blog/mocks-and-explicit-contracts
[Elixir]: https://elixir-lang.org/docs.html
[circuits_i2c]: https://github.com/elixir-circuits/circuits_i2c
[aht20]: https://github.com/elixir-sensors/aht20
[dialyxir]: https://github.com/jeremyjh/dialyxir
[behaviour]: https://elixirschool.com/en/lessons/advanced/behaviours/
[I2C]: https://ja.wikipedia.org/wiki/I2C
[typespecs-and-behaviours]: https://elixir-lang.org/getting-started/typespecs-and-behaviours.html
[Circuits.I2C.functions]: https://hexdocs.pm/circuits_i2c/Circuits.I2C.html#functions
[arity]: https://ja.wikipedia.org/wiki/%E3%82%A2%E3%83%AA%E3%83%86%E3%82%A3
[GenServer]: https://hexdocs.pm/elixir/1.12/GenServer.html
This is written in Japanese. I might convert it to English later, maybe.
## Introduction
[mox] is a popular [Elixir] package for creating mocks in [Elixir] tests. Partly that is because it was created by [José Valim], the author of [Elixir], but it does more than just provide mocks: it also touches on [ideas for structuring your Elixir application better][mocks-and-explicit-contracts], so I think of it as something of a textbook.
In a word: you should not create ad-hoc mocks. When a mock is needed, first define the contract ([behaviour]) properly and build the mock based on it. As a result, the code arguably becomes easier to follow as well. [mox] comes in handy for creating mocks with that mindset.
That said, the way [mox] is configured is not very intuitive (until you get used to it), and I suspect it feels hard to approach the first time. Through trial and error I have arrived at a simple, easy-to-understand way of using [mox], so today I would like to share it. It is just one example among many possible approaches.
As an example, let's consider mocking [Elixir] code that communicates with a temperature sensor using [circuits_i2c].
Awesome-san (@torifukukaiou), a familiar face at [autoracex](https://autoracex.connpass.com/), the [Elixir] remote mokumoku (silent co-working) meetup, once said:
> Going back to the original sources is best.
> Everything is written in the original sources.
Please start by reading [José Valim's article][mocks-and-explicit-contracts] and the [documentation][mox]; I hope this post helps if you then find yourself stuck.
## Dependencies
- Install [mox].
- To define contracts properly, your [types][typespecs-and-behaviours] need to be well defined first. So, ideally, I personally think you should also type-check with [dialyxir].
```diff
# mix.exs
...
defp deps do
[
...
+ {:dialyxir, "~> 1.1", only: [:dev, :test], runtime: false},
+ {:mox, "~> 1.0", only: :test},
...
]
end
...
```
```
$ cd path/to/my_app
$ mix deps.get
```

## [Circuits.I2C][circuits_i2c]
- A handy [Elixir] package for [I2C] communication.
- For example, it can be used to talk to an [I2C]-capable sensor from an Elixir app.
- Since the app's counterpart is a sensor, the challenge is how to get the app running in a test environment that has no sensor.
Let's look at the [functions defined in Circuits.I2C][Circuits.I2C.functions]. These are the functions used to communicate with the sensor.
```elixir
Circuits.I2C.open(bus_name)
Circuits.I2C.read(i2c_bus, address, bytes_to_read, opts \\ [])
Circuits.I2C.write(i2c_bus, address, data, opts \\ [])
Circuits.I2C.write_read(i2c_bus, address, write_data, bytes_to_read, opts \\ [])
```
Below, we'll implement things so that `Circuits.I2C` can be swapped out for a mock.
## Defining the [behaviour]
- First, we define the contract for the "data transport layer". How you define it is up to you.
- As an example, here is a pattern I personally like.
```elixir
# lib/my_app/transport.ex
defmodule MyApp.Transport do
defstruct [:ref, :bus_address]
## Types used in this module
@type t ::
%__MODULE__{ref: reference(), bus_address: 0..127}
@type option ::
{:bus_name, String.t()} | {:bus_address, 0..127}
## Function types required by this behaviour
@callback open([option()]) ::
{:ok, t()} | {:error, any()}
@callback read(t(), pos_integer()) ::
{:ok, binary()} | {:error, any()}
@callback write(t(), iodata()) ::
:ok | {:error, any()}
@callback write_read(t(), iodata(), pos_integer()) ::
{:ok, binary()} | {:error, any()}
end
```
## Implementing the [behaviour] (production)
- An implementation intended to talk to the real sensor.
- When the code is relatively short, I often put it in the same file as the [behaviour] definition for convenience.
```elixir
# lib/my_app/transport.ex
defmodule MyApp.Transport.I2C do
@behaviour MyApp.Transport
@impl MyApp.Transport
def open(opts) do
bus_name = Access.fetch!(opts, :bus_name)
bus_address = Access.fetch!(opts, :bus_address)
case Circuits.I2C.open(bus_name) do
{:ok, ref} ->
{:ok, %MyApp.Transport{ref: ref, bus_address: bus_address}}
{:error, reason} ->
{:error, reason}
end
end
@impl MyApp.Transport
def read(transport, bytes_to_read) do
Circuits.I2C.read(transport.ref, transport.bus_address, bytes_to_read)
end
@impl MyApp.Transport
def write(transport, data) do
Circuits.I2C.write(transport.ref, transport.bus_address, data)
end
@impl MyApp.Transport
def write_read(transport, data, bytes_to_read) do
Circuits.I2C.write_read(transport.ref, transport.bus_address, data, bytes_to_read)
end
end
```
## Implementing the [behaviour] (stub)
- An implementation intended for environments without the sensor.
- Defines the mock's default behavior.
- The mock itself is an empty module; the stub gives the mock its behavior.
- The arguments only need to match in [arity].
- Return happy-path values that satisfy the [behaviour]'s types.
- When the code is relatively short, I often put it in the same file as the [behaviour] for convenience.
- If you really want to keep it under `test`, the [mox] docs describe [an approach using `test/support`](https://hexdocs.pm/mox/Mox.html#module-compile-time-requirements).
```elixir
# lib/my_app/transport.ex
defmodule MyApp.Transport.Stub do
@behaviour MyApp.Transport
@impl MyApp.Transport
def open(_opts) do
{:ok, %MyApp.Transport{ref: make_ref(), bus_address: 0x00}}
end
@impl MyApp.Transport
def read(_transport, _bytes_to_read) do
{:ok, "stub"}
end
@impl MyApp.Transport
def write(_transport, _data) do
:ok
end
@impl MyApp.Transport
def write_read(_transport, _data, _bytes_to_read) do
{:ok, "stub"}
end
end
```
## Preparing the mock module
- Prepare a mock module for tests.
- Make the MyApp.Transport implementation (`:transport_mod`) swappable.
- During tests, set `:transport_mod` to the mock (`MyApp.MockTransport`).
```diff
# test/test_helper.exs
+ Mox.defmock(MyApp.MockTransport, for: MyApp.Transport)
+ Application.put_env(:my_app, :transport_mod, MyApp.MockTransport)
ExUnit.start()
```
## Writing the app using `MyApp.Transport`
For example, a [GenServer] that reads the temperature from the sensor looks like this:
```elixir
# lib/my_app.ex
defmodule MyApp do
use GenServer
@type option() :: {:name, GenServer.name()} | {:bus_name, String.t()}
@spec start_link([option()]) :: GenServer.on_start()
def start_link(init_arg \\ []) do
GenServer.start_link(__MODULE__, init_arg, name: init_arg[:name])
end
@spec measure(GenServer.server()) :: {:ok, MyApp.Measurement.t()} | {:error, any()}
def measure(server), do: GenServer.call(server, :measure)
@impl GenServer
def init(config) do
bus_name = config[:bus_name] || "i2c-1"
    # Don't hardcode the module here!
case transport_mod().open(bus_name: bus_name, bus_address: 0x38) do
{:ok, transport} ->
{:ok, %{transport: transport}, {:continue, :init_sensor}}
error ->
raise("Error opening i2c: #{inspect(error)}")
end
end
...
  # A function that lets us swap the module dynamically.
  # Be careful: defining this as a module attribute (@transport_mod) would fix it
  # at compile time. A function is evaluated at runtime, which is intuitive and safe.
defp transport_mod() do
Application.get_env(:my_app, :transport_mod, MyApp.Transport.I2C)
end
end
```
## Writing tests with the mock
- `import Mox` makes the [mox] functions available.
- The `setup` lines are standard boilerplate.
```elixir
# test/my_app_test.exs
defmodule MyAppTest do
use ExUnit.Case
import Mox
setup :set_mox_from_context
setup :verify_on_exit!
setup do
    # Set the stub on the mock. This lets the code run even without a sensor.
Mox.stub_with(MyApp.MockTransport, MyApp.Transport.Stub)
:ok
end
...
```
```elixir
# test/my_app_test.exs
test "measure" do
    # In each test, use expect to specify exactly which function is "expected" to be called, how, and how many times.
MyApp.MockTransport
|> Mox.expect(:read, 1, fn _transport, _data ->
{:ok, <<28, 113, 191, 6, 86, 169, 149>>}
end)
assert {:ok, pid} = MyApp.start_link()
assert {:ok, measurement} = MyApp.measure(pid)
assert %MyApp.Measurement{
humidity_rh: 44.43206787109375,
temperature_c: 29.23145294189453,
timestamp_ms: _
} = measurement
end
test "measure when read failed" do
MyApp.MockTransport
|> Mox.expect(:read, 1, fn _transport, _data ->
{:error, "Very bad"}
end)
assert {:ok, pid} = MyApp.start_link()
assert {:error, "Very bad"} = MyApp.measure(pid)
end
```
The pattern introduced here is used heavily in the [AHT20 Elixir package](https://github.com/elixir-sensors/aht20).
That's all!
:tada::tada::tada:
| mnishiguchi |
916,057 | Code Smell 108 - Float Assertions | Asserting two float numbers are the same is a very difficult problem TL;DR: Don't compare... | 9,470 | 2021-12-03T03:11:21 | https://maximilianocontieri.com/code-smell-108-float-assertions | oop, tutorial, codenewbie, cleancode | *Asserting two float numbers are the same is a very difficult problem*
> TL;DR: Don't compare floats
# Problems
- Wrong test results
- Fragile tests
- Fail fast principle violation
# Solutions
1. Avoid floats unless you have REAL performance concerns
2. Use arbitrary-precision numbers
3. If you need to compare floats, compare with a tolerance.
# Context
Comparing float numbers is an old computer science problem.
The usual solution is to use threshold comparisons.
We recommend avoiding floats at all and trying to use infinite precision numbers.
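To sketch the arbitrary-precision option, here is a small example using Java's standard BigDecimal (the class name ExactDecimals is just for this illustration). Note that compareTo is used rather than equals, since equals also compares scale:

```java
import java.math.BigDecimal;

public class ExactDecimals {
    // compareTo treats 0.30 and 0.3 as equal; equals() would not, as it compares scale
    static boolean exactSum(String a, String b, String expected) {
        return new BigDecimal(a).add(new BigDecimal(b))
                .compareTo(new BigDecimal(expected)) == 0;
    }

    public static void main(String[] args) {
        System.out.println(0.1 + 0.2 == 0.3);              // false: binary floats drift
        System.out.println(exactSum("0.1", "0.2", "0.3")); // true: decimals are exact
    }
}
```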
# Sample Code
## Wrong
[Gist Url]: # (https://gist.github.com/mcsee/2fc79af85305eaada328fd324cb38c0d)
```java
Assert.assertEquals(0.0012f, 0.0012f); // Deprecated
Assert.assertTrue(0.0012f == 0.0012f); // Not JUnit - Smell
```
## Right
[Gist Url]: # (https://gist.github.com/mcsee/570958fcfb8e52379b7ddde2389ad6f8)
```java
Assert.assertEquals(0.0012f, 0.0014f, 0.0002); // true
Assert.assertEquals(0.0012f, 0.0014f, 0.0001); // false
// last parameter is the delta threshold
Assert.assertEquals(12 / 10000, 12 / 10000); // true
Assert.assertEquals(12 / 10000, 14 / 10000); // false
```
# Detection
[X] Automatic
We can add a check on *assertEquals()* in our testing frameworks to flag float comparisons.
# Tags
- Test Smells
# Conclusion
We should always avoid comparing floats.
# Relations
{% post https://dev.to/mcsee/code-smell-71-magic-floats-disguised-as-decimals-2g7p %}
# More Info
- [Fail fast](https://dev.to/mcsee/fail-fast-48dm)
# Credits
Photo by <a href="https://unsplash.com/@mbaumi">Mika Baumeister</a> on <a href="https://unsplash.com/s/photos/numbers">Unsplash</a>
* * *
> God made the natural numbers; all else is the work of man.
_Leopold Kronecker_
{% post https://dev.to/mcsee/software-engineering-great-quotes-26ci %}
* * *
This article is part of the CodeSmell Series.
{% post https://dev.to/mcsee/how-to-find-the-stinky-parts-of-your-code-1dbc %} | mcsee |
916,162 | JavaScript Course for free 2022 From Zero to Expert | In this course section, we will share with you the JavaScript course for free in 2022: from zero to... | 0 | 2021-12-03T05:50:59 | https://dev.to/alimammiya/javascript-course-for-free-2022-from-zero-to-expert-5bj4 | javascript, tutorial, beginners, programming | In this course section, we will share with you the [JavaScript course for free](https://usemynotes.com/javascript/) in 2022: from zero to expert! The modern JavaScript course for everyone! This JavaScript course contains challenges, theory, and Interview questions. So let’s start.
## What you'll learn
- Become an advanced JavaScript developer from scratch.
- How to become job-ready by understanding how JavaScript really works behind the scenes.
- JavaScript fundamentals: if/else, loops, operators, variables, boolean, arrays, functions, strings, and objects.
- Modern Object-oriented programming (OOP): Classes, prototypal inheritance, constructors, encapsulation, etc.
- Asynchronous JavaScript: Event loop, promises, and async/await.
- How to think and work like a good developer: researching, problem-solving, and, workflows.
- Modern ES5 and ES2015 from the beginning: arrow functions, spread operator, optional chaining, and destructuring.
- How to architect your code using common patterns and flowcharts.
## JavaScript Course content
- [What is JavaScript?](https://usemynotes.com/what-is-javascript/)
- [What are Variables in JavaScript?](https://usemynotes.com/what-are-variables-in-javascript/)
- [What are data types in JavaScript?](https://usemynotes.com/what-are-data-types-in-javascript/)
- [What are Operators in JavaScript?](https://usemynotes.com/what-are-operators-in-javascript/)
- [What are functions in JavaScript?](https://usemynotes.com/what-are-functions-in-javascript/)
- [What is arrow function in JavaScript?](https://usemynotes.com/what-is-arrow-function-in-javascript/)
- [JavaScript Conditional Statements](https://usemynotes.com/javascript-conditional-statements/)
- [JavaScript Switch Statement](https://usemynotes.com/javascript-switch-statement/)
- [Loops in JavaScript](https://usemynotes.com/loops-in-javascript/)
- [Break and Continue in JavaScript](https://usemynotes.com/break-and-continue-in-javascript/)
- [String and Methods in JavaScript](https://usemynotes.com/string-and-methods-in-javascript/)
- [Arrays in JavaScript](https://usemynotes.com/arrays-in-javascript/)
- [Object in JavaScript](https://usemynotes.com/object-in-javascript/)
- [Date in JavaScript](https://usemynotes.com/date-in-javascript/)
- [Math in JavaScript](https://usemynotes.com/math-in-javascript/)
- [JavaScript Popup Boxes](https://usemynotes.com/javascript-popup-boxes/)
- [Events in JavaScript](https://usemynotes.com/events-in-javascript/)
- [DOM in JavaScript](https://usemynotes.com/dom-in-javascript/)
- [Classes in JavaScript](https://usemynotes.com/classes-in-javascript/)
- [Inheritance in JavaScript](https://usemynotes.com/inheritance-in-javascript/)
- [Difference between ES5 and ES2015](https://usemynotes.com/difference-between-es5-and-es2015/)
- [Latest Version of ECMAScript](https://usemynotes.com/latest-version-of-ecmascript/)
- [Promises in JavaScript](https://usemynotes.com/promises-in-javascript/)
- [Async and Await in JavaScript](https://usemynotes.com/async-and-await-in-javascript/)
- [Destructuring in JavaScript](https://usemynotes.com/destructuring-in-javascript/)
- [JavaScript Interview Questions and Answers](https://usemynotes.com/javascript-interview-questions-and-answers/)
## Requirements
- No coding experience is required to take this free JavaScript course! I'll lead you from beginner to expert level!
- Any computer and operating system (OS) will work — Microsoft Windows, Linux, or macOS. We will set up your code editor for the course.
- A basic understanding of HTML and CSS is a plus point.
## Description
JavaScript is the most popular programming/coding language in the world. It powers the entire modern web. It provides millions of high-paying jobs in the world.
That's why you want to learn JavaScript too. And you came to the right place!
### Why This JavaScript Course Is Right for You?
*This is the complete JavaScript course provided by [Use My Notes](https://usemynotes.com/). It's an all-in-one course that will take you from the very fundamentals of JavaScript, all the way to building complex applications.*
You will learn modern JavaScript step by step from the very beginning. I'll walk you through practical and fun code examples and important principles of how JavaScript works behind the scenes.
You'll also learn how to think like a developer, how to plan application features, how to architect your code, how to debug code, and many other real-world skills you'll need in your developer job.
And unlike other courses, it really covers beginner, intermediate and advanced topics, so you don't need to buy any other courses to master JavaScript from the beginning!
But... you don't need to go through all of these topics. This is a huge course because, after all, it is "The Complete JavaScript Course". In fact, it's like many courses in one. But you can become an excellent developer by learning only certain parts of the curriculum. That's why I have designed this course in a very modular way, with paths that let you move through it faster.
### So what exactly is covered in the course?
- Master the JavaScript fundamentals: if/else, loops, operators, variables, boolean, arrays, functions, strings, and objects.
- Learn modern JavaScript (ES2015) from the beginning: optional chaining, arrow functions, spread operator, and destructuring.
- How JavaScript works behind the scenes: hoisting, reference values, engines, the call stack, scoping, the 'this' keyword, and more.
- Deep dive into functions: higher-order functions, bind, arrow functions, first-class, and closures.
- Deep dive into OOP: constructor functions (ES2015), classes (ES5), encapsulation, abstraction, inheritance, prototypal inheritance, and polymorphism.
- Deep dive into asynchronous JavaScript: the promises, async/await, event loop, and error handling. You will use these with AJAX calls to access data from third-party APIs.
- Learn the modern tools used by professional web developers.
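To give a small taste of the modern syntax listed above (a standalone sketch, not taken from the course material):

```javascript
// Destructuring with a default value, and the spread operator
const user = { name: "Ada", skills: ["js", "html"] };
const { name, country = "unknown" } = user;
const skills = [...user.skills, "css"];

// Arrow function and optional chaining
const shout = (s) => s.toUpperCase();
console.log(shout(name));        // ADA
console.log(country);            // unknown
console.log(skills.length);      // 3
console.log(user.address?.city); // undefined (no crash on the missing object)
```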
### This course is for you if...
- If you want to gain a true & deep understanding of JavaScript
- If you're trying to learn JavaScript but: 1) still don't really understand JavaScript, or 2) still don't feel confident coding real apps.
## Who this course is for:
- If you want to get a true and deep understanding of JavaScript then take this course so that you can get a better understanding.
- Take this course if you're trying to learn JavaScript but: 1) still don't really understand JavaScript, or 2) still don't feel confident coding real apps.
- Take this course if you're interested in using a library/framework like React, Angular, Vue, or Node in the future.
- If you already know JavaScript so kindly Take this course for advanced. This course covers specialist topics for your better understanding.
- If you want to get started with programming, take this course: JavaScript is a great first language!
Originally posted on - [JavaScript Course for free](https://alimammiya.hashnode.dev/javascript-course-for-free-2022-from-zero-to-expert) | alimammiya |
916,205 | Create QR codes in JavaScript | This is an example of how you could create QR codes in JavaScript (and Node.js) with the "qrcode" npm... | 0 | 2021-12-03T07:43:34 | https://dev.to/coder4_life/create-qr-codes-in-javascript-325 | javascript, node, webdev, programming | This is an example of how you could create QR codes in JavaScript (and Node.js) with the "qrcode" npm package. This video includes browser and Node.js examples where we create the QR codes as base64 image strings.
{% youtube U0pRmzCxbnk %} | coder4_life |
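For reference, a minimal sketch of the approach the video covers. This assumes the third-party `qrcode` package has been installed with `npm install qrcode`; `toDataURL` is that package's documented method for producing a base64-encoded image string:

```javascript
const QRCode = require("qrcode");

// Generate a QR code as a base64-encoded PNG data URL
QRCode.toDataURL("https://dev.to", (err, url) => {
  if (err) throw err;
  console.log(url); // data:image/png;base64,...
});
```

The same data URL can be assigned to an `<img>` element's `src` in the browser, or decoded and written to a file in Node.js.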
916,370 | Instalación de DDEV y despliegue de proyecto Drupal 9 en Ubuntu 20.04 | Instalación de los componentes necesarios para ejecutar DDEV y crear entornos locales de trabajo de... | 0 | 2021-12-03T11:08:56 | https://dev.to/daniconil/instalacion-de-ddev-y-despliegue-de-proyecto-drupal-9-en-ubuntu-2004-236 | ddev, drupal, docker, ubuntu | Installation of the components needed to run DDEV and create local PHP development environments. Docker and Docker Compose are essential prerequisites, and we will install them with the minimum configuration needed for DDEV to work.
The outline of this tutorial is:
1. Installing Docker
2. Installing Docker Compose
3. Installing DDEV and the Drupal 9 template
## What is DDEV?
DDEV is an open-source tool that makes it easy to create local PHP development environments. Once an instance is set up on your server, you can deploy project frameworks such as Drupal, WordPress, TYPO3, Backdrop, Magento, Laravel, and so on.
An important detail about DDEV is its nature as a local development environment; that is, it is not visible on the network. To make it reachable from another machine, you can use [ngrok](https://ngrok.com) to give the environment a URL.
[Official DDEV website](https://ddev.com)
[DDEV documentation](https://ddev.readthedocs.io/en/stable/)
## What is Docker?

Docker is a platform that packages software into units called "containers", bundling the requirements needed to run a given project, for example a specific version of a database server, libraries, Java, etc., which may differ from those of the host operating system. Hence the word "container": it encapsulates those requirements so that whatever runs inside it works as intended.
Although this definition of Docker might sound like virtualization, keep in mind that the main difference is that containers consume fewer resources than virtual machines, since there is no need to install one operating system on top of another, as happens with virtualization.
[Official Docker website](https://www.docker.com)
[Docker documentation](https://docs.docker.com)
## What is Docker Compose?
Docker Compose is a tool for managing multiple containers through a YAML configuration file, allowing you to bring a container up or tear it down with a single command.
[Docker Compose documentation](https://docs.docker.com/compose)
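For illustration only (this file is hypothetical and not part of the DDEV setup, since DDEV generates its own Compose configuration internally), a minimal `docker-compose.yml` describing two containers could look like this:

```yaml
# Two services managed together: a web server and a database
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
  db:
    image: mariadb:10.3
    environment:
      MYSQL_ROOT_PASSWORD: example
```

With a file like this, `docker-compose up -d` brings both containers up, and `docker-compose down` removes them.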
:heart: to @davidjguru, [penyaskito](https://github.com/drud/ddev-contrib/pull/128), [erikaheidi](https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-compose-on-ubuntu-20-04), and [bhogan](https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-20-04).
---
## Installing Docker
Update the system to install the most recent packages:
```
$ sudo apt update
```
Install the necessary prerequisites:
```
$ sudo apt install build-essential apt-transport-https ca-certificates software-properties-common curl
```
Add the GPG key of the official Docker repository:
```
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
```
and, after the OK, add the repository:
```
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"
```
If the repositories are not refreshed automatically, run again:
```
$ sudo apt update
```
After the update above, check that there are no errors and install Docker from the official repository, not from Ubuntu's default ones:
```
$ sudo apt install docker-ce
```
Check that it has been installed and is running by executing:
```
$ sudo systemctl status docker
```
which gives output similar to this:
```
[sudo] password for daniel:
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2021-08-27 19:27:56 CEST; 25min ago
TriggeredBy: ● docker.socket
Docs: https://docs.docker.com
Main PID: 736 (dockerd)
Tasks: 11
Memory: 114.5M
CGroup: /system.slice/docker.service
└─736 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
```
That is enough for the purposes of this tutorial. You can explore Docker's commands by running
```
$ docker
```
or by visiting the [dedicated official documentation](https://docs.docker.com/engine/reference/commandline/docker).
## Extra
If we get the following error when starting Docker:
```
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/ >docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.27/containers/json: dial unix /var/run/docker.>sock: connect: permission denied
```
grant the *user* permissions in the *docker* group as follows:
```
$ sudo usermod -aG docker $USER && sudo reboot
```
forcing a reboot to apply the changes.
## Installing Docker Compose
Next we download Docker Compose and set the permissions needed to run it.
### Downloading the file
At the time of writing this tutorial, the latest version is `2.1.1`; check the most recent one in the [releases list on GitHub](https://github.com/docker/compose/releases). You would only need to change the number. The variables `$(uname -s)` and `$(uname -m)` select your system and architecture (32- or 64-bit), respectively.
```
$ sudo curl -L "https://github.com/docker/compose/releases/download/v2.1.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
```
### Permissions
Adjust the permissions so that it is executable:
```
$ sudo chmod +x /usr/local/bin/docker-compose
```
Check the installation:
```
$ docker-compose --version
```
We should get a result similar to:
> docker-compose version 2.1.1
With that, we have confirmed that Docker Compose is correctly installed.
## Installing DDEV and the Drupal 9 template

We split this part in two for clarity.
## Installing DDEV
Download the script containing the commands to download and install DDEV:
```
$ curl -O https://raw.githubusercontent.com/drud/ddev/master/scripts/install_ddev.sh
```
The [contents of this script](https://github.com/drud/ddev/blob/master/scripts/install_ddev.sh) can be inspected at the following link.
Make the script executable:
```
$ chmod +x install_ddev.sh
```
and run it with:
```
$ ./install_ddev.sh
```
During the process, it may ask to install some dependencies or prerequisites. Once finished, run the following command to check that everything is correct:
```
$ ddev version
```
We get output similar to this, depending on the versions and the architecture of our system:
```
ITEM VALUE
DDEV version v1.18.1
architecture amd64
db drud/ddev-dbserver-mariadb-10.3:20211017_mysql_arm64
dba phpmyadmin:5
ddev-ssh-agent drud/ddev-ssh-agent:v1.18.0
docker 20.10.11
docker-compose v2.1.1
mutagen 0.12.0
os linux
router drud/ddev-router:v1.18.0
web drud/ddev-webserver:v1.18.1
```
By running
```
$ ddev
```
we list the `ddev` commands. See the [documentation](https://ddev.readthedocs.io/en/stable/users/cli-usage/#favorite-commands) for more details.
*Note: Up to this point, any installation with DDEV is identical, so if we want to install other templates, such as Magento instead of Drupal, none of the work done so far needs to be repeated.*
## Installing the Drupal 9 template
Create and enter the folder that will hold the project:
```
$ mkdir miproyecto && cd miproyecto
```
Inside the root folder, we create the Drupal 9 directory structure. Both options below give the same result; looking at them helps to better understand how DDEV works.
- Option 1
Typing in the terminal
```
$ ddev config
```
it asks us for the path where we want to save it and the template type. In this case, we choose Drupal 9. I have marked in bold what I typed; the rest is the installation output:
```
daniel@ubuntuserver:~/miproyecto$ ddev config
Creating a new ddev project config in the current directory (/home/daniel/miproyecto)
Once completed, your configuration will be written to /home/daniel/miproyecto/.ddev/config.yaml
Project name (miproyecto): Drupalea
The docroot is the directory from which your site is served.
This is a relative path from your project root at /home/daniel/miproyecto
You may leave this value blank if your site files are in the project root
Docroot Location (current directory): [left blank]
Found a php codebase at /home/daniel/miproyecto.
Project Type [backdrop, drupal6, drupal7, drupal8, drupal9, laravel, magento, magento2, php, shopware6, typo3, wordpress] (php): drupal9
Ensuring write permissions for Drupalea
No settings.php file exists, creating one
Existing settings.php file includes settings.ddev.php
Configuration complete. You may now run 'ddev start'.
```
and the terminal prompt returns. It's important to note that I left `Docroot location` blank so that, by default, the structure is created in the root folder rather than in a subdirectory, for example.
- Option 2
Alternatively, we can supply both of the answers above up front via parameters:
```
$ ddev config --docroot=web --create-docroot --project-type=drupal9
```
We get the following output:
```
Creating a new ddev project config in the current directory (/home/daniel/miproyecto)
Once completed, your configuration will be written to /home/daniel/miproyecto/.ddev/config.yaml
Created docroot at /home/daniel/miproyecto/web
You have specified a project type of drupal9 but no project of that type is found in /home/daniel/miproyecto/web
Ensuring write permissions for miproyecto
No settings.php file exists, creating one
Existing settings.php file includes settings.ddev.php
Configuration complete. You may now run 'ddev start'.
```
For now, there is no need to run `ddev start`.
### Downloading the Drupal 9 template via Composer
Once we have the structure, and without leaving the root folder, run:
```
$ yes | ddev composer create drupal/recommended-project
```
The leading "yes" automates the step in which it asks for permission to continue and rewrite the structure created in the previous step. Since part of the directory tree created with `ddev config` is being modified, it is normal for it to ask before proceeding.
*Note: Don't confuse "Docker Compose", described in this tutorial, with "Composer", the package manager for PHP projects with which we'll install the Drupal 9 packages.*
Install **Drush**, a shell-based application used to control, manipulate, and administer Drupal websites, along with other essential modules for this CMS, such as admin_toolbar and devel, which add extra navigation functionality.
```
$ ddev composer require drush/drush drupal/admin_toolbar drupal/devel
```
We get output similar to this:
```
Using version ^10.6 for drush/drush
./composer.json has been created
Running composer update drush/drush
Loading composer repositories with package information
Updating dependencies
Lock file operations: 46 installs, 0 updates, 0 removals
-Locking chi-teck/drupal-code-generator (1.33.1)
...
Writing lock file
Installing dependencies from lock file (including require-dev)
Package operations: 46 installs, 0 updates, 0 removals
-Downloading ... Extracting archive
-Installing ...: Extracting archive
...
12 package suggestions were added by new dependencies, use `composer suggest` to see details.
Package container-interop/container-interop is abandoned, you should avoid using it. Use psr/container instead.
Generating autoload files
18 packages you are using are looking for funding.
Use the `composer fund` command to find out more!
```
*Note: The ellipses represent the long list of packages being downloaded and installed automatically.*
For more on this type of command-line installation, see the [official Drupal documentation](https://www.drupal.org/docs/develop/using-composer/using-composer-to-install-drupal-and-manage-dependencies) on the subject.
Before launching the instance, set up the database:
```
$ ddev exec drush si --site-name=Drupaleame --account-name=admin --account-pass=password -y
```
As the final step before getting a running Drupal 9 instance, launch what we have installed:
```
$ ddev start
```
We get output similar to:
```
Starting Drupalea...
Building ddev-ssh-agent
Recreating ddev-ssh-agent ...
Recreating ddev-ssh-agent ... done
ssh-agent container is running: If you want to add authentication to the ssh-agent container, run 'ddev auth ssh' to enable your keys.
Running Command=ip address show dev docker0
Building db
Building web
Creating ddev-Drupalea-db ...
Creating ddev-Drupalea-db ... done
Creating ddev-Drupalea-dba ...
Creating ddev-Drupalea-web ...
Creating ddev-Drupalea-web ... done
Creating ddev-Drupalea-dba ... done
Creating ddev-router ...
Creating ddev-router ... done
Ensuring write permissions for Drupalea
Existing settings.php file includes settings.ddev.php
Ensuring write permissions for Drupalea
Successfully started Drupalea
Project can be reached at https://drupalea.ddev.site https://127.0.0.1:49154
```
*Note: The port may be different.*
- Option 1: system with a graphical environment
If the local environment where we installed `DDEV` **has a desktop such as GNOME, KDE, XFCE, etc.**, we can run the following from a terminal to open the Drupal configurator in our browser:
```
$ ddev launch
```
- Option 2: system without a graphical environment
If we have installed DDEV on a server without a desktop and run `ddev launch`, we get the following error:
```
/home/daniel/miproyecto/.ddev/commands/host/launch: line 61: xdg-open: command not found
Failed to run launch ; error=exit status 127
```
As expected, since there is no graphical environment, xdg-open is not installed on the system, and there is no target to open the site with, which would be a browser such as Mozilla Firefox.
In this case, we try the **lynx** browser as follows:
We install the browser from the Ubuntu repositories:
```
$ sudo apt install lynx
```
and try it out:
```
$ lynx drupalea.ddev.site
```
In the terminal, we will see the text of a Drupal configuration page.
>**Welcome to Example-Drupal |Example-Drupal (p1 of 2)**
> #alternate
>
> Skip to main content
>
>User account menu
>
> Show — User account menu Hide — User account menu
> * Log in
>
> Home
> Drupaleame
>
>Main navigation
>
> Show — Main navigation Hide — Main navigation
> * Home
>
>Welcome to Drupaleame
>
> No front page content has been created yet.
> Follow the User Guide to start building your site.
> Subscribe to
From now on, with Docker, Docker Compose and DDEV present on our machine, we can try other templates such as WordPress or Magento through DDEV.
---
Foto: "[Containers](https://www.flickr.com/photos/60132504@N08/5939008153)" by tsuna72 is licensed with [CC BY 2.0](https://creativecommons.org/licenses/by/2.0/) | daniconil |
916,488 | Top IT Career Meetups in November, 2021 | Eager to see the finest IT Career meetup videos from this November? Say no more, check our... | 0 | 2021-12-03T14:38:05 | https://blog.meetupfeed.io/it-career-meetups-november-2021/ | career, programming, techtalks | Eager to see the finest IT Career [**meetup videos**](https://blog.meetupfeed.io/it-career-meetups-november-2021/) from this November? Say no more, check our hand-picked selection below.
[**Writing Code is Easy, Being a Great Developer is Hard | Helen Scott**](https://meetupfeed.io/talk/tech-talks-writing-code-is-easy-being-a-great-developer-is-hard)
Dive into matters with Helen, who tells you all the things you need to know besides how to write code well. Explore a whole bunch of other skills you might not have even realised you needed – until now. Find out how you can be an extraordinary developer wanted by many headhunters.
[**An Introduction To Accessibility & Why A Techy Needs A Personal Brand | Robert Precious & Rachel Skelton**](https://meetupfeed.io/talk/an-introduction-to-accessibility-why-a-techy-needs-a-personal-brand)
Jump into accessibility and let Robert elaborate on the triggers that start accessibility work at a company. Let him guide you through the start, problems, testing and so much more. Why does a techie need a personal brand now more than ever? Rachel continues by answering all your questions and making it easier for you to stand out of the crowd.
[**Five Programming Interview Red Flags In Your Career | Ryan McBeth**](https://meetupfeed.io/talk/five-programming-interview-red-flags)
Be prepared for any interview in your career and spare your own time by looking at instant red flags. You will thank Ryan McBeth in the long run, although it might be hard turning down an offer based on these things. But having a job at a company where you can truly fit in is invaluable. This is how you can easily filter your offers.
[**Emotional Intelligence in Tech | Rosemarie Wilson**](https://meetupfeed.io/talk/emotional-intelligence-in-tech-with-rosemarie-wilson)
Some might think that being a developer is all about the technical skills, but let’s not forget about emotional intelligence, which is just as important for success. Get an overview of how you can communicate your solutions better by improving your emotional intelligence. Let’s not forget that you still have to work with other people even as a developer.
[**Five Programming Skills to Boost your Resume | Ryan McBeth**](https://meetupfeed.io/talk/five-programming-skills-to-boost-your-resume)
Learn about five skills that will boost your resume right away. And you can learn them in only two weeks! The mentioned skills are SQL, clustered & non-clustered indexes, Jira, Git, AWS & Azure, and more. Which of these skills do you already possess?
| meetupfeedio |
916,610 | Best Mobile Apps For Couples Who Want To Stay Connected | Being next to your soul mate every minute gives you joy and warmth. However, time can also be spent... | 0 | 2021-12-03T14:49:17 | https://dev.to/downeranthony/best-mobile-apps-for-couples-who-want-to-stay-connected-3lb7 | mobile, android, ios | Being next to your soul mate every minute gives you joy and warmth. However, time can also be spent with benefit. You can share secret desires, learn something new about your loved one, and just have fun together via mobile apps. Also, with their help, you can not miss important dates and always be aware of the upcoming significant days for the relationship.
## Top Mobile Apps For Couples
Here are some of the top apps for couples to consider:
### 1) Truth or Dare
In it you can play the popular game "Truth or Dare" with a choice of the level that you desire. Questions will be highlighted for you and your loved one, but you will have to answer them extremely truthfully. Otherwise, you need to perform the specified action.
The app can be used on dates. Truth or Dare offers a special mode for intimate life with more explicit questions and challenges for couples in love.
**Advantages:**
- Thousands of questions with tasks for a couple.
- Difficulty levels with the frankness setting.
- The ability to create your own game.
Couples can play not only on their own. The app offers modes for a big company to play at a party or even within a family circle without frank and embarrassing questions or actions.
### 2) Desire
The application is designed to help couples diversify their relationships and intimate life. It is equally suitable for people with new relationships and those who have been married for a long time. Desire will push you to discover new horizons and rethink life. And your relationship with your partner will freshen up and become stronger.
In Desire, you have to choose from hundreds of tasks or create your own and challenge your soul mate. For each consent to the accomplishment of the assigned task, the person receives points. And with their accumulation, hotter tests open up for couples. The application also offers access to a private chat with the ability to send photos and secret communication. This functionality is extremely useful for couples who are at a distance.
**Advantages:**
- Daily task updates.
- Secret chat for a couple with photo sending.
- Private diary with history of events.
You can also take part in the daily quiz. The app will ask you and your loved one a personal question.
**Also Read:** [Top 9 Dating Apps](https://www.hyperlinkinfosystem.com/blog/top-9-dating-apps)
### 3) Couple Widget — Love Events Countdown
One of the most popular apps for couples in love. With it, you will be aware of the most important information about your relationship. And you will not forget to congratulate your soul mate.
All events are displayed inside the built-in calendar. You can also view a list of the nearest dates, indicating the number of days remaining until them. The customizable widget is capable of displaying this information in a variety of formats.
**Advantages:**
- Accurate counting of events and holidays in the relationship of a couple.
- Several days advance notice of important dates.
- Ample opportunities for customizing the interface or widget.
The application will be useful to both partners. And to exclude the access of unauthorized people, you can set a numeric password for the entrance.
### 4) Never Have I Ever
The game contains thousands of questions and tasks with different levels of frankness. The essence of the game is to answer truthfully about whether you have ever done something. If a person in the pair has done it, they drink the chosen drink. These games help couples relax, get to know each other, and build confidence.
You can also create your own sets of question cards, so you can organize the most fun evening for your partner, considering their interests. Before a standard game you can choose one of four difficulty levels; if the tasks seem overly difficult or explicit, you just need to lower the level to easy.
**Advantages:**
- Hundreds of questions with original problems.
- Functionality for creating your own cards.
### 5) Love Calendar and Widget
The popular calendar app helps you keep track of all upcoming events for your couple. It calculates holiday dates from your first date, relationship start, engagement or wedding day. Also, the built-in tracker shows the elapsed time since any important event. And the dark theme makes the app comfortable to use at night.
The desktop widget shows all the information you need about lovers at the moment. You can customize it by its appearance or the displayed event. Also, colorful cards for many dates are available for lovers. You only need to choose one and send it to your significant other in any convenient way.
**Advantages:**
- Event tracking with notifications.
- Built-in greeting cards.
- Convenient widgets for the desktop.
Such a calendar will be useful to all people in serious relationships. With it, you will not miss any date and will be able to immediately congratulate your loved one.
### 6) Which Of Us
This is one of the best apps for couples in love. It is a fun game with interesting questions. After launch, participants are shown a short phrase describing an action. It is necessary to determine which of those present performs it more often or better. The application will help you find your common interests and will bring you a little closer.
Couples in love should choose the "Adult" mode. It is there that the frankest questions are found. They are all invented by the development team. Therefore, the effect of the game will be as vivid as possible.
**Advantages:**
- Search for common interests, topics for communication and a fun pastime.
- Hand-picked excitement questions.
- Work without internet connection.
There is a limited amount of time available for questions. Therefore, each pair will have to answer quickly and honestly.
## Conclusion
The above-mentioned apps are some of the best mobile apps designed for people in romantic relationships. They help strengthen the bonds of the relationship between the people involved. When you consider developing a mobile app for yourself and your loved ones, you should get in contact with a leading Android and [iOS app development agency](https://www.hyperlinkinfosystem.com/mobile-app-development-agency) such as Hyperlink InfoSystem. | downeranthony |
916,615 | Set up es-lint, prettier, husky, lint-staged in vs-code for vanilla JS | As a beginner developer I ran to different problem while setting up the es-lint ,prettier,husky and... | 0 | 2021-12-05T16:07:00 | https://dev.to/adarshmaharjan/set-up-es-lintprettierhuskylint-staged-in-vs-code-1oig | javascript, webdev, beginners, tutorial | As a beginner developer I ran to different problem while setting up the es-lint ,prettier,husky and lint-staged. This is the beginners guide to setting up the es-lint with prettier by using husk with lint-staged.I have compiled the knowledge from different docs,blogs,videos and hints while writing this blog and the reference are at the bottom of the page
In this blog I have assumed that the user knows the basics of JavaScript,npm(package-manager),git and this will be simple procedural guide to make the configuration approach easy.
## Installing
At first we install Node, git and VS Code on the computer or your computing device. The instructions are on their official websites and the links are given below:
- [Visual Studio Code](https://code.visualstudio.com/)
- [Node.js](https://nodejs.org/)
- [git](https://git-scm.com/)
## Initializing git
After installing the above, first open VS Code, then initialize git through its terminal or your device's shell in the project directory:
```
git init
```
We also create a .gitignore file in the same directory to make sure the files we don't want to track will not be committed to the staging area. For this project we write <mark>/node_modules</mark> in the .gitignore file, as we don't want to track it: it consists of many files and occupies a lot of space. We write the following text in the .gitignore file:
```
/node_modules
```
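If you want to sanity-check the ignore rule, `git check-ignore` reports whether a path matches it. A minimal sketch (the `ignore-demo` folder and file names are just placeholders for illustration):

```shell
# scratch repo with the same .gitignore rule as above
git init -q ignore-demo && cd ignore-demo
printf '/node_modules\n' > .gitignore
mkdir node_modules && touch node_modules/some-package.js

# prints the path and exits 0 because it matches an ignore rule
git check-ignore node_modules/some-package.js
```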
## Downloading Extension
We install the Prettier and ESLint extensions from the VS Code extensions panel; you can also use the web or a terminal command to install and manage extensions.
## Installing packages
Now we open the project directory that we want to configure in VS Code and first initialize npm so that the package.json file will be created. The command is given below.
```
npm init -y
```
<mark>npm init -y</mark> will simply generate an empty npm project without going through an interactive questionnaire, so now we install the required packages with the following command.
```
npm i -D eslint prettier husky lint-staged eslint-config-prettier
```
The <mark>-D</mark> flag will install the packages as "devDependencies" (i.e. development dependencies).
- "devDependencies": Packages that are only needed for local development and testing.
## Configuring es-lint
ESLint can be configured with the following command:
```
npx eslint --init
```
After executing the command, ESLint will ask some questions in the terminal and you can configure your project as per your needs.
### My configuration
As my project is just basic vanilla JS, I have configured ESLint in the following way:
```
1. How would you like to use ESLint?
To check syntax, find problems, and enforce code style
2. What type of modules does your project use?
JavaScript modules (import/export)
3. Which framework does your project use?
None of these
4. Does your project use TypeScript?
No
5. Where does your code run?
Browser
6. How would you like to define a style for your project?
Use a popular style guide
7. Which style guide do you want to follow?
Airbnb: https://github.com/airbnb/javascript
8. What format do you want your config file to be in?
JSON
9. Would you like to install them now with npm?
Yes
```
You can always configure ESLint as per your needs. After the Q&A is finished, it will install the additional dependencies and create a .eslintrc.json file (or a different file format, depending on your earlier choice).
## Configuring Prettier
We create a .prettierrc file in the same directory so that we can enforce the Prettier rules. Some of the enforced Prettier rules are given below:
```
{
"semi": false,
"printWidth": 120,
"singleQuote": true,
"arrowParens": "always",
"proseWrap": "preserve"
}
```
After this, we add "prettier" to the "extends" array in the .eslintrc.json file so that conflicting rules between Prettier and ESLint will be handled by eslint-config-prettier. After adding the given code, the file will look as shown below.
```
{
"env": {
"browser": true,
"es2021": true
},
"extends": ["airbnb-base", "prettier"],
"parserOptions": {
"ecmaVersion": "latest",
"sourceType": "module"
},
"rules": {}
}
```
## Configuring husky and lint-staged
The fastest way to start using lint-staged is to run the following command in your terminal:
```
npx mrm@2 lint-staged
```
This command will install and configure husky and lint-staged depending on the code quality tools from your project's package.json dependencies, <mark>so please make sure you install (npm install --save-dev) and configure all code quality tools like Prettier and ESLint prior to that</mark>.
Don't forget to commit changes to package.json and .husky to share this setup with your team!
Now change a few files, git add or git add --patch some of them to your commit, and try to git commit them.
After this the code of the package.json will look as given below:
```
{
"name": "canvas_tutorial",
"version": "1.0.0",
"description": "",
"main": "script.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1",
"prepare": "husky install"
},
"keywords": [],
"author": "",
"license": "ISC",
"devDependencies": {
"eslint": "^8.3.0",
"eslint-config-airbnb-base": "^15.0.0",
"eslint-config-prettier": "^8.3.0",
"eslint-plugin-import": "^2.25.3",
"husky": "^7.0.4",
"lint-staged": "^12.1.2",
"prettier": "^2.5.0"
},
"lint-staged": {
"*.js": "eslint --cache --fix",
"*.{js,css,md}": "prettier --write",
"*": "git add"
}
}
```
## Testing
Please test your project to check that the whole process is working well. If your project is linted and well formatted, and files are only staged when there is no linting or formatting error, then everything is working well.
## Furthermore
This is just a stepping stone; you can do a lot more with ESLint, Prettier, Husky, etc. I would be very glad to hear your recommendations for further improving this blog.
## References
- [Set up ESLint, Prettier and pre-commit hooks using Husky for WordPress from Scratch](https://dev.to/ajmaurya/set-up-eslint-prettier-and-pre-commit-hooks-using-husky-for-wordpress-from-scratch-1djk)
- [Diving into husky and lint staged](https://dev.to/laurieontech/diving-into-husky-and-lint-staged-2hni)
- [Automate Prettier, ESLint using Husky and Lint-Staged](https://youtu.be/pmn-UvFPftQ)
- [lint-staged](https://github.com/okonet/lint-staged)
- [npm](https://docs.npmjs.com/)
| adarshmaharjan |
916,638 | This is what I've experienced in Web/Software Development for more than 6 years. | I remember the first time I've got a job as a web developer. At that time I'm still studying at my... | 0 | 2021-12-03T16:21:41 | https://dev.to/msulaimanmisri/this-is-what-ive-experienced-in-websoftware-development-for-more-than-6-years-4468 | webdev, programming | I remember the first time I've got a job as a web developer. At that time I'm still studying at my University. I do part-time in one small media agency in Sarawak, Malaysia.
My reason to do part-time at that time is just to study how web developers do the work in real life. I don't want to know based on a textbook or from my lecturers.
After I graduate I have got a job in one organization. Which is in IT Department along with 2 fresh graduates. Just like me. Who doesn't have much experience in developing internal system.
I just get lucky because I have 1 year of working as a part-timer web developer. So I share what I know with other developers what I know.
Fast forward, I moved to a few organizations and web/software houses. And this is what I get.
For those who just want or enter the web/software development house, these are my Pro's and Con's. Hope this will help you to choose which one you want to choose to start. Organization or web/software house.
This may be not applicable to you. This is 100% based on my experience.
Oh by the way, when I mention organization, that's is Internal Team. And I also will short the web/software house into a software house.
## Pros of working in an Organization
1. **<u>No need to focus on or master many programming languages</u>**. If you are a back-end developer, you probably just need to know, master and use one programming language. Once you start coding the internal system, that language is going to stay for the next 5 - 10 years, maybe.
2. **<u>More Focus</u>**. That is because you have just one system to develop. Even though the functionality will expand, it will only be within that system.
3. **<u>Better Communication</u>**. You will have direct communication with the users. For example, an internal system will be used by other departments within the organization. So if they want to request new features, they can just gather the team.
## Cons
1. **<u>Requests from users can change out of the blue</u>**. Sometimes your higher management will say A today, and next week they will say B. And you need to follow that.
2. **<u>Pressure from other departments</u>**. Some people don't know that, as developers, we need to do some research before getting our hands dirty. They will ask you when the new features are going to be complete. You need to learn how to work under pressure.
3. **<u>The salary offered is not high</u>**. In each organization that I joined, the salary they offered was lower than other offers I received. However, this is your choice. I had my own reasons why I still accepted work in an organization at that time.
## Okay, so let's look at Software Houses.
## Pros
1. You really **<u>can learn and experience a lot</u>** by developing many kinds of web apps and software.
2. There are a lot of senior developers who can help you. Even the small companies that I joined had experienced developers.
3. Higher salary.
## Cons
1. **<u>For a small software house, you need to know 2 or 3 programming languages (at least the basics), and sometimes you need to learn new languages</u>**, because some clients want you to develop the system using a specific programming language or framework.
Because they are willing to invest handsomely, your company might not reject that and will appoint you or your team to do it. So prepare yourself.
2. **<u>You need to handle more than one project</u>**. Because it is a software house, the sales team will always be finding customers, so there is a possibility that you will handle more than one project. You need to be prepared with management skills.
3. **<u>You may face a Project Manager who doesn't know the tech world</u>**. Yes, I have faced PMs who don't know the tech world but work in a software house, so it is difficult to say no. Here, you need to learn how to communicate effectively.
There's a lot more I want to share, but I think this is enough for entry-level developers or fresh graduates to choose their first real-world experience as a web/software developer: an organization or a software house.
Thank you for reading :) | msulaimanmisri |
916,664 | 👁️Accelerates your coding using Gaze | CLI tool that accelerates your quick coding | 0 | 2021-12-03T17:09:14 | https://dev.to/wtetsu/accelerates-your-coding-using-gaze-17p8 | programming | ---
title: 👁️Accelerates your coding using Gaze
published: true
description: CLI tool that accelerates your quick coding
tags: programming
#cover_image: https://direct_url_to_image.jpg
---
Software development often forces us to execute the same command again and again, by hand!
👁️[Gaze](https://github.com/wtetsu/gaze) runs commands for you, right after you save a file.
It greatly helps you focus on writing code!

Gaze is designed as a CLI tool that accelerates your coding.
* Easy to use, out-of-the-box
* Super quick reaction
* Language-agnostic, editor-agnostic
* Flexible configurations
* Useful advanced options
* Multiplatform (macOS, Windows, Linux)
* Appropriate parallel handling
## 👁️Installation
On macOS
```
brew install gaze
```
Windows/Linux: download binary https://github.com/wtetsu/gaze/releases
## 👁️How to use
Gaze is really easy to use.
```
gaze .
```
And invoke your favorite editor on another terminal and edit it!
```
vi a.py
```
## 👁️More examples
Gaze at one file. You can just specify file names.
```
gaze a.py
```
---
Gaze doesn't have complicated options to specify files. You can use wildcards (\*, \*\*, ?) that shell users are familiar with. **You don't have to remember Gaze-specific command-line options!**
```
gaze "*.py"
```
---
Gaze at subdirectories. Runs a modified file.
```
gaze "src/**/*.rb"
```
---
Gaze at subdirectories. Runs a command to a modified file.
```
gaze "src/**/*.js" -c "eslint {{file}}"
```
---
Kill an ongoing process, every time before it runs the next. This is useful when you are writing a server.
```
gaze -r server.py
```
---
Kill an ongoing process, after 1000(ms). This is useful if you love to write infinite loops.
```
gaze -t 1000 complicated.py
```
---
In order to run multiple commands for one update, just simply write multiple lines (use quotations for general shells). If an exit code was not 0, Gaze doesn't invoke the next command.
```
gaze "*.cpp" -c "gcc {{file}} -o a.out
ls -l a.out
./a.out"
```
## 👁️Motivation
I often write Python scripts to automate boring tasks. I would create a.py, write 5 lines of code and run "python a.py". Since the result was not perfect, I would edit a.py again, and run "python a.py" again.
Again and again...
Then, I found myself going back and forth between the editor and the terminal, typing the same command thousands of times, which was a total waste of time and energy!
That's the reason I developed Gaze: to automate command execution.
See more details!
https://github.com/wtetsu/gaze
| wtetsu |
916,677 | My experience with hand on APIs workshop By using postman. | My hand on experience of learning APIs 101 with a postman workshop is amazing. before this workshop,I... | 0 | 2021-12-03T17:35:51 | https://dev.to/minal11/my-experience-with-hand-on-apis-workshop-by-using-postman-1o4i | todayilearned, devops, testing, discuss | My hand on experience of learning APIs 101 with a postman workshop is amazing. before this workshop,*I didn't be familiar with what is API and how it's work?* It's my very first workshop from where I practically learned about API. How the speaker is clarifying is very considerable along with this the session was intuitive all through. **after going to this workshop I observed how it's straightforward and how easy to use API by using such a best API like the postman. *It has an extremely easy to understand interface*.
Also, I find that by using postman API we can store data in a coordinated manner**, there are **tons of resources** accessible for users, **easy move to code repositories moreover**, it's a very good API for **developers to create, share, text and documents APIs**.
I recommended to all that ought to go to this sort of workshop it significantly gives a practical demo of whatever you desire to learn. In the end, I want to say that **creating an account on postman is very simple** and do check out postman API
#PostmanAPI #PostmanStudent
| minal11 |
916,702 | AUTO INCREMENT On A MySQL Secondary Column | MySQL AUTO_INCREMENT MySQL provides us with AUTO_INCREMENT attribute that allows us to... | 0 | 2021-12-03T20:54:33 | https://dev.to/vinodsys/auto-increment-on-a-mysql-secondary-column-2jcn | mysql, autoincrement, tables, myisam | ## MySQL AUTO_INCREMENT
MySQL provides us with the AUTO_INCREMENT attribute, which allows us to create a column that contains a sequence of numbers (1, 2, 3, and so on). The AUTO_INCREMENT attribute is generally used to generate a unique number that acts as the primary key of a table.
Let's see a simple example. Here we have a table MyDevices that stores the different devices we own, such as a laptop, computer, cellphone, tablet and so on.
The MySQL statement to create the table MyDevices is given below:
```sql
CREATE TABLE `MyDevices` (
  `DeviceId` int PRIMARY KEY NOT NULL AUTO_INCREMENT,
  `DeviceName` varchar(10) NOT NULL,
  `CreatedAt` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `UpdatedAt` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP
);
```
Executing this gives us the table MyDevices

Let's add a few rows to see that the sequencing works as we want.
```sql
Insert into mydevices(DeviceName) VALUES ('Laptop');
Insert into mydevices(DeviceName) VALUES ('Computer');
Insert into mydevices(DeviceName) VALUES ('Phone');
Insert into mydevices(DeviceName) VALUES ('Tablet');
```
If we query our table, we should find that MySQL has added nice sequence numbers to our DeviceId column.

Now comes the twist. Let's add one more column, say CustomerId, to the above table. Let's drop the MyDevices table and recreate it as follows:
```sql
DROP TABLE MyDevices;

CREATE TABLE `MyDevices` (
  `CustomerId` int NOT NULL,
  `DeviceId` int NOT NULL PRIMARY KEY AUTO_INCREMENT,
  `DeviceName` varchar(10) NOT NULL,
  `CreatedAt` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `UpdatedAt` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP
);
```
Let's add a few rows to see how the sequences work.
```sql
Insert into mydevices(CustomerId, DeviceName) VALUES (1,'Laptop');
Insert into mydevices(CustomerId, DeviceName) VALUES (1,'Computer');
Insert into mydevices(CustomerId, DeviceName) VALUES (1,'Phone');
Insert into mydevices(CustomerId, DeviceName) VALUES (2,'Tablet');
Insert into mydevices(CustomerId, DeviceName) VALUES (2,'Laptop');
Insert into mydevices(CustomerId, DeviceName) VALUES (2,'Printer');
```
Let's see what the data looks like now:

As you will see, the auto-increment feature keeps adding sequence numbers for each row as we insert (1, 2, 3, 4, 5, 6).
But what if this is not the behavior we want? What if we want the sequence number to increase per customer, that is, each customer has its own sequence numbers for its devices?
So Customer 1 has sequences 1, 2, 3... and so on,
while Customer 2's devices again get sequence numbers 1, 2, 3... and so forth.
There are many ways to do this, such as using stored procedures. But today let us see a very easy method to achieve it, by defining the table in a certain way.
## ENGINE= MyISAM
The default storage engine for the MySQL relational database management system in versions prior to 5.5 was MyISAM. It is based on the older ISAM code, but it has many useful extensions.
If we define a table as either MyISAM or BDB, we gain the ability to specify AUTO_INCREMENT on a secondary column in a multiple-column index.
For such tables, the generated value for the AUTO_INCREMENT column is calculated as MAX(auto_increment_column) + 1 WHERE prefix=given-prefix.
This is useful when you want to put data into ordered groups. Let's adapt this to the MyDevices example above.
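Conceptually, the per-prefix counter behaves like this small Python sketch. It is only an illustration of the MAX(auto_increment_column) + 1 rule above, not how MySQL is implemented:

```python
# Simulate MyISAM's grouped AUTO_INCREMENT:
# next DeviceId = MAX(DeviceId for this CustomerId) + 1
rows = []

def insert(customer_id, device_name):
    next_id = max((d for c, d, _ in rows if c == customer_id), default=0) + 1
    rows.append((customer_id, next_id, device_name))

for customer, device in [(1, 'Laptop'), (1, 'Computer'), (1, 'Phone'),
                         (2, 'Tablet'), (2, 'Laptop'), (3, 'Printer')]:
    insert(customer, device)

print(rows)
# [(1, 1, 'Laptop'), (1, 2, 'Computer'), (1, 3, 'Phone'),
#  (2, 1, 'Tablet'), (2, 2, 'Laptop'), (3, 1, 'Printer')]
```

Each CustomerId keeps its own independent DeviceId sequence, which is exactly what we want in the next example.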
Let's now create the table as a MyISAM table:
```sql
CREATE TABLE `MyDevices` (
  `CustomerId` int NOT NULL,
  `DeviceId` int NOT NULL AUTO_INCREMENT,
  `DeviceName` varchar(10) NOT NULL,
  `CreatedAt` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `UpdatedAt` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`CustomerId`, `DeviceId`)
) ENGINE=MyISAM;
```
Let's now insert data into this table (we will add a few more inserts than before):
```sql
Insert into mydevices(CustomerId, DeviceName) VALUES (1,'Laptop');
Insert into mydevices(CustomerId, DeviceName) VALUES (1,'Computer');
Insert into mydevices(CustomerId, DeviceName) VALUES (1,'Phone');
Insert into mydevices(CustomerId, DeviceName) VALUES (2,'Tablet');
Insert into mydevices(CustomerId, DeviceName) VALUES (2,'Laptop');
Insert into mydevices(CustomerId, DeviceName) VALUES (3,'Printer');
Insert into mydevices(CustomerId, DeviceName) VALUES (4,'Printer');
Insert into mydevices(CustomerId, DeviceName) VALUES (4,'Laptop');
```
Let's see what we got in our table:

There you go, we now have separate sequences for each of the customers.
QUIZ: In the above table, what sequence number do you think customer 3 will get for DeviceId if we insert this SQL statement?
```sql
INSERT INTO mydevices(CustomerId, DeviceName) VALUES (3,'Fax');
```
Answer: 2 (as the last sequence number generated for customer 3 was 1, the next number assigned will be 1 + 1 = 2).
Hope you enjoyed this article..
| vinodsys |
917,132 | WASM : The End of Javascript? | Microsoft has been working on WASM for a long time now. Their project named Blazor was doing well... | 17,014 | 2021-12-04T06:47:16 | https://dev.to/jwp/wasm-the-end-of-js-39oa | webassembly, blazor | Microsoft has been working on WASM for a long time now. Their project named Blazor was doing well back in 2019 with its precursor Razor going back to at least 2014.
I initially rejected Blazor thinking Javascript and JS packages are so dominant that the same type of Open Source collaboration will never catch up with NPM. If NPM already has a package I want... then what's the need for Blazor?
## VS2022 and .Net 6
VS2022 just made it to GA, and I installed it last week. I spent an entire week investigating Blazor and realized my distrust of Microsoft had blinded me. Blazor is an industry disrupter.
Then I backtracked and watched the VS2022 release keynote on YouTube. It showed what Blazor can do, and it was over-the-top good. They showed two Activision games written in C++ running in the browser, made possible by WebAssembly.
Blazor also supports all .Net languages as well as Javascript and TypeScript.
The story improved: Blazor apps can target different operating systems, including iOS, Linux, Android, and Windows. This means a single code base for web apps is ready today.
The compiled WASM code also produces an .exe file. When clicked, it acts just like a desktop app. There was no explicit IIS server start required.
Rumor has it that Electron is no longer needed for desktop apps in TypeScript or Javascript, just replace it with Blazor.
Blazor also integrates seamlessly with both TypeScript and Javascript. Just create a Blazor Server Project and import all the NPM packages you want.
Blazor syntax is similar to JSX in that both code and html are in same file. The '@' sign indicates code blocks where native C# syntax is used.
Want speed? Go view those Activision videos mentioned earlier.
I've now 'seen the light' regarding WASM. It's something I can't unsee. Blazor is my newfound target to pursue in 2022, as I'm pretty sure it's a disrupter. My enthusiasm for Microsoft engineering has just returned to 2006 levels.
Finally: No, WASM isn't the end of Javascript. It just breaks up the once exclusive Javascript only club. It's the rebirth of how applications can be created to run within the browser. | jwp |
917,145 | Advent of Code 2021 Python Solution: Day 4 | Again, I am using my helper function from day 1 to read data. This challenge was little bit harder... | 15,777 | 2021-12-04T06:32:29 | https://dev.to/qviper/advent-of-code-2021-python-solution-day-4-5fi1 | python, adventofcode, beginners, programming | Again, I am using my helper function from day 1 to read data.
This challenge was a little bit harder than the previous ones, but with the help of NumPy, I did it.
```python
import numpy as np

# get_data is the helper function from day 1 that reads the puzzle input
data, data1 = get_data(day=4)


def get_blocks(dt):
    block = []
    num = [int(i) for i in dt[0].split(",")]
    tdata = []
    blocks = 0
    for d in dt[2:]:
        if d == "":
            tdata.append(block)
            block = []
            blocks += 1
        else:
            block.append([int(i) for i in d.strip().split(" ") if i != ""])
    tdata.append(block)
    block = []
    blocks += 1
    tdata = np.array(tdata).reshape(blocks, -1, 5)
    return tdata, num


def get_first_matched(tdata, num):
    # np.bool was removed in newer NumPy releases; the builtin bool works
    results = np.zeros_like(tdata).astype(bool)
    matched = False
    for n in num:
        for i, block in enumerate(tdata):
            results[i] += block == n
            # search across rows
            if (results[i] == [True, True, True, True, True]).all(axis=1).any():
                print(f"Row Matched Block: {i}")
                matched = True
                break
            # search across cols
            if (results[i].T == [True, True, True, True, True]).all(axis=1).any():
                print(f"Col Matched Block: {i}")
                matched = True
                break
        if matched:
            print(f"\nResult Block: {tdata[i]}")
            s = (tdata[i] * ~results[i]).sum()
            print(f"Sum: {s}")
            print(f"Last number: {n}")
            print(f"Answer: {n*s}\n")
            break


def get_last_matched(tdata, num):
    results = np.zeros_like(tdata).astype(bool)
    matched = False
    mblocks = []
    all_blocks = list(range(0, len(results)))
    for n in num:
        for i, block in enumerate(tdata):
            results[i] += block == n
            # search across rows
            if (results[i] == [True, True, True, True, True]).all(axis=1).any():
                print(f"Row Matched Block: {i}")
                if i not in mblocks:
                    mblocks.append(i)
                if len(mblocks) == len(all_blocks):
                    matched = True
            # search across cols
            if (results[i].T == [True, True, True, True, True]).all(axis=1).any():
                print(f"Col Matched Block: {i}")
                if i not in mblocks:
                    mblocks.append(i)
                if len(mblocks) == len(all_blocks):
                    matched = True
        if matched:
            i = mblocks[i]  # i ended at len(tdata) - 1, so this is the last board to win
            print(f"\nResult Block: {tdata[i]}")
            s = (tdata[i] * ~results[i]).sum()
            print(f"Sum: {s}")
            print(f"Last number: {n}")
            print(f"Answer: {n*s}")
            break


d1, n1 = get_blocks(data1)
get_first_matched(tdata=d1, num=n1)
get_last_matched(tdata=d1, num=n1)
```
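As a side note on the win check used above: `.all(axis=1).any()` flags any fully-marked row, and transposing first does the same for columns. A tiny self-contained sketch:

```python
import numpy as np

# A 5x5 bingo "marked" mask with only the third row fully marked
marked = np.zeros((5, 5), dtype=bool)
marked[2, :] = True

row_win = bool(marked.all(axis=1).any())    # any row all True?
col_win = bool(marked.T.all(axis=1).any())  # any column all True?
print(row_win, col_win)  # True False
```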
## Why not read more?
* [Gesture Based Visually Writing System Using OpenCV and Python](https://q-viper.github.io/2020/08/01/gesture-based-visually-writing-system-using-opencv-and-python/)
* [Gesture Based Visually Writing System: Adding Visual User Interface](https://q-viper.github.io/2020/08/11/gesture-based-visually-writing-system-make-a-visual-user-interface/)
* [Gesture Based Visually Writing System: Adding Virtual Animationn, New Mode and New VUI](https://q-viper.github.io/2020/08/14/gesture-based-visually-writing-system-adding-virtual-animation-new-mode-and-new-vui/)
* [Gesture Based Visually Writing System: Add Slider, More Colors and Optimized OOP code](https://q-viper.github.io/2020/08/21/gesture-based-visually-writing-system-add-slider-more-colors-and-optimized-code/)
* [Gesture Based Visually Writing System: A Web App](https://q-viper.github.io/2020/08/29/gesture-based-visually-writing-system-web-app/)
* [Contour Based Game: Break The Bricks](https://q-viper.github.io/2020/08/16/contour-based-game-break-the-bricks/)
* [Linear Regression from Scratch](https://q-viper.github.io/2020/08/07/writing-a-linear-regression-class-from-scratch-using-python/)
* [Writing Popular ML Optimizers from Scratch](https://q-viper.github.io/2020/06/05/writing-popular-machine-learning-optimizers-from-scratch-on-python/)
* [Feed Forward Neural Network from Scratch](https://q-viper.github.io/2020/05/30/writing-a-deep-neural-network-from-scratch-on-python/)
* [Convolutional Neural Networks from Scratch](https://q-viper.github.io/2020/06/05/convolutional-neural-networks-from-scratch-on-python/)
* [Writing a Simple Image Processing Class from Scratch](https://q-viper.github.io/2020/05/30/image-processing-class-from-scratch-on-python/)
* [Deploying a RASA Chatbot on Android using Unity3d](https://q-viper.github.io/2020/08/04/deploying-a-simple-rasa-chatbot-on-unity3d-project-to-make-a-chatbot-for-android-devices/)
* [Naive Bayes for text classifications: Scratch to Framework](https://q-viper.github.io/2020/03/04/text-classification-using-naive-bayes-scratch-to-the-framework/)
* [Simple OCR for Devanagari Handwritten Text](https://q-viper.github.io/2020/02/25/building-ocr-for-devanagari-handwritten-character/) | qviper |
917,234 | Day 55 of 100 Days of Code & Scrum: Eighth Weekly Retrospective (RIP my streak) | Happy weekend, everyone! This week may have been the least productive one during my entire... | 14,990 | 2021-12-04T10:06:36 | https://dev.to/rammina/day-55-of-100-days-of-code-scrum-eighth-weekly-retrospective-rip-my-streak-49e3 | 100daysofcode, beginners, javascript, productivity | Happy weekend, everyone!
This week may have been the least productive one during my entire [challenge](https://dev.to/rammina/100-days-of-code-and-scrum-a-new-challenge-24lp). My 51-day streak also ended this week, and I took a two-day break. I just try not to be too hard on myself, because I had personal issues getting in the way. I can still somehow make up for it in the coming weeks.
Let's move on to my weekly retrospective!
## Weekly Sprint Goals
- fix the bugs in my company website.
- deploy my company website after cleaning everything up.
- after the website, learn Ghost so I can use it to build my personal blog.
- continue to learn Next.js and TypeScript by using their concepts while I build my website, or maybe just from reading documentation.
- continue studying for Professional Scrum Master I (PSM I) certification.
- continue networking, but allocate less time to this (coding is more important).
## Weekly Review
Here are the things I've managed to do:
- fixed the remaining bugs in my company website.
- deployed my [company website's homepage](https://www.rammina.com/).
- learned about Ghost, particularly a Jamstack setup using Next.js frontend deployed on Vercel and Ghost on Heroku as my headless CMS.
- discovered various features in Next.js and Vercel, such as zones, Image Optimization, blurDataURL, and monorepos.
- continued my studies for PSM I (though not by much).
- expanded my network.
...I guess it wasn't really that bad?
## Weekly Retrospective
Moving on, let's tackle what I've managed to do well, what my shortcomings are, and what I could do better next time.
### What Went Great
- I successfully deployed my business page.
- learned plenty about Ghost.
- enhanced my knowledge in Next.js and Vercel.
- expanded my network a little bit more.
### Some Mistakes I've Made
- broke my coding streak and lost my momentum.
- I pretty much neglected a lot of things that had little to nothing to do with my website.
- learned almost nothing new about Typescript and Scrum this week.
- got distracted at times and couldn't focus much.
### Things I Could Improve On
- learn a little bit even if it's not much, on bad days.
- should probably do a few more activities related to my secondary goals.
- prioritize tasks that are more likely to help me meet my weekly goals.
- I should DEFINITELY install something that blocks me from checking certain sites at a specific time.
### Resources/Recommended Readings
- [Content API | Ghost](https://ghost.org/docs/content-api/)
- [Environment Variables | Next.js](https://nextjs.org/docs/basic-features/environment-variables)
- [Deployment | Next.js](https://nextjs.org/docs/deployment)
- [Official Next.js tutorial](https://nextjs.org/learn/basics/create-nextjs-app?utm_source=next-site&utm_medium=nav-cta&utm_campaign=next-website)
- [The Typescript Handbook](https://www.typescriptlang.org/docs/handbook/intro.html)
- [The 2020 Scrum Guide](https://scrumguides.org/scrum-guide.html)
- [Mikhail Lapshin's Scrum Quizzes](https://mlapshin.com/index.php/scrum-quizzes/)
Thank you for the support, everyone! Enjoy your weekend!

### DISCLAIMER
**This is not a guide**, it is just me sharing my experiences and learnings. This post only expresses my thoughts and opinions (based on my limited knowledge) and is in no way a substitute for actual references. If I ever make a mistake or if you disagree, I would appreciate corrections in the comments!
<hr />
### Other Media
Feel free to reach out to me in other media!
<span><a target="_blank" href="https://www.rammina.com/"><img src="https://res.cloudinary.com/rammina/image/upload/v1638444046/rammina-button-128_x9ginu.png" alt="Rammina Logo" width="128" height="50"/></a></span>
<span><a target="_blank" href="https://twitter.com/RamminaR"><img src="https://res.cloudinary.com/rammina/image/upload/v1636792959/twitter-logo_laoyfu_pdbagm.png" alt="Twitter logo" width="128" height="50"/></a></span>
<span><a target="_blank" href="https://github.com/Rammina"><img src="https://res.cloudinary.com/rammina/image/upload/v1636795051/GitHub-Emblem2_epcp8r.png" alt="Github logo" width="128" height="50"/></a></span>
| rammina |
917,333 | Sustainability on AWS (re:Invent2021) | Werner Vogels is an unparalleled titan of cloud computing, and his Keynote is one of the most popular... | 0 | 2021-12-04T15:09:00 | https://ssennett.net/posts/aws-sustainability-pillar/ | aws | Werner Vogels is an unparalleled titan of cloud computing, and his Keynote is one of the most popular sessions of re:Invent. And when he [announced the new Sustainability Pillar](https://aws.amazon.com/blogs/aws/sustainability-pillar-well-architected-framework/) of the AWS Well-Architected Framework at re:Invent this week, it was met with plenty of excitement.
If you haven't read the paper on the new [Sustainability Pillar](https://docs.aws.amazon.com/wellarchitected/latest/sustainability-pillar/sustainability-pillar.html) or seen Werner's keynote yet (starting 1:28:19), I'd encourage you to check them out.
{% youtube 8_Xs8Ik0h1w?t=5299 %}
## Amazon's Role in Sustainability
Amazon's original pledge to use 100% renewable energy has been revised. Against the original target of 2030, they now expect to reach it by 2025; **just three years away** (I know, it still feels like mid-2020 to me too), with a global capacity of 12GW.

One of the underlying concepts they've introduced is their Shared Responsibility Model. The breakdown is brilliantly succinct, just as it is for security:
* AWS is responsible for the sustainability **of** the cloud
* You are responsible for the sustainability **in** the cloud
But really... can we please not name it the same as the other [AWS Shared Responsibility Model](https://aws.amazon.com/compliance/shared-responsibility-model/), or at least unify them both?
## Measurability
Measuring your organization's climate impact is extremely challenging, let alone when your technology is running on mysterious black boxes in locations half-way around the world that you'll never see.
> "Moving on-premise workloads to AWS can lower your carbon footprint by 88%" -- Dr. Werner Vogels
While AWS's economies of scale definitely mean byte-for-byte, your workload will be less carbon impactful than a traditional datacenter, it's not enough if you don't know how your footprint is looking.
As far as measuring your footprint in AWS goes at the moment, you can basically say "less resources, less carbon", and that your bill is a great proxy metric for that. If you're spending money on resources, it's consuming electricity, after all.

AWS announcing the pending release of the **AWS Customer Carbon Footprint Tool** is some *really* exciting news. It's also one of the rare cases where AWS have announced something that isn't yet publicly available, which says something.
Along with your current usage, it will also project your carbon footprint into the future, including accounting for AWS's growing use of renewable energy.
The day we can quantify our environmental impact in the cloud is the day we can use it as a metric to meaningfully drive our impact down.
## Practical steps
Okay, so this is all cool; but how can you actually drive down that impact? In a phrase: "Use the least possible".
> "The greenest energy is the energy you don't use" -- Peter DeSantis
The Sustainability Pillar includes a section on [Best practices for sustainability in the cloud](https://docs.aws.amazon.com/wellarchitected/latest/sustainability-pillar/best-practices-for-sustainability-in-the-cloud.html), which is written better than I ever could, and I won't repeat it all.
One thing that Peter DeSantis did cover in his [re:Invent 2021 Keynote](https://www.youtube.com/watch?v=9NEQbFLtDmg) (also highly recommended, brilliant presenter) was how AWS's use of specialized hardware also opens new opportunities.

In Machine Learning, Trainium and Inferentia (those names tho...) are highly optimized for their specific tasks, and that includes their power efficiency. And for general compute, Graviton (now with Graviton3!) offers similar advantages not just in price, but resource efficiency also.
Another big one is Lambda. With AWS controlling the resourcing allocation to execute each of these individual functions, plus abstracting away so much of the underlying infrastructure, it's an incredibly efficient way not just to build and run code, but also to make the best use of the resources underneath it; including electricity.
There's a full session on ARC325 Architecting for sustainability (currently only on the re:Invent site) available as well, if you want to dive deeper
## Relationship to the other pillars

Like all of the pillars, Sustainability is definitely mutually supporting of the other five pillars: [Operational Excellence](https://docs.aws.amazon.com/wellarchitected/latest/operational-excellence-pillar/welcome.html), [Security](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/welcome.html), [Reliability](https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/welcome.html), [Performance Efficiency](https://docs.aws.amazon.com/wellarchitected/latest/performance-efficiency-pillar/welcome.html), and [Cost Optimization](https://docs.aws.amazon.com/wellarchitected/latest/cost-optimization-pillar/welcome.html). But it adds some nuance in interesting ways.
Some points may feel counter-intuitive, such as [Back up data only when difficult to recreate](https://docs.aws.amazon.com/wellarchitected/latest/sustainability-pillar/back-up-data-only-when-its-difficult-to-recreate-it.html). The question is really whether there is *value* in backing up the data. Old crash dumps? Probably not. Core customer data? Definitely.
It frames the two scenarios as either where there is business value, or where backups are required for compliance purposes. But there may also be cases where, even if data can be recreated, it would simply be more efficient to store it anyway. Complex video renders or machine learning models come to mind as two examples.
[Optimize impact on customer devices and equipment](https://docs.aws.amazon.com/wellarchitected/latest/sustainability-pillar/optimize-impact-on-customer-devices-and-equipment.html) flies in the face of some modern practices about pushing work to the edge and the front-end. But for any work of computational intensity, the efficiency is in the cloud.
Another point is that by reducing impact on customer devices, ensuring a light load, and optimizing for backwards compatibility, you ensure that the devices themselves are less prone to replacement. Along with reducing e-waste, it also just makes sense economically.
## What's next?

Companies, including yours and mine, need to get serious about measuring and reducing our impact on the climate. With the Carbon Footprint Tool on the way, I'm very excited to see how that may enlighten us all.
But our business and technology strategies must align too. Just as we mandate tooling and frameworks for a consistent experience, so too must we consider how we define and action sustainability targets. With that in mind, here's a thought exercise:
> **How would you reduce your company's cloud carbon footprint by 25%?**
It's not so outlandish that those strategies will be ours to implement sooner than we think. | ssennettau |
917,376 | Thoughts on Legacy Code and How to Live With it | Let's start with one statement before we even define what legacy code is. You will never get rid... | 0 | 2021-12-04T14:54:32 | https://www.johnraptis.dev/thoughts-on-legacy-code/ | webdev, programming, productivity | Let's start with one statement before we even define what legacy code is.
> You will never get rid of legacy code, you can only learn how to deal and live with it.
>
So what is legacy code anyway? You might fall into the trap of thinking that is just old code. This kind of answer is over-simplistic and doesn't reflect reality. Legacy code, yes, can be old, but old code is not necessarily considered legacy just because it is old. With this approach, your own very code you wrote yesterday, is legacy by now. So what is happening?
There is a *why* we write code that might end up as legacy. There is also the *how* and *what*. That we shall discuss later.
Legacy code has all or at least one of the following characteristics. It is a black box, difficult or impossible to extend and there is no documentation on how it works. And the scary thing is that we are writing it as we speak if don't consider all the above.
The *why* behind writing code that most likely ends up as legacy lies outside the actual coding and implementation aspect of things. It has more to do with the business side. If you are an indie maker or a small startup, you are more likely to build fast and revisit later. In today's day and age, if you haven't taken this route, you most likely don't have a business. You don't even know if you are going to be live till next week, so every resource you have goes toward whatever will have the maximum return and value for your company. Eventually, you will get a hint that your business is viable, which is when you should start considering the effects this code will have on your business in the future. Everyone talks about a rewrite of the entire codebase, which never happens, and there are good reasons for this as well.
If a company grows from a startup into a scale-up or even bigger, it usually has some dark place in the codebase from the early days. I have heard stories of entire repositories deemed never to be seen by the team, except for some lone genius.
All of the above is why this happens. It is a business decision, derived from business needs, that causes this. It is a reality, and it is almost certain to stay that way. So acknowledging the problem is vital in order to move on to practical ways of combating it. You can hack your way through when it comes to adding new features, but inevitably everything will blow up in your face, sooner or later.
Many companies never took off because they were too focused on making the code perfect, and there are companies that shipped crappy code and did really well, but were then crushed under their own weight by not being able to make changes anymore.
It's ok to hack something fast to push the business forward but you need a way to get out of the ditch you are slowly digging for yourself. It is called technical debt for a reason. It means you are going to pay for in the future with your time, or someone else's time.
So having a pragmatic and effective approach is vital.
## Refactor vs Rewrite
As mentioned above the first thing you might think of is to start from a clean slate. This is not a good idea unless you have the recourses and a dedicated team to do so which is rare, or the codebase is extremely large and want to do some kind of rebrand and get rid of entire sections and features, which a rewrite might be the best decision to be made. In a [Maintainable podcast episode, Swizec Teller](https://maintainable.fm/episodes/swizec-teller-sr-engineer-mindset) talked about this and made a great point regarding a rewrite, amongst other great points.
> If you refactor the majority of a codebase where nothing from the original code exists, were you rewriting or refactoring?
>
I remember in my first job looking at codebases of various websites we were maintaining that used some old framework. I couldn't hold myself to ask in my naivety, *why aren't we rewriting all of this mess?* Turns out that this would be a terrible idea. Besides the business cost in terms of hours, code — although old — works and has stood the test of time. It survived bug fixes and live quality assurance based on user feedback. Rewriting it will likely introduce new bugs and off we go again.
In [this post](https://www.joelonsoftware.com/2000/04/06/things-you-should-never-do-part-i/), Joel Spolsky makes a strong case against rewrites and takes a more pragmatic, real-life approach.
As developers we find the idea of a rewrite appealing because it is easier — at least that is what we think. The old code might look ugly but it is code that works. It's easier to judge code written before us and think we have a better solution often ignoring what business logic the old code serves and what edge case it was trying to solve. It is hard to read code and understand it so we have the impulse to avoid it.
> It's harder to read code than to write it. — Joel Spolsky
>
Although I generally agree with the premise that a rewrite is not often justifiable, in some cases, it might make sense. If a section in a project can't be expanded anymore or if doing so needs hacks and workarounds, some things are not being used anymore and it's isolated, starting from scratch can be the right thing to do.
## Documentation
The most evident solution when dealing with this is writing documentation. Documentation is one of those things that all developers agree upon but few do, at least in a practical and efficient way. Some keep everything in their head, hoping they remember it while others take the *let's write a novel* approach. They write an endless amount of it which makes it incredibly hard to update, resulting in a disorganized, outdated mess. This is not helpful since you are making an extra effort to cut through the noise and find not only what you need, but if you need it at all in the first place.
> "I didn't have time to write a short letter, so I wrote a long one instead" - Mark Twain
>
It is far easier to write lots of mindless documentation just for the sake of writing it than to be brief, be concise, and make sense. Bullet points are better than paragraphs, and short explanatory sentences are better than long-winded explanations. It is pretty much an art in and of itself.
Having a link at a prominent place in the source code linking to a doc describing all the technical decisions that took place, can be helpful for your future self and co-workers. Explaining *how* and *why* you chose a particular solution or approach and pointing out quirks and possible edge cases will make a world of difference next time you visit the code and decide to make improvements.
Learn how to write [good technical documentation](https://medium.com/@VincentOliveira/how-to-write-good-software-technical-documentation-41880a0e7814) which is short, easy to skim, easy to update, and in the place where it's needed. Easier said than done, right?
## Extendibility
A piece of code might work but it is really dead if you can't extend it. Sure it can work as expected and still generate revenue for the company but one-day shit will hit the fan. You can minimize the problem by keeping things isolated so it doesn't affect the rest of the code though this can't be a permanent solution.
You really don't want to end up in a situation where, in order to add a new feature, you have to make changes to and preserve code that is based on old practices or technologies you no longer support, just to fill a business need.
Usually, when you write new stuff they will depend on some legacy system, depending on the size. The larger the size of the codebase, the more challenging it will be to fix if systems are not isolated from each other. That is why having code that is tightly coupled is considered a bad idea. The more the coupling, the more the complexity.
Swyx has a great read on how we should [optimize for change](https://dev.to/swyx/how-to-optimize-for-change-a2n) and gives great insights on how we should design our abstractions, and anything we code, with that in mind. The reason for this is to solve the *requirements volatility problem*: the specs of our product change all the time, whether we like it or not. We don't prepare ourselves for this kind of scenario, so when a requirement changes it finds us unprepared and we write a "temporary" implementation. This usually doesn't happen just once, so all the previous hacks pile up and cause our codebase to mummify. Once all this sloppiness establishes itself, it becomes increasingly hard to get rid of.
> Hard-to-delete code drives out easy-to-delete code over time
>
So optimizing your code for change and making it easier to delete, ironically makes it easier to extend in time and you will not produce legacy code *today*.
## Practical Implementation
In general, the code you write today is old code by tomorrow. So much of it has to do with *how* we write code and with *what* intention. Is your intention to be seen as clever, or is it to help all the future developers that will come, probably long after you are gone? Since we don't know what is going to be built, it is sane to make code easy to change in the future. There is also an ethical aspect to this from the developer's side: leave the codebase in a bit better condition than you found it.
I love the phrase by Swizec Teller again from the Maintainable podcast episode.
> It's easier to fix code that is repeated than code that is abstracted in the wrong way
>
It is far better to detour from so-called best practices and make it easier for another person to come and change stuff than have complex logic that is difficult to decipher and which will certainly break things if you try to refactor it.
The goal here is to slowly make incremental changes on legacy systems. When that option stops, that is where code starts being legacy. Code that changes is the only constant.
## Testing
An obvious but often overlooked way is to write tests, which can also be done incrementally. Each time you visit a certain section, depending on the case, you can add unit tests. Or you can start by testing more generic things, like whether a user is logged in. Having code that is really tightly coupled might lead you to start testing the outer layers of a section and gradually achieve full coverage. Then you can start moving stuff around more easily. The only downside to this approach is that legacy codebases often don't even have tests or test infrastructure in place, so you have to do extra work for that. But regardless of the initial hassle, it will prove itself a valuable tool down the line.
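One low-ceremony way to start is with characterization tests: pin down what the code does *today* before changing anything. A toy sketch (the function and its "discount" rule here are hypothetical, just for illustration):

```python
# A tiny "legacy" function whose current behavior we want to pin down
# before daring to refactor it.
def legacy_price(quantity, unit_price):
    total = quantity * unit_price
    if total > 100:          # undocumented bulk discount nobody remembers adding
        total = total * 0.9
    return total

# Characterization tests assert what the code DOES today,
# not what we think it should do.
assert legacy_price(1, 50) == 50
assert legacy_price(3, 50) == 135.0   # 150 * 0.9, the discount kicks in
```

With these in place, any refactor that changes observable behavior fails immediately, which is exactly the safety net a legacy section lacks.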
## Faceless Changes
Ok, this might seem odd. But the true test that you don't have and don't generate legacy code today - at least as much - is if anyone in your team can go and make changes throughout the codebase. And the newcomers have a way of discovering how to work with it without the need to ask around for clarification. This doesn't mean you don't have places in the codebase where things aren't messy, but at least there is a way to navigate through. When your codebase doesn't depend on one single person, that is a healthy sign.
You want to make yourself unnecessary and detach your ego from the code.
Within teams that is also a sign of seniority. To be able to tackle the issue of dealing and working with legacy systems.
## Conclusion
Nothing of the aforementioned can happen overnight. Incrementally is the keyword here.
To summarize, we learned what legacy code is. It is code that we are not able to extend. We can co-exist with legacy code and start improving it with *just as much* documentation that is easy to update and always be in a semi-rewrite(aka refactor) mode.
As engineers, we solve problems that happen to be soluble by code. Code is just a tool so it doesn't make sense to get too attached to it. Eventually, any code you have written will slowly fade away, so go on and find the next interesting problem to solve.
| john2220 |
917,381 | 5 Small JavaScript Tips | Remove duplicate values from an Array Using the Set feature. const array = [1,2,2,3,3,3,4,4,4,4,5]; const unique = [... new... | 15,780 | 2021-12-04T19:23:59 | https://dev.to/andylow/javascriptde-4ge-xiao-ji-qiao-4bjf | javascript | ## Remove Duplicate Values from an Array
Take advantage of the `Set` feature.
```js
const array = [1,2,2,3,3,3,4,4,4,4,5];
const unique = [... new Set(array)];
```
## Template Strings
Use backticks (`` ` ` ``) in place of the double or single quotes used in ordinary strings.
```js
const str = `Hello ${expression}!`;
const num1 = "1";
const num2 = "2";
console.log(`${num1}${num2}`); // 12
```
[More information](https://developer.mozilla.org/zh-CN/docs/Web/JavaScript/Reference/Template_literals)
## Type Conversion
Use `!!` to convert a value to a Boolean.
```js
const obj = { field1: "value" };
const bin = 0;
console.log( !!obj ); // true
console.log( !!bin ); // false
```
Use `+` to convert a value to a Number.
```js
const num1 = "1";
const num2 = "2";
console.log( (+num1) + (+num2) ); // 3
```
## Nullish Coalescing Operator
When you need to give a nullish value a default, you can use `??`.
Why not `||`? Because `||` cannot distinguish false, 0, or the empty string "" from null/undefined.
```js
const field1 = "value";
const field2 = false;
const field3 = 0;
const field4 = null;
console.log( field1 || "default" ); // value
console.log( field2 || "default" ); // default
console.log( field3 || "default" ); // default
console.log( field4 || "default" ); // default
console.log( field1 ?? "default" ); // value
console.log( field2 ?? "default" ); // false
console.log( field3 ?? "default" ); // 0
console.log( field4 ?? "default" ); // default
```
For advanced usage, see [here](https://zh.javascript.info/nullish-coalescing-operator)
## Optional chaining `?.`
Use `?.` to save yourself `if else` checks.
```js
const obj = {
sayHi: ()=>console.log("Hi")
};
const empty = { };
const nullValue = null;
obj.sayHi(); // Hi
empty.sayHi(); // TypeError: empty.sayHi is not a function
nullValue.sayHi(); // TypeError: Cannot read properties of null
empty.sayHi?.(); // no error
// equal to
if( empty && empty.sayHi ) {
empty.sayHi();
}
nullValue?.sayHi(); // "no error"
// equal to
if( nullValue && nullValue.sayHi ) nullValue.sayHi();
``` | andylow |
917,984 | How to Create GUI for remote server with i3 window manager | Hello there. This article is about providing management via a UI by installing a window manager or... | 0 | 2021-12-05T16:23:02 | https://dev.to/nuhyurdev/how-to-create-gui-for-remote-server-with-i3-window-manager-2mj9 | devops, linux, productivity, tutorial | Hello there. This article is about providing management via a UI by installing a window manager or desktop environment for servers and establishing a remote connection. A graphical server and desktop environment are not used on remote servers because server resources are usually limited. In this post I will be working on a remote server with i3, a lightweight window manager.
Note: Instead of a remote server, I will prepare a virtual server by giving a random IP on my own computer with Vagrant and perform the operations. This is how you can make your first attempts.
## What's i3 wm
For operating systems based on the Linux kernel, the graphical interface does not belong directly to the operating system. It is a module that connects to the operating system like any other application. This way, a crash on the graphics side does not affect the operating system; it is enough to resolve the graphics server error and restart it. "X Window" is a widely used graphics server developed in a very flexible way. Most current, well-known distributions use Xorg as their graphics server. A "window manager" runs on top of it; its task is to organize operations such as the screen layout of windows, adjusting their size, and creating virtual desktops. Enlightenment, Blackbox, KWin in KDE, and i3 are examples of window managers. The i3 window manager is very light (1.2 MB) and easy to use thanks to keyboard shortcuts. This makes it a good alternative to install on remote servers that are not rich in resources.
## Installing Ubuntu Server via Vagrant (optional)
With Vagrant, you can create a virtual server on VirtualBox and work on it. After installing the appropriate Vagrant version and the Oracle VirtualBox application for your system, you can create a Vagrant box and work with it as if it were a machine with a static IP.
[Click the link to download vagrant](https://www.vagrantup.com/)
**Vagrant boxes** are operating systems that are installed and ready to be used. After downloading one from [Vagrant Cloud](https://app.vagrantup.com/boxes/search) and creating a Vagrantfile, it can be brought up with Vagrant.
With `vagrant box add` **{title}** or **{url}**, you can download a new Vagrant box from the cloud. You can check with `vagrant box list`, create a `Vagrantfile` with `vagrant init`, and bring the server up with `vagrant up`. If different configurations are required, you can create the Vagrantfile manually and then bring it up with `vagrant up`.
Let's download the ubuntu box, which is in the first place on the cloud.
```sh
$ vagrant box add ubuntu/trusty64
```
After checking that it is in the list, save the following Vagrantfile with the name `Vagrantfile` in the directory we are working in. It can then be brought up with `vagrant up`. You can send a shutdown signal with `vagrant halt` to turn it off.
```
NAME = "remote-Server"
IP = "192.168.1.XX"
Vagrant.configure("2") do |config|
config.vm.box = "ubuntu/trusty64"
config.vm.hostname = NAME
config.vm.network "public_network", ip: IP
config.vm.provider "virtualbox" do |vb|
vb.customize ["modifyvm", :id, "--memory", '2048']
vb.name='remote-server'
end
config.vm.provision "shell" do |s|
s.inline = "echo Server created succesfully!"
end
end
```
The IP given here is an IP address in the internal network. The virtual machine is a different computer in the internal network and is connected with a bridge adapter.
After the up command, select your bridge adapter (Wi-Fi or Ethernet). The output shows the hostname, username, and other information.
Once it's up we can connect via ssh.
```sh
$ ssh vagrant@192.168.1.X
```
**password: vagrant**
and we are inside...
## Installation of xserver and i3 wm
If your server does not have a graphics server on it, it must be installed before i3. You can check it with `startx`. It is a shell script that runs `xinit`.
`Xorg` and `openbox` should be installed on Ubuntu. They are available in the repository; you can install them using `apt install`. You can find X graphics server installation steps for CentOS, openSUSE, Fedora, or BSD-based systems in user guides, wiki pages, and forums (Stack Overflow, Super User, Stack Exchange, etc).
Afterwards, the i3 package in the repository can be downloaded and installed.
Then the graphics server should be started using the `startx` shell script.
If you do not have limiting factors such as memory and storage space on your remote server, you can use **"xfce desktop environment"**. This “DE” has a clean interface and uses very little resources.
## Connection of Server
There are different options available at this stage. You can make a remote connection and manage the system with the "remote connection" application you want; pick one that works with both your local operating system and the one running on the remote server. Tools such as __teamviewer__, __anydesk__, __tigerVNC__, __tightVNC__, __realVNC__, __remmina__, __zoho assist__, __Apache Guacamole__, __NoMachine__, __xrdp__, __freeNX__ are widely used. Here we come across `SSH`, `RDP` and `VNC` as protocols. `SSH (Secure Shell)` is a secure remote connection protocol. `RDP (Remote Desktop Protocol)` is a protocol developed by Microsoft that enables connecting to the graphical interface of remote Windows machines. `VNC (Virtual Network Computing)` is a platform-independent, server-client protocol that allows applications running in a graphical interface on any networked server to be managed over a remote connection. You can use any of the above tools for remote control.
`Apache Guacamole` is a little different, because it has a client-less architecture. After the server-side installation, the connection can be established and managed from any computer's web browser.
You can use one of the advanced tools mentioned above, or something else, but I will use a VNC application to keep things lightweight. The packages required on the server side for Ubuntu Server are `i3` and `xorg`; apt will automatically install the dependent packages. Firefox and vim can also be installed. The same packages are needed in distributions such as CentOS, SUSE, and Arch. Package names may differ in each repository, but basically a graphical server driver and a window manager or desktop environment are needed.
```sh
$ sudo apt install xorg i3 firefox vim
```
Then tightvncserver should be installed.
```sh
$ sudo apt install tightvncserver
```
Then the server should be started with the `vncserver` command. At this stage a VNC password must be set; if needed, a separate password can be set for view-only mode.
Each session gets an ID; to kill a session:
```sh
$ vncserver -kill :session_number
```
Connection can be made by using any client application that supports vnc on the client side. realVNC or remmina etc. applications can be used.
In order to have a secure connection, tunneling is done with SSH on the client side. Port 5900+N is the default VNC port, where N stands for the screen number, so more than one X session can be connected. __Default port + session number__ makes sessions easier to manage.
```sh
$ ssh -L 5901:127.0.0.1:5901 -C -N -l username ip_addr
```
You can monitor whether the connection is established with netstat. (It may differ in some distros; for example, on Solus the `ss` command does the same job as netstat.)
Then, it is sufficient to select the `VNC protocol` on` Remmina` and connect to the `localhost:5901` port. Then the connection can be established with vnc password.

At first launch it offers an option to change the config file or to use the default settings. `Alt+Enter` opens a terminal with the default settings. `Alt+h` splits the next window horizontally (side by side) and `Alt+v` splits it vertically (stacked). You can learn the usage in detail from the user's guide on its web page or from blogs. Below is a Firefox browser, active processes, and a terminal.

`vncserver` can be made into a `systemd` service. In this way, vnc session can be started at every system startup.
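A minimal unit file sketch for this (the binary path, user name `vagrant`, display number, and geometry are assumptions; adjust them for your system, and note a forking service like this may also need a `PIDFile` on some setups):

```
[Unit]
Description=TightVNC server for display :1
After=network.target

[Service]
Type=forking
User=vagrant
ExecStart=/usr/bin/vncserver :1 -geometry 1280x800 -depth 24
ExecStop=/usr/bin/vncserver -kill :1

[Install]
WantedBy=multi-user.target
```

Save it as, for example, `/etc/systemd/system/vncserver.service`, then run `sudo systemctl daemon-reload` and `sudo systemctl enable --now vncserver`.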
A lightweight desktop environment like Xfce can be used instead of a window manager. You can integrate it by making some changes in the configuration settings…
| nuhyurdev |
918,327 | Job interview question and answer | Tricky techniques | Welcome to this page. I am really happy for your landing. Most probably, you have got your... | 0 | 2021-12-06T04:43:19 | https://remoteful.dev/blog/job-interview-question-answer | remotework, job, remotejobs | Welcome to this page.
I am really happy for your landing. Most probably, you have got your interview call, and you are here. Congratulation, my dear.
But still, you are stressed. What will be asked? How to answer? What will happen if you can’t answer?
I can feel this problem.
But great news! This article is the answer to those questions. Today’s writing will help you to prepare yourself for your next **job interview question and answer.**
So, relax, my dear.
Even if you are experienced enough to face a job interview, you will still learn something new from this article.
I will cover different **job interview questions and answers** for various positions.
Keep reading. Cheers!

# Interview questions and answers for freshers:
### 1. Tell me about yourself
In most cases, this question is asked to fresher. This can be a starting question of an interview.
**Purpose:**
---- To observe your confidence level.
---- To observe your attitude.
Keep your answer short. Since you are a fresher, talk about your family, your education, and your strengths and abilities.
**Caution:** Do not talk too much about your family. Emphasize your education and your strength.
This is the place you can highlight your personality to break the ice.
**Example:** “I am James. I was born in Washington, Georgia, to a family mainly consisting of teachers. I am hard-working, love to take on challenging jobs, and always look for creative solutions.”
### 2. What are your biggest strengths?
It is one of the most frequently asked interview questions for fresher. In this area, you can show your belief in yourself.
**Purpose:**
To find out your most confident level in yourself.
**Caution:** Do not show overconfidence in your answer. The interviewer may take it negatively.
**Example:** “My greatest strength is a problem-solving skill. I always love to take on challenging tasks. I found this a great match with your job description, and that’s the reason I applied.”
### 3. What are your biggest weaknesses?
It’s a very tricky question, and freshers especially may get stuck answering it. Who wants to talk about their weaknesses, especially during an interview?
**Purpose:** The main objective of this question is to see how you can handle embarrassing situations.
**Example:** “Well, I tend to take on too much responsibility and continuously work on that until it finishes. It is hampering my personal life, and I am trying to balance responsibility and my personal life. Now I am getting a good result each day.”
Look, the answer you have given is not directly related to the job you are applying for. This way, you can handle the tricky question in a tricky way.
### 4. Where do you want to see yourself in five years?
This is another tricky question. Remember, the interviewer who will ask you this question doesn’t want to hear “I don’t know,” or, “Ummm,” or, “hmmm,” or, “that’s hard to say.”
**Purpose:** Generally, an interviewer asks this question to test your loyalty to the company and to know your long-term career goal.
That’s why you need to be careful to answer this question.
**Example:** “In five years, I want to complete the internal training programs of your organization to develop my skill and hope to take my career to the next level within this company.”
Please never ever say that you want to be the CEO of that company in five years.
### 5. Why should we hire you?

To answer this question, focus on your skills that match the position.
**Purpose:** To see how confident you are in your abilities.
**Example:** “Honestly speaking, I have all the skills that are directly related to this position. I am always flexible to learn new things and will contribute something to the growth of your company.”
My dear reader, I have already covered 5 important Job Interview questions and answers for freshers, but there are more to cover. Please have patience and read on the rest.
### 6. What do you know about this organization?
This is the time to convince your interviewer that you have studied their website before attending the interview.
**Purpose:** To see whether you are prepared for the interview and whether you know the company’s business model.
**Example:** “This Company was established in 1971, started with only 6 employees, and now around 1200 employees work in this. The company wants to capture 6 new markets within the next 2 years. Customer satisfaction is your main goal.”
### 7. What is your salary expectation?
This question is usually asked when you are near to hire. I mean when the interviewer thinks that you are a good fit for the position.
Two things to keep in mind.
--- Do not directly tell any actual number.
--- Tell them how you will perform if they hire you.
**Example:** “Actually, salary is not my priority. As a fresher, gaining practical knowledge is my first priority. I just agree with your company standard for this role.”
### 8. How good are you at handling pressure?
Simply telling that you are good enough at handling pressure is not enough. You can give an example, how well you were in handling pressure.
**Purpose:** To see how pressure can affect your job performance.
**Example:** “I know the art of handling pressure. To be honest, I can prioritize tasks and stay focused under pressure. In my final exam, I was under pressure to pass with excellent marks; it helped me to prioritize tasks, and I succeeded.”
### 9. What are your short-term goals?
**Purpose:** This question is generally asked to see how much you want the job. It would be best if you answered in a way that is tied with the job you are applying for. I hope you understand what I mean to say.
**Example:** “To find a job that goes with me and has a great career prospect.”
### 10. How flexible are you regarding overtime?
The organization may need you to put in extra hours occasionally. If you agree to help, answer that way. If you are not agreed, tell them that politely and honestly.
**Example:** “In the toughest situation, I am always ready to help for some days, but I need to fill up my family commitments as well. That’s why I usually prefer to maintain a healthy work-life balance.”
### 11. Do you have any questions?
Generally, this question is asked at the end of the interview. I recommend researching the company well ahead of the interview. Prepare one or two questions to answer.
**Example:** “May I ask what would be my everyday tasks?”
I think, by this time, you got a good idea **of job Interview questions and answers for freshers,** and I hope it will help you prepare.
## Job interview question and answer for web developers:
In front of an interviewer, you may feel overwhelmed and lack confidence. Hmm, I understand. It’s a normal thing.
No worries, dear. Now you will get sound knowledge regarding job **interview questions for web developers**. This might help you a lot.
Could you please take a look below at preparing yourself for some commonly-asked web development interview questions?
I think, yes. Thank you, dear.
Let’s take a dive. Cheers!

### 1. How can you speed up a webpage to load?
**Purpose:** The interviewer wants to see how expert you are in your sector by asking this question.
**Answer:** The following are some methods for reducing the time it takes for a web page to load:
- Minify HTML, CSS, and JavaScript
- Reduce the image size. If it is a WordPress website, we can use a plugin to reduce the size.
- Clean the web code
- Remove any widgets that aren't needed.
- Multiple redirections should be avoided whenever possible.
- Create AMPs.
- Reduce DNS lookups
- Leverage caching.
- Use a good web hosting service; it is crucial for site speed.
### 2. What makes HTTP 2.0 better than HTTP 1.1?
**Answer:** The main benefit of HTTP/2 over HTTP/1.1 is that it is faster. It provides faster delivery of web pages. Some other advantages include:
- Less broadband consumption
- Improvement of web positioning
- Immediate presentation.
### 3. What are the new HTML5 form elements?
**Answer:** In the specification of the HTML5 form, there are five new form elements.
They are:
- Datalist
- Progress
- Keygen (generates an encryption key)
- output
- meter
### 4. Describe how to utilize Canvas in HTML.
**Answer:** It is used to draw graphics using JavaScript. This can be used to create animations and to draw graphs.
### 5. How do you tell the difference between Canvas and SVG?
**Answer:** The differences are:
- Canvas is well suited for gaming applications. SVG (Scalable Vector Graphics), on the other hand, does not suit gaming applications.
- Canvas is a raster approach: you paint pixels with script. SVG is a vector approach: you describe shapes in markup.
- Canvas images pixelate when scaled. SVG is more flexible; for example, it can be scaled beyond its natural size without losing quality.
These are the main differences between Canvas and SVG.
### 6. What is the canvas's default border size?
**Answer:** There is no default border size. We can use CSS to adjust it.
There are some more commonly asked **interview questions for the junior, senior, front end and backend web developers.** They are as follows:
- What is the definition of a pseudo-class?
- In JavaScript, what is namespacing?
- What is CORS, and how does it work? Why is it important?
- What is the best method to integrate five different stylesheets into a website?
- What technical and non-technical skills are required of a front-end developer?
- What is a callback function?
- Tell me the difference between Class and Prototypal inheritance, and how do you explain it?
- Explain how variables in CoffeeScript vary from variables in JavaScript.
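For the callback question in the list above, an illustrative snippet (not from the article; the function and names are made up for demonstration) can make the answer concrete:

```javascript
// A callback is a function passed into another function,
// to be invoked after that function finishes its work.
function fetchGreeting(name, callback) {
  const greeting = `Hello, ${name}!`;
  callback(greeting); // hand the result back to the caller
}

fetchGreeting("James", (msg) => console.log(msg)); // prints "Hello, James!"
```

Being able to sketch something like this on a whiteboard usually lands better than a purely verbal definition.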
## Sales and marketing interview questions and answers:

### 1. Why is working in sales so appealing to you?
**Purpose:** Sales manager wants to know what motivates you. Is it the challenge or something else?
**Example:** I have always loved to work as a salesperson because it creates an opportunity to meet with new people every day. I always love to take on challenges, and I feel proud that I am helping businesses get better results. These reasons push me to work in sales.
### 2. What is your proudest professional achievement?
**Purpose:** An interviewer asks this question to know what value you can bring to their company if they hire you.
**Answer:** In my work as an insurance sales representative, I achieved all of my sales targets and acquired 25+ new clients in just five months. I always feel proud of this achievement.
### 3. Please tell us about a marketing campaign that did not go as planned. What went wrong, exactly?
**Purpose:** This question is used by interviewers to assess your problem-solving abilities.
**Example:** “When I was working for XYZ company, they launched a new product. I worked on the marketing campaign for that. The campaign did not work well as expected. Actually, we relied too heavily on marketing data. That data did not take into account key metrics. I concluded that the entire campaign should not be based on the raw numbers alone.”
### 4. What is a brand strategy that you consider to have been very successful?
**Purpose:** The interviewer asks this to know your marketing knowledge depth.
**Example:** “Well, I think there are a couple of success factors of a good brand strategy. Understanding the target audience, unique value proposition, unique brand slogan, and greater exposure in digital media are the key factors I consider to have been successful.”
There are more **sales and marketing interview questions** you need to be prepared for. Please have a look below:
- Have you been effective in meeting your goals on a consistent basis?
- What motivates you to pursue a career in sales and marketing?
- What's the most recent thing you've learned yourself?
- Have you consistently met your goals?
- What's your strategy for dealing with customer objections?
- In your selling process, what role do content and social media play?
- What would you do in your first month if you were hired for this position?
- What do you believe our company/sales team could improve on?
- Which marketer do you admire the most, and why?
- How do you react when marketing initiatives or projects fail?
- Are you comfortable making decisions based on big volumes of data?
- Have you ever tried a fresh approach or idea?
- What are your favorite software platforms, and why?
There are many more different interview questions for various job roles, for example, **designer interview questions like** **[graphic designer interview questions](https://www.naukri.com/blog/graphic-design-interview-questions-and-answers-2/)**, **[software designer interview questions](https://medium.com/javarevisited/25-software-design-interview-questions-to-crack-any-programming-and-technical-interviews-4b8237942db0)**, system design **[interview questions, interview questions for fashion designers](https://cbonline-gary.medium.com/most-frequently-asked-fashion-designer-interview-questions-e9405da8f2d3)**, and more.
But, some common interview questions are more or less asked in any job role. I have already covered all those.
One more thing I need to share with you. Read some articles **[on virtual interview tips](https://remoteful.dev/blog/virtual-interview-tips)** before your interview date. I guarantee you will get an unbelievable result. Cheers.
## Bottom line:

I know you are tense regarding your interview. Your nerves are not calm. Relax, my friend. Go through the **job interview questions and answers** again and again that you have learned today in this article. These are the best picked-up questions and will help you boost your confidence before the interview.
I hope both fresher and experienced candidates for different job roles have benefited from reading this piece of content. I believe you are now 10 times ahead of other job candidates. | ryy |
923,770 | Odoo Developers | I am looking for perm and contract Odoo developers for a well-known retail company in the EU. Get in... | 0 | 2021-12-11T16:30:22 | https://dev.to/calvinjnr/odoo-developers-550h | python | I am looking for perm and contract Odoo developers for a well-known retail company in the EU. Get in touch for more details - calvinjnr@sky.com | calvinjnr |
923,868 | Github-content-generator | New content generator solution for Github.@allifam | 0 | 2021-12-11T18:09:13 | https://dev.to/ddelrio95/github-content-generator-4ljh | python, github, automation, programming | New content generator solution for Github.@allifam | ddelrio95 |
923,876 | How To Partially Update A Document in Cloud Firestore | There will be a scenario where you will want to partially update a document on the Cloud Firestore... | 0 | 2021-12-13T14:46:18 | https://dev.to/rajatamil/how-to-partially-update-a-document-in-cloud-firestore-4f1d | firebase, javascript, programming | There will be a scenario where you will want to partially update a document on the Cloud Firestore without compromising [security rules](https://softauthor.com/firebase-cloud-firestore-security-rules/).
Let’s say you have an orders collection and a signed-in user with a user role that can create a new order document in that collection, like in the screenshot below.

>*Recommended*
>[*Firebase Cloud Firestore — Add, Update, Get, Delete, OrderBy, Limit*](https://softauthor.com/firestore-querying-filtering-data-for-web/)
### User Role
The READ and WRITE security rules to access the orders collection for a user would be like this:
WRITE RULE
```JavaScript
match /orders/{ordersId} {
allow write: if
request.auth.uid != null && request.auth.token.isUser == true
}
```
The above security rule will allow a user to create a new document when he/she is logged in and the user role is isUser.
At this stage, you may wonder where the isUser role is coming from.
There are a couple of ways to create user roles in Firebase. I use Auth Claims to create roles via Cloud Functions.
For more information on this topic, take a look at my other article that covers in-depth on how to create user roles using Auth Claims.
>*Recommended*
>[*Firebase + Vue.js ← Role Based Authentication & Authorization*](https://softauthor.com/firebase-vuejs-role-based-authentication-authorization/)
READ RULE for a user to access his/her own orders, not others.
```JavaScript
match /orders/{ordersId} {
allow read: if
request.auth.uid == resource.data.user.uid && request.auth.token.isUser == true
}
```
The above security rule will allow users to get orders when the logged-in user’s uid matches the uid which is inside the user field in the order document like in the screenshot below.

The security rule also checks if the logged-in user has an isUser role.
That’s pretty straight forward.
### Driver Role
As you can see from the order image below, I’ve assigned a driver to the order as soon as it’s created. I did it this way for demonstration purposes.

In the real world, you would assign a driver after the order is placed, either via the admin panel or by sending an order notification to the available drivers so one of them can accept it, and then add the driver to the order document.

When a driver is assigned to the order, he/she needs to access certain information in the order, such as the store name, store address, user address, etc.
So let’s give the READ access to the order that the driver is assigned to.
```JavaScript
match /orders/{ordersId} {
allow read: if
request.auth.uid == resource.data.driver.uid && request.auth.token.isDriver == true
}
```
That’s good.
Now the user can READ and WRITE his/her own order and the driver can only READ the order document.
Nice.
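If you later want the driver to be able to update just one field of the order (and nothing else), a more granular rule can restrict which fields are allowed to change between the old and new document data. A sketch, assuming a top-level `status` field (not shown in the original rules) and using the rules `diff()`/`affectedKeys()` helpers:

```JavaScript
match /orders/{ordersId} {
  allow update: if
    request.auth.uid == resource.data.driver.uid
    && request.auth.token.isDriver == true
    // only the status field may differ between old and new data
    && request.resource.data.diff(resource.data).affectedKeys().hasOnly(['status']);
}
```

This keeps the rest of the order document read-only for the driver while still permitting the partial update.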
>*Recommended*
>[*Firebase + Vue.js ← Role Based Authentication & Authorization*](https://softauthor.com/firebase-vuejs-role-based-authentication-authorization/)
### Order Status
Now, I want to provide the order status to the user as the driver progresses with his/her order, such as food picked, delivered, etc.
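On the client side, the driver's app can write that status with a partial update: `update()` merges only the listed fields instead of overwriting the whole document. A sketch using the v8-style Firestore API; the `status` field name is an assumption, and `db` is assumed to be an initialized Firestore instance:

```javascript
// Partially update an order: only the listed fields are written;
// the rest of the document (user, driver, etc.) is left untouched.
function setOrderStatus(db, orderId, status) {
  return db.collection('orders').doc(orderId).update({ status: status });
}

// usage (hypothetical id): setOrderStatus(db, 'gRWSpY...', 'picked');
```

Because `update()` fails if the document does not exist, it is a good fit here, where the order was already created by the user.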
[Continue Reading...](https://softauthor.com/firebase-cloud-firestore-partially-update-a-document/) | rajatamil |
923,936 | Build a data warehouse quickly with Amazon Redshift - Part 3 - The final layer cake | Building a cloud data warehouse and business intelligence modernization In this final... | 17,204 | 2021-12-14T21:01:43 | https://dev.to/aws-builders/build-a-data-warehouse-quickly-with-amazon-redshift-part-3-the-final-layer-cake-5798 | beginners, analytics, aws, tutorial | ## Building a cloud data warehouse and business intelligence modernization
In this final tutorial for Amazon Redshift I will explain how we can bake our cake and eat it too, from loading data into the Redshift cluster through to creating data visualizations with Amazon QuickSight.
If you are reading this post for the first time, you may wish to read the last three tutorials:
* [Getting Started with AWS - On the way to cloud](https://dev.to/aws-builders/getting-started-with-aws-a-sweet-journey-5cjj)
* [Build a data warehouse with AWS Redshift - Part 1](https://dev.to/aws-builders/build-a-data-warehouse-quickly-with-amazon-redshift-2op8)
* [Build a data warehouse with AWS Redshift - Part 2](https://dev.to/aws-builders/build-a-data-warehouse-quickly-with-amazon-redshift-create-an-amazon-redshift-cluster-part-2-3f64)
## Table of Contents
* Quick Introduction
* Synopsis
* High level architecture
* Demo
## Introduction

I am an AWS Community Builder in data, a data scientist and also a Business Performance Analyst at Service NSW. I have consulted and instructed in data analytics and data science for government, financial services and startups for 6 years. It was at an AI startup and in Public Sector that I used Amazon services including Amazon S3, Amazon Sagemaker, AWS Glue, Amazon Redshift and Amazon QuickSight.
* Service NSW is part of Department of Customer Service
* A One stop shop NSW government agency
* Our vision to become the world’s most customer-centric government agency
## Synopsis
We built a well-architected modern data analytics pipeline with a fully-managed, petabyte-scale cloud data warehouse using Amazon Redshift under the Department of Customer Service using AWS Organization.
With lake house architecture we integrated a data lake, data warehouse and business intelligence with security, performance, elasticity, data velocity, cost effectiveness and governance to design a unified analytics platform for our organization-wide data architecture roadmap.
Amazon Redshift has massively parallel processing (MPP) for fast and complex queries and can analyze all our data.
The AWS services used to build our analytics roadmap were:

## High level architecture
In a busy contact centre we serve customers and generate petabytes of data in structured and unstructured formats. One of our initial questions on the journey with AWS was: how do we store all of this data? How do we create a data lake that can also bring in external data from another data lake?

Working in a sprint fashion, this greenfield government project included test, build and deploy. By understanding the current state we wanted to create a vision of the end state and how to improve customer experience for NSW citizens.
Using a Miro board to sketch the high level architecture (it's ok to draw what you think might be the final solution and then consult and iterate), we collaborated with platform teams and also consulted with AWS to develop a solution using an AWS CloudFormation template.
With this greenfield project, wearing my consulting hat, I helped our stakeholders develop the first use case, then iterate and develop the next one, and so forth. What is our minimum viable product? What are our blockers? IAM permissions, and stopping and starting. We read the database developer guide, tested in a sandbox environment, and also worked in a production environment.

#### 1. Amazon S3 - Data Ingestion Layer
In building a solution for the first business use case to ingest data extracted from an external cloud and upload the csv files into folders within our Amazon S3 bucket (our data lake). Our team were given access to AWS S3 via IAM user roles under our AWS account.
#### 2. AWS Glue Studio - Data Processing Layer
AWS Glue Studio was used to pre-process data from the 'source' Amazon S3 bucket, stored in a csv file, to extract, transform and load it back into a 'target' Amazon S3 bucket.
AWS Glue Studio is a great data manipulation (ETL) tool: you can visually build the data pre-processing steps and inspect them if there are errors, and Python code is generated as you update nodes and successfully complete the information at each checkpoint. There are AWS Glue Studio tutorials to get started, and you may also watch on-demand sessions from AWS re:Invent 2021.
Dates can be updated from a string to a date data format quickly using AWS Glue Studio. At the time of writing this blog we are testing automated scripts to clean data from AWS S3 as our 'source'and AWS Redshift cluster as our 'target' and we are testing a sample table before a production solution is created.
#### 3. Amazon Redshift - Data Consumption Layer
A data engineer was granted permissions to the Amazon Redshift cluster role. Using the Query Editor, a connection to the database was created and a table was created using SQL.
At the data consumption layer, data was loaded from an Amazon S3 bucket into the Amazon Redshift cluster using the COPY command.
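A sketch of what that looks like in the Query Editor. The table layout, bucket path and IAM role ARN below are illustrative placeholders, not our real ones:

```sql
-- Hypothetical target table for the ingested CSV files
CREATE TABLE contact_events (
    event_id    BIGINT,
    event_date  DATE,
    channel     VARCHAR(32)
);

-- Load from the S3 data lake into the cluster
COPY contact_events
FROM 's3://my-data-lake/contact-events/'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
FORMAT AS CSV
IGNOREHEADER 1;
```

COPY reads the files in parallel across the compute nodes, which is why it is the recommended way to bulk-load Redshift rather than row-by-row INSERTs.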
#### 4. Amazon QuickSight - Business Intelligence and Dashboards
Connect to the data stored in the Amazon Redshift cluster by connecting to the database with your credentials, and import the data into Amazon QuickSight using SPICE (Super-fast, Parallel, In-memory Calculation Engine).
Once data is loaded into SPICE, you may inspect it using a preview feature. You may then create custom visualizations such as a bar chart, and Amazon QuickSight will provide business intelligence informing you of the Top 10 and Bottom 10 insights of the selected data fields, with a short narrative that is very useful for understanding your data.
Amazon QuickSight also allows you to share your dashboard, schedule reports via email to select audiences, and download the dashboard as a PDF.
#### Why use a Lake house approach?
We have the task of improving customer experience and understanding NSW citizens by building a data architecture roadmap that includes business intelligence, reporting with structured, semi-structured and unstructured data, and machine learning capability for sentiment analysis and topic modelling.

The benefits of using a lake house approach included:
* Analyze all our data, unified data access
* Cost-effective
* Scalable workloads
* Run fast and complex queries
* Access to analytical services for ETL
A checklist of business requirements:
* Tableau Server and Amazon Redshift connectors
* Fast SQL query performance
* Unstructured data can be stored in Amazon S3 data lake
* Fully managed enterprise data warehouse on AWS
* Based on PostgreSQL
#### Amazon Redshift Architecture
Amazon Redshift is an enterprise data warehouse fully managed on AWS, but the customer still thinks about sizing the servers: How much RAM? How much CPU is needed?
* Deployment considerations: a good data model and an optimized Amazon Redshift cluster. Columnar storage, fewer joins, denormalized table schemas and keeping column sizes in tables as narrow as possible will all improve query performance.

* Client applications: Amazon Redshift integrates with ETL and BI
* Connections: Amazon Redshift communicates with client applications using JDBC and ODBC drivers for PostgreSQL e.g. Tableau and Pentaho.
* Leader Node: Communicates with compute nodes to perform database operations.
* Compute Nodes: The leader node compiles code for the execution plan and assigns the code to individual compute nodes.
* Node Slices: A compute node is partitioned into slices. Each slice processes a portion of the workload assigned to the node.
Columnar storage for tables optimizes analytic query performance by drastically reducing the overall disk I/O requirements and the amount of data you need to load from disk.
With massively parallel processing (MPP), Amazon Redshift distributes the rows of a table to the compute nodes so data can be processed in parallel.
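In Redshift, how rows are distributed across nodes and sorted on disk is declared in the table DDL. A hypothetical example (the table and key choices are mine, for illustration only):

```sql
-- Rows are distributed across compute nodes by region,
-- and stored sorted by call_date to speed up date-range filters.
CREATE TABLE daily_calls (
    call_date   DATE,
    region      VARCHAR(32),
    call_count  INTEGER
)
DISTSTYLE KEY
DISTKEY (region)
SORTKEY (call_date);
```

Choosing a distribution key that matches common join/group columns keeps related rows on the same node and reduces data shuffling between nodes at query time.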
#### Amazon Redshift Cluster
The Amazon Redshift cluster was launched and configured using a CloudFormation template.
The component details of the cluster included:
* 1 cluster, 1 leader node and 3 compute nodes
* Node type: dc2.large with storage 160 GB/node. $0.25/node/hour
* DC2 high performance
* Cluster permissions: an IAM role created for Amazon Redshift for our IAM user group
* Cluster Details: Database details created with database port number 5439
* Network and security: Two subnets created in two different Availability zones to improve fault tolerance

## Demo - create an Amazon Redshift cluster
Step 1: Log in to the AWS Management Console
In the AWS Management Console you may type Amazon Redshift in the search bar.
Step 2: Navigate to Amazon Redshift home page
Click on Create cluster.

Step 3: In Cluster configuration provide a name for your cluster
If your organization has never created an Amazon Redshift cluster you may access the Free Tier; otherwise click Production.
Step 4: Configure cluster:
Cluster identifier
* Node type: dc2.large
* Nodes: 3
Step 5: Database configuration:
Master username:
Master password:

Step 6: Configure Cluster Permissions
* In order for Amazon Redshift to have access to Amazon S3 to load data, an IAM role is created and attached to the cluster as a Redshift role
Step 7: Security Group
* VPC and security group attached to the VPC will be created by your administrator
* Subnet Group is also created by your administrator

Step 8: Click close to create cluster
It will take a few minutes for the cluster to be provisioned, and you can check the status in the blue banner.

Step 9: Cluster created
After your cluster has successfully been created, the banner message will turn green and the created cluster will have a green tick with the wording 'available'.

Step 10: Inspect the properties of your Redshift cluster
* You may even download any JDBC and ODBC drivers
* You may resize the cluster if required.

Step 11: Click Query Editor
Connect to the database
Enter the database credentials that were used to create the Amazon Redshift cluster

Step 12: Create a Table
* Query Editor uses standard SQL commands to create a table.

Step 13: Load data from Amazon S3 into Amazon Redshift
* Use the COPY command.

Step 14: Fast Querying
* SQL queries may be saved and scheduled
* SQL queries can return results in milliseconds
* Export results
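Steps 12-14 can be condensed into plain SQL like this. The table layout, bucket and IAM role ARN are illustrative placeholders:

```sql
-- Step 12: create a table (schema is illustrative)
CREATE TABLE citizens (
    id      INTEGER,
    suburb  VARCHAR(64)
);

-- Step 13: load it from Amazon S3 with COPY
COPY citizens
FROM 's3://my-bucket/citizens/'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
CSV
IGNOREHEADER 1;

-- Step 14: query it
SELECT suburb, COUNT(*) AS total
FROM citizens
GROUP BY suburb
ORDER BY total DESC;
```

The same statements can be saved and scheduled in the Query Editor, as noted above.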

Step 15: Connect to Amazon Redshift database
* Upload data by connecting to the Amazon Redshift cluster.
Step 16: Review the data
* Preview and format data in Amazon QuickSight before it is loaded into SPICE.

Step 17: SPICE and Amazon QuickSight
Once data is stored in SPICE, data visualizations can be created for reporting dashboards to provide business insights.
Amazon QuickSight can provide the Top 10 and Bottom 10 insights and a narrative using machine learning to help you understand your data.
Dashboards can be easily shared via email and reports may be scheduled.

You may also watch on-demand sessions from AWS re:Invent to learn more about Amazon QuickSight Q, the new enterprise add-on feature that can help you author dashboards based on topics using natural language processing, which is very exciting!
You can read the blog [here](https://aws.amazon.com/blogs/aws/amazon-quicksight-q-business-intelligence-using-natural-language-questions/).

## Resources
* [Query the database](https://docs.aws.amazon.com/redshift/latest/mgmt/query-databases.html)
* [Amazon Redshift Database Developer Guide](https://docs.aws.amazon.com/redshift/latest/dg/redshift-dg.pdf)
* [Amazon Redshift best practices for loading data](https://docs.aws.amazon.com/redshift/latest/dg/c_loading-data-best-practices.html)
* [Data warehouse system architecture](https://docs.aws.amazon.com/redshift/latest/dg/c_high_level_system_architecture.html)
Stay informed with the latest AWS product updates from AWS re:Invent 2021:
* [Top announcements of AWS re:Invent 2021](https://aws.amazon.com/blogs/aws/top-announcements-of-aws-reinvent-2021/)
Until next time, happy learning!
## Next Tutorial: Chatbot | abc_wendsss |
924,089 | I asked GitHub Copilot if it will replace developers! | Please consider liking and subscribing <3 Github copilot tells an offensive joke: | 0 | 2021-12-12T04:54:09 | https://dev.to/virejdasani/i-asked-github-copilot-if-it-will-replace-developers-45ka | github, javascript, ai, webdev | {% youtube s-XGQOE8E1g %}
Please consider liking and subscribing <3
GitHub Copilot tells an offensive joke:
{% youtube SFBSbAJBTOM %} | virejdasani |
924,119 | 5 Best Docker Courses for Java Developers to Learn in 2024 | Do you want to learn Docker in 2024? Here are the best online courses to learn Docker in 2024 from Udemy, Pluralsight, and other online platforms. | 0 | 2021-12-12T08:16:05 | https://dev.to/javinpaul/5-best-docker-courses-for-developers-to-learn-in-2022-o83 | docker, java, devops, programming | ---
title: 5 Best Docker Courses for Java Developers to Learn in 2024
published: true
description: Do you want to learn Docker in 2024? Here are the best online courses to learn Docker in 2024 from Udemy, Pluralsight, and other online platforms.
tags: Docker, Java, DevOps, Programming
//cover_image: https://direct_url_to_image.jpg
---
*Disclosure: This post includes affiliate links; I may receive compensation if you purchase products or services from the different links provided in this article.*

Hello Devs, if you want to learn Docker and are looking for the best Docker courses from a Java and Spring Boot developer's point of view, or from a DevOps engineering point of view, then you have come to the right place.
Earlier, I shared **[free Spring Boot courses](https://www.java67.com/2017/11/top-5-free-core-spring-mvc-courses-learn-online.html)** and **[free Docker courses](https://www.java67.com/2018/02/5-free-docker-courses-for-java-and-DevOps-engineers.html),** and in this article, I will share the best Docker courses for Java and Spring Boot developers.
Java is one of the most popular and widely used programming languages. It is an evergreen programming language.
For Java developers, Docker is turning out to be a game-changer. Docker is evolving rapidly, and it's now one of the most [essential tools for all kinds of programmers](https://medium.com/javarevisited/11-essential-skills-to-become-software-developer-in-2020-c617e293e90e), and there are reasons for it: Docker makes both development and deployment easier.
By using Docker, you can deploy any kind of [Microservices](https://medium.com/javarevisited/8-best-online-courses-to-learn-service-oriented-soa-and-microservices-architecture-94c01d6b94e6) in the same way. It also makes scaling your services easier using Kubernetes.
You can further read my earlier post, [why every developer should learn Docker](https://www.java67.com/2020/11/why-learn-docker-container-and-tool-in.html) to learn more about the advantages of Docker for both the development and development of modern software and the cloud computing world.
At the same time, learning [Docker](https://www.docker.com/) can be a challenge if you are a beginner or have never used Docker before, but with proper guidance and the right resources, you can easily understand Docker.
Once you know the basic Docker concepts and commands, using Java with it becomes easy.
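To give a flavour of what these courses build up to, here is a minimal Dockerfile sketch for a Spring Boot fat jar. The base image, paths and port are illustrative assumptions, not from any particular course:

```dockerfile
# Minimal sketch: containerize a Spring Boot fat jar.
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY target/app.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
```

With something like this in place, `docker build` and `docker run` are all it takes to get the application running the same way on any machine.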
There are not many resources on the internet for *learning Docker with Java and Spring*. I have researched and curated the best Docker courses for Java and Spring developers.
This article will list the top five courses that will help you understand Docker and teach you how to deploy Java and Spring Boot applications on Docker, but if you are in a hurry then just check [Docker for Java Developers](https://click.linksynergy.com/deeplink?id=JVFxdTr9V80&mid=39197&murl=https%3A%2F%2Fwww.udemy.com%2Fcourse%2Fdocker-for-java-developers%2F) course to start with.
## 5 Best Online Courses to Learn Docker Containers in 2024
Without wasting any more of your time, here is my list of best Docker courses for Java and Spring developers. The list includes the best Docker courses from Udemy, Pluralsight, and other popular online learning portals. It also provides beginner and advanced Docker courses to suit the needs of beginner and experienced developers.
### 1\. [Docker and Kubernetes: The Complete Guide ](https://click.linksynergy.com/deeplink?id=CuIbQrBnhiw&mid=39197&murl=https%3A%2F%2Fwww.udemy.com%2Fcourse%2Fdocker-and-kubernetes-the-complete-guide%2F)
Docker and Kubernetes generally go together, so it is an excellent choice to learn Kubernetes along with Docker. In this course, the instructor dives deep into how to use both Docker and Kubernetes.
You will learn how to use Docker and how [Kubernetes](https://medium.com/javarevisited/7-free-online-courses-to-learn-kubernetes-in-2020-3b8a68ec7abc) can be used with it.
It is a comprehensive Docker course with 22 hours of video content, and you will learn Docker from scratch; no previous experience is required.
Here are things you will learn in this course:
1. Build a CI + CD pipeline from scratch with Github, Travis CI, and AWS
2. Master the Docker CLI to inspect and debug running containers
3. Automatically deploy your code when it is pushed to Github!
4. Understand the purpose and theory of Kubernetes by building a complex app
The best thing about this course is that the instructor explains every necessary concept required to understand Docker and Kubernetes.
Here is the link to check this course - [Docker and Kubernetes: The Complete Guide ](https://click.linksynergy.com/deeplink?id=CuIbQrBnhiw&mid=39197&murl=https%3A%2F%2Fwww.udemy.com%2Fcourse%2Fdocker-and-kubernetes-the-complete-guide%2F)
[](https://click.linksynergy.com/deeplink?id=CuIbQrBnhiw&mid=39197&murl=https%3A%2F%2Fwww.udemy.com%2Fcourse%2Fdocker-and-kubernetes-the-complete-guide%2F)
------
### 2\. [Docker for Java Developers](https://click.linksynergy.com/deeplink?id=JVFxdTr9V80&mid=39197&murl=https%3A%2F%2Fwww.udemy.com%2Fcourse%2Fdocker-for-java-developers%2F)
This Udemy course is one of the most popular Docker with Java courses. It is for those individuals who want to learn Docker with the Java programming language. Essential topics such as running Docker containers, publishing Docker images to Docker Hub, using Docker Swarm, using Maven to create Docker images, and many others are covered in this course.
This course is specifically for Java developers. It is a beginner-level course with a total video content of ten hours.
Requirements
- Basic knowledge of Java.
- Knowledge of Spring is recommended.
- Basic knowledge of Linux.
Created by John Thompson, one of my favorite Udemy instructors and author of [**Spring Framework: Beginner to Guru**](https://click.linksynergy.com/deeplink?id=JVFxdTr9V80&mid=39197&murl=https%3A%2F%2Fwww.udemy.com%2Fcourse%2Fspring-framework-5-beginner-to-guru%2F), this is one of the best Udemy courses to learn Docker, and if you like John's teaching style, which is the right mix of theory and practical then you will love this course.
It's also very affordable, and you can buy it for just $10 in the Udemy sales which happen every now and then; check it out, one might be happening right now.
Learn more about this course here - [Docker for Java Developers](https://click.linksynergy.com/deeplink?id=JVFxdTr9V80&mid=39197&murl=https%3A%2F%2Fwww.udemy.com%2Fcourse%2Fdocker-for-java-developers%2F)
[](https://click.linksynergy.com/deeplink?id=JVFxdTr9V80&mid=39197&murl=https%3A%2F%2Fwww.udemy.com%2Fcourse%2Fdocker-for-java-developers%2F)
--------
### 3\. [Master Docker with Java - DevOps for Spring Microservices](https://click.linksynergy.com/deeplink?id=JVFxdTr9V80&mid=39197&murl=https%3A%2F%2Fwww.udemy.com%2Fcourse%2Fdocker-course-with-java-and-spring-boot-for-beginners%2F)
This is another best-selling docker with Java course at Udemy. In this course, the instructor explains how to use Docker with Java to run Java microservices.
Several other important topics such as creating docker images for Java Spring boot, containerizing Java Spring Boot React full-stack application with Docker, using [MySQL](https://www.java67.com/2021/11/top-5-courses-to-learn-mysql-database.html) with Docker, docker commands, and docker architecture are covered in this course.
It is a beginner-level course with a total video content of six and a half hours.
Requirements
- Basic knowledge of Java.
- Basic knowledge of Spring Boot.
- Basic knowledge of DevOps
This course was created by Ranga Karnam from In28Minutes, another great Java instructor on Udemy and author of excellent courses like **[Master Microservices with Spring Boot and Spring Cloud](https://click.linksynergy.com/deeplink?id=JVFxdTr9V80&mid=39197&murl=https%3A%2F%2Fwww.udemy.com%2Fcourse%2Fmicroservices-with-spring-boot-and-spring-cloud%2F)**, one of the best to learn Microservices.
Ranga has excellent knowledge of Docker, Cloud Computing, and Spring Framework. His teaching style makes it easy to learn these modern concepts; I highly recommend this course to any Java developer who wants to learn Docker in 2024.
Here is the link to join this course - [Master Docker with Java - DevOps for Spring Microservices](https://click.linksynergy.com/deeplink?id=JVFxdTr9V80&mid=39197&murl=https%3A%2F%2Fwww.udemy.com%2Fcourse%2Fdocker-course-with-java-and-spring-boot-for-beginners%2F)
[](https://click.linksynergy.com/deeplink?id=JVFxdTr9V80&mid=39197&murl=https%3A%2F%2Fwww.udemy.com%2Fcourse%2Fdocker-course-with-java-and-spring-boot-for-beginners%2F)
--------
### 4\. [Play by Play: Docker for Java Developers](https://pluralsight.pxf.io/c/1193463/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fplay-by-play-docker-java-developers-arun-gupta-michael-hoffman)
It is a play-by-play Docker course for Java, and one of the most popular Docker with Java courses at Pluralsight. In this course, Java experts Arun Gupta and Michael Hoffman dive into advanced Docker concepts with Java. They cover many important concepts such as Docker fundamentals, Docker Swarm, and Docker Compose.
Again, It is a beginner and intermediate-level course with a total video content of nearly two hours.
Requirements:
Basic knowledge of Java.
The best thing about this Pluralsight Java and Docker course is that it's an unrehearsed and unscripted course, so you will learn how people use Docker in their day-to-day work.
It also touches on important topics like Docker commands and Docker Compose, and it's delivered by experts like Arun Gupta, a Java Champion and a Docker Captain.
Here is the link to join this course - [Play by Play: Docker for Java Developers](https://pluralsight.pxf.io/c/1193463/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fplay-by-play-docker-java-developers-arun-gupta-michael-hoffman)
[](https://medium.com/javarevisited/top-10-pluralsight-courses-to-learn-programming-and-software-development-during-covid-19-stay-at-30b7d8a4f88f)
By the way, you would need a Pluralsight membership to join this course which costs around $29 per month or $299 per year (14% discount).
I highly recommend this subscription to all programmers as it provides instant access to more than 7000+ online courses to learn any tech skill.
Alternatively, you can also use their **[10-day-free-pass](https://pluralsight.pxf.io/c/1193463/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Flearn)** to watch this course for FREE.
-------
### 5\. [Docker - Hands-On for Java Developer](https://click.linksynergy.com/deeplink?id=JVFxdTr9V80&mid=39197&murl=https%3A%2F%2Fwww.udemy.com%2Fcourse%2Fdocker-hands-on%2F)
Another popular Udemy course, this one is for those Java developers who want real-world experience of Docker with Java. In this course, the instructor starts by building a real microservice architecture using [Spring Boot](https://www.java67.com/2018/06/5-best-courses-to-learn-spring-boot-in.html) and deploying the application with Docker containers.
It is a beginner-level course with a total video content of nearly nine hours.
Requirements
- Basic knowledge of Java.
- Basic knowledge of Spring Boot.
- Experience with AWS is recommended.
If you want to get real-world, hands-on experience with Docker, this is the best online course for you to learn how to deploy a [Java Microservice Architecture](https://javarevisited.blogspot.com/2021/09/microservices-design-patterns-principles.html) using Docker and Docker Swarm.
Here is the link to join this course - [Docker - Hands-On for Java Developer](https://click.linksynergy.com/deeplink?id=JVFxdTr9V80&mid=39197&murl=https%3A%2F%2Fwww.udemy.com%2Fcourse%2Fdocker-hands-on%2F)
[](https://click.linksynergy.com/deeplink?id=JVFxdTr9V80&mid=39197&murl=https%3A%2F%2Fwww.udemy.com%2Fcourse%2Fdocker-hands-on%2F)
That's all about the **best Docker courses for Java developers to learn in 2024**. This list includes only beginner-level courses, and all of them cover the essential Docker concepts required to use Java with it.
Some of these courses are long while a few are short. Before choosing any of these courses, make sure you understand the Java programming language, because none of them focuses on Java itself.
Other **DevOps, Cloud, and Programming Resources** you may like
- [Top 5 Courses to Learn Jenkins for Automation and DevOps](https://javarevisited.blogspot.com/2018/09/top-5-jenkins-courses-for-java-and-DevOps-Programmers.html)
- [7 Free Online Courses to learn Kubernetes in 2024](https://medium.com/javarevisited/7-free-online-courses-to-learn-kubernetes-in-2020-3b8a68ec7abc)
- [My favorite courses to learn Amazon Web Service](https://javarevisited.blogspot.com/2020/05/top-5-amazon-web-services-aws-courses-for-beginners-and-experienced-programmers.html)
- [22 Tech Skills Java Developers Can Learn in 2024](https://javarevisited.blogspot.com/2020/03/top-20-skills-java-developers-can-learn.html#axzz6k4XBgTw4)
- [The 2024 DevOps Developer RoadMap](https://medium.com/hackernoon/the-2018-devops-roadmap-31588d8670cb)
- [Top 5 Hibernate and JPA Courses for Java JEE Developers](http://javarevisited.blogspot.sg/2018/01/top-5-hibernate-and-jpa-courses-for-java-programmers-learn-online.html)
- [10 Docker and Kubernetes Courses for Programmers](https://dev.to/javinpaul/top-10-courses-to-learn-docker-and-kubernetes-for-programmers-4lg0)
- [My favorite courses to learn DevOps for experienced](https://javarevisited.blogspot.com/2018/09/10-devops-courses-for-experienced-java-developers.html)
- [10 Tools Java Developers Should Learn in 2024](http://www.java67.com/2018/04/10-tools-java-developers-should-learn.html)
- [5 Free Selenium Courses to Learn Automation Testing](https://javarevisited.blogspot.sg/2018/02/top-5-selenium-webdriver-with-java-courses-for-testers.html)
- [6 Maven Courses for Java Developers](http://www.java67.com/2018/02/6-free-maven-and-jenkins-online-courses-for-java-developers.html)
- [10 Free Docker Courses for Java and DevOps Professionals](https://javarevisited.blogspot.sg/2018/02/10-free-docker-container-courses-for-Java-Developers.html)
- [10 Free Courses to learn AWS and Cloud for Programmers](https://medium.com/javarevisited/top-10-courses-to-learn-amazon-web-services-aws-cloud-in-2020-best-and-free-317f10d7c21d)
- [13 Best DevOps Courses for Developers](https://medium.com/javarevisited/13-best-courses-to-learn-devops-for-senior-developers-in-2020-a2997ff7c33c)
Thanks for reading so far. If you like these *best Docker courses for developers*, please share them with your friends and colleagues. If you have any questions or feedback, then please drop a note.
>**P. S.** - If you want to learn Docker from scratch and are looking for a free online course, you can also check out [HANDS-ON DOCKER for JAVA Developers [FREE]](https://click.linksynergy.com/deeplink?id=JVFxdTr9V80&mid=39197&murl=https%3A%2F%2Fwww.udemy.com%2Fcourse%2Fintroduction-to-docker-for-java-developers%2F), a free course on Udemy. It's completely free, and all you need is a free Udemy account to enroll in this online training course.
| javinpaul |
924,199 | Striver's SDE Sheet Journey - #2 Pascal's Triangle | Hi👋,devs. I have started a Journey called Striver's SDE Sheet Journey and in this journey, I have... | 0 | 2021-12-15T08:59:38 | https://dev.to/sachin26/strivers-sde-sheet-journey-2-pascals-triangle-gp2 | beginners, dsa, programming, java | Hi👋,devs.
I have started a journey called **Striver's SDE Sheet Journey**, and in this journey I have successfully solved the first problem, [#1](https://dev.to/sachin26/strivers-sde-sheet-journey-1-set-matrix-zeroes-4l68), from the SDE Sheet; now I am going to tackle the second one.
## [#2 Pascal's Triangle](https://leetcode.com/problems/pascals-triangle/)
In this problem we are given an integer `numRows`, and we need to return the first `numRows` rows of Pascal's triangle as a 2D array.
```javascript
Input: numRows = 5
Output: [[1],[1,1],[1,2,1],[1,3,3,1],[1,4,6,4,1]]
```

as you can see in the fig,
- in Pascal's Triangle, each number is the sum of the two numbers directly above it.
- each row's first & last value is 1.
After spotting these patterns in Pascal's Triangle, I got an idea for tackling this problem.
let's discuss my idea step by step.
---
> _my approach_
**1.** initialize an array `pascalTriangle` of size `numRows` that holds lists of integers (the `row`s).
**2.** run a loop from `numRow = 1` to `numRows`.
>> **2.1** inside the loop we create an array `row` whose size is the iteration number `numRow` and fill it with **1's**.
e.g. `numRow = 3` then `row = [1,1,1]`
>> **2.2** run a loop from `i = 1` to `numRow - 1`.
>> >> **2.2.1** compute the row's value from the previous row's values. (we can get the previous row from the `pascalTriangle` array)
e.g. `value = preRow[i-1] + preRow[i]`
>> >> **2.2.2** set the computed value in the `row` array at index i.
>> **2.3** add `row` array to `pascalTriangle` array.
**3.** return `pascalTriangle` array.
**4.** end.
> Let's do a dry run of this approach on a simple example.
```javascript
Input: numRows = 5
expected Output: [[1],[1,1],[1,2,1],[1,3,3,1],[1,4,6,4,1]]
```
**step-1** initialize an array of size `numRows` called `pascalTriangle = [[],[],[],[],[]]`.
```javascript
pascalTriangle = [[],[],[],[],[]]
```
**step-2** run a loop from `numRow = 1` to `numRows`.
**at numRow = 1**
> **1.** initialize an array of size `numRow` and fill it with **1's**.
`row = [1]`
> **2.** run a loop from `i = 1` to `numRow - 1`.
> > **at i = 1** => ( 1 < 1-1 ) loop condition does not satisfy the running criteria, so this loop does not get executed.
> **3.** add `row` array to `pascalTriangle` array.
```javascript
pascalTriangle = [[1],[],[],[],[]]
```
**at numRow = 2**
> **1.** initialize an array of size `numRow` and fill it with **1's**.
`row = [1,1]`
> **2.** run loop.
> > **at i = 1** => ( 1 < 2-1 ) loop condition does not satisfy the running criteria
> **3.** add `row` array to `pascalTriangle` array.
```javascript
pascalTriangle = [[1],[1,1],[],[],[]]
```
**at numRow = 3**
> **1.** initialize an array of size `numRow` and fill it with **1's**, e.g. `row = [1,1,1]`
> **2.** run loop.
> > **at i = 1** => ( 1 < 3-1 ) loop condition satisfies the running criteria, then:
> > > **1.** calculate the value from the previous row.
`value = preRow[1-1] + preRow[1]` => `2 = 1 + 1`
> > > **2.** set `value` in the `row` array at index 1.
`row = [1,2,1]`
> > **at i = 2** => ( 2 < 3-1 ) loop condition does not satisfy the running criteria, so the loop ends.
> **3.** add `row = [1,2,1]` to the `pascalTriangle` array.
```javascript
pascalTriangle = [[1],[1,1],[1,2,1],[],[]]
```
**at numRow = 4**
> **1.** initialize an array of size `numRow` and fill it with **1's**, e.g. `row = [1,1,1,1]`
> **2.** run loop
> > **at i = 1** => ( 1 < 4-1 ) loop condition satisfies the running criteria, then:
> > > **1.** calculate the value from the previous row.
`value = preRow[1-1] + preRow[1]` => `3 = 1 + 2`
> > > **2.** set `value` in the `row` array at index 1, `row = [1,3,1,1]`
> > **at i = 2** => ( 2 < 4-1 ) loop condition satisfies the running criteria, then:
> > > **1.** calculate the value from the previous row.
`value = preRow[2-1] + preRow[2]` => `3 = 2 + 1`
> > > **2.** set `value` in the `row` array at index 2.
`row = [1,3,3,1]`
> > **at i = 3** => ( 3 < 4-1 ) loop condition does not satisfy the running criteria, so the loop ends.
> **3.** add `row = [1,3,3,1]` to the `pascalTriangle` array.
```javascript
pascalTriangle = [[1],[1,1],[1,2,1],[1,3,3,1],[]]
```
after the last iteration, we get the output we expected.
```javascript
pascalTriangle = [[1],[1,1],[1,2,1],[1,3,3,1],[1,4,6,4,1]]
```
> _Java_
```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
class Solution {
public List<List<Integer>> generate(int numRows) {
// step-1 initialized an array that holds a list of Integer
List<List<Integer>> pascalTriangle = new ArrayList<List<Integer>>(numRows);
// step-2 run a loop from 1 to numRow
for(int numRow=1; numRow<=numRows;numRow++){
// step-2.1 initialized a array & filled with 1's
List<Integer> row = new ArrayList<Integer>(Collections.nCopies(numRow,1));
// step2.2 run a loop from 1 to numRow -1
for(int i=1;i<numRow-1;i++){
// get previous row array from pascalTriangle array
List<Integer> preRow = pascalTriangle.get(numRow-2);
// step-2.2.1 calculate value from previous row array
int value = preRow.get(i-1) + preRow.get(i);
// step-2.2.2 set value at index i
row.set(i,value);
}
// step-2.3 add the row array to pascalTriangle array
pascalTriangle.add(row);
}
// step-3 return the pascalTriangle array
return pascalTriangle;
}
}
```
> Other Approaches
### _Algo #1_
_in this approach we compute each new row in place by adding **1** at index 0 and then summing adjacent pairs._
`example:-`
`[1,1]` add 1 at 0 index -> [1,<u>**1**,**1**</u>] -> `[1,2,1]`
`[1,2,1]` add 1 at 0 index -> [1,<u>**1**,**2**</u>,1] -> `[1,3,2,1]` -> [1,3,<u>**2,1**</u>] -> `[1,3,3,1]`
`[1,3,3,1]` add 1 at 0 index -> [1,<u>**1,3**</u>,3,1] -> `[1,4,3,3,1]` -> [1,4,<u>**3**,**3**</u>,1] -> `[1,4,6,3,1]` -> [1,4,6,<u>**3,1**</u>] -> `[1,4,6,4,1]`
> let's understand this approach step by step.
**step-1.** initialize two arrays `pascalTri` and `row`.
**step-2.** run a loop from `numRow = 1` to `numRows`.
> **1.** add **1** to `row` array at index 0.
> **2.** run a loop from `i = 1` to `row.size -1`.
> > **1.** update `row` array value.
e.g. `row.set(i,row.get(i) + row.get(i+1))`
> **3.** add a copy of the `row` array to `pascalTri` array.
**step-3** return `pascalTri` array.
**step-4** end.
> Java
```java
import java.util.ArrayList;
import java.util.List;

public class Solution {
    public List<List<Integer>> generate(int numRows)
    {
        // initialize two Lists
        List<List<Integer>> pascalTri = new ArrayList<List<Integer>>();
        ArrayList<Integer> row = new ArrayList<Integer>();
        // run 1 to numRows
        for(int numRow=1;numRow<=numRows;numRow++)
        {
            // add 1 at index 0
            row.add(0, 1);
            // update row values in place
            for(int i=1;i<row.size()-1;i++)
                row.set(i, row.get(i)+row.get(i+1));
            // add a copy of the row instance
            pascalTri.add(new ArrayList<Integer>(row));
        }
        return pascalTri;
    }
}
```
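To sanity-check the in-place idea end to end, here is a tiny standalone version; the class name and the `main` harness are mine, not from the sheet:

```java
import java.util.ArrayList;
import java.util.List;

public class PascalDemo {
    // Same in-place idea as above: prepend 1, then fold neighbour pairs.
    static List<List<Integer>> generate(int numRows) {
        List<List<Integer>> pascalTri = new ArrayList<>();
        List<Integer> row = new ArrayList<>();
        for (int numRow = 1; numRow <= numRows; numRow++) {
            row.add(0, 1);                          // new leading 1
            for (int i = 1; i < row.size() - 1; i++)
                row.set(i, row.get(i) + row.get(i + 1));
            pascalTri.add(new ArrayList<>(row));    // snapshot this row
        }
        return pascalTri;
    }

    public static void main(String[] args) {
        System.out.println(generate(5));
        // prints [[1], [1, 1], [1, 2, 1], [1, 3, 3, 1], [1, 4, 6, 4, 1]]
    }
}
```

Compiling and running it prints the first five rows, matching the expected output shown earlier.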
> _Time Complexity_
we initialize a row array and traverse the row's elements to update each cell.
so the time complexity : **O(numRows*numRows)**
> _Space Complexity_
we build a 2D matrix (`pascalTri`, holding every `row`).
so the space complexity : **O(numRows*numRows)** for the output
_Thank you for reading this article. if you have any query please share them in the comment box_ | sachin26 |
924,214 | If-Else or Switch or Match - Laravel | We have an example of entering data of different type and here we use... | 0 | 2021-12-12T11:03:51 | https://dev.to/morcosgad/if-else-or-switch-or-match-laravel-3h45 | php, laravel, programming, beginners | We have an example of handling requests for different data types, and here we use `if`:
```php
if ($request->type == 'users') {
    $users = User::all();
    dd($users);
}
if ($request->type == 'projects') {
    $projects = Project::all();
    dd($projects);
}
```
But we want to make the code simpler and shorter, so here I will use `switch`:
```php
switch ($request->type) {
case 'users': $model = 'App\\Models\\User'; break;
case 'projects': $model = 'App\\Models\\Project'; break;
}
$records = $model::all();
dd($records);
```
But I want to write even more powerful code using PHP 8, so I will use match
```php
$model = match($request->type){
    'users' => 'App\\Models\\User',
    'projects' => 'App\\Models\\Project',
};
$records = $model::all();
dd($records);
```
Source :- https://www.youtube.com/watch?v=sTm5P4IGD5g
Source :- https://stitcher.io/blog/php-8-match-or-switch
I hope you enjoyed the code.
| morcosgad |
955,706 | What I learned about using and building public APIs | I've recently learned about the API Economy concept (A good primer and a contrarian view). After a... | 0 | 2022-01-15T03:28:11 | https://dev.to/gwenshap/what-i-learned-about-using-and-building-public-apis-2d00 | beginners | ---
title: What I learned about using and building public APIs
published: true
description:
tags: #beginners
---
I've recently learned about the API Economy concept ([A good primer](https://www.notboring.co/p/apis-all-the-way-down) and [a contrarian view](https://www.swyx.io/api-economy/)). After a bit of research, I was pretty amazed at how so much of the stuff you need to build your product already exists out there. I think everyone knows about Stripe, Twilio, Auth0.. but I was surprised to learn about Notarize, which has API for signing and notarizing documents, or FaunaDB which is a database with REST API, or background checks with API, etc, etc.
I learned to never implement anything without checking if someone else already provides that as a service via an API that I could integrate with. I found out that Postman (which also has a nice desktop app for testing APIs) hosts API world where tons of APIs are hosted and it lets you experiment with them online: https://www.postman.com/explore
I also learned about JAMStack, which is a community of FE engineers who are building entire websites without ever building a backend - simply by using other services and filling in the gaps with serverless functions (AWS Lambda and such).
It makes sense for pretty much every service to add public APIs and allow others to integrate. It opens up new use-cases and revenue streams, and for the most part we already have these APIs. OpenAPI makes it pretty easy to define the APIs, and then generate the web interfaces in your language, generate documentation, generate mocks, generate tests, etc... All that generated stuff is super important:
* Developers will need the documentation to use your API and do the integrations.
* Mocks will allow them to test the integration without loading your production application. Maybe even try a "mocked MVP" so you can get feedback before building your app.
* Generated tests will let you continuously validate that you didn't break the APIs (since other developers will rely on them for their applications, it is critical not to break their applications - break them and they won't come back and tell their friends).
https://swagger.io/ has great tools for OpenAPI.
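Since OpenAPI keeps coming up, here is a flavor of what a definition looks like: a hypothetical one-endpoint spec (the service name, path, and fields are all made up for illustration):

```yaml
openapi: "3.0.3"
info:
  title: Example Notary API   # hypothetical service, not a real product's API
  version: "1.0.0"
paths:
  /documents/{id}/notarize:
    post:
      summary: Notarize a stored document
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: The notarized document record
```

From a small file like this, the tooling can generate the client stubs, docs, mocks, and tests mentioned above.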
I cover all this with a bunch of examples in a video:
{% youtube cTnTW5Cq0b0 %}
I'm also working on a SaaS-in-a-Box backend service (hopefully with both great APIs and great integrations) to make life simpler for SaaS developers. If you are willing to give me feedback on the MVP, mind sharing your email with me in this form? I promise it won't be used for marketing, I'll just personally connect with you to discuss https://forms.gle/8tW73MwEWdWu4rB27. | gwenshap |
924,228 | An Overview of Cloud Computing. | What is Cloud Computing? Cloud computing is the on-demand delivery of IT resources over the Internet... | 0 | 2021-12-12T13:09:57 | https://dev.to/tioluwaniope/an-overview-of-cloud-computing-32eh | beginners, webdev, tutorial, cloudskills |
**What is Cloud Computing?**
Cloud computing is the on-demand delivery of IT resources over the Internet with pay-as-you-go pricing. Instead of buying, owning, and maintaining physical data centers and servers, you can access technology services, such as computing power, storage, and databases, on an as-needed basis from a cloud provider like Amazon Web Services (AWS), Microsoft Azure, Google Cloud, etc.
Organizations of every type, size, and industry are using the cloud for a wide variety of use cases, such as data backup, disaster recovery, email, virtual desktops, software development and testing, big data analytics, and customer-facing web applications.

## CLOUD COMPUTING SERVICE MODELS ##
**Infrastructure as a Service (IaaS)**
IaaS contains the basic building blocks for cloud IT. It typically provides access to networking features, computers (virtual or on dedicated hardware), and data storage space. IaaS gives you the highest level of flexibility and management control over your IT resources. It is most similar to the existing IT resources with which many IT departments and developers are familiar.
_Examples of IaaS_: DigitalOcean, Linode, Rackspace, Amazon Web Services (AWS), Cisco Metapod, Microsoft Azure, Google Compute Engine (GCE)
**Platform as a Service (PaaS)**
PaaS removes the need to manage the underlying infrastructure (usually hardware and operating systems) and allows you to focus on the deployment and management of your applications. This helps you be more efficient, as you don't need to worry about resource procurement, capacity planning, software maintenance, patching, or any of the other undifferentiated heavy lifting involved in running your application.
_Examples of PaaS_: AWS Elastic Beanstalk, Windows Azure, Force.com, Google App Engine, Apache Stratos, OpenShift
**Software as a Service (SaaS)**
SaaS provides you with a complete product that is run and managed by the service provider. In most cases, people referring to SaaS are referring to end-user applications (such as web-based email). With a SaaS offering, you don’t have to think about how the service is maintained or how the underlying infrastructure is managed. You only need to think about how you will use that particular software.
_Examples of SaaS_: Google Workspace, Dropbox, Salesforce, Cisco WebEx, Concur, GoToMeeting, Zoom

## Components in Cloud Computing
Cloud computing components correspond to platforms such as the front end, the back end, the cloud-based delivery, and the network utilized. A cloud computing framework is thus broadly categorized into three parts, known as clients, distributed servers, and the datacenter. These three components play a big part in the operation of cloud computing, and their responsibilities are described below:
**Clients**
Clients in cloud computing are, in general, the devices operating on Local Area Networks (LANs). These might be in the form of desktops, laptops, mobiles, or tablets to enhance mobility. Clients hold the responsibility of interaction, which drives the management of data on cloud servers.
**Datacenter**
A datacenter is an array of servers that houses the subscribed application. The progress of the IT industry has brought the concept of virtualizing servers, where the software might be installed through the utilization of various instances of virtual servers. This approach streamlines the process of managing dozens of virtual servers on multiple physical servers.
**Distributed Servers**
These are servers housed in other locations, so the physical servers might not all be housed in the same place. Even though the distributed servers and the physical servers appear to be in different locations, they perform as if they were right next to each other.
Further, cloud computing has many other components and those come under mainly as four classifications and these components are the services of cloud computing and they are **IaaS, PaaS, SaaS, Cloud Computing Architecture**
## Emerging Cloud Computing Trends for 2021 ##
_Kubernetes_
- Kubernetes is a management tool introduced by Google and it plays a significant role in revolutionizing the global economy. Kubernetes is developing excitement and making the dreams of industries come to reality.
_Cloud AI_
- AI-enabled cloud is one of the most important emerging trends that is used to gain access to massive amounts of data. Such data helps them to optimize the core competencies by using machine learning.
_Improving SaaS operations_
- SaaS is gaining more prominence, more specialized platforms like Better Cloud, Cloud Manager, etc are emerging to manage migrations and operations. This is enabling comprehensive management of solution suites like Microsoft Office 365, Google G Suite, and other leading SaaS products.
_Multi-Cloud Becomes Omni-Cloud_
- Multi-level cloud has become much popular in the business industry. Many companies have started using cloud computing in their workflow. It is believed that as more portable applications enable smoother connectivity, sooner the multi-cloud would turn into omni-cloud.
_Quantum Computing_
- It can be said that the performance of computers will increase in the coming years. This will only be possible through quantum computing. Quantum computing will enable computers to process data at a faster pace.
**_This should help you understand cloud computing and why we should look out for the Cloud and it's services in the years to come. My next post will talk about the emerging trends of Cloud Computing in the Year 2022._**
 | tioluwaniope |
924,245 | Beginner To Pro - Python Learning Path | Why Learn Python Python is one of the best programming languages of the 21st century. It's... | 0 | 2021-12-12T12:21:39 | https://dev.to/enirox/beginner-to-pro-python-learning-path-18k7 | python, codenewbie, webdev, beginners | ## Why Learn Python
Python is one of the best programming languages of the 21st century. It's not going anywhere anytime soon. Why? Well, because it's easy to learn, a clean and structured programming language with powerful capabilities.
So having seen how important a programming language Python is, how exactly should a beginner start to learn it? One of the most frustrating things about learning a programming language is how generic the learning resources are, which makes newbie programmers quit learning a particular language because of how intimidating it may seem.
So what exact path should a beginner take to learn the Python programming language? Where should I start? What projects should I do? What frameworks should I use?
This is the learning path I would take to learn Python as a newbie programmer. This is just my research into what works for me and what I believe could work for you. So without further ado, let's get into it.
1. Python Basics
2. Data Structures and Algorithms
3. Web Development
4. Web Scraping and Automation
5. Artificial Intelligence and Machine Learning
### Python Basics
This is the most important step of learning Python, as it simply cannot be skipped. You have to learn the very basic syntax of the programming language; this will serve as a foundation for any deeper learning. You should not, however, waste too much time on this, as it's not very motivating. You can check out books to learn to program, or pick one of the numerous courses on Udemy. You can even learn it for free on Python's official website.
### Data Structures and Algorithms
DSA is the next path to take as a newbie developer. Learning Data Structures and Algorithms can be daunting and difficult (_it was for me_), but one should note that it is one of the most important skills to have, not just as a Python programmer, but as a programmer overall. This is because programmers that are proficient in Data Structures and Algorithms can easily perform any task involving calculations, data processing, automated reasoning, etc. DSA demonstrates your problem-solving skills and abilities to prospective employers.
### Web Development
One area Python shines is in the area of web development, with a variety of frameworks to choose from, including Flask, Django, Web2py, Bottle, and CherryPy. Being proficient in web development with Python will enable you to create almost any web application of your choice. Web development is also a very lucrative field, hence knowing web development will provide you with more job opportunities.
### Web Scraping and Automation
Python is mostly known as one of the best web scraping languages, enabling you to handle web crawling/scraping processes smoothly. BeautifulSoup is one of the most popular Python libraries, and it makes web scraping an easy task to do. Python also enables one to create scripts that automate repetitive tasks in a very efficient way.
### Artificial Intelligence and Machine Learning
Another area where Python shines is AI and ML, as it has been pegged as the best programming language for AI. It offers great libraries and frameworks for scientific computation, statistical calculation, and much more. So if you fancy creating the next _JARVIS_, you can't go wrong with Python.
### Projects
This one, although not on the list, is one of the things you should not overlook. As much as you will be learning the programming language, you must build projects. This does not only solidify your learning and skills, it's also going to provide you with an arsenal of projects you can show to prospective employers. So even as you learn... build.
## What Do You Want to Do
It's important to note that this is from a personal point of view. Learning paths should be chosen based on what you want to do with the language of your choosing. The learning path for a data scientist or mobile developer might be completely different from the one I've stated above. This is a more generalized way of learning the programming language.
_I will be writing a more detailed article on the resources to use to learn soon. Follow and Subscribe so that you don't miss out._
| enirox |
924,358 | Starting journey toward building Blockchain App | Photo by Zoltan Tasi on Unsplash Hi all, in this article, we will discuss the path toward building... | 15,852 | 2021-12-12T15:44:43 | https://dev.to/nikhildhawan/starting-journey-toward-building-blockchain-app-2c33 | blockchain, technology | Photo by <a href="https://unsplash.com/@zoltantasi?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Zoltan Tasi</a> on <a href="https://unsplash.com/s/photos/ethereum?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Unsplash</a>
Hi all, in this article, we will discuss the path toward building our DApps (Decentralized Applications) using blockchain, specifically based upon Ethereum.
When Ethereum was launched, it was designed with building DApps in mind; nowadays it's also famous for its cryptocurrency, and it can be used to transfer data based upon blockchain technology. There are many networks involved in the overall ecosystem, each network has 1 or more nodes, and each node runs an Ethereum client. Each node has a full copy of an identical blockchain.
For end consumers, there are wallets through which they can access their accounts and carry out transactions, e.g. the Chrome extension MetaMask, while developers use web3.js. You can install MetaMask in your Chrome from [here](https://chrome.google.com/webstore/detail/metamask/nkbihfbeogaeaoehlefnkodbefgpgknn?hl=en).
For our journey toward building our App, we will use the Rinkeby Test network of Ethereum which help us to test out our transactions without burning real cash.

If we look here, we have an account address that is unique to a user and will be the same across different networks. To get free ETH for your test account, sent to the address you see in your wallet, you can use [https://faucets.chain.link/](https://faucets.chain.link/)

These tokens don't have any real value, but they can be used to test our smart contract code for free before we make it usable on the main ETH network.
The account address we see in the wallet is our public address (it can be thought of like an email address), which we can use to receive or send ETH tokens. There are 2 other keys involved with it, the public and private keys, which combine to act as an identifier for our account when carrying out transactions.
Thanks for reading this. In upcoming articles in this series, we will continue our journey to learn more and discuss how is the transaction carried out and important pointers in that.
| nikhildhawan |
924,377 | Hologram shooter | A post by Jayant Goel | 0 | 2021-12-12T17:09:37 | https://dev.to/jayantgoel001/hologram-shooter-528 | codepen | {% codepen https://codepen.io/justjspr/pen/BajmpGY %} | jayantgoel001 |
924,477 | +Features in GitLab CI/CD | GitLab and Auto DevOps: Auto DevOps provides a predefined CI/CD configuration that... | 15,662 | 2021-12-15T17:41:27 | https://dev.to/bernardo/features-no-gitlab-cicd-2n5i | gitlab, git, beginners, devops | ## GitLab and Auto DevOps
Auto DevOps provides a predefined CI/CD configuration that lets you automatically detect, build, test, deploy, and monitor your applications. This makes it easier to set up each project more consistently.
It is enabled by default for all of your projects, but it can be disabled by the administrator at the instance level. It can be disabled and enabled by GitLab.com users at the project level, and self-managed users can also enable it at the group or instance level.
## GitLab package stage
GitLab lets teams package their applications and dependencies, manage containers, and build artifacts with ease. The private, secure container registry and artifact repositories are built in and preconfigured to work seamlessly with GitLab source code management and CI/CD pipelines. Ensure DevOps acceleration with automated software pipelines that flow freely without interruption.
## Improved package management
Using GitLab's package system lets users quickly search for and use build artifacts, which improves reuse across the whole organization. This makes it easier for all teams to collaborate and share best practices to minimize time to market and increase overall efficiency.
- **Package Registry** - Every team needs a place to store its packages and dependencies. GitLab aims to provide a comprehensive solution, integrated into our single application, that supports package management for all commonly used languages and binary formats.
- **Container Registry** - A Container Registry is a secure, private registry for Docker images integrated into GitLab. Building, pushing, and pulling images works out of the box with GitLab CI/CD.
- **Helm Chart Registry** - Kubernetes cluster integrations can take advantage of Helm charts to standardize their distribution and installation processes. Support for a built-in Helm chart registry enables better self-managed container orchestration.
- **Dependency Proxy** - GitLab's dependency proxy can serve as an intermediary between your local developers and automation and the world of packages that need to be fetched from remote repositories. By adding a layer of security and validation to a caching proxy, you can ensure reliability, accuracy, and auditability for the packages you depend on.
- **Jupyter Notebooks** - Jupyter notebooks are a common type of code used for data science use cases. With GitLab, you can store and version control these notebooks the same way you store packages and application code.
- **Git LFS** - Git LFS (Large File Storage) is a Git extension that reduces the impact of large files on your repository by downloading the relevant versions of them lazily. Specifically, large files are downloaded during the checkout process rather than during cloning or fetching.
## GitLab release stage
GitLab helps automate application release and delivery, shortening the delivery lifecycle, streamlining manual processes, and accelerating team velocity. With zero-touch continuous delivery (CD) built into the pipeline, deployments can be automated across multiple environments, such as staging and production, and the system simply knows what to do without being told, even for more advanced patterns like canary deployments. With feature flags, built-in audit/traceability, on-demand environments, and GitLab Pages for static content delivery, you will be able to deliver faster and with more confidence than ever before.

## Concepts that will help you in your day-to-day work
Now a little glossary of concepts to help in your day-to-day =D
## Continuous Delivery
The practice of Continuous Delivery (CD) ensures the delivery of CI-validated code to your application through a structured deployment pipeline.
Together, CI and CD act to accelerate how quickly your team can deliver results to your customers and stakeholders. CI helps detect and reduce bugs early in the development cycle, and CD moves verified code into your applications faster.
## Pages
GitLab Pages is a feature that lets you publish static websites directly from a repository in GitLab.
You can use it for personal or business websites, such as portfolios, documentation, manifestos, and business presentations. You can also assign any license to your content.
Pages does not support dynamic server-side processing, such as that required by .php and .asp. See this article to learn more about static versus dynamic websites.
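As a minimal sketch of how a Pages deployment fits into CI/CD (file names here are illustrative), a `.gitlab-ci.yml` needs a job named `pages` that exposes a `public/` directory as an artifact:

```yaml
# Minimal GitLab Pages job: the job must be called "pages" and must
# publish the "public" directory as an artifact; GitLab then serves
# whatever static files it contains.
pages:
  stage: deploy
  script:
    - mkdir -p public
    - cp index.html public/   # assumes a static site at the repo root
  artifacts:
    paths:
      - public
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
```

Once this job runs on the default branch, the site is served from the project's `*.gitlab.io` URL.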
## Review Apps
GitLab's Review Apps include:
**Automatic live preview** - Code, commit, and preview your branch in a live environment. Review Apps automatically generate dynamic environments for your merge requests.
**One click to collaborate** - Designers and product managers won't need to check out your branch and run it in a test environment. Just send the team a link and let them click.
**Fully integrated** - With GitLab code review, built-in CI/CD, and Review Apps, you can speed up your development process with a single tool for coding, testing, and previewing your changes.
**Deployment flexibility** - Deploy to Kubernetes, Heroku, FTP, and more. You can deploy anywhere you can script with .gitlab-ci.yml, and you have full control to deploy as many different types of Review Apps as your team needs.
## Incremental rollout
When you have a new version of your application to deploy to production, you can use an incremental rollout to replace only some pods with the latest code. This lets you first check how the application is behaving and later manually increase the rollout up to 100%.
## Feature flags
Feature flags allow teams to achieve CD by letting them deploy dark features to production in smaller batches for controlled testing, separating feature delivery from customer launch, and removing risk from delivery.
## Release orchestration
Release management and orchestration as code, based on smart notifications, delivery scheduling and shared resources, blackout periods, relationships, parallelization and sequencing, as well as support for process integration and manual interventions.
## Release evidence
Release evidence includes capabilities such as deploy-time security controls to ensure that only trusted container images are deployed to Kubernetes Engine and, more broadly, includes all the assurances and evidence collection necessary for you to trust the changes you are delivering.
## Secrets management
Vault is a secrets management application offered by HashiCorp. It lets you store and manage sensitive information such as secret environment variables, encryption keys, and authentication tokens. Vault offers identity-based access, which means Vault users can authenticate through several of their preferred cloud providers.

Thanks, see you! | bernardo
342,903 | A Short Borg Tutorial | Borg is the new generation of attic. borg init --encryption=keyfile /path/to/repo borg create -C lz... | 0 | 2020-05-24T15:36:16 | https://mmap.page/dive-into/borg/ | ---
title: A Short Borg Tutorial
published: true
date:
tags:
canonical_url: https://mmap.page/dive-into/borg/
---
[Borg](https://www.borgbackup.org/) is the new generation of [attic](https://attic-backup.org/).
```
borg init --encryption=keyfile /path/to/repo
borg create -C lz4 /path/to/repo::NAME-YOUR-BACKUP ~/Documents
```
## Free space
Before you start creating backups, please make sure that there is always a good amount of free space on the filesystem that has your backup repository (and also on ~/.cache).
## Encryption
Default enabled with `repokey` method.
```
borg init --encryption=none|repokey|keyfile PATH
```
When repository encryption is enabled, all data is encrypted using 256-bit AES encryption, and the integrity and authenticity are verified using HMAC-SHA256.
- `repokey`: key is stored inside the repository (`config`);
- `keyfile`: key is stored under `~/.config/borg/keys/`;
In both modes, the key is stored in encrypted form and can only be decrypted by providing the correct passphrase.
Do not forget to backup the key (e.g. via `borg key export`).
For automated backups, the passphrase can be specified using the BORG\_PASSPHRASE environment variable.
Be careful how you set the environment; using the env command, a `system()` call, or inline shell scripts might expose the credentials in the process list directly, and they will be readable to all users on a system. Using `export` in a shell script file should be safe, however, as the environment of a process is accessible only to that user. Also, using a shell command may leak the passphrase into the shell history file.
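As a sketch of the `export` approach (the repository path and source directory are illustrative, and the script assumes the repo from the earlier `borg init` example already exists):

```shell
#!/bin/sh
# Unattended backup sketch. Exporting BORG_PASSPHRASE inside the script
# keeps the passphrase out of the process list, unlike `env VAR=... cmd`
# or an inline assignment typed on an interactive command line.
export BORG_PASSPHRASE='use-a-real-secret-here'

archive="backup-$(date +%Y-%m-%d)"
borg create -C lz4 "/path/to/repo::${archive}" ~/Documents

unset BORG_PASSPHRASE
```

Run it from cron or a systemd timer; keep in mind the script file itself now contains the passphrase, so it must not be world-readable.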
For server backup, have a look at [borgmatic](https://torsion.org/borgmatic/).
## Compression
```
borg create --compression TYPE
```
Default is no compression.
- fast repo storage and some compression: `lz4`
- less fast repo storage and a bit more compression: `zlib`
- very slow repo storage and high compression: `lzma`
`lz4` is very fast and thus preferred. `lzma` is preferred when the repository is on a remote host with a slow (dial-up) connection.
## Upgrade from attic
```
borg upgrade --inplace REPOSITORY
```
If a backup copy is required, omit the `--inplace` option.
## Hosted Services
- [rsync.net](https://www.rsync.net/products/borg.html) also works with scp, rsync, git, etc. Nodes available globally.
- [BorgBase](https://www.borgbase.com/) dedicated borg repository hosting with specific APIs. It also offers a free plan (5 GB and 2 repos), and paid plans are cheaper than rsync.net. Nodes available in US and EU. | weakish | |
924,543 | Monads in a simple way | Monads were created by mathematicians in 1960 and rediscovered by computer scientists in 1990 as a... | 0 | 2021-12-13T21:17:15 | https://dev.to/kindsloth/monads-in-a-simple-way-7f9 | haskell, functional, programming | Monads were created by mathematicians in 1960 and rediscovered by computer scientists in 1990 as a new way to handle effects. Let's write some examples. Imagine a function that gets the tail of a list, like this:

The problem here is obvious, this function can fail if he receives an empty list, now imagine that for any reason we're using this function in our program to work with lists and for some reason, it receives an empty list, in this case, our program will break and the language compiler will give to us an exception, so how to solve this? to solve this we can use the Maybe Monad, let's write this:

Now we have the function **_safeTail_** which is total, which means that this function will never fail, even if receives an empty list. As you can see Maybe Monad is like a box that encapsulates a value, so this box can have some value or nothing, let's write our own Maybe Monad in Haskell:

As you can see the Maybe Monad allows us to handle functions that can fail, but now that these values are encapsulated how we can work with him? Monads essentially are Functors so we can map functions in our Monads without changing your structure, let's have an example with lists:

Here we map the function **_(+1)_** in our list. As you can see we don’t change its structure, and that’s basically what Functors allow us to do. But, what if we want to pass our boxed value to some function that takes just a value and transforms this into another type of monadic value? We can use the bind operator for this, which is one of the functions necessary for the implementation of a monad. In Haskell, this is represented by the symbol: **_>>=_**.
Imagine that we are using our **_safeTail_** that we write before and now we want to pass the list to a function that takes some list and find the first even element in the list, let's write this function:

After writing this we can pass our element to this function using the bind operator, let's write this:

One thing you can notice here is that the **_getFirstEven_** function also returns a Maybe Monad, so what the bind operator does is take the value out of the box, do something with it, and put the result back in a box. This helps us compose our monadic functions while handling all the effects we find along the way.
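Since the snippets above are shown as images, here is the same idea as copy-pasteable Haskell, using the standard library's `Maybe` and `find` (the function bodies are my reconstruction of the snippets, not a verbatim copy):

```haskell
import Data.List (find)

-- A total version of tail: it returns Nothing instead of crashing on [].
safeTail :: [a] -> Maybe [a]
safeTail []     = Nothing
safeTail (_:xs) = Just xs

-- Finds the first even element of a list, if there is one.
getFirstEven :: [Int] -> Maybe Int
getFirstEven = find even
```

In GHCi, `safeTail [1,2,3,4] >>= getFirstEven` evaluates to `Just 2`, while `safeTail [] >>= getFirstEven` evaluates to `Nothing`; the failure propagates without any explicit error handling.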
So the central point here is that Monads allow us to handle effects, for example handling Input/Output with the IO Monad. This is very powerful because it allows us to handle effects with pure functions, meaning it also allows us to build real-world applications with pure languages like Haskell.
A little explanation about this: Haskell, for example, is a pure language, which means we only have pure functions. Pure functions are like mathematical functions: they produce exactly the same output given the same input, so side effects are not possible. While building a real-world application, however, we need to handle side effects like reading data typed by the user or reading a file. So we might think Haskell is a useless language, but Monads allow us to handle these effects while keeping the functions pure. | kindsloth
924,626 | Why You Need a Coding Scratch File | Sometimes the best place to write code for your project is outside it. This may sound like a lie.... | 0 | 2021-12-13T01:16:22 | https://kevinhicks.software/blog/88/why-you-need-a-coding-scratch-file | productivity, programming, beginners | Sometimes the best place to write code for your project is outside it.
This may sound like a lie. How could writing code outside of a project be the best place for the project's code? How can it even help the project at all?
Working within a project, especially a large and complex one, other code can get in the way of what you are writing.
## By removing the distraction of unrelated code, you can focus on your task.
When experimenting with designs or figuring out a tricky issue, it's quicker and easier to focus on the smallest amount of code needed. You don't have to worry about passing arguments around, multiple software layers, and other complexities. You also won't need as many manual steps or automated tests to test your code.
Sometimes you will need the rest of the project code, but removing it can be a huge productivity boost when you don't.
## How does a scratch file help?
A scratch file allows you to write code that can be compiled, ran and debugged without the rest of your project or setting up a new one. You can copy the code to your project and delete the file when you finish. Scratch files also help avoid accidentally committing temporary code.
It's like a sticky note for the code you throw out when done with it.
## Create your first scratch file.
Scratch files can usually be created inside your IDE. The process is different between IDEs, so search for "your IDE name + scratch file" to get started. It's also helpful to read everything your IDE can do with scratch files when you look this up.
Once you create the file, start writing code.
**There are also options outside of IDEs for scratch files.**
You could use a fiddle site (like [jsfiddle](https://jsfiddle.net/)) for your language. Some languages and frameworks have tools for this, such as [artisan tinker](https://laravel.com/docs/8.x/artisan) for Laravel and [LinqPad](https://www.linqpad.net/) for C#. A default project for your language or framework could also serve as a scratch file if you need to work across multiple layers with no other custom code.
Learn about the scratch file options for your project and use what you prefer.
While not suitable for every situation, scratch files are a fantastic tool when focusing on a small amount of stand-alone code.
| kevinhickssw |
924,682 | How WebSocket protocol designs bidirectional messaging and implements in Go | WebSocket WebSocket is a mechanism for low-cost, full-duplex communication on Web, which... | 0 | 2021-12-13T04:59:20 | https://dev.to/hgsgtk/how-websocket-protocol-designs-bidirectional-messaging-and-implements-in-go-260f | webdev, websocket, go | ## WebSocket
WebSocket is a mechanism for low-cost, [full-duplex](https://en.wikipedia.org/wiki/Duplex_(telecommunications)#FULL-DUPLEX) communication on the Web, whose protocol was standardized as [RFC 6455](https://datatracker.ietf.org/doc/html/rfc6455).
The following diagram, taken from [Wikipedia](https://en.wikipedia.org/wiki/WebSocket), describes WebSocket communication between a client and a server.

"Handshake" is explained in the following articles.
- [Deep dive into WebSocket opening handshake protocol with Go](https://dev.to/hgsgtk/websocket-client-3gni)
- [Learn WebSocket handshake protocol with gorilla/websocket server](https://dev.to/hgsgtk/learn-websocket-handshake-protocol-with-gorillawebsocket-server-10k9)
- [How decided a value set in Sec-WebSocket-Key/Accept header](https://dev.to/hgsgtk/how-decided-a-value-set-in-sec-websocket-keyaccept-header-l79)
This post will focus on "Bidirectional messages".
## WebSocket server
Let's learn the WebSocket bidirectional messaging protocol with a specific Go implementation, which is available on a [GitHub repository](https://github.com/hgsgtk/go-snippets/blob/e756b7f76d58a23e17d150b77bec9bb798fb2db2/goproxy/wsproxy/server/main.go).
```go
package main
import (
"fmt"
"log"
"net/http"
"github.com/gorilla/websocket"
)
var upgrader = websocket.Upgrader{}
func main() {
p := 12345
log.Printf("Starting websocket echo server on port %d", p)
http.HandleFunc("/", echo)
if err := http.ListenAndServe(fmt.Sprintf(":%d", p), nil); err != nil {
log.Panicf("Error while starting to listen: %#v", err)
}
}
func echo(w http.ResponseWriter, r *http.Request) {
// Start handshaking
c, err := upgrader.Upgrade(w, r, nil)
if err != nil {
log.Printf("Upgrading error: %#v\n", err)
return
}
defer c.Close()
// End
log.Println("Success to handshake with client")
// Start bidirectional messages
for {
mt, message, err := c.ReadMessage()
if err != nil {
log.Printf("Reading error: %#v\n", err)
break
}
log.Printf("recv: message %q", message)
if err := c.WriteMessage(mt, message); err != nil {
log.Printf("Writing error: %#v\n", err)
break
}
}
// End bidirectional messages
}
```
## Open handshake
The first half of the code is the server implementation for "Handshake".
```go
// 1. Initializes parameters for upgrading an HTTP connection to a WebSocket connection.
var upgrader = websocket.Upgrader{}
// 2. Starts WebSocket server which waits client HTTP requests on port 12345
func main() {
p := 12345
log.Printf("Starting websocket echo server on port %d", p)
http.HandleFunc("/", echo)
if err := http.ListenAndServe(fmt.Sprintf(":%d", p), nil); err != nil {
log.Panicf("Error while starting to listen: %#v", err)
}
}
func echo(w http.ResponseWriter, r *http.Request) {
// 3. Start handshaking
c, err := upgrader.Upgrade(w, r, nil)
if err != nil {
log.Printf("Upgrading error: %#v\n", err)
return
}
defer c.Close()
// End
log.Println("Success to handshake with client")
// ...(Bidirectional messages part)
```
As noted in the comments in the code, to start up a WebSocket server, do the following:
1. Initialize parameters for upgrading an HTTP connection to a WebSocket connection.
2. Start a WebSocket server that waits for client HTTP requests on port 12345.
3. Perform the handshake.
When you run this program, it will start an HTTP server on port 12345.
```
$ go run server/main.go
2021/12/13 11:54:22 Starting websocket echo server on port 12345
```
You can check if the handshake is actually working by making a curl request (but it's not perfect).
```
$ curl -X GET http://localhost:12345 \
-H "Connection: upgrade" \
-H "Upgrade: websocket" \
-H "Sec-Websocket-version: 13" \
-H "Sec-Websocket-Key: 08kp54j1E3z4IfuM1m75tQ==" \
-H "Host: localhost:12345" \
-H "Origin: http://localhost:12345"
```
When you send it, you will see that the request was accepted in stdout.
```
$ go run main.go
(omit)...
2021/12/13 12:07:19 Success to handshake with client
```
See [Learn WebSocket handshake protocol with gorilla/websocket server](https://dev.to/hgsgtk/learn-websocket-handshake-protocol-with-gorillawebsocket-server-10k9) for more information.
## WebSocket connection
The key to understanding the message exchange is the return value of this line.
```go
c, err := upgrader.Upgrade(w, r, nil)
```
The variable `c` is a pointer to a [websocket.Conn](https://pkg.go.dev/github.com/gorilla/websocket#Conn) struct. The Conn type represents a WebSocket connection.
When the handshake is completed, you can start exchanging messages over the established WebSocket connection.
You can receive and send messages by calling the Conn's [ReadMessage](https://pkg.go.dev/github.com/gorilla/websocket#Conn.ReadMessage) and [WriteMessage](https://pkg.go.dev/github.com/gorilla/websocket#Conn.WriteMessage).
```go
// Start bidirectional messages
for {
mt, message, err := c.ReadMessage()
if err != nil {
log.Printf("Reading error: %#v\n", err)
break
}
log.Printf("recv: message %q", message)
// (omit...writing message part)
}
// End bidirectional messages
```
The `for` loop keeps the server waiting for further messages from the client.
## Data Framing
In the WebSocket protocol, data is transmitted using a sequence of frames. The wire format for the data transfer part is described in [ABNF](https://datatracker.ietf.org/doc/html/rfc5234) (Augmented BNF for Syntax Specifications). An overview of the framing is given in the following figure.

The server and client exchange data in accordance with this format.
## Read messages from the peer
[ReadMessage](https://pkg.go.dev/github.com/gorilla/websocket#Conn.ReadMessage) is a method that reads a message from the peer, interpreting the data according to the base framing protocol.
```go
mt, message, err := c.ReadMessage()
```
The implementation of [ReadMessage](https://github.com/gorilla/websocket/blob/v1.4.2/conn.go#L1062) is as follows:
```go
func (c *Conn) ReadMessage() (messageType int, p []byte, err error) {
var r io.Reader
messageType, r, err = c.NextReader()
if err != nil {
return messageType, nil, err
}
p, err = ioutil.ReadAll(r)
return messageType, p, err
}
```
The [Conn.NextReader](https://pkg.go.dev/github.com/gorilla/websocket#Conn.NextReader) function reads bytes received from the peer.
```go
messageType, r, err = c.NextReader()
```
## Opcode
The first return value is `messageType`, which represents the WebSocket opcode numbers defined in [RFC 6455 - 11.8. WebSocket Opcode Registry](https://datatracker.ietf.org/doc/html/rfc6455#section-11.8).
| Opcode | Meaning |
| -- | -- |
| 0 | Continuation Frame |
| 1 | Text Frame |
| 2 | Binary Frame |
| 8 | Connection Close Frame |
| 9 | Ping Frame |
| 10 | Pong Frame |
There are two kinds of opcode: those for exchanging data and those for controlling the WebSocket connection.
- For exchanging data
  - Text Frame
  - Binary Frame
  - Continuation Frame (a further fragment of a fragmented message)
- For controlling the WebSocket connection
  - Connection Close Frame
  - Ping Frame
  - Pong Frame
The data-exchange opcodes distinguish between text and binary messages, with continuation frames carrying the remaining fragments of a fragmented message. The protocol defines three control frames: close, ping, and pong.
## Conclusion
This article explained WebSocket bidirectional messaging using a gorilla/websocket server implementation.
Related articles are here.
- [Learn WebSocket handshake protocol with gorilla/websocket server](https://dev.to/hgsgtk/learn-websocket-handshake-protocol-with-gorillawebsocket-server-10k9)
- [Deep dive into WebSocket opening handshake protocol with Go](https://dev.to/hgsgtk/websocket-client-3gni) | hgsgtk |
924,754 | An introduction to React Hooks | Introduction Hooks are here to aid you if you don't like lessons. Hooks are methods that let you... | 0 | 2021-12-13T07:13:56 | https://dev.to/ashikarose/an-introduction-to-react-hooks-3f58 | **Introduction**
[Hooks](https://reactjs.org/docs/hooks-intro.html) are here to aid you if you don't like classes. Hooks are functions that let you leverage React's state and lifecycle capabilities without having to use classes. They allow you to hook into React state and lifecycle features from function components, as the name implies.
Hooks don't work inside classes and are backwards compatible, which means they don't introduce any breaking changes. So it's entirely up to you whether you want to use them or not. This new functionality allows you to leverage all of React's features from function components.
Make sure your hooks follow the standard rules of hooks. To ensure that hooks are called in the same sequence every time, call them at the top level of your React functions. Avoid calling them from inside loops, nested functions, or conditions. Also, make sure you're calling them from a React function component rather than a plain JavaScript function. If you don't follow these rules, you can end up with some unusual behaviours. To ensure that these rules are applied automatically, React includes a linter plugin.
To use hooks, you don't need to install anything. They're included with React starting with version 16.8.
To set up the React development environment (you'll need the newest version of Node.js to run npx), follow these steps:
```sh
npx create-react-app exploring-hooks
```
React hooks have rendered render props and HOCs (Higher-Order Components) practically unnecessary, making sharing stateful code much more pleasant.
React comes with a number of built-in hooks. The most important are useState and useEffect.
**Need for React Hooks**
To reuse stateful logic: The render props pattern and higher-order components (HOC) are two advanced React ideas, because React itself offers no primitive for reusing stateful component logic. Although the HOC and render props patterns can work around this, they make the code base harder to understand because components end up wrapped in a slew of others just to share functionality. Hooks allow us to share stateful logic in a much better and cleaner way, without altering the component hierarchy.
The 'this' keyword: The keyword 'this' causes confusion for two reasons. The first has nothing to do with React and everything to do with JavaScript: to work with classes, we must first understand how the JavaScript 'this' keyword works, which is rather different from other languages. Although the principles of props, state, and unidirectional data flow are simple to grasp, using the 'this' keyword when implementing class components can be challenging. Event handlers must also be bound to the class components. Classes have also been found not to minify efficiently, leading to unreliable hot reloading, which can be remedied with Hooks.
Simplifying challenging scenarios: When designing components for complicated scenarios like data fetching and event subscription, all relevant code is likely to be scattered throughout multiple life cycle methods.
**Effect Hook**
We use the [Effect hook](https://reactjs.org/docs/hooks-effect.html), i.e. useEffect, in React function components to perform side-effect tasks like modifying the DOM, fetching data, and so on. These actions are called side effects because they can affect other components and cannot be performed during rendering. The useEffect hook covers the roles of componentDidMount, componentDidUpdate, and componentWillUnmount. useEffect is declared inside the component because it needs access to its state and props. Depending on how we configure it, useEffect can run after the first render only, after every render, or specify clean-up work. By default, effects run after every render, including the first one.
```jsx
import React, { useState, useEffect } from 'react';

function UsingEffect() {
  const [count, setCount] = useState(0);

  /* default behaviour is similar to componentDidMount and componentDidUpdate: */
  useEffect(() => {
    // Change the document title after every render
    document.title = `Clicked ${count} times`;
  });

  return (
    <div>
      <p>Clicked count = {count}</p>
      <button onClick={() => setCount(count + 1)}>
        Button
      </button>
    </div>
  );
}
```
The document title is updated every time the count is updated in the example above. Because it runs at the first render and after every update, useEffect works similarly to componentDidMount and componentDidUpdate combined. This is useEffect's default behaviour, although it can be altered. Let's have a look at how.
If you want useEffect to behave like componentDidMount, you can pass an empty array [] as the second argument to useEffect, as in the example below:
```jsx
import React, { useState, useEffect } from 'react';

function UsingEffect() {
  const [count, setCount] = useState(0);

  // Similar to componentDidMount: the empty dependency array
  // means the effect runs only once, when the component mounts
  useEffect(() => {
    document.title = `You clicked ${count} times`;
  }, []);

  return (
    <div>
      <p>Your click count is {count}</p>
      <button onClick={() => setCount(count + 1)}>
        Button
      </button>
    </div>
  );
}
```
**Hooks State**
The useState() hook is used to define, update, and use state in function components. It returns a pair of values: the current state and a function to update that state. This is not the same as setState in a class: unlike setState, the updater function replaces the state rather than merging the old and new states. useState() is called at the top level of a function component, and the returned updater can then be used anywhere, for example in an event handler. The initial state is supplied as an argument to useState() and is used on the first render. There are no restrictions on the number of state hooks that can be used in a single component; we can use as many as we wish.
```jsx
import React, { useState } from 'react';

function UsingStateHook() {
  // use useState() to declare the state variable count
  const [count, setCount] = useState(0);

  return (
    <div>
      {/* display the count value */}
      <p>Your click count is {count}</p>
      {/* on click, update the count value using setCount */}
      <button onClick={() => setCount(count + 1)}>
        Button
      </button>
    </div>
  );
}
```
In the above example, we've included a counter that keeps track of how many times the button has been pressed. In this case, useState(0) sets the count to '0' and returns the setCount function, which can be used to update the count. In the onClick event handler, we call setCount with the value 'count+1'. Here, count is the previous count value; for example, if the count was two before, setCount will increase it by one to three.
[**Custom Hooks**](https://reactjs.org/docs/hooks-custom.html)
React also allows us to create our own hooks. A custom hook is a JavaScript function whose name starts with 'use' and that can call other hooks. We normally develop custom hooks when we have stateful logic that can be reused across our project. We must first define them, possibly using other hooks inside, and then we can call them like any other hook.
```jsx
// useFriendStatus is the custom hook being shared here; its definition
// (subscribing to a friend's online status) is assumed elsewhere
function FriendStatus(props) {
  const isOnline = useFriendStatus(props.friend.id);

  if (isOnline === null) {
    return 'Loading...';
  }
  return isOnline ? 'Online' : 'Offline';
}

function FriendListItem(props) {
  const isOnline = useFriendStatus(props.friend.id);

  return (
    <li style={{ color: isOnline ? 'green' : 'black' }}>
      {props.friend.name}
    </li>
  );
}
```
**Wrapping Up**
Hooks allow us to leverage features such as lifecycle behaviour without having to write classes. Get assistance from the best [react development agency](https://www.cronj.com/react/react-development-agency). Hooks come in a variety of shapes and sizes, each with its own set of functions. The useState and useEffect hooks are the most popular among them.
| ashikarose | |
924,767 | Go Channel Patterns - Wait For Task | To improve my Go Programming skills and become a better Go engineer, I have recently purchased an... | 15,904 | 2021-12-13T07:39:41 | https://dev.to/b0r/go-concurrency-patterns-wait-for-task-34de | go, programming | ---
series: [Go Channel Patterns]
---
To improve my Go Programming skills and become a better Go engineer, I have recently purchased an excellent on-demand education from [Ardan Labs](https://www.ardanlabs.com/education/). Materials are created by an expert Go engineer, [Bill Kennedy](https://twitter.com/goinggodotnet).
{% twitter 1443663502808395780 %}
I have decided to record my process of learning how to write more idiomatic code, following Go best practices and design philosophies.
This series of posts will describe channel patterns used for orchestration and signaling between goroutines in Go.
## Wait For Task Pattern
The main idea behind **Wait For Task** pattern is to have:
- a channel that provides a signaling semantics
- a goroutine that **waits for a task** so it can do some work
- a goroutine that sends work to the previous goroutine
### Example
In this example we have an `employee` (a goroutine) that doesn't know immediately what to do. The `employee` waits for the `manager` to give them some work to do.
Once the `manager` finds some work for the `employee`, it notifies the `employee` by sending a signal (`paper`) via the communication channel `ch`.
Feel free to try the example on [Go Playground](https://play.golang.com/p/WUrCw1C8d7C)
```go
package main
import (
"fmt"
"math/rand"
"time"
)
func main() {
// make channel of type string which provides signaling semantics
// unbuffered channel provides a guarantee that the
// signal being sent is received
ch := make(chan string)
// goroutine that waits for some work => the employee
go func() {
// employee waits for signal that it has some work to do
p := <-ch
fmt.Println("employee : received signal : ", p)
}()
// simulate the idea of unknown latency (do not use in production)
// e.g. manager is thinking what work to pass to the employee
time.Sleep(time.Duration(rand.Intn(500)) * time.Millisecond)
// when work is ready, send a signal from the manager to the employee
// the sender (manager) has a guarantee that the receiver (employee)
// has received the signal
ch <- "paper"
fmt.Println("manager : sent signal")
time.Sleep(time.Second)
}
```
### Result (1st execution)
```sh
go run main.go
manager : sent signal
employee : received signal : paper
```
## Conclusion
In this article, the wait-for-task channel pattern was described, and a simple implementation was provided.
Readers are encouraged to check out excellent [Ardan Labs](https://www.ardanlabs.com/education/) education materials to learn more.
#### Note
Depending on the country you are coming from, [Ardan Labs](https://www.ardanlabs.com/education/) education might be a little bit expensive. In that case you can always contact them and they will provide you a link to their scholarship form.
Resources:
1. [Ardan Labs](https://www.ardanlabs.com/)
2. [Cover image by Igor Mashkov from Pexels](https://www.pexels.com/photo/radio-telescope-against-sky-with-stars-6325001/)
| b0r |
955,846 | Pre-load Angular Modules🥳 | Preloading 1. In preloading, feature modules are loaded in background asynchronously. In... | 0 | 2022-01-15T09:03:29 | https://dev.to/krishna7852/pre-load-angular-modules-5hg | angular, typescript, javascript, performance | ## Preloading
**1.** With preloading, feature modules are loaded in the background asynchronously. Modules configured for preloading start loading just after the application starts.
**2.** When we hit the application, first **AppModule** and the modules it imports are loaded eagerly. Just after that, the modules configured for preloading are loaded asynchronously.
**3.** Preloading is useful for features that the user is highly likely to visit just after the application loads.
**4.** To configure preloading, Angular provides the `preloadingStrategy` property, which is used with `RouterModule.forRoot` in the routing module. Find the code snippet below.
```
@NgModule({
imports: [
RouterModule.forRoot(routes,
{
preloadingStrategy: PreloadAllModules
})
],
------
})
export class AppRoutingModule { }
```
**5.** To configure preloading of feature modules, we first configure them for lazy loading and then, using Angular's built-in **PreloadAllModules** strategy, have all lazy-loaded modules preloaded.
**6.** Using the PreloadAllModules strategy, all modules configured with the `loadChildren` property will be preloaded. A module configured with `loadChildren` will be either lazily loaded or preloaded, but not both. To preload only selected modules, we need a custom preloading strategy.
**7.** Once we configure the `PreloadAllModules` strategy, then after loading the eager modules, Angular searches for modules applicable for preloading, i.e. those configured with `loadChildren`. We must take care that these feature modules are not imported in the application module, i.e. `AppModule`.
**8.** We can also create a custom preloading strategy. For this, we create a service that implements Angular's `PreloadingStrategy` interface, override its preload method, and then configure this service with the `preloadingStrategy` property in the routing module. To select a module for custom preloading we use the data property in the route configuration, e.g. `data: { preload: true }` for selective feature-module preloading.
Here we will use the built-in preloading strategy, i.e. the `PreloadAllModules` strategy. Find the example below.
Module and routing module for feature 1:
**country.module.ts**
```
import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';
import { ReactiveFormsModule } from '@angular/forms';
import { CountryComponent } from './country.component';
import { CountryListComponent } from './country-list/country.list.component';
import { CountryService } from './services/country.service';
import { CountryRoutingModule } from './country-routing.module';
@NgModule({
imports: [
CommonModule,
ReactiveFormsModule,
CountryRoutingModule
],
declarations: [
CountryComponent,
CountryListComponent
],
providers: [ CountryService ]
})
export class CountryModule {
constructor() {
console.log('CountryModule loaded.');
}
}
```
**country-routing.module.ts**
```
import { NgModule } from '@angular/core';
import { RouterModule, Routes } from '@angular/router';
import { CountryComponent } from './country.component';
import { CountryListComponent } from './country-list/country.list.component';
const countryRoutes: Routes = [
{
path: '',
component: CountryComponent,
children: [
{
path: 'country-list',
component: CountryListComponent
}
]
}
];
@NgModule({
imports: [ RouterModule.forChild(countryRoutes) ],
exports: [ RouterModule ]
})
export class CountryRoutingModule { }
```
Module and routing module for feature 2:
**person.module.ts**
```
import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';
import { ReactiveFormsModule } from '@angular/forms';
import { PersonComponent } from './person.component';
import { PersonListComponent } from './person-list/person.list.component';
import { PersonService } from './services/person.service';
import { PersonRoutingModule } from './person-routing.module';
@NgModule({
imports: [
CommonModule,
ReactiveFormsModule,
PersonRoutingModule
],
declarations: [
PersonComponent,
PersonListComponent
],
providers: [ PersonService ]
})
export class PersonModule {
constructor() {
console.log('PersonModule loaded.');
}
}
```
**person-routing.module.ts**
```
import { NgModule } from '@angular/core';
import { RouterModule, Routes } from '@angular/router';
import { PersonComponent } from './person.component';
import { PersonListComponent } from './person-list/person.list.component';
const personRoutes: Routes = [
{
path: '',
component: PersonComponent,
children: [
{
path: 'person-list',
component: PersonListComponent
}
]
}
];
@NgModule({
imports: [ RouterModule.forChild(personRoutes) ],
exports: [ RouterModule ]
})
export class PersonRoutingModule { }
```
Now find the `AppModule` and `AppRoutingModule`.
**app.module.ts**
```
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { AppComponent } from './app.component';
import { AddressComponent } from './address.component';
import { PageNotFoundComponent } from './page-not-found.component';
import { AppRoutingModule } from './app-routing.module';
@NgModule({
imports: [
BrowserModule,
AppRoutingModule
],
declarations: [
AppComponent,
AddressComponent,
PageNotFoundComponent
],
providers: [ ],
bootstrap: [ AppComponent ]
})
export class AppModule {
constructor() {
console.log('AppModule loaded.');
}
}
```
**app-routing.module.ts**
```
import { NgModule } from '@angular/core';
import { RouterModule, Routes } from '@angular/router';
import { PreloadAllModules } from '@angular/router';
import { AddressComponent } from './address.component';
import { PageNotFoundComponent } from './page-not-found.component';
const routes: Routes = [
{
path: 'country',
loadChildren: () => import('./country/country.module').then(mod => mod.CountryModule)
},
{
path: 'person',
loadChildren: () => import('./person/person.module').then(mod => mod.PersonModule)
},
{
path: 'address',
component: AddressComponent
},
{
path: '',
redirectTo: '',
pathMatch: 'full'
},
{
path: '**',
component: PageNotFoundComponent
}
];
@NgModule({
imports: [
RouterModule.forRoot(routes,
{
preloadingStrategy: PreloadAllModules
})
],
exports: [
RouterModule
]
})
export class AppRoutingModule { }
```
We can see in the `AppRoutingModule` that we are using the PreloadAllModules strategy for preloading. The modules configured with `loadChildren`, i.e. `CountryModule` and `PersonModule`, will be preloaded.
**Output**
When we hit the application for the first time, we can see the following logs in the browser console.
```
AppModule loaded.
Angular is running in the development mode. Call enableProdMode() to enable the production mode.
CountryModule loaded.
PersonModule loaded.
```
We can observe that the application started after loading `AppModule`, and then preloaded `CountryModule` and `PersonModule`.
Check out the main blog: [Angular Module Loading strategies](https://dev.to/krishna7852/angular-module-loading-eager-lazy-and-preloading-3jj4) | krishna7852 |
924,801 | What Do You Look For In A Crypto Marketing Company? | A crypto marketing company has the ability to promote any sort of token-based campaign that aims to... | 0 | 2021-12-13T09:02:04 | https://dev.to/cryptopromoters/what-do-you-look-for-in-a-crypto-marketing-company-3686 | cryptomarketingcompany, cryptocurrency, cryptomarketing | A **[crypto marketing company](https://cryptopromoters.org/)** has the ability to promote any sort of token-based campaign that aims to raise money. With such a service provider, it is possible for you to have perfect operations for your crowdfunding project. Also, you are to present your idea to a large number of investors.
Through such a service provider, you are able to get success in every phase of the fund-raising program and get the right amount of capital easily. Since there are so many inclusions in crypto marketing services, it is a must that it is done by professionals.
## Why should you choose dedicated Crypto Marketing Solutions?
You might think that you could promote your ICO, IEO, IDO, or STO with a digital marketing agency, but that would be a big mistake. That's because there are so many things that make the crypto domain different from other industries.
Also, you get to bring more efficacy in very little time and take your project to the apex of its domain. When that happens in the incipient phase of your business, it gets ready for so many other challenges in the future. Moreover, you get a brilliant start and your startup gets much more than you can expect.
Every single feature or USP of your project gets highlighted in the most appropriate manner. Also, you create a big buzz around your enterprise and keep things going at a very progressive rate as well. When you are working with professionals in this domain, it is very easy to give more focus to the toughest challenges in the marketing process.
Not only that, it helps you overcome some fundamental problems very easily. Once you start to think about the strategy, you get prolific knowledge about the whole process. At the time of planning and executing the whole campaign, you get more certain about the whole thing.
## How does Crypto Marketing zero in on the investors?
This inclusive practice works in the most efficacious manner and gives you the best outcomes in very little time. The overall advantages begin to soar and you gain more certainty as well. From transactions to fintech operations, it can get everything streamlined.
Through the inclusive nature of this marketing model, it becomes possible for businesses to have a productive regime in raising funds. While doing that, they can also maintain thoroughly cost-effective spending. This gives them certainty and overall efficacy in operations as well.
There are so many points that you can take care of plans and give you success without any tradeoffs. While giving you perfect results, you also get to bring more possibilities of success. The overarching nature of this marketing strategy gives you better forwarding steps with ease.
## How to plan and execute such a campaign with the help of professionals?
Well, if you are already working with a professional, then you don’t have to worry about doing anything on your own. However, as a business owner, you need to be aware of the activities and challenges existing in this job. Also, you need to be sure about what kind of tasks you need in the campaign.
To get that level of knowledge, you need to indulge in a little research and development. Also, you need to be more productive in your approach so there are absolutely no problems in executing them. At the same time, you also need to be thoroughly sure about the target audience.
In order to gain efficacy and proficiency, you need to be sure about the overall disposition of the campaign. You also need to be certain about the whole ecosystem of the market so you have no issues in identifying the opportunities as well as challenges.
Through an inclusive approach, it is totally possible to make the campaign rewarding for the budding business. At the same time, you need to be certain about the possible outcomes of the methods being used. You need to be definitive and experimental at the same time.
> Must Read:- **[Role of Influencer Marketing in Crypto](https://www.cryptopromoters.org/blog/influencer-marketing/)**
Once you maintain that kind of scalability, it is easier for you to bring more chances of success. The same approach helps you run the whole campaign with transparency and utmost clarity. Also, you will be able to get more surety in targeting the potential customers for the long term.
## Why should you choose Crypto Promoters as your crypto marketing agency?
Crypto Promoters helps you run a seamless campaign that results in a booming venture. We make it possible with a research-based marketing disposition and makes the whole thing more inclusive. At the same time, we give you insights into the crypto industry and make your project alluring for every investor.
If you have a token-based project and you want to do fund-raising successfully, associate with us right now!
> Telegram: https://t.me/CRYPTO_SHILLER
> Twitter: https://twitter.com/CryptoPromoterG
| cryptopromoters |
924,806 | 🧭How to create a content explorer view in Microsoft Lists | 🎬NEW VIDEO N. 154🎬 In this video tutorial, you’ll learn how to create a content explorer view in... | 0 | 2021-12-13T09:16:14 | https://dev.to/giulianodeluca/how-to-create-a-content-explorer-view-in-microsoft-lists-4418 | microsoftlists, lists, sharepoint, giulianodeluca | ---
title: 🧭How to create a content explorer view in Microsoft Lists
published: true
description:
tags: MicrosoftLists, Lists, SharePoint, GiulianoDeLuca
cover_image: https://raw.githubusercontent.com/giuleon/MicrosoftListsContentExplorer/master/Thumbnail.png
---
🎬NEW VIDEO N. 154🎬
In this video tutorial, you’ll learn how to create a content explorer view in Microsoft Lists.
I’ll walk you through every step of creating dynamic list filtering, so you have a content navigator that displays the content in detail according to the button clicked; this way the list will be more readable.
Preview:

🔗Check out my demo:
https://github.com/giuleon/MicrosoftListsContentExplorer
{% youtube fWYntPmwCac %} | giulianodeluca |
924,834 | If you're uncomfortable with RxJS, this article is for you! | RxJS patterns are popular nowadays as data management and manipulation in a web app are crucial for... | 0 | 2021-12-13T10:10:14 | https://dev.to/famzil/if-youre-uncomfortable-with-rxjs-this-article-is-for-you-1c9m | programming, webdev, javascript, react | RxJS patterns are popular nowadays as data management and manipulation in a web app are crucial for every app.
Who didn’t retrieve data from an API or whatever the source is in their app?
This article covers:
- The context and data
- RxJS and why is it used?
- When to use it?
This article successfully made my junior mentees more comfortable with RxJS; they even started implementing solutions with it. That's why I'm sharing it with the dev.to community.
I hope you'll find it useful…
[RxJS like you've never seen it!](https://javascript.plainenglish.io/if-the-word-rxjs-makes-you-uncomfortable-this-article-is-for-you-b27780240b28)

My today's story ends here, I hope you enjoyed it and learned from it ❤
---
Dear reader, thank you for existing in my life. Let's get in touch on [Medium](https://medium.com/@famzil/), [LinkedIn](https://www.linkedin.com/in/fatima-amzil-9031ba95/), [Instagram](https://www.instagram.com/the_frontend_world/), [YouTube](https://www.youtube.com/channel/UCaxr-f9r6P1u7Y7SKFHi12g), or [Twitter](https://twitter.com/FatimaAMZIL9).
See my [e-book](http://www.fam-front.com/) about web essentials and general culture. | famzil |
924,952 | 2 in one windows in Java Swing | How can I connect the window with camera (Right window) in Main window (Left window) so that they are... | 0 | 2021-12-13T10:54:37 | https://dev.to/namisrn/2-in-one-windows-in-java-swing-2hn6 |
How can I connect the window with camera (Right window) in Main window (Left window) so that they are in one window?
 | namisrn | |
924,974 | Journey of a dyslexic programmer ? What is it like ? | Ever since I was a little kid I always knew that there was an issue with my reading, writing and... | 0 | 2021-12-14T13:13:26 | https://dev.to/aravindasiva/journey-of-a-dyslexic-programmer-what-is-it-like--2bgf | career, dyslexia, productivity, computerscience |
Ever since I was a little kid I always knew that there was an issue with my reading, writing and spelling, but never knew what it was. I always thought I was one of those different kids.
As I grew up in India, schools didn't really have anything special for dyslexic kids. Well, it wasn't really the schools' fault, since most folks in the community had no idea what a learning disorder, or any type of disorder, was.
I was called names "trouble maker", "he is trouble", "Lazy", etc., (names are translated to English XD). As a kid it didn't bother me, rather I thought it was cool for some reason.
During my primary and high school years, there were at least two parent-teacher meetings a year, where your parent and your teacher discussed you and your half-yearly performance with a report card.

Every year at least one of the teachers said the same phrase to my parents:
> **"I know he is very smart. But seems like he doesn't want to write and show that on paper."**

By time hearing this phrase from teachers started to annoy me as I wasn't doing that on purpose. Started to wonder and ask myself **_"Why me?"_**
This phrase started when I was in pre-school, where I didn't want to recite a poem or a rhyme (there was a subject called "Recitation"). I stood in front of the whole class and didn't spill a word out of my mouth. So the teacher called my mum and said, "He did not utter a single word from his mouth; regardless, I am going to pass him because I know he knows the poem/rhyme". I have no recollection of this incident whatsoever since I was 5 years old then. But hey, thanks to my mum.

If you are from an Indian middle class family, You already know that excelling in academics is important for your family, family friends, relatives, your father's friends, your dog, your dog's friends, your dead fish, you get the point XD.
So, as a kid, you are under constant pressure to get good marks. So you get into a good school, good university, good job, good life, etc., the race never ends.
I struggled with letters, numbers, left and right. I still find myself using fingers to count things, confusing left and right. The normal education system was not quite enough for me. I started to use my own ways to cope with academics and I found myself doing a little bit better in exams even though it did not reflect on my report card, I found it easier than before since I adapted my own strategies to learn and write.
Since everything is an image in my head I was drawn towards solving problems, puzzles and arts and patterns. On the other hand, I cannot have an image for numbers or words in my head which didn't make sense.
> For example: You say **"John Doe has an apple in his hand"**,
here I can picture John Doe, _if I already knew him_, I can picture an apple, and I can picture a human hand, and when you put everything together as a picture you have a dashing John Doe in a white shirt and blue jeans with a red apple in his hand. Can you see it?

I used to play loads of games every day in my teenage years, and you know how parents feel about their son playing violent games filled with killing, more killing, sex. I wouldn't be surprised if they thought I was the spawn of some underworld demon. So my parents were against extensive gaming. So, I went out at 9 am to play games in a game center (remember those places? How time flies) and came back home at 9 pm (I had my meals in the middle, stop judging me now XD). And yes, I lied about where I was going.
> **_Now you know mum and dad, if you are reading this. Sorry 🤔?_**
Most games have this classic chase scene where you chase the bad guy with an NPC, or get chased by the bad guy, and the NPC sitting with you in the car or on a horse starts giving you directions like 'take a right now', 'take a hard left'. Ahhhhh... and I am so confident I take the wrong right, and yeah, I had to replay the whole freaking mission because I took a right instead of a left. I have tried to kill the NPC character, but lucky him, he can't take damage.

> "I am so good at eating, So I must be a good chef"
Yeah, That's my thought process and decided to make games with game engines. Well in my defense I made some pretty decent games.
Also, learned a valuable lesson to stick with just what I am good at.
So, I decided to join computer science and explore the world of computers and the internet. I had a huge rush when I created my first "Hello world" in my university's computer science lab. I just wanted to know more about how the computer does these things. I had so many questions. I must thank those handful of teachers who believed in me even though I was a lot of trouble 😇.

At the same time, it was difficult for me to type code in a basic text editor like Notepad++ or something similar, where I didn't have syntax highlighting, auto-complete, peek definitions, type checking and all those goodies modern IDEs provide today. Or at least I didn't know about them.

Even with the modern IDEs available today with all their plugins, I was able to develop something good, but I found myself easily getting lost. So I started creating a mind map for my projects (a mind map in my actual mind), so every time I see a block of code or a function I can visualize which box it belongs to and how it is connected with the other boxes. This greatly helped me with keeping track of things and remembering them. I also started writing notes and drawings of what's in my head, so I can go back and look at them.

Today I do web development, APIs, and even 3D model integration. How time flies!! I have decided to try and make mistakes, adapt from my mistakes, and make new ones. Dyslexia doesn't mean you cannot code. I think dyslexia is rather helpful for visualizing your project, which makes it better. Coding just becomes a design in your head; in fact, dyslexia was helping it.
Nowadays Modern code editors lets you bring your own font, Code editor theme matters too, Play with it until you find the right one for you.
Use plugins to visualize your indentation, I use `indent-rainbow` in VSCode
Getting lost with **brackets** ? Use `Bracket pair colorizer`
With or without dyslexia, or whatever level you are on, Take a step and make mistakes. Dyslexic? Well, it will help you a lot.
<br/>
If you have made it to the end of the post I took too long to write, Congratz 🎉🎉
<br/>
Never in my life thought I would write such a long post. Kindly avoid my mistakes in this post 😅 Can't read such a big post again 😅 But, Hey! Well done to you 🥳
<br/>
This is my first long blog post and thank you for your time reading it ✌🏼
Your comments are very welcome here. | aravindasiva |
925,132 | Launching the M.V.P. | Devshub Coming to the point. I am launching a MVP called Devshub. Basically, I am building... | 0 | 2021-12-13T14:14:55 | https://dev.to/tazim404/launching-the-mvp-797 | javascript, programming, webdev, showdev |
# [Devshub](https://devshub.netlify.app/)
Coming to the point: I am launching an MVP called [Devshub](https://devshub.netlify.app).

Basically, I am building a social network for developers to share their projects and view what other developers have built.

Currently, it is a small MVP running on a free tier. Slowly and gradually, as it grows, I will keep introducing many features.
### Some features that I am focusing on:
- Messaging feature
- Timeline feature
- Analytics dashboard feature
- Badging feature
- ...and many more amazing features.
So I want developers like you to join and help me with this, and we can make this a big thing for the developer community.

If you find any bug or problem, there is a report section on the site; or, if you want to work together, mail me at [Tazim Mail](mailto:rahbartazim@gmail.com)

It's time to look forward.
Thank you
| tazim404 |
925,333 | 3 Reason Why Javascript Should be your First Language | There are many languages to choose as your first language, i.e. Python, JavaScript, Go, and Ruby. All... | 0 | 2021-12-13T16:32:28 | https://dev.to/kinjiru09/3-reason-why-javascript-should-be-your-first-language-22jo | javascript, node, programming, beginners | There are many languages to choose as your first language, i.e. Python, JavaScript, Go, and Ruby. All these languages have been heralded as easy languages to learn. There are good reasons to call these languages “easy”.
1. They are easy to set up and relatively easy to start creating projects.
2. They have easy syntax and concepts that a beginner would find easy to understand right away.
3. They have vibrant communities around them.
4. There are a ton of libraries to help you build robust programs.
5. There are many tutorials, books, videos, courses and other resources to learn these languages.
Another reason to learn one of these languages is there are many companies looking for developers who know these languages and they are willing to pay a decent wage. For example, [the average JavaScript developer with experience can make over $100,000.](https://www.ziprecruiter.com/Salaries/Javascript-Developer-Salary)
But which one of these languages should be your first language?
There are three reasons why Javascript should be your first language.
## 1) Language
The actual JavaScript language, ignoring the platform (i.e. browser or Nodejs), has been a matter of controversy for years. Many people love it, others hate it. It is a widely used language with a long history. But the language has gone through so many revisions that now we are stuck in this hybrid state, where some developers write JavaScript one way and other developers write it another way. This can be very intimidating and downright annoying at times. But in reality, it created a world where a developer can learn how to code in different mindsets.

JavaScript is a multi-paradigm scripting language. It supports object-oriented, imperative, and functional programming styles. Even though it’s a dynamic language, you can use strongly typed languages that are built on top of JavaScript, like TypeScript.
The flexibility of the language allows you to write in all these different paradigms. Mastering these paradigms can benefit you later on when you want to learn another language that emphasizes one of these paradigms, i.e. an object oriented language.
## 2) Platform
JavaScript started in the web browser. It’s one of the core technologies of the web. Learning JavaScript allows you to understand how the web works. The amazing thing about Javascript is you literally have access to thousands and thousands of websites at your fingertips. That means you have access to all these websites’ Javascript code. You can read other people’s code, learn, and practice.
Getting started with JavaScript is relatively easy. Open a browser and start playing with code in the console, or open up any text editor and start writing some code inside of html tags and then open the file in your browser, no setup required.
Now, JavaScript engines are common components of both server-side website deployments and non-browser applications. With the creation of Nodejs, React Native, Cordova, Electron and other application frameworks, you can build mobile applications, desktop applications, games and server side applications and services.
JavaScript has even appeared in some embedded systems.
> To be honest though, depending on your requirements and needs, JavaScript may not always be the best solution for non website apps.
The fact that you can learn JavaScript and transfer that knowledge to a different platform is very powerful and a great incentive for learning the language.
Even though many languages, like Python, can be used across different platforms, Javascript still dominates the web.
## 3) Concepts & Design Patterns
The third reason why you should learn JavaScript is that there are concepts and design patterns that are openly exposed to you while learning the language on different platforms.
For example, if you write server side JavaScript using Nodejs, you are introduced to principles and design patterns that are at the core of the Nodejs ecosystem. Among others, you will learn the following concepts and design patterns:
1. **Modules**
2. **Event Loop**
3. **Callbacks**
4. **Event Emitter**
**1) Modules**
The concept of modules teaches you how to structure your code in small well defined components. Each module should focus on one thing and it should do it well. This helps you keep your code simple and understandable. This also helps with testing. This concept is seen throughout all of Nodejs APIs. This is good training for new developers.
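The idea can be illustrated with a tiny sketch; here a factory function stands in for the file boundary (in real CommonJS each module lives in its own file and assigns its surface to `module.exports`), and the names are made up for the example:

```javascript
// A "module" as a closure: small, focused, and exposing only a
// well-defined surface while its internal state stays private.
function createCounterModule() {
  let count = 0; // private state, invisible to the outside

  return {
    increment() { count += 1; return count; },
    current() { return count; },
  };
}

const counter = createCounterModule();
counter.increment();
counter.increment();
console.log(counter.current()); // 2
```

Because `count` is only reachable through the two exposed functions, the module is easy to reason about and to test in isolation.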
**2) Event Loop**
When you learn about Nodejs' asynchronous nature and its Event Loop, you are actually seeing the reactor pattern in use. This design pattern is an event-handling pattern: each I/O operation/event (file access, network operation, etc.) is associated with a handler (see callbacks below). When an operation is done, its result is passed to the handler and the handler is invoked. The event loop handles all of this.
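The reactor pattern can be sketched in a few lines of plain JavaScript. This is a deliberately simplified, synchronous model with made-up names — real Nodejs demultiplexes operating-system-level I/O events — but it shows the core idea: events are queued, and a loop dispatches each result to its registered handler.

```javascript
// A toy reactor: handlers are registered per event name, completed
// operations are queued, and a loop dispatches results to handlers.
function createReactor() {
  const handlers = new Map();
  const queue = [];

  return {
    on(event, handler) { handlers.set(event, handler); },
    emit(event, result) { queue.push({ event, result }); },
    // The "event loop": drain the queue in order, invoking each handler.
    run() {
      while (queue.length > 0) {
        const { event, result } = queue.shift();
        const handler = handlers.get(event);
        if (handler) handler(result);
      }
    },
  };
}

const reactor = createReactor();
const log = [];
reactor.on('file-read', (data) => log.push(`read: ${data}`));
reactor.on('net-reply', (data) => log.push(`net: ${data}`));
reactor.emit('file-read', 'hello');
reactor.emit('net-reply', 'pong');
reactor.run();
// log is now ['read: hello', 'net: pong']
```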
**3) Callbacks**
Because of Nodejs' asynchronous nature, it uses a unique design pattern at its core called the callback pattern. When an operation is done, it sends the result to another function, the callback.

This pattern has pros and cons. But you are exposed to another design pattern, so it is still good for new developers to see how design patterns are used.
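A minimal sketch of the error-first callback convention used throughout Nodejs: the first argument is reserved for an error (or `null`), and the result follows. `divide` here is a made-up example, kept synchronous for simplicity; real Nodejs APIs invoke the callback after an asynchronous I/O operation completes.

```javascript
// Error-first callback: callback(error, result)
function divide(a, b, callback) {
  if (b === 0) {
    callback(new Error('division by zero'));
    return;
  }
  callback(null, a / b);
}

const results = [];
divide(10, 2, (err, result) => results.push(err ? err.message : result));
divide(1, 0, (err, result) => results.push(err ? err.message : result));
console.log(results); // [ 5, 'division by zero' ]
```

The caller always checks the error argument first; this consistent shape is what lets patterns like `util.promisify` exist.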
**4) Event Emitter**
The event emitter class is at the core of Nodejs. This shows the observer pattern in use: an object can notify listeners when its state changes, i.e. when a button is clicked, when a user inputs text in a text box, etc. This is a common design pattern used in many programming frameworks and platforms.

Seeing how it is used in a production-grade framework like Nodejs is good training for new developers.
> Every platform has its own pros and cons and design principles. These are just a few I wanted to mention that are associated with Nodejs.
## TL;DR
JavaScript is used in many places, and it can be fun to write and use in personal as well as professional projects. After learning JavaScript you can definitely find a developer job to gain experience. Then you can learn other languages to advance your career. | kinjiru09 |
925,344 | Part 2/2 - Game in ReactJS - Cuzzle | In these articles I would like to share with you the process of development of the latest game that I... | 15,995 | 2021-12-21T15:09:58 | https://dev.to/jorger/part-22-game-in-reactjs-cuzzle-1662 | react, javascript, css, pwa | In these articles I would like to share with you the process of development of the latest game that I developed called Cuzzle _**(cube + puzzle = Cuzzle)**_ ReactJS, this game was inspired by the original game called [cuzzle](https://apps.apple.com/us/app/cuzzle/id1185407542) developed by [Redline Games](https://www.redline.games/games)
In the [first part](https://dev.to/jorger/part-12-game-in-reactjs-cuzzle-403a), we discuss the game and its options, in this part we are going to talk about the technical aspects.
**You can play the game online here: [https://cuzzle-react.vercel.app/](https://cuzzle-react.vercel.app/)**
## 1. Stack.
As you know the game is developed in [ReactJS](https://reactjs.org/), other main libraries that I used were:
* **[Create React App (CRA)](https://create-react-app.dev/):** This is a library that I have used previously for other games/projects. It's an easy starting point for a React project because we have the whole environment configured; for example, we have [webpack](https://webpack.js.org/), hot reloading, [service workers](https://developers.google.com/web/fundamentals/primers/service-workers) (via [workbox](https://developers.google.com/web/tools/workbox)) and other features ready to be used. For this type of project I think it's the best way to learn React.

* **[React Spring](https://react-spring.io/):** Another library in my toolbox of favorites. It allows us to create animations; in my case I use it for the cubes' movements. I like how we can use a hook to indicate movement; for this project I used the [useSpring](https://react-spring.io/hooks/use-spring) hook, in which we can indicate the type of animation that we need, and we also have an event ([onRest](https://react-spring.io/common/props#events)) that indicates when the animation ends.
* **[Nuka Carousel](https://github.com/FormidableLabs/nuka-carousel)**: This is a good library to manage carousels, I like the simplicity that this library has. I have used this library in the [list of levels](https://cuzzle-react.vercel.app/levels).
* **[Reach Router](https://www.npmjs.com/package/@reach/router):** This is a library from the same developers as [react-router](https://reactrouter.com/); as the name implies, it's possible to set different routes/pages. In the case of the game we have five routes.
* **[qrcode-decoder](https://www.npmjs.com/package/qrcode-decoder) and [qrcode.react](https://www.npmjs.com/package/qrcode.react)**: The first library helps to read the information of our QR code, with the second we can create the QR, in this QR we save our level (actually the information we save is the URL, as mentioned in the [first part](https://dev.to/jorger/part-12-game-in-reactjs-cuzzle-403a#:~:text=Fourth%20step%20-%20Publish%20level) the level created in the editor is saved in the URL).
* **[howler](https://www.npmjs.com/package/howler):** Is an audio library. It defaults to Web Audio API and falls back to HTML5 Audio. This makes working with audio in JavaScript easy and reliable across all platforms.
* **[react-keyboard-event-handler](https://www.npmjs.com/package/react-keyboard-event-handler):** A React component for handling keyboard events (keyup, keydown and keypress), it's only available in the desktop version.
* **[sweetalert](https://www.npmjs.com/package/sweetalert):** A beautiful, responsive, customizable, accessible replacement for Javascript's popup boxes ([alert](https://developer.mozilla.org/en-US/docs/Web/API/Window/alert), [confirm](https://developer.mozilla.org/en-US/docs/Web/API/Window/confirm) and [prompt](https://developer.mozilla.org/en-US/docs/Web/API/Window/prompt)).
* **[share-api-polyfill](https://www.npmjs.com/package/share-api-polyfill):** This is a polyfill for the [Web Share API](https://developer.mozilla.org/en-US/docs/Web/API/Navigator/share) which can also be used on the desktop. It is very useful because if the browser does not support the Web Share API, the user can still share the information without problems.
Also I used other packages like [prop-types](https://www.npmjs.com/package/prop-types) and [lodash.clonedeep](https://www.npmjs.com/package/lodash.clonedeep)
Other `devDependencies` used were:
* **[Vercel](https://www.npmjs.com/package/vercel)**: This package helps to publish the game on the [service](https://vercel.com/) with the same name. This service is very good and it's simple to publish a project; I have been using it for several years (even when the name of the service was `now`).

* **[Serve](https://www.npmjs.com/package/serve)**: Sometimes it was necessary to test the game on real devices; in that case, I used this package to create a URL that I could use locally (when I needed to share the URL with other people I used [ngrok](https://www.npmjs.com/package/ngrok)).

* **[CodeSpaces](https://github.com/features/codespaces)**: This is a great tool because it allows us to have a complete development environment in the cloud. I had the option to open the project on different devices through the browser or open it in [VS Code](https://code.visualstudio.com/docs/remote/codespaces); this was very useful for me because I had the option to develop the game on different computers (even on an iPad).
## 2. CSS.
Most of the game's interface was developed in CSS; sometimes I use SVGs, in this case for the icons. I know there are a lot of good libraries to use with CSS, but in my case I used "regular CSS": each component has its own CSS file (if necessary). Sometimes when I need to change something in the interface I use the style property or the className method, but I also used the option to share variables between CSS and JS, in this case with [setProperty](https://developer.mozilla.org/en-US/docs/Web/API/CSSStyleDeclaration/setProperty) (to create a CSS variable or modify its value).

In games like this it is possible to use [canvas](https://developer.mozilla.org/en-US/docs/Web/API/Canvas_API); sometimes it's the best option. For this particular game (actually almost [all the games](https://github.com/Jorger) I have developed in React don't use canvas) I only use CSS. For me CSS is enough; yes, the performance is sometimes not the best, but for me the idea is to learn, not just ReactJS but CSS as well.
## 3. Technical challenges.
This is the second isometric game that I have developed (I previously developed a game with an isometric style, [Hocus](https://hocus-taupe.vercel.app/)). My previous games/projects were mostly 2D; the challenges in this project were mainly related to the depth of the cubes.
### Component creation
With ReactJS (and other component-based libraries/frameworks) it's "easy" to create new components (and reuse them). In this game we have several components, but it was a special challenge to create visual components like:

* **Floors**: As we saw in the [first part](https://dev.to/jorger/part-12-game-in-reactjs-cuzzle-403a#:~:text=Fist%20step%20-%20Add%20floors) we have different types of floors, but at the end of the day it is just one component that changes its behavior based on its `props`
```jsx
import "./styles.css";
import { SIZE_FLOOR } from "../../../utils/constants";
import PropTypes from "prop-types";
import React from "react";
import Variants from "./variants";

const Floor = React.memo(
  ({
    style = {},
    size = SIZE_FLOOR,
    type = "",
    animated = true,
    shake = false,
    up = false,
  }) => (
    <div
      className={`floor-wrapper ${shake ? "shake" : ""} ${up ? "up" : ""}`}
      style={{ ...style, width: size, height: size + 20 }}
    >
      {type && <Variants type={type} animated={animated} />}
      <div className="floor-base" />
      <div className="floor" />
    </div>
  )
);

Floor.propTypes = {
  style: PropTypes.object,
  size: PropTypes.number,
  type: PropTypes.string,
  animated: PropTypes.bool,
  shake: PropTypes.bool,
  up: PropTypes.bool,
};

export default Floor;
```
In this example we can see the `<Floor />` component, but also we can see the `<Variants />` component.
```jsx
import { SUFIX_ARRIVAL_POINT } from "../../../../utils/constants";
import ArrivalPoint from "./arrivalPoint";
import Portal from "./portal";
import Switch from "./switch";

const Variants = (props) => {
  let finalType = props.type || "";
  const extendprops = {
    ...props,
  };

  if (props.type.includes(SUFIX_ARRIVAL_POINT)) {
    finalType = SUFIX_ARRIVAL_POINT;
    extendprops.color = props.type.split("-")[2] || "white";
  }

  const variants = {
    switch: <Switch />,
    "arrival-point": <ArrivalPoint {...extendprops} />,
    portal: <Portal {...extendprops} />,
  };

  return variants[finalType] || null;
};

export default Variants;
```
* **Cube:** This component is simpler compared to the floors, but only when it's a static cube.
```jsx
import "./styles.css";
import { SIZE_FLOOR } from "../../../utils/constants";
import PropTypes from "prop-types";
import React from "react";

const Cube = React.memo(
  ({ style = {}, size = SIZE_FLOOR - 25, color = "", opacity = false }) => {
    return (
      <div
        className={`cube ${color}${opacity ? " opacity" : ""}`}
        style={{ ...style, width: size, height: size + 20 }}
      />
    );
  }
);

Cube.propTypes = {
  style: PropTypes.object,
  color: PropTypes.string,
  opacity: PropTypes.bool,
  size: PropTypes.number,
};

export default Cube;
```
As you can see it's just one div; I like to use just one div when possible. Sometimes it's possible to do a lot of things with just [one div](https://a.singlediv.com/), and I think that's good because, as you know, using fewer DOM elements is good for performance.

What creates the "magic" to show a cube is CSS, in this case using the [transform](https://developer.mozilla.org/en-US/docs/Web/CSS/transform) and [linear-gradient](https://developer.mozilla.org/en-US/docs/Web/CSS/gradient/linear-gradient%28%29) properties with the [::before pseudo-element](https://developer.mozilla.org/en-US/docs/Web/CSS/::before).
### Cube depth
In a 2D game, the elements are at the same level, but in games like this, depth is an important aspect to take into consideration.

As you know, in CSS we have the concept of [stacking context](https://web.dev/learn/css/z-index/): when we have two elements with absolute positions, the element positioned to the right and/or below will, by default, be on top of the other element. If we want to change that behavior it is necessary to use the [z-index](https://developer.mozilla.org/es/docs/Web/CSS/z-index) property, but for it to work all elements need to be in the [same stacking context](https://www.joshwcomeau.com/css/stacking-contexts/#layers-and-groups).

I use the z-index only in the component where the user can move the cubes; in the editor it wasn't necessary because all the elements are static at that point. All the floors render from left to right and top to bottom, and all of them have an absolute position (the positions were calculated by a helper). Due to this rendering order, by default a floor has the right position and depth with respect to the other floors; I mean, the next floor will have a greater depth than the previous one. The same strategy was applied for the cubes, with one exception: the cubes have a z-index (the same z-index for all of them), because it was necessary to make sure the cubes are located above the floors. I don't use another container for the cubes; the floors and the cubes are in the same container.
### Movement of the cubes
This was one of the most challenging parts of the game because I needed to move the cube or cubes (sometimes the player can move two cubes at the same time). To do that I used the [react-spring](https://react-spring.io/) library; with its [useSpring](https://react-spring.io/hooks/use-spring) hook I can indicate the move I need (left, top). The move only happens if it passes some validations.

**A cube can only move if:**

**There is a floor:** if the player wants to move a cube to a new position, that position needs to have a floor. It sounds funny to say that, but level code was necessary to have such validation.

**In front, there is another cube and there is a floor:** in this case, the validation of the floor is made for the second cube; this means that when the main cube has another cube in front of it, it's necessary to know if the new position of the second cube has a floor.

**In front, there is another cube and in front of that cube, there is no other cube:** this scenario is when we have three cubes: the main cube wants to move to a position where there is another cube, so it's necessary to know if the new position of the second cube is available; if in that position there is another cube, there will be no movement.
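The three rules above could be sketched like this. This is an illustrative reconstruction, not the game's actual code: `floors` and `cubes` are assumed here to be sets of `"x,y"` keys, and `canMove` is a hypothetical helper name.

```javascript
// Returns true if a cube at cubePos may move one step in direction dir.
function canMove(cubePos, dir, floors, cubes) {
  const key = (p) => `${p.x},${p.y}`;
  const next = { x: cubePos.x + dir.x, y: cubePos.y + dir.y };

  if (!floors.has(key(next))) return false; // rule 1: the target needs a floor
  if (!cubes.has(key(next))) return true;   // empty floor: move allowed

  // rules 2 and 3: a second cube is in front, so it gets pushed;
  // its landing square needs a floor and must not hold a third cube.
  const beyond = { x: next.x + dir.x, y: next.y + dir.y };
  return floors.has(key(beyond)) && !cubes.has(key(beyond));
}

const floors = new Set(['0,0', '1,0', '2,0']);
const cubes = new Set(['0,0', '1,0']);

// Pushing the cube at (1,0) onto the free floor (2,0) is allowed:
console.log(canMove({ x: 0, y: 0 }, { x: 1, y: 0 }, floors, cubes)); // true
// Moving off the edge (no floor at (-1,0)) is not:
console.log(canMove({ x: 0, y: 0 }, { x: -1, y: 0 }, floors, cubes)); // false
```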
### Portal validation.
Validating this type of cube was also challenging; some considerations that were necessary:

**Not accepting movement input while teleporting a cube:** when the user indicates a movement, it is possible to maintain a sustained movement; this means that the user does not need to lift the finger from the screen (or the key on desktop) to indicate a new movement. With portals it's a little different, because when a cube enters a portal the cube changes to a new position, and at that moment we have an animation that shows the change. To prevent strange behavior, input from the user is ignored; when the teleportation animation ends, the inputs are enabled again.

**Validate that the portal exit point is available:** all the cubes in the game can be teleported. If a regular cube is blocking the exit of the portal, the portal is blocked for the entering cube; in that case, when a cube enters the portal it moves normally, meaning the portal behaves like a regular floor, which is sometimes convenient for some levels.
**Undo validation.**

The game has the option to restart a level, but the undo action is different because it needs to show the level in the previous state, to achieve this action was necessary to create a new state, in this state we save the previous state of the level, but no all the complete state, this means, that it only saves the things that change at that moment, for example, if the user only moves a cube, it’s not necessary to save the state of the floors or the state of the other cubes.
For example when a cube or cubes move, in that case, we save in an object called floors, the previous position where the cube was, always we have at least a cube moving.
But when it's necessary to validate the undo option for floors, we have to take the type of floor into consideration. For example, if the floor is a shaking floor, when undo is applied it's necessary to show the floor again in the same position it had before; another case is the switch floor, where it's necessary to save the floors that were related to the switch and hide those floors again.
For portals the logic was different. In this case we don't play the teleportation animation, because it's possible to press the undo button many times; we only save the previous position of the cube, and when the user presses the undo button the cube returns to that previous position, with an animation of the cube moving from its current position back to the previous one.
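A minimal sketch of this delta-based undo, in Python for illustration (all names are hypothetical, since the real game code is private):

```python
# Minimal sketch of delta-based undo: each move pushes only what changed,
# instead of snapshotting the entire level state.
undo_stack = []

def apply_move(state, cube_id, new_pos):
    """Move a cube, recording only its previous position for undo."""
    undo_stack.append({"cube_id": cube_id, "prev_pos": state[cube_id]})
    state[cube_id] = new_pos

def undo(state):
    """Restore the last saved delta rather than a full level snapshot."""
    if undo_stack:
        delta = undo_stack.pop()
        state[delta["cube_id"]] = delta["prev_pos"]

level = {"main_cube": (0, 0)}
apply_move(level, "main_cube", (1, 0))
undo(level)
print(level["main_cube"])  # → (0, 0)
```

The same stack generalizes to floors: a delta entry for a shaking or switch floor would record its previous visibility instead of a position.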
**Save level solution**
The state used for the undo option was also utilized for the solution option; in this case the data is only saved when we are in the editor. The same state could be reused because when the user uses the undo option, they are at the same time removing that movement from the solution.
The solution is saved in an array containing the valid moves (if the user makes a movement and the cube doesn't move, that movement isn't saved). To prevent overly long solutions, the editor has a limit of 250 movements, but even an array of up to 250 movements is a lot of data. That's why the solution gets special treatment when saved: the array is converted to a string, eliminating repeated values and storing the number of times a value repeats (when it is consecutive). When a level is loaded, the string is converted back into the array.

For example, **37,27,12,42,32,44,23** is the solution for the level in the gif; here `37` means that it's necessary to move the cube seven times in direction 3 (right-top).
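This compression is essentially run-length encoding. A hedged sketch in Python (not the game's actual code) of how the conversion in both directions might look:

```python
def encode(moves):
    """Collapse consecutive repeats: [3,3,3,3,3,3,3] + [2]*7 -> "37,27"."""
    out, i = [], 0
    while i < len(moves):
        j = i
        while j < len(moves) and moves[j] == moves[i]:
            j += 1                      # walk to the end of this run
        out.append(f"{moves[i]}{j - i}")  # direction digit + run length
        i = j
    return ",".join(out)

def decode(text):
    """Expand "37,27" back into [3]*7 + [2]*7."""
    moves = []
    for token in text.split(","):
        direction, count = int(token[0]), int(token[1:])
        moves.extend([direction] * count)
    return moves

print(encode([3]*7 + [2]*7 + [1]*2))  # → "37,27,12"
```

The run length may take more than one digit (a run can approach the 250-move limit), which is why `decode` treats everything after the first digit as the count.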
**Run solution.**

The user can play a particular level many times, restart it, and use the undo option as many times as necessary. But some levels are very complex, which is why the game has the option to show the solution of the level; this is also the reason why, when playing a level in the editor to validate that it has a solution, that solution is saved, so it can later be executed to show it.
In another game that I developed, [Mr. Square](https://dev.to/jorger/mr-square-en-reactjs-primera-parte-49eo), there is the same option to show the solution of a level, but there the user needs to follow the movements to complete it. This game is different: the user doesn't need to make any movement, because all the movements are executed automatically and the player is only a viewer. Because of that, it was necessary to disable all user inputs, and if the user had made movements before, to restart the level before starting to execute the solution.
# Conclusions.
This game was a good challenge for me. It is the ninth game I have developed in ReactJS, and with each game I learn something new. I don't consider myself a game developer; I do this type of project because it is fun and at the same time I learn new concepts of this library.
In these articles I wanted to share, at a high level, how I developed this game. The game code is private at the moment, just like my other games; my idea is perhaps to create a course where I can show you how to develop games like these.
I'm not a native English speaker; in fact, this is my first article in English. To be honest, writing these articles was harder for me than developing the game 😅, but it was a good experience. For the next game I develop (I have a new one in my head right now 😅), the idea is to write more articles in English; I know that I can improve day by day.
Thanks for reading these articles, you can find me at:
* [Twitter.](https://twitter.com/ostjh)
* [Github.](https://github.com/Jorger)
* [Linkedin.](https://www.linkedin.com/in/jorge-rubiano-a8616319)
| jorger |
925,589 | Making a better 2D asset pipeline | I hate image formats. Maybe it's an emotional reaction to hours upon hours of searching... | 0 | 2021-12-21T20:54:10 | https://dev.to/hamishmilne/making-a-better-2d-asset-pipeline-5d3e | unity3d, gamedev | ## I hate image formats.
Maybe it's an emotional reaction to hours upon hours of searching for obscure specifications, finding bizarre proprietary blobs in supposedly standardised files, wondering if the tool I'm running is even doing anything, and having my (fairly high end) PC completely run out of memory several times - but it's an understandable one, I think.
As a warning, this post is going to be pretty long, boring, and rant-y. If that's not for you, feel free to skip it and go look at some cool WebGL demo, but if you like this sort of thing then read on!
## A bit of background.
After a long day of coding, I like to sit down for a nice evening of coding - specifically, working on a to-be-announced game project along with a couple of long-time friends. Naturally, I've ended up in the architect's role, which means I'm responsible for our asset pipelines, and in the (primarily) 2D game we're making, that means a *lot* of sprites.
In our game, each character bust is built from a whole load of individual pieces. Arms, feet, noses, shins, eyebrows, breasts, clothes - everything can be mixed and matched, like a space-age Mr. Potato Head. This is obviously more difficult to set up in the short term, but it allows us to quickly create expressive, high-quality character busts that hopefully don't look *too* creepily alike.
Our artist has a particular creative process that I make no claim of understanding, but the important thing here is the file structure, which looks like this:

For Photoshop neophytes (like myself): the folder-looking things are exactly that, and are called 'groups'. Groups can be infinitely nested. The leaf nodes of this tree structure are 'layers', and they contain image data. This could include vector shapes, text, or other effects, but for our purposes we can assume it's raster (bitmap) data.
Each of these serves a different purpose: the lowest-level layers are where the actual work gets done - so, we might have a layer for some line art, then another one for some shading. Directly above that (usually!) we have a group that collects all the layers for one 'piece', like 'SS_Ears_Kobold_Front_1'. Above that, there are groups used for organisation - gathering together the Kobold ears, all the ears, all the head pieces and so on. This allows the artist to hide the pieces that aren't relevant while working, while making them visible when he needs to check that they fit with the other adjacent pieces. There are also layers and groups used as references, temporary sketches, you get the idea.
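In tree terms, the 'piece' groups are exactly the groups whose children are all plain layers (leaves). A small Python sketch of that heuristic, using a nested dict as a stand-in for the PSD structure (all names are illustrative):

```python
def find_piece_groups(node, path=""):
    """Yield paths of groups whose children are all plain layers (no sub-groups)."""
    children = node.get("children")
    full = f"{path}/{node['name']}"
    if children is None:          # a raster layer, not a group: nothing to yield
        return
    if all(c.get("children") is None for c in children):
        yield full                # every child is a layer -> this group is a piece
    else:
        for child in children:    # organisational group -> keep descending
            yield from find_piece_groups(child, full)

doc = {"name": "Ears", "children": [
    {"name": "Kobold", "children": [
        {"name": "SS_Ears_Kobold_Front_1", "children": [
            {"name": "line art"}, {"name": "shading"}]},
    ]},
]}
print(list(find_piece_groups(doc)))  # → ['/Ears/Kobold/SS_Ears_Kobold_Front_1']
```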
For practical reasons, the artist splits his work into multiple PSD (Photoshop Document) files which, as we'll see later, is probably a good thing.
So! How do we get *that* into a format we can use in Unity?
### Attempt #1: keep it simple, stupid.
The solution we first jumped to was the one built in to Photoshop: "Quick export layers as PNG". The artist need only individually mouse-select the <u>**_434_**</u> groups corresponding to the pieces, hit the aforementioned option, and generate a load of images like this:

And nicely cropped as well, how thoughtful! So I can just drop them all into Unity, and...

Hmm. Okay, maybe I can make this work...

Yeah.
In hindsight, I spent an embarrassingly high number of hours, in and out of voice calls with the artist, trying to pixel-perfectly align each piece with some 433 others, before coming to the eminently sensible conclusion that this would not do.
## Attempt #2: Not enough RAM
After looking into a few possible solutions, including things like [adding a nearly-transparent pixel in two corners of every layer to trick it into not cropping the images](https://community.adobe.com/t5/photoshop-ecosystem-discussions/export-without-crop-trim/td-p/9914373), the artist found a plugin for Photoshop that provided a slightly more configurable PNG exporter. The new exports would all be identical in size, matching the size of the canvas, with the graphic correctly positioned within it. The downside: you have to merge (combine into a single layer) each of the 434 groups before export - a process that takes several times as many clicks as the previous one. Safe to say, he was *not* pleased.
I, however, was ecstatic!

...at least until I tried to shift-select all the images in Unity. After *minutes* of waiting for the unresponsive editor, my PC locked up, having completely run out of memory.
The problem is the sheer size of the images: 3500x5200 (enough to stay sharp on a 4k monitor). The size on disk isn't much changed with the addition of all this empty space--PNG provides pretty good lossless compression after all--but in Unity (both in the editor, and at runtime) things aren't so simple.
In order to display a texture on screen, a desktop GPU requires it to be ultimately stored in one of a few different formats. For RGBA images like ours, we've got essentially two options: raw, uncompressed data at 32 bits per pixel, or the [DXT5](https://en.wikipedia.org/wiki/S3_Texture_Compression) fixed-rate compression at 8 bits per pixel. Since blank pixels now cost just the same as filled ones, it adds up to 70MiB or 17MiB for a single image. All together it's 30GiB or 7.5GiB, and that could be for both system and GPU memory depending on the exact operation. The cost when selecting assets in the editor is probably twice that due to internal buffering.
As an aside, [crunch compression](https://github.com/BinomialLLC/crunch) is a bit of a red herring here. While it does reduce the size on disk, it needs to be expanded back to regular DXT when it's first displayed, so it won't solve our memory issues by itself.
Now as much as I love wasting my development headspace with caveats like "Don't ever touch Ctrl+A", this wouldn't do for the eventual players of the game, who would need something like a 2080 Ti in order to run our 2D RPG. If you're familiar with Unity you've probably just yelled the solution at the screen: sprite atlases! With one of these, we can pack all our sprites together into as few textures as possible, cropping out the empty space and even nestling them in each others' concave spaces, while preserving their positional data.

A tip: you can drag a folder into the 'packable objects' list, and Unity will recursively add all the sprites within it to the atlas. Saves a lot of clicking and dragging!
After adding the folder, I hit the "Pack Preview" button, and--you guessed it--got another out of memory crash. Perhaps I could create *multiple* atlases, limited to a few dozen pieces each, but that would compromise memory efficiency, download size, and draw call count, all because I insisted on having a load of empty space around each sprite. And the problem would only get worse; we projected our final sprite count to be at *least* 10 times what we currently have.
## Attempt #3: Process and reprocess.
So we can't have cropped sprites because we lose the positional data, and we can't have uncropped sprites due to memory issues. But since the only thing the empty space provides us is an offset coordinate, perhaps we can extract this information from the raw images, and store them cropped in the project?
```csharp
[MenuItem("Tools/Trim sprites")]
public static void TrimSprites()
{
// For each selected Texture asset...
var textures = Selection.objects.OfType<Texture2D>();
foreach (var texture in textures) {
var path = AssetDatabase.GetAssetPath(texture);
var importer = (TextureImporter)AssetImporter.GetAtPath(path);
// Disable compression, and make it 'readable';
// this allows us to get a pointer to its data later on.
importer.isReadable = true;
importer.maxTextureSize = 8192;
importer.textureCompression = TextureImporterCompression.Uncompressed;
// Re-import the asset.
importer.SaveAndReimport();
// Find the bounds of the graphic by looking for non-transparent pixels
var rect = GetNonTransparentBounds(texture);
if (rect.min != Vector2Int.zero || rect.size != new Vector2Int(texture.width, texture.height)) {
// Calculate the new sprite pivot based on the computed bounds
var sourcePivot = importer.spritePivot;
var sourcePivotPixels = sourcePivot * new Vector2(texture.width, texture.height);
importer.spritePivot = (sourcePivotPixels - rect.min) / rect.size;
// Copy the graphic to a new, correctly-sized texture
var trimmed = new Texture2D(rect.width, rect.height, TextureFormat.ARGB32, false);
trimmed.SetPixels(texture.GetPixels(rect.x, rect.y, rect.width, rect.height));
trimmed.Apply();
// Write the texture to the original file path in the PNG format
File.WriteAllBytes(path, trimmed.EncodeToPNG());
}
// Undo the previous changes to the import settings
importer.isReadable = false;
// Re-import the asset, again.
importer.SaveAndReimport();
}
}
```
These new cropped, transformed sprites will happily pack into an atlas, while staying correctly aligned, so... yay?
Well as you might imagine, this import/calculate/re-import process isn't exactly quick, and if the assets need updating we'd need to reset the pivot point and re-run the process. Plus, if we needed to recover a layered PSD file from these images, it would be more difficult to do so (though admittedly not impossible), and after all of this we still don't have the group structure of the original file. It "works", but surely, *surely*, there's a more sustainable solution out there.
## Attempt #4: Just use PSD bro!
In the past, Unity had very poor support for layered image files, at best flattening the entire image into a single texture. This is rapidly changing, however, with the addition of the [2D PSD Importer](https://docs.unity3d.com/Packages/com.unity.2d.psdimporter@7.0/manual/index.html) package. This adds a [scripted importer](https://docs.unity3d.com/Manual/ScriptedImporters.html) which takes the original image, extracts all the layers, automatically crops and packs them into an atlas (not as efficiently as a regular sprite atlas, but good enough to save on memory use in the editor!), while *keeping* the group structure. You can even share skeleton rigs between different sprite groups, and (in the beta version) individually enable or disable layers in the import settings.

The artist, however, was sceptical. In order to get sprites for the pieces (instead of each art layer) he would still need to manually merge all the piece layer groups like he does now (the importer can sort of do this, but it's not very flexible), but with the added downside of having to upload a much larger file: the uncropped PNGs total about 20MiB, whereas the PSD was around 250!
A bizarre limitation of scripted importers is that it's impossible to handle any file extension (like `.psd`) that Unity handles natively - even if the author of said importer is Unity themselves. Thus, the 2D PSD importer actually imports PS<u>B</u> files - a very similar, but much less well-supported format. Before you send the devs any strongly-worded letters though, you can simply rename your `.psd` files to `.psb` and it'll work fine (a feature that remains undocumented at the time of writing, naturally).
I persuaded the artist to send me his work file, and, with reckless curiosity, dropped it into Unity, which spun on the importer for about half an hour before crashing (probably due to out of memory, but I was disinclined to confirm this by trying again). Given the sheer number of art layers I'm not too surprised, but in any case I'd have to rule out postprocessing the file in Unity.
## Attempt #5: In a tiff.
As much as Adobe might pretend otherwise, PSD isn't the only layered raster format in existence. The [TIFF](https://en.wikipedia.org/wiki/TIFF) [format](https://en.wikipedia.org/wiki/RAS_syndrome) supports multiple layers (called 'pages' or 'directories'), and both Photoshop and GIMP can save a TIFF file that they claim preserves layer information.
Since I'm a cheap bastard I don't have a Photoshop license, so I used GIMP to start with. Exporting to TIFF gives you some extra settings, which I filled out like so:

The whole process was rather quick, just a few seconds. I then opened the exported file, and got some more settings:

And here's the layer structure that resulted:

So, it looks like the TIFF export process had individually merged each top-level layer group. This was... not as helpful as I'd hoped, but it's something! The artist would need to un-structure his work, moving each piece group to the top level, but I theorised that it would be much faster than manually merging and selecting each of them.
Of course, Unity won't handle these layered TIFFs in a useful way, so I had to make a scripted importer of my own!
```csharp
// Remember: Unity won't let us handle '.tif' or '.tiff'!
[ScriptedImporter(1, "tif2")]
public class MyTiffImporter : ScriptedImporter
{
public override void OnImportAsset(AssetImportContext ctx)
{
// This uses BitMiracle.LibTiff.NET from nuget
using (var tif = Tiff.Open(ctx.assetPath, "r"))
{
var textures = new List<Texture2D>();
var pivots = new List<Vector2>();
int maxWidth = 0, maxHeight = 0;
// For each 'page' within the TIFF file...
do {
// Get some metadata (width, height, name)
var width = tif.GetField(TiffTag.IMAGEWIDTH)[0].ToInt();
var height = tif.GetField(TiffTag.IMAGELENGTH)[0].ToInt();
maxWidth = Mathf.Max(maxWidth, width);
maxHeight = Mathf.Max(maxHeight, height);
if (tif.GetField(TiffTag.PAGENAME) == null) {
Debug.Log("No page name");
continue;
}
var name = tif.GetField(TiffTag.PAGENAME)[0].ToString();
// Read the 'raster' (pixel data)
var raster = new int[width * height];
tif.ReadRGBAImage(width, height, raster);
var bounds = GetNonTransparentBounds(raster, width, height);
// Skip all-transparent pages
if (bounds.width <= 0 || bounds.height <= 0) {
continue;
}
// Calculate the page's X/Y offset
var xres = tif.GetField(TiffTag.XRESOLUTION)[0].ToDouble();
var yres = tif.GetField(TiffTag.YRESOLUTION)[0].ToDouble();
var xpos = tif.GetField(TiffTag.XPOSITION)?[0].ToDouble() ?? 0;
var ypos = tif.GetField(TiffTag.YPOSITION)?[0].ToDouble() ?? 0;
var pageBounds = new RectInt(
(int)(xpos * xres),
(int)(ypos * yres),
width,
height
);
// Calculate the pivot based on the page's position and calculated bounds
var srcPivot = (new Vector2(0, 1)*pageBounds.size + new Vector2(-1, 1)*pageBounds.position)/pageBounds.size;
var pivot = ((srcPivot * pageBounds.size) - bounds.min) / bounds.size;
pivots.Add(pivot);
// Create a new texture for the cropped image
var tex = new Texture2D(bounds.width, bounds.height, TextureFormat.RGBA32, false) {
name = name,
alphaIsTransparency = true
};
// Copy the pixels from the raster into the texture
var data = tex.GetPixelData<int>(0);
// data.CopyFrom(raster);
for (int i = 0; i < bounds.height; i++) {
for (int j = 0; j < bounds.width; j++) {
data[i * bounds.width + j] = raster[(bounds.x + j) + (bounds.y + i) * width];
}
}
textures.Add(tex);
// ReadDirectory returns false when there are no more pages to read
} while (tif.ReadDirectory());
// Create a new texture for the combined image
var atlas = new Texture2D(64, 64, TextureFormat.DXT5, false) {
alphaIsTransparency = true
};
// This function packs the textures somewhat loosely, but good enough for now!
var rects = atlas.PackTextures(textures.ToArray(), 0, 4096, true);
EditorUtility.CompressTexture(atlas, TextureFormat.DXT5, TextureCompressionQuality.Best);
ctx.AddObjectToAsset("atlas", atlas);
ctx.SetMainObject(atlas);
for (int i = 0; i < textures.Count; i++) {
// Add the Sprite to the asset
var sprite = Sprite.Create(atlas,
rects[i].TransformSpace(new Rect(0, 0, 1, 1), new Rect(0, 0, atlas.width, atlas.height)),
pivots[i], 100);
sprite.name = textures[i].name;
ctx.AddObjectToAsset(sprite.name, sprite);
}
}
}
public static RectInt GetNonTransparentBounds(int[] raster, int w, int h)
{
int x = w;
int y = h;
int maxX = 0;
int maxY = 0;
for (int i = 0; i < w; i++) {
for (int j = 0; j < h; j++) {
int c = raster[i + j * w];
int alpha = (c >> 24) & 0xff;
if (alpha != 0)
{
if (i < x) x = i;
if (i > maxX) maxX = i;
if (j < y) y = j;
if (j > maxY) maxY = j;
}
}
}
        // +1 because maxX/maxY are inclusive pixel indices
        return new RectInt(x, y, maxX - x + 1, maxY - y + 1);
}
}
```
It's rough, but it sort of works!

More importantly, though: how easy would it be for our artist to create a valid file? At first it seemed the answer was 'extremely' - as simple as Save As -> TIFF - but when I opened the file in GIMP it seemed to only have a single layer.
Opening the file in the very retro-looking [TIFF Inspector](http://www.thesilentfish.de/software/archive/itm_TiffInspec103.en.html) gave me this:

One directory (i.e. layer), and a lot of 'privately defined' tags. Using [a very helpful reference](https://www.awaresystems.be/imaging/tiff/tifftags/private.html), we find that most of these tags are irrelevant--colour profiles, thumbnail data and suchlike--but tag 37724 seems to have some proprietary Photoshop-related data, which is corroborated by the [Photoshop TIFF spec](https://www.alternatiff.com/resources/TIFFphotoshop.pdf).
Hang on, Photoshop TIFF spec? Yeah, the TIFFs that Photoshop creates are, for our purposes, totally proprietary, so it's essentially a PSD file but even less well supported. Great! Apparently [ImageMagick](https://imagemagick.org/script/formats.php) has support for getting the layer data out of this 'variant', so if you have a bunch of files in the format already, you can still make use of them.
I could get the artist to open his work file in GIMP and go through the export process there, but by this point it seemed like a bit of a hassle for not much benefit.
## Attempt #6: Can't I just ZIP a bunch of PNGs together or something?
Before I resorted to making my own format out of chewing gum and string, I thought I'd have a quick browse of the other layered raster formats out there, just to see if there were any other options - and wouldn't you know it, there are! The [OpenRaster](https://www.openraster.org/baseline/baseline.html) format (`.ora`) is an open standard, is supported in GIMP, and is *literally a ZIP of PNGs*, with an XML file describing the layer structure - groups and all:

So it *appeared* (foreshadowing!) that OpenRaster was a good candidate for our 'final' asset format. However, we still had the problem of how to go from our many-layered PSD work file to a few-layered OpenRaster file. Merging all the layers was still a manual process, and I have to confess I wouldn't be happy if I had to do several thousand clicks just to get my art in a nice format for the programmers.
So I made a plugin!
```scheme
(define (merge-and-optimize-recursive image items)
(for-each (lambda (item)
(define merged-layer item)
(if (= TRUE (car (gimp-item-is-group item))) (let* (
(children (vector->list (cadr (gimp-item-get-children item))) )
)
; If any children are not groups, merge the item
; Otherwise, recurse.
(if (= 0 (foldr * 1 (
map car (map gimp-item-is-group children)
) ) )
(set! merged-layer (car (gimp-image-merge-layer-group image item)))
(merge-and-optimize-recursive image children)
)
) )
; Auto-crop the (possibly merged) layer
(if (= FALSE (car (gimp-item-is-group merged-layer))) (let* ()
(gimp-image-set-active-layer image merged-layer)
(plug-in-autocrop-layer RUN-NONINTERACTIVE image merged-layer)
) )
) items )
)
(define (script-fu-merge-and-optimize image layer)
(gimp-image-undo-group-start image)
; The final assets will be in 8-bit RGBA, so convert the image to that if needed.
(if (not (= PRECISION-U8-GAMMA (car (gimp-image-get-precision image))))
(gimp-image-convert-precision image PRECISION-U8-GAMMA)
)
(merge-and-optimize-recursive image (vector->list (cadr (gimp-image-get-layers image))) )
(gimp-image-undo-group-end image)
)
(script-fu-register
"script-fu-merge-and-optimize"
"Merge layer groups and optimize"
"Merge layer groups and optimize"
"Hamish Milne"
"Hamish Milne"
"2021"
"*"
SF-IMAGE "Image" 0
SF-DRAWABLE "Layer" 0
)
(script-fu-menu-register "script-fu-merge-and-optimize" "<Image>/Image")
```
... Or, more specifically a '[Script-Fu](https://docs.gimp.org/en/gimp-using-script-fu-tutorial.html) Script'. The -ahem- 'code' above is [Scheme](https://www.gnu.org/software/mit-scheme/), specifically [TinyScheme](http://tinyscheme.sourceforge.net/home.html), GIMP's scripting runtime of choice. It's also possible to use Python 2.7 (ugh), or compile an executable from scratch, but Scheme is a lot less painful for simple scripts like this one.
There's a few things to note here:
* The input files will generally use high-precision colours for more accurate composition while editing. Before everything, I change the image precision to 8-bit gamma encoding, since this is what will ultimately be used by Unity when importing; skipping this step will result in needlessly large output files.
* My heuristic for whether to merge layers is somewhat specific to our pipeline. I merge the groups directly above the leaf layers and leave the rest alone.
* I run the Auto Crop function on every layer, which cuts it down to the smallest rectangle that encloses the graphic. This obsoletes my Unity-based solution, and naturally makes the output files smaller still.
I have to say, it was certainly satisfying watching layer groups flick down the screen as the script did its thing. [If you're sensitive to flashing images, though, I recommend looking away from the screen for a bit...](https://i.imgur.com/etcWoQA.gif)
On exporting to the OpenRaster format, everything seemed to work! I had, effectively, a ZIP of all the cropped layers, in PNG format, along with the layer structure. That's just about everything I'd been looking for, right?
*Right?*
Well... For starters, Unity has *zero* support for OpenRaster, so I'd need to make another scripted importer. Not too difficult, since the format's so simple, but I couldn't help feeling some chagrin that I'd have to essentially re-implement all the features of the Unity PSD importer, just in a much more janky way.
Also, it took GIMP several minutes to do the export. On a more reasonably-specced machine, and with a larger file, it might push half an hour. I don't know exactly *why* this is the case, when exporting to TIFF or PSD takes seconds, but it's probably to do with the OpenRaster exporter being a Python plugin, whereas those other ones are built into the main program.
Man, it'd be cool if we could just use PSD, huh?
## Attempt #7: Seriously, just use PSD.
The main issue we had with using PSD wasn't the format itself, it was the total effort required to prepare the file for export. With the plugin I'd made, that just became a non-issue; I'd reduced the preparation time to almost nothing, and we could export in whatever format we pleased. Why wouldn't PSD do?
In fact, with all my optimizations, the PSD file exported after running the script was only about 20% larger than the equivalent OpenRaster file, and about half the size of all the un-cropped PNGs. The PSD shrunk from 250MiB to a little over 10. And this time, Unity didn't crash on the importer!

The only major caveat with this approach is the size of the intermediary atlas. Unlike regular sprite atlases, the PSD importer will create one big atlas texture per file. Unity (and indeed, most GPUs) has a maximum texture size of 16k square, even in the editor. If your sprites don't fit, they'll be shrunk until they do, and the sprite atlas won't be able to un-shrink them later on. So if the intermediary atlas looks pretty full, you might want to break up the PSD into smaller files.
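To see why the cropping matters so much here, a quick bit of arithmetic on how few un-cropped sprites would even fit in a maximum-size atlas:

```python
ATLAS = 16384                 # Unity's maximum texture dimension
W, H = 3500, 5200             # un-cropped sprite size from earlier

per_row, per_col = ATLAS // W, ATLAS // H
print(per_row * per_col)      # → 12 un-cropped sprites per 16k atlas
```

At a dozen sprites per atlas, 434 pieces would need dozens of intermediary textures; cropped sprites pack orders of magnitude more densely.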
Another thing to watch out for is this:

Neither GIMP nor Unity's PSD importer will perfectly handle every Photoshop feature. The effect above is caused by a mask being incorrectly applied, so if your layers are doing anything beyond being linearly composed together, it's a good idea to rasterize them just before you export.
## Conclusion: What have we learned?
If you find yourself in the incredibly specific position of having a huge amount of structured, fixed-position sprites, created in Photoshop, that need to be imported into Unity, you've got a few options:
Quick export layers as PNGs:
* Pro: Quick and simple
* Pro: Small file size (thanks to the cropping)
* Con: You lose the positional data and group structure (thanks to the cropping)
Use [SuperPNG](https://www.fnordware.com/superpng/) or similar to export un-cropped PNGs:
* Pro: No special software needed to process the output
* Pro: Keeps position data
* Con: Need to manually merge each sprite group
* Con: No group structure
* Con: Requires post-processing in Unity
* Con: Very easy to accidentally run out of memory in the editor
Use the PSD work file directly:
* Pro: Easiest of the bunch for the artist
* Pro: You use exactly the same file, making it easy to stay in sync
* Pro: Keeps the position data and group structure
* Pro: Importer made by Unity, will get new features over time
* Con: *Massive* file size
* Con: Importing takes forever and might crash if the file is too big
* Con: If you have a weird layer structure, make use of masks, smart objects etc. you might still need to pre-process the file for Unity to display it correctly
Bring each sprite's group to the top level, then export to TIFF in GIMP:
* Pro: Fewer clicks than merging each sprite group individually
* Pro: Fairly small file size
* Pro: Keeps position data
* Con: No group structure
* Con: Requires a custom scripted importer
* Con: Multi-page TIFFs not well supported
Use a GIMP script to optimize the file, then export to OpenRaster:
* Pro: Smallest file size of the lot
* Pro: Automatic processing saves a lot of time
* Pro: Simple format, easy to parse and use elsewhere
* Pro: Keeps the position data and group structure
* Con: Requires a custom scripted importer
* Con: Format not well supported
* Con: Takes a long time to export
Use a GIMP script to optimize the file, then export to PSD (our solution of choice):
* Pro: Fairly small file size
* Pro: Automatic processing saves a lot of time; quick to export
* Pro: Keeps the position data and group structure
* Pro: Importer made by Unity, will get new features over time
* Con: Need to rasterize masks etc. for Unity to display it correctly
## Recap: How we do it
For reference (or if you've just skipped to the end), here's our full 2D pipeline, step by step:
* **Artist setup:**
* Install Photoshop (obviously)
* Install GIMP
* Copy the Scheme code above to `%APPDATA%\GIMP\<GIMP version>\scripts\script-fu-merge-and-optimize.scm`
* **While creating:**
* Structure the layers as you like, but make sure that the group for each sprite only contains layers, and not other groups.
* **To export:**
* Delete (not hide!) any layers that you don't want in the final output (references and so on)
* If you're using masks, smart objects, patterns etc., rasterize and/or merge the layers as appropriate so that only simple layers remain
* Open the PSD file in GIMP
* Run the script by going to 'Image -> Merge layer groups and optimize'
* NB, it's not necessary to make all the layers visible at this stage
* Check the results, then export the file as PSD
* **To import:**
* Install the "2D PSD Importer" package, if it's missing
* Change the image file's extension to `.psb`, and copy it into the project
* Check the import settings - in particular, the texture size, hidden layers, and layer group options
* Ensure your sprites are added to an atlas before building
And... that's it! After a lot of trial and error, we've got what I think is a pretty powerful and robust asset pipeline, which hopefully won't make our artist pull his hair out every time he needs to do an export.
All the code in this post is [ISC](https://choosealicense.com/licenses/isc/) licensed. Feel free to use it if you find it useful! | hamishmilne |
925,604 | Use Symbolic Links to version control your config files | Coming from Windows based OS, I thought of symbolic links in Unix based OS as a way of creating... | 0 | 2021-12-14T01:26:00 | https://dev.to/rounakcodes/use-symbolic-links-to-version-control-your-config-files-2048 | dotfiles, shell, config, stow | Coming from Windows based OS, I thought of symbolic links in Unix based OS as a way of creating shortcuts like "Create a Desktop shortcut for ..." and so I never paid much attention to it.
I have now (15 years late) discovered its value.
Usually, you have your configuration files (*dotfiles*) for your shell, text editors, git, etc. in your home directory (or `~/.config`).
It is not convenient to version control your whole home directory just to back up some of these config files.
The solution is to create symbolic links. Create a directory like `~/.dotfiles` and move all the config files from your home directory to `~/.dotfiles`. Then create a symbolic link for each file in the home directory. Now you can version control `~/.dotfiles` with ease.
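As a rough sketch of that manual workflow, using `~/.bashrc` as a stand-in for whichever config file you want to track:

```shell
# Move one config file into ~/.dotfiles and symlink it back into place.
# ~/.bashrc is just an example; repeat for any file you want tracked.
mkdir -p ~/.dotfiles
[ -f ~/.bashrc ] || touch ~/.bashrc   # make sure the example file exists
mv ~/.bashrc ~/.dotfiles/.bashrc      # the real file now lives in ~/.dotfiles
ln -s ~/.dotfiles/.bashrc ~/.bashrc   # programs still find it at the old path
```

After that, a `git init` inside `~/.dotfiles` puts everything under version control.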
I wrote this short article just to draw attention to the value of symbolic links.
For implementation details, you can watch these excellent videos, which also introduce `GNU Stow` to avoid the pain of creating symbolic links manually.
1. [Create dotfiles folder and use symbolic links](https://www.youtube.com/watch?v=gibqkbdVbeY)
2. [GNU Stow](https://www.youtube.com/watch?v=CxAT1u8G7is)
| rounakcodes |
1,180,830 | Automatically and intelligently processing SMS with attachments? | Hi there. Doesn't seem like it's asking too much, but I'm not sure. I'd like to be able to assign a... | 0 | 2022-08-31T18:41:48 | https://dev.to/redherring917b/automatically-and-intelligently-processing-sms-with-attachments-2039 | twilio, help | Hi there. Doesn't seem like it's asking too much, but I'm not sure.
I'd like to be able to assign a webhook to a recipient phone number that would automatically fire off whenever an SMS was received. I'd like that webhook to contain the details of the SMS, including the message itself, the sending mobile number, as well as download URLs for any attachments (typically photos).
On my (the webhook processing page) end, I'd then parse the details and save the attachments to a sending mobile number specific folder on our server (the server where the webhook processing page is found).
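To sketch what that parsing step would look like on my end (the field names `From`, `Body`, `NumMedia` and `MediaUrl0`...`MediaUrlN` follow Twilio's documented incoming-message webhook parameters; the function name and folder convention are just my own):

```javascript
// Pull the interesting bits out of a Twilio-style incoming-SMS payload.
// NumMedia says how many attachments there are; each one has its own
// MediaUrl<N> download link.
function extractSms(params) {
  const count = parseInt(params.NumMedia || "0", 10);
  const mediaUrls = [];
  for (let i = 0; i < count; i++) {
    mediaUrls.push(params["MediaUrl" + i]);
  }
  return {
    from: params.From,
    body: params.Body,
    // Sender-specific folder, keyed by the digits of the sending number
    folder: "attachments/" + (params.From || "").replace(/\D/g, ""),
    mediaUrls,
  };
}
```

From there, downloading each `mediaUrls` entry into `folder` on our server is the easy part.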
It doesn't really sound all that complicated to me - my end isn't, assuming that I can get the full contents of the SMS sent via the webhook - but I'm having difficulty getting a straight enough answer from any possible SMS processing vendor.
Thanks! | redherring917b |
925,658 | Building a waitlist for your product with Next.js | Building a waitlist allows your future users to express interest in you, before you've even started... | 0 | 2021-12-14T02:29:31 | https://blog.propelauth.com/nextjs-waitlist/ | javascript, nextjs, webdev, tutorial | Building a waitlist allows your future users to express interest in you, before you've even started your MVP. You can see if your messaging resonates with potential customers, and when you are ready to launch, the users from your waitlist will make excellent early product testers.
In this post, we'll build the following Next.js application:

We will use Next.js for both the frontend and backend thanks to Next.js API routes. API routes are great for this because they are serverless. If we get a sudden burst of users, it will scale up to handle the additional load. We also don't have to pay for any servers when no one is signing up.
Since there's not that much code, we'll walk through and explain all of it.
## Creating our Next.js Application
### Creating a blank project
Use `create-next-app` to set up a new project, and then `yarn dev` to run it.
```shell
$ npx create-next-app@latest waitlist
$ cd waitlist
$ yarn dev
```
I like to start with a blank project, so let's replace the existing code in `pages/index.js` with this:
```jsx
import Head from 'next/head'
import styles from '../styles/Home.module.css'
export default function Home() {
return (
<div className={styles.container}>
<Head>
<title>Waitlist</title>
<meta name="description" content="A quick, scalable waitlist"/>
<link rel="icon" href="/favicon.ico"/>
</Head>
</div>
)
}
```
We can also delete everything in `styles/Home.module.css`; we'll replace it shortly. If you go to `http://localhost:3000`, you'll see a blank page with **Waitlist** as the title.
### Creating a two column layout
As you saw before, we want a classic two column layout with an image on the right and some marketing text on the left. We'll use a flexbox layout. Add the following to your `styles/Home.module.css`.
```css
.container {
background-color: #293747; /* background color */
min-height: 100vh; /* cover at least the whole screen */
height: 100%;
display: flex; /* our flex layout */
flex-wrap: wrap;
}
.column {
flex: 50%; /* each column takes up half the screen */
margin: auto; /* vertically align each column */
padding: 2rem;
}
/* On small screens, we no longer want to have two columns since they
* would be too small. Increasing it to 100% will make the two columns
* stack on top of each other */
@media screen and (max-width: 600px) {
.column {
flex: 100%;
}
}
```
Back in `pages/index.js`, we will add two components for the left and right columns. On the right side, we'll put an image of some code. You can put an image of the product, a mockup, something fun from unsplash, or anything really. For now, the left side will have some placeholder text.
```jsx
// ...
<Head>
<title>Waitlist</title>
<meta name="description" content="A quick, scalable waitlist"/>
<link rel="icon" href="/favicon.ico"/>
</Head>
// New components
<LeftSide/>
<RightSide/>
</div>
)
}
// These functions can be moved into their own files
function LeftSide() {
return <div className={styles.column}>
Hello from the left side
</div>
}
function RightSide() {
return <div className={styles.column}>
<img width="100%" height="100%" src="/code.svg"/>
</div>
}
```

The right side looks great! It covers the right half of the screen like we expected. The left side, however, is pretty ugly and unreadable. Let's address that now.
### Formatting our marketing text
We know what we want our `LeftSide` to say, so let's start by updating its text to match the image above. For now, we'll also put in placeholder class names, which we will define afterwards.
```jsx
function LeftSide() {
return <div className={styles.column}>
<img width="154" height="27" src="/logo.svg"/>
<h1 className={styles.title}>
Quick Scalable<br/>
<span className={styles.titleKeyword}>Waitlist</span>
</h1>
<div className={styles.subtitle}>
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore
et dolore magna aliqua.
</div>
</div>
}
```

If it wasn't for the bad contrast between the black text and the background, this wouldn't look too bad. Now we can add the `title`, `titleKeyword`, and `subtitle` classes (in `styles/Home.module.css`) to clean it up.
```css
.title {
font-size: 4rem;
color: white;
}
.titleKeyword {
color: #909aeb;
}
.subtitle {
font-size: 1.2rem;
font-weight: 250;
color: white;
}
```

### Adding the waitlist form
Our frontend is really coming together. The only remaining part is the form where the user can submit their email address. We'll place this in a separate component called `Form` and add it to the bottom of our `LeftSide` component.
```jsx
function LeftSide() {
return <div className={styles.column}>
{/* same as before */}
<Form />
</div>
}
// Note: this needs `import { useState } from 'react'` at the top of the file
function Form() {
const [email, setEmail] = useState("");
const [hasSubmitted, setHasSubmitted] = useState(false);
const [error, setError] = useState(null);
const submit = async (e) => {
// We will submit the form ourselves
e.preventDefault()
// TODO: make a POST request to our backend
}
// If the user successfully submitted their email,
// display a thank you message
if (hasSubmitted) {
return <div className={styles.formWrapper}>
<span className={styles.subtitle}>
Thanks for signing up! We will be in touch soon.
</span>
</div>
}
// Otherwise, display the form
return <form className={styles.formWrapper} onSubmit={submit}>
<input type="email" required placeholder="Email"
className={[styles.formInput, styles.formTextInput].join(" ")}
value={email} onChange={e => setEmail(e.target.value)}/>
<button type="submit" className={[styles.formInput, styles.formSubmitButton].join(" ")}>
Join Waitlist
</button>
{error ? <div className={styles.error}>{error}</div> : null}
</form>
}
```
A few things to note about the `Form` component:
- We use a [controlled component](https://reactjs.org/docs/forms.html#controlled-components) for the email input.
- We set up an error message at the bottom that is conditionally displayed.
- Once `hasSubmitted` is true, we stop displaying the form and instead display a thank you message.
Let's clean it up with css before we finish the `submit` method.
```css
.formWrapper {
padding-top: 3rem;
display: flex; /* two column display for input + button */
flex-wrap: wrap;
}
/* Shared by the input and button so they are the same size and style */
.formInput {
padding: 12px 20px;
box-sizing: border-box;
border: none;
border-radius: 5px;
font-size: 1.1rem;
}
.formTextInput {
flex: 70%; /* take up most of the available space */
background-color: #232323;
color: white;
}
.formSubmitButton {
flex: 30%; /* take up the rest of the space */
background-color: #7476ED;
color: white;
}
.error {
color: red;
}
```

### Making a request to a Next.js API route
Our design is finished! Now all we have to do is make sure that two things happen when you click submit:
1. The frontend makes a request to our backend with the email address
2. The backend saves the email address somewhere
The first one is actually pretty simple. Here's our finished `submit` method:
```js
const submit = async (e) => {
e.preventDefault();
let response = await fetch("/api/waitlist", {
method: "POST",
body: JSON.stringify({email: email})
})
if (response.ok) {
setHasSubmitted(true);
} else {
setError(await response.text())
}
}
```
We use the fetch method to send a post request to `/api/waitlist` with a JSON body that includes our user's email. If the request succeeds, we flip `hasSubmitted` and the user gets a nice message. Otherwise, the user sees an error returned from our backend.
`/api/waitlist` refers to an API route that we have not yet created, which is our only remaining step.
## Creating a Next.js API route
### Creating an empty route
Our blank application actually started with an API route in `/pages/api/hello.js` which looks like this:
```js
export default function handler(req, res) {
res.status(200).json({ name: 'John Doe' })
}
```
Since this route is in `/pages/api/hello.js`, it will be hosted under `/api/hello`. We can test this with curl:
```shell
$ curl localhost:3000/api/hello
{"name":"John Doe"}
```
Our frontend is making a request to `/api/waitlist`, however, so let's delete `hello.js` and make a new file `/pages/api/waitlist.js`.
```js
// To make sure only valid emails are sent to us, install email validator:
// $ yarn add email-validator
// $ # or
// $ npm i --save email-validator
import validator from "email-validator"
export default async function handler(req, res) {
// We only want to handle POST requests, everything else gets a 404
if (req.method === 'POST') {
await postHandler(req, res);
} else {
res.status(404).send("");
}
}
async function postHandler(req, res) {
    const body = JSON.parse(req.body);
    const email = parseAndValidateEmail(body, res);

    // If validation failed, an error response has already been sent
    if (!email) {
        return;
    }

    await saveEmail(email);
    res.status(200).send("")
}
async function saveEmail(email) {
// TODO: what to do here?
console.log("Got email: " + email)
}
// Make sure we receive a valid email
function parseAndValidateEmail(body, res) {
    if (!body) {
        res.status(400).send("Malformed request");
        return null;
    }

    const email = body["email"]
    if (!email) {
        res.status(400).send("Missing email");
        return null;
    } else if (email.length > 300) {
        res.status(400).send("Email is too long");
        return null;
    } else if (!validator.validate(email)) {
        res.status(400).send("Invalid email");
        return null;
    }

    return email
}
```
Most of the work there is just boilerplate for validating the JSON body and email that we get. But, this is actually all you need to handle the request that the frontend makes.
Go back to your frontend, type in an email, and click **Join Waitlist**. You should see your success message, and in the logs you should see `Got email: {YOUR EMAIL}`.
### How to persist waitlist emails
While logging the email is fine, you are probably going to want something more durable. This part is really dependent on your stack.
As an example, if you don't expect a lot of users and are already using Slack, you can use a [Webhook integration](https://api.slack.com/messaging/webhooks) to send a message to slack every time a user signs up. Here's how to do that using the [@slack/webhook](https://www.npmjs.com/package/@slack/webhook) library.
```js
const { IncomingWebhook } = require('@slack/webhook');
const url = process.env.SLACK_WEBHOOK_URL;
async function saveEmail(email) {
const webhook = new IncomingWebhook(url);
await webhook.send({
text: 'New waitlist request: ' + email,
});
}
```
You could also save it to a database. [CockroachDB](https://www.cockroachlabs.com/lp/serverless/) recently announced support for a highly available serverless DB that you can write to with any Postgres library, like `pg`:
```js
import { Client } from 'pg'

const connectionString = process.env.DB_CONNECTION_STRING;

async function saveEmail(email) {
    const client = new Client({connectionString})
    await client.connect()
    try {
        const query = 'INSERT INTO waitlist(email) VALUES($1)'
        const values = [email]
        await client.query(query, values)
    } catch (err) {
        console.log(err.stack)
        // Re-throw so the API route can respond with an error status;
        // `res` is not in scope inside this helper
        throw err
    } finally {
        await client.end()
    }
}
```
Or you could use services like [Airtable](https://www.airtable.com/) which has its own API for saving to a sheet. If you have a CRM, you might want to save entries directly to that instead. There are a lot of options to choose from.
## Extra features
This waitlist is pretty easy to extend. You may, for example, want to:
- **Collect more information** - Just add more fields to the frontend and parse/save them on the backend.
- **Persist whether the user has ever signed up** - Right now if the user refreshes, they are always set back to the "has not submitted" state. You can address this by saving/reading `hasSubmitted` from `localStorage`.
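For the second idea, here's a sketch of reading and writing that flag (the storage key is arbitrary; the `try`/`catch` guards matter because `localStorage` doesn't exist during Next.js server-side rendering and can be disabled in some browsers):

```javascript
// Persist the "already signed up" flag across page refreshes.
// localStorage only exists in the browser, so guard every access --
// during server-side rendering these calls would otherwise throw.
const SUBMITTED_KEY = "waitlist-submitted";

function readHasSubmitted() {
  try {
    return localStorage.getItem(SUBMITTED_KEY) === "true";
  } catch {
    return false; // SSR or storage disabled
  }
}

function writeHasSubmitted() {
  try {
    localStorage.setItem(SUBMITTED_KEY, "true");
  } catch {
    // Best effort only; losing the flag just re-shows the form
  }
}
```

In the `Form` component you would then initialize the state with `useState(() => readHasSubmitted())` and call `writeHasSubmitted()` right after `setHasSubmitted(true)`.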
Ultimately, the important thing is you are getting the information you need from your future users, and you are saving it durably.
## Next steps/Plug
After building out your waitlist, you will probably begin to build out an MVP of your product. You can dramatically speed up that process by using [PropelAuth](https://www.propelauth.com) - a hosted authentication service which provides a complete login and account management experience for both B2C and B2B businesses.
All UIs that your users will need are already built (from login to profile pages to organization management) and configurable via a simple UI. Your users get powerful features like 2FA and it only takes minutes to set up. We hope you'll check it out!
## Attributions
- The image of code was generated from [Carbon](https://carbon.now.sh)
- The placeholder logo is from [Logoipsum](https://logoipsum.com/) | propelauthblog |
925,707 | Getting Started with the React AutoComplete Component | Learn how easily you can create and configure the React AutoComplete component of Syncfusion using... | 0 | 2021-12-14T04:59:47 | https://dev.to/syncfusion/getting-started-with-the-react-autocomplete-component-30ok | react, reactnative, webdev, webcomponents | Learn how easily you can create and configure the [React AutoComplete component](https://www.syncfusion.com/react-ui-components/react-autocomplete) of Syncfusion using the create-react-app command. This video also explains how to configure a few of the control’s basic features like binding list and remote data and customizing pop-up height and width.
Download an example from GitHub: https://github.com/SyncfusionExamples...
Refer to the following documentation for the Syncfusion React AutoComplete component: https://ej2.syncfusion.com/react/docu...
Check out this online example of the React AutoComplete component: https://ej2.syncfusion.com/react/demo...
{% youtube qpnnfN_E8PY %} | techguy |
925,725 | Tailwind CSS v3 Button Examples | In this section we will create tailwind css 3 button, tailwind v3 light button, button with icon,... | 0 | 2021-12-14T06:03:35 | https://larainfo.com/blogs/tailwind-css-v3-button-examples | tailwindcss, css, tutorial, webdev | In this section, we will create a range of buttons with Tailwind CSS version 3: simple buttons, light buttons, buttons with icons, glow buttons, and buttons with hover effects. Before we start, you need to install and set up a Tailwind v3 project; you can check the article below.
[How to install & setup Tailwind CSS v3](https://larainfo.com/blogs/how-to-install-setup-tailwind-css-v3)
#### Example 1
tailwind v3 simple button with every new tailwind v3 color
```html
<button class="px-6 py-2 rounded bg-slate-400 hover:bg-slate-500 text-slate-100">Button</button>
<button class="px-6 py-2 rounded bg-zinc-400 hover:bg-zinc-500 text-zinc-100">Button</button>
<button class="px-6 py-2 rounded bg-neutral-400 hover:bg-neutral-500 text-neutral-100">Button</button>
<button class="px-6 py-2 rounded bg-stone-400 hover:bg-stone-500 text-stone-100">Button</button>
<button class="px-6 py-2 text-orange-100 bg-orange-400 rounded hover:bg-orange-500">Button</button>
<button class="px-6 py-2 rounded bg-amber-400 hover:bg-amber-500 text-amber-100">Button</button>
<button class="px-6 py-2 rounded bg-lime-400 hover:bg-lime-500 text-lime-100">Button</button>
<button class="px-6 py-2 rounded bg-emerald-400 hover:bg-emerald-500 text-emerald-100">Button</button>
<button class="px-6 py-2 text-teal-100 bg-teal-400 rounded hover:bg-teal-500">Button</button>
<button class="px-6 py-2 rounded bg-cyan-400 hover:bg-cyan-500 text-cyan-100">Button</button>
<button class="px-6 py-2 rounded bg-sky-400 hover:bg-sky-500 text-sky-100">Button</button>
<button class="px-6 py-2 rounded bg-violet-400 hover:bg-violet-500 text-violet-100">Button</button>
<button class="px-6 py-2 text-purple-100 bg-purple-400 rounded hover:bg-purple-500">Button</button>
<button class="px-6 py-2 rounded bg-fuchsia-400 hover:bg-fuchsia-500 text-fuchsia-100">Button</button>
<button class="px-6 py-2 rounded bg-rose-400 hover:bg-rose-500 text-rose-100">Button</button>
```

#### Example 2
tailwind v3 light button
```html
<button class="px-6 py-2 text-sm rounded shadow bg-slate-100 hover:bg-slate-200 text-slate-500">Button</button>
<button class="px-6 py-2 text-sm text-orange-500 bg-orange-100 rounded shadow hover:bg-orange-200">Button</button>
<button class="px-6 py-2 text-sm rounded shadow bg-emerald-100 hover:bg-emerald-200 text-emerald-500">Button</button>
<button class="px-6 py-2 text-sm rounded shadow bg-cyan-100 hover:bg-cyan-200 text-cyan-500">Button</button>
<button class="px-6 py-2 text-sm rounded shadow bg-violet-100 hover:bg-violet-200 text-violet-500">Button</button>
<button class="px-6 py-2 text-sm rounded shadow bg-rose-100 hover:bg-rose-200 text-rose-500">Button</button>
```

#### Example 3
tailwind v3 button with icon
```html
<div class="flex">
<button type="button"
class="inline-flex items-center px-6 py-2 text-sm font-medium text-center rounded text-rose-100 bg-rose-500 hover:bg-rose-600">
Read more
<svg xmlns="http://www.w3.org/2000/svg" class="w-6 h-6 ml-2 mt-0.5" fill="none" viewBox="0 0 24 24"
stroke="currentColor">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M17 8l4 4m0 0l-4 4m4-4H3" />
</svg>
</button>
<button type="button"
class="inline-flex items-center px-6 py-2 text-sm text-center rounded text-cyan-500 bg-cyan-100 hover:bg-cyan-200">
Read more
<svg xmlns="http://www.w3.org/2000/svg" class="w-6 h-6 ml-2 mt-0.5" fill="none" viewBox="0 0 24 24"
stroke="currentColor">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M17 8l4 4m0 0l-4 4m4-4H3" />
</svg>
</button>
<button class="inline-flex items-center px-6 py-2 text-sm text-green-100 bg-green-600 rounded hover:bg-green-700">
<svg xmlns="http://www.w3.org/2000/svg" class="w-5 h-5 mr-2 text-white" fill="none" viewBox="0 0 24 24"
stroke="currentColor">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2"
d="M3 3h2l.4 2M7 13h10l4-8H5.4M7 13L5.4 5M7 13l-2.293 2.293c-.63.63-.184 1.707.707 1.707H17m0 0a2 2 0 100 4 2 2 0 000-4zm-8 2a2 2 0 11-4 0 2 2 0 014 0z" />
</svg>
<span>Buy Now</span>
</button>
</div>
```

#### Example 4
tailwind v3 glow button with rounded, transition-colors, duration, shadow etc.
```html
<button class="px-6 py-2 text-sm transition-colors duration-300 rounded rounded-full shadow-xl bg-slate-500 hover:bg-slate-600 text-slate-100 shadow-slate-400">Button</button>
<button class="px-6 py-2 text-sm text-blue-100 transition-colors duration-300 bg-blue-500 rounded-full shadow-xl hover:bg-blue-600 shadow-blue-400">Button</button>
<button class="px-6 py-2 text-sm transition-colors duration-300 rounded rounded-full shadow-xl text-emerald-100 bg-emerald-500 hover:bg-emerald-600 shadow-emerald-400">Button</button>
<button class="px-6 py-2 text-sm transition-colors duration-300 rounded rounded-full shadow-xl text-cyan-100 bg-cyan-500 hover:bg-cyan-600 shadow-cyan-400">Button</button>
<button class="px-6 py-2 text-sm transition-colors duration-300 rounded rounded-full shadow-xl text-violet-100 bg-violet-500 hover:bg-violet-600 shadow-violet-400">Button</button>
```

tailwind v3 shadow color button with a perfect color shadow. (recommended)
```html
<button class="px-6 py-2 text-sm transition-colors duration-300 rounded rounded-full shadow-xl bg-slate-500 hover:bg-slate-600 text-slate-100 shadow-slate-400/30">Button</button>
<button class="px-6 py-2 text-sm text-blue-100 transition-colors duration-300 bg-blue-500 rounded-full shadow-xl hover:bg-blue-600 shadow-blue-400/30">Button</button>
<button class="px-6 py-2 text-sm transition-colors duration-300 rounded rounded-full shadow-xl text-emerald-100 bg-emerald-500 hover:bg-emerald-600 shadow-emerald-400/30">Button</button>
<button class="px-6 py-2 text-sm transition-colors duration-300 rounded rounded-full shadow-xl text-cyan-100 bg-cyan-500 hover:bg-cyan-600 shadow-cyan-400/30">Button</button>
<button class="px-6 py-2 text-sm transition-colors duration-300 rounded rounded-full shadow-xl text-violet-100 bg-violet-500 hover:bg-violet-600 shadow-violet-400/30">Button</button>
```

tailwind v3 outline button with shadow glow color
```html
<button class="px-6 py-2 text-sm text-indigo-500 transition-colors duration-300 border-2 border-indigo-400 rounded-full shadow-xl shadow-indigo-300/30 hover:bg-indigo-500 hover:text-indigo-100">Button</button>
<button class="px-6 py-2 text-sm transition-colors duration-300 border-2 rounded-full shadow-xl text-cyan-500 border-cyan-400 shadow-cyan-300/30 hover:bg-cyan-500 hover:text-cyan-100">Button</button>
<button class="px-6 py-2 text-sm transition-colors duration-300 border-2 rounded-full shadow-xl text-rose-500 border-rose-400 shadow-rose-300/30 hover:bg-rose-500 hover:text-rose-100">Button</button>
```

#### Example 5
tailwind css v3 buttons with hover effects using group-hover, transform, transition, group-hover:-translate, etc.
```html
<button class="relative px-6 py-2 group">
<span class="absolute inset-0 w-full h-full transition duration-300 ease-out transform translate-x-1 translate-y-1 bg-black group-hover:-translate-x-0 group-hover:-translate-y-0"></span>
<span class="absolute inset-0 w-full h-full bg-white border-2 border-black group-hover:bg-black"></span>
<span class="relative text-black group-hover:text-white"> hover effect</span>
</button>
<button class="relative px-6 py-2 group">
<span class="absolute inset-0 w-full h-full transition duration-300 ease-out transform translate-x-1 translate-y-1 bg-red-700 group-hover:-translate-x-0 group-hover:-translate-y-0"></span>
<span class="absolute inset-0 w-full h-full bg-white border-2 border-red-700 group-hover:bg-red-700"></span>
<span class="relative text-red-700 group-hover:text-red-100"> hover effect </span>
</button>
<button class="relative px-6 py-2 group">
<span class="absolute inset-0 w-full h-full transition duration-300 ease-out transform translate-x-1 translate-y-1 bg-cyan-700 group-hover:-translate-x-0 group-hover:-translate-y-0"></span>
<span class="absolute inset-0 w-full h-full bg-white border-2 border-cyan-700 group-hover:bg-cyan-700"></span>
<span class="relative text-cyan-700 group-hover:text-cyan-100"> hover effect</span>
</button>
<button class="relative px-6 py-2 group">
<span class="absolute inset-0 w-full h-full transition duration-200 ease-out transform translate-x-1 translate-y-1 bg-indigo-700 group-hover:-translate-x-0 group-hover:-translate-y-0"></span>
<span class="absolute inset-0 w-full h-full bg-white border-2 border-indigo-700 group-hover:bg-indigo-700"></span>
<span class="relative text-indigo-700 group-hover:text-indigo-100"> hover effect </span>
</button>
```
 | saim_ansari |
925,823 | Parallelizing Tasks Using MongoDB | Recently I published a post on our company's engineering blog. Check it out if you work with MongoDB... | 0 | 2021-12-14T06:50:58 | https://dev.to/eidellev/parallelizing-tasks-using-mongodb-24pj | mongodb, typescript, node, backend | Recently I published a [post](https://medium.com/zencity-engineering/parallelizing-tasks-using-mongodb-d788429b2b67) on our company's engineering blog.
Check it out if you work with MongoDB and want to find out how you can use it for parallel processing.
| eidellev |
925,835 | Database relation with laravel | Artisan Command: php artisan db:seed | 0 | 2021-12-14T07:23:03 | https://dev.to/vatheara/database-relation-with-laravel-24mm | laravel |
Artisan Command:
`php artisan db:seed` | vatheara |
925,850 | get user in AppServiceProvider Laravel | Laravel session is initialized in a middleware so you can't access the session from a Service... | 0 | 2021-12-14T07:49:50 | https://dev.to/vatheara/get-user-in-appserviceprovider-laravel-5cj9 | laravel | The Laravel session is initialized in a middleware, so you can't access the session from a service provider, because providers execute before the middleware in the request lifecycle.
If for some other reason you want to do it in a service provider, you could use a view composer with a callback, like this:
```php
view()->composer('*', function ($view) {
    if (Auth::check()) {
        // $view->with('currentUser', Auth::user());
        dd(Auth::user());
    } else {
        // $view->with('currentUser', null);
        dd('nope not logged in');
    }
});
```
| vatheara |
925,900 | Top Data Recovery Tools For APFS Drives | Read this article about top tools to recover data lost from ApFS drives used on Mac computers or... | 0 | 2021-12-14T09:24:51 | https://dev.to/hetmansoftware/top-data-recovery-tools-for-apfs-drives-2obc | beginners, testing, tutorial, security | Read this article about the top tools to recover data lost from APFS drives used on Mac computers or other Apple devices. We will explore thoroughly what each of the utilities can do!
## Introduction
Apple File System, or APFS, is the new file system by Apple, used on recent Mac devices. However, this file system is in no way an extension of HFS+. In APFS, you are not going to find the structures we remember from HFS+ (the Catalog File, Attribute File, Allocation File, and Extents Overflow File), nor the journal. The new file system uses a different approach to protecting changes to files and their data.
As we know, this file system has been optimized for flash drives and SSDs.
The main innovations in this file system are improved encryption algorithms, optimized memory usage, crash protection, cloning of files and folders, and smart space usage patterns. In practice, it means more stable operation, increased read/write speeds, and even more protection for user data. But what if crash protection didn't work and some data was lost?
YouTube: {% youtube e1xg5hANli0 %}
## The method of recovery
APFS offers an opportunity to restore certain states of the file system, including restoring old or removed versions of files. The container superblock contains a link to an element known as a checkpoint. Such a checkpoint refers to the preceding container superblock, which stores information on an older state of the file system. This way, we can try to restore several older states of the file system by analyzing this chain of superblocks inside the container.
APFS is a file system that makes use of the copy-on-write principle, which is why every block is copied before changes are applied. Therefore, there is a kind of history for all files which have not been overwritten and still comply with the file system structure. This fact leads to a number of artifacts which can be used in the course of file recovery.
Based on what we know about such artifacts, we have defined several approaches to file recovery, each relying on a different type of artifact as its starting point. All methods deal with the file system in blocks of 4096 bytes, which is the smallest block size observed in APFS. These blocks are checked for the presence of metadata structures, which are, in their turn, analyzed and used for file extraction.
Only Mac computers running macOS High Sierra or later can read and write to APFS disks. Windows computers require special software to access such volumes.
Data recovery utilities let you recover data from APFS drives without having to use any additional software. They find partitions of this type and add them to the drive manager. In order to recover any information, you need to connect an APFS drive to a Windows computer.
The APFS file system is designed to store data in its root directory, which contains all other directories and files, including the ones we’re interested in.
We have conducted a benchmark involving most popular data recovery tools, and you can find all the results below.
## Top Tools to Recover Data from APFS Drives
On a computer with mac OS Catalina, we have created a structure of several containers, with volumes inside each of them. After that, we have scanned the test disk with most popular data recovery solutions.

Initially, we selected the following products for the test: Hetman Partition Recovery, R-Studio, EaseUS Data Recovery Wizard, Disk Drill, and Recuva. As we examined them more closely, we had to exclude Disk Drill and Recuva from the list, as they don't support the APFS file system. This was astonishing, because these products are among the most popular solutions, and Disk Drill even sets the recovery standard for Mac computers.
In the end, we started the test to see which of the three utilities performs best of all: Hetman Partition Recovery, R-Studio, or EaseUS Data Recovery Wizard.
We have copied some photos, videos, and documents to the test disk, and then removed some of the data.

We performed the tests on a computer running Windows 10.
## Testing Hetman Partition Recovery
The program recognized the test disk with APFS file system properly. In this case of a simple deletion, a fast scan will suffice.

The program was able to find all the files without effort; both the existing files and the removed files are displayed, and the ones that have been deleted are marked with a red cross. Their contents can be previewed if necessary. The disk structure and file names are retained.

All we have to do is to save the recovered files to a disk.
## Testing R-Studio
This program also recognizes the test disk and identifies the file system type properly.

However, after a quick scan it can’t display any removed data.

After the full scan, the program managed to find the deleted data, and marked it with a red cross. The disk structure and file names are retained, and the files are available for preview.

## Testing EaseUS
This program displays the test disk, but we could only identify it by its size, because neither its name nor file system type are shown.

There is no such thing as a quick scan here, so advanced scan starts immediately.
In the end, EaseUS fails to display the disk structure (which the other two candidates managed to do), file names are lost, and the files are only sorted into folders by file type. There are no markings to suggest whether this is the deleted data or the data which is still on the disk, so it's hard to tell whether the program was able to find only removed files, or whether it decided to display all the files.

The only hint we could use was the number of documents, photos, and videos shown in each folder.

The program coped with the task, though it took more time and was unable to restore the directory tree.
Summing up, all the candidates passed the first test, but some were rather inconvenient to use.
One more important remark: EaseUS has no option to save a disk image and then mount it for further recovery operations, which is risky when you're dealing with data loss. Every scan of the actual volume carries a risk of losing important data, so the safest way is to scan a disk image rather than the volume itself. This approach increases the chances of recovering files without causing additional damage.
## Container Superblock removed
We decided to make things more complicated and simulate damage to the container superblock, which is located in the first two sectors of the test disk.
Using a hex editor, we erased these two sectors. After that, we scanned the disk with each of the utilities and received some interesting results.
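For readers who want to reproduce this step safely, the same wipe can be sketched in Python against a disk-image file rather than a live volume (the file name and sizes here are illustrative, not the ones from our test):

```python
import os

SECTOR_SIZE = 512            # bytes per sector on the test disk
IMAGE = "apfs-test.img"      # hypothetical disk-image file, NOT a real device

def wipe_superblock(path, sectors=2, sector_size=SECTOR_SIZE):
    """Overwrite the first `sectors` sectors with zeroes,
    simulating the container-superblock damage described above."""
    with open(path, "r+b") as f:
        f.write(b"\x00" * sectors * sector_size)

# Build a small fake image, then wipe its first two sectors.
with open(IMAGE, "wb") as f:
    f.write(os.urandom(16 * SECTOR_SIZE))
wipe_superblock(IMAGE)
```

Working on an image like this also matches the advice above: run recovery experiments against a copy, never the original volume.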

| hetmansoftware |
928,380 | Under the Lid: How AtomicJar is Reshaping Testcontainers | Let’s get nerdy with it. On this week’s episode of Dev Interrupted, Dan gets technical with Sergei... | 0 | 2021-12-16T17:32:39 | https://devinterrupted.com/podcast/under-the-lid-how-atomicjar-is-reshaping-testcontainers/ | testing, cloud, techtalks, podcast | Let’s get nerdy with it.
On this week’s episode of Dev Interrupted, Dan gets technical with Sergei Egorov, co-founder and CEO of [AtomicJar](https://www.atomicjar.com/).
With the mission to make integrated testing simpler and easier, AtomicJar created the Testcontainers Cloud which allows developers to test their code against real dependencies, not mocks. Today, Testcontainers powers over a million builds per month, helping developers build and release their software with confidence.
Dan and Sergei also talk about the difficulty of finding time to code once you become a CEO, the challenges of building a product for developers, and the culture differences between Russian devs and U.S. devs.
If you’re a developer or enjoy learning about [dev tools](https://linearb.io/blog/workerb-developer-automation/?__hstc=75672842.b37abbbdf4f34a742895a6b2675da07e.1632418321637.1639602501081.1639674868896.174&__hssc=75672842.5.1639674868896&__hsfp=1615045989), this is the episode for you!
{% spotify spotify:episode:53pBC3LXms5PUoBrrSA4qq %}
## Episode Highlights Include:
* How Sergei became a [Java champion](https://dev.java/community/jcs/)
* What it's like to grow up in [Siberia](https://www.google.com/search?q=siberia&rlz=1C1CHBF_enUS949US949&oq=siberia&aqs=chrome..69i57j46i433i512l4j46i131i433i512j0i131i433i512j46i512j0i433i512j46i433i512.2072j0j4&sourceid=chrome&ie=UTF-8)
* Cultural differences between U.S. and Russian devs
* Letting go of writing code when you become CEO
* Why it's hard to build a product for developers
## Join the Dev Interrupted Community
With over 2000 members, the Dev Interrupted Discord Community is the best place for Engineering Leaders to engage in daily conversation. No sales people allowed. [Join the community >>](https://discord.com/invite/devinterrupted)
 | conorbronsdon |
928,574 | How to Find Which Career is Right for Me: The Ultimate Guide | It is the age-old question: ‘What should I be when I grow up?’ This decision is made easier for... | 0 | 2021-12-16T20:45:29 | https://remoteful.dev/blog/how-find-career-right-for-me | remotework, remotejob, job, remotejobs | 
It is the age-old question: ‘What should I be when I grow up?’ This decision is made easier for some of us by a family with the same career. Others might have an internship in their preferred field. But many others are left wondering, **“how do I find which career is right for me?”**
I agree this is a common problem, but the good news is that this article will guide you to find a career right for you.
In a moment, we will go through the ways. Just one special request: please read the article attentively rather than just scanning it.
It is going to be a great read. Please have a look below.
## How to find which career is right for me?
Since you are undecided on what you want to do, you can take some steps to find the right career for you.
### Explore your interests:
We have all heard "follow your passion." But what does it mean, really?
It is important to think about what you are interested in outside of work. For instance, do you like drawing? If so, how can you incorporate these interests into your career?
You may want to explore careers in art or design. You might find a job that excites you more than anything else! Do not discount jobs that do not seem related to your passions.
So, it’s time to **explore interests** since you are thinking about **‘how to find a career that is right for me?’**

Have a look at the list below if you are having trouble identifying your passions:
- Is there something you always enjoy doing?
- Find out what you have been reading for hours.
- Ask your family, friends, and colleagues what they think your interests are.
- What internal or external issues would you willingly solve in previous (or current) companies?
- What outcomes give you the most pleasure?
- What three things did you genuinely enjoy doing today?
These excellent questions may help you **explore your interests** that can signal you to choose the appropriate career.
### Find out what you are skilled at (personal strengths):
Identifying your **personal strengths** and weaknesses can be a fantastic place to start to find the perfect career.
Consider a career as an emergency room physician or a fireman if you are good at handling chaotic situations.
It is essential to know your personal strengths because they will help you identify which areas of entrepreneurship interest you.
If you are a great team-builder and good at problem-solving, consider a role in human resources or consulting.
If you like working with data and numbers, you might enjoy an accountant or statistician career. If reading other people's emotions excites you, try psychology or social work.
If creativity comes easily to you, maybe graphic design will be the best fit for your personality!
There are many **types of career** paths. You can decide based on your **personal strengths.**
### Understand what you want in a career:
**What do you want from your career?** What are your values? For example, if you value having a prominent career and a high salary, a low-paying, low-profile occupation is unlikely to satisfy you.
There's no right or wrong way to figure out what you want in a career— just be honest with yourself. Think who you are and what you seek in life.
### Ask a mentor:
Never undervalue the importance or power of a good mentor! When I decided to change careers, I realized I couldn't do it alone. I chose to work with mentors because I wanted to learn from the best.
No matter what type of mentor you select, you will receive direction and assistance in furthering your career.
### Learn more about your personality type:
There are various methods for determining your **personality type;** many of them are based on your reactions to certain situations.

You may ask me, “what’s the relationship between personality type and finding the right career?”
Have patience, my dear, and read on.
Different personality types may naturally gravitate toward different interests and skills, including occupations. Various personality tests provide a list of common career options for each personality type.
After taking several **personality type tests,** if one or two careers appear across multiple tests, that career is likely worth researching.
Now, how can you identify your **personality type?**
Some tools, such as the Jungian type index, the Keirsey temperament sorter, etc., can help you.
### Do trial jobs:
I think it’s an excellent way. One of my friends used this method to find the right career. He is very pleased with his career now.
Job trials effectively eliminate possibilities that may appear exciting but won’t work in the long run.
When you start a new work, you may be put on probation for a few months, so why not put the jobs on probation as well to see if you enjoy them?
It sometimes happens that companies have no budget to hire you as temporary staff. In that case, you could look for a short internship or job shadowing.
This way, you could find the right career for you.
### Self-help books:

My dear, I think now you already know many ways. You have got the answer to **‘how to find which career is right for me’** by this time.
I think you will not disagree to know a little further. What do you say?
There are lots of books that are ready to guide you. I recommend a book named [‘The ultimate guide to choosing the right career path’](https://www.amazon.com/Ultimate-Guide-Choosing-Right-Career-ebook/dp/B00OVJIYDQ). I think this is one of the **best books for career guidance.** This 98-page book will help those who are confused about making the best choices for their career.
The book's primary goal is to remove all the doubts when someone chooses a career. You can include many more **best books for career guidance** in your list.
**Career hacks:** This **career hack** is for those who are already in a job and don't have the luxury of deciding on the right career in their present situation.
Having fun with your colleagues and chatting with them during breaks may boost your job satisfaction.
## Conclusion:
Yes, you have finally finished reading the article **'how to find which career is right for me.'**

Now it's your turn to follow the ways that I have mentioned.
Identifying your **personal strengths, personality types,** passion, and skills is the most important. You may take help from a mentor and read some of the **best books for career guidance.**
You may look for doing a trial job or short internship to find the best career for you. After following all those, you are ready to apply for the right career job.
For your job application, you may consider [REMOTEful](https://remoteful.dev/). | ryy |
928,831 | Colon Broom Reviews – What is Colon Broom? | Is Colon Broom safe to take? | Colon Broom may assist address the issues with the gastrointestinal systems. Colon Broom is an... | 0 | 2021-12-17T05:43:18 | https://dev.to/colon_broom/colon-broom-reviews-what-is-colon-broom-is-colon-broom-safe-to-take-1601 | [Colon Broom](https://colonbroomus.wixsite.com/colonbroom) may assist address the issues with the gastrointestinal systems. Colon Broom is an supplement that might assist with overseeing stomach-related issues and keeping up with stomach-related wellbeing. Colon Broom is a dietary supplement detailed and sold by Max Nutrition LLC. The supplement is powerful in treating obstruction, swelling, sporadic defecations, and other stomach-related issues. Colon Broom supplement may likewise assist with advancing weight loss and further develop assimilation processes and the overall support of stomach-related wellbeing.
https://colonbroomus.wixsite.com/colonbroom
https://colonbroomus.jimdosite.com/
https://colonbroom.godaddysites.com/
https://promosimple.com/giveaways/colon-broom-reviews-what-is-colon-broom-is-colon-broom-safe-to-take/
https://promosimple.com/ps/18876/colon-broom
https://sites.google.com/view/colon-broom-us/home
https://sites.google.com/view/colonbroomus/home
https://fetchbinarydog.com/colon-broom/
https://www.provenexpert.com/colon-broom/
https://colon-broom.footeo.com/actualite/2021/12/17/colon-broom-reviews-what-is-colon-broom-is-colon-broom-safe-to-.html
https://colonbroom.clubeo.com/news/2021/12/17/colon-broom-reviews-what-is-colon-broom-is-colon-broom-safe-to-
https://caramellaapp.com/colonbroom/jOzuKkJYB/colon-broom-reviews-what-is-colon-broom
https://caramel.la/colonbroom/jOzuKkJYB/colon-broom-reviews-what-is-colon-broom
https://mymediads.com/articles/90399
https://telescope.ac/colonbroom
https://en.gravatar.com/colonbroompill
https://www.beatstars.com/colonbroom/about
https://linktr.ee/colonbroom
https://www.quora.com/What-is-a-colon-broom/answer/Colon-Broom-1
https://twitter.com/Colon_Broom
https://twitter.com/Colon_Broom/status/1471712811306860552
| colon_broom | |
929,096 | Magento Migration- Have You Given a Thought to Theme Migration | Assuming you weren't one of the individuals who migrated up to Magento 2 before June 2020, you likely... | 0 | 2021-12-17T13:25:45 | https://dev.to/sneharawat/magento-migration-have-you-given-a-thought-to-theme-migration-7cn | magento, design, devops, productivity | Assuming you weren't one of the individuals who migrated up to Magento 2 before June 2020, you likely definitely know why you ought to consider it shortly. Even though your Magento eCommerce store might, in any case, be unblemished and running, the absence of value fixes and security patches is reason enough to say goodbye to Magento 1 and proceed with Magento 2 with Magento Migration Services. This is because security upkeep and backing are indispensable to have the option to warrant a protected and smooth insight for your Magento eCommerce development.
The [**Magento Migration Services**](https://www.orangemantra.com/services/magento-migration/) can be isolated into four key stages. These stages incorporate theme migration, the migration of any augmentations you might have introduced, customizations you want, and ultimately the migration of every one of your information. Magento theme migration is one of the main stages in the process since, in such a case that your theme hasn't been migrated flawlessly, your eCommerce store might be in a difficult situation!
## Why You Should Think About Magento Theme Migration
How does the theme even help you? Is it exactly how the store looks? No. Your Magento theme is considerably more than a simple ornamental backdrop. It decides the configuration, design just as the usefulness of your eCommerce store. The theme is the thing that guarantees that your clients have a steady and smooth client experience with your site, regardless of whether that is as far as perusing or making exchanges.
Since you comprehend and recognize the significance of the Magento theme, how about we talk regarding the reason why you ought to be not kidding about the theme migration. At whatever point you choose to update from Magento 1 to 2, you can't simply migrate your theme easily, all things considered. This is because Magento Migration Services is not the same as Magento 1 and surprisingly your theme will require many changes in its coding to assist with making it viable with **[Magento 2 development](https://www.orangemantra.com/services/magento-2-development/)**, and therefore you want to give it some genuine idea.
So except if you have a group of Magento advancement specialists or are a Magento proficient yourself, it's to your greatest advantage to employ Magento designers from a presumed Magento 2 development organization to play out the theme migration for you.
## Magento Theme Migration: Multiple Options to Choose From
As a Magento 2 development, you have various theme choices to deal with the appearance of your new M2 store. Be that as it may, your decision will rely upon your necessities and reasons.
### Do You Need the Exact Same Theme?
Assuming you have caused another site to as of late and don't have any desire to change the M1 theme, you can make a comparable theme that is viable with the M2 rendition. Typically, vendors who would rather not change the client shopping experience pick recreating a similar theme for Magento 2. It might require some investment and cost you additional cash however to keep up with a similar degree of consistency, you can pick this choice.
### You Have Ideas
Assuming you have your very own thoughts, work with a Magento engineer and assemble a one-of-a-kind theme. At the point when you make a theme without any preparation, you will want to fuse every one of the functionalities needed for your **[Magento eCommerce development](https://www.orangemantra.com/services/magento-ecommerce-development/)**. Ensure you list down the entirety of your thoughts and make another plan for your store. Having a visual depiction will provide you with an unmistakable thought of how the site will examine what's to come. You want a major financial plan for making another theme since you will require help from experienced front-end and back-end designers. It will likewise imply that the dispatch of your internet business store will be postponed until the theme is conclusive.
Essentially, assuming that you like what you see on the web, you can request that the designers recreate a similar theme for your store. It will be less tedious than building a spic and span theme because the engineers should clone the look and features of another site.
### Are You Budget-Conscious?
Assuming your spending plan is limited, you don't have to pick the default Magento theme. There are many free responsive Magento 2 themes accessible in the commercial center. Notwithstanding, recall that a greater part of the free themes has restricted usefulness and basic plan. It will require some investment to observe a theme that matches your prerequisites. You will likewise need to invest extensive energy testing the themes and guaranteeing the right decision for your site.
## Conclusion
In the wake of finding out pretty much every one of the various choices, what is your decision? Do you like the easy instant themes or will you expect the errand of making a theme without any preparation? Settle on a choice later with a careful examination of your circumstances. Assuming you are in a rush and need a speedy arrangement without thinking twice about features, go for an instant Magento 2 theme.
Would you like to migrate from Magento 1 to Magento 2? **[Hire a Magento developer](https://www.orangemantra.com/services/hire-magento-developer/)** from a reputed organization that offers end-to-end Magento migration administrations to make the action straightforward and easy for your business. It will likewise help you in picking the best Magento 2 themes to guarantee an excellent and useful internet business store.
| sneharawat |
930,237 | Ensuring Santa's Success With Automated Tests (C# Advent 2021) | An exploration of TDD in the spirit of the season. | 0 | 2021-12-19T03:49:47 | https://seankilleen.com/2021/12/santa-sleigh/ | csharp, dotnet, nunit, xunit | ---
title: "Ensuring Santa's Success With Automated Tests (C# Advent 2021)"
published: true
description: "An exploration of TDD in the spirit of the season."
tags:
- csharp
- dotnet
- nunit
- xunit
canonical_url: https://seankilleen.com/2021/12/santa-sleigh/
cover_image: https://seankilleen.com/images/overlays/santa-sleigh-g6328721e2_1920.jpg
---
Happy Holiday season, everyone! I'm happy to have snagged one of the spots in this year's [C# Advent Calendar](https://www.csadvent.christmas). Be sure to check out the other 49 great posts when you're done here!
_____
I'm a big fan of automated testing, but I know that many developers in the C# space are still getting used to some of the concepts, so I figured we'd take this time to do a gentler introduction to test automation with a slightly contrived scenario, and solve it using NUnit and xUnit.
If you're already very used to automated testing, or have very strong opinions on it, this post likely won't be for you as it's meant to be a gentle introduction to applying some of the concepts. But I hope you'll get something useful out of it regardless!
_____
You can read the rest of these tutorials -- covering examples in both NUnit and xUnit -- at <https://seankilleen.com/2021/12/santa-sleigh/>. | seankilleen |
936,343 | Sort String by Flipping | Description You are given a string s consisting of the letters "x" and "y". In addition,... | 0 | 2021-12-25T12:52:27 | https://dev.to/jiangwenqi/sort-string-by-flipping-4ja3 | java, programming, algorithms, 100daysofcode | # [Description](https://binarysearch.com/problems/Sort-String-by-Flipping)
You are given a string `s` consisting of the letters `"x"` and `"y"`. In addition, you have an operation called `flip`, which changes a single `"x"` to `"y"` or vice versa.
Determine the smallest number of times you would need to apply this operation to ensure that all `"x"`'s come before all `"y"`'s.
**Constraints:**
- `0 ≤ n ≤ 100,000` where `n` is the length of `s`
## Example 1
### Input
```
s = "xyxxxyxyy"
```
### Output
```
2
```
### **Explanation**
```
It suffices to flip the second and sixth characters.
```
---
# Intuition
Any valid final string is a block of `"x"`s followed by a block of `"y"`s, so it is fully determined by a split point: every character before the split must become `"x"` and every character from the split onward must become `"y"`. The cost of splitting at index `i` is the number of `"y"`s before `i` plus the number of `"x"`s from `i` to the end. Precompute the suffix counts of `"x"` in one right-to-left pass, then sweep left to right while tracking how many `"y"`s have been seen so far, and take the minimum cost over all candidate splits (including the split after the last character, which simply flips every `"y"`).
# Implementation
```java
import java.util.*;

class Solution {
    public int solve(String s) {
        int length = s.length();
        int[] x = new int[length];
        int currX = 0;
        // Suffix pass: x[i] = number of 'x' characters at index i or later.
        for (int i = length - 1; i >= 0; i--) {
            if (s.charAt(i) == 'x') {
                currX++;
            }
            x[i] = currX;
        }
        int ans = Integer.MAX_VALUE;
        int currY = 0;
        // Prefix pass: try splitting at each 'y'; the cost is the count of
        // 'y's before it plus the count of 'x's from it onward.
        for (int i = 0; i < length; i++) {
            if (s.charAt(i) == 'y') {
                ans = Math.min(ans, currY + x[i]);
                currY++;
            }
        }
        // currY alone covers flipping every 'y' (split after the last char).
        return Math.min(currY, ans);
    }
}
```
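To sanity-check the approach, here is a standalone version of the same algorithm with a small driver (the class and method names here are ours, not part of the problem):

```java
public class FlipDemo {
    // Same algorithm as Solution.solve above, inlined for a standalone demo.
    static int minFlips(String s) {
        int n = s.length();
        int[] x = new int[n];              // x[i] = count of 'x' at index i or later
        int c = 0;
        for (int i = n - 1; i >= 0; i--) {
            if (s.charAt(i) == 'x') c++;
            x[i] = c;
        }
        int ans = Integer.MAX_VALUE;
        int currY = 0;
        for (int i = 0; i < n; i++) {
            if (s.charAt(i) == 'y') {
                ans = Math.min(ans, currY + x[i]); // split at this 'y'
                currY++;
            }
        }
        return Math.min(currY, ans);       // also allow flipping every 'y'
    }

    public static void main(String[] args) {
        System.out.println(minFlips("xyxxxyxyy")); // 2 (the example above)
        System.out.println(minFlips("yyxx"));      // 2 (flip either pair)
        System.out.println(minFlips(""));          // 0 (nothing to flip)
    }
}
```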
## Time Complexity
- Time: O(n)
- Space: O(n) | jiangwenqi |
937,386 | 5 Reasons Why You Should Prefer PHP for Website Development | PHP has emerged as a leading technology for creating dynamic and static websites and web... | 0 | 2021-12-27T09:53:19 | https://dev.to/freita_browning/5-reasons-why-you-should-prefer-php-for-website-development-4c46 | php, webdev, programming |

PHP has emerged as a leading technology for creating dynamic and static websites and web applications, and a great many websites and web apps already run on PHP today. PHP is a popular server-side scripting language that offers several benefits in website development. If you want to develop a complete website or web application for your business, you can choose PHP to create the desired solution. Indeed, PHP provides several advantages over other web development technologies for building well-customized solutions. There are many reasons to choose PHP web development for your website.
## Here are the five reasons for choosing PHP for web development
**1. Open-source and free**
PHP, which is an acronym for Hypertext Preprocessor, is a free and open-source technology. Since it is a free technology, everyone can use this tech without incurring any cost or having a paid license. Like any other open-source technology, PHP has a cost-benefit for any business to create various web solutions. You can choose a reliable PHP development service to create the desired web solutions.
**2. Support multiple platforms**
PHP is a cross-platform technology that means it is supported on different types of operating systems. It is fully compatible with major operating systems and many servers. Hence, PHP scripts can run on most major servers, making it easy to develop web applications on various servers. So, PHP's multi-server and multi-platform compatibility make it a preferred choice for web development.
**3. Flexible and dynamic**
PHP is a flexible technology because of its open-source nature. It allows web developers to build customized solutions. PHP-based websites and web applications are also secure, because PHP offers numerous security features such as robust encryption support. Further, PHP-based web apps can load automatically and don't require manual intervention. PHP provides greater flexibility than other server-side languages, along with the scalability and encryption that make it a robust option for creating top-notch web solutions. You can [hire the best PHP developer](https://www.csschopper.com/hire-dedicated-php-developer-professional.shtml) to create unique web solutions as per your requirements, leveraging the flexibility of the PHP language.
**4. PHP frameworks**
PHP frameworks are conducive to developing top-notch solutions with expedited workflows and simplified coding. There are lots of PHP frameworks, like Laravel, CodeIgniter, CakePHP, Zend, etc., that enable developers to create PHP solutions efficiently. These frameworks offer extensive functions and libraries, and support various development architectures such as MVVM and MVC. PHP frameworks save developers from writing deep-down PHP code by hand and give them additional libraries and tools to create custom web solutions. All in all, these frameworks are powerful tools for creating efficient, out-of-the-box solutions.
**5. PHP powers CMSs**
Another solid reason why PHP web development is a preferred CMS (Content Management System). Drupal, WordPress, Joomla, Magento, and many more CMSs work on PHP that means they are written in PHP. PHP is your entry ticket if you want to develop a custom website with any of these CMSs. You need to [choose a reliable PHP website development company](https://www.csschopper.com/php-web-development.shtml) for creating the desired website using any CMS.
## Final note
Websites have become a crucial factor for every business because they help them reach out to customers easily. However, PHP offers many advantages and features for website development. There are a lot of things like core PHP, CMSs, and PHP frameworks that you can utilize to create a perfect website for your business. You can use these frameworks and platforms to build customized web solutions according to your business needs. However, selecting the right PHP Web Development Company can help you create quality solutions.
| freita_browning |
937,409 | MySQL query timeouts on big database | Hi devs I have a question. I'm maintaining a laravel/MySQL application that has a lot of custom... | 0 | 2021-12-27T10:29:52 | https://dev.to/jobokoth/mysql-query-timeouts-on-big-database-g1l | mysql, laravel, aws, bigdata | Hi devs
I have a question. I'm maintaining a laravel/MySQL application that has a lot of custom reports that are generated using raw sql queries. As the usage grows, we are experiencing a lot of query timeouts because the data keeps growing, despite all our best efforts to optimize the queries, index the tables, etc.
Is there an AWS solution (or otherwise solution) that can help in querying big data and generate reports?
Thanks
| jobokoth |
"Ice cream sandwiches are one of the street snacks that people in Singapore love.
That is why Hawker Star was born: to bring authentic Singapore-style ice cream sandwiches to Vietnam!
Website: https://kemkepsingapore.com
Hotline: 097 908 24 69
Email: hawkerstar.kinhdoanh@gmail.com
Address: 38 Nguyễn Huệ, Bến Nghé, District 1, Ho Chi Minh City
Directions: https://goo.gl/maps/qNVDwyk16ZXjoVdd9
Social media links: https://goo.gl/maps/qNVDwyk16ZXjoVdd9
Fanpage: https://www.facebook.com/hawkerstar
Youtube: https://www.youtube.com/channel/UC5oQklAd-c-VrBZIxH5kLzw
Linkedin: https://www.linkedin.com/in/singapore-kemkep-594323229/
Instagram: https://www.instagram.com/hawkerstar/ "
| kemkephawkerstar |
937,666 | Création de site avec Hugo | Installer un site sur Hugo peux poser des difficultés lors de l’installation. La partie sur... | 0 | 2021-12-27T15:27:39 | https://blog.moulard.org/creation-de-site/ | ---
title: Creating a site with Hugo
published: true
date: 2019-11-13 14:17:21 UTC
tags:
canonical_url: https://blog.moulard.org/creation-de-site/
---
Setting up a Hugo site can raise a few difficulties during installation.
## The Hugo part
### Installing Hugo
On Ubuntu, a simple `snap install hugo` works.
### Creating the site
To create the site, just run the command
```
hugo new site blog
```
### Adding a theme
Browse the [theme list](https://themes.gohugo.io/) for a theme you like. Once you have chosen one, install it under `themes/<theme name>` and add `theme = "<theme name>"` to the `config.toml` file.
#### Installing the [`Cactus Plus`](https://themes.gohugo.io/hugo-theme-cactus-plus/) theme
```
# Fetch the theme from GitHub
git clone https://github.com/nodejh/hugo-theme-cactus-plus themes/hugo-theme-cactus-plus
# Install the correct configuration file in place of the old one
cp themes/hugo-theme-cactus-plus/exampleSite/config.toml .
```
### Creating a blog post
#### Using Hugo
```
hugo new posts/premier-post.md
```
#### By hand
```
$EDITOR content/posts/premier-post.md
```
#### Post metadata
First of all, the metadata is added directly to the markdown file between `---` separators. This gives a document that starts with:
```
---
title: "Creation de site avec Hugo"
date: 2019-11-13T15:17:21+01:00
---
```
Here is a list of the elements you can add to the metadata:
- `title`: sets the article's title
- `date`: states the date the document was written
- `url`: defines the URL used for the post
- `draft`: `[true|false]` marks the post as a draft; drafts are not published in a production build
- `tags`: defines the article's tags and thereby links posts together
- `author`: sets the author's name
- `meta_image`: sets the image representing the article
- `type`: sets the article type
Which can produce a header like this:
```
---
title: "Creation de site avec Hugo"
author: Tom Moulard
draft: false
type: post
date: 2019-11-13T15:17:21+01:00
url: /creation-de-site
categories:
- Uncategorized
tags:
- tutoriel
- installation
- hugo
meta_image: 2019/hugo-creation.png
---
```
### Starting the server locally
```
hugo server
```
#### Starting the server locally with drafts
```
hugo server -D
```
### Viewing the website
Now just open [localhost:1313](http://localhost:1313) to see the result.
```
$BROWSER http://localhost:1313
```
## Site configuration
The goal of the next section is to modify the `config.toml` configuration file.
### Author settings
```
author = "<Author's name>"
description = "<Site description>"
bio = "<Author's bio>"
```
### Disqus
You need an account on the [Disqus](https://disqus.com) site (or [create one](https://disqus.com/profile/signup/)). Then create a [new site](https://disqus.com/admin/create/) and grab the site's shortname.
```
enableDisqus = true
disqusShortname = "<site shortname>"
```
## Deployment
### Using docker-compose
```
hugo:
image: jojomi/hugo:latest
volumes:
- ./src/:/src
environment:
- HUGO_WATCH=true
- HUGO_REFRESH_TIME=3600
- HUGO_THEME=<theme name>
- HUGO_BASEURL=mydomain.com
restart: always
```
For example:
```
hugo:
image: jojomi/hugo:latest
volumes:
- .:/src
environment:
- HUGO_WATCH=true
- HUGO_REFRESH_TIME=3600
- HUGO_THEME=hugo-theme-cactus-plus
- HUGO_BASEURL=mydomain.com
restart: always
```
Then run `docker-compose up` | tommoulard |
937,674 | ANSSI et automatisation de MOOC Secnumacademie avec Selenium | Pour mon école, j’ai eu a faire un MOOC de l’ANSSI. Or après avoir fait tous les tests du mooc à... | 0 | 2021-12-27T15:28:10 | https://blog.moulard.org/anssi-mooc-on-hacking/ | ---
title: ANSSI and automating the Secnumacademie MOOC with Selenium
published: true
date: 2020-04-13 16:57:41 UTC
tags:
canonical_url: https://blog.moulard.org/anssi-mooc-on-hacking/
---
For my school, I had to complete an ANSSI [MOOC](https://secnumacademie.gouv.fr/).
After passing all of the MOOC's quizzes with 100% without having watched the lessons, I decided to download the certificate of completion to give to my school. But I got an error message:

So I tried to follow the lessons, but they took far too much time to watch. Here is a bot to follow them semi-automatically:
```
from selenium import webdriver

user = "REDACTED"
passw = "REDACTED"

def main():
    driver = webdriver.Chrome()
    driver.get("https://secnumacademie.gouv.fr/")
    driver.find_elements_by_id("btn_access_insc")[0].click()
    driver.find_elements_by_id("login")[0].send_keys(user)
    driver.find_elements_by_id("password")[0].send_keys(passw)
    xpath = '/html/body/div[2]/div/div[2]/div/div[1]/div[1]/div[2]/a[1]'
    driver.find_elements_by_xpath(xpath)[0].click()
    while True:
        if input("continue ? [Y/n]") == "n":
            exit(0)
        # Walk down the nested frames to reach the course player.
        driver.switch_to.default_content()
        driver.switch_to.frame("DEFAUT")
        driver.switch_to.frame("contents")
        iframe_id = driver.find_elements_by_id("content")[0] \
            .find_elements_by_tag_name("iframe")[0].get_attribute("id")
        driver.switch_to.frame(iframe_id)
        driver.execute_script("for(var i = 0; i < 15; i++) {document.querySelector('#Stage_menu_inferieur_bouton_suivant_hit').click()}")

if __name__ == '__main__':
    main()
```
You can find the bot in my [GitHub repository](https://github.com/tomMoulard/python-projetcs).
All you need to do is:
- put your username and password in the `user` and `passw` variables
- run the bot
- wait for the bot to log in through the browser
- select the module, the unit, and the first lesson
- enter `Y` when the bot asks whether to continue (repeat this step as long as there is another sub-module to follow)
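If you want to change how many slides are skipped per lesson, the JavaScript hard-coded into the `execute_script` call above can be factored into a small helper. A minimal sketch (the helper name and default value are my suggestion, not part of the original bot):

```python
# Hypothetical helper: build the JavaScript snippet that clicks the
# "next" button a configurable number of times instead of hard-coding 15.
NEXT_BUTTON_SELECTOR = "#Stage_menu_inferieur_bouton_suivant_hit"

def build_next_clicks_script(clicks=15, selector=NEXT_BUTTON_SELECTOR):
    """Return the JS string to pass to driver.execute_script()."""
    return (
        f"for (var i = 0; i < {clicks}; i++) "
        f"{{document.querySelector('{selector}').click()}}"
    )

print(build_next_clicks_script(3))
```

The bot would then call `driver.execute_script(build_next_clicks_script())`, and tuning the click count becomes a single argument change.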
# In pictures
## Getting the code
```shell
mkdir mook-hack && cd mook-hack
wget https://raw.githubusercontent.com/tomMoulard/python-projetcs/master/anssi-mooc/mooc.py
```
## Installing the dependencies
```shell
sudo apt install -y python3-selenium chromium-chromedriver
```
## Setting your username/password
```shell
$EDITOR +4 mooc.py
```
## Running the bot
```shell
python3 mooc.py
```

## Selecting the module

## Selecting the unit

## Selecting the first lesson
 \*angry clicking noise\*
## Entering `Y`
```
y
```
# Upgrades
In the future, the bot could:
- play the videos
- not require a `Y` for every lesson
# Conclusion
Blah blah blah, you should follow your lessons.
Selenium is great for automating the use of a website.
\*This blog post was written while completing an ANSSI MOOC\* | tommoulard |
937,704 | Errors and suspicious code fragments in .NET 6 sources | The .NET 6 turned out to be much-awaited and major release. If you write for .NET, you could hardly... | 0 | 2021-12-27T15:32:37 | https://dev.to/_sergvasiliev_/errors-and-suspicious-code-fragments-in-net-6-sources-2md5 | dotnet, csharp, opensource, codequality | The \.NET 6 turned out to be much\-awaited and major release\. If you write for \.NET, you could hardly miss such an event\. We also couldn't pass by the new version of this platform\. We decided to check what interesting things we can find in the sources of \.NET libraries\.

## Details about the check
I took the sources from the \.NET 6 release branch [on GitHub](https://github.com/dotnet/runtime/tree/v6.0.0)\. This article covers suspicious places only from the libraries \(those that lie in src/libraries\)\. I didn't analyze the runtime itself \- maybe next time\. :\)
I checked the code with the [PVS\-Studio static analyzer](https://pvs-studio.com/en/pvs-studio/)\. As you probably guessed from this article, PVS\-Studio 7\.16 supports the analysis of projects on \.NET 6\. You can read more about new enhancements of the current release here\. The PVS\-Studio C\# analyzer for Linux and macOS now works on \.NET 6 as well\.
Over the year, PVS\-Studio significantly expanded the functionality of the C\# analyzer\. In addition to the support of the \.NET 6 platform, we added the plugin for Visual Studio 2022 and new security\-diagnostics\. Besides, we also optimized the C\# analyzer's performance for large projects\.
But you came here to read about \.NET 6, didn't you? Let's not waste time\.
## Suspicious code fragments
### Miscellaneous
This section includes various interesting code fragments that I could not group into a common category\.
**Issue 1**
Let's start with something simple\.
```csharp
public enum CompressionLevel
{
Optimal,
Fastest,
NoCompression,
SmallestSize
}
internal static void GetZipCompressionMethodFromOpcCompressionOption(
CompressionOption compressionOption,
out CompressionLevel compressionLevel)
{
switch (compressionOption)
{
case CompressionOption.NotCompressed:
{
compressionLevel = CompressionLevel.NoCompression;
}
break;
case CompressionOption.Normal:
{
compressionLevel = CompressionLevel.Optimal; // <=
}
break;
case CompressionOption.Maximum:
{
compressionLevel = CompressionLevel.Optimal; // <=
}
break;
case CompressionOption.Fast:
{
compressionLevel = CompressionLevel.Fastest;
}
break;
case CompressionOption.SuperFast:
{
compressionLevel = CompressionLevel.Fastest;
}
break;
// fall-through is not allowed
default:
{
Debug.Fail("Encountered an invalid CompressionOption enum value");
goto case CompressionOption.NotCompressed;
}
}
}
```
PVS\-Studio warning: [V3139](https://pvs-studio.com/en/docs/warnings/v3139/) Two or more case\-branches perform the same actions\. ZipPackage\.cs 402
In fact, this method performs mapping from *CompressionOption* to *CompressionLevel*\. The suspicious thing here is that the *CompressionOption\.Normal* and *CompressionOption\.Maximum* values are mapped to the *CompressionLevel\.Optimal* value\.
Probably *CompressionOption\.Maximum* should match *CompressionLevel\.SmallestSize*\.
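Copy-pasted `case` branches make it easy for one right-hand side to silently repeat another. One way to make such slips stand out is to keep the mapping as data. A Python sketch of the idea (the enums mirror the C# names; the table itself is illustrative, not the proposed fix for .NET):

```python
from enum import Enum, auto

class CompressionOption(Enum):
    NOT_COMPRESSED = auto()
    NORMAL = auto()
    MAXIMUM = auto()
    FAST = auto()
    SUPER_FAST = auto()

class CompressionLevel(Enum):
    OPTIMAL = auto()
    FASTEST = auto()
    NO_COMPRESSION = auto()
    SMALLEST_SIZE = auto()

# One entry per option: a duplicated right-hand side now stands out visually.
LEVEL_FOR_OPTION = {
    CompressionOption.NOT_COMPRESSED: CompressionLevel.NO_COMPRESSION,
    CompressionOption.NORMAL:         CompressionLevel.OPTIMAL,
    CompressionOption.MAXIMUM:        CompressionLevel.SMALLEST_SIZE,
    CompressionOption.FAST:           CompressionLevel.FASTEST,
    CompressionOption.SUPER_FAST:     CompressionLevel.FASTEST,
}

def get_compression_level(option):
    # Fall back to NO_COMPRESSION for unknown values, as the default branch does.
    return LEVEL_FOR_OPTION.get(option, CompressionLevel.NO_COMPRESSION)
```

With the mapping laid out as a table, a wrong or duplicated right-hand side is a one-line diff for a reviewer to spot.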
**Issue 2**
Now let's practice a little\. Let's take the *System\.Text\.Json\.Nodes\.JsonObject* for our experiments\. If you wish, you can repeat the described operations using the release version of \.NET 6 SDK\.
The *JsonObject* type has 2 constructors: one constructor accepts only options, the other \- properties and options\. Well, it's clear what kind of behavior we should expect from them\. Documentation is available [here](https://docs.microsoft.com/en-us/dotnet/api/system.text.json.nodes.jsonobject.-ctor?view=net-6.0)\.
Let's create two instances of the *JsonObject* type and use each of the constructors\.
```csharp
static void JsonObject_Test()
{
var properties = new Dictionary<String, JsonNode?>();
var options = new JsonNodeOptions()
{
PropertyNameCaseInsensitive = true
};
var jsonObject1 = new JsonObject(options);
var jsonObject2 = new JsonObject(properties, options);
}
```
Now let's check the state of the objects we created\.

The *jsonObject1* state is expected, but the *jsonObject2* state is not\. Why is the *null* value written to the *\_options* field? It's a little confusing\. Well, let's open the source code and look at these constructors\.
```csharp
public sealed partial class JsonObject : JsonNode
{
....
public JsonObject(JsonNodeOptions? options = null) : base(options) { }
public JsonObject(IEnumerable<KeyValuePair<string, JsonNode?>> properties,
JsonNodeOptions? options = null)
{
foreach (KeyValuePair<string, JsonNode?> node in properties)
{
Add(node.Key, node.Value);
}
}
....
}
```
In the second constructor, the *options* parameter is simply abandoned \- it is not passed anywhere and is not used in any way\. Whereas in the first constructor, *options* are passed to the base class constructor, where they are written to the field:
```csharp
internal JsonNode(JsonNodeOptions? options = null)
{
_options = options;
}
```
The corresponding PVS\-Studio warning: [V3117](https://pvs-studio.com/en/docs/warnings/v3117/) Constructor parameter 'options' is not used\. JsonObject\.cs 35
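This bug pattern is not C#-specific: any constructor overload that accepts an option and then fails to forward it leaves the object half-initialized. A Python analogue of the mistake and its fix (the class names are illustrative):

```python
class Node:
    def __init__(self, options=None):
        self._options = options

class BrokenObject(Node):
    def __init__(self, properties, options=None):
        super().__init__()          # bug: 'options' is dropped, _options stays None
        self._properties = dict(properties)

class FixedObject(Node):
    def __init__(self, properties, options=None):
        super().__init__(options)   # fix: forward the parameter to the base class
        self._properties = dict(properties)
```

The broken variant compiles and runs fine; only inspecting the resulting object's state reveals that the options never arrived, which is exactly why the .NET case went unnoticed.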
**Issue 3**
If we talk about the forgotten parameters, there was another interesting fragment\.
```csharp
public class ServiceNameCollection : ReadOnlyCollectionBase
{
....
private ServiceNameCollection(IList list, string serviceName)
: this(list, additionalCapacity: 1)
{ .... }
private ServiceNameCollection(IList list, IEnumerable serviceNames)
: this(list, additionalCapacity: GetCountOrOne(serviceNames))
{ .... }
private ServiceNameCollection(IList list, int additionalCapacity)
{
Debug.Assert(list != null);
Debug.Assert(additionalCapacity >= 0);
foreach (string? item in list)
{
InnerList.Add(item);
}
}
....
}
```
PVS\-Studio warning: [V3117](https://pvs-studio.com/en/docs/warnings/v3117/) Constructor parameter 'additionalCapacity' is not used\. ServiceNameCollection\.cs 46
According to the code, the *additionalCapacity* parameter of the last constructor is checked in *Debug\.Assert* and not used for anything else\. It looks suspicious\. It's especially amusing that the other constructors pass specific values for the *additionalCapacity* parameter\.
**Issue 4**
Here's a test of your foresight \(oops, spoilers\)\. Study the following code and try to guess what triggered the analyzer\.
```csharp
public override void CheckErrors()
{
throw new XsltException(SR.Xslt_InvalidXPath,
new string[] { Expression },
_baseUri,
_linePosition,
_lineNumber,
null);
}
```
It would seem that an exception is simply thrown\. To understand what is wrong here, you need to look at the *XsltException* constructor\.
```csharp
internal XsltException(string res,
string?[] args,
string? sourceUri,
int lineNumber,
int linePosition,
Exception? inner) : base(....)
{ .... }
```
If you compare the order of arguments and parameters, it becomes clear what triggered the analyzer\. It looks like the line position and the line number switched places\.
Order of arguments:
* *\_linePosition*
* *\_lineNumber*
Order of parameters:
* *lineNumber*
* *linePosition*
PVS\-Studio warning: [V3066](https://pvs-studio.com/en/docs/warnings/v3066/) Possible incorrect order of arguments passed to 'XsltException' constructor: '\_linePosition' and '\_lineNumber'\. Compiler\.cs 1187
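Two adjacent parameters of the same type are a classic setup for this mistake, and the compiler cannot catch it. Languages with named arguments offer a cheap defense; a small Python illustration (the function is hypothetical, not from the .NET sources):

```python
def make_position(line_number, line_position):
    # Both parameters are plain ints, so nothing type-checks their order.
    return {"line": line_number, "column": line_position}

line_number, line_position = 12, 34

# Positional call: the two ints can be swapped silently.
swapped = make_position(line_position, line_number)

# Keyword call: the binding is explicit, so argument order no longer matters.
safe = make_position(line_position=line_position, line_number=line_number)
```

In C#, the same protection is available via named arguments: `new XsltException(..., lineNumber: _lineNumber, linePosition: _linePosition, ...)`.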
**Issue 5**
Here is a sufficiently large piece of code\. There must be some kind of typo hidden in it\.\.\. Would you like to try to find it?
```csharp
public Parser(Compilation compilation,
in JsonSourceGenerationContext sourceGenerationContext)
{
_compilation = compilation;
_sourceGenerationContext = sourceGenerationContext;
_metadataLoadContext = new MetadataLoadContextInternal(_compilation);
_ilistOfTType = _metadataLoadContext.Resolve(
SpecialType.System_Collections_Generic_IList_T);
_icollectionOfTType = _metadataLoadContext.Resolve(
SpecialType.System_Collections_Generic_ICollection_T);
_ienumerableOfTType = _metadataLoadContext.Resolve(
SpecialType.System_Collections_Generic_IEnumerable_T);
_ienumerableType = _metadataLoadContext.Resolve(
SpecialType.System_Collections_IEnumerable);
_listOfTType = _metadataLoadContext.Resolve(typeof(List<>));
_dictionaryType = _metadataLoadContext.Resolve(typeof(Dictionary<,>));
_idictionaryOfTKeyTValueType = _metadataLoadContext.Resolve(
typeof(IDictionary<,>));
_ireadonlyDictionaryType = _metadataLoadContext.Resolve(
typeof(IReadOnlyDictionary<,>));
_isetType = _metadataLoadContext.Resolve(typeof(ISet<>));
_stackOfTType = _metadataLoadContext.Resolve(typeof(Stack<>));
_queueOfTType = _metadataLoadContext.Resolve(typeof(Queue<>));
_concurrentStackType = _metadataLoadContext.Resolve(
typeof(ConcurrentStack<>));
_concurrentQueueType = _metadataLoadContext.Resolve(
typeof(ConcurrentQueue<>));
_idictionaryType = _metadataLoadContext.Resolve(typeof(IDictionary));
_ilistType = _metadataLoadContext.Resolve(typeof(IList));
_stackType = _metadataLoadContext.Resolve(typeof(Stack));
_queueType = _metadataLoadContext.Resolve(typeof(Queue));
_keyValuePair = _metadataLoadContext.Resolve(typeof(KeyValuePair<,>));
_booleanType = _metadataLoadContext.Resolve(SpecialType.System_Boolean);
_charType = _metadataLoadContext.Resolve(SpecialType.System_Char);
_dateTimeType = _metadataLoadContext.Resolve(SpecialType.System_DateTime);
_nullableOfTType = _metadataLoadContext.Resolve(
SpecialType.System_Nullable_T);
_objectType = _metadataLoadContext.Resolve(SpecialType.System_Object);
_stringType = _metadataLoadContext.Resolve(SpecialType.System_String);
_dateTimeOffsetType = _metadataLoadContext.Resolve(typeof(DateTimeOffset));
_byteArrayType = _metadataLoadContext.Resolve(
typeof(byte)).MakeArrayType();
_guidType = _metadataLoadContext.Resolve(typeof(Guid));
_uriType = _metadataLoadContext.Resolve(typeof(Uri));
_versionType = _metadataLoadContext.Resolve(typeof(Version));
_jsonArrayType = _metadataLoadContext.Resolve(JsonArrayFullName);
_jsonElementType = _metadataLoadContext.Resolve(JsonElementFullName);
_jsonNodeType = _metadataLoadContext.Resolve(JsonNodeFullName);
_jsonObjectType = _metadataLoadContext.Resolve(JsonObjectFullName);
_jsonValueType = _metadataLoadContext.Resolve(JsonValueFullName);
// Unsupported types.
_typeType = _metadataLoadContext.Resolve(typeof(Type));
_serializationInfoType = _metadataLoadContext.Resolve(
typeof(Runtime.Serialization.SerializationInfo));
_intPtrType = _metadataLoadContext.Resolve(typeof(IntPtr));
_uIntPtrType = _metadataLoadContext.Resolve(typeof(UIntPtr));
_iAsyncEnumerableGenericType = _metadataLoadContext.Resolve(
IAsyncEnumerableFullName);
_dateOnlyType = _metadataLoadContext.Resolve(DateOnlyFullName);
_timeOnlyType = _metadataLoadContext.Resolve(TimeOnlyFullName);
_jsonConverterOfTType = _metadataLoadContext.Resolve(
JsonConverterOfTFullName);
PopulateKnownTypes();
}
```
Well, how's it going? Or maybe there is no typo at all?
Let's first look at the analyzer warning: [V3080](https://pvs-studio.com/en/docs/warnings/v3080/) Possible null dereference of method return value\. Consider inspecting: Resolve\(\.\.\.\)\. JsonSourceGenerator\.Parser\.cs 203
The *Resolve* method can return *null* \- that's what the method's signature indicates\. And that's what PVS\-Studio warns us about when it detects, with the help of interprocedural analysis, the possibility of a *null* return value\.
```csharp
public Type? Resolve(Type type)
{
Debug.Assert(!type.IsArray,
"Resolution logic only capable of handling named types.");
return Resolve(type.FullName!);
}
```
Let's go further, to another overload of *Resolve*\.
```csharp
public Type? Resolve(string fullyQualifiedMetadataName)
{
INamedTypeSymbol? typeSymbol =
_compilation.GetBestTypeByMetadataName(fullyQualifiedMetadataName);
return typeSymbol.AsType(this);
}
```
Note that *typeSymbol* is written as nullable reference type: *INamedTypeSymbol?*\. Let's go even further \- to the *AsType* method\.
```csharp
public static Type AsType(this ITypeSymbol typeSymbol,
MetadataLoadContextInternal metadataLoadContext)
{
if (typeSymbol == null)
{
return null;
}
return new TypeWrapper(typeSymbol, metadataLoadContext);
}
```
As you can see, if the first argument is a null reference, then the *null* value is returned from the method\.
And now let's go back to the *Parser* type constructor\. There, the result of a *Resolve* call is usually just written to some field\. But PVS\-Studio warns that there is an exception:
```cpp
_byteArrayType = _metadataLoadContext.Resolve(typeof(byte)).MakeArrayType();
```
Here, the *MakeArrayType* instance method is called for the result of the *Resolve* method call\. Consequently, if *Resolve* returns *null*, a *NullReferenceException* will occur\.
**Issue 6**
```csharp
public abstract partial class Instrument<T> : Instrument where T : struct
{
[ThreadStatic] private KeyValuePair<string, object?>[] ts_tags;
....
}
```
PVS\-Studio warning: [V3079](https://pvs-studio.com/en/docs/warnings/v3079/) 'ThreadStatic' attribute is applied to a non\-static 'ts\_tags' field and will be ignored Instrument\.netfx\.cs 20
Let's quote [the documentation](https://docs.microsoft.com/en-us/dotnet/api/system.threadstaticattribute?view=net-6.0): *Note that in addition to applying the [ThreadStaticAttribute](https://docs.microsoft.com/en-us/dotnet/api/system.threadstaticattribute?view=net-6.0) attribute to a field, you must also define it as a static field \(in C\#\) or a Shared field \(in Visual Basic\)\.*
As you can see from the code, *ts\_tags* is an instance field\. So, it makes no sense to mark the field with the *ThreadStatic* attribute\. Or there's some kind of black magic going on here\.\.\.
**Issue 7**
```csharp
private static JsonSourceGenerationOptionsAttribute?
GetSerializerOptions(AttributeSyntax? attributeSyntax)
{
....
foreach (AttributeArgumentSyntax node in attributeArguments)
{
IEnumerable<SyntaxNode> childNodes = node.ChildNodes();
NameEqualsSyntax? propertyNameNode
= childNodes.First() as NameEqualsSyntax;
Debug.Assert(propertyNameNode != null);
SyntaxNode? propertyValueNode = childNodes.ElementAtOrDefault(1);
string propertyValueStr = propertyValueNode.GetLastToken().ValueText;
....
}
....
}
```
PVS\-Studio warning: [V3146](https://pvs-studio.com/en/docs/warnings/v3146/) Possible null dereference of 'propertyValueNode'\. The 'childNodes\.ElementAtOrDefault' can return default null value\. JsonSourceGenerator\.Parser\.cs 560
If the *childNodes* collection contains fewer than two elements, the call of *ElementAtOrDefault* returns the *default\(SyntaxNode\)* value \(i\.e\. *null*, since *SyntaxNode* is a class\)\. In this case, a *NullReferenceException* is thrown on the next line\. It is especially strange that *propertyValueNode* is declared as a nullable reference type, yet it is dereferenced without a check\.
Perhaps there is some implicit contract here that there is always more than one element in *childNodes*\. For example, if there is *propertyNameNode*, then there is also *propertyValueNode*\. In this case, to avoid unnecessary questions, one can use the *ElementAt* method call\.
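The same trap exists around any "element or default" helper in other languages. A Python analogue, using a plain list in place of Roslyn syntax nodes:

```python
def element_at_or_default(seq, index, default=None):
    # Rough analogue of LINQ's ElementAtOrDefault for a list.
    return seq[index] if 0 <= index < len(seq) else default

nodes = ["name="]                      # only one child node
value_node = element_at_or_default(nodes, 1)

# Calling value_node.upper() here would raise AttributeError
# (Python's NullReferenceException), so the access must be guarded:
text = value_node.upper() if value_node is not None else "<missing>"
```

Conversely, if the contract guarantees the element exists, indexing directly (`seq[1]`) fails loudly at the right place, just as `ElementAt` would in C#.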
**Issue 8**
There is such a structure – *Microsoft\.Extensions\.FileSystemGlobbing\.FilePatternMatch*\. This structure overrides the *Equals\(Object\)* method, which seems logical\. [Documentation describing the method\.](https://docs.microsoft.com/en-us/dotnet/api/microsoft.extensions.filesystemglobbing.filepatternmatch.equals?view=dotnet-plat-ext-6.0)

Let's say we have code that calls this method:
```csharp
static void FPM_Test(Object? obj)
{
FilePatternMatch fpm = new FilePatternMatch();
var eq = fpm.Equals(obj);
}
```
What do you think will happen if *FPM\_Test* is called with a *null* value? Will the *false* value be written to the *eq* variable? Well, almost\.

The exception is also thrown if we pass an instance of a type other than *FilePatternMatch* as the argument \- for example, an array of some kind\.

Have you guessed yet why this happens? The point is, in the *Equals* method, the argument is not checked in any way for a *null* value or for type compatibility, but is simply unboxed without any conditions:
```csharp
public override bool Equals(object obj)
{
return Equals((FilePatternMatch) obj);
}
```
PVS\-Studio warning: [V3115](https://pvs-studio.com/en/docs/warnings/v3115/) Passing 'null' to 'Equals' method should not result in 'NullReferenceException'\. FilePatternMatch\.cs 61
Of course, judging from the documentation, no one promised us that *Equals\(Object\)* would return *false* if it does not accept *FilePatternMatch*\. But that would probably be the most expected behavior\.
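Python's equality protocol has the same pitfall: an `__eq__` that assumes the other operand's type blows up on `None` or unrelated objects. A defensive sketch of the checks that the unconditional cast above skips (the class is illustrative, not the real *FilePatternMatch*):

```python
class FilePatternMatchLike:
    def __init__(self, path):
        self.path = path

    def __eq__(self, other):
        # Check the type first instead of assuming it, so comparing with
        # None or an unrelated object returns False rather than raising.
        if not isinstance(other, FilePatternMatchLike):
            return NotImplemented
        return self.path == other.path
```

Returning `NotImplemented` lets Python fall back to the reflected comparison and finally to identity, so `obj == None` quietly evaluates to `False`, which is the behavior most callers of `Equals(object)` expect as well.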
### Duplicate checks
Duplicate checks are interesting because you can't always tell whether the code is merely redundant or whether one of the duplicated checks was supposed to test something else\. Anyway, let's look at a few examples\.
**Issue 9**
```csharp
internal DeflateManagedStream(Stream stream,
ZipArchiveEntry.CompressionMethodValues method,
long uncompressedSize = -1)
{
if (stream == null)
throw new ArgumentNullException(nameof(stream));
if (!stream.CanRead)
throw new ArgumentException(SR.NotSupported_UnreadableStream,
nameof(stream));
if (!stream.CanRead)
throw new ArgumentException(SR.NotSupported_UnreadableStream,
nameof(stream));
Debug.Assert(method == ZipArchiveEntry.CompressionMethodValues.Deflate64);
_inflater
= new InflaterManaged(
method == ZipArchiveEntry.CompressionMethodValues.Deflate64,
uncompressedSize);
_stream = stream;
_buffer = new byte[DefaultBufferSize];
}
```
PVS\-Studio warning: [V3021](https://pvs-studio.com/en/docs/warnings/v3021/) There are two 'if' statements with identical conditional expressions\. The first 'if' statement contains method return\. This means that the second 'if' statement is senseless DeflateManagedStream\.cs 27
At the beginning of the method, there are several checks\. Unfortunately, one of them \(*\!stream\.CanRead*\) is completely duplicated \- both the condition and the *then* branch of the *if* statement\.
**Issue 10**
```csharp
public static object? Deserialize(ReadOnlySpan<char> json,
Type returnType,
JsonSerializerOptions? options = null)
{
// default/null span is treated as empty
if (returnType == null)
{
throw new ArgumentNullException(nameof(returnType));
}
if (returnType == null)
{
throw new ArgumentNullException(nameof(returnType));
}
JsonTypeInfo jsonTypeInfo = GetTypeInfo(options, returnType);
return ReadFromSpan<object?>(json, jsonTypeInfo)!;
}
```
PVS\-Studio warning: [V3021](https://pvs-studio.com/en/docs/warnings/v3021/) There are two 'if' statements with identical conditional expressions\. The first 'if' statement contains method return\. This means that the second 'if' statement is senseless JsonSerializer\.Read\.String\.cs 163
A similar situation, but in a completely different place\. The *returnType* parameter is checked for *null* before use\. That's good, but the parameter is checked twice\.
**Issue 11**
```csharp
private void WriteQualifiedNameElement(....)
{
bool hasDefault = defaultValue != null && defaultValue != DBNull.Value;
if (hasDefault)
{
throw Globals.NotSupported(
"XmlQualifiedName DefaultValue not supported. Fail in WriteValue()");
}
....
if (hasDefault)
{
throw Globals.NotSupported(
"XmlQualifiedName DefaultValue not supported. Fail in WriteValue()");
}
}
```
PVS\-Studio warning: [V3021](https://pvs-studio.com/en/docs/warnings/v3021/) There are two 'if' statements with identical conditional expressions\. The first 'if' statement contains method return\. This means that the second 'if' statement is senseless XmlSerializationWriterILGen\.cs 102
Here the situation is a little more exciting\. If the previous duplicate checks followed one after another, here they are at different ends of the method \- almost 20 lines apart\. However, the *hasDefault* local variable being checked does not change during this time\. Accordingly, either the exception will be thrown during the first check, or it will not be thrown at all\.
**Issue 12**
```csharp
internal static bool AutoGenerated(ForeignKeyConstraint fk, bool checkRelation)
{
....
if (fk.ExtendedProperties.Count > 0)
return false;
if (fk.AcceptRejectRule != AcceptRejectRule.None)
return false;
if (fk.DeleteRule != Rule.Cascade) // <=
return false;
if (fk.DeleteRule != Rule.Cascade) // <=
return false;
if (fk.RelatedColumnsReference.Length != 1)
return false;
return AutoGenerated(fk.RelatedColumnsReference[0]);
}
```
PVS\-Studio warning: [V3022](https://pvs-studio.com/en/docs/warnings/v3022/) Expression 'fk\.DeleteRule \!= Rule\.Cascade' is always false\. xmlsaver\.cs 1708
As usual, the question is: should a different value have been checked, or is this just redundant code?
### Missing interpolation
First, let's have a look at a couple of warnings found\. Then, I'll tell you a little story\.
**Issue 13**
```csharp
internal void SetLimit(int physicalMemoryLimitPercentage)
{
if (physicalMemoryLimitPercentage == 0)
{
// use defaults
return;
}
_pressureHigh = Math.Max(3, physicalMemoryLimitPercentage);
_pressureLow = Math.Max(1, _pressureHigh - 9);
Dbg.Trace($"MemoryCacheStats",
"PhysicalMemoryMonitor.SetLimit:
_pressureHigh={_pressureHigh}, _pressureLow={_pressureLow}");
}
```
PVS\-Studio warning: [V3138](https://pvs-studio.com/en/docs/warnings/v3138/) String literal contains potential interpolated expression\. Consider inspecting: \_pressureHigh\. PhysicalMemoryMonitor\.cs 110
It almost seems like someone wanted to log the *\_pressureHigh* and *\_pressureLow* fields here\. However, the substitution of values won't work, since the string is not interpolated\. Instead, the interpolation symbol ended up on the first argument of the *Dbg\.Trace* method, where there is nothing to substitute\. :\)
**Issue 14**
```csharp
private void ParseSpecs(string? metricsSpecs)
{
....
string[] specStrings = ....
foreach (string specString in specStrings)
{
if (!MetricSpec.TryParse(specString, out MetricSpec spec))
{
Log.Message("Failed to parse metric spec: {specString}");
}
else
{
Log.Message("Parsed metric: {spec}");
....
}
}
}
```
PVS\-Studio warning: [V3138](https://pvs-studio.com/en/docs/warnings/v3138/) String literal contains potential interpolated expression\. Consider inspecting: spec\. MetricsEventSource\.cs 381
The code tries to parse the *specString* string\. If parsing fails, the source string should be logged; if it succeeds, the result \(the *spec* variable\) should be logged and some further operations performed\.
The problem, again, is that the interpolation symbol is missing in both cases\. As a consequence, the values of the *specString* and *spec* variables won't be substituted\.
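The same slip exists in Python: drop the `f` prefix and the braces are logged literally instead of being substituted. A minimal illustration:

```python
spec = "counter;hit-rate"

broken = "Parsed metric: {spec}"    # prefix forgotten: braces stay literal
fixed = f"Parsed metric: {spec}"    # f-string: the value is substituted

print(broken)  # Parsed metric: {spec}
print(fixed)   # Parsed metric: counter;hit-rate
```

Both versions run without any error or warning, which is why this bug class survives until someone reads the logs.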
And now get ready for the promised story\.
As I mentioned above, I [checked the \.NET Core libraries](https://pvs-studio.com/en/blog/posts/csharp/0656/) in 2019\. I found several strings that most likely were meant to be interpolated but, because of a missing '$' symbol, were not\. In that article, the corresponding warnings are described as issue 10 and issue 11\.
I created the [bug report on GitHub](https://github.com/dotnet/runtime/issues/30599)\. After that, the \.NET development team fixed some code fragments described in the article\. Among them \- the errors with interpolated strings\. [The corresponding pull request](https://github.com/dotnet/corefx/pull/40322/commits/a328cc8bf763d193474fac1870c9e26a9314748a)\.
Moreover, in the Roslyn Analyzers issue tracker, was created the [task](https://github.com/dotnet/roslyn-analyzers/issues/2767) of developing a new diagnostic that would detect such cases\.

My colleague described the whole story in a little more detail [here](https://pvs-studio.com/en/blog/posts/0659/)\.
Let's get back to the present\. I knew all this and remembered it, so I was very surprised when I came across errors with missing interpolation again\. How can that be? After all, there should already be an out\-of\-the\-box diagnostic to help avoid these errors\.
I decided to check out that diagnostic development issue from August 15, 2019, and it turned out\.\.\. that the diagnostic is not ready yet\. That's the answer to the question \- where the interpolation errors come from\.
PVS\-Studio has been detecting such problems since 7\.03 release \(June 25, 2019\) \- make use of it\. ;\)
### Some things change, some don't
During the check, I several times came across warnings that seemed vaguely familiar to me\. It turned out that I had already described them last time\. Since they are still in the code, I assume that these are not errors\.
For example, the code below seems to be a really unusual way to throw an *ArgumentOutOfRangeException*\. This is issue 30 from the [last check](https://pvs-studio.com/en/blog/posts/csharp/0656/)\.
```csharp
private ArrayList? _tables;
private DataTable? GetTable(string tableName, string ns)
{
if (_tables == null)
return _dataSet!.Tables.GetTable(tableName, ns);
if (_tables.Count == 0)
return (DataTable?)_tables[0];
....
}
```
However, I have a few questions about other fragments already discovered earlier\. For example, issue 25\. The loop traverses the *seq* collection, but only the first element, *seq\[0\]*, is ever accessed\. It looks\.\.\. unusual\.
```csharp
public bool MatchesXmlType(IList<XPathItem> seq, int indexType)
{
XmlQueryType typBase = GetXmlType(indexType);
XmlQueryCardinality card = seq.Count switch
{
0 => XmlQueryCardinality.Zero,
1 => XmlQueryCardinality.One,
_ => XmlQueryCardinality.More,
};
if (!(card <= typBase.Cardinality))
return false;
typBase = typBase.Prime;
for (int i = 0; i < seq.Count; i++)
{
if (!CreateXmlType(seq[0]).IsSubtypeOf(typBase)) // <=
return false;
}
return true;
}
```
PVS\-Studio warning: [V3102](https://pvs-studio.com/en/docs/warnings/v3102/) Suspicious access to element of 'seq' object by a constant index inside a loop\. XmlQueryRuntime\.cs 729
This code confuses me a little\. Does it confuse you?
Or let's take issue 34\.
```csharp
public bool Remove(out int testPosition, out MaskedTextResultHint resultHint)
{
....
if (lastAssignedPos == INVALID_INDEX)
{
....
return true; // nothing to remove.
}
....
return true;
}
```
PVS\-Studio warning: [V3009](https://pvs-studio.com/en/docs/warnings/v3009/) It's odd that this method always returns one and the same value of 'true'\. MaskedTextProvider\.cs 1531
The method always returned *true* before, and it still does\. At the same time, the comment says that the method can also return *false*: *Returns true on success, false otherwise*\. The same story can be found in the [documentation](https://docs.microsoft.com/en-us/dotnet/api/system.componentmodel.maskedtextprovider.remove?view=net-6.0)\.
I will even put the following example in a separate section, even though it was also described in the previous article\. Let's speculate a little, not only on the code fragment itself, but also on one feature used in it \- nullable reference types\.
### About nullable reference types again
In general, I have not yet figured out whether I like nullable reference types or not\.
On the one hand, nullable reference types have a huge advantage: they make method signatures more informative\. One glance at a method is enough to understand whether it can return *null*, whether a certain parameter can have a *null* value, etc\.
On the other hand, all this is built on trust\. No one forbids you to write code like this:
```csharp
static String GetStr()
{
return null!;
}
static void Main(string[] args)
{
String str = GetStr();
Console.WriteLine(str.Length); // NRE, str - null
}
```
Yes, yes, yes, it's synthetic code, but you can write it this way\! If such code is written inside your company, we go \(relatively speaking\) to the author of *GetStr* and have a conversation\. However, if *GetStr* comes from some library whose sources you don't have, such a surprise won't be very pleasant\.
Let's return from synthetic examples to our main topic – \.NET 6\. And there are subtleties\. For example, different libraries are divided into different solutions\. And looking through them, I repeatedly wondered – is nullable context enabled in this project? The fact that there is no check for *null* \- is this expected or not? Probably, this is not a problem when working within the context of one project\. However, with cursory analysis of all projects, it creates certain difficulties\.
And it gets really interesting: all sorts of strange things start showing up during migration to a nullable context\. It seems like a variable cannot have a *null* value, and yet there is a check for it\. And let's face it, \.NET has quite a few such places\. Let me show you a couple of them\.
```csharp
private void ValidateAttributes(XmlElement elementNode)
{
....
XmlSchemaAttribute schemaAttribute
= (_defaultAttributes[i] as XmlSchemaAttribute)!;
attrQName = schemaAttribute.QualifiedName;
Debug.Assert(schemaAttribute != null);
....
}
```
PVS-Studio warning: [V3095](https://pvs-studio.com/en/docs/warnings/v3095/) The 'schemaAttribute' object was used before it was verified against null. Check lines: 438, 439. DocumentSchemaValidator.cs 438
The '!' symbol hints that we are working with a nullable context here. Okay.
1. Why is the 'as' operator used for the cast, and not a direct cast? If there is confidence that *schemaAttribute* is not *null* (that is how I read the implicit contract expressed by '!'), then *_defaultAttributes[i]* does have the *XmlSchemaAttribute* type. Well, let's say the developer simply likes this syntax more – okay.
2. If *schemaAttribute* is not *null*, why is there a check for *null* in *Debug.Assert* below?
3. If the check is relevant and *schemaAttribute* can still have a *null* value (contrary to the semantics of nullable reference types), then execution will not reach *Debug.Assert*: an exception will be thrown earlier, when *schemaAttribute.QualifiedName* is accessed.
Personally, I have a lot of questions at once when looking at such a small piece of code.
Here is a similar story:
```csharp
public Node DeepClone(int count)
{
....
while (originalCurrent != null)
{
originalNodes.Push(originalCurrent);
newNodes.Push(newCurrent);
newCurrent.Left = originalCurrent.Left?.ShallowClone();
originalCurrent = originalCurrent.Left;
newCurrent = newCurrent.Left!;
}
....
}
```
On the one hand, *newCurrent.Left* can have the *null* value, since the result of the *?.* operator is written to it (*originalCurrent.Left?.ShallowClone()*). On the other hand, in the last line we see the annotation that *newCurrent.Left* is not *null*.
And now let's look at the code fragment from .NET 6 that was, in fact, the reason I started writing this section: the *IStructuralEquatable.Equals(object? other, IEqualityComparer comparer)* implementation in the *ImmutableArray<T>* type.
```csharp
internal readonly T[]? array;
bool IStructuralEquatable.Equals(object? other, IEqualityComparer comparer)
{
var self = this;
Array? otherArray = other as Array;
if (otherArray == null)
{
if (other is IImmutableArray theirs)
{
otherArray = theirs.Array;
if (self.array == null && otherArray == null)
{
return true;
}
else if (self.array == null)
{
return false;
}
}
}
IStructuralEquatable ours = self.array!;
return ours.Equals(otherArray, comparer);
}
```
If you look at the last code lines in Visual Studio, the editor will helpfully tell you that *ours* is not *null*. It can be seen from the code – *self.array!* is assigned to the non-nullable reference variable *ours*.

OK, let's write the following code:
```csharp
IStructuralEquatable immutableArr = default(ImmutableArray<String>);
var eq = immutableArr.Equals(null, EqualityComparer<String>.Default);
```
Then we run it and get a *NullReferenceException*.

Whoops. It seems that the *ours* variable, which supposedly cannot be *null*, in fact turned out to be a null reference.
Let's find out how that happened.
1. The *array* field of the *immutableArr* object takes the default value – *null*.
2. *other* has a *null* value, so *otherArray* also has a *null* value.
3. The *other is IImmutableArray* check returns *false*.
4. At the time the value is written to *ours*, the *self.array* field is *null*.
5. You know the rest.
Here one could make the counter-argument that the immutable array is in an incorrect state, since it was created not through the special methods/properties but by calling the *default* operator. Still, getting an NRE on an *Equals* call for such an object is a bit strange.
However, that's not even the point. The code, annotations and editor hints all indicate that *ours* is not *null*. In fact, the variable does hold the *null* value. For me personally, this somewhat undermines trust in nullable reference types.
PVS-Studio issues a warning: [V3125](https://pvs-studio.com/en/docs/warnings/v3125/) The 'ours' object was used after it was verified against null. Check lines: 1144, 1136. ImmutableArray_1.cs 1144
By the way, I wrote about this problem in the [last article](https://pvs-studio.com/en/blog/posts/csharp/0656/) (issue 53). Back then, however, there were no nullable annotations yet.
**Note**. Returning to the conversation about operations on *ImmutableArray<T>* instances in the default state: some methods/properties use the special methods *ThrowNullRefIfNotInitialized* and *ThrowInvalidOperationIfNotInitialized*, which report the uninitialized state of the object. Moreover, explicit implementations of interface methods use *ThrowInvalidOperationIfNotInitialized*. Perhaps it should have been used in the case described above as well.
Here I want to ask our audience – what kind of experience do you have working with nullable reference types? Do you like them, or maybe you don't? Have you used nullable reference types in your projects? What went well? What difficulties did you have? I'm curious about your view on nullable reference types.
By the way, my colleagues have already written about nullable reference types in a couple of articles: [one](https://pvs-studio.com/en/blog/posts/csharp/0631/), [two](https://pvs-studio.com/en/blog/posts/csharp/0764/). Time goes on, but the issue is still debatable.
## Conclusion
In conclusion, I would once again like to congratulate the .NET 6 development team on the release. I also want to say thank you to everyone who contributes to this project. I am sure they will fix the shortcomings. There are still many achievements ahead.
I also hope that I managed to remind you once again how static analysis benefits the development process. If you are interested, you can try PVS-Studio on your project as well. By the way, follow [this link](https://pvs-studio.com/net6-checking) to get an extended license that is valid for 30 days instead of 7. Isn't that a good reason to try the analyzer? ;)
And by good tradition, I invite you to subscribe to [my Twitter](https://twitter.com/_SergVasiliev_) so as not to miss anything interesting.
| _sergvasiliev_ |
937,727 | Single method accessors and mutators in laravel | We have already known the usage of accessors and mutators. We use accessors in laravel model to... | 0 | 2021-12-27T16:10:33 | https://dev.to/asifzcpe/single-method-accessors-and-mutators-in-laravel-1l00 | webdev, laravel, accessor, mutator | We already know the usage of accessors and mutators: we use accessors in a Laravel model to modify field data while retrieving records, and mutators to modify field data while inserting it into the database.
So, previously we needed two separate methods for a single field, i.e. one method for the accessor and one for the mutator. But in the latest Laravel release, 8.77.0, we can use a single method for both the accessor and the mutator, using closures.
Given below is the syntax for a single-method accessor and mutator:
```php
<?php
namespace App\Models;
use Illuminate\Database\Eloquent\Model;
use Illuminate\Database\Eloquent\Casts\Attribute;
use Carbon\Carbon;
class Order extends Model
{
    /**
     * Interact with the order's date.
     */
    protected function orderDate(): Attribute
    {
        return new Attribute(
            fn ($value) => Carbon::parse($value)->format('d-m-Y'), // accessor (get)
            fn ($value) => Carbon::parse($value)->format('Y-m-d'), // mutator (set)
        );
    }
}
```
In the above example, the first closure is the accessor and the second is the mutator. | asifzcpe |
1,180,947 | Nagios Plugins for Linux v31 | What's new in version v31 "Counter-intuitive" ... | 0 | 2022-08-31T19:56:45 | https://dev.to/madrisan/nagios-plugins-for-linux-v31-2njc | nagios, c, linux, monitoring | What's new in version v31 "Counter-intuitive"
### Fixes
- Libraries
* `lib/container_docker_memory`: fix an issue reported by _clang-analyzer_.
* Make sure `sysfs` is mounted in the plugins that require it.
### Enhancements / Changes
- Plugin `check_filecount`
* New plugin `check_filecount` that returns the number of files found in one or more directories.
- Plugin `check_memory`
* `check_memory`: support new units kiB/MiB/GiB.
Feature asked by [mdicss](https://github.com/mdicss).
See discussion [#120](https://github.com/madrisan/nagios-plugins-linux/discussions/120).
- `contrib/icinga2/CheckCommands.conf`
* Contribution from Lorenz [RincewindsHat](https://github.com/RincewindsHat): add _icinga2_ command configurations.
- Build
* configure: ensure libprocps is v4.0.0 or better if the experimental option `--enable-libprocps` is passed to `configure`.
- Test framework
* Add some unit tests for `lib/xstrton`.
* New unit tests `tslibfiles_{filecount,hiddenfile,size}`.
- Package creation
* Add Linux Alpine 3.16 and remove version 3.13.
* Do not package experimental plugins in the rpm `nagios-plugins-linux-all`.
* Add Fedora 36 and drop Fedora 33 support.
* CentOS 8 died a premature death at the end of 2021. Add packages for CentOS Stream 8 and 9.
- GitHub Workflows
* Build the Nagios Plugins Linux on the LTS Ubuntu versions only. The version 21 seems dead.
* Add build tests for all the supported oses.
* Update the os versions used in tests.
* CentOS 8 died a premature death at the end of 2021. Remove it from the list of test oses.
* Add CodeQL analysis
| madrisan |
937,765 | Spring Boot Netflix Zuul becomes Spring Cloud Gateway | see https://blog.coffeebeans.at/archives/1738 | 0 | 2021-12-27T18:10:35 | https://dev.to/mbogner/spring-boot-netflix-zuul-becomes-spring-cloud-gateway-265h | springcloud, netflixzuul, zuul, springboot | see [https://blog.coffeebeans.at/archives/1738](https://blog.coffeebeans.at/archives/1738) | mbogner |
937,779 | Practicing Data Structures and Algorithms Daily! | Boring but worthy Data structures and algorithms as important as your communication skills... | 0 | 2021-12-27T19:10:33 | https://dev.to/seek4samurai/practicing-data-structures-and-algorithms-daily-n62 | programming, beginners, tutorial, algorithms | ## Boring but worthy
Data structures and algorithms are as important as your communication skills if you're into programming. If your dream is to work as a full-time software engineer, this should be your playground.
For software engineers, understanding different types of algorithms is important. So if you have similar plans, get ready for them.
## Where to start
First things first! Make sure to get an average or good command of a programming language like C++, Java or Python. Once you're good to go, try diving deeper into concepts like pointers and structures such as stacks, vectors, arrays, dictionaries and queues.
It's recommended that you have some mentor for guidance.
## Now comes the painful part
Get ready for some brain strain: once you dive deep into the above, you'll be introduced to terms like recursion and iteration. Speaking from my own experience, I had huge confusion and problems while understanding algorithms full of recursion. Terms like time and space complexity will be the factors by which your algorithms are judged.
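To make the recursion-vs-iteration distinction concrete, here is a small Python comparison (the factorial example is my own choice): both functions compute the same result, but the recursive one keeps O(n) call frames on the stack while the loop needs only O(1) extra space.

```python
def factorial_recursive(n):
    # Base case stops the recursion; each call waits on the next one,
    # so n stack frames are alive at the deepest point: O(n) space.
    if n <= 1:
        return 1
    return n * factorial_recursive(n - 1)

def factorial_iterative(n):
    # Same O(n) time, but a single frame and one accumulator: O(1) space.
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

print(factorial_recursive(5), factorial_iterative(5))  # 120 120
```

Working through both styles of the same problem is a good way to build intuition for when recursion is worth its extra cost.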
## Resources for practice
Learning data structures and algorithms gets a little easier if you solve more and more questions and learn the best ways to solve them.
Which algorithm fits best, and why? These are things one should have a good knowledge of.
I've been working through some of these algorithms for a few days as well; here's my repo: [CPP-Data-structures-and-algorithms](https://github.com/Seek4samurai/DSA-Cpp)
Here are some websites I use for practicing questions: [Hackerrank](https://www.hackerrank.com/) and [Leetcode](https://leetcode.com/). Check them out and try solving the problems.
_Hope this helps and motivates._ | seek4samurai |
937,817 | 10 habits of 10x developer | Over 16 years of educating, tutoring and mentoring developers I have recognized ten habits that... | 0 | 2021-12-27T19:26:38 | https://dev.to/tomaszs2/10-habits-of-10x-developer-96p | programming, webdev, beginners | Over 16 years of educating, tutoring and mentoring developers I have recognized ten habits that differentiate 10x developers from others. 10x developers:
1. Take insane amount of notes
2. Start to code only when they gather all information and requirements needed to implement a feature
3. First reproduce the bug before looking into the code
4. Validate results early and often with all stakeholders
5. Effectively manage incoming messages and tasks
6. Kick off day with an easy task to boost speed
7. Make sure they work in a proper task order everyone is happy with
8. Understand improving the development process as a part of their responsibilities
9. Ask a lot of questions
10. Value a team compromise more than their private opinions
Bonus (11): master providing solutions rather than memorizing useless aspects of technologies.
I have met hundreds of developers during my career so far, and the list above is a bulletproof way to spot a 10x developer.
If you want to learn more about how to become 10x developer, follow me! | tomaszs2 |
937,887 | Table Driven Scanners | Introduction A scanner or tokenizer is a part of the compiler, it is also that potion of... | 0 | 2021-12-27T20:13:12 | https://dev.to/ezpzdevelopement/table-driven-scanners-40jd | ## Introduction
A scanner, or tokenizer, is a part of the compiler: it is the portion of code that takes a stream of characters as input and produces a set of tokens, or words, as output.
We can count three types of scanners: table-driven scanners, direct-coded scanners, and hand-coded scanners. Each of them has advantages and disadvantages, and based on these we can determine when to use each of the listed approaches. I also need to mention that some compiler designers and developers use a hybrid of these three, or simply combine two approaches.
Note that hand-coded scanners are the most used in commercial versions of the famous compilers. This is due to many reasons; one of them is that, unlike the other two (table-driven and direct-coded scanners), a hand-coded scanner is not auto-generated – it is up to the developer to take care of the implementation and to improve the compiler's performance.
## Table Driven Scanners
But in this post, I want to explain and show what I understand about table-driven scanners. As mentioned above in the introduction, this kind of scanner is auto-generated – take, as examples, [Lex](http://www.cse.aucegypt.edu/~rafea/csce447/slides/Lex_M02.pdf) and [Flex](https://www.math.utah.edu/docs/info/flex_toc.html).
First, we must have a set of regular expressions as input for the scanner generator to produce the tables. We will also need the help of what we can call a skeleton scanner to control the process of recognizing our language and returning a set of tokens.
Examples of the produced tables are a classifier table, a transition table, and a token type table.
***classifier table:*** takes a character as input and identifies whether it belongs to any of the character groups that make up our DFA or our REs (regular expressions). For example, a character can be a digit, a special character like `,;=+`, a lowercase letter, or an uppercase letter. The figure below shows an example of this table.

***transition table:*** takes the current state and the character group and returns the next state, as you can see in the table below.

***token type table:*** takes a state as input and returns the type of token expected in that state; when there is no token, the table returns invalid or something similar. The figure below shows an example of a token type table.

Now I want to go back to the skeleton scanner: it is the program, or portion of code, that controls this process. You can find an algorithmic implementation with a deep and clear explanation in *Engineering a Compiler* by Keith Cooper and Linda Torczon, on page 86. According to that book, the skeleton scanner is usually divided into sections: initializations, followed by a scanning loop that models the DFA's behavior.
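As a rough illustration (not the book's exact algorithm), here is how a skeleton scanner driven by the three tables above might look in Python. The table contents are invented for a toy language that only recognizes unsigned integers:

```python
# Toy table-driven scanner: state 0 = start, state 1 = inside a number.
CLASSIFIER = {c: "digit" for c in "0123456789"}   # character -> character class
TRANSITION = {(0, "digit"): 1, (1, "digit"): 1}   # (state, class) -> next state
TOKEN_TYPE = {1: "NUMBER"}                        # accepting state -> token type

def next_token(text, pos):
    state, start = 0, pos
    while pos < len(text):
        cclass = CLASSIFIER.get(text[pos], "other")
        nxt = TRANSITION.get((state, cclass))
        if nxt is None:
            break                                 # no transition: stop scanning
        state, pos = nxt, pos + 1
    token_type = TOKEN_TYPE.get(state, "invalid")
    return token_type, text[start:pos], pos

print(next_token("123+", 0))  # -> ('NUMBER', '123', 3)
```

The loop itself never changes regardless of the language being scanned – only the generated tables do, which is exactly what makes this approach easy to auto-generate.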
In the end, I want to mention that the table-driven implementation and the other two, direct-coded and hand-coded, have different runtime costs, but they all have the same asymptotic complexity per character. | ezpzdevelopement | |
937,908 | How to Create Aliases on macOS with Terminal | Aliases are those badass secret identities and mysteries that developers type into their terminal to... | 0 | 2021-12-27T20:57:05 | https://blog.zahrakhadijha.com/how-to-create-aliases-on-macos-with-terminal | tutorial, beginners, webdev, programming | Aliases are those badass secret identities and mysteries that developers type into their terminal to perform a task faster and make their workflow simpler. For the longest time, I procrastinated creating aliases because the thought of learning how to do it *seemed* really hard. But actually, it's extremely easy. And it's made my life SO MUCH better, so I wanted to share how to do it.
## Creating Aliases for Zsh Shell
- Go to your terminal
- Type in the command
```
cd ~
```
To make sure you are at your root directory,
- Then type in
```
open .zshrc
```
to open up your `.zshrc` folder. You should see a screen like this:

- Scroll down to where it says something like **# alias ohmyzsh="mate ~/.oh-my-zsh"**
- After the hashes, type in your own alias, i.e.:
```
alias cmsg="git commit -m"
```
Every time I want to run `git commit -m`, I can now use `cmsg` to do that task.
- Next `Cmd + S` to Save. Close the window.
- THEN for the changes to take effect, you will have to enter the following in your terminal:
```
source .zshrc
```
### Example Aliases
Alias names can often be hard to come up with for beginners—I know it was for me until I saw a senior engineer using some of them—so some examples would be:
```terminal
alias project:start="yarn"
alias project:build="yarn project build"
alias project:test="yarn test"
alias project:setup="yarn && yarn project build"
alias project="/application/documents/project/project-file"
```
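One limitation worth knowing before you fill your `.zshrc` with aliases: an alias is plain text substitution, so it cannot take positional arguments in the middle of the command. When you need arguments, define a shell function in the same file instead (the `mkcd` name below is just an example of mine):

```shell
# Goes in ~/.zshrc next to your aliases. Unlike an alias (which is plain
# text substitution), a function can use arguments such as $1.
mkcd() {
  mkdir -p "$1" && cd "$1"
}
```

After `source ~/.zshrc`, running `mkcd some/new/folder` creates the folder and jumps into it in one step.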
Voilà! You can now use your alias to perform the task you want, faster.
If you have any questions, feel free to DM me on [Twitter](https://twitter.com/zahrakhadijha) !
| za-h-ra |
938,181 | A11y tips: add a skip link | A skip link is a link located at the beginning of the page and that allows the user to jump directly... | 15,912 | 2021-12-28T07:35:41 | https://dev.to/carlosespada/a11y-tips-add-a-skip-link-222m | a11y, tips, html | A skip link is **a link located at the beginning of the page that allows the user to jump directly to certain blocks** (main content, main search, important asides...) without having to go through all the previous elements that are repeated throughout the entire site (logo, main navigation...).
It is **especially useful for users who navigate with the keyboard**, and it can also be an aid for those who navigate using headings or landmarks.
There are several rules to follow to build a good skip link:
- It must be the first focusable element on the page.
- May or may not be visible to all users all the time.
- If it is hidden by default, it should become visible when it receives keyboard focus.
- Must be an internal link (starting with `#`).
- When clicking on it, the focus should move to the target element. For greater reliability of this behavior, it is better to add `tabindex="-1"` to that target element and thus make it focusable.
Although all content should be contained in a landmark region, **the skip link can be located outside of any landmark**. Some validators will flag it as a warning but it is not a violation of the WCAG.
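Putting the rules above together, a minimal sketch could look like this (the `skip-link` class and `main-content` id are names of my own choosing):

```html
<body>
  <!-- First focusable element on the page; an internal link starting with # -->
  <a class="skip-link" href="#main-content">Skip to main content</a>

  <header><!-- logo, main navigation… --></header>

  <!-- tabindex="-1" ensures the target reliably receives focus -->
  <main id="main-content" tabindex="-1">…</main>
</body>
```

If the link should stay hidden until focused, a common pattern is to position `.skip-link` off-screen (e.g. `position: absolute; left: -9999px;`) and bring it back into view on `:focus`.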
You can see some examples of skip links on the [British Government](https://www.gov.uk/) page or on [Wikipedia](https://en.wikipedia.org/). Try navigating with the keyboard from the address bar and see how it behaves. | carlosespada |
938,207 | Things to Know Before You Start Tailwind CSS | With the rising popularity of Tailwind CSS, many of us might jump directly on Tailwind just to catch... | 16,052 | 2021-12-28T08:32:32 | https://clumsycoder.hashnode.dev/getting-started-with-tailwindcss | tailwindcss, beginners, css, webdev | With the rising popularity of Tailwind CSS, many of us might jump straight onto Tailwind just to catch up with the hype. It won't take long to realize that it is not like any other traditional CSS framework. Some might not even get the full picture of Tailwind and end up hating it, thinking it's harder than plain CSS.
Here is my attempt to cover everything that you need to know to use Tailwind CSS in your next project.
> This blog focuses on CSS concepts that are essential to know for using Tailwind CSS. It is **not** a tutorial to get started with it.
[Official documentation](https://tailwindcss.com/docs/installation) has covered that part pretty well.
## Utility First Frameworks
The main reason why Tailwind is different from other frameworks is that it is a utility based framework, whereas frameworks like Bootstrap or Bulma are component based frameworks. Now, what's the difference?
Component based frameworks provide a set of components that are used by adding predefined classes to HTML elements. For example, Bootstrap has a total of 24 components that we can use by simply adding related classes.
Tailwind CSS on the other hand is a utility framework. It doesn't limit your design by predefined opinionated components but provides powerful building blocks that are useful to create a unique design for your projects.
Tailwind achieves this by having a class for every CSS property. Additionally, we can add our own valid CSS properties to create new Tailwind classes as our needs dictate. This gives total freedom and flexibility while designing the front end.
But this comes with a cost. Component based libraries don't require you to know much about plain CSS. You refer to the documentation and use the code that you need. Tailwind being a low level utility based framework doesn't work in that way. You need to have a profound understanding of CSS to use Tailwind in the best way.
## Intermediate CSS
There is only one word that describes CSS - overwhelming. Though Tailwind gives you flexibility and saves the time required to type everything explicitly, your knowledge of plain CSS is the main prerequisite.
### Sizing Units
Especially `rem`. All utility classes use `rem` for styling. Tailwind has classes from the smallest rem unit, `0.125rem` (2px), all the way up to `24rem`. Additionally, you can add units of your choice in the `tailwind.config` file (which can be `px`, `em`, or anything else too).
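As a sketch of how that customization looks in `tailwind.config.js` (the values below are made up for illustration, not Tailwind defaults):

```javascript
// Hypothetical sketch of extending the default rem scale.
// With the default 16px root font size, 1rem = 16px, so 4.5rem = 72px.
const config = {
  theme: {
    extend: {
      spacing: {
        18: "4.5rem",   // enables utilities like p-18, m-18, w-18
        "1/10": "10%",  // arbitrary keys and units are allowed too
      },
    },
  },
};

module.exports = config;
```

Entries placed under `theme.extend` are added alongside the default scale instead of replacing it.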
### Responsive Design
Tailwind is a mobile-first approach to styling, so whatever you write applies to the smallest screen size defined while configuring Tailwind. By default, the smallest breakpoint prefix is `sm`, which sets a media query with a minimum width of 640px. You are supposed to mention the breakpoint when adding classes for sizes bigger than that.
That's why knowing about responsive design, sizing units and media queries is a must to create anything eye-pleasing with Tailwind.
### Flexbox & Grid
Creating layouts with grid and flexbox classes is easy compared to the traditional ways of using them. However, you might find it confusing, because many classes overlap, with a single class serving both layouts.
I'm talking about `justify-{value}`, `align-{value}`, `place-{value}` and `gap`. These classes serve the same purpose for flexbox as well as grid. Having a clear understanding of them will save you hours of confusion.
### Other Important Concepts
Of course, this is not it; CSS is more than that, and so is Tailwind. Units, responsive design and Flexbox/Grid need a special mention because once you understand how these things work, it won't be difficult for you to use Tailwind with its max out capacity.
Other than that, here are a few more things that are good to know:
- CSS Transition and Transform Property
- [Aspect Ratios](https://tailwindcss.com/docs/aspect-ratio)
- [Preflight](https://tailwindcss.com/docs/preflight): Tailwind has a set of base styling practices built on top of [modern-normalize](https://github.com/sindresorhus/modern-normalize).
I am not mentioning padding, margin, box sizing, float and z-index because I believe that if you don't know how to work with them, you should think about sticking to plain CSS for some more time.
## When to Use Tailwind?
1. If you have intermediary experience with plain CSS and know how CSS is supposed to work, you can use Tailwind pretty much anywhere you want.
2. If your front-end stack has a component-based library such as React, Tailwind lets you take full advantage of its reusability and scalability.
## When NOT to use Tailwind?
1. Do not even consider using Tailwind if you don't have intermediate experience with CSS. You might not face issues in the very beginning but as your project gets bigger, it'd get messier and confusing.
2. If you are building websites with vanilla JS, you might not use Tailwind to its full potential. Using the same class names for every different component would make the code redundant, unreadable and messy.
3. If you are building prototypes and time/deadline is a major factor, using tailwind will slow you down. Tailwind is best suitable for big projects which give you enough time to focus on design as well as logic.
4. Similarly, using Tailwind with projects where backend logic or backend services is more important than design must be avoided. Because you'd end up spending more time on design when your focus should be on logic.
## TL;DR
Tailwind CSS is a utility based framework so you have to combine multiple utilities and create a component by yourself. That's why knowing about plain CSS is a must. You should have a clear understanding of sizing units, responsive design and media queries. Knowing about conflicting flexbox and grid properties would also save your confusion when you start building projects.
Don't jump to Tailwind if you are not comfortable with plain CSS.
## Conclusion
Using Tailwind when you struggle with fundamentals will only lead to frustrations. But once your fundamentals are clear, no one can stop you from harnessing the full potential of Tailwind CSS!
I have scheduled more blogs about starting Tailwind CSS so if you're interested, do subscribe to the newsletter and follow [my blog](https://clumsycoder.hashnode.dev/). Also, if you think there's anything that can be improved or added, I am happy to hear your opinion. I am most active on [Twitter](https://twitter.com/clumsy_coder) and [LinkedIn](https://www.linkedin.com/in/7JKaushal).
Happy Designing! 🎨 | clumsycoder |
938,296 | Topic: React Hook and Custom React Hook | When I was learning to React, some parts seemed to me difficult. Today, I have decided to... | 0 | 2021-12-28T10:32:10 | https://dev.to/zahidulislam144/topic-react-hook-and-custom-react-hook-5d6l | react | ## When I was learning React, some parts seemed difficult to me. Today, I have decided to write down some of my learnings and ideas about React hooks and custom React hooks.
## - **What is a hook, basically?**
In real life, a hook is a kind of ring that holds something. React hooks work on a similar concept: when we use a React hook, there is a state that stores data and provides it to a component, and the same state logic can be set up in one component or reused across multiple components. Because it can be "linked" into any component this way, you can think of it as holding data like a ring. But there are some rules and guidelines for using React hooks:
1. Hooks must be declared at the top of the React function, before the return statement.
2. Don't declare hooks in loops, conditions or nested functions.
3. Hooks can be used in functional components only.
**Example:**
```
// imported hooks to use from react
import React, { useEffect, useState } from "react";

// react functional component
const MyOrder = () => {
  // setting up a react hook; the initial value can be "" or null
  const [order, setOrder] = useState("");

  return (
    <div>{/* code contents in JSX format */}</div>
  );
};

export default MyOrder;
```
1. From the above example, we can say that this is a React functional component. Now I am going to explain it.
2. Firstly, we create a 'MyOrder.js' file as a component that handles order-related tasks. The component name must start with an uppercase letter.
3. Secondly, we export the created component for further use. If it isn't exported, it can't be imported into another component.
4. Thirdly, we need to define our hook. Hook names always start with the 'use' keyword; in the example above, the useState hook is used. Now, to the main point: how do we set our state and store data in it? The hook gives us two elements that we destructure from an array, [order, setOrder]. Here, setOrder stores data in the order element. When setOrder is called, React re-renders the component and keeps the stored value for order in a memory cell. On each subsequent render, React returns the previously stored values serially instead of creating new cells, which is why hooks must be called in the same order every time – otherwise we risk creating bugs.
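The "memory cell" behaviour described above can be sketched as a toy model in plain JavaScript. This is *not* React's real implementation – just an illustration of why state survives re-renders and why hooks must run in the same order every time:

```javascript
// State lives in "cells" indexed by hook call order per render.
const cells = [];
let cursor = 0;

function useState(initial) {
  const i = cursor++;
  if (!(i in cells)) cells[i] = initial;          // first render: create the cell
  const setState = (value) => { cells[i] = value; };
  return [cells[i], setState];                    // later renders: reuse the cell
}

function render(component) {
  cursor = 0;                                     // replay hooks from the top
  return component();
}

function MyOrder() {
  const [order, setOrder] = useState("no order yet");
  return { order, setOrder };
}

let ui = render(MyOrder);
console.log(ui.order);                            // "no order yet"
ui.setOrder("pizza");
ui = render(MyOrder);
console.log(ui.order);                            // "pizza"
```

Because the cells are looked up purely by call order, calling a hook conditionally would shift the indices and mix up the state – which is exactly why the rules above forbid hooks in loops and conditions.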
## - **What is useEffect hook, basically?**
useEffect is also a hook, and essentially a callback mechanism: it receives a callback function that runs after the component renders. It can also accept an array alongside the callback. The callback is called on the first render of the component and, after that, whenever an element of the array changes. If an empty array is passed, the callback is called only once, after the first render. One thing to note: the array elements are called the dependencies of useEffect.
**Example:**
```
useEffect(() => {
fetch(`http://localhost:8000/premium-autos/select-now/${id}`)
.then((res) => res.json())
.then((data) => {
console.log(data);
setOrder(data);
});
}, [id]);
```
Here, after the component renders, the useEffect hook is called: it fetches the URL and handles the response. This hook is mostly used to fetch data from an API. When the data should be fetched for a specific id or email, we can set that value as a dependency in the array. Here, [id] is the dependency.
## - **Benefits of Hooks**
1. We can write many function in hooks, and use in other components easily.
2. It makes our component easy for reusability, readability and testing.
## - **What is custom hook, basically?**
A custom hook is nothing but a function. When we need to write specific logic for a project and reuse the same logic in other components, a custom hook plays an important role. We just need to create a JS file named, for example, 'useAuth.js' – we could use any name here, but starting with 'use' is simply the convention for naming a custom hook. The custom hook contains the necessary functions. I have chosen a 'useAuth.js' example because authentication is needed in every component.
**Example:**
```
import React, { useEffect, useState } from "react";

// react function that will be our custom hook
const useAuth = () => {
  // setting up a react hook; the initial value can be "" or null
  const [userName, setUserName] = useState("");

  const handleInputField = () => {
    // code contents
  };

  const handleForm = () => {
    // code contents
  };

  // returning the functions so they can be accessed in other
  // components when the custom hook is called
  return {
    handleInputField,
    handleForm,
  };
};

export default useAuth;
```
| zahidulislam144 |
938,411 | about_CRUD_operation | What are CRUD Operations: How CRUD Operations Work, Examples, Tutorials & More If you’ve ever... | 0 | 2021-12-28T11:19:28 | https://dev.to/cosmicraybd/aboutcrudoperation-40gc | What are CRUD Operations: How CRUD Operations Work, Examples, Tutorials & More
If you’ve ever worked with a database, you’ve likely worked with CRUD operations. CRUD operations are often used with SQL, a topic we’ve covered in-depth (see this article, this one, and this one for some of our recent SQL tips and tricks). Since SQL is pretty prominent in the development community, it’s crucial for developers to understand how CRUD operations work. So, this article is meant to bring you up to speed (if you’re not already) on CRUD operations.
The Definition of CRUD
Within computer programming, the acronym CRUD stands for create, read, update and delete. These are the four basic functions of persistent storage. Also, each letter in the acronym can refer to all functions executed in relational database applications and mapped to a standard HTTP method, SQL statement or DDS operation.
It can also describe user-interface conventions that allow viewing, searching and modifying information through computer-based forms and reports. In essence, entities are read, created, updated and deleted. Those same entities can be modified by taking the data from a service and changing the setting properties before sending the data back to the service for an update. Plus, CRUD is data-oriented and maps onto the standardized use of HTTP action verbs.
Screenshot Source: Oracle
Most applications have some form of CRUD functionality. In fact, every programmer has had to deal with CRUD at some point. Not to mention, a CRUD application is one that utilizes forms to retrieve and return data from a database.
The acronym CRUD was popularized by James Martin's 1983 book, Managing the Data Base Environment, and later appeared in Haim Kilov's 1990 article, "From semantic to object-oriented data modelling." Here's a breakdown:
CREATE procedures: Performs the INSERT statement to create a new record.
READ procedures: Reads the table records based on the primary key noted within the input parameter.
UPDATE procedures: Executes an UPDATE statement on the table based on the specified primary key for a record within the WHERE clause of the statement.
DELETE procedures: Deletes a specified row in the WHERE clause.
How CRUD Works: Executing Operations and Examples
Based on the requirements of a system, the varying users may have different CRUD cycles. A customer may use CRUD to create an account and access that account when returning to a particular site. The user may then update personal data or change billing information. On the other hand, an operations manager might create product records, then call them when needed or modify line items.
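That create/read/update/delete cycle can be sketched with a minimal in-memory store — a hypothetical stand-in for a real database table, with comments noting the SQL statement each function corresponds to:

```javascript
// Minimal in-memory CRUD store; a Map plays the role of a database table
const store = new Map();
let nextId = 1;

const create = (record) => {
  const id = nextId++;
  store.set(id, { id, ...record }); // CREATE ~ INSERT
  return id;
};
const read = (id) => store.get(id); // READ ~ SELECT ... WHERE id = ?
const update = (id, changes) => {
  // UPDATE ~ UPDATE ... WHERE id = ?
  store.set(id, { ...store.get(id), ...changes });
};
const remove = (id) => store.delete(id); // DELETE ~ DELETE ... WHERE id = ?

// A customer's cycle: create an account, then change billing information
const id = create({ name: "Ada", billing: "card" });
update(id, { billing: "invoice" });
console.log(read(id).billing); // "invoice"
remove(id);
console.log(read(id)); // undefined
```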
During the Web 2.0 era, CRUD operations were at the foundation of most dynamic websites. However, you should differentiate CRUD from HTTP action verbs. For example, if you want to create a new record you should use “POST.” To update a record, you would use “PUT” or “PATCH.” If you wanted to delete a record, you would use “DELETE.” Through CRUD, users and administrators had the access rights to edit, delete, create or browse online records.
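The pairing of CRUD operations with HTTP action verbs described above can be written out as a small lookup table. The SQL column and the GET entry for read are the conventional choices, added here for completeness:

```javascript
// Each CRUD operation with its conventional SQL statement and HTTP verb
const crud = {
  create: { sql: "INSERT", http: "POST" },
  read: { sql: "SELECT", http: "GET" },
  update: { sql: "UPDATE", http: "PUT" }, // or "PATCH" for partial updates
  delete: { sql: "DELETE", http: "DELETE" },
};

for (const [op, { sql, http }] of Object.entries(crud)) {
  console.log(`${op}: SQL ${sql} / HTTP ${http}`);
}
```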
An application designer has many options for executing CRUD operations. One of the most efficient choices is to create a set of stored procedures in SQL to execute operations. With regard to CRUD stored procedures, here are a few common naming conventions:
The procedure name should end with the implemented name of the CRUD operation. The prefix should not be the same as the prefix used for other user-defined stored procedures.
CRUD procedures for the same table will be grouped together if you use the table name after the prefix.
After adding CRUD procedures, you can update the database schema by identifying the database entity where CRUD operations will be implemented.
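As a hypothetical illustration of those conventions — the `usp` prefix and the `Customer` table are made up for this example — the generated procedure names might look like:

```javascript
// Build stored-procedure names: shared prefix, then the table name,
// then the CRUD operation name at the end, per the conventions above
const prefix = "usp"; // assumed prefix reserved for CRUD procedures
const table = "Customer"; // hypothetical table
const operations = ["Create", "Read", "Update", "Delete"];

const procedureNames = operations.map((op) => `${prefix}_${table}_${op}`);
console.log(procedureNames.join("\n"));
// usp_Customer_Create
// usp_Customer_Read
// usp_Customer_Update
// usp_Customer_Delete
```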
| cosmicraybd | |
938,521 | Create a slug system with Strapi v4 | Steps to create a slug system with Strapi v4 | 0 | 2021-12-28T15:09:52 | https://dev.to/elchiconube/create-a-slug-system-with-strapi-v4-1abm | strapi, strapiv4, slug, javascript | ---
title: Create a slug system with Strapi v4
published: true
description: Steps to create a slug system with Strapi v4
tags: strapi, strapiv4, slug, javascript
cover_image: https://images.pexels.com/photos/574069/pexels-photo-574069.jpeg?auto=compress&cs=tinysrgb&dpr=2&h=750&w=1260
---
Let's create a slug system with Strapi V4.
1 Create a new file following this structure
```
./src/api/[api-name]/content-types/[content]/lifecycles.js
```
We can control the lifecycle on this file, so we can transform our information on several events. Check the documentation.
2 Install slugify dependency
```
yarn add slugify
```
3 Add this code to your lifecycle file.
```
const slugify = require("slugify");
module.exports = {
beforeCreate(event) {
const { data } = event.params;
if (data.title) {
data.slug = slugify(data.title, { lower: true });
}
},
beforeUpdate(event) {
const { data } = event.params;
if (data.title) {
data.slug = slugify(data.title, { lower: true });
}
},
};
```
As you can see the slug is based on our title.
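The transformation slugify performs can be approximated in plain JavaScript. This is a hypothetical simplification — the real package also handles accents, symbol maps and locale rules:

```javascript
// Simplified slug function: lowercase, trim, replace runs of
// non-alphanumeric characters with hyphens, strip edge hyphens
const toSlug = (title) =>
  title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-+|-+$/g, "");

console.log(toSlug("Create a slug system with Strapi v4"));
// "create-a-slug-system-with-strapi-v4"
```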
That's it!
So easy
| elchiconube |
938,556 | try to post something | hi everyone, this is long from google. love to hear from you guys | 0 | 2021-12-28T15:37:57 | https://dev.to/truonglong88/try-to-post-something-22if | new, trypro, create | hi everyone, this is long from google.
love to hear from you guys | truonglong88 |
938,765 | Help please | Hello, I'm new here in the community and I need help please, I have an application in Reactjs,... | 0 | 2021-12-28T15:56:08 | https://dev.to/juandavidlasso/help-please-47j3 | react, javascript | Hello, I'm new here in the community and I need help please, I have an application in Reactjs, yesterday I update all my dependencies, and I run my application with npm start and I got these errors, can you help me please





 | juandavidlasso |