id
int64
5
1.93M
title
stringlengths
0
128
description
stringlengths
0
25.5k
collection_id
int64
0
28.1k
published_timestamp
timestamp[s]
canonical_url
stringlengths
14
581
tag_list
stringlengths
0
120
body_markdown
stringlengths
0
716k
user_username
stringlengths
2
30
846,352
Protected routes in react with react router and redux
Protected routes can only be accessed by authenticated users in an application. React-router and...
0
2021-10-02T06:11:16
https://dev.to/akanstein/protected-routes-with-react-router-and-redux-3e62
react, redux, routes, reactrouter
Protected routes can only be accessed by authenticated users in an application. React-router and redux have been winning a combination for a lot of SPA(single page applications), but for a newbie, figuring how to combine these two packages to implement a protected route can seem a bit complex. We'll be looking at how to implement protected routes with react-router and redux in a simplified manner. We'll assume you are familiar with react. If however, you're unfamiliar with react you can checkout [https://reactjs.org/docs/getting-started.html](https://reactjs.org/docs/getting-started.html). ####SETUP We'll start off by spinning up a react app with CRA(create-react-app). To learn more about CRA checkout [https://reactjs.org/docs/create-a-new-react-app.html](https://reactjs.org/docs/create-a-new-react-app.html). ``` npx create-react-app my-protected-app ``` ####Dependencies Redux is an open source library for managing state in centralised manner, it is very popular in the frontend community and a must for many dev roles. React router provides declarative routing for react. It is the go to library for react SPAs routing. Install these dependencies to get going started ``` yarn add react-router-dom redux react-redux or npm install react-router-dom redux react-redux --save ``` ####Setting up our app > NB: you can skip this part if your app is already setup First we'll create a `Home` component. ``` import React from "react"; const Home = () => { return ( <div className="App"> <h1>Welcome to my protected route!</h1> <h2>We've got cookies</h2> </div> ); }; export default Home; ``` Home view on browser ![Home page view](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o7hgv4cpe7kwff6d8gh1.png) Then we'll create a `Login` component from which the user logs in to access the home page. ``` import React from "react"; const Login = () => { return ( <div className="App"> <div className="login-form"> <h4 className="form-title">Login</h4> <div className="form-control"> <input type="text" name="username" placeholder="Username" /> </div> <div className="form-control"> <input type="password" placeholder="Enter password" name="password" /> </div> <button className="login-btn"> Login </button> </div> </div> ); }; export default Login; ``` Login view ![Login view](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pc3lii4hhwrocpd82ltj.png) Then we add the `style.css` for the app styling. ``` html { box-sizing: border-box; } *, *:before, *:after { box-sizing: inherit; } .App { font-family: sans-serif; text-align: center; height: 100vh; width: 100%; } .login-form { width: 450px; padding: 20px 25px; border-radius: 10px; border: solid 1px #f9f9f9; text-align: left; margin: auto; margin-top: 50px; background: #b88bf72f; } .form-title { margin-top: 0; margin-bottom: 15px; text-align: center; } .form-control { margin-bottom: 15px; width: 100%; } .form-control input { border-radius: 5px; height: 40px; width: 100%; padding: 2px 10px; border: none; } .login-btn { padding: 5px 10px; border-radius: 5px; border: none; background: rgb(60, 173, 239); color: #fff; } ``` ####Setting up redux Let's create `store` directory, then a `types.js` file in `src/store` to export our different store action types ``` export const LOGIN_USER = "LOGIN USER"; ``` Next, we'll create a `store.js` file in our `src/store` folder. Here we instantiate our store and it's initial state. ``` import { createStore } from "redux"; import { LOGIN_USER } from "./types"; const intitialState = { authenticated: false }; const reducer = (state = intitialState, action) => { switch (action.type) { case LOGIN_USER: return { ...state, authenticated: true }; default: return state; } }; const store = createStore(reducer); export default store; ``` Our initial state object contains an authenticated state which is false by default, indicating a user is not logged in. We see more on changing this state. Checkout [createStore](https://redux.js.org/api/createstore) to learn more about setting up redux. ####Setting up react-router In the `src/index.js` file ``` import React from "react"; import ReactDOM from "react-dom"; import { BrowserRouter as Router } from "react-router-dom"; import App from "./App"; ReactDOM.render( <React.StrictMode> <Router> <App /> </Router> </React.StrictMode>, document.getElementById("root") ); ``` We import `BrowserRouter` and wrap our `App` component in it. At this point we create our `ProtectedRoute` to handle verifying a user is authenticated before rendering the component. If the user is not authenticated we want to redirect them to the login page. ``` import React from "react"; import { useSelector } from "react-redux"; import { Redirect, Route } from "react-router-dom"; const ProtectedRoute = ({ path, exact, children }) => { const auth = useSelector((store) => store.authenticated); return auth ? ( <Route path={path} exact={exact}> {children} </Route> ) : ( <Redirect to="/login" /> ); }; export default ProtectedRoute; ``` We check the authenticated state in our redux store and render the component on the condition that it authenticated is `true`. Next, in our `App.js` we add `Switch` giving our app ability to switch components between routes. We also bring in our components, our protected route and set up our store in our App component. ``` import React from "react"; import "./styles.css"; import { Provider } from "react-redux"; import store from "./store"; import { Route, Switch } from "react-router-dom"; import ProtectedRoute from "./ProtectedRoute"; import Home from "./Home"; import Login from "./Login"; const App = () => { return ( <Provider store={store}> <Switch> <Route path="/login"> <Login /> </Route> <ProtectedRoute exact path="/"> <Home /> </ProtectedRoute> </Switch> </Provider> ); }; export default App; ``` ####Finishing up To finish things up, we modify our `Login` component with the ability to change the state at login. ``` import React, { useState } from "react"; import { useDispatch } from "react-redux"; import { useHistory } from "react-router"; import { LOGIN_USER } from "./store/types"; const Login = () => { const dispatch = useDispatch(); const history = useHistory(); const [inputs, setInput] = useState({ username: "", password: "" }); function inputChange(e) { setInput({ ...inputs, [e.target.name]: e.target.value }); } function login(event) { event.preventDefault(); if (!inputs.username || !inputs.password) return; dispatch({ type: LOGIN_USER }); history.push("/"); } return ( <div className="App"> <form onSubmit={login} className="login-form"> <h4 className="form-title">Login</h4> <div className="form-control"> <input type="text" name="username" placeholder="Username" onChange={inputChange} /> </div> <div className="form-control"> <input type="password" placeholder="Enter password" name="password" onChange={inputChange} /> </div> <button type="submit" className="login-btn"> Login </button> </form> </div> ); }; export default Login; ``` We use the `useDispatch` hook to dispatch actions to our redux store. Notice we use the `LOGIN_USER` type we create in our `store/types` in dispatch. We finally round by routing to the home route with the `useHistory` from react-router. Now as far as our inputs aren't empty we can login to the home page. From here, more can be done to add extra features, congrats on your protected route.
akanstein
846,406
Ultimate Guide to setup React Context API with a custom hook [Typescript]
This is a guide to help you set up React Context API with typescript. 🤨 What is React...
0
2021-09-30T10:09:35
https://damiisdandy.com/articles/ultimate-guide-to-setup-react-context-api-with-custom-hook-and-typescript/
typescript, javascript, react, tutorial
This is a guide to help you set up React Context API with typescript. ## 🤨 What is React Context API? Context is designed to share data that can be considered “global” for a tree of React components, This prevents [Prop drilling](https://kentcdodds.com/blog/prop-drilling) and allows you to pass data around your react component tree efficiently. There are external libraries like [Redux](https://kentcdodds.com/blog/prop-drilling) that help with this, but luckily react implemented a built-in feature called [React Context API](https://reactjs.org/docs/context.html) that does this perfectly. Let's dive in! 😁 ### Setup 🛠 To set up the project we need to first create a `create-react-app` application with the typescript template, To do this open up a terminal window and run the command ``` npx create-react-app context-typescript --template typescript # or yarn create react-app context-typescript --template typescript ``` Open the `context-typescript` directory in your favorite text editor like VS code and delete the following files within the `src` directory. * `App.css` * `App.test.tsx` or simply run the commands ``` cd context-typescript/src rm App.css App.test.tsx ``` Then open up the `App.tsx` file, clear everything within it and copy the following lines of code inside it. ```javascript // src/App.tsx import logo from './logo.svg'; function App() { return ( <div> </div> ); } export default App; ``` ### Declaring the Interfaces and types we'll use 🧩 Within the `react-app-env.d.ts` file we'll declare the [Interface](https://www.typescriptlang.org/docs/handbook/2/objects.html) for our global state, We will be building a To-do application in this example to illustrate the use of the context API. ```javascript // react-app-env.d.ts interface Todo { id: number; title: string; isCompleted: Boolean; createdAt: Date; } interface State { isDark: boolean; todos: Todo[]; } ``` ### Creating Our Context 🌴 Create a folder in the `src` directory called `context` within it create two files called `index.tsx` and `reducer.ts`. or run the commands ``` mkdir src/context cd src/context touch index.tsx reducer.ts ``` Within the `index.tsx` we'll create our context, global context provider, and our custom hook. In the `reducer.ts` we'll create our reducer function. Open up the `index.tsx` type the following ```javascript // src/context/index.tsx import { createContext, Dispatch, ReactNode, useContext, useReducer, } from "react"; // Initial State const initialState: State = { isDark: false, todos: [ { id: 0, title: "Prepare dev.to article ✍", createdAt: new Date("2021-09-28T12:00:00-06:30"), isCompleted: false, }, { id: 2, title: "Watch season 3 episode 2 of Attack on titans 👀", createdAt: new Date("2021-09-30T11:00:00-06:30"), isCompleted: false, }, ], }; ``` We simply just imported all that we'll be using in the file and initiated our initial state. Notice how we used the `State` interface. Before we create our Context let's first declare the `Interface` and `type` we'll be using for our context. Within the `react-app-env.d.ts` file add the following lines of code. ```javascript // react-app-env.d.ts ... type ActionTypes = 'TOGGLE_MODE' | 'ADD_TODO' | 'REMOVE_TODO' | 'MARK_AS_DONE'; interface Action { type: ActionTypes; payload?: any; } ``` We've just declared the `Action` interface and its respective types (`ActionTypes`) Now we can create our context, Add the following lines of code underneath the initial state we just declared in the `index.tsx` ```javascript // src/context/index.tsx ... // Create Our context const globalContext = createContext<{ state: State; dispatch: Dispatch<Action>; }>({ state: initialState, dispatch: () => {}, }); ``` We've already imported the `createContext` function and `Dispatch` interface, we also implemented our `Action` interface, and set the initial state to our `initialState` ### Creating the Reducer 📦 Before we create the reducer function lets the `Type` for our reducer function within the `react-app-env.d.ts` file ```javascript // react-app-env.d.ts ... type ReducerType = (state: State, action: Action) => State; ``` This is simply a function that takes in the `State` and `Action` and returns the `State`. Within the `reducer.ts` file, copy the function below. ```javascript // src/context/reducer.ts const reducer: ReducerType = (state, action) => { switch (action.type) { case "TOGGLE_MODE": return { ...state, isDark: !state.isDark } case "ADD_TODO": const mostRecentTodos = state.todos.sort((a, b) => b.id - a.id); return { ...state, todos: [ ...state.todos, { // generate it's id based on the most recent todo id: mostRecentTodos.length > 0 ? mostRecentTodos[0].id + 1 : 0, title: action.payload, isCompleted: false, createdAt: new Date(), } ] }; case "REMOVE_TODO": return { ...state, todos: state.todos.filter(el => el.id !== action.payload) } case "MARK_AS_DONE": const selectedTodo = state.todos.find(el => el.id === action.payload); if (selectedTodo) { return { ...state, todos: [...state.todos.filter(el => el.id !== action.payload), { ...selectedTodo, isCompleted: true, }] } } else { return state } default: return state; } } export default reducer; ``` Based on the `ActionTypes` type we previously initialized, we are using for the `switch` statement's `action.type` Because we are using Typescript our text editor or IDE helps us with IntelliSense for the action types. ![typescript intelliSense](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q36917so0a8ri7u5uhfz.png) ### Creating the Global Provider 🌐 Within the `index.tsx` file we'll import the reducer function we just created. ```javascript // src/context/index.tsx ... import reducer from "./reducer"; ... ``` Then we'll create the global provider that we'll wrap around our root component ```javascript // src/context/index.tsx ... // Provider to wrap around our root react component export const GlobalContextProvider = ({ children, }: { children: ReactNode; }) => { const [state, dispatch] = useReducer(reducer, initialState); return ( <globalContext.Provider value={{ state, dispatch, }} > {children} </globalContext.Provider> ); }; ``` We've previously imported `ReactNode` and `useReducer`. The `Provider` property is gotten from our previously created `globalContext`, We also added in the parameters `reducer` and `initialState` inside the `useReducer` hook, _(psst! picture `useReduer` as `useState` on steroids 💪)_. The `children` prop is simply the direct child component of `GlobalContextProvider` (our entire app). Now we simply just wrap the `GlobalContextProvider` around our root component within the `src/index.tsx` file Your code should look like this ```javascript // src/index.tsx import React from "react"; import ReactDOM from "react-dom"; import "./index.css"; import App from "./App"; import reportWebVitals from "./reportWebVitals"; import { GlobalContextProvider } from "./context"; ReactDOM.render( <React.StrictMode> <GlobalContextProvider> <App /> </GlobalContextProvider> </React.StrictMode>, document.getElementById("root") ); // If you want to start measuring performance in your app, pass a function // to log results (for example: reportWebVitals(console.log)) // or send to an analytics endpoint. Learn more: https://bit.ly/CRA-vitals reportWebVitals(); ``` ### Custom Hook 📎 We are going to create a hook that lets us access our global state and dispatch function anywhere in our component tree (react app). Before we do that let's create its `Type`, this is useful because it lets us use the power of Typescript. We'll declare this within the `react-app-env.d.ts` file like we always have. ```javascript // react-app-env.d.ts ... type ContextHook = () => { state: State, dispatch: (action: Action) => void; } ``` This is a function that simply returns an object that contains our global state and dispatch function. Now we create the hook within the `src/context/index.tsx` file ```javascript // src/context/index.tsx ... // Custom context hook export const useGlobalContext: ContextHook = () => { const { state, dispatch } = useContext(globalContext); return { state, dispatch }; }; ``` We previously imported the `useContext` hook, which takes in our `globalContext`. ### Using our custom hook Within the `App.tsx` file we'll import the `useGlobalContext` hook we just created. ```javascript // src/App.tsx import logo from './logo.svg'; import { useGlobalContext } from "./context"; function App() { const { state, dispatch } = useGlobalContext(); return ( <div> </div> ); } export default App; ``` With the power of typescript, we have IntelliSense to help us out. ![typescript intelliSense](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6nti2onhratkysign8l5.png) ![typescript intelliSense](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1kzx1z33eoomnurisjxy.png) That's all for this tutorial 🎉, This is my first article 😅, feedback will be nice, Be sure to comment down below if you have any questions, additions, or subtractions. The full source code to with project with a functioning todo application is linked below 👇👇 {% github damiisdandy/context-api-typescript no-readme %} Thank you for reading 🙏!
damiisdandy
846,412
Modernizing Amazon database infrastructure - migrating from Oracle to AWS | AWS White Paper Summary
1. Challenges with using Oracle databases Amazon started facing a number of challenges...
0
2021-09-30T08:40:20
https://dev.to/awsmenacommunity/modernizing-amazon-database-infrastructure-migrating-from-oracle-to-aws-aws-white-paper-summary-ej7
aws, database
#1. Challenges with using Oracle databases Amazon started facing a number of challenges with using Oracle databases to scale its services. ##1.1 Complex database engineering required to scale • Hundreds of hours spent each year trying to scale the Oracle databases horizontally. • Database shards was used to handle the additional service throughputs and manage the growing data volumes but in doing so increased the database administration workloads. ##1.2 Complex, expensive, and error-prone database administration Hundreds of hours spent each month monitoring database performance, upgrading, database backups and patching the operating system for each instance and shard. ##1.3 Inefficient and complex hardware provisioning • Database and the infrastructure teams expended substantial time forecasting demand and planning hardware capacity to meet it. • After forecasting, hundreds of hours spent in purchasing, installing, and testing the hardware. • Additionally, teams had to maintain a sufficiently large pool of spare infrastructure to fix any hardware issues and perform preventive maintenance. • The high licensing costs were just some of the compelling reasons for the Amazon consumer and digital business to migrate the persistence layer of all its services to AWS. #2. AWS Services Overview about the key AWS database Services. ##2.1 Purpose-built databases • Amazon expects all its services be globally available, operate with microsecond to millisecond latency, handle millions of requests per second, operate with near zero downtime, cost only what is needed, and be managed efficiently by offering a range of purpose-built databases. The three key database services to host the persistence layer of their services: [Amazon DynamoDB](https://aws.amazon.com/dynamodb) [Amazon Aurora](https://aws.amazon.com/rds/aurora/?aurora-whats-new.sort-by=item.additionalFields.postDateTime&aurora-whats-new.sort-order=desc) [Amazon Relational Database Service (Amazon RDS) for MySQL or PostgreSQL](https://aws.amazon.com/rds/) ##2.2 Other AWS Services used in implementation • [Amazon Simple Storage Service (Amazon S3)](https://aws.amazon.com/s3) • [AWS Database Migration Service](https://aws.amazon.com/dms) • [Amazon Elastic Compute Cloud (Amazon EC2)](https://aws.amazon.com/ec2) • [Amazon EMR](https://aws.amazon.com/emr) • [AWS Glue](https://aws.amazon.com/glue) ##2.3 Picking the right database • Pick the most appropriate database based on scale, complexity, and features of its service. • Business units running services that use relatively static schemas, perform complex table lookups, and experience high service throughputs picked Amazon Aurora. • Business units using operational data stores that had moderate read and write traffic, and relied on the features of relational databases selected Amazon RDS for PostgreSQL or MySQL. #3. Challenges during migration The key challenges faced by Amazon during the transformation journey. ##3.1 Diverse application architectures inherited • Amazon has been defined by a culture of decentralized ownership that offered engineers the freedom to make design decisions that would deliver value to their customers. This freedom proliferated a wide range of design patterns and frameworks across teams. Another source of diversity was infrastructure management and its impact on service architectures. ##3.2 Distributed and geographically dispersed teams • Amazon operates in a range of customer business segments in multiple geographies which operate independently. • Managing the migration program across this distributed workforce posed challenges including effectively communicating the program vision and mission, driving goal alignment with business and technical leaders across these businesses, defining and setting acceptable yet ambitious goals for each business units, and dealing with conflicts. ##3.3 Interconnected and highly interdependent services Amazon operates a vast set of microservices that are interconnected and use common databases. Migrating interdependent and interconnected services and their underlying databases required finely coordinated movement between teams. ##3.4 Gap in skills As Amazon engineers used Oracle databases, they developed expertise over the years in operating, maintaining, and optimizing them. Most service teams shared databases that were managed by a shared pool of database engineers and the migration to AWS was a paradigm shift for them. ##3.5 Competing initiatives Lastly, each business unit was grappling with competing initiatives. In certain situations, competing priorities created resource conflicts that required intervention from the senior leadership. #4. People, processes, and tools The following three sections discuss how three levers were engaged to drive the project forward. ##4.1 People One of the pillars of success was founding the Center of Excellence (CoE). The CoE was staffed with experienced enterprise program managers. The leadership team ensured that these program managers had a combination of technical knowledge and program management capabilities. ##4.2 Processes and mechanisms This section elaborates on the processes and mechanisms established by the CoE and their impact on the outcome of the project. **Goal setting and leadership review** It was realized early in the project that the migration would require attention from senior leaders. They used the review meeting to highlight systemic risks, recurrent issues, and progress. **Establishing a hub-and-spoke model** It would be arduous to individually track the status of each migration. Therefore, they established a hub-and-spoke model where service teams nominated a team member, typically a technical program manager, who acted as the spoke and the CoE program managers were the hub. **Training and guidance** A key objective for the CoE was to ensure that Amazon engineers were comfortable moving their services to AWS. To achieve this, it was essential to train these teams on open source and AWS native databases, and cloud-based design patterns. **Establishing product feedback cycles with AWS** This feedback mechanism was instrumental in helping AWS rapidly test and release features to support internet scale workloads. This feedback mechanism also enabled AWS to launch product features essential for its other customers operating similar sized workloads. **Establishing positive reinforcement** To ensure that teams make regular progress towards goals, it is important to promote and reinforce positive behaviors, recognize teams, and celebrate their progress. The CoE established multiple mechanisms to achieve this. **Risk management and issue tracking** Enterprise scale projects involving large numbers of teams across geographies are bound to face issues and setbacks. ##4.3 Tools Due to the complexity of the project management process, the CoE decided to invest in tools that would automate the project management and tracking. #5. Common migration patterns and strategies The following section describes the migration of four systems used in Amazon from Oracle to AWS. ##5.1 Migrating to Amazon DynamoDB – FLASH **Overview of FLASH** • Set of critical services called the Financial Ledger and Accounting Systems Hub (FLASH). • Enable various business entities to post financial transactions to Amazon’s sub-ledger. • It supports four categories of transactions compliant with Generally Accepted Accounting Principles (GAAP)—account receivables, account payables, remittances, and payments. • FLASH aggregates these sub-ledger transactions and populates them to Amazon’s general ledger for financial reporting, auditing, and analytics. ![FLASH](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/off6apeqwwplwnv49l56.png) >>>>>>>>>>>>>> _Data flow diagram of FLASH_ <<<<<<<<<<<<<< **Challenges with operating FLASH services on Oracle** FLASH is a high-throughput, complex, and critical system at Amazon. It experienced many challenges while operating on Oracle databases. **a. Poor latency** The poor service latency despite having performed extensive database optimization. **b. Escalating database costs** Each year, the database hosting costs were growing by at least 10%, and the FLASH team was unable to circumvent the excessive database administration overhead associated with this growth. **c. Difficult to achieve scale** As FLASH used a monolithic Oracle database service, the interdependencies between the various components of the FLASH system were preventing efficient scaling of the system. **Reasons to choose Amazon DynamoDB as the persistence layer** a. Easier to scale b. Easier change management c. Speed of transactions d. Easier database management **Challenges and design considerations during refactoring** The FLASH team faced the following challenges during the re-design of its services on DynamoDB: **a. Time stamping transactions and indexed ordering** After a timestamp was assigned, these transactions were logged in a S3 bucket for durable backup. DynamoDB Streams along with Amazon Kinesis Client Libraries were used to ensure exactly-once, ordered indexing of records. When enabled, DynamoDB Streams captures a time-ordered sequence of item-level modifications in a DynamoDB table and durably stores the information for up to 24 hours. Applications can access a series of stream records, which contain an item change, from a DynamoDB stream in near real time. After a transaction appears on the DynamoDB stream, it is routed to a Kinesis stream and indexed. **b. Providing data to downstream services** • Enable financial analytics. • FLASH switched the model to an event-sourcing model where an S3 backup of commit logs was created continuously. • The use of unstructured and disparate tables was eliminated for analytics and data processing. • The team created a single source of truth and converged all the data models to the core event log/journal, to ensure deterministic data processing. • Amazon S3 was used as an audit trail of all changes to the DynamoDB journal table. • [Amazon SNS](https://aws.amazon.com/sns/?whats-new-cards.sort-by=item.additionalFields.postDateTime&whats-new-cards.sort-order=desc) was used to publish these commit logs in batches for downstream consumption. • The artifact creation was coordinated using [Amazon SQS](https://aws.amazon.com/sqs/). The entire system is SOX compliant. • These data batches were delivered to the general ledger for financial reporting and analysis. **c. Archiving historical data** FLASH used a common data model and columnar format for ease of access and migrated historical data to Amazon S3 buckets that are accessible by Amazon Athena. Amazon Athena was ideal as it allows for a query-as-you-go model which works well as this data is queried on average once every two years. Also, because Amazon Athena is serverless. **Performing data backfill** [AWS DMS](https://aws.amazon.com/dms/) was used to ensure reliable and secure data transfer. It is also SOX compliant from source to target, provided the team granular insights during the process. ![DMS](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9n6t7hhi5y5b29rf5jjl.png) >>>>>>>>>> _Lift and shift using AWS DMS and RDS_ <<<<<<<<<< **Benefits** • Rearchitecting the FLASH system to work on AWS database services improved its performance. • Although FLASH provisioned more compute and larger storage, the database operating costs have remained flat or reduced despite processing higher throughputs. • The migration reduced administrative overhead enabling focus on optimizing the application. • Automatic scaling has also allowed the FLASH team to reduce costs. ##5.2 Migration to Amazon DynamoDB – Items and Offers **Overview of Items and Offers** • A system manages three components associated with an item – item information, offer information, and relationship information. • A key service within the Items and Offers system is the Item Service which updates the item information. **Challenges faced when operating Item Service on Oracle databases** The Item Service team was facing many challenges when operating on Oracle databases. **Challenging to administer partitions** The Item data was partitioned using hashing, and partition maps were used to route requests to the correct partition. These partitioned databases were becoming difficult to scale and manage. **Difficult to achieve high availability** To optimize space utilization by the databases, all tables were partitioned and stored across 24 databases. **Reaching scaling limits** Due to the preceding challenges of operating the Items and Offers system on Oracle databases, the team was not able to support the growing service throughputs. ![Item](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0dj7frnv3w0t2u09min5.png) >>>>>>>>>>>>>>> _Scale of the Item Service_ <<<<<<<<<<<<<<< **Reasons for choosing Amazon DynamoDB** Amazon DynamoDB was the best suited persistence layer for IMS. It offered an ideal combination of features suited for easily operating a highly available and large-scale distributed system like IMS. **a. Automated database management** **b. Automatic scaling** **c. Cost effective and secure** The following figure displays one of the index tables on Oracle that stored SKU to ASIN mappings. ![Oracle](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rz34ythkr39nfmsui1bs.png) >>>>>>>>>> _Table structure of Item Service on Oracle_ <<<<<<<<<< The following figure shows the equivalent table represented in DynamoDB. All other Item Service schemas were redesigned using similar principles. ![DynamoDB](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/szsy8ezndrsermo2ee16.png) >>>>>>>>> _Table structure of Item Service on DynamoDB_ <<<<<<<<< **Execution** After building the new data model, the next challenge was performing the migration. The Item Service team devised a two-phased approach to achieve the migration — live migration and backfill. **i. Live migration** * Transition the main store from Oracle to DynamoDB without any failures and actively migrate all the data being processed by the application. * The item Service team used three stages to achieve the goal: a. The copy mode: Validate the correctness, scale, and performance of DynamoDB. b. The compatibility mode: Allowed the Item Service team to pause the migration should issues arise. c. The move mode: After the move mode, the Item Service team began the backfill phase of migration that would make DynamoDB the single main database and deprecate Oracle. **ii. Backfill** • AWS DMS was used to backfill records that were not migrated by the application write logic. • Oracle source tables were partitioned across 24 databases and the destination store on DynamoDB was elastically scalable. • The migration has scaled by running multiple AWS DMS replication instances per table and each instance had parallel loads configured. • To handle AWS DMS replication errors, the process automated by creating a library using the AWS DMS SDK. • Finally, fine tune configurations on AWS DMS and Amazon DynamoDB to maximize the throughput and minimize cost. ![IMS](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/76oiw5je6m5ze2roznis.png) >>>>>>>>>>>>>>> _Backfill process of IMS_ <<<<<<<<<<<<<<< **Benefits** After the migration, the availability of Item Service has improved, ensuring consistent performance and significantly reduced the operational workload for the team. Also, the team used the point-in-time recovery feature to simplify backup and restore operations. The team received these benefits at a lower overall cost than previously, due to dynamic automatic scaling capacity feature. ##5.3 Migrating to Aurora for PostgreSQL – Amazon Fulfillment Technologies (AFT) **Overview of AFT** The Amazon Fulfillment Technologies (AFT) business unit builds and maintains the dozens of services that facilitate all fulfillment activities. A set of services called the Inventory Management Services facilitate inventory movement and are used by all other major services to perform critical functions within the FC. **Challenges faced operating AFT on Oracle databases** The AFT team faced many challenges operating its services on Oracle databases in the past. **a. Difficult to scale** All the services were becoming difficult to scale and were facing availability issues during peak throughputs due to both hardware and software limitations. **b. Complex hardware management** Hardware management was also becoming a growing concern due to the custom hardware requirements required from these Oracle clusters. ![AFT](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h0v2xqkjlvbcr8yv02tk.png) >>>>>>>>>>>>> _Databases services used by AFT_ <<<<<<<<<<<<< **Reasons for choosing Amazon Aurora for PostgreSQL** Picking Amazon Aurora for three primary reasons. a. Static schemas and relational lookups. b. Ease of scaling and feature parity. c. Automated administration. Before the migration, the team decided to re-platform the services rather than rearchitect them. Re-platforming accelerated the migration by preserving the existing architecture while minimizing service disruptions. **Migration strategy and challenges:** The migration to Aurora was performed in three phases: **a. Preparation phase** • Separate production and non-production accounts to ensure secure and reliable deployment. • Aurora offers fifteen near real-time read replicas while a central node manages all writes. • Aurora uses SSL (AES-256) to secure connections between the database and the application. Important differences to note are: i. How Oracle and PostgreSQL treat time zones differently. ii. Oracle and PostgreSQL 9.6 is different partitioning strategies and their implementations. **b. Migration phase** [AWS SCT](https://aws.amazon.com/dms/schema-conversion-tool/) was used to convert the schemas from Oracle to PostgreSQL. Subsequently DMS performed a full load and ongoing Change Data Capture (CDC) replication to move real-time transactional data. ![SCT](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/36s9pwoqr612zyf4mvmm.png) >>> _Steps in the migration of schemas using AWS SCT and AWS DMS_ <<< The maxFileSize parameter specifies the maximum size (in KB) of any CSV file used to transfer data to PostgreSQL. It was observed that setting maxFileSize to 1.1 GB significantly improved migration speed. Since version 2.x AWS DMS has been to increase this parameter to 30 GB. **c. Post-migration phase** Monitoring the health of the database becomes paramount in this phase.One important activity that must occur in PostgreSQL is vacuuming. Aurora PostgreSQL sets auto-vacuum settings according to instance size by default, but one size does not always fit all different workloads, so it is important to ensure auto-vacuum is working properly as expected. **Benefits** • After migrating to Amazon Aurora, provisioning additional capacity is achieved through a few simple mouse clicks or API calls reducing the scaling effort by as much as 95%. • High availability is another key benefit of Amazon Aurora. • The business unit is no longer limited by the input/output operations. ##5.4 Migrating to Amazon Aurora – buyer fraud detection **Overview** Amazon retail websites operate a set of services called Transaction Risk Management Services (TRMS) to protect brands, sellers, and consumers from transaction fraud by actively detecting and preventing it. The Buyer Fraud Service applies machine learning algorithms over real-time and historical data to detect and prevent fraudulent activity. Challenges of operating on oracle The Buyer Fraud Service team faced three challenges operating its services using on-premises Oracle databases. **a. Complex error-prone database administration** The Buyer Fraud Service business unit shared an Oracle cluster of more than one hundred databases with other fraud detection services at Amazon. **b. Poor latency** To maintain performance at scale, Oracle databases were horizontally partitioned. As application code required new database shards to handle the additional throughput, each shard added incremental workload on the infrastructure in terms of backups, patching, and performance. **c. Complication hardware provisioning** After capacity planning, the hardware business unit coordinated suppliers, vendors, and Amazon finance business units to purchase the hardware and prepare for installation and testing. Application design and migration strategy The Buyer Fraud Service business unit decided to migrate its databases from Oracle to Amazon Aurora. The team chose to re-factor the service to accelerate the migration and minimize service disruption. The migration was accomplished in two phases: **i. Preparation phase** • Amazon Aurora clusters were launched to replicate the existing Oracle databases. • A shim layer has built to perform simultaneous r/w operations to both database engines. • The business unit migrated the initial data, and used AWS DMS to establish active replication from Oracle to Aurora. • Once the migration was complete, AWS DMS was used to perform a row-by-row validation and a sum count to ensure that the replication was accurate. ![SHIM](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5obu46rwwilvsipz7erj.png) >>>> _Dual write mode of the Buyer Fraud Service using SHIM layer_ <<<< **ii. Execution phase** Buyer Fraud Service began load testing the Amazon Aurora databases to evaluate read/write latencies and simulate peak throughput events such as Prime Day. Results from these load tests indicated that Amazon Aurora could handle twice the throughput of the legacy infrastructure. **Benefits** • Performance, scalability, availability, hardware management, cloud-based automation, and cost. • AWS manages patching, maintenance, backups, and upgrades improved the application performance. • The migration has also lowered the cost of delivering the same performance as before. • The improved performance of Amazon Aurora has allowed to handle high throughput. • Buyer Fraud service was able to scale its largest workloads, support strict latency requirements with no impact to snapshot backups. • Hardware management has gotten exponentially easier with new hardware being commissioned in minutes instead of months. #6. Organization-wide benefits • Services that migrated to DynamoDB, saw significant performance improvements such as a 40% drop in 99th percentile latency, OS patching, database maintenance and software upgrades. • Additionally, the elastic capacity of preconfigured database hosts on AWS has eliminated administrative overhead to scale by allowing for capacity provisioning. #7. Post-migration operating model This section discusses key changes in the operating model for service teams and its benefits. ##7.1 Distributed ownership of databases • The migration transformed the operating model to one focused on distributed ownership. • Individual teams now control every aspect of their infrastructure including capacity provisioning, forecasting and cost allocation. • Each team also had the option to launch Reserved or On-Demand Instances to optimize costs based on the nature of demand. • The CoE developed heuristics to identify the optimal ratio of On-Demand to Reserved Instances based on service growth, cyclicality, and price discounts. • Focusing on innovation on behalf of customers. ##7.2 Career growth The migration presented an excellent opportunity to advance the career paths of scores of database engineers. These engineers who exclusively managed Oracle databases in data centers were offered new avenues of growth and development in the rapidly growing field of cloud services, NoSQL databases, and open-source databases.
haythammostafa
846,445
Getting Started with HTML
Get Started What do you need Any code editor and a browser on your 💻 Here are some...
0
2021-09-30T09:51:59
https://dev.to/nikiljos/getting-started-with-html-2n1d
html, webdev, beginners
### Get Started **What do you need** > Any code editor and a browser on your 💻 >>Here are some good options >> - [VSCode](https://code.visualstudio.com/) >> - [Atom](https://atom.io/) > >> And these are some really useful VSCode extensions >> - [Prettier](https://marketplace.visualstudio.com/items?itemName=esbenp.prettier-vscode) >> - [Live Server](https://marketplace.visualstudio.com/items?itemName=ritwickdey.LiveServer) ## Basics of HTML This is the structure of a basic HTML Document ```HTML <!DOCTYPE html> <html> <head> <title>Page Title here</title> </head> <body> Write anything here </body> </html> ``` Let me Break it down for you - `<!DOCTYPE html>` - Used to decalre that it is a HTML5 Document - `<html>` - The whole HTML document is written inside the the `<html>`and`</html>` tag - `<title>` - Used to define the title of your webpage - `<body>` - Everything you see inside the browser are written inside the `body` tag Now Here's the list of some really useful tags - `<div></div>` - `<p></p>` - `<span></span>` - `<b></b>` - `<i></i>` ## Here are some cool resources you could refer to - [W3Schools](https://www.w3schools.com/html/) - [MDN Web Docs](https://developer.mozilla.org/en-US/docs/Web/HTML)
nikiljos
846,518
The Red Team Chronicles — No such thing as a miracle solution
Red Team Chronicles — No such thing as a miracle solution Now you’ve met Philippe, let’s...
0
2021-10-11T13:51:04
https://medium.com/gitguardian/the-red-team-chronicles-no-such-thing-as-a-miracle-solution-6fdb2170463a
pentesting, cybersecurity, redteam, applicationsecurity
--- title: The Red Team Chronicles — No such thing as a miracle solution published: true date: 2021-06-24 14:52:51 UTC tags: pentesting,cybersecurity,redteam,applicationsecurity canonical_url: https://medium.com/gitguardian/the-red-team-chronicles-no-such-thing-as-a-miracle-solution-6fdb2170463a --- ### Red Team Chronicles — No such thing as a miracle solution ![](https://cdn-images-1.medium.com/max/1024/1*pjc6ZzHREPJQBLEkkOpCBQ.jpeg) _Now you’ve met Philippe, let’s talk about a very common misconception that security professionals may have: “I have already bought this “all-in-one” or “one-size-fits-all” solution, so now I should be safe.”_ ![](https://cdn-images-1.medium.com/max/1024/1*2DYpAfSFk4HlWsKYM_3k7A.png) ### Let’s listen to what Philippe has to say about this. When talking to various organizations, I regularly meet IT teams who would love to find THE solution. You know, this perfect tool that you would just have to install and you’d be protected against all possible attacks… As you can imagine, such a solution does not exist despite claims from unscrupulous vendors. **Quite often, we see security solutions that are not properly implemented or simply do not work as expected.** This often leads to generating too many false positives where normal events get miscategorized as security incidents and end up being ignored over time (until real security incidents occur). This is what we call **“security fatigue”**. On the other hand, some security solutions are not properly configured leading to false negative, where real security incidents do not even generate an alert. **When we successfully penetrate an organization’s IT infrastructure, either we do not trigger an alert because their notification systems have been disabled, or, they have so many alerts that the alert that really matters is lost in the mix and gets ignored.** This is one of the reasons why, one of our last phases after a successful compromise is to voluntarily perform actions that should generate alerts in order to measure the detection capabilities of our customers. And surprisingly, most of the time this does not trigger any response. For example, we create a new rogue domain admin user. This is typically easy to detect and could be an indicator that something wrong is happening: an admin user is not created every day, and its creation should follow a strict change management process. As such, these types of events should be under strict surveillance. The bottom line here is that security teams (and real-time monitoring solutions) should focus on compromise indicators rather than trying to look at everything all the time. They should also evaluate each solution to ensure that it does what it claims and they should run diagnostics to evaluate what solutions are needed **and how they should be implemented**. > _One piece of advice: Do not take for granted vendors’ claims. I cannot count the number of times we bypassed so-called “new generation security tools” with very basic techniques._ For instance, a few years back, one of our clients spent over 6 months (and a lot of money) to deploy a _new generation_ antivirus throughout their organization, and it took us less than 5 minutes to realize that it could be deactivated by simply uninstalling the application. This demonstrates a typical issue when organizations overprotect one door, but they leave another door wide open. As an attacker, if you run into a security solution that is efficient you simply try to go around it. ![](https://cdn-images-1.medium.com/max/1024/0*zlhO2_m2qEyUoG36.png) When evaluating your security posture, you should always try to look at your environment like an attacker would. This is why our slogan is “We protect you from people like us.” You should think of all possible entry points and have a holistic approach to cover all your bases, rather than try to build a fortress around your crown jewels. This is why running red team exercises internally or externally is important. _As you can see, Philippe has some interesting stories to share as well as some useful recommendations to make. If you are interested, keep following the Red Team Chronicles by subscribing to our newsletter or following us on twitter or LinkedIn._ [Checkout _Episode 3_](https://blog.gitguardian.com/security-illusion-of-the-fortress/)_!_ _Originally published at_ [_https://blog.gitguardian.com_](https://blog.gitguardian.com/red-team-chronicals-miracle-solution-philippe-caturegli/) _on June 24, 2021._ * * *
cwinqwist
846,523
Have you ever needed to import CSV files from your users? 3 super tools that that make this a breeze!
If you've ever tried to implement a CSV importer, you know how annoying it can be to devote expensive...
0
2021-09-30T11:50:43
https://dev.to/sangoitejas/have-you-ever-needed-to-import-csv-files-from-your-users-3-super-tools-that-that-make-this-a-breeze-16ap
webdev, csv, dataimporter, datauploader
If you've ever tried to implement a CSV importer, you know how annoying it can be to devote expensive development time to a feature only to see your users struggle with it. In certain cases, developers aim to improve the user experience by providing FAQs and tutorials that explain consumers how to use their importer appropriately. This, on the other hand, simply moves the burden from the product to the user. Users don't want to go through pages of instructions or watch video tutorials just to submit a simple spreadsheet. Here are 3 CSV importers that make CSV collection 10x faster: ## [1. csvbox.io](https://csvbox.io) CSVbox a no-code CSV importer that helps to implement a production-ready import feature in your web app in just a few minutes. Your users get an elegant upload experience as they upload CSV, Excel files, map columns and validate data all in the importer itself. You get ready-to-use clean data in your app, database or API. ## [2. Flatfile.com](https://flatfile.com) Flatfile Portal is the AI-powered data importer that makes self-serve data onboarding seamless. Flatfile's data onboarding platform ensures that companies can import data quickly and seamlessly and that it's clean and ready to use. CSV or XLS files are ingested and you can set target data models for data validation, allowing users to match incoming file data. With features like automatic column matching recommendations, Flatfile learns over time how data should be organized, which helps make the data onboarding process more efficient for your customers. Flatfile's validation features provide control over how data is formatted. ## [3. Papaparse](https://www.papaparse.com/) Papaparse is a fast and powerful CSV parser for the browser that supports web workers and streaming large files. Its a robust JavaScript library that is claimed to be the fastest in-browser CSV parser. This is your one-stop-shop for parsing CSV to JSON.
sangoitejas
846,557
GitHub integration for Orbit for anyone with multiple communities
Orbit is "mission control for your community", a single, shared view of members and activity. For...
0
2021-11-24T13:44:13
https://dev.to/floord/github-integration-for-orbit-for-anyone-with-multiple-communities-28he
github, orbit, community, actionshackathon21
[Orbit](https://orbit.love) is "mission control for your community", a single, shared view of members and activity. For my work for k6 I've been adding integrations to our Orbit "workspace" so that I can make correlations between data points, and to act accordingly. For instance: this person asking a question on Twitter, posted the same question in our Discourse, let's link to our explanation in our reply. Discourse after all allows for more characters than Twitter, and anyone passing by might learn about our community Discourse and check it out. I've used the [GitHub Actions workflow](https://www.github.com/orbit-love/github-actions-templates) for this, which is just adding a bunch of YAML files and "secrets" to a personal repo. ![YAML files in a repository](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/62rm7q2xd8ax02key5iq.png) Next I wanted to hook up our public repositories on GitHub, to scan those for activity, and I ran into an issue. k6 was recently acquired by Grafana Labs, and in August we moved all k6 repositories under the Grafana Labs organization on GitHub. Grafana uses Orbit to listen to all the repos in that organization, and it seems Orbit can't send data from one organization to two different workspaces. Nicolas Goutay, Sr. Software Engineer, Ben Greenberg, (their former) Developer Advocate, and Josh Dzielak, Co-Founder & CTO, responded unanimously to [my request](https://twitter.com/FloorDrees/status/1443137807560679431): it's just not possible. Yet. Now I think Orbit should give this feature request some priority, because I think that many larger (open source) players have multiple product communities they support - Grafana has Prometheus, Grafana itself, Tempo, etc, Red Hat has OpenShift, OpenStack, Ansible, HashiCorp has Consul, Terraform, Vagrant. Currently information on all these different project feed into one workspace and I can imagine the signal-to-noise ratio is... not great. Colleague, and Tempo maintainer, Daniel González Lopes to the rescue! Part of the SRE team at k6, Daniel helped me (ab)use the [GitHub API](https://docs.github.com/en/rest) to search GitHub for public Grafana repos with "k6" in the name, and get all the recent events for each one of them. From those events we create new members (or update them if they already exist), and create new activities for each one of them. While functional in the first iteration, Daniel spent some time to update the activity types to be more legible / without all the breadcrumb info. Before: ![Activity graph before](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d460bjeomuvpc0zjvq4k.png) After: ![Activity graph after](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xncdpi62apxdz4v41bpc.png) We're hoping tags in Orbit Activities will help to filter by repo in the future. But in Orbit's [reference material](https://docs.orbit.love/reference/post_-workspace-id-activities) tags are marked as "experimental". In all caps. Anyway, the idea is that we for this as well use GitHub Actions. We'll let the implementation simmer for a few weeks, make tweaks here and there, and the gradually add the other integrations. With every integration we add we expect to see duplication, and I'd rather deal with those on a case by case basis, than to go through 300+ possible matches at once. "Fun" anecdote: Daniel thought it'd be neat to use "starring a repo" as an activity as well, which exposed some rate limits on Orbit's end. We plan to run the script hourly, to avoid big data (not to be confused with Big Data) imports moving forward. Orbit's reports / graphs are pretty rudimentary, so I would love to see *someone* create an Orbit data source plugin for Grafana. Imagine being able to view community data with your other project's health metrics... Looking at [Marcus Olsson](https://twitter.com/marcusolsson) 👀 Our little integration [lives on GitHub](https://github.com/grafana/orbit-github-integration) for you to fork, copy, or take inspiration from.
floord
846,743
3 Sites for FREE UI KITS!
Save for later. uistore.design ui8.net uispace.net P.S....
0
2021-09-30T13:37:14
https://dev.to/deyrupak/3-sites-for-free-ui-kits-1ghg
webdev, productivity, design, uiweekly
_Save for later._ <br/> ### [uistore.design](https://www.uistore.design/) ![uistore](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9uy11pu2agd6r1978mwz.png) <br/> ### [ui8.net](https://ui8.net/) ![ui8](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gpx26d5ofywyr4z1d7ex.png) <br/> ### [uispace.net](https://uispace.net/) ![uispace](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tmiv5avpkvmq63eivwrt.png) <br/> P.S. Want the next post to be something specific? Do let me know in the comments. 🤘🏻 <br/> Connect with me : [Github](https://github.com/deyRupak) Support me : [Buy me a coffee!](https://paypal.me/deyrdx?locale.x=en_GB)
deyrupak
846,870
Começando com Spark
O Spark é uma ferramenta para processamento de dados em grande escala, escrita na linguagem de...
0
2021-09-30T17:38:33
https://dev.to/dgoposts/comecando-com-spark-g30
datascience, javascript
O Spark é uma ferramenta para processamento de dados em grande escala, escrita na linguagem de programação funcional **Scala**, possui foco em velocidade, facilidade de uso e análises sofisticadas. Netflix, Yahoo e eBay são algumas empresas que implementaram soluções através do Spark. ## Spark Ecosystem O ecossistema Spark inclui cinco componentes principais: Spark Streaming, MLlib e GraphX, Spark SQL e Spark Core. ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ifn6c9wmytyj3cp5z8yp.png) ### Spark Streaming O Spark Streaming facilita a criação de soluções de streaming escalonáveis e tolerantes a falhas. Ele traz a API integrada à linguagem Spark para o processamento de stream, para que você possa escrever jobs de streaming da mesma forma que os jobs em lote. O Spark Streaming oferece suporte a Java, Scala e Python, e apresenta semânticas "exatamente uma vez" com estado, prontas para uso. ### MLlib MLlib é a biblioteca de machine learning escalonável do Spark com ferramentas que tornam a ML prática escalonável e fácil. MLlib contém muitos algoritmos de aprendizado comuns, como classificação, regressão, recomendação e clustering. Também contém fluxos de trabalho e outros utilitários, incluindo transformações de recursos, construção de pipeline de ML, avaliação de modelo, álgebra linear distribuída e estatísticas. ### GraphX GraphX é a API Spark para gráficos e computação paralela a gráficos. É flexível e funciona perfeitamente com gráficos e coleções. Unifica extrair, transformar, carregar, análise exploratória, e computação gráfica iterativa em um sistema. Além de uma API altamente flexível, GraphX vem com uma variedade de algoritmos de gráfico. Ela compete em desempenho com os sistemas gráficos mais rápidos, mantendo a flexibilidade, tolerância a falhas e facilidade de uso do Spark. ### Spark SQL Spark SQL é o módulo Spark para trabalhar com dados estruturados que oferece suporte a uma maneira comum de acessar uma variedade de fontes de dados. Ele permite consultar dados estruturados dentro de programas Spark, usando SQL ou uma API DataFrame familiar. O Spark SQL oferece suporte à sintaxe HiveQL e permite o acesso a armazenamentos existentes do Apache Hive. O modo de servidor fornece conectividade padrão por meio de conectividade de banco de dados Java ou conectividade aberta de banco de dados. ### Spark Core O Spark Core é um mecanismo de processamento de dados distribuído de uso geral. Além disso, há bibliotecas para SQL, processamento de stream, machine learning e computação gráfica, sendo que todas elas podem ser usadas juntas em um aplicativo. O Spark Core é a base de todo um projeto, fornecendo despacho distribuído de tarefas, programação e funcionalidades básicas de E/S. --------------- That's all folks! ✌️
dgoposts
846,885
Introducing the First Set of Syncfusion .NET MAUI Controls
The wait is over! Syncfusion has rolled out its first set of .NET MAUI controls in its Essential...
0
2021-10-04T06:59:17
https://www.syncfusion.com/blogs/post/introducing-the-first-set-of-syncfusion-net-maui-controls.aspx
maui, dotnet, csharp
--- title: Introducing the First Set of Syncfusion .NET MAUI Controls published: true date: 2021-09-30 13:52:33 UTC tags: maui, dotnet, csharp canonical_url: https://www.syncfusion.com/blogs/post/introducing-the-first-set-of-syncfusion-net-maui-controls.aspx cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/reir5rtfn9147tlhiyiy.png --- The wait is over! [Syncfusion](https://www.syncfusion.com/ "Link to Syncfusion UI controls") has rolled out its first set of .NET MAUI controls in its [Essential Studio 2021 Volume 3 release](https://www.syncfusion.com/forums/169291/essential-studio-2021-volume-3-main-release-v19-3-0-43-is-available-for-download "Link to download Essential Studio 2021 Volume 3 release"). As you know, the [.NET multi-platform app UI (MAUI)](https://docs.microsoft.com/en-us/dotnet/maui/what-is-maui "Link to .NET MAUI documentation") is an evolution of Xamarin.Forms. It mainly focuses on single-project development for different platforms such as [Android](https://en.wikipedia.org/wiki/Android_(operating_system) "Link to Wikipedia page for Android (operating system)"), [iOS](https://en.wikipedia.org/wiki/IOS "Link to Wikipedia page for iOS"), [macOS](https://en.wikipedia.org/wiki/MacOS "Link to Wikipedia page for macOS"), and [Windows](https://en.wikipedia.org/wiki/Microsoft_Windows "Link to Wikipedia page for Microsoft Windows"). To fulfill your custom control requirements in the .NET MAUI platform, we are working hard to provide brand new controls that are fast, feature-rich, and flexible to use in your apps. To begin our journey in .NET MAUI, in this 2021 Volume 3 release, we came out with three new preview controls: - [Charts (Cartesian and circular charts)](#charts) - [Radial Gauge](#radial-gauge) - [Tab View](#tab-view) This blog post will be a quick introduction to these new controls. ## Charts The .[NET MAUI Charts](https://syncfusion.com/maui-controls/maui-charts "Link to .NET MAUI Charts") control is the perfect tool to visualize data. It has a high level of user interactivity that focuses on the development, productivity, and ease of use. Its rich feature set includes data binding, multiple axes, animations, data labels, tooltip, selection, and zooming. ![.NET MAUI Charts](https://www.syncfusion.com/blogs/wp-content/uploads/2021/09/NET-MAUI-Charts.png)<figcaption>.NET MAUI Charts</figcaption> ### Key features - **Chart types** : Cartesian and circular charts represent data in a unique style with crisp UI visualization in a user-friendly manner. - **Interaction** : You can easily interact with the .NET MAUI Charts with features such as tooltip, selection, zooming, and panning. - **Data binding** : The .NET MAUI Charts control map data from a specified path using the data-binding concept. - **Multiple series** : Simultaneously render multiple series with options to compare and visualize two different series. - **Customization** – You can easily customize the chart features like title, axes, legends, and data labels. **Note:** For more details, refer to [Learn More](https://syncfusion.com/maui-controls/maui-charts) – [User Guide](https://help.syncfusion.com/maui/cartesian-charts/overview) – [Download Free Trail](https://www.syncfusion.com/downloads/maui/) – [Github Samples](https://github.com/syncfusion/maui-demos). ## Radial Gauge The [.NET MAUI Radial Gauge](https://syncfusion.com/maui-controls/maui-radial-gauge "Link to .NET MAUI Radial Gauge") is a multi-purpose data visualization control. It displays numerical values on a circular scale. Its rich feature set includes axes, ranges, pointers, and annotations that are fully customizable and extendable. You can use this control to create speedometers, temperature monitors, dashboards, multi-axis clocks, watches, compasses, and more. ![.NET MAUI Radial Gauge](https://www.syncfusion.com/blogs/wp-content/uploads/2021/09/NET-MAUI-Radial-Gauge-2.png)<figcaption>.NET MAUI Radial Gauge</figcaption> **Key features** - **Axes** : The .NET MAUI Radial Gauge’s axis is a circular arc in which a set of values are displayed along a linear or custom scale. You can easily customize the axis elements, such as labels, ticks, and axis lines with built-in properties. - **Ranges** : Visual elements that quickly visualize where a value falls on the axis. - **Pointers** : Indicate values on an axis. The Radial Gauge has three customizable types of pointers: needle, marker, and range pointers. - **Pointer animation** : Animate the pointer in a visually appealing way when the pointer moves from one value to another. - **Pointer interaction** : You can drag a pointer from one value to another to change a value at runtime. - **Annotations** : Add multiple controls, such as text and images, as an annotation to a specific point of interest in the Radial Gauge. **Note:** For more details, refer to [Learn More](https://syncfusion.com/maui-controls/maui-radial-gauge) – [User Guide](https://help.syncfusion.com/maui/radialgauge/overview) – [Download Free Trail](https://www.syncfusion.com/downloads/maui/) – [Github Samples](https://github.com/syncfusion/maui-demos). ## Tab View The [.NET MAUI Tab View](https://syncfusion.com/maui-controls/maui-tab-view "Link to .NET MAUI Tab View") is a simple, intuitive interface for tab navigation in mobile applications, allowing users to switch between different tabs. ![.NET MAUI Tab View](https://www.syncfusion.com/blogs/wp-content/uploads/2021/09/NET-MAUI-Tab-View-2.png)<figcaption>.NET MAUI Tab View</figcaption> **Key features** - Nested tab support with different header placements. - Fixed and scrollable tab headers. - Image and text support for headers. - Customizable headers with different fonts and colors. **Note:** For more details, refer to [Learn More](https://syncfusion.com/maui-controls/maui-tab-view) – [User Guide](https://help.syncfusion.com/maui/tabview/overview) – [Download Free Trail](https://www.syncfusion.com/downloads/maui/) – [Github Samples](https://github.com/syncfusion/maui-demos) ## FAQs ### Are these controls migrated from Xamarin.Forms? No. These controls were developed from scratch with the .NET MAUI graphics library and framework layouts, themselves with improved APIs and performance. ### Is there any upgrade path from Xamarin to MAUI? We are planning to provide Xamarin.Forms controls that are compatible with the .NET MAUI framework. But we are also working to deliver these brand-new .NET MAUI controls that can be used in your migrated .NET MAUI project, with minimal breaking changes. We will update the migration document for each control so you can easily replace our Xamarin.Forms controls with our .NET MAUI controls in your app. As said before, the new .NET MAUI controls have some breaking APIs. If you are not willing to accept these breaking changes, then we will be providing those Xamarin.Forms controls with .NET MAUI framework compatibility (with the assembly name suffixed MauiCompat). This strategy is yet to be 100 % confirmed, though. We’d appreciate your ideas in the comments about this plan. Or please feel free to contact us through our [support forum](https://www.syncfusion.com/forums "Link to the Syncfusion support forum"), [Direct-Trac](https://www.syncfusion.com/support/directtrac/ "Link to the Syncfusion support system Direct Trac"), or [feedback portal](https://www.syncfusion.com/feedback/xamarin-forms "Link to Syncfusion Feedback Portal"). Based on your requirements, we will plan our priorities. ### What are the platforms supported by Syncfusion .NET MAUI controls? Syncfusion supports Android, iOS, macOS, and Windows. However, the preview controls support only Android, iOS, and macOS platforms. ### How do I get started with Syncfusion .NET MAUI controls? A good place to start would be our comprehensive [getting started with .NET MAUI controls documentation](https://help.syncfusion.com/maui/overview "Link to Getting started with .NET MAUI controls documentation"). ### Do I need to purchase a license for MAUI or will the Xamarin license give us a free MAUI license? Licenses are not needed for our preview controls. In the future, there will not be separate licenses for Xamarin and MAUI, as MAUI is an advanced version of Xamarin. ### When will it be fully ready for enterprise applications? As per the .NET MAUI [roadmap](https://github.com/dotnet/maui/wiki/Roadmap "Link to .NET MAUI Roadmap on GitHub"), the framework itself will be production-ready by Q2 2022. So, we’ll strive to make our controls production-ready along with the .NET MAUI GA release. But this is subject to change. ### Can I deploy an app that uses Syncfusion .NET MAUI controls to unlimited clients? The suite’s still in preview, but once it becomes production-ready, you can deploy apps that use Syncfusion .NET MAUI controls to unlimited clients. We only license on a per-developer basis and do not charge any runtime, royalty, or deployment fees. ## Coming soon In the 2021 Volume 4 release, you can expect the following preview controls in our .NET MAUI package: - List View - Scheduler - Linear Gauge - Signature Pad - Rating - Slider - Range Slider - Barcode ## Summary Thanks for reading! This is the first step in our .NET MAUI journey. Syncfusion’s support for .NET MAUI is still a work in progress. This is the first set of controls that are ready to show you. Details on these controls are also available on our Release Notes and What’s New pages. We are thankful for your great response to our [Xamarin UI controls](https://www.syncfusion.com/xamarin-ui-controls "Link to Xamarin.Forms UI controls"). Your support and feedback helped in making our Xamarin controls a market leader. You can expect almost all our Xamarin.Forms controls in our .NET MAUI suite, and they should perform even better in this platform. If you have any feedback or any special requirements or controls needed with Syncfusion’s .NET MAUI support, please let us know in the comments below. You can also contact us through our [support forum](https://www.syncfusion.com/forums "Link to the Syncfusion support forum"), [Direct-Trac](https://www.syncfusion.com/support/directtrac/ "Link to the Syncfusion support system Direct Trac"), or [feedback portal](https://www.syncfusion.com/feedback/xamarin-forms "Link to Syncfusion Feedback Portal"). We are always happy to assist you! **Related blogs** - [Reuse Xamarin.Forms Custom Renderers in .NET MAUI](https://dev.to/karthickramasamy08/how-to-reuse-xamarin-forms-custom-renderers-in-net-maui-18c8-temp-slug-8337623 "Link to How to Reuse Xamarin.Forms Custom Renderers in .NET MAUI blog") - [Create Custom Renderers for a Control in Xamarin.Forms](https://dev.to/kartik110895/how-to-create-custom-renderers-for-a-control-in-xamarin-forms-5e23-temp-slug-8469071 "Link to How to Create Custom Renderers for a Control in Xamarin.Forms blog") - [5 Advantages of .NET MAUI Over Xamarin](https://dev.to/syncfusion/5-advantages-of-net-maui-over-xamarin-3ca2 "Link to 5 Advantages of .NET MAUI Over Xamarin blog") - [Goodbye Xamarin.Forms, Hello MAUI!](https://dev.to/syncfusion/goodbye-xamarin-forms-hello-maui-g85 "Link to Goodbye Xamarin.Forms, Hello MAUI! blog")
sureshmohan
847,097
Which companies are doing DevRel?
Have you ever been asked or wondered - "How many companies have a DevRel program?" So have we, and...
0
2021-09-30T19:01:44
https://dev.to/carolinelewko/which-companies-are-doing-devrel-16h4
devrel
Have you ever been asked or wondered - "How many companies have a DevRel program?" So have we, and we never had a good answer, or a reference to point to so we created a Developer Relations Program Directory. Developer Products include APIs, SDKs, HDKs, tools, and reference designs. There are also Developer Services from hackathon organizers, recruiters and app stores. What's missing or inaccurate? Check it out and let us know. As of this post - we have 760 companies. [DevRel Program Directory](https://www.devrelbook.com/devreldirectory)
carolinelewko
847,114
Streaming Analytics Using FlinkSQL Webinar
I wanted to share some resources from today's talk. Documentation on Using Flink SQL on...
0
2021-09-30T20:13:30
https://dev.to/tspannhw/streaming-analytics-using-flinksql-webinar-3fa9
![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hwdw49jdyv7sg1nc7d6k.png) ![image](https://pbs.twimg.com/media/FAi3zPmUUAULRAv?format=jpg&name=large) ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l8s9uwzp5qchhi2e7qhc.jpg) I wanted to share some resources from today's talk. Documentation on Using Flink SQL on StreamNative Cloud * https://docs.streamnative.io/cloud/stable/compute/flink-sql-cookbook My source code for the EdgeAI IoT application * https://github.com/tspannhw/StreamingAnalyticsUsingFlinkSQL All the source code from the microservices applications * https://github.com/streamnative/streamnative-academy/tree/master/microservices-webinars Those free E-books we mentioned: * https://streamnative.io/en/download/manning-ebook-apache-pulsar-in-action/ https://streamnative.io/en/download/oreilly-ebook-mastering-apache-pulsar/ Upcoming Events: ![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9lm2e3nvrmhbs0cqkc1g.png) https://pulsar-summit.org/ ![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ug7amm82avf0k4u4hjp2.png) https://www.starburst.io/info/trinosummit/#agenda Just one last query to show you: select top1, avg(CAST (cputempf as double)) as avgcputempf, avg( CAST(gputempf as double)) as avggpttempf from jetsoniot2 /*+ OPTIONS('scan.startup.mode'='earliest') */ group by top1 ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4pjhslfw36y1d6su3wy9.jpg) ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mv3xjcegpse9urv7wn8x.png) ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7w2a49f6jp7t7ida1di4.png) Connect with Us! * https://streamnative.io/ * https://github.com/addisonj * https://twitter.com/addisonjh * https://twitter.com/paasDev * https://streamnative.io/webinars/ * https://streamnative.io/event/webinar-series-building-microservices-with-pulsar/ ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8i5bhjso9uh2mbyb2qfm.jpg)
tspannhw
847,116
Utiliser WebSockets avec React
Pour mon dernier projet, j'ai dû utiliser Websockets pour créer un site Web qui affiche des données...
0
2021-09-30T20:13:56
https://dev.to/muratcanyuksel/utiliser-websockets-avec-react-4061
react, websockets, javascript
Pour mon dernier projet, j'ai dû utiliser Websockets pour créer un site Web qui affiche des données de trading en temps réel. Je ne connaissais rien aux WebSockets, et il m'a fallu quelques heures redoutables pour commencer. C'est le seul problème, en fait, pour commencer; le reste est assez clair. Ce court article espère aider les autres à économiser le temps qu'il m'a fallu pour en comprendre les bases. La plupart des tutoriels sur le web mentionnent une syntaxe "require". Vous le savez. Lorsque vous souhaitez utiliser un certain outil, composant ou image dans votre composant en JS ou React, vous devez faire quelque chose comme `const qqch = require ("./dossier/qqch")` . Maintenant, comme je l'ai dit, la plupart des tutoriels que j'ai trouvés sur le Web commencent par cette syntaxe même, qui vous pousse à commencer à travailler avec WebSockets en utilisant la syntaxe require. C'est inutile, et peut-être même faux de nos jours. Je ne sais pas si cela fonctionne ou non, mais je suis certain que la façon dont j'utilise fonctionne parfaitement au moment où j'écris cet article le 09/12/2021. Alors, sans plus tarder, parlons de la façon dont nous pouvons utiliser ce protocole. Cet article suppose que vous ayez une connaissance pratique de Vanilla JS et de React.js, que vous sachiez gérer le format json et le code asynchrone. Je lance mon application avec vite (avec la commande suivante : npm init vite@latest et choisissez React dans le menu), mais vous pouvez utiliser votre propre structure, ou create-react-app. Cela n'a pas vraiment d'importance. Pour une introduction plus approfondie sur WebSocket, visitez [javascript.info](https://javascript.info/websocket) ## Qu'allons-nous construire ? Nous allons créer une application React.js très simple d'une page qui récupère les continuous-data de bitstamp.net et les affiche sur la page. Les données changeront tout le temps. Le service que vous utilisez n'a pas vraiment d'importance, tant qu'il s'agit de WebSockets, le reste est de Vanilla JS. ## Construire le logiciel Commençons par nous connecter au protocole WebSocket de bitstamp en écrivant le code suivant `const ws = new WebSocket("wss://ws.bitstamp.net");` Maintenant, en utilisant cette constant ws, nous pouvons nous abonner à n'importe quel canal défini sur le site Web de bitstamp et obtenir des données en temps réel à partir de là. Vous pouvez trouver toutes les informations concernant les canaux, les propriétés et tout à partir [d'ici](https://www.bitstamp.net/websocket/v2/) Maintenant, abonnez-vous à une chaîne. Je vais m'abonner au canal oder_book_v2 et spécifier que je veux voir les taux de change btc/usd. Cet appel est défini dans le guide de bitstamp. Vous pouvez changer le canal et les devises comme vous le souhaitez. Voici l'appel : ``` const apiCall = { event: "bts:subscribe", data: { channel: "order_book_btcusd" }, }; ``` L'étape suivante consiste à envoyer cet appel au serveur en l'ouvrant => ``` ws.onopen = (event) => { ws.send(JSON.stringify(apiCall)); }; ``` Maintenant, nous voulons faire quelque chose avec chaque donnée. Donc, chaque fois que nous recevons un message du serveur, nous ferons quelque chose. Écrivons un code asynchrone avec try/catch ``` ws.onmessage = function (event) { const json = JSON.parse(event.data); console.log(`[message] Data received from server: ${json}`); try { if ((json.event = "data")) { console.log(json.data); } } catch (err) { // whatever you wish to do with the err } }; ``` Si nous ouvrons la console, nous verrions une grande quantité de données provenant du serveur. C'est ça, en fait. Nous avons les données, elles arrivent dans un flux, et nous pouvons faire ce que nous voulons avec eux. C'est si facile. Cependant je veux afficher les données d'une manière particulière . Laissez-moi coller le code et je vous expliquerai immédiatement après : ``` import React, { useState } from "react"; function App() { //give an initial state so that the data won't be undefined at start const [bids, setBids] = useState([0]); const ws = new WebSocket("wss://ws.bitstamp.net"); const apiCall = { event: "bts:subscribe", data: { channel: "order_book_btcusd" }, }; ws.onopen = (event) => { ws.send(JSON.stringify(apiCall)); }; ws.onmessage = function (event) { const json = JSON.parse(event.data); try { if ((json.event = "data")) { setBids(json.data.bids.slice(0, 5)); } } catch (err) { console.log(err); } }; //map the first 5 bids const firstBids = bids.map((item) => { return ( <div> <p> {item}</p> </div> ); }); return <div>{firstBids}</div>; } export default App; ``` Alors, que se passe-t-il ici ? Comme vous pouvez le voir, il s'agit d'un composant très basique de l'application React.js. J'utilise useState hook donc je l'importe aussi avec react. Je définis l'état et lui donne une valeur initiale. Je procède comme indiqué précédemment, sauf que je définis l'état à json.data.bids (les offres étant une propriété du canal de live order et indiquées sur la page de bitstamp) et limite la quantité de données que je recevrai à 5, pour faciliter les choses. Je mappe les données que je reçois, enregistrées dans l'état (comme vous le savez, React demande une clé pour chaque élément. Je ne l'utiliserai pas ici. J'utilise généralement uniqid pour cela, vous pouvez le faire vous-même.) Je retourne les données mappées et voilà ! Si vous faites de même, vous devriez voir exactement 5 lignes de données en constante évolution à l'écran. J'espère que cet article aidera quelqu'un. Cordialement et continuez à coder!
muratcanyuksel
847,275
Mi nueva startup - Mensajería y paquetería express en Mérida, Yucatán
Ahora me voy a rifar un servicio de paquetería y mensajería express y una app para delivery...
14,836
2021-10-01T01:54:57
https://dev.to/g7b/mi-nueva-startup-mensajeria-y-paqueteria-express-en-merida-yucatan-3on2
startup, react, aws, nextjs
## Ahora me voy a rifar un servicio de paquetería y mensajería express y una app para delivery en Mérida, Yucatán. --- ### ¿Por qué? Pues la ciudad esta creciendo bastante y servicios como iVoy o 99minutos no dan servicio en esta ciudad y aunque estuvieran me vale madres. ### ¿Qué creo que se necesita? * **Clientes, jaja** ### No ya en serio, * Una landing page para ofrecer la info de los servicios. * Una landing para que se registren los repartidores. * Una app para rastreo de entregas y solicitud de recolección. * App para repartidores, para asignar los servicios y hacer el tracking en tiempo real. * Redes sociales, para esta startup chance con LinkedIn y Face la armamos. * Un help desk y centro de atención a clientes. * Unas 2 motos propias para las entregas. ### De RH, ¿qué se necesita? * 2 personas que entreguen * 1 persona de atención a clientes ### ¿Cuánto se va a cobrar? Mínimo $35 MXN hasta 4 km y $10 varitos el km extra. ### La plataforma, ¿qué debe tener? * En el home toda la info del servicio, qué se ofrece, dónde, por qué, cómo y cuánto cuesta. * Rastreo, * Solicitar recolección, * Para E-commerce, * Para Empresas, * Regístrate como mensajero o repartidor, * Mensajería express, * Contacto y ayuda, * Políticas y * Términos y condiciones. ### ¿Cómo se va a llamar? Ya lo tengo pero todavía no lo pongo aquí hasta que compre el dominio, capaz que lo compran nomás por chingar. Ya con eso... Todo lo que me falte ahí lo voy poniendo o lo ponen aquí abajo ↓
g7b
847,711
Pricing Table In Tailwind CSS
In this pricing table, we used Tailwind CSS components and utility classes. For making responsive we...
0
2021-10-01T10:00:35
https://dev.to/w3hubs/pricing-table-in-tailwind-css-4jp5
css, html, webdev, codenewbie
In this pricing table, we used Tailwind CSS components and utility classes. For making responsive we used responsive classes which are already in utility classes. Here we used responsive grid classes and text-color with font-size classes. Also, we used ul and li to show features in the list view. Make it yours now by using it, downloading it, and please share it. we will design more elements for you. [Source code](https://w3hubs.com/pricing-table-in-tailwind-css/)
w3hubs
847,732
Time is on my side: active time
I hope you hear the Rolling Stones song with the name of this blogpost in your head, and if you...
14,843
2021-10-01T14:02:38
https://dev.to/fritshooglandyugabyte/time-is-on-my-side-active-time-1k35
postgres, performance, internals, yugabyte
I hope you hear the Rolling Stones song with the name of this blogpost in your head, and if you don't: visit [youtube](https://youtu.be/sEj8lUx0gwY) or even if you hear it in your head you also might go there and play the song. Tuning a database is all about time, and understanding where time has gone. Sadly, there isn't a facility in postgresql that accounts time spent so that you can calculate the total active time and the average amount of sessions active. The indicator for the state of a postgres backend can be seen in `pg_stat_activity.state` in the database, and the states can be seen in the [pgstat.h header file](https://sourcegraph.com/github.com/yugabyte/yugabyte-db@v2.9.0/-/blob/src/postgres/src/include/pgstat.h?L721). The function that sets the value in pg_stat_activity.state is [pgstat_report_activity](https://sourcegraph.com/github.com/yugabyte/yugabyte-db@v2.9.0/-/blob/src/postgres/src/backend/postmaster/pgstat.c?L3028), and the state is used as an argument for this function. This means we can use perf to set a probe on pgstat_report_activity, and get the state from the function's argument list from the CPU register. My environment looks like this: - Alma linux 8.4 (x86_64). - Postgres 11.13 from the PGDG postgres yum repository. To set a perf probe on pgstat_report_activity and get the state argument use: `perf probe -x /usr/pgsql-11/bin/postgres pgstat_report_activity %di` After the probe is added, it will not do anything. For that you have to record the probe activity. This is done in the following way: `perf record -e 'probe_postgres:*'`. After execution, the perf executable is busy and recording any probe activity, until you press ctrl-C, which will write the result to a file in the current working directory called perf.data. If you want to record any execution of function pgstat_report_activity on the system, you can use perf record -e '..'. Additionally, you can use -p PID for recording just one backend process, or -p PID,PID for recording multiple backend processes. If you recorded activity you are not done yet. You can view the recorded output using `perf script`, and it looks something like this: ``` [root@alma-84 ~]# perf script postmaster 3288 [000] 6542.445581: probe_postgres:pgstat_report_activity: (6eea90) arg1=0x2 postmaster 3288 [000] 6542.445731: probe_postgres:pgstat_report_activity: (6eea90) arg1=0x1 postmaster 3288 [000] 6543.303748: probe_postgres:pgstat_report_activity: (6eea90) arg1=0x2 postmaster 3288 [000] 6543.303859: probe_postgres:pgstat_report_activity: (6eea90) arg1=0x1 postmaster 3288 [000] 6543.952743: probe_postgres:pgstat_report_activity: (6eea90) arg1=0x2 postmaster 3288 [000] 6543.952855: probe_postgres:pgstat_report_activity: (6eea90) arg1=0x1 postmaster 3133 [001] 6545.933558: probe_postgres:pgstat_report_activity: (6eea90) arg1=0x2 postmaster 3133 [001] 6545.933734: probe_postgres:pgstat_report_activity: (6eea90) arg1=0x1 postmaster 3133 [001] 6546.584847: probe_postgres:pgstat_report_activity: (6eea90) arg1=0x2 postmaster 3133 [001] 6546.584958: probe_postgres:pgstat_report_activity: (6eea90) arg1=0x1 postmaster 3133 [001] 6547.236539: probe_postgres:pgstat_report_activity: (6eea90) arg1=0x2 postmaster 3133 [001] 6547.236644: probe_postgres:pgstat_report_activity: (6eea90) arg1=0x1 ``` The first column is the name of the process, the second column is the PID; so I recorded two processes, the third column is the CPU number, fourth column is the relative timestamp, the fifth column is the probe group and name, the sixth column is the probe address, and the seventh column is the extra argument that I requested (%di, which is the CPU register that carries the argument), which indicates the state to which pgstat_report_activity set the backend to: 1=idle, 2=running. This means that this data needs to be processed in order to make the data meaningful. I created a small awk script called [postgres_activity.awk](https://gist.github.com/fritshoogland-yugabyte/eeb967a71cf41b42fe51027c1da07052) that processes the data, and prints it in a way that allows you to work with the data. It should be used in the following way: `perf script | ./postgres_activity.awk` From the above raw perf data, it creates the following overview: ``` [root@alma-84 ~]# perf script | ./dbtime.awk total time in file 4.791063 s 100.00 % pid: 3133 1.303086 s 27.20 % state idle 2 1.302694 s 99.97 % state running 3 0.000392 s 0.03 % pid: 3288 1.507274 s 31.46 % state idle 2 1.506901 s 99.98 % state running 3 0.000373 s 0.02 % ``` This allows you calculate some statistics that are important for calculating load: - The total time recorded in the file is 4.791063 seconds. - The total active time is the sum of the running states: 0.000392+0.000373=0.000765. That is the socalled dbtime. - This means that the average amount of activity is: 0.000765/4.791063=0.0001596722898. That is the socalled average active session time. Of course the above numbers are silly, but the principal function of using these numbers isn't: average active session time can be directly used to match with the number of CPUs to see how much CPU load is produced by postgres. In order to remove the probe, execute: `perf probe --del 'probe_postgres:pgstat_report_activity'` Or even simpler, if you got one probe or want to remove all probes: `perf probe --del '*'`
fritshooglandyugabyte
847,777
Flash messages with Hotwire
I’ll show you how to add flash messages to Rails, using a simple stimulus controller to auto dismiss...
14,845
2021-10-02T15:35:30
https://dev.to/phawk/flash-messages-with-hotwire-2o15
rails, hotwire, tutorial, stimulus
I’ll show you how to add flash messages to Rails, using a simple stimulus controller to auto dismiss them and some basic styling with tailwind css. {% youtube gk_qDsKMIrM %} --- ## Summary of what was done ### `app/views/layouts/application.html.erb` ```erb <!DOCTYPE html> <html> <head> <!-- ... --> </head> <body class="text-gray-600"> <%= render partial: "shared/flash" %> <!-- ... --> </body> </html> ``` ### `app/views/shared/_flash.html.erb` ```erb <% flash.each do |key, value| %> <div data-controller="flash" class="flex items-center fixed top-5 right-5 <%= classes_for_flash(key) %> py-3 px-5 rounded-lg"> <div class="mr-4"><%= value %></div> <button type="button" data-action="flash#dismiss"> <svg xmlns="http://www.w3.org/2000/svg" class="h-6 w-6" fill="none" viewBox="0 0 24 24" stroke="currentColor"> <path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M6 18L18 6M6 6l12 12" /> </svg> </button> </div> <% end %> ``` ### `app/javascript/controllers/flash_controller.js` ```js import { Controller } from "@hotwired/stimulus"; export default class extends Controller { connect() { setTimeout(() => { this.dismiss(); }, 5000); } dismiss() { this.element.remove(); } } ```
phawk
847,817
A cap of tea
A story about distributed systems, hype-driven design and the Socratic hardships of friendship ...
0
2021-10-05T19:56:13
https://dev.to/pzavolinsky/a-cap-of-tea-2io5
distributedsystems, tea
_A story about distributed systems, hype-driven design and the [Socratic hardships](https://en.wikipedia.org/wiki/Socratic_method) of friendship_ ## Fancy some tea? - Me: Hey, fancy some tea? - You: Sure, I'm always up for some tea! - Me: Cool, let's share a cap of tea - You: You mean a _cup_ of tea right? - Me: No no, I'm talking about a _cap_ of tea, hear me out. If only there was a way for us to pick the right time to meet and share some tea, right? Wonder no more... - You: [oh, no, not another pitch] ... ## The pitch "What is more powerful than the synergy of tea? Can we leverage the power of time to accelerate and optimize the engagement of the face time tea experience? Surely we can level up the tea experience to the next gen, to a rockstar, uber viral, vision.", I blurt. "Wow, what a load of nonsense", you think, you feel a lot dumber just by having heard all that stuff. Those brain cells are not coming back. An idea pops in to your head [Patent pending]: "The BS compressor". A lossy compression algorithm capable of boiling down all that nonsense into a simple "make tea better" or something. You start honing your time warping skills to see if you can start piping your live meetings through the BS compressor and gain precious hours of your life back. You keep daydreaming of a better world with less fuzz and start thinking that maybe the problem with the world is that we are using the [wrong font](https://www.sansbullshitsans.com/). "Hey, are you with me?", I say while poking you in the arm. "Sure, let's get this over with" ## The product "...as I was saying, we'll have this multi-tiered microservice architecture, where the service mesh communicates via an event bus ripe for enrichment..." "Wait a minute, what does this product _do_?" "What do you mean? The services, the bus, the replicas, the protocols, the consensus...", I trail off, confused. A small tear runs through my cheek. "You mentioned tea?" "Right tea. There's tea in there somewhere, but did I tell you about the orchestrator?" You can feel the pressure building inside your head. The [user-centered](https://en.wikipedia.org/wiki/User-centered_design) designer in you wants to scream in rage. After some effort you manage to compose yourself. "But what about the user experience? What is this product _for_? Why would _I_ use it?" you manage to say while swallowing sadness. "Nah, don't worry about that. You know what's the best way to spice a depressing [NPS](https://en.wikipedia.org/wiki/Net_promoter_score), to cheer up a sad Kano (no, not the [violent cyborg](https://en.wikipedia.org/wiki/Kano_(Mortal_Kombat)), the [other one](https://en.wikipedia.org/wiki/Kano_model))? You're right, a beefy architecture diagram. An 8pt Courier New thing of beauty depicting in excruciating detail every little aspect of your architecture! Users love that stuff" "Sure, whatever, what makes this idea so unique?" "I'm glad you asked!" "I bet you are" ## The impossible trifecta "Unlike previous unsuccessful attempts my idea captures the three fundamental properties of the best tea..." "Color, aroma and flavor?" "Wrong, you coffee-drinking muggle: consistency, availability and partition-tolerance" "That's some powerful tea you're brewing" you say rolling your eyes. "I sure am, let me tell you all about these wonderful properties" I reply completely immune to sarcasm. "I can't wait" ## Parsley and wood chippings "When you've been in the tea industry for as long as I have, you know that that fancy green chai is _expensive_. No one in their right mind would serve that to customers. Instead we just brew the chai with parsley or, in dire times, with wood chippings and green food coloring. But what if two customers approach the counter at the same time and order a green chai? It'd be very bad for business if one of them would get the parsley chai while the other gets the wood chipping chai. That's just _wrong_" "_That_'s what's wrong?" you ask, making a mental note of never _ever_ drinking my tea. "Anyway, I thought we were talking of a computer system, all that microservice stuff and all, but now it looks like you are opening a tea shop..." "Pivot or die, my friend, pivot-or-die" ## A guy named Chad "How can you make sure every server knows which chai recipe to use at any given time?" you ask. "Piece of cake, we just have a single server. That guy really loves his tea. He works 24/7 non-stop. When he's feeling drowsy he just takes one on the house. No one knows his name or where he came from so we just call him Chad. Whenever we need to change the chai recipe we just tell Chad: hey Chad, we're running low on the good stuff, stop brewing parsley and switch to wood chippings. Sure thing, boss!" Not knowing where to begin to express all the kinds of wrong here, you just opt to stay away from it and just ask: "So your solution to the [consistency problem](https://en.wikipedia.org/wiki/Consistency_model) is having a single Chad?" "Clearly, single Chad, zero fuzz" ## Ouroboros' queue "But wait, what happens if it's rush hour and lots of customers want their tea at the same time? Or, even worse if Chad collapses under the pressure." you are truly concerned now. "Nah, that Chad has the immune system of a horse with a self-patching kernel. He can take it, but just in case, we are hiring a bunch of other servers, so that in case he's out or something we can still serve our customers" Now that's more like it, multiple servers, some redundancy, higher resiliency to Chad nonsense, but you clearly see where things are going, so you ask: "But if you hire multiple servers how can you make sure they all follow the same chai recipe?" ## An army of Chads "Oh, you're gonna love this! We hired a bunch of servers. They all have colorful back stories, interesting personalities and unique network addresses, but to keep things simple I just call them all Chad. Even better, they all have fancy Bluetooth ear pieces that keep them communicated at all times, so I can scream 'Chad, wood chipping time!' in the mic and all of them answer in unison: 'Sure thing, boss!', It's beautiful, I tell you" "OK, let me get this straight: your approach to [availabilty](https://en.wikipedia.org/wiki/Availability) is hiring a bunch of random people, calling them all 'Chad' and relying on some Bluetooth dongles to transmit the stuff you bark over the microphone?" "Right you are" At this point you are tempted to ask "how do the Chads make sure all of them use the same recipe?" but you know me all too well to fall down this [consensus](https://en.wikipedia.org/wiki/Consensus_(computer_science)) rabbit hole. So instead you ask your original question again: "But wait, you told me your _consistency_ strategy was 'Single Chad, zero fuzz', but to have proper _availability_ you now hired an 'Army of Chads', so what happens now?" ## AC / DC "Worry not, that was before, now that my Chads have their Bluetooth pieces the sky is the limit. Bluetooth is flawless, you know" "Right, flawless...", you say with more eye rolling. Now that you think about it, your last expression of sarcasm seems like a lifetime ago. "Didn't you mention partition tolerance?" ## Denial as an architectural pattern "Yeah, partition tolerance, that is _so overrated_" "Wait, what?!" "To be honest, I don't believe in it" "Hold on, you don't _believe_ in network parti..." you trail off. At this point in our friendship you can spot a tangent from miles away so instead you try a more delicate approach. "Let's say that a Chad runs out of juice on his Bluetooth dongle, or maybe walks in front of a microwave or something, what happens then?" ## A rock and a hard place "Hmm, let me think about it... well clearly we'd like to avoid that whole parsley/wood chipping debacle, so that Chad should stop serving tea until he can get his dongle in working condition..." "Yeah, that looks like a very _consistent_ approach, but what about all the angry customers queuing in front of your broken Chad?" "Hmm, you're right, perhaps a better approach would be for that Chad to keep using the last known chai recipe, that way we can keep serving customers and everyone is happy..." "Ah, the _available_ approach, I like it. Although theoretically you could be serving two different types of chai at any given moment" You can see my face in slow-mo as it dawns on me "so you're saying that because I went for that crappy Bluetooth stuff, that now I must decide between angry tea-less customers and the parsley/wood chipping debacle?" ## The crux of CAP Frankly you are surprised it took me so long to see the writing on the walls. Network partitions are unavoidable whether because your Bluetooth ran out of battery, because the mailman was being chased by your neighbor's dog and dropped your letter, because you moved to too far from your wifi access point or because of a faulty network card. And, when network partitions _do_ occur, you are left with a tough choice: - Drop _availability_ in favor of _consistency_ and _partition tolerance_ (what's usually called a `CP` system) leaving you with a bunch of angry tea-less customers. - Drop the [strong consistency](https://en.wikipedia.org/wiki/Strong_consistency) guarantee leaving you with a system that's both _available_ and _partition tolerant_ (i.e. an `AP` system) but serves all kinds of weird chai. In the end you feel bad to see me so heartbroken so you give me some kind words of comfort: "Hey, don't worry, it's not so bad, I'm sure 'A [CAP](https://en.wikipedia.org/wiki/CAP_theorem) of tea' will be a big success, you just need to replace that Bluetooth crap with something more reliable, like carrier pigeons"
pzavolinsky
848,034
Mac 設定 Part 2 -- VS Code 個人用設定
why 私はこれらを必ずカスタムして使う。 なので初期設定用に記事しておく。 VSCode Extensions 5 つ 設定ファイルの編集での設定 5 つ ...
19,271
2021-11-01T06:34:55
https://dev.to/kaede_io/my-vs-code-setting-2a55
vscode
# why 私はこれらを必ずカスタムして使う。 なので初期設定用に記事しておく。 1. VSCode Extensions 5 つ 2. 設定ファイルの編集での設定 5 つ --- --- # Extensions 5 つ --- ## 1. Vim 私には必須。 hjkl での移動や yy dd pp でのコピー削除貼り付けができるようになる jj で escape する方法は後述 --- ## 2. Rainbow Brackets ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lruw46xqqqlyp42goksw.png) ブラケットのペアごとに赤、緑、青、黄色、と色が変わるようなる まとまりが分かりやすくなる --- ## 3. indent-rainbow IntelliJ と同じく、インデントもいける ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fsxjs3bi7frusmwzgvqf.png) IntelliJ と違って再起動必要なくて嬉しい --- ## 4. Rainbow CSV Indent-Rainbow の CSV 版。 こちらは少し眩しいかも。 カラムが増えると、そもそも Excel の方が見やすい。 --- ## 5. Spell Checker https://qiita.com/f0lst/items/400b01e430d06f0be690 typo 負債が残るのを防ぐ --- --- # 設定ファイルの編集 --- ## ファイルの位置 Mac ユーザーだと ```ruby User/{userName}/Application Support/Code/ ``` この位置に settings.json ファイルがある。 これが全てのプロジェクト共通の設定。 プロジェクトファイルごとの `{projectName}/.vscode/` にも作れる。 --- ## テキストファイルの GUI からの開き方 左下の歯車をクリックすると設定は GUI モードで弄れる。 ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2c213jgzhanjh6rr8buj.png) 隠し要素として、テキストファイル自体も編集できる。 右上のこのファイルが回転してるアイコンをクリックすることで。 --- # VScode おすすめ設定 5 つ ## 0. 結論 ```json { "files.autoSave": "afterDelay", "window.zoomLevel": 1, "editor.fontSize": 16, "editor.mouseWheelZoom": true, "editor.tabSize": 2, "vim.insertModeKeyBindings": [ { "before": ["j", "j"], "after": ["<Esc>"] } ], } ``` ## 1. Mac VScode Vim で jk 長押しスクロールする Mac の VSCode では長押しが無効化?されている。 下記のコマンドを打って再起動する。 すると有効化される。 Vim で j/k の長押しでスクロールできるようになる。 ```json defaults write com.microsoft.VSCode ApplePressAndHoldEnabled -bool false ``` 参考 https://book-reviews.blog/fix-problem-to-press-and-hold-keyboard-on-VSCode/ ## 2. タブを 2 スペースに ```json "editor.tabSize": 2, ``` これが一番コンパクトに見える。 --- ## 3. vim 拡張で jj が esc になるようにする かなり快適になる。 ターミナルと共通化している。 ```ruby "vim.insertModeKeyBindings": [ { "before": ["j", "j"], "after": ["<Esc>"] } ], ``` ## 4. エディタのフォントサイズ ```ruby "editor.fontSize": 16, "editor.mouseWheelZoom": true ``` デフォルトのサイズを 11 から 16 CMD + スクロールで調整できるように。 --- ## 5. ウインドウの拡大 サイドバーや上部のタブを大きくできる。 ```ruby "window.zoomLevel": 1, ``` https://github.com/microsoft/vscode/issues/25967#issuecomment-333638737 1 で 20% 大きくなるらしい。 ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rhfxjb0i5x7azsimqdui.png)
kaede_io
848,036
A first update on our salary survey
🎉 We have a little update on our salary survey which we launched roughly three months ago (check out...
0
2021-10-01T14:47:41
https://dev.to/infosec_jobscom/a-first-update-on-our-salary-survey-28g1
security, cybersecurity, opendata, career
🎉 We have a little update on our salary survey which we launched roughly [three months ago](https://insights.infosec-jobs.com/share-your-salary-and-see-what-everyone-else-is-making-in-infosec-the-cyber-security-space/) (check out [https://salaries.infosec-jobs.com/](https://salaries.infosec-jobs.com/) if you haven’t yet) and needless to say we’re still pretty excited about it. About four weeks after the launch we enabled the [download](https://salaries.infosec-jobs.com/download/) feature on the site so everyone can get the latest dataset in JSON and CSV format. Furthermore there’s now a weekly sync of these results to a dedicated [github repo](https://github.com/foorilla/infosec-jobs-com-salaries) as well. As initially announced, but not yet implemented during that time, we built our own [FX data API](https://fxdata.foorilla.com/) to provide free and public currency data (yes, you can use it as well if you like!) for the Forex calculations taking place on the dataset in the salary_in_usd column. This is because we allow people to fill in their annual salary in their home or actually paid out currency and then do the work for you to translate that into its corresponding USD amount (yearly average) for better comparability/reference, with data provided by the [Bank for International Settlements](https://www.bis.org/) (🏦 the bank for the central banks, basically). Well, it’s always fascinating how much effort can go into something seemingly simple like a salary survey (hint: way more than you anticipated). But still, it looks like it’s worth the effort. We also put in some more descriptive information on the [download page](https://salaries.infosec-jobs.com/download/) about what each column in the dataset represents or how to interpret it. Should be pretty straight forward by now, and hopefull very easy to work with. Now the plan is to keep this site up there indefinitely for the future to collect remote work salary information year by year on an ongoing basis. With this in mind it should be a good reason now to share this with your colleagues and friends if you haven’t done so yet. 😉 It’ll be very interesting to see how much data we can gather in the long term, and also keep in mind that all this is in the [public domain](https://creativecommons.org/publicdomain/zero/1.0/) (though mentioning the data came from us would be nice and also increases the amount of data available to share). Meaning it’s free to use by anyone for anything. 🙂 Last but not least: Many thanks to all of you who filled out the survey form and shared the site with others. That’s pretty awesome! 💪 *This post appeared first on https://insights.infosec-jobs.com/a-first-update-on-our-salary-survey/*
infosec_jobscom
848,090
Blog Post of My Learning
The week of September 20, 2021 saw me get frustrated and confused as I continued with my learning of...
0
2021-10-01T16:40:45
https://dev.to/edouble79/kdfjdklsjf-2nh2
The week of September 20, 2021 saw me get frustrated and confused as I continued with my learning of Ruby in Codecademy. I learned "Printing the Output" which uses string methods to capitalize. "Control Flow in Ruby" was interesting because I learned about Environment, If and True expressions, white space, and Else. During this week I also learned how to use gets.chomp which was difficult because of the language like I mentioned before (former school teacher). I then studied about Loops & Iterators, Infinite Loops. As my mentor Gino Capio mentioned before, "Erik, take breaks and do not forget to reach out in Coolcats group.
edouble79
848,329
Hacktoberfest: Contribute to our temporal database system
We are a (very) small team working on a database system in our spare time (https://sirix.io |...
0
2021-10-01T21:34:10
https://dev.to/johanneslichtenberger/hacktoberfest-contribute-to-our-temporal-database-system-5hf2
hacktoberfest, database, java, kotlin
We are a (very) small team working on a database system in our spare time (https://sirix.io | https://github.com/sirixdb/sirix). {% github sirixdb/sirix %} It began as a research system at the University of Konstanz and was the main focus of two PhD thesis and several bachelor and master thesis. Johannes, the current maintainer worked on the system for his bachelor as well as master thesis. Furthermore, he also contributed as a research assistent. The system first of all builds a trie based index over all currently stored revisions. To efficiently reconstruct a revision the timestamps and the offsets into the log-file, the main storage, are written to a revision file. Second, the main document index is referenced from the revision roots. Furthermore, the system stores a path summary as well as secondary indexes in subtrees of the revision root pages. The tree of in-memory indexes are mapped to a sequential log-file during a `commit` in a postorder traversal. The parent pages store hashes of their children in references to the child pages. This can be used to check if data has correctly been stored in the future. Another idea is to version the data leaf pages, thus that not the whole page has to be copied during a write. Instead, a clever sliding snapshot algorithm is used to avoid read- or write-peaks which would occur during full snapshots of a page after increments haved been written. **We'd be very happy to get contributions during Hacktoberfest but also in the long run.**
johanneslichtenberger
848,357
React Native Mobile Apps, Integrating Expo Image Picker, Supabase Buckets and Image Upload
This is a short video following up on the previous react-native Expo Camera and upload to Supabase...
14,606
2021-10-01T22:24:48
https://dev.to/aaronksaunders/react-native-mobile-apps-integrating-expo-image-picker-supabase-buckets-and-image-upload-51pi
reactnative, supabase, imagepicker, video
This is a short video following up on the previous react-native [Expo Camera](https://docs.expo.dev/versions/latest/sdk/camera/) and upload to Supabase video that I made last week. In this video, I am working with [Expo Image Picker](https://docs.expo.dev/versions/latest/sdk/imagepicker/) in react native to pick image and upload the images to [Supabase](https://supabase.io/) Join with me on my journey of refreshing my memory with [React Native Video Series](https://youtube.com/playlist?list=PL2PY2-9rsgl0TTqJk3tCNJnBAjwHCjdYM) and building mobile applications {% youtube o-LR-rR5HAM %} [![ko-fi](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/W7W31U7HM)
aaronksaunders
848,669
Is there an alternative for document.execCommand('SaveAs', null, 'myFile.html'); in chromium browsers (Microsoft-Edge)
Is there an alternative for...
0
2021-10-02T06:45:23
https://dev.to/ambareeshc92/is-there-an-alternative-for-document-execcommand-saveas-null-myfile-html-in-chromium-browsers-microsoft-edge-fgj
javascript
{% stackoverflow 69414397 %}
ambareeshc92
848,691
Railway FBP v1.0.8 is released 🎉
Railway FBP v1.0.8 introduce Monads. A rework was done on this release when deepen into monads...
0
2021-10-02T08:04:57
https://dev.to/darkwood-fr/railway-fbp-v1-0-8-is-released-1kn2
Railway FBP v1.0.8 introduce Monads. A rework was done on this release when deepen into monads theory and their implementation in PHP. This could give the project deeper approach when considering the implementation to get much cleaner way to use it. https://github.com/darkwood-fr/railway-fbp/releases/tag/v1.0.8
matyo91
848,723
Open Sourcing URL Shortener
Taking our first steps in Open Source. In this article, we want to share our journey in making URL Shortener service open source. And we welcome your contributions.
0
2021-10-02T09:26:42
https://www.smallcase.com/blog/open-sourcing-url-shortener/
fastify, urlshortener, opensource
--- title: Open Sourcing URL Shortener published: true description: Taking our first steps in Open Source. In this article, we want to share our journey in making URL Shortener service open source. And we welcome your contributions. tags: fastify, urlshortener, openSource cover_image: https://www.smallcase.com/blog/wp-content/uploads/2021/07/Open-Source-URL-Shortner.png canonical_url: https://www.smallcase.com/blog/open-sourcing-url-shortener/ --- Open Source Software (OSS) has been the main driving force in democratizing access to so many awesome tools with way more transparency than ever possible. It’s never too late to start giving back to the community and contribute towards a better OSS culture. That’s why we started this journey by open-sourcing our in-house URL Shortener service. The reason for choosing this is to assess the road ahead and be in a better position to embark on our open source journey. ## Road to Open Sourcing URL Shortener Let’s take a look at the steps involved in open sourcing this service. ### 1. Business logic abstraction Being an internal service, the URL Shortener was strongly tied with our tracking API which is used for, as the name suggests, tracking purposes. We needed to decouple these services before open-sourcing URL Shortener as having internal dependencies in an open-source project is unfeasible for obvious reasons. This called for refactoring. ![URL shortener initial setup](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u1dpcte9ppsa3ez8juzw.png) As shown in the diagram, CTA (Call To Action) token is generated in the notifications service and is passed down to the URL shortener whenever a notification needs to be sent. URL Shortener then stores the CTA token <> original URL mapping in a separate table. And, it simply passes the CTA token and the original URL to the tracking API whenever someone clicks on the short link. As you might guess, the CTA token has nothing to do with a URL shortening service and therefore it should not have any context of such tokens. This presented before us, the quest to pull URL Shortener out of the loop and stop passing any redundant data to it. Let’s take a look at the steps involved: ![URL Shortener Business Logic Abstraction HLD](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s6kund78e8img1e90hrd.png) As you can see, the above HLD proposes a different way of passing down the CTA Token in a way where URL Shortener is not bothered with unnecessary data. Let’s go through it step-by-step: **Step 1:** Notification services hit the URL Shortener to get the short URL whenever a notification needs a short link (eg. referral emails, market open reminder SMS, order updates SMS, etc.). **Step 2:** URL Shortener generates the short URL, maps it with the corresponding original (or “long”) URL, and returns the result to the notification service. **Step 3:** After sending the notification, the shortener forwards the CTA token & the corresponding original URL to our tracking API service when someone clicks on the short URL. **Step 4:** Tracking API stores this original URL to CTA token mapping in the PostgreSQL. Forwarding the data to tracking API happens like this: ![URL Shortener Modified Tracking HLD](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9jlu3oks6w698nwd3bh5.png) **Step 1:** User clicks on the short link received through email or SMS notification. **Step 2:** URL Shortener receives the short URL user clicked and redirects him/her to the original URL. **Step 3:** Before redirecting the user to the original URL, the shortener also emits a kafka event that contains the short URL that the user clicked. **Step 4:** Tracking API, upon receiving the short URL, stores the data corresponding to that in the PostgreSQL table for analytics purposes. No user-specific data is used in any shape & form. We only use the metadata to understand the delivery, click rates, etc. And that is it for the business logic abstraction. Following these steps, we were able to make the URL Shortener loosely coupled with our tracking API and free from internal dependencies. ### 2. Refactor The URL Shortener service was created a little more than 2 years ago to fulfil the needs of an internal URL shortening service. It was just a bare-bone HTTP server with SQLite as the database. But with the increase in the notification sent from smallcase, the number of requests to the shortener has increased significantly over time, it gets around 500k (read + write) requests per month. There were a couple of things that need addressal: 1. Simple & non-scalable nature of the service 2. No logging pipeline to debug when something goes wrong. 3. No way to avoid getting duplicate short keys for different URLs. 4. No purging of stale entries from the database. #### Moving to Fastify As I mentioned, the initial setup was not reliable enough for the growing needs. There were some major changes required in the implementation. There were three options we had in mind: 1. Using S3, AWS Lambda, and CloudFront 2. Using AWS API Gateway and Dynamo DB 3. Fastify with MongoDB & Redis Let’s talk about each one of them. ##### Using S3, AWS Lambda, and CloudFront This approach aims to use S3 as a redirection engine by activating website hosting on the bucket. This way, for each short URL we can create a new empty object with a long URL attached in the website redirect metadata. On top of this, we can create a bucket lifecycle policy to purge the entries older than a set timeframe. To create an admin page, all we need is a basic page hosted on S3 which will trigger a POST request to API Gateway invoking a lambda function which will: - create a short key - create an empty S3 object - store the short URL (<BASE_URL>/<shortKey>) as the redirection destination in the object properties. While going ahead with this approach meant we didn’t have to worry about scalability or [High Availability](https://www.digitalocean.com/community/tutorials/what-is-high-availability), it certainly ties us with AWS offerings and implicitly denies any flexibility when it comes to change of service vendor. ##### Using AWS API Gateway with Dynamo DB If we observe closely, all that lambda function is doing is storing the short URL in the empty S3 object. Hence, we can cut out on the resources & cost using this approach. Let’s take a look at how API Gateway combined with Dynamo DB would work here. There are four phases a request goes through when using the API Gateway: - Method Request - Integration Request - Integration Response - Method Response [Method Request](https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-method-settings-method-request.html) involves creating API method resources, attaching HTTP verbs, authorisation, validation, etc. [Integration Request](https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-integration-settings-integration-request.html) is responsible for setting up the integration between API Gateway & DynamoDB. One thing to note here, we need to modify the request & change it to a format that DynamoDB understands. [Method Response](https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-method-settings-method-response.html) is configured to send the response back to the client which can be 200, 400, or some other. [Integration Response](https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-integration-settings-integration-response.html) is what we get from DynamoDB but again, we need to convert this back into the format that the client understands. Again, while this approach allows us to get rid of the lambda and uses Apache VTL to communicate with DynamoDB, this presents the vendor lock-in we saw in the previous approach as it is strongly tied to AWS offerings. Also, it leaves us with us zero-control over the execution. ##### Fastify with MongoDB & Redis It is immediately noticeable that this approach gives us complete control over the service with no vendor lock-in. We can choose any data storage solution as per our needs, custom logging setup, and even in-house key generation service if we want. Looking at the [benchmarks](https://www.fastify.io/benchmarks/), Fastify is the clear winner among other Nodejs frameworks. It has [faster routing](https://github.com/delvedor/find-my-way), [JSON handling with faster rendering](https://github.com/fastify/fast-json-stringify) and a bunch of ready-made [plugins](https://www.fastify.io/ecosystem/). While this is perfect in terms of what we wanted, it also means we now have to make sure that MongoDB and Redis are highly available otherwise it directly affects our service. This means developers’ bandwidth is extensively required which was not the case in the previous approaches. With our Fastify application in place, we were able to plug our improved custom logging pipeline which is a huge benefit to the developer experience because the old pipeline was not reliable for the scale we now operate at. #### Adding logging pipeline With the increasing number of requests and possibly errors, we needed a proper logging setup to debug and monitor the service. That’s why we chose [bunyan](https://www.npmjs.com/package/bunyan) to log insightful data in our application. These logs sit conveniently on our new logging pipeline running on EFK (or, Elasticsearch Fluentd Kibana) stack. While this deserves a separate blog post on its own, let’s take a brief look at how the logs travel from our application to the [kibana dashboard](https://www.elastic.co/guide/en/kibana/current/dashboard.html). ![Logging pipeline used for URL shortener](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3e3euqz7u5twhzko1dk4.png) - The logs that we have written inside the application are produced to the standard output. The fluentd collector (which is present in all the applications using the EFK logging pipeline) takes all the logs from the stdout and forwards them to the fluentd aggregator. - The aggregator is simply where all the logs get collected from various [AWS EC2](https://aws.amazon.com/ec2/) application instances. All the logs then go through the [plugins](https://www.fluentd.org/plugins) installed to process the logs. - These potentially transformed logs are then sent to the Elasticsearch nodes over the network where this data gets stored. The structure of the logs needs to follow a predetermined pattern and that’s why Elasticsearch needs an index mapping to understand the structure of logs comings its way. This helps in [indexing](https://www.elastic.co/blog/what-is-an-elasticsearch-index) and storing data. - Kibana uses the structured logs data to show the logs nicely on a [dashboard](https://www.elastic.co/guide/en/kibana/current/dashboard.html). Since the data is structured, Kibana enables us to create visualisations and custom dashboards (a collection of different visualisations) on top of it. #### Generating unique short key With the increasing number of short key generations, there’s a higher probability that the key generation service can spit out the same short keys for two different original (or long) URLs, if not handled correctly. The solution to this problem is simply not let a short key get reused. Now there are two ways to achieve this: 1. Do not generate a duplicate short key 2. Retry until a unique short key is generated Let’s take a look at both approaches. ##### Do not generate a duplicate short key To make sure we don’t generate a duplicate short key ever, we need to know what keys have been generated already. One approach could be creating two tables in PostgreSQL, one for the available keys (let’s say AvlK) and one for the keys that are occupied (let’s say OccK). So while creating a short URL, we would fetch one unused key from AvlK table, add it to OccK table and return it. Two database operations, one short URL. Not fair. ##### Retry until a unique short key is generated Instead of maintaining two tables just to get one short key, we can work with just one PostgreSQL table which will store the keys already occupied. We can then simply generate a random key, check if it is occupied, and assign it if it is not. ![NanoId Collision Calculator](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6eoo54yt30p5uaboiz9k.png) Looking at the results on [nanoId collision calculator](https://zelark.github.io/nano-id-cc/), we can see that after three days of generating short keys at rate of 70/hr, there is 1% probability of encountering at least one collision. ``` 70 keys generation per hour * 24 hours * 3 days = 5040 short keys ``` So 1% probability means, having at least **one collision in every 5k short keys generation**. #### Passively purging the URL mappings Short URLs are not supposed to have a lifetime of decades or even more than 1 year depending upon the use case. As it is not practical to store the entries forever. That’s why purging is required. But the implementation can be flexible. At smallcase, short URLs are majorly generated for two broad categories: 1. For transactional notifications 2. For campaigns The short links generated for the transactional notifications are not supposed to be active forever whereas the links that are generated for the campaigns are supposed to be active till the campaign is active. Considering the differences in the lifespan of different short links, they needed to be treated differently when it comes to purging the entries from the database. One approach was to run a job that would remove all the entries which are older than a set timeframe. But turns out there was a better way with minimal additional effort. Instead of running a dedicated job to purge entries, we could simply handle this when we’re creating short URLs. Remember we were doing retries to land upon a key that was not already occupied? A minor change in that process handled purging for us. Just when you get an already occupied key, allow overwriting to that key only if it has hit the expiration date (which is also stored during the creation of the short key along with the mapping). This increases the time of creating short links comparatively but this is the trade-off you need to make to ensure unique keys. ### 3. Documentation Lastly, the crucial part of an open-source project. Documentation. These were the following things that were on the checklist: 1. README.md 2. CONTRIBUTING.md 3. CODE_OF_CONDUCT.md 4. LICENCE 5. CODEOWNERS 6. Templates for creating issues & submitting PRs for streamlined flow. And finally, making the project public! 🎉 ## What Next? This was our journey to open-sourcing the URL shortener service that we use at smallcase. I believe open source not only helps in building a better tool, but it also builds a community of people that care about equal access to software. At the end of the day, we learn from each other. The project is available here: [github.com/smallcase/smalllinks](github.com/smallcase/smalllinks) Please feel free to create an issue on Github if you find any improvement opportunities or bugs present in the project. I’ll be happy to connect 😃.
rishabh570
848,830
Ways to Make Money?
hello guys! I won't explain you with long and complicate words for that coz everybody can make money...
0
2021-10-02T09:49:07
https://dev.to/thuhtetdev/ways-to-make-money-19gc
productivity, webdev, career, programming
hello guys! I won't explain you with long and complicate words for that coz everybody can make money on their own way, so here's the key points to modify your mind. 1. Don't ever think about making money is hard 2. You can make money right now but amount will be based on yours. 3. You can't make money right now .. it takes time to make money.. but amount will be based on yours preparation. That's all. Making money is not hard. Choose your way. For me, I'm not that rich now. But I can live on my own and currently follow that No.3 rule to set up my mind. I wanna make more income streams as much as I can. I'm currently working on a full time job but I wanna expand my income by teaching and sharing. After that, making passive incomes will be my final target. Think about what's your method to make money. Share in this comment section Thanks for your time.
thuhtetdev
848,845
AZ CLI - Deleting Terraform Test Resource Groups
This is a series of quick posts, tips and tricks when working with the Azure CLI. Deleting...
0
2021-10-02T10:41:29
https://dev.to/dylanmorley/az-cli-deleting-terraform-test-resource-groups-4kkg
azure, terraform, cloud
This is a series of quick posts, tips and tricks when working with the Azure CLI. ## Deleting Multiple Resource Groups When working with the Terraform Provider for Azure, you may be adding a new feature that requires you to run the test automation packs. In cases where there are errors in the Provider as you're developing the feature, you might end up with errors like so ```shell === CONT TestAccEventGridEventSubscription_deliveryMappings testcase.go:88: Step 1/2 error: Error running pre-apply refresh: exit status 1 Error: Argument or block definition required on terraform_plugin_test.tf line 47, in resource "azurerm_eventgrid_event_subscription" "test": 47: [ An argument or block definition is required here. testing_new.go:63: Error retrieving state, there may be dangling resources: exit status 1 --- FAIL: TestAccEventGridEventSubscription_deliveryMappings (10.18s) FAIL ``` Note part of the error description - *there may be dangling resources:* Because the test didn't complete, the destroy phase of the test might not have completed, which means you're in a state where there are resources in Azure that have been created by the test code that need tidying up Luckily, we can do this with a bit of Powershell + AZ CLI. ```shell $resource_groups=(az group list --query "[? contains(name,'acctest')][].{name:name}" -o tsv) foreach ($group in $resource_groups) { Write-Output "Deleting resource group $group" az group delete -n $group --yes } ``` * Assumes you're logged in and in the subscription you want to work with Terraform tests create resource groups following an `acctest` naming convention, so we find all of those that match, then delete them one by one.
dylanmorley
848,864
CAP Theorem : Scenarios explained
CAP theorem applies to behavior exhibited by the distributed system with respect to three attributes...
0
2021-10-02T15:50:05
https://dev.to/sridharanprasanna/cap-theorem-scenarios-explained-31a6
cap
CAP theorem applies to behavior exhibited by the distributed system with respect to three attributes Consistency, Availability and Partition tolerance, for definition refer to the [wiki](https://en.wikipedia.org/wiki/CAP_theorem). The theorem defines the choice of behavior that the system can exhibit from the end user's perspective. It states that, at any given point of time, only 2 of the 3 behaviors can be guaranteed. Lets look at what each of the 3 behavior individually mean: - **Consistency:** The system shall always respond to the end user's read request with consistent data. [See note](#note1). - **Availability:** The system shall always respond to a user's request to read/write. - **Partition Tolerance:** The system will function even if the communication between the nodes drops i.e. partitions are created within the network. If this behavior is acceptable, then the system is tolerant to various partitions not interacting with each other and the system can continue to function with some sacrifices in quality of service. For the below scenarios, let us consider that there are just two nodes which can hold some data. Node A is responsible for both read and write, whereas Node B is designed to be only read from. Lets take a look at all the 3 combinations that are possible, lets start with the simplest. #Combination 1: Consistency & Availability. In this combination, it is expected that the system is both consistent and available. It also means the system is resilient to inter node communications; and any partitions in the network is not tolerated. When the writes happen to the Node A, then all the reads are suppose to get back the latest data consistently bounded by definition of consistency. Also the system is always available, all the read/write requests are honored without errors. ![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/maaomcpus2hdvwii2pjg.png) #Combination 2 & 3: These combinations are possible when the system is expected to function even when network partitions occur and system can tolerated it. I.e. even if the nodes are unable to communicate system will function. ##Combination 2: Consistency and Partition tolerance: When there are 2 or more partitions in the system, then the system can be designed to make one of the cluster/partition as the primary and have the other nodes/partitions dormant. The primary cluster/partition is responsible and active for any data read and writes. When requests for reads are serviced by this primary node, the data is guaranteed to be latest and consistent. However if the read/write requests happen to go to nodes that are in dormant state, the system may not respond, in other words availability is not guaranteed. When network is partitioned, then in order to keep the system consistent, partition 1 is made primary and the others are made dormant. So any request for read from system will get consistent data (if served by Node 1) or no data at all (if served by Node 2). ![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2qjs4wnikm0smulhjb4i.png) ##Combination 3: Availability and Partition tolerance: When there are 2 or more partitions in the system, and the system can be designed to function as it was earlier. I.e. the nodes that were responsible for read and write will continue to do that; the nodes that were used for reads will continue to respond to end user's read requests. So end user will always perceive the system to be available, however data reads highly depend on the node to which the read requests goes. As there is no internode communication, the data may be out of sync, and end user may perceive the system as inconsistent. When network is partitioned, in order to keep the system available, any read request is serviced irrespective of whether the data is stale or not. So depending on the node that services the request the data may not be perceived consistent. ![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gwjk9kpwuk3ubx71pdwz.png) ##Note on consistency: <a name="note1"></a> The definition for consistency still depends on the implementation that was designed in the system. E.g. Eventual, Session, Strong. For purpose of above discussion, if we say system is consistent, then it is bounded by the design of consistency in the system. ##Note on post communication restoration in case of partition tolerance: This post just discusses the system's response as soon as nodes are unable to communicate with each other. Once communication between the nodes/partitions is restored, what techniques are used to synchronize the nodes is outside the purview of this article; E.g. Consensus algorithm.
sridharanprasanna
848,930
How to change base repo in Github CLI?
When you use Github cli for fist time on locally clone forked repo, then you get some message that...
0
2021-10-02T14:35:27
https://dev.to/utsavladani/how-to-change-base-repo-in-github-cli-5fhp
github, githubcli, issue
When you use Github cli for fist time on locally clone forked repo, then you get some message that asked to setup base repo to execute all queries. ``` $ gh issue list ? Which should be the base repository (used for e.g. querying issues) for this directory? [Use arrows to move, type to filter] <name of owner> / <name of repo> > <your name> / <name of repo> ``` If you setup wrong repo here, then these is no any ``gh`` command to change base repo. I face this issue, but not found solution easily. The solution is: 1. Open ``.git/config`` file 2. Remove line ``gh-resolved = base`` 3. Save the file and close OR you can use this command ```bash $ git config --unset remote.upstream.gh-resolved ``` That's it, now you prompted again to select repo. Select correct repo and go ahead. For more info refer this discussion on Github: https://github.com/cli/cli/issues/1864
utsavladani
849,127
Automate CI/CD build pipeline for your Springboot app using Jenkins and Github Webhooks
In this article, you will be learning how to set up a CI/CD pipeline for your springboot application...
0
2021-10-02T16:49:56
https://dev.to/saucekode/automate-ci-cd-build-pipeline-for-a-springboot-app-using-jenkins-and-github-webhooks-3h30
devops, beginners, jenkins
--- title: Automate CI/CD build pipeline for your Springboot app using Jenkins and Github Webhooks published: true description: tags: #devops #beginners #jenkins //cover_image: https://direct_url_to_image.jpg --- In this article, you will be learning how to set up a CI/CD pipeline for your springboot application using Jenkins and Github webhooks. ## Prerequisites - A working knowledge of Springboot - Jenkins installed on your machine ### Introduction Continous integration and delivery make up the devops lifecycle. There are 8 phases in the devops lifecyle. Continous delivery make up the first four phases which include: **plan, code, build and test**. Continous integration make up the last four phases which include: **integrate, deploy, operate and monitor**. Jenkins is a tool for integrating automatically into the existing codebase after testing is complete. It spans both continous delivery (build and test phase) and continous integration (integration). ### Step 1: Create a repository and branches If your project is already hosted on Github, you can skip this step. Otherwise, create a repository on Github, map the remote url to your project and create two branches: **prod** and **dev**. The dev branch is what Jenkins will be interfacing with, once there is a successful build in this branch then we can integrate the changes into our prod branch, this would interface with your deployment platform. Using this method, we will catch errors during development and avoid them at production. ### Step 2: Setup Jenkins server To start up Jenkins for Ubuntu users, locate the *jenkins.war* file in the /usr/share/jenkins path. Once found, run the command below to start Jenkins manually. ``` java -jar jenkins.war --httpPort=8080 ``` > For Windows and MacOs users, [view](https://www.jenkins.io/doc/book/installing/) the official Jenkins documentation for how to setup the server. You should have Jenkins running now. Jenkins default port is 8080. Head over to your browser and run **localhost:8080**. An aside, Springboot runs on port 8080 as well, you will have to change your port using **server.port=9090** in your *application.properties* file. If you're using Jenkins for the first time, you will be presented with a page that has path to a file. The file contains your Jenkins administrator password. Copy the path. To view the contents, run this command ``` cat <file path> ``` Once that is done, you should be able to see the password. Copy and paste into the input field provided and click the button. Next up, you will be required to install Jenkins plugins. You can choose to install all plugins or install selected plugins. Go for the former and wait for the installation to be done. If it fails, retry. After the installation is complete. You will be required to create a new user admin. You can choose to skip this step and continue with the initial administrator credentials provided by Jenkins. I strongly advise you create a new user. Once that is done, you will be redirected to your Jenkins dashboard. ## Step 3: Set up Jenkins credentials This step is vital in the build automation. On the dashboard menu items, click **Manage Jenkins** and then, **Manage Credentials**. There is an already existing Jenkins credential. Click on it and then the Global credentials. ![add jenkins credentials](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kbcbyp2r53yq83sphk76.png) Click **Add Credentials** and you will be presented with where to fill your details. Github no longer has support for username and password. In selecting a credential kind, pick SSH Username with private key. This means you should already have your SSH keys (private and public). If you do not have one, follow this link to generate yours and link to Github [here](https://docs.github.com/en/github/authenticating-to-github/connecting-to-github-with-ssh). Enter your username and select **Enter directly** in the private key option and make sure it is your generated private SSH key you are entering not the public SSH key. Click OK to continue. ## Step 4: Set up Jenkins pipeline Go back to your dashboard. Create a job. Enter a name for your job, select Freestyle project and save to continue. Enter a description for your Jenkins job in the General tab. In the source code management tab, select Git, enter the repository url for your project. Under credentials, select the credential your created. In branches to build, edit the prepopulated branch and set to dev. In the Build Triggers tab, check **GitHub hook trigger for GITScm polling**. You can choose to run this trigger with the **Poll SCM** option. This setup would require you input a time schedule for when the hook trigger would happen. Currently, I didn't select the Poll SCM option. Scroll to the Build tab and select **Execute shell**. This is where we specify the build command for our Spring application. ``` ./mvnw install test ``` Input this maven build command, it runs both test and build our app into an artifact. Once done, click save. ## Step 5: Setup Github WebHook The webhook automates the build process as opposed to manually running the build from Jenkins. Navigate to the settings tab of your spring app github repository, click on **Webhooks**. You will be required to provide a **Payload URL** and **secret**. The secret key is a token you generate in Jenkins and it is used to make API calls. For our payload URL, we will generate one using ngrok. > Ngrok is a tool for generating a live URL for your locally hosted project. We need to expose Jenkins to our webhook, hence the need for ngrok. Download ngrok zip file from [here](https://ngrok.com/download) and unzip. Open your terminal and cd into the directory you unzipped ngrok. For Ubuntu users, run this command to generate a url: ``` ./ngrok http 8080 ``` Make sure to append **/github-webhook/** to your payload url, this is required by the webhook. Set content type to **application/json**. Let's get our secret token. In your Jenkins dashboard, click on the dropdown arrow beside your name located at the top left corner. Select **configure**. ![jenkins api token](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i1iri89ajhznc99e2bmy.png) Scroll down to locate the API token tab, click on **Add new token** and generate one. Once it is generated, copy the token, save and paste it in the secret field in github. In github, still in webhooks, the question **Which events would you like to trigger this webhook?**, for this article we will select **Just the push event** option and click **Add webhook**. You can explore other options to see what they offer. ## Step 6: Test CI/CD pipeline You should be in your dev branch. Make changes to your code and push. The build started automatically. ![jenkins buils](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4ts0bhjne2pm0wllgosh.png) Once the build is successful, next, make a merge request to the prod branch and deploy. ## Conclusion In this article, we successfully setup a CI/CD pipeline using Jenkins and with Github webhooks, automated our build process as opposed to manually triggering the build from Jenkins.
saucekode
849,162
Create Mirrored Cursor Movement with CSS and JavaScript
This week's Codepen Challenge was "Reflections" so I created a pen where a light follows your cursor...
0
2021-10-03T17:54:43
https://dev.to/dianale/create-mirrored-cursor-movement-with-css-and-javascript-249i
css, codepen, javascript, webdev
This week's Codepen Challenge was **"Reflections"** so I created a pen where a light follows your cursor and generates and mirrors the reflection on the opposite side: {% codepen https://codepen.io/pursuitofleisure/pen/LYLveKE %} ## Set up the light and its reflection in the HTML ```html <div id="container"> <div id="flashlight"></div> <div id="flashlight-reflection"></div> </div> ``` Super simple: we have a container, the flashlight and the flashlight's reflection as separate divs: ## Set up the CSS and Position the Lights ```css #flashlight, #flashlight-reflection { width: 40px; height: 40px; background-color: rgba(255, 255, 255, 0.95); position: absolute; top: 50px; left: 50px; box-shadow: 0 0 20px 12px #fff; border-radius: 50%; } #flashlight-reflection { left: auto; right: 50px; box-shadow: 0 0 20px 12px #EB7347; } ``` This is also pretty simple. The container contains the background color and the boundary line down the middle. The cursor's flashlight and its reflection are styled almost exactly the same except for the colors and most importantly, the `left` and `right` properties for the absolutely-positioned lights. The cursor has a `left` property, but for the reflection we're going to use the `right` property instead. This will make the JavaScript simpler later on: ## Dynamically Move the Lights with JavaScript Let's walk through the more complicated parts of the JavaScript: ```javascript // set boundary of coordinates to parent div const bounds = container.getBoundingClientRect(); // Move the cursor light and reflection on the mousemove event function moveFlashlight(e) { // get mouse coordinates relative to parent const x = e.clientX - bounds.left; const y = e.clientY - bounds.top; // move the light with the cursor and put the cursor in the center flashlight.style.left = `${x - 20}px`; flashlight.style.top = `${y - 20}px`; // move the reflection based on the cursor's position flashlightReflection.style.right = flashlight.style.left; flashlightReflection.style.top = flashlight.style.top; } ``` ### Find the X and Y Positions of the Cursor with `event.ClientX` and `event.ClientY` This was actually easier than I expected. Because the lights are absolutely positioned, if we can find the X and Y coordinates of the mouse cursor, we can dynamically update the CSS position with JavaScript to follow the cursor with the `mousemove` event. This article explains how to [console.log the position of the cursor](https://www.gavsblog.com/blog/get-the-current-position-of-the-mouse-from-a-javascript-event). ### Set the Boundary to be the Parent Container But in this scenario, our lights are divs within a parent container, and if your container is not the full-width of the screen, you're going to get inconsistent X and Y values based on the browser width. So we'll need to set the [boundaries for these values to be the parent](https://stackoverflow.com/questions/16154857/how-can-i-get-the-mouse-coordinates-relative-to-a-parent-div-javascript) to make it easier. This will always make the top left corner (0, 0). ### Update the Light's CSS (the Cursor) Now that we have the X and Y coordinates we can move the flashlight to follow the cursor by changing the `left` and `top` values. You'll notice I'm subtracting 20px from both the X and Y; this is because the light is 40px wide and 40px high and I want the cursor to be in the center. ### Update the Light's Reflection Since we're mirroring on the Y-axis, in terms of moving up and down, the reflection's location will always match the cursor's Y-coordinates, so we set these equal to each other. For the X-axis, this is more complicated. As the cursor moves closer to the center or reflection line (to the right), the reflection should move closer (to the left). Conversely if the cursor moves farther away or to the left, the reflection should move to the right. I originally created a global variable to store the previous value of X, then check if the movement is increasing and if so, then to add or subtract the position of the reflection's X-position as needed. ```javascript // Unnecessary code if (x > xFlashlight) { flashlightReflection.style.right = `${x - 1}px`; } else { flashlightReflection.style.right = `${x + 1}px`; } ``` Then I realized this code was completely unnecessary because of the CSS. Remember the cursor light is using the `left` property and the reflection is using the `right` property, and they are set to the same value for their respective properties when initialized. Therefore the actual numeric values will always be the same, so we can take the reflection's `right` value and set it equal to the cursor's `left` value. ## Conclusion This was a fun and difficult challenge since I had never tried anything similar, but you can see how it's relatively simple to mimic and follow cursor movement with CSS and JavaScript.
dianale
849,353
The Math of TypeScript's Types
I've been studying functional programming for a few years and in doing so I've learned a lot about...
0
2021-10-02T23:24:47
https://dev.to/jethrolarson/the-algebras-of-typescript-s-types-1akd
typescript, functional, math
I've been studying functional programming for a few years and in doing so I've learned a lot about types that I would not have otherwise. In particular, studying Haskell and related technology begets some interesting concepts. The one I want to talk about this time is "algebraic types". From [wikipedia](https://en.wikipedia.org/wiki/Algebraic_data_type): > In computer programming, especially functional programming and type theory, an algebraic data type is a kind of composite type, i.e., a type formed by combining other types. If you've used TypeScript for a bit you've probably been exposed to some of these already. ## Sum Types Let's say we have a short list of commands that an API will support, "list" and "pop". We can use string literal types to represent the valid values: ```ts type ListCommand = 'list' type PopCommand = 'pop' ``` Each of these types only have one value--the specific string that is valid. If we want a function that can take either command we can compose these types together using the `|` operator. ```ts type ValidCommand = ListCommand | PopCommand const performCommand = (command: ValidCommand) => ``` The `|` type operator creates a sum type of the types to the left and right of it. What you should notice is that the sum adds the potential values together, i.e. ValidCommand accepts ListCommand plus PopCommand. This may be more apparent with bigger types: ```ts type CommandOrFlags = ValidCommand | Flags ``` You'll remember that `Flags` had four valid values and `ValidCommand` has two. Thus `CommandOrFlags` has six (4 + 2). So we can add types, can we multiply them too? Yes, indeed! ## Product Types As you may have noticed, you can think of `|` operator for sum types as "or". Product types are the other half of that, the "and". So then we have to find data structures that represent a type having multiple values simultaneously. Tuple is a great candidate for that since it allows us to have different types at its indices. ```ts type Flags = [boolean, boolean] const flags: Flags = ? ``` How many potential values can be assigned to `flags`? With this small set we could list them out but we can use what we know about the individual types to figure it out with math. The type `[boolean, boolean]` must have exactly two values and those values must be booleans. Booleans themselves have two potential values. If you know that tuples form a product type with the types they contain you just have to multiply `2 x 2` to know that the total inhabitants of the composed type is four. Another way of encoding product types is to use an object: ```ts type Point = {x: number; y: number} ``` So in a way you're already using them! ## Sum types are a monoid Monoid is an interesting mathematical structure that is pretty simple. * **Composition**: Monoid is type with a combine operation that can join to values of the same monoid. `a o b = c` where `a` `b` and `c` are same type, and `o` is the operation. * **Associativity**: The operation must be associative `(a o b) o c = a o (b o c)` * **Unit**: operation must have a "unit" value that doesn't change the value it's composed with. `a o unit = unit o a = a` As an example Arrays form a monoid where `concat` is the operation and `[]` is the unit: ```ts // Composition [1,2].concat([3,4]) // is same as [1,2,3,4] // Associativity [1,2].concat([3,4]).concat([5,6) // is the same as [1,2].concat([3,4].concat([5,6])) // Unit [1,2].concat([]) // same as [].concat([1, 2]) // same as [1, 2] ``` The thing I actually wanted to share is that types themselves form a monoid with `|` as the operation and `never` as the unit! ```ts // Composition type Foo = 1 | 2 // Associativity type Bar = 1 | (2 | 3) type Bar = (1 | 2) | 3 // Unit type Baz = 1 | never type Baz = never | 1 type Baz = 1 ``` Mathematically there is also a monoid for product types but I haven't found a way to make it work in TypeScript. ## Getting more mileage out of sum types in TypeScript In exploring usage of sum types you may run into some challenge in writing functions that use them: ```ts type Error = string type Message = string type Response = Error | Message const handleResponse = (resp: Response) => { // wait. How do I actually have different behavior // since both cases are strings at run-time??? } ``` A trick to make this work is to provide some extra data at runtime so that your code can discriminate between the members of the sum. This is called a discriminated union. ```ts type Error = {kind: 'Error'; message: string} type Message = {kind: 'Message'; message: string} type Response = Error | Message const handleResponse = (resp: Response): string => { switch(resp.kind){ case 'Error': { console.error(resp.message) return 'Fail' } case 'Message': return resp.message } } ``` Switch is used here as poor man's pattern matching. TypeScript has good support for this technique and will make sure you have your cases covered if you leave off the default case. You can even have empty types to union with states that have no other data: ```ts type Error = {kind: 'Error'; message: string} type Message = {kind: 'Message'; message: string} type Pending = {kind: 'Pending'} type Response = Error | Message | Pending const handleResponse = (resp: Response): string => { switch (resp.kind) { case 'Error': { console.error(resp.message) return 'Fail' } case 'Message': return resp.message } // TS ERROR: Function lacks ending return statement and return type does not include 'undefined'.ts(2366) } ``` Then to fix it you can just handle the case: ```ts const handleResponse = (resp: Response): string => { switch (resp.kind) { case 'Error': { console.error(resp.message) return 'Fail' } case 'Message': return resp.message case 'Pending': return 'Pending' } } ``` That's all for now. I hope this gives you new things to think about and you get a chance to use Sum types to better model your applications. Edit note: I'd like to give a special thanks to @Phantz for the extra knowledge they dropped in the comments!
jethrolarson
849,497
Vue the world and React accordingly
:)
0
2021-10-03T05:06:43
https://dev.to/kit/vue-the-world-and-react-accordingly-fh5
:)
kit
882,479
3 Books that will boost your Leadership Skills
This blog post is different than the usual content from here. This time I would like to share with...
0
2021-12-30T13:20:52
https://magdamiu.medium.com/3-books-that-will-boost-your-leadership-skills-350c999add4c
management, leadership, motivation, books
--- title: 3 Books that will boost your Leadership Skills published: true date: 2021-10-25 08:30:00 UTC tags: management,leadership,motivation,books canonical_url: https://magdamiu.medium.com/3-books-that-will-boost-your-leadership-skills-350c999add4c --- This blog post is different than the usual content from here. This time I would like to share with you 3 books that helped me a lot in my career as a leader. I love to read books and after I finish reading a book I do a schema or a summary, or a mindmap to extract the useful info for me, to build my leadership toolbox.🧰 From these 3 books, I learned more about myself, about how important is to understand my WHY, or how helpful are OKRs even to build a personal learning plan. I also understood how essential is to create a context where my team can be successful and motivated to do their best. Happy reading! 🤓📚 ![](https://cdn-images-1.medium.com/max/640/0*SNSM4TTM2pNkVkNH) ### 1. “Start with why” by Simon Sinek ![](https://cdn-images-1.medium.com/max/332/0*EAPGR7CJtLRyKw5b.jpg) The author starts the book by making the distinction between a leader and the one who leads: “_There are leaders and there are those who lead. Leaders hold a position of power of influence. Those who lead inspire us”._ Great leaders are able to inspire people to act by giving them a sense of purpose or belonging. For those who are inspired, the motivation to act is deeply. This book is not about what to do or how to do the things in our life or at work, its purpose is to offer us the cause of action, it challenges us to start with WHY. The core of the book is _The Golden Circle_. There are three parts of The Golden Circle: Why, How, and What. - The outermost and largest circle is **WHAT** : Every individual or organisation easily explain what their product or job is. - The middle circle is **HOW** : HOW is not as obvious as WHAT. But when organisations know HOW they do WHAT they do, they have clarity in their product or service’s differentiating factor. This is also known as Unique Selling Proposition or “differentiating value proposition”. - The innermost and smallest circle is **WHY** : It is very difficult for organisations or people to explain WHY they do WHAT they do. The WHY needs to be answered in terms of purpose, cause or belief. The WHY and HOW are connected to the limbic brain and the WHAT with the neocortex. - Limbic brain = Responsible for all feelings, like trust and loyalty. It’s also responsible for all human behavior and decision-making, yet it has no capacity for language. - Neocortex = Responsible for all of our rational and analytical thought, and language. ![](https://cdn-images-1.medium.com/max/1023/0*N2eeydSMUnbo2-FG) **How do you differentiate between a fad and an idea that can change lives forever?** ![](https://cdn-images-1.medium.com/max/1024/0*iMVjkU8oa551Z4dD.png) The [Law of Diffusion of Innovations stated by Everett M. Rogers](https://en.wikipedia.org/wiki/Diffusion_of_innovations) pertains to the bell curve of product adoption. The curve outlines the percentage of the market who adopt your product, beginning with the Innovators (2.5%), followed by Early Adopters (13.5%), Early Majority (34%), Late Majority (34%) and Laggards (16%). **The Golden Circle + The Cone** ![](https://cdn-images-1.medium.com/max/300/0*NPMkpGw2vv36DkJ2.png) The Golden Circle is actually a bird’s eye view of a cone which represents the three- dimensional structure of organisations. When you have beliefs, a why, your what is just one of the ways of bringing that why to life. Often people don’t know what they’re going to do. They know what they believe, and they find their what along the way. e.g. Simon Sinek believes in inspiring others. Writing a book is just one way of doing it. The leader sits at the top of the cone — at the start, the point of WHY — while the HOW-types sit below and are responsible for actually making things happen. The leader imagines the destination and the HOW-types find the route to get there. **💡To learn more about this topic:** - Watch [Simon Sinek’s TED Talk](https://simonsinek.com/discover/how-great-leaders-inspire-action/?ref=gcppt) - [Book club with Simon](https://www.youtube.com/playlist?list=PLgCOAz4cqZMQbcgga7oZPhgtlaZ1GUbtp) ### 2. “Measure what matters” by John Doerr ![](https://cdn-images-1.medium.com/max/198/0*e8xWZFS1Y8IMjIU7.jpg) _OKR is about goal setting in a collaborative way._ As the former Intel CEO Andy Grove explained in his book, “High Output Management”, there are two questions to be answered to successfully setup a system of shared objectives, like OKRs: - **Where do I want to go? This answer provides the objective. (WHAT)** - **How will I pace myself to see if I am getting there? This answer provides the milestones, or key results. (HOW)** John Doerr, one of Google’s early investors and a current Board of Directors member, learned about OKRs from Andy Grove while at Intel. **OKRs superpowers:** - **Superpower #1** => focus and commit to priorities - **Superpower #2** => align and connect for teamwork - **Superpower #3** => track for accountability - **Superpower #4** => stretch for amazing An OKR can be modified or even scrapped at any point in its cycle and Key Results should be: - Succinct - Specific - Measurable Too many objectives can blur our focus on what counts so we should keep our focus and set a number of objectives not greater than 5, because if we try to focus on everything we focus on nothing. Continuous performance management could be done using OKRs in combination with **CFRs** : - **Conversations** : an authentic, richly textured exchange between manager and contributor, aimed at driving performance - **Feedback** : bidirectional or networked communication among peers to evaluate progress and guide future improvement - **Recognition** : expressions of appreciation to deserving individuals for contributions of all sizes **💡To learn more about this topic:** - [How Google sets goals: OKRs / Startup Lab Workshop](https://www.youtube.com/watch?v=0Pj0WIQWz8M) - [Guide: Set goals with OKRs](https://rework.withgoogle.com/guides/set-goals-with-okrs/steps/introduction/) - [OKRs at Gitlab](https://about.gitlab.com/company/okrs/) ### 3. “Why Motivating People Doesn’t Work . . . ” by Susan Fowler ![](https://cdn-images-1.medium.com/max/190/0*1HZs8DLjkIDmkCeu.jpg) Inspiring and motivating people are key aspects of leadership that are as critical as they are elusive. The people are always motivated. The right question we should ask is not if they are motivated, but **what motivates them**? **The real secret to motivation is creating an environment where people are optimally motivated to perform at their highest level.** The author describes in the book a spectrum of Motivation Model: 1. **Disinterested** motivational outlook: People find no value in going to meetings — they think it wastes time or overwhelms them. 2. **External** motivational outlook: People view meetings as an opportunity to use their positions or power, improve their images in the eyes of others, or gain the promise of rewards. 3. **Imposed** motivational outlook: People feel pressured to attend meetings either to avoid feelings of guilt or shame because everyone else is attending or out of fear of what would happen if he or she did not attend. 4. **Aligned** motivational outlook: People link meetings to something they value, such as learning, and use the opportunity to learn something or teach others. 5. **Integrated** motivational outlook: People link meetings to their larger work or life purpose, such as raising awareness about a certain issue. 6. **Inherent** motivational outlook: People naturally enjoy meetings and the sense of camaraderie they create. What truly motivates people is having three core psychological needs met: - autonomy - relatedness - competence Collectively known as **ARC**. **💡To learn more about this topic:** - [Interview with Susan Fowler](https://www.youtube.com/watch?v=nKTsIIA1oNM) - [Article about Motivation written by James Clear](https://jamesclear.com/motivation) - [All You Need To Know To Motivate Millennials](https://www.forbes.com/sites/forbesagencycouncil/2018/03/30/all-you-need-to-know-to-motivate-millennials/?sh=2b5fb5060aed) _Originally published at_ [_http://magdamiu.com_](https://magdamiu.com/2021/10/25/3-books-that-will-boost-your-leadership-skills/) _on October 25, 2021._
magdamiu
849,656
API Integration and Agile Business simulation game in Task force
Today, we are closing week 5 in the task force, this week was very different to all other 4 weeks...
0
2021-10-03T09:49:22
https://dev.to/ntwariegide/api-integration-and-agile-business-simulation-game-in-task-force-567g
codeofafrica, awesomitylab, agile, devjournal
Today, we are closing week 5 in the task force, this week was very different to all other 4 weeks we've covered. This week started with improving the ShowApp project. We wanted to add functionality of cropping image upload which was tough but educational. In the afternoon APIs for the LOT dashboard were ready, so we started integrating APIs. It was tough because APIs were not working well, so it took us time to know how we could integrate. It was also educational because of learning new things like Google Auths with Firebase. ![google auth](https://i2.wp.com/swiftsenpai.com/wp-content/uploads/2020/06/Google-Sign-In-Firebase-Feature-Image.jpeg?w=1065&ssl=1) In soft skills, this week we did **Agile simulation game** which was fun bug educative. I wanted to try new things because I used to be a scrum master, so I wanted to try being a Software Consultant from RISA. Being a software consultant was very difficult because It was tough to convince the Team to make sure that they followed the rules from RISA. I learned to talk to teams, making them understand what I want, making sure that they have the best DONE deliverable at the end of the sprint. ![software consultant](https://images.easytechjunkie.com/group-of-well-dressed-people-at-a-desk-looking-at-a-tablet.jpg) I also enjoyed working with the best team: scrum master, scrum team. I also noticed how it is not easy to be scrum master because in the sprint retrospective you have to be ready for questions. ![scrum master meme](https://images.squarespace-cdn.com/content/v1/53252e4ce4b026a30a70a340/1542331531936-AY0OKYCOU9FIOXV3PIB7/IMG_4194.PNG) This week, we also implemented a game in the codewars session. It was important because It made me understand how important it is to understand every detail of a problem to solve. On Friday we had a brainstorming session, our team also discussed Problem breakdown and we came up with the solution to help in the process of vaccination for children in Rwanda. In conclusion, this week was tough but educational because we tried new things. See you next week :)
ntwariegide
849,799
Weekly Digest 39/2021
Welcome to my Weekly Digest #39. This weekly digest contains a lot of interesting and inspiring...
10,701
2021-10-03T17:11:44
https://dev.to/marcobiedermann/weekly-digest-39-2021-3dh8
css, javascript, react, webdev
Welcome to my Weekly Digest #39. This weekly digest contains a lot of interesting and inspiring articles, videos, tweets, podcasts, and designs I consumed during this week. It's the time of the year again. [Hacktoberfest](https://hacktoberfest.digitalocean.com/) just started, so let's give something back to our wonderful open-source community. 🎃 --- ## Interesting articles to read ### Partitioning GitHub’s relational databases to handle scale More than 10 years ago, [GitHub.com](http://github.com/) started out like many other web applications of that time—built on Ruby on Rails, with a single MySQL database to store most of its data. [Partitioning GitHub's relational databases to handle scale](https://github.blog/2021-09-27-partitioning-githubs-relational-databases-scale/) ### Documenting pull requests is as important as writing good code Our approach to documenting PRs for our colleagues and our future selves. [How and why we document Pull Requests](https://monzo.com/blog/2021/09/30/documenting-pull-requests-is-as-important-as-writing-good-code/) ### Let’s Dive Into Cypress For End-to-End Testing Is end-to-end testing a painful topic for you? In this article, Ramona Schwering explains how to handle end-to-end testing with Cypress and make it make it not so tedious and expensive for yourself, but fun instead. [Let's Dive Into Cypress For End-to-End Testing - Smashing Magazine](https://www.smashingmagazine.com/2021/09/cypress-end-to-end-testing/) --- ## Some great videos I watched this week ### Introduction to React Native Web In this video we convert a scoreboard in React to use React Native Web, learn how to style our elements and some differences when collecting input, using buttons, and rendering lists. {% youtube RXwwiZ3OWXE %} by [Leigh Halliday](https://twitter.com/leighchalliday) ### React Native in 100 Seconds React Native allows developers to build cross-platform apps for iOS, Android, and the Web from a single JavaScript codebase. Get started building your first native mobile app with React Native {% youtube gvkqT_Uoahw %} by [Fireship](https://twitter.com/fireship_dev) ### Liquid tab bar interaction {% youtube 4NE1gwa8oOM %} by [Ana Tudor](https://twitter.com/anatudor) ### Hello Worldin' Some Web Component Libraries Sometimes you just gotta try the thing to get to know the thing a little bit. Chris had bookmarked one he saw called Tonic, so we muscled our way through that with a smidge of templating and state management, then did the same thing in Lit, then did it again in petite-vue. {% youtube Y2drx-WHU8k %} by [Chris Coyier](https://twitter.com/chriscoyier) ### Breakin' Up CSS Custom Properties Why not take every major styling choice on a particular component and make it into a custom property? Then, when you need a variation, you can just change the custom property and not re-declare the entire ruleset. This has some nice advantages, like clearly presenting a menu of things-to-change and not needing to dig into subcomponents to re-style variations. {% youtube 89WRYMPw-hE %} by [Chris Coyier](https://twitter.com/chriscoyier) --- ## Useful GitHub repositories ### react-philosophies Things I think about when I write React code {% github mithi/react-philosophies %} ### Nord An arctic, north-bluish color palette. {% github arcticicestudio/nord %} --- ## dribbble shots ### DashApp landing ![https://cdn.dribbble.com/users/895215/screenshots/16545487/media/733dc9e0af9cc35350a2e3dc02e6ed32.png](https://cdn.dribbble.com/users/895215/screenshots/16545487/media/733dc9e0af9cc35350a2e3dc02e6ed32.png) by [Valeria Rimkevich](https://dribbble.com/shots/16545487-DashApp-landing) ### Ae - NFT Marketplace Header ![https://cdn.dribbble.com/users/2685252/screenshots/16547697/media/e86496ac255745e0f1fa67cc630d478e.png](https://cdn.dribbble.com/users/2685252/screenshots/16547697/media/e86496ac255745e0f1fa67cc630d478e.png) by [Syafrini Nabilla](https://dribbble.com/shots/16547697-Ae-NFT-Marketplace-Header) ### Flux - Expense Management UI Kit ![https://cdn.dribbble.com/users/2125046/screenshots/16548857/media/e9960b76fbab33635774063262218154.png](https://cdn.dribbble.com/users/2125046/screenshots/16548857/media/e9960b76fbab33635774063262218154.png) by [Ofspace Digital Agency](https://dribbble.com/shots/16548857-Flux-Expense-Management-UI-Kit) --- ## Tweets {% twitter 1442495944239435777 %} {% twitter 1443031859663937541 %} {% twitter 1443570300529086467 %} {% twitter 1443572280924147717 %} {% twitter 1443988884782784514 %} {% twitter 1444195449095737346 %} --- ## Picked Pens ### Pikachu submit button {% codepen https://codepen.io/codeanddream/pen/xxrmXaW %} by [Mina](https://twitter.com/Codeanddream) ### FettePalette {% codepen https://codepen.io/meodai/pen/JjJevLW %} by [David A.](https://twitter.com/meodai/) ### Responsive CSS Powered Parallax {% codepen https://codepen.io/jh3y/pen/GRELWqJ %} by [Jhey](https://twitter.com/jh3yy) --- ## Podcasts worth listening ### The Changelog – Fauna is rethinking the database This week we’re talking with Evan Weaver about Fauna — the database for a new generation of applications. {% spotify spotify:episode:2QTiFlNBc0sWllPTWqmwa0 %} ### Junior to Senior – Swizec Teller Swizec and David talk about the key differences between Junior devs and senior devs, the concept of a 10x engineer, and different ways to gain experience quickly. {% spotify spotify:episode:5w3rETeBNYoionQDwklaop %} ### Ladybug – What Is An API & How Do You Use One? APIs are part of our daily roles as software developers, but what are they? What different types are there? And how can you design a good one? {% spotify spotify:episode:4lgPs8gCzKQsCj1njkeJoa %} ### Software Engineering Daily – Git Scales for Monorepos In a version control system, a Monorepo is a version control management strategy in which all your code is contained in one potentially large but complete repository. {% spotify spotify:episode:6pyiy9xyQpYkif1ZtlIquX %} ### Syntax – Changelog Frontend Feud In this episode of Syntax, Scott and Wes do a crossover episode with Changelog's JS Party! {% spotify spotify:episode:0gRlUacBqpNng13HByVE9z %} --- Thank you for reading, talk to you next week, and stay safe! 👋
marcobiedermann
849,800
Deploy React Apps using Apache2, how and why?
In this article we will together go through the process of deploying front end applications to...
0
2021-10-03T11:23:12
https://gist.github.com/AmrHalim/6fff4aa9130f893482f0b72774ab0059
react, javascript, devops, webdev
In this article we will together go through the process of deploying front end applications to production environments ( specifically [React](https://reactjs.org/) applications ). ### How does the web work? Before we dig into the actual steps needed to deploy React applications, let’s first think about how the web works in general. When you visit a URL like this: `http://my-domain.com/user/profile`, you’re basically sending a request searching the web to find if there’s an [A record](https://support.dnsimple.com/articles/a-record/) for this domain linked to any IP address, aka server, and if it finds one, it sends this request to that server. But for this server to be able to handle that request, there needs to be some kind of software, from now on let’s call it a [web server](https://en.wikipedia.org/wiki/Web_server) to handle this request and get some response to send it back to you! There are many web servers out there that you can use. For this article, we’ll focus on the configurations for [Apache2](https://httpd.apache.org/). Another popular option that can be used is [Nginx](https://www.nginx.com/), but the idea is exactly the same. When this request reaches the web server, what happens is that the web server checks if this domain name ( in our case `http://my-domain.com` ) is configured to any directory/folder in this server ( in case of Apache2, the default root directory is `/var/www/html` ), and if so, it basically serves/displays the web application/hosted files on the path that you passed in the URL, `/user/profile`. Which means that this request will go to the files ( by default an index.html file ) in the `/var/www/html/user/profile` directory. ### Virtual Hosts The way you configure the domain-names/directories mapping in Apache2 is by configuring what we call a [virtual host](https://httpd.apache.org/docs/2.4/vhosts/) in this path `/etc/apache2/sites-available/default`, which basically allows you to host multiple web applications on the same machine, each in a separate directory. A basic Virtual Host will look like this: ``` <VirtualHost YOUR_IP_ADDRESS:80> ServerName www.my-domain.com ServerAlias my-domain.com DocumentRoot "/var/www/html" AllowOverride All </VirtualHost> ``` This configurations basically mean that any incoming request to `YOUR_IP_ADDRESS`, on PORT `80` ( which is the default port for Apache2 ), will serve the files stored in the `/var/www/html` directory, following the URL that the user requested, from now on let's call it `Route`. * Note that we had to add `AllowOverride All`, and that's necessary because we'll need to add an [.htaccess](#htaccess-file) file later on and this needs to be there for it work. * You might find this property in your default configurations with `AllowOverride None`, you just need to change it to `All`, and remember to restart your Apache2 configurations by running this command `sudo systemctl apache2 restart`, or an equivalent command for your webserver, to restart your configurations. ###### HTTPs Configurations If you want your application to run on https, you will also need to have another configuration files to handle your incoming secured requests, but that's out of the scope of this article. I might post another article later on how you can create and maintain a self signed certificate using [let's encrypt](https://letsencrypt.org/). For the sake of this article, we'll assume that your application is going to be hosted on the root folder of the server, aka the default configurations. ### Hosting Files Once you configure your domain to point to your server and add your virtual hosts, then you can basically host any file of any extension on this server to be served using this domain. One way to respond to a user who is sending the `/user/profile` request is to create a directory `/user/profile` in the root directory of this domain, and then create an `index.html` file in this directory. In this case, the content of this file will be served to the user. But that's not why we're here! So let's talk about React deployment flow. ### React Deployment Flow #### Build your app To deploy a react application you'll first need to build your application, this might differ according to the way you structured your application. But regardless of how your app is configured, you should be able to run a similar command to `npm run build` command to build your app, which will give you the final build files in a folder called `build`, and those are the files that we need to deploy/upload to the web application path on the server ( in our case `/var/www/html/` ). ###### Two important points to notice here: * in the `root` folder of your build folder you'll find an `index.html` file; * if you open this file, you'll find in the `<head>` section one or more `<script>` tags that point to your React application code, including how you're handling your routes. [Remember](#hosting-files) how we talked about hosting static files, specifically `index.html` files to the server? Keep that in mind for now. #### Deploy your files One of the ways you can use to upload your files to the server is using FTP ( File Transfer Protocol ) softwares, I usually use [FileZilla](https://filezilla-project.org/). You might also be using docker or git to host your build files, and all you have to do at this point is to fetch the latest updates to your folder or re-run your docker image/container with the latest updates. #### .htaccess file Before we talk about this file and give an example of the minimal content you need to have for your app to work on Apache2 web server, let's first quickly remember the incoming request that we're trying to send to our server. I'm assuming at the moment that: * `/var/www/html/` folder is empty; * you have a route in your React app that's called `/user/profile`; * the incoming request is trying to reach the `/user/profile/` route. But in fact, there's no directory path in our server that matches this route, so what will happen now if we don't have any instructions to our web server ( Apache2 ) to handle this incoming request, you'll definitely get a 404 Not Found error page! That's why we need to add the [.htaccess file](https://httpd.apache.org/docs/current/howto/htaccess.html) to instruct Apache2 to basically redirect all the incoming requests to this folder to the index.html file, [which will know how to handle your request](#two-important-points-to-notice-here). Finally, let's have a look on how the `.htaccess` file should look like at the minimal shape for your React application to work ( this piece of code is stolen from the official [React deployment page](https://create-react-app.dev/docs/deployment/), don't tell anyone! ): ``` Options -MultiViews RewriteEngine On RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^ index.html [QSA,L] ``` By default, Apache2 will ignore all the `.htaccess` files. You will need to install [a module](http://httpd.apache.org/docs/current/mod/mod_rewrite.html) to tell Apache2 to enable overriding the directories configurations using .htaccess files. To do that, you just need to run this command `sudo a2enmod rewrite`. Don't forget to restart Apache2 for this configuration to take place. Simply run `sudo systemctl apache2 restart`. And that's it! Now you have your application up and running on production.
amrhalim
849,895
Master CSS Selectors
Finding the correct selector was one of the hardest parts when I did my first web automated test and...
0
2021-10-03T14:28:43
https://dev.to/isolderea/master-css-selectors-4me7
Finding the correct selector was one of the hardest parts when I did my first web automated test and it took me some time to find the right resources for every problematic selector I had. If you want to know how you can learn CSS Selectors with not more than 5 minutes a day come join me explore them and resolve fun tasks. Together we will Master CSS Selectors. See you there. #css https://lnkd.in/euNy9zzN
isolderea
858,620
Considerations for Building a Web Component
There are a lot of things to consider when creating a new web component. What attributes and...
15,883
2021-10-10T17:36:11
https://dev.to/opencoder/considerations-for-building-a-web-component-5c19
There are a lot of things to consider when creating a new web component. What attributes and properties do we need? How are we going to style it? Are there accessibility concerns? What about security? The list goes on. Today, I'll go through the thought process of a making a new component with an example. ## The Design Comp ![Learning Card Design](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vgznar0xq15ls4gsu4w3.PNG) As you can see in the image above, we are going to make a "learning card" that consists of a couple things. 1. An SVG icon 2. Banner with Header and Subheader 3. Slotted HTML in the bottom If we consider the design there's a few things we want to be mindful of when putting this thing together. First off, you might notice the color and icon match a different subheader. So maybe we want to check the subheader to set the color and icon. You might also notice there are different fonts, font sizes, and padding to consider. Design-wise this should be relatively simple, as long as we can make everything responsive and set the overflow. ## Let's Break It Down Looking at the comp above, we can break it down into one component consisting of three other components. 1. Icon with circle around it 2. Banner with Header, Subheader, and Icon component 3. A "skeleton" or outline of the card (A component with something in the top and something in the bottom) 4. Put these all together to make a "learning card" By breaking this component down, we'll be able to reuse any of these components in the future, and we can have more control over how things interact. For each of the components we'll have to ask ourselves those initial questions: * What are the props and attributes? * Design concerns? * Accessibility? * Security? Once we have an answer to each of these questions, we tackle each component until we are able to put everything together. ## Expectations and Previous Experience I expect this component will be relatively easy to put together. However, there are a few things that I think might be challenging. The first is I don't know much about slotting content. This will be done in the "skeleton" / outline component, and we want to make sure that any HTML can be arranged in the card. The second challenge will be keeping the design consistent. We want the left margin to be consistent in the bottom half, and make sure content overflows in a natural way for both the banner and the bottom portion. If we can make the card responsive and slot content properly, then I think this will be a really good example of atomic design. In the past I made [a penguin button](https://github.com/PenStat/PenStat-CTA/tree/master/penguin-state-button), and I learned some valuable things from this experience. After that project, I have a better understanding of using properties to control the different attributes on a web component. I also learned a cool way to control the style of the component using CSS variables and Open WC's API. Overall, I learned how to answer those initial questions before creating the component which will be valuable when building this card. {% github elmsln/project-two %}
opencoder
850,664
SvelteKit GraphQL Queries using fetch Only
SvelteKit GraphQL queries using fetch only: how you can drop Apollo client and urql dependencies altogether to make your Svelte app leaner.
0
2021-10-04T09:21:17
https://rodneylab.com/sveltekit-graphql-queries-fetch/
svelte, graphql, webdev, javascript
--- title: "SvelteKit GraphQL Queries using fetch Only" published: "true" description: "SvelteKit GraphQL queries using fetch only: how you can drop Apollo client and urql dependencies altogether to make your Svelte app leaner." tags: "svelte, graphql, webdev, javascript" canonical_url: "https://rodneylab.com/sveltekit-graphql-queries-fetch/" cover_image: "https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ns4jbw5nncmlxdat1koa.png" --- ## 😕 Why drop Apollo Client and urql for GraphQL Queries? In this post we'll look at how you can perform SvelteKit GraphQL queries using fetch only. That's right, there's no need to <a aria-label="See how to use Apollo Client with SvelteKit" href="https://rodneylab.com/use-apollo-client-sveltekit/">add Apollo client</a> or urql to your Svelte apps if you have basic GraphQL requirements. We will get our GraphQL data from the remote API using just fetch functionality. You probably already know that <a aria-label="Read M D N documents on the fetch A P I" href="https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API">the fetch API is available in client code</a>. In SvelteKit it is also available in <a aria-label="Learn more about load function is Svelte Kit docs" href="https://kit.svelte.dev/docs#loading">load functions</a> and server API routes. This means you can use the code we produce here to make GraphQL queries directly from page components or any server route. We will use a currency API to pull the latest exchange rates for a few currencies, querying initially from a server API route. This will be super useful for a backend dashboard on your platform. You can use it to track payments received in foreign currencies, converting them back to your local currency be that dollars, rupees, euros, pounds or even none of those! This will be so handy if you are selling courses, merch or even web development services globally. Once the basics are up and running we will add an additional query from a client page and see how easy Svelte stores make it to update your user interface with fresh data. If that all sounds exciting to you, then let's not waste any time! ## ⚙️ SvelteKit GraphQL Queries: Setup We'll start by creating a new project and installing packages: ```shell pnpm init svelte@next sveltekit-graphql-fetch && cd $_ pnpm install ``` When prompted choose a **Skeleton Project** and answer **Yes** to TypeScript, ESLint and Prettier. ### API Key We will be using the <a aria-label="Open S W O P A P I documentation" href="https://swop.cx/documentation">SWOP GraphQL API</a> to pull the latest available currency exchange rates. To use the service we will need an API key. There is a free developer tier and you only need an email address to sign up. Let's <a aria-label="Sign up or a S W O P A P I account" href="https://swop.cx/account/register/developer">go to the sign up page now, sign up</a>, confirm our email address and then make a note of our new API key. ![SvelteKit GraphQL Queries: Getting an API Key. Sign up form for API key. Title is Getting Started. There are boxes to enter an email address and password as well as a start now submit button. Additional information includes a statement that no credit card is required. There is a checkbox to agree to the Terms and Conditions. Finally there is a link to sign if if you already have an account](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cop2lumantrd8ycj5hf2.png) ### Configuring `dotenv` Let's configure `dotenv` now so we can start using the API quick sticks. Install the `dotenv` package and the following font which we will use later: ```shell pnpm install -D dotenv @fontsource/source-sans-pro ``` Next edit `svelte.config.js` to use `dotenv`: ```javascript import 'dotenv/config'; import preprocess from 'svelte-preprocess'; /** @type {import('@sveltejs/kit').Config} */ const config = { // Consult https://github.com/sveltejs/svelte-preprocess // for more information about preprocessors preprocess: preprocess(), kit: { // hydrate the <div id="svelte"> element in src/app.html target: '#svelte' } }; export default config; ``` Finally, create a `.env` file in the project root folder containing your API key: ```plaintext SWOP_API_KEY="0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef" ``` With the preliminaries out of the way let's write our query. ## 🧱 API Route To create a GraphQL query using fetch, basically all you need to do is create a query object and a variables object, convert them to a string and then send them as the body to the right API endpoint. We will use `fetch` to do the sending as it is already included in SvelteKit though you could choose `axios` or some other package if you wanted to. In addition to the body, we need to make sure we include the right auth headers (as you would with Apollo client or urql). That's enough theory. If you want to do some more reading, <a aria-label="Read more about graph q l queries with fetch" href="https://www.netlify.com/blog/2020/12/21/send-graphql-queries-with-the-fetch-api-without-using-apollo-urql-or-other-graphql-clients/">Jason Lengstorf from Netlify wrote a fantastic article</a> with plenty of extra details. Let's write some code. Create a file at `src/routes/query/fx-rates.json.ts` and paste in the following code: ```typescript import type { Request } from '@sveltejs/kit'; export async function post( request: Request & { body: { currencies: string[] } } ): Promise<{ body: string } | { error: string; status: number }> { try { const { currencies = ['CAD', 'GBP', 'IDR', 'INR', 'USD'] } = request.body; const query = ` query latestQuery( $latestQueryBaseCurrency: String = "EUR" $latestQueryQuoteCurrencies: [String!] ) { latest( baseCurrency: $latestQueryBaseCurrency quoteCurrencies: $latestQueryQuoteCurrencies ) { baseCurrency quoteCurrency date quote } } `; const variables = { latestQueryBaseCurrency: 'EUR', latestQueryQuoteCurrencies: currencies }; const response = await fetch('https://swop.cx/graphql', { method: 'POST', headers: { 'Content-Type': 'application/json', Authorization: `ApiKey ${process.env['SWOP_API_KEY']}` }, body: JSON.stringify({ query, variables }) }); const data = await response.json(); return { body: JSON.stringify({ ...data }) }; } catch (err) { const error = `Error in /query/fx-rates.json.ts: ${err}`; console.error(error); return { status: 500, error }; } } ``` ### What this Code Does This is code for a fetch API route making use of SvelteKit's router. To invoke this code from a client, we just send a `POST` request to `/query/fx-rates.json` with that path being derived from the file's path. We will do this together shortly, so just carry on if this is not yet crystal clear. You can see in lines `9`&ndash;`24` we define the GraphQL query. This uses regular GraphQL syntax. Just below we define our query variables. If you are making a different query which does not need any variables, be sure to include an empty variables object. In line `31` you see we make a fetch request to the SWOP API. Importantly, we include the `Content-Type` header, set to `application/json` in line `34`. The rest of the file just processes the response and relays it back to the client. Let's create a store to save retrieved data next. ## 🛍 Store We will create a store as our &ldquo;single source of truth&rdquo;. Stores are an idiomatic Svelte way of sharing app state between components. We won't go into much detail here and you can <a aria-label="Learn more about Svelte stores in the Svelte tutorial" href="https://svelte.dev/tutorial/writable-stores">learn more about Svelte stores in the Svelte tutorial</a>. To build the store, all we need to do is create the following file. Let's do that now, pasting the content below into `src/lib/shared/stores/rates.ts` (you will need to create new folders): ``` import { writable } from 'svelte/store'; const rates = writable([]); export { rates as default }; ``` Next, we can go client side to make use of SvelteKit GraphQL queries using fetch only. ## 🖥 Initial Client Code: SvelteKit GraphQL queries using fetch We are using TypeScript in this project, but very little, so hopefully you can follow along even though you are not completely familiar with TypeScript. Replace the content of `src/routes/index.svelte` with the following: ```html <script context="module"> export const load = async ({ fetch }) => { try { const response = await fetch('/query/fx-rates.json', { method: 'POST', credentials: 'same-origin', headers: { 'Content-Type': 'application/json' }, body: JSON.stringify({ currencies: ['CAD', 'GBP', 'IDR', 'INR', 'USD'] }) }); return { props: { ...(await response.json()) } }; } catch (error) { console.error(`Error in load function for /: ${error}`); } }; </script> <script lang="ts"> import '@fontsource/source-sans-pro'; import rates from '$lib/shared/stores/rates'; export let data: { latest: { baseCurrency: string; quoteCurrency: string; date: Date; quote: number }[]; }; rates.set(data.latest); let newCurrency = ''; let submitting = false; </script> <main class="container"> <div class="heading"> <h1>FX Rates</h1> </div> <ul class="content"> {#each $rates as { baseCurrency, quoteCurrency, date, quote }} <li> <h2>{`${baseCurrency}\${quoteCurrency}`}</h2> <dl> <dt> {`1 ${baseCurrency}`} </dt> <dd> <span class="rate"> {quote.toFixed(2)} {quoteCurrency} </span> <details><summary>More information...</summary>Date: {date}</details> </dd> </dl> </li> {/each} </ul> </main> ``` With TypeScript you can define variable types alongside the variable. So in line `25`, we are saying that data is an object with a single field; `latest`. `latest` itself is an array of objects (representing currency pairs in our case). Each of these objects has the following fields: `baseCurrency`, `quoteCurrency`, `date` and `quote`. You see the type of each of these declared alongside it. ### What are we Doing Here? The first `<script>` block contains a load function. In SvelteKit, load functions contain code which runs before the initial render. It makes sense to call the API route we just created from here. We do that using the fetch call in lines `4`&ndash;`11`. Notice how the url matches the file path for the file we created. The JSON response is sent as a prop (from the return statement in lines `12`&ndash;`14`). Another interesting line comes in the second `<script>` block. Here at line `23`, we import the store we just created. Line `24` is where we import the props we mentioned as a `data` prop. The types come from the object we expect the API to return. It is not too much hassle to type this out for a basic application like this one. For a more sophisticated app, you might want to generate types automatically. We will have to look at this in another article, so this one doesn't get too long. Next we actually make use of the store. We add the query result to the store in line `27`. We will actually render whatever is in the store rather than the result of the query directly. The benefit of doing it that way is that we can easily update what is rendered by adding another other currency pair to the store (without any complex logic for merging what is already rendered with new query results). You will see this shortly. This should all work as is. Optionally add a little style before continuing: #### Optional Styling <details> <summary>Optional Styling</summary> ```html <style> :global body { margin: 0px; } .container { display: flex; flex-direction: column; background: #ff6b6b; min-height: 100vh; color: #1a535c; font-family: 'Source Sans Pro'; } .content { margin: 3rem auto 1rem; width: 50%; border-radius: 1rem; border: #f7fff7 solid 1px; } .heading { background: #f7fff7; text-align: center; width: 50%; border-radius: 1rem; border: #1a535c solid 1px; margin: 3rem auto 0rem; padding: 0 1.5rem; } h1 { color: #1a535c; } ul { background: #1a535c; list-style-type: none; padding: 1.5rem; } li { margin-bottom: 1.5rem; } h2 { color: #ffe66d; margin-bottom: 0.5rem; } dl { background-color: #ffe66d; display: flex; margin: 0.5rem 3rem 1rem; padding: 1rem; border-radius: 0.5rem; border: #ff6b6b solid 1px; } .rate { font-size: 1.25rem; } dt { flex-basis: 15%; padding: 2px 0.25rem; } dd { flex-basis: 80%; flex-grow: 1; padding: 2px 0.25rem; } form { margin: 1.5rem auto 3rem; background: #4ecdc4; border: #1a535c solid 1px; padding: 1.5rem; border-radius: 1rem; width: 50%; } input { font-size: 1.563rem; border-radius: 0.5rem; border: #1a535c solid 1px; background: #f7fff7; padding: 0.25rem 0.25rem; margin-right: 0.5rem; width: 6rem; } button { font-size: 1.563rem; background: #ffe66d; border: #1a535c solid 2px; padding: 0.25rem 0.5rem; border-radius: 0.5rem; cursor: pointer; } .screen-reader-text { border: 0; clip: rect(1px, 1px, 1px, 1px); clip-path: inset(50%); height: 1px; margin: -1px; width: 1px; overflow: hidden; position: absolute !important; word-wrap: normal !important; } @media (max-width: 768px) { .content, form, .heading { width: auto; margin: 1.5rem; } } </style> ``` </details> Ok let's take a peek at what we have so far by going to <a href="http://localhost:3000/">localhost:3000/</a>. ![SvelteKit GraphQL Queries: Story so Far. Image shows top part of a screenshot of how the app should look now, with styling applied. There is an FX Rates title and below a block containing two currency pair results: EUR\CAD and EUR\GBP. Eash shows the value of 1 euro in the curreny and a More information link](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8suhpivwxnrvky7uc9pz.png) ## 🚀 SvelteKit GraphQL queries using fetch: Updating the Store Finally we will look at how updating the store updates the user interface We will add a form in which the user can add a new currency. Edit `src/routes/index.svelte`: ```html <script lang="ts"> import '@fontsource/source-sans-pro'; import rates from '$lib/shared/stores/rates'; export let data: { latest: { baseCurrency: string; quoteCurrency: string; date: Date; quote: number }[]; }; rates.set(data.latest); let newCurrency = ''; let submitting = false; async function handleSubmit() { try { submitting = true; const response = await fetch('/query/fx-rates.json', { method: 'POST', credentials: 'same-origin', headers: { 'Content-Type': 'application/json' }, body: JSON.stringify({ currencies: [newCurrency] }) }); const responseData = await response.json(); const rate = responseData.data.latest[0]; submitting = false; rates.set([...$rates, rate]); newCurrency = ''; } catch (error) { console.error(`Error in handleSubmit function on /: ${error}`); } } </script> <main class="container"> <div class="heading"> <h1>FX Rates</h1> </div> <ul class="content"> {#each $rates as { baseCurrency, quoteCurrency, date, quote }} <li> <h2>{`${baseCurrency}\${quoteCurrency}`}</h2> <dl> <dt> {`1 ${baseCurrency}`} </dt> <dd> <span class="rate"> {quote.toFixed(2)} {quoteCurrency} </span> <details><summary>More information...</summary>Date: {date}</details> </dd> </dl> </li> {/each} </ul> <form on:submit|preventDefault={handleSubmit}> <span class="screen-reader-text" ><label for="additional-currency">Additional Currency</label></span > <input bind:value={newCurrency} required id="additional-currency" placeholder="AUD" title="Add another currency" type="text" /> <button type="submit" disabled={submitting}>Add currency</button> </form> </main> ``` In line `82` you see using Svelte it is pretty easy to link the value of an input to one of our TypeScript or javascript variables. We do this with the `newCurrency` variable here. In our `handleSubmit` function, we call our API route once more, this time only requesting the additional currency. In line `45` we see updating state is a piece of cake using stores. We just spread the current value of the rate store (this is nothing more than an array of the existing five currency objects) and tack the new one on the end. Try this out yourself, adding a couple of currencies. The interface should update straight away. ## 🙌🏽 SvelteKit GraphQL queries using fetch: What Do You Think? In this post we learned: - how to do SvelteKit GraphQL queries using fetch instead of Apollo client or urql, - a way to get up-to-date currency exchange rate information into your site backend dashboard for analysis, accounting and so many other uses, - how stores can be used in Svelte to update state. There are some restrictions on the base currency in SWOP's developer mode. The maths (math) to convert from EUR to the desired base currency isn't too complicated though. You could implement a utility function to do the conversion in the API route file. If you do find the service useful, or and expect to use it a lot, consider supporting the project by upgrading your account. As an extension you might consider pulling historical data from the SWOP API, this is not too different, to the GraphQL query above. Have <a aria-label="Open the S W O P Graph Q L playground" href="https://swop.cx/account/dashboard/playground">play in the SWOP GraphQL Playground to discover more of the endless possibilities</a>. Finally you might also find the <a aria-label="open the Purchasing Power A P I Git Hub repo" href="https://github.com/rwieruch/purchasing-power-parity">Purchasing Power API</a> handy if you are looking at currencies. This is not a GraphQL API though it might be quite helpful for pricing your courses in global economies you are not familiar with. Is there something from this post you can leverage for a side project or even client project? I hope so! Let me know if there is anything in the post that I can improve on, for any one else creating this project. You can leave a comment below, <a aria-label="tweet Rodney Lab" href="https://twitter.com/intent/user?screen_name=askRodney">@ me on Twitter</a> or try one of the other contact methods listed below. You can see the <a aria-label="Open the Rodney Lab Git Hub repo" href="https://github.com/rodneylab/sveltekit-graphql-fetch">full code for this SvelteKit GraphQL queries using fetch project on the Rodney Lab Git Hub repo</a>. ## 🙏🏽 SvelteKit GraphQL queries using fetch: Feedback Have you found the post useful? Do you have your own methods for solving this problem? Let me know your solution. Would you like to see posts on another topic instead? Get in touch with ideas for new posts. Also if you like my writing style, get in touch if I can write some posts for your company site on a consultancy basis. Read on to find ways to get in touch, further below. If you want to support posts similar to this one and can spare a few dollars, euros or pounds, please <a aria-label="Support Rodney Lab via Buy me a Coffee" href="https://rodneylab.com/giving/">consider supporting me through Buy me a Coffee</a>. Finally, feel free to share the post on your social media accounts for all your followers who will find it useful. As well as leaving a comment below, you can get in touch via <a aria-label="Reach out to me on Twitter" href="https://twitter.com/messages/compose?recipient_id=1323579817258831875">@askRodney</a> on Twitter and also <a aria-label="Contact Rodney Lab via Telegram" href="https://t.me/askRodney">askRodney on Telegram</a>. Also, see <Link aria-label="Get in touch with Rodney Lab" to="/contact">further ways to get in touch with Rodney Lab</Link>. I post regularly on <Link aria-label="See posts on svelte kit" to="/tags/sveltekit/">SvelteKit</Link> as well as other topics. Also <Link aria-label="Subscribe to the Rodney Lab newsletter" to="/about/#newsletter">subscribe to the newsletter to keep up-to-date</Link> with our latest projects.
askrodney
857,981
Split Cards Effect
so in this post i wont write much but will link a video where i have shown and explained every step...
0
2021-10-10T02:56:15
https://dev.to/official_fire/split-cards-effect-2mhp
so in this post i wont write much but will link a video where i have shown and explained every step :) and made it easier so you can understand 😊 ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ffkow5o41tepbdl1tw2t.png) so here is the link - https://www.youtube.com/watch?v=8MdlGXaK0YQ&t=14s *Source Code Link* - https://github.com/CoderZ90/split-card-effect :D Happy Coding Meet you in the next blog / vid Suggest me a topic for blog Hope you are safe and Happy Also don't forget to [subscribe](https://youtube.com/c/codingire?sub_confirmation=1) 🌟⚡
official_fire
858,470
Nest.js integration tests
Nest.js integration tests TDD is really hype shortcut, but it's hard to follow this...
0
2021-10-10T15:46:31
https://dev.to/devsilk/nest-js-integration-tests-o8n
# Nest.js integration tests TDD is really hype shortcut, but it's hard to follow this approach with only unit tests. The answer for this problem are integration tests . Here I'll present you how to setup those tests with Nest.js framework and PostgresSQL database. Those tests will cover whole API request lifecycle, from controllers, services, dtos, repositories, database entities and much more... ## Github Complex example https://github.com/devsilk-software/nestjs-integration-tests ## API setup Basic endpoint which creates Dog and saves it in Postgres database: ##### Controller Our entry-point which will listen for `POST` requests on `/dogs` path ```typescript import { Body, Controller, Post } from '@nestjs/common'; import { DogsService } from './dogs.service'; import { CreateDogDto } from './dto/create-dog.dto'; @Controller('dogs') export class DogsController { constructor(private readonly dogsService: DogsService) {} @Post() async create(@Body() createDogDto: CreateDogDto) { const created = await this.dogsService.create(createDogDto); return { id: created.id, }; } } ``` ##### Service This is the class which has injected connection to Postgres database and executes `save` query ```typescript import { Injectable } from '@nestjs/common'; import { Connection } from 'typeorm'; import { DogPostgresEntity } from './database/dog.entity'; import { CreateDogDto } from './dto/create-dog.dto'; @Injectable() export class DogsService { constructor(private connection: Connection) {} async create(createDogDto: CreateDogDto) { return this.connection.getRepository(DogPostgresEntity).save(createDogDto); } } ``` ##### Postgres entity Definition of dog table in Postgres database ```typescript import { Column, Entity, PrimaryGeneratedColumn } from 'typeorm'; @Entity() export class DogPostgresEntity { @PrimaryGeneratedColumn() id: number; @Column({ type: 'varchar' }) name: string; @Column({ type: 'int' }) age: number; @Column({ type: 'varchar' }) breed: string; } ``` #### `dogs.module` imported in `app.module` ```typescript import { Module } from '@nestjs/common'; import { TypeOrmModule } from '@nestjs/typeorm'; import { DogPostgresEntity } from './database/dog.entity'; import { DogsController } from './dogs.controller'; import { DogsService } from './dogs.service'; @Module({ imports: [TypeOrmModule.forFeature([DogPostgresEntity])], controllers: [DogsController], providers: [DogsService], }) export class DogsModule {} ``` ## Test module setup The most important part, where we'll cover all utils necessary for start writing integration tests for our Dog endpoint ### Setup Postgres with Docker Compose 1. Create `docker-compose.int.test.yml` at the root directory ```yml version: '3.3' services: int-postgres: container_name: int-postgres image: postgres ports: - 5432:5432 env_file: - .int.env ``` 2. Create `.int.env` file at the root directory where we will store database variables ```.env POSTGRES_HOST=localhost POSTGRES_USER=root POSTGRES_PASSWORD=root POSTGRES_DB=test-db POSTGRES_PORT=5432 ``` 3. Create `Makefile` at the root directory with useful command ```Makefile start-test-db: docker-compose -f docker-compose.int.test.yml up -d ``` 4. Run `make start-test-db` command ```bash $ make start-test-db ``` Now our testing database is running on port `5432`, to use our tests we need to create some helpful utils ### Setup test module 1. Create `test.module.ts` in `test` directory which will be used in every test file ```typescript import { Module } from '@nestjs/common'; import { ConfigModule } from '@nestjs/config'; import { TypeOrmModule } from '@nestjs/typeorm'; @Module({ imports: [ ConfigModule.forRoot({ envFilePath: '.int.env', isGlobal: true, }), TypeOrmModule.forRoot({ type: 'postgres', host: process.env.POSTGRES_HOST, port: +process.env.POSTGRES_PORT, username: process.env.POSTGRES_USER, password: process.env.POSTGRES_PASSWORD, database: process.env.POSTGRES_DB, autoLoadEntities: true, entities: [process.cwd() + '/**/*.entity{.ts,.js}'], synchronize: true, }), ], }) export class TestModule {} ``` As you can see here we are referring to our `.int.env` just to get values used for setting up connection with Postgres 2. Create `create-test-app.util.ts` in `test/utils` directory ```typescript import { DynamicModule, ForwardReference, Type } from '@nestjs/common'; import { NestExpressApplication } from '@nestjs/platform-express'; import { Test } from '@nestjs/testing'; import { Connection } from 'typeorm'; import { TestModule } from '../test.module'; type Imports = Array< Type<any> | DynamicModule | Promise<DynamicModule> | ForwardReference >; interface TestApp { connection: Connection; app: NestExpressApplication; } export async function createTestApp(imports: Imports): Promise<TestApp> { const testingModule = await Test.createTestingModule({ imports: [TestModule, ...imports], }).compile(); const app = await testingModule .createNestApplication<NestExpressApplication>() .init(); const connection = testingModule.get<Connection>(Connection); return { app, connection, }; } ``` What is happening here ? We are creating `testingModule` which is combination of `TestModule` created in previous step and modules passed via function arguments, based on that module we can create our `NestExpressApplication` and get connection to Postgres database. 3. Let's create last two utils which will be used in tests: `test/utils/clear-all-tables.ts` ```typescript import { Connection } from 'typeorm'; export const clearAllTables = async (connection: Connection): Promise<void> => { await Promise.all( connection.entityMetadatas.map(({ name }) => { return connection.createQueryBuilder().delete().from(name).execute(); }), ); }; ``` This one will be triggered after every test providing to have database in the same state before every test case `test/utils/make-request.ts` ```typescript import { INestApplication } from '@nestjs/common'; import * as request from 'supertest'; export function makeRequest(app: INestApplication) { return request(app.getHttpServer()); } ``` This one will be used to make HTTP request to our API in tests ### Let's create our first integration test ```typescript import { HttpStatus } from '@nestjs/common'; import { NestExpressApplication } from '@nestjs/platform-express'; import { Connection } from 'typeorm'; import { clearAllTables } from '../../test/utils/clear-all-tables.util'; import { createTestApp } from '../../test/utils/create-test-app.util'; import { DogsModule } from './dogs.module'; describe('Dogs module', () => { let app: NestExpressApplication; let connection: Connection; beforeAll(async () => { const testApp = await createTestApp([DogsModule]); connection = testApp.connection; app = testApp.app; }); afterEach(async () => { await clearAllTables(connection); }); afterAll(async () => { await connection.close(); await app.close(); }); }); ``` What is happening here ? `beforeAll` - we are creating our application with `createTestApp` which returns connection to database and `express` application `afterEach` - after every `it` we want to have the same state of database, so our tables needs to be empty `afterAll` - we need to remember to close connection to database and close `express` app Now we can add test scenarios for success respons in `POST /dogs` endpoint ```typescript describe('Dogs module', () => { let app: NestExpressApplication; let connection: Connection; beforeAll(async () => { const testApp = await createTestApp([DogsModule]); connection = testApp.connection; app = testApp.app; }); afterEach(async () => { await clearAllTables(connection); }); afterAll(async () => { await connection.close(); await app.close(); }); describe('POST /dogs', () => { describe('201', () => { it('should create dog and return id', async () => { const response = await makeRequest(app).post('/dogs').send({ name: 'Dingo', age: 3, breed: 'Beagle', }); expect(response.status).toBe(HttpStatus.CREATED); expect(response.body.id).toBeDefined(); }); it('should create dog and save it to db', async () => { const given: CreateDogDto = { name: 'Dingo', age: 3, breed: 'Beagle', }; const response = await makeRequest(app).post('/dogs').send(given); expect(response.status).toBe(HttpStatus.CREATED); expect(response.body.id).toBeDefined(); // check if saved dog in database has same properties sent via POST request const found = await connection .getRepository(DogPostgresEntity) .findOne(response.body.id); expect(found).toMatchObject({ id: response.body.id, name: given.name, age: given.age, breed: given.breed, }); }); }); }); }); ``` As you can see in every `it` we are making HTTP call to our API and then we can check the most important stuff - database changes and API responses. Thanks to such a approach we can test more complex endpoints which refers with multiple tables. ### For VSCode users If you don't want to run all test cases at once you can install [Jest Runner](https://marketplace.visualstudio.com/items?itemName=firsttris.vscode-jest-runner). Thanks to that it's really easy to debug every test case separately ![Debugging test cases](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aon6pqb31se15r267pbh.png)
olgierd
858,587
7 javaScript Array methods you should know
Arrays are one of the most common things a programmer uses or is likely to come across in a project....
0
2021-10-11T13:11:17
https://dev.to/mcube25/7-javascript-array-methods-you-should-know-7mf
javascript, webdev, beginners, programming
Arrays are one of the most common things a programmer uses or is likely to come across in a project. In this regard the array method we are going to look into should come in handy. We are going to use a single array for our examples ```javascript const clubs = [ { name: "All-stars", fans: 20000 }, { name: "Bay", fans: 30000 }, { name: "C-stars", fans: 25000 }, { name: "D-pillars", fans: 40000 }, { name: "Clos", fans: 60000 }, { name: "Magic", fans: 45000 } ] ``` Let us take a look at this methods and what they do to an array ##### filter The filter method is used to filter out or remove all elements from an array that affirms the subject in the proposition logic and they are returned in a new array without altering the original array for example ```javascript const filterClub = clubs.filter((item) => { return item.fans <= 30000; }); ``` All the clubs having fans of less than or equal to 30000 fans are going to be returned to a new array. ![Screenshot (91)](https://user-images.githubusercontent.com/23004266/136793752-642ffd60-6877-4de0-9f95-655106c33168.png) The filter method is a simple method to use. It returns true or false for each item. If the item is true, it is included in the new array and if it is false it is not included. The filter method does not change the array or object it is being filtered over. This method is convenient because we do not have to worry about the old array being changed when using it subsequently. ##### map This method allows the taking of an array and converting it to a new array so that all items in the array are going to look slightly different. Let us say we want to get the names of every clubs in the array sample. We can use the map method for this. Example ```javascript const clubNames = clubs.map((item) => { return item.name }); ``` ![Screenshot (92)](https://user-images.githubusercontent.com/23004266/136794128-9061b13f-9b3b-4f23-ad42-5312a846758e.png) We get a new array that prints out the names of the club in the original array without altering the original array. This is super convenient when you want to get the items in an object or the keys of an object or convert an array from one form to another. It has millions of uses. #### find This method allows a single object to be found in an array of objects. The method takes a single item as a parameter and returns the first item that returns true for the statement. ```javascript const findClub = clubs.find((item) => { return item.name === "All-stars" }); ``` ![Screenshot (93)](https://user-images.githubusercontent.com/23004266/136794411-39d02d01-230d-4e89-bf1e-fac9de361276.png) ##### forEach This method does not return anything unlike the methods we covered previously. It works very similarly to a forLoop but it takes a function instead and takes a single parameter ```javascript clubs.forEach((item) => { console.log(item.name); }); ``` For every single element inside the array, it prints out the names. The method makes working with an array where you have to loop through them much easier so that you don't have to write clunky, long forLoop syntax. ##### some This function does not return a brand new array. Instead what it does is to return true or false. We can check if some items in the array affirms or denies the subject in the proposition logic. example ```javascript const highestFans = clubs.some((item) => { return item.fans <= 30000 }); ``` ![Screenshot (94)](https://user-images.githubusercontent.com/23004266/136794833-9c90bc54-f70f-403d-aecd-629ad492cf30.png) It checks if any item value returns true and returns the first item that matches the criteria. ##### every This method checks if every single item in the array affirms the subject proposition logic and returns true or false example ```javascript const highestFans = clubs.every((item) => { return item.fans <= 30000 }); ``` ![Screenshot (95)](https://user-images.githubusercontent.com/23004266/136795110-f3d3b6d8-2c96-4034-9e56-eb550e1e461f.png) ##### reduce This method performs an operation on the array and returns a combination of all the different operations. To get the total of all the fans in our clubs array we use the reduce method in the following way ```javascript const totalFans = clubs.reduce((x, item) => { return item.fans + x; }, 0); ``` ![Screenshot (96)](https://user-images.githubusercontent.com/23004266/136795286-40a8f964-e2b1-4e90-81e7-58e12bdecc42.png) It takes a property and an item we want the property to be reduced to. It also takes a second parameter which is where we want to start the reduce from. In our case it starts from 0.
mcube25
858,601
Weekly Digest 40/2021
Welcome to my Weekly Digest #40 of this year. This weekly digest contains a lot of interesting and...
10,701
2021-10-10T17:00:55
https://dev.to/marcobiedermann/weekly-digest-40-2021-2dld
css, javascript, react, webdev
Welcome to my Weekly Digest #40 of this year. This weekly digest contains a lot of interesting and inspiring articles, videos, tweets, podcasts, and designs I consumed during this week. --- ## Interesting articles to read ### How I built a modern website in 2021 Kent rewrote [kentcdodds.com](http://kentcdodds.com/) using the latest technologies and he wants to talk about what he did. [How I built a modern website in 2021](https://kentcdodds.com/blog/how-i-built-a-modern-website-in-2021) ### 4 things you didn’t know you could do with GitHub Actions GitHub Actions is a powerful platform that empowers your team to go from code to cloud, all from the comfort of your repositories. [4 things you didn't know you could do with GitHub Actions | The GitHub Blog](https://github.blog/2021-03-04-4-things-you-didnt-know-you-could-do-with-github-actions/) ### Conditional Border Radius in CSS How to use CSS comparison functions to create a conditional border-radius [Conditional Border Radius In CSS - Ahmad Shadeed](https://ishadeed.com/article/conditional-border-radius/) --- ## Some great videos I watched this week ### Em & Rem Units Em and Rem units are used to size your text elements relative to one another. By assigning em units to your elements, you can change all of their font sizes at once, simply by changing the font size of the elements' root tag. {% youtube j7zf_iZjQB4 %} by [Christopher Lis](https://twitter.com/christopher4lis) ### What Are Build Tools in Web Development {% youtube V5qvWl-O-zE %} by [Scott Tolinski](https://twitter.com/stolinski) ### Go in 100 Seconds Learn the basics of the Go Programming Language. Go was developed at Google as a modern version of C for high-performance server-side applications. {% youtube 446E-r0rXHI %} by [Fireship](https://twitter.com/fireship_dev) ### Redux Sagas vs Redux Toolkit Query Redux Saga and Redux Toolkit Query are two great ways to do API access. Let's compare them head to head doing full create, read, update and delete operations on both libraries. If you've only seen one of these in action you'll want to check this out! {% youtube 0W4SdogReDg %} by [Jack Herrington](https://twitter.com/jherr) ### Custom Hooks in React In this video, Amy explains what a hook is and how to set up a custom hook. Using her custom audio player as a starting point, she makes the code more reusable converting it to a custom hook library. {% youtube yn7M6qOV_9o %} by [Amy Dutton](https://twitter.com/selfteachme) ### Setup Multiple Pages with Vite This lesson quickly demonstrates how to add multiple pages to a Vite application. This essentially means you get a multi-page app without adding any specific routing library. {% youtube STeKBm67l6M %} by [Basarat Ali Syed](https://twitter.com/basarat) --- ## Useful GitHub repositories ### MJML The only framework that makes responsive email easy. MJML is a markup language designed to reduce the pain of coding a responsive email. {% github mjmlio/mjml %} ### @react-three/flex Placing content in THREE.js is hard. @react-three/flex brings the webs flexbox spec to react-three-fiber. It is based on Yoga, Facebook's open-source layout engine for react-native. {% github pmndrs/react-three-flex %} ### Kubeapps A web-based UI for deploying and managing applications in Kubernetes clusters {% github kubeapps/kubeapps %} ## dribbble shots ### Dashboard Pricing Page ![https://cdn.dribbble.com/users/1723105/screenshots/16622997/media/621a5bb7c121c5872bbbe5daa636e695.png](https://cdn.dribbble.com/users/1723105/screenshots/16622997/media/621a5bb7c121c5872bbbe5daa636e695.png) by [Emmanuel Ikechukwu](https://dribbble.com/shots/16622997-TeamWork-Dashboard-Pricing-Page) ### Pixiedia Agency ![https://cdn.dribbble.com/users/1619633/screenshots/16622979/media/e637807809c01291ace5b4cba7aa26d3.png](https://cdn.dribbble.com/users/1619633/screenshots/16622979/media/e637807809c01291ace5b4cba7aa26d3.png) by [Afshin T2Y](https://dribbble.com/shots/16622979-Pixiedia-Agency) ### Crypter Dashboard Concept ![https://cdn.dribbble.com/users/3798578/screenshots/16622484/media/2e9a2d8b8c0716b533886e63cc29f25e.png](https://cdn.dribbble.com/users/3798578/screenshots/16622484/media/2e9a2d8b8c0716b533886e63cc29f25e.png) by [Arshia Amin Javahery](https://dribbble.com/shots/16622484-Crypter-Dashboard-Concept) ### Segmentation Icon ![https://cdn.dribbble.com/users/299116/screenshots/16613252/media/e61def3244bdeb77ecd823ac50f38501.jpg](https://cdn.dribbble.com/users/299116/screenshots/16613252/media/e61def3244bdeb77ecd823ac50f38501.jpg) by [Fabrizio Boni](https://dribbble.com/shots/16613252-Segmentation-Icon-for-Rule-Communication-Clay) --- ## Tweets {% twitter 1444736773142241285 %} {% twitter 1444861827427356676 %} {% twitter 1445607281484124168 %} {% twitter 1446164732315066371 %} --- ## Picked Pens ### Password Generator {% codepen https://codepen.io/marcobiedermann/pen/dyREBVZ %} by [Marco Biedermann](https://twitter.com/BiedermannMarco) ### Liquid Button {% codepen https://codepen.io/z-/pen/dyzyNQX %} by [Zed Dash](https://twitter.com/Osorpenke) --- ## Podcasts worth listening ### Ladybug – A Day In The Life Of Four Software Engineers What is a typical day in the life like for a software engineer? To close out Season 6, we thought it’d be a great idea to give you some insight into our workdays, as we all have very different roles and are in different stages of our careers. {% spotify spotify:episode:5XtKLmxOWQ3mRHhi5RJa8T %} ### Syntax – PHP Is Good and We’re Just Re-Creating It {% spotify spotify:episode:5FNgg3cnXQGFrKMTuXkQ4E %} ### Chats with Kent – Building Awesome Demos {% spotify spotify:episode:4TYc9qUvMdeTXTofxwkNUF %} Learn how to use projects to improve your skills and problem-solving! ### Chats with Kent – Effective Learning Learn strategies on how to effectively teach yourself! {% spotify spotify:episode:3jn6m5nLUedPQzZhAEh5mv %} ### Chats with Kent – Advancing Your Skills Learn about how good habits can be deliberately formed to advance your skills! {% spotify spotify:episode:5vLMag4y36A7fXmdHRDBGK %} --- Thank you for reading, talk to you next week, and stay safe! 👋
marcobiedermann
858,700
hey.
I'm new here. What’s up guys?
0
2021-10-10T21:03:36
https://dev.to/wtfcxt/hey-27lp
I'm new here. What’s up guys?
wtfcxt
858,904
A list of 75 app ideas, that don't exist yet and that people would actually use
https://github.com/Divide-By-0/app-ideas-people-would-use It's difficult to stay motivated coding or...
0
2021-10-11T02:26:04
https://dev.to/yushian/i-made-a-list-of-75-app-ideas-that-don-t-exist-yet-and-that-people-would-actually-use-5733
githunt, ideas, projects, webdev
https://github.com/Divide-By-0/app-ideas-people-would-use It's difficult to stay motivated coding or on a side project when you don't know if people will use it, or if your end goal is solving a solved problem. This list aims to fix that. Would love to hear the community's thoughts :)
yushian
859,044
How to Easily Format Markdown Files in VS Code
Every respectable software project needs a README. This file provides crucial information about what...
0
2021-10-11T11:10:38
https://betterprogramming.pub/how-to-easily-format-markdown-files-in-vs-code-9c6bcecbe6f2
tutorial, vscode, markdown, productivity
Every respectable software project needs a `README`. This file provides crucial information about what the project is, how to work with it, and other relevant information for developers. `README` files are written in *markdown*, a special markup syntax. The syntax for markdown is simple enough, but it can be a pain to manually type out, and it’s easy to make simple mistakes and typos. Wouldn’t you like to just use the `Cmd+B` keyboard shortcut to bold some text instead of typing `**` around your text? Or what about creating a nicely formatted table in your `README`, especially when editing an existing table? Wouldn’t it be nice if the table formatting and column width adjustments were taken care of for us? Markdown is wonderful, but it’s not exactly as easy as working with a Google doc when applying formatting. The [SFDocs Markdown Assistant VS Code extension](https://marketplace.visualstudio.com/items?itemName=salesforce.sfdocs-markdown-assistant) is here to help! --- ## Common Use Cases In this article, we’ll look at some common use cases when writing a markdown file. We’ll first look at simple text formatting like bold, italic, or strikethrough. Next, we’ll look at writing numbered lists. Finally, we’ll look at creating and modifying tables. Let’s get started! --- ## Bold, Italic, and Strikethrough Text Let’s start with the simple stuff. In markdown syntax, you can make your text bold by wrapping your text in `**`, italic by wrapping your text in `_`, and strikethrough by wrapping your text in `~~`. It’s not a huge burden to type out these characters, but it’d be really nice if we could just use keyboard shortcuts to format the text in the same way that we can when working with a Google doc. Here’s the markdown we’ll use to apply these styles: ![Markdown for bold, italic, and strikethrough text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/13zc90lapnyvrtmux2l7.png) <figcaption>Markdown for bold, italic, and strikethrough text</figcaption> And here’s what the output looks like when viewing the markdown file: ![Output for bold, italic, and strikethrough text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gmlngvy5s8c511vvvxg0.png) <figcaption>Output for bold, italic, and strikethrough text</figcaption> Now, let’s show how easy it is to apply these various styles when using the VS Code extension! We can make our text bold using `Cmd+B`, italic using `Cmd+I`, and strikethrough using `Option/Alt+S`. ![Demo for writing bold, italic, and strikethrough text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8wnfjrhjrducxu75q3mc.gif) <figcaption>Demo for writing bold, italic, and strikethrough text</figcaption> --- ## Numbered Lists Next, let’s take a look at lists. When creating a numbered list in a Google doc, hitting the `Enter` key after typing your first list item creates the second list item with the `2.` prefix already in place. In a markdown file, however, you have to manually type the number prefix for each item. It’d be nice if we could have that done for us automatically! Here’s the markdown we’ll use to create a numbered list: ![Markdown for a numbered list](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bli1psf7xj3vjhg7tkms.png) <figcaption>Markdown for a numbered list</figcaption> And here’s what the output looks like when viewing the markdown file: ![Output for a numbered list](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3b6am9i7lp234mqbt108.png) <figcaption>Output for a numbered list</figcaption> Now, let’s show how easy it is to create a numbered list when using the VS Code extension! We start by typing `1. Bread` to create the first item in our list. Then when we hit the `Enter` key, the next number prefix is added for us automatically! ![Demo for creating a numbered list](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/khwurd50yl811be8esrx.gif) <figcaption>Demo for creating a numbered list</figcaption> --- ## Tables Finally, let’s look at creating and modifying tables in markdown. Tables are easy enough to create, but a pain to modify, especially as the column widths change. Here’s the markdown we’ll use to create a table: ![Markdown for a table](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z1kwg95vp0kpkogpmb0b.png) <figcaption>Markdown for a table</figcaption> And here’s what the output looks like when viewing the markdown file: ![Output for a table](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pzf5i8wvrwik9wabebr8.png) <figcaption>Output for a table</figcaption> Now, let’s see a demo of how long it takes to create the table from scratch without using the VS Code extension. Note how much time we spend modifying the column widths, and we’re only doing this for a table with three rows! Imagine doing this with a much larger dataset and how much of a headache this would be. ![Demo for manually creating a table](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4nh7re61rgrssoeys2tg.gif) <figcaption>Demo for manually creating a table</figcaption> Now, let’s show how easy it is to create and modify a table when using the VS Code extension. For starters, we can insert an empty table by right-clicking in our file to open the context menu and then selecting “Table >> Table: Add.” We can then specify the number of columns and rows and hit `Enter` to create the dummy layout. Then, as we enter text into the table, we can navigate from cell to cell using `Tab` and `Shift+Tab`. Whenever those keys are pressed to navigate to a different table cell, the column widths automatically adjust. This is probably my favorite feature! ![Demo for creating a table using the VS Code extension to help](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gkm8y27bx0eijms6vdwv.gif) <figcaption>Demo for creating a table using the VS Code extension to help</figcaption> You can find even more cool features in the right-click context menu or by reading the [extension’s docs](https://marketplace.visualstudio.com/items?itemName=salesforce.sfdocs-markdown-assistant). --- ## Conclusion This markdown tool is incredibly simple, but it’s often the simple things that make the developer experience better. The SFDocs Markdown Assistant looks like a VS Code extension I’ll be keeping around!
thawkin3
859,059
Integration Testing in Flutter
In case it helped :) We will cover briefly: Setup for integration test flutter Using Robot...
0
2021-10-13T15:02:03
https://dev.to/aseemwangoo/integration-testing-in-flutter-18pf
computerscience, productivity, flutter, programming
*In case it helped :)* <a href="https://www.buymeacoffee.com/aseemwangoo" target="_blank"><img src="https://www.buymeacoffee.com/assets/img/custom_images/orange_img.png" alt="Pass Me A Coffee!!" style="height: 41px !important;width: 174px !important;box-shadow: 0px 3px 2px 0px rgba(190, 190, 190, 0.5) !important;-webkit-box-shadow: 0px 3px 2px 0px rgba(190, 190, 190, 0.5) !important;" ></a> <!-- wp:paragraph --> <p>We will cover briefly:</p> <!-- /wp:paragraph --> <!-- wp:list {"ordered":true} --> <ol><li>Setup for integration test flutter</li><li>Using Robot pattern for testing</li><li>Test the app</li><li>(Optional) Recording performance</li></ol> <!-- /wp:list --> <!-- wp:quote --> <blockquote class="wp-block-quote"><p></p></blockquote> <!-- /wp:quote --> {% youtube 9F6LkvvTTl8 %} <!-- wp:heading {"level":3} --> <h3>Setup for integration tests</h3> <!-- /wp:heading --> {% youtube UQ4HIzgwxlM %} <!-- wp:paragraph --> <p>Integration testing (also called end-to-end testing or GUI testing) is used to simulate a user interacting with your app by doing things like clicking buttons, selecting items, scrolling items, etc.&nbsp;</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>We use the <code>integration_test</code> package for writing integration tests. Add the plugin to your <code>pubspec.yaml</code> file as a development dependency</p> <!-- /wp:paragraph --> ```dart # pubspec.yaml dev_dependencies: flutter_test: sdk: flutter integration_test sdk: flutter ``` <!-- wp:quote --> <blockquote class="wp-block-quote"><p>Detailed description <a href="https://flutter.dev/docs/testing/integration-tests" rel="noreferrer noopener" target="_blank">here</a>.</p></blockquote> <!-- /wp:quote --> <!-- wp:paragraph --> <p><strong>Begin Code</strong></p> <!-- /wp:paragraph --> {% youtube hhjSTJOJ-2g %} <!-- wp:image {"align":"center"} --> <div class="wp-block-image"><figure class="aligncenter"><img src="https://cdn-images-1.medium.com/max/1600/1*QcN7y0Kd5GuKKPe3aELTBQ.png" alt="Summary for Integration Tests"/><figcaption>Summary for Integration Tests</figcaption></figure></div> <!-- /wp:image --> <!-- wp:list --> <ul><li>We create a new directory <code>test_driver</code>containing a new file, <code><a href="https://github.com/AseemWangoo/dynamism/blob/master/test_driver/integration_test.dart" rel="noreferrer noopener" target="_blank">integration_test.dart</a></code></li><li>We need to call <code>integrationDriver()</code> inside this file, but we will customize it, for allowing us to save the results of the integration tests.</li></ul> <!-- /wp:list --> ```dart ## Save the results of the integration tests integrationDriver(responseDataCallback:(Map&lt;String, dynamic> data) async { await fs.directory(_destinationDirectory).create(recursive: true); final file = fs.file(path.join( _destinationDirectory, '$_testOutputFilename.json', )); final resultString = _encodeJson(data); await file.writeAsString(resultString); }); const _testOutputFilename = 'integration_response_data'; ``` <!-- wp:quote --> <blockquote class="wp-block-quote"><p>This will create a file <code>integration_response_data.json</code> after our tests are finished.</p></blockquote> <!-- /wp:quote --> <!-- wp:image {"align":"center"} --> <div class="wp-block-image"><figure class="aligncenter"><img src="https://cdn-images-1.medium.com/max/1600/1*B-KJ2h2Pb8AFEbypEbugcA.png" alt=""/><figcaption>Integration Test Results</figcaption></figure></div> <!-- /wp:image --> <!-- wp:paragraph --> <p>Next, we write tests under <code><a href="https://github.com/AseemWangoo/dynamism/blob/master/integration_test/app_test.dart" rel="noreferrer noopener" target="_blank">integration_test/app_test.dart</a></code> We need to call the method <code>IntegrationTestWidgetsFlutterBinding.ensureInitialized()</code> in the main function.</p> <!-- /wp:paragraph --> <!-- wp:quote --> <blockquote class="wp-block-quote"><p>Note: The tests should follow <code>integration_test/&lt;name&gt;_test.dart</code> pattern</p></blockquote> <!-- /wp:quote --> <!-- wp:list --> <ul><li>To run our integration tests, first, make sure the emulators are running, and then we run the following</li></ul> <!-- /wp:list --> ```dart flutter drive \ --driver=test_driver/integration_test.dart \ --target=integration_test/app_test.dart ## To specify a device flutter drive \ --driver=test_driver/integration_test.dart \ --target=integration_test/app_test.dart -d "9B4DC39F-5419-4B26-9330-0B72FE14E15E" ## where 9B4DC39F-5419-4B26-9330-0B72FE14E15E is my iOS simulator ``` <!-- wp:paragraph --> <p><strong>Using Robot pattern for testing</strong></p> <!-- /wp:paragraph --> {% youtube A5lcduAfGDo %} <!-- wp:paragraph --> <p>Robot Testing is an end-to-end (E2E) technique that mimics human interaction. It focuses on <strong>WHAT WE ARE TESTING</strong> instead of <strong>HOW WE ARE TESTING.</strong></p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>Instead of writing all the tests in one file, we basically create robots per screen and then use the integration testing tools to test our application.</p> <!-- /wp:paragraph --> <!-- wp:quote --> <blockquote class="wp-block-quote"><p><a href="https://verygood.ventures/blog/robot-testing-in-flutter" rel="noreferrer noopener" target="_blank">Inspired by this</a></p></blockquote> <!-- /wp:quote --> <!-- wp:image {"align":"center"} --> <div class="wp-block-image"><figure class="aligncenter"><img src="https://cdn-images-1.medium.com/max/1600/1*7AtZPVa8x6fopJFYOBdJ7w.png" alt="Robot pattern for testing"/><figcaption>Robot pattern for testing</figcaption></figure></div> <!-- /wp:image --> <!-- wp:heading {"level":4} --> <h4>Create a&nbsp;Robot</h4> <!-- /wp:heading --> <!-- wp:list --> <ul><li>We create a new file called <code><a href="https://github.com/AseemWangoo/dynamism/blob/master/integration_test/robots/home_robot.dart" rel="noreferrer noopener" target="_blank">home_robot.dart</a></code> which takes in the <a href="https://api.flutter.dev/flutter/flutter_test/WidgetTester-class.html" rel="noreferrer noopener" target="_blank">WidgetTester</a> as a constructor parameter.&nbsp;</li></ul> <!-- /wp:list --> <!-- wp:quote --> <blockquote class="wp-block-quote"><p>Note: Every robot takes in the WidgetTester as a constructor parameter and defines the methods as required by that screen</p></blockquote> <!-- /wp:quote --> ```dart class HomeRobot { const HomeRobot(this.tester); final WidgetTester tester; Future<void> findTitle() async {} Future<void> scrollThePage({bool scrollUp = false}) async{} Future<void> clickFirstButton() async {} Future<void> clickSecondButton() async {} } ``` <!-- wp:paragraph --> <p>and we have the methods declared as required per our home screen.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>Inside our, <code><a href="https://github.com/AseemWangoo/dynamism/blob/master/integration_test/app_test.dart" rel="noreferrer noopener" target="_blank">app_test.dart</a></code> we initialize the HomeRobot inside the<code>testWidgets</code> function and then use the methods as per the behavior</p> <!-- /wp:paragraph --> <!-- wp:image {"align":"center"} --> <div class="wp-block-image"><figure class="aligncenter"><img src="https://cdn-images-1.medium.com/max/1600/1*vH-FTb6IYbvl8fq7w3tckg.png" alt="Home Robot example"/><figcaption>Home Robot example</figcaption></figure></div> <!-- /wp:image --> <!-- wp:list --> <ul><li>Even the non-technical stakeholders can understand the tests here, which is the point for robot testing.</li><li>We repeat the steps for the second and third screens, by creating screen respective robots and writing tests inside the main file.</li></ul> <!-- /wp:list --> <!-- wp:paragraph --> <p><strong>Test the app</strong></p> <!-- /wp:paragraph --> {% youtube 7pwrgfZ7LJs %} <!-- wp:image {"align":"center"} --> <div class="wp-block-image"><figure class="aligncenter"><img src="https://cdn-images-1.medium.com/max/1600/1*M2ZDK4yyLTIhfLs4Xre55w.png" alt="Home Screen"/><figcaption>Home Screen</figcaption></figure></div> <!-- /wp:image --> <!-- wp:paragraph --> <p>This is our home screen, we write the first test for finding the title.</p> <!-- /wp:paragraph --> <!-- wp:heading {"level":4} --> <h4>Find title</h4> <!-- /wp:heading --> <!-- wp:list --> <ul><li>We make use of the tester inside the home robot and call <code>pumpAndSettle</code> </li><li>Finally, we expect to find the text (on the home screen) and only with one widget.</li></ul> <!-- /wp:list --> ```dart Future<void> findTitle() async { await tester.pumpAndSettle(); expect(find.text("Fames volutpat."), findsOneWidget); } ``` <!-- wp:quote --> <blockquote class="wp-block-quote"><p>Note: <code><a href="https://api.flutter.dev/flutter/flutter_test/WidgetTester/pumpAndSettle.html" rel="noreferrer noopener" target="_blank">pumpAndSettle</a></code> triggers the frames, until there is nothing to be scheduled</p></blockquote> <!-- /wp:quote --> <!-- wp:heading {"level":4} --> <h4>Scroll the&nbsp;page</h4> <!-- /wp:heading --> <!-- wp:list --> <ul><li>We specify the finder (for the scroll widget) and call the <code><a href="https://api.flutter.dev/flutter/flutter_test/WidgetController/fling.html" rel="noreferrer noopener" target="_blank">fling</a></code> method on the tester. </li><li>In order to scroll down, we need to specify y-offset as negative and for scrolling up, vice-versa</li></ul> <!-- /wp:list --> ```dart Future<void> scrollThePage({bool scrollUp = false}) async { final listFinder = find.byKey(const Key('singleChildScrollView')); if (scrollUp) { await tester.fling(listFinder, const Offset(0, 500), 10000); await tester.pumpAndSettle(); expect(find.text("Fames volutpat."), findsOneWidget); } else { await tester.fling(listFinder, const Offset(0, -500), 10000); await tester.pumpAndSettle(); expect(find.text("Sollicitudin in tortor."), findsOneWidget); } } ``` <!-- wp:paragraph --> <p>Once the scrolling is done, we expect to find the text at the bottom</p> <!-- /wp:paragraph --> <!-- wp:image {"align":"center"} --> <div class="wp-block-image"><figure class="aligncenter"><img src="https://cdn-images-1.medium.com/max/1600/1*hRHgAVdbx_a41Wf8ZGsDaQ.png" alt="HomeScreen bottom part"/><figcaption>HomeScreen bottom part</figcaption></figure></div> <!-- /wp:image --> <!-- wp:heading {"level":4} --> <h4>Click the&nbsp;button</h4> <!-- /wp:heading --> <!-- wp:list --> <ul><li>After reaching the bottom of the home screen, now we simulate the button press.</li><li>First, we make sure that the button is visible, by calling<code><a href="https://api.flutter.dev/flutter/flutter_test/WidgetController/ensureVisible.html" rel="noreferrer noopener" target="_blank">ensureVisible</a></code> </li><li>Finally, we simulate the button press using <code><a href="https://api.flutter.dev/flutter/flutter_test/WidgetController/tap.html" rel="noreferrer noopener" target="_blank">tap</a></code></li></ul> <!-- /wp:list --> ```dart Future<void> clickFirstButton() async { final btnFinder = find.byKey(const Key(HomeStrings.bOp1)); await tester.ensureVisible(btnFinder); await tester.tap(btnFinder); await tester.pumpAndSettle(); } ``` <!-- wp:list --> <ul><li>Once, the buttons are pressed we are navigated to the next screens. Let’s test those.</li></ul> <!-- /wp:list --> <!-- wp:heading {"level":4} --> <h4>Second Screen</h4> <!-- /wp:heading --> <!-- wp:list --> <ul><li>As mentioned above, we follow the robot pattern, hence we have the <code><a href="https://github.com/AseemWangoo/dynamism/blob/master/integration_test/robots/secondscreen_robot.dart" rel="noreferrer noopener" target="_blank">secondscreen_robot.dart</a></code> </li><li>We find the title, scroll the page (same as above), additionally now we test the back button.</li><li>We navigate back to the home screen using <code><a href="https://api.flutter.dev/flutter/flutter_driver/PageBack-class.html" rel="noreferrer noopener" target="_blank">pageBack</a></code> (this finds the back button on the scaffold)</li></ul> <!-- /wp:list --> ```dart Future<void> goBack() async { await tester.pageBack(); await tester.pumpAndSettle(); } ``` <!-- wp:heading {"level":4} --> <h4>Third Screen</h4> <!-- /wp:heading --> <!-- wp:paragraph --> <p>Below is our third screen</p> <!-- /wp:paragraph --> <!-- wp:image {"align":"center"} --> <div class="wp-block-image"><figure class="aligncenter"><img src="https://cdn-images-1.medium.com/max/1600/1*gVmbvssXOnFg5rqJUJDWoA.png" alt="Third Screen"/><figcaption>Third Screen</figcaption></figure></div> <!-- /wp:image --> <!-- wp:list --> <ul><li>We follow the robot pattern, hence we have the <code><a href="https://github.com/AseemWangoo/dynamism/blob/master/integration_test/robots/thirdscreen_robot.dart" rel="noreferrer noopener" target="_blank">thirdscreen_robot.dart</a></code> </li><li>We find the title, scroll the page (same as above), additionally now we perform the click action on the tile using <code><a href="https://api.flutter.dev/flutter/flutter_test/WidgetController/tap.html" rel="noreferrer noopener" target="_blank">tap</a></code></li></ul> <!-- /wp:list --> ```dart Future<void> clickTile(int item) async { assert(item != null &amp;&amp; item >= 0 &amp;&amp; item &lt;= 5); final key = 'fringilla_item_${item.toString()}'; final itemFinder = find.byKey(Key(key)); await tester.tap(itemFinder); await tester.pumpAndSettle(); } ``` <!-- wp:paragraph --> <p>With the click of the tile, we open a web view with a URL, but now we have a cross icon.</p> <!-- /wp:paragraph --> <!-- wp:image {"align":"center"} --> <div class="wp-block-image"><figure class="aligncenter"><img src="https://cdn-images-1.medium.com/max/1600/1*KL9yGwP9hgsyBUlPndkqTQ.png" alt="WebView Flutter"/><figcaption>WebView Flutter</figcaption></figure></div> <!-- /wp:image --> <!-- wp:list --> <ul><li>We find the cross icon, using <code>find.byIcon</code> and then use tap to close the current webview.</li></ul> <!-- /wp:list --> ```dart Future<void> goBack() async { final closeIconFinder = find.byIcon(Icons.close); await tester.tap(closeIconFinder); await tester.pumpAndSettle(); await tester.pageBack(); await tester.pumpAndSettle(); } ``` <!-- wp:paragraph --> <p>Now, we come to the third screen and from there, we call<code><a href="https://api.flutter.dev/flutter/flutter_driver/PageBack-class.html" rel="noreferrer noopener" target="_blank">pageBack</a></code> to return to the home screen.</p> <!-- /wp:paragraph --> <!-- wp:heading {"level":3} --> <h3>Recording Performance</h3> <!-- /wp:heading --> <!-- wp:list --> <ul><li>Let’s say you want to record the performance of your test (maybe scroll) we first make use of <code>IntegrationTestWidgetsFlutterBinding.ensureInitialized()</code> </li><li>Then we call the <code><a href="https://api.flutter.dev/flutter/package-integration_test_integration_test/IntegrationTestWidgetsFlutterBinding/watchPerformance.html" rel="noreferrer noopener" target="_blank">watchPerformance</a></code> and inside it, we perform our scroll</li></ul> <!-- /wp:list --> ```dart final listFinder = find.byKey(const Key('singleChildScrollView')); await binding.watchPerformance(() async { await tester.fling(listFinder, const Offset(0, -500), 10000); await tester.pumpAndSettle(); }); ``` <!-- wp:quote --> <blockquote class="wp-block-quote"><p>Note: The result is saved under the JSON file (which we created during the initialization of integration tests)</p></blockquote> <!-- /wp:quote --> <!-- wp:paragraph --> <p><a rel="noreferrer noopener" href="https://github.com/AseemWangoo/dynamism" target="_blank"><em>Source code.</em></a> {% youtube 9F6LkvvTTl8 %} *In case it helped :)* <a href="https://www.buymeacoffee.com/aseemwangoo" target="_blank"><img src="https://www.buymeacoffee.com/assets/img/custom_images/orange_img.png" alt="Pass Me A Coffee!!" style="height: 41px !important;width: 174px !important;box-shadow: 0px 3px 2px 0px rgba(190, 190, 190, 0.5) !important;-webkit-box-shadow: 0px 3px 2px 0px rgba(190, 190, 190, 0.5) !important;" ></a>
aseemwangoo
859,123
Set vs Array
There are several ways you can solve coding problems as a JavaScript developer, thanks to the...
0
2021-10-11T07:13:52
https://dev.to/david4473/set-vs-array-14c0
There are several ways you can solve coding problems as a JavaScript developer, thanks to the plethora of pre-established data structures designed to solve simple to real-world problems. Data structures are techniques for storing and organizing data, which enables efficient modification. The data structure also determines relationships between data and functions to use in accessing them. JavaScript data structures have their respective use-cases; this is due to the unique properties each object possess. However, this doesn't mean there isn't any form of similarities between them. The Array and Set data structure types have a lot in common, and we're going to be looking at what similarities they share, how they differ from each other and their use-cases. *** ##What is a Set and an Array? ###Set Set is a data type that stores data that are unique values of any type, whether primitive or object references. Set stores data in collections with keys that are iterable in the order of insertion. ###Array An array on the other hand - is a global object that stores high-level list-like objects called arrays in the consecutive memory for later use. Every array has several cells, and each cell has a numeric index that is used to access its data. It is the most basic and most used of all data structures. Now that we know what they are, let's take a look at the similarities between these objects. *** ##Similarities & Differences For us to comprehend some of the technical aspects of this section, we need to understand how to construct or initialize either of the objects. Both objects have a built-in constructor which utilises the `new` syntax for declaring a new data structure. However, unlike a set, an array is not limited to this method of declaration. An array can also be declared with the literal method. You can initialize a new Set like this: ```javascript const set = new Set(); ``` And a new array like this: ```javascript const arr = new Array(); ``` Or: ```javascript const arr = []; ``` In addition, the literal method of initialising an array is much faster; construction-wise, and even performance-wise. The constructor method, however - is slower compared to the former, and prone to mistakes such as this: ```javascript const arr = new Array(20); arr[0] //outputs: undefined arr.length //outputs: 10 //Literal method const arrLtr = [20]; arrLtr[0] //outputs: 20 arrLtr.length //outputs: 1 //The later is accurate ``` With that out of the way, we can now go into the nitty-gritty of what these objects have in common. The similarities between Set and Array is conspicuous. if you've ever worked with arrays before (who hasn't anyway), you'll immediately notice some of the things they have in common. But even with the glaring dead ringer, the differences between them are quite stretched. One of the biggest, perhaps the biggest, differences between Set and Array is that; Set can only contain one instance of a value. For example, in an array of `people`, a `peter` value can appear as many times as you want. Whereas in Set, you can only have one `peter` value. If you try to add more, nothing happens. What if you hard-code duplicate values into a set, what happens? Set will only pick one of the duplicate values and delete the rest. However, Set's intolerance for duplicate data values has its perks, especially in cases where you don't want duplicate values or data leaks in your data structure. Most of Set and Array's likeness lies within how they operate; how they populate their structure with data and otherwise. *** ##Inserting and removing elements Each object has its built-in methods for adding and removing values from its respective structure. While Array has more than one method of inserting or removing values, Set only has one. Before going deep into this section, we first need to understand how items are inserted and removed from a data structure. A Data structure is basically a stack of objects that are inserted and removed according to the last-in-first-out principle, which is also known as (LIFO). Elements can only be added and removed from the top of the stack. A helpful analogy is to think of a stack of books; you can add new books from the top and remove only the book at the top. <img width="100%" style="width:100%" src="https://media.giphy.com/media/tC5t2np7BZK5WI3G3O/source.gif"> ###Inserting and removing elements in an Array The Array has two methods for this functionality `push()` and `pop()`. The push method adds new items to the top of the data structure, while pop removes items from the top of the structure i.e the last item added. ####Array.protoype.push() ```javascript const arr = [ ]; arr.push("Mario") console.log(arr); //Output: // Mario ``` ####Array.protoype.pop() ```javascript const arr = ["Mario", "Luigi", "Bowser"] //Remove "Bowser" arr.pop() console.log(arr) //Output: // Mario, Luigi ``` ####Array.protoype.unshift() / shift() The `unshift()` and `shift()` methods works contrary to the LIFO principle, these methods are used to insert and remove items to the bottom of a data structure i.e at index `0`. ####Array.protoype.unshift() ```javascript const arr = [ 4, 6, 8, 10 ]; //Add 2 to index 0 arr.unshift(2) console.log(arr) //Output: // 2, 4, 6, 8, 10 ``` ####Array.protoype.shift() ```javascript const arr = [ 2, 4, 6, 8, 10 ]; //Remove 2 from index 0 arr.shift() console.log(arr) //Output: // 4, 6, 8, 10 ``` ####Array.protoype.splice() `splice()` is another array method that can be used to remove or replace an existing element in an array. The method takes in two parameters for deleting an element and three for replacing or adding a new element: ```javascript splice(start, deleteCount) ``` Or ```javascript splice(start, deleteCount, item1) ``` `start` The index at which you want to start inserting or delete elements from an array. `deleteCount` An integer specifying the number of elements to be removed from the `start` index. N.B This is an optional parameter. `item` This is the element to be added to an array from the `start` index. This is also an optional parameter. Here is a basic `splice()` demo: **Remove 0 (zero) elements before index 2, and insert "John".** ```javascript let students = ['Hugh', 'Jack', 'Dave', 'Katerina']; students.splice(2, 0, 'John'); console.log(student); //Output: //['Hugh', 'Jack', 'john' , 'Dave', 'Katerina'] ``` **Remove 1 element at index 1** ```javascript let students = ['Hugh', 'Jack', 'Dave', 'Katerina']; let removedStudents = students.splice(1, 1); console.log("Students:" + students); console.log("Removed students:" + removedStudents); //Output: //Students: ['Hugh', 'Jack', 'john' , 'Dave', 'Katerina'] //Removed students: [ 'jack' ] ``` `splice()` can also be used to instantly clear every element in an array like this: ```javascript let fruits = ['Mangos', 'papaya', 'Apple']; fruits.splice(0, fruits.length); console.log(fruits); //Output: // [ ] Empty array ``` For a more in-depth analysis on `splice()`, [Visit the Mozilla docs](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/splice). ####Array.from() The `from()` method lets you populate an array with an array-like or iterable object. That is; an object with a length property and iterable objects such as a `set`. In a nutshell, you can populate an array with elements in a pre-existing `set`. ```javascript let fruits = new Set(['Mangos', 'papaya', 'Apple']); let arr = Array.from(fruits); console.log(arr); //Output: // [ 'Mangos', 'papaya', 'Apple' ] ``` ###Inserting and removing items in a Set Just as stated above, Set has only one method of adding items to its structure; using the `add()` method. This method works just like Array's `push()` method. It also has a `delete()` method for removing items just like the `pop()` method. Set follows the LIFO principle of inserting and removing elements in a data structure, but unlike Array, Set doesn't have a way around the LIFO principle; elements can only be inserted and removed from the top of a set data structure. ####set.prototype.add() ```javascript const set = new Set(); set.add("Peach"); set.add("Mario"); console.log(set) //Output: // "Peach", "Mario" ``` ####set.prototype.delete() ```javascript const set = new Set(["Peach", "Mario", "Bowser"]); set.delete("Bowser"); console.log(set); //Output: // "Peach", "Mario" ``` Similarly to Array's `from()` method, Set can also populate its structure with a pre-existing array. Set doesn't require an extra method for this functionality, the constructor handles it just fine: ```javascript let arr = ['John', 'Mike', 'Steph']; let newSet = new Set(arr); console.log(newSet); // Output: // { 'John', 'Mike', 'Steph' } ``` ####set.prototype.clear() Set's `clear()` method is a very straightforward and efficient method for clearing out elements in a data structure. It doesn't require extra arguments like the former's method. ```javascript let newSet = new Set([ 'John', 'Mike', 'Steph' ]); newSet.clear(); console.log(newSet); // Output: // { } Empty set object ``` *** ##Accessing elements How elements in a data structure are accessed or selected is determined by the data type being used. Take an Array, for example, elements in an array are accessed by their cell's numeric index. This means, the first element in an array can be accessed at index 0, and the last element; at the index value equal to the array's length minus 1. **Accessing the first element** ```javascript let arr = [ 'cat', 'dog', 'mouse' ]; console.log(arr[0]); //This will output 'cat' //Because the 'cat' value is at index 0 ``` **Accessing the last element** ```javascript let arr = [ 'cat', 'dog', 'mouse' ]; console.log(arr[ arr.length -1]); //This will output 'mouse' //Because the 'mouse' element is the last in the array ``` Set accesses elements in a data structure differently. This is because a set does not support the selection of random elements by its index like Array. So the `indexOf()` method will only work with an array. ```javascript let arr = [ 'cat', 'dog', 'mouse' ]; let newSet = new Set(arr); console.log(arr[0]) // outputs 'cat' console.log(newSet[0]) // outputs undefined ``` Set checks if an element is in its structure with the `has()` method. This method is simpler compared to Array's technique for checking elements. ```javascript let animalSet = ([ 'cat', 'dog', 'mouse' ]); let isMouse = animalSet.has('mouse') console.log(isMouse) // outputs true ``` Performing something similar in an array requires an extra conditional check and code with the `indexOf()` method like this: ```javascript let arr = [ 'Peach', 'Mario', 'Bowser' ]; //checks if Mario is at index 1 let isMario = arr.indexOf('Mario') === 1; console.log(isMario) // outputs true ``` The `indexOf()` method returns `-1` if the element being searched for is not present in the structure. *** ##Length and size Certain functionalities require we check the total amount of elements present in an array or set structure. Javascript has a global `length` property for displaying the number of characters in a string and elements in any data type that stores list-like elements. And If there's anything to go by, it's that an array works with almost every global method and property in Javascript. So without dwelling too much on it, here's a demo of the `length` property in an array: ```javascript const arr = [2, 3, 4]; console.log(arr.length) //Outputs // 3 ``` Set has its unique method for checking the number of elements in a set; with the `size` method. This method works exactly like the length property: ```javascript const set = new Set(); set.add(2) set.add(3) set.add(4) console.log(set.size) //Outputs // 3 ``` *** ##Iteration There are different ways one could iterate over elements in an array or set, but unlike the former; set isn't compatible with the conventional techniques for iterating over elements in programming using functions like the `for` and `while` loop statements. Looping in a set can be easily done with the `forEach()` method. Although several other methods can be used in its stead, methods like: - iterator() - values() - entries()... The `forEach()` method is perhaps the most unequivocal of them all. ```javascript const breeds = new Set(); breeds.add("dog") breeds.add("cat") breeds.add("bunny") const iterate = (value) => { console.log(value + " breed" ); } breeds.forEach(iterate); //Outputs: // dog breed // cat breed // bunny breed ``` When it comes to iteration, an array has a list of flexible options to choose from, your choice is often determined by the kind of functionality you're trying to implement. We can't go deeply into every one of these methods as they are beyond the scope of this article, but we'll examine the most used of them all; the `for` and `while` loop. ####For loop ```javascript const arr = [ 1, 2, 3, ] for (var i = 0; i < arr.length; i++) { console.log('number: ' + arr[i]); } //Output: // number: 1 // number: 2 // number: 3 ``` ####While loop ```javascript let x = 0; while (x < 5) { x++; } console.log(x); //Output: // 5 ``` Visit the [Mozilla docs](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Loops_and_iteration) to learn more about these statements. *** ##When to use Set or Array Knowing when to use a set or an array is a no-brainer, it generally boils down to two things; unique elements and Performance. - Use set when you want unique elements in your data structure. Although an array can be modified to accept unique elements, set is, however, optimized for such functionalities out of the box. - If you want high-performance element lookups use set's `has()` method. - If you want an easy access of elements, easy element swapping and binary search of elements ( i.e accessing elements located in any part of a data structure; front, middle and back ), an array is your best bet. - If you want to prevent memory leaks in your data structure, use a set. - If you want flexibility and more feature, use an array. *** ##Conclusion One thing to note is that both objects shine within their range of specialities, even though the array has the efficacy of becoming the Swiss army knife of data structure; set's high-performance methods and intolerance for duplicates makes it stand out. Your decision on which to use will be highly influenced by the nature of the project you're working on. It's not really about which is better, but which is right for the job.
david4473
859,138
Learn functional, learn systems and learn object oriented
If you are a junior or intermediate, you should consider picking up projects or languages that help...
0
2021-10-11T19:00:43
https://dev.to/ebuckley/learn-functional-learn-systems-and-learn-object-oriented-10ie
systems, javascript, clojure, go
If you are a junior or intermediate, you should consider picking up projects or languages that help you round out the functional, object oriented and systems trio of languages. After 10 years of putting text files into compilers and interpreters, here is my take. ## why we should do it We should do it as a form of practice and professional development, even if we do not need it right now at our day job. Learning new languages increases enjoyment of coding, and brings fresh ideas to your craft. ## Why functional? Understanding pure functions is the reason you should pick up a functional programming language. This is one of the key ideas to writing test-able code. If you are new to functional programing, the experience can be a very steep hill climb. The reward in my opinion is the most impactful on the quality of code regardless of what other language you are using. My recommended functional language is Clojure. The reason I picked up a lisp is that the language has a very strong foundation and an incredibly simple syntax. I believe it to be a 'stripped down' feel which helps you really zero in on the core concepts in the paradigm. ## Why OOP? An object oriented language is important to learn because it introduces a very familiar vernacular for modelling process, business and the world. Simply put, it helps you communicate about software projects with other people. On a technical level. Object oriented languages fill a pretty broad source of options, so it could be a difficult choice. For my own purpose I have chosen to become an expert in golang. Although a purist may rightly claim that Go is not OOP, I believe it fills that same niche. You can use the `interface` and `struct` features to achieve polymorphism. The different object oriented languages I have used in my day job to this stage is vast. As you pick up the next one, it becomes faster and easier to get productive in the language.but The concepts boil down to just a few differences, and they all share many strong similarities in how they approach state, assignment and memory management. PHP, Python, Java, Golang, c#, Javascript being the ones I have personally used. # Why Systems? A systems language completes the set of types of languages you should learn. You will learn to appreciate the high level of abstraction you can achieve with different languages. When I started working on projects with systems languages I also learned a greater depth about my operating system, infrastructure and memory management. As programmers, our job is to create magic inside the box. When you understand the lower level of the abstraction you get to unveil the magic for what it is. Having a solid basis in a systems language will help you unveil problems when garbage collection or operating systems features are causing issues with performance Right now I think the go language is my favorite pick for a systems language. It allows you to access the OS API with relative ease, and you get the compiled language which opens up really interesting project opportunities in the ops, sysadmin and SRE space. In addition to this, it is worthwhile understanding the power of manual memory management. You can get this with languages like, c, rust or d. I wouldn't go so far as to say it is the most important concept to learn, but it can give you magical super powers when you really need code to perform in a reliable and fast way. # What it means to have a solid base In an average coders career, you will learn a lot of different languages. Practicing learning languages opens up opportunities for picking up the most interesting projects. It broadens the array of problems you can solve. Hones your craft no matter what tool you are using to write that next best project. It matters not so much which language you pick, but try to target the variety of niches which let you bring in the good ideas from the other platform. What are your three picks under the FP, OOP and systems category?
ebuckley
859,358
[Part 2] A proactive approach to handling application errors
NOTE: You will need to create a sentry account for this tutorial. This is the second part in a 3...
14,906
2021-10-11T08:30:43
https://dev.to/wednesdaysol/part-2-a-proactive-approach-to-handling-application-errors-34bo
devops, javascript, webdev, react
<b>NOTE: You will need to create a sentry account for this tutorial.</b> This is the second part in a 3 part series on how to proactively handle errors in your applications across the stack. Issues on the front-end are more easily noticeable. In a lot of applications this is beautifully handled by having an error boundary. I have seen people create Error Boundaries that react differently to different kind of errors and provide a really good experience even in the face of an error. While this certainly helps calm the user down in the spur of the moment, having the ability to proactively be informed about these issues would be a blessing. This allows us to root cause and fix issues before they escalate into a PR problem. Sentry is an Error Monitoring and reporting solution that integrates well with frontend applications. This tutorial assumes that you are familiar with - [React](https://reactjs.org/) - [Error Boundaries in React](https://reactjs.org/docs/error-boundaries.html) In this tutorial we will - Create an account with Sentry - Integrate sentry into the application - Add support for source-maps - Test your integration and source maps ## Create an account with Sentry ### Step 1 Go to [https://sentry.io/](https://sentry.io/) and click on <b>GET STARTED</b> ![615d7ad09c9ba70eb9614511_Screen Shot 2021-08-24 at 3.59.09 PM](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qtffbv8nhgaodimpbdxq.png) ### Step 2 Add in your details and click <b>CREATE YOUR ACCOUNT</b> ![615d7ae9fbb2462c0fdffdd0_Screen Shot 2021-08-24 at 3.59.37 PM](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8tknng7w7xvqn1xbcxmz.png) ### Step 3 You will be redirected to the onboarding screen as shown below. Click on <b>I'm Ready</b> ![615d7b03fbb24685aae0002d_Screen Shot 2021-08-24 at 4.00.48 PM](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/snri2cydr178z5zk7bfc.png) ### Step 4 Select <b>React</b>, choose a suitable project name and click <b>Create Project</b> ![615d7b1b9fac621964c9fa88_Screen Shot 2021-08-24 at 4.21.02 PM](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/88atvohngz3npewitwqg.png) ### Step 5 You will be redirected to the <b>"Configure React"</b> page. Copy the dsn value. ![615d7b2e7d1e6496fab8e961_Screen Shot 2021-08-24 at 4.48.45 PM](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/17bjl0wts8visgr03vgb.png) ## Integrate sentry into the application We will now send sentry errors from the ErrorBoundary component ### Step 1 Clone this repo: [https://github.com/wednesday-solutions/react-template](https://github.com/wednesday-solutions/react-template) ### Step 2 Install the dependencies ``` yarn add @sentry/react @sentry/tracing ``` ### Step 3 Copy the dsn from the 1st project and add it in the .env.development and in the .env file ``` SENTRY_DSN=XYZ ``` ### Step 4 Create a sentry service. ``` vi app/services/sentry.js ``` Copy the snippet below in the `sentry.js` file ``` import * as Sentry from '@sentry/react'; import { Integrations } from "@sentry/tracing"; import { isLocal } from '@utils'; export function initSentry () { if (!isLocal()) { Sentry.init({ environment: process.env.ENVIRONMENT_NAME, dsn: process.env.SENTRY_DSN, integrations: [new Integrations.BrowserTracing()], tracesSampleRate: 1.0 }); } } ``` ### Step 5 Add the snippet below in the `app/app.js` ``` ... import { initSentry } from '@services/sentry'; ... initSentry(); // Chunked polyfill for browsers without Intl support if (!window.Intl) { ... } else { ... } ... ``` In order to test your integration locally, temporarily make a small change in the if condition of the initSentry function ``` ... if (true || !isLocal() { ... } ... ``` ### Step 6 ``` yarn start ``` Go to [http://localhost:3000](http://localhost:3000) and you open the developer tools. Go to the network tab. You should see an outgoing request to the sentry servers. ![615d7b53bb23c23d2a1e9cfb_Screen Shot 2021-08-24 at 6.26.50 PM](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8r7gqggvzzqkjo5pnppd.png) <b>Congratulations! Sentry has been setup.</b> ### Step 7 Now let's integrate sentry in the ErrorBoundary so that we can report back to sentry whenever there is an error. Copy this snippet into the `app/services/sentry.js` ``` ... export function reportError(error, errorInfo) { Sentry.captureException(error, { extra: errorInfo }, ); } ``` Copy this snippet into the `app/components/ErrorBoundary/index.js` ``` import { reportError } from '@services/sentry'; ... componentDidCatch(error, errorInfo) { console.error(error, errorInfo); reportError(error, errorInfo); } ... ``` ### Step 8 Test your integration by adding this snippet in the `app/app.js` file ``` ... } else { render(translationMessages); } const a = null; console.log(a.abc); // Install ServiceWorker and AppCache in the end since ... ``` Navigate to your project on sentry and you should see something like this ![615d7b6a5cb29b2c8500c77f_Screen Shot 2021-08-24 at 6.42.07 PM](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9u2di1u8t6711bn7g1gv.png) You should also be able to filter by environment ![615d7bb371e7cb8452e44f96_Screen Shot 2021-08-24 at 6.42.19 PM](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8jsxkv8tswoy2hqifq4o.png) ## Add support for source-maps ### Step 1 Click on the event to get some more details about it ![615d7bc6cc65f1810bb1d65b_Screen Shot 2021-08-24 at 6.44.06 PM](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2xg2smol5rm7pg608aio.png) You will notice that it is not very easy to track where the exact issue is. We will now integrate source-maps so that we get the complete stack trace. ### Step 2 In sentry go to Settings → Developer Settings → New Internal Integration ![615d7bdecc65f1569fb1d66b_Screen Shot 2021-08-24 at 6.53.07 PM (1)](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8ahhbf9tjnq20nmqz8gy.png) Add the name of the integration like <b>Github Action Release</b> Setup permissions. We will need <b>Admin</b> for Release and <b>Read</b> for Organization Click <b>Save</b> and <b>copy the token</b> ### Step 3 Go to your repository on Github → Settings → Secrets → New Repository Secret name it <b>SENTRY_AUTH_TOKEN</b> and paste the token in the value field. Similarly add <b>SENTRY_ORG</b> and <b>SENTRY_PROJECT</b> to the secrets. Those these are not really secrets it will allow you to reuse this workflow as is in all your projects. ### Step 4 Now we will write the sentry workflow that will handle deployment to <b>AWS S3</b> and upload the source-maps. Create an S3 bucket and enable static website hosting ![615d7c03235c81be34c308b8_Screen Shot 2021-08-25 at 4.04.13 PM](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6lkqos15cr0ko7z8l2pz.png) ![615d7c0ed33fbad3f94e9307_Screen Shot 2021-08-25 at 4.03.59 PM](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j3ntjmfmo8cw57z5j0a3.png) Create a new workflow for uploading the source-maps ``` rm .github/workflows/cd.yml vi .github/workflows/sentry.yml ``` Copy the following snippet. in the `sentry.yml` file ``` name: Upload Source Maps on: push: branches: - master jobs: upload-source-maps: runs-on: ubuntu-latest env: SENTRY_RELEASE: ${{ github.sha }} SOURCE_DIR: './build/' AWS_REGION: ${{ secrets.AWS_REGION }} AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} PATHS: '/*' AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }} steps: - uses: actions/checkout@v2 - name: Install dependencies run: yarn - name: Build run: export SENTRY_RELEASE=${{ github.sha }} && yarn build - name: AWS Deploy #5 uses: jakejarvis/s3-sync-action@v0.5.0 with: args: --acl public-read --follow-symlink - name: Set env BRANCH run: echo "BRANCH=$(echo $GITHUB_REF | cut -d'/' -f 3)" >> $GITHUB_ENV - name: Get environment_name id: vars run: | if [[ $BRANCH == 'master' ]]; then echo ::set-output name=environment_name::production else echo ::set-output name=environment_name::development fi - name: Create Sentry release uses: getsentry/action-release@v1 env: SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }} SENTRY_ORG: ${{ secrets.SENTRY_ORG }} SENTRY_PROJECT: ${{ secrets.SENTRY_PROJECT }} with: environment: ${{steps.vars.outputs.environment_name}} sourcemaps: './build' set_commits: 'auto' ``` 1. Add environment variables for <b>AWS_REGION</b>, <b>AWS_ACCESS_KEY_ID</b>, <b>AWS_SECRET_ACCESS_KEY</b>, <b>AWS_S3_BUCKET</b> 2. Set the <b>environment_name</b> to either <b>production</b> or <b>development</b> based on the branch. Update the `initSentry` function `services/sentry.js` as follows ``` export function initSentry() { ... Sentry.init({ release: process.env.SENTRY_RELEASE, environment: process.env.ENVIRONMENT_NAME, dsn: process.env.SENTRY_DSN, integrations: [new Integrations.BrowserTracing()], tracesSampleRate: 1.0 }); ... } ``` ## Testing your integration and source-maps Paste this snippet in your `app/containers/App/index.js` ``` import React, { useEffect } from 'react'; ... export function App({location}) { useEffect(() => { if (process.env.NODE_ENV !== 'test') { const a = null; // eslint-disable-next-line console.log(a.a300); } }, []); ... } ... ``` Commit your code and push it. Wait for the sentry action to complete. Navigate to the URL where the website is hosted. ![615d7c4ed33fba334c4e9327_Screen Shot 2021-08-25 at 4.46.17 PM](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/es6tryrnciurcbprszyu.png) You'll be greeted with a <b>Sorry. Something went wrong!</b> screen. Don't worry, this means your <b>ErrorBoundary</b> has been invoked. Go to sentry and take a look at the issue. ![615d7c5dd80aaa80e5106e4d_Screen Shot 2021-08-25 at 5.55.45 PM](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2xx5vi98o5ly2082if28.png) <b>We now have support for release mapping!</b> ## Adding support for suspected commits Add a <b>Github Integration</b> Go to Settings → Integrations → Github ![615d7c72bb23c2de9e1eb0b3_Screen Shot 2021-08-27 at 6.43.39 PM](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r2l79b90dzg8ry4t2dqd.png) Choose the right organisation → Only select repositories → Install ![615d7c8466fdd1b0d641f99f_Screen Shot 2021-08-27 at 6.45.15 PM](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kkkg9fffu3x6tudenibm.png) Reload the react application to fire a new event. You should now start seeing <b>Suspect commits</b> which help attribute the issue to the commit that introduced it. Filter all issues by releases, and assign issues to the right team member! ![615d7c9309607e213f8970fb_Screen Shot 2021-08-25 at 5.55.36 PM](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/izi77dlqmk6jazhxbr97.png) ## Where to go from here You now have the ability to proactively handle errors on the backend. Use the sendMessage function to capture and report errors to slack. Pull only the relevant logs using the request-id as a filter. I hope you enjoyed reading this article as much as I enjoyed writing it. If this peaked your interest stay tuned for the next article in the series where I will take you through how to proactively report frontend errors using Sentry. If you have any questions or comments, please join the forum discussion below.
wednesdaysol
859,410
Social Media Icons Hover Animation
A post by valeshgopal
0
2021-10-11T09:43:20
https://dev.to/valeshdev/social-media-icons-hover-animation-4e70
codepen
{% codepen https://codepen.io/valeshgopal/pen/dyzyLgZ %}
valeshdev
859,471
Let's Showwcase - A platform to connect, build, show, and grow
What does Growth mean to you as a developer, programmer, or creator? To me, growth is an...
0
2021-10-11T13:15:39
https://blog.greenroots.info/lets-showwcase-a-platform-to-connect-build-show-and-grow
programming, showdev, codenewbie, beginners
What does `Growth` mean to you as a developer, programmer, or creator? To me, growth is an ever-increasing metric to identify that you are doing great in, - Acquiring knowledge. - Connecting to like-minded, discussing ideas. - Building communities. - Getting opportunities. - Creating products. - Providing services. - Generating revenues. - Fueling your passion. Showcasing your creativity and talent. You can not do these alone. You need support. You need to connect to a `network` of people, tools, technologies that helps you with your growth. In the modern era of learning, sharing, and building, there is no scarcity of platforms that help you with many of the requirements we discussed. But, the practical problem is, there are too many of them. We as developers must engage ourselves in problem-solving, building products, getting them to the market. It doesn't justify our energy, effort, and talent to lurk around multiple platforms to push our `Growth Quotient` forward. We need one platform that allows us to do it all from one place! # Meet Showwcase [Showwcase](https://showwcase.com/) is a network built for developers, coders, programmers. It is a platform to help you connect, learn & share, showcase your passion, create opportunities, and getting paid for the work you do best. ![Showwcase.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1633521754410/gCVqCpdAOR.png) We learn by reading and doing. But, we learn faster by observing, learning from the experiences of someone we admire. A meaningful connection helps you with that. Today, with many social networking platforms, You may be connected to someone, but there are no right avenues to utilize that connection. Showwcase aims to solve the problem of `meaningless` connections by providing ample ways to connect to the like-minded and share. Not only that, there are many other features to show your work, stay engaged in a community, create a stunning profile, and many more you can do. Let's learn about all the great feature offerings from `Showwcase`. ## Developer Portfolio ![2.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1633530616526/0eqVQow5C.png) A portfolio is a professional face of a developer. Usually, a good developer portfolio/profile gives an idea about, - Who are you? - How to contact you? What are your social footprints? - What skills do you have? - What's your professional history? - What are your credentials? - What kind of side projects have you done? - Do you have anything to showcase? Maybe a product you have worked on, ebook published, anything. `Showwcase` provides you the feature to create your developer profile with all that within a couple of minutes. You can capture all about you and your career, passion, work, shows under one roof. ![Profile](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xsaczqyfr66kb3sxps46.gif) It is a great way to let your future connections know about you. You can also decide what section of your profile to show or hide from the user settings. ## Showcasing your Passion(Work) ![3.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1633530629147/FSGaJl4vA.png) We all do varieties of work. You may be a blogger, YouTuber, side hustler building applications, a mentor helping people be successful, many more. Each of these streams of work needs the motivation to go on. One of the sources of motivation is that people see your work and recognize it. `Showwcase` encourages you to show your creation to others from the community. Bring your hard work as a public show to get feedback, find recognition. You can create shows using out-of-the-box templates to showcase products, side projects, videos, code snippets. You can also start from a blank template or even import your work from Medium, Dev blog, WordPress, TinyLetter, or anywhere else. ![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1633532330023/0CVn36bAM.png) The motto of Showwcase is to showcase what you love building, gather feedback, stay motivated. ## Build your Communities ![4.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1633530639696/agpO_4ekf.png) A community lets you find like-minded people and keep you informed about the happening in your areas of interest. `Showwcase` is growing with the number of communities. Join the communities of your interest. Learn from the discussions. Share ideas, knowledge and grow together. ![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1633532057801/bmpxEkLeF.png) You can also create a community of your own and ask people to be part of it. It is an excellent opportunity to build awareness of a skill, process, value by building a relevant community. ![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1633532130744/s21-ACdnc.png) ## Learn & Share ![5.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1633530650704/dsL3qxKn7.png) When you are on Showwcase, you can choose to follow people, communities you are interested in. Based on your interest, you will see the posts, shows as feed. Feed is a great way to pick and learn, involve in a discussion, appreciate hard work, and get involved. ![Learn](https://res.cloudinary.com/atapas/image/upload/v1633532607/demos/srSTAEpJme_lt0pob.gif) You can start a thread to share knowledge with people. If a thread is related to a particular topic/interest, you can post it for the related community. ![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1633532869027/5cDt1G5Yl.png) ## Creating Opportunities ![6.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1633530665602/jOWMo8KR25.png) `Showwcase` is a network built for developers. With the help of communities, your initial connections, your content always reaches other developers. You build the audience for your content by creating and sharing them consistently. Showwcase handles the content creation tools, payment gateways to monetize your content. You can choose to start putting your content behind a paywall. Create memberships, set the right price for your content, create subscriptions. While monetizing your content is a great idea, but your opportunities are not limited to it. Your showwcase profile speaks for you. It may bring you many job opportunities, mentoring opportunities, coaching & training opportunities. ## Build Connections ![7.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1633530680771/cjEA1pmeQ.png) In any networking platform, connection matters the most. `Showwcase` lets you build a meaningful connection with many. People see your work and know you for it. This nature makes Showwcase very different and productive from many other social networking platforms. Follow like-minded people. If you have worked with someone from your network before, you can request to add them to a `work with` relationship. It increases the visibility and growth for future connections. ![image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1633540073316/50bkt6mgS.png) # Great, So What's Next? Are you on `Showwcase` already? If so, great. I hope you agree with all the benefits of the platform we have discussed so far. If you are not part of the showwcase yet, please [join in](https://showwcase.com/). Please feel free to use the invite code `joinatapas398` to join. You can also request for [early access from here](https://form.typeform.com/to/VhRL8wV1?typeform-source=showwcase.typeform.com). <hr /> That's all for now. I hope you found this article informative and insightful. Let's connect. You can find me at, - [Showwcase](https://www.showwcase.com/atapas398) - [Twitter](https://twitter.com/tapasadhikary) - [YouTube](youtube.com/tapasadhikary) - [GitHub](https://github.com/atapas)
atapas
859,475
makeStyles is dead, long live makeStyles!
Material-ui decided to deprecate the hook API in v5 in favour of the styled API. I worked with them...
0
2021-10-11T10:54:20
https://dev.to/garronej/makestyles-is-dead-long-live-makestyles-i67
typescript, react, css
![tss-react](https://user-images.githubusercontent.com/6702424/134704429-83b2760d-0b4d-42e8-9c9a-f287a3353c13.gif) Material-ui decided to [deprecate the hook API](https://github.com/mui-org/material-ui/issues/26571#issuecomment-878641387) in v5 in favour of the styled API. I worked with them to provide an alternative under the form of a third party module for people who prefers JSS over styled. [Ref](https://github.com/garronej/tss-react/issues/3#issuecomment-879022879). [It's mentioned](https://mui.com/guides/migration-v4/#2-use-tss-react) in mui's migration guide from v4 to v5. It also features a, type-safe by design, version of [the `withStyles`](https://github.com/garronej/tss-react#withstyles) HOC. ![demo_withStyles](https://user-images.githubusercontent.com/6702424/136705025-dadfff08-7d9a-49f7-8696-533ca38ec38f.gif) Checkout [tss-react](https://github.com/garronej/tss-react)
garronej
859,489
Top 10 dev.to articles of the week😱.
Most popular articles published on the dev.to
14,897
2021-10-11T11:36:40
https://dev.to/ksengine/top-10-dev-to-articles-of-the-week-2opj
webdev, javascript, beginners, css
--- title: Top 10 dev.to articles of the week😱. published: true description: Most popular articles published on the dev.to cover_image: https://source.unsplash.com/featured/?coding tags: webdev,javascript,beginners,css series: Weekly dev.to top 10 --- DEV is a community of software developers getting together to help one another out. The software industry relies on collaboration and networked learning. They provide a place for that to happen. Here is the most popular articles published on this platform. ## #1 [![Image of post](https://res.cloudinary.com/practicaldev/image/fetch/s--VPn8w7BI--/c_imagga_scale,f_auto,fl_progressive,h_500,q_auto,w_1000/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/54pcvww3qy3ewjlcfuib.jpg)](https://dev.to/shantanu_jana/skeleton-screen-loading-animation-using-html-css-1ec3) {% post https://dev.to/shantanu_jana/skeleton-screen-loading-animation-using-html-css-1ec3 %} ## #2 [![Image of post](https://res.cloudinary.com/practicaldev/image/fetch/s--1NTR0FOn--/c_imagga_scale,f_auto,fl_progressive,h_500,q_auto,w_1000/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n8fxot7rrkosfbq7pm3w.png)](https://dev.to/comscience/3-tips-from-atomic-habits-that-helped-me-get-a-job-at-microsoft-56ih) {% post https://dev.to/comscience/3-tips-from-atomic-habits-that-helped-me-get-a-job-at-microsoft-56ih %} ## #3 [![Image of post](https://res.cloudinary.com/practicaldev/image/fetch/s--_v318rT4--/c_imagga_scale,f_auto,fl_progressive,h_500,q_auto,w_1000/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/px9iba7cbwwy7nxiozmy.png)](https://dev.to/aviyel/node-js-from-beginners-to-advance-31id) {% post https://dev.to/aviyel/node-js-from-beginners-to-advance-31id %} ## #4 [![Image of post](https://dev.to/social_previews/article/836320.png)](https://dev.to/stefirosca/5-free-coding-resources-that-helped-me-get-my-first-frontend-developer-job-4ak4) {% post https://dev.to/stefirosca/5-free-coding-resources-that-helped-me-get-my-first-frontend-developer-job-4ak4 %} ## #5 [![Image of post](https://res.cloudinary.com/practicaldev/image/fetch/s--l18BZdnJ--/c_imagga_scale,f_auto,fl_progressive,h_500,q_auto,w_1000/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ztvxlsud4r2tdb6cqf6x.jpg)](https://dev.to/damiisdandy/pagination-in-javascript-and-react-with-a-custom-usepagination-hook-1mgo) {% post https://dev.to/damiisdandy/pagination-in-javascript-and-react-with-a-custom-usepagination-hook-1mgo %} ## #6 [![Image of post](https://res.cloudinary.com/practicaldev/image/fetch/s--FwtTaOWi--/c_imagga_scale,f_auto,fl_progressive,h_500,q_auto,w_1000/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9fbzyzwborowp8j3k891.png)](https://dev.to/neer17/opinionated-guide-on-tweaking-vs-code-for-productivity-1o53) {% post https://dev.to/neer17/opinionated-guide-on-tweaking-vs-code-for-productivity-1o53 %} ## #7 [![Image of post](https://res.cloudinary.com/practicaldev/image/fetch/s--2ooeQHyz--/c_imagga_scale,f_auto,fl_progressive,h_500,q_auto,w_1000/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3ouen4ds2unq2dhbiwf8.jpeg)](https://dev.to/faisalpathan/why-to-use-map-over-object-in-js-306m) {% post https://dev.to/faisalpathan/why-to-use-map-over-object-in-js-306m %} ## #8 [![Image of post](https://res.cloudinary.com/practicaldev/image/fetch/s--Tw2vAlSi--/c_imagga_scale,f_auto,fl_progressive,h_500,q_auto,w_1000/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vv8782vlhtpepesrgvq9.png)](https://dev.to/ubahthebuilder/how-to-build-an-accordion-menu-using-html-css-and-javascript-3omb) {% post https://dev.to/ubahthebuilder/how-to-build-an-accordion-menu-using-html-css-and-javascript-3omb %} ## #9 [![Image of post](https://res.cloudinary.com/practicaldev/image/fetch/s--Q-xBhFBs--/c_imagga_scale,f_auto,fl_progressive,h_500,q_auto,w_1000/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j88boxbocm22jhebd4v9.jpg)](https://dev.to/shantanu_jana/create-a-simple-stopwatch-using-javascript-3eoo) {% post https://dev.to/shantanu_jana/create-a-simple-stopwatch-using-javascript-3eoo %} ## #10 [![Image of post](https://res.cloudinary.com/practicaldev/image/fetch/s--lbGXpTCh--/c_imagga_scale,f_auto,fl_progressive,h_500,q_auto,w_1000/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4bkbpfu0frtaafcsbvd1.jpg)](https://dev.to/coderamrin/build-10-css-projects-in-10-days-project-5-301b) {% post https://dev.to/coderamrin/build-10-css-projects-in-10-days-project-5-301b %} > Orginal authors of this articles are @shantanu_jana, @comscience, @pramit_armpit, @stefirosca, @damiisdandy, @neer17, @faisalpathan, ubahthebuilder, shantanu_jana, coderamrin Enjoy these articles. Follow me for more articles. Thanks 💖💖💖
ksengine
860,200
What HTML tag do you use for Sarcasm?
Ah yes, sarcasm, the pinnacle of human language. Life would be so incredibly dull without it. And...
0
2021-10-12T16:00:32
https://auroratide.com/posts/making-sarcastic-text
html, webdev
Ah yes, sarcasm, the _pinnacle_ of human language. Life would be so _incredibly_ dull without it. And yet, despite sarcasm's profound influence on both oral and written conversation, we don't have a way to denote it in text! I mean, at least in person you can roll your eyes or change your tone to indicate some witty derision. But text? It's just neutral words on a page. We can't even use code to properly mark something as sarcastic! **Or _can_ we?** * [Can't we use punctuation&#x2e2e;](#cant-we-use-punctuation) * [Beyond the period 🧐](#beyond-the-period) * [Huh, textual punctuation? &lt;/sarcasm&gt;](#huh-textual-punctuation-ltsarcasmgt) * [&lt;/sarcasm&gt; is official?!](#ltsarcasmgt-is-official) * [HTML Tags are like Knives](#html-tags-are-like-knives) * [Tagging Sarcasm with HTML](#tagging-sarcasm-with-html) * [The... &lt;i&gt; Tag?](#the-ltigt-tag) * [<em>Em</em>ulating Verbal Cues](#emulating-verbal-cues) * [But don't use &lt;q&gt;!](#but-dont-use-ltqgt) * [Sooo... where does this leave us?](#sooo-where-does-this-leave-us) ## Can't we use punctuation&#x2e2e; Do you enjoy lemonade on a hot summer day. I know I sure do? ...Is it just me, or is something _off_ about those two sentences&#x2e2e; Sorry, that was meant to be a rhetorical question! You could tell because I used the <b class="keyword">percontation point</b>, that backwards question mark thing. It was invented in the 1500s specifically for questions not meant to be answered. You don't really see it <del>a lot</del> <ins>ever</ins> nowadays though, as it fell out of favor a long time ago. <figure><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/by9b6rlfoacma6qhlf7a.png" alt="A backwards question mark" /><figcaption><p>The Percontation Point sure looks funky.</p></figcaption></figure> Anyways, those first couple sentences feel wrong because I used unexpected punctuation. In a way, our periods, exclamation points, and question marks convey _tone_, namely a neutral, excited, and questioning tone respectively. So if punctuation makes words sound exciting, how about punctuation for making something sound sarcastic? And people have tried that! Let me introduce you to... ![Four punctuation points in a row.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/efbyohr3ytxophafcsj2.png) 1. The <i lang="fr">point d'ironie</i> (1899) 2. The <i>irony point</i> (1966) 3. The <i lang="nl">ironieteken</i> (2007) 4. The [SarkMark](https://www.sarcmark.com/)<sup>TM</sup> (2010, and yes... it's even trademarked) And of course, none of these ever caught on. Looks like we're stuck with just three ways to end a sentence, `.`, `?`, or `!`. <b>Except...</b> ![What if I told you we have hundreds of punctuation marks?](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qy9vtnyybfc9j2qkyk8p.png) ## Beyond the period 🧐 The web has inspired written (typed?) language to adapt in fascinating ways, not the least of which is the advent of **emoji**. Text loses facial cues, so... let's just add faces to text! End a sentence with an emoji and suddenly the words have a voice 😊 Let's see how emoji changes one simple sentence... * That was a good joke 🤣 * That was a good joke 👏 * That was a good joke 🙃 The first two seem sincere in their praise, albeit in different ways. That last one, though, sounds a bit... _sarcastic_ 🤔 So in a way, emoji used this way can be thought of _like_ punctuation, giving sentences a very wide variety of tones you'd otherwise only be able to pick up in person. ![](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lmgig4lnonyf87rtc558.png) But maybe a yellow face isn't appropriate or possible where you want to make your snide comment. Is there a way to use _just text_ as punctuation? ## Huh, textual punctuation? &lt;/sarcasm&gt; Using text as punctuation may sound a bit silly at first, but people have (and still do) actually do this for sarcasm! Peruse the internet long enough and you might have seen people write sentences like this: > John Doe is a brilliant politician <mark>&lt;/sarcasm&gt;</mark> That [&lt;/sarcasm&gt;](https://www.urbandictionary.com/define.php?term=%3C%2Fsarcasm%3E) bit denotes sarcasm (clearly). And yes, I _did_ just link Urban Dictionary as a reference &lt;/sarcasm&gt;. <small>Nowadays, it's usually shortened to just `/s`.</small> So where did such a funny looking thing come from anyway? Well, it turns out to be a bit of a **code joke**! Websites are coded (in part) using a language called Hypertext Markup Language (<abbr>HTML</abbr>). HTML gives pages structure, determining whether a block of text is a paragraph, or a heading, or some other thing. This is done using <b class="keyword">tags</b>; for example, the bolded "code joke" from the earlier sentence uses the `<strong>` tag, which indicates it is an _important_ phrase. A web author would code it like this: ```html It is a <strong>code joke</strong>! ``` Every start tag is paired with an end tag, so the `</strong>` there indicates the end of the important text. And while `<sarcasm>` is _not_ a real HTML tag, people started using "&lt;/sarcasm&gt;" to indicate the end of a sarcastic phrase! _Wait wait wait_, did I say it wasn't a real tag? Let me correct myself real quick... <figure><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5nsm37wxx5qqjocj2aj6.gif" alt="A man quickly flips through a book." /><figcaption><p><a href="https://tenor.com/view/book-confusion-huh-what-read-gif-16432979">Book Confusion</a> by <a href="https://tenor.com">tenor.com</a></p></figcaption></figure> ## &lt;/sarcasm&gt; is official?! Funnily enough, `</sarcasm>` is in the _[official HTML rulebook](https://html.spec.whatwg.org)_! All the way down in [section 13.2.6.4.7](https://html.spec.whatwg.org/#parsing-main-inbody) is a little blurb telling browsers what to do if they encounter `</sarcasm>` in code: > [When handling a token with] <mark>An end tag whose tag name is "sarcasm"</mark>: Take a deep breath, then act as described in the "any other end tag" entry below. Ironically, the instruction itself is rather sarcastic. But perhaps disappointingly, this is saying there's nothing special about the sarcasm end tag, and it should be treated like everything else. In other words, it's just a jab at the historical use of the meme. And besides, this just the _end_ tag; the handbook has nothing for a start tag `<sarcasm>`, and every _real_ HTML element has a start tag. ![](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vg1rmeombu34r5igzbzs.png) Hmm, speaking of _real_ HTML elements... above we saw that `<strong>` was used to mark text as being very important, and yet it wasn't _named_ `<important>`. So, even if there's not an HTML element _named_ `<sarcasm>`, is it possible for there to be something we could _use_ for sarcasm? In other words, **is there a way to denote sarcasm... with code?!** ## HTML Tags are like Knives Did you know that some knives have holes in them? <figure><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gq0i9x90wtkyvtww4hz9.png" alt="A knife with three large holes on the blade." /><figcaption><p>A <a href="https://www.cutco.com/products/product.jsp?item=traditional-cheese-knife">Cutco Cheese Knife</a></p></figcaption></figure> Those holes aren't there to be trendy. It turns out a knife like this is designed specifically for cutting _cheese_. I dunno if you're like me and just cut cheese with a normal knife, but sometimes when I do that the cheese sticks to the blade. The holes on a cheese knife prevent that stickage, allowing for a cleaner, far more exquisite cut. Indeed, cooking is an advanced enough field that it has a specific knife for practically any conceivable purpose... ...kinda like HTML tags! The [official HTML rulebook](https://html.spec.whatwg.org) lists a myriad of tags, each with a specific purpose in mind. ![](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7dfcc7dyhhu4pv2f94bo.png) See, HTML tags impart meaning, or **semantics**, to text they annotate. Here are some examples: * `<strong>` indicates that the text is important, serious, or urgent. * `<blockquote>` is used for text that is a direct quote from somewhere else. * `<h1>` denotes the main title of the web page. As a side effect, it also usually makes the title visibly larger. Because each tag has specific semantics, it's possible to misuse them. Just as how I shouldn't use a butter knife to cut boned meat, a web author would not use an `<h1>` tag just to make some text big. The `<h1>` tag is _only_ for the page's title, so to make some different text big the author would need to use something else. So our question, really, is whether or not the glorious HTML handbook has a tag whose semantics include sarcasm! {% details <small>Why would we want this anyway?</small> %} <p><small><a href="https://users.soe.ucsc.edu/~maw/papers/kbs_2014_justo.pdf">Studies with computers</a> have shown that machines are not great at identifying sarcasm without significant help. Using code to annotate things like importance, emphasis, and sarcasm can help machines.</small></p><p><small>One practical use is with screen readers, which read web pages aloud to those who cannot see the page. Maybe there's a future where if text is marked as sarcastic, the screen reader can indicate as much by fluctuating its tone.</small></p> {% enddetails %} ## Tagging Sarcasm with HTML Let's say your friend told a pretty bad pun, and somehow your able to respond with HTML code. You want to say, "That was perfect." Problem is, that phrase on its own is very ambiguous. If only you could mark it somehow... ```html <SOMETHING?>That was perfect.</SOMETHING?> ``` ### The... &lt;i&gt; Tag? Well, there are dozens upon dozens of HTML tags, and _none_ of them are specifically for sarcasm. How perfect 🙃 The one tag that comes the closest is the `<i>` tag. It has many uses, one of them being used for text that is in an <q cite="https://html.spec.whatwg.org/#the-i-element">alternate voice or mood</q>. In a way, sarcasm _is_ a different mood from the rest of the text, so lacking an alternative... ```html <i class="sarcasm">That was perfect.</i> ``` <small>It is recommended to use `class` to specify why the `<i>` tag is being used, since the tag can be used for many different things.</small> There's one **very big problem** with this idea, though. By default, the `<i>` tag _italicizes_ text, and by convention, italic text is interpretted as verbal stress, not sarcasm. It is possible to undo the italics with Cascading Style Sheets (<abbr>CSS</abbr>), a web technology that lets authors adjust how things look. But doing that leaves us back at the beginning: "That was perfect," with no indication of sarcasm! <figure><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vstu0xzskb2zjgwyzaqc.gif" alt="A man dramatically rolls his eyes." /><figcaption><p><a href="https://tenor.com/view/house-md-gregory-house-ugh-whatever-eye-roll-gif-7380271">Gregory House</a> by <a href="https://tenor.com">tenor.com</a></p></figcaption></figure> Even though the text is _semantically_ tagged as being sarcastic, it does not outwardly present itself that way, which is arguably _less_ than useless. If only there were _some_ other way to make something appear sarcastic... <small>Textual semantics is usually tied with conversations about <b class="keyword">accessibility</b>, making pages work for abled and disabled people alike. By adding semantics to a page, it becomes more usable by people who cannot otherwise see the page.</small> <small>Thing is, accessibility goes both ways. If a screen reader announces text as a title to a non-sighted person, then that text better _appear_ like a title to sighted people as well!</small> ### <em>Em</em>ulating Verbal Cues Sarcasm gets lost in text due to losing certain cues, like body language and tone. We saw that emoji are kind of able to simulate facial language, so is there a way to simulate _tone_? In fact, there _is_, and I've been using it all throughout this post! All of the **_italic text_** hints at some kind of verbal stress. In HTML code, this is accomplished using the `<em>` tag, and according to the rulebook, its purpose is to <em>em</em>phasize words and phrases in order to change the overall meaning of the sentence. ```html This <em>emphasizes</em> the word. ``` For example, the following two sentences are exactly the same, but because a different word is emphasized in each, they imply different situations. * "I did not eat the _cookie_." - implying something else was eaten * "I did not _eat_ the cookie." - implying something else happened to the cookie ![](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/njzvzzx8s186f52e4iz1.png) So let's get back to our "That was perfect" phrase. Now equipped with the glorious power of `<em>`, we can do two things: * Add _word cues_, extra words that suggest a deeper meaning * Add _verbal stress_ to sharpen the phrase's sarcasm <blockquote style="font-style: normal;"> <p>Wow, <em>that</em> was <em>just</em> perfect.</p> </blockquote> ```html Wow, <em>that</em> was <em>just</em> perfect. ``` {% details <small>Can't I use <code>&lt;i&gt;</code> instead of <code>&lt;em&gt;</code> for italics?</small> %} <p><small>**No!**</small></p><p><small>Even though both tags result in italic text, they have different purposes. The <code>&lt;i&gt;</code> tag is for an alternate mood, which is why it is appropriate to use to tag an entire sentence as generally sarcastic. The <code>&lt;em&gt;</code> tag is used for verbal emphasis, which is why it is better for modifying key words to simulate speaking a phrase sarcastically.</small></p> {% enddetails %} ### But don't use &lt;q&gt;! There's one last HTML element worth talking about: `<q>`! It represents text that is <q cite="https://html.spec.whatwg.org/#the-q-element">quoted from another source</q>, and has the effect of automatically adding quotation marks. Sarcasm is often associated with so-called "air quotes", but the `<q>` element is _only_ for quoting some other thing. In fact, the HTML handbook goes so far to say <q cite="https://html.spec.whatwg.org/#the-q-element">it is inappropriate to use the q element for marking up sarcastic statements</q>! So yeah, don't use it 🙃 ## Sooo... where does this leave us? Text loses both verbal and non-verbal cues, making it harder to detect sarcasm. Oh no! But when there's a will, there's a way! * Recreate facial cues with emoji 🙃 * Wittily use textual convention to your advantage &lt;/sarcasm&gt; * In HTML code, tag a sentence as sarcastic with the `<i>` tag. * Or, strategically stress words with italics and `<em>`. So thousands of words later, I guess I should end by asking one last question. Was this ever _really_ a problem to begin with&#x2e2e; ## Resources * [How to show sarcasm in text](https://www.quickanddirtytips.com/education/grammar/how-to-show-sarcasm-in-text) - Sarah Peters * [Irony Punctuation](https://en.wikipedia.org/wiki/Irony_punctuation) - Wikipedia * [Egocentrism Over E-Mail](https://web-docs.stern.nyu.edu/pa/kruger_email_ego.pdf) - Kruger et al * [Extracting relevant knowledge for the detection of sarcasm](https://users.soe.ucsc.edu/~maw/papers/kbs_2014_justo.pdf) - Justo et al * [HTML Living Standard](https://html.spec.whatwg.org) * [The i element](https://html.spec.whatwg.org/#the-i-element) * [The em element](https://html.spec.whatwg.org/#the-em-element) * [The q element](https://html.spec.whatwg.org/#the-q-element)
auroratide
860,207
Chokoku CAD - A breakthrough CAD software on your browser
Chokoku CAD can create complex shapes with few and simple...
0
2021-10-11T21:46:56
https://dev.to/itta611/chokoku-cad-a-breakthrough-cad-software-on-your-browser-hic
javascript
Chokoku CAD can create complex shapes with few and simple controls. https://github.com/itta611/ChokokuCAD ![Sample](https://github.com/itta611/ChokokuCAD/raw/main/img/sample1.png)
itta611
860,324
7 Ways to Escape CSS Hell
Ever have this happen? lol Yeah, me too. Here are the 7 ways to completely center whatever you...
0
2021-10-12T00:23:51
https://dev.to/stackbit/7-ways-to-escape-css-hell-2ck6
css, webdev, beginners, tutorial
Ever have this happen? lol ![Funny meme about centering with css](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0r2aymznlg20174dp1kn.png) Yeah, me too. Here are the 7 ways to completely center whatever you want with CSS. ## `1. text-align: center;` This works only on `display: inline` & `display: inline-block` elements. Note also that it must be applied to the parent element. ![Centering images and text with text-align: center css](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z3s2cf83jcl80t1bvkvj.png) ## `2. margin: 0 auto;` This works only on `display: block` elements. And the element must have a width. You can also specify just `margin-left: auto` and `margin-right: auto` if you want margins on the top or bottom. ![Centering elements inside a div with margin: 0 auto css](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1qn1ukzmjdwlwlfx0z7q.png) ## `3. vertical-align: middle;` This works only on `display: inline` & `display: inline-block` elements. ![Centering elements inside a list with vertical-align: middle css](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/izjek6gpt9z9w44l54ov.png) ## `4. float: center;` lol (You cannot center floated elements.) ![It's impossible to both horizontally and vertically center an element with float: center css](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/maojxdh727cnyhqse4aq.png) ## `5. Centering absolute` When this comes up, use `transform` and `50%` coordinates to center an absolutely positioned element. ![Centering child divs of a position: relative parent div with css](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0ia7j5deozist5z6wr09.png) ## `6. Centering with flexbox` Flexbox has a bunch of different alignment classes that are always applied to the parent. This here will be completely centered within the box. ![Centering elements inside a div with flexbox css](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m2jejajip6lb2707df4s.png) ## `7. the one I forgot ;-)` ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ibn1xbzzst34jfvmvp9n.png)
rylandking
860,348
Avaliação comparativa entre gRPC e REST com .NET Core
Resultados preliminares do meu projeto para o mestrado da UFABC Título do...
0
2021-11-01T20:07:57
https://dev.to/bernardo/avaliacao-comparativa-entre-grpc-e-rest-com-net-core-4imh
dotnet, grpc, rest, benchmark
**Resultados preliminares do meu projeto para o mestrado da UFABC** ## **Título do Projeto** *Avaliação comparativa entre gRPC e REST com .NET Core* ## **Objetivo** *Comparar duas formas de execução de API para definir qual seria a mais performática quanto a tempo de execução em milissegundos dentro de um contexto específico.* ## **Motivação e Desafios de Pesquisa** *Com o avanço das tecnologias de sistemas distribuídos em computação em nuvem tem sido cada vez mais importante a utilização de padrões que ajudem com a performance de comunicação.* *O REST foi criado em 2000 como uma interface de programação de aplicações e ganhou espaço e apreço dos desenvolvedores, porém, o cenário na qual o REST foi criado ainda não tinha ampla utilização da computação em nuvem como no cenário atual.* *Sistemas distribuídos em diferentes tecnologias e arquiteturas computacionais têm demonstrado uma sobrecarga em alguns sistemas distribuídos complexos por limitações naturais de performance, com isso vejo a necessidade de buscar novas formas, arquiteturas e protocolos mais performáticos do que o padrão atual de REST para cenários complexos.* *No gRPC as mensagens são serializadas em Protobuf, um formato de mensagem binária que serializa rapidamente no cliente e no servidor.* *Uma mensagem gRPC é sempre menor do que uma mensagem JSON equivalente. Outro ponto importante é que o gRPC foi projetado para HTTP/2, uma revisão importante do HTTP que fornece benefícios significativos de desempenho.* *A tabela a seguir fornece uma comparação dos recursos entre as APIs gRPC e REST.* ![Comparativos](https://raw.githubusercontent.com/brbernardo/gRPCvsREST/main/Image/comparativo.png) Figura 1 - Tabela compartativa *entre as APIs gRPC e REST* *Desta forma para atender ao objetivo dessa avaliação irei comparar o tempo de execução de uma aplicação utilizando o protocolo gRPC e a mesma aplicação usando o padrão REST.* ## **Métricas** *Utilizarei média aritmética simples, representada por ∑ x i /n. Onde x i é o tempo de execução em milissegundos e o n será o número de iterações realizadas.* ## **Indicadores** *Para apoiar a análise dos resultados estarei utilizando três indicadores que serão referenciados a seguir em todo experimento como Error, StdDev e Median. Onde:* *Error: índice que calcula a metade do intervalo de confiança de 99,9%* *StdDev: índice que calcula o desvio padrão de todas as medições* *Median: índice que calcula o valor que separa a metade mais alta de todas as medições* ## **Carga de Trabalho** *Os serviços podem ser executados de forma independente para análise dos resultados pertinentes ao propósito de cada um.* *Para executar os projetos, em um prompt de comando, siga as instruções abaixo.* *Para execução do API REST* ```bash cd gRPCvsREST\RestAPI dotnet run -c Releas ``` *Para execução do Serviço gRPC* ```bash cd gRPCvsREST\GrpcService dotnet run -c Releas ``` *Para avaliação comparativa dos protocolos usando o Benchmark* ```bash cd gRPCvsREST\Client dotnet run -c Release ``` ## **Ferramentas, Plataformas, Software** *Para comparação entre o desempenho do gRPC (HTTP/2 com Protobuf) e do REST (HTTP com JSON) criei duas aplicações utilizando a linguagem .NET Core, uma para cada protocolo.As aplicações estao disponíveis no repositorio do GitHub [https://github.com/brbernardo/gRPCvsREST](https://github.com/brbernardo/gRPCvsREST)* *Para geração dos resultados foi utilizada a biblioteca de código aberto BenchmarkDotNet. A aplicações estão disponíveis no repositório do GitHub* *[https://github.com/brbernardo/BenchmarkDotNet](https://github.com/brbernardo/BenchmarkDotNet)* *Inclui uma cópia do [BenchmarkDotNet](https://github.com/brbernardo/BenchmarkDotNet) no repositório [gRPCvsREST](https://github.com/brbernardo/gRPCvsREST) para facilitar reproduções futuras.* *Para execução do experimento utilizarei um servidor virtual Ubuntu hospedado na plataforma Digitalocean com 1 vCPU AMD e 1 gigabyte de memória ram.* *Será necessário a instalação do servidor gRPC para execução do projeto. Os passos para a instalação estão descritos nesta URL https://docs.microsoft.com/pt-br/aspnet/core/tutorials/grpc/grpc-start?view=aspnetcore-3.1&tabs=visual-studio-code* ## **Parâmetros, Fatores e Níveis** *Utilizaremos o parâmetro de milissegundos em todo o experimento* *Os fatores relevantes a esse experimento são o consumo de memória ram e rom, taxa de carregamento do sistema e processos em execução* *Níveis do servidor no momento do teste* ```bash System load:  0.05 Usage of /:   10.2% of 24.06GB Memory usage: 23% Swap usage:   0% Processes:    99 ``` ## **Descrição dos Experimentos** *A execução de experimentos será através da execução dos métodos GrpcGetMenssage feito em gRPC e RestGetMessage feito em REST. É através da observação da aplicação de Benchmark com 100 e 200 iterações.* *As métricas e os indicadores podem variar e nos resultados apresentarei os números consolidados ao final da quantidade de iterações.* *Legenda dos resultados* - *Method: método usado na comparação* - *IterationCount: número de iterações* - *Mean: média aritmética de todas as medições* - *Error: metade do intervalo de confiança de 99,9%* - *StdDev: desvio padrão de todas as medições* - *Median: valor que separa a metade mais alta de todas as medições* - *1 us: 1 microssegundo (0.000001 segundo)* ## **Resultados** *Como os experimentos de avaliação de desempenho serão conduzidos. Haverá apenas um tipo de experimentos ou tipos diferentes? Tipos podem variar de acordo com os objetivos, como a duração, o tipo de carga de trabalho, etc.* *A tabela a seguir exibe os resultados da execução do Benchmark.* ![resultados](https://raw.githubusercontent.com/brbernardo/gRPCvsREST/main/Image/metricas%2Bind.png) Figura 2 - Tabela de resultados *O gráfico a seguir evidencia o ganho de desempenho considerável ao usar o gRPC.* ![Grafico de resultados](https://raw.githubusercontent.com/brbernardo/gRPCvsREST/main/Image/resultados.png) Figura 3 - Grafico de resultados ## **Considerações finais e análise** *No experimento o gRPC se demonstrou um pouco mais performático, 3 vezes mais em comparação com o Rest, em termos de tempo de execução.* *Foi observado um considerável aumento da complexidade de execução de carga de trabalho quando comparados. O gRPC possui uma complexidade maior por necessitar da criação do servidor de serviço.* *Os níveis de utilização do servidor se manteve equivalente na execução dos projetos se desconsiderarmos o consumo do servidor do serviço gRPC.*
bernardo
860,531
Power Plant Equipment Suppliers
Skv Energy Services Private Limited power plant equipment or spare suppliers provide Boiler Spare...
0
2021-10-12T06:41:05
https://dev.to/energyskv/power-plant-equipment-suppliers-28n9
beginners, startup, webdev
Skv Energy Services Private Limited power plant equipment or spare suppliers provide Boiler Spare parts , ash handling spares , fuel handling spares and hardware general items with the best quality you can check the Boiler Spare parts list and all other items on our website <p><a href="https://www.skvenergyservices.com/">Power Plant Equipment Suppliers</a></p>
energyskv
860,753
Advanced Javascript Functions
What is a Javascript functions A function is a block of organized,reusable code that is...
0
2021-10-12T09:49:26
https://dev.to/luxacademy/advanced-working-with-functions-d0b
javascript, beginners, webdev, tutorial
## What is a Javascript functions A function is a block of organized,reusable code that is used to perform a single,related action. ## Advanced Working with Functions Function basics include function declarations,passing parameters and function scope. check out this article that cover into to Javascript functions. [Javascript Functions](https://dev.to/luxacademy/javascript-functions-257f) In this article we are going to discuss the following: * The new function * Immediately invoked functions * closures * Arrow functions * This keyword * The call method * The apply method * The bind method * Default parameters * Rest parameters * Spread parameters ## The new function The new operator lets developers create an instance of a user-defined object type or of one of the built-in object types that has a constructor function. ```bash function Car(make, model, year) { this.make = make; this.model = model; this.year = year; } const car1 = new Car('VW', 'GTI', 2017); console.log(car1.make); // VW ``` ### Immediately Invoked Function Expression(IIFE) An IIFE Lets us group our code and have it work in isolation,independent of any other code. Invokes a function right away where its defined. This prevents functions and variables from polluting the global object. ```bash (function hello() { console.log('Hello World'); //Hello })(); ``` To make it a function expression, we assign it to a variable or use it in another expression. ### closures A closure is a feature in JavaScript where a function inner scope has access to the outer scope. In the example below closure help keep message within the scope and it can be accessed in the getMessage function. ```bash let greeting = (function () { let message = 'Hello'; let getMessage = function () { return message; }; return { getMessage: getMessage } })(); console.log(greeting.message); //Hello ``` ### Arrow functions Arrow functions were introduced ES6.Refers to anonymous functions with their own unique syntax.Simpler way to create a function. #### Why? * shorter syntax * this derives it value from enclosing [lexical scope](https://en.wikipedia.org/wiki/Scope_(computer_science)) ### Shortcomings. * Arrow functions don't have their own this value. * No argument object - we can't reference arguments ```bash let greet = () => { return 'Hello world'; } let message = greet(); console.log(message); //Hello World ``` If there is one parameter parenthesis are optional. ```bash let greet = name => 'Hello' + name; ``` ### This keyword Refers to the owner of the function we are executing So if it's a standard function,this refers to the global window object;otherwise it can refer to the object that a function is a method of. ```bash let message = { name: 'john', regularFunction(name) { console.log('Hello' + this.name) }, arrowFunction: () => console.log('Hi' + this.name) } message.regularFunction(); // Hello John message.arrowFunction();// Hi ``` ### The call method The call() allows for a function/method belonging to one object to be assigned and called for a different object. call() provides a new value of this to the function/method. With call(), you can write a method once and then inherit it in another object, without having to rewrite the method for the new object. ```bash let car1 = { brand: 'Vw', color: 'blue' } let car2 = { brand: 'Toyota', color: 'white' } let returnCarBrand = function () { console.log('Car brand is ' + this.brand) } returnCarBrand.call(car1); // Car brand is Vw returnCarBrand.call(car2); // Car brand is Toyota ``` ### The apply method The apply() method calls a function with a given this value, and arguments provided as an array. Same syntax as call difference is that call accepts an argument list, while apply accepts a single array of arguments. ```bash function bookTitle(name, author) { console.log(name + 'is written by ' + author); console.log(this); } bookTitle.apply(['HTML & CSS: Design and Build Web Sites', 'Jon Duckett']); ``` ### The bind method Allows to make a copy of a function and then change the value of this. ```bash let book = { author: 'Mary', getAuthor: function () { return this.author; } } let book2 = { author: 'John' }; let getAuthorcopy = book.getAuthor.bind(book2); console.log(getAuthorcopy()); // John ``` ### Default parameters Allow named parameters to be initialized with default values if no value or undefined is passed. ```bash function sayHi(message, name = 'John') { console.log(message + name) } ``` ### Rest parameters The rest parameter syntax allows a function to accept an indefinite number of arguments as an array. Rest parameters should always come after regular parameters. ```bash let sayHi = function greet(...names) { names.forEach(name => console.log('Hi ' + name)) } greet('Welcome', 'John', 'Mary', 'James') // Hi John // Hi Mary // Hi James ``` ### Spread Operator Allows an a function to take an array as an argument and spread out its elements so that they can be assigned to individual parameters ```bash function greet(user1, user2) { console.log('Hello' + user1 +' and ' + user2) } let names = ['John','Mary'] greet(...names); //Hello John and Mary ```
wanjema
860,773
Ten Powerful AI Chatbot Development Frameworks
Nowadays, the use of Chatbots has evolved, and now you can see them in use on any social media...
0
2021-10-12T10:34:38
https://dev.to/webocculttechnologies/ten-powerful-ai-chatbot-development-frameworks-50ip
ai, aidevelopment
Nowadays, the use of Chatbots has evolved, and now you can see them in use on any social media platform like Telegram, Hangouts, Facebook, Slack, or your website. Creating effective and powerful Customer Loyalty Management (CRM) takes a lot of effort and time. Chatbot helps you scale and balance your business cycle and maintain the CRM routine like a pro. Since AI Powers and Controls all this, it perceives and understands the language unequivocally and responds perfectly to the opposite person as if a living person is talking to you and instantly gathers all the information and data you need from your existing customers. Businesses receive a large number of customer inquiries every day. It becomes challenging to handle all these queries effortlessly. So here is an alternative. [AI chatbot development](https://www.weboccult.com/services/artificial-intelligence/) frameworks can save your day! For many businesses, chatbots have now become essential to providing smooth customer service and efficient global operations. According to recent research, it was predicted that chatbots would handle over 85% of consumer interactions by 2021. However, despite the multiple benefits of developing and using a chatbot for businesses, many available development frameworks can confuse and frustrate the new entrepreneur looking to design conversational UX. Several AI chatbot development frameworks are fighting for the best place with the latest updates and consistent releases. However, two factors help you determine whether a chatbot is worth the investment or not: one is increased efficiency, and the other is time-saving. We've listed the ten best AI chatbot frameworks to help you choose the best one for your business needs. Let's jump in! ##Microsoft Bot Framework## This Microsoft Bot Framework is a set of powerful tools, services, and SDKs that provide a solid foundation for developers to build and connect intelligent bots. This framework is ideal for developing enterprise-level conversational AI experiences. Microsoft Bot Framework connectors allow you to deploy chatbots to apps, websites, Microsoft Teams, Cortana, Facebook Messenger, Skype, and more. It has two main components: BotBuilder SDKs and Channel Connectors. Channel connectors allow you to connect the Chatbot to other messaging channels. In addition, the Bot Builder SDK account contains several templates and code samples that instantly help developers get started with chatbot development. ##Dialogue flow## Dialogflow is a conversation platform that lets you design and builds chatbots and other voice applications. It is powered by machine learning from Google and can connect users on major messaging channels like Facebook, mobile apps, Amazon Alexa, Google Assistant, Twitter, Messenger, etc. Dialogflow is the most beneficial tool for generating omnichannel chatbots with minimal coding. It runs on the Google Cloud platform and can be scaled to serve hundreds of millions of users. In addition, Dialogflow is user-friendly and supports over 20 languages ​​, and is probably the best framework for developing NLP-based applications. ##Wit.ai## Wit.ai is an open-source natural language processing API. It allows developers to create devices and apps that users can talk. for example, you can generate text or voice bots that humans can speak to with their favorite messaging platform. Wit is free for all commercial use. It is an NLP platform that allows developers to configure intents and entities. For example, developers can use the HTTP API to connect the wit.ai to your Chatbot or other apps. This Wit.ai provides the SDK in Python, Node.js, and Ruby. ##Amazon Lex## Amazon Lex is a framework for building conversational interfaces in any application using text and voice. Amazon Lex is the same technology that powers Amazon Alexa. Amazon Lex automatically scales as a fully managed service, and you don't have to worry about managing infrastructure. You can build, test, and deploy your chatbots immediately from the Amazon Lex console using this Amazon Lex. Amazon Lex bots can be published to messaging platforms such as Slack, Facebook Messenger, Twilio SMS, and Kik. In addition, Amazon Lex provides SDKs for iOS and Android to build bots for your mobile apps. ##BotMan## BotMan is another most successful chatbot development framework for PHP. In addition, there are different chatbot development tools for Python, Node Js, Java, C #. BotMan helps you publish your Chatbot to the following channels: Hangouts Chat, Facebook Messenger, Cisco Spark, Microsoft Bot Framework, HipChat, Telegram, Slack, WeChat, and Twilio. BotMan is the only PHP framework that helps developers build a chatbot using PHP. In addition, BotMan comes with a custom chat widget that you can directly use to add a BotMan powered Chatbot to your website. ##BotKit## Botkit is an open-source chatbot framework acquired by Microsoft and is considered the best developer tool for building chatbots, apps, and other custom integrations for major messaging platforms. It runs on a natural language processing engine from LUIS.ai and integrates open-source libraries. BotKit helps publish chatbots to messaging channels like Microsoft Teams, Slack, Cisco Jabber, Cisco Webex, Google Hangouts Chat, Facebook Messenger, Microsoft Bot Framework. Botkit additionally provides a web chat plugin that you can install on any website. Thus, BotKit can be used simply with all major NLP platforms. ##Rasa battery## Rasa Stack is an open-source conversation framework. It has two main components Rasa Core and Rasa NLU. First is the infrastructure layer for developers to create, enhance, and use better AI assistants. Second, it is a Machine Learning Framework for automated voice and text assistants. This framework provides the tools and infrastructure necessary for contextual, resilient, and high-performing assistants. The main advantage of using Rasa Stack is that the Chatbot can be deployed on your server, keeping everything in-house. In addition, it is a framework for dialogue management, natural language understanding, and integrations. ##Pandorabots## Pandorabots offers an online web service for building and deploying chatbots. This Pandorabots uses AIML (Artificial Intelligence Markup Language) for the chatbot conversation script. In addition to premium libraries and modules such as the Mitsuku module available for a monthly fee, Pandorabots also provides free and open-source libraries such as ALICE, Rosie, and Base Bot. It recently added additional functionality in which you can design your AIML. Integration of this Chatbot is possible on various apps, websites, and other messaging platforms, Cortana, etc. ##IBM Watson Assistant## IBM Watson Assistant is built on a billion-word Wikipedia neural network to create conversational interfaces in any device, application, or channel. It supports 13 languages ​​and provides an SDK for developers to build applications around Watson Assistant. It communicates quickly with bot users. You can use SDKs in Python, Java, iOS. It offers several free, standard, and premium plans. IBM Watson Assistant uses machine learning to respond to natural language input on websites, mobile devices, messaging apps, and bots. ##Chatfuel## Chatfuel is a modern chatbot-building platform for creating chatbots for Facebook Messenger. It is one of the most widely used open-source platforms for Facebook Messenger-based chatbots. So far, over 350,000 bots have been built with Chatfuel. One of the main advantages of using this platform is the simple editing toolsets that allow users to create chatbots without prior coding experience. Additional key features include integrated analytics, provision of multiple languages ​​, and plugins for integration into Twitter, Facebook, Google, Dropbox, Live Chat, etc. ##Conclusion## Since these chatbots arrived, they have helped businesses tremendously in lead generation, customer support, marketing, etc., emerging as an essential business tool. In addition, this tool lays the foundation for increased efficiency and improved customer experience. We have listed the Chatbot frameworks above that you can choose for your business. Of course, there isn't a perfect setting, and it depends on the needs, so you need to explore them all and understand what works best for your business. Or, if you are a developer, you may be involved in learning about Chatbot development. With that said, now, if you are planning to build a chatbot for your business or business, choosing a chatbot development framework that meets all of your business requirements is essential. Pick and choose the one that works best for you and your business.
webocculttechnologies
860,865
Statement of work vs scope of work
When your business reaches a new level, it may need to expand or even form a new team to complete...
0
2021-10-12T12:40:19
https://dev.to/maddevs/statement-of-work-vs-scope-of-work-6kg
management, webdev
When your business reaches a new level, it may need to expand or even form a new team to complete tasks on a large scale. If you give an assignment to the team without a detailed description, for example, "Make a delivery application". Briefly and concisely. Most likely, it will lead you to an unexpected result. But if you prefer reliability and high-quality performance, you will inevitably face the preparation of a statement of work and scope of work. At first glance, it seems that they contain the same thing. But there is a difference. And knowing this difference will make your projects great. Not again. But consistently great. [![IT Staff Augmentation](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/imy1963fm9c9hwhm8uut.png)](https://maddevs.io/approach/delivery-models/staff-augmentation/?utm_source=devto&utm_medium=sow) Let's take more in-depth on the statement of work. ##What is a statement of work The Statement of Work, or SOW, is a legal document, a binding contract between you and service providers, that describes what needs to be done to get the desired project. This document is typically composed before the start of work to define its goals, values, tasks, budget, deadlines and more. From our experience, we can say that usually, all the customers make the statement of work. However, at Mad Devs, we often assist in drafting this important document with advice and guidance. After all, some stages can be soberly assessed only by technical specialists who have performed similar tasks more than once. SOW bases on three pillars: * Business needs - to determine the current situation in the company's business and identify problems or opportunities that the new product can respond to. * Product description - to detail the characteristics and features of the project to get the desired result easily. * Strategic plan - to outline the steps to be taken to launch the project and desired deadlines. This document helps the specialists you have hired to regulate work expectations and milestones. The purpose of it is to eliminate controversy and miscommunication. [![The SWOT team: What it is and why we leverage it](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/45itmcnbjt6ihkrwt7qc.png)](https://maddevs.io/insights/blog/swot-team-for-software-development-projects/?utm_source=devto&utm_medium=sow) Writing a statement of work can seem like a daunting task, as it displays many points. But this is only at first glance. The structure will help in this. ##How to write SOW Some key tips help you draft a solid statement of work that your external team understands everything they need to know about the project. The most important sections are: **Purpose of project** Remember that in the beginning, there was the Word? Similarly, at the beginning of a new product or service, there is a stated purpose. The ultimate goal defines the entire path from idea to project launch. **Place of work** Mention place/site of project execution. Sometimes you also have project teams in different cities/countries. It is necessary to indicate it in the document. **Scope of work** List requirements, specifications, notices and design-related references. It is all about the project life-cycle. The scope of work can stand alone if your project doesn't need an SOW in some stages of development. **Milestones & Deliverables** Write down all measurable tasks in minor details and due dates according to the strategic plan. To avoid long time frames during which your project can lose its relevance. **Schedule** Indicate the timeline for the phase-out of the project. When it starts and when it needs to be finished. This is a negotiable section. It requires analytics from the team, which will be responsible for the execution of tasks. **Project cost & budget** Unlimited budgets are rare in business. Therefore, it is necessary to distribute the budget across all stages. This forms the scope of work, baseline plan and schedule of the project. [![Custom software development pricing strategies](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f98i5dxoupcq2rb8dtd3.png)](https://maddevs.io/customer-university/custom-software-development-pricing-strategies/?utm_source=devto&utm_medium=sow) **Standards & testing** Testing requirements, standards, and legal compliance should also be noted. For example, they are any terms and conditions, such as IP rights, proprietary info, non-disclosures, and confidentiality agreements. **Success/failure** This can be one of the most critical parts of an SOW. Write down your collaboration expectations - the final product, amount of budget allocation, communication during work. In other words, clarify what can be considered as project success. And what you would like to avoid. If you've read this far, then congratulations! You've collected all the Pokémons. Now you know all the points you need to specify to compose an accurate and unambiguous statement of work. But, as indicated above, one of the listed sections can be strong and independent and exist individually. Yes, it is the scope of work. And it's time to talk about it in more detail. [![Here is why IT projects are late and exceed budgets](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o0idtnrasvrbnjxxfqbk.png)](https://maddevs.io/customer-university/why-it-projects-are-late-and-exceed-budgets/?utm_source=devto&utm_medium=sow) ##What is the scope of work Scope of work is a kind of document stating what hired team will be doing during the project implementation. In other words, it is a guideline for understanding what needs to be done, the foundation of project planning. Scope of work agrees on the requirements for the project and identifies potential risks that could interfere with the workflow. If you do not prescribe it, you will most likely be faced with unexpected tasks that will steal your time and resources and get into your budget. For example, when a team sees all tasks sequentially, it can be caught early on that one task conflicts with another. And you can immediately adjust the schedule and deadlines. So you can run with the hare and hunt with the hounds. The working process and the final product will not suffer. [![Red flags in software development](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ghji5ymm2lmm5vmyhwm9.png)](https://maddevs.io/customer-university/red-flags-in-software-development/?utm_source=devto&utm_medium=sow) Does it seem that this is about the same as the SOW? Don't jump to conclusions. Let’s look at the structure. ##How to write scope of work To start writing the scope of work, you can imagine it like a roadmap that can guide a team through the process. And it is no matter what task you face, from a website redesign to building a new app or feature. The following sections generally make up a scope of work: **Project overview** Include brief information related to the project. It can be background, goals, problem statement and people who are involved in it. **Tasklist** Designate global tasks and subtasks. You can also split the work into phases. For example, you have a website redesign task. The first phase is "research and planning". The second phase is "prototyping and wireframe pages". Then it is "design and development". And the last but not least phase is "testing". **Deliverables & Timeline** Determine when the project will begin and will end, all the phases with working days. And don't forget to mention what achievements will be accomplished along the way. **Expected outcome** The result of the project must be the answer to the problem statement. Be specific here to establish clear expectations for contractors/clients. **Reports** You need to receive status reports, progress reports, and variance reports to track the project's progress. Define how you will get reports and when to expect them. Some of the scopes of work include a glossary of terms, an overview of acronyms, and additional reference material that can help reflect the complete picture of the project and impact the success of the execution. So, there are significantly fewer points, but more emphasis on tasks. [![Software Development Metrics](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/60hfi25rz568wca6q2tl.png)](https://maddevs.io/insights/blog/software-development-metrics/?utm_source=devto&utm_medium=sow) ##The bottom line When it comes to the statement of work (SOW) and the scope of work, it is always very easy to get confused. To understand this once and for all, you just need to remember the essence of these two types of documents. Statement of work is a legal contract to describe your business goals, services and results delivered by a project. At the same time, the scope of work gives the vision to arrange tasks and steps to complete the project. The scope of work can be a section within the SOW. Think of it as if you wanted to bake a chocolate cake and have cocoa powder and chocolate bars. You can use both ingredients. Or you can only add only cocoa powder to the dough. It all depends on what kind of cake you need. [![Delivery models in IT](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n7flctt4ieyurjhye4l7.png)](https://maddevs.io/approach/delivery-models/?utm_source=devto&utm_medium=sow) --- _Previously published at [maddevs.io/blog](https://maddevs.io/insights/blog/statement-of-work-vs-scope-of-work/?utm_source=devto&utm_medium=sow)._
maddevsio
860,878
Clean React-Redux, Redux-Saga client-side solution.
Hello there! On my previous post MERN client-side I have talked about a MERN client application with...
0
2021-10-13T07:31:33
https://dev.to/alanst32/clean-react-redux-redux-saga-client-side-solution-np
react, redux, typescript, saga
Hello there! On my previous post [MERN client-side](https://dev.to/alanst32/a-mern-stack-update-for-2021-part-b-client-side-24o6) I have talked about a MERN client application with React, Typescript and the use of RxJs as an observables solution to collect and subscribe api response data. Then I got on my mind, "How about Redux? Is it still worth?" As we know [Redux](https://redux.js.org) is a state manager container for JavaScript apps. It is a robust framework that allows you to have state control and information in all components/containers of your application. It works like a flow with a single store, it can be used in any environment like react, angular 1/2, vanilla etc. And to support the use of Redux in React we also have [React-Redux](https://react-redux.js.org). A library that allow us to keep Redux solution up to date with React modern approaches. Through React Hooks from React-Redux we can access and control the store. Is without saying that without React-Redux I would not recommend the use of Redux in applications today. On that thought I have decided to create a different MERN client-side solution with React and Typescript but not this this time with Redux and React-Redux. And to make the application even more robust I am using [Redux-Saga](https://redux-saga.js.org), which is basically a Redux side effect manager. Saga enables approaches to take parallel executions, task concurrency, task cancellation and more. You can also control threads with normal Redux actions. Comparing with React-Thunk, Saga it may seems complex at first but is a powerful solution. (But that's a talk for another post right ;) ) >EDIT: As you may know, there are modern approaches to implement Redux on your application today. I have decided to write/keep this post in the "old" way in order o not cover multiple topics in the same article, with that on mind we could easily discuss new Saga/Redux approach in another article since the base is covered. Now, without stretching too far, let's code! ### 1 - Client Project. As this application is a similar solution from my previous post, I won't focus on the Node, Typescript and Webpack configuration. But exclusively on the Redux state flow between the CRUD operations. #### Project structure ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9pznwy1bufi4c7854xh7.png) ### 2 - Redux Flow. As we know for our Redux flow we need to set: - Redux Actions - Redux Reducer - Redux Selector - Redux Store And to work with the asynchronous calls to back end I am going to use a middleware layer. - Redux Saga layer #### Actions _src/redux/actions/studentActions.ts_ ```typescript import StudentModel, { StudentRequest } from "@models/studentModel"; // TYPES export enum STUDENT_ACTIONS { GET_STUDENTS_REQUEST = 'GET_STUDENTS_REQUEST', GET_STUDENTS_SUCCESS = 'GET_STUDENTS_SUCCESS', GET_STUDENTS_ERROR = 'GET_STUDENTS_ERROR', INSERT_STUDENT_REQUEST = 'INSERT_STUDENT_REQUEST', INSERT_STUDENT_SUCCESS = 'INSERT_STUDENT_SUCCESS', INSERT_STUDENT_ERROR = 'INSERT_STUDENT_ERROR', UPDATE_STUDENT_REQUEST = 'UPDATE_STUDENT_REQUEST', UPDATE_STUDENT_SUCCESS = 'UPDATE_STUDENT_SUCCESS', UPDATE_STUDENT_ERROR = 'UPDATE_STUDENT_ERROR', DELETE_STUDENT_REQUEST = 'DELETE_STUDENT_REQUEST', DELETE_STUDENT_SUCCESS = 'DELETE_STUDENT_SUCCESS', DELETE_STUDENT_ERROR = 'DELETE_STUDENT_ERROR', ADD_SKILLS_REQUEST = 'ADD_SKILLS_REQUEST', ADD_SKILLS_SUCCESS = 'ADD_SKILLS_SUCCESS', ADD_SKILLS_ERROR = 'ADD_SKILLS_ERROR', }; interface LoadingState { isLoading: boolean, } interface CommonErrorPayload { error?: { message: string, type: string, }, } // ACTION RETURN TYPES export interface GetStudentsRequest { type: typeof STUDENT_ACTIONS.GET_STUDENTS_REQUEST; args: StudentRequest, }; export interface GetStudentsSuccess { type: typeof STUDENT_ACTIONS.GET_STUDENTS_SUCCESS; payload: StudentModel[], }; export interface GetStudentsError { type: typeof STUDENT_ACTIONS.GET_STUDENTS_ERROR; payload: CommonErrorPayload, }; export interface InsertStudentRequest { type: typeof STUDENT_ACTIONS.INSERT_STUDENT_REQUEST; args: StudentModel, } export interface InsertStudentSuccess { type: typeof STUDENT_ACTIONS.INSERT_STUDENT_SUCCESS, }; export interface InsertStudentError { type: typeof STUDENT_ACTIONS.INSERT_STUDENT_ERROR; payload: CommonErrorPayload, }; export interface UpdateStudentRequest { type: typeof STUDENT_ACTIONS.UPDATE_STUDENT_REQUEST; args: StudentModel, }; export interface UpdateStudentSuccess { type: typeof STUDENT_ACTIONS.UPDATE_STUDENT_SUCCESS, }; export interface UpdateStudentError { type: typeof STUDENT_ACTIONS.UPDATE_STUDENT_ERROR; payload: CommonErrorPayload, }; export interface DeleteStudentRequest { type: typeof STUDENT_ACTIONS.DELETE_STUDENT_REQUEST; args: string[], }; export interface DeleteStudentSuccess { type: typeof STUDENT_ACTIONS.DELETE_STUDENT_SUCCESS, }; export interface DeleteStudentError { type: typeof STUDENT_ACTIONS.DELETE_STUDENT_ERROR; payload: CommonErrorPayload, }; // ACTIONS export const getStudentsRequest = (args: StudentRequest): GetStudentsRequest => ({ type: STUDENT_ACTIONS.GET_STUDENTS_REQUEST, args, }); export const getStudentsSuccess = (payload: StudentModel[]): GetStudentsSuccess => ({ type: STUDENT_ACTIONS.GET_STUDENTS_SUCCESS, payload, }); export const getStudentsError = (payload: CommonErrorPayload): GetStudentsError => ({ type: STUDENT_ACTIONS.GET_STUDENTS_ERROR, payload, }); export const insertStudentRequest = (args: StudentModel): InsertStudentRequest => ({ type: STUDENT_ACTIONS.INSERT_STUDENT_REQUEST, args, }); export const insertStudentSuccess = (): InsertStudentSuccess => ({ type: STUDENT_ACTIONS.INSERT_STUDENT_SUCCESS, }); export const insertStudentError = (payload: CommonErrorPayload): InsertStudentError => ({ type: STUDENT_ACTIONS.INSERT_STUDENT_ERROR, payload, }); export const updateStudentRequest = (args: StudentModel): UpdateStudentRequest => ({ type: STUDENT_ACTIONS.UPDATE_STUDENT_REQUEST, args, }); export const updateStudentSuccess = (): UpdateStudentSuccess => ({ type: STUDENT_ACTIONS.UPDATE_STUDENT_SUCCESS, }); export const updateStudentError = (payload: CommonErrorPayload): UpdateStudentError => ({ type: STUDENT_ACTIONS.UPDATE_STUDENT_ERROR, payload, }); export const deleteStudentRequest = (args: string[]): DeleteStudentRequest => ({ type: STUDENT_ACTIONS.DELETE_STUDENT_REQUEST, args, }); export const deleteStudentSuccess = (): DeleteStudentSuccess => ({ type: STUDENT_ACTIONS.DELETE_STUDENT_SUCCESS, }); export const deleteStudentError = (payload: CommonErrorPayload): DeleteStudentError => ({ type: STUDENT_ACTIONS.DELETE_STUDENT_ERROR, payload, }); ``` ###### Understanding the code. No mystery here. On a redux flow we need to set which actions will be part of the state control, and for each CRUD operation I have set a state of REQUEST, SUCCESS and ERROR result. Which you will understand the reason why following below. One interesting point here is since I am coding in Typescript I can benefit of Enum and Types usage to make our code clearer and more organised. #### Reducer _src/redux/reducer/studentReducer.ts_ ```typescript import { STUDENT_ACTIONS } from "redux/actions/studentActions"; const initialState = { isGetStudentsLoading: false, data: [], getStudentsError: null, isInsertStudentLoading: false, insertStudentError: null, isUdpateStudentLoading: false, updateStudentError: null, isDeleteStudentLoading: false, deleteStudentError: null, }; export default (state = initialState, action) => { switch(action.type) { case STUDENT_ACTIONS.GET_STUDENTS_REQUEST: return { ...state, isGetStudentsLoading: true, getStudentsError: null, }; case STUDENT_ACTIONS.GET_STUDENTS_SUCCESS: return { ...state, isGetStudentsLoading: false, data: action.payload, getStudentsError: null, }; case STUDENT_ACTIONS.GET_STUDENTS_ERROR: return { ...state, isGetStudentsLoading: false, data: [], getStudentsError: action.payload.error, }; // INSERT case STUDENT_ACTIONS.INSERT_STUDENT_REQUEST: return { ...state, isInsertStudentLoading: true, insertStudentError: null, }; case STUDENT_ACTIONS.INSERT_STUDENT_ERROR: return { ...state, isInsertStudentLoading: false, insertStudentError: action.payload.error, }; // UPDATE case STUDENT_ACTIONS.UPDATE_STUDENT_REQUEST: return { ...state, isUdpateStudentLoading: true, updateStudentError: null, }; case STUDENT_ACTIONS.UPDATE_STUDENT_ERROR: return { ...state, isUdpateStudentLoading: false, updateStudentError: action.payload.error, }; // DELETE case STUDENT_ACTIONS.DELETE_STUDENT_REQUEST: return { ...state, isDeleteStudentLoading: true, deleteStudentError: null, }; case STUDENT_ACTIONS.DELETE_STUDENT_ERROR: return { ...state, isDeleteStudentLoading: false, deleteStudentError: action.payload.error, }; default: return { ...initialState, } } } ``` _src/redux/reducer/rootReducer.ts_ ```typescript import { combineReducers } from "redux"; import studentReducer from "./studentReducer"; const rootReducer = combineReducers({ entities: combineReducers({ student: studentReducer, }), }); export type AppState = ReturnType<typeof rootReducer>; export default rootReducer; ``` ###### Understanding the code. Reducers are functions that takes the current state and an action as argument, and return a new state result. In other words, (state, action) => newState. And in the code above I am setting how the Student state model is going to be according to each action received. As you can see the whole state is not being overwritten, but just the necessary attributes according to the action. This application only has one reducer, but in most of the cases you will break down your reducers in different classes. To wrap them together we have the _rootReducer_ class. Which basically combines all the reducers in the state. #### Selector In simple words, a "selector" is a function that accepts the state as an argument and returns a piece of data that you desire from the store. But of course it has more finesse than that, it is an efficient way to keep the store at minimal and is not computed unless one of its arguments changes. _src/redux/selector/studentSelector.ts_ ```typescript import { get } from 'lodash'; import { createSelector } from 'reselect'; import { AppState } from '@redux/reducer/rootReducer'; const entity = 'entities.student'; const getStudentsLoadingState = (state: AppState) => get(state, `${entity}.isGetStudentsLoading`, false); const getStudentsState = (state: AppState) => get(state, `${entity}.data`, []); const getStudentsErrorState = (state: AppState) => get(state, `${entity}.getStudentsError`); export const isGetStudentsLoading = createSelector(getStudentsLoadingState, (isLoading) => isLoading); export const getStudents = createSelector(getStudentsState, (students) => students); export const getStudentsError = createSelector(getStudentsErrorState, (error) => error); const insertStudentLoadingState = (state: AppState) => get(state, `${entity}.isInsertStudentLoading`, false); const insertStudentErrorState = (state: AppState) => get(state, `${entity}.insertStudentError`); export const isInsertStudentLoading = createSelector(insertStudentLoadingState, (isLoading) => isLoading); export const insertStudentError = createSelector(insertStudentErrorState, (error) => error); const updateStudentLoadingState = (state: AppState) => get(state, `${entity}.isUdpateStudentLoading`, false); const updateStudentErrorState = (state: AppState) => get(state, `${entity}.updateStudentError`); export const isUpdateStudentLoading = createSelector(updateStudentLoadingState, (isLoading) => isLoading); export const updateStudentError = createSelector(updateStudentErrorState, (error) => error); const deleteStudentLoadingState = (state: AppState) => get(state, `${entity}.isDeleteStudentLoading`, false); const deleteStudentErrorState = (state: AppState) => get(state, `${entity}.deleteStudentError`); export const isDeleteStudentLoading = createSelector(deleteStudentLoadingState, (isLoading) => isLoading); export const deleteStudentError = createSelector(deleteStudentErrorState, (error) => error); const isAddSkillsLoadingState = (state: AppState) => get(state, `${entity}.isAddSkillsLoading`, false); const addSkillErrorState = (state: AppState) => get(state, `${entity}.addSkillsError`); export const isAddSkillsLoading = createSelector(isAddSkillsLoadingState, (isLoading) => isLoading); export const addSkillsError = createSelector(addSkillErrorState, (error) => error); ``` ###### Understanding the code. With the selector concept on mind, we can take from the code above is that we are returning the desire part of the store we need according to the function created. For instance in _getStudentsLoadingState_ I don't need to return the whole store to the caller, but only the flag that indicates whether the students are being loaded instead. #### Store The Redux store brings together the state, actions and reducers to the application. Is an immutable object tree that holds the current application state. Is through the store we will access the state info and dispatch actions to update its state information. Redux can have only a single store in your application. _src/redux/store/store.ts_ ```typescript import { createStore, applyMiddleware } from 'redux'; import createSagaMiddleware from '@redux-saga/core'; import { composeWithDevTools } from 'redux-devtools-extension'; import rootReducer from '../reducer/rootReducer'; import logger from 'redux-logger'; import { rootSaga } from '@redux/saga/rootSaga'; const initialState = {}; const sagaMiddleware = createSagaMiddleware(); const store = createStore(rootReducer, initialState, composeWithDevTools(applyMiddleware(sagaMiddleware, logger))); sagaMiddleware.run(rootSaga) export default store; ``` ###### Understanding the code. For Store creation, it is required to set the Reducer or the Reducers combined and the initial state of the application. And if you are using a middleware like I am, the middleware is also required to be set into the store. In this case is the class _rootSaga_ which I am describing below. #### Saga According to Saga website: >In redux-saga, Sagas are implemented using Generator functions. To express the Saga logic, we yield plain JavaScript Objects from the Generator. We call those Objects Effects. ...You can view Effects like instructions to the middleware to perform some operation (e.g., invoke some asynchronous function, dispatch an action to the store, etc.). With Saga we can instruct the middleware to fetch or dispatch data according to an action for example. But of course is more complex than that, but don't worry I will break down and explain the code below into pieces. With Saga I can set the application to dispatch or fetch APIS according to the action received. _src/redux/saga/studentSaga.ts_ ```typescript import { all, call, put, takeLatest, takeLeading } from "redux-saga/effects"; import StudentModel, { StudentRequest } from '@models/studentModel'; import { formatDate } from '@utils/dateUtils'; import { get } from 'lodash'; import axios from 'axios'; import { isEmpty } from 'lodash'; import { deleteStudentError, getStudentsError, getStudentsRequest, getStudentsSuccess, insertStudentError, STUDENT_ACTIONS, updateStudentError } from "@redux/actions/studentActions"; // AXIOS const baseUrl = 'http://localhost:3000'; const headers = { 'Content-Type': 'application/json', mode: 'cors', credentials: 'include' }; const axiosClient = axios; axiosClient.defaults.baseURL = baseUrl; axiosClient.defaults.headers = headers; const getStudentsAsync = (body: StudentRequest) => { return axiosClient.post<StudentModel[]>( '/student/list', body ); } function* getStudentsSaga(action) { try { const args = get(action, 'args', {}) const response = yield call(getStudentsAsync, args); yield put(getStudentsSuccess(response.data)); } catch(ex: any) { const error = { type: ex.message, // something else can be configured here message: ex.message, }; yield put(getStudentsError({error})); } } const insertStudentsAsync = async (body: StudentModel) => { return axiosClient.post( '/student', body ) } function* insertStudentSaga(action) { try { const studentModel = get(action, 'args'); if (studentModel == null) { throw new Error('Request is null'); } yield call(insertStudentsAsync, studentModel); const getAction = { type: STUDENT_ACTIONS.GET_STUDENTS_REQUEST, args: {}, }; yield call(getStudentsSaga, getAction); } catch(ex: any) { const error = { type: ex.message, // something else can be configured here message: ex.message, }; yield put(insertStudentError({error})); } }; const updateStudentAsync = async (body: StudentModel) => { return axiosClient.put( '/student', body ); }; /** * * @param action {type, payload: StudentModel} */ function* updateStudentSaga(action) { try { const studentModel = get(action, 'args'); if (studentModel == null) { throw new Error('Request is null'); }; yield call(updateStudentAsync, studentModel); const getStudentRequestAction = getStudentsRequest({}); yield call(getStudentsSaga, getStudentRequestAction); } catch(ex: any) { const error = { type: ex.message, // something else can be configured here message: ex.message, }; yield put(updateStudentError({error})); } }; const deleteStudentsAsync = async (ids: string[]) => { return axiosClient.post( '/student/inactive', {ids} ); }; /** * * @param action {type, payload: string[]} */ function* deleteStudentSaga(action) { try { const ids = get(action, 'args'); if (isEmpty(ids)) { throw new Error('Request is null'); }; yield call(deleteStudentsAsync, ids); const getStudentRequestAction = getStudentsRequest({}); yield call(getStudentsSaga, getStudentRequestAction); } catch(ex: any) { const error = { type: ex.message, // something else can be configured here message: ex.message, }; yield put(deleteStudentError({error})); } }; function* studentSaga() { yield all([ takeLatest(STUDENT_ACTIONS.GET_STUDENTS_REQUEST, getStudentsSaga), takeLeading(STUDENT_ACTIONS.INSERT_STUDENT_REQUEST, insertStudentSaga), takeLeading(STUDENT_ACTIONS.UPDATE_STUDENT_REQUEST, updateStudentSaga), takeLeading(STUDENT_ACTIONS.DELETE_STUDENT_REQUEST, deleteStudentSaga), ]); } export default studentSaga; ``` ###### Understanding the code. Let's break into pieces here: ###### 1 - Exported function _studentSaga()_. To put it simple, I am telling SAGA to wait for an action and then to perform or call a function. For instance when _GET_STUDENTS_REQUEST_ is dispatched by Redux, I am telling SAGA to call _getStudentsSaga_ method. But in order to achieve that I have to use the SAGA API, in specific the methods: - <b>takeLatest</b>: Forks a saga on each action dispatched to the store that matches the pattern. And automatically cancels any previous saga task started previously if it's still running. In other words, if _GET_STUDENTS_REQUEST_ is dispatched multiple times, SAGA will cancel the previous fetch and create a new one. - <b>takeLeading</b>: The difference here is that after spawning a task once, it blocks until spawned saga completes and then starts to listen for a pattern again. - <b>yieldAll</b>: Creates an Effect that instructs Saga to run multiple Effects in parallel and wait for all of them to complete. Here we set our actions to the attached Saga fork method to run in parallel in the application. ###### 2 - Updating the Store with SAGA_. Now that the (action/methods) are attached to Saga effects, we can proceed to the creation of effects in order to call APIS or update the Redux Store. ###### 3 - getStudentsSaga()_ method. More SAGA API is used here: - <b>yield call</b>: Creates an Effect that calls the function attached with args as arguments. In this case, the function called is an Axios API POST that returns a Promise. And since is a Promise, Saga suspends the generator until the Promise is resolved with response value, if the Promise is rejected an error is thrown inside the Generator. - <b>yield put</b>: Here, I am setting the store with the new Student list data, by creating an Effect that instructs Saga to schedule an action to the store. This dispatch may not be immediate since other tasks might lie ahead in the saga task queue or still be in progress. You can, however expects that the store will be updated with the new state value. The rest of the class is more of the same flow, I operate the CRUD methods accordingly to the logic and use the same Saga effects necessary to do it. But Saga offers way more possibilities, don't forget to check it out its [API reference](https://redux-saga.js.org/docs/api/) for more options. ###### 4 rootSaga. By this time you might have been wondering, "Where is the rootSaga specified on the Store?". Below we have the _rootSaga_ class, which follows the same principle as _rootReducer_. Here we combines all Saga classes created on the application. _src/redux/saga/rootSaga.ts_ ```typescript import { all, fork } from "redux-saga/effects"; import studentSaga from "./studentSaga"; export function* rootSaga() { yield all([fork(studentSaga)]); }; ``` ### 3 - Hook up Redux with React. Now that all redux flow is set, is time to hoop up with React Components, to do that we just need to attach the Redux Store as a provider to the application. _src/index.tsx_ ```typescript import * as React from "react"; import * as ReactDOM from "react-dom"; import App from 'App'; import { Provider } from 'react-redux'; import store from "@redux/store/store"; ReactDOM.render( <Provider store={store}> <App/> </Provider>, document.getElementById('root') ); ``` ### 4 - Use of Redux on Components. For last, we now are able to consume state and dispatch actions from/to Redux, at first we will dispatch an action to tell Redux and Saga to fetch students data. _<b>Note:<b/> For the purpose of this article and to focus on Redux I have shortened the code in areas not related to Redux. However, if would be able to check the whole code, you can check tis Git Repository, the link is by the end of this post._ ###### Fetching data. _src/components/home/index.tsx_ ```typescript import React, { useEffect, useState } from "react"; import _ from 'lodash'; import StudentModel, { StudentRequest } from "@models/studentModel"; import StudentForm from "@app/studentForm"; import StudentTable from "@app/studentTable"; import { useDispatch } from "react-redux"; import { createStyles, makeStyles } from '@mui/styles'; import { Theme } from '@mui/material'; import { getStudentsRequest } from "@redux/actions/studentActions"; const useStyles = makeStyles((theme: Theme) => createStyles({...}), ); export default function Home() { const classes = useStyles(); const dispatch = useDispatch(); const emptyStudentModel: StudentModel = { _id: '', firstName: '', lastName: '', country: '', dateOfBirth: '', skills: [] }; useEffect(() => { const args: StudentRequest = { name: '', skills: [], }; dispatch(getStudentsRequest(args)); }, []); return ( <div className={classes.home}> <StudentForm></StudentForm> <StudentTable></StudentTable> </div> ); } ``` ###### Understanding the code. With the new updates on React and React-Redux framework we can now use specific hooks on functional components to manage our state with Redux. On the code above through the hook _useEffect_ an action is dispatched to fetch Students data. - <b>useDispatch</b>: This hooks replicates the old _mapDispatchToProps_ method, which is to set dispatch actions to the redux store. And since the code is in typescript, we can take the advantages of passing actions that are already mapped by interfaces. But underneath what is happening is the same as: ```typescript dispatch({ type: 'GET_STUDENTS_REQUEST', args: { name: '', skills: [] } }) ``` ###### Saving and reloading state data. Now that the data is loaded, we can proceed with the rest of CRUD operations. _src/components/studentForm/index.tsx_ ```typescript import { Button, TextField, Theme } from '@mui/material'; import { createStyles, makeStyles } from '@mui/styles'; import React, { useState } from "react"; import { Image, Jumbotron } from "react-bootstrap"; import logo from '@assets/svg/logo.svg'; import StudentModel from "@models/studentModel"; import { useSelector } from "react-redux"; import { isEmpty } from 'lodash'; import { getStudents } from "@redux/selector/studentSelector"; import { insertStudentRequest } from "@redux/actions/studentActions"; import { useDispatch } from "react-redux"; const useStyles = makeStyles((theme: Theme) => createStyles({ {...} }), ); function JumbotronHeader(props) { const classes = useStyles(); const { totalStudents } = props; return ( <Jumbotron .../> ); } export default function StudentForm(props) { const students = useSelector(getStudents); const dispatch = useDispatch(); const classes = useStyles(); const [firstName, setFirstName ] = useState(''); const [lastName, setLastName] = useState(''); const [country, setCountry] = useState(''); const [dateOfBirth, setDateOfBirth] = useState(''); const totalStudents = isEmpty(students) ? 0 : students.length; async function insertStudentAsync() { const request: StudentModel = { firstName, lastName, country, dateOfBirth, skills: [] }; dispatch(insertStudentRequest(request)); } return ( <div className={classes.header}> <JumbotronHeader totalStudents={students.length}/> <form> // Form Components {...} <Button id="insertBtn" onClick={() => insertStudentAsync()}> Insert </Button> </form> </div> ); } ``` ###### Highlights What is important to here is when the button is clicked a Redux Action is dispatched by _useDispatch_ hook, to insert student data on database and also to refresh the student list afterwards. _src/components/studentTable/index.tsx_ ```typescript import React, { useEffect, useState } from "react"; import StudentModel from "@models/studentModel"; import { isEmpty } from 'lodash'; import { getStudents, isGetStudentsLoading } from "@redux/selector/studentSelector"; import { deleteStudentRequest, updateStudentRequest } from "@redux/actions/studentActions"; import { useDispatch, useSelector } from "react-redux"; import { shadows } from '@mui/system'; import { createStyles, makeStyles } from '@mui/styles'; import {...} from '@mui/material'; import { KeyboardArrowDown, KeyboardArrowUp } from '@mui/icons-material' const useStyles = makeStyles((theme: Theme) => createStyles({ {...} }), ); function getSkillsSummary(skills: string[]) { {...} } function SkillsDialog(props: { openDialog: boolean, handleSave, handleClose, }) { const { openDialog, handleSave, handleClose } = props; const classes = useStyles(); const [open, setOpen] = useState(false); const [inputText, setInputText] = useState(''); useEffect(() => { setOpen(openDialog) }, [props]); return ( <Dialog open={open} onClose={handleClose}> {...} </Dialog> ) } function Row( props: { student: StudentModel, handleCheck } ) { const classes = useStyles(); const dispatch = useDispatch(); const { student, handleCheck } = props; const [open, setOpen] = useState(false); const [openDialog, setOpenDialog] = useState(false); const openSkillsDialog = () => {...}; const closeSkillsDialog = () => {...}; async function saveSkillsAsync(newSkill: string) { const skills = student.skills; skills.push(newSkill); const request: StudentModel = { _id: student._id, firstName: student.firstName, lastName: student.lastName, country: student.country, dateOfBirth: student.dateOfBirth, skills: skills }; dispatch(updateStudentRequest(request)); closeSkillsDialog(); } return ( <React.Fragment> <TableRow ...> {...} </TableRow> <TableRow> <TableCell ...> <Collapse ...> <Box className={classes.innerBox}> <Typography ...> <Table ...> <TableBody> <Button...> {student.skills.map((skill) => ( <TableRow key={skill}> <TableCell ...> </TableRow> ))} <SkillsDialog openDialog={openDialog} handleClose={closeSkillsDialog} handleSave={saveSkillsAsync} /> </TableBody> </Table> </Box> </Collapse> </TableCell> </TableRow> </React.Fragment> ); } export default function StudentTable() { const dispatch = useDispatch(); const students: StudentModel[] = useSelector(getStudents); const isLoading: boolean = useSelector(isGetStudentsLoading); const [selectedAll, setSelectedAll] = useState(false); const [studentList, setStudentList] = useState<StudentModel[]>([]); useEffect(() => { setStudentList(students); }, [students]); useEffect(() => { {...} }, [studentList]); const handleCheck = (event, id) => { {...} } const handleSelectAll = (event) => { {...} } async function deleteStudentsAsync() { const filter: string[] = studentList .filter(s => s.checked === true) .map(x => x._id || ''); if (!isEmpty(filter)) { dispatch(deleteStudentRequest(filter)); }; } const LoadingCustom = () => {...} return ( <TableContainer component={Paper}> { isLoading && ( <LoadingCustom /> ) } {!isLoading && ( <Table aria-label="collapsible table"> <TableHead> <TableRow> <TableCell> <Checkbox ... /> </TableCell> <TableCell> <Button variant="contained" color="primary" onClick={() => deleteStudentsAsync()}> Delete </Button> </TableCell> <TableCell>{...}</TableCell> </TableRow> </TableHead> <TableBody> {studentList.map((row) => { return ( <Row .../> ); })} </TableBody> </Table> )} </TableContainer> ); } ``` ###### Highlights - <b>useSelector</b>: Similar to useDispatch this hook replicates _mapStateToProps_ redux old method. Allows you to extract data from the Redux store state, using a selector function. In our example I am loading students list data from store. As for the rest of CRUD operations I continue to use _useDispatch_ to perform the actions necessary. ### Final Considerations and GIT. With the new behaviour of functional components creation in React. React-Redux hooks extends Redux lifetime. Otherwise I would not recommend using Redux instead of RxJS for example. Furthermore, using SAGA as middleware make the application even more robust, which allows us to control the effects of asynchronous calls through the system. If you have stayed until the end, thank you very much. And please let me know your thoughts about the usage of Redux on current present. You can check the whole code of the project on its git repository: [MERN-CLIENT-REDUX](https://github.com/alanst32/mern-client-redux). See ya.
alanst32
861,200
Improving Backend Performance Part 1/3: Lazy Loading in Vaadin Apps
If you have a table or data grid with, say, more than a few hundred rows in it, you should be using...
14,995
2021-10-12T17:06:59
https://dzone.com/articles/improving-backend-performance-part-1-lazy-loading-in-vaadin-apps
java, vaadin
If you have a table or data grid with, say, more than a few hundred rows in it, you should be using lazy loading. This is especially true in the case of Vaadin's `Grid` component which makes it very easy to show data from an array or collection of POJOs. In this article, I'll show you how easy it is to take advantage of Spring Boot data to easily implement lazy loading in Vaadin Flow applications. You can find a video version of this topic if you prefer: {% youtube uOqmR_kJcvU %} Understanding the Example Application ------------------------------------- Let's start with a previous Vaadin Flow application that I developed from scratch in a [previous article](https://dzone.com/articles/realistic-test-data-generation-for-java-apps). In short, we have a [Spring Boot](https://spring.io/projects/spring-boot) application that connects to a [MariaDB](https://mariadb.com) instance using Hibernate/JPA to read data about books. There is a JPA Entity called `Book`, a Spring repository called `BookRepository`, and a service class called `BookService`. This is a common pattern you might find in the industry. The web user interface (UI) is implemented with [Vaadin Flow](https://vaadin.com/flow) in a class called `BooksView`. Understanding the Problem to Solve ---------------------------------- Let's say the application has been in production for some time and there are 50.000 books in the database. With this rather modest number of rows in a table, we can easily run out of memory with the default JVM configuration of a Spring Boot application. If we use the `findAll()` method to get all the books from the database and show them in a `Grid`, every time a view is requested, a new copy of the data is kept in memory. With 50.000 books in the example application, this means that once there are two views opened in a browser (two users or one user with two tabs showing the view), we'll get an out-of-memory error. Even if we increased the JVM memory to zillions of bytes, we'd find that loading the books takes too long for a good user experience. Any time we think a table could contain more than, say, 100 rows, we should stop using the `findAll()` method and consider the lazy loading technique. Lazy loading allows us to query only part of the data set from a database. A screen will rarely be able to show 50.000 rows, so it's a waste of resources to load all the 50.000 books when we can show only a few. Fixing the Backend ------------------ In Spring Data, the `findAll()` method is overloaded with a version that allows us to specify a slice or "page" of data. JPA uses this information to build the underlying SQL query using the LIMIT and OFFSET clauses. Here's the change we need in the service class: ```Java @Service public class BookService { private final BookRepository repository; public BookService(BookRepository repository) { this.repository = repository; } public Stream<Book> findAll(int page, int pageSize) { return repository.findAll(PageRequest.of(page, pageSize)).stream(); } } ``` Fixing the Grid --------------- The `Grid` component overloads the method `setItems()` with a version that allows us to pass the information of the page of the `Grid` that is currently visualized in the browser according to the position of the scroll bar. The change is simple: ```Java @Route("") public class BooksView extends VerticalLayout { public BooksView(BookService service) { var grid = new Grid<Book>(); ... grid.setItems(query -> service.findAll(query.getPage(), query.getPageSize())); ... } } ``` Testing the Solution -------------------- With these simple changes, we can try the application and check that it certainly loads the data much faster and that we can open many tabs without crashing the application. We could also enable SQL query login by adding the following to the **application.properties** file: ``` spring.jpa.show-sql=true ``` If we scroll through the data in the browser, we should see the queries in the server's log.
alejandro_du
882,800
Memory Management in Java | Heap vs Stack
In Java, a variable can either hold a primitive type or a reference of an object. This holds true for...
0
2021-10-31T11:41:08
https://dev.to/wasinaseer/memory-management-in-java-heap-vs-stack-3i63
heap, stack, memory, java
In Java, a variable can either hold a primitive type or a reference of an object. This holds true for all cases. Primitive types are stored in stack and objects are stored in heap. To store objects, it needs to maintain their structure. ![Variable holds either a primitive type or the reference of an object ](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/53k0grjdlezurvapcl5n.png) We’ve always heard that a variable can be passed by value or passed by reference. But, essentially, in Java, there is no such thing as pass-by-reference. It always passes variables by value. But, you need to see what the variable holds. If it holds primitive type then we call it to pass by value or in the case of the reference of an object, we call it to pass by reference. However, it always passes the copy of whatever it holds. Remember that whenever a variable is passed, its copy will be passed. And, if the variable contains the pointer of an object then the copy will also contain the same pointer. So, if we change anything in the referred object, it will also reflect all variables that hold the same pointer. ##Final Variables and Const Correctness Final variables in java are not essentially final. I know we cannot change the value assigned to the variable but if you assign a reference to an object. Then, we can change the referred object without changing the reference which the final variable holds. This is against the concept of const correctness. Const correctness is basically the concept that if you pass a final variable to a function and the function is able to modify the state of the referred object. Then, the language lacks const correctness.
wasinaseer
861,222
How to Survive as Programmer ?
Hello People, I am Ganesh currently pursuing graduation in computer science engineering during my...
0
2021-10-12T17:38:44
https://dev.to/devgancode/how-to-survive-as-programmer--57d6
Hello People, I am Ganesh currently pursuing graduation in computer science engineering during my programing journey I questioning my self How to survive as programmer? because programming is not for everyone some people like to code or someone not but as computer engineer you have to code and in this entire process I found answer. If you want to survive as programmer then you must explore yourself in different tech of domains like web development software development applications and designing part also if you explore in various tech, then at the some point you can find your interest that actually help you for development and your career. Then you can easily survive as programmer Developer Designer whatever you want to be...!!
devgancode
861,242
Python - Input and Output
Subscribe to our Youtube Channel To Learn Free Python Course and More Input and Output In this...
0
2021-10-12T18:24:21
https://dev.to/introschool/python-input-and-output-3amc
python, tutorial, beginners, programming
[Subscribe to our Youtube Channel To Learn Free Python Course and More](https://www.youtube.com/channel/UCQ8FDc6mi2BxxUSz8zcsy-A) **Input and Output** In this section, we will learn about how to take input and give output in Python. Till now we were writing static programs, it means that we were not taking any input from the user. But in the real world, A developer often interacts with the user, in order to get data or show the user some result. For example, you are making a program which takes a name as an input from the user and shows a greeting message with the user’s name on the screen. For this program, you need to know two things. First, how to take input from the user and second is how to show output. Python provides lots of built-in functions. We will discuss Functions in detail later in the course. But for now, Functions are a reusable unit of code that is designed to perform single or related actions. You have already seen a **print()** function that is used to show output. Similarly, there is built-in function **input()** for taking input from the user. How to Take Input from the User **How to Take Input from the User** We will use Python’s built-in function **input()** for taking input from the user. Let’s see how it works. Input takes a string as an optional parameter. Usually it's a description of what input you want to take from the user. See the below example. ``` # get the name from a user name = input('Enter your name : ') Enter your name : ``` When you try to run this program the following thing will happen: - When you execute input() function, the program will stop until the user provides the required input. - Once user enters the name it will save it in the variable **name.** **How To Show Output** Now we have the user input that we can show on the input. To show input on the screen we use Python’s built-in function print() is used - In print function, you can pass multiple objects in the print functions ``` # print() function print(2, 'helo', 3, True) # Output: 2 hello 3 True a = 2, print('a =', a) # Output # a = 2 ``` Using **sep** and **end** parameter in print function. **sep** The sep parameter is used to separate objects. Default value is a space character(‘ ‘). ``` # sep print(2, 'Hello', 3, True, sep=', ') # Output: 2, Hello, 3, True ``` **end** The end parameter is printed at the last. ``` # end a = 'Hello world' print('a =', a, end='!') # Output: a = Hello world! print('code =', 3, sep='0', end='0') # output: code =030 ```
introschool
861,862
How to setup HPC cluster on AWS using AWS Parallel Cluster 2.0 | With Monitoring tool Grafana
In this tutorial, you will learn about High-performance computing using AWS. You will design 3 nodes...
0
2021-10-14T06:38:54
https://dev.to/aws-builders/how-to-setup-hpc-cluster-on-aws-using-aws-parallel-cluster-20-with-monitoring-tool-grafana-28ep
aws, hpc, tutorial, awshpc
In this tutorial, you will learn about High-performance computing using AWS. You will design 3 nodes cluster ( 1 master node, 2 compute nodes). Once you will set up a cluster then you will integrate the Grafana tool to monitor the cluster resources. Grafana is multi-platform open-source analytics and interactive visualization web application. Once the cluster and monitoring tool will be done, you will execute your first HPC job ( prime number calculation ). 1. Setup cluster of 3 nodes ( 1 Master node, 2 Compute node) 2. Setup Grafana for monitoring cluster resources 3. Run you first HPC job ( Prime number calculation) See complete Tutorial on YouTube For more reference {% youtube iHuu1mdWZYQ %}
sam4aws
861,938
Why and when you should use Vuex
As developers, we sometimes fall into the trap of using technologies just because they are popular...
0
2021-10-13T08:32:27
https://fiddit.io/blog/why-and-when-you-should-use-vuex/
javascript, webdev, vue
As developers, we sometimes fall into the trap of using technologies just because they are popular and or commonly used together. That's why it can be beneficial to take a step back and truly understand the **why** of each technology we use. In this blog post, I will try to do this in regards to [Vuex](https://vuex.vuejs.org/) while answering the following question: - What problem does Vuex solve? - How does it solve the problem? ## The Beginning Let's start with just plain old Vue. You only have one component which includes the state, the template to render your HTML, and methods that modify that state. ![First Component](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j55f6dav3o7je4meza8j.png) Your component has perfect encapsulation and life is good. Now you add a second component and you pass it some of the state of the first component via props. ![Second Component](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dv6dr7v7oglk6qc5ywyo.png) Simple enough. Now imagine the following scenario: The component at the bottom of this graph needs some state from the first component. ![Many Components](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o29prtw3r1wnycej2t8o.png) In this graph you can see that we pass the needed state through many layers of components, this approach is referred to as **prop drilling**. It might not seem like a problem by looking at this simple graph, but imagine what this graph would look like in a large application. Things will start to get messy quick. But what exactly is the cause of increased complexity when using this approach? - Even if the components in-between don't need the state from the first component, they still need to pass them to the next component. (Increased Coupling) - The number of changes needed to rename a prop is high. (Code Duplication) - It becomes less simple to locate the place in your code where the state is modified. This increases cognitive load. (Increased Complexity) ## Your Application Grows As your application grows, it will eventually come to a point where more and more state is needed by multiple components scattered across your component hierarchy. You also often find the need to control part of the state of the parent component by one of its children,which means you'll now have to trigger events from the child component and listen for them in the parent. This of course increases coupling even more. In the graph below you will see a small application that has gotten to the point where global state can simplify the code. ![Too many Components](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ud237bnmntatzfynm1bn.png) Just imagine what a nightmare it would be if the component red component (bottom left) needs to access state from the yellow component (bottom right). To solve this issue we have three different options: 1. Move the state up to the top of our component hierarchy, so that we can then pass it down again to the components that need it. 2. Send the state up the component hierarchy via events and then pass it down via props. 3. Use global state. By now you should know that the first two options can become very complex, especially in larger applications. So let's take a look at the third option. ## Global State This is where global state comes in, it allows us to access and modify the state from anywhere within our application. In Vue this could be as simple as doing this: ```javascript methods: { toggleTheme: () => { this.$root.darkMode = true; } } ``` Now you could use it in other components simply by referencing `this.$root.darkMode`. As you can probably tell from the example code we are setting the theme for the application. In this case, this should truly be available throughout the program, it would not make sense for this to be bound to a component. The question then arises: If this is approach is so simple why do we need Vuex to manage our global state instead? The problem with global state is that it has some inherent problems: - The global state can be modified from anywhere, this means it becomes harder to predict what the value is at runtime and where it was changed from. (Increased Complexity) - If two components depend on the same global variable, this means that the components are now coupled. This is not only a problem of global state as we had the same problem before. But it is a **new** problem if you didn't have any coupling between the components before. - Makes testing harder. Since now you'll have to mock the global state. (Increased Complexity) ## Flux This is where Flux comes in. Flux is a pattern for managing data flow in your application. I'll try to give you a quick introduction to Flux below. **So what is Flux?** Going back to our example from the graph above, where the bottom left component (red) needs state from the bottom right component (yellow). Here is how this would work in Vuex (which is the official Flux implementation for Vue): ![Feels good man](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t9yqkfa03kfcmx83rq29.png) - Components dispatch actions (e.g. user clicks a button) - The store updates based on what the action it receives (e.g. "increment" will increase the count property in the store) - Components update when the store updates Instead of coupling the data with the component, Flux (and therefore Vuex) keeps the data completely separate. Different implementations of Flux often use different terms and add or omit a few parts of the original pattern, so it can get confusing sometimes. But at the root, all of the implementations follow the same principle. If you want more information about Flux,you can get a great overview [here](https://github.com/facebook/flux/tree/main/examples/flux-concepts). ## Vuex Ok, so Vuex is the official Flux implementation for Vue, and just like the example above shows, it solves the same "prop drilling" problems like our global state example from the "Global State" section above. One of the main differences between the global state example and Vuex is that Vuex actually encourages its users to keep [most](https://vuex.vuejs.org/guide/state.html#components-can-still-have-local-state) of the application state inside the store. That way Vuex becomes the single source of truth. At the same time, it tries to mitigate the problems that global state inherently has by providing a better developer experience. So what are the advantages of Vuex compared to using regular global state? - Standardized patterns for modifying state - Better integration with Vue - Great [Debugging Tools](https://chrome.google.com/webstore/detail/vuejs-devtools/nhdogjmejiglipccpnnnanhbledajbpd?hl=en) and integration in testing utils to allow for [easier testing](https://vue-test-utils.vuejs.org/guides/using-with-vuex.html) - Better support since it's used a lot by the Vue community Overall Vuex offers great value for medium to large applications. When you have a small application you might consider not using it.
carstenbehrens
862,143
AWS Access Keys - A Reference
AWS Access Keys are the credentials used to provide programmatic or CLI-based access to the AWS APIs....
0
2021-10-13T10:46:33
https://www.nojones.net/posts/aws-access-keys-a-reference
aws, security, cloud, devops
--- canonical_url: https://www.nojones.net/posts/aws-access-keys-a-reference published: true --- AWS Access Keys are the credentials used to provide programmatic or CLI-based access to the AWS APIs. This post outlines what they are, how to identify the different types of keys, where you're likely to find them across the different services, and the order of access precedence for the different SDKs and tools. ## What are AWS Access Keys? AWS Access Keys are credentials used to authenticate to the AWS APIs. Any time you execute the `aws` command line tools, or use any kind of tool or script that interacts with AWS, access keys are what you use to identify yourself to AWS. They're tied to an IAM principal - either an IAM user or an IAM role. There are, broadly speaking, two primary types of access keys: - **Access keys tied to an IAM user** - these do not expire, and are commonly used to allow systems hosted outside of AWS to authenticate to AWS resources. They're also often seen used by engineers in organizations that have not implemented single sign-on (SSO) to authenticate to their AWS estate. - **Temporary keys** - these are typically issued to AWS services through instance roles or similar, or as a result of an IAM role being assumed. Unlike keys tied to an IAM user, they sre usually only valid for a maximum of 12 hours (though frequently much less). Access keys are broken down into three primary components: - **Access Key ID** - roughly equivalent to a username, this is the unique identifier for the set of access keys you're using. This can be found in Cloudtrail logs when using long-lived access keys. - **Secret Access Key** - the secret component of any set of credentials, these are used to [sign requests to the AWS API](https://docs.aws.amazon.com/general/latest/gr/signing_aws_api_requests.html). Consider them equivalent to a password, and protect them accordingly. - **Session Token** - only required with temporary keys, these are passed alongside the Secret Access Key in the `X-Amz-Security-Token` header or query string field. ## Identifying access key types The first four characters of the access key ID will tell you where the access key originated, and knowing the common ones can speed up debugging or incident investigations. The [AWS documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html#identifiers-unique-ids) lists all the IAM identifer prefixes, including those used by access keys, but the main ones for access keys are listed below: | Key prefix | Key type | | ---------- | ------------------------------------------------------------ | | ABIA | [AWS STS service bearer token](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_bearer.html) | | AKIA | Access key tied to an IAM user | | ASIA | [Temporary (AWS STS) access key IDs](https://docs.aws.amazon.com/STS/latest/APIReference/API_Credentials.html) | Beyond the details shared above, exactly how the identifiers are generated remains internal implementation details that AWS haven't seen fit to share, and the same to a degree with the secret access keys and session tokens. [Scott Piper](https://summitroute.com/blog/2018/06/20/aws_security_credential_formats/) and [Aidan Steele](https://awsteele.com/blog/2020/09/26/aws-access-key-format.html) have done some interesting analysis on the access key IDs, which is well worth a read if you'd like to dig further into this topic. ## Where can they be stored? There are a number of different places that access keys can be found. Some are generic locations that can be used within AWS or externally, and some are specific to the AWS service being used. ### A summary of places to check If you've got a shell and you're not sure what AWS service you've landed in, the below is a summary of places to look: - Environment variables - `~/.aws/credentials` and `~/.aws/config` - `http://169.254.169.254/latest/meta-data/iam/security-credentials/ROLE-NAME` - `169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` where `$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` is an environment variable defined within the container - `$AWS_CONTAINER_CREDENTIALS_FULL_URI` with `$AWS_CONTAINER_AUTHORIZATION_TOKEN` sent in the Authorization header, where `$AWS_CONTAINER_CREDENTIALS_FULL_URI` and `$AWS_CONTAINER_AUTHORIZATION_TOKEN` are environment variables defined within the container ### Generic Locations There are some standard locations that libraries and tools will look for keys, no matter where they're running. From an offensive perspective, these are great places to look when you gain a foothold somewhere that you expect to find AWS keys, especially developer workstations. #### Environment variables Access keys can be passed in as environment variables to a given environment. They are always stored in the following environment variables, named much as one would expect: - `AWS_ACCESS_KEY_ID` - access key ID - `AWS_SECRET_ACCESS_KEY` - secret access key - `AWS_SESSION_TOKEN` - session token #### Configuration files AWS provides two on-disk locations where credentials can be stored. These files were originally intended as configuration files for the AWS CLI, but the SDKs and other tools will also pick access keys up from these locations. On a typical developer's system, these will often contain a number of different profiles, with the necessary information and credentials to authenticate to a range of AWS accounts across an organization. - `~/.aws/credentials` - intended as the proper location to store access keys - `~/.aws/config` - holds the configuration information for each profile, but may also include credentials Ben Kehoe has published [a more detailed explanation](https://ben11kehoe.medium.com/aws-configuration-files-explained-9a7ea7a5b42e) of these files and how they operate. This, in turn, prompted an explanation from one of the original engineers, James Saryerwinnie, [to provide the backstory](https://twitter.com/jsaryer/status/1294365822819999744) as to why the `config` and `credentials` files have been implemented the way they have. Ben's also just published [a great guide on how best to use these files](https://ben11kehoe.medium.com/never-put-aws-temporary-credentials-in-env-vars-or-credentials-files-theres-a-better-way-25ec45b4d73e), and how to avoid stuffing them full of temporary credentials, which is well worth a read if this is something you or your colleagues commonly do. ### Service Specific Locations There are several different methods used to provide credentials to code executing inside AWS services. While the SDKs and CLI will transparently check all the right places, it's useful to know where they live under the hood. #### EC2 Credentials associated with EC2 instance profiles are provided via the instance metadata service (IMDS), which lives at `169.254.169.254`. It contains a range of useful data about the instance and its configuration - hackingthe.cloud has [a useful summary posted here](https://hackingthe.cloud/aws/general-knowledge/intro_metadata_service/). `http://169.254.169.254/latest/meta-data/iam/security-credentials/` will list the roles provisioned on the instance, and a request to `http://169.254.169.254/latest/meta-data/iam/security-credentials/ROLE-NAME` will return the access keys associated with the `ROLE-NAME` role. Obviously, replace `ROLE-NAME` with the name of the role you're interested in. **IMDS version 1** An example curl command to get access keys from an instance running IMDS version 1 is shown below: ```bash curl http://169.254.169.254/latest/meta-data/iam/security-credentials/ROLE-NAME ``` **IMDS version 2** AWS built a hardened version of the IMDS following breaches involving Server-Side Request Forgery attacks being used to steal credentials from an instance's IMDS. As a result, if IMDS version 2 is enabled, a token is required to access data contained within. To access IMDSv2, the process is: - Request a token by submitting a PUT request to `http://169.254.169.254/latest/api/token` - Make a request to other endpoints on the metadata service, passing the aforementioned token in the `X-aws-ec2-metadata-token` header. An example curl command is shown below. ```bash TOKEN=`curl -X PUT "http://169.254.169.254/latest/api/token"` && curl http://169.254.169.254/latest/meta-data/iam/security-credentials/ROLE-NAME -H "X-aws-ec2-metadata-token: $TOKEN" ``` **GuardDuty evasion** If you steal credentials from an instance's metadata service in an account with GuardDuty enabled and use them from outside of AWS, the `UnauthorizedAccess:IAMUser/InstanceCredentialExfiltration.OutsideAWS` GuardDuty finding will trigger. At the time of writing it is possible to bypass this by creating an EC2 instance in an AWS account you control and using the credentials from there. #### Lambda Unlike many other AWS compute services, Lambda provides role credentials as environment variables, as described above. Once you have code execution inside a Lambda function, the `printenv` command will return all environment variables, including the access key id, secret access key and session token. Equally, the following shell commands will retrieve the access keys and session token from within a Lambda function: ```bash echo $AWS_ACCESS_KEY_ID echo $AWS_SECRET_ACCESS_KEY echo $AWS_SESSION_TOKEN ``` ##### Step Functions Step Functions operate as Lambda functions under the hood, and thus credentials are in environment variables, as you'd find on Lambda. #### Elastic Container Service (ECS) ECS uses an instance metadata service, much like EC2, only for ECS this is hosted at `169.254.170.2`. Additionally, the ECS team took a different approach to harden ECS tasks against server side request forgery attacks by generating a random UUID for each role in each task execution. As such, credentials are found at `http://169.254.170.2/v2/credentials/<random-uuid>`. The `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` environment variable is set within the container to inform applications where to access credentials, and the AWS SDKs transparently look up this environment variable when looking for credentials in the instance metadata service. As such, from within the container, the following command will retrieve credentials for the task execution role: ```bash curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI ``` The team at Rhino Security Labs also [published a great blog post](https://rhinosecuritylabs.com/aws/weaponizing-ecs-task-definitions-steal-credentials-running-containers/) on reconfiguring ECS task definitions to recover credentials associated with the task. A number of other services appear to be built on top of ECS, and thus also use the same access method described above: ##### SageMaker Notebooks Sagemaker notebooks appear to operate inside an ECS container. The following command will retrieve access keys for the assigned role when executed within a SageMaker Notebook shell: ```bash curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI ``` ##### App Runner App Runner appears to use ECS under the hood, and thus has an ECS-style instance metadata service available. The following command when executed inside App Runner will retrieve access keys for the assigned role: ```bash curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI ``` ##### CodeBuild CodeBuild containers appear to run on top of ECS. To retrieve access keys associated with the role assigned to a CodeBuild build project, alter the buildspec to execute the following command: ```bash curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI ``` ##### Batch AWS Batch appears to be built on top of ECS. The following command executed within Batch will retrieve access keys associated with the Batch execution role: ```bash curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI ``` #### Elastic Kubernetes Service (EKS) Methods for providing access keys to containers inside EKS vary a lot, depending on who set the cluster up and when. The canonical approach suggested by AWS now is IAM Roles for Service Accounts (IRSA), which is an EKS feature designed to map IAM roles to pods using Kubernetes service accounts. AWS have posted a pretty good description of how it functions in their [EKS Best Practices Documentation](https://aws.github.io/aws-eks-best-practices/security/docs/iam/#iam-roles-for-service-accounts-irsa). The short of it is that credentials are acquired using an OIDC token via the `sts:AssumeRoleWithWebIdentity` API call. The process looks like this: * A pod with a service account tied to an IAM Role will call a public OIDC discovery endpoint for AWS IAM upon startup * This endpoint signs the OIDC token issued by the cluster * This signed JWT token is transparently used by the CLI or SDKs to call `sts:AssumeRoleWithWebIdentity` * `sts:AssumeRoleWithWebIdentity` returns temporary access keys associated with the relevant IAM role When the pod is spun up, the EKS control plane injects the role ARN and the web identity token into the pod. These are defined in environment variables: ```bash AWS_ROLE_ARN=arn:aws:iam::AWS_ACCOUNT_ID:role/IAM_ROLE_NAME AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token ``` As this is a relatively new feature, you may also find credentials in these places: * IMDS for the underlying nodes, if EKS on EC2 is deployed and [kiam](https://github.com/uswitch/kiam) or similar isn't deployed * Injected as secrets as environment variables inside a container * Injected as a mounted file inside a container * Stored in application configuration files for the app inside a container * The usual AWS configuration files * Access keys tied to IAM users stored in the cluster's secrets store #### CloudShell CloudShell provisions temporary credentials with a matching set of permissions to the user or role instantiating the CloudShell shell. As such, these will often have a fairly useful permission set, if you can get inside a developer's CloudShell shell. CloudShell runs on one of the AWS container services, however its access key provisioning appears to operate differently to ECS. The `$AWS_CONTAINER_CREDENTIALS_FULL_URI` environment variable defines the address and path for the metadata service endpoint that supplies the credentials, however from testing this does not appear to be randomized. From instantiating a few different CloudShell instances, it appears to always resolve to `http://localhost:1338/latest/meta-data/container/security-credentials`. To provide a layer of protection against SSRF, an Authorization header must also be sent with the request, containing the value of the `AWS_CONTAINER_AUTHORIZATION_TOKEN` environment variable. An example curl command to retrieve the shell's access keys is shown below: ```bash curl $AWS_CONTAINER_CREDENTIALS_FULL_URI -H "Authorization: $AWS_CONTAINER_AUTHORIZATION_TOKEN" ``` #### Elastic Beanstalk/CodeStar etc A number of AWS services act as wrappers or abstractions around numerous others in order to make it easier to deploy applications. In these cases, the location of access keys tied to any assigned roles will depend on the underlying services in use. #### Lightsail Lightsail does not allow IAM roles to be assigned to instances. If there are access keys to be found, they'll be manually configured by the system administrator. As such, they'll likely be in on-disk configuration files, the environment variables, or hardcoded in application source code. ## AWS API calls that return access keys Kinnaird McQuade posted a [detailed run-down of the different AWS API calls that return credentials](https://kmcquade.com/2020/12/sensitive-aws-api-calls/), and is worth keeping in your back pocket as a reference. I've listed the known calls that return AWS access keys below. Given the size of the AWS APIs, it's probably best not to consider this list exhaustive, but it's a useful starting point for policy writing, security monitoring and penetration testing. - [codepipeline:PollForJobs](https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_PollForJobs.html) - [cognito-identity:GetCredentialsForIdentity](https://docs.aws.amazon.com/cognitoidentity/latest/APIReference/API_GetCredentialsForIdentity.html) - [iam:CreateAccessKey](https://docs.aws.amazon.com/IAM/latest/APIReference/API_CreateAccessKey.html) - [iam:UpdateAccessKey](https://docs.aws.amazon.com/IAM/latest/APIReference/API_UpdateAccessKey.html) - [sso:GetRoleCredentials](https://docs.aws.amazon.com/singlesignon/latest/PortalAPIReference/API_GetRoleCredentials.html) - [sts:AssumeRole](https://docs.aws.amazon.com/cli/latest/reference/sts/assume-role.html) - [sts:AssumeRoleWithSaml](https://docs.aws.amazon.com/cli/latest/reference/sts/assume-role-with-saml.html) - [sts:AssumeRoleWithWebIdentity](https://docs.aws.amazon.com/cli/latest/reference/sts/assume-role-with-web-identity.html) - [sts:GetFederationToken](https://docs.aws.amazon.com/cli/latest/reference/sts/get-federation-token.html) - [sts:GetSessionToken](https://docs.aws.amazon.com/cli/latest/reference/sts/get-session-token.html) ## Access Key Precedence Each of the main AWS tools and libraries maintain their own order of precedence for loading access keys, and they're not completely aligned with each other. There's also several SDK-specific options to make the SDKs work in a manner more familiar to developers working in a particular language. I've listed the most common below for reference. ### CLI Per the [CLI documentation](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-precedence): 1. Environment variables 2. CLI credentials file - `~/.aws/credentials`, `C:\Users\USERNAME\.aws\credentials` on Windows. 3. CLI configuration file – `~/.aws/config` on Linux or macOS, `C:\Users\USERNAME\.aws\config` on Windows. 4. Container credentials 5. Instance profile credentials ### Boto3 Python SDK Per the [Boto3 documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html): 1. Passing credentials as parameters in the boto.client() method 2. Passing credentials as parameters when creating a Session object 3. Environment variables 4. Shared credential file (~/.aws/credentials) 5. AWS config file (~/.aws/config) 6. Assume Role provider 7. Boto2 config file (/etc/boto.cfg and ~/.boto) 8. Instance metadata service on an Amazon EC2 instance that has an IAM role configured. ### .NET SDK Per the [AWS SDK for .NET documentation](https://docs.aws.amazon.com/sdk-for-net/v3/developer-guide/creds-assign.html): 1. Credentials that are explicitly set on the AWS service client (within the application source code) 2. A credentials profile with the name specified by a value in [AWSConfigs.AWSProfileName](https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/Amazon/TAWSConfigs.html#properties). 3. A credentials profile with the name specified by the `AWS_PROFILE` environment variable. 4. The `[default]` credentials profile. 5. [SessionAWSCredentials](https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/Runtime/TSessionAWSCredentials.html) that are created from the `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_SESSION_TOKEN` environment variables, if they're all non-empty. 6. [BasicAWSCredentials](https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/Runtime/TBasicAWSCredentials.html) that are created from the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables, if they're both non-empty. 7. IAM Roles for Tasks, if running as an ECS task. 8. Credentials from the EC2 instance metadata service. ### Go SDK Per the [Go SDK documentation](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-credentials), assuming you're using the default credential provider: 1. Passing credentials as a parameter to the client 2. Passing credentials when creating a session 3. Environment variables. 4. Shared credential file (~/.aws/credentials) / AWS config file (~/.aws/config) (order not specified) 5. If your application uses an ECS task definition or RunTask API operation, IAM role for tasks. 6. If your application is running on an Amazon EC2 instance, IAM role for Amazon EC2. ### Javascript SDK Per the [Javascript SDK documentation](https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/setting-credentials-node.html): 1. Credentials that are explicitly set through the service-client constructor 2. Environment variables 3. The shared credentials file 4. Credentials loaded from the ECS credentials provider 5. Credentials that are obtained by using a credential process specified in the shared AWS config file or the shared credentials file 6. Credentials loaded from AWS IAM using the credentials provider of the Amazon EC2 instance
njonesuk
862,294
Fontes de Dados - Data Science
As fontes de dados (datasets) são o principal insumo para a Ciência de Dados. Datasets completos e...
0
2021-10-13T14:29:49
https://dev.to/dgoposts/fontes-de-dados-data-science-2acm
datascience, analytics, data
As fontes de dados (datasets) são o principal insumo para a Ciência de Dados. Datasets completos e confiáveis fornecem subsídios fundamentais, formam a base de qualquer análise de dados de alto nível. Quase em sua totalidade, são representados por dados tabulares em formato de planilha onde as linhas são os registros dos acontecimentos e as colunas são as características desses acontecimentos. Neste artigo, irei apresentar datasets para três grandes áreas: economia, saúde e meio ambiente. --- ###Economia **Tesouro Nacional Transparente** Contém informações produzidas e atualizadas pelo tesouro nacional para livre utilização, desde que citada a fonte. Informações sobre dívida pública, títulos do tesouro, ajustes fiscais entre outros. https://www.tesourotransparente.gov.br/ **Banco Mundial - Dados abertos** Dados, pesquisas, relatórios entre outras fonte de dados fornecidos pelo banco mundial com o intuito de fomentar o desenvolvimento global. https://data.worldbank.org/ **Fundo Monetário Intercional** Dados abertos sobre o FMI, como é mais comumente conhecido. Confira os investimos aplicados por país através dos relatórios de de resultados anuais divulgados pelo FMI. https://www.imf.org/en/Data **Financial Times** Bases de dados abertas de um dos maiores jornais do mundo de negócios e notícias econômicas. https://markets.ft.com/data/ **Global Financial Data** Fornece informações econômicas, financeiras e históricas dos mais abrangentes possível. Especializada em fornecer dados financeiros e econômicos que vão dos anos 1000 até o presente. https://www.globalfinancialdata.com/ **Banco Central dos Estados Unidos** O FRED (Federal reserve ) traz dados sobre todo o sistema financeiro norte americano. https://fred.stlouisfed.org/ **Google Finance** Todas as principais informações do mercado financeiro mundial em uma página. Tudo o que está ocorrendo neste minuto de mais importante. https://www.google.com/finance **Mercado Bitcoin** Uma forma automatizada de obter dados de negociações do Mercado Bitcoin. O acesso à API de dados é público, não é necessário criar uma conta tampouco autenticar. https://www.mercadobitcoin.com.br/api-doc/ --- ###Saúde **Ministério da Saúde** Dados abertos Informações sobre o ministério da saúde em si, como orçamento, investimentos e iniciativas. http://www.dados.gov.br/organization/ministerio-da-saude-ms **ANVISA** Portal de dados abertos da ANVISA. Consulte dados sobre agrotóxicos, alimentos, cosméticos, medicamentos, produtos para a saúde, saneamento básico e outros. http://portal.anvisa.gov.br/dados-abertos **DATASUS** Portal de dados abertos do sistema único de saúde. Consulte relatórios e informativos sobre a saúde do brasileiro. http://www2.datasus.gov.br/DATASUS/ **Repositório de dados abertos da OMS** Diversos estudos e fontes de dados sobre a organização mundial da saúde. https://www.who.int/gho/database/en/ **U.S Food & Drug** Base americana - Remédios e Alimentação. Recall de medicamentos, Plantações sustentáveis, emergências públicas e muito mais. https://www.fda.gov/food **HealthData.gov** Dedicado a tornar os dados sobre saúde mais acessíveis a empreendedores, pesquisadores e formuladores de políticas, na esperança de obter melhores resultados para a saúde de todos. https://healthdata.gov/ **Nacional Center Institute** Instituto nacional do câncer norte americano. Diversas estatísticas sobre os vários tipos desta doença, ações, pesquisas e tratamentos sobre. https://seer.cancer.gov/faststats/ **Centers for Disease Control and Prevention** Centro de controle e prevenção de doenças. Dados e estatísticas sobre diversas doenças, pesquisas e vacinas. https://www.cdc.gov/datastatistics/ --- ###Meio ambiente **Climate Data Online** Centro nacional norte americano de informações ambientais. Dados sobre o clima mundial e meio ambiente de diversos centros, interligados entre si. https://www.ncdc.noaa.gov/cdo-web/datasets **Nasa Earth Observatory** É o observatório de condições globais da Nasa. Busque e visualize vasto conteúdo sobre cada tópico que cause impacto no planeta terra. https://earthobservatory.nasa.gov/ **CCI - Open data Portal** Computa 40 anos de observações de satélite, mesclando diversos dados observados por eles sobre as mudanças climáticas e de relevo. Uma das melhores iniciativas de forma livre em todo mundo. https://gisgeography.com/free-world-climate-data-sources/ **OECD** O objetivo da OECD é construir políticas melhores para um melhor lugar no futuro. Realiza e disponibiliza diversos estudos, dados climáticos. https://data.oecd.org/environment.htm ---- That's all folks! ✌️
dgoposts
862,536
How to Handle Required Inputs in Angular
Handling Required Inputs in Angular Directive and Component. Hey, this article presents...
0
2021-10-13T16:08:17
https://medium.com/@redin.gaetan/angular-for-everyone-required-inputs-ee916b2feaae
typescript, angular, javascript, decorators
--- title: How to Handle Required Inputs in Angular published: true date: 2021-09-02 09:40:40 UTC tags: typescript,angular,javascript,decorators canonical_url: https://medium.com/@redin.gaetan/angular-for-everyone-required-inputs-ee916b2feaae --- #### Handling Required Inputs in Angular Directive and Component. ![](https://cdn-images-1.medium.com/max/1024/1*r33XIdq-Ur8Y87CzrzilfQ.jpeg) Hey, this article presents tips for handling required inputs in a Directive or a Component. The classic approach could be implementing the ngOnInit method and throwing errors when the input value is not set. ``` public ngOnInit(): void { if (this.myInput === undefined) { throw new Error('input myInput is required for MyDirective'); } } ``` But I don’t want to have to write this for all my required properties… So yep, I’m a bit lazy … ^^ But the main goal is to **simplify the code**. I recently found an interesting way to do it just once! Do you know the concept of Typescript [decorators](https://www.typescriptlang.org/docs/handbook/decorators.html)? ### V1 class decorator That’s what I would like to code: ``` @Directive(...) @RequiredInputs('input1', 'input2') export class MyDirective { @Input() public input1!: unknown; @Input() public input2!: unknown; @Input() public input3Optional ?: unknown; } ``` Here’s how to implement the decorator which allows doing that: {% gist https://gist.github.com/GaetanRdn/fce8417d59a04638c219ca351745ddf2 %} Now you will get an error for input1 and input2 if it’s not set on the directive using: ``` <span myDirective></span> // throw Error <span myDirective input1="1"></span> // throw Error <span myDirective input1="1" input2="toto"></span> // No Error ``` As it’s said in comment by a reader you also can handle this with using the compoent/directive’s selector like this: ``` @Directive({ selector: '[myDirective][input1][input2]', ... }) ``` But the error message will not be consistent. The directive will not be recognized by Angular if you do it like this and that’s all. Whereas here with the decorator you will have an explicit message for the case. So yes there are many ways to handle it, just choose your prefer way. ### V2 property decorator Here’s the code: {% gist https://gist.github.com/GaetanRdn/de66053b7a08831966b9ec7a6ba4c6d3 %} You juste have to use it like this: ``` @Component(...) class MyComponent { @Input() @Required() public myProperty: unknown; ... } ``` Thanks for reading. What about you? How do you handle this use case? Please tell me. ### Learn More - [A decorator to coerce Boolean properties](https://medium.com/@redin.gaetan/angular-un-decorator-pour-forcer-le-type-bool%C3%A9en-%C3%A0-une-propri%C3%A9t%C3%A9-32931c79163) - [TypeScript Function Overloads](https://dev.to/gaetanrdn/typescript-function-overloads-1eeo) - [Angular for everyone: All about it](https://link.medium.com/SXPQgRn7xjb)
gaetanrdn
862,559
API Development: The Complete Guide for Building APIs Without Code
The term “API” gets thrown around a lot these days, but what does it mean? What can you...
0
2021-10-13T16:00:47
https://www.karllhughes.com/posts/api-development
![](https://www.karllhughes.com/assets/img/api-development.png) =============================================================== **The term “API” gets thrown around a lot these days, but what does it mean? What can you use an API for? Do you have to have a developer on your team to build or use an API? In this guide, we’ll explore all those questions and more, including a spotlight on the tools you can use as a [non-technical founder](https://www.karllhughes.com/posts/non-technical-founder-hiring-cto) to build your own APIs.** You don’t have to understand what an API is at this point, but if you do, feel free to skip the first section and move right into “[Why Build an API?](https://www.karllhughes.com/#why-build-an-api)”. If you already know why you need one, then skip down to “[Things to Consider When Building an API](https://www.karllhughes.com/#things-to-consider-when-building-an-api)”, and if you are seasoned at building APIs but just want to know how you can build them without a developer, jump all the way down to “[Tools for Building APIs Without Code](https://www.karllhughes.com/#tools-for-building-apis-without-code)”. What is an API? --------------- API stands for “Application Programming Interface.” Before I lose you with a bunch of technical jargon, let me put it in simple terms: **an API is a way for computer programs to talk to each other**. APIs are used in almost all software, websites, mobile apps, or computer games. Some companies even make money using only their APIs, but before I get to that, let’s take a look at an example of an API you’re probably familiar with: ![](https://i.imgur.com/idFuNEf.png) Ever seen a screen like this? This is a Facebook login button, and it uses Facebook’s API to allow users to verify their identity. It essentially lets you skip entering your username and password by using your Facebook account as proof that you are who you say you are. Developers who use Facebook’s API can save themselves time by not having to build their own username and password login system, instead piggybacking off of Facebook’s. Another way that APIs can be used is to show data stored in another platform on your website. Have you ever seen a comment form on a site that looks like this? ![](https://i.imgur.com/TAFn3lZ.png) When comments are powered by Disqus it means that the website uses the [Disqus API](https://disqus.com/) to store their comments so they don’t have to manage them, remove spam, or write lots of code themselves. APIs can save developers a lot of time. Here’s another great example of using an API to get data from a third-party source: ![](https://i.imgur.com/eKIl8LC.png) [FinanceBoards](https://financeboards.com/) uses stock market data provided by several different sources to create charts, graphs, reports, and more for investors. If you want to build any kind of stock market tracking application, you’ll need to get data like this, and [stock data APIs](https://rapidapi.com/blog/best-stock-api/) make it relatively easy to do so. There’s almost no end to the kinds of things you can build using APIs, and there are hundreds of free APIs you can use in your projects [check out this list on Github](https://github.com/toddmotto/public-apis), but that’s a topic for another time. For the remainder of this guide, we’ll focus on building APIs. Sponsor [![](https://www.karllhughes.com/assets/img/draft-sq.png)](https://draft.dev/?src=karllhughes) ### [Want Great Content Like This for Your Site?](https://draft.dev/?src=karllhughes) At [Draft.dev](https://draft.dev/?src=karllhughes), we create technical content for startups looking to reach software engineers. Stop begging your engineers to write blog posts and build a high-quality, reliable content engine today. [Learn More →](https://draft.dev/?src=karllhughes) Why Build an API? ----------------- APIs are very powerful because they allow developers to take someone else’s work and build their own app or product from it, but why do API creators do it? It may seem that giving away your company’s data or features in an API could help your competitors, but when done right, an API can allow your company to grow into new areas that you never thought possible. Let’s take a look at some companies that have used their APIs to grow and eventually dominate their fields. ### Quora: Using an API for Internal Use Only ![](https://i.imgur.com/6DbtWGi.png) First of all, APIs do not have to be publicly available at all. In fact, most companies that have an API only use them internally to allow different parts of their website to talk to each other. [Quora](https://www.quora.com/) is a great example of this, as they have an API, but do not offer external developers a way to gain access to it. Instead, they use this API to keep the data in their mobile and web apps in sync. The advantage to an internal API is that you can use the same database, business rules, and shared code behind the scenes to power your mobile app, desktop app, and website without having to worry about competitors stealing your content or developers misusing your data. So even if you never plan to give your data to partners, you may want to consider building an API simply to allow developers to build different apps with the same data. APIs are a great way to do more work with fewer developers. ### Twitter: Allowing Users to Build their Ecosystem ![](https://i.imgur.com/U46KSa2.png) [Twitter](https://twitter.com/) started out with a huge focus on their API. Developers could get almost any data from Twitter they wanted - trends, hashtags, user stats - and they built some really cool stuff with it. This massive amount of open data and the tools people built actually attracted more users to Twitter. Companies could easily hook into the Twitter API to let users share their content on Twitter without leaving their site, and Twitter in turn got even more content on the platform. Twitter might have been able to build some of these applications on their own, but there’s no way they would have been able to do everything that API users have imagined. Eventually - once Twitter dominated the microblogging universe - they [tightened up their API](https://www.theverge.com/2012/8/23/3263481/twitter-api-third-party-developers) and made partners pay for specific kinds of access. While this made early adopters mad, Twitter was able to profit from the growth of their API without sacrificing the long-term profits they now get out of it. ### Diigo: An Extra Incentive for Paid Users ![](https://i.imgur.com/UTYacdW.png) [Diigo](https://www.diigo.com/) is a bookmarking and annotation tool with a generous free tier, convenient Chrome extension, and mobile sharing apps. Because some users wanted to use Diigo for more advanced purposes and build their own applications using the bookmarks they saved in Diigo, Diigo decided to offer a public API, but with a catch. It’s only available to paid users. While providing the API probably isn’t much more work than servicing their UI (in fact, it might be less work as [websites are notoriously hard to get right](https://www.karllhughes.com/posts/startup-website)), Diigo decided that the users who wanted API access were most likely willing to pay a few bucks per month to make their lives easier using the API. It certainly hooked me in and took the product from cool to a critical part of my weekly workflow. If your business has successfully found a tech-savvy audience that is begging for API access, you might want to consider offering one especially if you can profit from it. ### Mailchimp: Allowing Integrations that Encourage Greater Usage ![](https://i.imgur.com/2iW9aMc.png) [Mailchimp](https://mailchimp.com/) is one of the most popular email marketing tools out there, and it’s my personal favorite. It’s really easy to use, they have a generous free tier for small mailing lists, and they seem to always be the first when offering new features that are great for email senders. They also have a [well-documented API](https://mailchimp.com/developer/) that encourages even more interesting use cases. Without their API, you can create a campaign from a template, add users to a list manually, and look at standard reports. But, using the Mailchimp API, you can build custom templates from your own website’s data, import thousands of existing users from a database, or showcase your raw email campaign data in novel ways. There’s no limit to what you can do, but all these new use cases also mean more revenue for Mailchimp as you still have to pay them to send your emails through their service. If you’re building a software-as-a-service platform that charges based on usage, an API could be a great way to increase engagement. ### Aylien: API as a Service ![](https://i.imgur.com/lQwV0AU.png) While many APIs are a bonus or supplemental feature, some companies are built as API-first services. [Aylien](https://aylien.com/) is a text analysis and natural language processing service that doesn’t offer a user interface at all - they just sell access to their API. This means that developers hoping to analyze or categorize data can simply send their text to Aylien, then listen for a response with all the analysis they need. There’s no custom code involved at all, and using Aylien with existing data should take no more than a few minutes. They charge customers for this time savings though - once you go over their free tier, there’s a fee for using it. Aylien might someday offer a user interface, but by starting out as an API-first company, they’ve aligned themselves with developers and put the focus on their technical tooling rather than UI. Does Every Business Need an API? -------------------------------- > “Every company in the world already has valuable data and functionality housed within its systems. Capitalizing on this value, however, means liberating it from silos and making it interoperable and reusable in different contexts—including by combining it with valuable assets from partners and other third parties.” - [Apigee State of the API Economy, 2021](https://pages.apigee.com/rs/351-WXY-166/images/Apigee_StateOfAPIS_eBook_2020.pdf) Many businesses have had great success building APIs that customers or other third parties can use, but you do not _have_ to have an API, even if you are building a software-based business. In fact, the complexity of offering an API in addition to a user interface may be [too much for a small startup](https://www.karllhughes.com/posts/creating-a-tech-startup-without-a-developer), but it’s still good to understand when and why an API is appropriate. Here is a list of reasons you may or may not want to build an API. While not exhaustive, this should give you a starting point when deciding whether or not an API is right for your use case. ### You Should Probably Build an API If: * You want to build a mobile app or desktop app someday * You want to use modern front-end frameworks like [React](https://reactjs.org/) or [Angular](https://angular.io/) * You have a data-heavy website that you need to run quickly and load data without a complete refresh * You want to access the same data in many different places or ways (eg: an internal dashboard and a customer-facing web app) * You want to allow customers or partners limited or complete access to your data * You want to upsell your customers on direct API access ### You Should Probably _Not_ Build an API If: * You just need a landing page or blog as a website * Your application is temporary and not intended to grow or change much * You never intend on expanding to other platforms (eg: mobile, desktop) * You don’t understand the technical implications of building one One thing that doesn’t have to stand in your way of building an API is not having (or being) an experienced software developer. **In fact, you might be able to build a serviceable API without any custom development work, but you should understand some of the implications of giving users API access to your data.** ![](https://i.imgur.com/i9q2sLl.png) Things to Consider When Building an API --------------------------------------- When you build a website with sensitive or proprietary data, you probably want users to log in before they can access it. The same holds true for APIs - you shouldn’t make yours open to the public unless you want anyone in the world to have access. Even if you want the data to be easy to get, you may want to issue API keys just so you can keep track of who’s using it and potentially lock out anyone who abuses your API. Security is one consideration, but there are many other things you should think about when building an API: ### Authentication Who do you want to give access to your API? Paying customers? Internal employees? Anyone on the internet? If you want to institute any limits on how or how much your API is used, you’ll need some form of authentication. Common options include Basic Auth, API Keys, OAuth tokens, and JSON Web Tokens. I won’t get into the difference here, but there’s a [great article by Zapier explaining the difference here](https://zapier.com/engineering/apikey-oauth-jwt/). ### Documentation Developers who want to use your API will need some way to know how they can use it. API Documentation should describe the requests that are allowed, the format and type of data inputs allowed, and the responses returned by the API. These documents can follow certain standard formats (like the [Swagger specification](https://swagger.io/solutions/api-documentation/)), or they can be different between every API. We’ll talk more about requests and responses in the next section of this guide. ### Role and Route-Based Permissions Sometimes you will need your authentication rules to be quite complicated. For example, maybe internal developers can access certain parts of your API that public users or customers cannot. Developers can build in role or route-based permissions systems that prevent unauthorized use in specific parts of your API. ### Rate Limiting When you offer access to your API to the public, it’s usually a good idea to prevent people from using it too much or too quickly. Rate limiting can prevent users from abusing your API, scraping all your data, or simply crashing your app because they’re making so many requests. ### Logging/Analytics When your API returns an error to a user, you might want to know about it. Logging can be added to capture every request and response or just the ones that failed. Logging or analytics can also help you track how much your API is being used, especially when you’re dealing with lots of third-party users. ### Side Effects What if you want to trigger alerts, link multiple API requests together, or kick off background tasks with your API? These events are referred to as “side effects” meaning that they might not be contained in the primary request and response, but are still important actions when designing your API. Usually, this level of customization has to be custom-coded, but there are ways to manage side effects without writing code. ### Scalability “Scalability” is a term that developers use to refer to the ability of your API to grow or shrink depending on the needs of your team or customers. For example, a scalable API can handle 100 users today and 10,000 users tomorrow without throwing lots of errors. Ideally, a good, scalable API will cost less when it’s not in use, but that level of scalability is tough to reach without a developer. ### Speed 500 milliseconds (1/2 a second) may not sound like much time, but for computers, this is an eternity. While there’s no single answer to the question, “[How fast should your API be?](https://dev.to/karllhughes/building-a-response-timer-to-benchmark-api-performance-3k6k)” many successful APIs respond within 100 milliseconds. This can depend greatly on who your users are and what they’re using your API for. Real-time stock market price APIs need to be much faster than most consumer web applications. ![](https://i.imgur.com/U0j7OTh.png) How an API Works ---------------- Now that you know what an API is and why it might be a good fit for your business and some technical considerations, let’s make it a little more tangible. APIs are a way for computers to share data or functionality, but computers need some kind of interface to talk to each other. While there are many options out there, we’ll focus on [HTTP APIs (also known as Web APIs)](https://en.wikipedia.org/wiki/Web_API) as they are the most common option in web and mobile app development. ### The Building Blocks of an API **First, an API needs a data source.** In most cases, this will be a database like [MySQL](https://www.mysql.com/), [MongoDB](https://www.mongodb.com/), or [Redis](https://redis.io/) (don’t worry if you don’t know what those are, they’re basically just ways that programmers store data), but it could also be something simpler like a text file or spreadsheet. The API’s data source can usually be updated through the API itself, but it might be updated independently if you want your API to be “read-only”. **Next, an API needs a format for making requests.** When a user wants to use an API, they make a “request”. This request usually includes a verb (eg: “Get”, “Post”, “Put”, or “Delete”), a path (this looks like a URL), and a payload (eg: form or JSON data). Good APIs offer rules for making these requests in their documentation. **Finally, an API needs to return a response.** Once the API processes the request and gets or saves data to the data source, it should return a “response”. This response usually includes a status code (eg: “404 - Not Found”, “200 - Okay”, or “500 - Server Error”) and a payload (usually text or JSON data). This response format should also be specified in the documentation of the API so that developers know what to expect when they make a successful request. ![](https://i.imgur.com/PiT8p1E.png) An API can optionally do many other things (see the list of considerations above), but these three things are the most fundamental for any API. To make these concepts even more concrete, let’s access a couple real APIs and see what they look like. Don’t worry, you don’t have to know how to code to follow along. ### Accessing an API from Your Web Browser HTTP APIs actually use the same method of communication that your web browser uses when it accesses websites. This means you can access some APIs by simply typing a URL into your browser. For example, [Open Food Facts](https://world.openfoodfacts.org/data) has a free API for getting information about foods and ingredients. You can go to [this link](https://world.openfoodfacts.org/api/v0/product/737628064502.json) in your web browser to see the API response for all the data they have on “Stir Fry Rice Noodles”. When you go to that URL, you’ll see a mess of data like this: {"status":1,"product":{"ingredients_text_with_allergens":"RICE NOODLES (RICE, WATER), SEASONING PACKET (PEANUT, SUGAR, SALT, CORN STARCH, SPICES [CHILI, CINNAMON, PEPPER, CUMIN, CLOVE], HYDRDLYZED SOY PROTEIN, GREEN ONIONS, CITRIC ACID, PEANUT OIL, SESAME OIL, NATURAL FLAVOR).","generic_name_en_debug_tags":[],"nutrition_data_per_debug_tags":[],"additives_prev":" [ rice-noodles -> en:rice-noodles ] [ noodles -> en:noodles ] [ rice -> en:rice ] [ water -> en:water ] [ seasoning-packet -> en:seasoning-packet ] [ packet -> en:packet ] [ peanut -> en:peanut ] [ sugar -> en:sugar ] [ salt -> en:salt ] [ corn-starch -> en:corn-starch ] [ starch -> en:starch ] [ spices -> en:spices ] [ chili -> en:chili ] [ cinnamon -> en:cinnamon ] [ pepper -> en:pepper ] [ cumin -> en:cumin ] [ clove -> en:clove ] [ hydrdlyzed-soy-protein -> en:hydrdlyzed-soy-protein ] [ soy-protein -> en:soy-protein ] [ protein -> en:protein ] [ green-onions -> en:green-onions ] [ onions -> en:onions ] [ citric-acid -> en:e330 -> exists -- ok ] [ peanut-oil -> en:peanut-oil ] [ oil -> en:oil ] [ sesame-oil -> en:sesame-oil ] [ oil -> en:oil ] [ natural-flavor -> en:natural-flavor ] [ flavor -> en:flavor ] ","nutrition_data":"on","ingredients_from_palm_oil_n":0,"editors":["","thierrym","manu1400","andre","upcbot"],"allergens_hierarchy":[],"brands":"Thai Kitchen,Simply Asia","link":"",...} Obviously, this is impossible for humans to make sense of, but it’s actually formatted in “JSON”, a very common and easy-to-parse format for computers. To see the data in a more readable way, you can format it using a [JSON formatter](https://jsonformatter.org/). * Go to [jsonformatter.org](https://jsonformatter.org/) * Copy/paste all the messy-looking text from the openfoodfacts API response above into the left-hand side of the JSON formatter. * Click “Format/Beautify” ![](https://i.imgur.com/j6nPxdu.png) Now the data is a little easier to look at. Congratulations, you just made your first real API request! ### Accessing an API from Postman The example above was simple because the API didn’t require any authentication and we were just making a “Get” request to see all the data about a single product. If the API you’re accessing or building is more complex, you’ll likely need to use an API tool like [Postman](https://www.karllhughes.com/posts/postman-api-access). * To set up Postman, [download it for your operating system here](https://www.getpostman.com/apps). * We’ll be using the [Holiday API](https://holidayapi.com/), so go to [holidayapi.com](https://holidayapi.com/) and sign up for a free API key. * Enter the URL `https://holidayapi.com/v1/holidays` into the Postman address bar * Click “Params” and enter your `key`, `country`, and `year`. * Click “Send” to make the API request to Holiday API. ![](https://i.imgur.com/B63DsOZ.png) Just like the response from Open Food Facts, the Holiday API returns a JSON data structure: { "status": 200, "holidays": { "2017-01-01": [ { "name": "Last Day of Kwanzaa", "date": "2017-01-01", "observed": "2017-01-01", "public": false }, { "name": "New Year's Day", "date": "2017-01-01", "observed": "2017-01-02", "public": true } ], ... }} In addition to making complex API requests easier, Postman also makes responses easier to read by formatting them. This means you don’t have to use a tool like JSON Formatter to make API data readable. ### Accessing an API from Code While using your web browser or Postman is great for testing and exploring an API, you (or your customers) will eventually want to connect to your API using code. There are many ways to do this, including: * [CURL](https://curl.haxx.se/docs/manpage.html) from the command line * [AJAX](https://developer.mozilla.org/en-US/docs/Web/Guide/AJAX/Getting_Started) or [Fetch API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API) in Javascript * [Net::HTTP](https://ruby-doc.org/stdlib-2.5.1/libdoc/net/http/rdoc/index.html) in Ruby * [Unirest](http://unirest.io/java.html) in Java Pretty much every programming language or framework has an HTTP library that you can use to make API requests, but I won’t cover them in detail here as this guide is all about no-code solutions. Just know that developers who want to use your API will likely need a tool like this. Tools for Building APIs Without Code ------------------------------------ Now that you know what an API is, why you might need one, and how to access an API, you’re ready to get your hands dirty and actually build one. All of the solutions below (with the exception of the last two) require no code, but many of them can be enhanced or improved with some custom development work. I find this model of [starting with a no-code prototype and then enhancing it with code](https://www.karllhughes.com/posts/creating-a-tech-startup-without-a-developer) from a developer to be ideal as it allows you to test your API before you [hire a software engineer](https://www.karllhughes.com/posts/hiring-process). ### [Sheetsu](https://sheetsu.com/) **Description:** Sheetsu might be the easiest way to get started building an API because you probably already have some data in a spreadsheet that will make up the backbone of your application. Sheetsu takes any Google Sheet and turns it into a queryable, flexible API. **Pros:** * Super simple to get started. This is truly a zero-code solution * Authentication via an API key * Route-based permissions * Automatically generated docs * Input forms to allow users to add content * Can use with Zapier to trigger events in other services, send emails, etc. * Includes “instructions” you can email to developers to help them start using your API * “Handlebars” option allows you to create a frontend template from your data **Cons:** * Does not allow you to enhance your API with custom code * No user roles for advanced permissions * Linking data in multiple tables isn’t really possible * Query options are somewhat limited (just a simple search by field) * Doesn’t appear to allow rate limiting * While there is a free plan, it’s extremely limited, and you’re likely going to have to move to the $49/month plan pretty quickly ### [Airtable](https://airtable.com/invite/r/4EaSmQNr) **Description:** I’ve started using Airtable for almost everything that I used to pack into spreadsheets. The big advantage to [using Airtable for your API](https://www.karllhughes.com/posts/using-airtable-as-an-api) is that they have an excellent visual user interface and integrations with many other tools built-in. Plus, you can use Zapier to trigger custom actions when new items show up in Airtable. **Pros:** * Super simple to get started, another zero-code solution * Authentication via a single API key * Permissions using sharing settings in UI * Input forms to allow users to add content * Can use Zapier to trigger events in other services, send emails, etc. * Database-style linking between records * Query by complex functions for advanced filtering and searching of records * User roles allow limited role-based permissions * Excellent automatic documentation generated for each table * API is automatic. Every Airtable you make already has API access **Cons:** * Officially only allows up to 5 API requests per second, which might be fine for light use, but could be limiting as you scale up * Authenticating users requires them to have an Airtable account and generate their own API key * Not as customizable as some options ### [WrapAPI](https://wrapapi.com/) **Description:** In theory, WrapAPI is a powerful and extremely useful tool for scraping data or making your own static website or spreadsheet into a queryable, dynamic API. In practice, it’s a bit more complicated. I have played around with WrapAPI quite a bit, and while I’m highly technical, I found that it only works correctly for some websites. Relatively simple HTML-only sites tend to work best while complex “client-side” apps tend to get lost. Still, it’s worth trying this tool out if you already have your data on a web page (maybe an HTML table) and you want to expose it via an API as well. **Pros:** * Great if the data you have already exists on the web and it’s displayed on a simple webpage * Would also be useful for monitoring web pages that don’t offer “official” APIs * API-key based authentication * Generous free tier (30k requests/month) **Cons:** * Can’t save data through the API, they are read-only * No logging, or extensions, and customizability is limited * A little more difficult to get set up unless you know how [HTML and CSS selectors work](https://www.w3schools.com/cssref/css_selectors.asp). ### [Restdb.io](https://restdb.io/) **Description:** What I like about RestDB.io is that it starts simple, but is very powerful if you are a developer. Unfortunately, it’s not quite as easy to add things like authentication or validation as it is in some platforms, but if you have some Javascript chops, you might be able to write that on your own. **Pros:** * Can create “lookup” relationships between records * Can bring your own database to customize even further * Pricing is very good and each tier scales really high * Lots of features can be added via “Codehooks”: * Authentication * Logging * Emails * Role or route-based permissions **Cons:** * If you want to get beyond basic use-cases, you’ll need a developer to help * The auto-generated docs are too simplistic * Vendor lock-in is a likely problem once you scale. While you can export your data, it would be pretty arduous to rebuild lots of custom Codehooks ### [Bubble](https://bubble.io/) **Description:** Bubble is probably [the best web application builder](https://www.karllhughes.com/posts/bubble-web-app) I’ve seen for those who don’t code, and because it also includes an option to expose your data or workflows over an API, it’s worth noting here. You can hook into your application’s permission settings to manage access to resources or keep certain resources hidden completely. If you’re already using Bubble for your website, then using them to generate your API is an easy decision. **Pros:** * Permissions and authentication managed the same way as it is in the application builder * Documentation generation with Swagger * Logging through Bubble’s admin interface * Creating data models is pretty simple and you can import CSV files with your data **Cons:** * Learning Bubble is a big undertaking, and if you go with them, you’re likely all in * Not quite as customizable as building your own API with code, but it’s pretty close ### [Algolia](https://www.algolia.com/) **Description:** For searching data, Algolia is simply magical. Most other solutions here aren’t concerned with performance because often, building an MVP doesn’t warrant it, but if you’re in need of speed and you have a lot of data, Algolia is a great solution. It can take some technical setup depending on how you want to use it, but [there’s a Zapier connector in the works](https://discourse.algolia.com/t/algolia-for-zapier-beta/2949/7), and you can upload records via CSV or JSON. Even if you can’t write code, you can probably get Algolia working for you. **Pros:** * Super-good at searching, setting up search rules, and speed * Slightly more difficult than editing a spreadsheet, but not much * Can be used as a replacement for Solr or Elasticsearch (if you don’t know what those are, don’t worry about it) * Logging is available * Scales up as high as you want **Cons:** * Limited flexibility and customization * No documentation generation * Can get expensive at higher use levels ### [Strapi](https://strapi.io/) **Description:** Strapi is an open-source content management system that lets you self-host an API on your own server in minutes. Even if you’re not an experienced developer, you can probably follow the [setup instructions](https://strapi.io/documentation/developer-docs/latest/getting-started/quick-start.html) to get started. The biggest limitation today is that you’ll need to run and maintain a server to host your Strapi backend. But, as an [open-source company](https://www.karllhughes.com/posts/open-source-companies), I imagine Strapi will add a hosted version that makes it even easier for non-technical users to get started with. **Pros:** * Very flexible and fast * Includes a GraphQL interface, which is a better option for some use cases * Can build complex relationships between data models * Permissions and authentication rules can be set granularly * Scales up as much as your server can handle * Free (but you do pay for your server) **Cons:** * Not completely “no-code” because you have to setup and run it on your own server * More complicated to set up and configure than some options ### [PHP CRUD API](https://github.com/mevdschee/php-crud-api) **Description:** If you’re semi-technical or you can hire a developer to do some initial setup, PHP-CRUD-API might be a great option. Once you hook it up to an existing MySQL, Postgres, or SQL Server database, it automatically generates an API that is documented and highly customizable. The downside to this approach is that you’ll have to pay to host and set up the application. The upside is that it should scale with you long after your MVP. **Pros:** * Much more flexible as you can view and modify the whole app’s source code * Once set up, you just have to modify your database schema to modify the API * Scales as much as the server you put it on * Automatic [Swagger](https://swagger.io/docs/) documentation generation * Works great if you already have an existing database **Cons:** * Not really a “no-code” solution as you’ll likely have to have a developer set it up for you * While the application is free, you will have to pay for hosting it (probably $5-$25/month) * No authentication, logging, triggers, etc. included out of the box, but they can be added with some custom coding Conclusion ---------- Building an API without code is getting easier every year. When I first wrote this guide in 2017, the options were limited, but with the widespread growth of [cloud services](https://www.karllhughes.com/posts/cloud-services) and [low-code tools](https://stackoverflow.blog/2021/06/09/using-low-code-tools-to-iterate-products-faster/), I’ve been able to add many new options to this guide. I intend to keep updating this guide periodically. If you have your own suggestions for building APIs without code, [find me on Twitter](https://twitter.com/karllhughes) to let me hear about them.
karllhughes
862,573
Advanced MessagePack capabilities
Photo by Peretz Partensky / CC BY-SA 2.0 MessagePack is a binary format for data serialization. It...
0
2021-10-13T16:36:52
https://dev.to/tarantool/advanced-messagepack-capabilities-4735
php, programming, datacompression, tutorial
![e9d6306b85aa4ce811a0dcca6d033789](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/022osmi13djx55a7zc9j.jpg)<figcaption>Photo by Peretz Partensky / CC BY-SA 2.0</figcaption> MessagePack is a binary format for data serialization. It is positioned by the authors as a more efficient alternative to JSON. Due to its speed and compactness, it's often used as a format for data exchange in high-performance systems. The other reason this format became popular is that it's very easy to implement. Your favorite programming language most likely already has several libraries designed to work with it. In this article, I'm not going to tell you how MessagePack works or compare it to its counterparts: there are plenty of materials on this topic on the Internet. What's really missing is information about MessagePack's extended type system. I'll try to explain and show you by examples what it is and how to make serialization even more efficient using extension types. ## The Extension type The MessagePack specification defines 9 basic types: - Nil - Boolean - Integer - Float - String - Binary - Array - Map - Extension. The last type, Extension, is a container designed for storing extension types. Let's look closely at how it works. It will help us with writing our own types. Here is how the container is structured: <a name="ext-schema"></a>![425470e1345d1767f7f1ae6d29195f30 (1)](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gqtizld8htnwswrd75to.png) *Header* is the container's header (1 to 5 bytes). It contains the payload size, i.e., the length of the *Data* field. To learn more about how the header is formed, take a look at the [specification](https://github.com/msgpack/msgpack/blob/master/spec.md#ext-format-family). <a name="ext-type"></a>*Type* is the ID of the stored type, an 8-bit signed integer. Negative values are reserved for official types. User types' IDs can take any value in the range from 0 to 127. *Data* is an arbitrary byte string up to 4 GiB long, which contains the actual data. The format of official types is described in the specification, while the format of user types may depend entirely on the developer's imagination. > *The list of official types currently includes only [Timestamp](https://github.com/msgpack/msgpack/blob/master/spec.md#timestamp-extension-type) with the ID of -1. Occasionally there are proposals to add new types (such as UUIDs, multidimensional arrays, or geo-coordinates), but since the discussions are not very active, I would not expect anything new to be added in the near future.* ## Hello, World! ![34ae802c3fd31328904479bee387fe93 (2)](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ssplc0x8mo1l6e3ecoqu.jpg)<figcaption>Photo by Brett Ohland / CC BY-NC-SA 2.0</figcaption> That's enough theory, let's start coding! For these examples, we'll use the [msgpack.php](https://github.com/rybakit/msgpack.php) MessagePack library since it provides a convenient API to handle extension types. I hope you'll find these code examples easy to understand even if you use other libraries. <a name="uuid-example"></a>Since I mentioned UUID, let's implement support for this data type as an example. To do so, we'll need to write an extension &mdash; a class to serialize and deserialize UUID values. We will use the [symfony/uid](https://symfony.com/doc/current/components/uid.html) library to make handling such values easier. > *This example can be adapted for any UUID library, be it the popular [ramsey/uuid](https://uuid.ramsey.dev/en/latest/), PECL [uuid](https://pecl.php.net/package/uuid) module, or a user implementation.* Let's name our class `UuidExtension`. The class must implement the `Extension` interface: ```php use MessagePack\BufferUnpacker; use MessagePack\Extension; use MessagePack\Packer; use Symfony\Component\Uid\Uuid; final class UuidExtension implements Extension { public function getType(): int { // TODO } public function pack(Packer $packer, mixed $value): ?string { // TODO } public function unpackExt(BufferUnpacker $unpacker, int $extLength): Uuid { // TODO } } ``` We determined earlier what the [type](#ext-type) (ID) of the extension is, so we can easily implement the `getType()` method. In the simplest case, this method could return a fixed constant, globally defined for the whole project. However, to make the class more versatile, we'll let it define the type when initializing the extension. Let's add a constructor with one integer argument, `$type`: ```php /** @readonly */ private int $type; public function __construct(int $type) { if ($type < 0 || $type > 127) { throw new \OutOfRangeException( "Extension type is expected to be between 0 and 127, $type given" ); } $this->type = $type; } public function getType(): int { return $this->type; } ``` Now let's implement the `pack()` method. From the method's signature, we can see that it takes two parameters: a `Packer` class instance and a `$value` of any type. The method must return either a serialized value (wrapped into the Extension container) or `null` if the extension does not support the value type: ```php public function pack(Packer $packer, mixed $value): ?string { if (!$value instanceof Uuid) { return null; } return $packer->packExt($this->type, $value->toBinary()); } ``` The reverse operation isn't much harder to implement. The `unpackExt()` method takes a `BufferUnpacker` instance and the length of the serialized data (the size of the *Data* field from the [schema](#ext-schema) above). Since we've saved the binary representation of a UUID object in this field, all we need to do is read this data and build a `Uuid` object: ```php public function unpackExt(BufferUnpacker $unpacker, int $extLength): Uuid { return Uuid::fromString($unpacker->read($extLength)); } ``` Our extension is ready! The last step is to register a class object with a specific ID. Let the ID be `0`: ```php $uuidExt = new UuidExtension(0); $packer = $packer->extendWith($uuidExt); $unpacker = $unpacker->extendWith($uuidExt); ``` Let's make sure everything works correctly: ```php $uuid = new Uuid('7e3b84a4-0819-473a-9625-5d57ad1c9604'); $packed = $packer->pack($uuid); $unpacked = $unpacker->reset($packed)->unpack(); assert($uuid->equals($unpacked)); ``` That was an example of a simple UUID extension. Similarly, you can add support for any other type used in your application: [DateTime](https://github.com/rybakit/msgpack.php/blob/master/examples/MessagePack/DateTimeExtension.php), [Decimal](https://github.com/tarantool-php/client/blob/master/src/Packer/Extension/DecimalExtension.php), Money. Or you can write a versatile extension that allows serializing any object (as it was done in [KPHP](https://vkcom.github.io/kphp/kphp-language/howto-by-kphp/serialization-msgpack.html?highlight=msgpack#internal-implementation-details)). However, this is not the only use for extensions. I'll now show you some interesting examples that demonstrate other advantages of using extension types. ## "Lorem ipsum" or compressing the incompressible ![851828579dec0b5e1c75b41834b61030 (2)](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zyqp8ph4if39wthe6apk.jpg)<figcaption>Photo by dog97209 / CC BY-NC-ND 2.0</figcaption> If you've ever inquired about MessagePack before, you probably know the phrase from its official website, [msgpack.org](https://msgpack.org/): "*It's like JSON, but fast and small*." In fact, if you compare how much space the same data occupies in JSON and MessagePack, you'll see why the latter is a much more compact format. For example, the number `100` takes 3 bytes in JSON and only 1 in MessagePack. The difference becomes more significant as the number's order of magnitude grows. For the maximum value of int64 (`9223372036854775807`), the size of the stored data differs by as much as 10 bytes (19 against 9)! The same is true for boolean values &mdash; 4 or 5 bytes in JSON against 1 byte in MessagePack. It is also true for arrays because many syntactic symbols, such as commas separating the elements, semicolons separating the key-value pairs, and brackets indicating the array boundaries, don't exist in binary format. Obviously, the larger the array is, the more syntactic litter accumulates along with the payload. With string values, however, things are not so straightforward. If your strings do not consist entirely of quotes, line feeds, and other special characters that require escaping, then you won't notice a big difference between their sizes in JSON and in MessagePack. For example, `"foobar"` has a length of 8 bytes in JSON and 7 in MessagePack. Note that the above only applies to UTF-8 strings. For binary strings, JSON's disadvantage against MessagePack is obvious. > *Knowing this peculiarity of MessagePack, you can have a good laugh reading articles that compare the two formats in terms of data compression efficiency while using mainly string data for the tests. Apparently, any conclusions based on the results of such tests would make no practical sense. So take those articles skeptically and run comparative tests on* ***your own*** *data.* At some point, there were discussions about whether to add string compression (individual or in frames) to the specification to make string serialization more compact. However, the idea was rejected, and the implementation of this feature was left to users. So let's try it. Let's create an extension that will compress long strings. We will use whatever compression tool is at hand, for example, [zlib](https://www.php.net/manual/en/book.zlib.php). > *Choose the data compression algorithm based on the specifics of your data. For example, if you are working with lots of short strings, take a look at [SMAZ](https://github.com/antirez/smaz).* Let's start with the constructor for our new class, `TextExtension`. The first argument is the extension ID, and as a second optional argument, we'll add minimum string length. Strings shorter than this value will be serialized in a standard way, without compression. In this way, we will avoid cases where the compressed string ends up longer than the initial one: ```php final class TextExtension implements Extension { /** @readonly */ private int $type; /** @var positive-int */ private int $minLength; public function __construct(int $type, int $minLength = 100) { ... $this->type = $type; $this->minLength = $minLength; } ... } ``` To implement the `pack()` method, we might write something like this: ```php public function pack(Packer $packer, mixed $value): ?string { if (!is_string($value)) { return null; } if (strlen($value) < $this->minLength) { return $packer->packStr($value); } // compress and pack ... } ``` However, this wouldn't work. String is one of the basic types, so the packer will serialize it before our extension is called. This is done in the msgpack.php library for performance reasons. Otherwise, before serializing each value, the packer would need to scan the available extensions, considerably slowing down the process. Therefore, we need to tell the packer not to serialize certain strings as, you know, strings but to use an extension. As you might guess from the [UUID example](#uuid-example), it can be done via a [ValueObject](https://martinfowler.com/bliki/ValueObject.html). Let's call it `Text`, similar to the extension class: ```php /** * @psalm-immutable */ final class Text { public function __construct( public string $str ) {} public function __toString(): string { return $this->str; } } ``` So instead of ```php $packed = $packer->pack('a very long string'); ``` we'll use a `Text` object to mark long strings: ```php $packed = $packer->pack(new Text('a very long string')); ``` Let's update the `pack()` method: ```php public function pack(Packer $packer, mixed $value): ?string { if (!$value instanceof Text) { return null; } $length = strlen($value->str); if ($length < $this->minLength) { return $packer->packStr($value->str); } // compress and pack ... } ``` Now we just need to compress the string and put the result in an Extension. Note that the minimum length limit does not guarantee that the string will take less space after compression. For this reason, you might want to compare the lengths of the compressed string and the original and choose whichever is more compact: ```php $context = deflate_init(ZLIB_ENCODING_GZIP); $compressed = deflate_add($context, $value->str, ZLIB_FINISH); return isset($compressed[$length - 1]) ? $packer->packStr($value->str) : $packer->packExt($this->type, $compressed); ``` Deserialization: ```php public function unpackExt(BufferUnpacker $unpacker, int $extLength): string { $compressed = $unpacker->read($extLength); $context = inflate_init(ZLIB_ENCODING_GZIP); return inflate_add($context, $compressed, ZLIB_FINISH); } ``` Let's see the result: ```php $longString = <<<STR Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. STR; $packedString = $packer->pack($longString); // 448 bytes $packedCompressedString = $packer->pack(new Text($longString)); // 291 bytes ``` In this example, we saved 157 bytes, or *35% of what would be the standard serialization result*, on just one string! ## From "schema-less" to "schema-mixed" ![04bbfc5f6758a3841bc7753e4421e960 (8)](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hom3tiyn4bw7uqyfwo1g.jpg)<figcaption>Photo by Adventures with E&L / CC BY-NC-ND 2.0</figcaption> Compressing long strings is not the only way to save space. MessagePack is a *schemaless*, or *schema-on-read*, format that has its advantages and disadvantages. One of the disadvantages in comparison with *schema-full* (*schema-on-write*) formats is highly ineffective serialization of repeated data structures. An example of such data is a selection from a database, where all elements of the resulting array have the same structure: ```php $userProfiles = [ [ 'id' => 1, 'first_name' => 'First name 1', 'last_name' => 'Last name 1', ], [ 'id' => 2, 'first_name' => 'First name 2', 'last_name' => 'Last name 2', ], ... [ 'id' => 100, 'first_name' => 'First name 100', 'last_name' => 'Last name 100', ], ]; ``` If you serialize this array with MessagePack, the repeated keys of each element in the array will take a substantial part of the total data size. But what if we could save the keys of such structured arrays just once? It would significantly cut down the size and also speed up serialization since the packer would have fewer operations to perform. Like before, we are going to use extension types for that. Our type will be a value object wrapped around an arbitrary *structured* array: ```php /** * @psalm-immutable */ final class StructList { public function __construct( public array $list, ) {} } ``` > *If your project includes a library for database handling, there is probably a special class in that library to store table selection results. You can use this class as a type instead of/along with* `StructList`*.* Here is how we are going to serialize such arrays. First, we'll check the array size. Of course, if the array is empty or has only one element, there is no reason to store keys separately from values. We'll serialize arrays like these in a standard way. In other cases, we'll first save a list of keys and then a list of values. We won't be storing an associative array list, which is the standard MessagePack option. Instead, we'll write data in a more compact form: ![3ca04136382cc4c0767fbc1626e9908d (9)](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1hv5qdcphz9uxjxqbt3y.png) Implementation: ```php final class StructListExtension implements Extension { ... public function pack(Packer $packer, mixed $value): ?string { if (!$value instanceof StructList) { return null; } $size = count($value->list); if ($size < 2) { return $packer->packArray($value->list); } $keys = array_keys(reset($value->list)); $values = ''; foreach ($value->list as $item) { foreach ($keys as $key) { $values .= $packer->pack($item[$key]); } } return $packer->packExt($this->type, $packer->packArray($keys). $packer->packArrayHeader($size). $values ); } ... } ``` To deserialize, we need to unpack the keys array and then use it to restore the initial array: ```php public function unpackExt(BufferUnpacker $unpacker, int $extLength): array { $keys = $unpacker->unpackArray(); $size = $unpacker->unpackArrayHeader(); $list = []; for ($i = 0; $i < $size; ++$i) { foreach ($keys as $key) { $list[$i][$key] = $unpacker->unpack(); } } return $list; } ``` That's it! Now, if we serialize `$profiles` from the example above as a normal array and as a structured `StructList`, we'll see a great difference in size — *the latter will be 47% more compact*. ```php $packedList = $packer->pack($profiles); // 5287 bytes $packedStructList = $packer->pack(new StructList($profiles)); // 2816 bytes ``` > *We could go further and create a specialized `Profiles` type to store information about the array structure in the extension code. This way, we wouldn't need to save the keys array. However, in this case, we would lose in versatility.* ## Conclusion We've taken a look at just a few examples of using extension types in MessagePack. To see more examples, check the [msgpack.php](https://github.com/rybakit/msgpack.php/tree/master/examples) library. For the implementations of all extension types supported by the [Tarantool](https://www.tarantool.io/en/doc/latest/dev_guide/internals/msgpack_extensions/) protocol, see the [tarantool/client](https://github.com/tarantool-php/client/tree/master/src/Packer/Extension) library. I hope this article gave you a sense of what extension types are and how they can be useful. If you're already using MessagePack but haven't known about the feature, this information might inspire you to reconsider your current methods of working with the format and start using custom types. If you're just wondering which serialization format to choose for your next project, the article might help you make a reasonable choice, adding a point in favor of MessagePack :) ## Links [Get Tarantool on our website](http://www.tarantool.io/en/download/os-installation/docker-hub/?utm_source=dev&utm_medium=referral&utm_campaign=2021) [Get help in our telegram channel](http://t.me/tarantool?utm_source=dev&utm_medium=referral&utm_campaign=2021)
tarantool
862,613
PrabhuPay WooCommerce Plugin for WordPress
PrabhuPAY is a mobile wallet with mobile based payment solution under Prabhu Technology Pvt. Ltd....
0
2021-10-13T18:10:37
https://dev.to/madhavdhungana/prabhupay-woocommerce-plugin-for-wordpress-37n2
wordpress, programming
PrabhuPAY is a mobile wallet with mobile based payment solution under Prabhu Technology Pvt. Ltd. Now PrabhyPay plugin is available for WooCommerce. Get the plugin from the following github link https://github.com/madhav-dhungana/prabhu-pay-woocommerce
madhavdhungana
938,065
Timeout using context package in Go
Key takeaways context.WithTimeout can be used in a timeout implementation. WithDeadline...
0
2021-12-28T04:27:05
https://dev.to/hgsgtk/timeout-using-context-package-in-go-1b3c
go
## Key takeaways - [context.WithTimeout](https://pkg.go.dev/context#WithTimeout) can be used in a timeout implementation. - [WithDeadline](https://cs.opensource.google/go/go/+/refs/tags/go1.17.5:src/context/context.go;l=434;drc=refs%2Ftags%2Fgo1.17.5) returns [CancelFunc](https://pkg.go.dev/context#CancelFunc) that tells an operation to abandon its work. - timerCtx implements `cancel()` by stopping its timer then delegating to [cancelCtx.cancel](https://cs.opensource.google/go/go/+/refs/tags/go1.17.5:src/context/context.go;l=397), and cancelCtx closes the context. - ctx.Done returns a channel that's closed when work done on behalf of this context should be canceled. ## context.WithTimeout The context package as the standard library was moved from the golang.org/x/net/context package in [Go 1.7](https://tip.golang.org/doc/go1.7#context). This allows the use of contexts for cancellation, timeouts, and passing request-scoped data in other library packages. [context.WithTimeout](https://pkg.go.dev/context#WithTimeout) can be used in a timeout implementation. ```go func WithTimeout(parent Context, timeout time.Duration) (Context, CancelFunc) ``` For example, you could implement it as follow ([Go playground](https://go.dev/play/p/lEEfym9gs25)): ```go package main import ( "context" "fmt" "log" "time" ) func execute(ctx context.Context) error { proc1 := make(chan struct{}, 1) proc2 := make(chan struct{}, 1) go func() { // Would be done before timeout time.Sleep(1 * time.Second) proc1 <- struct{}{} }() go func() { // Would not be executed because timeout comes first time.Sleep(3 * time.Second) proc2 <- struct{}{} }() for i := 0; i < 3; i++ { select { case <-ctx.Done(): return ctx.Err() case <-proc1: fmt.Println("process 1 done") case <-proc2: fmt.Println("process 2 done") } } return nil } func main() { ctx := context.Background() ctx, cancel := context.WithTimeout(ctx, 2*time.Second) defer cancel() if err := execute(ctx); err != nil { log.Fatalf("error: %#v\n", err) } log.Println("Success to process in time") } ``` Canceling this context releases resources associated with it, so you should call cancel as soon as the operations running in this Context complete. ```go ctx := context.Background() ctx, cancel := context.WithTimeout(ctx, 2*time.Second) defer cancel() ``` Cancel notification after timeout is received from [ctx.Done()](https://pkg.go.dev/context#Context). Done returns a channel that's closed when work done on behalf of this context should be canceled. WithTimeout arranges for Done to be closed when the timeout elapses. ```go select { case <-ctx.Done(): return ctx.Err() } ``` When you execute this code, you will get the following result. A function call that can be completed in 1s will be finished, but a function call that can be done after 3s will not be executed because a timeout occurs in 2s. ```bash $ go run main.go process 1 done 2021/12/28 12:32:59 error: context.deadlineExceededError{} exit status 1 ``` In this way, you can implement timeout easily. ## Deep dive into context.WithTimeout Here's a quick overview. ![A diagram describing the inside of context package](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vh5ni7ikyj7lsrtwzx2l.png) [WithTimeout](https://cs.opensource.google/go/go/+/refs/tags/go1.17.5:src/context/context.go;l=506) is a wrapper function for [WithDeadline](https://cs.opensource.google/go/go/+/refs/tags/go1.17.5:src/context/context.go;drc=refs%2Ftags%2Fgo1.17.5;l=434). ```go func WithTimeout(parent Context, timeout time.Duration) (Context, CancelFunc) { return WithDeadline(parent, time.Now().Add(timeout)) } ``` [WithDeadline](https://cs.opensource.google/go/go/+/refs/tags/go1.17.5:src/context/context.go;l=434;drc=refs%2Ftags%2Fgo1.17.5) returns [CancelFunc](https://pkg.go.dev/context#CancelFunc) that tells an operation to abandon its work. Internally, a function that calls `timerCtx.cancel()`, a function that's not exported, will be returned. ```go func WithDeadline(parent Context, d time.Time) (Context, CancelFunc) { // (omit) c := &timerCtx{ cancelCtx: newCancelCtx(parent), deadline: d, } // (omit) return c, func() { c.cancel(true, Canceled) } } // (omit) type timerCtx struct { cancelCtx timer *time.Timer // Under cancelCtx.mu. deadline time.Time } ``` A [timerCtx](https://cs.opensource.google/go/go/+/refs/tags/go1.17.5:src/context/context.go;drc=refs%2Ftags%2Fgo1.17.5;l=465) carries a timer and a deadline, and embeds a [cancelCtx](https://cs.opensource.google/go/go/+/refs/tags/go1.17.5:src/context/context.go;l=342;drc=refs%2Ftags%2Fgo1.17.5;bpv=1;bpt=1). ```go type cancelCtx struct { Context mu sync.Mutex // protects following fields done atomic.Value // of chan struct{}, created lazily, closed by first cancel call children map[canceler]struct{} // set to nil by the first cancel call err error // set to non-nil by the first cancel call } ``` timerCtx implements `cancel()` by stopping its timer then delegating to [cancelCtx.cancel](https://cs.opensource.google/go/go/+/refs/tags/go1.17.5:src/context/context.go;l=397). ```go func (c *cancelCtx) cancel(removeFromParent bool, err error) { if err == nil { panic("context: internal error: missing cancel error") } c.mu.Lock() if c.err != nil { c.mu.Unlock() return // already canceled } c.err = err d, _ := c.done.Load().(chan struct{}) if d == nil { c.done.Store(closedchan) } else { close(d) } for child := range c.children { // NOTE: acquiring the child's lock while holding parent's lock. child.cancel(false, err) } c.children = nil c.mu.Unlock() if removeFromParent { removeChild(c.Context, c) } } ``` In the function the context is closed. ## Conclusion I explained how to implement timeout with context package, and dived into internal implementation in it. I hope this helps you understand the Go implementation.
hgsgtk
862,871
Creating through a crisis
You’re a maker, right? Why don’t you make something?
0
2021-10-13T23:09:52
https://dev.to/jasonleowsg/creating-through-a-crisis-228o
beginners, codenewbie, covid19, decodingcoding
--- title: Creating through a crisis published: true description: You’re a maker, right? Why don’t you make something? tags: beginners, codenewbies, covid19, decodingcoding cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zieq1kqg161y1a5924ak.jpg --- ### *You’re a maker, right? Why don’t you make something?* --- There’s a scene in Iron Man 3 where Tony Stark was having a panic attack from the work as a superhero, and he called Harley, the kid who sheltered him while he was hiding from his enemies. Harley goes console Stark, and said, “You’re a mechanic, right? Why don’t you build something?” That line really stuck with me. Floated around in my subconscious. Then it appeared as a mini-epiphany. I’m not sure when this happened, but there’s a repeating pattern that I observed about how I cope with crisis. And it’s about building something. First it was a chronic medical condition in 2018 (which I had since recovered from, thankfully). While I was seeking treatment, I wasn’t taking on paid projects then, but I continued to work. In fact, it was that same time that I tried out the 12 startups in 12 months challenge. I called it [#1mvp1month](https://twitter.com/jasonleowsg/status/970890710751723521). I didn’t end up making 12 products, but was close – 8 products. That was a strange experience, because on one hand, I was going through a low period in terms of health and mental wellbeing. But yet I was creating like nobody’s business. Then in 2019, in a bid to get my health back, I turned to keto and intermittent fasting. Hard as f\*\*k, trying to stop eating carbs. But I did. In the process of changing my diet and my health, I made a product called [Keto List Singapore](https://ketolistsingapore.com) – a directory of keto resources and links. That helped me in a big way, and now it’s a little side business. Then, COVID-19 came. Complete topsy-turvy of life. I’m safe at home, but also stuck at home. For someone who enjoys being outside, what can I do to cope? I create. I made products *again* - websites, apps, software to help people in the pandemic. Like a man possessed, I kept making. In the end I made a total of 11 products! > Some examples of the 11 products I made: > > [Dabao Dash](https://sheet2site.com/s/dabaodash/) - a self-help community board of offers and requests, matching hawkers and small F&Bs with delivery riders affected > [Majulah Belanja](https://sheet2site.com/s/majulahbelanja/) - an offers/requests board for to help match donors with employers and migrant workers. > [VisualAid](https://visualaid.sg) - translated illustrations to help healthcare workers communicate better with migrant worker patients. > [Grant Hunt](https://gogranthunt.com/) - A chat bot to find grants for charities and non-profits in Singapore. I realised that the intensity of my creating is directly proportional to how challenged I am in a crisis, or inversely proportional to how low I feel. I guess I needed something to balance it out. Creating always felt energizing and uplifting. It gives life, right when I need to feel more alive. It’s like I can almost hear Harley say that to me over the phone (though I’m under no illusions of being a superhero): *“You’re a maker, right? Why don’t you make something?”* Damned hell I will. --- Follow my daily writings on [Lifelog](https://golifelog.com/goals/30), where I write about learning to code, goals, productivity, indie hacking and tech for good.
jasonleowsg
863,004
Swift Result for API wrangling
https://www.youtube.com/watch?v=AIb3CQH8_jg
0
2021-10-14T02:18:34
https://dev.to/ybapps/swift-result-for-api-wrangling-3pjp
https://www.youtube.com/watch?v=AIb3CQH8_jg
ybapps
863,014
Divtober Day 14: Fancy
Cartoon of a fancy-looking gentleman done with CSS and a single HTML element.
14,881
2021-10-14T11:59:44
https://dev.to/alvaromontoro/divtober-day-14-fancy-al1
codepen, divtober, css, showdev
--- title: Divtober Day 14: Fancy description: Cartoon of a fancy-looking gentleman done with CSS and a single HTML element. published: true tags: codepen,divtober,css,showdev series: divtober --- The word of the day is "fancy"... so here's a cartoon of a fancy-looking British gentleman with a hat, a monocle, and an umbrella: {% codepen https://codepen.io/alvaromontoro/pen/rNzOGMa %} ---- There's a second element (not used in the drawing) with a link to a [YouTube video of how it was coded](https://youtu.be/Giq9h88lVnc) (although it is a bit tough to follow with things "jumping around"): {% youtube Giq9h88lVnc %} *After recording the video, I changed the code a little. The image will look the same, but the code is a bit different (using the short notation) as mentioned in the comments below. ---- I did a quick second version with the man holding a cup of tea with the pinkie out: {% codepen https://codepen.io/alvaromontoro/pen/RwZPXRa %}
alvaromontoro
863,016
Smart contract for Voting
jaymakwanacodes.me To view source code all at once head over to my Github Let's understand with the...
0
2021-10-14T05:19:48
https://dev.to/jmakwana01/smart-contract-for-voting-4ca4
blockchain, solidity, web3, smartcontract
[jaymakwanacodes.me](https://www.jaymakwanacodes.me/) To view source code all at once head over to my [Github](https://github.com/jmakwana01/Simple-Bank-Smart-Contract) Let's understand with the basic, why blockchain and how this smart contract adds value to the existing system In the Traditional sense there are some concerning problem with voting system,in web2 the interface and backend is connected to a central database which raises the problem of rouge admin,the voting system code is not public which shows we need to trust that rule are been followed and my vote has been counted So to come up with a possible change let try Blockchain in the context, To understand it let start with how blockchain works,in the most basic terms ,blockchain are like link list node connected to nodes creating a network for itself ,okay! so it's solves the the rouge admin issue as it is decentralized no one owns the chain ,let try to use it to next problem the transparency feature blockchain has append only logs that are public so in that case when we deploy our (contract)code for election in blockchain it will be visible in public as it changes the state of the chain,when someone vote the address can be seen who vote(it's still anonymous) so let dive into the technical stuff ##Smart contract Smart contracts are simply programs stored on a blockchain that run when predetermined conditions are met. They typically are used to automate the execution of an agreement so that all participants can be immediately certain of the outcome, without any intermediary's involvement or time loss. also something that solves a problem as smart contract are immutable the rule of election can't be changed ##Solidity Solidity is an object-oriented programming language for writing smart contracts. It is used for implementing smart contracts on various blockchain platforms, most notably, Ethereum. We will be using Solidity to write our smart contract if you are not familiar with it don't sweat just carry on reading ##Code ```Pragma Solidity ^0.6.6;``` Solidity is a growing language there are frequent updates we are declaring which compiler to use in the line (look at it as something like package-lock.json) next let create a contract if you are familiar with java or any other object oriented language which uses classes same way the keyword contract is used in solidity ```contract PollContract {}``` Struct types are used to represent a record. we need to create a record for poll syntax:-struct struct_name { type1 type_name_1; } here we want to create struct for poll to create a poll we should add following -id -question(title for the poll) -thumbnail(image banner for the poll) -array of votes(to track the no. of votes recieved) -array of options(to create voting option for the voter) ###all the code from following line is meant to be added inside contract Pollcontract{} ``` struct Poll { uint256 id; string question; string thumbnail; uint64[] votes; bytes32[] options; }``` Now we have created a struct for poll we declaring we need this parameter for poll let do the same for the voter -id -array of votedId(to track whether the user have vote or not ) -voted map (mapping to votedId array to boll) ```struct Voter { address id; uint256[] votedIds; mapping(uint256 => bool) votedMap; }``` now let's define how pool and voter interact we will create a Poll array and call it polls and set it to private( Can be accessed only by authorized contracts), next mapping user address to the voter to extract info about the user, now we will add event poll created with parameter set to _pollId; ```Poll[] private polls; mapping(address => Voter) private voters; event PollCreated(uint256 _pollId);``` Now we will add createPoll function to as name suggest create poll we get some parameter as you see in below code we will add some require to make sure question isn't empty and have atleast 2 option for every poll ```function createPoll(string memory _question, string memory _thumb, bytes32[] memory _options) public { require(bytes(_question).length > 0, "Empty question"); require(_options.length > 1, "At least 2 options required") uint256 pollId = polls.length; Poll memory newPoll = Poll({ id: pollId, question: _question, thumbnail: _thumb, options: _options, votes: new uint64[](_options.length) }); polls.push(newPoll); emit PollCreated(pollId); }``` Now as we have written a function to create a poll we need to write a function to get poll we use require to check that poll id shouldn't be null ```function getPoll(uint256 _pollId) external view returns(uint256, string memory, string memory, uint64[] memory, bytes32[] memory) { require(_pollId < polls.length && _pollId >= 0, "No poll found"); return ( polls[_pollId].id, polls[_pollId].question, polls[_pollId].thumbnail, polls[_pollId].votes, polls[_pollId].options ); }``` now we can create a poll and get a poll let's start voting now create a function to vote ,we will add some required check to find if the poll exsist ,if option exsist and whether you have already vote the syntax msg.sender get the user address ```function vote(uint256 _pollId, uint64 _vote) external { require(_pollId < polls.length, "Poll does not exist"); require(_vote < polls[_pollId].options.length, "Invalid vote"); require(voters[msg.sender].votedMap[_pollId] == false, "You already voted"); polls[_pollId].votes[_vote] +=1 ; voters[msg.sender].votedIds.push(_pollId); voters[msg.sender].votedMap[_pollId] = true; }``` now to get the voter let create a function getVoter to get info about voterId ```function getVoter(address _id) external view returns(address, uint256[] memory) { return ( voters[_id].id, voters[_id].votedIds ); } function getTotalPolls() external view returns(uint256) { return polls.length; }``` Now our Smart contract is completed we can deploy it and check it on our Remix or some testnet as well, we can create front-end with any of the frame work and can connect it with our smart-contract using web3.js as a middleware
jmakwana01
863,018
Refactoring with Git
This is the 6th Week that I've been being in OSD 600. And this week we have a new work to do -- Lab...
0
2021-10-15T03:07:55
https://dev.to/derekjxy/refactoring-with-git-4cn4
opensource, github, javascript, html
This is the __6th Week__ that I've been being in __OSD 600__. And this week we have a new work to do -- __Lab 5__. _Different from_ the previous labs we had, this week we are going to __modify__ our code of SSG program and make it __looks better__. It's about __refactoring__ our code. _Due to we added new features to our program, the complexity of the code grows with it. We added new features and we forced to create new code paths, functions, variables. which will cause we are start losing control of the code._ __Refactoring__ is a technique for improving the structure and maintainability of our code without altering its behavior. <br> ##Procedure #### #1. Get the repository to my PC After reading the __instruction__ of the __Lab 5__. I __cloned__ my repository to my _local machine_ and then used the command `git checkout -b refactor` in git to created a new branch named __'refactor'__. And then I use the command `code . ` to run the code in __Visual Studio Code__. #### #2. Go through the code _When my SSG code available in my local machine. I read through my code again. And I found out that there are a bunch of code are similar_. __Therefore__, I decided to make some __new functions__ to _reduce the amount of duplication_. #### #3. Create Functions __Firstly__, I created a function called __"mdFileHtmlConversion"__ to store the code that adding a new _feature_ to my SSG so that all `---` in a Markdown file would get converted to an `<hr>` tag. __Secondly__, I found that the way I try to convert a `txt` file to a `html` file is very __similar__ the way I covert a `md` file to a `html` file. Therefore, I put then into a new function named __"htmlGenerator"__. __Lastly__, I have duplicated logic and code for my program to convert `a folder` and `a single file`. In order to make my program with __less duplication__. I created a new function named __"htmlConversion"__ to _store the converting logic and code_. #### #4. Improve variable naming Since I updated my code with some __new functions__, it became __more tidy__. And my next step was to __rename__ those variables that with a _non-sense name_. For example, I have a variable named __'fname'__. I mean, there are many possibilities for a variable named __'fname'__, it could be __'first name'__ or __'file name'__ or __'french name'__, etc. So, I changed it to a more specific name __'fileName'__. It's way _more clear_ than with the name 'fname'.Also, I changed the variable __'stats'__ to __'filePath'__ so that it became easier to understand. #### #5. Get rid of global variables __Finally__, I _removed_ all the `global variables` I had in my code. Instead of having global variables, I put those variables to each __specific function__ that _I will use them_. #### #6. Combine my commits After updating my code, I use this command `git rebase master -i` to start an interactive rebase, and opening the editor. And then I overwrite the `'pick'` keyword to `'squash'` so that I can combine all the commits I had into __1 commit__. Then I use the command `git commit --amend` to rename some of my __commit descriptions__. Last but not least, I __merged__ my 'refactor' branch to my 'master' branch. <br> ##My Feelings I gotta say __"Refactoring is interesting!"__ This is a good way to improve my coding structure. It saved me __53__ lines of code after refactoring,![tips](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6szhlibfw1m7fe1qd8wd.png)which is almost __1/5__ line of code in my __SSG program__. Also, my code became easier to work with, easier to understand, and easier to extend! I think I will make more refactor move in the future! Link to my repo: [[Refactoring]](https://github.com/DerekJxy/My-First-SSG/commit/48515f1a2fcd3e21f926df2106406685518842e6)
derekjxy
863,222
using c++ like plain c
its a bubble sort (sorting algorithm), i was trying to make it look more readable for beginners....
0
2021-10-14T07:38:31
https://dev.to/prathamesh_holay/using-c-as-c-5ed6
cpp
its a bubble sort (sorting algorithm), i was trying to make it look more readable for beginners. Without any function and any class just simple as c.
prathamesh_holay
863,237
LeetCode - Unique Paths
LeetCode - unique paths in m X n grid using C++, Golang and Javascript.
0
2021-10-14T08:41:48
https://alkeshghorpade.me/post/leetcode-unique-paths
leetcode, cpp, go, javascript
--- title: LeetCode - Unique Paths published: true description: LeetCode - unique paths in m X n grid using C++, Golang and Javascript. tags: #leetcode, #cpp, #golang, #javascript canonical_url: https://alkeshghorpade.me/post/leetcode-unique-paths --- ### Problem statement A robot is located at the top-left corner of a **m x n** grid (marked 'Start' in the diagram below). The robot can only move either down or right at any point in time. The robot is trying to reach the bottom-right corner of the grid (marked 'Finish' in the diagram below). How many possible unique paths are there? Problem statement taken from: <a href="https://leetcode.com/problems/unique-paths" target="_blank">https://leetcode.com/problems/unique-paths</a> **Example 1:** ![Container](https://alkeshghorpade.me/unique-paths.png) ``` Input: m = 3, n = 7 Output: 28 ``` **Example 2:** ``` Input: m = 3, n = 2 Output: 3 Explanation: From the top-left corner, there are a total of 3 ways to reach the bottom-right corner: 1. Right -> Down -> Down 2. Down -> Down -> Right 3. Down -> Right -> Down ``` **Example 3:** ``` Input: m = 7, n = 3 Output: 28 ``` **Example 4:** ``` Input: m = 3, n = 3 Output: 6 ``` **Constraints:** ``` - 1 <= m, n <= 100 - It's guaranteed that the answer will be less than or equal to 2 * 10^9 ``` ### Explanation #### Brute force approach As per the problem statement the robot can move either down or right. We can use recursion to find the count. Let *numberOfPaths(m, n)* represent the counts of path to reach row number m and column number n in the grid. *numberOfPaths(m, n)* in C++ can be recursively written as following. ```cpp int numberOfPaths(int m, int n){ if (m == 1 || n == 1) return 1; return numberOfPaths(m - 1, n) + numberOfPaths(m, n - 1); } ``` The time complexity of the above solution is **exponential**. There are many overlapping sub-problems and hence we can use dynamic programming approach to avoid re-computing overlapping sub-problems. #### Dynamic programming approach We can avoid re-computing the overlapping sub-problems by constructing a temporary 2D array count[][] in bottom up manner using the above recursive approach. ```cpp int numberOfPaths(int m, int n){ // create a 2D array to store results of sub-problems int count[m][n]; // count of paths to reach any cell in first column is 1 for (int i = 0; i < m; i++) count[i][0] = 1; // count of paths to reach any cell in first row is 1 for (int j = 0; j < n; j++) count[0][j] = 1; for (int i = 1; i < m; i++) { for (int j = 1; j < n; j++) count[i][j] = count[i - 1][j] + count[i][j - 1]; } return count[m - 1][n - 1]; } ``` The time complexity of the above program is **O(mn)**. The space complexity is **O(mn)**. We can reduce the space more by **O(n)** where n is column size. ```cpp int numberOfPaths(int m, int n){ int count[n] = { 1 }; count[0] = 1; for (int i = 0; i < m; i++) { for (int j = 1; j < n; j++) { count[j] += count[j - 1]; } } return count[n - 1]; } ``` #### Combinatorics approach We have to calculate *m+n-2 C n-1* here which will be **(m+n-2)! / (n-1)! (m-1)!** Let's check the algorithm on how to compute the above formula: ``` - set paths = 1 - loop for i = n; i < m + n - 1; i++ - set paths = paths * i - update paths = paths / (i - n + 1) - return paths ``` ##### C++ solution ```cpp class Solution { public: int uniquePaths(int m, int n) { long int paths = 1; for(int i = n; i < m + n - 1; i++){ paths *= i; paths /= (i - n + 1); } return int(paths); } }; ``` ##### Golang solution ```go func uniquePaths(m int, n int) int { paths := 1 for i := n; i < m + n - 1; i++{ paths *= i paths /= (i - n + 1) } return paths } ``` ##### Javascript solution ```javascript var uniquePaths = function(m, n) { let paths = 1; for(let i = n; i < m + n - 1; i++){ paths *= i; paths /= (i - n + 1); } return paths; }; ``` Let's dry-run our algorithm to see how the solution works. ``` Input: m = 3, n = 7 Step 1: set paths = 1 Step 2: loop for i = n; i < m + n - 1 i = 7 7 < 7 + 3 - 1 7 < 9 7 < 9 true paths = paths * i paths = 1 * 7 = 7 paths = paths / (i - n + 1) = 7 / (7 - 7 + 1) = 7 / 1 = 7 i++ i = 8 Step 3: loop for i < m + n - 1 8 < 8 + 3 - 1 8 < 9 8 < 9 true paths = paths * i paths = 7 * 8 = 56 paths = paths / (i - n + 1) = 56 / (8 - 7 + 1) = 56 / 2 = 28 i++ i = 9 Step 4: loop for i < m + n - 1 9 < 8 + 3 - 1 9 < 9 false Step 5: return paths So we return answer as 28. ```
_alkesh26
863,457
Startup Landing page in sveltekit, svelte, TailwindCSS
I made a Startup landing page in sveltekit and tailwind CSS for a competition. DEMO...
0
2021-10-14T11:02:09
https://dev.to/vanshcodes/startup-landing-page-in-sveltekit-svelte-tailwindcss-48d9
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6yd0o53v9mcw6xlx0ci7.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lp8l4i7nm29r59r8zpbs.png) I made a Startup landing page in sveltekit and tailwind CSS for a competition. ###DEMO VIDEO: https://youtu.be/_7t4XKAwKaw ###LIVE DEMO URL: https://linuxify-startup.vercel.app/ ###SOURCE CODE: https://rebrand.ly/startup_website #**Features** 1. SSR Enabled 2. Highly Editable nothing is hardcoded everything in the src/assets/details.js file. Even the website name is editable. 3. Size of website is just 11.803KB 4. Fast Website
vanshcodes
864,144
Laravel 🔥 tip: Generating migrations easier
Generating database migrations in Laravel is already really easy to do with the php artisan...
0
2021-10-15T01:31:15
https://dev.to/grantholle/laravel-tip-generating-migrations-easier-2jd8
laravel
Generating database migrations in Laravel is already really easy to do with the `php artisan make:migration`. There are some nice ergonomics that aren't documented that will make it even easier. I find myself struggling to write the name in snake case (replacing spaces with the `_` character). I was thinking, wouldn't it be nice if I could just write it normally and it do that for me? It already does! If we wrap the name of our migration in quotes, we're able to write it much quicker. ``` php artisan make:migration "my migration name" ``` This will generate the correct name of the migration for us by automatically running the name through the `Str::snake()` helper. This means we could also have weird casing, and it will all be ok. ## Auto table detections If you want to generate the migration with the `Schema::create()` or `Schema::table()` scaffolding already there for you, you can pass a `--create=` or `--table=` option respectively. ``` php artisan make:migration "my migration name" --table=users ``` However, the command also tries to guess your table name based on the name of your migration if you don't explicitly tell it a name. The below example will automatically create a migration with the `Schema::table('users'...)` scaffolding. ``` php artisan make:migration "add column to users" ``` We didn't specify the table option, but it detected the table based on how we named our migration. ```php public function up() { Schema::table('users', function (Blueprint $table) { // }); } public function down() { Schema::table('users', function (Blueprint $table) { // }); } ``` The `up()` and `down()` functions automatically included our table. Let's check out how. In the `MigrateMakeCommand` command class, there's a call to `TableGuesser::guess($name)`. In that guesser class, it's checking if our name contains a certain pattern: ```php const CREATE_PATTERNS = [ '/^create_(\w+)_table$/', '/^create_(\w+)$/', ]; const CHANGE_PATTERNS = [ '/_(to|from|in)_(\w+)_table$/', '/_(to|from|in)_(\w+)$/', ]; ``` What this means is that if our migration name is simply `create_[table name]_table` or just `create_[table name]` it will assume you're creating a table name whatever value [table name] is. ``` php artisan make:migration "create planets table" # or php artisan make:migration "create planets" ``` Our migration will include the same scaffolding had we done the command this way: ``` php artisan make:migration --create=planets "create planets" ``` Likewise for the `table` migrations, we can just include a few keywords along with the table name at the end of the migration name. ``` php artisan make:migration "add favorite color to users table" # or php artisan make:migration "add favorite color to users" ``` The migration will include the table details for us. You just need to include any one of the prepositions "to/from/in" followed by your table name at the end of your migration name and it will detect your table name for you automatically. Here are some examples of good migration names that will automatically detect the `users` table for us: - "make name nullable in users" - "add favorite color to users table" - "remove ssn from users" This behavior "rewards" us when we write meaningful migration names in the form of less typing... win!
grantholle
864,756
User-informed load tests
June this year I joined k6. A week later I heard that we were being acquired by Grafana Labs. I was...
0
2021-10-15T12:57:25
https://dev.to/floord/user-informed-load-tests-2foj
testing, performance, usability, script
June this year I joined k6. A week later I heard that we were being acquired by Grafana Labs. I was positively thrilled, I know Grafana as a great open source citizen. Anyway, k6 is in the performance space, and although I have some experience with testing (usability and accessibility testing), I've never really touched load testing until now. ​ There was (and is) certainly a lot to learn, and I can recommend checking out Nicole van der Hoeven's excellent videos on how to [plan](https://www.youtube.com/watch?v=EFqBWqo3IzY)(1) [realistic load tests](https://www.youtube.com/watch?v=Xz6drbGuUdI)(2). ​ One thing that struck me as odd though is that tools (like k6) that offer a [browser recorder](https://k6.io/docs/test-authoring/recording-a-session/browser-recorder/), promote it as a tool for testers, or other members of the QA team. As mentioned before I've done some usability testing and certainly so has my dad and he won't shut up about it. Now I don't often (or: never) listen to him, let alone quote him, but he says that once you've participated in a usability test, you're now a usability expert. What I think he means by that is that you know what's the expected outcome, and you can never look at the same product with fresh eyes again. ​ Having engineers potentially use the browser extension and have that be the foundation of your test script, for me, is like having the developer in the room while you're asking a potential user to perform an action. You can **hear** them think "how do they not understand how navigation works?!". As a side note: I sent engineers out of the room when they got fidgety or started to influence the user. ​ I guess what I'm trying to say is that I would expect to see teams invite users to perform a task and record their session(s). Better yet: different tasks (scenarios / user paths), combine those in your script as concurrent requests, and sprinkle some dynamic think time (`rsleep();`) on top. ​ And empty the cache after every session, to not muddy the results. ​ But of course that is usability testing, maybe perf or load testing really "just" needs to verify business requirements. While I don't think experienced testers would only test for the "happy path", I think others might. Especially as passing the test is a mandatory step in their build pipeline. ​ Please excuse the stream of consciousness, and you're hereby invited to yell at me in the comments when you think I'm totally off.
floord
342,240
Build a Forum App, from Code to Deploy
Hi there! I recently graduated from the coding bootcamp Flatiron School. Flatiron’s excellent curricu...
6,878
2020-05-23T19:11:20
https://dev.to/speratus/build-a-forum-app-from-code-to-deploy-3lcc
webdev
Hi there! I recently graduated from the coding bootcamp Flatiron School. Flatiron’s excellent curriculum taught the skills required to build full stack applications from start to finish. We learned everything from SQL to React. One thing that we did not learn, though, is how to deploy the apps we build. So, if you’re a recent bootcamp grad like me, you might be wondering how to get your newly created app hosted. Or maybe you just want to know how to set up a production environment and deploy your apps to it. Either way, I hope to help you out with this series of posts sharing my knowledge of deployment technologies. I hope to guide readers through building a generic forum app from code all the way to deploying it on Amazon’s cloud. I want to cover everything from an intro to GraphQL to continuous integration to an overview of some of Amazon’s Web Services. # The Stack Here are some of the details of the stack I plan to use. ## Backend - [PostgreSQL][postgres] database - [Ruby on Rails][rails] - [GraphQL][graphql] (to save time on the frontend and backend) ## Frontend - [Node][node] - [React][react] (using Create React App) - [Redux][redux] (maybe React Hooks instead) - [Bulma][bulma] via [rbx][rbx] (I’m definitely not the world’s best frontend developer, and I’ve found that Bulma is easy to use and customize) ## Testing and Continuous Integration - [Minitest][minitest] for the backend - [Jest][jest] for the frontend - [Travis CI][travis] for continuous integration ## Deployment - [Docker][docker] - Various [Amazon Web Services][aws] (probably EC2, but maybe Elastic Beanstalk, AWS Amplify or Fargate if EC2 proves too difficult to work with). ## Development environment I will be using Ubuntu 20.04 as my development environment, however everything should work pretty well on a Mac. Unfortunately, Ruby on Rails and Docker only seem to have partial Windows support, so Windows users will have to use the Ubuntu distro for Windows Subsystem for Linux. I’ll be using Visual Studio Code as my editor. # Road map Here's a brief overview of what I hope to cover in this series: 1. Backend a. Domain modeling and generating models b. Writing tests for our models and integrating with Travis CI c. GraphQL intro, writing types and mutations for our models 2. Frontend a. Wireframing and component hierarchy b. Writing tests and Building basic components c. Adding GraphQL queries d. Tying it all together 3. Deploy a. Making Docker containers for the front and backends b. Configuring Amazon services Maybe if I’m feeling adventurous, I’ll cover these topics: - [Kubernetes][kubernetes] - Domain name registration and configuration - HTTPS with [Let’s Encrypt][letsencrypt] This is going to be a learning experience as much for me as it will be for readers. The things I want to cover in the series are things that I have learned just by experimenting in my free time, so if you are more experienced with them than I am and see a mistake or an area which could be improved, I would welcome any suggestions you have. I hope to post a new installment in the series every week, but I cannot guarantee that I will be able to. I hope that this series is instructive and helpful to any new web developers out there. [postgres]: https://www.postgresql.org/ [rails]: https://rubyonrails.org/ [graphql]: https://graphql.org/ [node]: https://nodejs.org/ [react]: https://reactjs.org/ [minitest]: https://github.com/seattlerb/minitest [jest]: https://jestjs.io/ [travis]: https://travis-ci.org/ [docker]: https://www.docker.com/ [aws]: https://aws.amazon.com/ [kubernetes]: https://kubernetes.io/ [redux]: https://redux.js.org/ [bulma]: https://bulma.io/ [rbx]: https://github.com/dfee/rbx [letsencrypt]: https://letsencrypt.org/
speratus
866,729
.
Hello!! I'm Anshoool &amp; looking forward to explore more on this virtual world and learn as much as...
0
2021-10-17T15:13:19
https://dev.to/anshulxd/welcome-thread-155j
beginners, cybersecurity, programming, welcome
Hello!! I'm Anshoool & looking forward to explore more on this virtual world and learn as much as possible. Hoping to work with everyone so as to make this place better, secure and more reliable to be used.😀😀🤗
anshulxd
880,119
How To Build APIs That Are Resilent From 3rd Party Failures
There are many possible reasons for failures, from an outage of a third-party service (API, DB, etc.)...
0
2021-10-28T20:27:52
https://dev.to/mashaa/handling-system-failures-during-3rd-party-api-communications-499j
devops, programming
There are many possible reasons for failures, from an outage of a third-party service (API, DB, etc.) to a hardware failure, to the “classic” software bug (after all, software developers are humans too, aren’t we? 🤔). *Fault tolerance is a requirement, not a feature.* This post is an a attempt to address handling system failures during 3rd Party API communications. ![handling system 3rd party api failure](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7up532noq9qwjsmbpyf3.png) #### What are some of the failure points? 1. The network connectivity between the application and the third party system might be disrupted causing communication timeouts or lost information. 2. The third party system has an internal error or machine failure causing the application not to receive a response. 3. Third party system has not been paid for, e.g late in account payment renewal. 4. We have an internal error or machine failure. Depending on the timing of this failure, two things could happen: 1. Inability to send the request. 2. Inability to receive the response from the third party. #### How do we respond to a user request when failure occurs? The diagram below attempts to show a process that would be followed to achieve resilience in API communication failure ![handle payment failure photo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5vo41ag2yjgg9e2a87kg.png) In each of the options described above a timeout, internal error or connection error will result in a request not retrieving the optimal response for customers. #### What are the possible actions to take when retry will not succeed and is unlikely to help? 1. Notifications: when such 3rd party failure occurs, notifications should be sent for quick action such as calling the customer back to manage expectation and give way out that lets them know that you have their back. 2. Showing descriptive messages that will aid to let the customer know what to do. e.g refresh, retry ##### To improve the resilience of the application we should consider the following patterns: 1.Retry 2.Caching 3.Persistence 4.Circuit Breaker ##### Retry For a **reliable and robust** retrying solution it should be: - Smart - To answer this, we use exponential backoff retrying. - Customizable - Standard way to handle errors, we should control which errors should be retried and which shouldn’t by throwing exceptions for the errors that should be retried and by catching the others. ##### Caching 1. We record each request in the database before sending the information to an external party. 2. We should have a status attribute that tracks which part of the process the request is in. 3. We update the request with a new status based on the response, normally either created, declined or successful. ##### Persistence To get more inspiration Uber manages the reprocessing of their data using kafka topics and dead-letter queues. See https://eng.uber.com/reliable-reprocessing/. Uber’s idea would fit many of our needs, so we can design a solution based on it. First, our error handling solution should identify that there is an error. Then, it needs to grab the relevant context of the error (in this example, the handled HTTP request). After that, it should send that context to some persistence layer, for later re-execution of the service flow. When a failure occurs in one of the HTTP requests, the request context (request body, query params, etc.) will be sent as a Kafka message to the first retry Kafka topic. As for the re-execution part, a polling of the first retry topic is added and if that processing fails too, it will be sent to the second topic and so forth, until it’s sent to the dlq (“dead-letter queue”) topic, for manual analysis. (The number of retry topics is arbitrary in this example, it can be any number). Let’s see how the solution I described applies to service A below: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ohf3fn2bav58dqvrcg70.png) #### Summary If we implement the solutions I’ve described, we can be much more confident now when our system has failures, the system requires fewer manual interventions when it fails, and most importantly, we sleep well at night! I hope this post helped you by providing you with some new ideas for handling failures in your applications. Feel free to comment below with any questions/thoughts :-)
mashaa
880,460
Stack overflow is down forever. Discuss what you would do.
(Not real obviously). Discuss what you would do if Stack overflow was down forever!
0
2021-10-29T04:46:23
https://dev.to/__manucodes/stack-overflow-is-down-forever-discuss-what-you-would-do-mfa
(Not real obviously). Discuss what you would do if Stack overflow was down forever!
__manucodes
880,872
iPhone 13 Previews
Found iPhone 13 previews on the accounts of https://www.pinterest.com/anddasjdsa/ on Pinterest and...
0
2021-10-29T10:37:04
https://dev.to/andrewmalrowe/iphone-13-previews-3hfj
Found iPhone 13 previews on the accounts of https://www.pinterest.com/anddasjdsa/ on Pinterest and https://dribbble.com/andddrew85 on Dribbble. iPhone 13-series device previews are located on a board called Devices https://www.pinterest.com/anddasjdsa/devices/: * [iPhone 13 Mini](https://www.pinterest.com/pin/341992165463626915/) * [iPhone 13](https://www.pinterest.com/pin/341992165463626917/) * [iPhone 13 Pro](https://www.pinterest.com/pin/341992165463626919/) * [iPhone 13 Pro Max](https://www.pinterest.com/pin/341992165463626921/) iPhone 13-series device previews on Dribbble: * [iPhone 13 Mini](https://dribbble.com/shots/16502037-iPhone-13-Mini-viewport-size-screen-size-CSS-Media-Query) * [iPhone 13](https://dribbble.com/shots/16502125-iPhone-13-viewport-size-screen-size-CSS-Media-Query) * [iPhone 13 Pro](https://dribbble.com/shots/16502155-iPhone-13-Pro-viewport-size-screen-size-CSS-Media-Query) * [iPhone 13 Pro Max](https://dribbble.com/shots/16502187-iPhone-13-Pro-Max-viewport-size-screen-size-CSS-Media-Query). There's also a blog post about [iPhone 13-series viewports, resolution, and screen sizes](https://blisk.hashnode.dev/iphone-13-series-viewports-resolution-and-screen-sizes) on a blog https://blisk.hashnode.dev/. [iPhone 12-series viewports, resolution, and screen sizes](https://blisk.hashnode.dev/iphone-12-series-viewports-resolution-and-screen-sizes) There's a [Twitter account](https://twitter.com/andrii_bakirov) that covers device viewports, resolutions, and screen sizes: * [iPhone 13 Pro Max](https://twitter.com/andrii_bakirov/status/1440205880792739841) * [iPhone 13 Pro](https://twitter.com/andrii_bakirov/status/1440205799029047297) * [iPhone 13](https://twitter.com/andrii_bakirov/status/1440205734944247808) * [iPhone 13 Mini](https://twitter.com/andrii_bakirov/status/1440205633689587717) Mobile and responsive testing boards: * GitKraken board for Mobile testing: https://app.gitkraken.com/glo/board/YYAGtEIdUAlyCam8 * GitKraken board for Responsive testing: https://app.gitkraken.com/glo/board/YYAG1kIdUAlyCaoN * Trello board for Mobile testing: https://trello.com/b/9nRL3o1Q/mobile-testing-https-bliskio * Trello board for Responsive testing: https://trello.com/b/Puuu5XwU/responsive-mobile-testing-https-bliskio
andrewmalrowe
881,419
Adding Shopify to Your Next.js Stack is 🚀 + 💰 + 🙌
You have probably heard about all of the technologies I am going to mention in this post, so I won't...
0
2021-10-29T19:03:38
https://dev.to/iskurbanov/adding-shopify-to-your-nextjs-stack-is--3ahg
shopify, nextjs, react, javascript
You have probably heard about all of the technologies I am going to mention in this post, so I won't bore you with defining each one. In this post I want tell you why adding Shopify to your Next.js/React/JavaScript tech stack is probably one of the best decisions you can make for your freelancing or even as a career move. *Let's go:* ## 1. Make More Money 💰 Of course, in a world run by money, we all want to make more of it. Obviously this can't be your only reason to learn programming or to work in a particular field but this is an important factor to consider in your career. In short, we all want to be compensated properly for our efforts and if we can get paid more, why not? Developing for the e-commerce field gives you a leg up on other niches in programming. Why? There is money revolving here. People go into e-commerce to make money. And we all know that in order make money, you need to spend money. Entrepreneurs know this best. So they are willing to invest into their store in order to make money in the future. Every entrepreneur I have spoken to understands this concept and are more than willing to invest to make their stores better, faster, prettier. By adding this skillset to your tech stack, you unlock a whole new set of doors. You go from being just a regular developer to a money making machine. *Example:* **Regular React Landing Page:** $1,000-$2,000 to develop *(from my experience)* **Headless Shopify Storefront using Next.js:** $25,000 - $75,000 to develop *(from my experience)* E-commerce is rapidly integrating into every part of the web, don't exclude payments from your tech stack. ## 2. It's Super Fast 🚀 Literally, in every way. It's super fast to add Shopify to your tech stack but it's also SUPER FAST. We all know that Next.js allows you make super snappy stores. Giving that super power to e-commerce sites is very powerful. Studies have shown that faster storefront greatly increase conversion rates. Initial statistics that I've seen with Next.js in e-commerce are incredible! If you are already familiar with React.js, adding Next.js and Shopify to your Tech stack is a breeze. Not only will you open yourself up to more opportunities but you will start enjoying development more! Next.js has some of the best developer experience out of any framework, and so does Shopify. ## 3. Become a Well-Rounded Developer 🙌 By Adding Shopify and Next.js (also throw in some Tailwind CSS in there for good measures) to your tech you will quickly become a more valuable developer. Your time will be worth more and you will be able to have a bigger impact on any project that you decide to tackle. ## Why I wrote this article I have personally been developing with Shopify and React for the last 2 years and have nothing but good things to say about it. First, learning React changed my life and then adding Shopify opened incredible opportunities for me. Now my passion is to enable developers to explore better opportunities and earn more in the process. I have recently released a public Github with a Next.js + Shopify + Tailwind CSS Starter and a Course to accompany it (for those who want guidance on how to quickly add these technologies to your arsenal). Check it out here! [https://github.com/iskurbanov/shopify-next.js-tailwind](https://github.com/iskurbanov/shopify-next.js-tailwind) Landing page for the course (Made with Next.js and Tailwind CSS): [BuildNextShop.com](www.buildnextshop.com)
iskurbanov
938,167
Exactly How to Check Wifi Password Windows 10?
It has been said that Sometimes some of the people who generally forget the passwords of the wifi or...
0
2021-12-28T06:57:16
https://dev.to/designdare16/exactly-how-to-check-wifi-password-windows-10-4g9m
It has been said that Sometimes some of the people who generally forget the passwords of the wifi or face a great level of the difficulty in remembering the same try to keep those passwords safe by writing those down on the notebook or online in the notes app in the smartphone, but if you are not one of them who write down all the passwords. Then you must ask the question **[how to check wifi password windows 10](https://www.designdare.com/how-to-find-wifi-password-on-windows-10/)**? And trust me the answer for the question How to find wifi password on windows 10 is actually very simple as well as easy. Here in the article there are some proper steps to know the how to check wifi password windows 10 in the better way and easily that can be done. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tvpjw4zfhqe0ryxm9ufn.jpg) **Step: 1** click on the magnifying glass that is available on the left corner of the screen at the extreme bottom. **Step: 2** the personal computer will be opened where the person will get the chance to edit the wifi passwords or we can say that the how to check wifi password windows 10. **Step: 3** now in the sharing centre you are now required to choose one particular network among all the other available network options. **Step: 4** now to know about the how to check wifi password windows 10 select one network and add the name for the same. **Step: 5** then a person is required to select the wireless properties on the screen of the P.C **Step: 6** now you are required to go on the securities tab on the screen of the P.C. **Step: 7** go on the connections tab, which is easily visible on the top of the screen on the computer. **Step: 8** entering the password there will be very beneficial as well as helpful in completing the process. However, as per our knowledge there is a complete different as well as unique process that is required to be followed in order to know as well as understand the complete process of the how to check or we can also say that how to exactly find the wifi password in the windows 10 laptop or personal computer. Therefore, to know more you can search with the same name as well, which means you can type the how to check or find the wifi password in the windows 10 laptop or personal computer as the search term on the web browser on internet.
designdare16
881,427
The Things to Keep in Mind about Auth
Five things to keep in mind about auth for the busy developer.
0
2021-10-29T18:52:42
https://developer.okta.com/blog/2021/10/29/things-to-keep-in-mind-about-auth
beginners, security, learning, cybersecurity
--- title: The Things to Keep in Mind about Auth published: true description: Five things to keep in mind about auth for the busy developer. tags: beginners, security, learning, cybersecurity cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r6hiulxjdo8shqwa746y.png canonical_url: https://developer.okta.com/blog/2021/10/29/things-to-keep-in-mind-about-auth --- _It's National Cat day and Cybersecurity awareness month so I'm celebrating by reposting a blog post I wrote with permission from [Okta Dev blog](https://developer.okta.com/blog/2021/10/29/things-to-keep-in-mind-about-auth)._ There's a lot of information out there about adding authentication to your app, which is helpful! But also overwhelming. It can be hard to find relevant and _up-to-date_ information. Security best practices and technologies change, so refreshing your understanding and keeping up with current best practices is a good thing. Here are some notes I took while I reviewed my knowledge and applied my experience implementing auth. ![The 5 things to keep in mind with authing](https://developer.okta.com/assets-jekyll/blog/things-to-keep-in-mind-about-auth/overview-989d18daef0214a53e67b3406736f7cbc5520ec146b62c39f70290fc15f4939c.png) ## Prefer OAuth 2.0 and OpenID Connect If you're implementing authentication to a new application, the best practice is to use OAuth 2.0 with OpenID Connect (OIDC). The combination of OAuth 2.0 with OIDC provides consistency across many integration providers, standardized ways to access information, and security. Let's establish a baseline for the rest of this post by familiarizing ourselves with the basics. This [Illustrated Guide to OAuth and OpenID Connect](https://developer.okta.com/blog/2019/10/21/illustrated-guide-to-oauth-and-oidc) is **excellent**. What I really like about using OAuth 2.0 and OpenID Connect together is that it separates "the auths" and adds structure to each. We learned from the blog post above that OAuth 2.0 is designed for **authorization** – access to data (resources). And we learned that OIDC is a thin layer on top of OAuth 2.0 that adds login and profile information. Thus, **authentication** is the act of establishing the login session that confirms the user logging in is who they say they are. Now we can be specific in our vocabulary and understand how each standard complements the other. When you're using OpenID Connect, don't forget that you can inspect the [OpenID Connect Discovery](https://developer.okta.com/docs/reference/api/oidc/#well-known-openid-configuration) document to get a listing of endpoints and supported usages. You'll see references to some of the metadata available in the discovery response below. Armed with this knowledge, let's continue! ## Know your tokens There are three different tokens at play in OAuth 2.0 and OpenID Connect. Depending on the grant type you use (more on that coming up), you might not have all three tokens. It's good to remember the kinds of tokens you're working with and what each does. This makes documentation even easier to parse and team conversations less confusing by specifying the exact type of token instead of only using the word "token". ![ID token](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fn211ya3mm7bw6psjiox.jpg) **ID token**: The token returned from OpenID Connect containing information about the authentication of the end-user in [JSON Web Token](https://developer.okta.com/blog/2020/12/21/beginners-guide-to-jwt) (JWT) format. ![Access token](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zweznb3oxcyq0ow3z8y0.jpg) **Access token**: The token returned from OAuth flows that allows you to access the resource. Access tokens returned from Okta are in JWT format. ![Refresh token](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fj10h7fcmefptyiuwumu.jpg) **Refresh token**: A long-lived token that you exchange for the short-lived access token. Not all grant types support refresh tokens. ## Claim wisely A JWT has the capability to add custom claims to it. Adding custom metadata directly to the payload of access and ID tokens means you can get properties tailored for your application right from the get-go, right? What a great idea! Or is it? Custom claims are powerful, but remember that the contents of a JWT are visible to anyone who has one, such as an external developer calling your system and potentially the end user. You don't want to add private information with the expectation that it's safe from prying eyes. You also don't want to overload the token with a bunch of custom claims that will bloat the token. Keep your tokens lightweight and make adding custom claims a considered decision. ## Use the right grant type I think this is where developers start feeling detail overload. There are different grant types, and depending on who the caller is and what sort of software process you're working on, the optimal grant type changes. To add further confusion, some grant types (or flows) are OAuth 2.0 standards, and some are from OpenID Connect. Plus, there are updated security practices, and not all blog posts reflect the current recommendations. Yikes! Let's cut out the extraneous stuff and focus on the need-to-knows. First, let's start with listing all the available grant types. We'll define a high-level overview of each: ![Graphical summarization of all the grant types](https://developer.okta.com/assets-jekyll/blog/things-to-keep-in-mind-about-auth/all-grants-38c2642f8d9838075fd89ef56a515076c26bab699da5449c9defc781de035c9d.png) ![Authorization Code](https://developer.okta.com/assets-jekyll/blog/things-to-keep-in-mind-about-auth/grants/authorization-code-5bd928513ab39270a8a203aee9e35895698b955fa036f70339fc11b0c2bec307.jpg) **Authorization Code** is a grant type used by web apps where the source code is private, such as a server-side web app. An authorization code and a client secret are required to get an access token. You can make the authorization code grant type even more secure for server-side web apps by using PKCE as well! More on that below. ![Client Credentials](https://developer.okta.com/assets-jekyll/blog/things-to-keep-in-mind-about-auth/grants/client-credentials-414c63503c5ca3c6385276d32b652b334a22fc6c57dc5ac94571dcb452dbf758.jpg) **Client Credentials** is a grant type used for back-end communications, such as API to API. Users aren't involved in this flow, so there isn't an ID token available. ![Device Code](https://developer.okta.com/assets-jekyll/blog/things-to-keep-in-mind-about-auth/grants/device-code-5145e0c5e0e28b850087c2fc8329e006bd963150b394cb73eb3853f7d85ef5c7.jpg) **Device Code** is a grant type primarily used for IoT and smart devices. The flow delegates user authentication and authorization through an external device, such as a smartphone app or browser. Device Code is available as an early access feature in Okta, or follow the steps in this post to [add OAuth device flow to any server](https://developer.okta.com/blog/2019/02/19/add-oauth-device-flow-to-any-server). ![Refresh Token](https://developer.okta.com/assets-jekyll/blog/things-to-keep-in-mind-about-auth/grants/refresh-3dbffeb659cf7f2f4bc2df2355b79a97a160fb229c63b7fe8a0e9f285280ac12.jpg) **Refresh Token** is not a grant type, per se. It's a long-lived token the application may receive to get longer access to resources. Authorization Code, Device Code, Hybrid, and Resource Owner Password flows support refresh tokens if the authorization server is configured to give the app refresh tokens. ![Proof Key for Code Exchange](https://developer.okta.com/assets-jekyll/blog/things-to-keep-in-mind-about-auth/grants/pkce-8c8acde1f7472705c3305ba8d21561eb020edc049a70f7fad0d70b03c4375105.jpg) **Proof Key for Code Exchange** (PKCE) is a flow to create a secret to use before exchanging the authorization code for tokens. This is not a grant type used on its own, but as an extra layer of security to the Authorization Code flow. ![Hybrid](https://developer.okta.com/assets-jekyll/blog/things-to-keep-in-mind-about-auth/grants/hybrid-0458664ae2ab421160385edffab30a0df9deacff9d66040d0e28092d96aab7d1.jpg) **Hybrid** is a set of grant types from the OpenID Connect spec. The `code id_token` flow is the most common and combines two grants. It returns an access token via the Authorization Code grant and an ID token via the Implicit grant. ![Implicit](https://developer.okta.com/assets-jekyll/blog/things-to-keep-in-mind-about-auth/grants/implicit-37e879e1a432f31a0214b9a202c832a24ac2aa33c8b2396c237b6ae7c567bd7a.jpg) **Implicit** is a grant type that used to be recommended for native and JavaScript apps. It is a simplified version of the Authorization Code grant but doesn't require the authorization code exchange for the access token. Because of the security risks, OAuth 2.0 no longer recommends this grant type and it will be dropped in the upcoming OAuth 2.1 spec. ![Resource Owner Password](https://developer.okta.com/assets-jekyll/blog/things-to-keep-in-mind-about-auth/grants/resource-owner-3d48e8e1ba8575fdc08d4dc4c8eb416e7e1718ad3552e12f70f7b60dc12c8ca9.jpg) **Resource Owner Password** grant type is a flow used by trusted first-party clients. Because of the risks involved in applications directly handling credentials, OAuth 2.0 no longer recommends this grant type and it will be dropped in the upcoming OAuth 2.1. There might be legacy applications requiring this flow, but that's the only reason to use it. That's still a lot of flows! And a lot to keep in mind. We're busy developers! So let's distill this down to the preferred grants. ![The three recommended grants](https://developer.okta.com/assets-jekyll/blog/things-to-keep-in-mind-about-auth/recommended-grants-a86f273468ff524ed018a904e71cbd2d2616bd776bd1717a108f84bd70ff32a9.png) That's much better! We now have a reasonable number of grant types to work with: Device Code, Client Credentials, and a combination grant type – Authorization Code with PKCE. ![Authorization Code with PKCE](https://developer.okta.com/assets-jekyll/blog/things-to-keep-in-mind-about-auth/grants/authorization-code-pkce-5eabfc6cc0a03b6c94e235439ff85b9ea31d3983b4065ea74756c59e7da70810.jpg) **Authorization Code with PKCE** is the Authorization Code grant type powered by ~~pixies~~ proof key for code exchange. The flow adds a step to first generate a code challenge as an extra layer of security. Now that we have an updated list of recommended grant types, how do we know which one to use? You may need to use multiple grant types in a complete software system depending on the scenario and actors involved. You may have a SPA, a native application, back-end services and processes, and integrations with external parties. Each of these requires consideration for which grant type to use. Don't worry; by checking the discovery endpoint you'll know if your identity provider is up to the task of securely handling your needs! We can use this handy-dandy flow chart to identify which grant type to use: ![Use Device Code for input constrained devices, use Client Credentials for back-end processes, else use Authorization Code with PKCE](https://developer.okta.com/assets-jekyll/blog/things-to-keep-in-mind-about-auth/grant-flow-chart-31453fd40c44ac95b00e02045974988c38db061323e6dccf0f5968338ab3528e.png) ## Keep it as simple as possible We're busy. And adding authentication and access security to our software is foundational, but it's not the only feature of the software. As product developers, we need to get auth into the system and have the peace of mind that we're covered without having to worry about all the details. To keep adding auth simple, use SDKs that already handle all those nitty-gritty details. Okta has SDKs for mobile, front-end, and back-end languages to get going quickly. What's that? You say that your needs are a little more complicated than simply adding an Okta SDK to your software system? When you need something more customized, incorporate [OIDC certified libraries](https://openid.net/developers/certified/) into your system that allow you to still follow auth best practices while implementing custom needs. ## Next steps These are the key takeaways that helped me better understand auth and incorporate team discussion points we had when implementing auth in the software system I worked on. This post skims over the surface though and provides just the bare essentials for busy developers. If you want to dig in further and learn more, please check out the following links * [OAuth 2.0](https://oauth.net/2/) * [OpenID Connect Foundation](https://openid.net/) * [It's Time for OAuth 2.1](https://aaronparecki.com/2019/12/12/21/its-time-for-oauth-2-dot-1) * [OAuth 2.0 Simplified](https://www.oauth.com/) * [OAuth 2.0 and OpenID Connect Overview](https://developer.okta.com/docs/concepts/oauth-openid/) If you like content about auth, don't forget to follow OktaDev on [Twitter](https://twitter.com/oktadev) and subscribe to our [YouTube channel](https://www.youtube.com/c/OktaDev/) for more great tutorials. What are your tips to keep in mind when adding auth?
alisaduncan
881,721
Imposter Syndrome
Healthy Imagination Not sure why developers struggle feeling like fakes. All people...
0
2021-10-30T03:55:46
https://dev.to/gregrossdev/imposter-symptom-46ml
beginners, webdev, programming, codenewbie
## Healthy Imagination Not sure why developers struggle feeling like fakes. All people pretend till they become what they imagine. If no creative action is taking then it’s just fantasy. ### Willingness The dedication developers put into understanding abstract computer science concepts is so admirable. From recognizing patterns to abstracting away implementations of interfaces for returning possible values. People capable of explaining their thoughts and ideas in syntax foreign to the common person. ## Definitely Real We are not imposters. There’s nothing delusional about taking actionable advances to become the idea you hold in your head about yourself. ### Change Expressing the concerns, worries, and fears along the way makes me believe your anything but an imposter.
gregrossdev
881,774
Escape If-else hell in Javascript
Backstory / Problem Few months ago, there is a certain case where I need to calculate the...
0
2021-10-31T06:52:01
https://dev.to/melvnl/escape-if-else-hell-in-javascript-odn
javascript, tutorial, productivity, beginners
###Backstory / Problem Few months ago, there is a certain case where I need to calculate the percentage of input file in each form that user has filled (It was for a react native app that take user feedback by filling several form that represent different category such as personal information form, the user property information, etc.), the system flow look like this in a nutshell. ![System flow in nutshell](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3jaxjtqua09bslt0e1qq.jpg) The first approach was using if/else statement to handle the conditional logic. Although it might be a good idea for one or two conditions here and there, using multiple if-else statements chained together will make your code look very ugly, and less readable, and for my case there is probably more than 30 if-else statements in scattered in 5 different forms. Not gonna lie, it look very simple and straight to the point, yet painful to read. Also, when my peer reviewing the PR, he refers something humorous in reddit about [the code behind yandere simulator](https://www.reddit.com/r/ProgrammerHumor/comments/53uhsw/the_code_behind_yandere_simulator/) ![Yandere Simulator code](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j5m0oeqx4w8xcva3hf6x.png) As you can see, it is a hell of if-else statements. ###The solution The solution will be vary, depends on your case / need. But most likely the thing that you need is **object**. As for instance, let's say you need to return a string based on a key ```JavaScript function checkStatus(status) { if (status.toLowerCase() === 'available') { return `The user is currently available` } else if (status.toLowerCase() === 'busy') { return `The user is currently busy` } else if (status.toLowerCase() === 'away') { return `The user is away from keyboard` } else if (status.toLowerCase() === 'breaktime') { return `The user is having a good lunch` } } ``` Just imagine if you have other 20+ status type ? Will you be comfortable reading or writing that much line of if-else statements? Instead we can use object or **Map object** to make a sort of table consist of paired key and value to look up to. ```javascript function checkStatus(status){ const statusList = { available: 'The user is currently available', busy: 'The user is currently busy', away: 'The user is currently away from keyboard', breaktime: 'The user is currently having a good lunc' } return statusList[status]; //console.log(statusList[status]) } ``` This can be also applied in algorithm leetcode-type-of-question to save you up some time from writing repeated if-else statement over and over again. Thanks for reading!!! Have a good day, and remember that project you always think about won't code itself 🤪.
melvnl
882,091
7 Nice API for your projects !
API is the acronym for Application Programming Interface, which is a software intermediary that...
0
2021-10-30T14:41:31
https://dev.to/codeoz/7-nice-api-for-your-projects--3ap
javascript, webdev, beginners, codenewbie
**API** is the acronym for Application Programming Interface, which is a software intermediary that allows two applications to talk to each other! Today I will share with you some nice API to use for your projects!! ### 🍿 Movie database (IMDB) _https://developers.themoviedb.org/3/getting-started/introduction_ If you need informations about movies, series, etc... You can use IMDB API that is very easy to use! ### 🥄 Food Recipes (spoonacular) _https://spoonacular.com/food-api/docs_ Are you looking for recipes? Soonacular is a great api that will provide you some recipes for your next project!! ### 👽 Rick & Morty database _https://rickandmortyapi.com/documentation/_ The Rick and Morty API is based on the television show Rick and Morty. You will have access to about hundreds of characters, images, locations and episodes !! ### 🩲 Random Advice _https://api.adviceslip.com/#top_ Very fast api to use! It will provide you some random advice! You can test it right now from this url _https://api.adviceslip.com/advice_ ### 🥸 Fake User _https://randomuser.me/?ref=undesign_ Do you need fake user for your website? This API will give you a list of fake user! ### 🤡 User icons (DiceBear) _https://avatars.dicebear.com/docs/http-api#options_ DiceBear is an avatar library for designers and developers. You can test it fastly at this url for example: _https://avatars.dicebear.com/api/male/john.svg?background=%230000ff_ ### 🐱 Cats fact _https://alexwohlbruck.github.io/cat-facts/_ This app will combine APIs and Services from the web to do just one thing… send cat facts. --- I hope you like this reading! 🎁 You can get my new book `Underrated skills in javascript, make the difference` for FREE if you follow me on [Twitter](https://twitter.com/code__oz) and send message to me 😁 and **SAVE 19$** 💵💵 Or get it [HERE](https://codeoz.gumroad.com/l/RXLYp) 🇫🇷🥖 For french developper you can check my [YoutubeChannel](https://www.youtube.com/channel/UCC675U1ZUPFASsK9-FjawtA) 🎁 [MY NEWSLETTER](https://www.getrevue.co/profile/code__oz) ☕️ You can [SUPPORT MY WORKS](https://www.buymeacoffee.com/CodeoZ) 🙏 🏃‍♂️ You can follow me on 👇 🕊 Twitter : [https://twitter.com/code__oz](https://twitter.com/code__oz) 👨‍💻 Github: https://github.com/Code-Oz And you can mark 🔖 this article!
codeoz
882,120
1 line of code: How to get the index of the lowest numeric item of an Array
const indexOfLowestNumber = arr =&gt; arr.indexOf(Math.min.apply(null,arr)); Enter fullscreen...
15,146
2021-10-30T14:56:46
https://dev.to/martinkr/1-line-of-code-how-to-get-the-index-of-the-lowest-numeric-item-of-an-array-5d8o
javascript, webdev, performance, codequality
```javascript const indexOfLowestNumber = arr => arr.indexOf(Math.min.apply(null,arr)); ``` Returns the index of the first occurrence of the lowest numerical item of the array. --- ## The repository & npm package You can find the all the utility functions from this series at [github.com/martinkr/onelinecode](https://github.com/martinkr/onelinecode) The library is also published to [npm as @onelinecode](https://www.npmjs.com/package/@onelinecode/onelinecode) for your convenience. The code and the npm package will be updated every time I publish a new article. --- Follow me on [Twitter: @martinkr](http://twitter.com/_martinkr) and consider to [buy me a coffee](https://www.buymeacoffee.com/martinkr) Photo by [zoo_monkey](https://unsplash.com/@zoo_monkey) on [Unsplash](https://unsplash.com/s/photos/fuji) ---
martinkr