972,989
Hey everyone 🖐. I'm cushions, new here.
A post by BIG CUSHIONXX
0
2022-01-30T18:46:25
https://dev.to/cushions/hey-everyone-im-cushions-new-here--2do3
cushions
973,040
Formidablejs: The one person framework
What is Formidable? Formidable (or formidablejs) is a Laravel inspired API framework for...
0
2022-01-30T22:10:11
https://dev.to/donald/formidablejs-the-one-person-framework-mp9
showdev, node, news, mvc
## What is Formidable?

Formidable (or formidablejs) is a Laravel-inspired API framework for building backend applications. It uses [Imba](https://imba.io) by default and has native support for JavaScript and TypeScript. While Formidable is meant to provide a smooth process for building API applications, you may also use it to build fullstack applications thanks to [Inertia](https://inertiajs.com/) and [Laravel Mix](https://laravel-mix.com/), providing that "one person framework" feel. Formidable borrows a lot of its features from Laravel, and also uses [Fastify](https://www.fastify.io/) under the hood.

## A few Formidable features

#### Database

Formidable has out-of-the-box support for `SQL` queries and `Redis`. The SQL data layer is powered by Knex.js, while the Redis data layer is powered by Node-Redis.

#### Migrations

Out of the box, Formidable provides a simple database migration system that allows you to define and share your database schema definition. This is a great way to ensure that your database schema is always in sync with your application code.

#### Auth

Formidable provides a starter authentication system for both `session`- and `jwt`-based applications. By default, `session`-based authentication is enabled. The `session`-based system uses `cookies` and stores session data in `memory`, `file` or `redis`, while the `jwt`-based system uses JWT tokens and stores authentication data in the database.

#### Routing

Just like in any other framework, routing is a standard feature. Formidable provides a routing system similar to Laravel's router: you can easily group your routes, limit them to specific middlewares, and so on.

#### Error Handling

Formidable has an Error Handler class which allows you to intercept any errors and return a different response. Formidable also provides Bugsnag out of the box.

#### CLI Tool

Craftsman is the command-line interface included with Formidable.
Craftsman is installed as a global package and in every Formidable application. It provides a number of helpful commands that can assist you while you build your application.

## Project setup

The first thing you want to do is install the CLI tool:

```bash
$ npm i -g @formidablejs/craftsman
```

Once the CLI installation is done, run the following command:

```bash
$ craftsman new project-name --web
```

cd into the project folder:

```bash
$ cd project-name
```

<b>Optional:</b> should you want to install Vue.js or React, run the following commands:

```bash
$ craftsman inertia
$ npm run mix:dev
```

When done with everything, serve your application using the following command:

```bash
$ craftsman serve --dev
```

Once Formidable is running, you can go to `http://localhost:3000` in your browser to see if your application was successfully created. You should see the following:

![Formidable Welcome Screen](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1m0ce84q3qb2jchp9hko.png)

## Project structure

Let's take a look at what our project looks like:

| Directory | Description |
|:-------------------------|:-------------|
| `/app` | Contains the core code of your application. |
| `/app/Http/Controllers` | Contains application controllers. |
| `/app/Http/Middleware` | Contains request middlewares. |
| `/app/Http/Requests` | Contains form and API requests. |
| `/app/Http/Models` | Houses `bookshelf` models. |
| `/app/Http/Resolvers` | Contains application service resolvers. |
| `/bootstrap/cache` | Contains the cached config file and database settings file. |
| `/config` | Contains application configuration files. |
| `/database/migrations` | Houses your application migration files. |
| `/public` | Houses your assets such as images, JavaScript, and CSS. |
| `/resources/lang` | Contains language files. |
| `/resources/views` | Contains Imba view class files. |
| `/routes` | Contains application routes. |
| `/storage/framework` | Contains core application data. |
| `/storage/session` | Contains application sessions. |

> Note: in some cases you might see more folders; this depends on the type of project you created.

## Demo

#### Creating your first route

To add a new route, open the `routes/web` routes file and add the following lines at the bottom:

```py
Route.get '/ping', do 'pong'
```

Now, when visiting `http://localhost:3000/ping`, you should see `pong`.

#### Creating a controller

In the section above, I showed you how to create a route. Now, let's create a controller and map it to the route:

```bash
$ craftsman make controller HelloController
```

Once created, you can open `app/Http/Controllers/HelloController` and you should see the following code:

```py
import Controller from './Controller'

export class HelloController < Controller
```

Now create an action in the controller:

```py
import Controller from './Controller'

export class HelloController < Controller

	def index
		'Hello World'
```

After adding the `index` action, you can go to your `routes/web` file and import your new controller:

```py
import { HelloController } from '../app/Http/Controllers/HelloController'
```

Once you've imported your controller, you can add a new route and map it to the action you created in the controller:

```py
Route.get 'hello', [HelloController, 'index']
```

You should now see `Hello World` when visiting `http://localhost:3000/hello`.

> For a fullstack demo application with CRUD operations, see: [https://github.com/donaldp/pingcrm](https://github.com/donaldp/pingcrm)

![PingCRM Demo Application](https://raw.githubusercontent.com/donaldp/pingcrm/dev/screenshot.png)

## Conclusion

While Formidable provides a lot of useful features, it still has a long way to go before hitting a stable release. For now, just play around with it, report bugs and contribute if you can!
Documentation: [https://formidablejs.org](https://formidablejs.org)
GitHub: [https://github.com/formidablejs](https://github.com/formidablejs)
PingCRM Demo: [https://github.com/donaldp/pingcrm](https://github.com/donaldp/pingcrm)
donald
973,799
Documenting your API with Swagger 2
So after having built our API and then secured it, we will now document it to allow others to use...
0
2022-01-31T13:15:43
https://dev.to/erwanlt/documenting-your-api-with-swagger-2-af6
tutorial, springboot, java, programming
![Jedi archives](https://cdn-images-1.medium.com/max/2000/1*g9tMh-Nn0gocNGwgDvJxhA.png)

So after having [built our API](https://dev.to/erwanlt/building-a-simple-rest-api-with-springboot-53mc) and then [secured it](https://dev.to/erwanlt/securing-your-rest-api-with-springsecurity-57f7), we will now document it to allow others to use it more easily. To do that, we will implement [Swagger](https://swagger.io/).

## What you will need

In order to follow this tutorial, you will need a REST API, so you can:

* Follow the [tutorial to build your API](https://medium.com/javarevisited/building-a-simple-rest-api-with-springboot-3f2e4b123ebb)
* Clone the master branch of [this repository](https://github.com/ErwanLT/HumanCloningFacilities)
* Have your own API ready

## What it will look like

![UML representation](https://cdn-images-1.medium.com/max/5424/1*-4H-EHMutQSg21bb1g0O-w.png)

## I document, you document, we document

### The first dependency

If we want to use Swagger for our API, we first need to add a Maven dependency:

```xml
<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger2</artifactId>
    <version>2.9.2</version>
</dependency>
```

### The Java configuration

To enable Swagger, we need to configure it, so let's create a configuration class.

{% gist https://gist.github.com/ErwanLT/0bf1a720652eaed3b574ebe51f7e066f.js %}

As you can see, in the `apiInfo` method I describe the general information of my API:

* The title
* The description
* The version
* The terms of service (URL)
* The contact to reach the API owner
* The licence
* The licence URL
* The API vendor's extensions

All this information will be displayed later.
### The generated JSON

If I start my application and go to [http://localhost:8080/v2/api-docs](http://localhost:8080/v2/api-docs), I will see a JSON representation of my documentation.

![swagger-json](https://cdn-images-1.medium.com/max/3040/1*r4emRSBTD8z_VhAaAxWVfw.png)

As you can see, I find all the information that I filled in my configuration class. But let's be honest: a JSON file is good, but a UI would be better. So let's implement one.

### The swagger-ui interface

To enable the UI, we don't need much work; adding a Maven dependency is enough:

```xml
<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger-ui</artifactId>
    <version>2.9.2</version>
</dependency>
```

Then, if I restart my application and go to [http://localhost:8080/swagger-ui.html](http://localhost:8080/swagger-ui.html), I can see a beautiful (all relative) interface that displays my API information.

![swagger-ui](https://cdn-images-1.medium.com/max/2176/1*6K4tTvp5TZwM0DrwCJj-gg.png)

So now we have our Swagger UI, but there is no true documentation on it yet: we have our endpoints, and if we expand them we get some information, but nothing well documented. So let's add our documentation.

### Documenting our controller

With the previous step completed, if I expand the POST endpoint I should see something like this:

![](https://cdn-images-1.medium.com/max/5524/1*ehI0qfCMAsLhmcN7TC7o3w.png)

So let's add some documentation on my [controller](https://javarevisited.blogspot.com/2017/08/difference-between-restcontroller-and-controller-annotations-spring-mvc-rest.html#ixzz6OYNB9oii).

{% gist https://gist.github.com/ErwanLT/0ecffe452d0f278952e2eaa3364a434d.js %}

With:

* **@ApiOperation**: I can add a more detailed description of my endpoint (value), and specify what it consumes and produces.
* **@ApiResponses**: I describe the return codes of my API, here 200 and 500.
So if I restart my application (again), I should be able to see the change:

![](https://cdn-images-1.medium.com/max/5524/1*joOGe0LzCYY_a6_cAxwCsw.png)

![](https://cdn-images-1.medium.com/max/5524/1*UKyScMqmRi5X9NoghiRFYQ.png)

That's better. We have seen that in my API documentation my endpoints are documented, but I also have a section about my models (what my API produces or consumes), and these models can be documented too. So let's go and add some documentation to them.

### Documenting our models

I have my Clone class as my model; in it I have a few properties that are not documented, and it is represented in my Swagger like this:

![](https://cdn-images-1.medium.com/max/2416/1*N--NazNwxdIBKSpWxAYUrQ.png)

Apart from the type of my properties, there is not much info here. To add some, we will use the **@ApiModel** annotation on the class and the **@ApiModelProperty** annotation on properties, which allows us to add definitions such as a description (value), name, data type, example values, and allowed values. So now my Clone class will look like this:

{% gist https://gist.github.com/ErwanLT/46488b2f44ccbcd454c4a7d5fc1905bc.js %}

And in my swagger-ui, the model will now look like this:

![](https://cdn-images-1.medium.com/max/2000/1*J8Te-bmUlMFRvZ59sveHcg.png)

As you can see, the mandatory fields that I have described are marked with a star (*).

## Testing our API with Swagger

So now that we have our Swagger running and well documented, what can we do with it? We can test our endpoints! Scrolling through the swagger-ui page, you should have noticed this button on each one of our endpoints:

![](https://cdn-images-1.medium.com/max/2000/1*dGt-aUiJCsP3yQWe6bK2PA.png)

If you click on it, it will slightly change your page so that you can write the body of the request, or edit some parameters:

![](https://cdn-images-1.medium.com/max/2000/1*B8xlnkzOxrHYyYmpAQEvLQ.png)

Then you can execute the query against your localhost and see the result below.
![](https://cdn-images-1.medium.com/max/2300/1*OcgVtYiEqBe7nfEcC2qkVA.png)

Thanks for reading! As before, the code used in this tutorial can be found in [this GitHub repository](https://github.com/ErwanLT/HumanCloningFacilities), on the swagger branch.
erwanlt
973,820
How to Check BIOS Version and Update It in Windows 10, 8 or 7
Read this article to find out how to check the BIOS version on your PC, check for BIOS updates and...
0
2022-01-31T13:34:16
https://dev.to/hetmansoftware/how-to-check-bios-version-and-update-it-in-windows-10-8-or-7-4g9n
beginners, testing, tutorial, security
Read this article to find out how to check the BIOS version on your PC, look for BIOS updates, and install them from Windows or DOS.

Updating or rolling out a new version of the operating system, the release of a new processor, or added support for a new security standard may require upgrading the microcode of your motherboard. Updating the BIOS may unlock new functions, improve security or make the computer boot faster. So if the manufacturer of your hardware releases an update, it's recommended to install it.

You can update the BIOS both in Windows and DOS. It's easier in Windows, but whatever way you choose, the first step should be finding out the current version of your BIOS. If you know the current version number, you can see if there are any updates for your PC on the manufacturer's website.

YouTube: {% youtube S_RUSyf-jwQ %}

## Find out your BIOS version

We are going to show you how to do it in Windows 10, 8 or 7. Launch the "Run" window with the key combination Windows + R, type the command msinfo32 and press Enter.

![Image description](https://hetmanrecovery.com/pic/blog/a138/run.jpg)

In the System Information window, find the line BIOS Version/Date. In the same field, you can find information on the manufacturer of your motherboard, the BIOS version and its date. If your computer is built by a large company, for example Dell or HP, this field will also specify the system manufacturer. Write down or remember what the BIOS field says. Using the manufacturer data, you can find its website and check it for the latest available BIOS updates. If the version and date of the BIOS update published on the website are the same as what System Information tells you, you are already using the latest BIOS version available.

![Image description](https://hetmanrecovery.com/pic/blog/a138/system-information.jpg)

## Check for BIOS updates

Unfortunately, there is no way to check for BIOS updates with a Windows command.
There is no universal method to check for BIOS updates that fits all laptops and computers without exception. Our advice is to check the manufacturer's website for updates as the first step. A little earlier in this article, you saw how to find the BIOS manufacturer's name. If this name suggests a large company producing laptops or PCs, find your model on their website. On the product page, you should see the computer's specifications and a downloads tab, where you can find drivers and BIOS update tools.

If you have data on the motherboard manufacturer, visit its website, locate the downloads tab, and find a tool for updating the BIOS. Compare the latest available version with the BIOS version you have, and if the website offers newer firmware, download it.

A BIOS update tool for AMI-based motherboards:

![Image description](https://hetmanrecovery.com/pic/blog/a138/bios.jpg)

## Update BIOS in Windows or DOS

Usually, a tool for updating the BIOS comes with installation instructions. For branded laptops, like Dell or HP, this is an ordinary EXE file which can be launched, and then it will do the rest. Most of the time, an update installer is put inside a ZIP file together with the manual. Extract it and read the advice attentively.

When you decide to update the BIOS, make sure all other applications are closed, and connect your laptop or computer to a reliable source of power. A power cut or power surge can brick your equipment, so it's better to be on the safe side.

A considerable advantage of updating the BIOS in Windows is the automatic check that prevents installation of software which is incompatible with your hardware components. That is, if you try to install an update meant for a different Dell or HP computer, the program will show you a warning message and close. With a DOS utility, though, you will have to study the manual and the special commands for updating.

YouTube: {% youtube BOWhvngUWZE %}
hetmansoftware
974,208
ES6: Object destructuring
Introduction In this blog article, we shall learn about object destructuring in JavaScript....
0
2022-01-31T20:19:50
https://dev.to/naftalimurgor/es6-object-destructing-3nn0
tutorial, begginers, javascript, webdev
## Introduction

In this blog article, we shall learn about object destructuring in JavaScript. Object destructuring syntax was introduced in ES6 to make accessing object properties much easier and cleaner.

## Object destructuring

Pre-ES6, you'd normally read object properties and store the values associated with those properties in variables like this:

```typescript
// some code omitted
const result = {
  userId: 'dummy_id_001',
  username: 'foo_bar',
  avatar: 'https://gravatar/xrkpxys/k1szh.png',
  accent: '#fff'
}

// reading a few properties off this object literal: pre-ES6
var username = result.username
var accent = result.accent
```

In ES6, the above becomes:

```typescript
// some code omitted
const result = {
  userId: 'dummy_id_001',
  username: 'foo_bar',
  avatar: 'https://gravatar/xrkpxys/k1szh.png',
  accent: '#fff'
}

// reading a few properties off this object literal: ES6
let { username, accent, userId } = result
// now use username, accent as normal variables
```

This is especially useful if you need to read more than one property from the same object.

## Summary

Object destructuring syntax provides a cleaner way of accessing more than one property off an object literal. Use object destructuring when accessing more than one property of an object, and pre-ES6 syntax (using the dot operator) when accessing only one:

```typescript
// possible code omitted
const username = result.username // OK for a single property
const { accent, avatar, userId } = result // use object destructuring
```

***

Found this article helpful? You may follow my twitter handle [@nkmurgor](https://twitter.com/nkmurgor) where I tweet about interesting topics on web development.
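As a small addendum to the article above: destructuring also supports renaming a property into a differently named variable and supplying default values for missing properties. A minimal sketch (the variable names are illustrative):

```javascript
const result = {
  userId: 'dummy_id_001',
  username: 'foo_bar',
};

// Rename `username` to `name`, and fall back to '#000' when `accent` is absent.
const { username: name, accent = '#000' } = result;

console.log(name);   // 'foo_bar'
console.log(accent); // '#000' — `result` has no `accent` property
```

Defaults only kick in when the property is `undefined`, which makes them handy for optional configuration objects.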
naftalimurgor
975,014
Cool Login
A post by HARUN PEHLİVAN
0
2022-02-01T13:22:54
https://dev.to/harunpehlivan/cool-login-dcm
codepen
{% codepen https://codepen.io/harunpehlivan/pen/mdqPJWJ %}
harunpehlivan
975,030
Managing State with React Hooks
Early last year React 16.8 was released, bringing with it the addition of hooks. This introduction of...
0
2022-02-01T13:56:25
https://sabinadams.hashnode.dev/managing-state-with-react-hooks
react
--- title: Managing State with React Hooks published: true date: 2020-08-04 21:35:27 UTC tags: react canonical_url: https://sabinadams.hashnode.dev/managing-state-with-react-hooks cover_image: https://cdn.hashnode.com/res/hashnode/image/upload/v1596576963332/3skT3B65k.png --- Early last year React 16.8 was released, bringing with it the addition of `hooks`. This introduction of hooks gave developers the ability to take many of the useful features of React (such as Lifecycle Events, Context, and State) available in `class components` and use them inside of `function components`. In this article we will take a look at a few ways we can use hooks to manage state within function components. > If you’d like to know more about the How’s and Why’s of using function components with hooks vs. class components, check out [React’s Introduction to Hooks](https://reactjs.org/docs/hooks-intro.html) _The application built in this tutorial is available on [github](https://github.com/sabinadams/hooks-state-playground)_ * * * ## The `useState` Hook The `useState` hook allows you to declare state variables in a function component. This data is preserved on re-renders by React exactly like a state variable in a class component is preserved. To use the hook we simply have to import it and call it, passing in an initial value. Let’s give it a try! ### Declaring a state variable with `useState` First, let’s declare a state variable named `points` that we will use to hold a numeric point value that will be used by a point counter. ``` import React, { useState } from 'react' function Points() { // Creates a state variable named 'points' whose initial value is 0 const [points, setPoints] = useState(0) // ... ``` As we can see, `useState()` takes one parameter, the variable’s initial value. In this case we set the initial value to 0, however we could have assigned it an Object, Array, String, etc… Whatever you fancy! The return value of `useState()` is an array containing two items. 
The first is the most recent value of that state variable. The second is a function used to update that state variable. We use destructuring to grab both of those items out of the return. ### Updating a state variable Now we can use the `setPoints()` returned by `useState()` to update the `points` state variable! ``` // ... const [points, setPoints] = useState(0) return ( <> <button onClick={ () => setPoints( points + 1) }>Add</button> </> ) // ... ``` In the code above, every time the user clicks the Add button, the points state variable will be `incremented` by one. ### Putting it all together Okay, so now we have a state variable and have the ability to update that variable! Let’s put those things together. Below is the completed Points component. Notice we can call `setPoints()` from within another function in the component, `resetPoints()` in this case. ``` import React, { useState } from 'react' import '../assets/scss/Points.scss' function Points() { // Creates a state variable named 'points' whose initial value is 0 const [points, setPoints] = useState(0) // Function that will reset the point count const resetPoints = () => setPoints(0) return ( <div id="Points"> <p className="title">Points (useState)</p> <hr className="divider"/> <div className="pointsContainer"> <div className="buttons"> {/* These buttons use the setPoints function to update the state variable's value */} <button className="button add" onClick={() => setPoints( points + 1 )}>Add</button> <button className="button subtract" onClick={() => setPoints( points - 1 )}>Subtract</button> <button className="button reset" onClick={resetPoints}>Reset</button> </div> <div className="outputBox"> {/* Output the points variable */} <p>{ points }</p> </div> </div> </div> ); } export default Points ``` * * * ![useStateGif-6.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1596575924767/jx7A2cqM6.gif) Pretty simple stuff, right? 
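The `[value, setter]` pair returned by `useState` is just an ordinary two-element array, which is why array destructuring works on it. Here is a toy, non-React sketch of that shape (`toyUseState` is a hypothetical helper; real React returns the current value directly and triggers a re-render when the setter runs):

```javascript
// A toy stand-in for useState: a getter/setter pair over a closure.
// A getter is used here only so the updated closure value stays observable
// without React's re-render machinery.
function toyUseState(initialValue) {
  let value = initialValue;
  const getValue = () => value;
  const setValue = (next) => { value = next; };
  return [getValue, setValue];
}

const [getPoints, setPoints] = toyUseState(0);
setPoints(getPoints() + 1); // like the Add button's onClick
console.log(getPoints()); // 1
```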
Using hooks, we have successfully set up and updated a state variable within a function component. Now you might be wondering, _“What if I have a really complex state in my component with deeply nested data? Is there a cleaner way to set up the state?“_ In the next section we’ll take a look at an alternative to `useState()` that addresses this exact problem! * * * ## The `useReducer` Hook The `useReducer` hook provides the ability to manage state in a way that is very similar to how Redux manages state with reducers and dispatched actions. > If you aren’t sure what a reducer is, it is basically a function that updates a state based on an action that gets sent to it. For more detailed information, check out [Redux’s documentation on reducers](https://redux.js.org/basics/reducers) To use `useReducer` we have to import and call it, passing in a reducer and the initial state of the component. ### Setting up the state and reducer with `useReducer` Let’s start by creating a simple reducer, which will define how to update the state based on certain actions, and tell the component to use that reducer. ``` import React, { useReducer } from 'react' // The initial state of the component const initialState = { points: 0 } // The reducer we are going to use function reducer( state, action ) { switch ( action.type ) { case 'add': return { points: state.points + 1 }; default: throw new Error(); } } function Points() { // Sets up a reducer to handle the state const [state, dispatch] = useReducer( reducer, initialState ); // ... ``` In the code above, we set up our initial state to be an object with a single key, `points`, whose value is 0. This example is simple, but `useReducer` really shines as the state gets more complex. The hook takes in a `reducer` as its first argument and the `initial state` as its second argument. With this all set up, we now have a reducer available to the component that is ready to handle actions and maintain state across re-renders! 
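Because the reducer is a plain `(state, action) => newState` function, it can be exercised on its own, outside React — a handy way to sanity-check state transitions before wiring it into `useReducer` (a sketch using the same reducer shape as above):

```javascript
// The same pure reducer shape as in the component above.
const initialState = { points: 0 };

function reducer(state, action) {
  switch (action.type) {
    case 'add':
      return { points: state.points + 1 };
    default:
      throw new Error();
  }
}

// Replaying actions by hand, the way useReducer's dispatch would:
const afterOne = reducer(initialState, { type: 'add' });
const afterTwo = reducer(afterOne, { type: 'add' });
console.log(afterTwo.points);     // 2
console.log(initialState.points); // 0 — each step returned a new object
```

Note that each call returns a fresh object instead of mutating `state`; that immutability is what lets React detect the change.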
> `useReducer` can also take in a third argument, a function that allows you to lazily initialize the state. This can be useful if you would like to calculate the initial state outside of the reducer. Also, because this initialization function is outside of the reducer, it may be used later on to reset the state. See [the docs](https://reactjs.org/docs/hooks-reference.html#lazy-initialization) for more info. ### Dispatching actions to the reducer to update the state `useReducer()` returns an array with two items. The first being the most current version of the state and the second a `dispatch` function. This function is used exactly the same way it would be in Redux. ``` // *** return ( <> <button onClick={() => dispatch({type: 'add'})}>Add</button> </> ); // *** ``` When the button is clicked, the `dispatch` function is fired with a type of `add`, which gets caught by the reducer and results in `state.points` being incremented by 1. ### Putting it all together Awesome! Now let’s put all of that together. Here is a completed component using these concepts. 
``` import React, { useReducer } from 'react' import '../assets/scss/Points.scss' // The initial state of the component const initialState = { points: 0 } // The reducer we are going to use function reducer( state, action ) { switch (action.type) { case 'add': return { points: state.points + 1 }; case 'subtract': return { points: state.points - 1 }; case 'reset': return { points: 0 } default: throw new Error(); } } function Points() { // Sets up a reducer to handle the state const [state, dispatch] = useReducer( reducer, initialState ); return ( <div id="Points"> <p className="title">Points (useReducer)</p> <hr className="divider"/> <div className="pointsContainer"> <div className="buttons"> {/* These buttons use the dispatch to update the state */} <button className="button add" onClick={() => dispatch({type: 'add'})}>Add</button> <button className="button subtract" onClick={() => dispatch({type: 'subtract'})}>Subtract</button> <button className="button reset" onClick={() => dispatch({type: 'reset'})}>Reset</button> </div> <div className="outputBox"> {/* Output the points variable */} <p>{ state.points }</p> </div> </div> </div> ); } export default Points ``` * * * ![useReducerGif-1.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1596576685513/mCDT1eOf8.gif) So there we have it! A function component using a reducer to manage state. This way of managing state is preferable over `useState` because of some performance boosts that are mentioned in [React’s documentation](https://reactjs.org/docs/hooks-reference.html#usereducer) on the `useReducer` hook. * * * ## Conclusion As you can see from the examples above, React’s implementation of hooks has not only brought over the useful feature of statefulness from class components to function components, but has also made it pretty easy to do! We looked at two ways to keep state in a function component. First, using `useState` and then using `useReducer`. 
In general, `useReducer` is the way to go when dealing with larger data sets in the state, while `useState` is great for components with a simpler state. > The next step to this would be sharing state across multiple function components! In my tutorial [Shared State with React Hooks and Context API](https://www.sabinthedev.com/shared-state-with-react-hooks-and-context-api/) I cover one of the more popular ways out there to achieve this. Give it a read if you’re interested! Thanks for reading, I hope you enjoyed!
sabinthedev
975,061
How to create autocomplete with react-autocomplete-pure
Autocomplete in input fields is a very useful feature that allows customers to improve their UX when...
0
2022-02-01T16:23:17
https://dev.to/qramilq/have-full-control-of-your-autocomplete-in-reactjs-with-react-autocomplete-pure-component-567p
typescript, react, autocomplete
Autocomplete in input fields is a very useful feature that improves the UX of your site. One of the goals of autocomplete is to help users enter data without forcing them to type the entire text, by providing a ready-made set of options. Thanks to this approach, the user can choose exactly from those options that the application knows how to work with, saving us from unnecessary checks and errors.

Some of the first ready-made solutions that come to mind are libraries such as [react-autocomplete](https://github.com/reactjs/react-autocomplete) and [react-autosuggest](https://github.com/moroshko/react-autosuggest). These are great libraries that do exactly what they are supposed to do: when you enter a value in the input field, they display a dropdown with possible options for substitution. Unfortunately, these libraries are no longer actively supported by their maintainers (`react-autosuggest` is looking for a main maintainer, and `react-autocomplete` is archived). In this regard, I decided to write (yes, this is sooo classic :smile:) my own vision of a component for autocompletion.

Let me introduce [react-autocomplete-pure](https://github.com/qramilq/react-autocomplete-pure), a TypeScript-friendly React component for autocomplete. The main idea I put into this component is to allow the finest possible configuration of everything that may be required when developing specifically for your project.
Below are the key features `react-autocomplete-pure` gives you:

* full control over the rendering of all parts of the component (input field, list, the list container and its composition);
* written in TypeScript, which makes it possible to take full advantage of typings with support for generics;
* keyboard events support;
* a11y support.

`react-autocomplete-pure` has almost no state of its own, which means its state needs to be managed in the parent component. This keeps the component as dumb as possible, which allows us to keep all the logic in one place; the component will only manage the display based on the props it is passed.

## Usage example

Suppose the user wants to enter the name of a movie from IMDb's top 100 movies into our input field. Well, there is nothing easier! Let's add a field for autocompletion and show the user the available movie options as they type.

First, let's install `react-autocomplete-pure` in our project using `npm`:

```sh
npm i react-autocomplete-pure
```

or via `yarn`:

```sh
yarn add react-autocomplete-pure
```

> **_NOTE:_** Before continuing, I would like to clarify that the code below is written in TypeScript so that we have type support and better DX when working in our IDE. Optionally, types can be omitted if you want to write in vanilla JS.

We know that movies will come to us as an array of objects from our backend ([example](https://github.com/qramilq/react-autocomplete-pure/blob/master/site/mock.ts#L4)). Each object in this array is a movie with its title and release year.
```ts
type Film = { title: string; year: number };

const topFilms: Film[] = [
  { title: "The Shawshank Redemption", year: 1994 },
  { title: "The Godfather", year: 1972 },
  /*...and more...*/
];
```

So, now that we know the incoming data format, it's time to add the component to the project:

```tsx
import { AutocompletePure } from "react-autocomplete-pure";
import { Film } from "./types";

export function App() {
  return (
    <div>
      <h1>My awesome app with autocomplete</h1>
      <AutocompletePure<Film> /* what about the required props? */ />
    </div>
  );
}
```

We've added the component to the project, but we haven't passed any props to it yet. Let's fix this. According to the available props in the [documentation](https://github.com/qramilq/react-autocomplete-pure#props), we have several required props. Don't be afraid that there are so many of them; they are all intuitive, and it is thanks to them that you can fully control the behavior of the component :smile:. Let's update our code.

> **_NOTE:_** The `fetchFilms` function is taken from [here](https://github.com/qramilq/react-autocomplete-pure/blob/master/site/mock.ts#L131).

```tsx
import { useCallback, useState } from "react";
import {
  AutocompletePure,
  AutocompletePureProps,
  RenderItem,
} from "react-autocomplete-pure";
import { fetchFilms } from "./mock";
import { Film } from "./types";

// let's add some style if item is highlighted
const renderItem: RenderItem<Film> = (item, { isHighlighted }) => (
  <span style={{ fontWeight: isHighlighted ?
700 : 400 }}>{item.title}</span>
);

// Used to derive the new input value when selecting via keyboard
const getSuggestionValue = (item: Film) => item.title;

export function App() {
  const [isOpen, setIsOpen] = useState<boolean>(false);
  const [suggestions, setSuggestions] = useState<Film[]>([]);
  const [value, setValue] = useState<string>("");

  // When the input changes, save the value.
  // If the change reason is typing, fetch new items, save them, and keep the
  // dropdown open only if something was fetched.
  // If the change reason is the Enter key, simply close the dropdown.
  const handleChange: AutocompletePureProps<Film>["onChange"] = useCallback(
    async (_event, { value, reason }) => {
      setValue(value);
      if (reason === "INPUT") {
        const newFilms = await fetchFilms(value);
        setSuggestions(newFilms);
        setIsOpen(Boolean(newFilms.length));
      } else if (reason === "ENTER") {
        setIsOpen(false);
      }
    },
    []
  );

  // When an item is selected, save it and close the dropdown
  const handleSelect: AutocompletePureProps<Film>["onSelect"] = useCallback(
    (_event, { item }) => {
      const value = getSuggestionValue(item);
      setValue(value);
      setIsOpen(false);
    },
    []
  );

  return (
    <div>
      <h1>My awesome app with autocomplete</h1>
      <AutocompletePure<Film>
        open={isOpen}
        value={value}
        items={suggestions}
        onChange={handleChange}
        onSelect={handleSelect}
      />
    </div>
  );
}
```

Our component is almost ready to use, except that we currently do not hide the list when the user clicks somewhere outside the component. This is easy to fix: the component calls the `onClickOutside` callback, in which we can implement the logic for hiding the list.

```tsx
/* ...same as before... */

export function App() {
  /* ...same as before... */

  const handleClickOutside = useCallback((_event: Event) => {
    setIsOpen(false);
  }, []);

  /* ...same as before... */

  return (
    <AutocompletePure<Film>
      open={isOpen}
      value={value}
      items={suggestions}
      onChange={handleChange}
      onSelect={handleSelect}
      onClickOutside={handleClickOutside}
    />
  );

  /* ...same as before...
*/
}
```

That's all, now the component can be fully used! Congratulations, you did it!

You can play with it more in the sandbox:

{% codesandbox vibrant-field-mmrow %}

If you want to see more of the component's features (like custom renderers for its parts), check out the [site folder](https://github.com/qramilq/react-autocomplete-pure/blob/master/site/App.tsx) in the repository.
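As a small aside: the `fetchFilms` mock referenced above essentially boils down to a case-insensitive filter over a film list. Here is a minimal standalone sketch of that idea (my own assumption of the logic; `filterFilms` is a hypothetical helper, not part of the library's API):

```typescript
// Hypothetical helper, not part of react-autocomplete-pure's API:
// a fetchFilms-style suggestion filter over a static film list.
type FilmItem = { title: string; year: number };

const films: FilmItem[] = [
  { title: "The Shawshank Redemption", year: 1994 },
  { title: "The Godfather", year: 1972 },
  { title: "The Godfather: Part II", year: 1974 },
];

// Case-insensitive substring match on the title; empty input yields no suggestions.
function filterFilms(list: FilmItem[], input: string): FilmItem[] {
  const needle = input.trim().toLowerCase();
  if (!needle) return [];
  return list.filter((film) => film.title.toLowerCase().includes(needle));
}

console.log(filterFilms(films, "godfather").map((f) => f.title));
// → ["The Godfather", "The Godfather: Part II"]
```

In a real app you would likely also debounce calls like this inside `onChange` so that a request isn't fired on every single keystroke.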
qramilq
975,271
modern and easy user login validation
I made a modern user login validation powered by volder npm package volder:...
0
2022-02-01T17:55:51
https://dev.to/alguercode/modern-and-easy-user-login-validation-2l75
javascript, webdev, react, css
I made a modern user login validation powered by the volder npm package.

volder: [https://github.com/devSupporters/volder](https://github.com/devSupporters/volder)

Project link on GitHub: [https://github.com/alguerocode/js-volder](https://github.com/alguerocode/js-volder)

![login](https://user-images.githubusercontent.com/75932477/152019675-586596f1-1670-403a-815f-a00b69302740.gif)

**volder** is a powerful object schema validation library. It lets you describe your data using a simple and readable schema and transform a value to match the requirements; it has custom error messages, custom types and nested schemas.
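For readers curious what schema-based validation boils down to, here's a tiny framework-free sketch of the idea. This is a hand-rolled illustration only; volder's actual API differs (see its README), and `loginSchema`/`validate` are hypothetical names:

```javascript
// Hand-rolled illustration of schema-based validation; volder's real API
// differs (see its README), and these names are hypothetical.
const loginSchema = {
  username: (v) => typeof v === "string" && v.length >= 3,
  password: (v) => typeof v === "string" && v.length >= 8,
};

// Run every field check and collect the failures.
function validate(schema, data) {
  const errors = [];
  for (const [field, check] of Object.entries(schema)) {
    if (!check(data[field])) errors.push(`${field} is invalid`);
  }
  return { valid: errors.length === 0, errors };
}

console.log(validate(loginSchema, { username: "al", password: "12345678" }));
// valid: false, errors: ["username is invalid"]
```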
alguercode
975,622
Uploading file to the server using Node and HTML5
In this article you will learn how you can upload a file to the server using node.js and HTML5. ...
0
2022-02-01T23:18:58
https://www.rupeshtiwari.com/uploading-file-to-server-using-nodejs/
webdev, tutorial, beginners, node
---
title: Uploading file to the server using Node and HTML5
published: true
date: 2021-09-09 00:00:00 UTC
tags: webdev,tutorial,beginners,nodejs
canonical_url: https://www.rupeshtiwari.com/uploading-file-to-server-using-nodejs/
---

> In this article you will learn how you can upload a file to the server using node.js and HTML5.

## Client side file upload

On the client side we need to use a `file` type `<input>` HTML element that can hold the file content from the client machine/device. Remember, the file input element holds the selected file so that it can be submitted with the form.

```html
<input type="file" name="filetoupload" /> <br />
```

The input element with `type="file"` allows the user to choose one or more files from their device (mobile or machine). The chosen file can then be uploaded to the server using form submission.

🏆 **Pro Tip** Using the File API, which was added to the DOM in HTML5, it's now possible for web content to ask the user to select local files and then read the contents of those files. This selection can be done by either using an HTML `<input type="file">` element or by drag and drop. The File API makes it possible to access a FileList containing File objects representing the files selected by the user.

## What is multipart/form-data?

Suppose you have some unstructured data, large or small; say you want to upload an image or an Excel file. In that case you should upload the file to the server as binary data, that is, as raw bytes. Therefore, you should instruct your HTML form to **not encode** the file input's value and just send it to the server in raw binary format. To achieve this you must set `enctype="multipart/form-data"` on your form tag. Example:

```html
<form action="fileupload" method="post" enctype="multipart/form-data">
  <input type="file" name="filetoupload" /> <br />
  <input type="submit" />
</form>
```

In this example, I want to send the data in binary format to the server.
And let the server do the parsing of the file and create or save a file on the server disk. So by now we understand that, on the client machine, the browser reads the file content and puts it in the HTML form for further access. I will create a submit button to post the form with the content of the file uploaded by the client.

## How to parse files on the server?

Well, you could do your own parsing; however, I will choose the [formidable](https://www.npmjs.com/package/formidable) node package to do the parsing for me. This module is excellent, and it can be used for video and image files as well.

In the server file let's create an upload method.

```js
app.post('/fileupload', (req, res) => {
  const form = new formidable.IncomingForm()
  form.parse(req, (err, fields, files) => {
    if (err) throw err
    const newpath = 'C:/Users/Rupesh/' + files.filetoupload.name
    const oldpath = files.filetoupload.path
    fs.rename(oldpath, newpath, function (err) {
      if (err) throw err
      res.write(`${files.filetoupload.name} File uploaded and moved!`)
      res.end()
    })
  })
})
```

## Testing file upload

### Run the server

`npm run server`

![](https://i.imgur.com/qCbTbnB.png)

### Navigate to the upload page

http://localhost:3000/

![](https://i.imgur.com/ZS6l9fw.png)

### Select file and submit

![](https://i.imgur.com/r1Iz2mM.png)

## Inspecting multipart form data

I mentioned that the client browser submits the file content as binary data. If you want to visualize the form data, upload any file, fire up [`fiddler`](https://www.telerik.com/fiddler), and check what the content looks like in binary format.

![](https://i.imgur.com/NlnKSAp.png)

Finally, I can see my file got saved to my desired location on disk.
![](https://i.imgur.com/pOVx2DS.png)

## Learning materials

- Here is the [complete source code](https://github.com/rupeshtiwari/coding-example-upload-file)

## References

- [DOM File API](https://developer.mozilla.org/en-US/docs/Web/API/File/Using_files_from_web_applications)
- [Formidable Node Package](https://www.npmjs.com/package/formidable)

* * *

_Thanks for reading my article till the end. I hope you learned something special today. If you enjoyed this article then please share it with your friends, and if you have suggestions or thoughts to share with me then please write in the comment box._

**💖 Say 👋 to me!**

Rupesh Tiwari

Founder of [Fullstack Master](https://www.fullstackmaster.net)

Email: [rupesh.tiwari.info@gmail.com](mailto:rupesh.tiwari.info@gmail.com?subject=Hi)

Website: [RupeshTiwari.com](https://www.rupeshtiwari.com)

![](https://imgur.com/5fBatz9.png)
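As an appendix to the multipart section above: the body format you see in Fiddler can be assembled by hand. Here's a hypothetical sketch (the browser does this for you on form submission; `buildMultipartBody` is an illustrative helper, not a real API):

```javascript
// Hypothetical sketch: hand-assembling a multipart/form-data body, the same
// wire format the browser produces for enctype="multipart/form-data" forms.
function buildMultipartBody(fieldName, fileName, fileContent, boundary) {
  return [
    `--${boundary}`,                                                            // part opener
    `Content-Disposition: form-data; name="${fieldName}"; filename="${fileName}"`,
    "Content-Type: application/octet-stream",
    "",                                                                         // blank line separates headers from payload
    fileContent,                                                                // the raw file bytes go here
    `--${boundary}--`,                                                          // closing boundary
    "",
  ].join("\r\n");
}

const body = buildMultipartBody("filetoupload", "notes.txt", "hello world", "X-BOUNDARY");
console.log(body);
```

Parsers like formidable walk this structure, split on the boundary, and hand you the decoded `fields` and `files`.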
rupeshtiwari
975,652
Spring Kafka Streams playground with Kotlin - IV
Context This post is part of a series where we create a simple Kafka Streams Application...
16,806
2022-02-12T16:04:45
https://dev.to/thegroo/spring-kafka-streams-playground-with-kotlin-iv-aea
kotlin, kafka, springboot, tutorial
## Context

This post is part of a series where we create a simple Kafka Streams application with Kotlin using Spring Boot and Spring Kafka. Please check the [first part](https://dev.to/thegroo/spring-kafka-streams-playground-with-kotlin-i-4g4c) of the tutorial to get started and get further context of what we're building.

> If you want to start from here you can clone the source code for this project `git clone git@github.com:mmaia/simple-spring-kafka-stream-kotlin.git` and then checkout v7 `git checkout v7` and follow from there continuing with this post.

In this post we're going to create our QuoteStream: we will process messages from the quote topic, join them with the leverage GlobalKTable we created in the previous post, and then branch and send quotes to three different topics based on their keys.

![Part IV Overview](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/194snxhl5t93k8of101p.png)

## Creating a Quote Stream

Let's now create a stream from the `stock-quotes-topic` and join it with the leverage GlobalKTable we created in the last post. If there's a leverage entry for a specific quote, we enrich the quote and publish it to a topic chosen by its key. This demonstrates the use of branching to separate data for post-processing, which is a quite common use case. In our sample application we will produce the data to one of three new topics: Apple quotes go to one topic, Google quotes to another, and all other quotes to a third topic.

- AAPL -> `apple-stocks-topic`
- GOOGL -> `google-stocks-topic`
- Any others -> `all-other-stocks-topic`

In order to do that, our first step is to create those topics using the Admin client.
Add the following constants to the `KafkaConfiguration` class:

```Kotlin
const val AAPL_STOCKS_TOPIC = "apple-stocks-topic"
const val GOOGL_STOCKS_TOPIC = "google-stocks-topic"
const val ALL_OTHER_STOCKS_TOPIC = "all-other-stocks-topic"
```

Change the `appTopics` function in the same class to also create those three new topics:

```Kotlin
@Bean
fun appTopics(): NewTopics {
    return NewTopics(
        TopicBuilder.name(STOCK_QUOTES_TOPIC).build(),
        TopicBuilder.name(LEVERAGE_PRICES_TOPIC)
            .compact().build(),
        TopicBuilder.name(AAPL_STOCKS_TOPIC).build(),
        TopicBuilder.name(GOOGL_STOCKS_TOPIC).build(),
        TopicBuilder.name(ALL_OTHER_STOCKS_TOPIC).build(),
    )
}
```

The next time you run the application, those new topics will be created using the admin client. Let's now add the new schema definition we will use for those topics. Create a new Avro schema file under `src > main > avro` called `processed-quote` with the following content:

```json
{
  "namespace": "com.maia.springkafkastreamkotlin.repository",
  "type": "record",
  "name": "ProcessedQuote",
  "fields": [
    { "name": "symbol", "type": "string"},
    { "name": "tradeValue", "type": "double"},
    { "name": "tradeTime", "type": ["null", "long"], "default": null},
    { "name": "leverage", "type": ["null", "double"], "default": null}
  ]
}
```

Notice that the only difference in this case is the new leverage field, which we will use to enrich the incoming quote when the symbols match. Build the project so the Java code is generated for this new Avro schema:

`mvn clean package -DskipTests`

Let's now create a new class in the `repository` package called `QuoteStream`. We will need a reference to our leverage GlobalKTable, so we use Spring Boot dependency injection:

```Kotlin
@Repository
class QuoteStream(val leveragepriceGKTable: GlobalKTable<String, LeveragePrice>) {
...
```

In this class, declare a function to process and enrich the quote:

```Kotlin
@Bean
fun quoteKStream(streamsBuilder: StreamsBuilder): KStream<String, ProcessedQuote> {
```

In this function, create a KStream which will process the data from the `stock-quotes-topic`, join it with the GlobalKTable we created for leverage, and transform the result into the new Avro type `ProcessedQuote`, enriching the quotes with leverage when it's available:

```Kotlin
val stream: KStream<String, StockQuote> = streamsBuilder.stream(STOCK_QUOTES_TOPIC)

val resStream: KStream<String, ProcessedQuote> = stream
    .leftJoin(leveragepriceGKTable,
        { symbol, _ -> symbol },
        { stockQuote, leveragePrice ->
            ProcessedQuote(
                stockQuote.symbol,
                stockQuote.tradeValue,
                stockQuote.tradeTime,
                leveragePrice?.leverage
            )
        }
    )
```

To wrap up this function, we tap into the new stream and, based on the key, send each message to its specific topic. We return the new stream so we can re-use it later for other operations:

```Kotlin
KafkaStreamBrancher<String, ProcessedQuote>()
    .branch({ symbolKey, _ -> symbolKey.equals("AAPL", ignoreCase = true) }, { ks -> ks.to(AAPL_STOCKS_TOPIC) })
    .branch({ symbolKey, _ -> symbolKey.equals("GOOGL", ignoreCase = true) }, { ks -> ks.to(GOOGL_STOCKS_TOPIC) })
    .defaultBranch { ks -> ks.to(ALL_OTHER_STOCKS_TOPIC) }
    .onTopOf(resStream)

return resStream
}
```

> If you just want to get your local code to this point without using the presented code you can checkout v8: `git checkout v8`

Cool, now we can play around with it a bit. Let's build and run our application (make sure your local Kafka setup is running):

`mvn clean package -DskipTests && mvn spring-boot:run`

You can then send some leverage messages and quote messages using the APIs as we did before in this tutorial.
And you can check that the messages are enriched when the specific quote has a leverage, and that they flow to the three different topics based on their keys. Here are some screenshots of the topics I took while playing around with the project using [Conduktor](https://www.conduktor.io/download/).

![Google topic screenshot](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0zta3psmdc3fgeu9so9j.png)

![Apple topic screenshot](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1oeunuyin9r8qi598cgb.png)

> The messages seem duplicated on the screenshots, but that's because I sent them multiple times with the same values while playing around. I also sent a few before sending the respective leverage, so you can see what happens and check that the initial ones at the bottom have a null leverage.

That's it for now. Tomorrow I will be publishing part V of this tutorial, where we will use grouping and counting and calculate volume using the Kafka Streams DSL. Stay tuned.

Photo by <a href="https://unsplash.com/@nublson?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Nubelson Fernandes</a> on <a href="https://unsplash.com/s/photos/developer?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Unsplash</a>
thegroo
975,688
How to display different components based on user authentication
Hey there! Ever want to display different components based on whether or not your user is logged in?...
0
2022-02-02T02:56:16
https://dev.to/velcruza/how-to-display-different-components-based-on-user-authentication-8o5
react, rails
Hey there! Ever want to display different components based on whether or not your user is logged in? Well, you're in the right spot. Let's dive right in.

To start off, you're going to want the two different components that you'll be displaying based on user authentication. For this example the two components I'm using are `<LoggedOut/>` and `<LoggedIn/>`.

Next, we need a route on the backend that renders the current user's info. For this example the route I used is `/me`, and here's what that looks like in my user controller:

```ruby
def me
  if current_user
    render json: current_user, status: :ok
  else
    render json: { error: "Not logged in" }, status: :unauthorized
  end
end
```

From here, what we're going to be doing is fetching this route `/me` to find out if our user is logged in, and if so, we're going to set our current user to that and render our page based on it. That probably didn't make much sense, but let's step through the code together. In our App.js here is what we do:

We're starting off by setting up a `currentUser` state:

```jsx
const [currentUser, setCurrentUser] = useState(null);
```

Now we're going to fetch our `/me` route to figure out whether our user is logged in. If the user is logged in, we set our current user to that data; if not, our current user remains null:

```jsx
useEffect(() => {
  fetch("/me", {
    credentials: "include",
  }).then((res) => {
    if (res.ok) {
      res.json().then((user) => {
        setCurrentUser(user);
      });
    }
  });
}, []);
```

Then in our `return()` we're going to dynamically render the elements based on whether currentUser is a truthy or falsy value:

```jsx
return (
  <>
    {currentUser ? (<LoggedIn/>) : (<LoggedOut/>)}
  </>
);
```

And that's all you need! You can change the names of LoggedIn and LoggedOut to whatever components you want to render based on whether your user is logged in or not. I hope this helped you out, and I appreciate you following along to the end!
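If you want to sanity-check this flow without spinning up React, the decision the component makes can be extracted into two plain functions. This is a hypothetical, framework-free sketch; `resolveCurrentUser` and `viewFor` are illustrative names:

```javascript
// Hypothetical framework-free sketch of what the App component decides:
// resolve the current user from a /me-style response, then pick a view.
async function resolveCurrentUser(response) {
  if (!response.ok) return null; // 401 from /me means not logged in
  return response.json();        // the serialized current_user
}

function viewFor(currentUser) {
  return currentUser ? "LoggedIn" : "LoggedOut";
}

// Simulate the two backend outcomes with minimal response-like objects.
const loggedIn = { ok: true, json: async () => ({ id: 1, name: "Ada" }) };
const loggedOut = { ok: false, json: async () => ({ error: "Not logged in" }) };

resolveCurrentUser(loggedIn).then((user) => console.log(viewFor(user)));  // "LoggedIn"
resolveCurrentUser(loggedOut).then((user) => console.log(viewFor(user))); // "LoggedOut"
```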
velcruza
975,697
exception handling in javascript
A post by mark-ganus
0
2022-02-02T03:14:20
https://dev.to/mountebanking_cormorant/exception-handling-in-javascript-4h1o
codepen
{% codepen https://codepen.io/Mountebanking-Cormorant/pen/xxPVxVG %}
mountebanking_cormorant
975,882
A Quick 10-Second Programming Joke For Today
Check out today's daily developer joke! (a project by Fred Adams at xtrp.io)
4,070
2022-02-02T08:00:28
https://dev.to/dailydeveloperjokes/a-quick-10-second-programming-joke-for-today-1fg0
jokes, dailydeveloperjokes
--- title: "A Quick 10-Second Programming Joke For Today" description: "Check out today's daily developer joke! (a project by Fred Adams at xtrp.io)" series: "Daily Developer Jokes" published: true tags: #jokes, #dailydeveloperjokes --- Hi there! Here's today's Daily Developer Joke. We hope you enjoy it; it's a good one. ![Joke Image](https://private.xtrp.io/projects/DailyDeveloperJokes/public_image_server/images/5e1259620ca4d.png) --- For more jokes, and to submit your own joke to get featured, check out the [Daily Developer Jokes Website](https://dailydeveloperjokes.github.io/). We're also open sourced, so feel free to view [our GitHub Profile](https://github.com/dailydeveloperjokes). ### Leave this post a ❤️ if you liked today's joke, and stay tuned for tomorrow's joke too! _This joke comes from [Dad-Jokes GitHub Repo by Wes Bos](https://github.com/wesbos/dad-jokes) (thank you!), whose owner has given me permission to use this joke with credit._ <!-- Joke text: ___Q:___ Where does the pirate stash all of their digital treasures? ___A:___ RAR -->
dailydeveloperjokes
976,038
Integration Testing Done Right
Writing integration tests is not always straightforward. By definition, integration tests require...
0
2022-02-02T13:54:34
https://sonalake.com/latest/integration-testing-done-right/
testing, testdev, java
Writing integration tests is not always straightforward. By definition, integration tests require interaction between several components, and we need to deal with them in many different ways. Let's look at some tools that will make writing and reading tests easier. I believe that [**Testcontainers**](https://www.testcontainers.org/) and the [**Spark Framework**](https://sparkjava.com/) will allow you to write shorter and more descriptive tests.

Is this how you test?
-------------------------

What is your approach to writing an integration test (IT)? Maybe some of the following sound familiar:

* Write mocks or stubs of external services
* Create a dedicated remote environment for ITs (playground, sandbox) and run them there
* Set up all the components (where the ITs are supposed to run) locally

No, I'm not saying that you've been doing it all wrong if you do that! But the truth is each of those approaches has drawbacks. By way of example, let's look at the first option. When you mock or stub some external services that are not crucial for the component you are testing, there is a chance you might miss some aspects of that mocked service that only occur when running live. Of course, you could invest more effort into replicating the logic of how the actual component works, but it might be difficult to replicate accurately and will also be time-consuming to develop. Even then, there are no guarantees it will be correct, so your test might still be unreliable. What if there was a more effective way? Let's see what we can do to write _real_ integration tests and not imitation ones!

Meet Testcontainers
-----------------------

[**Testcontainers**](https://www.testcontainers.org/) is a Java library that supports [JUnit](https://junit.org/) tests by providing lightweight instances of anything that can run in a [Docker](https://www.docker.com/) container. I will go through a use case where Testcontainers can provide substantial benefits.
In the project I'm currently working on, we have a component called **the Integration Component** (IC). IC is a [**Spring Boot**](https://spring.io/projects/spring-boot) service that acts as a consumer and a producer of [**RabbitMQ**](https://www.rabbitmq.com/) messages. As a consumer, it listens to a queue where another service sends job requests. IC reads those messages (job requests), processes them and finally sends an HTTP request to [**Databricks**](https://databricks.com/) to run a job. Before we submit the request (step #4 on the diagram below), we need to do a few other things, and we have divided this logic into several steps in the IC.

![Testcontainers](https://sonalake.com/wp-content/uploads/2022/01/integration-testing-done-right.png)

For testing purposes, those requests are handled by the Spark framework, but I'll get back to that later. As mentioned before, the service logic is divided into several steps, where each step has a **`process()`** method. Let's look at the **`SendRequestToJobQueueStep`** class (#3 on the diagram above).

```java
@Slf4j
@RequiredArgsConstructor
@Component
public class SendRequestToJobQueueStep implements JobRequestConsumerStep {

    private final AmqpAdmin amqpAdmin;
    private final Exchange exchange;
    private final RequestQueueProvider requestQueueProvider;
    private final RabbitTemplate rabbitTemplate;

    @Value("${config.request-queue-ttl}")
    private Duration requestQueueTtl;

    @Override
    public boolean process(JobRequestProcessingContext context) {
        String queueName = createAndBindRequestQueue(runSettings.getTraceId(), context.getJobType());
        try {
            Supplier<?
extends SpecificRecordBase> requestProvider = context.getRequestProvider();
            sendJobRequestToRequestQueue(requestProvider.get(), queueName);
        } catch (AmqpException e) {
            String customMsg = String.format("Sending '%s' request using routing key '%s' for jobId=%d failed.",
                    context.getJobType(), queueName, job.getJobId());
            log.error(prepareExceptionLogMessage(deliveryTag, e, customMsg), e);
            requeue(context, deliveryTag);
            return false;
        }
        return true;
    }

    private <R extends SpecificRecordBase> void sendJobRequestToRequestQueue(R requestObject, String routingKey) {
        rabbitTemplate.convertAndSend(Amqp.EVENTS, routingKey, requestObject);
        log.info("Job request is sent to job queue. Routing key: '{}'", routingKey);
    }

    private String createAndBindRequestQueue(String traceId, JobType jobType) {
        Queue requestQueue = requestQueueProvider.getRequestQueue(jobType, traceId);
        amqpAdmin.declareQueue(requestQueue);
        String routingKey = requestQueue.getName();
        Binding binding = BindingBuilder.bind(requestQueue)
                .to(exchange)
                .with(routingKey)
                .and(Map.of("x-expires", requestQueueTtl.toMillis()));
        amqpAdmin.declareBinding(binding);
        return routingKey;
    }
}
```

When the **`process()`** method is invoked, the IC sends a job request to the dynamically created and bound queue. The creation and binding happen in the **`createAndBindRequestQueue()`** method.

There's quite a lot going on in that class. Imagine writing an integration test that would cover all that logic! There's another challenge. Consider the **`createAndBindRequestQueue()`** method. If you mock all the methods used in it, namely **`declareQueue()`** and **`declareBinding()`**, will it really help you? Sure, you can verify that those methods were invoked, or try to return a value (if that's possible), but it's not actually the same as running the code live.
An approach using mocks might look like this:

```java
@Test
void queueShouldBeDeclaredAndBoundDuringCreation() {
    when(queue.getName()).thenReturn(QUEUE_NAME);
    when(exchange.getName()).thenReturn(EXCHANGE_NAME);

    step().process(context);

    verify(amqpAdmin).declareQueue(queue);
    ArgumentCaptor<Binding> bindingArgumentCaptor = ArgumentCaptor.forClass(Binding.class);
    verify(amqpAdmin).declareBinding(bindingArgumentCaptor.capture());
    Binding binding = bindingArgumentCaptor.getValue();
    assertEquals(EXCHANGE_NAME, binding.getExchange());
    assertEquals(QUEUE_NAME, binding.getRoutingKey());
    assertEquals(QUEUE_NAME, binding.getDestination());
}
```

This might be considered a unit test, but it's definitely not an integration test. What we need here is to verify that the queue has really been created and a message has been sent to it. Here's how to achieve that using Testcontainers.

```java
@Slf4j
@SpringBootTest
@Testcontainers
class CommonJobRequestConsumerIT {

    private static final int SPARK_SERVER_PORT = 4578;

    @Container
    private static final RabbitMQContainer container =
            new RabbitMQContainer(DockerImageName.parse("rabbitmq:3.8.14-management")) {
                @Override
                public void stop() {
                    log.info("Allow Spring Boot to finalize things (Failed to check/redeclare auto-delete queue(s).)");
                }
            };

    @Autowired
    private RabbitTemplate template;

    @Value("${databricks.check-status-call-delay}")
    private Duration statusCallDelay;

    private static SparkService sparkServer;

    @BeforeAll
    static void before() {
        sparkServer = SparkService.instance(SPARK_SERVER_PORT);
        sparkServer.startWithDefaultRoutes();
    }

    @AfterAll
    static void after() {
        sparkServer.stop();
    }

    private <R extends SpecificRecordBase> void assertJobRequest(String expectedQueueName,
            Supplier<R> requestSupplier, JobSettingsProvider<R> jobSettingsProvider) throws IOException {
        Message receivedRequest = template.receive(expectedQueueName);
        assertNotNull(receivedRequest);
        R serializedRequest =
AvroSerialization.fromJson(requestSupplier.get(), receivedRequest.getBody());
        log.info("Request received in '{}' '{}'", expectedQueueName, serializedRequest.toString());
        RunSettings jobSettings = jobSettingsProvider.getJobSettings(serializedRequest);
        assertEquals(JOB_ID, jobSettings.getJobId());
        assertEquals(TRACE_ID, jobSettings.getTraceId());
    }

    @ParameterizedTest
    @MethodSource
    <R extends SpecificRecordBase> void jobRequestShouldBeSentToDedicatedQueue(String requestRoutingKey,
            R request, String expectedQueueName, Supplier<R> requestSupplier,
            JobSettingsProvider<R> jobSettingsProvider) throws InterruptedException, IOException {
        template.convertAndSend(Amqp.EVENTS, requestRoutingKey, request, m -> {
            m.getMessageProperties().setDeliveryTag(1);
            return m;
        });

        // finish all steps
        // + give rabbit some time to finish with dynamic queue creation
        long timeout = statusCallDelay.getSeconds() + 1;
        TimeUnit.SECONDS.sleep(timeout);

        assertJobRequest(expectedQueueName, requestSupplier, jobSettingsProvider);
    }
}
```

As you can see, the test is readable and really easy to follow, which is not always the case when you mock. We start (in the **`before()`** method) with some Spark-related logic (more on that later), and then we send a message to the queue, starting the entire process. This is exactly how the system under test (IC) works: it listens to a particular queue and, once a message arrives, it picks it up and starts processing it. In some cases we need to wait a bit, since otherwise the test would finish too early and the assertions would fail.

I think that proves there is good reason to consider Testcontainers an excellent choice in cases like this. In my opinion, there is no better way to be certain this code works as expected. I'm sure there are many other examples where mocking is not a viable solution, and the only reasonable option is to run those components live. This is where the Testcontainers library shows its power and simplicity.
Give it a try next time you write an integration test!

Hero #2: The Spark framework
--------------------------------

In the same component (IC) I'm also using the [**Spark framework**](https://sparkjava.com/) to handle HTTP calls to an external service, in this case the [**Databricks**](https://docs.microsoft.com/en-us/azure/databricks/) API. Spark is lightweight and perfect for such use cases. Instead of mocking [**RestTemplate**](https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/web/client/RestTemplate.html) calls, we are using a real HTTP server!

Why is that so important? Well, if you look at how our test is organised, I think it will become apparent. As mentioned earlier, I'm using Testcontainers to make the test as real as possible. I do not want to mock anything; I want my REST calls to be real as well. Let's look at how Spark is used in this test. While working with Spark in integration tests, I'm using a wrapper class called SparkService.

```java
@Slf4j
public class SparkService {

    private final Service sparkService;

    private SparkService(Service service) {
        this.sparkService = service;
    }

    static File loadJsonPayload(String payloadFileName) {
        ClassLoader classLoader = SparkService.class.getClassLoader();
        URL resource = classLoader.getResource(payloadFileName);
        return new File(resource.getFile());
    }

    public static SparkService instance(int dbServerPort) {
        return new SparkService(Service.ignite().port(dbServerPort));
    }

    public void startWithDefaultRoutes() {
        DatabricksRoutes.JOB_LIST.register(sparkService);
        DatabricksRoutes.JOB_RUN_NOW.register(sparkService);
        DatabricksRoutes.JOB_RUNS_GET.register(sparkService);
        DatabricksRoutes.JOB_RUNS_DELETE.register(sparkService);
    }

    public void stop() {
        service().stop();
    }

    public void awaitInitialization() {
        service().awaitInitialization();
    }

    public Service service() {
        return sparkService;
    }
}
```

Notice the **`startWithDefaultRoutes()`** method.
It contains several lines where particular endpoints (which I would like to stub) are defined. I'm using enum classes for those endpoints, and each of the enum keys implements the SparkRoute interface.

```java
public interface SparkRoute {

    HttpMethod httpMethod();

    String path();

    Route route();

    default void register(Service sparkService) {
        register(sparkService, route());
    }

    default void register(Service sparkService, Route route) {
        switch (httpMethod()) {
            case GET:
                sparkService.get(path(), route);
                break;
            case POST:
                sparkService.post(path(), route);
                break;
        }
    }
}
```

Here is an example of the **`JOB_LIST`** enum from the **`DatabricksRoutes`** class.

```java
public enum DatabricksRoutes implements SparkRoute {

    JOB_LIST {
        @Override
        public String path() {
            return "/api/2.0/jobs/list";
        }

        @Override
        public Route route() {
            return JobController.handleJobList("json/spark/jobs_list_response.json");
        }

        @Override
        public HttpMethod httpMethod() {
            return HttpMethod.GET;
        }
    }
}
```

Ok, so how are all of these used in actual integration tests? In a simple scenario where no special logic for the stubbed endpoints is required, it could look like this.
```java private static SparkService sparkServer; @BeforeAll static void before() { sparkServer = SparkService.instance(SPARK_SERVER_PORT); sparkServer.startWithDefaultRoutes(); } @AfterAll static void after() { sparkServer.stop(); } @DynamicPropertySource static void properties(DynamicPropertyRegistry registry) { registry.add("databricks.url", () -> "http://localhost:" + SPARK_SERVER_PORT); registry.add("databricks.token.token-host", () -> "http://localhost:" + SPARK_SERVER_PORT); registry.add("spring.rabbitmq.host", container::getContainerIpAddress); registry.add("spring.rabbitmq.port", () -> container.getMappedPort(5672)); log.info("RabbitMQ console available at: {}", container.getHttpUrl()); } ``` I have not mentioned this earlier, but in the **`properties()`** method above you can see how RabbitMQ can be configured with Testcontainers. This method is where we override every property that needs to point at an external URL, so that Spark handles the REST calls that would otherwise go to the real services. That was a simple Spark usage scenario within an integration test. For more sophisticated logic, we need a slightly different approach. What if we have a parameterised test, and we need each given endpoint to return a different response for every run? In the example above, all the endpoints were defined before the test started. However, we can also configure particular endpoint handlers inside each test. This is where Spark can show its power of configuration and customisation in handling incoming requests. For instance, consider this example: ```java sparkServer.registerGetRoute(JOB_RUNS_GET_PATH, handleRunsGet(jobRunDataPath)); sparkServer.awaitInitialization(); ``` **`jobRunDataPath`** is one of the parameters in a parameterised test, so we can register a different request handler and return a custom response (a JSON file) for every test run. Try it out! 
--------------- To sum up, I believe that [**Testcontainers**](https://www.testcontainers.org/) and the [**Spark framework**](https://sparkjava.com/) will change your habits when writing integration tests. By leveraging the power of containerisation, you can move your tests to the next level by making them more reliable and even easier to write. You will be able to verify your system under test in almost the same conditions as if it was running on production. Furthermore, your test can eventually become even more readable. Give it a try and see how easy it is to write integration tests now!
winciu
976,135
Creating a Great Developer Experience Beyond the Portal
Part 2 of the “Improving the API Developer Experience”...
0
2022-02-08T11:11:22
https://tech.forums.softwareag.com/t/creating-a-great-developer-experience-beyond-the-portal/254951
webmethods, apimanagement, apigateway, apiportal
--- title: Creating a Great Developer Experience Beyond the Portal published: true date: 2022-02-02 11:15:59 UTC tags: webmethods, apiManagement,apiGateway, apiPortal canonical_url: https://tech.forums.softwareag.com/t/creating-a-great-developer-experience-beyond-the-portal/254951 --- **Part 2 of the “Improving the API Developer Experience” Series** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/361pijhf8cxif7zo5vga.png) [https://www.youtube.com/embed/r3zHPgZUPiE](https://www.youtube.com/embed/r3zHPgZUPiE?autoplay=1&feature=oembed&wmode=opaque) You likely already understand the value of deliberately designing end-user experiences. But when it comes to offering APIs to your partners and developers, are you putting out the welcome mat at the wide-open front door? Or forcing them to shove their way through the shabby side entrance? In this webcast Dr. Matthias Biehl, Software AG’s Digital Evangelist for APIs/Integration, explores the multiplier effect developers have on any digital strategy attempting to create a platform business, participate in digital ecosystems or offer APIs as products. You’ll also learn many other reasons your developers require a superior experience, as well as how to provide it via: - Consistent API portfolios - Self-service developer portals • Read the blog post about API Developer Experience: https://blog.softwareag.com/api-developer-portal • To learn more about APIs from Software AG, visit https://www.softwareag.com/en_corporate/platform/integration-apis.html • To view more Software AG webinars, please visit https://www.softwareag.com/en_corporate/company/events.html [Visit original post](https://tech.forums.softwareag.com/t/creating-a-great-developer-experience-beyond-the-portal/254951)
techcomm_sag
976,329
node response is missing
I am trying to trouble shoot/fix my nodejs upload image api: My service is being stuck at...
0
2022-02-02T14:57:35
https://dev.to/maifs/node-response-is-missing-55l7
react, node, help
I am trying to troubleshoot/fix my Node.js upload image API: my service is getting stuck somewhere. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w12xejuv67brgzwlljdy.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oqewmmb2wr14yz7cgot4.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r1ctygist8s4ham3g662.png) My service is very simple: it resizes the image with the sharp API, saves it into a directory, and returns the full path of that file. When I select an image the first time, everything works fine and the image uploads successfully with a response. But when I try to crop that image after clicking on it and save it (the second time), the Node.js service returns the following response. I don't know why this is happening; I tried to debug the service code and it didn't stop anywhere. The flow executes successfully, and I didn't catch any exception/error in the code. What can the issue be, given that it still displays: > failed to load response data. no resource with given identifier found The problem area is in the code of onSaveImage when a server-side call is made. The plugin I am using for image cropping is react-easy-crop. The browser refreshes/reloads at this line of code: const jsonRes = await responseMain.json(); I am sharing my Node.js service code as well as my React code. Please look into it. Thank you. 
**package.json (Node.js)**

```json
{
  "name": "",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "dependencies": {
    "body-parser": "^1.19.0",
    "cors": "^2.8.5",
    "express": "^4.17.1",
    "express-validator": "^6.12.0",
    "handlebars": "^4.7.7",
    "jsonwebtoken": "^8.5.1",
    "mysql": "^2.18.1",
    "nodemailer": "^6.6.1",
    "nodemon": "^2.0.12",
    "sharp": "^0.29.3"
  },
  "devDependencies": {},
  "scripts": {
    "start": "nodemon --inspect index.js",
    "debug": "node --nolazy --inspect-brk=9229 index.js"
  },
  "license": "ISC"
}
```

**index.js (Node.js)**

```javascript
const express = require("express");
const app = express();
const cors = require("cors");
var fs = require("fs");
const fsp = fs.promises;
var path = require("path");
const sharp = require("sharp");

var $pathContentBuilder = path.join(__dirname, "../", "/public/roughdata/uploads/");
var $urlpathContentBuilder = "/roughdata/uploads/"; // URL path

app.use(express.json({ limit: "20mb" }));
app.use(cors());
app.use(
  express.urlencoded({
    extended: true,
  })
);

function processImage(image, metadata, filename, isResizeAllow) {
  return new Promise((resolve, reject) => {
    if (isResizeAllow && isResizeAllow === true) {
      // 0.8 MB
      return image
        .resize({
          width: Math.round(metadata.width / 2),
          height: Math.round(metadata.height / 2),
          fit: "cover",
        })
        .toBuffer((err, buf) => {
          if (err) {
            console.log("Error occured ", err);
            return reject(err);
          } else {
            return resolve(buf.toString("base64"));
          }
        });
    } else {
      return image.toBuffer((err, buf) => {
        if (err) {
          console.log("Error occured ", err);
          return reject(err);
        } else {
          return resolve(buf.toString("base64"));
        }
      });
    }
  });
}

app.post("/uploaddetail", async (req, res, next) => {
  const base64Data = req.body.image;
  const filename = req.body.filename;
  let imageResizeResult = "";
  try {
    var inputbuf = Buffer.from(base64Data, "base64"); // Ta-da
    const image = await sharp(inputbuf);
    let metadata = await image.metadata();
    let convertedbase64Data = "";
    convertedbase64Data = await processImage(image, metadata, filename, false);
    await fsp.writeFile($pathContentBuilder + filename, convertedbase64Data, "base64");
    let resultResponse = JSON.stringify({
      success: true,
      fullurl: `${$urlpathContentBuilder}$(unknown)`,
      url: `$(unknown)`,
      imagename: `$(unknown)`,
      serverpath: `${$urlpathContentBuilder}`,
    });
    //res.type("json");
    res.status(200).json(resultResponse);
    //res.end();
    //next();
  } catch (e) {
    console.log(e);
    const error = new HttpError(e, 404);
    return next(error);
  }
});
```

And the following is my React code:

```javascript
const settings = {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    filename: filename,
    image: base64Image,
  }),
};
setLoader(true);
let submitUrl = process.env.REACT_APP_SERVER_DOMAIN + `/uploaddetail`;
const responseMain = await fetch(submitUrl, settings);
const jsonRes = await responseMain.json();
```

Can anyone share their thoughts on it? Thank you.
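One thing worth checking in the handler above (a hedged suggestion, not a confirmed diagnosis): `resultResponse` is already a string produced by `JSON.stringify(...)`, and it is then passed to `res.status(200).json(...)`, which serializes its argument a second time. The client then receives a JSON-encoded *string*, so `await responseMain.json()` yields a string rather than an object. A minimal sketch of the difference:

```javascript
// Double encoding: stringify a payload and then serialize it again,
// as res.json() would do with an already-stringified argument.
const payload = { success: true, url: "/roughdata/uploads/pic.png" };

const doubleEncoded = JSON.stringify(JSON.stringify(payload));
const clientSide = JSON.parse(doubleEncoded); // what responseMain.json() yields
console.log(typeof clientSide); // "string" - clientSide.success is undefined

// Single encoding: pass the plain object, i.e. res.json(payload).
const singleEncoded = JSON.stringify(payload);
console.log(typeof JSON.parse(singleEncoded)); // "object"
```

If that is the cause, passing the plain object straight to `res.json(...)` (and dropping the manual `JSON.stringify`) should make the fields of `jsonRes` readable on the React side.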
maifs
976,380
Switching off RSpec generators
Finding the correct switch to turn off RSpec generators can be hard as it is sparsely documented....
0
2022-02-02T16:56:43
https://dev.to/notapatch/switching-off-rspec-generators-59p1
--- title: Switching off RSpec generators published: true description: tags: //cover_image: https://direct_url_to_image.jpg --- Finding the correct switch to turn off RSpec generators can be hard as it is sparsely documented. [However, RSpec has a number of generators that are listed in this file.](https://github.com/rspec/rspec-rails/blob/5-1-maintenance/lib/generators/rspec/scaffold/scaffold_generator.rb) ```ruby # rspec-rails: lib/generators/rspec/scaffold/scaffold_generator.rb class_option :controller_specs, type: :boolean, default: false, desc: "Generate controller specs" class_option :request_specs, type: :boolean, default: true, desc: "Generate request specs" class_option :view_specs, type: :boolean, default: true, desc: "Generate view specs" class_option :helper_specs, type: :boolean, default: true, desc: "Generate helper specs" class_option :routing_specs, type: :boolean, default: true, desc: "Generate routing specs" ``` You can then use the list to switch off generators in `application.rb`. ```ruby config.generators do |g| g.test_framework :rspec g.controller_specs false g.request_specs false g.view_specs false g.helper_specs false g.routing_specs false end ```
notapatch
976,404
What techs are you going to learn?
A post by Arpit
0
2022-02-02T17:54:22
https://dev.to/soniarpit/what-techs-are-you-going-to-learn-230p
discuss
soniarpit
976,414
Database concurrency explained in a simple way
In this article I will show basic concepts for handling simultaneous requests by building a...
0
2022-02-15T18:18:54
https://dev.to/portugues/concorrencia-em-banco-de-dados-explicada-de-forma-simples-47ka
braziliandevs, php, database, sql
In this article I will show basic concepts for handling simultaneous requests by building a bank web application. When we are programming, there are some pitfalls we need to watch out for, especially because they are not easy scenarios to test.

## Scope of the bank application

![ATM losing money](https://media.giphy.com/media/3ov9jOmaqfoIuJuKd2/giphy.gif)

Our example application will store bank accounts that can transfer money between each other. It will be built using PHP, Symfony and the Doctrine object-relational mapper (ORM), but you do not need to know these technologies, only the Postgres database.

## The bank account entity (Account)

The bank account will store the owner's name and the current amount of money.

```sql
CREATE TABLE "public"."bank_account" (
    "id" int4 NOT NULL,
    "name" varchar(255) NOT NULL,
    "amount" int4 NOT NULL,
    PRIMARY KEY ("id")
);
```

## Money transfer API

The endpoint for transferring money between two accounts receives three variables as query parameters:

- `from`: id of the source account
- `to`: id of the destination account
- `amount`: amount to be transferred

So, to transfer 100 from account 1 to account 2, we can use the following request:

```bash
http://localhost:8000/move?from=1&to=2&amount=100
```

## Bank account repository

For the endpoint above to work, we need the following repository:

```php
class BankAccountRepository extends ServiceEntityRepository
{
    public function __construct(ManagerRegistry $registry)
    {
        parent::__construct($registry, BankAccount::class);
    }

    public function transferAmount($from, $to, $amount): void
    {
        // Fetch the bank accounts to be updated
        $fromAccount = $this->find($from);
        $toAccount = $this->find($to);

        // Update their amounts
        $fromAccount->setAmount($fromAccount->getAmount() - $amount);
        $toAccount->setAmount($toAccount->getAmount() + $amount);

        // Persist both entities
        $this->getEntityManager()->persist($fromAccount);
        $this->getEntityManager()->persist($toAccount);
        $this->getEntityManager()->flush();
    }
}
```

Translated to SQL, we have the following (edited from the SQL generated by Doctrine for readability):

```sql
SELECT * FROM bank_account WHERE id = 1; -- source
SELECT * FROM bank_account WHERE id = 2; -- destination
START TRANSACTION;
UPDATE bank_account SET amount = ? WHERE id = 1; -- source
UPDATE bank_account SET amount = ? WHERE id = 2; -- destination
COMMIT;
```

## Account controller

On the controller side, we have to extract the query parameters from the request and pass them on to the repository:

```php
class BankAccountController extends AbstractController
{
    #[Route('/move', name: 'bank_account')]
    public function transfer(Request $request, BankAccountRepository $repository): Response
    {
        $from = $request->query->get('from');
        $to = $request->query->get('to');
        $amount = $request->query->get('amount');

        $repository->transferAmount($from, $to, $amount);

        return new Response(sprintf('from %s to %s amount %s', $from, $to, $amount));
    }
}
```

## Let's test it!

Let's create test accounts in the database:

```sql
INSERT INTO "public"."bank_account" ("id", "name", "amount") VALUES
(1, 'Alice', 1000),
(2, 'Bob', 0);
```

And transfer 100 from Alice to Bob:

```bash
curl http://localhost:8000/move?from=1&to=2&amount=100
```

Checking the data in the database, we can see that the result is correct:

```
| id | name  | amount |
|----|-------|--------|
| 1  | Alice | 900    |
| 2  | Bob   | 100    |
```

Easy, right? From now on we can improve this implementation by writing unit and integration tests, and everything will keep working correctly.

## What is wrong with it?

![Someone asking what the problem is](https://media.giphy.com/media/3oKHWcp23mdfNIEEIo/giphy.gif)

To identify the problem, let's use the [Apache HTTP server benchmarking tool (ab)](https://httpd.apache.org/docs/2.4/programs/ab.html) to make several requests to our application.

The first test has the following scenario:

- Alice starts with an amount of 1000
- Bob starts with an amount of 0
- Alice makes 10 transfers of 100 to Bob, one at a time
- Expected final result:
  - Alice: 0
  - Bob: 1000

We can use the following command, where the parameter `n` is the total number of requests and `c` is the number of simultaneous requests:

```bash
ab -n 10 -c 1 'http://localhost:8000/move?from=1&to=2&amount=100'
```

You will have to trust me for now, but I can guarantee that after running the command above Alice will have 0 and Bob will have 1000.

The second scenario is very similar, but we will make 10 simultaneous requests:

- Alice starts with 1000
- Bob starts with 0
- Alice makes 10 **simultaneous** transfers of 100 to Bob
- Expected final result:
  - Alice: 0
  - Bob: 1000

We need to change the `c` parameter to 10:

```bash
ab -n 10 -c 10 'http://localhost:8000/move?from=1&to=2&amount=100'
```

The result is not pleasant at all:

```
| id | name  | amount |
|----|-------|--------|
| 1  | Alice | 300    |
| 2  | Bob   | 700    |
```

![The results are not good](https://media.giphy.com/media/1iUZa41YxKQtaJq0/giphy.gif)

But why? Basically, there are processes updating the amount while others are still reading and keeping the old value in memory. Let's imagine two concurrent processes A and B updating only Alice's account:

1. Process A reads the value 1000 in Alice's account
2. Process B reads the value 1000 in Alice's account
3. Process A writes 900 to Alice's account
4. Process B writes 900 to Alice's account (it should be 800, how embarrassing!)

## How to fix it?

![Someone trying to find the fix](https://media.giphy.com/media/l2JefhuLRZWPnxF1m/giphy.gif)

There is more than one solution, but the one I am going to show uses `Pessimistic Locking` for writes and reads. This means the database will only allow one write or one read per resource, which in this case is our account entity. 
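The interleaving above can be reproduced without any database. Here is a small, deliberately simplified Node.js model (the `naiveWithdraw` helper is made up for this sketch) in which two "simultaneous" requests both read the balance before either one writes:

```javascript
// Models the lost-update anomaly: both callers read the same starting
// balance, and both writes are applied afterwards, so one update is lost.
const account = { amount: 1000 };
const pendingWrites = [];

function naiveWithdraw(amount) {
  const read = account.amount; // both callers read 1000 here
  pendingWrites.push(() => {
    account.amount = read - amount; // last writer wins
  });
}

// Two concurrent requests: both reads happen before either write.
naiveWithdraw(100);
naiveWithdraw(100);
pendingWrites.forEach((write) => write());

console.log(account.amount); // 900, not the expected 800
```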
In Doctrine we can do this by using the following code in the repository:

```php
public function transferAmountConcurrently($from, $to, $amount): void
{
    $this->getEntityManager()->beginTransaction();

    $fromAccount = $this->find($from, LockMode::PESSIMISTIC_WRITE);
    $toAccount = $this->find($to, LockMode::PESSIMISTIC_WRITE);

    $fromAccount->setAmount($fromAccount->getAmount() - $amount);
    $toAccount->setAmount($toAccount->getAmount() + $amount);

    $this->getEntityManager()->persist($fromAccount);
    $this->getEntityManager()->persist($toAccount);
    $this->getEntityManager()->flush();

    $this->getEntityManager()->commit();
}
```

Now we have to explicitly mark the start of the transaction before acquiring the lock, which makes sense since Doctrine has no way of knowing when the transaction should have started.

Finally, the same solution in SQL:

```sql
START TRANSACTION;
SELECT * FROM bank_account WHERE id = 1 FOR UPDATE; # source
SELECT * FROM bank_account WHERE id = 2 FOR UPDATE; # destination
UPDATE bank_account SET amount = ? WHERE id = 1; # source
UPDATE bank_account SET amount = ? WHERE id = 2; # destination
COMMIT;
```

To test it, I will create a new route in the existing `BankAccountController`:

```php
#[Route('/move-concurrently', name: 'bank_account_concurrent')]
public function transferConcurrently(Request $request, BankAccountRepository $repository): Response
{
    $from = $request->query->get('from');
    $to = $request->query->get('to');
    $amount = $request->query->get('amount');

    $repository->transferAmountConcurrently($from, $to, $amount);

    return new Response(sprintf('from %s to %s amount %s', $from, $to, $amount));
}
```

And now we can test it using the Apache benchmarking tool:

```bash
ab -n 10 -c 10 'http://localhost:8000/move-concurrently?from=1&to=2&amount=100'
```

You can trust me, it works now: Alice has 0 and Bob has 1000. 

![Someone celebrating that it works](https://media.giphy.com/media/CuMiNoTRz2bYc/giphy.gif)

## The end

By using the locking strategy, we guarantee that the process that acquired the lock reads the most up-to-date value and then writes consistent data based on that last read.

The final code is on [GitHub](https://github.com/fabiothiroki/symfony-bank-transaction).

{% github fabiothiroki/symfony-bank-transaction no-readme %}

Thanks for reading, and I hope you enjoyed it!
fabiothiroki
976,650
New Series: Data Structures Study Sessions
Hey there! It's been a while since I've written anything technical, but I'm working on some...
0
2022-02-02T21:58:01
https://dev.to/helloamandamurphy/new-series-data-structures-study-sessions-1dj9
datastructures, javascript, java, beginners
Hey there! It's been a while since I've written anything technical, but I'm working on some specific learning goals for myself in 2022, so I thought now was as good a time as any to start writing some technical posts in order to reinforce my learning. So one goal I have for 2022 is to study for and pass my AWS Solutions Architect exam. I'm not interested in transitioning to a Solutions Architect role, but I have heard this is a great way to learn about several critical AWS services. While I worked at AWS, most of the services and systems we used were established by other engineers, so I used them in a very narrow context. I am really excited to learn more about AWS services and how they are set up from scratch. I'm using [A Cloud Guru's AWS Certified Solutions Architect course](https://acloud.guru/overview/certified-solutions-architect-associate) to study for the exam, because I've heard really great things about A Cloud Guru. I probably won't end up writing much about my studies for the exam, but I thought I would share that goal anyway. My second big goal is to study data structures more rigorously. I read a book on Data Structures and Algorithms in C++ while I was completing my apprenticeship at AWS, but I was new to C++ and a lot of it went over my head. So I'm back at it, studying so I learn how to use data structures better in my day-to-day work as a software engineer at AppHarvest, but also so I have an easier time when it comes to looking for my next role as a software engineer (hopefully that's not for a while, but I figure there's a lot to cover and I might as well start now.) Our principal engineer at AppHarvest suggested picking one data structure each week to study, then working on one easy HackerRank challenge each day, with a more difficult challenge each Friday. I've never used HackerRank, but I'm excited to give it a try. 
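The weekly plan above lends itself to small, self-contained implementations. As an illustrative sketch (in JavaScript, and not taken from any particular challenge), a minimal array-backed stack is only a few lines:

```javascript
// Minimal stack backed by an array: push, pop and peek are all O(1).
class Stack {
  constructor() {
    this.items = [];
  }
  push(value) {
    this.items.push(value);
  }
  pop() {
    return this.items.pop(); // undefined when empty
  }
  peek() {
    return this.items[this.items.length - 1];
  }
  get size() {
    return this.items.length;
  }
}

const s = new Stack();
s.push(1);
s.push(2);
console.log(s.pop()); // 2
console.log(s.size); // 1
```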
I'd also like to write one article about each data structure I study to reinforce what I'm learning, which is how I find myself writing about code for the first time in a while. After talking it over with him, doing some research on most critical data structures to know for interviews, and taking a look at the book I read last year, I came up with the following list: - Arrays - Linked Lists - Stacks - Queues - Deques - Trees - Binary Search Trees - Balanced Trees - Search Trees - Trie - Hash Tables - Maps - Priority Queues - Heap - Disjoint Set - Multiple Choice - Skip Lists - Graphs As I complete articles for each of these data structures, I'll try to loop back and link them here. I believe there's also a way to create a series, so I'll try to add those as well. I'll also include additional resources I find that help me in my data structures study to share with anyone else who is hoping to learn more for technical interview prep. Two notes: 1. I primarily worked with JavaScript / Node.JS for the past year and a half so that's what I'm most comfortable using. When I studied data structures during that time, I was learning how they were implemented using C++. I'm now in a role that primarily uses Java, so there might be a bit of a crossover to Java at some point. My brain doesn't really single out programming languages, I generally just use whatever works best for the job. 2. I am new to this. If you read something that is incorrect, gently let me know. I'd appreciate it. Being rude will just make me sad. Don't do that. It's been a rough couple years for everyone and we don't need to pick on folks. I'm really looking forward to learning more and becoming more confident about data structures this year. Thanks for reading! -A **Connect with Me** [GitHub](https://github.com/helloamandamurphy) [LinkedIn](https://www.linkedin.com/in/helloamandamurphy)
helloamandamurphy
976,809
Powershell find size logs and delete
This script is for list or delete logsfiles. You can change parameters to adjust code for You...
0
2022-02-03T01:30:39
https://dev.to/redhcp/powershell-find-size-logs-and-delete-3a4k
powershell, script, logs, azure
This script lists or deletes log files. You can change the parameters to adjust the code to your needs. ### Invoke-Command for many hosts ```powershell $machines = @( 'COMPUTER_NAME_1', 'COMPUTER_NAME_2' ) foreach ($machine in $machines) { Invoke-Command -ComputerName $machine -ScriptBlock { Write-Host "Host: $env:COMPUTERNAME ($(Get-Date -Format 'MM/dd/yyyy'))" -ForegroundColor "green" # LIST SIZE $path = 'C:\logs' $days = -7 # number of days to preserve, as a negative number - last 7 days $files = Get-ChildItem -Path $path -Recurse -Include *.log | Where-Object { $_.Length -gt 0 -and $_.LastWriteTime -lt (Get-Date).AddDays($days) } $t = (($files | Measure-Object Length -Sum | Select-Object -ExpandProperty Sum) / 1GB).ToString('0.00') Write-Host "Total Size: $t GB " -NoNewline -ForegroundColor "yellow" Write-Host "PATH: $path" # DELETE $files | Remove-Item -Force Write-Host "Deleted: $t GB " -NoNewline -ForegroundColor "yellow" Write-Host "PATH: $path" } } ``` ### LOCAL HOST or Enter-PSSession <ComputerName> ```powershell Write-Host "Host: $env:COMPUTERNAME ($(Get-Date -Format 'MM/dd/yyyy'))" -ForegroundColor "green" # LIST SIZE $path = 'C:\logs' $days = -7 # number of days to preserve, as a negative number - last 7 days $files = Get-ChildItem -Path $path -Recurse -Include *.log | Where-Object { $_.Length -gt 0 -and $_.LastWriteTime -lt (Get-Date).AddDays($days) } $t = (($files | Measure-Object Length -Sum | Select-Object -ExpandProperty Sum) / 1GB).ToString('0.00') Write-Host "Total Size: $t GB " -NoNewline -ForegroundColor "yellow" Write-Host "PATH: $path" # DELETE $files | Remove-Item -Force Write-Host "Deleted: $t GB " -NoNewline -ForegroundColor "yellow" Write-Host "PATH: $path" ```
redhcp
1,433,564
Workload-level Index Recommendation
For workload-level indexes, you can run scripts outside the database to use this function. This...
0
2023-04-12T10:39:18
https://dev.to/llxq2023/workload-level-index-recommendation-13jb
opengauss
For workload-level indexes, you can run scripts outside the database to use this function. This function uses the workload of multiple DML statements as the input to generate a batch of indexes that can optimize the overall workload execution performance. In addition, it provides the function of extracting service data SQL statements from logs or system catalogs. **Prerequisites** **·** The database is normal, and the client can be connected properly. **·** The gsql tool has been installed by the current user, and the tool path has been added to the _PATH_ environment variable. **Service Data Extraction** **Collecting SQL Statements in Logs** 1. Set GUC parameters. log_min_duration_statement = 0 log_statement = 'all' 2. Run the following command to extract SQL statements based on logs: gs_dbmind component extract_log [l LOG_DIRECTORY] [f OUTPUT_FILE] [p LOG_LINE_PREFIX] [-d DATABASE] [-U USERNAME] [--start_time] [--sql_amount] [--statement] [--json] [--max_reserved_period] [--max_template_num] The input parameters are as follows: 1) **LOG_DIRECTORY**: directory for storing pg_log. 2) **OUTPUT_PATH**: path for storing the output SQL statements, that is, path for storing the extracted service data. 3) **LOG_LINE_PREFIX**: specifies the prefix format of each log. 4) **DATABASE** (optional): database name. If this parameter is not specified, all databases are selected by default. 5) **USERNAME** (optional): username. If this parameter is not specified, all users are selected by default. 6) **start_time** (optional): start time for log collection. If this parameter is not specified, all files are collected by default. 7) **sql_amount** (optional): maximum number of SQL statements to be collected. If this parameter is not specified, all SQL statements are collected by default. 8) **statement** (optional): Collects the SQL statements starting with **statement** in **pg_log**. If this parameter is not specified, the SQL statements are not collected by default. 
9) **json** (optional): specifies that the collected log files are stored in JSON format after SQL normalization. If no format is specified, each SQL statement occupies a line. 10) **max_reserved_period** (optional): specifies the maximum number of days of reserving the template in incremental log collection in JSON mode. If this parameter is not specified, the template is reserved by default. The unit is day. 11) **max_template_num** (optional): specifies the maximum number of templates that can be reserved in JSON mode. If this parameter is not specified, all templates are reserved by default. An example is provided as follows. gs_dbmind component extract_log $GAUSSLOG/pg_log/dn_6001 sql_log.txt '%m %c %d %p %a %x %n %e' -d postgres -U omm --start_time '2021-07-06 00:00:00' --statement ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m145tn8ixnbmluj50lzx.png) 3. Restore the GUC parameters set in step 1 to their previous values. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ldy0aurr947874psvqwz.png) **Collecting SQL Statements in System Catalogs** Run the following SQL statement to extract data from system catalogs: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u0lqvl9zrfmwqs1fvxho.png) The input parameters are as follows: **·** **DB_PORT**: port number of the connected database. **·** **DATABASE**: name of the connected database. **·** **OUTPUT**: output file that contains SQL statements. **·** **DB_HOST** (optional): name of the host that connects to the database. **·** **DB_USER** (optional): name of the user that connects to the database. The user must have the sysadmin or monitor admin permission. **·** **SCHEMA**: schema name. This parameter is used only when **statement-type** is set to **history**. The default value is **public**. 
**·** **statement-type**: SQL statement type, which can be **asp**, **slow**, **history**, or **activity**. **-** **asp**: extracts SQL statements from gs_asp. Ensure that the GUC parameter **enable_asp** is enabled. **-** **slow**: extracts active slow SQL statements. **-** **history**: extracts historical slow SQL statements. **-** **activity**: extracts active SQL statements. **·** **START_TIME**: specifies the start time. This parameter is mandatory and is used only when **statement-type** is set to **asp**. **·** **END_TIME**: specifies the end time. This parameter is mandatory and is used only when statement-type is set to asp. **·** **verify**: determines whether to verify the validity of SQL statements. **·** **driver**: determines whether to use the Python driver to connect to the database. By default, **gsql** is used for the connection. **Procedure for Using the Index Recommendation Script** 1. Prepare a file that contains multiple DML statements as the input workload. Each statement in the file occupies a line. You can obtain historical service statements from the offline logs of the database. 2. To enable this function, run the following command: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5zznekjeefq4b2enqrtx.png) The input parameters are as follows: 1) **DB_PORT**: port number of the connected database. 2) **DATABASE**: name of the connected database. 3) **FILE**: file path that contains the workload statement. 4) **DB_HOST** (optional): ID of the host that connects to the database. 5) **DB_USER** (optional): username for connecting to the database. 6) **SCHEMA**: schema name. 7) **MAX_INDEX_NUM** (optional): maximum number of recommended indexes. 8) **MAX_INDEX_STORAGE** (optional): maximum size of the index set space. 9) **MAX_N_DISTINCT**: reciprocal of the number of **distinct** values. The default value is **0.01**. 10) **MIN_IMPROVED_RATE**: minimum improvement ratio. The default value is **0.1**. 
11) **MAX_CANDIDATE_COLUMNS** (optional): maximum number of candidate columns in an index.
12) **MAX_INDEX_COLUMNS**: maximum number of columns in an index. The default value is **4**.
13) **MIN_RELTUPLES**: minimum number of records. The default value is **10000**.
14) **multi_node** (optional): specifies whether the current instance is a distributed database instance.
15) **multi_iter_mode** (optional): algorithm mode. You can switch the algorithm mode by setting this parameter.
16) **json** (optional): specifies that the workload statement file is in JSON format after SQL normalization. By default, each SQL statement occupies one line.
17) **driver** (optional): specifies whether to use the Python driver to connect to the database. By default, **gsql** is used for the connection.
18) **show-detail** (optional): specifies whether to display the detailed optimization information about the currently recommended index set.
19) **show-benefits** (optional): determines whether to display index benefits.
The recommendation result is a batch of indexes, which are displayed on the screen in the format of multiple create index statements. The following is an example of the result.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1a386b2vy4sbr0sz2tnp.png)
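Putting the parameters above together, an invocation might look like the sketch below. The port, database name, and workload file are placeholders, and flag spellings follow the parameter names listed above; treat this as illustrative and check your version's help output for the exact syntax.

```shell
# Illustrative only: recommend indexes for the statements in workload.sql
gs_dbmind component index_advisor 5432 postgres workload.sql \
    --schema public --max_index_num 10 --show-detail
```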
llxq2023
977,181
Move from one fragment to another
I would like to try to create an app. I want to move from my home (MainActivity where there are the...
0
2022-02-03T10:08:55
https://dev.to/codegirl/move-from-one-fragment-to-another-1c8c
kotlin, android, beginners, devops
I would like to try to create an app. I want to move from my home screen (MainActivity, where the names of the film directors are listed) to a fragment via a button (the director's name) that shows a list of movie posters; from that fragment, I want to move to another fragment by clicking an ImageView (the poster) to show the plot of the movie. Can anyone explain this to me step by step? I need the code in Kotlin.
codegirl
978,078
Dos and Don'ts of Developer Hiring
As a developer getting hired in an agency can require so much criteria to pass. As an agency,...
0
2022-02-03T23:40:27
https://dev.to/coredna/5-dos-and-donts-of-hiring-a-developer-6h1
hiring, devops, coredna, devrel
As a developer, getting hired at an agency can mean meeting a long list of criteria. As an agency, choosing who to hire can require a great deal of investigation. Make your life a little easier and give yourself the best fighting chance with Core dna's blog on the [5 Dos and Don'ts of Hiring a Developer](https://www.coredna.com/blogs/5-dos-and-donts-of-hiring-a-developer). Download our free Agency Growth Secrets guide today to answer all your questions! For instance,
**DO: Consult somebody on your actual position requirements**
First and foremost, if you're inexperienced in hiring technical talent, it helps to find somebody who is experienced to nut out the position description. Often, the requirements are not what they seem, and talking to another developer or technical recruiter can help you work out whether you're making unfounded assumptions about the role, overcomplicating the PD, or even oversimplifying it. Getting a clear understanding of questions like:
- What language skillset/s are required?
- What level of seniority is required (i.e. junior, mid, or senior developer)?
- What development methodology will be used (if working with teams)?
…and many other questions will help ensure you're pre-qualifying your applicants (and not scaring off the good ones through ambiguous expectations either!)
## What do you think are the Dos and Don'ts of Developer hiring?
Let's start a discussion in the comments!
coredna
978,463
Reactive Programming
Well, you heard it right? I’m here talking about one of those fancy programming terms which you might...
0
2022-02-04T09:21:04
https://dev.to/marigameo/reactive-programming-51j0
reactiveprogramming, javascript, rxjs, observables
Well, you heard it right! I'm here talking about one of those fancy programming terms you might have clocked many a time, yet which (un)fortunately remains a stranger. If so, I'll try to break down my learnings and introduce the paradigm in as simple and practical a way as possible. It took me some time to find good references on reactive programming in general. There were tons of [RxJS](https://rxjs.dev/) articles, but those didn't relate much until I read about reactive programming in general. If you're someone tired of frameworks and looking to recharge yourself, this could excite you. Let's dive in!
## So what is reactive in general?
Have you ever been excited about an upcoming mobile phone launch on some e-commerce site? Or, if you're a bike enthusiast, you might have an eye on some upcoming superbike launch, right? In this online world, it's so much easier to stay updated on these launches. If you've been to those companies' online stores, you've probably found a **NOTIFY ME** trigger somewhere on the product page. I was looking for a smart band and landed at the official Xiaomi site — tada, this splashed up.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uqg72g3isibffdtneq1n.png)
When you click on this notify button, they might take your mobile number/email and notify you on launch. Let's think about this flow. Technically, you're subscribing to an event, and the responsible entity notifies you when the event occurs. Once notified, you'd probably make some decision. That's simple, isn't it — and reactive? You're reacting to an event. Let's discuss one more case to explore a different dimension of this. I'm looking to buy a smartphone at amazon.com and it's currently out of stock. Luckily, Amazon provides me a way to get notified once the item is back in stock. I'm subscribing to it, as are others with similar interests. Of course, I may not be the only user waiting for the stock, right?
So let's say I and 9 others subscribed to get notified when the stock is back. After some days, 3 items came back in stock. All 10 of us get notified. Cool. Let's say 3 of us immediately reacted (well, we're too reactive 👊), and what would happen? The item goes out of stock again. Here, the system needs to handle 2 things:
- It should unsubscribe the 3 of us who actually bought the phones, so we are no longer notified
- Since it's again out of stock, the remaining 7 should be notified and put back in listening mode 😔
If you notice, the system (producer) is unaware of certain details, like whether the item will go out of stock again, or how fast. It just reacts to the events, right? That's the deal. How did I go? 🏄
**Little by little, let's add some JavaScript to it**
Diving straight into some code, blink over it for a while 👀
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qt6xui23cite4kpj5men.png)
If you check the above code, why do you think Leena's salary is not being updated? Did the boss cheat her, or is JS lying? 😆 This is how JavaScript typically works, right? Loving it 🙋
To solve this, you have to repeat the totalSalary calculation again. But that's unfair, right? I'm literally repeating the same statement every time I update the value. You'd say: introduce a function calculateSalary and reuse it. Even in that case, I'd repeat the function call again :)
Here is where reactive programming comes into the picture. Well, I'm not going to write any [RxJS](https://rxjs.dev/) code here now. Instead, let's talk with some pictures.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/whs3o310l0y0efv3fgt9.png)
To break down the above diagram:
The system has 2 components — salary and payslip
Whenever there is a change in the salary component, the payslip gets updated.
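Before labeling the diagram's roles, here is the earlier salary problem reduced to a few runnable lines (the names and figures are illustrative, not copied from the screenshot):

```javascript
// Plain, non-reactive JavaScript: the sum is evaluated exactly once.
let baseSalary = 1000;
let bonus = 200;

let totalSalary = baseSalary + bonus; // 1200, computed right now

// The boss raises the base salary...
baseSalary = 2000;

// ...but totalSalary was never told to react to that change.
console.log(totalSalary); // still 1200
```

To keep totalSalary in sync you would have to re-run the calculation yourself after every update — exactly the repetition pointed out above.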
Here, Salary component — is the producer
Payslip component — is the subscriber
> An interesting thing to note here: the payslip reacts to salary changes and does its job, but the salary component is unaware that the payslip component depends on it. That's the beauty of this paradigm.
**What next?**
Are you still with me? Then one thing must be clear to you now, having explored the possibilities and use cases above:
- Reactive programming is no longer just a trend, and this could be the time to adopt it
Does JavaScript provide us with any native APIs to write reactive code? Well, the answer is surprisingly no. But there is a [tc39 proposal](https://github.com/tc39/proposal-observable) that has been around for a while, though it doesn't seem very active; you can watch it here — [https://github.com/tc39/proposal-observable](https://github.com/tc39/proposal-observable)
Now comes the next question: what are observables? An entity that can be used for reactive code and can produce multiple values synchronously or asynchronously is called an Observable. I know that doesn't sound cool enough. To understand this better, we need to explore a few other terms alongside it.
**Streams**
Basically, reactive programming gives you the ability to create data streams of anything. So what are streams? Anything can be a stream: variables, user inputs, properties, caches, data structures, etc. To be specific, a stream is a sequence of ongoing events ordered in time. It can emit 3 possible things:
1. Value (of some type)
2. Error (something wrong happened)
3. Completed signal (when the stream is done or completed)
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e87s7gt6ajtmf1g5ialc.png)
**Subscribe** — listening to the stream is called subscribing.
Thus, the functions that we define to deal with the ongoing events are called observers. Makes sense now?
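To make the stream/observer vocabulary concrete, here is a tiny hand-rolled sketch in plain JavaScript. No library is involved, and the API shape only loosely approximates what real observable implementations offer:

```javascript
// A minimal "observable": subscribing runs the producer function,
// which pushes values to the observer and then signals completion.
function createObservable(producer) {
  return { subscribe: (observer) => producer(observer) };
}

const numbers$ = createObservable((observer) => {
  [1, 2, 3].forEach((value) => observer.next(value)); // emit each value
  observer.complete(); // signal that the stream is done
});

const received = [];
numbers$.subscribe({
  next: (value) => received.push(value),
  complete: () => received.push("done"),
});

console.log(received); // [1, 2, 3, "done"]
```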
The Observable calls the Observer's next(value) method to deliver notifications. If things go well, the Observable calls the Observer's complete() method exactly once; otherwise, it calls the Observer's error(err) method exactly once.
**Then what is RxJS?**
As discussed above, JavaScript doesn't have observables. There are external libraries that fill this gap, and the most famous one is RxJS. It is a reactive extension for JavaScript which provides observables along with many other handy features. Did you know chances are high that you've already used reactive programming? How? If you have used any modern framework such as ReactJS, Angular, Vue, Svelte, etc., you have already written a lot of reactive programming. For example, in React/Vue you update a state and the view re-renders with the updated value, right? Imagine how hectic and costly it was in the old days, when one would query the DOM, modify it, and update it manually using innerHTML, for god's sake 😟 Thus, to conclude: your favorite JavaScript frameworks have adopted reactive programming at their core, and you might not even have realized that you have written tons of reactive code as a frontend developer 😄
**Investing in the Reactive paradigm**
Reactive programming has enabled web developers to produce features faster by giving them the ability to compose complex tasks with ease. Reactive programming is already making our lives better by powering products built by Netflix, Slack, Microsoft, Facebook, and many more. The more we understand it, the more productive the web built around it can be. We have a reason to stay excited, and why not?
**References:** - [https://gist.github.com/staltz/868e7e9bc2a7b8c1f754](https://gist.github.com/staltz/868e7e9bc2a7b8c1f754) - [https://dev.to/reactiveconf/the-concepts-and-misconceptions-of-reactive-programming](https://dev.to/reactiveconf/the-concepts-and-misconceptions-of-reactive-programming) - [https://www.youtube.com/watch?v=BfZpr0USIi4](https://www.youtube.com/watch?v=BfZpr0USIi4) - [https://dev.to/thisdotmedia/reactive-programming-is-not-a-trend-why-the-time-to-adopt-is-now-31e8](https://dev.to/thisdotmedia/reactive-programming-is-not-a-trend-why-the-time-to-adopt-is-now-31e8) If you have any suggestions (as I’m new to this) or wish to discuss more on this topic please write to me at mariappangameo@gmail.com. I’d also love to connect with you all on [Linkedin](https://www.linkedin.com/in/marigameo).
marigameo
979,056
How to Create Error Charts (JS): COVID-19 Threat Perceptions in U.S. by Party
Need a cool interactive error chart visualization for your web page or app? Let me be your guide!...
16,679
2022-02-04T20:49:56
https://www.anychart.com/blog/2022/02/03/error-chart-js/
javascript, beginners, tutorial, datascience
Need a cool interactive [error chart](https://www.anychart.com/chartopedia/chart-type/error-chart/) visualization for your web page or app? Let me be your guide! Follow this tutorial and you’ll learn how to easily create cool interactive error charts using JavaScript. Here, I will be visualizing data on COVID-19 threat perceptions in the United States during the first six months of the pandemic, by political affiliation. So you will also be able to explore the divergence in those attitudes between Democrats and Republicans. The data originates from the article [“COVID-19 and vaccine hesitancy: A longitudinal study”](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0250123) published in the _Plos One_ journal. ## What Is an Error Chart Before I begin, I’d like to make sure we are on the same page in understanding what an error chart actually is and how it works. An error chart is a type of data visualization commonly used to show uncertainty or variability of data with the help of so-called error bars. The latter can be added on top of the main graph to represent value ranges in addition to average (mean) values plotted in the form of lines, columns, bars, areas, markers, or other types of series. ## Error Chart Preview Now, let me show you how the final error chart will look to get you all even more excited about learning how to create one yourself! So, I am going to represent the perceived threat of COVID-19 among the American population by political affiliation over time, from March to August 2020. Precisely, I will show the [results](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0250123) of the longitudinal survey conducted by Ariel Fridman, Rachel Gershon, and Ayelet Gneezy in the following way. Mean responses of self-identified Democrats and Republicans will be visualized in two [line charts](https://www.anychart.com/chartopedia/chart-type/line-chart/), and the error bars will represent the 95% confidence intervals. 
In the survey, the scale for the responses was from 1 to 7, where 1 is the minimum perceived threat and 7 is the maximum. ![JS Error Chart That Will Be Built Along the Tutorial](https://www.anychart.com/blog/wp-content/uploads/2022/02/error-chart.png) ## Basic JS Error Chart in 4 Simple Steps An error chart is pretty simple to create with a good JavaScript charting library. Here, I am going to use [AnyChart](https://www.anychart.com/), which is a flexible and easy-to-use one. It has great [documentation](https://docs.anychart.com/), a lot of useful [examples](https://www.anychart.com/products/anychart/gallery/), and a [playground](https://playground.anychart.com/) to experiment with the data visualization code on your own. Basic knowledge of web technologies is always good to have. But when it comes to creating interactive JavaScript-based graphs like this, everything is quite straightforward even without much of such background. The process of building a JS error chart can be split into the following four basic steps: 1. Create an HTML page. 2. Include JavaScript files. 3. Add data. 4. Write some JS charting code. ### 1. Create an HTML page The first thing I am going to do is create a basic HTML page to hold my error chart. There, I define a block element with an id attribute (let it be “container”). Next, I add the style attributes in the `<head>` section, where I set the width and height of the `<div>` as 100% so that my chart renders over the whole page. ```html <html> <head> <title>JavaScript Error Chart</title> <style type="text/css"> html, body, #container { width: 100%; height: 100%; margin: 0; padding: 0; } </style> </head> <body> <div id="container"></div> </body> </html> ``` ### 2. Include JavaScript files Then, I need to include all necessary scripts. I am going to use them from the [CDN](https://cdn.anychart.com/), but you can also download the files if you want. 
For an error chart, all I need is the [base](https://docs.anychart.com/Quick_Start/Modules#base) module. I reference it in the `<head>` section of the web page. ```html <html> <head> <title>JavaScript Error Chart</title> <script src="https://cdn.anychart.com/releases/8.11.0/js/anychart-base.min.js"></script> <style type="text/css"> html, body, #container { width: 100%; height: 100%; margin: 0; padding: 0; } </style> </head> <body> <div id="container"></div> <script> // All the JS error chart code will be here. </script> </body> </html> ``` ### 3. Set the data There are [multiple ways to load data](https://docs.anychart.com/Working_with_Data/Supported_Data_Formats). In this case, the amount of data I want to visualize is pretty small. So I can just go ahead and put it right there in the code. I am going to plot two series, for Democrats and Republicans. So I add the data like this: ```javascript var data1 = anychart.data .set([ ['Mar', 4.26, 0.1, 0.1], ['Apr', 4.46, 0.11, 0.11], ['May', 4.36, 0.126, 0.126], ['June', 4.29, 0.132, 0.132], ['July', 4.49, 0.124, 0.124], ['Aug', 4.47, 0.124, 0.124] ]) var data2 = anychart.data .set([ ['Mar', 3.9, 0.144, 0.144], ['Apr', 4.09, 0.172, 0.172], ['May', 3.71, 0.196, 0.196], ['June', 3.54, 0.198, 0.198], ['July', 3.78, 0.196, 0.196], ['Aug', 3.7, 0.194, 0.195] ]) ``` Now that all the background work has been sorted out, let’s get to the main part of creating a JavaScript error chart. Well, it takes only a few more lines of code. ### 4. Write some JS charting code To begin with, I add a function that encloses all the JavaScript charting code to ensure that the page is ready before executing anything. Inside this function, I create a line chart and provide it with a title. 
```javascript // create a line chart var chart = anychart.line(); // add a chart title chart.title('Perceived threat of COVID-19 by political affiliation (March–August 2020)'); ``` Next, I set the data for both series in the form of an array as described in the previous step. And I use the `mapAs` function to map what each element of my array corresponds to. ```javascript // create a dataset (democrats) var data1 = anychart.data .set([ ['Mar', 4.26, 0.1, 0.1], ['Apr', 4.46, 0.11, 0.11], ['May', 4.36, 0.126, 0.126], ['June', 4.29, 0.132, 0.132], ['July', 4.49, 0.124, 0.124], ['Aug', 4.47, 0.124, 0.124] ]) // map the data .mapAs({ x: 0, value: 1, valueLowerError: 2, valueUpperError: 3 }); // create a dataset (republicans) var data2 = anychart.data .set([ ['Mar', 3.9, 0.144, 0.144], ['Apr', 4.09, 0.172, 0.172], ['May', 3.71, 0.196, 0.196], ['June', 3.54, 0.198, 0.198], ['July', 3.78, 0.196, 0.196], ['Aug', 3.7, 0.194, 0.195] ]) // map the data .mapAs({ x: 0, value: 1, valueLowerError: 2, valueUpperError: 3 }); ``` Since all the data values lie within a range, I configure the Y-axis accordingly using the `yScale` function. ```javascript var yScale = chart.yScale(); yScale.minimum(3.2); yScale.maximum(4.8); ``` Finally, I create a line series for each dataset, set the container reference, and draw the resulting error graphics. ```javascript // create a line series for the first dataset var series1 = chart.line(data1); // create a line series for the second dataset var series2 = chart.line(data2); // set a container id for the chart chart.container('container'); // command to draw the chart chart.draw(); ``` Aha — a lovely interactive error chart is right there on the page! 
![Basic (Initial) JS Error Chart](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rikvwd4w3zbwqthkg4uh.png) We can clearly see a divergence in COVID-19 threat perceptions between Democrats (the top line) and Republicans (the bottom line) during the first months of the pandemic, and the corresponding over-time trends appear to be quite significantly different. You are more than welcome to take a closer look at this initial JavaScript-based error chart and play around with its code on [AnyChart Playground](https://playground.anychart.com/irYKkg0W) or [CodePen](https://codepen.io/shacheeswadia/pen/KKvjQaL). ## Error Chart Customization Although a basic error chart is here and ready to do its job, there is always room for improvement in the look, feel, and legibility. The AnyChart JS library has a lot of great options to enhance an error chart and make it even more elegant and informative. So, let me walk you through a few quick customization steps that you might like to take. 1. Add a grid and axis ticks 2. Change the colors 3. Add markers 4. Enhance the tooltip **FOR A WALKTHROUGH OF THESE JS ERROR CHART CUSTOMIZATIONS, [CONTINUE READING HERE](https://www.anychart.com/blog/2022/02/03/error-chart-js/)**.
andreykh
979,184
Reflecting on React
During this past month or so I learned how to utilize React. Although I only learned the basics and...
0
2022-02-06T23:13:39
https://dev.to/mailauki/reflecting-on-react-511h
react, beginners
During this past month or so I learned how to utilize React. Although I only learned the basics and can only make good use of two hooks, I can implement them in ways that work. I can also utilize JSX and React's hooks to implement the same functionality as if I were using JavaScript. The primary difference is the ease of use and the lack of complexity in the appearance. My favorite aspect of using JSX instead of plain JavaScript is that JSX looks similar to typical HTML and makes it easier to see what is going on in the code. Plus, I can put the JavaScript functionality directly into the same code I use for rendering. So for me it's like looking at HTML that functions as JavaScript, which is both beautiful and easier to use, at least for me. Beyond how much better I like JSX in comparison to typical JavaScript, there is React itself, which is super great. I can use it to render HTML via JSX and use its hooks to store information and minimize unnecessary extra loading or rendering. I really like using the useState hook, because it not only makes my life easier by storing information, it is also really easy to use. I previously used simple variables in JavaScript to achieve the same functionality, but it was messy and complicated, and it just barely helped me achieve the functionality I wanted. Looking back I probably overcomplicated my situation more than necessary, but regardless, useState is much kinder to me. The useState hook is not only simple to use, but also not a strain to look at, and it can be used without much complexity. In addition to the useState hook I learned to utilize the useEffect hook, which I primarily used in tandem with useState. This was to fetch data from an API or JSON file, store the data within a variable declared with useState, and then be able to utilize that data with ease without extraneous fetch requests.
The useEffect hook has other neat uses and side effects that can be implemented here and there, but I found its use with fetch requests very helpful; it made things easier to look at and more organized for me to use. If you can find at least one use for it and be able to utilize it, then I think that is enough to get by. I primarily used the useState and useEffect hooks, but I also learned about routing with “react-router-dom” and using its hooks. This routing is extremely helpful and quick to use, and its features beyond simple links make creating navigation bars and alternating page content super simple. In addition to React's hooks and features that can be accessed and used by simply importing them, there are other features and the like that are just as easy to get a hold of and use. Rather than trying to learn how to accomplish something and straining to figure out the ins and outs of a certain feature, you can simply import the feature itself. Working with features and components this way makes creating content and features so much simpler and quicker. Utilizing React and similar features is the coolest thing I have come across so far, and using it through JSX is a joy to look at and use. The combination of the two is a wonder to behold and an ease to use.
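The fetch-and-store pattern I described boils down to a sketch like this (the endpoint and field names are invented for illustration, not taken from a real project):

```jsx
import { useState, useEffect } from "react";

function MovieList() {
  // useState holds the fetched data across renders
  const [movies, setMovies] = useState([]);

  // useEffect runs the fetch once on mount (empty dependency array),
  // avoiding extraneous requests on every re-render
  useEffect(() => {
    fetch("/movies.json")
      .then((res) => res.json())
      .then(setMovies);
  }, []);

  return (
    <ul>
      {movies.map((movie) => (
        <li key={movie.id}>{movie.title}</li>
      ))}
    </ul>
  );
}
```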
mailauki
979,359
Quick Poll: Fav/Most Used Python Virtual Environment manager?
Hi All, I've used Anaconda, Minicoda, venv, and pipenv. I'm (mostly) indifferent about which is...
0
2022-02-05T04:31:13
https://dev.to/williamlake/quick-poll-favmost-used-python-virtual-environment-manager-imf
python
Hi All, I've used Anaconda, Miniconda, venv, and pipenv. I'm (mostly) indifferent about which is "better". Now I want to get a point of reference on what other Python devs are using and/or prefer. Is there one you like or use the most? Is there one your work enforces explicitly? Is there one outside of this list you prefer or use a lot? If so, what is it, and why do you prefer it over the others? Thanks for helping me learn something new, y'all. I hope your week has gone well!
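For context, the zero-install baseline I'm comparing the others against is the stdlib venv module; creating one looks like this (the --without-pip flag just keeps the example fast and dependency-free):

```shell
# Create a throwaway stdlib virtual environment
python3 -m venv --without-pip .venv-demo

# The activation script now exists inside it
ls .venv-demo/bin/activate
```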
williamlake
979,574
The importance of seeing red
Today I was working on a small REST service for a client. As I follow Test-Driven Development, the...
0
2022-02-11T22:32:22
https://jhall.io/archive/2022/02/05/the-importance-of-seeing-red/
tdd, xp
---
title: The importance of seeing red
published: true
date: 2022-02-05 00:00:00 UTC
tags: tdd,xp
canonical_url: https://jhall.io/archive/2022/02/05/the-importance-of-seeing-red/
---
Today I was working on a small REST service for a client. As I follow Test-Driven Development, the first thing I wanted to do when adding a new endpoint was add a function to test it. Of course I already have several test functions, so I copied one to modify for my new endpoint. I renamed the test function, and deleted all the specific test cases. Then I copied the first test case from another endpoint into the new endpoint. It was just a test to ensure that I’d get a `415 Unsupported Media Type` response if I sent a non-JSON payload. I also copied the content-type check logic from another function into the new one, then ran the test. It passed. Obviously. So I went on to the next step. I added a new test, this time to return a `400 Bad Request` if I sent invalid JSON to the endpoint. And I ran the test… and it passed. Wait, what? I hadn’t yet told the new endpoint to parse JSON. What’s going on? * * * It turns out I had cheated a bit above. When I wrote my first test case, I didn’t actually validate that it failed before I added the code to make it pass. So when it _did_ pass, I wasn’t surprised… **even though it was a completely invalid test**. You see, I had forgotten to update the copy-pasted test code to call the new endpoint. It was still calling the old endpoint, which _also_ had a content-type check. And it _also_ did JSON parsing. This isn’t the only time I’ve tried to take a TDD shortcut. These shortcuts always bite me in the rear end. There’s a reason for the “Red - Green - Refactor” cycle of TDD. Every one of those steps is there for a reason. Skipping Red invalidates the TDD cycle! * * * _If you enjoyed this message, [subscribe](https://jhall.io/daily) to <u>The Daily Commit</u> to get future messages to your inbox._
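The article doesn't show the service's code, but the kind of content-type check under test looks roughly like this Go sketch (the handler and endpoint names are invented; Go's httptest package makes this kind of check cheap to run):

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"strings"
)

// newEndpoint rejects non-JSON payloads with 415, mirroring the first
// test case described in the article (all names here are illustrative).
func newEndpoint(w http.ResponseWriter, r *http.Request) {
	if r.Header.Get("Content-Type") != "application/json" {
		w.WriteHeader(http.StatusUnsupportedMediaType)
		return
	}
	w.WriteHeader(http.StatusOK)
}

// statusFor exercises the handler the way a test would: build a request,
// record the response, and return the status code.
func statusFor(contentType, body string) int {
	req := httptest.NewRequest(http.MethodPost, "/new-endpoint", strings.NewReader(body))
	req.Header.Set("Content-Type", contentType)
	rec := httptest.NewRecorder()
	newEndpoint(rec, req)
	return rec.Code
}

func main() {
	// Before the Content-Type check above existed, the first call would have
	// printed 200. Seeing that failure first is the "red" step the shortcut skipped.
	fmt.Println(statusFor("text/plain", "not json"))
	fmt.Println(statusFor("application/json", "{}"))
}
```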
jhall
979,681
Simple, very dumb system that constantly works
A great article from Amazon’s Builder Library. These patterns have three key features. One, they...
0
2022-02-05T13:29:09
https://mahmutcanga.com/2021/02/22/simple-very-dumb-system-that-constantly-works/
amazon, big0notation, builder
--- title: Simple, very dumb system that constantly works published: true date: 2021-02-22 15:16:43 UTC tags: amazon,big0notation,builder canonical_url: https://mahmutcanga.com/2021/02/22/simple-very-dumb-system-that-constantly-works/ --- A great article from [Amazon’s Builder Library.](https://pages.awscloud.com/amazon-builders-library.html) ![](https://d1.awsstatic.com/builderslibrary/icons/Nighthawks.e595a07aee1743a6655011891b48de02dc571251.jpg) > These patterns have three key features. One, they don’t scale up or slow down with load or stress. Two, they don’t have modes, which means they do the same operations in all conditions. Three, if they have any variation, it’s to do less work in times of stress so they can perform better when you need them most. There’s that anti-fragility again. > Even when there are only a few health checks active, the health checkers send a set of results to the aggregators that is sized to the maximum. For example, if only 10 health checks are configured on a particular health checker, it’s still constantly sending out a set of (for example) 10,000 results, if that’s how many health checks it could ultimately support. The other 9,990 entries are dummies. However, this ensures that the network load, as well as the work the aggregators are doing, won’t increase as customers configure more health checks. That’s a significant source of variance … gone. > Something important to remember is that O(1) doesn’t mean that a process or algorithm only uses one operation. It means that it uses a constant number of operations regardless of the size of the input. The notation should really be O(C). > A unicycle has fewer moving parts than a bicycle, but it’s much harder to ride. That’s not simpler. A good design has to handle many stresses and faults, and over enough time “survival of the fittest” tends to eliminate designs that have too many or too few moving parts or are not practical. 
More on [https://aws.amazon.com/builders-library/reliability-and-constant-work/](https://aws.amazon.com/builders-library/reliability-and-constant-work/)
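The padded-to-maximum health-check results described in the quotes can be sketched in a few lines of Go (the sizes and names here are mine, purely illustrative, not Amazon's):

```go
package main

import "fmt"

// maxChecks is the fixed capacity the system is provisioned for.
const maxChecks = 10

// report always produces exactly maxChecks entries: real results first,
// dummy (zero-value) entries after. Downstream work stays constant
// regardless of how many checks are actually configured.
func report(active []bool) []bool {
	out := make([]bool, maxChecks)
	copy(out, active)
	return out
}

func main() {
	fmt.Println(len(report([]bool{true, false}))) // 10
	fmt.Println(len(report(nil)))                 // still 10
}
```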
mahmutcanga
979,709
Full stack java with react
My new project which include java in backend and react library in frontend connect with me for any...
0
2022-02-05T14:25:30
https://dev.to/vermaprince123/full-stack-java-with-react-3e7o
react, java, javascript, nextjs
My new project includes Java on the backend and the React library on the frontend. Connect with me for any queries about this project! https://github.com/vermaprince123/fullStackJavaWithReact
vermaprince123
979,753
Easy-to-grasp Playwright boilerplate for testing
Please navigate through embedded outline , do examine the boilerplate with corresponding comment as...
16,690
2022-02-05T16:05:38
https://dev.to/projektorius96/easy-to-grasp-playwright-boilerplate-for-testing-24oh
playwright, testing
Please navigate through the embedded outline and examine the boilerplate; the comment at each line serves as part of the explanation: {% codesandbox playwright-configs-in-general-1xwbl %} --- If you find anything that could be improved, please leave a comment below or reach out to me via my [LinkedIn profile](https://www.linkedin.com/in/lukas-gau%C4%8Das-bb3877152/) Thank you! 😊
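If the sandbox is unavailable, a minimal Playwright config of the general kind such a boilerplate covers looks roughly like this (a sketch only; the paths and options are illustrative and are not the sandbox's actual file):

```javascript
// playwright.config.js — a minimal sketch, not the sandbox's real config
const { defineConfig } = require('@playwright/test');

module.exports = defineConfig({
  testDir: './tests',       // where the spec files live
  use: {
    headless: true,         // run browsers without a visible window
    baseURL: 'https://example.com',
  },
});
```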
projektorius96
979,784
Building my own Interpreter: Part 1
Note: This is not supposed to be a tutorial on building the interpreter. This is just my anecdote...
0
2022-02-05T16:32:31
https://dev.to/brainbuzzer/building-my-own-interpreter-part-1-1m5d
go, programming, challenge
> Note: This is not supposed to be a tutorial on building an interpreter. This is just my anecdote about how I built my interpreter. If you're looking for extensive tutorials, I'd recommend going to [Crafting Interpreters](https://craftinginterpreters.com/) or purchasing [Interpreter Book](https://interpreterbook.com/) (on which this article is based, if you want a deeper dive). I've been programming for several years now and have tackled working with many languages. But one thing that always bugged me was: how do these programming languages really work? I know there are compilers and interpreters that do most of the work of converting a simple text file into a working program that the computer can understand, but how exactly do they work? It had become increasingly important that I understand the inner workings of a language because of what we are doing at [Hyperlog](https://hyperlog.io). We are building a scoring system that analyzes the skillsets of a programmer by going through multiple metrics. A couple of these metrics are tightly coupled with how you write and structure a program. So in order to get deeper insight, I decided to implement one of my own interpreters. That is when I embarked on my journey. I know what you're thinking: why an interpreter rather than a compiler? I like to take the top-down approach to learning new stuff. The major inspiration behind this was a book by Kurose and Ross, [Computer Networking - A Top-Down Approach](https://www.pearson.com/us/higher-education/program/Kurose-Computer-Networking-A-Top-Down-Approach-7th-Edition/PGM1101673.html). I learned a lot about computer networking in a neat manner while reading that book. And ever since, I've been doing a similar form of learning as often as possible. ## What tech to use? I guess this is the most common dilemma of a programmer. While writing this interpreter, I wanted to focus on the inner workings rather than learning a whole new language like assembly.
For this adventure, I settled on using Golang. Go gives you the basic barebones for talking to the computer, and you can write programs that don't require any imports or installs from external libraries just to make the basic code usable. (Looking at you, JavaScript.)

## What would the interpreter be interpreting?

In order to properly implement my interpreter, I need some basic syntax for the language. I decided to settle on this syntax, which is inspired by a bit of C++ and a bit of Golang:

```
let five = 5;
let ten = 10;

let add = fn(x, y) {
  x + y;
};

let result = add(five, ten);

if (5 < 10) {
  return true;
} else {
  return false;
}

let fibonacci = fn(x) {
  if (x == 0) {
    0
  } else {
    if (x == 1) {
      1
    } else {
      fibonacci(x - 1) + fibonacci(x - 2);
    }
  }
};

let twice = fn(f, x) {
  return f(f(x));
};

let addTwo = fn(x) {
  return x + 2;
};

twice(addTwo, 2);
```

The above program should run successfully using my interpreter. If you notice, there are some pretty basic things there. Let's go through them one by one.

1. Keywords - A few keywords, including `let`, `return`, `if`, `else`, and `fn`.
2. Recursive functions - The Fibonacci function written above is a recursive function.
3. Implicit returns - If you look closely at the `add` and `fibonacci` functions, they do not have a return statement. This part was inspired by my favorite language, Ruby.
4. Functions as parameters to other functions - The last part of this program passes a function as a parameter.

## What really goes in an interpreter?

If you've been in the programming sphere for a couple of years now, you may have heard the words "Lexer", "Parser", and "Evaluator". These are the most important parts of an interpreter. So what exactly are they?

### Lexer

The lexer converts our source code into a series of tokens. This is particularly helpful for defining the basic structure of words that can be used in your program, and for classifying those words.
All the keywords, variable names, variable types, and operators are put into their own tokens in this step.

### Parser

Once your program passes through the lexer, the interpreter needs to make sure that you have written the tokens in the correct syntax. A parser basically declares the grammar of the language. The parser is also responsible for building the abstract syntax tree (AST) of your program. Note that the parser does not actually evaluate and run the code; it just checks the grammar. Evaluation happens in the next steps, after the parser makes sure that the code is in the correct syntax.

### Evaluator

This is the part that actually looks at how to execute the program. After the program goes through the lexer and parser, the evaluator steps in.

## Let's build the interpreter.

Starting out, I built a token system that defined what each character would mean in the language. In order to get there, I first needed a token type that holds the type of the token and the actual token itself. This is particularly useful for throwing error messages like "Expected token to be an int, found string".

```go
type Token struct {
	Type    TokenType
	Literal string
}
```

Then, there are the actual token types:

```go
const (
	ILLEGAL = "ILLEGAL"
	EOF     = "EOF"

	IDENT = "IDENT"
	INT   = "INT"

	ASSIGN    = "="
	PLUS      = "+"
	GT        = ">"
	LT        = "<"
	BANG      = "!"
	MINUS     = "-"
	SLASH     = "/"
	ASTERICKS = "*"

	COMMA     = ","
	SEMICOLON = ";"

	LPAREN = "("
	RPAREN = ")"
	LBRACE = "{"
	RBRACE = "}"

	EQ     = "=="
	NOT_EQ = "!="

	FUNCTION = "FUNCTION"
	LET      = "LET"
	RETURN   = "return"
	TRUE     = "true"
	FALSE    = "false"
	IF       = "if"
	ELSE     = "else"
)
```

In this block, I think the not-so-apparent ones are `ILLEGAL`, `EOF`, and `IDENT`. The `ILLEGAL` token type is assigned whenever we encounter a character that does not fit our accepted set. Since the interpreter will be using the ASCII character set rather than Unicode (for the sake of simplicity), this is important.
EOF is to determine the end of file, so that we can hand the code over to our parser in the next step. And IDENT is used for identifiers: the variable and function names that can be declared by the user.

### Setting up tests for the lexer

The TDD approach never fails. So I first wrote tests for what exactly I want as output from the lexer. Below is a snippet from `lexer_test.go`.

```go
input := `let five = 5;
let ten = 10;

let add = fn(x, y) {
	x + y;
};

let result = add(five, ten);
!-/*5;
5 < 10 > 5;

if (5 < 10) {
	return true;
} else {
	return false;
}

10 == 10;
10 != 9;
`

tests := []struct {
	expectedType    token.TokenType
	expectedLiteral string
}{
	{token.LET, "let"},
	{token.IDENT, "five"},
	{token.ASSIGN, "="},
	{token.INT, "5"},
	{token.SEMICOLON, ";"},
	{token.LET, "let"},
	{token.IDENT, "ten"},
	{token.ASSIGN, "="},
	{token.INT, "10"},
	{token.SEMICOLON, ";"},
	{token.LET, "let"},
	{token.IDENT, "add"},
	{token.ASSIGN, "="},
	{token.FUNCTION, "fn"},
	{token.LPAREN, "("},
	{token.IDENT, "x"},
	{token.COMMA, ","},
	{token.IDENT, "y"},
	{token.RPAREN, ")"},
	// ........
}

l := New(input)

for i, tt := range tests {
	tok := l.NextToken()

	if tok.Type != tt.expectedType {
		t.Fatalf("tests[%d] - tokenType wrong. expected=%q, got=%q", i, tt.expectedType, tok.Type)
	}

	if tok.Literal != tt.expectedLiteral {
		t.Fatalf("tests[%d] - Literal wrong. expected=%q, got=%q", i, tt.expectedLiteral, tok.Literal)
	}
}
```

Here, we're invoking the function `New` with the given input, which is of type string. Then we invoke the `NextToken` function, which gives us the next available token in the given program.

### Let's write our lexer.

Alright, first things first: we are invoking the `New` function, which returns a lexer. But what does a lexer contain?

```go
type Lexer struct {
	input        string
	position     int
	readPosition int
	ch           byte
}
```

Here `input` is the given input. `position` is the current position our lexer is tokenizing, and `readPosition` is just `position + 1`.
And lastly, `ch` is the character at the current position. Why are these declared this way? Because we'll keep updating the lexer itself, keeping track of the position we are analyzing at any moment while handing out tokens one by one.

Let's declare the `New` function:

```go
func New(input string) *Lexer {
	l := &Lexer{input: input}
	l.readChar() // load the first character so NextToken doesn't start at EOF
	return l
}
```

Pretty easy and almost self-explanatory. The only subtlety is the `readChar` call, which initializes `position`, `readPosition` and `ch`; without it, the very first `NextToken` would see `ch == 0` and report an end of file. Now, what about the `NextToken` function? Behold, as there's a ton of code ahead. All of it is explained in the comments, so do read them.

```go
// Reads the next character and sets the lexer to that position.
func (l *Lexer) readChar() {
	// If the character is last in the file, set the current character
	// to 0. This is helpful for determining the end of file.
	if l.readPosition >= len(l.input) {
		l.ch = 0
	} else {
		l.ch = l.input[l.readPosition]
	}
	l.position = l.readPosition
	l.readPosition += 1
}

// Major function ahead!
func (l *Lexer) NextToken() token.Token {
	// This will be the token for our current character.
	var tok token.Token

	// We don't want those stinky whitespaces to be counted in our program.
	// This might not be very useful if we were writing a ruby or python-like language.
	l.skipWhitespace()

	// Let's determine the token for each character.
	// I think most of it is self explanatory, but I'll just go over it once.
	switch l.ch {
	case '=':
		// Here, we are peeking at the next character because we also want to check for the `==` operator.
		// If the next immediate character is not `=`, we just classify this as the ASSIGN operator.
		if l.peekChar() == '=' {
			ch := l.ch
			l.readChar()
			tok = token.Token{Type: token.EQ, Literal: string(ch) + string(l.ch)}
		} else {
			tok = newToken(token.ASSIGN, l.ch)
		}
	case '+':
		tok = newToken(token.PLUS, l.ch)
	case '(':
		tok = newToken(token.LPAREN, l.ch)
	case ')':
		tok = newToken(token.RPAREN, l.ch)
	case '{':
		tok = newToken(token.LBRACE, l.ch)
	case '}':
		tok = newToken(token.RBRACE, l.ch)
	case ',':
		tok = newToken(token.COMMA, l.ch)
	case ';':
		tok = newToken(token.SEMICOLON, l.ch)
	case '/':
		tok = newToken(token.SLASH, l.ch)
	case '*':
		tok = newToken(token.ASTERICKS, l.ch)
	case '-':
		tok = newToken(token.MINUS, l.ch)
	case '<':
		tok = newToken(token.LT, l.ch)
	case '>':
		tok = newToken(token.GT, l.ch)
	case '!':
		// Again, we are peeking at the next character because we also want to check for the `!=` operator.
		if l.peekChar() == '=' {
			ch := l.ch
			l.readChar()
			tok = token.Token{Type: token.NOT_EQ, Literal: string(ch) + string(l.ch)}
		} else {
			tok = newToken(token.BANG, l.ch)
		}
	case 0:
		// This is important. Remember how we set our character code to 0 if there were no more tokens to be seen?
		// This is where we declare that the end of file has been reached.
		tok.Literal = ""
		tok.Type = token.EOF
	default:
		// Now, why this default case? If you notice above, we have never really declared how we determine
		// keywords, identifiers and ints. So we go on a little adventure of checking whether the word we
		// just read has a match in our keywords table.
		// If yes, we give it exactly that token type.
		// If not, we give it a plain identifier type.
		if isLetter(l.ch) {
			tok.Literal = l.readIdentifier()
			tok.Type = token.LookupIdent(tok.Literal)
			// Notice how we are returning in this function right here.
			// This is because we don't want to read the next character without
			// returning this particular token. If this behavior wasn't
			// implemented, there would be a lot of bugs.
			return tok
		} else if isDigit(l.ch) {
			tok.Type = token.INT
			tok.Literal = l.readNumber()
			return tok
		} else {
			// If nothing else matches, we declare the character illegal.
			tok = newToken(token.ILLEGAL, l.ch)
		}
	}

	// We keep reading the next characters.
	l.readChar()
	return tok
}

// Look above for how exactly this is used.
// It simply reads the complete identifier and
// passes it to the token package's LookupIdent function.
func (l *Lexer) readIdentifier() string {
	position := l.position
	for isLetter(l.ch) {
		l.readChar()
	}
	return l.input[position:l.position]
}

// We take a peek at the next char.
// Helpful for determining the two-character operators.
func (l *Lexer) peekChar() byte {
	if l.readPosition >= len(l.input) {
		return 0
	} else {
		return l.input[l.readPosition]
	}
}

func (l *Lexer) readNumber() string {
	position := l.position
	for isDigit(l.ch) {
		l.readChar()
	}
	return l.input[position:l.position]
}

func isLetter(ch byte) bool {
	return 'a' <= ch && ch <= 'z' || 'A' <= ch && ch <= 'Z' || ch == '_'
}

func isDigit(ch byte) bool {
	return '0' <= ch && ch <= '9'
}

func newToken(tokenType token.TokenType, ch byte) token.Token {
	return token.Token{Type: tokenType, Literal: string(ch)}
}

// Note how we check not just for spaces, but also for tabs, newlines and
// Windows-style line endings.
func (l *Lexer) skipWhitespace() {
	for l.ch == ' ' || l.ch == '\t' || l.ch == '\n' || l.ch == '\r' {
		l.readChar()
	}
}
```

Okay, cool, but what about that `LookupIdent` function? Well, here's the code for that.

```go
var keywords = map[string]TokenType{
	"fn":     FUNCTION,
	"let":    LET,
	"return": RETURN,
	"true":   TRUE,
	"false":  FALSE,
	"if":     IF,
	"else":   ELSE,
}

func LookupIdent(ident string) TokenType {
	if tok, ok := keywords[ident]; ok {
		return tok
	}
	return IDENT
}
```

Get it? We are just mapping the identifier to the proper `TokenType` and returning the type accordingly. And voila! That is the lexer for our basic interpreter.
I know it seems like I skipped over a large portion of the explanation, but if you want to learn more, I highly recommend picking up the [Interpreter Book](https://interpreterbook.com/). Stay tuned for part 2, where I'll be implementing the parser for this language.
brainbuzzer
980,245
Implementing a safe and sound API Key authorization middleware in Go
A common requirement that I face on multiple projects is to safeguard some API endpoints to...
0
2022-02-08T21:32:19
https://caioferreira.dev/post/golang-secure-api-key-middleware/
go, security, timingattack, encryption
---
title: Implementing a safe and sound API Key authorization middleware in Go
published: true
date: 2022-02-06 02:40:00 UTC
tags: go, security, timingAttack, encryption
canonical_url: https://caioferreira.dev/post/golang-secure-api-key-middleware/
---

A common requirement that I face on multiple projects is to safeguard some API endpoints for administrative access, or to provide a secure way for other applications to consume our service in a controlled and traceable manner.

The usual solution for this is API Keys, a simple and effective authorization control mechanism that we can implement with a few lines of code. However, when doing so, we also need to be aware of threats and possible attacks that we may suffer, especially given the privileges that these keys usually carry. Therefore, we are going to analyze common points of concern and design a solution that improves our security posture while keeping things simple.

## API Key threats

There are two main concerns when implementing an API Key authorization scheme: **key provisioning** and **timing attacks**. Let's review each threat before designing solutions to address them.

### Key Provisioning

The key storage is directly related to how applications expect these secrets to be provided to them. Environment variables are the most common solution used in modern services, since they are widely supported and don't incur a high reading cost (in contrast to files), allowing dynamic changes to be easily detected.

However, developers usually define the environment variables through scripts or configuration files, for example using a [Kubernetes Secret](https://kubernetes.io/docs/concepts/configuration/secret/) manifest. This introduces a serious threat of API Keys being committed to git repositories, which, in the event of data leakage from the internal VCS management system, would expose these credentials.
> Note: remember that once committed, even if the keys are deleted from the source files, the information is already in the repository history and is easily searchable with tools like [TruffleHog](https://github.com/trufflesecurity/truffleHog). Therefore, **please do not commit your API Keys to git**!

### Timing Attacks

Once your application is configured with the available API Keys, you need to verify that the end-user provided key (let's call this the _user key_) is correct. Doing so with a naive algorithm, like using the `==` operator, will make the verification end at the first incorrect character, reducing the time taken to respond.

A timing attack takes advantage of this scenario by trying to guess the correct characters of a secret based on how long the application took to respond. If a guess is right, the response will take slightly longer than if it's wrong. Naturally, since equality checks are orders of magnitude faster than the network roundtrip, this type of attack is extremely difficult to perform, because it depends on a statistical analysis of many response samples. By comparing the response time distributions produced by two different guessed characters, one can tell whether they behave differently, and infer that the one with the greater response time is the correct value. For an extensive discussion of statistical techniques that help perform this attack, see [Morgan, Morgan 2015](https://www.blackhat.com/docs/us-15/materials/us-15-Morgan-Web-Timing-Attacks-Made-Practical-wp.pdf).

## Middleware design and implementation

Having these threats in mind, we can design a suitable solution. Let's start with the simplest API Key middleware implementation possible and iterate from it.
```go
func ApiKeyMiddleware(cfg conf.Config, logger logging.Logger) func(handler http.Handler) http.Handler {
	apiKeyHeader := cfg.APIKeyHeader // string
	apiKeys := cfg.APIKeys           // map[string]string

	reverseKeyIndex := make(map[string]string)
	for name, key := range apiKeys {
		reverseKeyIndex[key] = name
	}

	return func(next http.Handler) http.Handler {
		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			apiKey, err := bearerToken(r, apiKeyHeader)
			if err != nil {
				logger.Errorw("request failed API key authentication", "error", err)
				RespondError(w, http.StatusUnauthorized, "invalid API key")
				return
			}

			_, found := reverseKeyIndex[apiKey]
			if !found {
				hostIP, _, err := net.SplitHostPort(r.RemoteAddr)
				if err != nil {
					logger.Errorw("failed to parse remote address", "error", err)
					hostIP = r.RemoteAddr
				}
				logger.Errorw("no matching API key found", "remoteIP", hostIP)

				RespondError(w, http.StatusUnauthorized, "invalid api key")
				return
			}

			next.ServeHTTP(w, r)
		})
	}
}

// bearerToken extracts the content from the header, stripping the Bearer prefix
func bearerToken(r *http.Request, header string) (string, error) {
	rawToken := r.Header.Get(header)
	pieces := strings.SplitN(rawToken, " ", 2)

	if len(pieces) < 2 {
		return "", errors.New("token with incorrect bearer format")
	}

	token := strings.TrimSpace(pieces[1])

	return token, nil
}
```

A middleware is a function that takes an `http.Handler` and returns an `http.Handler`. In this code, the function `ApiKeyMiddleware` is a factory that creates an instance of the middleware with the provided configuration and logger. The `conf.Config` is a struct populated from environment variables, and `logging.Logger` is an interface that can be implemented using any logging library or the standard library. You could pass only the header and the map of keys, but for clarity we chose to make the middleware's dependency on the configuration explicit.
After extracting the fields that it relies on, the function creates a reverse index of the API Keys, which are originally a map from a key id/name to the key value. Using this reverse index, it's trivial to verify whether the user key is valid with a single map lookup. However, this approach expects the API Keys as plaintext values and is susceptible to timing attacks, because its validation algorithm is not constant time.

### Using key hashes for validation

To improve the key provisioning workflow, we can use a simple yet effective solution: expect the available keys to be hashes. Using this approach, we can now commit our key hashes to our repository, because even in the event of a data leak they could not be reversed to their original values.

Let's use the SHA256 hashing algorithm to encode our keys. For example, if one of them is `123456789` (please, do not use a key like this :D), then its hash will be:

```
15e2b0d3c33891ebb0f1ef609ec419420c20e320ce94c65fbc8c3312448eb225
```

Now you can add this hash to your deployment script, Kubernetes Secret, etc., and commit it with peace of mind. Next, we need to handle this new format in our middleware.
This is what the code will look like now:

```go
func ApiKeyMiddleware(cfg conf.Config, logger logging.Logger) func(handler http.Handler) http.Handler {
	apiKeyHeader := cfg.APIKeyHeader // string
	apiKeys := cfg.APIKeys           // map[string]string, now name -> SHA256 hash (hex)

	reverseKeyIndex := make(map[string]string)
	for name, key := range apiKeys {
		reverseKeyIndex[key] = name
	}

	return func(next http.Handler) http.Handler {
		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			apiKey, err := bearerToken(r, apiKeyHeader)
			if err != nil {
				logger.Errorw("request failed API key authentication", "error", err)
				RespondError(w, http.StatusUnauthorized, "invalid API key")
				return
			}

			_, ok := apiKeyIsValid(apiKey, reverseKeyIndex)
			if !ok {
				hostIP, _, err := net.SplitHostPort(r.RemoteAddr)
				if err != nil {
					logger.Errorw("failed to parse remote address", "error", err)
					hostIP = r.RemoteAddr
				}
				logger.Errorw("no matching API key found", "remoteIP", hostIP)

				RespondError(w, http.StatusUnauthorized, "invalid api key")
				return
			}

			next.ServeHTTP(w, r)
		})
	}
}

// apiKeyIsValid checks if the given API key is valid and returns the principal if it is.
func apiKeyIsValid(rawKey string, availableKeys map[string]string) (string, bool) {
	hash := sha256.Sum256([]byte(rawKey))
	key := hex.EncodeToString(hash[:])

	name, found := availableKeys[key]
	return name, found
}

// bearerToken function omitted...
```

Here we extracted the key validation logic into a function that encodes the user key with the same SHA256 algorithm (and the same hex encoding) before checking it against the available ones. This simple step improved our security posture a lot without adding much complexity. Now we get the benefits of version control, like change history and easy detection when someone changes a key hash. This approach works well when there are few keys to be managed and you want to follow a GitOps approach.
However, if you need to scale the key management, or allow for self-service key requests and automatic rotation, you may want to look at a solution like [Hashicorp Vault](https://www.vaultproject.io). Even when using an external secret store, I still believe this strategy of relying on key hashes is valid, because your external secret store can persist both the original key and the hash, and the application's access policy can then have fewer privileges, in such a way that it can only read the hashes.

### Constant time key verification

Once we have a better strategy to provision our keys, we need to defend them against being exfiltrated through timing attacks. The solution for this kind of vulnerability is an algorithm that takes the same time to produce a result whether the keys are equal or not. This is called a constant time comparison, and the Go standard library offers us an implementation in the `crypto/subtle` package that is perfect for most of our needs.

Hence, we can update our code to use this package:

```go
func ApiKeyMiddleware(cfg conf.Config, logger logging.Logger) (func(handler http.Handler) http.Handler, error) {
	apiKeyHeader := cfg.APIKeyHeader
	apiKeys := cfg.APIKeys

	decodedAPIKeys := make(map[string][]byte)
	for name, value := range apiKeys {
		decodedKey, err := hex.DecodeString(value)
		if err != nil {
			return nil, err
		}

		decodedAPIKeys[name] = decodedKey
	}

	return func(next http.Handler) http.Handler {
		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			ctx := r.Context()

			apiKey, err := bearerToken(r, apiKeyHeader)
			if err != nil {
				logger.Errorw("request failed API key authentication", "error", err)
				RespondError(w, http.StatusUnauthorized, "invalid API key")
				return
			}

			if _, ok := apiKeyIsValid(apiKey, decodedAPIKeys); !ok {
				hostIP, _, err := net.SplitHostPort(r.RemoteAddr)
				if err != nil {
					logger.Errorw("failed to parse remote address", "error", err)
					hostIP = r.RemoteAddr
				}
				logger.Errorw("no matching API key found", "remoteIP", hostIP)

				RespondError(w, http.StatusUnauthorized, "invalid api key")
				return
			}

			next.ServeHTTP(w, r.WithContext(ctx))
		})
	}, nil
}

// apiKeyIsValid checks if the given API key is valid and returns the principal if it is.
func apiKeyIsValid(rawKey string, availableKeys map[string][]byte) (string, bool) {
	hash := sha256.Sum256([]byte(rawKey))
	key := hash[:]

	for name, value := range availableKeys {
		contentEqual := subtle.ConstantTimeCompare(value, key) == 1

		if contentEqual {
			return name, true
		}
	}

	return "", false
}

// bearerToken function omitted...
```

Now the `apiKeyIsValid` function uses `subtle.ConstantTimeCompare` to verify the user key against each available key. Since `subtle.ConstantTimeCompare` operates on byte slices, we now keep the hash as a byte slice instead of converting it to a string, and our reverse index is gone, replaced by a map of decoded keys. The decoding is necessary because the string representation of our key hashes is actually a hexadecimal encoding of the binary value. Hence, we cannot just cast the string to a byte slice, because Go assumes all strings to be UTF-8 encoded.

> Note: as an example of how a cast differs from the correct decoding function, the result of `[]byte("09")` is `110000111001` while `hex.DecodeString("09")` produces `1001`. Check out the live example [here](https://go.dev/play/p/CPy16o7hvDO).

The major disadvantage of this solution is that now we need to iterate over all available keys before finding out that a key is incorrect. This doesn't scale well if there are too many keys; however, one simple workaround would be to require the client to send an extra header with the key ID/name, e.g. `X-App-Key-ID`, with which you can find the key in `O(1)` and then apply the constant time comparison.

However, there is one subtle (_pun intended_) behavior of `subtle.ConstantTimeCompare` that we must be aware of before deploying our solution to production.
When the byte slices have different lengths, the function returns early without performing the bitwise operations. This is natural, because it does an XOR between each pair of bits from the two slices, and with slices of different sizes there would be bits from one slice without a matching pair to be combined with. **Because of this, an adversary could measure that keys with the wrong length have a smaller response time than keys with the correct length, hence leaking the key length**. This would only be a vulnerability if you use a short key that is easily brute-forced; with a simple 30-character key drawn from the 95 printable ASCII characters you would have `95^30 ≈ 2.1 × 10^59` possible keys.

Finally, we've built a simple, secure and efficient API Key solution that should handle a lot of use cases without additional infrastructure or complexity. Using a basic understanding of threats and the Go standard library, we could do a security-oriented design in an iterative way, instead of leaving security as an afterthought.

Photo by [Silas Köhler](https://unsplash.com/@silas_crioco) on [Unsplash](https://unsplash.com/s/photos/key)
caiorcferreira
980,386
What I wish I knew when started working as software developer
I’m going to talk about things that sounds simple, but helps a lot when you realize them. ...
0
2022-02-06T10:56:59
https://dev.to/zhmakp/what-i-wish-i-knew-when-started-working-as-software-developer-eh3
beginners, productivity, career
I'm going to talk about things that sound simple but help a lot once you realize them.

### It's okay to feel stupid and ask dumb questions

What happens when a person, due to the fear of looking dumb, does not ask questions? They will use their partial knowledge to continue doing their work, which will result in even bigger dumb mistakes. I would rather say that asking is one of the smartest things you can do. It's okay to feel stupid, because most things can be learned.

### Take notes

Taking notes is like saying to your brain, "look, there is something important here." It frees you up from information overload and helps you better organize your thoughts. Whether it's a brainstorming session or just a discussion about a new feature, taking notes helps you keep your thoughts. When you're working on a new feature for your project and find it hard to start, try to draw the structure of your implementation.

### It's completely stupid to work 80-hour weeks

It's hard to keep a fresh and focused state of mind when you keep working overtime. It's not good for you or your team. What can be the result of a person who feels burned out and stressed but keeps struggling with their work? In my mind, that leads to more issues. Even hardworking Bill Gates typically gives himself Think Weeks, cloistered away twice a year; owns an island off Belize; vacations in Croatia; plays bridge and tennis to relax; and reads in his palatial library. It's important to refresh your mind from time to time (take a vacation, have a walk after work, etc.).

### Everything can be done, but it comes at a price

When you feel something is not possible, try to ask yourself "why is that **hard**?", or try to break the issue into smaller blocks until you can see the price. It's important to understand the price in order to set priorities and decide whether it's really worth paying.
### Overthinking is killing the process

From my perspective, when we're overthinking we let the creativity of the brain run overtime and dream up unrealistic scenarios in our heads. That way we make simple things complicated, and as a result it becomes harder to bring them to life. What do we need to do in order to escape that trap?

- Understand whether it's a small or a big decision (overthinking every little decision can needlessly slow your productivity).
- Focus on the present moment (eliminate distractions and make an effort not to lose focus).

### Everyone is a little bit biased

> Bias: prejudice in favor of or against one thing, person, or group compared with another.

Having a bias doesn't make you a bad person, but not recognizing biases can lead to bad decisions. Everyone has their own preferences in the way they work, and we need to be aware of that.
zhmakp
980,756
Python(Quick Start)
Why Python? Python is the most popular languages in the recent times due to its popularity...
0
2022-02-07T04:14:14
https://dev.to/abhishek22512/pythonquick-start-28l
beginners, programming, python
## **Why Python?**

Python has become one of the most popular languages in recent times because it is easy to learn and has a lot of features, with many built-in libraries that are easy to import into a program. It was designed by Guido van Rossum.

## Setup

![Python Landing Page](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z65y38a4f6ekf33dnc36.png)

Download link: https://www.python.org/downloads/

After downloading, just run the setup and check the box labeled "Add Python to PATH" so that the path to Python is saved in the environment variables. Congrats, you've successfully installed Python on your local computer.

![Installation](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ygj19p4vlcbu7h4fb2yq.jpg)

![Installation](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/23i4sigqbnu7ahje56po.png)

**<u>IDEs for Python</u>**

1. PyCharm (https://www.jetbrains.com/pycharm/download/#section=windows)
2. Jupyter (https://jupyter.org/install)
3. Spyder IDE (https://www.spyder-ide.org/)

## Beginner's guide from the Python docs

https://wiki.python.org/moin/BeginnersGuide

I hope this is helpful to all. Keep learning and sharing...
abhishek22512
980,914
A program to compute the normalization coefficients of a given orthogonal 1-D complex wave function
Hi guys, I am a particle physics PhD student passionate about coding and I like to implement some...
0
2022-02-06T20:45:22
https://dev.to/justwhit3/a-program-to-compute-the-normalization-coefficients-of-a-given-orthogonal-1-d-complex-wave-function-58op
python, programming, algorithms, github
Hi guys, I am a particle physics PhD student passionate about coding, and I like to implement physical models and computations in different programming languages.

I want to present a Python program I developed some days ago. It computes the normalization coefficients of a given orthogonal complex 1-D wave function. In quantum physics, a wave function represents the quantum state of a particle and is a solution of the Schroedinger equation. You can find more theoretical information in the documentation of the project.

Currently, the program computes the coefficients only for a 1-D orthogonal wave function, for a given wave function index n, but it could of course be extended in the future to 3 dimensions and to non-orthogonal functions. If you want to contribute with a pull request, it would be more than welcome, since there is also other work to do to extend the project (more info is in the todo file of the GitHub repo); in that case I would of course put your name in the contributors file. The same holds if you find something you don't like or that could be improved: for that, you can send a pull request too, or open an issue.

If you like the project, don't forget to leave a star. Thanks!

GitHub repository: https://github.com/JustWhit3/WaveNCC
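For readers unfamiliar with the term: a normalization coefficient is the constant that rescales a wave function so the total probability equals one. This is the standard textbook definition (see the project's documentation for the full theory), not code quoted from the repo:

```latex
% Normalization condition for a 1-D wave function \psi_n(x):
\int_{-\infty}^{+\infty} \left| N_n \, \psi_n(x) \right|^2 dx = 1
\quad \Longrightarrow \quad
N_n = \left( \int_{-\infty}^{+\infty} \left| \psi_n(x) \right|^2 dx \right)^{-1/2}
```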
justwhit3
980,920
Keystone.js custom fields: text with autocomplete
Photo by Bicanski on Pixnio Problem background Some websites and apps I’m working for my...
0
2022-02-06T21:21:27
https://medium.com/@maciejrokrawczyk/keystone-js-custom-fields-text-with-autocomplete-aef84f8e52fc
nextjs, typescript, keystonejs, react
--- title: Keystone.js custom fields: text with autocomplete published: true date: 2022-02-06 20:25:55 UTC tags: nextjs,typescript,keystonejs,react canonical_url: https://medium.com/@maciejrokrawczyk/keystone-js-custom-fields-text-with-autocomplete-aef84f8e52fc --- ![keystone](https://cdn-images-1.medium.com/max/1024/1*Nk-Up-zBRJkjyul3kMrAow.jpeg) _Photo by Bicanski on Pixnio_ ### Problem background Some of the websites and apps I'm building for my clients are research projects in the humanities, especially in archeology. Because of that, their needs are pretty specific. On the one hand they require a simple webpage to present their research results and fulfil administrative obligations, but on the other hand they also require custom-tailored tools to catalog research data. Usually these are qualitative rather than quantitative data. Typically, my stack to fulfil these needs is pretty extensive. It starts with WordPress as CMS, working in headless mode with a Next.js frontend. And that covers the website part. The data collection tool was in most cases a custom-made app in Node.js and Express, or a PHP app with the Laravel framework. It was completed by Elasticsearch and other large tools. But most of the time I had a gut feeling that it's a bit of an overcomplicated solution and for sure a bloated one. When I was starting the current project, I decided that I had to find a better and lighter way to do that, especially because in this case the data model was not so complicated. My research started with looking for another CMS, ideally in Node.js and headless out of the box. I knew about two systems of this kind, [Strapi](https://strapi.io/) and [Keystone.js](https://keystonejs.com/). The first one is a little too rigid for my taste, but the second one looks like a perfect solution for my needs. The possibility to define schemas in code and the easy configuration were really nice. But after that, I had a great aha moment. What if I can also use it to define models for the data catalog? 
Is there a possibility to kill two birds with one stone? I believe there is. There was one issue though, but more about this later. ### About Keystone.js On the Keystone website we can read that it is a superpowered CMS for developers, with built-in GraphQL support and a Management UI in place. Also, there's a new, just-released version 6. It has a really impressive set of features: authentication and authorization, UI, it uses TypeScript, and much more that is useful and necessary. The only thing we have to worry about is our entities' structure. But there are a couple of problems. The abstraction layer is really thick here. In general, it's not an issue, but in some places it may become a concern. The built-in set of fields which can be used in lists and schemas is more than enough in most basic cases. But unfortunately my case turned out to be not so basic. Maybe it's caused by the specificity of my project, but I needed some extra fields to work with. The issue was not in the field types provided by Keystone; text or JSON fields are versatile enough to serve many purposes. But their visualization in the admin panel can be insufficient sometimes. During development of this project I had to create three custom fields, but only their UI aspects had to change. Underneath they are still text and JSON fields. The first of these three is a text field with autocomplete, and I am going to focus on that in this article. ### Requirements A text field with autocomplete? It sounds like the relationship field from Keystone's basic set. Yes, but not really. Relationship means in that case that there are at least two entities connected to each other. In this case I just need to get suggestions from the table column corresponding to that field. So, what does this field have to have? First, it has to be able to get suggestions from the database (or search engine in my case) while typing, then present them to the user and let her/him select one or finish writing their own version. 
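Stripped of Keystone and React specifics, the requirement above boils down to something like this plain-JavaScript sketch (the data source and names are made up for illustration — the real field queries a search engine instead of a hardcoded list):

```javascript
// Hypothetical stand-in for the search backend (Meilisearch in the article).
const KNOWN_VALUES = ['amphora', 'amulet', 'arrowhead', 'bronze fibula'];

// Return suggestions for a query; only search once the input is long enough,
// mirroring the length threshold used by the field described in the article.
function getSuggestions(query, minLength = 3) {
  if (query.length < minLength) return [];
  const q = query.toLowerCase();
  return KNOWN_VALUES.filter((v) => v.toLowerCase().includes(q));
}

// The field value is whatever the user picked or typed — suggestions never
// overwrite free text, they are only offered.
console.log(getSuggestions('am'));  // too short -> []
console.log(getSuggestions('amu')); // ['amulet']
```

The rest of the article wraps exactly this behavior in a Keystone field view: an input, a suggestions list, and a fetch to a suggestions endpoint.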
### Documentation info about custom fields Mostly because it's new, the documentation of the just-released version of Keystone.js is not so clear in some places. But the part about using custom fields is really nice. Even so, for me the most useful part of it was the link to the source code of the built-in fields (at the moment I am writing this the link is broken, use [this](https://github.com/keystonejs/keystone/tree/main/packages/core/src/fields/types) instead). In our case the interesting one is the text field ([code](https://github.com/keystonejs/keystone/blob/main/packages/core/src/fields/types/text/views/index.tsx)). We can use this file as a template for what we want to achieve. The documentation states that a custom field view has to export four things: a field controller and three React components — Field, Cell and CardValue. Each one is responsible for a different visualization. But here we are in a comfortable position: to fulfill our requirements we only have to change the _Field_ visualization, the others are fine, so we can reuse the built-in ones. Also, there are no changes in the controller, so the original one can be reused too. ### Steps to resolve The first thing I did was create a separate folder _views_ to hold all custom field visualizations, and a subfolder _autocomplete_ for our current visualization. The main file there is _view.tsx_, containing our new field. Because we won't be modifying the field controller and the Cell and CardValue components, we can import them and re-export them right away. {% gist https://gist.github.com/eabald/09d2c6e8dc6df46740fe53eac2c67854 %} I believe it's not strictly necessary, but for the sake of completeness and code readability it is reasonable to do it. Also, here we can add the rest of the necessary imports: {% gist https://gist.github.com/eabald/ad3c81d4384baa0b45736d0bc0c89550 %} Here we are importing basic React hooks which will be necessary later, some types from the Keystone core library, and built-in field components to blend our custom components in with the other fields in the Admin UI. 
Now it's time to create our Field component and export it. Also, we have to declare some basic state for the component. We are going to need it later. {% gist https://gist.github.com/eabald/f6c0a0c3d407d984f4643fbde8257152 %} According to the documentation, the props in this case have several properties: _field_, which is an object containing all the methods and properties from the field controller, and _value_, holding the field value, in this case as an object: {% gist https://gist.github.com/eabald/59a92b4d05e1407cf768dd7b9de40b59 %} Mostly it depends on the kind of operation we are currently performing. It looks different when the user creates an entity and different when it's updated. Also, there are two boolean values, _autoFocus_ and _forceValidation_, both rather self-explanatory. Last, there is the method _onChange_, which is the value setter here. Additionally, I am not quite sure why, but it is optional, and we have to keep that in mind. Ok, now it's time to construct our component's JSX: {% gist https://gist.github.com/eabald/24e59b689c03391df44906ddac982ec0 %} There are two main parts of this component. The first one creates a text input using built-in components (with some extra event handlers), plus a simple suggestions list created using _ul_ and _li_ tags. Also, here we can see the usage of the _visible_ component state. It's responsible for showing the list of suggestions on the focus event (and hiding it on blur). The skeleton of the component is in place; now we have to add some muscles to it and add the missing methods _onChange_ (our own, not the one coming from props), _onSuggestionClick_ and the _setOnChange_ helper. {% gist https://gist.github.com/eabald/433ed2e16962b19a654f1ca3e595673f %} The first of these methods, _setOnChange_, is a kind of helper function responsible for calling the _props.onChange_ method and setting the new value of the field. As I mentioned before, it's optional, so we have to close it inside an _if_ statement and then call it with the new value (keeping in mind how this value should look). 
The next one, _onChange_, fires on each change event. It calls _setOnChange_ first and then starts to look for suggestions: if the input is more than 3 characters long, it calls the async method _getSuggestions_ with the current input value and then calls the state setter _setSuggestions_. Otherwise, if the input is too short, it sets the suggestions to an empty array. I believe in some cases it may be necessary to debounce this method, but I've decided not to. The last one, _onSuggestionClick_, sets the field value to the value of the clicked suggestion. Next we need to create the previously mentioned method _getSuggestions_: {% gist https://gist.github.com/eabald/881f15179a095f35c74ae7c7c8f93253 %} Its purpose is simple: it fetches data from the server in JSON format and returns suggestions. In this case it is a custom endpoint created according to the docs ([more](https://keystonejs.com/docs/apis/config#extend-express-app)): {% gist https://gist.github.com/eabald/62f4e826d4395282c60820d837688974 %} It's a simple Express.js route calling the _autocomplete_ method and returning results as JSON. It takes three parameters: the query to search, the index to search in, and the field to return. In my case, under the hood it uses the [Meilisearch](https://github.com/meilisearch/Meilisearch) search engine and its JS integration — [Meilisearch JavaScript](https://github.com/meilisearch/meilisearch-js). It's a really nice, reliable and, most importantly, lightweight search engine, a useful alternative to the huge Elasticsearch. But I am using it because it's also needed in another part of the system; in other cases using the built-in _QueryAPI_ should be enough. Lastly we have to add a bit of reactiveness to our component; in order to do that we have to add two _useEffect_ hooks and one _useCallback_: {% gist https://gist.github.com/eabald/eb9a23c481a4865b3cb3501f2a9af123 %} First, we create a memoized callback to load initial suggestions when the component loads and the value is already set (in the case of editing a record). 
I used this hook to make sure that this method is only declared once, on component render. Then we call it inside the first _useEffect_ hook, the one running only on component render. The whole idea here is to already have the suggestions when the user starts editing the corresponding field. The last hook changes the visibility of suggestions based on the length of the suggestions state array. Now our component is working as planned, but there's still one issue. It's not pretty at all, so we have to add some styling to blend it better with the rest of the UI. I moved all the styles to a separate file just to keep things tidy, but that's strictly personal. {% gist https://gist.github.com/eabald/a82880afa64ea55358a866fa5906ab83 %} Here I've decided to use the [Emotion](https://emotion.sh/docs/introduction) library just because I know it well enough, and it does its job perfectly. The styles here mostly mimic the built-in ones to help our custom component blend into the UI and look like it was always there. So, here's the complete component: ![Component in action](https://cdn-images-1.medium.com/max/715/1*wF0XNkh6KshS8L2ACBByEg.jpeg) _Component in action_ {% gist https://gist.github.com/eabald/e61f8187388cea20de4f07bc64e261ec %} ### Summary To sum up: using custom fields in Keystone.js is not that hard (at least custom visualizations for fields). It requires a bit of research and a clear idea of what we want to achieve, but it can easily resolve minor problems. And most importantly, it can save us from writing a full-fledged custom system where it's not needed. When I was starting my first job as a junior web developer, someone told me that a real dev knows when to use a library/framework and only adapt it, and when it is better to build a custom solution. I believe that even though a couple of years have passed, I am still learning this. To be honest, this is the first article I've ever written. I hope it's clear and informative, and I really appreciate your feedback. I am also planning to write about the other two custom fields I needed to create. 
And I am challenging myself to publish one each week. I hope it will work out. See you in the next one!
eabald
981,009
Summary: Software Product Quality
According to Pressman (1994), "Software quality is conformance to explicitly stated functional and...
0
2022-02-07T00:17:48
https://dev.to/dmorais2052/resumo-qualidade-de-produto-de-software-5242
According to Pressman (1994), "Software quality is conformance to explicitly stated functional and performance requirements, to clearly documented development standards, and to implicit characteristics that are expected of all professionally developed software." Software requirements are the foundation from which quality is measured. Lack of conformance to requirements means lack of quality. Specified standards define a set of development criteria that guide the manner in which software undergoes engineering work. If the criteria are not followed, the result will almost certainly be a lack of quality. There is a set of implicit requirements that are often not mentioned in the specification (for example, the desire for good maintainability). If the software conforms to its explicit requirements but fails to meet its implicit requirements, its quality may be compromised. That is why it is highly important that software and product quality in a company be pursued with maturity and responsibility, and that developers and managers seek quality in all development processes.
dmorais2052
981,040
Book Notes: Cloud Native Patterns
I have worked on cloud systems for a few years now, but as is said, “Fish don’t know they’re in...
0
2022-02-07T02:41:21
https://dev.to/outofdesk/book-notes-cloud-native-patterns-ihl
cloud, cloudnative, engineering, backend
I have worked on cloud systems for a few years now, but as is said, _“Fish don’t know they’re in water”_, when you’re deep into something it’s hard to zoom out and look at the big picture, to see how ideas and practices are connected to each other and how they share common goals. This is my attempt to do that: zoom out, learn more about and reflect on things I’ve worked on, and make sense of them as a coherent whole, rather than a bunch of loosely and randomly connected ideas. I’m starting with a book called _[“Cloud Native Patterns”, by Cornelia Davis](https://www.manning.com/books/cloud-native-patterns)_, and I plan to use this space to document things I read, learn and think about along the way. As such, I expect this to be a constantly changing and evolving post, much like the cloud systems that the book talks about. ## The Basics ---- **Cloud-first before cloud-native** – Cloud-first is a good precursor to understanding what cloud-native means and what it has to offer. Although the way people define a cloud-first strategy varies widely, it essentially is thinking of the cloud as the primary medium of running your software, as opposed to the older approach that tied all your systems to a set of servers with dedicated resources in a specific physical location that you owned or managed. Cloud, more than anything, is a layer of abstraction built to free developers’ minds from the concerns of how to run systems and promote thinking about what to build. Hence moving to the cloud doesn’t necessarily mean taking your systems off-premises; it’s a way of how you think (or rather, how much you don’t think) about them. **Natural side-effects of moving to the cloud** – Think of taking care of your babies at home versus sending them to daycare. A natural and inevitable side-effect of sending your baby away for a few hours is that you lose some control. It’s neither good nor bad, it just is. 
So while it relieves you of some duties and responsibilities, it also raises the stakes in other ways. Similarly, when you take your systems from your premises to the cloud, it means that you lose some control. But it doesn’t magically solve all your problems; it solves some, and throws other, newer problems at you. **Inherent characteristics of the cloud** – Going back to the daycare example, it’s a more dynamic place than your home. There are other actors at play – other babies, the care-givers, other babies’ parents, and the challenges of an unfamiliar setting. In other words, it’s a chaotic place. Things are not always predictable, and are bound to go wrong, in one way or the other. Because of the sheer number of unknowns, learning about all of them, let alone preventing all the problems, is not an option. Cloud is exactly like that too. There are other systems at play, their dependencies, and the shared infrastructure, needless to say all of that bound by the laws of Physics. It sounds scary at first, but accepting these truths enables us to think about how to build systems that function amidst the chaos, despite all the failings. **So why go to the cloud** – If you lose control and visibility into your systems, why on Earth would you want to migrate? Because absolute control and perfection are delusional goals to build your life or business upon. And while you lose some control over how your systems work together, you gain control over what your systems do, or how they provide value to the customers. If you had all the time in the world, you would have taken care of your babies at home. Not that you don’t like them going out, but that you would like to be around them as much as you can. But time is finite, and society and its changing needs throw a lot at you to juggle. In order to do the best with your time, you prioritize things and delegate or outsource some responsibilities. 
As technology has come to dominate most people’s lives, the changing needs of society often translate to changing needs of software. As the demand on your business increases, you’re compelled to prioritize not only time, but also the limited material resources you possess. **Desirable characteristics of a successful business** – In order to serve the ever-changing needs of society, certain characteristics naturally emerge that are desirable to run a successful business. Simply put, a business should provide the value promised to all its customers, anywhere and anytime, and evolve quickly to address their needs. *All its customers* – In the world of the Internet, your customers come from different backgrounds, cultures, ages, and genders, they use different devices, and operate under widely different constraints. So accessibility, multi-language and multi-device support now become first-class citizens. *Anywhere and anytime* – When your customers are based in different geographical locations across continents, compliance with legal and other policies, and the ability to operate in different time-zones, essentially being available around the clock, are of prime importance. *Evolve quickly* – Evolving quickly to serve such a diverse group of customers and a unique set of constraints means having shorter and more accurate feedback cycles. **Cloud-native** – is an umbrella term used to capture a set of ideas, principles, processes and tools that aims to marry the inherent characteristics of the cloud with the desirable characteristics of today’s businesses in a seamless manner. ## Recommended Reading ----- [Cloud Native Patterns, by Cornelia Davis](https://www.manning.com/books/cloud-native-patterns) [What is Cloud Native](https://docs.microsoft.com/en-us/dotnet/architecture/cloud-native/definition)
outofdesk
981,233
Hi everybody. I made a thing!
TL;DR Remember TL;DR? Sometimes I think we should all get in the habit, after writing a...
0
2022-02-07T08:28:14
https://dev.to/nvlgzr/hi-everybody-i-made-a-thing-11of
svelte, notion, node, sveltekit
## TL;DR Remember _TL;DR_? Sometimes I think we should all get in the habit, after writing a draft, to write the too-long-didn't-read for what we've written, and then to delete the original draft, sending only the critical details. With that in mind… TL;DR—While playing with _notion2svelte_, a tool of my own creation which does just what it says, it occurred to me that I was having fun using it. I had the urge to share it with you all, but then remembered the long draft I'd written as an introductory post. It outlined my reasons for creating _notion2svelte_, examined some of the philosophical underpinnings, and apologized for the rough, evolutionary nature of the v1 architecture. But then I was like, "Forget those early drafts! Just share the links already!" So. No more procrastination. No more meta-analysis, or origin story, or imposter-syndrome-driven apologies for not building the whole thing on [ntast](https://github.com/phuctm97/ntast) (long story). It's link time! If you live your days with one foot in [Notion](https://www.notion.so/), and the other 🦶 in [Svelte](https://svelte.dev/tutorial/dynamic-attributes), I hope you'll find inspiration, and perhaps even utility, in _notion2svelte_! - [README](https://github.com/nvlgzr/notion2svelte/) - [Dog-fooded Docs](https://notion2svelte.vercel.app/) > Feedback welcome! [—🦦](https://github.com/nvlgzr/notion2svelte/discussions/1)
nvlgzr
981,409
How to build a blog using Remix and MDX
Hey, folks 👋. Today we are going to build a new blog site from scratch using Remix, MDX and...
0
2022-02-07T11:21:28
https://kirablog.hashnode.dev/build-a-blog-using-remix-and-mdx
react, javascript, remix
Hey, folks 👋. Today we are going to build a new blog site from scratch using [Remix](https://remix.run/), [MDX](https://mdxjs.com/) and [TailwindCSS](https://tailwindcss.com/) # 🤔 What's Remix? Yet another JavaScript framework Remix is a full-stack web framework based on web fundamentals and modern UX. It is created by the team behind [React Router](https://reactrouter.com/). Remix isn't a brand new framework; it has been around for over a year, but it used to be a paid framework. Now times have changed and Remix is free and open-source software 🚀. Remix is a React-based framework that allows you to render code on the server-side. Wait a second 🤔 Doesn't [NextJS](https://nextjs.org/) do the same thing? Remix took the old problems but approached them in a new style 🐱‍💻. Remix only does Server Side Rendering (SSR) — no Static Site Generation (SSG) or Incremental Static Regeneration (ISR) like NextJS. Applications which use Static Site Generation (SSG) are fast and easy to deploy, but it is really hard to use dynamic data, as the pages would have to be re-built every time the dynamic data changes. In Remix, we are only doing Server Side Rendering (SSR), which is great for dynamic data, but it is harder to deploy as you need an actual server to run it. Remix is suitable for applications that have multiple pages and which depend on some sort of dynamic data # 🛠 Setting up the project Let's set up our project before getting started to code. 1. Create a new folder for our remix blog ```bash mkdir remix-blog ``` 2. Navigate into that folder ```bash cd remix-blog ``` 3. Open that folder in VSCode ```bash code . ``` 4. Initialize a remix project in that folder ```bash npx create-remix@latest ``` - The path of the remix application would be `./`, as we have already created a folder for our project - We are going to deploy our remix application on [Vercel](https://vercel.com) - We are going to be using JavaScript for this project 5. 
Starting a local development server ```bash npm run dev ``` This would start a local development server at [localhost:3000](http://localhost:3000) # 📁 Understanding the folder structure The folder structure of a remix application is pretty simple. - `api` folder contains all the backend/api code. - `app` folder contains most of the frontend code. - `app/routes` folder contains the code for each route. Remix has a file-system based router similar to NextJS - `public` folder contains the static files and assets that are served to the browser when our app is built or deployed. # 👨‍💻 Building the project Let's start building the blog now. Let's first clean up the `app/routes/index.jsx` file. `app/routes/index.jsx` ```jsx export default function Index() { return ( <div style={{ fontFamily: 'system-ui, sans-serif', lineHeight: '1.4' }}> <h1>Welcome to my blog</h1> </div> ); } ``` Remix supports the use of MDX to create a route module, which means we can create a new route using just a plain MDX file. Let's create a new directory inside the `routes` directory called `posts`, and inside that directory let's create a new file called `first-blog-post.mdx` `app/routes/posts/first-blog-post.mdx` ```jsx Hey, welcome to my first blog post 👋 ``` To check out your first blog post, visit [localhost:3000/posts/first-blog-post](http://localhost:3000/posts/first-blog-post) **TADA** 🎉, we have built a basic blog within 2 minutes ![](https://imgur.com/x5a8ovB.png) ## 🙌 Adding frontmatter > The lines in the document above between the `---` are called "frontmatter" Let's add some frontmatter to your first blog post page. You can think of frontmatter as the metadata of that page. You can reference your frontmatter fields through the global attributes variable in your MDX. ```mdx --- title: First Blog Post --- Hey, welcome to {attributes.title} 👋 ``` Let's now add metadata to our blog post's page using frontmatter. 
```mdx --- title: First Blog Post meta: title: First Blog Post description: ✨ WoW --- Hey, welcome to {attributes.title} 👋 ``` As you can see, the title of the page has been changed ![](https://imgur.com/i8ftVOK.png) ... and the description as well ![](https://imgur.com/yuFHUf6.png) Let me quickly add a few blog posts ![](https://imgur.com/vI9iG0F.png) Umm... 🤔 Our blog isn't complete without some kind of syntax highlighting ✨ ## ✨ Adding syntax highlighting We are going to be using [highlight.js](https://highlightjs.org/) for syntax highlighting; you could even use [prism](https://prismjs.com/). ### 🔌 About MDX plugins We are going to achieve syntax highlighting using something called "MDX plugins". With plugins, we can manipulate the process of converting MDX into HTML. Generally, there are two types of plugins - Remark plugins are responsible for manipulating the process of converting MDX to markdown. - Rehype plugins are responsible for manipulating the process of converting the markdown to HTML. For our remix blog, we are going to be using a rehype plugin called [rehype-highlight](https://www.npmjs.com/package/rehype-highlight). 
Install the package using the following command: ```bash npm install rehype-highlight ``` We need to add a bit of configuration to the `remix.config.js` file ```js mdx: async (filename) => { const [rehypeHighlight] = await Promise.all([ import('rehype-highlight').then((mod) => mod.default), ]); return { rehypePlugins: [rehypeHighlight], }; }; ``` Now our `remix.config.js` file would look something like this: ```js /** * @type {import('@remix-run/dev/config').AppConfig} */ module.exports = { appDirectory: 'app', assetsBuildDirectory: 'public/build', publicPath: '/build/', serverBuildDirectory: 'api/_build', ignoredRouteFiles: ['.*'], mdx: async (filename) => { const [rehypeHighlight] = await Promise.all([ import('rehype-highlight').then((mod) => mod.default), ]); return { rehypePlugins: [rehypeHighlight], }; }, }; ``` ### 🧾 Creating a layout file Now let's create a layout file, where we will import one of highlight.js's styles. I will be using the night owl style; you can choose your style from highlight.js's [style demo page](https://highlightjs.org/static/demo/) To create a layout file for our blog posts, create a new file with the same name as the folder (`posts`), at the same level as the `posts` folder. ![](https://imgur.com/MtEUjC0.png) Now we have to import the night owl theme into our layout file and use it as well. ```jsx import styles from 'highlight.js/styles/night-owl.css'; import { Outlet } from 'remix'; export const links = () => { return [ { rel: 'stylesheet', href: styles, }, ]; }; export default function Posts() { return <Outlet />; } ``` In Remix, the `links` function is similar to the `link` tag in HTML. > PS: If you are a VSCode user then install this [remix run snippets extension](https://marketplace.visualstudio.com/items?itemName=amimaro.remix-run-snippets) 🚀. Now let's restart our local development server. 
**TADA** 🎉, we have this wonderful syntax highlighting for our code blocks in our blog now ![](https://imgur.com/brpUvDo.png) ## 🎨 Adding TailwindCSS Typography Right now our blog has syntax highlighting but the font isn't looking great 🤔, and there is nothing better than the [@tailwindcss/typography](https://tailwindcss.com/docs/typography-plugin) plugin to automatically style our entire page's typography using a single `prose` class. ### 📦 Installing dependencies We need a few dependencies to use tailwindcss and tailwindcss's typography plugin. Those dependencies are: - [Concurrently](https://www.npmjs.com/package/concurrently): Concurrently allows you to run multiple commands in a single terminal, so we can watch and build our tailwindcss styles as well as our entire remix application in a single terminal session Let's install all of them: ```bash npm install -D tailwindcss concurrently @tailwindcss/typography ``` ### ⚙ Configuring TailwindCSS Create a new file named `tailwind.config.js`; this file would contain all the configuration for tailwindcss. 
Add the following configuration to the `tailwind.config.js` file `tailwind.config.js` ```js module.exports = { mode: 'jit', purge: ['./app/**/*.{ts,tsx}'], darkMode: false, // or 'media' or 'class' theme: { extend: {}, }, variants: { extend: {}, }, plugins: [require('@tailwindcss/typography')], }; ``` We would have to change the scripts in `package.json` ```json "scripts": { "build": "npm run build:css && remix build", "build:css": "tailwindcss -o ./app/tailwind.css", "dev": "concurrently \"npm run dev:css\" \"remix dev\"", "dev:css": "tailwindcss -o ./app/tailwind.css --watch" }, ``` Importing tailwindcss into the `app/root.jsx` file `app/root.jsx` ```jsx import styles from './tailwind.css'; export const links = () => { return [{ rel: 'stylesheet', href: styles }]; }; ``` Let's restart our server and run the `npm run dev` command You would see an error saying ``` app/root.jsx:9:19: error: Could not resolve "./tailwind.css ``` ![](https://imgur.com/iHGXcv1.png) This occurred because there was no `tailwind.css` file, but you would see that the file has now been created. If the file wasn't created in your case, then create a new file named `tailwind.css` in the `app` directory and copy and paste the CSS from this gist, https://gist.github.com/Kira272921/4541f16d37e6ab4d278ccdcaf3c7e36b ### 💻 Using @tailwindcss/typography plugin Let's open the `app/routes/posts.jsx` file and add some styling. > As `app/routes/posts.jsx` is the layout file for all the blog posts, any styling we add here will be reflected on the blog post pages ```jsx return ( <div className='flex justify-center'> <div className='prose lg:prose-xl py-10'> <Outlet /> </div> </div> ); ``` Here we are using the `@tailwindcss/typography` plugin **TADA** 🎉. Look how beautiful the blog posts are looking now ![](https://imgur.com/v84Sreb.png) ## 📰 Creating a list of articles Let's create a list of articles on the main page (aka the root route). 
In Remix, you can import an entire MDX module as well as the attributes within it. `app/routes/index.jsx` ```jsx import * as firstPost from './posts/build-a-cli-using-nodejs.mdx'; import * as secondPost from './posts/build-a-rest-api-using-nodejs.mdx'; ``` The below function would return the slug (the file name, without the `.mdx`) with the markdown attributes `app/routes/index.jsx` ```jsx function postFromModule(mod) { return { slug: mod.filename.replace(/\.mdx?$/, ''), ...mod.attributes.meta, }; } ``` In Remix, we use a loader function to load data on the server-side `app/routes/index.jsx` ```jsx export const loader = () => { return [postFromModule(firstPost), postFromModule(secondPost)]; }; ``` Here we are loading each of our MDX modules on the server-side using the loader function Finally, our `app/routes/index.jsx` would look something like this ```jsx import { Link, useLoaderData } from 'remix'; import * as firstPost from './posts/build-a-cli-using-nodejs.mdx'; import * as secondPost from './posts/build-a-rest-api-using-nodejs.mdx'; function postFromModule(mod) { return { slug: mod.filename.replace(/\.mdx?$/, ''), ...mod.attributes.meta, }; } export const loader = () => { return [postFromModule(firstPost), postFromModule(secondPost)]; }; export default function BlogIndex() { const posts = useLoaderData(); return ( <div className='prose lg:prose-xl py-10 pl-10'> <h2>Articles</h2> <div className='flex justify-center'> <ul> {posts.map((post) => ( <li key={'posts/' + post.slug}> <Link to={'posts/' + post.slug}>{post.title}</Link> {post.description ? ( <p className='m-0 lg:m-0'>{post.description}</p> ) : null} </li> ))} </ul> </div> </div> ); } ``` This is how our main page looks 🚀 ![](https://imgur.com/pmTnI47.png) ## 🚀 Deploying to Vercel Now that our application is ready, let's deploy it on Vercel 🚀. 1. Initialize an empty git repository ```bash git init ``` 2. Create a new GitHub repository 3. 
Push your changes to that repository ```bash git remote add origin git@github.com:Kira272921/remix-blog.git # change URL to your repo's link git add . git commit -m "feat: initial commit" git branch -M main git push -u origin main ``` 1. If you don't have an account on vercel, create one 1. Create a new project ![](https://imgur.com/9wfhEQ9.png) 1. Import the remix application from our GitHub account ![](https://imgur.com/EMKOgOs.png) 1. Deploy the application ![](https://imgur.com/fgIkn2L.png) - If you are getting an error something like this, add a new script to `package.json` ``` "postinstall": "remix setup node" ``` ![](https://imgur.com/PAG3nla.png) The entire code for this tutorial is present on my GitHub: https://github.com/kira272921/remix-blog Here is what we have built today 🚀: https://remix-blog-orcin.vercel.app/ ### 🧑 About the author - [Github](https://github.com/kira272921) - [Twitter](https://twitter.com/kira_272921) - [Portfolio](https://kiradev.co) So that's it for this blog post folks 🤞. Meet y'all in the next blog
kira272921
981,649
Top 20 Django Interview Questions You Need to Know in 2022 : 🧑‍💻
Django and Python are two of the most in-demand skills, yet they're also among the most difficult....
0
2022-02-07T14:03:34
https://dev.to/baselmmohaisen/top-20-django-interview-questions-you-need-to-know-in-2022--227
python, django, webdev, programming
**Django** and **Python** are two of the most in-demand skills, yet they're also among the most difficult. So, if you want to be ready for your upcoming Django interview, here are the most frequently asked Django interview questions.

1. Which architectural pattern does Django follow?
2. Explain Django architecture.
3. What are the features available in the Django web framework?
4. What are the disadvantages of Django?
5. What are the inheritance styles in Django?
6. Is Django a content management system (CMS)?
7. How can you set up static files in Django?
8. What are some typical uses of middleware in Django?
9. What do Django field class types do?
10. What are the signals in Django?
11. What are the two important parameters in signals?
12. Is the Django Admin interface customizable? If yes, then how?
13. What is Django Rest Framework (DRF)?
14. What is the difference between Django's `OneToOneField` and `ForeignKey` fields?
15. What is the Django ORM?
16. How does Django templating work?
17. What is Jinja templating?
18. Explain user authentication in Django.
19. Middleware in Django is useful for which purposes?
20. What do you mean by the `csrf_token`?
21. Does Django support multiple-column primary keys?
22. Does Django support NoSQL?
23. Is Django better than Flask?
24. How can you limit admin access so that objects can only be edited by the users who created them?
25. Name some popular websites or apps using Django.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/azlcqnmpk8qazycbqw5r.png)

**This takes us to the conclusion of this Django interview questions post**. I hope you understood everything discussed in this article. Make sure to practice as much as possible and review these questions before your interview. 😊👌

📢 Do you have a question about the content of this article? Please let us know in the comments.
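Several of these questions benefit from a tiny concrete illustration. For question 20, for example, here is a minimal, hypothetical sketch of the idea behind a CSRF token. This is not Django's actual implementation, just the concept it implements for you:

```python
import secrets

# Conceptual sketch (not Django internals): the server stores a random token
# in the user's session, embeds it in every rendered form, and rejects any
# POST whose submitted token doesn't match the stored one.
session = {"csrf_token": secrets.token_hex(16)}

def render_form():
    # Django's {% csrf_token %} tag renders a hidden input like this one
    return (
        '<input type="hidden" name="csrfmiddlewaretoken" '
        f'value="{session["csrf_token"]}">'
    )

def is_valid_post(submitted_token):
    # constant-time comparison, so attackers can't guess the token byte by byte
    return secrets.compare_digest(submitted_token, session["csrf_token"])
```

A forged cross-site request cannot read the session token, so `is_valid_post` rejects it.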
[Read here the 20 most important **Python** job interview questions](https://dev.to/baselmmohaisen/the-top-20-python-job-interview-questions-you-should-know-3nb1)
baselmmohaisen
981,666
[pt-BR] Want to test your UI? Understand what not to do - #1
Tests are a fundamental part of the software development process. Despite this,...
16,968
2022-02-07T14:37:24
https://dev.to/dxwebster/pt-br-testes-sao-fundamentais-mas-como-saber-o-que-testar-parte-1-13f
Tests are a fundamental part of the software development process. Despite this, many people find testing unpleasant, especially UI tests. There are several reasons for that, but one of the most common is that writing and maintaining tests takes a lot of time. "Every time I change the code, the tests break!" And in the end, in a way they are right, because tests breaking all the time are a big obstacle to productivity.

However, what if I told you there are ways to avoid these frustrations and start writing more reliable UI tests?

When we study testing, whether for the front end or the back end, we often focus on learning the fundamentals or which tools to use, that is, topics that have more to do with "how" to implement tests than with "what" to actually test. According to [Kent C. Dodds](https://kentcdodds.com/about), defining "what" to test really well is the key to adding more value and reliability to your tests.

This article uses some of Dodds' writings as reference and is divided into 3 parts: the first describes what not to do when writing front-end tests; in the second part, we will see the best practice for writing reliable tests; and finally, in the last part, we will look at how to think about tests following React's code structure. You can find the references at the end of each part of these posts.

### What will be covered:

- Why write tests?
- Implementation details
- False negatives
- False positives

# Why write tests?

In general terms, we write tests so that we can always be confident that the application will work as expected when the user uses it. And by thinking about the user and how they use the application, we can arrive at a methodology that will guide us in writing tests and ensure they bring more and more confidence:

> "Think less about the lines of code and more about the *use cases* that the code serves."
- Kent C. Dodds

When we think only about lines of code instead of use cases, it becomes very easy to get lost and start worrying about testing "implementation details", a practice that, as we will soon see, can have bad consequences for development. So, before we even dive into use-case tests, which are basically "what" we should test, let's look at what we should NOT test in the application.

# Implementation details

Simply put, we can define implementation details as:

> "Things which users of your code will not typically use, see, or even know about." - Kent C. Dodds

And who are the users of our code? Well, when we write code, we need to keep in mind that we should serve only 2 types of users:

**The end user**: the one who interacts with the component (for example, types values into a form's inputs, clicks the submit or reset button, sees the loading and success messages)

**The developer user**: the one who renders the component (for example, wraps the component in a UserProvider to update its state and dispatch to the context)

However, when we test implementation details, a third user ends up emerging: the test user. This type of user should not be served, because by creating tests for them, we make those tests stop resembling the way the application is used by the users who actually matter.

There are two distinct reasons why it is important to avoid testing implementation details:

1. The tests can break when you refactor the application code, producing a *false negative*.
2. The tests may not fail when the code breaks, producing a *false positive*.
Let's go through an example in React, using a simple accordion component:

```js
// accordion.js
class Accordion extends React.Component {
  state = {openIndex: 0}
  setOpenIndex = openIndex => this.setState({openIndex})
  render() {
    const {openIndex} = this.state
    return (
      <div>
        {this.props.items.map((item, index) => (
          <>
            <button onClick={() => this.setOpenIndex(index)}>
              {item.title}
            </button>
            {index === openIndex ? (
              <AccordionContents>
                {item.contents}
              </AccordionContents>
            ) : null}
          </>
        ))}
      </div>
    )
  }
}
export default Accordion
```

And now a test using Enzyme, testing the *implementation details*:

```js
// __tests__/accordion.enzyme.js
test('setOpenIndex sets the open index state properly', () => {
  const wrapper = mount(<Accordion items={[]} />)
  expect(wrapper.state('openIndex')).toBe(0)
  wrapper.instance().setOpenIndex(1)
  expect(wrapper.state('openIndex')).toBe(1)
})
```

Ok, now let's understand how easily things break with this kind of test.

## False negative

Let's say we need to refactor this accordion to allow several items to be open at once. Now, let's make that change to the *implementation* in a way that does not change its *behavior*:

```js
// accordion.js
class Accordion extends React.Component {
  state = {openIndexes: [0]}
  setOpenIndex = openIndex => this.setState({openIndexes: [openIndex]})
  render() {
    const {openIndexes} = this.state
    return (
      <div>
        {this.props.items.map((item, index) => (
          <>
            <button onClick={() => this.setOpenIndex(index)}>
              {item.title}
            </button>
            {openIndexes.includes(index) ? (
              <AccordionContents>
                {item.contents}
              </AccordionContents>
            ) : null}
          </>
        ))}
      </div>
    )
  }
}
export default Accordion
```

Nice! Now we open the application and see that everything works correctly, just as we wanted, and extending this behavior later will also be very easy. Then we run the tests and they fail. We see that the test that broke was: `setOpenIndex sets the open index state properly`.
And the error message is the following:

```bash
expect(received).toBe(expected)

Expected value to be (using ===):
  0
Received:
  undefined
```

Is this test failure alerting us to a real problem? No! The component still works. This is what we call a **false negative**. It means the test failed, but it failed because the test is broken, not because the code is broken.

Well, the solution in this case is to change the test:

```js
test('setOpenIndex sets the open index state properly', () => {
  const wrapper = mount(<Accordion items={[]} />)
  expect(wrapper.state('openIndexes')).toEqual([0])
  wrapper.instance().setOpenIndex(1)
  expect(wrapper.state('openIndexes')).toEqual([1])
})
```

So we arrive at our first conclusion: testing implementation details can produce false negatives whenever the code is refactored, which makes the tests brittle and frustrating, as they seem to fail if you so much as look at the code.

## False positive

Ok, now let's say your coworker is working on the accordion and sees this code:

```js
<button onClick={() => this.setOpenIndex(index)}>{item.title}</button>
```

Immediately, thinking prematurely about optimization, they think: "Hey! Inline arrow functions are bad for performance! I'll fix this real quick and run the tests."

```js
<button onClick={this.setOpenIndex}>{item.title}</button>
```

Then they run the tests and, ta-da! They pass! So, without even checking the component in the browser, they open a PR, it gets approved, and the accordion breaks in production!

But what went wrong? The test checked that the state changed when `setOpenIndex` was called and that the content was displayed correctly. Yes, that is correct; the problem is that there was no test checking that the button's onClick event was calling `setOpenIndex` correctly.

This is called a false positive, which means the test did not fail but it should have!
So, what can we do to make sure this doesn't happen again? We could think of a few actions, such as: adding another test to check that clicking the button updates the state correctly, or simply using a free tool that helps us avoid testing implementation details.

I like that second option, and I bet you do too! We will look at those details in part 3, where we talk about testing in React! But for now, let's follow this line of reasoning and learn about the tests we should actually care about writing: "use-case tests".

### 👉 Keep reading in [part 2](https://dev.to/dxwebster/pt-br-testes-sao-fundamentais-mas-como-saber-o-que-testar-parte-2-1mhc)

---

**References:**

"How to know what to test" - Kent C. Dodds
[https://kentcdodds.com/blog/how-to-know-what-to-test](https://kentcdodds.com/blog/how-to-know-what-to-test)

"Avoid the Test User" - Kent C. Dodds
[https://kentcdodds.com/blog/avoid-the-test-user](https://kentcdodds.com/blog/avoid-the-test-user)

"Testing Implementation Details" - Kent C. Dodds
[https://kentcdodds.com/blog/testing-implementation-details](https://kentcdodds.com/blog/testing-implementation-details)
dxwebster
981,935
Data Quality for Notion Databases 🚀 - by Ricardo Elizondo
Notion ➕ Great Expectations = 🚀 If you've ever heard of or used Notion (specially their databases)...
0
2022-02-08T03:01:00
https://dataroots.io/research/contributions/data-quality-for-notion-databases
---
title: Data Quality for Notion Databases 🚀 - by Ricardo Elizondo
published: true
date: 2022-02-06 08:49:00 UTC
tags:
canonical_url: https://dataroots.io/research/contributions/data-quality-for-notion-databases
---

> **Notion** ➕ **Great Expectations** = 🚀

If you've ever heard of or used **Notion** (especially their databases) and **Great Expectations**, you can already imagine what this is about 😉. If not, find a quick ELI5 below:

See our [Github](https://github.com/datarootsio/notion-dbs-data-quality) for more technical details and detailed instructions.

### 👶 ELI5: Great Expectations

> "Great Expectations is a shared, open standard for data quality. It helps data teams eliminate pipeline debt, through data testing, documentation, and profiling" - Great Expectations' website, 2021

In short, with Great Expectations you always know what to expect from your data. They do this via what they call 'Expectations' (didn't see that coming, huh? 🙄), which, as the name implies, are **qualities** you **expect** from your **data**. Expectations can be as simple as ["I want to be sure that this column is never null"](https://greatexpectations.io/expectations/expect_column_values_to_not_be_null) or ["I want to make sure the row count is always X".](https://greatexpectations.io/expectations/expect_table_row_count_to_equal)

If you want to dig deeper or find a list of possible expectations, you can do so at [Great Expectation's official site](https://greatexpectations.io/expectations)

Find a great tutorial to get started, made by our colleagues Romain & Ilion, [here](https://github.com/datarootsio/tutorial-great-expectations). If you're done with that and want to dig deeper, our colleague Paolo Léonard wrote a tutorial on writing your own custom expectations [here](https://dataroots.io/research/contributions/great_expectations:-writing-custom-expectations).
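To make the idea concrete without pulling in the library itself, here is a tiny stdlib-only sketch of what an expectation boils down to. This is an illustration of the concept, not the Great Expectations API:

```python
# A hypothetical, minimal "expectation": a named, reusable data-quality
# check that returns a pass/fail result plus some context.
def expect_column_values_to_not_be_null(rows, column):
    unexpected = [row for row in rows if row.get(column) is None]
    return {"success": len(unexpected) == 0, "unexpected_count": len(unexpected)}

# Example rows, made up for illustration
employees = [
    {"name": "Ana", "favorite_dessert": "tiramisu"},
    {"name": "Bo", "favorite_dessert": None},
]

result = expect_column_values_to_not_be_null(employees, "favorite_dessert")
# result["success"] is False: one employee's dessert is unknown
```

The real library adds a lot on top of this (suites, validation runs, Data Docs), but each expectation is, at heart, this kind of named check with a structured result.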
### 👶🏼 ELI5: Notion

I love Notion's own explanation of itself, so I'll point you to it, right [here 🎉](https://www.notion.so/about)

In short, it is an all-in-one workspace collaboration tool that has it all: tasks, lists, kanban boards, wikis, and the star of today: **databases**.

Here at Dataroots, we ❤️ Notion and we use it extensively. Being a data-first company, as you can imagine, we have databases for anything and everything you can think of, but we'll talk more about that in a bit.

### What took you so long?

As said before, we love Notion and we love Great Expectations, so this marriage was just a matter of time. And not only time, to be exact. Before, Notion was only a website, and it was not until **May 2021** that they released the first public beta of [their own API](https://developers.notion.com/) 🎊. This was the last piece of the puzzle that allowed us to combine it with Great Expectations. Isn't that the beauty of open source?

## Our tool

So you get the goal here: use Notion's API to get our databases and run those through Great Expectations to get our results. For the remainder of this blog, we'll focus on our Employee Directory database.

I know what you're thinking: "you even have a database for your own employees?! 🤯". **You bet we do**. Our Employee Directory contains mundane information about our employees, like phone numbers, emails, and positions, but also crucial pieces of data like our **favorite dessert**. It is of the utmost importance to be sure that we know everybody's favorite dessert, and of course this was the first expectation we built.

We designed our tool to be extremely user-friendly and fast, so you can quickly get something going. Adding a new database and creating the expectation suite takes around 10-15 min if you already know what expectations to include. Allow me to guide you through the 4 easy steps:

### 1: Create a Notion integration

As always, whenever we're dealing with an API, we need a way to authenticate ourselves.
Luckily for us, Notion makes it really simple to create what they call an Integration and give it access to whatever page/db you want.

### 2: Choose your database (just grab the URL)

![](https://media0.giphy.com/media/6LpllEOHKfH3UAqeq9/giphy.gif?cid=790b761160dec393f03ac7f4652a10cd964c103ae2c622f9&rid=giphy.gif&ct=g)

### 3: Create your Expectation Suite

Using our Jupyter notebook, it is extremely easy to create your expectation suite while doing some data exploration to make sure you know what to expect. Here you can see our dessert-related expectations 🍰 (along with others).

![](https://dataroots.ghost.io/content/images/2022/02/jupyter.png)

### 4: Run 🚀

Now you have your database and your expectation suite. On top of these 2 things, you'll just need a description to identify your run and you're good to go.

![](https://media.giphy.com/media/Ydz1V15qsSP8y4ERum/giphy.gif)

### 5: Profit

That's it! You have now successfully run an expectation suite (a group of expectations) against your data. You can either view your results as boring .json files OR you can use Great Expectations' sweet, automatic **Data Docs**.

## Great Expectations' Data Docs 📊

One of the great things about Great Expectations is their Data Docs. Data Docs are HTML pages that Great Expectations compiles from your expectation suites and validation runs. To learn more, here is the official [website](https://docs.greatexpectations.io/docs/reference/data_docs/).

Here you can see a **log** of all your previous runs, with information on a per-expectation level.

![](https://dataroots.ghost.io/content/images/2022/02/data_docs_landing-1.png)

It is also a great place to see a **list of your expectation suites** and to dive deep into each expectation.

![](https://dataroots.ghost.io/content/images/2022/02/expectations.png)

### So, what happened to our 'Favorite Dessert' column?
![](https://dataroots.ghost.io/content/images/2022/02/favorite_dessert.png)

You can see we currently have 18 employees whose favorite dessert we don't know! 🤯 You can be sure that by the time you're reading this, that is no longer the case, as it is utterly unacceptable.

## Wrapping Up 🦾

To conclude, we think this is a great tool for implementing data quality in your Notion databases. Although Great Expectations may seem a bit overkill for this use case (as it is mostly used with much bigger and more complex databases), we thought it was a great way to combine Notion, which we use extensively internally, ➕ Great Expectations, which we use with a number of clients.

If you've read all the way here, first of all we'd like to say "thanks 🙏🏼", and we hope you're excited and already thinking about how to use this solution yourself. You can find the open-source repo we used to build this [here](https://github.com/datarootsio/notion-dbs-data-quality). Inside you will find more technical details and all the specific instructions on how to get it running yourself.

This tool is open-source ❤️, as are both Great Expectations and Notion's API, so we would love for you, the community, to contribute, as this is how great things get built.

### Useful content

[Git repo](https://github.com/datarootsio/notion-dbs-data-quality)

[Tutorial on Great Expectations](https://github.com/datarootsio/tutorial-great-expectations)

[Post on Great Expectations](https://dataroots.io/research/contributions/great_expectations:-writing-custom-expectations)
bart6114
981,941
Quick BERT Pre-Trained Model for Sentiment Analysis with Scikit Wrapper
Using a scikit wrapper to easily tune a pre-trained BERT model for NLP Brand Sentiment...
0
2022-02-09T02:54:47
https://dev.to/ddey117/quick-bert-pre-trained-model-for-sentiment-analysis-with-scikit-wrapper-3jcp
## Using a scikit wrapper to easily tune a pre-trained BERT model for NLP Brand Sentiment Analysis of Social Media Data

Author: Dylan Dey

This project is available on GitHub here: link

The author can be reached at the following email: ddey2985@gmail.com

#### Classification Metric Understanding

![Confusion_Matrix_Breakdown](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1sq4f1wehvjntw5lwbzt.jpg)

#### Confusion Matrix Description

A true positive in the current context would be when the model correctly identifies a tweet with positive sentiment as positive. A true negative would be when the model correctly identifies a tweet with negative sentiment as containing negative sentiment. Both are important, and both are captured by the overall accuracy of the model.

True negatives are really at the heart of the model, as this is the situation in which Apple would have a call to action. An appropriately identified tweet with negative sentiment can be properly examined using some simple NLP techniques to get a better understanding of what is upsetting customers involved with our brand and competitors' brands. Bigrams, quadgrams, and other word frequency analysis can help Apple address brand concerns.

True positives are also important. Word frequency analysis can be used to summarize what consumers think Apple is doing right and also what consumers like about Apple's competitors.

There will always be some error involved in creating a predictive model. The model will incorrectly identify positive tweets as negative and vice versa. That means the error in any classification model in this context can be described by ratios of true positives or negatives vs false positives or negatives.

A false positive would occur when the model incorrectly identifies a tweet containing negative sentiment as a tweet that contains positive sentiment.
Given the context of the business model, this would mean more truly negative sentiment would be left out when analyzing key word pairs for negative tweets. This could be interpreted as a loss in analytical ability for what we care about most given the business problem: making informed decisions from information that comes directly from consumers in the form of social media text. Minimizing false positives is very important.

False negatives are also important to consider. A false negative would occur when the model incorrectly identifies a tweet that contains positive sentiment as one that contains negative sentiment. Given the context of the business problem, this would mean extra noise added to the data when trying to isolate negative sentiment about a brand/product.

In summary, the overall accuracy of the model and a reduction of both false negatives and false positives are the most important metrics to consider when developing the Twitter sentiment analysis model.

All functions used to preprocess the Twitter data, such as removing noise from text and tokenizing, as well as the functions for creating confusion plots to quickly assess performance, are shown below.
```
#assumed imports for the helpers below (not listed in the original post)
import re
import string
import html

import unidecode
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from nltk.tokenize import TweetTokenizer
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from sklearn.model_selection import cross_val_score
from sklearn.metrics import confusion_matrix
from bert_sklearn import load_model

#list of all functions for modeling and processing

#force lowercase of text data
def lower_case_text(text_series):
    text_series = text_series.apply(lambda x: str.lower(x))
    return text_series

#remove URL links from text
def strip_links(text):
    link_regex = re.compile('((https?):((\/\/)|(\\\\))+([\w\d:#@%\/;$()~_?\+-=\\\.&](#!)?)*)|{link}/gm')
    links = re.findall(link_regex, text)
    for link in links:
        text = text.replace(link[0], ', ')
    return text

#remove '@' and '#' symbols from text
def strip_all_entities(text):
    entity_prefixes = ['@', '#']
    for separator in string.punctuation:
        if separator not in entity_prefixes:
            text = text.replace(separator, ' ')
    words = []
    for word in text.split():
        word = word.strip()
        if word:
            if word[0] not in entity_prefixes:
                words.append(word)
    return ' '.join(words)

#tokenize text and remove stopwords
def process_text(text):
    tokenizer = TweetTokenizer()
    stopwords_list = stopwords.words('english') + list(string.punctuation)
    stopwords_list += ["''", '""', '...', '``']
    my_stop = ["#sxsw", "sxsw", "sxswi", "#sxswi's", "#sxswi",
               "southbysouthwest", "rt", "tweet", "tweet's", "twitter",
               "austin", "#austin", "link", "1/2", "southby", "south",
               "texas", "@mention", "ï", "ï", "½ï", "¿", "½", "link",
               "via", "mention", "quot", "amp", "austin"]
    stopwords_list += my_stop
    tokens = tokenizer.tokenize(text)
    stopwords_removed = [token for token in tokens if token not in stopwords_list]
    return stopwords_removed

#master preprocessing function
def Master_Pre_Vectorization(text_series):
    text_series = lower_case_text(text_series)
    text_series = text_series.apply(strip_links).apply(strip_all_entities)
    text_series = text_series.apply(unidecode.unidecode).apply(html.unescape)
    text_series = text_series.apply(process_text)
    lemmatizer = WordNetLemmatizer()
    text_series = text_series.apply(lambda x: [lemmatizer.lemmatize(word) for word in x])
    return text_series.str.join(' ').copy()

#function for interpreting model results:
#takes in a pipeline and training data,
#prints cross-validation scores
#and the average of those scores
def cross_validation(pipeline, X_train, y_train):
    scores = cross_val_score(pipeline, X_train, y_train)
    agg_score = np.mean(scores)
    print(f'{pipeline.steps[1][1]}: Average cross validation score is {agg_score}.')

#function to fit a pipeline and return subplots
#showing normalized and regular confusion matrices
#to easily interpret results
def plot_confusion_matrices(pipe):
    pipe.fit(X_train, y_train)
    y_true = y_test
    y_pred = pipe.predict(X_test)
    matrix_norm = confusion_matrix(y_true, y_pred, normalize='true')
    matrix = confusion_matrix(y_true, y_pred)
    fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(10, 5))
    sns.heatmap(matrix_norm, annot=True, fmt='.2%', cmap='YlGn',
                xticklabels=['Pos_predicted', 'Neg_predicted'],
                yticklabels=['Positive Tweet', 'Negative_Tweet'],
                ax=ax1)
    sns.heatmap(matrix, annot=True, cmap='YlGn', fmt='d',
                xticklabels=['Pos_predicted', 'Neg_predicted'],
                yticklabels=['Positive Tweet', 'Negative_Tweet'],
                ax=ax2)
    plt.show();

#loads a fitted model from memory and plots
#a confusion matrix and a normalized confusion matrix
#calculated using the given test data
def confusion_matrix_bert_plots(model_path, X_test, y_test):
    model = load_model(model_path)
    y_pred = model.predict(X_test)
    matrix_norm = confusion_matrix(y_test, y_pred, normalize='true')
    matrix = confusion_matrix(y_test, y_pred)
    fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(10, 5))
    sns.heatmap(matrix_norm, annot=True, fmt='.2%', cmap='YlGn',
                xticklabels=['Pos_predicted', 'Neg_predicted'],
                yticklabels=['Positive Tweet', 'Negative_Tweet'],
                ax=ax1)
    sns.heatmap(matrix, annot=True, cmap='YlGn', fmt='d',
                xticklabels=['Pos_predicted', 'Neg_predicted'],
                yticklabels=['Positive Tweet', 'Negative_Tweet'],
                ax=ax2)
    plt.show();
```

### Class Imbalance of Dataset

The Twitter data used for this project was collected from multiple sources at [CrowdFlower](https://appen.com/datasets-resource-center/).
The project will only focus on binary sentiment (positive or negative). The total number of tweets and the associated class balances are shown below. This distribution is further broken down by brand in the chart below the graphs.

![Class_Imbalance_Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pvpg3napakuqebqq4j3p.jpg)

#### Apple Positive vs Negative Tweet Counts

positive    0.654194
negative    0.345806
+++++++++++++++++++++++
positive    2028
negative    1072
+++++++++++++++++++++++++++++++++++++++++++++++++++

#### Google Positive vs Negative Tweet Counts

positive    740
negative    136
+++++++++++++++++++++++
positive    0.844749
negative    0.155251

For comparison, I trained four different supervised learning classifiers using term frequency-inverse document frequency (TF-IDF) vectorized preprocessed tweet data. While the vectorization will not be needed for the BERT classifier, it is needed for these supervised classifiers.

[TF-IDF wiki](https://en.wikipedia.org/wiki/Tf%E2%80%93idf)

[TfidfVectorize sklearn documentation](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html)

[MultinomialNB documentation](https://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.MultinomialNB.html#sklearn.naive_bayes.MultinomialNB)

[Random Forest documentation](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html)

[Balanced Random Forest Classifier Documentation](https://imbalanced-learn.org/stable/references/generated/imblearn.ensemble.BalancedRandomForestClassifier.html)

[XGBoosted Trees Documentation](https://xgboost.readthedocs.io/en/stable/python/python_intro.html)

### Multinomial Naive Bayes Base Model Performance

![NB_Matrix](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w2ntksg37392a4ch22u4.png)

### Random Forest Classifier Base Model Performance

![RF_Matrix](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/98t4fd8tg87z7amk3oh8.png)

### Balanced Random Forest Classifier Base Model Performance

![Balanced_RF_Matrix](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tec244erxdn4s5rf059h.png)

### XGBoosted Random Forest Classifier Base Model Performance

![XGBoosted_Matrix](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bamwgzjotqhdess6ej0h.png)

Now that supervised learning models have been built, trained, and tuned without any pre-training, our focus will turn to transfer learning using Bidirectional Encoder Representations from Transformers (BERT), developed by Google.

BERT is a transformer-based machine learning technique for natural language processing pre-training. BERT-Base models are pre-trained on unlabeled data extracted from the BooksCorpus with 800M words and English Wikipedia with 2,500M words.

[Click Here for more from Wikipedia](https://en.wikipedia.org/wiki/BERT_(language_model))

[GitHub for BERT release code](https://github.com/google-research/bert)

Scikit-learn wrapper provided by Charles Nainan. [GitHub of Scikit Learn BERT wrapper](https://github.com/charles9n/bert-sklearn). This scikit-learn wrapper is used to finetune Google's BERT model and is built on the huggingface PyTorch port.

The BERT classifier is now ready to be fit and trained on data in the same way you would fit any sklearn model. See the code block below for a quick example.

```
bert_1 = BertClassifier(do_lower_case=True,
                        train_batch_size=32,
                        max_seq_length=50
                        )

bert_1.fit(X_train, y_train)
y_pred = bert_1.predict(X_test)
```

Four models were trained and stored to memory. See the code block below for the chosen parameters of every model.

```
"""
The first model was fitted as seen commented out below, after some
trial and error to determine an appropriate max_seq_length given my
computer's capabilities.
"""
# bert_1 = BertClassifier(do_lower_case=True,
#                         train_batch_size=32,
#                         max_seq_length=50
#                         )

"""
My second model contains 2 hidden layers with 500 neurons each.
It only passes over the corpus one time when learning.
It trains fast and gives impressive results.
"""
# bert_2 = BertClassifier(do_lower_case=True,
#                         train_batch_size=32,
#                         max_seq_length=50,
#                         num_mlp_hiddens=500,
#                         num_mlp_layers=2,
#                         epochs=1
#                         )

"""
My third bert model still has 600 neurons, but only one hidden layer.
However, the model passes over the corpus 4 times in total while learning.
"""
# bert_3 = BertClassifier(do_lower_case=True,
#                         train_batch_size=32,
#                         max_seq_length=50,
#                         num_mlp_hiddens=600,
#                         num_mlp_layers=1,
#                         epochs=4
#                         )

"""
My fourth bert model has 750 neurons and two hidden layers.
The corpus also gets traversed four times in total while learning.
"""
# bert_4 = BertClassifier(do_lower_case=True,
#                         train_batch_size=32,
#                         max_seq_length=50,
#                         num_mlp_hiddens=750,
#                         num_mlp_layers=2,
#                         epochs=4
#                         )
```

#### Bert 1 Results

![Bert1_matrix](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bk45s73uuu3g3uq32gh6.jpg)

#### Bert 2 Results

![Bert2_matrix](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zkf97e7mq70s0t3cn6ae.jpg)

#### Bert 3 Results

![Bert3_matrix](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9h1fzlqeo4hzrc0u2odq.jpg)

#### Bert 4 Results

![Bert4_matrix](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bhmnbot0ba1m2t4yt7ga.jpg)

As you can see, all of my BERT models, trained on a relatively small amount of data, achieved much better results than any of the other classifiers. The BertClassifier with 1 hidden layer, 600 neurons, and 4 epochs performed the best, predicting over 93% of positive tweets correctly and 80% of negative tweets correctly on held-out test data.

```
bert_3 = BertClassifier(do_lower_case=True,
                        train_batch_size=32,
                        max_seq_length=50,
                        num_mlp_hiddens=600,
                        num_mlp_layers=1,
                        epochs=4
                        )
```

That concludes this example of using a pre-trained BERT model for Twitter sentiment analysis on a small set of data. If you have any questions or comments please feel free to contact me. Thank you.
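As a footnote, to tie the matrices back to the metric discussion at the top of the post, here is a quick sketch of how accuracy, precision, and recall fall out of the four confusion-matrix counts. The numbers are made up for illustration, not taken from the models above:

```python
# Hypothetical confusion-matrix counts for a binary sentiment model
tp, fp, tn, fn = 930, 70, 800, 200  # true pos, false pos, true neg, false neg

accuracy = (tp + tn) / (tp + fp + tn + fn)   # overall correctness
precision = tp / (tp + fp)                   # how trustworthy "positive" calls are
recall = tp / (tp + fn)                      # how many real positives were found

# With these counts: accuracy = 0.865, precision = 0.93, recall ~ 0.823
```

Lowering false positives raises precision, while lowering false negatives raises recall, which is exactly the trade-off the business discussion above is about.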
Author: Dylan Dey Email: Ddey2985@gmail.com github: https://github.com/ddey117/Product_Twitter_Sentiment_Classification
ddey117
981,973
Why would this code not work to populate html table from MongoDB in REACT?
Below is the code I am trying to implement. Basically just trying to map the table and placing the...
0
2022-02-07T22:33:29
https://dev.to/elindo586/how-to-populate-an-html-table-pulling-data-from-mongodb-to-react-1jn8
react, mongodb, mongoose
Below is the code I am trying to implement. Basically just trying to map the table and placing the results in the body... but the code breaks after trying to map() Would anybody know what could be missing? thanks.. ``` import "./App.css"; import React from "react"; const mongoose = require("mongoose"); main().catch((err) => console.log(err)); async function main() { await mongoose.connect( "mongodb+srv://name:password@cluster0.gcyyo.mongodb.net/test?retryWrites=true&w=majority" ); } const PartSchema = new mongoose.Schema({ reference: String, description: String, replacements: String, }); const Part = mongoose.model("Part", PartSchema); function App() { return ( <div className="App"> <h1>Hello World 5 </h1> <table> <thead> <tr> <th>Reference </th> <th> Description </th> <th>Replacements </th> </tr> </thead> <tbody> {Part.map((item) => ( <tr> <td>{item.reference}</td> <td>{item.description}</td> <td>{item.replacements}</td> </tr> ))} </tbody> </table> </div> ); } export default App; ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/annmlsrebqp6urdlrvnc.PNG)
elindo586
982,187
Day 4: Why Use Principal Component Analysis?
MattC's series on Data Science: PCA
0
2022-02-08T05:03:53
https://dev.to/mccurcio/why-use-principal-component-analysis-lpm
100daysofcode, datascience, pca
---
title: "Day 4: Why Use Principal Component Analysis?"
published: true
description: "MattC's series on Data Science: PCA"
tags: 100daysofcode, datascience, pca
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/22qxaunphio74gxr7z8c.png
---

I am working on an article that discusses Principal Component Analysis. Here is a sneak peek.

Principal component analysis is a valuable tool for revealing hidden structure in a dataset with many features/variables. By using PCA, one may be able to:

1. Identify which variables are important and shape the dynamics of a system
2. Reduce the dimensionality of the data
3. Find the directions of maximum variance hidden in a dataset and rank them
4. Filter noise from data
5. Compress the data
6. Preprocess data for further analysis or model building
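The dimensionality-reduction and variance-ranking ideas in the list above can be sketched without any libraries. Below is a minimal pure-Python PCA for 2-D data (the sample points are made up for illustration; a real analysis would typically use something like scikit-learn's `PCA`): it centres the data, builds the sample covariance matrix, and solves for its eigenvalues in closed form.

```python
import math

def pca_2d(points):
    """Tiny 2-D PCA sketch: centre the data, build the sample covariance
    matrix, and diagonalise it in closed form (symmetric 2x2 case)."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    centred = [(x - mx, y - my) for x, y in points]
    # Sample covariance matrix entries (divide by n - 1).
    sxx = sum(x * x for x, _ in centred) / (n - 1)
    syy = sum(y * y for _, y in centred) / (n - 1)
    sxy = sum(x * y for x, y in centred) / (n - 1)
    # Eigenvalues of [[sxx, sxy], [sxy, syy]] via the quadratic formula.
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    root = math.sqrt(max(tr * tr / 4 - det, 0.0))
    l1, l2 = tr / 2 + root, tr / 2 - root  # l1 >= l2: ranked variances

    return l1, l2

# Made-up points scattered along y ~ 2x: one direction dominates.
pts = [(0, 0.1), (1, 2.0), (2, 3.9), (3, 6.1), (4, 8.0)]
l1, l2 = pca_2d(pts)
explained = l1 / (l1 + l2)  # share of variance on the first component
```

For these points, which scatter around the line y ≈ 2x, the first component carries well over 95% of the total variance, which is exactly the kind of hidden structure PCA is designed to surface and rank.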
mccurcio
982,243
How to take Hourly RDS snapshots for Disaster Recovery?
Amazon RDS creates and saves automated backups of your DB instance during the backup window of your...
0
2022-02-08T06:13:07
https://dev.to/tomahawkpilot/how-to-take-hourly-rds-snapshots-for-disaster-recovery-e2k
aws, github, devops, rds
Amazon RDS creates and saves automated backups of your DB instance during the backup window of your DB instance.

* RDS creates a storage volume snapshot of your DB instance, backing up the entire DB instance and not just individual databases.
* RDS saves the automated backups of your DB instance according to the backup retention period that you specify.
* You can recover your database to any point in time during the backup retention period.

## Creating a DB snapshot

Amazon RDS creates a storage volume snapshot of your DB instance, backing up the entire DB instance and not just individual databases.

* Creating this DB snapshot on a Single-AZ DB instance results in a brief I/O suspension that can last from a few seconds to a few minutes, depending on the size and class of your DB instance.
* For MariaDB, MySQL, Oracle, and PostgreSQL, I/O activity is not suspended on your primary during backup for Multi-AZ deployments, because the backup is taken from the standby.
* For SQL Server, I/O activity is suspended briefly during backup for Multi-AZ deployments.

`Unlike automated backups, manual snapshots aren't subject to the backup retention period. Snapshots don't expire.`

### Taking backups using the AWS CLI

When you create a DB snapshot using the AWS CLI, you need to identify which DB instance you are going to back up, and then give your DB snapshot a name so you can restore from it later. You can do this by using the AWS CLI `create-db-snapshot` command with the following parameters:

* `--db-instance-identifier`
* `--db-snapshot-identifier`

## The Action to do this

The Action requires the following environment variables to be set as secrets in the repository you will be running this action from.

* AWS_REGION -> Your AWS Region
* AWS_ACCESS_KEY_ID -> Access key ID
* AWS_SECRET_ACCESS_KEY -> Access Secret
* DB_INSTANCE_IDENTIFIER -> DB Name

The above access key should have the permission to create snapshots.
The action has a cron-based trigger that runs every hour, and also a manual trigger you can use if you want to take a snapshot on demand (e.g. a snapshot before running a migration).

```
name: Take Database Snapshots
on:
  schedule:
    - cron: '0 */1 * * *'
  workflow_dispatch:
env:
  AWS_REGION: ${{ secrets.AWS_REGION }}
  AWS_DEFAULT_OUTPUT: json
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  DB_INSTANCE_IDENTIFIER: ${{ secrets.DB_INSTANCE_IDENTIFIER }}
jobs:
  snapshot:
    runs-on: ubuntu-latest
    name: Take Database Snapshot
    steps:
      - name: Set current date & time as ENV variable
        run: echo "NOW=$(date +'%Y-%m-%d-%H-%M-%S')" >> $GITHUB_ENV
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.AWS_REGION }}
      - name: Take the Snapshot
        run: |
          aws rds create-db-snapshot --db-instance-identifier $DB_INSTANCE_IDENTIFIER --db-snapshot-identifier $DB_INSTANCE_IDENTIFIER-$NOW
```

The above GitHub Action uses the AWS CLI to trigger a snapshot creation. The creation time depends on the actual database size.

PS. AWS charges you `$0.095 per GB-month (us-east-1)` for RDS snapshot storage as part of its backup service.
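The `$DB_INSTANCE_IDENTIFIER-$NOW` naming used in the workflow can be mirrored in a few lines of Python; the hyphen-separated timestamp (rather than colons) matters, since RDS snapshot identifiers only allow letters, digits, and hyphens. The `prod-db` instance name below is hypothetical, used only for illustration:

```python
import re
from datetime import datetime, timezone

def snapshot_identifier(db_instance_identifier: str) -> str:
    """Mirror the workflow's naming scheme: <instance>-<YYYY-MM-DD-HH-MM-SS>."""
    # Same format string as the workflow's `date +'%Y-%m-%d-%H-%M-%S'` step.
    now = datetime.now(timezone.utc).strftime("%Y-%m-%d-%H-%M-%S")
    return f"{db_instance_identifier}-{now}"

# "prod-db" is a made-up instance name for this sketch.
ident = snapshot_identifier("prod-db")
# e.g. "prod-db-2022-02-08-06-13-07": hourly runs never collide on the name,
# and the result stays within RDS's letters/digits/hyphens constraint.
assert re.fullmatch(r"prod-db(-\d{2,4}){6}", ident)
```

Because the timestamp resolves to the second, every hourly (or manual) run produces a unique snapshot identifier under the same instance prefix.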
tomahawkpilot
982,845
Typescript Basics: Inferences
Say that we have an app that is leveraging a cache, and we want to write a little helper function...
0
2022-02-08T16:31:24
https://dev.to/shaneboyar/typescript-basics-inferences-31ff
Say that we have an app that is leveraging a cache, and we want to write a little helper function that takes an API request and an arbitrary string for a key, and checks to see if data already exists for that key in the cache; if so, it skips making the request and returns the value from the cache, otherwise it makes the request, stores the result, and then returns it. We might write something like this:

```
const checkOrFetch = async (key, fetcher) => {
  try {
    // use key to check cache
    const cachedData = await checkCache(key);
    // if no cached data, run fetcher, cache result, and return
    if (cachedData) {
      return cachedData;
    } else {
      const fetcherResult = await fetcher();
      addToCache(key, fetcherResult);
      return fetcherResult;
    }
  } catch (error) {
    // do something with error
    return null;
  }
};
```

In the above example, our two arguments are implicitly typed as `any`, so wherever this helper is used, we leave ourselves open to possible errors when a dev assumes they're going to get back something from this function they may not actually end up with. However, with a very little bit of modification, we can not only make this function type-safe, but even relieve some of the typing work from the code that will implement this method; all we need to do is add some generic typing to the signature.

```
const checkOrFetch = async <FetcherReturnType>(
  key: string,
  fetcher: () => Promise<FetcherReturnType>
): Promise<FetcherReturnType | null> => {
  ...
}
```

In the example, the name `FetcherReturnType` is completely arbitrary; it is essentially a variable name. (In the wild, you'll often see single capital letters used, like `T` for type or `K` for key, but let's not fear verbosity, and instead embrace clarity.) This small change does some pretty cool stuff. Now, as long as the fetcher that is passed as the second argument is typed (even implicitly!) our `checkOrFetch` function will be able to let the developer that is using it make sure their code is kosher.
It's a bit backward from how we tend to think of functions and their variables, but what the syntax in the example means is something like: > In this function I'm writing, I know I'm gonna need to know about a type, it's the type of the value that it'll be returning, I just can't know what that is because it will be dependent on the argument passed to it, so let's save a place for it. We'll call it `FetcherReturnType`. At the end of the signature we make that note that whatever `FetcherReturnType` ends up being, we're gonna return a `Promise` that resolves to it, but also maybe null if things go pear-shaped. The magic happens with the second argument. I don't know exactly what it's going to be, but I do know I need it to be an async function that resolves to something. Since Typescript is pretty good at inferring what a return type is from a function, all the user of this `checkOrFetch` method needs to do is pass the function in, and Typescript takes care of the rest. Let's look at some examples. 
First, a handful of needlessly async functions that really have no reason to exist but for this tutorial:

```
const numberFetcher = async () => 1;
const stringFetcher = async () => 'string';
const boolFetcher = async () => true;
const objectFetcher = async () => ({ foo: 'bar' });

const getNumber = async () => checkOrFetch('numberKey', numberFetcher);
const getString = async () => checkOrFetch('stringKey', stringFetcher);
const getBool = async () => checkOrFetch('boolKey', boolFetcher);
const getObject = async () => checkOrFetch('objectKey', objectFetcher);
```

If we hover over the getters in our IDE, we can see that Typescript is making sure we know what we can expect back:

![When we hover over getObject, our IDE knows it will return a Promise that resolves to our type.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rjfk833irlb0lkeggwpr.png)

This is just scratching the surface of what Typescript can do, but once you see how it analyzes your code and infers types for you, you'll start to wonder how you ever lived without it.
shaneboyar
983,264
A secret: GitHub's special repository!
Many people call GitHub a social network for developers, and I think that even makes some...
0
2022-02-09T01:59:59
http://high5devs.com/2020/07/um-segredo-o-repositorio-especial-github/
github, markdown, profile
---
title: "A secret: GitHub's special repository!"
published: true
date: 2020-07-31 18:33:02 UTC
tags: Github,github,markdown,profile
canonical_url: http://high5devs.com/2020/07/um-segredo-o-repositorio-especial-github/
---

Many people call [GitHub](https://github.com) a social network for developers, and I think that even makes some sense. In my view, GitHub stopped being simply a platform for hosting code a while ago. Today people share projects, content repositories, host initiatives, follow people, propose ideas, [discuss](https://github.blog/2020-05-06-new-from-satellite-2020-github-codespaces-github-discussions-securing-code-in-private-repositories-and-more/#discussions)! And what is a social network without a profile, right?

# GitHub Profile

On GitHub it has always been possible to add basic information about yourself, for example:

- Name;
- Company you work for;
- Links to social networks;

But the information was always limited to what they allowed you to add. At the end of May, a tweet appeared showing a new GitHub feature: the ability to build a profile using markdown.

> Anyone who follows me on [twitter](https://twitter.com/Thur/status/1266011862572896256) already found out ?

I was really excited, even though we didn't have many details yet about how it would work. From the shared image you could see it was markdown; think of the possibilities! I love markdown, one of the best options for taking notes. Some people started getting access to test it, and at the beginning of July it was released to everyone. Alright Arthur, it's great that you like markdown and are excited, but....

## How do I use it?

It's quite simple for anyone who already uses GitHub: just create a repository with the same name as your username. As soon as you type the repository name, GitHub will show a message saying that you have discovered a secret, your special repository!
[![](http://high5devs.com/wp-content/uploads/2020/07/Captura-de-Tela-2020-07-29-a%CC%80s-18.55.20-1.png)](http://high5devs.com/wp-content/uploads/2020/07/Captura-de-Tela-2020-07-29-a%CC%80s-18.55.20-1.png)

This is the repository that will hold your profile; for GitHub to make your profile public, the repository must be public as well. GitHub will display this repository's README as your profile, so if you check the option to initialize the repository, it will create everything you need.

In my case I already had a repository with my name :/ A while ago I created an npm package with my name to show some information about me; it's pretty cool! If you want to see it, just run

```
npx afucher
```

Already having the repository wasn't a problem for me, since the package was meant to hold information about me, so I just created the README the way I wanted.

Now just let your imagination take you! Since it's a markdown file in a repository, you can do everything GitHub supports in markdown. Some things you can do:

- Add an image as a header;
- Add links to sites, projects, and/or social networks;
- Use markdown formatting (headings, tables, bold, etc...);
- Add custom badges, for example: [shields.io](https://shields.io/), [badgen.net](https://badgen.net/);

Here is how mine turned out ?:

[![](http://high5devs.com/wp-content/uploads/2020/07/Captura-de-Tela-2020-07-29-a%CC%80s-23.12.12.png)](http://high5devs.com/wp-content/uploads/2020/07/Captura-de-Tela-2020-07-29-a%CC%80s-23.12.12.png)

You can see my GitHub profile here: [https://github.com/afucher/](https://github.com/afucher/) and the README source here: [https://github.com/afucher/afucher/blob/trunk/README.md](https://github.com/afucher/afucher/blob/trunk/README.md)

* * *

So, what did you think? What else could I put in it? Create yours and share it here with me! I would love to see other ideas.
Cheers

_Photo by [Stefan Steinbauer](https://unsplash.com/@usinglight?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) on [Unsplash](https://unsplash.com/?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText)_

The post [A secret: GitHub's special repository!](http://high5devs.com/2020/07/um-segredo-o-repositorio-especial-github/) appeared first on [High5Devs](http://high5devs.com).
afucher
983,310
How to Develop an iOS App using Flutter
Flutter is Google's SDK for crafting beautiful, fast user experiences for mobile, web, and desktop...
0
2022-02-09T04:43:11
https://dev.to/darshilwebclues_13/how-to-develop-an-ios-app-using-flutter-37i9
Flutter is Google's SDK for crafting beautiful, fast user experiences for mobile, web, and desktop from a single codebase. Flutter works with existing code, is used by developers and organizations around the world, and is free and open source. It is used to develop applications for Android, iOS, and the web from a single codebase. Flutter uses the Dart programming language.

Flutter has been growing in popularity among developers for some time now. Not only does it support cross-platform development, but it also offers a great developer experience: automated hot reload and stateful hot restart, fast debugging, rich widget collections, and more.

Now that we are clear on what Flutter is and that it can build Android and iOS apps together from a single codebase, the question is: how do you develop a Flutter app? The most convenient and efficient method is to hire a dedicated Flutter developer who has the experience and expertise in Flutter development. If the developer has past experience in a similar industry, then it is an added boon for your business.

**WebClues Infotech** has a huge pool of talented and highly skilled Flutter developers who might have past experience in your industry. They can help you build a better app than originally imagined.
darshilwebclues_13
983,333
Running OrderCloud Headstart on Docker
What is Headstart? Headstart is an example solution, showing you an opinionated method for...
0
2022-04-12T08:41:10
https://robearlam.com/blog/running-ordercloud-headstart-on-docker
sitecore, ordercloud, headstart, docker
---
title: Running OrderCloud Headstart on Docker
published: true
date: 2022-02-09 00:00:00 UTC
tags: sitecore, ordercloud, headstart, docker
canonical_url: https://robearlam.com/blog/running-ordercloud-headstart-on-docker
---

## What is Headstart?

Headstart is an example solution, showing you an opinionated method for building out a B2B shopping experience targeting your web channel, powered by Sitecore OrderCloud. You don’t have to follow Headstart when building your experiences using Sitecore OrderCloud, but it is meant to show some example patterns that can be used to help you develop the functionality you need. Headstart is available as Open Source Software; you can find it hosted here: https://github.com/ordercloud-api/headstart

Headstart itself consists of three applications and a couple of data storage elements. The applications are the Buyer & Seller portals, alongside a Middleware application designed to implement functionality not natively provided through the OrderCloud APIs. The data storage elements are an Azure Cosmos database and an Azure Storage account; both are used to store data used to enrich the data persisted in OrderCloud.

## How can you run Headstart?

There are currently two ways to run the different elements required by Headstart. Historically, if you wanted to run an instance of Headstart you would have to provision the Azure resources for it. Well, now that isn’t required: you can run all the different Headstart elements in containers on your local machine. When running locally in Docker you will have containers for all of the application instances, as well as locally emulated Azure Storage using Azurite, and a locally emulated Cosmos database using the Azure Cosmos DB Emulator.

Being able to run all the elements locally in containers makes for a much simpler way for developers to try out Headstart without having to incur the cost associated with running the different Azure resources that were previously required.
## Running Headstart in Docker

If you clone the Headstart repository to your local developer machine, you will find all the source required to run Headstart in Docker. Included is a sample docker-compose.yml file defining how to run the Buyer/Seller/Middleware as well as emulated Azure Storage and Cosmos DB containers. The containers are all configured to run from Linux base images, so you’ll need to ensure that you’ve set your Docker instance to run in ‘Linux container mode’; you can read about how to do that here.

If you look in the root of the repository, you will see there is a `.env.template` file. The first thing we need to do is to duplicate this file and name it `.env`. This new file will be used to store all the environment variables that are required for Headstart to run.

Earlier I mentioned that there are three different applications included as part of Headstart. For you to be able to access these, you will need to update your hosts file to include the URLs that they will be accessible on:

- 127.0.0.1 buyer.headstart.localhost
- 127.0.0.1 seller.headstart.localhost
- 127.0.0.1 api.headstart.localhost

We’re now at the point where we need to seed our initial data into our OrderCloud instance. Luckily the Middleware application is configured to do all the seeding for us, so what we need to do now is to start the application instances. You can do that by running the following command:

```
docker-compose up -d
```

This will first build the images and then run all of them; note that the Middleware container may take longer to start than the others, as it depends on the Cosmos emulator being healthy and ready to serve data. Once the containers have all started and are reporting a `healthy` state, it’s time to seed our OrderCloud instance with the data required to run the Headstart applications. You can achieve this by following the Seeding OrderCloud Data section of the readme.
This will talk you through using Postman to send the initial `seed` request to the middleware application. This seed request will provision all the initial data items in OrderCloud, like the Marketplaces, Security Profiles, & API Clients, amongst other things. When the request completes, it will return a series of values that we’re going to use to update the `.env` file we created earlier:

- SELLER\_CLIENT\_ID
- BUYER\_CLIENT\_ID
- OrderCloudSettings\_MiddlewareClientID
- OrderCloudSettings\_MiddlewareClientSecret
- SELLER\_ID (This should be set to the MarketplaceID returned)

The final setup piece is to create accounts with EasyPost & SmartyStreets. Once you have accounts with each of those, you can populate the final few required values in the `.env` file:

- EasyPostSettings\_APIKey
- SmartyStreetSettings\_AuthID
- SmartyStreetSettings\_AuthToken

Now that we have finished populating our `.env` file, we need to restart our Docker containers to make use of the new env vars. We do this by running:

```
docker-compose down
docker-compose up -d
```

Note you can’t run a `docker-compose restart` here, as the containers are already running and the Middleware app would restart before Cosmos is healthy, which will cause issues.

Once all of the containers have started again, we can follow the steps in the Validating Setup section of the repo to walk through generating some sample data and testing each of the application instances. Then you’re finished: you now have a functioning B2B marketplace ready for you to test out Sitecore OrderCloud.

## How to use Headstart

I wanted to finish up by talking about the intended use cases for Headstart. As I mentioned at the start of this article, Headstart is an example solution, showing you an opinionated method for building out a B2B shopping experience targeting your web channel, powered by Sitecore OrderCloud.
It’s an example of one type of eCommerce site you can build using OrderCloud, but it’s not the only type; OrderCloud also supports B2C, Marketplace and many other eCommerce scenarios. So how can you leverage Headstart for your build? Well, that depends on your scenario: if you’re building a B2B store focusing on the web channel, then it may be a close enough fit that you can clone the Headstart repository and use that as a base, removing any integrations/features you don’t need and then adding any new ones that you do.

There will however be a lot of builds that aren’t an exact fit for the scenario that Headstart is built for, so how can you still leverage it in that case? Well, then the best way I see is to use this repository as a collection of example code. Once you start your implementation, you’re no doubt going to have to integrate a list of different 3rd parties for a variety of different reasons. Things like Payments, Tax, Shipping, etc. are generally managed by either 3rd party systems or internal systems that you will need to integrate with. If you look through the Headstart repository you will see sample integrations with systems for features such as:

- Tax
- Payments
- Shipping
- Address Validation
- Transactional Emails
- Customer Tracking
- ERP Integration

The chances are you will be using different 3rd parties to the examples you can see in Headstart, so you won’t be able to copy the code as is; however, you can use the examples as a guide. They will show you key things like which API calls need to be made and at which part of the process, where in the data model the returned information should be stored, which API calls can be made to retrieve the data needed to send to the 3rd party, etc.

So just to reiterate: you may be able to use Headstart as a starter kit in some specific scenarios where you can base your solution on it.
However, even if your scenario is very different it can still be a valuable collection of examples to guide you in the best way to build out your implementation.
robearlam
984,854
Create Windows Loader Using CSS
Do You Love windows Loading Animation? If Yes Then its for you! HTML CODE: &lt;div...
0
2022-02-10T12:50:39
https://dev.to/rohithaditya/create-windows-loader-using-css-3djp
css, windows, html, rohithaditya
Do You Love Windows Loading Animation? If Yes Then It's for You!

HTML CODE:

```html
<div class="container">
  <div class="loader-wrapper">
    <div class="loader">
      <div class="dot"></div>
    </div>
    <div class="loader">
      <div class="dot"></div>
    </div>
    <div class="loader">
      <div class="dot"></div>
    </div>
    <div class="loader">
      <div class="dot"></div>
    </div>
    <div class="loader">
      <div class="dot"></div>
    </div>
    <div class="loader">
      <div class="dot"></div>
    </div>
  </div>
  <div class="bot">
    <br>
    Loading
  </div>
</div>
```

CSS CODE:

```css
body {
  margin: 0;
  display: grid;
  place-items: center;
  height: 100vh;
  background-color: #00a2ed;
}
.container {
  width: 80px;
  height: 80px;
  display: grid;
  place-items: center;
}
.loader-wrapper {
  width: 80px;
  height: 80px;
  position: absolute;
  display: flex;
  justify-content: center;
  align-items: center;
}
.loader {
  position: absolute;
  height: 20px;
  animation: spin 3.5s linear infinite;
}
.loader .dot {
  width: 6px;
  height: 6px;
  background-color: #fff;
  border-radius: 50%;
  position: relative;
  top: 30px;
}
@keyframes spin {
  30% { transform: rotate(220deg); }
  40% { opacity: 1; transform: rotate(450deg); }
  75% { opacity: 1; transform: rotate(720deg); }
  76% { opacity: 0; }
  100% { opacity: 0; transform: rotate(0deg); }
}
.loader:first-child { animation-delay: 0.15s; }
.loader:nth-child(2) { animation-delay: 0.3s; }
.loader:nth-child(3) { animation-delay: 0.45s; }
.loader:nth-child(4) { animation-delay: 0.6s; }
.loader:nth-child(5) { animation-delay: 0.75s; }
.loader:last-child { animation-delay: 0.9s; }
.bot {
  display: flex;
  margin-top: 90%;
  color: white;
  font-size: 20px;
  font-family: Raleway;
}
```

> For Demo:
> ![Small Snap Of Working Code!](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k6nynlbjtcczz6oo1rf5.png)
> ![Small Snap Of Working Code](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pszv4luabhmyvzxg152k.png)
> ![Small Snap Of Working Code](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8kiphdxeug9lh9hoac45.png)
rohithaditya
983,548
React Hooks
Hooks are functions that let you "hook into" React state and lifecycle features from function...
0
2022-02-09T10:20:02
https://dev.to/sv0012/react-hooks-3jhi
Hooks are functions that let you "hook into" React state and lifecycle features from function components. Hooks don't work inside classes; they let you use React without classes. You can also create your own Hooks to reuse stateful behavior between different components.

Let's take a look at the various hooks:

- Basic Hooks
  1. useState
  2. useEffect
  3. useContext
- Additional Hooks
  1. useReducer
  2. useMemo
  3. useCallback
  4. useRef
  5. useLayoutEffect
  6. useImperativeHandle
  7. useDebugValue

## useState

useState is a Hook that allows you to have state variables in functional components. You pass the initial state to this function and it returns a variable with the current state value (not necessarily the initial state) and another function to update this value.

Consider an example of a counter incrementing on click.

```javascript
import React from "react";

const State = () => {
  let counter = 0;

  const increment = () => {
    counter = counter + 1;
    console.log(counter);
  };

  return (
    <div>
      {counter}
      <button onClick={increment}>Increment</button>
    </div>
  );
};

export default State;
```

Considering the above example, everything looks right, but when we click on Increment the value shown doesn't update. But why? Is there something wrong with the function or the variable? As you can see in the console, the variable is incrementing, but React isn't re-rendering the page to show the new value every time the value updates.

This is where useState comes in. Let's replace the counter variable with a state variable using the useState syntax. The counter is the variable that comes first, and setCounter is the function used to update it. The initial value of the state is passed to useState inside the parentheses.
```javascript
import React, { useState } from "react";

const State = () => {
  const [counter, setCounter] = useState(0);

  const increment = () => {
    setCounter(counter + 1);
  };

  return (
    <div>
      {counter}
      <button onClick={increment}>Increment</button>
    </div>
  );
};

export default State;
```

So setCounter updates the value and triggers a re-render.

## useEffect

The useEffect hook is just a function that will be called when the page re-renders; however, it is very powerful, as we can specify when we want to rerun the function based on the dependency array.

The useEffect has two parts: the function and the dependency array. When the values in the dependency array change, the function will run again.

To understand this better, let's consider an example where we want to fetch some data from a public API and display it on our screen.

```javascript
import React, { useEffect, useState } from "react";
import axios from "axios";

function Effect() {
  const [data, setData] = useState("");
  const [count, setCount] = useState(0);

  useEffect(() => {
    axios
      .get("https://jsonplaceholder.typicode.com/comments")
      .then((response) => {
        setData(response.data[0].email);
        console.log("API WAS CALLED");
      });
  }, []);

  return (
    <div>
      Hello World
      <h1>{data}</h1>
      <h1>{count}</h1>
      <button
        onClick={() => {
          setCount(count + 1);
        }}
      >
        Click
      </button>
    </div>
  );
}

export default Effect;
```

The data state contains the email of the first comment fetched from the API. As the dependency array is empty, useEffect will only run once. The other state, count, is there to better understand useEffect. Now, even when we click on the button, the page re-renders and updates the counter, but when you check the console the "API WAS CALLED" message will have appeared only once, because the dependency array is empty. If the dependency array included the count state, the effect would run again whenever that state changed.
The dependency array is essential to useEffect even if it is empty; otherwise, with only the function defined, the effect risks running in an infinite loop.

## useContext

The useContext hook is to be used together with the context API. The context API allows us to manage state very easily. Let's imagine a component that contains two different components: one component sets a value and the other uses that value. Let's take a look at the code.

```javascript
//The parent component
import React, { useState, createContext } from "react";
import Login from "./Login";
import User from "./User";

export const AppContext = createContext(null);

function Context() {
  const [username, setUsername] = useState("");

  return (
    <AppContext.Provider value={{ username, setUsername }}>
      <Login />
      <User />
    </AppContext.Provider>
  );
}

export default Context;
```

```javascript
//Login child component
import React, { useContext } from "react";
import { AppContext } from "./Context";

function Login() {
  const { setUsername } = useContext(AppContext);

  return (
    <div>
      <input
        onChange={(event) => {
          setUsername(event.target.value);
        }}
      />
    </div>
  );
}

export default Login;
```

```javascript
//User child component
import React, { useContext } from "react";
import { AppContext } from "./Context";

function User() {
  const { username } = useContext(AppContext);

  return (
    <div>
      <h1>User: {username}</h1>
    </div>
  );
}

export default User;
```

createContext allows us to create a new context. The AppContext.Provider is wrapped around the two components, which makes the context values accessible to them. The state and its setter are given as the values in the context provider, so the state is not only accessible but also updatable using the set function. As you can see in the child components, with useContext we obtain the values of the provided context by de-structuring it and then use them in the component.

## useReducer

The useReducer hook is like a replacement or an alternative to the useState hook.
Like useState, it lets developers declare state whose changes make the page re-render. Let's consider the following example:

```javascript
import React, { useState } from "react";

const Reducer = () => {
  const [count, setCount] = useState(0);
  const [showText, setShowText] = useState(true);

  return (
    <div>
      <h1>{count}</h1>
      <button
        onClick={() => {
          setCount(count + 1);
          setShowText(!showText);
        }}
      >
        Click Here
      </button>
      {showText && <p>This is a text</p>}
    </div>
  );
};

export default Reducer;
```

Here we use useState twice; one state is a boolean that flips on every click, so the text is shown on every alternate click. Notice that a single click alters the value of two states. useReducer is usually preferable to useState when you have complex state logic that involves multiple sub-values or when the next state depends on the previous one. With useReducer, the state can be managed collectively and changed as we please.

```javascript
import React, { useReducer } from "react";

const reducer = (state, action) => {
  switch (action.type) {
    case "INCREMENT":
      return { count: state.count + 1, showText: state.showText };
    case "toggleShowText":
      return { count: state.count, showText: !state.showText };
    default:
      return state;
  }
};

const Reducer = () => {
  const [state, dispatch] = useReducer(reducer, { count: 0, showText: true });

  return (
    <div>
      <h1>{state.count}</h1>
      <button
        onClick={() => {
          dispatch({ type: "INCREMENT" });
          dispatch({ type: "toggleShowText" });
        }}
      >
        Click Here
      </button>
      {state.showText && <p>This is a text</p>}
    </div>
  );
};

export default Reducer;
```

The state is an object holding all the values that are going to change, and dispatch is the function used to change them. We pass useReducer the reducer function and the initial values of all the states.
The reducer function manages what happens to the state. It takes two arguments, the state and the action, and as you can see we use action.type to determine which state change to perform.

## useMemo

React has a built-in hook called useMemo that allows us to memoize expensive functions so that we can avoid calling them on every render. You simply pass in a function and an array of inputs, and useMemo will only recompute the memoized value when one of the inputs has changed.

Keep in mind that the function passed to useMemo runs during rendering, so we shouldn't do anything there that we wouldn't do while rendering. If no array is provided, a new value is computed on every render.

Rely on useMemo as a performance optimization only: write the code so that it still works without useMemo, and add it afterwards to optimize performance.

```javascript
const memoizedValue = useMemo(() => computeExpensiveValue(a, b), [a, b]);
```

## useCallback

The useCallback hook is used when a component has a child that keeps re-rendering without need. Pass an inline callback and an array of dependencies, and useCallback will return a memoized version of the callback that only changes if one of the dependencies has changed. This is useful when passing callbacks to optimized child components that rely on reference equality to prevent unnecessary renders.

```javascript
const memoizedCallback = useCallback(() => {
  doSomething(a, b);
}, [a, b]);
```

Note: useCallback(fn, deps) is equivalent to useMemo(() => fn, deps).

## useRef

The useRef hook allows us to create a direct reference to a DOM element in a functional component. useRef returns a mutable ref object.
Imagine this scenario: you have a single component with an input, a button, and an h1 tag, and clicking the button should clear the input so we can start typing a new name. Let's not worry about the logic for updating the name and focus on useRef.

```javascript
import React from "react";

function Ref() {
  return (
    <div>
      <h1>Cris</h1>
      <input type="text" placeholder="Example..." />
      <button>Change Name</button>
    </div>
  );
}

export default Ref;
```

Whenever we click the button, or take some other action, we want to act on the input, and this is where useRef comes in. The useRef hook comes in handy when we need to manipulate or add some functionality to certain DOM elements.

```javascript
import React, { useRef } from "react";

function Ref() {
  const inputRef = useRef(null);

  const onClick = () => {
    inputRef.current.value = "";
  };

  return (
    <div>
      <h1>Cris</h1>
      <input type="text" placeholder="Ex..." ref={inputRef} />
      <button onClick={onClick}>Change Name</button>
    </div>
  );
}

export default Ref;
```

inputRef is passed as the ref of the input tag, and the onClick handler accesses the current value property of inputRef. Because it sets the value to an empty string, clicking the button after typing some data clears what was typed. useRef is very helpful for accessing or manipulating the DOM and should be used in such situations.

## useLayoutEffect

useLayoutEffect is very similar to useEffect, but it fires synchronously after all DOM mutations. Use it to read layout from the DOM and synchronously re-render. Updates scheduled inside useLayoutEffect are flushed synchronously, before the browser has a chance to paint the DOM.

useEffect, by contrast, is only called after the page is rendered and shown to the user, and we can hardly tell because it changes the state so fast.
useEffect is called after everything is rendered on the page and shown to the user, but useLayoutEffect is called before the result is actually painted for the user.

```javascript
import { useLayoutEffect, useEffect, useRef } from "react";

function LayoutEffect() {
  const inputRef = useRef(null);

  useLayoutEffect(() => {
    console.log(inputRef.current.value);
  }, []);

  useEffect(() => {
    inputRef.current.value = "HELLO";
  }, []);

  return (
    <div className="App">
      <input ref={inputRef} value="CRIS" style={{ width: 400, height: 60 }} />
    </div>
  );
}

export default LayoutEffect;
```

The input's value is CRIS, but because of the useEffect the screen shows HELLO. In the console, however, we can see that CRIS was logged: the useLayoutEffect ran before the update was painted.

useLayoutEffect can be used in cases where we want to adjust the layout of our application before it is shown to the user.

## useImperativeHandle

The useImperativeHandle hook works in a similar way to useRef, but it allows us to customize the instance that is exposed through the ref object, which provides a reference to a DOM element.

```javascript
useImperativeHandle(ref, createHandle, [deps])
```

```javascript
function FancyInput(props, ref) {
  const inputRef = useRef();

  useImperativeHandle(ref, () => ({
    focus: () => {
      inputRef.current.focus();
    }
  }));

  return <input ref={inputRef} ... />;
}

FancyInput = forwardRef(FancyInput);
```

A parent component that renders `<FancyInput ref={inputRef} />` would be able to call inputRef.current.focus().

## useDebugValue

useDebugValue is a simple built-in hook that surfaces more information about the internal logic of a custom hook in the React DevTools. It allows you to display additional, helpful information next to your custom hooks, with optional formatting.

```javascript
function useFriendStatus(friendID) {
  const [isOnline, setIsOnline] = useState(null);

  // ...

  // Show a label in DevTools next to this Hook
  // e.g. "FriendStatus: Online"
  useDebugValue(isOnline ? 'Online' : 'Offline');

  return isOnline;
}
```

useDebugValue also accepts a formatting function as an optional second parameter. The function is only called when the custom hook is inspected; it receives the debug value as a parameter and returns a formatted display value. For example, a custom hook that returned a Date value could avoid calling toDateString unnecessarily by passing the following formatter:

```javascript
useDebugValue(date, date => date.toDateString());
```
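That lazy-formatting behavior is easy to demonstrate outside React. In the framework-free sketch below, `debugValue` is a stand-in written for illustration — it is not React's API — and the formatter runs only when the value is actually inspected:

```javascript
// Framework-free sketch: the formatter is deferred behind a thunk,
// mimicking how useDebugValue's second argument is only invoked
// when DevTools actually inspects the hook.
let formatCalls = 0;
const formatDate = (d) => {
  formatCalls += 1;
  return d.toDateString();
};

function debugValue(value, format) {
  // Instead of formatting eagerly, return a function that formats on demand.
  return () => (format ? format(value) : String(value));
}

const inspect = debugValue(new Date(2022, 0, 1), formatDate);
// Nothing has been formatted yet:
console.log(formatCalls); // 0
// Only inspecting triggers the (potentially expensive) formatter:
const label = inspect();
console.log(label, formatCalls); // "Sat Jan 01 2022" 1
```

The point is the same as in the useDebugValue example above: the cost of formatting is paid only when someone looks.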
sv0012
983,696
How to Prevent Accidental Code Errors with ESLint, Prettier, and Husky
Originally written by Jakub Krymarys Any software engineer, regardless of their level of advancement...
0
2022-02-13T19:29:42
https://www.stxnext.com/blog/eslint-prettier-husky/
javascript, tutorial, react
_Originally written by Jakub Krymarys_ Any software engineer, regardless of their level of advancement and years of experience, may have a worse day and accidentally introduce changes that will result in bugs or simply won’t fit into good code development practices. Fortunately, there are several ways for you to protect your JavaScript project against such cases. I assume the first thing that comes to your mind is using various types of tests. Of course, they are the most effective method, but we’ll be dealing with something else in this article. **Instead of testing the functionality of the application to keep your software project safe from accidental developer mistakes, we’ll focus on the code itself. To do this, we’ll use:** - **ESLint** for analyzing JavaScript code to find potential bugs and bad practices, - **Prettier** to format the code in accordance with [the adopted standard](https://prettier.io/docs/en/option-philosophy.html), - **Husky** to allow us for integration with Git hooks that will in turn allow us to automate the two previous tools. All of these tools work well with [any Node.js project](https://www.stxnext.com/services/nodejs-development/). Since I’d like to give you specific examples of configs, I’ll be discussing these using a sample “pure” React.js project created with the Create React App (CRA). ## Code analysis with ESLint Let’s start with ESLint. This is a so-called **linter**, which is a tool that statically analyzes JavaScript code to find any potential problems. It can react to each of them in two different ways—by marking it as either a **warning** (and displaying an appropriate message in the console), or as an **error** (in this case, not only will we see the message, but the compilation of the code will also fail). If you’ve worked with React, you’ve probably seen more than one warning or error in the browser console. Some of them are the effect of ESLint. It’s integrated with the application that we create using the CRA. 
However, it has a very minimalist configuration there.

```
{
  (...)
  "eslintConfig": {
    "extends": [
      "react-app",
      "react-app/jest"
    ]
  },
  (...)
}
```

_Default ESLint config in the `package.json` file for a React.js application created with the CRA_

However, if for some reason you do not have ESLint in your project, [you can easily add it](https://eslint.org/docs/user-guide/getting-started) using the command `npm install eslint --save-dev`.

To make the linter a real “lifesaver” of our project, we need to extend this basic configuration a bit. By default, it only has a set of React-specific core rules and doesn’t check the JS syntax itself. I suggest starting with the configuration recommended by the ESLint team: `"eslint:recommended"`. The exact contents of this set [can be seen here](https://eslint.org/docs/rules/).

### How do I extend the ESLint configuration?

The linter configuration can be extended in two ways:

1. by modifying the appropriate `eslintConfig` field in `package.json`;
2. by creating `.eslintrc`, a special configuration file in the main project folder.

Both work equally well, but as a fan of breaking everything down into as many little chunks as possible, I recommend separating the config into a new file. Divide and conquer! Remember, however, that when you create the configuration in a separate file, you should remove the `eslintConfig` from `package.json`.

The `.eslintrc` configuration file consists of several sections:

```
{
  "extends": [(...)],   // which configurations we want to extend
  "rules": { (...) },   // adding new rules or changing the behavior of existing ones
  "overrides": [        // overriding rules for specific files/file groups
    {
      "files": [(...)],  // which we define here, for example all TS files
      "rules": { (...) } // rules are overridden here
    }
  ]
}
```

Our basic configuration should look something like this:

```
{
  "extends": [
    "eslint:recommended",
    "react-app",
    "react-app/jest"
  ]
}
```

**Note: it’s very important that `"react-app"` and `"react-app/jest"` remain in `"extends"` of our project (because they “check” React mechanisms)!**

This is a good starting point for neat and conscious use of your linter. However, you can change this configuration (using [the official documentation](https://eslint.org/docs/user-guide/configuring/configuration-files#extending-configuration-files)) or simply make your own rule modifications (which are also well documented in [the ESLint documentation](https://eslint.org/docs/user-guide/configuring/rules#configuring-rules)).

### When should I add my rules to ESLint?

Certainly not immediately. I’d suggest starting with the recommended set of rules and introducing changes only when a rule is missing or one of them contradicts the requirements of your project. Of course, don’t forget to discuss it thoroughly within the team so that all of its members are unanimous and understand why the rule has been changed.

To add your own rule or change how an existing rule works, we first need to find it in [the rule set](https://eslint.org/docs/2.0.0/rules/). Then we can add it to the config's rules section (if we want it to apply to the entire project) or to the overrides section (if it’s supposed to work only with a certain group of files) with one of the three expected values given below, which determine how the linter will respond to the code fragments falling under it:

- **0** or **“off”**—the rule will be disabled,
- **1** or **“warn”**—the linter will respond with a warning,
- **2** or **“error”**—the linter will respond by throwing an error and aborting compilation.

For example: `"no-console": "error"` will block application compilation (it will throw an error) as soon as the linter detects `console.log`.
```
{
  "eslintConfig": {
    "extends": [
      "react-app",
      "react-app/jest",
      "eslint:recommended"
    ],
    "rules": {
      "no-console": "off"
    }
  }
}
```

_A sample configuration extended by the `"no-console"` rule_

### How do I run the linter?

In our project, the linter can be run in several ways. As soon as you restart the application, the new configuration should be taken into account and the linter will check the code according to it every time you compile it.

Of course, we can also analyze the entire project ourselves. There are several ways to do this. The easiest one is to add the appropriate script to the `package.json` file, then run it with the `npm run lint` command.

```
{
  (...)
  "scripts": {
    (...)
    "lint": "eslint --fix './src/**/*.{js,jsx}'"
  }
  (...)
}
```

You may also use the `npx` tool:

```
npx eslint --fix './src/**/*.{js,jsx}'
```

As you may have noticed, I added the `--fix` flag to the ESLint command. Thanks to it, the linter will automatically repair some of the errors it encounters, which will further improve the entire process.

## Code formatting with Prettier

Another thing to ensure in your project is that your code is automatically formatted according to a centralized configuration. Usually, each developer on the team has slightly different preferences, which is totally fine, though it can lead to minor or major problems. By the way, Prettier was created as a way to stop all discussions about which formatting is better. Its formatting style is a result of long debates, as it’s meant to be a compromise between the requirements of all developers.

One of these problems will surely be confusion in pull/merge requests. Suddenly, it may turn out that we have “modified” many more lines of code than was originally intended to result from the changes related to the new functionality or fixes. It’s only our formatter that ordered the code “in its own way.” Of course, this doesn’t change the functionality of the application, but it does introduce unnecessary confusion.
It won’t be immediately clear to the person conducting the code review which parts of the code they need to check.

To introduce standardized code formatting at the project level, we will use **Prettier**. So let’s move on to its installation. Unlike ESLint, this tool is not built into the CRA, but as is the case with all NPM packages, the installation is very simple and limited to the following command:

```
npm install --save-dev prettier
```

Then we’ll configure our formatter. To do this, we will use two files: `.prettierrc.json` that contains the configuration and `.prettierignore` where we can list files and folders that Prettier should skip (this file works in the same way as `.gitignore`).

```
{
  "singleQuote": true,
  "trailingComma": "es5",
  "printWidth": 120
}
```

_Sample `.prettierrc.json` configuration_

```
node_modules
build
```

_Sample `.prettierignore` configuration for React_

If you’re adding **Prettier** to an existing project, note that the first time you use it, it will likely format most of the files in the project. Therefore, it’s a good idea to do it right away, in a dedicated commit. Just remember to notify the whole team about the need to download the latest version of the code. Otherwise, you will face clashes between the code with the new configuration and the out-of-date versions of the project.

Like with the linter, we can start Prettier in two ways:

- Via a script in `package.json` that we run with `npm run prettier`:

```
{
  (...)
  "scripts": {
    "prettier": "prettier --write ."
  }
  (...)
}
```

- Using the `npx` tool:

```
npx prettier --write .
```

We also need to adjust the ESLint configuration by adding the information that we’ll also be using Prettier in the project. Otherwise, the two systems may clash. To do this, you must first install the Prettier-specific ESLint config with the command:

```
npm install --save-dev eslint-config-prettier
```

Then you add it to the “extends” section in the `.eslintrc` file.
It’s really important to add it as the last item, since it has to override a few entries from the previous set of rules.

```
{
  (...)
  "eslintConfig": {
    "extends": [
      "eslint:recommended",
      "react-app",
      "react-app/jest",
      "prettier"
    ],
    (...)
  }
}
```

## Tool automation with Husky

Finally, let’s automate running both of these tools to improve our workflow. We’ll use Husky for that. It’s a tool that enables integration with Git hooks… so little, and yet so much! Git hooks are a way to run any scripts in response to various actions related to the Git version control system. To make it as simple as possible, we’ll use the lint-staged project, which will streamline this process and introduce one more important optimization.

### What is lint-staged? Why use the tool?

While reading the paragraphs on **ESLint** and **Prettier**, you may have started to wonder whether such a solution would slow down your project. After all, continuously formatting and analyzing several hundred—or even several thousand!—lines of code in several files can definitely take a long time, which can be irritating with each commit. Moreover, it’s easy to see that most of these files won’t even be modified, so it would be a waste of time to constantly analyze them.

Fortunately, there is a way around that: the **lint-staged** tool, which only runs the checks on files staged for commit. It allows for a fabulously simple integration with the pre-commit Git hook, which is quite enough to start with. We install it in a slightly different way than the rest. This time, we’ll use the following command:

```
npx mrm@2 lint-staged
```

To read more on how exactly this tool works, I encourage you to browse [the GitHub page of the project](https://github.com/okonet/lint-staged).

This command—or actually the script we run with it—does a few things that are important to us:

1. install **Husky**,
2. install **lint-staged**,
3. configure **lint-staged** based on whether we already have ESLint and Prettier installed.
After installing lint-staged, we need to add the configuration of this tool to `package.json`. It consists of JSON that takes the name of a file (or a glob pattern that defines a group of files) as a key; the value is a string with a command to be executed, or an array of strings if there are several such commands.

If you created your application via the CRA, it’s most likely that **lint-staged** only configured **Prettier** for you. Therefore, we’ll add the linter to the lint-staged configuration, as in the example below.

```
{
  (...)
  "lint-staged": {
    "*.{js,jsx}": "eslint --fix src/",
    "*.{js,jsx,json,css,md}": "prettier --write"
  }
  (...)
}
```

Pay attention to which files these two tools should handle. **ESLint** only works with JavaScript, while **Prettier** works with many other formats.

## Benefits of using Husky, Prettier, and ESLint to increase the code security of your project

A dozen or so minutes devoted to the configuration we’ve presented above will save you a lot of stress and countless hours that you’d otherwise spend debugging a problem that could have been caught by the linter while writing the code. Add to that all the time you’d spend analyzing Git changes that result only from differences in the formatter configuration of the IDE among individual developers on the team—changes that don’t affect the functionality of the application and are merely code formatting. In addition, your code will simply be nicer and in line with good practices, which will definitely make it easier to work with.

## Further reading on protecting your code with ESLint, Husky, and Prettier

A deeper understanding of how ESLint works and why it marks certain constructs as warnings or bugs will lead to a better understanding of JavaScript and introduce you to some good rules to follow when writing projects in this crazy language.
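For instance, here is the kind of silent bug that the `no-dupe-keys` rule from `eslint:recommended` turns into a compile-time error. The snippet is an illustration, not taken from the project above:

```javascript
// `no-dupe-keys` (part of eslint:recommended) flags this object literal:
// the second `id` silently overwrites the first one at runtime.
const user = { id: 1, name: "Ada", id: 2 };

// Without the linter, the mistake only shows up in behavior:
console.log(user.id); // 2 — the value 1 is silently lost, no runtime error
```

ESLint flags the duplicate key while you type, so the mistake never reaches runtime.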
As you may have guessed, what I’ve discussed in this article is just the tip of the iceberg, especially in the context of **ESLint** itself and the possibilities this tool offers. Here are some interesting links that will allow you to broaden your knowledge on this topic:

- [Using ESLint with TypeScript](https://khalilstemmler.com/blogs/typescript/eslint-for-typescript/)
- [All the rules supported by ESLint](https://eslint.org/docs/rules/)
- [Suggestion to add the integration described in this article to the CRA](https://github.com/facebook/create-react-app/issues/8849)
- [Basic ESLint configuration in the Create React App](https://github.com/facebook/create-react-app/tree/main/packages/eslint-config-react-app)
- [Linting messages in commits](https://github.com/conventional-changelog/commitlint)
- [The origins of Prettier](https://prettier.io/docs/en/why-prettier.html)
- [ESLint --fix flag](https://masteringjs.io/tutorials/eslint/fix)

Plus the pages of the tools used here:

- [Husky](https://typicode.github.io/husky/#/)
- [lint-staged](https://github.com/okonet/lint-staged)
- [Prettier](https://prettier.io/)
- [ESLint](https://eslint.org/)

## Final thoughts on Prettier, Husky, and ESLint

Thanks for reading our article on protecting your project against accidental mistakes by using ESLint, Prettier, and Husky. It should save you a lot of trouble in the future.

We have several other technical guides written by experts on a variety of subjects that’ll help you overcome multiple development challenges. Here are some examples:

- [How to Build a Spark Cluster with Docker, JupyterLab, and Apache Livy—a REST API for Apache Spark](https://www.stxnext.com/blog/docker-jupyterlab-apache-livy-rest-api-apache-spark/)
- [FastAPI vs. Flask: Comparing the Pros and Cons of Top Microframeworks for Building a REST API in Python](https://www.stxnext.com/blog/fastapi-vs-flask-comparison/)
- [Python 2.7 to 3.X Migration Guide: How to Port from Python 2 to Python 3](https://www.stxnext.com/blog/python-3-migration-guide/)
ad_przewozny
984,024
Day-20 Training at Ryaz
Date:09/02/2022 Day:Tuesday Today I started at about 10:25 am and my aim was to build all left...
0
2022-02-09T17:02:10
https://dev.to/mahin651/day-20-training-at-ryaz-3hb1
javascript, beginners, programming, css
- Date: 09/02/2022
- Day: Tuesday

Today I started at about 10:25 am. My aim was to build all the leftover components, and I succeeded in creating them, as I planned to host the website today. However, there was one problem with the icons: none of the icons on the website were visible. I wondered why this wasn't working, tried many things, but could not find a solution, so I was not able to host the website. Due to this error my work got delayed; otherwise I was ready with all the components, so hosting now moves to tomorrow.

This way my day ended with all the components completed, but stuck on one problem. I hope to solve it by tomorrow and host the site. Today I also learned many new things, for example that we have to provide different meta tags before hosting our website so that it is easily understood by search engines, which helps the website welcome a larger audience.
mahin651
984,395
How to keep your repo package dependencies up to date automatically
Photo by Andrea De Santis on Unsplash TL;DR Learn how to implement the dependabot to...
0
2022-02-10T13:29:01
https://dev.to/daniloab/using-github-actions-to-improve-your-developer-experience-29n7
github, devops, productivity
Photo by <a href="https://unsplash.com/@santesson89?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Andrea De Santis</a> on <a href="https://unsplash.com/s/photos/automation?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Unsplash</a>

## TL;DR

Learn how to configure dependabot to automatically update your dependencies, open a PR with each update, run tests inside that pull request, and merge it automatically when the checks succeed.

## Daily Tasks as a Developer

The work of a developer usually brings to mind those Hollywood movies where a smart person or a hacker is typing while various green characters scroll across the screen, or dark rooms where the light of the monitor is the only thing you can see.

But a developer's work is like most jobs. There are problems to solve every day, daily tasks to understand, and decisions about the best way to fix things. And, like any other job, it has daily tasks that eat into our day.

You can compare a dev routine to having a restaurant. Every day that you come into your restaurant you must open the windows, turn on the lights, clean the floor, wash some dishes, open the chairs, whatever, as I said: daily tasks.

Updating package dependencies is one of these daily tasks. And, as the project grows, it gets harder to keep up with this manually.

## Automating

Working with GitHub Actions, tasks like this are easy to automate. To handle this one, let's create:

- The dependabot configuration that opens a new pull request daily, updating a specific dependency.
- The GitHub action responsible for running the application tests inside that opened pull request.
- The GitHub action responsible for merging the pull request if the checks inside it succeed.

![Description of the automation flow](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a1i9nvp8z03sbmld986i.png)

> These flows work together.
So it is important to write tests: they are what lets dependabot work every day, since the tests are responsible for identifying problems with the updates.

### Dependabot Config

Let's add a configuration that makes dependabot look daily for updates to the dependencies inside the application.

- Inside your project root there is probably a folder named `.github`. If not, create it manually now.
- Create a new file inside it named `dependabot.yml`.
- Place the code below:

```yml
version: 2
updates:
  - package-ecosystem: npm
    directory: '/'
    schedule:
      interval: daily
      time: '01:00'
    open-pull-requests-limit: 10
```

This yml file configures dependabot so that:

- It opens pull requests daily, at 01:00.
- It keeps at most 10 dependabot pull requests open at a time.
- Every pull request from dependabot results in a new notification for the repository owner.

### Workflows folder

Before creating the GitHub actions, the workflows folder needs to be created if it does not exist yet.

- Inside `.github`, create a new folder named workflows.

### Test GitHub action

> Important: this flow is responsible for running your application tests, so a jest environment is expected to be configured already. If you don't have one yet, it is a good time to start writing tests.

Let's create the GitHub action responsible for running all the repository tests inside each opened pull request.

- Inside the workflows folder, create a new file named `test.yml`.
- Place the code below:

```yml
name: tests

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Setup Node.js
        uses: actions/setup-node@v2
        with:
          node-version: "14"
      - run: yarn
      - run: yarn jest
```

This yml file sets up a new action that:

- Runs all tests inside each pull request targeting the main branch.
- Runs the yarn command to install all dependencies.
- Then runs yarn jest to run all tests.
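The `yarn jest` step above assumes jest is already available in the project. If it isn't, a minimal `package.json` addition might look like this (the version shown is only an example):

```json
{
  "scripts": {
    "test": "jest"
  },
  "devDependencies": {
    "jest": "^27.0.0"
  }
}
```

With at least one test file in the repo, the workflow then has something meaningful to gate the auto-merge on.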
### Dependabot auto-merge GitHub action

Let's create the GitHub action responsible for automatically merging the pull requests opened by dependabot. When all checks are green, this action merges them automatically.

- Create a new file inside workflows named `auto-merge.yml`.
- Place the code below:

```yml
name: auto-merge

on:
  pull_request_target:
    branches:
      - main

jobs:
  auto-merge:
    runs-on: ubuntu-latest
    if: github.actor == 'dependabot[bot]'
    steps:
      - uses: ahmadnassri/action-dependabot-auto-merge@v2.4
        with:
          github-token: ${{ secrets.AUTOMERGE_TOKEN }}
          command: 'squash and merge'
          target: minor
```

This yml file sets up a new action that:

- Runs only when the pull request actor is 'dependabot[bot]'.
- Uses ahmadnassri/action-dependabot-auto-merge@v2.4.
- Runs 'squash and merge' if the checks on the pull request are green.
- Applies only to pull requests updating dependencies up to the minor target.
- Saves you from spending time reviewing package dependency notifications.

The `.github` folder should look like this:

![github folder print](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zxbh3uwicznq7z70t61h.png)

Push these changes inside a new pull request and watch the magic happen.

![A print from the pull request list page on github](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s7y85z77pg2pm293isq7.png)

![pull request detail github page print](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k4je0m0d9k4ltvyp8a6l.png)

The pull request from the print above: https://github.com/daniloab/graphql-real-world-server/pull/115

> Use this pull request as an example of how to implement this blog post: https://github.com/daniloab/graphql-real-world-server/pull/69

## Welcome to the Automation

Now your application has new automation to help with your daily tasks, and the time you used to invest doing them manually can go into new things for your application.
### Why automate

You need to write more tests now so that you spend less time chasing broken updates later. You need to update your application's dependencies daily, otherwise you will have a legacy application sooner than you think.

Feel free to send me a DM on [Twitter](https://twitter.com/daniloab_) if you have any doubts or insights about this flow.

Support me on [Patreon](https://www.patreon.com/daniloab) to help me with my open source work.

- Learn [How to Learn in Public](https://dev.to/daniloab/how-to-learn-in-public-9ng).

Get into my Discord for free mentorship about all of my content.
daniloab
984,607
How a Healthcare IT Project Manager Helps in Implementing Business Goals
Managing IT projects in the field of healthcare Launching and managing healthcare IT projects and...
0
2022-02-10T07:56:48
https://dev.to/filatovv/how-a-healthcare-it-project-manager-helps-in-implementing-business-goals-1c91
healthydebate, software, medical
## Managing IT projects in the field of healthcare

Launching and managing healthcare IT projects and developing custom medical software is not a simple matter. A standard development team is not enough to build well-coordinated EHR, PMS, or other systems; you need the help of physicians, lawyers, insurers, and billing specialists. Who will connect the non-technical staff, the development team, and the business in such a way that tasks are completed on time, within budget, and without disagreements? This can be done by an experienced healthcare IT Project Manager. Let’s talk about the role of this specialist in achieving business goals.

## What are the complexities of healthcare projects?

Medical software differs in complexity and is created for different purposes and different audiences:

- Heads of medical centers, clinics, and hospitals aspire to streamline the internal work of their facilities, improve patient care, reduce costs, and increase client loyalty. For these purposes, they develop electronic medical records, software for managing medical information, programs for managing revenues, and other systems.
- Patients want to monitor their health status remotely, consult a doctor, and receive electronic prescriptions. For these purposes, medical institutions develop healthcare mobile apps, TeleMedicine software, online clinics, and so on.
- Self-employed doctors want to find a convenient platform to connect with patients and deliver their services.
- Businesses strive to develop software for doctors, healthcare facilities, and patients in order to receive a sustainable yield from selling it.

Fulfilling the above purposes requires different technologies, teams, and tools. However, healthcare technologies share some common characteristics:

1. High risks. Development often exceeds the allocated budget and misses deadlines, as software engineers work with confidential client information. A minor mistake can lead to private data theft and fines for a law violation.
Healthcare software customers are obliged to protect patient information under HIPAA and GDPR; otherwise, the case will go to court. That’s why customers go by the following principle: delaying the release in order to eliminate shortcomings is better than releasing vulnerable software.

2. Healthcare IT projects are expensive. Implementation of medical projects is difficult and time-consuming. If the team misses a mistake or has to spend time eliminating a risk, this will impact the cost. The more drawbacks the work and product requirements contain, the more money a business has to pay for healthcare software development. Gartner estimated that healthcare technology spending would grow by 6.8%, amounting to $140 billion in 2021.

3. Complicated regulations. To enter the market, the software is tested and approved by the appropriate authorities: a medical institution and local or national government. Healthcare software must comply with HIPAA - the way an app’s features are used determines compliance with the legislation. According to the rules, an organization must protect patients' personal information. Only authorized users can have access to ePHI, and data is securely encrypted. If these conditions are not met, then the app owner breaks the law. Local laws are of great importance. Each country, region, and even state has its own rules for the operation of medical institutions, which are important to pay attention to when developing a product. That’s why Business Analysts spend so much time on investigations when launching a product - in order to avoid large hidden risks.

4. The sector is constantly changing. Healthcare projects usually last 6-20 months. During this period, there may be changes in technologies or legislation that should be reflected in the program. This, in turn, means that the team will have to deviate from the initial plan and spend hours implementing a new feature.

5. The project has many participants.
Apart from the development team, a healthcare startup involves third-party experts: doctors, lawyers, and customer representatives. There is also one more part of the interested audience - program users, whose opinion should also be taken into account. Therefore, an experienced healthcare IT Project Manager is needed to successfully implement software. This manager will organize the work in such a way that all the process participants are heard and work together on the common goal.

## The importance of a Project Manager for healthcare projects

Like any other IT initiative, a healthcare project is limited in time, cost, and scope. No customer will love it if these three components are stretched. Therefore, a Project Manager strives to organize the work in such a way that the [healthcare software development](https://andersenlab.com/industries/healthcare/health-tech-future?utm_source=article&utm_medium=dev.to) goes according to the planned scenario:

1. Plans the work on the project. Before starting, the development team plans the implementation: how many IT specialists to involve, what budget will be needed, and how long the process will take. Healthcare IT project management includes:

- recruiting the team;
- defining the project goal;
- establishing communication with the customer;
- defining areas of responsibility;
- setting the project acceptance criteria;
- ensuring compliance with legislation;
- determining the scope of work;
- forecasting and eliminating risks;
- calculating the required budget;
- controlling the budget and terms;
- solving emergency situations;
- choosing the project methodology.

A Project Manager sees the project as a whole and knows what stage the work is in at the moment. This specialist makes sure that the team follows the plan and doesn’t fall behind schedule.

2. Establishes work processes.
A Project Manager organizes their team’s work (studying requirements, designing, developing, and testing), brings the project participants up to speed, motivates specialists, and establishes effective communication. Also, this professional resolves conflicts between the team members, eliminates work hindrances, maintains documentation, and reports to customers on the achievements.

3. Supervises the work of the team. A Project Manager makes sure that quality and security testing is conducted without delays and the new version of the program is released at the planned time:

- holds retrospective meetings and meetings regarding the project status;
- estimates the development team’s KPIs;
- manages the budget and issues invoices;
- communicates the customer’s wishes and recommendations for improvement to specialists;
- works with client reviews to make improvements;
- presents ready-made solutions, demo versions, and prototypes to the customer.

The tasks of a healthcare IT Project Manager can be grouped into the following three blocks:

- achieving goals of the customer through efficient task completion;
- achieving goals of the healthcare software development company through enhanced financial indicators;
- achieving goals of the team through motivations and career goal realizations.

Thanks to their abilities and competencies, a Project Manager can cope with the complexities of medical projects:

- ensures HIPAA and GDPR compliance in order to eliminate potential risks for clients;
- controls the budget of healthcare software development so that the product doesn’t increase in cost;
- takes changing trends of the sector into account and draws up a contingency plan to introduce changes as needed without jeopardizing the project;
- connects the development process participants (customer representatives, industry experts, and the IT team) so that they can quickly resolve problems without conflicts.
Thus, a Project Manager is one of the essential specialists without whom healthcare projects risk failing.

## Reasons why healthcare projects fail

Experts from Forbes name several reasons for project failures:

1. Project implementation without medical experts. Even the most advanced developers don’t know the specifics of healthcare and hence can’t fully implement them in the application.
2. Cooperation with veteran doctors who are used to outdated technologies and don’t understand modern ways of healthcare software development.
3. Absence of a go-to-market strategy. This causes companies to lose leads and revenue. The target audience is not clearly defined, which is why the company risks developing an undemanded product.
4. Miscalculated ROI. By investing lots of money in the development of medical software that, in the end, falls short of expectations, a company goes into the red.
5. Absence of planning and unclear requirements. When the team members don’t clearly understand what software to implement, they find it difficult to create a plan and follow it.
6. Increasingly complex data collection requirements that healthcare organizations must follow.
7. Poor communication between managers, stakeholders, and patients.

The global challenge for a healthcare IT Project Manager is to prevent these problems and bring the project to successful completion.

## Popular healthcare projects

Our PMs have helped development teams implement over 60 healthcare projects. Here are some of them:

_A communication platform for healthcare_

Andersen, a healthcare software development company, has built a platform for communication and collaboration, aimed at clinic employees. The program helps to share information about a patient, protecting their data according to HIPAA. Users can’t save, copy, or delete messages. The data is stored not on devices but on an organization's server. Thousands of medical institutions can remotely advise patients and coordinate their treatments.
Every day, the platform processes over 10 million messages.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cgp4dcabuzhvxiciyoec.png)

_Surgical care imaging software_

Andersen has developed software for a surgery assistance device. Its hardware consists of cameras, the main and secondary displays, a digital color printer, arthroscopes, sheaths, and other elements. The software developed by our experts has helped to improve surgery on joints. We not only implemented the customer's requirements but also offered additional functions and a mobile application for remote control. The project was approved by leading medical institutions. The system became one of the customer's best products and contributed to their brand promotion, increasing sales by 20%.

_IoT-based heart rate monitoring_

Andersen’s specialists have implemented an IoT solution based on ballistocardiography. The program records heart rate variability during the night. The system connects to ferroelectret transducers to visualize the heart's performance.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/efbw05p8c3liv3leoyv8.png)

The app provides a detailed description of the quality of sleep and the recovery process. Remote heart rate monitoring is important for athletes, senior citizens, and people with chronic heart diseases. You can see more examples on our website in the Healthcare section.

## Conclusion

The success of a healthcare project largely depends not only on the team but also on how effectively the management is organized. Thorough planning, regular feedback, monitoring, and open communication help to avoid problems and implement business ideas, ensuring that this implementation stays within the "Time-Budget-Scope" triangle. These are the top reasons why [healthcare software development companies](https://andersenlab.com/industries/healthcare?utm_source=article&utm_medium=dev.to) include a PM in IT project management.
filatovv
985,687
Open Source Reverse Engineering Platform
cutter Star 10.8k Watch 284 Fork 860 Cutter is a free and open-source reverse engineering...
0
2022-02-11T07:23:20
https://dev.to/vorg/open-source-reverse-engineering-platform-4o28
opensource, programming, tooling, cpp
## [cutter](https://github.com/rizinorg/cutter)

Cutter is a free and open-source reverse engineering platform powered by rizin. It aims at being an advanced and customizable reverse engineering platform while keeping the user experience in mind. Cutter is created by reverse engineers for reverse engineers.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hsm0rymjn9crirm92ca0.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kfbvd83v1khorm3j4q0u.png)

## Plugins

Cutter supports both Python and native C++ plugins. Our community has built many plugins and useful scripts for Cutter, such as the native integration of the Ghidra decompiler or the plugin to visualize DynamoRIO code coverage. Feel free to extend it with your own plugins and scripts for Cutter.

## License

GPL-3.0
vorg
985,828
CSSBattle Target#3
Hey everyone, I am back with the CSSBattle Series! This one is over the Target#3 "Push Button" on...
16,772
2022-02-13T14:56:15
https://dev.to/prakhart111/cssbattle-target3-333h
webdev, beginners, css, programming
_Hey everyone, I am back with the CSSBattle Series!_

This one is over the [Target#3 "Push Button"](https://cssbattle.dev/play/3) on [CSSBattle](https://cssbattle.dev/)

![Target3 Push Button](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nvym8kiqkzmq0xv29gx0.png)

#### The best approach I could come up with is stated below

```css
<p></p>
<style>
*{background:#6592CF;margin:37.5 25 37.5 25}
body{background:#243D83}
p{
  position:fixed;
  border-radius:50%;
  background:radial-gradient(#EEB850 24.5px, #243D83 25.5px);
  width:150;height:150;
  left:50;top:-13;
  border:50px solid #6592CF;}
```

This one was by far my shortest solution, condensing to **237 characters.**

#### Explanation

> The `p` element is the one with the dark blue color. In the background colour of `p`, I provided a radial gradient through which the inner yellow circle was made. And the `p` has a light-blue border of 50px thickness.

**I was amazed to see that the top solution was just 108 characters**; I wonder how they were able to do it in such a small amount of code!! Comment down your way of doing the same.

Stay tuned for daily updates regarding all the challenges on CSSBattle.

### Want to connect?

[You can connect with me here](https://linktr.ee/prakhartandon)
prakhart111
985,859
How I developed a modern JAMStack website
Overview In 2021 I started working on a rebranding project for a company that I was...
0
2022-02-15T11:22:04
https://josemukorivo.com/blog/how-i-developed-a-modern-jamstack-website-3heo
nextjs, react, webdev, javascript
---
canonical_url: https://josemukorivo.com/blog/how-i-developed-a-modern-jamstack-website-3heo
---

## Overview

In 2021 I started working on a rebranding project for a company that I was working for in 2020. [Here](https://sivioinstitute.org/) is a link to the project. The company already had a nice website, but they had been using it since 2018 and wanted something new and modern that is also easy to manage for non-developers.

For this project to be a success I was working with a [designer friend](https://twitter.com/myk_zi). We sat down and started planning how we were going to go about it, because even if this was a company website it had a lot of moving parts, so it needed some planning.

### Figma Design

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eh30q6w9dxbubdowpmzo.png)

We used Figma for every single component that we developed.

### Homepage Design

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ngjarn20yi4k3sutj4dd.png)

My friend and I already had some experience with Scrum for project management, so we decided to use Scrum for this project since it was a good fit. We started to create a product backlog, prioritized features, and divided the work into sprints. We were working very closely with the product owners to make sure that we were developing what the user really wanted.

## Tech stack choice

This was one of the interesting stages in the development of the project. We decided to use the JAMStack on this project for quite a number of reasons. The design for the project was done using [Figma](https://figma.com), then I started the development of the UI using [React](https://reactjs.org/). I decided to use [Tailwind CSS](https://tailwindcss.com/) for styling because it makes it super fast to style components. I did not like the idea of having a bunch of classes on my markup, so I used Tailwind with CSS modules (I will show the snippets for some of the code).
Because we wanted this website to be performant and SEO friendly, I decided to use [NextJS](https://nextjs.org/) (the authors call it a React framework for production, and they are right). NextJS has many features for performance and SEO out of the box, like server side rendering, code splitting, optimized images, routing, and many more.

On this project it didn't make sense to spin up a custom API for the backend, so I decided to use a modern CMS, which in this case was [strapi](https://strapi.io/). All of the backend stuff on this site is coming from strapi. There are also a number of other tools I used, but I will not go into the details of those. Below I will give a summary of the key things I used.

### Key things used in the project

- [React](https://reactjs.org/) for the UI
- [NextJS](https://nextjs.org/) for SSR/CSR/Routing and more
- [Tailwind CSS](https://tailwindcss.com/) for UI styling
- [Strapi](https://strapi.io/) as CMS
- [Docker](https://www.docker.com/) for application containerization
- [nginx](https://www.nginx.com/) as a web server and reverse proxy
- [git](https://git-scm.com/) for version control
- [mailchimp](https://mailchimp.com/) for managing a mailing list

## Project Structure

For project structure I followed [this structure](https://dev.to/josemukorivo/how-i-structure-my-nextjs-projects-5n8) with some improvements, but it was good as a starting point. Here is a high level overview.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dflokpzywxtsmkmjl7j1.png)

## Creating components

I tried to make the components that I developed reusable. Below are sample files for the `Text` component.
### Text.module.scss

```css
.root {
  @apply mb-4;
}

.root:is(h1, h2, h3, h4, h5, h6) {
  @apply mb-7 2xl:mb-10 leading-tight dark:text-slate-200;
}

.p {
  @apply text-lg 2xl:text-xl;
}

.span {
  @apply text-lg;
}

.h1 {
  @apply text-4xl md:text-5xl font-heading font-medium uppercase;
}

.h2 {
  @apply text-2xl md:text-4xl font-heading uppercase;
}

.h3 {
  @apply text-3xl;
}

.h4 {
  @apply text-2xl;
}

.h5 {
  @apply text-xl;
}

.h6 {
  @apply text-lg;
}
```

### Text.tsx

```tsx
import { FC, CSSProperties, ReactNode } from 'react';
import cn from 'classnames';
import s from './Text.module.scss';

type Variant = 'h1' | 'h2' | 'h3' | 'h4' | 'h5' | 'h6' | 'p' | 'span';

interface Props {
  as?: Variant;
  className?: string;
  style?: CSSProperties;
  children?: ReactNode | any;
  html?: string;
}

export const Text: FC<Props> = ({
  as: Tag = 'p',
  className = '',
  style = {},
  children,
  html,
  ...rest
}) => {
  const classes = cn(
    s.root,
    {
      [s.p]: Tag === 'p',
      [s.span]: Tag === 'span',
      [s.h1]: Tag === 'h1',
      [s.h2]: Tag === 'h2',
      [s.h3]: Tag === 'h3',
      [s.h4]: Tag === 'h4',
      [s.h5]: Tag === 'h5',
      [s.h6]: Tag === 'h6',
    },
    className // make sure to add the className last so it overwrites the other classes
  );

  const htmlProps = html
    ? {
        dangerouslySetInnerHTML: { __html: html },
      }
    : {};

  return (
    <Tag className={classes} {...rest} {...htmlProps}>
      {children}
    </Tag>
  );
};
```

## Example Usage

```tsx
<Text as='h1'>
  Hi 👋🏼, I’m Joseph. Writer, Software Engineer, DevOps
</Text>

<Text className='cool'>
  This is a sample paragraph
</Text>
```

## Hosting and deployment

The company I developed this website for is not a big company and they don't have a big tech team, so I used the tools that I thought were easy for someone else to maintain. Strapi is running as a docker container using `docker-compose`, and the frontend is running in a similar way. In the source code for both strapi and the frontend I created some `Make` files to run the project. Below is a sample Makefile.
```
down:
	docker-compose down

build:
	docker-compose up -d --build

redeploy:
	git pull && make down && make build
```

In this case, if there are changes to the source code, the user does not need to know how to use docker; they just run `make redeploy` in the root of the project and all the code pulling and image building is done for them. Of course, this is clearly labelled in the `README`.

So these services are running on different ports on the server, and I exposed them using `nginx`. Below is how one may configure their nginx file for strapi. **Please note** that on production you have to make sure that you do it in a secure way; this is just to help you get started.

```
server {
    server_name example.com www.example.com;

    location / {
        keepalive_timeout 64;
        proxy_pass http://localhost:8080; # use your local port where strapi is running
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-NginX-Proxy true;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass_request_headers on;
        proxy_max_temp_file_size 0;
        proxy_redirect off;
        proxy_read_timeout 240s;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = www.example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    if ($host = example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    server_name example.com www.example.com;
    listen 80;
    return 404; # managed by Certbot
}
```

I hope this article
helped you. P.S you can follow me on [twitter](https://twitter.com/josemukorivo). Photo by <a href="https://unsplash.com/@halacious?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Hal Gatewood</a> on <a href="https://unsplash.com/s/photos/website?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Unsplash</a>
josemukorivo
986,200
Python or JavaScript
I have met many newbies trying to choose between which programing language to learn and which is...
0
2022-02-11T15:44:33
https://dev.to/osmodes/python-or-javascript-4k84
javascript, beginners, programming, python
I have met many newbies trying to choose between which programming language to learn and which is better. Choosing the right programming language for your work is a huge priority before starting to write your first lines of code ("hello world").

There are plenty of languages for you to choose from, and it's easy to learn with the plenty of resources available online. Python and JavaScript often intrigue and confuse newbies a lot; after watching some YouTube videos or reading articles online, they are exposed to the different opinions of the content creators they digested. In some ways, these two languages are comparable, but usually, their use cases, syntax, and approaches to programming differ significantly.

Bear in mind that I might disappoint you if your aim is to know which is better (lol), but I will try all I can to give you an insight into the unique features, limitations, and areas of strength of each language. The best of these two is based on the programmer’s opinion and what you intend to achieve.

Well, before we go ahead and start listing the various differences between JavaScript and Python, let’s first go through a general overview of the two programming languages.

**Python Overview**

Python is a high-level general-purpose programming language developed to emphasize code readability and allow programmers to work quickly and efficiently. It is meant to be easy to read and easy to implement, and it is often cited as one of the easiest programming languages to learn (arguably). Python (just as you will read of JavaScript later) can also serve as a scripting language, similar to languages like Perl and Ruby, for creating web applications. Python's unique and appealing feature is that it allows programmers to use a range of programming styles for developing both simple and complex programs. Being a general-purpose language (i.e. applicable both on the server-side and the client-side), and because of its simplicity, flexibility, versatility, and other useful features, Python is growing and becoming one of the most popular programming languages around. Its simplicity and readability are what I have loved most about it. Just:

`print("Hello world")`

**JavaScript Overview**

JavaScript is a scripting language that can easily run in a browser; it does not require a separate compilation step in website development. Most browsers (if not all) have built-in engines that understand and execute JavaScript commands. All you need to do is write commands within your HTML documents, between `<script>` and `</script>` tags, and they will execute. JavaScript is a client-side scripting language for making static websites interactive.

I would like to be as open and beginner-friendly as possible in addressing a few points below.

**Easy to learn**: Without being biased, and speaking as someone who is familiar with both languages: because of its readability and simplicity, I will agree that Python is easier to learn than JavaScript, though this depends on who is learning, as both depend on the mindset of the newbie. JS basics can be learned in months, as can Python's, but its readability makes Python easier to learn. Python has fewer lines, fewer structural rules, and many frameworks that contain pre-written code that speeds up coding time.

**Historically**: Python was invented in 1991, while JavaScript was invented in 1995, and both have been relevant in website development and the programming world as a whole.

**Front-End and Back-End**: Initially, JavaScript was mainly concerned with making websites user-friendly and dynamic, while Python was considered back end (an old belief). The introduction of the Node.js environment provides an option for JavaScript to serve as a backend, as it can now run code outside browsers (of course, not in the way Python does).
Python, though, comes with different modules, but it is simply a general-purpose language that is applicable both on the server-side and the client-side (front/back end), though I will agree that Python is more noticeable in backend development. Because of the need to transpile Python to JavaScript, as browsers do not execute Python directly (as of when this article is published), it is logical not to replace a traditional front-end language with Python. This is advised to save time in the execution process of the code.

**Web Development**: Because of the introduction of the Node.js extension to JavaScript, scalability (which refers to the ability of languages to handle huge numbers of users and manage large amounts of data using minimum server utilization) is an edge over Python. Node.js, because of its scalability and support for asynchronous programming, is suitable for the development of programs that depend on the speed of execution. Of course, every client wants a fast and responsive website. Python, being a server-side lord, has a very stable environment with frameworks like CherryPy, Django, and Flask. This stability allows the frameworks to become more efficient in web development.

**Salary**: Python is arguably the fastest-growing programming language, with its developers making approximately $116,000 (median salary) per year, while JavaScript programmers make approximately $110,000.

**In Conclusion**: It would have been great to write about and compare topics like numeric types, object access, inheritance, mutability, procedural programming, and REPL (generally, their features), but I assume this article is for beginners: those planning to learn programming and finding it difficult to choose which language is easier to learn or related to what they want to do.
Though both are object-oriented programming languages, even if their scopes are different, choosing between Python and JavaScript should be based on what you plan to do with your programming skill. For instance, if web development is your drive, the better choice is to consider the inseparable trio of HTML, CSS, and JavaScript, while people interested in data science and machine learning should consider learning Python.

**At best, be a professional in one but also be familiar with the other**.
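To make the Node.js point under **Web Development** above concrete, here is a tiny sketch (the names and timing are invented for illustration) of how asynchronous code lets a server keep working while a slow task runs in the background:

```javascript
// A slow task is scheduled asynchronously with setTimeout, so the code
// after it runs immediately instead of waiting for the task to finish.
const order = [];

order.push("request received");

setTimeout(() => {
  // simulated slow work; this callback only runs after the current
  // synchronous code has finished executing
  order.push("slow work finished");
}, 10);

order.push("server keeps handling other requests");

// at this point the slow work has not blocked anything
console.log(order);
```

This non-blocking style is part of what makes Node.js well suited to programs that serve many users at once.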
osmodes
986,500
Javascript Objects
Javascript works on the basis of object oriented programming. This article describes how to use...
0
2022-02-11T22:51:53
https://dev.to/collinsetemesi/javascript-objects-2gig
javascript, webdev, beginners
JavaScript works on the basis of object oriented programming. This article describes how to use objects, properties and methods, and how to create your own objects.

**What is an object?**

In JavaScript, an object is a standalone entity, with properties and type. Unlike primitive values such as string, number, boolean, null and undefined, an object can group several related values together. For example, a user object can have the following properties: username, email and gender.

**Objects and properties**

A JavaScript object has properties associated with it. A property of an object is a variable that is attached to the object. The properties of an object define the characteristics of the object. You access the properties of an object with a simple dot-notation:

`objectName.propertyName`

Like all JavaScript variables, both the object name and property name are case sensitive. You can define a property by assigning it a value. To demonstrate this, here’s a sample object `myDog`:

```
const myDog = {
  "name": "Joe",
  "legs": 4,
  "tails": 1,
  "friends": ["everyone"]
};
```

"name", "legs", "tails" and "friends" are properties, while "Joe", 4, 1 and ["everyone"] are values.

**Object Methods**

Methods are actions that can be performed on objects.

```
const person = {
  firstName: "John",
  lastName: "Doe",
  id: 5566,
  fullName: function() {
    return this.firstName + " " + this.lastName;
  }
};
```

**Creating a JavaScript Object Using an Object Literal**

This is the easiest way to create a JavaScript object. Using an object literal, you both define and create an object in one statement.

`const person = {firstName:"John", lastName:"Doe", age:50, eyeColor:"blue"};`

**Conclusion**

Objects in JavaScript can be compared to objects in real life. The concept of objects in JavaScript can be understood with real life, tangible objects. In JavaScript, an object is a standalone entity, with properties and type. Compare it with a chair, for example. A chair is an object, with properties. A chair has a color, a design, weight, a material it is made of.
The same way, JavaScript objects can have properties, which define their characteristics.
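Putting the chair analogy into actual code (the property values here are just invented examples), a chair object can carry both plain properties and a method:

```javascript
// A real-life chair modelled as a JavaScript object:
// plain properties describe it, and a method acts on those properties.
const chair = {
  color: "brown",
  material: "oak",
  weightKg: 7.5,
  design: "mid-century",
  describe: function () {
    // `this` refers to the chair object the method is called on
    return "A " + this.color + " " + this.material + " chair";
  }
};

// dot notation and bracket notation both read the same property
console.log(chair.color);        // "brown"
console.log(chair["material"]);  // "oak"

// calling the method
console.log(chair.describe());   // "A brown oak chair"
```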
collinsetemesi
1,433,588
Graph Databases vs NoSQL
NoSQL Databases: NoSQL databases (aka "not only SQL") are non-tabular databases and store data...
0
2023-04-12T11:25:34
https://dev.to/intriguedrishi/graph-databases-vs-nosql-3ndo
**NoSQL Databases:** NoSQL databases (aka "not only SQL") are non-tabular databases and store data differently than relational tables. NoSQL databases come in a variety of types based on their data model. The main types are document, key-value, wide-column, and graph. They provide flexible schemas and scale easily with large amounts of data and high user loads. Most NoSQL systems are aggregate-oriented, grouping the data based on a particular criterion and the database type (such as document store, key-value pair, etc). This model provides only simple, limited operations and only forms one dedicated view of your data. Focusing on one aggregate at a time allows users to easily spread many chunks of data across a network of machines along the aggregate dimension (for instance, the Document in document databases), but that means that other projections and perspectives have to be computed by crunching or duplicating your data. **Graph Databases:** A graph in a graph database can be traversed along specific edge types or across the entire graph. In graph databases, traversing the joins or relationships is very fast because the relationships between nodes are not calculated at query times but are persisted in the database. Graph databases have advantages for use cases such as social networking, recommendation engines, and fraud detection, when you need to create relationships between data and quickly query these relationships. Graph databases are purpose-built to store and navigate relationships. Relationships are first-class citizens in graph databases, and most of the value of graph databases is derived from these relationships. Graph databases use nodes to store data entities, and edges to store relationships between entities. An edge always has a start node, end node, type, and direction, and an edge can describe parent-child relationships, actions, ownership, and the like. There is no limit to the number and kind of relationships a node can have. 
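As a rough sketch of the idea above (plain JavaScript, not a real graph database), each node can store direct references to its edges, so following a relationship is a simple lookup rather than a join computed at query time:

```javascript
// Nodes store data entities; each node also keeps direct references
// to the nodes it is connected to (its outgoing "follows" edges).
const nodes = {
  alice: { name: "Alice", follows: [] },
  bob:   { name: "Bob",   follows: [] },
  carol: { name: "Carol", follows: [] },
};

// Edges: start node -> end node, persisted directly on the start node.
nodes.alice.follows.push(nodes.bob);
nodes.bob.follows.push(nodes.carol);

// Traverse the "follows" edges two hops out from a start node;
// no joins are computed, only stored references are followed.
function followsOfFollows(start) {
  const result = [];
  for (const friend of start.follows) {
    for (const foaf of friend.follows) {
      result.push(foaf.name);
    }
  }
  return result;
}

console.log(followsOfFollows(nodes.alice)); // ["Carol"]
```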
Graph databases, on the other hand, handle fine-grained networks of information, providing any perspective on your data that fits your use case. The well-known and trusted transactional guarantees from relational systems also protect updates of the graph data in Neo4j, conforming to ACID standards. Want to extract all the benefits of PostgreSQL along with Graph features look at: [Apache AGE](https://age.apache.org/) [Apache AGE Github](https://github.com/apache/age)
intriguedrishi
986,819
TRUE WAY OF LEARNING TO CODE
Many Beginner programmers learn code in the wrong way. They learn to code like this: Write all the...
0
2022-02-12T08:22:13
https://dev.to/jagannathkrishna/true-way-of-learning-to-code-2c4p
Many beginner programmers learn to code in the wrong way. They learn like this:

- Write all the information they learn in a notebook.
- Learn all the syntax.
- Learn some theories.
- Learn all definitions.
- etc. (You may know some other points if you are like this)

But after all this, if someone tells them to code a project, they can't. They can recite so many definitions and theories, but they can't code. They don't know how to. So, we can understand this is not the true way of learning to code.

The true way of learning to code is.... is... by **_Googling_**. Yah!! You read that right. Googling is the key. If you forget some syntax, just google it. If you get an error, just google it. If you forget something else, just google it. So, when there is Google, you don't need to learn everything thoroughly. Just understand the concept.

But here's the thing: if you forget some syntax, you can google it, but you need to know where to put it in your code. That is the key. No matter whether you copied & pasted from Stack Overflow or from somewhere else, you need to know where to paste it. So, just understand the concept.

I hope you understood my message. Also, share this message with beginner programmers. It might help them. Thank you for reading. Have a nice day. And the comments are open for your valuable replies.
jagannathkrishna
986,911
Set MacOS finder folder icon with Python
Problem I want to use Python script to replace MacOS finder folder icon with an image. ...
0
2022-02-12T09:27:34
https://dev.to/franzwong/set-macos-finder-folder-icon-with-python-443h
python, cocoa, macos
## Problem

I want to use a Python script to replace a macOS Finder folder icon with an image.

## Steps

1\. Install the packages:

```shell
pip3 install pyobjc-core pyobjc-framework-Cocoa
```

2\. Create a Python script file `set-icon.py` with the following code:

```python
import sys, Cocoa

folder_path = sys.argv[1]
print(f"Folder path: {folder_path}")

image_path = sys.argv[2]
print(f"Image path: {image_path}")

result = Cocoa.NSWorkspace.sharedWorkspace().setIcon_forFile_options_(
    Cocoa.NSImage.alloc().initWithContentsOfFile_(image_path), folder_path, 0
)

if result:
    print("Succeed")
else:
    print("Failed")
```

3\. Run the script to change the icon:

```shell
python3 set-icon.py <folder path> <image path>
```
franzwong
986,965
The Treachery of Whales
Advent of Code 2021 Day 7 Try the simulator! The task at hand ...
16,285
2022-02-12T18:39:42
https://dev.to/rmion/the-treachery-of-whales-1em7
adventofcode, programming, algorithms, computerscience
## [Advent of Code 2021 Day 7](https://adventofcode.com/2021/day/7)

### [Try the simulator!](https://aocthetreacheryofwhales.rmion.repl.co/)

![Interactive crab migration tool](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7v7sj6zhnitpmmu97ag3.gif)

## The task at hand

### Solve for X where `X = a tallied cost of moving to a shared location`

### Input is

- A string of comma-separated integers

### It represents

- Locations of crab submarines

## Studying someone's code: wait, my code?!

- I solved both parts of this puzzle on the day it was released
- I recall not feeling too intimidated by it
- Returning to my code, I am delighted by how concise and readable it is

### Part 1: where each move costs one fuel

```
Split the input into an array of integers
Identify the smallest and largest integers in the list
Set up a variable mapping fuel costs and end locations
For all integers as i from min to max positions of the list
  For each position
    Increment an accumulating integer - starting from 0 - by the absolute value
      of the result of subtracting i from the current position
  Create a key in the variable matching the accumulated value
  Store in that key the integer of the current iteration
Return the smallest value from a list of all of the fuel mapping's keys
```

Here's a visualization of this algorithm's first three iterations

![Part 1 algorithm visualization](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lgp19wvpznib69rrrbz0.gif)

### Part 2: where each move costs one additional fuel

```
Split the input into an array of integers
Identify the smallest and largest integers in the list
Set up a variable mapping fuel costs and end locations
For all integers as i from min to max positions of the list
  For each position
    Increment an accumulating integer - starting from 0 - by the result of the following operations:
      Store the absolute value of the result of subtracting i from the position
      Calculate the result of the following equation to amass a value equal to the
        sum of all numbers between 1 and the position:
          Multiply the stored absolute value by the sum of itself and 1
          Divide that product by 2
          (e.g. 5+4+3+2+1 = 5(5+1)/2 = 15)
  Create a key in the variable matching the accumulated value
  Store in that key the integer of the current iteration
Return the smallest value from a list of all of the fuel mapping's keys
```

The difference is subtle. Even more subtle in the code. Because it is one edit to one line of code.

```js
// Part 1
For i from min to max: Math.abs(position - i)

// Part 2
For i from min to max: Math.abs(position - i) * (Math.abs(position - i) + 1) / 2
```

## A brief puzzle with a bonus lesson

- When I solved this the first time, I was proud to do so quickly
- I was proud of incorporating memoization and recursion - popular techniques used in dynamic programming
- But after returning to this puzzle, I realized I was overthinking the problem: I didn't need either of those techniques.
- I was using them to generate the sum of all numbers between some number N and 1.
- There's an easy formula for that: N(N+1)/2

It was also fun and rewarding practice to build [this puzzle's simulator](https://aocthetheacheryofwhales.rmion.repl.co/):

![Interactive crab migration tool](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7v7sj6zhnitpmmu97ag3.gif)
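The pseudocode above can be compressed into a short runnable sketch. This is not the author's original solution — it's a minimal brute-force version, tested against the puzzle's published example input:

```javascript
// Try every candidate position between min and max, tally the fuel cost
// of moving each crab there, and keep the cheapest total.
const minFuel = (positions, costFn) => {
  const min = Math.min(...positions);
  const max = Math.max(...positions);
  let best = Infinity;
  for (let i = min; i <= max; i++) {
    let total = 0;
    for (const p of positions) total += costFn(Math.abs(p - i));
    best = Math.min(best, total);
  }
  return best;
};

// Part 1: each step costs one fuel
const part1 = (positions) => minFuel(positions, (d) => d);

// Part 2: each step costs one more than the last, i.e. d(d+1)/2 total
const part2 = (positions) => minFuel(positions, (d) => (d * (d + 1)) / 2);

// Example input from the puzzle statement
const example = [16, 1, 2, 0, 4, 2, 7, 1, 2, 14];
console.log(part1(example)); // 37
console.log(part2(example)); // 168
```

Note the full scan from min to max keeps it simple; the fuel-cost map from the pseudocode is only needed if you also want to recover the winning position.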
rmion
987,348
pseudo classes in CSS - part 5 (:target)
:target is an awesome selector that most people either don't know about or don't know how to use...
0
2022-02-12T20:23:32
https://dev.to/therajatg/pseudo-classes-in-css-part-5-target-3oe8
css, webdev
:target is an awesome selector that most people either don't know about or don't know how to use effectively. There is a lot we can do using the target selector. I love it as I can do things without using JavaScript. Note: This is the 5th part of the series dedicated to the pseudo classes of CSS. In this part, we'll understand the pseudo class :target but if you want to jump to any other pseudo class, be my guest and click on the links provided below: part 1: [pesudo class :hover](https://dev.to/therajatg/pseudo-classes-part-1-hover-28ad) part 2: [pseudo class :link](https://dev.to/therajatg/pseudo-classes-in-css-part-2-link-47e5) part 3: [pseudo class :visited](https://dev.to/therajatg/pseudo-classes-in-css-part-2-visited-3h2a) part 4: [pseudo class :active](https://dev.to/therajatg/pseudo-classes-in-css-part-4-active-2m1e) part 5: [pseudo class :target](https://dev.to/therajatg/pseudo-classes-in-css-part-5-target-3oe8) Here we go: let's see what MDN has to say:<br/> The :target CSS pseudo-class represents a unique element (the target element) with an id matching the URL's fragment. Did you understand what MDN said👆. 
Don't worry, let's understand with a simple example:

Example 1:

![1.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1644654751288/Jw0CxgWRe.gif)

```html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Document</title>
    <style>
        p:target {
            background-color: red;
            color: white;
        }
    </style>
</head>
<body>
    <a href="#one">Change 1st line.</a>
    <div>
        <p id="one">testing the target selector</p>
    </div>
</body>
</html>
```

I hope you understood, and if you didn't, here's a breakdown 👇:

Step 1: The browser sees that styling has been applied to `p:target`.<br/>
Step 2: To get more information, the browser goes to the `p` tag and notices the id `"one"` on it.<br/>
Step 3: The browser understands that this id must be in the `href` of some anchor tag, and it finds the relevant anchor tag based on the id.<br/>
Step 4: It reaches the anchor tag and understands that as soon as this anchor tag is clicked, the styling will be applied to the relevant `p` tag (the one with id `one`).

I hope you understand. If you don't, let's see another example:

Example 2:

![2.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1644693588275/jFgjmwgmb.gif)

```html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Document</title>
    <style>
        p:target {
            background-color: red;
            color: white;
        }
    </style>
</head>
<body>
    <a href="#one">Change 1st line.</a>
    <a href="#two">Change 2nd line</a>
    <div>
        <p id="one">testing the target selector</p>
        <p id="two">testing the target selector</p>
    </div>
</body>
</html>
```

If you still don't quite get it, I'll show a really cool example.
See this 👇 Wikipedia page:

![3.PNG](https://cdn.hashnode.com/res/hashnode/image/upload/v1644694340237/cEY1T0wSM.png)

You must have seen that when we click on any title in the content menu on a Wikipedia page, we are taken to that particular section of the page. You can achieve the same functionality using only the :target selector. Wanna know how? See the example below:

Example 3:

![4.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1644695258889/I0TCCfIcQj.gif)

```html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Document</title>
    <style>
        div:target {
            background-color: lightblue;
        }
        a {
            display: block;
        }
    </style>
</head>
<body>
    <a href="#childhood">Childhood</a>
    <a href="#marriage">Marriage</a>
    <a href="#career">Career</a>
    <a href="#publications">Publications</a>
    <div id="childhood">
        <h2>Childhood</h2>
        <p>Lorem ipsum text</p>
    </div>
    <div id="marriage">
        <h2>Marriage</h2>
        <p>Lorem ipsum text</p>
    </div>
    <div id="career">
        <h2>Career</h2>
        <p>Lorem ipsum text</p>
    </div>
    <div id="publications">
        <h2>Publications</h2>
        <p>Lorem ipsum text</p>
    </div>
</body>
</html>
```

I hope you understand. If you still don't, let me show you another magic trick. This time we'll click on a link and an image will appear out of thin air.
Example 4:

![5.gif](https://cdn.hashnode.com/res/hashnode/image/upload/v1644696103059/kwYOjouYo.gif)

```html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Document</title>
    <style>
        img:target {
            display: block;
            height: auto;
            width: 25rem;
            object-fit: cover;
        }
        img {
            display: none;
        }
    </style>
</head>
<body>
    <a href="#quote-image">See Magic</a>
    <img id="quote-image" src="/pics/5.jpg" alt="quote">
</body>
</html>
```

We gave the image `display: none`, so it isn't displayed, but as soon as we click on the link, `display: block` is applied to the image and it becomes visible.

I could go on and on and on with examples. Just imagine how many use cases there could be for the pseudo class :target.

That's all folks.

**If you have any doubt, ask me in the comments section and I'll try to answer as soon as possible.**

**I write one article every day related to web development (yes, every single f*cking day). Follow me here if you are learning the same.**

**If you liked the article, follow me on Twitter:** @therajatg

**If you are the LinkedIn type, let's connect**: https://www.linkedin.com/in/therajatg/

**Have an awesome day ahead 😀!**
therajatg
987,525
7 Killer One-Liners in JavaScript
7 Killer One-Liners in JavaScript JavaScript is the most crucial pillar of Web...
0
2022-02-13T05:49:32
https://tapajyoti-bose.medium.com/7-killer-one-liners-in-javascript-33db6798f5bf
javascript, webdev, programming, productivity
---
canonical_url: https://tapajyoti-bose.medium.com/7-killer-one-liners-in-javascript-33db6798f5bf
---

# 7 Killer One-Liners in JavaScript

JavaScript is the most crucial pillar of Web Development.

> This article contains **code snippets** hand-picked by _sterilized contamination-free gloves_ and _placed onto a satin pillow._
>
> A team of 50 _inspected the code_ and _ensured it was highly polished_ before posting. Our article-posting specialist from Switzerland lit a candle, and a hush fell over the crowd as he entered the code into the _finest gold-lined keyboard that money can buy._
>
> We all had a wonderful celebration, and the whole party marched down the street to the café where the entire town of Kolkata waved _"Bon Voyage!"_ to the article as it was posted online.

Have a wonderful time reading it!

## Shuffle Array

While using algorithms that require _some degree of randomization_, you will often find shuffling arrays quite a necessary skill. The following snippet shuffles an array **in place** with `O(n log n)` complexity (note: this comparator trick is not a uniformly random shuffle; reach for Fisher–Yates when you need unbiased results).

```javascript
const shuffleArray = (arr) => arr.sort(() => Math.random() - 0.5);

// Testing
const arr = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
console.log(shuffleArray(arr));
```

## Copy to Clipboard

In web apps, **copy to clipboard** is rapidly rising in popularity due to its _convenience for the user_.

```javascript
const copyToClipboard = (text) =>
  navigator.clipboard?.writeText && navigator.clipboard.writeText(text);

// Testing
copyToClipboard("Hello World!");
```

**NOTE:** The approach works for **93.08%** of global users as per [caniuse](https://caniuse.com/?search=Clipboard%20API%3A%20writeText), so the check that the user's browser supports the **API** is necessary. To support all users, you can use an `input` and **copy its contents**.

## Unique Elements

Every language has its own implementation of a `Hash List`; in **JavaScript**, it is called `Set`.
You can easily get the unique elements from an array using the `Set` **Data Structure**.

```javascript
const getUnique = (arr) => [...new Set(arr)];

// Testing
const arr = [1, 1, 2, 3, 3, 4, 4, 4, 5, 5];
console.log(getUnique(arr));
```

## Detect Dark Mode

With the rising popularity of **dark mode**, it is ideal to switch your app to **dark mode** if the user has it enabled on their device. Luckily, `media queries` can be utilized for making the task a _walk in the park_.

```javascript
const isDarkMode = () =>
  window.matchMedia &&
  window.matchMedia("(prefers-color-scheme: dark)").matches;

// Testing
console.log(isDarkMode());
```

As per [caniuse](https://caniuse.com/?search=%20%20window.matchMedia), the support for `matchMedia` is **97.19%**.

## Scroll To Top

Beginners very often find themselves struggling with scrolling elements into view properly. The easiest way to scroll elements is to use the `scrollIntoView` method. Add `behavior: "smooth"` for a smooth scrolling animation.

```javascript
const scrollToTop = (element) =>
  element.scrollIntoView({ behavior: "smooth", block: "start" });
```

## Scroll To Bottom

Just like the `scrollToTop` method, the `scrollToBottom` method can easily be implemented using the `scrollIntoView` method, simply by switching the `block` value to `end`.

```javascript
const scrollToBottom = (element) =>
  element.scrollIntoView({ behavior: "smooth", block: "end" });
```

## Generate Random Color

Does your application rely on **random color generation**? Look no further, _the following snippet has you covered_! (The `padStart` keeps the hex string six digits long even when the random number is small.)

```javascript
const generateRandomHexColor = () =>
  `#${Math.floor(Math.random() * 0xffffff).toString(16).padStart(6, "0")}`;
```

Finding **personal finance** too intimidating? Checkout my **Instagram** to become a [**Dollar Ninja**](https://www.instagram.com/the.dollar.ninja/)

## Thanks for reading

Need a **Top Rated Front-End Development Freelancer** to chop away your development woes?
Contact me on [Upwork](https://www.upwork.com/o/profiles/users/~01c12e516ee1d35044/) Want to see what I am working on? Check out my [Personal Website](https://tapajyoti-bose.vercel.app) and [GitHub](https://github.com/ruppysuppy) Want to connect? Reach out to me on [LinkedIn](https://www.linkedin.com/in/tapajyoti-bose/) I am a freelancer who will start off as a **Digital Nomad** in mid-2022. Want to catch the journey? Follow me on [Instagram](https://www.instagram.com/tapajyotib/) Follow my blogs for **Weekly new Tidbits** on [Dev](https://dev.to/ruppysuppy) **FAQ** These are a few commonly asked questions I get. So, I hope this **FAQ** section solves your issues. 1. **I am a beginner, how should I learn Front-End Web Dev?** Look into the following articles: 1. [Front End Development Roadmap](https://dev.to/ruppysuppy/front-end-developer-roadmap-zero-to-hero-4pkf) 2. [Front End Project Ideas](https://dev.to/ruppysuppy/5-projects-to-master-front-end-development-57p)
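A footnote on the Shuffle Array section above: the `sort`-based trick is concise, but because comparators are expected to be consistent, the result is not uniformly random. A Fisher–Yates version (an alternative sketch, not from the original list) is unbiased and runs in `O(n)`:

```javascript
// Fisher–Yates shuffle: swap each element with a random earlier-or-equal index.
// Uniformly random, O(n), and shuffles in place like the original one-liner.
const fisherYatesShuffle = (arr) => {
  for (let i = arr.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [arr[i], arr[j]] = [arr[j], arr[i]];
  }
  return arr;
};

// Testing
console.log(fisherYatesShuffle([1, 2, 3, 4, 5]));
```

It's three lines longer, but it is the standard way to shuffle when the randomness actually matters.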
ruppysuppy
987,668
What is Software Engineering?
New technologies are reshaping industries, which increases the need for qualified professionals who...
0
2022-02-13T09:23:52
https://dev.to/awiednoor/what-is-software-engineering-4bjc
software, career, computerscience, algorithms
New technologies are reshaping industries, which increases the need for qualified professionals who are familiar with managing computer and software systems to adapt to business needs. As these needs from businesses grow, specific skills such as programming, cloud computing, system security, and an array of other software technologies knowledge are necessary, supported by engineering methods to produce scalable and reliable solutions. Engineers possess a variety of skills such as problem-solving, troubleshooting, and critical thinking in addition to their technical expertise. In place of creating a unique solution for every problem, an engineer can build a system, estimate and fix its errors, and analyze its performance. ## let’s dissect the term **Software:** In general it’s a set of constructions that tells the hardware what to do. there are many types of software, the two main categories are the operating system and the programs running on that system. **Engineering:** Is using scientific principles and math to design and build structures, Solve Problems, and test concepts and solutions. **Software Engineering:** Systematically develop software through an engineering approach. The process of developing, analyzing, testing, and maintaining software using engineering principles. To be a software engineer, you will need strong problem-solving and research skills, analytical and mathematical skills, and a deep understanding of coding languages. ## How is it different from other software disciplines? **Computer science** The main comparison is often made with computer science and how these two fields might be different from each other. the main difference is that computer science is concerned with theories and fundamentals while software engineering is concerned with solving real problems and solving them practically covering everything regarding developing and testing new software solutions. 
Both provide a comprehensive understanding of basic computer architecture and software systems. They cover topics such as programming languages, computer architecture, algorithms, databases, and artificial intelligence. Computer science focuses more on computation theory and covers more subjects pertaining to computers, whereas software engineering is concerned more with mathematical knowledge and abstract computing skills.

**Systems Engineering**

A systems engineer deals with the overall management of engineering projects during their life cycle. They follow an interdisciplinary approach of being technical while focusing on managing the project and delivering solutions to requirements. They are concerned with a much wider range of topics that requires a broader background, including computer systems, software systems, and process engineering. On the other hand, a software engineer is more hands-on in developing solutions for requirements; their main focus lies in the details of how to develop, deploy, maintain, and test in order to end up with good software.

**Computer Engineering**

The focus is on topics such as hardware, software, and electrical engineering. Computer engineering is a broad major that covers programming as only one aspect and requires mathematics, physics, and computer science. Basically, computer engineering focuses more on the physical hardware systems, whereas software engineering focuses more on implementing and maintaining software, which involves more coding instead of knowing how computer parts work in detail.

## Fundamental activities of software engineering

**1- Specification**

This includes communication, requirement gathering, and a feasibility study. It is not a primary task for software engineers, as there are usually teams dedicated to research.

**2- Development**

This includes system analysis and design, and rapid development with iterative methods.
**3- Validation**

This means making sure the developed application works according to the required functionalities and performs as expected, in line with its design objectives. This step is important and is done iteratively to ensure that the requirements are being met.

**4- Evaluation**

This includes debugging and testing the software using several testing methods like unit testing, system testing, and others. Testing is a crucial part of software engineering, as the end goal is always developing reliable and robust applications.

**5- Maintenance**

Maintenance usually costs more than development; it also includes updating the software and fixing bugs that might occur.

## Challenges of Software Engineering

- Costs
- Heterogeneity
- Business and social change
- Security and trust
- Device diversity and constant change
- Demands for reduced delivery times

## Software Application types

**- Stand-alone:** Applications that run on local computers and sometimes do not need a network to function, including all required functionalities needed from the software.

**- Interactive/Transaction-based:** They execute on remote computers; this includes web applications such as e-commerce.

**- Embedded/control systems:** Applications that control and manage hardware devices; this is the most widely deployed type of application.

**- Batch processing:** These are business systems that process large numbers of individual inputs and data.

**- Entertainment systems:** Primarily apps designed for personal use, intended to entertain the user, like games and streaming services.

**- Modeling and simulation:** Developed by scientists and engineers to model a process. These include many separate interacting objects that are then used to model different types of data.

**- Data collection:** Apps that collect data using a set of hardware sensors and send it to a system for processing.
In software, the data is collected and sent using several methods and then processed in different ways to gather knowledge from the data.

**- Systems of systems:** Systems that manage or run several different other applications or subsystems.

Software engineering teaches you how to solve problems with code; it teaches you how to think like an engineer and how to approach problems in a methodical manner. Generally, software engineers specialize in one type of software and excel on one type of platform, as specializing means you have more expertise in one thing and can therefore provide better software.
awiednoor
987,735
A Letter to my Junior Self: Imposter Syndrome
Imposter syndrome is probably one of the most common issues that new programmers face. The tech...
0
2022-02-13T09:55:45
https://dev.to/jmoreno/a-letter-to-my-junior-self-imposter-syndrome-1bn4
beginners, impostersyndrome, career, devjournal
Imposter syndrome is probably one of the most common issues that new programmers face. The tech industry can seem incredibly daunting from the outside looking in, "make sure you practice leetcode, oh and put together a portfolio, and why are you learning that framework that's old news, all the cool kids have moved on to this new framework." Then once you start the job there's a feeling of needing to keep up, to be at the same level as coworkers with much more experience than you, after all if you can't keep up then why did they hire you? Man they're probably even thinking of firing you right now aren't they? Of course this is **bullshit**. The mind likes to exaggerate, in truth you're probably doing fine and a part of you knows this, but the mind is tricky and well it's not exactly wrong. The truth is that if you're new you probably _can't_ keep up with your more experienced coworkers, but the good news is that no one really expects you to in the first place. So then how do we deal with imposter syndrome? The following is the process that I followed (I am not a medical professional and if you're truly struggling please reach out to a professional). This whole letter is based on the idea that **imposter syndrome is a learned reaction to a given situation**, it is NOT a moral defect, or flaw in your person, it is simply how we learned to cope with unfamiliar situations. So keep that in mind as you read this letter. First step is to simply observe, whenever you're going to be in a situation that has caused your imposter syndrome to flare up before, pay attention to your body, watch how it tenses up, watch how your breath changes, and more importantly **watch the thoughts that are popping up in your head**. What is the mind presenting you with at this moment? Why? This is going to be hard at first, after all you're looking directly at the things that we want to avoid but it is important because we want to be able to see what exactly it is we're afraid of. 
Then, **explore what the fear is really about**. This might sound silly but it's important to know what exactly it is we're afraid of, is it getting fired? Is it the reputational hit? Is it a deeper fear that maybe you chose the wrong career? Or that you're not smart enough to do this? Let's tackle a few of the common fears. **"I'm scared I might not know how to do this." Why?** Is it because you're **scared you'll lose your job**? Highly unlikely. If you're just starting out you were most likely hired as a junior. Your company, your coworkers, and frankly most people in tech do not expect that you know everything, we WANT to help you learn, we want to see you succeed. Why do you believe that not knowing something is a basis for being fired? Is it because you **don't want to look bad in front of your peers**? Once again try to explore where you picked up the belief that not knowing how to do something is a bad thing or would make you look bad somehow. We want to see you succeed, we want to help, and while most people don't speak openly about it I can promise you that a majority of people have felt this way. Keep in mind that developers are fundamentally a curious group of people. We tend to pride ourselves in learning new things, so if someone doesn't know everything but is trying to learn to do things they'll usually be looked on favorably. (And if your coworkers are being elitist assholes about it, fuck them, find the helpers). **Are you comparing yourself to some mythical genius programmer who knows everything?** These types of programmers may exist but in the seven years I've been in the tech industry I've never come across one, even if it looks that way sometimes from the outside. More likely they just have more experience than you and have seen more shit. _This is a good thing,_ it means you have people to learn from. If you didn't how would anyone ever progress? 
**Is it because as a kid you were scolded for not knowing something?** This one cuts a little deeper but was my personal reason for imposter syndrome, as a kid I was rewarded when I knew things and punished when I didn't. **This is where I internalized the idea that not knowing something was a bad thing.** This created a very toxic and subconscious behavior in me where I would hide away until I felt confident in doing things, after all I didn't want my parents to be upset with me. Well guess what, we're not kids anymore, no one is going to scold us, and even if someone does, we'll be fine. It might hurt, but we'll be fine. This doesn't mean that our fear is invalid, it just means that it is no longer a useful pattern of behavior. Now, once you know what your fear is _really_ about you have to take probably the hardest step: **Acceptance**. You have to accept that this fear is there, **having imposter syndrome is simply something that happens to you right now, _and that's okay_.** It shows that you care about your work, it shows that you're conscientious, these are admirable qualities. The fear feels unpleasant I know, and we instinctively want to get rid of it. But as you may have noticed pushing it away doesn't work and can even make it worse because now you have the fear and the shame towards having felt that fear. (After all no one else feels it right? What's wrong with us?) So instead, **just let it be there**. It has valid reasons for being there. In my case as a kid I was 100% going to get in trouble for low grades this was a fact of life, so now as an adult my mind was trying to avoid that and it just hadn't realized that we were no longer in that situation. It just wanted to help. So accept it, thank the mind for looking out for you and then teach it how to look out for you in helpful ways. 
This is where we get to retraining our automatic reactions, again imposter syndrome is a learned reaction to a given situation, it is NOT a moral defect, or flaw in your person, it is simply how we learned to cope with unfamiliar situations. So how do we retrain ourselves? In my experience the best way is repetition, when imposter syndrome flares up we follow the process: observe our mind, explore the fear, accept that the fear is there, know that the mind is trying to help, and then help it help us. If it starts going on about everyone else being smarter than you explain to it that this is simply not true, and even if it is true you're here to learn and grow not to compare yourself with others, so who cares? If it starts going on about getting yelled at as a kid for bad grades, comfort it, tell it that it's okay to be scared, and that you know it wants to help but you're no longer there, and if your coworker yells at you _they're_ the asshole. Over time (weeks, months, years) this replaces the automatic reaction of fear and anxiety with an automatic reaction of exploration and acceptance, which means that we'll feel imposter syndrome less and less intensely, and because we're presenting the mind with alternatives it will eventually start to present them back to us and on days where we're feeling low, the mind will remind us that this is okay. This is the process I personally followed to get rid of imposter syndrome, fear of public speaking, fear of looking like an idiot in front of my coworkers, and frankly just fear in general. It is hard but man is it worth it. Good luck, you got this. For more explorations of this I suggest the following reading list. **Reading List** 1. The Inner Game of Tennis 2. Zen Mind Beginner's Mind 3. Atomic Habits 4. How to Fail at Everything and Still win Big > When touched with a feeling of pain, the ordinary uninstructed person sorrows, grieves, and laments, beats his breast, becomes distraught. 
> So he feels two pains, physical and mental. Just as if they were to shoot a man with an arrow and, right afterward, were to shoot him with another one, so that he would feel the pains of two arrows…

https://www.accesstoinsight.org/tipitaka/sn/sn36/sn36.006.than.html

Or, if YouTube is more your bag, I would start here with HealthyGamerGG: https://www.youtube.com/watch?v=qvaB2d5yDf8
jmoreno
987,824
Optimizing MongoDB collection with 200 millions rows of data using indexing
Preface We have a MongoDB collection with almost 200 million rows. Basically, this...
0
2022-02-13T14:35:00
https://dev.to/burhanahmeed/optimizing-mongodb-collection-with-200-millions-rows-of-data-using-indexing-499i
mongodb, express, node, javascript
## Preface

We have a MongoDB collection with almost 200 million rows. Basically, this collection stores log data from an operation that I can not tell you about here. Every day ~40 million new rows are inserted, and we have 5 days of retention, which means data older than 5 days is automatically deleted. We have a `cron job` to check whether the data needs to be deleted or not. If the number of users keeps increasing, we could potentially have more than 200 million rows. We are using ExpressJS and Mongoose.

## What I want to do and the problems

Basically, the log data has an `event_type` property and we want to count the number of each event that happened in the last 24 hours per user. Here's our code:

```
const countTypeA = await LogSchema.countDocuments({
  createdAt: { $gte: new Date('<24 hour before>') },
  userId: <secret id hehe>,
  one_other_secret_filter: 'secret value',
  event_type: 'A'
})
```

We have over 20 event types, so we call the code above more than 20 times with different `event_type` values. This makes the API response take very long, and the API often returns a timeout error.

## How I do it

We figured out that the main thing we can do when working with large data is indexing.

### Indexing

Add an index for each property you use as a filter. Because we use four properties in our `count` filter, we decided to put a compound index on this `Log collection`.

```
LogSchema.index({ createdAt: -1, userId: 1, one_other_secret_filter: 1, event_type: 1 })
```

We put `-1` for `createdAt` because we want it indexed in descending order. One of our filters is `createdAt: { $gte: new Date('<24 hour before>') }`, so indexing by the latest record makes it faster. After adding a new index, MongoDB rebuilds the index, and this process runs in the background, so we are still able to read and write in the meantime. The rebuilding process took 30-40 minutes because we have a lot of data; on an empty collection the indexing would take just a second.
### Using MongoClient instead of Mongoose

We thought our job was done, but the problem still existed: our API response still returned a timeout error. After hours of researching, we found something on Stack Overflow: Mongoose is much slower than the Mongo shell. We tried it immediately: we opened our terminal, went into the Mongo shell, and ran `db.collection.count({ <FILTER> })`. TADA!! It returned the result really quickly.

We concluded that the problem might be in Mongoose. But it's impossible to replace Mongoose entirely with MongoClient, as we already have tons of modules relying on Mongoose. So we only migrated the modules that use `countDocuments()` to MongoClient; everything else keeps using Mongoose.

### Split request into small chunks

As I said above, we have 20 event types, which means we call the `countDocuments` query 20 times. Let's say 2 seconds per query; the response time would then be around 40 seconds. Can you imagine a user having to wait and watch a loading indicator for 40 seconds? That's a nightmare. Just an example:

```javascript
function (request, response) {
  const types = ['A', 'B', ..., 20]

  for (const t of types) {
    <MONGO COUNT QUERY>
  }

  res.json(<data>)
}
```

Actually, we could use `Promise.all`, but we chose to chunk the requests because some queries may still take a bit longer and that would affect the response time. You can use query search params to know which types you are going to fetch; it's actually similar to pagination. Just discuss the best approach with your frontend dev. Updated:

```javascript
function (request, response) {
  const size = 2
  const start = (request.query.page - 1) * size
  const end = start + size
  const types = ['A', 'B', ..., 20]

  for (const t of types.slice(start, end)) {
    <MONGO COUNT QUERY>
  }

  res.json(<data>)
}
```

So now it only takes 2-3 seconds per request to get the data. Happy Coding!!!
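The paging arithmetic here is language-agnostic, so it can be sketched outside the Express handler as well. A minimal sketch in Python (the `chunk_types` name is hypothetical, not from the original code); note that query params usually arrive as strings, and that the page offset is multiplied by the chunk size so pages don't overlap:

```python
def chunk_types(types, page, size):
    """Return the slice of event types that a 1-based `page` should count.

    Query params usually arrive as strings, so coerce before arithmetic.
    """
    start = (int(page) - 1) * size
    return types[start:start + size]

types = ['A', 'B', 'C', 'D', 'E']
print(chunk_types(types, '1', 2))  # page 1 counts ['A', 'B']
print(chunk_types(types, '2', 2))  # page 2 counts ['C', 'D']
print(chunk_types(types, '3', 2))  # page 3 counts ['E']
```

Keeping the slice logic in one helper like this also makes the pagination trivial to unit-test independently of the database.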
burhanahmeed
987,979
Implementing Code Input & Highlighting in Next JS Sanity Blog - Typescript
Adding code blocks to your NextJS Blog After starting to build your dev blog using Next JS...
0
2022-02-13T16:32:14
https://dev.to/hosenur/implementing-code-input-highlighting-in-next-js-sanity-blog-40he
sanity, blog, nextjs
## Adding code blocks to your NextJS Blog

After starting to build your dev blog using Next JS and Sanity, you want the most critical thing to have on a dev blog: support for code blocks. Let's get started and implement it in the blog.

**Install Code Input in Sanity**

Sanity has an official plugin to insert and use code blocks. Navigate to your sanity directory and install the plugin:

```
sanity install @sanity/code-input
```

**Add the Code Object**

Assuming you want to add code inputs to your body, add the following lines at the end of **_blockContent.js_** (located at **_sanity/schemas/blockContent.js_**), after the image section:

```
{
  type: 'code',
  title : 'Code Block'
}
```

After adding the above lines, your blockContent.js should look like:

```
{
  type: 'image',
  options: {hotspot: true},
},
{
  type: 'code',
  title : 'Code Block'
}
```

Restart your Sanity server once for the changes to take effect.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1s7paup6au8qcz16d5if.PNG)

We have completed setting up our Sanity Studio, and now you can enter code blocks. Next, let's set up code highlighting for our frontend. Start by installing the required packages in your NextJS project:

```
npm i --save @sanity/block-content-to-react
npm i react-syntax-highlighter
```

I would suggest not using the React Portable Text package, as it causes conflicts; @sanity/block-content-to-react does the same thing and makes it easy to display and edit code blocks.
**Modify your post display page as follows:** Import Sanity Block Content and React Syntax Highlighter ``` const BlockContent = require('@sanity/block-content-to-react') import SyntaxHighlighter from 'react-syntax-highlighter'; ``` If you face errors like "Could not find a declaration file for module 'react-syntax-highlighter'", run the following command in your project directory ``` npm i --save @types/react-syntax-highlighter ``` Create the serializer for code blocks as follows: ``` const serializers = { types: { code: (props: any) => ( <div className='my-2'> <SyntaxHighlighter language={props.node.language}> {props.node.code} </SyntaxHighlighter> </div> ), }, } ``` Now you should end up with something like this: ``` import { GetStaticProps } from 'next'; import { sanityClient, urlFor } from '../../sanity' import { Posts } from '../../typings'; import Header from '../../components/Header' const BlockContent = require('@sanity/block-content-to-react') import SyntaxHighlighter from 'react-syntax-highlighter'; interface Props { post: Posts } const serializers = { types: { code: (props: any) => ( <div className='my-2'> <SyntaxHighlighter language={props.node.language}> {props.node.code} </SyntaxHighlighter> </div> ), }, } ``` Finally use Sanity Block Content to display your body ``` <BlockContent blocks={post.body} projectId="xxxxxxxx" dataset="production" serializers={serializers} /> ``` You are now all set to insert code blocks to your dev blog with syntax highlighting 🔥🚀 ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8ajjsdt0vskmkru2fc6f.PNG)
hosenur
988,510
Transcription Services | Audio & Video Transcriptions - Vanan Services
Accurate transcription - online transcription services An exceptional human transcriber could cater...
0
2022-02-14T07:07:37
https://dev.to/vananservice/transcription-services-audio-video-transcriptions-vanan-services-4ohc
transcriptionservices, transcriptionservicesonline
**Accurate transcription - online transcription services**

An exceptional human transcriber can deliver an accurate and clear account of all facts and the non-linguistic conversation sounds, also known as interjections, such as “um,” as well as the non-verbal cues. Therefore, professional human transcribers enable you to achieve top-notch standards, especially with legal **[Transcription Services](https://vananservices.com/Transcription-Services.php)**, and will convert your files into exact transcripts without any additional charges.

**Understanding dialects and accents**

When the audio files include several speakers with overlapping conversations, human transcription is the best option. Furthermore, human transcription offers the alternative of referring to a supporting or glossary document whilst transcribing. Humans are also quite capable of deciphering and handling technical and industry-specific terminology masterfully. Hence, humans are the best choice for quick transcription services.

**Provide reliable and accurate data**

Human transcription services are the go-to option, at a moderate cost, when accuracy and clarity are essential. They are considered especially beneficial for the insurance industry, with trained and specialized individuals who accurately transcribe conversations, data, and interviews into trustworthy documents. In insurance documents, with a legally binding system, great accuracy and care are taken to ensure an exceptional standard of work.

**Ability to deal with background noise**

The biggest reason why human transcribers are chosen over AI mechanisms or automated transcription software is the natural ability to filter out background noise. For example, when a claimant delivers a statement or makes a claim over a call, there is a high chance that the conversation will have terrible background noise.
In such cases, human transcribers at language transcription services can filter out the background noise and deliver a transcript with total accuracy.

**How do you choose the best human transcriber?**

Are you an insurance provider or a medical practice? When it comes to excellent and advanced transcription services in the industry today, a skilled team of transcribers can serve you with the best work. Always look for a trusted transcription service among the many leading organizations across the US. If you’re looking for renowned transcription services in Spanish or any other language to upgrade your investigation claims and processes, get in touch with prime transcription consultants such as Quick Transcription Service and obtain your transcripts without any delay.

**For Details Visit:** [https://vananservices.com/Transcription-Services.php](https://vananservices.com/Transcription-Services.php)
vananservice
988,735
Why String is Immutable in JAVA? What exactly does it mean?
So recently I came across the question, why are Strings Immutable in java? Before discussing on this,...
0
2022-02-14T20:32:14
https://dev.to/tishaag098/why-string-is-immutable-in-java-what-exactly-does-it-mean-59ki
java, beginners, discuss, programming
So recently I came across the question: why are Strings immutable in Java? Before discussing this, I would first like to explain **what immutable strings are.**

> Immutable simply means unmodifiable or unchangeable. Once a String object is created, its data or state can't be changed; instead, a new String object is created.

Before going through this article, I recommend first reading about the [String Constant Pool](https://dev.to/tishaag098/string-constant-pool-storage-of-strings-1784).

**This is how String works:**

```
String str = new String("Tisha");
```

This, as usual, creates a string containing "Tisha" and assigns it a reference `str`.

```
str = str.concat(" Agarwal");
```

![immutable](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/338m4oskhv5z49l5gklo.jpeg)

This appends the string " Agarwal" to `str`. But wait, how is this possible, since **String objects are immutable**? Well, to your surprise, it is. When the above statement is executed, the VM takes the value of the String `str`, i.e. "Tisha", and appends " Agarwal", giving us the value `"Tisha Agarwal"`. Now, since Strings are immutable, the VM can’t assign this value to `str`, so it creates a **new String object**, gives it the value `"Tisha Agarwal"`, and points the reference `str` to it.

_Why are strings immutable?_

_Let's understand this with the help of an example:_

```
String chocolate1 = "KitKat";
String chocolate2 = "KitKat";
String chocolate3 = "KitKat";
```

![image2](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/epy8d45r59yokly3zcfb.jpeg)

So what's happening? Here, an object `"KitKat"` is created inside the String Constant Pool, and all three variables `chocolate1`, `chocolate2`, `chocolate3` point to it.
Now if we change `chocolate3` to "MilkyBar":

```
chocolate3 = "MilkyBar";
```

Suppose the string were mutable. Then, without creating a new object, the value of `chocolate3` would simply change from `KitKat` to `MilkyBar`. But don't you think this would also change the value of `chocolate1` and `chocolate2` from `KitKat` to `MilkyBar`, as shown in the above diagram, since all three were pointing to the same object? That's why Strings are immutable: in the above example, `chocolate3` now creates a different object and points to it.

![Image3](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0nawikstltzbws2tv2fl.jpeg)

**Point to Remember**

- Strings are immutable in Java because String objects are cached in the String pool. Since cached String literals are shared between multiple clients, there is always a risk that one client's action would affect all the others.
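Java isn't alone here. As a quick cross-language illustration (Python strings are immutable in exactly the same sense), concatenation never modifies the original; it builds and returns a new object:

```python
s = "Tisha"

# Concatenation never modifies s; it builds and returns a new string object.
appended = s + " Agarwal"

print(s)              # Tisha (unchanged)
print(appended)       # Tisha Agarwal
print(s is appended)  # False: two distinct objects
```

The original reference keeps pointing at the old value, just like `str` did in the Java example above until it was explicitly reassigned.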
tishaag098
988,739
PWNKit - "I'm root now" - Less than a second
We look into the PWNKit vulnerability and how it works. PWNKit is a Linux exploit that has been in Linux environments for over 12 years. We look at how it works, what is required to be vulnerable, and different approaches to secure your system.
0
2022-02-14T17:25:07
https://dev.to/kalaspuffar/pwnkit-im-root-now-less-than-a-second-3a2p
---
title: PWNKit - "I'm root now" - Less than a second
published: true
description: We look into the PWNKit vulnerability and how it works. PWNKit is a Linux exploit that has been in Linux environments for over 12 years. We look at how it works and what is required to be vulnerable and different approaches to secure your system.
tags:
cover_image: https://i.ytimg.com/vi/TcZft7eWO4U/maxresdefault.jpg
---

{% youtube TcZft7eWO4U %}

We look into the PWNKit vulnerability and how it works. PWNKit is a Linux exploit that has been in Linux environments for over 12 years. We look at how it works, what is required to be vulnerable, and different approaches to secure your system.
kalaspuffar
988,824
Mistakes that Junior Software Developers Make While Having an Interview
All of us have mistakes, mistakes that we’re not proud of. This is especially true in the field of...
16,838
2022-02-14T14:01:21
https://techmaniac649449135.wordpress.com/2022/02/14/mistakes-that-junior-software-developers-make-while-having-an-interview/
webdev, programming, tutorial
All of us have made mistakes, mistakes that we’re not proud of. This is especially true in the field of software development, where mistakes are costly and can sometimes even get you fired. However, mistakes are also opportunities for growth, growth which happens when you reflect on your mistakes so you can avoid them in the future. In this article, I will be discussing mistakes that I’ve personally made, mistakes which (I believe) Junior Software Developers make frequently.

## These mistakes are not listed in any particular order.

## Improper understanding of what they should do

Perhaps the most common mistake developers tend to make is having an improper understanding of what they should do during their interviews. It can be hard to know what you’re supposed to do during an interview if this is your first time ever being interviewed, but it’s still necessary for you to try your best to have a general idea. Interviewing for a new job is important. You want to make sure that you at least have some idea of what you’re supposed to do.

## Not understanding why they are interviewing

You might know that companies hire people all the time without conducting any type of formal interview whatsoever. An example of this would be getting hired as an intern, or maybe even being taken on after graduating from college or high school. However, there are also plenty of places that will conduct interviews before hiring somebody. Not every company does the same thing when it comes to their process (interview(s) vs. test(s)).

## The interview format/process

The mistakes mentioned in the previous point tie into this one, which is something that Junior Developers often don’t understand. Different companies have different formats and processes for their interviews. For example, some companies have a “take-home” test, whereas others have a live coding session or a live whiteboard session. The difference between these means of testing a candidate’s programming skills comes down to proving a point.
Some employers just want to see what you can do when being tested in your own environment. Other employers want to see how you perform under pressure. The takeaway from this mistake is simply for you to be aware of what type of interviewing process the company has, so that, whether it be good or bad, you will be prepared for it.

## Not knowing their interviewer(s)

This ties back in with the previous mistakes that Junior Software Developers tend to make while having an interview. Not knowing who your interviewer(s) are can lead to mistakes if you don’t do any research beforehand. It’s important that when looking for jobs, you at least have an idea of who is doing the interviewing, so that if questions arise during the course of your interview, you know what types of answers would best fit their style and preferences. For example, a company might have a person conducting interviews for them who has a background in mathematics but doesn’t really know how to program at all. Another company might have somebody else doing their interviews who learned programming from scratch and is more than happy to ask you questions on higher-level concepts.

These are mistakes that we all make while having an interview, simply because interviews don’t occur very often. We need to understand them better in order to know how to avoid them. I hope you find this article helpful, and I wish you the best of luck in your next interview!
techmaniacc
988,852
Find/Extract Features by largest size (aggregate) 🌎
Say you have multiple shapes within parent shapes; in this case, buildings that belong on parcels of...
0
2022-02-14T15:01:20
https://dev.to/dudeastronaut/findextract-features-by-largest-size-aggregate-28m0
qgis, gis
Say you have multiple shapes within parent shapes; in this case, buildings that belong on parcels of land. We can use the `maximum()` expression in QGIS to find the largest building on each parcel.

## Prerequisites:

- **Keys**: Each building shape needs a unique ID and a foreign key that associates it with the parcel (its parent). In this case I found the `Pole Of Inaccessibility` for each building and used that point in a `Join by Location` against the parcels to create a foreign key, establishing a relationship between the buildings and parcels.
- **Size**: Each building needs a field that defines its area.

## The Expression:

###### *We use the `Extract by Expression` tool with the parcels layer selected as the input layer.*

```
"SHAPE_AREA" = maximum("SHAPE_AREA", group_by:="PARCEL_ID")
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/66fpoq0cti0u8ar38gma.png)

## Notes:

- **Expression Breakdown**: All that needs to be replaced is `SHAPE_AREA` and `PARCEL_ID`. Keep the remaining syntax the same. I was not familiar with the `group_by:=` syntax and thought it was a note in the documentation I read, but it is literally part of the function. Image showing the function below.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w120jqmxonxbovsmod3j.png)

- **Processing Time**: In my tests I found that the processing time was not linear: comparing 1000 features does not always take 10 times as long as 100 features. I believe the growth is roughly exponential, meaning the more features you add to the comparison, the longer the processing takes, by some factor I can't state exactly. If you have a large dataset that you are running this expression against (>50,000 features), I would recommend splitting the layer into chunks, running the expression on each, and then merging.
You should be able to split the layer without duplicates but if you want to be sure you can always run a `Remove Duplicates` expression after the merge at the end. > ~56,000 features took about 110 minutes to process.
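The group-wise maximum that the expression computes can also be sketched in plain Python, for example if you ever want to do the same thing in the PyQGIS console or on exported attribute data. The records below are hypothetical, with field names mirroring the ones above:

```python
def largest_per_parcel(buildings):
    """Keep only the largest building per parcel, mirroring
    "SHAPE_AREA" = maximum("SHAPE_AREA", group_by:="PARCEL_ID")."""
    best = {}
    for b in buildings:
        current = best.get(b['PARCEL_ID'])
        if current is None or b['SHAPE_AREA'] > current['SHAPE_AREA']:
            best[b['PARCEL_ID']] = b
    return list(best.values())

buildings = [
    {'id': 1, 'PARCEL_ID': 'P1', 'SHAPE_AREA': 120.0},
    {'id': 2, 'PARCEL_ID': 'P1', 'SHAPE_AREA': 340.0},
    {'id': 3, 'PARCEL_ID': 'P2', 'SHAPE_AREA': 75.0},
]
print([b['id'] for b in largest_per_parcel(buildings)])  # [2, 3]
```

This single pass over the features is linear in the number of buildings, which is part of why pre-chunking a very large layer before running the QGIS expression can pay off.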
dudeastronaut
1,023,056
Difference Between DMM and Oscilloscope
DMM vs Oscilloscope When dealing with electricity, it is essential that you have the right tools in...
0
2022-03-15T05:13:59
https://dev.to/pasanabeysekara/difference-between-dmm-and-oscilloscope-5dgk
electrical
**DMM vs Oscilloscope**

When dealing with electricity, it is essential that you have the right tools in order to get the right information. Two tools that you can use to deal with electricity are the oscilloscope and the DMM, which stands for Digital Multimeter.

The main difference between the DMM and the oscilloscope is what they can do. The DMM is a very versatile tool that can measure voltages, currents, and resistances, and some can even check whether diodes and transistors work. In contrast, an oscilloscope only measures voltage, but in a lot more detail. The thing that the oscilloscope can do that the DMM cannot is actually inspect how the voltage changes over time. This is very useful in electronics when you are inspecting signals. You can easily tell the waveform of the voltage, whether it’s a sine wave, a square wave, a sawtooth wave, and the like. The DMM can only show the average voltage level, so you cannot really inspect a signal in detail.

Another feature of the oscilloscope is the ability to plot two voltages together, thus providing the ability to compare them. So if you have an input and an output, you can have them both on-screen so that you can see the changes that your circuit has made in real time.

Lastly, there is the matter of cost. The price may vary from place to place, but the price of an oscilloscope will always be a lot more than the price of a DMM. The difference can reach 10 times or even more.

The oscilloscope and the DMM are specialized tools that have their own uses. The DMM is more common, as it is used in a lot more areas, like checking for shorts, checking voltages, measuring currents, and a lot more. The DMM is essentially a basic and necessary tool for any electrician or electronics enthusiast. The oscilloscope is a more advanced tool that should only be used by more experienced practitioners.

Summary: 1. A DMM measures a lot of things while an oscilloscope just measures voltage 2. An oscilloscope is able to show waveforms while a DMM cannot 3.
An oscilloscope is used mostly for electronics while a DMM is used for a lot of things 4. An oscilloscope is able to measure two voltages at the same time while a DMM can only show one 5. An oscilloscope costs a lot more than a DMM
pasanabeysekara
988,854
Working As A Professional Developer - 10 Ways Developers Make Money Coding Course - E08
Many companies hire people remotely to create websites and applications for them. If your skill is...
16,230
2022-02-14T14:39:42
https://danfleser.com/posts/20220204025040-working-as-a-professional-developer-10-ways-developers-make-money-coding-course-e08
career, programming, leadership, tutorial
Many companies hire people remotely to create websites and applications for them. If your skill is good, then you can earn up to $500K/year by working full-time in these companies. Traveling and working remotely part-time is another way of living. In this video series, I share the best ways and how to increase your money income as a web developer while working from home. 📽️ Video here - https://youtu.be/uJKVk7cv3Dw 👉 Subscribe to Dan Later - https://bit.ly/31SdiZv 💻 Personal website post - https://danfleser.com/posts/20220204025040-working-as-a-professional-developer-10-ways-developers-make-money-coding-course-e08
danfleser
989,136
contractsPY — Python library for handling business transactions with railway-oriented approach
contractsPY — Python library for handling business transactions with railway-oriented approach In...
0
2022-02-14T17:46:49
https://dev.to/arzuhussein/contractspy-python-library-for-handling-business-transactions-with-railway-oriented-approach-3i9o
python, django, softwarepattern
[contractsPY](https://github.com/arzuhuseyn/contractsPY) is a Python library for handling business transactions with a railway-oriented approach. In this tutorial, I’ll show you how to handle business transactions properly in Django (this also applies to other frameworks).

```
class CreateUserDjango(APIView):
    serializer_class = UserSerializer

    def post(self, request):
        username = request.data.get('username')
        password = request.data.get('password')

        if not username or not password:
            return Response({'error': 'Input values are not valid.'})

        user = user_repo.generate_user(username, password)
        data = {'error': 'User not generated'}

        if user:
            data = {'error': 'User already exists'}
            user_not_exists = user_repo.user_exists(username)
            if not user_not_exists:
                user_created = user_repo.save_user(user)
                if user_created:
                    data = self.serializer_class(user).data
                else:
                    data = {'message': 'User not persisted.'}

        return Response(data)
```

Look at this code. What do you see? Yes, it’s a Django Rest Framework APIView class that is intended to create a new user in our project. But I want you to focus on the details. Is it readable code? Is it handling all the edge cases you might have? (Probably not.) What if your manager said that we need to add new functionality to this particular view? What are you going to do?

Let’s assume that you’re the first developer who wrote the code. You know what to do here, right? This is a familiar codebase and you can update it as your manager requests. But what if you’re the new person on the team and your manager wants you to update this code? Well, you know what I mean. We have all been there. Writing this kind of code is nasty and makes the development process expensive and complex. Projects written like this are a pain in the ass. But can we find a solution for this? Yeah, people have been developing software projects for the last 50 years.
Some bright-minded people have offered tons of useful patterns and approaches for us to develop things properly, and proper implementation of those patterns and approaches is life-saving. Recently, I developed a tiny Python library for handling business transactions in my own projects. Then I decided to share it with other developers as a way to give back to our beloved Python community.

```
class CreateUserContracts(APIView):
    """
    Simple APIView to create a user with contractsPY.
    """
    serializer_class = UserSerializer

    def post(self, request):
        username = request.data.get('username')
        password = request.data.get('password')

        result = create_user.apply(username=username, password=password)

        if result.is_success():
            data = self.serializer_class(result.state.user).data
        else:
            data = {'message': result.message}

        return Response(data)
```

Now, look at this code. What do you see? This is the same view as the previous one, except there are not too many details here. In this code, the view is the only interface for our actual business transaction: it gets data from users and returns manipulated data, without manipulating the data in the view itself. The second approach is easy to implement and makes our codebase readable, extensible, and testable. I can hear you saying, “Yeah, that’s nice, and I can implement a specific object for handling my use-case. Why do I need an additional dependency for stuff like that?”. Well, I’m glad you asked.

**Railway-Oriented Programming**

The [contractsPY](https://github.com/arzuhuseyn/contractsPY) library uses the Railway-Oriented Programming approach to handle your business transactions. But what is this approach? What does it mean, after all? Every step (function) in your project has two possible returns: Success or Failure. All systems are basically built on these returns, like Lego. If you want to handle every possible scenario in your system, the railway approach helps you do it. I think its name tells a lot. I don’t want to go deeper into this topic.
But you should watch this YouTube video and learn about it if you don’t know about it yet. In the first code example, we saw multiple if-else statements in our view. Remember, this is a simple register API. If you want to implement a real-world registration API, there will be many more conditional statements in your view class, because there could be different types of inputs to validate, check, and create. On the other hand, in our second code example, there are no checks, validations, or DB operations. But where are these conditional statements? Well, actually we don’t need to write them, at least not like the first code example.

```
from contractsPY import if_fails, Usecase
from users.repository import UserRepository

user_repo = UserRepository()

@if_fails(message="Input values are not valid.")
def validate_inputs(state):
    if state.username and state.password:
        return True
    return False

@if_fails(message="User already exists.")
def validate_user_exists(state):
    exists = user_repo.user_exists(state.username)
    return True if not exists else False

@if_fails(message="User not generated.")
def generate_user(state):
    state.user = user_repo.generate_user(state.username, state.password)
    return True if state.user else False

@if_fails(message="User not persisted.")
def persist_user(state):
    user_created = user_repo.save_user(state.user)
    return True if user_created else False

# Usecase 1 - Create user
create_user = Usecase()
create_user.contract = [
    validate_inputs,
    validate_user_exists,
    generate_user,
    persist_user,
]
```

This is the [contractsPY](https://github.com/arzuhuseyn/contractsPY) implementation for the second code example. I want you to focus on the initialization of the Usecase class and the setting of the contract. With [contractsPY](https://github.com/arzuhuseyn/contractsPY), you can create a chain of simple Python functions that accept only one argument (state) and return a boolean value.
Also, consider that you can use these simple functions in other use-cases too, so we can say these functions are reusable components. The functions are called in order and change the current state of the use-case until it finishes. If every function returned True, it means that your use-case was successful.

```
>>> Result(state={'username': 'johndoe', 'password': 'foobar', 'user': User(username=johndoe, password=foobar)}, case=success, message='Ok')
```

This is the result object. You can see three different fields: state data, case, and message. After a successful transaction, you can use the state values to return the desired data (in this case, the user value).

```
>>> Result(state={'username': 'johndoe', 'password': 'foobar', 'user': User(username=johndoe, password=foobar)}, case=error, message='User exists.')

result.state = {'username': 'johndoe', 'password': 'foobar', 'user': User(username=johndoe, password=foobar)}
result.case = error
result.message = 'User exists.'
```

What if our transaction fails? You have seen the if_fails decorator in the previous example. With the help of this decorator, you can print human-readable messages. This decorator is completely optional, and the [contractsPY](https://github.com/arzuhuseyn/contractsPY) library doesn’t force you to use it.

Thanks for reading. All feedback, contributions, stars, and forks are more than welcome.

[Project Github](https://github.com/arzuhuseyn/contractsPY)
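If you want a feel for the railway idea itself, it can be sketched in a few lines of plain Python. This is only an illustrative sketch, not the contractsPY implementation, and every name in it is made up: each step receives the shared state and either succeeds or fails, and the chain stops at the first failure.

```python
from types import SimpleNamespace

def run_contract(steps, **initial_state):
    """Call each step in order with a shared state object.

    Each step returns (ok, message); the chain stops at the first failure.
    """
    state = SimpleNamespace(**initial_state)
    for step in steps:
        ok, message = step(state)
        if not ok:
            return SimpleNamespace(state=state, case='error', message=message)
    return SimpleNamespace(state=state, case='success', message='Ok')

def validate_inputs(state):
    if state.username and state.password:
        return True, ''
    return False, 'Input values are not valid.'

def generate_user(state):
    state.user = {'username': state.username}
    return True, ''

result = run_contract([validate_inputs, generate_user],
                      username='johndoe', password='foobar')
print(result.case)  # success
```

Because every step has the same tiny shape, the later steps never need to know whether an earlier one failed; that branching lives in the runner once, instead of being repeated in every view.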
arzuhussein
989,344
A Git Individual Workflow WalkThrough
Git and its workflows can be initially hard concepts to wrap our heads around. Git is a version...
0
2022-02-14T21:06:09
https://dev.to/adelinealmanzar/a-git-individual-workflow-walkthrough-po7
git, github, tutorial, beginners
Git and its workflows can be initially hard concepts to wrap our heads around. Git is a version control system that lets us keep track of and manage code history. Github is a cloud-based hosting service that helps to manage Git repositories (repos). Some cool things about Git are that it's based on a branching model and it's mostly local. Github, on the other hand, is exclusively remote/cloud-based, which allows different people to view, clone, and collaborate with our code.

### Flow of Data with Git and Github

Our workflows are going to follow the interaction between our machine (local changes via Git) and the hosting service (remote changes stored via Github history). For the purposes of simplicity, from this point onwards we're going to use 'local' and 'remote' keywords to refer to our local machines and remote repositories/branches.

Data can be considered within two types:
(a) dynamic instance data, such as code files and their contents
(b) static metadata, such as our branches and commits with user information (or 'git blame' data)

At a high level we could say that our data moves from local locations to remote locations and vice versa

![Movement of Data Flow Chart](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wfxjkm004tzkx38wi1ao.jpg)

### Walkthrough of personal workflow

Now what happens when we make changes locally, and how do we do that? After we copy the remote's repository with `git clone`, we are able to work within Git's branching system locally. When we `git pull`, we're pulling a copy of a specific branch from the remote repository to our local workspace. If there haven't been any changes to the remote repository, then when we `git pull` we will get an 'Already up to date' message.

```
user@ repo-name % git pull origin main
From github.com:adelinealmanzar/repo-name
 * branch            main       -> FETCH_HEAD
Already up to date.
```

We should `git pull` _only_ the remote branch name that matches the local branch we are currently in.
We can check which branch we're currently located in with `git branch`.

```
user@ repo-name % git branch
* master
```

Imagine your master or main branch is the trunk of a tree. Whatever branches we create out of that trunk will be almost like draft branches, because they will only exist in our local workspace and then eventually in our local repository, where those draft branches will be reviewed and edited. When we `git checkout -b any-new-branch-name` we're creating a new local branch off of the master/main root branch and moving to that branch.

```
user@ repo-name % git checkout -b any-new-branch
Switched to a new branch 'any-new-branch'
```

We can run `git branch` again to sanity-check that we're in fact in our new branch

```
user@ repo-name % git branch
* any-new-branch
  master
```

Our new branch will end up tracking the version history of our changes via commits. But before we commit to our official remote repository history, we must stage those changes in our index. We can check which files are staged via `git status`. `git status` displays files that have (or have not) been changed in the working directory and the staged directory. If we've made no changes to any files, running `git status` will tell us that we have a clean working tree

```
user@ repo-name % git status
On branch any-new-branch
nothing to commit, working tree clean
```

So let's add some code changes and then see what `git status` looks like.

```
user@ repo-name % git status
On branch any-new-branch
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git restore <file>..." to discard changes in working directory)
	modified:   src/components/Header.js
	modified:   src/components/MainContainer.js

no changes added to commit (use "git add" and/or "git commit -a")
```

Any modification that we make to a file will show up under our workspace area (located after 'changes not staged for commit' in red).
In order to add those changes to our index, we must run `git add <file-name>`.

```
user@ repo-name % git status
On branch any-new-branch
Changes to be committed:
  (use "git restore --staged <file>..." to unstage)
	modified:   src/components/Header.js

Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git restore <file>..." to discard changes in working directory)
	modified:   src/components/MainContainer.js
```

`git add` adds new or modified files (from our workspace) to our staging area (located after 'changes to be committed' in green). `git add` gives us the benefit of choice in deciding which files and changes we would like to commit. Based on Github's Git Guides, commits should be logical, atomic units of change. Tech teams usually organize their commits based on whichever rule of logic their organization/company practices. An example rule may be that each commit must contain all changes made to an individual file, or that each commit must bucket one specific value within a feature. Why implement logical rules for commits? A clean commit history is usually easy to read, helpful for debugging, and helpful for understanding which changes we might end up needing to revert. To add our changes to an individual commit and create an easy-to-understand commit message, we run `git commit -m "any commit message"`

```
user@ repo-name % git commit -m "perf: cleanup/removed debugging comments"
[any-new-branch 71a72da] perf: cleanup/removed debugging comments
 1 file changed, 1 insertion(+), 1 deletion(-)
```

Commit messages should describe what kind of change we've made to our code. In the above example, our commit message indicates that our changes included file cleanup: removing some pseudo-code that was no longer needed. The `perf` header indicates that the change we made was a performance improvement.
This `perf` header is one of several semantic-release headers and footers that we can add to our commit message to indicate the type of change we've made to our code (feature addition, fix, performance improvement, or breaking change). To read more on semantic headers, you can [read more here](https://semantic-release.gitbook.io/semantic-release/). After we've committed our changes, we can `git log` to view our commit history and sanity check that our commit is in the right place in history. The log lists commits from most to least recent, so our most recent commit should be at the top, which we can see in the code below.

```
commit 71a72da43ca6cdd9774c8abf00a96fb3f6c6d0b9 (HEAD -> any-new-branch)
Author: Adeline Almanzar <almanzar.adeline@gmail.com>
Date:   Sun Feb 13 11:31:50 2022 -0800

    perf: cleanup/removed debugging comments

commit 84c056344662503041c6fd3d5365f805dfca08e8 (master)
Author: Adeline Almanzar <almanzar.adeline@gmail.com>
Date:   Mon Feb 7 20:38:54 2022 -0800

    third deliverable
```

Each commit in the log will have a commit ID, information on the author of the commit, the date & time of the commit, and the commit message. Now that we've committed our changes and done our sanity-checking, at this point we can keep going through the changes workflow to continue committing our changes to our local repository. Once our local repository is at a ready-enough state for our team's review and/or collaboration, we can push our changes to the remote repository with `git push origin <branch-name>`.

```
user@ repo-name % git push origin any-new-branch
Enumerating objects: 29, done.
Counting objects: 100% (29/29), done.
Delta compression using up to 8 threads
Compressing objects: 100% (28/28), done.
Writing objects: 100% (28/28), 286.05 KiB | 3.11 MiB/s, done.
Total 28 (delta 0), reused 0 (delta 0), pack-reused 0
To github.com:adelinealmanzar/repo-name.git
   84c0563..71a72da  any-new-branch -> any-new-branch
```

`git push` will send all of our local repository's commits and branches to the remote repository (in Github). When we sign into Github and visit the remote repository, we should be able to create a new pull request (PR). We can think of our pull request as a form that we submit to our team with the request to officially merge our changes into the remote branch we originally branched off of. PRs enable us to share the purpose of our code changes in layman's terms. They also give our team members access to review our changes and make changes themselves (stay tuned for my next blog post on collaborative workflows).

## TLDR: Personal Workflow Summary

1. `git clone repo-link && cd repo-name`: make a clone/local copy of the remote repository and move into it
2. `git branch`: view current branch & view all branches in our local repository
3. `git pull origin master`: pull remote master branch data to our local master branch
4. `git checkout -b new-branch-name`: create a new local branch and move to that branch
5. `git branch`: sanity check that we're in the proper new branch
6. Make code changes
7. `git status`: shows what files have had changes made & shows whether our files have been staged or not (green for yes, red for no)
8. `git add filePath/wanting/toStage.txt`: add files, via their file paths, to the staging/index area
9. `git status`: sanity check that only preferred files are staged/added to index
10. `git commit -m "any commit message"`: snapshot currently staged changes via a singular commit & write a concise message
11. `git log`: view our commit history to sanity check that our most recent commit is at the top of the history list
12. `git push origin new-branch`: push our local new-branch and its data to the remote space (Github)
13. Create pull request on Github
14.
`git branch`: sanity check that current branch is still new-branch
15. `git pull origin new-branch`: pull (to our local new-branch) any changes that might have been added from the remote new-branch
16. (a) If making changes to the same branch, repeat steps 6-12
    (b) If making changes to a different branch, repeat steps 5-12

#### Resources:
1. [Another git workflow & commands explanation](https://krishnaiitd.github.io/gitcommands/git-workflow/)
2. [A git commands cheat sheet](https://education.github.com/git-cheat-sheet-education.pdf)
3. [Git Guides](https://github.com/git-guides/)
adelinealmanzar
989,386
100 days of code: 69, a funny number, and Valentine's Day
Hey hey hey! Happy Valentine's Day! It's funny that Valentine's Day and day 69 of the challenge coincided and...
0
2022-02-14T23:34:17
https://dev.to/darito/100-dias-de-codigo-69-numero-gracioso-y-san-valentin-1ll4
spanish, 100daysofcode, webdev
Hey hey hey! Happy Valentine's Day! It's funny that Valentine's Day and day 69 of the challenge coincided; it almost seems like I did it on purpose as a reference. I'm back after a few days away due to mental fatigue brought on by several personal reasons. I know I'm not the best person to say this, but don't forget to take some rest from time to time, to avoid ending up in a situation where you have to rest whether you like it or not.

Today I picked the challenge back up and made progress on my TypeScript test project, and I ran into a problem I had never encountered in React. React wasn't recognizing a change in state and therefore wasn't re-rendering, even though the change was actually happening. The reason is that I had created a state with the useState hook that was too deeply nested; that is, an object containing other objects. When I changed those nested objects, React on its own didn't recognize the new state and therefore didn't re-render. To solve it, I split that state into several separate states so React wouldn't lose the ability to see changes.

### Today

- Improved some styles of the components in my TypeScript test project.
- Started a Redux course.
- Added the option to delete an item from the list in my test web app.
- Practiced touch typing.

I hope you have a pleasant day with your loved ones. See you soon!

Cover photo by [Kelly Sikkema](https://unsplash.com/@kellysikkema?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) on [Unsplash](https://unsplash.com/s/photos/valentine-day?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText)
darito
989,388
Paracetamol.js💊| #67: Explain this JavaScript code
Explain this JavaScript code const persona = { id: 1, ...
16,071
2022-03-02T02:57:50
https://dev.to/duxtech/paracetamoljs-67-explica-este-codigo-javascript-393b
javascript, spanish, beginners, webdev
## **Explain this JavaScript code**

```js
const persona = {
  id: 1,
  nombre: "Fernando",
};

persona.nombre = "Pedro";

console.log(persona.nombre);
```

- A. `Pedro`
- B. `Fernando`
- C. `null`
- D. `TypeError`

The answer is in the first comment.

---
duxtech
990,180
4 Reasons why I start using Tailwind CSS in every project
I used to use UI frameworks a lot — Bootstrap, Material UI, Ant Design… Yes they are convenient and...
0
2022-02-15T15:02:33
https://dev.to/jeffreythecoder/4-reasons-why-i-start-using-tailwind-css-in-every-project-5b72
tailwindcss, webdev, css
![tailwind-css](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yodlhan25zp7yrlez9xq.png)

I used to use UI frameworks a lot — Bootstrap, Material UI, Ant Design… Yes, they are convenient, and I only need to write a few lines of CSS in a whole project. But the problem is: you can see a website is clearly built upon Bootstrap, like many others out there. UI frameworks have components so strictly defined that it's hard to customize them to the unique styles of my website. They make my website look less like "mine" and more like it came out of a popular template that everyone is using. Once I picked up Tailwind CSS last year, I started using it in every project of mine, and reconstructing existing projects with it. Although it's a CSS framework, it allows me to write customizable styles as simply as those UI frameworks, while behaving the same as coding in plain CSS. Here are 4 reasons why I love using it and why it's so easy to use.

## Easy inline styling

Instead of writing blocks of plain CSS and adding annoying selectors to HTML elements, Tailwind lets you add styles on HTML elements directly. It's like the style attribute in JSX, but with a more concise syntax. For example, a styled avatar in regular HTML and CSS is

```
<img src="image.png">

img {
  width: 1.5rem;
  height: 1.5rem;
  border-radius: 100%;
}
```

whereas in Tailwind it is

```
<img src="image.png" class="w-6 h-6 rounded-full">
```

## Predefined property value

Tailwind sets the value of a CSS property in several predefined formats so that your UI follows a certain pattern. This avoids inconsistent property values across different components, and saves you the time of defining global patterns for the properties yourself.

![border-radius predefined value](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/42j3mxskrholmb0trh3p.png)

## Highly customizable

Although Tailwind has predefined property values, it allows you to customize your own property values.
```
// tailwind.config.js
module.exports = {
  theme: {
    borderRadius: {
      'none': '0',
      DEFAULT: '4px',
      'large': '12px'
    }
  }
}
```

A more common use case is to globally define a style class, like a CSS selector. This saves you from writing the same style for different components, reusing it in various scenarios.

```
// index.css
@layer components {
  .hover-transition {
    @apply transition duration-200 ease-out;
  }
}

// myComponent.html
<div class="w-10 h-6 hover-transition">
  hover me
</div>
```

## Handle responsiveness easily

When CSS first came out, it didn't incorporate responsive styles for different screen sizes. Media queries were not introduced to CSS until 2012, yet their syntax is still tedious when writing a whole responsive application.

```
button { display: none }

@media screen and (min-width: 768px) {
  button { width: 12px }
}

@media screen and (min-width: 1024px) {
  button { width: 16px }
}
```

Tailwind makes responsive styling easy: just add a screen-size prefix in front of each property.

```
<button class="hidden md:w-12 lg:w-16">
  hit me
</button>
```

Makes life much easier, right?
jeffreythecoder
990,200
Performance Testing using Iter8
There are umpteen performance testing tools available in the commercial market as well as in the open...
0
2022-02-15T15:46:53
https://qainsights.com/performance-testing-using-iter8
kubernetes, performance, testing, tutorial
<!-- wp:paragraph -->
<p>There are umpteen <a href="https://github.com/QAInsights/Performance-Testing-Tools" target="_blank" rel="noreferrer noopener">performance testing tools</a> available in the commercial market as well as in the open source repositories. Based on our requirements, we can choose the best tool from the arsenal. Recently, Go based performance testing tools have been exploding in the open source world. The Go runtime provides very light-weight goroutines which execute tasks quickly and efficiently. In this blog post, we are going to look at Iter8 - a simple Go based performance testing tool which validates SLOs, performs chaos testing, and more.</p>
<!-- /wp:paragraph -->
<!-- wp:heading -->
<h2 id="iter8">Iter8</h2>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>There are a couple of things you should be aware of. Iter8's (pronunciation - Iter-eight) primary target audience is DevOps, MLOps, Developers, Performance Engineers and Testers, in that order.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p><a href="https://github.com/iter8-tools/iter8/" target="_blank" rel="noreferrer noopener">Iter8</a> is a command line interface tool which sends loads of requests to the target URL, then evaluates the SLOs and, optionally, generates an HTML report.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>Iter8 gels with CI/CD and GitOps pipelines and is apt for the Kubernetes ecosystem.</p>
<!-- /wp:paragraph -->
<!-- wp:heading -->
<h2 id="getting-started-with-iter8">Getting Started with Iter8</h2>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>Installing Iter8 is simple and easy. There are multiple methods you can follow to install Iter8: via binaries, Brew, from the source, or using Go.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>I am in the Windows ecosystem, so I am going to use Go to install Iter8. The only prerequisite in this method is to have the latest version of Go installed. 
</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>Download Go from https://go.dev/dl/ and install it.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>To validate the installation, open Windows Terminal or Command Prompt and enter <code>go version</code>.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>To install Iter8, enter <code>go install github.com/iter8-tools/iter8@latest</code></p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>To validate Iter8 installation, enter <code>iter8 version</code></p> <!-- /wp:paragraph --> <!-- wp:image {"align":"center","id":9082,"sizeSlug":"full","linkDestination":"media"} --> <div class="wp-block-image"><figure class="aligncenter size-full"><a href="https://qainsights.com/wp-content/uploads/2022/02/image-11.png"><img src="https://qainsights.com/wp-content/uploads/2022/02/image-11.png" alt="Validating Iter8 - Performance Testing using Iter8" class="wp-image-9082"/></a><figcaption>Validating Iter8 - Performance Testing using Iter8</figcaption></figure></div> <!-- /wp:image --> <!-- wp:heading --> <h2 id="run-an-experiment">Run an Experiment</h2> <!-- /wp:heading --> <!-- wp:paragraph --> <p>The next step is to run a performance test against a URL. In Iter8, an experiment consists of an <code><strong>Experiment Chart</strong></code> (similar to Helm charts) and user inputs via command line.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p><strong>Experiment Charts</strong> are available in the Iter8 GitHub repo e.g. load test experiment is located at https://github.com/iter8-tools/iter8/tree/master/hub/load-test-http which is called <strong>Iter Hub</strong>. 
</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>Right now, Iter8 has the following experiments: load-test-http and load-test-grpc.</p> <!-- /wp:paragraph --> <!-- wp:heading --> <h2 id="hello-world-experiment">Hello World Experiment</h2> <!-- /wp:heading --> <!-- wp:paragraph --> <p>Let us begin with a simple load test using <code>load-test-http</code> chart. </p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>Enter the below command in your terminal and hit enter. This command will create a directory and download the experiment from Iter8 Hub.</p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p><code>iter8 hub -e load-test-http</code></p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>cd into <code>load-test-http</code></p> <!-- /wp:paragraph --> <!-- wp:paragraph --> <p>Below is the content in <code>load-test-http</code> folder.</p> <!-- /wp:paragraph --> <!-- wp:image {"align":"center","id":9084,"sizeSlug":"full","linkDestination":"media"} --> <div class="wp-block-image"><figure class="aligncenter size-full"><a href="https://qainsights.com/wp-content/uploads/2022/02/image-12.png"><img src="https://qainsights.com/wp-content/uploads/2022/02/image-12.png" alt="Iter8 Experiment" class="wp-image-9084"/></a><figcaption>Iter8 Experiment</figcaption></figure></div> <!-- /wp:image --> <!-- wp:paragraph --> <p>In <code>values.yaml</code>, the default values for the HTTP load test are available. Most of the values would be <code>null</code>. 
</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>In <code>Chart.yaml</code>, the source of the experiment and the API definition can be seen.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>To run an experiment against <a href="https://reqres.in/api/users/1" target="_blank" rel="noreferrer noopener">https://reqres.in</a> (please do not put more load), enter the below command and hit enter.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p><code>iter8 run --set url=https://reqres.in/api/users/1</code></p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>This command will generate an <code>experiment.yaml</code> in the current directory. Below is the content of <code>experiment.yaml</code>.</p>
<!-- /wp:paragraph -->
<!-- wp:code -->
<pre class="wp-block-code"><code># task 1: generate HTTP requests for application URL
# collect Iter8's built-in HTTP latency and error-related metrics
- task: gen-load-and-collect-metrics-http
  with:
    qps: 8
    connections: 4
    errorRanges:
    - lower: 400
    versionInfo:
    - url: https://reqres.in/api/users/1
</code></pre>
<!-- /wp:code -->
<!-- wp:paragraph -->
<p>By default, Iter8 will send 8 queries per second over 4 parallel connections, for a duration that is calculated automatically.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>Below is the output of the above experiment.</p>
<!-- /wp:paragraph -->
<!-- wp:image {"align":"center","id":9087,"sizeSlug":"large","linkDestination":"media"} -->
<div class="wp-block-image"><figure class="aligncenter size-large"><a href="https://qainsights.com/wp-content/uploads/2022/02/image-13.png"><img src="https://qainsights.com/wp-content/uploads/2022/02/image-13-1024x181.png" alt="Experiment Output" class="wp-image-9087"/></a><figcaption>Experiment Output</figcaption></figure></div>
<!-- /wp:image -->
<!-- wp:paragraph -->
<p>Once the experiment is done, Iter8 will generate <code>result.yaml</code>, which contains the performance metrics such as latency, error count, error rate and more with the 
aggregation of min, max, percentiles, mean, standard deviation and more.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>To generate the visualization, enter the below command, which will generate a simple HTML report, <code>report.html</code>, in the current directory.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p><code>iter8 report -o html &gt; report.html</code></p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>Open <code>report.html</code> in your favorite browser.</p>
<!-- /wp:paragraph -->
<!-- wp:image {"align":"center","id":9088,"sizeSlug":"large","linkDestination":"media"} -->
<div class="wp-block-image"><figure class="aligncenter size-large"><a href="https://qainsights.com/wp-content/uploads/2022/02/image-14.png"><img src="https://qainsights.com/wp-content/uploads/2022/02/image-14-684x1024.png" alt="Iter8 Report" class="wp-image-9088"/></a><figcaption>Iter8 Report</figcaption></figure></div>
<!-- /wp:image -->
<!-- wp:paragraph -->
<p>The HTML report consists of an Experiment Report, histogram analysis, and metrics, as shown above.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>Following are the default metrics and SLOs collected by the <code>load-test-http</code> experiment.</p>
<!-- /wp:paragraph -->
<!-- wp:list -->
<ul><li><code>request-count</code>: total number of requests sent</li><li><code>error-count</code>: number of error responses</li><li><code>error-rate</code>: fraction of error responses</li><li><code>latency-mean</code>: mean of observed latency values</li><li><code>latency-stddev</code>: standard deviation of observed latency values</li><li><code>latency-min</code>: min of observed latency values</li><li><code>latency-max</code>: max of observed latency values</li><li><code>latency-pX</code>: X<sup>th</sup>&nbsp;percentile of observed latency values, for&nbsp;<code>X</code>&nbsp;in&nbsp;<code>[50.0, 75.0, 90.0, 95.0, 99.0, 99.9]</code></li></ul>
<!-- /wp:list -->
<!-- wp:heading -->
<h2 id="sending-post-request">Sending POST 
request</h2>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>Iter8 supports the POST method as well. Below is an example of a POST request.</p>
<!-- /wp:paragraph -->
<!-- wp:code -->
<pre class="wp-block-code"><code>iter8 run --set url=https://reqres.in/api/users `
--set payloadStr='{"name":"morpheus","job":"leader"}' `
--set contentType="application/json"</code></pre>
<!-- /wp:code -->
<!-- wp:heading -->
<h2 id="assertions">Assertions</h2>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>Iter8 supports assertions to validate the output of each experiment. Run the below command, which checks for completion, failures, and SLOs.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p><code>iter8 assert -c completed -c nofailure -c slos</code></p>
<!-- /wp:paragraph -->
<!-- wp:heading -->
<h2 id="slos">SLOs</h2>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>The below command sets various SLOs.</p>
<!-- /wp:paragraph -->
<!-- wp:code -->
<pre class="wp-block-code"><code>iter8 run --set url=https://reqres.in/api/users/1 `
--set SLOs.error-rate=0 `
--set SLOs.latency-mean=50 `
--set SLOs.latency-p90=100 `
--set SLOs.latency-p'97\.5'=200</code></pre>
<!-- /wp:code -->
<!-- wp:paragraph -->
<p>For the above experiment, some SLOs might fail, as shown below.</p>
<!-- /wp:paragraph -->
<!-- wp:image {"align":"center","id":9089,"sizeSlug":"large","linkDestination":"media"} -->
<div class="wp-block-image"><figure class="aligncenter size-large"><a href="https://qainsights.com/wp-content/uploads/2022/02/image-15.png"><img src="https://qainsights.com/wp-content/uploads/2022/02/image-15-1024x590.png" alt="Iter8 SLOs" class="wp-image-9089"/></a><figcaption>Iter8 SLOs</figcaption></figure></div>
<!-- /wp:image -->
<!-- wp:image {"align":"center","id":9090,"sizeSlug":"full","linkDestination":"media"} -->
<div class="wp-block-image"><figure class="aligncenter size-full"><a href="https://qainsights.com/wp-content/uploads/2022/02/image-16.png"><img 
src="https://qainsights.com/wp-content/uploads/2022/02/image-16.png" alt="Iter8 validation" class="wp-image-9090"/></a><figcaption>Iter8 validation</figcaption></figure></div>
<!-- /wp:image -->
<!-- wp:heading -->
<h2 id="iter8-features">Iter8 Features</h2>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>Load testing is just one of Iter8's features. Apart from load testing, it supports A/B testing, the K8s ecosystem, serverless, ML frameworks, and more.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>There are experiments available with Prometheus, chaos testing, mirroring, traffic split, session affinity, gradual traffic shifting, and more.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>I hope there will be more experiments available in the repo in the near future.</p>
<!-- /wp:paragraph -->
<!-- wp:heading -->
<h2 id="what-is-missing">What is missing?</h2>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>As a performance engineer, my expectations are high. Iter8 supports only a minimal subset of features for performance testing. E.g. distributed load testing and aggregation, data parameterization, easy debugging, UI performance, other protocols, and more are missing in this release.</p>
<!-- /wp:paragraph -->
<!-- wp:heading -->
<h2 id="conclusion">Conclusion</h2>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>The pitch of Iter8 is <code>Kubernetes Release Optimizer</code>, where users conduct experiments either locally, in the Kubernetes ecosystem, or via CI/CD/GitOps pipelines, and try out various versions, configurations, and models to find the best deployment for production.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>Iter8 is one of the easiest frameworks to learn. You should definitely check it out and see whether it can be used by your team.</p>
<!-- /wp:paragraph -->
qainsights
990,453
Citizen Developers on the Rise in 2022
Why are low code tools becoming popular? Since the year 2020,the world economy has been...
16,777
2022-02-15T17:54:37
https://dev.to/kanishkkhurana/citizen-developers-on-the-rise-in-2022-3h7l
beginners, career, news, computerscience
### Why are low code tools becoming popular?

Since the year 2020, the world economy has been hit by one of the worst crises it has faced in modern times. Due to the Covid-19 pandemic, millions of people around the globe lost their jobs, and making a living has never been harder in the 21st century. In the digital online world, we were able to see the importance of a webstore, a website and, most importantly, online infrastructure to maintain the day-to-day operations of a company. To maintain these processes, technically trained staff, developers, and other digitally skilled workers are very important.

As of 2021, the [global software engineer shortage](https://www.daxx.com/blog/development-trends/software-developer-shortage-us) already amounts to **40M** skilled workers worldwide. By 2030, the shortage is predicted to reach **85.2M** engineers. Companies worldwide risk losing **$8.4T** in revenue because of the lack of skilled technical staff. Such numbers have already shot the cost of hiring developers through the roof. So in case you don't know how to code, you can either burn all your capital looking for a developer, or you could take matters into your own hands.

This global issue gave rise to a collective solution: startups! 84,000 new businesses were registered in October in [France](https://www.forbes.com/sites/forbestechcouncil/2021/04/09/pandemic-fuels-global-growth-of-entrepreneurship-and-startup-frenzy/?sh=61ab16637308). This is a historic 20% increase from last year. Japan recorded 10,000 new businesses, a 14% increase, and the United Kingdom recorded a 30% increase in the number of new businesses in 2021.
As per the [Peterson Institute for International Economics (PIIE)](https://reason.com/2021/12/10/entrepreneurship-is-on-the-rise-despite-covid-19/), "Despite a health catastrophe and one of the worst economic downturns in modern history, startup business activity grew in the United States last year— startups grew from 3.5 million in 2019 to 4.4 million in 2020, a 24 percent increase". In 2021, healthcare startups received a total of $2 billion and fintech startups received $8 billion in [funding in India](https://m.economictimes.com/small-biz/entrepreneurship/2021-unfolding-of-the-true-potential-of-the-indian-entrepreneur/articleshow/88426355.cms) alone. These stats clearly show that citizen developers are on the rise.

The solution to this can be developing digital solutions at speed. But not everyone can code a piece of software from scratch. Thus, today companies have turned to low-code platforms to help solve these issues and allow ordinary citizens to become developers. [According to Forrester](https://kissflow.com/low-code/forrester-predicts-a-significant-increase-in-low-code-platform-adoption-in-2021/), low code platforms are becoming the number 1 choice for companies. By the start of 2022, such platforms will be used for 75% of application development (up 31% from 2020).

### Here is how some startups are using low code to run their business:

1. [Ref4Me](http://www.ref4me.co.uk/) - This is a great application developed on the low code platform DronaHQ; it connects Referees and Team Managers with ease. They have utilised the drag-and-drop UI creator to develop not only their website, but also internal tools and backend apps. One of their features you can check out right away is the registration website built on low code.
2. [Neo4j](https://neo4j.com/) - They use low-code in integration with their already existing codebase to ensure quick API integration with other apps. Major issues that low code solved for them were scalability and utilising existing data structures.
Thus they were able to quickly build on top of existing technology and provide robust IT solutions to their team and customers. 3. [DoorDash](https://www.doordash.com/) - They transitioned from code to no-code when they realised the increased speed of application development, and the hundreds of off-the-shelf solutions that low code provides. They have utilised the cloud solution features of low code platforms to ship not just code but deliveries too. 4. [Switchboard](https://switchboard.media/) - They have a user interface built using low-code platforms that helps them become a rapidly profitable SaaS business growing at an impressive rate. It helps them integrate, schedule, monitor and analyse a network of displays in no time. Various API integrations with platforms that handle billing, present user statistics, etc. have helped them fetch all data on one single dashboard. 5. [FiveTeams](https://www.fiveteams.com/) - They have streamlined the entire recruitment process with no-code by building websites, CMS, hiring apps and automations. Successful integrations with many APIs helped them scale fast. An applicant tracking dashboard is the key to the growth of such hiring platforms. In case you are thinking of making dashboards, give low code a shot! Low code platforms provide a lifeline to emerging startups and small and medium businesses by tackling the two most common causes of company failure, i.e. 1) Not finding appropriate markets - The more narrowly defined your niche is, the easier it will be to market to the right audience. Too many people try to start a business targeting everyone as their demographic. This doesn't work out well and can lead to poor sales or poor customer service if you are targeting the wrong market. Low code tools help create prototypes faster and generate analytics to give market insights. This in turn guides the company in the right direction. 
2) Burning all the initial capital on technology development costs - Too many companies spend a large amount of money in the initial stages to build and maintain solutions for day-to-day tasks. This slows down expansion. This money is better spent on market research and product testing. Low code tools help save all this money as they enable citizen developers to build IT startup tools themselves. These 2 reasons combined account for 71% of all startup failures. This is why low code/no code platforms provide viable startup tools to today’s digitalising economy. Please leave your thoughts and comments as I would love to hear what the dev community has to say about the new and developing world of low code/no code.
kanishkkhurana
1,017,616
Phase 1: Portfolio Project
building a JavaScript project can seem like a really easy until you start putting the pieces...
0
2022-03-09T17:49:01
https://dev.to/fatimaebeker/why-did-i-choose-this-path-39o2
Building a JavaScript project can seem really easy until you start putting the pieces together. I’ve never worked with HTML, JavaScript or JSON data from an API outside of labs. I’ve spent many days figuring out answers to problems and details that I would not have gotten a chance to pay attention to, and realized how delicate yet complicated programming can be. Every time I think I got the correct answer I run into another problem, another bug, an unintended consequence; as they say, the devil is in the details, and in programming the devil/bug is in the unhandled error/catch. Well, from mistakes we learn, right? The idea of my project is a simple yet complicated set of Pokémon cards where you can add, delete or like a Pokémon. For a minute that is straightforward, right, until you realize a basic understanding of fetch means knowing that the Fetch API gives you a JavaScript interface to access and manipulate HTTP pipeline elements like requests and responses. It also has a global fetch() method that makes fetching things asynchronously via the network simple and intuitive. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mgn9ndpcaxda9ekycd29.png) This appears to be so simple until you start working on a blank HTML page, unsure how you'll get your code to operate. The first thing I did was create my forms and a submit button (in my HTML), and I added an event listener to the add button in the JavaScript file. Then I created a function to handle the submitted form values (name, image-URL and description). I also made another function that would create each Pokémon card and attach the like and the delete buttons to it. 
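A minimal sketch of what that form handler builds (the helper and field names here are illustrative assumptions, not taken verbatim from the project):

```javascript
// Illustrative sketch only: helper and field names are assumptions,
// but the shape matches the form fields (name, image URL, description)
// plus the likes counter used by the like button.
function buildPokemon(name, imageUrl, description) {
  return { name, image: imageUrl, description, likes: 0 };
}

// The submit handler would then POST this object to the json-server,
// e.g. fetch('http://localhost:3000/pokemonData', { method: 'POST', ... })
const newPokemon = buildPokemon('Pikachu', 'https://example.com/pika.png', 'An electric mouse');
console.log(newPokemon.likes); // 0
```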
Once I created the delete and like buttons, I went on and added the `addEventListener('click')` handlers ``` card.querySelector('#like').addEventListener('click', ()=>{ pokemon.likes+=1; }); card.querySelector('#delete').addEventListener('click', ()=>{ card.innerHTML=''; }); ``` Once you have added the Pokémon card into the list you can see the Pokémon name, number of likes and its description; you can either click on the like button or the delete button. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6zd1gps150bxhrif5ol5.png) The like button gives the Pokémon 1 like each time it’s clicked; the delete button simply removes the selected Pokémon from the list. This stage was one of the most difficult for me to overcome throughout the project. I ran into a lot of errors using fetch, but happily I was able to address the issues that were producing the errors. To post, get, remove, and patch data, I utilized the fetch method. Example: ``` function UpdateLikes(pokemonObj) { fetch(`http://localhost:3000/pokemonData/${pokemonObj.id}`,{ method:'PATCH', headers:{ 'Content-Type': 'application/json' }, body: JSON.stringify(pokemonObj) }) .then(res => res.json()) .then(pokemon=>console.log(pokemon)) } function deletePokemon(id) { fetch(`http://localhost:3000/pokemonData/${id}`,{ method: 'DELETE', headers: { 'Content-Type': 'application/json' } }) .then(res => res.json()) .then(pokemon=>console.log(pokemon)) } ``` Even though I was working on a very easy assignment, it was nevertheless stressful and difficult. This project took me far longer than I had anticipated, and I must say, I was frustrated. Being able to complete my phase 1 project gave me a lot of confidence in myself and made me believe that I am up to the task. I'm excited to accomplish as many projects as I can and become more active in this.
fatimaebeker
1,019,008
How can you create more secure applications?
Security is the hidden heart and soul of any great application. It’s something we all take for...
0
2022-03-11T01:11:07
https://nevulo.xyz/blog/creating-secure-applications
programming, security, beginners
Security is the hidden heart and soul of any great application. It’s something we all take for granted, ensuring things remain as secure as possible to make sure everything runs as it _should_. But, what does it mean for an application to be “secure”? How can we improve security in our applications to protect valuable assets, including users? ## What do we mean by “secure” applications? Information security boils down to three basic principles: ### Confidentiality Ensuring only **authorised access** to an asset, prohibiting access to others. Prevents unauthorised disclosure. ### Integrity There should be **no unauthorised modification** of data. ### Availability Authorised users have **timely, reliable access to resources** when needed. This is referred to as the “CIA Triad” and these three core goals should be the basis for any security program. ## Prevention is better than the cure I think it’s worth spending some time talking about why we care about security in the first place. There’s obviously a time and place to get serious about security, and I don’t recommend diving head first into it if you’re very new to programming. But, if you’re making an application that is public and has the potential to be used by thousands, it’s too easy to get complacent, especially if you’re just starting out and think there’s no reason to implement security. Even if your application only has 10 users, even if it’s not well known, and you think there’s no way anyone could possibly exploit it: **If you have a vulnerability in your application, there is increased risk, both for you and your users.** Think of it like a ticking time-bomb, but no matter what, you have no idea what the time on the bomb is. The more risk you have, the more unstable the bomb gets. ## You’re at risk, whether you like it or not _Every single application_ has some security flaws (known or not), and you can **never guarantee** that an application is 100% secured. 
Think like you’re under attack all the time because usually, you are. As a personal example: I’ve been running my website on my server for over 3 years now, and during that time, I’ve had countless bots connect to my server attempting to exploit some threat which doesn’t exist for my site. Even though typically these spam requests get sent out in batches to random IPs across the entire Internet just to see if they can exploit a vulnerability, it still presents a serious risk if you don’t put security at the forefront of what you’re doing. You should always be vigilant and expect attacks. No matter where or how you run your application, you should have the mindset that there are always threats lurking because realistically, there are. Never be complacent with security because ultimately, your users will take the biggest hit. Oftentimes, it’s not about “if”, but “when”, given enough time. ## Best practices for making your applications more secure ### Test & attempt to break your code Testing your logic under different scenarios (particularly ones it wasn’t made for) is a great way to ensure there are no obvious vulnerabilities in your application. Even better, endeavor to get someone _external_ to do [penetration testing](https://www.imperva.com/learn/application-security/penetration-testing/) on your application, intentionally attempting to breach systems (with permission) to uncover vulnerabilities. Here’s [a great tweet](https://twitter.com/brenankeller/status/1068615953989087232?lang=en) demonstrating this: > A QA engineer walks into a bar. Orders a beer. Orders 0 beers. Orders 99999999999 beers. Orders a lizard. Orders -1 beers. Orders a ueicbksjdhd. > First real customer walks in and asks where the bathroom is. The bar bursts into flames, killing everyone. The idea is that users are unpredictable, and if you’ve created some input for the user, you should attempt to test values that you _know_ will cause problems or create edge cases. 
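The edge values in that tweet map straight onto input validation. A sketch of a validator (hypothetical, not from any particular library) that rejects each of those orders:

```javascript
// Hypothetical validator for the "orders N beers" scenario:
// accept only whole numbers in a sane range, reject everything else.
function isValidBeerOrder(qty) {
  return Number.isInteger(qty) && qty >= 1 && qty <= 100;
}

console.log(isValidBeerOrder(1));           // true
console.log(isValidBeerOrder(0));           // false
console.log(isValidBeerOrder(-1));          // false
console.log(isValidBeerOrder(99999999999)); // false (over the sane limit)
console.log(isValidBeerOrder('a lizard'));  // false (not an integer at all)
```

The exact upper bound is an arbitrary assumption; the point is that every "weird" order in the joke falls through the same small set of checks.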
This is where good **validation** on the frontend and backend comes in, ensuring that the input we _expect_ to receive is what we actually get. ### Avoid security through obscurity, implement security by **design** [Security through obscurity](https://danielmiessler.com/blog/dead-drops-and-security-through-obscurity/) is the idea that a system can be secure solely by keeping the details about how it was created secret (besides the creators). Security by design is about embedding security as you’re planning new features or fixes and building in risk assessment and mitigation as you go, instead of implementing security as an afterthought. The [National Institute of Standards and Technology](https://en.wikipedia.org/wiki/National_Institute_of_Standards_and_Technology) in the US generally recommends that “system security **should not depend** on the _secrecy of the implementation_ or its components”. Think about it like this – in a house, the front door is the main security control, preventing people from just coming in as they please. If you’re the owner of the house, you likely have the keys to the door on your person, making you part of the security system. This is a secure system because even if somebody knows that _you_ possess the key, they probably don’t know where exactly you are, and they’d need to use brute force to get the key from you. On the other hand, if you store the key under your doormat, this is not a secure system. If somebody figures out that there is a key under the doormat, the entire system is compromised, and that newly found key can be used straight away to break into the house (or a system). Implementing security by design usually means that everyone is allowed to know and understand the design because it is secure. ### Keep things within your control, never trust user input For example, if you have a comments section where people can post a comment, it may be dangerous just to echo back what the user said. 
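One common mitigation for the comment example above is escaping user input before it is echoed back, so the input can never become markup. A generic sketch of the technique (not code from any specific site):

```javascript
// Escape the characters that let user input break out into markup.
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

console.log(escapeHtml('<script>alert(1)</script>'));
// &lt;script&gt;alert(1)&lt;/script&gt;
```

Note `&` is replaced first so already-escaped output isn't double-mangled; in practice a well-tested library or the templating engine's built-in escaping is a safer bet than rolling your own.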
As an example, attackers may be able to abuse a vulnerability called [cross-site scripting](https://nevulo.xyz/blog/what-is-xss), which allows them to literally inject their code to be executed on the page. This is dangerous because the injected code runs under the same context as the real code on the website (meaning, for the computer running the code, there’s no distinction between _your injected code_ and the real website code). This could be anything from showing an annoying pop-up to stealing user records and personal information. ### Principle of least privilege The principle of least privilege focuses on giving users the _exact_ amount of _permissions_ they need to perform the tasks they need to do - no more, no less. You should only elevate privileges for applications or users based on situations and when the time is right. Typically, users start with the lowest possible set of permissions, but gradually gain permissions over time as needed. This is a basic measure to ensure internal employees or normal users of an application can’t access data or perform actions beyond what their role requires. ### Don’t expose too much information Attackers rely on information gathered about how your application works, and the less information they have, the harder it is to perform an attack. For example, telling the user if they got their password wrong in a login form. To protect other users, we should just say “login failed” without exposing whether or not the email address or password specifically was wrong. Any piece of information, however small, may assist attackers, speed up the process or give attackers more incentive to keep attempting to compromise details.
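The login example can be sketched as a handler (hypothetical, not from a real codebase) that returns an identical message no matter which credential failed:

```javascript
// Return the same response for "unknown email" and "wrong password"
// so the reply leaks nothing about which part of the login failed.
function loginResult(emailExists, passwordCorrect) {
  if (emailExists && passwordCorrect) {
    return { ok: true, message: 'welcome back' };
  }
  return { ok: false, message: 'login failed' };
}

// An attacker probing for registered emails learns nothing:
console.log(loginResult(false, false).message); // login failed
console.log(loginResult(true, false).message);  // login failed
```

The same idea applies to timing: if the "unknown email" path returns noticeably faster than the "wrong password" path, the difference itself leaks the information the message was hiding.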
nevulo
1,019,030
Azure Functions University - HTTP Lesson (TypeScript)
Hi serverless friends, we have a new Azure Functions University lesson for you! In this lesson we...
17,342
2022-03-16T20:11:06
https://blog.marcduiker.nl/2021/04/15/azfuncuni-http-lesson-typescript.html
azure, serverless, programming
--- title: Azure Functions University - HTTP Lesson (TypeScript) published: true date: 2021-04-15 00:00:00 UTC tags: azure, serverless, programming canonical_url: https://blog.marcduiker.nl/2021/04/15/azfuncuni-http-lesson-typescript.html series: Azure Functions University --- ![Http Lesson TypeScript](https://blog.marcduiker.nl/assets/2021/04/15/AzureFunctionsUniversity_HTTP_Lesson_typescript.png) Hi serverless friends, we have a new Azure Functions University lesson for you! In this lesson we will learn how to use the HTTP trigger in Azure Functions written in TypeScript. <!--more--> ## Lesson You can find the lesson on GitHub: [HTTP Trigger (TypeScript)](https://github.com/marcduiker/azure-functions-university/blob/main/lessons/typescript/http/README.md). And you can watch the video on YouTube: {% youtube zYb5sVQgUN4 %} The lesson consists of the following exercises: | Nr | Exercise | | --- | --- | | 0 | Prerequisites | | 1 | Creating a Function App | | 2 | Changing the template for GET requests | | 3 | Changing the template for POST requests | | 4 | Adding a new function for POST requests | | 5 | Homework |
marcduiker
1,019,143
Tune VS Code like a BOSS
14 Kick-Ass Extensions You Don’t Wanna Miss! They say that with Visual Studio Code,...
0
2022-03-12T10:11:53
https://dev.to/pythonperfection/tune-vs-code-like-a-boss-4lg4
productivity, python, vscode
### 14 Kick-Ass Extensions You Don’t Wanna Miss! ![Python for Ukraine](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r7r7l5ux7tmeez7sgrk5.png) They say that with Visual Studio Code, Microsoft redeemed themselves amongst the developer community. Many old-time programmers carried a grudge against the Redmond tech giant given their outspoken anti-open-source stance. Former Microsoft CEO Steve Ballmer even infamously called the open source operating system Linux “[a cancer](https://www.linux.com/news/microsofts-ballmer-linux-cancer/).” Today, [14 million](https://www.zdnet.com/article/visual-studio-code-how-microsofts-any-os-any-programming-language-any-software-plan-is-paying-off/) developers call VS Code ‘home’. Given that the number of devs worldwide - according to Statista - is around 24 million, that's an insane number! While VS Code is 100% free, it is also a totally brilliant marketing strategy into Microsoft’s [Azure](https://azure.microsoft.com/). As an ex-Google Cloud and AWS user, I'd say it's highly doubtful whether, without VS Code, Azure would have gotten to be part of ‘the big 3’. Now however, with GitHub and VS Code combined, they pretty much own us all. <figure> <img src=https://miro.medium.com/max/1400/0*IIlfPsOkwsuH9oQL.jpg> <figcaption>Microsoft invests more than $10 billion in data centers annually and is on pace to build between <a href=https://www.zdnet.com/article/microsoft-were-on-pace-to-build-50-to-100-new-datacenters-each-year>50 and 100 new data centers each year</a> for the foreseeable future.</figcaption> </figure> <br>Enough with the politics. Let’s dive into the juicy parts. The following VS Code extensions are a ‘must’: > Bear in mind, given how lightweight VS Code is, there is really never a worry about having ‘too many’ extensions. 
So just knock yourself out!<br> ###[Sourcery](https://marketplace.visualstudio.com/items?itemName=sourcery.sourcery) Sourcery runs in the background and suggests real-time refactoring improvements. It's actually really fun seeing Sourcery in action #solid <figure> <img src=https://miro.medium.com/max/1200/0*_kLzO5AHohuJLZfj.gif> <figcaption> <a href=https://marketplace.visualstudio.com/items?itemName=sourcery.sourcery><b>Sourcery</b></a> Real time Python refactoring </figcaption> </figure> --- ###[Bookmarks](https://marketplace.visualstudio.com/items?itemName=alefragnani.Bookmarks) Bookmarks helps you to navigate your code, and moves between important positions easily and quickly. It also supports a set of selection commands, which allows you to easily select bookmarked lines and regions. <figure> <img src=https://miro.medium.com/max/1400/0*1S1Ujd-X57LMSunu.gif> <figcaption> <a href=https://marketplace.visualstudio.com/items?itemName=alefragnani.Bookmarks><b>Bookmarks:</b></a> Mark and Jump </figcaption> <figure> --- ###[Thunder Client](https://marketplace.visualstudio.com/items?itemName=rangav.vscode-thunder-client) Thunder Client is a lightweight Rest API Client with a simple and clean design; you’ll never need Postman again. Super fast too! <figure> <img src=https://miro.medium.com/max/1400/1*UH2bmeq7kd_D8cXri5b5jg.png> <figcaption><a href=https://marketplace.visualstudio.com/items?itemName=rangav.vscode-thunder-client><b>Thunder Client:</b></a> an API client done right </figcaption> </figure> --- ###[Docs View](https://marketplace.visualstudio.com/items?itemName=bierner.docs-view) * Automatically displays documentation for the function at the cursor position. * The “Documentation” view shows in the panel by default, move it any panel by just dragging. * Supports syntax highlighting and markdown rendering in the docs view. For imported functions that have lots of args, this is a lifesaver. 
<figure> <img src=https://miro.medium.com/max/1400/1*1WAco6pZgJPRKKYoE-ptLg.png> <figcaption><a href=https://marketplace.visualstudio.com/items?itemName=bierner.docs-view><b>Docs View:</b></a> Display documentation in the sidebar </figcaption> </figure> --- ###[Sourcegraph](https://marketplace.visualstudio.com/items?itemName=sourcegraph.sourcegraph) Sourcegraph allows you to search millions of open source repositories right from VS Code. Super fast too! <figure> <img src=https://miro.medium.com/max/1400/0*3-T1lc9GItcAE0kn.gif> <figcaption><a href=https://marketplace.visualstudio.com/items?itemName=bierner.docs-view><b>Sourcegraph:</b></a> Search millions of repositories </figcaption> </figure> --- ###[AI Doc Writer](https://marketplace.visualstudio.com/items?itemName=mintlify.document) Writing documentation is no fun, this AI takes care of it. Just highlight the code and hit ⌘ + . The makers of this one are actually on top of their game and constantly keep perfecting. Kudos! <figure> <img src=https://miro.medium.com/max/1400/0*jxlxlOOTxqlEBjvw.gif> <figcaption><a href=https://marketplace.visualstudio.com/items?itemName=mintlify.document><b>Mintlify:</b></a> AI Doc Writer </figcaption> </figure> --- ###[file-size](https://marketplace.visualstudio.com/items?itemName=zh9528.file-size) Shows the current file size in the status bar. I know this sounds 'basic' but it really is super helpful. <figure> <img src=https://miro.medium.com/max/1400/1*j_YrSlSWO8tmwLvBrth7Tg.png> <figcaption> <a href=https://marketplace.visualstudio.com/items?itemName=zh9528.file-size><b>file-size:</b></a> a simple/great extension </figcaption> </figure> --- ###[Krinql](https://marketplace.visualstudio.com/items?itemName=krinql.krinql-vscode) Based on OpenAI’s powerful GPT-3, Krinql explains the highlighted code in plain English, plus some other Codex features. 
<figure> <img src=https://miro.medium.com/max/1400/0*LHeWnXqXKqGURGk_> <figcaption><a href=https://marketplace.visualstudio.com/items?itemName=krinql.krinql-vscode><b>Krinql:</b></a> Code explainer </figcaption> </figure> --- ###[GistPad](https://marketplace.visualstudio.com/items?itemName=vsls-contrib.gistfs) **Bonus:** Comes with a really useful scratchpad (for note taking etc.) One of my favs. <figure> <img src=https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b3ioyp1yjywj22uoo6hu.png> <figcaption><a href=https://marketplace.visualstudio.com/items?itemName=vsls-contrib.gistfs><b>GistPad:</b></a> Browse & edit Gists without leaving VS Code </figcaption> </figure> --- ###[CodeSnap](https://marketplace.visualstudio.com/items?itemName=adpyke.codesnap) Just right-click a code selection and click `CodeSnap`, basically a built-in 'Carbon'. <figure> <img src=https://miro.medium.com/max/1400/0*l1jMqHcTS3_NNY1z.png> <figcaption><a href=https://marketplace.visualstudio.com/items?itemName=adpyke.codesnap><b>CodeSnap:</b></a> 📷 Take beautiful screenshots of your code. </figcaption> </figure> --- ###[IntelliCode Completions](https://marketplace.visualstudio.com/items?itemName=VisualStudioExptTeam.vscodeintellicode-completions) From our friends at Microsoft, this experimental extension predicts up to a whole line of code based on your current context. Predictions appear as grey-text to the right of your cursor. I’ve had a really great experience using it so far (a couple of months now)! Mostly I love how conservative it is vs making silly guesses. If it doesn't know, it says nothing. 
<figure> <img src=https://miro.medium.com/max/1400/0*8nYnG8xBRhW8dNZh.gif> <figcaption><a href=https://marketplace.visualstudio.com/items?itemName=VisualStudioExptTeam.vscodeintellicode-completions><b>IntelliCode Completions:</b></a> AI-driven code auto-completion </figcaption> </figure> --- ###[AREPL](https://marketplace.visualstudio.com/items?itemName=almenon.arepl) <figure> <img src=https://miro.medium.com/max/1400/0*ac6y8D7fSoXH5eWW.gif> <figcaption><a href=https://marketplace.visualstudio.com/items?itemName=almenon.arepl><b>AREPL:</b></a> automatically evaluates python code in real-time as you type </figcaption> </figure> <br>Similar to AREPL, [Wolf](https://marketplace.visualstudio.com/items?itemName=traBpUkciP.wolf) uncovers variables and objects inline: <figure> <img src=https://miro.medium.com/max/1400/0*Qo_MY2t8lhGcGZrS.png> <figcaption><a href=https://marketplace.visualstudio.com/items?itemName=traBpUkciP.wolf><b>Wolf:</b></a> outputs variables inline </figcaption> </figure> <br>While the above are both too cluttering for everyday, they do have their perfect use cases (i.e. a complex code block etc.). --- ###[Gather](https://marketplace.visualstudio.com/items?itemName=ms-python.gather) Again from the awesome peeps at Microsoft, when called from within a notebook cell, Gather magically figures out its dependencies. A game changer for Dataframers & Plotters. ![Pandas baby!](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hqubmnu4zwcpzs72c0w4.png) <figure> <img src=https://miro.medium.com/max/1242/0*2vZOgoo4z1pX_LnV.PNG> <figcaption><a href=https://marketplace.visualstudio.com/items?itemName=ms-python.gather><b>Gather:</b></a> Jupyter notebook dependencies </figcaption> </figure> --- > Make sure to hit that unicorn please if you enjoyed this post 🦄
pythonperfection
1,019,636
Under-the-hood of GraphQL DataLoader
In recent years GraphQL has really taken off as a pattern/library/type system. It offers much which...
6,071
2022-03-11T15:20:46
https://craigtaub.dev/under-the-hood-of-graphql-dataloader
graphql, node, javascript, performance
In recent years GraphQL has really taken off as a pattern/library/type system. It offers much which REST does not and its standardization and flexibility have really helped in its adoption. I have an article focused on digging [deeper into GraphQL here](https://craigtaub.dev/under-the-hood-of-graphql), so today we will focus on another tool in the GraphQL ecosystem - one that is very important and interesting in how it assists applications - that tool is the DataLoader. This is part of my ["under-the-hood of" series](https://craigtaub.dev/introducing-my-under-the-hood-of-series): - [React hooks](https://craigtaub.dev/under-the-hood-of-react-hooks) - [Web bundlers (e.g. Webpack)](https://craigtaub.dev/under-the-hood-of-web-bundlers) - [Type systems (e.g. TypeScript)](https://craigtaub.dev/under-the-hood-of-type-systems) - [GraphQL](https://craigtaub.dev/under-the-hood-of-graphql) - [Git version control](https://craigtaub.dev/under-the-hood-of-git) - [Source maps](https://craigtaub.dev/source-maps-from-top-to-bottom) - [Docker](https://craigtaub.dev/under-the-hood-of-docker) - [NPM](https://craigtaub.dev/under-the-hood-of-npm) - [Test runners (e.g. Mocha)](https://craigtaub.dev/under-the-hood-of-test-runners) - [VSCode auto formatters (e.g. Prettier)](https://craigtaub.dev/under-the-hood-of-vscode-auto-formatters) - [Apollo](https://itnext.io/under-the-hood-of-apollo-6d8642066b28) --- The article today will be broken down into 2 parts: 1. [Overview](#1-overview) - [Batching](#batching) - [Caching](#caching) 2. [Building our own GraphQL DataLoader](#2-building-our-own-graphql-dataloader) --- ## 1: Overview The NodeJS repository for GraphQL's DataLoader is found at [https://github.com/graphql/dataloader](https://github.com/graphql/dataloader), however it can be found in many different language implementations. It can be used as part of your application's data-fetching layer, and its basic job is to reduce requests to backends by 2 means. 1. Batching 2. 
Caching It utilizes different logic and functionality to perform the above efficiently. The first question is: what does GraphQL have to do with this? It pairs nicely with GraphQL as GraphQL has fields which are designed to be stand-alone functions (resolvers) and it is very easy to share a class instance via the context. The class instance would be our instance of DataLoader. The natural pairing of DataLoader and GraphQL has produced high success rates - some examples have seen 13 database queries reduced down to 3 or 4. ## Batching Batching is the primary feature of DataLoader; you must pass the library a "batch function" to detail how to process the batch. Within a single tick of the event loop DataLoader gathers all individual loads, then calls the "batch loading function" with all requested keys. ### VS your ORM It's important to note DataLoader does not optimize the queries itself - you can look to an ORM for help there. For example [Objection-JS](https://vincit.github.io/objection.js/api/query-builder/eager-methods.html#withgraphfetched) has logic to avoid "N+1 selects" by utilizing "where in" queries. ### Batch function This is the function given to the library when you create a new instance ```javascript const ourInstance = new DataLoader(keys => myBatchFunction(keys)) ``` The basic idea is that you check your cache first for a given key; if it exists return that value, else hit the data-source e.g. database. It passes in an array of keys, but there is a constraint that: 1) the returned array size must match the keys coming in 2) the returned array indexes must match the keys coming in There is a sound reason for that limitation and it's related to the implementation - it is covered in part 2 below. It's worth highlighting that keys are suited to table column IDs, so it stands to reason that having a database table for each entity in your data model would fit this mechanism well. ## Caching DataLoader uses a simple in-memory memoization cache. 
You can swap the memory store for something else e.g. SQL-lite. Caching pairs really well with the Batching, because the batch can ensure the requested data has already been pulled from the database, the cache can be utilized to retrieve from there. We will go over this more in the next section --- ## 2. Building our own GraphQL Dataloader In this section we will focus on the <batching> and save <caching> for another time. Hopefully it will provide enough context on how caching is utilized. > Within a single tick of the event loop DataLoader gathers all individual loads, then calls the "batch loading function" with all requested keys. You might be wondering how it does this - so let's look at the most simple example. ```javascript const run = async () => { const keys = []; const batchFunction = (keys) => { // bad point 1 - called with [1,2], [1,2] console.log("keys: ", keys); }; const load = async (id) => { keys.push(id); process.nextTick(() => { batchFunction(keys); }); // bad point 2 - promise not connected to batch function return Promise.resolve(`id: ${id}`); }; const a = await load(1); const b = await load(2); console.log("a", a); // id: 1 console.log("b", b); // id: 2 }; run(); ``` This calls our batch function twice - both times with both keys. The order of events is this: 1. call `load` asynchronously with id 1 2. call `load` asynchronously with id 2 3. `async load(1)` - store key 1 in the global `keys` array - schedule a node process to, on the next tick, run our `batchFunction` with those keys - Return a resolved promise with the id. 4. `async load(2)` - store key 2 in the global `keys` array - schedule a node process to, on the next tick, run our `batchFunction` with those keys - Return a resolved promise with the id. 5. The first scheduled process runs, with both ids 1 and 2 in the `keys` array 6. The second scheduled process runs, with both ids 1 and 2 in the `keys` array. So here you can see the basic mechanism of how batching works. 
Good 👍🏻 - Runs our batch function with both keys - this will mean we can cache the database response, and next time those keys are included only utilize cache Bad 👎🏻 1. Unnecessarily calling the batch function with the same keys, unnecessarily running the code even if it is hitting the cache. 2. `load` does not return anything useful, it's a completely isolated resolved promise. The below example looks to improve on that. ```javascript let resolvedPromise; let batchFunction; let batch; const dispatchBatch = () => { batch.hasDispatched = true; const batchPromise = batchFunction(batch.keys); batchPromise.then((values) => { for (var i = 0; i < batch.callbacks.length; i++) { var value = values[i]; // resolve promise callback batch.callbacks[i].resolve(value); } }); }; const batchScheduleFn = (cb) => { // add more logic if scheduling process.nextTick(cb); }; const getCurrentBatch = () => { // !hasDispatched only needed if using a 2nd tick - this example isn't if (batch && !batch.hasDispatched) { return batch; } const newBatch = { hasDispatched: false, keys: [], callbacks: [] }; batch = newBatch; batchScheduleFn(() => { dispatchBatch(); }); return newBatch; }; const load = async (id) => { const localBatch = getCurrentBatch(); localBatch.keys.push(id); // add promise callback to batch const promise = new Promise((resolve, reject) => { localBatch.callbacks.push({ resolve, reject }); }); return promise; }; async function threadTwo() { const user = await load(2); console.log("threadTwo user", user.id); } async function threadOne() { const user = await load(1); console.log("threadOne user", user.id); } const run = async () => { // make async batchFunction = async (keys) => { console.log("keys:", keys); // keys: [ 1, 2 ] return keys.map((key) => ({ id: key })); }; threadOne(); threadTwo(); }; run(); ``` It introduces batches which can be scheduled - this is _exactly_ how DataLoader manages it ([here](https://github.com/graphql/dataloader/blob/master/src/index.js#L248)). 
The order of events is this:

1. call `threadOne` - call `load` async with id 1
2. call `threadTwo` - call `load` async with id 2
3. `async load(1)`
   - get the current batch
     - `batch` is currently undefined so a `newBatch` is created
     - we schedule a dispatch by calling `dispatchBatch()` inside our scheduler `batchScheduleFn()`
       - this adds the `dispatchBatch` callback to the `nextTick`
     - lastly we return the batch
   - we add the `id` to the `keys` array on the current batch
   - we create a new promise, and add the `reject` and `resolve` to our current batch `callbacks` (so the list index is important)
   - lastly we return the new promise
4. `async load(2)`
   - get the current batch
     - `batch` currently exists and has not been dispatched so we return that
   - as above we add the `id` and `reject/resolve` to the current batch
   - as well as return the promise
5. `process.nextTick`
   - the tick runs `dispatchBatch`
   - call our `batchFunction` with the current batch's `keys`
   - `batchFunction` returns a promise
   - when that promise resolves (`.then`), it returns an array of our keys
   - we iterate over our batch callbacks - for each callback
     - we find the associated `batchFunction` key value <b>this is why the batch function response indexes are so important</b>
     - resolve the callback with that value
6. `await load(1)` resolves - returning object `{id}`
7. `await load(2)` resolves - returning object `{id}`

This calls the batch function once with both keys, and it returns the correct values from the batch function - dealing with both "bad" points from the first example.

---

Thanks so much for reading, I learnt a huge amount about DataLoader and GraphQL from this research and I hope it was useful for you. You can find the repository for all this code [here](https://github.com/craigtaub/our-own-graphql-dataloade).

Thanks, Craig 😃
craigtaub
1,020,943
Configuration as Code for the GitHub platform
I am slowly diving into ‘Configuration as code’ for the GitHub Platform: all the things you want to...
0
2022-03-13T07:15:44
https://devopsjournal.io/blog/2022/03/12/GitHub-config-as-code
github
---
title: Configuration as Code for the GitHub platform
published: true
date: 2022-03-12 00:00:00 UTC
tags: github
canonical_url: https://devopsjournal.io/blog/2022/03/12/GitHub-config-as-code
---

I am slowly diving into 'Configuration as code' for the GitHub Platform: all the things you want to automate with as few steps as possible, making a big impact. Some of these things also fall under 'GitOps' in my opinion: you store it in a repo, and on changes you make, the automation makes it happen.

The plan is to have this post as a central starting point for people looking to achieve a similar setup. There are loads of people who blog on how to make this happen and what works for them, but often there is no actual implementation they can share. I want to give you the examples and give you (a copy of) the code used as well.

##### Note: Since most of these items need to be running in a separate org, I created them on my [robs-tests org](https://github.com/robs-tests/).

# Scenario 1: Automate user onboarding and repository creation

For one of our trainings we invite all trainees into an organization and create teams and repositories for them. Doing this with the UI is cumbersome and error-prone. We want to automate this process so that someone can edit a yaml file, lint it, and approve it through a pull request, ensuring you always need another person to verify the incoming changes. After merging, a workflow starts that makes the new situation happen.

The steps that I want to have are as follows:

1. Add new users and teams to the users.yml file
2. Create a PR
3. The pull\_request.yml workflow checks if:
   - The user file is valid yaml
   - The user is a valid GitHub handle
   - The user is already a member of the organization
4. 
After merging, the user-management.yml workflow runs and:
- Creates the team if needed
- Adds the user to the org
- Creates the repository `attendee-<userhandle>`
- Adds the user to the repo
- Adds the user to the team
- Adds the team to the repo

Link to repo: [robs-tests/user-test-repo](https://github.com/robs-tests/user-test-repo)

## Step 1: define a yaml structure and parse it

In this example I want to start with a simple structure, parse it, and then loop through the results. You can do this in any format you want. I thought of doing this in json for example, but that gives you a lot of overhead with all the extra double quotes and it is a bit harder to read. It would be easy to link a user to the wrong team, for example. I knew I could parse yaml with a library, and that is what I went with: it is compact, with no extra characters around the content.

Since for our trainings we usually have a max of 20 people in the group, the entire team and their users will fit on the screen without scrolling.

### yaml format:

This is the format I settled on for now: there is a list of teams and in each team there is a list of users.

```
teams:
  - name: team01
    users:
      - rajbos
      - Maxine-Chambers
```

Reading that from parsing the yaml gives me two loops to create: for each team and then for each user, do 'x'.

## Step 2: define the way you want to build your workflow

For most languages there will be a library available to parse the yaml, so it becomes a choice of what you would like to use for the automation. It depends on what your team already knows and how easy you want to be able to test this. These days I skip having a hard-to-manage setup and go for something really simple: [github-script](https://github.com/actions/github-script). This is a JavaScript Action that you can give your own script file, and it will execute that for you. Inside your script you then get access to the GitHub contexts with authenticated clients and calls for all the APIs you need.
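Once the yaml has been parsed, the structure from Step 1 is just plain arrays and objects, and the two loops are straightforward. Here is a minimal sketch: the parsed result is inlined instead of calling the real `yaml` package, and the action strings are placeholders for the real API calls.

```javascript
// What yaml.parse would return for the users.yml example above (inlined here
// so the sketch has no dependencies); the action strings are placeholders.
const parsed = {
  teams: [
    { name: "team01", users: ["rajbos", "Maxine-Chambers"] },
  ],
};

const actions = [];
for (const team of parsed.teams) {
  // e.g. create the team if it does not exist yet
  actions.push(`ensure-team:${team.name}`);
  for (const user of team.users) {
    // e.g. validate the handle, invite the user, add them to the team
    actions.push(`add-user:${user}->${team.name}`);
  }
}

console.log(actions);
```

In the real workflow the pushes are replaced by the authenticated `github.rest` calls described below.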
You can even run it with an [access token](/blog/2022/01/03/GitHub-Tokens) from a GitHub App (since we only use the organization level API’s here, this will work). [github-script](https://github.com/actions/github-script) also helps with having your code in JavaScript already: if you later want to make this a building block and make an action out of it, you are ready to go! ## Step 3: the PR workflow In the [PR workflow](https://github.com/robs-tests/user-test-repo/blob/main/.github/workflows/pull_request.yml) I want to verify that: - the yaml file is valid yaml - the handles in the file are valid GitHub handles (we sometimes get e-mail addresses or typos in the handles) ``` steps: - uses: actions/checkout@v2 - uses: actions/setup-node@v2 with: node-version: 14 - run: npm install yaml - uses: actions/github-script@v5 name: Run scripts with: github-token: ${{ secrets.GH_PAT }} script: | const yaml = require('yaml') const repo = "${{ github.repository }}".split('/')[1] const owner = "${{ github.repository_owner }}" const userFile = "users.yml" const script = require('${{ github.workspace }}/src/check-pr.js') const result = await script({github, context, owner, repo, userFile, yaml}) console.log(``) console.log(`End of workflow step`) ``` In this example you see that we are: - checking out the repository - setup node - so that we can install a node package that can parse the yaml - then we use the `github-script` action to execute the script - load the file `check-pr.js` that will do the work - we pass in the info needed for the script to have all the context it needs In the `check-pr.js` file you can see that we are: - loading the contents of the yaml file in the current branch: `github.rest.repos.getContents({owner, repo, path, ref})` - get the list of current teams: `let existingTeams = await getExistingTeams(owner)` - parse the content of the yaml file: `const parsed = yaml.parse(content)` And then we can loop through each element in the arrays: ``` for each team: 
  for each user in team:
    check if user exists
```

Verifying if a user exists can be done with a call to this API:
https://api.github.com/users/${userHandle}

Checking if that user is already a member of the org can be done with a call to this API:
https://api.github.com/orgs/${orgName}/members/${userHandle.login}

## Step 4: the user-management workflow

When the PR is merged to main, another workflow executes: `user-management.yml`. This has the same setup: install the npm packages, load the script and run it.

From `load-users.js`:

```
// send an invite to the user on the org level:
await addUserToOrganization(user, organization)

// create a new repository for this user:
const repoName = `attendee-${user.login}`
await createUserRepo(organization, repoName)

// give the user admin access to the repo:
await addUserToRepo(user, organization, repoName)

// add the user to the team for the day of the training:
await addUserToTeam(user, organization, team)

// add the team to the repo (so that the rest of the team can help with PR's):
await addTeamToRepo(organization, repoName, team)
```

# Final thoughts

With this setup, you now have a complete example of how you can use GitHub Actions to automate the process of adding users to an organization and preparing things like teams and repositories for them.

You can build on top of this with, for example:

- a setup that has a folder structure that defines the hierarchy of teams and users
- each folder could then have different code owners that define who needs to approve the PR (giving the team itself self-service on who to add)
- more properties in the `users.yml` in case you need them
rob_bos
1,021,106
Email Account Verification In Node JS & React JS | MERN Stack Project
What's up guys! Today we gonna implement how to verify user email after signup in MERN stack. we...
0
2022-03-13T05:46:15
https://dev.to/cyberwolves/email-account-verification-in-node-js-react-js-mern-stack-project-1hlk
javascript, node, react, webdev
What's up guys! Today we're going to implement email verification after signup in the MERN stack. We'll use the nodemailer package to send email with Node.js, and we'll also learn how to implement this on the frontend with the React framework.

{% youtube T6rElSLldyc %}
cyberwolves
1,021,256
All about terraform Modules - Create & Publish your own modules
DAY 21 - All about terraform Modules - Create &amp; Publish your own modules - Day Twenty...
16,989
2022-03-15T01:54:47
https://dev.to/aws-builders/all-about-terraform-modules-create-publish-your-own-modules-502b
devops, terraform, productivity, cloud
## DAY 21 - All about terraform Modules - Create & Publish your own modules - Day Twenty One

[![Image tweet](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jfjzxg64r7q0i8ra6b8t.png)![Image Cover](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ni1ut3598m6a6gif80mq.png)](https://twitter.com/intent/tweet?text=All%20about%20terraform%20Modules%20-%20Create%20%26%20Publish%20your%20own%20modules%20%2C%20a%20blog%20by%20%40anuvindhs%0A%23devops%20%23IAC%20%23aws%20%23cloud%20%23azure%20%23googlecloud%0A%0Ahttps%3A%2F%2Fdev.to%2Faws-builders%2Fall-about-terraform-modules-create-publish-your-own-modules-502b%20)

[100 days of Cloud on GitHub](https://github.com/anuvindhs/100daysofcloud) - [Read On iCTPro.co.nz](https://ictpro.co.nz) - [Read on Dev.to](https://dev.to/anuvindhs)

------

## Using modules

Public Modules

- Terraform registry
- Syntax _**NameSpace/Name/Provider**_
- initiated via `terraform init`

Private Modules

- Terraform cloud
- Syntax _**Hostname/NameSpace/Name/Provider**_
- initiated via `terraform login`

## Publishing Modules

![Image apply](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z04ec6g43w7jw3341qbb.png)

### Public Modules

Modules are published to GitHub and have to follow a specific naming pattern. For example, if you are creating a module for an AWS VPC, the module has to be named **terraform-aws-vpc**. You publish your module to the Terraform registry via your GitHub account.

#### Some Features

- Supports versioning
- Generates documentation
- Version history
- Shows examples
- Renders the available readme.md

### Verified Modules

Verified modules are reviewed by HashiCorp and actively maintained by contributors to stay up-to-date and compatible with both Terraform and their respective providers. The verified badge appears next to modules that are published by a verified source.
![Image badge](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wclckr7a71cg6xc679o9.png)

### Standard Module Structure

Let's get an idea of the structure

- **main.tf** - This must exist in the root directory. In other words, this file is the entry point for your module.
- **variables.tf**, **outputs.tf** - All variables and outputs should have one- or two-sentence descriptions that explain their purpose. (Variables can be passed in; outputs are output values)
- **readme.md** - description in a readme file.
- **LICENSE** - The license under which this module is available
- **Nested Module** - Optional, must be inside the modules/ directory

#### Structure example

```
$ tree complete-module/
.
├── README.md
├── main.tf
├── variables.tf
├── outputs.tf
├── ...
├── modules/
│   ├── nestedA/
│   │   ├── README.md
│   │   ├── variables.tf
│   │   ├── main.tf
│   │   ├── outputs.tf
│   ├── nestedB/
│   ├── .../
├── examples/
│   ├── exampleA/
│   │   ├── main.tf
│   ├── exampleB/
│   ├── .../
```

## Building your own Modules

![Image terraform modules](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iqoowetzbl5i4pmo7u93.png)

Let's use **VSCode** (Visual Studio Code) for this project. Make sure the Terraform Visual Studio Code extension is installed.

### File Structure

- Let's create a new directory and name it **terraform-aws-ec2**
- Building the file structure
  - Let's create the file **main.tf**. (this file is exactly the same as the code in the README.md; you can use it initially to test the IAC before we publish to the registry)
  - Now create a sub folder named **terraform-aws-ec2module-tutorial** and also create these files: **outputs.tf, variables.tf, readme.md, LICENSE**
- I am using the GitHub [Terraform AWS modules](https://github.com/terraform-aws-modules) by [Anton Babenko](https://github.com/antonbabenko) (The 🪄 Magic 🔮Man) as reference for this project.
![Image terraform folder structure](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ufzkcdfswqe1aabr2m0q.png)

- Check out the links below for the contents of the files
  - [LICENSE](https://raw.githubusercontent.com/anuvindhs/terraform-aws-ec2module-tutorial/main/LICENSE)
  - [main.tf](https://raw.githubusercontent.com/anuvindhs/terraform-aws-ec2module-tutorial/main/main.tf)
  - [outputs.tf](https://raw.githubusercontent.com/anuvindhs/terraform-aws-ec2module-tutorial/main/outputs.tf)
  - [README.md](https://raw.githubusercontent.com/anuvindhs/terraform-aws-ec2module-tutorial/main/README.md)
  - [userdata.yaml](https://raw.githubusercontent.com/anuvindhs/terraform-aws-ec2module-tutorial/main/userdata.yaml)
  - [variables.tf](https://raw.githubusercontent.com/anuvindhs/terraform-aws-ec2module-tutorial/main/variables.tf)
  - [versions.tf](https://raw.githubusercontent.com/anuvindhs/terraform-aws-ec2module-tutorial/main/versions.tf)
  - [.gitignore](https://raw.githubusercontent.com/anuvindhs/terraform-aws-ec2module-tutorial/main/.gitignore)

**Here we are going to build a module that creates an EC2 instance preloaded with Apache, and publish it to the Terraform registry with GitHub.**

### Let's Build a Module

Once you have created all the files and copied in all the code as above, we will create a Terraform module that creates an EC2 instance.

#### An EC2 Apache server

The goal of the module is to create an EC2 instance with Apache installed on it.

### Create a public GitHub repo

- Name the repo **terraform-aws-ec2module-tutorial**.
- In your project folder, cd into **terraform-aws-ec2module-tutorial**.
- Initialise, add all, commit and push the code.
- Add a **version tag**.
```git
git tag v1.0.0
git push --tags
```

### Connecting to the Terraform Registry

- Go to the [Terraform Registry](https://registry.terraform.io/)
- Click **Sign-in**, then click **Sign in with GitHub** ![Image GitHub sign in terraform registry](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/plynsa0ovvn3xae05vdp.png)
- **Authorize HashiCorp** and sign in. ![Image Authorize HashiCorp](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tk45qd1zm9m7pjkxh4tx.png)
- Go and select **Publish module** ![Image Publish module](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9gfe3r8fbd9x5gu55ot1.png)
- Select our module, agree to the terms of use and click **Publish Module**. ![Image publish module](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hnre4hdsoarfbqg4v7ei.png)
- 🎉**Congratulations**🎉 you have successfully published your module to the Terraform registry ![Image publish](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/olmisslrxvobdqh9484e.png)

----------

**✅Connect with me on [Twitter](https://twitter.com/intent/follow?ref_src=anuvindhs&screen_name=anuvindhs)
🤝🏽Connect with me on [Linkedin](https://www.linkedin.com/in/anuvindhs)
🧑🏼‍🤝‍🧑🏻 Read more posts on [dev.to](https://dev.to/anuvindhs) or [iCTPro.co.nz](https://ictpro.co.nz)
💻 Connect with me on [GitHub](https://github.com/anuvindhs)**

{% user anuvindhs %}
anuvindhs
1,021,660
Useful tensorflow/keras callbacks for model training
Here are some callbacks that I have found to be very useful when training machine learning models...
0
2022-03-13T23:06:45
https://dev.to/catasaurus/useful-tensorflowkeras-callbacks-for-model-training-3e5n
deeplearning, machinelearning, tensorflow, python
Here are some callbacks that I have found to be very useful when training machine learning models using python and tensorflow: #### Number one: Early stopping Keras early stopping (https://keras.io/api/callbacks/early_stopping/) has to be my favorite callback. With it you can define when the model should stop training if it is not improving. An example for usage is: ``` earlystopping = tf.keras.callbacks.EarlyStopping( monitor="val_loss", min_delta=0.001, patience=5, verbose=1, restore_best_weights=True, ) ``` This will stop the model's training once it does not improve at least 0.001 in loss for 5 epochs. It will then restore the model's weights to the weights on the best epoch. Just like any callback make sure to include it during training like `model.fit(some_data_X, some_data_y, epochs=some_number, callbacks=[earlystopping, some_other_callback])` #### Number two: Learning rate scheduler Keras learning rate scheduler (https://keras.io/api/callbacks/learning_rate_scheduler/) can be very useful if you are having problems with your learning rate. With it you can reduce or increase learning rate during training based on a number of conditions. An example: ``` def scheduler(epoch, lr): return lr * tf.math.exp(-0.5) learningratecallback = tf.keras.callbacks.LearningRateScheduler(scheduler) ``` The scheduler function is where you can define your logic for how the learning rate should decrease or increase. `learningratecallback` just wraps your function in a `tf.keras.callbacks.LearningRateScheduler()`. Don't forget to include it in `model.fit()`! #### Last but not least, number three: Custom callbacks Custom callbacks (https://keras.io/guides/writing_your_own_callbacks/) are great if you need to do something during training that is not built in to keras + tensorflow. I won't go in depth as there is a lot you can do. Basically you have to define a class that inherits from `keras.callbacks.Callback`. 
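To make the mechanism concrete without pulling in tensorflow, here is a dependency-free sketch of the dispatch pattern the training loop uses: it simply calls a hook method on every registered callback object. This is an illustration of the pattern, not the real keras internals, and `fake_fit` is a made-up stand-in for `model.fit`.

```python
# Minimal sketch of the callback dispatch pattern (not real keras internals):
# the training loop calls a hook method on every registered callback.

class Callback:
    def on_epoch_end(self, epoch, logs=None):
        pass  # default: do nothing

class PrintLossCallback(Callback):
    def __init__(self):
        self.seen = []

    def on_epoch_end(self, epoch, logs=None):
        self.seen.append((epoch, logs["loss"]))
        print(f"epoch {epoch}: loss={logs['loss']}")

def fake_fit(epochs, callbacks):
    # stand-in for model.fit: pretend the loss halves each epoch
    loss = 1.0
    for epoch in range(epochs):
        loss /= 2
        for cb in callbacks:
            cb.on_epoch_end(epoch, logs={"loss": loss})

cb = PrintLossCallback()
fake_fit(3, callbacks=[cb])
```

The real `keras.callbacks.Callback` works the same way, just with many more hooks and richer `logs` contents.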
There are many different functions that you can define that will be called at different times during the training (or testing and prediction) cycle. A simple example would be:

```
class Catsarecoolcallback(keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        print('cats are cool!')

callback = Catsarecoolcallback()
```

This (as you can probably tell) prints out `cats are cool!` every time an epoch ends.

#### Hope you learned something while reading this!
catasaurus
1,021,686
Do Re-entrant Node.js Functions have Unstable Arguments?
I have code (kraken-grid on github) which makes requests to the Kraken API and awaits the responses. ...
0
2022-03-14T00:06:40
https://dev.to/dscotese/do-re-entrant-nodejs-functions-have-unstable-arguments-597n
node
I have code (kraken-grid on github) which makes requests to the Kraken API and `await`s the responses. It runs periodically, and I noticed that Kraken's API slowed down enough for a second run of the code to happen while the first run was `await`ing a response. The code handles TIMEOUT from Kraken by trying again in 5 seconds.

It seems to me that a call to `order` (a function I wrote) from the first run got its arguments clobbered by the second run. It passes an array [first element is a string, second is an object with properties for all the values the API is to use] to `kapi()`, which calls itself again with the same array after waiting five seconds. The result is that when the API (`AddOrder`) was called the second time (5 seconds after a TIMEOUT response), it used (at least) two argument values that differed from those with which it was first called.

The code can be viewed at https://github.com/dscotese/kraken-grid/blob/main/index.js. I'm trying to understand how it happened so that I can prevent it. My suspicion is that Node.js creates an internal object for each variable and does not consider the arguments to a function call from one frame of execution to be different from the arguments when it's called from a different frame. I see that three of the passed-in arguments are re-assigned (`price = Number(price)` for example) and the two that are changing are among them. I use the same names, but perhaps the interpreter is creating new (implied `var`) declarations and that is why re-entrant calls alter their values.

I updated the code (not yet in github) so that new variables (`let` declarations) are used. If someone can confirm that this will most likely prevent the problem (and why), I'd appreciate it!
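One way this can happen - sketched below with hypothetical names, not the actual kraken-grid code - is two overlapping calls sharing the same array object by reference, so the retry after the delay reads values a later run has already overwritten:

```javascript
// Hypothetical sketch - not the actual kraken-grid code.
// A single array object is shared by reference across overlapping calls,
// so the delayed "retry" sees whatever the latest caller wrote into it.
const sharedArgs = [];

async function kapi(args) {
  // simulate a TIMEOUT + retry: by the time this resumes,
  // another run may have mutated the same array object.
  await new Promise((resolve) => setTimeout(resolve, 10));
  return args.slice(); // snapshot of what the retry actually used
}

async function order(price) {
  sharedArgs[0] = price; // clobbers whatever the previous run stored
  return kapi(sharedArgs);
}

async function main() {
  // two runs overlap, like the periodic runs described above
  const [first, second] = await Promise.all([order(100), order(200)]);
  console.log(first, second); // both are [200] - the first run's 100 was clobbered
}

main();
```

Note that each invocation of a function gets its own parameter bindings in JavaScript, so re-assigning a parameter like `price = Number(price)` cannot by itself leak between calls; a shared, mutated object or array is the usual culprit, and taking a copy of the arguments before the retry avoids it.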
dscotese
1,021,809
Latest Updates on React 18
This post is intended to summarize React 18 discussions on GitHub. At the time of writing React 18...
0
2022-03-14T02:20:48
https://ageek.dev/react-18
react, javascript, webdev, news
This post is intended to summarize [React 18 discussions on GitHub](https://github.com/reactwg/react-18/discussions). At the time of writing React 18 has hit [release candidate](https://www.npmjs.com/package/react/v/18.0.0-rc.2) version. To try React 18 you need to update to the latest React 18 release with the additional step of switching from `ReactDOM.render` to `ReactDOM.createRoot`. ```bash npm install react@rc react-dom@rc ``` ```jsx import * as ReactDOMClient from 'react-dom/client' import App from './App' const container = document.getElementById('app') const root = ReactDOMClient.createRoot(container) root.render(<App />) ``` React 18 includes out-of-the-box improvements to existing features. It is also the first React release to add support for **Concurrent Features**, which let you improve the user experience in ways that React didn't allow before. ## [New Root API](https://github.com/reactwg/react-18/discussions/5) In React, a root is a pointer to the top-level data structure that React uses to track a tree to render. When using legacy `ReactDOM.render`, the root was opaque to the user because we attached it to the DOM element, and accessed it through the DOM node, never exposing it to the user. ```jsx import * as ReactDOM from 'react-dom' import App from 'App' const container = document.getElementById('app') // Initial render. ReactDOM.render(<App tab="home" />, container) // During an update, React would access // the root of the DOM element. ReactDOM.render(<App tab="profile" />, container) ``` React 18 introduces new Root API is called with `ReactDOM.createRoot` which adds all of the improvements of React 18 and allows you to use concurrent features. ```jsx import * as ReactDOMClient from 'react-dom/client' import App from 'App' const container = document.getElementById('app') // Create a root. const root = ReactDOMClient.createRoot(container) // Initial render: Render an element to the root. 
root.render(<App tab="home" />)

// During an update, there's no need to pass the container again.
root.render(<App tab="profile" />)
```

This change allows React to remove the `hydrate` method and replace it with an option on the root, and to remove the render callback, which does not make sense in a world with partial hydration.

```jsx
import * as ReactDOMClient from 'react-dom/client'

import App from 'App'

const container = document.getElementById('app')

// Create *and* render a root with hydration.
const root = ReactDOMClient.hydrateRoot(container, <App tab="home" />)
// Unlike with createRoot, you don't need a separate root.render() call here
```

## [Automatic Batching](https://github.com/reactwg/react-18/discussions/21)

Batching is when React groups multiple state updates into a single re-render for better performance, because it avoids unnecessary re-renders. However, React hasn't been consistent about when it batches updates: React only batched updates during React event handlers. Updates inside of promises, setTimeout, native event handlers, or any other event were not batched in React by default.

React 18 does more batching by default: all updates will be automatically batched, no matter where they originate from.

```jsx
function handleClick() {
  setCount((c) => c + 1)
  setFlag((f) => !f)
  // React will only re-render once at the end (that's batching!)
}
```

But remember React only batches updates when it's generally safe to do so. For example, React ensures that for each user-initiated event like a click or a key press, the DOM is fully updated before the next event. This ensures, for example, that a form that disables on submit can't be submitted twice.

## [Concurrent Features](https://github.com/reactwg/react-18/discussions/4)

React 18 will add new features such as startTransition, useDeferredValue, concurrent Suspense semantics, SuspenseList, and more.
To power these features, React added concepts such as cooperative multitasking, priority-based rendering, scheduling, and interruptions. These features unlock new performance and user experience gains by more intelligently deciding when to render (or stop rendering) subtrees in an app. - [startTransition](https://github.com/reactwg/react-18/discussions/41): lets you keep the UI responsive during an expensive state transition. - `useDeferredValue`: lets you defer updating the less important parts of the screen. - `<SuspenseList>`: lets you coordinate the order in which the loading indicators appear. - Streaming SSR with selective hydration: lets your app load and become interactive faster. ## [Support Suspense in SSR](https://github.com/reactwg/react-18/discussions/37) [Suspense](https://reactjs.org/docs/concurrent-mode-suspense.html) component lets you wait for some code to load and declaratively specify a loading state (like a spinner) while we’re waiting, but not available on the server. One problem with SSR today is that it does not allow components to wait for data. With the current API, by the time you render to HTML, you must already have all the data ready for your components on the server. React 18 offers two major features for SSR by using Suspense component. The improvements themselves are automatic inside React and we expect them to work with the majority of existing React code. This also means that `React.lazy` just works with SSR now. - **Streaming HTML**: lets you start emitting HTML as early as you’d like, streaming HTML for additional content together with the `<script>` tags that put them in the right places. - **Selective Hydration**: lets you start hydrating your app as early as possible, before the rest of the HTML and the JavaScript code are fully downloaded. It also prioritizes hydrating the parts the user is interacting with, creating an illusion of instant hydration. 
There are different levels of support depending on which API you use:

- `renderToString`: Keeps working (with limited Suspense support).
- `renderToNodeStream`: Deprecated (with full Suspense support, but without streaming).
- `renderToPipeableStream`: New and recommended (with full Suspense support and streaming).

## [Behavioral Changes to Suspense](https://github.com/reactwg/react-18/discussions/7)

React has included basic support for Suspense since version 16, but it has been limited — it doesn't support delayed transitions, placeholder throttling, or SuspenseList. Suspense works slightly differently in React 18 than in previous versions. Technically, this is a breaking change, but it won't impose a significant migration burden on authors migrating their apps.

```jsx
<Suspense fallback={<Loading />}>
  <ComponentThatSuspends />
  <Sibling />
</Suspense>
```

The difference is how a suspended component affects the rendering behavior of its siblings:

- Previously, the Sibling component was immediately mounted to the DOM and its effects/lifecycles fired. Then React hid it.
- In React 18, the Sibling component is not mounted to the DOM. Its effects/lifecycles are also NOT fired until ComponentThatSuspends resolves.

In previous versions of React, there was an implied guarantee that a component that starts rendering will always finish rendering. In React 18, what React does instead is interrupt the siblings and prevent them from committing. React waits to commit everything inside the Suspense boundary — the suspended component and all its siblings — until the suspended data has resolved.
ageekdev