# Collaborative Virtual Teams

> Some employers are demanding a return to the office. Is this really good for them or their employees?

- id: 1,371,100
- published: 2023-02-19T01:33:12
- url: https://dev.to/cheetah100/collaborative-virtual-teams-58ng
- tags: remote, home, virtual
---
title: Collaborative Virtual Teams
published: true
description: Some employers are demanding a return to the office. Is this really good for them or their employees?
tags: remote, home, virtual
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pmcgk6u7q7ypn5q7clpb.png
# Use a ratio of 100:42 for best results.
# published_at: 2023-02-19 00:52 +0000
---

Working as part of a collaborative virtual team is normal for me now, having spent several years running teams around the world. The virus caused many companies to adapt by allowing employees to work from home, but many may have seen this as a temporary measure. Now that the virus is clearly in the rear-view mirror, they are aiming to return people to the office. However, it is possible to learn from this experience and see that it introduced us to ways of working that are more efficient, less wasteful, and give businesses greater flexibility.

{% embed https://www.youtube.com/watch?v=F19d7jyONVs %}

My experience of the last several years shows that it is possible to run high-performing distributed global teams. While it is true that they introduce challenges, especially around communication and team building, the benefits vastly outweigh them.

Running distributed teams may also bring traditional practices into question. The traditional office, where people turn up for specific hours to work in small boxes, followed the same pattern as industrial production. So does the approach of having managers who oversee the disenchanted and beat them into performing their duties. Needless to say, this is not a model that fits complex mental tasks well. In 2004, when I employed my own team, my concern wasn't with developers meeting specific hours worked, but rather with meeting delivery. The best way to ensure delivery is to ensure developers are empowered rather than micromanaged.

It is also critical that they have a sense of ownership, rather than simply following orders and operating as a cog in the machine. This means giving developers a degree of artistic expression, developing solutions they have a hand in designing. Some businesses may be terrified that employees will slack off if given the opportunity. One company I briefly flirted with insisted on a continuous open video stream. This is essentially just transferring the same old approach of control to remote work. Needless to say, if you don't trust your developers and can't motivate them through positive team development, a work-from-home approach isn't a great fit; but then neither is working in the office.

The best team I've worked with operated on a distributed basis. While some of the team were office based, we all operated through video conferencing, which bound us into one high-functioning team rather than separate geographically based teams. Regardless of whether you work in an office or at home, the success factors were all around how we built an effective team and organized communications. Teams are more than a bunch of colocated employees. It is this aspect, I think, that will drive success regardless of where people work.
Author: cheetah100
# How To Make a Calculator Using HTML, CSS & JavaScript | HTML Calculator 🧮

> We are going to make a dark theme♠️ calculator using simple HTML,CSS & JavaScript.🧑‍💻 The goal...

- id: 1,371,477
- published: 2023-02-19T11:32:24
- url: https://dev.to/kaizendeveloper/how-to-make-a-calculator-using-html-css-javascript-html-calculator-3a2i
We are going to make a dark theme♠️ calculator using simple HTML, CSS & JavaScript.🧑‍💻 The goal is not to get into complex logic or CSS; the main goal is to keep it simple and write as little code as possible to make a beautiful UI with basic calculator functionality.

1. Simple HTML Markup

```
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta http-equiv="X-UA-Compatible" content="IE=edge" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Calculator</title>
    <link rel="stylesheet" href="style.css" />
    <script src="main.js" defer></script>
  </head>
  <body>
    <div id="calculator">
      <input type="text" name="display" id="display" disabled />
      <br />
      <button id="ac" class="operator">AC</button>
      <button id="de" class="operator">DEL</button>
      <button id="." class="operator">.</button>
      <button id="/" class="operator">/</button>
      <br />
      <button id="7">7</button>
      <button id="8">8</button>
      <button id="9">9</button>
      <button id="*" class="operator">*</button>
      <br />
      <button id="4">4</button>
      <button id="5">5</button>
      <button id="6">6</button>
      <button id="-" class="operator">-</button>
      <br />
      <button id="1">1</button>
      <button id="2">2</button>
      <button id="3">3</button>
      <button id="+" class="operator">+</button>
      <br />
      <button id="00">00</button>
      <button id="0">0</button>
      <button id="=" class="equal operator">=</button>
    </div>
  </body>
</html>
```

2. To handle the functionality, one JavaScript event listener is required

```
const display = document.querySelector("#display");
const button = document.querySelectorAll("button");

button.forEach((btn) => {
  btn.addEventListener("click", () => {
    if (btn.id === "=") {
      display.value = eval(display.value);
    } else if (btn.id === "ac") {
      display.value = "";
    } else if (btn.id === "de") {
      display.value = display.value.slice(0, -1);
    } else {
      display.value += btn.id;
    }
  });
});
```

3. Making the calculator beautiful with CSS

```
body {
  text-align: center;
  margin: 0;
  background: #bf4d5d;
  display: flex;
  align-items: center;
  justify-content: center;
  height: 100vh;
}

#calculator {
  max-width: 360px;
  background-color: #0d1b2a;
  border-radius: 10px;
  padding: 20px;
  box-shadow: 0 10px 20px rgba(0, 0, 0, 0.19), 0 6px rgba(0, 0, 0, 0.23);
}

#display {
  width: 100%;
  height: 56px;
  font-size: 20px;
  text-align: right;
  margin-bottom: 10px;
  padding: 20px;
  border-radius: 5px;
  color: #e0e1dd;
  background-color: rgba(13, 27, 42, 0.3);
  border: none;
  box-sizing: border-box;
}

.operator {
  color: #219ebc;
}

#ac {
  color: #bf4d5d;
}

.equal {
  width: calc(50% - 10px);
}

button {
  width: calc(25% - 10px);
  height: 50px;
  font-size: 20px;
  margin: 10px 5px;
  border: none;
  border-radius: 5px;
  float: left;
  background-color: rgb(13, 27, 42);
  color: #e0e1dd;
  -webkit-box-shadow: -1px 1px 30px -5px rgba(0, 0, 0, 0.2);
  -moz-box-shadow: -1px 1px 30px -5px rgba(0, 0, 0, 0.2);
  box-shadow: -1px 1px 30px -5px rgba(0, 0, 0, 0.2);
}

button:active {
  transform: scale(1.05);
  box-shadow: 0 2px 4px 1px rgba(0, 0, 0, 0.19), 0 1.5px 1.5px rgba(0, 0, 0, 0.23);
}
```

🛖 Please check out the git repository: https://github.com/kaizen-developer/simple-calculator

📺 Please watch the video on YouTube that shows the same steps: https://youtu.be/HjA1m7xsARo

I hope you enjoy creating a calculator this way.
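A side note on `eval()`: it will execute any JavaScript that ends up in the display, not just arithmetic. As a hedged sketch (the `safeEvaluate` helper below is my own addition, not part of the original tutorial), you could whitelist the allowed characters before evaluating:

```javascript
// Hypothetical replacement for the bare eval() call above: reject any
// input that contains characters other than digits, arithmetic operators,
// dots, parentheses, and spaces before evaluating it.
function safeEvaluate(expression) {
  if (!/^[0-9+\-*/.() ]+$/.test(expression)) {
    throw new Error("Invalid characters in expression");
  }
  // The whitelist above blocks identifiers, so no function calls or
  // variable accesses can sneak into the evaluated string.
  return Function('"use strict"; return (' + expression + ');')();
}
```

In the click handler, `display.value = eval(display.value)` would then become `display.value = safeEvaluate(display.value)`.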
Author: kaizendeveloper
# Why do browser consoles return undefined? Explained

> Overview: As web developers, we are blessed with consoles in the browsers to interact and...

- id: 1,371,537
- published: 2023-02-19T12:35:27
- url: https://dev.to/sobitp59/why-do-browser-consoles-return-undefined-explained-26lm
## Overview

As web developers, we are blessed with consoles in the browsers to interact with and debug our web pages and applications. The most common type of console is the JavaScript console, which is found in most modern web browsers. These consoles are built into the web browser and provide a command-line interface (CLI) for executing JavaScript commands and for inspecting and debugging the state of our web pages. In this blog, we will try to answer the question "Why does the browser console return undefined?".

## The Browser Console

The first time I interacted with the console (press F12) was while following a course on JavaScript on Udemy. I performed a lot of operations, and getting the results in real time in the console felt so convenient. But one thing that constantly bothered me was that some operations returned the result as expected (adding two numbers like `10 + 39` gives `49`), while in other cases, such as after defining a function or a variable, the console printed `undefined`.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/457rdl8g8s2ua13cqje9.png)

So, let's understand the reason behind getting undefined. But before that, we have to know what `undefined` actually is.

## Understanding Undefined

In JavaScript, `undefined` is a primitive value that is assigned to variables that have been declared but not initialized with a value, or to function parameters that have not been provided with an argument.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fgf681ovqizww1ejmkbf.png)

We can say that `undefined` acts like a placeholder in memory for variables and function parameters until they are assigned a value or passed an argument.

## Understanding the working of REPL and Console

The way code is executed in the console of a web browser is similar to the Read-Eval-Print Loop (REPL) methodology that is commonly used in interactive programming environments. However, the console of a web browser is not strictly a REPL environment, but it does share some similarities in terms of how code is entered, executed, and displayed. This is different from how we code in a traditional programming environment, where we typically have to save our code to a file, compile it (if necessary), and then run it.

Now, let's understand what each part of the abbreviation REPL tells us:

- Read: Hey! I am Read and I help in reading the input (code) that you enter. If your input is not syntactically correct, I will not be able to pass it to Eval.
- Eval: Heya! I am Eval (not evil) and I help in evaluating the input that you gave me via Read, so that I can determine the result of your input.
- Print: Helloo! I am Print and I help in displaying the result of the evaluation provided by Eval on the console.
- Loop: Hello! I am Loop and I help in taking you back to Read so that you can continue your chit-chat (giving input) with Read.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9mf1hwbclixfwy4zsj29.png)

So basically, it is a type of interactive programming environment that reads the code, evaluates it, prints the result (if there is one), and then waits for us to enter more code.

## Understanding the Reason Behind Undefined

We know that, because of the REPL methodology, the code gets executed immediately after we press ENTER. We also know there are three steps (Read, Evaluate and Print) before the loop is ready to read our code again. At the Print stage, the JavaScript console decides what should be printed.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vdcliki46omoc6tfuzax.png)

So, let's walk through the different scenarios that help us understand when the JavaScript console decides to print `undefined`, and when it doesn't.
### the console.log()

In JavaScript, every function returns a value, even if none is explicitly specified using the return keyword. If no return statement is used, the function automatically returns `undefined` by default. When we call `console.log()` with a value as an argument, the value is printed to the console and then `undefined` is printed, because `console.log()` itself does not have a return value.

```js
console.log('hello world!')
// hello world!
// undefined
```

It is designed to output messages to the console, but it doesn't return a value that could be used elsewhere in the code. So, after the value is printed to the console, the function returns undefined as its default return value.

### the explicit return type

We know that every function in JavaScript returns something; if nothing is specified, it returns undefined by default. So, let's tell the browser console explicitly what we want to return and see how it responds.

```js
function animeLists(){
  return ['naruto','attack on titans', 'one piece']
}
// undefined

animeLists()
// ▸(3) ['naruto', 'attack on titans', 'one piece']
```

When we run the above code in the console, at first we get undefined. This is because just defining a function does not produce a return value, so the console has nothing to display as the result of the function definition. The function definition is just stored in memory and is not executed until we call it. On the next line, when we call `animeLists`, since we are explicitly telling the console to return `['naruto', 'attack on titans', 'one piece']` using the `return` keyword, we only get the output and not `undefined`.

### the expressions

An expression in JavaScript is a combination of values, variables, and operators that can be evaluated to a single value. It can be as simple as a single value or as complex as a combination of multiple expressions.
```js
//01: Arithmetic Expression
2 + 3 * 4 // returns 14

//02: Function call Expression
Math.max(1, 2, 3) // returns 3

//03: Ternary operator Expression
x > y ? "x is greater" : "y is greater" // returns the greater one

//04: Boolean Expression
true === 1 // returns false
```

In all the examples above, the console does not display `undefined`, because after reading the expression, it gets evaluated and the result is immediately printed on the console. After printing the result, it loops back to Read and starts taking more input from us.

### the final example

If the examples above have given you some idea of how all this works, let's finally conclude by walking through the example below step by step. Before that, let's give "Read", "Eval", "Print" and "Loop" the aliases "Mr. Read", "Mr. Eval", "Mr. Print" and "Mr. Loop" respectively.

```js
// Run this code in the console
function greetUser(){
  const message = 'hey user! welcome to console';
  console.log(message);
  console.log('your total cost for the stay is : ');
  return 2100 - 100;
}
```

- Step 1: When we enter this input in the console and press ENTER, Mr. Read looks at our code and finds everything fine, so he passes it to Mr. Eval.
- Step 2: When Mr. Eval looks at the code, he finds that it is just a function definition, because the function hasn't been called yet. Since there is nothing to return, Mr. Eval produces undefined and sends it to Mr. Print.
- Step 3: The job of Mr. Print is to display whatever he receives from Mr. Eval on the console. This time it was undefined, so that is what he prints.
- Step 4: Since the result of the code so far has been printed, Mr. Loop tells Mr. Read, "hey Mr. Read! you can now take another input from the user". And this process continues.

So far we have only gotten undefined because we haven't called the function yet, so let's finally call `greetUser()`.
```js
greetUser(); // calling the function
```

The same process is followed this time as well:

- Step 1: After we call the function greetUser, Mr. Read reads the command we have given.
- Step 2: Mr. Eval evaluates our code, and since we are explicitly returning a value with the return keyword, we do not get undefined. After evaluation, Mr. Eval sends the result to Mr. Print.
- Step 3: Mr. Print displays the final result on the console.

```
hey user! welcome to console
your total cost for the stay is : 
2000
```

- Step 4: Again, Mr. Loop tells Mr. Read, "hey Mr. Read! you can now take another input from the user". And this process continues.

## Summary

- A console is a built-in tool in modern web browsers that provides a command-line interface for debugging and interacting with web pages.
- It is similar to the Read-Eval-Print Loop (REPL) methodology used in interactive programming environments, where:
  - Read: reads the input we enter.
  - Eval: evaluates the input and determines its result.
  - Print: prints the result on the console.
  - Loop: loops back to Read so that it can take another input.
- In JavaScript, every function returns a value; by default it is undefined. Since console.log() does not have a return value, the console prints undefined.
- If we explicitly state the return value in the code, we no longer get "undefined" as output.

I hope this blog helps you understand the basic concept behind the console returning undefined. If it does, do share it, and if you want to connect with me, say Hi! [here](https://twitter.com/sobit_prasad).
Author: sobitp59
# GitHub Webhook: our own “server” in NodeJs to receive Webhook events over the internet.

> We are writing an HTTP “server” in NodeJs to receive GitHub Webhook events. We use the ngrok program...

- id: 1,371,541
- published: 2023-02-19T12:42:56
- url: https://dev.to/behainguyen/github-webhook-our-own-server-in-nodejs-to-receive-webhook-events-over-the-internet-354d
- tags: git, webhook, node, ngrok
*We are writing an HTTP “server” in NodeJs to receive GitHub Webhook events. We use the ngrok program to make our server publicly accessible over the internet. Finally, we set up a GitHub repo, define a Webhook on this repo, then see how our now-public NodeJs server handles GitHub Webhook notifications.*

<a href="https://github.com/" title="GitHub" target="_blank">GitHub</a> enables subscribing to receive activities that occur on repositories. This is known as Webhooks. The official documentation page is <a href="https://docs.github.com/en/developers/webhooks-and-events/webhooks/about-webhooks" title="About webhooks" target="_blank">About webhooks</a>.

To subscribe, we must have a public HTTP endpoint which understands how to process notifications from GitHub Webhook events. We are going to write our own “server” application, in <a href="https://nodejs.org/en/" title="NodeJs" target="_blank">NodeJs</a>, which implements this endpoint: all it does is log the received notifications to the console. To make our “server” public, <a href="https://github.com/" title="GitHub" target="_blank">GitHub</a> recommends using <a href="https://ngrok.com/" title="ngrok" target="_blank">ngrok</a>, an application that makes localhost applications accessible over the internet.
<h2>Table of contents</h2>

<ul>
<li style="margin-top:10px;"><a href="#environments">Environments</a></li>
<li style="margin-top:10px;"><a href="#nodejs-server">Our “server” in NodeJs</a></li>
<li style="margin-top:10px;"><a href="#install-ngrok-ubuntu">Install ngrok for Ubuntu 22.10 kinetic</a></li>
<li style="margin-top:10px;"><a href="#github-webhook-test-our-server">Set up GitHub Webhook and test our server</a></li>
</ul>

<h3 style="color:teal;">
<a id="environments">Environments</a>
</h3>

In this post, I'm using <a href="https://ubuntu.com/download/desktop/thank-you?version=22.10&architecture=amd64" title="Ubuntu" target="_blank">Ubuntu</a> version <code>22.10 kinetic</code>, <a href="https://nodejs.org/en/" title="NodeJs" target="_blank">NodeJs</a> version <code>18.7.0</code>, and <a href="https://ngrok.com/" title="ngrok" target="_blank">ngrok</a> version <code>3.1.1</code>. Please note that both <a href="https://nodejs.org/en/" title="NodeJs" target="_blank">NodeJs</a> and <a href="https://ngrok.com/" title="ngrok" target="_blank">ngrok</a> are also available for Windows 10. All material discussed in this post should work in Windows 10 as well; I have not tested it, but I have done something similar (using Python) under Windows 10.

<h3 style="color:teal;">
<a id="nodejs-server">Our “server” in NodeJs</a>
</h3>

The primary objective is to demonstrate the flow, i.e. how everything works together, so I'm keeping it to a minimal demonstrable piece of functionality. After we subscribe to an event in GitHub, whenever that event occurs, GitHub will <code><strong>POST</strong></code> a notification to the <code>Payload URL</code> that we specify when setting up the Webhook. In the context of this post, the <code>Payload URL</code> is simply a <code>POST</code> route that we implement on the server.
The <code>Payload URL</code> method is extremely simple: it just prints to the console whatever GitHub gives it, and sends back a text response so that GitHub knows the notification has been successfully received. The default root route (<code>/</code>) is a <code>GET</code>, and simply sends back a <em>“Hello, World!”</em> message. I have the code running under <code>/home/behai/webwork/nodejs</code>.

```
Content of /home/behai/webwork/nodejs/package.json:
```

```javascript
{
    "name": "git-webhook",
    "version": "0.0.1",
    "dependencies": {
        "express": "latest",
        "body-parser": "latest"
    },
    "author": "Van Be Hai Nguyen",
    "description": "Learn Git Webhook Server"
}
```

We use the latest versions of the <a href="https://expressjs.com/" title="Express web framework" target="_blank">Express web framework</a> and the middleware <a href="https://www.npmjs.com/package/body-parser" title="body-parser" target="_blank">body-parser</a>. To install the packages, while within <code>/home/behai/webwork/nodejs</code>, run:

```
$ npm i
```

```
Content of /home/behai/webwork/nodejs/webhook.js:
```

```javascript
const express = require('express');
const bodyParser = require('body-parser');

const app = express();
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: false }));

app.get('/', function (req, res) {
    res.send('Hello, World!');
});

app.post('/git-webhook', function (req, res) {
    let data = req.body;
    console.log(data);
    res.send('Received!');
});

const port = 8008;
app.listen(port, function () {
    console.log(`App listening on port ${port}!`);
});
```

<ul>
<li style="margin-top:10px;">The server listens on port <code>8008</code>.</li>
<li style="margin-top:10px;">The <code>Payload URL</code>'s route is <code>/git-webhook</code>. That means the full URL on localhost is <code>http://localhost:8008/git-webhook</code>.</li>
<li style="margin-top:10px;">The <code>Payload URL</code> method's response is simply <code>Received!</code>.</li>
<li style="margin-top:10px;">The default route <code>http://localhost:8008</code> responds with <code>Hello, World!</code>.</li>
</ul>

Run it with:

```
$ node webhook.js
```

![01-060.png](https://behainguyen.files.wordpress.com/2023/02/01-060.png)

On the Ubuntu machine, <code>curl http://localhost:8008</code> on a command line, and <code>http://localhost:8008</code> via a browser, should respond with <code>Hello, World!</code>.

<h3 style="color:teal;">
<a id="install-ngrok-ubuntu">Install ngrok for Ubuntu 22.10 kinetic</a>
</h3>

The official page <a href="https://ngrok.com/docs/getting-started" title="Getting Started with ngrok" target="_blank">Getting Started with ngrok</a> describes the installation process for different operating systems. I skipped Step 1 of the instructions, since I already have a web server application of my own. In <a href="https://ngrok.com/docs/getting-started#step-2-install-the-ngrok-agent" title="Step 2: Install the ngrok Agent" target="_blank">Step 2: Install the ngrok Agent</a>, I just ran the long and scary-looking command listed under <strong>"For Linux, use Apt:"</strong>. I then completed all the instructions described under <a href="https://ngrok.com/docs/getting-started#step-3-connect-your-agent-to-your-ngrok-account" title="Step 3: Connect your agent to your ngrok account" target="_blank">Step 3: Connect your agent to your ngrok account</a>. Please read through <a href="https://ngrok.com/docs/getting-started#step-4-start-ngrok" title="Step 4: Start ngrok" target="_blank">Step 4: Start ngrok</a>. Since our server above listens on port <code>8008</code>, provided that it is still running, we start <code>ngrok</code> with:

```
$ ngrok http 8008
```

The screen should look like the following:

![02-060.png](https://behainguyen.files.wordpress.com/2023/02/02-060.png)

<code>https://53a0-58-109-142-244.au.ngrok.io/</code> is the public URL for our server above: anybody with this URL can access our server running on our private network.
The GitHub <code>Payload URL</code> is then <code>https://53a0-58-109-142-244.au.ngrok.io/git-webhook</code>.

<strong>Please note that, since we're running the free version of <code>ngrok</code>, every time we start <code>ngrok</code> we'll get a different URL!</strong> Please be mindful of that; for our learning purposes, this is not a problem.

From my Windows 10 machine, I request <code>https://53a0-58-109-142-244.au.ngrok.io/</code> using Postman (a browser would do, too), and I get the expected response, as seen:

![03-060.png](https://behainguyen.files.wordpress.com/2023/02/03-060.png)

<code>ngrok</code> also logs the request:

![04-060.png](https://behainguyen.files.wordpress.com/2023/02/04-060.png)

It appears <code>ngrok</code> works okay with our “server”. We can now set up a GitHub Webhook, and test our <code>https://53a0-58-109-142-244.au.ngrok.io/git-webhook</code> endpoint.

<h3 style="color:teal;">
<a id="github-webhook-test-our-server">Set up GitHub Webhook and test our server</a>
</h3>

Webhooks are local to each GitHub repo. We'll create a new repo, <code>learn-git</code>, for this purpose. When <code>learn-git</code> has been created, click on <code>Settings</code> in the top right-hand corner, then on <code>Webhooks</code> on the left-hand side, then on the <code>Add webhook</code> button at the top right. For <strong>Payload URL</strong>, specify <code>https://53a0-58-109-142-244.au.ngrok.io/git-webhook</code>.

For <strong>Content type</strong>, select <code>application/json</code>:

![05-060.png](https://behainguyen.files.wordpress.com/2023/02/05-060.png)

Leave everything else at the default, and click the green <code>Add webhook</code> button:

![06-060-1.png](https://behainguyen.files.wordpress.com/2023/02/06-060-1.png)

Note that under <strong>Which events would you like to trigger this webhook?</strong>, we leave it at the default <code>Just the <strong>push</strong> event.</code> That means this Webhook will notify our server only when we check something into this repo. GitHub tells us that it has sent our server (i.e. the <strong>Payload URL</strong>) a ping event:

![07-060.png](https://behainguyen.files.wordpress.com/2023/02/07-060.png)

According to the above screen, our server should have received this ping event with no problem: indeed, it logs some JSON data, and <code>ngrok</code> also logs a new POST request to the <code>/git-webhook</code> endpoint:

![08-060-a.png](https://behainguyen.files.wordpress.com/2023/02/08-060-a.png)

![08-060-b.png](https://behainguyen.files.wordpress.com/2023/02/08-060-b.png)

At this point, the repo is still empty. Let's do a check-in, i.e. a <code>push</code>; the Webhook should trigger. <code>D:\learn-git\</code> has some files. Let's initialise the repo and check them in. Note the check-in message <em>“Initial checking should have two files.”</em> (I meant <em>“check in”</em> 😂):

```
D:\learn-git>git init
D:\learn-git>git config user.name "behai-nguyen"
D:\learn-git>git config user.email "behai_nguyen@hotmail.com"
D:\learn-git>git add .
D:\learn-git>git commit -m "Initial checking should have two files."
D:\learn-git>git branch -M main
D:\learn-git>git remote add origin https://github.com/behai-nguyen/learn-git.git
D:\learn-git>git push -u origin main
```

The Webhook does trigger: our server logs the notification data (note that the logged message matches the check-in message above), and <code>ngrok</code> records another new POST request to the <code>/git-webhook</code> endpoint:

![09-060-a.png](https://behainguyen.files.wordpress.com/2023/02/09-060-a.png)

![09-060-b.png](https://behainguyen.files.wordpress.com/2023/02/09-060-b.png)

Back in the GitHub <code>learn-git</code> repo, go back to the Webhook area and click on the <code>payload link</code>, as pointed to by the arrow in the following screen:

![10-060.png](https://behainguyen.files.wordpress.com/2023/02/10-060.png)

Click on the <code>Recent Deliveries</code> tab; there are two (2) events, <em>push</em> and <em>ping</em>, as we've gone through above:

![11-060.png](https://behainguyen.files.wordpress.com/2023/02/11-060.png)

Pick the <code>push</code> event, then click on the <code>Response 200</code> tab. Under <strong>Body</strong>, we should see the text <code>Received!</code>, which is the response from our NodeJs server:

![12-060.png](https://behainguyen.files.wordpress.com/2023/02/12-060.png)

Note that the <code>Request</code> tab has two sections, <strong>Headers</strong> and <strong>Payload</strong>. The data that gets posted to our server is the <strong>Payload</strong> data: the GitHub Webhook documentation should help us understand what this data means, so we can use it correctly.

Pick a file in https://github.com/behai-nguyen/learn-git.git, edit it directly and commit. This should trigger a <code>push</code> event. It does; our server gets notified, and note that the messages match:

![13-060-a.png](https://behainguyen.files.wordpress.com/2023/02/13-060-a.png)

![13-060-b.png](https://behainguyen.files.wordpress.com/2023/02/13-060-b.png)

Let's sync <code>js1.js</code>, edit it locally, and check it in properly. Command to sync:

```
D:\learn-git>git pull
```

Make some changes to <code>js1.js</code> locally, then check it in. Note the two messages <em>“Test Webhook.”</em> and <em>“Check in from local machine via command.”</em>:

```
D:\learn-git>git add js1.js
D:\learn-git>git commit -m "Test Webhook." -m "Check in from local machine via command."
D:\learn-git>git push -u origin main
```

Our server gets the expected notification, and <code>ngrok</code> records four (4) POST requests to the <code>/git-webhook</code> endpoint:

![14-060.png](https://behainguyen.files.wordpress.com/2023/02/14-060.png)

![15-060.png](https://behainguyen.files.wordpress.com/2023/02/15-060.png)

The <code>Recent Deliveries</code> tab (discussed before) should now have four (4) entries.

Through the screen captures presented throughout this post, it should be apparent that we can change the properties of an existing Webhook, including the <code>Payload URL</code>. Because our so-called server is so simple, it will happily work with other Webhook events besides <code>push</code>. I have tested with <code>Send me everything.</code> and raised issues in the <code>learn-git</code> repo; the server logs notifications just as it does for <code>push</code>. This little server is good for examining the structure of the payloads we get for different Webhook events. The GitHub documentation should have this info, but for me personally, visualising the data makes reading those documents easier.

This concludes this post. I hope you find it helpful and useful. Thank you for reading, and stay safe as always.
Author: behainguyen
# Liveblocks - Collab Instantly!

> What is Liveblocks? I have recently started playing around with some collaboration related...

- id: 1,373,018
- published: 2023-02-21T17:49:22
- url: https://dev.to/mantiq/liveblocks-collab-instantly-29m6
- tags: liveblocks, typescript, react, webdev
## What is Liveblocks? I have recently started playing around with some collaboration related technologies, most notably [yjs](https://docs.yjs.dev/yjs-in-the-wild) & [liveblocks](https://liveblocks.io) In this article I will be focusing on Liveblocks as it was quite the fun implementing features in applications while using it. The team behind it summarizes it quite well in the following quote: > *Collaborative experiences in days, not months* --- ## Concepts ![Room](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2ykiu04hykfzgdt3h8sk.png) ### Room A room is the space people can join to collaborate together. ![Presence](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1opb2023j8e7lwig8e2d.png) ### Presence Presence represents people’s movements and actions inside the room. People in the room are able to see what others are doing in real-time. ![Storage](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8ewndpp4r3ivta58szo6.png) ### Storage Storage represents the items people can interact with in the room. In the physical world, storage could be represented as documents and notes on a whiteboard. ![Storage Persistance](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ctvzms3eelatru24t91y.png) ### Storage Persistance Storage data automatically persists when people leave the room. The data can also be cleared and stored on your database using the API endpoints. --- ## Easy collabing You probably ask yourself, how easy? Well it is as easy as: ```ts const others = useOthers(); const updateMyPresence = useUpdateMyPresence(); ``` With these little things you are able to boost your App experience from plain old monotone to a lively multiplayer app! 
What I also love is that they have a couple of packages out; one that I like to use is the `@liveblocks/zustand` package, with which you can have a nice lively shareable state throughout the app. It is as simple as: ```ts import create from "zustand"; import { createClient } from "@liveblocks/client"; import { liveblocks } from "@liveblocks/zustand"; import type { WithLiveblocks } from "@liveblocks/zustand"; type Cursor = { x: number; y: number }; type State = { cursor: Cursor; setCursor: (cursor: Cursor) => void; }; const client = createClient({ publicApiKey: "PUBLIC_KEY", }); const useStore = create<WithLiveblocks<State>>()( liveblocks( (set) => ({ cursor: { x: 0, y: 0 }, setCursor: (cursor) => set({ cursor }), }), { client, presenceMapping: { cursor: true, }, } ) ); export default useStore; ``` And with such simple code you would be able to have cursors flying around, see updates of additional state in real time, and even get a 60fps experience if you set the throttle as low as 16! --- ## Conclusion I would recommend it to anyone who wants to implement collaborative features, as it makes the whole process easier: you save months of work and get live cursors and live-updated data on your screen in days, without banging your head against the wall! I also love that it is focused on TypeScript, which improves the whole developer experience. All in all I would recommend it, and I am already looking for ideas on what to build next. With Liveblocks, implementing things like a spreadsheet or a Mural-style whiteboard becomes 10000 times easier, so what are you waiting for? Check out their [website](https://liveblocks.io)!
mantiq
1,372,091
Effortlessly Elevate Your React Code with ESLint and Prettier: A Step-by-Step Guide for 2023
Hey Dev community! If you're a #React developer, you know the pain of dealing with messy and...
0
2023-02-20T01:45:54
https://dev.to/aaron_janes/effortlessly-elevate-your-react-code-with-eslint-and-prettier-a-step-by-step-guide-for-2023-179d
webdev, javascript, react, tutorial
Hey Dev community! If you're a #React developer, you know the pain of dealing with messy and inconsistent code. That's where my latest blog post comes in - I have put together a step-by-step guide on how to use #ESLint and #Prettier to effortlessly elevate your code in 2023. Check it out and let me know what you think! #webdev #codingtips #StackFails Link: https://bit.ly/3SdQ85l
aaron_janes
1,372,133
What is Monkey Patching?
All about monkey patching with examples.
0
2023-02-20T03:08:00
https://dev.to/himankbhalla/what-is-monkey-patching-4pf
monkeypatching, programmingbasics, python, ruby
--- title: What is Monkey Patching? published: true description: All about monkey patching with examples. tags: monkeypatching, programmingbasics, python, ruby cover_image: https://miro.medium.com/v2/resize:fit:1400/0*i7I-kxMt-huH82fu # Use a ratio of 100:42 for best results. # published_at: 2023-02-20 03:08 +0000 --- Monkey patching is a programming technique used in software development that allows developers to change or extend the behaviour of existing code without directly modifying the source code. This can be particularly useful when the code is closed-source or the developer doesn’t have access to the original code. In this post, we’ll explore the concept of monkey patching and provide several examples to illustrate how it works. ## What is Monkey Patching? Monkey patching is a technique that allows developers to change the behaviour of an existing piece of code at runtime. This is done by modifying or adding attributes or methods to an object or class, which can then be called as if they were part of the original code. The term “monkey patching” comes from the idea of a monkey modifying or “patching” a codebase, much like a monkey might tinker with a tool or piece of machinery. Monkey patching is often used to fix bugs or add new features to third-party libraries, frameworks, or applications. In some cases, it can be used to add functionality to the language itself. However, monkey patching can be a double-edged sword. While it can be a powerful tool in the hands of experienced developers, it can also lead to unexpected behaviour and hard-to-find bugs. Examples of Monkey Patching: Here are a few examples that illustrate how monkey patching can be used in practice: **Modifying an Existing Function** Let’s say we’re working with a third-party library that contains a function that doesn’t quite do what we need it to do. Instead of modifying the library’s source code, you can use monkey patching to modify the function’s behaviour at runtime. 
Here’s an example in Python: ``` import third_party_library def new_function(arg1, arg2): # new implementation return result third_party_library.existing_function = new_function ``` In this example, we’re importing a third-party library and then defining a new function that we want to use instead of the existing function provided by the library. We then use monkey patching to replace the existing function with our new function. **Adding a New Method to a Class** Another common use case for monkey patching is adding a new method to an existing class. Here’s an example in JavaScript: ``` class OriginalClass { originalMethod() { console.log('Original Method'); } } OriginalClass.prototype.newMethod = function() { console.log('New Method'); }; let obj = new OriginalClass(); obj.newMethod(); ``` In this example, we’re creating a new method called “newMethod” and adding it to the prototype of the “OriginalClass” class. We can then create an instance of the “OriginalClass” and call the new method as if it were part of the original class. **Fixing a Bug in a Third-Party Library** Monkey patching can also be used to fix bugs in third-party libraries that you can’t modify directly. Here’s an example in Ruby: ``` require 'third_party_library' # Define a new method to fix the bug class ThirdPartyLibrary::OriginalClass def fixed_method # new implementation return result end end ``` In this example, we’re requiring a third-party library and then defining a new method called “fixed_method” that fixes a bug in the original library. We’re using the “::” operator to access the original class in the library and then adding our new method. ## Conclusion Monkey patching is a powerful technique that can be used to modify or extend the behaviour of existing code at runtime. While it can be a useful tool for fixing bugs or adding new features to third-party libraries, it can also introduce unexpected behaviour and hard-to-find bugs. 
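The snippets above are fragments (`result`, for instance, is never defined); a complete, runnable Python illustration of patching — and un-patching — a method looks like this (the class and method names are invented for the example):

```python
class Greeter:
    def greet(self):
        return "Hello"

def excited_greet(self):
    # Replacement implementation, applied at runtime.
    return "Hello!!!"

original = Greeter.greet       # keep a reference so the patch can be undone
Greeter.greet = excited_greet  # monkey patch: all instances now use the new method

g = Greeter()
patched = g.greet()            # "Hello!!!"

Greeter.greet = original       # restore the original behaviour
restored = g.greet()           # "Hello"
```

Keeping a reference to the original and restoring it afterwards is a good habit — it limits the patch's blast radius, which is exactly the risk the article warns about.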
Originally posted on [medium](https://medium.com/@himankbh/monkey-patching-63defd49ba7a).
himankbhalla
1,372,213
Exploring the Best Mobile App Development Frameworks of 2023
The world has gone mobile and mobile apps are now a necessity for businesses to stay relevant and...
0
2023-02-20T05:04:16
https://dev.to/janefraserof/exploring-the-best-mobile-app-development-frameworks-of-2023-3kg1
programming, javascript, android
The world has gone mobile and mobile apps are now a necessity for businesses to stay relevant and competitive. Mobile app development frameworks make it easier and faster for developers to create high-quality apps as [numerous app ideas](https://appticz.com/mobile-app-ideas) emerge daily. In this blog, we will explore the top mobile app development frameworks to choose in 2023. ## Mobile App Statistics According to Statista, the number of smartphone users worldwide is expected to reach 7.33 billion by 2023. This means that the demand for mobile apps will continue to increase, making it more important for businesses to develop their own apps. In fact, mobile apps are projected to generate $935 billion in revenue by 2023. Here is the number of mobile app downloads worldwide from 2016 to 2022: **_Number of mobile app downloads worldwide from 2016 to 2022_** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wxme7n97rl24r1syt2dt.png) ## Top Mobile App Development Frameworks in 2023 ### React Native React Native is a popular mobile app development framework that allows developers to build high-quality apps for both iOS and Android platforms. It is based on the React JavaScript library and enables developers to create complex UI designs with ease. React Native is an open-source framework and has a large community of developers, which means that there are plenty of resources and support available. It also allows for hot-reloading, which enables developers to see their changes in real-time. ### Flutter Flutter is another popular mobile app development framework that is known for its fast development and high-quality results. It is developed by Google and uses the Dart programming language. Flutter is known for its hot-reloading feature, which allows developers to see the changes they make to the code in real-time. It also has a large library of widgets, which makes it easy to create complex UI designs. 
### Ionic Ionic is a popular mobile app development framework that uses HTML, CSS, and JavaScript to create cross-platform apps. It is based on Angular and allows developers to create high-quality apps with ease. Ionic also has a large library of plugins and extensions that make it easy to integrate with other technologies. ### Xamarin Xamarin is a mobile app development framework that uses C# and .NET to create high-quality apps for both iOS and Android platforms. It is known for its ease of use and allows developers to share code across multiple platforms. Xamarin also has a large community of developers, which means that there are plenty of resources and support available. ### NativeScript NativeScript is a mobile app development framework that allows developers to build native apps for iOS and Android platforms using JavaScript or TypeScript. It is based on Angular and enables developers to create high-performance apps with a native look and feel. NativeScript also has a large library of plugins and extensions that make it easy to integrate with other technologies. ### PhoneGap PhoneGap is a mobile app development framework that uses HTML, CSS, and JavaScript to create cross-platform apps for iOS, Android, and other platforms. It is known for its ease of use and allows developers to create apps quickly without requiring a lot of code. PhoneGap also has a large community of developers, which means that there are plenty of resources and support available. ### Framework7 Framework7 is a mobile app development framework that uses HTML, CSS, and JavaScript to create high-quality apps for iOS and Android platforms. It is known for its speed and flexibility, and allows developers to create complex UI designs with ease. Framework7 also has a large library of plugins and extensions that make it easy to integrate with other technologies. 
### Sencha Touch Sencha Touch is a mobile app development framework that uses HTML, CSS, and JavaScript to create cross-platform apps for iOS, Android, and other platforms. It is known for its ease of use and allows developers to create apps quickly without requiring a lot of code. Sencha Touch also has a large library of plugins and extensions that make it easy to integrate with other technologies. ### Corona SDK Corona SDK is a mobile app development framework that uses Lua programming language to create high-quality apps for iOS, Android, and other platforms. It is known for its ease of use and allows developers to create apps quickly without requiring a lot of code. Corona SDK also has a large library of plugins and extensions that make it easy to integrate with other technologies. ### Onsen UI Onsen UI is a mobile app development framework that uses HTML, CSS, and JavaScript to create high-quality apps for iOS and Android platforms. It is known for its speed and flexibility, and allows developers to create complex UI designs with ease. Onsen UI also has a large library of customizable components and templates that make it easy to create beautiful and responsive apps. ### Conclusion Choosing the right mobile app development framework is essential for businesses to create high-quality apps that meet their specific needs. React Native, Flutter, Ionic, and Xamarin are all great options to consider in 2023. They each have their own unique features and benefits, and it is important to evaluate each framework based on your specific requirements.   The frameworks listed above are just a few of the many options available, and each has its own strengths and weaknesses. It is important to carefully evaluate each framework by consulting with a **[best mobile app development company](https://appticz.com)** to choose the one that best meets your needs. 
With the increasing demand for mobile apps, it is important for businesses to invest in the right technology to stay competitive in the market.
janefraserof
1,372,492
Create a rainbow-coloured list with :nth-of-type()
This is my favourite thing to do with the :nth-of-type() selector: First, you need some colour...
0
2023-04-09T10:59:59
https://rachsmith.com/create-a-rainbow-coloured-list-with-css/
css, frontend
--- title: Create a rainbow-coloured list with :nth-of-type() published: true date: 2023-02-20 20:03:00 UTC tags: css, frontend canonical_url: https://rachsmith.com/create-a-rainbow-coloured-list-with-css/ --- This is my favourite thing to do with the `:nth-of-type()` selector: {% codepen https://codepen.io/rachsmith/pen/yLxYwxx %} First, you need some colour values. I like to store them in [CSS custom properties](https://developer.mozilla.org/en-US/docs/Web/CSS/Using_CSS_custom_properties) so it's easy to order them the way you like later. ``` :root { --red: #f21332; --orange: #f27225; --pink: #e20b88; --yellow: #f2ad24; --green: #00b249; --blue: #1844b5; --purple: #a033b3; } ``` Then you want to add an `:nth-of-type` declaration for each colour. The [functional notation](https://developer.mozilla.org/en-US/docs/Web/CSS/:nth-child#functional_notation) for each `:nth-of-type` is in the format of `An+B`, where `A` is the total number of colours you have, and `B` is the position of each colour. So as I have 7 colours, my functions look like `7n+1`, `7n+2`, `7n+3` and so on... ``` li:nth-of-type(7n + 1) { color: var(--red); } li:nth-of-type(7n + 2) { color: var(--orange); } li:nth-of-type(7n + 3) { color: var(--pink); } li:nth-of-type(7n + 4) { color: var(--yellow); } li:nth-of-type(7n + 5) { color: var(--green); } li:nth-of-type(7n + 6) { color: var(--blue); } li:nth-of-type(7n + 7) { color: var(--purple); } ``` You can never have too many rainbows. {% codepen https://codepen.io/rachsmith/pen/yLxYwxx %}
rachsmith
1,372,554
Sharp & Glowing dark card | Chrome only
Based on this tweet: https://twitter.com/aleksliving/status/1620874863690014721?s=20 Please note...
0
2023-02-20T11:45:28
https://dev.to/lukyvj/sharp-glowing-dark-card-chrome-only-5g90
codepen, css, houdini
<p>Based on this tweet: <a href="https://twitter.com/aleksliving/status/1620874863690014721?s=20" target="_blank">https://twitter.com/aleksliving/status/1620874863690014721?s=20</a></p> <p>Please note that this pen uses CSS @property, which is not yet available everywhere; I’ll make a cross-browser version. </p> <p>Best viewed in Chrome </p> {% codepen https://codepen.io/LukyVj/pen/YzOXepM %}
lukyvj
1,372,565
Today I completed 2041 Solutions on Leetcode🥳🥳🥳
A post by Miss Pooja Anilkumar Patel
0
2023-02-20T12:06:34
https://dev.to/chiki1601/today-i-completed-2041-solutions-on-leetcode-2ge4
chiki1601, challenge, programming, beginners
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ogdjl9womzi401xyqe2f.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/de74l53etmaaj7rtnovm.png)
chiki1601
1,372,612
How do you gracefully shut down Pods in Kubernetes?
When you type kubectl delete pod, the pod is deleted, and the endpoint controller removes its IP...
0
2023-02-20T13:16:07
https://dev.to/danielepolencic/how-do-you-gracefully-shut-down-pods-in-kubernetes-4gl0
kubernetes, devops
When you type `kubectl delete pod`, the pod is deleted, and the endpoint controller removes its IP address and port (endpoint) from the Services and etcd. You can observe this with `kubectl describe service`. ![Listing endpoints with kubectl describe](https://res.cloudinary.com/learnk8s/image/upload/v1676872994/threads/graceful-shutdown-2_l2gafi.png) But that's not enough! **Several components sync a local list of endpoints:** - kube-proxy keeps a local list of endpoints to write iptables rules. - CoreDNS uses the endpoint to reconfigure the DNS entries. And the same is true for the Ingress controller, Istio, etc. ![Endpoints propagation in Kubernetes](https://res.cloudinary.com/learnk8s/image/upload/v1676872994/threads/graceful-shutdown-3_oxakmq.png) All those components will (eventually) remove the previous endpoint so that no traffic can ever reach it again. At the same time, the kubelet is also notified of the change and deletes the pod. _What happens when the kubelet deletes the pod before the rest of the components?_ ![Endpoints are not propagated and removed at the same time](https://res.cloudinary.com/learnk8s/image/upload/v1676872994/threads/graceful-shutdown-4_m1pjvc.png) **Unfortunately, you will experience downtime** because components such as kube-proxy, CoreDNS, the ingress controller, etc., still use that IP address to route traffic. _So what can you do?_ Wait! 
![The kubelet will immediately delete the pod, even if the endpoint is not propagated](https://res.cloudinary.com/learnk8s/image/upload/v1676872994/threads/graceful-shutdown-5_dkdoaw.png) **If you wait long enough before deleting the pod, the in-flight traffic can still resolve, and the new traffic can be assigned to other pods.** _How are you supposed to wait?_ ![The kubelet should wait for the endpoints to propagate before deleting the pod](https://res.cloudinary.com/learnk8s/image/upload/v1676872995/threads/graceful-shutdown-6_oezur9.png) When the kubelet deletes a pod, it goes through the following steps: - Triggers the `preStop` hook (if any). - Sends the SIGTERM. - Sends the SIGKILL signal (after 30 seconds). ![The kubelet deleting the pod goes through 3 steps: preStop hook, SIGTERM and SIGKILL](https://res.cloudinary.com/learnk8s/image/upload/v1676872995/threads/graceful-shutdown-7_ijesks.png) **You can use the `preStop` hook to insert an artificial delay.** ![You can use a preStop hook to delay deleting a pod](https://res.cloudinary.com/learnk8s/image/upload/v1676872995/threads/graceful-shutdown-8_mocv2u.png) **You can listen to the SIGTERM signal in your app and wait.** Also, you can gracefully stop the process and exit when you are done waiting. Kubernetes gives you 30s to do so (configurable). ![You can catch the SIGTERM signal in your app and wait](https://res.cloudinary.com/learnk8s/image/upload/v1676872995/threads/graceful-shutdown-9_ddmpkm.png) _Should you wait 10 seconds, 20 or 30s?_ There's no single answer. While propagating endpoints could only take a few seconds, Kubernetes doesn't guarantee any timing nor that all of the components will complete it at the same time. 
![Endpoint propagation timeline in Kubernetes](https://res.cloudinary.com/learnk8s/image/upload/v1676872995/threads/graceful-shutdown-10_n6swjw.png) If you want to explore more, here are a few links: - https://learnk8s.io/graceful-shutdown - https://freecontent.manning.com/handling-client-requests-properly-with-kubernetes/ - https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods - https://medium.com/tailwinds-navigator/kubernetes-tip-how-to-gracefully-handle-pod-deletion-b28d23644ccc - https://medium.com/flant-com/kubernetes-graceful-shutdown-nginx-php-fpm-d5ab266963c2 - https://www.openshift.com/blog/kubernetes-pods-life And finally, if you've enjoyed this thread, you might also like: - The Kubernetes workshops that we run at Learnk8s https://learnk8s.io/training - This collection of past threads https://twitter.com/danielepolencic/status/1298543151901155330 - The Kubernetes newsletter I publish every week, "Learn Kubernetes weekly" https://learnk8s.io/learn-kubernetes-weekly
danielepolencic
1,372,796
Python For Loops, Range, Enumerate Tutorial
Python for loop is used to iterate over a sequence of elements such as a list, tuple, or string. The...
0
2023-02-20T15:46:24
https://dev.to/max24816/python-for-loops-range-enumerate-tutorial-20a2
Python for loop is used to iterate over a sequence of elements such as a list, tuple, or string. The `range` function can be used to create a sequence of numbers, and `enumerate` can be used to iterate over a sequence while also keeping track of the index. ## [**Python for Loop**](https://www.programdoc.com/python/for-loop) The syntax of the `for` loop in Python is as follows: ```py for variable in sequence: # code to be executed for each element in sequence ``` Here, `variable` is a new variable that takes on the value of each element in the `sequence`. The code inside the loop is executed once for each element in the `sequence`. ### Example Let's say we have a list of names and we want to print each name: ```py names = ['One', 'Two', 'Three', 'Four'] for name in names: print(name) ``` Output: ```py One Two Three Four ``` ## The `range` Function The `range` function in Python is used to create a sequence of numbers. It can take up to three arguments: `start`, `stop`, and `step`. The `start` argument specifies the starting value of the sequence (default is 0), the `stop` argument specifies the ending value (not inclusive), and the `step` argument specifies the step size (default is 1). ### Example Let's say we want to print the numbers from 0 to 9: ```py for i in range(10): print(i) ``` Output: ```py 0 1 2 3 4 5 6 7 8 9 ``` ## The `enumerate` Function The `enumerate` function in Python is used to iterate over a sequence while also keeping track of the index. It returns a tuple containing the index and the element at that index. 
### Example Let's say we have a list of names and we want to print each name along with its index: ```py names = ['One', 'Two', 'Three', 'Four'] for i, name in enumerate(names): print(i, name) ``` Output: ```py 0 One 1 Two 2 Three 3 Four ``` ## Explore Other Related Articles [Python if-else Statements Tutorial](https://dev.to/max24816/python-if-else-statements-tutorial-3p25) [Python set tutorial](https://dev.to/max24816/python-set-tutorial-hcp) [Python tuple tutorial](https://dev.to/max24816/python-tuple-tutorial-3m62) [Python Lists tutorial](https://dev.to/max24816/python-lists-tutorial-2g43) [Python dictionaries comprehension tutorial](https://dev.to/max24816/python-dictionaries-comprehension-tutorial-3bda)
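As an addendum to the examples above, two small variations are worth knowing: `enumerate` takes an optional `start` argument, and `range` can combine all three of its arguments:

```python
names = ['One', 'Two', 'Three', 'Four']

# enumerate with start=1 numbers the items from 1 instead of 0.
numbered = list(enumerate(names, start=1))
# [(1, 'One'), (2, 'Two'), (3, 'Three'), (4, 'Four')]

# range(start, stop, step): even numbers below 10.
evens = list(range(0, 10, 2))
# [0, 2, 4, 6, 8]
```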
max24816
1,372,904
The flutter life
The back end I was using with firebase broke. Obsolescence is the only sure thing from fast paced...
0
2023-02-20T16:58:45
https://dev.to/greatmartyscott/the-flutter-life-577j
The back end I was using with firebase broke. **Obsolescence** is the only sure thing from fast paced large companies. I spent a few hours plucking around the problem but the function library doesn’t allow the only solution I knew. Perhaps I’ll give up and ask for advice from a prof. But they are just gonna stare at the same documentation I am. And I feel I’m capable of solving it myself (_at least it’ll be more rewarding_). I figured I should start a dev journal to track my issues and development, having instilled every other process I’m comfortable with to push my advancement I thought I should attempt this as well. I’ve kept journals and logs before in my life, but I commonly fall off due to my weariness of the consistent dedication required for the task. I should probably get over that. All I know is flutter is cool, non-current tutorials are _evil_. And documentation is commonly more abstract than how I want to apply it.
greatmartyscott
1,372,937
rsync: a top-notch tool for file synchronization
I used to perform data backups among Linux machines by using rsync. Recently, I've been spending more...
0
2023-02-20T17:53:59
https://dev.to/trantuantubk/rsync-an-excellent-file-synchronization-tool-ekf
rsync, mobaxterm, windows, crossplatform
I used to perform data backups among Linux machines by using **rsync**. Recently, I've been spending more time working on a Windows 11 laptop as my main host, and I've discovered that rsync is also included in MobaXterm. I've just performed some familiar synchronization tasks between local folders on my Windows host and between this Windows host and a remote Linux machine, and I'm happy to see that it works correctly. I still need to perform other synchronizations to make sure it works thoroughly in my new settings, but in any case, it's a great tool that makes me feel as if I were working in a Linux system. **References** https://www.digitalocean.com/community/tutorials/how-to-use-rsync-to-sync-local-and-remote-directories
trantuantubk
1,373,086
Create Stunning User Interfaces with These Top 30 React UI libraries
Stunning User Interfaces with These Top 30 React UI libraries 📌 Full Article Link : Link ...
0
2023-02-20T19:49:09
https://dev.to/ziontutorial/create-stunning-user-interfaces-with-these-top-30-react-ui-libraries-26eb
webdev, javascript, react, beginners
Create Stunning User Interfaces with These Top 30 React UI Libraries **📌 Full Article Link** : [Link](https://ziontutorial.com/) # Top 30 React UI Libraries for Building Beautiful Interfaces Discover the top 30 React UI libraries for enhancing the user interface of your web applications. From navigation menus to modals, calendars, and image croppers, these libraries offer a wide range of tools to help you create powerful, responsive, and beautiful UIs. Material-UI: MUI provides a simple, customizable, and accessible library of React components. Follow your own design system, or start with Material Design. https://mui.com/ React Bootstrap: React Bootstrap is a popular UI library that provides a set of reusable components for building responsive and mobile-first web applications. Link: https://react-bootstrap.github.io/ Ant Design: Ant Design is a comprehensive UI library that provides a wide range of components for building high-quality web applications. Link: https://ant.design/ Styled Components: Styled Components is a popular library for styling React components with CSS. It allows you to write CSS directly in your JavaScript code. Link: https://styled-components.com/ React Select: React Select is a flexible and easy-to-use library for building select inputs in React. It provides a range of features such as search, async loading, and multi-select. Link: https://react-select.com/home React Toastify: React Toastify is a simple and customizable toast library for React that provides a range of notification styles such as success, warning, and error. Link: https://github.com/fkhadra/react-toastify React Virtualized: React Virtualized is a library for efficiently rendering large lists and tables in React. It provides a set of components such as List, Table, and Grid that can handle thousands of rows with ease. 
Link: https://bvaughn.github.io/react-virtualized/#/components/List React DnD: React DnD is a drag and drop library for React that allows you to build complex drag and drop interfaces with ease. Link: https://react-dnd.github.io/react-dnd/about
ziontutorial
1,373,116
A simple mortgage calculator using python
Hi Guys! I want to show my simple python mortgage calculator. It calculates the monthly payment and...
0
2023-02-20T20:33:29
https://dev.to/norbikx1/a-simple-mortgage-calculator-using-python-271j
beginners, programming, python
Hi Guys! I want to show my simple Python mortgage calculator. It calculates the monthly payment and the total amount to pay back. It works only with a fixed interest rate. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/irfmvrmr22jslt3kmckq.png) This is very simple code. However, it does the calculations well, with correct results. Feel free to make it better or adapt it for your projects. This is the link for the project's GitHub: https://github.com/Norbikx1/MortgageCalculator In conclusion, I made this program for a Codecademy requirement. Maybe someone will find it useful, which is why I decided to publish it.
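The code itself only appears above as a screenshot, so for reference, the standard fixed-rate amortization formula such a calculator relies on can be sketched as follows (the function names are mine, not necessarily those in the linked repository):

```python
def monthly_payment(principal, annual_rate_pct, years):
    """Fixed-rate mortgage payment: M = P*r*(1+r)^n / ((1+r)^n - 1)."""
    r = annual_rate_pct / 100 / 12      # monthly interest rate
    n = years * 12                      # number of monthly payments
    if r == 0:
        return principal / n            # zero-interest edge case
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

def total_repaid(principal, annual_rate_pct, years):
    """Total amount paid back over the life of the loan."""
    return monthly_payment(principal, annual_rate_pct, years) * years * 12
```

For example, a $100,000 loan at 6% over 30 years works out to roughly $599.55 per month.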
norbikx1
1,373,127
How to watch out for employer scams in remote work
Working remotely has become an increasingly common option for professionals all over the...
0
2023-02-20T21:01:38
https://dev.to/mvfernando/como-ter-cuidado-com-as-burlas-de-empregadores-em-trabalhos-remoto-30od
trabalhoremoto, devalert, mvfernando, evitarburlas
Working remotely has become an increasingly common option for professionals around the world. However, this way of working can bring some risks, especially when it comes to dealing with employers who are not trustworthy. Unfortunately, there are many cases of employer scams where the employer does not pay for the work done and simply disappears. I went through this experience myself and studied reports of it, and I had the chance to speak with some of the victims (other professionals and freelancers) who went through the same thing. In this article, I cover how to identify this kind of employer/scammer and what to do to avoid these scams. **How can employers scam remote professionals?** There are several ways dishonest employers can scam remote workers. Some examples include: * Hiring for a job and then disappearing without paying; * Offering jobs with no upfront payment and then disappearing without paying; * Paying partially for the work and then disappearing without paying the difference; * Threatening or pressuring the professional into accepting unfair conditions. These are just a few of the ways employers can scam remote professionals. It is important to remember that, although most employers are honest and trustworthy, there are always a few who try to take advantage of professionals who are working remotely. **How to identify these scams?** There are several warning signs that can help professionals identify dishonest employers before accepting a job. Here are some examples: * Lack of information: if the employer does not provide detailed information about the job, such as deadlines, requirements, or payment, this can be a warning sign. * Poor communication: if the employer does not answer questions or takes too long to reply, this may indicate that they are not trustworthy. 
* Lack of history: if the employer has no work history or no reviews from other professionals, this can be a warning sign. * Lack of references: if the employer does not provide references or refuses to provide them when asked, this can be a warning sign. **What to do if you are the victim of an employer scam?** If you find yourself in a situation where an employer does not pay for your work, there are many things you can do, but I recommend these options: * Contact the employer: try to reach out to the employer to resolve the issue. Sometimes there may be a misunderstanding or a problem that can be solved through communication. * Try to resolve it through a mediation platform: many remote-work platforms have mediation services that can help settle disputes between workers and employers. * Post about it on the [Reclame aqui](https://www.reclameaqui.com.br/) portal. * Seek legal assistance: if all other options fail, you may need to seek legal assistance to recover payment for your work. In some cases it works. In short, working remotely offers many advantages, such as the flexibility and convenience of working from anywhere. However, it can also bring some risks, especially when it comes to dealing with dishonest employers. To avoid becoming the victim of a scam, it is important to research the employer thoroughly before accepting a job, look for warning signs while negotiating the job, and take steps to protect yourself in case of a scam. If you have been the victim of a scam, do not hesitate to seek help and guidance. Although it is an unpleasant situation, there are steps you can take to recover the payment you are owed. Remember, it is better to be cautious and take the necessary precautions than to become the victim of an employer scam. 
I hope this article has been helpful and informative. If you have any questions or comments, feel free to get in touch. 🚀 **THANK YOU FOR READING** 🙌🏼 Need a top-rated web developer to put an end to your development problems? Contact me: [Here!](https://t.me/elio_fernandes) Want to connect? Reach me on [LinkedIn](https://www.linkedin.com/in/mvfernando/) See other links: [Click here!](https://linktr.ee/elio.fernandes28)
mvfernando
1,373,460
IPv4 Address 2023 Infographics
Introduction An Internet Protocol (IP) address is a unique identifier of a computer on...
0
2023-02-21T04:21:26
https://dev.to/ip2location/ipv4-address-2023-infographics-6jh
ipv4, programming, datascience
## Introduction An Internet Protocol (IP) address is a unique identifier of a computer on the internet or even a local network. With an IP address, computers on a network can communicate with each other and exchange information. IP addresses are allocated by the Internet Assigned Numbers Authority (IANA) and its regional registries to various organizations worldwide. IPv4 is the main addressing standard used on the internet today. To observe and study IPv4 address usage, data on IPv4 address allocations in 2022 was collected and a report titled ‘Internet IP Address 2023 Report‘ was generated. Based on the report, an IPv4 address infographic was created and is displayed below to give a better understanding of IPv4 address allocations in 2022. The infographic illustrates the total IP addresses allocated by continent, IP address ownership by country, IP changes over past years, and so on. ## IPv4 Address Infographics Based on the infographic shown below, 3.38 billion public IP addresses were allocated in the year 2022. Of these, 37% were allocated to the United States. It is therefore not surprising that almost half of all IP addresses are owned by North America, largely due to the United States’ allocation. The next biggest is Asia, which accounts for 26.7%. In 2022, 32% of countries dropped in their rankings. The top 5 countries with the most IP addresses allocated are the United States, China, Japan, Germany and the United Kingdom. For usage types, the ISP usage type accounts for 55% of allocated IP addresses, the highest among all usage types. DCH, COM, MOB and EDU round out the top 5 usage types by IP address ownership in 2022. Furthermore, Comcast Cable Communications, with 70 million IP addresses, is the largest in the ISP group. 
For the Data Center group, Amazon Technologies, with 46 million IP addresses, is the largest. Over the past 6 years, IP changes across the top 5 usage types have fluctuated to varying degrees. A notable swing is observed for the MOB usage type: it dropped sharply in 2019-2020, recovered in 2020-2021, and then dropped again in 2021-2022. The DCH usage type declined significantly after 2018-2019 and then increased in 2021-2022. ## Conclusion Overall, the largest share of IP addresses allocated in 2022 belongs to the United States, which is located in North America, with the ISP usage type leading overall. ![IPv4 Address 2023 Infographics](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8wjww1pizzu25eo4rtqa.png) ![IPv4 Address 2023 Infographics](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3d32335spw7z5m9ybhvi.png) ![IPv4 Address 2023 Infographics](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/99wnng412oa9foy6sfk9.png) ![IPv4 Address 2023 Infographics](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kzqutagdodvpak0381nc.png) Read the full report in detail at [https://www.ip2location.com/reports/internet-ip-address-2023-report](https://www.ip2location.com/reports/internet-ip-address-2023-report#devto)
ip2location
1,381,790
Steps To Master AWS
Steps To Master AWS Mastering AWS requires dedication, practice, and ongoing learning....
0
2023-02-27T21:39:00
https://dev.to/pranjal563/steps-to-master-aws-1fke
## Steps To Master AWS Mastering AWS requires dedication, practice, and ongoing learning. _Here are some steps you can follow to master AWS_: **1. Learn the Basics**: Start by gaining a solid understanding of the core AWS services such as EC2, S3, and VPC. Review AWS documentation and take online tutorials to learn the basics. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q62b8lpix7q9elbxijys.gif) **2. Get Certified**: AWS certifications demonstrate your knowledge and skills in specific areas of AWS. Consider pursuing multiple certifications such as AWS Certified Solutions Architect, Developer, and DevOps Engineer. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xnt367oy8pyiwkxrsiwq.png) **3. Specialize in a Particular Area**: AWS offers many services and technologies, and it can be challenging to learn everything. Consider specializing in a specific area, such as database management, serverless computing, or machine learning. **4. Learn from Experience**: Practice is crucial in mastering AWS. Create your projects and use AWS services to implement them. Participate in AWS hackathons, build solutions for real-world problems, and test and experiment with different services. **5. Participate in the AWS Community**: Connect with other AWS professionals and learn from their experiences. Attend AWS user group meetings, participate in online forums, and network with other professionals. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s1z12a8vk9621wvqebjt.png) **6. Stay Updated on New AWS Services**: AWS continuously releases new services and features. Stay updated by reading AWS blogs and attending AWS events such as AWS re:Invent and re:Skill. **7. Use AWS Best Practices**: Follow AWS best practices such as using automation, securing resources, and implementing monitoring and logging. **8. 
Attend Formal Training**: Consider attending formal training courses to gain in-depth knowledge and hands-on experience with AWS services. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xiewx5wfc9try8rcqcpa.png) **9. Experiment with Third-Party Tools**: There are many third-party tools and services that integrate with AWS. Experiment with these tools to enhance your AWS skills. ## Conclusion _Overall_, mastering AWS requires a combination of formal training, practical experience, ongoing learning, and participation in the AWS community. By following these steps, you can become a skilled AWS professional and build a successful career in the cloud computing industry.
pranjal563
1,383,016
How We Can Help You Improve Your Existing Website
As the digital world becomes increasingly competitive, it's essential to have a website that not only...
0
2023-02-28T20:35:03
https://dev.to/vantero/how-we-can-help-you-improve-your-existing-website-1gm5
As the digital world becomes increasingly competitive, it's essential to have a website that not only looks great but also functions smoothly and provides a great user experience. However, designing and maintaining a website can be a daunting task, especially for business owners who may not have the necessary technical expertise. At Vantero Development, we understand the importance of having an excellent website for your business. That's why we offer a range of services aimed at helping businesses improve their existing websites. Here are some ways we can help you: 1. Website Audit One of the first steps we take when working with clients is to conduct a thorough website audit. This process involves analyzing your website's design, functionality, and user experience to identify areas that need improvement. We'll also assess your website's speed, security, and mobile responsiveness, which are all critical factors in ranking high on search engines. 2. User Experience (UX) Design If your website isn't providing a great user experience, visitors are more likely to leave and never return. Our UX designers can help you identify and solve any issues with your website's layout, navigation, and overall design. By making your website more intuitive and user-friendly, we can increase engagement, reduce bounce rates, and ultimately drive more conversions. 3. Content Optimization Content is king, and having high-quality, relevant content on your website is essential for both users and search engines. We can help you optimize your website's content to improve its relevance, readability, and SEO. This includes conducting keyword research, optimizing headlines and meta descriptions, and ensuring that your content is engaging and easy to read. 4. Website Performance Optimization Slow-loading websites are a big turn-off for visitors, and they can also negatively impact your search engine rankings. 
We can help you optimize your website's performance by reducing page load times, optimizing images, and compressing files. We can also ensure that your website is optimized for different devices and browsers, providing a seamless experience for all users. 5. Security and Maintenance Keeping your website secure and up-to-date is critical in today's digital landscape. We offer regular maintenance and security updates to ensure that your website is protected against malware, hackers, and other online threats. We can also provide regular backups, ensuring that your website's data is safe and recoverable in case of a disaster. In conclusion, having a great website is essential for any business that wants to succeed online. At Vantero Development, we offer a range of services aimed at helping businesses improve their existing websites. Whether you need a website audit, UX design, content optimization, performance optimization, or security and maintenance, we're here to help. Contact us today to learn more about how we can help you take your website to the next level.
vantero
1,384,051
Discover @ Linux FINOS
Proud that Discover is Joining the Fintech Open-Source Foundation (FINOS) https://bit.ly/3m9EIUh !...
0
2023-03-01T17:02:32
https://dev.to/angel_diaz_rodriguez/discover-linux-finos-3k6e
Proud that Discover is Joining the Fintech Open-Source Foundation (FINOS) https://bit.ly/3m9EIUh ! Explore Discover Technology https://technology.discover.com
angel_diaz_rodriguez
1,384,622
Tagged templates in JavaScript
JavaScript tagged templates are a powerful feature that allows developers to create custom template...
0
2023-03-02T03:11:13
https://dev.to/mrh0200/tagged-templates-in-javascript-n44
javascript, webdev, beginners, programming
JavaScript tagged templates are a powerful feature that allows developers to create custom template literals that can be used to manipulate and transform data in a more flexible and expressive way. ### Template literals Template literals were introduced in ECMAScript 6 as a new way to create strings in JavaScript. They use backticks (`` ` ``) instead of quotes (' or ") and allow for easy interpolation of variables using `${}`. For example: ```javascript const name = "John"; console.log(`Hello ${name}!!`) //Hello John!! ``` ### Tagged templates A tagged template is a template literal preceded by a function (the "tag") that processes its parts. For example: ```javascript const person = "Mike"; const age = 28; function myTag(strings, ...values) { const str0 = strings[0]; // "That " const str1 = strings[1]; // " is a " const str2 = strings[2]; // "." const userName = values[0] // Mike const age = values[1] // 28 const ageStr = age > 99 ? "centenarian" : "youngster"; // We can even return a string built using a template literal return `${str0}${userName}${str1}${ageStr}${str2}`; } const output = myTag`That ${person} is a ${age}.`; console.log(output); // That Mike is a youngster. ``` ### How do tagged templates work? In the above example, `myTag` is called with an array of the literal's static strings as its first argument and the interpolated values as the subsequent arguments ```javascript myTag`That ${person} is a ${age}.` Parsed into strings = ['That ',' is a ','.'] values = ['Mike',28] ``` The `strings` array contains the static parts of the message and `values` contains the dynamic parts of the text; the `myTag` function is used to manipulate the text. 
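Because the tag receives the static strings and the interpolated values separately, it can treat them differently. As a small illustrative sketch (the `safeHtml` tag name is hypothetical, not a built-in), the tag below passes the static parts through untouched while HTML-escaping every interpolated value:

```javascript
// Hypothetical escaping tag: static parts are trusted, interpolated values are escaped.
function safeHtml(strings, ...values) {
  const escape = (v) =>
    String(v).replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
  // Interleave the static strings with the escaped values.
  return strings.reduce(
    (acc, str, i) => acc + (i > 0 ? escape(values[i - 1]) : "") + str,
    ""
  );
}

const userInput = "<script>alert(1)</script>";
console.log(safeHtml`Comment: ${userInput}`);
// Comment: &lt;script&gt;alert(1)&lt;/script&gt;
```

This same pattern underlies real libraries such as styled-components, whose tag turns template literals into CSS.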
### Use cases for tagged templates 1) Localize strings ```javascript function localize(strings, ...values) { // Use an i18n library to translate the static parts of the string const translatedStrings = strings.map(s => i18n.translate(s, 'es')); // Combine the translated strings and dynamic values to create the localized string let result = translatedStrings[0]; for (let i = 0; i < values.length; i++) { result += values[i] + translatedStrings[i + 1]; } return result; } const name = "John"; const message = localize`Hello, ${name}! How are you?`; console.log(message); // "Hola John Cómo estás" ``` In this example, the localize function uses an i18n library to translate the static parts of the string and then combines the translated strings and dynamic values to create the localized string. 2) Styled templates: ```javascript function format(strings, ...values) { let result = strings[0]; for (let i = 0; i < values.length; i++) { result += `<b>${values[i]}</b>${strings[i + 1]}`; } return result; } const name = "Mike", age = 35; console.log(format`My name is ${name}. I am ${age} years old`); // My name is <b>Mike</b>. I am <b>35</b> years old ``` ### Conclusion Tagged templates are a powerful feature of JavaScript that can be used for a variety of purposes. They allow developers to create custom template literals that can be used to manipulate and transform data in a flexible and expressive way. --- If you have read until the end, leave a like and share your thoughts in the comment section <center>Happy coding</center> ---
mrh0200
1,384,729
How to automatically close your issues once you merge a PR
It's really annoying to go through all your issues, and close each one once a feature request or bug...
13,860
2023-03-03T06:27:19
https://dev.to/github/how-to-automatically-close-your-issues-once-you-merge-a-pr-1li4
github, opensource, management
It's really annoying to go through all your issues, and close each one once a feature request or bug report has been completed. The pull request gets merged and then you gotta go find the issue that it corresponds to... way too much time. Now GitHub has given you a way to automatically close your issues once a pull request has been merged. Let me show you how. ## Step 1. Create a pull request Like you usually do, open your pull request as normal. Go to your repository, make some changes and open a pull request, but don't click "Create pull request" yet: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0fyrf59rc8fvnc10prky.JPG) ## Step 2. Tag the relevant issue Before you click "Create pull request", we need to link the relevant issue to the pull request so that the correct issue is automatically closed. 1. Write part of your commit message as usual: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jzft7kaqyw3y8jp4m5xr.JPG) 2. Add a "linking" keyword to your commit message: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7u70187mjuw2qoa95jjf.jpg) You can see above here, I've used "closes" (underlined). There are several keywords you can use including: - close - closes - closed - fix - fixes - fixed - resolve - resolves - resolved 3. Tag the relevant issue by typing `#` on your keyboard and selecting or typing in the number of the issue you want to link to: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i97jk3mf6upf5nj8fqy8.jpg) ## Step 3. Merge and close your pull request Now that you have all the necessary messages and tags, you can go ahead and merge the pull request as usual. 1. Click "Merge pull request": ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iefzecvd8f4kh81yto3b.JPG) 2. 
You should see the message that your pull request has been successfully merged: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3sydftc1y518wyof5pit.JPG) You can delete your branch now if you like. ## Step 4. Issue is automatically closed When you navigate back to the relevant issue, you'll see that it is now automatically closed: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t403og7z3m5zwr6gbak5.JPG) You'll also see the pull request is mentioned in the issue history, and there's a message at the top which reads "Fixed by #96". This means the issue has been resolved by pull request number 96. ## Automatically closing issues I hope this short tutorial gives you a little more control and automation over your repositories on GitHub. If you'd like to see this as a video, check out our YouTube Short: {% youtube Ywd_ZMOMAas %} There are also other ways you can automatically close issues. Read up on them all in the [GitHub Docs](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue).
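For reference, a pull request description that uses one of these keywords might read like this (the issue number is just an example):

```
Fix crash when parsing empty input.

Closes #96
```

Once this pull request is merged into the default branch, the linked issue is closed automatically.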
mishmanners
1,384,739
Progressive Web App Development using React.js: A comprehensive guide
React.js is a prominent solution for creating sophisticated UI interactions that connect with the...
0
2023-03-02T05:31:10
https://www.ifourtechnolab.com/blog/progressive-web-app-development-using-react-js-a-comprehensive-guide
webdev, beginners, react, reactnative
React.js is a prominent solution for creating sophisticated UI interactions that connect with the server in real-time using JavaScript-driven websites. It is capable of competing with top-tier UI frameworks and greatly simplifies the development process. In this article, we will learn how to create a progressive web application (PWA) using React.js. But, before we get started, we'll learn about progressive web apps and why they're so important. A PWA is a dynamic web application that can work independently and offers several benefits: solid performance, the flexibility to be used with or without an internet connection, a platform-specific feel, and installability. ## What is the importance of a progressive web application (PWA)? A progressive web app is an enhanced type of web app that has some of the same capabilities as a native or platform-specific app. For example, progressive web apps can be installed directly on a user’s home screen and can run in a standalone window. These apps run fast and reliably under poor network conditions and can even function offline. Think of how your typical mobile user moves through changing environments while using your app. For example, they might start out in a building that has a reliable high-speed network. But, when they walk out to the street, they may lose Wi-Fi and fall back to cellular connectivity. They might catch a strong 4G or even 5G signal, or, they may hit a low-service dip that only has 3G. How can they stay on your app in all network conditions, even when they have no connectivity at all? A progressive web app lets you reach your users wherever they are and serve them fast and reliable user experiences in any network environment. ### Want to hire an esteemed [Mobile App development company](https://www.ifourtechnolab.com/mobile-application)? Your search ends here. ## How to build a Progressive Web App (PWA) with React.js? It's necessary to set up your project before you can start coding. 
Let's begin by making sure you can use React (if you're already comfortable with React code, you can definitely skip this piece!). When using web frameworks like Angular, React, or Vue for development, you must use Node.js, especially if you intend to employ libraries and packages to speed up the process. The Node Package Manager, also known as "npm," is a widely used tool for using such packages and libraries. You can start building your React application using webpack and install and remove packages using this program, among many other things. For your requirements, you can use npm to build a React application using a PWA template, allowing you to start coding right away. Whenever you begin developing a React project, you can use Facebook's templates by using the npm command "create-react-app." Let's develop the basic PWA application by executing the following command: ``` npx create-react-app my-first-pwa-app --template cra-template-pwa ``` ### Read More: [Top 10 Mobile App Development tools to use in 2023](https://www.ifourtechnolab.com/blog/top-10-mobile-app-development-tools-to-use-in-2023) The following is a breakdown of the above command: - **npx:** Each npm command must begin with npm (or, more specifically, the node package manager you have installed; 'npx' is used here and has shipped with npm since version 5.2.0). This facilitates the use of npm packages and manages numerous functionalities. - **create-react-app:** This command launches the well-known Create React App tool, which assists you in creating the basic React project. - **Project Title:** This is simply the application's placeholder title. You can name the app whatever you want. Here, the standard "my-first-pwa-app" name is utilized. - **template:** This is an argument. By adding an argument to a command, you are essentially turning on an option. Here, you can choose a particular template for your starter React application. 
- **cra-template-pwa:** cra-template-pwa is the PWA template for your React application. After executing this command, your PWA React application will be scaffolded, and your command-line interface should show a steady stream of progress messages. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kvxhoyn6yuwl1v1b44sd.png) The folder structure of your program up to this point is shown here. When it comes to PWAs, there are a few files you should be aware of: **service-worker.js:** A service worker is a special type of web worker that intercepts network requests from a web app and controls how these requests are handled. In particular, the service worker can manage the caching of resources and retrieval of these resources from the cache, thereby supporting the offline-first mode that’s crucial for PWAs. ### Planning to [hire dedicated ReactJS developers](https://www.ifourtechnolab.com/hire-dedicated-react-developers)? Service workers provide essential support for running a progressive web application. A service worker takes the form of a JavaScript file that runs in a separate, non-blocking thread from the main browser thread. **manifest.json:** In essence, this is a configuration file that lists several customizable properties for progressive web apps. It determines the icons, names, and colors used when the application is presented. ``` { "short_name": "React PWA", "name": "A React Todo PWA", "icons": [ { "src": "favicon.ico", "sizes": "64x64 32x32 24x24 16x16", "type": "image/x-icon" }, { "src": "logo192.png", "type": "image/png", "sizes": "192x192" }, { "src": "logo512.png", "type": "image/png", "sizes": "512x512" } ], "start_url": ".", "display": "standalone", "theme_color": "#F4BD42", "background_color": "#2B2929" } ``` The attributes in the manifest work as follows: - The attributes "short_name" and "name" are used within the users’ home screens and icon banners respectively. 
- The "icons" attribute is an array containing the set of icons used on home or splash screens. - The "start_url" is the page displayed on startup. In this case, the home page. - The "display" property controls the browser view. When set to standalone, the app hides the address bar and runs in a new window like a native app. - The "theme_color" property is the color of the toolbar in the app. - The "background_color" property is the color of the splash screen. We link the manifest file in index.html via a `<link rel="manifest" href="%PUBLIC_URL%/manifest.json" />` tag in the document head (Create React App's template includes this by default). ## Conclusion PWAs deliver native-like experiences and increased engagement through features such as home screen additions, push notifications, and many others that do not require installation. When built using React.js, these apps have the potential to provide a great user experience with rich interaction. React.js is a prominent platform that competes with top-tier UI frameworks and considerably simplifies web app development. This article has walked you through the process of building a progressive web application using React.js. We also learned about the relevance of PWAs for business.
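One step worth making explicit alongside the walkthrough above: a service worker only takes effect once it is registered from your page's JavaScript. The helper below is a hypothetical, framework-agnostic sketch (the function name and the `/service-worker.js` path are illustrative, not part of the CRA template); it safely does nothing outside the browser or in browsers without service worker support.

```javascript
// Hypothetical helper: register a service worker when the API is available.
// Outside the browser (or where unsupported) it resolves to null instead of throwing.
function registerServiceWorker(swUrl) {
  const nav = typeof navigator !== "undefined" ? navigator : undefined;
  if (!nav || !("serviceWorker" in nav)) {
    return Promise.resolve(null); // no-op where service workers are unsupported
  }
  return nav.serviceWorker.register(swUrl);
}

// In the browser, call it once the page has loaded:
// window.addEventListener("load", () => registerServiceWorker("/service-worker.js"));
```

The CRA PWA template wraps equivalent logic in its generated `serviceWorkerRegistration` module, which you opt into from `index.js`.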
ifourtechnolab
1,384,823
How To Train Your Employees in Cybersecurity Awareness
Cybersecurity threats have become increasingly sophisticated and complex, making protecting an...
0
2023-03-02T07:11:06
https://dev.to/navcharans/how-to-train-your-employees-in-cybersecurity-awareness-2gcl
cybersecurity
Cybersecurity threats have become increasingly sophisticated and complex, making protecting an organization's critical assets and data more challenging. As a result, educating employees on the importance of cybersecurity and how to recognize and respond to potential threats is essential. Cybersecurity awareness training can reduce the risk of a security breach, minimize the impact of a security incident, and safeguard sensitive information. This blog post aims to provide a comprehensive guide on training employees in cybersecurity awareness. It will cover the necessary steps to assess your organization's [cybersecurity risks](https://www.acecloudhosting.com/mitigate-cybersecurity-risks-webinar/), design a cybersecurity awareness training program, determine the essential topics to cover, engage employees, and evaluate the effectiveness of the training. ## Assess Your Organization's Cybersecurity Risks Assessing an organization's cybersecurity risks is essential to identify potential vulnerabilities and threats that could compromise the organization's critical assets and data. Here are some reasons why assessing cybersecurity risks is necessary: Identify potential weaknesses: Assessing cybersecurity risks helps identify weaknesses in an organization's systems, processes, and infrastructure. This allows for proactive measures to be taken to address vulnerabilities before cybercriminals exploit them. Prioritize security resources: Prioritizing cybersecurity risks helps to prioritize security resources and investments. It enables organizations to focus on areas most vulnerable to cyber threats, allowing them to allocate resources more effectively. Compliance: Compliance requirements such as HIPAA, PCI-DSS, and GDPR mandate that organizations assess their cybersecurity risks regularly. Failure to comply with these regulations can result in hefty fines and legal liabilities. 
Reputation: A cybersecurity breach can damage an organization's reputation and lead to a loss of customer trust. Assessing cybersecurity risks helps to minimize the likelihood of a data breach, helping to preserve the organization's reputation. Business continuity: Cybersecurity breaches can disrupt an organization's operations, leading to financial losses and reputational damage. Assessing cybersecurity risks helps to identify potential threats and vulnerabilities that could affect business continuity and enables organizations to implement measures to prevent disruptions. Assessing cybersecurity risks is critical to ensuring that an organization's critical assets and data are protected from cyber threats. It helps to prioritize security investments, comply with regulations, maintain the organization's reputation, and ensure business continuity. Let's look at all the steps you must take for a thorough assessment. ## Conducting a cybersecurity risk assessment Before designing a cybersecurity awareness training program, it's crucial to conduct a comprehensive cybersecurity risk assessment to identify the potential risks and vulnerabilities that could impact your organization. A risk assessment should involve identifying and assessing the critical assets, data, and systems that require protection. ## Identifying cybersecurity threats Once you've assessed your organization's critical assets, you need to identify the potential cybersecurity threats that could exploit vulnerabilities in your system. Cybersecurity threats can range from malware, phishing attacks, social engineering, ransomware, and insider threats. ## Identifying critical assets and data Identifying critical assets and data is crucial in determining the level of protection required for each resource. Critical assets may include sensitive information such as customer data, financial data, and intellectual property. 
## Creating a risk mitigation plan After identifying the [cybersecurity threats and critical assets](https://www.acecloudhosting.com/managed-network-security/), you need to develop a risk mitigation plan that outlines the measures you'll take to reduce the risks. This plan should cover the policies and procedures that employees must follow to protect critical assets and data. ## Creating a Cybersecurity Awareness Training Program The awareness training program is your employees' guide through this journey. Here's how you make one: ## Designing a cybersecurity awareness training program The cybersecurity awareness training program should be designed to meet the specific needs of your organization. It should be tailored to the cybersecurity risks identified during the risk assessment process. ## Choosing the right training methods There are various methods of delivering cybersecurity awareness training, including classroom training, online training, and simulations. Choosing the right training methods will depend on the size of your organization, the available resources, and the preferred learning style of your employees. ## Determining the frequency of training The frequency of cybersecurity awareness training should be based on the risk level and the changing cybersecurity landscape. Annual or bi-annual training is typically recommended, but additional training may be required when there is a significant change in the cybersecurity risk environment. ## Evaluating the effectiveness of the training It is essential to evaluate the effectiveness of the cybersecurity awareness training program to determine whether it has achieved the desired results. Evaluation can be conducted through surveys, quizzes, or simulations that test employees' knowledge and ability to respond to potential threats. 
## Essential Topics to Cover in Cybersecurity Awareness Training ### Password security Employees should be educated on the importance of password security and the best practices for creating and managing passwords. This includes using strong and unique passwords, not sharing passwords, and changing passwords regularly. ### Phishing and social engineering Phishing attacks and social engineering are common cybersecurity threats that involve tricking employees into divulging sensitive information. Employees should be trained to recognize these attacks and respond appropriately. ### Malware and ransomware Malware and ransomware are types of software that can infect systems and cause damage or encrypt critical data. Employees should be educated on the signs of malware and ransomware and how to prevent their spread. ### Physical security Physical security is also essential in protecting an organization's critical assets and data. Employees should be trained on the importance of physical security measures such as securing laptops, smartphones, and other devices, preventing unauthorized access to buildings and facilities, and proper disposal of sensitive information. ### Reporting incidents and suspicious activity Employees should be educated on the importance of reporting incidents and suspicious activity to the IT department or cybersecurity team promptly. This includes reporting lost or stolen devices, phishing attempts, malware infections, or any other suspicious activity. ### Employee Engagement and Communication How can you keep your employees involved and invested in this program? Here are a few suggestions: ### Importance of employee engagement Employee engagement is critical in promoting cybersecurity awareness and creating a culture of security. Engaged employees are more likely to participate in cybersecurity awareness training, adopt best practices, and report security incidents. 
### Communicating the importance of cybersecurity awareness It's essential to communicate the importance of cybersecurity awareness to employees and how it relates to the protection of the organization's critical assets and data. Communication can be done through email campaigns, posters, newsletters, and regular updates on cybersecurity risks. ### Encouraging employee participation Encouraging employee participation in cybersecurity awareness training can be achieved through various methods such as gamification, incentives, and friendly competitions. This helps to make the training more engaging and enjoyable, increasing the likelihood of participation. ### Rewarding good behavior Rewarding employees who exhibit good cybersecurity practices can help to reinforce positive behavior and encourage others to adopt best practices. Rewards can include recognition, bonuses, or promotions. ## Conclusion Cybersecurity awareness training is essential in protecting an organization's critical assets and data from cybersecurity threats. It helps to educate employees on how to recognize and respond to potential threats and promotes a culture of security within the organization. Creating an effective cybersecurity awareness training program requires a comprehensive risk assessment, tailored training content, and a culture of engagement and communication. Regular evaluation and feedback can help to ensure that the training program is effective in reducing cybersecurity risks. As the cybersecurity landscape evolves, new threats and risks will emerge, and it's essential to stay up-to-date with the latest trends and technologies. Future considerations may include incorporating machine learning and artificial intelligence into cybersecurity awareness training, increasing the use of simulations, and creating more personalized training content based on employees' individual needs and learning styles.
navcharans
1,384,917
Using Grafana to visualize CI Workflow Stats
Grafana is an excellent tool for creating dashboards to gather data from various sources and serve...
0
2023-03-02T08:43:53
https://www.runforesight.com/blog/using-grafana-to-visualize-ci-workflow-stats
githubactions, ci, devops, github
Grafana is an excellent tool for creating dashboards that gather data from various sources and serve them in a single place. According to [Newstack,](https://thenewstack.io/will-grafana-become-easier-to-use-in-2022/) Grafana is the second most popular observability tool, counting 800,000 active installs and over 10 million users. These numbers also include some of our customers, and we have collected a lot of feedback about bringing Foresight's data & charts to Grafana dashboards.

Using a dashboard like Grafana gives teams a complete view of what the team is doing, improving visibility and streamlining analysis for faster problem-solving. When technical issues arise, having all the data in a single place can help you solve them more quickly. Instead of spending time gathering data from multiple sources, you can access everything you need in one location. This can reduce downtime and minimize the impact of technical issues on productivity.

Makers define Grafana as "_the open source analytics & monitoring solution for every database_", and we wanted to make Foresight one of those sources, making it easier for our users to observe their CI workflow health & stats.

## Introducing Foresight Plugin for Grafana!

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v7561hz706w1vpoeqp7u.png)

Foresight is a powerful tool for monitoring and managing your continuous integration (CI) and testing workflows within GitHub Actions. With Foresight, you can easily track key metrics related to your workflows, such as build times, failure rates, etc.

Now, with the Foresight plugin for Grafana, you can bring those insights directly into your Grafana dashboards. This plugin enables you to visualize Foresight's workflow stats and workflow failure heatmap features directly within Grafana, making it easier than ever to monitor the performance of your workflows and take action when issues arise.

Whether you're a developer, an operations team member, or a project manager, the Foresight plugin for Grafana can help you stay on top of your workflows and improve your team's productivity.

![](https://miro.medium.com/v2/resize:fit:1400/0*8tBf29QFICY_ZXDM.gif)

This panel plugin comes with two tabs.

[Stats of the workflows:](https://docs.runforesight.com/features/workflow-dashboard)

- Total execution count
- Fail & Success Rate
- Fail count
- Average Duration
- P95

[Workflow failure heatmap:](https://docs.runforesight.com/features/highlights#status)

- Number of errors per day & workflow

Many more features are coming, and your feedback is highly appreciated.

In conclusion, Grafana is a powerful data visualization tool that can help you make sense of your technical data and increase productivity. By bringing all your technical data to a central location and visualizing it in meaningful ways, you can gain insights into your operations, identify areas for improvement, and make data-driven decisions that lead to greater efficiency. With the Foresight plugin for Grafana, you can take your data analysis to the next level by visualizing key CI and testing metrics directly within your Grafana dashboards.

To learn more about Grafana and the [Foresight plugin](https://docs.runforesight.com/integrations/grafana), check out the documentation today and explore the power of data-driven decision-making.

![](https://miro.medium.com/max/1400/0*4CLDKOCWoKroxS_3.png)

[https://app.runforesight.com/signup](https://app.runforesight.com/signup)

_Originally published at_ [_https://www.runforesight.com_](https://www.runforesight.com/blog/using-grafana-to-visualize-ci-workflow-stats)_._
boroskoyo
1,385,017
Server-render your SPA in CI at deploy time 📸
If you deploy your SPA using GitHub Actions you can add this new action to your workflow to have it...
0
2023-03-02T10:47:36
https://dev.to/bryce/server-render-your-spa-in-ci-at-deploy-time-2798
javascript, webdev, github, githubactions
If you deploy your SPA using [GitHub Actions](https://github.com/features/actions) you can add this new [action](https://github.com/marketplace/actions/server-side-render-ssr-with-react-snap) to your workflow to have it build server-rendered HTML!

Server-side rendering (SSR) is [great for SEO and performance](https://medium.com/walmartglobaltech/the-benefits-of-server-side-rendering-over-client-side-rendering-5d07ff2cefe8). I use it for projects that have an expensive initial render or have links that I want to be discoverable. [react-snap](https://github.com/stereobooster/react-snap) is a tool to help with SSR; a while ago I wrote about it:

{% embed https://dev.to/bryce/perform-a-react-disappearing-act-with-react-snap-1eo3 %}

I've been using it as a `postbuild` script but it recently broke in CI. The fix for it became rather complex, so rather than include it in each project that I use it for, I decided to bundle everything into a standalone action. This also significantly reduced the number of per-project dependencies, as it prevents installing big ones like `puppeteer`.

Though `react` is in the name, this will work for any framework that supports hydration. In [Svelte](https://svelte.dev/docs) for example, this just means switching the `hydrate` flag:

```js
import App from './App.svelte';

const app = new App({
  target: document.querySelector('#server-rendered-html'),
  hydrate: true
});
```

Once your app is hydratable, replacing your `build` step with this [action](https://github.com/marketplace/actions/server-side-render-ssr-with-react-snap) will run `npm build` followed by `react-snap`:

```yml
jobs:
  prerender:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout 🛎️
        uses: actions/checkout@v3
      ...
      - name: Server-side render
        uses: brycedorn/react-snap-action@v1.0.2
```

You can then deploy this to GitHub Pages or wherever. Give it a try and let me know if it helps simplify your workflow!

{% github brycedorn/react-snap-action no-readme %}
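For comparison, the `postbuild` setup that the action replaces looked roughly like this. This is only a sketch: the `build` command shown is illustrative and will differ per project, and the react-snap version is an example.

```json
{
  "scripts": {
    "build": "react-scripts build",
    "postbuild": "react-snap"
  },
  "devDependencies": {
    "react-snap": "^1.23.0"
  }
}
```

Because npm runs `post<name>` scripts automatically after `<name>`, `npm run build` here triggers react-snap with no extra wiring, which is exactly the per-project dependency the standalone action removes.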
bryce
1,385,117
Create and Validate a Sign-Up Form in .NET MAUI
A sign-up form allows users to create an account in an application by providing details such as name,...
0
2023-03-02T15:31:59
https://www.syncfusion.com/blogs/post/create-validate-sign-up-form-in-dotnet-maui.aspx
maui, forms, mobile, data
---
title: Create and Validate a Sign-Up Form in .NET MAUI
published: true
date: 2023-03-02 11:00:00 UTC
tags: maui, forms, mobile, data
canonical_url: https://www.syncfusion.com/blogs/post/create-validate-sign-up-form-in-dotnet-maui.aspx
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/53odc1ne0xqikfizvsia.png
---

A sign-up form allows users to create an account in an application by providing details such as name, email address, and password. Usually, such a form is used for registration or membership subscription.

The Syncfusion [.NET MAUI DataForm](https://www.syncfusion.com/maui-controls/maui-dataform "Syncfusion .NET MAUI DataForm") allows developers to create data entry forms. This control also supports [validating](https://help.syncfusion.com/maui/dataform/validation "Data Validation in .NET MAUI DataForm") user input. By validating the input, users will be prompted to provide only correct values in the form, maintaining the integrity and consistency of the data stored in the database to prevent errors, inconsistencies, and security threats.

In this article, we'll see how to create a sign-up form and validate the data entered using the Syncfusion .NET MAUI DataForm control.

**Note:** Refer to the [.NET MAUI DataForm](https://help.syncfusion.com/maui/dataform/overview "Getting Started with .NET MAUI DataForm") documentation before getting started.

## Creating a sign-up form using the .NET MAUI DataForm control

First, create a sign-up form using the .NET MAUI DataForm control.

### Initialize the .NET MAUI DataForm control

Follow these steps to initialize the .NET MAUI DataForm control:

1. [Create a new .NET MAUI application](https://learn.microsoft.com/en-us/dotnet/maui/get-started/first-app?view=net-maui-7.0&tabs=vswin&pivots=devices-android "Build your first app on .NET MAUI") in [Visual Studio](https://visualstudio.microsoft.com/vs/ "Visual Studio").
2. Syncfusion .NET MAUI components are available in the [NuGet Gallery](https://www.nuget.org/ "NuGet Gallery"). To add the [DataForm](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.DataForm.html "Syncfusion.Maui.DataForm Namespace") to your project, open the NuGet package manager in Visual Studio, search for [Syncfusion.Maui.DataForm](https://www.nuget.org/packages/Syncfusion.Maui.DataForm "Syncfusion.Maui.DataForm NuGet package"), and then install it.
3. Import the control's namespace **Syncfusion.Maui.DataForm** in the XAML or C# code.
4. Initialize the SfDataForm control in the XAML page.

```xml
<ContentPage>
    …
    xmlns:dataForm="clr-namespace:Syncfusion.Maui.DataForm;assembly=Syncfusion.Maui.DataForm"
    …
    <dataForm:SfDataForm/>
</ContentPage>
```

5. The NuGet package [Syncfusion.Maui.Core](https://www.nuget.org/packages/Syncfusion.Maui.Core "Syncfusion.Maui.Core NuGet Package") is a dependent package for all Syncfusion .NET MAUI controls. In the **MauiProgram.cs** file, register the handler for the Syncfusion core assembly.

```csharp
builder.ConfigureSyncfusionCore();
```

### Create the data form model

Let's create the data form model for the sign-up form. This consists of fields to store specific information such as names, addresses, phone numbers, and more. You can also add [attributes](https://help.syncfusion.com/maui/dataform/data-annotations "Data annotations in .NET MAUI DataForm") to the data model class properties for efficient data handling.

Refer to the following code example.

```csharp
public class SignUpFormModel
{
    [Display(Prompt = "Enter your first name", Name = "First name")]
    public string FirstName { get; set; }

    [Display(Prompt = "Enter your last name", Name = "Last name")]
    public string LastName { get; set; }

    [Display(Prompt = "Enter your email", Name = "Email")]
    public string Email { get; set; }

    [Display(Prompt = "Enter your mobile number", Name = "Mobile number")]
    public double? MobileNumber { get; set; }

    [Display(Prompt = "Enter your password", Name = "Password")]
    public string Password { get; set; }

    [Display(Prompt = "Confirm password", Name = "Re-enter Password")]
    [DataType(DataType.Password)]
    public string RetypePassword { get; set; }

    [DataType(DataType.MultilineText)]
    [Display(Prompt = "Enter your address", Name = "Address")]
    public string Address { get; set; }

    [Display(Prompt = "Enter your city", Name = "City")]
    public string City { get; set; }

    [Display(Prompt = "Enter your state", Name = "State")]
    public string State { get; set; }

    [Display(Prompt = "Enter your country", Name = "Country")]
    public string Country { get; set; }

    [Display(Prompt = "Enter zip code", Name = "Zip code")]
    public double? ZipCode { get; set; }
}
```

### Create the sign-up form with editors

By default, the data form auto-generates the data editors based on primitive data types such as **string**, **enumeration**, **DateTime**, and **TimeSpan** in the [DataObject](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.DataForm.SfDataForm.html#Syncfusion_Maui_DataForm_SfDataForm_DataObject "DataObject property of .NET MAUI DataForm control") property. The .NET MAUI DataForm supports [built-in editors](https://help.syncfusion.com/maui/dataform/editors "Data Editors in .NET MAUI DataForm") such as text, password, multiline, combo box, autocomplete, date, time, checkbox, switch, and radio group.

Refer to the following code example. In it, we set the data form model (**SignUpFormViewModel**) to the **DataObject** property to create the data editors for the sign-up form.

**XAML**

```xml
<Grid.BindingContext>
    <local:SignUpFormViewModel/>
</Grid.BindingContext>

<dataForm:SfDataForm x:Name="signUpForm"
                     DataObject="{Binding SignUpFormModel}"/>
```

**C#**

```csharp
public class SignUpFormViewModel
{
    /// <summary>
    /// Initializes a new instance of the <see cref="SignUpFormViewModel" /> class.
    /// </summary>
    public SignUpFormViewModel()
    {
        this.SignUpFormModel = new SignUpFormModel();
    }

    /// <summary>
    /// Gets or sets the sign-up model.
    /// </summary>
    public SignUpFormModel SignUpFormModel { get; set; }
}
```

<figure>
  <img src="https://www.syncfusion.com/blogs/wp-content/uploads/2023/03/Sign-Up-Form-Created-using-.NET-MAUI-DataForm-Control.webp" alt="Sign-Up Form Created Using .NET MAUI DataForm Control" style="width:100%">
  <figcaption>Sign-Up Form Created Using .NET MAUI DataForm Control</figcaption>
</figure>

## Validating the data in the sign-up form using the .NET MAUI DataForm control

We have created the sign-up form. Let's proceed with the validation processes.

### Validate the data using the validation attributes

The .NET MAUI DataForm control provides attributes to handle data validation. In our example, we'll add the following validation checks to our sign-up form:

- **Required fields validation**: Ensures that all required fields, such as name, email, and password, are filled out before submitting the form.
- **Email validation**: Checks whether the email data is in the correct format. The email data should include the **@** character (e.g., example@domain.com).
- **Password strength validation**: Ensures that the provided password satisfies certain criteria, such as minimum length and the inclusion of special characters.
- **Confirm password validation**: Ensures that the confirmed password matches the provided password.
- **Phone number validation**: Ensures that the phone number provided is valid and is in the correct format.

Refer to the following code example.
```csharp
[Display(Prompt = "Enter your first name", Name = "First name")]
[Required(ErrorMessage = "Please enter your first name")]
[StringLength(20, ErrorMessage = "First name should not exceed 20 characters")]
public string FirstName { get; set; }

[Display(Prompt = "Enter your last name", Name = "Last name")]
[Required(ErrorMessage = "Please enter your last name")]
[StringLength(20, ErrorMessage = "Last name should not exceed 20 characters")]
public string LastName { get; set; }

[Display(Prompt = "Enter your email", Name = "Email")]
[EmailAddress(ErrorMessage = "Please enter a valid email address")]
public string Email { get; set; }

[Display(Prompt = "Enter your mobile number", Name = "Mobile number")]
[StringLength(10, MinimumLength = 6, ErrorMessage = "Please enter a valid number")]
public double? MobileNumber { get; set; }

[Display(Prompt = "Enter your password", Name = "Password")]
[DataType(DataType.Password)]
[DataFormDisplayOptions(ColumnSpan = 2, ValidMessage = "Password strength is good")]
[Required(ErrorMessage = "Please enter the password")]
[RegularExpression(@"^(?=.*[a-z])(?=.*[A-Z])[a-zA-Z\d]{8,}$", ErrorMessage = "A minimum 8-character password should contain a combination of uppercase and lowercase letters.")]
public string Password { get; set; }

[Display(Prompt = "Confirm password", Name = "Re-enter Password")]
[DataType(DataType.Password)]
[Required(ErrorMessage = "Please enter the password")]
public string RetypePassword { get; set; }

[DataType(DataType.MultilineText)]
[Display(Prompt = "Enter your address", Name = "Address")]
[Required(ErrorMessage = "Please enter your address")]
public string Address { get; set; }

[Display(Prompt = "Enter your city", Name = "City")]
[Required(ErrorMessage = "Please enter your city")]
public string City { get; set; }

[Display(Prompt = "Enter your state", Name = "State")]
[Required(ErrorMessage = "Please enter your state")]
public string State { get; set; }

[Display(Prompt = "Enter your country", Name = "Country")]
public string Country { get; set; }

[Display(Prompt = "Enter zip code", Name = "Zip code")]
[Required(ErrorMessage = "Please enter your zip code")]
public double? ZipCode { get; set; }
```

<figure>
  <img src="https://www.syncfusion.com/blogs/wp-content/uploads/2023/03/Validating-the-Sign-Up-Form-using-Validation-Attributes.webp" alt="Validating the Sign-Up Form Using Validation Attributes" style="width:100%">
  <figcaption>Validating the Sign-Up Form Using Validation Attributes</figcaption>
</figure>

### Show validation success message

If the input values are correct, show the successful [validation message](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.DataForm.DataFormDisplayOptionsAttribute.html#Syncfusion_Maui_DataForm_DataFormDisplayOptionsAttribute_ValidMessage "ValidMessage property of .NET MAUI DataForm"). This will show users that their provided data is in the required format.

Refer to the following code example. Here, we will display the valid message **Password strength is good** at the bottom of the **Password** field upon successful validation.
```csharp
[Display(Prompt = "Enter your password", Name = "Password")]
[DataType(DataType.Password)]
[DataFormDisplayOptions(ColumnSpan = 2, ValidMessage = "Password strength is good")]
[Required(ErrorMessage = "Please enter the password")]
[RegularExpression(@"^(?=.*[a-z])(?=.*[A-Z])[a-zA-Z\d]{8,}$", ErrorMessage = "A minimum 8-character password should contain a combination of uppercase and lowercase letters.")]
public string Password { get; set; }
```

<figure>
  <img src="https://www.syncfusion.com/blogs/wp-content/uploads/2023/03/Displaying-Validation-Success-Message-in-the-Sign-Up-Form.webp" alt="Displaying Validation Success Message in the Sign-Up Form" style="width:100%">
  <figcaption>Displaying Validation Success Message in the Sign-Up Form</figcaption>
</figure>

### Set validate modes in the DataForm

The .NET MAUI DataForm control supports the following [validation modes](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.DataForm.SfDataForm.html#Syncfusion_Maui_DataForm_SfDataForm_ValidationMode "ValidationMode property of .NET MAUI DataForm control") to denote when the value should be validated:

- [**LostFocus**](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.DataForm.DataFormValidationMode.html#Syncfusion_Maui_DataForm_DataFormValidationMode_LostFocus "LostFocus Field of .NET MAUI DataForm"): This is the default validation mode; the input value will be validated when the editor loses focus.
- [**PropertyChanged**](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.DataForm.DataFormValidationMode.html#Syncfusion_Maui_DataForm_DataFormValidationMode_PropertyChanged "PropertyChanged Field of .NET MAUI DataForm"): The input value will be validated immediately when it is changed.
- [**Manual**](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.DataForm.DataFormValidationMode.html#Syncfusion_Maui_DataForm_DataFormValidationMode_Manual "Manual Field of .NET MAUI DataForm"): Use this mode to manually validate the values by calling the [Validate](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.DataForm.SfDataForm.html#Syncfusion_Maui_DataForm_SfDataForm_Validate "Validate() method of .NET MAUI DataForm") method.

Refer to the following code example. Here, we have set the validation mode to **PropertyChanged**.

```xml
<dataForm:SfDataForm x:Name="signUpForm"
                     DataObject="{Binding SignUpFormModel}"
                     ValidationMode="PropertyChanged"
                     CommitMode="PropertyChanged"/>
```

<figure>
  <img src="https://www.syncfusion.com/blogs/wp-content/uploads/2023/03/Validate-the-Data-on-Property-Change-in-the-Sign-Up-Form.webp" alt="Validate the Data on Property Change in the Sign-Up Form" style="width:100%">
  <figcaption>Validate the Data on Property Change in the Sign-Up Form</figcaption>
</figure>

### Validate the data using IDataErrorInfo

We can implement the [IDataErrorInfo](https://learn.microsoft.com/en-us/dotnet/api/system.componentmodel.idataerrorinfo?view=net-7.0 "IDataErrorInfo Interface of .NET MAUI Framework") interface in the data object class to validate the sign-up form.

Refer to the following code example. Here, we implement the **IDataErrorInfo** validation in the **RetypePassword** field.
```csharp
public class SignUpFormModel : IDataErrorInfo
{
    [Display(Prompt = "Enter your password", Name = "Password")]
    [DataType(DataType.Password)]
    [DataFormDisplayOptions(ColumnSpan = 2, ValidMessage = "Password strength is good")]
    [Required(ErrorMessage = "Please enter the password")]
    [RegularExpression(@"^(?=.*[a-z])(?=.*[A-Z])[a-zA-Z\d]{8,}$", ErrorMessage = "A minimum 8-character password should contain a combination of uppercase and lowercase letters.")]
    public string Password { get; set; }

    [Display(Prompt = "Confirm password", Name = "Re-enter Password")]
    [DataType(DataType.Password)]
    [Required(ErrorMessage = "Please enter the password")]
    [DataFormDisplayOptions(ColumnSpan = 2)]
    public string RetypePassword { get; set; }

    [Display(AutoGenerateField = false)]
    public string Error
    {
        get { return string.Empty; }
    }

    [Display(AutoGenerateField = false)]
    public string this[string name]
    {
        get
        {
            string result = string.Empty;
            if (name == nameof(RetypePassword) && this.Password != this.RetypePassword)
            {
                result = string.IsNullOrEmpty(this.RetypePassword) ? string.Empty : "The passwords do not match";
            }

            return result;
        }
    }
}
```

<figure>
  <img src="https://www.syncfusion.com/blogs/wp-content/uploads/2023/03/Validating-the-Sign-Up-Form-Field-using-IDataErrorInfo-Interface.webp" alt="Validating the Sign-Up Form Field Using IDataErrorInfo Interface" style="width:100%">
  <figcaption>Validating the Sign-Up Form Field Using IDataErrorInfo Interface</figcaption>
</figure>

### Validate the data using INotifyDataErrorInfo

You can also validate the data by implementing the [INotifyDataErrorInfo](https://learn.microsoft.com/en-us/dotnet/api/system.componentmodel.inotifydataerrorinfo?view=net-7.0 "INotifyDataErrorInfo Interface of .NET MAUI Framework") interface in the data object class.

Refer to the following code example. Here, we implement **INotifyDataErrorInfo** validation in the **Country** field.
```csharp
public class SignUpFormModel : INotifyDataErrorInfo
{
    [Display(Prompt = "Enter your country", Name = "Country")]
    public string Country { get; set; }

    [Display(AutoGenerateField = false)]
    public bool HasErrors
    {
        get { return false; }
    }

    public event EventHandler<DataErrorsChangedEventArgs> ErrorsChanged;

    [Display(AutoGenerateField = false)]
    public IEnumerable GetErrors(string propertyName)
    {
        var list = new List<string>();
        if (propertyName.Equals("Country") && string.IsNullOrEmpty(this.Country))
            list.Add("Please select your country");

        return list;
    }
}
```

<figure>
  <img src="https://www.syncfusion.com/blogs/wp-content/uploads/2023/03/Validating-the-Sign-Up-Form-using-INotifyDataErrorInfo-Interface.webp" alt="Validating the Sign-Up Form Using INotifyDataErrorInfo Interface" style="width:100%">
  <figcaption>Validating the Sign-Up Form Using INotifyDataErrorInfo Interface</figcaption>
</figure>

### Validate the form before signing up

Finally, we'll validate the complete form when the **Sign-up** button is clicked by using the [Validate](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.DataForm.SfDataForm.html#Syncfusion_Maui_DataForm_SfDataForm_Validate "Validate() method of .NET MAUI DataForm") method.

Refer to the following code example.

```csharp
private async void OnSignUpButtonClicked(object? sender, EventArgs e)
{
    if (this.dataForm != null && App.Current?.MainPage != null)
    {
        if (this.dataForm.Validate())
        {
            await App.Current.MainPage.DisplayAlert("", "Signed up successfully", "OK");
        }
        else
        {
            await App.Current.MainPage.DisplayAlert("", "Please enter the required details", "OK");
        }
    }
}
```

After executing the previous code example, we will get the output shown in the following images.
<figure>
  <img src="https://www.syncfusion.com/blogs/wp-content/uploads/2023/03/Validating-the-Entire-Sign-Up-Form-Before-Submitting-1.webp" alt="Validating the Entire Sign-Up Form Before Submitting" style="width:100%">
  <figcaption>Validating the Entire Sign-Up Form Before Submitting</figcaption>
</figure>

<figure>
  <img src="https://www.syncfusion.com/blogs/wp-content/uploads/2023/03/Sign-Up-Form-Showing-Validation-Messages.webp" alt="Sign-Up Form Showing Validation Messages" style="width:100%">
  <figcaption>Sign-Up Form Showing Validation Messages</figcaption>
</figure>

## GitHub reference

Check out the complete code example to [create and validate a sign-up form using the .NET MAUI DataForm on GitHub](https://github.com/SyncfusionExamples/create-and-validate-the-sign-up-form-using-.NET-MAUI-DataForm/tree/master "Create and validate a sign-up form using .NET MAUI DataForm GitHub Demo").

## Conclusion

Thanks for reading! In this blog, we have learned how to create and validate a sign-up form using the [.NET MAUI DataForm](https://www.syncfusion.com/maui-controls/maui-dataform ".NET MAUI DataForm") control. Try out the steps in this blog and leave your feedback in the comments section below.

For current Syncfusion customers, the newest version of Essential Studio is available from the [license and downloads page](https://www.syncfusion.com/account/downloads "License and Downloads page of Essential Studio"). If you are not a customer, try our 30-day [free trial](https://www.syncfusion.com/downloads "Free Evaluation of the Syncfusion Essential Studio") to check out these new features.

You can also contact us through our [support forums](https://www.syncfusion.com/forums "Syncfusion Support Forum"), [feedback](https://www.syncfusion.com/feedback "Syncfusion Feedback Portal"), or [support portal](https://support.syncfusion.com/ "Syncfusion Support Portal"). We are always happy to assist you!
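As a quick sanity check outside .NET, the password rule enforced by the `RegularExpression` attribute in the examples above can be tried against sample inputs. The pattern is copied verbatim from the article's code; JavaScript is used here purely for illustration.

```javascript
// Pattern from the Password property's RegularExpression attribute:
// at least one lowercase and one uppercase letter, letters/digits only, 8+ characters.
const passwordRule = /^(?=.*[a-z])(?=.*[A-Z])[a-zA-Z\d]{8,}$/;

console.log(passwordRule.test("Passw0rd")); // true  (upper + lower, 8 chars)
console.log(passwordRule.test("password")); // false (no uppercase letter)
console.log(passwordRule.test("Pw1"));      // false (shorter than 8 characters)
```

Note that, matching the attribute's error message, the rule checks only case mix and length; it does not actually require special characters.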
## Related blogs

- [Authenticate the .NET MAUI App with Azure AD](https://www.syncfusion.com/blogs/post/authenticate-the-net-maui-app-with-azure-ad.aspx "Blog: Authenticate the .NET MAUI App with Azure AD")
- [Designing Effective Data Entry Forms in .NET MAUI](https://www.syncfusion.com/blogs/post/designing-effective-data-entry-forms-in-net-maui-a-step-by-step-guide.aspx "Blog: Designing Effective Data Entry Forms in .NET MAUI")
- [Easily Develop a Travel Destination Listing UI in .NET MAUI](https://www.syncfusion.com/blogs/post/travel-destination-listing-ui-in-dotnet-maui.aspx "Blog: Easily Develop a Travel Destination Listing UI in .NET MAUI")
- [OCR in .NET MAUI: Building an Image Processing Application](https://www.syncfusion.com/blogs/post/ocr-in-net-maui-building-an-image-processing-application.aspx "Blog: OCR in .NET MAUI: Building an Image Processing Application")
jollenmoyani
1,385,218
Building Serverless Applications with AWS Lambda
“AWS Lambda in Action: Event-driven serverless applications” by Danilo Poccia is a comprehensive...
0
2023-03-02T14:08:44
https://dev.to/dominguezdaniel/building-serverless-applications-with-aws-lambda-1l0k
aws, books, serverless, lambda
{% youtube gGe022vdK5E %}

“[AWS Lambda in Action: Event-driven serverless applications](https://rebrand.ly/devshelf-004)” by Danilo Poccia is a comprehensive guide for developers looking to understand and implement serverless applications using Amazon Web Services (AWS) Lambda. The book covers the core concepts of serverless architecture and walks readers through the process of building and deploying real-world applications on AWS Lambda.

**What Does This Book Cover?**

The book explains the benefits of serverless computing and why AWS Lambda is a popular choice for implementing event-driven applications. It also provides clear, step-by-step instructions for setting up the necessary components for serverless computing, including AWS S3, API Gateway, and DynamoDB. Throughout the book, readers will learn how to develop, test, and deploy Lambda functions using Node.js, and also how to integrate other AWS services and tools.

> Chapter 1: Running functions in the cloud
> Chapter 2: Your first Lambda function
> Chapter 3: Your function as a web API
> Chapter 4: Managing security
> Chapter 5: Using standalone functions
> Chapter 6: Managing identities
> Chapter 7: Calling functions from a client
> Chapter 8: Designing an authentication service
> Chapter 9: Implementing an authentication service
> Chapter 10: Adding more features to the authentication service
> Chapter 11: Building a media-sharing application
> Chapter 12: Why event-driven?
> Chapter 13: Improving development and testing
> Chapter 14: Automating deployment
> Chapter 15: Automating infrastructure management
> Chapter 16: Calling external services
> Chapter 17: Receiving events from other services

The book is well-organized and easy to follow, making it accessible to both beginner and intermediate-level developers. It provides hands-on examples and practical advice for optimizing serverless applications and avoiding common pitfalls. The author’s writing style is clear and concise, making it easy to understand the concepts being presented.

In conclusion, “[AWS Lambda in Action](https://rebrand.ly/devshelf-004)” is a must-read for developers looking to build and deploy serverless applications on AWS. The book provides a solid foundation of knowledge and practical skills for utilizing AWS Lambda, and is a valuable resource for anyone looking to get started with serverless computing.

📚 [Get the Book in Amazon](https://rebrand.ly/devshelf-004)

---

[![DevShelf](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v04cas2nc9gcvaznw3va.jpg)](https://devshelf.co/)
dominguezdaniel
1,385,221
Validate file with ZOD
Zod is a TypeScript-first schema validation library with static type inference. You can create...
0
2023-03-03T09:33:12
https://dev.to/banerjeeprodipta/validate-file-with-zod-20o
zod, validation, file, schema
Zod is a TypeScript-first schema validation library with static type inference. You can create validation schemas for either field-level validation or form-level validation.

Here's an example of how you can use Zod for schema validation for a file:

```ts
import { z } from "zod";

const MAX_FILE_SIZE = 5000000;

function checkFileType(file: File) {
  if (file?.name) {
    const fileType = file.name.split(".").pop();
    if (fileType === "docx" || fileType === "pdf") return true;
  }
  return false;
}

export const fileSchema = z.object({
  file: z
    .any()
    .refine((file) => file?.length !== 0, "File is required")
    .refine((file) => file.size < MAX_FILE_SIZE, "Max size is 5MB.")
    .refine((file) => checkFileType(file), "Only .pdf, .docx formats are supported."),
});
```
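Independent of Zod's API, the two checks above (extension allow-list and size limit) can be exercised in plain JavaScript. This is just a sketch: `fakeFile` is a stand-in object mimicking the shape of a browser `File`, for illustration only.

```javascript
const MAX_FILE_SIZE = 5000000;

// Same extension check as in the schema above.
function checkFileType(file) {
  if (file?.name) {
    const fileType = file.name.split(".").pop();
    return fileType === "docx" || fileType === "pdf";
  }
  return false;
}

// A stand-in for a browser File object (illustrative only).
const fakeFile = { name: "resume.pdf", size: 1024 };

console.log(checkFileType(fakeFile));               // true
console.log(fakeFile.size < MAX_FILE_SIZE);         // true
console.log(checkFileType({ name: "a.exe", size: 1 })); // false
```

Each `.refine` in the schema wraps one such predicate plus the error message Zod reports when the predicate returns `false`.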
banerjeeprodipta
1,385,253
JavaScript Data Types
Understanding data types is an essential aspect of writing effective JavaScript code. ...
0
2023-03-03T16:33:17
https://www.90-10.dev/js-data-types/
javascript
---
title: JavaScript Data Types
published: true
date: 2023-01-07 08:00:00 UTC
tags: JavaScript
canonical_url: https://www.90-10.dev/js-data-types/
---

Understanding data types is an essential aspect of writing effective JavaScript code.

## Primitive Data Types

There are six primitive data types - they are simple and immutable, i.e. their values cannot be changed once they are created. Primitive data types are the building blocks of JavaScript and are used to represent basic values:

1. **String** - text (a sequence of characters), such as "hello world". String manipulation functions such as `toUpperCase`, `toLowerCase`, `split`, `slice`, and more are available. Strings can be concatenated using the `+` operator.
2. **Number** - a numeric value, such as 42. JavaScript provides a rich set of mathematical operations, such as addition, subtraction, multiplication, division, modulo, and more. Special numeric values such as `Infinity` and `NaN` are supported.
3. **Boolean** - a logical value, either true or false, used to represent the truth value of an expression, often used in conditional statements, loops, and other control structures. JavaScript provides logical operators such as `&&`, `||`, and `!` to manipulate boolean values.
4. **Undefined** - a variable that has been declared but has not been assigned a value. Undefined values can lead to bugs in your code, and it is important to always initialize variables before using them.
5. **Null** - a variable that has been explicitly assigned the value null - represents the absence of any object value, often used to indicate that a variable or object property does not have a value.
6. **Symbol** - (introduced in ECMAScript 6) a unique identifier that can be used as the key of an object property. Symbols are used to create unique identifiers for object properties.

## Complex Data Types

1. **Object** - a collection of properties, where each property consists of a key-value pair.
Objects can be created using the object literal syntax, which looks like this: `{}`. They are used to represent more complex data structures in JavaScript and can contain other objects, functions, and even arrays.
2. **Function** - a block of code that can be called by other parts of the code. Functions can be created using the `function` keyword, and can take parameters and return values. They are used to create reusable blocks of code for custom logic, perform calculations, manipulate data, and more.

## Type Coercion

An important JavaScript feature in the context of data types is type coercion - the language can automatically convert one data type to another. It can lead to unexpected behavior in your code if you are not careful.

For example, the `+` operator can be used to concatenate strings, but it can also be used to add numbers. If you try to add a string and a number, JavaScript will convert the number to a string and concatenate them together. To avoid unexpected coercion, it is important to always use strict equality operators (`===` and `!==`) and to explicitly convert data types when necessary.

## Take away

JavaScript data types are foundational and provide the basis for building more complex data structures, algorithms, and programming patterns. By understanding them intimately, you can write better code and avoid common pitfalls. As a JavaScript developer, you will encounter many situations where you need to manipulate data types - it is important to have a clear understanding of the different data types and their properties.
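As a quick illustration, the coercion rules discussed above can be seen directly in the console (standard JavaScript behaviour):

```javascript
// `+` concatenates as soon as either operand is a string:
console.log(1 + "2"); // "12"

// Most other arithmetic operators coerce operands to numbers:
console.log("5" - 1); // 4

// Loose equality coerces before comparing; strict equality does not:
console.log(0 == ""); // true  ("" coerces to 0)
console.log(0 === ""); // false (different types, no coercion)

// Explicit conversion avoids surprises:
console.log(Number("42") + 1); // 43
```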
90_10_dev
1,385,308
Free resources that helped me master React as a Self Taught Web Developer
React? Why? When it comes to web development and to be precise - JavaScript frameworks /...
0
2023-03-02T15:51:02
https://dev.to/asheeshh/free-resources-that-helped-me-master-react-as-a-self-taught-web-developer-58k2
webdev, react, javascript, beginners
## React? Why?

When it comes to web development - and to be precise, JavaScript frameworks / libraries - React.js is always the one that most people think of first. According to [State of JS 2022](https://2022.stateofjs.com/en-us/libraries/front-end-frameworks/), React has been the most used frontend framework among web developers for the last 6+ years. React continues to be the first choice of beginners, and its popularity lies in the fact that when it comes to frameworks being used in production, React outperforms every other framework by usage - so no matter how imperfect it might be, React is not going anywhere, anytime soon.

![state of js 2022](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4chuxkjuopdoprp5e11h.png)

But no matter how good the tech might be, it’s always difficult as a beginner to get started with learning it and crossing the learning curve. The oversaturation of content around React and its ecosystem contributes to this confusion. There are tons of resources (including free ones) all over the web for learning React, which is a good thing, but quality should always come before quantity.

In this article, I’m writing about the resources that helped me master React and the tools built around it. I’ll only be writing about the resources which I think beginners are most likely to benefit from, and all of the resources are free.

## Video Resources / Courses

1. React in 100 seconds by [Fireship](https://youtube.com/@Fireship)

{% youtube https://youtu.be/Tn6-PIqc4UM %}

This video by Fireship is the best resource to get a grasp of React basics and get familiar with the way you work using React. The video is concise and to-the-point - a no-nonsense, simple introduction to React for everyone.

2.
Free React course by [Scrimba](https://scrimba.com)

![scrimba react course](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lswck2y2iapea83pphtz.png)

This free course is created by the Scrimba team with one of the best instructors I have learnt from - [Bob Ziroll](https://twitter.com/bobziroll). The course can be done by anyone who has a basic grasp of JavaScript - though it’s not recommended to dive right into React without knowing JavaScript well. It is composed of 4 modules and 152 interactive sessions, along with 8 projects that you’ll build while learning React. This is one of the best courses out there for learning React - enough to get you started with it. Bob's way of teaching is very easy to understand, and Scrimba’s interactive coding sessions make it 100x more powerful than any other course.

3. Free React beginner course by [Kent C. Dodds](https://twitter.com/kentcdodds) on [egghead.io](https://egghead.io)

![kent's react course](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7b4tkt9rt4ppth6mqb6b.png)

Kent is one of the best instructors out there when it comes to React, and whether you’re a beginner or an intermediate, this course is a fun ride through the basics of React. You are going to enjoy this course for sure, and you won’t regret doing it.

4. The correct way to create a React project, video by [Theo](https://twitter.com/t3dotgg)

{% youtube https://youtu.be/o9TJWEPc0Lk %}

This video is a must for all beginners starting to learn React, where Theo explains why you should not use [create-react-app](https://create-react-app.dev/) to scaffold a React app and also talks about the better alternatives available.

## Articles / Blogs / Other Resources

1. Awesome list for React on [GitHub](https://github.com/enaqx/awesome-react)

This is one of the best places to find out about different React-based tools and utilities according to your needs.
I use it all the time to find out about libraries and other tools to make my React project-building process easier and more efficient.

2. React Beta Documentation

Trust me when I say it - the React beta docs are so good that you might not actually need any course to get a good grasp of React. And there’s a reason for it: the [React Beta Docs](https://beta.reactjs.org/) include interactive examples along with visual diagrams. They also include a lot of challenges to test your understanding of the topics.

3. Kent’s blogs on [React.js](https://kentcdodds.com/blog?q=react)

As I already mentioned, Kent is one of the people I really admire, and I feel like his blogs are a work of art - everything is explained so well, as if he were sitting right beside you and explaining it himself, though the topics he writes about can sometimes be advanced.

4. [Overreacted.io](https://overreacted.io/) by [Dan Abramov](https://twitter.com/dan_abramov)

This is Dan Abramov’s personal blog, where he writes about React and everything in between. Dan is a core member of the React team and the creator of Redux. It’s a must-read blog to get some in-depth knowledge about React.

5. Blogs on [Dev.to about React](https://dev.to/t/react)

Last but not least, the #React tagged blogs on [Dev.to](https://dev.to) make a wonderful resource for learning and building with React. There are literally tons of blogs available there and tons more waiting to be written!

## Conclusion

I think I shared quite enough resources for any beginner to start their React journey. But please remember this - *No matter how many resources you’re using or how many courses you’re learning from, none of it will amount to anything if you don’t implement what you’re learning in a project.
Please, please, please - build as much as possible, because that’s the only way to get practical knowledge.*

Now, as I’m about to end this blog, I hope I was able to provide a valuable resource for the React and web dev ecosystem. In case you find something wrong here or would like to suggest something, please leave a comment below - I’ll make sure I get to it.

Thanks a lot for reading if you made it till here, I really appreciate it ❤️
asheeshh
1,385,314
Part 1: C# and .NET Core
Foreword Before starting this series, let me explain what it will cover and how I write...
22,078
2023-03-03T02:49:22
https://dev.to/myozawlatt/part-1-c-and-net-core-168p
csharp, dotnetcore, dotnet
**Foreword**

Before starting this series, let me first explain what it will cover and how I write. This series explains ASP.NET Core, a Microsoft web development stack, using the C# programming language. When I write, I prioritize readability over strict spelling — for example, I sometimes write "ကျနော်" instead of the standard "ကျွန်တော်", or "ဘလို" instead of "ဘယ်လို" — so please bear with me when you come across things like that. Also, although the series itself is written in Burmese, nearly all technical terms will be kept in their original English form.

In this series I will cover the .NET web development technologies — **MVC**, **Razor Pages**, **Web API**, **SignalR**, **EF Core**, and **Minimal APIs** — together with practical topics such as **Design Patterns**, **Useful NuGet Packages**, **Design Architectures**, **Development Best Practices**, and **Security Best Practices**, explained as thoroughly as I can until they are properly understood. Because I'll be taking a pragmatic, hands-on approach rather than staying at the knowledge level, I can say that if you follow along and try things out for yourself, this will be a worthwhile series.

**Who should read this series?**

This series does not teach programming, so it will only suit readers who already have basic programming knowledge and are familiar with the C# programming language. If you don't know C# yet, I recommend working through some C# tutorials first.

**C# (C Sharp)**

C# is a programming language created at Microsoft, where design work began in 1999, and it shipped together with the .NET Framework. Its original creator is [Anders Hejlsberg](https://en.wikipedia.org/wiki/Anders_Hejlsberg). It is a high-level programming language; it used to be compiled with csc (the C Sharp Compiler) and is now compiled with the Roslyn compiler. Compiled code runs on the CLR (Common Language Runtime), also known as the .NET Runtime.

The first thing to know is that high-level languages like Java and C# generally cannot manage the resources of the running OS directly. Instead, they are managed through their own VM (Virtual Machine) and executed using a JIT (Just-in-Time) compiler. The CLR in .NET is its VM. That is why, unlike low-level languages such as C or C++ that run directly on the machine, you don't have to handle memory management and the like yourself — the garbage collector takes care of it.

To execute a C# program on the CPU, the .NET SDK/Framework performs three steps: lowering, compiling, and JITing. Lowering converts the high-level features of C# into lower-level code (for example, you may write a `foreach` loop, but behind the scenes the compiler rewrites it as an appropriate `for`, `while`, or `do while` loop, and `switch` expressions are converted into `if-else`/ternary forms as needed). Compiling converts the lowered C# code into IL (Intermediate Language) code and saves it as a DLL or EXE file; this is also called managed code. When the application runs, the .NET Runtime's JIT compiler converts the IL code into native (machine) code and executes it on the CPU (this doesn't happen every time — code that has already been converted to native code is cached, and the next time it needs to execute, the cached native code is run instead). The step that produces IL code is what we call compiling here, and because the Roslyn compiler uses analyzers to perform compile-time checking during this process, most ordinary code mistakes can be detected and fixed at compile time. These are topics in their own right, so I'll stop here.

**.NET Core**

When Microsoft started the .NET Framework, it began as 100% closed source. It was intended only for development on its own OS, Windows, and the languages released with the framework were framework-integrated languages. In other words, even today languages such as VB, C#, and F# cannot run without .NET, whereas a language like PHP can be used for development with frameworks such as Symfony even without Laravel. Later, as web development technologies grew and the open source era arrived, Microsoft started moving .NET toward open source. Previously it had only open sourced ASP.NET; it then shut down its own open source platform, CodePlex, and bought GitHub. It also stopped the .NET Framework at version 4.8, changed the name to simply ".NET", and, starting from .NET 5, open sourced the entire .NET platform. As I write this, we have already reached .NET 8.

In fact, around 2016, when Microsoft set out to open source the whole .NET platform, it first introduced .NET Core. But up through versions 1 and 2 you could say it wasn't successful. The main problem was the target framework version: at that time, code written with .NET Core couldn't be reused on the .NET Framework, and .NET Framework code couldn't be called from .NET Core. So instead of target frameworks, Microsoft introduced a code standard called .NET Standard and continued with a design in which all code in the .NET ecosystem compiles against that standard. By version 3.1, .NET Core had become successful to a certain extent. Today there is no more ".NET Framework" and no more ".NET Core" — everything is moving forward simply as ".NET". One thing to know here is that although Microsoft has stopped developing the .NET Framework, it still provides legacy support, so the .NET Framework lives on even though no new versions are released. And although the development stack has been renamed to .NET, most developers got used to the term ".NET Core" and still call it that today.

So you may ask: what is the difference between .NET Core and ASP.NET Core? ".NET" refers to the .NET platform — basically, the name for all the features included in the .NET SDK. "ASP.NET" refers to the web development framework among those features. In the current version 8 it includes features such as MVC, Razor Pages, Blazor Server, Blazor WebAssembly, Web API, SignalR, and Minimal APIs. .NET also contains many features unrelated to web development, such as WPF, MAUI, and gRPC. For the details, you can explore the [.NET Official Documentation](https://learn.microsoft.com/en-us/dotnet/). That's an overview of the .NET ecosystem — understanding this much is enough.

**Environment setup**

To follow this series on ASP.NET Core, you need a .NET environment set up on your machine. I will mainly use Visual Studio 2022 to write code, but I'll also explain how to work with VS Code. Since Visual Studio is an IDE, it needs plenty of RAM — at least 8 GB, and 16 GB is best. You can download Visual Studio 2022 Community for free [here](https://visualstudio.microsoft.com/). In the Visual Studio Installer, checking just the single **ASP.NET and Web Development** checkbox and installing will automatically pull in the relevant requirements, giving you a development-ready setup.

To write code with VS Code, first download and install the .NET SDK from [here](https://dotnet.microsoft.com/en-us/download). Then download and install VS Code from [here](https://code.visualstudio.com/). Once VS Code is installed, install the [C# Dev Kit](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csdevkit) extension from the Extensions view — it will install everything needed for .NET development.

![C# Dev Kit](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/obwipw1rzs9rqps8s7fw.png)

Once your environment is set up, write the customary Hello World. If that works, let's start writing a small API with C# in the next part.

_Focus on your code, you will be happy(iLied) xD_
myozawlatt
1,385,321
AWS open source newsletter, #147
March 6th, 2023 - Instalment #147 Welcome Welcome to edition #147 of the AWS open source...
0
2023-03-06T07:58:07
https://blog.beachgeek.co.uk/newsletter/aws-open-source-news-and-updates-147/
opensource, aws
## March 6th, 2023 - Instalment #147

**Welcome**

Welcome to edition #147 of the AWS open source newsletter, featured in the [latest episode of Build on Open Source](https://www.twitch.tv/videos/1754262947).

This week we have new projects such as "metahub" and "savings-estimator" that we looked at in closer detail on the Build on Open Source livestream, "aws-iot-core-credential-provider-session-helper" a Python library to help simplify working with AWS IoT, "traffic-inspection-architectures-aws-cloud-wan" code that provides examples of different network architectures and how to do traffic inspection, "neptune-export" a tool to help you export your data in Amazon Neptune, "aws-organizations-tool" a command line tool to help you configure AWS Organisations, "sagemaker-external-repo-access" a nice reference architecture for Amazon SageMaker, "aws-cdk-cfn-hook" a Python CDK app that will get you up and running quickly working with CloudFormation template hooks, and more!

Also covered in this edition is content on a number of popular open source technologies, including Kubernetes, OpenSearch, FreeRTOS, AWS Lambda Powertools, Debezium, Apache Kafka, Kafka Connect, Apache Spark, Apache Hudi, DeltaStreamer, Apicurio Registry, Apache Iceberg, FFmpeg, Prometheus, Babelfish for Amazon Aurora PostgreSQL, Redis, Mastodon, and more!

**Feedback**

Please please please take 1 minute to [complete this short survey](https://pulse.buildon.aws/survey/PJRTOUMK) and get some exclusive content as a thank you.

### Celebrating open source contributors

The articles and projects shared in this newsletter are only possible thanks to the many contributors in open source. I would like to shout out and thank those folks who really do power open source and enable us all to learn and build on top of what they have created.
So thank you to the following open source heroes: Gary Stafford, Dhiraj Thakur, Rajdip Chaudhuri, Corry Haines, Tulip Gupta, Dennis Calhoun, Amir Khairalomoum, James McIntyre, Daniel Gross, Jimmy Ray, Emin Alemdar, and Toni de la Fuente.

### Latest open source projects

*The great thing about open source projects is that you can review the source code. If you like the look of these projects, make sure that you take a look at the code, and if it is useful to you, get in touch with the maintainer to provide feedback, suggestions or even submit a contribution.*

#### Tools

**metahub**

[metahub](https://aws-oss.beachgeek.co.uk/25c) is an AWS Security Finding Format (ASFF) security context enrichment and command line utility for AWS Security Hub. Using MetaHub, you can enrich your security findings with your own context and use that context for filtering, deduplicating, grouping, reporting, automating, suppressing, or updating and enrichment directly in AWS Security Hub. MetaHub reads from and writes to the AWS Security Hub API, or works directly from ASFF files. You can combine these sources as you want to enrich your findings further.

![architecture of metahub tool](https://github.com/gabrielsoltz/metahub/blob/main/docs/imgs/diagram-metahub.drawio.png?raw=true)

Check out more details from [this Reddit thread](https://aws-oss.beachgeek.co.uk/2ke).

**savings-estimator**

[savings-estimator](https://aws-oss.beachgeek.co.uk/2kh) is a native desktop application written in Go that allows you to estimate the cost savings you can achieve in your AWS account by converting your AutoScaling Groups to Spot instances. You can simulate various scenarios, such as keeping some of your instances OnDemand in each group (maybe covered by Reserved Instances or Savings Plans), or only converting some of your AutoScaling Groups to Spot as part of a gradual rollout. You may use any mechanism to adopt Spot, such as applying the configuration yourself group by group as per your simulation.
They also provide a helpful video to show you how you can get started.

{% youtube VXfCOXXtLwA %}

**aws-iot-core-credential-provider-session-helper**

[aws-iot-core-credential-provider-session-helper](https://aws-oss.beachgeek.co.uk/2kf) this package provides an easy way to create a refreshable Boto3 Session using the AWS IoT Core credential provider. It needs Python 3.8 to 3.11, and some of the features include:

* Automatic refresh of Boto3 credentials through requests to the AWS IoT Core credential provider. No need to manage or maintain refresh times.
* Uses the underlying AWS CRT Python bindings for querying the credential provider instead of the Python standard library. This provides support for both certificates and private keys as files or as environment variables.
* Extensible to other TLS methods such as PKCS#11 hardware security modules (see Advanced section).
* Only requires four function calls to create a session helper, Boto3 session, Boto3 client, and then client API calls.

**traffic-inspection-architectures-aws-cloud-wan**

[traffic-inspection-architectures-aws-cloud-wan](https://aws-oss.beachgeek.co.uk/2kg) this repository contains code (in AWS CloudFormation and Terraform) to deploy several inspection architectures using AWS Cloud WAN - with AWS Network Firewall as the inspection solution. The use cases covered are 1/ Centralized Outbound, 2/ East/West traffic, with both Spoke VPCs and Inspection VPCs attached to AWS Cloud WAN, 3/ East/West traffic, with both Spoke VPCs and Inspection VPCs attached to AWS Transit Gateway and peered with AWS Cloud WAN, and 4/ East/West traffic, with Spoke VPCs attached to a peered AWS Transit Gateway and Inspection VPCs attached to AWS Cloud WAN. The documentation provides nice architectural diagrams that outline each of these use cases, a description of what is inspected, and some sample output.
![sample output for traffic-inspection](https://github.com/aws-samples/traffic-inspection-architectures-aws-cloud-wan/blob/main/images/east_west.png?raw=true)

**neptune-export**

[neptune-export](https://aws-oss.beachgeek.co.uk/2ki) is a command line tool that exports Amazon Neptune property graph data to CSV or JSON, or RDF graph data to Turtle. The repo provides details of how you can also deploy this as a service within your environment.

**aws-organizations-tool**

[aws-organizations-tool](https://aws-oss.beachgeek.co.uk/2kk) orgtool is a configuration management tool set for AWS Organizations written in Python. This tooling enables the configuration and management of AWS Organizations with code, which might be useful as you transition from ClickOps to automation via infrastructure as code. Check the docs for more details on how you might do this.

### Demos, Samples, Solutions and Workshops

**sagemaker-external-repo-access**

[sagemaker-external-repo-access](https://aws-oss.beachgeek.co.uk/2kj) the goal of this solution is to demonstrate the deployment of AWS CodeSuite services (i.e., CodeBuild, CodePipeline) to orchestrate secure MLOps access to external package repositories in a data science environment configured with multi-layer security. There is detailed documentation as well as links to a supporting blog post you can read to help you get started.

**mlops-sagemaker-github-actions**

[mlops-sagemaker-github-actions](https://aws-oss.beachgeek.co.uk/2hv) similar to the previous project, this repo is an example of an MLOps implementation using Amazon SageMaker and GitHub Actions. The code helps you build a solution that automates a model-build pipeline that includes steps for data preparation, model training, model evaluation, and registration of that model in the SageMaker Model Registry. The resulting trained ML model is deployed from the model registry to staging and production environments upon approval.
![architecture of mlops on sagemaker](https://github.com/aws-samples/mlops-sagemaker-github-actions/blob/main/img/Amazon-SageMaker-GitHub-Actions-Architecture.png?raw=true)

**aws-cdk-cfn-hook**

[aws-cdk-cfn-hook](https://github.com/aws-samples/aws-cdk-cfn-hook) this solution demonstrates how to create, update, and deploy CloudFormation hooks through a CI/CD pipeline using AWS Cloud Development Kit as Infrastructure as Code. It leverages AWS CDK (Python) to deploy: an AWS CodeCommit source repo that contains hook handlers, the hook schema and other parameters to create the hook and its related configuration; an AWS CodeBuild stage to package and deploy the hook; and an AWS CodePipeline.

![architecture of cdk-cfn-hook solution](https://github.com/aws-samples/aws-cdk-cfn-hook/blob/main/images/CFN_Hook.jpg?raw=true)

**aws-textract-cdk-commercial-acord**

[aws-textract-cdk-commercial-acord](https://github.com/aws-samples/aws-textract-cdk-commercial-acord) this repo contains all the code required to build an IDP solution on AWS, from document splitting and classification to extraction. The repo uses a sample commercial ACORD data set.

**rails-lambda-handler**

[rails-lambda-handler](https://github.com/aws-samples/rails-lambda-handler) this repository includes an AWS Lambda handler function that launches a Ruby on Rails (or other Rack compliant) application, as well as a sample application so you can see how this works.

### AWS and Community blog posts

**Open Source Data Lakes**

Back with another epic blog post, Gary Stafford provides a gloriously detailed walk through, together with supporting sample code, that takes a look at how you can combine a number of open source projects with AWS services to create a real-time transactional data lake. From the post:

> Red Hat’s Debezium, Apache Kafka, and Kafka Connect will be used for change data capture (CDC). In addition, Apache Spark, Apache Hudi, and Hudi’s DeltaStreamer will be used to manage the data lake.
Gary's posts are essential reading, so head on over to [Building Data Lakes on AWS with Kafka Connect, Debezium, Apicurio Registry, and Apache Hudi](https://aws-oss.beachgeek.co.uk/2ko) and dive right on into the lake! [hands on]

![architecture of real time transactional data lake on aws using open source software](https://github.com/garystafford/cdc-hudi-data-lake-demo/blob/main/diagram.png?raw=true)

**Apache Iceberg**

In the post [Build a real-time GDPR-aligned Apache Iceberg data lake](https://aws-oss.beachgeek.co.uk/2kv), Dhiraj Thakur and Rajdip Chaudhuri show you how you can use the Iceberg table format on Athena to implement GDPR use cases like data deletion and data upserts as required, when streaming data is being generated and ingested through AWS Glue streaming jobs in Amazon S3. [hands on]

![architecture of how you can build a real time GDPR apache iceberg solution for your data lake](https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2023/01/28/image001-4.png)

**Mastodon**

If you have been looking for a way to host your own Mastodon instance, then why not take a look at this blog post from Corry Haines. In [Deploying Mastodon on AWS](https://aws-oss.beachgeek.co.uk/2kn) he shares the lessons he learned on the way to hosting his own instance. Essential reading this week.

**FFmpeg**

FFmpeg is an open source tool commonly used by media technology companies to encode and transcode video and audio formats. In [Run Open Source FFMPEG at Lower Cost and Better Performance on a VT1 Instance for VOD Encoding Workloads](https://aws-oss.beachgeek.co.uk/2kp), Tulip Gupta and Dennis Calhoun share how you can optimise how you run FFmpeg on AWS using VT1 instance types on Amazon EC2.
**Other posts and quick reads** * [Build a GNN-based real-time fraud detection solution using the Deep Graph Library without using external graph storage](https://aws-oss.beachgeek.co.uk/2kw) provides a step-by-step process for training and evaluating a Relational Graph Convolutional Network (RGCN) model for real-time fraud detection using the open source Deep Graph Library [hands on] * [Reduce Amazon EMR cluster costs by up to 19% with new enhancements in Amazon EMR Managed Scaling](https://aws-oss.beachgeek.co.uk/2kt) provides an overview of enhancements in EMR Managed Scaling which show improved cluster utilisation (by up to 15 percent) and a reduction in cluster costs [hands on] ![reducing amazon emr cluster costs graph](https://d2908q01vomqb2.cloudfront.net/b6692ea5df920cad691c20319a6fffd7a4a766b8/2023/02/27/BDB-2751_image_001-new.png) * [Using OpsWatch to Create a Single Pane of Prometheus Metrics from Multiple Non-Native Sources](https://aws-oss.beachgeek.co.uk/2ks) explores how to integrate CloudWatch and other Amazon Web Services (AWS) data sources into a typical container and Prometheus-based monitoring world ![overview of single pane of prometheus metrics](https://d2908q01vomqb2.cloudfront.net/77de68daecd823babbb58edb1c8e14d7106e83bb/2023/01/24/OpsWatch-Arvato-Systems-4.png) * [Automated Application Failover Across Availability Zones with Floating/Virtual IP on Amazon EKS](https://aws-oss.beachgeek.co.uk/2ku) looks at design patterns that let you fail-over your EKS based applications seamlessly to another AZ while using same IP addresses, in an automated way, with no change needed in the application code [hands on] * [SaaS Data Isolation with Dynamic Credentials Using HashiCorp Vault in Amazon EKS](https://aws-oss.beachgeek.co.uk/2kx) examines a solution for implementing multi-tenant SaaS data isolation with dynamic credentials using HashiCorp Vault in an Amazon EKS environment [hands on] ![architecture for saas data isolation on EKS using HashiCorp 
Vault](https://d2908q01vomqb2.cloudfront.net/77de68daecd823babbb58edb1c8e14d7106e83bb/2023/02/06/HashiCorp-Vault-EKS-SaaS-4.png)

**Case Studies**

* [How Wiz used Amazon ElastiCache to improve performance and reduce costs](https://aws-oss.beachgeek.co.uk/2kq) is a great case study on how they were able to improve overall application performance, reduce pressure on their database, and then right-size the database instances to reduce their overall TCO. ![graph of performance enhancements Redis](https://d2908q01vomqb2.cloudfront.net/887309d048beef83ad3eabf2a79a64a389ab1c9f/2023/03/01/06-reduced-load.png)
* [Rebura: Accelerate SQL Server database modernization with Babelfish for Amazon Aurora PostgreSQL](https://aws-oss.beachgeek.co.uk/2kr) explores how Rebura helps customers modernise their SQL Server on AWS using Babelfish for Amazon Aurora PostgreSQL. ![process overview of how to modernise sql server](https://d2908q01vomqb2.cloudfront.net/8effee409c625e1a2d8f5033631840e6ce1dcb64/2023/02/27/babelfishrebura3-1024x466.png)

### Quick updates

**AWS Lambda Powertools**

AWS Lambda Powertools, an open-source developer library, now supports .NET to help you incorporate Well-Architected Serverless best practices into your .NET Lambda function code as early and as fast as possible. Lambda Powertools for .NET is used when developing code for the .NET 6 Lambda runtime. To dive deeper into the GA announcement, read the post from Amir Khairalomoum, [Introducing AWS Lambda Powertools for .NET](https://aws-oss.beachgeek.co.uk/2km). ![dashboard from aws console powered by aws lambda powertools](https://d2908q01vomqb2.cloudfront.net/1b6453892473a467d07372d45eb05abc2031647a/2023/02/24/AWS-X-Ray-waterfall-trace-view-1024x663.png)

**OpenSearch**

OpenSearch 2.6.0 is now available, with a new data schema built to OpenTelemetry standards that unlocks an array of future capabilities for analytics and observability use cases.
This release also delivers upgrades for index management, improves threat detection for security analytics workloads, and adds functionality for visualization tools, machine learning (ML) models, and more. Read the full announcement from James McIntyre, [Introducing OpenSearch 2.6](https://aws-oss.beachgeek.co.uk/2kl). In related news, Krishna Kondaka, Asif Sohail Mohammed, and David Venable collaborated on the post, [Announcing Data Prepper 2.1.0](https://aws-oss.beachgeek.co.uk/2ky), which shared news about the latest release of Data Prepper, an open source tool that accepts, filters, transforms, enriches, and routes data into your OpenSearch environment.

**PostgreSQL**

Amazon Relational Database Service (Amazon RDS) for PostgreSQL now supports the latest major version, PostgreSQL 15. New features in PostgreSQL 15 include the SQL standard "MERGE" command for conditional SQL queries, performance improvements for both in-memory and disk-based sorting, and support for two-phase commit and row/column filtering for logical replication. The PostgreSQL 15 release also adds support for the new extension pg_walinspect, and server-side compression with Gzip, LZ4, or Zstandard (zstd) using pg_basebackup.

**MySQL**

Amazon Aurora MySQL-Compatible Edition 3 (with MySQL 8.0 compatibility) now supports MySQL 8.0.26. In addition to several security enhancements and bug fixes, MySQL 8.0.26 includes several changes, such as enhanced tablespace file segment page configuration and new aliases for certain identifier names. For more details, please review the Aurora MySQL 3 and MySQL 8.0.26 release notes.

**MariaDB**

Amazon Relational Database Service (Amazon RDS) for MariaDB now supports MariaDB minor versions 10.6.12, 10.5.19, 10.4.28 and 10.3.38.
We recommend that you upgrade to the latest minor versions to fix known security vulnerabilities in prior versions of MariaDB, and to benefit from the numerous bug fixes, performance improvements, and new functionality added by the MariaDB community.

**AWS SAM**

Serverless application developers can now define multiple destinations when integrating AWS services with AWS Serverless Application Model (AWS SAM) connectors. Previously, SAM customers needed to create a SAM connector definition for every source and destination pair. For example, if an AWS::Serverless::Function needed to interact with three SNS topics using identical permissions, three SAM connectors would need to be defined for each connection from the function to each topic.

### Videos of the week

**FreeRTOS**

Join Daniel Gross, Developer Advocate within the AWS IoT team, as he shows you how you can debug FreeRTOS within VSCode using QEMU. Check out the links so you can follow along with the blog post and supporting code. Very cool.

{% youtube l2GmlDN_SPo %}

**Improving Secret Management in K8s with ESO**

Managing secrets in Kubernetes can be a challenge. One of the optimal approaches to storing and making use of sensitive data in your clusters is to incorporate the use of external centralized secrets managers. Centralized secrets managers usually offer encryption of data at rest and expose an API for lifecycle management operations of your secrets. So how do you integrate secrets from external providers and securely expose them in your cluster? In this episode, your host Jimmy Ray is joined by Emin Alemdar, who walks through the External Secrets Operator (ESO) and some good practices for managing secrets in Kubernetes.

{% youtube FityN80Cpto %}

**OpenSearch**

This short video covers how to install and configure OpenSearch 2.5.0 with OpenSearch Dashboards on Ubuntu.

{% youtube NL-B2Y508lU %}

**Build on Open Source**

Episode two of Build on Open Source streamed live last Friday, 3rd March.
Derek and I covered newsletters #146 and this one, #147. We had a special guest, Toni de la Fuente, who walked us through his open source project prowler. **[prowler](https://aws-oss.beachgeek.co.uk/2ct)** helps keep your AWS environments secure by running audits against well-known security compliance checks, and Toni walked us through a demo of it in action. I highly recommend you watch it if you can, as this project is awesome! You can watch it on [replay here](https://www.twitch.tv/videos/1754262947). For those unfamiliar with this show, Build on Open Source is where we go over this newsletter and then invite special guests to dive deep into their open source project. Expect plenty of code, demos and hopefully laughs. We have put together a playlist so that you can easily access all (eight) of the episodes of the Build on Open Source show: [Build on Open Source playlist](https://aws-oss.beachgeek.co.uk/24u)

# Events for your diary

If you are planning any events in 2023, either virtual, in person, or hybrid, get in touch as I would love to share details of your event with readers.

**Build on Open Source**
**March 17th, twitch.tv/aws**

The third episode of Build on Open Source features special guest AWS Community Builder John Preston, who will be showing us compose-x, an open source tool to help you deploy applications using Amazon ECS (and other AWS services). Really looking forward to this one as John is a superstar open source developer. See you there on [twitch.tv/aws](https://twitch.tv/aws), Friday 17th at 9am GMT, 10am CET.

**Power Up your Kubernetes**
**March 15th, AWS Office Zurich, Switzerland**

If you want to improve the architecture, scaling and monitoring of your applications that run on Amazon Elastic Kubernetes Service, this event is for you. During this event you will learn to scale Kubernetes applications with Karpenter, monitor your workloads, and build SaaS architectures for Kubernetes.
Find out more and save your place by heading over to the registration page, [Power up your Kubernetes on AWS](https://aws-oss.beachgeek.co.uk/2jd)

**Everything Open**
**March 14th-15th, Melbourne, Australia**

A new event for the fine folks in Australia. Everything Open is running for the first time, and the organisers (Linux Australia) have decided to run this event to provide a space for a cross-section of the open technologies communities to come together in person. Check out the [event details here](https://aws-oss.beachgeek.co.uk/2ds). The CFP is currently open, so why not take a look and submit something if you can.

**FOSSASIA**
**April 13th-15th, Singapore**

FOSSASIA Summit 2023 returns as an in-person and online event, taking place from Thursday 13th April to Saturday 15th April at the Lifelong Learning Institute in Singapore. If you are interested in attending in person, or virtually, find out more about the event at the [FOSSASIA Summit 2023 page](https://aws-oss.beachgeek.co.uk/2iq).

**Cortex**
**Every other Thursday, next one 16th February**

The Cortex community call happens every two weeks on Thursday, alternating between 1200 UTC and 1700 UTC. For more details, check out the GitHub project and go to the [Community Meetings](https://aws-oss.beachgeek.co.uk/2h5) section. The community calls keep a rolling doc of previous meetings, so you can catch up on the previous discussions. Check the [Cortex Community Meetings Notes](https://aws-oss.beachgeek.co.uk/2h6) for more info.

**OpenSearch**
**Every other Tuesday, 3pm GMT**

This regular meet-up is for anyone interested in OpenSearch & Open Distro. All skill levels are welcome, and talks are welcomed on topics including search, logging, log analytics, and data visualisation.
Sign up to the next session, [OpenSearch Community Meeting](https://aws-oss.beachgeek.co.uk/1az) ### Stay in touch with open source at AWS Remember to check out the [Open Source homepage](https://aws.amazon.com/opensource/?opensource-all.sort-by=item.additionalFields.startDate&opensource-all.sort-order=asc) to keep up to date with all our activity in open source by following us on [@AWSOpen](https://twitter.com/AWSOpen)
094459
1,408,191
Soroban Contracts 101 : Tokens (Part 1)
Hi there! Welcome to my tenth post of my series called "Soroban Contracts 101", where I'll be...
22,205
2023-03-20T18:25:46
https://dev.to/yuzurush/soroban-contracts-101-tokens-part-1-lao
soroban, stellar, smartcontract, sorobanathon
Hi there! Welcome to the tenth post of my series called "Soroban Contracts 101", where I'll be explaining the basics of Soroban contracts, such as data storage, authentication, custom types, and more. All the code that we're gonna explain throughout this series will mostly come from the [soroban-contracts-101](https://github.com/yuzurush/soroban-contracts-101) GitHub repository. In this tenth post of the series, I'll be covering the Soroban token contract. With this contract we can do various things such as initialize or create our own token, mint it to an address, check the balance of a specified token on a specified account, and more. I will divide this post into two parts since this is gonna be a long journey.

## The Contract Code

This contract has several modules, which are:

- **`admin.rs`** This module contains admin logic (checking admin, setting admin)

```rust
use crate::storage_types::DataKey;
use soroban_sdk::{Address, Env};

pub fn has_administrator(e: &Env) -> bool {
    let key = DataKey::Admin;
    e.storage().has(&key)
}

pub fn read_administrator(e: &Env) -> Address {
    let key = DataKey::Admin;
    e.storage().get_unchecked(&key).unwrap()
}

pub fn write_administrator(e: &Env, id: &Address) {
    let key = DataKey::Admin;
    e.storage().set(&key, id);
}
```

This module imports `DataKey` from `storage_types.rs` and the `Address` and `Env` types from `soroban_sdk`. It defines three admin-related functions: 1. `has_administrator` - Checks if an admin is set. It uses a `DataKey` of type `Admin` and the storage `has` method to check. 2. `read_administrator` - Reads the admin address. It uses a `DataKey` of type `Admin` and the storage `get_unchecked` method (panicking if no admin is set). 3. `write_administrator` - Writes an admin address. It uses a `DataKey` of type `Admin` and the storage `set` method. This module contains the core logic for managing an admin address for the token contract. It allows checking, reading and writing the admin address, and checking admin authorization for functions.
- **`allowance.rs`** Contains allowance logic (reading, increasing, decreasing allowance)

```rust
use crate::storage_types::{AllowanceDataKey, DataKey};
use soroban_sdk::{Address, Env};

pub fn read_allowance(e: &Env, from: Address, spender: Address) -> i128 {
    let key = DataKey::Allowance(AllowanceDataKey { from, spender });
    if let Some(allowance) = e.storage().get(&key) {
        allowance.unwrap()
    } else {
        0
    }
}

pub fn write_allowance(e: &Env, from: Address, spender: Address, amount: i128) {
    let key = DataKey::Allowance(AllowanceDataKey { from, spender });
    e.storage().set(&key, &amount);
}

pub fn spend_allowance(e: &Env, from: Address, spender: Address, amount: i128) {
    let allowance = read_allowance(e, from.clone(), spender.clone());
    if allowance < amount {
        panic!("insufficient allowance");
    }
    write_allowance(e, from, spender, allowance - amount);
}
```

This module imports the types `AllowanceDataKey` and `DataKey` from `storage_types.rs`. These are the types used to store allowance data. It defines three allowance-related functions: 1. `read_allowance` - Reads the allowance for a `spender` on behalf of a `from` address. It uses a `DataKey` of type `AllowanceDataKey` to look up the data in contract storage, and returns 0 if no allowance is set. 2. `write_allowance` - Writes an allowance amount for a `spender` on behalf of a `from` address. It uses a `DataKey` of type `AllowanceDataKey` to store the data. 3. `spend_allowance` - Spends some amount from an allowance. It first reads the current allowance, checks that it is sufficient, then decreases it by the amount and writes the new allowance. It panics if there is insufficient allowance. This module contains the core logic for managing allowances in the token contract. It uses storage types from another module and functions for reading/writing from storage to implement the allowance logic.
- **`balance.rs`** Contains balance logic (reading, spending, receiving balance)

```rust
use crate::storage_types::DataKey;
use soroban_sdk::{Address, Env};

pub fn read_balance(e: &Env, addr: Address) -> i128 {
    let key = DataKey::Balance(addr);
    if let Some(balance) = e.storage().get(&key) {
        balance.unwrap()
    } else {
        0
    }
}

fn write_balance(e: &Env, addr: Address, amount: i128) {
    let key = DataKey::Balance(addr);
    e.storage().set(&key, &amount);
}

pub fn receive_balance(e: &Env, addr: Address, amount: i128) {
    let balance = read_balance(e, addr.clone());
    if !is_authorized(e, addr.clone()) {
        panic!("can't receive when deauthorized");
    }
    write_balance(e, addr, balance + amount);
}

pub fn spend_balance(e: &Env, addr: Address, amount: i128) {
    let balance = read_balance(e, addr.clone());
    if !is_authorized(e, addr.clone()) {
        panic!("can't spend when deauthorized");
    }
    if balance < amount {
        panic!("insufficient balance");
    }
    write_balance(e, addr, balance - amount);
}

pub fn is_authorized(e: &Env, addr: Address) -> bool {
    let key = DataKey::State(addr);
    if let Some(state) = e.storage().get(&key) {
        state.unwrap()
    } else {
        true
    }
}

pub fn write_authorization(e: &Env, addr: Address, is_authorized: bool) {
    let key = DataKey::State(addr);
    e.storage().set(&key, &is_authorized);
}
```

It imports `DataKey` from `storage_types.rs` and the `Address` and `Env` types from `soroban_sdk`. It defines six balance-related functions: 1. `read_balance` - Reads the balance for an address. It uses a `DataKey` of type `Balance` and the storage `get` method, returning 0 if no balance is set. 2. `write_balance` - Writes a balance for an address. It uses a `DataKey` of type `Balance` and the storage `set` method. 3. `receive_balance` - Increases the balance for an `address` by some `amount`. It first reads the current balance, checks authorization status, then increases and writes the new balance. It panics if deauthorized. 4. `spend_balance` - Decreases the balance for an `address` by some `amount`.
It first reads the current balance, checks authorization status and sufficient balance, then decreases and writes the new balance. It panics if deauthorized or if insufficient balance. 5. `is_authorized` - Checks the authorization status for an `address`. It uses a `DataKey` of type `State` and the storage `get` method, defaulting to `true` if no state is set. 6. `write_authorization` - Writes the authorization status for an `address`. It uses a `DataKey` of type `State` and the storage `set` method. So in summary, this module contains the core logic for managing balances (and authorization status) in the token contract. It uses storage types from another module and functions for reading/writing from storage to implement the balance/authorization logic. - **`contract.rs`** This file is the main contract implementation ```rust use crate::admin::{has_administrator, read_administrator, write_administrator}; use crate::allowance::{read_allowance, spend_allowance, write_allowance}; use crate::balance::{is_authorized, write_authorization}; use crate::balance::{read_balance, receive_balance, spend_balance}; use crate::event; use crate::metadata::{ read_decimal, read_name, read_symbol, write_decimal, write_name, write_symbol, }; use soroban_sdk::{contractimpl, Address, Bytes, Env}; pub trait TokenTrait { fn initialize(e: Env, admin: Address, decimal: u32, name: Bytes, symbol: Bytes); fn allowance(e: Env, from: Address, spender: Address) -> i128; fn increase_allowance(e: Env, from: Address, spender: Address, amount: i128); fn decrease_allowance(e: Env, from: Address, spender: Address, amount: i128); fn balance(e: Env, id: Address) -> i128; fn spendable_balance(e: Env, id: Address) -> i128; fn authorized(e: Env, id: Address) -> bool; fn transfer(e: Env, from: Address, to: Address, amount: i128); fn transfer_from(e: Env, spender: Address, from: Address, to: Address, amount: i128); fn burn(e: Env, from: Address, amount: i128); fn burn_from(e: Env, spender: Address, from: 
Address, amount: i128); fn clawback(e: Env, from: Address, amount: i128); fn set_authorized(e: Env, id: Address, authorize: bool); fn mint(e: Env, to: Address, amount: i128); fn set_admin(e: Env, new_admin: Address); fn decimals(e: Env) -> u32; fn name(e: Env) -> Bytes; fn symbol(e: Env) -> Bytes; } fn check_nonnegative_amount(amount: i128) { if amount < 0 { panic!("negative amount is not allowed: {}", amount) } } pub struct Token; #[contractimpl] impl TokenTrait for Token { fn initialize(e: Env, admin: Address, decimal: u32, name: Bytes, symbol: Bytes) { if has_administrator(&e) { panic!("already initialized") } write_administrator(&e, &admin); write_decimal(&e, u8::try_from(decimal).expect("Decimal must fit in a u8")); write_name(&e, name); write_symbol(&e, symbol); } fn allowance(e: Env, from: Address, spender: Address) -> i128 { read_allowance(&e, from, spender) } fn increase_allowance(e: Env, from: Address, spender: Address, amount: i128) { from.require_auth(); check_nonnegative_amount(amount); let allowance = read_allowance(&e, from.clone(), spender.clone()); let new_allowance = allowance .checked_add(amount) .expect("Updated allowance doesn't fit in an i128"); write_allowance(&e, from.clone(), spender.clone(), new_allowance); event::increase_allowance(&e, from, spender, amount); } fn decrease_allowance(e: Env, from: Address, spender: Address, amount: i128) { from.require_auth(); check_nonnegative_amount(amount); let allowance = read_allowance(&e, from.clone(), spender.clone()); if amount >= allowance { write_allowance(&e, from.clone(), spender.clone(), 0); } else { write_allowance(&e, from.clone(), spender.clone(), allowance - amount); } event::decrease_allowance(&e, from, spender, amount); } fn balance(e: Env, id: Address) -> i128 { read_balance(&e, id) } fn spendable_balance(e: Env, id: Address) -> i128 { read_balance(&e, id) } fn authorized(e: Env, id: Address) -> bool { is_authorized(&e, id) } fn transfer(e: Env, from: Address, to: Address, amount: i128) 
{ from.require_auth(); check_nonnegative_amount(amount); spend_balance(&e, from.clone(), amount); receive_balance(&e, to.clone(), amount); event::transfer(&e, from, to, amount); } fn transfer_from(e: Env, spender: Address, from: Address, to: Address, amount: i128) { spender.require_auth(); check_nonnegative_amount(amount); spend_allowance(&e, from.clone(), spender, amount); spend_balance(&e, from.clone(), amount); receive_balance(&e, to.clone(), amount); event::transfer(&e, from, to, amount) } fn burn(e: Env, from: Address, amount: i128) { from.require_auth(); check_nonnegative_amount(amount); spend_balance(&e, from.clone(), amount); event::burn(&e, from, amount); } fn burn_from(e: Env, spender: Address, from: Address, amount: i128) { spender.require_auth(); check_nonnegative_amount(amount); spend_allowance(&e, from.clone(), spender, amount); spend_balance(&e, from.clone(), amount); event::burn(&e, from, amount) } fn clawback(e: Env, from: Address, amount: i128) { check_nonnegative_amount(amount); let admin = read_administrator(&e); admin.require_auth(); spend_balance(&e, from.clone(), amount); event::clawback(&e, admin, from, amount); } fn set_authorized(e: Env, id: Address, authorize: bool) { let admin = read_administrator(&e); admin.require_auth(); write_authorization(&e, id.clone(), authorize); event::set_authorized(&e, admin, id, authorize); } fn mint(e: Env, to: Address, amount: i128) { check_nonnegative_amount(amount); let admin = read_administrator(&e); admin.require_auth(); receive_balance(&e, to.clone(), amount); event::mint(&e, admin, to, amount); } fn set_admin(e: Env, new_admin: Address) { let admin = read_administrator(&e); admin.require_auth(); write_administrator(&e, &new_admin); event::set_admin(&e, admin, new_admin); } fn decimals(e: Env) -> u32 { read_decimal(&e) } fn name(e: Env) -> Bytes { read_name(&e) } fn symbol(e: Env) -> Bytes { read_symbol(&e) } } ``` Our main contract code imports functions from the admin, allowance, balance, and 
metadata modules. It defines a `TokenTrait` with functions for all the key token operations (`initialize`, `transfer`, `mint`, etc.) and a `check_nonnegative_amount` helper function to check that an amount is non-negative (and panic if not). It then implements the trait for a `Token` struct, calling into the imported module functions to implement the logic for each token operation. So in summary, this `contract.rs` file ties together the logic from the other modules to implement the full `TokenTrait`, thereby defining the core functionality of the token contract. This contract contains several functions, which are: 1. `initialize` - Sets up the initial state of the contract (admin, decimals, name, symbol). This function needs the following arguments (in the correct respective types) supplied when invoking it: `admin: Address` - The admin address `decimal: u32` - The number of decimals `name: Bytes` - The token name `symbol: Bytes` - The token symbol 2. `allowance` - Reads the allowance for a spender on behalf of a from address. This function needs the following arguments (in the correct respective types) supplied when invoking it: `from: Address` - The address allowing access `spender: Address` - The address allowed to spend Returns: `i128` - The allowance amount 3. `increase_allowance` / `decrease_allowance` - Increases or decreases an allowance. These functions need the following arguments (in the correct respective types) supplied when invoking them: `from: Address` - The address allowing access `spender: Address` - The address allowed to spend `amount: i128` - The amount to increase or decrease by 4. `balance` / `spendable_balance` - Reads a balance and spendable balance. These functions need the following arguments (in the correct respective types) supplied when invoking them: `id: Address` - The address to read the balance of Returns: `i128` - The balance amount 5.
`authorized` - Checks if an address is authorized. This function needs the following arguments (in the correct respective types) supplied when invoking it: `id: Address` - The address to check authorization of Returns: `bool` - The authorization status (true or false) 6. `transfer` / `transfer_from` - Transfers tokens. These functions need the following arguments (in the correct respective types) supplied when invoking them: For `transfer`: `from: Address` - The sender address `to: Address` - The recipient address `amount: i128` - The amount to transfer For `transfer_from`: `spender: Address` - The address allowed to spend from another address `from: Address` - The address the spender is spending from `to: Address` - The recipient address `amount: i128` - The amount to transfer 7. `burn` / `burn_from` - Burns (reduces supply of) tokens. These functions need the following arguments (in the correct respective types) supplied when invoking them: For `burn`: `from: Address` - The address to burn tokens from `amount: i128` - The amount of tokens to burn For `burn_from`: `spender: Address` - The address allowed to burn from another address `from: Address` - The address the spender is burning from `amount: i128` - The amount of tokens to burn 8. `clawback` - Claws back tokens from an address. This function needs the following arguments (in the correct respective types) supplied when invoking it: `admin: Address` - The admin address (required to call this function) `from: Address` - The address to clawback tokens from `amount: i128` - The amount of tokens to clawback 9. `set_authorized` - Sets the authorization status of an address. This function needs the following arguments (in the correct respective types) supplied when invoking it: `admin: Address` - The admin address (required to call this function) `id: Address` - The address to set authorization status for `authorize: bool` - The new authorization status (true or false) 10.
`mint` - Mints tokens to an address. This function needs the following arguments (in the correct respective types) supplied when invoking it: `admin: Address` - The admin address (required to call this function) `to: Address` - The recipient address `amount: i128` - The amount of tokens to mint 11. `set_admin` - Sets the admin address. This function needs the following arguments (in the correct respective types) supplied when invoking it: `admin: Address` - The current admin address (required to call this function) `new_admin: Address` - The new admin address to set 12. `decimals` / `name` / `symbol` - Reads metadata. These functions return the token metadata that was set when the `initialize` function was invoked.
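Taken together, the allowance and balance modules implement bookkeeping that can be modeled outside of Soroban with plain maps. The sketch below is a simplified stand-in in plain Rust — no `soroban_sdk`, string addresses instead of `Address`, and the admin/auth checks omitted — that mirrors the checks in `spend_allowance`, `spend_balance`, and the `checked_add` overflow guard from `increase_allowance`. The `TokenModel` type and its methods are hypothetical names for this illustration, not part of the contract.

```rust
use std::collections::HashMap;

// Simplified model of the contract's bookkeeping. "Storage" is a pair of
// maps; addresses are plain strings. This mirrors the logic only, not the
// soroban_sdk storage API.
#[derive(Default)]
struct TokenModel {
    balances: HashMap<String, i128>,
    allowances: HashMap<(String, String), i128>,
}

impl TokenModel {
    // Mirrors read_balance / read_allowance: missing entries default to 0.
    fn balance(&self, id: &str) -> i128 {
        *self.balances.get(id).unwrap_or(&0)
    }

    fn allowance(&self, from: &str, spender: &str) -> i128 {
        *self
            .allowances
            .get(&(from.to_string(), spender.to_string()))
            .unwrap_or(&0)
    }

    // Mirrors increase_allowance: reject negatives, detect i128 overflow.
    fn increase_allowance(&mut self, from: &str, spender: &str, amount: i128) {
        assert!(amount >= 0, "negative amount is not allowed");
        let new = self
            .allowance(from, spender)
            .checked_add(amount)
            .expect("Updated allowance doesn't fit in an i128");
        self.allowances
            .insert((from.to_string(), spender.to_string()), new);
    }

    // Mirrors mint (the admin auth check is omitted in this model).
    fn mint(&mut self, to: &str, amount: i128) {
        assert!(amount >= 0, "negative amount is not allowed");
        let bal = self.balance(to) + amount;
        self.balances.insert(to.to_string(), bal);
    }

    // Mirrors transfer_from: spend the allowance, then move the balance.
    fn transfer_from(&mut self, spender: &str, from: &str, to: &str, amount: i128) {
        assert!(amount >= 0, "negative amount is not allowed");
        let allowance = self.allowance(from, spender);
        assert!(allowance >= amount, "insufficient allowance");
        self.allowances
            .insert((from.to_string(), spender.to_string()), allowance - amount);
        let from_bal = self.balance(from);
        assert!(from_bal >= amount, "insufficient balance");
        self.balances.insert(from.to_string(), from_bal - amount);
        let to_bal = self.balance(to) + amount;
        self.balances.insert(to.to_string(), to_bal);
    }
}

fn main() {
    let mut t = TokenModel::default();
    t.mint("alice", 1_000);
    t.increase_allowance("alice", "bob", 300);
    t.transfer_from("bob", "alice", "carol", 200);
    assert_eq!(t.balance("alice"), 800);
    assert_eq!(t.balance("carol"), 200);
    assert_eq!(t.allowance("alice", "bob"), 100);
}
```

The real contract gets the same guarantees from the panics in `spend_allowance` / `spend_balance`; a panic inside a Soroban contract aborts the invocation, so a failed check leaves storage unchanged.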
yuzurush
1,385,376
Manage Profiles in VS Code
In Visual Studio Code, a profile is a set of settings that help you customize your environment for...
0
2023-03-02T17:07:28
https://blog.paschalogu.com/Manage-Profiles-in-Visual-Studio-Code-466ba8eb5af849798bf55012646578a8
vscode, productivity, tutorial
In Visual Studio Code, a profile is a set of settings that help you customize your environment for your projects. Profiles are a great way to customize VS Code to better fit your needs. You can save a set of configurations such as settings, extensions, font size, color theme etc., sync them across your devices, and even easily share them with colleagues. You can create multiple profiles for different scenarios. For example, a Python project might require specific extensions to run Python scripts smoothly, while a Golang project will require different extensions and settings. In another use case, you can create a profile with a specific set of extensions and settings, like font size and color theme, for a demo project. By doing this, the demo will not mess up your normal VS Code setup, and you can customize VS Code for better visibility during your demo.

## How to Create a Profile in VS Code

### On Windows

To create a new profile, navigate to the top left of your screen, then click File > Preferences > Profiles > Create Profile.

![Windows OS](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yohe6sa1ulsfg3k5nr8h.png)

### For macOS

Navigate to Code > Preferences > Profiles > Create Profile.

![macOS](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3riqu0zq8tzw6vf0wvgx.png)

Alternatively, you can click on settings (gear ⚙️ Icon) at the bottom left of your screen > Profiles > Create Profile. This works for both macOS and Windows. In the image above, I have created a DevOps VS Code profile to meet my DevOps needs. When creating a profile, you can choose either “Create an Empty Profile” or “Create from current profile”.

![Create Profile](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kcj5jj801aiyytqw2toe.png)

Selecting “Create an Empty Profile” above will create a new profile without any customizations, as though you had just downloaded and installed VS Code, while creating from the current profile will carry over all existing changes and settings, if any exist.
## Manage Profiles

You can easily switch, rename, create, delete, export or import profiles in VS Code by following the prompts on your screen.

![Manage Profiles](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w9i6j6g8ps3m2pmvefl7.png)

## Sync Profiles

Enable profiles to sync across all your devices by using the [Settings Sync](https://code.visualstudio.com/docs/editor/settings-sync) feature. With Settings Sync enabled, creating or deleting a profile on one device creates or deletes it across all your synced devices. Is VS Code Profiles a feature you would like to explore? Share your thoughts in the comments section.

💡 This article was first published at [blog.paschalogu.com](https://blog.paschalogu.com/). If you found it useful, follow me on [Twitter](https://twitter.com/Paschal_ik) and also on [LinkedIn](https://www.linkedin.com/in/paschal-ogu). Thanks for reading!
paschalogu
1,385,408
Multiple Inheritance in Solidity
Solidity supports multiple inheritance, which allows a single contract to inherit from multiple...
0
2023-03-02T17:13:31
https://dev.to/shlok2740/multiple-inheritance-in-solidity-4pg
ethereum, blockchain, solidity, web3
> Solidity supports multiple inheritance, which allows a single contract to inherit from multiple contracts at the same time.

Multiple inheritance is a powerful feature of Solidity, the programming language used to develop smart contracts on Ethereum. It allows for code reuse and better organization of complex projects. With multiple inheritance, developers can create contracts that inherit from more than one parent contract. This makes it easier to implement complicated logic or incorporate existing libraries into their projects without having to write all the code from scratch. The primary benefit of using multiple inheritance in Solidity is that it enables developers to avoid writing duplicate code by inheriting common functions and properties from different parent contracts instead of rewriting them each time they need them for a new project or contract type. By leveraging this feature, developers can save time while also ensuring that any changes made in one place will be reflected throughout the entire project due to its shared nature across various components, which reduces maintenance costs as well as potential errors caused by redundant coding efforts. In addition, multiple inheritance can help improve readability and maintainability, since related functionality can be grouped under a single class hierarchy instead of being spread out over many separate files with no clear link between them, making navigation through large projects a much simpler task overall. Ultimately, multiple inheritance provides an efficient way for Solidity users to organize complex applications within manageable hierarchies, which leads to improved scalability, allowing teams to work faster on larger-scale initiatives without compromising quality standards along the way. Multiple inheritance in Solidity allows a single contract to inherit from multiple contracts at the same time.
This is achieved with the 'is' keyword; functions intended to be overridden are declared `virtual`, and overriding functions use the `override` keyword. Solidity uses C3 linearization to resolve multiple inheritance. With multiple inheritance, one contract can inherit from several contracts: a parent contract may have more than one child, and a child contract may have more than one parent.

## Example

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Defining contract A
contract A {
    // Declaring internal state variable
    string internal x;

    // Defining external function to set
    // value of internal state variable x
    function setA() external {
        x = "Blockchain";
    }
}

// Defining contract B
contract B {
    // Declaring internal state variable
    uint internal mul;

    // Defining external function to set
    // value of internal state variable mul
    function setB() external {
        uint a = 2;
        uint b = 20;
        mul = a ** b;
    }
}

// Defining child contract C inheriting
// parent contracts A and B
contract C is A, B {
    // Defining external function to
    // return state variable x
    function getStr() external view returns (string memory) {
        return x;
    }

    // Defining external function to
    // return state variable mul
    function getMul() external view returns (uint) {
        return mul;
    }
}

// Defining calling contract
contract caller {
    // Creating object of contract C
    C contractC = new C();

    // Defining public function to return values
    // from functions getStr and getMul
    function testInheritance() public returns (string memory, uint) {
        contractC.setA();
        contractC.setB();
        return (contractC.getStr(), contractC.getMul());
    }
}
```

## Example with function overriding

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

/* Graph of inheritance
    A
   / \
  B   C
 / \ /
F  D,E
*/

contract A {
    function foo() public pure virtual returns (string memory) {
        return "A";
    }
}

// Contracts inherit other contracts by using the keyword 'is'.
contract B is A {
    // Override A.foo()
    function foo() public pure virtual override returns (string memory) {
        return "B";
    }
}

contract C is A {
    // Override A.foo()
    function foo() public pure virtual override returns (string memory) {
        return "C";
    }
}
```

For more content, follow me at - https://linktr.ee/shlokkumar2303
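As an aside on the C3 linearization mentioned above: Python's method resolution order uses the same C3 algorithm, so the resolution order can be explored with a short Python sketch. The class names here simply mirror the contracts; note that Solidity lists parents from most base-like to most derived, the reverse of Python's order.

```python
# Python's MRO uses the same C3 linearization algorithm that Solidity
# applies to multiple inheritance. Class names mirror the contracts above;
# Solidity's "contract D is B, C" corresponds to Python's "class D(C, B)"
# because Solidity lists parents in the reverse direction.

class A:
    def foo(self):
        return "A"

class B(A):
    def foo(self):
        return "B"

class C(A):
    def foo(self):
        return "C"

class D(C, B):   # Solidity equivalent: contract D is B, C
    pass

# The C3 linearization determines the lookup order:
print([cls.__name__ for cls in D.__mro__])  # ['D', 'C', 'B', 'A', 'object']

# In Solidity, D would have to override foo() explicitly with
# override(B, C); the linearization decides the order of super calls.
# In Python, the same linearization resolves the call directly:
print(D().foo())  # "C"
```

In both languages the linearization guarantees each base appears exactly once and children always precede their parents in the lookup order.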
shlok2740
1,386,015
How an open-source feature flagging service supports tens of thousands of DUs on a free cloud instance
I developed an open-source feature flagging service written in .NET 6 and Angular. I have created a...
0
2023-03-03T04:21:39
https://dev.to/cosmicflood/how-an-open-source-feature-flagging-service-supports-tens-of-thousands-of-dus-on-a-free-cloud-instance-2ebp
I developed [**an open-source feature flagging service written in .NET 6 and Angular**](https://github.com/featbit/featbit). I created a load test for the real-time feature flag evaluation service to better understand my current service's bottlenecks. The evaluation service receives and holds the WebSocket connections opened by apps, evaluates the feature flag variations for each user/device, and sends the results back to users via WebSocket. It is the most important service and the one that can most easily hit performance bottlenecks. Here are some load test details:

**Environment**

A commonly available AWS EC2 service was used to host the Evaluation Server service for the tests. The instance type selected was **AWS t2.micro with 1 vCPU and 1 GiB RAM**, which is free tier eligible. To minimize the network impact on the results, the load test service (K6) runs on another EC2 instance in the same VPC.

**General Test Conditions**

The tests were designed to simulate real-life usage scenarios. The following test conditions were considered:

* Number of new WebSocket connections established (including data-sync (1)) per second
* The average P99 response time (2)
* User actions: make a data synchronization request after the connection is established

(1) data-sync (data synchronization): the process by which the evaluation server evaluates all of the user's feature flags and returns variation results to the user via the WebSocket.
(2) response time: the time between sending the data synchronization request and receiving the response

**Tests Performed**

* Test duration: 180 seconds
* Load type: ramp-up from 0 to 1000, 1100, 1200 new connections per second
* Number of tests: 10 for each of the 1000, 1100 and 1200 per second use cases

**Test Results**

The results of the tests showed that the Evaluation Server met the desired quality of service only up to a certain load limit. The service was able to handle up to 1100 new connections per second before P99 exceeded 200ms.
The response time

|Number of new connections per second|Avg (ms)|P95 (ms)|P99 (ms)|
|:-|:-|:-|:-|
|1000|5.42|24.7|96.70|
|1100|9.98|55.51|170.30|
|1200|34.17|147.91|254.60|

Peak CPU Utilization %

|Number of new connections per second|Ramp-up stage|Stable stage|
|:-|:-|:-|
|1000|82|26|
|1100|88|29|
|1200|91|31|

Peak Memory Utilization %

|Number of new connections per second|Ramp-up stage|Stable stage|
|:-|:-|:-|
|1000|55|38|
|1100|58|42|
|1200|61|45|

**How we run the load test**

You can find how we run the load test (including source code and test dataset) in our GitHub repo: [**https://github.com/featbit/featbit/tree/main/benchmark**](https://github.com/featbit/featbit/tree/main/benchmark)

Could you give us a star if you like it?

**Conclusion**

The Evaluation Server was found to be capable of providing a reliable service for up to 1100 new connections per second using a minimum hardware setting: **AWS EC2 t2.micro (1 vCPU + 1 G RAM)**. The maximum number of connections held for a given time was **22000**, but this is not the limit. We believe the reported performance is sufficient for small businesses at negligible cost (free tier). Capacity can easily be multiplied by horizontally scaling the service as the business grows.

**NOTE**

All questions and feedback are welcome. You can join our [Slack community](https://join.slack.com/t/featbit/shared_invite/zt-1ew5e2vbb-x6Apan1xZOaYMnFzqZkGNQ) to discuss, or visit the [**FeatBit GitHub Repo**](https://github.com/featbit/featbit) to get more information.

Original blog article: [https://www.featbit.co](https://www.featbit.co/blogs/Free-and-open-source-feature-flag-service-benchmark-I)
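The Avg/P95/P99 columns in the tables above are standard latency aggregates. As a rough illustration of how such numbers are derived from raw samples (the data below is synthetic, not the actual K6 measurements, and K6's own percentile interpolation may differ slightly), here is a nearest-rank percentile sketch:

```python
# Sketch: computing Avg / P95 / P99 latency aggregates from raw samples.
# The sample data is synthetic, not the actual K6 measurements.

def percentile(samples, p):
    """Nearest-rank percentile: smallest value >= p% of the samples."""
    ordered = sorted(samples)
    # ceil(p * n / 100) via integer arithmetic, never below rank 1
    rank = max(1, -(-p * len(ordered) // 100))
    return ordered[rank - 1]

response_times_ms = [5, 7, 9, 12, 20, 25, 40, 80, 120, 200]

avg = sum(response_times_ms) / len(response_times_ms)
p95 = percentile(response_times_ms, 95)
p99 = percentile(response_times_ms, 99)

print(f"avg={avg:.2f}ms p95={p95}ms p99={p99}ms")
```

With only 10 samples, P95 and P99 both land on the worst sample; with the tens of thousands of requests in the real test they separate, which is why the P99 column is the one that first breaches the 200ms target.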
cosmicflood
1,391,383
GitHub Self-Hosted Runners on Azure Container Apps with Autoscaling
Here's the story: I wanted GitHub Self-Hosted Runners that can autoscale with the workload...
0
2023-03-07T04:41:00
https://dev.to/peepeepopapapeepeepo/github-selfhosted-runners-on-azure-container-apps-with-autocsaling-5h02
github, azure, cloud, devops
Here's the story: I wanted GitHub Self-Hosted Runners that autoscale with the workload, and that can scale in to 0 instances when not in use at all. I came across [Autoscaling self hosted GitHub runner containers with Azure Container Apps (ACA)](https://dev.to/pwd9000/run-docker-based-github-runner-containers-on-azure-container-apps-aca-1n13), which looked pretty close to what I needed, so I took it for a POC.

In the article above, a GitHub Self-Hosted Runners container image is run on Azure Container Apps, relying on the [KEDA autoscaler](https://keda.sh/) to scale replicas based on the number of messages in an Azure Storage Queue.

It works like this:

1. When a GitHub workflow is triggered, it pushes a message to the Azure Storage Queue
1. The KEDA scaler in Azure Container Apps detects that the queue has grown, so it scales out GitHub self-hosted runner instances to match the queue length
1. Once scaled out, the GitHub self-hosted runners pick up jobs from the GitHub workflow and run them to completion
1. The GitHub workflow deletes the message from the Azure Storage Queue
1. The KEDA scaler detects that the queue has shrunk, so it scales in the GitHub self-hosted runner instances; when the queue reaches 0, the number of runner instances also goes to 0

Because I couldn't resist tinkering, I changed a few things:

- Use an Azure Service Bus Queue instead of an Azure Storage Queue, because with a high volume of send/receive operations the Service Bus Queue is slightly cheaper (read more: [Storage queues and Service Bus queues - compared and contrasted](https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-azure-and-service-bus-queues-compared-contrasted))
- Adjust the workflow to use [Workload Identity Federation](https://dev.to/peepeepopapapeepeepo/let-github-action-access-azure-resources-without-password-f0i) so that it is passwordless
- Create the Azure Container Apps environment with Virtual Network Integration so it can interact with private resources

The diagram looks like this:

![Diagram](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n6w2u2a30zbuk58uosjl.png)

Also, at the time I am writing this blog, Azure Container Apps still has the following limitations:

- Supports only Linux-based x86-64 (linux/amd64) container images
- No KEDA scaler for GitHub runners yet
- Not yet available in the Southeast Asia region

Now, before we start, let's look at what we need:

- **Azure Subscription** (used to deploy resources such as the vnet, log analytics workspace, container apps, service bus, and so on)
- **GitHub Organization** (we will register the runners as organization-level runners)
- **GitHub Repository** (used to store and run the workflow)

---

## Let's get to work

### Prerequisites

- [Create Azure subscription](https://learn.microsoft.com/en-us/dynamics-nav/how-to--sign-up-for-a-microsoft-azure-subscription)
- [Sign up Github](https://docs.github.com/en/get-started/signing-up-for-github/signing-up-for-a-new-github-account)
- [Create repository](https://docs.github.com/en/repositories/creating-and-managing-repositories/creating-a-new-repository)
- [Create organization](https://docs.github.com/en/organizations/collaborating-with-groups-in-organizations/creating-a-new-organization-from-scratch)
- [Create Github Access Token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token) with the `admin:org` scope selected
- Create a GitHub Self-hosted Runners container image (see [docker-github-actions-runner](https://github.com/myoung34/docker-github-actions-runner) or [Create a Docker based Self Hosted GitHub runner Linux container](https://dev.to/pwd9000/create-a-docker-based-self-hosted-github-runner-linux-container-48dh)), then push it to your container registry

### Preflight

- Install the Azure CLI extension and register the resource providers on the subscription

```bash
az extension add --name containerapp --upgrade
az provider register --namespace Microsoft.App
az provider register --namespace Microsoft.OperationalInsights
az provider register --namespace Microsoft.ContainerService
az provider register --namespace Microsoft.ServiceBus
```

- Define variables

```bash
POC_PREFIX="pocghaca"
LOCATION="westus3"
LOC="usw3"
RESOURCE_GROUP="rg-${POC_PREFIX}-az-${LOC}-001"
VNET_NAME="vnet-${POC_PREFIX}-az-${LOC}-001"
INFRASTRUCTURE_SUBNET="snet-infra-az-${LOC}-001"
LOG_ANALYTIC_WORKSPACE="log-${POC_PREFIX}-az-${LOC}-001"
SERVICEBUS_NAME="sbn-${POC_PREFIX}-az-${LOC}-001"
SERVICEBUS_QUEUE="gh-runner-scaler"
CONTAINERAPPS_ENVIRONMENT="cte-${POC_PREFIX}-az-${LOC}-001"
CONTAINERAPP_NAME="cta-${POC_PREFIX}-az-${LOC}-001"
CONTAINER_REGISTRY_SERVER="..<YOUR-CONTAINER-REGISTRY>.."
CONTAINER_REGISTRY_USER="..<YOUR-REGISTRY-USERNAME>.."
CONTAINER_REGISTRY_PASS="..<YOUR-REGISTRY-PASSWORD>.."
CONTAINER_IMAGE="..<YOUR-CONTAINER-IMAGE>.."
PAT="..<YOUR-GITHUB-ACCESS-TOKEN>.."
GH_ORG="..<YOUR-ORGANIZATION-NAME>.."
```

### Prerequisite Azure Resources

- Create Resource group

```bash
az group create \
  --name $RESOURCE_GROUP \
  --location $LOCATION
```

- Create Log analytics workspace

```bash
az monitor log-analytics workspace create \
  --name $LOG_ANALYTIC_WORKSPACE \
  --resource-group $RESOURCE_GROUP \
  --quota 1 \
  --retention-time 30 \
  --sku PerGB2018
```

- Create Virtual network and subnet

```bash
az network vnet create \
  --resource-group $RESOURCE_GROUP \
  --name $VNET_NAME \
  --location $LOCATION \
  --address-prefix 10.0.0.0/16

az network vnet subnet create \
  --resource-group $RESOURCE_GROUP \
  --vnet-name $VNET_NAME \
  --name $INFRASTRUCTURE_SUBNET \
  --address-prefixes 10.0.0.0/21
```

### Azure Service Bus and Queue

- Create Azure service bus and queue

```bash
az servicebus namespace create \
  --name $SERVICEBUS_NAME \
  --resource-group $RESOURCE_GROUP \
  --location $LOCATION

az servicebus queue create \
  --resource-group $RESOURCE_GROUP \
  --namespace-name $SERVICEBUS_NAME \
  --name $SERVICEBUS_QUEUE
```

- Create connection string of Azure Service Bus Queue

```bash
az servicebus queue authorization-rule create \
  --resource-group $RESOURCE_GROUP \
  --namespace-name $SERVICEBUS_NAME \
  --queue-name $SERVICEBUS_QUEUE \
  --name $CONTAINERAPP_NAME \
  --rights Manage Send Listen
```

- Add yourself as `Azure Service Bus Data Owner` on the Azure Service Bus Namespace
![service-bus-queue-role-assignment](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/94xeqexiob7lie0mga07.png)

- Test sending a message and receiving/deleting a message from the queue

> ⚠️ If you are doing this in bash in Azure Cloud Shell, run `az login` again first

```bash
# Send message
az rest \
  --resource "https://servicebus.azure.net" \
  --method post \
  --headers BrokerProperties='{"TimeToLive":10}' \
  --url "https://${SERVICEBUS_NAME}.servicebus.windows.net/${SERVICEBUS_QUEUE}/messages" \
  --body "Hello from Az REST with TTL"
```

```bash
# Receive / Delete message
az rest \
  --resource "https://servicebus.azure.net" \
  --method delete \
  --url "https://${SERVICEBUS_NAME}.servicebus.windows.net/${SERVICEBUS_QUEUE}/messages/head"
```

### Create Azure Container App

- Gather the information needed to create the Azure Container App and Azure Container Apps Environment

```bash
INFRASTRUCTURE_SUBNET=$(az network vnet subnet show --resource-group $RESOURCE_GROUP --vnet-name $VNET_NAME --name $INFRASTRUCTURE_SUBNET --query "id" -o tsv | tr -d '[:space:]')
LOG_WORKSPACE_ID=$(az monitor log-analytics workspace show --name $LOG_ANALYTIC_WORKSPACE --resource-group $RESOURCE_GROUP --query "customerId" -o tsv | tr -d '[:space:]')
LOG_WORKSPACE_KEY=$(az monitor log-analytics workspace get-shared-keys --name $LOG_ANALYTIC_WORKSPACE --resource-group $RESOURCE_GROUP --query "primarySharedKey" -o tsv | tr -d '[:space:]')
CONNECTION_QUEUE=$(az servicebus queue authorization-rule keys list --resource-group $RESOURCE_GROUP --namespace-name $SERVICEBUS_NAME --queue-name $SERVICEBUS_QUEUE --name $CONTAINERAPP_NAME --query primaryConnectionString --output tsv | tr -d '[:space:]')
```

- Create Azure Container App Environment

```bash
az containerapp env create \
  --name $CONTAINERAPPS_ENVIRONMENT \
  --resource-group $RESOURCE_GROUP \
  --location $LOCATION \
  --infrastructure-subnet-resource-id $INFRASTRUCTURE_SUBNET \
  --docker-bridge-cidr "172.17.0.1/16" \
  --platform-reserved-cidr "100.64.0.0/16" \
  --platform-reserved-dns-ip "100.64.0.10" \
  --logs-destination "log-analytics" \
  --logs-workspace-id $LOG_WORKSPACE_ID \
  --logs-workspace-key $LOG_WORKSPACE_KEY \
  --internal-only
```

- Create Azure Container App with autoscaling

```bash
az containerapp create \
  --resource-group $RESOURCE_GROUP \
  --name $CONTAINERAPP_NAME \
  --image $CONTAINER_IMAGE \
  --environment $CONTAINERAPPS_ENVIRONMENT \
  --registry-server $CONTAINER_REGISTRY_SERVER \
  --registry-username $CONTAINER_REGISTRY_USER \
  --registry-password $CONTAINER_REGISTRY_PASS \
  --secrets gh-token="$PAT" servicebus-connection-string="$CONNECTION_QUEUE" \
  --env-vars \
    ACCESS_TOKEN=secretref:gh-token \
    RUNNER_SCOPE="org" \
    ORG_NAME=$GH_ORG \
    LABELS="poc-gh-aca" \
    RUNNER_WORKDIR="/tmp/github-runner" \
  --cpu "1.75" \
  --memory "3.5Gi" \
  --min-replicas 0 \
  --max-replicas 3 \
  --scale-rule-auth "connection=servicebus-connection-string" \
  --scale-rule-name queue-scaling \
  --scale-rule-type azure-servicebus \
  --scale-rule-metadata \
    "queueName=$SERVICEBUS_QUEUE" \
    "namespace=$SERVICEBUS_NAME" \
    "messageCount=1"
```

- Check the console log

```bash
az containerapp logs show \
  --resource-group $RESOURCE_GROUP \
  --name $CONTAINERAPP_NAME \
  --type console \
  --follow
```

- Check the system log

```bash
az containerapp logs show \
  --resource-group $RESOURCE_GROUP \
  --name $CONTAINERAPP_NAME \
  --type system \
  --follow
```

- Check that the runner shows up in our organization

![github-self-hosted-runner](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9lmz57zm30vr42h39pyl.png)

- Try sending and deleting a message, then check the queue in Azure Service Bus and the number of runners created in the organization

```bash
# Send message
az rest \
  --resource "https://servicebus.azure.net" \
  --method post \
  --headers BrokerProperties='{"TimeToLive":3600}' \
  --url "https://${SERVICEBUS_NAME}.servicebus.windows.net/${SERVICEBUS_QUEUE}/messages" \
  --body "Hello from Az REST with TTL"
```

```bash
# Receive / Delete message
az rest \
  --resource "https://servicebus.azure.net" \
  --method delete \
  --url "https://${SERVICEBUS_NAME}.servicebus.windows.net/${SERVICEBUS_QUEUE}/messages/head"
```

```bash
# See console log
az containerapp logs show \
  --resource-group $RESOURCE_GROUP \
  --name $CONTAINERAPP_NAME \
  --type console \
  --follow
```

```bash
# See system log
az containerapp logs show \
  --resource-group $RESOURCE_GROUP \
  --name $CONTAINERAPP_NAME \
  --type system \
  --follow
```

### Create GitHub workflow to verify our system

- Create a Service Principal, then federate its credential to the GitHub Organization and Repository we prepared, selecting the main branch

![Credential Federation](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ejw53kzfisu7i43cfl8r.png)

> For details, see [Let Github Action Access Azure Resources without password](https://dev.to/peepeepopapapeepeepo/let-github-action-access-azure-resources-without-password-f0i)

- Create a role assignment granting our Service Principal the `Azure Service Bus Data Receiver` and `Azure Service Bus Data Sender` roles on the queue we created

![queue-role-assignment](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4ovxk3ri7ucyws5h7kno.png)

- Create a role assignment granting our Service Principal the `Contributor` role on our subscription

![subscription-role-assignment](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ix8bd5tn0gb06ydzxk82.png)

- For the free tier: if your GitHub repository is public, you must first allow it in the Default runner group

![Runner-Groups-Default](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/06z4p33zw46jmpp2upxb.png)

- Next, go to your own repository and define these secrets and variables
  - Secrets
    - `AZURE_CLIENT_ID`
    - `AZURE_SUBSCRIPTION_ID`
    - `AZURE_TENANT_ID`
  - Variables
    - `AZ_SERVICE_BUS_NAMESPACE`
    - `AZ_SERVICE_BUS_QUEUE`

![action-menu](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4s1f9hbjof0riz2whd52.png)

![action-secret](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y7y6isydhatlaw3770i6.png)
![action-variable](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/od4xo8h2xaku1lvhugn5.png)

- Then create the workflow file `.github/workflows/demo.yaml` in the GitHub Repository

```yaml
name: Scale up selfhosted runner and run

on:
  workflow_dispatch:

permissions:
  id-token: write
  contents: read

env:
  AZ_SERVICE_BUS_NAMESPACE: ${{ vars.AZ_SERVICE_BUS_NAMESPACE }}
  AZ_SERVICE_BUS_QUEUE: ${{ vars.AZ_SERVICE_BUS_QUEUE }}

jobs:
  scale-out-runner:
    runs-on: ubuntu-latest
    steps:
      - name: 'Az CLI login'
        uses: azure/login@v1
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      - name: scale out self hosted
        run: |
          az rest \
            --resource "https://servicebus.azure.net" \
            --method post \
            --headers BrokerProperties='{"TimeToLive":3600}' \
            --url "https://${{ env.AZ_SERVICE_BUS_NAMESPACE }}.servicebus.windows.net/${{ env.AZ_SERVICE_BUS_QUEUE }}/messages" \
            --body "${{ github.run_id }}"
  deploy:
    runs-on: [self-hosted, poc-gh-aca]
    needs: [scale-out-runner]
    steps:
      - name: 'Az CLI login'
        uses: azure/login@v1
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      - name: 'Run az commands'
        run: |
          az account show --output table
  scale-in-runner:
    runs-on: ubuntu-latest
    needs: [scale-out-runner, deploy]
    if: ${{ always() && needs.scale-out-runner.result == 'success' }}
    steps:
      - name: 'Az CLI login'
        uses: azure/login@v1
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      - name: scale in self hosted
        run: |
          az rest \
            --resource "https://servicebus.azure.net" \
            --method delete \
            --url "https://${{ env.AZ_SERVICE_BUS_NAMESPACE }}.servicebus.windows.net/${{ env.AZ_SERVICE_BUS_QUEUE }}/messages/head"
```

![github-workflow](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/99uv7qqnmwpfvf7as9so.png)

The workflow has 3 jobs:

1. `scale-out-runner`: sends a message to the Service Bus so that Container Apps scales out GitHub self-hosted runners to pick up the work
1. `deploy`: the main work the pipeline has to do; this job only runs once `scale-out-runner` succeeds
1. `scale-in-runner`: after the work finishes, whether successful or not, this job deletes the message from the Service Bus so that Container Apps scales the GitHub self-hosted runners back in

- Try running the workflow several times simultaneously

![run-workflow](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5ryjfwty1jyooz0r2d4h.png)

- Check whether autoscaling behaves as expected
  - Azure Service Bus Queue
  ![Azure-Service-Bus-Queue](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/551n6f397wa7fegi1bvk.png)
  - Container App Replicas
  ![Container-App-Replicas](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nio0su9ayt7yjfg9xh6h.png)
  - Github Organization-Level Self-hosted Runners
  ![Github-Organization-Level-Self-hosted-Runners](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rgr3j7sin9hrtezcfxin.png)
- After every workflow run has finished, check again; everything should be 0
  - Azure Service Bus Queue
  ![Azure-Service-Bus-Queue](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gmu66kv9a66inhhg0llu.png)
  - Container App Replicas
  ![Container-App-Replicas](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mjpd6mf2yoey8zalax8b.png)
  - Github Organization-Level Self-hosted Runners
  ![Github-Organization-Level-Self-hosted-Runners](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6gq1yk4jihv5p86oyx5d.png)

---

## References

- [KEDA | Azure Service Bus](https://keda.sh/docs/2.9/scalers/azure-service-bus/)
- [Scaling in Azure Container Apps](https://learn.microsoft.com/en-us/azure/container-apps/scale-app?pivots=azure-cli#example-2)
- [Autoscaling self hosted GitHub runner containers with Azure Container Apps (ACA)](https://dev.to/pwd9000/run-docker-based-github-runner-containers-on-azure-container-apps-aca-1n13)
- [Provide a virtual network to an internal Azure Container Apps environment](https://learn.microsoft.com/en-us/azure/container-apps/vnet-custom-internal?tabs=bash&pivots=azure-cli)
- [Send Message to Queue](https://learn.microsoft.com/en-us/rest/api/servicebus/send-message-to-queue)
- [Receive and Delete Message (Destructive Read)](https://learn.microsoft.com/en-us/rest/api/servicebus/receive-and-delete-message-destructive-read)
- [Calling Azure REST API via curl](https://mauridb.medium.com/calling-azure-rest-api-via-curl-eb10a06127)
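As a footnote: the scale rule configured earlier (`messageCount=1`, min 0 / max 3 replicas) boils down to simple arithmetic. Here is a rough sketch of the decision the KEDA scaler makes; this is simplified, as the real scaler also involves activation thresholds, polling intervals, and cooldown periods, all omitted here:

```python
import math

def desired_replicas(queue_length: int, message_count: int = 1,
                     min_replicas: int = 0, max_replicas: int = 3) -> int:
    """Simplified KEDA-style calculation: one replica per `message_count`
    queued messages, clamped to the configured replica bounds."""
    if queue_length <= 0:
        return min_replicas          # scale to zero when the queue is empty
    wanted = math.ceil(queue_length / message_count)
    return max(min_replicas, min(max_replicas, wanted))

for q in (0, 1, 2, 5):
    print(q, "queued ->", desired_replicas(q), "replicas")
```

With `messageCount=1`, each queued workflow run gets its own runner up to the `--max-replicas 3` cap, and an empty queue drops the app back to 0 replicas, which is exactly the behavior observed in the screenshots above.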
peepeepopapapeepeepo
1,405,085
An Introduction to Pieces for Developers - JetBrains Plugin
In 2022, our team set out on a journey to build the most advanced code snippet management and...
22,291
2023-03-19T19:08:00
https://dev.to/getpieces/an-introduction-to-pieces-for-developers-jetbrains-plugin-4p3p
devops, tooling, webdev, programming
In 2022, our team set out on a journey to build the most advanced code snippet management and workflow context platform yet. The debut release of our Flagship Desktop App got the ball rolling, and now our JetBrains Plugin aims to take developers' productivity to the next level by incorporating key capabilities and our users' favorite features directly into their IDE. Effortlessly save, enrich, search, share, reference, and reuse: code snippets, workflow context, and other useful developer resources. ## Trusted by the World's Best Developers ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fglsxm8r72wgq4bkr90e.png) From students and indie developers, to startups and open source teams, to enterprise organizations and beyond, Pieces for Developers is purpose-built as a cohesive layer and a "tool between tools" that boosts productivity in three major workflow processes: researching and problem-solving in the browser, working with colleagues in collaborative environments, and lastly, writing, reviewing, referencing, and reusing code in the IDE. We are a venture-backed company supported by some of the world's best investors. Our products & company are secure and continuing to grow. Our JetBrains Plugin, in combination with our Flagship Desktop App, provides users with a first-in-kind feature set that ambitiously augments your development workflows. ## Take Back Your Time | Solving Productivity Bottlenecks Countless Google Searches, Open Tab Back-and-Forth & History Scrolling More and more developer resources live online nowadays - tutorial sites, Stack Overflow posts, documentation & open source packages, wikis & blog posts - the list goes on and on. This, combined with the constant switching back to the IDE and other tools, often causes developers to lose track of where they were and start over again at the search bar. 
### Solution Highlights: The Pieces for Developers | Desktop App ships with a first-in-kind Workflow Activity Stream, letting you pick up where you left off faster than ever. ## Resurfacing Additional Context in Collaborative Environments Task and project context regarding the who, what, where, and why is often discussed over a variety of channels, such as Email, Slack, Teams, Discord, Community Forums, Twitter, Reddit, and more. Keeping track of this critical context - decision outcomes, architectures, implementation solutions, best practices, power tips, configs - is becoming harder than ever. ### Solution Highlights: When you save code snippets with the Pieces for Developers | JetBrains Plugin, it internally leverages our unique Context Awareness Engine to automatically extract and associate over 15 metadata heuristics. The Pieces for Developers | Desktop App enables you to Toggle Real-time and Scope-Relevant Suggestions around materials to save, reference, reuse, and share. ## Onboarding New Developers, Reducing Technical Debt & Code Reviews Developers continuously have to navigate ever-growing code bases, backlog tasks, Jira Tickets, GitHub Issues, and Pull Requests, all in an effort to reuse existing code, onboard new developers faster, share relevant examples, reduce technical debt, and drive code standardization. ### Solution Highlights: Powered by our On-Device Suggested Save & Pattern Engine, the Pieces for Developers | JetBrains Plugin can automatically recognize and save highly reusable, yet complementary materials for you. Leverage the Pieces for Developers | Desktop App to discover and extract reusable boilerplate through our In-Project Snippet Discovery features, which run entirely offline and on-device. 
## Table of Contents & What's in this JetBrains series Article: Saving Useful Developer Materials - Stay in Flow with Single Click Save - Suggested Save and On-Device Pattern Engine - In-Project Snippet Discovery Article: AI-Powered Material Enrichment & Metadata Association - Context Awareness Engine and Origin Details - AI-Generated Smart Descriptions and Associated Commit Messages - Related Links and External Resources - AI-Generated Smart Labels and User-Added Tags - Related People and Associated Collaborators - Smart Warnings and Sensitive Information Detection Article: Search, Sort, Reference, Reuse & Share Saved Materials - In-Editor Global Search with the Command Palette - Reuse Saved Materials with Atomic Auto-Complete - Toggling Realtime and Scope-Relevant Suggestions - Accessing Saved Material Overviews from the In-IDE List View - Workflow Activity Stream and process backtracking - Personalized Link Sharing of Saved Materials and Their Context Metadata - Select and Share with Context - Access and Save Offline from Shared Links Article: Getting Started with Pieces for JetBrains and plugin FAQs - What is being installed? - Pieces for Developers | OS Server (Local Runtime Background Service) - Pieces for Developers | Flagship Desktop App - Pieces for Developers | JetBrains Plugin - Platform Requirements - Troubleshooting - Frequently Asked Questions - Solving Common Connectivity Issues - Connect with our team and Get 1:1 Support Article: Pieces for JetBrains Additional Features and Plugin Settings - Managing and Updating your saved resources - Inserting a saved Code Snippet at your current cursor location - Editing saved Code Snippets and Text Notes - Renaming a Saved Material - Reclassifying a Code Snippets' Language Association - Save a snippet with "Save to Pieces as…" in the JetBrains Plugin - Deleting a saved material - Connecting Custom Cloud Domain - Keyboard Shortcuts Follow along with our series Pieces for JetBrains extension. 
This series will highlight and document the features that streamline developers' workflows, as well as how our features are leading the charge among AI dev tools that elevate workflows and 10x your productivity. Pieces for Developers is just getting started with cutting-edge, powerful and secure products across developer workflows. [Check out more of our products!](https://code.pieces.app/) [Download our JetBrains plugin here!](https://plugins.jetbrains.com/plugin/17328-pieces--save-search-share--reuse-code-snippets)
get_pieces
1,406,490
Make a video about the best contributor of the month with React and NodeJS 🚀
TL;DR See the cover of the article? We are going to create this. We will take an...
0
2023-03-20T13:04:13
https://dev.to/novu/make-a-video-about-the-best-contributor-of-the-month-with-react-and-nodejs-2feg
webdev, javascript, programming, tutorial
# TL;DR

See the cover of the article? We are going to create this. We will take a GitHub organization, go over all of its repositories, and count the merged pull requests made by every contributor during the month. We will declare the winner, the contributor with the most merged pull requests, by creating a video 🚀 In my case - I didn't win, and still made the video 😂

![H5](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zegqrivgnfzvimteprzi.gif)

## About the technologies

We are going to use [Remotion](https://www.remotion.dev) and the GitHub GraphQL API. Remotion is an excellent library - most video libraries are very complicated: you need to deal with layers and animate everything by code. Remotion is different. It lets you write plain React - JSX / CSS - and then uses web scrapers to record the screen. It sounds hacky - but it's incredible 🤩

We are going to use GitHub GraphQL and not the GitHub REST API - it's faster and has better rate limits than the REST API.

## Novu - the first open-source notification infrastructure

Just a quick background about us. Novu is the first open-source [notification infrastructure](https://novu.co). We basically help to manage all the product notifications. It can be **In-App** (the bell icon like you have in the Dev Community - **Websockets**), Emails, SMSs and so on. I would be super grateful if you can help us out by starring the library 🤩

[https://github.com/novuhq/novu](https://github.com/novuhq/novu)

![Novu](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yn90yvsd87tgik03c08v.gif)

## Let's start

We will start by initiating a new Remotion project by running the command

```bash
npx create-video --blank
What would you like to name your video?
› contributor-of-the-month ``` Once finished, go into the directory ```bash cd contributor-of-the-month ``` We can now preview Remotion by running ```bash npm run start ``` ![Remotion](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/26qvjhpitpycf4f5ziqr.png) Now let’s go ahead and design our video. Create a new folder called public and add the following picture: ![background](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cwp30bsvnhzlme28bs8h.png) We will put the face of the contributor in the middle, and the name on top. Now let’s open our Root.tsx file and change to code into this: ```jsx import {Composition} from 'remotion'; import {MyComposition} from './Composition'; export const RemotionRoot: React.FC = () => { return ( <> <Composition id="Contributor" component={MyComposition} durationInFrames={300} fps={30} width={2548} height={1068} defaultProps={{ avatarUrl: 'https://avatars.githubusercontent.com/u/100117126?v=4', name: 'Nevo David', }} /> </> ); }; ``` let’s talk about what’s going on here. We said that our Frame-Per-Second(fps) is 30 and the durationInFrames is 300. It means that durationInFrames / fps = seconds of the video = 10 seconds. We set the width and the height of the video based on our picture from the previous step. And we put some defaultProps. We are going to get the best contributor of the month based on GitHub. When we run Remotion Previewer, we don't have those parameters, so our best option is to give some default parameters and later replace it by the one we send from our NodeJS renderer. Party = Confetti, how can we show enough respect, without confetti? Let’s install it! 
```bash npm install remotion-confetti ``` open our Composition.tsx and replace it with the following code: ```jsx import {AbsoluteFill, Img, staticFile} from 'remotion'; import {Confetti, ConfettiConfig} from 'remotion-confetti'; import {FC} from 'react'; const confettiConfig1: ConfettiConfig = { particleCount: 200, startVelocity: 100, colors: ['#0033ff', '#ffffff', '#00ff33'], spread: 1200, x: 1200, y: 600, scalar: 3, }; export const MyComposition: FC<{avatarUrl: string; name: string}> = (props) => { const {avatarUrl, name} = props; return ( <AbsoluteFill> <Confetti {...confettiConfig1} /> <AbsoluteFill> <h1 style={{ textAlign: 'center', fontSize: 200, color: 'white', float: 'left', textShadow: '10px 10px 50px #000', marginTop: -10, marginLeft: -100 }} > {name} </h1> </AbsoluteFill> <AbsoluteFill style={{ background: 'white', width: 630, height: 600, borderRadius: '100%', position: 'absolute', left: 910, top: 225, overflow: 'hidden', }} > <Img style={{minWidth: '100%', minHeight: '100%'}} src={avatarUrl} /> </AbsoluteFill> <Img src={staticFile('background-contributor.png')} width={2548} height={1068} /> </AbsoluteFill> ); }; ``` So we are starting with some imports: - **AbsoluteFill** - it’s basically a div, but Remotion knows how to work with it. - **Img** - it’s the same as img, but Remotion knows how to deal with it. - **staticFile -** the function to load our background image - **Confetti** - the component we have just installed to show confetti on the screen. - **ConfettiConfig** - Configuration to pass to our **Confetti** component**.** Next, let’s talk about the component: ```jsx FC<{avatarUrl: string; name: string}> ``` Those are basically the defaultProps we have passed from the previous step, later, we will pass the defaultProps dynamically from JS. ```jsx <AbsoluteFill> ``` We have placed multiple AbsoluteFill with inline styling around the document to place the elements on the document such as the contributor name and contributor picture. 
```jsx const confettiConfig1: ConfettiConfig = { particleCount: 200, startVelocity: 100, colors: ['#0033ff', '#ffffff', '#00ff33'], spread: 1200, x: 1200, y: 600, scalar: 3, }; ``` **ConfettiConfig** - I have put a high velocity to see particles all around the screen, and some colors that are different than the dots we have on the screen. I have placed it in the middle behind the avatar picture using the x and y, and I have put 3 in the **scalar** to make the particles a little bit bigger. From here, you can already see inside the preview that we have a final animation of our contributor 🥳 I don’t know about you, but this was super easy. Now comes the harder part. ## Getting the information from GitHub We are going to extract our contributor of the month. And here is how it’s going to go: 1. We will create a new GitHub developers API key that we can use to get information from GitHub. 2. We will get all the repositories of a GitHub organization. 3. We will fetch all the **merged pull requests** of the GitHub repository until the last one of the month. 4. We will count the contributors with the most amount of merged pull requests. 5. We will send those parameters into our Remotion project and render it. I hope you are excited! Let’s start! 🚀 We need to install two more libraries: 1. moment - so we can check that the time of the pull request is within the month. 2. @octokit/rest - to send GraphQL requests to GitHub. So let’s install them ```bash npm install moment @types/moment @octokit/rest --save ``` Now, let’s go to GitHub and create our token. Head over to your tokens in settings [https://github.com/settings/tokens](https://github.com/settings/tokens) create a new classic token with the following permissions: ![Token](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6syfousm74m9pe4xzrae.png) Click on “Generate Token” and copy the requested key. Let’s create the GitHub service and the root of our scheduler. 
```bash touch scheduler.ts mkdir services cd services touch github.ts ``` Let’s open our github.ts file and add some code. First, we will import Octokit and set our token from the previous step: ```jsx const rest = new Octokit({ auth: "token", }); ``` Now let’s create a new class and create our first static function to get all the repositories from the organization: ```jsx export class GitHubAPI { public static getOrganization() { return rest.graphql(`{ organization(login: "novuhq") { id name login url avatarUrl repositories(first: 100, privacy: PUBLIC, isFork: false) { totalCount pageInfo { startCursor endCursor hasNextPage hasPreviousPage } nodes { id name description url stargazerCount } } } }`); } } ``` Pretty straightforward function. It will get all the repositories from the Novu organization - you can change it to any org you want. In the final code (repository), I have put an example using dotenv. Now let’s create our next function for getting all the contributors from the repository. Here, it can be a little tricky. We can only take a maximum of 100 results every time. We are going to do the following: 1. Get the first 100 results. 2. Remove all the bots from the results. 3. If the last element of the array is in a date within this month, we will get the next 100 results in a recursive way. 4. Repeat the process until the last results are not within our month. ```jsx static async topContributorOfRepository( repo: string, after?: string ): Promise<Array<{avatarUrl: string; login: string}>> { const allPulls = await rest.graphql(` query { repository(name: "${repo}", owner: "novuhq") { pullRequests(states: [MERGED], ${ after ? 
`after: "${after}",` : '' } first: 100, orderBy: {field: CREATED_AT, direction: DESC}){ pageInfo { endCursor hasNextPage } nodes { createdAt author { login url avatarUrl } } } } } `); const filterArray = allPulls.repository.pullRequests.nodes.filter( (n) => moment(n.createdAt).add(1, 'month').isAfter(moment()) && n?.author?.url?.indexOf('/apps/') === -1 && n?.author?.url ); return [ ...filterArray.map((p) => ({ login: p.author.login, avatarUrl: p.author.avatarUrl, })), ...(allPulls.repository.pullRequests.nodes.length && moment(allPulls.repository.pullRequests.nodes.slice(-1)[0].createdAt) .add(1, 'month') .isAfter(moment()) && allPulls.repository.pullRequests.pageInfo.hasNextPage ? await GitHubAPI.topContributorOfRepository( repo, allPulls.repository.pullRequests.pageInfo.endCursor ) : []), ]; } ``` So our function has two parameters, “repo” and “after”. Since we will iterate over all the repositories from the first step, we need a “repo” parameter. “after” is for the recursive part, every-time we pass it, we will get the next 100. In `await rest.graphql` we can see the GraphQL code. We are passing the repo name, org name, and states - we want only merged requests (pull requests), and we tell them to order it by the time they were created in descending order. After that, we have the `filterArray` variables that filters all the results with the following parameters: 1. Not a bot - we can know if the user is a bot if inside their URL, they have /apps/, users URLs are usually “github.com/nevo-david”. 2. Date matching - we check that it’s within the month by adding the createdAt, one month and checking if it’s greater than today. Let’s say we are in March. Here are a few examples 1. If the pull request was on 10 March, we add it one month. It will be on 10 April, it’s bigger than our current date, and it’s valid. 2. If the pull request was on 10 January, we add it one month. It will be on 10 February, smaller than our current date, and invalid. 
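To make the date check concrete, here is the same "within the last month" test sketched with plain `Date` arithmetic instead of moment (a hypothetical helper for illustration, not part of the tutorial code):

```typescript
// Sketch of the filter's date condition: a pull request counts if
// its creation date plus one month is still after "now".
function isWithinLastMonth(createdAt: string, now: Date = new Date()): boolean {
  const plusOneMonth = new Date(createdAt);
  plusOneMonth.setMonth(plusOneMonth.getMonth() + 1);
  return plusOneMonth.getTime() > now.getTime();
}

// The two examples from the text, pretending "today" is 15 March:
const today = new Date('2023-03-15');
console.log(isWithinLastMonth('2023-03-10', today)); // 10 April > 15 March → true
console.log(isWithinLastMonth('2023-01-10', today)); // 10 February > 15 March → false
```

moment's `add(1, 'month')` handles month-length edge cases slightly differently, which is why the tutorial uses it, but the logic is the same.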
After we have filtered the results, we return a recursive array: the current filtered array + a conditional recursion - if the last item is valid and GitHub says there are more rows to fetch, trigger the function again with the next 100 rows.

Now we just need one more function to merge it all:

```jsx
static async startProcess() {
    const orgs = (
      await GitHubAPI.getOrganization()
    ).organization.repositories.nodes.map((p) => p.name);

    const loadContributors: Array<{login: string; avatarUrl: string}> = [];
    for (const org of orgs) {
      loadContributors.push(
        ...(await GitHubAPI.topContributorOfRepository(org))
      );
    }

    const score = Object.values(
      loadContributors.reduce((all, current) => {
        all[current.login] = all[current.login] || {
          name: current.login,
          avatarUrl: current.avatarUrl,
          total: 0,
        };
        all[current.login].total += 1;
        return all;
      }, {} as {[key: string]: {avatarUrl: string; name: string; total: number}})
    ).reduce(
      (all, current) => {
        if (current.total > all.total) {
          return current;
        }
        return all;
      },
      {name: '', avatarUrl: '', total: -1} as {name: string; total: number}
    );

    return score;
}
```

1. We grab all the org repositories and extract only the name from them - we don’t need the other parameters.
2. We iterate over the names and get all the contributors; we put everything into one array and flatten it. It will look something like `["nevo-david", "tomer", "tomer", "nevo-david", "dima"]`.
3. In the end, we have another function that merges the results and returns the contributor with the most merged pull requests.

The full code should look like this:

```jsx
import {Octokit} from '@octokit/rest';
import moment from 'moment';

const rest = new Octokit({
  auth: process.env.GITHUB_TOKEN,
});

export class GitHubAPI {
  public static getOrganization() {
    return rest.graphql(`{
      organization(login: "novuhq") {
        id
        name
        login
        url
        avatarUrl
        repositories(first: 100, privacy: PUBLIC, isFork: false) {
          totalCount
          pageInfo {
            startCursor
            endCursor
            hasNextPage
            hasPreviousPage
          }
          nodes {
            id
            name
            description
            url
            stargazerCount
          }
        }
      }
    }`);
  }

  static async topContributorOfRepository(
    repo: string,
    after?: string
  ): Promise<Array<{avatarUrl: string; login: string}>> {
    const allPulls = await rest.graphql(`
      query {
        repository(name: "${repo}", owner: "novuhq") {
          pullRequests(states: [MERGED], ${
            after ? `after: "${after}",` : ''
          } first: 100, orderBy: {field: CREATED_AT, direction: DESC}){
            pageInfo {
              startCursor
              endCursor
              hasNextPage
              hasPreviousPage
            }
            nodes {
              createdAt
              author {
                login
                url
                avatarUrl
              }
            }
          }
        }
      }
    `);

    const filterArray = allPulls.repository.pullRequests.nodes.filter(
      (n) =>
        moment(n.createdAt).add(1, 'month').isAfter(moment()) &&
        n?.author?.url?.indexOf('/apps/') === -1 &&
        n?.author?.url
    );

    return [
      ...filterArray.map((p) => ({
        login: p.author.login,
        avatarUrl: p.author.avatarUrl,
      })),
      ...(allPulls.repository.pullRequests.nodes.length &&
      moment(allPulls.repository.pullRequests.nodes.slice(-1)[0].createdAt)
        .add(1, 'month')
        .isAfter(moment()) &&
      allPulls.repository.pullRequests.pageInfo.hasNextPage
        ?
await GitHubAPI.topContributorOfRepository( repo, allPulls.repository.pullRequests.pageInfo.endCursor ) : []), ]; } static async startProcess() { const orgs = ( await GitHubAPI.getOrganization() ).organization.repositories.nodes.map((p) => p.name); const loadContributors: Array<{login: string; avatarUrl: string}> = []; for (const org of orgs) { loadContributors.push( ...(await GitHubAPI.topContributorOfRepository(org)) ); } const score = Object.values( loadContributors.reduce((all, current) => { all[current.login] = all[current.login] || { name: current.login, avatarUrl: current.avatarUrl, total: 0, }; all[current.login].total += 1; return all; }, {} as {[key: string]: {avatarUrl: string; name: string; total: number}}) ).reduce( (all, current) => { if (current.total > all.total) { return current; } return all; }, {name: '', avatarUrl: '', total: -1} as {name: string; total: number} ); return score; } } ``` We have our video ready. We have our GitHub code ready. All that is left is to merge them. 😱 To use the Remotion server renderer, we need to install it, so let’s do it. ```jsx npm install @remotion/renderer @remotion/bundler --save ``` We want to run our renderer every month. So let’s install the node scheduler. ```jsx npm install node-schedule --save ``` Let’s write our processing function. It’s pretty straightforward :) ```jsx const startProcess = async () => { const topContributor = await GitHubAPI.startProcess(); const bundleLocation = await bundle( path.resolve('./src/index.ts'), () => undefined, { webpackOverride: (config) => config, } ); const comps = await getCompositions(bundleLocation, { inputProps: topContributor, }); const composition = comps.find((c) => c.id === 'Contributor')!; await renderMedia({ composition, serveUrl: bundleLocation, codec: 'gif', outputLocation: 'out/contributor.gif', inputProps: topContributor, }); }; ``` **First** - we use the function to get the top contributor from our GitHub service. 
**Second** - we load the Remotion bundler. We point it at our index.ts file, which is the root of our Remotion video creator.

**Third** - we load all the compositions (in our case, we have one) and pass the inputProps with what we got from GitHub (they will replace the **defaultProps** that we put in the first step).

**Fourth** - we look for our composition. It’s a little silly because we only have one and could just do `comps[0]`, but it’s more about making a point :)

**Fifth** - we render the media and create a gif with the contributor of the month. We pass the inputProps again - honestly, this is in the Remotion documentation, and I am not really sure why we need to pass it twice.

All we need to do is create a schedule that runs every 1st of the month and triggers the function.

```jsx
schedule.scheduleJob('0 0 1 * *', () => {
  startProcess();
});
```

The **“0 0 1 * *”** is cron syntax; you can easily play with it [here](https://crontab.guru/every-month).

Here is the full code of the page:

```jsx
import schedule from 'node-schedule';
import {bundle} from '@remotion/bundler';
import {getCompositions, renderMedia} from '@remotion/renderer';
import path from 'path';
import {GitHubAPI} from './services/github';

schedule.scheduleJob('0 0 1 * *', () => {
  startProcess();
});

const startProcess = async () => {
  const topContributor = await GitHubAPI.startProcess();

  const bundleLocation = await bundle(
    path.resolve('./src/index.ts'),
    () => undefined,
    {
      webpackOverride: (config) => config,
    }
  );

  const comps = await getCompositions(bundleLocation, {
    inputProps: topContributor,
  });

  const composition = comps.find((c) => c.id === 'Contributor')!;

  await renderMedia({
    composition,
    serveUrl: bundleLocation,
    codec: 'gif',
    outputLocation: 'out/contributor.gif',
    inputProps: topContributor,
  });
};
```

You can start the project by running:

```bash
npx ts-node src/scheduler.ts
```

And you are done 🎉
You can share it over Discord, Twitter, or any channel somebody can see it!
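By the way, the scoring step inside startProcess is just a frequency count followed by a max. Stripped of the GitHub plumbing, the same idea looks like this (a standalone sketch with made-up logins):

```typescript
// Tally contributor logins and pick the one with the most merged PRs.
const logins = ['nevo-david', 'tomer', 'tomer', 'nevo-david', 'dima', 'tomer'];

const totals = logins.reduce<Record<string, number>>((all, login) => {
  all[login] = (all[login] ?? 0) + 1;
  return all;
}, {});

const top = Object.entries(totals).reduce(
  (best, [name, total]) => (total > best.total ? {name, total} : best),
  {name: '', total: -1}
);

console.log(top); // { name: 'tomer', total: 3 }
```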
I encourage you to run the code and post your rendered video in the comments 🤩 You can find the source code here: [https://github.com/novuhq/blog/tree/main/contributor-of-the-month](https://github.com/novuhq/blog/tree/main/contributor-of-the-month) ![Blank](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/noz3b8n55mwv4lwv10qt.png) ## Can you help me? Creating tutorials takes a lot of time and effort, but it's all worth it when I see the positive impact it has on the developer community. If you find my tutorials helpful, please consider giving Novu's repository a star. Your support will motivate me to create more valuable content and tutorials. Thank you for your support! ⭐️⭐️⭐️⭐️ [https://github.com/novuhq/novu](https://github.com/novuhq/novu) ![Cat](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7hvcbk4tg26otup754fz.gif)
nevodavid
1,407,064
Scala 102: Collections and Monads
Hey there, welcome back to our Scala series! Last time, we introduce the basic types. But today,...
22,188
2023-03-19T23:20:25
https://dev.to/krlz/scala-102-collections-47n0
beginners, programming, scala, functional
Hey there, welcome back to our Scala series! Last time, we introduced the basic types. But today, we're talking about collections and implementing monads on lists.

Let's start talking about Lists specifically! They're immutable, which means once you create them, they can't be changed - but that's not a bad thing! You can still create new lists from existing ones, like a boss. Lists can hold any type of data you want - even lists within lists! It's super useful in functional programming and will make your coding life a whole lot easier.

```scala
val myList1 = List(1, 2, 3, 4, 5)
val myList2 = List("apple", "banana", "cherry")
val myList3 = List(1.0, 2.0, 3.0, 4.0, 5.0)
```

As you can see, you can create lists of any type in Scala. But what about lists inside lists? You betcha! Here's an example of a list containing other lists:

```scala
val myNestedList = List(List(1, 2, 3), List(4, 5, 6), List(7, 8, 9))
```

In this example, we've created a list that contains three other lists, each with three elements.

Now, let's take a look at a method that generates lists in lists of lists and then applies a single method that flatMaps all of them. Essentially, we will generate a nested list structure and then flatten it into a single list.

```scala
def generateNestedLists(numLists: Int, numElements: Int): List[List[List[Int]]] = {
  List.fill(numLists)(List.fill(numElements)(List.fill(numElements)(scala.util.Random.nextInt(10))))
}

val myNestedList = generateNestedLists(3, 3)
val myFlatList = myNestedList.flatMap(x => x.flatMap(y => y))
```

In this example, we've created a method called "generateNestedLists" that takes two parameters: the number of nested lists and the number of elements in each nested list. The method generates random integers between 0 and 9 for each element of each nested list. We then call this method and generate a nested list with 3 nested lists, each with 3 elements. Finally, we apply the flatMap method to the nested list to flatten it into a single list.
The result of "myFlatList" will be a single list containing all of the elements of the nested list. But... WHAT IS FLATTEN?! ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nv1h6t67786xmltud03n.png) ## FLATMAP Ah, flatMap - a magical incantation in the world of Scala programming! It's like having your own TARDIS, allowing you to traverse multiple dimensions of lists in a single bound. But what is flatMap, you ask? Well, let me tell you my friend. FlatMap is a method in Scala that takes a list, applies a function to each element in the list, and then flattens the resulting lists into a single list. Think of it like having a pile of books in your room, a bookshelf in your hallway, and a library on the other side of the world. FlatMap's like having a teleporter that can transport all of your books from your room and the bookshelf to the library in one go! But seriously, flatMap is a powerful tool for working with nested lists in Scala. It saves you from the headache of having to write complex code to iterate through each nested list, apply a function to each element, and then process the resulting lists. FlatMap simplifies all of that, allowing you to quickly and easily transform nested lists into a single list. Here's an example. Let's say you have a nested list of numbers like this: ```scala val myList = List(List(1, 2, 3), List(4, 5, 6), List(7, 8, 9)) ``` You can use flatMap to transform this into a single list like this: ```scala val myFlatList = myList.flatMap(x => x) ``` The result of myFlatList will be a single list containing all the elements of the original nested list. It's like having a genie that can grant your wishes with just one simple command! So, if you're feeling overwhelmed with nested lists in your Scala code, remember the magic word: flatMap. It's like having the power of a wizard and the ease of a genie in a single command - not to mention it'll save you from drowning in a sea of nested list headaches. 
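One way to internalize flatMap is the identity it satisfies on lists: `xs.flatMap(f)` gives the same result as `xs.map(f).flatten`. Here's a quick sketch of that (my own example, not from the series):

```scala
val nested = List(List(1, 2, 3), List(4, 5, 6), List(7, 8, 9))

// flatMap applies the function and flattens one level in a single step...
val viaFlatMap = nested.flatMap(inner => inner.map(_ * 2))

// ...which is equivalent to mapping first and flattening afterwards.
val viaMapFlatten = nested.map(inner => inner.map(_ * 2)).flatten

println(viaFlatMap) // List(2, 4, 6, 8, 10, 12, 14, 16, 18)
assert(viaFlatMap == viaMapFlatten)
```

That equivalence is exactly why flatMap saves you the extra flatten step when working with nested lists.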
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oyt6skagb5heag9a1az5.png)

## Monads in collections

Now, let me tell you something even cooler - implementing monads on collections. A monad is like a superpower for working with immutable values, allowing you to encapsulate and sequence operations in a sleek and composable way. And in Scala, a monad has two operations - map and flatMap, as we described before. Basically, you can perform a chain of operations on a collection in a fun and organized way without worrying about the nitty-gritty details beforehand.

You could even use monads on a list of integers and make it do all sorts of crazy things. Multiply the numbers by two? Absolutely! Add three to each result? Heck yeah! And don't even get us started on filtering out the numbers that are greater than 10... It's like magic!

Here are some more examples:

- A list of email addresses that need to be validated before sending a message.

```scala
val emails = List("example1@example.com", "example2@invalid", "example3@example.com")
val validEmails = emails.filter(email => email.matches("[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,4}"))
println(validEmails)
```

- A shopping cart whose total price needs to be computed, including tax and discount.

```scala
case class CartItem(name: String, price: Double, quantity: Int)
val cart = List(CartItem("apple", 0.99, 3), CartItem("banana", 1.49, 2), CartItem("orange", 0.89, 5))
val totalPrice = cart.map(item => item.price * item.quantity).sum * 1.08 * 0.9
println(totalPrice)
```

- A list of transactions that need to be sorted by date.
```scala case class Transaction(date: String, amount: Double) val transactions = List(Transaction("2022-01-01", 100.0), Transaction("2022-01-02", 200.0), Transaction("2022-01-03", 50.0), Transaction("2022-01-04", 300.0)) val sortedTransactions = transactions.sortBy(_.date) println(sortedTransactions) ``` - A list of orders that need to be grouped by customers for easier tracking and analysis. ```scala case class Order(id: Int, customer: String, amount: Double) val orders = List(Order(1, "John", 100.0), Order(2, "Jane", 200.0), Order(3, "John", 50.0), Order(4, "Jane", 300.0)) val groupedOrders = orders.groupBy(_.customer) println(groupedOrders) ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e1jqh6jl96jmzql4uehs.png) ## The Monads When it comes to programming, monads are like the Swiss Army knives of the software world. They can do a lot of things, but they’re most well-known for their ability to combine different types of data. And when it comes to Scala, monads are no exception. Scala has a number of monads, including List, Try, and Option. Each of these has its own unique set of capabilities, but when you combine them, you get some really powerful results. Let’s start with List. This is a type of collection that can contain any number of elements. It’s great for storing data and organizing it into a structure. You can use List to store and manipulate data in a variety of ways. Next, let’s look at Try. This is a monad that helps you handle errors and exceptions in a more organized way. It’s great for dealing with unexpected results and ensuring that your code is robust. Finally, there’s Option. This is a monad that helps you handle “optional” values. It’s great for dealing with missing data or values that may not always be present. When you combine these three monads, you get some really powerful results. 
For example, you can use List to store a collection of values, Try to handle errors and exceptions, and Option to handle optional values. This combination is incredibly powerful and can help you build robust, reliable code. But here’s the thing: monads can be tricky to understand and use. So if you’re new to Scala, it’s best to start with the basics and work your way up. Once you’ve got the hang of it, you can start to combine different monads to get the most out of them. And if you’re feeling brave, you can even try combining all three. Just don’t say we didn’t warn you – it’s like playing with fire, and you might end up getting burned. Or, you know, you might just end up with a really powerful piece of code. Either way, you’ll be the one who has to take the blame… or the credit. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jlm5ain7hpfowg6j9p59.png) Alright, let's take a crack at combining List, Try, and Option in a complex example using Pokémon. Say we want to write a program that takes a list of Pokémon names, fetches their data from an API, and then does the following: 1. For each Pokémon, we want to save their name and their type(s) to a database. 2. If any errors occur during the process, we want to log them and move on to the next Pokémon. 3. If any Pokémon data is missing or unknown, we want to log that as well, but still save the available data. To do this, we can use the combination of List, Try, and Option monads. 
First, we'll create a List of Pokémon names:

```scala
val pokemonNames = List("Pikachu", "Bulbasaur", "Charmander", "Squirtle")
```

Next, we'll create a function that takes a Pokémon name and returns their data from the PokéAPI:

```scala
import scalaj.http._
import scala.util.{Try, Success, Failure}
import play.api.libs.json._

case class Pokemon(name: String, types: Option[List[String]])

def getPokemonData(name: String): Try[Pokemon] = Try {
  // Both the HTTP call and the JSON parsing can fail, so wrap everything in Try
  val response = Http(s"https://pokeapi.co/api/v2/pokemon/$name").asString
  val json: JsValue = Json.parse(response.body)
  Pokemon(
    (json \ "name").as[String],
    (json \ "types").asOpt[List[JsObject]].map(types => types.map(obj => (obj \ "type" \ "name").as[String]))
  )
}
```

This function fetches the Pokémon data from the API and returns a Try, since the request or the parsing of the API response may fail. We're also using Play JSON to parse the API response and extract the relevant data.

Now, we can use the map and flatMap methods of the List monad to execute the getPokemonData function for each Pokémon name and store the data in a database:

```scala
val db = ??? // database connection here

val pokemonList = pokemonNames.flatMap(name =>
  getPokemonData(name) match {
    case Success(pokemon) =>
      val types = pokemon.types.map(_.mkString(",")).getOrElse("Unknown")
      val sql = s"INSERT INTO pokemon (name, types) VALUES ('${pokemon.name}', '$types')"
      try {
        db.execute(sql)
      } catch {
        case e: Exception => println(s"Error saving $name to database: ${e.getMessage}")
      }
      Some(pokemon)
    case Failure(e) =>
      println(s"Error fetching $name from API: ${e.getMessage}")
      None
  }
)
```

This code uses flatMap to execute the getPokemonData function for each Pokémon name and then either return the Pokémon data or a None value, depending on whether any errors occurred. We keep the result in `pokemonList` so we can inspect it afterwards. If a Pokémon's data is successfully retrieved, the name and type(s) are saved in the database using an SQL statement. If any errors occur, they are logged to the console.
Finally, we can use the foldLeft method of the List monad to log any missing or unknown data:

```scala
pokemonList
  .foldLeft(List.empty[String])((missingData, pokemon) =>
    if (pokemon.types.isEmpty)
      s"${pokemon.name}: Missing type information" :: missingData
    else missingData
  )
  .foreach(println)
```

This code uses foldLeft to iterate through the list of Pokémon and create a new list of missing or unknown data. If a Pokémon's type(s) are not available, their name and a message are added to the missingData list. Finally, the missing data is logged to the console using the foreach method.

And there you have it – a complex example that combines List, Try, and Option to fetch Pokémon data, save it in a database, and log any errors or missing data. Just be careful – with great power comes great responsibility.

We still need to talk some more about collections, and that's why this post will continue next time!
krlz
1,407,423
openGauss Checking the Number of Database Connections
Background If the number of connections reaches its upper limit, new connections cannot be created....
0
2023-03-20T07:57:52
https://dev.to/tongxi99658318/opengauss-checking-the-number-of-database-connections-gg4
opengauss
## Background

If the number of connections reaches its upper limit, new connections cannot be created. Therefore, if a user fails to connect to a database, the administrator must check whether the number of connections has reached the upper limit.

The following are details about database connections:

- The maximum number of global connections is specified by the **max_connections** parameter. Its default value is 5000.
- The number of a user's connections is specified by `CONNECTION LIMIT connlimit` in the `CREATE ROLE` statement and can be changed using `CONNECTION LIMIT connlimit` in the `ALTER ROLE` statement.
- The number of a database's connections is specified by the `CONNECTION LIMIT connlimit` parameter in the `CREATE DATABASE` statement.

## Procedure

1. Log in as the OS user **omm** to the primary node of the database.

2. Run the following command to connect to the database:

```bash
gsql -d postgres -p 8000
```

`postgres` is the name of the database to be connected, and `8000` is the port number of the database primary node. If information similar to the following is displayed, the connection succeeds:

```
gsql ((openGauss 1.0 build 290d125f) compiled at 2020-05-08 02:59:43 commit 2143 last mr 131)
Non-SSL connection (SSL connection is recommended when requiring high-security)
Type "help" for help.

postgres=#
```

3. View the upper limit of the number of global connections:

```sql
postgres=# SHOW max_connections;
 max_connections
-----------------
 800
(1 row)
```

`800` is the maximum number of session connections.

4. View the number of connections that have been used, as shown in Table 1.

NOTICE: Except for databases and usernames that are enclosed in double quotation marks (") during creation, uppercase letters are not allowed in database names and usernames in the commands in the following table.

Table 1: Viewing the number of session connections

**View the maximum number of sessions connected to a specific user.** Run the following commands to view the upper limit of the number of omm's session connections. `-1` indicates that no upper limit is set for the number of omm's session connections.

```sql
postgres=# SELECT ROLNAME,ROLCONNLIMIT FROM PG_ROLES WHERE ROLNAME='omm';
 rolname  | rolconnlimit
----------+--------------
 omm      | -1
(1 row)
```

**View the number of session connections that have been used by a user.** Run the following commands to view the number of session connections that have been used by omm. `1` indicates the number of session connections that have been used by omm.

```sql
postgres=# CREATE OR REPLACE VIEW DV_SESSIONS AS
    SELECT sa.sessionid AS SID, 0::integer AS SERIAL#, sa.usesysid AS USER#, ad.rolname AS USERNAME
    FROM pg_stat_get_activity(NULL) AS sa
    LEFT JOIN pg_authid ad ON (sa.usesysid = ad.oid)
    WHERE sa.application_name <> 'JobScheduler';

postgres=# SELECT COUNT(*) FROM DV_SESSIONS WHERE USERNAME='omm';
 count
-------
 1
(1 row)
```

**View the maximum number of sessions connected to a specific database.** Run the following commands to view the upper limit of the number of postgres's session connections. `-1` indicates that no upper limit is set for the number of postgres's session connections.

```sql
postgres=# SELECT DATNAME,DATCONNLIMIT FROM PG_DATABASE WHERE DATNAME='postgres';
 datname  | datconnlimit
----------+--------------
 postgres | -1
(1 row)
```

**View the number of session connections that have been used by a specific database.** Run the following commands to view the number of session connections that have been used by postgres. `1` indicates the number of session connections that have been used by postgres.

```sql
postgres=# SELECT COUNT(*) FROM PG_STAT_ACTIVITY WHERE DATNAME='postgres';
 count
-------
 1
(1 row)
```

**View the number of session connections that have been used by all users.** Run the following commands to view the number of session connections that have been used by all users:

```sql
postgres=# CREATE OR REPLACE VIEW DV_SESSIONS AS
    SELECT sa.sessionid AS SID, 0::integer AS SERIAL#, sa.usesysid AS USER#, ad.rolname AS USERNAME
    FROM pg_stat_get_activity(NULL) AS sa
    LEFT JOIN pg_authid ad ON (sa.usesysid = ad.oid)
    WHERE sa.application_name <> 'JobScheduler';

postgres=# SELECT COUNT(*) FROM DV_SESSIONS;
 count
-------
 10
(1 row)
```
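As a convenience, since openGauss inherits the standard PostgreSQL system catalogs used above, the remaining global headroom can be estimated in a single query (a sketch; verify the catalog names against your openGauss version):

```sql
-- Remaining global connection slots: max_connections minus sessions in use.
SELECT (SELECT setting::int FROM pg_settings WHERE name = 'max_connections')
       - (SELECT count(*) FROM pg_stat_activity) AS free_connections;
```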
tongxi99658318
1,407,633
Medusa Vs Woocommerce: Comparing Two Open source Online Commerce Platforms
Introduction Woocommerce is an open source, customizable ecommerce platform built on...
0
2023-03-20T10:57:53
https://dev.to/gunkev/medusa-vs-woocommerce-comparing-two-open-source-online-commerce-platforms-4g8g
headless, opensource, commerce, medusajs
## Introduction

[Woocommerce](https://woocommerce.com) is an [open source](https://www.learn-dev-tools.blog/what-does-open-source-software-mean-a-beginners-guide/), customizable ecommerce platform built on WordPress. Woocommerce offers features like flexible and secure payment and shipping integration. It is written in PHP, a general-purpose language that can be used to build complex applications.

On the other hand, [Medusa](https://medusajs.com) is an open source composable commerce engine built with Node.js. It offers out-of-the-box features like multi-currency support, plugin integration support, and tax configurations, among many others. Besides these, Medusa is known for its Return Merchandise Authorization (RMA) flows and for being fully customizable.

Through this article, you will learn why open source ecommerce tools are essential, how to choose a tool for your ecommerce, and the difference between the Woocommerce and Medusa open source online commerce platforms in terms of developer experience and the general ecommerce features they offer.

## What is an Open Source Ecommerce Platform?

Open source refers to software that has its source code publicly accessible for anyone to see, modify, contribute to, and distribute as they want. An open source ecommerce platform is software that gives you full access to the source code, allowing you to customize the ecommerce platform to meet your needs and giving you total control over your store.

## What is Woocommerce?

Woocommerce is a [WordPress plugin](https://wordpress.org/plugins/woocommerce/) used to build an online store. Woocommerce's first version was launched in 2011 and became extremely popular. In 2014, the plugin hit 4 million downloads. At the time of writing this article, Woocommerce has over 5 million active installations and 8.3k stars on GitHub. Woocommerce is a beginner-friendly tool, so you don’t need to be an expert developer to use it.
Additionally, there are thousands of free and paid themes on Woocommerce. Woocommerce's main aim is to enable you to create an online store with just a few clicks and start selling. Woocommerce runs under WordPress; after creating your Woocommerce account, you will create a [WordPress account](https://wordpress.com/) and give some permissions to Woocommerce before building a store. You will need to set up a WordPress installation before setting up a Woocommerce store.

## What is Medusa?

[Medusa](https://medusajs.com) is a JavaScript-based open source tool for building online stores. Medusa was launched in 2021, aiming to provide a great experience to developers building an ecommerce platform, using Node.js as an engine. Although Medusa is newer than Woocommerce, it is becoming increasingly popular as it is adopted by companies and individuals across the world. What attracts businesses and developers most is the customization ability Medusa offers. Medusa also offers other out-of-the-box features like bulk export and import, creation and management of multiple sales channels, configuration and management of many regions on one platform, and addition, customization, and sorting of products into collections, among many more.

Medusa is made up of 3 components:

- The [Medusa Server](https://docs.medusajs.com/quickstart/quick-start/): a headless backend built on Node.js; it's the main component that contains all the logic and data of the store.
- The [Admin Dashboard](https://docs.medusajs.com/admin/quickstart): the store operator uses this component to manage the store, i.e. add, remove, and update products, and customize and manage the store settings.
- The Storefront: here, customers can view products and place orders. By default, Medusa offers 2 storefronts.
One is built with [NextJs](https://docs.medusajs.com/starters/nextjs-medusa-starter) and the other with [Gatsby](https://docs.medusajs.com/starters/gatsby-medusa-starter), but you could build your own storefront using the [Storefront REST APIs](https://docs.medusajs.com/api/store/).

## Medusa Vs Woocommerce

This section highlights the differences between Medusa and Woocommerce in terms of the general features both offer and the developer experience.

## General Features Comparison

### Payment Gateways

Woocommerce and Medusa both allow developers and businesses to integrate third-party payment solutions. Woocommerce has a variety of ready-made extensions to integrate payments, covering gateways such as Stripe, PayPal, Verifone, Apple Pay, and Amazon Pay.

![payment woo commerce](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gobg96etcc8bhh9xdce3.PNG)

Developers can integrate payment methods in their Medusa store using payment providers such as [Klarna](https://docs.medusajs.com/add-plugins/klarna), [Paypal](https://docs.medusajs.com/add-plugins/paypal), and [Stripe](https://docs.medusajs.com/add-plugins/stripe) with existing plugins. However, as a developer, you could also create a plugin to integrate other payment methods.

![Medusa payment plugins](https://lh6.googleusercontent.com/pKPLMQjg9M9RmGi2fPxoK9ADavJmEaHmtZl6Wt5v2Wxn0BxgCSbnIL3kc6ClETJFzXC_9Zlg0teNpTh6y6RoXNU3cNClVTftGa7ihF8JAFTmqsvQ_XlJeWt0YV6tzebjxuJ85vdTseW-0bwBo6FHjlc-yjQjzoTKdSMPsN31Q3_vPmP6qd-6p31TxvHL8w)

### Live-Chat Integrations

A live chat will help engage more visitors and increase customer satisfaction across many channels.
Woocommerce offers extensions like [LiveChat](https://woocommerce.com/products/livechat/?quid=cb0d6d6a56c15ac947f1f062efb6b885) that you can install and set up on your store. WordPress has many plugins for live chat integration compatible with Woocommerce, including LiveChat's WordPress Plugin, Olark WordPress Plugin, Tidio WordPress Plugin, [Chatwoot](https://www.chatwoot.com/docs/product/channels/live-chat/integrations/wordpress), and WP Social Chat.

![Woocommerce live chat extensions](https://lh5.googleusercontent.com/rPzqsSVziC9TrKXUmHppJINR-BPjSc8tN-XuBU5LxJS6eqsf-y1G7MQpxt8K0psyfPXH5V2HJAiE34K-S9yX-cftxqK-LyShdOEnueHHEzA81Ej5_I5w0uzl8-B6I_-reuJINqh_L06sbgCAl1RwZffNXgc0hJSSOSCva1qu40h8P2mb8TcJP14gCczhVg)

Medusa, on the other hand, offers many ways to [integrate live chat with Tidio](https://medusajs.com/blog/how-to-integrate-live-chat-with-gatsby-tidio-medusa/), Chatwoot, Zendesk, HubSpot, ChatBot, and many others. In addition, Medusa's plugin system makes it easier to [create plugins](https://docs.medusajs.com/advanced/backend/plugins/create) and integrate with these chat services.

### Taxes

Woocommerce offers a simple way to manage taxes, but before configuring them, you need to enable taxes. You can choose to either include taxes in your product prices or charge taxes separately, and you can set up taxes to be applied in a specific region.
![Woocommerce tax settings](https://lh6.googleusercontent.com/vDv9CJBdJfdWri5_mozcdsbmZPueeuQQlbURyv1bLig6FxqVA1Tj5ntpWqLUC9nlP3PCsd465UFL8CrXRsfjfE6P78lfFu6wX25Z75jEByeeLKJdt9UiSlnDwQPu-WZ4ShJLD9mQjo50mXD5W8VmPt5K4HWoy6CLcJNBqDPGVsal72fq-YzLDs6w81vPJA)

![Woocommerce tax rates](https://lh4.googleusercontent.com/qchh2VuEGFGUhFK-d5TWyKrxVPmyBTvpKwvGCoasahr7d0ppNcBmfHK4YODPw7EzTxlcQfY8bVWhNCOB20X9LqdqQxaIk-lHflQvyZIUEimE6TLBmagE9s6apAm1crePpHD7cmA4qgH_Vu2vfVA0guWvC2nzlfSihvPTdHHKmo8t3FSD9Y2HRlr8-82kbw)

[Taxes](https://docs.medusajs.com/user-guide/taxes/) are available out-of-the-box in Medusa. Like Woocommerce, you can manage taxes in specific regions as well as add multiple tax rates.

![Medusa tax settings](https://lh4.googleusercontent.com/ap3ycJfK_XD7Cy25t_2omrSR9TQEG5bYzDekDZVoacSApuQ-LCbpV4F_vwIIhd9_YB2kqdnSX_lggYlJHbflXpZc4WnJ0tx9Vz_tnpQ7iASVjmGXgDJVOFOWDXIWzVF4thu3qth3T6-QHUQjsXQtNs-c-M_6JhtyAzya8q5DOpnwbhsi3rDAW6YJ8vo0QQ)

Medusa provides a [Tax API](https://docs.medusajs.com/api/admin/#tag/Tax-Rate) that allows developers to further integrate third-party services.

### RMA Flows

Woocommerce offers a powerful and reliable RMA tool for selling, managing, and processing warranties and handling return requests within your Woocommerce store.
However, you need to download the [Woocommerce Product RMA](https://woocommerce.com/document/warranty-and-returns/) extension, then install, activate, and configure it. Unfortunately, this product is not free.

[RMA Flows](https://docs.medusajs.com/user-guide/orders/) are available out-of-the-box in Medusa, and customers can send requests to return items from an ecommerce store. The admin or store operator can manage the order status, shipping, and payment.

### Shipping

Woocommerce allows you to set up shipping zones, add methods to those zones, and add rates to your shipping methods. Users can create as many zones as needed and add methods and rates.

![Woocommerce shipping zones](https://lh5.googleusercontent.com/yfE5MhBemMrsdApJY1LAnChSx_0EOYWrMo5-QMRE5hksyLSAzxSqFO9gRnWqgOwXIz1MVTYRIBbTyhetmvT2kc6HswxUkMzfDiipQcNK6OFPTYXO1nZ3nMSBl837Mih7P3paKdh_WiIBdhI68YLGRMQcrfVithx5OjNCNWIw36q5e71_cf5adHkX4u9XAA)

Medusa allows developers to integrate third-party providers like WebShipping. [Medusa has 4 components in its shipping architecture](https://docs.medusajs.com/user-guide/regions/shipping-options/): fulfillment providers, [shipping options](https://docs.medusajs.com/user-guide/regions/shipping-options/), shipping methods, and shipping profiles, which sit highest in the shipping hierarchy. Developers can equally create plugins to manage any shipping provider.
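As a rough illustration of how these four components relate, here is a sketch with plain objects. This is only an illustration of the hierarchy described above, not Medusa's actual data model or API:

```javascript
// Illustrative sketch only: plain objects mirroring the four shipping
// components described above, NOT Medusa's actual data model or API.

// A fulfillment provider handles the physical fulfillment of orders
const fulfillmentProvider = { id: "manual-fulfillment" };

// A shipping profile groups products that ship the same way
const shippingProfile = { id: "default-profile", name: "Default" };

// A shipping option is what the merchant configures per region
const shippingOption = {
  name: "Standard Shipping",
  region: "eu",
  amount: 950, // price in the region's smallest currency unit
  providerId: fulfillmentProvider.id,
  profileId: shippingProfile.id,
};

// A shipping method is created when a customer picks an option at checkout
function createShippingMethod(option, cartId) {
  return { cartId, option: option.name, price: option.amount };
}
```

In this mental model, profiles and options are merchant-side configuration, while a shipping method is the per-cart record produced at checkout.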
![Medusa shipping options](https://lh4.googleusercontent.com/FovmJ3Wg-kquTFWru2h22QVSVUKPNwq40q_dxzIIpd5gX9J1UIcQTfpZNBAI1FB_J6zVjWXl_lsnfvgkvZrCiSByOPxb5-pL0xc244102QL4c9k_zSQMNTRA_b0I78eNm-NZuvHM92qjsuMxt46hXiHWWQNOicNgZx-a10gXM6tP__dF8pN3cVsCzu8Wkw)

### Admin Dashboard

Woocommerce does not have an admin dashboard of its own but rather uses the WordPress dashboard, since it is functionally built on WordPress. When you install the plugin, the Woocommerce menu appears in the sidebar of the WordPress dashboard. The dashboard is simple and allows you to manage products, orders, extensions, reports, and other features. You need to know how the WP admin dashboard works to use Woocommerce.

![Woocommerce admin dashboard](https://lh4.googleusercontent.com/YrJ0A8wgvxPas5rBXD535RQTvy9iAZOeJHfAdjxZheZnmDVeU1FP1vrMG42D0eoRbYDlHqw8CakAprg55__K50vmT4H1xE6l-YVVkEbcS0sB7c6q6_twIaWs42ysJDIBqvLNYQmVwKgc2raLBUfmLUvE0UPYOz7Cfxw6b7Ga330hQZ0pc_KTuT06-ulfGw)

Medusa has a simple and fluent [admin dashboard](https://docs.medusajs.com/admin/quickstart). The dashboard allows you to manage products, price lists, customers, regions, currencies, settings, and many other elements of your store.
![Medusa admin dashboard](https://lh6.googleusercontent.com/onBj5fpd9Zg6QrWOGCmWXgid97HxXCCjtOdG88_bF0r77CRmhV8kMDo1CROHqL9inOKuYjFYoptBA9G0X7lB0wXzzxerpJNhkQFWCWafCrM0-bRutZnervJypH56sGDR6Frxu3t5x1pHgeZOIceLe3mO4x6xQdA6If93E1GA3_e75f7JgXZJaV1vm_sG0A)

### Localization

Woocommerce does not offer localization options on its own. To localize your store, you need to install a plugin like PoEdit, [Loco Translate](https://woocommerce.com/document/woocommerce-localization/), or [MultilingualPress](https://multilingualpress.org/). Currently, the localization feature is also not available out-of-the-box in Medusa, but it can be implemented using CMS plugins like [Contentful](https://docs.medusajs.com/add-plugins/contentful/).

### Multi-currency Support

Unfortunately, multi-currency support is not available out-of-the-box in Woocommerce. To set up multiple currencies in your online store, you need to purchase the [Currency Converter Widget](https://woocommerce.com/products/currency-converter-widget/) extension and set it up. There are other extensions like Woocommerce Multi-Currency or Multi-Currency Switcher, which are also not free.

![Woocommerce currency converter widget](https://lh4.googleusercontent.com/vEKitssP-Eck-6Wtn4UtvB67Aoj5X8AP1fjyjXM2rhnFx53zP2QSsqA-MHRRxGMNRHZqsNB4rNelWd9DYB7illMfNz_Sp_zgApveUs-_ttx8dxeW379umTjO8rH6_A71ETdHSLueK61q8UDsvhivg8KtO_uQIp_dX3zapl48Z3_kQ6-BS1UdUYA9M92ggg)

Medusa supports multi-currency out of the box. A business can set up a region and choose specific regional settings such as currency.
Additionally, the business can manage each region from one store, eliminating the need to create many stores or switch between them.

![Medusa regions](https://lh4.googleusercontent.com/VE8PaS7hQTQcP8I12mrA_-KAbDtOKmZ2GCWBb0AAx4bJ8TzLTvZKDReTm4NUatNRkNRNP6w_yWjrQAT8qmxQxJdfRaOCq-KcqI4ak8xZE8_uwJht01RTbVbfDVDhZdEqjY3cYZFAHVX1a10DMT8ECJMr-96KowCMwxIoJ-hk51680v7AAiy_C9popHOn6w)

![Medusa region settings](https://lh5.googleusercontent.com/qEDbcmsmsqtGhBIyQz89rbbJpLcm3rF4bLxHUgabCIJqdPYAcxCdlqg9FwSfKkr6WLM0GFEUYC89-zd0VBG2dtsTFkqKbQ3FUD6YFZ31nCs-6_x_YmxuJ6Sp15VKTabool-iGqBiCflxQ3_J2k1WzsVEeChCXN74XZ79xYUsMeGJPg0UhsK-nnruUBLYeA)

### Gift Cards

Gift cards let you create and sell prepaid multipurpose cards to customers, which they can redeem at your store. Gift cards are not available out-of-the-box in Woocommerce; some Woocommerce gift card extensions require at least version 3.9 of Woocommerce, and they are not free of charge.

![Woocommerce gift card extension](https://lh3.googleusercontent.com/vlB-FYpA6UMqBEOjSMTMoxvSFyJySL7W0eP_eWBoiFhZIiGwxNtOca0HJF5F-957fmfhvWC90YgNstj4g5MQcq9m0lzZyVZBCyhpQHoojoyqMt4vIrNc3N8CcEmj1EC7LGuZKz4v86hck9f_bUU3rsshLRqZaoBF9F0-E6fXG7ZuSHGTZbVh21Mhkbxqeg)

Gift cards are available in Medusa out-of-the-box. Merchants can specify multiple denominations and images. Customers can then purchase the gift cards as if they were buying a product.
However, gift cards are not packaged and shipped like products; customers receive them as a link or by email.

![Medusa gift cards](https://lh6.googleusercontent.com/hC2g4pu_hoWSDzs3rPHRTamRbjxtixH1MtPwONRKVDJg1f3HwUDEBE9NJgKJW0OnuR12g6BcUs8oORR2eLffRZHi9ZC1H93gVm0yIEPKNXwjB5yYBAXo3r31cJcirIRiFZTDDIOp9ayxmO9nngKLfPbP86d5pWZuGUgA7AePzbW3hbxNtm7SiJBBhdo4fA)

### Reporting and Analytics

Woocommerce offers an analytics board where you can view your number of sales, net sales, number of orders, products sold, and variations sold. You can compare your sales by date and even download your reports for further use.

![Woocommerce analytics](https://lh6.googleusercontent.com/ZhTL19GZWc2gq5djJoRBIxRf3r_FtIT1kWVjpNQCyjuLGorkBtfQfTJtELjADRavRImxQkmMALTCsssP0gIHLDjdQOaVx04tON2TC-thlSo-sLuIo2dx1n-wWZAUPtw1OQVhSW5qJ7u7G8ff-X1pxl7baUa4docpRXA-u748JSzI8QSUIJRtKz08upHDJQ)

Right now, Medusa does not provide an analytics module, but developers can use third-party services to add analytical features. Fortunately, with the flexibility Medusa offers, developers can easily implement and [integrate third-party services](https://docs.medusajs.com/advanced/backend/plugins/overview/) in their store.

### Speed and Performance

The process of creating a store with Woocommerce is quite fast; with a few clicks, you will have your store set up and ready to sell. Measuring speed and performance in terms of the tools used to build the frontend and backend is a different matter here, since Woocommerce is not a headless commerce platform.
You can optimize the speed and performance of your store in various ways. For example, if you use a lighter framework or plain CSS rather than Bootstrap, the number of stylesheet files will be considerably reduced and pages will load faster, improving user experience as well as some [SEO](https://www.learn-dev-tools.blog/topics/search-engine-optimization/) factors, and hence the speed and performance of your store.

Medusa's headless architecture allows you to separate the backend from the frontend. This makes it faster and lighter compared to a tightly coupled architecture, so you can use a lighter framework or fewer resources for both your backend and frontend, which can greatly impact the speed and performance of your store. In addition, you can set up your store following this [quick guide](https://docs.medusajs.com/quickstart/quick-start); the process is fast, since in just three steps you will have a complete commerce engine running. If you need advanced functionality, check the [complete documentation](https://docs.medusajs.com/).

## Developer's Features Comparison

This section presents a brief comparison between Medusa and Woocommerce in terms of developer features.

### Documentation

Both Woocommerce and Medusa provide extensive [documentation](https://www.learn-dev-tools.blog/topics/writing-good-documentation/) for both developers and business owners. The [Woocommerce codex](https://woocommerce.com/documentation/plugins/woocommerce/woocommerce-codex/) provides a library of documentation and tutorials to set up, customize, and expand your online store, whereas Medusa has a [detailed user guide](https://docs.medusajs.com/user-guide/) to help you get started.

### Community

Developers can contribute or report bugs on [Woocommerce's GitHub repository](https://github.com/woocommerce/woocommerce).
Medusa is an open source platform whose aim is to build a strong and collaborative relationship with the community. Developers and other interested people can join the [Discord server](https://discord.gg/medusajs) to stay up to date with current trends and to request and receive help from community members as well as the core Medusa team. Developers can equally showcase their work, report bugs, propose fixes, or contribute to issues on Medusa's [GitHub repository](https://github.com/medusajs/medusa). There are other channels like [Twitter and LinkedIn](https://twitter.com/medusajs) that developers can join to contribute or get help.

### Installation and Time to Get Started

WooCommerce's [installation guide](https://woocommerce.com/documentation/plugins/woocommerce/getting-started/installation-and-updating/) shows a step-by-step process for installing and setting up your online store easily. To get started with Woocommerce, developers need to know PHP, HTML, CSS3, and WordPress. Non-developers can install WooCommerce via [WordPress.com](http://wordpress.com/).

Medusa provides a [quick guide](https://docs.medusajs.com/quickstart/quick-start) that allows developers to create and manage their store in just 3 steps: installing the Medusa CLI with yarn or npm, creating a new Medusa project using the CLI, and starting the Medusa server. In less than 4 steps, you will have a store set up with the features mentioned earlier. You'll then need to set up the [admin](https://docs.medusajs.com/admin/quickstart) and a storefront with Gatsby or Next.js to effectively manage your store. A Medusa ecommerce store is written essentially in TypeScript and JavaScript, and it uses SQLite as a database if no database engine is set up.
For production purposes, it's recommended to install and configure a [PostgreSQL](https://docs.medusajs.com/tutorial/set-up-your-development-environment#postgresql) database and [Redis](https://docs.medusajs.com/tutorial/set-up-your-development-environment#redis) to handle events. The step-by-step process to install and manage these tools on various operating systems is included in the [Medusa documentation](https://docs.medusajs.com/tutorial/set-up-your-development-environment/#nodejs).

### Deployment and Upgrade

Since Woocommerce is built on top of WordPress, it's recommended to deploy the online store on a WordPress hosting plan, which restricts deployment on arbitrary cloud hosting. Woocommerce deals mostly with updates rather than upgrades. Before updating your Woocommerce store, it's recommended to manually back up or [automatically back up](https://jetpack.com/upgrade/backup/) your store.

Unlike Woocommerce, Medusa can be deployed on any cloud hosting. In its documentation, you can find a straightforward guide on how to deploy on various platforms like [Heroku](https://docs.medusajs.com/deployments/server/deploying-on-heroku), [Digital Ocean](https://docs.medusajs.com/deployments/server/deploying-on-digital-ocean), or [Qovery](https://docs.medusajs.com/deployments/server/deploying-on-qovery). Upgrading Medusa is quite simple; however, some versions may require you to run migration scripts or take additional actions. Fortunately, Medusa provides an [upgrade guide](https://docs.medusajs.com/advanced/backend/upgrade-guides/) with detailed steps to follow.

### Customization

Woocommerce does not have a headless architecture. You can build a headless WooCommerce store on your own, but this is not recommended, as Woocommerce is not designed for this purpose and does not have the tools or features for it.

The fact that Medusa has a headless architecture allows anyone to easily and freely customize their storefront.
You can choose any framework or programming language to use on your frontend, and this also applies to the admin dashboard. Creating custom features is as simple as adding a JavaScript file that loads automatically into Medusa as soon as you run the server. The backend [exposes an API](https://docs.medusajs.com/api/admin/), so developers can extend these APIs to add third-party services and custom features.

### Developer's Level

You don't need to be a developer to set up a store with WordPress and Woocommerce; you can set up your online store with just a few clicks. However, if you need advanced features and more customization, you may need to hire an experienced developer to build those features or build them yourself.

Medusa has a simple and understandable architecture. As a developer, you just need knowledge of JavaScript/TypeScript and Node.js. Both the Medusa storefront and admin dashboard are built with JavaScript frameworks. Due to the decoupled nature of Medusa, you can create the admin dashboard and storefront with any language or framework. All you need is to link the backend using the [REST API](https://docs.medusajs.com/api/store/auth).
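To make the idea of linking any frontend to the backend concrete, here is a minimal sketch of a framework-agnostic helper calling the Store API. The endpoint path follows the linked Store API reference; the base URL (a locally running server on port 9000) is an assumption you would adjust for your setup:

```javascript
// Minimal sketch: a framework-agnostic frontend helper for the Medusa Store API.
// Assumes a Medusa server running locally on port 9000 (adjust as needed).
const BASE_URL = "http://localhost:9000";

// Pull the product titles out of a /store/products response body
function productTitles(responseBody) {
  return (responseBody.products || []).map((p) => p.title);
}

// Fetch the product list from the store endpoint
async function listProductTitles() {
  const res = await fetch(`${BASE_URL}/store/products`);
  if (!res.ok) throw new Error(`Request failed with status ${res.status}`);
  return productTitles(await res.json());
}
```

Because the backend only speaks JSON over HTTP, the same helper works unchanged from a Next.js page, a Gatsby component, or any other client.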
## Summary of Medusa vs Woocommerce

| Property | Medusa | Woocommerce |
| --- | --- | --- |
| Language | TypeScript | PHP |
| API | REST API | REST API, but you need to enable it in the Woocommerce settings |
| Stars on GitHub | [See GitHub](https://github.com/medusajs/medusa) | 8.3k |
| License | [License file](https://github.com/medusajs/medusa/blob/master/LICENSE) | |
| Activity | Since 2021 | Since 2011 |
| Latest Release | 1.7.5 (5 days ago) | 7.3.0 (2 weeks ago) |
| Platform | - | WordPress |
| Mobile App | No | Yes |
| RMA Flows | Yes | Yes |
| Taxes | Yes | Yes |
| Localization | No | No |
| Multi-Region Support | Yes | Yes |
| Gift Cards | Yes | Yes |
| Headless Architecture | Yes | No |
| Customization | Highly customizable | No |

## Conclusion

In this article, you learned the differences between Woocommerce and Medusa based on the features each platform offers and the developer experience. Choosing the right ecommerce platform is an important decision, because making the wrong one may greatly affect your business in the long run.

Woocommerce is easy to set up and can handle both smaller and bigger businesses. It is a good choice for non-developers and for developers who know PHP and WordPress. Medusa is a perfect choice for anyone aiming to build headless commerce, and it can equally manage both larger and smaller businesses. It's a good choice for developers with solid JS/TS skills looking to build a Node.js ecommerce platform. If you want to design your own storefront and admin dashboard, then Medusa is also a good choice.
gunkev
1,407,634
Leveraging Microsoft Teams to Boost Collaboration and Productivity
Collaboration and productivity tools have become increasingly vital for businesses to streamline...
21,622
2023-03-20T11:02:20
https://intranetfromthetrenches.substack.com/p/leveraging-microsoft-teams-to-boost
microsoftcl, microsoftteams, customization
Collaboration and productivity tools have become increasingly vital for businesses to streamline their operations and stay competitive in today's fast-paced digital landscape. Microsoft Teams is a cloud-based platform that offers a comprehensive set of features for collaboration, document management, and business process automation. In this article, we will focus on Microsoft Teams' standard capabilities, configuration, customization, and custom development options. This topic has been previously covered in our post on [SharePoint Online, From Out-of-the-Box to Customized Solutions: The Many Faces of SharePoint Online](https://intranetfromthetrenches.substack.com/p/from-out-of-the-box-to-customized), and we will be following a similar structure to delve into the different aspects of Microsoft Teams' extensibility. By the end of this article, you'll have a better understanding of how to leverage Microsoft Teams' functionalities to achieve your organization's specific business goals.

![Microsoft Teams Extensibility Options](https://substackcdn.com/image/fetch/w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd77af4ff-714a-4203-ae19-10e61391ae47_1890x740.png)

## Standard capabilities

Microsoft Teams offers a wide array of standard capabilities, which are essentially the pre-built features that come with the platform. Although these capabilities can be used out-of-the-box, organizations often need to extend or customize them to fit their specific requirements. By leveraging Microsoft Teams' standard capabilities, businesses can create bespoke solutions that streamline their workflows and improve collaboration. Some examples of Microsoft Teams' standard capabilities are:

* _Chat and messaging_: This feature allows users to send instant messages to other members of their team or organization.
It also enables users to have one-on-one or group chats, share files and images, and collaborate in real-time.
* _Video and audio calls_: Microsoft Teams provides high-quality video and audio calls for teams to connect with each other, regardless of their location. Users can join calls from their desktop or mobile devices, share their screen or presentation, and use the virtual whiteboard to brainstorm ideas.
* _File sharing and collaboration_: This feature allows users to share and collaborate on files within Microsoft Teams. Team members can work on the same document simultaneously, leave comments and feedback, and track changes in real-time.

## Configuration

Configuration is a simple and effective method of extending Microsoft Teams components by adjusting the platform's settings and configuration data without altering the components' underlying structure. Typically, Microsoft Teams administrators are responsible for making these modifications. Configuration is a valuable tool that allows businesses to customize Microsoft Teams to suit their unique requirements. Here are some examples of configuration capabilities in Microsoft Teams:

* _Teams image_: Team members can customize their Teams interface by adding a logo.
* _Permission management_: With permission management, administrators can control the level of access that users have within Teams.
* _Mentions and Labels_: Mentions and Labels allow users to tag specific individuals or groups in their conversations or activities. By doing so, users can draw attention to particular items or activities and ensure that their message is received by the intended audience.

## Customization

Customization is a form of extensibility for Microsoft Teams that involves expanding the capabilities of existing components by adding new components or altering the structure of existing ones.
This type of extensibility does not necessitate programming language skills, making it ideal for citizen developers, such as business consultants or trained customer power users. Here are some examples of customization capabilities in Microsoft Teams:

* _Channels_: Microsoft Teams allows businesses to create channels, which are dedicated spaces for teams to collaborate on specific projects or topics. Channels can be customized with their own description and privacy settings.
* _Custom tabs_: Custom tabs provide a way to integrate third-party applications or services within Microsoft Teams. By adding a custom tab, businesses can extend Microsoft Teams' functionality by including tools such as a shared calendar or a project management dashboard.

## Custom development

Custom development is the most advanced form of Microsoft Teams extensibility, involving the use of programming or scripting languages to expand the platform's capabilities. This type of extensibility is usually undertaken by professional developers with comprehensive IT expertise. Here are some examples of custom development capabilities in Microsoft Teams:

* _Custom apps_: With custom apps, businesses can develop and publish their own applications that integrate with Microsoft Teams. These apps can be designed to perform a wide range of tasks, such as data analysis or project management.
* _Custom bots_: Custom bots are programmed to interact with Microsoft Teams users, either through text or voice commands. These bots can be customized to perform various tasks, such as scheduling meetings or providing customer support.
* _Custom connectors_: Custom connectors allow businesses to integrate third-party applications or services with Microsoft Teams. By creating custom connectors, businesses can expand the range of tools available to their teams, improving their productivity and collaboration.
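As a small illustration of the connector idea, the simplest variant is an incoming webhook: a channel-specific URL that accepts a JSON payload with a `text` field. The sketch below is hedged — the webhook URL is a placeholder, as a real one is generated from the channel's connector settings:

```javascript
// Sketch: posting a message to a Teams channel through an incoming webhook.
// The URL below is a placeholder -- a real one is generated per channel
// from the channel's connector settings.
const WEBHOOK_URL = "https://example.webhook.office.com/webhookb2/placeholder";

// Incoming webhooks accept a simple JSON body with a "text" field
function buildPayload(text) {
  return JSON.stringify({ text });
}

// POST the payload to the channel's webhook URL
async function notifyChannel(text) {
  const res = await fetch(WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: buildPayload(text),
  });
  return res.ok;
}
```

A build server or monitoring tool calling `notifyChannel("Build finished")` is a typical use of this pattern, without needing a full custom app or bot.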
## Summary

To summarize, the article highlights four different types of extensibility options available to users for customizing the platform to suit their business requirements. Firstly, the standard capabilities of Microsoft Teams are discussed, which include features such as chat and messaging, video and audio calls, and file sharing and collaboration. Secondly, the article delves into the configuration options, which allow administrators to adjust platform settings and configuration data. Thirdly, customization is explained as a way to expand the capabilities of existing components by adding new components or altering the structure of existing ones. Finally, custom development is discussed, which involves using programming or scripting languages to expand the platform's capabilities. The article provides examples of each type of extensibility option available in Microsoft Teams, such as custom tabs, custom apps, custom bots, and custom connectors.

> _Don't forget to share the article with your friends and colleagues if you find it interesting, click on the heart if you like it, or click on the comments to share what you think of the article, if you would add more, or if you want to clarify any of them._
jaloplo
1,407,732
Obsidian Notes with git-crypt 🔐
Reposting my github guide Obsidian Vault, the best markdown note setup👾. Private key encrypted cross...
0
2023-03-20T11:38:18
https://dev.to/snazzybytes/obsidian-notes-with-git-crypt-376m
obsidian, termux, markdown, gitcrypt
Reposting my [github guide](https://github.com/snazzybytes/obsidian-scripts/blob/master/README.md#obsidian-notes-with-git-crypt) Obsidian Vault, the best markdown note setup👾. Private key encrypted cross device synced notes 🤌(every pixel matches my laptop extensions, plugins, icons, themes, all of it 🌋) 💻 Laptop: Obsidian, gpg, git-crypt, vault repo 📱Phone: Obsidian, gpg, Termux, Termux Widget, sh scripts, git-crypt, vault repo ## Instructions ### Prerequisites - `git-crypt` (install via `brew install git-crypt`) - Obsidian app installed ( [download](https://obsidian.md/)) - folder created for your Obsidian vault (e.g. `~/ObsidianVault`) ### Initialize git repo and set up git-crypt - initialize git repository ([as you normally would](https://docs.github.com/en/get-started/quickstart/create-a-repo)) ```shell $ cd YourVaultFolder # delete existing git repo # let's not expose cleartext history! $ rm -fr .git/ $ git init ``` - initialize `git-crypt` ```shell $ git-crypt init ``` - copy the generated secret key to `~/git-crypt-key` (you will need this `git-crypt-key` to decrypt your vault on other devices, so you might wanna back it up 🤙) ```shell git-crypt export-key ../git-crypt-key ``` ### Set up .gitignore and .gitattributes Here is a sample `.gitignore`; you may want to put the entire `.obsidian` directory in there, but I like to keep my plugins/extensions/etc. as well: ```shell .obsidian/workspace .obsidian/cache ``` #### Here is a sample `.gitattributes`: - I'm basically encrypting everything including my plugins `*/**`, but this can be fine-tuned later as you please (all markdown files, all obsidian canvas files, all other files) ```shell *.md filter=git-crypt diff=git-crypt */** filter=git-crypt diff=git-crypt *.canvas filter=git-crypt diff=git-crypt BrainPad/** filter=git-crypt diff=git-crypt BrainPad.md filter=git-crypt diff=git-crypt ``` #### (Optional for ZSH) Improve terminal performance If you're using `oh-my-zsh`, the following two commands will prevent it from
slowing down your command line (this will modify your vault repo's git config, not the global config): ```shell $ git config --add oh-my-zsh.hide-status 1 $ git config --add oh-my-zsh.hide-dirty 1 ``` - FYI - this results in your vault's `.git/config` being updated with this... ```shell [oh-my-zsh] hide-status = 1 hide-dirty = 1 ``` #### Verify and test YOUR .gitattributes - run this command ```shell git ls-files -z |xargs -0 git check-attr filter |grep unspecified ``` - you should only see non-critical files like `.gitattributes` being reported as unspecified - if any file is mentioned here that you want to be encrypted, tweak your `.gitattributes` further ### Testing Encryption - you should see all your encrypted files listed in the output (might take a while) ```shell git-crypt status -e ``` ### Unlocking your Vault To unlock your Vault's git repo, run this (using `../git-crypt-key` backed up earlier): ```shell git-crypt unlock ../git-crypt-key ``` ### Push your notes to GitHub - create a **private** empty repository on GitHub (follow the instructions about how to push an existing repository that come up upon creation) > replace `YourGithubUsername/YourVaultRepo` with your own ```shell $ git remote add origin \ git@github.com:YourGithubUsername/YourVaultRepo.git $ git branch -M master # ... $ git push -u origin master ``` >**Note:** From now on, you can add, commit, push from this repository, and `git-crypt` will transparently encrypt and decrypt your files. ### Locking Your Vault - if you want, you can lock your vault once you are done (don't have to) ```shell git-crypt lock ``` --- ### Obsidian - install the `Obsidian Git` plugin - configure the plugin: Make sure `Disable push` is deactivated. - do this on all your desktop/laptop machines Now, every time you want to sync your changes, press `ctrl+p` and search for "Obsidian Git: commit …" The plugin will automatically pull all remote changes when you start Obsidian.
If you leave it running for days, you might want to pull recent changes manually: `ctrl+p` and search for "Obsidian Git: Pull". --- ### Common Issues #### Git related - if you get errors on `git push` and it gets stuck on 100% but not finishing, consider increasing `http.postBuffer` in your global git config and retry (this may be the first time you are pushing something bigger, if you decided to back up your plugins/extensions etc. like me) ```shell git config --global http.postBuffer 524288000 ``` #### Obsidian Git plugin (desktop) If you are seeing `git-crypt` related errors in Obsidian on your desktop, it is most likely unable to find `git-crypt` in your path. Instead, tell your `.git/config` the explicit path to the `git-crypt` executable (modify it manually): ```shell [filter "git-crypt"] smudge = \"/opt/homebrew/bin/git-crypt\" smudge clean = \"/opt/homebrew/bin/git-crypt\" clean required = true [diff "git-crypt"] textconv = \"/opt/homebrew/bin/git-crypt\" diff ``` If you get any `gpg` errors, add the path of your gpg executable to your global git config as well. - first check the full path to the installed `gpg` ```shell type gpg gpg is /usr/local/bin/gpg ``` - then configure git to use that full path ```shell git config --global gpg.program /usr/local/bin/gpg ``` - FYI - this results in your global `.gitconfig` being updated with this...
```shell [gpg] program = /usr/local/bin/gpg ``` --- ## BONUS: Android Sync ### Requirements - install latest Termux from F-Droid - install Termux Widget 0.13+ ### Set up your Termux for Git - upgrade packages ```shell pkg upgrade ``` - install required packages ```shell pkg install git git-crypt ``` - make storage available in Termux (`/storage/shared/*`) ```shell termux-setup-storage ``` - generate a new SSH key (press enter for empty passphrase) ```shell ssh-keygen -t ed25519 -C "your_email@example.com" ``` - add your new SSH key to your GitHub account ([see here](https://docs.github.com/en/github/authenticating-to-github/adding-a-new-ssh-key-to-your-github-account)) ### Set up your vault in Termux #### Vault repository setup - clone the vault repository into Termux home (for now) > replace `YourGithubUsername/YourVaultRepo` with your own ```shell git clone git@github.com:YourGithubUsername/YourVaultRepo.git ``` - copy the `git-crypt-key` file into Termux (you can zip it to `git-crypt-key.zip` and transfer to your device using your favorite method) - unlock the vault repository (this might take a while) ```shell # go inside cd YourVaultRepo # unlock your vault git-crypt unlock ../git-crypt-key ``` - once unlock is finished, move this github vault repo to the shared folder; this is because the Obsidian app needs to be able to see it: ```shell # go back home cd # move to your storage mv YourVaultRepo storage/shared/ ``` #### Android scripts setup (Termux): >To take this up a notch, this gives us handy commit-and-push and pull shortcuts that we can launch directly from the comfort of our homescreen Clone the repository, then copy `pull.sh`, `push.sh`, `log.sh`, and `repo.conf` into your Termux `.shortcuts` directory to be able to trigger them from the homescreen widget.
- clone the repo containing `.sh` scripts and `.conf` file ```shell git clone git@github.com:snazzybytes/obsidian-scripts.git ``` - copy all files from `android` folder to Termux's `.shortcuts` directory (needed to get Termux Widget working) ```shell cp obsidian-scripts/android/* .shortcuts/ ``` - update `repo.conf` file with your GitHub vault repo name (this is used by the push/pull/log `.sh` scripts) ```shell GH_REPO=YourVaultRepo ``` - make sure they are executable ```shell # go inside and change permissions cd obsidian-scripts chmod +x pull.sh push.sh log.sh # go back to home directory cd ``` - drop Termux:Widget on your homescreen and you should now see the scripts from `.shortcuts` show up on the list ![alt text](https://imgur.com/hAsT4a1.png "Termux Widget") BOOM 🚀🔥! Now you can access your encrypted vault on Android too and push encrypted changes to GitHub. [see here for demo](https://nostr.build/av/nostr.build_6db2328d571d45977cd81bb65170c82d325fe5280c346c7980ab39ec1d3e731d.mp4) #### Scripts Documented (same as the repo ones) >per latest Termux Widget version 0.13+ all custom scripts in Termux `.shortcuts` directory need proper shebangs `#!/data/data/com.termux/files/usr/bin/bash` pull.sh (lets you pull remote changes) ```shell #!/data/data/com.termux/files/usr/bin/bash source repo.conf cd ~/storage/shared/$GH_REPO git pull cd ~ bash -c "read -t 3 -n 1" ``` push.sh (lets you commit and push note changes) ```shell #!/data/data/com.termux/files/usr/bin/bash source repo.conf cd ~/storage/shared/$GH_REPO git add .
git commit -m "android on $(date)" git push cd ~ bash -c "read -t 3 -n 1" ``` log.sh (allows you to check which version you are on with `git log`) ```shell #!/data/data/com.termux/files/usr/bin/bash source repo.conf cd /data/data/com.termux/files/home/storage/shared/$GH_REPO git log cd ~ bash -c "read -t 5 -n 1" ``` ### Resources and references - https://willricketts.com/obsidian-changed-everything-for-me/ - https://github.com/AGWA/git-crypt - https://buddy.works/guides/git-crypt - https://medium.com/@dianademco/writing-in-obsidian-a-comprehensive-guide-58a1306ed293 - https://renerocks.ai/blog/obsidian-encrypted-github-android/#checking-it-out-on-a-different-machine - https://publish.obsidian.md/git-doc/Start+here - https://github.com/denolehov/obsidian-git/issues/21
snazzybytes
1,407,832
New Suspense Hooks for Meteor
As we learned in the previous part of this series, why and how we could use Suspense. This article...
22,320
2023-03-20T13:42:48
https://blog.meteor.com/new-suspense-hooks-for-meteor-5391570b3007
webdev, javascript, react, showdev
![New Suspense hooks for Meteor](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l4t4sd7fqjh6bu453glu.png) In the previous part of this series, we learned why and how to use [Suspense](https://dev.to/grubba/making-promises-suspendable-452f). This article will discuss and show how to use Meteor's newly added suspendable hooks. With the changes coming to Meteor, especially its fibers-free future, we decided to add to our `react-meteor-data` hooks collection a few hooks that can deal with these new async methods. For starters, we need to address that `useFind` still works as it always did on the *client;* it was implemented using synchronous functions, so it was safe from change. It is important to note, though, that if you are using the `useFind` hook on the server, for example while using SSR, you will need to use the newly added `suspense/useFind` hook. This is important because you cannot do a synchronous database search with the current implementation of `useFind` on the server, but with the newly added Suspense version, you can. Before showing examples, we would like to introduce another hook that is now suspendable: `useSubscribe`. Instead of using an `isLoading` function to know when the subscription is ready, we made it use the Suspense API. Let's look at how it was before and how it will be: ```jsx // before // Tasks.jsx function Tasks(){ const [hideDone, setHideDone] = useState(false); const isLoading = useSubscribe("tasks"); const filter = hideDone ? { done: { $ne: true } } : { }; const tasks = useFind( () => TasksCollection.find(filter, { sort: { createdAt: -1 } }), [hideDone] ); if (isLoading()) return <Loader /> // render the tasks } // With suspense // Tasks.jsx (have a Suspense wrapper outside with fallback={ <Loader /> } function Tasks(){ const [hideDone, setHideDone] = useState(false); useSubscribe("tasks"); const filter = hideDone ?
{ done: { $ne: true } } : { }; const tasks = useFind( TasksCollection, [filter, { sort: { createdAt: -1 } }], [hideDone] ); // render the tasks } ``` It looks and feels almost the same. We just moved the `isLoading` to the outside, and now we have much more declarative code with a better developer experience. The change regarding `useFind` is in how you call it: now you pass the collection, the arguments, and the dependencies. ## What about useTracker? Here, a change is needed so the hook can suspend when its computation is async. This is due to how Suspense works under the hood (in short: we need a way to track the promises when they are thrown; for a longer version, [check the previous article from this series](https://dev.to/grubba/making-promises-suspendable-452f)). As a consequence, we will need to add a key to each useTracker instance. ```jsx // before // Tasks.jsx function Tasks(){ const isLoading = useSubscribe("tasks"); const { username } = useTracker(() => Meteor.userAsync()) const tasksByUser = useTracker(() => TasksCollection.find({username}, { sort: { createdAt: -1 } }).fetch() ); if (isLoading()) return <Loader /> // render the tasks } // With suspense // Tasks.jsx (have a Suspense wrapper outside with fallback={ <Loader /> } function Tasks(){ useSubscribe("tasks"); const { username } = useTracker("user",() => Meteor.userAsync()) const tasksByUser = useTracker("tasksByUser", () => TasksCollection.find({username}, { sort: { createdAt: -1 } }).fetchAsync() // fetch will be async ); // render the tasks } ``` It seems more bloated due to the addition of strings, but now we do not need to think about loading states; the new React rendering engine improves our performance, and the reactivity is maintained between renders. ## Why are these changes so significant?
If you only use `useFind` as before, without SSR, and you do not use useTracker with async dependencies (such as calling a fetch from a collection), then you may not need to update: these changes are new API surface for the async functionality added in Meteor 3.
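As an aside, the reason the suspendable hooks ask for string keys can be seen in a small model of a Suspense-style cache. This is not react-meteor-data's actual implementation — just an illustrative sketch, with hypothetical names, of why each suspendable call site needs a stable key to find its promise again across re-renders:

```javascript
// Illustrative keyed promise cache (not react-meteor-data's real code).
// On a cache miss we start the async work and throw the promise so a
// Suspense boundary can catch it; on later renders the same key finds
// the settled entry and returns its value synchronously.
const cache = new Map();

function read(key, compute) {
  if (!cache.has(key)) {
    const entry = { status: "pending", value: undefined, promise: null };
    entry.promise = Promise.resolve(compute()).then((v) => {
      entry.status = "done";
      entry.value = v;
    });
    cache.set(key, entry);
  }
  const entry = cache.get(key);
  if (entry.status === "pending") throw entry.promise; // caught by <Suspense>
  return entry.value; // the re-render after the promise settles lands here
}
```

Without a key, two different useTracker calls — or the same call on a re-render — could not be matched back to their in-flight promises, which is why the new API asks for one.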
grubba
1,407,840
Register for Notifications on Android SDK 33
Problem: Notifications are not being received after upgrading to Android SDK 33. Solution: In...
0
2023-03-20T14:14:43
https://dev.to/lancer1977/register-for-notifications-on-android-sdk-33-ao9
Problem: Notifications are not being received after upgrading to Android SDK 33. Solution: In Android SDK 33 you now have to request permission to display notifications. Asking for POST_NOTIFICATIONS alone wasn't sufficient; some of these may be overkill, but this is what worked for me. To get the notifications permission, you first need to request permission for the following items: ``` private string[] PermissionsForNotifications = { Manifest.Permission.ReceiveBootCompleted, Manifest.Permission.WakeLock, Manifest.Permission.Vibrate, Manifest.Permission.Internet, Manifest.Permission.AccessNetworkState, Manifest.Permission.PostNotifications, Manifest.Permission.ForegroundService, "com.google.android.c2dm.permission.RECEIVE", "com.google.android.finsky.permission.BIND_GET_INSTALL_REFERRER_SERVICE", "android.permission.REQUEST_IGNORE_BATTERY_OPTIMIZATIONS", "android.permission.REQUEST_COMPANION_RUN_IN_BACKGROUND" }; ``` When calling the registration, a request code also seemed to be required. ``` private int RequestNotificationId = 1000; ActivityCompat.RequestPermissions(CurrentActivity, PermissionsForNotifications, RequestNotificationId); ```
lancer1977
1,407,884
Use AWS Controllers for Kubernetes to deploy a Serverless data processing solution with SQS, Lambda and DynamoDB
In this blog post, you will be using AWS Controllers for Kubernetes on an Amazon EKS cluster to put...
22,895
2023-03-20T16:03:54
https://abhishek1987.medium.com/use-aws-controllers-for-kubernetes-to-deploy-a-serverless-data-processing-solution-with-sqs-lambda-62025dba97bf
kubernetes, serverless, tutorial, cloud
In this blog post, you will be using [AWS Controllers for Kubernetes](https://aws-controllers-k8s.github.io/community/docs/community/overview/) on an Amazon EKS cluster to put together a solution wherein data from an [Amazon SQS queue](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-queue-types.html) is processed by an AWS [Lambda function](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) and persisted to a [DynamoDB](https://docs.aws.amazon.com/dynamodb/index.html) table. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b2gq6nd1laz1akl9yiod.png) AWS Controllers for Kubernetes (also known as **ACK**) leverage [Kubernetes Custom Resource and Custom Resource Definitions](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#custom-resources) and give you the ability to manage and use AWS services directly from Kubernetes without needing to define resources outside of the cluster. The idea behind `ACK` is to enable Kubernetes users to describe the desired state of AWS resources using the Kubernetes API and configuration language. `ACK` will then take care of provisioning and managing the AWS resources to match the desired state. This is achieved by using Service controllers that are responsible for managing the lifecycle of a particular AWS service. Each `ACK` service controller is packaged into a separate container image that is published in a public repository corresponding to an individual `ACK` service controller. There is no single ACK container image. Instead, there are container images for each individual ACK service controller that manages resources for a particular AWS API. 
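The reconcile-to-desired-state idea described above can be modeled in a few lines. This is only a toy sketch of the generic controller pattern ACK builds on — not ACK's actual Go implementation — and the resource shapes are invented for illustration:

```javascript
// Toy model of the controller pattern: a reconciler compares desired
// state (resources declared in the cluster) with actual state (what
// exists in the cloud) and emits the actions needed to converge them.
function reconcile(desired, actual) {
  const actions = [];
  for (const [name, spec] of Object.entries(desired)) {
    if (!(name in actual)) actions.push({ op: "create", name, spec });
    else if (JSON.stringify(actual[name]) !== JSON.stringify(spec))
      actions.push({ op: "update", name, spec });
  }
  for (const name of Object.keys(actual)) {
    if (!(name in desired)) actions.push({ op: "delete", name });
  }
  return actions;
}
```

A real controller runs this loop continuously against the live APIs; the point here is only that you declare *what* you want and the controller works out *how* to get there.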
*This blog post will walk you through how to use the SQS, DynamoDB and Lambda service controllers for ACK.* ## Prerequisites To follow along step-by-step, in addition to an AWS account, you will need to have [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html), [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl) and [helm](https://helm.sh/docs/intro/install/) installed. There are a variety of ways in which you can create an [Amazon EKS cluster](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html). I prefer using the [eksctl](https://eksctl.io/) CLI because of the convenience it offers. Creating an EKS cluster using `eksctl` can be as easy as this: ```bash eksctl create cluster --name my-cluster --region region-code ``` For details, refer to [Getting started with Amazon EKS – eksctl](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html). Clone this GitHub repository and change to the right directory: ```bash git clone https://github.com/abhirockzz/k8s-ack-sqs-lambda cd k8s-ack-sqs-lambda ``` OK, let's get started!
## Set up the ACK service controllers for AWS Lambda, SQS and DynamoDB ### Install ACK controllers Log into the Helm registry that stores the ACK charts: ```bash aws ecr-public get-login-password --region us-east-1 | helm registry login --username AWS --password-stdin public.ecr.aws ``` Deploy the ACK service controller for Amazon Lambda using the `lambda-chart` Helm chart: ```bash RELEASE_VERSION_LAMBDA_ACK=$(curl -sL "https://api.github.com/repos/aws-controllers-k8s/lambda-controller/releases/latest" | grep '"tag_name":' | cut -d'"' -f4) helm install --create-namespace -n ack-system oci://public.ecr.aws/aws-controllers-k8s/lambda-chart "--version=${RELEASE_VERSION_LAMBDA_ACK}" --generate-name --set=aws.region=us-east-1 ``` Deploy the ACK service controller for SQS using the `sqs-chart` Helm chart: ```bash RELEASE_VERSION_SQS_ACK=$(curl -sL "https://api.github.com/repos/aws-controllers-k8s/sqs-controller/releases/latest" | grep '"tag_name":' | cut -d'"' -f4) helm install --create-namespace -n ack-system oci://public.ecr.aws/aws-controllers-k8s/sqs-chart "--version=${RELEASE_VERSION_SQS_ACK}" --generate-name --set=aws.region=us-east-1 ``` Deploy the ACK service controller for DynamoDB using the `dynamodb-chart` Helm chart: ```bash RELEASE_VERSION_DYNAMODB_ACK=$(curl -sL "https://api.github.com/repos/aws-controllers-k8s/dynamodb-controller/releases/latest" | grep '"tag_name":' | cut -d'"' -f4) helm install --create-namespace -n ack-system oci://public.ecr.aws/aws-controllers-k8s/dynamodb-chart "--version=${RELEASE_VERSION_DYNAMODB_ACK}" --generate-name --set=aws.region=us-east-1 ``` Now, it's time to configure the IAM permissions for the controllers to manage Lambda, DynamoDB and SQS. ### Configure IAM permissions **Create an OIDC identity provider for your cluster** > For the below steps, replace the `EKS_CLUSTER_NAME` and `AWS_REGION` variables with your cluster name and region.
```bash export EKS_CLUSTER_NAME=demo-eks-cluster export AWS_REGION=us-east-1 eksctl utils associate-iam-oidc-provider --cluster $EKS_CLUSTER_NAME --region $AWS_REGION --approve OIDC_PROVIDER=$(aws eks describe-cluster --name $EKS_CLUSTER_NAME --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f2- | cut -d '/' -f2-) ``` ### Create IAM roles for Lambda, SQS and DynamoDB ACK service controllers **ACK Lambda controller** Set the following environment variables: ```bash ACK_K8S_SERVICE_ACCOUNT_NAME=ack-lambda-controller ACK_K8S_NAMESPACE=ack-system AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text) ``` Create the trust policy for the IAM role: ```bash read -r -d '' TRUST_RELATIONSHIP <<EOF { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}" }, "Action": "sts:AssumeRoleWithWebIdentity", "Condition": { "StringEquals": { "${OIDC_PROVIDER}:sub": "system:serviceaccount:${ACK_K8S_NAMESPACE}:${ACK_K8S_SERVICE_ACCOUNT_NAME}" } } } ] } EOF echo "${TRUST_RELATIONSHIP}" > trust_lambda.json ``` Create the IAM role: ```bash ACK_CONTROLLER_IAM_ROLE="ack-lambda-controller" ACK_CONTROLLER_IAM_ROLE_DESCRIPTION="IRSA role for ACK lambda controller deployment on EKS cluster using Helm charts" aws iam create-role --role-name "${ACK_CONTROLLER_IAM_ROLE}" --assume-role-policy-document file://trust_lambda.json --description "${ACK_CONTROLLER_IAM_ROLE_DESCRIPTION}" ``` Attach IAM policy to the IAM role: ```bash # we are getting the policy directly from the ACK repo INLINE_POLICY="$(curl https://raw.githubusercontent.com/aws-controllers-k8s/lambda-controller/main/config/iam/recommended-inline-policy)" aws iam put-role-policy \ --role-name "${ACK_CONTROLLER_IAM_ROLE}" \ --policy-name "ack-recommended-policy" \ --policy-document "${INLINE_POLICY}" ``` Attach ECR permissions to the controller IAM role - these are required since Lambda functions will be 
pulling images from ECR. ```bash aws iam put-role-policy \ --role-name "${ACK_CONTROLLER_IAM_ROLE}" \ --policy-name "ecr-permissions" \ --policy-document file://ecr-permissions.json ``` Associate the IAM role to a Kubernetes service account: ```bash ACK_CONTROLLER_IAM_ROLE_ARN=$(aws iam get-role --role-name=$ACK_CONTROLLER_IAM_ROLE --query Role.Arn --output text) export IRSA_ROLE_ARN=eks.amazonaws.com/role-arn=$ACK_CONTROLLER_IAM_ROLE_ARN kubectl annotate serviceaccount -n $ACK_K8S_NAMESPACE $ACK_K8S_SERVICE_ACCOUNT_NAME $IRSA_ROLE_ARN ``` Repeat the steps for the SQS controller. **ACK SQS controller** Set the following environment variables: ```bash ACK_K8S_SERVICE_ACCOUNT_NAME=ack-sqs-controller ACK_K8S_NAMESPACE=ack-system AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text) ``` Create the trust policy for the IAM role: ```bash read -r -d '' TRUST_RELATIONSHIP <<EOF { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}" }, "Action": "sts:AssumeRoleWithWebIdentity", "Condition": { "StringEquals": { "${OIDC_PROVIDER}:sub": "system:serviceaccount:${ACK_K8S_NAMESPACE}:${ACK_K8S_SERVICE_ACCOUNT_NAME}" } } } ] } EOF echo "${TRUST_RELATIONSHIP}" > trust_sqs.json ``` Create the IAM role: ```bash ACK_CONTROLLER_IAM_ROLE="ack-sqs-controller" ACK_CONTROLLER_IAM_ROLE_DESCRIPTION="IRSA role for ACK sqs controller deployment on EKS cluster using Helm charts" aws iam create-role --role-name "${ACK_CONTROLLER_IAM_ROLE}" --assume-role-policy-document file://trust_sqs.json --description "${ACK_CONTROLLER_IAM_ROLE_DESCRIPTION}" ``` Attach IAM policy to the IAM role: ```bash # for sqs controller, we use the managed policy ARN instead of the inline policy (unlike the Lambda controller) POLICY_ARN="$(curl https://raw.githubusercontent.com/aws-controllers-k8s/sqs-controller/main/config/iam/recommended-policy-arn)" aws iam attach-role-policy --role-name 
"${ACK_CONTROLLER_IAM_ROLE}" --policy-arn "${POLICY_ARN}" ``` Associate the IAM role to a Kubernetes service account: ```bash ACK_CONTROLLER_IAM_ROLE_ARN=$(aws iam get-role --role-name=$ACK_CONTROLLER_IAM_ROLE --query Role.Arn --output text) export IRSA_ROLE_ARN=eks.amazonaws.com/role-arn=$ACK_CONTROLLER_IAM_ROLE_ARN kubectl annotate serviceaccount -n $ACK_K8S_NAMESPACE $ACK_K8S_SERVICE_ACCOUNT_NAME $IRSA_ROLE_ARN ``` Repeat the steps for the DynamoDB controller. **ACK DynamoDB controller** Set the following environment variables: ```bash ACK_K8S_SERVICE_ACCOUNT_NAME=ack-dynamodb-controller ACK_K8S_NAMESPACE=ack-system AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text) ``` Create the trust policy for the IAM role: ```bash read -r -d '' TRUST_RELATIONSHIP <<EOF { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}" }, "Action": "sts:AssumeRoleWithWebIdentity", "Condition": { "StringEquals": { "${OIDC_PROVIDER}:sub": "system:serviceaccount:${ACK_K8S_NAMESPACE}:${ACK_K8S_SERVICE_ACCOUNT_NAME}" } } } ] } EOF echo "${TRUST_RELATIONSHIP}" > trust_dynamodb.json ``` Create the IAM role: ```bash ACK_CONTROLLER_IAM_ROLE="ack-dynamodb-controller" ACK_CONTROLLER_IAM_ROLE_DESCRIPTION="IRSA role for ACK dynamodb controller deployment on EKS cluster using Helm charts" aws iam create-role --role-name "${ACK_CONTROLLER_IAM_ROLE}" --assume-role-policy-document file://trust_dynamodb.json --description "${ACK_CONTROLLER_IAM_ROLE_DESCRIPTION}" ``` Attach IAM policy to the IAM role: ```bash # for dynamodb controller, we use the managed policy ARN instead of the inline policy (unlike the Lambda controller, and as we did for SQS) POLICY_ARN="$(curl https://raw.githubusercontent.com/aws-controllers-k8s/dynamodb-controller/main/config/iam/recommended-policy-arn)" aws iam attach-role-policy --role-name "${ACK_CONTROLLER_IAM_ROLE}" --policy-arn "${POLICY_ARN}" ``` Associate the IAM
role to a Kubernetes service account: ```bash ACK_CONTROLLER_IAM_ROLE_ARN=$(aws iam get-role --role-name=$ACK_CONTROLLER_IAM_ROLE --query Role.Arn --output text) export IRSA_ROLE_ARN=eks.amazonaws.com/role-arn=$ACK_CONTROLLER_IAM_ROLE_ARN kubectl annotate serviceaccount -n $ACK_K8S_NAMESPACE $ACK_K8S_SERVICE_ACCOUNT_NAME $IRSA_ROLE_ARN ``` ### Restart ACK controller Deployments and verify the setup Restart the ACK service controller `Deployment`s using the following commands - this will update the service controller `Pod`s with the `IRSA` environment variables. Get the list of `ACK` service controller deployments: ```bash export ACK_K8S_NAMESPACE=ack-system kubectl get deployments -n $ACK_K8S_NAMESPACE ``` Restart Lambda, SQS and DynamoDB controller `Deployment`s: ```bash DEPLOYMENT_NAME_LAMBDA=<enter deployment name for lambda controller> kubectl -n $ACK_K8S_NAMESPACE rollout restart deployment $DEPLOYMENT_NAME_LAMBDA DEPLOYMENT_NAME_SQS=<enter deployment name for sqs controller> kubectl -n $ACK_K8S_NAMESPACE rollout restart deployment $DEPLOYMENT_NAME_SQS DEPLOYMENT_NAME_DYNAMODB=<enter deployment name for dynamodb controller> kubectl -n $ACK_K8S_NAMESPACE rollout restart deployment $DEPLOYMENT_NAME_DYNAMODB ``` List `Pod`s for these `Deployment`s. Verify that the `AWS_WEB_IDENTITY_TOKEN_FILE` and `AWS_ROLE_ARN` environment variables exist for your Kubernetes `Pod` using the following commands: ```bash kubectl get pods -n $ACK_K8S_NAMESPACE LAMBDA_POD_NAME=<enter Pod name for lambda controller> kubectl describe pod -n $ACK_K8S_NAMESPACE $LAMBDA_POD_NAME | grep "^\s*AWS_" SQS_POD_NAME=<enter Pod name for sqs controller> kubectl describe pod -n $ACK_K8S_NAMESPACE $SQS_POD_NAME | grep "^\s*AWS_" DYNAMODB_POD_NAME=<enter Pod name for dynamodb controller> kubectl describe pod -n $ACK_K8S_NAMESPACE $DYNAMODB_POD_NAME | grep "^\s*AWS_" ``` Now that the ACK service controllers have been set up and configured, you can create AWS resources!
## Create SQS queue, DynamoDB table and deploy the Lambda function **Create SQS queue** In the file `sqs-queue.yaml`, replace the `us-east-1` region with your preferred region as well as the AWS account ID. This is what the `ACK` manifest for SQS queue looks like: ```yaml apiVersion: sqs.services.k8s.aws/v1alpha1 kind: Queue metadata: name: sqs-queue-demo-ack annotations: services.k8s.aws/region: us-east-1 spec: queueName: sqs-queue-demo-ack policy: | { "Statement": [{ "Sid": "__owner_statement", "Effect": "Allow", "Principal": { "AWS": "AWS_ACCOUNT_ID" }, "Action": "sqs:SendMessage", "Resource": "arn:aws:sqs:us-east-1:AWS_ACCOUNT_ID:sqs-queue-demo-ack" }] } ``` Create the queue using the following command: ```bash kubectl apply -f sqs-queue.yaml # list the queue kubectl get queue ``` **Create DynamoDB table** This is what the `ACK` manifest for `DynamoDB` table looks like: ```yaml apiVersion: dynamodb.services.k8s.aws/v1alpha1 kind: Table metadata: name: customer annotations: services.k8s.aws/region: us-east-1 spec: attributeDefinitions: - attributeName: email attributeType: S billingMode: PAY_PER_REQUEST keySchema: - attributeName: email keyType: HASH tableName: customer ``` > You can replace the `us-east-1` region with your preferred region. Create a table (named `customer`) using the following command: ```bash kubectl apply -f dynamodb-table.yaml # list the tables kubectl get tables ``` **Build function binary and create Docker image** ```bash GOARCH=amd64 GOOS=linux go build -o main main.go aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws docker build -t demo-sqs-dynamodb-func-ack . 
``` Create a private `ECR` repository, tag and push the Docker image to `ECR`: ```bash AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text) aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com aws ecr create-repository --repository-name demo-sqs-dynamodb-func-ack --region us-east-1 docker tag demo-sqs-dynamodb-func-ack:latest $AWS_ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/demo-sqs-dynamodb-func-ack:latest docker push $AWS_ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/demo-sqs-dynamodb-func-ack:latest ``` Create an IAM execution role for the Lambda function and attach the required policies: ```bash export ROLE_NAME=demo-sqs-dynamodb-func-ack-role ROLE_ARN=$(aws iam create-role \ --role-name $ROLE_NAME \ --assume-role-policy-document '{"Version": "2012-10-17","Statement": [{ "Effect": "Allow", "Principal": {"Service": "lambda.amazonaws.com"}, "Action": "sts:AssumeRole"}]}' \ --query 'Role.[Arn]' --output text) aws iam attach-role-policy --role-name $ROLE_NAME --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole ``` Since the Lambda function needs to write data to `DynamoDB` and consume messages from SQS, let's add the following policies to the IAM role: ```bash aws iam put-role-policy \ --role-name "${ROLE_NAME}" \ --policy-name "dynamodb-put" \ --policy-document file://dynamodb-put.json aws iam put-role-policy \ --role-name "${ROLE_NAME}" \ --policy-name "sqs-permissions" \ --policy-document file://sqs-permissions.json ``` **Create the Lambda function** Update `function.yaml` file with the following info: - `imageURI` - the URI of the Docker image that you pushed to ECR e.g. `<AWS_ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com/demo-sqs-dynamodb-func-ack:latest` - `role` - the ARN of the IAM role that you created for the Lambda function e.g.
`arn:aws:iam::<AWS_ACCOUNT_ID>:role/demo-sqs-dynamodb-func-ack-role` This is what the `ACK` manifest for the Lambda function looks like: ```yaml apiVersion: lambda.services.k8s.aws/v1alpha1 kind: Function metadata: name: demo-sqs-dynamodb-func-ack annotations: services.k8s.aws/region: us-east-1 spec: architectures: - x86_64 name: demo-sqs-dynamodb-func-ack packageType: Image code: imageURI: AWS_ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/demo-sqs-dynamodb-func-ack:latest environment: variables: TABLE_NAME: customer role: arn:aws:iam::AWS_ACCOUNT_ID:role/demo-sqs-dynamodb-func-ack-role description: A function created by ACK lambda-controller ``` To create the Lambda function, run the following command: ```bash kubectl create -f function.yaml # list the function kubectl get functions ``` ### Add SQS trigger configuration Add an SQS trigger, which will invoke the Lambda function when a message is sent to the SQS queue. Here is an example using the AWS Console - open the Lambda function in the AWS Console and click on the **Add trigger** button. Select **SQS** as the trigger source, select the SQS queue and click on the **Add** button. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ayv4kkc9b3eg333ruma4.png) Now you are ready to try out the end-to-end solution! ## Test the application Send a few messages to the SQS queue.
For the purposes of this demo, you can use the AWS CLI: ```bash export SQS_QUEUE_URL=$(kubectl get queues/sqs-queue-demo-ack -o jsonpath='{.status.queueURL}') aws sqs send-message --queue-url $SQS_QUEUE_URL --message-body user1@foo.com --message-attributes 'name={DataType=String, StringValue="user1"}, city={DataType=String,StringValue="seattle"}' aws sqs send-message --queue-url $SQS_QUEUE_URL --message-body user2@foo.com --message-attributes 'name={DataType=String, StringValue="user2"}, city={DataType=String,StringValue="tel aviv"}' aws sqs send-message --queue-url $SQS_QUEUE_URL --message-body user3@foo.com --message-attributes 'name={DataType=String, StringValue="user3"}, city={DataType=String,StringValue="new delhi"}' aws sqs send-message --queue-url $SQS_QUEUE_URL --message-body user4@foo.com --message-attributes 'name={DataType=String, StringValue="user4"}, city={DataType=String,StringValue="new york"}' ``` The Lambda function should be invoked and the data should be written to the DynamoDB table. Check the DynamoDB table using the CLI (or AWS console): ```bash aws dynamodb scan --table-name customer ``` ## Clean up After you have explored the solution, you can clean up the resources by running the following commands: Delete SQS queue, `DynamoDB` table and the Lambda function: ```bash kubectl delete -f sqs-queue.yaml kubectl delete -f function.yaml kubectl delete -f dynamodb-table.yaml ``` To uninstall the ACK service controllers, run the following commands: ```bash export ACK_SYSTEM_NAMESPACE=ack-system helm ls -n $ACK_SYSTEM_NAMESPACE helm uninstall -n $ACK_SYSTEM_NAMESPACE <enter name of the sqs chart> helm uninstall -n $ACK_SYSTEM_NAMESPACE <enter name of the lambda chart> helm uninstall -n $ACK_SYSTEM_NAMESPACE <enter name of the dynamodb chart> ``` ## Conclusion and next steps In this post, we have seen how to use AWS Controllers for Kubernetes to create a Lambda function, SQS, `DynamoDB` table and wire them together to deploy a solution. 
All of this (almost) was done using Kubernetes! I encourage you to try out other AWS services supported by `ACK` - [here is a complete list](https://aws-controllers-k8s.github.io/community/docs/community/services/). Happy Building!
abhirockzz
1,407,969
HTML tags for beginners
HTML elements consist of start and end tags. If you start the tag, you must always end it! An example...
0
2023-03-20T15:35:10
https://dev.to/codejem/html-tags-for-beginners-46b7
beginners, html, frontend, devjournal
HTML elements consist of start and end tags. If you start the tag, you must always end it! An example of this would be: `<tagname>Text goes here...</tagname>`

As you can see, this tag has a start tag and an end tag. These are some HTML elements where you will be able to recognise the start and end of the tags:

1. `<h1>Heading goes here..</h1>` This is the heading element. You can see that in the middle of the start/end tag is where you insert the heading.

2. `<p>Paragraph goes here..</p>` This is the paragraph element. You can see that in the middle of the start/end tag is where you insert your paragraph.

3. `<strong>Text...</strong>` This is the strong element. The strong element is used to wrap around a certain bit of text to show its strong importance. An example of the `<strong>` element would be:

```
<p>This is an example of a start/closing tag.<strong>You must always close the tag!</strong></p>
```

Outcome: This is an example of a start/closing tag. **You must always close the tag!**

These are only just a few of the HTML elements. There are plenty of others, but in this blog I wanted to focus on understanding the very basics of starting a tag and ending it. Thank you for reading this blog. I hope that it has helped. In the meantime... `<strong>`**KEEP CODING 😁**`</strong>`
codejem
1,408,239
How to make long functions more readable
Long functions are usually harder to understand. While there are techniques in OOP (Object-oriented...
0
2023-03-20T19:22:47
https://tahazsh.com/blog/make-long-functions-more-readable
javascript, webdev, programming, codequality
Long functions are usually harder to understand. While there are techniques in OOP (object-oriented programming) to make them smaller, some developers prefer to see the whole code in one place.

Having all code in a big function is not necessarily bad. What's actually bad is messy, unclear code in that function. Fortunately, there are some things you can do to improve that.

## Extract common functions

If your long function contains some code that you've already written somewhere else outside that function, then it's better to extract that piece of code into a new function; this is the basic idea of DRY (don't repeat yourself).

An example of this would be extracting a utility function like `slugify` to convert the passed string to a slug. If you are using it in multiple places in your code, extract it into a new function.

```js
// util.js
export function slugify(aString) {
  // convert aString to a slug and return it
}
```

```js
import { slugify } from './util.js'

function aVeryLongFunction() {
  // ...

  // At this point you needed to slugify some text
  const slug = slugify(title)

  // ...
}
```

Let's say your function is still long even after you've extracted all the common pieces into helper functions. What should you do in this case? First, you need to group related code in chunks, then pick one of these two options:

- Add comments to explain what each code chunk is doing.
- Extract each code chunk into a nested function.

It's more of a personal preference; I prefer the latter because, first, I like reading clear, explicit function names more than comments, and second, I can use the extra scope of the new function to simplify the nested function more.

## Grouping related code chunks

Typically, we write code in the order it runs in. But there's one thing that sometimes is written far from its related code: variables. Some developers prefer to write all the variables at the beginning of the function, to show what variables will be used in that function.
Other developers prefer to write them as close to the code chunk as possible. Both choices are good. The important thing here is to stay consistent throughout your codebase.

Another step I like to do here is to separate each code chunk with a few spaces to make it easier to distinguish between different chunks.

## The comments approach

After you've separated your related code chunks, it's time to tell the reader what each chunk is doing. You can easily do this by writing a clean explanation above each chunk.

```js
function aVeryLongFunction() {
  // [explain what it's doing with comments]
  // ...
  // ...

  // [explain what it's doing with comments]
  // ...
  // ...

  // [explain what it's doing with comments]
  // ...
  // ...
}
```

If you like writing comments and like this approach, that's good. There's nothing wrong with that approach; it's actually used in very popular open source projects. If you don't like comments, though, then you can extract each chunk into a nested function.

## The nested functions approach

The end result of this approach is similar to the previous one, but instead of writing comments, you extract each chunk into a nested function in the long function. The main difference between extracting nested functions and regular functions is that in nested functions you have access to the scope of the long function. This means you don't need to pass any parameters to them.

An important thing to note here is that if you choose to extract nested functions, you have to keep the shared local variables outside the extracted functions to make them available to other nested functions.

After applying it, your code will look like this:

```js
function aVeryLongFunction() {
  function doThis() {}
  function thenDoThis() {}
  function andThenThis() {}

  // Define all your shared local variables here.
// All nested functions above should access them // directly without passing them // Then start calling each one in order doThis() thenDoThis() andThenThis() } ``` With this approach, I can read what the function is doing in clear, separated steps. If I need to see how each step is implemented, I go to the related function.
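To make the pattern concrete, here is a small worked example (the task and all names in it, such as `processOrder`, are hypothetical; it simply fills in the `doThis`/`thenDoThis` skeleton above with real steps):

```javascript
// Hypothetical example of the nested-functions approach:
// process an order by validating it, totaling it, and building a receipt.
function processOrder(order) {
  function validateItems() {
    if (!Array.isArray(order.items) || order.items.length === 0) {
      throw new Error('Order must contain at least one item');
    }
  }

  function computeTotal() {
    for (const item of order.items) {
      total += item.price * item.quantity;
    }
  }

  function buildReceipt() {
    receiptLines = order.items.map(
      (item) => `${item.name} x${item.quantity}: ${item.price * item.quantity}`
    );
    receiptLines.push(`TOTAL: ${total}`);
  }

  // Shared local variables, kept outside the nested functions
  // so every step can access them directly without parameters.
  let total = 0;
  let receiptLines = [];

  // Call each step in order; this reads like a table of contents.
  validateItems();
  computeTotal();
  buildReceipt();

  return { total, receipt: receiptLines.join('\n') };
}
```

The three calls at the bottom summarize what the function does; the nested functions hold the details, and `total` and `receiptLines` live in the outer scope so every step can reach them.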
tahazsh
1,408,296
Apify ❤️ Python: Releasing a Python SDK for Actors
Whether you are scraping with BeautifulSoup, Scrapy, Selenium, or Playwright, the Apify Python SDK...
0
2023-03-20T19:55:46
https://blog.apify.com/apify-python-sdk/
python, webscraping, webautomation
---
title: Apify ❤️ Python: Releasing a Python SDK for Actors
published: true
date: 2023-03-14 10:37:49 UTC
tags: python, webscraping, webautomation
canonical_url: https://blog.apify.com/apify-python-sdk/
---

Whether you are scraping with BeautifulSoup, Scrapy, Selenium, or Playwright, the Apify Python SDK helps you run your project in the cloud at any scale.

At Apify, our mission is to empower people to create great web scrapers using the best technologies possible and to run them in the cloud effortlessly. That's why we're thrilled to introduce our new [Apify SDK for Python](https://docs.apify.com/sdk/python/?ref=apify), allowing you to write [Apify Actors](https://docs.apify.com/platform/actors?ref=apify) in Python and tap into the wide range of libraries and tools in the Python ecosystem that make web scraping simple and efficient.

```python
from apify import Actor
from bs4 import BeautifulSoup
import requests

async def main():
    async with Actor:
        input = await Actor.get_input()
        response = requests.get(input['url'])
        soup = BeautifulSoup(response.content, 'html.parser')
        await Actor.push_data({
            'url': input['url'],
            'title': soup.title.string,
        })
```

When combined with the Apify platform, Actors have access to a wide variety of features designed specifically to meet developers' web scraping and automation needs. These include on-demand scaling of computing resources, run scheduling and monitoring, data center and residential proxies, as well as the ability to [publish Actors](https://docs.apify.com/platform/actors/publishing?ref=apify) in Apify Store and even [monetize your code](https://docs.apify.com/academy/get-most-of-actors/monetizing-your-actor?ref=apify).
Whether you have a simple scraper using [BeautifulSoup](https://docs.apify.com/sdk/python/docs/guides/beautiful-soup?ref=apify), a powerful web spider written with [Scrapy](https://docs.apify.com/sdk/python/docs/guides/scrapy?ref=apify), or you use [Selenium](https://docs.apify.com/sdk/python/docs/guides/selenium?ref=apify) or [Playwright](https://docs.apify.com/sdk/python/docs/guides/playwright?ref=apify) to automate browser interaction, the Apify SDK for Python will help you run your projects in the cloud at any scale.

![Apify Python](https://blog.apify.com/content/images/2023/03/image.png)

## Getting Started

Actors were designed with the purpose of being used together with the Apify platform. So, to unlock the full potential of Actors, let's create one in Apify Console. This is a fairly straightforward process, and you will only need to [sign up for a free Apify account](https://console.apify.com/sign-up?ref=apify) to follow along.

Once you're in Apify Console and you go to [Actors → Create new](https://console.apify.com/actors/templates?category=python&ref=apify) there, you're presented with a choice of Actor templates:

![Web Scraping Python templates](https://blog.apify.com/content/images/2023/03/image-4.png)

We have predefined Actor templates for all the major web scraping libraries like Scrapy, BeautifulSoup, Playwright, and Selenium. Once you create an Actor from your selected Actor template, you can edit its code to perform the scraping tasks you need, run the Actor, and, if you're happy with it, integrate it with your existing data pipelines and schedule it to scrape data at regular intervals.

## Creating Actors locally

If you want to create and run Apify Actors directly on your local computer so that you can, for example, track the source code in a version control system, you can do so using the [Apify CLI](https://docs.apify.com/cli/?ref=apify), using the command `apify create my-python-actor`.
![Apify Python SDK Templates](https://blog.apify.com/content/images/2023/03/image-3.png)

When you execute that command, you'll be presented with the same choice of templates as in Apify Console. Once you choose a template, an Actor will be created for you in the `my-python-actor` directory, and all its requirements will be installed in a virtual environment in `my-python-actor/.venv`. To run the actor, you can just run `cd my-python-actor && apify run`.

When you run an Actor locally, its output is stored in the `storage` folder. There, you can find the contents of the Actor's default [dataset](https://docs.apify.com/sdk/python/reference/class/Dataset?ref=apify), [key-value store](https://docs.apify.com/sdk/python/reference/class/KeyValueStore?ref=apify), and [request queue](https://docs.apify.com/sdk/python/reference/class/RequestQueue?ref=apify).

To push the Actor to Apify Console and run it there, you can use the `apify push` command, which will upload the actor's source code to the Apify platform and build the actor there.

## Get in touch

We're excited to see what you will create with the Apify SDK for Python. If you find any issues, please report them in the [SDK's GitHub repository](https://github.com/apify/apify-sdk-python?ref=apify).

> [🐍 Try writing an Actor in Python](https://console.apify.com/actors/templates?category=python&ref=apify)

And don't forget to join our [developer community on Discord](https://discord.com/invite/jyEM2PRvMU?ref=apify). We will be waiting for you there to hear your feedback and help you with any questions that might arise.
frantiseknesveda
1,408,670
openGauss Users
You can use CREATE USER and ALTER USER to create and manage database users, respectively. openGauss...
0
2023-03-21T02:53:41
https://dev.to/tongxi99658318/opengauss-users-54pc
opengauss
You can use CREATE USER and ALTER USER to create and manage database users, respectively. openGauss contains one or more named database users and roles that are shared across openGauss. However, these users and roles do not share data. That is, a user can connect to any database, but after the connection is successful, any user can access only the database declared in the connection request.

In non-separation-of-duties scenarios, openGauss user accounts can be created and deleted only by a system administrator or a security administrator with the CREATEROLE attribute. In separation-of-duties scenarios, a user account can be created only by an initial user or a security administrator. When a user logs in, openGauss authenticates the user. A user can own databases and database objects (such as tables), and grant permissions of these objects to other users and roles. In addition to system administrators, users with the CREATEDB attribute can create databases and grant permissions on these databases.

Adding, Modifying, and Deleting Users

To create a user, use the SQL statement CREATE USER. For example, create a user joe and set the CREATEDB attribute for the user.

```
openGauss=# CREATE USER joe WITH CREATEDB PASSWORD "xxxxxxxxx";
CREATE ROLE
```

To create a system administrator, use the CREATE USER statement with the SYSADMIN parameter. To delete an existing user, use DROP USER. To change a user account (for example, rename the user or change the password), use ALTER USER. To view a user list, query the PG_USER view.

```
openGauss=# SELECT * FROM pg_user;
```

To view user attributes, query the system catalog PG_AUTHID.

```
openGauss=# SELECT * FROM pg_authid;
```

Private Users

If multiple service departments use different database user accounts to perform service operations and a database maintenance department at the same level uses database administrator accounts to perform maintenance operations, service departments may require that database administrators, without specific authorization, can manage (DROP, ALTER, and TRUNCATE) their data but cannot access (INSERT, DELETE, UPDATE, SELECT, and COPY) the data. That is, the management permissions of database administrators for tables need to be isolated from their access permissions to improve the data security of common users.

In separation-of-duties mode, a database administrator does not have permissions for the tables in schemas of other users. In this case, database administrators have neither management permissions nor access permissions, which does not meet the requirements of the service departments mentioned above. Therefore, openGauss provides private users to solve the problem. That is, create private users with the INDEPENDENT attribute in non-separation-of-duties mode.

```
openGauss=# CREATE USER user_independent WITH INDEPENDENT IDENTIFIED BY "1234@abc";
```

System administrators and security administrators with the CREATEROLE attribute can manage (DROP, ALTER, and TRUNCATE) objects of private users but cannot access (INSERT, DELETE, SELECT, UPDATE, COPY, GRANT, REVOKE, and ALTER OWNER) the objects before being authorized.

NOTICE: PG_STATISTIC and PG_STATISTIC_EXT store sensitive information about statistical objects, such as high-frequency MCVs. The system administrator can still access the two system catalogs to obtain the statistics of the tables to which private users belong.

Permanent User

openGauss provides the permanent user solution. That is, create a permanent user with the PERSISTENCE attribute.

```
openGauss=# CREATE USER user_persistence WITH PERSISTENCE IDENTIFIED BY "1234@abc";
```

Only the initial user is allowed to create, modify, and delete permanent users with the PERSISTENCE attribute.
tongxi99658318
1,408,677
Basic database operations: create and manage tablespaces (openGauss)
Create and manage tablespaces Background Information By using tablespaces, administrators can control...
0
2023-03-21T03:02:39
https://dev.to/490583523leo/basic-database-operations-create-and-manage-tablespaces-opengauss-2jll
Create and manage tablespaces Background Information By using tablespaces, administrators can control the disk layout of a database installation. This has the following advantages: If the partition or volume where the database is initialized is full, and more space cannot be expanded logically, you can create and use tablespaces on different partitions until the system reconfigures the space. Tablespaces allow administrators to arrange data location according to the usage patterns of database objects, thereby improving performance. A frequently used index can be placed on a stable and fast disk, such as a solid-state device. A table that stores archived data, rarely used or performance-critical, can be stored on a slower disk. The administrator can set the occupied disk space through the table space. It is used to prevent the table space from occupying other space on the same partition when sharing the partition with other data. The table space corresponds to a file system directory. It is assumed that the database node data directory /pg_location/mount1/path1 is an empty directory that the user has read and write permissions. The use of table space quota management will affect the performance by about 30%. MAXSIZE specifies the quota size of each database node, and the error range is within 500MB. Please confirm whether it is necessary to set the maximum value of the table space according to the actual situation. openGauss comes with two table spaces: pg_default and pg_global. Default table space pg_default: The default table space used to store non-shared system tables, user tables, user table index, temporary tables, temporary table index, and internal temporary tables. The corresponding storage directory is the base directory under the instance data directory. Shared tablespace pg_global: A tablespace used to store shared system tables. The corresponding storage directory is the global directory under the instance data directory. 
Precautions: In scenarios such as HCS, it is generally not recommended that users use custom tablespaces. User-defined tablespaces are usually used in conjunction with other storage media other than the main memory (that is, the storage device where the default tablespace is located, such as a disk) to isolate the IO resources that can be used by different businesses. In scenarios such as HCS, storage devices are Standardized configuration is adopted, and there is no other available storage medium. Improper use of custom table space is not conducive to the long-term stable operation of the system and affects the overall performance. Therefore, it is recommended to use the default table space. Steps create tablespace Run the following command to create user jack. openGauss=# CREATE USER jack IDENTIFIED BY 'xxxxxxxxx'; When the result displays the following information, it means the creation is successful. CREATE ROLE Execute the following command to create a tablespace. openGauss=# CREATE TABLESPACE fastspace RELATIVE LOCATION 'tablespace/tablespace_1'; When the result displays the following information, it means the creation is successful. CREATE TABLESPACE Among them, "fastspace" is the newly created tablespace, and "tablespace/tablespace_1" is an empty directory where the user has read and write permissions. The database system administrator executes the following command to grant the access authority of the "fastspace" tablespace to the data user jack. openGauss=# GRANT CREATE ON TABLESPACE fastspace TO jack; When the result is displayed as the following information, it means that the assignment is successful. GRANT Create objects in the tablespace If the user has the CREATE permission of the table space, he can create database objects on the table space, such as tables and indexes. Take creating a table as an example. Method 1: Execute the following command to create a table in the specified tablespace. 
openGauss=# CREATE TABLE foo(i int) TABLESPACE fastspace; When the result displays the following information, it means the creation is successful. CREATE TABLE Method 2: First use set default_tablespace to set the default tablespace, and then create the table. openGauss=# SET default_tablespace = 'fastspace'; SET openGauss=# CREATE TABLE foo2(i int); CREATE TABLE Suppose you set "fastspace" as the default tablespace, and then create table foo2. query tablespace Method 1: Check the pg_tablespace system table. The following command can check all table spaces defined by the system and users. openGauss=# SELECT spcname FROM pg_tablespace; Method 2: Use the meta-command of the gsql program to query the tablespace. openGauss=# \db Query table space usage Query the current usage of the tablespace. openGauss=# SELECT PG_TABLESPACE_SIZE('example'); Return the following information: pg_tablespace_size -------------------- 2146304 (1 row) Among them, 2146304 represents the size of the table space in bytes. Calculate tablespace usage. Table space usage = PG_TABLESPACE_SIZE/disk size of the directory where the table space is located. modify tablespace Run the following command to rename the tablespace fastspace to fspace. openGauss=# ALTER TABLESPACE fastspace RENAME TO fspace; ALTER TABLESPACE drop tablespace Run the following command to delete user jack. openGauss=# DROP USER jack CASCADE; DROP ROLE Run the following commands to delete tables foo and foo2. openGauss=# DROP TABLE foo; openGauss=# DROP TABLE foo2; When the result is displayed as the following information, it means the deletion is successful. DROP TABLE Run the following command to delete the tablespace fspace. openGauss=# DROP TABLESPACE fspace; DROP TABLESPACE Note: The user must be the owner of the tablespace or the system administrator to delete the tablespace.
490583523leo
1,408,694
OpenGauss common log introduction: system log
system log The logs generated by the openGauss runtime database node and the openGauss installation...
0
2023-03-21T03:33:27
https://dev.to/490583523leo/opengauss-common-log-introduction-system-log-8kl
system log The logs generated by the openGauss runtime database node and the openGauss installation and deployment are collectively called system logs. If openGauss fails during operation, you can use these system logs to locate the cause of the failure in time, and formulate a method to restore openGauss according to the log content. Log file storage path The running logs of the database nodes are placed in the corresponding directories in "/var/log/gaussdb/username/pg_log". The logs generated during the installation and uninstallation of OM openGauss are placed in the "/var/log/gaussdb/username/om" directory. Log file naming format Naming rules for database node running logs: postgresql-creation time.log By default, a new log file will be generated at 0 o'clock every day or when the log file is larger than 16MB or the database instance (database node) is restarted. Naming rules for CM operation logs: The log of cm_agent: cm_agent-creation time.log, cm_agent-creation time-current.log, system_call-creation time.log, system_call-creation time-current.log. The log of cm_server: cm_server-creation time.log, cm_server-creation time-current.log; key_event-creation time.log, key_event-creation time-current.log. om_monitor logs: om_monitor-creation time.log, om_monitor-creation time-current.log. Among them, the file without the current identifier is the historical log file, and the file with the current identifier is the current log file. When the process is initially called, the process will first create a log file with the current identifier. When the size of the log file exceeds 16MB, the current log file will be renamed to a historical log file, and a new current log file will be generated at the current time . 
Description of log content The default format of the log content of each line of the database node: Date + time + time zone + user name + database name + session ID + log level + log content The default format of each line of log content of cm_agent, cm_server, and om_monitor: Time + time zone + session ID + log content The SYSTEM_CALL system call log records the CM_AGENT calling tool commands during operation. The default format of each line of key_event log content: time + thread number + thread name: key event type + arbitration object instance ID + arbitration details.
490583523leo
1,408,770
creating badge-lists, fancy stuff
We're taking this, we're stealing it and we're making it open source. Like modern day Robin...
0
2023-03-21T06:25:46
https://dev.to/lizblake/creating-badge-lists-fancy-stuff-o30
webdev, javascript, beginners, opensource
![badge list](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ttbawbg80wklvel0z87i.png)

We're taking this, we're stealing it, and we're making it open source. Like modern-day Robin Hood. Since I've borrowed an iPad from the University, I've been getting into making stickers and drawing stuff, so here's a rough sketch of our badge-list.

![sketch](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dyhgpvayjvpboynqv7ew.png)

The basis of this list is a map like the one we employed in the previous homework. This will be used to create the elements of the badge list and update the count. Everything will be done through a Vercel endpoint with a JSON file that will need to be updated somehow, whether through a database or other means. The badge-element will include the badge-icon, badge-title, badge-information or badge-description, badge-creator-icon, badge-creator-name, and a badge-steps-list that needs to be fleshed out but will be similar to the badge-list, just within a smaller container. The drop-down icon is something that will be toggled with a reflective statement so that information can be populated from the Vercel endpoint only when triggered. The project will also need to be a11y compliant. Elements like information and titles should be included as slots, and the user should be able to select what icon they would like to use. While this is a rough summary, we will be implementing this general concept and then look towards adding more features and components.

As always, music.

{% embed https://open.spotify.com/track/3tjgYNFUvqyCGg5NEn4wYb?si=ce695a387d7e4d0e %}

Toured with them in Worcester, MA. They're badass.
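To make the map idea from above concrete, here is one possible sketch. The data shape, field names, and `badge-element` markup are assumptions for illustration, not the final component:

```javascript
// Hypothetical data shape: what the Vercel endpoint's JSON might return.
const badges = [
  { title: 'HTML Basics', description: 'Structure a page', creator: 'liz' },
  { title: 'CSS Layout', description: 'Flexbox and grid', creator: 'liz' },
];

// Build one <badge-element> per entry with map(), filling the slots
// named in the post (badge-title, badge-description, badge-creator-name).
function renderBadgeList(items) {
  return items
    .map(
      (b) => `<badge-element>
  <span slot="badge-title">${b.title}</span>
  <span slot="badge-description">${b.description}</span>
  <span slot="badge-creator-name">${b.creator}</span>
</badge-element>`
    )
    .join('\n');
}

const markup = renderBadgeList(badges);
```

The count update mentioned above would then just be `badges.length`, and the drop-down toggle could fetch the badge-steps-list lazily when opened.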
lizblake
1,408,920
Codecademy Python Terminal Project
https://github.com/cece3333/Flashcard_game.git One of my first project! A simple terminal game in...
0
2023-03-21T09:37:42
https://dev.to/cece3333/codecademy-python-terminal-project-43o9
codecademy, python, portfolio
https://github.com/cece3333/Flashcard_game.git

One of my first projects! A simple terminal game in which you can create, modify, delete, and show flashcards stored in a JSON file. A "play mode" is available, in which the front of a random flashcard is displayed for you to guess the back (like Anki without spaced repetition). This is the simplest version; I intend to change the code and add classes and objects to create more features.
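As a rough illustration of how such a game can work, here is a minimal sketch of loading the deck and picking a card for play mode (the JSON layout and function names are assumptions for illustration, not the repository's actual code):

```python
import json
import random

def load_flashcards(path):
    """Load flashcards from a JSON file shaped like
    [{"front": "...", "back": "..."}, ...] (assumed layout)."""
    with open(path) as f:
        return json.load(f)

def pick_random_card(cards):
    """Return a random flashcard; play mode would show card['front']
    and let the player guess card['back']."""
    return random.choice(cards)
```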
cece3333
1,409,200
What is the best example to learn the ES6 spread operator (...) ?
The following example taken from MDN is the best example I ever found to learn the spread operator as...
0
2023-03-21T12:47:45
https://dev.to/mbshehzad/what-is-the-best-example-to-learn-about-the-es6-spread-operator--2aj8
javascript, node, react
The following example taken from **MDN** is the best example I ever found to learn the spread operator as of 21 March 2023:- ``` function sum(x, y, z) { return x + y + z; } const numbers = [1, 2, 3]; console.log(sum(...numbers)); ```
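Beyond spreading an array into function arguments, a few other everyday uses are worth knowing (these are standard ES6/ES2018 behaviors, not part of the MDN snippet above):

```javascript
// Copy an array (a shallow copy: a new array object with the same elements)
const original = [1, 2, 3];
const copy = [...original];

// Merge arrays without mutating either one
const merged = [...original, 4, 5];

// Spread into an object literal (ES2018): shallow-merge properties,
// with later keys overriding earlier ones
const defaults = { theme: 'light', lang: 'en' };
const settings = { ...defaults, theme: 'dark' };
```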
mbshehzad
1,409,337
Mac: Change the VSCode git extension account
Keychain Access: delete the credential.
0
2023-03-21T15:29:08
https://dev.to/coderethanliu/mac-change-the-vscode-git-extension-account-fm9
Keychain Access: delete the credential. On macOS, Git (and therefore the VS Code Git extension) stores credentials in the login keychain via the `osxkeychain` credential helper. Open Keychain Access, search for your Git host (for example, `github.com`), and delete the stored internet password; Git will prompt you to sign in again with the other account on the next fetch, pull, or push.
coderethanliu
1,409,339
256 HW 10- Joseph Vanacore & John Szwarc
Include images of how you are conceiving the API for the elements involved and the names As...
0
2023-03-21T16:03:52
https://dev.to/joevan21/256-hw-10-joseph-vanacore-john-szwarc-39d1
**Include images of how you are conceiving the API for the elements involved and the names**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v3zth9yc9fgdgzy1fh72.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/laa4hr9j0g2uoygls0n1.png)

As shown in the code above, the first screenshot shows the API input information that builds the clone of the card. It includes a description, name, title, image files, and text code. This connects to the localhost address, thus updating the card so that it looks efficient. The second image shows the end of the API file; the first line, "Cache-Control", keeps the shared cache for up to 1800 seconds. The other lines, classified as "Access-Control", allow cross-origin requests from specific domains, allowing certain HTTP methods to be put into place. To finish the API file, the last line ensures the resource can be accessed with credentials. After all of this input, the files get pushed out to Vercel to be published via localhost.

**What properties do you think you'll need**

In JS we need properties such as "name" and "title", along with CSS properties "color", "background-color", "font-size", "margin", and "padding".

**What sort of information will need to come from the backend to make this work?**

First we would need to ensure that our Object, DOM, and array properties are correctly inputted into our system. Then the backend can use specific information gained from the code's APIs and database queries to build the connection between the frontend and backend systems of a web component.

Either using a screencast, taking screenshots, or linking to your code, show how you'll apply concepts from the Homework (shows the different implications, strings, and Boolean types being used within the card).
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e0nikg6ccmxmuuss2u8t.png)

(Some mild CSS examples that can be used within project 2 to assist in organization viewpoints)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5pq23lyy4a6awrukkxkm.png)

(Constructor is going to be an important part of the project 2 build)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kjlzbj53msch5fl18stc.png)

(To ensure a drop-down details/information button can be activated)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v3fax133yzvkpxlwhddv.png)

**Relate it to what you'll have to do in order to pull this off for Project 2**

In project 2 we will be working with some backend components that we haven't been exposed to until now. We will need to research meanings for new terms such as API and figure out ways to implement that code into our current code project so that we can see how the web components work behind the scenes. For example, knowing how the localhost is published via backend code will be a necessity for the success of project 2.

Article one is a focus on scope and the activity we did in class. What is the project, what will things be named, and how will you initially conceive of attacking the problem?

For this project, we will be assigned a type of comp to work with and build using not only frontend tools but backend tools as well. We will be using some familiar tooling such as properties, CSS styling, and core JS code while also being exposed to new components such as APIs and connecting our project to Vercel and NPM. When it comes to naming the specific items in use, that will depend on whether we continue to use our card from project 1 as a base or build something brand new. Making sure everything is named with an implication is very important to making the frontend and backend components work. The first step to attacking the problem firsthand is research.
Find definitions for new phrasing and tooling like APIs, explore Vercel, and figure out how its publishing and backend work. Find examples and new terms that can make our project better than expected. Once the research is done, applying our knowledge to front-end and back-end code and using CSS styling to ensure our web components are styled and designed the way we want will be the key to attacking and figuring out any future problems or issues we run into.
joevan21
1,409,400
Saas Reporting Analytics - Guide to New Startup Business Building
In today’s digital age, businesses generate vast amounts of data from multiple sources, including...
0
2023-03-21T16:22:49
https://dev.to/shbz/saas-reporting-analytics-guide-to-new-startup-business-building-49e7
startup
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zklysybit5pxe2iv97u5.PNG) **In today’s digital age, businesses generate vast amounts of data from multiple sources, including customer interactions, sales transactions, marketing campaigns, and website visits.** However, data alone is not enough to drive business success. You need robust reporting and analytics tools to make sense of the data and turn it into actionable insights. For instance, Churnfree has recently gained popularity as a SaaS reporting analytics tool. Churnfree provides businesses with real-time insights into their data, enabling them to make informed decisions, improve performance, and drive growth. This article will explore the key best practices for implementing SaaS reporting analytics and how they can help you enhance your business intelligence. ## SaaS Reporting Analytics and Growth Metrics You Should Know The best way to prepare a customer retention strategy is to know the basic SaaS reporting analytics metrics and how they can help you enhance your business intelligence. ## 1. Churn Rate The churn rate is the percentage of customers who have stopped using a product or service over a given period. In SaaS, it is essential to track churn rate as it provides insight into customer satisfaction. To calculate the churn rate, divide the number of customers who have stopped using the product by the total number of customers. Determine the number of customers who have canceled or not renewed their subscription within a given period. This can be a monthly or quarterly period, depending on the company’s preference. ### For example, if a company had 1,000 customers at the beginning of the month and 50 of them canceled their subscriptions during the month, the churn rate for that month would be 5%. 
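The churn-rate formula above can be sketched in a couple of lines of JavaScript. The function name is ours, and the 1,000/50 figures are just the worked example from the text:

```javascript
// Churn rate = customers lost in the period / customers at the start, as a percentage.
function churnRate(customersLost, customersAtStart) {
  return (customersLost / customersAtStart) * 100;
}

// The example above: 1,000 customers at the start of the month, 50 cancellations.
console.log(churnRate(50, 1000)); // 5
```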
SaaS companies use SaaS reporting tools to track their churn rate regularly and identify trends so they can take steps to improve customer retention. A high churn rate can indicate that customers are dissatisfied with the service, and the company may need to improve its product or customer support. ## 2. Customer Lifetime Value (CLTV) Customer Lifetime Value (CLTV) is the total value that a customer brings to a business over their lifetime. CLTV is a critical SaaS reporting metric as it helps businesses understand each customer’s profitability. Calculating Customer Lifetime Value (CLTV) is essential for SaaS companies to understand the overall value a customer brings to their business over time. To calculate CLTV, multiply the average revenue generated per customer by the average customer lifespan. Here are the steps to calculate CLTV: Determine the average revenue per customer per month. This can be calculated by dividing the total revenue earned in a given time period by the total number of customers during that same time period. ### For example, if a company earned $100,000 in revenue in a month and had 1,000 customers during that same month, the average revenue per customer per month would be $100. Read the full article at: [SaaS reporting analytics](https://churnfree.com/blog/saas-reporting-analytics/)
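The CLTV arithmetic from the section above can likewise be sketched in JavaScript. The $100,000 revenue and 1,000 customers come from the article's example; the 12-month average lifespan is a made-up figure for illustration:

```javascript
// Average revenue per customer per month = total revenue / total customers.
function avgRevenuePerCustomer(totalRevenue, totalCustomers) {
  return totalRevenue / totalCustomers;
}

// CLTV = average monthly revenue per customer * average lifespan in months.
function cltv(avgMonthlyRevenue, avgLifespanMonths) {
  return avgMonthlyRevenue * avgLifespanMonths;
}

const monthly = avgRevenuePerCustomer(100000, 1000); // 100, as in the example
console.log(cltv(monthly, 12)); // 1200
```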
shbz
1,409,616
Interview Questions to Assess Company Culture Fit
Learn how to assess company culture fit with these top interview questions. Understand the importance of leadership, diversity, feedback, and career advancement in finding the right work environment.
0
2023-03-21T18:25:12
https://bekahhw.com/Interview-Culture-Questions
codenewbie, webdev, career
--- title: Interview Questions to Assess Company Culture Fit published: true description: Learn how to assess company culture fit with these top interview questions. Understand the importance of leadership, diversity, feedback, and career advancement in finding the right work environment. date: 2023-03-21 00:00:00 UTC tags: codenewbie, webdev, careers canonical_url: https://bekahhw.com/Interview-Culture-Questions --- Culture is hard to define. And we all like a different type of culture. We grow, learn, communicate in different ways; there’s not one culture that benefits everyone. Having diverse cultures is a beautiful part of the industry because there’s a space for all of us. It might be hard to find sometimes; we might need to dig. But there’s the hope that we find the space that welcomes us, as ourselves. But how do we find that? It’s something I’ve thought about a lot recently. And something I’ve failed at in my career. I’m a trusting person. I was once in a behavioral interview where the hiring manager told me that it was both a strength and weakness that I was willing to have my heart broken. I’m not interested in toughening up. I don’t particularly like my heart breaking, but I’d rather put myself into a situation where my heart is in it, than one where I don’t care; and company culture plays an important role in the interview process, my success, and the questions I ask. ## Culture Questions Good culture is about communication, representation, and a willingness to learn and grow. A stagnant culture is defensive, prideful, and inflexible. Avoid those spaces if your hope is to learn and grow. Often indirect questions give us a better idea of what the culture is like. Below is a mix of both. ### General Questions - What makes a good culture? - How do you provide feedback? - How do you handle an employee leaving? - How did you handle a situation in the last year where a teammate was laid off? - How would you describe your senior leadership? 
### Leadership Questions Your team is critical, but leadership is more important for long-term strategy. Understanding how leadership makes decisions, the culture they want to lead, and their style of communication may not immediately impact you, but it’s very likely that your success and longevity will be most impacted by the culture they exemplify. And by leadership, I don't mean HR/People teams. They often want to “present” the most interesting culture to candidates. I mean top-level people in the organization. Because if they don’t care about diversity, it doesn’t matter who does. If they don’t care about communication, it doesn’t matter who does. If they don’t care that the team is experts in their field, then it doesn’t matter if your team lead does. Ultimately, they make the decisions. It’s to your benefit to determine who makes the decisions that affect your potential team and then ask questions about that person or people. - How would you describe senior leadership? - Who makes decisions that impact this team? - Have you ever disagreed with that person/persons? - What did that look like? - What’s the long-term vision of senior leadership? - How does that impact the team? How long has that been the vision? How many times has that changed since you’ve been with the company? - Who are the competitors of the company? How are we addressing competition? ## Team Lead Questions - What are your team goals? How long have those been your goals? - What were your team goals in the last three months? Did you hit those goals? Did you evaluate those goals? Did you have a team retro? - How do you evaluate progress? What do your career ladders look like? When do we get evaluated for career advancement? - Give an example of when your team failed. What did that look like? How did you respond? Did anyone from leadership respond? - What does team disagreement look like for you? How do you negotiate that experience? - How do you provide your team with feedback? 
Is that built into your process? ## Culture-Specific - Describe your culture. What has been done to create that culture? - Who’s in charge of defining your culture? - How important is it for you to have people who agree with you (IMO, if the answer is very, this is a warning sign that they don’t want disagreement)? - What activities do you do as a company that shape the organization's culture? - What’s your mission? What are your core values (if they don’t know the answer, that tells you how seriously they’re taken)? - Can I talk to someone in your organization that’s a part of {cultural initiative}? That’s a starter list. As human beings, we all thrive in different environments. So finding the right environment for you, the environment that allows you to thrive, is one where you ask questions to decide if it works for you. I don’t think there’s a perfect work environment. There might be compromises, but each of us--individually--knows where we draw the line. So how close to the line do you want to compromise for company culture?
bekahhw
139,659
JavaScript of the Week: a conversation with Peter Cooper
A quick conversation with Peter Cooper, editor of JavaScript Weekly. Learn how he got started and what he thinks about JavaScript!
0
2019-07-12T16:04:19
https://www.fdoglio.com/post/javascript-of-the-week-a-conversation-with-peter-cooper
javascript, node, javascriptoftheweek, interview
--- title: JavaScript of the Week: a conversation with Peter Cooper published: true description: A quick conversation with Peter Cooper, editor of JavaScript Weekly. Learn how he got started and what he thinks about JavaScript! tags: javascript, node.js, javascript of the week, interview canonical_url: https://www.fdoglio.com/post/javascript-of-the-week-a-conversation-with-peter-cooper --- Hello and welcome to a new installment of what I like to call _"JavaScript of the Week"_. In it, I want to feature one person from our beloved Node.js + JavaScript community, asking them some interesting questions and getting some advice for the newcomers. Hopefully showing that these massively influential people started just like everyone else, and if they made it, you can too! This week we have with us Peter Cooper (A.K.A [@peterc](http://twitter.com/peterc) on Twitter), editor of JavaScript Weekly (from [https://cooperpress.com/](https://cooperpress.com/)), a publication we all know very well. Let's take a look at what he had to say, shall we? ![Peter Cooper](https://thepracticaldev.s3.amazonaws.com/i/vzo60wdleoskhqvhha4y.png) ## 1. Tell me a bit about yourself (hobbies, education, etc) I left school at 16 and I've been programming forever. I have three children so who has time for hobbies? Haha. I play FIFA (the soccer video game) quite a lot just to ensure I actually have something other than programming I can talk about. ## 2. How old were you when you started programming? And what language was it? Around 5. I don't remember but my parents have photos! I first remember programming around 6 years old but it was just very basic PRINTing to the screen, etc, nothing fancy! ## 3. How long have you been working with JavaScript? My first posts about JavaScript online were in 1996 so 23 years? But it's been very on and off, I've not been a professional JavaScript developer all of that time, of course. ## 4. What got you started with it? 
I have a bad memory but my earliest posts online were all about adding extra functionality to HTML forms. Basic validation, stuff like that, in the era before built-in validation (which arrived with HTML5). That's really all JavaScript was good for back then as it was really slow and limited. ## 5. If you could re-define the language, what would you change? This'll be controversial but if this were a historical revision (and not a revision now that everyone's used to JavaScript) I would have avoided prototypal inheritance and gone with something less idiomatic. ## 6. What would you say is the best feature of JavaScript? The accessibility. I'm not a huge fan of many of the language's features but people seem to find it easy to learn and get involved with. ## 7. What advice would you give to someone who's just starting to learn JS/Node? Subscribe to JavaScript Weekly? Haha. ## 8. Any particular learning resource you'd like to recommend? [MDN](https://developer.mozilla.org/en-US/), Wes Bos's courses, and always good books (such as those by Dr. Axel). ## 9. Is there a project / website / something you'd like to promote while you're at it? _No response given here, but I'd suggest checking out JavaScript Weekly if you haven't!_ ## 10. Favorite superhero? Grace Hopper (Note from Fernando: I love this question, we're getting responses that are outside of what we would expect. Do you know who Grace Hopper is? If you don't, [you should](https://en.wikipedia.org/wiki/Grace_Hopper)!) --- And that is it for our JavaScript of this Week coming all the way from the man who's usually getting in our inbox on a weekly basis. What did you think? I liked the fact that he's been punching keys on and off since he was 5 (or 6) years old! Coming from a parent of 2, I definitely want my kids to learn how to code in the near future, given the fact that now everything is digital. 
But that wasn't the case back then. I'm not sure when Peter was born (thinking back on it, I should ask a question about that... mh), but when I was a kid (early 80's), that wasn't the case, and I would've loved having contact with computers at such an early age! Leave your comments below if you have any questions for him or comments about his answers! And if you haven't already, follow [@peterc](http://twitter.com/peterc) to stay up-to-date with his work! See you on the next one!
deleteman123
141,935
Face Detection in Python
Hi in this post i'll show you how to use OpenCV library to detect faces inside a picture, in a future...
0
2019-07-17T22:23:20
https://dev.to/demg_dev/face-detection-in-python-1n5f
python, facedetection, opencv
Hi! In this post I'll show you how to use the OpenCV library to detect faces inside a picture; in a future post I'll show you how to detect faces in real-time video, which is amazing stuff for practicing visual recognition. > well, let's start ## Requirements Check your version of Python, or install Python 3.6. ```bash python --version ``` Then we need to install the OpenCV library from the [official webpage](https://opencv.org/). In my local environment I use OpenCV v4.1; you can use a previous version (v3.x) instead. I recommend following the official installation instructions in the documentation. ## Code the Face Detection solution Now we start coding the face detection program. First we need to import the OpenCV library and the sys library (optional; I use it to read a parameter passed when running the Python script). ```python import cv2 import sys ``` Next to the import statements, we need to get the parameter passed by the user: the path of the picture to test with the face detection program. ```python # Get user supplied values imagePath = sys.argv[1] ``` In the next part we need to add the magic for face detection to our code. This part uses a Haar cascade classifier; you may ask what a Haar cascade is, so let me explain. > Object Detection using Haar feature-based cascade classifiers is an effective object detection method proposed by Paul Viola and Michael Jones in their paper, "Rapid Object Detection using a Boosted Cascade of Simple Features" in 2001. It is a machine learning based approach where a cascade function is trained from a lot of positive and negative images. It is then used to detect objects in other images. Here we will work with face detection. Initially, the algorithm needs a lot of positive images (images of faces) and negative images (images without faces) to train the classifier. Then we need to extract features from it. For this, haar features shown in below image are used. 
They are just like our convolutional kernel. Each feature is a single value obtained by subtracting sum of pixels under white rectangle from sum of pixels under black rectangle. - from the OpenCV documentation Now that we have an idea of what a Haar cascade is, we can continue with the code. I'm using the Haar cascade file created by Rainer Lienhart (you can find the file in the code repository); we need to reference it in the code. ```python cascPath = "haarcascade_frontalface_default.xml" ``` Next, we load the pre-trained OpenCV classifier from the Haar cascade file. ```python # Create the haar cascade faceCascade = cv2.CascadeClassifier(cascPath) ``` In the next step we read the image and convert it to grayscale, because OpenCV's Haar cascade classifier does a better job with grayscale pictures. ```python # Read the image image = cv2.imread(imagePath) gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) ``` In this part the magic is done: OpenCV detects the faces inside the image and returns the position of each face. You can adjust the parameters to get more detailed results, or crazier ones, haha! ```python # Detect faces in the image faces = faceCascade.detectMultiScale( gray, scaleFactor = 1.05, minNeighbors = 5, minSize = (30, 30), flags = cv2.CASCADE_SCALE_IMAGE ) ``` To finish our face detection code, we print the number of faces found in the image, draw a rectangle around each face, and show a window with the image and the faces outlined; to close the window, press any key. ```python print("Found {0} faces!".format(len(faces))) # Draw a rectangle around the faces for (x, y, w, h) in faces: cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2) height, width, channels = image.shape cv2.imshow("{0} Faces found".format(len(faces)), image) cv2.waitKey(0) ``` Here is an example of a fairly accurate result. 
![Lots of people, good quality](https://1.bp.blogspot.com/-E7gxWsBDFt0/XS9IqyqpG-I/AAAAAAAAWzo/or6VTnoARuolE0FyWqPQ1U05YnO-UVyugCPcBGAYYCw/s320/Captura.PNG) As you can see in the image, it recognizes a hand as a face; that was one of the issues I found. I tried to set more accurate values, but then it detects fewer faces. You can play with the parameters to see how it works. And here is another example: as you can see, it detects many faces in a low-resolution picture but makes a lot of errors. In this image, with this configuration, only one face is not detected. ![Lots of people, low quality](https://1.bp.blogspot.com/-w0X2CmxtJjk/XS9R_wGjTmI/AAAAAAAAWz0/_9KPKNVQ1Vo3L7cDWJZNgTN_5FUYM3CFwCLcBGAs/s640/Captura1.PNG) ## Complete code Here is the complete code. As you can see, it only takes 33 lines including whitespace; it is a really short program with powerful applications. ```python import cv2 import sys # Get user supplied values imagePath = sys.argv[1] cascPath = "haarcascade_frontalface_default.xml" # Create the haar cascade faceCascade = cv2.CascadeClassifier(cascPath) # Read the image image = cv2.imread(imagePath) gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) # Detect faces in the image faces = faceCascade.detectMultiScale( gray, scaleFactor = 1.05, minNeighbors = 5, minSize = (30, 30), flags = cv2.CASCADE_SCALE_IMAGE ) print("Found {0} faces!".format(len(faces))) # Draw a rectangle around the faces for (x, y, w, h) in faces: cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2) height, width, channels = image.shape cv2.imshow("{0} Faces found".format(len(faces)), image) cv2.waitKey(0) ``` ## Conclusion To finish this post: as you can see, with this face detection solution you can develop more elaborate programs, like a doorbell or a traffic-light camera that detects faces, and maybe go on to do proper recognition of a person's face. You can also try applying the same program to real video, or wait for my next post to learn how to do it.
demg_dev
145,639
Interfacing your UI components
In recent years, front-end development became an important part of my life. But when I started years...
0
2019-07-20T08:33:11
https://kevtiq.co/blog/interfacing-your-ui-components/
webdev, javascript, learning, ui
In recent years, front-end development became an important part of my life. But when I started years ago, I did not understand what an API was. I worked with them, but I never cared what one exactly was, or what building one requires. I knew the concept of interfaces in UI, but its relation to the letter "I" in API was lost on me. At a certain point, collaboration becomes more important. Your colleagues should be able to use and understand your work. This was the point where I started to see the connection between API and UI in front-end development. ## What is an interface? As a front-end engineer, always take the reusability of your work into account. On the other end, our work should also be usable and accessible to users. Both concepts come together in the modern way of working, where design systems are at the center of attention. As [Alla Kholmatova describes in her book](https://www.smashingmagazine.com/design-systems-book/), a design system comprises reusable patterns. But how do you make something reusable, especially when the content pattern itself is rather complex? This is where the concept of interfaces comes into play. The ever so trustworthy [Wikipedia](<https://en.wikipedia.org/wiki/Interface_(computing)>) defines an interface as stated below. > An interface is a shared boundary across which two or more separate components of a computer system exchange information When looking at this definition with my front-end goggles, I directly see the word _component_. Two or more separate UI components that work together is exactly how we create most design systems. In [React](https://reactjs.org/docs/components-and-props.html), for instance, you provide data from a parent component to a child component through the _props_ of the child component. So is this the spot in front-end development where we design and develop interfaces? Yes, yes it is. As mentioned though, this is not the only place where interfaces play a role. 
When a user clicks on a button, or fills in a form and submits it, he or she is interacting with one (or more) of our components. The UI the user is interacting with is the shared boundary from the interface definition. The interactions of a user are a way of transferring information about his or her intentions to our components. ## Component anatomy So we are dealing with two types of interfaces when designing and developing components. By combining multiple of these components we can create a UI that the user can use and that connects to our system. Great! Are we done? Not completely. When we change something in one interface, it affects the other interface of the same component. To understand this better, we have to look at the component anatomy. ![The UI component anatomy](https://kevtiq.co/img/ui-component-anatomy.png) A UI component consists, as you can see, of several parts that interact with each other. When a user interacts with the UI by clicking a button, some logic triggers inside the component. Depending on the logic, several things can happen within the component. The internal state of the component gets updated, we send a request to the back end, or we provide information back to the user. One important path inside the component is missing though. Through its API, it can provide information to other components. This only works when other components connect to your component, by providing a callback function (e.g. an `onClick` function for a button component). Your component can provide information to others through their APIs, and vice versa. Another component can provide information through the API to your component. This is the interface used by other engineers. Our component runs some logic when another connects through the API. Depending on the logic, it either updates its internal state, provides information back, or updates the UI based on the information. 
In the last case, it is our component that describes in its API how it can connect with other components. It describes what type of information it can receive, but also when it can provide information back (e.g. callback functions like `onClick`). We can often assume that other engineers are not aware of the internals of our UI components. So the interfaces become a way to describe how we want others to use and interact with our work. But how can we describe our interfaces to ensure others know of how they should interact with them? > The interfaces become a way to describe how we want others to use and interact with our work ## Describing interfaces This problem is partly already solved for your users with proper design. Providing good visual queues to the user so they know where and how they can interact with your component is a good first step. A second step lives within implementing the designs. Not every user interacts with a UI the way you envision. This can have various reasons, but a big one can be disabilities. When a user is partly blind, he or she might use a screen reader to interact with your component. The design of the UI does not have to change, but on an implementation level, consider these use cases. This is called the field of [accessibility](https://www.smashingmagazine.com/category/accessibility/) (or `a11y`). In the rest of this post, however, I want to discuss the engineers' interface or the API. Describing how we want other engineers to interact with our UI component is not a trivial task. As an engineer, myself included, we often have the feeling that our work is self-explanatory. It is not. We have to describe at least some things to ensure engineers of different levels can use our work if we want to. - What APIs of our UI component do they have access to; - For each API, how they can use it and what its purpose is (e.g. 
describing how they can influence the styling of your UI component); - Examples showing the actual outcome (UI) and the influence of different combinations of API inputs. You can achieve the above in various ways. You can write extensive documentation in a markdown (`.md`) file (e.g. `README.md`). A fun option is building a documentation website, where you can interact with the components. If that requires too big of an investment, tools like [Gitbook](https://www.gitbook.com/) or [Storybook](https://storybook.js.org/) are good options for documenting UI components extensively, with a low investment. ## API guidelines for React Until now, it was a lot of text and too few examples (my bad, sorry). So let's discuss some pointers for describing your API using React. Hopefully, you see that the examples can also apply to other frameworks. In React, your APIs are the [_props_](https://reactjs.org/docs/components-and-props.html) you define. Let's look at a small button example with some properties. ```jsx import PropTypes from 'prop-types'; const Button = ({ onClick, variant, children, override, className, type }) => { return ( <button onClick={onClick} type={type} className={`${override.defaultClassName} ${className}`} data-variant={variant}> {children} </button> ); }; Button.propTypes = { variant: PropTypes.oneOf(['primary', 'stroke', 'flat']).isRequired, onClick: PropTypes.func.isRequired, override: PropTypes.object }; Button.defaultProps = { variant: 'primary', className: '', override: { defaultClassName: 'btn' } }; ``` At the top, we see our actual component. In the `return` statement we see what UI is being generated, but we can also see how the _props_ are applied. More important are the `Button.propTypes` and `Button.defaultProps` in this case. The former is React's way to describe the types of the values we expect for each of the _props_ and whether they are required. For the `variant` _prop_ we also see that we restrict the values it can have. 
With `defaultProps` we define some default values used by the component. Using default values is a good way to avoid unwanted side effects when someone uses our component. When you do not define `className`, you will get `undefined` as a result. But because we set the default value to an empty string, this empty string will be used instead of `undefined`. This avoids potential side effects when someone creates a CSS class called `undefined`. One of the _props_ that might seem weird is `override`. Let's say you use a default class name for your buttons called `.btn`. Although it is a sensible and good name, other developers working on different projects might use a different default class name. With the `override` _prop_ you can override some of the internal defaults the component typically uses. Ideally, it is hardly used, but it's an easy way to make your components more powerful for others to use. As a developer though, you do not want to set `override.defaultClassName` every time. In this case, you can create a new component that wraps our `Button` component. This avoids the need for the other developer to know the internal logic of our component. ```jsx const PrimaryButton = (props) => (<Button variant="primary" override={{ defaultClassName: 'company-btn' }} {...props} />); ``` ## Now what? Interfacing your components is hard. Other developers using your UI component might not be interested in its internals. Still, ensure that they realize how they can use and interact with it. In the end, they influence the users' interface, the UI. Users also need to understand how they can interact with our components. When trying to achieve this, start small (e.g. with the naming convention of APIs). From there, you can expand and find better ways of interfacing than described in this post. *This article was originally posted on [kevtiq.co](https://kevtiq.co/blog/interfacing-your-ui-components/)*
vyckes
180,540
Being a Successful Beginner
It’s only been a month since I started learning how to code, but that month has felt like a year. Sin...
0
2019-09-30T15:57:29
https://dev.to/lberge17/being-a-successful-beginner-1ljc
beginners
It’s only been a month since I started learning how to code, but that month has felt like a year. Since then, my perspective on learning and curriculum has changed drastically. In college, I viewed classes and assignments as items on a checklist—meaning the importance was placed on the completion of the checklist rather than on the items themselves. Checking off boxes is always so satisfying and used to make me feel like I was being productive. This strategy worked well in terms of being able to get good grades on my transcript and a nice GPA for my resume. However, when it comes to actually applying that knowledge, pieces of paper have very little practical use. When learning to code, I knew the emphasis was no longer placed on grades and pieces of paper. I had to be able to build tangible projects using my knowledge. So, I decided to stop viewing my work as a checklist and instead focus on retaining information. I decided to focus deeply on each lesson, put away distractions, and play around with the code until I felt I had a good conceptual understanding. In the beginning, this definitely slowed down my learning process, but I found it saved me time in the long run because I didn’t have to go back to previous lessons as often. While I don’t claim to have the perfect, cookie-cutter understanding of how to be a beginner and learn to code effectively, I do want to share some of the tactics that have helped me thus far in my journey. I’m sure my perspective on learning and being a beginner will continue to grow and change over time. With that disclaimer out of the way, here is my current opinion on what makes a successful beginner. <h2>Commit time and energy to learning</h2> <p>Coding takes a lot of time to learn, just like learning a new language. There's no magic in the amount of time it will take either; you have to really give yourself time to absorb the concepts. 
Spend whatever time you have each day dedicated to learning to code and put away any distractions (social media, video games, online shopping, etc.) Thirty minutes of undistracted, deep, focused work is far better than 4 hours of half-coding and half-Netflix.</p> <h2>Don't take shortcuts</h2> <p>When you don't understand a concept, spend extra time playing around with the code and researching. Taking an extra moment on the difficult concepts you don't understand will save you tons of time in the long run when you begin building on those concepts. I've definitely been tempted to move on if the solution is correct but I don't understand why. But, whenever I gave in to that, I ended up spending way more time later on and getting more overwhelmed than I would have had I taken the time to understand my code conceptually.</p> <h2>Make imperfect plans and just build</h2> <p>Planning can save a lot of time and save you from having to completely scrap hours of coding. However, too much planning can also get in the way. When I was beginning, I found <a href="https://medium.com/edge-coders/the-mistakes-i-made-as-a-beginner-programmer-ac8b3e54c312">this article</a> which I think really helped me frame my planning practices. It's definitely worth a read if you're starting out. Having a "good-enough" plan has saved me a lot of time when building my projects and also made the code so much cleaner off-the-bat.</p> <h2>Enjoy the process</h2> <p>Programming is not a skill you can ever be a complete expert in. Technology is always evolving. There's no book you can read in a major programming language that will tell you all you will ever need to know about that language. In choosing to pursue web development, this fact was actually one of the major pulls for me. I've always loved learning new things, and this career would actually force me to keep my knowledge sharp. Always learning, always a beginner. So I'm enjoying learning to code and being a beginner. 
I don't view code lessons and challenges as assignments or chores, but rather I view them like I view reading a good book, playing music, or a video game. It's all about enjoying the journey I'm on because, with any luck, I'll always be a beginner in some aspect of coding. There will always be room for growth.</p> <p>These perspectives have really helped shape my learning thus far. If I ever feel stuck or unmotivated, I take a break and turn back to these ideas. Learning any new skill is about figuring out what works best for you. What keeps you motivated?</p>
lberge17
182,269
Take a screenshot of VSCode using Polacode Extension
Sometimes I add some tweets that should include code, so we don't have the option to embed code on...
0
2019-10-03T22:39:55
https://dev.to/arbaoui_mehdi/take-a-screenshot-of-vscode-using-polacode-extension-524h
webdev, vscode, plugins, tools
--- title: Take a screenshot of VSCode using Polacode Extension published: true description: cover_image: https://thepracticaldev.s3.amazonaws.com/i/k8o4tn51n5zwprs2ioio.png tags: webdev, vscode, plugins, tools --- Sometimes I add tweets that should include code, but Twitter doesn't offer a way to embed code, so the only solution is to add an image. In this post I'll show you how to create a screenshot of your code from [Visual Studio Code](https://code.visualstudio.com) by using the [Polacode](https://marketplace.visualstudio.com/items?itemName=pnp.polacode) extension. First, you have to install the extension: click on the `Extensions` icon on the left sidebar of your editor to open the extensions Marketplace, then type `Polacode`. You'll find multiple ones; choose the one that has the most downloads. ![Install Polacode for vscode](https://thepracticaldev.s3.amazonaws.com/i/cm0sreyxoo9fiphnylgf.gif) Then we have to show the command palette by using the shortcut `Cmd+Shift+P` for Mac users, or `Ctrl+Shift+P` for Windows users, then type `> Polacode` ![Access Polacode Extension](https://thepracticaldev.s3.amazonaws.com/i/2dcnw4g257k2hk45a627.gif) Open any file that includes code; in my case, I'm using a JavaScript file as an example. Select the code and it will be shown on the right of the editor. ![take a screenshot using vscode](https://thepracticaldev.s3.amazonaws.com/i/ybj3eo7usco5u4q79vid.gif) I saved the file as `code.png` in the Desktop folder, so open the file to see the result. ![Open Polacode Result](https://thepracticaldev.s3.amazonaws.com/i/egpaocp4j31toycz2gz2.gif)
arbaoui_mehdi
202,090
Relative color luminance
C/P-ing this from my older blog posts. This one is from 2014. since I was a junior dev pretty much. N...
0
2019-11-07T23:26:39
https://lvidakovic.com/blog/relative-color-luminance
color, javascript, luminance
C/P-ing this from my older blog posts. This one is from 2014, when I was pretty much still a junior dev. Nevertheless it's astonishing how many digital products get this wrong when applying the hyped-up dark mode. --- This is the method for calculating color luminance about which Lea Verou gave a talk at the Smashing conference. It enables you to dynamically pick appropriate colors in a way that keeps the text readable. A full explanation of the formula is at [w3.org](https://www.w3.org/WAI/GL/wiki/Relative_luminance). The procedure goes as follows: 1. Calculate luminance for text and background 2. Calculate the contrast ratio using the following formula: (L1 + 0.05) / (L2 + 0.05), where L1 is the relative luminance of the lighter of the foreground or background colors, and L2 is the relative luminance of the darker of the foreground or background colors. 3. Check that the contrast ratio is equal to or greater than 4.5:1 The key to it all is to retain a proper contrast ratio between foreground and background color luminance. Here is the actual function that returns the relative luminance of the color:

```javascript
// color array - [r,g,b], each color with value ranging from 0 to 255
// @return number [0-100], describes relative luminance of the color:
//   0 represents the luminance of completely black while
//   100 represents the luminance of the white color.
function luminance(color) {
  var rgb = color.map(function(c) {
    c /= 255 // to 0-1 range
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4)
  })
  return (
    21.26 * rgb[0] + // red
    71.52 * rgb[1] + // green
    7.22 * rgb[2]    // blue
  )
}
```

To test it right now you can open up a browser’s console, paste the function and try to use it right away. Cheers!
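To complete the procedure, steps 2 and 3 could look like the sketch below. The helper names `contrastRatio` and `isReadable` are illustrative, not from any library, and `luminance` is repeated so the snippet runs on its own:

```javascript
// Relative luminance on a 0-100 scale, same as the function above.
function luminance(color) {
  var rgb = color.map(function(c) {
    c /= 255 // to 0-1 range
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4)
  })
  return 21.26 * rgb[0] + 71.52 * rgb[1] + 7.22 * rgb[2]
}

// Step 2: (L1 + 0.05) / (L2 + 0.05), with L1 the lighter luminance.
// Divide by 100 first because the WCAG formula expects 0-1 luminances.
function contrastRatio(colorA, colorB) {
  var la = luminance(colorA) / 100
  var lb = luminance(colorB) / 100
  var lighter = Math.max(la, lb)
  var darker = Math.min(la, lb)
  return (lighter + 0.05) / (darker + 0.05)
}

// Step 3: check the ratio against the 4.5:1 threshold.
function isReadable(foreground, background) {
  return contrastRatio(foreground, background) >= 4.5
}

console.log(contrastRatio([0, 0, 0], [255, 255, 255])) // 21 - black on white, the maximum
console.log(isReadable([119, 119, 119], [255, 255, 255])) // gray #777 on white
```

Black on white gives the maximum possible ratio of 21:1, which is a handy sanity check for the implementation.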
apisurfer
204,793
Windows JS Dev in WSL Redux
Back in September I did a post on setting up a JS dev environment in Windows using WSL (Windows Subsy...
0
2019-11-16T18:39:50
https://dev.to/vetswhocode/windows-js-dev-in-wsl-redux-33d5
javascript, webdev, windows, linux
Back in September I did a post on setting up a JS dev environment in Windows using WSL (Windows Subsystem for Linux). Quite a bit has changed in the past couple of months so I think we need to revisit and streamline it a bit. You can now get WSL2 in the Slow ring for insiders and a lot has changed in Microsoft's new Terminal app. So to start, we need to be Windows Insiders for this. You could skip the whole insider thing and use WSL v1, but you're going to have a pretty big performance hit and things won't work just right. So go to `Settings -> Updates & Security -> Windows Insider Program`. You will want to opt in and select your ring. For us today, the slow ring will get us everything we need. ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/1z45aye3vek8grjz6v4w.png) You may need to run a few updates, what am I saying.. you will run some updates, reboot and maybe some more. The key is you want to be on build 19013. To verify you are on the right build, hit Start, type `winver`, and then press Enter. The second line in the “About Windows” box tells you which version and build of Windows 10. Next we need to make sure we turn on 'Windows Subsystem for Linux'. Press the start key and start typing 'Turn Windows Features on or Off'. You should see it appear; select it. A dialog window will appear. You want to turn on 'Virtual Machine Platform' and 'Windows Subsystem for Linux' as you see below. ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/wlq60un4niso96m8crtp.png) Next we want to get you the new Windows Terminal from the Store app. Just go in there and search 'Windows Terminal' and you should find it no problem. Install it. Next we want to open your new terminal up. It will default to PowerShell for now. That's fine, we want to run a command in there for our setup. Type `wsl --set-default-version 2` in your terminal window. This will tell Windows we want any Linux distribution installed to use version 2 of WSL. 
Now that we have done that we need to install a Linux distro. I recommend Ubuntu, mainly because I know everything works in it. You want to select the one without a version number (I'll visit why in another post) ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/hai4ezhq1im66zdyah9n.png) To verify it's installed and we have the WSL2 version we can type `wsl -l -v` in the terminal and you will get a list of distros you have installed and what version of WSL they are using. ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/iqipprosj2exnv8m0caw.png) The next piece of software I would like to see installed is [Visual Studio Code](https://code.visualstudio.com/). This is what I use for my main editor. Now let's make Ubuntu the default when we open the Terminal App. Open your terminal app and go to settings. ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/i0tl87x18q9osi8cr8nt.png) This will open the `profiles.json` file in code. Here we will make our changes. Every profile has a GUID. We just make sure the one we want as our default is set at the line `"defaultProfile":` ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/xjeo3hcu1ntmjdbhp042.png) Now you can do this with whatever distro you want in the future, but our focus is on Ubuntu. So now almost everything is installed. We just need to tweak things up a little. First off, I like to use the [Zsh shell](https://www.zsh.org/) as my main shell. You by no means are required to use it. If you choose to stick with bash the only step you have is to make it your default shell in the Terminal App. But Zsh has some nice optimizations that I believe improve life a little. You can install Zsh simply by typing `sudo apt install zsh` in your Ubuntu Terminal. Next we are going to run just a couple more commands before we set up our zsh config. 
* We are going to install [Oh-My-Zsh](https://ohmyz.sh/) which they themselves describe as "Oh My Zsh is a delightful, open source, community-driven framework for managing your Zsh configuration. It comes bundled with thousands of helpful functions, helpers, plugins, themes, and a few things that make you shout... 'Oh My ZSH!'" and to do this you type the following in the terminal. ```sh -c "$(curl -fsSL https://raw.github.com/robbyrussell/oh-my-zsh/master/tools/install.sh)"``` * We'll add the plugin 'zsh-autosuggestions'. This is very handy, using past commands to help you auto-fill future ones. The command for install is.. ```git clone https://github.com/zsh-users/zsh-autosuggestions ~/.oh-my-zsh/custom/plugins/zsh-autosuggestions``` * And finally zsh-nvm will help us keep a current node install and even change versions if needed. ```git clone https://github.com/lukechilds/zsh-nvm ~/.oh-my-zsh/custom/plugins/zsh-nvm``` Once you have done all of the above commands we will edit our .zshrc. First make sure you're in your /home directory by typing `cd` and pressing Enter. Next run `nano .zshrc`. First you can change your theme if you want, I have 'bira' selected for myself at this time. You can see some of your options at https://zshthem.es/all/ . ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/joswrcqwy2qsayxa9ti0.png) Next we want to add the plugins we installed earlier. This is a little further down the config. Just enter them as I have in the picture below. ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/h98ke8uhgn64s2tror5a.png) Once this is done you will press `ctrl + o` to write the file and `ctrl + X` to close nano. Now type `source .zshrc` to load your plugins and theme. And now we install the LTS version of node simply by typing `nvm install --lts` Let's also make a directory for our future projects by typing `mkdir Projects` or whatever you would like to call it. So now you can `cd Projects` which will put you in that directory. 
From there we can open that folder with VSCode by typing `code .` while we are in the working directory of our choice. You should be able to 'Rock and Roll' at this point. Feel free to reach out with any questions. WSL is changing pretty quick, along with Windows itself, so this post may go out of date fairly soon. I will try to link the most current version at the top in the future if that is the case.
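In case the screenshots above go stale, here is roughly what the edited `.zshrc` lines look like. This is a sketch based on the theme and the two custom plugins installed in this post; your file may also list the stock `git` plugin that Oh-My-Zsh enables by default:

```shell
# ~/.zshrc (excerpt) - theme and plugins as configured in this post
ZSH_THEME="bira"

# plugins cloned into ~/.oh-my-zsh/custom/plugins earlier
plugins=(
  zsh-autosuggestions
  zsh-nvm
)
```

After saving, `source ~/.zshrc` reloads the config so the theme and plugins take effect in the current shell.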
heytimapple
205,304
Infographic: Top 11 Chrome Extensions For Developers And Designers In 2019
Dominating the browser market share with 63.99% as of Aug 2018 – Aug 2019, Google Chrome has been the...
0
2019-11-14T10:41:31
https://www.lambdatest.com/blog/infographic-top-11-chrome-extensions-for-developers-and-designers-in-2019/
webdev, productivity, ux
Dominating the [browser market share](https://gs.statcounter.com/browser-market-share) with 63.99% as of Aug 2018 – Aug 2019, Google Chrome has been the pinnacle of web browsers. As a result of immense worldwide adoption, it is no surprise to witness a bustling marketplace for Google Chrome extensions. Some are there to help you become more productive at your daily activities and some are specific to your profession or hobby. However, with the availability of innumerable extensions, it can be tough to find the top Chrome extensions meant for you, or to stay notified about them. And considering the pace at which the web development domain is evolving, it becomes all the more important to stay up-to-date. Which is why I am going to help you quickly discover the top Chrome extensions in 2019 for web developers and designers. [**Also Read: 19 Chrome Extensions For Web Developers & Designers In 2019**](https://www.lambdatest.com/blog/op-chrome-extensions-for-developers-2019/?utm_source=dev&utm_medium=Blog&utm_campaign=Rahul-14112019&utm_term=Rahul) ![Infographic: Top 11 Chrome Extensions For Developers And Designers In 2019](https://www.lambdatest.com/blog/wp-content/uploads/2019/09/Infographic-Google-Chrome-Extensions__1568802298_45.64.8.98-jpg.jpg) ### Share this Image On Your Site **Please include attribution to www.lambdatest.com with this graphic.** ``` <a href='https://www.lambdatest.com/blog/infographic-top-11-chrome-extensions-for-developers-and-designers-in-2019/'> <img src='https://www.lambdatest.com/blog/wp-content/uploads/2019/09/Infographic-Google-Chrome-Extensions__1568802298_45.64.8.98-jpg.jpg' alt='Top 11 Chrome Extensions For Developers And Designers In 2019' width='540' border='0' /> </a> ``` ## References: 1. [ColorZilla](https://chrome.google.com/webstore/detail/colorzilla/bhlhnicpbhignbdhedgjhgdocnmhomnp?ref=designrevision.com) 2. 
[Fontface Ninja](https://chrome.google.com/webstore/detail/fontface-ninja/eljapbgkmlngdpckoiiibecpemleclhh?ref=designrevision.com) 3. [LambdaTest Screenshots](https://chrome.google.com/webstore/detail/lambdatest-screenshots/fjcjehbiabkhkdbpkenkhaahhopildlh?hl=en) 4. [Marmoset](https://chrome.google.com/webstore/detail/marmoset/npkfpddkpefnmkflhhligbkofhnafieb?hl=en) 5. [Clear Cache](https://chrome.google.com/webstore/detail/clear-cache/cppjkneekbjaeellbfkmgnhonkkjfpdn?ref=designrevision.com) 6. [Page Ruler](https://chrome.google.com/webstore/detail/page-ruler/emliamioobfffbgcfdchabfibonehkme?ref=designrevision.com) 7. [Check My Links](https://chrome.google.com/webstore/detail/check-my-links/ojkcdipcgfaekbeaelaapakgnjflfglf?hl=en) 8. [JSON Viewer](https://chrome.google.com/webstore/detail/json-viewer/gbmdgpbipfallnflgajpaliibnhdgobh?ref=designrevision.com) 9. [Corporate Ipsum](https://chrome.google.com/webstore/detail/corporate-ipsum/lfmadckmfehehmdnmhaebniooenedcbb?ref=designrevision.com) 10. [CSSViewer](https://chrome.google.com/webstore/detail/cssviewer/ggfgijbpiheegefliciemofobhmofgce?hl=en) 11. [Web Developer](https://chrome.google.com/webstore/detail/web-developer/bfbameneiokkgbdmiekhjnmfkcnldhhm) [![ cross browser testing](https://www.lambdatest.com/blog/wp-content/uploads/2018/11/Adword-Cyber2.jpg)](https://accounts.lambdatest.com/register/?utm_source=dev&utm_medium=Blog&utm_campaign=Rahul-14112019&utm_term=Rahul)
rahul8124
205,918
How to build a social network with mongoDB?
I'm wanted to start developing a social network like instagram (more or less). but I have tried to un...
0
2019-11-15T13:46:47
https://dev.to/rynrn/how-to-build-a-social-network-with-mongodb-42b
mongodb, node, database, architecture
I want to start developing a social network like Instagram (more or less), but I'm trying to understand how to design my DB (using MongoDB) for the main queries. So I have a few questions: 1. How should I save the followers/following data in the DB? Should it be in the same document as the user or in a separate one? 2. How do I fetch all the posts for a user from their followers? 3. Can I use MongoDB for building a social network like Instagram?
rynrn
214,160
JavaScript Data Structures: Singly Linked List: Recap
JavaScript Data Structures: Singly Linked List: Recap
3,259
2019-12-02T20:13:20
https://dev.to/miku86/javascript-data-structures-singly-linked-list-reca-210b
beginners, tutorial, javascript, webdev
--- title: "JavaScript Data Structures: Singly Linked List: Recap" description: "JavaScript Data Structures: Singly Linked List: Recap" published: true tags: ["Beginners", "Tutorial", "JavaScript", "Webdev"] series: JavaScript Data Structures --- ## Intro [Last time](https://dev.to/miku86/javascript-data-structures-singly-linked-list-remove-fai), we added the last method, `remove`. I hope you learned something about the concept of a Singly Linked List and tried your best to implement it on your own. If you want to get notified about new stuff, [subscribe](https://miku86.com/pages/newsletter) :) Most of the time it deepens my knowledge if I go over it again. And again. --- ## Final Implementation (Short version) Our Singly Linked List has these methods: - get a specific node - update a specific node - add a node to the end - remove a node from the end - add a node to the beginning - remove a node from the beginning - add a node at a specific index - remove a node at a specific index

```js
class Node {
  constructor(value) {
    this.value = value;
    this.next = null;
  }
}

class SinglyLinkedList {
  constructor() {
    this.length = 0;
    this.head = null;
    this.tail = null;
  }

  // get a specific node
  get(index) {
    if (index < 0 || index >= this.length) {
      return null;
    } else {
      let currentNode = this.head;
      let count = 0;
      while (count < index) {
        currentNode = currentNode.next;
        count += 1;
      }
      return currentNode;
    }
  }

  // update a specific node
  set(index, value) {
    const currentNode = this.get(index);
    if (currentNode) {
      currentNode.value = value;
      return currentNode;
    } else {
      return null;
    }
  }

  // add to the end
  push(value) {
    const newNode = new Node(value);
    if (!this.length) {
      this.head = newNode;
    } else {
      this.tail.next = newNode;
    }
    this.tail = newNode;
    this.length += 1;
    return newNode;
  }

  // remove from the end
  pop() {
    if (!this.length) {
      return null;
    } else {
      let nodeToRemove = this.head;
      let secondToLastNode = this.head;
      while (nodeToRemove.next) {
        secondToLastNode = nodeToRemove;
        nodeToRemove = nodeToRemove.next;
      }
      secondToLastNode.next = null;
      this.tail = secondToLastNode;
      this.length -= 1;
      if (!this.length) {
        this.head = null;
        this.tail = null;
      }
      return nodeToRemove;
    }
  }

  // add to the beginning
  unshift(value) {
    const newNode = new Node(value);
    if (!this.length) {
      this.tail = newNode;
    } else {
      newNode.next = this.head;
    }
    this.head = newNode;
    this.length += 1;
    return newNode;
  }

  // remove from the beginning
  shift() {
    if (!this.length) {
      return null;
    } else {
      const nodeToRemove = this.head;
      this.head = this.head.next;
      this.length -= 1;
      if (!this.length) {
        this.tail = null;
      }
      return nodeToRemove;
    }
  }

  // add at a specific index
  insert(index, value) {
    if (index < 0 || index > this.length) {
      return null;
    } else if (index === 0) {
      return this.unshift(value);
    } else if (index === this.length) {
      return this.push(value);
    } else {
      const preNewNode = this.get(index - 1);
      const newNode = new Node(value);
      newNode.next = preNewNode.next;
      preNewNode.next = newNode;
      this.length += 1;
      return newNode;
    }
  }

  // remove from a specific index
  remove(index) {
    if (index < 0 || index >= this.length) {
      return null;
    } else if (index === 0) {
      return this.shift();
    } else if (index === this.length - 1) {
      return this.pop();
    } else {
      const preNodeToRemove = this.get(index - 1);
      const nodeToRemove = preNodeToRemove.next;
      preNodeToRemove.next = nodeToRemove.next;
      this.length -= 1;
      return nodeToRemove;
    }
  }
}
```

## Questions - Do you like this "tiny steps" approach? - Are you interested in other data structures, e.g. Doubly Linked List, Stack, Queue?
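To make the API concrete, here is a short usage walkthrough. It's a sketch built on a condensed copy of the list (only the constructor plus `push`, `unshift`, `insert`, and `get`), so the snippet runs on its own:

```javascript
// Condensed Singly Linked List - just enough for the walkthrough below.
class Node {
  constructor(value) {
    this.value = value;
    this.next = null;
  }
}

class SinglyLinkedList {
  constructor() {
    this.length = 0;
    this.head = null;
    this.tail = null;
  }
  // walk from the head until we reach the requested index
  get(index) {
    if (index < 0 || index >= this.length) return null;
    let node = this.head;
    for (let i = 0; i < index; i++) node = node.next;
    return node;
  }
  // append at the tail
  push(value) {
    const newNode = new Node(value);
    if (!this.length) this.head = newNode;
    else this.tail.next = newNode;
    this.tail = newNode;
    this.length += 1;
    return newNode;
  }
  // prepend at the head
  unshift(value) {
    const newNode = new Node(value);
    if (!this.length) this.tail = newNode;
    else newNode.next = this.head;
    this.head = newNode;
    this.length += 1;
    return newNode;
  }
  // splice in at an arbitrary index
  insert(index, value) {
    if (index < 0 || index > this.length) return null;
    if (index === 0) return this.unshift(value);
    if (index === this.length) return this.push(value);
    const pre = this.get(index - 1);
    const newNode = new Node(value);
    newNode.next = pre.next;
    pre.next = newNode;
    this.length += 1;
    return newNode;
  }
}

// Build A -> B -> C and read it back.
const list = new SinglyLinkedList();
list.push("A");
list.push("C");
list.insert(1, "B");
console.log(list.get(0).value, list.get(1).value, list.get(2).value); // A B C
console.log(list.length); // 3
```

Note how `insert(1, "B")` only rewires two `next` pointers; nothing after the insertion point is moved, which is the main selling point over an array.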
miku86
221,266
SwiftUI for Mac - Part 3
In part 1 of this series, I created a Mac app using SwiftUI. The app uses a Master-Detail design to l...
0
2019-12-19T08:26:55
https://troz.net/post/2019/swiftui-for-mac-3/
swift, swiftui, mac
--- title: SwiftUI for Mac - Part 3 published: true date: 2019-12-15 07:28:20 UTC tags: Swift, SwiftUI, Mac canonical_url: https://troz.net/post/2019/swiftui-for-mac-3/ --- In [part 1 of this series](https://dev.to/trozware/swiftui-for-mac-part-1-24ne), I created a Mac app using SwiftUI. The app uses a Master-Detail design to list entries in an outline on the left and show details about the selected entry in the detail view on the right. In [part 2](https://dev.to/trozware/swiftui-for-mac-part-2-5050) I explored using menus, adding all the expected user interface elements and opening secondary windows. In this third and final part, I want to look at the various ways to present dialogs to the user. There are four different types of dialog that I want to explore: - Alert - Action - Sheet - File dialogs (open & save) So the first thing to do is add a footer to the DetailView to trigger each of these. I am going to separate this out into a new subview for neatness. ## Alert To make an Alert, I need an @State Bool which sets whether the alert is visible or not. All the button has to do is toggle that Bool. Stripping out the extra code and views, this is what I have. ```swift struct DialogsView: View { @State private var alertIsShowing = false var body: some View { Button("Alert") { self.alertIsShowing.toggle() } } } ``` To configure the alert itself, I added an alert modifier to the outermost view in this view. The `dialogResult` string is a diagnostic so that I can confirm that the results of the various dialogs get passed back to the parent view. ```swift Alert(title: Text("Alert"), message: Text("This is an alert!"), dismissButton: .default(Text("OK")) { self.dialogResult = "OK clicked in Alert" }) ``` There were a few things that tripped me up in this relatively short chunk of code. Firstly, both title and message must be Text views, not strings. 
If you get an error message that says “Cannot convert value of type ‘String’ to expected argument type ‘Text’", then you have forgotten to use a Text view. Then there is the button, which auto-suggest tells me is of type Alert.Button. I couldn't find any documentation for this, but delving into the definition for Alert, I see that there are three pre-defined button types: default, cancel or destructive. Cancel actually has two variants and will use a label appropriate to the user's locale if no label is supplied. Again, these buttons need a Text view as the label (if supplied) and can take an action closure, which I used to update my `dialogResult` string. This version showed a single `dismissButton` but I saw that there was a variation of Alert with `primary` and `secondary` buttons. It was not obvious that these would also dismiss the alert dialog, but I tried anyway. ```swift Alert(title: Text("Alert"), message: Text("This is an alert!"), primaryButton: .default(Text("OK"), action: { self.dialogResult = "OK clicked in Alert" }), secondaryButton: .cancel({ self.dialogResult = "Cancel clicked in Alert" })) ``` This worked very nicely and the Esc and Return keys triggered the two buttons as you would expect with both of them closing the dialog. ![Alert](https://troz.net/images/SwiftUI-Mac-alert.png) I tried using the `destructive` button type, but there was no difference to either the appearance or behavior of the button. So Alert is a great choice for a text-based dialog, either for informational use or to allow two choices of action. ## Action Very short section here - ‘ActionSheet’ is unavailable in macOS! I probably should have researched that before I started this section. So use Alerts, I guess, or a custom sheet. ## Sheets While Alerts have a very fixed structure, sheets allow us to put any SwiftUI view into a sheet dialog. So I added another Bool for the Sheet button to toggle, and added this sheet modifier. SheetView right now is simply a Text view. 
```swift .sheet(isPresented: $sheetIsShowing) { SheetView() } ``` This didn't work so well. It showed the sheet, but the sheet was tiny - only the size of the Text view it contained. And I had no way of dismissing it… The size problem was solved by setting a frame on the Text view in SheetView. The trick to dismissing the sheet is to pass it a Binding to the Bool that triggered it to open in the first place. If a button in the sheet sets this Bool back to false, the parent view will hide the sheet. That sounds confusing, but it works. ```swift .sheet(isPresented: $sheetIsShowing) { SheetView(isVisible: self.$sheetIsShowing) } struct SheetView: View { @Binding var isVisible: Bool var body: some View { VStack { Text("This is a sheet.") Button("OK") { self.isVisible = false } } .frame(width: 300, height: 150) } } ``` Here is a very bad diagram that tries to explain what is happening: ![Sheet](https://troz.net/images/SwiftUI-Mac-sheet.png) The parent view has an @State Boolean variable called `sheetIsShowing`. This is bound to the sheet's `isPresented`, so it dictates whether the sheet is visible. When the Sheet button is clicked, this variable is set to `true` and the sheet opens. But at the same time, a Binding to this variable is passed to the sheet. I deliberately gave this a different name, so as to make it clear which View was changing what variable. When the sheet wants to close, it does not close itself. Instead it sets this variable to false. Because it is a Binding, this sets the original `sheetIsShowing` variable on the parent view to false and the parent view then closes the sheet. ### Sheets & Data With this in place, I had the sheet opening and closing perfectly, but I was not yet passing data back & forth between the sheet and its parent view. I decided to put a TextField in the SheetView and bind its contents to the `dialogResult` property in the DetailView so that any edits appeared immediately in the DetailView. 
And while I am there, I might as well add some more decorations to the SheetView since it is a full View and not a restricted Alert. Calling the SheetView changed to this: ```swift .sheet(isPresented: $sheetIsShowing) { SheetView(isVisible: self.$sheetIsShowing, enteredText: self.$dialogResult) } ``` And the SheetView itself (not all the interface is listed here for brevity): ```swift struct SheetView: View { @Binding var isVisible: Bool @Binding var enteredText: String var body: some View { VStack { Text("Enter some text below…") .font(.headline) .multilineTextAlignment(.center) TextField("Enter the result of the dialog here…", text: $enteredText) .padding() HStack { Button("Cancel") { self.isVisible = false self.enteredText = "Cancel clicked in Sheet" } Spacer() Button("OK") { self.isVisible = false self.enteredText = "OK: \(self.enteredText)" } } } .frame(width: 300, height: 200) .padding() } } ``` ![Sheet with data](https://troz.net/images/SwiftUI-Mac-sheet-data.png) I only had two issues with this now. I was not able to get the focus into the TextField automatically when the sheet opened and I was not able to assign keyboard shortcuts to the Cancel and OK buttons so that they could be operated without a mouse. And as I mentioned in the previous part, I was not able to make the OK button take on the default styling. One useful technique that I developed: the SheetView is in the DialogsView.swift file instead of in its own SwiftUI file. It would probably be a good idea to separate it out, but I didn't, which meant that it had no Canvas preview to look at while I was laying it out. So I edited the PreviewProvider like this, so that I could change the comments to switch it between showing the DialogsView and showing the SheetView. 
```swift struct DialogsView_Previews: PreviewProvider { static var previews: some View { // DialogsView() SheetView(isVisible: .constant(true), enteredText: .constant("")) } } ``` ## Files AppKit provides NSOpenPanel for selecting a file and NSSavePanel for saving. I will try to implement NSSavePanel to allow saving the current cat image. Since this is an AppKit control rather than a SwiftUI control, I assumed that I would need to use NSViewRepresentable like I did for the NSColorWell in part 2. But NSColorWell is a descendant of NSView, while NSSavePanel is not. So I need a new idea. Rather naively, I thought maybe I could just create an NSSavePanel in a function inside DialogsView and see what happened. ```swift func saveImage() { let panel = NSSavePanel() panel.nameFieldLabel = "Save cat image as:" panel.nameFieldStringValue = "cat.jpg" panel.canCreateDirectories = true panel.begin { response in if response == NSApplication.ModalResponse.OK, let fileUrl = panel.url { print(fileUrl) } } } ``` Crash & burn… so what if I made the NSSavePanel an @State property of the View? No, that crashed even faster. Maybe SwiftUI Views don't like this sort of thing, but how about if I get the Application Delegate to handle it? What if I moved the `saveImage` method to the App Delegate and changed the calling function to access it there? Still crashed. At this stage I am beginning to wonder if I know how to use an NSSavePanel. Time to create a simple test app without SwiftUI and see what happens. Well it appears that I no longer know how to use an NSSavePanel. Code from an older project that worked fine will not work now! Guess what - it was a macOS Catalina security issue which I would have realised faster if I had opened the Console. Back to the Signing & Capabilities section of the target settings and this time I set File Access for User Selected File to Read/Write. Now the NSSavePanel opens when called from DialogsView and prints the selected file URL if one is chosen. 
But this is all happening in DialogsView, which is a subview of DetailView. And DetailView is the view that holds the image, not DialogsView. So how can I save the image? Do I pass the URL to DetailView or pass the image to DialogsView? Or do something clever with Notifications and Subscriptions? I really don't know what is best, but I have decided to post a Notification with the URL as its object. DetailView can receive this Notification and save the image whenever it is received. So I replaced the `print` line in the `saveImage()` method with: ```swift NotificationCenter.default.post(name: .saveImage, object: fileUrl) ``` And in DetailView, I set up the publisher: ```swift private let saveImageUrlSelected = NotificationCenter.default .publisher(for: .saveImage) var body: some View { VStack { // view code removed for brevity } .onReceive(saveImageUrlSelected) { publisher in if let saveUrl = publisher.object as? URL, let imageData = self.catImage?.tiffRepresentation { if let imageRep = NSBitmapImageRep(data: imageData) { if let saveData = imageRep.representation(using: .jpeg, properties: [:]) { try? saveData.write(to: saveUrl) } } } } } ``` And there we have it. Three types of dialogs demonstrated in a SwiftUI for Mac app: 1. Alerts: good for simple, text-only dialogs 2. Sheets: good for more complex dialogs 3. Panels: AppKit dialogs that can be called from a SwiftUI View. I think this time I really am finished. This article has already expanded out into a 3-part monster, so I think it is way past time that I stopped typing. I hope you have enjoyed this series. I would love to hear from anyone who found this series useful or who had any suggestions or corrections to make. The final project is available on [GitHub](https://github.com/trozware/swiftui-mac) if you would like to download it and take a look.
trozware
254,554
iPad Sidecar Issues Over USB with VPN
When I'm working away from home, I like to use my iPad as a second monitor using Apple's "sidecar" fe...
0
2020-02-03T20:16:17
https://dev.to/bmatcuk/ipad-sidecar-issues-over-usb-with-vpn-3m7
ipad, sidecar, vpn, usb
When I'm working away from home, I like to use my iPad as a second monitor using Apple's "sidecar" feature. However, I noticed that, if the wifi network isn't great, there can be some performance issues or disconnects. So, I wanted to use a USB cable instead. This worked fine, until I switched on my company's VPN. Within seconds, sidecar disconnected with a cryptic error message. Any attempt to reconnect failed with odd behavior (I'd get an error in OSX, but a black screen would load on the iPad).

Turns out: VPN clients will disable IPv6 unless the VPN configuration has explicit support for IPv6. The reason for this is that a large majority of VPNs are _not_ configured for IPv6, so any IPv6 traffic will bypass the VPN. If you're attempting to use the VPN to secure all of your internet traffic, this is a problem! All of your IPv6 traffic will "leak".

It also turns out that connecting your iPad via USB makes it appear as a virtual ethernet device, configured to use IPv6. So, connecting to the VPN disables IPv6 and sidecar can no longer communicate with the iPad.

However, I'm not using a VPN to secure _all_ of my traffic—just the traffic to my company's network. All of that traffic is IPv4 anyway, so this security feature is lost on me. Disabling it solves all my sidecar issues! Woo!

So, if you're in the same boat as me, trying to use sidecar with a VPN, try searching through your VPN client's settings or documentation for this feature and disable it. Here's how to do that with tunnelblick:

1. Click on tunnelblick's icon in the menu bar
2. Select "VPN Details"
3. Click on "Configurations" at the top
4. Select your VPN configuration from the left-hand side
5. Uncheck "Disable IPv6 unless the VPN server is accessed using IPv6"
6. You may need to restart the VPN
7. Profit.
bmatcuk
221,498
Why are these the gitignore rules for VS Code?
If you search for good .gitignore rules for Visual Studio Code, you often come across these. ## ##...
0
2019-12-15T18:57:07
https://dev.to/thebuzzsaw/why-are-these-the-gitignore-rules-for-vs-code-3p03
csharp, dotnet, git
If you search for good `.gitignore` rules for Visual Studio Code, you often come across these.

```
##
## Visual Studio Code
##
.vscode/*
!.vscode/settings.json
!.vscode/tasks.json
!.vscode/launch.json
!.vscode/extensions.json
```

I tried using them for a while, but they just led to problems. It is the equivalent of committing the `.user` files in other .NET projects. The file `launch.json`, for example, contains my own command-line arguments, environment variables, etc. Why would I want those committed? Why would I want changes from other developers overriding mine all the time? Who decided this was a good idea? It's particularly annoying because `dotnet new gitignore` imports these questionable rules.

I just block the entire folder now.

```
.vscode/
```

If I ever clone my repo from scratch, VS Code offers to whip up a new set of files for me anyway.

Are there other languages/environments where committing these files is extremely beneficial?
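Back on those common rules: you can see concretely how they behave by probing them with `git check-ignore` in a scratch repository (a sketch; the throwaway-repo setup is just for illustration):

```shell
# Create a throwaway repo with a subset of the common VS Code ignore rules
tmp=$(mktemp -d) && cd "$tmp" && git init -q .
printf '.vscode/*\n!.vscode/settings.json\n' > .gitignore
mkdir .vscode && touch .vscode/settings.json .vscode/launch.json

# launch.json matches .vscode/* and is ignored (check-ignore prints the path)
git check-ignore .vscode/launch.json

# settings.json is re-included by the ! rule, so check-ignore exits non-zero
git check-ignore .vscode/settings.json || echo "settings.json is tracked"
```

The negation only works because the pattern is `.vscode/*` rather than `.vscode/` — git cannot re-include a file whose parent directory is itself excluded.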
thebuzzsaw
221,975
Using taxonomies in a form's select field
The post Using taxonomies in a form's select field was published first on 🈴KungFuPress by...
0
2020-01-26T00:40:51
https://kungfupress.com/usar-taxonomias-desde-el-campo-select-de-un-formulario/
customposttype, plugins, taxonomías
---
title: Using taxonomies in a form's select field
published: true
date: 2019-12-18 16:40:21 UTC
tags: Custom Post Type,Plugins,taxonomies
canonical_url: https://kungfupress.com/usar-taxonomias-desde-el-campo-select-de-un-formulario/
---

The post [Using taxonomies in a form's select field](https://kungfupress.com/usar-taxonomias-desde-el-campo-select-de-un-formulario/) was published first on [🈴KungFuPress](https://kungfupress.com) by [Kung Fu Press](https://kungfupress.com/author/kungfupress/)

When you build a form in WordPress, you will almost always need to show the user a list of options to pick one from: the typical select field. Sometimes, to finish sooner, you hard-code the options, but when you later need to add or remove options you will have to edit the code. Things get complicated if you are not around to do it when that need arises. That is why it is good practice to keep these options out of the code and store them in the database.

You could create a table for this, but then you would have to build a listing and a form to review, add or modify items. If you want to save yourself that work, I recommend using WordPress taxonomies. In this tutorial you will learn how to use taxonomies in a form's select field and how to save the selection along with the rest of the data. Each form submission will be stored as a Custom Post Type (CPT).

## Creating the plugin skeleton and the CPT

The first thing you are going to do is create the basic structure of the plugin and register a CPT as briefly as possible. Create a folder with the plugin's name and a PHP file inside with the same name as the folder. It will contain a couple of constants with plugin information and the call to the **plugin-init.php** file, where you will later define the custom post type (CPT) and the custom taxonomy.
As always, you have the [complete plugin code on GitHub](https://github.com/kungfupress/kfp-formtaxon/tree/v0.1.0), but remember that if you are following this tutorial to learn, I recommend typing all the code yourself.

```
<?php
/**
 * Plugin Name: KFP FormTaxon
 * Plugin URI: https://github.com/kungfupress/kfp-formtaxon
 * Description: Example of using a custom taxonomy from a form
 * Version: 0.1.0
 * Author: Juanan Ruiz
 * Author URI: https://kungfupress.com/
 * PHP Version: 5.6
 *
 * @package kfp_ftx
 */

defined( 'ABSPATH' ) || die();

// Constants that affect every file in the plugin.
define( 'KFP_FTX_DIR', plugin_dir_path( __FILE__ ) );
define( 'KFP_FTX_URL', plugin_dir_url( __FILE__ ) );
define( 'KFP_FTX_VERSION', '0.1.0' );

// Create the CPT and the taxonomy.
require_once KFP_FTX_DIR . 'include/plugin-init.php';
```

## Creating the CPT

Create an **include** folder and, inside it, the **plugin-init.php** file. In this file you will create the CPT and the taxonomy. You will use the `add_action('init')` **hook** so that the CPT is defined before other plugin actions run. To define the CPT you will use the `register_post_type()` function, which takes the slug (internal name) of the CPT and an array of arguments where many of its aspects can be customized, although this example keeps it to the minimum to get to the point.
You have another, more extensive article on [how to create custom post types and fields](https://kungfupress.com/como-crear-tipos-y-campos-personalizados-en-wordpress/)

```
<?php
/**
 * File: kfp-formtaxon/include/plugin-init.php
 *
 * @package kfp_formtaxon
 */

defined( 'ABSPATH' ) || die();

add_action( 'init', 'kfp_cpt_taller', 10 );
/**
 * Creates the Taller CPT with the bare minimum needed for a CPT
 *
 * @return void
 */
function kfp_cpt_taller() {
	$args = array(
		'public' => true,
		'label'  => 'Taller',
	);
	register_post_type( 'kfp-taller', $args );
}
```

## How to create a custom taxonomy

Add the code to define the taxonomy and a few default example terms. For this you will again use the `add_action('init')` **hook** and the `register_taxonomy()` function, which takes three parameters: the taxonomy slug, an array with the post types that can use this taxonomy, and another array with the rest of its properties, again kept to a minimum. In this example the CPT represents a training activity (**taller**, a workshop) and the taxonomy the place (**lugar**) where it is held.

```php
add_action( 'init', 'kfp_taxonomy_lugares', 0 );
/**
 * Registers the taxonomy with the bare minimum
 *
 * @return void
 */
function kfp_taxonomy_lugares() {
	$args = array(
		'label'             => 'Lugar',
		'hierarchical'      => true,
		'show_admin_column' => true,
	);
	register_taxonomy( 'kfp-lugar', array( 'kfp-taller' ), $args );
}

add_action( 'init', 'kfp_lugares_add', 1 );
/**
 * Adds some default example places
 *
 * @return void
 */
function kfp_lugares_add() {
	$lugares = array(
		'Escuela de Ingenieros Informáticos',
		'Facultad de Derecho',
		'Facultad de Bellas Artes',
		'Facultad de Medicina',
		'Rectorado',
	);
	foreach ( $lugares as $lugar ) {
		wp_insert_term( $lugar, 'kfp-lugar' );
	}
}
```

## Creating the form

Now you will create the form that will let you create a new workshop from the frontend.
To show it on the public side or frontend of your site, you could create a specific template or reuse some theme element such as the header, the footer or even a widget. For this example you will attach the form to a shortcode, which will let you include it in any page or post you want, even in a widget where you could place the shortcode. If you would rather have the form in the backend, you could create an admin panel there and reuse the code you will see here with a few adaptations.

![Form with a taxonomy in a select field](https://kungfupress.com/wp-content/uploads/2019/12/formulario_lugares.gif)

So create a new file called **form_taller_shortcode.php** inside the **include** folder. Remember that you need to show the list of places in the form, so the first thing you will do is fetch all the taxonomy terms using WordPress's `get_terms()` function. Then you render the form, which will be processed by WordPress's **admin-post.php** file, as indicated in the form's **action**. A hidden form field, also called **action**, will let WordPress know which form is being saved, so you must use a unique identifier; I chose **kfp-ftx-taller**. The form is completed with a security nonce field, the name and description of the new workshop and, of course, the dropdown list to select the place where the workshop will be held.

```
<?php
/**
 * File: kfp-formtaxon/include/form-taller-shortcode.php
 *
 * @package kfp_ftx
 */

defined( 'ABSPATH' ) || die();

add_shortcode( 'kfp_ftx_crear_taller', 'kfp_ftx_crear_taller' );
/**
 * Renders the form to create a new workshop.
 *
 * @return string
 */
function kfp_ftx_crear_taller() {
	// Fetch the existing places from the database into the $lugares variable.
	// This variable will receive an array of taxonomy term objects.
	$lugares = get_terms(
		'kfp-lugar',
		array(
			'orderby'    => 'term_id',
			'hide_empty' => 0,
		)
	);
	ob_start();
	?>
	<form action="<?php echo esc_url( admin_url( 'admin-post.php' ) ); ?>" method="post">
		<?php wp_nonce_field( 'kfp-ftx-taller', 'kfp-ftx-taller-nonce' ); ?>
		<input type="hidden" name="action" value="kfp-ftx-taller">
		<div class="form-input">
			<label for="nombre">Taller</label>
			<input type="text" name="nombre" id="nombre" required>
		</div>
		<div class="form-input">
			<label for="id_lugar">Lugar</label>
			<select name="id_lugar" required>
				<option value="">Selecciona el lugar</option>
				<?php
				foreach ( $lugares as $lugar ) {
					echo( '<option value="' . esc_attr( $lugar->term_id ) . '">' . esc_attr( $lugar->name ) . '</option>' );
				}
				?>
			</select>
		</div>
		<div class="form-input">
			<label for="descripcion">Descripción</label>
			<textarea name="descripcion" id="descripcion"></textarea>
		</div>
		<div class="form-input">
			<input type="submit" value="Enviar">
		</div>
	</form>
	<?php
	return ob_get_clean();
}
```

Create a page or post on your site where you will place the shortcode `[kfp_ftx_crear_taller]`. Check that the form shows up inside the post and that the **Lugar** field contains the example terms you added when defining the taxonomy.

## Saving the form and the taxonomy to the database

To save the form data you will use a dynamic hook (you make up part of its name) that will be processed by the **wp-admin/admin-post.php** file. For now the script will process submissions from both authenticated and anonymous users, so you will use two hooks: **admin_post_{$action}** and **admin_post_nopriv_{$action}**. Remember that you defined the value of **{$action}** in a hidden form field.

![Diagram of how form data is saved using admin-post.php](https://kungfupress.com/wp-content/uploads/2019/12/explica-llamada-form-1024x966.png)

Create a new file **include/form-taller-grabar.php** and add the hooks I have just explained.
Define the `kfp_ftx_graba_taller()` function where, once you have checked that the required fields are filled in and the nonce is valid, you can save the **taller** CPT using the `wp_insert_post()` function.

```
<?php
/**
 * File: kfp-formtaxon/include/form-taller-grabar.php
 *
 * @package kfp_ftx
 */

defined( 'ABSPATH' ) || die();

// Add the action hooks that save the form (the first one for logged-in
// users and the other for everyone else).
// Whatever comes after admin_post_ and admin_post_nopriv_ must match the
// value of the input field named "action" in the submitted form.
add_action( 'admin_post_kfp-ftx-taller', 'kfp_ftx_graba_taller' );
add_action( 'admin_post_nopriv_kfp-ftx-taller', 'kfp_ftx_graba_taller' );
/**
 * Saves the data sent by the form as a new kfp-taller CPT
 *
 * @return void
 */
function kfp_ftx_graba_taller() {
	// Check required fields and the nonce.
	if ( isset( $_POST['nombre'] ) && isset( $_POST['id_lugar'] )
		&& isset( $_POST['kfp-ftx-taller-nonce'] )
		&& wp_verify_nonce( sanitize_text_field( wp_unslash( $_POST['kfp-ftx-taller-nonce'] ) ), 'kfp-ftx-taller' )
	) {
		$nombre      = sanitize_text_field( wp_unslash( $_POST['nombre'] ) );
		$descripcion = sanitize_text_field( wp_unslash( $_POST['descripcion'] ) );
		$id_lugar    = (int) $_POST['id_lugar'];
		$args        = array(
			'post_title'     => $nombre,
			'post_content'   => $descripcion,
			'post_type'      => 'kfp-taller',
			'post_status'    => 'draft',
			'comment_status' => 'closed',
			'ping_status'    => 'closed',
		);
		// The $post_id variable holds the ID of the new record.
		// It comes in very handy for saving the metadata.
		$post_id = wp_insert_post( $args );
	}
}
```

Check that everything works and the new CPT gets saved. When you submit the form you will get a blank screen, but you can verify from the dashboard that a new workshop has been created. You will fix the blank screen later; first you have to associate the place taxonomy with the workshop.
## How to save a taxonomy on a post

In the previous step, when saving the new post of type taller, you captured its identifier (ID) in the `$post_id` variable, which is what the `wp_insert_post()` function returned. Now you will use the `wp_set_object_terms()` function, which takes the post ID as its first parameter. Then you need the ID of the term you assigned from the form (you captured it in the `$id_lugar` variable) and, finally, the taxonomy, in this case `'kfp-lugar'`. The end of the code then looks like this:

```php
$post_id = wp_insert_post( $args );
$term_taxonomy_ids = wp_set_object_terms( $post_id, $id_lugar, 'kfp-lugar' );
```

So you would only have to add that last line, save a new workshop from the form again and check from the dashboard that the taxonomy was saved along with the workshop.

To finish, you have to get rid of the blank screen that appears after submitting the form. It is not an error, it is normal; you are using admin-post.php precisely because it is much faster than loading the whole site, the theme and the current content. With admin-post.php you process the form with a minimal WordPress load and then redirect wherever you want. In this case, back to the same form with a success or error message, but it could send the user to a page listing all the workshops proposed so far, or all the workshops proposed by this user.

Here is the complete code, including redirects and messages, of the **include/form-taller-grabar.php** file:

```
<?php
/**
 * File: kfp-formtaxon/include/form-taller-grabar.php
 *
 * @package kfp_ftx
 */

defined( 'ABSPATH' ) || die();

// Add the action hooks that save the form (the first one for logged-in
// users and the other for everyone else).
// Whatever comes after admin_post_ and admin_post_nopriv_ must match the
// value of the input field named "action" in the submitted form.
add_action( 'admin_post_kfp-ftx-taller', 'kfp_ftx_graba_taller' );
add_action( 'admin_post_nopriv_kfp-ftx-taller', 'kfp_ftx_graba_taller' );
/**
 * Saves the data sent by the form as a new kfp-taller CPT
 *
 * @return void
 */
function kfp_ftx_graba_taller() {
	// If present in $_POST, reuse one of the fields wp_nonce creates to get back to the form.
	$url_origen = home_url( '/' );
	if ( ! empty( $_POST['_wp_http_referer'] ) ) {
		$url_origen = esc_url_raw( wp_unslash( $_POST['_wp_http_referer'] ) );
	}
	// Check required fields and the nonce.
	if ( isset( $_POST['nombre'] ) && isset( $_POST['id_lugar'] )
		&& isset( $_POST['kfp-ftx-taller-nonce'] )
		&& wp_verify_nonce( sanitize_text_field( wp_unslash( $_POST['kfp-ftx-taller-nonce'] ) ), 'kfp-ftx-taller' )
	) {
		$nombre      = sanitize_text_field( wp_unslash( $_POST['nombre'] ) );
		$descripcion = sanitize_text_field( wp_unslash( $_POST['descripcion'] ) );
		$id_lugar    = (int) $_POST['id_lugar'];
		$args        = array(
			'post_title'     => $nombre,
			'post_content'   => $descripcion,
			'post_type'      => 'kfp-taller',
			'post_status'    => 'draft',
			'comment_status' => 'closed',
			'ping_status'    => 'closed',
		);
		// The $post_id variable holds the ID of the new record.
		// It comes in very handy for saving the taxonomy.
		$post_id           = wp_insert_post( $args );
		$term_taxonomy_ids = wp_set_object_terms( $post_id, $id_lugar, 'kfp-lugar' );
		$query_arg = array( 'kfp-ftx-resultado' => 'success' );
		wp_redirect( esc_url_raw( add_query_arg( $query_arg, $url_origen ) ) );
		exit();
	}
	$query_arg = array( 'kfp-ftx-resultado' => 'error' );
	wp_redirect( esc_url_raw( add_query_arg( $query_arg, $url_origen ) ) );
	exit();
}
```

You will see that the form now redirects to the form's own page or post, and the result ('success' or 'error') appears in your browser's address bar, but you still have to make it show up inside the page.
To do that, place these two "if" blocks right before rendering the form in the **include/form-taller-shortcode.php** file:

```php
ob_start();
if ( filter_input( INPUT_GET, 'kfp-ftx-resultado', FILTER_SANITIZE_STRING ) === 'success' ) {
	echo '<h4>Se ha grabado el taller correctamente</h4>';
}
if ( filter_input( INPUT_GET, 'kfp-ftx-resultado', FILTER_SANITIZE_STRING ) === 'error' ) {
	echo '<h4>Se ha producido un error al grabar el taller</h4>';
}
?>
<form action="<?php echo esc_url( admin_url( 'admin-post.php' ) ); ?>" method="post">
```

To wrap up, let me go over the possible ways of managing your custom taxonomies.

## How to manage your custom taxonomy

If you are working with posts, pages or custom post types (CPTs), you will find the taxonomy management screen attached to the admin menu of the post types the taxonomy is associated with.

![Panel for managing a custom taxonomy from the WordPress dashboard](https://kungfupress.com/wp-content/uploads/2019/12/taxonomia-asociada-a-cpt.png)

In the image above you can see the "Lugar" taxonomy associated with the "Taller" Custom Post Type; from this CPT's menu you can list, create, edit and delete terms of this taxonomy.

If you are working with your own table (something that should always be a last resort) you could associate the taxonomy with regular posts and manage it from there just the same. But it might be more elegant (and more usable) to put a link or a button on the form itself pointing to the taxonomy edit screen.
In this example it would be something like:

```php
<a href="wp-admin/edit-tags.php?taxonomy=provincias">Editar provincias</a>
```

You can actually use this last option even when using a Custom Post Type, as long as the form lives in the dashboard or backend (for the frontend it would be a different story, and it might not make sense for just any user to be able to edit a taxonomy).

## Conclusions

Using taxonomies is the optimal way to offer your users a closed set of options when filling in a form; in some cases you could even let users themselves add new options if needed. In any case, taxonomies give site administrators the tools they need to manage them, tools that are already built into WordPress. Once you have applied what you learned here a couple of times, you will wonder how you ever managed without custom taxonomies.

I hope you enjoyed the tutorial and can put it to good use. Remember that you have the [complete plugin code on GitHub](https://github.com/kungfupress/kfp-formtaxon/tree/v0.1.0).
## References

- [https://developer.wordpress.org/reference/files/wp-admin/admin-post.php/](https://developer.wordpress.org/reference/files/wp-admin/admin-post.php/)
- [https://developer.wordpress.org/reference/functions/wp\_set\_object\_terms/](https://developer.wordpress.org/reference/functions/wp_set_object_terms/)
- Header image by [Amelie & Niklas Ohlrogge](https://unsplash.com/@pirye?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) on [Unsplash](https://unsplash.com/s/photos/library?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText)

The post [Using taxonomies in a form's select field](https://kungfupress.com/usar-taxonomias-desde-el-campo-select-de-un-formulario/) was published first on [🈴KungFuPress](https://kungfupress.com) by [Kung Fu Press](https://kungfupress.com/author/kungfupress/)
juananruiz
222,581
W3C confirms: WebAssembly becomes the fourth language for the Web 🔥 What do you think?
World Wide Web Consortium (W3C) brings a new language to the Web as WebAssembly becomes a W3C Recomm...
0
2019-12-17T15:34:53
https://dev.to/destrodevshow/w3c-confirms-webassembly-becomes-the-fourth-language-for-the-web-what-do-you-think-45e0
webassembly, javascript, performance, discuss
<b>World Wide Web Consortium (W3C) brings a new language to the Web as WebAssembly becomes a W3C Recommendation. Following HTML, CSS, and JavaScript, WebAssembly becomes the fourth language for the Web which allows code to run in the browser.</b>

---------------

<h3>5 December 2019</h3>

The World Wide Web Consortium (W3C) announced that the WebAssembly Core Specification is now an official web standard, launching a powerful new language for the Web. WebAssembly is a safe, portable, low-level format designed for efficient execution and compact representation of code on modern processors including in a web browser.

*“The arrival of WebAssembly expands the range of applications that can be achieved by simply using Open Web Platform technologies. In a world where machine learning and Artificial Intelligence become more and more common, it is important to enable high performance applications on the Web, without compromising the safety of the users,”* - declared Philippe Le Hégaret, W3C Project Lead.

<h5>High-performance applications relying on a low-level infrastructure</h5>

At its core, WebAssembly is a virtual instruction set architecture that enables high-performance applications on the Web, and can be employed in many other environments. There are multiple implementations of WebAssembly, including browsers and stand-alone systems. WebAssembly can be used for applications like video and audio codecs, graphics and 3D, multi-media and games, cryptographic computations or portable language implementations.

<h5>WebAssembly enhances Web Performance</h5>

WebAssembly improves Web performance and power consumption by being a virtual machine and execution environment enabling loaded pages to run as native compiled code. In other words, WebAssembly enables near-native performance, optimized load time, and perhaps most importantly, a compilation target for existing code bases.
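To make the "compact byte code" point concrete, here is a tiny hand-assembled WebAssembly module run from JavaScript (a sketch using the standard `WebAssembly` JS API; in a real project you would compile the module from C, Rust, etc. rather than write the bytes by hand):

```javascript
// A minimal, hand-assembled WebAssembly module that exports add(a, b).
// The bytes are the binary encoding of:
//   (module (func (export "add") (param i32 i32) (result i32)
//     local.get 0 local.get 1 i32.add))
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00, // magic "\0asm" + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section: one func, type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section header
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0, local.get 1, i32.add, end
]);

const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));
console.log(instance.exports.add(2, 3)); // 5
```

The whole module is 41 bytes, and the exported function is callable from JavaScript like any other — which is exactly the embedding model the press release describes.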
Despite a small number of native types, much of the performance increase relative to JavaScript derives from its use of consistent typing. WebAssembly leverages decades of optimization for compiled languages and its byte code is optimized for compactness and streaming. A web page can start executing while the rest of the code downloads. Network and API access occurs through accompanying JavaScript libraries. The security model is identical to that of JavaScript.

Read the full article [<b>here 👉 published on W3.org</b>](https://www.w3.org/2019/12/pressrelease-wasm-rec.html.en)

<h4>What do you think about this huge change? 🤔</h4>

I am sharing one awesome talk from Lin Clark about WebAssembly.

{% youtube HktWin_LPf4 %}

Cheers! :wave:

As I am trying to contribute content on the Web, you can buy me a coffee for the hours I have spent on all of this ❤️😊🌸

<a href="https://www.buymeacoffee.com/destromas" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/default-orange.png" alt="Buy Me A Coffee" style="height: 51px !important;width: 217px !important;" ></a>
destro_mas
222,594
What's the best thing to do when you've run into a debugging dead end?
So you've hit a wall. What is the best course of action in this situation?
0
2019-12-17T15:59:02
https://dev.to/ben/what-s-the-best-thing-to-do-when-you-ve-run-into-a-debugging-dead-end-39jg
discuss, debugging
So you've hit a wall. What is the best course of action in this situation?
ben
268,919
Sign in with Google Button 🎯🪁
A few months ago when I started learning 👨‍💻 Web Development from Eduonix I was super curious...
0
2020-02-25T16:12:50
https://dev.to/atulcodex/sign-in-with-google-button-4je4
google, html, css
---
title: Sign in with Google Button 🎯🪁
published: true
tags: google, html, css
---

A few months ago, when I started learning 👨‍💻 [Web Development](https://www.eduonix.com/web-development-css3-scratch-till-advanced-project-based/UHJvZHVjdC0xMzE1MDIw) from [Eduonix](https://www.eduonix.com/web-development-css3-scratch-till-advanced-project-based/UHJvZHVjdC0xMzE1MDIw), I was super curious about this Google button ⛈. I tried several times to create it on my own, but I failed 😢😭 many times because the Google logo image in the button confused me. Today you can see my little achievement here on my CodePen. It is like a motivation 🧘‍♂️ for me in my dev journey, and that's why I am sharing it with our dev community 👨‍👨‍👦‍👦👨‍👨‍👧‍👧👩‍👩‍👧‍👦👨‍👦. Hey guys, if you have faced a situation like this in your journey, please comment below; it will boost our productivity a little bit more.

{% codepen https://codepen.io/atulcodex/pen/QWbdMrP %}
atulcodex
228,931
Vue.js Composition API: usage with MediaDevices API
Introduction In this article I would like to share my experience about how Vue Composition...
0
2019-12-30T22:18:02
https://dev.to/3vilarthas/vue-js-composition-api-usage-with-mediadevices-api-2efb
vue, javascript, tutorial
## Introduction

In this article I would like to share my experience of how the Vue Composition API helped me organise and structure work with the browser's [`navigator.mediaDevices`](https://developer.mozilla.org/en-US/docs/Web/API/Navigator/mediaDevices) API.

It's **highly encouraged** to skim through the RFC of the upcoming [Composition API](https://vue-composition-api-rfc.netlify.com) before reading.

## Task

The task I received was not trivial:

* the application should display all the connected cameras, microphones and speakers that the user has;
* the user should be able to switch between them (e.g. if the user has two cameras, he/she can choose which one is active);
* the application should react appropriately when the user connects or disconnects devices;
* the solution should be easily reusable, so developers can use it on any page.

## Solution

Until now, the only way to reuse logic across components was `mixins`. But they have their own nasty drawbacks, so I decided to give the new Composition API a chance.

> The Vue Composition API allows you to easily share stateful logic across components.

Let's start with separation of concerns – create three appropriate hooks: `useCamera`, `useMicrophone` and `useSpeaker`. Each hook encapsulates the logic related to a specific device kind.
Let's look at one of them — `useCamera`:

`useCamera.ts`:

```ts
import { ref, onMounted, onUnmounted } from '@vue/composition-api'

export function useCamera() {
  const camera = ref('')
  const cameras = ref<MediaDeviceInfo[]>([])

  function handler() {
    navigator.mediaDevices.enumerateDevices().then(devices => {
      const value = devices.filter(device => device.kind === 'videoinput')
      cameras.value = value
      if (cameras.value.length > 0) {
        camera.value = cameras.value[0].deviceId
      }
    })
  }

  onMounted(() => {
    if (navigator && navigator.mediaDevices) {
      navigator.mediaDevices.addEventListener('devicechange', handler)
      handler()
    }
  })

  onUnmounted(() => {
    if (navigator && navigator.mediaDevices) {
      navigator.mediaDevices.removeEventListener('devicechange', handler)
    }
  })

  return {
    camera,
    cameras,
  }
}
```

Here are some explanations.

First off, we create two variables:

* `camera`, which will store the `deviceId` of the active camera (remember that the user can choose the active device);
* `cameras`, which will contain the list of all connected cameras.

These variables are supposed to be consumed by the component, so we return them.

There is a `handler` function which enumerates all the connected devices and keeps only those with `kind === 'videoinput'` in the `cameras` array.

The type of the `cameras` variable is `MediaDeviceInfo[]`; here is the snippet from [`lib.dom.d.ts`](https://github.com/microsoft/TypeScript/blob/master/lib/lib.dom.d.ts#L10237) which declares that interface:

```ts
type MediaDeviceKind = "audioinput" | "audiooutput" | "videoinput";

/** The MediaDevicesInfo interface contains information that describes a single media input or output device. */
interface MediaDeviceInfo {
    readonly deviceId: string;
    readonly groupId: string;
    readonly kind: MediaDeviceKind;
    readonly label: string;
    toJSON(): any;
}
```

The Composition API provides us with the `onMounted` and `onUnmounted` hooks, which are the analog of the current Options API `mounted` and `destroyed` hooks.
As you can see, we invoke our `handler` function in the `onMounted` hook to get the list of cameras when the component mounts. Since devices can be connected or disconnected during the runtime of the application, we have to synchronize our data model with the actually connected devices. To accomplish that task we need to subscribe to the [`devicechange`](https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/ondevicechange) event, which fires either when a new device connects or an already connected device disconnects. Since we subscribed, **we must not forget to unsubscribe from this event when the component is destroyed**, to avoid catching any nasty bugs.

We have everything set up, so now let's use our custom hook in a component.

`component.vue`:

```vue
<script lang="ts">
import { createComponent, computed, watch } from '@vue/composition-api'
import { useCamera } from '@/use/camera'

export default createComponent({
  name: 'MyComponent',
  setup() {
    const { camera, cameras } = useCamera()

    const camerasLabels = computed(() =>
      cameras.value.map(camera => camera.label || camera.deviceId)
    )

    watch(cameras, value => {
      console.log(value)
    })

    return {
      camerasLabels,
    }
  },
})
</script>

<template>
  <section>Connected cameras: {{ camerasLabels }}</section>
</template>
```

Our hook can only be used during the invocation of the `setup` hook. When the hook is invoked it returns our two variables: `camera` and `cameras`. From that moment we can do whatever we want – we have fully reactive variables, as we would have with `data` using the Options API. For example, let's create a computed property `camerasLabels` which will list the labels of `cameras`.

**Note** that when a new camera connects or an already connected camera disconnects, our hook will handle it and update the `cameras` value, which itself is reactive, so our template will be updated too. We can even watch `cameras` and perform our custom logic.
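The filtering inside `handler` is a pure transformation of the `enumerateDevices()` result, so it can be extracted and unit-tested without a browser (a sketch; `filterByKind` and the sample device objects are hypothetical helpers, not part of the hooks above):

```typescript
// Hypothetical helper mirroring the devices.filter(...) call inside handler.
type Kind = 'audioinput' | 'audiooutput' | 'videoinput'

interface DeviceLike {
  deviceId: string
  kind: Kind
  label: string
}

function filterByKind(devices: DeviceLike[], kind: Kind): DeviceLike[] {
  return devices.filter(device => device.kind === kind)
}

// Usage with plain objects shaped like MediaDeviceInfo:
const devices: DeviceLike[] = [
  { deviceId: 'cam1', kind: 'videoinput', label: 'FaceTime HD Camera' },
  { deviceId: 'mic1', kind: 'audioinput', label: 'Built-in Microphone' },
  { deviceId: 'spk1', kind: 'audiooutput', label: 'Built-in Speakers' },
]

console.log(filterByKind(devices, 'videoinput').map(d => d.deviceId)) // ['cam1']
```

Keeping the filter pure like this also makes the per-kind hooks trivial wrappers around one shared function.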
The code of `useMicrophone` and `useSpeaker` is the same; the only difference is the `device.kind` in the `handler` function. Thus, the solution can be reduced to one hook – `useDevice` – which accepts the device kind as its first argument:

```ts
export function useDevice(kind: MediaDeviceKind) {
  // ... the same logic

  function handler() {
    navigator.mediaDevices.enumerateDevices().then(devices => {
      const value = devices.filter(device => device.kind === kind) // <- filter by device kind
      // ... the same logic
    })
  }

  // ... the same logic
}
```

But I would prefer to split it up into three different hooks, because there might be logic specific to the device kind. So our final solution looks something like this:

```vue
<script lang="ts">
import { createComponent, computed, watch } from '@vue/composition-api'
import { useCamera } from '../use/camera'
import { useMicrophone } from '../use/microphone'
import { useSpeaker } from '../use/speaker'

export default createComponent({
  name: 'App',
  setup() {
    const { camera, cameras } = useCamera()
    const { microphone, microphones } = useMicrophone()
    const { speaker, speakers } = useSpeaker()

    // computed
    const camerasLabels = computed(() =>
      cameras.value.map(camera => camera.label)
    )

    // or method
    function getDevicesLabels(devices: MediaDeviceInfo[]) {
      return devices.map(device => device.label)
    }

    watch(cameras, value => {
      console.log(value)
    })

    return {
      camerasLabels,
      microphones,
      speakers,
      getDevicesLabels,
    }
  },
})
</script>

<template>
  <ul>
    <li>Connected cameras: {{ camerasLabels }}</li>
    <li>Connected microphones: {{ getDevicesLabels(microphones) }}</li>
    <li>Connected speakers: {{ getDevicesLabels(speakers) }}</li>
  </ul>
</template>
```

## Demo

Live demo is located [here](https://codesandbox.io/s/vue-composition-api-media-devices-ekz22). You can experiment with it a little – connect a new microphone or camera and you will see how the application reacts.

I have cheated a little.
As you can see, there are some lines:

```ts
await navigator.mediaDevices.getUserMedia({ video: true }) // <- in useCamera
await navigator.mediaDevices.getUserMedia({ audio: true }) // <- in useMicrophone and useSpeaker
```

These lines make sure that the user has granted access to the camera and microphone. If the user has denied access to the devices, the hooks won't work – so they assume that access has been granted.

## Conclusion

We have created a bunch of useful hooks that can be easily shared across projects to facilitate work with `navigator.mediaDevices`. Our hooks react to the actual device state and synchronize it with the data model. The API is simple enough – just execute the hook in the `setup` method; all the logic is encapsulated in the hook itself.

**P.S.** If you like the article, please click "heart" or "unicorn" — it will give me some motivation to write the next article, where I plan to showcase how to combine these hooks into a `useMediaStream` hook, which contains a stream with our active camera and microphone. That article will also describe how to change the input and output sources of the stream.
3vilarthas
232,868
What does everyone plan to learn in web development this year?
A post by John Au-Yeung
0
2020-01-06T03:21:29
https://dev.to/aumayeung/what-does-everyone-plan-to-learn-in-web-development-this-year-5ej
discuss, javascript, webdev
aumayeung
235,498
Why and how you should migrate from Visual Studio Code to VSCodium
Why and how you should migrate from Visual Studio Code to VSCodium
4,182
2020-01-10T01:56:14
https://dev.to/0xdonut/why-and-how-you-should-to-migrate-from-visual-studio-code-to-vscodium-j7d
vscode, productivity, opensource, tutorial
---
title: Why and how you should migrate from Visual Studio Code to VSCodium
published: true
description: Why and how you should migrate from Visual Studio Code to VSCodium
tags: vscode, productivity, opensource, tutorial
cover_image: https://i.imgur.com/LlwMOnj.png
series: code productivity
---

In this tutorial we'll go over why you should make the switch, and how you can retain all of your extensions when you do. It won't take more than a couple of minutes to make the actual change!

## The problem with Visual Studio Code

Visual Studio Code is without a doubt [the most used](https://2019.stateofjs.com/other-tools/#text_editors) code editor (for front end developers at least). It definitely provides a lot of helpful extensions, about which there have been umpteen posts.

![text editors](https://2019.stateofjs.com/images/captures/text_editors.png)

### So why would I suggest you uninstall it for something else?

Whilst Microsoft's vscode source code is open source (MIT-licensed), the product available for download (Visual Studio Code) is licensed under this [not-FLOSS license](https://code.visualstudio.com/license) and contains telemetry/tracking.

> ...may collect information about you and your use of the software, and send that to Microsoft... You may opt-out of many of these scenarios, but not all...

Microsoft insists this is for bug tracking and so on, which may well be true. But you never know what else the data could end up being used for in the hands of someone unscrupulous.

You can [turn off telemetry reporting](https://code.visualstudio.com/docs/getstarted/telemetry#_disable-telemetry-reporting) in Visual Studio Code, but there are plenty of opportunities for Microsoft to add other features in, which may slip past your attention.

Run this command in your terminal and check your output:

```
code --telemetry
```

Not great; let's change it.

## [VSCodium](https://github.com/VSCodium/vscodium)

> VSCodium ... is not a fork. This is a repository of scripts to automatically build Microsoft's vscode repository into freely-licensed binaries with a community-driven default configuration.

This means we don't have to go through the hassle of building each version ourselves; everything is done for us, and the best part is we get these binaries **under the MIT license. Telemetry is completely disabled.** Moreover, the editor itself looks and functions *exactly the same* – you won't miss a thing!

![vscodium logo](https://i.imgur.com/nuKSg5v.png)

That's a pretty simple and compelling argument.

![same but different](https://media.giphy.com/media/C6JQPEUsZUyVq/giphy.gif)

## How to install VSCodium and keep all your extensions and settings

This is the easy part. I will focus on macOS, but these instructions are pretty simple to amend to other platforms.

_updated to include settings_

Make sure you have [Homebrew](https://brew.sh) installed:

```
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
```

### 1. Export all your installed extensions

First export all of your installed extensions into a text file (amend the output path as you see fit):

```
code --list-extensions | tee ~/vscode-extensions.txt
```

This will output all of your extensions to `~/vscode-extensions.txt` and list them out in your terminal for you to see.

### 2. Export your settings

Export any custom keybindings and user settings you have as default.

```
cp ~/Library/Application\ Support/Code/User/settings.json ~/vscode-settings.json
cp ~/Library/Application\ Support/Code/User/keybindings.json ~/vscode-keybindings.json
```

### 3. Uninstall Visual Studio Code

We use the force argument so that nothing gets left behind that would clash with or interrupt VSCodium's install.

```
brew cask uninstall --force visual-studio-code
```

### 4. Install VSCodium

```
brew cask install vscodium
```

### 5. Reinstall your extensions for VSCodium

Because VSCodium has the same command line tools, we invoke them the same way as before:

```
xargs -n1 code --install-extension < ~/vscode-extensions.txt
```

This goes through the file and executes `code --install-extension` on each line individually; you should see the output in your terminal. If you get a `DeprecationWarning: Buffer()...` warning, you don't need to worry – it's related to Yarn and can be resolved with `yarn global add yarn`.

### 6. Import your settings

```
mv ~/vscode-settings.json ~/Library/Application\ Support/VSCodium/User/settings.json
mv ~/vscode-keybindings.json ~/Library/Application\ Support/VSCodium/User/keybindings.json
```

Now you should be all set and ready to go; the only thing you should notice is that the logo is different. Everything else will work, feel, and function the same as before.

Happy coding devs!
0xdonut
237,816
Angular Developer Roadmap
Angular has become one of the most famous frameworks for frontend development. Angular is backed by...
0
2020-02-05T07:42:11
https://codesquery.com/angular-developer-roadmap/
angular, angular7
---
title: Angular Developer Roadmap
published: true
date: 2019-12-04 14:32:42 UTC
tags: Angular,angular7
canonical_url: https://codesquery.com/angular-developer-roadmap/
---

Angular has become one of the most famous frameworks for frontend development. Angular is backed by the tech giant Google, which also uses it in all their web applications. These days, demand for Angular developers is increasing, and it is very necessary to have a roadmap for the newbies so that they can become [...]

The post [Angular Developer Roadmap](https://codesquery.com/angular-developer-roadmap/) appeared first on [CodesQuery](https://codesquery.com).
hisachin
253,300
Reading Snippets [41 => Ruby ] 💎
"Computer science is never tied to a programming language; it is tied to the task of solving problems...
0
2020-02-02T05:10:00
https://dev.to/calvinoea/reading-snippets-40-ruby-522g
ruby, rails, beginners, computerscience
"Computer science is never tied to a programming language; it is tied to the task of solving problems efficiently using a computer. Programming languages come and go but the essence of computer science stays the same. The core goal of computer science is to study algorithms that solve real problems." <kbd><small>Source:<a href="https://learning.oreilly.com/library/view/computer-science-programming/9781449356835/onedot1_what_is_computer_sciencequestion.html">Computer Science Programming Basics in Ruby </a></small></kbd>
calvinoea
270,193
Fetch Data with Next.js (getInitialProps)
In the sequence of the Next.js tutorial, part 2 about data fetching is now available:
0
2020-02-27T16:48:34
https://dev.to/bmvantunes/fetch-data-with-next-js-getinitialprops-38lc
javascript, react
Continuing the Next.js tutorial series, part 2, about data fetching, is now available:

{% youtube Os3JZc2CtwY %}
bmvantunes
310,130
E5 - Sample Solution
The files are viewable on this page directly, or on my Github. Files in solution: _header.php cart...
5,954
2020-04-15T19:40:09
https://dev.to/herobank110/e5-sample-solution-m0m
php
The files are viewable on this page directly, or on my [Github](https://github.com/herobank110/PHP-Tutorial).

Files in solution:

* [_header.php](#_header)
* [cart.php](#cart)
* [db_commands.sql](#db_commands)
* [edit_cart.php](#edit_cart)
* [index.php](#index)
* [on_checkout.php](#on_checkout)
* [order_confirm.php](#order_confirm)
* [style.css](#style)

Parent topic: [Example 5](https://dev.to/herobank110/exercise-5-shopping-facility-443n)

## _header.php <a name="_header"></a>

```php
<header>
    <?php
    function getCartCount()
    {
        $cart = json_decode($_COOKIE["cart"] ?? "{}", true);
        $cartCount = 0;
        foreach ($cart as $productID => $quantity)
            $cartCount += $quantity;
        return $cartCount;
    }
    ?>
    <a class="typography headline" href="index.php">Neat Treats</a>
    <a class="typography headline" href="index.php">Products</a>
    <a class="typography headline" href="cart.php">Cart (<?=getCartCount()?>)</a>
</header>
```

## cart.php <a name="cart"></a>

```php
<html>
<head>
    <link rel="stylesheet" href="style.css">
</head>
<body>
    <?php include("_header.php"); ?>
    <main>
        <h2 class="typography subhead">Cart Page</h2>
        <div class="list-grid cart-list-grid">
            <?php
            $cart = json_decode($_COOKIE["cart"] ?? "{}", true);
            $databaseLink = new mysqli("localhost", "root", "", "NeatTreats");
            foreach ($cart as $productID => $quantity) {
                $result = $databaseLink->query(
                    "SELECT Name FROM Product WHERE ProductID=$productID;"
                );
                if (empty($databaseLink->error) && $result->num_rows > 0) {
                    $row = $result->fetch_object();
                    $productName = $row->Name;
                } else {
                    $productName = "INVALID ID";
                }
                echo "<span class='typography body'>$productName</span>";
                echo "<span class='typography body'> x$quantity</span>";
                $editUrl = "edit_cart.php?id=$productID";
                echo "<a class='typography body' href='$editUrl&type=add'>Add</a>";
                echo "<a class='typography body' href='$editUrl&type=sub'>Sub</a>";
                echo "<a class='typography body' href='$editUrl&type=rem'>Remove</a>";
            }
            $databaseLink->close();
            ?>
        </div>
        <form action="on_checkout.php" method="post">
            <div class="list-grid checkout-list-grid">
                <label class='typography body'>Email Address:</label>
                <input name="email">
                <label class='typography body'>Card Number:</label>
                <input name="card_num">
                <label class='typography body'>Expiry Month:</label>
                <select name="expiry_month">
                    <option selected disabled></option>
                    <?php
                    for ($i=1; $i < 13; $i++)
                        echo "<option value=$i>". date("F", 3600 * 24 * 28 * $i) ."</option>";
                    ?>
                </select>
                <label class='typography body'>Expiry Year:</label>
                <input name="expiry_year">
                <div>
                    <button>Checkout</button>
                </div>
                <span class="error">
                    <?php
                    if (isset($_COOKIE["checkout_error"])) {
                        // Output and expire the checkout error cookie.
                        echo $_COOKIE["checkout_error"];
                        setcookie("checkout_error", "", time() - 3600);
                    }
                    ?>
                </span>
            </div>
        </form>
    </main>
</body>
</html>
```

## db_commands.sql <a name="db_commands"></a>

```sql
DROP DATABASE NeatTreats;
CREATE DATABASE NeatTreats;
USE NeatTreats;

CREATE TABLE Cart (
    CartID INT auto_increment,
    ProductID INT,
    Quantity INT,
    PRIMARY KEY (CartID, ProductID)
);

CREATE TABLE Product (
    ProductID INT auto_increment,
    Name VARCHAR(32),
    Description VARCHAR(255),
    Price FLOAT,
    PRIMARY KEY (ProductID)
);

INSERT INTO Product (ProductID, Name, Description, Price) VALUES
    (1, 'Vanilla Cake', 'Tasty vanilla flavoured sponge cake', 12.34),
    (2, 'Chocolate Cake', 'Scrumptious chocolate flavoured sponge cake', 32.14),
    (3, 'Strawberry Cake', 'Yummy strawberry flavoured sponge cake', 42.35);

ALTER TABLE Cart ADD FOREIGN KEY (ProductID) REFERENCES Product (ProductID);
```

## edit_cart.php <a name="edit_cart"></a>

```php
<?php
$productID = $_GET["id"];
$editType = $_GET["type"];
$redirectUrl = $_GET["redirect_url"] ?? "cart.php";

$cart = json_decode($_COOKIE["cart"] ?? "{}", true);

switch ($editType) {
    case "add":
        // Increment quantity of already added product.
        if (isset($cart[$productID]))
            $cart[$productID]++;
        // Otherwise put 1 quantity of product in cart.
        else
            $cart[$productID] = 1;
        break;
    case "sub":
        if (!isset($cart[$productID]))
            break;
        // Decrement quantity of already added product
        $cart[$productID]--;
        if ($cart[$productID] <= 0)
            // If quantity is now 0, remove from cart.
            unset($cart[$productID]);
        break;
    case "rem":
        unset($cart[$productID]);
        break;
}

setcookie("cart", json_encode($cart), time() + 3600 * 24 * 10, "/");
header("Location: $redirectUrl");
?>
```

## index.php <a name="index"></a>

```php
<html>
<head>
    <link rel="stylesheet" href="style.css">
</head>
<body>
    <?php include("_header.php"); ?>
    <main>
        <h2 class="typography subhead">Products Page</h2>
        <div class="list-grid product-list-grid">
            <?php
            $databaseLink = new mysqli("localhost", "root", "", "NeatTreats");
            $result = $databaseLink->query("SELECT * FROM Product LIMIT 200;");
            if (empty($databaseLink->error)) {
                while ($row = $result->fetch_object()) {
                    echo "<span class='typography body'>#{$row->ProductID}</span>";
                    echo "<img class='product-thumbnail' width='50' height='50' ";
                    echo "src='cake_images/cake_{$row->ProductID}.png' alt='cake image'>";
                    echo "<div>";
                    echo "<span class='typography body'>{$row->Name}</span>";
                    echo "<br>";
                    echo "<a class='typography body' href='edit_cart.php";
                    echo "?id={$row->ProductID}&type=add&redirect_url=index.php'>";
                    echo "Add to cart</a>";
                    echo "</div>";
                }
            }
            $databaseLink->close();
            ?>
        </div>
    </main>
</body>
</html>
```

## on_checkout.php <a name="on_checkout"></a>

```php
<?php
// Function definitions

/** Sanitizes and validates checkout form input
 *
 * @param array $inputArray $_POST or $_GET array. Will be sanitized in place.
 * @param mysqli $databaseLink Link to sanitize SQL injections. If null, SQL
 *     injections will not be protected.
 * @return boolean Whether all input is valid.
 */
function isCheckoutInputValid(&$inputArray, mysqli $databaseLink=null): bool
{
    foreach ($inputArray as $name => $input) {
        $input = strip_tags($input);
        if ($databaseLink !== null)
            $input = $databaseLink->real_escape_string($input);
        // Propagate changes back into array.
        $inputArray[$name] = $input;
    }

    $email = $inputArray["email"] ?? "";
    $cardNum = $inputArray["card_num"] ?? "";
    $expiryMonth = $inputArray["expiry_month"] ?? "";
    $expiryYear = $inputArray["expiry_year"] ?? "";

    // This validation example will not save each individual error. Refer
    // to example 3 for making specific error messages.
    return (
        !empty($email) // presence check
        && filter_var($email, FILTER_VALIDATE_EMAIL) // format: x@y.z
        && !empty($cardNum) // presence
        && (int)$cardNum != null // type: int (Also fails for $cardNum = "0")
        && (int)$cardNum > 0 // range: greater than 0
        && strlen($cardNum) >= 12 // length: between 12 and 16
        && strlen($cardNum) <= 16
        && !empty($expiryMonth) // presence
        && (int)$expiryMonth != null // type: int
        && (int)$expiryMonth >= 1 // range: between 1 and 12
        && (int)$expiryMonth <= 12
        && !empty($expiryYear) // presence
        && (int)$expiryYear != null // type: int
        && (int)$expiryYear >= 2020 // range: between 2020 and 3000
        && (int)$expiryYear <= 3000
    );
}

/** Save the cart into the database via the given link. */
function saveCartToDatabase(array $cart, mysqli $databaseLink): bool
{
    // Can't checkout with an empty cart!
    if (count($cart) == 0)
        return false;

    // Get the next valid auto increment CartID
    $databaseLink->query("INSERT INTO Cart (ProductID, Quantity) VALUES (1, 0);");
    if ($databaseLink->errno != 0)
        return false;
    $nextCartID = $databaseLink->insert_id;
    $databaseLink->query("DELETE FROM Cart WHERE CartID=$nextCartID;");
    if ($databaseLink->errno != 0)
        return false;

    // Add the cart rows to the database.
    $saveCartQuery = "INSERT INTO Cart (CartID, ProductID, Quantity) VALUES ";
    $isFirst = true;
    foreach ($cart as $productID => $quantity) {
        if ($isFirst)
            $isFirst = false;
        else
            $saveCartQuery .= ", ";
        $saveCartQuery .= "($nextCartID, $productID, $quantity)";
    }
    $saveCartQuery .= ";";
    $databaseLink->query($saveCartQuery);
    if ($databaseLink->errno != 0)
        return false;

    return true;
}

/** Send an invoice to the customer about the specified order. */
function sendInvoice(array $cart, array $inputArray): bool
{
    // Extract inputs from input array. Should have already been validated.
    $customerEmail = $inputArray["email"];
    $cardNum = $inputArray["card_num"];
    $expiryMonth = $inputArray["expiry_month"];
    $expiryYear = $inputArray["expiry_year"];

    $emailSubject = "Invoice of your order from Neat Treats";
    $emailHeaders = (
        "From: neattreats.sender@gmail.com\r\n"
        . "Content-Type: text/html; charset=ISO-8859-1\r\n"
    );

    $productsGrid = "";
    foreach($cart as $productID => $quantity)
        $productsGrid .= "<p>Product #$productID x$quantity</p>";

    $emailBody = (
        "<html>"
        . "<body>"
        . "<h2>Order Confirmation</h2>"
        . "<p>Thanks for your order! We hope you enjoy it a lot.</p>"
        . "<h4>Products:</h4>"
        . "<div style='padding-left:10px'>"
        . $productsGrid
        . "</div>"
        . "<h4>Payment Method:</h4>"
        . "<div style='padding-left:10px'>"
        . "<p>Card Number: $cardNum</p>"
        . "<p>Expires: $expiryMonth / $expiryYear</p>"
        . "</div>"
        . "</body>"
        . "</html>"
    );

    return mail($customerEmail, $emailSubject, $emailBody, $emailHeaders);
}

// Performs actual script when page is opened:
(function () {
    $cart = json_decode($_COOKIE["cart"] ?? "{}", true);
    $databaseLink = new mysqli("localhost", "root", "", "NeatTreats");

    $allSuccess = false;
    $error = "";
    if (count($cart) > 0)
        if (isCheckoutInputValid($_POST))
            if (saveCartToDatabase($cart, $databaseLink))
                if (sendInvoice($cart, $_POST))
                    $allSuccess = true;
                else
                    $error = "Couldn't send invoice";
            else
                $error = "Couldn't save to database";
        else
            $error = "Invalid checkout input";
    else
        $error = "Cart is empty";

    $databaseLink->close();

    if ($allSuccess) {
        setcookie("cart", "", time() - 3600, "/");
        header("Location: order_confirm.php");
    } else {
        setcookie("checkout_error", $error, time() + 3600, "cart.php");
        header("Location: cart.php");
    }
})();
?>
```

## order_confirm.php <a name="order_confirm"></a>

```php
<html><body>
<h2>Order Confirmation</h2>
<p>Thanks for your order!</p>
</body></html>
```

## style.css <a name="style"></a>

```css
.typography { font-family: Arial, Helvetica, sans-serif; }
.typography.headline { font-size: 24; }
.typography.subhead { font-size: 20; font-weight: lighter; }
.typography.body { font-size: 14; }

header {
    display: inline-grid;
    grid-template-columns: 2fr 1fr 1fr;
    place-items: center;
    width: 420px;
}

header>a { text-decoration: none; }
header>a:active, header>a:visited { color: orange; }
header>a:hover { color: darkorange; }

main { margin: 30px 10px; }

.list-grid {
    display: inline-grid;
    row-gap: 5px;
    column-gap: 5px;
    margin-left: 10px;
    border-left: 1px solid grey;
    padding-left: 3px;
}

.product-list-grid { grid-template-columns: auto auto auto; }

.cart-list-grid {
    grid-template-columns: auto auto auto auto auto;
    margin-bottom: 20px;
}

.checkout-list-grid { grid-template-columns: auto auto; }

.product-thumbnail {
    border-radius: 4px;
    border: 1px solid lightgrey;
}

.error { color: #e22828; }
```

Parent topic: [Example 5](https://dev.to/herobank110/exercise-5-shopping-facility-443n)
herobank110
321,548
11 NPM Commands Every Node Developer Should Know.
1. Create a package.json file npm init -y # -y to initialize with default values....
0
2020-04-28T15:08:23
https://dev.to/vyasriday/npm-commands-that-a-node-developer-should-know-2h84
npm, node, codenewbie, beginners
#### 1. Create a package.json file

```bash
npm init -y # -y to initialize with default values.
```

#### 2. Install a package locally or globally

```bash
npm i package # to install locally. i is short for install
npm i -g package # to install globally
```

#### 3. Install a specific version of a package

```bash
npm install package@version
```

#### 4. Install a package as a dev dependency

```bash
npm i -D package # -D stands for --save-dev, i.e. install as a dev dependency
```

#### 5. To see all the dependencies of your project

```bash
npm list # this will list all the dependencies of the third-party modules as well
```

#### 6. To see direct dependencies of your project

```bash
npm list --depth=0
```

#### 7. To see the information about a package in your project

```bash
npm view package
```

#### 8. See specific information from package.json of a third-party package, like the dependencies of the package

```bash
npm view package dependencies
```

#### 9. To check different versions available for a package

```bash
npm view package versions
```

#### 10. Check outdated packages in your project

```bash
npm outdated # run with -g flag to check globally
```

#### 11. To update the packages

```bash
npm update
```

I hope this helps. Let me know if you use any other commands that can be helpful. Thanks for reading. 😀
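To see command 1 in action, here's a small throwaway session in a scratch directory (the directory and output are illustrative; `npm init -y` fills package.json with npm's defaults):

```shell
set -e
scratch=$(mktemp -d)
cd "$scratch"

npm init -y > /dev/null                      # 1. create package.json with default values
node -p "require('./package.json').version"  # → 1.0.0 (npm's default version)
```

From here, `npm i <package>` would add entries under `dependencies`, and `npm list --depth=0` would show them.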
vyasriday
322,542
How to Undo Last Commit and Keep Changes
To undo the last commit but keep the changes, run the following command: git reset --soft HEAD~1...
0
2020-04-30T03:06:26
https://dev.to/andyrewlee/how-to-undo-last-commit-and-keep-changes-1eh2
git, beginners, tutorial
To undo the last commit but keep the changes, run the following command:

```
git reset --soft HEAD~1
```

Now when we run `git status`, we will see that all of our changes are in staging. When we run `git log`, we can see that our commit has been removed.

If we want to completely remove the changes in staging, we can run the following command:

```
git reset --hard HEAD
```

These are dangerous commands and should be used with caution. Try them out on a different branch first. However, in the worst case scenario, we can recover commits we accidentally deleted with `git reflog`.
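To see the `--soft` behavior concretely, here's a throwaway-repository session you can run safely (the file name and commit messages are just illustrative):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "base"

# Make and commit a change we will then "undo".
echo "feature" > feature.txt
git add feature.txt
git -c user.name=demo -c user.email=demo@example.com commit -q -m "feature commit"

# Undo the last commit but keep the change in staging:
git reset --soft HEAD~1

git status --short   # the file is back in staging (shown as "A  feature.txt")
git log --oneline    # only the "base" commit remains
```

After the reset, the work from "feature commit" is still staged, so it can be re-committed, amended, or discarded as needed.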
andyrewlee
327,414
How to Embed Video Chat in your Unity Games
When you're making a video game, you want to squeeze every last drop out of performance out of your g...
0
2020-05-05T16:59:34
https://dev.to/joelthomas362/create-an-agora-group-video-chat-using-unity-33ce
agora, unity3d, photon, csharp
When you're making a video game, you want to squeeze every last drop out of performance out of your graphics, code, and any plugins you may use. Agora's Unity SDK has a low footprint and performance cost, making it a great tool for any platform, from mobile to VR! In this tutorial, I'm going to show you how to use Agora to create a real-time video party chat feature in a Unity MMO demo asset using the Agora SDK and Photon Unity Networking (PUN).  By the end of the demo, you should understand how to download the Agora plugin for Unity, join/leave another player's channel, and display your player party in a scalable fashion. For this tutorial, I'm using Unity 2018.4.18. ![Final Results!](https://media.giphy.com/media/Y3kh4JUgtTjYRILhSn/giphy.gif) **Before we get started, let's touch on two important factors:** Since this is a networked demo, you have two strategies for testing: 1. Use two different machines, each with their own webcam (I used a PC and Mac laptop) 2. From the same machine, test using one client from a Unity build, and another from the Unity editor (this solution is suboptimal, because the two builds will fight for the webcam access, and may cause headache). **Getting Started** - [Create an Agora.io developer account here](https://www.agora.io/en/blog/how-to-get-started-with-agora?utm_source=medium&utm_medium=blog&utm_campaign=unity_party_chat) to get your AppID - [Create a Photon developer account](https://dashboard.photonengine.com/en-US/Account/SignUp) for their appID - [Import Agora SDK into your project from the Unity Asset Store](https://assetstore.unity.com/packages/tools/video/agora-video-chat-sdk-for-unity-134502) - [Import Photon Viking Multiplayer Showcase](https://assetstore.unity.com/packages/tools/network/photon-viking-multiplayer-showcase-1846) into your project **Create Agora Engine** Our "Charprefab" is the default Viking character we will be using. This object lives in Assets > DemoVikings > Resources.  
It is already set up with Photon to join a networked lobby/room and send messages across the network. Create a new script called AgoraVideoChat and add it to our CharPrefab.  In AgoraVideoChat let's add this code: ```javascript using agora_gaming_rtc; // *NOTE* Add your own appID from console.agora.io [SerializeField] private string appID = ""; [SerializeField] private string channel = "unity3d"; private string originalChannel; private IRtcEngine mRtcEngine; private uint myUID = 0; void Start() { if (!photonView.isMine) return; // Setup Agora Engine and Callbacks. if(mRtcEngine != null) { IRtcEngine.Destroy(); } originalChannel = channel; mRtcEngine = IRtcEngine.GetEngine(appID); mRtcEngine.OnJoinChannelSuccess = OnJoinChannelSuccessHandler; mRtcEngine.OnUserJoined = OnUserJoinedHandler; mRtcEngine.OnLeaveChannel = OnLeaveChannelHandler; mRtcEngine.OnUserOffline = OnUserOfflineHandler; mRtcEngine.EnableVideo(); mRtcEngine.EnableVideoObserver(); mRtcEngine.JoinChannel(channel, null, 0); } private void OnApplicationQuit() { if(mRtcEngine != null) { mRtcEngine.LeaveChannel(); mRtcEngine = null; IRtcEngine.Destroy(); } } ``` This is a basic Agora setup protocol, and very similar if not identical to the AgoraDemo featured in the Unity SDK download. Familiarize yourself with it to take your first step towards mastering the Agora platform! You'll notice that photon.isMine is now angry at us, and that we need to implement some Agora callback methods.  We can include the proper Photon behavior by changing `public class AgoraVideoChat : MonoBehaviour` to `public class AgoraVideoChat : Photon.MonoBehaviour` Agora has many callback methods that we can use [which can be found here](https://docs.agora.io/en/Video/API%20Reference/unity/namespaceagora__gaming__rtc.html#a26e13f07aadde1c9cf3cebf385723bbb), however for this case we only need these: ```javascript // Local Client Joins Channel. 
private void OnJoinChannelSuccessHandler(string channelName, uint uid, int elapsed) { if (!photonView.isMine) return; myUID = uid; Debug.LogFormat("I: {0} joined channel: {1}.", uid.ToString(), channelName); //CreateUserVideoSurface(uid, true); } // Remote Client Joins Channel. private void OnUserJoinedHandler(uint uid, int elapsed) { if (!photonView.isMine) return; //CreateUserVideoSurface(uid, false); } // Local user leaves channel. private void OnLeaveChannelHandler(RtcStats stats) { if (!photonView.isMine) return; } // Remote User Leaves the Channel. private void OnUserOfflineHandler(uint uid, USER_OFFLINE_REASON reason) { if (!photonView.isMine) return; } ``` Let's play our "VikingScene" level now, and look at the log. (You should see something in the log like: "[your UID] joined channel: unity3d") **_Huzzah!_** We are in an Agora channel with the potential to video-chat 16 other players or broadcast to around 1 million viewers!  But what gives? Where exactly _are we_? **Create Agora VideoSurface** Agora's Unity SDK uses `RawImage` objects to render the video feed of webcams and mobile cameras, as well as cubes and other primitive shapes (see AgoraEngine > Demo > SceneHome for an example of this in action). 1. Create a Raw Image (Right-click Hierarchy window > UI > Raw Image) and name it "UserVideo" 2. Add the `VideoSurface` script to it (Component > Scripts > agora_gaming_rtc > VideoSurface) 3. Drag the object into Assets > Prefabs folder 4. Delete the UserVideo object from the hierarchy (you can leave the canvas), we only wanted the prefab ![Create Prefab](https://media.giphy.com/media/l1OEJvZTgni2V9Rfyu/giphy.gif) 5. 
Add this code to AgoraVideoChat ```javascript // add this to your other variables [Header("Player Video Panel Properties")] [SerializeField] private GameObject userVideoPrefab; private int Offset = 100; private void CreateUserVideoSurface(uint uid, bool isLocalUser) { // Create Gameobject holding video surface and update properties GameObject newUserVideo = Instantiate(userVideoPrefab); if (newUserVideo == null) { Debug.LogError("CreateUserVideoSurface() - newUserVideoIsNull"); return; } newUserVideo.name = uid.ToString(); GameObject canvas = GameObject.Find("Canvas"); if (canvas != null) { newUserVideo.transform.parent = canvas.transform; } // set up transform for new VideoSurface newUserVideo.transform.Rotate(0f, 0.0f, 180.0f); float xPos = Random.Range(Offset - Screen.width / 2f, Screen.width / 2f - Offset); float yPos = Random.Range(Offset, Screen.height / 2f - Offset); newUserVideo.transform.localPosition = new Vector3(xPos, yPos, 0f); newUserVideo.transform.localScale = new Vector3(3f, 4f, 1f); newUserVideo.transform.rotation = Quaternion.Euler(Vector3.right * -180); // Update our VideoSurface to reflect new users VideoSurface newVideoSurface = newUserVideo.GetComponent<VideoSurface>(); if (newVideoSurface == null) { Debug.LogError("CreateUserVideoSurface() - VideoSurface component is null on newly joined user"); } if (isLocalUser == false) { newVideoSurface.SetForUser(uid); } newVideoSurface.SetGameFps(30); } ``` 6. Add the newly created prefab to the UserPrefab slot in our Charprefab character, and uncomment `CreateUserVideoSurface()` from our callback. methods. 7. Run it again! Now we can see our local video stream rendering to our game. If we call in from another agora channel, we will see more video frames populate our screen.  
I use the "AgoraDemo" app on my mobile device to test this, but you can also use our [1-to-1 calling web demo](https://webdemo.agora.io/agora-web-showcase/examples/Agora-Web-Tutorial-1to1-Web/) to test connectivity, or connect with the same demo from another machine. We now have our Agora module up and running, and now it's time to create the functionality by connecting two networked players in Photon. **Photon Networking - Party Joining** To join/invite/leave a party, we are going to create a simple UI.  Inside your CharPrefab, create a canvas, and 3 buttons named InviteButton, JoinButton, and LeaveButton, respectively. **Make sure the Canvas is the first child of the Charprefab.** ![Add a button](https://media.giphy.com/media/fu8IHHW5dTZsiJ8yxw/giphy.gif) Next we create a new script called PartyJoiner on our base CharPrefab object. Add this to the script: ``` javascript using UnityEngine.UI; [Header("Local Player Stats")] [SerializeField] private Button inviteButton; [SerializeField] private GameObject joinButton; [SerializeField] private GameObject leaveButton; [Header("Remote Player Stats")] [SerializeField] private int remotePlayerViewID; [SerializeField] private string remoteInviteChannelName = null; private AgoraVideoChat agoraVideo; private void Awake() { agoraVideo = GetComponent<AgoraVideoChat>(); } private void Start() { if(!photonView.isMine) { transform.GetChild(0).gameObject.SetActive(false); } inviteButton.interactable = false; joinButton.SetActive(false); leaveButton.SetActive(false); } private void OnTriggerEnter(Collider other) { if (!photonView.isMine || !other.CompareTag("Player")) { return; } // Used for calling RPC events on other players. 
    PhotonView otherPlayerPhotonView = other.GetComponent<PhotonView>();
    if (otherPlayerPhotonView != null)
    {
        remotePlayerViewID = otherPlayerPhotonView.viewID;
        inviteButton.interactable = true;
    }
}

private void OnTriggerExit(Collider other)
{
    if(!photonView.isMine || !other.CompareTag("Player"))
    {
        return;
    }

    remoteInviteChannelName = null;
    inviteButton.interactable = false;
    joinButton.SetActive(false);
}

public void OnInviteButtonPress()
{
    //PhotonView.Find(remotePlayerViewID).RPC("InvitePlayerToPartyChannel", PhotonTargets.All, remotePlayerViewID, agoraVideo.GetCurrentChannel());
}

public void OnJoinButtonPress()
{
    if (photonView.isMine && remoteInviteChannelName != null)
    {
        //agoraVideo.JoinRemoteChannel(remoteInviteChannelName);
        joinButton.SetActive(false);
        leaveButton.SetActive(true);
    }
}

public void OnLeaveButtonPress()
{
    if (!photonView.isMine)
        return;
}

[PunRPC]
public void InvitePlayerToPartyChannel(int invitedID, string channelName)
{
    if (photonView.isMine && invitedID == photonView.viewID)
    {
        joinButton.SetActive(true);
        remoteInviteChannelName = channelName;
    }
}
```

Add the corresponding "OnButtonPress" functions into the Unity UI buttons you just created.

[Example: InviteButton -> "OnInviteButtonPress()"]

![On Button Press](https://media.giphy.com/media/IfrL4VWdbpPUzUXFBJ/giphy.gif)

- Set the CharPrefab tag to "Player"
- Add a SphereCollider component to CharPrefab (Component bar > Physics > SphereCollider), check the "Is Trigger" box to true, and set its radius to 1.5

**Quick Photon Tip - Local Functionality**

As you can see we need to implement two more methods in our `AgoraVideoChat` class. Before we do that, let's cover some code we just copied over. 
```csharp
private void Start()
{
    if (!photonView.isMine)
    {
        transform.GetChild(0).gameObject.SetActive(false);
    }

    inviteButton.interactable = false;
    joinButton.SetActive(false);
    leaveButton.SetActive(false);
}
```

"If this photon view isn't mine, set my first child to _False_" - It's important to remember that although this script is firing on the CharPrefab locally controlled by your machine/keyboard input - this script is **also** running on every other CharPrefab in the scene. Their canvases will display, and their print statements will show as well.

By setting the first child (my "Canvas" object) to false on all other CharPrefabs, I'm only displaying the local canvas to my screen - _not every single player in the Photon "Room"_.

Let's build and run with two different clients to see what happens…

…Wait, we're already in the same party?

If you remember, we set `private string channel = "unity3d"` and in our Start() method are calling `mRtcEngine.JoinChannel(channel, null, 0);`. We are creating and/or joining an Agora channel named "unity3d" in every single client right at the start. To avoid this, we have to set a new default channel name in each client, so they start off in separate Agora channels, and can then invite each other to their unique channel name.

Now let's implement two more methods inside AgoraVideoChat: `JoinRemoteChannel(string remoteChannelName)` and `GetCurrentChannel()`.

```csharp
public void JoinRemoteChannel(string remoteChannelName)
{
    if (!photonView.isMine)
        return;

    mRtcEngine.LeaveChannel();
    mRtcEngine.JoinChannel(remoteChannelName, null, myUID);
    mRtcEngine.EnableVideo();
    mRtcEngine.EnableVideoObserver();
    channel = remoteChannelName;
}
```

```csharp
public string GetCurrentChannel() => channel;
```

This code allows us to receive events that are called across the Photon network, bouncing off of each player, and sticking when the invited Photon ID matches the local player ID. 
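The "bounce off every player, stick on the matching one" RPC flow is easy to reason about once stripped of engine details. Here is a minimal, framework-free sketch of that pattern in plain Python (all names are hypothetical; no Photon or Unity APIs are involved):

```python
class Player:
    """Stand-in for one client's PartyJoiner instance."""
    def __init__(self, view_id, channel):
        self.view_id = view_id          # like photonView.viewID
        self.channel = channel          # this client's Agora channel name
        self.pending_invite = None

    def invite_player_to_party_channel(self, invited_id, channel_name):
        # Fired on EVERY client (like PhotonTargets.All);
        # only the player whose ID matches keeps the invite.
        if invited_id == self.view_id:
            self.pending_invite = channel_name

players = [Player(1, "chan-1"), Player(2, "chan-2"), Player(3, "chan-3")]

# Broadcast the "RPC": player 1 invites player 2 into chan-1.
for p in players:
    p.invite_player_to_party_channel(2, "chan-1")

assert players[1].pending_invite == "chan-1"   # only player 2 reacted
assert players[0].pending_invite is None
assert players[2].pending_invite is None
```

The broadcast is deliberately wasteful (every client receives the event), but the guard clause makes the behavior local, which is exactly what the `[PunRPC]` method above does with `invitedID == photonView.viewID`.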
When the Photon event hits the correct player, they have the option to "Join Remote Channel" of another player, and connect with them via video chat using the Agora network.

Test the build to watch our PartyJoiner in action!

**Finishing Touches - UI & Leaving a Party**

You have now successfully used Agora to join a channel and see the video feed of fellow players in your channel. Video containers will pop in across your screen as users join your channel.

However, it doesn't look great, and you can't technically leave the channel without quitting the game and rejoining. Let's fix that!

**UI Framework**

Now we'll create a ScrollView object to hold and organize our player video frames.

Inside of CharPrefab > Canvas: make sure CanvasScaler UI Scale Mode is set to "Scale With Screen Size" (by default it's at "Constant Pixel Size", which in my experience is less than ideal for most Unity UI situations)

- Inside our CharPrefab object, right-click on Canvas, select UI > Scroll View

![Make sure your Anchors, Pivots, and Rect Transform match mine](https://media.giphy.com/media/U3IDb2gCxnEJ4SDADq/giphy.gif)

- Set "Scroll View" Rect Transform to "Stretch/Stretch" (bottom right corner) and make sure your Anchors, Pivots, and Rect Transform match the values in the red box pictured above. 
- Uncheck "Horizontal" and delete the HorizontalScrollbar child object
- Set "Content" child object to "Top/Stretch" (rightmost column, second from the top)
  - I have my Height set to 300, Min X: 0 Y: 1, Max X: 1 Y: 1, Pivot X: 0 Y: 1
- Create an empty gameobject named "SpawnPoint" as a child of Content
- Set the Rect Transform to "Top/Center" (middle column, second from the top) and set the "Pos Y" value to -20
  - Make sure your Anchors: Min/Max and your Pivot values equal what is displayed

![](https://media.giphy.com/media/eMPJQtPTLsLpR4ZrL6/giphy.gif)

In AgoraVideoChat add:

```csharp
[SerializeField]
private RectTransform content;
[SerializeField]
private Transform spawnPoint;
[SerializeField]
private float spaceBetweenUserVideos = 150f;
private List<GameObject> playerVideoList;
```

In Start() add `playerVideoList = new List<GameObject>();`

We're going to completely replace our CreateUserVideoSurface method with:

```csharp
// Create new image plane to display users in party
private void CreateUserVideoSurface(uint uid, bool isLocalUser)
{
    // Avoid duplicating local player video screen
    for (int i = 0; i < playerVideoList.Count; i++)
    {
        if (playerVideoList[i].name == uid.ToString())
        {
            return;
        }
    }

    // Get the next position for newly created VideoSurface
    float spawnY = playerVideoList.Count * spaceBetweenUserVideos;
    Vector3 spawnPosition = new Vector3(0, -spawnY, 0);

    // Create Gameobject holding video surface and update properties
    GameObject newUserVideo = Instantiate(userVideoPrefab, spawnPosition, spawnPoint.rotation);
    if (newUserVideo == null)
    {
        Debug.LogError("CreateUserVideoSurface() - newUserVideoIsNull");
        return;
    }
    newUserVideo.name = uid.ToString();
    newUserVideo.transform.SetParent(spawnPoint, false);
    newUserVideo.transform.rotation = Quaternion.Euler(Vector3.right * -180);
    playerVideoList.Add(newUserVideo);

    // Update our VideoSurface to reflect new users
    VideoSurface newVideoSurface = newUserVideo.GetComponent<VideoSurface>();
    if(newVideoSurface == null)
    {
        Debug.LogError("CreateUserVideoSurface() - VideoSurface component is null on newly joined user");
    }
    if (isLocalUser == false)
    {
        newVideoSurface.SetForUser(uid);
    }
    newVideoSurface.SetGameFps(30);

    // Update our "Content" container that holds all the image planes
    content.sizeDelta = new Vector2(0, playerVideoList.Count * spaceBetweenUserVideos + 140);

    UpdatePlayerVideoPostions();
    UpdateLeavePartyButtonState();
}
```

and add two new methods:

```csharp
// organizes the position of the player video frames as they join/leave
private void UpdatePlayerVideoPostions()
{
    for (int i = 0; i < playerVideoList.Count; i++)
    {
        playerVideoList[i].GetComponent<RectTransform>().anchoredPosition = Vector2.down * 150 * i;
    }
}

// resets local player's channel
public void JoinOriginalChannel()
{
    if (!photonView.isMine)
        return;

    if(channel != originalChannel || channel == myUID.ToString())
    {
        channel = originalChannel;
    }
    else if(channel == originalChannel)
    {
        channel = myUID.ToString();
    }

    JoinRemoteChannel(channel);
}
```

Comment out the `UpdateLeavePartyButtonState()` call for now, and drag your newly created ScrollView UI objects into the appropriate slots.

![](https://media.giphy.com/media/MCdBbTnPlrxvnaudLG/giphy.gif)

Almost there! 
Now all we have to do is add the methods for "Leave Party" functionality in AgoraVideoChat:

```csharp
public delegate void AgoraCustomEvent();
public static event AgoraCustomEvent PlayerChatIsEmpty;
public static event AgoraCustomEvent PlayerChatIsPopulated;

private void RemoveUserVideoSurface(uint deletedUID)
{
    foreach (GameObject player in playerVideoList)
    {
        if (player.name == deletedUID.ToString())
        {
            // remove videoview from list
            playerVideoList.Remove(player);

            // delete it
            Destroy(player.gameObject);
            break;
        }
    }

    // update positions of new players
    UpdatePlayerVideoPostions();

    Vector2 oldContent = content.sizeDelta;
    content.sizeDelta = oldContent + Vector2.down * spaceBetweenUserVideos;
    content.anchoredPosition = Vector2.zero;

    UpdateLeavePartyButtonState();
}

private void UpdateLeavePartyButtonState()
{
    if (playerVideoList.Count > 1)
    {
        PlayerChatIsPopulated();
    }
    else
    {
        PlayerChatIsEmpty();
    }
}
```

and update our AgoraVideoChat callbacks:

```csharp
// Local Client Joins Channel.
private void OnJoinChannelSuccessHandler(string channelName, uint uid, int elapsed)
{
    if (!photonView.isMine)
        return;

    myUID = uid;
    CreateUserVideoSurface(uid, true);
}

// Remote Client Joins Channel.
private void OnUserJoinedHandler(uint uid, int elapsed)
{
    if (!photonView.isMine)
        return;

    CreateUserVideoSurface(uid, false);
}

// Local user leaves channel.
private void OnLeaveChannelHandler(RtcStats stats)
{
    if (!photonView.isMine)
        return;

    foreach (GameObject player in playerVideoList)
    {
        Destroy(player.gameObject);
    }
    playerVideoList.Clear();
}

// Remote User Leaves the Channel. 
private void OnUserOfflineHandler(uint uid, USER_OFFLINE_REASON reason)
{
    if (!photonView.isMine)
        return;

    if (playerVideoList.Count <= 1)
    {
        PlayerChatIsEmpty();
    }
    RemoveUserVideoSurface(uid);
}
```

and in PartyJoiner:

```csharp
private void OnEnable()
{
    AgoraVideoChat.PlayerChatIsEmpty += DisableLeaveButton;
    AgoraVideoChat.PlayerChatIsPopulated += EnableLeaveButton;
}

private void OnDisable()
{
    AgoraVideoChat.PlayerChatIsEmpty -= DisableLeaveButton;
    AgoraVideoChat.PlayerChatIsPopulated -= EnableLeaveButton;
}

public void OnLeaveButtonPress()
{
    if(photonView.isMine)
    {
        agoraVideo.JoinOriginalChannel();
        leaveButton.SetActive(false);
    }
}

private void EnableLeaveButton()
{
    if(photonView.isMine)
    {
        leaveButton.SetActive(true);
    }
}

private void DisableLeaveButton()
{
    if(photonView.isMine)
    {
        leaveButton.SetActive(false);
    }
}
```

Play this demo in two different editors and join a party! We start off by connecting to the same networked game lobby via the Photon network, and then connect our video chat party via Agora's SD-RTN network!

**In Summary**

- We connected to Agora's network to display our video chat channel
- We enabled other users to join our party, see their faces, and talk with them in real time
- We took it one step further and built a scalable UI that houses all the people you want to chat with!

If you have any questions or hit a snag in the course of building your own networked group video chat, please feel free to reach out directly or via the Agora Slack Channel!

[Check out the link to the full github project here!](https://github.com/joelthomas362/agora-partychat-demo)
joelthomas362
328,784
Web forms with DotVVM controls
Currently, web pages allow internet users to know about a particular product or service, how to conta...
0
2020-05-22T00:21:40
https://dev.to/dotvvm/web-forms-with-dotvvm-controls-6bk
webdev, html, dotnet, dotvvm
Currently, web pages let internet users learn about a particular product or service and how to contact a company, but they can also help collect information about their users and thus establish a data source. An important tool for this purpose is the form.

The most basic way to design a form is to use HTML tags to collect the required information. Then a programming language, usually through some framework, is needed to process that information. With these considerations in mind, to collect and process form data, in this article we will use DotVVM with ASP.NET Core.

## DotVVM Controls ##

Depending on the technology you are working with, extracting the information entered into a form can be carried out through different mechanisms. In the case of DotVVM, the framework has a number of controls or components for various purposes that allow you to design form elements through HTML and C#.

This communication between HTML (web pages) and C# (source code) is done through the MVVM design pattern (Model, View, Viewmodel). The purpose of these elements is as follows:

+ **The model**. — is responsible for all application data and related business logic.
+ **The view**. — representations for the end-user of the application model. The view is responsible for displaying the data to the user and allowing manipulation of the application data.
+ **Model-View or View-Model**. — one or more per view; the viewmodel is responsible for implementing view behavior to respond to user actions and for easily exposing model data.

Next, we'll look at an example to observe how DotVVM controls work.

### DotVVM Form ###

![](https://dev-to-uploads.s3.amazonaws.com/i/qgcdwp0s29hi7kapg3rv.png)

Considering that a web page in DotVVM consists of a view and a viewmodel, let's analyze each of these elements separately. 
#### Viewmodel ####

```csharp
public class DefaultViewModel : MasterPageViewModel
{
    public string Title { get; set; }

    public PersonModel Person { get; set; } = new PersonModel
    {
        EnrollmentDate = DateTime.UtcNow.Date
    };

    public DefaultViewModel()
    {
        Title = "Person Form";
    }

    public void Process()
    {
        String script = "alert('" + "Welcome" + " " + Person.Username + " to Web App :) ')";
        Context.ResourceManager.AddStartupScript(script);
    }
}
```

For starters, in the viewmodel of the web page we find two properties: `Title`, for the page title, and `Person`, an object of the `PersonModel` class.

```csharp
public string Title { get; set; }

public PersonModel Person { get; set; } = new PersonModel
{
    EnrollmentDate = DateTime.UtcNow.Date
};
```

In the `PersonModel` class there is an important aspect to mention: data annotations, in this case the `[Required]` annotation, which specifies as a validation rule that the annotated properties cannot be null. There are also other annotations to validate the data type of a property, match regular expressions, and handle other cases. For more information on validation with data annotations, see: [https://docs.microsoft.com/en-us/aspnet/mvc/overview/older-versions-1/models-data/validation-with-the-data-annotation-validators-cs](https://docs.microsoft.com/en-us/aspnet/mvc/overview/older-versions-1/models-data/validation-with-the-data-annotation-validators-cs). 
Returning to the analysis of the `DefaultViewModel`, we also find the constructor, in which objects and properties can be initialized:

```csharp
public DefaultViewModel()
{
    Title = "Person Form";
}
```

Finally, in this viewmodel we have a `Process` method, which handles the information submitted through the form:

```csharp
public void Process()
{
    String script = "alert('" + "Welcome" + " " + Person.Username + " to Web App :) ')";
    Context.ResourceManager.AddStartupScript(script);
}
```

In this case, as an example, the `AddStartupScript` method runs a JavaScript statement that shows a welcome message in an alert.

#### View ####

```html
@viewModel DotVVMControls.ViewModels.DefaultViewModel, DotVVMControls
@masterPage Views/MasterPage.dotmaster

<dot:Content ContentPlaceHolderID="MainContent">
    <h1 align="center">
        <img src="UserIcon.png" width="20%" height="20%" />
        <br />
        <b>{{value: Title}}</b>
    </h1>
    <div align="center">
        <div Validator.Value="{value: Person.Username}"
             Validator.InvalidCssClass="has-error"
             Validator.SetToolTipText="true"
             class="page-input-box">
            <b>Username:</b>
            <br />
            <dot:TextBox Text="{value: Person.Username}"
                         style="border: 1px solid #4a4d55; font-size: 1.1em;" />
        </div>
        <p />
        <div Validator.Value="{value: Person.EnrollmentDate}"
             Validator.InvalidCssClass="has-error"
             Validator.SetToolTipText="true"
             class="page-input-box">
            <b>EnrollmentDate:</b>
            <br />
            <dot:TextBox Text="{value: Person.EnrollmentDate}"
                         ValueType="DateTime"
                         FormatString="dd/MM/yyyy"
                         class="page-input"
                         style="border: 1px solid #4a4d55; font-size: 1.1em;" />
        </div>
        <p />
        <div Validator.Value="{value: Person.EnrollmentDate}"
             Validator.InvalidCssClass="has-error"
             Validator.SetToolTipText="true"
             class="page-input-box">
            <b>Gender:</b>
            <br />
            <dot:RadioButton id="Male"
                             CheckedItem="{value: Person.Gender}"
                             style="border: 1px solid #4a4d55; font-size: 1.1em;" />
            <label for="Male">Male</label>
            <dot:RadioButton id="Female" 
                             CheckedItem="{value: Person.Gender}"
                             style="border: 1px solid #4a4d55; font-size: 1.1em;" />
            <label for="Female">Female</label>
        </div>
        <p />
        <b>About:</b>
        <br />
        <dot:TextBox Text="{value: Person.About}"
                     Type="MultiLine"
                     class="page-input"
                     style="border: 1px solid #4a4d55; font-size: 1.1em;" />
        <p />
        <dot:Button Text="Process"
                    Click="{command: Process()}"
                    class="page-button"
                    style="background-color: #004C88; border: 2px solid ; color: #fff; font-weight: 600; padding-left: 2em; padding-right: 2em; font-size: 1rem;" />
        <p />
    </div>
</dot:Content>
```

A first important aspect to mention is the display of the contents of a variable on the website:

```html
<b>{{value: Title}}</b>
```

This is a DotVVM control called Literal, and it helps to render text on the page. More information at: [https://www.dotvvm.com/docs/controls/builtin/Literal/2.0](https://www.dotvvm.com/docs/controls/builtin/Literal/2.0).

Another of the most useful features in DotVVM is data binding, which allows us to retrieve or assign values. An example is the following:

```html
<div Validator.Value="{value: Person.Username}"
     Validator.InvalidCssClass="has-error"
     Validator.SetToolTipText="true"
     class="page-input-box">
```

Here we see how the `Username` value of the `Person` object is being used, in this case through another control, `Validator`, which, as the name implies, performs validations on the form. The validation follows the annotations of the `Username` property, which for the time being has the `[Required]` annotation, as we saw in the `PersonModel` class. Learn more about `Validator` at: [https://www.dotvvm.com/docs/controls/builtin/Validator/latest](https://www.dotvvm.com/docs/controls/builtin/Validator/latest). And about `Binding`: [https://www.dotvvm.com/docs/tutorials/basics-binding-syntax/2.0](https://www.dotvvm.com/docs/tutorials/basics-binding-syntax/2.0). 
Now, in relation to controls for form elements, we can look at the following example:

```html
<dot:TextBox Text="{value: Person.Username}" />
```

In this case, the control is `TextBox`, the DotVVM version of the `<input type="text" ... />` tag, with the difference that here we have data binding to load or get the value of a property, `Person.Username` in this example. Learn more about TextBox: [https://www.dotvvm.com/docs/controls/builtin/TextBox/latest](https://www.dotvvm.com/docs/controls/builtin/TextBox/latest).

Along the same lines, another control that we can find is the `RadioButton`, which follows the same principle as its HTML counterpart, only here we can use data binding to communicate with the viewmodel:

```html
<dot:RadioButton id="Female"
                 CheckedItem="{value: Person.Gender}" />
<label for="Female">Female</label>
```

More information about `RadioButton`: [https://www.dotvvm.com/docs/controls/builtin/RadioButton/latest](https://www.dotvvm.com/docs/controls/builtin/RadioButton/latest).

## What's next? ##

With this tutorial article, we learned how to create dynamic forms by implementing views and viewmodels with controls predefined by DotVVM. The source code for this implementation can be found in this repository: [DotVVM Form Controls](https://github.com/esdanielgomez/DotVVMFormControls).

Want to know the steps to create a DotVVM app? You can review this article: Steps to Create an MVVM (Model-View-Viewmodel) application with DotVVM and ASP.NET Core.

Want to take your first steps in developing web applications with ASP.NET Core and DotVVM? Learn more in this tutorial: [DotVVM and ASP.NET Core: Implementing CRUD operations](https://dev.to/dotvvm/dotvvm-and-asp-net-core-implementing-crud-operations-l2e).

Thank you! See you on [Twitter](https://twitter.com/esDanielGomez)!! :)
esdanielgomez
328,822
Mocking Nuxt Global Plugins to Test a Vuex Store File
How to deal with mocking plugins when you're not mounting a test file
0
2020-05-06T20:47:31
https://dev.to/rdelga80/mocking-nuxt-global-plugins-to-test-a-vuex-store-file-45e6
vue, testing, webdev, frontend
---
title: Mocking Nuxt Global Plugins to Test a Vuex Store File
published: true
description: How to deal with mocking plugins when you're not mounting a test file
tags: vue, testing, webdev, frontend
cover_image: https://dev-to-uploads.s3.amazonaws.com/i/9qojni4llwopl6zzx69h.jpg
---

This is one of those edge cases that drives a developer up the wall, and once it's finally solved you run to The Practical Dev to hopefully spare someone else the pain you went through.

I've written previously about Vue testing: [VueJS Testing: What Not How](https://dev.to/rdelga80/vuejs-testing-what-not-how-hn4), and since then have become the "go-to" for my company's Vue testing issues. But this one was quite a head scratcher.

### The Problem

Vue testing is pretty straightforward thanks to `vue-test-utils`. Testing components is really easy, as long as pieces are properly broken down into units (see my post).

With `vue-test-utils` you can mount components locally within the test file, test against the local mount, and ta-da: tests. Via the mount and Jest's functionality, things like plugins and mocks can be handled either locally within the file or globally within config or mock files.

This problem, though, deals with Vuex store files, which _are not mounted_. This is because state, actions, mutations, and getters are tested directly, not within the Vue ecosystem (where they behave differently than when tested directly). 
Sample Vuex Store file:

```javascript
export const actions = {
  testAction({ commit }, data) {
    commit('TEST_MUTATION', data)
  }
}

export const mutations = {
  TEST_MUTATION(state, data) {
    state.data = data
  }
}
```

Sample Vuex Store test file:

```javascript
import { cloneDeep } from 'lodash-es'
import * as testStore from '@/store/testFile'

describe('@/store/testFile', () => {
  let actions, mutations

  const cloneStore = cloneDeep(testStore)

  beforeEach(() => {
    actions = cloneStore.actions
    mutations = cloneStore.mutations
  })

  it('test action calls test mutation', () => {
    const commit = jest.fn()

    actions.testAction({ commit }, { id: 1 })

    expect(commit)
      .toHaveBeenCalledWith(
        'TEST_MUTATION',
        expect.anything()
      )
  })
})
```

### Approaches

This issue circled around a global plugin called `$plugin`, a plugin created to handle API requests globally. This means that within the store file there is no imported module, which rules out solutions such as `jest.mock()` or adding a file to the `__mocks__` directory.

This also ruled out adding to `VueTestUtils.config`, since again there is no Vue instance to test against.

Every time the test was run, it returned `$plugin` as undefined.

### Solution

The solution to this problem is actually pretty easy, and I'm a little surprised it took so long to figure out.

Here's an example of what an action like this may look like:

```js
export const actions = {
  async testAction({ commit }) {
    let data

    try {
      data = await this.$plugin.list(`endpoint`)
    } catch (e) {
      console.log(e)
    }

    commit('SET_DATA', data)
  }
}
```

When imported into a test file, it acts as a pure function, without having anything to do with Vuex functionality. That means that `this` refers to the actions object, and not a Vue instance!

Once that was cleared up, I added this to the `beforeEach` block in the test file:

```js
actions.$plugin = {
  list: () => {
    return [{}]
  }
}
```

And that's all. No more failing tests, and no more undefined plugins.
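To see why attaching the stub works, here is a minimal, framework-free sketch of the same mechanics. Nothing here uses Vue, Vuex, Nuxt, or Jest; the names (`fetchData`, `$plugin`, `SET_DATA`) mirror the article but are otherwise illustrative:

```javascript
// A plain actions object, as exported from a Vuex store file.
// When invoked as actions.fetchData(...), `this` is `actions` itself,
// so a test can attach a stub plugin directly onto the object.
const actions = {
  async fetchData({ commit }) {
    const data = await this.$plugin.list('endpoint')
    commit('SET_DATA', data)
  }
}

// Stub the global plugin, exactly like the beforeEach trick above.
actions.$plugin = { list: async () => [{ id: 1 }] }

// Record commits with a hand-rolled spy instead of jest.fn().
const calls = []
const commit = (type, payload) => calls.push({ type, payload })

actions.fetchData({ commit }).then(() => {
  console.log(calls[0].type) // logs "SET_DATA"
})
```

The key design point: because the action never imports `$plugin`, the binding is resolved at call time through `this`, which is whatever object the method is called on. That is why mocking it is as simple as property assignment.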
rdelga80
328,930
Iterators & Generators
5. Iterators &amp; Generators 5.1. Iterators We use for statement for looping over a list. &gt;&gt;...
6,570
2020-05-06T15:15:39
https://dev.to/estherwavinya/iterators-generators-1ghk
python, beginners, codenewbie, writing
**5. Iterators & Generators**

**5.1. Iterators**

We use the `for` statement for looping over a list.

```
>>> for i in [1, 2, 3, 4]:
...     print(i)
...
1
2
3
4
```

If we use it with a string, it loops over its characters.

```
>>> for c in "python":
...     print(c)
...
p
y
t
h
o
n
```

If we use it with a dictionary, it loops over its keys.

```
>>> for k in {"x": 1, "y": 2}:
...     print(k)
...
y
x
```

If we use it with a file, it loops over the lines of the file.

```
>>> for line in open("a.txt"):
...     print(line, end="")
...
first line
second line
```

So there are many types of objects which can be used with a for loop. These are called `iterable objects`.

There are many functions which consume these iterables.

```
>>> ",".join(["a", "b", "c"])
'a,b,c'
>>> ",".join({"x": 1, "y": 2})
'y,x'
>>> list("python")
['p', 'y', 't', 'h', 'o', 'n']
>>> list({"x": 1, "y": 2})
['y', 'x']
```

**5.1.1. The Iteration Protocol**

The built-in function `iter` takes an iterable object and returns an iterator.

```
>>> x = iter([1, 2, 3])
>>> x
<list_iterator object at 0x1004ca850>
>>> next(x)
1
>>> next(x)
2
>>> next(x)
3
>>> next(x)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
StopIteration
```

Each call to `next` on the iterator gives us the next element. If there are no more elements, it raises `StopIteration`.

Iterators are implemented as classes. Here is an iterator that works like the built-in `range` function.

```
class yrange:
    def __init__(self, n):
        self.i = 0
        self.n = n

    def __iter__(self):
        return self

    def __next__(self):
        if self.i < self.n:
            i = self.i
            self.i += 1
            return i
        else:
            raise StopIteration()
```

The `__iter__` method is what makes an object iterable. Behind the scenes, the `iter` function calls the `__iter__` method on the given object. The return value of `__iter__` is an iterator. It should have a `__next__` method and raise `StopIteration` when there are no more elements. 
Let's try it out:

```
>>> y = yrange(3)
>>> next(y)
0
>>> next(y)
1
>>> next(y)
2
>>> next(y)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 14, in __next__
StopIteration
```

Many built-in functions accept iterators as arguments.

```
>>> list(yrange(5))
[0, 1, 2, 3, 4]
>>> sum(yrange(5))
10
```

In the above case, both the iterable and the iterator are the same object. Notice that the `__iter__` method returned self. That need not always be the case.

```
class zrange:
    def __init__(self, n):
        self.n = n

    def __iter__(self):
        return zrange_iter(self.n)

class zrange_iter:
    def __init__(self, n):
        self.i = 0
        self.n = n

    def __iter__(self):
        # Iterators are iterables too.
        # Adding this function makes them so.
        return self

    def __next__(self):
        if self.i < self.n:
            i = self.i
            self.i += 1
            return i
        else:
            raise StopIteration()
```

If both the iterable and the iterator are the same object, it is consumed in a single iteration.

```
>>> y = yrange(5)
>>> list(y)
[0, 1, 2, 3, 4]
>>> list(y)
[]
>>> z = zrange(5)
>>> list(z)
[0, 1, 2, 3, 4]
>>> list(z)
[0, 1, 2, 3, 4]
```

**Problem 1:** Write an iterator class `reverse_iter` that takes a list and iterates it from the reverse direction.

```
>>> it = reverse_iter([1, 2, 3, 4])
>>> next(it)
4
>>> next(it)
3
>>> next(it)
2
>>> next(it)
1
>>> next(it)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
StopIteration
```

**5.2. Generators**

Generators simplify the creation of iterators. A generator is a function that produces a sequence of results instead of a single value.

```
def yrange(n):
    i = 0
    while i < n:
        yield i
        i += 1
```

Each time the `yield` statement is executed, the function generates a new value.

```
>>> y = yrange(3)
>>> y
<generator object yrange at 0x401f30>
>>> next(y)
0
>>> next(y)
1
>>> next(y)
2
>>> next(y)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
StopIteration
```

So a generator is also an iterator. 
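A quick way to convince yourself of this: a generator object implements the full iteration protocol itself, so `iter()` returns the object unchanged and `next()` drives it, just like the hand-written `yrange` class. A small check (the generator function is repeated here so the snippet stands alone):

```python
def yrange(n):
    i = 0
    while i < n:
        yield i
        i += 1

g = yrange(3)
assert iter(g) is g        # __iter__ returns the generator itself
assert next(g) == 0        # __next__ resumes the function body
assert list(g) == [1, 2]   # and, like yrange the class, it is consumed once
assert list(g) == []       # exhausted: StopIteration was already raised
```

Note that generators behave like `yrange` (the class) rather than `zrange`: iterable and iterator are the same object, so a generator can only be looped over once.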
You don’t have to worry about the iterator protocol.

The word “generator” is confusingly used to mean both the function that generates and what it generates. In this chapter, I’ll use the word “generator” to mean the generated object and “generator function” to mean the function that generates it.

Can you think about how it is working internally?

When a generator function is called, it returns a generator object without even beginning execution of the function. When the `next` method is called for the first time, the function starts executing until it reaches the `yield` statement. The yielded value is returned by the `next` call.

The following example demonstrates the interplay between `yield` and the call to the `__next__` method on a generator object.

```
>>> def foo():
...     print("begin")
...     for i in range(3):
...         print("before yield", i)
...         yield i
...         print("after yield", i)
...     print("end")
...
>>> f = foo()
>>> next(f)
begin
before yield 0
0
>>> next(f)
after yield 0
before yield 1
1
>>> next(f)
after yield 1
before yield 2
2
>>> next(f)
after yield 2
end
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
StopIteration
>>>
```

Let's see an example:

```
def integers():
    """Infinite sequence of integers."""
    i = 1
    while True:
        yield i
        i = i + 1

def squares():
    for i in integers():
        yield i * i

def take(n, seq):
    """Returns first n values from the given sequence."""
    seq = iter(seq)
    result = []
    try:
        for i in range(n):
            result.append(next(seq))
    except StopIteration:
        pass
    return result

print(take(5, squares())) # prints [1, 4, 9, 16, 25]
```

**5.3. Generator Expressions**

Generator expressions are the generator version of list comprehensions. They look like list comprehensions, but return a generator instead of a list.

```
>>> a = (x*x for x in range(10))
>>> a
<generator object <genexpr> at 0x401f08>
>>> sum(a)
285
```

We can use generator expressions as arguments to various functions that consume iterators. 
```
>>> sum((x*x for x in range(10)))
285
```

When there is only one argument to the calling function, the parentheses around the generator expression can be omitted.

```
>>> sum(x*x for x in range(10))
285
```

Another fun example: let's say we want to find the first 10 (or any n) Pythagorean triplets. A triplet `(x, y, z)` is called a Pythagorean triplet if `x*x + y*y == z*z`. It is easy to solve this problem if we know up to what value of z to test. But we want to find the first n Pythagorean triplets.

```
>>> pyt = ((x, y, z) for z in integers() for y in range(1, z) for x in range(1, y) if x*x + y*y == z*z)
>>> take(10, pyt)
[(3, 4, 5), (6, 8, 10), (5, 12, 13), (9, 12, 15), (8, 15, 17), (12, 16, 20), (15, 20, 25), (7, 24, 25), (10, 24, 26), (20, 21, 29)]
```

**5.3.1. Example: Reading multiple files**

Let's say we want to write a program that takes a list of filenames as arguments and prints the contents of all those files, like the `cat` command in Unix.

The traditional way to implement it is:

```
def cat(filenames):
    for f in filenames:
        for line in open(f):
            print(line, end="")
```

Now, let's say we want to print only the lines which contain a particular substring, like the `grep` command in Unix.

```
def grep(pattern, filenames):
    for f in filenames:
        for line in open(f):
            if pattern in line:
                print(line, end="")
```

Both these programs have lots of code in common. It is hard to move the common part to a function. But generators make it possible.

```
def readfiles(filenames):
    for f in filenames:
        for line in open(f):
            yield line

def grep(pattern, lines):
    return (line for line in lines if pattern in line)

def printlines(lines):
    for line in lines:
        print(line, end="")

def main(pattern, filenames):
    lines = readfiles(filenames)
    lines = grep(pattern, lines)
    printlines(lines)
```

The code is much simpler now, with each function doing one small thing. We can move all these functions into a separate module and reuse them in other programs. 
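The same pipeline idea can be exercised without touching the filesystem: because `grep` only needs an iterable of lines, we can feed it an in-memory list instead of `readfiles`. A self-contained sketch (the `numbered` stage is a hypothetical extra step added here just to show stages composing):

```python
def grep(pattern, lines):
    # same generator expression as in the article
    return (line for line in lines if pattern in line)

def numbered(lines):
    # a hypothetical extra pipeline stage: prefix each line with its index
    for i, line in enumerate(lines, start=1):
        yield f"{i}: {line}"

# Any iterable of lines can be the source; a list stands in for files here.
lines = ["import os", "x = 1", "import sys"]

# Stages are lazy: nothing is scanned until list() consumes the result.
result = list(numbered(grep("import", lines)))
assert result == ["1: import os", "2: import sys"]
```

This substitutability is the payoff of the generator style: each stage consumes and produces plain iterators, so sources and filters can be swapped or unit-tested independently.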
**Problem 2:** Write a program that takes one or more filenames as arguments and prints all the lines which are longer than 40 characters.

**Problem 3:** Write a function `findfiles` that recursively descends the directory tree for the specified directory and generates paths of all the files in the tree.

**Problem 4:** Write a function to compute the number of python files (.py extension) in a specified directory recursively.

**Problem 5:** Write a function to compute the total number of lines of code in all python files in the specified directory recursively.

**Problem 6:** Write a function to compute the total number of lines of code, ignoring empty and comment lines, in all python files in the specified directory recursively.

**Problem 7:** Write a program `split.py`, that takes an integer n and a filename as command line arguments and splits the file into multiple small files with each having n lines.

**5.4. Itertools**

The itertools module in the standard library provides a lot of interesting tools to work with iterators.

Let's look at some of the interesting functions.

`chain` – chains multiple iterators together. Note that `chain` returns an iterator, so we wrap it in `list` to see its contents:

```
>>> import itertools
>>> it1 = iter([1, 2, 3])
>>> it2 = iter([4, 5, 6])
>>> list(itertools.chain(it1, it2))
[1, 2, 3, 4, 5, 6]
```

`zip` – in Python 3, the built-in `zip` already returns an iterator (the equivalent in Python 2 was `itertools.izip`):

```
>>> for x, y in zip(["a", "b", "c"], [1, 2, 3]):
...     print(x, y)
...
a 1
b 2
c 3
```

**Problem 8:** Write a function `peep`, that takes an iterator as argument and returns the first element and an equivalent iterator.

```
>>> it = iter(range(5))
>>> x, it1 = peep(it)
>>> print(x, list(it1))
0 [0, 1, 2, 3, 4]
```

**Problem 9:** The built-in function `enumerate` takes an iterable and returns an iterator over pairs (index, value) for each value in the source.

```
>>> list(enumerate(["a", "b", "c"]))
[(0, "a"), (1, "b"), (2, "c")]
>>> for i, c in enumerate(["a", "b", "c"]):
...     print(i, c)
...
0 a
1 b
2 c
```
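As a further illustration of how itertools composes with generators, `itertools.islice` can stand in for the hand-written `take` function used earlier in this chapter. This is a sketch; `integers` is redefined here so the snippet is self-contained:

```python
import itertools

def integers():
    """Infinite sequence of integers, as defined earlier in the chapter."""
    i = 1
    while True:
        yield i
        i = i + 1

# islice lazily takes the first n items from any iterator,
# so it works even on infinite sequences.
squares = (i * i for i in integers())
print(list(itertools.islice(squares, 5)))  # -> [1, 4, 9, 16, 25]
```

Unlike slicing a list, `islice` never materializes the whole sequence, which is why it is safe on infinite generators.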
estherwavinya
328,935
5 Tips for Overcoming Coder's Block
TL;DR Use a sandbox Get a cheat-sheet Take a quickstart Break down the problem Mix 'em up...
0
2020-07-01T16:43:53
https://dev.to/vicradon/5-tips-for-overcoming-coder-s-block-28mo
devlive, codepen
TL;DR

1. Use a sandbox
2. Get a cheat-sheet
3. Take a quickstart
4. Break down the problem
5. Mix 'em up

## Intro

Developers sometimes experience a block. During the block period, they don't write any useful, reasonable code. This block may be due to tiredness: you may have been coding for days without breaks. But when you can't code because you have no idea how to approach the problem, these tips are for you.

## 1. Use a sandbox

Sandboxes are great because they help you immediately flesh out some code. Most times, they are separate from your project, which means your old code doesn't weigh you down. Some great sandbox environments are:

1. [Codesandbox](https://codesandbox.io/)
2. [Codepen](http://codepen.io/)
3. [Repl](http://repl.it/)
4. [Glitch](https://glitch.com/)

The ability to download your modules and packages in no time is a big plus.

## 2. Get a cheat-sheet

When using a new technology, you may need a general overview but be too lazy to watch a crash course (it happens). You need to read a cheat-sheet. Google "{insert technology name} cheat sheet filetype:pdf". You don't have to include `filetype:pdf` if you don't need the cheat-sheet in PDF format. A cheat-sheet shows you the big picture. No crash course can beat that (my opinion).

## 3. Take a quickstart

Consider using a **quickstart** if your first instinct is to consume an intro video. You should experiment once in a while. Go to the docs of the technology and check out the quickstart. Most modern, well-documented projects have one. A quickstart gets your hands dirty in the shortest time while referring you to links in the docs. If the docs are terrible, you can stick with whatever method suits you 🙂.

## 4. Break down the problem

How can you ever solve a problem without breaking it down? This is a fundamental part of problem-solving. Say you want to add state management to an app with several levels of state.
You can begin by sketching the state-flow among the key components. Another way is to create a new markdown file titled `how to implement feature-x`. This is pseudo-coding. It is a great way to think about logic before syntax. Your brain will be more prepared to face the problem at hand.

I recently discovered that draw.io has a VS Code extension, so you can use draw.io right from your favourite editor. If you are a visual-oriented thinker, this extension is for you. If you aren't, still consider diagrams of state-flow and logic. Using flowcharts doesn't mean you're a noob.

## 5. Mix 'em up

So, you have these four tips and are still not sure how to start? Mix them up. You could start with **tip 4**, then **tip 2**. It all depends on your use case. Don't slack like a potato when you have the power not to.

A worthy mention is rubber ducking, which is talking to a rubber duck about your logic or bug. Note: the object you talk to doesn't have to be a rubber duck. It could be your coffee mug.

Thanks for reading.
vicradon
329,046
11 Days of Salesforce Storefront Reference Architecture (SFRA) — Day 3: Creating a Storefront
11 Days of Salesforce Storefront Reference Architecture (SFRA) — Day 3: Creating a...
8,976
2020-05-18T19:43:39
https://medium.com/perimeterx/11-days-of-salesforce-storefront-reference-architecture-sfra-day-3-creating-a-storefront-1f85001ebf1d
sfcc, sfra, salesforce
---
title: 11 Days of Salesforce Storefront Reference Architecture (SFRA) — Day 3: Creating a Storefront
published: true
date: 2020-05-06 15:52:20 UTC
tags: sfcc,sfra,salesforce
series: 11 Days of Salesforce Storefront Reference Architecture (SFRA)
canonical_url: https://medium.com/perimeterx/11-days-of-salesforce-storefront-reference-architecture-sfra-day-3-creating-a-storefront-1f85001ebf1d
---

### 11 Days of Salesforce Storefront Reference Architecture (SFRA) — Day 3: Creating a Storefront

![](https://cdn-images-1.medium.com/max/1024/1*esvm-ahE7DcanHEZSQ16iw.jpeg)<figcaption>Photo by <a href="https://unsplash.com/@liz99?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">童 彤</a> on <a href="https://unsplash.com/s/photos/storefront?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></figcaption>

With the development environment ready (and if it isn't, please refer back to [Day 2](https://dev.to/pxjohnny/11-days-of-salesforce-storefront-reference-architecture-sfra-day-2-setting-up-your-32g8-temp-slug-8741800)), it’s time to create our first storefront. A storefront is basically the term SFCC uses for a website application, be it an e-commerce site or an in-house employee benefits portal. A storefront is a foundational concept of SFCC development, as everything you develop for the platform is, ultimately, hosted on a storefront.

### Uploading the Base Cartridge

Before we create our storefront, we need to upload the base cartridge to our Business Manager. The base cartridge contains the default controllers, JavaScript files and CSS for the site, and is necessary for creating sites on Business Manager.

In order to authenticate the SFRA command-line tool with Business Manager, we need to set the credentials needed for it to upload cartridges:

1. Using your favorite IDE, create a new file in the root of the storefront-reference-architecture folder named `dw.json`.
2.
Add the following content to the file:

```json
{
    "hostname": "<sandbox-url.demandware.net>",
    "username": "<username>",
    "password": "<password>",
    "code-version": "<code version>"
}
```

> 🦏 Make sure to replace the placeholders with your real credentials. If you are coming from SGJC, please note that these credentials are the same as the ones you used for the server connection in Eclipse.

3. Upload the base cartridge by running `npm run uploadCartridge` in the root folder. If all went well you should see an output similar to this:

![](https://cdn-images-1.medium.com/max/1024/1*XSOo9lxyMje-oSZMImgjfw.png)<figcaption>app_storefront_base was uploaded successfully 🎉</figcaption>

### Setting up a New Sandbox Site in Business Manager

Now that the `app_storefront_base` cartridge is uploaded, we can proceed to create a new site that will make use of it. The fastest way to set up an SFRA demo site is to use the demo site SFCC provides.

To set up an SFRA sandbox site:

1. On your local machine, clone the demo site repository by running `git clone git@github.com:SalesforceCommerceCloud/storefrontdata.git`
2. cd to the repository folder (`cd storefrontdata`).
3. In order to upload the sample site to Business Manager, the folder should be zipped up. You can zip the `demo_site_sfra` folder yourself, or run `npm run zipData` (defined in `package.json`) to get the needed zip file. If successful, you should have a new `demo_site_sfra.zip` file in your current folder.
4. Log in to Business Manager and click **Administrations > Site Development > Site Import & Export.**
5. Under **Import** make sure **Local** is selected. Click **Choose File** and select the demo\_site\_sfra.zip file we created earlier. Once selected, click **Upload**.
6. Once the upload finishes, you should see the new file under the **Import** section. Select it and click **Import.**

![](https://cdn-images-1.medium.com/max/1024/1*ZwXSYVHFfuzD7mGYx0_Mrg.png)<figcaption>The new file ready to be imported.</figcaption>

7.
Importing the site will take a while (about 10 minutes); you can watch the progress in the **Status** section at the bottom of the page. Once done, click **Administrations > Sites > Manage Sites.**

8. If all went well, you should see two new sites under the **Storefront Sites** section: **RefArch** and **RefArchGlobal**. Click on **RefArch** and select the **Settings** tab. You’ll notice that the base cartridge we uploaded before is on the Cartridges list.

> 🦏 We will discuss the Cartridges list and the importance of a cartridge in the cartridge chain on day 4.

9. Using the dropdown in the top left corner, select **RefArch,** and click the **Storefront** button. If all went well, a new tab will open with the demo site:

![](https://cdn-images-1.medium.com/max/1024/1*xGQv60KJhKd0NuWF3InhzQ.png)<figcaption>The demo SFRA site in all its glory.</figcaption>

Congratulations, you are now the proud owner of a brand new SFRA demo site 🎊

In tomorrow’s post, we will create our very first cartridge and use it on the newly created demo site!

As always, looking forward to your comments either here or on [Twitter](https://twitter.com/fullstackj) 😎

* * *
pxjohnny
330,593
RPC-like API for your Laravel project
JSON-RPC 2.0 this is a simple stateless protocol for creating an RPC (Remote Procedure Call) style AP...
0
2020-05-08T19:30:13
https://dev.to/tabuna/rpc-like-api-for-your-laravel-project-3oeh
laravel, api, jsonrpc
[JSON-RPC 2.0](https://www.jsonrpc.org/specification) is a simple stateless **protocol** for creating an RPC (Remote Procedure Call) style API.

It usually works as follows: you have one single endpoint on the server that accepts requests with a body of the form:

```json
{"jsonrpc": "2.0", "method": "post.like", "params": {"post": "12345"}, "id": 1}
```

And it returns answers of the form:

```json
{"jsonrpc": "2.0", "result": {"likes": 123}, "id": 1}
```

If an error occurs, an error response:

```json
{"jsonrpc": "2.0", "error": {"code": 404, "message": "Post not found"}, "id": "1"}
```

And that's all! Did I say it was easy?

Bonus: batch operations are supported.

Request:

```json
[
    {"jsonrpc":"2.0","method":"server.shutdown","params":{"server":"42"},"id":1},
    {"jsonrpc":"2.0","method":"server.remove","params":{"server":"24"},"id":2}
]
```

Response:

```json
[
    {"jsonrpc":"2.0","result":{"status":"down"},"id":1},
    {"jsonrpc":"2.0","error":{"code":404,"message":"Server not found"},"id":2}
]
```

In the `id` field, the API client can send anything, so that after receiving responses from the server it can match them with its requests. The client can also send "notifications": requests without an `id` field that do not require a response from the server:

```json
{"jsonrpc":"2.0","method":"analytics:trackView","params":{"type": "post", "id":"123"}}
```

Not so long ago I became acquainted with this protocol, and it is great for my tasks of creating an API in Laravel. To use this protocol in your application, add the [dependency](https://sajya.github.io/) using composer:

```bash
$ composer require sajya/server
```

All actions are described in `Procedure` classes. A procedure is like a familiar controller, but it must contain the static property `name`.
Create a new procedure class with the following artisan command:

```bash
php artisan make:procedure TennisProcedure
```

A new file will be created at `app/Http/Procedures/TennisProcedure.php`.

Let's call the new procedure `tennis`. To do this, change the `name` property and make the `ping` method return the value `pong`, so the file has this content:

```php
<?php

namespace App\Http\Procedures;

use Sajya\Server\Procedure;

class TennisProcedure extends Procedure
{
    /**
     * The name of the procedure that will be
     * displayed and taken into account in the search
     */
    public static string $name = 'tennis';

    /**
     * Execute the procedure.
     *
     * @return string
     */
    public function ping(): string
    {
        return 'pong';
    }
}
```

Like a controller, the procedure needs to be registered in the routes file. Define it in `api.php`:

```php
use App\Http\Procedures\TennisProcedure;

Route::rpc('/v1/endpoint', [
    TennisProcedure::class,
])->name('rpc.endpoint');
```

To call the required method, pass the name specified in the class and the desired method, separated by the "@" character. In our case it will be `tennis@ping`.

Let's make a `curl` call to the new API:

```bash
curl 'http://127.0.0.1:8000/api/v1/endpoint' --data-binary '{"jsonrpc":"2.0","method":"tennis@ping","params":[],"id":1}'
```

The result will be the JSON string:

```bash
{"id":"1","result":"pong","jsonrpc":"2.0"}
```
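Because a batch response may come back in any order, a client has to match entries back to its requests by `id`. A minimal sketch of that matching step (in Python, purely illustrative and not part of the sajya library):

```python
import json

# The requests we sent, keyed by the ids we chose.
requests = [
    {"jsonrpc": "2.0", "method": "server.shutdown", "params": {"server": "42"}, "id": 1},
    {"jsonrpc": "2.0", "method": "server.remove", "params": {"server": "24"}, "id": 2},
]

# A batch response body as it might come back from the endpoint, out of order.
raw = ('[{"jsonrpc":"2.0","error":{"code":404,"message":"Server not found"},"id":2},'
       '{"jsonrpc":"2.0","result":{"status":"down"},"id":1}]')

# Index responses by id, then walk the original requests in order.
by_id = {resp["id"]: resp for resp in json.loads(raw)}
for req in requests:
    resp = by_id[req["id"]]
    # Each entry carries either a "result" or an "error" member.
    print(req["method"], "->", resp.get("result", resp.get("error")))
```

This is why notifications (requests without an `id`) get no response entry at all: there is nothing to match them against.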
tabuna
329,235
Animated Login Form 2020 tutorial using HTML & CSS Flexbox only [video format]
In this tutorial, we'll create an awesome Animated Login Form using HTML &amp; CSS Flexbox only....
6,475
2020-05-06T23:11:15
https://dev.to/codeleague7/animated-login-form-2020-tutorial-using-html-css-flexbox-only-video-format-1339
css, html, beginners, tutorial
{% youtube 1DWJ65XP-zs %} In this tutorial, we'll create an awesome **Animated Login Form using HTML & CSS Flexbox only**. This project is suitable for all especially **beginners**. We'll also use **CSS transitions** which allow us to change property values smoothly, over a given duration.
codeleague7
329,240
Mid Meet Py - Ep.6 - Interview with Steve Dower
We meet in the middle of the day in the middle of the week to chat about Python news. Interview with Steve Dower, Python tools developer at Microsoft and core CPython developer
0
2020-05-07T01:22:16
https://dev.to/midmeetpy/mid-meet-py-ep-6-interview-with-steve-dower-209m
python, chat, communities, irl
---
title: Mid Meet Py - Ep.6 - Interview with Steve Dower
published: true
description: We meet in the middle of the day in the middle of the week to chat about Python news. Interview with Steve Dower, Python tools developer at Microsoft and core CPython developer
tags: python, chat, communities, IRL
---

PyChat (Python news):

[PyData UK joint meetup with multiple chapters in UK yesterday, awesome talks about TensorFlow Probability and how to extend Pandas capabilities](https://www.meetup.com/PyData-Manchester/events/270272244/)

[Global PyData conference calling for organizing committee](https://twitter.com/dontusethiscode/status/1257907697141329926)

[PyData Dublin starting with a round of Monday talks from the 18th](https://www.meetup.com/PyDataDublin/)

[Release of Napari `0.3.0` with many new features](https://napari.org/docs/release/release_0_3_0.html)

[Naomi Ceder is the first keynote speaker announced for EuroPython](https://twitter.com/europython/status/1257680784384831489)

[Learning aid for Vim](https://vim-adventures.com/) - learn Vim while playing a game

Py Hall of Frame:

Interview with Steve Dower, Python tools developer at Microsoft and core CPython developer

[Follow Steve on Twitter](https://twitter.com/zooba)

PyPI Highlight:

[Napari - fast, interactive, multi-dimensional image viewer for Python](https://github.com/napari/napari)

[Kedro - a best-practice scaffold for ML and DS pipelines](https://github.com/quantumblacklabs/kedro)
cheukting_ho
329,332
DNS Resolution
Recently was working around DNS and thought to put it here! Computers work with numbers. Computers t...
0
2020-05-07T04:18:27
https://dev.to/_nancychauhan/dns-resolution-3pbg
dns, dnsresolution, networking
Recently I was working around DNS and thought to put it here!

Computers work with numbers. A computer talks to another computer using a numeric address called an IP address. Though structured and thus great for computers, it is tough for humans to remember. DNS acts as the phonebook of the internet 🌐. It converts a web address such as "example.com" to an IP address, which computers use to connect. As a result, we don't have to remember complicated IP addresses 🤩.

Suppose we are trying to open example.com in a browser. A typical DNS lookup goes like this:

1. The browser first looks up "example.com" in its DNS cache. If it is present, the browser uses the cached IP address and connects to "example.com". If not, the browser goes to the next step.

2. The browser issues a `gethostbyname(3)` call and passes the responsibility of name resolution to the operating system (OS). The OS now becomes the resolver.

3. The OS looks for the domain name in the system DNS cache. If found, it returns the IP address to the browser; else the OS goes to the next step.

4. The OS looks into `/etc/hosts`, known as the hosts file. The hosts file is a method of maintaining hostname-to-IP-address mappings from the ARPANET days. If an entry exists, the OS returns the IP address; else it goes to the next step.

5. The OS tries to connect to your configured DNS servers and sends a DNS query for "example.com". You can set your DNS servers manually, or your connected networks can configure them for you. The DNS server now becomes the resolver and has to return a response to the OS of the machine that sent the DNS query.

6. The DNS server (resolver) looks into its DNS cache for the hostname. If it finds an entry, it returns it to the calling machine. Else it goes to the next step.

7. The DNS server tries to connect to a root nameserver (.). You can do `dig .` to find the root nameservers your DNS server connects to.
At present, there are 13 root nameservers, named with the letters "a" to "m" &mdash; `a.root-servers.net.` and so on.

```
➜ dig -t NS .

; <<>> DiG 9.10.6 <<>> -t NS .
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 45206
;; flags: qr rd ra; QUERY: 1, ANSWER: 13, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;.                      IN      NS

;; ANSWER SECTION:
.                48     IN      NS      a.root-servers.net.
.                48     IN      NS      d.root-servers.net.
.                48     IN      NS      k.root-servers.net.
.                48     IN      NS      g.root-servers.net.
.                48     IN      NS      j.root-servers.net.
.                48     IN      NS      c.root-servers.net.
.                48     IN      NS      b.root-servers.net.
.                48     IN      NS      m.root-servers.net.
.                48     IN      NS      f.root-servers.net.
.                48     IN      NS      h.root-servers.net.
.                48     IN      NS      l.root-servers.net.
.                48     IN      NS      e.root-servers.net.
.                48     IN      NS      i.root-servers.net.

;; Query time: 80 msec
;; SERVER: 10.254.254.210#53(10.254.254.210)
;; WHEN: Wed May 06 22:51:43 IST 2020
;; MSG SIZE  rcvd: 239
```

8. Now the DNS server asks one of the root nameservers above for the nameservers of the ".com" TLD.

```
➜ dig @d.root-servers.net. -t NS com.

; <<>> DiG 9.10.6 <<>> @d.root-servers.net. -t NS com.
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 106
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 13, ADDITIONAL: 27
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1450
;; QUESTION SECTION:
;com.                   IN      NS

;; AUTHORITY SECTION:
com.            172800  IN      NS      a.gtld-servers.net.
com.            172800  IN      NS      b.gtld-servers.net.
com.            172800  IN      NS      c.gtld-servers.net.
com.            172800  IN      NS      d.gtld-servers.net.
com.            172800  IN      NS      e.gtld-servers.net.
com.            172800  IN      NS      f.gtld-servers.net.
com.            172800  IN      NS      g.gtld-servers.net.
com.            172800  IN      NS      h.gtld-servers.net.
com.            172800  IN      NS      i.gtld-servers.net.
com.            172800  IN      NS      j.gtld-servers.net.
com.            172800  IN      NS      k.gtld-servers.net.
com.            172800  IN      NS      l.gtld-servers.net.
com.            172800  IN      NS      m.gtld-servers.net.

;; ADDITIONAL SECTION:
a.gtld-servers.net.     172800  IN      A       192.5.6.30
b.gtld-servers.net.     172800  IN      A       192.33.14.30
c.gtld-servers.net.     172800  IN      A       192.26.92.30
d.gtld-servers.net.     172800  IN      A       192.31.80.30
e.gtld-servers.net.     172800  IN      A       192.12.94.30
f.gtld-servers.net.     172800  IN      A       192.35.51.30
g.gtld-servers.net.     172800  IN      A       192.42.93.30
h.gtld-servers.net.     172800  IN      A       192.54.112.30
i.gtld-servers.net.     172800  IN      A       192.43.172.30
j.gtld-servers.net.     172800  IN      A       192.48.79.30
k.gtld-servers.net.     172800  IN      A       192.52.178.30
l.gtld-servers.net.     172800  IN      A       192.41.162.30
m.gtld-servers.net.     172800  IN      A       192.55.83.30
a.gtld-servers.net.     172800  IN      AAAA    2001:503:a83e::2:30
b.gtld-servers.net.     172800  IN      AAAA    2001:503:231d::2:30
c.gtld-servers.net.     172800  IN      AAAA    2001:503:83eb::30
d.gtld-servers.net.     172800  IN      AAAA    2001:500:856e::30
e.gtld-servers.net.     172800  IN      AAAA    2001:502:1ca1::30
f.gtld-servers.net.     172800  IN      AAAA    2001:503:d414::30
g.gtld-servers.net.     172800  IN      AAAA    2001:503:eea3::30
h.gtld-servers.net.     172800  IN      AAAA    2001:502:8cc::30
i.gtld-servers.net.     172800  IN      AAAA    2001:503:39c1::30
j.gtld-servers.net.     172800  IN      AAAA    2001:502:7094::30
k.gtld-servers.net.     172800  IN      AAAA    2001:503:d2d::30
l.gtld-servers.net.     172800  IN      AAAA    2001:500:d937::30
m.gtld-servers.net.     172800  IN      AAAA    2001:501:b1f9::30

;; Query time: 259 msec
;; SERVER: 199.7.91.13#53(199.7.91.13)
;; WHEN: Wed May 06 22:54:16 IST 2020
;; MSG SIZE  rcvd: 828
```

9. The DNS server then asks one of the gTLD nameservers above for the authoritative nameservers for the domain `example.com`. This set of nameservers hosts the addresses of the domain as well as any subdomains it may have.

```
➜ dig @a.gtld-servers.net. -t NS example.com

; <<>> DiG 9.10.6 <<>> @a.gtld-servers.net. -t NS example.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 1127
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 2, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;example.com.           IN      NS

;; AUTHORITY SECTION:
example.com.    172800  IN      NS      a.iana-servers.net.
example.com.    172800  IN      NS      b.iana-servers.net.

;; Query time: 66 msec
;; SERVER: 192.5.6.30#53(192.5.6.30)
;; WHEN: Wed May 06 22:55:10 IST 2020
;; MSG SIZE  rcvd: 88
```

10. The DNS server requests the authoritative nameservers for the IP address of the domain and returns the result to the system that sent it the DNS query.

```
➜ dig @a.iana-servers.net. -t A example.com

; <<>> DiG 9.10.6 <<>> @a.iana-servers.net. -t A example.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 5682
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;example.com.           IN      A

;; ANSWER SECTION:
example.com.    86400   IN      A       93.184.216.34

;; Query time: 281 msec
;; SERVER: 199.43.135.53#53(199.43.135.53)
;; WHEN: Wed May 06 22:58:40 IST 2020
;; MSG SIZE  rcvd: 56
```

Using the IP address `93.184.216.34`, the web browser connects to the host.

Every stage maintains a cache for some number of seconds, based on the `TTL` that every query returns. In the following DNS query result, the TTL is `86400` seconds:

```
example.com.    86400   IN      A       93.184.216.34
```

A resolver can thus cache the contents of the query for 86400 seconds. This caching helps to speed up the process and reduces the load on DNS servers.

Originally posted at https://todayilearnt.xyz/posts/nancy/dns_resolution/
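The per-stage, TTL-bound caching described in the post above can be sketched as a tiny cache class. This is purely illustrative (real resolvers are far more involved); the class name and the `now` parameter are inventions for the sake of a testable example:

```python
import time

class TTLCache:
    """Minimal sketch of a TTL cache, as each resolver stage might keep one.

    An answer is held only until the record's TTL elapses.
    """

    def __init__(self):
        self._entries = {}  # name -> (expires_at, address)

    def put(self, name, address, ttl, now=None):
        now = time.time() if now is None else now
        self._entries[name] = (now + ttl, address)

    def get(self, name, now=None):
        now = time.time() if now is None else now
        entry = self._entries.get(name)
        if entry is None:
            return None
        expires_at, address = entry
        if now >= expires_at:
            del self._entries[name]  # stale record: drop it and re-resolve
            return None
        return address

cache = TTLCache()
# TTL of 86400 seconds, as in the example.com answer above.
cache.put("example.com", "93.184.216.34", ttl=86400, now=0)
print(cache.get("example.com", now=100))    # -> 93.184.216.34 (within TTL)
print(cache.get("example.com", now=90000))  # -> None (TTL expired)
```

The `now` parameter just makes expiry easy to demonstrate without waiting; a real cache would use the wall clock throughout.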
_nancychauhan
329,365
what's your side hustle after work hours?
What do you do after work time ? If you have a side hustle how motivated are you to achieve it ?
0
2020-05-07T05:13:52
https://dev.to/get_hariharan/what-s-your-side-hustle-after-work-hours-9m1
discuss
What do you do after work hours? If you have a side hustle, how motivated are you to achieve it?
get_hariharan