Dataset columns:

| column | type | values / lengths |
|---|---|---|
| `id` | int64 | 5 to 1.93M |
| `title` | string | length 0 to 128 |
| `description` | string | length 0 to 25.5k |
| `collection_id` | int64 | 0 to 28.1k |
| `published_timestamp` | timestamp[s] | |
| `canonical_url` | string | length 14 to 581 |
| `tag_list` | string | length 0 to 120 |
| `body_markdown` | string | length 0 to 716k |
| `user_username` | string | length 2 to 30 |
1,261,344
Auto-Run Tests: Test Via Continuous Integration
What is Continuous Integration(CI)? It's a way to make sure new codes are continuously...
0
2022-11-17T23:16:38
https://dev.to/cychu42/auto-run-tests-test-via-continuous-integration-ib6
beginners, javascript, opensource
## What is Continuous Integration (CI)?

It's a way to make sure new code is continuously integrated by running automated tests, so that new code follows certain standards and doesn't break anything.

## How to do it?

I used GitHub Actions, which made the process pretty easy.

1. In your GitHub repo, click the Actions tab and choose a template designed for your language. This will add a YAML file for configuration.
2. Edit the YAML file to suit your needs:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2kk4kdnj62b43djwt3w1.png)

- `on` lets you set when to run tests, such as on any push to the main branch or any PR to main
- `runs-on` sets the OS of the virtual machine that runs the tests, such as windows-latest, macos-latest, or ubuntu-latest
- `strategy`'s `matrix` sets versions of the language or tool, like node-version [16.x, 18.x] for Node.js versions 16 or 18
- Under `steps`:
  - `uses` checks out the code to run
  - `name` sets up the version of Node to run
  - `run: npm ci` installs dependencies
  - `run: npm build` runs `npm build` according to your `build` script in package.json (optional), like: `- run: npm run build --if-present`
  - `run: npm test` runs the `npm test` command according to your `test` script in package.json

## Another CI

There was [another repo](https://github.com/alexsam29/ssg-cli-tool) I tried to add tests for, and it uses the same setup as mine. It uses JavaScript and achieves CI via GitHub Actions, except that it also runs tests for versions 14.x and 16.x of Node.js. Given that both repos use [Jest](https://jestjs.io/) for testing, it's pretty easy to write tests for the repo. I needed to spend time understanding the code, then run test coverage and look at the existing tests to see what needed to be done.

## Final Thought

I think CI is pretty great, as it provides an easy way to run tests and make sure new code conforms to certain standards before being added.
It also saves developers the time of running tests manually, and CI makes sure no test is forgotten.
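The options described in the post above can be sketched as a single workflow file. This is a minimal example following the usual `.github/workflows/` conventions; the file name, job name, and action versions are my assumptions, not the post's actual file:

```yaml
# Hypothetical .github/workflows/ci.yml matching the options described above
name: CI

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [16.x, 18.x]
    steps:
      # Check out the repository code
      - uses: actions/checkout@v3
      # Set up the Node.js version from the matrix
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v3
        with:
          node-version: ${{ matrix.node-version }}
      - run: npm ci
      - run: npm run build --if-present
      - run: npm test
```

With this in place, every push and pull request against main runs the test suite once per Node.js version in the matrix.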
cychu42
1,261,348
How to change the terminal background
1° Open your terminal: 2° Click the downward arrow: 3° Open your terminal's...
0
2022-11-17T20:42:06
https://dev.to/stefanyrepetcki/como-mudar-o-fundo-do-terminal-128c
frontend, develop, tutorial, programming
1° Open your terminal:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m5fj2u6oydu8ftxga5ei.png)

2° Click the downward arrow:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hqthxwu2lvc5kdni6mt9.png)

3° Open your terminal's settings:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cu7nhlqi83rpnbnbtpbg.png)

4° Look for the option with your terminal's name:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7r582lh1osmejkro9stp.png)

5° In the additional settings, look for "Appearance":

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7y5969yk8g3rz41oh2n9.png)

6° Find the item shown in the image below:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qc0aw6nkl0mmcml6ffgv.png)

7° Add the path to the background image:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z4t0pvu8tcnkcuzeehwi.png)

8° After selecting it, click save. Your terminal will update immediately:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/463uzfeawm3synz8c39b.png)

I hope this helped! :) Here are my [linkedin](https://www.linkedin.com/in/stefany-repetcki/) and [github] ❤️
stefanyrepetcki
1,261,883
The Best Ways to Exploit Rate Limit Vulnerabilities
TL;DR- If you’re into bug bounties or just white-hat hacking in general, you’ve probably heard of...
0
2022-11-18T04:22:30
https://medium.com/the-gray-area/the-best-ways-to-exploit-rate-limit-vulnerabilities-f73ac24a08f1
cyber, cybersecurity, bughunting, bugbounty
---
title: The Best Ways to Exploit Rate Limit Vulnerabilities
published: true
date: 2022-11-18 01:36:39 UTC
tags: cyber,cybersecurity,bughunting,bugbounty
canonical_url: https://medium.com/the-gray-area/the-best-ways-to-exploit-rate-limit-vulnerabilities-f73ac24a08f1
---

[![](https://cdn-images-1.medium.com/max/1475/1*OMhE9T_tuC0pUoZyWKWSnQ.jpeg)](https://medium.com/the-gray-area/the-best-ways-to-exploit-rate-limit-vulnerabilities-f73ac24a08f1?source=rss-79a7fbf51e21------2)

TL;DR- If you’re into bug bounties or just white-hat hacking in general, you’ve probably heard of no-rate-limit vulnerabilities

[Continue reading on The Gray Area »](https://medium.com/the-gray-area/the-best-ways-to-exploit-rate-limit-vulnerabilities-f73ac24a08f1?source=rss-79a7fbf51e21------2)
grahamzemel
1,261,901
Get JSON Value with Dynamic Key in TypeScript
Small and hopefully helpful snippet. Scenario: You have an on going update of JSON of Car data which...
0
2022-11-18T05:31:36
https://dev.to/manet/get-json-value-with-dynamic-key-16l7
typescript, json, snippet, beginners
Small and hopefully helpful snippet.

Scenario: You have an ongoing update of JSON of `Car` data in which `modelYear` entries are being added regularly. You want to output the model description based on the year given as input.

```typescript
// Get JSON value with dynamic key
const vehicle = {
  "category": "car",
  "brand": "SupaDupa",
  "modelYear": {
    "2000": "SD-S",
    "2020": "SD-M",
    "2030": "SD-A",
    "2040": "SD-R",
    "2050": "SD-T"
  }
}

type ObjectKey = keyof typeof vehicle.modelYear;

const year = '2020' as ObjectKey // set value of dynamic key

console.log(vehicle.modelYear[year])
```

The original snippet is [here](https://www.typescriptlang.org/play?#code/PTAEHEFMBdQKQMoHkByoBqBDANgV0qAO4CW0AFqACYCeAdpgLbEDGoA1pNQFDMD2tAZ1gA3SGRbYCAXlABvLqEWgARM0zRIAc14AnasoBcKtTuUAaBUuUAjHZlqVDKhLgAOmACJvM5y4uUMvJSQ2ACakJimRvJKsSoATAAMyU7KCB4AtAi+cVZJSanpGQCyObkJiQDMiYWZAIJlucpJACw1RmmZAEqNcc2JAKztzpkAKsp+cQC+XDPQ1K4ESNYAVpDM0ADSnKAyHNS8AGag84tHoKLizJIAdIHBYRE6ANw8-EKg1E+7oADk+YlfqBMAJQMs1htttRQCBQAIYBccPhQOcaPQmKx9m9BLxbtheJoABSXCSQO5BELhSIAbS+kQAugBKLhAA)
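Building on the snippet above, one might also guard the dynamic key at runtime before narrowing it, so that an unknown year doesn't silently index the object. The `modelFor` helper below is my own illustration, not part of the original post:

```typescript
// Hypothetical helper: look up a model by year, checking the key at runtime
// before narrowing the string to a valid key of modelYear.
const vehicle = {
  category: "car",
  brand: "SupaDupa",
  modelYear: {
    "2000": "SD-S",
    "2020": "SD-M",
    "2030": "SD-A",
  },
} as const;

type ModelYear = keyof typeof vehicle.modelYear;

function modelFor(year: string): string | undefined {
  // `in` confirms the key exists, so the `as ModelYear` narrowing is safe.
  if (year in vehicle.modelYear) {
    return vehicle.modelYear[year as ModelYear];
  }
  return undefined;
}
```

For example, `modelFor("2020")` returns `"SD-M"`, while a year that is not in the object returns `undefined` instead of failing.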
manet
1,262,160
Creating a Simple Firefox Extension
About This will be short overview what steps it takes and how code looks of very simple...
0
2022-11-18T08:47:13
https://dev.to/struggzard/creating-a-simple-firefox-extension-219g
## About

This will be a short overview of the steps it takes, and what the code looks like, for a very simple browser extension.

## Goal

The extension will have a simple background script which creates a loop and shows a native notification with static content every 30 seconds.

## File Structure

For a minimal extension setup only a `manifest.json` file is required, but without including some scripts it would not do much, so `just-sample.js` is added with the main extension code.

```
Extension Folder
+ icons
- manifest.json
- just-sample.js
```

## File Content

Here is the `manifest.json` content. Long-lived scripts should be added to the `background` section, and since this extension uses the notification feature, it needs to be defined in the `permissions` array.

```json
{
  "manifest_version": 2,
  "name": "Annoying-Notifications",
  "version": "1.0",
  "description": "Sample browser extension to annoy user with useless notifications :)",
  "icons": {
    "48": "icons/notification_black_48dp.png"
  },
  "background": {
    "scripts": ["just-sample.js"]
  },
  "permissions": ["notifications"]
}
```

Based on the documentation, only `manifest_version`, `name`, and `version` are mandatory.

The main script is simple and can be split into the following parts:

1. There is a `createNotification` function which acts as a notification factory.
2. Another function, `mainLoop`, calls itself recursively to avoid blocking the thread. The function's goal is to call `createNotification`, wait for 30 seconds, and then repeat.
3. The script body only calls `mainLoop` once (when the extension is initialized). However, since `mainLoop` calls itself endlessly, a notification shows up every 30 seconds.

## Demo

A short demo video demonstrates the extension in action. Please note that the video is trimmed; when testing the extension locally it will take more time for the notification to show up.

{% embed https://www.youtube.com/watch?v=ieuXYFTUYDk %}

## Conclusion

While this sample does not do much, it demonstrates how to get started and what to expect when developing a Firefox extension.
## References

- Source code: https://github.com/struggzard/firefox-extension-sample
- Documentation: https://developer.mozilla.org
- Cover image: https://unsplash.com/photos/4xmVvHRioKg
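The three parts described above can be sketched as follows. This is a hypothetical reconstruction of `just-sample.js` (the post does not show the script itself); the notification message text is my assumption, and the real extension would pass the WebExtension `browser` object, whose `notifications.create` call is permitted by the `"notifications"` entry in the manifest:

```javascript
// 1. Notification factory. `api` is the WebExtension `browser` object
//    (injected as a parameter here so the sketch is testable).
function createNotification(api) {
  api.notifications.create({
    type: "basic",
    iconUrl: "icons/notification_black_48dp.png",
    title: "Annoying-Notifications",
    message: "Another useless notification :)", // assumed message text
  });
}

// 2. Recursive setTimeout instead of a busy loop, so the thread
//    is never blocked: show a notification, wait 30 seconds, repeat.
function mainLoop(api) {
  createNotification(api);
  setTimeout(() => mainLoop(api), 30 * 1000);
}

// 3. In the extension, the script body would call this once on startup:
// mainLoop(browser);
```

Injecting the API object is a small design choice for the sketch; in the actual background script, `browser` is a global.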
struggzard
1,262,919
Introducing extrnode: An Open-Source Load Balancer for RPC Nodes
November has been such a difficult month for Solana developers given the recent events. While many...
0
2022-11-18T17:49:38
https://dev.to/extrnode/introducing-extrnode-an-open-source-load-balancer-for-rpc-nodes-2399
dapp, blockchain, opensource, programming
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q1bil4lry5d5xzahj819.png)

November has been such a difficult month for Solana developers given the recent events. While many people look to be leaving the Solana blockchain as a result, we are glad to note that most dApp developers are staying put. The FTX furor does not appear to have changed how developers view Solana, and many continue to build applications on the network.

Of course, there’s more work to do to make Solana highly efficient and scalable, which is why we’re launching an open-source load balancer for the Solana developer community. It allows developers to reroute requests from Solana’s delinquent public RPC nodes to active ones. The product is available for download and we invite you to test it and share your feedback on our [Discord](https://discord.gg/mb2xQ7kgdq). You can also build a Docker image from the [source code](https://github.com/extrnode/solana-lb) or use a ready-made image from [Docker Hub](https://hub.docker.com/r/extrnode/solana-lb).

**The role of RPCs on Solana**

Cryptocurrency wallets are the most popular applications of blockchain technology, but they are not actually connected to the blockchain. Those dApps simply translate user actions, such as checking a balance, into code to be executed by RPC nodes on the blockchain. Unfortunately, RPC nodes are pre-selected, making it possible for some of them to fail or time out. If that happens, the applications won’t work.

Besides, hosting Solana nodes is quite expensive. Most developers cannot justify paying over $1,000 to run a node, so they opt for public RPC nodes. Public RPC nodes are usually hosted by centralized providers (Google Cloud, Hetzner, and AWS), making them highly unreliable. For example, Hetzner recently blocked Solana nodes from using its service, taking offline a whopping 22% of the network’s nodes.

Hetzner’s decision did not affect Solana hard enough to take the network down, but many apps crashed as their selected RPC nodes went offline. The event serves as a warning to the community that relying on only one RPC hosted on a central service leaves apps susceptible to the interests of third parties. One way to ensure availability is to create a script, module, or standalone balancer that switches automatically to a backup RPC endpoint if something unexpected happens with the primary node. If that fails too, the developer is in trouble.

**What is an extrnode open-source load balancer?**

extrnode aims to solve the node reliability problem by creating an [open-source load balancer](https://github.com/extrnode/solana-lb) that distributes requests within a network of Solana’s public RPC nodes. It automatically reroutes requests to another active RPC endpoint when the current request-sending RPC is down. Therefore, extrnode provides a more reliable backup system for dApps to maintain 99.99% uptime.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a844x0xzbqzfxvflm6s7.png)

The [extrnode load balancer](https://extrnode.com/) will be free to use. Simply run the open-source load balancer on Docker, connect to it, and all is set up. It is also possible to configure the open-source load balancer to pick the closest RPCs with the fastest response time. Lastly, extrnode is a community-driven project, meaning users can share ideas, modify their clients, and contribute to the project's source code.

Developers currently have only three options when selecting a load balancer to run their RPC nodes:

- Decentralized but paid balancers: users often have to pay in volatile project tokens.
- Free but centralized balancers: users can only access an RPC of a single provider, which reduces the solution’s reliability in case of attacks.
- DIY load balancers: complex and expensive, as the development would require a team, money, and infrastructure to host the solution.

**The benefits of extrnode load balancer to developers**

With extrnode load balancer, devs can work on their dApps without having to worry about sudden RPC shutdowns or bans. Solana developers using extrnode load balancer can expect uninterrupted access to RPC nodes, and their users can enjoy those dApps without any delays or errors. No longer do devs have to ask users to switch manually to other RPCs to continue enjoying their service. It is also expensive and complex to build a custom load balancer, which is why extrnode is offering the open-source load balancer to the community to use for free.

Devs can use the current version to test applications on the mainnet. However, we do not recommend using it to service production applications, since it is not production-ready yet. Going forward, the team is releasing a free public load balancer hosted on [Everstake](https://everstake.one/)'s infrastructure, as well as extrnode Premium for production use. Using the free version will require a developer to send requests to extrnode's RPC endpoint for the load balancer to reroute them to an available RPC. The Premium load balancer will accept only the most reliable and fastest validators in the RPC cluster. Some of these include 01node, Chainflow, Imperator, Chainode Tech, Stakin, Staking Facilities, and Triton One.

extrnode load balancer is only available on Solana, but we plan to launch it on other chains soon. Solana, a top blockchain, offers the perfect network for testing the product and is critical to scaling it to other blockchains. We’re also encouraged by the resilience and unwavering support the Solana community has shown despite the current market and infrastructure troubles.

**Is it safe to use extrnode open-source load balancer?**

Yes. However, it is still early days. The [extrnode open-source](https://github.com/extrnode/solana-lb) load balancer will have a fail-safe request rerouting mechanism.
We invite users to test the service and share feedback to help us improve the product. In the future, we will release an enterprise solution for customers with higher security requirements. The solution will only accept the most reliable and fastest validators. The goal is to provide complete decentralization and protection against any incidents. Also coming to extrnode are an RPC Explorer and a public Solana RPC node map. To learn more about extrnode and participate in our community events, follow us on [Twitter](https://twitter.com/extrnode) and join our [Discord](https://discord.gg/mb2xQ7kgdq).
extrnode
1,263,411
Youtube Shorts Beta is now Available for India || 🔥The biggest Short video app of Youtube
Youtube  starting off by launching an early beta of YouTube Shorts in India, this includes a...
0
2022-11-19T05:17:26
https://dev.to/testdef/youtube-shorts-beta-is-now-available-for-india-the-biggest-short-video-app-of-youtube-5cap
![YouTube Shorts beta announcement](https://lh3.googleusercontent.com/-OtdQ5WkHKfM/X2MKWAk5imI/AAAAAAAAGT4/zh5qjGB_GrMTvxn-aAY9H1u9mYtURNTBgCLcBGAsYHQ/s1600/1600326208800097-0.png)

**YouTube is starting off by launching an early beta of YouTube Shorts in India. This includes a new camera and a handful of editing tools that will be rolling out over the course of the next few weeks.** YouTube will continue to add more features over the coming months and looks forward to hearing your feedback about these features.

- **If you have access to the Shorts camera**, you’ll be able to start creating short vertical videos right in the YouTube app! To check whether you have access and start creating your first Short, hit the "+" icon (or the video camera icon on iOS) and select ‘Video’. If you see ‘Create a short video’, then you have access to the Shorts camera, which will allow you to use editing tools to do things like string **multiple video clips** together, use **speed controls** and **timers**, and **add music** to your video clips.
  - Note: for users on Android located in India, the “Create” icon has moved to the middle of the bottom navigation bar to make it easier to create videos right from the YouTube app.
testdef
1,263,445
How to convert HTML To React Component
Just Simply Copy your HTML CODE into this website https://magic.reactjs.net/htmltojsx.htm This Site...
0
2022-11-19T06:54:31
https://dev.to/farhadi/how-to-convert-html-to-react-component-1dpn
webdev, javascript, react, html
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sr5apg1pwiogjhni2yh4.jpg)

Simply copy your HTML code into this website: https://magic.reactjs.net/htmltojsx.htm

The site will generate JSX code, which you can copy and paste into your React project files.
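To illustrate the kind of rewrite such a converter performs, the best-known changes are attribute renames: HTML's `class` and `for` become JSX's `className` and `htmlFor`. The tiny function below is my own sketch of that one step, not the site's actual implementation (real converters handle much more, like inline styles and self-closing tags):

```javascript
// Minimal sketch: rename the two most common HTML attributes to their
// JSX equivalents. Demonstration only; not a full HTML-to-JSX converter.
function htmlToJsxAttributes(html) {
  return html
    .replace(/\bclass=/g, "className=")
    .replace(/\bfor=/g, "htmlFor=");
}

console.log(htmlToJsxAttributes('<label for="name" class="field">Name</label>'));
// <label htmlFor="name" className="field">Name</label>
```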
farhadi
1,263,668
A Thorough View of Strings in JavaScript
Introduction In JavaScript, textual data is stored as strings. There is no separate type...
0
2022-11-19T08:43:36
https://dev.to/indirakumar/a-thorough-view-on-strings-in-javascript-1c
javascript, webdev, programming, beginners
## Introduction

In JavaScript, textual data is stored as strings. There is no separate type for a single character. The internal format for strings is always UTF-16, no matter what encoding the page uses. Each UTF-16 code unit takes 2 bytes; most characters are one code unit, while characters outside the Basic Multilingual Plane take two. A string can be created by enclosing the value in single quotes, double quotes, or backticks.

**Example:**

```
let single = 'single-quoted';
let double = "double-quoted";
let backticks = `backticks`;
```

Here single and double quotes are the same. But backticks help us embed expressions into strings by just wrapping them in ${}:

```
alert(`1 + 2 = ${1 + 2}.`); // 1 + 2 = 3.
```

Backticks also allow you to span your text across multiple lines, which is not the case with quotes.

```
let studentsList = `Guests:
 * Indira
 * Suraj
 * Valli
`;
```

## Tagged Templates:

Backticks are a blessing in another way too. You can use them to format strings using template functions. Here is an example:

```
const person = "Kumar";
const age = 19;

function myTag(strings, personExp, ageExp) {
  const str0 = strings[0]; // "That "
  const str1 = strings[1]; // " is a "
  const str2 = strings[2]; // "."

  const ageStr = ageExp > 99 ? "centenarian" : "youngster";

  // We can even return a string built using a template literal
  return `${str0}${personExp}${str1}${ageStr}${str2}`;
}

const output = myTag`That ${person} is a ${age}.`;

console.log(output); // That Kumar is a youngster.
```

Here, myTag is the tag function that processes the strings: the literal pieces "That ", " is a ", and "." are passed as an array of strings, then the expression ${person} is passed to personExp, and ${age} is passed to ageExp.

## Escape sequences:

These are characters that you want to print, but they get skipped because they have special meanings attached to them. Here is how to do a few kinds of formatting in strings in JavaScript.
- \n - New line
- \r - Carriage return. In Windows text files a combination of two characters, \r\n, represents a line break, while on non-Windows OS it’s just \n. Most Windows software also understands \n.
- \', \" - Quotes
- \\ - Backslash
- \t - Tab
- \b, \f, \v - Backspace, Form Feed, Vertical Tab

**Example:**

```
alert( 'I\'m the Kumar!' ); // I'm the Kumar!
```

## String Length:

The length property has the string length:

```
alert( 'My\n'.length ); // 3
```

**Note:**

- \n is a single “special” character, so the length is indeed 3.
- On strings, length is just a property, not a function to be called.

## Accessing characters

In JavaScript, to get a character at position pos, use square brackets [pos] or call the method str.at(pos). The first character is at position zero.

```
let str = `Hello`;

// the first character
alert( str[0] ); // H
alert( str.at(-1) ); // o
```

Negative indices count from the end, so the last character is at index -1.

## Properties and Important Methods:

**1) Strings are immutable**

Strings can’t be changed in JavaScript. It is impossible to change a character.

```
let str = 'Hi';
str[0] = 'h'; // doesn't work (throws an error in strict mode)
```

**2) Changing the case**

The methods toLowerCase() and toUpperCase() change the case:

```
alert( 'Kumar'.toUpperCase() ); // KUMAR
alert( 'Kumar'.toLowerCase() ); // kumar
```

**3) Searching for a substring:**

It can be done using the .indexOf() method:

```
let str = 'Widget with id';

alert( str.indexOf('Widget') ); // 0, because 'Widget' is found at the beginning
alert( str.indexOf('widget') ); // -1, not found, the search is case-sensitive
```

**4) Getting a substring**

There are 3 methods in JavaScript to get a substring: substring, substr and slice.

str.slice(start, end) returns the part of the string from start to (but not including) end.
If there is no second argument, then the slice goes till the end of the string:

```
let str = "stringify";
alert( str.slice(0, 5) ); // 'strin', the substring from 0 to 5 (not including 5)
alert( str.slice(2) ); // 'ringify', from the 2nd position till the end
```

str.substring(start, end) returns the part of the string between start and end (not including end). This is almost the same as slice, but it allows start to be greater than end (in that case it simply swaps the start and end values).

```
let str = "stringify";
alert( str.substring(2, 6) ); // "ring"
alert( str.substring(6, 2) ); // "ring"
```

str.substr(start [, length]) returns the part of the string from start, with the given length.

```
let str = "stringify";
alert( str.substr(2, 4) ); // "ring"
```

## Comparing Strings:

Strings are compared character by character in alphabetical order. But there are a few odd things that happen in the JavaScript world when you compare strings. Let's see a few of them here.

- A lowercase letter is always greater than an uppercase one:

```
alert( 'a' > 'Z' ); // true
```

- Letters with diacritical marks are “out of order”:

`alert( 'Österreich' > 'New zealand' ); // true`

Here Ö has a diacritical mark, so it is greater than N, even though N comes before O in alphabetical order. This is because such characters have higher Unicode code points.

Here is one more way to compare strings. The call str.localeCompare(str2) returns an integer indicating whether str is less than, equal to, or greater than str2 according to the language rules:

- Returns a negative number if str is less than str2.
- Returns a positive number if str is greater than str2.
- Returns 0 if they are equivalent.

For example:

```
alert( 'Österreich'.localeCompare('Zealand') ); // -1
```

## Unicode in JavaScript source code:

In JavaScript, identifiers and string literals can be expressed in Unicode via a Unicode escape sequence. The general syntax is \uXXXX, where X denotes four hexadecimal digits.
For example, the letter o is denoted as ‘\u006F’ in Unicode.

**Example:**

```
var f\u006F\u006F = 'abc';
console.log(foo) // abc
```

## String Concatenation:

How can we end a string article on JavaScript without talking about the great concatenation? Here is a brief overview. The concat() method joins two or more strings. This method does not change the existing strings; it returns a new string.

**Syntax:**

string.concat(string1, string2, ..., stringX)

**Example:**

```
let text1 = "Hello";
let text2 = "world!";
let result = text1.concat(" ", text2);
console.log(result) // Hello world!
```

## References:

- [Unicode](https://flaviocopes.com/javascript-unicode/)
- [String Functions W3 Schools](https://www.w3schools.com/js/js_string_methods.asp)
- [Strings](https://javascript.info/string)
- [Template functions MDN Docs](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Template_literals#tagged_templates)

_Thank you for your valuable time!!! Happy Coding❤️_
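As a quick runnable recap of the three substring methods discussed in the article above (using console.log instead of alert; the negative-index example for slice is an addition of mine, not from the post):

```javascript
const str = "stringify";

console.log(str.slice(0, 5));     // "strin"
console.log(str.slice(-3));       // "ify"  (slice also accepts negative indices)
console.log(str.substring(2, 6)); // "ring"
console.log(str.substring(6, 2)); // "ring" (arguments are swapped, not rejected)
console.log(str.substr(2, 4));    // "ring" (substr is deprecated but still works)
```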
indirakumar
1,263,704
Working with telescope
I haven't worked with a large repository of any sort before, and I don't think my skills are great...
0
2022-11-19T11:01:11
https://dev.to/ririio/working-with-telescope-4g5g
I haven't worked with a large repository of any sort before, and I don't think my skills are great enough to actually add anything to it. What I've learned from working with repositories is that contributors tend to overlook the most basic problems because their focus is on much bigger issues. Therefore, when I worked with [telescope](https://github.com/Seneca-CDOT/telescope) I decided to look for any existing errors that might have been overlooked by everyone. I was a bit pressed for what I could do, simply because the 'sign-up' didn't seem to be working properly with my repository regardless of how much time I spent trying to fix it.

I found an error that was overlooked by everyone due to its simplicity. When clicking on 'About Us', it doesn't link to the code dependencies, but instead links to a publicized version of the doc, as seen in the image below:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vgcogicmkvflw6ykrdr3.png)

The issue was simple, so I did not struggle in terms of what I was able to do. However, problems started to appear when I tried to send a pull request. The first issue came from not following the linter convention established by the repository. This was a simple fix, because the CI sends an error showing where the problem originated. The last problem came from sending a merge instead of a rebase. This took me a bit to understand, and I finally realized that I was doing the rebase from my master branch instead of the intended issue branch that I had pushed.

Overall, working with a large repository is overwhelming, but there will always be an issue that a beginner developer such as myself is able to tackle, given the tendency of senior developers to focus only on large-scale problems.
ririio
1,264,558
Artificial "Intelligence" and Controversial Ideas about Future Technology
This article is intended for developers with some prior knowledge of web technology. A less technical...
22,566
2022-11-21T12:13:05
https://dev.to/ingosteinke/artificial-intelligence-and-controversial-ideas-about-future-technology-1chj
web3, openai, chatgpt, machinelearning
This article is intended for developers with some prior knowledge of web technology. A less technical article with a greater focus on computer generated art, cyberpunk, and augmented reality can be found in my weblog ([open-mind-culture.org](https://www.open-mind-culture.org/)). I published this as a DEV post just a few days before the hype about [chatGPT](https://openai.com/blog/chatgpt/), which I will discuss in an updated last chapter. Spoiler: I don't fear that it will make my job obsolete any time soon.

The cover image features original photography side by side with seemingly similar imagery created by a computer. Note that the artificial versions of myself are wearing a black hat that does not exist in reality, but only as a detail of the street art mural I had been standing next to.

## Discussing the Future of the Web and Technology

In the ongoing discussion about new technologies, some of which are often subsumed under the umbrella term of ["Web3"](https://en.wikipedia.org/wiki/Web3), I try to keep an open mind for legitimate and helpful use cases, as I had been fascinated by early metaverse and cyberpunk ideas in science fiction literature. I also loved the idea of an open, decentralized, interconnected global network which is the internet, or which was the internet before it got more commercialized and centralized. Its "western" zone is now dominated by an [oligopoly of companies](https://en.wikipedia.org/wiki/Big_Tech) mostly based in California, USA. This might be better than the censorship and state control in some other parts of the world, but it is still far from the original concept.

### Intelligence vs. Machine Learning

Another popular misconception, apart from "Web 3", is "AI" or "artificial intelligence", a misnomer for applications making use of machine learning.
The latter expression might make it more evident at first sight what is actually happening: much like a search engine or a diligent student, "intelligent" applications reproduce and remix existing knowledge and artwork. Consequently, they also reproduce misconceptions, stereotypes, racism, ableism and other biased aspects, often unknown and unnoticed. In the current heated discussion, I wonder why so many fellow developers keep getting overly excited by the new features, fearing to become obsolete due to new technology, or otherwise criticizing openAI for the wrong reasons, becoming easy prey for the latest fad's bootlickers.

### Getting Upset for the Wrong Reasons

Despite all of the other reasons to get upset (like climate change, war, poverty, politics, pandemics), and despite so many positive advancements on the other hand (eco-tech startups, non-profit communities, diversity, and some specifically developer-related advancements like new CSS features and coding assistance based on machine learning), I dedicate one (and probably only one) blog post to the latest hype, to add my own limited experience, and some inspirational artwork. Let's have a look back into history, when many of today's conveniences were nothing but science fiction:

### Cyberpunk Literature

![Screenshot of image search results for early cyberpunk literature](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x6edy15t1x4bm0aumcbe.png)

There are several blog posts about futurist novels that include descriptions of virtual reality and global communication, like [The Sheep Look Up](https://en.wikipedia.org/wiki/The_Sheep_Look_Up) by John Brunner and [Snow Crash](https://en.wikipedia.org/wiki/Snow_Crash) by Neal Stephenson. While some of our current technology and future research might have been inspired by literary sources, it falls short of its potential. As I [replied to A.J.
Sadauskas on mastodon.social](https://mastodon.social/@fraktalisman/109375654136114670),

> the current Web3 enthusiasts offer no efficient alternatives to the commercial and centralised Web 2.0, while the fediverse and IndieWeb movement focus on the decentral and robust principles that the internet was built upon and which made email, [usenet](https://en.wikipedia.org/wiki/Usenet) ([NNTP, described in RFC 977 in February 1986](https://www.w3.org/Protocols/rfc977/rfc977.html)), [HTTP](https://www.ietf.org/rfc/rfc2616.txt) and [HTML](https://www.ietf.org/rfc/rfc1866.txt) such a success in the first place.

Just to mention some noteworthy reads again: the [web has no version numbers](https://hidde.blog/the-web-doesnt-have-version-numbers/), and ["Web3" is going great](https://web3isgoinggreat.com) (not!)

## Is Artificial "Intelligence" dumb and biased?

[Machine learning](https://en.wikipedia.org/wiki/Machine_learning) refers to the fact that we can train algorithms using input data, not only accelerating the time it takes to develop complex applications, but also creating interfaces that generate unexpected output in a way that makes them seem sentient and intelligent. But feeding large amounts of mainstream culture's output into machines tends to reproduce undesirable prejudice and bias found in our society and our past to present culture. This phenomenon is not limited to machine learning, but when it manifests in code and machines built and documented by human teams, it might be easier to point out and adjust. Read how [Dr. Joy Buolamwini is fighting bias in algorithms](https://www.media.mit.edu/posts/how-i-m-fighting-bias-in-algorithms/) and how to make [technology serve all of us, not just the privileged few](https://www.ajl.org/).
## Digital Artwork thanks to Machine Learning

The [Open AI](https://openai.com/) art movement on the other hand has already created tons of detailed high-resolution images, either photo-realistic or looking like handcrafted paintings, often creating an ["uncanny valley"](https://en.wikipedia.org/wiki/Uncanny_valley) effect due to misplaced details and seemingly unimportant errors that no human artist would ever come up with. While many digital artists seem to favor a sinister, dystopian gamer aesthetic which has already been popular on platforms like [deviantart](https://www.deviantart.com), other artists, like my friend Andy ["Retinafunky"](https://www.instagram.com/retinafunky/) Weisner, experiment with said flaws and different aesthetics.

[![Screenshot of AI artwork by retinafunky on instagram: "Cubism city , inspired by Picasso, imagined with Midjourney AI"](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/34iuxmk3delzts0okj6l.png)](https://www.instagram.com/p/Ci-U0C4N6CP/)

### Criticizing AI Art for the wrong reasons

Some traditionalists criticize [AI Art](https://en.wikipedia.org/wiki/Artificial_intelligence_art) for the wrong reasons, denouncing it as plagiarism or not real work, as they fail to see the innovation and creative effort. Looking back in history, many [famous artists had assistants](https://www.anothermag.com/art-photography/8858/five-of-the-most-influential-artists-assistants); many of them stayed anonymous, some got famous as well. So we might conclude that you can be a brilliant artist without being the one painting every single brushstroke.

### When photography was not Art

Photography, now an accepted art form exhibited in galleries, was criticized in its early days in a discussion much like many of today's controversies about algorithmic art and AI art.
[When photography was not art](https://daily.jstor.org/when-photography-was-not-art/), "the fear has sometimes been expressed that photography would in time entirely supersede the art of painting. Some people seem to think that when the process of taking photographs in colors has been perfected and made common enough, the painter will have nothing more to do." Just like with photography, to produce a stunning work of art using modern algorithmic tools, you can either be extremely lucky, or you have to experiment and be inspired, taking your time to learn the tools and parameters and to evolve and improve your art over time. But what are my points about AI art then? I fear that with AI art, we are passing too much power to algorithms, thus losing control and losing touch with the real world: nature, people, and social, ethical, and environmental topics.

### Nothing left but random squares?

![Photo of the author in front of a poster showing the United Nations' 17 Sustainable Development Goals](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pb2w27kuiliraippzo32.jpg)

Here is an image of myself in front of a poster showing the [United Nations' 17 Sustainable Development Goals (SDGs)](https://sdgs.un.org/goals). All variations of this picture created by the public version of Open AI's [Dall·E](https://openai.com/dall-e-2/) distort the icons and text and replace them with illegible symbols or random letters. I fear that this might sum up how the current "AI" systems view our world, and it shows that they are either not that intelligent after all, or that their capabilities might prioritize aspects that take the relevant meaning out of our culture, valuing style, aesthetics, and presentation much higher than content and context. But then again, this proves the point that we still need actual artists, and that Dall·E, [Midjourney](https://www.midjourney.com), and other tools are power tools, but still not very useful without human interaction. I think that time will tell, and if practiced seriously, AI is a valuable and innovative tool for digital artists.
## Other forms of Digital Art

There is creational art, creating images, objects, or text, done by creative artists like [bleeptrack](https://www.bleeptrack.de). There is augmented reality art, where actual real-world artworks seem to come alive when you look at an enhanced painting through AR apps like [artivive](https://artivive.com). I will follow up and dig deeper into the details of the artistic aspects of digital technology in a post at [open-mind-culture.org](https://www.open-mind-culture.org). Now let's revisit an aspect of "Web3" that I am 99% critical about: NFT and the energy consumption of some of the latest trends in information technology. [Training a single AI model can emit as much carbon as five cars in their lifetimes](https://www.technologyreview.com/2019/06/06/239031/training-a-single-ai-model-can-emit-as-much-carbon-as-five-cars-in-their-lifetimes/). It seems hard to calculate the CO₂ emissions of NFT "mining", but the popular cryptocurrency [Ethereum uses about 31 terawatt-hours (TWh) of electricity a year, about as much as the whole of Nigeria](https://theconversation.com/nfts-why-digital-art-has-such-a-massive-carbon-footprint-158077), according to an estimate based on the [Ethereum Energy Consumption Index](https://digiconomist.net/ethereum-energy-consumption).

## NFT: a Good Idea Misused by Scammers?

You may have seen the various images of a "bored ape" cartoon character, often used as a profile picture by people not even creating any artwork whatsoever, trying to profit from the hype around [blockchain](https://en.wikipedia.org/wiki/Blockchain), [cryptocurrencies](https://en.wikipedia.org/wiki/Cryptocurrency) and [non-fungible tokens (NFT)](https://en.wikipedia.org/wiki/Non-fungible_token), investing a lot of money in hope of a return on investment.
![Bored ape images and a news headline "Someone buys a Bored Ape, gets scammed out of it two hours later"](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/29zth1nuldczdfszu6i7.png)

### Wasting energy to deceive people with NFT and the Metaverse

NFT and cryptocurrencies offer people a way to participate in the profitable art market and other investments without having a bank account or a credit card, and without being recognized by established gallery owners. NFT also created a large black market to scam aspiring artists, developers, and other hopeful individuals. Mining cryptocurrency using energy-consuming calculations in a blockchain is a waste of energy that could better be used for other purposes, even more so in the face of human-made climate change already starting to destroy our planet. "Recreating" extinct species and places in a virtual environment, like [Tuvalu "uploading itself into the metaverse"](https://www.iflscience.com/an-entire-pacific-country-will-upload-itself-to-the-metaverse-it-s-a-desperate-plan-with-a-hidden-message-66250), only accessible by using visual technology (headsets, cameras, displays), does not help either, and the current pitiable state of the so-called metaverse makes it look even more ridiculous. It reminds me of the final scene of the dystopian science fiction film [Soylent Green](https://www.imdb.com/title/tt0070723/).

### A Second life as boring as the first one

A virtual reality does have its benefits, but unlike a real environment, it does not provide sunlight, fresh air, or delicious food. And have you ever tried to dance or swim in a virtual environment? You might remember [Second Life](https://en.wikipedia.org/wiki/Second_Life): most people seemed to enjoy creating a second life as boring as their first one. The same goes for #chatGPT and visual image generation: many people seem to get excited about output that looks impressive at first sight.
But just as the images contain uncanny-valley artifacts, take a good look at generated code before you rely on it, unless all you plan to do is submit a solution to a coding kata.

## Summary

Web3 misses the point of its alleged goals, like decentralization, equity, and helping to create a better, more diverse and creative world through digitization. Web3 enthusiasts waste energy, compromise the security and integrity of data and money, and give up freedom and agency, allowing themselves to be manipulated by algorithms and greedy companies.

### Copilot, tabnine, and ChatGPT as tools for developers

We can use artificial "intelligence" as a tool. As a tool for artists, as a tool for copywriters, and as a tool for coding, too. I admit that I have been using @tabnine, GitHub Copilot, JetBrains context actions, and static code analysis like ESLint, stylelint, PHPStan, and code sniffer. I also use [Grammarly](https://www.grammarly.com/) to improve my writing, especially when posting in English, which is not my native language. I have been using all of those tools at the same time; they have saved me some debugging detours, some keystrokes, and some StackOverflow searches for generating boilerplate code and generic documentation. And I will also evaluate how ChatGPT might come in handy. But I don't fear that any of those tools might seriously put my job as a senior developer in danger. There are hopeful digital innovations that might help us build a better tomorrow despite it all, so let's take our time and find out how!
ingosteinke
1,264,767
CSS3 Code Generators
CSS3 Code Generators - That can speed up your web design development workflow. As a web developer or...
0
2022-11-20T17:36:47
https://dev.to/farhanacsebd/css3-code-generators-cf5
**CSS3 Code Generators** - tools that can speed up your web design and development workflow. As a web developer or web designer, you may sometimes struggle to get the proper CSS code for your design. For example, if you want to add a drop shadow to an HTML element and you don't know the exact values for the box-shadow CSS property, you may have to tweak those values many times to get the correct shadow effect. That can cost a lot of time and be very painful, right? As a solution, check out some excellent CSS code generator tools that you can use right now to speed up your web design projects.

- Number 01 => [Box Shadow CSS Generator](https://cssgenerator.org/box-shadow-css-generator.html)
- Number 02 => [Glassmorphism Generator](https://hype4.academy/tools/glassmorphism-generator)
- Number 03 => [CSS Underline Generator](https://cssbud.com/css-generator/css-underline-generator/)
- Number 04 => [Neumorphism.io](https://neumorphism.io/#e0e0e0)
- Number 05 => [Animated CSS Background Generator](https://wweb.dev/resources/animated-css-background-generator/)
- Number 06 => [zcreativelabs](https://zcreativelabs.com/)
- Number 07 => [Gradient Buttons](https://gradientbuttons.colorion.co/)
- Number 08 => [ColorSpace](https://mycolor.space/)
- Number 09 => [Blobmaker by z creative labs](https://www.blobmaker.app/)
- Number 10 => [WAIT! Animate](https://waitanimate.wstone.uk/)
- Number 11 => [CSS Grid Generator](https://cssgrid-generator.netlify.app/)
- Number 12 => [Flexy Boxes](https://the-echoplex.net/flexyboxes/?fixed-height=on&display=flex&flex-direction=column&flex-wrap=nowrap&justify-content=center&align-items=flex-end&align-content=space-between&order%5B%5D=0&flex-grow%5B%5D=0&flex-shrink%5B%5D=1&flex-basis%5B%5D=auto&align-self%5B%5D=auto&order%5B%5D=0&flex-grow%5B%5D=0&flex-shrink%5B%5D=1&flex-basis%5B%5D=auto&align-self%5B%5D=auto&order%5B%5D=0&flex-grow%5B%5D=0&flex-shrink%5B%5D=1&flex-basis%5B%5D=auto&align-self%5B%5D=auto)
- Number 13 =>
[FANCY-BORDER-RADIUS](https://9elements.github.io/fancy-border-radius/)
farhanacsebd
1,265,473
Browser Storage Hook React
import this -&gt; import { useLocalStorage,useSessionStorage } from...
0
2022-11-21T10:22:04
https://dev.to/mdwahiduzzamanemon/browser-storage-hook-react-12l1
npm, react, hook, package
> Import: `import { useLocalStorage, useSessionStorage } from 'browser_storage_hook_react';`
>
> Usage: `const [value, setValue] = useLocalStorage('key', 'defaultValue');` and `const [value, setValue] = useSessionStorage('key', 'defaultValue');`

{% embed https://www.npmjs.com/package/browser_storage_hook_react?activeTab=readme %}
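These hooks follow the common storage-backed-state pattern. As a rough, framework-free sketch of what such a hook does under the hood (this is an illustration, not the package's actual source; the `storage` parameter is injected here only so the logic can run outside a browser):

```javascript
// Sketch of the storage-backed-state pattern behind hooks like useLocalStorage:
// read the initial value from storage (falling back to a default), and
// persist every update as JSON. A React hook would wrap this in useState
// so the component re-renders on set().
function createStoredValue(storage, key, defaultValue) {
  let value;
  try {
    const raw = storage.getItem(key);
    value = raw !== null ? JSON.parse(raw) : defaultValue;
  } catch {
    // corrupted or unreadable entry: fall back to the default
    value = defaultValue;
  }
  return {
    get: () => value,
    set: (next) => {
      value = next;
      storage.setItem(key, JSON.stringify(next));
    },
  };
}
```

In the browser you would pass `window.localStorage` or `window.sessionStorage`; the published hooks additionally trigger a re-render when the value changes.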
mdwahiduzzamanemon
1,265,584
Difference Between Manual And Automated Testing Procedures For User Acceptance Testing
Automated testing serves as a reliable process that can help in the identification of any kind of...
0
2022-11-21T11:13:02
https://remarkmart.com/difference-between-manual-and-automated-testing/
manual, testing, automated
Automated testing serves as a reliable process that can help identify defects in the applications a business organization develops. Dedicated tools can test every aspect of an application and help resolve problems before the application is delivered to the final users. User Acceptance Testing (UAT) is a dedicated testing method employed to test an application and detect any defects in it. There are various benefits to automated testing procedures and dedicated tools:

- Bug-free application development: Automated user acceptance testing tools can help deliver quality applications that are free from defects or bugs.
- Better test coverage: Automated tools make use of artificial intelligence technology that does not require manual intervention and can provide better test coverage. Every aspect of the application can be analysed and tested to identify defects.
- Better reporting capabilities: The platform can even generate automatic reports covering every aspect. Regression analysis and information about the defects found in the application can easily be compiled and made available to the various users and testers within an organization.

Manual testing procedures, in comparison to automated testing, are more cumbersome and time-consuming. They involve dedicated code-based tools, require analysing every aspect of the application by hand, and rely on anticipating outcomes based on that analysis. Moreover, they require manual intervention from dedicated testing teams who analyse the results and identify any bugs. Automated testing procedures, on the other hand, remove the requirement for manual testing.
They make use of artificial intelligence-based solutions that can carry out repetitive tests without any manual work. Every aspect is analysed and the problems in it are identified. Moreover, all the necessary information is made available to the testing team members, who can use it for resolving present problems as well as for future application development. The reports can be used to identify defects and make the necessary changes to the application without any hassle. Automated testing tools make use of predefined testing scripts. These are built into the platform and can be used to carry out user acceptance testing. Moreover, the platform is capable of all-round application testing, enabling testers to cover a large area. Every aspect of the application is identified and tested, and the problems found are resolved. Such tools are the best that business organizations can opt for: they can help with end-to-end user acceptance testing, understand various configurations, resolve problems, and generate reports at the same time.
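To make the contrast concrete, here is a minimal, tool-agnostic sketch (plain JavaScript; the `signup` function is a made-up example, not tied to any specific UAT platform) of what an automated acceptance check boils down to: scripted assertions that replace a manual verification step and can be re-run after every change:

```javascript
// A toy system under test: the behavior a user would otherwise verify by hand.
function signup(username) {
  if (typeof username !== 'string' || username.length < 3) {
    return { ok: false, error: 'username must be at least 3 characters' };
  }
  return { ok: true, user: { name: username } };
}

// The automated acceptance checks: each entry pairs a readable requirement
// with a pass/fail result, so a report can be generated from the output.
function runAcceptanceChecks() {
  return [
    ['rejects a too-short username', signup('ab').ok === false],
    ['accepts a valid username', signup('alice').ok === true],
    ['reports an error message on failure', typeof signup('').error === 'string'],
  ];
}
```

Running such checks in a pipeline on every build is the "repetitive tests without manual work" described above; dedicated UAT platforms add recording, scheduling, and reporting on top of the same idea.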
rohitbhandari102
1,265,590
GTK calculator on Rust
(Almost done) Made my first application with user interface and built it on Linux. Have to improve...
0
2022-11-21T11:26:14
https://dev.to/antonov_mike/gtk-calculator-on-rust-4cc7
rust, gtk, algorithms
(Almost done) Made my first [application with user interface](https://github.com/antonovmike/calculator_gtk) and built it on Linux. I still have to improve a few issues, and I also need to find out how to build it for Windows and Mac. Some issues I have to solve to consider the project complete:

- Listen for keyboard events (the program has to accept keyboard input even if the cursor is not in the text entry field)
- Scrollable entry screen with saved data of previous operations
- Set rounding precision (what is the best way to round calculation results?)
- Negative numbers issue (right now it only works if you remove the space between the negative sign and the number manually)

I rewrote the part of the code in which the buttons are created. Now all the numeric buttons are created inside the for loop and placed in the correct places.

```Rust
// NUM BUTTONS: create the digit buttons 0-9 in a loop and place them on the grid
for iterator in 0..=9 {
    let button = gtk::Button::with_label(&iterator.to_string());
    // Map each digit to its grid position; 0 sits alone in the bottom row
    let column = match iterator {
        0 => 1,
        _ => (iterator - 1) % 3,
    };
    let row = match iterator {
        0 => 4,
        _ => (iterator - 1) / 3 + 1,
    };
    button.connect_clicked(clone!( @strong entry => move |_| {
        entry.insert_text(&iterator.to_string(), &mut -1);
    }));
    grid.attach(&button, column, row, 1, 1);
}
```
antonov_mike
1,265,594
Don't make these mistakes in your CV
Ah yes, another year is slowly coming to an end. This means that project plans are nearing...
0
2022-11-21T11:33:21
https://dev.to/topjer/reading-cvs-is-suffering-3ijh
watercooler, tutorial, career
Ah yes, another year is slowly coming to an end. This means that project plans are nearing fruition/failure. Maybe people start to feel the call for change, thinking to themselves that 2023 will be the year when they leave the hell hole they wander daily. Or maybe their company is bought by an eccentric billionaire who decides to lay off half of his software engineering team based on the [lines of code written in the last year](https://twitter.com/parismarx/status/1587966900516147200) - sick metric by the way! Whatever the reason might be, we get a ton of applications at my company. Since I am one of the most senior members of my team, I get the honor of screening all those CVs together with my boss. A process that turned out to be surprisingly painful. So I thought I'd share some of that with you. Mostly to entertain. But who knows? Maybe there will even be an insight in here.

## The Context

Allow me to give you some context. My company is based in Germany, so some things I complain about might not be relevant in other countries like the US. Also, the position we are trying to fill is that of a Senior Python Developer. So, we wanted to quickly identify people who can show relevant experience and invite them to an interview. As it turns out, this is not that easy.

## Brevity Is The Soul of Wit

Let me start by saying this: I don't like reading applications! It keeps me from doing my actual work. There is clearly a need for it. Our workload is increasing and we can always use some fresh meat. Yet, I get mildly irritated when I have to read a CV for more than 5 minutes. Again, all I need is the faint impression that you have done some Python in the last 3 years. *BAM* You are in the next round!

> A CV is not there to get you hired. It is there to get you invited to an interview.

Yet, what I was getting were applications that meticulously described the projects their authors had been involved in. Pages over pages.
In the worst cases it was not even apparent what their part had been in all of that. Some even topped that: as if there was not already enough to read, they also added an evaluation from their current employer. Two pages of prose about what a great employee they are, written in font size 8. At least this is what I suppose it said, because I did not bother reading it.

> You want to seem professional? Then respect the time of whoever is going to read your application.

## The Other End of the Spectrum

There were also CVs that were drastically different. One guy, fresh from university, applied to the position and sent a one-page CV that only contained the information that he had just recently finished his Bachelor's thesis. The infuriating thing here is not that he applied to a senior position with hardly any work experience. This could have been due to the fact that our recruiter was actually the first to approach him. No, it was infuriating because there was nothing to read from this CV. "If the CVs are long it is bad, but if they are short it is just as bad. Seems like this guy is never happy" might be what you are now thinking. So allow me to elaborate on what I was looking for in a CV.

## Your Past Experiences

If you want to apply for a developer position using language X, then make it clear why you are the right one for it based on your past experiences. Highlight the problems you solved with X, which libraries you have used, etc. Also, be more detailed about recent jobs and become more coarse with jobs that are further in the past. It is not really convincing when the last relevant job experience was 15 years ago. This is applicable even if you are fresh from university. You might not have any work experience, but there hopefully are some coding projects you can name where you have used X. Now that we have established what I would like to see, let's continue with some things I did not like to see.
## Tell Me Something About You

Here in Germany it is common practice to end your CV with some words about yourself. By that I literally mean a list of a couple of activities people do in their free time. I assume the idea here is to give an impression of the person behind the CV. Yet it seems rather futile to compress something as complex as a human's nature into five nouns.

> Well, this CV does not contain any relevant work experience, but the fact that she likes to perform traditional Polish dance is intriguing. We should definitely give her a chance.

And please, for God's sake: if you are thinking about adding 'Watching Netflix' to it ... Don't! Really, I mean it. Which scenario are you imagining where this would make you more attractive as an employee?

*the screen gets blurry and the next thing we see is the head of a software engineering team lost in deep contemplation*

"I'll be damned. Before me sits the best team of software engineers the world has ever seen. Each one of them has their own area of expertise which they complement with a broad overview of other topics. They are living and breathing software development. Each waking hour is spent either writing, reading or thinking about code. Heck, some of them even learned lucid dreaming so they can keep solving engineering problems while they dream. But not a single one of them can give me a recommendation on what to watch on Netflix. I am getting sick and tired of rewatching The IT Crowd."

He is torn from his thoughts by the sound of trumpets echoing from above. Confused as to what could be the origin of these otherworldly melodies, he approaches the window. What he sees is that the clouds are parting. His eyes squint as he adjusts to the heavenly glow. Finally he can make out a shape. It is the hand of the creator of all things reaching out for him. To an outside observer this scene would be reminiscent of "The Creation of Adam". But unlike in the painting, the hand is holding something. Something divine.
Something to end his monotonous binges. It is your CV.

*Snap, back to reality.*

If this is the scenario you are hoping for, then good luck with that.

## I Really Want to Work at {insert name here}

Not sure whether this is a thing outside of Germany, but here we usually get a letter alongside the CV that allows the applicant to give a more detailed insight into their reasoning for the application. (This is changing somewhat, yet oftentimes we still get them.) Most letters I have seen share two qualities. On the one hand, they are a joy to read. They eloquently explain the applicant's motivation: why your company is the ideal workplace and how thrilled they would be to get a chance to work alongside you. Sometimes these letters are so poignant that I am filled with a deep gratitude, because I am actually working for this wonderful company. If only it weren't for the second quality: being written so generically that they could fit any company out there. I do understand that it is unrealistic to write a unique letter for each company you are applying to. Still, if you decide to add a cover letter, then make at least one paragraph unique to the company. Refer to something found in the job description, for example. Or, even better - although it will bring a tear to the eyes of all those cover letter ghost writers out there - do not bother with it at all.

## The Worst Is Yet to Come

You know what I really despise about reading CVs? It is when people judge their skill set themselves. There even seem to be websites out there that create these overviews for you: just choose the technologies you are familiar with and then indicate your skill level on a scale from 0 to whatever. Aside from the fact that they are sometimes so horribly formatted (font and color scheme) that I dread just looking at them, they are also absolute BS. There are probably enough studies out there indicating that we are not capable of objectively judging our skills.
We have a clear tendency towards overestimating our own proficiency. Thus these overviews are nothing but a groundless claim, worthless without backup. If, on the other hand, I see that someone has been working as a Python developer for the past five years, explicitly naming - some of - the projects they have been working on together with the libraries and technologies used, that becomes a much more reliable basis for judging their skill. Bottom line: ditch these overviews. They are just noise (and make my eyes bleed).

## Conclusion

In case you had as little patience with this article as I had with some CVs and skipped right to the end, I want to repeat the main point.

> Past experience is key. List all your past experiences that are relevant for the position you want to apply to. Be succinct and precise and cut out all the noise.

The interviews are the place where you can shine with your unique personality (and your Netflix recommendations). Let me know in the comment section how you approach the topic of a CV. Or maybe you are in a similar position and have read some CVs in the past? Then feel free to share your opinion as well.
topjer
1,289,229
How and Why LimaCharlie Secures Google Chrome and ChromeOS
Chrome is the world’s most popular web browser—and ChromeOS is becoming more prevalent due to the use...
0
2022-12-15T22:11:04
https://www.limacharlie.io/blog/endpoint-detection-and-response-on-chrome
security
---
title: How and Why LimaCharlie Secures Google Chrome and ChromeOS
published: true
date: 2022-12-08 00:00:00 UTC
tags: security
canonical_url: https://www.limacharlie.io/blog/endpoint-detection-and-response-on-chrome
---

Chrome is the world’s most popular web browser—and ChromeOS is becoming more prevalent due to the use of Chromebooks in education and other sectors. In this blog post, we’re going to talk about what this means for security teams, and how LimaCharlie can be used to secure Chrome and ChromeOS.

## Why Google Chrome and ChromeOS matter for security

Chrome/ChromeOS security is definitely a niche topic, but there are enough special considerations here to warrant discussion. Here’s why it’s necessary to think about how to secure Chrome and ChromeOS—and why it’s essential for cybersecurity teams to have the tools to do so:

**The ubiquity of Chrome**: The Chrome browser is pretty much everywhere; it has around 65% of [<u>global browser market share</u>](https://gs.statcounter.com/browser-market-share). There is therefore a need for visibility into Chrome activity on nearly every system and network—especially in an age of [<u>BYOD and hybrid/remote teams</u>](https://limacharlie.io/blog/cybersecurity-for-distributed-teams).

**The Chromebook userbase:** ChromeOS is still only a small fraction of the organizational OS market when compared to Windows or even macOS. But Chromebooks are becoming more common, especially because of the demand for lightweight, inexpensive, easy-to-provision laptops. These features have made Chromebooks popular in [<u>education</u>](https://agpartseducation.com/why-do-schools-use-chromebooks/), and to a lesser extent in [<u>nonprofits</u>](https://www.ancoris.com/blog/why-non-profits-moving-google-chrome-devices) and [<u>governments</u>](https://mocoshow.com/blog/montgomery-county-is-providing-40000-chromebooks-to-residents-who-do-not-have-computers-over-15000-distributed-so-far/).
But this means that ChromeOS is likely to be found in settings with limited cybersecurity and IT resources—and in the case of schools especially, with end users who may not be as well-versed in avoiding cyber threats as your average corporate computer user. **The problem with popularity:** ChromeOS is generally [<u>very secure</u>](https://chromeenterprise.google/os/security). This is a big part of its appeal. But obviously no computing platform is perfect, and there have certainly been [<u>vulnerabilities discovered in the past</u>](https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=ChromeOS). As Chromebooks see more widespread adoption, we can expect to see bad actors spending more time and energy on finding ways to attack the platform. This is something that has been [<u>observed with macOS</u>](https://www.bankinfosecurity.com/ios-mac-malware-grows-increased-enterprise-use-a-18792) as Macs have become more prevalent in the enterprise, and similarly with [<u>CI/CD pipeline threats</u>](https://limacharlie.io/blog/cicd-pipeline-attacks) as more companies embrace DevOps approaches. In short, forward-looking security teams would be wise to start thinking about ChromeOS security sooner rather than later. **Coverage isn’t always optional** : For compliance and regulatory reasons alone, many security teams need visibility into what’s happening on all endpoints—no matter how secure they are in theory. They may also be required to [<u>store telemetry data</u>](https://limacharlie.io/blog/telemetry-storage-matters-for-cybersecurity) for every endpoint. LimaCharlie offers a simple, cost-effective way to do this for ChromeOS and, as with our other sensor types, includes one year of full telemetry storage for free. **Customer demand** : MSSPs serving existing clients with, e.g., a Windows fleet, may suddenly find themselves asked if they can cover Chromebooks as well. 
It’s important to have a tool in place that can meet customer demand for ChromeOS coverage if needed—and ideally one that integrates easily and cost-effectively with your existing security stack. For [<u>MSSPs using LimaCharlie</u>](https://www.youtube.com/playlist?list=PLO8_Yc4h5cIouf3f8192rA3UqV1xvCqr2), the Chrome sensor provides this. (It also offers a ready-made opportunity to market managed security services to the education sector.) ## LimaCharlie Chrome sensor capabilities The LimaCharlie Chrome sensor can run in the Chrome browser or as an endpoint detection and response (EDR) sensor on Chromebooks. The sensor offers security teams the following capabilities: - Visibility into network activity without the need for SSL interception - Details of DNS activity - Information about HTTP activity (including Request/Response headers) - The ability to view all installed extensions on the system - The ability to write detection and response (D&R) rules against sensor telemetry - A way to disconnect endpoints from the network if suspicious activity is found—either manually or via automation For a full list of supported sensor events and commands, see the [<u>LimaCharlie Chrome sensor documentation</u>](https://doc.limacharlie.io/docs/documentation/8c46453c66d3a-chrome-sensor). ## LimaCharlie Chrome sensor setup and pricing Setting up the LimaCharlie Chrome sensor follows the same basic pattern as [<u>the one used for other LimaCharlie EDR sensors</u>](https://www.youtube.com/watch?v=TDHyi-nhij8), the main difference being that the sensor is [<u>obtained from the Chrome web store</u>](https://chrome.google.com/webstore/detail/limacharlie-sensor/ljdgkaegafdgakkjekimaehhneieecki). For full information on setup, see our documentation on [<u>deploying the LimaCharlie Chrome sensor</u>](https://doc.limacharlie.io/docs/documentation/acae13e5a95fa-deploying-sensors#chrome). 
If you’d like a more visual explanation, one of our recent webinars also features a [<u>walkthrough of the setup process</u>](https://www.youtube.com/watch?v=W21_CrhdVcw&t=941s). Our pricing is designed to be as transparent and predictable as possible. Chrome sensors are priced at $.25/sensor per month. For some users, a [<u>sleeper deployment</u>](https://help.limacharlie.io/en/articles/6042143-how-does-sleeper-deployment-work-with-limacharlie) of ChromeOS sensors may also be an attractive option. Interested in learning more? [<u>Book a demo</u>](https://calendly.com/demo-lc/20-min-call?month=2022-07) or [<u>try LimaCharlie for free</u>](https://app.limacharlie.io/signup) today.
charltonlc
1,265,657
Formatting dates and times without a library in JavaScript
Dates and times can be tricky in JavaScript. You always want to be sure you are doing it correctly....
0
2022-11-21T13:41:27
https://dev.to/daryllukas/formatting-dates-and-times-without-a-library-in-javascript-18he
javascript, webdev, beginners, tutorial
Dates and times can be tricky in JavaScript. You always want to be sure you are doing it correctly. Luckily, because JavaScript has a built-in Date object, it makes working with dates and times much easier. In fact, there are many different methods on the Date object that can be used to do different things with dates. In this blog, I'll run through a few of these methods. ## A quick note on the `Date` object The built-in `Date` object in JavaScript offers us a set of APIs for working with date and time data. A `Date` object instance represents a single point in time, in a platform-independent format. ## Creating an instance ### Current Date / Time There are multiple ways you can get an instance (or representation) of the current date and time. #### Using the constructor ```jsx let now; now = new Date(); console.log(now); // [object Date] { ... } ``` #### Using the `Date()` function This returns a string representation of the current date and time. ```jsx now = Date(); console.log(now); // "Mon Nov 21 2022 10:39:45 GMT+0200 (South Africa Standard Time)" ``` #### Using a static method, `.now()` This returns a `number` that represents the milliseconds elapsed since 1 January 1970 UTC. ```jsx now = Date.now(); console.log(now); // 1669019985182 ``` ### Other ways to create an instance #### Parsing strings With the `new Date()` constructor (or `Date.parse()`). Strings must be compliant with the ISO 8601 format, `YYYY-MM-DDTHH:mm:ss.sssZ`. ```jsx let nextBirthday = new Date('2023-08-02T00:00:00'); ``` The example below is discouraged and may not work in all runtimes/browsers. A library like `moment` or `dayjs` can help if other string formats are needed. 
```jsx let nextBirthday = new Date('August 02, 2023 00:00:00'); // DISCOURAGED: may not work in all runtimes ``` #### Passing integers in the constructor The syntax is `new Date(year, month, day, hours, minutes, seconds, milliseconds);` The `month` is 0-indexed, meaning `0` is January and `11` is December. ```jsx nextBirthday = new Date(2023, 7, 2, 0, 0, 0); console.log(nextBirthday.toString()); // "Wed Aug 02 2023 00:00:00 GMT+0200 (South Africa Standard Time)" ``` #### Epoch (Unix) Timestamp An epoch timestamp is the number of seconds elapsed since January 1, 1970 (midnight UTC/GMT). Note that the `Date` constructor expects milliseconds: `new Date(milliseconds)` ```jsx nextBirthday = new Date(1690927200000); ``` Get the epoch in seconds: ```jsx const seconds = Math.floor(Date.now() / 1000); ``` ## Date methods We can also manage and manipulate dates and times using the methods provided by the `Date` object. ### Accessing Date components Get the year (4 digits, e.g., 2022): ```jsx nextBirthday.getFullYear() // 2023 ``` Get the month. The month is zero-indexed (i.e., 0 - 11). ```jsx nextBirthday.getMonth() // 7 ``` Get the day of the month, from 1 to 31. ```jsx nextBirthday.getDate() // 2 ``` Get the day of the week. This is also zero-indexed (i.e., from 0 to 6). The first day is always Sunday (`0`) and the last day is Saturday (`6`), regardless of locale. ```jsx nextBirthday.getDay(); // 3 ``` The methods `getHours()`, `getMinutes()`, `getSeconds()`, and `getMilliseconds()` get the corresponding time components. All the methods above return components relative to your local time zone. A UTC counterpart is available for each method, returning values relative to UTC. Simply insert `UTC` right after `get` in the method name. 
```jsx // Local time is UTC +2 nextBirthday.getHours(); // 0 nextBirthday.getUTCHours(); // 22 ``` Destructuring assignment example: ```jsx const date = new Date(); const [month, day, year] = [date.getMonth(), date.getDate(), date.getFullYear()]; const [hour, minutes, seconds] = [date.getHours(), date.getMinutes(), date.getSeconds()]; ``` ### Setting Date Components The following methods allow you to set date/time components: - `setFullYear(year, [month], [date])` - `setMonth(month, [date])` - `setDate(date)` - `setHours(hour, [min], [sec], [ms])` - `setMinutes(min, [sec], [ms])` - `setSeconds(sec, [ms])` - `setMilliseconds(ms)` - `setTime(milliseconds)` (sets the whole date by milliseconds since 01.01.1970 UTC) ```jsx let now = new Date(); now.setHours(12); now.setFullYear(2015); console.log(now.toString()); // "Sat Nov 21 2015 12:08:53 GMT+0200 (South Africa Standard Time)" ``` Every one of them except `setTime()` has a UTC variant: ```jsx let now = new Date(); now.setUTCHours(2); ``` Some methods can set multiple components at once. For example, `setHours()` gives you the option to set `minutes`, `seconds` and `milliseconds` as well. Components omitted from the call are left unchanged. ```jsx let now = new Date(); now.setHours(12); console.log(now.toString()); // "Mon Nov 21 2022 12:13:36 GMT+0200 (South Africa Standard Time)" now.setHours(12, 12, 12, 12); console.log(now.toString()); // "Mon Nov 21 2022 12:12:12 GMT+0200 (South Africa Standard Time)" ``` ## Formatting Variations of the `toString()` method return various string representations of a `Date` instance. 
```jsx let now = new Date(); now.toString(); // "Mon Nov 21 2022 14:17:21 GMT+0200 (South Africa Standard Time)" now.toDateString(); // "Mon Nov 21 2022" now.toTimeString(); // "14:17:21 GMT+0200 (South Africa Standard Time)" now.toISOString(); // "2022-11-21T12:17:21.595Z" now.toUTCString(); // "Mon, 21 Nov 2022 12:17:21 GMT" now.toLocaleString(); // "11/21/2022, 2:17:21 PM" now.toLocaleDateString(); // "11/21/2022" now.toLocaleTimeString(); // "2:17:21 PM" now.toJSON(); // "2022-11-21T12:17:21.595Z" ``` ### The `Intl.DateTimeFormat` object On modern browsers and runtimes, JavaScript includes an Internationalization API that allows for language-sensitive date and time formatting. You can start using it right away without having to import a massive date formatting library. ```jsx const now = new Date(); const formatted1 = Intl.DateTimeFormat('en-ZM').format(now); const formatted2 = Intl.DateTimeFormat('en-ZM', { dateStyle: "full" }).format(now); const formatted3 = Intl.DateTimeFormat('en-ZM', { dateStyle: "long" }).format(now); const formatted4 = Intl.DateTimeFormat('en-ZM', { dateStyle: "medium" }).format(now); const formatted5 = Intl.DateTimeFormat('en-ZM', { dateStyle: "short" }).format(now); console.log(now); // [object Date] ... console.log(formatted1); // "21/11/2022" console.log(formatted2); // "Monday, November 21, 2022" console.log(formatted3); // "21 November 2022" console.log(formatted4); // "21 Nov 2022" console.log(formatted5); // "21/11/2022" ``` ## Conclusion In today's blog post, we learned how to format date/time data in JavaScript. There are many libraries that handle this for us already, but if you're working in a legacy codebase or have performance reasons for wanting to avoid a library, you may have to handle date/time formatting natively. Learn more about the Date object on [MDN references docs](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date)
daryllukas
1,266,298
Function
- In the image below: Some of the functions we can build with Dart. Creating functions using...
0
2022-11-21T21:39:23
https://dev.to/ramondeveloper/function-4a9m
dart
## - In the image below: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o34xv8gc61wy0bwb81cb.png) Some of the functions we can build with Dart: functions using logical operators to calculate specific numbers, generate random numbers, and join numbers or names together. In `main`, we use the Controller to specify, calculate, etc... Result: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qy5ajho7gouhswyzb9tq.png)
ramondeveloper
1,266,333
Forever Functional: Higher-order functions with TypeScript
by Federico Kereki In a previous article in this series, I discussed several types of higher-order...
22,768
2022-11-22T00:16:09
https://blog.openreplay.com/forever-functional-higher-order-functions-with-typescript/
typescript, functional, programming
by [Federico Kereki](https://blog.openreplay.com/forever-functional-higher-order-functions-with-typescript/) In a [previous article in this series](https://blog.openreplay.com/forever-functional-higher-order-functions-functions-to-rule-functions/), I discussed several types of higher-order functions (HOF) whose arguments and results may be functions themselves. All the examples there were done in vanilla JavaScript, but nowadays it's more common to work with TypeScript, so we need to add appropriate data typing to our code. However, this is anything but trivial! In this article, I'll review some examples from my previous article, focusing instead on the necessary data types and additional requirements that we'll also have to consider. And, as a bonus, we'll consider an extra method with several typing quirks of its own! ## Wrapping functions This type of HOF takes a function as an argument and returns a new one with the same functionality but some added features like logging or timing. Let's work with the former because solutions for both cases are practically identical. Suppose we have a function and want to automatically add some logging, so we'll see what arguments it got and what value it returned. In plain JavaScript, we'd write something like this. ```javascript function addLogging(fn) { return (...args) => { console.log(`entering ${fn.name}(${args})`); try { const valueToReturn = fn(...args); console.log(`exiting ${fn.name}=>${valueToReturn}`); return valueToReturn; } catch (thrownError) { console.log(`exiting ${fn.name}=>threw ${thrownError}`); throw thrownError; } }; } ``` Upon entering, we log whatever arguments we get. Then, we try to call the original function, store the returned value, log it, and return it. If the function throws an exception, we log that and throw the error up for processing. How do we write this in TypeScript? 
```javascript function addLogging<T extends (...args: any[]) => any>( fn: T ): (...args: Parameters<T>) => ReturnType<T> { return (...args: Parameters<T>): ReturnType<T> => { . . everything as above . }; } ``` Our `addLogging()` HOF will need a generic type definition because it can deal with all kinds of functions with different data types involved. The first line defines a generic type, `T`, that represents a function with an unknown number of arguments, of any type, and returns a value, also of any type. In the second line, we specify that the input function `fn` is of type `T`. So far, so good! How do we specify the format of the returned function? The key here is that the output type must fully match the input function. We cannot just say that the returned function will have any arguments; we have to say that they match the types of the parameters of `T`, which is written as `Parameters<T>` using a [utility type](https://www.typescriptlang.org/docs/handbook/utility-types.html) in TypeScript. Similarly, we cannot just say that the output function's result is of any type. We want to say it will be the same type as the result of our function of type T. For this, TypeScript provides another utility type, `ReturnType<T>`. Now we can understand more clearly the first four lines: we defined a generic type of function `T`, we said that the input function was of that type, and we said that our HOF would return a new function whose parameters and result would have the same types as those of `T`; neat! Let's now consider a different kind of problem, where the output function need not match the typing for the input one. ## Altering functions These HOFs take a function as input and produce a new function with changed functionality. In the previous article, we considered examples such as negating conditions (to simplify filtering operations) or changing the number of parameters (arity) of functions. 
Let's review the first one for our analysis, which will provide a reasonably simple example of the needed data typing. Suppose we have a function that tests for a condition; with it, we can write code such as `someData.filter(testCondition)`, and the resulting array will only include the array elements that satisfy the test. But what would we do if we wanted the elements that did *not* satisfy the condition? If we had a `not()` HOF to negate a predicate (that is, to produce the opposite result), we could just write `someData.filter(not(testCondition))`, and it would be clear enough. A simple implementation of `not()` could be the following -- and let's go with an arrow function for variety, so we'll see how TypeScript works with those. ```javascript const not = (fn) => (...args) => !fn(...args); ``` Given a function, we create a new function that returns the opposite of whatever the original function would have returned. How do we write data types for this? ```javascript const not = <T extends (...args: any[]) => boolean>(fn: T) => (...args: Parameters<T>): boolean => !fn(...args); ``` We'll write data types similar to what we had in the previous section. The second line shows how a generic function is specified; the generic type T goes before the definition of the function, after the assignment operator. In this case, since we want to apply `not()` to boolean functions, we will be clear and say that's the expected type of the input `fn` function, also in the second line. What will we be returning? We'll produce a new function whose parameters must match those of `fn` (in the same way as with `addLogging()`), and the result type will be `boolean`. If you prefer, write `ReturnType<T>` instead; whatever you think is clearer. The experience gained with the `addLogging()` HOF was helpful with `not()`; let's see the third example, with an extra complication. 
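To sanity-check the typing, here is a small usage sketch; the `isEven` predicate is a hypothetical example, not from the original article:

```typescript
// Same not() as above, repeated so this snippet stands alone.
const not =
  <T extends (...args: any[]) => boolean>(fn: T) =>
  (...args: Parameters<T>): boolean =>
    !fn(...args);

// A sample predicate (hypothetical, for illustration only).
const isEven = (n: number): boolean => n % 2 === 0;

// TypeScript infers not(isEven) as (n: number) => boolean, so
// calling it with a string would be a compile-time error.
const odds = [1, 2, 3, 4, 5].filter(not(isEven));

console.log(odds); // [1, 3, 5]
```

Note that `Parameters<T>` carries the original parameter types through, so the negated predicate stays exactly as type-safe as the original one.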
<h2>Open Source Session Replay</h2> _<a href="https://github.com/openreplay/openreplay" target="_blank">OpenReplay</a> is an open-source, session replay suite that lets you see what users do on your web app, helping you troubleshoot issues faster. OpenReplay is self-hosted for full control over your data._ ![OpenReplay](https://blog.openreplay.com/banner-blog_1oYPGT.png) _Start enjoying your debugging experience - <a href="https://github.com/openreplay/openreplay" >start using OpenReplay for free</a>._ ## Creating new functions The last type of HOFs that we considered in [the previous article](https://blog.openreplay.com/forever-functional-higher-order-functions-functions-to-rule-functions/) were able to produce new functions of their own. We discussed converting a method into a function (demethodizing) or transforming a callback-based async function into a promise. Let's see how we would demethodize functions, which provides an excellent example of the problems we need to solve. In the [previous article](https://blog.openreplay.com/forever-functional-higher-order-functions-functions-to-rule-functions/), we showed how to "demethodize" any method and transform it into a function. We saw a way to do this with `bind()`: ```javascript const demethodize = (fn) => (...args) => fn.bind(...args)(); ``` How do we convert this to TypeScript? We could think of using the same kind of solution by defining a type `T` and writing: ```javascript const demethodize = <T extends (...args: any[]) => any>(fn: T) => (...args: Parameters<T>): ReturnType<T> => fn.bind(...args)(); ``` But this won't work! The last line will be marked as an error: "*A spread argument must either have a tuple type or be passed to a rest parameter. ts(2556)*" What's going on? The issue is that we cannot use the spread syntax (which allows for any number of arguments) with functions that expect a fixed number of parameters. 
In this case, `bind()` requires at least one first argument: the object that will assume the value of `this`. How do we specify that the variable number of arguments includes at least one? The simplest way is to separate the initial argument as follows: ```javascript const demethodize = <T extends (arg0: any, ...args: any[]) => any>(fn: T) => (arg0: any, ...args: Parameters<T>): ReturnType<T> => fn.bind(arg0, ...args)(); ``` We are telling TypeScript that the list of arguments to our "demethodized" method will always include at least an initial `arg0`; problem solved! ## Creating new methods Let's have an extra HOF for additional challenges! Instead of "demethodizing" a method to convert it into a function, what about "methodizing" a function to add it to some object's prototype? For instance, we could want to add a `reverse()` method to strings, so we could write something like `"URUGUAY".reverse()` and get `"YAUGURU"`. (Arrays already have a `reverse()` method, but strings do not.) First, we should write the needed function. ```javascript function reverse(x: string): string { return x.split("").reverse().join(""); } ``` The first parameter for any function to be "methodized" must always be whatever object the method will work on. Working in JavaScript, to add a new method to a prototype, we would require a `methodize()` function like this: ```javascript function methodize(obj, fn) { obj.prototype[fn.name] = function (...args) { return fn(this, ...args); }; } ``` With it, we could write `methodize(String, reverse)`, and from now on all strings will have gained a brand new `reverse()` method; good! But how do we write this in TypeScript? The answer is not too easy! ```javascript function methodize< T extends any[], F extends (arg0: any, ...args: T) => any, O extends { prototype: { [key: string]: any } } >(obj: O, fn: F) { obj.prototype[fn.name] = function ( this: Parameters<F>[0], ...args: T ): ReturnType<F> { return fn(this, ...args); }; } ``` Wow! 
The first two generic types look familiar. `T` represents a varying number of parameters, and `F` is a function with a mandatory first `arg0` parameter (as we saw in the previous section). The seventh line, the one starting with `this:`, is interesting too. You cannot have an argument or variable called `this`, and here we're saying that the value of `this` will be the same type as `arg0`. The actual function won't have an extra argument; this is just a [trick used by TypeScript](https://www.typescriptlang.org/docs/handbook/2/functions.html#declaring-this-in-a-function). But what about type `O`? The issue is that TypeScript objects to our assignment to `obj.prototype`, because it cannot tell if `obj` actually has a `prototype` attribute. Type `O` says that the object to which we'll attach a new method does actually have a `prototype`, so everything is OK. Are we done? Unhappily, no. If we write `"URUGUAY".reverse()` as we wanted, we'll get an error: "*Property 'reverse' does not exist on type '"URUGUAY"'. ts(2339)*". The problem is that TypeScript works with static typing and cannot tell that `reverse()` will eventually exist. So, we need to add a global declaration like this: ```javascript declare global { interface String { reverse(): string; } } ``` Everything will finally be OK with this definition (which could also appear in a `.d.ts` file); we added a new method to all strings and let TypeScript know about it -- great! ## Conclusion Working with higher-order functions in TypeScript and writing correct data typing for them can sometimes be a frustrating puzzle, requiring several attempts, back and forth, until the right solution is achieved. I hope this article will help you on your way to modern, typed Functional Programming! 
> A TIP FROM THE EDITOR: It will help if you read the original [Higher Order Functions -- Functions To Rule Functions](https://blog.openreplay.com/forever-functional-higher-order-functions-functions-to-rule-functions/) article, in which all these functions are explained. [![newsletter](https://blog.openreplay.com/newsletter_Z1P0UIP.png)](https://newsletter.openreplay.com/)
asayerio_techblog
1,266,356
Boxing metaverse - whaat?
Hi folks, if there was a boxing metaworld what would you like to see there - please give me any ideas...
0
2022-11-22T00:57:28
https://dev.to/davidsl/boxing-metaverse-whaat-38d5
metaverse, gamedev, nft, webx
Hi folks, if there was a boxing metaworld, what would you like to see there? Please give me any ideas - I'm doing research, and your opinion as professionals would be very helpful
davidsl
1,266,659
First post
I was about to spend ages trying to figure out how to work this and format posts but decided to just...
0
2022-11-22T05:28:18
https://dev.to/nyashanice/first-post-294g
I was about to spend ages trying to figure out how to work this and format posts but decided to just play with it and figure it out as I go. :) learning new things is exciting
nyashanice
1,266,848
Quick Guide To CSS Preprocessors
You might believe you now know the web development process, and then the next moment, your colleague...
0
2022-11-22T06:42:36
https://dev.to/quokkalabs/quick-guide-to-css-preprocessors-10al
css, guide, tutorial, startup
You might believe you know the web development process inside out, and then the next moment, your colleague is casually talking about CSS preprocessors. What is CSS? What is a preprocessor? How do they make developers' lives easier? Now it's time to enter the world of CSS preprocessors, an essential part of any coder's toolbox. In this blog, we'll briefly introduce CSS itself and the coding problems it solves. Next, we'll cover some preprocessor features, pros, and cons. ## What is CSS? CSS stands for Cascading Style Sheets. Along with JavaScript and HTML, CSS forms the basis of web pages. With CSS, you can define the presentation of a web page: color, layout, and text style. CSS also handles adjustments for different types of devices and screen sizes. Separating CSS from HTML makes a website easier to maintain, since you can share one style sheet across all pages or adapt it to various conditions. A CSS preprocessor is a language that extends CSS and is compiled into regular CSS syntax. A browser can only understand plain CSS, which offers little help for writing clean, reusable rules: a designer or developer cannot easily reuse a collection of rules across different selectors in a stylesheet. To overcome this limitation, the concept of a preprocessor was created. ## What problem does CSS solve? The primary problem CSS solves is separating a web page's styling from its HTML. HTML is used to describe the structure of a website. Consider it the frame of a house: it defines whether something is a heading, a list, a link, etc. Over time, presentational tags and attributes like font and color crept into HTML, and this became a nightmare for web developers. Without shared styles, the style and design had to be written from scratch for each page. It makes the development process more expensive. 
CSS extracts the styling out of the HTML into its own CSS file. With an external style sheet, you can change the look of an entire website through a single file. CSS saves time and effort by making styles reusable. ## What are CSS preprocessors? ![CSS PREPROCESSOR USAGE](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3qsf9xhdwrauk1li8cqt.jpg) If CSS sounds perfect, why would we need anything more? What could go wrong? There are still a few disadvantages: as the web evolves, composing plain CSS can get lengthy and monotonous. CSS preprocessors extend the functionality of standard CSS. They add extra logical syntax and tools like variables, if/else statements, and loops. This makes the CSS more efficient, robust, and dynamic. A developer can write more complex styles and layouts using a CSS preprocessor while the source code stays shorter and more readable. CSS preprocessors add syntax that does not exist inside CSS itself; this advanced code is then compiled into ordinary CSS. ## Popular CSS preprocessors There are three main CSS preprocessors: **SASS**, **LESS**, and **Stylus**. Most CSS preprocessors have similar features, but each has its own way of getting the job done. **There are some differences in advanced usage; let's discuss them in detail.** ### SASS ![SASS](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i77emcvrjl9zyocyv2cn.jpg) [SASS](https://sass-lang.com/) stands for Syntactically Awesome Style Sheets. It has two syntax choices: the original indented Sass syntax and SCSS. Since version 3.0, SCSS has been the official syntax. SCSS is closer to regular CSS, making migration easier. The significant difference between the two is the use of braces and semicolons. Sass has a large community, has been around for over 15 years, and has more features than the other CSS preprocessors. Additionally, there are a few helpful frameworks built around Sass, such as Compass and Bourbon. 
### LESS ![LESS](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2n1civl3niqjd1v0s4jh.jpg) [LESS](https://lesscss.org/) is short for Leaner Style Sheets. It's designed to be as close to regular CSS as possible, as the syntax is the same. Less can run in Node.js or directly in the browser, where compilation happens via less.js. That is fine for fast, modern browsers, but we must be careful with old ones. Less.js also caches the compiled CSS in the browser, so it does not need to be regenerated for the user every time. ### Stylus ![Stylus](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0o2rz8k5mo0wblui5okd.png) [Stylus](https://stylus-lang.com/) is built on Node.js. Unlike Sass and Less, which are more opinionated about syntax, Stylus allows you to omit semicolons, colons, and braces whenever you want. Another cool feature is property lookup: you can easily set property X relative to property Y's value. Stylus can be more concise because of this flexibility, but it depends on your preferred syntax. ## CSS Preprocessor Features _Let's dive deep into the features CSS preprocessors add to CSS._ ### Variables Like [programming languages](https://quokkalabs.com/blog/learn-programming-languages-mobile-app-development/), CSS preprocessors permit you to add variables to your styles. This is useful for values you intend to reuse frequently. For example: ``` $hoverColor: #33FF9B; button { background-color: $hoverColor; } ``` Above, we've defined a hover color variable. Rather than looking up the color code every time you need it, you can simply refer to the `$hoverColor` variable. ### If/Else statements These statements allow us to apply some CSS when a condition holds. Note that preprocessor conditions are evaluated at compile time against variables; they cannot inspect the rendered page. For example, if a `$layout` variable is set to `wide`, we set the top margin to 100px; otherwise, to 10px. ``` $layout: wide; body { @if $layout == wide { margin-top: 100px; } @else { margin-top: 10px; } } ``` ### Loops Loops are valuable when you have a collection of items, i.e., lists or maps you want to iterate over. For example, suppose we have a map of social media names and the brand color to apply to each. We want each button to get its relevant color and icon according to the design. We'll loop through the `$social` map and assign each color and icon to our controls. ``` @each $name, $color in $social { // selector based on href name [href*='#{$name}'] { background: $color; // apply the link for the relevant image file &::before { content: url(https://www.careerfoundry.com/images/#{$name}.png); } } } ``` Other popular features include mixins, color functions, extends, etc. It depends on which preprocessor you use, of course! ## Pros of using CSS preprocessors - It makes code more maintainable. You can declare your brand colors in one place; if you want to change them later, you only have to update them in one place. - Preprocessors make it easy to reuse styles, meaning you don't have to write the same code repeatedly. - Rather than sprawling sheets of styles, you can group your code and nest selectors. Less repetition means shorter, more readable code. - It's more efficient, especially when updating styles later as the design changes. ## Cons of using CSS preprocessors - Since the CSS the browser sees is generated rather than hand-written, tracking down a problem can take longer. - Since the browser can't read this more advanced version of CSS, a compilation step into regular CSS is needed before the styles can be applied. - The source files will be more concise, but the generated CSS files can be huge, which can add time for a request to complete. That's it! I hope you now have a clear picture of CSS preprocessors. CSS preprocessors are standard. 
You'll find them as part of the [web development process](https://quokkalabs.com/web-application-development) at many organizations. The best choice often depends on the project, the developers' preferences, and when the project was started. It's time to try a preprocessor in your next project and see what it can do!
labsquokka
1,266,940
Event Streams Are Nothing Without Action
Each data point in a system that produces data on an ongoing basis corresponds to an Event. Event...
0
2022-11-22T09:23:43
https://memphis.dev/blog/event-streams-are-nothing-without-action/
eventdriven, streamprocessing, batchprocessing
Each data point in a system that produces data on an ongoing basis corresponds to an Event. Event Streams are described as a continuous flow of events or data points. Event Streams are sometimes referred to as Data Streams within the developer community since they consist of continuous data points. Event Stream Processing refers to the action taken on generated Events. This article discusses Event Streams and Event Stream Processing in great depth, covering topics such as how Event Stream Processing works, the contrast between Event Stream Processing and Batch Processing, its benefits and use cases, and concluding with an illustrative example of Event Stream Processing. --- ### **Event Streams: An Overview** Coupling between services is one of the most significant difficulties associated with microservices. Conventional architecture follows an “ask, don’t tell” model, in which data is collected only when requested. Suppose there are three services in play: A, B, and C. Service A asks the other services, “What is your present state?” and assumes they are always ready to respond. This leaves Service A in a bad position if the other services are unavailable. Retries are utilized by microservices as a workaround to compensate for network failures or any negative impacts brought on by changes in the network topology. However, this ultimately adds another layer of complexity and increases the expense. In order to address the problems with the conventional design, event-driven architecture adopts a “tell, don’t ask” philosophy. In the example above, Services B and C publish continuous Streams of data, i.e., Events, and Service A subscribes to these Event Streams. Then, Service A may evaluate the data, aggregate the outcomes, and cache them locally. Utilizing Event Streams in this manner has various advantages, including: - Systems are capable of closely imitating actual processes. 
- Increased usage of scale-to-zero functions (serverless computing), as more services are able to stay idle until required.
- Enhanced adaptability

---

## **The Concept of Event Stream Processing**

Event Stream Processing (ESP) is a collection of technologies that facilitate the development of an Event-driven architecture. As previously stated, Event Stream Processing is the process of reacting to Events created by an Event-driven architecture.

![the concept of event stream processing](https://community.ops.io/remoteimages/uploads/articles/j51sm3i8g0ku2onr8c2t.jpeg)

One may act in a variety of ways, including:

- Conducting Calculations
- Transforming Data
- Analyzing Data
- Enriching Data

You may design a pipeline of actions to convert Event data; this pipeline, detailed in the following section, is the heart of Event Stream Processing.

---

## **The Basics of Event Stream Processing**

Event Stream Processing consists of two separate technologies. The first is a system that logically stores Events, and the second is software used to process Events.

The first component is responsible for data storage and saves information based on a timestamp. As an illustration of Streaming Data, recording the outside temperature every minute for a whole day is an excellent example. In this scenario, each Event consists of the temperature measurement and the precise time of the measurement.

Stream Processors or Stream Processing Engines constitute the second component. Most often, developers use Apache Kafka to store and process Events temporarily. It also enables the creation of Event Streams-based pipelines in which processed Events are transferred to further Event Streams for additional processing. Other [Kafka use cases](https://memphis.dev/blog/apache-kafka-use-cases-when-to-use-it-when-not-to/) include activity tracking, log aggregation, and real-time data processing.
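To make the Event shape above concrete: each Event pairs a measurement with its timestamp, and a processing pipeline can enrich and filter those Events as they flow by. Here is a minimal sketch in plain JavaScript (the field names, the metadata, and the 45 °C threshold are purely illustrative, not tied to Kafka or any particular platform):

```javascript
// Each Event pairs a measurement with the precise time it was taken.
const events = [
  { timestamp: "2022-11-17T10:00:00Z", temperature: 21.5 },
  { timestamp: "2022-11-17T10:01:00Z", temperature: 22.1 },
  { timestamp: "2022-11-17T10:02:00Z", temperature: 47.3 },
];

// A tiny pipeline of actions: enrich each Event with metadata,
// then filter so only readings above the alert threshold remain.
const alerts = events
  .map((event) => ({ ...event, unit: "celsius" })) // enrich
  .filter((event) => event.temperature > 45);      // filter

console.log(alerts); // one alert, for the 47.3° reading
```

A real deployment would consume the Events from a broker instead of an in-memory array, but the shape of the pipeline stays the same.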
Kafka enables software architects to design [scalable architectures](https://memphis.dev/blog/building-a-scalable-search-architecture/) for large systems, thanks to the decoupling of the different components in the system.

---

## **Event Stream Processing vs. Batch Processing**

With the development of technology, businesses deal with a much larger volume of data than they did ten years ago. Therefore, more sophisticated data processing technologies are necessary to keep up with this rate of change.

A conventional application is responsible for the collection, storage, and processing of data, as well as the storage of the processed outputs. Typically, these procedures occur in batches, so your application must wait until it has sufficient data to begin processing. The amount of time your application may have to wait for data is unacceptable for time-sensitive or real-time applications that need quick data processing. To solve this difficulty, Event Streams enter the fray. In Event Stream Processing, every single data point or Event is handled instantaneously, meaning there is no backlog of data points, making it perfect for real-time applications.

![event stream processing vs. batch processing](https://community.ops.io/remoteimages/uploads/articles/c08j9dx581lyrhk0hwhn.jpeg)

In addition, Stream Processing enables the detection of patterns, the examination of different degrees of attention, and the simultaneous examination of data from numerous Streams. By spreading the operations out across time, Event Stream Processing requires much less hardware than Batch Processing.

---

## **The benefits of using Event Stream Processing**

Event Stream Processing is used when quick action must be taken on Event Streams. As a result, Event Stream Processing will emerge as the solution of choice for managing massive amounts of data.
This will have the greatest impact on today’s prevalent high-speed technologies. Several advantages of incorporating Event Stream Processing into your workflow are as follows:

- Event Stream Pipelines can be developed to fulfill advanced Streaming use cases. For instance, using an Event Stream Pipeline, one may enhance Event data with metadata and modify such objects for storage.
- Utilizing Event Stream Processing in your workflow enables you to make choices in real-time.
- You can easily expand your infrastructure as the data volume grows.
- Event Stream Processing offers continuous Event Monitoring, enabling the creation of alerts to discover trends and abnormalities.
- You can examine and handle massive volumes of data in real-time, allowing you to filter, aggregate, or enrich the data prior to storage.

---

## **Event Streams use cases**

As the Internet of Things (IoT) evolves, so does the demand for real-time analysis. As data processing architecture becomes more Event-driven, ESP continues to grow in importance. Event Streaming is used in a variety of application cases that span several sectors and organizations. Let’s examine a few industries that have profited from incorporating Event Stream Processing into their data processing methodologies. Besides helping big sectors, it also addresses specific problems we face on a daily basis. Here are some examples of how this can be used.

## **Use case 1: Pushing Github notifications using Event Streams**

Event streams are a great way to stay up-to-date on changes to your codebase in real-time. By configuring an event stream and subscribing to the events you’re interested in, you can receive push notifications whenever there is activity in your repository. We hope this use case will help you understand how to use event streams in GitHub push notifications.
Here we take the example of a Chrome extension that makes use of event streams to provide real-time GitHub push notifications. The GitHub Notifier extension for Google Chrome allows you to see notifications in real time whenever someone interacts with one of your GitHub repositories. This is a great way to stay on top of your project’s activity and be able to respond quickly to issues or pull requests.

The extension is available for free from the Google Chrome store. Simply install it and then sign in with your GitHub account. Once you’ve done that, you’ll start receiving notifications whenever someone mentions you, comments on one of your repositories, or even when someone stars one of your repositories. You can also choose to receive notifications for specific events such as new releases or new Pull Requests. Stay up-to-date on all the latest activity on your GitHub repositories with GitHub Notifier!

![pushing github notifications using event streams](https://community.ops.io/remoteimages/uploads/articles/e00qd8fvm4qi1div6hwy.jpeg)

---

## **Use case 2: Internet of Things in Industry (IIoT)**

In the context of automating industrial processes, businesses may incorporate an IIoT solution by including a number of sensors that communicate data streams in real-time. These sensors may be installed in the hundreds, and their data streams are often pooled by IoT gateways, which can deliver a continuous stream of data further into the technological stack. Enterprises would need to apply an event stream processing approach in order to make use of the data, analyze it to detect trends, and swiftly take action on them. This stream of events would be consumed by the event streaming platform, which would then execute real-time analytics. For instance, we may be interested in tracking the average temperature over the course of 30 seconds. After that, we want the temperature to be shown only if it surpasses 45 °C.
When this condition is satisfied, the warning may be utilized by other programs to alter their processes in real-time to prevent overheating.

There are many technologies that can help automate these processes. Camunda’s Workflow Engine is one of them: it implements this process automation and executes processes that are defined in Business Process Model and Notation (BPMN), the global standard for process modeling. BPMN provides an easy-to-use visual modeling language for automating your most complex business processes. If you want to get started with Camunda workflow, the Camunda connectors are a good starting point.

## **Use case 3: Payment Processing**

Rapid payment processing is an excellent use of event stream processing for mitigating user experience concerns and undesirable behaviours. For instance, if a person wishes to make a payment but encounters significant delays, they may refresh the page, causing the transaction to fail and leaving them uncertain as to whether their account has been debited. Similarly, when dealing with machine-driven payments, delay may have a large ripple impact, particularly when hundreds of payments are backed up. This might result in repeated attempts or timeouts.

To support the smooth processing of tens of thousands of concurrent requests, we may leverage event stream processing to guarantee a consistent user experience throughout. A payment request event may be sent from a topic to an initial payments processor, which then updates the overall amount of payments being processed at the moment. A subsequent event is then created and forwarded to a different processor, which verifies that the payment can be completed. A final event is then generated, and the user’s balance is updated by another processor.

## **Use case 4: Cybersecurity**

Cybersecurity systems collect millions of events in order to identify new risks and comprehend relationships between occurrences.
For the purpose of reducing false positives, cybersecurity technologies use event stream processing to enrich threat data and give context-rich information. They do this by following a sequence of processes, including:

- Collect events from diverse data sources, such as consumer settings, in real-time.
- Filter event streams so that only relevant data enters the topics, to eliminate false positives or benign attacks.
- Leverage streaming apps in real-time to correlate events across several source interfaces.
- Forward priority events to other systems, such as security information and event management (SIEM) systems or security orchestration, automation, and response (SOAR) systems.

## **Use Case 5: Airline Optimization**

We can create real-time apps to enhance the experience of passengers before, during, and after flights, as well as the overall efficiency of the process. We can effectively coordinate and react if we make crucial events, such as customers scanning their boarding passes at the gate, accessible across all the back-end platforms used by airlines and airports. For example, based on this one sort of event, we can enable three distinct use cases, including:

- Accurately predicting take-off times and delays
- Reducing the amount of assistance necessary for connecting passengers by giving real-time data
- Reducing a single flight’s impact on the on-time performance of the other flights.

## **Use case 6: E-Commerce**

Event stream processing can be used in an e-commerce application to facilitate “viewing through to purchasing.” To do this, we may build an initial event stream to capture the events made by shoppers, with 3 separate event kinds feeding the stream.

- A customer sees an item.
- A customer adds an item to their shopping cart.
- A customer places an order.
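The three event kinds above can be modeled as plain objects flowing through a single stream. A minimal sketch (the event names, fields, and amounts are made up for illustration):

```javascript
// Illustrative shopper events on one e-commerce stream.
const stream = [
  { type: "customer_sees_item", productId: "p1" },
  { type: "customer_adds_to_cart", productId: "p1" },
  { type: "customer_places_order", productId: "p1", amount: 30 },
  { type: "customer_sees_item", productId: "p2" },
  { type: "customer_places_order", productId: "p2", amount: 12 },
];

// A tiny sales calculator: tally revenue from order events only.
const revenue = stream
  .filter((event) => event.type === "customer_places_order")
  .reduce((total, event) => total + event.amount, 0);

console.log(revenue); // 42
```

In a production system each processor would consume from and publish to broker topics rather than scan an array, but the filter-and-fold logic is the same.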
We may support these use cases by applying discrete processes or algorithms, such as:

- An hourly sales calculator that parses the stream for ‘Customer places order’ events and keeps a running tally of total revenues for each hour.
- A product look-to-book tracker that reads “Customer sees item” events from the stream and keeps track of the overall number of views for each product. Additionally, it parses ‘Customer places order’ events from the stream and keeps track of the total number of units sold for each product.
- An abandoned cart detector that reads all three kinds of events to identify customers who have abandoned their shopping cart; when it detects one, it creates a new ‘Customer abandons cart’ event and posts it to a new topic.

---

## **Conclusion**

In a world that is increasingly driven by events, Event Stream Processing (ESP) has emerged as a vital practice for enterprises. Event streams are becoming an increasingly important data source as more and more companies move to a streaming architecture. The benefits of using event streams include real-time analytics, faster response times, and improved customer experience. They offer many benefits over traditional batch processing. In addition, there are a number of use cases for event streams that can help you solve specific business problems. If you’re looking for a way to improve your business performance, consider using event stream processing.

---

[Join 4500+ others and sign up for our data engineering newsletter](https://memphis.dev/newsletter)

---

Follow Us to get the latest updates!

[Github](https://github.com/memphisdev/memphis) • [Docs](https://docs.memphis.dev/memphis/getting-started/readme) • [Discord](https://discord.com/invite/DfWFT7fzUu)

---

Originally published at [memphis.dev](https://memphis.dev/) by [Avraham Neeman](https://twitter.com/AvrahamNeeman), Co-Founder & CPO @Memphis.dev.
atrifsik
1,267,304
AWS Lambda (Node.js) calling SOAP service
REST service: Uses HTTP for exchanging information between systems in several ways such as...
0
2022-11-22T16:30:30
https://dev.to/prabusah_53/http-get-and-delete-from-aws-lambda-7ml
### REST service:
Uses HTTP for exchanging information between systems in several ways, such as JSON, XML, text, etc.

### SOAP service:
A protocol for exchanging information between systems over the internet using only XML.

### Requirement:
Calling SOAP services from Lambda. We'll use this npm package: https://www.npmjs.com/package/soap

With this npm package it's a 3-step process:
1. Create a soap client.
2. Call the SOAP method by passing JSON input.
3. Convert the XML output to JSON.

#### Create SOAP client:
```
const soap = require('soap');
let client = await soap.createClientAsync(url);
```
_url_ - is the wsdl url. For example: http://sampledomainname/services/sampleserice?wsdl (just for representation - this wsdl url does not work). The wsdl doc has all the details. But if you are like me and can best understand JSON data, then use _client.describe()_ to get the soap service details in JSON format.

> console.log(client.describe());

Output log below:
```
{
  "SomeService": {
    "SomeServicePort": {
      "someServiceMethod": {
        "input": {
          "serviceRequest": {
            "field1": "xsd:string"
          }
        }
      }
    }
  }
}
```
_Convert JSON to XML (pass input to SOAP)_: the soap npm package takes care of this conversion. Pass in JSON, and it converts it to XML.

#### Call SOAP method by passing JSON input:
```
client.SomeService.SomeServicePort.someServiceMethod(args, function(err, result) {
  if (err) console.error("Error - ", err);
  console.log(result); // is a javascript object
}, {
  postProcess: function(_xml) {
    console.log('XML - ', _xml); // this prints the input XML
    return _xml.replace('test', 'newtest'); // any mapping or string conversion, add here.
  }
});
```
Consider this args JSON as input to the soap method:

> let args = {
>   "one": "1"
> }

The args JSON would be converted by the soap npm package to the XML below:
```
<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <soap:Body>
    <ns1:someServiceMethod>
      <ns1:one>1</ns1:one>
    </ns1:someServiceMethod>
  </soap:Body>
</soap:Envelope>
```
Before we discuss converting the response XML to JSON (the result of the soap call), let's see the async/await way of calling a soap method.
```
let resultArr = await client.someServiceMethodAsync(args);
```
Just add the suffix _Async_ to the soap method you wish to call, and it will return a promise. Note that the above call returns an array with 4 elements, in both success and failure scenarios.

> Array element 0: result is a javascript object.
> Array element 1: rawResponse is the raw xml response string.
> Array element 2: soapHeader is the response soap header as a javascript object (contains WS-Security info, so let us not log this!).
> Array element 3: rawRequest is the raw xml request string.

So Array element 0 is already the XML response converted to JSON. Let's put it all together below:
```
const soap = require('soap');
let client = await soap.createClientAsync("http://sampledomainname/services/sampleserice?wsdl");
let resultArr = await client.someServiceMethodAsync(args);
console.log("Response JSON - ", resultArr[0]);
```
### Secured SOAP Call:
All enterprise soap services would require authentication; in the section below let's discuss WS-Security.
```
let wsSecurity = new soap.WSSecurity('username', 'password', options);
client.setSecurity(wsSecurity);
```
Putting it all together:
```
const soap = require('soap');
let client = await soap.createClientAsync('http://sampledomainname/services/sampleserice?wsdl');
let options = {
  hasNonce: true,
  actor: 'actor'
};
let wsSecurity = new soap.WSSecurity('username', 'password', options);
client.setSecurity(wsSecurity);
let resultArr = await client.someServiceMethodAsync(args);
console.log('Response JSON - ', resultArr[0]);
```
Image by <a href="https://pixabay.com/users/splitshire-364019/?utm_source=link-attribution&amp;utm_medium=referral&amp;utm_campaign=image&amp;utm_content=407081">SplitShire</a> from <a href="https://pixabay.com//?utm_source=link-attribution&amp;utm_medium=referral&amp;utm_campaign=image&amp;utm_content=407081">Pixabay</a>
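One last note on the four-element result array: array destructuring keeps call sites readable. The stubbed array below only mimics the shape described earlier; the values are invented for illustration:

```javascript
// A stand-in for what a *Async soap call resolves to (values are fake).
const resultArr = [
  { balance: "100.00" },                // element 0: parsed javascript object
  "<soap:Envelope>...</soap:Envelope>", // element 1: raw xml response string
  { security: "do not log this" },      // element 2: soap header object
  "<soap:Envelope>...</soap:Envelope>", // element 3: raw xml request string
];

const [result, rawResponse, soapHeader, rawRequest] = resultArr;
console.log("Response JSON - ", result); // same object as resultArr[0]
```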
prabusah_53
1,267,358
How To Poll an Airflow Job (i.e. DAG Run)
Ever wanted to actually know when an Airflow DAG Run has completed? Perhaps your use case involves...
0
2022-11-22T15:31:53
https://dev.to/joeauty/how-to-poll-an-airflow-job-ie-dag-run-1ffc
airflow, devops
Ever wanted to actually know when an Airflow DAG Run has completed? Perhaps your use case involves this completed work being some sort of workflow dependency, or perhaps it is used in a CI/CD pipeline. I'm sure there are a myriad of possible scenarios here beyond ours at [Redactics](https://www.redactics.com), which is using Airflow to clone a database for a dry-run of a database migration, but you be the judge! [Here is the repo](https://github.com/Redactics/airflow-dagrun-poller) In this repo you can find a sample `Dockerfile`, a sample Kubernetes job that provides some context as to how this poller script can be used, as well as the script itself. There isn't much to it, but I hope it saves some developers a moment or three of their time should they need to recreate this functionality!
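For anyone who wants the general shape without opening the repo, here is a rough sketch of a poller. It is not the repo's actual script; the only assumptions are the terminal states and the idea of checking Airflow's stable REST API (`GET /api/v1/dags/{dagId}/dagRuns/{runId}` returns the run's `state`):

```javascript
// Generic poll-until-terminal helper. checkState is assumed to resolve
// to the DAG run state, e.g. by calling Airflow's stable REST API.
async function pollDagRun(checkState, { intervalMs = 5000, maxAttempts = 60 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const state = await checkState();
    if (state === "success" || state === "failed") return state;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("timed out waiting for the DAG run to finish");
}

// Usage with a stubbed state sequence standing in for real API calls.
const fakeStates = ["queued", "running", "success"];
pollDagRun(() => Promise.resolve(fakeStates.shift()), { intervalMs: 1 })
  .then((state) => console.log("DAG run finished with state:", state));
```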
joeauty
1,267,670
Java Collections Framework
A collection is a grouping of objects, an interaction of several objects in one place. For example,...
0
2022-11-22T17:34:51
https://dev.to/jeronimafloriano/java-collections-framework-1deb
java, programming, backend, ptbr
A collection is a grouping of objects, an interaction of several objects in one place. For example, a folder with a collection of letters, a drawer with several keys, etc. There are several types of collections, each with different particularities.

Java has an architecture that represents various collections: the Collections Framework. This framework is nothing more than the representation of all the collections that exist in Java, a set encompassing the interfaces and classes that represent data structures. In this post we will cover the general use of the Collections Framework, so that we have the knowledge needed to choose when to use which collection.

Every collection in Java has:

- An interface, which represents a given collection and allows us to manipulate it independently of implementation details (following the principle of programming to the interface, not the implementation).
- Concrete implementations of the interfaces, so that we can actually implement a collection.
- Methods for manipulating the collections.

Within the Collections Framework, the interfaces encapsulate several collection implementations, allowing access to a collection without worrying about implementation details.

![Collections Framework](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ll98me3xruqrp3hjte95.png)

The main collection in Java is the Collection interface, and the other interfaces originated from it. The main interfaces of the framework extend directly from Collection; they are the "core" of the framework, the central point of the collections. Each collection has its own particularity, which is why there are several collections. For example, some allow duplicate elements and others do not, and some are ordered while others are not (we will see further on what some of these characteristics are). All the core collection interfaces are generic.
For example, this is the declaration of Collection:

**public interface Collection< E >**

The use of "< E >" tells us that the interface is generic, that is, it accepts any type. Therefore, when declaring a collection instance we must specify the type of object contained in the collection, for example:

**Collection< String > colecao;**

The following list describes the main interfaces and implementations of the framework:

<u>**Collection**</u>
The root of the collections; all collections implement Collection. There is no direct implementation of this interface; it is the base of all collections and has the basic methods of every collection, for example: methods that report how many elements are in the collection (size, isEmpty), methods that check whether a given object is in the collection (contains), methods that add and remove an element from the collection (add, remove), and methods that provide an iterator over the collection (iterator), among others. Since Collection is the root of the collections, we can, for example, "pour" a Set into a List and vice versa, and thus use whichever methods we need, since both of these collections implement the Collection interface.

<u>**List**</u>
An ordered, sequential collection that can contain duplicate elements. It keeps track of the position of its elements, allowing us to look up an element within a List by its position. The main List implementations are: ArrayList, LinkedList and Vector.

- ArrayList: Represents an array with dynamic resizing (the size grows as new elements are added). It offers easy, performant access by index. The elements need to be copied to another array when there is no more capacity and, when a position is removed, it needs to reposition the references.
It is generally the best-performing implementation.
- LinkedList: A doubly linked list, which offers performant insertion and removal at any position, since in it element B keeps track of which element is A (the one before it) and which is C (the one after it). It also keeps track of the first and last elements of the list, so it does not need to reposition the references when a position is removed (as happens in an ArrayList). Access by index is slower in a LinkedList, because it has to search element by element (whether there is a next one, whether there is a previous one...).
- Vector: Uses an array under the hood. It is meant for working with more than one thread, since it is thread-safe: while one thread uses the resource, the other waits to use it afterwards.

<u>**Set**</u>
A collection used to represent sets. A Set cannot contain duplicate elements, which makes adding and removing elements easier, since it is not necessary to traverse all the objects when adding/removing/searching for one of them, as Lists do. It also does not follow an order; it is like a "bag" full of objects, all mixed together. The Set structure uses a hash table to perform its lookups faster. Thus, when searching for an object, instead of comparing it with all the other objects inside the Set, it compares only those with the same hashcode. Suppose we have a Collection and we want to create another Collection containing the same elements but with the duplicates removed. We can do this by converting the Collection into some Set implementation, by copying the Collection into a new Set, or by using a Stream to convert it into a Set. The most commonly used implementations of the Set interface are: HashSet, LinkedHashSet and TreeSet.

- HashSet: Keeps no order, not even the insertion order of the objects.
A HashSet stores its elements in a hash table and is the best-performing implementation.
- LinkedHashSet: Implemented as a hash table with a linked list running through it, keeping and ordering elements based on insertion order; that is, it will return the objects in the order in which they were inserted. LinkedHashSet spares users the unspecified ordering provided by HashSet, at a slight cost in performance.
- TreeSet: Stores the elements in a "tree", ordering them based on their values, so it can only be used with classes that implement Comparable, or by passing to the TreeSet constructor an object that implements Comparator, so that it can perform the ordering. It is also less performant compared to HashSet.

<u>**Queue**</u>
A queue used to hold multiple elements prior to processing. Queues typically order elements in FIFO (first-in, first-out) fashion. Priority queues (PriorityQueue) are an exception, as they order elements according to their values. In a FIFO queue, all new elements are inserted at the tail of the queue. Each Queue implementation must specify its ordering properties. It is possible for a Queue implementation to restrict the number of elements it holds; such queues are known as bounded. Some Queue implementations are:

- AbstractQueue: The simplest possible queue implementation that Java provides, a basic implementation of the Queue interface.
- LinkedList
- PriorityQueue: A queue that orders elements according to their values.

<u>**Deque**</u>
A collection used to hold multiple elements prior to processing. It is a double-ended queue, where all new elements can be inserted, retrieved and removed at both ends. Deques can be used both as a FIFO (first-in, first-out) queue and as a LIFO (last-in, first-out) stack.
In a deque, all new elements can be inserted, retrieved and removed at both ends. Some Deque implementations:

- ArrayDeque: A resizable-array implementation of a Deque. It grows as new elements are added, is not thread-safe and does not allow null elements.
- LinkedList
- ConcurrentLinkedDeque: Provides support for concurrent insertion, removal and access operations.

<u>**Map**</u>
An object that maps keys to values; through the key we can access the corresponding value. A Map cannot contain duplicate keys; each key can map to at most one value. If we repeat a key, the repeated key is overwritten by the new one. The Java platform contains three general-purpose Map implementations: HashMap, TreeMap and LinkedHashMap. Their behavior and performance are precisely analogous to HashSet, TreeSet, and LinkedHashSet.

- HashMap: Hash table based implementation of the Map interface. It keeps no order, not even the insertion order of the objects. A HashMap stores its elements in a hash table and is the best-performing implementation.
- TreeMap: A Map implementation that stores the entries in a "tree", ordering them based on their keys, so it can only be used with key classes that implement Comparable, or by passing to the TreeMap constructor an object that implements Comparator, so that it can perform the ordering.
- LinkedHashMap: Hash table and linked list implementation of the Map interface, which defines the iteration order, normally the order in which the keys were inserted into the map (insertion order).

The last two core collection interfaces are simply sorted versions of Set and Map:

- SortedSet: A Set that maintains its elements in ascending order.
- SortedMap: A Map that maintains its mappings in ascending key order. This is the Map analogue of SortedSet.

**SOURCE:** https://docs.oracle.com/javase/tutorial/collections/interfaces/collection.html
jeronimafloriano
1,268,551
Web scraping Google Shopping Product Reviews with Nodejs
What will be scraped Full code If you don't need an explanation, have a look at the full code...
0
2022-11-23T08:43:09
https://dev.to/serpapi/web-scraping-google-shopping-product-reviews-with-nodejs-4n9e
webscraping, node, serpapi
<h2 id='what'>What will be scraped</h2> ![what](https://user-images.githubusercontent.com/64033139/200179517-d49235a8-b193-449f-afe3-727073c5b778.png) <h2 id='full_code'>Full code</h2> If you don't need an explanation, have a look at [the full code example in the online IDE](https://replit.com/@MikhailZub/Scrape-Google-Shopping-Product-Reviews-with-NodeJS-SerpApi#withPuppeteer.js) ```javascript const puppeteer = require("puppeteer-extra"); const StealthPlugin = require("puppeteer-extra-plugin-stealth"); puppeteer.use(StealthPlugin()); const reviewsLimit = 100; // hardcoded limit for demonstration purpose const searchParams = { id: "8757849604759505625", // Parameter defines the ID of a product you want to get the results for hl: "en", // Parameter defines the language to use for the Google search gl: "us", // parameter defines the country to use for the Google search }; const URL = `https://www.google.com/shopping/product/${searchParams.id}/reviews?hl=${searchParams.hl}&gl=${searchParams.gl}`; async function getReviews(page) { while (true) { await page.waitForSelector("#sh-fp__pagination-button-wrapper"); const isNextPage = await page.$("#sh-fp__pagination-button-wrapper"); const reviews = await page.$$("#sh-rol__reviews-cont > div"); if (!isNextPage || reviews.length > reviewsLimit) break; await page.click("#sh-fp__pagination-button-wrapper"); await page.waitForTimeout(3000); } return await page.evaluate(() => { return { productResults: { title: document.querySelector(".BvQan")?.textContent.trim(), reviews: parseInt(document.querySelector(".lBRvsb .HiT7Id > span")?.getAttribute("aria-label").replace(",", "")), rating: parseFloat(document.querySelector(".lBRvsb .UzThIf")?.getAttribute("aria-label")), }, reviewsResults: { rating: Array.from(document.querySelectorAll(".aALHge")).map((el) => ({ stars: parseInt(el.querySelector(".rOdmxf")?.textContent), amount: parseInt(el.querySelector(".vL3wxf")?.textContent), })), reviews: 
Array.from(document.querySelectorAll("#sh-rol__reviews-cont > div")).map((el) => ({ title: el.querySelector(".P3O8Ne")?.textContent.trim() || el.querySelector("._-iO")?.textContent.trim(), date: el.querySelector(".OP1Nkd .ff3bE.nMkOOb")?.textContent.trim() || el.querySelector("._-iU")?.textContent.trim(), rating: parseInt(el.querySelector(".UzThIf")?.getAttribute("aria-label") || el.querySelector("._-lq")?.getAttribute("aria-label")), source: el.querySelector(".sPPcBf")?.textContent.trim() || el.querySelector("._-iP")?.textContent.trim(), content: el.querySelector(".g1lvWe > div:last-child")?.textContent.trim() || el.querySelector("._-iN > div:last-child")?.textContent.trim(), })), }, }; }); } async function getProductInfo() { const browser = await puppeteer.launch({ headless: true, // if you want to see what the browser is doing, you need to change this option to "false" args: ["--no-sandbox", "--disable-setuid-sandbox"], }); const page = await browser.newPage(); await page.setDefaultNavigationTimeout(60000); await page.goto(URL); await page.waitForSelector(".xt8sXe button"); const reviews = { productId: searchParams.id, ...(await getReviews(page)) }; await browser.close(); return reviews; } getProductInfo().then((result) => console.dir(result, { depth: null })); ``` <h2 id='preparation'>Preparation</h2> First, we need to create a Node.js\* project and add [`npm`](https://www.npmjs.com/) packages [`puppeteer`](https://www.npmjs.com/package/puppeteer), [`puppeteer-extra`](https://www.npmjs.com/package/puppeteer-extra) and [`puppeteer-extra-plugin-stealth`](https://www.npmjs.com/package/puppeteer-extra-plugin-stealth) to control Chromium (or Chrome, or Firefox, but now we work only with Chromium which is used by default) over the [DevTools Protocol](https://chromedevtools.github.io/devtools-protocol/) in [headless](https://developers.google.com/web/updates/2017/04/headless-chrome) or non-headless mode. 
To do this, in the directory with our project, open the command line and enter:

```bash
$ npm init -y
```

And then:

```bash
$ npm i puppeteer puppeteer-extra puppeteer-extra-plugin-stealth
```

\*<span style="font-size: 15px;">If you don't have Node.js installed, you can [download it from nodejs.org](https://nodejs.org/en/) and follow the installation [documentation](https://nodejs.dev/learn/introduction-to-nodejs).</span>

📌Note: also, you can use `puppeteer` without any extensions, but I strongly recommend using it with `puppeteer-extra` and `puppeteer-extra-plugin-stealth` to prevent websites from detecting that you are using headless Chromium or a [web driver](https://www.w3.org/TR/webdriver/). You can check it on the [Chrome headless tests website](https://intoli.com/blog/not-possible-to-block-chrome-headless/chrome-headless-test.html). The screenshot below shows you the difference.

![stealth](https://user-images.githubusercontent.com/64033139/173014238-eb8450d7-616c-42ae-8b2f-24eeb5fd5916.png)

<h2 id='process'>Process</h2>

We need to extract data from HTML elements. The process of getting the right CSS selectors is fairly easy via the [SelectorGadget Chrome extension](https://selectorgadget.com/), which allows us to grab CSS selectors by clicking on the desired element in the browser. However, it does not always work perfectly, especially when the website relies heavily on JavaScript.

We have a dedicated [Web Scraping with CSS Selectors](https://serpapi.com/blog/web-scraping-with-css-selectors-using-python/#css_gadget) blog post at SerpApi if you want to know a little bit more about them.

The Gif below illustrates the approach of selecting different parts of the results using SelectorGadget.
![how](https://user-images.githubusercontent.com/64033139/200179651-37fc05f8-e285-4d6c-ac82-aa65ec6f34af.gif)

<h3 id='code_explanation'>Code explanation</h3>

Declare [`puppeteer`](https://www.npmjs.com/package/puppeteer-extra) from the `puppeteer-extra` library to control the Chromium browser, and [`StealthPlugin`](https://www.npmjs.com/package/puppeteer-extra-plugin-stealth) from the `puppeteer-extra-plugin-stealth` library to prevent websites from detecting that you are using a [web driver](https://www.w3.org/TR/webdriver/):

```javascript
const puppeteer = require("puppeteer-extra");
const StealthPlugin = require("puppeteer-extra-plugin-stealth");
```

Next, we tell `puppeteer` to use `StealthPlugin`, write the necessary request parameters and search URL, and set how many reviews we want to receive (the `reviewsLimit` constant):

```javascript
puppeteer.use(StealthPlugin());

const reviewsLimit = 100; // hardcoded limit for demonstration purpose

const searchParams = {
  id: "8757849604759505625", // Parameter defines the ID of a product you want to get the results for
  hl: "en", // Parameter defines the language to use for the Google search
  gl: "us", // parameter defines the country to use for the Google search
};

const URL = `https://www.google.com/shopping/product/${searchParams.id}/reviews?hl=${searchParams.hl}&gl=${searchParams.gl}`;
```

Next, we write a function to get product info from the page:

```javascript
async function getReviews(page) {
  ...
}
```

Next, we use a `while` loop ([`while`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/while)) in which we check whether the next page button is available on the page (the [`$`](https://pptr.dev/api/puppeteer.page._) method) and whether the number of reviews (the [`$$`](https://pptr.dev/api/puppeteer.page.__) method) is less than `reviewsLimit`. If so, we click ([`click()`](https://pptr.dev/api/puppeteer.page.click) method) the next page button element and wait 3 seconds (using the [`waitForTimeout`](https://pptr.dev/api/puppeteer.page.waitfortimeout) method); otherwise, we stop the loop (using [`break`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/break)).

```javascript
while (true) {
  await page.waitForSelector("#sh-fp__pagination-button-wrapper");
  const isNextPage = await page.$("#sh-fp__pagination-button-wrapper");
  const reviews = await page.$$("#sh-rol__reviews-cont > div");
  if (!isNextPage || reviews.length > reviewsLimit) break;
  await page.click("#sh-fp__pagination-button-wrapper");
  await page.waitForTimeout(3000);
}
```

Then, we get information from the page context (using the [`evaluate()`](https://pptr.dev/api/puppeteer.page.evaluate) method) and save it in the returned object:

```javascript
return await page.evaluate(() => ({
  ...
}));
```

Next, we need to get the different parts of the page using the following methods:

- [`querySelectorAll()`](https://developer.mozilla.org/en-US/docs/Web/API/Document/querySelectorAll);
- [`querySelector()`](https://developer.mozilla.org/en-US/docs/Web/API/Document/querySelector);
- [`getAttribute()`](https://developer.mozilla.org/en-US/docs/Web/API/Element/getAttribute);
- [`textContent`](https://developer.mozilla.org/en-US/docs/Web/API/Node/textContent);
- [`trim()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/trim);
- [`Array.from()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/from);
- [`replace()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/replace);
- [`parseInt()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/parseInt);
- [`parseFloat()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/parseFloat).
```javascript productResults: { title: document.querySelector(".BvQan")?.textContent.trim(), reviews: parseInt(document.querySelector(".lBRvsb .HiT7Id > span") ?.getAttribute("aria-label").replace(",", "")), rating: parseFloat(document.querySelector(".lBRvsb .UzThIf") ?.getAttribute("aria-label")), }, reviewsResults: { rating: Array.from(document.querySelectorAll(".aALHge")).map((el) => ({ stars: parseInt(el.querySelector(".rOdmxf")?.textContent), amount: parseInt(el.querySelector(".vL3wxf")?.textContent), })), reviews: Array.from(document.querySelectorAll("#sh-rol__reviews-cont > div")).map((el) => ({ title: el.querySelector(".P3O8Ne")?.textContent.trim() || el.querySelector("._-iO") ?.textContent.trim(), date: el.querySelector(".OP1Nkd .ff3bE.nMkOOb")?.textContent.trim() || el.querySelector("._-iU")?.textContent.trim(), rating: parseInt(el.querySelector(".UzThIf")?.getAttribute("aria-label") || el.querySelector("._-lq")?.getAttribute("aria-label")), source: el.querySelector(".sPPcBf")?.textContent.trim() || el.querySelector("._-iP")?.textContent.trim(), content: el.querySelector(".g1lvWe > div:last-child")?.textContent.trim() || el.querySelector("._-iN > div:last-child")?.textContent.trim(), })), }, ``` Next, write a function to control the browser, and get information: ```javascript async function getProductInfo() { ... } ``` In this function first we need to define `browser` using `puppeteer.launch({options})` method with current `options`, such as `headless: true` and `args: ["--no-sandbox", "--disable-setuid-sandbox"]`. These options mean that we use [headless](https://developers.google.com/web/updates/2017/04/headless-chrome) mode and array with [arguments](https://peter.sh/experiments/chromium-command-line-switches/) which we use to allow the launch of the browser process in the online IDE. 
And then we open a new `page`:

```javascript
const browser = await puppeteer.launch({
  headless: true, // if you want to see what the browser is doing, you need to change this option to "false"
  args: ["--no-sandbox", "--disable-setuid-sandbox"],
});

const page = await browser.newPage();
```

Next, we change the default ([30 sec](https://github.com/puppeteer/puppeteer/blob/2a0eefb99f0ae00dacc9e768a253308c0d18a4c3/src/common/TimeoutSettings.ts#L17)) timeout for waiting for selectors to 60000 ms (1 min) to account for slow internet connections with the [`.setDefaultNavigationTimeout()`](https://pptr.dev/api/puppeteer.page.setdefaultnavigationtimeout) method, go to the `URL` with the [`.goto()`](https://pptr.dev/api/puppeteer.page.goto) method, and use the [`.waitForSelector()`](https://pptr.dev/api/puppeteer.page.waitforselector) method to wait until the selector loads:

```javascript
await page.setDefaultNavigationTimeout(60000);
await page.goto(URL);
await page.waitForSelector(".xt8sXe button");
```

And finally, we save the product data from the page in the `reviews` constant (using [`spread syntax`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Spread_syntax)), close the browser, and return the received data:

```javascript
const reviews = { productId: searchParams.id, ...(await getReviews(page)) };

await browser.close();

return reviews;
```

Now we can launch our parser:

```bash
$ node YOUR_FILE_NAME # YOUR_FILE_NAME is the name of your .js file
```

<h2 id='output'>Output</h2>

```json
{
  "productId":"8757849604759505625",
  "productResults":{
    "title":"Apple iPhone 14 Pro Max - 128 GB - Space Black - Unlocked",
    "reviews":748,
    "rating":4.5
  },
  "reviewsResults":{
    "rating":[
      {
        "stars":5,
        "amount":554
      },
      {
        "stars":4,
        "amount":58
      },
      {
        "stars":3,
        "amount":31
      },
      {
        "stars":2,
        "amount":32
      },
      {
        "stars":1,
        "amount":73
      }
    ],
    "reviews":[
      {
        "title":"13 Pro Max better in almost every way",
        "date":"October 11, 2022",
        "rating":2,
        "source":"Cody LaRocque · Review provided by Google",
        "content":"Great
if you’re coming from an 11 or below. I upgraded from my 13 Pro Max as I do every year. Needless to say I am extremely underwhelmed and pretty dissatisfied with this years IPhone. My 13 pro max was better in almost every way, battery life being the major hit to me. No heavy gaming, streaming etc just daily text, call email etc. avg 4 hours per day screen-time. My 13 lasted almost a day and a half at this rate; my 14, I find myself needing a charge before I’m even off from my shift at work.The dynamic island is an over marketed, over hyped piece of useless software and does not function as cool as Apple made it appear. Always on display was fun for about 2 minutes setting it up, then immediately being turned always off because it’s way too bright and sucks power like you wouldn’t believe.All of apples key selling points are all the keys reasons I dislike this phone. Always on is a nightmare, battery life is a joke, dynamic island is useless, crash detection goes off on roller coasters, the cameras have very very little upside differences, the brightness of the screen only lasts for a couple seconds until it auto dims to conserve energy. I also feel like the overall build quality is lacking, I purchased the phone and the Apple leather case; this being the first time I’ve ever even used a case on my iPhone. My 13 lasted a year being dropped multiple times and didn’t even have a scratch. I dropped my 14 face down on a flat floor with the Apple leather case and it chipped the front corner. If you are coming from the 13. Don’t bother upgrading. If your coming from a 12 or below, consider upgrading to the now discounted 13 Less" }, ... and other reviews ] } } ``` <h2 id='serp_api'>Using<a href="https://serpapi.com/reviews-results"> Google Product Reviews Results API </a>from SerpApi</h2> This section is to show the comparison between the DIY solution and our solution. The biggest difference is that you don't need to create the parser from scratch and maintain it. 
There's also a chance that the request might be blocked by Google at some point. We handle that on our backend, so there's no need to figure out how to do it yourself or decide which CAPTCHA-solving or proxy provider to use.

First, we need to install [`google-search-results-nodejs`](https://www.npmjs.com/package/google-search-results-nodejs):

```bash
npm i google-search-results-nodejs
```

Here's the [full code example](https://replit.com/@MikhailZub/Scrape-Google-Shopping-Product-Reviews-with-NodeJS-SerpApi#withSerpApi.js), if you don't need an explanation:

```javascript
require("dotenv").config();
const SerpApi = require("google-search-results-nodejs");
const search = new SerpApi.GoogleSearch(process.env.API_KEY); //your API key from serpapi.com

const reviewsLimit = 100; // hardcoded limit for demonstration purpose

const params = {
  product_id: "8757849604759505625", // Parameter defines the ID of a product you want to get the results for.
  engine: "google_product", // search engine
  device: "desktop", //Parameter defines the device to use to get the results. It can be set to "desktop" (default), "tablet", or "mobile"
  hl: "en", // parameter defines the language to use for the Google search
  gl: "us", // parameter defines the country to use for the Google search
  reviews: true, // parameter for fetching reviews results
};

const getJson = () => {
  return new Promise((resolve) => {
    search.json(params, resolve);
  });
};

const getResults = async () => {
  const json = await getJson();
  const results = {};
  results.productResults = json.product_results;
  results.reviewsResult = [];
  while (true) {
    const json = await getJson();
    if (json.reviews_results?.reviews) {
      results.reviewsResult.push(...json.reviews_results.reviews);
      params.start ?
(params.start += 10) : (params.start = 10);
    } else break;
    if (results.reviewsResult.length > reviewsLimit) break;
  }
  return results;
};

getResults().then((result) => console.dir(result, { depth: null }));
```

<h3 id='serp_api_code_explanation'>Code explanation</h3>

First, we need to declare `SerpApi` from the [`google-search-results-nodejs`](https://www.npmjs.com/package/google-search-results-nodejs) library and define a new `search` instance with your API key from [SerpApi](https://serpapi.com/manage-api-key):

```javascript
const SerpApi = require("google-search-results-nodejs");
const search = new SerpApi.GoogleSearch(API_KEY);
```

Next, we write the necessary parameters for making a request and set how many reviews we want to receive (the `reviewsLimit` constant):

```javascript
const reviewsLimit = 100; // hardcoded limit for demonstration purpose

const params = {
  product_id: "8757849604759505625", // Parameter defines the ID of a product you want to get the results for.
  engine: "google_product", // search engine
  device: "desktop", //Parameter defines the device to use to get the results. It can be set to "desktop" (default), "tablet", or "mobile"
  hl: "en", // parameter defines the language to use for the Google search
  gl: "us", // parameter defines the country to use for the Google search
  reviews: true, // parameter for fetching reviews results
};
```

Next, we wrap the search method from the SerpApi library in a promise so we can work further with the search results:

```javascript
const getJson = () => {
  return new Promise((resolve) => {
    search.json(params, resolve);
  });
};
```

And finally, we declare the function `getResults` that gets data from the page and returns it:

```javascript
const getResults = async () => {
  ...
};
```

In this function, we get `json` with results, add the `product_results` data to the `productResults` key of the `results` object, and return it:

```javascript
const json = await getJson();
const results = {};
results.productResults = json.product_results;

...
return results;
```

Next, we need to add an empty `reviewsResult` array to the `results` object and, using a `while` loop ([`while`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/while)), get `json`, add the `reviews` results from each page, and set the next page start index (the `params.start` value). If there are no more `reviews` results on the page, or if the number of received reviews is greater than `reviewsLimit`, we stop the loop (using [`break`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/break)):

```javascript
results.reviewsResult = [];
while (true) {
  const json = await getJson();
  if (json.reviews_results?.reviews) {
    results.reviewsResult.push(...json.reviews_results.reviews);
    params.start ? (params.start += 10) : (params.start = 10);
  } else break;
  if (results.reviewsResult.length > reviewsLimit) break;
}
```

After, we run the `getResults` function and print all the received information in the console with the [`console.dir`](https://nodejs.org/api/console.html#consoledirobj-options) method, which allows you to use an object with the necessary parameters to change default output options:

```javascript
getResults().then((result) => console.dir(result, { depth: null }));
```

<h2 id='serp_api_output'>Output</h2>

```json
{
  "productResults":{
    "product_id":8757849604759506000,
    "title":"Apple iPhone 14 Pro Max - 128 GB - Space Black - Unlocked",
    "reviews":748,
    "rating":4.3
  },
  "reviewsResult":[
    {
      "position":1,
      "title":"13 Pro Max better in almost every way",
      "date":"October 11, 2022",
      "rating":2,
      "source":"Cody LaRocque · Review provided by Google",
      "content":"Great if you’re coming from an 11 or below. I upgraded from my 13 Pro Max as I do every year. Needless to say I am extremely underwhelmed and pretty dissatisfied with this years IPhone. My 13 pro max was better in almost every way, battery life being the major hit to me. \n" + "\n" + "No heavy gaming, streaming etc just daily text, call email etc. 
avg 4 hours per day screen-time. My 13 lasted almost a day and a half at this rate; my 14, I find myself needing a charge before I’m even off from my shift at work.\n" + "\n" + "The dynamic island is an over marketed, over hyped piece of useless software and does not function as cool as Apple made it appear. \n" + "\n" + "Always on display was fun for about 2 minutes setting it up, then immediately being turned always off because it’s way too bright and sucks power like you wouldn’t believe.\n" + "\n" + "All of apples key selling points are all the keys reasons I dislike this phone. Always on is a nightmare, battery life is a joke, dynamic island is useless, crash detection goes off on roller coasters, the cameras have very very little upside differences, the brightness of the screen only lasts for a couple seconds until it auto dims to conserve energy. \n" + "\n" + "I also feel like the overall build quality is lacking, I purchased the phone and the Apple leather case; this being the first time I’ve ever even used a case on my iPhone. My 13 lasted a year being dropped multiple times and didn’t even have a scratch. I dropped my 14 face down on a flat floor with the Apple leather case and it chipped the front corner. \n" + "\n" + "If you are coming from the 13. Don’t bother upgrading. If your coming from a 12 or below, consider upgrading to the now discounted 13 "
    },
    ... and other reviews
  ]
}
```

<h2 id='links'>Links</h2>

- [Code in the online IDE](https://replit.com/@MikhailZub/Scrape-Google-Shopping-Product-Reviews-with-NodeJS-SerpApi#index.js)
- [Google Product Reviews Results API](https://serpapi.com/reviews-results)

If you want other functionality added to this blog post or if you want to see some projects made with SerpApi, [write me a message](mailto:miha01012019@gmail.com).
--- <p style="text-align: center;">Join us on <a href="https://twitter.com/serp_api">Twitter</a> | <a href="https://www.youtube.com/channel/UCUgIHlYBOD3yA3yDIRhg_mg">YouTube</a></p> <p style="text-align: center;">Add a <a href="https://github.com/serpapi/public-roadmap/issues">Feature Request</a>💫 or a <a href="https://github.com/serpapi/public-roadmap/issues">Bug</a>🐞</p>
mikhailzub
1,268,625
How to Create an Online Shopping App
About 60 % of users around the globe are mobile users. This is so because smartphones are easy to use...
0
2022-11-23T11:03:31
https://dev.to/christinek989/how-to-create-an-online-shopping-app-1kem
mobile, shoppingapp, softwar, java
About 60% of users around the globe are mobile users. This is so because smartphones are easy to use and are more accessible. If so many people use mobile phones, [creating an online shopping app](https://addevice.io/blog/how-to-make-online-selling-app/) like Amazon must be the first and easiest investment that a business-minded person might think of.

If this is one of the first articles you are reading about mobile shopping app development, then you are in the right place. We will talk about:

1. Key trends to follow to create an online shopping app
2. Most popular shopping apps
3. Features that should be included
4. Tech Stack of a Shopping App
5. Cost to build your own shopping app.

Equipped with this information, you can go on with your search and come up with an idea that may bring you good money. I don't want to promise you millions, although that might also be the case. Just a small figure: the mobile e-commerce revenue was **$3.56 trillion in 2021**. Want your piece of the pie? Let's get started!

## Key trends to follow to create an online shopping app

In this part, I will give you a list of trends to follow in mobile shopping app development. You can even create a shopping app like Amazon if you are watchful of these trends. Have a look. Each trend may need deeper research and consideration:

- 5G technology
- [Progressive web apps](https://appmaster.io/blog/progressive-web-apps-pwa-vs-native-apps-which-type-is-better-in-2022)
- Wearable apps
- Edge computing
- Beacon technology
- Artificial intelligence and machine learning
- mCommerce
- On-demand mobile apps
- Blockchain
- Wallet for mobile
- Chatbots
- P2P apps
- The Internet of Things
- Virtual and augmented reality
- Smart hubs
- Biometric identification.

## Most popular shopping apps

Here is a list.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hs5ttkdfv5hat0vrbj40.png)

What is amazing in the list is that Amazon ranks only 4th and Alibaba is the 9th.
What makes a shopping app popular? Why do people download one shopping app more than others? [Forrester research](https://www.forrester.com/blogs/the-rising-bar-for-retailer-mobile-app-experiences-are-you-ready/) shows that there are 3 main reasons why some apps are more successful. These are speed, convenience, and personalized experience.

But my question about why some apps are more popular than others is still valid. After all, most apps recognize that they should provide a speedy, convenient, personalized experience. For example, why is Meesho ranking number one? When I was researching the topic, I found several reasons. One is that Meesho is easy to use. Second, it offers low prices. It also offers affordable shipping and returns. And it has a very specific market – India.

But what amazed me more was the idea that only companies with the right work culture and vision succeed. It may seem unimportant, but to me, exceptional individuals running the business determine its success. And that's true. Just read biographies of CEOs and you will see that they are the ones that push the work culture and vision at their companies. To name a few: Tim Cook (Apple), Satya Nadella (Microsoft), Andy Jassy (Amazon), Elon Musk (Tesla), Mark Zuckerberg (Meta (Facebook)).

## Features that should be included

You can read detailed lists of features to be included when you create an online shopping app elsewhere. That's not my task. I want to ask a simple question. What features bring you success? And here is what I found.

Simplicity is number one on the list. It should be super simple to sign up, navigate, add to a cart, get access to multiple payment options, add to wish lists, etc. To tell you frankly, the commerce market has created a lot of lazy-bones customers. And I mean that! Once they feel the slightest discomfort while creating a profile or browsing the images, they are likely to leave you.
The second thing that I found is that you have to stand out somehow. For example, you can offer discounts or coupons, or special membership offers. Easily accessible customer service is another thing to consider. Chatbots are not enough! You should have a team that cares!

And finally, a customized and interactive user experience. We live in the age of AI and Machine Learning. It's so easy to collect user data. If you make your offers based on consumer preference, you are likely to have them continue using your retail application regularly.

## Tech Stack of a Shopping App

It's hard to say which tech stack is the best for a shopping app because the best tools depend on several factors. For example, the complexity of an app is a determining factor. Some tools like Java are good for large projects with complex tasks. Another thing is niche characteristics. In the case of e-commerce, a combination of Angular and NodeJS frameworks works best. They are scalable, flexible, and highly available.

In order to decide which tech stack is good for your case, you should ask a number of questions.

- What is the number of users I expect?
- Is it better to buy or build code?
- What does your team know the best?
- Are there enough developers for that tech stack in the market?
- Is the package good enough in case you scale up?

## Cost to build your own shopping app.

Now we come to the big issue. How much money should you expect to spend? Sure, I will not give you the answer right away but I will come up with a list of shopping apps with their budgets. You can make your own judgment based on this data. Maybe you want to create a shopping app as big as Amazon. Or, maybe you want to build a small app just for your small town.

Let's have a look at some figures on the [cost of a mobile app](https://addevice.io/blog/how-much-does-it-cost-to-build-a-mobile-app/). The average cost of development for an e-commerce website like Amazon is about $60,000 – $80,000.
The approximate cost of the development of an app like Alibaba is USD 41,000. The total cost of a retail app may be as low as $10,000. You can find such rates in India.

## Wrapping up

Ok, so you now have an idea of what it is like to create an online shopping app. I highly recommend you keep researching and find out more information. But if you have a fabulous idea already, it's high time to put the idea into a nice and neat shopping app and jump into the crazy market to make crazy money. Those that don't try never win!
christinek989
1,268,647
Front-end Guide
Front-end Guide Credits: Illustration by @dev_lindseyk This guide has been cross-posted...
0
2022-11-23T11:43:43
https://dev.to/codelikeagirl29/front-end-guide-2h7k
webdev, programming, codenewbie
Front-end Guide == ![](https://res.cloudinary.com/codelikeagirl29/image/upload/v1675500036/lautaro-andreani-UYsBCu9RP3Y-unsplash_naaqdn.jpg) _Credits: Illustration by [@dev_lindseyk](https://lindseyk.dev)_ _This guide has been cross-posted on [Free Code Camp](https://medium.freecodecamp.com/grabs-front-end-guide-for-large-teams-484d4033cc41)._ This study guide is inspired by ["A Study Plan to Cure JavaScript Fatigue"](https://medium.freecodecamp.com/a-study-plan-to-cure-javascript-fatigue-8ad3a54f2eb1#.g9egaapps) and is mildly opinionated in the sense that we recommend certain libraries/frameworks to learn for each aspect of front end development, based on what is currently deemed most suitable. We explain why a certain library/framework/tool is chosen and provide links to learning resources to enable the reader to pick it up on their own. Alternative choices that may be better for other use cases are provided as well for reference and further self-exploration. If you are familiar with front end development and have been consistently keeping up with the latest developments, this guide will probably not be that useful to you. It is targeted at newcomers to front end. If your company is exploring a modern JavaScript stack as well, you may find this study plan useful to your company! Feel free to adapt it to your needs. We will update this study plan periodically, according to our latest work and choices. *- Web Team* **Pre-requisites** - Good understanding of core programming concepts. - Comfortable with basic command line actions and familiarity with source code version control systems such as Git. - Experience in web development. Have built server-side rendered web apps using frameworks like Ruby on Rails, Django, Express, etc. - Understanding of how the web works. Familiarity with web protocols and conventions like HTTP and RESTful APIs. 
## Table of Contents - [Single-page Apps (SPAs)](#single-page-apps-spas) - [New-age JavaScript](#new-age-javascript) - [User Interface](#user-interface---react) - [State Management](#state-management---fluxredux) - [Coding with Style](#coding-with-style---css-modules) - [Maintainability](#maintainability) - [Testing](#testing---jest--enzyme) - [Linting JavaScript](#linting-javascript---eslint) - [Linting CSS](#linting-css---stylelint) - [Formatting Code](#formatting-code---prettier) - [Types](#types---flow) - [Build System](#build-system---webpack) - [Package Management](#package-management---yarn) - [Continuous Integration](#continuous-integration) - [Hosting and CDN](#hosting-and-cdn) - [Deployment](#deployment) - [Monitoring](#monitoring) Certain topics can be skipped if you have prior experience in them. ## Single-page Apps (SPAs) A single-page application (SPA) is a web application or website that interacts with the user by dynamically rewriting the current web page with new data from the web server, instead of the default method of a web browser loading entire new pages. The goal is faster transitions that make the website feel more like a native app. In a SPA, a page refresh never occurs; instead, all necessary HTML, JavaScript, and CSS code is either retrieved by the browser with a single page load,[1] or the appropriate resources are dynamically loaded and added to the page as necessary, usually in response to user actions. Web developers these days refer to the products they build as web apps, rather than websites. While there is no strict difference between the two terms, web apps tend to be highly interactive and dynamic, allowing the user to perform actions and receive a response for their action. Traditionally, the browser receives HTML from the server and renders it. When the user navigates to another URL, a full-page refresh is required and the server sends fresh new HTML for the new page. This is called server-side rendering. 
However, in modern SPAs, client-side rendering is used instead. The browser loads the initial page from the server, along with the scripts (frameworks, libraries, app code) and stylesheets required for the whole app. When the user navigates to other pages, a page refresh is not triggered. The URL of the page is updated via the [HTML5 History API](https://developer.mozilla.org/en-US/docs/Web/API/History_API). New data required for the new page, usually in JSON format, is retrieved by the browser via [AJAX](https://developer.mozilla.org/en-US/docs/AJAX/Getting_Started) requests to the server. The SPA then dynamically updates the page with the data via JavaScript, which it has already downloaded in the initial page load. This model is similar to how native mobile apps work.

The benefits:

- The app feels more responsive and users do not see the flash between page navigations due to full-page refreshes.
- Fewer HTTP requests are made to the server, as the same assets do not have to be downloaded again for each page load.
- Clear separation of the concerns between the client and the server; you can easily build new clients for different platforms (e.g. mobile, chatbots, smart watches) without having to modify the server code. You can also modify the technology stack on the client and server independently, as long as the API contract is not broken.

The downsides:

- Heavier initial page load due to loading of framework, app code, and assets required for multiple pages.<sup><a href="#fn1" id="ref1">1</a></sup>
- There's an additional step to be done on your server, which is to configure it to route all requests to a single entry point and allow client-side routing to take over from there.
- SPAs are reliant on JavaScript to render content, but not all search engines execute JavaScript during crawling, and they may see empty content on your page. This inadvertently hurts the Search Engine Optimization (SEO) of your app.<sup><a href="#fn2" id="ref2">2</a></sup>
However, most of the time, when you are building apps, SEO is not the most important factor, as not all the content needs to be indexable by search engines. To overcome this, you can either server-side render your app or use services such as [Prerender](https://prerender.io/) to "render your javascript in a browser, save the static HTML, and return that to the crawlers". While traditional server-side rendered apps are still a viable option, a clear client-server separation scales better for larger engineering teams, as the client and server code can be developed and released independently. This is especially so when having multiple client apps hitting the same API server. As web developers are now building apps rather than pages, organization of client-side JavaScript has become increasingly important. In server-side rendered pages, it is common to use snippets of jQuery to add user interactivity to each page. However, when building large apps, just jQuery is insufficient. After all, jQuery is primarily a library for DOM manipulation and it's not a framework; it does not define a clear structure and organization for your app. __JavaScript frameworks__ have been created to provide higher-level abstractions over the DOM, allowing you to keep state in memory, out of the DOM. Using frameworks also brings the benefits of reusing recommended concepts and best practices for building apps. A new engineer on the team who is unfamiliar with the code base, but has experience with a framework, will find it easier to understand the code because it is organized in a structure that they are familiar with. Popular frameworks have a lot of tutorials and guides, and tapping on the knowledge and experience from colleagues and the community will help new engineers get up to speed. 
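The client-side routing model described above can be sketched in a few lines of plain JavaScript. This is illustrative only: the route table, view strings, and the `#app` container element are hypothetical, and a real SPA would use a framework's router rather than hand-rolling this.

```javascript
// Minimal sketch of client-side routing with the HTML5 History API.
// The routes and the #app container are made-up examples.
const routes = {
  "/": () => "<h1>Home</h1>",
  "/products": () => "<h1>Products</h1>",
};

// Pure helper: resolve a URL path to the markup the app should render.
function resolveView(path) {
  const view = routes[path];
  return view ? view() : "<h1>Not Found</h1>";
}

// Browser-only wiring: update the URL without a full-page refresh
// and re-render the single mounted page.
if (typeof window !== "undefined") {
  const render = (path) => {
    document.getElementById("app").innerHTML = resolveView(path);
  };

  window.navigate = (path) => {
    window.history.pushState({}, "", path); // change the URL, no page reload
    render(path);
  };

  // Support the browser's back/forward buttons.
  window.addEventListener("popstate", () => render(window.location.pathname));
}
```

Navigation calls `navigate("/products")` instead of letting the browser request a new page; only the markup for the new view changes, which is what makes transitions feel app-like.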
#### Study Links

- [Single Page App: advantages and disadvantages](http://stackoverflow.com/questions/21862054/single-page-app-advantages-and-disadvantages)
- [The (R)Evolution of Web Development](http://blog.isquaredsoftware.com/presentations/2016-10-revolution-of-web-dev/)
- [Here's Why Client Side Rendering Won](https://medium.freecodecamp.com/heres-why-client-side-rendering-won-46a349fadb52)

## New-age JavaScript

Before you dive into the various aspects of building a JavaScript web app, it is important to get familiar with the language of the web - JavaScript, or ECMAScript. JavaScript is an incredibly versatile language which you can also use to build [web servers](https://nodejs.org/en/), [native mobile apps](https://facebook.github.io/react-native/) and [desktop apps](https://electron.atom.io/).

Prior to 2015, the last major update was ECMAScript 5.1, in 2011. However, in recent years, JavaScript has suddenly seen a huge burst of improvements within a short span of time. In 2015, ECMAScript 2015 (previously called ECMAScript 6) was released and a ton of syntactic constructs were introduced to make writing code less unwieldy. If you are curious about it, Auth0 has written a nice article on the [history of JavaScript](https://auth0.com/blog/a-brief-history-of-javascript/). To this day, not all browsers have fully implemented the ES2015 specification. Tools such as [Babel](https://babeljs.io/) enable developers to write ES2015 in their apps, and Babel transpiles it down to ES5 for browser compatibility.

Being familiar with both ES5 and ES2015 is crucial. ES2015 is still relatively new and a lot of open source code and Node.js apps are still written in ES5. If you are doing debugging in your browser console, you might not be able to use ES2015 syntax. On the other hand, documentation and example code for many modern libraries that we will introduce later are written in ES2015.
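As a quick taste of what ES2015 syntax looks like, here is a small, runnable sample of the commonly used features (the `import`/`export` module syntax is omitted since it only works in a module context):

```javascript
// Arrow function, default parameter and template string in one line.
const greet = (name = 'world') => `Hello, ${name}!`;

// Classes: syntactic sugar over prototype-based inheritance.
class Counter {
  constructor(start) { this.count = start; }
  increment() { this.count += 1; return this.count; }
}

// Destructuring: pull properties out of an object into variables.
const { count } = new Counter(41); // count === 41

// Rest parameter: collect all arguments into an array.
const sum = (...nums) => nums.reduce((total, n) => total + n, 0);

// Spread operator: expand arrays in place.
const odds = [1, 3, 5];
const digits = [0, ...odds, 2, 4]; // [0, 1, 3, 5, 2, 4]
```

Each of these has an ES5 equivalent (e.g. `var greet = function (name) { ... }` with manual default handling), which is what Babel produces under the hood.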
Some developers use [babel-preset-env](https://github.com/babel/babel-preset-env) to enjoy the productivity boost from the syntactic improvements the future of JavaScript provides and we have been loving it so far. `babel-preset-env` intelligently determines which Babel plugins are necessary (which new language features are not supported and have to be transpiled) as browsers increase native support for more ES language features. If you prefer using language features that are already stable, you may find [babel-preset-stage-3](https://babeljs.io/docs/plugins/preset-stage-3/), which contains features with a complete specification that will most likely be implemented in browsers, to be more suitable.

Spend a day or two revising ES5 and exploring ES2015. The more heavily used features in ES2015 include:

- Arrows and Lexical This
- Classes
- Template Strings
- Destructuring
- Default/Rest/Spread operators
- Importing and Exporting modules

**Estimated Duration: 3-4 days.** You can learn or look up the syntax as you learn the other libraries and try building your own app.

#### Study Links

- [Learn JavaScript on Codecademy](https://www.codecademy.com/learn/learn-javascript)
- [Intro to ES6 on Codecademy](https://www.codecademy.com/learn/introduction-to-javascript)
- [Learn ES2015 on Babel](https://babeljs.io/learn-es2015/)
- [ES6 Katas](http://es6katas.org/)
- [You Don't Know JS](https://github.com/getify/You-Dont-Know-JS) (Advanced content, optional for beginners)
- [Answers to Front End Job Interview Questions — JavaScript](https://github.com/yangshun/front-end-interview-handbook/blob/master/questions/javascript-questions.md)

## User Interface - React

![react-icon](https://res.cloudinary.com/codelikeagirl29/image/upload/v1675471295/icons/react-icon_t7zeix.png)

If any JavaScript project has taken the front end ecosystem by storm in recent years, that would be [React](https://facebook.github.io/react/).
React is a library built and open-sourced by the smart people at Facebook. In React, developers write components for their web interface and compose them together. React brings about many radical ideas and encourages developers to [rethink best practices](https://www.youtube.com/watch?v=DgVS-zXgMTk). For many years, web developers were taught that it was a good practice to write HTML, JavaScript and CSS separately. React does the exact opposite, and encourages that you write your HTML and [CSS in your JavaScript](https://speakerdeck.com/vjeux/react-css-in-js) instead. This sounds like a crazy idea at first, but after trying it out, it actually isn't as weird as it sounds initially. The reason is that the front end development scene is shifting towards a paradigm of component-based development.

The features of React:

- **Declarative** - You describe what you want to see in your view and not how to achieve it. In the jQuery days, developers would have to come up with a series of steps to manipulate the DOM to get from one app state to the next. In React, you simply change the state within the component and the view will update itself according to the state. It is also easy to determine what the component will look like just by looking at the markup in the `render()` method.
- **Functional** - The view is a pure function of `props` and `state`. In most cases, a React component is defined by `props` (external parameters) and `state` (internal data). For the same `props` and `state`, the same view is produced. Pure functions are easy to test, and the same goes for functional components. Testing in React is made easy because a component's interfaces are well-defined and you can test the component by supplying different `props` and `state` to it and comparing the rendered output.
- **Maintainable** - Writing your view in a component-based fashion encourages reusability.
We find that defining a component's `propTypes` makes React code self-documenting as the reader can know clearly what is needed to use that component. Lastly, your view and logic are self-contained within the component, and should neither affect nor be affected by other components. That makes it easy to shift components around during large-scale refactoring, as long as the same `props` are supplied to the component.

- **High Performance** - You might have heard that React uses a virtual DOM (not to be confused with [shadow DOM](https://developer.mozilla.org/en-US/docs/Web/Web_Components/Shadow_DOM)) and it re-renders everything when there is a change in state. Why is there a need for a virtual DOM? While modern JavaScript engines are fast, reading from and writing to the DOM is slow. React keeps a lightweight virtual representation of the DOM in memory. "Re-rendering everything" is a misleading term. In React, it actually refers to re-rendering the in-memory representation of the DOM, not the actual DOM itself. When there's a change in the underlying data of the component, a new virtual representation is created, and compared against the previous representation. The difference (the minimal set of changes required) is then patched to the real browser DOM.
- **Ease of Learning** - Learning React is pretty simple. The React API surface is relatively small compared to [Angular's](https://angular.io/docs/ts/latest/api/); there are only a few APIs to learn and they do not change often. The React community is one of the largest, and along with that comes a vibrant ecosystem of tools, open-sourced UI components, and a ton of great resources online to get you started on learning React.
- **Developer Experience** - There are a number of tools that improve the development experience with React. [React Developer Tools](https://github.com/facebook/react-devtools) is a browser extension that allows you to inspect your component, view and manipulate its `props` and `state`.
[Hot reloading](https://github.com/gaearon/react-hot-loader) with webpack allows you to view changes to your code in your browser, without having to refresh the browser. Front end development involves a lot of tweaking code, saving and then refreshing the browser. Hot reloading helps you by eliminating the last step. When there are library updates, Facebook provides [codemod scripts](https://github.com/reactjs/react-codemod) to help you migrate your code to the new APIs. This makes the upgrading process relatively pain-free. Kudos to the Facebook team for their dedication in making the development experience with React great.

<br>

![React Devtools Demo](images/react-devtools-demo.gif)

Over the years, new view libraries that are even more performant than React have emerged. React may not be the fastest library out there, but in terms of the ecosystem, overall usage experience and benefits, it is still one of the greatest. Facebook is also channeling efforts into making React even faster with a [rewrite of the underlying reconciliation algorithm](https://github.com/acdlite/react-fiber-architecture). The concepts that React introduced have taught us how to write better code and more maintainable web apps, and made us better engineers. We like that.

We recommend going through the [tutorial](https://facebook.github.io/react/tutorial/tutorial.html) on building a tic-tac-toe game on the React homepage to get a feel of what React is and what it does. For more in-depth learning, check out the Egghead course, [Build Your First Production Quality React App](https://egghead.io/courses/build-your-first-production-quality-react-app). It covers some advanced concepts and real-world usages that are not covered by the React documentation. [Create React App](https://github.com/facebookincubator/create-react-app) by Facebook is a tool to scaffold a React project with minimal configuration and is highly recommended for starting new React projects.
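React itself needs a build step (JSX, Babel) to run, but its core idea — the view is a pure function of `props` and `state` — can be illustrated in dependency-free JavaScript. The names below are made up, and a real React component would return elements rather than an HTML string, but the property that makes components easy to test is the same:

```javascript
// The view is a pure function of props and state: same inputs, same output.
// (Illustrative sketch only — not actual React code.)
function counterView(props, state) {
  return `<div>${props.label}: ${state.count}</div>`;
}

// "Re-rendering" is just calling the function again with the new state;
// React's job is to diff the two outputs and patch the real DOM minimally.
const before = counterView({ label: 'Clicks' }, { count: 0 });
const after = counterView({ label: 'Clicks' }, { count: 1 });
```

Because nothing here depends on hidden state or the DOM, testing reduces to supplying `props` and `state` and comparing the output — exactly the workflow described above for testing React components.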
React is a library, not a framework, and does not deal with the layers below the view - the app state. More on that later.

**Estimated Duration: 3-4 days.** Try building simple projects like a to-do list or a Hacker News clone with pure React. You will slowly gain an appreciation for it and perhaps face some problems along the way that aren't solved by React, which brings us to the next topic...

#### Study Links

- [React Official Tutorial](https://facebook.github.io/react/tutorial/tutorial.html)
- [Egghead Course - Build Your First Production Quality React App](https://egghead.io/courses/build-your-first-production-quality-react-app)
- [Simple React Development in 2017](https://hackernoon.com/simple-react-development-in-2017-113bd563691f)
- [Presentational and Container Components](https://medium.com/@dan_abramov/smart-and-dumb-components-7ca2f9a7c7d0#.5iexphyg5)

#### Alternatives

- [Angular](https://angular.io/)
- [Ember](https://www.emberjs.com/)
- [Vue](https://vuejs.org/)
- [Cycle](https://cycle.js.org/)

## State Management - Flux/Redux

![redux](https://res.cloudinary.com/codelikeagirl29/image/upload/v1675471490/icons/reduz_yczpni.png)

As your app grows bigger, you may find that the app structure becomes a little messy. Components throughout the app may have to share and display common data but there is no elegant way to handle that in React. After all, React is just the view layer; it does not dictate how you structure the other layers of your app, such as the model and the controller, in traditional MVC paradigms. In an effort to solve this, Facebook invented Flux, an app architecture that complements React's composable view components by utilizing a unidirectional data flow. Read more about how Flux works [here](https://facebook.github.io/flux/docs/in-depth-overview.html). In summary, the Flux pattern has the following characteristics:

- **Unidirectional data flow** - Makes the app more predictable as updates can be tracked easily.
- **Separation of concerns** - Each part in the Flux architecture has clear responsibilities and is highly decoupled.
- **Works well with declarative programming** - The store can send updates to the view without specifying how to transition views between states.

As Flux is not a framework per se, developers have tried to come up with many implementations of the Flux pattern. Eventually, a clear winner emerged, which was [Redux](http://redux.js.org/). Redux combines the ideas from Flux, the [Command pattern](https://www.wikiwand.com/en/Command_pattern) and the [Elm architecture](https://guide.elm-lang.org/architecture/), and is the de facto state management library developers use with React these days. Its core concepts are:

- App **state** is described by a single plain old JavaScript object (POJO).
- Dispatch an **action** (also a POJO) to modify the state.
- A **reducer** is a pure function that takes in the current state and an action to produce a new state.

The concepts sound simple, but they are really powerful as they enable apps to:

- Have their state rendered on the server and booted up on the client.
- Trace, log and backtrack changes in the whole app.
- Implement undo/redo functionality easily.

The creator of Redux, [Dan Abramov](https://github.com/gaearon), has taken great care in writing up detailed documentation for Redux, along with creating comprehensive video tutorials for learning [basic](https://egghead.io/courses/getting-started-with-redux) and [advanced](https://egghead.io/courses/building-react-applications-with-idiomatic-redux) Redux. They are extremely helpful resources for learning Redux.

**Combining View and State**

While Redux does not necessarily have to be used with React, it is highly recommended as they play very well with each other. React and Redux have a lot of ideas and traits in common:

- **Functional composition paradigm** - React composes views (pure functions) while Redux composes pure reducers (also pure functions).
Output is predictable given the same set of inputs.
- **Easy To Reason About** - You may have heard this term many times but what does it actually mean? We interpret it as having control and understanding over our code - our code behaves in ways we expect it to, and when there are problems, we can find them easily. In our experience, React and Redux make debugging simpler. As the data flow is unidirectional, tracing the flow of data (server responses, user input events) is easier and it is straightforward to determine which layer the problem occurs in.
- **Layered Structure** - Each layer in the app / Flux architecture is a pure function, and has clear responsibilities. It is relatively easy to write tests for pure functions. You have to centralize changes to your app within the reducer, and the only way to trigger a change is to dispatch an action.
- **Development Experience** - A lot of effort has gone into creating tools to help in debugging and inspecting the app during development, such as [Redux DevTools](https://github.com/gaearon/redux-devtools).

<br>

![Redux Devtools Demo](images/redux-devtools-demo.gif)

Your app will likely have to deal with async calls like making remote API requests. [redux-thunk](https://github.com/gaearon/redux-thunk) and [redux-saga](https://github.com/redux-saga/redux-saga) were created to solve those problems. They may take some time to understand as they require an understanding of functional programming and generators. Our advice is to deal with them only when you need to. [react-redux](https://github.com/reactjs/react-redux) is the official React binding for Redux and is very simple to learn.

**Estimated Duration: 4 days.** The egghead courses can be a little time-consuming but they are worth spending time on. After learning Redux, you can try incorporating it into the React projects you have built. Does Redux solve some of the state management issues you were struggling with in pure React?
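Since reducers are just pure functions, the core Redux concepts can be sketched without the library itself. The action type names below are illustrative:

```javascript
// The single app state: a plain old JavaScript object (POJO).
const initialState = { count: 0 };

// Reducer: a pure function of (state, action) -> new state. It never
// mutates the existing state; it returns a new object instead.
function counterReducer(state = initialState, action) {
  switch (action.type) {
    case 'INCREMENT':
      return { ...state, count: state.count + 1 };
    case 'RESET':
      return initialState;
    default:
      return state; // unknown actions leave the state untouched
  }
}

// Dispatching is conceptually just feeding actions through the reducer;
// Redux's store does this for you and notifies subscribers of the new state.
let state = counterReducer(undefined, { type: '@@INIT' });
state = counterReducer(state, { type: 'INCREMENT' });
state = counterReducer(state, { type: 'INCREMENT' });
// state is now { count: 2 }
```

Because every state transition flows through this one function, tracing, logging and replaying changes (the undo/redo and DevTools features mentioned above) fall out naturally.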
#### Study Links

- [Flux Homepage](http://facebook.github.io/flux)
- [Redux Homepage](http://redux.js.org/)
- [Egghead Course - Getting Started with Redux](https://egghead.io/courses/getting-started-with-redux)
- [Egghead Course - Build React Apps with Idiomatic Redux](https://egghead.io/courses/building-react-applications-with-idiomatic-redux)
- [React Redux Links](https://github.com/markerikson/react-redux-links)
- [You Might Not Need Redux](https://medium.com/@dan_abramov/you-might-not-need-redux-be46360cf367)

#### Alternatives

- [MobX](https://github.com/mobxjs/mobx)

## Coding with Style - CSS Modules

![css-logo](https://res.cloudinary.com/codelikeagirl29/image/upload/v1675471694/icons/css_cfzswz.png)

CSS (Cascading Style Sheets) are rules to describe how your HTML elements look. Writing good CSS is hard. It usually takes many years of experience and the frustration of shooting yourself in the foot before one is able to write maintainable and scalable CSS. CSS, having a global namespace, is fundamentally designed for web documents, and not really for web apps that favor a component-based architecture. Hence, experienced front end developers have designed methodologies to guide people on how to write organized CSS for complex projects, such as [SMACSS](https://smacss.com/), [BEM](http://getbem.com/), [SUIT CSS](http://suitcss.github.io/), etc.

However, the encapsulation of styles that these methodologies bring about is artificially enforced by conventions and guidelines. They break the moment developers do not follow them. As you might have realized by now, the front end ecosystem is saturated with tools, and unsurprisingly, tools have been invented to [partially solve some of the problems](https://speakerdeck.com/vjeux/react-css-in-js) with writing CSS at scale. "At scale" means that many developers are working on the same large project and touching the same stylesheets.
There is no community-agreed approach on writing [CSS in JS](https://github.com/MicheleBertoli/css-in-js) at the moment, and we are hoping that one day a winner will emerge, just like Redux did among all the Flux implementations. For now, we are banking on [CSS Modules](https://github.com/css-modules/css-modules). CSS Modules is an improvement over existing CSS that aims to fix the problem of the global namespace in CSS; it enables you to write styles that are local by default and encapsulated to your component. This feature is achieved via tooling. With CSS Modules, large teams can write modular and reusable CSS without fear of conflict or overriding other parts of the app. However, at the end of the day, CSS Modules are still compiled into normal globally-namespaced CSS that browsers recognize, and it is still important to learn and understand how raw CSS works.

If you are a total beginner to CSS, Codecademy's [HTML & CSS course](https://www.codecademy.com/learn/learn-html-css) will be a good introduction for you. Next, read up on the [Sass preprocessor](http://sass-lang.com/), an extension of the CSS language which adds syntactic improvements and encourages style reusability. Study the CSS methodologies mentioned above, and lastly, CSS Modules.

**Estimated Duration: 3-4 days.** Try styling up your app using the SMACSS/BEM approach and/or CSS Modules.
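To get a feel for what CSS Modules look like in practice, here is a sketch. The file and class names are made up, and the `import` of a CSS file only works through a build tool (e.g. webpack's `css-loader` with modules enabled), so this is not runnable on its own:

```javascript
/* Button.css (a CSS Module):
   .error { color: red; }
*/

// Button.js — this import is resolved by the build tool, not the browser.
import styles from './Button.css';

// styles.error is a generated, locally-scoped class name (something like
// "Button__error___1x9a2"), so another component's .error class can never
// clash with this one.
element.className = styles.error;
```

The scoping is done entirely at build time: the tooling rewrites both the class name in the CSS output and the value exported to JavaScript, which is why the result is still plain, globally-namespaced CSS by the time it reaches the browser.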
#### Study Links

- [Learn HTML & CSS course on Codecademy](https://www.codecademy.com/learn/learn-html-css)
- [Intro to HTML/CSS on Khan Academy](https://www.khanacademy.org/computing/computer-programming/html-css)
- [SMACSS](https://smacss.com/)
- [BEM](http://getbem.com/introduction/)
- [SUIT CSS](http://suitcss.github.io/)
- [CSS Modules Specification](https://github.com/css-modules/css-modules)
- [Sass Homepage](http://sass-lang.com/)
- [Answers to Front End Job Interview Questions — HTML](https://github.com/yangshun/tech-interview-handbook/blob/master/front-end/interview-questions.md#html-questions)
- [Answers to Front End Job Interview Questions — CSS](https://github.com/yangshun/tech-interview-handbook/blob/master/front-end/interview-questions.md#css-questions)

#### Alternatives

- [JSS](https://github.com/cssinjs/jss)
- [Styled Components](https://github.com/styled-components/styled-components)

## Maintainability

Code is read more frequently than it is written. Developers highly value the readability, maintainability and stability of code, and there are a few ways to achieve that: extensive testing, a consistent coding style, and typechecking. When you are on a team, sharing the same practices becomes really important. Check out these [JavaScript Project Guidelines](https://github.com/wearehive/project-guidelines) for instance.

## Testing - Jest + Enzyme

![jest](https://res.cloudinary.com/codelikeagirl29/image/upload/v1675472001/icons/jest_vjoc9i.png)

[Jest](http://facebook.github.io/jest/) is a testing library by Facebook that aims to make the process of testing pain-free. As with other Facebook projects, it provides a great development experience out of the box. Tests can be run in parallel, resulting in shorter run times. During watch mode, by default, only the tests for the changed files are run. One particular feature we like is "Snapshot Testing".
Jest can take the generated output of your React components and Redux state and save it as serialized files, so you wouldn't have to manually come up with the expected output yourself. Jest also comes with built-in mocking, assertion and test coverage tools. One library to rule them all!

![Jest Demo](https://res.cloudinary.com/codelikeagirl29/image/upload/v1675487232/icons/0_U_1Ch8vmzySSPUh1_1_g9fuml.gif)

React comes with some testing utilities, but [Enzyme](http://airbnb.io/enzyme/) by Airbnb makes it easier to generate, assert, manipulate and traverse your React components' output with a jQuery-like API. It is recommended that Enzyme be used to test React components.

Jest and Enzyme make writing front end tests fun and easy. When writing tests becomes enjoyable, developers write more tests. It also helps that React components and Redux actions/reducers are relatively easy to test because of clearly defined responsibilities and interfaces. For React components, we can test that given some `props`, the desired DOM is rendered, and that callbacks are fired upon certain simulated user interactions. For Redux reducers, we can test that given a prior state and an action, a resulting state is produced.

The documentation for Jest and Enzyme is pretty concise, and it should be sufficient to learn them by reading it.

**Estimated Duration: 2-3 days.** Try writing Jest + Enzyme tests for your React + Redux app!
#### Study Links

- [Jest Homepage](http://facebook.github.io/jest/)
- [Testing React Applications with Jest](https://auth0.com/blog/testing-react-applications-with-jest/)
- [Enzyme Homepage](http://airbnb.io/enzyme/)
- [Enzyme: JavaScript Testing utilities for React](https://medium.com/airbnb-engineering/enzyme-javascript-testing-utilities-for-react-a417e5e5090f)

#### Alternatives

- [AVA](https://github.com/avajs/ava)
- [Karma](https://karma-runner.github.io/)

## Linting JavaScript - ESLint

![eslint](https://res.cloudinary.com/codelikeagirl29/image/upload/v1675472104/icons/eslint_tbc6qz.png)

A linter is a tool that statically analyzes code and finds problems with it, potentially preventing bugs/runtime errors while also enforcing a coding style. Time is saved during pull request reviews when reviewers do not have to leave nitpicky comments on coding style.

[ESLint](http://eslint.org/) is a tool for linting JavaScript code that is highly extensible and customizable. Teams can write their own lint rules to enforce their custom styles. Most developers use Airbnb's [`eslint-config-airbnb`](https://www.npmjs.com/package/eslint-config-airbnb) preset, which has already been configured with the common good coding style in the [Airbnb JavaScript style guide](https://github.com/airbnb/javascript).

For the most part, using ESLint is as simple as tweaking a configuration file in your project folder. There's nothing much to learn about ESLint if you're not writing new rules for it. Just be aware of the errors when they surface and Google them to find out the recommended style.

**Estimated Duration: 1/2 day.** Nothing much to learn here. Add ESLint to your project and fix the linting errors!
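The configuration file mentioned above can be this small. A minimal `.eslintrc` sketch, assuming `eslint` and `eslint-config-airbnb` (plus its peer dependencies) are installed — the `rules` section shows how a team can override individual rules on top of the preset:

```json
{
  "extends": "airbnb",
  "env": {
    "browser": true,
    "jest": true
  },
  "rules": {
    "no-console": "warn"
  }
}
```

With this in place, running `eslint` over your source directory (typically via an npm script) reports every violation of the Airbnb style guide, downgrading `console` usage to a warning as configured.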
#### Study Links

- [ESLint Homepage](http://eslint.org/)
- [Airbnb JavaScript Style Guide](https://github.com/airbnb/javascript)

#### Alternatives

- [Standard](https://github.com/standard/standard)
- [JSHint](http://jshint.com/)
- [XO](https://github.com/xojs/xo)

## Linting CSS - stylelint

![stylelint](https://cdn.rawgit.com/grab/front-end-guide/master/images/stylelint-logo.svg)

As mentioned earlier, good CSS is notoriously hard to write. Using static analysis tools on CSS can help maintain our CSS code quality and coding style. For linting CSS, we use stylelint. Like ESLint, stylelint is designed in a very modular fashion, allowing developers to turn rules on/off and write custom plugins for it. Besides CSS, stylelint is able to parse SCSS and has experimental support for Less, which lowers the barrier for most existing code bases to adopt it.

Once you have learnt ESLint, learning stylelint would be effortless considering their similarities. stylelint is currently being used by big companies like [Facebook](https://code.facebook.com/posts/879890885467584/improving-css-quality-at-facebook-and-beyond/), [GitHub](https://github.com/primer/stylelint-config-primer) and [WordPress](https://github.com/WordPress-Coding-Standards/stylelint-config-wordpress).

One downside of stylelint is that the autofix feature is not fully mature yet, and is only able to fix a limited number of rules. However, this issue should improve with time.

**Estimated Duration: 1/2 day.** Nothing much to learn here. Add stylelint to your project and fix the linting errors!
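As with ESLint, setup is a small config file. A minimal `.stylelintrc` sketch, assuming the `stylelint-config-standard` preset is installed — the `rules` section again shows a per-team override on top of the preset:

```json
{
  "extends": "stylelint-config-standard",
  "rules": {
    "indentation": 2
  }
}
```

Running `stylelint` over your stylesheets (again, typically via an npm script) then reports any deviation from the standard config plus your overrides.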
#### Study Links

- [stylelint Homepage](https://stylelint.io/)
- [Lint your CSS with stylelint](https://css-tricks.com/stylelint/)

#### Alternatives

- [Sass Lint](https://github.com/sasstools/sass-lint)
- [CSS Lint](http://csslint.net/)

## Formatting Code - Prettier

![](https://res.cloudinary.com/codelikeagirl29/image/upload/v1675487469/icons/prettier_rsycrj.png)

Another tool for enforcing a consistent coding style for JavaScript and CSS is [Prettier](https://github.com/prettier/prettier). In most cases, it is recommended to set up Prettier on top of ESLint and stylelint and integrate it into the workflow. This allows the code to be formatted into a consistent style automatically via commit hooks, so that developers do not need to spend time formatting the coding style manually. Note that Prettier only enforces coding style, but does not check for logic errors in the code. Hence it is not a replacement for linters.

**Estimated Duration: 1/2 day.** Nothing much to learn here. Add Prettier to your project and add hooks to enforce the coding style!

#### Study Links

- [Prettier Homepage](https://prettier.io/)
- [Prettier GitHub repo](https://github.com/prettier/prettier)
- [Comparison between tools that allow you to use ESLint and Prettier together](https://gist.github.com/yangshun/318102f525ec68033bf37ac4a010eb0c)

## Types - Flow

<img alt="Flow Logo" src="https://cdn.rawgit.com/grab/front-end-guide/master/images/flow-logo.png" width="256px" />

Static typing brings many benefits when writing apps. Types can catch common bugs and errors in your code early. They also serve as a form of documentation and improve the readability of your code. As a code base grows larger, we see the importance of types, as they give us greater confidence when we do refactoring. It is also easier to onboard new members of the team to the project when it is clear what kind of values each object holds and what each function expects.
Adding types to your code comes with the trade-off of increased verbosity and a learning curve for the syntax. But this learning cost is paid upfront and amortized over time. In complex projects where the maintainability of the code matters and the people working on it change over time, adding types to the code brings more benefits than disadvantages. Recently, I had to fix a bug in a code base that I hadn't touched in months. It was thanks to types that I could easily refresh myself on what the code was doing, and they gave me confidence in the fix I made.

The two biggest contenders in adding static types to JavaScript are [Flow](https://flow.org/) (by Facebook) and [TypeScript](https://www.typescriptlang.org/) (by Microsoft). To date, there is no clear winner in the battle. For now, we have made the choice of using Flow. We find that Flow has a lower learning curve compared to TypeScript and it requires relatively less effort to migrate an existing code base to Flow. Being built by Facebook, Flow has better integration with the React ecosystem out of the box. [James Kyle](https://twitter.com/thejameskyle), one of the authors of Flow, has [written](http://thejameskyle.com/adopting-flow-and-typescript.html) a comparison between adopting Flow and TypeScript.

Anyway, it is not extremely difficult to move from Flow to TypeScript as the syntax and semantics are quite similar, and we will re-evaluate the situation in time to come. After all, using one is better than not using any at all. Flow recently revamped their homepage and it's pretty neat now!

**Estimated Duration: 1 day.** Flow is pretty simple to learn as the type annotations feel like a natural extension of the JavaScript language. Add Flow annotations to your project and embrace the power of type systems.
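A small sketch of what Flow annotations look like. The function names are illustrative; the annotations are checked by the Flow binary and stripped by Babel before the code runs, so this is not plain runnable JavaScript:

```javascript
// @flow
// Parameter and return types read as ": type" annotations.
function totalPrice(unitPrice: number, quantity: number): number {
  return unitPrice * quantity;
}

totalPrice(15, 3); // ok
// totalPrice(15, '3'); // Flow flags this call: string is incompatible with number
```

The second call is exactly the class of bug that would otherwise surface only at runtime (or silently, via JavaScript's implicit coercion) — the type checker catches it before the code ever runs.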
#### Study Links

- [Flow Homepage](https://flow.org/)
- [TypeScript vs Flow](https://github.com/niieani/typescript-vs-flowtype)

#### Alternatives

- [TypeScript](https://www.typescriptlang.org/)

## Build System - webpack

<img alt="webpack Logo" src="https://cdn.rawgit.com/grab/front-end-guide/master/images/webpack-logo.svg" width="256px" />

This part will be kept short as setting up webpack can be a tedious process and might be a turn-off to developers who are already overwhelmed by the barrage of new things they have to learn for front end development. In a nutshell, [webpack](https://webpack.js.org/) is a module bundler that compiles a front end project and its dependencies into a final bundle to be served to users. Usually, projects will already have the webpack configuration set up and developers rarely have to change it. Having an understanding of webpack is still good to have in the long run. It is due to webpack that features like hot reloading and CSS modules are made possible.

We have found the [webpack walkthrough](https://survivejs.com/webpack/foreword/) by SurviveJS to be the best resource on learning webpack. It is a good complement to the official documentation and we recommend following the walkthrough first and referring to the documentation later when the need for further customization arises.

**Estimated Duration: 2 days (Optional).**

#### Study Links

- [webpack Homepage](https://webpack.js.org/)
- [SurviveJS - Webpack: From apprentice to master](https://survivejs.com/webpack/foreword/)

#### Alternatives

- [Rollup](https://rollupjs.org/)
- [Browserify](http://browserify.org/)
- [Parcel](https://parceljs.org/)

## Package Management - Yarn

<img alt="Yarn Logo" src="https://cdn.rawgit.com/grab/front-end-guide/master/images/yarn-logo.png" width="256px" />

If you take a peek into your `node_modules` directory, you will be appalled by the number of directories that are contained in it. Each babel plugin, lodash function, is a package on its own.
When you have multiple projects, these packages are duplicated across each project and they are largely similar. Each time you run `npm install` in a new project, these packages are downloaded over and over again even though they already exist in some other project on your computer.

There was also the problem of non-determinism in the packages installed via `npm install`. Some of our CI builds failed because, at the point in time when the CI server installed the dependencies, it pulled in minor updates to some packages that contained breaking changes. This would not have happened if library authors respected [semver](http://semver.org/) and engineers did not assume that API contracts would be respected all the time.

[Yarn](https://yarnpkg.com/) solves these problems. The issue of non-determinism of installed packages is handled via a `yarn.lock` file, which ensures that every install results in the exact same file structure in `node_modules` across all machines. Yarn utilizes a global cache directory within your machine, and packages that have been downloaded before do not have to be downloaded again. This also enables offline installation of dependencies!

The most common Yarn commands can be found [here](https://yarnpkg.com/en/docs/usage). Most other Yarn commands are similar to their `npm` equivalents and it is fine to use the `npm` versions instead. One of our favorite commands is `yarn upgrade-interactive`, which makes updating dependencies a breeze, especially since modern JavaScript projects require so many dependencies these days. Do check it out!

npm@5.0.0 was [released in May 2017](https://github.com/npm/npm/releases/tag/v5.0.0) and it seems to address many of the issues that Yarn aims to solve. Do keep an eye on it!
**Estimated Duration: 2 hours.**

#### Study Links

- [Yarn Homepage](https://yarnpkg.com/)
- [Yarn: A new package manager for JavaScript](https://code.facebook.com/posts/1840075619545360)

#### Alternatives

- [Good old npm](https://github.com/npm/npm/releases/tag/v5.0.0)

## Continuous Integration

We used [Travis CI](https://travis-ci.com/) for our continuous integration (CI) pipeline. Travis is a highly popular CI on GitHub and its [build matrix](https://docs.travis-ci.com/user/customizing-the-build#Build-Matrix) feature is useful for repositories which contain multiple projects. We configured Travis to do the following:

- Run linting for the project.
- Run unit tests for the project.
- If the tests pass:
  - Test coverage generated by Jest is uploaded to [Codecov](https://codecov.io/).
  - Generate a production bundle with webpack into a `build` directory.
  - `tar` the `build` directory as `<hash>.tar` and upload it to an S3 bucket which stores all our tar builds.
- Post a notification to Slack to inform about the Travis build result.

#### Study Links

- [Travis Homepage](https://travis-ci.com/)
- [Codecov Homepage](https://codecov.io/)

#### Alternatives

- [Jenkins](https://jenkins.io/)
- [CircleCI](https://circleci.com/)
- [GitLab CI/CD](https://about.gitlab.com/product/continuous-integration/)

## Hosting and CDN

Traditionally, web servers that receive a request for a webpage will render the contents on the server, and return an HTML page with dynamic content meant for the requester. This is known as server-side rendering. As mentioned earlier in the section on Single-page Apps, modern web applications do not involve server-side rendering, and it is sufficient to use a web server that serves static asset files. Nginx and Apache are possible options and not much configuration is required to get things up and running. The caveat is that the web server will have to be configured to route all requests to a single entry point and allow client-side routing to take over.
The flow for front end routing goes like this:

1. Web server receives an HTTP request for a particular route, for example `/user/john`.
1. Regardless of which route the server receives, serve up `index.html` from the static assets directory.
1. The `index.html` should contain scripts that load up a JavaScript framework/library that handles client-side routing.
1. The client-side routing library reads the current route, and communicates to the MVC (or equivalent where relevant) framework about the current route.
1. The MVC JavaScript framework renders the desired view based on the route, possibly after fetching data from an API if required. For example, load up `UsersController`, fetch user data for the username `john` as JSON, combine the data with the view, and render it on the page.

A good practice for serving static content is to use caching and to put it on a CDN. We use [Amazon Simple Storage Service (S3)](https://aws.amazon.com/s3/) for hosting our static website content and [Amazon CloudFront](https://aws.amazon.com/cloudfront/) as the CDN. We find that it is an affordable and reliable solution that meets our needs. S3 provides the option to "Use this bucket to host a website", which essentially directs the requests for all routes to the root of the bucket, which means we do not need our own web servers with special routing configurations.

Other than hosting the website, we also use S3 to host the build `.tar` files generated from each successful CI build.

#### Study Links

- [Amazon S3 Homepage](https://aws.amazon.com/s3/)
- [Hosting a Static Website on Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html)

#### Alternatives

- [Google Cloud Platform](https://cloud.google.com/storage/docs/hosting-static-website)
- [Now](https://zeit.co/now)
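To make the client-side half of that routing flow concrete, here is a rough sketch of steps 4-5. The route table, the controller names, and the `matchRoute` helper are all illustrative assumptions, not part of any specific routing library:

```typescript
// Hypothetical sketch of client-side route matching: given the current
// path, pick a "controller" and extract any parameters from the route.
type RouteMatch = { controller: string; params: Record<string, string> };

const routes: Array<{ pattern: RegExp; controller: string; paramName?: string }> = [
  { pattern: /^\/user\/([^/]+)$/, controller: "UsersController", paramName: "username" },
  { pattern: /^\/$/, controller: "HomeController" },
];

function matchRoute(path: string): RouteMatch | null {
  for (const route of routes) {
    const match = route.pattern.exec(path);
    if (match) {
      const params: Record<string, string> = {};
      if (route.paramName && match[1] !== undefined) {
        params[route.paramName] = match[1];
      }
      return { controller: route.controller, params };
    }
  }
  return null; // usually rendered as a client-side "not found" view
}
```

With this sketch, `matchRoute("/user/john")` yields `UsersController` with a `username` parameter of `"john"`, which is exactly the hand-off described in step 5 above.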
We used [Ansible Tower](https://www.ansible.com/tower), a powerful automation tool that enables us to deploy our builds easily.

As mentioned earlier, all our commits, upon successful build, are uploaded to a central S3 bucket for builds. We follow semver for our releases and have commands to automatically generate release notes for the latest release. When it is time to release, we run a command to tag the latest commit and push to our code hosting environment. Travis will run the CI steps on that tagged commit and upload a tar file (such as `1.0.1.tar`) with the version to our S3 bucket for builds.

On Tower, we simply have to specify the name of the `.tar` we want to deploy to our hosting bucket, and Tower does the following:

1. Download the desired `.tar` file from our builds S3 bucket.
1. Extract the contents and swap in the configuration file for the specified environment.
1. Upload the contents to the hosting bucket.
1. Post a notification to Slack to inform about the successful deployment.

This whole process is done in under 30 seconds and using Tower has made deployments and rollbacks easy. If we realize that a faulty deployment has occurred, we can simply find the previous stable tag and deploy it.

#### Study Links

- [Ansible Tower Homepage](https://www.ansible.com/tower)

#### Alternatives

- [Jenkins](https://jenkins.io/)

## Monitoring

After shipping the product, you would also want to monitor it for any errors. Apart from network level monitoring from our CDN service provider and hosting provider, we use [Sentry](https://sentry.io/) to monitor errors that originate from the app logic.

#### Study Links

- [Sentry Homepage](https://sentry.io/)

### The Journey has Just Begun

Congratulations on making it this far! Front end development today is [hard](https://hackernoon.com/how-it-feels-to-learn-javascript-in-2016-d3a717dd577f), but it is also more interesting than before, and more possibilities are within our reach.
What was covered here so far will help any new engineer joining a web team get up to speed with these technologies pretty quickly. There are many more things to be learnt, but building up a solid foundation in the essentials will aid in learning the rest of the technologies. This helpful [front end web developer roadmap](https://github.com/kamranahmedse/developer-roadmap#-front-end-roadmap) shows the alternative technologies available for each aspect.

We made our technical decisions based on what was important in a rapidly growing IT world - maintainability and stability of the code base. These decisions may or may not apply to smaller teams and projects. Do evaluate what works best for you and your company.

As the front end ecosystem grows, we are actively exploring, experimenting and evaluating how new technologies can make us a more efficient team and improve our productivity. I hope that this post has given you insights into some of the front end technologies developers use frequently.
### More Reading

**General**

- [State of the JavaScript Landscape: A Map for Newcomers](http://www.infoq.com/articles/state-of-javascript-2016)
- [The Hitchhiker's guide to the modern front end development workflow](http://marcobotto.com/the-hitchhikers-guide-to-the-modern-front-end-development-workflow/)
- [How it feels to learn JavaScript in 2016](https://hackernoon.com/how-it-feels-to-learn-javascript-in-2016-d3a717dd577f#.tmy8gzgvp)
- [Roadmap to becoming a web developer in 2017](https://github.com/kamranahmedse/developer-roadmap#-frontend-roadmap)
- [Modern JavaScript for Ancient Web Developers](https://trackchanges.postlight.com/modern-javascript-for-ancient-web-developers-58e7cae050f9)

---

**Other Study Plans**

- [A Study Plan To Cure JavaScript Fatigue](https://medium.freecodecamp.com/a-study-plan-to-cure-javascript-fatigue-8ad3a54f2eb1#.c0wnrrcwd)
- [JS Stack from Scratch](https://github.com/verekia/js-stack-from-scratch)
- [A Beginner’s JavaScript Study Plan](https://medium.freecodecamp.com/a-beginners-javascript-study-plan-27f1d698ea5e#.bgf49xno2)

### Footnotes

> [Jump back to footnote 1 in the text ↩](https://webpack.js.org/guides/code-splitting/)

- [Universal JS](https://medium.com/@mjackson/universal-javascript-4761051b7ae9)
codelikeagirl29
1,268,659
5 ways to start delivering sustainable technology through Serverless
Delivering sustainable technology is becoming mandatory for organisations across the planet, from...
0
2022-11-23T12:02:28
https://theserverlessedge.com/you-can-deliver-sustainable-technology-through-serverless/
serverless, aws, cloud, architecture
Delivering [sustainable technology](https://theserverlessedge.com/watch-our-talk-on-sustainable-software-engineering-at-beltech/) is becoming mandatory for organisations across the planet, from both a regulatory and moral standpoint. We have written a lot on how using Well-Architected and [Serverless First principles](https://theserverlessedge.com/what-is-serverless-first-in-2021-the-new-stack/) is the best way to build secure, cost effective, reliable and performant applications. More importantly, we firmly believe that a Serverless first approach is the sustainable approach for [Long Term Value](https://theserverlessedge.com/ask-better-questions-for-long-term-value/).

{% embed https://www.youtube.com/watch?v=Sku6SL_P6Z8 %}

{% embed https://open.spotify.com/episode/5CdlPvyegWdGKe6rVeEO85?si=GArTzRbcQCOevmZuF5xR8A %}

We have been thinking about this topic for quite some time and we believe that these 5 steps will get you moving in the right direction:

1. Measure your carbon footprint.
2. Move to Public Cloud and close your Data Centers.
3. Run your workloads in a low carbon region where possible (keeping in mind data location regulations, latency requirements and service availability).
4. Eliminate waste by understanding and meeting your users' needs in the simplest way possible.
5. Adopt a Serverless first mindset and approach.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jq4xr1icoodurv6ftkao.png)

_Photo by Victor on Unsplash_

## Sustainable technology guidelines

In addition to our 5 steps, the [UK Government's Central Digital & Data Office](https://www.gov.uk/government/organisations/central-digital-and-data-office) recently published guidance on [how to make your technology sustainable](url), which contains awesome guidance for any business or government agency.
They have 12 questions that are important to consider when aligning with [sustainability goals](https://www.gov.uk/government/publications/implementing-the-sustainable-development-goals/implementing-the-sustainable-development-goals--2_). At the start of your project, you should consider these questions:

1. What are your organisation’s sustainability goals?
2. If the contract is more than £5 million per year, has the supplier committed to meet the government’s net zero target, and published a [Carbon Reduction Plan](https://www.gov.uk/government/publications/procurement-policy-note-0621-taking-account-of-carbon-reduction-plans-in-the-procurement-of-major-government-contracts)? **I would update this to: has your supplier committed to meet a net zero target, and published a Carbon Reduction Plan?**
3. Can you include specific project objectives to meet your organisation’s sustainability goals?
4. Have you identified potential benefits for meeting sustainability objectives, or risks that would stop you meeting those objectives?
5. Does your organisation have processes for [recording and reporting](https://www.gov.uk/government/publications/greening-government-commitments-2016-to-2020) on sustainability goals? For example, reporting on the targets for greenhouse gases, waste and water.
6. Do your project plans include user research to more clearly define requirements and reduce the chance of buying software and hardware you do not need?
7. Do you have a process or plan for recording the impact of future upgrades to software and hardware?
8. Are you able to [recycle or repurpose](https://www.hse.gov.uk/waste/waste-electrical.htm) any equipment you are replacing?
9. Are you able to use existing datasets for your project?
10. Are there any opportunities for minimising processing, transmission and storage?
11. Can you put in place processes which reduce printing and paper trails in back office systems and user facing services?
12. Have you assessed whether home working is a practical and more sustainable option for your project team?

The whole document is worth a look, but I would draw your attention to the steps to reduce greenhouse gas emissions.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ts7p7m5n7ouyoqti39oa.png)

_Central Digital and Data Office, UK Gov_

## Cloud Bill as Proxy for Carbon Score

Without an accurate Carbon Score capability from your cloud provider, we feel using your Cloud Bill as a proxy can be a useful indicator of your Carbon usage.

**Lower cloud bill = less carbon used**

To back this up, AWS released a report by 451 Research, [EU businesses that move to AWS Cloud can improve energy efficiency and reduce carbon emissions](https://blog.aboutamazon.eu/aws/eu-businesses-that-move-to-aws-cloud-improve-energy-efficiency-and-reduce-carbon-emissions), which states that:

> Businesses in Europe can reduce energy use by nearly 80% when they run their applications on the AWS Cloud instead of operating their own data centers.

As well as the advantages of improved time to value, security, cost, performance, reliability, and increased pace of innovation, moving to Public Cloud is now the right thing to do from a sustainability point of view as well.

## Cloud Provider Carbon Footprint Score

As ESG and reducing carbon footprints are becoming a Board/C-Suite priority, Cloud providers that can provide rapid and accurate answers to questions on their workloads will be at a significant advantage.

- What is our Carbon footprint?
- How can I reduce our Carbon footprint?

## Emerging capabilities

Google GCP have launched the [Carbon Footprint](https://cloud.google.com/carbon-footprint) capability with accurate measures of your gross carbon footprint and [guidance on how to reduce the footprint](https://cloud.google.com/architecture/reduce-carbon-footprint).
Microsoft Azure have launched the [Sustainability Calculator](https://azure.microsoft.com/en-us/blog/microsoft-sustainability-calculator-helps-enterprises-analyze-the-carbon-emissions-of-their-it-infrastructure/), which helps their customers understand the carbon footprint of their Azure cloud resources.

AWS launched their [Carbon Footprint Tool](https://aws.amazon.com/aws-cost-management/aws-customer-carbon-footprint-tool/) in March 2022.
serverlessedge
1,270,779
Typescript: The keyof operator
Este artículo fue originalmente escrito en https://matiashernandez.dev A few primary concepts...
0
2022-11-24T13:36:36
https://matiashernandez.dev/blog/post/typescript-the-keyof-operator
webdev, programming, typescript
> This article was originally written at [https://matiashernandez.dev](https://matiashernandez.dev/blog/post/typescript-the-keyof-operator)

A few primary concepts around TypeScript help you build complex data shapes. One of those building blocks is the `keyof` operator. This operator or keyword is TypeScript's answer to JavaScript's `Object.keys`. `Object.keys` returns a list of the properties (the keys) of an object. `keyof` does something similar, but in the typed world only. It will return a literal union type listing the "properties" of an object-like type. This operator is the base building block for advanced typing like mapped and conditional types.

> The keyof operator takes an object type and produces a string or numeric literal union of its keys. - [Typescript Handbook](https://www.typescriptlang.org/docs/handbook/2/keyof-types.html)

```typescript
type Colors = {
  primary: '#eee',
  primaryBorder: '#444',
  secondary: '#007bff',
  black: '#000',
  white: '#fff',
  whiteBorder: '#f2f2f7',
  green: '#53C497',
  darkGreen: '#43A17C',
  infoGreen: '#23AEB7',
  pastelLightGreen: '#F3FEFF',
}

type ColorKeys = keyof Colors; // "primary" | "primaryBorder" | "secondary" ....

function SomeComponent({ color }: { color: ColorKeys }) {
  return "Something"
}

SomeComponent({ color: "WhateverColor"}) // Error: not assignable to ColorKeys
SomeComponent({ color: "primary"}) // OK
```

The previous code sample is a snippet from a real web app. The `Colors` type describes a set of colors that can be used across the application. The `keyof` operator is applied to the `Colors` type to retrieve a literal union of all the possible colors.

> A literal union is a union type made up of literal values like "primary" | "primaryBorder"

The union is then used to type the props of `SomeComponent`, allowing the `color` argument to be one of the colors defined in the type.

You can also use the `keyof` operator to build up more complex types or constraints alongside generics and template literals, but that will be for another post.
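As a small taste of that combination, a generic function can use `keyof` as a constraint so that only real property names are accepted and the return type is looked up from the object type. The `getProp` helper below is an illustrative sketch, not part of any library:

```typescript
// A generic accessor constrained by keyof: `key` must be an actual
// property of `obj`, and the return type T[K] is looked up from it.
function getProp<T, K extends keyof T>(obj: T, key: K): T[K] {
  return obj[key];
}

const palette = { primary: "#eee", secondary: "#007bff" };

const hex = getProp(palette, "primary"); // inferred as string, value "#eee"
// getProp(palette, "tertiary"); // compile-time error: not a key of palette
```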
There you go; the `keyof` operator may be small, but it is an essential piece of the bigger picture that unlocks powerful operations when used in the right place.

![Footer Social Card.jpg](https://cdn.hashnode.com/res/hashnode/image/upload/v1615457338201/5yOtr5SdF.jpeg)

✉️ [Join Micro-bytes](https://microbytes.matiashernandez.dev)

🐦 Follow me on [Twitter](https://twitter.com/matiasfha)

❤️ [Support my work](https://buymeacoffee.com/matiasfha)
matiasfha
1,276,351
React Refactoring - Composition: When don't use it!
Intro I refact a dropdown composition component. Changing it to object. We had a dropdown...
0
2022-11-29T01:22:53
https://dev.to/raafacachoeira/react-refactoring-component-children-to-component-with-prop-objets-5f9o
react, refactoring, webdev, beginners
## Intro

I refactored a dropdown composition component, changing its children-based API to an object of props. We had a dropdown in the following structure:

``` jsx
<Dropdown>
  <Dropdown.Toggle color='primary' />
  <Dropdown.Menu>
    <Dropdown.Item onClick={() => {}}>
      MyLabel
    </Dropdown.Item>
    <Dropdown.Item onClick={() => {}}>
      AnotherLabel
    </Dropdown.Item>
  </Dropdown.Menu>
</Dropdown>
```

## Problem

When we need to hide some elements, we need to apply conditional rendering, and with that we get a mixture of JavaScript and JSX syntax that makes the code difficult to read and difficult to maintain. For example:

``` jsx
<Page>
  <Subheader helpText="Choose a Publication action">
    <Dropdown>
      <Dropdown.Toggle color='primary' />
      <Dropdown.Menu>
        {allowEdit ? (
          <>
            <Dropdown.Item onClick={handleTransfer}>
              Transfer
            </Dropdown.Item>
            <Dropdown.Item onClick={handleAbandon}>
              Abandon
            </Dropdown.Item>
          </>
        ) : hasPermission('Pages.Publications.Publish') ? (
          <Dropdown.Item onClick={handlePublish}>
            Publish
          </Dropdown.Item>
        ) : (
          <Dropdown.Item onClick={handleTransfer}>
            Transfer
          </Dropdown.Item>
        )}
      </Dropdown.Menu>
    </Dropdown>
  </Subheader>
  <Content>
    ...
  </Content>
</Page>
```

Notice that there is a ternary nested inside another, and also that there is code duplication for the 'Transfer' item. This is terrible to read.

If **allowEdit** is TRUE, it should render the 'Transfer' and 'Abandon' items:

![transfer and abandon items](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ds01p7ngsp2fw3tf6anl.png)

If **allowEdit** is FALSE and the user has permission to Publish, it should render only the 'Publish' item, or 'Transfer' as a fallback.

![just publish item](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nwqc1hqlvkslhte1iczf.png)

![transfer as fallback item](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1msvxmph6gnpvmpnm82q.png)

## Proposal

Create a new component (**DropdownButton**) that receives a list of items via props instead of children.
Like this:

``` jsx
<DropdownButton
  color='primary'
  options={[
    { label: '', onClick: () => {}, visible: true },
    { label: '', onClick: () => {}, visible: true },
  ]}
/>
```

The **visible** property controls whether the dropdown item is shown.

## Benefits

- The code is easier to read
- Less code repetition
- We can import the object from elsewhere

And applying our conditional rendering...

``` tsx
<Page>
  <Subheader helpText="Choose a Publication action">
    <DropdownButton
      color='primary'
      options={[
        {
          label: 'Transfer',
          onClick: handleTransfer,
          visible: allowEdit || !hasPermission('Pages.Publications.Publish')
        },
        {
          label: 'Abandon',
          onClick: handleAbandon,
          visible: allowEdit
        },
        {
          label: 'Publish',
          onClick: handlePublish,
          visible: !allowEdit && hasPermission('Pages.Publications.Publish')
        },
      ]}
    />
  </Subheader>
  <Content>
    ...
  </Content>
</Page>
```

## Conclusion

We are used to writing more declarative code, always thinking about generic code, but sometimes that makes our code difficult to read and maintain. Props as object items are widely used and we can use them when we want better control over our components. In this refactoring, we've assigned responsibility for the conditional rendering requirements to the **visible** prop.
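A minimal sketch of what such a **DropdownButton** could do internally (the option shape and helper below are assumptions for illustration, not an existing component): filter the options by their `visible` flag before mapping each remaining entry to a menu item.

```typescript
// Hypothetical shape of the `options` prop described above.
interface DropdownOption {
  label: string;
  onClick: () => void;
  visible: boolean;
}

// The core of the proposed component: drop the options whose
// `visible` flag is false, so the JSX only maps over visible ones.
function visibleOptions(options: DropdownOption[]): DropdownOption[] {
  return options.filter((option) => option.visible);
}

const options: DropdownOption[] = [
  { label: "Transfer", onClick: () => {}, visible: true },
  { label: "Abandon", onClick: () => {}, visible: false },
];

// visibleOptions(options) keeps only the "Transfer" entry here.
```

Keeping this filtering in one place is what lets the call site stay a flat, declarative list instead of nested ternaries.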
raafacachoeira
1,277,230
Git Checkout / Reset / Revert. When to use what?
We often come across some weird scenarios (especially my team) while committing our code changes....
20,115
2022-11-29T13:48:16
https://medium.com/@5minslearn/git-checkout-reset-revert-when-to-use-what-dc1fc9f3c5bb
git, gitcheckout, gitrevert, gitreset
We often come across some weird scenarios (especially my team) while committing our code changes. Some of them are,

* How can I bring the file to the working area, which I have added to the staging area by mistake?
* OMG! I made a commit with production keys in the code but did not push the code yet. Is it possible to delete that commit? (Meanwhile, my tech lead be like 😡)
* Can anyone say how to remove the changes which I’ve pushed already?
* I logged onto the Production machine and made a change to a file and fixed the production issue immediately 👏 (Can also be debugging on production 😜). Now, I want to remove that change using the git CLI for the CI/CD to pass. Any Git commands available for this purpose?

Head to the bottom of this page if you just need the answers for these scenarios. For those who need to understand more, continue from here. Let’s find answers to all such queries with just 3 commands.

1. Checkout
2. Reset
3. Revert

### What is Git Checkout?

`git checkout` removes your changes which are in the working directory of your project. Remember, neither staged nor committed changes will be removed. I have added some random text to File1 and File2. We can see the changes with the `git status` and `git diff` commands.

![Changes made to files](https://cdn-images-1.medium.com/max/2000/1*A45FZO1_4_VWLGbfBHtvgQ.png)

Let’s say we need to remove all the changes from the File1 file and bring it back to its original state. Let’s run the `git checkout` command on File1 and see the changes.

![Git Checkout a File](https://cdn-images-1.medium.com/max/2000/1*-Zk34X8y9zUhtEg5rLf6tQ.png)

“Okay. That’s cool. Removing changes from 1 file is fine. How can I remove **all the changes made in all files** in our repo?”, Naras from my team raised the question.

Oh! That’s simple. Just add a . (dot) after the checkout command to remove changes from all files in our repo.

> Git released the restore command in a recent release as an alternative to the checkout command

### What is Git Reset?
The `git reset` command moves the changes from the staging area to the working area. I made changes to File2 and staged it.

![Staged File2.txt](https://cdn-images-1.medium.com/max/2000/1*fY3tm7ctTKglUdn8n0B32w.png)

Running `git reset` on File2 will bring the changes back to the working directory.

![Git reset on File2.txt](https://cdn-images-1.medium.com/max/2000/1*lw7GVGzTMh8yBzT-Wedf8g.png)

reset is a powerful command in git which can perform unbelievable actions based on the parameters passed. One of its powerful features is its ability to remove a commit from the repository, provided certain conditions are met. Let’s explore them below.

### Types of Git Reset

Git Reset is divided into 3 types. They are,

1. Mixed (Default)
2. Soft
3. Hard

### Mixed Reset

The Mixed reset command brings the committed changes back to the working directory. The syntax of the mixed reset command is,

`git reset HEAD~n`

`n` represents the number of commits to be removed (from the end). Let’s remove the last 1 commit from our code.

![Git Mixed Reset](https://cdn-images-1.medium.com/max/2000/1*POOyFc5zekLyUIvib_AAaA.png)

As I ran `git reset HEAD~1`, the last commit on the repo has been removed and those changes are brought to the working directory.

### Soft Reset

The Soft reset command is similar to mixed reset. This command brings the committed changes back to the staging area. The syntax of the soft reset command is,

`git reset HEAD~n --soft`

`n` represents the number of commits to be removed (from the end). Let’s remove the last 1 commit from our code.

![Git Soft Reset](https://cdn-images-1.medium.com/max/2000/1*hTv6EsSdC6RuBZDObn1NDQ.png)

As I ran `git reset HEAD~1 --soft`, the last commit on the repo has been removed and those changes are brought to the staging area.

### Hard Reset

The Hard reset command is a bit of a **dangerous** command. This command deletes the commits from git. The syntax of the hard reset command is,

`git reset HEAD~n --hard`

`n` represents the number of commits to be deleted (from the end).
Let’s remove the last 1 commit from our code.

![Git Hard Reset](https://cdn-images-1.medium.com/max/2000/1*FN9u_QPHP_FwJd3ac9RQ2g.png)

As I ran `git reset HEAD~1 --hard`, the last commit has been deleted completely from the repo.

“Why are you stressing the word **dangerous** here? How’s this command so dangerous when compared with the other reset commands?”, intelligent Kumar interrupted with this question.

Well. That’s an interesting question. The reason I’m stressing that it’s so dangerous is that this command has the power to delete all our changes. Remember, this does not bring the changes back to either the staging or working area. Running this command does not ask for any confirmation from the user.

Think of a scenario where you made a small mistake by adding a “0” next to “~1” in the above command and hit “Enter” without looking and confirming the command. So, the command you ran would be,

`git reset HEAD~10 --hard`

Can you imagine how dangerous it is now? You lost 10 of your most valuable commits. There’s no way to reverse this process unless you pushed the code to the server. So, you have to make all your changes from the beginning again.

I could better explain this with a real scenario that happened between me and Aadhithyanath (one of my colleagues). One day Aadhi approached me for help to resolve a bug. I saw there were changes in 10+ files (around 14 files). I asked him to commit those changes before proceeding. He compressed all the changes together in a single commit. We started fixing the bug. We fixed one scenario where the bug had happened. But there were other scenarios too that we needed to handle. We made a commit. While heading to fix the other scenarios, we found that our last commit containing the fix for the 1st scenario was implemented in the wrong way. So, we planned to remove that commit and go ahead with the correct implementation. I ran the following command,

`git reset HEAD~2 --hard`

Aadhi suddenly shouted “OH MY GOD!!!!”

“What happened?”, I asked him.
“You deleted all my changes 😠”, replied Aadhi.

“What are you saying?”, I asked, while looking at the terminal at what I’d executed. Do you notice that ~2? Yes. I deleted the last 2 commits. To our bad luck, we deleted the commit in which Aadhi had compressed all his changes into a single commit. It was 4 hours of his hard work. Adding to our bad luck, we hadn’t pushed that code to the remote repo.

This happens often to those who type fast. I’m one among them 😜. Do you understand how dangerous it is? That’s the reason I stressed the word **dangerous** here. Do not execute this command unless you’re 200% sure of what its result will be.

I remember you told,

> One of its powerful features is its ability to remove a commit from the repository, provided certain conditions are met.

“What are those conditions from the above line?”, asked Udhaya, a developer cum marketer from our team.

Oh! I forgot to mention that important condition. Thanks for reminding me, Udhaya. The one condition you need to keep in mind is that reset should not be run on commits that have already been pushed to the remote repo.

“Why?”, I hope this question strikes your mind.

The main purpose of the reset command is to make changes to your commits. So, when you make a change to a commit / delete a commit which you have already pushed to the remote repo, the hash of that commit will change. This will not let you push your changes to the remote repo unless you force push them.

> Note: Doing a force push in a repo is not a recommended way unless it is an unavoidable case

This is because git pushes the changes to the remote by comparing its **tip** (the tip is the last / latest commit of the branch in a repo).

### What is Git Revert?

The revert command removes the changes made in a commit by creating a new commit. The new commit will contain changes that are the reverse of the old commit. `git revert` is the preferred command whenever you have to change a commit which you’ve already pushed.
The syntax of the revert command is,

`git revert <commit_hash>`

`commit_hash` is the hash of the commit which we want to revert.

![Git Revert](https://cdn-images-1.medium.com/max/2000/1*ZNN9hoL1QAmURAP-Gg-uUg.png)

We have reverted the top commit from our repo, and you can see that a new commit has been created above the old commit saying it’s the revert of that commit.

“**Can you please explain one real time scenario for each of the above concepts?**”, asked Krish, one of the leading developers in our firm.

Sure.

1. You made a few changes, messed up the code and need the old code back. Make use of `git checkout` to restore that file.
2. When you made a commit, but you feel that you need to include some more changes to those files in the same commit, use `git reset HEAD~1 --mixed` to bring the commit back to the working directory.
3. When you made a commit, but you need to add a few more files to the same commit, use `git reset HEAD~1 --soft`, which will bring all the changes to the staging area.
4. When you made a commit that contains a production key in the code, use `git reset HEAD~1 --hard` to completely remove that commit.
5. When you made a commit and pushed the code, and later felt that the changes made in that commit are not valid / not required, use `git revert <commit_hash>` to reverse the changes.

Raman (a little mischievous) from my team asked me the following, “Raising a question is simple, but returning a satisfying answer is the toughest job. Can you say the appropriate commands to use for the scenarios you mentioned at the beginning?”.

“That’s the evaluation of your understanding, folks. Can you all say the command for each scenario?”, I replied. To my surprise, everyone voted for the right command for each scenario. I felt happy seeing the success of my teaching in real time. Thank you folks.

### Answers for my valuable readers & followers,

* How can I bring the file to the working area, which I have added to the staging area by mistake? — `git reset <path_to_file>`
* OMG! I made a commit with production keys in the code but did not push the code yet. Is it possible to delete that commit? (Meanwhile, my tech lead be like 😡) — `git reset HEAD~1 --hard`
* Can anyone say how to remove the changes which I’ve pushed already? — `git revert <commit_hash>`
* I logged onto the Production machine and made a change to a file and fixed the production issue immediately 👏 (Can also be debugging on production 😜). Now, I want to remove that change using the git CLI for the CI/CD to pass. Any Git commands available for this purpose? — `git checkout <file_name>`

Give a like ❤️ if you like this article. Subscribe to our [newsletter](https://5minslearn.gogosoon.com/?ref=devdotto_git_checkout_reset_revert) to receive more such insightful articles that get delivered straight to your inbox.
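The reset and revert behaviours above are easy to try for yourself in a throwaway repository. This sketch (file names and commit messages are made up) assumes `git` is installed and touches nothing outside a temporary directory:

```shell
#!/bin/sh
set -e

# Create a throwaway repo so the reset/revert commands are safe to try.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git config user.email "demo@example.com"
git config user.name "Demo"

echo one > file.txt
git add file.txt
git commit -qm "first"

echo two >> file.txt
git commit -qam "second"

# Mixed reset: drops the "second" commit, its changes land back
# in the working area as an unstaged modification.
git reset -q HEAD~1
git status --short            # shows file.txt as modified
git checkout -q -- file.txt   # discard that working-area change again

echo three >> file.txt
git commit -qam "third"

# Revert: keeps history intact and adds a new commit undoing "third".
git revert --no-edit HEAD
git log --oneline             # first, third, and the revert of third
```

Because the revert adds a commit instead of rewriting one, it is the variant that stays safe after a push.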
5minslearn
1,278,541
Tesla Solar Panel Installation Company in Texas & Colorado
ECG Solar is a quality solar panel installation company for Texas & Colorado. We install all...
0
2022-11-30T10:07:02
https://dev.to/excelcgsolar/tesla-solar-panel-installation-company-in-texas-colorado-416b
ECG Solar is a quality [solar panel installation company](https://excelcgsolar.com/) for Texas & Colorado. We install all forms of solar, from residential, commercial, and utility-scale systems to less common setups such as solar roofs and building-integrated solar systems.
excelcgsolar
1,288,957
Reduce your Jest tests running time (especially on hooks!) with the maxWorkers option
Photo by Josh Olalde on Unsplash Many applications have hundreds (and sometimes thousands!) of...
0
2022-12-08T15:52:29
https://buaiscia.github.io/blog/tips/reduce-jest-tests-running-time-maxworkers
webdev, testing, jest, beginners
Photo by <a href="https://unsplash.com/@josholalde?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Josh Olalde</a> on <a href="https://unsplash.com/s/photos/workers?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Unsplash</a> Many applications have hundreds (and sometimes thousands!) of tests. Sometimes those tests are run only manually; more often they're attached to some hook. For example, we can use 'husky' to run some action before the commit or the push. In my case, lint and prettier run before the commit and tests before the push. Or the tests can be run during some actions on a CI/CD tool. I'm on a MacBook Pro (2020), and I've got around 170 unit and integration tests running on a frontend application. At least half of them need to render components many times, hence consuming a lot of memory and CPU. When I run my tests manually, they are usually pretty fast: around 1 minute and some seconds of execution time. However, I found that when I push the code to our repository, the test suites often fail on timeout. They exceed the default time limit of 5 seconds. Of course, this limit can be increased. But if 5 seconds is not enough for running a test, there's something wrong! Now, although Jest runs in JavaScript (which is single-threaded), it uses workers to divide the workload among many Jest processes, imitating multi-threading. Jest calculates (based on many factors) how many workers it can activate and runs them. In my case, there were around 8. When I was running them continuously, my laptop fan started running as if it was going to melt. The CPU got slower as it heated up, and tests started to time out because there was no more computational power to let them run properly. Luckily, Jest lets us customize this option with --maxWorkers.
On my CRA (Create React App) setup I added this option to the test command, cutting the number of workers in half: ``` "test": "react-scripts test --maxWorkers=50%" ``` This lets just 4 workers spin up and lets the tests pass without timeout issues. Their execution was still fairly quick, and it saved me from any further trouble! Of course, the option can be set even lower (or higher). It depends on many factors, so you can try and test yourself (pun intended)! Thank you for reading! Let's connect on [Twitter](https://twitter.com/AlexBuaiscia) or [Mastodon](@alex_@uiuxdev.social)!
buaiscia
1,288,964
How to avoid “schema drift”
We are all familiar with drifting in-app configuration and IaC. We’re starting with a specific...
0
2022-12-08T16:07:43
https://memphis.dev/blog/how-to-avoid-schema-drift/
schemaregistry, schemaverse, schemamanagemen, schemadrift
We are all familiar with drift in app configuration and IaC. We start with a specific configuration, backed by IaC files. Soon after, we face a "drift": a gap between what is actually configured in our infrastructure and what is in our files. The same behavior happens with data. A schema starts in a certain shape. As data ingestion grows and scales across different sources, we get schema drift: a messy, unstructured database, UI crashes, and an analytical layer that keeps failing due to a bad schema. We will learn how to deal with this scenario and how to work with dynamic schemas.

---

## Schemas are a major struggle.

A schema defines the structure of a data format. Keys, values, formats, and types, in combination, result in a defined structure, or simply: a schema. Developers and data engineers, have you ever needed to recreate a NoSQL collection or an object layout on a bucket because of different documents with different keys or structures? You probably have. An unaligned record structure across your data ingestion will crash your visualization, analysis jobs, and backend, and it is a never-ending chase to fix it.

---

## Schema drift

Schema drift is the case where your sources often change their metadata. Fields, keys, columns, and types can be added, removed, or altered on the fly. Without handling schema drift, your data flow becomes vulnerable to upstream data source changes. Typical ETL patterns fail when incoming columns and fields change because they tend to be tied to those sources. Streaming requires a different toolset. The following article will explain the existing solutions and strategies to mitigate the challenge and avoid schema drift, including data versioning using lakeFS.
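To make "drift" concrete, here is a tiny illustrative check in plain JavaScript that flags when an incoming record's fields diverge from an expected schema. The `detectDrift` helper and its field list are hypothetical, purely for intuition; real enforcement happens in the registry or broker, as described below:

```javascript
// Hypothetical drift check: compare an incoming record's keys and
// value types against an expected schema definition.
const expectedSchema = { id: 'string', amount: 'number' };

function detectDrift(record, schema = expectedSchema) {
  const drift = [];
  for (const [field, type] of Object.entries(schema)) {
    if (!(field in record)) drift.push(`missing field: ${field}`);
    else if (typeof record[field] !== type) drift.push(`wrong type for ${field}`);
  }
  for (const field of Object.keys(record)) {
    if (!(field in schema)) drift.push(`unexpected field: ${field}`);
  }
  return drift; // an empty array means the record matches the schema
}

console.log(detectDrift({ id: 'p1', amount: 9.99 })); // []
console.log(detectDrift({ id: 'p1', amount: '9.99', coupon: 'X' }));
```

The second call reports both a wrong type and an unexpected field: exactly the kind of silent divergence that breaks dashboards and analytics jobs downstream.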
![schema drift 1](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t3pjg00fvfusimpamo9j.png)

---

## Comparison between Confluent Schema Registry and Memphis Schemaverse

**Confluent — Schema Registry**

Confluent Schema Registry provides a serving layer for your metadata. It provides a RESTful interface for storing and retrieving your Avro®, JSON Schema, and Protobuf schemas. It stores a versioned history of all schemas based on a specified subject name strategy, provides multiple compatibility settings, and allows the evolution of schemas according to the configured compatibility settings, with expanded support for these schema types. It provides serializers that plug into Apache Kafka® clients. They handle schema storage and retrieval for Kafka messages that are sent in any of the supported formats.

**The good**
- UI
- Supports Avro, JSON Schema, and Protobuf
- Enhanced security
- Schema enforcement
- Schema evolution

**The bad**
- Hard to configure
- No external backup
- Manual serialization
- Can be bypassed
- Requires maintenance and monitoring
- Mainly supported in Java
- No validation

![Schema registry overview](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8nivp6hki3tii4r6xaqf.png) Schema registry overview. [Confluent Documentation](https://docs.confluent.io/platform/current/schema-registry/index.html)

---

**Memphis — Schemaverse**

[Memphis Schemaverse](https://docs.memphis.dev/memphis/memphis/schemaverse-schema-management) provides a robust schema store and schema management layer on top of the [Memphis broker](https://github.com/memphisdev/memphis) without a standalone compute layer or dedicated resources. With a unique and modern UI and a programmatic approach, technical and non-technical users can create and define different schemas, attach a schema to multiple stations, and choose whether the schema should be enforced or not. Memphis' low-code approach removes the serialization step, as it is embedded within the producer library.
Schemaverse supports versioning, GitOps methodologies, and schema evolution.

**The good**
- Great UI and programmatic approach
- Embedded within the broker
- Zero-trust enforcement
- Versioning
- Out-of-the-box monitoring
- GitOps. Working with files
- Low/no-code validation and serialization
- No configuration
- Native support in Python, Go, Node.js

**The bad**
- Not supporting all formats yet. [Protobuf and JSON only](https://docs.memphis.dev/memphis/memphis/schemaverse-schema-management/formats). GraphQL and Avro are next.

![Shemaverse overview](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ntuoix2f9n96h3jqwl96.jpeg) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0duwki1xhukvadtv1q0p.jpeg) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/40kiow2wcoecwqbmfm4g.jpeg)

---

## How to avoid schema drift using Confluent Schema Registry

Avoiding schema drift means enforcing a particular schema on a topic or station and validating every produced message. This tutorial assumes that you are using Confluent Cloud with the registry configured. If not, there is a Confluent article on how to install the self-managed version (without the enhanced features). To make sure you are not drifting, and to maintain a single standard schema structure in your topic, you will need to:

1. Copy the ccloud-stack config file to $HOME/.confluent/java.config.
2. Create a topic.
3. Define a schema. For example:

```
{
 "namespace": "io.confluent.examples.clients.basicavro",
 "type": "record",
 "name": "Payment",
 "fields": [
     {"name": "id", "type": "string"},
     {"name": "amount", "type": "double"}
 ]
}
```

4. Enable schema validation on the newly created topic.
5. Configure Avro/Protobuf both in the app and with the registry.

Example producer code in Maven:

```
...
import io.confluent.kafka.serializers.KafkaAvroSerializer;
...
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class); props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, KafkaAvroSerializer.class); ... KafkaProducer<String, Payment> producer = new KafkaProducer<String, Payment>(props)); final Payment payment = new Payment(orderId, 1000.00d); final ProducerRecord<String, Payment> record = new ProducerRecord<String, Payment>(TOPIC, payment.getId().toString(), payment); producer.send(record); ``` Because the pom.xml includes avro-maven-plugin, the Payment class is automatically generated during compile. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vdu43a5bitifp6z2gheq.png) In case a producer will try to produce a message that is not struct according to the defined schema, the message will not get ingested. --- ## How to avoid schema drift using Memphis Schemaverse **Step 1: Create a new schema (Currently only available through Memphis GUI)** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q3nl5qr1k1nxnx0dd1ll.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o6jtfl8wt5xf2yutj3m3.png) **Step 2: Attach Schema** Head to your station, and on the top-left corner, click on “+ Attach schema” ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3mknqenho9hpjkbi6llt.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a2o6o0n7s0p3mgi3711z.png) **Step 3: Code example in node.js** Memphis abstracts the need for external serialization functions and embeds them within the SDK. 
**Producer (Protobuf example)** ``` const memphis = require("memphis-dev"); var protobuf = require("protobufjs"); (async function () { try { await memphis.connect({ host: "localhost", username: "root", connectionToken: "*****" }); const producer = await memphis.producer({ stationName: "marketing-partners.prod", producerName: "prod.1" }); var payload = { fname: "AwesomeString", lname: "AwesomeString", id: 54, }; try { await producer.produce({ message: payload }); } catch (ex) { console.log(ex.message) } } catch (ex) { console.log(ex); memphis.close(); } })(); ``` **Consumer (Requires .proto file to decode messages)** ``` const memphis = require("memphis-dev"); var protobuf = require("protobufjs"); (async function () { try { await memphis.connect({ host: "localhost", username: "root", connectionToken: "*****" }); const consumer = await memphis.consumer({ stationName: "marketing", consumerName: "cons1", consumerGroup: "cg_cons1", maxMsgDeliveries: 3, maxAckTimeMs: 2000, genUniqueSuffix: true }); const root = await protobuf.load("schema.proto"); var TestMessage = root.lookupType("Test"); consumer.on("message", message => { const x = message.getData() var msg = TestMessage.decode(x); console.log(msg) message.ack(); }); consumer.on("error", error => { console.log(error); }); } catch (ex) { console.log(ex); memphis.close(); } })(); ``` --- ## Schema enforcement with data versioning. Now that we know the many ways to enforce schema using Confluent Or Memphis, let us understand how a versioning tool like lakeFS can seal the deal for you. **What is lakeFS?** [lakeFS](https://lakefs.io/) is a data versioning engine that allows you to manage data, like code. Through the Git-like branching, committing, merging, and reverting operations, managing the data and in turn the schema over the entire data life cycle is made simpler. 
**How to achieve schema enforcement with lakeFS?** By leveraging the lakeFS branching feature and webhooks, you can implement a robust schema enforcement mechanism on your data lake. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nidr5lerz3w3w60haduw.png) [lakeFS hooks](https://github.com/treeverse/lakeFS-hooks) allow automating and ensuring a given set of checks and validations run before important life-cycle events. They are similar conceptually to Git Hooks, but unlike Git, lakeFS hooks trigger a remote server which runs tests and so it is guaranteed to happen. You can configure lakeFS hooks to check for specific table schema while merging data from development or test data branches to production. That is, a pre-merge hook can be configured on the production branch for schema validation. If the hook run fails, it causes lakeFS to block the merge operation from happening. This extremely powerful guarantee can help implement schema enforcement and automate the rules and practices that all data sources and producers should adhere to. ## Implementing schema enforcement using lakeFS: 1. Start by creating a lakeFS data repository on top of your object store (say, AWS S3 bucket for example). 2. All the production data (single source of truth) can live on the `main` branch or `production` branch in this repository. 3. You can then create a `dev` or a ‘staging’ branch to persist the incoming data from the data sources. 4. Configure a lakeFS webhooks server. For example, a simple python flask app that can serve http requests can be a webhooks server. Refer to sample flask app that lakeFS team has put together to get started. 5. Once you have the webhooks server running, enable webhooks on a specific branch by adding `actions.yaml` file under `_lakefs_actions` directory. 
Here is a sample actions.yaml file for schema enforcement on the master branch:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f3hr4z5lf8t1swjdrsk7.png)

- If the schema of incoming data differs from that of the production data, the configured lakeFS hooks will be triggered on the pre-merge condition, and the hooks will fail the merge operation to the master branch.

This way, you can use lakeFS hooks to enforce the schema, validate your data to avoid PII column leakage into your environment, and add data quality checks as required. To understand more about configuring hooks and using them effectively, refer to the [lakeFS](https://docs.lakefs.io/setup/hooks.html), [git for data — documentation](https://docs.lakefs.io/setup/hooks.html).

---

Article by Yaniv Ben Hemo, Co-Founder & CEO @Memphis.dev

---

Follow Us to get the latest updates! [Github](https://github.com/memphisdev/memphis) • [Docs](https://docs.memphis.dev/memphis/getting-started/readme) • [Discord](https://discord.com/invite/DfWFT7fzUu)

---

[Join 4500+ others and sign up for our data engineering newsletter.](https://mailchi.mp/memphis.dev/newslettersub)
atrifsik
1,289,275
Treetop Tree House
Advent of Code 2022 Day 8 Part 1 Analyzing each cross-section My algorithm in...
16,285
2022-12-08T20:59:51
https://dev.to/rmion/treetop-tree-house-354c
adventofcode, algorithms, javascript, programming
## [Advent of Code 2022 Day 8](https://adventofcode.com/2022/day/8) ## Part 1 1. Analyzing each cross-section 2. My algorithm in pseudocode 3. My algorithm in JavaScript ### Analyzing each cross-section Given a grid of trees: ``` . . . . . . . . . . . . . . . . . . . . . . . . . ``` Knowing that all trees on the edge are fine: ``` * * * * * * . . . * * . . . * * . . . * * * * * * ``` For each non-edge tree - for example, the center tree: ``` . . . . . . . . . . . . ? . . . . . . . . . . . . ``` I need to separately analyze each slice of its cross-section: ``` . . 1 . . . . 1 . . 4 4 ? 2 2 . . 3 . . . . 3 . . ``` As long as all values in any slice are less than the source tree, I can consider it `visible` like the ones on the edge. How might I do this programmatically? ### My algorithm in pseudocode Iterating through all non-edge trees in the grid: ``` Track the amount of trees visible - accounting for all edge trees For each tree from the second row to the second-to-last For each tree from the second column to the second-to-last Set flag to false Set directions to a 4-element array of relative (x,y) coordinates Set index to 0 While flag is false and index is less than the length of directions Generate a list of values originating from the current tree in a straight line stepping in the direction of the relative coordinate at the current index If the maximum value in the list is less than the current tree's value Set flag to true Else Increment index by 1 If flag is true Increment the amount of trees visible by 1 Else Don't increment the amount of trees visible Return the amount of trees visible ``` I wonder how close this will end up being to my actual algorithm. Here I go! 
### My algorithm in JavaScript - It very closely matches my pseudocode - The bulk of it is the part where I walk each slice of the cross-section ```js let grid = input .split('\n') .map(row => row.split('').map(Number)) let answer = grid.length * 4 - 4 for (let row = 1; row < grid.length - 1; row++) { for (let col = 1; col < grid[row].length - 1; col++) { let flag = false let index = 0 let coords = [[-1,0],[0,1],[1,0],[0,-1]] while (!flag && index < coords.length) { let slice = [] let y = row + coords[index][0] let x = col + coords[index][1] while ( (y >= 0 && y < grid.length) && (x >= 0 && x < grid[y].length) ) { slice.push(grid[y][x]) y += coords[index][0] x += coords[index][1] } if (Math.max(...slice) < grid[row][col]) { flag = true } index++ } if (flag) answer++ } } return answer ``` ## Part 2 1. Enter: more careful analysis of each tree 2. My algorithm in JavaScript ### Enter: more careful analysis of each tree - It's no longer enough to spot a tree of equal or greater height - My algorithm from Part 1 seems well set to adjust to this new requirement of checking nearby trees ### My algorithm in JavaScript - I start both `for` loops at `0` instead of `1` to check every tree - I track fewer variables inside the inner `for` loop - I added a third condition to the `while` loop that causes it to break as soon as a number equal to or higher than the current tree is discovered ```js let grid = input .split('\n') .map(row => row.split('').map(Number)) let answer = 0 for (let row = 0; row < grid.length; row++) { for (let col = 0; col < grid[row].length; col++) { let coords = [[-1,0],[0,1],[1,0],[0,-1]] let scores = coords.map(coord => { let slice = [] let y = row + coord[0] let x = col + coord[1] while ( (y >= 0 && y < grid.length) && (x >= 0 && x < grid[y].length) && (slice[slice.length - 1] || 0) < grid[row][col] ) { slice.push(grid[y][x]) y += coord[0] x += coord[1] } return slice.length }) let product = scores.reduce((sum, score) => sum * score) answer = product > 
answer? product : answer } } return answer ``` I got confused when writing this bit of code: ```js (slice[slice.length - 1] || 0) ``` It is designed to account for an empty list whose length would be 0 and therefore would not have an item at index `-1`. ## I did it!! - I solved both parts! - By using several nested `for` and `while` loops! - Along with some `map()` and `reduce()`! This ended up being a delightfully challenging twist on the now-common grid-themed puzzle! I'm very glad I have accrued all my skills working with this type of puzzle from similar puzzles in years past. It made solving this one much easier: I could focus on other parts of the problem...besides relative coordinates!
rmion
1,289,332
Web Application Penetration Testing 101
https://medium.com/@vishnuv4910/web-application-penetration-testing-101-a1fea91e36a9
0
2022-12-08T18:48:51
https://dev.to/vz360/web-application-penetration-testing-101-5bnh
https://medium.com/@vishnuv4910/web-application-penetration-testing-101-a1fea91e36a9 ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dk8sd4xq1vevnd9ztwp9.png)
vz360
1,289,389
CSS Selectors: Style lists with the ::marker pseudo-element
For HTML lists, there was no straightforward way to style the bullets a different color from the list...
20,455
2022-12-29T18:39:27
https://dev.to/dianale/css-selectors-style-lists-with-the-marker-pseudo-element-ee4
css, webdev, codepen, tutorial
For HTML lists, there was no straightforward way to style the bullets a different color from the list text until a couple of years ago. If you wanted to change just the marker color, you'd have to generate your own with pseudo-elements. Now, with widespread browser support for the `::marker` pseudo-element, the marker can be targeted directly. Let's look at the different methods for styling lists: the methods used before `::marker` was available, and how to style with `::marker` going forward. Here's a simple grocery list that we will style:

```html
<ul>
  <li class="list__apples">Apples</li>
  <li class="list__bananas">Bananas</li>
  <li class="list__avocados">Avocados</li>
  <li class="list__cheese">Cheese</li>
  <li class="list__bread">Bread</li>
</ul>
```

{% codepen https://codepen.io/pursuitofleisure/pen/GRGLPYZ %}

## Before ::marker

### Changing the bullet color using ::before or ::after

In order to change the color of list bullets, we need to generate new bullets using pseudo-elements:

```css
li {
  display: flex;
  align-items: center;
  gap: 0 0.25rem;
}

li::before {
  content: "\2022";
  color: #ec368d;
}
```

Here we are creating a bullet icon by using the CSS code/unicode "\2022", which is the code for a filled-in circle. We can set the color, and since the list item uses flex, we can vertically center the bullet with the text. Before `flex` was supported, I used `position: absolute`, which was horrible to style and to keep responsive with the font size. So with `flex` available now, the old method is not as cumbersome as it used to be. The new marker will take into account any left padding on the list. I used to include the following rule to remove the default list styling first (if you have any CSS resets in use, this may already be included):

```css
ul {
  list-style: none;
}
```

This rule, however, is actually not needed, because the generated pseudo-elements replace the default style regardless.
You may or may not want to add this for standardization purposes. ### Changing the marker to an image What about changing the marker to an image instead? This is actually pretty simple. Here I'm setting the marker to the CSS code/unicode for a shopping cart emoji. ```css li { list-style: "\1F6D2"; } ``` ## Using ::marker ### Changing the color of the list marker Now that we have access to `::marker`, things are much easier. If we want to just change the color of the bullets, then it's just one line: ```css li::marker { color: #ec368d; } ``` For changing the bullets to an emoji, we can also use `::marker`. This is the same amount of code as the old method, so it's your preference on which to use: ### Changing the marker to an emoji ```css li::marker { content: "🍎"; } ``` Let's have fun and have each emoji match its list item: ```css .list__apples::marker { content: "🍎"; } .list__bananas::marker { content: "🍌"; } .list__avocados::marker { content: "🥑"; } .list__cheese::marker { content: "🧀"; } .list__bread::marker { content: "🥖"; } ``` You can also use the CSS code/unicode value for `content` instead of the actual emoji in case emojis cause issues within your editor or compiler: ```css li::marker { content: "\1F34E"; } ``` ## Conclusion And that's it for styling list markers! Short and simple and one of the really nice things to have compared to the old methods.
dianale
1,289,953
Creating A Carousel In React With Bootstrap
Creating a carousel in react might sound a bit intimidating; but with the right tools and background...
0
2022-12-09T07:12:38
https://dev.to/javirodmart/creating-a-carousel-in-react-with-bootstrap-3f0p
webdev, javascript, react, beginners
Creating a carousel in React might sound a bit intimidating, but with the right tools and background knowledge it becomes much easier. First, we'll start by installing Bootstrap with npm. `npm install react-bootstrap bootstrap` After you have installed Bootstrap, you will need to import it into your JS. `import Carousel from 'react-bootstrap/Carousel';` You will also want Bootstrap's stylesheet: `import 'bootstrap/dist/css/bootstrap.min.css';` Now let's get to the fun part. You'll want to use the following code to create your carousel (note that react-bootstrap expects each slide to be wrapped in a `<Carousel.Item>` rather than a plain `<div>`):

```
<Carousel>
  <Carousel.Item><img src='https://cdn.mos.cms.futurecdn.net/MeU4HokrzUwhbd9PJBQCSV-1200-80.png'/></Carousel.Item>
  <Carousel.Item><img src='https://fs-prod-cdn.nintendo-europe.com/media/images/10_share_images/games_15/nintendo_switch_download_software_1/2x1_NSwitchDS_Overwatch2WatchpointPack_image1600w.jpg'/></Carousel.Item>
  <Carousel.Item><img src='https://i.ytimg.com/vi/wQATS4HOxdo/maxresdefault.jpg'/></Carousel.Item>
  <Carousel.Item><img src='https://p325k7wa.twic.pics/high/elden-ring/elden-ring/04-retailers/00-beautyshots/04-Standard/ER_standard_keyart.jpg?twic=v1/step=10/quality=80/max=760'/></Carousel.Item>
  <Carousel.Item><img src='https://cdn.cloudflare.steamstatic.com/steam/apps/570940/capsule_616x353.jpg?t=1668145065'/></Carousel.Item>
  <Carousel.Item><img src='https://images2.alphacoders.com/204/thumb-1920-204972.jpg'/></Carousel.Item>
  <Carousel.Item><img src='https://i.ytimg.com/vi/H4rYVsJ4v9Y/maxresdefault.jpg'/></Carousel.Item>
  <Carousel.Item><img src='https://cdn.akamai.steamstatic.com/steam/apps/12210/capsule_616x353.jpg?t=1618853493'/></Carousel.Item>
</Carousel>
```

Personally, I recommend putting the carousel in its own component in case you need to use it in different places throughout your app. Putting it in its own component will also make it significantly easier to see what the placement will look like in your app. After that, you're pretty much done. This will give you the basic foundation for your carousel.
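As a side note, if you'd rather not repeat the markup for every slide, the slide data can be generated from a plain array of image URLs. Here's a rough sketch of that idea in plain JavaScript (the `buildSlides` helper and example URLs are hypothetical; in the component you would render the result with `.map()`):

```javascript
// Hypothetical helper: turn a flat list of image URLs into the
// objects a carousel component would render, one per slide.
function buildSlides(urls) {
  return urls.map((src, index) => ({
    key: index,               // React needs a stable key per slide
    src,
    alt: `Slide ${index + 1}`,
  }));
}

const slides = buildSlides([
  'https://example.com/one.jpg',
  'https://example.com/two.jpg',
]);
console.log(slides.length); // one slide object per URL
```

This keeps the JSX down to a single `slides.map(...)` expression and makes it trivial to load the image list from props or an API later.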
It can be customized to your liking by changing fonts, speeds, styles, transitions, and image size. I recommend playing around with these to see what you prefer and what works best with your personal style and project. You can use this for a slide show or even a loading page if you wanted to. Happy coding!
javirodmart
1,290,153
MIG/MAG wires
MIG/MAG welding wire is used in welding. There is a difference between MIG and MAG: MIG stands for ...
0
2022-12-09T09:04:22
https://dev.to/royalweldingwire/migmag-wires-4l27
**[MIG/MAG welding wire](https://royalweldingwires.com/mig-mag-welding-mild-steel-wires/#70S3)** is used in welding. There is a difference between MIG and MAG: MIG stands for metal inert gas (argon, helium), and MAG stands for metal active gas (CO2).

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h3nr8xk0rnwnridd7fji.jpg)

MIG/MAG wires are most commonly employed to weld thin pieces of metal together. They are used to fill gaps between two pieces of metal by depositing filler wire into them. To join metal pieces, welders work with MIG/MAG. As the pieces are clamped together, the wire is fed into the gun of a welding machine with a constant feed and trigger action, which maintains a consistent weld line along the joint. The welder controls how much heat is applied to each area by using a regulator on the welding machine.
royalweldingwire
1,290,274
Tutorial on Robot Framework
So you know how you can use selenium webdriver to control browser for automation purposes? There's...
0
2022-12-09T12:44:53
https://dev.to/lakhbir_x1/tutorial-on-robot-framework-5bof
So you know how you can use selenium webdriver to control browser for automation purposes? There's something called Robot framework that is great for test automation and RPA purposes. You can see how it is used in Python to run tests {% embed http://bit.ly/3F9Gv1A %}
lakhbir_x1
1,290,277
Two Peas In A Pod - What Makes Isabella And Aiden's Friendship So Beautiful?
They say all good things come in pairs. After all, why chuckle alone when you can laugh...
0
2022-12-09T12:50:24
https://dev.to/ja0424953/two-peas-in-a-pod-what-makes-isabella-and-aidens-friendship-so-beautiful-367e
friendship, relationship, isabella, love
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fs2ffo5qwhrei722qdjc.jpg) They say all good things come in pairs. After all, why chuckle alone when you can laugh uncontrollably with someone? Why cry alone when you can wipe your tears using your best friend's shirt? Why go through this journey alone when you can pair with your pea in this pod we call life? Author Xayley Rose, in her upcoming novel, [My Other Half](https://www.amazon.com/My-Other-Half-Xayley-Rose-ebook/dp/B0BM57B4V8/ref=sr_1_8?crid=2V7M2D08UGL1K&keywords=My+Other+Half&qid=1669397326&s=books&sprefix=%2Cstripbooks%2C742&sr=1-8), introduces her fans to Aiden and Isabella, the embodiment of the phrase, "two peas in a pod." Let's read through the beautiful resonances she breathed in her characters, giving their relationship life. ## The test of time We all experience friendships, some long-term, some existing until the going gets really tough. No matter how many laughs you might share with your friend, the question is, will your friendship stand the test of time? Aiden's blue eyes wrapped around Isabella from the first day of school until high school like a warm blanket of comfort. Their duo was a nod that, yes, such relationships exist. ## Little gestures of love Actions speak louder than words. If you're in a healthy friendship, efforts must be reciprocated for both parties to be happy and feel loved. Being excited for your friend's birthday more than they are, coming over to surprise them when they're down, and holding them tight when all hope seems lost to them are just a few phrases in the love language of friendship. Aiden and Isabella are two friends who encapsulate this love language and translate it perfectly despite their banter. Xayley Rose treats us with numerous moments shared by Aiden and Isabella to make us gush. ## Getting used to you We all think we are good liars; why would anyone lie if they knew they'd be caught? 
But however closed a book you might deem yourself to be, it is a different story when it comes to your best friend. If your friend truly knows you, the slightest twitch of a brow or tweak in your voice is caught. A true friend is one who hears you say so much, even in the silence. Aiden reads Isabella like a book that doesn't know it's being read. Their friendship shows the beauty in knowing someone so profoundly that their every action is familiar. ## Dear best friend You are lucky if you have found your pea in this pod, just like Isabella and Aiden found each other. So the next time you see your best friend, hug them tightly and thank them for being there. Thank them for the silly jokes that make your day better, for the untidy card they made you on your birthday, but most importantly, thank them for being themselves and existing in your life. Spoke to your heart? Share Isabella and Aiden's relationship with your best friend by getting them a copy of My Other Half as a nod to how you find similarities between them and the two of you. Click on the link to learn more.
ja0424953
1,290,529
Open Source Dashboard (for js developers)
Javascript open-source maintainers often have a hard time tracking their npm package status. I used...
0
2022-12-09T14:08:52
https://blog.niradler.com/open-source-dashboard-for-js-developers
---
title: Open Source Dashboard (for js developers)
published: true
date: 2019-06-15 16:00:02 UTC
tags:
canonical_url: https://blog.niradler.com/open-source-dashboard-for-js-developers
---

Javascript open-source maintainers often have a hard time tracking their npm package status.

> Until recently I used npm-stats.com to get npm information about my packages. The site recently stopped working, so as a coder it was an easy decision to start working on a clone.

The package I created is called npmdash (short for npm dashboard). ![Open Source Dashboard (for js developers)](https://cdn.hashnode.com/res/hashnode/image/upload/v1642373588890/GV9w3XVCL.png) The homepage is a very simple input field where you enter an npm username. (Be patient, it can take a while.) ![Open Source Dashboard (for js developers)](https://cdn.hashnode.com/res/hashnode/image/upload/v1642373591036/qeCe_nRex.png) This is the packages view: a list of the packages sorted by monthly downloads, including data that can help you get more insight into each package. In the future, more relevant data will be added. You can use the [hosted](http://npm.devresources.site/) version, the Docker version, or the open-source CLI option. The hosted version uses AWS Fargate; you can clone the repo and build your own Docker image if needed, and deploy it to your own container solution.

CLI Usage

```
npm i -g npmdash
npmdash // will open the homepage
npmdash -u <username> // will open the packages view for this username
```

Docker Usage

```
docker pull niradler/npmdash
docker container run -p 8989:8989 npmdash
```

For future improvements, I would like to improve the GitHub integration, improve the UI, and maybe add reports? Still thinking about it. Enjoy and let me know what you think :)
niradler
1,290,578
Generate JWT secret key using OpenSSL
Searching online for a way to generate an HS512 JWT secret key, I found a Stack Overflow post with...
0
2022-12-09T15:00:50
https://dev.to/arc95/generate-jwt-secret-key-using-openssl-5edn
Searching online for a way to generate an HS512 JWT secret key, I found a [Stack Overflow post](https://stackoverflow.com/a/56934114/177416) with the OpenSSL command: ``` openssl rand -base64 172 | tr -d '\n' ``` Then the question was: do I have OpenSSL installed on my PC? If we have Git installed, [it includes OpenSSL](https://stackoverflow.com/a/51757939/177416) at this location: ``` C:\Program Files\Git\usr\bin\openssl.exe ``` Running the command at that location generates the key.
arc95
1,290,929
Getting Started with Pixel Vision 8
Start developing a game with the Pixel Vision 8 fantasy game console
0
2022-12-09T20:15:45
https://dev.to/cmiles74/getting-started-with-pixel-vision-8-1cmg
gamedev, pixelvision8, retrogaming
---
layout: post
title: Getting Started with Pixel Vision 8
published: true
description: Start developing a game with the Pixel Vision 8 fantasy game console
tags: gamedev, pixelvision8, retrogaming
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/br5jv0pbuzqmjcdv3p8g.png
---

I've been interested in fantasy consoles for a long time now, but my interest has been limited to reading a blog article here and there and thinking something like "Man, that looks like fun!" After literal years of doing other things, I've finally decided to put my own 8-bit arcade game together. After taking a look at the field, I decided to go with the [Pixel Vision 8][pv8]! My plan is to try and get some of the people at work interested, so it's important that the console be free. 😉

It's a pretty nice product and reasonably polished. It has handy tools for writing code, drawing sprites and tile maps, and editing music. It's also pretty modular; each specialized function is managed by a "chip" (color, tile, sprite, etc.) that can be customized or even replaced with your own code. Plus, it looks pretty dang awesome on my snazzy retro looking PC.

![Snazzy Retro PC](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1siys2v22uxexetnzj4n.jpg)

This post will cover getting the Pixel Vision 8 application installed, drawing your first sprite and moving that sprite around on screen. If people are interested and if I have time, I may write another post that covers getting an actual game coded up.

## Install the Pixel Vision 8 Application

Head over to the [Pixel Vision 8][pv8] home page and click on the link for the "Latest Build". This will take you to the page with the latest release on the project's GitHub page. Scroll down to the "Assets" section and then click on the archive for your operating system. For instance, if you were on MacOS and the latest release was 1.0.18, you'd click on the link that reads "PixelVision8-v1.0.18-macOS.zip".
Once you have the release downloaded, go ahead and unzip it. This is kind of on you, as every platform is a bit different. Good luck! With the distribution unpacked, open up the directory and take a look at the contents. There's a lot of stuff in there; you want to double-click the executable named "Pixel Vision 8". The machine will boot up, load its operating system and present you with a welcome screen.

![PV8 Welcome](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2ye3bzrwe5rfx2tnlt09.png)

Pretty neat, eh? It kind of reminds me of the [Amiga Workbench][amiga]. Go ahead and click the "Yes" button to create a new project. You will be dropped at the "Workspace Explorer" where you can browse all of the files in your starter template project.

## Draw a Sprite

First things first: we need a sprite to display on screen. For those of you not in the know, a [sprite][sprite] is a bitmap graphic that can easily be moved around on screen or stacked on top of or below other bitmaps. We will use this sprite to represent the player. [Sprites in Pixel Vision 8][pv8-sprite] are all 8x8 pixels in size. When your game starts up, the file "sprites.png" is loaded and chopped up into 8x8 images. Each of these images is then loaded into the memory of the sprite chip until it runs out of space. Each sprite is given an index number at load time (starting from 0) and you can use that index to fetch a particular sprite.

Double-click the folder labeled "Sprites" to open it up, then choose "New Sprite" from the "Workspace Explorer" menu (the menu is under an icon that looks like three colored slashes, "///"). The "New Sprite" dialog will appear, asking you to name the file; change the name to "sprites" (plural). At this point we run into a minor bug in the Pixel Vision 8 Workspace Explorer: we need the "sprites" file in the root of our project, next to our "code" file. Unfortunately we can _only_ create a new sprite file when we're in the "Sprites" directory.
Lucky for us this is easy to fix: simply drag the "sprites" file to the folder named "..", the one with the green up-arrow icon. You will be asked to confirm the move; go ahead and say "yes". You may now double-click the ".." folder to move up a level and you will see the files in your project.

Now that you have a file of sprite data, double-click the "sprites" file to open it. The sprite editor will open and you will be asked if you want to use the Pixel Vision 8 color palette; go ahead and say yes. You should now see one lone sprite in the upper-left hand corner of your canvas.

We want to edit that little sprite, but it's so small! In the lower right-hand corner of the editor is a large yellow button; click that button to zoom in and make the sprite as large as you can (if you click while holding down the shift key then everything will get smaller 😉). Now that it's nice and big, edit that sprite until you think it looks like your player; you can select the draw color by clicking on it, and the pencil and line tools may be used to draw your sprite. The magenta background will be ignored; at game time it will be replaced with whatever is behind your sprite.

![A Sprite](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s1f1miewz9sy8obm52dp.png)

When you like the look of your sprite, choose "Save" from under the "///" menu to save your sprite and then choose "Quit" to close the sprite editor.

## Displaying Your Sprite

The next step is to write enough code to get your sprite to display when your game is run. You should be back at the "Workspace Explorer", looking at the contents of the "sprites" directory. Double-click the "code" file to edit your game's code in the editor.

### The Game Loop

The Pixel Vision 8 lets you choose between using [Lua][lua] or [C#][c-sharp]; for this project we'll be using [Lua][lua]. It's pretty concise and, in my opinion, easier to pick up from scratch than C#.
You can read up on the language later; this tutorial will give you enough knowledge, just-in-time style, to get by. You will see a bunch of code in the file already; this is the sample code that comes with each new workspace. In my opinion it's not all that helpful; press "Control"+"A" to select all of the text and then hit the backspace or delete key to remove all of it. Next, type in the text below to code up the skeleton of your game.

```lua
function Init()
end

function Update(timeDelta)
end

function Draw()
end
```

This won't actually do anything, but it does provide a short game that presents an entirely black screen! Choose "Save" from under the "///" menu and then choose "Run Game" to try it out. Your game should load and you should see a screen of black. When you are done exploring this barren landscape, press the "Escape" key to quit your game and return to the editor.

These three functions are common to every game; they are called by the Pixel Vision 8 to perform the following...

+ Setup your game
+ Update the current state of your game (i.e. move the player)
+ Draw the current state of the game on screen

While the setup function ("Init") is only called once, the other two are called over and over. The "Update" function is called first; it can do things like see if any buttons are pressed and move the player to a new location. After that, the "Draw" function will be called to update what we see on the display. This is called ["the game loop"][game-loop]; it's a common pattern in a lot of different kinds of video games.

### Draw the Player

While we could jam all of our code into those three functions, this style is frowned upon by sane people everywhere. Instead we'll add some new functions to the top of our file and then call those functions where appropriate. First up is keeping track of the state of our player; add the following code to the top of your file.
```lua
function InitActors()
  player = {posX = 8, posY = 8, sprite = 0}
end
```

This code creates a new variable named `player` that keeps track of all the data about our player: their coordinates on screen (`posX` and `posY`) and the index of the sprite we're using to represent them (`sprite`). The data we've assigned to the player is called [a "table"][lua-table]; it stores any number of keys and values and makes it easy to get at their values and update them. Now we need to call our function to initialize our player when our game initializes. Edit your "Init" function to match the listing below.

```lua
function Init()
  InitActors()
end
```

Next up we want to draw the player. This will be pretty simple; we need to position the sprite in the player's current location and then draw the sprite. Type in the code below right underneath your "InitActors" function.

```lua
function DrawPlayer()
  DrawSprite(player.sprite, player.posX, player.posY)
end
```

The [DrawSprite][pv8-drawsprite] function is provided by the Pixel Vision 8 (via the sprite chip); it handles all of the nitty-gritty of drawing the sprite to the display. It takes three arguments, and conveniently all of the data it needs is available from our player data. We provide the sprite index, the coordinate of the player along the "x" axis and then the coordinate along the "y" axis. With that information in hand, our sprite will be drawn.

The last step is to update our "Draw" function to actually draw the player's sprite. Edit your "Draw" function to look like the code listed below.

```lua
function Draw()
  DrawPlayer()
end
```

Your entire file of code should look something like this...

![Code Listing](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3z8w9jqp9iu5zf9r54mw.png)

Go ahead and save the code file and run your game. You should see your sprite on the screen. 😎

### Move the Player

The last step is to let the player move around by pressing the arrow keys on their keyboard.
There's only a little bit of logic to handle; we'll add it to a new function named "UpdatePlayer". Note that Lua has no `+=` or `-=` operators, so each move is written out as a full assignment.

```lua
function UpdatePlayer()
  if (Button(Buttons.Left, InputState.Down, 0)) then
    player.posX = player.posX - 2
  end
  if (Button(Buttons.Right, InputState.Down, 0)) then
    player.posX = player.posX + 2
  end
  if (Button(Buttons.Down, InputState.Down, 0)) then
    player.posY = player.posY + 2
  end
  if (Button(Buttons.Up, InputState.Down, 0)) then
    player.posY = player.posY - 2
  end
end
```

The Pixel Vision 8 [Button][pv8-button] function is doing most of the work here. This function accepts a button type (up, down, left, right, etc.), a state for the button, and a controller index. The state of the button is either "Down" (pressed) or "Up" (not pressed) and there are only two controllers (0 for player 1, 1 for player 2). When you call the "Button" function it takes this information and compares it to the actual state of the controller; if the controller is in that state then the function returns "true". When this function is called we check each button and, if it's pressed, we update the current location of the player. Simple, right?

To make this work, add a call to this function to our "Update" function.

```lua
function Update(timeDelta)
  UpdatePlayer()
end
```

Try it out! You'll notice it doesn't work _exactly_ as expected. The problem is that we aren't clearing the display before we draw the sprite in the next location. The [Clear][pv8-clear] function will clear the screen; if we add that right before we draw the items on screen, we'll be all set.

```lua
function Draw()
  Clear()
  DrawPlayer()
end
```

Woot! If you are interested in going the extra mile, how would we change the "UpdatePlayer" code to let the player appear on one side of the screen when they reach the other, like in [Pac Man][pacman]? What would we do if we wanted the boundaries of the display to act as a wall? What might we change to make the player move faster or slower?
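On the extra-mile question about Pac Man-style wrap-around, one possible sketch looks like the following. The display dimensions and sprite size here are assumptions (256×240 and 8×8 are typical defaults); adjust them to match your display chip's settings:

```lua
-- Wrap the player to the opposite edge once the sprite fully leaves the screen.
-- displayWidth/displayHeight are assumed values, not read from the console.
local displayWidth = 256
local displayHeight = 240
local spriteSize = 8

function WrapPlayer()
  if player.posX < -spriteSize then player.posX = displayWidth end
  if player.posX > displayWidth then player.posX = -spriteSize end
  if player.posY < -spriteSize then player.posY = displayHeight end
  if player.posY > displayHeight then player.posY = -spriteSize end
end
```

Calling `WrapPlayer()` at the end of "UpdatePlayer" applies the wrap each frame; to turn the display edges into walls instead, you would clamp the coordinates to the `0` to `displayWidth - spriteSize` range rather than wrapping them.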
## Build Your Game

The last thing we'll cover is building a "disk" that contains your game; this provides you with one file that you can share with friends. You simply drag the disk file onto the Pixel Vision 8 window (or executable) to run the game. Go ahead and quit the code editor, bringing you back to the Workspace Explorer viewing the contents of your game's directory.

First we want to change the name of your game and its description; double-click on the "Info" file to edit your game's information. With this tool you may edit the name and description of your game, along with a couple other items. Go ahead and change the name and description, then choose "Save" from the "///" menu.

![Update Game Info](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8fq834apwc3lw0zvh425.png)

Next, click the "Build" button to the right of your game's name. You will be asked if you really want to build the game's disk; go ahead and click the "Build" button. Your game will be built and then you will be asked if you want to visit the new build directory; go ahead and click the "No" button and close the editor.

At this point your game is built! You can switch to your operating system's file browser; there should be a "PixelVision8" directory present. When you open up that directory, you should see another named "Workspace", which contains all of the files you were looking at with the Workspace Explorer. Open the "MyFirstGame" directory to see your game's files, then navigate through the "Builds" and "Build" directories. You should now see the disk file for your game.

Drag the disk to the Pixel Vision 8 window. The disk with your game should be visible on the desktop of the Workspace Explorer and the files that make up your game should be visible. Now open the disk by double-clicking it and then pick "Run" from the "///" window: you will be able to run your game! You can email this file to a friend or post it somewhere on the internet.
Anyone with the Pixel Vision 8 application installed can run your game or play around with it, perhaps even making their own changes. ## Conclusion It's quite an achievement to get this much done so quickly, the tools provided by the Pixel Vision 8 make it easy to get straight to game building. Looking at your code, there's not so much that it's hard to read, hopefully it's also pretty clear how things fit together. Good luck on your next video game! If time allows, I'll post another article that gets a little closer to a playable game. ![Your Sprite](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6r5nlzsy0rzjzbo2tes5.png) ---- [pv8]: https://pixelvision8.github.io/Website/ "Pixel Vision 8" [amiga]: https://www.gregdonner.org/workbench/wb_11.html "Amiga Workbench" [sprite]: https://en.wikipedia.org/wiki/Sprite_(computer_graphics) "Sprite" [pv8-sprite]: https://github.com/PixelVision8/PixelVision8/wiki/sprites "PV8 Sprites" [game-loop]: https://gameprogrammingpatterns.com/game-loop.html "Game Loop" [c-sharp]: https://learn.microsoft.com/en-us/dotnet/csharp/tour-of-csharp/ "C#" [lua]: https://www.lua.org/pil/contents.html "Lua" [lua-table]: https://www.lua.org/pil/2.5.html "Lua, Tables" [pv8-drawsprite]: https://github.com/PixelVision8/PixelVision8/wiki/api-draw-sprite "DrawSprite" [pv8-button]: https://github.com/PixelVision8/PixelVision8/wiki/api-button "Button" [pv8-clear]: https://github.com/PixelVision8/PixelVision8/wiki/api-clear "Clear" [pacman]: https://freepacman.org/ "Pac Man"
cmiles74
1,290,974
AdventJS 2022 - Code challenges for Javascript every day of December
I recently learned about an initiative that first launched last year and continues this year with its...
20,870
2022-12-09T23:22:51
https://dev.to/joseayram/adventjs-2022-retos-de-codigo-para-javascript-cada-dia-de-diciembre-3o17
javascript, typescript, adventjs, adventofcode
I recently learned about an initiative that first launched last year and continues this year with its second edition. It's [adventJS](https://adventjs.dev/), a space created by [@midudev](https://midu.dev/) where you can put your programming knowledge into practice by trying to solve one challenge per day until Christmas. You can solve the challenges in **Javascript** or **TypeScript**, whichever you prefer. There is a leaderboard, and points are awarded based on code quality. The previous edition of this project is still available at the following link: [adventJS 2021](https://2021.adventjs.dev/). The platform has its own console for coding and testing your results, so you won't need to install anything extra on your machine. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c7tinzopsafs1iym5d61.jpeg) If you're arriving a bit late like me, don't worry, you can still catch up. For more information, here is [the official announcement](https://midu.dev/adventjs-2022/) and its launch tweet: {% embed https://twitter.com/midudev/status/1597965996383412225?s=20&t=U5XwNIZYwC0wtSzGiiKw1g %} --- Although JS isn't my strong suit, **I'll give it a try**, and I hope you will too. As I progress, I'll publish my solutions here for comparison and feedback.
joseayram
1,291,480
A bit of blabbering about website startups.
I’ll be honest… It's really hard trying to get an audience for new startups in this day &amp; age if...
0
2022-12-10T08:03:21
https://dev.to/zackuknow/a-bit-of-blabberingabout-website-startups-3ofp
beginners, career, writing, startup
I'll be honest… It's really hard trying to get an audience for new startups in this day & age if you're new to this. I mean, can you REALLY do SEO and backlinking stuff from the get-go just by reading a few articles? This was the case until my friend told me about creating guest posts on already-popular websites or advertising on marketplaces, but that's really costly & hard to do for a new product owner. Then I thought I needed to do something to change this… So I started working on a project called simileto, which gives people a place to list their startups so others can get familiar with new products, and to get mentioned on their competitors' product pages. It's a listing site for apps, websites, games, marketplaces and alternatives, and a [software suggestion website](https://www.simileto.com/) made for people to get acquainted with replacements for old and new products when they shut down or are just starting up.
zackuknow
1,291,710
Flexbox (Parent Properties)
When we talk about flexbox properties we should separate between properties for parent &amp;...
0
2022-12-11T09:03:09
https://dev.to/omaradelattia/flexbox-parent-properties-38jb
css, flexbox, webdev, programming
When we talk about flexbox properties we should separate properties for the parent from properties for the children.

## Properties for the Parent

- <u>**Display**</u>

`.container {display: flex;}` vs `.container {display: inline-flex;}`

Both have the same effect on the children; the only difference is on the parent itself. **`inline-flex` makes the container display inline.**

- <u>**Flex-Direction**</u>

There are 4 main directions which are:

- `.container{flex-direction: row}` (default) the main axis is from left to right & the cross axis is from top to bottom
- `.container{flex-direction: row-reverse}` the main axis is from right to left & the cross axis is from top to bottom
- `.container{flex-direction: column}` the main axis is from top to bottom & the cross axis is from left to right
- `.container{flex-direction: column-reverse}` the main axis is from bottom to top & the cross axis is from left to right

But what are the main axis & cross axis?

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m8fwwwycsdb2ctklc59l.JPG)

Let's first understand the axes in flexbox. Flexbox is a one-dimensional layout, which means it applies the flex properties along only one dimension at a time. That gives us more than one axis: the **main axis** & the **cross axis**.

- <u>**Flex-Wrap**</u>

By default, flex items will all try to fit onto one line. We can change that and allow the items to wrap as needed with this property.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gr1ge5tyj9qvc60j8v7y.PNG)

- `.container{flex-wrap: nowrap}` (default) all flex items will be on one line
- `.container{flex-wrap: wrap}` flex items will wrap onto multiple lines, from top to bottom.
- `.container{flex-wrap: wrap-reverse}` flex items will wrap onto multiple lines from bottom to top.

- <u>**Flex-Flow**</u>

Well, `flex-flow` is shorthand for the flex-direction and flex-wrap properties.
For example:

```
.container {flex-flow: column wrap;}
```

- <u>**Justify-Content**</u>

`justify-content` defines the alignment along the main axis, so it helps us distribute any extra free space left over. Its types are:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ui7p8y1ackmt3lg5k20g.PNG)

- `.container{justify-content: flex-start}` (default) items are packed toward the start of the flex-direction.
- `.container{justify-content: flex-end}` items are packed toward the end of the flex-direction.
- `.container{justify-content: center}` items are centered along the line.
- `.container{justify-content: space-between}` items are evenly distributed in the line; the first item is on the start line, the last item on the end line.
- `.container{justify-content: space-around}` items are evenly distributed in the line with equal space around them. Note that visually the spaces aren't equal, since all the items have equal space on both sides. The first item will have one unit of space against the container edge, but two units of space before the next item because that next item has its own spacing that applies.
- `.container{justify-content: space-evenly}` items are distributed so that the spacing between any two items (and the space to the edges) is equal.
- `.container{justify-content: start}` items are packed toward the start of the writing-mode direction.
- `.container{justify-content: end}` items are packed toward the end of the writing-mode direction.
- `.container{justify-content: left}` items are packed toward the left edge of the container, unless that doesn't make sense with the flex-direction, in which case it behaves like start.
- `.container{justify-content: right}` items are packed toward the right edge of the container, unless that doesn't make sense with the flex-direction, in which case it behaves like end.

- <u>**Align-Items**</u>

`align-items` defines the alignment along the cross axis; it is the `justify-content` counterpart for the other axis.
Its types are:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vh55bg4j6uaz5itu5l7r.PNG)

- `.container{align-items: stretch}` (default) items stretch to fill the container.
- `.container{align-items: flex-start | start | self-start}` items are placed at the start of the cross axis. The difference between these is subtle, and is about respecting the flex-direction rules or the writing-mode rules.
- `.container{align-items: flex-end | end | self-end}` items are placed at the end of the cross axis. The difference again is subtle and is about respecting flex-direction rules vs. writing-mode rules.
- `.container{align-items: center}` items are centered on the cross axis.
- `.container{align-items: baseline}` items are aligned such that their baselines align.

- <u>**Align-Content**</u>

To see `align-content` at work we should use `flex-wrap: wrap`, since it only applies when items span multiple lines, as described in the photo below:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fittw89fmwzz2fjkymj7.PNG)

- `.container{align-content: normal}` (default) items are packed in their default position as if no value was set.
- `.container{align-content: flex-start | start}` items packed to the start of the container. The (more supported) flex-start honors the flex-direction while start honors the writing-mode direction.
- `.container{align-content: flex-end | end}` items packed to the end of the container. The (more supported) flex-end honors the flex-direction while end honors the writing-mode direction.
- `.container{align-content: center}` items centered in the container.
- `.container{align-content: space-between}` items evenly distributed; the first line is at the start of the container while the last one is at the end.
- `.container{align-content: space-around}` items evenly distributed with equal space around each line.
- `.container{align-content: space-evenly}` items are evenly distributed with equal space around them.
- `.container{align-content: stretch}` lines stretch to take up the remaining space.

- <u>**Gap & Row-Gap & Column-Gap**</u>

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n58hgtn9h6cyj4suhbhe.PNG)

The gap property controls the space between flex items. It applies that spacing only between items, not on the outer edges.

> The gap acts as a minimum: when we apply `justify-content: space-between;`, the gap only has a visible effect if the distributed space would otherwise end up smaller than the gap.

This article did not cover every flexbox parent-property trick, but it has covered the most used ones. I hope it is helpful, and I am open to any comments or updates.
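Putting several of the parent properties above together, a small illustrative snippet might look like this (the `.container` class name is just the placeholder used throughout the article):

```css
/* A wrapping row whose lines are centered on the cross axis,
   with items spread along the main axis and a minimum gap between them. */
.container {
  display: flex;
  flex-flow: row wrap;            /* shorthand for flex-direction + flex-wrap */
  justify-content: space-between; /* main-axis distribution */
  align-items: center;            /* per-line cross-axis alignment */
  align-content: center;          /* cross-axis packing of the wrapped lines */
  gap: 1rem;                      /* minimum space between items */
}
```

Note how `align-items` and `align-content` coexist here: the former aligns items within each line, while the latter packs the lines themselves, which only matters because `flex-wrap: wrap` allows multiple lines.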
omaradelattia
1,291,722
HTML-FS-1.1. Task 3 - New article widget for the "Netology" blog
A post by AlexFRANTSUZOV
0
2022-12-10T12:31:27
https://dev.to/alexfrantsuzov/html-fs-11-zadacha-3-vidzhiet-novoi-stati-v-bloghie--9im
codepen
{% codepen https://codepen.io/frantsuzovALEX/pen/poKBLag %}
alexfrantsuzov
1,291,937
ChatGPT, the last nail on the coffin for coding interviews?
You know how you rely on Google to solve all of your life's problems? No? So it's just me...
0
2022-12-11T18:39:32
https://dev.to/sharonsinei/chatgpt-the-last-nail-on-the-coffin-for-coding-interviews-5b6l
chatgpt, programming, webdev, discuss
You know how you rely on Google to solve all of your life's problems? No? So it's just me then? Programmers, I'm certain, can relate when it comes to solving their coding problems. Now imagine if Google were on steroids. That's how some are describing [ChatGPT](https://en.m.wikipedia.org/wiki/ChatGPT). ![An image implying that ChatGPT is here to replace Google](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9irw41d1nxbtozgljy2l.jpg) The AI is here to disrupt, and it's thrilling to see how the tech industry will take to it, specifically with regard to coding interviews. ![An image showing human beings working together with AI](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7j2awvzqo3axo8wycaq7.jpg) Some companies might opt to ditch coding interviews entirely and stick to holding technical discussions with candidates. Some might just tighten the monitoring process of the interviews. Programming interviews might now be more focused on understanding problem-solving capabilities and thought processes, and on assessing familiarity with tools (such as [ChatGPT](https://en.m.wikipedia.org/wiki/ChatGPT)). Looking at the glass as half full, ChatGPT will open the door to new and interesting ways of solving problems, including for hiring good people. ![An image showcasing how AI will help humanity to solve problems](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0tuvpk6ytnve66tm5kx2.jpg) Software engineers, you can relax; your jobs are safe, probably.
sharonsinei
1,319,731
Kubernetes for Java Developers: A Step-by-Step Guide to Containerizing Your Applications
Kubernetes is an open-source container orchestration system that can be used to automate the...
0
2023-01-06T10:36:28
https://dev.to/amarpreetbhatia/kubernetes-for-java-developers-a-step-by-step-guide-to-containerizing-your-applications-2hko
kubernetes, containerapps, java, cloudnative
Kubernetes is an open-source container orchestration system that can be used to automate the deployment, scaling, and management of containerized applications. There are several benefits that a Java developer might realise by using Kubernetes:

1. **Improved resource utilisation**: Kubernetes can improve resource utilisation by allowing you to more efficiently pack containers onto hosts. This can lead to cost savings and improved performance.
2. **Greater flexibility**: Kubernetes allows you to easily deploy and manage Java applications across a cluster of hosts, which can be more flexible than managing a fleet of virtual machines.
3. **Better scalability**: Kubernetes makes it easy to scale Java applications up or down by adding or removing containers from the cluster. This can be more efficient and faster than scaling virtual machines.
4. **Enhanced reliability**: Kubernetes can automatically detect and replace failed containers, and it can also perform rolling updates to ensure that applications remain available during updates.
5. **Simplified management**: Kubernetes provides a unified interface for managing containerized applications, which can simplify the process of deploying and managing Java applications at scale.
6. **Streamlined development process**: Kubernetes can make it easier for Java developers to build and test applications by providing a consistent environment for development, staging, and production.

Overall, Kubernetes can help Java developers build and manage highly scalable and reliable applications more efficiently.

_Let's put the Hello World sample below on Kubernetes:_

```
public class ClassA {
    public static void main(String[] args) {
        System.out.println("Hello World");
    }
}
```

To containerize a Java program for Kubernetes, you will need to follow these steps:

1. Create a Dockerfile for your Java program. A Dockerfile is a text file that contains instructions for building a Docker image.
Your Dockerfile should include instructions for installing any dependencies your Java program requires, as well as instructions for copying your Java code into the image and setting the command to run your program. Here is an example of a simple Dockerfile for a Java program:

```
FROM openjdk:8-jre-alpine
ADD my-java-program.jar /app/my-java-program.jar
CMD ["java", "-jar", "/app/my-java-program.jar"]
```

2. Build the Docker image using the Dockerfile. You can do this using the docker build command. For example:

```
docker build -t my-java-program .
```

3. Test the Docker image by running a container using the docker run command. For example:

```
docker run -it my-java-program
```

4. Push the Docker image to a Docker registry. A Docker registry is a repository for storing and distributing Docker images. You can use a public registry such as Docker Hub, or you can set up your own private registry.

5. Create a Kubernetes deployment to manage the containers running your Java program. A deployment is a higher-level object that manages a group of replicas of your application. You can create a deployment using a YAML file that specifies the details of your deployment, including the number of replicas you want to run and the Docker image to use. Here is an example YAML file for a deployment:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-java-program
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-java-program
  template:
    metadata:
      labels:
        app: my-java-program
    spec:
      containers:
      - name: my-java-program
        image: my-java-program
        ports:
        - containerPort: 8080
```

6. Apply the deployment to your Kubernetes cluster using the kubectl command, for example `kubectl apply -f deployment.yaml`.

That's it! Your Java program should now be running in a container on your Kubernetes cluster. You can use the kubectl command to view the status of your deployment and make any necessary adjustments.
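Step 4 above doesn't show a command; pushing to a registry typically looks like the sketch below, where `myregistry.example.com` is a placeholder for your actual registry host, not a real registry:

```shell
# Tag the local image with the registry host, then push it.
# "myregistry.example.com" and the ":1.0" tag are placeholders.
docker tag my-java-program myregistry.example.com/my-java-program:1.0
docker push myregistry.example.com/my-java-program:1.0
```

The `image:` field in the deployment YAML would then reference the same `myregistry.example.com/my-java-program:1.0` name, so that the cluster nodes know where to pull the image from.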
amarpreetbhatia
1,319,760
The Programming Language You Learn Doesn’t Matter That Much
It’s the side work you do that matters Continue reading on JavaScript in Plain English »
0
2023-01-06T12:01:27
https://javascript.plainenglish.io/the-programming-language-you-learn-doesnt-matter-that-much-f5f5c8ed71d0
programminglanguages, programming, coding, javascript
---
title: The Programming Language You Learn Doesn’t Matter That Much
published: true
date: 2023-01-06 07:16:50 UTC
tags: programminglanguages,programming,coding,javascript
canonical_url: https://javascript.plainenglish.io/the-programming-language-you-learn-doesnt-matter-that-much-f5f5c8ed71d0
---

[![](https://cdn-images-1.medium.com/max/2600/1*YATZQO1ojT9s728i6TIrHA.jpeg)](https://javascript.plainenglish.io/the-programming-language-you-learn-doesnt-matter-that-much-f5f5c8ed71d0?source=rss-b970c961c4e1------2)

It’s the side work you do that matters

[Continue reading on JavaScript in Plain English »](https://javascript.plainenglish.io/the-programming-language-you-learn-doesnt-matter-that-much-f5f5c8ed71d0?source=rss-b970c961c4e1------2)
codewithbernard
1,319,828
Attack Vectors in Solidity #09: Bad randomness, also known as the "nothing is secret" attack
Introduction Bad randomness is a type of vulnerability that can occur in smart contracts...
0
2023-01-06T13:01:50
https://dev.to/natachi/attack-vectors-in-solidity-09-bad-randomness-also-known-as-the-nothing-is-secret-attack-ca9
blockchain, web3
### Introduction

Bad randomness is a type of vulnerability that can occur in smart contracts written in Solidity, the programming language used for writing contracts on the Ethereum platform. This vulnerability arises when a contract relies on randomness to generate a value or make a decision, but the source of the randomness is not truly random or can be predicted by an attacker. In this article, we will discuss the concept of bad randomness as an attack vector in Solidity, provide code examples, and discuss potential solutions to this problem.

### Vulnerability

One common example of bad randomness in Solidity is when a contract uses the block hash as a source of randomness. The block hash is a value that is derived from the contents of a block on the Ethereum blockchain, and it is considered to be unpredictable because it is based on the transactions and other data contained in the block. However, the block hash is not truly random because it is determined by the contents of the block, which may be known or predictable to an attacker. For example, consider the following Solidity code, which uses the closely related block difficulty value:

```solidity
function randomNumber() public view returns (uint) {
    return uint(keccak256(abi.encodePacked(block.difficulty))) % 10;
}
```

This contract function generates a random number between 0 and 9 by taking the keccak256 hash of the block difficulty and modulo 10. While the block difficulty may be unpredictable, it is still determined by the miners who are adding blocks to the blockchain, and an attacker may be able to influence the difficulty and predict the output of this function.

Another example of bad randomness in Solidity is when a contract uses the current block timestamp as a source of randomness. The block timestamp is the time at which a block was added to the blockchain, and it is considered to be unpredictable because it is determined by the miner who adds the block.
However, the block timestamp is not truly random because it can be influenced by an attacker who controls a miner or has the ability to manipulate the time on their own machine. For example, consider the following Solidity code:

```solidity
function randomNumber() public view returns (uint) {
    return uint(keccak256(abi.encodePacked(block.timestamp))) % 10;
}
```

This contract function generates a random number between 0 and 9 by taking the keccak256 hash of the block timestamp and modulo 10. While the block timestamp may be unpredictable, it is still determined by the miner who adds the block, and an attacker may be able to influence the timestamp and predict the output of this function.

### Solution

To address the issue of bad randomness in Solidity, there are several potential solutions that developers can consider. One option is to use a hardware random number generator (RNG) to generate truly random values that cannot be predicted by an attacker. Another option is to use a decentralized randomness beacon, such as Chainlink's VRF, which is a decentralized oracle service that provides secure and verifiable randomness to smart contracts.

### Conclusion

In conclusion, bad randomness is a significant attack vector in Solidity, as it can allow an attacker to predict or influence the output of contract functions that rely on randomness. To mitigate this risk, developers should use secure sources of randomness, such as hardware RNGs or decentralized randomness beacons, to ensure that their contracts are resistant to this type of attack.

---

Thanks for checking out my article! I hope you found it helpful and informative. If you have any thoughts or feedback, please don't hesitate to leave a comment. And if you have any questions or suggestions for future articles, I'd love to hear them. Thanks for your support!
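To see why hashing a miner-influenced value yields nothing secret, here is a small Python sketch of the vulnerable pattern above. Note the assumption: Python's `sha3_256` stands in for Solidity's `keccak256` (their padding differs, so the digests don't match byte-for-byte), but the predictability argument is identical.

```python
import hashlib

def pseudo_random(block_timestamp: int) -> int:
    # Mimic uint(keccak256(abi.encodePacked(block.timestamp))) % 10,
    # with sha3_256 as a stand-in for keccak256.
    digest = hashlib.sha3_256(block_timestamp.to_bytes(32, "big")).digest()
    return int.from_bytes(digest, "big") % 10

# Anyone who can guess (or influence) the timestamp computes the same
# "random" value off-chain before transacting.
prediction = pseudo_random(1_672_531_200)
assert prediction == pseudo_random(1_672_531_200)  # fully deterministic
```

Because the function is a pure hash of a public, miner-influenced input, an attacker can evaluate it before submitting a transaction; that is the entire attack.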
natachi
1,320,103
100 days
New initiative: code every day for 100 days. I have 13 days left in my plan to complete Mosh's Java...
0
2023-01-06T17:25:31
https://dev.to/caseyeee/100-days-34a9
New initiative: code every day for 100 days. I have 13 days left in my plan to complete Mosh's Java mastery course. Then I will learn to use GitHub and start making commits. Today I worked on a mortgage calculator and got better at setting up while loops. I also dealt with bringing variables into scope.
caseyeee
1,322,716
Vue skills to master in 2023
As the front-end development world continues to evolve, the Vue.js framework and its team of sage...
0
2023-01-09T17:51:07
https://www.vuemastery.com/blog/vue-skills-to-master-in-2023/
coding, vue, javascript
---
title: Vue skills to master in 2023
published: true
date: 2023-01-06 10:00:53 UTC
tags: coding,vuejs,javascript
canonical_url: https://www.vuemastery.com/blog/vue-skills-to-master-in-2023/
---

![Vue skills to master in 2023](https://cdn-images-1.medium.com/max/1024/1*ypj6p8VicjGPlBrPkWCkgw.jpeg)

As the front-end development world continues to evolve, the Vue.js framework and its team of sage core team members is at the leading edge of innovation. As a Vue developer, now more than ever there are new skills, tools and libraries you can add to your tool chest so you can **be the best Vue developer you can be in 2023.**

If you want to stay at the top of your game, you want to reach mastery and stay there. Mastery not only gives you confidence in your work, but it can guarantee job security, unlock the promise of promotions, and open up doors to expansive opportunities.

So what exactly should we Vue devs be focusing on for 2023? This article serves as your guide to the hot topics, tools and trends that you can stay on top of to elevate your code and level up your life.

### Composing like a pro 🧑‍💻

When Vue 3 was released, we received the Composition API, which empowers us to build highly modular, reusable, and scalable Vue apps. The key to _how_ we can achieve this is in the name: composition.

> _When we truly understand how to get the most out of the Composition API, we wield our wizard wand of wonder._ **🧙‍♂️✨**

### The Composition API

If you’re still on Vue 2, or you’re only using the old Options API in your Vue 3 apps, it’s time to join the new frontier and [learn the Composition API](https://www.vuemastery.com/courses/vue-3-essentials/why-the-composition-api). In case you missed it, Vue 2 will officially retire two years from the time of this writing. While the Options API still works and will likely remain relevant for years to come inside Vue 3 apps, it’s just one _option_ for how to compose your components.
And it’s certainly not the most powerful, flexible and readily reusable option. In fact, since the release of Vue 3, the Composition API has even seen a revamp with the [Script Setup](https://www.vuemastery.com/blog/a-brief-history-of-vue-script-setup), which is a syntactic sugar-coated version of the Composition API that gives Vue developers the freedom to create even more performant components with built-in readability, reusability, maintainability, modularity, and makes them more TypeScript-friendly to boot.

There’s a reason the Script Setup syntax is the recommended way to compose components, according to Vue Creator Evan You. So if you’ve been waiting for a sign to start learning it, stop waiting. You can start with our [Vue 3 Composition API](https://www.vuemastery.com/courses/vue-3-essentials/why-the-composition-api) course, taught by Gregg Pollack, and then grab your shovel and go on a [Vue 3 Deep Dive](https://www.vuemastery.com/courses/vue3-deep-dive-with-evan-you/vue3-overview), with the guidance of Evan You himself.

☝ The upcoming Vapor Mode that Evan You is working on will only work with the Composition API.

### Crafting Composables

One of the core powers the Composition API gives us is the ability to write and use composables. When _composed carefully_, these are neatly organized, transparent, and highly reusable pieces of reactive code that can be seamlessly shared across our components. You can think of them as similar to React Hooks, or mixins from the Vue 2 days (without the drawbacks and gotchas of mixins).

By now, you may have heard of or even used a composable from the popular VueUse library. This year, we advise you to hone your ability to craft elegant composables. This will allow you to not only better understand and incorporate composables from existing libraries, but you’ll be able to create your own useful library of composables that are custom-crafted to excel across your app(s).
Check out our [Coding Better Composables](https://www.vuemastery.com/courses/coding-better-composables/what-is-a-composable) course to get going.

### Join the Pinia Pioneers 🍍

Last year, Pinia became the officially supported state management library for Vue.js, sending Vuex into the front-end history books. Sure, like the Options API, Vuex is still very much in use in apps around the world, but it’s the old model of a tool that they aren’t making anymore because it’s been replaced by something thoughtfully reimagined (by Core Vue.js Team Member Eduardo San Martin Morote) to be more intuitive, lighter weight, more modular, freeingly flexible, and developer-friendly.

If you’re wondering (worrying?) if you should convert your app that relies on Vuex over to Pinia, the answer is the same as many development questions: it depends. It depends on how much overhead that adds to you/your team’s workload compared to the added benefits that this shiny new library will provide.

However, if you’re starting a new project, or just plain intend on staying in the Vue ecosystem for the foreseeable future, you’ll need to know how to work with Pinia sooner or later. And when it comes to reaching mastery and staying there, sooner is always better than later.

To join the Pinia pioneers paving the path of state management in 2023, you can start with the [Pinia Fundamentals](https://www.vuemastery.com/courses/pinia-fundamentals/fundamentals-what-is-pinia) course, then use the [From Vuex to Pinia](https://www.vuemastery.com/courses/from-vuex-to-pinia/what-is-pinia) course to map your Vuex knowledge over to the Pinia paradigm. Our upcoming **Proven Pinia Patterns** course will teach you more advanced use cases and best practices when architecting your apps with Pinia.

### Nuxt 3 is ready for liftoff 🚀

At the end of 2022, we finally received the much-anticipated Nuxt 3 official release.
This is a big deal because the Nuxt framework has now been completely rewritten from the ground up to be faster, more performant, and easier to maintain. This production-proven framework helps you build scalable Vue apps with best practices and time-savers built in, and it’s now future-proofed with an improved CLI, new tools (such as Nuxt Suspense, Nuxt Bridge, Nuxt DevTools) and an ecosystem of useful modules.

Due to the rewrite being done in TypeScript, this means we get the option of type-checking with TypeScript support built into our Nuxt apps. (Not using TS? It’s completely optional).

We have an expanding playlist of Nuxt 3 courses, which starts with the [Nuxt 3 Essentials](https://www.vuemastery.com/courses/nuxt-3-essentials/nuxt-3-overview) course, and includes a course where you can [Build a Blog with Nuxt 3](https://www.vuemastery.com/courses/build-a-blog-nuxt3-content/nuxt3-blog-introduction) using Nuxt Content module V2, taught by Nuxt Ambassador Ben Hong.

Behind the scenes, we’re currently producing a Fullstack Nuxt 3 course, which will be power-packed full of modern Vue-ecosystem tools so you can combine a number of libraries, including:

This will be one of our most comprehensive courses to date, so keep an eye out for its release!

### VueFire for Firebase 🔥

Speaking of Firebase… until recently we didn’t have an elegantly opinionated way to sync up our Vue 3 apps with Firebase. Now, with the brand new VueFire library, we have a starter kit that simplifies using Firebase’s Cloud Firestore database and authentication-provider for your Vue 3 apps.

VueFire is the latest tool built by Eduardo San Martin Morote (creator of Vue Router, Pinia, and more). If you’ve used Eduardo’s libraries, you know they’re thoughtfully designed and enable elegant implementations. In the case of VueFire, this is a collection of composables that allows us to create realtime bindings, making it straightforward to **always keep your local data in sync** with remote databases.
Our course [**Firebase with Vue 3 & VueFire**](https://www.vuemastery.com/courses/firebase-with-vue3-and-vuefire/firebase-introduction) is the best way to add this new tool to your arsenal.

### It’s pronounced “Veet” not “Vite” ⚡

If you haven’t heard of Vite.js by now, allow me to pull you out from underneath that boulder and introduce you to one of the most popular and rapidly adopted tools of the web world.

> _“The next generation of front end tooling” — Evan You_

Created by Evan You, Vite is a build tool that comes with a dev server and bundles your code for production. As its name suggests (Vite is French for “quick”), Vite is lightning-fast and highly flexible for build processes that require a DIY configuration.

If you want to learn Vite from the man who wrote it, check out our full [Lightning Fast Builds with Vite](https://www.vuemastery.com/courses/lightning-fast-builds-with-vite/intro-to-vite/) workshop taught by Evan You.

As Vite has quickly reshaped the world of web dev, it has birthed offspring such as Vitest, a blazing-fast unit test framework that is Jest-compatible and comes with out-of-the-box ESM, TypeScript and JSX support, and is powered by esbuild. We’re working on a complete Vitest course, but in the meantime you can [Get Started with Vitest](https://www.vuemastery.com/blog/getting-started-with-vitest) here on our blog.

### TypeScript for a better night’s rest

On the topic of rapidly adopted web dev tools, we round out this guide with the incredibly popular TypeScript. It’s not by chance that all of the major frameworks have been written in TypeScript under the hood (Vue 3, Nuxt 3, Vite…). TypeScript gives us that added layer of type-checked safety, predictability, and bug-proofability that helps us sleep at night and increases our team’s collaborative productivity because we aren’t pulling our hair out wondering who thought it was a good idea to send strings into a parameter that should only ever be an object ( **!?!🤦!?!** ).
TypeScript is one of those tools whose value is self-explanatory as soon as you reach the point where its absence is dragging down your productivity. And with the Volar extension for VS Code, it’s that much easier to use.

If you’re ready to join the TypeScript gang, we’ve got you covered with our courses [Intro to TypeScript + Vue 3](https://www.vuemastery.com/courses/vue3-typescript/why-vue-%26-typescript) and [TypeScript Friendly Vue 3](https://www.vuemastery.com/courses/typescript-friendly-vue3/introduction-to-the-script-setup-syntax/).

### What else should you master?

While this list isn’t exhaustive, it’s a great start to elevating your skills and future-proofing your career. You can find all of these topics and more in our always growing library of premium courses taught by the same people _building the tech_ and working with it daily.

If you aren’t yet a member of the Vue Mastery community, we invite you to join us at a deeply discounted price of 55% off, saving you $165 on a year’s access to all of our existing and upcoming courses. [Use the discount now](https://www.vuemastery.com/pricing/?coupon=FUTURE-PROOF-23) while it’s still active.

_Originally published at_ [_https://www.vuemastery.com_](https://www.vuemastery.com/blog/vue-skills-to-master-in-2023/) _on January 6, 2023._

* * *
vuemasteryteam
1,320,340
Explain like I'm 5
I asked chatGPT to explain the Flutter Architecture to me as if I were 5 years old. Considering...
0
2023-01-07T00:02:10
https://dev.to/joshjgomes/explain-like-im-5-4h2e
webdev, flutter, programming, beginners
I asked chatGPT to explain the Flutter Architecture to me as if I were 5 years old. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/otb3oxwswxsmpd6tlr99.png) _Considering how creative its explanation was, chatGPT will continue to be a sharp tool in the kit of those with great ambition for change._ Thanks for stopping by
joshjgomes
1,320,441
Supply chain security incident at CircleCI: Rotate your secrets
On January 4, CircleCI, an automated CI/CD pipeline setup tool, reported a security incident in their...
0
2023-01-09T10:20:36
https://snyk.io/blog/supply-chain-security-incident-circleci-secrets/
vulnerabilities
---
title: "Supply chain security incident at CircleCI: Rotate your secrets"
published: true
date: 2023-01-06 22:07:00 UTC
tags: Vulnerabilities
canonical_url: https://snyk.io/blog/supply-chain-security-incident-circleci-secrets/
---

On January 4, CircleCI, an automated CI/CD pipeline setup tool, reported a security incident in their product by sharing an [advisory](https://circleci.com/blog/january-4-2023-security-alert/).

## Context around the CircleCI Incident

On December 27, security engineer Daniel Hückmann received an email notification about a potential intrusion in his CircleCI account thanks to an AWS CanaryToken placed by him. The decoy, in the form of an AWS key, was likely triggered by the attacker. Decoys, also known as honeytokens, can be placed in the systems to investigate and detect intrusions and activated by an attacker to alert you of potential security breaches.

![](https://snyk.io/wp-content/uploads/blog-circleci-incident-email-1240x1361.jpg)

It appears that CircleCI was breached on December 21, the same day it published a [reliability update](https://web.archive.org/web/20230105041902/https://circleci.com/blog/an-update-on-circleci-reliability/) emphasizing its dedication to improving its services. This update followed a series of similar updates in 2022 and came after several security issues for CircleCI in recent years.

In mid-2019, the company [experienced a data breach](https://web.archive.org/web/20200323153337/https://support.circleci.com/hc/en-us/articles/360034852194-security-incident-on-8-31-2019-details-and-faqs-) due to the compromise of a third-party vendor, resulting in the disclosure of user data such as usernames and email addresses associated with GitHub and Bitbucket accounts, as well as other sensitive data. In 2022, threat actors were also [caught stealing GitHub accounts](https://thehackernews.com/2022/09/hackers-using-fake-circleci.html) using fake CircleCI email notifications sent to users.
## The impact of secret sprawling on the software supply chain

Storing secrets, such as passwords and API keys, in multiple locations is known as “secret sprawling” and can make it difficult to manage and secure them effectively. This can happen within an organization as well as in the software supply chain. When secrets are scattered in different locations, it becomes challenging to track and update them, which increases the risk of unauthorized access.

Another issue is the use of long-standing privileges, also called static secrets, that are not rotated regularly. Although secrets management platforms can effectively store and encrypt secrets, static secrets are still vulnerable. They may leak if included in plain-text form in application source code, recorded in application logs, or stored in configuration files. The more a secret is used, the greater the risk of it being compromised. Additionally, static secrets can grant persistent privileges across multiple systems, providing hackers with ongoing access to your infrastructure.

There are several issues that can arise with the use of static credentials, which are shared and have widespread usage. These include the lack of a clear owner, stale credentials that have not been cleaned up when services are decommissioned, a lack of regular rotation process, no expiration or time-to-live limit, and the unnecessary use of static credentials when more secure access control mechanisms, such as short-lived dynamically created tokens, are available.

In a supply chain attack, a hacker infiltrates a company’s supply chain to gain access to their systems or data. This can be particularly dangerous because it allows the hacker to bypass security measures and access sensitive information. The attacker will often try to compromise the source code or build processes, initiate malicious activity, or allow further breaches down the supply chain.
It is important for organizations to recognise the risks of secret sprawling and put measures in place to [prevent supply chain attacks](https://snyk.io/solutions/software-supply-chain-security/). Software products often consist of various components sourced from different vendors and stored in different repositories. If hackers are able to gain access, they may embed malware and ensure that the sub-component is signed, allowing it to be trusted as it moves through the supply chain.

To address these issues, organizations can use a secrets management solution with an encrypted vault to secure credentials and keys. This solution should also offer developers streamlined workflows for accessing secrets as needed, governed by policies approved by the security team. To further protect keys and credentials, the solution should be able to easily rotate secrets.

Dynamic secrets are a type of credential that are generated on-demand and provide temporary access to a resource for a limited period of time with a limited set of permissions. They are not stored, making them less vulnerable to attack. Even if a dynamic secret is compromised, it becomes useless before a cybercriminal can use it.

Dynamic secrets are often used in conjunction with zero standing privileges (ZSP), which means that clients have privileged access to a resource with only the minimum rights needed to accomplish a specific task for the minimum time required. This approach helps to reduce the risk of unauthorized access and ensures that privileges are granted only when needed.

## How to rotate secrets

Rotating secrets is replacing an existing secret (such as a password, API key, or SSH key) with a new one in order to reduce the risk of them being compromised. This can be done manually or automatically, depending on the system being used. There are a few different strategies you can use to rotate secrets:

- **Regular intervals:** You can set a schedule to rotate secrets at regular intervals.
This can help to reduce the risk of a secret being compromised, as an attacker would only have a limited window of time to try to exploit it.
- **On-demand:** You can also rotate secrets whenever you suspect that a secret may have been compromised or if you want to reduce the risk of compromise proactively.
- **Automation:** You can use tools and scripts to automate rotating secrets. This helps ensure that secrets are rotated consistently and promptly.

When rotating secrets, it is important to ensure that the new secret is adequately protected and properly implemented and configured. Communicating the change to any relevant parties, such as system administrators and users, is also necessary.

## Rethinking the strategy of secrets management

While we may have previously believed that our secrets were secure, past instances of platform compromise have caused organizations to reevaluate their strategies for storing secrets and determine the most secure method. It’s important to consider that once secrets are used and committed, they are no longer secrets. Therefore, what is the best way to manage secrets? Should they not be used at all or just used once and then revoked?

## CircleCI recommendations for users

CircleCI has recommended that all users immediately rotate any secrets stored in their platform, whether they be in project environment variables or contexts. In addition, they advise reviewing internal logs for any unauthorized access between December 21, 2022 and January 4, 2023 or after the secret rotation has been completed. CircleCI has also invalidated Project API tokens.

If you work in software development or infrastructure environments, you must be aware that rotating secrets can be difficult and can disrupt CI/CD workflows. What you can do is:

- Create an inventory of the environment variables used in all of your projects and pipelines, including those stored at both the Context level and Project level.
- Do not delete your secrets from CircleCI, instead **revoke them** to prevent any potential malicious actors from accessing them.
- Make sure to invalidate OAuth Tokens and rotate the environment variables.

This post will be updated if any new information arises.

## Secure your software supply chain

See how Snyk can secure your sprawling software supply chain. [Schedule a demo](https://snyk.io/schedule-a-demo/)
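The on-demand rotation strategy described above can be sketched in a few lines of Python. This is only an illustration: the `store` dict and the final revocation step stand in for your real secrets backend and the token-issuing service, and the variable names are hypothetical.

```python
import secrets

def rotate_secret(store: dict, name: str) -> str:
    """Mint a strong replacement credential, swap it into the (hypothetical)
    in-memory secrets store, and return the old value so the caller can
    revoke it with the issuing service."""
    old_value = store.get(name)
    store[name] = secrets.token_urlsafe(32)  # cryptographically strong token
    return old_value

# Example: rotating a compromised CI token held in an in-memory store.
store = {"CIRCLE_API_TOKEN": "leaked-value"}
old = rotate_secret(store, "CIRCLE_API_TOKEN")
assert store["CIRCLE_API_TOKEN"] != old  # new credential is now in place
# ...then revoke `old` with the issuing service, as the advice above suggests.
```

The key design point is order: the new credential is installed first, and the old one is revoked only afterwards, so dependent systems never see a window with no valid secret.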
snyk_sec
1,320,777
An easy, simple analysis of the classes in the Bootstrap card component
Today we will try to discuss a bit about which classes Bootstrap used in its card component code, and why...
0
2023-01-07T13:18:33
https://dev.to/chayti/buttsttraap-kaardd-kmponentt-er-klaasgulor-shj-simpl-bishlessnn-18ln
bootstrap, css
Here we will try to discuss a bit about which classes Bootstrap used in its card component code, and why.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x13pkzbrkzudlrac3yyo.JPG)

```
<div class="card" style="width: 18rem;">
  <img src="..." class="card-img-top" alt="...">
  <div class="card-body">
    <h5 class="card-title">Card title</h5>
    <p class="card-text">Some quick example text to build on the card title and make up the bulk of the card's content.</p>
    <a href="#" class="btn btn-primary">Go somewhere</a>
  </div>
</div>
```

**.card ->** An in-built Bootstrap component for building a card. Whatever is written under this class, Bootstrap will try to design as a card.

**.card-img-top ->** This means the image will sit vertically (column-wise) at the very top of the card. If you want to show the image at the very bottom instead, you have to use the .card-img-bottom class. With .card-img-top, the image's top two corners are rounded and the bottom two are sharp, as in the image above. With .card-img-bottom, the top two corners are sharp and the bottom two are rounded.

**.card-body ->** Everything other than the image that you want to put inside the card is treated by Bootstrap as the card's body and is styled accordingly. That is why this class is used.

**.card-title ->** This class is used to style the title of the card body content.

**.card-text ->** This class is used to style the text of the card body content. At first glance it may seem to do just what a p tag does, and it largely does, but under this class Bootstrap also sets some spacing of its own.

**~let's_code_your_career ~happy_coding**
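For completeness, the .card-img-bottom variant described above would look like this: the image simply moves below .card-body, so its bottom corners receive the rounding instead.

```
<div class="card" style="width: 18rem;">
  <div class="card-body">
    <h5 class="card-title">Card title</h5>
    <p class="card-text">Some quick example text.</p>
  </div>
  <img src="..." class="card-img-bottom" alt="...">
</div>
```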
chayti
1,320,786
Free LGPD Course – Lei Geral de Proteção de Dados 2023
Thumb LGPD Essential Kit – Guia de TI Contents What is the LGPD? Facts about the...
0
2023-01-07T13:40:54
https://dev.to/guiadeti/curso-gratuito-de-lgpd-lei-geral-de-protecao-de-dados-2023-45oj
cursogratuito, treinamento, cursosgratuitos, lgpd
---
title: Free LGPD Course – Lei Geral de Proteção de Dados 2023
published: true
date: 2023-01-07 13:32:19 UTC
tags: CursoGratuito,Treinamento,CursosGratuitos,LGPD
canonical_url:
---

![Thumb LGPD Essential Kit - Guia de TI](https://guiadeti.com.br/wp-content/uploads/2023/01/thumb-lgpdkit-1024x576.jpg "Thumb LGPD Essential Kit - Guia de TI")

_Thumb LGPD Essential Kit – Guia de TI_

## Contents

<nav><ul> <li><a href="#o-que-e-lgpd">What is the LGPD?</a></li> <li><a href="#curiosidades-sobre-a-lgpd">Facts about the LGPD</a></li> <li><a href="#o-curso">The Course</a></li> <li><a href="#inscreva-se-ja">Sign up now!</a></li> </ul></nav>

## What is the LGPD?

The Lei Geral de Proteção de Dados Pessoais (General Personal Data Protection Law) is a Brazilian federal law that establishes the rules for processing personal data in Brazil. It was created to protect citizens' fundamental rights to privacy and freedom of information, ensuring that personal data is processed in a lawful, fair and transparent manner.

It applies to any natural or legal person, public or private, that processes personal data in Brazil, regardless of the company's country of origin or where the data is stored. It includes rules on the use of personal data in commercial activities such as advertising, marketing and data analysis, and it establishes data subjects' rights, such as the right to privacy and the rights to access, correct and delete data.

The law came into force in August 2020 but is still in a transition phase, with different deadlines for companies to implement it. Companies should pay attention to the LGPD's requirements and start complying with its rules as soon as possible, in order to avoid sanctions and guarantee the protection of their customers' and users' personal data.

## Facts about the LGPD

1. It was inspired by the General Data Protection Regulation (GDPR), the European Union's personal data protection law.
2. It is the first personal data protection law in Brazil. Until its implementation, the country relied on sector-specific data protection laws, such as the Marco Civil da Internet and the SUS data protection law.
3. It applies to any processing of personal data in Brazil, regardless of the company's country of origin or where the data is stored.
4. It establishes data subjects' rights, such as the right to privacy and the rights to access, correct and delete data.
5. It provides for sanctions for companies that do not comply with its rules, including fines and civil liability.
6. It came into force in August 2020 but is still in a transition phase, with different deadlines for companies to implement it.
7. It was created to protect citizens' fundamental rights to privacy and freedom of information, ensuring that personal data is processed in a lawful, fair and transparent manner.

## The Course

The free [LGPD](https://guiadeti.com.br/guia-tags/cursos-de-lgpd/) course is taught by Fabricio Rocha Alexandre and is a unique opportunity to learn about the Lei Geral de Proteção de Dados Pessoais in an accessible and practical way. Taught by Fabricio Rocha Alexandre, a specialist in digital law and the [LGPD](https://guiadeti.com.br/guia-tags/cursos-de-lgpd/), the course covers every aspect of the LGPD in a clear and objective manner, preparing participants to apply what they learn in their companies or businesses.

The LGPD Essential Kit is a complete and easy-to-understand course on the Lei Geral de Proteção de Dados. It was designed for those who want to go deeper into the subject and understand how the LGPD can and should be applied to their business. This technical course will give you autonomy and authority on the subject, diving into the topics of each phase of the [LGPD](https://guiadeti.com.br/guia-tags/cursos-de-lgpd/) compliance process.
Além disso, você terá condições de aplicar o conhecimento teórico na sua empresa ou negócio, com a ajuda da experiência adquirida por mais de 2.000 horas de projetos em [LGPD](https://guiadeti.com.br/guia-tags/cursos-de-lgpd/). Com o LGPD Essential Kit, você poderá aplicar o conhecimento adquirido no seu dia a dia sem complicações. Não perca essa oportunidade de aprender sobre LGPD com um especialista no assunto e se adequar às exigências da lei de maneira eficiente e prática. ## Inscreva-se já! [Clique aqui](https://www.youtube.com/playlist?list=PLJCciKEggBXo8V0y7C6Xeas8WtljPQcuc) para acessar o curso ou assista abaixo. <iframe title="Curso Gratuito de LGPD - Lei Geral de Proteção de Dados - LGPD Essential KIT" width="1200" height="675" src="https://www.youtube.com/embed/videoseries?list=PLJCciKEggBXo8V0y7C6Xeas8WtljPQcuc" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> O post [Curso Gratuito de LGPD – Lei Geral de Proteção de Dados 2023](https://guiadeti.com.br/curso-gratuito-lgpd-lei-geral-de-protecao-de-dados/) apareceu primeiro em [Guia de TI](https://guiadeti.com.br).
guiadeti
1,320,830
Amazing Documents Converter Application 'Docurain'
I would like to introduce 'Docurain', a document converter service that I was...
0
2023-01-07T15:34:38
https://dev.to/kakisoft/amazing-documents-converter-application-docurain-156d
I would like to introduce 'Docurain', a document converter service that impressed me. --- **(Note)** I would like to introduce one impressive service in this slide. It may seem like a sales presentation, but I just love this service. I'm not working for this company. Have you, as engineers, ever had a difficult time creating documents? --- Today, I will introduce an amazing document converter application, 'Docurain'. Estimates, invoices, purchase orders... some system is required to print these documents. It is not easy to create these functions. Every single document has a totally different and unique format. Also, we have to choose an appropriate library among many others. We face many challenges when building document download functions. But there is a simple solution. It is "Docurain". ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o50wtdcexam2xv61k9ht.png) If you use this service, you can be liberated from annoying document creation tasks. There are only 2 steps to create a document generation function using Docurain. First, create the document template. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h5t967uhg4j187k3x8lz.png) It is incredibly simple. The only tool you need is Excel. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/up6tb2prmml52ahzqfsj.png) You have full control to create a complex template. There is no need for complex operations. Lastly, call the API. Specify the template, set the parameters you want to print, and call. Of course, you can use it from any programming language. There is no need to install any specific libraries. And there are no initial costs, monthly costs, or support costs. It costs only 5 yen per document. So, let me show you how to create documents using Docurain. First, prepare your Excel template.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gp0dtco8lfwzi0qdawly.png) Like this, fill it in with the specific template codes. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wbn40jss1o1f6z5oep1e.png) Then, upload this file. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b3syec0xc2gfazh43ds0.png) Next, call the API. Choose the template, and set the parameters as a JSON object. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jaynidl903upog8a93ru.png) Click the execute button, and the document will be downloaded. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lf8q6bgdn664p8h0qkeh.png) You can create a document like this. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ffbsbk7na3obvgian9no.png) Needless to say, you can also call the API from source code. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p7pwt56p14xm3mvmnqln.png) As you can see, you can easily build a document creation function. Why not make your work easier? If you want more information, please visit the official website. [https://docurain.jp/](https://docurain.jp/) We are looking forward to hearing from companies that find document creation frustrating. --- **!!!Caution!!!** Let me repeat: I'm not from the Docurain company, nor is this slide a product advertisement. It is truly just for this event for IT engineers. --- ______________________________ Actually, I talked about this in a presentation at an engineers' event. The presentation material is here. https://kakisoft.github.io/slides/amazing-documents-converter-application-docurain-en/export/index.html And the event website is here. https://try-english-lt.connpass.com/event/267720/ Note: This service hasn't been translated into English yet.
kakisoft
1,320,918
The Role of Automation in Increasing Productivity
As developers, we are constantly looking for ways to streamline our workflows and increase our...
0
2023-01-07T16:49:39
https://dev.to/sammaji15/the-role-of-automation-in-increasing-productivity-3p20
ci, automation, introduction
As developers, we are constantly looking for ways to streamline our workflows and increase our productivity. One strategy that can have a big impact is the use of automation. Automation refers to the use of technology to perform tasks without human intervention. In the context of software development, this can take many forms, including automated testing, continuous integration, and deployment. ## Why is automation important for productivity? There are several reasons why automation can increase productivity for developers: * **Automation saves time:** By automating routine tasks, developers can focus their efforts on more high-value work, rather than wasting time on manual, repetitive tasks. * **Automation reduces the risk of errors:** When tasks are automated, there is less chance for human error, which can save time and resources in the long run. * **Automation enables faster feedback:** Automated testing, for example, can provide immediate feedback on the quality and functionality of code, allowing developers to quickly identify and fix issues. * **Automation promotes consistency:** Automated processes are consistent and repeatable, ensuring that tasks are completed reliably and predictably. ## Examples of automation in software development There are many ways that automation can be used in software development. Some common examples include: ### Automated Testing Automated testing refers to the use of software to perform tests on a codebase. This can include unit tests, integration tests, and acceptance tests. Automated testing can save time and ensure that code changes do not introduce regressions or break existing functionality. Several types of automated tests can be used in software development: * **Unit tests:** Unit tests are automated tests that focus on a small, specific piece of code, such as a single function or method. Unit tests are typically run by developers as they write code, to ensure that the code is working as expected. 
* **Integration tests:** Integration tests are automated tests that focus on the interactions between different pieces of code. These tests are typically run after unit tests, to ensure that the code works correctly when integrated with other components. * **Acceptance tests:** Acceptance tests are automated tests that focus on the overall functionality of a system. These tests are typically run by quality assurance (QA) teams to ensure that the code meets the requirements and specifications of the project. Some tools for automated testing are JUnit, Selenium and Cucumber. ### Continuous Integration Continuous integration (CI) is a software development practice in which developers integrate code changes frequently, usually several times a day. CI systems automatically build, test, and validate code changes, providing rapid feedback to developers. Some tools for continuous integration are Jenkins, GitLab CI and Travis CI. ### Deployment Automation Deployment automation refers to the use of tools and processes to automate the deployment of code changes to production environments. This can save time and reduce the risk of errors when deploying code. Some tools for deployment automation are Jenkins, GitLab CI and Ansible. ## Conclusion Automation is a powerful tool for increasing productivity in software development. By automating routine tasks, developers can save time, reduce the risk of errors, and focus on higher-value work. Whether it's automated testing, continuous integration, or deployment automation, the use of automation can help teams deliver high-quality software faster. Tools such as Jenkins, GitLab CI, Travis CI and Ansible can help put these practices in place.
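As a concrete illustration of the unit- and integration-test levels described above, here is a minimal, self-contained Python sketch (the `add_tax` and `checkout_total` functions are made up purely for illustration):

```python
# A tiny "unit" under test: one pure function.
def add_tax(price, rate):
    """Return price with tax applied, rounded to 2 decimal places."""
    return round(price * (1 + rate), 2)

# A second component that builds on the first.
def checkout_total(prices, rate):
    """Total a list of prices with tax applied to each."""
    return round(sum(add_tax(p, rate) for p in prices), 2)

# Unit test: exercises add_tax in isolation.
def test_add_tax():
    assert add_tax(100.0, 0.1) == 110.0
    assert add_tax(0.0, 0.2) == 0.0

# Integration-style test: checks the two pieces working together.
def test_checkout_total():
    assert checkout_total([10.0, 20.0], 0.1) == 33.0

if __name__ == "__main__":
    test_add_tax()
    test_checkout_total()
    print("all tests passed")
```

In a real project these test functions would live in a test module and be discovered and run automatically by a test runner on every push, which is exactly the feedback loop CI provides.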
sammaji15
1,320,964
Pandas-Basics In Short
Pandas is a python library that is used to analyse data. It is a table themed library like...
0
2023-01-09T13:03:53
https://dev.to/mdmusfikurrahmansifar/pandas-basics-in-short-1196
Pandas is a Python library that is used to analyse data. It is a table-themed library, like a spreadsheet in Excel, unlike NumPy, which has a matrix-like theme. It allows us to analyse, manipulate and explore huge amounts of data. For the basics, we will discuss a few topics- * Series * DataFrame * Missing data * Groupby * Merging, joining & concatenating To start we need to- ``` import numpy as np import pandas as pd ``` # Series: **syn=pd.Series(data,index)** ![series ex](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g3sa960k9sk4q0iqzr3z.png) Here, data and index can be edited and set according to our needs. They can be a **list, numpy array or even a dictionary.** ![series datatype](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9u2awnnviunb3hask68p.png) Here are some examples- (look up the variations) ![pd-03](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mvf2m7qkuc9lwmbhsnmj.png) ![pd-04](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oo44sl7824gc2e1t1sih.png) ![pd-05](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ipjzm945xts8mezfcjey.png) If the index is not mentioned, then by default it is added from 0 ![pd-06](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ekghjc3026g64h52gmhx.png) ![pd-07](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3u9jiiebsvqlphxxlplf.png) In a dictionary, the keys are the index and the values are the data ![pd-08](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qv8zyzkzn010mwjapp14.png) A Series is just an idea, but we won't see it most often. It's like a string in a list. It won't show a table, rather a table-like presentation. What we will actually use is the DataFrame, which gives us our expected output. # DataFrame: **syn=pd.DataFrame(data,index,columns)** It is the fundamental topic.
So we need to know about some of its usage and applications- * Selection and indexing * Conditional selection * Creating a new column * Removing columns and rows ## Selection and indexing: ### Selecting a row-column: ![selection](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wzmhvtxwt7dbed9xfqoe.png) **Syntax:** Column selection: **arr[column]**- returns a series Row selection: **arr.loc[row]**- returns a series Row selection: **arr.iloc[row number (starts from 0)]** ![column](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/utzgtkoicwuhazv4odwq.png) ![row](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sd5xi3ey5hzt10esrnli.png) ### Selecting a range: #### By rows- ![select range](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0nwpxqqxnacml1iwivv7.png) #### By columns- ![range select col](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zobhnssrz9rnkgwmtcbd.png) ### Selecting data: By combining the previous methods we can get a single value from the dataframe ![data selection](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b08wd607246iwp1c50zy.png) ### Conditional Selection: Here we apply a condition. If we just apply the condition, it gives us a boolean result. If we call the dataframe with it, it gives us the values where the condition is true and NaN where it is false. ![cond. select](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yl5bhodut7rsi0kbry6w.png) ![cond. select](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x9g9pdzyjcj59m67uxpq.png) We can even combine conditions with 'and'/'or'. But here in pandas, to combine conditions we use **'&'**/**'|'** instead of **'and'/'or'**.
![17](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1gf1k4wskk51ynvo6x6s.png) ![18](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o5slxr0giphu2xpvztyz.png) ## Creating columns: syn: arr[column name]=data of the column ![19](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xi561d1j0fm14ww3zd2x.png) ## Removing rows-columns: axis=0 -> Row axis=1 -> Column `inplace=True` is used to make the change permanent ![20](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jg82pm6j0dmwdhxaeihm.png) ![21](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3jnvvvzhf2u2e8qqk2k3.png) By default axis=0 ## Missing values: ### Adding a missing value: use np.nan in the data ![23](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/869ygubyvmmol1f5f2pl.png) ### Removing NaN: By default `.dropna()` removes rows with NaN For columns use `.dropna(axis=1)` ![24](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w9ap7lhqzx2m6zcuagqa.png) We can also keep rows or columns that have a certain number of non-NaN values ![25](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bgnxs7ybm3s8h5bani8l.png) ![26](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bktgr799c8c1rk7wuvwg.png) ## Filling missing values: ![27](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/781h90zxq8bd5726rape.png) ## Groupby: We can group common data in a column and work with it ![28](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3rn9guvuw8cxxcdabav9.png) ![29](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sbtysgeqysljt0tou335.png) After `.groupby()` all the common data gets stored... it doesn't print anything until we work with it. Like- ![30](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dyeqxur34pjmzrg6homa.png) ![31](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f9fwmruw1biw48wffmwp.png) Try `.min()`, `.max()`, `.describe()`, `.mean()` etc.
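The groupby flow above can be reproduced with a small, runnable sketch (the `Company`/`Sales` data is made up for illustration):

```python
import pandas as pd

# Made-up sample data with a repeated 'Company' column to group on.
df = pd.DataFrame({
    'Company': ['GOOG', 'GOOG', 'MSFT', 'MSFT'],
    'Sales': [200, 120, 340, 124],
})

# .groupby() only stores the groups; nothing prints until we aggregate.
by_company = df.groupby('Company')

print(by_company['Sales'].mean())  # mean sales per company
print(by_company['Sales'].max())   # max sales per company
```

Here `mean()` gives 160 for GOOG and 232 for MSFT, and swapping in `.min()`, `.describe()` etc. works the same way.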
## Concatenating, Merging, Joining: ### Concatenating: to attach column-wise or row-wise: `pd.concat([],axis= )` By default axis=0 ![32](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uc1rq5lvc1lyd2gxez4p.png) ![33](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/edpc35smp9qiyk04b0wq.png) ### Merging: to attach based on a common column ![34](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ob0byw0yyu5lavhf4lmq.png) ![35](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v347d886ui8ld8v4py0k.png) ### Joining: to attach based on a common index ![36](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vctezct65ev7liaaz9n4.png) ![37](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2ls36h3u8yimhb5fdy3v.png) ## Summary: ![38](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/usef479j6eafmifn06xx.png) This was the basics of pandas. It is really fundamental stuff. There are features related to file handling, data analysis, plotting etc. Let's keep exploring... let's dive together😉
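The three combining operations above can be tried in a few lines (the frames and keys are made up for illustration):

```python
import pandas as pd

df1 = pd.DataFrame({'key': ['K0', 'K1'], 'A': ['A0', 'A1']})
df2 = pd.DataFrame({'key': ['K0', 'K1'], 'B': ['B0', 'B1']})

# Concatenating: stack row-wise (axis=0 is the default).
stacked = pd.concat([df1, df2])

# Merging: attach based on the common 'key' column.
merged = pd.merge(df1, df2, on='key')

# Joining: attach based on a common index.
joined = df1.set_index('key').join(df2.set_index('key'))

print(stacked.shape)  # (4, 3): 4 rows across columns key/A/B
print(merged)
print(joined)
```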
mdmusfikurrahmansifar
1,321,046
Frontend: One-on-one (Duologue) chatting application with Django channels and SvelteKit
Introduction This is the second part of a series of tutorial on building a one-on-one...
21,300
2023-01-07T20:13:21
https://dev.to/sirneij/frontend-one-on-one-duologue-chatting-application-with-django-channels-and-sveltekit-d8l
webdev, svelte, typescript, tutorial
## Introduction This is the second part of a series of tutorials on building a one-on-one (duologue) chatting application with Django channels and SvelteKit. We will focus on building the app's frontend in this part. **NOTE: I won't delve much into the nitty-gritty of SvelteKit as I only intend to show how one can interact with WebSockets in the browser in SvelteKit. I wrote some other tutorials that talk more about it.** ## Source code This tutorial's source code can be accessed here: {% github Sirneij/chatting %} ## Implementation ### Step 1: Set up a SvelteKit project In our `chatting` folder, create a SvelteKit project by issuing the following command in your terminal: ```shell ╭─ sirneij in ~/Documents/Devs/chatting on (main) ╰─(ノ˚Д˚)ノ npm create svelte@latest frontend ``` From the prompts, I chose a skeleton project and added TypeScript support. Follow the instructions given by the command after your project has been successfully created. I added [bootstrap (v5)][1] and [fontawesome (v6.2.0)][2] to the frontend. I also added `routes/+layout.svelte` and modified `routes/+page.svelte`. `routes/chat/[username]/+page.svelte` was created as well to house the websocket logic. Before then, `lib/store/message.store.ts` has the following content: ```typescript import type { Message } from '$lib/types/message.interface'; import { writable, type Writable } from 'svelte/store'; const newMessages = () => { const { subscribe, update, set }: Writable<Array<Message>> = writable([]); return { subscribe, update, set }; }; const messages = newMessages(); const sendMessage = (message: string, senderUsername: string, socket: WebSocket) => { if (socket.readyState <= 1) { socket.send( JSON.stringify({ message: message, senderUsername: senderUsername }) ); } }; export { messages, sendMessage }; ``` This is a custom writable store that exposes a function `sendMessage` which does exactly what its name implies.
It was used in `routes/chat/[username]/+page.svelte` to send message to the backend. Let's look at the content of `routes/chat/[username]/+page.svelte`: ```html <script lang="ts"> import type { PageData } from './$types'; import Contact from '$lib/components/Contacts/Contact.svelte'; import { session } from '$lib/store/user.store'; import You from '$lib/components/Message/You.svelte'; import Other from '$lib/components/Message/Other.svelte'; import { messages, sendMessage } from '$lib/store/message.store'; import { browser } from '$app/environment'; import { page } from '$app/stores'; import { BASE_URI_DOMAIN } from '$lib/constants'; import type { Message } from '$lib/types/message.interface'; export let data: PageData; const fullName = `${JSON.parse(data.context.user_object)[0].fields.first_name} ${ JSON.parse(data.context.user_object)[0].fields.last_name }`; let messageInput: string, socket: WebSocket; if (browser) { const websocketUrl = `${ $page.url.protocol.split(':')[0] === 'http' ? 
'ws' : 'wss' }://${BASE_URI_DOMAIN}/ws/chat/${JSON.parse(data.context.user_object)[0].pk}/?${ $session.user.pk }`; socket = new WebSocket(websocketUrl); socket.addEventListener('open', () => { console.log('Connection established!'); }); socket.addEventListener('message', (event) => { const data = JSON.parse(event.data); const messageList: Array<Message> = JSON.parse(data.messages).map((message: any) => { return { message: message.fields.message, thread_name: message.fields.thread_name, timestamp: message.fields.timestamp, sender__pk: message.fields.sender__pk, sender__username: message.fields.sender__username, sender__last_name: message.fields.sender__last_name, sender__first_name: message.fields.sender__first_name, sender__email: message.fields.sender__email, sender__is_staff: message.fields.sender__is_staff, sender__is_active: message.fields.sender__is_active, sender__is_superuser: message.fields.sender__is_superuser }; }); $messages = messageList; messageInput = ''; }); } const handleSendMessage = (event: MouseEvent) => { event.preventDefault(); sendMessage(messageInput, $session.user.username as string, socket); }; </script> <div class="container py-5"> <div class="row"> <div class="col-md-6 col-lg-5 col-xl-4 mb-4 mb-md-0 scrollable"> <h5 class="font-weight-bold mb-3 text-center text-lg-start"> {$session.user.username}'s contacts </h5> <Contact contacts={JSON.parse(data.context.users)} /> </div> <div class="col-md-6 col-lg-7 col-xl-8 scrollable" id="message-wrapper"> <h5 class="font-weight-bold mb-3 text-center text-lg-start"> {fullName} </h5> <ul class="list-unstyled" id="chat-body"> {#each $messages as message, id} {#if message.sender__pk === $session.user.pk} <You {message} /> {:else} <Other {message} /> {/if} {/each} </ul> <div class="text-muted d-flex justify-content-start align-items-center pe-3 pt-3 mt-2 message-control" > <img src="https://mdbcdn.b-cdn.net/img/Photos/Avatars/avatar-{$session.user.pk}.webp" alt="You" title="You" style="width: 40px; 
height: 100%" /> <textarea placeholder="Type message" class="form-control form-control-lg" id="message-body" rows="1" bind:value={messageInput} /> <a class="ms-3" id="send-message-btn" title="Send" href={null} on:click={(event) => handleSendMessage(event)} > <i class="fas fa-paper-plane" /> </a> </div> </div> </div> </div> ``` `handleSendMessage` gets fired whenever a user sends a message. The only thing it does is use the exposed `sendMessage` function to send the message to the backend. `sendMessage` takes, among others, the websocket. It was initialized with `let socket: WebSocket;` which was populated with: ```typescript ... if (browser) { const websocketUrl = `${ $page.url.protocol.split(':')[0] === 'http' ? 'ws' : 'wss' }://${BASE_URI_DOMAIN}/ws/chat/${JSON.parse(data.context.user_object)[0].pk}/?${ $session.user.pk }`; socket = new WebSocket(websocketUrl); socket.addEventListener('open', () => { console.log('Connection established!'); }); socket.addEventListener('message', (event) => { const data = JSON.parse(event.data); const messageList: Array<Message> = JSON.parse(data.messages).map((message: any) => { return { message: message.fields.message, thread_name: message.fields.thread_name, timestamp: message.fields.timestamp, sender__pk: message.fields.sender__pk, sender__username: message.fields.sender__username, sender__last_name: message.fields.sender__last_name, sender__first_name: message.fields.sender__first_name, sender__email: message.fields.sender__email, sender__is_staff: message.fields.sender__is_staff, sender__is_active: message.fields.sender__is_active, sender__is_superuser: message.fields.sender__is_superuser }; }); $messages = messageList; messageInput = ''; }); } ... ``` It must be inside the `browser` block: SvelteKit does server-side rendering by default (it can be turned off, but since we aren't turning it off), we must ensure the websocket is initialized only in the browser, since it is a browser-based API. With this, we are done!
Ensure you take a look at the complete code on GitHub. ## Outro Enjoyed this article? Consider [contacting me for a job, something worthwhile or buying a coffee ☕](mailto:sirneij@gmail.com). You can also connect with/follow me on [LinkedIn](https://www.linkedin.com/in/idogun-john-nelson/). It would also help if you share it for wider coverage. I will appreciate it... [1]: https://getbootstrap.com/docs/5.0/getting-started/introduction/ "Bootstrap v5.0." [2]: https://fontawesome.com/ "Fontawesome v6.2.0"
sirneij
1,321,342
Weekly Challenge 198
Two relatively straightforward tasks to start the new year. Challenge, My solutions Task...
0
2023-01-08T06:13:59
https://dev.to/simongreennet/weekly-challenge-198-2jbl
perl, python, theweeklychallenge
Two relatively straightforward tasks to start the new year. [Challenge](https://theweeklychallenge.org/blog/perl-weekly-challenge-198/), [My solutions](https://github.com/manwar/perlweeklychallenge-club/tree/master/challenge-198/sgreen) ## Task 1: Max Gap ### Task You are given a list of integers, `@list`. Write a script to find the total pairs in the sorted list where 2 consecutive elements have the max gap. If the list contains fewer than 2 elements then return 0. ### My solution For this task, I have defined two variables. The value `gap` records the largest gap between two numbers seen so far, while the `count` value records the number of occurrences of that gap. The first thing I do is (numerically) sort the list. I then iterate from 0 to the size of the list minus two. For each iteration, I compare the value in that position to the one in the next position. If the difference is greater than the current gap, I set `gap` to the difference and reset `count` to 1. If the difference is the same as the current gap, I add 1 to `count`. Finally, I display the value of `count`. ### Examples ```bash $ ./ch-1.py 2 5 8 1 2 $ ./ch-1.py 3 0 ``` ## Task 2: Prime Count ### Task You are given an integer `$n > 0`. Write a script to print the count of primes less than `$n`. ### My solution Time to pull out my trusty `is_prime` method. This was last used in task 2 from challenge 177. I iterate from 1 to `n-1` and add one to `count` if that number is a prime. Then I display the value of `count`. ### Examples ```bash $ ./ch-2.py 10 4 $ ./ch-2.py 15 6 $ ./ch-2.py 1 0 $ ./ch-2.py 25 9 ```
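As a rough sketch (not necessarily identical to the linked solutions), both tasks condense into a few lines of Python:

```python
def max_gap_pairs(nums):
    """Count consecutive pairs in the sorted list sharing the maximum gap."""
    if len(nums) < 2:
        return 0
    nums = sorted(nums)
    gaps = [b - a for a, b in zip(nums, nums[1:])]
    return gaps.count(max(gaps))

def is_prime(n):
    """Trial-division primality test."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def prime_count(n):
    """Count primes strictly less than n."""
    return sum(is_prime(i) for i in range(1, n))

print(max_gap_pairs([2, 5, 8, 1]))  # 2
print(prime_count(10))              # 4
```

For `[2, 5, 8, 1]`, the sorted gaps are `[1, 3, 3]`, so the maximum gap 3 appears twice, matching the first example above.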
simongreennet
1,321,486
Generate SSH key pair on Git Bash in 2 Steps
Step 1 Generate a new SSH key pair by running the following command in Git...
0
2023-01-08T11:08:36
https://dev.to/pizofreude/generate-ssh-key-pair-on-git-bash-in-2-steps-3931
security, gitbash, tutorial, beginners
## Step 1 Generate a new SSH key pair by running the following command in Git Bash: ```bash ssh-keygen -t rsa -b 4096 ``` This will prompt you to enter a file to save the key to. You can accept the default file location by pressing Enter, or specify a different location by entering the file path and name. The default directory for saving the SSH key pair when using the ssh-keygen command is `~/.ssh/`. The `~` represents your home directory, and the `.ssh` subdirectory is where SSH keys are typically stored. So, the default location for saving the SSH key pair would be `~/.ssh/id_rsa` for the private key, and `~/.ssh/id_rsa.pub` for the public key. If you do not specify a different location when prompted, the key pair will be saved to these default locations. You can also specify a different location by entering the desired file path and name at the prompt. ## Step 2 You will be prompted to enter a passphrase for your SSH key. This is an optional security measure that can help protect your key. You can either enter a passphrase or leave it blank by pressing Enter. Once the key has been generated, you can view it by running the following command: ```bash cat ~/.ssh/id_rsa.pub ``` This will display the contents of your public SSH key.
pizofreude
1,321,758
Developing and Deploying Microservices with Go
Introduction Entire books have been written on the subject of microservices. This post...
0
2023-01-08T17:27:16
https://dev.to/bruc3mackenzi3/developing-and-deploying-microservices-with-go-3ldo
microservices, go, docker, kubernetes
## Introduction Entire books have been written on the subject of microservices. This post will scratch the surface by exploring the key ingredients that go into the development and deployment of microservices. [A demo microservice app](https://github.com/bruc3mackenzi3/go-microservice-template) will be used as a case study, presenting popular technologies to bring the concept into motion. Go is a popular language of choice for modern, service-based architectures. The term _cloud native_ is often used to describe Go, but what does it mean? Go is a compiled language, meaning a Go program can be run on a target machine without the installation of dependencies. A higher-level language by comparison, like Java or Python, requires that its runtime environment and all third-party dependencies be installed. This has a number of drawbacks: * Added complexity to the build process * Built images have larger file sizes * CI/CD pipelines, image builds, cloud deployments, etc. all take longer to run * All dependencies must be installed on target machines A modern, compiled language like Go, on the other hand, has these advantages: * Dependencies are only required on the build machine * A single executable file is deployed to target machines, which need only provide a compatible operating system * The build machine can compile the program to run on different operating systems and CPU architectures, simply by specifying these as arguments to the compiler * Compiled programs run faster, improving application performance and reducing infrastructure costs ## Microservices in Action To get started building a Go microservice, head over to my Go Microservice Repository to explore the technologies I've integrated. You can pick and choose certain technologies to apply to your own project, [or fork the repository](https://github.com/bruc3mackenzi3/go-microservice-template/fork) to start a new project using it as a template.
### Technologies Used For a rundown of tools and frameworks used to bring a Go microservice to life, [see the repo readme](https://github.com/bruc3mackenzi3/go-microservice-template#tech-stack).
bruc3mackenzi3
1,321,760
Creating a toast notification system for your react web app
Introduction I was developing a web app for a company and needed to implement a toast...
0
2023-01-08T18:06:03
https://dev.to/gatesvert81/creating-a-toast-notification-system-for-you-react-web-app-fl0
webdev, javascript, react, nextjs
## Introduction I was developing a web app for a company and needed to implement a toast notification system. I don't like using a lot of npm packages in my projects, to avoid them being bulky and to prevent crashes when the packages are not managed properly. So, I set out to make my own simple toast notification system that works great with my app. For those who don't know what a toast notification is, according to [design-systems](https://design-system.hpe.design/); > Toast notifications are used to communicate low severity level information to users in an unobtrusive way. In today's article, we will develop a simple and light toast notification for our react web apps. Try the live demo [React-notification-system](https://react-notication-system.vercel.app/) ## Tools used in Project - React (I will use [Nextjs](https://nextjs.org/) for this project) - [Tailwind](https://tailwindcss.com/) for styling, with the plugin [DaisyUI](https://daisyui.com/), a simple Tailwind component library for making customisable styles for our projects - [Framer motion](https://www.framer.com/motion/) for animations ## Pre-requisite knowledge Since you are here, I can guess you know a little React, but these are the main react hooks I will use in this project - useReducer - useContext - react custom hooks - useCallback All these can be accessed in the [Official React documentation](https://reactjs.org/). ## Project setup Let's begin by creating our project and installing the necessary packages. 1. Create a next app and give it any name of your choice `npx create-next-app notification-system` 2. Move into the project directory and install the necessary dependencies ``` cd notification-system npm install -D tailwindcss postcss autoprefixer npx tailwindcss init -p npm install framer-motion ``` 3.
Set up tailwind for styling, you'll find all these setup config in [Tailwind framework setup](https://tailwindcss.com/docs/guides/nextjs) - add these to your global.css file in the main directory ```css @tailwind base; @tailwind components; @tailwind utilities; ``` - add these to your tailwind.config.js file ```javascript /** @type {import('tailwindcss').Config} */ module.exports = { content: [ "./pages/**/*.{js,ts,jsx,tsx}", "./src/**/*.{js,ts,jsx,tsx}", ], theme: { extend: {}, }, plugins: [], } ``` - initialize project `npm run dev` ## Project logic Let's goooo, we are done with the project set up. The main point of this is to get the use notified with messages. We will use the reducer function to put all our notifications in one place, serve it to the whole app using useContext and use it within our project using react custom hook. I will explain as we progress. ## Create Reducer We will need to create Actions for the notification. Create a src folder in your main directory and add a Reducer folder. - Create a file and name it notificationAction.js. We will fill it with our Actions ```javascript // These are actions set for our notification and it will make more sense as we progress export const notificationAction = { SUCCESS: "SUCCESS", WARNING: "WARNING", ERROR: "ERROR", ALERT: "ALERT", DELETE: "DELETE", ADD: "ADD", INACTIVE: "INACTIVE", }; ``` - Create a file in the same (Reducer) folder and name it notificationReducer.js. ```jsx import { notificationAction } from "./notificationAction"; // Initial notification state. 
It's empty for now export const notificationInitialState = { notifications: [], }; // Our reducer function export default (state = notificationInitialState, { type, payload }) => { switch (type) { case notificationAction.ADD: // Add notification to the list (state.notifications) return { notifications: [...state.notifications, payload.notification] }; case notificationAction.DELETE: // Remove/Delete notification const deleteNotifcation = state.notifications?.filter( (notification) => notification.id !== payload.id ); return { notifications: [...deleteNotifcation] }; case notificationAction.INACTIVE: // Make notification inactive const notifications = state.notifications?.map((notification) => { if (notification.id === payload.id) { return { ...notification, active: false, }; } return notification; }); return { notifications: [...notifications] }; default: return state; } }; ``` - Create a last file in the Reducer folder with the name index.js. This file will export all the reducer functions and actions to keep our app organized. ```jsx import { notificationAction } from "./notificationAction"; import notificationReducer from "./notificationReducer"; import { notificationInitialState } from "./notificationReducer"; export { notificationInitialState, notificationReducer, notificationAction }; ``` ## Create Context This folder holds the notification context and serves it to our app. Create a folder in the src folder and name it Context. Create a file with the name Notification.js. In this file we use the useReducer hook to serve our Reducer to the app. The useReducer hook takes the reducer function as the first parameter and the initial state as the second parameter, and returns an array that contains the state and the dispatch function respectively. The state acts like useState's state. - First, we create our function and add our reducer to the useReducer hook. 
```jsx function Notification({ children }) { const [state, dispatch] = useReducer( notificationReducer, notificationInitialState ); return <>{children}</>; } export default Notification; ``` - Next, create a function called notify that we will call each time we need to make a toast notification. It will take two parameters, type and message (more on these soon). Set the id of the notification (I will use the index count) and dispatch an action of type ADD. Our notification will have an id (which we will use to track a notification), type (whether our notification is a success toast, warning toast, information/alert toast or an error toast), message (the text for our notification) and active to display or hide our notification. Clear the notification with the setTimeout function (feel free to use any length of time, I am using 6 seconds). We return the id of the notification. ```jsx ... const notify = (type, message) => { const notificationId = state.notifications.length; dispatch({ type: notificationAction.ADD, payload: { notification: { id: notificationId, type: type, message: message, active: true, }, }, }); setTimeout(() => { closeNotification(notificationId); }, 6000); return notificationId; }; ... ``` - To hide our notification, create a closeNotification function that will set the active key of our notification to false. To delete our notification after hiding it, create a deleteNotification function that will delete it a second after the notification is hidden. ```javascript const deleteNotifcation = (id) => { dispatch({ type: notificationAction.DELETE, payload: { id: id, }, }); }; const closeNotification = (id) => { dispatch({ type: notificationAction.INACTIVE, payload: { id: id, }, }); setTimeout(() => { deleteNotifcation(id); }, 1000); }; ``` - Create a function called showNotifications that will display our notifications. For this function we use the useCallback hook, which will recreate our function whenever the state changes. 
This function will map all our notifications at the top of our website. We will create the NotificationCard component later. ```jsx ... const showNotifications = useCallback( () => ( <> {state.notifications.map((notification) => ( <AnimatePresence key={notification?.id}> {notification?.active && ( <motion.div initial={{ opacity: 0, scale: 0.8, y: "10%", }} animate={{ opacity: 1, scale: 1, y: "0%", }} exit={{ opacity: 0, scale: 0.8, y: "10%", }} > <NotificationCard type={notification?.type} message={notification?.message} /> </motion.div> )} </AnimatePresence> ))} </> ), [state] ); ... ``` - Create the Context to serve the app with our notification. Add it before the React function. ``` export const NotificationContext = createContext(); ``` In the Notification function, return the context provider with the value. Display the showNotifications function on top of the site. ```jsx return ( <> <NotificationContext.Provider value={value}> <div className="w-full h-fit fixed left-0 top-0 pt-10 flex flex-col justify-center items-center gap-3 z-50"> {showNotifications()} </div> {children} </NotificationContext.Provider> </> ); ``` - Go to _app.js in the pages folder and wrap the app with our context ```jsx import Notification from "../src/Context/Notification"; import "../styles/globals.css"; function MyApp({ Component, pageProps }) { return ( <Notification> <Component {...pageProps} /> </Notification> ); } export default MyApp; ``` - Set the value of the Context Provider ```javascript const value = { notifications: state?.notifications, notify, closeNotification, }; ``` - Notification.js should now have this in it ```jsx import { AnimatePresence, motion } from "framer-motion"; import React, { createContext, useCallback, useEffect, useReducer, } from "react"; import NotificationCard from "../Components/NotificationCard"; import { notificationAction, notificationInitialState, notificationReducer, } from "../Reducer"; export const NotificationContext = createContext(); function 
Notification({ children }) { const [state, dispatch] = useReducer( notificationReducer, notificationInitialState ); const deleteNotifcation = (id) => { dispatch({ type: notificationAction.DELETE, payload: { id: id, }, }); }; const closeNotification = (id) => { dispatch({ type: notificationAction.INACTIVE, payload: { id: id, }, }); setTimeout(() => { deleteNotifcation(id); }, 1000); }; const notify = (type, message) => { const notificationId = state.notifications.length; dispatch({ type: notificationAction.ADD, payload: { notification: { id: notificationId, type: type, message: message, active: true, }, }, }); setTimeout(() => { closeNotification(notificationId); }, 6000); return notificationId; }; const showNotifications = useCallback( () => ( <> {state.notifications.map((notification) => ( <AnimatePresence key={notification?.id}> {notification?.active && ( <motion.div initial={{ opacity: 0, scale: 0.8, y: "10%", }} animate={{ opacity: 1, scale: 1, y: "0%", }} exit={{ opacity: 0, scale: 0.8, y: "10%", }} > <NotificationCard type={notification?.type} message={notification?.message} /> </motion.div> )} </AnimatePresence> ))} </> ), [state] ); useEffect(() => { state; }, [state]); const value = { notifications: state?.notifications, notify, closeNotification, }; return ( <> <NotificationContext.Provider value={value}> <div className="w-full h-fit fixed left-0 top-0 pt-10 flex flex-col justify-center items-center gap-3 z-50"> {showNotifications()} </div> {children} </NotificationContext.Provider> </> ); } export default Notification; ``` ## Create Custom hook Use a custom hook to access the values of the context easily. Create a folder in the src folder and name it Hooks. Create a file in the Hooks folder and name it useNotification.js. We use useContext to get our context from Notification and use it in any component in the app, provided the component is a child of the Notification Context. 
```jsx import React, { useContext } from "react"; import { NotificationContext } from "../Context/Notification"; function useNotification() { const context = useContext(NotificationContext); if (context === undefined) { throw new Error("useNotification must be used within NotificationContext"); } return context; } export default useNotification; ``` ## Usage Before we use the notification system we need to create the NotificationCard component. Create a Components folder in the src folder and create a file called NotificationCard.js. Add a react function in the file with props type and message. I am using the [card UI](https://daisyui.com/components/card/) from Daisy UI. ```jsx import React, { useEffect, useState } from "react"; import { notificationAction } from "../Reducer"; function NotificationCard({ type, message }) { const [bgColor, setBgColor] = useState("bg-white"); useEffect(() => { switch (type) { case notificationAction.ALERT: setBgColor("bg-info"); break; case notificationAction.ERROR: setBgColor("bg-error"); break; case notificationAction.SUCCESS: setBgColor("bg-success"); break; case notificationAction.WARNING: setBgColor("bg-warning"); break; default: setBgColor("bg-gray-300"); break; } }, [type, message]); return ( <div className={`card w-96 ${bgColor} text-primary-content`}> <div className="card-body"> <h2 className="card-title">Notification</h2> <p>{message}</p> </div> </div> ); } export default NotificationCard; ``` We use useEffect and a switch statement to change the color for each notification type. - Open index.js in the pages folder and clear the default text in the main tag, leaving the function. 
```jsx import Head from "next/head"; ``` ```jsx export default function Home() { return ( <div className=""> <Head> <title>Notification Web App</title> <meta name="description" content="Generated by create next app" /> <link rel="icon" href="/favicon.ico" /> </Head> <main> </main> </div> ); } ``` - Add 4 [buttons](https://daisyui.com/components/button/) from daisyUI inside the main tag, these will be used to trigger the notifications. These buttons will be used for the four different notification types. ```jsx <button className="btn btn-info">Info</button> <button className="btn btn-success">Success</button> <button className="btn btn-warning">Warning</button> <button className="btn btn-error">Error</button> ``` - Use our custom useNotification hook and destructure the notify function. `const { notify } = useNotification();` - Add an onClick function to the buttons and run notify in them. Pass the notificationAction as the first parameter for the type of notification we want and the message as the second parameter. ```jsx ... <button className="btn btn-info" onClick={() => notify( notificationAction.ALERT, "This is an information notification" ) } > Info </button> <button className="btn btn-success" onClick={() => notify( notificationAction.SUCCESS, "This is a success notification" ) } > Success </button> <button className="btn btn-warning" onClick={() => notify( notificationAction.WARNING, "This is a warning notification" ) } > Warning </button> <button className="btn btn-error" onClick={() => notify(notificationAction.ERROR, "This is an error notification") } > Error </button> ... 
``` - index.js should have this code ```jsx import Head from "next/head"; import Image from "next/image"; import useNotification from "../src/Hook/useNotification"; import { notificationAction } from "../src/Reducer"; import styles from "../styles/Home.module.css"; export default function Home() { const { notify } = useNotification(); return ( <div className="w-full h-screen"> <Head> <title>Notification Web App</title> <meta name="description" content="Generated by create next app" /> <link rel="icon" href="/favicon.ico" /> </Head> <main className="w-full h-full flex justify-center gap-3 items-center"> <button className="btn btn-info" onClick={() => notify( notificationAction.ALERT, "This is an information notification" ) } > Info </button> <button className="btn btn-success" onClick={() => notify( notificationAction.SUCCESS, "This is a success notification" ) } > Success </button> <button className="btn btn-warning" onClick={() => notify( notificationAction.WARNING, "This is a warning notification" ) } > Warning </button> <button className="btn btn-error" onClick={() => notify(notificationAction.ERROR, "This is an error notification") } > Error </button> </main> </div> ); } ``` ## Enjoy Open the terminal and run the app, and it should be available at http://localhost:3000 ```sh npm run dev ``` Your app should look like this ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qikatlqlgha54tqr3tlu.jpeg) Try the live demo [React-notification-system](https://react-notication-system.vercel.app/) When you click on the buttons you should receive toast notifications at the top of the app. You can change the position in the Notification.js function in the Context folder. 
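Because the reducer is a pure function, you can also sanity-check its state transitions outside React entirely. Here is a minimal stand-alone sketch (plain Node-runnable JavaScript; the action name strings are inlined instead of importing `notificationAction`, but they match the values defined earlier):

```javascript
// Stand-alone sketch of the reducer's three transitions:
// ADD a notification, mark it INACTIVE, then DELETE it.
const reducer = (state, { type, payload }) => {
  switch (type) {
    case "ADD":
      return { notifications: [...state.notifications, payload.notification] };
    case "INACTIVE":
      return {
        notifications: state.notifications.map((n) =>
          n.id === payload.id ? { ...n, active: false } : n
        ),
      };
    case "DELETE":
      return {
        notifications: state.notifications.filter((n) => n.id !== payload.id),
      };
    default:
      return state;
  }
};

const added = reducer(
  { notifications: [] },
  {
    type: "ADD",
    payload: {
      notification: { id: 0, type: "SUCCESS", message: "Saved!", active: true },
    },
  }
);
const hidden = reducer(added, { type: "INACTIVE", payload: { id: 0 } });
const removed = reducer(hidden, { type: "DELETE", payload: { id: 0 } });

console.log(hidden.notifications[0].active); // false
console.log(removed.notifications.length); // 0
```

This is exactly the work useReducer does for us inside the component, minus the re-render.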
- Info Toast ![Info Toast](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1ibcn03eafyfh4e9k3z3.jpeg) - Success Toast ![Success Toast](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9w0i4p7eiyctqqmvfem1.jpeg) - Warning Toast ![Warning Toast](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tmpmcxegfhout63xwp1n.jpeg) - Error Toast ![Error Toast](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/smm99dsxq3uplirqdxki.jpeg) ## Conclusion This is a simple toast notification system you can implement in your own app. I hope this helps to improve the usability of your app. I post a new blog every Thursday. Get access to the [github repo here](https://github.com/Gatesvert81/react-notication-system). Please don't forget to buy me a coffee, [Mathias Martey](https://www.buymeacoffee.com/mathiasmartey). Follow me on twitter [@blaq_xcobar](https://twitter.com/blaq_xcobar)
gatesvert81
1,321,785
Getting Started with PICO-8
Start developing a game with the PICO-8 fantasy game console
0
2023-01-08T18:27:44
https://dev.to/cmiles74/getting-started-with-pico-8-4nla
pico8, retrogaming, gamedev
--- layout: post title: Getting Started with PICO-8 published: true description: Start developing a game with the PICO-8 fantasy game console tags: pico8, retrogaming, gamedev cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9m0vva6e1tncd1ixd3ht.png --- I've been interested in fantasy consoles for a long time now but my interest has been limited to reading a blog article here and there and thinking something like "Man, that looks like fun!" After literal years of doing other things I've finally decided to put my own 8-bit arcade game together. I took a look at the field, tried the [Pixel Vision 8][pv8] and then saw the project archived, regrouped and decided to go with the [PICO-8][pico8]! At first I started out working with the [Pixel Vision 8][pv8] project but I found some bugs that were frustrating me and while I was working on fixing those bugs, I found some more bugs. While I was thinking about fixing those bugs the project was archived, there's currently no maintainer. Becoming the maintainer of one of these projects is out-of-scope, I'm more interested in getting a game coded up, so I decided to pony up the $15.00 and purchase a PICO-8 license. I recommend you do the same, it's not that much money and it helps keep the project active. It's a pretty nice product and reasonably polished. It has handy tools for writing code, drawing sprites and tile maps, and editing music. While you could use external tools and pull in files, it's nice to be able to do everything in one place. Plus it looks pretty dang awesome on my snazzy retro looking PC. ![Snazzy Retro PC](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rny64w70l0srpa0r53ud.jpg) This post will cover getting the PICO-8 application installed, drawing your first sprite and moving that sprite around on screen. My plan is to add more articles as I get the game built up. I'm warning you now that this could take a while, I have a day job as well. 
😉 ## Install the PICO-8 Application Check all of the pockets of your coats and look under the sofa cushions until you've found $15 and then head over to the [PICO-8][get-pico8] page. Once you fill in your e-mail address and hand over your dollars, you'll end up with a license through the [Humble][humble] website. You can then download the distribution for your operating system (or a ZIP of the distribution if you're on Linux). For those of you on Windows or MacOS, there's not much to the installation; double-click the Windows installer or copy over the MacOS application from the disk image. Those of us on Linux will have to put the distribution somewhere and either start from the command line or [create a "desktop" file][pico8-desktop] to launch the application... Unless you are on Arch Linux, there is [an AUR package for PICO-8][aur-pico8] that's pretty easy to get working. With all of that work out of the way, you are ready to get started! Launch the PICO-8 application and you will be greeted with the welcome screen. ![PICO-8 Welcome](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5heih7d76kqi8qz5ixf5.png) They're definitely going for the old 8-bit home computer vibe, this reminds me of the Atari 800 or Commodore 64 where you'd get dropped at a BASIC interpreter. This is a bit better as the PICO-8 speaks [Lua][lua] which is quite a bit more fun to work with. What you are looking at right now is the interactive console, press the "escape" key on your keyboard to switch over to "editor" mode. It should look like the image below. ![PICO-8 Editor Mode](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7xg54e1bgz1yr4ysmspt.png) The icons in the upper-right hand corner represent the different editors: code, sprite, map, sound effects and music; this image is of the code editor. More information about all of these tools and how they work can be found in [the PICO-8 manual][pico8-manual]. 
## Draw a Sprite For now, switch over to the sprite tool (the second one from the left). For those of you not in the know, a [sprite][sprite] is a bitmap graphic that can easily be moved around on screen or stacked on top of or below other bitmaps. We will use this sprite to represent the player. Each sprite on the PICO-8 is 8x8 pixels in size, these sprites are displayed in rows beneath the sprite editor area. There's one sprite there already, it kind of looks like a white star. Click on that sprite to bring it into focus and display it in the editor area. The sprites don't have names but they each have a number: the first one (the one you're editing) is `0`, the one to the right of that would be `1` and the one to the right of that is `2` and so on. ![Sprite Editor](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6h7jhf590epxq6jyj5yo.png) There are several tools to make editing your sprite easier, feel free to explore them. Go ahead and edit the sprite until you have something you are somewhat satisfied with, after all, you can always go back and make changes later. This sprite will represent the player in your game. ## Displaying Your Sprite The next step is to write enough code to get your sprite to display when your game is run. ### The Game Loop The PICO-8 lets you write your game code with [Lua][lua]. It's pretty concise and, in my opinion, pretty easy to pick up from scratch; don't worry if you don't know much about Lua. You can read up on the language later, this tutorial will give you enough knowledge just-in-time style to get by. Make sure you are in "editing" mode, press the "escape" key if you are not. Pick the code editor from the set of icons in the upper-right, it's the first one and kind of looks like two parentheses "()". You will be welcomed by an entirely empty page of code, go ahead and bang in the code listed below. 
```lua function _init() end function _update() end function _draw() end ``` This won't actually do anything but it does provide a short game that does nothing! Press the "escape" key to switch back to console mode and type "run" (the [`run`][pico8-run] function will start your game) at the prompt. You should receive no errors and see nothing at all happen. When you grow tired of all the nothing, press the "escape" key to stop your game from running. These three functions are common to every game, they are called by the PICO-8 to perform the following... + Setup your game + Update the current state of your game (i.e. move the player, the bad guys, etc.) + Draw the current state of the game on screen While the setup function ("_init") is only called once, the other two are called over and over. When the "_update" function is called it can do things like see if any buttons are pressed and move the player to a new location. After that, the "_draw" function will be called to draw the current state of the game to the display. This is called ["the game loop"][game-loop], it's a common pattern in a lot of different kinds of video games. ### Draw the Player While we could jam all of our code into those three functions, this style is frowned upon by sane people everywhere. Instead we'll add some new functions to the top of our file and then call those functions where appropriate. First up is keeping track of the state of our player, add the following code to the top of your file. ```lua function init_actors() ply1={x=8,y=8,spr=0} end ``` This code creates a new variable named `ply1` that keeps track of all the data about our player: their coordinates on screen (`x` and `y`) and the index of the sprite we're using to represent them (`spr`). The data we've assigned to the player is called [a "table"][lua-table], it stores any number of keys and values and makes it easy to get at their values and update them. 
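If tables are new to you, here's a quick sketch you can try at the PICO-8 console showing how values are read and updated (note that the `+=` shorthand is a PICO-8 extension, not standard Lua):

```lua
ply1={x=8,y=8,spr=0}

print(ply1.x)  -- read a value: prints 8
ply1.x += 2    -- shorthand for ply1.x = ply1.x + 2
print(ply1.x)  -- prints 10
```

That read-then-update pattern is all our game code will do with the player's data.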
Now we need to call our function to set up our player along with the rest of the game. Edit your "_init" function to match the listing below. ```lua function _init() init_actors() end ``` Next up we want to draw the player. This will be pretty simple; we need to position the sprite in the player's current location and then draw the sprite. Type in the code below right underneath your "init_actors" function. ```lua function draw_ply1() spr(ply1.spr,ply1.x,ply1.y) end ``` The [`spr`][pico8-spr] function is provided by the PICO-8's sprite library, it handles all of the nitty-gritty of drawing the sprite to the display. It takes three arguments, conveniently all of the data it needs is available from our player data. We provide the name (numeric index) of our sprite, the coordinate on the "x" axis and then the coordinate of the player along the "y" axis. With that information in hand, our sprite will be drawn. The last step is to update our "_draw" function to actually draw the player's sprite. Edit your "_draw" function to look like the code listed below. ```lua function _draw() draw_ply1() end ``` Your entire file of code should look something like this... ![Code Listing](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/paq3d08a9zk7l5eiudyv.png) Go ahead and save the code file and run your game. You should see your sprite on the screen. 😎 ### Move the Player The last step is to let the player move around by pressing the arrow keys on their keyboard. There's only a little bit of logic to handle, we'll add it to a new function named "update_ply1". ```lua function update_ply1() if btn(0) then ply1.x -= 2 end if btn(1) then ply1.x += 2 end if btn(2) then ply1.y -= 2 end if btn(3) then ply1.y += 2 end end ``` The PICO-8's [`btn`][pico8-btn] function is doing most of the work here. This function accepts the index of a button (player 0 up, player 0 down, etc.) and returns `true` if that button is being pressed. 
We then check each button we're interested in and then (if it's being pressed) we update the state of the player to match. To make this work, add a call to this function to our "_update" function. ```lua function _update() update_ply1() end ``` Try it out! You'll notice it doesn't work _exactly_ as expected. The problem is that we aren't clearing the display before we draw the sprite in the next location. The [`cls`][pico8-cls] function will clear the screen, if we add that right before we draw the items on screen, we'll be all set. ```lua function _draw() cls() draw_ply1() end ``` Woot! If you are interested in going the extra mile, how would we change the "update_ply1" code to let the player appear on one side of the screen when they reach the other, like in [Pac Man][pacman]? What would we do if we wanted the boundaries of the display to act as a wall? What might we change to make the player move faster or slower? There are many variations even for simple games. 🤯 ## Build Your Game The last thing we'll cover is building a "cartridge" that contains your game, this provides you with one file that you can share with friends. You simply drag the cartridge file onto the PICO-8 (or executable) to load the game. With the PICO-8 we can also save the cartridge as an actual [PNG][pico8-png] image that you can view in an image viewer or web browser. It will have a screenshot as well as a little description of your game. Switch to the code editor, we're going to add a couple of lines to the top of your game's code. We'll add two comments; these won't change how your game works but they will be used to describe your game and will be part of the cartridge. Two dashes in a row ("--") start a comment, the comments are displayed at the bottom of your cartridge image. Add one comment with the name of your game and another with your name. Make sure these are the first two lines of code in the editor! 
```lua --move the player --cmiles74 ``` Next, switch back to the console and type "run" to run the game. Move the player around a little bit and then press the "control" key along with the "7" key (control-7). You should see the PICO-8 say "captured label image" down at the bottom of the screen. You've just taken the screenshot that will decorate your cartridge! Now you can save your cartridge to disk. Type the following at the console to save your game. ```code save move-the-player.p8.png ``` At this point your game is built! You can switch to your operating system's file browser so that you can navigate to your cartridge directory. This will vary based on your operating system. | Operating System | Cartridge Path | |------------------|--------------------------------------------| | MacOS | ~/Library/Application Support/pico-8/carts | | Linux | ~/.lexaloffle/pico-8/carts | | Windows | %AppData%/pico-8/carts | When you visit this directory you should see your cartridge listed. If you drag it over to your web browser, you'll see what the image looks like. ![PICO-8 Cartridge](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/669wemysz9zv7935be5p.png) You can load your cartridge by double-clicking it or dragging it onto your PICO-8 window. You can also email this file to a friend or post it somewhere on the internet. Anyone with the PICO-8 application installed can run your game or play around with it, perhaps even making their own changes and sending you an updated cartridge. 
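Before we wrap up, a hint for the extra-mile questions from earlier: Lua's modulo operator (`%`) makes Pac Man-style screen wrap-around a small change to "update_ply1". This is just one possible sketch; it wraps at the PICO-8's 128-pixel screen size and ignores the sprite's 8-pixel width, so the player will briefly straddle the edge:

```lua
function update_ply1()
  if btn(0) then ply1.x -= 2 end
  if btn(1) then ply1.x += 2 end
  if btn(2) then ply1.y -= 2 end
  if btn(3) then ply1.y += 2 end

  -- fold the coordinates back into the 0..127 range;
  -- in lua, -2 % 128 is 126, so this wraps in both directions
  ply1.x = ply1.x % 128
  ply1.y = ply1.y % 128
end
```

Turning the edges into walls instead would mean clamping the coordinates rather than wrapping them, and the movement speed is just the `2` in each branch.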
Don't forget to check back here for more articles, I plan to get a couple more written. ---- [pico8]: https://www.lexaloffle.com/pico-8.php?page=faq "PICO-8" [get-pico8]: https://www.lexaloffle.com/pico-8.php?#getpico8 "Get PICO-8" [pv8]: https://pixelvision8.github.io/Website/ "Pixel Vision 8" [humble]: https://www.humblebundle.com/ "Humble Store" [pico8-desktop]: https://gist.github.com/netvip3r/34a0f3519cfe3e0e7c44e8f1fcba76d4 "PICO-8 Desktop File" [aur-pico8]: https://aur.archlinux.org/packages/pico-8 "Arch AUR PICO-8 Package" [lua]: https://www.lua.org/ "Lua Programming Language" [pico8-manual]: https://www.lexaloffle.com/dl/docs/pico-8_manual.html "PICO-8 Manual" [sprite]: https://en.wikipedia.org/wiki/Sprite_(computer_graphics) "Sprite" [pacman]: https://freepacman.org/ "Pac Man" [pico8-btn]: https://pico-8.fandom.com/wiki/Btn "PICO-8 btn Function" [pico8-cls]: https://pico-8.fandom.com/wiki/Cls "PICO-8 cls Function" [pico8-run]: https://pico-8.fandom.com/wiki/Run "PICO-8 run Function" [pico8-spr]: https://pico-8.fandom.com/wiki/Spr "PICO-8 spr Function" [pico8-png]: https://pico-8.fandom.com/wiki/P8PNGFileFormat "PICO-8 PNG Cartridge" [pico8-splore]: https://pico-8.fandom.com/wiki/Splore "PICO-8 splore Function" [game-loop]: https://gameprogrammingpatterns.com/game-loop.html "Game Loop" [lua-table]: https://www.lua.org/pil/2.5.html "Lua, Tables"
cmiles74
1,321,795
Facade pattern in TypeScript
Introduction The facade pattern is a structural pattern which allows you to communicate...
20,586
2023-01-08T18:57:46
https://www.jmalvarez.dev/posts/facade-pattern-typescript
typescript, architecture, javascript, webdev
## Introduction The facade pattern is a structural pattern which allows your application to communicate with any complex software (like a library or framework) in a simpler way. It is used to create a simplified interface to a complex system. ![Diagram](https://www.jmalvarez.dev/images/facade-pattern-typescript/diagram.webp) ## Applicability Use this pattern when: - you don't want your application to be tightly coupled to a 3rd party library or framework - you want to simplify the interaction of your application with a complex system ## Implementation You can find the full example source code [here](https://github.com/josemiguel-alvarez/design-patterns-typescript/blob/main/structural-paterns/facade/facade.ts). As an example of the implementation I'm going to create an application that uploads files to a cloud storage service. Let's imagine that the 3rd-party code is something like the following: ```ts class CloudProviderService { public isLoggedIn(): boolean { // Checks if the user is logged in } public logIn(): void { // Logs the user in } public convertFile(file: string): string { // Converts the file to a format that the cloud provider accepts } public uploadFile(file: string): void { // Uploads the file to the cloud provider } public getFileLink(file: string): string { // Returns the link to the uploaded file } } ``` As we don't want to couple our application with this 3rd-party code, we are going to create a facade that hides its complexity. This facade will be the only class that our application will interact with. ```ts class CloudProviderFacade { private service: CloudProviderService; constructor() { this.service = new CloudProviderService(); } } ``` Now we have to implement the functions that our application will use to interact with the 3rd-party code. 
For the example I'm going to only implement one function: ```ts class CloudProviderFacade { private service: CloudProviderService; constructor() { this.service = new CloudProviderService(); } public uploadFile(file: string): string { if (!this.service.isLoggedIn()) { this.service.logIn(); } const convertedFile = this.service.convertFile(file); this.service.uploadFile(convertedFile); return this.service.getFileLink(convertedFile); } } ``` And that's it! With that, we have implemented the Facade pattern. It's very simple and very useful at the same time. A simple example of how the client would use this code: ```ts const facade = new CloudProviderFacade(); const fileLink = facade.uploadFile("file.txt"); console.log("File link:", fileLink); /* Output: Logging in... Converting file... file.txt Uploading file... file.txt.converted File link: https://example.com/file.txt.converted */ ``` ## Resources - [Example source code](https://github.com/josemiguel-alvarez/design-patterns-typescript/blob/main/structural-paterns/facade/facade.ts) - [Facade - refactoring.guru](https://refactoring.guru/design-patterns/facade)
jmalvarez
1,321,941
Adding ESLint and Prettier to a React project created with ViteJS
Tutorial on how to add and configure ESLint and Prettier in a React project created with ViteJS
0
2023-01-09T01:03:14
https://dev.to/marcosdiasdev/adicionando-eslint-e-prettier-a-um-projeto-react-criado-com-vitejs-hgn
react, vitejs, eslint, prettier
--- title: Adding ESLint and Prettier to a React project created with ViteJS published: true description: Tutorial on how to add and configure ESLint and Prettier in a React project created with ViteJS tags: react, vitejs, eslint, prettier cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wy1x1c69g6m8iuivc8i2.jpg # published_at: 2023-01-08 23:10 +0000 --- Recently, I decided to try out [ViteJS](https://vitejs.dev/guide/why.html) as an alternative to Create React App. Although ViteJS already provides a basic structure for a React project, the template recommended in the documentation does not come configured with ESLint and Prettier. In this article, we will briefly go through installing and configuring these two tools in a React project created with ViteJS. ## Step 1: install the dependencies Besides ESLint and Prettier, we will install the following dependencies: - `@typescript-eslint/eslint-plugin`: a plugin with rules for projects with TypeScript code. - `@typescript-eslint/parser`: a parser that allows ESLint to make suggestions on TypeScript code. - `eslint-config-prettier`: an ESLint configuration that disables rules that will be managed by Prettier, avoiding conflicts. - `eslint-plugin-import`: a plugin that allows ESLint to handle the ES6+ `import`/`export` syntax. - `eslint-plugin-jsx-a11y`: a plugin with accessibility rules. - `eslint-plugin-react`: a plugin with rules specific to React projects. - `eslint-plugin-react-hooks`: a plugin with rules on the use of React hooks. It will ensure, for example, that you never use a hook outside of a function component.
To install all the dependencies, run the following line: ```sh yarn add -D eslint prettier @typescript-eslint/eslint-plugin @typescript-eslint/parser eslint-config-prettier eslint-plugin-import eslint-plugin-jsx-a11y eslint-plugin-react eslint-plugin-react-hooks ``` If you use `npm` as your package manager, replace `yarn add -D` with `npm install --save-dev`. ## Step 2: configure ESLint With the dependencies properly installed, it is time to create the ESLint configuration files. First, we will create the `.eslintrc` file, which will hold the ESLint settings for our project. > All ESLint and Prettier configuration files should be created at the root of the project. `.eslintrc` ```json { "extends": [ "eslint:recommended", "plugin:react/recommended", "plugin:react-hooks/recommended", "plugin:import/recommended", "plugin:jsx-a11y/recommended", "plugin:@typescript-eslint/recommended", "eslint-config-prettier" ], "settings": { "react": { "version": "detect" }, "import/resolver": { "node": { "paths": [ "src" ], "extensions": [ ".js", ".jsx", ".ts", ".tsx" ] } } }, "rules": { "no-unused-vars": [ "error", { "vars": "all", "args": "after-used", "ignoreRestSiblings": true, "argsIgnorePattern": "^_" } ], "react/react-in-jsx-scope": "off" } } ``` With the `no-unused-vars` rule we ensure that an error is emitted in case we forget an unused variable in our code. The `react/react-in-jsx-scope` rule was disabled so that ESLint does not require importing React (`import React from 'react'`) in files that use JSX, since this is no longer necessary in the most recent versions of React. Next, we will create an `.eslintignore` file, which tells ESLint which files to ignore: `.eslintignore` ``` node_modules/ dist/ env.d.ts ``` If you prefer, also add a script to run ESLint on your project to the `scripts` section of your `package.json`: ```json "scripts": { ... 
"lint": "eslint . --ext .ts,.tsx" } ``` ## Step 3: configure Prettier Just as we did with ESLint, we will now create a `.prettierrc` file with the settings we will use in Prettier. To learn more about the options used here and the other ones available, check the [Prettier documentation](https://prettier.io/docs/en/options.html): `.prettierrc` ```json { "trailingComma": "all", "tabWidth": 2, "semi": true, "singleQuote": true, "printWidth": 120, "bracketSpacing": true, "endOfLine": "lf" } ``` To determine the files ignored by Prettier, create a `.prettierignore` file, as follows: `.prettierignore` ``` node_modules/ dist/ ``` ## Step 4: integrating with VSCode Finally, we will configure VSCode to use ESLint and Prettier to make suggestions and format the code, respectively. If you do not have them yet, install the [Prettier - Code formatter](https://marketplace.visualstudio.com/items?itemName=esbenp.prettier-vscode) and [ESLint](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) extensions. With the extensions installed, you will also need to configure VSCode to use Prettier to format the document whenever it is saved. The ESLint extension should kick in automatically whenever there is a valid configuration file, such as the `.eslintrc` we created. If your project does not have a `.vscode/settings.json` file yet, you will need to create it. In that file, add the two rules below: ```json { ... "editor.formatOnSave": true, "editor.defaultFormatter": "esbenp.prettier-vscode" } ``` Done! With this, ESLint should highlight the problematic parts of your code, and Prettier should format the code and apply fixes whenever possible when the document is saved. 
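As an optional extra beyond the steps above, the ESLint extension can also be asked to apply its auto-fixes on save, alongside Prettier's formatting, by adding the `editor.codeActionsOnSave` setting to the same `.vscode/settings.json`:

```json
{
  "editor.formatOnSave": true,
  "editor.defaultFormatter": "esbenp.prettier-vscode",
  "editor.codeActionsOnSave": {
    "source.fixAll.eslint": true
  }
}
```

With this, fixable ESLint problems (such as unused imports flagged by auto-fixable rules) are corrected on save, while Prettier keeps handling pure formatting.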
![ESLint and Prettier on VSCode](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sguy7hw5qf62vxhppzm6.gif) ## Final thoughts This article was inspired by [Setting up ESLint & Prettier in ViteJS](https://cathalmacdonnacha.com/setting-up-eslint-prettier-in-vitejs) and brings some adjustments and additions. A repository with the complete project, including Storybook, is available at [https://github.com/marcosdiasdev/react-vitejs-template](https://github.com/marcosdiasdev/react-vitejs-template). Any questions? Let's talk.
marcosdiasdev
1,321,996
My Learning Strategy
Learning actively if i am doing a course in programming I will actively code along without just...
0
2023-01-09T01:35:56
https://dev.to/orix/my-learning-strategy-387e
learn, programming
Learn actively: if I am doing a programming course, I will actively code along instead of just watching and not putting my knowledge into action. Don't try to do everything in one day. Whatever you learn, once you think you have mastered it, try teaching it to another person to test yourself (write a blog post, a journal entry, etc.). Write all your questions down in one place and then answer all of them later. Some tips 🙌🏽: Sleep for 7-8 hours a night. Sleep at 10:00 PM, wake up at 5:30 AM.
orix
1,322,016
ExpressJS & TypeScript: Create the model and the controller by creating a simple ecommerce app and using MySQL as a database.
Here is an example of how you could save data to a MySQL database using Express.js, MySQL, and...
0
2023-01-09T02:22:22
https://dev.to/elhamnajeebullah/expressjs-typescript-create-the-model-and-the-controller-by-creating-a-simple-ecommerce-app-and-using-mysql-as-a-database-3of4
typescript, node, beginners, tutorial
Here is an example of how you could save data to a MySQL database using Express.js, MySQL, and TypeScript with the Model-View-Controller (MVC) pattern: First, you would need to install the necessary dependencies and configure your MySQL database: ``` // Install the MySQL and TypeScript dependencies npm install mysql2 @types/mysql2 ``` ```typescript // Configure your MySQL database connection in a separate module import { createConnection, Connection } from 'mysql2'; export class MySQLDatabase { static connection: Connection; static connect() { this.connection = createConnection({ host: 'localhost', user: 'username', password: 'password', database: 'database', }); } } ``` Next, you would create a model file to represent your database entities and handle the database queries: ```typescript import { MySQLDatabase } from './database'; export class Product { id: number; name: string; price: number; static async find(): Promise<Product[]> { return new Promise((resolve, reject) => { MySQLDatabase.connection.query( 'SELECT * FROM products', (err, results) => { if (err) { return reject(err); } resolve(results as Product[]); } ); }); } static async findById(id: number): Promise<Product | null> { return new Promise((resolve, reject) => { MySQLDatabase.connection.query( 'SELECT * FROM products WHERE id = ?', [id], (err, results) => { if (err) { return reject(err); } resolve(results[0] as Product | null); } ); }); } save(): Promise<void> { return new Promise((resolve, reject) => { MySQLDatabase.connection.query( 'INSERT INTO products SET ?', this, (err, result) => { if (err) { return reject(err); } this.id = result.insertId; resolve(); } ); }); } update(): Promise<void> { return new Promise((resolve, reject) => { MySQLDatabase.connection.query( 'UPDATE products SET ? 
WHERE id = ?', [this, this.id], (err, result) => { if (err) { return reject(err); } resolve(); } ); }); } delete(): Promise<void> { return new Promise((resolve, reject) => { MySQLDatabase.connection.query( 'DELETE FROM products WHERE id = ?', [this.id], (err, result) => { if (err) { return reject(err); } resolve(); } ); }); } } ``` Then, in your controller file, you could use the methods from the Product model to save, update, and delete data from the database: ```typescript import { Request, Response } from 'express'; import { Product } from '../models/product'; export class ProductsController { // A method to handle GET requests to the '/products' endpoint static async getAll(req: Request, res: Response) { try { // Get the list of products from the database const products = await Product.find(); res.json(products); } catch (err) { res.status(500).send(err); } } // A method to handle GET requests to the '/products/:id' endpoint static async getById(req: Request, res: Response) { try { // Get the product with the matching id from the database // (route params are strings, so convert to a number first) const product = await Product.findById(Number(req.params.id)); if (product) { res.json(product); } else { res.sendStatus(404); } } catch (err) { res.status(500).send(err); } } // A method to handle POST requests to the '/products' endpoint static async create(req: Request, res: Response) { try { // Create a new product in the database const product = Object.assign(new Product(), req.body); await product.save(); res.sendStatus(201); } catch (err) { res.status(500).send(err); } } // A method to handle PUT requests to the '/products/:id' endpoint static async update(req: Request, res: Response) { try { // Update the product with the matching id in the database const product = await Product.findById(Number(req.params.id)); if (product) { Object.assign(product, req.body); await product.update(); res.sendStatus(200); } else { res.sendStatus(404); } } catch (err) { res.status(500).send(err); } } // A method to handle DELETE requests to the '/products/:id' endpoint 
static async delete(req: Request, res: Response) { try { // Delete the product with the matching id from the database const product = await Product.findById(Number(req.params.id)); if (product) { await product.delete(); res.sendStatus(200); } else { res.sendStatus(404); } } catch (err) { res.status(500).send(err); } } } ``` In this example, the Product model is assumed to have methods find, findById, save, update, and delete for fetching, saving, updating, and deleting data from the database, respectively. The controller defines methods for handling GET, POST, PUT, and DELETE requests to the relevant API endpoints, and uses the Product model methods to perform the corresponding database operations. I hope this helps! Let me know if you have any questions or need further assistance.
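Each model method above repeats the same callback-to-Promise wrapping around `connection.query`. That pattern can be factored into one small helper. The sketch below is illustrative — the `promisifyQuery` name and the `QueryableConnection` interface are mine, not part of mysql2 — and it is exercised against a fake connection so it is self-contained:

```typescript
// Minimal shape of the callback-style query API the model relies on.
type QueryCallback = (err: Error | null, results?: unknown) => void;

interface QueryableConnection {
  query(sql: string, params: unknown[], cb: QueryCallback): void;
}

// Wraps a single callback-style query in a Promise, mirroring the
// pattern repeated in each Product method above.
function promisifyQuery<T>(
  conn: QueryableConnection,
  sql: string,
  params: unknown[] = []
): Promise<T> {
  return new Promise((resolve, reject) => {
    conn.query(sql, params, (err, results) => {
      if (err) return reject(err);
      resolve(results as T);
    });
  });
}

// Fake connection standing in for MySQLDatabase.connection.
const fakeConnection: QueryableConnection = {
  query(_sql, _params, cb) {
    cb(null, [{ id: 1, name: "Keyboard", price: 49.99 }]);
  },
};
```

With the real connection, `Product.find` would then reduce to something like `promisifyQuery<Product[]>(MySQLDatabase.connection, 'SELECT * FROM products')`, keeping the model methods short and uniform.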
elhamnajeebullah
1,322,089
A Tale of Hashery and Woe: How Mutable Hash Keys Led to an ActiveRecord Bug
In the changelog of Active Record right after version 7.0.4 is a small little bugfix that would, if I...
0
2023-01-10T14:25:13
https://dev.to/rnubel/a-tale-of-hashery-and-woe-how-mutable-hash-keys-led-to-an-activerecord-bug-3e85
rails, ruby, programming
In the changelog of Active Record right after version 7.0.4 is a small little bugfix that would, if I wasn't about to quote it and follow it up with this whole article, probably not catch your attention: > Fix a case where the query cache can return wrong values. See #46044 > > Aaron Patterson Behind this innocuous release note, though, is a fascinating tale of how hashes in Ruby _really_ work that will take us from the Rails codebase down to the MRI source code. And it starts with a simple question: ## Will this Ruby snippet print "hit" or "miss"? ```ruby key = {} hash = { key => "hit" } key[:foo] = rand(1e6) puts hash[key] || "miss" ``` ## Well... it depends. The only correct answer is **"yes"**. It could print _either_ value! If you think that's crazy, well, let me prove it: ```ruby # ruby 3.2; same results on 2.7 1_000_000.times.each_with_object(Hash.new(0)) do |_, result| key = {} hash = { key => "hit" } key[:foo] = rand(1e6) result[hash[key] || "miss"] += 1 end => {"miss"=>996050, "hit"=>3950} ``` Wild, right? To get to the bottom of this, we'll have to dig into the Ruby source code and find the Hash implementation. ## Pop open the hood Ruby hashes are implemented in the succinctly named [hash.c](https://github.com/ruby/ruby/blob/ruby_3_2/hash.c). This bug is happening when we _look up_ a value, which ultimately requires us to find the **entry** in the hash's underlying array. The `ar_find_entry` function is responsible for that. The arguments to it are: * `hash`: the hash object itself * `hash_value`: the numerical hash of the key object (in Ruby, any object responds to `.hash` and produces a deterministic, numerical hash -- try it out! This hash value is critical to how hashes work, as we'll see.) 
* `key`: the key object ```c static unsigned ar_find_entry(VALUE hash, st_hash_t hash_value, st_data_t key) { ar_hint_t hint = ar_do_hash_hint(hash_value); return ar_find_entry_hint(hash, hint, key); } # https://github.com/ruby/ruby/blob/ruby_3_2/hash.c#L744-L749 ``` Your first thought (other than "whoa, how long has it been since I worked in C?") is probably to wonder what a "hint" is. The `ar_do_hash_hint` method, plus a quick lookup of the type definition in `hash.h`, tells us: ```c static inline ar_hint_t ar_do_hash_hint(st_hash_t hash_value) { return (ar_hint_t)hash_value; } # https://github.com/ruby/ruby/blob/ruby_3_2/hash.c#L403-L407 ``` ```c typedef unsigned char ar_hint_t; # https://github.com/ruby/ruby/blob/ruby_3_2/internal/hash.h#L20 ``` This typecast to an unsigned char means that **a hint is just the last byte of the numerical hash.** So, a number from 0 to 255. ## Delving ever deeper Armed with that knowledge, we can proceed to `ar_find_entry_hint`. I'll strip the debug code out of this for clarity: ```c static unsigned ar_find_entry_hint(VALUE hash, ar_hint_t hint, st_data_t key) { unsigned i, bound = RHASH_AR_TABLE_BOUND(hash); const ar_hint_t *hints = RHASH(hash)->ar_hint.ary; for (i = 0; i < bound; i++) { if (hints[i] == hint) { ar_table_pair *pair = RHASH_AR_TABLE_REF(hash, i); if (ar_equal(key, pair->key)) { return i; } } } return RHASH_AR_TABLE_MAX_BOUND; } # https://github.com/ruby/ruby/blob/ruby_3_2/hash.c#L701-L742 ``` There are two checks this code makes for each entry to decide if it's our value: * `if (hints[i] == hint)` -- Ruby uses the hint as a way to **quickly** check if the key is likely to be our key or not. * `if (ar_equal(key, pair->key))` -- but before we *actually* return the row as a hit, we check the actual equality of the given key and the one for this entry. 
What this means is that two objects having keys with the same hint is perfectly normal: the code would spend a little wasted time before realizing the keys are different, but we wouldn't expect a false positive. So when we mutate the hash key in the snippet at the top of this post, there's a **1 out of 256** chance that our hash still keeps the same hint. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/16wcn664g1i8nco1yfyq.png) ## But won't the equality check save us? **No!** Even after we mutate a hash, the hash **always stays equal to any references to it** (because it's the same object and therefore still has the same memory address). And the hash table isn't storing the key object by value; it's only storing a pointer to it. So in the snippet above, the reference to `key` inside `hash` will always be equal to that same `key` object no matter how much we mutate it; meaning the only thing standing between us and a false-positive hit is the hint check. That gives us a 255 out of 256 chance to see our "expected" behavior of a miss, and a 1 out of 256 chance to see the unexpected hit. I ran the snippet at the top of the post for 100,000,000 iterations -- where we'd expect to see 390,625 hits -- and got 390,956. I'm no statistician, but I think that proves our logic is correct. ## Big deal. Clearly mutating a hash key is just a bad idea. Well, you're not wrong. Python doesn't let dictionaries use mutable keys, which is very sensible. Go and JavaScript do, but they don't exhibit this bug and instead will always produce a hit after the key is mutated (without investigating, I'll bet they're doing the hashing based purely on the pointer address). So Ruby seems to stand alone in both allowing this behavior and having a hash implementation that leads to this indeterminate behavior. ## How does this relate to Rails? Ah, I almost forgot. 
This connects to Active Record because of how ActiveRecord::QueryCache is implemented: ```ruby def cache_sql(sql, name, binds) @lock.synchronize do result = if @query_cache[sql].key?(binds) @query_cache[sql][binds] else @query_cache[sql][binds] = yield end result.dup end end # https://github.com/rails/rails/blob/v7.0.4/activerecord/lib/active_record/connection_adapters/abstract/query_cache.rb#L127-L141 ``` If you don't remember what the query cache is, it's an enabled-by-default middleware for ActiveRecord that caches read queries to prevent suboptimal code from hammering the database unnecessarily. It gets reset after every request, and for the most part, it works great with no need to notice it working. But at Enova, we saw a case where the cache was producing false positives, which led me down this whole path. Anyway, taking a look at the code again, the key thing is that `binds` will (if `bound_parameters` is on) hold a reference to a hash passed into a `where` clause. So this snippet demonstrates the same actual Ruby hash bug that we've just solved: ```ruby criteria = { thing: "one" } results_one = MyModel.where(some_col: criteria) criteria.merge!(other: "stuff") results_two = MyModel.where(some_col: criteria) # 1/256 chance that results_two incorrectly gets the same data as results_one! ``` This is the bug that I reported in [rails/rails#46044](https://github.com/rails/rails/issues/46044): {% embed https://github.com/rails/rails/issues/46044 %} Tenderlove (aka Aaron Patterson) was able to quickly fix it, though, by dup-and-freezing any hash values passed into ActiveRecord query conditions: {% embed https://github.com/rails/rails/pull/46048 %} ## Conclusion Hopefully you found this entertaining and a little informative. Maybe you're wondering, is this actually a bug in Ruby itself? Certainly it feels like a design flaw. When I get around to it, perhaps soon, I plan to post this to the Ruby mailing list to at least see some core developers' thoughts on it.
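As a closing experiment, the two facts driving the bug can be observed from plain Ruby: masking `hash` with `0xFF` approximates the low-byte hint that `ar_do_hash_hint` takes, and `eql?` shows why the equality check can never catch a stale entry:

```ruby
key = { foo: 1 }
original_hash = key.hash
hint_before = original_hash & 0xFF # the low byte, as ar_do_hash_hint would take it

same_reference = key # another reference to the same object, as the hash table holds
key[:bar] = 2        # mutate the key in place

# The content-based hash changes along with the contents...
hash_changed = key.hash != original_hash
# ...but the mutated key is still eql? to the reference the table holds,
# so only the hint comparison stands between us and a false-positive hit.
still_equal = key.eql?(same_reference)
```

Run enough times with random mutations, the low byte will coincide by chance roughly 1 time in 256 — exactly the hit rate measured at the top of the post.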
rnubel
1,322,110
Deploying Go Applications to AWS App Runner: A Step-by-Step Guide
...With the managed runtime for Go In this blog post you will learn how to run a Go application to...
0
2023-03-02T10:17:13
https://abhishek1987.medium.com/deploying-go-applications-to-aws-app-runner-a-step-by-step-guide-4cacfa2a7a38
go, tutorial, programming, aws
***...With the managed runtime for Go*** In this blog post you will learn how to run a Go application on AWS App Runner using the [Go platform runtime](https://docs.aws.amazon.com/apprunner/latest/dg/service-source-code-go1.html). You will start with an [existing Go application on GitHub](https://github.com/abhirockzz/apprunner-go-runtime-app) and deploy it to AWS App Runner. The application is based on the [URL shortener application](https://dev.to/aws/build-a-serverless-url-shortener-with-go-10i2) (with some changes) that persists data in DynamoDB. **What's covered?** - [Prerequisites](#prerequisites) - [Create a GitHub repo for the URL shortener application](#create-a-github-repo-for-the-url-shortener-application) - [Create a DynamoDB table to store URL information](#create-a-dynamodb-table-to-store-url-information) - [Create an IAM role with DynamoDB specific permissions](#create-an-iam-role-with-dynamodb-specific-permissions) - [Deploy the application to AWS App Runner](#deploy-the-application-to-aws-app-runner) - [Verify URL shortener functionality](#verify-url-shortener-functionality) - [Clean up](#clean-up) - [Conclusion](#conclusion) AWS App Runner can create and manage services based on two types of service sources: - Source code (covered in this blog post) - Source image **Source code** is nothing but your application code that App Runner will build and deploy. All you need to do is point App Runner to a source code repository and choose a suitable runtime that corresponds to a programming platform version. App Runner provides platform-specific managed runtimes (for Python, Node.js, Java, Go etc.). The **AWS App Runner Go platform runtime** makes it easy to build and run containers with web applications based on a Go version. 
*You don't need to provide container configuration and build instructions such as a Dockerfile.* When you use a Go runtime, App Runner starts with a *managed* Go runtime image which is based on the [Amazon Linux Docker image](https://hub.docker.com/_/amazonlinux) and contains the runtime package for a version of Go and some tools. App Runner uses this managed runtime image as a base image, and adds your application code to build a Docker image. It then deploys this image to run your web service in a container. ## Prerequisites You will need an AWS account and [install AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html). Let's get started.... ## Create a GitHub repo for the URL shortener application Clone this [GitHub repo](https://github.com/abhirockzz/apprunner-go-runtime-app) and then upload it to a GitHub repository in your account (keep the same repo name i.e. `apprunner-go-runtime-app`) ``` git clone https://github.com/abhirockzz/apprunner-go-runtime-app ``` ## Create a DynamoDB table to store URL information Create a table named `urls`. Choose the following: - Partition key named `shortcode` (data type `String`) - `On-Demand` capacity mode ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pu2aq43o9zc0eo95yx90.png) ## Create an IAM role with DynamoDB specific permissions ```bash export IAM_ROLE_NAME=apprunner-dynamodb-role aws iam create-role --role-name $IAM_ROLE_NAME --assume-role-policy-document file://apprunner-trust-policy.json ``` Before creating the policy, update the `dynamodb-access-policy.json` file to reflect the DynamoDB table ARN name. ```bash aws iam put-role-policy --role-name $IAM_ROLE_NAME --policy-name dynamodb-crud-policy --policy-document file://dynamodb-access-policy.json ``` ## Deploy the application to AWS App Runner > If you have an existing AWS App Runner GitHub connection and want to use that, skip to the **Repository selection** step. 
**Create AWS App Runner GitHub connection** Open the App Runner console and Choose **Create service**. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gbxxb7zdqo3kkv7eh3iw.png) On the **Source and deployment** page, in the **Source** section, for **Repository** type, choose **Source code repository**. Under **Connect to GitHub** choose **Add new**, and then, if prompted, provide your GitHub credentials. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yz0m91egr1o8zxwcdz3f.png) In the **Install AWS Connector for GitHub** dialog box, if prompted, choose your GitHub account name. If prompted to authorize the *AWS Connector for GitHub*, choose **Authorize AWS Connections**. Choose **Install**. Your account name appears as the selected GitHub account/organization. You can now choose a repository in your account. **Repository selection** For **Repository**, choose the repository you created - *apprunner-go-runtime-app*. For **Branch**, choose the *default* branch name of your repository (for example, **main**). **Configure your deployment**: In the **Deployment** settings section, choose **Automatic**, and then choose **Next**. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fsdulfq772vb9hst6tgc.png) **Configure application build** On the **Configure build** page, for **Configuration** file, choose **Configure all settings** here. Provide the following build settings: - Runtime – Choose *Go 1* - Build command – Enter *go build main.go* - Start command – Enter *./main* - Port – Enter *8080* Choose **Next**. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oklfp2n6n7ssu6ujm9oj.png) **Configure your service.** Under **Environment variables**, add an environment variable. For **Key**, enter `TABLE_NAME`, and for **Value**, enter the name of the DynamoDB table (`urls`) that you created before. 
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9sjmfpcj4c44so8qw99b.png) Under **Security > Permissions**, choose the IAM role that you had created earlier (`apprunner-dynamodb-role`). ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fk6v91dqcy2h86rmzstj.png) Choose **Next**. On the **Review and create** page, verify all the details you've entered, and then choose **Create and deploy**. If the service is successfully created, the console shows the service dashboard, with a Service overview of the application. ## Verify URL shortener functionality The application exposes two endpoints: 1. To create a short link for a URL 2. Access the original URL via the short link First, export the App Runner service endpoint as an environment variable, ```bash export APP_URL=<enter App Runner service URL> # example export APP_URL=https://jt6jjprtyi.us-east-1.awsapprunner.com ``` Invoke it with a URL that you want to access via a short link. ```bash curl -i -X POST -d 'https://abhirockzz.github.io/' $APP_URL # output HTTP/1.1 200 OK Date: Thu, 21 Jul 2022 11:03:40 GMT Content-Length: 25 Content-Type: text/plain; charset=utf-8 {"ShortCode":"ae1e31a6"} ``` You should get a `JSON` response with a short code and see an item in the `DynamoDB` table as well. You can continue to test the application with a few other URLs. **To access the URL associated with the short code** Enter the following in your browser `http://<enter APP_URL>/<shortcode>` For example, when you enter `https://jt6jjprtyi.us-east-1.awsapprunner.com/ae1e31a6`, you will be re-directed to the original URL. You can also use `curl`. 
Here is an example: ```bash export APP_URL=https://jt6jjprtyi.us-east-1.awsapprunner.com curl -i $APP_URL/ae1e31a6 # output HTTP/1.1 302 Found Location: https://abhirockzz.github.io/ Date: Thu, 21 Jul 2022 11:07:58 GMT Content-Length: 0 ``` ## Clean up Once you complete this tutorial, don't forget to delete the following: - DynamoDB table - App Runner service ## Conclusion In this blog post, you learned how to go from a Go application in your GitHub repository to a complete URL shortener service deployed to AWS App Runner!
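For reference, an 8-character code such as `ae1e31a6` can be derived by hashing the long URL. The snippet below is an illustrative sketch of that idea only — the `shortCode` function is hypothetical and not necessarily the exact scheme used in the sample repository:

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// shortCode derives a short, deterministic 8-character code from a URL
// by hex-encoding the first 4 bytes of its SHA-1 digest.
func shortCode(url string) string {
	sum := sha1.Sum([]byte(url))
	return hex.EncodeToString(sum[:4]) // 4 bytes -> 8 hex characters
}

func main() {
	fmt.Println(shortCode("https://abhirockzz.github.io/"))
}
```

In a real service, a scheme like this would also need to handle the (rare) collision case, for example with a DynamoDB conditional write on the `shortcode` partition key.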
abhirockzz
1,322,307
How to use Firestore with Redux in a React application
You’re using Firebase as your backend-as-a-service platform, with Firestore holding your data....
0
2023-01-09T10:11:03
https://dev.to/emotta/how-to-use-firestore-with-redux-in-a-react-application-44g7
webdev, react, redux, firebase
![Photo by [Lautaro Andreani](https://unsplash.com/@lautaroandreani?utm_source=medium&utm_medium=referral) on [Unsplash](https://unsplash.com?utm_source=medium&utm_medium=referral)](https://cdn-images-1.medium.com/max/9012/0*WNb40yO0ma4mKbC3) You’re using **Firebase** as your backend-as-a-service platform, with **Firestore** holding your data. You’re building the frontend with **React** and you want to use **Redux** to manage the app’s state. If you’re still wondering **how to efficiently fetch data from Firestore and seamlessly add it to your Redux state**, you’ve come to the right place. In this post I’m going to explain how to do it using some of the recommended approaches when using Redux today, in 2023. TL;DR: use RTK Query, with *queryFn* functions that call the Firebase SDK. ## Some background ### Firebase and Redux [Firebase](https://firebase.google.com/) is a backend-as-a-service platform. One of their products is [Firestore](https://firebase.google.com/docs/firestore), which is a noSQL database. To use it in your app, the recommended approach is to use the [Firebase SDK](https://firebase.google.com/docs/web/setup). [Redux](https://redux.js.org/) is a state management library. It’s useful when your app’s state is too large, or the logic to update it too complex, [among other scenarios](https://redux.js.org/faq/general). ### You should be using Redux Toolkit If you’re trying to build an app using Firestore and Redux, you may have come across resources explaining how to fetch data and, in a separate process, add it to Redux state. You may have read about writing your fetching logic using “[thunks](https://redux.js.org/usage/writing-logic-thunks)” explicitly. There’s even a package you may have found called [react-redux-firebase](http://react-redux-firebase.com/) that provides React bindings. While these solutions certainly work, they’re not the most efficient or up-to-date in 2023. 
The main reason for this being — since most of the resources I found were written more than a few years ago — they do not make use of a library that will make your life using Redux 100% easier and more efficient: **Redux Toolkit (or “RTK”)**. As of today, the Redux documentation recommends always using RTK. In their own words, “RTK includes utilities that help simplify many common use cases, including [store setup](https://redux-toolkit.js.org/api/configureStore), [creating reducers and writing immutable update logic](https://redux-toolkit.js.org/api/createreducer), and even [creating entire ‘slices’ of state at once](https://redux-toolkit.js.org/api/createslice).” ### Using RTK Query to fetch data If you’re using RTK, you should probably also be using **RTK Query** to fetch your data from the backend. It eliminates the need to write data fetching and caching logic yourself. The **createApi** function is the core of [RTK Query](https://redux-toolkit.js.org/rtk-query/overview)’s functionality. From their documentation: “It allows you to define a set of ‘endpoints’ that describe how to retrieve data from backend APIs and other async sources, including the configuration of how to fetch and transform that data. It generates an ‘API slice’ structure that contains Redux logic (and optionally React hooks) that encapsulate the data fetching and caching process for you”. Normally, createApi is used by defining a baseUrl and then defining endpoints for specific queries and mutations: ```typescript // Define a service using a base URL and expected endpoints export const pokemonApi = createApi({ reducerPath: 'pokemonApi', baseQuery: fetchBaseQuery({ baseUrl: 'https://pokeapi.co/api/v2/' }), endpoints: (builder) => ({ getPokemonByName: builder.query<Pokemon, string>({ query: (name) => `pokemon/${name}`, }), }), }) ``` Firebase, though, works through its SDK, not through an API. 
If we still want to keep the benefits of fetching and mutating data using RTK Query (more on this below), we need a slightly different approach, as we’ll see. ### Where to get the final code of the app shown here The final version of the code shown in this post is part of a project that can be found in [this repository](https://github.com/e-motta/top-project-photo-tagging). In my previous post I went through the design process and some insights I had while implementing it. ## Implementing it Before anything else, we need to install the necessary libraries (**@reduxjs/toolkit** and **firebase**), [configure the Firebase project](https://firebase.google.com/docs/web/setup) and [set up the Redux store using RTK](https://redux-toolkit.js.org/tutorials/quick-start). The first thing we’ll do after that is create a “slice” of the state, where we will write the code related to a specific part of the state. As mentioned, we’ll be using the ***createApi*** function for that. We can do that by creating **a single slice** containing all of the logic pertaining to our API, or we can split it in **more than one slice**. In either case, it is recommended to define **a single central slice**. If we need more than one slice, we can inject the different endpoints in the central slice later. For this post we’ll implement one slice that will fetch and update data related to a high-scores table from the Firestore database. In the [repo](https://github.com/e-motta/top-project-photo-tagging) and in [this link](https://redux-toolkit.js.org/rtk-query/usage/code-splitting) you can see what would need to be changed if you need more than one slice when using RTK Query. 
In this example, we’re creating a file called **scoresSlice.ts**:

```typescript
// src/features/scores/scoresSlice.ts

import { createApi, fakeBaseQuery } from "@reduxjs/toolkit/query/react";
import {
  arrayUnion,
  collection,
  doc,
  updateDoc,
  getDocs,
} from "firebase/firestore";
import { firestore } from "../../firebase";
import { ScoresTable, ScoresTables } from "../../types";

export const firestoreApi = createApi({
  baseQuery: fakeBaseQuery(),
  tagTypes: ["Score"],
  endpoints: (builder) => ({
    fetchHighScoresTables: builder.query<ScoresTables, void>({
      async queryFn() {
        try {
          const ref = collection(firestore, "scoresTables");
          const querySnapshot = await getDocs(ref);
          let scoresTables: ScoresTables = [];
          querySnapshot?.forEach((doc) => {
            scoresTables.push({ id: doc.id, ...doc.data() } as ScoresTable);
          });
          return { data: scoresTables };
        } catch (error: any) {
          console.error(error.message);
          return { error: error.message };
        }
      },
      providesTags: ["Score"],
    }),
    setNewHighScore: builder.mutation({
      async queryFn({ scoresTableId, newHighScore }) {
        try {
          await updateDoc(doc(firestore, "scoresTables", scoresTableId), {
            scores: arrayUnion(newHighScore),
          });
          return { data: null };
        } catch (error: any) {
          console.error(error.message);
          return { error: error.message };
        }
      },
      invalidatesTags: ["Score"],
    }),
  }),
});

export const { useFetchHighScoresTablesQuery, useSetNewHighScoreMutation } =
  firestoreApi;
```

*A note: The way things are set up here, Firestore rules need to be open for read and write by everyone, which is not recommended for production. A better approach would be calling Firebase Functions to perform operations on Firestore, but since they don’t offer a free version for that, for the sake of this example project I chose to leave things as they are shown here.*

The createApi function takes an object with ***baseQuery*** and ***endpoints***. In this example we’re also adding optional ***tagTypes***.
RTK Query will automatically generate **React** **hooks** so that we can use the queries and mutations. ### **baseQuery** This is normally used to define the base URL of the API (see the Pokémon example above, under the “Using RTK Query to fetch data” section). Nevertheless, since we’re using the Firebase SDK, we don’t have a base URL. We can instead use a ***fakeBaseQuery()***, imported from the RTK Query library. ### **tagTypes** Tag types are used for caching and invalidation. When specifying them, you will be able to provide tags when data is fetched from the database. Afterwards, you can invalidate the cache of specific tags, meaning the data will need to be fetched again. This can be useful when you need to ensure that you’re using the most up-to-date data. In our example, we’re specifying the tag “Score”. We provide this tag when fetching data from the *fetchHighScoresTables* endpoint (notice the *providesTags: [‘Score’]* after the endpoint object). When setting a new high score with the ***setNewHighScore*** endpoint, we’re invalidating the same tag (*invalidatesTags: [‘Score’]* after the endpoint object). This means that the cached data fetched with ***fetchHighScoresTables*** (without the new high score) will be invalidated and the data will need to be fetched again (this time with the new high score). ### **endpoints** This is a function that takes a *builder* as argument and returns an object with the expected API endpoints. Normally the endpoints would look like in the Pokémon example above (under the “Using RTK Query to fetch data” section). To fetch data from Firestore, though, considering we’re not actually querying it from an API, we need to use the ***queryFn*** function. This allows us to define any arbitrary logic inside it, as long as we return the data in the shape expected by RTK Query (i.e. *{ data: ResultType }*). 
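To make that contract concrete in isolation, here is a minimal, framework-free sketch of what a *queryFn* has to do: resolve to `{ data }` on success or `{ error }` on failure. The `fetchScores` function below is a hypothetical stand-in for a Firebase SDK call, not part of any real API:

```javascript
// Hypothetical stand-in for a Firebase SDK call (not a real API).
async function fetchScores(shouldFail) {
  if (shouldFail) throw new Error('permission-denied');
  return [{ id: 'a', scores: [120, 98] }];
}

// Wraps any async call into the { data } / { error } shape that
// RTK Query expects a queryFn to return.
async function toQueryFnResult(asyncCall) {
  try {
    const data = await asyncCall();
    return { data };
  } catch (error) {
    return { error: error.message };
  }
}

// Demo: one successful and one failing call.
toQueryFnResult(() => fetchScores(false)).then((result) =>
  console.log('success shape:', JSON.stringify(result))
);
toQueryFnResult(() => fetchScores(true)).then((result) =>
  console.log('failure shape:', JSON.stringify(result))
);
```

The *scoresSlice* endpoints defined earlier follow exactly this pattern, with real Firebase SDK calls in place of `fetchScores`.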
In the example above, we’re importing and using the Firebase SDK functions to query and update Firestore, just like we would if not using RTK Query. After querying the data, we return *{ data: scoresTables }*; when updating the database, no data needs to be returned, but RTK Query still expects the same shape of return, so we simply return *{ data: null }*. ### React hooks After defining ***firestoreApi*** with the ***createApi*** function, RTK Query will automatically generate React hooks to query/mutate the data. As per the documentation: “A React hook that automatically triggers fetches of data from an endpoint, ‘subscribes’ the component to the cached data, and reads the request status and cached data from the Redux store. The component will re-render as the loading status changes and the data becomes available.” In the example above, it generated the ***useFetchHighScoresTablesQuery*** and ***useSetNewHighScoreMutation*** hooks. We can use them just like any other React hook (inside function components, or inside other custom hooks). The *use…Query* hook will return an object with the data and the query properties. ```typescript // src/features/scores/HighScores.tsx import { useFetchHighScoresTablesQuery } from './scoresSlice'; const HighScores = () => { const { data, isLoading, isSuccess, isError, error, } = useFetchHighScoresTablesQuery(); return ( // ... ) } ``` The *use…Mutation* hook will return a tuple with a mutation trigger and the mutation result. The mutation trigger is called to fire off the mutation request. The mutation result is an object with the mutation properties. 
```typescript
// src/features/scores/components/EnterName.tsx

import { useSetNewHighScoreMutation } from '../scoresSlice';

const EnterName = () => {
  const [setNewHighScore, result] = useSetNewHighScoreMutation();

  // assume we set the variable 'id' with the table id
  // and the variable 'newHighScore' with a valid high score

  return (
    <button
      onClick={() => setNewHighScore({ scoresTableId: id, newHighScore })}
    ></button>
  )
}
```

Note that the mutation trigger takes a single argument, matching the object destructured by the endpoint’s *queryFn*. And this is it! You’re all set to use Firestore while keeping the benefits of Redux and RTK Query.

## Final words

In summary:

* To use Firestore with Redux in a React application, it is recommended to use Redux Toolkit (RTK) and RTK Query.
* RTK Query’s *createApi* function can be used to define endpoints for specific queries and mutations.
* However, since the Firebase SDK is not an API, a different approach is needed to use *createApi* with Firestore.
* By using RTK Query with *queryFn* functions that call the Firebase SDK, you can add data from Firestore to your Redux state and take advantage of the benefits of Redux and RTK Query.

I hope this can be useful to you, or at least be a source of inspiration!
emotta
1,322,443
How to Create a real-time chat application with WebSocket
WebSockets are a technology for creating real-time, bi-directional communication channels over a...
0
2023-01-09T13:20:50
https://blog.learnhub.africa/2023/01/09/how-to-create-a-real-time-chat-application-with-websocket/
websocket, webdev, programming, javascript
WebSockets are a technology for creating real-time, bi-directional communication channels over a single TCP connection. They help build applications that require low-latency, high-frequency communication, such as chat applications, online gaming, and collaborative software. In this tutorial, we will be building a real-time chat application using WebSockets and the Socket.IO library. We will use Node.js as the server-side language and React as the client-side technology. However, the principles discussed in this tutorial can be applied to any language or technology that supports WebSockets.

#### Prerequisites

Before we begin, you should have the following prerequisites:

* Basic understanding of JavaScript
* Basic understanding of Node.js and React
* Basic understanding of web development concepts, such as HTTP and HTML
* Node.js and npm installed on your machine
* React developer tools installed in your browser (optional, but helpful for debugging)

#### Setting up the Server

First, let's create a new directory for our project and navigate to it in the terminal.

```
mkdir real-time-chat-app
cd real-time-chat-app
```

Next, let's create a new Node.js project by running the following command:

`npm init -y`

This will create a package.json file in our project directory. Next, we need to install the Socket.IO library, which will handle the WebSocket communication for us. Run the following command to install it:

`npm install socket.io`

Now let's create a new file called `server.js` in the root of our project directory. This will be the entry point for our server-side code. In `server.js`, we first need to require the `http` and `socket.io` modules:

```
const http = require('http');
const io = require('socket.io');
```

Next, let's create an HTTP server and bind it to a port. We will use port 3000 for this example, but you can choose any available port on your machine.
```
const server = http.createServer((req, res) => {
  res.end('Server is running!');
});

const port = 3000;

server.listen(port, () => {
  console.log(`Server is listening on port ${port}`);
});
```

Now if we run the server by running `node server.js` in the terminal and navigate to `http://localhost:3000` in the browser, we should see the message "Server is running!". Next, we need to attach the Socket.IO server to our HTTP server. We do this by calling the io function and passing it our HTTP server as an argument:

`const ioServer = io(server);`

Now that we have set up the server, let's move on to setting up the client-side code.

#### Setting up the Client

First, let's create a new directory called `client` in the root of our project directory. This is where we will store the client-side code for our chat application. Next, let's create a new React project in the `client` directory. Run the following command in the `client` directory:

`npx create-react-app .`

Then install the Socket.IO client library, which we will import below:

`npm install socket.io-client`

Once you have created a new React project in the client directory, you should have a `public` and `src` directory inside the `client` directory. Inside the `src` directory, create a new file called `App.js`. This will be the root component for our chat application. First, let's import the required dependencies:

```
import React, { useState, useEffect } from 'react';
import io from 'socket.io-client';
```

Next, let's create a function called useWebSocket that will handle the WebSocket communication for us.
This function will return an object with three properties:

* `messages`: an array of messages that have been received from the server
* `sendMessage`: a function that can be called to send a message to the server
* `error`: an error object that will be set if there is an error in the WebSocket connection

Add the following code to App.js:

```
function useWebSocket() {
  const [messages, setMessages] = useState([]);
  const [error, setError] = useState(null);
  const socketRef = React.useRef(null);

  useEffect(() => {
    const socket = io('http://localhost:3000');
    socketRef.current = socket;

    socket.on('connect', () => {
      console.log('Connected to server!');
    });

    socket.on('disconnect', () => {
      console.log('Disconnected from server');
    });

    socket.on('error', (error) => {
      setError(error);
    });

    socket.on('message', (message) => {
      setMessages((prevMessages) => [...prevMessages, message]);
    });

    return () => {
      socket.disconnect();
    };
  }, []);

  const sendMessage = (message) => {
    if (socketRef.current && socketRef.current.connected) {
      socketRef.current.send(message);
    }
  };

  return { messages, sendMessage, error };
}
```

This code uses the useEffect hook to set up the WebSocket connection when the component mounts, and stores the socket instance in a ref so that `sendMessage` can reach it outside the effect callback. We are also using the useState hook to keep track of the messages and error state. The useWebSocket function returns an object that contains the messages array, the sendMessage function, and the error object. Now let's use the useWebSocket function in our App component.
Add the following code to App.js:

```
function App() {
  const { messages, sendMessage, error } = useWebSocket();

  if (error) {
    return <div>Error: {error.message}</div>;
  }

  return (
    <div>
      <ul>
        {messages.map((message, index) => (
          <li key={index}>{message}</li>
        ))}
      </ul>
      <form
        onSubmit={(event) => {
          event.preventDefault();
          const messageInput = event.target.elements.message;
          sendMessage(messageInput.value);
          messageInput.value = '';
        }}
      >
        <input name="message" />
        <button type="submit">Send</button>
      </form>
    </div>
  );
}

export default App;
```

Now let's move on to handling the messages on the server side.

#### Handling Messages on the Server

In the `server.js` file, let's add the following code to handle incoming messages:

```
ioServer.on('connection', (socket) => {
  console.log('New client connected');

  socket.on('message', (message) => {
    console.log(`Received message from client: ${message}`);
    ioServer.emit('message', message);
  });

  socket.on('disconnect', () => {
    console.log('Client disconnected');
  });
});
```

In this code, we use the connection event to handle new clients connecting to the server. We are also using the message event to handle incoming messages from the client and the disconnect event to handle clients that disconnect from the server. When a message is received, we simply log it to the console and then emit it back to all connected clients using the emit method. Now, if you start the server by running `node server.js` and run the React app with `npm start` inside the `client` directory, you should be able to send messages and see them displayed in real-time.
#### Conclusion

This tutorial taught us how to create a real-time chat application using WebSockets and the Socket.IO library. We have set up a Node.js server and a React client and implemented the WebSocket communication using the useWebSocket hook on the client side and the connection, message, and disconnect events on the server side. You can extend this application in many ways, such as adding user authentication, storing messages in a database, and implementing more advanced features like message editing and deletion. I hope this tutorial has provided a good foundation for you to build upon. Read more exciting articles on frontend here.

#### Resources

[Microsoft websockets](https://learn.microsoft.com/en-us/dotnet/fundamentals/networking/websockets)
[Basics of Websockets](https://en.wikipedia.org/wiki/WebSocket)
scofieldidehen
1,322,543
Doubly Linked List Series: Creating shift() and unshift() methods with JavaScript
Building off of last week's blog on Doubly Linked List classes, push(), and pop() methods - linked...
0
2023-01-09T13:46:16
https://dev.to/tartope/doubly-linked-list-series-creating-shift-and-unshift-methods-with-javascript-2mmd
Building off of last week's blog on Doubly Linked List classes, push(), and pop() methods - [linked here](https://dev.to/tartope/doubly-linked-list-series-node-and-doubly-linked-list-classes-with-push-and-pop-methods-using-javascript-37e) - today I will be discussing shift() and unshift() methods.

Starting with shift(), this method does not take a parameter and removes the first node from the beginning of the list. Here is an illustration of a shift() method:

![Drawing of shift method](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o0ss0fne1xpvyc6h6utf.png)

Edge cases: you should account for a list that is empty or has only one node. If neither of those cases applies, then follow the steps above. Here is an example of code for a shift() method:

![Screen shot of shift method code](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s41ul2sd166fzmbx7ofz.png)

The method unshift() adds a node to the beginning of the list. Here is an illustration of this method:

![Drawing of unshift method](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dnahdlxfnq6mad4vhecw.png)

Edge case: you should account for an empty list. If the list is not empty, then follow the steps mentioned above. Here is an example of code for an unshift() method:

![Screen shot of unshift method code](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s2jg7txaghbigr84oeoh.png)

The time complexity for the shift() and unshift() methods of a Doubly Linked List is O(1) constant time. The length of the list does not impact the number of steps needed to remove from the beginning or add to the beginning of the list. In the next part of this series, I will share what I have learned about creating get() and set() methods for a Doubly Linked List.
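Since the code in this post is shown as screenshots, here is one possible text version of the two methods. It is a sketch following the Node/DoublyLinkedList structure from the earlier post in this series; the exact property names (`val`, `prev`, `next`, `head`, `tail`, `length`) are assumptions based on the illustrations:

```javascript
class Node {
  constructor(val) {
    this.val = val;
    this.prev = null;
    this.next = null;
  }
}

class DoublyLinkedList {
  constructor() {
    this.head = null;
    this.tail = null;
    this.length = 0;
  }

  // Removes and returns the first node. O(1).
  shift() {
    if (!this.head) return undefined; // edge case: empty list
    const oldHead = this.head;
    if (this.length === 1) {          // edge case: single node
      this.head = null;
      this.tail = null;
    } else {
      this.head = oldHead.next;
      this.head.prev = null;
      oldHead.next = null;            // detach the removed node
    }
    this.length--;
    return oldHead;
  }

  // Adds a new node to the beginning. O(1).
  unshift(val) {
    const newNode = new Node(val);
    if (!this.head) {                 // edge case: empty list
      this.head = newNode;
      this.tail = newNode;
    } else {
      newNode.next = this.head;
      this.head.prev = newNode;
      this.head = newNode;
    }
    this.length++;
    return this;
  }
}

// Demo: build [1, 2] with unshift, then remove the head with shift.
const demo = new DoublyLinkedList();
demo.unshift(2);
demo.unshift(1);
console.log(demo.shift().val, demo.length);
```

Notice that neither method ever walks the list, which is why both are constant time no matter how long the list grows.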
tartope
1,322,601
Getting started with Nix Package Manager
Nix is a standalone package manager from the NixOS family that can be installed on other Linux/Unix...
0
2023-01-10T15:30:00
https://dev.to/blackbeard173/getting-started-with-nix-package-manger-4b21
nix, linux, wsl2
Nix is a standalone package manager from the NixOS family that can be installed on other Linux/Unix OS: `https://nixos.org/download.html`

## Installation:

To install Nix on WSL:

```bash
sh <(curl -L https://nixos.org/nix/install) --no-daemon
```

Then add `source $HOME/.nix-profile/etc/profile.d/nix.sh` to your `.bashrc` or `.zshrc`.

## Usage:

- Search for a package: `nix-env -qa firefox`
- Install a package: `nix-env -i firefox` or `nix-env -iA nixpkgs.firefox` (works on non-NixOS only)
- List installed packages: `nix-env -q`
- Uninstall a package: `nix-env -e firefox`
- Upgrade a package: `nix-env -u firefox`
- Upgrade all packages: `nix-env -u`
- Clean up: `nix-collect-garbage -d`
blackbeard173
1,322,607
Running KubernetesPodOperator Tasks in different AWS accounts
update August, 14th I wanted to update to newer version of MWAA, so I have tested the original blog...
0
2023-01-09T17:36:43
https://blog.beachgeek.co.uk/mwaa-eks-multi-aws/
opensource, aws
**update August, 14th** I wanted to update to newer version of MWAA, so I have tested the original blog post against EKS 1.24 and MWAA version 2.4.3. I also had a few messages about whether this would work across different AWS regions. The good news is that it does. I have also put together a repo for this [here](https://github.com/094459/mwaa-eks) I thought that I would also check/update that it works for newer versions of MWAA, so I had 2.4.3 up and running so thought I would use that. I did have to update the requirements.txt from the original post below so that it is compatible with Airflow 2.4.3. If you are using newer versions, you will need to make suitable changes. Check your constraints files for the right versions. ``` --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.4.3/constraints-3.10.txt" apache-airflow-providers-cncf-kubernetes==4.4.0 kubernetes==23.6.0 ``` In my quick test, I had my MWAA 2.4.3 environment up and running in eu-central-1 (Frankfurt), and I had my EKS Cluster 1.24 running in eu-west-2 (London). When I triggered the DAG from Frankfurt. 
``` [2023-08-15, 19:27:45 UTC] {{kubernetes_pod.py:380}} INFO - Found matching pod mwaa-pod-test-4e7bc9fc53b84c048a45ba0dabef35b8 with labels {'airflow_kpo_in_cluster': 'False', 'airflow_version': '2.4.3', 'dag_id': 'kubernetes_pod_example', 'foo': 'bar', 'kubernetes_pod_operator': 'True', 'run_id': 'manual__2023-08-15T192740.4169940000-5bfe34873', 'task_id': 'pod-task', 'try_number': '1'} [2023-08-15, 19:27:45 UTC] {{kubernetes_pod.py:381}} INFO - `try_number` of task_instance: 1 [2023-08-15, 19:27:45 UTC] {{kubernetes_pod.py:382}} INFO - `try_number` of pod: 1 [2023-08-15, 19:27:45 UTC] {{pod_manager.py:180}} WARNING - Pod not yet started: mwaa-pod-test-4e7bc9fc53b84c048a45ba0dabef35b8 [2023-08-15, 19:27:46 UTC] {{pod_manager.py:180}} WARNING - Pod not yet started: mwaa-pod-test-4e7bc9fc53b84c048a45ba0dabef35b8 [2023-08-15, 19:27:47 UTC] {{pod_manager.py:180}} WARNING - Pod not yet started: mwaa-pod-test-4e7bc9fc53b84c048a45ba0dabef35b8 [2023-08-15, 19:27:48 UTC] {{pod_manager.py:180}} WARNING - Pod not yet started: mwaa-pod-test-4e7bc9fc53b84c048a45ba0dabef35b8 [2023-08-15, 19:27:49 UTC] {{pod_manager.py:180}} WARNING - Pod not yet started: mwaa-pod-test-4e7bc9fc53b84c048a45ba0dabef35b8 [2023-08-15, 19:27:50 UTC] {{pod_manager.py:180}} WARNING - Pod not yet started: mwaa-pod-test-4e7bc9fc53b84c048a45ba0dabef35b8 [2023-08-15, 19:27:51 UTC] {{pod_manager.py:180}} WARNING - Pod not yet started: mwaa-pod-test-4e7bc9fc53b84c048a45ba0dabef35b8 [2023-08-15, 19:27:52 UTC] {{pod_manager.py:180}} WARNING - Pod not yet started: mwaa-pod-test-4e7bc9fc53b84c048a45ba0dabef35b8 [2023-08-15, 19:27:53 UTC] {{pod_manager.py:180}} WARNING - Pod not yet started: mwaa-pod-test-4e7bc9fc53b84c048a45ba0dabef35b8 [2023-08-15, 19:27:54 UTC] {{pod_manager.py:228}} INFO - 665240076514 [2023-08-15, 19:27:54 UTC] {{pod_manager.py:228}} INFO - eu-west-2 [2023-08-15, 19:27:56 UTC] {{pod_manager.py:275}} INFO - Pod mwaa-pod-test-4e7bc9fc53b84c048a45ba0dabef35b8 has phase Running [2023-08-15, 
19:27:58 UTC] {{kubernetes_pod.py:478}} INFO - skipping deleting pod: mwaa-pod-test-4e7bc9fc53b84c048a45ba0dabef35b8
[2023-08-15, 19:27:58 UTC] {{taskinstance.py:1401}} INFO - Marking task as SUCCESS. dag_id=kubernetes_pod_example, task_id=pod-task, execution_date=20230815T192740, start_date=20230815T192743, end_date=20230815T192758
[2023-08-15, 19:27:58 UTC] {{local_task_job.py:159}} INFO - Task exited with return code 0
```

--end of update

I got a mail from Apurav Sharma who was looking to find out how MWAA supports using the KubernetesPodOperator to kick off tasks in Amazon EKS containers in any AWS account. This post reveals how you can do that, using a very simple task that displays the AWS account number.

![sample architecture for multi account eks](https://ricsuepublicresources.s3.eu-west-1.amazonaws.com/images/eks-multi.png)

**Pre-requisites**

* You will need admin access to two AWS Accounts, with local AWS CLI tools set up
* eksctl version 0.124.0
* kubectl version at least v1.24.1
* A MWAA environment up and running (I am using MWAA with Apache Airflow 2.2.2)

As I have two different AWS accounts, I am using profiles in my local .aws/credentials file to ensure I access the specific AWS account. Any references to "--profile personal" refer to the second AWS account, and where it is omitted, the first AWS account.

The cost of running through this tutorial is approx $5, but please ensure you delete/clean up all resources after you complete this walk through to stop recurring costs.

**Creating a new Amazon EKS cluster**

I used the same steps that were in my original blog post, [Working with Amazon EKS and Amazon Managed Workflows for Apache Airflow v2.x](https://dev.to/aws/working-with-amazon-eks-and-amazon-managed-workflows-for-apache-airflow-v2-x-k12). I will repeat those steps here to make it easier to follow along. I have used the latest version of Kubernetes that Amazon EKS supports (1.24).
To create the Amazon EKS Cluster on the first AWS account I run the following command ``` eksctl create cluster \ --name mwaa-eks \ --region eu-central-1 \ --version 1.24 \ --nodegroup-name linux-nodes \ --nodes 3 \ --nodes-min 1 \ --nodes-max 4 \ --with-oidc \ --ssh-access \ --ssh-public-key frank-open-distro \ --managed \ --vpc-public-subnets "subnet-0a6787dd2777c1897, subnet-0b10b70867384b67e" \ --vpc-private-subnets "subnet-05b6e2630d8f2d555, subnet-0fdcca6496460b7e6" ``` output similar to ``` 2023-01-06 14:21:45 [ℹ] eksctl version 0.124.0-dev+ac917eb50.2022-12-23T08:09:21Z 2023-01-06 14:21:45 [ℹ] using region eu-central-1 2023-01-06 14:21:47 [✔] using existing VPC (vpc-05733622960d2fa38) and subnets (private:map[eu-central-1a:{subnet-0fdcca6496460b7e6 eu-central-1a 10.192.20.0/24 0 } eu-central-1b:{subnet-05b6e2630d8f2d555 eu-central-1b 10.192.21.0/24 0 }] public:map[eu-central-1a:{subnet-0a6787dd2777c1897 eu-central-1a 10.192.10.0/24 0 } eu-central-1b:{subnet-0b10b70867384b67e eu-central-1b 10.192.11.0/24 0 }]) 2023-01-06 14:21:47 [!] 
custom VPC/subnets will be used; if resulting cluster doesn't function as expected, make sure to review the configuration of VPC/subnets 2023-01-06 14:21:47 [ℹ] nodegroup "linux-nodes" will use "" [AmazonLinux2/1.24] 2023-01-06 14:21:47 [ℹ] using EC2 key pair %!q(*string=<nil>) 2023-01-06 14:21:47 [ℹ] using Kubernetes version 1.24 2023-01-06 14:21:47 [ℹ] creating EKS cluster "mwaa-eks" in "eu-central-1" region with managed nodes 2023-01-06 14:21:47 [ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup 2023-01-06 14:21:47 [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=eu-central-1 --cluster=mwaa-eks' 2023-01-06 14:21:47 [ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "mwaa-eks" in "eu-central-1" 2023-01-06 14:21:47 [ℹ] CloudWatch logging will not be enabled for cluster "mwaa-eks" in "eu-central-1" 2023-01-06 14:21:47 [ℹ] you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. 
all)} --region=eu-central-1 --cluster=mwaa-eks' 2023-01-06 14:21:47 [ℹ] 2 sequential tasks: { create cluster control plane "mwaa-eks", 2 sequential sub-tasks: { 4 sequential sub-tasks: { wait for control plane to become ready, associate IAM OIDC provider, 2 sequential sub-tasks: { create IAM role for serviceaccount "kube-system/aws-node", create serviceaccount "kube-system/aws-node", }, restart daemonset "kube-system/aws-node", }, create managed nodegroup "linux-nodes", } } 2023-01-06 14:21:47 [ℹ] building cluster stack "eksctl-mwaa-eks-cluster" 2023-01-06 14:21:49 [ℹ] deploying stack "eksctl-mwaa-eks-cluster" ``` check cloudformation and come back in 10-15 mins ``` 2023-01-06 14:27:56 [ℹ] waiting for CloudFormation stack "eksctl-mwaa-eks-cluster" 2023-01-06 14:28:57 [ℹ] waiting for CloudFormation stack "eksctl-mwaa-eks-cluster" ``` when it finishes you should see something similar to ``` 2023-01-06 14:40:48 [ℹ] waiting for the control plane to become ready 2023-01-06 14:40:50 [✔] saved kubeconfig as "/Users/ricsue/.kube/config" 2023-01-06 14:40:50 [ℹ] no tasks 2023-01-06 14:40:50 [✔] all EKS cluster resources for "mwaa-eks" have been created 2023-01-06 14:40:51 [ℹ] nodegroup "linux-nodes" has 3 node(s) 2023-01-06 14:40:51 [ℹ] node "ip-10-192-10-251.eu-central-1.compute.internal" is ready 2023-01-06 14:40:51 [ℹ] node "ip-10-192-10-57.eu-central-1.compute.internal" is ready 2023-01-06 14:40:51 [ℹ] node "ip-10-192-11-142.eu-central-1.compute.internal" is ready 2023-01-06 14:40:51 [ℹ] waiting for at least 1 node(s) to become ready in "linux-nodes" 2023-01-06 14:40:51 [ℹ] nodegroup "linux-nodes" has 3 node(s) 2023-01-06 14:40:51 [ℹ] node "ip-10-192-10-251.eu-central-1.compute.internal" is ready 2023-01-06 14:40:51 [ℹ] node "ip-10-192-10-57.eu-central-1.compute.internal" is ready 2023-01-06 14:40:51 [ℹ] node "ip-10-192-11-142.eu-central-1.compute.internal" is ready 2023-01-06 14:40:56 [ℹ] kubectl command should work with "/Users/ricsue/.kube/config", try 'kubectl get 
nodes'
2023-01-06 14:40:56 [✔]  EKS cluster "mwaa-eks" in "eu-central-1" region is ready
```

Check it's configured correctly:

```
eksctl utils associate-iam-oidc-provider \
  --region eu-central-1 \
  --cluster mwaa-eks \
  --approve
```

which will output

```
2023-01-06 15:03:17 [ℹ]  IAM Open ID Connect provider is already associated with cluster "mwaa-eks" in "eu-central-1"
```

**Creating a new Kubernetes namespace**

Create a new Kubernetes namespace where we will run our DAG

```
kubectl create namespace mwaa
```

which will output

```
namespace/mwaa created
```

and we can check by running

```
kubectl get ns
```

which should list our new namespace

```
NAME              STATUS   AGE
default           Active   37m
kube-node-lease   Active   37m
kube-public       Active   37m
kube-system       Active   37m
mwaa              Active   62s
```

**Create a role for the mwaa namespace**

Now you need to create a new Kubernetes manifest file that will create a role for the MWAA namespace. If you run the following command:

```
kubectl get pods -n mwaa --as mwaa-service
```

You will probably get the following error message:

```
Error from server (Forbidden): pods is forbidden: User "mwaa-service" cannot list resource "pods" in API group "" in the namespace "mwaa"
```

So let's fix that. First we are going to create and apply a new role for the MWAA namespace.
``` cat << EOF | kubectl apply -f - -n mwaa kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: name: mwaa-role rules: - apiGroups: - "" - "apps" - "batch" - "extensions" resources: - "jobs" - "pods" - "pods/attach" - "pods/exec" - "pods/log" - "pods/portforward" - "secrets" - "services" verbs: - "create" - "delete" - "describe" - "get" - "list" - "patch" - "update" --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: mwaa-role-binding subjects: - kind: User name: mwaa-service roleRef: kind: Role name: mwaa-role apiGroup: rbac.authorization.k8s.io EOF ``` When you run this, you should get the following ``` role.rbac.authorization.k8s.io/mwaa-role created rolebinding.rbac.authorization.k8s.io/mwaa-role-binding created ``` Now if we try again the command ("kubectl get pods -n mwaa --as mwaa-service") that generated the error above, we should get a new output ``` No resources found in mwaa namespace. ``` **Modifying the MWAA Worker Execution policy** You now need to create a new MWAA Worker Execution role with an updated policy. The steps are 1/ Create a new IAM policy, 2/ Create a new IAM Role and attach the policy you created in Step 1, and 3/ Reconfigure your MWAA environment to use this new IAM Role. 
When creating a new IAM policy, copy the existing policy statements you have in your MWAA Execution policy, but add the following (replacing {AWS ACCOUNT NUMBER} with your AWS account):

```
{
    "Effect": "Allow",
    "Action": [
        "eks:DescribeCluster"
    ],
    "Resource": "arn:aws:eks:eu-central-1:{AWS ACCOUNT NUMBER}:cluster/mwaa-eks"
}
```

This is the complete new policy that I created:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "airflow:PublishMetrics",
            "Resource": "arn:aws:airflow:eu-central-1:{AWS ACCOUNT NUMBER}:environment/EKSMultiAccount"
        },
        {
            "Effect": "Deny",
            "Action": "s3:ListAllMyBuckets",
            "Resource": [
                "arn:aws:s3:::airflow-eks-multi-account",
                "arn:aws:s3:::airflow-eks-multi-account/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject*",
                "s3:GetBucket*",
                "s3:List*"
            ],
            "Resource": [
                "arn:aws:s3:::airflow-eks-multi-account",
                "arn:aws:s3:::airflow-eks-multi-account/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:CreateLogGroup",
                "logs:PutLogEvents",
                "logs:GetLogEvents",
                "logs:GetLogRecord",
                "logs:GetLogGroupFields",
                "logs:GetQueryResults"
            ],
            "Resource": [
                "arn:aws:logs:eu-central-1:{AWS ACCOUNT NUMBER}:log-group:airflow-EKSMultiAccount-*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "logs:DescribeLogGroups"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "cloudwatch:PutMetricData",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "sqs:ChangeMessageVisibility",
                "sqs:DeleteMessage",
                "sqs:GetQueueAttributes",
                "sqs:GetQueueUrl",
                "sqs:ReceiveMessage",
                "sqs:SendMessage"
            ],
            "Resource": "arn:aws:sqs:eu-central-1:*:airflow-celery-*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "kms:Decrypt",
                "kms:DescribeKey",
                "kms:GenerateDataKey*",
                "kms:Encrypt"
            ],
            "NotResource": "arn:aws:kms:*:{AWS ACCOUNT NUMBER}:key/*",
            "Condition": {
                "StringLike": {
                    "kms:ViaService": [
                        "sqs.eu-central-1.amazonaws.com"
                    ]
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "eks:DescribeCluster"
            ],
            "Resource": "arn:aws:eks:eu-central-1:{AWS ACCOUNT NUMBER}:cluster/mwaa-eks"
        }
    ]
}
```

Once you have updated your MWAA Worker Execution role with this new policy, the MWAA environment will take 20-30 minutes to update. However, we need to make one more change that will also require a restart, so complete the next step so we only have to do this once.

**Deploying the Kubernetes operators into Apache Airflow**

Create a new requirements.txt file with the following:

```
kubernetes==11.0.0
apache-airflow[cncf.kubernetes]
```

And then in the S3 bucket that you are using for your MWAA environment, create a folder and upload this file. You will then need to edit your environment to point to this requirements.txt file. Once updated, the MWAA environment will need to update, which may take 20-25 minutes to complete.

> Tip! You can track and debug Python library loading and import issues by reviewing the CloudWatch logs for the MWAA Worker nodes. There will be a "requirements_install_***" log file which you can view, and this will help you troubleshoot issues.

**Creating a new identity mapping**

Use the following command to create a new identity mapping for Amazon EKS (replacing {AWS ACCOUNT NUMBER} with your AWS account number):

```
eksctl create iamidentitymapping \
  --region eu-central-1 \
  --cluster mwaa-eks \
  --arn arn:aws:iam::{AWS ACCOUNT NUMBER}:role/mwaa-eks-multi-account-role \
  --username mwaa-service
```

which should output something like:

```
2023-01-06 16:21:17 [ℹ]  checking arn arn:aws:iam::{AWS ACCOUNT NUMBER}:role/mwaa-eks-multi-account-role against entries in the auth ConfigMap
2023-01-06 16:21:17 [ℹ]  adding identity "arn:aws:iam::{AWS ACCOUNT NUMBER}:role/mwaa-eks-multi-account-role" to auth ConfigMap
```

**Creating your kubeconfig file**

When we use the KubernetesPodOperator we need to provide a kube_config.yaml file, which we will upload into the same folder as our DAG. To create this, we use the following command.
```
aws eks update-kubeconfig \
  --region eu-central-1 \
  --kubeconfig ./kube_config.yaml \
  --name mwaa-eks \
  --alias aws
```

Which will display the following output:

```
Added new context aws to /Users/ricsue/Projects/keys/ssh-keygen-keys/kube_config.yaml
```

You should now have your kube_config.yaml file in the same folder (where {AWS ACCOUNT NUMBER} is your AWS Account number).

```
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED....REDACTED....==
    server: https://REDACTED.gr7.eu-central-1.eks.amazonaws.com
  name: arn:aws:eks:eu-central-1:{AWS ACCOUNT NUMBER}:cluster/mwaa-eks
contexts:
- context:
    cluster: arn:aws:eks:eu-central-1:{AWS ACCOUNT NUMBER}:cluster/mwaa-eks
    user: arn:aws:eks:eu-central-1:{AWS ACCOUNT NUMBER}:cluster/mwaa-eks
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: arn:aws:eks:eu-central-1:{AWS ACCOUNT NUMBER}:cluster/mwaa-eks
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - --region
      - eu-central-1
      - eks
      - get-token
      - --cluster-name
      - mwaa-eks
      command: aws
```

**Creating your Apache Airflow DAG**

Create your DAG, using the following sample code. This DAG uses the aws-cli public container and runs a simple aws cli command to output the AWS account number.

> Note!
> You will notice the path to the kube_config.yaml file is /usr/local/airflow/dags/kube_config.yaml - you do not need to edit/change this (as long as your config file was not renamed from kube_config.yaml)

```
from airflow import DAG
from datetime import datetime
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator

default_args = {
    'owner': 'aws',
    'depends_on_past': False,
    'start_date': datetime(2019, 2, 20),
    'provide_context': True
}

dag = DAG(
    'kubernetes_pod_example',
    default_args=default_args,
    schedule_interval=None)

kube_config_path = '/usr/local/airflow/dags/kube_config.yaml'

podRun = KubernetesPodOperator(
    namespace="mwaa",
    #image="ubuntu:18.04",
    image="public.ecr.aws/aws-cli/aws-cli",
    cmds=["bash"],
    arguments=["-c", "aws sts get-caller-identity --query 'Account' --output text"],
    labels={"foo": "bar"},
    name="mwaa-pod-test",
    task_id="pod-task",
    get_logs=True,
    dag=dag,
    is_delete_operator_pod=False,
    config_file=kube_config_path,
    in_cluster=False,
    cluster_context='aws'
)
```

> **Error and Debugging**
>
> When I initially ran this, I got the following error:
> ```
> kubernetes.client.rest.ApiException: (401)
> Reason: Unauthorized
> HTTP response headers: HTTPHeaderDict({'Audit-Id': '2c8f0848-1506-44ec-92a8-772c8756e1ee', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'Date': 'Fri, 06 Jan 2023 17:05:51 GMT', 'Content-Length': '129'})
> HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401}
> ```
>
> If you get this, then I suggest waiting a few moments. When I initially triggered the DAG I got this error; when I tried again after a short lunch break, it worked.

When you trigger this, it should output the AWS account number where the Kubernetes Pod is running.
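Most of the operator arguments above are boilerplate that will be repeated for every cluster we target later in this post. One way to keep the DAG files small is to build the keyword arguments in a small helper. This is a hypothetical refactoring, not part of the original DAG — the function name `pod_operator_kwargs` and its defaults are my own, and only the pure dict-building logic is shown (no Airflow dependency):

```python
# Hypothetical helper: build the common KubernetesPodOperator settings once,
# so the same DAG body can target clusters in different AWS accounts by
# swapping the namespace and kubeconfig path.

def pod_operator_kwargs(namespace, kube_config_path, task_id="pod-task",
                        image="public.ecr.aws/aws-cli/aws-cli"):
    """Return the shared KubernetesPodOperator keyword arguments for a target cluster."""
    return {
        "namespace": namespace,
        "image": image,
        "cmds": ["bash"],
        "arguments": ["-c", "aws sts get-caller-identity --query 'Account' --output text"],
        "name": "mwaa-pod-test",
        "task_id": task_id,
        "get_logs": True,
        "is_delete_operator_pod": False,
        "config_file": kube_config_path,
        "in_cluster": False,
        "cluster_context": "aws",
    }

kwargs = pod_operator_kwargs("mwaa", "/usr/local/airflow/dags/kube_config.yaml")
print(kwargs["namespace"], kwargs["config_file"])
```

With a helper like this, the DAG body reduces to `KubernetesPodOperator(dag=dag, **pod_operator_kwargs(...))`, and pointing at a second cluster becomes a one-line change.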
This is what I get when I run this:

```
[2023-01-09, 10:41:30 UTC] {{logging_mixin.py:109}} INFO - Running <TaskInstance: kubernetes_pod_example.pod-task manual__2023-01-09T10:41:21.080681+00:00 [running]> on host ip-10-192-20-19.eu-central-1.compute.internal
[2023-01-09, 10:41:30 UTC] {{taskinstance.py:1429}} INFO - Exporting the following env vars: AIRFLOW_CTX_DAG_OWNER=aws AIRFLOW_CTX_DAG_ID=kubernetes_pod_example AIRFLOW_CTX_TASK_ID=pod-task AIRFLOW_CTX_EXECUTION_DATE=2023-01-09T10:41:21.080681+00:00 AIRFLOW_CTX_DAG_RUN_ID=manual__2023-01-09T10:41:21.080681+00:00
[2023-01-09, 10:41:30 UTC] {{kubernetes_pod.py:566}} INFO - Creating pod mwaa-pod-test.0de50f0e9f0f44a788cc15dadc0052e7 with labels: {'dag_id': 'kubernetes_pod_example', 'task_id': 'pod-task', 'execution_date': '2023-01-09T104121.0806810000-b4d079a05', 'try_number': '1'}
[2023-01-09, 10:41:31 UTC] {{pod_manager.py:157}} WARNING - Pod not yet started: mwaa-pod-test.0de50f0e9f0f44a788cc15dadc0052e7
[2023-01-09, 10:41:32 UTC] {{pod_manager.py:157}} WARNING - Pod not yet started: mwaa-pod-test.0de50f0e9f0f44a788cc15dadc0052e7
[2023-01-09, 10:41:33 UTC] {{pod_manager.py:197}} INFO - {AWS ACCOUNT NUMBER}
[2023-01-09, 10:41:33 UTC] {{pod_manager.py:215}} WARNING - Pod mwaa-pod-test.0de50f0e9f0f44a788cc15dadc0052e7 log read interrupted but container base still running
[2023-01-09, 10:41:34 UTC] {{pod_manager.py:197}} INFO - {AWS ACCOUNT NUMBER}
[2023-01-09, 10:41:34 UTC] {{pod_manager.py:234}} INFO - Pod mwaa-pod-test.0de50f0e9f0f44a788cc15dadc0052e7 has phase Running
[2023-01-09, 10:41:36 UTC] {{kubernetes_pod.py:462}} INFO - skipping deleting pod: mwaa-pod-test.0de50f0e9f0f44a788cc15dadc0052e7
[2023-01-09, 10:41:37 UTC] {{taskinstance.py:1280}} INFO - Marking task as SUCCESS. dag_id=kubernetes_pod_example, task_id=pod-task, execution_date=20230109T104121, start_date=20230109T104130, end_date=20230109T104137
[2023-01-09, 10:41:37 UTC] {{local_task_job.py:154}} INFO - Task exited with return code 0
[2023-01-09, 10:41:37 UTC] {{local_task_job.py:264}} INFO - 0 downstream tasks scheduled from follow-on schedule check
```

You should be able to see your account number where I have shown {AWS ACCOUNT NUMBER} above.

We have now completed the first step, which is configuring MWAA to execute within an Amazon EKS cluster in the SAME account as MWAA is running. The next step is to get MWAA to execute a task on an Amazon EKS cluster in a different AWS account.

> Note! Whilst I will be using a different AWS account, I will stick within the same AWS Region

**Setting up my EKS Cluster in a new Account**

As I am not going to have a MWAA environment in this new AWS Account, I need to set up the Amazon EKS environment a little differently. In my second AWS account I have set up a new VPC with public/private subnets in the same AWS Region, and I have also created a new keypair which is used when we create the new EKS Cluster.

You will notice here that I am using "--profile personal", which I have configured in my local .aws/credentials to point to an IAM user in the new account.
I create my new EKS Cluster (called mwaa-eks2) using this command:

```
eksctl create cluster \
  --name mwaa-eks2 \
  --region eu-central-1 \
  --version 1.24 \
  --nodegroup-name linux-nodes \
  --nodes 3 \
  --nodes-min 1 \
  --nodes-max 4 \
  --with-oidc \
  --ssh-access \
  --ssh-public-key mwaa-eks \
  --managed \
  --vpc-public-subnets "subnet-081d77fe5ceb5ae90, subnet-0b9b3a80c1f5d046b" \
  --vpc-private-subnets "subnet-014a24745c5edbbbd, subnet-0d9dc9f06d773243b" \
  --profile personal
```

Configure IAM:

```
eksctl utils associate-iam-oidc-provider \
  --region eu-central-1 \
  --cluster mwaa-eks2 \
  --approve \
  --profile personal
```

Create a Kubernetes namespace called mwaa2:

```
kubectl create namespace mwaa2
```

Create the mwaa-role and service mapping:

```
cat << EOF | kubectl apply -f - -n mwaa2
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: mwaa-role
rules:
  - apiGroups:
      - ""
      - "apps"
      - "batch"
      - "extensions"
    resources:
      - "jobs"
      - "pods"
      - "pods/attach"
      - "pods/exec"
      - "pods/log"
      - "pods/portforward"
      - "secrets"
      - "services"
    verbs:
      - "create"
      - "delete"
      - "describe"
      - "get"
      - "list"
      - "patch"
      - "update"
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: mwaa-role-binding
subjects:
  - kind: User
    name: mwaa-service
roleRef:
  kind: Role
  name: mwaa-role
  apiGroup: rbac.authorization.k8s.io
EOF
```

We can make sure this all looks good by typing this command, and we should get the same output as we did when we ran it above.

```
kubectl get pods -n mwaa2 --as mwaa-service
```

**EKS Role and Permissions**

We now need to create and attach an IAM role that will allow the MWAA execution workers to access this new EKS Cluster. We are going to keep this policy simple, but you should scope down the IAM Actions if you are doing this in production. Create a new IAM Policy and Role.
Create the policy first as follows:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "eks:*"
            ],
            "Resource": "arn:aws:eks:eu-central-1:{2ND AWS ACCOUNT NUMBER}:cluster/mwaa-eks2"
        }
    ]
}
```

Now create a new role, and attach this policy to it. You will need to change the TRUST ASSOCIATION of the Role so that the MWAA execution worker can assume this role:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::{AWS ACCOUNT NUMBER}:role/mwaa-eks-multi-account-role"
            },
            "Action": "sts:AssumeRole",
            "Condition": {}
        }
    ]
}
```

The final step is to attach this to the new EKS Cluster we have running in the second account.

```
eksctl create iamidentitymapping \
  --region eu-central-1 \
  --cluster mwaa-eks2 \
  --arn arn:aws:iam::{2ND AWS ACCOUNT NUMBER}:role/mwaa-eks-access \
  --username mwaa-service \
  --profile personal
```

which should generate the following:

```
2023-01-09 14:35:15 [ℹ]  checking arn arn:aws:iam::{2ND AWS ACCOUNT NUMBER}:role/mwaa-eks-access against entries in the auth ConfigMap
2023-01-09 14:35:15 [ℹ]  adding identity "arn:aws:iam::{2ND AWS ACCOUNT NUMBER}:role/mwaa-eks-access" to auth ConfigMap
```

**Updating the MWAA Execution permissions**

Now we add the following to the MWAA Execution policy of the first AWS Account where we have MWAA running.
All we need to do is append this to the permissions:

```
{
    "Effect": "Allow",
    "Action": [
        "sts:AssumeRole"
    ],
    "Resource": "arn:aws:iam::{2ND AWS ACCOUNT NUMBER}:role/mwaa-eks-access"
}
```

the complete policy now looks like this:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "airflow:PublishMetrics",
            "Resource": "arn:aws:airflow:eu-central-1:{AWS ACCOUNT NUMBER}:environment/EKSMultiAccount"
        },
        {
            "Effect": "Deny",
            "Action": "s3:ListAllMyBuckets",
            "Resource": [
                "arn:aws:s3:::airflow-eks-multi-account",
                "arn:aws:s3:::airflow-eks-multi-account/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject*",
                "s3:GetBucket*",
                "s3:List*"
            ],
            "Resource": [
                "arn:aws:s3:::airflow-eks-multi-account",
                "arn:aws:s3:::airflow-eks-multi-account/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:CreateLogGroup",
                "logs:PutLogEvents",
                "logs:GetLogEvents",
                "logs:GetLogRecord",
                "logs:GetLogGroupFields",
                "logs:GetQueryResults"
            ],
            "Resource": [
                "arn:aws:logs:eu-central-1:{AWS ACCOUNT NUMBER}:log-group:airflow-EKSMultiAccount-*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "logs:DescribeLogGroups"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "cloudwatch:PutMetricData",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "sqs:ChangeMessageVisibility",
                "sqs:DeleteMessage",
                "sqs:GetQueueAttributes",
                "sqs:GetQueueUrl",
                "sqs:ReceiveMessage",
                "sqs:SendMessage"
            ],
            "Resource": "arn:aws:sqs:eu-central-1:*:airflow-celery-*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "kms:Decrypt",
                "kms:DescribeKey",
                "kms:GenerateDataKey*",
                "kms:Encrypt"
            ],
            "NotResource": "arn:aws:kms:*:{AWS ACCOUNT NUMBER}:key/*",
            "Condition": {
                "StringLike": {
                    "kms:ViaService": [
                        "sqs.eu-central-1.amazonaws.com"
                    ]
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "eks:DescribeCluster"
            ],
            "Resource": "arn:aws:eks:eu-central-1:{AWS ACCOUNT NUMBER}:cluster/mwaa-eks"
        },
        {
            "Effect": "Allow",
            "Action": [
                "sts:AssumeRole"
            ],
            "Resource": "arn:aws:iam::{2ND AWS ACCOUNT NUMBER}:role/mwaa-eks-access"
        }
    ]
}
```
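The policy has grown long enough that it is easy to miss a statement when pasting it together. As a quick sanity check before uploading it — this is purely illustrative, the function name and the account numbers below are made up — you can load the policy JSON and assert that the cross-account `sts:AssumeRole` statement is actually present:

```python
import json

# Hypothetical sanity check: confirm a policy document contains an Allow
# statement granting sts:AssumeRole on the cross-account role.

def allows_assume_role(policy: dict, role_arn: str) -> bool:
    """Return True if any Allow statement grants sts:AssumeRole on role_arn."""
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        resources = stmt.get("Resource", [])
        if isinstance(resources, str):
            resources = [resources]
        if "sts:AssumeRole" in actions and role_arn in resources:
            return True
    return False

# A trimmed-down policy with a fake second-account number for demonstration.
policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow",
     "Action": ["sts:AssumeRole"],
     "Resource": "arn:aws:iam::222222222222:role/mwaa-eks-access"}
  ]
}
""")

print(allows_assume_role(policy, "arn:aws:iam::222222222222:role/mwaa-eks-access"))  # True
```

The same pattern can be extended to check for the `eks:DescribeCluster` statement from earlier in the post.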
**Creating a new kube_config.yaml**

Once we have done this, we can create a new kube_config.yaml file to include details of the new EKS Cluster in the second AWS account.

```
aws eks update-kubeconfig \
  --region eu-central-1 \
  --kubeconfig ./kube_config_2.yaml \
  --name mwaa-eks2 \
  --alias aws \
  --profile personal
```

We need to modify this file to add the IAM Role details to the user section:

```
      - --role
      - arn:aws:iam::{2ND AWS ACCOUNT NUMBER}:role/mwaa-eks-access
```

so the full config file now looks like:

```
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED....REDACTED....==
    server: https://ENDPOINT.gr7.eu-central-1.eks.amazonaws.com
  name: arn:aws:eks:eu-central-1:{2nd AWS ACCOUNT}:cluster/mwaa-eks2
contexts:
- context:
    cluster: arn:aws:eks:eu-central-1:{2nd AWS ACCOUNT}:cluster/mwaa-eks2
    user: arn:aws:eks:eu-central-1:{2nd AWS ACCOUNT}:cluster/mwaa-eks2
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: arn:aws:eks:eu-central-1:{2nd AWS ACCOUNT}:cluster/mwaa-eks2
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - --region
      - eu-central-1
      - eks
      - get-token
      - --cluster-name
      - mwaa-eks2
      - --role
      - arn:aws:iam::{2nd AWS ACCOUNT}:role/mwaa-eks-access
      command: aws
```

**Updating the DAG**

I create a new DAG file based on the original, changing a few details to point to both the new kube_config file as well as the different Kubernetes namespace.
```
from airflow import DAG
from datetime import datetime
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator

default_args = {
    'owner': 'aws',
    'depends_on_past': False,
    'start_date': datetime(2019, 2, 20),
    'provide_context': True
}

dag = DAG(
    'kubernetes_pod_example2',
    default_args=default_args,
    schedule_interval=None)

# use a kube_config stored in the s3 dags folder for now
kube_config_path = '/usr/local/airflow/dags/kube_config_2.yaml'

podRun = KubernetesPodOperator(
    namespace="mwaa2",
    #image="ubuntu:18.04",
    image="public.ecr.aws/aws-cli/aws-cli",
    cmds=["bash"],
    arguments=["-c", "aws sts get-caller-identity --query 'Account' --output text"],
    labels={"foo": "bar"},
    name="mwaa-pod-test",
    task_id="pod-task",
    get_logs=True,
    dag=dag,
    is_delete_operator_pod=False,
    config_file=kube_config_path,
    in_cluster=False,
    cluster_context='aws'
)
```

And that should be it. When we upload the new DAG and Kube Config files and then trigger the new DAG, we see the following output:

```
[2023-01-09, 14:39:10 UTC] {{kubernetes_pod.py:566}} INFO - Creating pod mwaa-pod-test.2b6eb7ab07a44b5f99c7fcef440b783b with labels: {'dag_id': 'kubernetes_pod_example2', 'task_id': 'pod-task', 'execution_date': '2023-01-09T143906.4245000000-ca84eb319', 'try_number': '1'}
[2023-01-09, 14:39:11 UTC] {{pod_manager.py:157}} WARNING - Pod not yet started: mwaa-pod-test.2b6eb7ab07a44b5f99c7fcef440b783b
... (the same WARNING repeats each second while the pod starts) ...
[2023-01-09, 14:39:19 UTC] {{pod_manager.py:197}} INFO - {2nd AWS ACCOUNT}
[2023-01-09, 14:39:22 UTC] {{pod_manager.py:234}} INFO - Pod mwaa-pod-test.2b6eb7ab07a44b5f99c7fcef440b783b has phase Running
[2023-01-09, 14:39:24 UTC] {{kubernetes_pod.py:462}} INFO - skipping deleting pod: mwaa-pod-test.2b6eb7ab07a44b5f99c7fcef440b783b
[2023-01-09, 14:39:24 UTC] {{taskinstance.py:1280}} INFO - Marking task as SUCCESS. dag_id=kubernetes_pod_example2, task_id=pod-task, execution_date=20230109T143906, start_date=20230109T143909, end_date=20230109T143924
[2023-01-09, 14:39:25 UTC] {{local_task_job.py:154}} INFO - Task exited with return code 0
[2023-01-09, 14:39:25 UTC] {{local_task_job.py:264}} INFO - 0 downstream tasks scheduled from follow-on schedule check
```

We can see the output has changed, and we now see the {2nd AWS ACCOUNT} number listed. Congratulations, you have now run your task on an EKS Cluster in a different AWS account.

**Cleaning up**

Once you have gone through this, be sure to clean up and delete the resources that you have created. The quickest way is to go to CloudFormation and delete the stacks that have been created. It will take around 30-40 minutes for the cleanup to complete.

**Conclusion**

In this short walk through, we built upon a previous blog post [Working with Amazon EKS and Amazon Managed Workflows for Apache Airflow v2.x](https://dev.to/aws/working-with-amazon-eks-and-amazon-managed-workflows-for-apache-airflow-v2-x-k12) and extended this to show how you can run those tasks on other AWS Accounts.
*Author: 094459*
---

# Feature highlight AMBOSS SPACE

*Published 2023-01-14 · https://www.secondl1ght.site/blog/amboss-space · Tags: devops, tooling, productivity, tutorial*
Recently I have taken a bit of a deep dive into the features offered by Amboss Space. First off, I want to have full disclosure that I will be joining the team at Amboss next month as a Frontend Engineer, which I am super excited about! I started using their product and services more lately in order to research the company. What I discovered is that they offer **WAY** more than I originally had seen. So I thought I would put together a list showcasing some of the main features so that others can also discover their awesome feature-set.

The **TLDR** is: if you are a lightning node operator, using Amboss is guaranteed to give you an edge. They have a wide variety of tools to help understand the complexity of the lightning network and make running a successful node easier.

So without further ado, here is my feature highlight of some very cool [Amboss](https://amboss.space) features.

## Lightning Network Explorer

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qmu2c4wgnpvhkkd1x3ji.png)

Just like the bitcoin network, the lightning network feels like magic. But to really get a sense of how cool it is, seeing it visually is important. The lightning network explorer on Amboss **beautifully** accomplishes this challenge. You have to visit the website yourself and explore to really appreciate it. Here is a popular node page to get you started: [Kraken 🐙⚡](https://amboss.space/node/02f1a8c87607f415c8f22c00593002775941dea48869ce23096af27b0cfdcc0b69).

The explorer provides deep insights into the network so that you can make informed decisions about interacting with it. You can find nodes by entering either a pubkey or alias into the search, and you can view top nodes and aggregated network stats on the main page. When viewing a node you can hop around the network by navigating to its channel connections.

## Channel Liquidity Marketplace

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dtelvaj2jehs8mgsudoq.png)

One of the more difficult parts of running a lightning node is creating and maintaining inbound liquidity so that you can receive sats. Amboss solves this problem by offering a **P2P** lightning marketplace called [Magma](https://amboss.space/magma), where users are matched to buy and sell channels.

Buyers of channels can easily **connect** & set up sizeable amounts of inbound liquidity within minutes. Sellers can put their sats to work and **earn** a truly low-risk yield on their investment. They do this by selling liquidity in the form of opening channels to other nodes at an agreed-upon rate. Sellers also get the added benefit of additional routing fees for payments being sent through their newly added channel. As of writing this article the average APR is **3.07%**. The process on either side of the trade is very simple, which is a huge win for node operators.

## Enterprise Data Analytics

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ab0q1dp9ac20kag331ic.png)

The [stats](https://amboss.space/stats) section of Amboss provides even deeper historical analysis into individual node information and the network as a whole, using multiple data sources. When operating a node at a professional level for profit, this data can be **vital** to the success of your business in a highly competitive landscape. With the stats dashboard you can create custom charts by entering node pubkeys. This advanced functionality gives you the ability to compare incoming fee rates to identify lightning-native arbitrage opportunities and other actionable data points.

## Monitoring and Recovery Services

### Monitoring

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mvbjdvfunpusd4npi5yx.png)

There are many important events that can happen to your node about which you will want to be notified. Things like your node going offline, channels being opened or closed, Magma marketplace events, and more. Amboss has 3 methods for **instantly** receiving these notifications so that you can take action if necessary. You can set up notifications to be sent via Telegram, Email, or Webhook. I personally have set up the Telegram notifications and find it very useful to receive these updates either at my laptop or on mobile.

### Recovery

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/end1e6roo2yt463bylo9.png)

You need 2 pieces of information to fully recover a lightning node: your **wallet seed phrase** and a **static channel backup**. Amboss is **non-custodial**, so protecting your seed phrase is up to you. But the second part, which can be more difficult because it needs to be consistently updated, is where Amboss can make a **big** difference. They offer automated channel backups via [Thunderhub](https://thunderhub.io/) whenever your node state changes. The backup file is **fully encrypted** by your node's private key so **only you** have access to it. This is an extremely useful service that helps take the burden of this cumbersome task off the user.

## API and Embed-able Widgets

### API

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5pr49ecjt18tg8f08ak4.png)

If you thought Amboss was going to keep all of this data to themselves, you thought wrong. They have made available a robust API that is free for **non-commercial** use. For commercial use you can reach out to the team for more details at info@amboss.tech. Documentation can be found [here](https://docs.amboss.space/api/introduction). Anybody can have access to rich lightning network data and start building new apps or integrate it into their existing applications.

### Widgets

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7w5hz5psfmdd60hjd1n6.png)

If you want to embed lightning network statistics into your own website without fully integrating the API, they have a one-liner code option available in the form of an **iframe** widget. There are some style and unit options you can pass, and you can either view a specific node's details or total network stats. Documentation can be found [here](https://docs.amboss.space/space/widgets). An example is shown below:

```
<iframe src="https://amboss.space/embed/networkStats?theme=light&unit=EUR&noBackground=true" width="100%" height="40px" style="overflow: hidden" scrolling="no" />
```

## Personalized Node Profiles

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d6rv948rny88qoli20ck.png)

Every node visible on the network gets a page on Amboss to view its connection details and other metrics. But users can claim a node by proving they control the keys. Once a node is claimed you can add additional details to personalize your profile page. Not only does this make it easy for other users to contact you if needed, but it unlocks a fun social aspect to running a node. You can verify your node with other social platforms, add a bio, and even earn badges. Though the main use-case of lightning is as a payment processing network, it has always had an excitement and enthusiastic spirit around it. Features like this make running a node fun, and it is just another reason why trad-fi can't compete. My personal favorite node personality is [BCash_Is_Trash](https://amboss.space/node/0298f6074a454a1f5345cb2a7c6f9fce206cd0bf675d177cdbf0ca7508dd28852f).

## Node Communities

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rxofv03gffr38s3e7rfc.png)

Adding on to the previous section's social features, you can also create and join lightning network node [communities](https://amboss.space/community). There are communities all around the world joining the decentralized financial revolution. Currently there are **93** listed on Amboss. The feature-set for these communities is currently being built out, but I have heard during BitDevs talks that there are some great new things in the works.

## Keysend Billboards

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ttxpoqvmjjutzdhk2dwx.png)

On the homepage and on each node's page there is a keysend billboard. Anybody can post a message here, which adds another social dynamic, but it can also display important information to other users. Further to this idea, users can subscribe to announcements from other nodes on the network, who can broadcast a message to their subscribers. Messaging on the lightning network is possible, and Amboss has utilized this feature to create some interesting use-cases.

## Prime

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/in9qiz15g7yx4cckavdh.png)

The majority of Amboss services are available completely free, but they do have a **Prime** membership which currently costs **$8/month** and gives you access to even more features. If you find the free features are all you need, you can still subscribe to support the project's development. I subscribed right away solely because I could tell how much work went into making a project of this caliber, and I think it benefits the bitcoin ecosystem greatly.

---

That is the end of my feature highlight for Amboss Space, and I didn't even cover everything! You will have to check it out for yourself to see it all. With all of these features packed in, it's hard to choose a favorite. But I would have to say the best part is that the app is dark mode by default! XD

Here are some links for Amboss:

- [Twitter](https://twitter.com/ambosstech)
- [Telegram](https://t.me/GetAmboss)
- [FAQ](https://docs.amboss.space/space/faqs)

I know the team behind Amboss cares about it being a community based project. So you should reach out to them if you have any suggestions, ideas, or feedback on what you would like to see in the application!

Thanks for reading and happy stacking, here is my node information if you would like to connect with me: [secondl1ght](https://amboss.space/node/021ef14c694456a3aae3471a2e8830da21a8130ccbead6794e3530430e2e074d63). 🌩️
*Author: secondl1ght*
---

# How To fetch data from an API in a React application

*Published 2023-01-09 · https://dev.to/ikamran01/how-to-fetch-data-from-an-api-in-a-react-application-212f · Tags: javascript, beginners, react, api*
To fetch data from an API in a React application, you can use the fetch() function, which is a built-in JavaScript function for making network requests. Here's an example of how you might use fetch() to retrieve data from an API and store it in a React component:

```javascript
import React, { useState, useEffect } from 'react';

function MyComponent() {
  const [data, setData] = useState(null);

  useEffect(() => {
    async function fetchData() {
      const response = await fetch('https://my-api.com/endpoint');
      const data = await response.json();
      setData(data);
    }
    fetchData();
  }, []);

  return (
    <div>
      {data && (
        <div>
          {data.map(item => (
            <div key={item.id}>{item.name}</div>
          ))}
        </div>
      )}
    </div>
  );
}
```

In this example, we're using the useEffect() hook to perform the fetch operation when the component mounts. The useEffect() hook is called with a function that contains the fetch() call, and an empty array as the second argument, which tells React to only call the function when the component mounts. Once the data is fetched, it is stored in the component's state using the useState() hook, and the component renders the data by mapping over it and rendering each item as a div.
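One thing the example above skips is error handling: fetch() only rejects on network failure, not on HTTP error statuses such as 404 or 500. Below is a sketch of a small helper that adds this check. The function name `fetchJson` and the injectable-fetch pattern are my own additions, shown here with a fake fetch so the snippet runs without a real network:

```javascript
// Hypothetical helper: fetch JSON and treat non-2xx responses as errors.
// The fetch implementation is injectable so the logic can be exercised
// without a real network call.
async function fetchJson(url, fetchImpl = fetch) {
  const response = await fetchImpl(url);
  if (!response.ok) {
    throw new Error(`Request failed with HTTP ${response.status}`);
  }
  return response.json();
}

// A fake fetch standing in for the real API during this demo.
const fakeFetch = async () => ({
  ok: true,
  status: 200,
  json: async () => [{ id: 1, name: 'Ada' }],
});

fetchJson('https://my-api.com/endpoint', fakeFetch)
  .then(data => console.log(data[0].name)); // prints "Ada"
```

Inside the component, `fetchData` could call `fetchJson` and store a caught error in a second piece of state, so the UI can render a fallback message instead of staying blank.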
*Author: ikamran01*
---

# What are the top programming languages to learn in 2023

*Published 2023-01-09 · https://dev.to/cute6269/-what-are-the-top-programming-languages-to-learn-in-2023-23mk · Tags: java, python, javascript, csharp*

A post by Esther Ugute
*Author: cute6269*
---

# Managing Hybrid Teams 6

*Published 2023-01-12 · https://dev.to/fpaghar/managing-hybrid-teams-6-m47*
## 8. Creating a positive team culture ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dgltippoc47tckfhbj53.png) #### What then is the appropriate culture, and how do you develop it? How we fit into our larger social environment is a part of our personal identity. Because that group is “mine,” we identify with them and are impacted by their behavior. As a manager, you’ll play a role in establishing the group's culture, which includes its inclusiveness, friendliness, support, creativity, sense of security, etc. By making sure that individuals and smaller groups are considered as "out-groups," you'll make sure that there are no “in” groups (the senior, longest-serving, and most vocal). Take humor as an instance. Research shows that comedy is one of the most effective tools an organization has for fostering genuine connection, well-being, and intellectual safety among our colleagues in this new world of remote work, where we hardly ever see each other in person. We couldn't agree more, actually. Your firm will have a distinctive culture that is shaped by a variety of factors, including your clients, your business's “why,” and the variety of cultural backgrounds that each employee brings to the workplace. All of these distinguish your workplace from others, and all successful team cultures serve as places where your team members may operate effectively. Health and intellectual security are seen as being of the utmost significance. Your team culture must be transparent and safe in order to accomplish this, especially in a hybrid situation. **Employees at "remote-friendly" companies, or those with a high percentage of remote job postings on LinkedIn, are 32% more likely to report struggling with work-life balance, according to LinkedIn.** Additionally, think about the corporate cultures that are developing as virtual and hybrid working becomes more prevalent. 
There is a chance that two cultures—in-groups and out-groups—will develop, with the in-groups dominated by on-site employees who benefit from co-location and in-person collaboration while the virtual workforce's social cohesion suffers. When this happens, remote workers may quickly feel alone, disenfranchised, and unhappy as a result of inadvertent behavior in a company that fails to develop a consistent model of both virtual and in-person work. A shared identity preserves everyone's sense of belonging. #### Try one of these three things to help your team's culture: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2tjd43qeof0e5hdl3y7i.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l1tipf0a4h3jb583d92q.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9pb27vdetr994hqnmkzp.png) ## Conclusion ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2viw4imeutrl5uvjl5a8.png) Additionally, it's not the complete picture. Here, we have room—even an advantage—to consider how to improve our workflow. You have some fantastic possibilities to update outdated, ineffective business practices by adopting a growth mindset. If you use this opportunity to pivot and become more inclusive, expand your creative talent pool, and establish more balanced (but still profitable) ways of working, you'll create a team that is stronger and richer rather than merely one that is exhausted. Instead of telling their apprehensive and shocked coworkers that "we've arrived," great team leaders take their team on a journey. There is no single, all-encompassing management strategy. However, there are solutions to your problems and solutions that will work for your team.
By drawing on your team's many capabilities, you'll involve everyone in the realities of managing a hybrid, distributed team in a way that works for all.
fpaghar
1,322,925
How to Download a YouTube video in MP3 Format with Python
Let's talk about how to use pytube, a lightweight, dependency-free library written in Python, to download a YouTube video.
0
2023-01-09T21:15:12
https://code.pieces.app/blog/how-to-download-a-youtube-video-in-mp3-format-with-python
python, video
<figure><img src="https://d37oebn0w9ir6a.cloudfront.net/account_32099/pytube_341958a171c035a8045af6d9c6324e39.jpg" alt="The YouTube app on an iPad."/></figure> Without a doubt, YouTube is the most popular video-sharing platform in the world. As a software developer, you may encounter a situation where you want to script something to download videos either in audio or video format. To achieve this, you can use [pytube](https://pytube.io/en/latest/). At its core, pytube is a lightweight, dependency-free library written in Python. Not only does it include a command-line utility that allows you to download videos right from the terminal, but it also makes pipelining easy by allowing you to specify callback functions for different download events. **Prerequisites** - Python 3 installed on your local machine - A text editor (e.g., Visual Studio Code, Atom, etc.) - Fundamental knowledge of Python ## Step 1: Create a Virtual Environment When working on any Python project that involves installing and using third-party libraries, the first thing to do is create a virtual environment. A virtual environment is a Python environment where the libraries and scripts installed into it are isolated from those installed in other virtual environments. By default, a virtual environment is also isolated from the libraries installed in the “system” Python (the one installed as part of your operating system), and you install packages into that isolated installation instead. This way, when you switch projects, you can simply create a new virtual environment and not have to worry about breaking the packages installed in the other environments. It is one of the most important tools used by Python developers. With a virtual environment, you will have full control over the libraries used in the project. In this tutorial, you will use the pytube and Flask APIs.
To create a virtual environment, go to your project’s directory and run `venv`: _For Unix/macOS_ ``` python3 -m venv env ``` [Save this code](https://user-c185237a-4958-4e1e-a972-57cc037976e7-agyqaaz4hq-uc.a.run.app/?p=0aea42a721) _For Windows_ ``` python -m venv env ``` [Save this code](https://user-c185237a-4958-4e1e-a972-57cc037976e7-agyqaaz4hq-uc.a.run.app/?p=6adf4b9147) Before you can start installing or using packages in your virtual environment, you’ll need to activate it. Activating a virtual environment will put the virtual environment-specific `python` and `pip` executables into your shell’s `PATH`. _For Unix/macOS_ ``` source env/bin/activate ``` [Save this code](https://user-c185237a-4958-4e1e-a972-57cc037976e7-agyqaaz4hq-uc.a.run.app/?p=53d84bbbf9) _For Windows_ ``` .\env\Scripts\activate ``` [Save this code](https://user-c185237a-4958-4e1e-a972-57cc037976e7-agyqaaz4hq-uc.a.run.app/?p=06c640aa00) ## Step 2: Install the Needed Libraries To complete the environment setup, you’ll need to install `Flask` and `pytube`. You install these by creating a `requirements.txt` file. The `requirements.txt` file is a text file where you list the libraries required for your application. It is the convention typically used by developers that makes it easier to manage applications where numerous libraries exist as dependencies. Although you will not use the Flask API library until later in the tutorial, follow these steps to install it now: - Open the folder in Visual Studio Code. - Create a new file. - Name the file `requirements.txt` and add the following text: ``` flask requests pytube ``` [Save this code](https://user-c185237a-4958-4e1e-a972-57cc037976e7-agyqaaz4hq-uc.a.run.app/?p=e25c45987e) - Save the file by clicking Ctrl-S on Windows, or Cmd-S if you are using MacOS. 
- Return to the command or terminal window and perform the installation by using pip to run the following command: ``` pip install -r requirements.txt ``` [Save this code](https://user-c185237a-4958-4e1e-a972-57cc037976e7-agyqaaz4hq-uc.a.run.app/?p=3b9f4b90e5) This command will download and install the necessary libraries and their dependencies. ## Step 3: Collect the Input URL and Extract Audio The next step is to get the URL of the video from which the audio will be extracted. However, before doing that, you need to import the necessary libraries (i.e., pytube and OS). While the pytube library provides the facilities to download YouTube videos from the web, the OS library provides a portable way of using operating-system-dependent functionality: ``` from pytube import YouTube import os ``` [Save this code](https://user-c185237a-4958-4e1e-a972-57cc037976e7-agyqaaz4hq-uc.a.run.app/?p=2607449932) With the libraries imported, you’ll now get the url of the video to be downloaded from the user. To do this, use the `input()` function to get the url from the user and the `YouTube()` function as imported from the pytube library to save it as a variable for downloading: ``` yt = YouTube(str(input("Enter the URL of the video you want to download: \n>> "))) ``` [Save this code](https://user-c185237a-4958-4e1e-a972-57cc037976e7-agyqaaz4hq-uc.a.run.app/?p=cca64c9db0) Since the audio of the YouTube video is the focus of this tutorial, you’ll extract the audio from the variable you created in the previous block of code. To do this, you need to use the `streams()` and `filter()` methods while setting the `only_audio` Boolean parameter as `True`, signifying that only the audio should be extracted: ``` audio = yt.streams.filter(only_audio = True).first() ``` [Save this code](https://user-c185237a-4958-4e1e-a972-57cc037976e7-agyqaaz4hq-uc.a.run.app/?p=3015438dfc) ## Step 4: Set a Destination for Saved Files The next step is to determine the destination where the file will be saved. 
This can be done by creating a variable called destination that will hold the path to your video as a string. In this instance, there will be an option for the user to save the audio file in the same directory as the project: ``` print("Enter the destination (leave blank for current directory)") destination = str(input(">> ")) or '.' ``` [Save this code](https://user-c185237a-4958-4e1e-a972-57cc037976e7-agyqaaz4hq-uc.a.run.app/?p=237941b8f5) ## Step 5: Download and Save the File The final step is to download and save the audio file: ``` out_file = audio.download(output_path = destination) ``` You can see from the line of code above that the `output_path` parameter is set as the destination variable you created earlier to hold the file path for the audio file. You will then save the audio file as .mp3: ``` base, ext = os.path.splitext(out_file) new_file = base + '.mp3' os.rename(out_file, new_file) ``` [Save this code](https://user-c185237a-4958-4e1e-a972-57cc037976e7-agyqaaz4hq-uc.a.run.app/?p=641447bb49) With the video saved, you’ll then display a result of success for the user to know that the audio has been successfully downloaded: ``` print(yt.title + " has been successfully downloaded in .mp3 format.") ``` [Save this code](https://user-c185237a-4958-4e1e-a972-57cc037976e7-agyqaaz4hq-uc.a.run.app/?p=80084f8fc9) ## Step 6: Test in the Terminal Next, you can test the program you’ve built in the terminal. To do this, go to the project directory in your terminal and run the command shown below: ``` python <file_name>.py ``` [Save this code](https://user-c185237a-4958-4e1e-a972-57cc037976e7-agyqaaz4hq-uc.a.run.app/?p=82ad46b468) ## Step 7: Use Flask to Build a Web Application to Prioritize Functionality Flask is a micro web framework written in Python that is used to develop web applications.
In this tutorial, you’ll use Flask to create a web application that will take the format shown below. Flask uses templates to expand the functionality of a web application while maintaining a simple and organized file structure. Templates are enabled using the Jinja2 template engine, which was installed when you downloaded Flask. Templates also allow data to be shared and processed before being turned into content and sent back to the client. Now, create a folder called `templates` in your project folder. In that folder, you’ll create two files – `index.html` and `results.html`. This will form the baseline for your web application’s frontend. At this point, your project folder should look like this: Add the following block of code to the `index.html` file: ``` <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/bootstrap@4.5.3/dist/css/bootstrap.min.css" integrity="sha384-TX8t27EcRE3e/ihU7zmQxVncDAy5uIKz4rEkgIXeMed4M0jlfIDPvg6uqKI2xXr2" crossorigin="anonymous"> <title>YouTube downloader</title> </head> <body> <div class="container"> <h1>YouTube downloader as MP3</h1> <div>Enter the URL of the video you want to download: </div> <div> <form method="POST"> <div class="form-group"> <textarea name="text" cols="20" rows="10" class="form-control"></textarea> </div> <button type="submit" class="btn btn-success">Download!</button> </div> </form> </div> </div> </body> </html> ``` [Save this code](https://c185237a-4958-4e1e-a972-57cc037976e7.pieces.cloud/?p=987c449505) Add this to the `results.html` file: ``` <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/bootstrap@4.5.3/dist/css/bootstrap.min.css" integrity="sha384-TX8t27EcRE3e/ihU7zmQxVncDAy5uIKz4rEkgIXeMed4M0jlfIDPvg6uqKI2xXr2" 
crossorigin="anonymous"> <title>Result</title> </head> <body> <div class="container"> <h2>Results</h2> <div> <strong>MP3 file has been successfully downloaded in .mp3 format.</strong> </div> <div> <a href="{{ url_for('index') }}">Try another one!</a> </div> </div> </body> </html> ``` [Save this code](https://c185237a-4958-4e1e-a972-57cc037976e7.pieces.cloud/?p=377a498957) Now, edit the `app.py` file you created earlier to reflect the flask `POST` and `GET` methods. The file should look like this: ``` from flask import Flask, redirect, url_for, request, render_template, session from pytube import YouTube import os import requests app = Flask(__name__) @app.route('/', methods=['GET']) def index(): return render_template('index.html') @app.route('/', methods=['POST']) def index_post(): yt = request.form['text'] yt = YouTube(yt) video = yt.streams.filter(only_audio = True).first() destination = '.' out_file = video.download(output_path = destination) base, ext = os.path.splitext(out_file) new_file = base + '.mp3' os.rename(out_file, new_file) return render_template('results.html') ``` [Save this code](https://c185237a-4958-4e1e-a972-57cc037976e7.pieces.cloud/?p=7ad74295d5) At this point, with your web application created, you can test it. To do this, open your terminal and run the following command to set the Flask runtime to development. This implies that each time there is a change, the server will automatically reload: ``` # Windows set FLASK_ENV=development # Linux/macOS export FLASK_ENV=development ``` [Save this code](https://c185237a-4958-4e1e-a972-57cc037976e7.pieces.cloud/?p=37dc4c967d) Then, run the application! ``` flask run ``` [Save this code](https://c185237a-4958-4e1e-a972-57cc037976e7.pieces.cloud/?p=7ec848b148) After running the command above, open the application in a browser by navigating to http://localhost:5000. You should see the form displayed. Congratulations! 
## Conclusion Finally, it’s important to note that this tutorial only touches on one of the features of pytube. Pytube as a Python module complements applications in a plethora of situations. For instance, apart from supporting the download of a YouTube video in audio or video format, pytube also enables the download of a complete YouTube playlist. Furthermore, the library integrates track caption support, and has the ability to capture thumbnail URLs.
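As a side note, the extension-swap logic from Step 5 uses only the standard library, so you can sanity-check it in isolation without downloading anything. In the sketch below, an empty temp file stands in for pytube's downloaded audio stream:

```python
import os
import tempfile

# Exercise the Step 5 rename logic on a throwaway file: split off the
# original extension with os.path.splitext and rename the file to .mp3.
with tempfile.TemporaryDirectory() as tmp:
    out_file = os.path.join(tmp, "example.mp4")
    open(out_file, "w").close()  # stand-in for the downloaded audio stream

    base, ext = os.path.splitext(out_file)
    new_file = base + ".mp3"
    os.rename(out_file, new_file)

    assert os.path.basename(new_file) == "example.mp3"
    assert not os.path.exists(out_file)  # the old .mp4 name is gone
```

Note that this only renames the file; the container is still whatever pytube produced, but most players handle that fine.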
get_pieces
1,322,944
Azure Data Factory - Trigger on SFTP upload
In this tutorial, we'll focus on configuring Storage Event Triggers with SFTP-enabled storage...
0
2023-01-09T21:44:58
https://dev.to/dinisrodrigues/azure-data-factory-trigger-on-sftp-upload-3cgb
In this tutorial, we'll focus on configuring Storage Event Triggers with SFTP-enabled storage accounts. We'll walk through the basic steps of setting up a trigger, as well as the specific configuration changes you'll need to make to trigger an ADF pipeline in response to SFTP-related storage events. --- ## The Story I was implementing a system where I had to trigger a pipeline in Azure Data Factory (ADF) whenever a file was uploaded on a specific location of a Storage Account, from an external stakeholder. Well, this was pretty straightforward (I thought), as ADF offers Trigger solutions out of the box. The problem was that, I was seeing the file being uploaded, but my Trigger was not being activated 🧐 I started the usual debug process. Opened the Microsoft Azure Storage Explorer Application, manually uploaded the file to the location, and the pipeline was triggered. What the hell? Everything is working... Started going down the road of Google and Stackoverflow queries, but found nothing that suited my situation. Some colleague discussions and a couple of hours later, we realised that we had given our external stakeholder SFTP credentials from which he could work in the directory. This had to be it. Right after this conclusion, bam: ![ADF doesn't support SFTP triggers](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i5j4znabnq0fqkw9m562.png) ADF doesn't support trigger activation over SFTP... ## The Solution Scheduling the trigger was not an option. So I had to find other ways of doing this. One could go via Logic Apps route, believe me I tried. Wasted more time on this than I expected. **It did not work!** Then I found a pretty cool drafted solution by [Amrinder Singh](https://techcommunity.microsoft.com/t5/azure-paas-blog/working-with-adf-storage-event-trigger-over-sftp/ba-p/3659652). But I was not able to understand the implementation as is. Thought this might be some legacy stuff. 
I read some more documentation, and the more I read, the better I understood the previous implementation, but it needed more step-by-step guidance. ### Step 1: Create the ADF Trigger Let's create a Trigger in ADF, by providing the details of our SFTP-enabled storage account, the container name, and the pattern (blob start/blob end) that we want to use for triggering conditions. ![ADF Simple Trigger](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s0tgxu800ag0dxjnw0rt.png) When you perform this step, ADF automatically creates something under the hood called **Event Grid configurations**. This is the key information which we will use in the next steps. ## Step 2: Edit Event Grid Now the cool stuff begins. You need to search for the **Event Grid** resource in the Azure Portal and proceed to System topics. ![Event Grid System Topics](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qw2ugb28kr8sukbm3ygm.png) Here you will find every **System Topic** automatically created by your Data Factories. Select the one corresponding to your desired ADF and resource group. ![Event Subscriptions](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o08j14a6lyg5i3y7bjiu.png) Under **Event Subscriptions** you will find every trigger you own, with more configuration possibilities than ADF's UI ever gave you the chance to add. We will just focus on adding SFTP capabilities, but you can find more possible configurations here: [Azure Blob Storage as an Event Grid source](https://learn.microsoft.com/en-us/azure/event-grid/event-schema-blob-storage?tabs=event-grid-event-schema#sftp-events) Go to your desired trigger, and go to **Filter -> Advanced Filters**. ![Default Event filter](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eyr2azs38mqewzxqy7z5.png) This is what the default trigger filters look like. Note that SFTP-related filters are missing.
![Modified SFTP Trigger](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8z2k0mus6ior2xn5afld.png) To make things work, just add these values to it: - `SftpCreate` - `SftpCommit` These will trigger the **BlobCreated Event**. More specifically, this event is triggered when clients use the `put` operation, which corresponds to the `SftpCreate` and `SftpCommit` APIs. An empty blob is created when the file is opened and the uploaded contents are committed when the file is closed. **Save the changes** and your workflow is complete! ## Success 🎉 We've reached the end of our journey! ![SFTP](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/01xll1ftllzbs1ojnm5o.png) Testing the file upload to our Storage container over SFTP. ![ADF Trigger Success](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dow7imlw2c42u46ay2th.png) We have a **Great Success** tick mark ✅ Hope you found this useful for your current and future endeavours! Dinis 💪
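For reference, the same edit can also be made declaratively (for example, inside an ARM template for the event subscription) rather than through the portal. The fragment below is a sketch of what the advanced filter might look like afterward; the `data.api` key and the set of default values ADF generated for your trigger are assumptions here, so compare against what your own subscription shows before applying anything:

```json
{
  "filter": {
    "advancedFilters": [
      {
        "key": "data.api",
        "operatorType": "StringIn",
        "values": [
          "CopyBlob",
          "PutBlob",
          "PutBlockList",
          "FlushWithClose",
          "SftpCreate",
          "SftpCommit"
        ]
      }
    ]
  }
}
```

The point is simply that `SftpCreate` and `SftpCommit` join the existing list of `data.api` values, exactly as in the portal screenshots.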
dinisrodrigues
1,322,947
CSS Width and Max-Width: Understanding the Difference
Confused about when to use the CSS width property, and when to opt for max-width? This guide...
0
2023-01-09T22:18:12
https://www.w3tweaks.com/difference-between-css-width-and-max-width.html
css, csswidth, cssmaxwidth
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/69jaoux7qdt1ja2udk7q.jpg) --- Confused about when to use the [CSS](https://www.w3tweaks.com/category/css-tutorials) width property, and when to opt for max-width? This guide will provide you with a clear explanation of the different roles each plays in your web design. Learn why it’s important to understand the distinction between CSS width and max-width and explore examples of each in action. --- ### What is CSS width? CSS width is a property used in web design to define the width of an element, generally as a fixed distance (including pixels, ems, and rems). For example, if you set the width of a div element to 100px, then it will appear on the website at exactly 100px wide. The main limitation of width is that if your content exceeds the set width or if visitors are using different-sized devices or browsers, then your design could break or be distorted. Find the below demo and code. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uagf3lxgb864d53s39fs.png) ##### Code ``` .max { width:600px; outline:1px solid black; margin-bottom:1rem; } .wider { width:800px; background:yellow; } ``` --- ### When Should You Use CSS width? You should use the CSS width property whenever you want the element to remain a fixed size regardless of browser or device. This is particularly useful when creating website layouts, as it ensures that all elements are lined up properly and remain sized correctly. However, bear in mind that if the content you’re displaying exceeds this width, then your design may be distorted. --- ### What is CSS max-width? The max-width property allows you to set a maximum width for an element that won’t increase when the browser window size increases. This means that it will remain the same until it reaches the defined max-width. 
This property is particularly useful if your website’s design needs to be responsive and change depending on browser or device size. --- ### When Should You Use max-width? You should use the max-width property when you want to keep an element from getting wider than a certain point. This is especially useful for images and other elements that can break their container if allowed to resize themselves. Since the max-width property tells the browser not to allow any elements to be wider than a given amount, you can use it to maintain consistent design across different window sizes. --- ### How Do CSS Width and Max-Width Work Together? While the CSS width and max-width properties both control the width of elements, they work differently. The CSS width property sets an element's preferred width as a specific size or percentage value, while max-width caps it: when both are set, the browser uses the smaller of the two, so an element with `width: 800px` and `max-width: 600px` renders at 600px wide. This combination sets a hard upper limit while still allowing smaller sizes, keeping the design consistent regardless of the user's screen size. ### Example ##### Demo {% codepen https://codepen.io/w3tweaks/pen/mdjRKPa %} ##### Code ``` .max { width:600px; outline:1px solid black; margin-bottom:1rem; } .wide { max-width:600px; background:cyan; } .wider { width:800px; background:yellow; } ```
w3tweaks
1,322,966
Developers - which tools have you ever paid for or wouldn't mind paying for?
In their farewell post, the Kite team says that one of the biggest challenges they had was that they...
0
2023-01-09T22:56:29
https://dev.to/npobbathi/developers-which-tools-have-you-ever-paid-for-or-wouldnt-mind-paying-for-5ce9
discuss, startup
In their [farewell post](https://www.kite.com/blog/product/kite-is-saying-farewell/), the Kite team says that one of the biggest challenges they had was that they couldn't get their 500K-strong community to pay for the product. I was wondering: which tools do developers consider worth paying for, and why?
npobbathi
1,323,134
5 websites will make you a smarter 🏆 developer👩‍💻
Here is the list of the best 5 websites to help you with your projects. Novu The...
0
2023-01-10T02:21:00
https://dev.to/mahmoudessam/5-websites-will-make-you-a-smarter-developer-2jld
programming, webdev, javascript, python
Here is the list of the best 5 websites to help you with your projects. ### Novu - The open-source notification infrastructure for developers. - Simple components and APIs for managing all communication channels in one place: Email, SMS, Direct, and Push ### Reactive Resume - Reactive Resume is a free and open source resume builder. ### Open Source Alternative to - Open source alternatives to some popular tools. ### Axiom - Use browser bots to automate website actions and repetitive tasks on any website or web app. ### Magic patterns - Beautiful pure CSS background patterns that you can actually use in your projects. Happy working 🙂 🎥 [Websites links:](https://youtu.be/LgkcEWC6uKY) ### Bonus: #### Top GitHub 👩‍💻 Alternatives Hosts Part One🚀 🎥 [Websites links:](https://youtu.be/YmtTmDjT1u4) #### Web developers tools 🛠️ you need to know💡👩‍💻 🎥 [Websites links:](https://youtu.be/2VRXU2H4q0c) ### Connect with Me 😊 #### 🔗 Links [![linkedin](https://img.shields.io/badge/linkedin-0A66C2?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/mahmoud-el-kariouny-822719149/) [![twitter](https://img.shields.io/badge/twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://twitter.com/Mahmoud42275)
mahmoudessam
1,323,344
Gorilla Flow - Ingredients, Side Effects, Warning & Complaints?
Gorilla Flow oil is reduced greatly by just choosing an efficient mattress. While mattresses are...
0
2023-01-10T07:34:32
https://dev.to/gorillaflow2/gorilla-flow-ingredients-side-effects-warning-complaints-1jdb
webdev, javascript, beginners, programming
Gorilla Flow oil is reduced greatly by just choosing an efficient mattress. While mattresses are expensive, suppliers want that try them for around a month or longer, provided it is really protected.TIP! Speak with your doctor hard to think but coffee is said to be of help when attempting to sooth chronic back pain. Recent possess shown hinders the chemical adenosine. https://www.deccanherald.com/brandspot/pr-spot/gorilla-flow-prostate-supplement-official-website-review-and-customer-feedback-1146688.html https://soundcloud.com/gorilla-flow-963476885/gorillaflowinfo https://www.pinterest.com/pin/1011761872514158287/ https://www.instagram.com/gorillaflowprice/ https://www.scoop.it/topic/gorilla-flow-info https://twitter.com/GorillaFlow2 https://tautaruna.nra.lv/forums/tema/49291-gorilla-flow-prostate-health-reviews-pros-cons-uses-benefits/ https://www.sportsblog.com/gorillaflowbuys/gorilla-flow-benefits/
gorillaflow2
1,323,397
Cyber Liability vs. Data Breach
These days, data breaches are a frequent friend of web applications – however, there are so many...
0
2023-01-10T10:00:00
https://breachdirectory.com/blog/cyber-liability-vs-data-breach/
webdev, security, programming, beginners
These days, data breaches are a frequent friend of web applications – however, there are so many terms related to them... As no two data breaches are exactly the same and their details are often hard to find out, it's sometimes hard to distinguish a data breach from a data leak. In this space, two terms that are often confused are _cyber liability_ and a _data breach_ – what's the difference between them? Are they different at all? That's what we're answering in this blog. ## What Is a Data Breach? First things first, a data breach is an incident where sensitive information is exposed to an unauthorized party. In many cases, a data breach is the direct result of an attack on a web application such as SQL injection, Cross-site Scripting, Cross-site Request Forgery, etc., but in some cases, data breaches can occur due to social engineering too. The consequences of a data breach vary from company to company and they're directly dependent on the data classes that are exposed. The consequences of a data breach that exposes only usernames or email addresses may not be as severe as the consequences of a data breach that exposes emails, usernames, IP addresses, passwords, credit card details, and physical addresses – in many cases, though, stolen data is limited to email addresses, usernames, and passwords. That's not to say that data breaches don't do damage, though – far from it: they're making headlines. Part of those headlines is due to the financial damage that they do – part of it is due to cyber liability. Companies that don't have cyber liability insurance often find themselves struggling to pay the price of a data breach. ## What Is Cyber Liability? Cyber liability is insurance against data breaches – in other words, insurance against cyber attacks. The main aim of cyber liability is to protect businesses from bleeding cash in case of a data breach – cyber liability insurance covers some or all of these aspects: - Customer notification about a data breach.
- Recovering data. - Legal fees and related expenses. - In some cases, cyber liability includes the media and related third-party costs. Most cyber insurance programs also sometimes require companies to notify their customers about a data breach. Depending on the program, it may also cover forensic expenses, and in some cases, even cover details about the negotiation and the payment of ransomware demands. ## Minimizing Cyber Liability In order to minimize cyber liability, you have to ensure that the application backing the product your company sells is as secure as possible. That may mean securing all of your code [according to the standards set by OWASP](https://owasp.org/www-project-top-ten/), sanitizing every input field that you can imagine, using a web application firewall, or using data breach search engines and their API capabilities such as the one provided by BreachDirectory. The data breach search engine and the BreachDirectory API both serve a distinct purpose – the data breach search engine allows everyone to assess their likelihood of being exposed in a data breach, and the BreachDirectory API allows companies, universities, individuals, and law enforcement agencies to implement the backbone – the data existing in the BreachDirectory search engine – into their own projects. Here's what the documentation of the API looks like: ![the BreachDirectory API](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qykbttvq3ejk8qxvarh0.png) As of the time of writing, the BreachDirectory API has a couple of distinct plans – a Personal Plan, a Simple Plan, a Bulk Plan, and a Reseller Plan. 
[The Personal Plan](https://buy.stripe.com/00g4ileH2esLdY44gh) is a fit for individuals that are interested in cyber security and want to implement the data behind the data breach search engine into their own projects, [the Simple Plan](https://buy.stripe.com/8wM9CF0Qc98rdY46oo) is a fit for those who want to implement the API into more systems and query it more often, [the Bulk Plan](https://buy.stripe.com/00g5mpgPa1FZf28aEL) is a fit for companies and enterprises that want to secure a lot of accounts at once, and [the reseller plan](https://buy.stripe.com/cN2025gPa4Sb8DK28i) is a fit for those who want to make some money. Before acquiring API keys from BreachDirectory, many users use [the data breach search engine](https://breachdirectory.com/search) to assess their likelihood of being exposed in a data breach and the need to protect their own employees if they manage a team. The BreachDirectory API is used for a variety of purposes, the main ones being related to open-source intelligence (OSINT) capabilities. Curious how it all would work on your infrastructure? [Give it a try today](https://buy.stripe.com/00g5mpgPa1FZf28aEL) and find out!
breachdirectory
1,323,950
Destroy All Goblins is Launched
After a month of work, my largest game to date, is finished! It’s called Destroy All Goblins. I made...
0
2023-01-10T18:22:18
http://www.brettchalupa.com/destroy-all-goblins-is-launched
gamedev
--- title: Destroy All Goblins is Launched published: true date: 2023-01-09 18:09:00 UTC tags: gamedev canonical_url: http://www.brettchalupa.com/destroy-all-goblins-is-launched --- After a month of work, my largest game to date, is finished! It’s called _Destroy All Goblins_. I made it with DragonRuby Game Toolkit. It took me 50 hours, and it came in at 2,149 lines of code. [![Destroy All Goblins screenshot](http://www.brettchalupa.com/uploads/destroy-all-goblins-screenshot-1.png)](https://brettchalupa.itch.io/destroy-all-goblins) [Play _Destroy All Goblins_!](https://brettchalupa.itch.io/destroy-all-goblins) [![Available on itch.io](http://www.brettchalupa.com/uploads/itch-badge.png)](https://brettchalupa.itch.io/destroy-all-goblins) Or watch the trailer: {% embed https://www.youtube.com/watch?v=wbu3LSfB68s %} _Destroy All Goblins_ is a small freeware game for personal computers. It’s inspired by _Super Crate Box_, and it uses art assets by [Pixel Frog](https://pixelfrog-assets.itch.io/) from the public domain. I had such a fun time making it, and I learned a ton. I’m looking forward to reflecting and writing a post-mortem after I have a little space from the project. I’m not quite sure what I’ll work on next, but I’ve got a lot of ideas. For now though, it’s time to take a little break from such focus and relax a little bit!
brettchalupa
1,323,482
Unlocking the Power of Big Data Processing with Resilient Distributed Datasets
A resilient distributed dataset (RDD) is a fundamental data structure in the Apache Spark framework...
0
2023-01-10T09:27:55
https://dev.to/sabareh/unlocking-the-power-of-big-data-processing-with-resilient-distributed-datasets-28kf
devops, database, datascience, data
A resilient distributed dataset (RDD) is a fundamental data structure in the Apache Spark framework for distributed computing. It is a fault-tolerant collection of elements that can be processed in parallel across a cluster of machines. RDDs are designed to be immutable, meaning that once an RDD is created, its elements cannot be modified. Instead, operations on an RDD create a new RDD that is derived from the original.

One of the key features of RDDs is that they can be split into partitions, which can be processed in parallel on different machines in a cluster. When an operation is performed on an RDD, it is automatically parallelized across all of its partitions. This allows Spark to take advantage of data locality, where data is processed on the same machine where it is stored, reducing network traffic and improving performance.

RDDs also have built-in fault tolerance, meaning that if a machine in a cluster fails, its partition can be recreated on another machine with minimal impact on the overall computation. This is achieved through a process called lineage, where Spark tracks the sequence of transformations that were applied to an RDD in order to create a new RDD. If a partition of an RDD is lost, Spark can use the lineage information to recompute the lost partition from the original RDD.

RDDs are also highly customizable, with user-defined operations called "transformations" that can be applied to an RDD in order to create a new one. Common examples include map and filter, which create a new RDD by applying a function to each element or by keeping only the elements matching a predicate; reduce, strictly speaking, is an action rather than a transformation, and it aggregates the elements and triggers computation. Transformations can be combined to perform complex data processing tasks, and Spark's optimizer will take care of creating an efficient execution plan.

Another strength of RDDs is that they are an abstraction that can handle any data type.
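The partitioned map/filter/reduce model described above can be sketched in plain Python. This is a conceptual simulation only, not the Spark API: the `MiniRDD` class and its lineage string are hypothetical stand-ins used to illustrate immutability, per-partition parallelism, and lineage tracking.

```python
from functools import reduce

# Conceptual simulation of RDD-style operations in plain Python.
# NOT the Spark API: MiniRDD and its lineage string are made up,
# meant only to illustrate immutability, partitioning, and lineage.
class MiniRDD:
    def __init__(self, partitions, lineage="source"):
        self.partitions = partitions  # list of lists, one per "machine"
        self.lineage = lineage        # record of how this dataset was derived

    def map(self, fn):
        # Applied independently to each partition, so it parallelizes trivially.
        return MiniRDD([[fn(x) for x in p] for p in self.partitions],
                       lineage=self.lineage + " -> map")

    def filter(self, pred):
        return MiniRDD([[x for x in p if pred(x)] for p in self.partitions],
                       lineage=self.lineage + " -> filter")

    def reduce(self, fn):
        # Aggregate within each partition first, then combine the partials.
        partials = [reduce(fn, p) for p in self.partitions if p]
        return reduce(fn, partials)

rdd = MiniRDD([[1, 2, 3], [4, 5, 6]])  # two partitions
total = (rdd.map(lambda x: x * 10)
            .filter(lambda x: x > 20)
            .reduce(lambda a, b: a + b))
print(total)  # 30 + 40 + 50 + 60 = 180
```

Because every transformation returns a new `MiniRDD` and records how it was derived, a lost partition could in principle be rebuilt by replaying the lineage against the source partition, which is the essence of Spark's recovery mechanism. (Real RDDs are additionally lazy: transformations are only executed when an action such as reduce runs.)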
RDDs support a wide range of data types, including structured, semi-structured, and unstructured data.

In conclusion, RDDs are a powerful and flexible data structure that enables efficient, parallel processing of large datasets in a distributed environment. They are designed to be fault-tolerant, allowing for easy recovery from machine failures, and they provide a convenient abstraction for working with data in Spark. RDDs have proven to be an effective and popular choice for big data processing, and they will likely remain prevalent in the years to come.
sabareh
1,323,546
Garbage Collection C#
Garbage Collector Glossary Heap Portion of memory where dynamically...
0
2023-01-10T11:24:42
https://dev.to/amansinghparihar/garbage-collection-c-ja9
csharp, garbagecollection
# Garbage Collector

## Glossary

### Heap
The portion of memory where dynamically allocated objects reside.

### Managed Heap
Whenever we start a process or run our .NET code, it reserves a contiguous space in memory called the managed heap.

There were days when developers were required to release the memory they had allocated for their objects. It was the developer's responsibility to manage memory, and this brought some issues during development:

- If we create a new object but forget to delete it, it will persist in memory for a long time even if we no longer want to use it.
- If a variable references an object in memory, and later starts referencing another object, the previous object will persist in memory even though no variable references it anymore. This is called a memory leak: an object that is not referenced by any variable and is no longer required, but is still in memory.

To overcome these scenarios, .NET came up with the garbage collector, which takes care of deleting objects from memory. But the garbage collector can only allocate and release the memory of managed code; it does not allocate or release unmanaged memory.

Managed code is code that runs under the .NET environment, and .NET takes care of its memory management. Code that runs outside of the .NET environment is called unmanaged code, and the .NET garbage collector doesn't know how to release its memory. Managed code written in .NET might access unmanaged code, for example: connecting to a database using SQL or Oracle classes, network calls made from managed code, or accessing files. Even though we are writing managed code, that code is using unmanaged code, so as developers it's our responsibility to release the memory once the work of these objects is completed.

### How do we release the memory then?
.NET provides the IDisposable interface for types (for example, classes) that use unmanaged code such as network calls or database connections. Those types need to implement the IDisposable interface, provide an implementation of the Dispose() method, and write their own logic to release the memory there. Our task as developers is to call the Dispose method on these objects once we are done with them.

.NET also provides a Finalize method, in case the developer forgets to call Dispose. The GC releases the memory even if we forget to call Dispose, provided the class also implements Finalize. The crux is that Finalize is indeterministic while Dispose is deterministic: we control when Dispose is called, but Finalize is in the hands of the GC, and we don't control when it runs.
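The deterministic-versus-indeterministic distinction is easiest to see in code. In C# the deterministic path is a `using` block, which calls Dispose when the block ends. As a hedged cross-language sketch (not C#), the hypothetical Python class below shows the same idea, with a context manager's exit method playing the role of Dispose:

```python
# Hypothetical illustration of deterministic cleanup (the Dispose pattern).
# 'Connection' is a made-up stand-in for a type holding an unmanaged
# resource; __exit__ plays the role of Dispose.
events = []

class Connection:
    def __enter__(self):
        events.append("open")      # acquire the unmanaged resource
        return self

    def query(self):
        events.append("query")

    def __exit__(self, exc_type, exc, tb):
        events.append("released")  # deterministic: runs exactly here

with Connection() as conn:         # analogous to C#'s using (...) block
    conn.query()

# By this point the resource is guaranteed to be released; we did not
# have to wait for the garbage collector (the Finalize-style path).
print(events)  # ['open', 'query', 'released']
```

With the Finalize-style path, by contrast, cleanup runs whenever the garbage collector gets around to it, which is exactly the indeterminism the article warns about.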
amansinghparihar
1,323,564
Are SASS @mixins really that lightweight? And what are %placeholders?
Mixins are a powerful feature in Sass that allow you to reuse a set of CSS declarations across...
0
2023-01-10T12:55:06
https://dev.to/kemotiadev/are-sass-mixins-really-that-lightweight-and-what-are-placeholders-119i
saas, mixin, css, webdev
**Mixins** are a powerful feature in Sass that allow you to reuse a set of CSS declarations across multiple selectors. One of the main advantages of using Mixins is that they can help keep your CSS code organized and maintainable, making it easier to update and maintain styles across a large code base. However, one concern that some developers have with Mixins is whether they are lightweight to use, and whether they add unnecessary bloat to your CSS code. In this article, we will take a closer look at Mixins in Sass and explore whether they are a lightweight solution for reusing CSS styles. Let's dive right into it!

---

I'm assuming that you already have a basic knowledge of what mixins are and how they work. If not, I encourage you to have a look at [the official documentation](https://sass-lang.com/documentation/at-rules/mixin).

Mixins, as mentioned previously, allow you to create a block of code and reuse it later in your application. Sometimes I see that people get tempted to use them as the main provider for their styles.

**... which is not a good idea. Let me show you why**

Let's assume we have a huge e-commerce store with plenty of buttons, and we have created a mixin that will share some basic styles among them. Here's our simple mixin:

```scss
@mixin ButtonStyles {
  margin: 0 3px;
  color: white;
  border: none;
  outline: none;
  transition: all .3s ease;
  cursor: pointer;
  display: inline-flex;
  svg {
    width: 15px;
  }
}
```

Alright, and now let's use it with `@include ButtonStyles`!

```scss
.btn_primary {
  @include ButtonStyles;
  background: black;
  width: 45px;
}

.btn_secondary {
  @include ButtonStyles;
  background: blue;
  width: 60px;
}

.btn_contrast {
  @include ButtonStyles;
  background: yellow;
  width: 60px;
}
```

What do you think will happen now if you compile your **SASS** code into **CSS**? Try to answer yourself.

* a) Code from the mixin will not be repeated.
* b) Code from the mixin will be repeated every time you call `@include ButtonStyles`

Well, I bet you already know the answer! Using a mixin in this particular case will basically copy-paste the entire code from that mixin into your code as many times as you `@include` it. Here's the result:

```css
.btn_primary {
  margin: 0 3px;
  color: #fff;
  border: none;
  outline: none;
  transition: all .3s ease;
  cursor: pointer;
  display: inline-flex;
  background: #000;
  width: 45px
}

.btn_primary svg {
  width: 15px
}

.btn_secondary {
  margin: 0 3px;
  color: #fff;
  border: none;
  outline: none;
  transition: all .3s ease;
  cursor: pointer;
  display: inline-flex;
  background: blue;
  width: 60px
}

.btn_secondary svg {
  width: 15px
}

.btn_contrast {
  margin: 0 3px;
  color: #fff;
  border: none;
  outline: none;
  transition: all .3s ease;
  cursor: pointer;
  display: inline-flex;
  background: yellow;
  width: 60px
}

.btn_contrast svg {
  width: 15px
}
```

**Not very efficient if you want to optimize your CSS weight.** Imagine how bloated the code can become if you do this many times in your application.

**Well... how to solve it then?**

The answer is pretty simple. If you want to keep the idea of having a single block that you can reuse in your Sass code, use a **placeholder**.

## %placeholder

Don't be misled by the name: it's not the HTML placeholder you put into your inputs, it's a **SASS** placeholder with a `%` sign attached to its name. Example:

```scss
%buttonStyles {
  margin: 0 3px;
  color: white;
  border: none;
  outline: none;
  transition: all .3s ease;
  cursor: pointer;
  display: inline-flex;
  svg {
    width: 15px;
  }
}
```

A **SASS** placeholder is the most efficient way of creating a styles template, because it doesn't get compiled into your **CSS** code on its own and is never repeated. Also, you can't use `@include` like you do with Mixins; with placeholders you use the `@extend` keyword instead. Curious to see how it works in practice? Here we go!
Let's first `@extend` the desired styles in our buttons:

```scss
.btn_primary {
  @extend %buttonStyles;
  background: black;
  width: 45px;
}

.btn_secondary {
  @extend %buttonStyles;
  background: blue;
  width: 60px;
}

.btn_contrast {
  @extend %buttonStyles;
  background: yellow;
  width: 60px;
}
```

And now let's have a look at the result after compiling the **SASS** code into **CSS**.

```css
.btn_secondary,
.btn_primary,
.btn_contrast {
  margin: 0 3px;
  color: #fff;
  border: none;
  outline: none;
  transition: all .3s ease;
  cursor: pointer;
  display: inline-flex
}

.btn_secondary svg,
.btn_primary svg,
.btn_contrast svg {
  width: 15px
}

.btn_primary {
  background: #000;
  width: 45px
}

.btn_secondary {
  background: blue;
  width: 60px
}

.btn_contrast {
  background: yellow;
  width: 60px
}
```

And voilà! Your **CSS** code is **much lighter**, since nothing was repeated. This is super useful in large applications, where each `kb` matters.

As an alternative to a `%placeholder` you can use a `.class` instead and extend it similarly with `@extend .buttonStyles`, but this will result in the class name itself appearing in the compiled selector list. In our scenario it would change this

```css
.btn_secondary, .btn_primary, .btn_contrast { ...code }
```

into this

```css
.buttonStyles, .btn_secondary, .btn_primary, .btn_contrast { ...code }
```

Note the `.buttonStyles` class added to the selector list. It's not a bad solution, but since you are already extending styles, why not do it perfectly? Moreover, if you really care about every `kb`, you might save a bunch by not having unused class names in selector lists.

**Conclusion**

Mixins in Sass are a great way to reuse CSS styles, but they can add unnecessary bloat to your CSS code if they are not used carefully. An alternative approach to using Mixins is to use placeholders. Placeholders allow you to reuse a set of CSS declarations without generating any actual CSS code.
Instead, they are only used as a reference and only generate code when they are referenced by other selectors. This can help keep your CSS code more efficient and lightweight. By using placeholders instead of Mixins, you can ensure that your CSS code is lean and efficient, making your web application faster and more responsive for your users.
kemotiadev
1,323,784
A Bloody Case Caused by a Thread Pool Rejection Strategy
Conduct an in-depth analysis of the full gc problem of the online business of the inventory...
0
2023-01-10T15:16:35
https://dev.to/ppsrap/a-bloody-case-caused-by-a-thread-pool-rejection-strategy-15j5
java, threadpool, exception, programming
## An in-depth analysis of the full GC problem in the inventory center's online business, combining the fix and the root-cause analysis to show how to solve this class of problems

## **1. Event review**

Starting from 7.27, the main site, distributors, and other business parties began to report that orders occasionally timed out. We began to analyze and troubleshoot the cause, and were shocked to find that full GC occasionally occurred online, as shown in the figures below. Left unchecked, it would inevitably affect the core link and user experience of Yanxuan trading orders, resulting in transaction losses. The inventory center developers responded quickly, actively investigated, and handled the problem in the bud to avoid capital losses.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1ma0qcvbr79dssuwzhhl.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eiqznrwil8sx5eej8nmt.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/88200hhuhdrwu4kseq31.png)

## **2. Emergency hemostasis**

For frequent full GC, based on experience, we boldly guessed that it might be caused by some interfaces generating large objects and being called frequently. In an emergency, first ensure that the core functions of the system are not affected, then troubleshoot the problem. Generally, there are three methods:

> **Expansion.** There are generally two ways to expand capacity: increase the heap memory size, or add application machines. In essence, this delays the number and frequency of full GCs, protecting the core business and buying time to troubleshoot.
>
> **Current limiting.** Current limiting can be regarded as a kind of service degradation: it restricts the input and output traffic of the system in order to protect it. It can generally be done at the proxy layer or the application layer; our current limiting targets the application interface level.
>
> **Reboot.** A relatively violent method; a little carelessness may cause data inconsistency. Not recommended unless necessary.

Since we didn't know the specific cause and the problem was still in the bud, we didn't limit current directly; instead we expanded capacity and restarted along the way. We temporarily expanded the heap memory in the emergency, increased the heap size of some machines from 6g to 22g, and restarted the application to make the configuration take effect. After the emergency expansion of some machines (73 and 74) on 7.27, full GC did not occur within the following 2 days, giving us fault-tolerance time for further investigation.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6p6u9iyyjpygfrdu2hyq.png)

## **3. Problem analysis**

### 3.1 Status quo and challenges

Since there was no OOM, there was no on-site memory snapshot, so it was hard to determine the cause. The main inventory service involves a great deal of logic (the core business logic alone is more than 100,000 lines of code, all running daily), the business logic is complex, the volume is large, and there were a small number of slow requests, all of which increased the difficulty of troubleshooting. Since we lack a relatively complete infrastructure, we have no global call-monitoring platform to observe what the application did before and after the full GC; we could only find the truth by analyzing the call links on the problem machine.
### 3.2 Surface cause

Essentially, we needed to look at what the application was doing when the full GC occurred; in other words, what was the last straw that broke the camel's back? We analyzed the application logs at the time points just before each full GC, combined with slow-SQL analysis: whenever the business frequently performed [internal and external procurement and outbound] operations for a period of time, the system would trigger a full GC, and the time points lined up consistently. So we preliminarily judged that it might be caused by internal and external procurement and outbound business operations.

Analysis of the business code found that an inventory change would load 100,000 rows of data into memory after intervention and interception, about 300M in total. We urgently contacted the DBA on 7.28 to migrate part of the business data to other databases to avoid further impact on the business, leaving the business-process optimization for later!

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yom4qdywlkq3y15jr1sq.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i4hdz1j2a8eitpczjklk.png)

After the migration, there was no full GC that day and no business feedback about interface timeouts. On 7.29 we found that machine 73 (upgraded configuration) had no full GC, while machine 154 continued to have full GCs. Observing each GC, not much memory could be reclaimed, which meant memory was not being released in time: there might be a leak!

### 3.3 Root cause of the problem

At the time, we dumped memory snapshots many times without finding similar problems. Fortunately, the 155 machine was upgraded last (a backup machine, mainly used to process timed tasks, reserved for reference and comparison), which brought us closer to the root of the problem.
To further analyze the reason, we examined the heap memory snapshot of one of the machines (155) and found an interesting phenomenon: a large number of threads blocked and waiting;

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o5vxizvsgugo7izionnl.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hsslkui1ilb77s09afub.png)

Each blocked thread holds about 14M of memory. It is these threads that caused the memory leak. We had finally found the cause of the problem and verified our guess: a memory leak had occurred!

## **3.4 Cause analysis**

### 3.4.1 Business description

From 4.2, we located the problem code. To help understand this part of the business, an overview: it pulls SKU quantity information from the database, forms a SyncTask from every 500 SKUs, and caches the results in Redis for use by other business parties, executing every 5 minutes.

### 3.4.2 Business code

```java
@Override
public String sync(String tableName) {
    // Generate data version number
    DateFormat dateFormat = new SimpleDateFormat("YYYYMMdd_HHmmss_SSS");
    // Start the Leader thread to complete execution and monitoring
    String threadName = "SyncCache-Leader-" + dateFormat.format(new Date());
    Runnable wrapper = ThreadHelperUtil.wrap(new PrimaryRunnable(cacheVersion, tableName, syncCachePool));
    // Create new thread
    Thread core = new Thread(wrapper, threadName);
    core.start();
    return cacheVersion;
}

private static class PrimaryRunnable implements Runnable {
    private String cacheVersion;
    private String tableName;
    private ExecutorService syncCachePool;

    public PrimaryRunnable(String cacheVersion, String tableName, ExecutorService syncCachePool) {
        this.cacheVersion = cacheVersion;
        this.tableName = tableName;
        this.syncCachePool = syncCachePool;
    }

    @Override
    public void run() {
        // ....
        try {
            exec();
            CacheLogger.doFinishLog(cacheVersion, System.currentTimeMillis() - leaderStart);
        } catch (Throwable t) {
            CacheLogger.doExecErrorLog(cacheVersion, System.currentTimeMillis() - leaderStart, t);
        }
    }

    public void exec() {
        // Query data and build synchronization tasks
        List<SyncTask> syncTasks = buildSyncTask(cacheVersion, tableName);
        // Submit the synchronization tasks to the thread pool
        Map<SyncTask, Future> futureMap = Maps.newHashMap();
        for (SyncTask task : syncTasks) {
            futureMap.put(task, syncCachePool.submit(new Runnable() {
                @Override
                public void run() {
                    task.run();
                }
            }));
        }
        for (Map.Entry<SyncTask, Future> futureEntry : futureMap.entrySet()) {
            try {
                // Block to get the synchronization task result
                futureEntry.getValue().get();
            } catch (Throwable t) {
                CacheLogger.doFutureFailedLog(cacheVersion, futureEntry.getKey());
                throw new RuntimeException(t);
            }
        }
    }
}

/**
 * Rejection policy class
 */
private static class RejectedPolicy implements RejectedExecutionHandler {
    static RejectedPolicy singleton = new RejectedPolicy();

    private RejectedPolicy() {
    }

    @Override
    public void rejectedExecution(Runnable runnable, ThreadPoolExecutor executor) {
        if (runnable instanceof SyncTask) {
            SyncTask task = (SyncTask) runnable;
            CacheLogger.doRejectLog(task);
        }
    }
}
```

The current queue size is 1000 and the maximum number of threads is 20, which means the thread pool can handle at most about 51w (510,000) SKUs, while the current number of SKUs is about 54w. If tasks take time, all remaining tasks may be put into the queue, and the queue may be insufficient. An insufficient queue triggers the rejection policy, and the rejection policy currently in our project behaves like DiscardPolicy (when a new task is submitted and rejected, it is discarded directly without any notification).

From the analysis here, we summarize the causes of the problem as follows:

> 1.
First, when a task submitted to the thread pool triggers the rejection policy, the state of the FutureTask remains NEW, so calling its get() method reaches LockSupport.park(this), blocking the current thread and causing a memory leak;
> 2. The underlying reason is that the thread pool is not used properly. There are two main problems: the choice of rejection strategy (a silently discarding policy leaves callers blocked instead of terminating abnormally), and the submission method (the project doesn't actually need the task results, so there was no need to use the submit method at all).

### 3.4.4 Getting to the bottom of it

Having analyzed this far, we can say we found the cause: when FutureTask tries to get the execution result, it calls LockSupport.park(this) and blocks the calling thread. When will that thread be woken up? Looking at the code: when the task currently assigned to an existing worker thread finishes, the Worker calls its getTask() method to fetch the next task from the blocking queue and executes that task's run() method. A discarded task never reaches a worker, never completes, and so its waiters are never unparked.

## **4. Problem solving**

By optimizing the thread pool configuration and the business process, such as increasing the thread pool queue size, fixing the rejection strategy, optimizing the business process to avoid large objects, and executing tasks at off-peak times, this combination of measures ensures the stable execution of tasks.

### 4.1 Thread pool configuration optimization

Increase the thread pool queue size and fix the rejection policy.

#### 4.1.1 Modify the rejection policy

> 1. The main purpose of the custom rejection strategy used in the project was to print out the information contained in a rejected task (such as skuId) and then update it manually, to prevent abnormal inventory data being served to other services;
> 2. As we saw earlier, the runnable passed to the handler is a FutureTask, so the `instanceof SyncTask` check never holds. This custom rejection policy therefore behaves like the default discard-style policy: it gives no notice at all, which is risky, because we didn't know the task would be discarded when we submitted it, and that may cause data loss;
> 3. After the modification, when the queue is full, the rejection strategy triggers immediately and throws an exception, so the parent thread is not blocked forever waiting for the FutureTask's result.

ps: the Runnable is currently wrapped in the project. If you use the native classes, you can obtain the rejected task in the rejection policy through reflection. If you don't need the rejected task's information, you can ignore this.

#### 4.1.2 Increase the queue size

> 1. The maximum number of threads in the pool is 20 and the queue size is 1000; the current number of SKUs is 54w, and each task carries 500 skuIds. If each task takes a little longer to execute, the pool can only process about 51w SKUs. Since three scheduled tasks share this thread pool, we set the queue size to 3000;
> 2. After the queue adjustment, some SKUs are no longer at risk of failing to synchronize inventory data to the cache in time.

### 4.2 Business process optimization

Optimize the large objects that appear in internal and external procurement to avoid requesting a 300M object each time. At the same time, stagger the execution times of the three scheduled tasks in the shared thread pool, so that tasks don't interfere with each other as the number of SKUs grows.

## **5. Summary**

### 5.1 Summary of full GC solutions

> 1. What should we do when encountering frequent full GC online? First handle it urgently, then analyze the cause. There are three options for emergency treatment: restart, current limiting, and capacity expansion.
> 2. Next, clarify the direction. Generally speaking, there are two main causes of full GC: application resource allocation problems and program problems. On the resource side, check whether the JVM parameter configuration is reasonable; most full GCs are caused by program problems, mainly either large objects or a memory leak;
> 3. The most important step is to analyze the dump file, making sure to capture the memory snapshot at the time of the incident. MAT and VisualVM both work for analysis. For the problem we encountered, we could also have used jstack to obtain all threads of the current process for analysis;
> 4. When full GC happens, an alert should be raised promptly so that development doesn't lag behind the business. In practice, set JVM parameters reasonably to avoid full GC as much as possible. In this troubleshooting we also adjusted the JVM parameters, which will be covered in a later article.

### 5.2 Notes on using thread pools

> 1. If you do not need to obtain task results synchronously, prefer the execute method to submit tasks, and handle exceptions carefully to prevent threads from being frequently destroyed and created;
> 2. If you do need the submit method, obtain the result with a timeout, to avoid memory leaks caused by blocking indefinitely;
> 3. Use rejection policies carefully, and be familiar with the problems that particular combinations of rejection policy and submission method can cause; for example, DiscardPolicy plus submit may block waiting for results forever;
> 4. Thread pool threads must be recognizable, with their own naming convention, to facilitate troubleshooting.
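The failure mode at the heart of this incident, a Future whose task was silently discarded and therefore never completes, is not Java-specific. The sketch below reproduces it with Python's concurrent.futures as a cross-language illustration (not the project's code), and shows why getting results with a timeout, as recommended in 5.2, prevents the indefinite block:

```python
from concurrent.futures import Future, TimeoutError as FutureTimeoutError

# Simulate a task dropped by a DiscardPolicy-style rejection handler:
# the caller holds a Future, but no worker thread will ever run the task,
# so the Future can never complete.
discarded = Future()

try:
    discarded.result(timeout=0.2)  # "get with timeout", as advised in 5.2
    timed_out = False
except FutureTimeoutError:
    timed_out = True               # fail fast instead of parking forever

print(timed_out)  # True: without the timeout, result() would block the
                  # calling thread indefinitely, exactly like the leaked,
                  # blocked threads seen in the heap dump
```

A plain `discarded.result()` call with no timeout would park the calling thread until the Future completes, which here is never, mirroring the `FutureTask.get()` behavior described above.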
ppsrap
1,323,857
Pulumi and AWS SAM - How to convert Cloud Formation IAC to Pulumi
SAM stands for AWS Serverless Application Model. It's a framework for building serverless...
0
2023-01-10T18:09:09
https://dev.to/ferjssilva/pulumi-and-aws-sam-how-to-convert-cloud-formation-iac-to-pulumi-62n
iac, pulumi, aws, cloud
**SAM** stands for AWS Serverless Application Model. It's a framework for building serverless applications on AWS. **IAC** is a way to manage and provision infrastructure using code. **Pulumi** is a multi-cloud development platform for provisioning infrastructure as code (IAC). While **SAM is specific to AWS** and its services, Pulumi supports multiple cloud providers (AWS, Azure, Google Cloud and more). When used together, these tools can provide a number of **advantages** for developing, deploying, and maintaining serverless applications. SAM and Pulumi can be **beneficial for organizations** that want to use a common toolset across different teams and projects. In this post I'll show some **key points** of this combination. One of the main advantages of combining SAM and Pulumi is that it allows for better collaboration between development and operations teams. SAM uses a familiar and **simplified syntax** to define serverless resources, making it simple for developers to manage and deploy their code, whilst Pulumi uses a **common language** and framework to allow developers and operations teams to collaborate. Another advantage of using SAM and Pulumi together is the ability to easily manage both **serverless and non-serverless** resources with a **single toolset**. SAM generated IAC is specifically designed for serverless applications and provides a **simplified way** to define and deploy serverless resources, such as Lambda functions, API Gateway endpoints, and DynamoDB tables. However, it **does not cover all the possible resources and scenarios** that can be defined on AWS. Pulumi can be used to manage the **non-serverless** resources required by your application, such as an S3 bucket, RDS instance, or even a k8s cluster. This can be beneficial for applications that have a **complex infrastructure** and need to manage resources across different cloud providers. Furthermore, using Pulumi with SAM allows for better **reusability** and composition of the code. 
Pulumi supports code reuse and composition through Pulumi Components, which are building blocks of reusable code that can be **shared across different teams and projects**. This can save time and effort by reducing the need to write and maintain similar code in multiple projects. It's also worth mentioning that the template code AWS SAM generates **can be converted to Pulumi** using open-source tools maintained by the Pulumi authors, which let you take an existing CloudFormation template and convert it so that Pulumi can **manage both** kinds of resources. This **saves a significant amount of time and effort** by eliminating the need to manually convert CloudFormation syntax to the appropriate language syntax. **You can convert CloudFormation IAC** code directly from the Pulumi website by clicking [here](https://www.pulumi.com/cf2pulumi/) or by using this [official library](https://github.com/pulumi/pulumi-aws-native). In conclusion, using AWS SAM and Pulumi together can provide a number of advantages for developing, deploying, and maintaining serverless applications on AWS. These include the ability to easily manage both serverless and non-serverless resources, better collaboration between developers and operations teams, and better reusability and composition of the code. And with the open-source libraries that are available for converting SAM templates to Pulumi, the process is even more streamlined. Give it a try! 
References:

- AWS Serverless Application Model (SAM): https://aws.amazon.com/serverless/sam/
- AWS CloudFormation: https://aws.amazon.com/cloudformation/
- Pulumi vs CloudFormation: https://www.pulumi.com/docs/intro/vs/cloud-templates/cloudformation/
- Migrating from AWS CloudFormation: https://www.pulumi.com/docs/guides/adopting/from_aws/#migrate-resources-into-code
- Comparison of AWS CloudFormation and AWS SAM: https://aws.amazon.com/serverless/sam/comparison/
- Converting CloudFormation to Pulumi: https://www.pulumi.com/cf2pulumi/ and https://github.com/pulumi/pulumi-aws-native

㋡ Happy Coding []s
ferjssilva
1,324,342
CVE vulnerabilities on Google Chrome prior to releases around on Dec. 2022
Overview Google Chrome vulnerabilities CVE-2023-0140 (and more) Chrome on...
21,348
2023-01-10T23:26:52
https://scqr.net/en/blog/2023/01/11/chrome-chromium-vulnerabilities-cve-2023-0140-etc/index.html
chrome, chromium, security, vulnerability
## Overview

### Google Chrome vulnerabilities

[CVE](https://cve.mitre.org/cve/)-2023-0140 (and more)

[Chrome](https://www.google.co.jp/chrome/browser/) on Windows (and other platforms), in versions prior to 109.0.5414.74, is at risk of making remote attacks easier. Version 109 was released around last month.

\-- Updating is recommended.

### Other vulnerabilities

> CVE-2023-0140 (and more)
> That on Windows (and more)

Other vulnerabilities affect Chrome prior to 109.0.5414.74 on other platforms as well, including Android, ChromeOS, etc.: CVE-2023-0128, 0129, 0130, 0131, 0132, 0133, 0134, 0135, 0136, 0137, 0138, 0139, 0141.

### As to Chromium

Some of these CVEs carry a security severity of High or Medium on [Chromium](https://www.chromium.org/), which Chrome is based on.

### As to Microsoft Edge

Since Chromium is affected, Microsoft [Edge](https://www.microsoft.com/edge) is possibly also affected.

## Note

This post is based on [our tweets](https://twitter.com/scqrinc/status/1612943292085465088?s=20).

### CVE news

https://twitter.com/CVEnew/status/1612913531338235920?s=20
nabbisen
1,324,653
Create a Real Time Crypto Price App with Next.js, TypeScript, Tailwind CSS & Binance API
Web App: A Real Time Crypto Price App built using Next.js, TypeScript, Tailwind CSS, and...
0
2023-01-11T05:55:02
https://dev.to/codeofrelevancy/create-a-real-time-crypto-price-app-with-nextjs-typescript-tailwind-css-binance-api-1oc8
web3, crypto, nextjs, typescript
## Web App:

A Real Time Crypto Price App built using Next.js, TypeScript, Tailwind CSS, and the Binance API is a dynamic application that provides users with up-to-date information on the current value of various cryptocurrencies.

---

## Source code and Tutorial:

{% embed https://www.youtube.com/embed/9n8B3zflzeI %}

---

## Next.js:

Next.js is a framework for building server-rendered React applications, providing a powerful and efficient tool for building web apps. TypeScript, on the other hand, is a superset of JavaScript that adds static typing and other features, which makes it a great choice for building large-scale applications.

---

## Tailwind CSS:

The app is designed using Tailwind CSS, a utility-first CSS framework that makes it easy to build responsive, scalable, and consistent interfaces. Tailwind CSS provides a set of pre-defined classes that let developers quickly and easily create UI elements that are visually consistent across the app.

---

## Custom hooks:

There is a custom hook called "useTicker" that tracks the current prices of various cryptocurrencies. It uses the useState hook to initialize the "cryptocurrencies" state variable with the value of the imported "CRYPTOCURRENCIES" constant.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/15o6u2p7gqk1c84s4gjg.png)

The hook also uses useCallback to create a "fetchCrypto" function that calls the Binance API via the fetch API and retrieves the latest prices for the symbols present in the "CRYPTOCURRENCIES" array. The response is parsed as JSON, and the "cryptocurrencies" state is updated with the new prices.

The useEffect hook then calls "fetchCrypto" every 5 seconds by setting an interval and clearing it when the component unmounts. To keep the effect well-behaved, consider adding fetchCrypto to the useEffect dependency array (which is safe because it is memoized with useCallback) to avoid infinite re-renders.
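The core of each polling cycle is merging the fetched quotes into the tracked list. The following is a framework-free sketch of that step; the type and function names (`Crypto`, `Ticker`, `mergePrices`) are illustrative, not the article's actual identifiers, and the real hook stores the result in React state rather than returning it:

```typescript
// Shape of a tracked coin and of one entry in a Binance price response.
interface Crypto { symbol: string; price?: string; }
interface Ticker { symbol: string; price: string; }

function mergePrices(current: Crypto[], tickers: Ticker[]): Crypto[] {
  // Index the fetched quotes by symbol for O(1) lookups.
  const latest = new Map(
    tickers.map((t) => [t.symbol, t.price] as [string, string])
  );
  // Keep every tracked coin, updating its price only when a fresh
  // quote exists; otherwise the previous price is retained.
  return current.map((c) => ({ ...c, price: latest.get(c.symbol) ?? c.price }));
}
```

Returning fresh objects (rather than mutating the input) matters here: React state updates rely on new references to trigger a re-render.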
Finally, the hook returns the current state of the "cryptocurrencies" variable, which will have been updated with the latest prices.

---

## Cryptocurrencies:

You can expand the list of supported cryptocurrencies by adding additional entries to the CRYPTOCURRENCIES constant.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ryztvf4936j1bz2dy0zk.png)

---

## Utils:

The "formatPrice" function takes an optional "price" parameter, which defaults to 0 if no value is passed in. It uses Math.round (scaling the value before and after) to round "price" to the nearest hundredth, storing the result in "formattedPrice". If "formattedPrice" is greater than 0, the function uses the toLocaleString method to format the number as a string with appropriate thousands separators and prepends a "$" symbol. Otherwise, it returns the original "price" without modification. The final result is a string that represents a formatted price.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e2shxbjymufp4mwqy5wv.png)

---

## Binance API:

The Binance API is used to gather real-time data on the value of various cryptocurrencies, such as Bitcoin, Ethereum, BNB, XRP, Dogecoin, Polygon, Solana, Shiba Inu, ApeCoin, NEAR Protocol, Terra Classic, and Terra. The API provides access to historical data, trading pairs, and other information used to display the current value of each cryptocurrency in the app's interface.

The endpoint returns information such as the last traded price, 24 hour trading volume, and the 24 hour percentage change for each pair. This information is useful for anyone looking to monitor the performance of specific cryptocurrency pairs over a 24 hour period.

---

## TypeScript:

TypeScript is an open-source programming language that is a superset of JavaScript. It was developed and is maintained by Microsoft.
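The formatPrice behavior described above can be sketched as follows. This is a reconstruction from the prose, not the article's exact source, and the `"en-US"` locale is an assumption made so the thousands separators are deterministic:

```typescript
// Round to the nearest hundredth, add thousands separators, and
// prefix "$" for positive values; non-positive inputs pass through.
function formatPrice(price: number = 0): string {
  // Math.round works on integers, so scale by 100 before and after.
  const formattedPrice = Math.round(price * 100) / 100;
  if (formattedPrice > 0) {
    return `$${formattedPrice.toLocaleString("en-US")}`;
  }
  return String(price);
}
```

For example, an input of `1234.567` comes back as `"$1,234.57"`, while `0` (or a missing argument) is returned unformatted.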
TypeScript adds optional static typing, class-based object-oriented programming, and other features to JavaScript. This allows for better code organization, improved developer productivity, and more robust code.

---

## Summary:

This web app is a powerful and efficient tool for individuals and businesses looking to stay informed about the value of various cryptocurrencies. With its real-time data, the app provides users with a comprehensive view of the crypto market, making it an essential tool for anyone interested in the crypto space.

---

Please consider following and supporting me by subscribing to my channel. Your support is greatly appreciated and will help me continue creating content for you to enjoy. Thank you in advance for your support!

[YouTube](https://youtube.com/@codeofrelevancy)
[Discord](https://discord.com/invite/f8jwnzcRz2)
[GitHub](https://github.com/codeofrelevancy)
codeofrelevancy