[
{
"url": "https://dev.to/richjdsmith/comment/7ba7",
"domain": "dev.to",
"file_source": "part-00553-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nThis is an awesome list of a bunch of great tips you've put together! Thank you for it!\n\nRuby was (and retrospectively, I'm thankful for it) the first programming language I learned. I've since picked up PHP, JavaScript and am working on Elixir, but I consider myself a proud Ruby programmer.\n\nWith the progress being made on the Ruby 3X3 goal, as well as the fact that nothing I have tried comes close to the productivity i get from working with Rails, I don't see that changing anytime soon.\n\nI'm a Rubyist, and your list is a perfect example of why.\n\nFor further actions, you may consider blocking this person and/or reporting abuse",
"content_format": "markdown"
},
{
"url": "https://dev.to/vibin12609877/6-benefits-of-mobile-app-development-using-react-native-3ldb",
"domain": "dev.to",
"file_source": "part-00755-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nWhat is React Native\n\nReact Native is an open-source programming language used for developing cross-platform applications. It is created by Facebook so that you can create native apps for both Android and iOS using JavaScript as their programming language. Code reusability, hot reload, bridging are some of the features that attract developers to use this.\n\nBenefits of using React Native for mobile application development\n\nSaves time and money\n\nDevelopers can build 2 apps within the time frame to built a single application because 90% of code is shared between iOS and Android apps. It not only saves time but also the cost of development. You can develop both the applications with a single team who are proficient enough with the set of technologies and thus you can save the overhead cost on resources. Also, you can use pre-built components while developing so that development can become speedy.\n\nCode reusability\n\nNo more worries if you want to move your application to another framework. With react native, you can move your code to Android Studio or Xcode and start from there. This code reusability features distinguish React Native from others.\n\nHot reload\n\nWith the hot reload feature, developers can immediately view the changes made in existing code and this is extremely helpful while working on the front end part.\n\nPublish updates for your apps faster\n\nPublishing updates are much easier while using react-native. You can update your app while users are using it and also single source code changes will make an update on all platforms.\n\nLarge community support\n\nThe community support for React Native is huge after making it open source. You get huge and free access to documentation and individual experiences through the community. Whenever a developer gets stuck in between, he can always look for support. 
Git Hub react-native community is one such space for developers.\n\nBridging\n\nYou can bridge with other platforms even while you are building an app with react native. This helps to use other technologies like swift objective c, especially when you want to add a third party application build on another platform. One such example is blending payment to the shopping apps.\n\nApps Build on React Native\n\nFacebook ad manager is built on react native and it is the first cross-platform application that used react native.\n\nWallmart\n\nWallmart switched to react-native as the demand for a scalable solution arose when there was a huge user demand.",
"content_format": "markdown"
},
{
"url": "https://dev.to/flippedcoding/comment/5npp",
"domain": "dev.to",
"file_source": "part-00936-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nA lot of it comes down to preference, but usually percentages are used for block level elements like
.\n\nIf you need something that's a fixed size you'd probably go with px.\n\nThe others are just a toss up.\n\nI guess vh and vw measures will be more helpful because it adjust based on screen size. But yes, each measure have it's own importance.\n\nAre you sure you want to hide this comment? It will become hidden in your post, but will still be visible via the comment's permalink.\n\nHide child comments as well\n\nConfirm\n\nFor further actions, you may consider blocking this person and/or reporting abuse\n\nWe're a place where coders share, stay up-to-date and grow their careers.",
"content_format": "markdown"
},
{
"url": "https://dev.to/itaylisaey/enhance-your-react-workflow-with-this-new-tool-1kmg",
"domain": "dev.to",
"file_source": "part-00553-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nOne of the most frustrating thing about developing in web frameworks is the constant boilerplate code needed each time you start a new project, component, page, etc.\n\nThe Agrippa CLI aims to solve or at least mitigate this problem for React developers.\n\nAgrippa inherit the modularity and unopinionated approach of React with a modular, customizable CLI that help you create React components easily, and most importantly FAST.\n\nAs written in their documentation:\n\n> \n\nAgrippa is a humble CLI, whose purpose is to assist React developers in creating components without the boilerplate. It can easily generate templates for React components of different compositions (styling solutions, prop validations, etc.) and in different environments. -Agrippa on Github\n\n## Simple and Convenient CLI\n\nto use Agrippa you can install it globally `npm i -g agrippa` (or `yarn global add agrippa` ).> \n\nOnce installed, components can be generated using agrippa gen [options]\n\nSome of the options supported by agrippa gen are: `--styling` : which styling solution to use (e.g. CSS, SCSS, JSS, Material-UI). `-props` : which prop validation/definition solution to use (e.g. Typescript interfaces, prop-types, JSDoc comments). `--children` : whether the components is meant to have children or not.\nAlso, Agrippa automatically detects and sets other, important defaults for you, such as whether to use Typescript or Javascript, and whether to import React or not. — Article written by the creator of the package\n\nAgrippa has more options and configuration, with the latest update adding a custom post commands, and base directories configuration.\n\n## Conclusion\n\nin conclusion the Agrippa CLI has become my preferred way of creating React component hassle-free, and it might be fitting for you too.\n\nI hope you enjoyed reading this article and it would be awesome if you consider showing love to original developer of the package:",
"content_format": "markdown"
},
{
"url": "https://dev.to/thisdotmedia/angular-news-using-the-wordpress-api-developer-advocacy-primeng-ngupgrade-rxjs-3c0e",
"domain": "dev.to",
"file_source": "part-00936-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nThe Angular community is continuously rolling out new tools and resources to help increase the capacity of its members. Keep up with current Angular news as Tracy Lee (@ladyleet) and Taras Mankovski (@tarasm) discuss a variety of topics with Angular experts.\n\nFeatured in this article are interviews with Roy Sivan (@royboy789) from Walt Disney, John Papa (@John_Papa) from Microsoft, Kito Mann (@kito99) from Virtua, George Kalpakas(@gkalpakas) from the Angular core team and Ben Lesh (@benlesh) from Google.\n\n## Wordpress’ API for Mobile App Development & Content Management\n\nThe benefits of using WordPress API with Roy Sivan\n\nRoy Sivan is a Senior WordPress developer at Walt Disney. He is a technology expert on WordPress and using WordPress in SPA and mobile development.\n\nWordPress is still a strong contender a CMS solution. It has never been easier to decouple applications from WordPress and allow those working on site content to build out material which can then be used to create applications with any framework of choice, including Angular.\n\nThe WordPress API has recently become a popular choice for companies looking to build mobile apps. Because the WordPress API is decoupled from WordPress, you can use AppPresser to pull data from the main site and display it in a native mobile application.\n\nIf you have looked into WordPress before and been concerned about security, the open source community has been hard at work solving this problem. Security has always been an important consideration for WordPress due to the large size of the platform, but because of a recent uptick in contributions to this concern, there are now a variety of options for authentication.\n\nRoy encourages those interested in pursuing WordPress to review the new developer handbook to better understand the variety of customization possibilities. 
It serves as a great resource with in-depth tutorials that walk you through everything from advanced customizations to building your own API endpoints. Chances are, whatever you are trying to accomplish with WordPress has already been thought out and is in the documentation.\n\n## The Life of a Developer Advocate and How to be a Better Developer\n\nUsing advocacy to propel development in the community with John Papa\n\nWorking as a Developer Advocate is all about sharing experience and showing people new ways of doing things. John Papa has been coding for most of his life and a developer advocate for the past 10 years. He loves making contributions that make code easier so developers can focus on the real problems.\n\nJohn Papa’s work as an advocate includes a variety of methods to promote development in the community.\n\nIn Angular, John’s work has played a key role in developer’s lives. He recently wrote VS Code snippets for Angular to help make writing Angular code easier. His snippets allow Angular developers to get rid of much boiler plate and add consistency to their code.\n\nJohn also played an integral part in creating the style guide for Angular and AngularJS. His style guide work for Angular played significantly into the opinions Angular-CLI gives developers.\n\nJohn is always encouraging people to get more involved in their development communities. He recommends getting on social media and engaging in conversations, contributing to open source projects, building extension tools, and finding other ways to help developers in their work.\n\nYou will often find him presenting at conferences and events. He stresses the importance of participating in hallway conversations and views this as a great way to see what people are saying about different tools. 
By talking to people between sessions and listening to people’s concerns as a developer advocate, he is able to help give that feedback to core teams and make improvements all around.\n\n## PrimeNG\n\nGetting started with PrimeNG with Kito Mann\n\nIf you’ve done development in Java, you may be familiar with PrimeFaces, a popular Java component library. We sit down with Kito Mann, a technology trainer and mentor at Virtua to discuss PrimeNG.\n\nPrimeNG is an Angular component library that is a derivative of PrimeFaces. The library offers over 70 open source native Angular components along with a variety of theming options for developers.\n\nPrimeNG has become popular amongst developers in organizations as small as 2–3 person shops and large organizations. The primary benefit developers see when using PrimeNG is help with developing complicated UI such as grids, charts and calendar components.\n\n## NgUpgrade\n\nIntegrating ngUpgrade and the latest developments in Angular with George Kalpakas\n\nConstant innovation in the Angular team has recently brought us ngUpgrade. NgUpgrade works by enabling upgrades (making AngularJS components available in Angular) and downgrades (making Angular components available in AngularJS) between components. The goal and vision of the ngUpgrade project is to make it easier for developers working with large AngularJS applications to upgrade to the latest version of Angular. To use ngUpgrade, developers simply need to add upgrade to their projects and bootstrap both frameworks using a sequence of operations.\n\nGeorge Kalpakas, a member of the Angular core team and core contributor to the ngUpgrade project, speaks with us about the project and the plans the Angular team has in store for developers.\n\nThere have been a significant amount of improvements to the ngUpgrade project as of late. Recently, ngUpgrade was re-written to incorporate AOT compilation. 
Another recent addition was improving the interoperability of things like ngModel and control access monitor in order to make it easier to use components and interpolate between Angular and AngularJS.\n\nThe path to creating ngUpgrade has not been an easy one. Some of the challenges the team has faced through this process was getting both frameworks to be aware of each other. Current challenges the team is working on is the debugging story for ngUpgrade and solving challenges around users not configuring ngUpgrade correctly and running into errors.\n\nLooking forward, the ngUpgrade team is working on docs and other resources to help people get started using ngUpgrade. Additional plans include improving the performance of ngUpgrade apps and providing more tooling and more boiler plates for various use cases.\n\nIf users have questions or feedback through the upgrade process, you can reach George through the ngUpgrade GitHub project where the Angular team is always looking to hear more from their users.\n\n## RxJS\n\nRxJS 6 and 7, Testing, and more with Ben Lesh, Author and Project Lead of RxJS 5+\n\nBen Lesh is a Software Engineer at Google and is the author and project lead for RxJS 5+. He provides us with an update on the current state of RxJS. Currently, the size of RxJS (minified, bundled, and gzipped is 33k. 
However, Ben has created a library that shrunk the size of RxJS to only 3k!\n\nTo do this, he moved all the error handling into one central spot so v8 can inline all the other functions without having to deal with the de-optimized functions that you get if you have a tryCatch block in a function.\n\nThere were some compromises that Ben had to make in order to get it to a smaller size, but the new improvements to the library should be roughly the same speed and about 10x smaller.\n\nThough RxJS does not currently take advantage of Angular’s compiler, an ideal scenario would be to get Angular’s template compiler and use it at build-time to produce functions able to handle updates in the DOM based on the compiled template.\n\nMany who use RxJS wonder about RxJS’s testing story. RxJS has a great testing suite that runs a battery of more than 2,500 tests within a second. Unfortunately it does not provide much utility to the average user yet. The test suite currently schedules synchronistic tests on things that would normally run asynchronistically but does it in a determinate way. It makes some assumptions based off the fact that it is only really used in RxJS’s core library, but there is work being done to make this more useful externally. A lot of challenging problems remain to be resolved before it is successful.\n\nNew updates on the testing story include a prototyping of a testing suite in T-Rx. Ben tells us he was able to re-write the schedulers meaning that he had access to a brand new virtual scheduler that he wrapped in a high order function that gives users access to everything you might need for testing. This automatically sets up the other schedulers so they work in an expected way with the test scheduler.\n\nThere is more to come, so keep up to date with RxJS on the latest developments on the project in Github.\n\nCo-written by Necoline Hubner and Tracy Lee.\n\nNeed JavaScript consulting, mentoring, or training help? 
Check out our list of services at This Dot Labs.",
"content_format": "markdown"
},
{
"url": "https://dev.to/jakewies/the-next-iteration-of-the-web-580f",
"domain": "dev.to",
"file_source": "part-00780-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nThe following is a post from my newsletter, Original Copy. Want early access to posts like this? Subscribe here.\n\nTim Berners-Lee is best known for inventing the World Wide Web. In 1989, while working at CERN, he proposed an idea: \"a web of hypertext documents viewable by browsers\".\n\nYou can see the appeal of such a technology. An industry ripe with innovation and experimentation requires rapid knowledge transfer. The idea was so powerful that a few years later it extended to the general public. The Internet was born.\n\nThis first iteration of the Web, dubbed Web 1.0, lasted from the early 90s until about 2004. It was all about consuming static content, usually in the form of texts, links and images. Creators of this content were the \"webmasters\" who knew how to publish on the Web. An uncommon skill.\n\n## Web 2.0\n\nToday we experience an interactive and social Web. Anyone can create and publish content with the click of a button. No webmaster needed. This is Web 2.0, the modern Web.\n\nPersonal computers and smart phones led the way. Tech giants like Google and Facebook followed. The applications created by these companies empower users to learn, shared and build. They grew faster than the open protocols created during the Web 1.0 era. A massive improvement, but not without consequences.\n\nWeb 2.0 is a centralized Web. Applications are owned by a single entity or corporation. Within centralized applications, the owner is the single source of truth. They make the rules, and you play the game.\n\nThe owner has control of everything:\n\n* Permissions\n* Governance\n* Data\n* Equity\n\nThis has lead to issues of censorship, security and the topic that intrigues me the most: privacy.\n\n### Data wars\n\nIn 1973, the artist Richard Serra coined the phrase, \"If something is free, you're the product.\" At the time he was referring to television. His point was that television was the audience. 
Its main purpose was to shuttle viewers to an advertiser. A funnel.\n\nToday's most popular Internet applications, Google, Facebook, Twitter etc., act the same way. They're all free to use, yet they are among the richest companies in the United States. At the time of writing this, Google and Facebook are in the top 5 companies in the US by market capitalization. And their product is free!\n\nThe modern Web empowers companies to give away their technology at no cost. This removes barriers to entry so users can pile in. The company's only goal is to get you in and keep you in as long as possible, because you have something they need:\n\nA pair of eyeballs.\n\nThe more users they acquire, the more data they collect and the more money they make off of advertising. This is THE playbook used by technology companies all over the world. Why? Because it works! It works so well that you and I, aware of this madness, continue to \"use\".\n\n## Enter Web 3.0\n\nI was too young to recognize the dramatic shift from Web 1.0 to Web 2.0. One day you wake up, and MySpace exists. Everyone's talking about it. Everyone's using it. It happened fast!\n\nNow that I'm older, and I've been in this Web Development racket for almost a decade, I have my ear closer to the ground. I sense the paradigm shifting again.\n\nWeb 3.0, the rapidly approaching (pssssst! It's already here) next iteration of the Web, is a decentralized Web. It aims to flip the system on its head, turning the keys over to the participants of the network (the users).\n\nWeb 3.0 applications are usually referred to as Decentralized applications or Dapps. They are not owned by a single entity. They are not deployed to a centralized server. Rather, they're isolated pieces of code deployed to a peer-to-peer network: a blockchain.\n\nThe validity of the blockchain and its app ecosystem is upheld through incentives. These incentives take the form of digital tokens - Cryptocurrency. 
Awarded to participants on the network through various \"tokenomics\".\n\nThe use of these tokens is where things get interesting. The most basic function is paying for actions taken on the network. Any action that effect's the \"state\" of the blockchain costs money - a transaction. And that's only scratching the surface.\n\nAnyone can use a decentralized app. Logins are anonymous. Payments are built-in, anonymous and backed by cryptography.\n\n### The early adopters\n\nWith all the buzz around Web 3.0, it is still early days.\n\nCertain use cases still rely on some layer of centralization. Centralization makes things easier to coordinate. Achieving full decentralization introduces difficult problems like communication, incentives and consensus. But they are problems worth solving.\n\nThere is a mass migration of smart engineers and designers heading to the Wild West of Web 3.0. I am excited for its future.\n\nMost of my work is still in Web 2.0 land, but I'm ramping up on Web 3.0 technology. A lot of the content I'll be sharing will focus on it.",
"content_format": "markdown"
},
{
"url": "https://dev.to/emanuelquerty/comment/b3cf",
"domain": "dev.to",
"file_source": "part-00204-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nHi, I'm Emanuel. Glad to join this community!\n\nI'm a coding enthusiast. I love coding and everything IT related. I've been doing web development for about 5 years, mostly front-end, though I have backend experience as well as I've done backend for some time, mostly, PHP and SQL. However, I've been learning and working with nodeJS for almost a year and have enjoyed working with it. Currently, learning MongoDB. I also work with Python, hence my desire to expand my knowledge on machine learning and data science as Python is widely used in those areas. As you can see just by my intro, I love getting my hands dirty and learning pretty much anything. I hope to learn and share my knowledge with this community.\n\nFor further actions, you may consider blocking this person and/or reporting abuse",
"content_format": "markdown"
},
{
"url": "https://dev.to/aspittel/socketio-making-web-sockets-a-piece-of-cake-bmd/comments",
"domain": "dev.to",
"file_source": "part-00471-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nOh that's a great subject. I've seen situations where it could make for a much more user friendly experience. Thanks for writing and inspiring me to revisit this option!\n\nI just did some research to see what the current options are when using `.Net`.\nSignalR seems to be a great tool to start with bi-directional communication. It requires a jQuery plugin to get things working. The advantage is that it provides 'fallbacks' for browsers that do not support websockets.\n\nBut with websockets adoption improving, a jQuery plugin might be more than your really need. I would rather use a a library that just relies on websockets.\n\nGreat article Ali, I'm very excited for Socket and realtime material.\n\nI coudnt stop my self to debug this live app, It could be possible with the firecamp.app\n\nThanks again for the sharing your knowledge :clap: :clap: :clap:\n\nI also think Socket.io is fascinating! I completed a course on PluralSight not too long ago. The course although not entirely about Socket.io, introduced it as a module and I found it interesting!\n\nI'll definitely learn more about it! When they describe it as allowing real-time communication... It reminds me of E-Commerce support chats.\n\nI remember tring websockets before release, in draft10 I think, on PHP, it was ..almost working.\n\nAnyway, if you want to learn about websocket why didn't you used vanilla JS? is easy nowdays, at least you would have learned how they are implemented, and then in a commercial project you would use a library like socket.io.\n\nNice article, I have build a Realtime code editor project using socket.io called Code-Sync.\n\nIt offers a real-time collaborative code editor featuring unique room generation, syntax highlighting, and auto-suggestions. Users can seamlessly edit, save, and download files while communicating through group chat\n\nThanks for the write-up! 
I was a little intimidated to look into websockets before, but now I feel like maybe it's something I could tackle.\n\nI tried out your app and I was wondering if there was a way to show filled pixels on load, rather than empty pixels that updated with changes after you loaded. So something more similar to the reddit version where a user could come in and see what has already been drawn.\n\nFor further actions, you may consider blocking this person and/or reporting abuse",
"content_format": "markdown"
},
{
"url": "https://dev.to/jw_baldwin",
"domain": "dev.to",
"file_source": "part-00591-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\n# James Baldwin\n\n💻 Full-time Software Engineer currently working on flowist.io.\n\nWork\n\nSoftware Engineer at Hacking on Flowist.io\n\n### Want to connect with James Baldwin?\n\nCreate an account to connect with James Baldwin. You can also sign in below to proceed if you already have an account.\n\nAlready have an account? Sign in\n\nloading...",
"content_format": "markdown"
},
{
"url": "https://dev.to/tra",
"domain": "dev.to",
"file_source": "part-00936-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\n# Tariq Ali\n\nTariq Ali is an aspiring web developer dedicated to producing interesting applications and websites for businesses. Before entering into web development, Tariq served as a journalist, real estate agent, and college instructor. He has a special talent in writing and researching effectively.\n\n# Eight Year Club\n\nThis badge celebrates the longevity of those who have been a registered member of the DEV Community for at least eight years.\n\n# Writing Debut\n\nAwarded for writing and sharing your first DEV post! Continue sharing your work to earn the 4 Week Writing Streak Badge.\n\n# Seven Year Club\n\nThis badge celebrates the longevity of those who have been a registered member of the DEV Community for at least seven years.\n\n# Five Year Club\n\nThis badge celebrates the longevity of those who have been a registered member of the DEV Community for at least five years.\n\n# Four Year Club\n\nThis badge celebrates the longevity of those who have been a registered member of the DEV Community for at least four years.\n\n# Three Year Club\n\nThis badge celebrates the longevity of those who have been a registered member of the DEV Community for at least three years.\n\n# Two Year Club\n\nThis badge celebrates the longevity of those who have been a registered member of the DEV Community for at least two years.\n\n# One Year Club\n\nThis badge celebrates the longevity of those who have been a registered member of the DEV Community for at least one year.\n\n### Want to connect with Tariq Ali?\n\nCreate an account to connect with Tariq Ali. You can also sign in below to proceed if you already have an account.",
"content_format": "markdown"
},
{
"url": "https://dev.to/johannesjo/looking-for-a-side-project-which-you-could-use-yourself-every-day-and-which-is-even-useful-during-the-corona-crisis-566a",
"domain": "dev.to",
"file_source": "part-00936-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nI'm resorting to cats. This is serious!\n\nSuper Productivity is a to do list / time tracker which comes with Jira, GitHub & Gitlab integrations as well as some nifty productivity helper features, such as a pomodoro timer and a break reminder.\n\nIt's my favorite side project and I use it every day to plan my tasks and to track my time. But my skill set and also my perspective after using it for over two years are limited.\n\nI need your help!\n\n## Things that would help\n\n### Feedback\n\n* What do you like and what not?\n* What should be improved?\n* What essential features are missing?\n* What features seem useless to you or not well thought out?\n* If you don't, why won't you use the app?\n* How do you use the app, if so? I need to figure in which direction to head, as there might be too many features at the moment.\n* Reporting bugs\n\n### Design & UX\n\nI am not a designer. So there is probably a lot not to like. It would be absolutely great if a professional could have a look or two.\n\n* Are the general concepts working? What should be improved?\n* Providing better icons\n\n### Features/Coding\n\n* I started working on an android version and I really suck at it...\n* there are a lot of open issues most of them being improvement suggestions",
"content_format": "markdown"
},
{
"url": "https://dev.to/denolfe/bootstrap-your-dotfiles-with-dotbot-4j7f",
"domain": "dev.to",
"file_source": "part-00591-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nA customized set of dotfiles can vastly increase your command-line productivity and happiness. Having your dotfiles in a repo allows you to take your configuration anywhere. In this tutorial, we'll be setting up a dotfiles repository and bootstrapping it using dotbot.\n\n## Why Dotbot?\n\nWhile it could be tempting for some to script your dotfiles configuration and installation yourself, I would advise against going this route. I previously went this route, but I would constantly run into edge-cases leading to constant modification of the scripts. With a framework, most of the use-cases have been thought of, so it is very low friction in comparison.\n\nOn investigating a number of tools out there, dotbot's features set it apart from the others:\n\n* Single configuration file\n* Single command to install on a new machine via symbolic links\n* Can be added as a git submodule\n* Python is the only dependency (standard for almost all distros)\n\n## Getting Started\n\nThe first step is to get a git repository started and add dotbot as a submodule\n\n```\n# Create project directory\n> mkdir dotfiles\n> cd dotfiles\n\n# Initialize Repository\n> git init\n> git submodule add https://github.com/anishathalye/dotbot\n> cp dotbot/tools/git-submodule/install .\n> touch install.config.yaml\n```\n\nSo now we have a few things set up:\n\n* New git repository\n* Dotbot added as a submodule\n* Dotbot's\n `install` script copied to the project root* A blank configuration file\n\n## Configuration\n\nNext, we'll start modifying our config file. Here is a starting point:\n\n```\n- defaults:\n link:\n relink: true\n\n- clean: ['~']\n\n- link:\n ~/.bashrc: bashrc\n ~/.zshrc: zshrc\n ~/.vimrc: vimrc\n\n- shell:\n - [git submodule update --init --recursive, Installing submodules]\n```\n\nLet's go through each section to see what it does\n\n### Defaults\n\nDefaults controls what action will be taken for everything in the `link` section. 
`relink` removes the old target if it is a symlink. There are additional options that may be worth looking at in the documentation\n\n### Clean\n\nThis simply defines what directory should be inspected for dead links. Dead links are automatically removed.\n\n### Link\n\nThis is where most of your modifications will take place. Here we define where we want the symlink to be once linked, and what file should be linked there. In the above example, we have 3 files that commonly contain customizations.\n\n### Shell\n\nThis section contains any raw shell commands that you'd like to run upon running your install script. In this case, it installs any submodules.\n\n## Move files into Repository\n\nNext, we move the files we want to link into our repository. Assuming you want the 3 files specified from above, we can run the following commands to move them in.\n\n```\n> cp ~/.vimrc ./vimrc\n> cp ~/.zshrc ./zshrc\n> cp ~/.bashrc ./bashrc\n```\n\n### Run Install Script\n\nWe can test out if everything works properly by running `./install` within our repository. If all is configured properly, you should see something like the following:\n\n```\n> ./install\nAll targets have been cleaned\nCreating link ~/.bashrc -> ~/.dotfiles/bashrc\nCreating link ~/.zshrc -> ~/.dotfiles/zshrc\nCreating link ~/.vimrc -> ~/.dotfiles/vimrc\nAll links have been set up\nInstalling submodules [git submodule update --init --recursive]\nAll commands have been executed\n\n==> All tasks executed successfully\n```\n\nOnce you are satisfied with how you dotfiles install, be sure to commit your changes and push to a remote repository.\n\n```\n> git add --all\n> git remote add origin git@github.com:username/dotfiles.git\n> git push -u origin master\n```\n\n### Use on Multiple Machines\n\nNow that you have a basic dotfiles repository set up, you can push this to a public repository in order to use on multiple machines. 
On any machine, you can now simply run the following commands to install your dotfiles:\n\n```\n> git clone git@github.com:username/dotfiles.git --recursive\n> cd dotfiles && ./install\n```\n\nAny new changes can be retrieved from the repository and installed using the following commands:\n\n```\n> git pull\n> ./install\n```\n\n## Summary\n\nYou can now easily maintain your dotfiles in a git repository and share them between your environments.\n\nHere is my personal dotfiles repository that uses dotbot. There are many other places to draw dotfiles inspiration from, such as GitHub Does Dotfiles and other dotbot users.",
"content_format": "markdown"
},
{
"url": "https://dev.to/veera_zaro/from-bpo-day-to-day-monotonous-work-to-frontend-engineer-in-a-startup-without-cs-degree--255n",
"domain": "dev.to",
"file_source": "part-00553-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\n## 👋 Quick Intro\n\nHi Everyone, this is Veera. I did my graduation in Commerce. After graduation, I started looking for jobs that would help support my family. Speaking English was a bit difficult for me. Though, I managed to land a decent job as a Process Associate.\n\n## 🤔 How I Got Interest In Tech\n\nWhen the pandemic (Covid-19) started, my company began doing WFH (Work From Home). Together, they introduced a new tool to my project. That could reduce manual workers (headcount) in the future. So, I started to think about the future and asked myself many questions.\n\nI was confused about how they would line up with this tool, which is more like an automaton. Luckily, I have a brother who is a Software Engineer. I asked him, how could they have built a tool like this?... Then BOOM!!!,** he throws in some jargon like API, Client, Server, HTTP, etc., That's when my interest spur into tech.\n\n## ⏳ Time For A Change\n\nI started thinking about changing my career from BPO to Software Developer. Two reasons made me think hard about my decision :\n\n* It was too late to restart my career.\n* I don't have a CS Degree.\n\nIt wasn't easy to decide at that time. Then I read some articles which inspired me. I found it very interesting that many people had begun their tech careers after turning 30. My interest in this field started with those articles, so I finally started learning to code.\n\n## 💻 Started To Learn Programming\n\nI'd spent some time learning in the morning and had to work in the evening. My brother helped me understand the basics when I first began to learn. I started with Python which is powerful, high-level, and beginner friendly. I came to know that the industry is very wide with a lot of roles like Frontend, Backend, and More.\n\nLearned the basics of programming for quite some time. I've found myself interested in \"Frontend (UI - User Interface)\". The reason is that you can see what you're doing. 
I was unfamiliar with Frontend technologies up until then, so I began exploring that side.\n\n## My Roadmap For Learning Frontend\n\nFrontend development involves crafting a website for browsers and mobile devices. There are three core technical skills required for that:\n\n* HTML (Hyper Text Markup Language)\n* CSS (Cascading Style Sheets)\n* JavaScript (scripting language)\n\nIn the first two months, I learned HTML and CSS. Then I built a few static websites locally. Once I felt familiar with HTML and CSS, I decided to move forward to exploring JS (JavaScript). JavaScript is awesome and makes the website dynamic (actually alive). I decided to spend two months learning JS. As many beginners do, I got stuck in tutorial hell. Once I got back on track, I built a few projects using HTML, CSS, and JavaScript.\n\nAfter learning the three core technologies, I decided to learn backend technologies too. I dedicated two months to learning APIs, Node.js, and MongoDB. Then I came to know about ReactJS. It is the most popular in the developer community and on Tech Twitter. There are tons of fresher/experienced job opportunities for ReactJS developers. So, after learning the backend, I jumped straight into ReactJS. Learning ReactJS will be easy if you know the fundamentals of JavaScript. I spent some time learning React and Redux (state management).\n\n## 👨💻 Finally Getting Into The Tech Industry\n\nAfter spending eight months (part-time) learning and doing projects, I decided to take a step forward. When I was ready to apply for jobs, I started to prepare for my first tech interview. Tech interviews may have many rounds (technical, coding, online test and assessment, HR), though it depends on the company. I had no idea and no experience attending tech interviews. 
However, I had to prepare for my interview.\n\nI was like \"Do they ask questions about HTML/CSS or JavaScript or ReactJS?\" or \"Will they ask me to solve a problem?\" I had no clue about what questions I would get in the interview. I found some useful GitHub repositories for tech interview preparation. With HackerRank and Codewars (basic problems), I started improving my problem-solving and logical thinking skills.\n\nI started applying for jobs across different locations. Getting into a tech job without a CS degree was not easy. I was willing to work anywhere. After applying to more than 15 companies, one company responded to my application under the \"Education: UG/PG Any Graduate\" section, and I took their online test but failed. Some questions were too difficult for my level of knowledge at that time. Meanwhile, I was waiting for a response from other companies.\n\nThen one fine day I received a call for an interview in Chennai for a ReactJS Developer position. There were three rounds of interviews (Technical, Coding, Code Review & HR). I answered all the questions about ReactJS, Redux, and the Context API during the first round. For the second round, I was given a task to complete within five days using the API provided by them. I submitted the project one day early. They appreciated my work and also asked for a meeting to do a code review. During that code review round, I was asked about my project, along with some basics about JavaScript and CSS. I successfully cleared all rounds and got the offer for a full-time Frontend Engineer position.\n\nIt is rewarding that I was able to land a job in the tech industry. Since I started to work in this industry a little while ago, I've learned a lot about this field. 
And I'm still learning and improving my skills.\n\nI've used the following GitHub repos:\n\nArticles That Inspired Me:\n\n* From Lawyer to Engineer at Google link\n* From Chef to Software Engineer link\n* Frying Chicken in a gas station to Software Engineer link\n\nThank you for reading along.",
"content_format": "markdown"
},
{
"url": "https://dev.to/ppshobi/i-have-created-a-video-on-how-to-build-a-login-system-in-django-by-leveraging-djangos-built-in-auth-component-51em",
"domain": "dev.to",
"file_source": "part-00471-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nAt some point for a web application, we need the authentication feature to identify the user and manage them.\n\nBeing a battery included framework Django comes with a builtin login system. So I have created a video screencast with step by step instructions to build an auth system in Django.\n\nIn this screencast, we will be taking a look into how to leverage Django's power to build a powerful authentication system\n\nHere is the link http://bit.ly/2NXLgnf\n\nSource Code : https://github.com/ShobiExplains/SimpleDjangoAuthentication\n\nLet me know what you guys thinks about the video.",
"content_format": "markdown"
},
{
"url": "https://dev.to/apex95/fuzzing-tesseract-cloud-vision-generating-ocr-proof-images-210l",
"domain": "dev.to",
"file_source": "part-00315-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nIn this article, I'm going to discuss about my Bachelor's degree final project, which is about evaluating the robustness of OCR systems (such as Tesseract or Google's Cloud Vision) when adversarial samples are presented as inputs. It's somewhere in-between fuzzing and adversarial samples crafting, on a black box.\n\nIt's an old project that I recently presented at an International Security Summer School hosted by the University of Padua. I decided to also publish it here mainly because of the positive feedback received when presented at the summer school.\n\nI'll try to focus on methodology and results, which I consider being of interest, without diving into implementation details.\n\n# I published this ~1 year ago - not sure if it still works as described here. Hopefully it does, but I'm pretty sure Google made changes to the Vision engine since then.\n\n## Motivation\n\nLet's start with what I considered to be plausible use cases for this project and what problems it would be able to solve.\n\n* \n\nConfidentiality of text included in images? -- It is no surprise to us that large services (that's you, Google) will scan hosted images for texts in order to improve classification or extract user information. We might want some of that information to remain private.\n\n* \n\nSmart CAPTCHA? -- This aims to improve the efficiency of CAPTCHAs by creating images which are easier to read by humans, thus reducing the discomfort, while also rendering OCR-based bots ineffective.\n\n* \n\nDefense against content generators? -- This could serve as a defense mechanism against programs which scan documents and republish content (sometimes using different names) in order to gain undeserved merits.\n\n## Challenges\n\nNow, let's focus on the different constraints and challenges:\n\n### 1. 
Complex / closed-source architecture\n\nModern OCR systems are more complex than basic convolutional neural networks, as they need to perform multiple actions (e.g.: deskewing, layout detection, text row segmentation), so finding ways to correctly compute gradients is a daunting task. Moreover, many of them do not provide access to the source code, thus making it difficult to use techniques such as FGSM or GANs.\n\n### 2. Binarization\n\nAn OCR system usually applies a binarization procedure (e.g.: Otsu's method) to the image before running it through the main classifier in order to separate the text from the background, the ideal output being pure black text on a clean white background.\n\nThis proves troublesome because it restricts the sample generator from altering pixels by small amounts: as an example, converting a black pixel to a grayish color will be reverted by the binarization process, thus generating no feedback.\n\n### 3. Adaptive classification\n\nThis is specific to Tesseract, which is rather dated nowadays - still very popular, though. Modern classifiers might be using this method, too. It consists of performing two iterations over the same input image. In the first pass, characters which can be recognized with a certain confidence are selected and used as temporary training data. In the second pass, the OCR attempts to classify characters which were not recognized in the first iteration, but using what it previously learned.\n\nConsidering this, having an adversarial generator which alters one character at a time might not work as expected, since that character might appear later in the image.\n\n### 4. Lower entropy\n\nThis refers to the fact that the input data is rather 'limited' for an OCR system when compared to... let's say object recognition. As an example, images which contain 3D objects have larger variance than those which contain characters, since the characters have a rather fixed shape and format. 
This should make it more difficult to create adversarial samples for character classifiers without applying distortions.\n\nA direct consequence is that it greatly restricts the amount of noise that can be added to an image if readability is to be preserved.\n\n### 5. Dictionaries\n\nOCR systems will attempt to improve their accuracy by employing dictionaries of predefined words. Altering a single character in a word (i.e.: the incremental approach) might not be effective in this case.\n\n## Targeted OCR Systems\n\nFor this project, I used Tesseract 4.0 for prototyping and testing, as it had no timing restrictions and allowed me to run a fast, parallel model with high throughput so I could test if the implementation works as expected. Later, I moved to Google's Cloud Vision OCR and tried some 'remote' fuzzing through the API.\n\n## Methodology\n\nIn order to cover even black-box cases, I used a genetic algorithm guided by the feedback of the targeted OCR system. Since the confidence of the classifier alone is not a good metric for this problem, a score function based on the Levenshtein distance and the amount of noise is employed.\n\nOne of the main problems here was the size of the search space, which was partially solved by identifying regions of interest in the image and focusing only on these. Also, lots of parameter tuning...\n\n## Noise properties\n\nGiven the constraints, the following properties of the noise model must be matched:\n\n* high contrast -- so it bypasses the binarization process and generates feedback\n* low density -- in order to maintain readability by exploiting the natural low-pass filtering capability of human vision\n\nApplying salt-and-pepper noise in a smart manner will, hopefully, satisfy these constraints.\n\n## Working modes\n\nInitially, the algorithm worked using only overtext mode, which applied noise in the rectangle containing the characters. 
However, this method is not the best choice for texts written using smaller characters, mainly because there are fewer pixels that can be altered, thus drastically lowering the readability even with minimal amounts of noise. For this special case, I decided to insert the noise in between the text rows (artifacts mode) in order to preserve the original characters. Both methods presented similar success rates in hiding texts from the targeted OCR system.\n\nJust for fun, here's what happens if the score function is inverted, which translates as \"generate an image with as much noise as possible, but which can be read by OCR software\". Weird, but it's still recognized...\n\n## Results on Tesseract\n\nPromising results were achieved while testing against Tesseract 4.0. The following figure presents an early (non-final) sample in which the word \"Random\" is not recognized by Tesseract:\n\n## Tests on Google's Cloud Vision Platform\n\nThis is where things get interesting.\n\n> The implemented score function can be maximized in two ways: hiding characters or tricking the OCR engine into adding characters which shouldn't be there.\n\nOne of the samples managed to create a loop in the recognition process of Google's Cloud Vision OCR, basically recognizing the same text multiple times. No DoS or anything (or I'm not aware of it), and I'm still not sure if the loop persisted or not - it either produced a small number of iterations, failed (timed out?), or they had load balancers which compensated for this and used different instances.\n\nLet's take a closer look at the sample: below, you can see how the adversarial sample was interpreted by Google's Cloud Vision OCR system. 
The image was submitted directly to the Cloud Vision platform via the \"Try the API\" option so, at the moment of testing, the results could be easily reproduced.\n\nHere is also the 'boring' case where the characters are hidden:\n\n## Conclusions\n\nIt works, but the project reached its objective and is no longer in development.\n\nIt seems difficult to create samples that work for all OCR systems (generalization).\nAlso, the samples are vulnerable to changes at the preprocessing stage of the OCR pipeline, such as:\n\n* noise filtering (e.g.: median filters)\n* compression techniques (e.g.: Fourier compression)\n* downscaling->upscaling (e.g.: autoencoders)\n\nHowever, we can conclude that, using this approach, it is more challenging to mask small characters without making the text difficult to read. I compiled the following graph, which compares the images generated by the algorithm (below 7% noise density) with a set of images that contain random noise (15% noise density). The two sets contain different images with characters of sizes 12, 21, 36, and 50. Each random noise set contains 62 samples for each size - average values were used.\n\nNoise efficiency is computed by taking into account the Levenshtein distance and the total amount of noise in the image.\n\n## Interesting TODOs\n\n* Extracting templates from samples and training a generator?\n* Directly exploiting the row segmentation feature?\n* Attacking Otsu's binarization method?\n\nMaybe someday...",
"content_format": "markdown"
},
{
"url": "https://dev.to/swordheath/typescript-for-beginners-what-you-should-know-n0a",
"domain": "dev.to",
"file_source": "part-00755-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nTypeScript is a great programming language that was introduced to the public in 2012. This language is a typed superset of JavaScript that compiles to plain JavaScript. TypeScript is pure object-oriented, with classes and interfaces, and statically typed like C# or Java. The popular JavaScript framework Angular 2.0 is written in TypeScript. Mastering TypeScript can help programmers write object-oriented programs and have them compiled to JavaScript, both on the server side and the client side.\n\nSince I’ve only recently mastered my skills working with this programming language, I decided to share my experience and hopefully help those who are just at the beginning of their journey!\n\nWhy can using typescript be helpful?\n\nAs was already mentioned, TypeScript is a kind of subtype of JavaScript, which helps developers simplify the original language so it is easier to implement it in projects. In the developer world, there are lots of doubts regarding TypeScript and the general sense that it brings to the code writing process. But my opinion here is that TypeScript knowledge is not something that will take a lot of your time while learning but something that will definitely give you lots of advantages, especially if you work with JavaScript.\nFurthermore, TypeScript extends JavaScript and improves the developer experience. It enables developers to add type safety to their projects. 
TypeScript provides various other features, like interfaces, type aliases, abstract classes, function overloading, tuples, generics, etc.\n\nSetting up TypeScript\n\nThe first and most obvious step is to know JavaScript :) Because TypeScript is based on this programming language, learning it is primarily dependent on your understanding of JavaScript. You don’t have to be a master of JavaScript, but without knowing some basics of this language, working with TypeScript is a challenging task.\nWhen I was dealing with this programming language, I found dozens of different resources explaining the mechanics of writing the code, and also very helpful for me was the official documentation, which you can find here.\n\nThere are a few basic steps that will help you start working with TypeScript.\n\nStep 1: Installing\n\nIn this step, you need to install Node.js on your computer and the `npm` package manager.\n\n> It is also possible to use Visual Studio TypeScript plugins, but I personally prefer working with Node.js.\n\nOnce Node.js is installed, run `node -v` to see if the installation was successful.\n\nStep 2: Generate the Node.js package file\n\nUsing the `npm init` command, you will need to generate the Node.js `package.json` file, which will prompt you with questions about your project.\n\nStep 3: Add TypeScript dependencies\n\nNow that we have prepared our Node.js environment, we need to install the TypeScript dependencies using the following command: `npm install -g typescript` \nThe command will install the TypeScript compiler on a system-wide basis. From that moment, any project you create on your computer can access TypeScript dependencies without having to reinstall the TypeScript package.\n\nStep 4: Install TypeScript ambient declarations\n\nThis is probably one of the greatest things that TypeScript offers: ambients. 
This is a specific declaration, type, or module that tells the TypeScript compiler about actual source code (like variables or functions) existing elsewhere. If our TypeScript code needs to use a third-party library written in plain JavaScript, like jQuery, AngularJS, or Node.js, we can always write ambient declarations. The ambient declaration describes the types that would have been there had the library been written in TypeScript.\nThe TypeScript ecosystem contains thousands of such ambient declaration files, which you can access through `DefinitelyTyped`. DefinitelyTyped is a repository that contains declaration files contributed and maintained by the TypeScript community.\nTo install this declaration file, use this command:\n\n```\nnpm install --save-dev @types/node\n```\n\nMost of the types for popular JavaScript libraries exist in `DefinitelyTyped`. However, if you have a JavaScript library whose types are missing from `DefinitelyTyped`, you can always write your own ambient modules for it. Defining types doesn’t necessarily have to be done for every line of code in the external library, only for the portions you are using.\n\nStep 5: Create a TypeScript configuration file\n\nThe `tsconfig.json` file is where we define the TypeScript compiler options.\n\n```\nnpx tsc --init --rootDir src --outDir build\n```\n\nThe `tsconfig.json` file has many options. It’s good to know when to turn things on and off. TSC reads this file and uses these options to translate TypeScript into browser-readable JavaScript.\n\nStep 6: Create a src folder\n\n `src` hosts the TypeScript files:\n\n```\nmkdir src\ntouch src/index.ts\n```\n\nInside `src`, we also create an `index.ts` file, and then we can start writing some TypeScript code.\n\n> While writing TypeScript, it is advisable to have a TypeScript compiler installed in your project’s local dependencies.\n\nStep 7: Executing TypeScript\n\nNow, run the `tsc` command using `npx`, the Node package executor. 
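The file we are about to compile can hold something minimal; the snippet below is a hypothetical placeholder for `src/index.ts`, not code from the original post:

```typescript
// src/index.ts - a minimal typed program to verify the toolchain.
interface Greeting {
  name: string;
}

function greet(g: Greeting): string {
  return `Hello, ${g.name}!`;
}

console.log(greet({ name: "TypeScript" }));
```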
`tsc` will read the `tsconfig.json` file in the current directory and apply the configuration against the TypeScript compiler to generate the compiled JavaScript code: `npx tsc` \nNow run the command below to execute the code:\n\n `node build/index.js` \nBasically, that’s it; we’ve set up an environment and compiled our first TypeScript file, and we can now start writing our code.\n\nImportant things to know\n\nWhen starting to work with TypeScript, you should take into account that:\n\n* TypeScript takes longer to write than JavaScript because you have to specify types, so it may not be worth using for small projects;\n* TypeScript must be compiled, which can take some time, particularly in larger projects.\n\nThese are two limitations that TypeScript has; it doesn’t mean that you should not use it in larger projects, but you should be aware that it may slow down the code-writing process. So if you have an urgent, huge project, think thoroughly about whether to use TypeScript for it or not.\n\nIn this post, I just briefly explained the basics of setting up TypeScript and its purpose, but to be able to build projects based on this programming language, you need to know a lot of theory, especially about classes, modules, variables, functions, interfaces, etc. As with any programming language, theory is the basis that allows developers to avoid mistakes (in most cases) and successfully create and run projects.\n\nConclusion\n\nBecause TypeScript generates plain JavaScript code, you can use it with any browser. TypeScript is also an open source project that provides many object-oriented programming language features such as classes, interfaces, inheritance, overloading, and modules. Overall, TypeScript is a promising language that can undoubtedly assist you in neatly writing and organizing your JavaScript code base, making it more maintainable and extensible.",
"content_format": "markdown"
},
{
"url": "https://dev.to/dailydevtips1/vanilla-javascript-shuffle-array-558c",
"domain": "dev.to",
"file_source": "part-00591-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nNow and then, you need randomly to shuffle an array in `JavaScript`. There is a super-easy way of doing so. We can use the `sort()` method and pass a random number to it.\n\n## JavaScript Shuffle Array\n\nAs mentioned in the introduction, we are going to be using the `sort` method.\nThis method, without any parameters, will sort an array in a natural way like 123 and abc.\n\nSee the following example:\n\n```\nvar charArray = ['d', 'f', 'a', 'c', 'b', 'e'];\nvar numArray = [1, 5, 3, 2, 4];\n\nconsole.log(charArray.sort());\n// [\"a\", \"b\", \"c\", \"d\", \"e\", \"f\"]\n\nconsole.log(numArray.sort());\n// [1, 2, 3, 4, 5]\n```\n\nAs you can see the Arrays get normalised sorted. But we can also pass a specific argument which is what we are going to use to randomise the sort.\n\n```\nvar rockPaperScissor = ['💎', '📄', '✂️'];\nconsole.log(rockPaperScissor.sort(() => 0.5 - Math.random()));\n```\n\nThis will randomly shuffle the array let me explain in depth.\n\nThe `sort` function comes with a comparison between two elements, where element one is bigger than two. It will put the index lower or higher.As for the `.5 - Math.random()` this will return a value between -0.5 and 0.5\nSo whenever the value is below 0, the element is placed before the one other element.\n\n> \n\nAlso, read about sorting an Array of Objects by Value\n\nYou can test this and see it in action on this Codepen.\n\nSee the Pen Vanilla JavaScript Shuffle Array by Chris Bongers (@rebelchris) on CodePen.\n\n### Thank you for reading, and let's connect!\n\nThank you for reading my blog. Feel free to subscribe to my email newsletter and connect on Facebook or Twitter",
"content_format": "markdown"
},
{
"url": "https://dev.to/andevr/starting-late-learning-to-code-at-40-1oja/comments",
"domain": "dev.to",
"file_source": "part-00553-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nI have a couple ready to start with Colt Steele. Very early days, am doing the CSS section of his entry web dev bootcamp.\n\nThat first course is decent. The downside is that it does not teach es6 or any of the new JavaScript features. The end project is huge though, and you can always pick up another course to cover es6+. I'd recommend either Javascript the complete guide 2020 by Maximillian Schwarzmuller, or Andrew Mead's The Modern Javascript bootcamp. Keep in mind you want to take your time going through javascript. For example, once you start getting into writing functions I would take your time going through the course. Take a couple days and just write a ton of functions. My biggest beef with most courses is they dont really provide you with ample practice, despite what they say, so you have to make sure you just take your time, otherwise you'll get stuck. Getting stuck was the reason I switched to Java.\n\nIt is only too late to start something new only after a person has already died!\n\nSo, just go for it!\n\nDon't hold back!\n\nKeep on trying, making mistakes, but go for it!\n\nGood luck! 👍Developers market in Russia is big. However, it is hard to find a remote job :D\n\nOther aspects are more or less the same as everywhere.\nIt is definitely not easy being a nerd in Arkansas, lol. Most of the dev community seems focused around Fayetteville/Little Rock, but of course I live in a smallish town (Mountain Home, was in an even smaller one a couple months ago). Learning Java has been a ton of fun so far, and it's really sticking thanks to all the practice I'm getting. Awesome to hear from another Arkansan.\n\nHi Drew. I'm 49 and been in the same boat as you. I used to be in the printing industry which was a dead end job and got made redundant twice. I then thought what the hell and went and did a computing degree which I finished 2 years ago. 
It was a good experience, but learning to code on my own was more relevant to me. From January, I am going to start freelancing after a year of front-end learning. I still have a way to go and am still only at the beginning in many ways.\n\nI hopefully still have 20 years of work in me, and we are a long time dead, so I'm just going for it and will pick myself up when I fall. Good luck my friend.\n\nRob",
"content_format": "markdown"
},
{
"url": "https://dev.to/andrewbogushevich/comment/hfpn",
"domain": "dev.to",
"file_source": "part-00936-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\n```\nnumOne = [1, 2]\nnumTwo = [1, 3, 1]\n\n// original\nlet duplicatedValues = […new Set(numOne)].filter(item => numTwo.includes(item));\n// [1]\n\n// your\nlet firstValues = new Set(numOne);\nduplicatedValues = numTwo.filter(item => firstValues.has(item));\n// [1, 1]\n```\n\nto solve this we need either to create second `Set` or to use `includes` instead of `has` \nFor further actions, you may consider blocking this person and/or reporting abuse",
"content_format": "markdown"
},
{
"url": "https://dev.to/ben/how-much-of-a-memory-impact-do-tabs-in-my-terminal-have-114l",
"domain": "dev.to",
"file_source": "part-00553-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nWhen I am running my dev server, the log lines add up quickly. It's largely stuff I don't care about unless it's what I'm currently dealing with, but it piles up.\n\nDo the thousands upon thousands of lines that build up in the tabs I have open meaningfully impact the performance of my operating system? (In my case, MacOS)\n\nI'd be curious to get a better sense of how this memory is managed and what does and does not hog resources in this regard.",
"content_format": "markdown"
},
{
"url": "https://dev.to/robertomaurizzi/comment/111ge",
"domain": "dev.to",
"file_source": "part-00755-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nNot really: you can create language-specific libraries that then do call Javascript for DOM operations and allow you to register callbacks.\n\nAfter that you can use, for example, your language decent type management, decent inheritance support, good standard library to write complex interfaces much more easily than with a language where 1+'1' is '11' but 1 - '1' is 0, has undefined, your best object extension option is to use a mixin and where you need to import half a million files to avoid reinventing the wheel... ;-)\nI am super interested in WASM but just playing devil's advocate here. I also think we could do away with the DOM and simply render in WebGL. Are you doing something in WASM already? Any language preferences there? I am considering Rust.\n\nFor further actions, you may consider blocking this person and/or reporting abuse",
"content_format": "markdown"
},
{
"url": "https://dev.to/apotonick/five-first-episodes-of-trailblazer-tales-3l2k",
"domain": "dev.to",
"file_source": "part-00591-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nFinally, I started producing short 6 minutes episodes on Youtube to learn about Trailblazer. This is actually more fun than I expected!\n\nThe first five episodes focus on refactoring a messy Rails controller action to a Trailblazer operation. Here is the link to our channel.\n\nLet us know what you think, and what is missing! In the following episodes we are going to discuss Reform, macros, nesting operations, and the new dependency injection features.",
"content_format": "markdown"
},
{
"url": "https://dev.to/aatmaj/learning-python-intermediate-course-day-18-tkinter-types-of-widgets-part-1-2i6h",
"domain": "dev.to",
"file_source": "part-00936-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\n## Today we will explore widgets in Tkinter\n\n> \n\nRecap Tkinter is the inbuilt python module that is used to create GUI applications. It is one of the most commonly used modules for creating GUI applications in Python as it is simple and easy to work with. For more information about Tkinter and it's advantages, please check out the previous part of the course.\n\n# What are widgets?\n\n> \n\nDefination A software widget is a relatively simple and easy-to-use software application or component made for one or more different software platforms.\n\nIn simple words, widgets are graphical elements like buttons, frames, textboxes etc. which are helpful in taking input from the user in graphical format. In Tkinter, Widgets are objects, that is they are instances of classes that represent buttons, frames, and so on. To make concepts simpler, just as we had treated strings and dictionaries, we can treat widgets too as as a 'button data type'. So the widgets have some data and have some methods to process that data.\n\n# Types of widgets\n\nThere are a lot of GUI widgets, both fancy and simple ones. (You can check out this wiki link for a complete description) However there are some 18 widgets which are supported by Tkinter and are of a real significance.\n\n# Let us now check out all the widgets one by one\n\n* \n\nButton\n\nOur plain old button. Python allows us to set the look and feel of the button.* \n\nCanvas\n\nThe canvas widget adds graphics to the window. It can be used to draw graphs in the application.* \n\nCheckbox\n\nCheckbox (or check-button) is an on-off type of button. The user can give binary inputs by clicking the button on or clicking the button off. The user can select one or all choices from the available list.* \n\nRadiobutton\n\nThe Radiobutton also is an on-off type of button, but the user can only select one (and only one) out of the all available choices.\n\n### Checkboxes vs. 
Radio Buttons\n\nCheckbox allows one or many options to be selected. It is used when you want to allow user to select multiple choices. A radio button is used when you want to select only one option out of several available options. It is used when you want to limit the users choice to just one option from the range provided.\n\n> \n\nRadio buttons are used when there is a list of two or more options that are mutually exclusive and the user must select exactly one choice. In other words, clicking a non-selected radio button will deselect whatever other button was previously selected in the list.\n\nCheckboxes are used when there are lists of options and the user may select any number of choices, including zero, one, or several. In other words, each checkbox is independent of all other checkboxes in the list, so checking one box doesn't uncheck the others.\n\nMore on Checkboxes vs. Radio Buttons By Nielsen Norman Group\n\n## To be continued.....\n\nImage credits- All images in the text are my own screenshots.",
"content_format": "markdown"
},
{
"url": "https://dev.to/lorenzotinfena/universal-data-science-template-for-python-and-jupyter-for-now-11kp",
"domain": "dev.to",
"file_source": "part-00315-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nHi everyone! I'm new in this community.\n\nI have little experience with python and data science in general but i would like to start with a strong foundation so i decided to try doing this little project.\n\nNow if someone that want to do some algorithm in datascience, like machine learning with Tensorflow (in this case I use python language), he has to set up an environment, \"classic\" with Python stock, and install pip, and with pip install dependencies. But for virtual environments exist conda, that is beatiful! But has some limitation, like cross platform, and collaboration, (suppose that your friend try your code with a single library version different), and you have to choose between \"conda packages\" or \"pip/pyenv packages\"? And exists also other environment managing tools...\n\nThe project is based on VSCode, and VSCode remote container extension, that uses docker (that for windows10 use WSL2)\n\nI would like to make this project with these main points:\n\nScalable, Generic (not python environment, or conda environment, or R environment), simple (simple -> scalable, just better xd), complete, cross platform, updated, and collaborative (you can share your project to everyone and everyone has your exactly enviroment)Link to project:\n\nhttps://github.com/LorenzoTinfena/data-science-vscode-remote-development-template\nNow is very very basic, and the main part for now is the idea, but if someone want to take part, is welcome :)",
"content_format": "markdown"
},
{
"url": "https://dev.to/rubenwap/will-this-quick-julia-lesson-make-you-forget-about-python-for-your-data-needs-4d1c",
"domain": "dev.to",
"file_source": "part-00514-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nI have been a Python heavy user for some time, so when I learned about Julia I thought that the prospect it was introducing was very interesting. Simplicity in the syntax but execution power closer to C, but maybe the obstacle to beat was too high. Python already has massive adoption, there is a huge ecosystem with modules for any task you can think about and even people from the R world seem to be migrating to it. Also, Matlab seems to keep having strong presence in Academia. So, what is the incentive to change? Will Julia arise and become a main actor in the scientific computing community?\n\nIndeed, it is already quite relevant if you look at the case studies you will see how companies such as NASA, CISCO or PFIZER are already using it in serious projects.\n\nWhile I don't suggest to throw all your Python project out the window and rewrite everything in Julia, I think the language is interesting enough to merit a quick look at its functionality. So if you have heard comments on unstability or packages not mature enough, you can draw your own conclusions after giving it a try.\n\n## Installation\n\nAfter you install Julia from their site (or if you are on mac, doing `brew cask install julia` ), I am going to assume that you also want a notebook environment. For that you have two options. You can install the desktop app Nteract, or if you are so inclined, stick with the classic Jupyter Notebook (or Jupyter lab) for which you need to have Python installed, so you can do `pip3 install jupyterlab`.When you have your notebook application, you still need the actual Julia notebook kernel, so go to your terminal and type `julia`. 
You should see this:\n\n```\n               _\n   _       _ _(_)_     |  Documentation: https://docs.julialang.org\n  (_)     | (_) (_)    |\n   _ _   _| |_  __ _   |  Type \"?\" for help, \"]?\" for Pkg help.\n  | | | | | | |/ _` |  |\n  | | |_| | | | (_| |  |  Version 1.5.1 (2020-08-25)\n _/ |\\\\__'_|_|_|\\\\__'_|  |  Official https://julialang.org/ release\n|__/                   |\n\njulia>\n```\n\nNow type the `]` symbol, and the prompt will change to this: `(@v1.5) pkg>` \nSo you can type\n\n `(@v1.5) pkg> add IJulia` \npress enter, and observe how the kernel package gets installed. If you had already opened your notebook before, it's pretty likely that you need to close it and open it again so it detects the new Julia package.\n\n## Getting started with the language\n\nThe basic things are what you might expect:\n\n```\n# Printing\nprintln(\"Hello\")\n\n# Variables\nmy_name = \"Ruben\"\n\ntypeof(my_name) # String\n\n# Arithmetic\nsum = 3 + 7\ndifference = 10 - 3\nproduct = 5 * 5\nquotient = 100 / 10\npower = 10 ^ 2\nmodulus = 101 % 2\n\n# String interpolation\ninterp = \"My name is $my_name\"\n```\n\nSome interesting quirks:\n\n```\n# Note that single quotes denote a character\ntypeof('a') # Char\ntypeof(\"a\") # String\n\n# Concatenations can force transformations\n# Like in this case, from int to string\n\nstring(10, \" can also be converted\")\n# outputs: \"10 can also be converted\"\n```\n\nSome nice structures:\n\n```\n# Dictionary\nphonebook = Dict(\"Ruben\" => 555234898, \"Santa Claus\" => 00358787876)\n\n# Tuple\nanimals = (\"dog\", \"donkey\")\n\nanimals[1] # Outputs \"dog\" because Julia is 1-indexed!!!!\n\n# Mixed types\nfarm2 = [\"chicken\", \"hen\", \"horse\", \"cow\", 56]\n```\n\n## Control structures and functions\n\n```\n# classic while. What you would expect\nn = 0\nwhile n < 10\n n += 1\n println(n)\nend\n\n# the for is quite pythonic\nfor n in 1:10\n println(n)\nend\n```\n\nNow something new. Look at this way to create matrices, either populated or empty. 
We can do it with either random data or specific data:\n\n```\n# a 5-row, 3-column matrix with random data\nr, c = 5, 3\nA = rand(r, c)\n\n# same but with zeroes\nZ = zeros(r, c)\n```\n\nBecause of this:\n\n```\n# We can have a for loop over both dimensions of that matrix\nfor i in 1:r, j in 1:c\n A[i,j] = i + j\nend\n\n# Or something looking like a Python list comprehension\nC = [i + j for i in 1:r, j in 1:c]\n```\n\nFinally, functions can be defined in different ways\n\n```\nfunction hi(name)\n println(\"Hi $name\")\nend\n\nhi2(name) = println(\"Hi $name\")\n\nhi3 = name -> println(\"Hi $name\")\n\n# and you would call them as you expect\nhi(\"world\")\nhi2(\"world\")\nhi3(\"world\")\n```\n\n## What about dataframes\n\nIf you are going to work with dataframes, you need to know how to install (and import) packages. We have already seen one way to install (when we added IJulia earlier on).\n\nAnother way is from within your code:\n\n```\nusing Pkg\nPkg.add(\"CSV\")\nPkg.add(\"DataFrames\")\n```\n\nAlthough you might prefer the other option (via the terminal), so that you don't execute this every time you run your code. Normally, when you ship a Julia project, you would have a `.toml` file with all the dependencies, so you can grab them at once by issuing the `] instantiate` instruction.\nOnce you have installed the DataFrames module (and CSV if you need it), we can create DataFrames from files or from our own data constructions. For the example, I am using this file\n\n```\ndf = DataFrame(CSV.File(\"sp500.csv\"))\n```\n\nYou can use pretty familiar functions to get the first and last rows:\n\n```\n# 3 is how many rows you want to display\nfirst(df, 3)\nlast(df, 3)\n```\n\nYou can get the column names\n\n `names(df)` \nAnd select specific columns\n\n `df[!, \"column_name\"]` \nSelecting rows uses the array index for what you need. For example, this selects the first 5 rows and the first 3 columns\n\n `df[1:5, 1:3]` Or you can get very specific when searching. 
Just show me the rows where the symbol is either `MMM` or `ABT`:\n\n```\nfilter(row -> row.symbol in [\"MMM\", \"ABT\"], df)\n```\n\n## To summarize\n\nFrom this article you could think that the biggest plus of Julia is its very nice syntax, but the truth is that I am only scratching the surface; its real benefit runs much deeper than that, and it's not only a matter of speed (though that too).\n\nTake a look at the article on Why We Created Julia to get a clearer view.\n\nJulia seems a good solution to the two-language problem with Python (if you want speed, you need to drop down to C extensions), it has modules for most tasks you might need, especially in the scientific computing area (including ML), and it feels stable and well maintained.\n\nIf you are a happy Python/R/Matlab user, you might not switch overnight, but I still think it's a very fresh and interesting proposition.",
"content_format": "markdown"
},
{
"url": "https://dev.to/mojemoron/what-the-heck-is-unpacking-and-packing-of-sequences-in-python-ifp",
"domain": "dev.to",
"file_source": "part-00936-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nIf you are new to python or have been programming in python and you haven't started using Packing and Unpacking this article is for you.\n\nPython provides additional convenience involving the treatments of tuples and other sequence types.\n\nWhat the heck is Packing\nIn Python, a series of comma-separated objects without parenthesis are automatically packed into a single tuple. For example, the assignment\n\n `fruits = apple,orange,lemon` results in the variable fruits being assigned to the tuple (apple, orange, lemon). This behavior is called the automatic packing of a tuple.\n\nAlso, another common use of packing is when returning multiple values from a function. like so `return x,y,z` \nthis packs the return values into a single tuple object (x,y,z).\n\nSo what the heck is Unpacking in Python?\n\nIn the same light, Python can automatically unpack a sequence,\n\nallowing one to assign a series of individual identifiers to the elements\n\nof the sequence. As an example, we can write\n\n```\na, b, c, d = range(1, 5)\na, b, c, d = [1,2,3,4]\n```\n\nwhich has the effects of a=1,b=2,c=3,d=4. For this syntax, the right-hand\n\nside expression can be any iterable type, as long as the number of variables on the left-hand side is the same as the number of elements in the iteration.\n\nif they aren't equal an error will be raised:\n\n```\na, b, c = [1,2,3,4]\nValueError: not enough values to unpack (expected 3)\n```\n\nUnpacking can also be used to assign multiple values to a series of identifiers at once\n\n `a, b, c = 1, 2, 3` \nThe right-hand-side is first evaluated i.e packed into a tuple before unpacking into the series of identifiers. 
This is also called a simultaneous assignment.\n\nAnother common usage of simultaneous assignment which can greatly improve code readability is when swapping the values of two variables like so:\n\n `a, b = b, a` \nWith this command, a will be assigned to the old value of b, and b will be assigned to the old value of a. Without a simultaneous assignment, a swap typically requires more delicate use of a temporary variable, such as\n\n```\ntemp = a\na = b\nb = temp\n```\n\nYou can also get the rest of a list like so:\n\n```\na, b, c, *d = [1, 2, 3, 4, 5, 6] \na => 1\nb => 2\nc => 3\nd => [4, 5, 6]\n```\n\nIn conclusion, using packing and unpacking during development can greatly improve the readability of your code and also solve some common issues with sequence assignment.\n\nIf you have any questions, kindly leave your comments below.\n\nKindly follow me and turn on your notification. Thank you!\n\nHappy coding! ✌",
"content_format": "markdown"
},
{
"url": "https://dev.to/softwaremill/what-is-the-best-way-to-contribute-to-open-source-a38",
"domain": "dev.to",
"file_source": "part-00553-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nIf you were to write down the growth path of a developer, you might end up with something like:\n\n👶 Junior Developer\n\n💻 Mid Developer\n\n🚀 Senior Developer\n\n🧙 Software Architect\nBut how does this growth actually happen?\n\nIf we examine the true drivers, it might turn out that developers grow by spending time writing code and documentation, collaborating with other developers, using various stacks, and solving different problems.\n\nEven though you might have many years of commercial experience in your resume, your public GitHub profile has a huge impact on your software career. It’s your business card in the software world. Not convinced? Check out my previous post Why contribute to open source?\n\nContributing to open source projects might be intimidating, but it’s worth getting out of your comfort zone and getting involved. Don’t let inexperience in a given tech stack or a library, or any other difference hold you back!\n\nHere are just 3 steps, the best and quickest way, to become an open source contributor.\n\n# How to contribute to open source?\n\n## Say you'd like to get involved\n\nStart from picking the stack and go to “the chats”. To make it easier to establish contact, oss maintainers always set up a dedicated slack or gitter channel or a discord server, or more and more popular, Matrix. Approach library maintainers there and say you'd like to get involved.\n\n> \n\nHere’s a list of some popular Scala and Java projects and their communication channels:\n\n* Zio - discord server\n* Cats - gitter channel\n* Akka - gitter channel\n* Kafka - Jira tickets\n* Scala - Scala contributors forum\n* Spark - gitter channel\n* OpenJDK - census\n* Hibernate - follow this guide\n* Apache Commons - everything happens on this mailing list\n* Micronaut - gitter community\n* Quarkus - GitHub page\n* Spring - GitHub page\n\nThere are also developer tools that need contributions. 
The general rules here are the same.\n\n> \n\nScala Metals - gitter chat\n\nZio intelliJ - #zio-intellij on the ZIO Discord\nGitHub Docs encourage library maintainers to apply the good first issue label to issues in their repositories to highlight opportunities for people to contribute. When looking on GitHub for something to match your needs, search for issues and pull requests, go to issues in a repository, and narrow the results using the label qualifier.\n\nAnother shortcut to finding your first repository to contribute to is checking the list created by Masato Ohba. Using his script, he puts plenty of issues waiting for beginners' contributions into this spreadsheet.\n\n## Do not be afraid of the docs\n\nDocumentation and usage examples might sound boring, but they are a good way to get to know the project and contribute something useful. You’d be surprised how many people give up on using a project because they can’t find a good, working (!) example, or a step-by-step tutorial aimed at beginners. Working on docs will allow you to learn the very basics of the software in question. It will make it easier for you to make meaningful contributions to the code later.\n\n> \n\nImportant information for first-timers: always start from the README.md file in the repository, which contains rules on how to use and set up the project, and (if present) CONTRIBUTING.md, which outlines how to collaborate on the project.\n\n## Improve the tools you already rely on\n\nIt’s much easier to make time for things that are a part of your current work assignment.\n\nIf you’re using open source software, a library, or a framework, and you find a bug in it, open an issue. Make actual contributions and take it as an opportunity to demonstrate value, build trust, and gain skills. Even a small patch that gets merged will build up your confidence. 
And finally, you'll be able to use it in your project, and ship things faster.\n\nThe majority of projects follow a standard fork and pull model for contributions via GitHub pull requests.\n\n> \n\nAn ideal path to meaningful contribution can look like this:\n\n* Find an issue\n* Let library maintainers know you are working on it\n* Build the project with your contributions\n* Write tests, docs and examples\n* Submit a pull request\n\n## What will be your next Scala open source project?\n\nScala 3 is just around the corner. According to the Scala programmers who took part in the Scala Developers' Survey, which we ran at the turn of the year, there are many programmers who are quite content with their choice of libraries and frameworks. But they also point out areas that could be improved.\n\nWhen analyzing the survey entries, we identified the projects that attract the most contributors. These are Zio and Akka.\n\nThis is mostly in line with the most liked libraries out there. And it probably shouldn’t be surprising: both ecosystems are growing and hence there’s a lot of work to be done around them.\n\nMost popular Scala OSS projects\n\n* Zio\n* Akka\n* Scala\n* Cats\n* http4s\n* Play\n* Spark\n* Tapir\n* Alpakka\n> \n\nThe results of the survey clearly show that the Scala community is engaged in OSS, and that there’s always a place for more contributors.\n\nWhen it comes to specific pain points, we concluded that these general areas need improvements:\n\n* Relational databases\n* Graphics and UI libraries\n* ML and data science projects\n* Documentation\n* JSON parsing\n* Security and a unified observability approach\nThe Scala community needs you!\n\nHelp us improve it.\n\nContribute.\nWant to find out more about the developers' expectations for Scala 3 and their view on the future of Scala? Download the Scala 3 Tech Report!",
"content_format": "markdown"
},
{
"url": "https://dev.to/nakulkurane/backing-up-my-logic-pro-projects-to-aws-s3-part-1-2051",
"domain": "dev.to",
"file_source": "part-00046-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nOn a whim, just to play with Python and AWS, I thought of building a script to backup my Logic Pro projects to AWS S3. The premise of the script is that it will check a directory on my Macbook and upload any files that have been modified since the last time the file was uploaded to S3. This will run via a cron job (which, yes, only runs if my Macbook is awake). Anyway, let’s get started.\n\nAs for how I built it, some prerequisites:\n\n* PyCharm or any IDE\n* Python 3.0+\n* AWS account (free tier should suffice for testing)\n* Basic understanding of Python/programming\n\nThis post won’t go into every code snippet as you can find it on my GitHub (coming soon), but will walk through the steps I took to assemble the components needed.\n\n## Objective\n\nCompare files in a local folder with the respective files in S3 and zip and upload the local file if the local file is later than the one on S3 (last modified date time)\n\n# Part 1\n\nIn Part 1 of implementing this, we will simply go over how to connect to S3 via Python and how to get the last modified date time for a later comparison.\n\nSo, we’ll go through the snippets for these steps and/or you can skip all of this and just check out the code on GitHub (coming soon).\n\n## Connecting to S3\n\nHow to connect to AWS S3? Fortunately, AWS offers a SDK for Python called boto3, so you simply need to install this module and then import it into your script.\n\nI am using Python 3.7 and pip3 so my command was `pip3 install boto3` \nImport the following:\n\npython\n\n# import AWS modules\n\nimport boto3\n\nimport logging\n\nfrom botocore.exceptions import ClientErrorYou also need to install the aws cli module (AWS command line interface) so go ahead and do that with a `pip3 install aws cli`.\nNow, you will need to configure this aws cli tool to connect to your AWS account, so you should have an IAM user and group for the cli. 
Go to the IAM service in your AWS console to create them.\n\n### Creating IAM User and Group for AWS CLI\n\nFirst, create a Group and attach the AdministratorAccess policy. Then, create a User and assign the User to the Group you just created. Continue to the end of the wizard and eventually you will see a button to download a CSV which will contain your credentials. These will be the credentials you use to configure your AWS CLI.\n\nHave the Access Key ID and Secret Access Key ready and, in your Terminal, type `aws configure`. You will then be asked for the Key ID and Access Key, so simply paste the values from the CSV. You will also be asked for a region name (input the region where your S3 buckets are). The last input field asks for the output format (you can just press Enter).\nGreat! Now your AWS CLI should be configured to connect to your AWS S3 bucket(s).\n\n### Create S3 Client and Resource\n\nNext, we need to create a client and a resource for accessing S3 methods (the client and the resource offer different methods based on usage; you can read more here).\n\n```python\n# CREATE CLIENT AND RESOURCE FOR S3\ns3Client = boto3.client('s3')\ns3Resource = boto3.resource('s3')\n\n# object for all s3 buckets\nbucket_name = '' # NAME OF BUCKET GOES HERE, HARD CODED FOR NOW\n\n# CREATE BUCKET OBJECT FOR THE BUCKET OF CHOICE\nbucket = s3Resource.Bucket(bucket_name)\n```\n\n## Retrieve an object's last modified date time (upload time)\n\nOk, so what needs to be done to retrieve an object’s last modified date time?\n\nI’ve written a method to loop through the objects in a bucket and read an AWS attribute to get the last modified time, shown below (yes, this can be written other ways as well).\n\n`bucket.objects.all()` returns a Collection which you can iterate through (read more on Collections here).\nSo you can see I began looping through that and only reading the last_modified attribute if the S3 object contained Logic_Projects and .zip in the key name.\n\nIf you’re not 
familiar, the key is simply how S3 identifies an object. I am looking for Logic_Projects because I made a folder with that name, and I am also checking for .zip just in case I upload something else to the folder by accident (this script will only upload zips to that folder, so it’s just a safety check).\n\nIf it passes those checks, then I proceed to the date time conversions.\n\nYou’ll notice I am calling two other methods in this loop, called utc_to_est and stamp_to_epoch, before I finally return a value. That is to make datetime comparisons easier later on. The last_modified attribute returns a date in UTC format, so it could look like this: 2019-12-07 19:47:36+00:00.\n\nWhen I am comparing modification times, I’d rather just compare numbers. The date AWS returns is in UTC, also referred to as GMT (Greenwich Mean Time).\n\nSo I added a function to convert this datetime to Eastern Standard Time, shown below:\n\nNow that we’re in the right timezone, I want to convert it to a number, so I converted it to an Epoch time, which is the number of seconds that have elapsed since 00:00:00 UTC on January 1, 1970 (why the Epoch is this date is not in the scope of this post).\n\nOK, that's all for Part 1! In this post, we've connected to AWS via Python (boto3) and we've created a method to get the last modified time (in seconds since the Epoch, and in EST) of an object in S3.\n\nIn Part 2, we will go over comparing the last modified time of a file in S3 with the last modified time of that file locally.\n\nThis post also appears on Medium (link below).",
"content_format": "markdown"
},
{
"url": "https://dev.to/nelsongc/five-coding-streams-to-watch-on-twitch-11o5",
"domain": "dev.to",
"file_source": "part-00046-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nTwitch has exploded in popularity over the last few years. With the growing amount of E-Sports and music streamers, we have also seen a niche group of people start to stream our favourite subject: programming.\n\nThere are developers of all skill levels and tech stacks on the platform, meaning that no matter if you are a beginner, Lead Developer, JavaScript lover or Python fan, there is something for you.\n\nThe best of the streamers showcase varied, interesting projects and bring character to their content, engaging with their chat and answering questions - meaning you can become part of their coding community too! I have learned many new skills and joined in several developer Discord communities due to Twitch.\n\nTo get you started, here are five great coding streams to check out:\n\n# 1. whitep4nth3r\n\nSalma, AKA whitep4nth3er, loves her front-end development and TypeScript/JavaScript builds. One of the most engaging and genuine streamers I have watched, there is always something to learn. Her stream is an amazing place to see the real process of a software developer and to join a fun and engaged community. Some cool projects I have seen her work on include the Twitch stream overlays on her channel, and tasks for her software developer job.\n\n# 2. GoPirateSoftware\n\nHighlighting the awesome and wide breadth of coding streams you can find on Twitch, GoPirateSoftware, AKA Thor, is a game developer who streams his work on the indie game Heartbound - available in early access on Steam. His community is actively involved in the development of the project and truly highlights how valuable Twitch can be - for those watching and the streamers as well! Learn in real time how a story driven indie game is built, and all the fun challenges along the way!\n\n# 3. 
Coding Garden\n\nHosted by CJ, Coding Garden is an open, interactive and engaging community where any coder, from beginner to veteran, can learn and grow together. His streams are often easy-to-follow tutorials and mini-projects, and there is always something new you can pick up from what he is working on. Not to mention an active chat community each stream and the Coding Garden YouTube channel with past tutorials.\n\n# 4. ThePrimeagen\n\nIf you are looking to watch an expert developer, then you have come to the right place. A Netflix employee, ThePrimeagen is often showcasing different coding tasks associated with his work, taking his audience on various creative tangents, or tackling coding challenges like the Advent of Code.\n\n# 5. TheAltF4Stream\n\nA passion project started by two friends, AT0TA and Blackglasses, TheAltF4Stream is a variety coding channel that is all about building a great online community around programming and games. You'll often see either of them taking on a new language, working on a mini-project, or discussing the current news, frameworks and more in tech as they develop.\n\n# Explore Twitch for more coding goodness\n\nThese are just some of the streamers I have found during my time on Twitch. Check out the Science and Technology channel to find plenty of others. If you have a favourite of your own, add a comment below.\n\nThanks for reading, and follow my profile if you enjoyed the article! :)",
"content_format": "markdown"
},
{
"url": "https://dev.to/joywinter90/what-is-an-agile-sdlc-model-and-what-are-its-advantages-4phm",
"domain": "dev.to",
"file_source": "part-00046-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nIf you have any experience as an engineer in the past two decades, you’ve probably heard the term “Agile” being used quite a bit. There are a lot of different versions and adaptations of Agile according to business needs.\n\nWhile some companies prefer to stick with traditional software development methods (as they are more convenient and members are used to it), others prefer using more secure, flexible, and high-quality software producing methods like Agile.\n\nThe need to adopt the Agile model has stemmed from the fact that, over the years, as technology has evolved, so have customer needs and expectations. Traditional software development methods are no longer as efficient and effective as they once used to be for many organizations. Constant demands for better features and updates have changed the software development industry and different development approaches were needed.\n\nBut what is an Agile SDLC model? Why should you adopt it? Is it an affordable solution that even small to medium scale businesses (SMBs) can adopt? Let’s take a closer look.\n\nWhat is an Agile SDLC Model?\n\nFirst of all, SDLC stands for Software Development Life Cycle. An Agile SDLC model is a combination of incremental and iterative software development models that focuses on delivering high-quality software continuously while reducing project overhead and increasing business value.\n\nThe project progresses in regularly iterated cycles, known as “sprints,” which usually last two to four weeks but can be longer or shorter depending upon need. Every iteration requires cross-functional teams working in collaboration on various aspects like planning, requirement analysis, design, coding, unit testing, security testing, integration testing, etc.\n\nThe Agile model manifesto promotes software development in small, quick steps. 
It is based on continuous iterations of software, which allows companies to release updates to users more frequently.\n\nTraditional SDLC vs Agile SDLC Model: A Comparison\n\nThe primary difference between a traditional SDLC and an Agile SDLC is the sequence of project phases. In traditional development methodologies, the sequence of the project development process is linear, whereas in Agile it is iterative, but with short iterations (older iterative SDLCs often had iterations that were many months long).\n\nIn a traditional SDLC, the software development team has to make a detailed overview of all the requirements that might come up in the future in terms of design and development of the software. This makes a traditional SDLC more challenging and time-consuming.\n\nOn the other hand, the Agile SDLC model is quite flexible: the Agile software development team determines the scope of required changes based on customer needs and goes through the cycle of analysis, design, development, and testing before every release. This allows the team to release small changes into the production environment instead of releasing a single major update.\n\nAs far as security is concerned, both the traditional and the Agile SDLC models can handle it well or poorly. 
You just need to plan on involving security early in both lifecycles.\n\nBenefits of the Agile SDLC Model\n\nHere are some of the top benefits of the Agile SDLC model that you should know about:\n\nStakeholder Engagement\n\nOne of the major benefits of an Agile SDLC is that it provides several opportunities for stakeholders and team members to engage with each other - before, during, and after each sprint.\n\nAgile offers a unique opportunity for clients to be more involved throughout the software development life cycle, from the design phase to prioritizing features, iteration planning, and review sessions to the final release.\n\nBy involving your client in every step of the software development life cycle, you potentially increase collaboration between the team and the client, thereby providing more opportunities for the team to better understand the client’s expectations.\n\nContinuous delivery also builds the client’s trust in the ability of the team to deliver high-quality working software, encouraging them to be more engaged with the organization.\n\nPredictable Costs and Schedule\n\nSince each sprint is of a fixed duration, companies can predict the cost of the entire project and limit the amount of work the team can perform during a fixed schedule. With a fixed number of sprints, the company can estimate the development team’s velocity, the project timeline, the budget, and the product backlog.\n\nOf course, this depends on completing all the tasks planned for each sprint; as with traditional methodologies, issues arise and projects can definitely run late and over budget. However, these issues may be detected earlier in Agile projects.\n\nIf the ROI outweighs the cost of the project, then a company may decide to take the project further. 
However, if the ROI does not meet the company’s expectations, they can easily predict this and understand whether a project is feasible and, more importantly, profitable for the organization.\n\nIn addition, companies also consider the client’s estimates and needs, which improves decision making about the need for additional iterations and the priority of features.\n\nTeam Efficiency\n\nAnother popular benefit of the Agile SDLC model is that Agile teams are known to be highly efficient at completing projects. Since Agile teams share a collaborative environment, the efficiency gains tend to ripple outward as well. Usually, this is due to improved communication of needs, which minimizes the degree of rework.\n\nScalability\n\nTwo major deal breakers when companies decide whether they want to take on a project are time and cost. The decision comes down to questions like:\n\n• How long will the project take to complete?\n\n• What will it cost?\n\n• Is it worth the initial investment?\n\n• What is the ROI of the project in the long run?\n\n• How can we best utilize the resources and people available at hand?\n\nThe last question is the most important, as it holds a lot of value for companies who still struggle with predicting the feasibility of a project or understanding their team’s capabilities. Agile SDLC provides a way to identify the key stakeholders, determine a project’s viability, and identify whether the project will scale well as the company grows.\n\nFocuses on Business Value\n\nBy breaking down the silos and adopting an Agile SDLC, companies can focus more on business value instead of software development issues. 
This is because the Agile SDLC model lets the team understand what’s most important for the client’s business and what their priorities are.\n\nOnce they gain an understanding of these things, they are able to deliver features that are just right for their clients’ business and provide the most value.\n\nImproves Quality\n\nAn Agile SDLC produces high-quality software because testing, including end-user feedback, happens during the early stages of the SDLC to ensure that the product is released in the desired state. It also helps specialists such as security experts identify and address security vulnerabilities early in the development phase, provided they are engaged early.\n\nTakeaways\n\nAgile SDLC is an excellent software development method for businesses that constantly release software to meet customers’ needs and client requirements. One of the major benefits of an Agile SDLC is that it promotes cross-functional team collaboration and feedback sharing. This means different teams work in tandem to create better quality software while aligning with client requirements.\n\nThis post was originally published at CypressDataDefense.com.",
"content_format": "markdown"
},
{
"url": "https://dev.to/aryakris/learn-to-code-big-o-notation-21pf",
"domain": "dev.to",
"file_source": "part-00204-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nBig O Notation describes the complexity of code in algebraic terms. In other words, Big O Notation tells you how efficient your algorithm is!\n\nHow is the efficiency of an algorithm measured?\n\nIn order to measure the efficiency we take into consideration 2 factors.\n\n* \n\nTime Complexity - the time taken to run the code completely\n\n* \n\nSpace Complexity - the extra space that is required in the process\n\nIt is denoted as -\n\n `O(f(n))` , where `n` is the input size and `f(n)` describes how the amount of work grows with it. \nBig-O notation shows how much time or space an algorithm would require in the worst-case scenario as the input size grows.",
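To make the idea concrete, here is a small Python sketch (my own example, not from the post) contrasting an O(n) loop with an O(1) formula for the same task:

```python
def sum_linear(n):
    # O(n) time: touches every number from 1 to n once
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_constant(n):
    # O(1) time: a closed-form formula does the same work for any n
    return n * (n + 1) // 2

print(sum_linear(100))    # 5050
print(sum_constant(100))  # 5050
```

Both return the same answer, but doubling `n` doubles the work in `sum_linear` while `sum_constant` stays flat; that growth rate is exactly what Big O captures.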
"content_format": "markdown"
},
{
"url": "https://dev.to/tracycss/code-newbie-codeland-connections-4dfg",
"domain": "dev.to",
"file_source": "part-00553-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nHey I am Tracy. I would love to connect to all code newbies going through #100daysofcode. Drop your social accounts in the comment section and let's follow each other and grow to be amazing developers.\n\n# You can follow me on\n\n# some of the PDFs from the speakers\n\n* Saron Yitbarek - What I learned from 6 years of building CodeNewbie.\n* Erika Heidi - Art of Programming.\n* Josh Puetz - Salary Negotiation for people that hate to negotiate.\n* Paula de la Hoz - Freedom of Security\n* Vaidehi Joshi - The cost of data\n* Nick Palenchar - The Whale, the Container and the Ocean - A Docker Tale\n* Ben Halpern - why Forem is special\n* Rukia Sheikh-Mohamed - 5 Steps to Getting Unstuck",
"content_format": "markdown"
},
{
"url": "https://dev.to/sanmiade/day-72-of-100-days-of-swiftui-2m3k",
"domain": "dev.to",
"file_source": "part-00315-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nI just completed day 72 of 100 days of SwiftUI. Today, I implemented saving of annotation pins into a document using `Codable` and the `FileManager` class. I also added Face ID authentication to the BucketList app so users can only use the app after being authenticated.",
"content_format": "markdown"
},
{
"url": "https://dev.to/ronsoak/the-r-a-g-redshift-analyst-guide-how-does-it-work-2mee",
"domain": "dev.to",
"file_source": "part-00471-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nWelcome to the R.A.G, a guide about Amazon's Redshift Database written for the Analysts out there in the world who use it.\n\n### Previously on the R.A.G....\n\n# So how does it work?\n\nRedshift works by using Clusters, Nodes, Slices, Blocks, Caching, and Workload Management.\n\nLet's first run through what these all are at a top level and then I'll run through how they all work together.\n\n### ▪️ Clusters\n\nYou hear 'Cluster' a lot in regards to Redshift but it's really not that important for you to stress over. 1 Cluster = 1 Redshift. A large majority of companies will just have one Redshift Data Warehouse, ergo just one cluster. A Cluster just refers to the cluster of nodes within it. That's about all an analyst needs to know. :)\n\n### ▪️ Nodes\n\nEvery Redshift Cluster has two types of nodes. It has only one Leader Node, and anywhere between 1 to 128 Compute Nodes, though in a single-node cluster that lone node is both the Leader and the Compute.\n\nThe Leader Node is the manager. All requests and actions go through the Leader Node, which takes a look at what's being asked of it and works out the best way to execute the task across its Compute Nodes.\n\nCompute Nodes, as mentioned in the previous article, are discrete units of both processing and storage, and so bits of tables will be stored on these nodes.\n\nThis is, in part, how Redshift can work so fast: with processing and storage split out amongst several nodes, each node is doing a smaller part of the whole job.\n\n### ▪️ Slices\n\nSlices are where the actual data is stored on the nodes. Compute Nodes can have 2, 16, or 32 slices.\n\nData is stored on the slices from the bottom of the slice up. So if you build a table with at least one sort key, it will also store that data on the slice in that sorted order. 
Slices can contain thousands of bits of thousands of tables.\n\n### ▪️ Blocks\n\nSlices are broken down into blocks. Each block is 1MB in size. You could imagine that one cell in a table is a Block, however most data that you would find in a cell would only make up a few bytes in size, and so Redshift will attempt to fit as much data as it can inside a 1 MB Block.\n\nThis is also one of the things that can make Redshift so quick. Redshift stores, in a meta-data store, the minimum and maximum value of the information inside a Block, which allows the Compute Node to skip blocks of data that it doesn't need without actually looking at the underlying data.\n\nThis is where compression, something we will cover later, really can help. Fitting more data into each Block can, with the right sort key, tremendously speed up how quickly you'll get your results.\n\n### ▪️ Caching\n\nRedshift has two kinds of caching: Execution and Results.\n\nWhen you first run a query the Leader Node will construct an execution plan, which is its best guess of how to run the query efficiently based on what it knows about where and how the data is stored. It will then store that execution plan so that if you re-run the query again in the future, the Leader Node doesn't have to make a new plan, and will instead just use the one stored in cache, thus saving time. This means that your first run of a query shouldn't be used as a benchmark.\n\nRedshift will also store the results of a query in short term memory, so if you re-run the same query or someone else runs the same query then Redshift will provide the results stored in the cache rather than running the query. 
This can be really useful when querying result sets in the billions and then you accidentally close your session.\n\n### ▪️ Workload Management\n\nWorkload Management, or WLM, is how Redshift manages its resources.\n\nWLM decides how many people can run a query at the same time and can determine how much CPU or RAM you can use up in your query.\n\nThis is all managed through user groups and queues.\n\nDepending on how your Redshift Cluster is set up, this may impact you. It's not uncommon to have queues dedicated to super user tasks or to systems that feed into or out of Redshift, and so you may end up with a conservative amount of resources left over for you and your fellow analysts.\n\nWhat this can mean is that WLM prevents you from really experiencing how fast Redshift can be, as your query may have to wait until another query has finished running OR it may be killed by WLM for consuming too much of the allotted resources.\n\nHowever it's not all doom and gloom. Redshift out of the box has a 'small queries queue', which is its own separate queue just for 'select * from table limit 50' style queries, meaning you don't have to wait for a colleague's large query, or an ETL load, before you can get your results.\n\nRedshift can also be configured to manage workload on the fly, not needing DBAs to set any hard rules, instead relying on Redshift's own Machine Learning to predict incoming query run times and assign them to the optimal queue for faster processing.\n\nFundamentally, your results may vary.\n\n# Right, so let's put it all together.\n\nThe key to how Redshift can work so fast is in the above configuration.\nLet's work through some hypothetical scenarios with my imaginary, above pictured, Redshift Cluster with four two-slice nodes. 
In my imaginary Redshift cluster is a table containing car makes and models.\n\n### Scenario One: Car Color.\n\nSay I tell Redshift to store all the white cars on Node#1, red cars on Node#2, yellow on Node#3, and black on Node#4. If I run a query that wants all the red and all the yellow cars then that query is going to go to the Leader Node, which knows which color cars are on which nodes and so it only communicates with Node#2 and Node#3. Bam! It all came back super quick, and Node#1 and Node#4 were never disturbed.\n\nHowever, having spread my cars out by color, if I then ran a query looking for all Audis, the Leader Node is forced to ask every node to 'have a look' to see if they have any Audis. As you can imagine, this wouldn't work so fast.\n\n### Scenario Two: Manufacture Date.\n\nLet's continue with the cars being spread by color and let's also say that we have sorted the car data on the nodes by the year they were first manufactured, which means the blocks in the slices are ordered from the bottom up. If I run a query looking for all cars manufactured after 2000 that were not black, that request will again go to the Leader Node, which knows to exclude the fourth node. That request then gets passed to the Compute Nodes, which know from the background meta-data tables that cars manufactured after 2000 don't start until 60% up the slice, and so immediately jump to the top end of the slice and start reading values to return. 
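That jump-straight-to-the-right-blocks behaviour can be sketched with a toy Python model of the min/max Block metadata (my own illustration; this is not Redshift's actual code):

```python
# Each "block" keeps min/max metadata about the manufacture years inside it,
# so a scan can skip whole blocks without ever reading their rows.
blocks = [
    {"min": 1990, "max": 1999, "years": [1991, 1995, 1998]},
    {"min": 2000, "max": 2009, "years": [2001, 2004, 2007]},
    {"min": 2010, "max": 2019, "years": [2012, 2015]},
]

def manufactured_after(year):
    blocks_read, hits = 0, []
    for block in blocks:
        if block["max"] <= year:   # whole block is out of range: skip it
            continue
        blocks_read += 1           # only now do we touch the actual data
        hits.extend(y for y in block["years"] if y > year)
    return blocks_read, hits

blocks_read, hits = manufactured_after(2000)
print(blocks_read)  # 2 -- the 1990s block was never read
print(hits)         # [2001, 2004, 2007, 2012, 2015]
```

Scale the same idea up to millions of 1 MB blocks and that min/max check is what saves the Compute Nodes from scanning everything.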
Again an example of how this architecture can speed things up.\n\nHowever, if each row also had a decommissioned date, and we wanted all the cars that were decommissioned between 1980 and 1995 and were not black, we still get that speed increase from the Leader Node knowing to avoid the fourth Node. However, the Compute Nodes are forced to scan every block in their slices for values between 1980 and 1995, because we sorted instead by Manufacture Date. You can define multiple sort keys, but I'll cover that off in a later article.\n\nTake Away: Redshift works by utilizing an architecture which, when used properly, can yield impressive speeds across billion-row data sets. However you, the analyst, may not find this to be the case every time. If you have to run a query which runs counter to how the table is spread across nodes or to how the data is sorted on the slice, then the results may be slow to come back. You may be limited, or outright have your query cancelled, by WLM.\n\nIt's important to stress this as all the flashy advertising and company success stories mask a reality where people who actually use Redshift may get frustrated at its less-than-advertised speeds. The data can't be configured to suit everyone's queries; more often than not a data set is going to get configured in a way that benefits the most likely query scenario, which will be a guess and will leave a lot of other queries running a bit slower.\n\nheader image drawn by me\n\n# Who am I?\n\n> Ronsoak 🏳️🌈🇳🇿 @ronsoak\n\nWho Am I❓\n\n🇳🇿 | 🇬🇧 | 🏳️🌈\n\n📊 Senior Data Analyst for @Xero\n\n⚠️ My words do not represent my company.\n\n✍️ Writer on @ThePracticalDev (dev.to/ronsoak)\n\n✍️ Writer on @Medium (medium.com/@ronsoak)\n\n🎨 I draw (instagram.com/ronsoak_art/)\n\n🛰️ I ❤️ space!\n\n03:11 AM - 13 Oct 2019",
"content_format": "markdown"
},
{
"url": "https://dev.to/valtism/comment/hlk",
"domain": "dev.to",
"file_source": "part-00315-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nThanks! I'm glad you liked it.\n\nI'm not sure I grasp your question fully, but `const` is a way of declaring a variable. The reason I prefer it to `var` is because it offers a more reliable mechanism for instantiating variables. When a variable does not need to be reassigned within the scope it is defined in, a `const` variable is always preferred. Using `const`, you avoid common trap doors like accidental global variables, naming conflicts, and the like. Does this answer your question?",
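As a quick sketch of the point above (my own example, written in TypeScript; the names are invented):

```typescript
// `const` declares a binding that cannot be reassigned in its scope.
const retries: number = 3;
// retries = 4; // rejected before the code ever runs: Cannot assign to 'retries'

// Use `let` (rather than `var`) for values that genuinely change;
// it is block-scoped, so it cannot leak out of its block the way `var` can.
let attempts = 0;
attempts += 1;

console.log(retries, attempts); // 3 1
```

Defaulting to `const` and reaching for `let` only when reassignment is needed makes the intent of each variable obvious at a glance.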
"content_format": "markdown"
},
{
"url": "https://dev.to/adam_cyclones/weak-monorepos-54fd",
"domain": "dev.to",
"file_source": "part-00936-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nI'm 🪙 coining the term weak monorepo as a solution to a specific problem.\n\nImagine a consultancy where everyone needs the same generic tooling. Each person might work on different projects, and some people might not be security cleared to see every project 👀 due to top secret NDAs.\n\nYou then have the dilemma of packing every project with the same tooling over and over, and keeping it all up to date.\n\nOr having a separate bunch of tooling which points to a project or projects, which is not ideal.\n\n## How do you solve this?\n\nThe solution is pretty simple: create a monorepo as normal but then git ignore the `packages` / `projects` folder, whatever you call it. Then, instead of keeping the monorepo packages in one repo, keep the projects as repositories of their own, cloning them as needed into the `packages` / `projects` folder.\n\nI believe this is the best of both worlds.\nJust make sure your CWD is in the package you're working in when using Git; other than that, it's business as usual.\n\nOh, but also you can individually branch your packages, like a sane system of organisation.\n\nThis solution is something I am trialling at work and we will see how everyone likes it.",
"content_format": "markdown"
},
{
"url": "https://dev.to/zubayerhimel0/a-little-bit-about-data-science-and-data-scientist-17o9",
"domain": "dev.to",
"file_source": "part-00591-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nA couple of days ago I took IBM's Data Science course on Coursera. I completed the first course and got basic knowledge about data science. As I am new to this sector, I thought I would share what I learned. This will also help me as a future reference 😅. Now let's get started.\n\n### Data science and Data scientist\n\nData science is a field where you use scientific methods, algorithms, and processes to get information from a chunk of structured or unstructured data. A data scientist is someone who extracts the information from the data. So data science is basically what data scientists do.\n\nA data scientist is a player who plays only with data, whether the data is in structured or unstructured form.\n\n### Who can be a data scientist?\n\nAn ideal data scientist should be curious. Without a curious mind one can't be a perfect data scientist. Now the question may arise: why is curiosity important in this field? Curiosity is important because without it a person can't extract the information from a thousand/million/billion set of data. He/She won't know what to extract from the data.\n\nA data scientist has to be a good storyteller. He has to make a good story about his findings from the data to present the final result. So a data scientist requires good storytelling skill in this field.\n\nBesides these skills, a data scientist should have knowledge in STEM (Science, Technology, Engineering, Math) kind of things. Having knowledge of statistics is important, because these are important and kind of mandatory skills: he/she'll need to analyze data, may need to build an application to extract the information out of data, need to make a chart of the findings, etc.\n\n### Why is data science important?\n\nWhy wouldn't it be??? Nowadays data science is used everywhere. We can solve our problems with the use of data science. Data science is the future because it predicts the future. 
Data science is related to data mining, deep learning and big data.\n\nThat's it. Hope you'll find this useful. 😁",
"content_format": "markdown"
},
{
"url": "https://dev.to/txai/comment/845d",
"domain": "dev.to",
"file_source": "part-00553-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nNow I just skip straight to regex101 rather than search and inevitably just give up and type symbols in regex101 until it does what I want.\n\nI conditioned myself to not search for regex help, I guess ^^;\n\nIf you want some regex practice, I really enjoyed these games.\n\nAnytime I want to do anything with regex I use regexr.com/, you can test if it's gonna work on a sample of your dataset.\n\nHas a handy cheatsheet on the sidebar as well",
"content_format": "markdown"
},
{
"url": "https://dev.to/spukas/what-is-a-typescript-3gbj",
"domain": "dev.to",
"file_source": "part-00514-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nTypeScript is JavaScript but with an additional syntax called a type system. All the JS rules also apply to TypeScript, so arrow functions, objects, arrays, spreading, everything that you know in JS can be used for writing TypeScript.\n\nThe purpose of TypeScript is to catch errors early in the development process. Compare this with JavaScript: to find a possible error or a bug, you first need to execute the code. That is not an ideal process, and it makes development slower, because you need to continually rerun the code to see if you have left a bug somewhere.\n\nWith the help of the type system, your code is analysed continuously during development, looking for possible errors or bugs. If it finds one, then you are notified inside the code editor with a message describing the error and a suggested fix. And all of this happens without the need to execute the code.\nThe TypeScript compiler analyses code by using type annotations. Type annotations let you define the type of a variable, or of the input or output of a function or method. For example, you can annotate the return type of a function to be a String, or some variable to be of type Boolean. And once you annotate, it tells the compiler that only this specific type is allowed there. If the compiler detects a different type used on the identifier, it throws an error. In other words, you are describing the information that is going through your code.\n\nType annotations are used during development only. After the code is compiled from TypeScript to JavaScript, the whole type system is removed. You will not see any of the types that you defined. The browser or Node.js does not understand what TypeScript is, nor does it need to know about it. The types are used only during the development process to help catch errors fast.\n\nMany strongly typed language compilers provide an option for code optimisation. That is not the case with TypeScript. 
It does not do any performance optimisations during the compilation process. It just removes the type system and converts the code to plain JavaScript.\n\n## Summary\n\nTo sum up, TypeScript is JavaScript + a Type System. It binds types (e.g. Boolean, String or Number) to expressions (e.g. variables, function inputs or outputs), and makes sure that only these types are used. It speeds up the development process because mistakes are caught early, before executing the code. TypeScript is used only in development, and after compilation the code is converted to plain JavaScript, stripped of all types.",
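To make the summary concrete, here is a minimal sketch (my own example; the names are invented for illustration):

```typescript
// Annotations on the inputs and the output: only numbers are allowed in or out.
function add(a: number, b: number): number {
  return a + b;
}

const total: number = add(2, 3);
console.log(total); // 5

// The next line would be flagged by the compiler before the code ever runs:
// add("2", 3); // Error: Argument of type 'string' is not assignable to 'number'

// After compilation the annotations are stripped; the emitted JavaScript is just:
// function add(a, b) { return a + b; }
```

The mistake in the commented-out call is exactly the kind of bug plain JavaScript would only reveal at runtime, if at all.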
"content_format": "markdown"
},
{
"url": "https://dev.to/patrickleet",
"domain": "dev.to",
"file_source": "part-00780-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\n# Patrick Scott\n\n404 bio not found",
"content_format": "markdown"
},
{
"url": "https://dev.to/boyukbas/answer-squash-my-last-x-commits-together-using-git-3bbd",
"domain": "dev.to",
"file_source": "part-00315-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nYou can do this fairly easily without `git rebase` or `git merge --squash`. In this example, we'll squash the last 3 commits.\nIf you want to write the new commit message from scratch, this suffices:\n\n```\ngit reset --soft HEAD~3\ngit commit\n```\n\nIf you want to start editing the new…\n\nThat's cool!",
"content_format": "markdown"
},
{
"url": "https://dev.to/moogoo78/python-queuequeue-4kdd",
"domain": "dev.to",
"file_source": "part-00553-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nPython has 3 ways to implement a \"queue\" (FIFO rule).\n\n* list\n* collections.deque\n* queue.Queue\n\nlist is a built-in, multi-purpose structure, but getting an item from the front of a list shifts all the remaining items, which can be slow. collections.deque is a list-like container with fast appends and pops on either end. Here I introduce more about queue.Queue:\n\n```\nfrom queue import Queue\n\nq = Queue(5) # 5 is the maxsize of the queue [1]\nq.put('foo') # put an item into the queue\nq.put('bar')\n\nprint(q.qsize()) # approximate [2] size of the queue\n>>> 2\nprint(q.full()) # check whether the queue is full\n>>> False\nprint(q.empty()) # check whether the queue is empty\n>>> False\na = q.get()\nprint(a)\n>>> foo\nb = q.get()\nprint(b)\n>>> bar\n# c = q.get() would block here: the queue is empty, so it waits until an item is available.\n\n# q.get_nowait() is get(block=False); if no item is available it raises queue.Empty.\n\n# task_done() and join() support blocking until all queued items have been processed.\n```\n\n[1] If the queue is full (no free slot), put() will block until a slot becomes available.\n\n[2] qsize() > 0 doesn’t guarantee that a subsequent get() will not block, nor will qsize() < maxsize guarantee that put() will not block.",
"content_format": "markdown"
},
{
"url": "https://dev.to/billraymond/videos-one-jekyll-feed-view-to-rule-them-all-2b6e",
"domain": "dev.to",
"file_source": "part-00204-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\n## What you get for reading and watching...\n\nReusable Jekyll code to create a featured post, list of posts, and featured images, all in one file (plus some CSS). Here is what the reusable code will look like when viewed in a browser:\n\n## Back story\n\nWhen I created my website, I decided to create multiple post types. As of this writing, I have two types of posts: Blog and Podcast.\n\n## Code conundrum\n\nNot fully understanding how to work with Jekyll at the time, I created multiple files containing code to display my blog roll and another set to display the podcast. Then, I wrote separate code to display featured images, and even more code for the featured post for each post type.\n\nWhenever I modified the look-and-feel of my site, I had to do a grueling round of code changes and run tests to ensure I did not break the responsive design. I was (well, still am) planning to create another post type for training. The idea of doing yet another round of code for that felt so overwhelming I gave up on the idea, and that is not the way to live ☹️\n\n## One feed to rule them all\n\nAfter opening my code to make a little change to my blog and having to modify five different files, I threw up my hands and did what I should have done a long time ago. I re-architected my site and created a single file that can display any post type I want.\n\nWith my new approach, there is one file containing all the reusable code I need for a feed page (list of posts). Using some CSS styling tricks, the feed can look totally different than the featured post, but it is still the same code. 
I also have the added value of getting featured images with each post to make the site feel more dynamic and modern.\n\n## TL;DR (or 'listen, just give me the code')\n\nYou can clone my GitHub repo that shows you how to add featured images and the one feed to rule them all code here:\n\nhttps://github.com/BillRaymond/jekyll-featured-images\n\nThe great news is the code is using Jekyll. Check out the live running site here (the blog and podcast links are the ones to pay attention to):\n\nhttps://billraymond.github.io/jekyll-featured-images/feed/blog\n\n## Recommended 'before you begin' videos\n\nMy training starts with a barebones Minima site, but it does build on a video series I created that shows you how to add featured images to your posts. Technically, you do not have to set up featured images, but you will be missing out, so check these out first:\n\n## The one feed to rule them all video series\n\nThere are three must-watch videos listed here. I have a fourth to finalize, but that is a tips & tricks video, so please subscribe to my channel to keep up with that release.\n\n## One last request\n\nThe videos I do on YouTube are a labor of love, and I am trying to build an audience of people like myself who are occasional developers. The videos take a long time to create, so I would appreciate it if you subscribe, like, comment, and share if you find the content useful. In return, you will receive my sincere gratitude and the drive to create more content like this.",
"content_format": "markdown"
},
{
"url": "https://dev.to/prazwal/numpy-and-pandas-2b4o",
"domain": "dev.to",
"file_source": "part-00046-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nData scientists can use various tools and techniques to explore, visualize, and manipulate data. One of the most common ways in which data scientists work with data is to use the Python language and some specific packages for data processing.\n\nNumPy\n\nNumPy is a Python library that gives functionality comparable to mathematical tools such as MATLAB and R. While NumPy significantly simplifies the user experience, it also offers comprehensive mathematical functions.\n\nPandas\n\nPandas is an extremely popular Python library for data analysis and manipulation. Pandas is like Excel for Python - providing easy-to-use functionality for data tables.\n\nExplore data in a Jupyter notebook\n\nJupyter notebooks are a popular way of running basic scripts using your web browser. Typically, these notebooks are a single webpage, broken up into text sections and code sections that are executed on the server rather than your local machine. This means you can get started quickly without needing to install Python or other tools.\n\nTesting hypotheses\n\nData exploration and analysis is typically an iterative process, in which the data scientist takes a sample of data and performs the following kinds of tasks to analyze it and test hypotheses:\n\n* Clean data to handle errors, missing values, and other issues.\n* Apply statistical techniques to better understand the data, and how the sample might be expected to represent the real-world population of data, allowing for random variation.\n* Visualize data to determine relationships between variables, and in the case of a machine learning project, identify features that are potentially predictive of the label.\n* Revise the hypothesis and repeat the process.",
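A minimal sketch of the clean-then-analyze loop described above (the column names and values are invented for illustration):

```python
import numpy as np
import pandas as pd

# A tiny sample with a missing value, like data a scientist might need to clean
df = pd.DataFrame({
    "height_cm": [150.0, 160.0, np.nan, 170.0],
    "weight_kg": [50.0, 60.0, 65.0, 70.0],
})

# Clean: fill the missing height with the mean of the observed heights
df["height_cm"] = df["height_cm"].fillna(df["height_cm"].mean())

# Apply statistics: a summary value and a relationship between two variables
print(df["height_cm"].mean())                  # 160.0
print(df["height_cm"].corr(df["weight_kg"]))   # correlation between the columns
```

In a Jupyter notebook you would typically follow this with a plot of the two columns, revise the hypothesis, and repeat.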
"content_format": "markdown"
},
{
"url": "https://dev.to/meryemjow/being-distracted-228h",
"domain": "dev.to",
"file_source": "part-00553-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nHey!\n\nLike most of us, we are all trying to learn by ourselves from online resources.\n\nI'm a beginner in programming, but lately I have been lost in the sheer amount of available online resources, and also in how many things in a programming language I need to know, even though I have made a plan for what, where, how, and how much time I will be spending on learning.\n\nBut I have become very frustrated lately and don't know how to handle this.\n\n## Top comments (10)\n\nThis is the gold standard I give to all new developers looking for guidance.\n\nI'm doing a \"learning in public\" month next month. Basically learning something new, building something with it, and sharing the results with a small blog post every day for the whole month.\n\nI'm a DevOps Engineer so I'll be using the DevOps portion of that roadmap. Specifically, just trying to fill in gaps between my college education and what I've learned in the workforce over the last four years.\n\nFor me, that means automating more stuff with Bash, learning Go, and then doing a deep dive on Kubernetes and Ansible in August.\n\nDistraction, frustration, anxiety, uncertainty, hesitation, confusion, irritation, anger... you will have all of these feelings, you can't escape them. It's a normal state in each developer's life. However, there's a greatly rewarding feeling when you achieve any little thing. Developing can lead to extreme mental fatigue, and some people may give up, so you need to set yourself a goal (or goals) to help you survive those low times.\n\nThe web development ecosystem is evolving every day. A few years ago, it was much easier to get along compared to today. The good news is that you already started. You're in a good state compared to people who will decide to join next year. 
Henry mentioned a great resource to follow; it may be overwhelming at first, but remember that not everything mentioned there is really needed as a starting point.\n\nBy the way, if you prefer an interactive resource, here's a video by Traversy Media that suggests a minimal roadmap: Web Development In 2019 - A Practical Guide.\nAlso, Traversy Media has a great playlist about questions each developer may ask, check it out: Developer Discussion: Industry, Career & Personal Help. I strongly recommend this playlist.\n\nGood luck, and never hesitate to ask questions.\n\nYou made me realise that I'm not the only one who's having these struggles. As you said, these feelings are normal in every developer's life.\n\nI would be lying if I said that I don't sometimes think of giving up, and my mind keeps telling me: \"programming is not your thing, Meryem\". But I just don't give up. I guess the fact that I have a goal is what makes me want to continue.\n\nThe practical guide from Traversy Media is pretty helpful.\n\nThank you so much Mazen for your words. It meant a lot. Thank you\nFor further actions, you may consider blocking this person and/or reporting abuse",
"content_format": "markdown"
},
{
"url": "https://dev.to/saigowthamr/javascript-quiz-part-2-2iog",
"domain": "dev.to",
"file_source": "part-00315-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nIf you want to answer part1 check out part1\n\n> \n\n1. How to reverse a string using the reduce method.\n\nExample: if you pass 'king' as an argument the output is 'gnik'\n\n2. What is the difference between slice and splice?\n\n3. How to convert an object into a string?\n\n> \n\n1. How to reverse a string using the reduce method.\n\n```\nconst reverseStr = str =>\n (str.split('')\n .reduce((prev, c) => ([ c, ...prev ]), [])\n .join(''))\n```\n\n> \n\n3. How to convert an object into a string?\n\nKinda depends on what you want.\n\n```\nconst pony = {\n name: 'Twilight Sparkle',\n type: 'Unicorn',\n element: 'Magic',\n}\n\n// Option A because nothing beats '[object Object]' as a string representation\npony.toString() // '[object Object]'\n\n// Option B\nJSON.stringify(pony) // '{\"name\":\"Twilight Sparkle\",\"type\":\"Unicorn\",\"element\":\"Magic\"}'\n\n// Option C\npony.toString = function () {\n return `${this.name}, ${this.type}, ${this.element}`\n}\npony.toString() // 'Twilight Sparkle, Unicorn, Magic'\n\n// Option D\npony.inspect = function () {\n return `Pony(${this.name}, ${this.type}, ${this.element})`\n}\npony.inspect() // 'Pony(Twilight Sparkle, Unicorn, Magic)'\n\n// Option E because why not\npony.toString = function () {\n return `${this.name}, ${this.type}, ${this.element}`\n}\npony + '' // 'Twilight Sparkle, Unicorn, Magic'\n```\n\nInteresting fact for completeness: The JSON counterpart of `toString()` is `toJSON()`. If an object has a `toJSON()` method, it will be evaluated and its result will be stringified instead:\n\n```\npony.toJSON = function () {\n return this.name\n}\n\nJSON.stringify(pony) === '\"Twilight Sparkle\"'\n```\n\nThat sounds... 
useful? 🙈\n\nBut yeah, it's a feature that I too only discovered after many years of JS experience and have never since used in production. It's pretty cool though.\nSlight error in your example: Twilight is an Alicorn. Source ❤️\n\n```\n// #1\n\"hello\".split(\"\").reduce((c,v,i,a) => c + a[a.length-i-1], \"\");\n```\n\n#2\n\n// .slice returns a shallow copy of a \"substring\" of elements in an Array\n\n// .splice removes elements at an index and can insert elements as well\n\n#3\n\nThis is a bit of a trick question and depends on what you're looking for; @avalander went into a lot of detail on this.\n1. How to reverse a string using the reduce method.\n\n```\nfunction reverseString(str) {\n const arr = str.split('');\n return arr.reduce((acc, val) => val + acc, \"\");\n}\n```\n\n2. What is the difference between slice and splice?\n\n3. How to convert an object into a string?\n\n `\"Hello World\".split('').reduce((a, b) => b + a)` \nslice is a prototype function on Array and String. It doesn't mutate; it returns a new Array/String that equals an extract of the original. splice is a prototype function on Array and is used to insert and delete elements at a given index of the array. It mutates the actual array and returns the deleted elements.\n\nYou can explicitly call `obj.toString()` or you can use JavaScript's coercion: `(obj + \"\")`, so the obj.toString() method is implicitly called to convert it to a string. By default, the output is not too useful; it returns `[object Object]`, but you can override the toString() method. 
Or, if you want to serialize the object into a string representation, you may want to use `JSON.stringify(obj)` > \n\n1) How to reverse a string using reduce method?\n\n```\nconst rev = str => [...str].reduce((carry, char) => char.concat(carry), '')\n```\n\nSpecifically, `[...str]` is superior to `str.split('')` because it correctly handles Unicode:\n\n```\n'😎'.split('') // [\"�\", \"�\"]\n[...'😎'] // [\"😎\"]\n```\n\n> \n\n2) What is the difference between slice and splice?\n\n `slice` copies a range out of an array. `splice` deletes and/or inserts items at a given position in an array.\n\n> \n\n3) How to convert an object into a string?\n\nThe question's a little too broad imo, so I'll spare myself writing out exactly what Avalander did.\n\n`\"king\".split(\"\").reverse().join(\"\"); // result will be \"gnik\"`\n\nYou could also accomplish this with a simple loop:\n\n```\nvar str = \"king\";\nvar reversed = \"\";\nvar i = str.length - 1;\nwhile (i >= 0) {\n reversed += str[i];\n i--;\n}\n// result in the reversed variable will be \"gnik\"\n```\nThis one is constantly confused among JS developers. splice modifies the original array in place and returns the elements it removed. slice, on the other hand, returns a shallow copy; the original array remains the way it was before the operation was executed. For example, arr.splice(1, 1) will literally remove a single item from the array starting at index 1 and return it. The slice counterpart is arr.slice(1, 2), which starts at index 1 and returns one item in a new array (a new array in memory); note that slice's second argument is an exclusive end index, so arr.slice(1, 1) returns an empty array. 
Splice is also used to manipulate arrays without the need to store a returned result in a new variable, since the original reference in memory is modified, whereas slice is intended to be used as an operation on an existing array whose returned value will either get passed as a parameter or stored with its own variable name.\n\nAvalander's answer is pretty concise, but if I were to give an answer based on how the question is worded, it would have to be JSON.stringify(obj);",
"content_format": "markdown"
},
{
"url": "https://dev.to/webeleon/purge-docker-with-fire-2661",
"domain": "dev.to",
"file_source": "part-00591-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nOn my development machines, I need docker most of the time...\n\nThe annoying parts are:\n\n* it takes a shit ton of space on my SSD for images and unused containers. Like 200GB\n* it adds a lot of virtual network interfaces... around 50 useless interfaces polluting my computer.\n\nI won't uninstall docker but I would like to remove all containers, all images, all networks, and all volumes.\n\nStart by stopping all running containers.\n\n```\nsudo docker stop $(sudo docker ps -qa)\n```\n\nThen delete all containers (running and stopped):\n\n `sudo docker rm $(sudo docker ps -qa)` \nThen remove all networks not used by at least one container. No containers are left, so we are removing all of them...\n\n `sudo docker network prune` \nNow we will remove all the volumes:\n\n `sudo docker volume prune` \nAnd finally, we can remove all the images (the `-a` flag removes all unused images, not just dangling ones):\n\n `sudo docker image prune -a` \nEnjoy the space and silence.\n\nps: If you are only looking to remove unused resources, go with `sudo docker system prune`. pps: If you are only looking to remove stopped containers, use `docker container prune -f`. Thanks to Manuel Castellin for the tips.",
"content_format": "markdown"
},
{
"url": "https://dev.to/yorodm/the-ultimate-web-framework-deathmatch-1op8",
"domain": "dev.to",
"file_source": "part-00471-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nAre those benchmarking blog posts bothering you? Tired of listening to people boast about how good framework X is? Undecided whether to use framework X or framework Y in your new startup? Fear not!!! You can now (easily) launch your own edition of: The Ultimate Web Framework Deathmatch, using TechEmpower's Web Framework Benchmarks suite.\n\nThe project has a Github repo with all the things you need to run your own tests. As this post is being written, there's a fierce competition going on between an aio-libs based Python app and net/http with SQLx, both using PostgreSQL, so just read the instructions and let the games begin! You can add your results to the project via a PR or just post your benchmarking as a comment here.\n\nDisclaimer\n\nIn case someone is wondering, I have no idea who the good folks at TechEmpower are, nor do I work for them or receive any compensation from them. I just thought the whole Deathmatch thing sounded like a way to have some productive fun.",
"content_format": "markdown"
},
{
"url": "https://dev.to/alex_barashkov/the-challenges-of-building-own-macos-app-1acd",
"domain": "dev.to",
"file_source": "part-00936-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nTwo years ago, we began the process of building a simple, open-source MacOS app as a side project, but this turned out to be a much more challenging journey than we initially thought. Here, we talk about the various stages of this journey, including:\n\n* how we built the app to mute/unmute microphones using a shortcut or touchbar icon;\n* why taking an open-source approach did not meet our expectations;\n* what limitations Apple placed on the touchbar that rendered it useless for our needs.\n\nWe've just released our Mutify app on Product Hunt, so now we’re waiting for your feedback and support!\n\nhttps://www.producthunt.com/posts/mutify\n\n### 💡Initial Idea\n\nI often spend a lot of time on calls in the middle of work days discussing tasks, solutions and future plans with tech teams working on various different projects. While on these calls, I prefer to mute my mic anytime I’m not speaking. However, I often switch between browser tabs or look at presentations while in these meetings, which makes it hard to quickly unmute myself as I search for the tab and unmute icon I need. To resolve this, I began using a MacBook 2016 with the touchbar, thinking that adding a mute button to the touchbar could be an effective solution. This is what sparked the journey that eventually led to our new app.\n\n### From Open Source to Paid App\n\nWe’re pretty happy with open source in principle, so we initially developed the first version of the app as an open source project, investing in the costs ourselves. We released it and did some promotion, and got positive feedback, but also noticed some bugs. MacOS turned out to be more of a problem than we thought. 
Particular problems we faced include:\n\n* problems with other apps attempting to control the mic\n* compatibility between different versions of MacOS\n* limitations put in place by Apple that needed careful investigation\n\nWe counted on the app’s community regularly growing so that we could get some contributors able to help handle issues that arose. Unfortunately, this didn’t happen. From day one, all development was done by ourselves; open-source development on Swift turned out to be quite different from the JS or PHP communities, where you can rely on small bug fixes from outside devs. This lack of helpful community is a major weakness of MacOS development.\n\nWe got 159 stars on Github, more than 8000 installations, and zero contributors. The app, however, still needed a lot of work to complete, so we decided to rebuild it from scratch and change to a paid model to guarantee at least some support for future development and app improvements. That's how Mutify was born.\n\n# Mutify Features\n\n# Mute and unmute microphone\n\nMute and unmute with a hotkey or touchbar icon available at all times.\n\n# Noise level\n\nDisplay the current ambient noise level, helping you track when you need to mute yourself.\n\n# Support for a variety of apps\n\nUse it with Hangouts, Zoom, GoToMeeting, Skype, Telegram, etc. Different call apps try to control the mic, but the app allows you to keep control.\n\n### Difficulties of MacOS App Development\n\nWe encountered problems at almost every step of development. It was hard to build a fully customized UI, as we didn't want to use standard components. When we developed a method for low-level muting of apps, some apps unmuted without user input or notification. For example, even when we muted a microphone and it showed as disabled in MacOS settings, GoToMeeting or GoToWebinar simply unmuted the mic again after detecting it had been muted. After discussing this with the support team for these apps, they insisted that this is correct behaviour. 
From our perspective, this is dangerous, since it suggests that meeting apps just want to listen to you all of the time. We had to find complicated workarounds to ensure that the mic was always muted when we wanted it to be.\n\nSince day one, we thought that Apple would extend the functionality of the touchbar API, but there are no changes as yet. The touchbar API has a lot of limitations that prevent developers from using it to build potentially useful software.\n\nFor example:\n\n* Third party apps can’t be displayed in the list of actions for the touchbar in the “Customize Touchbar” settings;\n\n* You can only add one third-party app icon, which is always displayed in the touchbar together with Apple-developed tools;\n\nThose limitations significantly reduce the possible options for extending touchbar functionality. That’s why we also support mute-by-hotkey functionality. Even for a simple, one-action app, you need time to build features like the following:\n\n* Onboarding screens\n* Update functionality\n* Integrated payment functions\n* Open at login functions\n* Hotkey support\n* Website\n\n### Effort = Quality\n\nThe main lesson we learned from this project is that doing something well always means a lot of hard work from different people with different skills, including some that don’t always relate to the primary functionality of the app. In the end, however, positive feedback from our customers and the personal satisfaction of using our own application made it feel worth the effort.\n\nSupport us on Product Hunt for a 30% discount! Mutify also has a seven-day free trial, so feel free to test it out and we hope you enjoy!",
"content_format": "markdown"
},
{
"url": "https://dev.to/cydstumpel/using-css-variables-to-create-smooth-animations-5164",
"domain": "dev.to",
"file_source": "part-00553-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nCreating animations with JavaScript can be really frustrating. You can't actually manipulate the CSS Object Model directly from JavaScript (yet!), but you definitely want to...\n\nMost people today use either the `classList` or the `style` property of JavaScript to change styles. The problem with both is that they are actually applied to the HTML DOM and not directly to the CSS engine. Waiting for Houdini to be widely supported could take a while, and we want to create smoother and better animations now.\n\n## So, what can we do?\n\nThe answer is CSS variables, or CSS custom properties as some people like to call them. Because while CSS variables are also manipulated via the `style` property in JavaScript, it's actually much faster in most browsers than using inline styles.\nUsing event listeners to change the variables is crazy smooth, here's an example I wrote a few weeks ago (using variable fonts, which are also v. cool!):\n\nAnother big advantage of CSS variables is that you can set variables on a parent element which all child elements can read, and so can the `:before` & `:after` pseudo-elements. This does make adding a new variable slower when there are a lot of child elements though!\nEspecially animations based on mouse position work really well in my opinion:\n\n## But what about IE?\n\nSo CSS variables are supported in most browsers, except for of course... IE. But seriously, even Microsoft is telling people to stop using IE11. As long as you use the animations for enhancement purposes only, your users --even those who torture themselves with IE-- should be fine.",
"content_format": "markdown"
},
{
"url": "https://dev.to/waqardm/coding-bytes-part-1intro--data-types-2nbc",
"domain": "dev.to",
"file_source": "part-00046-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\n# Intro\n\nThis post was originally published on Medium.\n\nI’ve been learning coding on/off for about 2 years now. I've fallen off the wagon plenty of times. The resources available nowadays are much better and more plentiful compared to a couple of years back, and though I still consider myself a newbie, I know my way around things a little better now. I know what may be causing an issue and where to look to try and fix it.\n\nI’m still rubbish at algorithms but that is a part of my journey that I know I will work through eventually. It’s been a tough ride, especially at the beginning. Even though there is enough support online, getting stuck can be a nightmare! Fixing a problem whose solution is right in your face can sometimes take hours, but I’ve come to accept that as part of the learning process too. I believe that is the hardest part in the beginning — resolving the issue on your own and trying not to give up. I’ve also made peace with the fact that I may learn/work slower than others, and no matter what I do, I can’t learn everything. Most people will tell you imposter syndrome is prevalent, but the only thing to do is stick through it.\n\nIt is often said that the best way to learn is to teach something. And though I still have a lot to learn, I wanted to start a series to solidify my learning and work through the very basics of programming whilst benefitting someone else learning to code (I hope!). I’ve decided on a format aimed at reducing information overload and hence named it ‘coding bytes’. The idea is that small bits of information can be digested more easily, and when put together with the other pieces, the magic will happen!\n\nIn each post I will aim to cover one subject/concept without linking it to any other principle or subject. 
In this way I hope a learner can take small steps and work through the journey, reducing the imposter syndrome.\n\n## Preamble…\n\nI will be using Javascript as the chosen programming language (don’t worry, all languages are essentially the same!), just because of its ease of use and the fact that it works straight out of the box in your browser. I have made a video showing how you can do this here.\n\nBasic Data Types\n\nThe most basic information we deal with is known as a ‘data type’. All this means is the type of data we are going to work with, e.g. a letter, a word, or a paragraph can be used as real-world examples of data types. In programming we use the following data types (with examples):\n\nInteger\n\nAn integer is a positive or negative whole number. `4 or -12` \n\nFloat\n\nA number with a decimal. `3.75` \nThough in Javascript integer and float come under the common number type, I have separated them intentionally as these terms are common in many other languages.\n\nString\n\nA word or sentence or more. A string can include numbers too. Note the quotation marks that encapsulate the string. `“string”, “this is also a string”` \n\nBoolean\n\nA boolean is a `true` or `false` value. Sometimes `true` is also known as `1` and `false` as `0`.\n\n## Other Data Types\n\nVariables\n\nVariables can be explained as a reusable word or box which we can use to store information and later update it. There are a few ways to declare variables, but we’ll stick with the most basic, var.\n\n `var age = 4;` \nIn the above declaration, I am letting the computer know I wish to declare a variable by using the var keyword, then calling the variable ‘age’. Now we have a variable named age with a value of 4. The reason for assigning values will become apparent as we progress.\n\nArrays\n\nThe easiest way to describe arrays is to think of a list or collection of something. A to-do list is an array. The list itself is the array, whilst each to-do item is an array element. 
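As a quick sketch of the basic types covered so far (the values are my own throwaway examples), you can paste this into your browser console and see what Javascript reports via the typeof operator:

```javascript
// Each basic data type, stored in a variable declared with var
var age = 4;            // integer (Javascript groups it under 'number')
var price = 3.75;       // float (also 'number' in Javascript)
var greeting = 'hello'; // string
var done = true;        // boolean

// typeof reports the type Javascript sees
console.log(typeof age);      // 'number'
console.log(typeof price);    // 'number'
console.log(typeof greeting); // 'string'
console.log(typeof done);     // 'boolean'
```
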
Note our array is stored in a variable named ‘list’.\n\n```\nvar list = [\"pick up shopping\", \"call the doctor\", \"book tickets\"];\n```\n\nThere are a few more data types, but these should do fine for now as this post has become a little longer than I wanted due to the intro.\n\nThanks for reading. To keep up with my coding journey come say hi 👋 on twitter. I can be found on @lawyerscode",
"content_format": "markdown"
},
{
"url": "https://dev.to/fazledyn",
"domain": "dev.to",
"file_source": "part-00046-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\n404 bio not found\n\nEducation\n\nBUET\n\nWork\n\nCSE Undergraduate\n\nSkills/Languages\n\nDjango, Flask, React, NEXT, Hugo, Tensorflow\n\nWe're a place where coders share, stay up-to-date and grow their careers.",
"content_format": "markdown"
},
{
"url": "https://dev.to/bruno8moura/react-a-simple-use-of-usestate-method-8p1",
"domain": "dev.to",
"file_source": "part-00553-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nWhen I started using React, one thing that bothered me was the useState() method.\n\nEventually I understood that we have two types of storage when working with React:\n\n* Memory storage and;\n* Component storage.\n\nAnd finally, with that picture in mind, I managed to absorb how the useState() method is used.\n\nAfter that, I built a gif to make my point clear.\n\nSumming up, you can change values without useState(), but what you're changing is the value in the browser's memory, not in the React component. To change the component's value, you have to use useState().\n\nThat's all folks!",
"content_format": "markdown"
},
{
"url": "https://dev.to/metacritical/comment/676c",
"domain": "dev.to",
"file_source": "part-00046-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nI personally think you shouldn't spoil your mind with Rails' MVC architecture; it doesn't hold up beyond a basic blog, and it breaks down easily even under a moderately larger app. The conventions are rules with their own peculiarities which move you away from the problem.\n\nYou end up solving the framework to make it work for you rather than working the problem and letting the framework fall into place. It's almost ridiculous to keep battling the Rails framework to fit your problem domain.\n\nOccam's Razor suggests that the simplest answer is also the most probable. The simplest answer is Routing + templating + middleware (RTM), not MVC. Use nodejs, clojure or Sinatra. Anything but Rails if you are aiming to go beyond a blog and don't want to battle the framework.\n\nI see where you're coming from, but I'm an individual contributor working with a small team on an existing project, so I can't exactly waltz into engineering meetings and be like \"let's start from scratch using a framework that no one knows\" hahah",
"content_format": "markdown"
},
{
"url": "https://dev.to/jmfayard/how-to-write-a-good-readme-discuss-4hkl/comments",
"domain": "dev.to",
"file_source": "part-00514-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nI usually structure my READMEs like this:\n\n* Quick description of what the project is and what it does\n* A quickstart \"tutorial\" on how to start using it\n\n* Requirements\n* Installation\n* Configuration\n* How to use it\n* An update/upgrade guide (unless it's quite complex, then I'll just put a link to a wiki page or something)\n* Where to find the docs\n* How to contribute to the project\n* How to report bugs in the project\n* The license\n\nThe structure and content might vary depending on the project but that's the default \"template\" I use when I write a README.\n\nI like it that way because users just need to take 5 minutes to read it and know what's what.\n\nHope some of this info adds to the discussion.\n\nOften a gif can be super-helpful to show how a project functions.\n\nExample: github.com/MichaelDimmitt/gh_reveal\n\nAn image can likewise provide benefit.\n\nExample: github.com/JohnCoates/Aerial\n\nLogos or other images are also nice; they make the document more fun and less of a pain to read.\n\ngithub.com/hashrocket/decent_exposure\n\nAlso, people often like the \"shields\" to show numbers like downloads/week or tests passing.\n\nshields.io/#/\n\nHappy to add to the discussion further.\n\nA Mac app I like to use for making gifs is Giphy Capture.\nI like to think of a README very much like a college essay. Very formulaic, with a rigid and proven structure, with the intent of getting your idea across as quickly as possible.\n\nWhile I am by no means an expert (or even really that great at following my own guidelines), here is the general structure I tend to stick with:\n\n* Introduction\n\n* Project name and description\n* Badges. Everyone loves badges. They help establish trust between the project and the end-user. Badges are colorful and catch their eye, and ones like code coverage and build status build reputation. But don't go crazy. 
Much more than 1 full line of badges on a desktop gets too cluttery, but too few just looks lazy. 3-6 different badges is a good number for me.\n* Close the introduction with a hook, something to get a passerby interested. It might be a screenshot, a video gif with basic functionality, or a very short code snippet. You want this to be the first thing someone notices when scanning your README, which should make them pause and be interested enough to actually read the description above it and spend their precious seconds scrolling to see what's below it.\n* Body\n\n* Your README is not your documentation. Let me repeat this: Your README is not your documentation. It is an overview of your project, and it should get people interested in using it more.\n* Definitely include a quick-start for the project. The shortest possible example that gets someone fully up-and-running. For a library, it might be how to add the dependency and a minimal integration into an existing project. For a framework, maybe a scaffold command followed by the most basic run command.\n* A brief overview of features. Again, this is not documentation, it is just enough information to get users interested. For each notable feature, a small code snippet might be nice, and links to the full documentation page are a must. A project with a CLI might simply show a few of its commands, a library could do with a high-level overview of its API.\n* Basic configuration. What is the entry-point into tweaking the quick-start example to suit their needs. Again, this is not full documentation, but simply the first place someone should look to start changing stuff. Link back to the full API docs from here.\n* Next steps. After a user has completed the quick-start and has a basic idea of how to work with the API, it's time to lead them toward being a fully dedicated user. 
Link to the issue tracker, maintainer chat rooms, tags on Stack Overflow, and anything else they will need to keep going back to for ongoing help or instruction.\n* Conclusion\n\n* This is the last stuff you want end-users to see. If they made it this far, it's time to bring it all home. This should primarily be targeted at contributors and maintainers rather than end-users, but should be a good reassurance to the end-users of the quality of the project and its development progress. Here are some examples of good things to put in your README conclusion, in no particular order:\n* Project license (although a link to the license is probably better, to keep the README shorter)\n* Contributor guidelines (again, a link is better). Code of conduct, how to set up and build the project for development, that kind of stuff that prospective contributors need to know.\n* References to other projects that inspired or are used in your project. Give credit where credit is due.\n* A list of all contributors might be nice, especially in small or rapidly-growing projects.\n* Who's using this project?\n\nMy main point to all of the stuff above is that your README should be a springboard to everything that anyone needs to know about your project. It should primarily work to get new users interested in trying it out, aid existing users in getting to the right channels for help, and hold the important bookmarks that maintainers will keep coming back to. And above all, your README is not your documentation, but it should link out to the relevant pages in your full documentation whenever possible.\n\nThat makes a lot of sense, and I would have probably fallen into the \"Your README is not your documentation\" trap if you hadn't pointed it out that clearly. Thanks a lot!\n\nYou're welcome! It's definitely a temptation to use the README for documentation, since it is right there and Github displays it nicely for you. 
But dumping full docs into a README is just going to make it too huge for a new user to easily scan, and they will leave disinterested; it is also too difficult to navigate for existing users to easily find what they need.\n\nGithub's Wiki is just as easy to use and gives you proper navigation for small, flat documentation needs. For larger and more complex docs, it's probably better to keep your markdown in the project and integrate it with a static site generator, so that a release of the project also publishes the docs to Github Pages or something like that. Either way, it's better for everyone than putting it in your README.\n\nI'm currently writing a post on this, I'll send you a link when it's done.\n\nHowever, to my mind the core idea is that the README should be a gateway to other pieces of info. Keep minimal info in the README, like how to install, a quickstart code sample, and links to the contributing guide, docs, license etc. The only exception is when your project is really tiny.\n\nThis article is pretty good about the structure of README files: medium.com/@meakaakka/a-beginners-...\n\nIt also contains a template that you can reuse.\n\nI especially like it when the contributors with avatars are listed at the bottom. It adds a great human touch.\n\n* \n\nI actually put the full license in a dedicated file and the \"license\" section of the README only contains a link to it. 
That's probably a bit redundant but it's an old habit from the time when GitHub did not include the license in the project page header.\n\n* \n\nThat's usually something I put in the full documentation to keep the README more lightweight and free of unnecessary technical details.\n\nHere is a really nice README template:\n\ngithub.com/elsewhencode/project-gu...\nThis may seem like a weird question to ask, but it is not, and here is WHY.\n\nI am sure you have noticed that a lot of technical documentation is poorly written and frustrating.\n\nWhy is it so?\n\nAs it turns out, writing for users is hard work.\n\nWhen you publish a project, it's because you are passionate about the topic.\n\nBy the time you are ready to push the \"publish\" button, you have learned even more about it; you know all the jargon, the implementation details, ...\n\nThe danger of writing for yourself instead of for your users is very much present, and avoiding it is hard.\n\nIt would have been easy for me, for example, to write something like \"Oh and by the way, don't forget to add the Gradle plugin portal in your settings file if you need to\". And then my target audience (who is not ME) is like: what???\n\nThis is why I was interested in the wisdom of people who faced the same problems.\n\nThanks to everybody for their replies!\n\nI use this as a rough template for many of my projects:\n\ngist.github.com/PurpleBooth/109311...\n\nI also love adding shields:\n\ngithub.com/nektro/reader/blob/mast...",
"content_format": "markdown"
},
{
"url": "https://dev.to/ibmdeveloper",
"domain": "dev.to",
"file_source": "part-00204-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nBuild. Learn. Explore. Create something new and boost your tech skills. Great software starts at developer.ibm.com.\n\nWe're a place where coders share, stay up-to-date and grow their careers.",
"content_format": "markdown"
},
{
"url": "https://dev.to/anabella/first-impressions-of-epic-react-by-kent-c-dodds-1oc2",
"domain": "dev.to",
"file_source": "part-00046-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\n> \n\nThis is my first post in a series on the Epic React course. This one is about my first impressions and opinions on the course in general (tl;dr I think it's great!), while the upcoming ones will focus on the content covered in each section. Stay tuned for more!\n\nI purchased my subscription to Kent C. Dodds' Epic React course when it came out last year, but I hadn't been able to start it until yesterday. I knew from other courses by him that it would be great, so I wanted to make sure I had time in my schedule for it before I began using it.\n\nNow that the new year is here, I decided it was about time I saw what it was all about. And so last night I watched the first chapter.\n\n# Welcome to Epic React 🚀\n\nI was instantly surprised by this initial tour of the course. As first impressions go, this looks way beyond any other online course I've ever done. What's more, it feels like the best parts of every online course I've done, optimized to work together and cater to anyone's learning habits and preferences.\n\nWhat do I mean?\n\n## E-learning, the good parts 👩🏻💻\n\nWhat are those awesome parts that (I think) Kent noticed and put together in just the right combination?\n\n### Video tutorials 🍿\n\nIt's no secret that many people prefer watching a video course or tutorial over reading a book on a certain subject. There's so much more that you get from having someone just tell it to you instead of reading it yourself. To start with, listening to a real person speaking, using their voice and emphasizing things, not just words on a page, really makes a difference. That sort of thing is usually really hard to convey in the written word, especially in the formal writing often found in books. And in the end it really helps make the learning process feel more informal and relaxed.\n\nBut videos (or books) are never enough if you can't get your hands dirty with the code. 
That's cool though, Epic React's got you covered:\n\n### Interactive tools 👾\n\nThe first thing that surprised me about Kent's course is that it's a full product and not just a library of video tutorials. The course comes with a dedicated application you can run locally and use to\n\n* read about the current exercise,\n* see the result of your code and what the final result should look like, and\n* use some handy tools to control network calls.\n\nSimilar to the classic Codecademy-style apps, this is your control panel for learning.\n\nAh, but I remember taking my first ever coding lessons on Codecademy (back in 2012 or so) and feeling like the learning was good but partial, because I had literally no idea how to create and run code that actually did something outside of the learning platform. And that's why many other types of courses give students:\n\n### Project Files 🗂\n\nI remember first learning JavaScript within a learning webapp, kind of intuitively knowing that \"JavaScript runs in the browser\" but having no idea how to actually make a browser run my code, let alone make it interact with a page.\n\nThis is another ingredient in Epic React that contributes to enhancing your learning experience. 
You get to see your code run in a real environment, like it would if you were building an app and not just solving exercises.\n\n### The more the merrier 👯♀️\n\nThe course also provides a detailed explanation of each exercise (in addition to Kent's videos explaining the subject), together with links and references you're encouraged to consult in order to expand your understanding.\n\nSolving the basic tasks seems to be more or less straightforward with the help of the course's cast of code-comment emoji (most notably Kody the Koala 🐨), so that you don't spend much time trying to \"please\" the exercise checker into accepting your solution: I know I've been through that, and it can be both frustrating and distracting from what you're trying to achieve.\n\nBut if you're into challenges you can take on the extra credit for a less hand-holdy experience. I believe solving problems on my own is one of the best ways to solidify new knowledge.\n\n### The cherry on top 🍒\n\nTo top it all off, Kent has created a Discord community for learners, and even a model for people to create their own learning clubs to stay motivated and learn from each other. This, I think, is the most innovative and generous thing added to this course. It really feels like it's providing every tool available for us to succeed.\n\n## Conclusion 😃\n\nWithout even starting any of the actual material, I can tell that this is miles ahead of any other course out there. I can't wait to start watching the next chapter: React Fundamentals.\n\n> \n\nWell, thank you for reading this far! These were my first impressions of Kent C. Dodds' Epic React course and what it proposes as a learning platform. I hope you enjoyed it!",
"content_format": "markdown"
},
{
"url": "https://dev.to/toastking/comment/cce3",
"domain": "dev.to",
"file_source": "part-00204-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nLaTeX is great because you can use version control. It saves me so much time on edits. If you're not doing anything fancy design-wise, I highly recommend it.\n\nI know some folks who use LaTeX and keep alllll of their work history in it. Then they just uncomment bits of it as needed to customize it for specific jobs.",
"content_format": "markdown"
},
{
"url": "https://dev.to/nethmirodrigo/advisor-do-hackathon-submission-35o5",
"domain": "dev.to",
"file_source": "part-00936-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\n## What We built\n\nAdvisor+ is a question-and-answer web application where questions are asked, answered and blogged with the contribution of users (advisees) and experts (advisors). It can be considered a platform where users and experts meet. This is more of a consultation platform than a service platform, where advisors consult users to help with problems related to mental health, career guidance, financial guidance, etc.\n\n### Category Submission:\n\nBuilt for Business - It can also be considered an application for people to get advice on mental health, career guidance, financial guidance, etc.\n\n### App Link\n\nhttps://advisor-plus-frontend-rhee8.ondigitalocean.app/\n\n### Description\n\nAdvisor+ is a question-and-answer web application where questions are asked, answered and blogged with the contribution of users and experts.\n\nAdvisor+ is purely crowd-sourced. You can either register as a user, or if you are a professional in any field, you can register as an expert. By registering as a user, you can ask any question related to any field, be it sports, science, psychology, arts, etc.\n\nAfter that, a registered expert in the relevant field may answer your question. 
The user can then rate the answer, which also increases the rating of the expert, and the post is blogged on the Advisor+ page.\n\nIf a user would like to keep a question private, an anonymous-question facility is available and none of their data will be stored.\n\nSadly, we were not able to finish our project because of the exams we are about to face at our colleges.\n\nWe had planned to add these features to the app:\n\n* a payment gateway for users to pay advisors\n* a private chat bot for users to contact advisors\n* a professionally verified skill test for advisors to rank themselves at the top\n* some minor sections on user and advisor profiles.\n\n### Link to Source Code\n\nBackend - https://github.com/NethmiRodrigo/advisorPlus-backend\n\nFrontend - https://github.com/NethmiRodrigo/advisorPlus-frontend\n\n### Screenshots\n\n### Permissive License\n\nMIT\n\n## Background\n\n> \n\nAs we all know, it is not safe to go anywhere during this period of the Covid-19 pandemic. Because of that, people's normal day-to-day lives have been turned upside down and we are unable to go anywhere. So we thought about what we could contribute to the community during such a pandemic.\n\nWe thought about designing a platform for getting medical advice from home by logging in to a platform. Our idea was that if we were able to create such a platform, no one would need to go anywhere to get quick medical advice. And that is really safe during a pandemic.\n\nAnd it's not only about getting advice: we can post our questions, and things that we want further clarification and advice on, to the blog. So it is really helpful to a person who is looking for an answer to a question (maybe an advice). There are no restrictions on browsing the blog. And it is really helpful for people who are working on research.\n\nDuring the implementation of the project, we thought, \"Why are we gonna limit the concept only to the medical side? Why can't we implement our idea in other fields like Marketing, Engineering, etc.?\" That was the turning point of our project.\n\nWe implemented a platform to GET and GIVE advice for people who are looking for answers and solutions regarding a particular situation.\n\nADVISOR+ provides solutions for EVERYTHING!\n\n### How We built it\n\nWe built the frontend using React and the backend using Node.js. We used Firebase for authentication. A DigitalOcean droplet was used to deploy the backend, and the frontend repository was connected to a DigitalOcean app for automated deployment.\n\n## The Team\n\n@cmdrguyson\n\n@lordreigns\n\n@nethmirodrigo\n\n@isuruvihan\n\n@yasithsam\n\n@akilamaithri",
"content_format": "markdown"
},
{
"url": "https://dev.to/nickparsons/docker-commands-the-ultimate-cheat-sheet-33n2",
"domain": "dev.to",
"file_source": "part-00936-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nIf you don’t already know, Docker is an open-source platform for building distributed software using “containerization,” which packages applications together with their environments to make them more portable and easier to deploy.\n\nThanks to its power and productivity, Docker has become an incredibly popular technology for software development teams. However, this same power can sometimes make it tricky for new users to jump into the Docker ecosystem, or even for experienced users to remember the right command.\n\nFortunately, with the right learning tools you’ll have a much easier time getting started with Docker. This article will be your one-stop shop for Docker, going over some of the best practices and must-know commands that any user should know.\n\n### Docker Commands and Best Practices\n\nBefore we get into the best practices for using Docker, here’s a quick overview of the vocabulary you should know:\n\n* Layer: a set of read-only files or commands that describe how to set up the underlying system beneath the container. Layers are built on top of each other, and each one represents a change to the filesystem.\n* Image: an immutable layer that forms the base of the container.\n* Container: an instance of the image that can be executed as an independent application. The container has a mutable layer that lies on top of the image and that is separate from the underlying layers.\n* Registry: a storage and content delivery system used for distributing Docker images.\n* Repository: a collection of related Docker images, often different versions of the same application.\n\n# With that refresher in mind, here are some quick tips for building applications with Docker:\n\n* Try to keep your images as small as possible. This will make them easier to transfer and faster to load into memory when starting a new container. 
Don’t include libraries and dependencies unless they’re an absolute requirement for the application to run.\n* If your application needs to be scalable, consider using Docker Swarm, a tool for managing a cluster of nodes as a single virtual system.\n* For maximum efficiency, use Docker in combination with continuous integration and continuous deployment practices. You can use services such as Docker Cloud to automatically build images from source code and push them to a Docker repository.\n\n### Below, you’ll find all of the basic Docker commands that you need to start working with containers:\n\n# Developing with Docker Containers:\n\n* docker create [image]: Create a new container from a particular image.\n* docker login: Log into the Docker Hub repository.\n* docker pull [image]: Pull an image from the Docker Hub repository.\n* docker push [username/image]: Push an image to the Docker Hub repository.\n* docker search [term]: Search the Docker Hub repository for a particular term.\n* docker tag [source] [target]: Create a target tag or alias that refers to a source image.\n\n# Running Docker Containers\n\n* docker start [container]: Start a particular container.\n* docker stop [container]: Stop a particular container.\n* docker exec -ti [container] [command]: Run a shell command inside a particular container.\n* docker run -ti [image] [command]: Create and start a container at the same time, and then run a command inside it.\n* docker run -ti --rm [image] [command]: Create and start a container at the same time, run a command inside it, and then remove the container after executing the command.\n* docker pause [container]: Pause all processes running within a particular container.\n\n# Using Docker Utilities:\n\n* docker history [image]: Display the history of a particular image.\n* docker images: List all of the images that are currently stored on the system.\n* docker inspect [object]: Display low-level information 
about a particular Docker object.\n* docker ps: List all of the containers that are currently running.\n* docker version: Display the version of Docker that is currently installed on the system.\n\n# Cleaning Up Your Docker Environment:\n\n* docker kill [container]: Kill a particular container.\n* docker kill $(docker ps -q): Kill all containers that are currently running.\n* docker rm [container]: Delete a particular container that is not currently running.\n* docker rm $(docker ps -a -q): Delete all containers that are not currently running.\n\nHopefully this guide will serve as your go-to Docker cheat sheet. If there is anything I missed, please let me know and I will happily add it.\n\nHappy coding! ✌",
"content_format": "markdown"
},
{
"url": "https://dev.to/tlylt/thinking-like-an-economist-4mk2",
"domain": "dev.to",
"file_source": "part-00780-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nAs a Computer Science student, I found many other areas of study fascinating. During the many holidays throughout my education thus far, I have taken various introductory-level courses in disciplines such as Psychology, Philosophy, and Economics. They are simply subjects that make for an interesting read (or, if you watch online lecture series like I often do, a delightful binge) and are closer to our lives. This year I decided to take a general education module called \"Thinking Like An Economist\", and here are my thoughts after my first week.\n\nThe concept that \"humans respond to incentives\" cannot be stressed enough. We behave in many ways following incentives planted by the government or resulting from the natural course of things. Therefore, anyone who wishes to manipulate or nudge the public into doing something, whether good or bad, should heed economists' advice on how to do it properly.\n\nChange is hard to induce, even with incentives. The problem with incentives is that they may backfire. You have examples of cheating among teachers when they were told that their bonuses were tied to their students' performance. Or, you have a higher child accident rate after the government required car seats on aircraft, thereby forcing families to pay for an extra ticket when flying with children (long story short, people might be compelled to travel by car, which is more accident-prone, instead of paying more for a flight).\n\nSometimes we can clearly understand why incentives might work against our will. Other times we need a long chain of logical deductions to see what might have gone wrong. There are countless examples in the popular literature on Economics. 
From environmental protection to managing organizations, we see economists at work in most human interventions.\n\nSo, I would like to discuss Web Development through the lens of an economist.\n\nI started doing web development because I learned and believed the following:\n\n> \n\n\"You send a link to a person, and he gets to see your work instantly. How cool is that?\"\n\nThe ease of visualizing and communicating our ideas through the Internet is probably one giant incentive for people to start web development. Perhaps, beyond the popular argument that front-end development is subjectively easier and therefore has a much lower barrier to entry than back-end development, there is the idea that front-end developers see what they build and are motivated by that sheer fact. Money, of course, is also at play. When we look beyond money, however, which I think happens more often than not (citing the well-known Maslow's Hierarchy of Needs), immaterial incentives such as the joy of seeing one's work might be a considerable driving force.\n\nThe other thing that I would like to talk about is writing a blog. Not long ago, I committed myself to writing every week on DEV, in the hope of achieving a few higher purposes:\n\n* To leave behind any valuable pieces of knowledge that might help somebody in the future\n* To motivate myself in lifelong learning\n* To participate in community building\n* To join meaningful discussions and\n* To tinker out of pure curiosity and fun\n\nand some practical considerations:\n\n* To create an online presence for my work and interests\n* To share with future employers that I care about CS\n* To work on communicating my ideas as a non-native speaker\n* To earn a side income (if possible) and\n* To gain a public following\n\nIt does not matter if you have the same set of motivations behind your current gigs. We all do them for intrinsic and pragmatic reasons. And what's more important are the reasons why we all CONTINUE to do them. 
It could be discipline and resilience in oneself. It could be monetary, or, in today's terms, the \"eyeballs\" that are sold to you. If none of the reasons why you started what you are doing right now still held, would you continue to do it diligently?\n\nI remember one article on DEV that I read recently. It's about the writer's frustration that the well-crafted tutorial videos he painstakingly put up on his YouTube channel are not getting any views. There are no incentives for him to stay in the game. Although, I might add that the encouragement he received in that DEV article's comment section could possibly keep him going a bit longer.\n\nIncentives are cold and harsh things devised to achieve the public good. One writer falls off the wagon so that a better one gets a bigger share of the pie. The game around incentives has a clear winner and a bunch of dejected losers. Does it have to be like that? Not necessarily (let's make the pie bigger, some might say). However, I do think that the fact that it is unyieldingly fair could work in our favor. If I ever lose my intrinsic motivation to write articles on DEV and there is no external encouragement from anyone or the platform, I might end up discovering something else that is worth my time and effort.\n\nTill that day.",
"content_format": "markdown"
},
{
"url": "https://dev.to/emalp/writing-a-simple-sms-sending-reminder-app-for-absolute-absent-minded-people-like-me-4gc3",
"domain": "dev.to",
"file_source": "part-00046-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nI am very very very forgetful and am too lazy to do anything about it. I wanted the simplest form of reminder, one where I would not need to download any complex apps and where the notification sent by the reminder would repeat itself after a certain number of days.\n\nFinally, I gathered the guts during my uni holidays to write something that is absolutely in its simplest form and would hit the most precious human activation zone: the SMS text. So I'll try and write a simple program that sends me an SMS every certain number of days with a special message.\n\nFor this I'll be using the Telstra SMS API. It gives us 1000 free SMS, which is enough for me as I'll be sending a maximum of just 2 SMS per week; that's only about 104 SMS per year. I'll also be utilising the Python schedule module to help me manage my schedules. So let's begin!\n\nFirst of all, let's import all the required stuff (threading.Timer and schedule are used further below):\n\n```\nfrom __future__ import print_function\nimport time\nfrom datetime import datetime\nfrom threading import Timer\n\nimport schedule\nimport Telstra_Messaging\nfrom Telstra_Messaging.rest import ApiException\n```\n\nNow let's set up the Telstra SMS API:\n\n```\nclass SMSender():\n client_id = 'your_client_id' # str | \n client_secret = 'your_client_secret' # str | \n grant_type = 'client_credentials' # str | (default to 'client_credentials')\n\n def __init__(self):\n self.configuration = Telstra_Messaging.Configuration()\n\n def authenticate_client(self):\n api_instance = Telstra_Messaging.AuthenticationApi(Telstra_Messaging.ApiClient(self.configuration))\n\n try:\n # Generate OAuth2 token\n self.api_response = api_instance.auth_token(self.client_id, self.client_secret, self.grant_type)\n\n except ApiException as e:\n print(\"Exception when calling AuthenticationApi->auth_token: %s\\n\" % e)\n\n def provision_client(self):\n self.configuration.access_token = self.api_response.access_token\n api_instance = Telstra_Messaging.ProvisioningApi(Telstra_Messaging.ApiClient(self.configuration))\n 
provision_number_request = Telstra_Messaging.ProvisionNumberRequest() \n\n try:\n # Create Subscription\n api_response = api_instance.create_subscription(provision_number_request)\n api_response = api_instance.get_subscription()\n\n except ApiException as e:\n print(\"Exception when calling ProvisioningApi->create_subscription: %s\\n\" % e)\n\n def send_sms(self, msg_to, msg_body):\n api_instance = Telstra_Messaging.MessagingApi(Telstra_Messaging.ApiClient(self.configuration))\n send_sms_request = Telstra_Messaging.SendSMSRequest(to=msg_to, body=msg_body)\n\n try:\n # Send SMS\n api_response = api_instance.send_sms(send_sms_request)\n return True\n\n except ApiException as e:\n print(\"Exception when calling MessagingApi->send_sms: %s\\n\" % e)\n```\n\nAs you can see, each function does a specific job.\n\n> authenticate_client authenticates with our client ID and secret against the Telstra API. After that,\n> provision_client provisions our client with a new phone number. In the end,\n> send_sms is a simple method to send an SMS to a given phone number with a specific message.\n\nNow for the fun part: let's start to code the scheduling portion of the program.\n\nWe will have 3 functions for this part.\n\nThe first one is add_repeat_sequence:\n\n```\ndef add_repeat_sequence(self, date_from=datetime.today(),\\\n repeat_after_days=1, at_time=\"09:30\", msg=\"This is your alarm!!\", to=\"0444444444\"):\n\n right_now = int(datetime.today().timestamp())\n date_from = int(date_from.timestamp())\n\n if date_from < right_now:\n raise ValueError(\"date_from cannot be before right_now\")\n else:\n time_diff = date_from - right_now\n\n t = Timer(time_diff, self.set_alarm, [repeat_after_days, at_time, msg, to])\n t.start()\n```\n\nThis function basically sets an alarm at a specified date, given by \"date_from\". 
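To illustrate the delay computation this function relies on, here is a minimal standalone sketch; the two-second target and the print callback are made up for the example and are not part of the Telstra API:\n\n```\nfrom datetime import datetime, timedelta\nfrom threading import Timer\n\ndef seconds_until(date_from):\n    # Delay, in whole seconds, between now and the target datetime.\n    right_now = int(datetime.today().timestamp())\n    target = int(date_from.timestamp())\n    if target < right_now:\n        raise ValueError  # the target date cannot be in the past\n    return target - right_now\n\n# Fire a hypothetical callback roughly two seconds from now.\ndelay = seconds_until(datetime.today() + timedelta(seconds=2))\nt = Timer(delay, lambda: print(delay))\nt.start()\nt.join()  # wait for the timer thread to finish\n```\n\nThe same timestamp subtraction is what feeds the Timer in add_repeat_sequence. 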
After our timer completes, it runs the \"set_alarm\" function with the parameters given in the list.\n\nWe will now write the \"set_alarm\" function which is passed in the Timer's parameters. This function will be called when the timer's set date has arrived.\n\n```\ndef set_alarm(self, day_diff, time_alarm, msg, to):\n\n self.send_alarm(msg, to)\n\n if day_diff <= 1:\n schedule.every().day.at(time_alarm).do(self.send_alarm, msg, to)\n else:\n schedule.every(day_diff).days.at(time_alarm).do(self.send_alarm, msg, to)\n```\n\nHere, our set_alarm basically leverages the 'schedule' package to repeat the alarm at an interval of a certain number of days.\n\nFinally, the \"send_alarm\" function used by \"set_alarm\" sends the actual message:\n\n```\ndef send_alarm(self, msg, to):\n sender = SMSender()\n sender.authenticate_client()\n sender.provision_client()\n sender.send_sms(to, msg)\n print(\"Alarm successfully sent!\")\n```\n\nsend_alarm uses the previously coded SMSender to send the message.\n\nFinally, wrap these three functions into a Repeater class and that's it! Our main function then looks like:\n\n```\nif __name__ == \"__main__\":\n\n repeat = Repeater()\n starting_date = datetime.today()\n repeat.add_repeat_sequence(date_from=starting_date, repeat_after_days=14, msg=\"Heyaaaaa!\", \\\n to=\"+61444444444\")\n\n while True:\n schedule.run_pending()\n time.sleep(1)\n```\n\nThis will successfully send us a \"Heyaaaaa!\" every 14 days, starting today.\n\nHope you liked the post. This is only a simple skeleton to provide the gist. There's still a lot of room for proper error checking, code testing and good code design. Feel free to change the code in whatever way you'd find yourself using it! All you need to do is run it on a server somewhere where it can run uninterrupted.\n\nThanks a lot!\n\nYou can find the full version of the code here in my github.",
"content_format": "markdown"
},
{
"url": "https://dev.to/giteden/how-to-share-vue-components-between-applications-2515",
"domain": "dev.to",
"file_source": "part-00936-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\n### Learn how to easily share Vue components between different projects, sync changes, and build faster using Bit.\n\nBit is a tool and platform for collaborating on individual Vue components across projects/repositories. If you’re not already familiar with Bit, you can read more about it here.\n\nIn this tutorial, we will walk through the main steps of sharing and consuming Vue components with Bit. This will include:\n\n* \n\nSigning up to Bit and creating a collection for our shared components\n\n* \n\nCloning a demo project from GitHub\n\n* \n\nInstalling Bit and initializing a workspace\n\n* \n\nExporting components from an existing project\n\n* \n\nImporting components to a new project\n\n* \n\nUpdating components and exporting them back to Bit’s cloud\n\n* \n\nImporting the updated component into our original project\n\n## Sign up and create a collection\n\nHead over to Bit and create your account. Enter a username and password, or simply use your GitHub account.\n\nNow that you’re a member, create a collection to store your future shared components using the top-right ‘New’ button. Choose between a private collection, for you and your invitees only, or a public collection to be viewed and used by the entire open-source community.\n\n## Clone a demo app\n\nWe need a demo project to really get our hands dirty.\n\nClone and install this demo to-do app:\n\nhttps://github.com/teambit/vue-demo-app\n\n```\n$ git clone https://github.com/teambit/vue-demo-app\n$ cd vue-demo-app\n$ npm install\n```\n\n## Install Bit and initialize a workspace\n\nInstall Bit CLI on your machine using npm:\n\n `$ npm install bit-bin -g` \nLog in to your account (via the CLI):\n\n `$ bit login` \nThis will open up your browser and log in to your account. 
We are now ready to start using Bit.\n\nInitialize a workspace\n\nTo start working with Bit on our newly-cloned project, initialize a workspace by typing into your terminal (in your project's root folder): `$ bit init` \nYou should receive a message stating:\n\n```\nsuccessfully initialized a bit workspace.\n```\n\n## Export our project’s Vue components\n\nTrack new components\n\nOur project is made of single-file components. Each component occupies one and only one .vue file - this sort of architecture is not mandatory but is highly advisable.\nWe can tell Bit to track all our components (located in the ‘components’ directory) with a single command:\n\n `$ bit add src/components/*` \nYou should receive a message stating:\n\n `tracking 4 new components` \nTo make sure Bit tracks our components with no errors or issues that need to be resolved, type in:\n\n `$ bit status` \nYou should expect to see the following message:\n\n```\n> h1 ... ok\n > todo-input-controls ... ok\n > todo-list ... ok\n > todo-list-item ... ok\n```\n\nIf any component has dependency graph issues, click here to learn how to resolve them.\n\nConfigure a compiler\n\nEncapsulating components together with their compilers gives us the freedom to use, build and test them anywhere. This includes running them in the cloud to enable features such as Bit’s live component playground (see an example).\n\nThis is done using pre-made Bit compilers that can be imported into your project’s workspace. 
You only need to do this once, and it can apply to all current and future components you share from it.\n\nTo configure our Vue compiler, type into your terminal:\n\n```\n$ bit import bit.envs/bundlers/vue --compiler\n```\n\nStage (tag) and export your components\n\nWhen tagging a component, three things happen:\n\n* \n\nThe component’s tests will be run\n\n* \n\nThe component will get compiled\n\n* \n\nAll future modifications to this component will not affect this component version\n\nTo tag all our tracked components, we add the --all flag:\n\n `$ bit tag --all 1.0.0` \nSpecifying a version number is not mandatory; you can leave it to Bit (in which case, the patch number will automatically increase on each new tag).\n\nAfter entering your tag command, you should see in your terminal:\n\n```\n4 component(s) tagged\n(use \"bit export [collection]\" to push these components to a remote\")\n(use \"bit untag\" to unstage versions)\n\nnew components\n(first version for components)\n > h1@1.0.0\n > todo-input-controls@1.0.0\n > todo-list@1.0.0\n > todo-list-item@1.0.0\n```\n\nWe’re now ready to export our components to our new collection:\n\n```\nbit export .\n```\n\nFor example:\n\n `bit export bit.vue-demo-app` \nYou should expect to see in your terminal something similar to:\n\n```\nexported 4 components to scope bit.vue-demo-app\n```\n\nYour components are now available to be shared and reused from your collection in Bit’s cloud.\n\nGo to `https://bit.dev//` (or check out my own collection on https://bit.dev/bit/vue-demo-app/) to see them rendered live in Bit’s playground. You can also write examples showing how each component could be used. 
With Bit’s component-hub UI and search engine, discovering components is easier than ever.\n\n## Import a component to a new project\n\nCreate a new Vue project using vue-cli\n\nIf you don’t have Vue CLI installed on your machine, type into your terminal:\n\n `npm install -g @vue/cli` \nLet’s create a new Vue project and name it ‘new-project’:\n\n `$ vue create new-project` \nWe’ll choose the default setup (Babel and ESLint):\n\n```\n? Please pick a preset: default (babel, eslint)\n```\n\nGreat!\n\nNow, let’s say our new project needs an input-field-and-a-button component, just like the one we had in our previous project (‘TodoInputControls.vue’). We can either choose to install it in its built form, using NPM or Yarn, just like any other package:\n\n```\n$ npm i @bit/..todo-input-controls\n```\n\nor we may choose to not only use it but also modify it and even export it back to our collection. Let’s give it a try.\n\nFirst, we need to initialize a new Bit workspace (in our new project’s root directory):\n\n `$ bit init` \nThen, import our ‘TodoInputControls’ component from our collection.\n\n```\n$ bit import ./todo-input-controls\n```\n\nfor example:\n\n```\nbit import bit.vue-demo-app/todo-input-controls\n```\n\nAfter completion, this message should appear:\n\n```\nsuccessfully imported one component\n- added ./todo-input-controls new versions: 1.0.0, currently used version 1.0.0\n```\n\nOur imported component is now located under the newly created ‘components’ directory (located in our root folder, not our src folder).\n\n```\n├───.git\n├───components\n│ ├───todo-input-controls\n```\n\nLet’s open our (.vue) source code inside the ‘todo-input-controls’ directory and make a small change before exporting it back as a new version.\n\nFor example, say we want to modify our button so that it would be disabled not only when the input field is empty but also when it contains nothing but whitespace.\n\nThis is how our button looks before our modification:\n\nThis is how it looks after 
our change:\n\nGreat. We’re done with our updates.\n\nLet’s export our component back to our collection.\n\nOur component is an imported component. That means it is already tracked and handled by Bit. That makes two steps in our export workflow redundant: adding a component to Bit’s list of tracked components (bit add) and configuring a compiler (bit import bit.envs/bundlers/vue --compiler).\n\nWhen a component is tracked by Bit, it is given its own ID. To get the full list of tracked components, type in:\n\n `$ bit list` \nIn our case, we have only a single tracked component.\n\nLet’s see what Bit outputs back to us:\n\nAs expected, we have a single-row table with our tracked component in it.\n\nWe can use our component’s ID to tell Bit to tag it before exporting it back. This time we’ll let Bit decide on a new version number.\n\n```\n$ bit tag ./todo-input-controls\n```\n\nWe should expect to see this notification:\n\n```\n1 component(s) tagged\n(use \"bit export [collection]\" to push these components to a remote\")\n(use \"bit untag\" to unstage versions)\n\nchanged components\n(components that got a version bump)\n > ./todo-input-controls@1.0.1\n```\n\nLet’s export our modified component:\n\n```\n$ bit export .\n```\n\nYou should receive a message stating:\n\n```\nexported 1 components to scope .\n```\n\n## Import the updated components into our original project\n\nLet’s open our previous project and import the updated component from our collection.\n\nCheck for remote changes\n\nRemember $ bit list? 
Let’s add a flag to that, to check for outdated components in our current project.\n\n `$ bit list --outdated` \nYou should see this table in your console:\n\nFetch all outdated components\n\nWe can fetch the latest release of a specific component\n\n```\n$ bit import ./todo-input-controls\n```\n\nor, we can simply fetch all outdated components\n\n `$ bit import` \nYou should expect to see:\n\n```\nsuccessfully imported one component\n- updated bit.vue-demo-app/todo-input-controls new versions: 1.0.1\nEdens-MacBook-Pro:vue-demo-app eden$ bit status\nmodified components\n(use \"bit tag --all [version]\" to lock a version with all your changes)\n(use \"bit diff\" to compare changes)\n\n> todo-list ... ok\n```\n\nThat’s all! 😃\n\n## Conclusion\n\nIn this tutorial, we’ve seen how easy it is to share and collaborate on individual Vue components. Thanks to Bit, the borders of our project’s repository do not mark the borders of collaboration.",
"content_format": "markdown"
},
{
"url": "https://dev.to/yukikimoto/how-does-spvm-work-on-iot-device-such-as-raspberry-pi-32ai",
"domain": "dev.to",
"file_source": "part-00471-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\n# Q. How does SPVM work on IoT Device such as Raspberry Pi?\n\nA. Raspberry Pi is a Linux system. SPVM works well on Linux system. so SPVM work on IoT Device such as Raspberry Pi.\n\n# Q. How Well SPVM works on IoT Device such as Raspberry Pi\n\nA. SPVM creates an executable file by using spvmcc command. The executable file is deployed on IoT Device such as Raspberry Pi by copy.\n\n# Q. How easy is it to get the sensor readings?\n\nA. Sensor access uses C/C++/Assembler.SPVM provides ease of binding to C/C++ using SPVM Native Module.\n\n# Q. How is data transferred to the Internet?\n\nA. General ways using HTTP, TCP/IP, Linux, libraries of programing languages. SPVM::IO::Socket, SPVM::Sys::Socket are currently being developed aiming to complete the implementation of libraries to transfer data to the internet in 2023,",
"content_format": "markdown"
},
{
"url": "https://dev.to/justinctlam/comment/g9bn",
"domain": "dev.to",
"file_source": "part-00936-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nAnother thing to add is to understand the business’ goals and purpose. Too often software engineers think their value comes from the code that is produced. Real value comes from the effect of the code, either to generate money, make some progression to the field or industry through innovation or fixing a problem. So to succeed where you are is to also learn the business where you.\n\nFor further actions, you may consider blocking this person and/or reporting abuse",
"content_format": "markdown"
},
{
"url": "https://dev.to/opennms/opennms-meridian-2019-1-2-earth-released-3g30",
"domain": "dev.to",
"file_source": "part-00514-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nRelease 2019.1.2 is the third release in the Meridian 2019 series.\n\nIt contains a number of alarm classification bug fixes and performance improvements, flow enhancements, and more.\n\nThe codename for 2019.1.2 is Earth.\n\n# Bug\n\n* possible issue in JCIFS Monitor - contiously increase of threads - finally heap dump (Issue NMS-12407)\n* Wrong links in the Help/Support page (Issue NMS-12418)\n* Classification Engine reload causes OOM when defining a bunch of rules (Issue NMS-12429)\n* Cannot define a specific layer in topology app URL (Issue NMS-12431)\n* Classification UI: Error responses are not shown properly (Issue NMS-12432)\n* Classification Engine: The end of the range is excluded, which is not intuitive (Issue NMS-12433)\n* Ticket-creating automations are incorrectly enabled by default (Issue NMS-12439)\n* Enable downtime model-based node deletion to happen when unmanaged interfaces exist (Issue NMS-12442)\n* Improve alarmd Drools engine performance by using STREAM mode (Issue NMS-12455)",
"content_format": "markdown"
},
{
"url": "https://dev.to/mrmagician/aws-overview-part-1-45m3",
"domain": "dev.to",
"file_source": "part-00471-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nI would personally suggest not going too deep in the links to different services mentioned as it may confuse you and I will be explaining those in future posts\n\nIn this post, I will try to explain what AWS is in the best and easy to understandable way possible.\n\n## What the heck is AWS ?\n\nAWS basically stands for Amazon Web Services. In an essence, it's Amazon having a lot of computers stacked in places called data centres across the globe and then offering these computers for use to the general public at justified rates.\n\n## Why do we need AWS ?\n\nHere are some of the benefits of using AWS:\n\n* You don't have to worry about managing the hardware. For eg: let's say during black friday sale your application goes viral with its offerings and you are receiving 10 times the usual traffic. In that case, you can't possibly add that much hardware fast enough and it will impact your customers but if you use AWS then you can set it to auto-scale which will provision the hardware as per the demand or you can do it yourself with just one single click (it's that easy 😉)\n* Its cheaper to use AWS that to use your own hardware and there are a lot of surveys on the internet which have proven this 😌.\n* AWS customers can enjoy a lot of services which if they try to develop on their own, it would take them a lot of time and then the time spent in managing those will be a lot but with AWS you can just use them with few clicks 😎 and they best part is they just configure so easily with other AWS resources.\n* AWS is more cheaper, reliable than the competition always!\n* Since AWS is the oldest player in the game, its offering are tested a lot by the customers and just for reference Netflix, Prime Video etc uses AWS for their infrastructure and I am sure you'll agree with me that they works like charm (though the developers of these apps are also major contributors for this but AWS has empowered them 😀)\n\n## What are the 
offerings by AWS ?\n\nThe offerings by AWS can be divided into two parts:\n\n* \n\nResources\n\nThis includes things like EC2 instances (can be thought of a\n\nsmall server) which basically provision some hardware and we\n\nare able to directly interact with that hardware.* \n\nServices\n\nThis includes almost everything at AWS. Things like SQS queues, SNS topics, lambda, S3 storage which at some lower level also provision some hardware but we are not incharge of managing that hardware and we can't access it.\n\n## What are some of the bad points about AWS?\n\nHere are a few:\n\n* AWS may go down and your product may be impacted by this. You can thought of this as putting all your eggs in the same basket. To avoid such scenarios, make sure that you deploy your apps in different regions and availability zones which would minimise the impact on your service if something at AWS goes bad.\n* Since resources are provisioned automatically (if you configured AWS to do so) sometimes, it may result in you getting hefty bills so my advise would be to add alarms on billing and also set provisioning of resources accordingly.\n* [inspired by @wowik ] If data flow outside the AWS network(to the open internet) is also charged and it may happen that sometimes the amount you pay for the data transfer is equal to or more than what you pay for the resources 🙁.\n\nThat's all from my side. In my future posts, I will try to explain some services from AWS. If you want to add something in it, feel free to comment or reach out and I will then make the changes accordingly.",
"content_format": "markdown"
},
{
"url": "https://dev.to/himwad05/onboard-aws-eks-cluster-on-lens-kubernetes-ide-492l",
"domain": "dev.to",
"file_source": "part-00591-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nToday, while working on a personal kubernetes project, I came across Lens - The Kubernetes IDE and was impressed by a couple of its features:\n\n* Built-In Prometheus monitoring setup with RBAC maintained for each user so users will see only the permitted resources visualizations\n* Built-In terminal which will ensure that it matches the Kube APIServer version with the version of kubectl.\n\nI felt daily administration and interaction with the EKS cluster can really be simplified with these 2 features. I decided to onboard one of my AWS EKS clusters on it but I was not able to find any documentation for Lens with AWS EKS. Although it only requires a kubeconfig - whether you paste it or upload it, the outcome is that it will connect to your cluster and authenticate with it to load all the objects into the Lens. Therefore, I decided to document the steps to make it easier for Lens users.\n\nFor AWS EKS, Lens can be treated as just another client which requires kubectl access. You will need to download the kubeconfig file and save it in ~/.kube folder so lens can read the file and then contact the Kube-ApiServer and aws-auth get the access to the EKS cluster. The process is well documented in AWS under Cluster Authentication section along with the steps and they work fine for both Windows and Linux. Even though I just tried Lens for Windows but I have authenticated kubectl client running on Linux servers numerous time to say confidently that it should work.\n\nI will describe the steps performed below even though they are documented to ensure you do not have to move between different documentation pages:\n\n1. 
Install aws-iam-authenticator on Windows using chocolatey\n\n```\nInstall command:\nSet-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))\n\nIf there are no errors on the above command, run the below command to show the chocolatey version which means its correctly installed:\nchoco\n```\n\n* Install aws-iam-authenticator using chocolatey\n\n```\nOpen a PowerShell terminal window and install the aws-iam-authenticator package with the following command:\nchoco install -y aws-iam-authenticator\n\nTest that the aws-iam-authenticator works:\naws-iam-authenticator help\n```\n\n2. Ensure AWS CLI is installed\n\nIf not, then browse through this documentation. Once it is installed, please add the installation directory to your PATH environment variable using this link as Lens will throw an error otherwise.3. Configure the AWS CLI with the desired role or user\n\nUse aws configure command as shown in this documentation. Please ensure that the user or role has the permissions to use the eks:DescribeCluster API action otherwise you will not be able to update the kubeconfig file using AWS CLI in the next step.4. Create Kubeconfig file for AWS EKS\n\nThe steps are taken from official AWS Documentation which I have tested successfully\n\n* Confirm that you are using the correct role or user:\n `aws sts get-caller-identity` \n\n* Generate the kubeconfig file automatically\n\n```\naws eks --region region-code update-kubeconfig --name cluster_name\n\nNote: replace the following with your desired values:\n region-code = Region where EKS cluster is located such as ap-southeast-1\n cluster_name = Name of the cluster in that region\n```\n\nThe kubeconfig should be located under C:\\Users.kube\\config. 
Please replace the path \"C:\Users\" with that of the currently logged-in user to get to the .kube folder.\n\n5. Upload the kubeconfig file in Lens\n\nClick the + button in the top left corner, which will give you the option to upload a kubeconfig or paste it manually. Once you have selected the kubeconfig file, it will ask you to select a context; select the required context and then click the \"Add cluster(s)\" button at the bottom, which will start the authentication and add the objects into Lens for your consumption.\n\nThe above steps should let you onboard your EKS cluster into Lens, but please note the steps will be different if you are not using AWS EKS. I hope this helps everyone using AWS EKS.",
"content_format": "markdown"
},
{
"url": "https://dev.to/msyyn/become-a-better-front-end-developer-17c0",
"domain": "dev.to",
"file_source": "part-00780-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nAlso available on Medium.\n\nIf you have been around the front-end scene for a while, you probably know that a few years ago being a front-end developer meant that you handle HTML, CSS and maybe some basic Javascript, whereas backend logic was done with languages like PHP. Nowadays, the term front-end developer has broader usage.\n\n## Two sides of the front-end\n\nAccording to article Front-of-the-front-end and back-of-the-front-end, there are two types of front-end developers. Front-of-the-front-end developers focus on look & feel and back-of-the-front-end developers focus on business logic and functionalities.\n\nCredit and source: Twitter/@shadeed9\nA front-of-the-front-end developer is a web developer who specializes in writing HTML, CSS and presentational JavaScript code.\n\nA back-of-the-front-end developer is a web developer who specializes in writing JavaScript code necessary to make a web application function properly.\n\nOften those who only maintain a skill set of HTML/CSS are falsely mistaken as inexperienced developers which is not the case. A front-end developer shouldn’t feel pressure to learn Javascript if they want to focus on the visual aspect of the front-end rather than the functionality side of the front-end.\n\nLet’s take a look at this picture:\n\nCredit and source: The Great Divide, Chris Coyier\nBoth of these are front-end developers, but do entirely different things and obtain different skill sets. Understanding this division is crucial for shaping your own path as a front-end developer.\n\n## Ask yourself why\n\nIn many social situations people often ask us ‘What do you do?’ When answering the question, you would probably say something along the lines ‘I am a developer’ or ‘I build web applications’.\n\nNow instead, you should approach that question with why you do what you do. Take a moment to think of your why. What is something that really sparks joy in you when developing? 
The answer could be something like ‘I love crafting beautiful and accessible online stores to make online shopping a better experience for everyone’.\n\nWhen you have an answer for yourself about why you do what you do, it becomes easier to place yourself on either side of front-end development. If your why is more related to visuals or usability, you’re leaning towards the front-of-the-front-end, and if your why is more related to logic or systems, you’re leaning towards the back-of-the-front-end.\n\n## Choose your path\n\nNow that we have explored the two sides of the front-end and taken a moment to understand your whys, it’s easier to choose the path to continue on, or identify the path you’re currently on.\n\nIf what you prefer to write is semantic and accessible HTML, you know CSS/SCSS architecture and build pleasant layouts, you’re probably a look & feel oriented developer who fits in the front-of-the-front-end category.\n\nIf you prefer writing complex JavaScript logic and working with modern JavaScript frameworks and APIs, you’re probably a functionality-oriented developer who fits in the back-of-the-front-end category.\n\nThe line between these two sides varies from developer to developer, and you could be a mix of both. Understanding the division helps you better categorize yourself, which should help shape your path as a developer and grow your set of unique skills.\n\nSources and credits:\n\nThe Great Divide by Chris Coyier (@chriscoyier )\n\nfront-of-the-front-end and back-of-the-front-end web development by Brad Frost (@brad_frost)\n\nThank you for reading! Consider following me on Twitter for more insightful content.",
"content_format": "markdown"
},
{
"url": "https://dev.to/briwa/comment/bf2o",
"domain": "dev.to",
"file_source": "part-00046-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nThe recent one was about Highcharts. You were supposed to have an animation when hovering the mouse on the legend.\n\njsfiddle.net/y92haq35 (somehow the fiddle can't be embedded)\n\nIt was fine on an isolated environment, but somehow the animation didn't appear on our page. I thought it was a configuration issue, so I copied and pasted the exact same config on our page. It didn't appear too. I was having trouble inspecting the styles because it isn't triggered semantically by CSS, rather by JS.\n\nI inspected the source code, but every hover class was firing up properly. I tried it on a different page in the app, the animation is there. So something was causing it in the original page.\n\nAfter painstakingly removing the components/modules in that page one by one, seeing which one causing the problem, I found out that there is a line of CSS that goes like this:\n\n```\n/*\n* (a note about a bug its trying to fix)\n*/\n.highcharts-series-hover {\n opacity: 1 !important;\n}\n```\n\nBasically this line says all Highchart series would have an opacity of 1, so even if the animation kicks in, this line overrides it (with the `!important` ) so that it looks like there is no animation. Should've fixed the actual bug from the issue tracker...\nAnd that concludes 3 hours of debugging. I think I didn't do a good job debugging it, any suggestions? 😂\n\nOn another note, how do you all prevent these kind of CSS bugs? Visual regression test? Eye test?\n\nFor further actions, you may consider blocking this person and/or reporting abuse",
"content_format": "markdown"
},
{
"url": "https://dev.to/redhacks/mentorship-group-mdj",
"domain": "dev.to",
"file_source": "part-00780-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\n# Red Hacks Practice Group\n\nOur group is an open and safe space for practice in contributing to open-source projects, especially those maintained by Red Hat. This includes the projects created by this group as well as OpenShift, Quarkus, Ansible, and many related projects.\n\nOur goal is to help developers be successful in their careers in open source. If you'd like to practice open-source development using the same processes and tools that world-class companies use, you're very welcome to join.\n\nDISCLAIMER: This is an independent group, not affiliated with Red Hat or IBM in any way.\n\n## What members get?\n\nAs a member, you will be granted access to our projects, meetings, and resources as if we were working in the same software team at an open-source company:\n\n* Interesting and realistic projects\n* Weekly meetings to sync in ideas and challenges\n* 1:1 mentorship sessions with mentors\n* Chat group and asynchronous collaboration\n* Pull-request reviews\n* Modern tooling (github, gitpod, AWS and more)\n\n## What members need to know?\n\nThe group is open to all levels of seniority. There is no problem if you are just beggining in technology. However, there are at least two things you'll need to contribute to the conversation:\n\n* Basic english communication skills to participate in the meetings and write comments or documents.\n* Any modern programming language, preferably Python, Java or Go, where you'll find more peers here.\n\n# What is expected from members?\n\nAlso, we expect that members:\n\n* Respect our code of conduct. 
TL;DR: Don't be an asshole.\n* Drive its participation proactively: participate in the meetings, schedule mentorships, assign tickets, and engage others.\n\nThe number of participants in the group is limited, please participate and make the most of your time in the group.\n\n## How to join?\n\nIf you'd like to join the group, there are only two things you need to do:\n\n* Sponsor Caravana Cloud on GitHub: https://github.com/sponsors/CaravanaCloud\n* Submit your contact info: https://forms.gle/GkudAYCoZa3M6pXq5\n\nAfter that, you'll be added to our google group and github team. That should grant you access to all the links, invites and content. Send us a message if you need help with anything.\n\nTo accomodate everyone's agenda, we have two meeting times:\n\n* 17:00 BRT on the first and third monday of the month\n* 13:00 BRT on the second and fourth monday of the month\n\nNobody is expected to join all meetings. However, they are a great opportunity to ask, share and meet your fellow members.\n\n# Where to find all the links?\n\nAll links to private meetings and channels are in the group private repo on github at https://github.com/CaravanaCloud/redhacks\n\n# Frequently Asked Questions\n\nIs this free?\n\nNo, but it's really cheap, $5/month. We just want to keep the trolls out and the lights on.\n\nWhere can I learn more?\n\nBesides this blog, you can send us a message on X/Twitter https://twitter.com/CaravanaCloud or email help at caravana.cloud\n\nWhy is this in english?\n\nTo better help members participate in real open-source, in english. Even thou we speak other languages, most open-source is done in english and we need to practice that as well. However, do not worry, it is perfectly OK to make mistakes, this is a place for safe, constructive practice.\n\nWhy are people speaking portuguese and spanish?\n\nOur community is huge in Latin America, and sometimes we need to speak some \"spanglish\", please nevermind. 
We'll be back to english in a moment.\n\nIs it OK to speak portuguese in meetings?\n\nYes IF YOU ARE SURE that everyone in the meeting speaks portuguese.\n\nPlease help us make everyone feel welcome.\nDo I need to go to all meetings\n\nNo, but you are welcome to. Most members come on their prefered time, 2 times a month.\n\nWhere does the money go?\n\nFirst, to the projects themselves, expenses like cloud services and developer tooling.\n\nThen to donations to organizations selected by the group.",
"content_format": "markdown"
},
{
"url": "https://dev.to/vyckes/going-reliability-first-in-2020-24lo",
"domain": "dev.to",
"file_source": "part-00591-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nAnother year, and another big list of applications broken by `datetime` bugs. And what do you think? It will be a leap year. So we have to deal with these bugs, twice. How is it that after so many years of engineering, we still have these issues? And what does it have to do with my 2020 resolution?\nFor me to determine my engineering resolution, I have to look back at 2019 and the years before. I have to see what happened and what improved. What can we improve still?\n\n## 2010-2018\n\nThe biggest part of the past decade I filled with studies and being a student. It was during the beginning of the decade I found a new hobby: web design and development. It started with a free university license for Photoshop. I moved from creating small logos to implementing layouts in HTML and CSS. From friends, I learned about PHP and WordPress, which got my interest. Programming courses at the university helped me kick-start my hobby. But that was what it stayed, a hobby.\n\nIt was not until the last years of my studies that I got renewed interest information systems. I got interested in designing systems, and how they interact with each other. I found joy in creating UML-diagrams for instance. But one thing always got my interest more: how do users use our systems? I got to know Finaps. The rest is history.\n\n## 2019\n\n2019 was a fruitful year in my professional career. In 2018 we started an experiment within Finaps to see if we could change our technology stack. Could we scale our technical knowledge from low-code platforms, towards enterprise worthy 'fit-for-purpose' applications? This would mean we had to expand our technology stack (we moved towards React, .NET Core & GraphQL). At the beginning of 2019, we pursued this path on a larger scale. 
This brought some big changes for us and for me:\n\n* The cross-functional team I work in tripled in size;\n* I became the lead engineer of the team;\n* We went from one front-end engineer (me) to five front-end engineers in the team.\n\nThis path continued and will continue in (the beginning of) 2020. In the meantime, I finally launched my blog. This was in the works for over ten years, but I never pulled the trigger to release it. In June 2019, I finally created the blog I always wanted. Since then, I have written a small set of articles and even had some success. One of my articles took off on The Practical Dev. Even with low visitor numbers, I found great joy in writing and updating my website.\n\n## Going into 2020: 'reliability first'\n\n2020 will be a challenging year. I have to step up as team lead. I have to keep my team happy and enable them to grow in the directions they want. In the first half of the year, this will be a big focus. Not for the team, but for me, as I have much learning to do before I can enable my team.\n\nLooking at front-end development, I have some clear goals for 2020. With projects growing in size, our way of tackling them has to mature. We already looked into a scalable architecture, but that was only the beginning. Always trying the 'next best thing' is fun, but our applications do not always benefit from it. They become less reliable. 2020 will be the year I grow my fundamental knowledge to improve reliability. I am going 'reliability first'. This means I will focus on:\n\n* Becoming better at testing my code;\n* Researching and applying concepts like 'finite state machines' in front-end state management;\n* Researching concepts known from back-end development and seeing how they can be applied in the front-end (and whether they should be applied!). A good example is the publish-subscribe pattern, which we already use in our architecture;\n* Determining how to track user behavior and errors. This should provide insights on what to focus on when maintaining applications (e.g. performance improvements);\n* Developing with performance in mind (e.g. optimizing assets, lazy loading, code splitting, or applying memoization);\n* Applying data normalization in state management and studying its impact on the application and on collaboration within a team;\n* Data structures and algorithms, and when to apply them in front-end development.\n\nEach point has some value on its own. But combined, they provide a very solid base for reliable large-scale applications. Especially when working with a team on larger projects, solid foundations are crucial. So this will be my primary focus of 2020. Everything I learn along the way, I will share on this website.\n\nBut my biggest goal for 2020 is becoming a good father, which, as of February 2020, I will be! And it is my most exciting goal in 2020, without question.",
"content_format": "markdown"
},
{
"url": "https://dev.to/dangerismycat",
"domain": "dev.to",
"file_source": "part-00755-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\n# Ryan James\n\nFull-Stack Software Engineer, bogosort enthusiast\n\n# Seven Year Club\n\nThis badge celebrates the longevity of those who have been a registered member of the DEV Community for at least seven years.\n\n# Six Year Club\n\nThis badge celebrates the longevity of those who have been a registered member of the DEV Community for at least six years.\n\n# Five Year Club\n\nThis badge celebrates the longevity of those who have been a registered member of the DEV Community for at least five years.\n\n# Four Year Club\n\nThis badge celebrates the longevity of those who have been a registered member of the DEV Community for at least four years.\n\n# Three Year Club\n\nThis badge celebrates the longevity of those who have been a registered member of the DEV Community for at least three years.\n\n# Two Year Club\n\nThis badge celebrates the longevity of those who have been a registered member of the DEV Community for at least two years.\n\n# One Year Club\n\nThis badge celebrates the longevity of those who have been a registered member of the DEV Community for at least one year.\n\n### Want to connect with Ryan James?\n\nCreate an account to connect with Ryan James. You can also sign in below to proceed if you already have an account.",
"content_format": "markdown"
},
{
"url": "https://dev.to/sayem_omer/stun-turn-and-ice-servers-nat-traversal-for-webrtc-5e29",
"domain": "dev.to",
"file_source": "part-00046-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nBecause of the absence of IP address space and the need to keep a hidden system's engineering obscure to outside clients, NATs are utilized. These decipher inside private IP delivers to outside open IP addresses.Approaching traffic into the private system is steered through an open location restricting that happens on the NAT gadget. This coupling is made when an inside machine with a private IP address attempts to get to an open IP address (through the NAT).\n\nVoIP and WebRTC requires to pass media packets like video/audio into two peers which needs to pass external packets into internal networks.For these situation webRTC needs mechanisms like STUN , TURN and ICE server. Whats these do ! going forward.\n\nWe know webRTC is p2p connection. So in a private network connection between two client don't need any server.\n\nBut when a client A from private network need to connect to Client B who is in a public network , it gets block by NAT firewall and it doesn't know the public ip to connect. TO rescue first see how STUN server do.\n\nClient query the STUN server for his public ip and STUN provides public IP.With this provided public IP Client A can connect to Client B .\n\nSO, problem solved! Not yet ! STUN servers doesn't provide same public IP if the client call in different servers. so the bindings is not unique. Thats a problem ! another life-saving mechanism is TURN server.\n\nTURN server actually don't provide any Public IP , its relays media . it takes media from client A and provide to client B , without looking the packet. Compare to STUN server it's very much operation expensive. To reduce cost both STUN and TURN can be use in case of STUN server failure .\n\nThese STUN and TURN servers has dawbacks.STUN is backend inexpensive but doesn't work always . TURN server is reliable but backend expensive .Finally ICE servers comes handy !\n\nWhat ICE server do , its holds everything , its a collector . 
It collects local IP , STUN server's reflexive IP , TURN sever's relay media addess and stored in a remore peer via Session Description Protocol (SDP) . WebRTC client receives the ICE address of itself and the peers and send media through connectivity checking .\n\nThats all upto ICE server which is used in most of the WebRTC implimentation . Good news there more optimized WebRTC Natting named ICE Trickle . Maybe talks about it some other day . Happy learning !",
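In client code, all three mechanisms collapse into configuration: you hand the peer connection a list of ICE servers, and the browser gathers the STUN/TURN candidates for you. A minimal sketch in JavaScript (the server URLs and credentials below are made-up placeholders, not real endpoints):

```javascript
// Sketch: building the ICE server list handed to a WebRTC peer connection.
// All URLs and credentials here are placeholders.
function buildIceConfig(stunUrl, turnUrl, turnUser, turnPass) {
  return {
    iceServers: [
      // STUN: cheap, only used to discover the client's public address
      { urls: stunUrl },
      // TURN: expensive media relay, used as a fallback when STUN fails
      { urls: turnUrl, username: turnUser, credential: turnPass },
    ],
  };
}

const config = buildIceConfig(
  'stun:stun.example.com:3478',
  'turn:turn.example.com:3478',
  'alice',
  'secret'
);

// In a browser, this object would be passed straight to the constructor:
//   const pc = new RTCPeerConnection(config);
console.log(config.iceServers.length); // 2
```

During connectivity checks the browser tries the direct and STUN-derived candidates first and falls back to the TURN relay only when they fail, which matches the cost ordering described above.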
"content_format": "markdown"
},
{
"url": "https://dev.to/powerexploit/the-world-s-most-dangerous-search-engine-shodan-1ja5",
"domain": "dev.to",
"file_source": "part-00046-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nIts about a year ago when I started to learn about cyber security & hacking.\n\n' I was working on a tool which helps me to find out particular router on internet but it was getting difficult me to find it.just after a minute I opened up my browser & searched about how to find webcams and router on internet?\n\nWow!!\n\nI saw something amazing which let's me amazed a shodan search engine .\nSHODAN:\n\nShodan worlds most dangerous search engine over the internet that lets the user find specific types of computers (webcams, routers, servers, etc.) connected to the internet using a variety of filters.\n\nShodan is also available as Linux tool it means we can use this dangerous search engine using Linux terminal.\n\nFor more info visit Shodan official site.",
"content_format": "markdown"
},
{
"url": "https://dev.to/devteam/shecoded-2021-stories-from-women-building-software-and-the-allies-supporting-them-49pf",
"domain": "dev.to",
"file_source": "part-00514-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nNote: Given the global nature of our audience, this year, we have decided to begin celebrating International Women’s Day when the first timezone hits Monday, March 8. That's 10 AM UTC on Sunday, March 7.\n\n## A very happy International Women's Day, 2021! 🎉\n\nAs you may know, each year at DEV, we celebrate International Women's Day by collecting and highlighting stories within the DEV community — this celebration is called \"Nevertheless, She Coded\".\n\n## Join us for #SheCoded 2021!\n\n### Jess Lee for The DEV Team ・ Mar 1 '21\n\nThe official theme of International Women's Day this year is \"Choose to Challenge\". At DEV, we see this theme as an invitation to fight against gender and gender non-conforming inequity, harmful stereotypes, and the limitations our society puts on women who code.\n\nWhile #SheCoded is focused particularly on the stories of women in the tech, for our fifth year in a row, we welcome posts from the entire community, including allies — regardless of gender.\n\nBelow are just a few #SheCoded, #TheyCoded, and #SheCodedAlly stories that embody the spirit of \"Nevertheless, She Coded\" and International Women's Day. We invite you to browse the entire #SheCoded library and contribute your own post to our collection. ❤️\n\n### @rachelombok\n\n\"If you are unsure how to be a good ally, ask how you can support your underrepresented peers and their coding journeys. Being clear and open about things like this will make you a better ally, and you may learn a new way of how to support that you might've not thought of.\"\n\n### @kotzendekrabbe\n\n\"My journey to becoming an Executive Principal Engineer working in Developer Relations was not easy. 
Here is my story and why I believe we need to challenge ourselves.\"\n\n## Article No Longer Available\n\n### @marklewin\n\n\"If you are a woman or non-binary person who has been put off joining us in the past, I plead you to reconsider now.\n\nYou’re not the heroes we deserve, but you are the heroes we need.\"\n\n### @omeal\n\n\"Women are interested in fast-paced work life, they love to solve interesting problems and look for opportunities to grow.\"\n\n## Women in Technology\n\n### Supriya Shashivasan ・ Mar 4 '21\n\n### @danielaf\n\n”This is what I have done [since #SheCoded last year]:\n\n- I completed a Udemy course of JavaScript\n\n- I signed up for Edabit - which is an excellent way to “keep fit” by solving 1 or 2 challenges every day\n\n- I found a tutor at Coding Tech. I have been doing online lessons each week and it has helped massively\n\n- I finished the book \"JavaScript & JQuery\", which the teacher of my Front End course recommended. A book sounds sort of old school when it comes to programming, but studying in the old way can be very effective. This guide is very well done, and I highly recommend it\n\n- I started to develop my own project, so I am coding for fun, which is with no doubt the best way to learn and keep motivated.\"\n\n## Article No Longer Available\n\n### @adiatiayu\n\n\"I have found my safe place and a great support system in my journey. 
They guide me to grow, develop myself, allow me to contribute and help with what I can give.\"\n\n## Nevertheless, Ayu Coded in 2021 And No Longer Alone!\n\n### Ayu Adiati ・ Mar 2 '21\n\n### @itsasine\n\nIf we are to survive as a species, it will be because we one day realize that every member of our society has something to offer and that the contributions of all of us working together will get us further than simply relying on manpower ever could have.\n\n### @bb_loft\n\n”We know that women have been disproportionately impacted by COVID and we can all lend our support to the community around us in helping those impacted however we are able. As folks are starting to plan physical events, I hope we can keep the increase in accessibility and inclusivity practices that virtual events have enabled.”\n\n### @jenc\n\n\"Instead of waiting to be told how senior I am and worrying I'm overvaluing myself, I've come to understand that my skills and specialization are valuable in their own right.\"\n\n## \"How good am I?\" Reflections on 5th year designer-developer doldrums\n\n### Jen Chan ・ Mar 5 '21\n\n### @whykay\n\n”My most recent achievement was finally creating a site to curate tech events and diversity in tech events hosted by organizers based in Ireland and Northern Ireland.”\n\n## Nevertheless, whykay 👩🏻💻🐈🏳️🌈 (she/her) Coded in 2021!\n\n### whykay 👩🏻💻🐈🏳️🌈 (she/her) ・ Mar 3 '21\n\n# Thank you to everyone who participated in #SheCoded this year. Your words, personal challenges, and victories are awe-inspiring. If you want to contribute your story or want to browse the many posts not mentioned here please visit dev.to/shecoded. We will be highlighting posts through March 9th.\n\nP.S. As part of the celebration, we have a special SheCoded 2021 collection of merch on the DEV Shop. Anyone who participates in this year’s event with an approved post will get a $20 coupon to the shop to go towards this collection.",
"content_format": "markdown"
},
{
"url": "https://dev.to/jhooq/what-are-terraform-dynamic-blocks-part-8-1d9n",
"domain": "dev.to",
"file_source": "part-00780-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nTerraform dynamic block behaves pretty much the same as for or for-each loop. It not only iterates over the value range but also creates nested dynamic blocks which can be complex.\n\nTerraform dynamic block usually consists of three elements -\n\n* Name of dynamic block\n* for_each\n* \n\ncontent\n\n* \n\nName of dynamic block - You can keep the name of the block as per your choice, terraform does not have very strict rules on defining the dynamic block names\n\n* \n\nfor_each - Here you need to provide any collection or structural values which can be iterated\n\n* \n\ncontent - It is a terraform block or body which will be generated for each value of the for-each loop\n\nTerraform dynamic block - https://jhooq.com/terraform-dynamic-block/",
"content_format": "markdown"
},
{
"url": "https://dev.to/dougajmcdonald",
"domain": "dev.to",
"file_source": "part-00514-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\n# Doug McDonald\n\n404 bio not found\n\n### Want to connect with Doug McDonald?\n\nCreate an account to connect with Doug McDonald. You can also sign in below to proceed if you already have an account.\n\nAlready have an account? Sign in\n\nloading...",
"content_format": "markdown"
},
{
"url": "https://dev.to/cliffordfajardo",
"domain": "dev.to",
"file_source": "part-00204-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\n# Clifford Fajardo\n\nEngineer at LinkedIn Interests: Web Platform, Typescript/Open Source Research & Software, Developer tooling, PKM/BASB🧠, Education & Communities Bootcamp Alum (Hackreactor)\n\nWork\n\nSoftware Engineer at LinkedIn\n\n### Want to connect with Clifford Fajardo?\n\nCreate an account to connect with Clifford Fajardo. You can also sign in below to proceed if you already have an account.\n\nAlready have an account? Sign in\n\nloading...",
"content_format": "markdown"
},
{
"url": "https://dev.to/patrickstolc/what-s-next-javascript-generation-3kjh",
"domain": "dev.to",
"file_source": "part-00936-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nWhen creating a no-code tool, you quickly realize that the users’ skill set and training differ very much. You see anything from no coding experience to very skilled programmers. Our vision at Busywork has always been to allow designers and frontend developers to make web apps without programming, recognizing that web apps’ backend is the most time-consuming and challenging to make.\n\nAs the Busywork platform has evolved, it has been apparent to us that while users find the node-based editor intuitive and easy to use, integrating the backend with their frontend is hard. The fact that users know how to build but lose their way when needing to integrate their work in their frontend is a critical problem to solve.\n\nWhat do we mean by integration? A web-based frontend usually consists of HTML and CSS. Python or JavaScript are often used to write backends. The integration of the two is the glue that ties everything together and turns a static site into an app. The glue is JavaScript.\n\nWhile tools like Webflow take the code out of making a frontend, we at Busywork take the code out of making a backend; there are very few tools that allow you to connect the two easily.\n\nBusywork will also generate your glue (JavaScript).\n\n# How will we solve \"Integration\"?\n\nThe problem of integrating our Busywork backend into any frontend isn’t new. We’ve shown off a demo that solves this problem visually and intuitively. But it has felt too disconnected from our platform.\n\n> Just a quick demo: Connect your @busyworkhq backend to @webflow without writing any code.\n\nyoutu.be/NSBLT491vEU\n\n@c_spags this one is for you 👊08:33 AM - 21 Feb 2020\nWhen planning for a new feature, we are always thinking about the long term viability and how it should support higher complexity than we can imagine at the moment. 
We’ve realized that there’s only so much you can do using forms (check the previous demo) before you lose track — especially with conditional logic. Also, embedding the integration tool directly into a hosted static site creates a big assumption: you have a deployed version of your static site.\n\nTo solve the issues arising from a form-based integration tool, we’ve decided to move everything into our node-based editor. The move to the node-based editor will allow users to keep better track of their integration-flows, and the way of working is more cohesive with how you already use Busywork. With this way of creating integration-flows, we also introduce new building blocks. You will exclusively use these for connecting your backend to your frontend.\n\nBackend workflows get executed when a frontend or server sends an HTTP request to the URL of the workflow. The execution of integration-flows is different. Integration-flows get triggered by user interactions, such as the click of a button, or changes in the app state.\n\nSuppose you want to let a user add a new record to your database, it could be a new job post. In this simple case, your frontend would have a form that allows the user to fill in the job post’s details. On the backend side, you would insert the data into the database. To connect the frontend and backend, the objective would be to execute the backend workflow when the user has filled and submitted the form. From the result of the workflow, successful or failed, you can close the loop and give the user feedback with something like a message.\n\n# How it Works\n\nBusywork revolves around the concept of workspaces: one workspace per project. The goal is to separate concerns and isolate workflows, databases, and user management. A new idea that we introduce is Triggers. 
Triggers are what you need to set up to connect your front and backend, and are a part of a workspace.\n\nOur goal is to take the most popular paradigms from programming and make them accessible to people with little or no programming experience. The Busywork platform itself uses components, and we are confident that this is the best way to build an app.\n\nA component is a reusable element in your web app with precisely defined behavior and optional data-binding. If we look at the simple example of a todo-list, it consists of a list of tasks and a form that allows you to create a new task. Applying the paradigm of components to this example, we would have a component representing the list and a component that represents the form.\n\nA task list component has to present the tasks from the database. For the frontend to show the tasks, you need to establish a way of communicating with the backend and defining one or more functions that change your frontend. Traditionally.\n\nWith Busywork, you make one or more flows on a per-component basis that connects the specific fields from the database that you want to display on your frontend.\n\nThe new task component has to send the user input to the backend, which then adds a new record to the database. Breaking this component down reveals a longer chain of actions: user clicks form button -> add the task to database -> return a successful or failed message to the user.\n\nAs the users create a new task, the list of tasks needs to be updated. You don’t need to extend your “new task flow” to do this. Busywork detects any changes that are bound to your frontend and updates it automatically.\n\nThe above is just a short teaser of what we are working on to make integration easier. We will reveal more details, examples, and videos very soon.\n\n# Our Long Term Goal\n\nWhat we’ve done up until this point has laid the ground for letting users connect their Busywork backend to any web-based frontend. 
We’ve focused very much on Webflow, but we can also support other frontend platforms. The current limitation is still the need for a static site to connect front and backend.\n\nWe want to give designers the power of full-stack developers, and we accept that web-builders might not be their go-to tool. To accommodate more design-oriented tools, we will be working on a direct integration with Figma. The Figma integration will allow designers to generate static sites from their designs, export the frontend code or deploy it to Netlify, and finally connect their app with Busywork.\n\nSynchronizing design with deployment, without the additional step of rebuilding the frontend, removes a lot of repetitive work. It will enable designers to ship the product they design. It allows designers to show working products to customers while iterating on the design. Ultimately, designers will be able to hand off working apps to their customers.\n\nOriginally published at https://blog.busywork.co.",
"content_format": "markdown"
},
{
"url": "https://dev.to/dmarcr1997/bash-cut-operation-4ecf",
"domain": "dev.to",
"file_source": "part-00046-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nAs promised, here is the next addition to my bash blog series. In this article, I wanted to cover the cut operation. With knowledge of the cut operation parsing, sorting, and reading data becomes a lot easier in bash. This post covers the basics of cut and should give a good foundation to use it how you see fit.\n\n## Cut\n\nTo get the information you want from your string, data, or whatever else just follow it with | cut and the flag of your choice. The example below uses the character flag:\n\n```\nread line\necho $line | cut -c 3\n```\n\nThe command above reads in a line and then outputs the third character of whatever was input. This is done by placing the -c(character flag) after cut and then giving the placement of the character you want from the string(starting from 1). This can also be extended to cut multiple characters using either a '-' or ','. For example:\n\n```\nread line //=> Hello World\necho $line | cut -c 1-5 //=> Hello\necho $line | cut -c 1,5,8 //=> Hoo\n```\n\n## Cutting Flags\n\nThere are many different flags that can be used with cut other than the character flag; read more here. I want to cover two very useful flags the -d, and -f flags. The -d or the delimiter flag specifies what to cut the input on. The -f or field flag allows you to specify the index of the items you want back from what you split with the delimiter flag. These two are often used together to get back specific chunks of information.\n\nFor example:\n\n```\nread line //=> hi how are you\necho $line cut -d $' ' -f 1-3 //=> hi how are\n```\n\nThere are many more ways to use this command and other flags to explore. I would love to see any examples I missed here and hear any feedback to make this series better. Next week I will cover reading from a text file.",
"content_format": "markdown"
},
{
"url": "https://dev.to/naismith/new-year-new-goals-my-2020-professional-resolutions-1ehd",
"domain": "dev.to",
"file_source": "part-00204-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\n2019 was a crazy year for myself, both professionally and personally. In the span of 2 months I got married, bought and moved into my first house, and started a new job. Talk about stress. But in the final moments of 2019, I hope to plan out and goal set for the upcoming year and decade.\n\nHopefully writing down my goals and sharing them, will encourage me to stick to them.\n\n## 💻 Writing More\n\nI wrote my first blog post on dev.to in 2019. In 2020 I plan to write 12 different blog posts by the end of the year. Sharing knowledge is incredibly important, either for your team or bettering the community.\n\nI had been meaning to write much more in 2019, but every time I was close to publishing a post I always second guessed myself. But hopefully by jumping 'into the deep end' I can keep up this momentum and continue to share my insights/knowledge with others, and learn back from them.\n\n## 📚 Reading More\n\nReading is a great way to learn from others. Wether it's books, blog posts, articles you can always learn something new. This year, similar to writing more, I'd like to read 12 new books. As a kid I loved reading, and in 2019 I started reading again after not reading for so long.\n\nIn addition to books, I plan to read more blog posts and articles. I have no specific goal in mind, but I look forward to the new content that 2020 has in store.\n\n## ⌨️ A New Front End Framework\n\nIn our industry we joke about new frameworks coming out every month, day or hour. When developing on the front end, I tend to lean towards React. While I love React, I am interested in playing with some alternatives in either strengthening my love for React or finding something better.\n\nBeing comfortable for far too long in anything can cause you to plateau in your skills. 
Which is why I plan in 2020 to pick up Vue.js or Svelte for a small project to see if the grass is greener on the other side.\n\n## 🌐 Open Source\n\nHacktoberfest came and went this year and I failed to participate. Obviously it's optional, but the idea of contributing back to the community I benefit from is of huge interest to me. I'm hoping by the end of the year to have made a beneficial impact on at least 1 open source project.\n\nMy current interest is in creating a plugin for the open source CMS Strapi. If you are not familiar with it, I recommend taking a look.\n\n## 👫 Community\n\nI created a developer meetup in the city I grew up in, as there's little to no social community there. I hope to continue its success in 2020 by making new friendships, helping educate others, and learning from others over some pizza.\n\n## 📣 Talking\n\nI had never given a tech talk until this year. I hope to continue giving talks in the new year, as it has been a lot of fun (in fact, I converted my most recent talk into my first ever blog post - 2 birds with one stone!).\n\n## ☯️ Balance\n\nI recently stopped a part-time job as it was severely cutting into my personal time. Early mornings and late nights were starting to strain my mental health and relationships, as it would take time to recover. But one thing I hope to do in this new decade is take the time to enjoy the world around me and my life with my new wife. No one on their deathbed wishes they worked more.\n\n# Conclusion\n\nThere's certainly more I could add to this list, like getting a promotion or a raise. But I think those would be in most people's resolutions.\n\nWhat's on your list? Got any book recommendations? Did you meet your goals? I'd love to hear your comments down below.",
"content_format": "markdown"
},
{
"url": "https://dev.to/couellet/build-an-e-commerce-progressive-web-app-with-gatsby-b02",
"domain": "dev.to",
"file_source": "part-00755-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nI tried buying a t-shirt on my phone the other day.\n\nFirst, I get redirected to a `http://m.thatsite.com` URL.\nMobile site loads...\n\nContent finally appears, along with a fullscreen pop-up:\n\nDownload our mobile app for a better shopping experience!\n\nI tap the link. App store loads...\n\nBad reviews, blurred screenshots, 50 MB. Sigh\n\nI close both app store and browser.\n\nThis familiar, sketchy shopping experience could have been avoided.\n\nHow? With progressive web app e-commerce.\n\nProgressive Web Applications (PWAs) have been on the rise these last years. Solid PWA examples are popping up everywhere, and for good reasons.\n\nThey encourage an inclusive, global, adaptative approach to web development. They make sense both from a user AND a business POV, as we'll see in this piece. Frameworks like React and Vue JS are increasingly used to craft PWAs.\n\nIn this post, I'll cover:\n\n* A definition of PWAs\n* A case for PWA e-commerce\n* An overview of Gatsby.js for PWAs\n* A detailed PWA example with steps, code repo, and live demo\n\nLet's dive in.\n\n→ Read the full post here",
"content_format": "markdown"
},
{
"url": "https://dev.to/ianjennings/learn-more-about-your-candidates-with-video-take-home-challenges-40jp",
"domain": "dev.to",
"file_source": "part-00204-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nAfter years of testing developer experience, today I'm turning the tables and letting you test your developer candidates!\n\nPaircast.io is the platform we built for testing DX and it turns out it's mighty good at assessing developers' skills.\n\nYou can see how developers solve problems and where they get stuck as they complete coding challenges on their own machine. You get all the quality of a technical interview with the convenience of a take-home assignment. No engineers needed to conduct technical interviews.\n\n👉 Learn more by watching how developers solve problems, not just end results.\n\n👉 Real-world work, not brain teasers\n\n👉 No more cheating on take-homes, you can see developers write every line\n\n👉 Get a higher signal with less time invested by your candidates\nPersonally, I'm super excited to launch Paircast into the world as I'd never pass an online coding challenge myself. After hearing a bunch of MLH Fellowship students complain about these coding tests, I knew there had to be a better way. Let's burn \"crack the coding interview\" and focus on real-world skills!",
"content_format": "markdown"
},
{
"url": "https://dev.to/syakirurahman/how-do-you-stay-motivated-to-work-on-side-project-42j7",
"domain": "dev.to",
"file_source": "part-00046-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nI have a full time job as a full-stack developer. Since covid-19 pandemic, i'm working from home. So i got more spare time and i decided to rebuild my own blog devaradise.com.\n\nI wrote some tutorials there and published them weekly. But sometime, i'm feeling unmotivated to write even when i have some topics in mind.\n\nSo, i just want to know if there is anyone who feel like me. How do you stay motivated for your side project? It doesn't have to be a blog. It can be an open source project, a startup project, or anything.",
"content_format": "markdown"
},
{
"url": "https://dev.to/siddharthshyniben/soim-social-image-generator-cli-4ffk",
"domain": "dev.to",
"file_source": "part-00315-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nIn my last post on Social Image generation, I talked about how I built a site which can be used to automate social image generation. Well, I just built a CLI for it! Introducing `soim`.\n\n## SiddharthShyniben / soim\n\n### Social Image generator\n\n# \n\n `soim` This image was made by running `soim -t \"This is some sample output of soim\" -T \"social, images, generator\" -i \"https://raw.githubusercontent.com/AnandChowdhary/undrawcdn/master/illustrations/images.svg\" -c \"Made by @SiddharthShyniben\" -p \"cover.png\"` `soim` is a CLI tool for generating social images. Given data, `soim` uses\nPuppeteer to screenshot a page. `soim` can also be used as a library. The exported `generateImage` function takes\nan object as options, and the options are the same as the CLI options.\n\n## Install\n\n```\n# locally\n~$ npm i soim\n\n# globally (for CLI)\n~$ npm i -g soim\n~$ soim -t ...\n```\n\n## CLI options\n\n* \n `-t`, `--text` : The main text.* \n `-T`, `--tags` : Comma separated list of tags, shown at the top of the image* \n `-p`, `--path` : The place to write the image* \n `-l`, `--link` : Custom link where the social image template lives. Useful if you want to design a custom template. See…It's a small CLI tool to generate social images. Just run `soim --text Something --caption something ...` and get an image!\nIt uses puppeteer under the hood to generate images. I'll write a post on it someday.\n\nHope you like it!",
"content_format": "markdown"
},
{
"url": "https://dev.to/bartoszgajda55/5-best-study-resources-for-gcp-ace-exam-1jfi",
"domain": "dev.to",
"file_source": "part-00204-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nRecently, I have finally had a chance to attempt an exam for Google Cloud Certified Associate Cloud Engineer certification and I have passed in first attempt! This achievement has been possible, thanks to thorough preparation, during which I have used a selection of resources. In this post I will be sharing with you 5 of those that I think have contributed the most to the positive outcome of the GCP ACE exam. Enjoy!\n\n# 1. Official GCP ACE Study Guide\n\nThe first element on the list is probably the most comprehensive source of knowledge you can get to prepare for GCP ACE exam. This book consists of 18 chapters, each covering a large chunk of knowledge you have to possess in order to pass the exam. Chapters include screenshots, so you can easily recreate the steps shown on the paper. Each of those chapters is finished with 20 question mini-exam, that reflects quite good the questions you might encounter on real assignment.\n\nIn addition to that, you also get two full practice exams and around 100 flashcards. The flashcards are especially useful in the learning process, but I know it’s quite personal thing.\n\nThis books costs typically around £25, which I think is really good bang for the buck.\n\n# 2. A Cloud Guru Course\n\nThe second source I have used was A Cloud Guru course on GCP ACE certification exam. I have used the Udemy to get it, but you can also buy it from an official page (it’s probably better to buy at the source).\n\nThe course is quite comprehensive, especially the GKE and Kubernetes related chapters. It takes around 10–12 hours to complete, depending on your studying pace. 
This course also covers all the chapters like the book, but in a much more practical fashion.\n\nOnline courses have a big advantage over the written word, as it is easier to follow what the instructor is doing and to get hands-on experience of the platform.\n\nThe price of this course varies — I got it on Udemy for around £15, but that was during a sale (which thankfully happens quite often on Udemy). If you buy it directly, it is going to cost you a bit more.\n\n# 3. Linux Academy Course\n\nAnother online course comes from Linux Academy. They are known for their high-quality exam prep courses, and this one is no exception. It takes a bit longer than the A Cloud Guru one, but in my opinion it’s for the better.\n\nThis course covers all of the exam study points (as do all of the above sources) but, in my personal opinion, does the best job of showing how to actually use GCP in practice. The lessons are optimally sized and the instructor does a great job of explaining the inner workings of Google Cloud services.\n\nAt the time of writing, this course is only available on the Linux Academy site. The best option there is to go with a membership — that will cost you \\$49 per month.\n\n# 4. Google Code Labs\n\nGoogle Code Labs is my personal favourite on this list. This site is a collection of labs that are completely free and guide you step by step through implementing a real-world solution or application on Google Cloud Platform.\n\nThere is a broad selection of labs, covering more Google services than you actually need to pass the ACE exam. Each lab shows you the exact steps you have to complete in order to finish with a complete and working solution to a selected problem (like deploying a Spring Web app on Google App Engine).\n\nIt is a great resource not only for the ACE exam but for any Google Cloud certification.\n\n# 5. Google Qwiklabs\n\nThe last resource on the list is probably well known to anyone preparing for the GCP Associate Cloud Engineer exam. 
Qwiklabs provides so-called ‘quests’, a form of labs that guide you through Google Cloud Platform and show the underlying fundamentals of the environment.\n\nIt’s quite comprehensive, although I wouldn’t risk attempting the exam after Qwiklabs alone, but it serves as great supporting material.\n\n# Summary\n\nI hope you have found this post useful. If so, don’t hesitate to like or share this post. Additionally, you can follow me on my social media if you fancy 🙂\n\n> Want to become @googlecloud Certified Associate Cloud Engineer? This post shows 5 best resources to study and pass the exam with ease!\n\nbartoszgajda.com/2020/07/13/5-b…\n\n#google #googlecloudplatform #gcp #gcpace #asociatecloudengineer #exam #certification #cloud #aws #azure18:43 PM - 28 Sep 2020",
"content_format": "markdown"
},
{
"url": "https://dev.to/thepracticaldev/daily-challenge-90-one-step-at-a-time-23l",
"domain": "dev.to",
"file_source": "part-00471-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nYou have a square matrix of unsigned integer values. Say, 5 + 5.\n\nEach row of that matrix contains numbers from 1 to 5 (or to whatever length of the matrix), randomly sorted.\nYou can move through the matrix only one step at the time - either straight down, left-down or right-down.\n\nYour challenge is to write a code that will start at the middle of the top row, and find the path down with the highest sum.\n\nExample:\n\nFor the following matrix, you start in c/0. For this matrix the best path would be: `c/0`, `d/1`, `d/2`, `e/3`, `d/4` which scores `4+4+5+4+5 = 22`.\nThis challenge comes from peledzohar here on DEV.\n\nWant to propose a challenge idea for a future post? Email yo+challenge@dev.to with your suggestions!",
"content_format": "markdown"
},
{
"url": "https://dev.to/rachelsoderberg/salesforce-outbound-messages-part-2---testing-a-net-web-service-connection-with-salesforce-outbound-messages-5a4c",
"domain": "dev.to",
"file_source": "part-00471-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nWelcome to Part 2 of my Salesforce Outbound Messages series! At this point, I am going to assume you have an Outbound Message in Salesforce that fails when you create a new Quote and a Web Service that compiles successfully. If you don't have both of those ready, go back to Part 1 so you can get up to speed (don't worry, we'll wait!): Salesforce Outbound Messages Part 1\n\nThe first thing you'll need to connect your Web Service and Outbound Message is an endpoint - since our service isn't processing actual data yet we'll technically need two \"endpoints.\" If that sounds confusing, don't worry, it's more simple than you think.\n\nFirst load your web service project and run your service; in my case this is called QuoteService.svc. If you've done everything correctly, the WCF Test Client should launch. Copy the localhost address that looks something like http://localhost:63881/QuoteService.svc. Note: Keep the WCF Test Client running.\n\nLaunch SoapUI and create a New SOAP Project, pasting the web service address you just copied into the Initial WSDL field with ?wsdl at the end like so: http://localhost:63881/QuoteService.svc?wsdl\n\nCheck Create Requests and click OK. You'll find your new Soap Envelope a few levels down in the navigation bar, called Request 1.\nLeave the Soap Envelope as-is for now and navigate to PutsReq.com. If you aren't familiar with PutsReq, it is a website that lets you record HTTP requests and fake responses, and forward requests. This allows us to see the data that would have been passed from our Salesforce Outbound Message to our Web Service if it were completely set up. Create a PutsReq and copy the provided PutsReq URL at the top. Paste this URL into your Salesforce Outbound Message Endpoint URL in place of the dummy URL you provided in Part 1. 
Note: If you click the PutsReq URL, it should take you to an otherwise empty page that says \"Hello World\" in the top left corner.\n\nCreate a new Quote in Salesforce. If you used the NOT( ISBLANK( Order_XML__c ) ) Rule Criteria for your Workflow Rule, be sure to set the Order XML field on the new Quote to something (I set mine to \"NOT BLANKVALUE\"). This Quote will still show up as failed in the Outbound Messaging Delivery Status, but you should see a new Request on PutsReq.\n\nAlong with this new Request, there should be a POST section at the bottom of PutsReq with a Soap Envelope:\n\nYou may have noticed this Soap Envelope looks much like the one that was generated in SoapUI a few steps ago. You are correct! Copy this and paste it into SoapUI, replacing their boilerplate code.\n\nHopefully your WCF Test Client is still running in the background - if it's not you'll get errors with the Soap Request, so go run your Web Service again. Click submit (the little Play button) on your Soap Request and wait a few moments. If everything is set up correctly, you will be notified in the right panel with a True Ack.\n\nThe next step, which I'll take you through in Part 3 of this series, is adding more fields so your Outbound Message can be more useful.",
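The \"True Ack\" that SoapUI shows is just a small SOAP envelope returned by the service. As a rough illustration of what any listener (in any language, not just WCF) has to send back, here is a minimal Python sketch; the namespace is the one from the standard Salesforce outbound-messaging WSDL, but treat the exact envelope as an approximation:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# The Ack envelope Salesforce expects back from an outbound-message listener.
ACK_XML = (
    '<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">'
    "<soapenv:Body>"
    '<notificationsResponse xmlns="http://soap.sforce.com/2005/09/outbound">'
    "<Ack>true</Ack>"
    "</notificationsResponse>"
    "</soapenv:Body>"
    "</soapenv:Envelope>"
)

class AckHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read (and here, ignore) the inbound notification body,
        # then acknowledge it so Salesforce stops retrying delivery.
        self.rfile.read(int(self.headers.get("Content-Length", 0)))
        body = ACK_XML.encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/xml; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To actually listen:
#   HTTPServer(("localhost", 8080), AckHandler).serve_forever()
```

Until Salesforce receives that `Ack`, it keeps the delivery in the failed queue and retries, which is exactly why the test Quotes show up as failed while the endpoint is still PutsReq.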
"content_format": "markdown"
},
{
"url": "https://dev.to/aahnik",
"domain": "dev.to",
"file_source": "part-00315-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\n# Aahnik Daw\n\nRecursive Knowledge Trap\n\n# Four Year Club\n\nThis badge celebrates the longevity of those who have been a registered member of the DEV Community for at least four years.\n\n# Writing Debut\n\nAwarded for writing and sharing your first DEV post! Continue sharing your work to earn the 4 Week Writing Streak Badge.\n\n# Three Year Club\n\nThis badge celebrates the longevity of those who have been a registered member of the DEV Community for at least three years.\n\n# Two Year Club\n\nThis badge celebrates the longevity of those who have been a registered member of the DEV Community for at least two years.\n\n# One Year Club\n\nThis badge celebrates the longevity of those who have been a registered member of the DEV Community for at least one year.\n\n# 4 Week Writing Streak\n\nYou've posted at least one post per week for 4 consecutive weeks!\n\n# Top 7\n\nAwarded for having a post featured in the weekly \"must-reads\" list. 🙌\n\n# Hacktoberfest 2020\n\nAwarded for successful completion of the 2020 Hacktoberfest challenge.\n\n### Pinned\n\n### Want to connect with Aahnik Daw?\n\nCreate an account to connect with Aahnik Daw. You can also sign in below to proceed if you already have an account.",
"content_format": "markdown"
},
{
"url": "https://dev.to/nilanth/10-react-packages-with-1k-ui-components-2bf3",
"domain": "dev.to",
"file_source": "part-00046-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\n10 React Packages which has more than 1K UI Components to speed up the development.\n\nReact has a very large community with more packages. UI is the Wow factor of any Application. When it comes to React apps, we might have some complex UIs and flows. We will see the best UI components packages, which includes more than 1K components.\n\n## 1. Ant Design\n\nAnt Design is an enterprise-class UI design language and React UI library. Which is the most popular React UI library based on GitHub Stars. It has 100 plus components from typography to tables. Ant Design document is very clean and has clear examples.\n\nAnt Design does not save only developer time, it saves designers time also, As it includes Sketch and Figma files for all components. Ant Design component supports both JSX and TypeScript. Customizing the ant theme is also very simple. Ant Components save a lot of time for developers in handling forms and validation as it has prebuilt form components. Ant Design also supports hooks.\n\nGitHub - 73.8K ⭐\n\n## 2. Material-UI\n\nMaterial-UI is also the most popular React UI library, It is a simple and customizable component library to build faster, beautiful, and more accessible React applications. Material-UI contains 100 plus components. It also includes 1K plus icons.\n\nMaterial UI also provides paid Sketch, Figma, Adobe Xd files for designers. Material UI is also used by top organizations like Spotify, NASA, Netflix, Amazon and more. Material UI has well-prepared documentation with code samples.\n\nGitHub - 70.3K ⭐\n\n## 3. Chakra UI\n\nChakra UI provides a set of accessible, reusable, and composable React components that make it super easy to create websites and apps. Chakra UI components follow the WAI-ARIA guidelines specifications and have the right aria-* attributes. Chakra UI community is growing faster due to its performance and experience. 
Chakra UI has well-prepared documentation with code samples.\n\nGitHub - 20K ⭐\n\n## 4. React Bootstrap\n\nReact Bootstrap lets you use Bootstrap components in React. React Bootstrap components are built from scratch in React and do not contain jQuery. React Bootstrap contains all the Bootstrap components we used with JavaScript. It now includes Bootstrap 5 in the beta stage. React Bootstrap has well-prepared documentation with code samples.\n\nGitHub - 19.8K ⭐\n\n## 5. Semantic UI React\n\nSemantic is a UI component framework based around useful principles from natural language.\n\nSemantic UI React is the official React integration for Semantic UI. It contains 50 plus components and is jQuery-free, with auto controlled state, sub components and more. If your React app needs Semantic UI, this is the package to prefer.\n\nGitHub - 12.4K ⭐\n\n## 6. Fluent UI\n\nFluent is an open-source, cross-platform design system that gives designers and developers the frameworks they need to create engaging product experiences - accessibility, internationalization, and performance included. Fluent design is used for Windows 10 devices and tools, and also for Windows 11.\n\nFluent UI is developed by Microsoft. It has a collection of utilities, React components and web components for building web applications. It has good documentation.\n\nGitHub - 12K ⭐\n\n## 7. Evergreen\n\nEvergreen is the UI framework used to build product experiences at Segment. It serves as a flexible framework, and a lot of its visual design is informed by plenty of iteration with the Segment design team and external contributors. Evergreen has 30 plus components and the documentation also seems good.\n\nGitHub - 11K ⭐\n\n## 8. Reactstrap\n\nReactstrap helps you use Bootstrap 4 components with React. It is simple to configure and use, and has good documentation for its components.\n\nGitHub - 10.1K ⭐\n\n## 9. 
Grommet\n\nGrommet is a React-based framework that provides accessibility, modularity, responsiveness, and theming in a tidy package. It has 60 plus components. It also provides Sketch, Figma and Adobe XD files and 600 plus SVG icons. Grommet is used by Netflix, Samsung, Uber, Boeing, IBM and more organizations.\n\nGitHub - 7.4K ⭐\n\n## 10. Reakit\n\nReakit is a lower-level component library for building accessible high-level UI libraries, design systems and applications with React. Reakit is tiny and fast.\n\nGitHub - 5K ⭐\n\n## 11. Mantine\n\nMantine is a React component and hooks library with native dark-theme support and a focus on usability, accessibility and developer experience. Mantine includes more than 100 customizable components and hooks.\n\nGitHub - 1.8K ⭐\n\n## Conclusion\n\nA UI library saves development time and reduces the number of extra dependencies. There are a few more UI libraries; I have listed only the most used. I hope you have found this useful. Thank you for reading.\n\nGet more updates on Twitter.\n\nYou can support me by buying me a coffee ☕\n\n## More Blogs\n\n* No More ../../../ Import in React\n* How to Create Public And Private Routes using React Router\n* Redux Toolkit - The Standard Way to Write Redux\n* 5 Packages to Optimize and Speed Up Your React App During Development\n* How To Use Axios in an Optimized and Scalable Way With React\n* 15 Custom Hooks to Make your React Component Lightweight\n* 10 Ways to Host Your React App For Free\n* How to Secure JWT in a Single-Page Application\n* Redux Auth Starter: A Zero Config CRA Template",
"content_format": "markdown"
},
{
"url": "https://dev.to/uilicious/explain-distributed-storage---and-how-it-goes-down-for-github--uilicious--cloud--etc-1mni",
"domain": "dev.to",
"file_source": "part-00591-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\n# Background Context\n\nIt's been a bad month for sysadmins (Oct 2018), first youtube then github\n\n> GitHub Status@githubstatusWe are continuing work to repair a data storage system for GitHub.com. You may see inconsistent results during this process.04:12 AM - 22 Oct 2018\n\nTo start: #hugops to the folks at github for fixing their systems. It isn't easy to keep large-scale systems up and running (having been there myself)\n\nLiquid error: internal\n\nAnd apparently, while distributed data storage is used everywhere in the cloud, not many developers truly understand it...\n\nSo, what is distributed storage? Why does every major cloud provider use it? And, why and how do they fail?\n\n# What is Distributed Storage?\n\nDistributed Storage, here, collectively refers to \"Distributed data store\" also called \"Distributed databases\" and \"Distributed File System\".\n\nAnd is used in various forms from the NoSQL trend to most famously, AWS S3 storage.\n\nThe core concept is to form redundancy in the storage of data by splitting up data into multiple parts, and ensuring there are replicas across multiple physical servers (often in various storage capacities).\n\nData replica could either be stored as exact whole copies or compressed into multiple parts using Erasure Code\n\n> \n\nBecause erasure code, encryption, parity bits, involves complex math to explain how they work and does not change much to the concepts in this article - other than compressing data. I would simply use the inaccurateerm \"replicas\", to save both of us the math lecture 😉\n\nRegardless of the storage method used, this helps to ensure that there are X copies of data stored across Y servers. Where X <= Y. (For example, 3 replicas across 5 servers).\n\n# Ugh complex math, why do we do this?\n\nIn large scale systems, it is not a question of \"if\" a server will fail, but \"when\". 
After all, with the amount of hardware, it's pretty much a statistical fact.\n\nAnd you really do not want such a large-scale system to go down whenever a single server sneezes.\n\nThis contrasts heavily with what used to be more commonplace - redundancy of drives on a single server (such as RAID 1), which protects data from hard-drive failure.\n\nWhen done correctly, distributed storage systems help the system survive downtime, such as a complete server failure, an entire server rack being thrown away, or even the extreme of a nuclear explosion at a data center.\n\nOr as xkcd puts it ...\n\n# Nuclear-proof database? Wouldn't such a system be extremely slow?\n\nWell, that depends... part of the reason why there are so many different distributed systems out there is that every system makes some sort of compromise that favors one attribute over another. Examples include latency or ACID guarantees.\n\nAnd one of the most common compromises for distributed systems is accepting the significant overheads involved in coordinating data across multiple servers. So when a single server node is benchmarked against non-distributed systems, they tend to lose out.\n\nHowever, what they gain in return is horizontal scalability, such as the capability to span across a thousand nodes.\n\nA common example that many developers might have experienced: uploading a single large file to cloud storage can be somewhat slow. On the flip side, however, they can upload as many files concurrently as their wallet will allow, because the load will be distributed across multiple servers.\n\nOr for CERN (you know, the giant Large Hadron Collider), it means 11,000 servers, with 200 PB of data, transmitting more than 200 gigabits/s. With zero black holes found to date\n\nSo yes, a single server is \"slow\". For \"fast\" multi-server you have parallelism, with replicas distributed among them.\n\n# Oh wow, cool. 
So how do these replicas get created in the first place?\n\nWithout going into how any one distributed system works, in general:\n\n* \n\nWhen a piece of data gets written to the system by a client program, its replicas are synchronously created on various other nodes in the process.\n\n* \n\nWhen a piece of data is read, the client either gets the data from one of the replicas, or from multiple replicas where the final result is decided by majority vote.\n\n* \n\nIf any replica is found corrupted or missing due to a crash, it is removed from the system, and a new replica is created and placed on another server, if available. This happens either on read or through a background checking process.\n\n* \n\nWhat data is valid or corrupted, and where each replica should be placed, is generally decided by a \"master node\" or via a \"majority vote\" (and replica placement is sometimes simply \"random\").\n\n* \n\nOne of the most important recurring things to note, however, is that many operations require a majority vote among all replicas for the system to work.\n\n# With so many redundancies in place, how does it still fail?\n\nFirst off... that depends on your definition of failure...\n\n| System State | Description |\n| --- | --- |\n| All 💚 | Every node and replica is working. Yay! |\n| Majority of replicas are 💚 | Everything is fine. Replicas may be down, but as long as we have the majority, users will not notice a thing, provided we replace those downed replicas in time... |\n| Majority of replicas are 🔴 | Houston, we have a problem. Depending on the system design, it either goes into read-only mode, a hard failure state, or just continues on as per normal (rare) with the rest of the nodes, and suffers from split brain. |\n| All 🔴 | Everything is down. Oops. Hopefully we can recover the cluster or restore those backups. They do exist, right? 
😭 |\n\nOne of the big benefits of a distributed system is that in many cases, when a single node or replica fails, behind the scenes a system reliability engineer (or sysadmin) will replace the affected servers without any users noticing.\n\n> \n\nWhich is probably one of the most overlooked and thankless aspects of the work involved. No one is aware of what happens. On record, this has already happened twice within Uilicious this year, and for infrastructure the size of Google and Amazon, I'm certain it would be a daily occurrence. So #hugops\n\nAnother thing to take note of is the definition of failure from a business perspective. Permanently losing all of your customer data is a lot worse than entering, let's say, read-only mode or even crashing the system.\n\n> \n\nParticularly for GitHub: if users have yet to push their commits, they are well aware they need to keep their local copy of the data for later uploads. However, if they have already uploaded, they might delete their local copy to clear up space for their <1TB SSD storage laptops.\n\nHence, many such systems are designed to shut down or enter read-only mode first, where long, manual restoration would be needed instead, to ensure data safety. This is, by design, a \"partial failure\".\n\nAnd finally... Murphy's law... means that sometimes multiple nodes or segments of your network infrastructure will fail. You will face situations where the majority vote is lost.\n\n# How is a redundancy fixed then?\n\nWith cloud systems, this typically means replacing a replica with a new instance.\n\nHowever, depending on one's network infrastructure and data set size, time is a major obstacle.\n\nFor example, replacing a single 8TB node over a gigabit connection (or 800 megabits/second effectively) would take approximately 22.2 hours, or 1 whole day rounded up. And that's assuming optimal transfer rates which would saturate the system. 
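That 22.2-hour figure falls straight out of the arithmetic; a quick sketch (decimal terabytes assumed):

```python
def rebuild_hours(capacity_tb, effective_mbit_s):
    """Hours needed to re-replicate a failed node of `capacity_tb`
    terabytes over a link moving `effective_mbit_s` megabits/second."""
    bits = capacity_tb * 8e12            # 1 TB = 8 * 10^12 bits
    seconds = bits / (effective_mbit_s * 1e6)
    return seconds / 3600

print(round(rebuild_hours(8, 800), 1))  # 8TB at 800 Mbit/s -> 22.2 hours
print(round(rebuild_hours(8, 400), 1))  # at half the rate -> 44.4 hours, ~2 days
```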
To keep the system up and running without noticeable downtime, we may halve the transfer rate, doubling the time required to 2 whole days.\n\nNaturally, more complex factors will come into play, such as 10-gigabit connections, or hard-drive speed.\n\nHowever, the main point stays the same: replacing a damaged replica in a \"large\" storage node is hardly instantaneous.\n\n> \n\nThis is also why you will see tweets with an ETA on how long it will take for data to get synced up in a cluster\n\nAnd for the record, considering that they are probably running a multi-petabyte cluster, getting data synced up in a day is \"fast\"\nAnd it's sometimes during these long 48-hour windows that things get tense for the sysadmin. With a 3-replica configuration, there would be no room for mistakes if the majority of 2 healthy nodes is to be maintained. With a 5-replica configuration, there would be a breathing room of 1.\n\nAnd when the majority vote is lost, one of three things will occur:\n\n* Split brain : where you end up with a confused cluster\n* Read-only mode : to prevent the system from having 2 different datasets (and hence a split brain)\n* Hard system failure : some systems prefer inducing a hard failure, rather than causing a split brain.\n\n# Ugh, my head hurts. What is the split brain problem?\n\nA split brain starts to occur when your cluster splits into 2 segments. Your system would start seeing two different versions of the same data as the cluster goes out of sync.\n\nThis happens most commonly when a network failure causes half of the cluster to be isolated from the other half.\n\nIf data changes were subsequently to occur, half of the system would be updated while the other half stays outdated.\n\nThis is \"alright\" until the other half comes back online. 
And the data might be out of sync, or even changed due to usage, with the other half of the cluster.\n\nBoth halves would start claiming they are the \"real\" data and vote against the \"other half\". And as with any voting system that is stuck in gridlock, no work gets done.\n\nThis can happen even with an odd number of replicas. For example, if one replica has a critical failure and decides to \"not vote\"\n\n# Ok, so how do we prevent this in the first place?\n\nFortunately, most distributed systems, when configured properly, are designed to prevent split brain from happening in the first place. The safeguards typically come in 3 forms.\n\n* \n\nA master node which gets to make all final decisions (this, however, may create a single point of failure if the master node goes down; some systems fall back to electing a new master node if that occurs).\n\n* \n\nHard system failure to prevent such a split brain, until the cluster is all synced back up properly; this ensures that no \"outdated\" data is shown.\n\n* \n\nLocking the system in read-only mode; the most common sign of this is certain nodes showing outdated data in read-only mode.\n\nThe latter is the most common for distributed systems, and was also seen in the recent GitHub downtime\n\n> GitHub Status@githubstatusWe continue work to repair a data storage system for GitHub.com. You may see inconsistent results during this process.02:01 AM - 22 Oct 2018\n> \n\nA notable exception would be distributed cache systems such as Hazelcast, which take the approach that the data with the \"latest\" timestamp wins when resolving split-brain problems.\n\nThis may be \"alright\" in a caching use case, and intentionally so, but it can be a big problem for a more persistent storage system, as it will lead to data being discarded. 
In many cases this situation needs a context-specific decision by either the programmer or the sysadmin to resolve.\n\n# That's a lot to take in - so how can I use one of these then?\n\nWell, you probably already are.\n\nThese days most entry-level AWS or GCP instances use some form of \"block storage\" backend for their hard drives, which is a distributed storage system.\n\nMore famous would be object storage such as S3 itself, and pretty much any cloud storage. Even for dedicated servers, most backup systems, such as the ones provided by Linode and DigitalOcean, use some form of distributed storage.\n\nBeyond cloud services, there are many open-source deployments that use distributed storage: pretty much every NoSQL database, and even Kubernetes itself, because it comes deployed with etcd, a distributed key-value store.\n\nSubsequently, for notable specific applications, there is CockroachDB (SQL), GlusterFS (file storage), Elasticsearch (NoSQL) and Hadoop (big data). Even MySQL can be deployed in such a setup, known as group replication, or alternatively through Galera.\n\nThe ability to use such distributed systems and not think about it is ultimately what you pay the \"cloud tax\" for: someone behind the scenes is solving these problems for you.\n\n# So should I be angry when GitHub or a cloud provider is down?\n\nBe cool and #hugops - give them time, they are probably on it to the best of their abilities. Even the best of us can make the same mistakes.\n\nSpecial thanks: to @feliciahsieh for helping proofread and fix all my awful commas and grammar.\n\n# About Uilicious\n\nUilicious is a simple and robust solution for automating UI testing for web applications. Signing up for our free trial and writing test scripts to validate your distributed system can be as easy as the script below.\n\nOr alternatively, tests such as these ...\n\nUPDATE: This is now turned into a series with a part 2 expanding on distributed storage systems.",
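The majority-vote behaviour that keeps recurring in the article above can be sketched in a few lines of Python (a toy model, not any particular system's implementation):

```python
from collections import Counter

def quorum_read(replies):
    """Resolve a read across replicas by majority vote.
    `replies` holds the value each replica returned (None = replica down)."""
    votes = Counter(v for v in replies if v is not None)
    if not votes:
        raise RuntimeError("all replicas down")
    value, count = votes.most_common(1)[0]
    if count <= len(replies) // 2:
        # No majority: refuse the read rather than risk serving a split brain
        raise RuntimeError("majority vote lost - go read-only or hard-fail")
    return value

print(quorum_read(["a", "a", None]))  # 2 of 3 replicas agree -> "a"
```

With 3 replicas and 1 down, the read still succeeds; once the majority disagrees or disappears, the toy refuses the read, mirroring the read-only and hard-failure modes described above.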
"content_format": "markdown"
},
{
"url": "https://dev.to/roguecode25/setting-up-a-postgresql-database-in-aws-rds-and-connecting-it-to-pgadmin-10mb",
"domain": "dev.to",
"file_source": "part-00514-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\n## Introduction\n\nPgAdmin is a client-side platform for adminstration and development of PostgreSQL database. It simplifies the visualization of schemas, logs and other relevant information based on SQL standards. Amazon has a great relational database hosting service known as Amazon Relational Database Service (RDS). The article focuses on creating a PostgreSQL database in RDS and connecting it pgAdmin.\n\n## Creating a PostgreSQL database in AWS\n\n* \n\nSetup and login to your AWS account as ROOT user. Go to services, choose database and select RDS.\n\n* \n\nYou will be redirected to a dashboard as shown below. Click on create database.\n\n* Select PostgreSQL under engine options and choose templates as shown below. Use Free Tier if setting up the database for personal projects.\n\n* Scroll down to settings and set master username and password. Copy these credentials to the clipboard as they will be used to make the pgAdmin connection.\n\n* Scroll down to connectivity and select \"Yes\" for public access. This will allow the database to accept the connection from local setup.\n\n* Scroll down and click on \"create database\". The process will take a few minutes and you will be directed to the page below. Copy the endpoint to the clipboard for later reference.\n\n* Under security, click on the VPC security groups and you will be redirected to an EC2 console as shown below.\n\n* Click on security group ID and you will be redirected to the page below.\n\n* Under inbound rules, check if there are rules for PostgreSQL that accepts TCP connections. If there isn't click on \"edit inbound rules\". Click on \"Add rule\". Under type, select PostgreSQL. Under source, select custom. 
Under the search input, select `0.0.0.0/0` and click on \"Save rules\".\nThe PostgreSQL database has now been successfully created and is ready to accept connections from a local host.\n\n## Installing pgAdmin and making the RDS connection\n\nFirst, we need to install pgAdmin to make the connection to our PostgreSQL database hosted in AWS RDS.\n\nHere is an installation guide for installing pgAdmin on Ubuntu, Mac and Windows.\nAfter installing pgAdmin, launch it and register a new server as shown below.\n\nUnder general, enter a server name and move to the connection tab. Under host name/address, paste the endpoint copied from the AWS RDS console. Enter the port set in RDS. Under username, enter the master username set on RDS, along with the corresponding password. Under advanced, set connection timeout (seconds) to 1000 secs to avoid a timeout during connection. Leave every other configuration at its default.\n\nAfter the connection is accepted, the pgAdmin dashboard will look as follows:\n\nNow you can view tables created under schemas/tables.\n\n## Conclusion\n\nPgAdmin offers a versatile interface to view database information, while AWS RDS offers a great service to host databases in the cloud for better security and ease of access.",
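Before reaching for pgAdmin, a quick way to confirm that the security-group rule above actually lets you in is a plain TCP check; the endpoint in the comment is a hypothetical placeholder, not a real one:

```python
import socket

def can_reach(host, port=5432, timeout=5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical RDS endpoint -- substitute the one copied from the console:
# can_reach("mydb.abc123.us-east-1.rds.amazonaws.com")
```

If this returns False, the problem is networking (public access, security group, or VPC), not your pgAdmin credentials.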
"content_format": "markdown"
},
{
"url": "https://dev.to/lefebvre/oop-101-understanding-classes-343p",
"domain": "dev.to",
"file_source": "part-00315-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nIn its simplest form, a class is a container of code and data much like a module. But unlike a module, a class provides better code reuse. Classes are the fundamental building blocks of object-oriented programming.\n\nClasses offer many benefits, including:\n\n* Reusable Code: When you directly add code to a PushButton on a Window to customize its behavior, you can only use that code with that one PushButton. If you want to use the same code with another PushButton, you need to copy the code and then make changes to the code in case it refers to the original PushButton (since the new PushButton will have a different name than the original). Classes store the code once and refer to the object (like the PushButton) generically so that the same code can be reused any number of times without modification. If you create a class based on the PushButton control and then add your code to that class, any usage (instances) of that custom class will have that code.\n* Smaller Projects and Applications: Because classes allow you to store code once and use it over and over in a project, your project and the resulting application is smaller in size and may require less memory.\n* Easier Code Maintenance: Less code means less maintenance. If you have basically the same code copied in several places of your application, you have to keep that in mind when you make changes or fix bugs. By storing one copy of the code, when you need to make changes, you'll spend less time tracking down all those places in your project where you are using the same code. Changes to the code in a class are automatically used anywhere where the class is used. Easier Debugging The less code you have, the less code there is to debug.\n* More Control: Classes give you more control than you can get by simply adding code to the event handlers of a control in a window. You can use classes to create custom controls. 
And with classes, you have the option to create versions that don’t allow access to the source code of the class, allowing you to create classes you can share or sell to others.\n\n## Understanding Classes, Instances and References\n\nBefore you can use a class in your project, it is important to understand the distinction between these three concepts: the class itself, the instance of the class and the reference to an instance.\n\n* The Class: Think of the class as a template for a container of information (code and data), much like a module. And like a module, each class exists in your project only once. But unlike a module, a class can have multiple instances that each contain different data.\n* The Instance: Classes provide better code reuse because of a concept called instances. Unlike a module, which exists only once in your application, a class can have multiple instances. Each instance (also called an object) is a separate and independent copy of the class and all its methods and properties.\n\n### The Class\n\nXojo has many built-in classes for user interface controls such as PushButton, WebButton, iOSButton, Label, WebLabel, iOSLabel, TextField, WebTextField, iOSTextField, ListBox, WebListBox and iOSTable, to name just a few.\n\nBy themselves, control classes are not all that useful; they are just abstract templates. But each time you add a control class to a layout (such as a window, web page or view) you get an instance of the class. 
Because each button is a separate instance, this is what allows you to have multiple buttons on a window, with each button being completely independent of the others, with its own settings and property values.\n\nThis is an example of a Vehicle class that will be used in later examples:\n\n```\nClass Vehicle\n Property Brand As String\n Property Model As String\nEnd Class\n```\n\n### The Instance\n\nThese instances are what you interact with when writing code on the window and are what the user interacts with when they use your application.\n\nFor example, when you drag a TextArea from the Library to a Window, you create a usable instance of the TextArea on the Window. The new instance has all the properties and methods that were built into the TextArea class. You get all that for free — styled text, multiple lines, scroll bars, and all the rest of it. You customize the particular instance of the TextArea by modifying the values of the instance’s properties.\n\nWhen you add a control to a layout, the Layout Editor creates the reference for you automatically (it is the name of the control).\n\nHowever, when writing code you create instances of classes using the New keyword. For example, this creates a new instance of a Vehicle class that is in your project: `Dim car As New Vehicle` \n\n### The Reference\n\nA reference is a variable or property that refers to an instance of a class. In the above code example, the car variable is a reference to an instance of the Vehicle class.\n\nIn your code you interact with properties and methods of the class using dot notation like this:\n\n```\nDim car As New Vehicle\ncar.Brand = \"Ford\"\ncar.Model = \"Focus\"\n```\n\nHere the Brand and Model were defined as properties of the Vehicle class and they are given values for the specific instance. Having an instance is important! If you try to access a property or method of a class without first having an instance, you will get a NilObjectException, likely causing your app to quit. 
This is one of the most common programming errors that people make and is typically caused by forgetting the New keyword. For example, this code might look OK at first glance:\n\n```\nDim car As Vehicle\ncar.Brand = \"Ford\"\ncar.Model = \"Focus\"\n```\n\nBut the 2nd line will result in a NilObjectException when you run it because car is not actually an instance. It is just a variable that is declared to eventually contain a Vehicle instance. But since an instance was not created, it gets the default value of Nil, which means it is empty or undefined.\n\nIf you are not sure if a variable contains an instance, you can check it before you use it:\n\n```\nIf car <> Nil Then\n car.Brand = \"Ford\"\n car.Model = \"Focus\"\nEnd If\n```\n\nWhen the variable is not Nil, then it has a valid reference. And since it is a reference, there are important considerations when assigning the variable to another variable: when you assign one reference variable to another, the second variable points to the same instance as the first variable.\n\nThis means the second variable refers to the same instance of the class as the first variable and is not a copy of it. So if you change a property of the instance through either variable, then the property is changed for both. An example might help:\n\n```\nDim car As New Vehicle\ncar.Brand = \"Ford\"\ncar.Model = \"Focus\"\nDim truck As Vehicle\ntruck = car\n// truck.Model is now \"Focus\"\ncar.Model = \"Mustang\"\n// truck.Model is now also \"Mustang\"\ntruck.Model = \"F-150\"\n// car.Model is now also \"F-150\"\n```\n\nThe variables for both car and truck point to the same instance of Vehicle.
So changing either one effectively changes both.\n\nIf you want to create a copy of a class, you need to instead create a new instance (using the New keyword) and then copy over its individual properties as shown below:\n\n```\nDim car As New Vehicle\ncar.Brand = \"Ford\"\ncar.Model = \"Focus\"\nDim truck As New Vehicle\ntruck.Brand = car.Brand\ntruck.Model = car.Model\n// truck.Model is now \"Focus\"\ntruck.Model = \"F-150\"\n// car.Model remains \"Focus\"\n```\n\nWhen you do it this way, you get two separate instances. Changes to one do not affect the other.\n\n## Adding Classes to Your Projects\n\nIn Xojo, adding a class to a project is easy. To add a new class, click the Insert button on the toolbar and choose Class or select Class from the Insert menu. This adds a new class to the Navigator with the default name (Class1 for the first class).\n\nUse the Inspector to change the name of the class. Like modules, classes primarily contain properties and methods. But they can also contain many other things such as constants, enumerations, events and structures.\n\n## Adding Properties to Classes\n\nProperties are variables that belong to an entire class instance rather than just a single method. To add a property to a class, use the Add button on the Code Editor toolbar, Insert ↠ Property from the menu, the contextual menu or the keyboard shortcut (Option-Command-P on macOS or Ctrl+Shift+P on Windows and Linux). You can set the property name, type, default value and scope using the Inspector.\n\nTo quickly create a property, you can enter both its name and type on one line in the Name field like this: PropertyName As DataType. When you leave the field, the type will be set in the Type field.\n\nProperties added in this manner are sometimes called Instance Properties because they can only be used with an instance of the class. You can also add properties that can be accessed through the class itself without using an instance.
These are called Shared Properties.\n\n### Shared Properties\n\nA shared property (sometimes called a Class Property) is like a “regular” property, except it belongs to the class, not an instance of the class. A shared property is global and can be accessed from anywhere its scope allows. In many ways, it works like a module property.\n\nIt is important to understand that if you change the value of a shared property, the change is visible everywhere the shared property is used.\n\nGenerally speaking, shared properties are an advanced feature that you only need in special cases. For example, if you are using instances of a class to keep track of items (e.g., persons, merchandise, sales transactions, and so forth) you can use a shared property as a counter: increment its value in the class constructor each time you create an instance and decrement it in the destructor each time you destroy one. (For information about constructors and destructors, see the section Constructors and Destructors.) When you access the shared property, it gives you the current number of instances of the class.\n\n## Adding Methods to Classes\n\nTo add a method to a class, use the Add button on the Code Editor toolbar, Insert ↠ Method from the menu, the contextual menu or the keyboard shortcut (Option-Command-M on macOS or Ctrl+Shift+M on Windows and Linux). You can set the method name, parameters, return type and scope using the Inspector. When typing the method name, the field will autocomplete with the names of any methods on its super classes.\n\nMethods added in this manner are called Instance Methods because they can only be used with an instance of the class.\n\nYou can also add methods that can be accessed through the class itself. These are called Shared Methods.\n\n### Shared Methods\n\nA shared method (sometimes called a Class Method) is like a normal method, except it belongs to the class, not an instance of the class.
A shared method is global and can be called from anywhere its scope allows. In many ways, it works like a module method.\n\nShared methods do not know about an instance, so their code can only access other shared methods or shared properties of the class.\n\nGenerally speaking, shared methods are an advanced feature that you only need in special cases. A common usage of shared methods is to create an instance (rather than relying on a Constructor).\n\nLearn more about Xojo, the fast way to make apps for Windows, Mac, Linux, iOS and the web.",
"content_format": "markdown"
},
{
"url": "https://dev.to/robinkartikeyakhatri/media-attribute-in-link-tag-11ij",
"domain": "dev.to",
"file_source": "part-00553-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nWhy is media attribute written inside the link tag in html markup? Is it necessary to use media attribute in link tag to create a responsive website? Isn't CSS written using @media (max-width: 1100px) in a CSS file without writing the media attribute? Is it necessary to use @media (max-width: 1100px) in CSS file even after writing media attribute in link tag?\n\n## Top comments (2)\n\nI feel like it is equivalent to @media applied to the whole import statement in SCSS.\n\nIt's yet another way to do things without building the CSS first, like it comes from third party CDN.\n\nI saw it used for\n\n```\nmedia=\"(prefers-color-scheme: dark)\" \nmedia=\"(prefers-color-scheme: no-preference), (prefers-color-scheme: light)\"\n```\n\nFor further actions, you may consider blocking this person and/or reporting abuse",
"content_format": "markdown"
},
{
"url": "https://dev.to/mattbidewell/comment/ijg",
"domain": "dev.to",
"file_source": "part-00780-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nMy IDE/Text editor:\n\nFirst choice is Atom for anything where I don't need a complex editor. However, since I develop Android apps, my first stop for that is Android Studio; for general Java I use IntelliJ. Used to use Netbeans, however I moved over when I started using Android Studio.\n\nPhoto manipulation:\n\nBitmap: Photoshop. I have CS6 (cracked version); I will purchase the full suite one day, however this will do for now.\nVector: Inkscape. I want Sketch or Illustrator, however money is the main issue.\n\niTerm2 for terminal stuff.\n\nAny sort of application to help the eyes too; at the moment I'm using f.lux. I'm more of a morning person and start developing around 5:45am, so anything easier on the eyes helps stop headaches etc.\n\nCommunication: Slack and Hangouts for the win!\n\nMusic: Spotify, I can just hit play and not worry too much.\n\nOrganisation: Google Keep and Trello.",
"content_format": "markdown"
},
{
"url": "https://dev.to/shmuelhizmi",
"domain": "dev.to",
"file_source": "part-00755-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\n# shmuelhizmi\n\nA 17-year-old full-stack web developer.\n\nWork\n\nFull-stack TypeScript web developer at MCE Systems, Israel",
"content_format": "markdown"
},
{
"url": "https://dev.to/dashbird/end-of-aws-lambda-support-for-node-js-10-should-you-switch-straight-to-v14-3a7d",
"domain": "dev.to",
"file_source": "part-00780-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nAWS Lambda support for Node.js 10 is due to end in August 2021. It's time to switch! In this article, we discuss and compare working with Node.js 10 versus Node.js 14 on AWS Lambda, and the impacts and benefits of this change.\n\nOriginal article with code snippets here: https://dashbird.io/blog/aws-lambda-nodejs-10-vs-14/\n\nAWS Lambda supports multiple versions of programming language runtimes, but not forever. That's because the creators of these programming languages don't support them indefinitely either.\n\nAWS Lambda also only supports the long-term support (LTS) versions of Node.js: the even version numbers. Currently, versions 10, 12, and 14 are all supported, but in August 2021, the support for version 10 runs out.\n\nSo it's time to switch, but besides evading the end of support, what benefits does switching directly to version 14 bring? In this article, I will go over the changes that have happened since version 10 of Node.js.\n\n# What Features were Added Since Version 10?\n\nLet's start with the ECMAScript additions to JavaScript. Usually, they are supported in a JavaScript engine before they are standardized. The idea is that nothing that isn't working in practice gets standardized.\n\nThis doesn't mean that all JavaScript engines support every ECMAScript feature before it is standardized. After all, the committee can't wait for every JavaScript engine out there. For example, proper tail call optimization has been standardized for years now, but only a few engines have implemented the feature.\n\nAnyway, let's check what happened since version 10!\n\n# Optional Chaining and Nullish Coalescing\n\nThis is one of the biggest additions, in my opinion.
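To make the idea concrete, here is a minimal sketch (my own illustration, not the article's original snippet; the `config` object and its values are made up):

```javascript
// Optional chaining: a missing link in the chain yields
// undefined instead of throwing a TypeError.
const config = { server: { port: 8080 } };
const deepValue = config.server?.tls?.cert; // undefined, no crash

// Nullish coalescing: falls back only on null/undefined,
// unlike || which also replaces 0, '' and false.
const input = 0;
const amount = input ?? 20; // stays 0; `input || 20` would give 20

console.log(deepValue, amount); // undefined 0
```

Both operators short-circuit, so the right-hand side is only evaluated when it is actually needed.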
Gone are the crashes because you tried to access an attribute of an undefined variable, or you set a default for a value that was null-ish.\n\nThe optional chaining operator `?.` is an alternative to the regular chaining operator `.` and they can be used in the same way; the only difference is that the regular chaining operator will crash if you try to access an attribute on an undefined variable, while the optional chaining operator will simply return `undefined`. If at least one part of this chain isn't an object, `deepValue` will be set to `undefined`.\n\nThe nullish coalescing operator `??` is an alternative to the logical or operator `||` used to set variables to a default value. The problem with the logical or operator was that it also used the default when the value was something that could be automatically converted to `false`, like the number `0`. The nullish coalescing operator only uses the default when the checked variable is null or undefined. In the example, the logical or operator would set the amount to `20` even if the `input` was `0`, but the coalescing operator would leave it at `0`.\n\nThe two new operators work nicely in tandem.\n\n# Symbol Additions\n\nThe Symbol class got two new additions. If you create a new symbol with a description, you can now check it later with the description property. This example will log \"description\" to the console.\n\nThere is also a new well-known symbol called `matchAll`. It can be used to create custom matchAll functions for your data types. The object `numbers` needs a generator function (that's what the `*` means) in its `Symbol.matchAll` property, so it can be used as a parameter for the `String.prototype.matchAll` method. This example will output `[\"2021\", \"05\", \"12\"]`.\n\n# Global This\n\nThe unification of `window`, `self`, `frames`, and `global` into one global object called `globalThis`.
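A small sketch (my own, not from the article) of the unified lookup:

```javascript
// `globalThis` always refers to the global object, whether the
// script runs in a browser (window), Node.js (global) or a worker (self).
globalThis.appName = 'demo';

console.log(globalThis.appName); // demo
console.log(globalThis === global); // true when run in Node.js
```

Feature-detection checks like `typeof window !== 'undefined'` become unnecessary for this purpose.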
Back in the days, access to the global object differed depending on where your script was running; if you tried to access `window` inside Node.js or a worker thread, it would fail. With `globalThis`, which works in all current browser versions and Node.js, your script doesn't have to care anymore.\n\n# Class Fields\n\nSince Node.js 10, there have been some additions to classes as well: instance class fields, static class fields, and private class fields.\n\nBefore version 12, instance fields had to be set in the constructor, static fields had to be set on the class after its definition, and both were always public. Now you can set them all in the class definition and modify their visibility.\n\n# Flattening Arrays\n\nIf you end up with a nested array and want to lift the nested elements back into the top array, the new `flat` method for arrays is the solution. The example above, flattened, will contain `[1, 2, 3, 4, 5, 6, 7]`. But keep in mind that it only flattens a depth of one; you have to call it again if you have deeper nested arrays you want to get rid of.\n\nSince the `flat` method is usually called to clean up the array structure produced by a call to the `map` method, a shortcut method called `flatMap()` works like a `map` followed by a `flat` of depth one. In the following example, the result never contains a nested array, because the mapped results are flattened.\n\n# International Display Names\n\nThe new Intl.DisplayNames API allows you to translate region, language, and script display names automatically. This would display \"德文\", the traditional Chinese translation of the language name \"German\".\n\n# Numeric Separators\n\nIt's now possible to add underscores to number literals to make them more readable inside the code.\n\n# Performance Improvements\n\nSince version 10, many performance improvements have happened.
The startup time improved by 30%, which can shave quite some time from your cold start latency.\n\nThe heap size configuration used two default values before, but in version 14, Node.js takes the actual available memory into account when configuring the optimal heap size. Since AWS Lambda allows for very flexible memory configurations per function, this could help you get more bang for the buck.\n\n> \n\nFind out more about saving money on Lambdas in our 6 AWS Lambda Cost Optimization Strategies That Work article.\n\nI tried a prime number calculation on AWS Lambda with Node.js 10 and 14 to see the differences. First, I calculated the primes between one hundred and one thousand, and then those between one hundred and one billion. I ran this with 128MB and 1024MB memory configurations to see how the init times and runtime durations changed between the two versions.\n\nIn the graphs (blue marks Node.js 10 and red Node.js 14), the init timings got better by ~20%, and all I did was switch the runtime version. Because Lambda functions are billed by the millisecond, this is real money you can save. Shorter-running Lambda functions can also reduce latency quite a bit.\n\nFor invocation duration, the difference for the primes between one hundred and one thousand wasn't much, so I left those numbers out here, but for longer computations, like finding the primes between one hundred and one billion, it makes a huge difference. This saves a huge chunk of money and could even make it possible to run slow calculations synchronously as an API Gateway response now.\n\n# Other Additions\n\nThe diagnostic reports help with analyzing the behavior of a Node.js program.
It will log out a JSON formatted string into a file that you can check for more details than CloudWatch gives you.\n\n> \n\nFind out how you can get even more detailed and faster AWS data analytics, observability and debugging.\n\nTLS 1.3 and QUIC protocol support have landed in the LTS version of Node.js, so your Lambda can keep communicating with the latest services out there.\n\nThe worker threads API is now stable. You can use it for computation-intensive workloads. Usually, Node.js is a single-threaded environment, but you can now execute JavaScript with worker threads in parallel. When you configure Lambda memory, the vCPU cores are also configured implicitly. If you set a memory limit of 10 GB, you also get 6 vCPU cores, which you can now utilize with Node.js worker threads.\n\nFull International Components for Unicode (ICU) support has landed in Node.js. ICU is a set of \"libraries providing Unicode and Globalization support for software applications\". The full list of features can be found on Node.js's GitHub Repository.\n\n# Conclusion\n\nThe sun is setting on the LTS for Node.js version 10 in a few months' time, and so is the AWS Lambda runtime support. If you have to switch anyway, why not go straight to version 14? It will give you a longer support time in the future, and you get a whole bag of extra features that improve the coding of Lambda functions and their performance on the AWS platform.\n\nFurther reading:\n\nAWS Lambda Node.js errors and exceptions\n\nHow to deploy a Node.js application to AWS Lambda using Serverless",
"content_format": "markdown"
},
{
"url": "https://dev.to/ezebik2020/the-webdev-technology-stack-17l2",
"domain": "dev.to",
"file_source": "part-00514-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nHello, how are you?\n\nBefore we start, sorry if my English is hard to follow; I'm from Argentina and I wrote this article with Google Translate.\n\n## Introduction:\n\nI am one month away from starting a project at my company, a fairly large project that would receive hundreds of thousands of requests per minute.\n\nWe are deciding on technologies. Previous projects had been developed in Lumen and Java; these cannot be migrated because they were not developed correctly, and now the company wants to establish a stack of technologies in which all projects will be developed from here on.\n\n## My researched options:\n\nIn the last 2 weeks I have been researching and saw the following:\n\n* \n\nLaravel: Today it is not very common for PHP to be taught in schools, and on the other hand it is a somewhat slow technology. Of course, the Octane package just came out and looks set to exceed, or at least reach, the speed of the same web app developed in Node. Meanwhile, PHP is losing support every day, and despite the push it has been getting from Laravel, I think it will be left behind with the release of new Node frameworks.\n\n* \n\nAdonisJS: It is a Laravel in Node, but today it has almost no community, even though I read that its Discord is very active.\n\nAnd it's still very young.\n\n* \n\nNestJS: I understand it is the most popular Node framework, with which all kinds of REST APIs can be developed. It also offers Fastify, which is very convenient over Express due to its support for asynchronous calls.
I want to clarify that it is the one I have researched the least.\n\n## The other part of the stack:\n\nThe idea would be to use ReactJS for the front end and React Native for mobile development.\n\nI am also thinking of using InertiaJS or NextJS.\n\nWe are implementing Docker and Kubernetes.\n\n## The question:\n\nMy question is for you to comment without choosing sides, trying to be as impartial as possible.\n\nWhat technology stack would you use?\n\nGreetings and thank you very much for reading.",
"content_format": "markdown"
},
{
"url": "https://dev.to/hackmel/what-i-learned-while-building-a-snake-game-with-react-and-recoil-13cd",
"domain": "dev.to",
"file_source": "part-00755-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
"content": "# \n\nDate: \nCategories: \nTags: \n\nRecoil is a new state management library for React applications developed by Facebook. It has a very interesting concept that promises simplicity, but with powerful capabilities. Unlike Redux, it has a simple framework without messy boilerplate. I decided to take a look and see how it works. However, instead of making a simple app, I thought of building a game with it. I ended up building a Snake game to fully test my understanding of how the library works. I know some will say I don't need a sophisticated state manager to build this game, and I certainly agree. However, I also believe that the most effective way to learn a technology is to apply it in an unusual way or on a more complicated application. Writing a game in React is unusual and complicated, but possible. It's a perfect way to learn Recoil.\n\n# Atom\n\nUnlike Redux and React context, Recoil has the concept of multiple units of state, called atoms, which components can subscribe to. Components re-render only when an atom they subscribe to changes. This avoids unnecessary rendering on state changes. An atom can be defined by using the atom() function. An atom should have a unique key and a default value for its state. In my game I created 3 separate atoms, each representing its own data:\n\nThe SnakeTailState holds the locations of all the snake's tail segments; by default it has 3 tails. The FoodState holds the location where the food will appear on the screen. And lastly, the KeyPressState holds the keyboard entries that tell the direction of the snake.\n\n# React hooks\n\nRecoil is designed for React developers who love hooks. Yes, if you love developing functional components and use hooks a lot, then you will enjoy the benefits of Recoil.
Recoil has some ready-made hooks for accessing and updating atoms.\n\n* useRecoilState(stateKey) returns a tuple where the first element is the value of the state and the second element is a setter function that will update the value of the given state when called.\n* useSetRecoilState(stateKey) returns a setter function for updating the value of writeable Recoil state.\n\nThese are just some of the hooks you can use to access and modify your atoms. In my code I used useRecoilState to access the SnakeTailState and pass it to my snake component that displays it on the screen, while useSetRecoilState is used to update the KeyPressState every time the user presses a key.\n\n# Selector\n\nSelectors are functions, or derived state, in Recoil. A selector can have a get and a set function. Get functions can return calculated values from an atom or other selectors. A get function does not change the values of the state. However, a set function, also called a writeable selector, can be used to change or update a state.\n\nIn my selectors, I built the following logic that corresponds to my states. Selectors can communicate with other atoms and other selectors to build a new set of states.\n\n* Calculate how to create new tails whenever the snake has eaten the food.\n* Decide where the food will randomly appear on the screen.\n* Check the snake's next direction based on the key pressed.\n* Check if the food was eaten.\n* Check if the snake hit the walls.\n\nI don't have to write that logic inside the presentation layer, which made my code very clean. All I have to do is use Recoil's helper hooks to access the selectors from the presentation layer, the same way I access an atom.\n\n# Findings and opinion\n\nFor me, Recoil is a better choice for state management. One reason is that it promotes one of the SOLID principles, the Single Responsibility Principle.
By designing your state as different units of state that each represent one thing, you avoid making a convoluted state.\n\n# Why is a single global state bad?\n\nIf our app is simple, we can probably get away with it. But as our app becomes larger and more complicated, having a single global state that holds everything will be problematic.\n\n# Imagine our state as a database\n\nWhen we first design our database, do we build one table to represent all our data? Unless we have a very good reason, a database should always be normalized. Each table in our database should represent one particular kind of data, for example: Employee, Department, Salary and so on. And I believe that state should be designed the same way: each unit of state should only represent a particular set of data.\n\nIn a database, if we want to combine rows between tables, we can define a view. In Recoil, we can do the same by using selectors.\n\n# Conclusion\n\nBuilding a game with React is fun, though not recommended, but it helped me understand Recoil much better. I will probably continue this by writing another version of my snake game using Redux and sharing my experience.",
"content_format": "markdown"
}
]