| Column | Type | Min | Max |
| --- | --- | --- | --- |
| id | int64 | 5 | 1.93M |
| title | string (length) | 0 | 128 |
| description | string (length) | 0 | 25.5k |
| collection_id | int64 | 0 | 28.1k |
| published_timestamp | timestamp[s] | | |
| canonical_url | string (length) | 14 | 581 |
| tag_list | string (length) | 0 | 120 |
| body_markdown | string (length) | 0 | 716k |
| user_username | string (length) | 2 | 30 |
968,969
Quasar
Can anyone help with programming in Quasar? I am having issues getting the software to work properly.
0
2022-01-26T22:17:22
https://dev.to/william_pate2022/quasar-37m8
Can anyone help with programming in Quasar? I am having issues getting the software to work properly.
william_pate2022
969,020
Cloud Resume Challenge AWS
Background I was introduced to the Cloud Resume Challenge while interviewing for a...
0
2022-01-27T20:19:13
https://dev.to/joshb8/cloud-resume-challenge-aws-38oa
cloudresumechallenge, aws, cloud
## Background

I was introduced to the [Cloud Resume Challenge](https://cloudresumechallenge.dev/docs/the-challenge/aws/) while interviewing for a position in October of 2021. I was looking for a change of direction in my IT career, and the cloud sparked my interest. Over the past couple of months, I have been slowly working on this challenge. With many late nights in the books, I have completed it. Although frustrating at times, I would recommend this challenge to anyone interested. The rewarding feeling after every completed step makes you want to come back for more.

## Challenge Instructions

- **Certification:** AWS Certified Cloud Practitioner (in progress)
- ✔️ **HTML:** [Check out my HTML code](https://github.com/joshb8/cloud-resume/blob/main/resume-site/index.html)
- ✔️ **CSS:** [Check out my CSS code](https://github.com/joshb8/cloud-resume/blob/main/resume-site/styles.css)
- ✔️ **Static Website:** [Used AWS SAM template](https://github.com/joshb8/cloud-resume/blob/main/template.yaml)
- ✔️ **HTTPS:** This was made easy by AWS CloudFront
- ✔️ **DNS:** Used AWS Route 53: [joshscloudresume.net](joshscloudresume.net)
- ✔️ **JavaScript:** There is a script at the end of my [HTML file](https://github.com/joshb8/cloud-resume/blob/main/resume-site/index.html)
- ✔️ **Database:** Used AWS DynamoDB
- ✔️ **API:** Created and deployed with AWS SAM
- ✔️ **Python:** Lambda functions: [Get Function](https://github.com/joshb8/cloud-resume/blob/main/get-function/app.py), [Put Function](https://github.com/joshb8/cloud-resume/blob/main/put-function/app.py)
- ✔️ **Tests:** Integration and upload tests using [GitHub Actions](https://github.com/joshb8/cloud-resume/actions/runs/1752827599)
- ✔️ **Infrastructure as Code:** AWS SAM [template.yaml](https://github.com/joshb8/cloud-resume/blob/main/template.yaml)
- ✔️ **CI/CD:** [GitHub Actions](https://github.com/joshb8/cloud-resume/actions)
- ✔️ **Blog Post:** Here we are

**My thoughts on the challenge**

Looking back on the challenge, I should have been writing this blog post along the way. It is impossible to remember all of the struggles and AH-HA! moments involved in the learning process. With that being said, I will go over the main struggles that took up most of my time. Although [The Cloud Resume Challenge](https://cloudresumechallenge.dev/) gives great instructions and excellent resources to start the journey, there were many hours spent on Stack Overflow figuring out the finer details.

**How I approached the challenge**

**1. HTML/CSS/JS** I first simply coded my resume in HTML and added a little CSS to meet that requirement. I would later dive deeper into creating an actual website once I realized that I enjoyed the design aspect.

**2. Host website** I used the AWS SAM template to host a static website in an S3 bucket. I originally purchased a domain name through Google's DNS service, but after hours of troubleshooting I simply could not get it to resolve. I purchased another domain through AWS Route 53, and it worked instantly.

**3. CloudFront/HTTPS** At first, I had some issues implementing HTTPS for security because the rules about the target origin ID name had changed and I was going off an old version. After some Stack Overflow searching, I corrected the issue.

**4. API/DynamoDB/Lambda** Steps 7 through 12 were the most difficult but rewarding challenges yet. Creating the API and the database on DynamoDB was easy enough. Linking them together through Lambda functions was another story. Once I found the right docs and templates for the boto3 SDK to access the database, things started to come together nicely.

**5. Deploying with CI/CD** Deploying with CI/CD was a learning experience. I had not created a GitHub account at this point, and there was a learning curve in pushing my folders to GitHub using git. One problem I ran into with GitHub Actions is that it was blocking my user account while updating the get/put Lambda functions. I solved this by adding my user to the policy statements on each of the Lambda functions.
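The Lambda-to-DynamoDB link via the boto3 SDK described above can be sketched roughly like this. This is a hypothetical illustration, not the author's actual code: the table name, key, and attribute names are my assumptions, and the table client is injectable so the logic can be exercised without AWS.

```python
# Hypothetical sketch of a visitor-counter Lambda backed by DynamoDB via
# boto3. Table name, key, and attribute names are assumptions.
import json


def increment_count(table, key="visitors"):
    """Atomically bump a counter item and return the new value."""
    resp = table.update_item(
        Key={"id": key},
        UpdateExpression="ADD visit_count :one",
        ExpressionAttributeValues={":one": 1},
        ReturnValues="UPDATED_NEW",
    )
    return int(resp["Attributes"]["visit_count"])


def lambda_handler(event, context, table=None):
    # In a real Lambda, boto3 builds the Table client; injecting `table`
    # keeps the handler testable without AWS credentials.
    if table is None:
        import boto3  # available in the AWS Lambda runtime
        table = boto3.resource("dynamodb").Table("cloud-resume-counter")
    return {"statusCode": 200,
            "body": json.dumps({"count": increment_count(table)})}
```

The `ADD` update expression makes the increment atomic on DynamoDB's side, so concurrent visits don't lose counts.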
Another useful feature I learned about was GitHub's secrets feature. I used it to hide my user access key and secret key in my .yml file so that the Actions workflow could build and deploy my site as well as upload it to S3.

**6. Certification** I am pending certification, giving myself a week to study up and pass the exam.

I am proud of myself for completing this challenge and hope to pursue my newfound interest in all things cloud.

**Thanks for reading about my journey**
joshb8
969,372
Rails React Streaming SPA
Introduction TLTR: Feel free to get the source code. Requirements -The code...
0
2022-01-27T09:21:24
https://dev.to/mwpenn94/kaleidoscope-rails-react-streaming-spa-294h
# Introduction

[TL;DR: Feel free to get the source code.](https://github.com/mwpenn94/kaleidoscope)

## Requirements

- The code should be written in ES6 as much as possible
- Use the create-react-app generator to start the project
- The application should have one HTML page to render your react-redux application
- There should be 5 stateless components
- There should be 3 routes
- The application must make use of react-router and proper RESTful routing
- Use Redux middleware to respond to and modify state changes
- Make use of async actions and redux-thunk middleware to send data to and receive data from a server
- The Rails API should handle data persistence with a database
- Use fetch() within actions to GET and POST data from the API; do not use jQuery methods
- The client-side application should handle the display of data with minimal data manipulation

## App Design

The overall plan for this single-page application was to build a full-stack web site using a Rails backend with a ReactJS frontend. Based on the requirements, the app gives users CRUD functionality for their own streams. Similar to its large-scale counterparts, it is designed for straightforward use, with minimal explanation required for the user.

## Frontend Design

See the "backend/app/javascript" directory for more info.

## Backend Design

See the "backend" directory for more info.
mwpenn94
972,064
Three Weeks and Beyond with #100Devs
The Recap Woah! We finally made it through three weeks of #100Devs Cohort 2, and are...
0
2022-01-29T18:06:18
https://codewithfan.hashnode.dev/three-weeks-and-beyond-with-100devs
beginners, 100devs, webdev, codenewbie
# The Recap

Woah! We finally made it through three weeks of #100Devs Cohort 2, and are now proficient enough to lay down serious vomity code to create a webpage with HTML & CSS. If I were to go back three weeks, talk to my past self, and say, "You are going to be able to create something out of nothing in three weeks. Trust me!", I would have laughed. In today's blog, I am covering what I have learned so far in the past three weeks of #100Devs on my way to becoming an employed Full Stack Software Developer.

**⚠ Please feel free to provide any feedback/comments/likes/dislikes on this or any blog I post. It can help me unlock new potential and become a better blogger. 👍🏾**

> ***💡 "I want to help you become a Software Engineer for free"*** - Leon Noel

<iframe src="https://giphy.com/embed/fdLR6LGwAiVNhGQNvf" width="480" height="270" frameBorder="0" class="giphy-embed" allowFullScreen></iframe><p><a href="https://giphy.com/gifs/fdLR6LGwAiVNhGQNvf"></a></p>

# Week 1

Class one was the most hyped and watched stream in the Software & Game Development category on Twitch...EVER! [A total of 7K+ Software Engineers tuned in live](https://www.twitch.tv/videos/1260641923) to figure out:

1. Is the **#100Devs** Bootcamp real?
2. What's the catch of a *free* Bootcamp?
3. Am I ready to become a **Software Engineer**?

If you answered "Yes" to all three questions, you faced homework the very first class:

- [Learning How to Learn](https://www.coursera.org/learn/learning-how-to-learn)
- [How to Study for Exams, by Ali Abdaal](https://www.youtube.com/watch?v=ukLnPbIffxE)
- [Spaced Repetition, by Ali Abdaal](https://www.youtube.com/watch?v=Z-zNHHpXoMM)

The objective was to complete four weeks of learning how to learn in one week and supplement the course with [Ali Abdaal's](https://www.youtube.com/c/aliabdaal) how-to-study videos. I discovered that I had been learning new things the wrong way, and that utilizing active recall and spaced repetition is the key to retaining anything. Introducing [**Anki**](https://apps.ankiweb.net/)!
Anki is an intelligent flashcard reader that allows you to create, organize, and study decks of cards. This is a tool that is meant to be used not only for the cohort but for life! The game plan: while you are reading and learning, Anki is your best friend for creating flashcards, so you can perform active recall later, then do it again, and again...and again 😂

<iframe src="https://giphy.com/embed/Uv2K54K09f5x2GqqqW" width="480" height="480" frameBorder="0" class="giphy-embed" allowFullScreen></iframe><p><a href="https://giphy.com/gifs/cbc-cooking-show-fridge-wars-fridgewars-Uv2K54K09f5x2GqqqW"></a></p>

The second set of homework, which set me in the trough of sorrow (yes, you Shay), was:

- [Learn to Code HTML & CSS 💀](https://learn.shayhowe.com/html-css/)
- [Intro to using MDN DOCS](https://developer.mozilla.org/en-US/docs/Web/HTML)
- BBC Website

![bbc-image.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1643446063466/nTi4kYweB.png)

Since learning from [The Odin Project](https://www.theodinproject.com/paths/foundations/courses/foundations), I was familiar with adding HTML semantics in the [BBC Homework Assignment](https://glitch.com/edit/#!/bbc-homepage). The trough of sorrow hit me like a train while reading 12 lessons back to back and creating various flashcards on Anki. Because I have not been in a structured learning environment in seven years, paying attention and staying focused was no easy task. Thankfully, I was able to complete the lessons using the [Pomodoro Technique](https://tomato-timer.com/), or in my case, Animedoro.

<blockquote class="twitter-tweet"><p lang="en" dir="ltr">so leon mentioned &quot;animedoro&quot;... sounds like a distraction but I won&#39;t knock it until I try it. 😂 maybe 35 on, 25 off with death note. <br><br>any <a href="https://twitter.com/hashtag/100Devs?src=hash&amp;ref_src=twsrc%5Etfw">#100Devs</a> have a favorite anime series they recommend? <a href="https://t.co/A0VFWmyUQA">pic.twitter.com/A0VFWmyUQA</a></p>&mdash; stefan 🍍 (@codewithfan) <a href="https://twitter.com/codewithfan/status/1482893465469915136?ref_src=twsrc%5Etfw">January 17, 2022</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

# Week 2

> ***💡 "Leon Noel is a Pokemon Master"***

<iframe src="https://giphy.com/embed/xuXzcHMkuwvf2" width="480" height="360" frameBorder="0" class="giphy-embed" allowFullScreen></iframe><p><a href="https://giphy.com/gifs/90s-the-girls-are-in-charge-xuXzcHMkuwvf2"></a></p>

We made it out of the first week of #100Devs alive. Some were able to submit their Coursera homework on time, while others were still catching up with Barbara. For Class 3 (Tuesday's class), we reviewed the following:

- Navigation

```
<nav>
  <ul>
    <li><a href="#">Donate</a></li>
    <li><a href="#">Login</a></li>
    <li><a href="#">Sign Up</a></li>
  </ul>
</nav>
```

- Forms: `<form action="/my-handling-form-page" method="post"></form>`
- Input Types

![Screen Shot 2022-01-29 at 6.42.24 PM.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1643449355121/GgorGqraG.png)

In addition, we went over how Leon approaches a webpage when he creates its HTML. The homework assigned was to re-create the [Khan Academy](https://glitch.com/edit/#!/khan-academy-100devs) and [Tech Crunch](https://glitch.com/edit/#!/tech-crunch-100devs) webpages using only HTML, and to learn how to do layouts on [learnlayout.com](https://learnlayout.com/).

<iframe src="https://giphy.com/embed/zOvBKUUEERdNm" width="480" height="270" frameBorder="0" class="giphy-embed" allowFullScreen></iframe><p><a href="https://giphy.com/gifs/coding-zOvBKUUEERdNm"></a></p>

In Class 4, we dived into the nitty-gritty of CSS fundamentals, such as:

- Where should you style your CSS?
- How do you link your HTML & CSS together?
- The CSS breakdown ![Screen Shot 2022-01-29 at 7.29.18 PM.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1643452171888/V21_MRJO0.png)
- Classes, IDs, and basic properties
- The most !important ![Screen Shot 2022-01-29 at 7.30.31 PM.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1643452254620/mFS1RFVcM.png)

To conclude week 2, we were given the task of using HTML & CSS to create a [simple lab site](https://glitch.com/edit/#!/simple-lab-leon-100-devs)

![Screen Shot 2022-01-29 at 8.07.15 PM.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1643454454466/ULgeOu_OHJ.png)

# Week 3

> ***💪🏽 "We don't get got. We go get!"***

Classes 5 and 6 are where we put our true learning abilities to the test. This week, we discussed:

- The Box Model ![Screen Shot 2022-01-29 at 8.19.41 PM.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1643455191263/CVGJQzVpD.png)
- Creating basic layouts ![Screen Shot 2022-01-29 at 8.20.29 PM.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1643455244204/sTgoUb8aq.png)
- Practicing relationship selectors without using classes or IDs ![Screen Shot 2022-01-29 at 8.16.52 PM.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1643455054640/b06ThJphbK.png)
- Specificity practice ![Screen Shot 2022-01-29 at 8.18.14 PM.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1643455116533/sEc94MBbZ.png)

After completing the three layouts, the next objective was to read up on and dive into Responsive Web Design. From my foxhole, that means creating websites so they are accessible and viewable across all devices and platforms. Our resource is the superstar Shay Howe's [Learning to Code Advanced HTML & CSS](https://learn.shayhowe.com/advanced-html-css/), supplemented with the MDN Docs. Once it is all understood, the goal is to make the three layouts responsive, like Nicole did!
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">Day 19 of <a href="https://twitter.com/hashtag/100DaysofCode?src=hash&amp;ref_src=twsrc%5Etfw">#100DaysofCode</a>: <a href="https://twitter.com/hashtag/100Devs?src=hash&amp;ref_src=twsrc%5Etfw">#100Devs</a> assignment today. <a href="https://twitter.com/leonnoel?ref_src=twsrc%5Etfw">@leonnoel</a> said we needed to make it responsive. It&#39;s definitely responding. 😅 <a href="https://t.co/zM56c8T6Zv">pic.twitter.com/zM56c8T6Zv</a></p>&mdash; Nicole Barnabee (@NicoleBarnabee) <a href="https://twitter.com/NicoleBarnabee/status/1487085289851994117?ref_src=twsrc%5Etfw">January 28, 2022</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

# That's all, friends!

<iframe src="https://giphy.com/embed/w89ak63KNl0nJl80ig" width="480" height="400" frameBorder="0" class="giphy-embed" allowFullScreen></iframe><p><a href="https://giphy.com/gifs/theoffice-w89ak63KNl0nJl80ig"></a></p>

No joke, this was difficult to write. Writing is an art I want to continue practicing on the #100Devs journey. I will post a weekly recap of the lessons at the end of each week. If you have any questions, comments, concerns, or **feedback**, do let me know. Until next time ~

This is where you can find me → [Twitter](https://twitter.com/codewithfan), [LinkedIn](https://www.linkedin.com/in/stefantaitano/)!
codewithfan
969,531
A recap of the Apama Advent Calendar of articles
On December 1, 2021, ‘someone’ (cough, cough) decided, having just opened the first door of our...
0
2022-02-02T12:44:32
https://tech.forums.softwareag.com/t/a-recap-of-the-apama-advent-calendar-of-articles/254620
iot, cumulocity, streaminganalytics, newsletter
---
title: A recap of the Apama Advent Calendar of articles
published: true
date: 2022-01-27 08:45:41 UTC
tags: iot, cumulocity, streaminganalytics, newsletter
canonical_url: https://tech.forums.softwareag.com/t/a-recap-of-the-apama-advent-calendar-of-articles/254620
---

**On December 1, 2021, ‘someone’ (cough, cough) decided, having just opened the first door of our family advent calendars at home, that it would be “fun” to produce a kind of informational “advent calendar” providing a short article series of brief tips, tricks, hints, and reminders of information relating to the Apama Streaming Analytics platform from Software AG and the Community.**

Our goal was to have one article each workday (Monday-Friday) from the 1st of December until Christmas Eve (maybe with a couple of special bonus weekend articles to help get it all flowing).

[![image](https://aws1.discourse-cdn.com/techcommunity/optimized/3X/e/0/e0aa0eaa044994257cb6939a7c33fa415a0b105e_2_500x500.jpeg)](https://aws1.discourse-cdn.com/techcommunity/original/3X/e/0/e0aa0eaa044994257cb6939a7c33fa415a0b105e.jpeg "image")

We wanted the content to be diverse, informative, and progressive, helping to introduce a novice or intermediate reader to Apama Streaming Analytics. Of course, we also had the idea of ending with something fun in the final article, like opening the last door of a real calendar.

Below you will find a table of the articles and their content “levels.” Just one of the articles, for Thursday, December 23, is nontechnical in content and instead rediscovers several use cases related to environmental issues. Other than that one day, we cover all the basics and the more commonly used aspects of Apama, particularly when it is used as a component of, or in conjunction with, our market focus around Cumulocity IoT.
| Level | Article subject | Date |
| --- | --- | --- |
| Beginner | [Finding information about Apama](https://tech.forums.softwareag.com/t/apama-advent-calendar-01-dec-2021-finding-information/253610/6) | Wed 1 Dec 2021 |
| Beginner | [Apama anywhere from multi-cloud to ThinEdge](https://tech.forums.softwareag.com/t/apama-advent-calendar-02-dec-2021-apama-anywhere-from-multi-cloud-to-thinedge/253649/3) | Thu 2 Dec 2021 |
| Beginner | [Apama for domain experts](https://tech.forums.softwareag.com/t/apama-advent-calendar-03-dec-2021-apama-for-domain-experts/253719/2) | Fri 3 Dec 2021 |
| Beginner | [Apama fundamental concepts](https://tech.forums.softwareag.com/t/apama-advent-calendar-04-dec-2021-apama-fundamental-concepts/253723) | Sat 4 Dec 2021 (Bonus!) |
| Beginner | [Apama CLI tooling](https://tech.forums.softwareag.com/t/apama-advent-calendar-05-dec-2021-apama-cli-tooling/253730) | Sun 5 Dec 2021 (Bonus!) |
| Beginner | [Apama EPL fundamentals](https://tech.forums.softwareag.com/t/apama-advent-calendar-06-dec-2021-apama-epl-fundamentals/253758) | Mon 6 Dec 2021 |
| Beginner / Intermediate | [Apama logging](https://tech.forums.softwareag.com/t/apama-advent-calendar-07-dec-2021-apama-logging/253779) | Tue 7 Dec 2021 |
| Intermediate | [Apama Connectivity](https://tech.forums.softwareag.com/t/apama-advent-calendar-08-dec-2021-apama-connectivity/253799) | Wed 8 Dec 2021 |
| Intermediate | [Apama and automated testing](https://tech.forums.softwareag.com/t/apama-advent-calendar-09-dec-2021-apama-and-automated-testing/253829) | Thu 9 Dec 2021 |
| Beginner | [Apama Development Environments](https://tech.forums.softwareag.com/t/apama-advent-calendar-10-dec-2021-apama-development-environments/253845) | Fri 10 Dec 2021 |
| | Weekend | Sat 11 Dec 2021 |
| | Weekend | Sun 12 Dec 2021 |
| Intermediate | [Apama in containers](https://tech.forums.softwareag.com/t/apama-advent-calendar-13-dec-2021-apama-in-containers/253888) | Mon 13 Dec 2021 |
| Intermediate | [Deploying Apama projects](https://tech.forums.softwareag.com/t/apama-advent-calendar-14-dec-2021-deploying-apama-projects/253914) | Tue 14 Dec 2021 |
| Beginner / Intermediate | [EPL in Cumulocity IoT](https://tech.forums.softwareag.com/t/apama-advent-calendar-15-dec-2021-epl-in-cumulocity-iot/253939) | Wed 15 Dec 2021 |
| Intermediate / Advanced | [The Analytics Builder Block SDK and Custom Blocks](https://tech.forums.softwareag.com/t/apama-advent-calendar-16-dec-2021-the-analytics-builder-block-sdk-and-custom-blocks/253968) | Thu 16 Dec 2021 |
| Advanced | [Enhancing Cloud Fieldbus using Apama](https://tech.forums.softwareag.com/t/apama-advent-calendar-17-dec-2021-enhancing-cloud-fieldbus-using-apama/253984) | Fri 17 Dec 2021 |
| | Weekend | Sat 18 Dec 2021 |
| | Weekend | Sun 19 Dec 2021 |
| Advanced | [Using Connectivity Plugins in apama-ctrl](https://tech.forums.softwareag.com/t/apama-advent-calendar-20-dec-2021-using-connectivity-plugins-in-apama-ctrl/254018) | Mon 20 Dec 2021 |
| Intermediate | [EPL Stream Listeners](https://tech.forums.softwareag.com/t/apama-advent-calendar-21-dec-2021-epl-stream-listeners/254020) | Tue 21 Dec 2021 |
| Intermediate | [The MemoryStore plugin](https://tech.forums.softwareag.com/t/apama-advent-calendar-22-dec-2021-the-memorystore-plugin/254021) | Wed 22 Dec 2021 |
| Use Cases | [Apama on planet Earth](https://tech.forums.softwareag.com/t/apama-advent-calendar-23-dec-2021-apama-on-planet-earth/254022) | Thu 23 Dec 2021 |
| Intermediate | [Apama Christmas Tree Lights](https://tech.forums.softwareag.com/t/apama-advent-calendar-24-dec-2021-apama-christmas-tree-lights/254139) | Fri 24 Dec 2021 |

You can find all of these, as well as any other related knowledge base articles, here: [Latest Streaming-Analytics-Apama topics in Knowledge base - Software AG Tech Community & Forums](https://tech.forums.softwareag.com/tags/c/knowledge-base/6/Streaming-Analytics-Apama)

Phew!
I must admit, having instigated the creation of such an advent calendar, I now understand why companies choose instead to do “12 days of Christmas.” ![:wink:](https://emoji.discourse-cdn.com/twitter/wink.png?v=11 ":wink:")

Many thanks to my coauthors, Harald, Mario, Nick, Bevan, and Louise. We hope you find this collection of articles helpful. Don’t forget there are also a large number of blog posts over on [www.apamacommunity.com](http://www.apamacommunity.com/), and you can ask questions or have discussions about Apama Streaming Analytics here in the Tech Communities [Forum](https://tech.forums.softwareag.com/tags/c/forum/1/Streaming-Analytics-Apama).

* * *

> This article is part of the TECHniques newsletter blog - technical tips and tricks for the Software AG community. [Subscribe](https://info.softwareag.com/TechCommunity-Subscription-Page.html) to receive our quarterly updates or read the [latest issue](https://tech.forums.softwareag.com/techniques-latest).

[Visit the original article](https://tech.forums.softwareag.com/t/a-recap-of-the-apama-advent-calendar-of-articles/254620)
techcomm_sag
969,729
Is it possible to create a custom tag like <If> with a condition using JSX?
A post by Ramya Yegu
0
2022-01-27T14:12:06
https://dev.to/yramyayegu/is-it-possible-to-create-custom-tag-like-with-condition-using-jsx--3jeh
react
yramyayegu
970,079
Incident Management Metrics and Key Performance Indicators
In 2008, I got my first job at a software-as-a-service company. We built learning management software...
0
2022-01-27T20:53:33
https://earthly.dev/blog/incident-management-metrics/
devops, aws, linux, kubernetes
---
title: Incident Management Metrics and Key Performance Indicators
tags: devops, aws, linux, kubernetes
canonical_url: https://earthly.dev/blog/incident-management-metrics/
published: true
---

In 2008, I got my first job at a software-as-a-service company. We built learning management software and ran it on servers in the small data center connected to our office. We released new software onto these production servers monthly and measured quality by counting bugs per release. We also had account managers who kept us informed of how many large clients seemed upset about the last release. Occasionally, when something went wrong, we would do a stability release and spend a month only fixing bugs. [Testing](https://earthly.dev/blog/unit-vs-integration) was not a part of our build process but a part of our team: every feature team had quality assurance people who tested each feature before it was released.

This wasn't that long ago, but cloud software development has matured a lot since then. Incident management has become standard practice, and many great metrics and Key Performance Indicators (KPIs) exist for measuring release quality. Let's review some of them.

## Mean Time Between Failures (MTBF)

When software is released only once a month, on a fixed timeline, with extensive manual testing, counting the number of bugs might work. But once you start releasing many times per week or per day, this won't work, and another way to measure software quality is required.

Mean time between failures is a metric from the field of [reliability](https://earthly.dev/blog/achieving-repeatability) engineering. Calculating it is simple: it is the observation time divided by the number of failures that occurred during that time. If in the last 30 days you have had two production incidents, then the mean time between failures is 15 days.
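In code, that MTBF calculation is a one-line division. A minimal sketch (the function name is my own):

```python
def mtbf(period_days, failures):
    """Mean time between failures: observation window over failure count."""
    if failures == 0:
        raise ValueError("no failures observed; MTBF is unbounded")
    return period_days / failures

print(mtbf(30, 2))  # → 15.0 days, matching the two-incidents-in-30-days example
```

With zero failures in the window, MTBF is undefined rather than infinite, so the sketch raises instead of dividing by zero.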
<div class="notice--big--primary">

### Calculating MTBF

| Incidents in last 30 days | |
| ------------- | -------- |
| #1 | Jan 3rd |
| #2 | Jan 25th |

Mean Time Between Failures = 30 days / 2 incidents = 15 days

</div>

## Mean Time to Recovery (MTTR)

Something funny happens when you start releasing more frequently. You may end up with a higher count of issues in production, but resolving them will happen much faster. If each change is released separately using a continuous delivery model, then recovering gets easier -- often, all that is required is hitting a rollback button. If you are measuring MTBF, your software may be getting much better, but your numbers will be getting worse. Enter mean time to recovery.

Mean time to recovery is just what it sounds like: you start a timer when the incident begins and stop it when production is healthy again - even a simple rollback counts. Average this number across incidents, and you have MTTR. You now have a metric that captures the health of your incident response process.

<div class="notice--big--primary">

### Calculating Mean Time to Recovery

| Incident #1 | |
| ------------- | -------- |
| Reported | 10:00 am |
| Recovered | 12:00 pm |
| Recovery Time | 2 hours |

| Incident #2 | |
| ------------- | -------- |
| Reported | 10:00 am |
| Recovered | 2 days later at 10:00 am |
| Recovery Time | 48 hours |

Mean Time To Recovery = (2 hours + 48 hours) / 2 incidents = 25 hours

</div>

## Mean Time to Resolve (MTTRe)

<div class="notice--info">
ℹ️ Acronym Collision Alert: Mean Time To Resolve (MTTRe) differs from Mean Time To Recover (MTTR), but some resources use MTTR for both. To avoid confusion, ensure you are using the correct terminology for your metric.
</div>

Rolling back to address an incident is a great idea: it's often the quickest way to get things back in a good place. But there are other types of incidents. Imagine your application deadlocks every once in a while, and you have to restart it to unlock.
You may have an excellent mean time to recovery, but you've never actually addressed the root cause. This is what MTTRe measures: not the time to get the service back up and running, but the time to resolve the root cause and ensure the problem never happens again. The never-happens-again part is hard to achieve but vital. If you are responding quickly but never getting to the root cause, you will be living in a stressful world of constant firefighting. However, if you are resolving the root cause of each incident, then quality will increase over time.

<div class="notice--big--primary">

### Calculating Mean Time to Resolve

| Incident #3 | |
| -------------------- | -------- |
| Reported | day 1 |
| Addressed | day 1 |
| Root Cause Analysis | day 2 |
| Root Cause Addressed | day 31 |
| **Resolve Time** | **30 days** |

| Incident #4 | |
| -------------------- | -------- |
| Reported | day 1 |
| Addressed | day 1 |
| Root Cause Analysis | day 2 |
| Root Cause Addressed | day 11 |
| **Resolve Time** | **10 days** |

Mean Time To Resolve = (30 days + 10 days) / 2 incidents = 20 days

</div>

## Mean Time to Acknowledge (MTTA)

An essential part of good incident management is an on-call rotation. You need someone around to respond to incidents when they occur. Our previous metrics would be unable to differentiate between an incident that took 3 hours to recover from and one that was recoverable in 5 minutes but took two hours and 55 minutes to be acknowledged. MTTA highlights this difference. It is a metric for measuring the responsiveness of the on-call person to any alerts.
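The recovery, resolve, and acknowledge metrics are all plain averages over per-incident durations. A minimal sketch (the helper name is my own; the numbers reuse the article's MTTR example of 2-hour and 48-hour recoveries):

```python
def mean_duration(durations):
    """Average per-incident durations (recovery, resolve, or ack times)."""
    if not durations:
        raise ValueError("need at least one incident")
    return sum(durations) / len(durations)

print(mean_duration([2, 48]))  # → 25.0 hours (MTTR for the two incidents)
```

The same helper computes MTTRe or MTTA; only the timer you feed it changes (time to root-cause fix, or time from alert to acknowledgement).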
<div class="notice--big--primary">

### Calculating Mean Time to Acknowledge

| Incident #5 | |
| -------------------- | -------- |
| Reported | 10:00 am |
| Acknowledged | 10:05 am |
| Recovered | 12:00 pm |
| **Acknowledge Time** | **5 minutes** |

| Incident #6 | |
| -------------------- | -------- |
| Reported | 10:00 am |
| Acknowledged | 11:55 am |
| Recovered | 12:00 pm |
| **Acknowledge Time** | **115 minutes** |

Mean Time To Acknowledge = (5 minutes + 115 minutes) / 2 incidents = 60 minutes

</div>

## Summary

There are many ways to measure the quality of your software-as-a-service product. MTBF, MTTR, MTTRe, and MTTA each offer a different lens for viewing your software release life cycle. As you improve your Software Development Life Cycle, find ways to collect aggregate metrics like these and choose one or two to target for improvement. Invest in improving these metrics, and you'll make up for it in time saved fighting fires. Also, focusing on aggregate metrics can be an effective way to move the discussion from blame about specific incidents to a higher-level debate about changing the process to better support the company's goals.

If your build pipeline is taking more than 15 minutes and therefore negatively affecting your metrics, take a look at Earthly's [free and open build tool](http://earthly.dev/).

Originally published on [Earthly's blog](https://earthly.dev/blog/incident-management-metrics/)
adamgordonbell
970,513
How I cleared Microsoft Azure AZ-204 in one month
AZ-204: Developing Solutions for Microsoft Azure focuses on design, development, deployment,...
0
2022-01-28T09:35:09
https://dev.to/devanandukalkar/how-i-cleared-az-204-in-one-month-51k8
azure, az204, python
**AZ-204: Developing Solutions for Microsoft Azure** focuses on the design, development, deployment, maintenance, and monitoring of scalable solutions on Azure. There are no prerequisites for this certification, but if you complete [AZ-900: Azure Fundamentals](https://docs.microsoft.com/en-us/learn/certifications/exams/az-900) first, your learning process will speed up, since it covers some of the general concepts of Microsoft Azure.

The exam covers each of these aspects with varying weightage. Below are the core skills that Microsoft has outlined on their learning portal:

1. _Develop Azure compute solutions (25-30%)_
2. _Develop for Azure storage (15-20%)_
3. _Implement Azure security (20-25%)_
4. _Monitor, troubleshoot, and optimize Azure solutions (15-20%)_
5. _Connect to and consume Azure services and third-party services (15-20%)_

The detailed exam skills outline can be found [here](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE4oZ7B). To give you a glimpse of the Azure services that will need to be studied from an exam perspective:

1. _Azure Virtual Machines_
2. _Azure Functions_
3. _Azure Storage Account_
4. _Azure Web Apps_
5. _Azure Logic Apps_
6. _Azure Cosmos DB_
7. _Azure Container Registry/Docker_
8. _Azure Key Vault_
9. _Azure Active Directory_
10. _Azure Authentication (OAuth 2.0), Security_
11. _Azure Service Bus/Queue, Azure Storage queues_
12. _Azure Event Grid, Event Hub_
13. _Azure API Management_
14. _Azure Cache for Redis_
15. _Azure ARM Templates_
16. _Application Insights_
17. _Azure Monitor_
18. _Azure CLI, PowerShell_
19. _Azure Batch_
20. _Azure Search_

The exam requires a bit of experience with a programming language supported by Azure (e.g. C#, Python, Java, Node.js). The tutorials out there mostly use C# for development, but that should not matter much, as there are Azure SDKs for each of the supported languages that you can use to develop a solution.
**How I prepared for the exam:**

I started with Scott Duffy's [AZ-204 Developing for Microsoft Azure Exam Prep](https://www.udemy.com/course/70532-azure/) course on Udemy. This course is a practical guide to developing each of the Azure services outlined for the exam, and it can be completed within a week or two (depending on your speed).

The second course that I referred to was from Alan Rodrigues on YouTube ([link](https://www.youtube.com/watch?v=wWBW6ojr-Nw&list=PLLc2nQDXYMHpekgrToMrDpVtFtvmRSqVt)). This course is longer but provides more comprehensive knowledge: Alan develops solutions for each of those Azure services from scratch using C#/.NET in the tutorials. If you are short on time, I would suggest taking the first course by Scott Duffy; but if you really want to learn the material in detail, I would recommend Alan Rodrigues's course on YouTube.

As the courses cover a lot, it's almost impossible to grasp everything by just watching the videos. You must practice developing these solutions using your preferred programming language; that way, it will be easier to understand the nitty-gritty concepts, which will ultimately help you in the exam. I practiced Azure development using Python. The SDKs can be found on this [GitHub](https://github.com/Azure/azure-sdk-for-python) link. There are many built-in extensions for these Azure services in Visual Studio Code, so make sure to utilize them. Additionally, you can read Microsoft's official documentation and make use of their learning portal. Details can be found [here](https://docs.microsoft.com/en-us/learn/certifications/azure-developer/).

**Practice Questions:**

Practicing questions should make up about 30% of your preparation time. This is very important, as you will get to know the variety of questions that will be asked and how to manage time during the examination. There are about 2 case studies given in the exam, with 10-15 questions based on them. 
It can take time to read a case study and understand the problem. The trick is to not read the entire case study up front: instead, read the question first and then head back to the specific, related portion of the case study. This will help you save time during the exam.

You must get a good grasp of Azure CLI and PowerShell commands and their arguments, as there are many questions about completing a command by filling in the blanks. There are a few questions on completing unfinished code as well (e.g. importing libraries or invoking methods/classes).

I would recommend the below 2 resources to practice questions for AZ-204.

1. Practice tests by [Skillcertpro](https://skillcertpro.com/product/developing-solutions-for-microsoft-azure-az-204-practice-exam-test/)
2. Practice tests by [Whizlabs](https://www.whizlabs.com/microsoft-azure-certification-az-204/)

In fact, even if you complete just the first one, it should be more than enough to clear the examination. The key is to have as much hands-on experience as possible.

I hope this helps you prepare for the examination. Best of luck, and let me know your questions in the comment section.

**Check out my verified badge:** https://www.credly.com/badges/05857a22-73a8-4ec2-a438-c5ee8733d6ee
devanandukalkar
971,086
The Intl object: JavaScript can speak many languages
JavaScript has a useful yet unknown object to handle formatting dates, numbers, and other values in...
0
2022-01-28T16:03:11
https://nicozerpa.com/intl-javascript-can-speak-many-languages/
javascript, webdev, programming
JavaScript has a useful yet little-known object to handle formatting dates, numbers, and other values in different languages: the `Intl` object. This object is very useful when you have a raw date or a big number and you need to **display it in a more user-friendly way**. You can, for example, convert the date `2022-01-16T20:10:48.142Z` to "January 16, 2022 at 8:10 PM" for people in the US, and to "16 de enero de 2022, 20:10" for those who live in Spain.

## Formatting numbers and currency

You can format numbers and currency with the `Intl.NumberFormat` object. This is how it works:

```javascript
const usaCurrencyFormatter = new Intl.NumberFormat(
    "en-US", // <-- Language and country
             //     (in this case, US English)
    {
        style: "currency", // <-- it can also be
                           //     "decimal", "percent"
                           //     or "unit"

        currency: "USD"    // <-- Which currency to use
                           //     (not needed if the style
                           //     is not "currency")
    }
);

usaCurrencyFormatter.format(2349.56);
// ☝️ returns "$2,349.56"

const spainCurrencyFormatter = new Intl.NumberFormat(
    "es-ES", // <-- Spanish from Spain
    {
        style: "currency",
        currency: "EUR" // <-- Euros
    }
);

spainCurrencyFormatter.format(2349.56);
// ☝️ returns "2349,56 €"

const qatarNumberFormatter = new Intl.NumberFormat(
    "ar-QA", // <-- Arabic from Qatar
    { style: "decimal" }
);

qatarNumberFormatter.format(4583290.458);
// ☝️ returns "٤٬٥٨٣٬٢٩٠٫٤٥٨"
```

When you're formatting currency, you have to specify the `currency` parameter with the code of the currency you want/need to use. [You can check a list of currency codes here](https://en.wikipedia.org/wiki/ISO_4217#Active_codes). 
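Currency codes also control details such as decimal precision; the Japanese yen, for instance, has no minor unit. Here is a small sketch (not from the original article) showing that the same formatter options produce different precision when only the currency code changes:

```javascript
// The Japanese yen has no minor unit, so Intl.NumberFormat
// emits no decimal places for "JPY" (unlike "USD", which gets two).
const yenFormatter = new Intl.NumberFormat("en-US", {
    style: "currency",
    currency: "JPY"
});

console.log(yenFormatter.format(5000));
// ☝️ returns "¥5,000"
```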
## Formatting dates

`Intl.DateTimeFormat` lets you format dates in different languages and locales:

```javascript
const date = new Date("2022-01-16T20:10:48.142Z");

const usaDateFormatter = new Intl.DateTimeFormat(
    "en-US", // US English
    {
        dateStyle: "short", // <-- how to display the date
                            //     ("short", "medium", or "long")

        timeStyle: "short", // <-- how to display the time
                            //     if you don't include this parameter,
                            //     it will just show the date

        timeZone: "America/Los_Angeles" // <-- this object also
                                        //     converts time zones
    }
);

usaDateFormatter.format(date);
// ☝️ returns "1/16/22, 12:10 PM"

const brazilDateFormatter = new Intl.DateTimeFormat(
    "pt-BR", // Portuguese from Brazil
    {
        dateStyle: "long",
        timeStyle: "medium",
        timeZone: "UTC"
    }
);

brazilDateFormatter.format(date);
// ☝️ returns "16 de janeiro de 2022 20:10:48"

const japanDateFormatter = new Intl.DateTimeFormat(
    "ja", // Japanese
    {
        dateStyle: "long",
        timeStyle: "short",
        timeZone: "Asia/Tokyo"
    }
);

japanDateFormatter.format(date);
// ☝️ returns "2022年1月17日 5:10"
```

However, these are only two of the many utilities in `Intl` to format other types of values into different languages. [On this page, there's the full list of formatters](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Intl).

## Where to get languages and country codes?

Language codes consist of three parts: *language*-*writingSystem*-*countryOrRegion*. Only the first part is necessary, and the writing system is necessary only if the language can be written in more than one alphabet/writing system. Here are some examples:

```
en-US: English, United States
es: Spanish
pt-BR: Portuguese, Brazil
zh-Hans-CN: Chinese, simplified writing ("hans"), from China
```

The entire list of languages, countries or regions, and writing systems (or "scripts") [can be found here](https://www.iana.org/assignments/language-subtag-registry/language-subtag-registry). 
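One practical note the article doesn't state explicitly: each `Intl` formatter is a reusable object, so when you have many values to format it's better to create the formatter once and call `.format()` repeatedly, rather than calling `toLocaleString()` on each value. A quick sketch:

```javascript
// Create the formatter once...
const usdFormatter = new Intl.NumberFormat("en-US", {
    style: "currency",
    currency: "USD"
});

// ...then reuse it for every value in the list.
const prices = [19.99, 250, 1299.5];
const labels = prices.map((price) => usdFormatter.format(price));

console.log(labels);
// ☝️ ["$19.99", "$250.00", "$1,299.50"]
```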
---

Free JavaScript Newsletter: Every other Monday, easy and actionable steps to level up your JavaScript skills. [Click here to subscribe.](https://nicozerpa.com/newsletter/)
nicozerpa
971,860
What is the best practice to provide a test account for Google SignIn in Google Play Console?
What is the best practice to provide a...
0
2022-01-29T12:36:50
https://dev.to/torufuruya/what-is-the-best-practice-to-provide-a-test-account-for-google-signin-in-google-play-console-4293
{% stackoverflow 70827749 %}
torufuruya
972,144
Why a Spring Cloud Config Server is Crucial to a Good CI/CD Pipeline (Pt 1)
Introduction Before I began developing large-scale, enterprise level software, I didn’t...
0
2022-04-12T20:49:45
https://www.paigeniedringhaus.com/bloh/why-a-spring-cloud-config-server-is-crucial-to-a-good-ci-cd-pipeline-pt-1
java, devops, springboot, backend
---
title: Why a Spring Cloud Config Server is Crucial to a Good CI/CD Pipeline (Pt 1)
published: true
date: 2018-05-26 00:00:00 UTC
tags: java,devops,springboot,backend
canonical_url: https://www.paigeniedringhaus.com/bloh/why-a-spring-cloud-config-server-is-crucial-to-a-good-ci-cd-pipeline-pt-1
---

[![Laptop screen filled with minified source code](https://www.paigeniedringhaus.com/static/24d70e51d7ba55ddbed9ad6d61d1d908/15ec7/screen-of-code.jpg "Laptop screen filled with minified source code")](/static/24d70e51d7ba55ddbed9ad6d61d1d908/eea4a/screen-of-code.jpg)

## Introduction

Before I began developing large-scale, enterprise level software, I didn’t fully comprehend the value of things like integration tests, automated build processes and shared libraries. For the small applications I built during my [coding bootcamp](https://www.paigeniedringhaus.com/blog/how-i-went-from-a-digital-marketer-to-a-software-engineer-in-4-months), it was easy for me to do things like hard code URLs so my frontend application could talk to its backend counterparts and their databases. I only had two environments: my local development environment on my laptop and my AWS production environment where I hosted my projects portfolio, and it was a fairly simple, albeit entirely manual, process (with a good bit of trial and error) to get the pieces of the puzzle connected.

Fast forward several months to my software engineering job of maintaining and improving upon an application backed by no less than 13 microservices. No, I’m not joking. Yes, all 13 are critical to this application working. Did I mention that in my team’s [test driven development (TDD)](https://en.wikipedia.org/wiki/Test-driven_development) agile process, we also have 4 different development environments before a new feature makes it to production? Yep: a local development space, a QA space, a Q1 space, an acceptance space and finally our production space. 
So, think about it: does it make sense to have to manually update each and every one of these services as they move through the development process on their way to production? Of course it doesn’t. You need a centralized, easy-to-automate way to update those environment variables. This is where a cloud configuration server can shine. To give you a quick overview, cloud config servers are designed to:

> “Provide server and client-side support for externalized configuration in a distributed system. With the Config Server you have a central place to manage external properties for applications across all environments.”
>
> [- Spring Cloud Config](https://cloud.spring.io/spring-cloud-config/)

**Basically, a config server allows you to externally store variables your application will need to run in all environments, regardless of lifecycle, and update them in one, centralized place.**

[![Spring Cloud logo](https://www.paigeniedringhaus.com/static/4b53473c4b91ca751900865efaa74475/f93b5/spring-cloud.jpg "Spring Cloud logo")](/static/4b53473c4b91ca751900865efaa74475/f93b5/spring-cloud.jpg)

_Behold, the Spring Cloud._

The first config server I was introduced to is the [Spring Cloud Config](https://cloud.spring.io/spring-cloud-config/) server, since my team is responsible for a full stack application composed of many Java Spring Boot backend microservices and a JavaScript Node.js frontend microservice.

---

## Setting Up Your Config Server

Setting up the config server is actually pretty easy. To get started, you can go to the [Spring Initializr site](http://start.spring.io/), add `Config Server` as a dependency to the project and click the "Generate Project" button in the browser.

> I’ll also add that, at least at the time I’m writing this, the Spring Cloud Config Server is not compatible with Spring Boot version 2 (I found this out the hard way), so choose the highest Spring Boot version 1 snapshot you can. 
[![Spring Initializr site](https://www.paigeniedringhaus.com/static/44b116839628c34184589e88e7a330fa/1e043/spring-initializr.png "Spring Initializr site")](/static/44b116839628c34184589e88e7a330fa/c679a/spring-initializr.png)

_This is all you need to get your Spring Cloud Config Server running: the Config Server dependency. Note the version of Spring Boot is 1.5.14, not 2.0.2, which is the latest version it automatically defaults to._

Once you’ve downloaded the project and opened it up in your IDE of choice (mine is [IntelliJ](https://www.jetbrains.com/idea/)), you can go straight to the main application file, add the `@EnableConfigServer` and `@SpringBootApplication` Spring Boot annotations, and you’re almost ready to go. Below is my actual main method file; truly, that’s all it needs.

```java
package com.myExampleConfigServer;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.config.server.EnableConfigServer;

@EnableConfigServer
@SpringBootApplication
public class MyExampleConfigServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(MyExampleConfigServerApplication.class, args);
    }
}
```

The other file you’ll need to configure is your `application.yml` file in your `resources/` folder. This file will be where you set up your cloud config server’s access to GitHub (and the GitHub files it will use to provide application configuration variables). As you can see, there’s a spot for your GitHub URL (which I’ll come back to in a minute), username and password.

Since our applications are hosted on [Pivotal Cloud Foundry](https://pivotal.io/platform), we use what are called [VCAP services](https://docs.cloudfoundry.org/devguide/deploy-apps/environment-variable.html) (it stands for VMWare Cloud Application Platform). 
It’s a fancy way of saying a service where environment variables are stored that only people (and applications) who have access to that space within PCF can access. This lets us put sensitive information like service accounts and passwords somewhere besides a public GitHub repo, where the credentials are available for the config server, but not available for anyone without authorization.

Here’s my `application.yml`. With these two files configured and a cloud platform to host your server, you’ll be ready to start using your config server.

**application.yml**

```yaml
server:
  port: 8888

spring:
  application:
    name: myexample-config-server
  cloud:
    config:
      server:
        git:
          uri: ${vcap.services.config-service.credentials.url}
          username: ${vcap.services.config-service.credentials.user}
          password: ${vcap.services.config-service.credentials.token}
```

If you’d like to see an example of the config server app, I’ve put a starter project [here in GitHub](https://github.com/paigen11/spring-cloud-config-server-example/tree/master).

Ok, so you’ve got your config server. Great, now it’s time to set up your config server properties — the files the config server will actually access to pull environment variables for your projects.

## Setting Up Your Config Server Properties

This part is a cinch. Create a totally empty new repo in GitHub, and then create a new file with the following naming conventions:

```yaml
[application name]-[life cycle profile].yml

ex. my-app-to-config-QA.yml
```

Within this YAML file, you’ll be able to add your configuration properties for your application. Here’s some sample info you might include in the YAML.

**my-app-to-config-QA.yml**

```yaml
configurations:
  featureToggleFlag: true
  my-custom-flag: false
  sampleCronJob: "0 0 1 * * *"
  sampleUrl: http://google.com
```

Commit this file to a GitHub repo, copy that repo URL and place it in the Cloud Config Server URI in your config server’s `application.yml`. 
Now the server knows where to look in GitHub for the config files you want to use. You can see an example of a config properties repo [here in GitHub](https://github.com/paigen11/spring-cloud-config-properties-sample).

## Connecting Your Java Spring Boot Application to Your Config Server

With both your config server and the properties you want the server to provide in place, you need your project to be able to get those properties when your Spring Boot service starts up. Again, there’s not a lot of overhead to getting this working. In the `build.gradle` file of the app that needs these config server values, you’ll need the following dependencies:

**build.gradle**

```java
compile group: 'org.springframework.cloud', name: 'spring-cloud-starter-config', version: '1.3.3.RELEASE'
compile group: 'org.springframework.boot', name: 'spring-boot-configuration-processor', version: "${springBootVersion}"
```

Then, you’ll create a `bootstrap.yml` file which will live alongside your `application.yml` file in your `src/main/resources/` folder, and it will contain information pointing to the config server’s location wherever it’s being hosted. It will look something like this:

**bootstrap.yml**

```yaml
spring:
  application:
    name: ##APPLICATION NAME GOES HERE##
  cloud:
    config:
      uri: https://myexample-config-server.non-prod.com

---
spring:
  profiles: production
  cloud:
    config:
      uri: https://myexample-config-server.prod.com
```

Next, you’ll go to whichever files actually need the config properties and add the annotations `@Configuration` and `@ConfigurationProperties(prefix = "configurations")` (or whatever you’ve named your config properties in the config properties YAML). Here's an example:

**XyzProperties.java**

```java
@Configuration
@ConfigurationProperties(prefix = "configurations")
public class ConfigProperties {

}
```

If you notice, the properties are prefixed by `configurations`. 
To bind these properties to the `ConfigProperties` class, you first add the `prefix` to the `@ConfigurationProperties` annotation, as shown above. Then, to bind the `"featureToggleFlag"` property, add the following member variable, along with the getter and setter. Or use [Lombok](https://projectlombok.org/features/Data)'s `@Data` annotation, which generates the getters and setters for you - your choice.

**XyzProperties.java**

```java
@Configuration
@ConfigurationProperties(prefix = "configurations")
public class ConfigProperties {

    private boolean featureToggleFlag;

    // Add getters and setters here if desired...
}
```

If you match the member variable name with the actual property name in the configuration properties, Spring will automatically bind it to the member variable. So far so good.

The very last thing you must do before starting up your service is ensure that you set `SPRING_PROFILES_ACTIVE` to the correct config properties environment. The current values would be `"QA"` for the QA environment, `"Q1"` for the Q1 environment, `"prod"` for production and so on. Be warned, these values are case sensitive, so however you’ve named them in your config properties file, it must be exactly the same in the "Active Profiles" input in IntelliJ's start script setup.

In IntelliJ, this can be done by clicking the "Edit Configurations" property when setting up a Spring Boot project to run, and adding the correct value to the "Active Profiles" input midway down the config modal.

[![Active profile setting inside of IntelliJ IDE](https://www.paigeniedringhaus.com/static/6ae9167b7e83f687a89594e2da8007c4/1e043/intellij-active-profiles.png "Active profile setting inside of IntelliJ IDE")](/static/6ae9167b7e83f687a89594e2da8007c4/0d390/intellij-active-profiles.png)

_Inside your IntelliJ run configuration, add this line in the "Active Profiles" input._

And that’s it. 
Now, when you start up the Spring Boot service, you should see the `Spring Profile` being set in the logs, and to be sure, you might also put in some `System.out.println` messaging to let you know if it’s successfully reached the config server and acquired the properties needed.

Fantastic: we've built a config server, moved environment variables out of the Spring Boot project to a centralized location where changes can be made quickly, and configured a Java service to pull those variables from the config server. Nicely done.

---

## Conclusion and Part Two: The Config Server and Node.js

But how do you do the same thing with a JavaScript / Node.js project? Can you use the same Spring Boot config server? Or do you have to set up a separate Node-based config server?

You can use the existing one, and I’ll show you how to set it up, and how you might use it in a JavaScript project to enable and disable feature toggles for faster feature deployments, in my **[next post](https://www.paigeniedringhaus.com/blog/leveraging-a-spring-cloud-config-server-in-a-node-js-apps-feature-toggles-pt-2)**.

Check back in a few weeks — I’ll be writing more about JavaScript, React, IoT, or something else related to web development.

If you’d like to make sure you never miss an article I write, sign up for my newsletter here: https://paigeniedringhaus.substack.com

Thanks for reading!

---

## Further References & Resources

- [Spring Cloud Config Initializr](https://cloud.spring.io/spring-cloud-config)
- [Sample Cloud Config Server Repo](https://github.com/paigen11/spring-cloud-config-server-example)
- [Sample Cloud Config Properties Repo](https://github.com/paigen11/spring-cloud-config-properties-sample)
paigen11
972,145
Leveraging a Spring Cloud Config Server in a Node.js App's Feature Toggles (Pt 2)
Introduction As I showed in my first post about the Spring Cloud Config server, setting...
0
2022-05-15T13:42:35
https://www.paigeniedringhaus.com/blog/leveraging-a-spring-cloud-config-server-in-a-node-js-apps-feature-toggles-pt-2
javascript, node, devops, springboot
---
title: Leveraging a Spring Cloud Config Server in a Node.js App's Feature Toggles (Pt 2)
published: true
date: 2018-06-02 00:00:00 UTC
tags: javascript,nodejs,devops,springboot
canonical_url: https://www.paigeniedringhaus.com/blog/leveraging-a-spring-cloud-config-server-in-a-node-js-apps-feature-toggles-pt-2
---

[![Laptop screen filled with minified source code](https://www.paigeniedringhaus.com/static/1bc85da9de192091881a510c69f4a2e3/15ec7/more-code.jpg "Laptop screen filled with minified source code")](/static/1bc85da9de192091881a510c69f4a2e3/eea4a/more-code.jpg)

## Introduction

As I showed in my [first post about the Spring Cloud Config server](https://www.paigeniedringhaus.com/blog/why-a-spring-cloud-config-server-is-crucial-to-a-good-ci-cd-pipeline-pt-1), setting up a configuration server for a Java Spring Boot application and then leveraging that server to provide environment variables to existing Spring Boot projects is really handy. The next trick was figuring out if it was possible to use that very same config server in a Node.js project.

To give you a quick rundown, my team worked on a microservice-based application architecture, meaning we had up to thirteen separate Java services and one Node service all running to power our application. With thirteen Spring Boot microservices, it made total sense to go with a Java-based config server, but when our business partners decided they wanted to implement feature toggles, it became a priority to get the config server working in the Node project as well.

---

## What is a Feature Toggle, and Why Do I Need It?

Let me set the stage by giving you a quick rundown of what a feature toggle is and why it’s so beneficial for developers, business partners and customers. 
> A **feature toggle** is a technique in [software development](https://en.wikipedia.org/wiki/Software_development) that attempts to provide an alternative to maintaining multiple [source-code](https://en.wikipedia.org/wiki/Source_code) branches (known as feature branches), such that a feature can be tested even before it is completed and ready for release. A feature toggle is used to hide, enable or disable the feature during run time. For example, during the development process, a developer can enable the feature for testing and disable it for other users.
>
> [— Wikipedia, Feature Toggle](https://en.wikipedia.org/wiki/Feature_toggle)

Basically, a feature toggle allows developers to introduce new features to users in a more Agile development fashion. With feature toggles, devs can continue to maintain just one codebase but show different views of the UI to users in production versus the development team in lower life cycles, based on whether those toggles are set to ‘on’ or ‘off’. They also allow our business and UX partners to validate early on if new features are beneficial to our users, or if they need to reevaluate how we’re approaching a solution.

Let me make one thing crystal clear though: **feature toggles are not meant to be permanent solutions.** They are temporary, meant to remain only until the feature has been completed and validated as useful, or until whatever other criteria was set for success or failure has been met. Then the code around the feature toggle should be removed, to keep the codebase clean and as free of technical debt as possible.

For my specific situation, our feature toggle was needed to hide a button that didn’t have full functionality yet (because the backend service to support it wasn’t finished), and to hide the radio buttons from users, since they wouldn’t be needed until the button and service were done. 
[![The UI a typical user sees (feature toggle hides buttons and checkboxes)](https://www.paigeniedringhaus.com/static/097dd5b077051b3987a8884e99515af7/1e043/feature-toggle-users.png "The UI a typical user sees (feature toggle hides buttons and checkboxes)")](/static/097dd5b077051b3987a8884e99515af7/2cefc/feature-toggle-users.png)

_Feature toggle view for users: no button and no checkboxes._

Above is the UI view users in production would see: no buttons, no checkboxes. Below is the UI view developers working on the feature need to see: buttons and checkboxes present.

[![The UI a developer sees (feature toggle shows buttons and checkboxes)](https://www.paigeniedringhaus.com/static/cd8585f87c6ca9fa4882efabf18b3f41/1e043/feature-toggle-devs.png "The UI a developer sees (feature toggle shows buttons and checkboxes)")](/static/cd8585f87c6ca9fa4882efabf18b3f41/2cefc/feature-toggle-devs.png)

_Feature toggle view for devs: buttons and checkboxes._

Let's look at how we got these two views simultaneously.

---

## Setting Up Your Config Server Properties

Before we get to the Node config server, let’s first set up the new file with the following naming conventions in our config server properties repo:

```yaml
[application name]-[life cycle profile].yml

ex. my-ui-app-to-config-QA.yml
```

Within this YAML file, you’ll be able to add your configuration properties for your UI feature toggle. Here’s all I had to include for my feature toggles.

```yaml
modifyDatesToggle: true
```

Commit this to a GitHub repo, and we’re set there. You can see an example of the config properties repo [here in GitHub](https://github.com/paigen11/spring-cloud-config-properties-sample). 
### nodecloud-config-client Node.js Set Up

To leverage the existing Spring Cloud Config server set up with our Node frontend application, I chose the [`nodecloud-config-client`](https://www.npmjs.com/package/nodecloud-config-client) on npm, as it’s a fairly well documented Spring Cloud Config Server client written for JavaScript.

> I should mention, this is not to be confused with `node-cloud-config-client` or `cloud-config-client` — really, there’s a million versions of config client packages written for Node.js, but the one listed above and in the resources link is the one I used.

So to get down to business: after saving the npm package to our UI’s `package.json` dependencies, add it to the `server.js` file (or wherever your Node server’s configuration is located).

[![Code set up in app.js file showing the cloud-config-client requirement](https://www.paigeniedringhaus.com/static/3e4d6fedebe868cce1a95acbd2414d9e/1e043/node-cloud-config.png "Code set up in app.js file showing the cloud-config-client requirement")](/static/3e4d6fedebe868cce1a95acbd2414d9e/1e043/node-cloud-config.png)

_See the `client` variable at the bottom — that’s where the config client comes in to play._

If you’re developing in multiple environments like my team did (local, QA, Q1, production, etc.), add a variable to access the Spring Cloud config server for all stages of the development process (these [environment variable services](https://docs.cloudfoundry.org/devguide/deploy-apps/environment-variable.html#VCAP-SERVICES) are how we connected to it in Pivotal Cloud Foundry). I mentioned these back in my **[previous post](https://www.paigeniedringhaus.com/blog/why-a-spring-cloud-config-server-is-crucial-to-a-good-ci-cd-pipeline-pt-1)**, if you’d like more info. 
[![Logic in Node server determining how to fetch feature toggles from the Spring Cloud Config server](https://www.paigeniedringhaus.com/static/8b2f4460d32b995fd95fc968a5a96c72/1e043/config-logic.png "Logic in Node server determining how to fetch feature toggles from the Spring Cloud Config server")](/static/8b2f4460d32b995fd95fc968a5a96c72/610c0/config-logic.png)

_The `configServerUrl` goes through a series of checks for environment variables, and if it finds none of those (like during local development), it defaults to the hard coded URL. I used the same environment variables and logic for the `configServicesProfile` and `featureToggle`._

Next, add a variable to account for where the config services profile will be located (for local development I had it default to QA), and a variable to check for enabled/disabled feature toggles. The below code connects to the config server (when feature toggles are enabled), and maps any feature toggles found to an object:

```javascript
{ modifyDatesFeature: true }
```

Then that object is sent to the rest of the UI with the `app.get` endpoint simply called `"/featureToggles"`.

[![Node code connecting to config server and checking for feature toggles](https://www.paigeniedringhaus.com/static/ec2063dabcc7ff699489c64a0bf017ca/1e043/node-feature-toggle.png "Node code connecting to config server and checking for feature toggles")](/static/ec2063dabcc7ff699489c64a0bf017ca/62a6a/node-feature-toggle.png)

_Feature flag variable, call to the client to access the config server with the correct environment (QA in my case), and a check: if the `featureToggle` is `‘on’`, map the properties that are available to it. Below is the endpoint to call the features._

Great. The Node server is set up, and you can get the feature toggles (or whatever else you’re storing in the config server) just by changing a few variables. 
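Since the server code here is only shown in screenshots, the following is a hypothetical sketch of the logic just described: check a chain of environment variables for the config server URL (falling back to a hard-coded QA URL for local development), and map the properties the config server returns into the toggle object served from `"/featureToggles"`. All names below are illustrative assumptions, not the original code.

```javascript
// Hypothetical sketch; function, variable, and env var names are
// assumptions based on the article's description, not the real code.

// Resolve the config server URL from environment variables,
// defaulting to a hard-coded URL for local development.
function resolveConfigServerUrl(env) {
  return (
    env.CONFIG_SERVER_PROD_URL ||
    env.CONFIG_SERVER_QA_URL ||
    "https://myexample-config-server.non-prod.com"
  );
}

// When feature toggles are enabled, map the raw properties from the
// config server into the object the UI consumes; otherwise expose
// no toggles at all.
function mapFeatureToggles(properties, togglesEnabled) {
  if (!togglesEnabled) {
    return {};
  }
  return { modifyDatesFeature: Boolean(properties.modifyDatesToggle) };
}

const toggles = mapFeatureToggles({ modifyDatesToggle: true }, true);
console.log(resolveConfigServerUrl(process.env), toggles);
```

An Express `app.get("/featureToggles", ...)` handler would then simply return the `toggles` object to the client.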
### Connecting Your Client Side UI Framework to Your Node Server

The first thing you’ll need to do on the frontend is make an AJAX call to the Node server to check for any existing feature toggles. The framework of the app implementing this was JavaScript MVC (an older framework popular years ago; I'd never heard of it before I worked with it either), and here’s an example of what the call looked like.

[![JavaScript client-side code calling for any feature toggles from Node server](https://www.paigeniedringhaus.com/static/08fa392466f831d1e687036a78b88e80/1e043/javascript-mvc.png "JavaScript client-side code calling for any feature toggles from Node server")](/static/08fa392466f831d1e687036a78b88e80/000c7/javascript-mvc.png)

In the JavaScript file that’s actually concerned with feature flags (for me, it was a `dates.js` file), I imported the feature toggle AJAX call and created a function to check for the specific feature flag associated with this functionality: `modifyDatesToggle`.

[![Feature toggle code for showing / hiding particular date / time feature](https://www.paigeniedringhaus.com/static/c9c250b288bb39d8f8c12f1643ce353f/1e043/js-feature-toggle-code.png "Feature toggle code for showing / hiding particular date / time feature")](/static/c9c250b288bb39d8f8c12f1643ce353f/5f6dd/js-feature-toggle-code.png)

_The feature flag was imported at the top of the file, then I call the feature flag endpoint to see if there’s anything there the file needs to be aware of. If the feature flag matches the variable I named “modifyDatesToggle”, it gets pulled in and applied to the file._

Finally, inside your JavaScript file using the feature toggle, group the code wherever possible according to whether the feature toggle is enabled, like so. 
[![Code modifications that happen when feature toggle is enabled](https://www.paigeniedringhaus.com/static/f74ee8cfe78d761e09949ec07632fb67/1e043/feature-toggle-grouped-code.png "Code modifications that happen when feature toggle is enabled")](/static/f74ee8cfe78d761e09949ec07632fb67/3c492/feature-toggle-grouped-code.png)

_Example 1: This check displays the check boxes if the `modifyDatesToggle` is `“on.”`_

The thinking goes that by wrapping all the feature-enabled code inside of the feature toggle check, even if the call to the config server fails for some reason, users will only ever see the code that should be deployed to production. This gives us extra protection by not being as dependent on more variables than necessary.

---

## But Wait, There’s More — Enabling End-to-End Tests with Feature Toggles

There’s one more thing I wanted to include in this blog: how I used the same feature toggle logic to run or skip some of the Protractor end-to-end tests.

If you’re using a configuration file for Protractor, and I don’t know why anyone wouldn’t, there’s a cool feature that lets you make your own [custom parameters](https://moduscreate.com/blog/protractor_parameters_adding_flexibility_automation_tests/).

I took a page out of my own book where the feature toggle is set in the `server.js` file, and did a similar setup in the Protractor config file: if environment variables are passed in for the feature toggle, the tests that concern that feature will run; if not, they’ll be ignored — I’ll show you the test syntax in a moment, as it’s a little unconventional.

[![Feature toggle setup in e2e testing configuration](https://www.paigeniedringhaus.com/static/16ebbe609d61f226bf2d751080c234ba/1e043/feature-toggle-e2e-config.png "Feature toggle setup in e2e testing configuration")](/static/16ebbe609d61f226bf2d751080c234ba/63ec5/feature-toggle-e2e-config.png)

_Here’s the configuration that all the Protractor tests use.
Params takes in the modifyDatesToggle object and if it exists, it sets the params; if it doesn’t, it sets it to null._

When you’re running the tests locally, you can actually pass the params in through the command line like this:

```bash
$ protractor e2e/conf.js --params.modifyDatesToggle='true'
```

To implement the feature toggle in the actual end-to-end tests, inside of each test that requires a feature toggle check, add an if statement right after the `'it'` declaration, checking if the feature toggle is null or not. As I said, this is a little weird to see, but it works.

[![Automated test utilizing feature toggled code](https://www.paigeniedringhaus.com/static/5b10677f0d6acfc3724bddbf96a836d1/1e043/feature-toggle-test.png "Automated test utilizing feature toggled code")](/static/5b10677f0d6acfc3724bddbf96a836d1/a3767/feature-toggle-test.png)

_An example of a Protractor end-to-end test with a feature toggle check. Not all the tests need this check — just the ones that require the feature toggle to be either off or on._

If the feature toggle is not enabled, the test will be skipped, but show up as if it passed in the console (eliminating failing tests because the proper environment wasn’t present for the test to use). If the feature toggle is enabled, the test will run as normal.

This prevents us from having to go through the tests one by one and either `x` them out to ignore them or remove the `x` so they’ll run, depending on what the desired functionality is. And once it’s time to remove the feature toggle, it’s easy enough to either remove the unnecessary tests from the `spec` file or just `x` them out, depending on your team’s preference for keeping tests even after functionality has changed. (Personally, I would remove it.)

Above, I showed how to run the feature toggle tests from the command line, but it can also be run programmatically when the application is being deployed.
Segue: for my team personally, we use [Jenkins](https://jenkins.io/) for all our builds. That’s a whole other set of blog posts, but suffice it to say, while the build is running, all the unit tests are run, all the end-to-end tests are run, and all the rest of the checks we have in place before declaring a feature tested and ready for our business team to accept happen.

Back to how this matters: since Jenkins is our job runner, we can pass the environment variables for the feature toggle tests to Jenkins through a `Jenkinsfile`.

[![Enabling feature toggle in Jenkinsfile](https://www.paigeniedringhaus.com/static/2aa4b21d04f265f1c5152f0faf43d15f/1e043/feature-toggles-jenkinsfile.png "Enabling feature toggle in Jenkinsfile")](/static/2aa4b21d04f265f1c5152f0faf43d15f/8cdda/feature-toggles-jenkinsfile.png)

_All that’s needed is the simple line at the bottom: `env.MODIFY_DATES_TOGGLE=true`_

And that should be all you need to run your end-to-end tests — feature toggles on or off.

### Automating The Feature Toggles

As I said before, my team hosted our applications on Pivotal Cloud Foundry, and in PCF, we use things called VCAP services to hold our environment variables. Through manifest files we could bind these services to our applications and pass in environment variables — variables like whether feature toggles should be turned on or off. See where I’m going here?

The check for the feature toggle environment variable makes deploying the app in an automated fashion easier. By including a simple environment variable (or not) in the manifest that deploys with each different stage of development, it’s easy to tell the application to check for settings in the config server (and employ them if need be) or not.
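In sketch form, the per-environment difference can be as small as one `env` entry — this fragment follows Cloud Foundry manifest conventions, but the app and variable names are my own assumptions, not the real manifests:

```yaml
# Hypothetical QA manifest fragment (names assumed)
applications:
  - name: my-node-app
    env:
      SPRING_PROFILES_ACTIVE: qa
      FEATURE_TOGGLE: 'on'   # omit this line entirely in the production manifest
```

With `FEATURE_TOGGLE` absent, the server's environment-variable check simply never enables the toggled code path.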
[![QA manifest that includes feature toggles](https://www.paigeniedringhaus.com/static/1353543796a724400b25393482e96286/3ddad/qa-manifest.png "QA manifest that includes feature toggles")](/static/1353543796a724400b25393482e96286/3ddad/qa-manifest.png)

_QA manifest screenshot._

QA manifest: note the `SPRING_PROFILES_ACTIVE` pointing to `QA`, and the `FEATURE_TOGGLE` environment variable set to `‘on’`.

[![Production manifest that does not include feature toggles](https://www.paigeniedringhaus.com/static/e043e464058470192b22f20b6cbdea92/e9131/prod-manifest.png "Production manifest that does not include feature toggles")](/static/e043e464058470192b22f20b6cbdea92/e9131/prod-manifest.png)

_Production manifest screenshot._

Production manifest: no feature toggle in sight, and the `SPRING_PROFILES_ACTIVE` is set to `production`.

---

## Conclusion

There you have it. Now you have seen how you can leverage a Spring Cloud Config server you’ve built with your Node.js application. Not only that, you can also configure end-to-end tests to run with the feature toggle on or off as well.

If you’ve found other ways to implement feature toggles or other nifty things like this in your own JavaScript projects, I’d love to hear about them.

Check back in a few weeks — I’ll be writing more about JavaScript, React, IoT, or something else related to web development.

If you’d like to make sure you never miss an article I write, sign up for my newsletter here: https://paigeniedringhaus.substack.com

Thanks for reading!

---

## Further References & Resources

- [Wikipedia, Feature Toggle](https://en.wikipedia.org/wiki/Feature_toggle)
- [NodeCloud Config Client](https://www.npmjs.com/package/nodecloud-config-client)
- [Sample Cloud Config Properties Repo](https://github.com/paigen11/spring-cloud-config-properties-sample)
- [Protractor documentation around environment params](https://moduscreate.com/blog/protractor_parameters_adding_flexibility_automation_tests/)
paigen11
972,420
Leetcode diary: 259. 3Sum Smaller
This is a new series where I document my struggles of leetcode questions hoping seeing however small...
0
2022-01-30T04:00:32
https://dev.to/kevin074/leetcode-diary-259-3sum-smaller-2gl6
javascript, algorithms, devjournal, motivation
This is a new series where I document my struggles with leetcode questions, hoping that seeing however small of an audience I get gives me the motivation to continue.

[link](https://leetcode.com/problems/3sum-smaller/)

The leetcode gods have not been kind to me. I am far from worthy of their blessing ... the depression brought on by failing to pass the tests is weighing down on my soul ... oh god ~~

This question is hard ... I thought it would be fun to do a series of questions, but that again proves to be a crazy idea. Below is the best of my attempt:

```javascript
var threeSumSmaller = function(nums, target) {
    const sorted = nums.sort(function(a, b) { return a > b ? 1 : -1 });
    let midI, rightI;
    let midNum, rightNum;
    let sum;
    let answers = 0;

    sorted.forEach(function(leftNum, leftI) {
        rightI = sorted.length - 1;
        midI = rightI - 1;

        while (rightI - leftI > 1) {
            rightNum = sorted[rightI];
            midNum = sorted[midI];
            sum = leftNum + midNum + rightNum;

            while (sum >= target && leftI < midI) {
                midI--;
                midNum = sorted[midI];
                sum = leftNum + midNum + rightNum;
            }

            answers += midI - leftI;

            rightI--;
            midI = rightI - 1;
        }
    })

    return answers;
};
```

The idea is that instead of searching through every possible index, for each iteration of the left index I start at the end for the two other pointers. This way, when I move the "mid" pointer to the left and the sum becomes < target, I can just stop the search there.

Take for example: [1,2,3,4,5,6,7], target = 13

1+6+7 = 14
1+5+7 = 13
1+4+7 = 12

Note that since we found that the sum is smaller than the target at [1,4,7], it then means [1,2,7] and [1,3,7] must also be smaller than the target, so we can just stop the iteration there and move on to the next one.

However, the performance for this is poor; it is merely a slightly better solution than the brute force where you just straight up try the triple nested for loop.
Apparently there was a VERY similar answer: it is the two pointer approach in the solution. Below is my code above, modified to match it:

```javascript
var threeSumSmaller = function(nums, target) {
    const sorted = nums.sort(function(a, b) { return a > b ? 1 : -1 });
    let midI, rightI;
    let midNum, rightNum;
    let sum;
    let answers = 0;

    sorted.forEach(function(leftNum, leftI) {
        midI = leftI + 1;
        midNum = sorted[midI];

        rightI = sorted.length - 1;
        rightNum = sorted[rightI];

        while (midI < rightI) {
            rightNum = sorted[rightI];
            midNum = sorted[midI];
            sum = leftNum + midNum + rightNum;

            if (sum < target) {
                answers += rightI - midI;
                midI++;
            } else {
                rightI--;
            }
        }
    })

    return answers;
};
```

WELL I'LL BE DARNED!!! it is basically the exact fucking same thing except for some reason midI starts at leftI + 1 as we normally think it should.

The thing that bothered me is why is `answers += rightI - midI` in this case?

[1,2,3,4,5,6,7], target = 13

1+2+7 = 10, midI = 1, rightI = 6, answers += 5 = 5
1+3+7 = 11, midI = 2, rightI = 6, answers += 4 = 9
1+4+7 = 12, midI = 3, rightI = 6, answers += 3 = 12

ah okay, so when 1+2+7 is smaller than 13, it means also that:

1+2+6 < 13
1+2+5 < 13
1+2+4 < 13
1+2+3 < 13

So basically the same logic but backwards...so backwards of my backward logic ...nice...

However, I went back to modify my backwards solution just to see whether I could make it work. I could get a solution that calculates correctly, but still inefficiently. I believe the reason is that the correct solution shrinks the search in the while loop no matter what and does not need to revisit. On the other hand, my solution requires a "spring back to the end" action every time, which makes it do some unnecessary revisiting.

So the lesson here is that ... I don't have good intuition on questions that deal with sorted arrays, yeah I don't know how to word it at all even...fuck...

I was so close though! Let me know anything on your mind after reading through this, THANKS!
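To sanity-check the counting step (`answers += rightI - midI`), here is a condensed version of the same two-pointer algorithm run against LeetCode's sample input and the walkthrough array above:

```javascript
// Condensed two-pointer version of the accepted solution above.
function threeSumSmaller(nums, target) {
  const sorted = [...nums].sort((a, b) => a - b);
  let count = 0;
  for (let left = 0; left < sorted.length - 2; left++) {
    let mid = left + 1;
    let right = sorted.length - 1;
    while (mid < right) {
      if (sorted[left] + sorted[mid] + sorted[right] < target) {
        // Every index strictly between mid and right (inclusive of right)
        // also forms a valid triplet with left and mid.
        count += right - mid;
        mid++;
      } else {
        right--;
      }
    }
  }
  return count;
}

console.log(threeSumSmaller([-2, 0, 1, 3], 2)); // LeetCode's sample: 2
console.log(threeSumSmaller([1, 2, 3, 4, 5, 6, 7], 13)); // 20
```

Because each inner-loop iteration moves either `mid` right or `right` left and never springs back, the per-`left` scan is linear, giving O(n²) overall after the sort.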
kevin074
973,516
awesome npm packages for data validation and parsing(user login validation)
in this post we covering three of the best npm packages for data validation and schema building for...
0
2022-01-31T09:18:33
https://dev.to/alguercode/awesome-npm-packages-for-data-validation-and-parsinguser-login-validation-3ai4
beginners, productivity, javascript, webdev
in this post we are covering three of the best npm packages for data validation and schema building for the JavaScript programming language. before you start reading, give a follow.

## 1. [Yup](https://github.com/jquense/yup)

Yup is a JavaScript schema builder for value parsing and validation. Define a schema, transform a value to match, validate the shape of an existing value, or both. Yup schemas are extremely expressive and allow modeling complex, interdependent validations, or value transformations.

Yup's API is heavily inspired by Joi, but leaner and built with client-side validation as its primary use-case. Yup separates the parsing and validating functions into separate steps: `cast()` transforms data while `validate` checks that the input is the correct shape. Each can be performed together (such as HTML form validation) or separately (such as deserializing trusted data from APIs).

## 2. [volder](https://github.com/devSupporters/volder)

**volder** is a powerful object schema validator; it lets you describe your data using a simple and readable schema and transform a value to match the requirements. It has custom error messages, custom types and nested schemas.

## 3. [joi](https://github.com/sideway/joi)

The most powerful schema description language and data validator for JavaScript.

### don't go without starring those repos
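As a rough illustration of the user login validation these packages encapsulate, here is a hand-rolled sketch — the rules and function below are my own, not any package's API:

```javascript
// Hand-rolled sketch of the login checks that schema libraries like
// Yup, volder and joi let you declare in a single schema.
// The rules (email format, min 8 chars) are example assumptions.
function validateLogin({ email, password } = {}) {
  const errors = [];
  if (!email || !/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email)) {
    errors.push('email must be a valid email address');
  }
  if (!password || password.length < 8) {
    errors.push('password must be at least 8 characters');
  }
  return { valid: errors.length === 0, errors };
}

console.log(validateLogin({ email: 'dev@example.com', password: 'supersecret' }));
console.log(validateLogin({ email: 'not-an-email', password: 'short' }));
```

With Yup, for example, the same idea reads declaratively — roughly `yup.object({ email: yup.string().email().required(), password: yup.string().min(8).required() })` (check the package docs for the exact API).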
alguercode
975,117
Paracetamol.js💊| #52: Explain this JavaScript code
Explain this JavaScript code const info = { [Symbol('a')]:...
16,071
2022-02-12T16:26:26
https://dev.to/duxtech/paracetamoljs-52-explica-este-codigo-javascript-kme
javascript, webdev, spanish, beginners
## **Explain this JavaScript code**

```js
const info = {
  [Symbol('a')]: 'b'
}

console.log(info)
console.log(Object.keys(info))
```

- A: `{Symbol('a'): 'b'}` and `["{Symbol('a')"]`
- B: `{}` and `[]`
- C: `{ a: "b" }` and `["a"]`
- D: `{Symbol('a'): 'b'}` and `[]`

Answer in the first comment.

---
duxtech
1,220,997
Check
Just a check
0
2022-10-16T05:41:50
https://dev.to/urto/provierka-1fh9
Just a check
urto
977,232
Why Use Crypto Wallets?
How individuals and Businesses benefit from using crypto-wallets. Due to the numerous advantages...
0
2022-02-03T11:30:57
https://dev.to/bitpowr/why-use-crypto-wallets-216k
**How individuals and businesses benefit from using crypto wallets.**

Due to their numerous advantages over traditional fiat currencies, cryptocurrencies such as Bitcoin and Ethereum are becoming increasingly popular. You’ll need to grasp how crypto wallets function if you wish to use any of the coins. This article covers what a cryptocurrency wallet is, why you would want to use one, how other people use their wallets, and the many types of cryptocurrency wallets.

**Why Use a Crypto Wallet?**

Are you still wondering why you should use a wallet? Well, a study from Juniper Research found that the number of people using digital wallets will increase from 2.3 billion to nearly 4 billion, or 50% of the world’s population, by 2024. This would result in an increase of more than 80% in wallet transaction values, to more than $9 trillion per year.

Also, traditional banking systems have a number of flaws that make it difficult to complete any transaction. For starters, transactions are frequently slow because they must go through intermediaries, such as banks, implying that there is a single point of failure. Data can be jeopardized, altered, or even corrupted across the numerous systems where accounts and balances are maintained. These issues are reduced or eliminated with crypto wallets.

## What is a Crypto Wallet?

A cryptocurrency wallet is a device, program or physical medium which stores the public and private keys for cryptocurrency transactions. Crypto wallet programs are available for mobile phones and desktops, and even as pieces of hardware. These wallets can be compared to how emails work; read our previous article to understand better.

## How different Individuals use Crypto Wallets

**AJ:** Hey! For me a crypto wallet is a digital bank. I use Trust Wallet and Ledger. Trust Wallet is online and used and trusted by millions of people. I also use Ledger, it's offline…feels a lot safer.
Knowing that I can stuff my crypto away and I control my own keys has been most beneficial.

**Vin:** Crypto is a store of digital assets; I mostly use software wallets. They are profit on investment for me.

**Marc:** I use decentralized exchange wallets like Trust Wallet, and centralized exchange wallets (more exchanges than wallets, although they somewhat serve a similar purpose): Binance, Gate, KuCoin, and most recently FTX. The benefit to me is that they hold my assets, more or less. I also have portfolio managers like CoinMarketCap and CoinGecko to help track and manage my assets across wallets.

**Tobi:** A crypto wallet is digital storage that stores my cryptocurrencies. I use Argent for Ethereum assets (it has the best wallet infrastructure), blockchain.com (for BTC, BCH), and Binance for exchange. Wallets help me move money easily.

From all the above mentions, it is safe to say you have more to gain when you use crypto wallets to keep your assets.

## How do different Businesses use Crypto Wallets?

**Collecting NFTs:** All NFT-focused businesses one way or the other use wallets like MetaMask to store the information regarding the location of your NFT assets on the blockchain. A wallet is an essential tool if you want to start collecting NFTs as it provides private keys, or passwords, that allow a holder to access funds and assets stored on the blockchain.

**Building multicurrency wallets:** Wallet infrastructures such as Bitpowr could make building multicurrency crypto wallets like Coinbase Wallet and Exodus much faster and easier. A deposit address, blockchain data, exchange rates, and transaction broadcasting are all that are required of a multicurrency crypto wallet. With one integration, you get them all.

**Retail platforms — payment processing:** Retail platforms can easily generate multiple wallets (hot, cold, HD, multisig) and addresses for customers' use.
Trading firms like hedge funds and asset managers use crypto wallets as secure cross-exchange trading infrastructure to trade capital across different exchanges.

**Crypto exchange platforms:** In a matter of days, you can launch scalable crypto exchanges using infrastructures like Bitpowr’s while maintaining security. You can create transactions and deposit addresses, and receive notifications for incoming transactions (withdrawals).

Thinking of wallet services to use? Check out some from Bitpowr.

**1. Crypto Wallet**

These are bare wallets, like non-custodial wallets, that are not connected to a private ledger account. Bitpowr has limited control over these wallets. The end-users (mostly developers) will handle every operation by themselves using the API. Funds in crypto wallets are not managed by Bitpowr and can only be moved by the end-users (developers).

**2. Exchange Wallet**

These are wallets that are connected to a Bitpowr private ledger account for instant internal settlements, and connected to external fiat/crypto exchanges for FX. The developers have minimal control over moving funds from the addresses in these wallets, as they are controlled by Bitpowr for easy fiat exchange and to save on tx fees. You can easily move funds between exchange wallets on the Bitpowr network instantly, with no tx fee.

**3. Savings Wallet**

These are types of wallets that are generated using the crypto wallet, connected to a private ledger account, and created as contract addresses on supported chains. All ETH, TRON and CELO addresses are generated and deployed as a contract to each blockchain.

## What can you do with bitpowr wallet services?

**Easily create, manage and secure your crypto wallets:** You can manage multiple wallets and currencies from a single interface. Bitpowr Wallet allows you to easily manage all your accounts and cryptocurrencies from an intuitive interface.
**Create multi-asset wallets:** Assets that contain popular cryptocurrencies, such as Bitcoin or Ethereum, and wallets for each team or department in your organization.

**User and wallet-based transaction policies:** Define specific transaction policies for cryptocurrency accounts. Add different users to each account and define their roles, permissions, and limits.

**Assign chained and threshold approval processes:** Design an approval chain specific to each wallet or account. In a real corporate environment, multiple approvers do not share the same level of permission. With Bitpowr Wallet, you can combine multi-level approval chains with M-of-N approvals and threshold levels, adapting the approval process to the needs of a company.

**Assign roles & permissions:** Assign roles to different users for each wallet, such as wallet operator, approver or auditor.

**Transaction limits:** Establish transaction limits for each wallet and user, including maximum transfer amount and maximum operations per day.

**Schedule withdrawals:** Schedule periodic withdrawals from a wallet for daily, weekly or monthly transfers.

**Monitor balance to trigger payment automatically:** Configure automatic transfers when a certain balance is reached.

**Conclusively**

The future wallet will serve as a gateway to protocols and services, as well as a representation of our professional and personal financial standing. Wallets will evolve from being something we use occasionally to something we use all of the time. Our digital wallets will become the single most significant storage location for everything from our money to our identities.
bitpowr
978,964
StencilJS with Storybook
StencilJS is such a great tool for creating web components, and Storybook is great for creating...
0
2022-02-04T18:33:00
https://dev.to/jfgmdev/stenciljs-with-storybook-3027
storybook, stenciljs, webcomponents, designsystem
StencilJS is such a great tool for creating web components, and Storybook is great for creating design systems, but integrating these two tools doesn't feel very natural because there's no single right way to do it.

After a lot of research, I will show you a simple way to carry out this integration and not die trying 😁

## Create a Stencil Project

```sh
npm init stencil
```

This will show you some questions. Answer this way:

```sh
✔ Pick a starter > component
✔ Project name > storybook-wc-stencil
```

After that, you will have a stencil project with a basic example component inside.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gmb0w65718bk7mb1dj52.png)

## Install dependencies

```sh
cd storybook-wc-stencil
yarn install
```

## Ignore node_modules code on check

Add the _skipLibCheck_ property to exclude node_modules code

_**tsconfig.json**_

```json
{
  "compilerOptions": {
    ...
    "skipLibCheck": true,
  },
  ...
```

## Create a typing file for tsx imports

Our linter could give us problems when trying to import md files; we can also use this file for other types of extensions.
_**src/typings.d.ts**_

```ts
declare module '*.jpg';

declare module '*.md' {
  const value: string; // markdown is just a string
  export default value;
}

declare module '*.css' {
  const content: { [className: string]: string };
  export default content;
}
```

## Add storybook

```sh
npx -p @storybook/cli sb init --type html
```

This will generate the storybook project

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6wg8fjux26it11iuj4oc.png)

## Add notes addon

```sh
yarn add -D @storybook/addon-notes
```

_**.storybook/main.js**_

```js
module.exports = {
  stories: ['../src/**/*.stories.mdx', '../src/**/*.stories.@(js|jsx|ts|tsx)'],
  addons: [
    '@storybook/addon-links',
    '@storybook/addon-essentials',
    '@storybook/addon-notes',
  ],
  framework: '@storybook/html',
};
```

## Configuration to load Stencil components on Storybook

_**.storybook/preview.js**_

```js
import { defineCustomElements } from '../dist/esm/loader';

defineCustomElements();

export const parameters = {
  actions: { argTypesRegex: '^on[A-Z].*' },
  controls: {
    matchers: {
      color: /(background|color)$/i,
      date: /Date$/,
    },
  },
};
```

## Clean stories

Let's remove all content from the stories directory

## Project structure

Create a new file called _**my-component.stories.tsx**_ inside the _**src/stories**_ directory

Your project structure should look like this

```
./storybook-wc-stencil/
|
|---- .storybook/
|     |---- main.js
|     |---- preview.js
|---- src/
|     |---- components/
|     |     |---- my-component/
|     |           |---- my-component.css
|     |           |---- my-component.e2e.ts
|     |           |---- my-component.spec.ts
|     |           |---- my-component.tsx
|     |           |---- readme.md
|     |---- stories/
|     |     |---- components/
|     |           |---- my-component.stories.tsx
|     |---- typings.d.ts
|---- .editorconfig
|---- .gitignore
|---- .prettierrc.json
|---- LICENSE
|---- package.json
|---- readme.md
|---- stencil.config.ts
|---- tsconfig.json
|---- yarn.lock
```

## Generate components

We can use the next command to automatically generate our components in our _components_
directory

```sh
yarn generate component-name
```

## Run Project

To have hot reload, we must execute these two commands in parallel, so we can use two terminals or create a new script

```sh
yarn build -- --watch
```

```sh
yarn storybook
```

The first command will generate the build of our components; we use the `--watch` flag to regenerate this build on any change

## Story code structure

We are going to face some disadvantages when working with Storybook and Stencil:

- We need to define properties that we want to use in controls
- We need to define default props for controls
- We need to add description and prop types for Docs pages
- The defaultValue property is not working for Doc pages
- We need to pass args values on the template

```ts
// This md file is generated by stencil, and we are going to use it as a note page
import notes from '../../components/my-component/readme.md';

export default {
  title: 'UI/My Component',
  args: {
    // Here we define default values that we want to show on controls
    // Also, only props defined here are going to be shown
    first: 'Juan Fernando',
    middle: 'Gómez',
    last: 'Maldonado',
  },
  argTypes: {
    // Here we can add description and prop value type
    first: {
      description: 'First name',
      // First way to define type
      table: {
        type: {
          summary: 'string',
        },
      },
    },
    middle: {
      // Second and shorter way to define type
      type: {
        summary: 'string',
      },
    },
    last: {
      // We can disable the property
      // This will hide it in controls and Doc page
      table: {
        disable: true,
      },
    },
  },
  parameters: {
    // This will create a note page for our story component
    notes,
  },
};

const Template = args =>
  `<my-component first="${args.first}" middle="${args.middle}" last="${args.last}"></my-component>`;

export const Basic = Template.bind({});

export const Another = Template.bind({});
Another.args = {
  first: 'John',
};
```

### Cleaner way for adding values and description on stories

To avoid boilerplate code, I created a simple library that returns args, argTypes and a custom template
for our component. This library is [story-wc-generator](https://www.npmjs.com/package/story-wc-generator).

```sh
yarn add story-wc-generator
```

```ts
import notes from '../../components/cool-button/readme.md';
import storyGenerator from 'story-wc-generator';

const { args, argTypes, Template } = storyGenerator('cool-button', {
  text: { value: 'Click me!', description: 'Text label', type: 'string' },
  color: {
    value: 'primary',
    description: 'Color of button',
    control: 'select',
    options: ['primary', 'secondary', 'dark'],
    type: 'primary | secondary | dark',
  },
});

export default {
  title: 'UI/Cool Button',
  args,
  argTypes,
  parameters: {
    notes,
  },
};

export const Primary = Template.bind({});

export const Secondary = Template.bind({});
Secondary.args = {
  color: 'secondary',
};

...
```

This example includes all the properties that can be used, but you can go to the [documentation](https://www.npmjs.com/package/story-wc-generator) for more about the library.

After creating our components and stories, we should have a result like this

![Controls](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ddpyhdeysiis2dcw45mp.png)

![Docs](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5lxagsk0yrxiqvrb0599.png)

![Notes](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dec620p8aao1q7722dev.png)

You can see the repository [here](https://github.com/jf-gm-dev/storybook-wc-stencil) and the live project [here](https://jf-gm-dev.github.io/storybook-wc-stencil/?path=/story/introduction--page)
jfgmdev
990,509
Make a React-Auth form using Bootstrap in few simple steps!
In this post, we are going to make an authentication form in react that can toggle between login and...
0
2022-02-16T17:11:04
https://dev.to/anshnarula5/make-a-react-auth-form-using-bootstrap-in-few-simple-steps-2io5
javascript, react, bootstrap, webdev
In this post, we are going to make an authentication form in react that can toggle between login and register tabs.

This is what we are going to build today:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/67jujirbwpbtbyoz2dox.jpg)

We will not use any libraries or external tools for creating the form, and we will make authentication forms really easy.

**Step 1 : Create a react project and run it by using the following commands.**

```
npx create-react-app auth
```

Then open up the newly created project folder in your favorite editor; it should look like this.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0jho8w74so4qm5qj9o9n.jpg)

```
npm start
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mmili0km9ofq77piym44.png)

**Step 2 : Now create a file and add that component to the App.js file.**

Your new file should look like this. (I named this file Auth.js)

```javascript
import React from 'react'

const Auth = () => {
  return (
    <div>Auth</div>
  )
}

export default Auth
```

Add this component to App.js

```javascript
import './App.css';
import Auth from './Auth';

function App() {
  return (
    <div className="App">
      <Auth />
    </div>
  );
}

export default App;
```

**Step 3 : Add React-Bootstrap to your project using the following command**

```
npm install react-bootstrap bootstrap@5.1.3
```

and now include the following line in your src/index.js or App.js file.

```javascript
import 'bootstrap/dist/css/bootstrap.min.css';
```

**Step 4 : Create form.**

Let us now begin creating the form.

* Import the following into your Auth.js file. We are going to wrap our form inside a card, and to center the card we are going to put the card inside a row and column using the grid system.
```javascript
import { Card, Col, Row, Form } from "react-bootstrap";
```

Now add Row, Col and Card in the following way:

```javascript
<Row className="justify-content-center">
  <Col xs={10} md={4}>
    <Card className="my-5 px-5 py-3">
      <h1 className="m-3 text-center">Sign up</h1>
    </Card>
  </Col>
</Row>
```

Now you can see this in your browser.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2csx2onaj8g8t2mwj888.png)

(*p.s. I added `background-color: #7c8baa;` and `min-height: 100vh;` in App.css to App*)

* Now let us add formData state.

```javascript
const [formData, setFormData] = useState({
  email: "",
  name: "",
  password: "",
  password2: ""
});

const { email, name, password, password2 } = formData;
```

(*We are destructuring each field so that we can use these inside our input fields*)

* Now, we can create a basic form that shows all the fields.

```javascript
<Form.Group controlId="email" className="my-2">
  <Form.Label>Username</Form.Label>
  <Form.Control type="text" placeholder="enter name" name="name" value={name} />
</Form.Group>
<Form.Group className="my-2">
  <Form.Label>Email Address</Form.Label>
  <Form.Control type="email" placeholder="enter email" value={email} name="email" />
</Form.Group>
<Form.Group className="my-2">
  <Form.Label>Password</Form.Label>
  <Form.Control type="password" placeholder="enter password" value={password} name="password" />
</Form.Group>
<Form.Group className="my-2">
  <Form.Label>Confirm Password</Form.Label>
  <Form.Control type="password" placeholder="enter password again" value={password2} name="password2" />
</Form.Group>
```

This should display something like this:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x1nytr13fn8elde3e4vl.png)

* Add toggle logic

Since in the login form we need to display only the email and password fields, and for register we are displaying all the fields, we can use the following logic to dynamically toggle between the login and register form.
```javascript const [isLogin, setIsLogin] = useState(true); ``` We change the username and confirm password fields as follows: ```javascript {!isLogin && ( <Form.Group className="my-2"> <Form.Label>Username</Form.Label> <Form.Control type="text" placeholder="enter name" name="name" value={name} /> </Form.Group> )} <Form.Group className="my-2"> <Form.Label>Email Address</Form.Label> <Form.Control type="email" placeholder="enter email" value={email} name="email" /> </Form.Group> <Form.Group className="my-2"> <Form.Label>Password</Form.Label> <Form.Control type="password" placeholder="enter password" value={password} name="password" /> </Form.Group> {!isLogin && ( <Form.Group className="my-2"> <Form.Label>Confirm Password</Form.Label> <Form.Control type="password" placeholder="enter password again" value={password2} name="password2" /> </Form.Group> )} ``` Also, we need to add an onChange handler to each input field. We define a function named handleChange and trigger it whenever an input field changes: ```javascript const handleChange = (e) => { setFormData({ ...formData, [e.target.name]: e.target.value }) } ``` * Now, to toggle between the login and register tabs, we make a function named handleToggle, which is called whenever we click the toggle button. Also, when we toggle, we want to clear the input fields. ```javascript const handleToggle = () => { setIsLogin(prev => !prev) setFormData({ email: "", name: "", password: "", password2: "" }); } ``` Buttons: ```javascript <div className="mt-3 text-center"> <p> {isLogin ? "Don't" : "Already"} have an account ?{" "} <Button size="sm" variant="outline-primary" onClick={handleToggle}> Sign {isLogin ? "Up" : "In"} </Button> </p> <Button className="btn btn-block">Sign {isLogin ? 
"In" : "Up"}</Button> </div> ``` **Final Code :** ```javascript import React, { useState } from "react"; import { Button, Card, Col, Form, Row } from "react-bootstrap"; const Auth = () => { const [formData, setFormData] = useState({ email: "", name: "", password: "", password2: "", }); const { email, name, password, password2 } = formData; const [isLogin, setIsLogin] = useState(true); const handleToggle = () => { setIsLogin((prev) => !prev); }; return ( <Row className="justify-content-center"> <Col xs={10} md={4}> <Card className="my-5 px-5 py-3"> <h1 className="m-3 text-center">Sign {isLogin ? "In" : "Up"}</h1> {!isLogin && ( <Form.Group className="my-2"> <Form.Label>Username</Form.Label> <Form.Control type="text" placeholder="enter name" name="name" value={name} onChange = {handleChange} /> </Form.Group> )} <Form.Group className="my-2"> <Form.Label>Email Address</Form.Label> <Form.Control type="email" placeholder="enter email" value={email} name="email" onChange = {handleChange} /> </Form.Group> <Form.Group className="my-2"> <Form.Label>Password</Form.Label> <Form.Control type="password" placeholder="enter password" value={password} name="password" onChange = {handleChange} /> </Form.Group> {!isLogin && ( <Form.Group className="my-2"> <Form.Label>Confirm Password</Form.Label> <Form.Control type="password" placeholder="enter password again" value={password2} name="password2" onChange = {handleChange} /> </Form.Group> )} <div className="mt-3 text-center"> <p> {isLogin ? "Don't" : "Already"} have an account ? {" "} <Button size="sm" variant="outline-primary" onClick={handleToggle} > Sign {isLogin ? "Up" : "In"} </Button> </p> <Button className="btn btn-block"> Sign {isLogin ? "In" : "Up"} </Button> </div> </Card> </Col> </Row> ); }; export default Auth; ``` **Final Result** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2n5q8df3nf2ia4qiagmm.gif) Thank you for reading this article and happy coding 🚀
anshnarula5
990,694
Making 3D CSS Flippable Cards
A while back, I wrote an article on 3d interactive CSS buttons. Using a similar technique, I decided...
0
2022-02-15T23:03:42
https://fjolt.com/article/css-3d-interactive-flippable-cards
css, tutorial, webdev, codepen
A while back, [I wrote an article on 3d interactive CSS buttons](https://fjolt.com/article/css-3d-interactive-flippable-cards). Using a similar technique, I decided to design some 3d interactive (and flippable) CSS user cards. These also work great for lots of different things - for example, a bank card UI, a playing card UI, or just a teams page. The demo can be seen below! [The full code, as always, is Available on CodePen. ](https://codepen.io/smpnjn/pen/qBVPvpZ) ## 3d flippable cards with CSS and Javascript Hover over the cards (or tap anywhere on the card on mobile) below to see the effect in full swing. {% codepen https://codepen.io/smpnjn/pen/qBVPvpZ %} To achieve this effect, we have to combine a few different things in both Javascript and CSS: 1. **First**, we need to create a function which lets us manipulate the angle of the card based on mouse position. 2. **Next**, we need to use that function to figure out the position to add a 'glare' light effect on top of the card. 3. **Then**, we need to add a lot of CSS to create a backface and a front face for the card. 4. **Finally**, we need to add a few functions in our Javascript to allow us to 'flip' the card. ## Creating the HTML Let's start with the HTML. Here's what it looks like for our first card. Each card has two main parts - `inner-card`, and `inner-card-backface`. The first contains the front of the card, and the second, the back. We also have two buttons - flip, and unflip, to change which side of the card is visible. 
```html <div class="card blastoise"> <span class="inner-card-backface"> <!-- back of the card --> <span class="image"> <span class="unflip">Unflip</span> </span> </span> <span class="inner-card"> <!-- front of the card --> <span class="flip">Flip</span> <span class="glare"></span> <!-- to store the glare effect --> </span> </div> ``` ## Creating the JS Our JS does one fundamental thing - and that is to figure out the user's position on the card, and translate that into an angle which we pass to our CSS, to change how we view the card. To do that, we need to understand how far from the center of the card the user is. We only really have two axis to worry about - and when the user reaches the top or the bottom of either, we can rotate the card relative to the center, as shown in the image below. ![How 3d rotations work on cards in Javascript](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ysllvtifowlyrpvqriwm.png) ## How the Javascript works for 3d flippable CSS cards Ultimately, to do that we write a function which accepts the 'card' element, and updates its CSS accordingly: ```javascript let calculateAngle = function(e, item, parent) { let dropShadowColor = `rgba(0, 0, 0, 0.3)` if(parent.getAttribute('data-filter-color') !== null) { dropShadowColor = parent.getAttribute('data-filter-color'); } parent.classList.add('animated'); // Get the x position of the users mouse, relative to the button itself let x = Math.abs(item.getBoundingClientRect().x - e.clientX); // Get the y position relative to the button let y = Math.abs(item.getBoundingClientRect().y - e.clientY); // Calculate half the width and height let halfWidth = item.getBoundingClientRect().width / 2; let halfHeight = item.getBoundingClientRect().height / 2; // Use this to create an angle. I have divided by 6 and 4 respectively so the effect looks good. // Changing these numbers will change the depth of the effect. 
let calcAngleX = (x - halfWidth) / 6; let calcAngleY = (y - halfHeight) / 14; let gX = (1 - (x / (halfWidth * 2))) * 100; let gY = (1 - (y / (halfHeight * 2))) * 100; // Add the glare at the reflection of where the user's mouse is hovering item.querySelector('.glare').style.background = `radial-gradient(circle at ${gX}% ${gY}%, rgb(199 198 243), transparent)`; // And set its container's perspective. parent.style.perspective = `${halfWidth * 6}px` item.style.perspective = `${halfWidth * 6}px` // Set the items transform CSS property item.style.transform = `rotateY(${calcAngleX}deg) rotateX(${-calcAngleY}deg) scale(1.04)`; parent.querySelector('.inner-card-backface').style.transform = `rotateY(${calcAngleX}deg) rotateX(${-calcAngleY}deg) scale(1.04) translateZ(-4px)`; if(parent.getAttribute('data-custom-perspective') !== null) { parent.style.perspective = `${parent.getAttribute('data-custom-perspective')}` } // Reapply this to the shadow, with different dividers let calcShadowX = (x - halfWidth) / 3; let calcShadowY = (y - halfHeight) / 6; // Add a filter shadow - this is more performant to animate than a regular box shadow. item.style.filter = `drop-shadow(${-calcShadowX}px ${-calcShadowY}px 15px ${dropShadowColor})`; } ``` **This function does 4 things:** - Calculates the shadow of the element, so that it appears to be moving in 3d space. - Calculates the angle the card should be at, based on the mouse position. - Calculates the position of the backface, so it moves in tandem with the front of the card. - Calculates the position of the glare, which is at the reflection of where the user's mouse is. All we have to do now is add this function to each of our mouse movement events, and then reset everything when the user's mouse leaves the element. 
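The geometry inside `calculateAngle` can be checked on its own, without the DOM. Here is a small sketch of the same math (divisors taken from the code above) operating on a plain rectangle object instead of `getBoundingClientRect()`:

```javascript
// Pure version of the angle/glare math from calculateAngle().
// rect is { x, y, width, height }; (clientX, clientY) is the pointer.
function cardAngles(rect, clientX, clientY) {
  const x = Math.abs(rect.x - clientX);
  const y = Math.abs(rect.y - clientY);
  const halfWidth = rect.width / 2;
  const halfHeight = rect.height / 2;
  return {
    // Same divisors as the article's code: 6 for the X tilt, 14 for the Y tilt.
    angleX: (x - halfWidth) / 6,
    angleY: (y - halfHeight) / 14,
    // Glare position as a percentage across the card.
    glareX: (1 - x / (halfWidth * 2)) * 100,
    glareY: (1 - y / (halfHeight * 2)) * 100,
  };
}

// Pointer dead-center on a 300x400 card whose top-left corner is at the origin:
const center = cardAngles({ x: 0, y: 0, width: 300, height: 400 }, 150, 200);
console.log(center.angleX, center.angleY); // 0 0   (no tilt at the center)
console.log(center.glareX, center.glareY); // 50 50 (glare in the middle)
```

At the center the offsets cancel out, and as the pointer moves toward an edge the tilt grows linearly, which is exactly the behavior you see when hovering the demo cards.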
We'll also add in a few functions for 'flipping' and 'unflipping' the card: ```javascript document.querySelectorAll('.card').forEach(function(item) { // For flipping the card backwards and forwards if(item.querySelector('.flip') !== null) { item.querySelector('.flip').addEventListener('click', function() { item.classList.add('flipped'); }); } // For 'unflipping' the card. if(item.querySelector('.unflip') !== null) { item.querySelector('.unflip').addEventListener('click', function() { item.classList.remove('flipped'); }); } // For when the user's mouse 'enters' the card item.addEventListener('mouseenter', function(e) { calculateAngle(e, this.querySelector('.inner-card'), this); }); // For when the users mouse moves on top of the card item.addEventListener('mousemove', function(e) { calculateAngle(e, this.querySelector('.inner-card'), this); }); // For when the user's mouse leaves the card. item.addEventListener('mouseleave', function(e) { let dropShadowColor = `rgba(0, 0, 0, 0.3)` if(item.getAttribute('data-filter-color') !== null) { dropShadowColor = item.getAttribute('data-filter-color') } item.classList.remove('animated'); item.querySelector('.inner-card').style.transform = `rotateY(0deg) rotateX(0deg) scale(1)`; item.querySelector('.inner-card-backface').style.transform = `rotateY(0deg) rotateX(0deg) scale(1.01) translateZ(-4px)`; item.querySelector('.inner-card').style.filter = `drop-shadow(0 10px 15px ${dropShadowColor})`; }); }); ``` You might notice that the mouse events are for the card, but the transformations mainly happen on .inner-card. That's because if the angle of .card changes, the 'hover box' will change. If that happened, a user may be hovering over the card, but the angle would change so much that they wouldn't anymore, making the effect seem broken. By adding the hover effects to the card, we maintain a constant hover box, while still allowing us to transform the .inner-card within this fixed box. ## Adding the CSS Finally, we can add the CSS. 
The fundamental thing here is that we have a card container .card which contains the card we transform - .inner-card. Another benefit of doing things this way is that when a user clicks 'flip', we can flip `.card` itself, as we maintain a parent and child element. That means we can continue to transform the `.inner-card,` and flip the .card at the same time, producing a more seamless effect. On `.inner-card-backface`, we add the line `transform: rotateX(0) rotateY(0deg) scale(1) translateZ(-4px);` to move it back by 4 pixels. That creates a cool 3d depth effect, as well as making sure the front and backfaces do not collide as the user hovers. We also add `backface-visibility: visible;` to our .card so both our back and front faces are interactable. Finally, since we flip our entire card using the .flipped class, we need to 'unflip' the content on the back of the card. If we don't do that, the text on the back will appear back to front! So we have a class called `.flip-inner-card` which simply lets us flip the backface of the card, so the text is no longer back to front. 
```css .card { box-shadow: none; backface-visibility: visible; background: transparent; font-family: Inter,-apple-system,BlinkMacSystemFont,Segoe UI,Roboto,Oxygen,Ubuntu,Cantarell,Open Sans,Helvetica Neue,sans-serif; transform-style: preserve-3d; padding: 0; margin: 0 2rem 0 0; width: 18rem; height: 25rem; float: left; transition: all 0.2s ease-out; border: none; letter-spacing: 1px; } .flip, .unflip { background: rgba(0,0,0,0.1); font-size: 1rem; position: absolute; top: 1rem; right: 1rem; padding: 0.5rem 0.75rem; border-radius: 100px; line-height: 1rem; cursor: pointer; transition: all 0.1s ease-out; } .unflip { top: auto; background: #2d2d62; bottom: 1rem; } .flip:hover { background: rgba(0,0,0,0.3); } .card .flip-inner-card { transform: rotateY(180deg); position: absolute; top: 0; padding: 2rem 1.5rem; box-sizing: border-box; left: 0; width: 100%; height: 100%; } .inner-card-backface { transform: rotateX(0) rotateY(0deg) scale(1) translateZ(-4px); border-radius: 14px; background: linear-gradient(45deg, #0b0b2a, #0b0b2a); position: absolute; top: 0; color: white; padding: 2rem; box-sizing: border-box; transition: all 0.15s ease-out; will-change: transform, filter; left: 0; width: 100%; height: 100%; } .card.flipped { transform: rotateY(180deg); } ``` ## Conclusion In this tutorial, we've covered how to make a 3d CSS flippable card. We've talked about the function required to figure out the angle to display as the user hovers over it, as well as the CSS required to make a 3d card like this. 
I hope you've enjoyed - feel free to use on any of your personal projects, and here are some useful links: - [All the code can be found on Codepen](https://codepen.io/smpnjn/pen/qBVPvpZ) - [Here is a similar effect, with buttons instead of cards](https://fjolt.com/article/css-3d-interactive-flippable-cards)
smpnjn
990,894
HI EveryOne
I am Djiga and am very happy to be in the community
0
2022-02-16T05:07:20
https://dev.to/salane_djiga_kalanji/hi-everyone-4lfn
I am Djiga and am very happy to be in the community
salane_djiga_kalanji
990,914
UI/UX DESIGN FOR BEGINNERS
Starting a journey in UI/UX for any newbie could be a bit challenging due to the fact that it is a...
0
2022-02-16T06:14:04
https://dev.to/abibataderogba/uiux-design-for-beginners-gco
beginners, webdev, productivity, ux
Starting a journey in UI/UX can be a bit challenging for any newbie because it is a novel skill. Do not worry. Here are some useful notes that will help you and, of course, prepare you well for this journey. I would not have prepared you well for this journey if I did not first bring to bear what UI/UX connotes. UI/UX means User Interface/User Experience. It is a skill that involves putting components, elements, text and other features together to define, ideate and create a typical web or mobile design. One can think of the user interface as the beauty of the design, while the user experience can be thought of as the functionality and usability of the design. Any application design you see has gone through a design thinking process that includes: • Empathy • Ideation • Prototype • Test Empathy: This is the ability of the designer to be emotional about what the user needs. The designer should be able to feel and know the needs of users. Ideation: The UI/UX designer thinks about how the design should flow and what problem to solve. Prototype: This is a sample of the whole application design, usually drafted to see and test how it would work in real-life use. It can also be sent to the user for a review. The user's review is very critical, as UI/UX is not merely about colors and components but is greatly dependent on putting the user first and thinking about what the user needs. The satisfaction derived by the user is also determined by where and how each icon and feature is placed. Test: This is the stage in which the design is checked and modeled for its efficiency, using design tools like Figma or Adobe XD. At this stage, the design goes through three rounds: an internal test within your company; reviews with stakeholders; and an external test with potential users. (Note: a stakeholder is a person you need to work with to complete the project, or anyone who has some interest in the project.) User experience should follow some basic principles, which are: • Usable • Equitable • Enjoyable • Useful Usable: As a UX designer, you must build a design that is usable by the users. It should have all the necessary features such that it is able to fulfill the user's needs. Equitable: The designer makes sure the whole app is equally easy to use on both mobile and desktop. Enjoyable: The design should be one the user can enjoy during use; it should not be a complicated or difficult-to-use app. Useful: Designs created by the UX designer should not be dormant ones that users cannot use. New technologies, as well as mockup tool kits for Figma, come up virtually every day, so it is imperative to stay informed and updated about them. Image credit: uxdesign.cc
abibataderogba
990,921
Benefits of using Strapi with Typescript + Storybook tandem
We are working on react components integrated with strapi and storybook. Both for me - a novice...
0
2022-02-16T14:31:08
https://dev.to/claudiakwj/benefits-of-using-strapi-with-typescript-storybook-tandem-ccm
storybook, strapi, typescript, cms
We are working on React components integrated with Strapi and Storybook. Both for me, a novice programmer, and for Wojtek, a programmer with many years of experience and extensive knowledge, it brings many benefits. Starting with Strapi, we can define the content type and decide what fields our components will need, e.g. whether a given component will contain a title, description or photo, and whether, for example, it must meet a condition to be displayed. In Strapi, we can manage the content that we want to include in a given component. We can also easily change it, so that a non-programmer can edit the content. All data that we input into Strapi can be obtained using the GraphQL API or REST API. On top of that, with the help of the graphql-codegen tool, we can have the GraphQL response typed in TypeScript. ![GraphQL query and response](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xc28o4zznkqf0wae1n5p.png) TypeScript shows us what data, and in what form, a component expects to receive. TypeScript is also responsible for maintaining the contract between Strapi responses and component props. ![Typed strapi response](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ksf95xj9rwxdsqu6xdjg.png) By passing these props to the components, we can display them by creating a story in Storybook. Storybook shows what a given component will look like and allows you to create different variants of it. We can create stories for different themes and multiple language versions, and, for example, show negative scenarios for when the user forgot to input some data or there was a backend issue, showcasing the display of appropriate error messages and handling those scenarios gracefully as well. ![Storybook example](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pnanofpikx7z87uo7xxt.gif) This combination of Strapi + Storybook + TypeScript allows a front-end developer to work without being strictly dependent on the backend, as long as the initial contract is maintained. 
Using Strapi by itself, which provides most CRUD operations out of the box, also significantly limits the amount of custom backend code that needs to be written, which in turn speeds up the development process and reduces the number of bugs that can be introduced. However, it brings the most benefits for the client, who can: - manage the content of his application in Strapi without the help of a developer - check how the components will look, as well as all their variants, in Storybook while the application is still being worked on - rely on Strapi being a free and open-source CMS that is constantly improved and enhanced by the community, making it a choice that may support even more features in the future
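To make the "contract" idea concrete, here is a small sketch (the field names `title` and `description` are illustrative, not taken from our project) of normalizing a Strapi v4-style REST response, which nests content under `data[].attributes`, into the flat props shape a component would consume:

```javascript
// Strapi v4 REST responses look like { data: [{ id, attributes }], meta }.
// This helper flattens them into the shape a component expects.
function toComponentProps(strapiResponse) {
  return strapiResponse.data.map((entry) => ({
    id: entry.id,
    title: entry.attributes.title,
    description: entry.attributes.description,
  }));
}

// A hand-written stand-in for what a content-type endpoint might return:
const mockResponse = {
  data: [
    { id: 1, attributes: { title: "Hero", description: "Landing hero card" } },
    { id: 2, attributes: { title: "About", description: "Team section" } },
  ],
  meta: { pagination: { page: 1 } },
};

const props = toComponentProps(mockResponse);
console.log(props[0]); // { id: 1, title: 'Hero', description: 'Landing hero card' }
```

A mapper like this is a natural place to pin the contract: if the CMS shape drifts, only this one function (and its types, when generated with graphql-codegen) needs to change, not every component.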
claudiakwj
991,162
Menu Button using Pure CSS
Demo ⬇
0
2022-02-16T11:28:19
https://dev.to/sankalpt92/menu-button-using-pure-css-48n9
codepen, javascript, programming, ux
## Demo ⬇ {% codepen https://codepen.io/sanki92/pen/abVVLyj %}
sankalpt92
991,219
Do you want a beautiful developer portfolio website
hello iam also working as a web builder.I built website under $50 because everyone needs a website...
0
2022-02-16T13:31:18
https://dev.to/sripadhs/do-you-want-a-beautiful-developer-portfolio-website-41o2
showdev, javascript, webdev, productivity
Hello, I am working as a web builder. I build websites for under $50, because everyone needs a website for their productivity. If you need any type of website, contact me now through my email address: myweb6529@gmail.com. Please give your name and phone number in the mail.
sripadhs
991,513
MX LINUX (pt-br)
I have been a huge fan of Microsoft Windows for many years. I used Microsoft Windows Server NT4,...
0
2022-02-16T17:41:55
https://dev.to/amgauna/linux-mx-pt-br-39bc
linux, vbcode, eclipse, mx
I have been a huge fan of Microsoft Windows for many years. I used Microsoft Windows Server NT4, Windows Server 2000, Windows Server 2003, Windows Server 2008, Windows Server 2012, Windows Server 2016 and Windows Server 2019. I used Microsoft SQL Server 6, SQL Server 2000, SQL Server 2008, SQL Server 2014 and SQL Server 2017. I used Windows 97, Windows 98, Windows XP, Windows Vista, Windows 7 and Windows 10. Whenever I bought a desktop or notebook computer with Linux installed, I would immediately delete the Linux partition, create a new partition, format the hard disk partition and install some version of Windows. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rvj1p00ch8n2sgkuk8h3.jpg) Many years ago I studied and used Satux Linux, Mandriva Linux and Ubuntu Linux, but I did not like using any of them. I have an old ThinkPad R61 notebook (4 GB of DDR2 RAM) that I like very much, and it had become very slow with Windows 10. So last weekend I spent some time reading about MX Linux being a lightweight operating system recommended for old computers, and decided to try it. A few days ago, late at night, I downloaded MX Linux version 21 and created a DVD. When I restarted the notebook, I thought I had told it to install, but MX Linux created a virtual disk with a live demonstration of MX Linux running inside Windows 10; even with the notebook having only 4 GB, it worked. Early the next morning I told the DVD to install MX Linux; it formatted the hard disk and took almost an hour to install. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cc8wdtfvxbukeczaycmx.jpg) I liked MX Linux: it is very light, and the old notebook with only 4 GB is no longer sluggish. 
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mv2iq7xuwfbijwjmu0hw.jpg) I also liked that the way it works is very similar to Windows, and that it has an application manager with a list of many applications that can be installed, showing what is already installed and what can be uninstalled. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4klgjg11ugqncd497t6b.jpg) I also liked MX Linux because I was able to install Microsoft Visual Studio Code, Atom, Eclipse, Python, PHP, Java, etc. on it, and everything is working, although I am still testing and learning how it works; it has been many years since I last played with Linux. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k6pt6lw7upw6ddsa1lgs.jpg) In the MX Linux terminal I used the command `sudo apt update` and answered yes to its installation prompt, then I used the command `sudo apt upgrade` and answered yes again; it took almost 4 hours to finish installing all the updates. Afterwards I restarted the notebook, and it is up to date and working perfectly. In the MX Linux application manager, Microsoft Visual Studio Code, Atom, Eclipse and other web development applications are available for installation. After they finished installing, I opened the Terminal: 1) I decided to install Python: I used the command `sudo apt install python`, answered yes to its installation prompt, and waited a few minutes for the installation to finish. 2) I decided to install PHP: I used the command `sudo apt install php`, answered yes to its installation prompt, and waited a few minutes for the installation to finish. 3) I decided to install Java: I used the command `sudo apt install java`, answered yes to its installation prompt, and waited a few minutes for the installation to finish. 
4) After finishing the installation of the development applications Python, PHP and Java, I ran the command `sudo apt update` and answered yes to the installation prompt, then `sudo apt upgrade` and answered yes again, and waited to see whether any further updates still needed to be installed on MX Linux; finally, I restarted the notebook.
amgauna
991,861
How to Unit Test Next.js API Routes with Typescript
Introduction Next.js is an awesome frontend framework. It's powered by React under the...
0
2022-02-17T14:29:00
https://www.paigeniedringhaus.com/blog/how-to-unit-test-next-js-api-routes-with-typescript
nextjs, react, typescript, api
[![Woman coding](https://www.paigeniedringhaus.com/static/3454c5a95c805c6893c9bc924c416154/15ec7/woman-coding.jpg "Woman coding")](/static/3454c5a95c805c6893c9bc924c416154/eea4a/woman-coding.jpg) ## Introduction **[Next.js](https://nextjs.org/)** is an awesome frontend framework. It's powered by React under the hood so it plays well with everything React has to offer out of the box: Hooks, Context, hot browser reloading, Typescript integration, and then it takes it a step further than what Create React App has, and offers even more like [routing](https://nextjs.org/docs/routing/introduction), [server side rendering (SSR)](https://nextjs.org/docs/basic-features/data-fetching/get-server-side-props), [static site generation (SSG)](https://nextjs.org/docs/basic-features/data-fetching/get-static-props), all the SEO juice that comes along with both SSR and SSG, _and_ built-in [API routing](https://nextjs.org/docs/api-routes/introduction) - no extra Node server required to proxy API calls securely to a database, another microservice, or a third party API. At [work](https://blues.io?&utm_source=dev.to&utm_medium=web&utm_campaign=sparrow-accelerator&utm_content=unit-test-nextjs-routes), a team of developers and I have been building a **[new application](https://github.com/blues/sparrow-starter)** that we've open sourced to help our users get up and running faster with the Internet of Things (IoT) hardware we create. 
For our first "accelerator application", the idea is that a user will get some of our [IoT devices](https://blues.io/products/?&utm_source=dev.to&utm_medium=web&utm_campaign=sparrow-accelerator&utm_content=unit-test-nextjs-routes), those devices will begin collecting data like temperature, humidity, motion, etc., they'll send that environmental data to a [cloud](https://notehub.io?&utm_source=dev.to&utm_medium=web&utm_campaign=sparrow-accelerator&utm_content=unit-test-nextjs-routes), and then they'll fork our "starter application" code to get a dashboard up and running, pulling in their own sensor data from the cloud, and displaying it in the browser. To build this app, we decided to go with the Next.js framework because it offered so many of the benefits I listed above, one of the most important being the ability to make secure API calls without having to set up a standalone Node server using Next.js's **[API routes](https://nextjs.org/docs/api-routes/introduction)**. All of the data displayed by the application must be fetched from the cloud (or a database) where the device data is stored after it's first recorded. And this being a production-ready application, things like automated unit and end-to-end tests to ensure the various pieces of the application work as expected are a requirement - both to give the developers and our users confidence that as new features are added, existing functionality remains intact. While by and large, the Next.js **[documentation](https://nextjs.org/docs/getting-started)** is great, one place that it does fall short is when it comes to unit testing these API routes. There is literally nothing in the documentation that touches on how to test API routes with [Jest](https://jestjs.io/) and [React Testing Library](https://testing-library.com/docs/react-testing-library/intro/) - the de facto unit testing library combo when it comes to any React-based app. 
**Which is why today I'll be showing you how to unit test Next.js API routes, including gotchas like local environment variables, mocked data objects, and even Typescript types for Next-specific objects like `NextApiRequest`.** --- ## The actual Next.js API route to test So before we get to the tests, let me give you a brief example of the sorts of API calls this application might make. For our app, the first thing that must be fetched from the cloud is info about the **"gateway devices"**. > **Note:** The files I've linked to in the actual repo are historical links in GitHub. The project underwent a major refactor afterwards to more cleanly divide up different layers for future ease of use and flexibility, but if you dig back far enough in the commit history (or just click on the hyperlinked file name) you can see our working code that matches what I'm describing below. ### Fetch the gateway device info The **gateways** are the brains of the operation - there are a number of sensors that all communicate with the gateways telling them what environmental readings they're getting at their various locations, and the gateways are responsible for sending that data from each sensor to the cloud - it's like a hub and spoke system you'd see on a bicycle wheel. Before anything else can happen in the app, we have to get the gateway information, which can later be used to figure out which sensors and readings go with which gateways. I won't go into more details about how the app works because it's outside the scope of this post, but you can see the whole repo in GitHub [here](https://github.com/blues/sparrow-starter). Let's focus on the API call going from the Next.js app to our cloud (which happens to be called [Notehub](https://notehub.io?&utm_source=dev.to&utm_medium=web&utm_campaign=sparrow-accelerator&utm_content=unit-test-nextjs-routes)). 
In order to [query Notehub](https://dev.blues.io/reference/notehub-api/device-api/#get-device-by-uid?&utm_source=dev.to&utm_medium=web&utm_campaign=sparrow-accelerator&utm_content=unit-test-nextjs-routes) we'll need: * An [authorization token](https://dev.blues.io/reference/notehub-api/api-introduction/?&utm_source=dev.to&utm_medium=web&utm_campaign=sparrow-accelerator&utm_content=unit-test-nextjs-routes), * A Notehub [project's ID](https://dev.blues.io/notehub/notehub-walkthrough/#create-a-new-project?&utm_source=dev.to&utm_medium=web&utm_campaign=sparrow-accelerator&utm_content=unit-test-nextjs-routes), * And a gateway [device's ID](https://dev.blues.io/reference/glossary/#deviceuid?&utm_source=dev.to&utm_medium=web&utm_campaign=sparrow-accelerator&utm_content=unit-test-nextjs-routes). Below is an example of the call made to Notehub via Next.js to fetch the gateway device data. I'll break down what's happening after the code block. > Click the file name if you'd like to see the original code this file was modeled after. 
**[`pages/api/gateways/[gatewayID].ts`](https://github.com/blues/sparrow-starter/blob/0a0c26c6e0db042b85861846defe621af999d936/src/pages/api/gateways/%5BgatewayUID%5D.ts)**

```typescript
import type { NextApiRequest, NextApiResponse } from 'next';
import axios, { AxiosResponse } from 'axios';

export default async function gatewaysHandler(
  req: NextApiRequest,
  res: NextApiResponse,
) {
  // Only allow GET requests
  if (req.method !== 'GET') {
    res.status(405).json({ err: 'Method not allowed' });
    return;
  }
  // Gateway UID must be a string
  if (typeof req.query.gatewayID !== 'string') {
    res.status(400).json({ err: 'Invalid gateway ID' });
    return;
  }
  // Query params
  const { gatewayID } = req.query;
  // Notehub values
  const { BASE_URL, AUTH_TOKEN, APP_ID } = process.env;
  // API path
  const endpoint = `${BASE_URL}/v1/projects/${APP_ID}/devices/${gatewayID}`;
  // API headers
  const headers = {
    'Content-Type': 'application/json',
    'X-SESSION-TOKEN': AUTH_TOKEN,
  };
  // API call
  try {
    const response: AxiosResponse = await axios.get(endpoint, { headers });
    // Return JSON
    res.status(200).json(response.data);
  } catch (err) {
    // Return a 404 if Notehub couldn't find the device,
    // otherwise fall through to a generic 500
    if (axios.isAxiosError(err) && err.response && err.response.status === 404) {
      res.status(404).json({ err: 'Unable to find device' });
    } else {
      res.status(500).json({ err: 'Failed to fetch Gateway data' });
    }
  }
}
```

In our code, the **[Axios HTTP library](https://axios-http.com/docs/intro)** is used to make our HTTP requests cleaner and simpler. There are **[environment variables](https://nextjs.org/docs/basic-features/environment-variables)** passed in from a `.env.local` file for various pieces of the call to the Notehub project which need to be kept secret (things like `APP_ID` and `AUTH_TOKEN`), and since this project is written in Typescript, the **[`NextApiRequest` and `NextApiResponse` types](https://nextjs.org/docs/basic-features/typescript#api-routes)** also need to be imported
at the top of the file.

After the imports, there are a few validation checks to make sure that the HTTP request is a `GET`, and that the `gatewayID` from the query params is a string (which it always should be, but it never hurts to confirm). Then the URL request to the Notehub project is constructed (`endpoint`) along with the required `headers` to allow for access, and the call is finally made with Axios.

Once the JSON payload is returned from Notehub, it is checked for further errors, like a gateway ID that can't be found, and if everything's in order, all the gateway info is returned.

There's just enough functionality and enough possible error scenarios to make it interesting, but not so much that it's overwhelming to test. Time to get on with writing unit tests.

---

## Set up API testing in Next.js

Ok, now that we've seen the actual API route we want to write unit tests for, it's time to get started. Since we're just testing API calls instead of components being rendered in the DOM, Jest is the only testing framework we'll need this time, but that being said, there's still a little extra configuration to take care of.

### Install the `node-mocks-http` Library

The first thing that we'll need to do in order to mock the HTTP request and response objects for Notehub (instead of using actual production data, which is much harder to set up correctly every time) is to install the **[`node-mocks-http`](https://www.npmjs.com/package/node-mocks-http)** library.

This library allows for mocking HTTP requests by any Node-based application that uses `request` and `response` objects (which Next.js does). It has a handy function called **[`createMocks()`](https://github.com/howardabrams/node-mocks-http#createmocks)**, which merges together two of its other functions, `createRequest()` and `createResponse()`, and allows us to mock both `req` and `res` objects in the same function. This lets us dictate what Notehub should accept and return when the `gatewaysHandler()` function is called in our tests.
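To make the mocked `req`/`res` idea concrete, here's a toy sketch of what such mocks boil down to. This is *not* `node-mocks-http` — `createToyMocks()` and `demoHandler()` are hypothetical names, and the point is just to show the mechanics: plain objects that record what a handler does to them, so a test can assert on the result afterwards.

```javascript
// Toy stand-ins for createRequest()/createResponse() -- NOT node-mocks-http.
// Plain objects that record what a handler does to them.
function createToyMocks({ method = 'GET' } = {}) {
  const req = { method, headers: {}, query: {} };
  const res = {
    statusCode: 200,
    body: undefined,
    // chainable, like the real res.status(...).json(...)
    status(code) {
      this.statusCode = code;
      return this;
    },
    json(payload) {
      this.body = payload;
      return this;
    },
  };
  return { req, res };
}

// A tiny handler in the same shape as a Next.js API route
function demoHandler(req, res) {
  if (req.method !== 'GET') {
    res.status(405).json({ err: 'Method not allowed' });
    return;
  }
  res.status(200).json({ ok: true });
}

// "Test" the handler by inspecting what it did to the mocked res
const { req, res } = createToyMocks({ method: 'POST' });
demoHandler(req, res);
console.log(res.statusCode, res.body); // 405 { err: 'Method not allowed' }
```

The real library does far more (header handling, event emitters, helpers like `_getJSONData()`, and so on), which is why we install it rather than hand-rolling mocks like this.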
Add this library to the project's `devDependencies` list in the `package.json` file like so.

```shell
npm install --save-dev node-mocks-http
```

### Add an `.env.test.local` file for test-related environment variables

I learned the hard way that environment variables present in a Next.js project's `.env.local` file (the [prescribed way Next wants to read environment variables](https://nextjs.org/docs/basic-features/environment-variables)) are not automatically available to its unit tests.

Instead, we need to make a new file at the root of the project named `.env.test.local` to hold the **[test environment variables](https://nextjs.org/docs/basic-features/environment-variables#test-environment-variables)**.

This file will basically be a duplicate of the `.env.local` file. We'll include the `BASE_URL` to reach our API, a valid `AUTH_TOKEN`, a valid `APP_ID` and a valid `DEVICE_ID`.

The `DEVICE_ID` is the gateway device's ID, which actually comes from the app's URL query parameters, but since we're unit testing this route file's functionality, we'll pass the gateway's ID in as an environment variable to keep all our variables in one centralized place.

Here's what your test environment variables file should contain.

> **Note:** Neither this file nor your actual `.env.local` file should _ever_ be committed to your repo in GitHub. Make sure these are in your `.gitignore` file so they don't accidentally make it there where anyone could read potentially secret variables.
**`.env.test.local`**

```shell
BASE_URL=https://api.notefile.net
AUTH_TOKEN=[MY_AUTH_TOKEN]
APP_ID=[app:MY_NOTEHUB_PROJECT_ID]
DEVICE_ID=[dev:MY_GATEWAY_DEVICE_ID]
```

> **Use real env vars for your test file:** Although in our final test file you won't see us importing all of these variables to construct the Notehub URL, if they are not valid and included now, the tests will error out. The tests actually construct valid URLs under the hood; we're just specifying what should be sent and received when the calls are placed. Undefined variables or nonsense test data variables will cause the tests to fail.

And with those two things done, we can get to testing.

### Write the API tests

To keep things in line with what Jest recommends, we can store all our test files inside of a folder at the root of the Next project named `__tests__/`, and to make it easy to figure out which tests go with which components, I tend to mimic the original file path and name for the file being tested.

> If you prefer to keep your tests inline with your actual source files, that's a valid choice as well. I've worked with both kinds of code repos so it's really a matter of personal preference.
>
> In that case, I tend to just create `__tests__/` folders at the root of each folder alongside where the actual files live. So inside of the `pages/api/` folder I'd make a new folder named `__tests__/` and add any related test files in there.

Since this is an API route file buried within our `pages/` folder, I'd recommend a similar file path inside the `__tests__/` folder: `__tests__/pages/api/gateways/[gatewayID].test.ts`. In this way, a quick glance at the file name should tell us exactly what this file is testing.

Then, we come up with possible test cases to cover. Some scenarios to test include:

- Testing a valid response from Notehub with a valid `authToken`, `APP_ID` and `DEVICE_ID`, which results in a 200 status code.
- Testing that an invalid gateway ID for a device that doesn't exist throws a 404 error.
- Testing that a missing gateway ID results in a 400 error.
- And testing that trying to make any type of HTTP call besides a `GET` results in a 405 error.

Below is what my tests look like to test this API endpoint. We'll dig into the details after the big code block.

> Click the file name if you'd like to see the original code this file was modeled after.

**[`__tests__/src/pages/api/gateways/[gatewayUID].test.ts`](https://github.com/blues/sparrow-starter/blob/b1712d3b34040b12794ddc71671649c401450eea/__tests__/src/pages/api/gateways/%5BgatewayUID%5D.test.ts)**

```typescript
/**
 * @jest-environment node
 */
import { createMocks, RequestMethod } from 'node-mocks-http';
import type { NextApiRequest, NextApiResponse } from 'next';

import gatewaysHandler from '../../../../../src/pages/api/gateways/[gatewayUID]';

describe('/api/gateways/[gatewayUID] API Endpoint', () => {
  const authToken = process.env.AUTH_TOKEN;
  const gatewayID = process.env.DEVICE_ID;

  function mockRequestResponse(method: RequestMethod = 'GET') {
    const {
      req,
      res,
    }: { req: NextApiRequest; res: NextApiResponse } = createMocks({ method });
    req.headers = {
      'Content-Type': 'application/json',
      'X-SESSION-TOKEN': authToken,
    };
    req.query = { gatewayID: `${gatewayID}` };
    return { req, res };
  }

  it('should return a successful response from Notehub', async () => {
    const { req, res } = mockRequestResponse();
    await gatewaysHandler(req, res);

    expect(res.statusCode).toBe(200);
    expect(res.getHeaders()).toEqual({ 'content-type': 'application/json' });
    expect(res.statusMessage).toEqual('OK');
  });

  it('should return a 404 if Gateway UID is invalid', async () => {
    const { req, res } = mockRequestResponse();
    req.query = { gatewayID: 'hello_world' }; // invalid gateway ID
    await gatewaysHandler(req, res);

    expect(res.statusCode).toBe(404);
    expect(res._getJSONData()).toEqual({ err: 'Unable to find device' });
  });

  it('should return a 400 if
Gateway ID is missing', async () => {
    const { req, res } = mockRequestResponse();
    req.query = {}; // Equivalent to a null gateway ID
    await gatewaysHandler(req, res);

    expect(res.statusCode).toBe(400);
    expect(res._getJSONData()).toEqual({
      err: 'Invalid gateway ID',
    });
  });

  it('should return a 405 if HTTP method is not GET', async () => {
    const { req, res } = mockRequestResponse('POST'); // Invalid HTTP call
    await gatewaysHandler(req, res);

    expect(res.statusCode).toBe(405);
    expect(res._getJSONData()).toEqual({
      err: 'Method not allowed',
    });
  });
});
```

**Handle the imports**

Before writing our tests, we need to import the `createMocks` and `RequestMethod` variables from the `node-mocks-http` library. As I noted earlier, `createMocks()` allows us to mock both the `req` and `res` objects in one function, instead of having to mock them separately.

Additionally, since this is a Typescript file, we'll need to import the `NextApiRequest` and `NextApiResponse` types from `next` - just like for the real API route file. And finally, we need to import the real `gatewaysHandler` function - it's what we're trying to unit test, after all.

**Create a reusable `mockRequestResponse()` helper function**

After creating a `describe` block to house all the unit tests, I created a reusable helper function to set up the mocked API call for each test.

This reusable `mockRequestResponse()` function allows us to construct our mocked HTTP call only once, cuts down on the amount of duplicate code in the test files, and improves overall readability. Although we may change various parts of the `req` or `res` object based on what scenario is being tested, writing this function once and being able to call it inside of each test is a big code (and time) saver.
```typescript
const authToken = process.env.AUTH_TOKEN;
const gatewayID = process.env.DEVICE_ID;

function mockRequestResponse(method: RequestMethod = 'GET') {
  const {
    req,
    res,
  }: { req: NextApiRequest; res: NextApiResponse } = createMocks({ method });
  req.headers = {
    'Content-Type': 'application/json',
    'X-SESSION-TOKEN': authToken,
  };
  req.query = { gatewayID: `${gatewayID}` };
  return { req, res };
}
```

Above, I've pulled out a snippet from the larger code block that focuses just on the `mockRequestResponse()` function and the two environment variables it needs during its construction: `authToken` and `gatewayID`.

After declaring the function name, we specify its method using the `node-mocks-http` `RequestMethod` object: `method: RequestMethod = 'GET'`, and then we destructure and set the `req` and `res` object types that come from the `createMocks()` function as `NextApiRequest` and `NextApiResponse` (just like in our real code).

We create the same `req.headers` object that Notehub requires with our test-version `authToken`, and set the mocked query parameter `gatewayID` equal to the `gatewayID` being supplied by our `.env.test.local` file.

**Write each test**

With our `mockRequestResponse()` function built, we can simply call it inside of each test to get our mocked `req` and `res` objects, call the actual `gatewaysHandler()` function with those mocked objects, and make sure the responses that come back are what we expect.

If a property on the `req` object needs to be modified before the call to `gatewaysHandler()` is made, it's as straightforward as calling the `mockRequestResponse()` function and then modifying whatever property of the `req` object needs to be updated.
```typescript
const { req, res } = mockRequestResponse();
req.query = { gatewayID: 'hello_world' };
```

To check response objects, especially for error scenarios where different error strings are passed when a gateway ID is missing or invalid, we can use the `res._getJSONData()` function to actually read out the contents of the response. That way we can check the actual error messages along with the HTTP status codes. Pretty handy, right?

---

## Check the test code coverage

If you're using **[Jest's code coverage](https://jestjs.io/docs/configuration#collectcoverage-boolean)** reporting features, now's a good time to run that report and check out the code coverage for this file in the terminal printout or the browser.

> You can open the code coverage report via the command line by typing: `open coverage/lcov-report/index.html`

And hopefully, when you navigate to the code coverage for the `pages/api/` routes, you'll see some much better code coverage for this file now. Now go forth and add unit tests to all other API routes as needed.

---

## Conclusion

I'm a fan of the Next.js framework - it's React at its heart with lots of niceties like SEO and API routes baked in. While Next fits the bill for many projects nowadays and helps get us up and running fast, its testing documentation leaves something to be desired - especially for some of its really great additions like API routes.

Automated testing is a requirement in today's modern software world, and being able to write unit tests to confirm an app's functionality works as expected isn't something to be ignored or glossed over. Luckily, the `node-mocks-http` library helps make setting up mocked `req` and `res` objects simple, so that we can test our Next.js app from all angles - from presentational components in the DOM down to API routes on the backend.

Check back in a few weeks — I'll be writing more about JavaScript, React, IoT, or something else related to web development.
If you’d like to make sure you never miss an article I write, sign up for my newsletter here: https://paigeniedringhaus.substack.com Thanks for reading. I hope learning how to unit test API routes helps you out in your next Next.js project (no pun intended!). --- ## References & Further Resources - [Next.js framework](https://nextjs.org/) - [Jest unit testing library](https://jestjs.io/) - [React Testing Library](https://testing-library.com/docs/react-testing-library/intro/) - [Axios HTTP library](https://axios-http.com/docs/intro) docs - [Notehub](https://notehub.io) cloud - [Node mocks HTTP](https://www.npmjs.com/package/node-mocks-http) library - Full [GitHub project repo](https://github.com/blues/sparrow-starter)
paigen11
How to Schedule Jira Issues in Google Calendar
2022-02-17T14:09:09
https://dev.to/elizabethwerd/how-to-schedule-jira-issues-in-google-calendar-4img
jira, googlecalendar, productivity
If you're part of a busy product team, you're probably staring down an aggressive task list alongside a jam-packed calendar full of team meetings, one-on-ones, and breakout sessions every week. With multiple projects in the works and hundreds of tasks on your plate -- how are you supposed to find time to actually get stuff done?

[Jira](https://www.atlassian.com/software/jira) is an awesome and popular project management tool for software teams that helps you organize your projects, prioritize your workload, and build a solid product roadmap. Paired with the most popular calendar app, [Google Calendar](https://www.google.com/calendar/about/), used by over [half a billion](https://expandedramblings.com/index.php/google-app-statistics/) users worldwide to manage their time and schedules, you should be able to stay pretty organized, right?

But, despite managing your two most important work elements (your available time and your task list), there's still a major disconnect between knowing what you need to work on, and actually scheduling heads-down time to get it done. With ever-evolving task lists and unpredictable workweeks, it's not surprising that the average person fails to complete [41%](http://blog.idonethis.com/how-to-master-the-art-of-to-do-lists/) of the tasks on their to-do list!

It also doesn't help that you're facing a constant stream of interruptions all day. While you intend to stay heads-down on a task in Jira, every Slack message you pop over to answer causes you to [context switch](https://reclaim.ai/blog/context-switching) and can cost an average of [23 minutes](https://firstup.io/blog/employee-productivity-statistics/) every time just to get back on track. And if a task requires deep focus work, you can add another [15-20 minutes](https://www.locationrebel.com/flow-state/#:~:text=The%20science%20shows%20us%20that,disturb%20you%20until%20after%20lunch.) to that delay to reach a flow state where you are most productive.
So how can you better align your Jira issues, projects, and priorities with your actual availability? Let's walk through creating this integration through [Reclaim.ai](https://reclaim.ai/) so you can automatically [schedule time for your Jira issues in Google Calendar](https://reclaim.ai/blog/jira-google-calendar?utm_source=devto&utm_medium=blog-published&utm_campaign=jira-google-calendar&utm_term=jira-google-calendar), by priority, before their due date. What are the benefits? ---------------------- You've probably heard of the widely popular (for good reason!) productivity hack called [time blocking](https://reclaim.ai/blog/time-blocking-planner#:~:text=4.-,Reclaim,your%20daily%20tasks%20and%20habits.) by now, or stumbled across it on one of your colleagues calendars. Time blocking is the habit of scheduling your daily and weekly tasks into "blocks" of time on your calendar so you're able to get more done as efficiently as possible. It may sound like extra work that you don't have time for, but time blocking has actually been proven to increase productivity [up to 80%](https://www.betterup.com/blog/time-blocking) by reducing multitasking and creating dedicated time to stay focused. Let's walk through the benefits of scheduling your Jira issues in Google Calendar:  - Make more time for heads-down work - Stay focused on the task at hand - Reduce distractions & context switching - Defend your time from being overrun by meetings - Share context about what you're working on with coworkers - Track your time on each project - Plan better sprints with realistic goals (based on your actual availability!) So, instead of wasting another open hour between meetings poking around your Jira to-do list, let's get your software development tool integrated with Google Calendar so you can automate your time blocking and simply look at your schedule to know exactly what to do next. 
How to connect Jira & Google Calendar ------------------------------------- Now that you're up to speed on the benefits of integrating your [Jira issue list with Google Calendar](https://reclaim.ai/features/jira-integration), let's jump into how to set up the connection. We're going to walk you through the steps to integrate using the Reclaim smart time blocking app for Google Calendar. Time blocking at Reclaim works with your existing schedule, and automatically finds the best time to schedule your tasks around your other calendar events. Tasks are scheduled as "free" initially to placehold the time and keep your schedule flexible, but as your calendar fills up and you run out of available openings the task could be rescheduled in, the time block will flip to "busy" locking in the time. Here's how to get started: (Note, if you're the first Reclaim user in your organization, visit [here](https://help.reclaim.ai/en/articles/5855518-jira-integration-overview) first to add Reclaim to your Jira workspace.) 1. Sign up at [Reclaim.ai](https://app.reclaim.ai/signup). 2. Authorize Reclaim in Jira: Go to an existing Jira issue, or create a new one, and click the Reclaim button in the menu bar to give Reclaim permissions to access your Jira issues. 3. Enable Jira in Reclaim: Finish connecting Reclaim in Jira's onboarding process, or go to [Settings > Integrations](https://app.reclaim.ai/settings/integrations) in Reclaim to enable the Jira integration. 4. Select manual or automatic syncing: In your [Jira Integrations Settings](https://app.reclaim.ai/settings/integrations), select whether you want to manually or automatically schedule Jira issues to your Google Calendar. Manually scheduling (default) works by selecting the issues you want to schedule within an open issue in Jira, which allows for more control over what gets scheduled on your calendar. 
Reclaim also allows you to automatically sync all Jira issues to your calendar according to the estimated and issue due date provided, and prioritize your full list in the Reclaim Planner. 5. Start scheduling your Jira issues: Jira issues require a valid assignee, an estimate, a due date (or fix version, or Sprint). If you've chosen to manually schedule, click the "Schedule via Reclaim" button within the Jira issue, or if you're automatically scheduling, your tasks should already be in your calendar! Check your Google Calendar or Reclaim Planner to see your new time blocks for Jira issues.  6. Adjust and customize your scheduled tasks: While Reclaim automatically finds time for your tasks in your calendar, you can also customize your time blocks to whatever works best for you! Use the right side menu in the Planner to prioritize your Task list, push back tasks, add more time, or check off a completed task. 7. Track time on tasks: Reclaim also automatically logs the time you spend working on your active issues and projects directly in your Jira issues -- helping you track your time and make better estimates on issues in the future. 8. Invite your team: [Invite your team](https://app.reclaim.ai/share-reclaim) so they can integrate their Jira issues with their calendar, or assign them to coworkers' calendars, for an even more efficient workflow that allows you to deploy more every sprint! Visit the [Jira Integration Overview](https://help.reclaim.ai/en/articles/5855518-jira-integration-overview) help doc for more details on the Reclaim setup process. {% embed https://youtu.be/GvEIjz2u_KE %}. Get more done every sprint -------------------------- And you're all set! With this intuitive integration, your Jira workflow can be improved by automatically scheduling issues into your Google Calendar according to what's most important in your evolving task list. 
Now you and your team can start defending focus time for your projects and minimizing unnecessary interruptions from non-priorities.

If you're looking for even more ways to optimize your productivity, check out Reclaim's [Habits](https://reclaim.ai/features/habits) feature. While Jira holds all of your one-time tasks, Habits are basically recurring tasks for your regular work routines like weekly planning, code review, or even just blocking time for lunch!

Another helpful feature for preventing burnout for you and your team is [decompression time](https://reclaim.ai/features/buffer-time) -- allowing you to automatically schedule breaks after Zoom calls or virtual meetings, so you're not thrown right into another conference call before you've had time to reset. Remember to defend the time you need for the things that fall outside of your task list, or that work is very likely to bleed into your focus time.

As mentioned above, another major issue most professionals face is non-stop distractions and interruptions during the workweek. Unsurprisingly, a Microsoft [study](https://www.vox.com/recode/2019/5/1/18511575/productivity-slack-google-microsoft-facebook) found that information professionals switch windows 372 times a day, or around every 40 seconds, while working on completing their tasks! In large companies, the average employee sends more than 200 Slack messages per week, and it's "not uncommon" for power users to send upwards of 1,000 messages a day.

A pro tip to reduce distractions: [integrate Slack](https://reclaim.ai/features/slack-integration) with Google Calendar at Reclaim so you can automatically sync your status according to your schedule and minimize interruptions by sharing context through your status.
If you really don't want to be interrupted during deep focus work, you can block distractions out completely by automating DND whenever you're in a deep focus work session for a Jira issue.

And that's it! We hope this tutorial and integration help you and your team start reclaiming time for your most important priorities, and as always, we would love to hear about it. Tweet us at @reclaimai to share your feedback and experience time blocking Jira issues!
elizabethwerd
Data structures in C: Stack
2022-02-21T02:26:00
https://dev.to/josethz00/data-structures-in-c-stack-55c7
computerscience, c, tutorial, datastructures
## Introduction

Stack is a linear data structure that puts its elements one over the other in sequential order. Think about a pile of plates: it works just like a stack. You put one plate over the other, and if you use it correctly, the first plate you take off the pile is the last one that you have put onto it.

![Pile of plates](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v0a9l55zcwx9cdy9rppi.png)

When you have a data structure in which the last element in is the first to go out, we call it **LIFO**, which means **Last In - First Out**. So the first element inserted into the stack will be the last to go out, and the last element inserted will be the first to go out.

Here is an animated illustration of a stack:

![Animation of a stack working](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kks4kckqvhbey5zbfydu.gif)

&nbsp;

## Stack operations

Now let's start diving into the details of this data structure by defining the operations that it should support.

![Stack operations](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ymdsur4d8u4f8nntkvnl.png)

### `stack(int m)`
Creates and returns an empty stack, with maximum capacity `m`.

### `push_S(char x, Stack * S)`
Inserts the item `x` on top of the stack.

### `pop_S(Stack * S)`
Removes and returns the item on top of the stack.

### `top_S(Stack S)`
Accesses and returns the item on top of the stack without removing it.

### `destroy_S(Stack * S)`
Destroys the stack received as a parameter by freeing its memory space.

### `is_empty_S(Stack S)`
Returns `true` (**stdbool.h**) if the stack `S` is empty; otherwise, returns `false` (**stdbool.h**).

### `is_full_S(Stack S)`
Returns `true` (**stdbool.h**) if the stack `S` is full; otherwise, returns `false` (**stdbool.h**).
&nbsp;

## Stack implementation (struct)

We can define a stack of char like this:

```c
typedef char TmpT;

typedef struct stack {
  int max;
  int top;
  TmpT * item;
} Stack;
```

First we define a type called `TmpT` as the primitive type `char`. This is done because if we decide to modify our stack to work with integers, floats or any other data type, we just need to change the `TmpT` declaration, and the stack struct itself remains intact.

After that, the Stack is defined with 3 properties/fields: **max**, **top** and **item**. The **max** property is the maximum number of elements that the stack can hold. The **top** is the index of the last item in the stack (the last in). The **item** property is a pointer to a data type, in this case a char pointer, so it will hold all the items of the stack, all the characters.

&nbsp;

## Stack implementation (methods)

```c
#include <stdio.h>   /* puts */
#include <stdlib.h>  /* malloc, free, abort */

/* Creates and returns an empty stack with maximum capacity m */
Stack *stack(int m) {
  Stack *S = (Stack *)malloc(sizeof(struct stack));
  S->max = m;
  S->top = -1;
  S->item = (TmpT *)malloc(m * sizeof(TmpT));
  return S;
}

/* A stack is empty when top is below the first valid index */
int is_empty_S(Stack S) {
  return (S.top == -1);
}

/* A stack is full when top reaches the last valid index */
int is_full_S(Stack S) {
  return (S.top == S.max - 1);
}

/* Inserts x on top of the stack */
void push_S(TmpT x, Stack *S) {
  if (is_full_S(*S)) {
    puts("Stack full!");
    abort();
  }
  S->top++;
  S->item[S->top] = x;
}

/* Removes and returns the item on top of the stack */
TmpT pop_S(Stack *S) {
  if (is_empty_S(*S)) {
    puts("Stack empty!");
    abort();
  }
  TmpT x = S->item[S->top];
  S->top--;
  return x;
}

/* Returns the item on top of the stack without removing it */
TmpT top_S(Stack S) {
  if (is_empty_S(S)) {
    puts("Stack empty!");
    abort();
  }
  return S.item[S.top];
}

/* Frees the stack's memory. Note: the caller's pointer is not changed
   here (S is a local copy), so don't use it after calling destroy_S */
void destroy_S(Stack *S) {
  free(S->item);
  free(S);
}
```

&nbsp;

## Examples

Now that we have defined and implemented our stack data structure, let's do some practical exercises with it.
The first exercise is to reverse a chain of characters. Here's the code:

```c
#include <stdio.h>
#include <stdlib.h>

/* Assumes the Stack type and functions defined above are in scope
   (same file or an included header) */

#define STACK_LENGTH 513

int main(void) {
  char c[STACK_LENGTH];
  // initialize the stack
  Stack *s = stack(STACK_LENGTH);

  printf("Type a chain of characters: ");
  fgets(c, STACK_LENGTH, stdin);

  // for each char of the string c received by fgets (stopping before
  // the trailing newline), push it onto the stack
  for (int i = 0; c[i] && c[i] != '\n'; i++) {
    push_S(c[i], s);
  }

  printf("Reversed chain: ");
  // while the stack is not empty, pop the last element of the stack.
  // By the end of this loop the reversed string will have been printed
  while (!is_empty_S(*s)) {
    printf("%c", pop_S(s));
  }
  puts("\n");
  destroy_S(s);
  return 0;
}
```

Another exercise that we can do with stacks is decimal to binary number conversion; in this case we can use the stack as "storage" for the digits of the binary number. For this exercise don't forget to change the `TmpT` type to `int`. The final code is:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
  int n;
  Stack *s = stack(8);

  printf("Enter a decimal number: ");
  scanf("%d", &n);

  int original = n; // keep the original value, since the loop consumes n
  do {
    push_S(n % 2, s);
    n /= 2;
  } while (n > 0);

  printf("The binary representation of %d is: ", original);
  while (!is_empty_S(*s)) {
    printf("%d", pop_S(s));
  }
  puts("\n");
  destroy_S(s);
  return 0;
}
```

&nbsp;

## Conclusion

We have reached the end of this article. Here we were able to understand what a stack is, how it works, and how to implement it in the C language, and we also discussed some of its applications.

&nbsp;

## Sources

[Blogpost: Stack Data Structure (Introduction and Program)](https://www.geeksforgeeks.org/stack-data-structure-introduction-program/)

[Blogpost: Stack in C Programming](https://www.journaldev.com/35172/stack-in-c)

[Book: Estruturas de dados em C: Uma abordagem didática - Silvio do Lago Pereira](https://www.amazon.com.br/Estruturas-Dados-Uma-Abordagem-Did%C3%A1tica/dp/8536516291)
josethz00
Must/Should/Can - a Personal Organization System
2022-02-20T23:17:04
https://dev.to/joedietrichdev/mustshouldcan-a-personal-organization-system-499e
react, rails
Repo: [joedietrich-dev/must-should-can](https://github.com/joedietrich-dev/must-should-can)

## Inspiration

A little while ago, I found myself struggling to bring order to my tasks at work. I'd tried many different organizational systems. Some didn't suit my work style, others were way too complicated - adding to my daily tasks rather than making them easier. I decided to put together a system that worked for me.

## The System

I divide my tasks for the day into three buckets: Tasks I **must** do today, Tasks I **should** do today, Tasks I **can** do today. Every day, I rewrite and carry over any incomplete tasks to the next day. It's simple, but it works for me!

## Basic Features

I took the simple pen-and-paper tools and made them digital. The features of Must/Should/Can are straightforward, as is the system itself:

- Account Creation and Login
- Task Creation, Editing, and Prioritization
- Task Resets
- Task Archiving and Deleting

## What I Used

### Backend

- [Ruby on Rails](https://rubyonrails.org/) as the framework for the API
- [ActiveModelSerializers](https://github.com/rails-api/active_model_serializers) to build JSON views
- [PostgreSQL](https://postgresql.org) as the database
- The [bcrypt](https://github.com/bcrypt-ruby/bcrypt-ruby) gem to improve password security in tandem with the ActiveRecord [`has_secure_password`](https://api.rubyonrails.org/classes/ActiveModel/SecurePassword/ClassMethods.html#method-i-has_secure_password) feature

### Frontend

- [React](https://reactjs.org/) / [Create React App](https://create-react-app.dev/)
- [React Router v6](https://reactrouter.com/) - For client-side routing
- [Styled Components](https://styled-components.com/) to style the application

## Authorization, Passwords, and Salting

While building Must/Should/Can, it did not escape my attention that a user's tasks could be very private, so there was a need to protect them as much as possible.
To ensure that privacy, I not only implemented user authorization and password authentication, I protected their passwords with the ActiveRecord `has_secure_password` feature. ### `has_secure_password` If you're storing passwords in any system, it is a **very bad idea** to store them in plaintext **anywhere** in your application. Doing so exposes you and your users to potential data losses, which is a Bad Thing. The `has_secure_password` feature adds methods to an ActiveRecord model that make setting and authenticating securely hashed and salted passwords on your user models easy. Under the hood, `has_secure_password` uses the `bcrypt` gem to hash and salt your user's passwords. This process makes it very difficult for bad actors to access your users' password data, even if they manage to steal your database. Hashing is the process of taking data and processing it to create a new value, usually of a fixed length (sometimes called a fingerprint). The process is unidirectional, meaning once a value has been hashed, it is incredibly impractical (with current technology) to reverse the process to derive the original value from the hash. For example, using bcrypt, the password `Wolfgang the puppy` might hash to the value `$2a$12$j29LhAzasXWN7glfGjp9NuFXcOYBCffkE4RWcQJwBFzxsAsUsQ2nK`. This unidirectionality is what makes hashed passwords more secure than plaintext passwords - a hacker will need to do extra work to break the encryption involved. Or they might have a Rainbow Table, which is a precomputed set of values that will let an attacker look up the password based on a given hash. If the hashing function is known to the attacker, hashing alone won't be enough to protect a user's password, since the same input value will always produce the same output hash. This is why bcrypt will also **salt** a password before storing the hash in your database. A salt is data added to the input of a hash function. 
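To see why salting matters, here's a tiny runnable sketch — not bcrypt itself (bcrypt also uses a deliberately slow hash and a random per-password salt), just Ruby's standard `Digest::SHA256` with explicit salts to show the principle:

```ruby
require 'digest'

# Illustration only: combine a salt with the password before hashing.
def hash_with_salt(password, salt)
  Digest::SHA256.hexdigest(salt + password)
end

password = "Wolfgang the puppy"

h1 = hash_with_salt(password, "salt-a")
h2 = hash_with_salt(password, "salt-b")

# Same password, different salts => completely different fingerprints,
# so a precomputed rainbow table built for one salt is useless for another.
puts h1 == h2  # false
```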
In bcrypt's implementation, a unique salt is added to every password on generation. This means that an attacker would need to use a different pre-computed Rainbow Table for every single password, which is computationally prohibitive. All of this means that, properly implemented, using `has_secure_password` and `bcrypt` in your application is one important step to protect you and your users from bad actors. ## Next Steps I plan to introduce the ability to add notes to tasks so you can, for example, sketch out an agenda for a meeting, or divide tasks into subtasks. I also plan to enhance the archive with grouping and sorting. Later on, I'll enhance the user's account management experience, letting them reset their password and edit their user name. ## End Thanks for reading! For a walkthrough, take a look at the [demo video](https://youtu.be/rDLsf4lwwns). Access the application itself at [https://must-should-can.herokuapp.com/](https://must-should-can.herokuapp.com/).
joedietrichdev
993,450
Build a Video Conference App from Scratch using WebRTC,Websocket,PHP +JS Day 43
In this video we'll cover how to create rejectedCall function
0
2022-02-18T11:03:23
https://dev.to/benpobi/build-a-video-conference-app-from-scratch-using-webrtcwebsocketphp-js-day-43-3d90
webrtc, webdev, javascript, tutorial
In this video we'll cover how to create the rejectedCall function. {% youtube SEvGnrw8KXg %}
benpobi
993,497
Josef Müller-Brockmann poster
tooling around with svg to share my appreciation for Swiss Style, modern design history and catch up...
0
2022-02-18T12:38:08
https://dev.to/jimmont/josef-muller-brockmann-poster-1bj1
svg, design
tooling around with svg to share my appreciation for Swiss Style, modern design history and catch up with a little tech: https://www.jimmont.com/art/beethoven.html the interaction and viewport dynamic made for a fun exploration into how the original was made and looks in a new context more about the original designer Josef https://en.wikipedia.org/wiki/Josef_M%C3%BCller-Brockmann the svg excerpt: ``` <svg viewBox="0 0 750 1076" version="1.1" style="width:100vw;height:100vh;"> <title>Josef Müller-Brockmann 1955 Beethoven poster</title> <!-- remembering Josef Müller-Brockmann https://en.wikipedia.org/wiki/Josef_Muller-Brockmann 1955 Beethoven poster --> <style> svg{background-color:#000;overflow:visible;} :root{ --circle: rgba(255,255,255,1); --pie: rgba(0,0,0,1); --stroke: #fff; --stroke-width: 3px; } text,tspan{font-family:arial,sans-serif;font-size: 11px; word-spacing: 0px;line-height:1.1;line-height: 1.1; white-space: pre; text-anchor:start;} .align-right, .align-right *{text-anchor:end;} #beethoven{font-size: 32px; font-weight: 700; line-height: 1;} circle{fill:var(--circle, #fff);stroke:var(--stroke, #fff);stroke-width:var(--stroke-width);} g.group{ transform:rotate(0deg); transition:transform 3s; } g.group.on{ /* transform:rotate(360deg); */ animation: 3s rotations ease-in-out; } @keyframes rotations{ 0% {transform:rotate(0);} 45% {transform:rotate(360deg);} 90% {transform:rotate(0);} 100% {} } path{fill:var(--pie, #000);stroke:var(--stroke, #fff);stroke-width:var(--stroke-width);} #topdisc{fill:var(--pie, #fff);} </style> <g style="transform:translate(203px, 770px);" id="discs"> <rect width="60" height="60" bx:origin="0 0" fill="rgba(255,255,255,0.3)"/> </g> <g style="transform:translate(198px, 781px);" id="texts"> <g class="group"> <text class="align-right" id="beethoven" x="0" y="-78">beethoven</text> <text class="align-right" id="left-labels" x="0" y="0"> <tspan x="0">tonhalle</tspan> <tspan x="0" dy="6.6em">leitung</tspan> <tspan x="0" 
dy="1.2em">solist</tspan> <tspan x="0" dy="1.8em">beethoven</tspan> <tspan x="0" dy="4.2em">vorverkauf</tspan> </text> <text x="4" y="0"> <tspan x="4">grosser saal</tspan> <tspan x="4" dy="1.2em">dienstag, den 22, februar 1955,</tspan> <tspan x="4" dy="1.2em">20.15 uhr</tspan> <tspan x="4" dy="1.2em">4. extrakonzert</tspan> <tspan x="4" dy="1.2em">der tonhalle-gesellschaft</tspan> <tspan x="4" dy="1.8em">carl schuricht</tspan> <tspan x="4" dy="1.2em">wolfgang schneiderhan</tspan> <tspan x="4" dy="1.8em">ouverture zu -coriolan-,op. 62</tspan> <tspan x="4" dy="1.2em">violinkonzert in d-dur,op. 61</tspan> <tspan x="4" dy="1.2em">siebente sinfonie in a-dur,op. 92</tspan> <tspan x="4" dy="1.8em">tonhalle-kasse, hug, jecklin,</tspan> <tspan x="4" dy="1.2em">kuoni</tspan> <tspan x="4" dy="1.2em">karten zu fr.3.50 bis 9.50</tspan> <tspan x="4" dy="1.2em"/> </text> </g> </g> <script> // <![CDATA[ // circle divided in 32 parts, 2π const angleIncrement = 2*Math.PI/32; // series x2, 1-32 => 1, 2, 4, 8, 16, 32 // dr = (end - start) / 32 = (900-260)/32 = 20; // see from looking at the image and calculation output the added space for each circle's border // visually approximated small and large circles // step small number to find a reasonable value, step larger one to find value /32 where it yields at whole number // account for the offset, notice 1px border // radius is dr * step + sum of 1 unit borders, so index * 1px border; // radius = step * dr + i; const dr = 10; let arc, arcs, arclist = [ [], // text in first // [start-unit, size] [[-2,5], [12,5]], // 5, -9, 5 @1 [[-4, 6], [8, 6]], // 6, -6, 6 @2 [[-7, 8], [4,12]], // 7, -4, 8+? @4 [[-8,25]], // 20+??, -? ? @8 [[-4, 22]], // ? @16 [[-9, 20]] // ? 
@32 ]; let i = 0 , step = 1 , parent = document.querySelector('#discs') , r0 = 250 //c0.getBBox().width / 2 , circle = parent.ownerDocument.createElementNS('http://www.w3.org/2000/svg', 'circle') , path = parent.ownerDocument.createElementNS('http://www.w3.org/2000/svg', 'path') , group = parent.ownerDocument.createElementNS('http://www.w3.org/2000/svg', 'g') , copy, g, off , radius ; group.classList.add('group'); function onpath(event){ let node = event.composedPath().find(node=>node.matches&&node.matches('g.group')); if(!node) return; if(node.matches('path')) node = node.parentNode; const is = 'on'; switch(event.type){ case 'animationend': node.classList.remove(is); break; case 'transitionend': node.classList.remove(is); break; default: node.classList.add(is); } } function rotatable(node){ 'touchmove, touchstart, mouseover, mousedown, transitionend, animationend'.split(/[,\s]+/).forEach(event=>node.addEventListener(event, onpath)); }; rotatable(parent); rotatable(document.querySelector('#texts')); while(i < 7){ off = i * 2; radius = r0 + (dr * step) + off; console.log(i, step, radius, `${r0}+${step * dr} +${off}`); arcs = arclist[ i ]; let j = 0; while(arc = arcs[j++]){ g = group.cloneNode(true); copy = path.cloneNode(); copy.style.setProperty('d', dpath({radius, angleIncrement, arc})); //copy.setAttributeNS('http://www.w3.org/2000/svg', 'd', dpath({radius, angleIncrement, arc})); copy.setAttributeNS(null, 'd', dpath({radius, angleIncrement, arc})); g.setAttribute('id', `arc${i}-${j}`); g.append(copy); parent.prepend(g); } copy = circle.cloneNode(); copy.setAttribute('id', `c${i}`); copy.setAttribute('r', radius); parent.prepend(copy); step *= 2; i++; } // arc = [start-relative-to-zero, size-in-angleIncrements]; function dpath({radius, angleIncrement, arc}){ let a0, x0, y0, a1, x1, y1; const [start, size] = arc; a0 = angleIncrement * start; a1 = a0 + (angleIncrement * size); x0 = (Math.cos(a0) * radius); y0 = (Math.sin(a0) * radius); x1 = (Math.cos(a1) * radius); 
y1 = (Math.sin(a1) * radius); const largeArc = Math.abs(a1 - a0) > Math.PI ? 1 : 0; return `M0,0 L${x0},${y0} A${radius} ${radius} 0 ${ largeArc } 1 ${x1} ${y1} Z`; // Safari doesn't work with CSS d:path(...) // return `path('M0,0 L${x0},${y0} A${radius} ${radius} 0 ${ largeArc } 1 ${x1} ${y1} Z')`; } // ]]> </script> </svg> ```
jimmont
993,500
Node.js New Project Ideas Discussion
Are you a software **developer **or just a **thinker **or innovator? How many times you are...
16,657
2022-02-18T12:52:02
https://dev.to/zigrazor/nodejs-new-project-ideas-discussion-3852
discuss, javascript, webdev, node
Are you a software **developer** or just a **thinker** or innovator? How many times have you searched for a collaborator or contributor for your project ideas and not found one? Well, you are in the right place! I created this post to discuss new project ideas in **_Node.js_** and to find teammates for them! _You can post a comment to propose your ideas; in the comments, new ideas can be discussed and you can actively participate in the start of a project._ If an idea is good, it will be added to the last part of this post to give the project more prominence. **If you comment with a GitHub repo**, the link will be added to the project description. _I hope this can be helpful for the open-source world. Thank you in advance for your participation._
zigrazor
993,785
How to create bitmap fonts for Phaser JS with BMFont
This guide explains how to generate bitmap fonts from TTF or OTF files for use in PhaserJS. I'll be...
0
2022-02-21T12:55:21
https://dev.to/omar4ur/how-to-create-bitmap-fonts-for-phaser-js-with-bmfont-2ndc
javascript, tutorial, gamedev
This guide explains how to generate bitmap fonts from TTF or OTF files for use in PhaserJS. I'll be using [BMFont](https://www.angelcode.com/products/bmfont/) which is Windows only. ### Why bitmap fonts? The main use case is if you're creating a pixel art game & want your text to match the retro style and have no antialiasing. Below is an example from a recent game I made. The top is the standard font rendering in Phaser. The bottom is a bitmap version of the same font. ![](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m013sgvqtkvqojn2a2gt.png) **Note the antialiasing inside letters like `g`, `a`, or `o`** in the top image. This creates what looks like blurry artifacts at the small scale of pixel art games. The bottom image has the crisp, pixelated rendering expected in retro games. Bitmap fonts may also be faster to render. [From the Phaser docs](https://photonstorm.github.io/phaser3-docs/Phaser.GameObjects.BitmapText.html): > BitmapText objects are less flexible than Text objects, in that they have less features such as shadows, fills and the ability to use Web Fonts, however you trade this flexibility for rendering speed To use a bitmap font, Phaser needs: 1. **An image** containing all the possible characters 2. **An XML file** that defines the x/y/width/height of each character in the image. It's basically a spritesheet of letters. Here's an example I generated from this public domain font: https://www.fontspace.com/public-pixel-font-f72305. 
The image with all the letters: ![](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t2kzasgn1ardrb13xzz3.png) Snippet from the XML: ```xml <char id="32" x="507" y="0" width="3" height="1" xoffset="-1" yoffset="31" xadvance="32" page="0" chnl="15" /> <char id="33" x="285" y="87" width="18" height="28" xoffset="3" yoffset="4" xadvance="32" page="0" chnl="15" /> <char id="34" x="441" y="108" width="22" height="16" xoffset="3" yoffset="4" xadvance="32" page="0" chnl="15" /> ``` You can download this generated example in the format that Phaser can use here: [pixel_bitmap_font.zip](https://github.com/OmarShehata/webgl-outlines/files/8104312/pixel_bitmap_font.zip) ### Step 1 - Download BMFont Download the executable on this page: https://www.angelcode.com/products/bmfont/ ### Step 2 - Load the font * Prepare your font as a TTF file or similar * Open up `bmfont64.exe` * Select `Options` > `Font settings` * Select your font file in `Add font file` * Then select the name of the font in the `Font` dropdown ![](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4zpneejuo1l8x4c5g5rn.png) If your font is installed system-wide you can skip the `Add font file` step and just select the name of the font directly. Now you should see your font loaded: ![](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8spdeb607re5b4j0r8uu.png) ### Step 3 - Export First, we change the export settings: * Select `Options` > `Export options` * Select `XML` as the Font descriptor * Select `PNG` as the textures option ![](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3vmd7zt5yzlbceqvbgme.png) * Press OK Then to export: * Select all letters for export (Ctrl + A) * `File` > `Save bitmap font as` This will generate an XML file (it'll have the extension .fnt, you can rename that to .xml or leave it as-is, Phaser will be able to read it as an XML either way) and a PNG file. You may need to increase the width/height in the export options to keep all the letters in one image. 
### Step 4 - Use it in Phaser Tell Phaser where to load the PNG & XML files: ```javascript // Load it this.load.bitmapFont('bitmapFontName', 'font.png', 'font.fnt'); // Add it to the scene this.add.bitmapText(0, 0, 'bitmapFontName', 'Lorem ipsum\ndolor sit amet'); ``` Full example here: https://labs.phaser.io/edit.html?src=src/loader/bitmap%20text/load%20bitmap%20text.js. ### Final thoughts Note that a generated bitmap font has a font size baked in. Phaser can scale the font up and down but that may introduce artifacts in some cases. If you know the font size you want ahead of time you can set it in `Options` > `Font settings`. I used a font size of 32px in my game which was big enough so that it still looked good when scaled down or up a bit. I hope you found this useful! If you have any corrections or find a better way to generate bitmap fonts for Phaser I'm happy to update this article. Find me on Twitter ([@Omar4ur](https://twitter.com/Omar4ur)) or my website (https://omarshehata.me/).
omar4ur
994,038
Day 3 of 100 days of Code
Today I learnt the use of the conditional if/else if statements and the query selector in...
16,877
2022-02-18T22:29:50
https://dev.to/nkemdev/day-4-of-100-days-of-code-4i0h
javascript, webdev, beginners
![Blackjack game app learning javascript](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wgh0vhcrmsbh4u6npe69.jpg) Today I learnt the use of conditional if/else if statements and the query selector in JavaScript, and created a blackjack app. Blackjack is a game that you win when your card total is nearest or equal to 21. 21 is the golden total, but if no one in the game has that number, the nearest number to 21 wins the game. Conditional statements control behavior and determine whether or not pieces of code can be executed following an instruction. The types of conditional statements are: • The 'if' statement • The 'else if' statement • The 'else' statement The if statement is where, if a condition is true, the block of code/statement will be executed. The else if statement is evaluated when the first condition is false. The else statement is executed if all the statements preceding it are false. Example
```
let firstNumber = 6
let secondNumber = 13
let sum = firstNumber + secondNumber
if (sum < 21) {
  console.log("You could be the winner")
}
```
Else if example
```
if (sum < 21) {
  console.log("You could be the winner")
} else if (sum === 21) {
  console.log("Congratulations, you have won the blackjack game")
}
```
Else example
```
if (sum < 21) {
  console.log("You could be the winner")
} else if (sum === 21) {
  console.log("Congratulations, you have won the blackjack game")
} else {
  console.log("Sorry, better luck next time")
}
```
The other two things I learnt were `==` and `===`, and the difference between them. Example `5 == '5'` This will return true, as it sees the values as equal irrespective of the data type difference. Hence you would say it is not strict in differentiating. `5 === '5'` This will return false, as there are two different data types even though the values look similar. The first is the number five while the second is a string.
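To make the `==` versus `===` comparison above concrete, here is a small runnable snippet (plain JavaScript, no libraries assumed):

```javascript
// Loose equality coerces types before comparing
console.log(5 == '5');   // true: '5' is coerced to the number 5
console.log(0 == false); // true: false is coerced to 0

// Strict equality compares type first, then value
console.log(5 === '5'); // false: number vs string
console.log(5 === 5);   // true: same type and value
```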
nkemdev
994,396
Unique Project Ideas to Improve Your React Skills
What does a programmer want after learning a new language/framework? Practice and projects, right?...
0
2022-02-19T09:45:55
https://dev.to/jashanmago/unique-project-ideas-to-improve-your-react-skills-3gia
javascript, react, webdev, programming
What does a programmer want after learning a new language/framework? Practice and projects, right? So, I'm going to tell you some unique projects you can make to practice and improve your React skills. At the end of each project, I have added a section where I'll tell you how you can challenge yourself even more in that project, how you can improve that project, etc. And, I promise you that after the completion of each project, you'll learn something new and become better. I'll try my best to keep only the best and unique project ideas on this list. Now, let's begin! ## Movie Recommendation App ![Inspiration of the movie app](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/td87qqnazphhkok77xiy.jpeg) You might also face this question: What should I watch next? If yes, you're not alone; many people want recommendations about what to watch next. This project can help them, as well as you, to improve your skills. Now, the first question that comes to you is what to use to make this project. The answer is simple! If you're an intermediate, use machine learning. For beginners, an API will do the work. There are several APIs you can use in this project, including the most famous movie database API, [IMDb](https://developer.imdb.com/). But if you ask me, I would suggest [TMDb](https://developers.themoviedb.org/3). It's an alternative, in fact, a suitable alternative to IMDb. The best part is, it's free! ### Seems Very Easy? - Try to add login and sign-up functions. - Ask the user for his/her preferred genre and try to show recommendations according to it. - Add a 'Search for Movie' option. - Add a 'Give Ratings' option. ## Quiz App ![Quiz app inspiration](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wkh28lo02s81waggsx4z.jpeg) You might not call it a unique project, but it is! If you're a beginner, I'll suggest that you make just a basic quiz app, but for intermediate developers, I would suggest taking the quiz app to a bigger scale.
Now, let me explain what I meant by 'bigger scale'. You can add awesome features, such as categorized quizzes, a points system, sign-up and log-in, an online leaderboard, and many more. For adding questions, you have two ways: the first one is to make your own question API (if you're familiar with backend development); the second and most preferred way is to use an existing API. I don't think that I need to list APIs. You can easily find some online. If you still find this idea boring, there's one more idea you can try. Create a platform where people can create and attend surveys online. Creators should also be able to see stats related to their surveys, like how many people attended, how many of them were male, how many were teens, etc. ### Seems Very Easy? - Add an option to buy hints. - In the quiz app, add an option where people can also create and share their own quizzes. - Focus on the UI as well, not just on logic. ## Travel Advisor ![Travel advisor inspiration](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gqhum15rk41mtjbyx9l2.jpg) Do you love travelling? Well, many of us do. That's what makes this project helpful both for you and for the visitors of this app. At the very beginning level, you can make a basic map which shows information as cards when the user clicks on a location. For intermediates, I would recommend a large-scale app with good UI and UX, and of course, you can take inspiration from big names. You can make this app for the entire world, or for your own country. A food guide, hotel guide, and restaurant guide are some features you can add. You can use Google Maps, and of course, you can discover APIs according to the features you're going to use in your app. Now again, I'm advising you: just because you're making this project to practice React, don't skip the designing part! Try to be as professional as you can. ### Seems Very Easy? - Add a search bar. - Add lists like favorites, wishlist, visited, etc. - Add a rating system.
## Sports Website ![Sports app inspiration](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3w75mq9ngledi9ayy2vi.jpg) I would recommend this one only to those who are interested in sports. You can also make a website limited to one sport. Now, what to put on this website? See, there are plenty of options! You can make a sports news website, a score update website, or, if you really love sports, you can even start a blog. I don't think that I need to list sports APIs; there are dozens of them. For beginners who don't know where to find them, you can go to [Rapid API](https://rapidapi.com/). There are 30,000+ APIs available on this platform! If you want to make this project on a larger scale, you can try to mix up ideas. You can make a website which shows the latest scores, news and blogs, just like [ESPN](https://www.espn.in/). ### Seems Very Easy? - Add a comments feature. - Work more on UI and UX. ## Group Chat App ![Chat app UI](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wrttxebaq64agq1ouvjf.png) Before explaining anything else about this project, let me first tell you that this is not that boring, basic chat application idea. In fact, I've enhanced the idea. What my idea says is to create an application where people can form groups and chat in those groups, but wait, there's more! People can do a lot more in those groups, including voice calling, video calling, group calling, arranging or joining events, and many more. Now, as you can notice, this project is on a bigger scale, so I don't suggest this project to beginners. You can, of course, add more features according to your interests and creativity. There are some big names from which you can take ideas and inspiration. My favorite ones are Slack and Discord. In fact, Discord inspired this entire project. ### Seems Very Easy? - Allow your users to add other people as friends and to chat with them.
If yes, please leave a comment. If you made any of these projects, kindly share your website/GitHub repository link in comments so that it gives confidence to other developers. For articles like this, please inspire me just by following me and commenting on this article. If you want to support me, use my affiliate link to shop online. You just have to buy anything you want from Amazon using my links. You'll not pay anything extra, and I'll get some commission! Hope you got it. If yes, remember to use my link whenever you want to buy anything online. [Here's my link](https://www.amazon.in/b?_encoding=UTF8&tag=codinghashi0e-21&linkCode=ur2&linkId=acd955dd19d2293e62d98134b2eaec16&camp=3638&creative=24630&node=1375424031). If you faced any problems during this article, please comment below to solve that. > Thanks For Reading! Happy Coding!
jashanmago
994,551
Show 404s from Laravel's telescope directly from DB
Not terribly efficient, but if you want to quickly review the recent 404s, will do in a...
0
2022-02-19T13:41:52
https://dev.to/slothy/show-404s-from-laravels-telescope-directly-from-db-14lp
Not terribly efficient, but if you want to quickly review the recent 404s, it will do in a pinch:
```
SELECT json_extract(content, '$.uri') AS uri, created_at
FROM `telescope_entries`
WHERE `content` LIKE '%404%'
ORDER BY `sequence` DESC
LIMIT 100
```
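If your Telescope version records the HTTP status in the request entry's JSON (it is stored under `response_status` in recent versions — worth checking a row of your own `content` column first), a slightly more precise variant filters on that field instead of a fuzzy `LIKE`, which avoids matching "404" anywhere else in the payload:

```sql
SELECT json_extract(content, '$.uri') AS uri, created_at
FROM `telescope_entries`
WHERE `type` = 'request'
  AND json_extract(content, '$.response_status') = 404
ORDER BY `sequence` DESC
LIMIT 100
```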
slothy
994,788
Introduction to Data Structures and Algorithms.
Introduction A data structure is a particular way of organizing data in a computer so that...
0
2022-02-20T07:02:10
https://dev.to/evansvan/introduction-to-data-structures-and-algorithms-3d2a
python
# Introduction A data structure is a particular way of organizing data in a computer so that it can be used effectively. Data structures allow you to organize your data in a way that enables you to store collections of data, relate them, and perform operations on them accordingly. Python features both built-in data structures such as lists, sets, dictionaries and tuples, and user-defined ones such as linked lists, trees and graphs. ### Built-in Data Structures These, as the name suggests, come out of the box in Python, making them easy to use and programming easier. Let's break them down. #### Lists These are used to store data of the same type or different types in a sequential manner. They are indexed starting from 0, and the ending index is N-1 (if there are N elements). Lists also have negative indexing starting from -1, which allows you to access elements from last to first. Lists are created using square brackets; if you don't pass any elements you get an empty list. ##### List Manipulation and Functions List items can be accessed using their index values. Lists can also be modified using: 1. append() adds the element passed to it to the end of the list. 2. insert() takes an element and an index, adds the element at that index and increases the list size. 3. del, which is a Python keyword, deletes the element at the index provided. 4. pop() removes and returns the element at the given index. 5. remove() is used to remove an element by its value. 6. len() returns the length of the list. 7. count() finds the count of the value passed to it. 8. sort() and sorted() both sort the list values; sorted() returns a new sorted list, whereas sort() modifies the original list in place.
Examples
```
test = [1, 2, 'one', 'two']
test[0]             # returns the value at index 0, which is 1
test[-1]            # returns the last value in the list, 'two'
test.append(3)      # adds 3 to the end of the list: [1, 2, 'one', 'two', 3]
test.insert(1, 3)   # inserts 3 at the index provided: [1, 3, 2, 'one', 'two', 3]
del test[2]         # deletes the element at index 2: [1, 3, 'one', 'two', 3]
test.pop(1)         # pops the element at the provided index: [1, 'one', 'two', 3]
test.remove('two')  # removes the provided element: [1, 'one', 3]
len(test)           # returns the length of the list, which is 3 here
test.count(3)       # finds the count of the value passed, which is 1 here
```
#### Dictionaries
These are used to store key-value pairs; the data can be of the same or of different types. Dictionaries are created using curly braces {} or using the dict() function. ##### Dictionary Manipulation and Functions 1. To change dictionary values you have to use the keys. So you first access the key and then change the value accordingly. 2. To add values you simply add another key-value pair. 3. To delete items you use the pop() function. 4. To clear the dictionary you use the clear() function. 5. To access elements in the dictionary you use the key values. 6. You can also access the keys, values or all items of a dictionary using the keys(), values() and items() functions. Examples
```
mydict = {}                    # initializes an empty dict
mydict = {1: 'one', 2: 'two'}  # dict with two elements
mydict[1]                      # returns 'one'
mydict[3] = 'three'            # {1: 'one', 2: 'two', 3: 'three'}
mydict.pop(2)                  # removes key 2 and returns 'two'; the dict is now {1: 'one', 3: 'three'}
mydict.keys()                  # gets the keys: dict_keys([1, 3])
mydict.values()                # gets the values: dict_values(['one', 'three'])
mydict.items()                 # returns dict_items([(1, 'one'), (3, 'three')])
mydict.clear()                 # empties the dict: {}
```
#### Sets
Sets are an unordered collection of unique elements, meaning that even if data is repeated more than once the set will only keep one value. Sets are created using curly braces and just passing values. ##### Set Manipulation and Functions 1. add() is used to add an element; you just pass the value you want added. 2. union() is used to combine the data in two sets. Examples
```
myset = {1, 2, 3, 1}  # duplicates collapse, so the set is {1, 2, 3}
myset.add(4)          # the set is now {1, 2, 3, 4}
```
#### Tuples
Tuples are similar to lists but are immutable; this means that once created they cannot be modified. Just like a list, tuples can contain data of various types. Tuples are created using parentheses or using the tuple() function. ##### Tuple Manipulation and Functions 1. Accessing items is done the same way as for lists. 2. To add items you use the + operator, which concatenates another tuple onto it (producing a new tuple, since tuples are immutable). Examples
```
mytup = (1, 2, 3)
mytup[1]  # returns 2
```
#### Conclusion
This concludes the deep dive into the built-in data structures for Python. These are the primary structures used by most Python programmers to create and manage their data; they also have a variety of functions, apart from the ones I have described here, for manipulating data. In the next article we will deep dive into the user-defined structures.
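One pair from the list-function roundup above that the examples don't cover is `sort()` versus `sorted()`; a quick runnable sketch of the difference:

```python
nums = [3, 1, 2]

# sorted() returns a new sorted list and leaves the original untouched
result = sorted(nums)
print(result)  # [1, 2, 3]
print(nums)    # [3, 1, 2] -- unchanged

# sort() sorts the list in place and returns None
nums.sort()
print(nums)    # [1, 2, 3]
```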
evansvan
995,514
How to send mail using Nodemailer?
What is nodemailer? Nodemailer is a module for Node.js applications to allow easy as cake...
0
2022-02-20T18:34:30
https://dev.to/himanshumishir/how-to-send-mail-using-nodemailer-ol5
nodemailer, node, javascript, beginners
## What is nodemailer? **Nodemailer** is a module for Node.js applications to allow easy as cake email sending. The project got started back in 2010 when there was no sane option to send email messages; today it is the solution most Node.js users turn to by default. ### Why nodemailer? *Nodemailer features:*
- A single module with zero dependencies – code is easily auditable, as there are no dark corners
- Heavy focus on security; no-one likes RCE vulnerabilities
- Unicode support to use any characters, including emoji 💪
- Windows support – you can install it with npm on Windows just like any other module; there are no compiled dependencies. Use it hassle-free from Azure or from your Windows box
- HTML content, as well as a plain text alternative
- Attachments, including embedded image attachments for HTML content – your design does not get blocked
- Secure email delivery using TLS/STARTTLS
- Different transport methods in addition to the built-in SMTP support
- Sign messages with DKIM
- Custom plugin support for manipulating messages
- Sane OAuth2 authentication
- Proxies for SMTP connections
- ES6 code – no more unintentional memory leaks due to hoisted vars
- Autogenerated email test accounts from Ethereal.email

**Step by Step guide on how to send mail** 1. Open a terminal.
```
mkdir node-mail
cd node-mail
```
2. Create a server.js file.
```
touch server.js
```
3. Create a node app.
```
npm init -y
```
4. Install express and nodemailer.
```
npm install nodemailer express
```
5. Create transportOptions.js and message.js.
```
touch message.js transportOptions.js
```
6. Open message.js and export a function that returns the message object. Note the single quotes around the html value, so the double quotes in the inline style attributes don't terminate the string.
```
module.exports = (email) => {
  return {
    from: "test@mail.com",
    to: email,
    subject: "Sample mail",
    text: "Hello",
    html: '<h1 style="color:#183b56;font-size: 28px;text-align: center;">Hello User</h1> <p style="font-size: 18px;color: #1f2933;text-align: center;">Hello this is a test mail</p>',
  };
};
```
7. Open transportOptions.js and export an object here also.
``` module.exports = { host: "smtp.office365.com", port: "587", auth: { user: "test@mail.com", pass: "PASSWORD" }, secureConnection: true, tls: { ciphers: "SSLv3" }, }; ``` 8. Open server.js and create an express server. Don't forget to require nodemailer and register the JSON body parser, otherwise `req.body` is undefined. ``` const express = require('express'); const nodemailer = require('nodemailer'); const transportOptions = require('./transportOptions'); const message = require('./message'); const app = express(); app.use(express.json()); app.get('/send-mail', (req, res) => { const {email} = req.body; (async () => { try { const mailTransport = nodemailer.createTransport(transportOptions); await mailTransport.sendMail(message(email)); return res.status(200).json({ message: "Successfully sent mail!", }); } catch (err) { return res.status(400).json({ message: "Sorry, no such email exists", }); } })(); }); app.listen(3000, () => console.log('Example app is listening on port 3000.')); ``` 9. Save all files and test. 10. Please comment with any suggestions or feedback. 11. You can contact me at HimanshuMishra@duck.com
himanshumishir
995,599
Day 27/100
Almost bagged another section. Day 27/100 Language: JAVA Time Period Spent: 2 hrs Course...
0
2022-02-20T19:31:25
https://dev.to/lgomezzz/day-27100-318e
100daysofcode
Almost bagged another section. Day 27/100 **Language:** JAVA **Time Period Spent:** 2 hrs **Course Taken:** Udemy: Java Programming Masterclass - Tim Buchalka **Today's Learning:** - Linked List Continuation - Linked Lists Research **Additional Notes:** Today was spent on more traditional revising by reading sources. This allowed me to learn of my own accord by stepping away from the tutorial. I then continued with the tutorial and finished the section explaining Linked Lists.
lgomezzz
995,869
Additional OCR Language Packs
IronOCR supports 125 international languages, but only English is installed within IronOCR as...
0
2022-02-21T05:47:12
https://ironsoftware.com/csharp/ocr/languages/
--- canonical_url: https://ironsoftware.com/csharp/ocr/languages/ --- IronOCR supports 125 international languages, but only **English** is installed within IronOCR as standard. Additional Language packs may be easily added to your C#, VB or [ASP .NET](https://ironsoftware.com/csharp/ocr/use-case/asp-net-ocr/) project via Nuget or as Dlls which can be downloaded and added as project references. ## Code Examples ### International Language Example **C#:** ``` //PM> Install-Package IronOcr.Languages.ChineseSimplified using IronOcr; var Ocr = new IronTesseract(); Ocr.Language = OcrLanguage.ChineseSimplified; using (var input = new OcrInput()) { input.AddImage("img/chinese.gif"); // Add image filters if needed // Input.Deskew(); // Input.DeNoise(); var Result = Ocr.Read(input); string TestResult = Result.Text; // Console can't print unicode. Save to disk instead. Result.SaveAsTextFile("chinese.txt"); } ``` **VB:** ``` 'PM> Install-Package IronOcr.Languages.ChineseSimplified Imports IronOcr Private Ocr = New IronTesseract() Ocr.Language = OcrLanguage.ChineseSimplified Using input = New OcrInput() input.AddImage("img/chinese.gif") ' Add image filters if needed ' Input.Deskew(); ' Input.DeNoise(); Dim Result = Ocr.Read(input) Dim TestResult As String = Result.Text ' Console can't print unicode. Save to disk instead. 
Result.SaveAsTextFile("chinese.txt") End Using ``` ## Custom Language Example For using any Tesseract .Traineddata language file you have downloaded or trained yourself **C#:** ``` using IronOcr; var Ocr = new IronTesseract(); Ocr.UseCustomTesseractLanguageFile("custom_tesseract_files/custom.traineddata"); using (var Input = new OcrInput(@"images\image.png")) { var Result = Ocr.Read(Input); Console.WriteLine(Result.Text); } ``` **VB:** ``` Imports IronOcr Private Ocr = New IronTesseract() Ocr.UseCustomTesseractLanguageFile("custom_tesseract_files/custom.traineddata") Using Input = New OcrInput("images\image.png") Dim Result = Ocr.Read(Input) Console.WriteLine(Result.Text) End Using ``` ### Multiple Language Example More than one Language at a time. **C#:** ``` //PM> Install-Package IronOcr.Languages.Arabic using IronOcr; var Ocr = new IronTesseract(); Ocr.Language = OcrLanguage.English; Ocr.AddSecondaryLanguage(OcrLanguage.Arabic); // Add any number of languages using (var Input = new OcrInput(@"images\multi-lang.pdf")) { var Result = Ocr.Read(Input); Console.WriteLine(Result.Text); } ``` **VB:** ``` 'PM> Install-Package IronOcr.Languages.Arabic Imports IronOcr Private Ocr = New IronTesseract() Ocr.Language = OcrLanguage.English Ocr.AddSecondaryLanguage(OcrLanguage.Arabic) ' Add any number of languages Using Input = New OcrInput("images\multi-lang.pdf") Dim Result = Ocr.Read(Input) Console.WriteLine(Result.Text) End Using ``` ### Faster Language Example Dictionaries Tuned for Speed. Use 'Fast' Variant of any OcrLanguage. 
**C#:** ``` using IronOcr; var Ocr = new IronTesseract(); Ocr.Language = OcrLanguage.EnglishFast; using (var Input = new OcrInput(@"images\image.png")) { var Result = Ocr.Read(Input); Console.WriteLine(Result.Text); } ``` **VB:** ``` Imports IronOcr Private Ocr = New IronTesseract() Ocr.Language = OcrLanguage.EnglishFast Using Input = New OcrInput("images\image.png") Dim Result = Ocr.Read(Input) Console.WriteLine(Result.Text) End Using ``` ### Higher Accuracy Detail Language Example Dictionaries tuned for accuracy but much slower results. Use 'Best' Variant of any OcrLanguage. **C#:** ``` //PM> Install-Package IronOcr.Languages.French using IronOcr; var Ocr = new IronTesseract(); Ocr.Language = OcrLanguage.FrenchBest; using (var Input = new OcrInput(@"images\image.png")) { var Result = Ocr.Read(Input); Console.WriteLine(Result.Text); } ``` **VB:** ``` 'PM> Install-Package IronOcr.Languages.French Imports IronOcr Private Ocr = New IronTesseract() Ocr.Language = OcrLanguage.FrenchBest Using Input = New OcrInput("images\image.png") Dim Result = Ocr.Read(Input) Console.WriteLine(Result.Text) End Using ``` ## How To Install OCR Language Packs Additional OCR Language packs are available for download below. Either - Install the Nuget package. [Search Nuget for IronOcr Languages](https://www.nuget.org/packages?q=ironocr.languages) - Or download the "ocrdata" file and add it to your .NET project in any folder you like. Set `CopyToOutputDirectory = CopyIfNewer` ## Download OCR Language Packs Download OCR Language Packs [directly from the IronOCR Website](https://ironsoftware.com/csharp/ocr/languages/#download-ocr-language-packs) itself. ## Help If the language you are looking to read is not available in the list above please [get in touch with us](https://ironsoftware.com/contact-us/). Many other languages are available on request. 
Priority on production resources is given to IronOCR licensees, so please also consider [licensing](https://ironsoftware.com/csharp/ocr/licensing) IronOCR for access to your desired language pack. ---
ironsoftware
996,106
SelectorsHub - an awesome XPath and CSS Plugin
I had to create my own Chrome Extension Selenideium Element Inspector, just to realize there is...
0
2022-02-21T14:49:23
https://mszeles.com/selectorshub-an-awesome-xpath-and-css-plugin
selenium, selectorshub, selenideiumelementinspector, testautomation
--- title: SelectorsHub - an awesome XPath and CSS Plugin published: true date: 2022-02-21 08:22:45 UTC tags: selenium, selectorshub, selenideiumelementinspector, testautomation canonical_url: https://mszeles.com/selectorshub-an-awesome-xpath-and-css-plugin --- ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hx3r5jetjodvohmuow6q.png) I had to create my own Chrome Extension [Selenideium Element Inspector](https://dev.to/mszeles/save-dozens-of-minutes-daily-during-writing-selenide-and-selenium-e2e-automated-tests-using-the-selenideium-element-inspector-4736), just to realize there is already a very similar product on the market. Not long after I posted about the plugin in the [Test Automation LinkedIn group](https://www.linkedin.com/groups/86204/), I got a comment letting me know there is an extension called [SelectorsHub](https://chrome.google.com/webstore/detail/selectorshub-xpath-plugin/ndgimibanhlabgdgjcpbbndiehljcpfh) developed by [Sanjay Kumar](https://www.linkedin.com/in/tetris/creator-sanjaykumar) which already covers the functionality I have implemented. ## SelectorsHub I immediately headed to the Chrome Web Store to check out the plugin. I was amazed. SelectorsHub has **more than 80,000 users and a 5-star rating from 12k reviewers.** SelectorsHub has all the bells and whistles that you need: - A great feature set - A nice GUI - Cross-browser support Let's look at the features of SelectorsHub. It can **autogenerate**: 1. Axes-based XPath, relative XPath, index-based XPath & absolute XPath 2. Unique relative CSS Selectors 3. All possible selectors for the inspected element 4. JS Path and jQuery It **supports**: 1. XPath and CSS selector error handling 2. Shadow DOM and nested shadow DOM 3. iframe and nested iframe 4. SVG elements 5. Dynamic elements 6. Verification of multiple XPaths and a complete locators page 7. Suggestions for which exception your automation will throw for the respective element
Sanjay Kumar even made an [online training](https://selectorshub.com/selectorshub-training/) about the tool and XPath and CSS selectors. The **supported browsers are Chrome, Safari, Firefox, Edge, Opera, Brave and Chromium.** That is what I call full support of different browsers. **I highly recommend checking out the plugin if you would like to learn the ins and outs of CSS and XPath selectors.** ## Is there any reason for the existence of Selenideium Element Inspector? **Of course.** While both tools work with CSS and XPath, **Selenideium Element Inspector is highly tailored towards productivity**. First of all, I love **simplicity**. Selenideium Element Inspector is so simple it does not even have a GUI, but one is not needed at all as it uses Chrome DevTools' Console. **Selenideium Element Inspector helps you to become as productive as possible**. You only need 3 steps while working with a web element in the DOM: 1. Click the element 2. Select the most stable selector from the console 3. Copy-paste the full line of code into your favourite IDE That's it. Very simple, don't you think? In addition to being very effective, Selenideium Element Inspector is a **knowledge-sharing project**. By following [my series](https://mszeles.com/series/selenideium-inspector) you can get **in-depth knowledge of how to create your very own Chrome extension and how to navigate the Document Object Model (DOM) via Javascript.** And finally, **Selenideium Element Inspector is an open-source project**, with all the advantages that brings. Check it out on [GitHub](https://github.com/mszeles/selenideium-element-inspector). And now please go, download the plugins, and share your feedback!
😊 **Download** [SelectorsHub](https://chrome.google.com/webstore/detail/selectorshub-xpath-plugin/ndgimibanhlabgdgjcpbbndiehljcpfh) **Download** [Selenideium Element Inspector](https://chrome.google.com/webstore/detail/selenideium-element-inspe/mgfhljklijclnfeglclagdeoiknnmnda) P.S.: **Support us by sharing this article in case you found SelectorsHub or Selenideium Element Inspector useful.** 📚 **Join the Selenide community** on [LinkedIn](https://www.linkedin.com/groups/9154550/)! ✌
mszeles
996,180
My 9-5 just became a Unicorn. These are the top 5 features that helped us get there which every app should have
Three years ago I joined Flipdish as their first Product Manager. Back then it was a startup that had...
0
2022-02-21T12:06:58
https://chrisdermody.com/flipdish-just-reached-unicorn-status-ive-learned-a-lot-here-are-5-features-that-ill-be-building-in-my-future-products-from-the-start/
product, webdev, career, startup
Three years ago I joined [Flipdish](https://flipdish.com) as their first Product Manager. Back then it was a startup that had found product market fit and just needed to execute quickly and grow. I had product experience, but I think the kicker that got me the job was that I built my own app to track things like tips and delivery addresses when I worked as a fast food delivery driver, so I could hold my own technically too. I don’t code in Flipdish (officially) but I can’t keep my hands out of VSCode, so over Christmas I decided to build a [co-working app](https://reservadesk.com/?utm_source=dev.to&utm_medium=forum&utm_campaign=flipdish) with my fiancée. She did most of the backend (node/Express/Firestore) and I worked on the marketing site and web app (Nuxt/Vue3). Without realising it, I found myself building all the features that I saw make a huge difference to Flipdish’s success. I’ve never seen anyone write about these in a clear, easy list, so here we go. I wanted this list to be tactical, something anyone can take and build out a solid product with, so below are 5 things that I’ll be building from the start in all my future projects. ## 1. Audit logs Being able to see an “activity log” of actions taken in a given account is immensely powerful. In Flipdish it’s used daily by clients and support staff to debug any queries that may arise, or just to understand general activity on a given account. For [Reservadesk](https://reservadesk.com/?utm_source=dev.to&utm_medium=forum&utm_campaign=flipdish), it’s proven hugely helpful early on not only for understanding account activity, but also simply for quick development. I can see when I make changes and the before/after of what they were. I’ve written a separate post about best [practices for audit logging](https://chrisdermody.com/best-practices-for-audit-logging-in-a-saas-business-app/?utm_source=dev.to&utm_medium=forum&utm_campaign=flipdish) if you’d like to learn more about what makes a useful audit log. ## 2.
Granular user permissions When trying to figure out how to configure your user permissions, it’s easy to do something simple, like giving yourself and your staff an “admin” permission (god mode, basically), and your users something general like “guest” or “teammate”. This will bite you later on, and is a tricky thing to unwind. From now on, all of my projects will have granular user permissions from the outset. It has minimal up-front cost, and gives you flexibility down the road. You can always “bundle” your granular permissions together so that it appears in the UI as a general thing, but you’ll have that flexibility down the road should you need to sell to enterprise or grow into different markets/regions. In Flipdish, since we’ve exploded across the UK, Europe and North America, we have country managers, support staff, success staff, contractors etc, and they all need specific permissions to different things. Had we not had granular user permissions from the outset, this would have inevitably slowed us down and hindered growth. ## 3. Image manipulation API It’s pretty rare that the projects you build won’t have images in some form. Before joining Flipdish, I think I was simply serving huge images in my sites/apps with minimal optimisation. Rookie error. I’ve since discovered a whole new world of intelligent image optimisation via services like Imgix or imagekit.io. They both offer what is effectively an API layer on top of your images, meaning you can build your UI and simply pass in query parameters to “transform” the image based on what the UI needs at that particular time. For instance, let’s say your user uploads a huge 1000px x 1000px image, which will only be used as a small icon. You don’t want to load that massive image on a mobile device. With the image manipulation API you can request that image, but at a smaller size, like 40px x 40px. The service will transform it on the fly for you. It’s very impressive.
Once you start using this, you’ll never go back in my opinion. ## 4. Localisation I can see you rolling your eyes at this one, but hear me out. You don’t need to localise your app into multiple languages from the outset, but building in i18n from the start is something I’ll do religiously from now on, for two reasons. 1. The cost is low. I still dev in English, but then quickly create the strings in localise.biz (or similar service) before finally building a release for production. If sometime down the line I see that the app is popular in a given region that has other languages, it’s fairly trivial to get your app translated and you’re away. 2. You can let your customer define their own locale customisations. This gives your app a layer of customisation that’s relatively simple to achieve, but can be a real differentiator. Users can sometimes just want a certain word to say something else. In [reservadesk](https://reservadesk.com/?utm_source=dev.to&utm_medium=forum&utm_campaign=flipdish), we built a system where our users can define certain words themselves, like the main call to action to reserve a desk can say “book now” or “reserve” - whatever the customer wants. In Flipdish, we have customisation requests every single day from clients and our ability to accommodate their requests has been central to keeping them as clients. ## 5. Referral scheme This one doesn’t necessarily need much dev work initially, but it’s something I’ll be keeping in mind for all future projects. Letting your customers who love your app be rewarded for sending others your way is an absolute win-win. In Flipdish we’ve had partners and affiliates as a key strategic element of our growth plans across the globe. Having a robust system for rewarding your best referrers in a systematic way will absolutely help you down the line. ## Bonus - your API is an asset One huge thing I’ve learned in Flipdish is that your API isn’t just something you need for your UI to work. 
If you go about it right, building an open, public, well-documented API can make magic happen. You'll find that companies start building things on top of your APIs for clients who need it, making them extremely sticky to your platform. ## Extra bonus - we're hiring - remotely My fiancée and I are looking for a house, which, as I'm sure you know, is not cheap, so we need some sweet, sweet referral bonuses. If you like the sound of working somewhere like Flipdish, [we're hiring](https://grnh.se/f6341a92teu). Our stack is primarily C# on backend, React/React Native frontend, as well as native iOS and Android development in Swift & Kotlin. I'm Chris Dermody on [Linkedin](https://www.linkedin.com/in/chrisdermody/), or [cderm](https://twitter.com/cderm) on Twitter. I'm more than happy to answer any questions about any roles, or get you in touch with the person who can answer them if I can't 🙂
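The image manipulation idea from point 3 above boils down to building URLs with transformation query parameters. A rough sketch in JavaScript (the `w`/`h`/`fit` parameter names follow Imgix-style conventions, and the base URL is made up; check your provider's docs for the exact names):

```javascript
// Builds a resized-image URL by appending transformation query parameters.
// The domain and parameter names here are illustrative, not a real account.
function thumbnailUrl(baseUrl, width, height) {
  const url = new URL(baseUrl);
  url.searchParams.set('w', String(width));
  url.searchParams.set('h', String(height));
  url.searchParams.set('fit', 'crop');
  return url.toString();
}

// A 1000x1000 upload can be requested as a 40x40 icon on the fly:
console.log(thumbnailUrl('https://images.example.com/avatar.png', 40, 40));
// → https://images.example.com/avatar.png?w=40&h=40&fit=crop
```

The UI then just asks for the size it needs at render time, instead of shipping the original upload to every device.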
chipd
996,259
3- Why Should I Write Tests While Developing Software?
Hi everyone. Here I am again with the 4th article of the Advanced Software Development series. If...
0
2022-02-21T13:01:36
https://dev.to/hamitseyrek/why-should-i-write-tests-while-developing-software-2pfm
testing, software, tdd, sdlc
Hi everyone. Here I am again with the 4th article of the Advanced Software Development series. If you've come this far by reading the series, great. If you came directly to this article, I suggest you read the [0- Advanced Software Development](https://dev.to/hamitseyrek/0-advanced-software-development-23ak) article first. Now, on to the subject of testing: I think we can start by looking for an answer to the question, "_Why is testing so important when developing software?_" It is impossible not to make mistakes while writing code. We all make mistakes. While some of them are insignificant, others can be vital for the software. In such situations, our mistakes may not always be tolerated by our team leader, boss or customer. After writing our code and releasing our product, we cannot catch the blind spots through the manual tests we do individually, because we test our product from the same perspective we had while writing it. We are therefore likely to miss the same blind spots again. From a professional point of view, the majority of corporate companies now use advanced project management systems. All of these project management systems, which pay attention to the software development life cycle, use software tests. The Software Testing Lifecycle (STLC) is an integral part of the Software Development Lifecycle (SDLC). This being the case, writing test code has become mandatory for us. Otherwise, it will be very difficult to find a job in a large corporate company, and our advanced software development journey will end before it begins. Tested software enables you to provide customers with quality software that requires less maintenance, making you more reliable and professional. A continuous verification and validation process is required to deliver a great product.
Tests measure the performance and efficiency of the system/application, helping to ensure that the software is compatible with all technical and business parameters. There are many different testing methods you can use to make sure changes to your code are working as expected. We can divide these into two main groups: manual and automated tests. Manual testing: done by using the software personally. This is very costly as it requires someone to devote time to it. At the same time, since the tester is a human, there is a high probability of mistakes. Automated testing: done by running pre-written test code. Here, the tester is the machine itself, which verifies that all conditions written in the test function are met. Automated testing is an important component of Continuous Integration (CI) and Continuous Delivery (CD) in DevOps. It is also an important part of the QA (Quality Assurance) process. ### Test Types ## Unit Tests These are low-level tests used to test the functionality of classes, methods, or functions we create in software. Unit tests can be easily automated by CI (Continuous Integration) servers. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rng9x0scqgfzj7dgvwyi.png) ## Integration Tests This is the type of test where we check how well different modules or services used by our application work together. Since this test type requires interacting with more than one module of the software to succeed, it is more costly to run than unit testing. For example, integration tests can test whether the interaction of some modules with the database works correctly, or whether microservices that are separate from each other work together as expected. ## Functional Tests Functional Tests and Integration Tests are sometimes confused because both work with multiple modules. Functional Tests focus on the business needs of the application, looking only at the output of an operation.
Unlike Integration Tests, they do not question the relationships between modules; it is enough for the output to be correct for the test to succeed. ## End-to-End Tests This type of test imitates user behavior. It is used to verify whether various user flows work as expected. End-to-End Tests are more costly to implement and more complex in structure, so it is generally recommended to use them only at a few points of vital importance in the software. For the rest of the software, it makes more sense to use the less costly Unit Tests and Integration Tests. ## Acceptance Tests As with End-to-End Tests, user behaviors are emulated. Acceptance Tests are used to verify whether the application meets all desired business requirements. If not all requirements are met, the test fails. ## Performance Tests As the name suggests, these are used to measure performance. They often measure the response time of the system when executing a large number of requests, or the behavior of the system under heavy load. These tests are costly as serious server power is required. However, it is very important to run them after major changes to the software. ## Smoke Test Smoke tests exercise only the main features of the application, so they are faster. They do not need to be written for each module; it is enough to focus on the main features. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5cipjqk2d09gfp96fpyw.png) Whatever programming language you use, there are many testing options for it. For example, there are Appium, XCTest, Calabash, Detox, OCMock and EarlGrey test frameworks for iOS, while PHPUnit is used for PHP, Mocha for JavaScript and RSpec for Ruby. You can find the best fit by doing research in the developer communities. Tests can be run with the help of a compiler or directly from the command line, or they can be automated with CI in DevOps pipelines.
(_The CI pipeline consists of Build-Test-Deploy steps. If the test phase, which is the 2nd step, does not pass, the application is never deployed._) ### Test Processes in Software We have mentioned code quality in every article of the Advanced Software Development series. Now let's add one more property to the list: quality code is also error-free code. To ensure this, we need to test the software. There are many approaches to writing software tests: TDD, BDD, DDD, RDD, FDD, ATDD, etc. I will briefly touch on the most commonly used approaches, TDD and BDD. But my advice is to read at least 1-2 articles about the other approaches as well. Even if you are going to use TDD, it is useful to know why you should choose it and how it differs from the others. ## TDD - Test Driven Development This approach, popularized in 2003, is the most widely used. TDD is not a test type; it is more a way of doing software testing, a software development tactic. The TDD approach requires that test code be written before the production code. TDD helps the software come out with a simpler structure, as it requires the expectations to be determined beforehand so that the tests can be written in advance. This makes the quality of the software better. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gfabkff2cf0nohxfoq2o.png) **In the TDD approach:** **1- [Write Test]** Write a test for a requirement. **2- [Fail]** The test is run. It fails because there is no function/method in the software that does this yet. **3- [Refactor]** The necessary code is written/edited to make the test pass. The test now passes. The [Write Test] - [Fail] - [Refactor] steps continue in a loop: the application goes through a continuous refactoring process until the software meets all the customer's demands.
## BDD - Behaviour Driven Development In this approach, which was put forward in 2009, an attempt was made to make the TDD approach easier. As in TDD, tests are written in advance in the BDD approach too. Behavior Driven Development (BDD), as the name suggests, is a behavior-based software development method. Business analysts and developers determine the required behavior of the software through meetings. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4a80zq4egw0933zgb3em.png) In the BDD approach, scenarios are established using three keywords: "Given", "When", "Then". These scenarios are written in colloquial language, and writing tests in spoken language is one of BDD's main advantages: members who join the team later can understand these tests much more easily. ### Conclusion After this brief overview of the testing approaches, you now know exactly what the "Test" folders in the framework you are using were created for. Just knowing is not enough, though; it is also necessary to put it to use. If you want to find a job in corporate companies that manage their projects with Agile, the code you write should be testable code. The easiest way to achieve this is to use one of the testing approaches. While the importance of writing test code is sometimes difficult to appreciate, it should be measured not just by the cost and time spent, but by the great value it brings. In the [Advanced Software Development](https://dev.to/hamitseyrek/0-advanced-software-development-23ak) series: Previous article: "[2- Why/How to Write a Comment? What is Clean Code?](https://dev.to/hamitseyrek/whyhow-to-write-comment-what-is-cleancode-5hin)". Next article: "[4- What is DevOps? What are the benefits? What tools does it use?](https://dev.to/hamitseyrek/4-what-is-devops-what-are-the-benefits-what-tools-does-it-use-27fl)". Don't forget to like if you think it's useful :) _Always get better..._
hamitseyrek
996,270
Bevy Minesweeper: Assets
Check the repository We have great board configuration with our BoardOptions resource but we hard...
16,975
2022-02-21T15:28:44
https://dev.to/qongzi/bevy-minesweeper-part-9-534e
rust, gamedev, tutorial, bevy
> [Check the repository](https://gitlab.com/qonfucius/minesweeper-tutorial) We have a great board configuration with our `BoardOptions` resource, but we hard coded every color, texture and font. Let's create a new configuration resource in `board_assets.rs` for our `board_plugin`: ```rust // board_assets.rs use bevy::prelude::*; use bevy::render::texture::DEFAULT_IMAGE_HANDLE; /// Material of a `Sprite` with a texture and color #[derive(Debug, Clone)] pub struct SpriteMaterial { pub color: Color, pub texture: Handle<Image>, } impl Default for SpriteMaterial { fn default() -> Self { Self { color: Color::WHITE, texture: DEFAULT_IMAGE_HANDLE.typed(), } } } /// Assets for the board. Must be used as a resource. /// /// Use the loader for partial setup #[derive(Debug, Clone)] pub struct BoardAssets { /// Label pub label: String, /// pub board_material: SpriteMaterial, /// pub tile_material: SpriteMaterial, /// pub covered_tile_material: SpriteMaterial, /// pub bomb_counter_font: Handle<Font>, /// pub bomb_counter_colors: Vec<Color>, /// pub flag_material: SpriteMaterial, /// pub bomb_material: SpriteMaterial, } impl BoardAssets { /// Default bomb counter color set pub fn default_colors() -> Vec<Color> { vec![ Color::WHITE, Color::GREEN, Color::YELLOW, Color::ORANGE, Color::PURPLE, ] } /// Safely retrieves the color matching a bomb counter pub fn bomb_counter_color(&self, counter: u8) -> Color { let counter = counter.saturating_sub(1) as usize; match self.bomb_counter_colors.get(counter) { Some(c) => *c, None => match self.bomb_counter_colors.last() { None => Color::WHITE, Some(c) => *c, }, } } } ``` Declare the module in `resources/mod.rs`: ```rust // mod.rs // .. pub use board_assets::*; mod board_assets; ``` This new resource will store all the visual data we need and allow customization. We also added a `bomb_counter_colors` field to customize the bomb neighbor text colors and made a utility `bomb_counter_color` method to retrieve it.
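The fallback behavior of `bomb_counter_color` is worth spelling out: counter `1` maps to index `0` via `saturating_sub`, and any counter beyond the palette falls back to the last color. A plain-Rust sketch of the same lookup (using string names instead of bevy's `Color` type so it runs without the engine):

```rust
// Mirrors the lookup logic of BoardAssets::bomb_counter_color, with
// &'static str standing in for bevy's Color.
fn bomb_counter_color(colors: &[&'static str], counter: u8) -> &'static str {
    // Counter 1 -> index 0; counter 0 also clamps safely to index 0.
    let index = counter.saturating_sub(1) as usize;
    match colors.get(index) {
        Some(c) => *c,
        // Out-of-range counters reuse the last color; an empty palette
        // falls back to white, like the real method.
        None => colors.last().copied().unwrap_or("white"),
    }
}

fn main() {
    let colors = ["white", "green", "yellow", "orange", "purple"];
    assert_eq!(bomb_counter_color(&colors, 1), "white");
    assert_eq!(bomb_counter_color(&colors, 3), "yellow");
    // A tile with 8 neighboring bombs still gets a color.
    assert_eq!(bomb_counter_color(&colors, 8), "purple");
    println!("ok");
}
```

Thanks to `saturating_sub` the method never panics, whatever counter the tile map produces.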
> What is this `DEFAULT_IMAGE_HANDLE` constant value? We copy the way `SpriteBundle` handles its default texture using the same hard coded `Handle<Image>` for a white texture. Now that we have the option for custom textures for everything from the tiles to the board background, we will enable every `texture` field we omitted. ## Plugin Let's use our new resource in our `create_board` system in our `board_plugin`: ```diff // lib.rs + use resources::BoardAssets; // .. pub fn create_board( mut commands: Commands, board_options: Option<Res<BoardOptions>>, + board_assets: Res<BoardAssets>, window: Res<WindowDescriptor>, - asset_server: Res<AssetServer>, ) { // .. - let font = asset_server.load("fonts/pixeled.ttf"); - let bomb_image = asset_server.load("sprites/bomb.png"); // .. // Board background sprite: parent .spawn_bundle(SpriteBundle { sprite: Sprite { - color: Color::WHITE + color: board_assets.board_material.color, custom_size: Some(board_size), ..Default::default() }, + texture: board_assets.board_material.texture.clone(), transform: Transform::from_xyz(board_size.x / 2., board_size.y / 2., 0.), ..Default::default() }) // .. Self::spawn_tiles( parent, &tile_map, tile_size, options.tile_padding, - Color::GRAY, - bomb_image, - font, - Color::DARK_GRAY, + &board_assets, &mut covered_tiles, &mut safe_start, ); // .. } ``` We remove the `asset_server` argument. > Why is `board_assets` not optional? Making it optional is not easy because bevy doesn't provide a default font `Handle`. It would require advanced engine manipulation like using `FromWorld` and `Assets` implementations and using a hard coded font or font path. > But `Handle` implements `Default` Indeed, but then either the app will panic when trying to print out text or nothing will show up.
----

Our `spawn_tiles` and `bomb_count_text_bundle` functions should be cleaned up as well:

```diff
// lib.rs
fn spawn_tiles(
    parent: &mut ChildBuilder,
    tile_map: &TileMap,
    size: f32,
    padding: f32,
-   color: Color,
-   bomb_image: Handle<Image>,
-   font: Handle<Font>,
-   covered_tile_color: Color,
+   board_assets: &BoardAssets,
    covered_tiles: &mut HashMap<Coordinates, Entity>,
    safe_start_entity: &mut Option<Entity>,
) {
    // ..
        // Tile sprite
        cmd.insert_bundle(SpriteBundle {
            sprite: Sprite {
-               color,
+               color: board_assets.tile_material.color,
                custom_size: Some(Vec2::splat(size - padding)),
                ..Default::default()
            },
            transform: Transform::from_xyz(
                (x as f32 * size) + (size / 2.),
                (y as f32 * size) + (size / 2.),
                1.,
            ),
+           texture: board_assets.tile_material.texture.clone(),
            ..Default::default()
        })
    // ..
        // Tile Cover
        let entity = parent
            .spawn_bundle(SpriteBundle {
                sprite: Sprite {
                    custom_size: Some(Vec2::splat(size - padding)),
-                   color: covered_tile_color,
+                   color: board_assets.covered_tile_material.color,
                    ..Default::default()
                },
+               texture: board_assets.covered_tile_material.texture.clone(),
                transform: Transform::from_xyz(0., 0., 2.),
                ..Default::default()
            })
            .insert(Name::new("Tile Cover"))
            .id();
    // ..
        // Bomb neighbor text
        parent.spawn_bundle(Self::bomb_count_text_bundle(
            *v,
-           font.clone(),
+           board_assets,
            size - padding,
        ));
}

fn bomb_count_text_bundle(
    count: u8,
-   font: Handle<Font>,
+   board_assets: &BoardAssets,
    size: f32,
) -> Text2dBundle {
-   // We retrieve the text and the correct color
-   let (text, color) = (
-       count.to_string(),
-       match count {
-           1 => Color::WHITE,
-           2 => Color::GREEN,
-           3 => Color::YELLOW,
-           4 => Color::ORANGE,
-           _ => Color::PURPLE,
-       },
-   );
+   // We retrieve the correct color
+   let color = board_assets.bomb_counter_color(count);
    // We generate a text bundle
    Text2dBundle {
        text: Text {
            sections: vec![TextSection {
-               value: text,
+               value: count.to_string(),
                style: TextStyle {
                    color,
-                   font,
+                   font: board_assets.bomb_counter_font.clone(),
                    font_size: size,
                },
            }],
    // ..
```

We now use only our `BoardAssets` resource for every visual element of the board.

## App

We need to set up a `BoardAssets` resource, but we have an issue: loading our assets must happen in a *system*, here a *startup system*, and it must run **before** our plugin launches its `create_board` system, or that system will panic. So let's prevent this situation by starting our state at `Out`:

```diff
// main.rs
fn main() {
    // ..
-   .add_state(AppState::InGame)
+   .add_state(AppState::Out)
    // ..
}
```

Then we register a `setup_board` startup system and move the previous board options setup into it:

```diff
// main.rs
fn main() {
    // ..
-   app.insert_resource(BoardOptions {
-       map_size: (20, 20),
-       bomb_count: 40,
-       tile_padding: 3.0,
-       safe_start: true,
-       ..Default::default()
-   })
    // ..
+   .add_startup_system(setup_board)
    // ..
}
```

We can now declare the new system:

```rust
// main.rs
use board_plugin::resources::{BoardAssets, SpriteMaterial};
// ..

fn setup_board(
    mut commands: Commands,
    mut state: ResMut<State<AppState>>,
    asset_server: Res<AssetServer>,
) {
    // Board plugin options
    commands.insert_resource(BoardOptions {
        map_size: (20, 20),
        bomb_count: 40,
        tile_padding: 1.,
        safe_start: true,
        ..Default::default()
    });
    // Board assets
    commands.insert_resource(BoardAssets {
        label: "Default".to_string(),
        board_material: SpriteMaterial {
            color: Color::WHITE,
            ..Default::default()
        },
        tile_material: SpriteMaterial {
            color: Color::DARK_GRAY,
            ..Default::default()
        },
        covered_tile_material: SpriteMaterial {
            color: Color::GRAY,
            ..Default::default()
        },
        bomb_counter_font: asset_server.load("fonts/pixeled.ttf"),
        bomb_counter_colors: BoardAssets::default_colors(),
        flag_material: SpriteMaterial {
            texture: asset_server.load("sprites/flag.png"),
            color: Color::WHITE,
        },
        bomb_material: SpriteMaterial {
            texture: asset_server.load("sprites/bomb.png"),
            color: Color::WHITE,
        },
    });
    // Plugin activation
    state.set(AppState::InGame).unwrap();
}
```

Using the generic state system we set up in the [previous
part](./8_states.md), we control when the plugin launches. Here, we want it to launch *after* we have loaded our assets and set up the `BoardAssets` resource. That's why we first set our *state* to `Out` and only switch it to `InGame` once our assets are ready.

Our plugin is now completely modular with zero hard coded values: everything from the board size to the tile colors can be customized.

> Can we edit the theme at runtime?

Yes! The `BoardAssets` resource is available to every system, but everything that is not a `Handle` won't be applied until the next board generation. For a more dynamic system you can check my plugin [bevy_sprite_material](https://github.com/ManevilleF/bevy_sprite_material).

---

[Previous Chapter](https://dev.to/qongzi/bevy-minesweeper-part-8-4apn) -- [Next Chapter](https://dev.to/qongzi/bevy-minesweeper-part-10-5hie)

---

Author: Félix de Maneville

Follow me on [Twitter](https://twitter.com/ManevilleF)

> Published by [Qongzi](https://qongzi.com)
qongzi
996,795
Make a Shuffling "Fun Facts" Game About Presidents in AR
Today is President's Day 📜✒️ (enjoy your day off!). We created a shuffling game to share a fun fact...
0
2022-02-21T20:54:56
https://dev.to/echo3d/make-a-shuffling-fun-facts-game-about-presidents-in-ar-4nd9
unity3d, gamedev, 3d, augmentedreality
Today is President's Day 📜✒️ (enjoy your day off!). We created a shuffling game to share a fun fact about each of the top 5 historically popular presidents.

# Register

Don't have an API key? Make sure to register for FREE at [echo3D](https://console.echo3d.co/#/pages/contentmanager).

# Setup

- Clone the [repo](https://github.com/echo3Dco/Unity-PresidentsDay-echo3D-example)
- [Install](https://docs.echo3d.co/unity/installation) the echo3D Unity SDK
- Download the 3D models from the Models folder in the project
- Go to the echo3D console, [click "Add to Cloud"](https://docs.echo3d.co/quickstart/add-a-3d-model), and upload the models
- Open the "scene" scene
- [Set the API key](https://docs.echo3d.co/quickstart/access-the-console) on the echo3D object in the Hierarchy using the Inspector
- Play in Unity or [build and run the AR application](https://docs.echo3d.co/unity/adding-ar-capabilities#4-build-and-run-the-ar-application)

# Learn more

Refer to our [documentation](https://docs.echo3d.co/unity/) to learn more about how to use Unity and echo3D.

# Support

Feel free to reach out at support@echo3D.co or join our support channel on Slack.

# Screenshots

![Roosevelt.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1645476508229/1QQk_Slp2.png)

![Lincoln.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1645476512095/WkhNZUpGC.png)

![JFK.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1645476522714/OFWZBmRVa.png)

---

> *echo3D ([www.echo3D.co](https://www.echo3D.co); Techstars '19) is a cloud platform for 3D/AR/VR that provides tools and network infrastructure to help developers & companies quickly build and deploy 3D apps, games, and content.*

![echo3D - Logo 2021 - Background - Round Edges.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1645476555896/ijTTitjLn.png)
_echo3d_
1,135,717
The Why and How of Team Retrospectives
In my time as a developer, I've seen all my teams adopt practices from the Agile methodology. This...
0
2022-07-25T18:21:39
https://dev.to/devsatasurion/the-why-and-how-of-team-retrospectives-58j0
productivity, agile, programming
In my time as a developer, I've seen all my teams adopt practices from the Agile methodology. This included the adoption of many "agile" meetings - standup, backlog grooming, sprint planning, and retro. When I first heard of the idea of having a retro meeting, I was hesitant. I remember thinking "What is the point of blocking out this time that could be spent coding?" I quickly changed my opinion after attending a few retros. I learned how important it is to create a dedicated space for team feedback, and how team health can suffer when this space is not allocated.

### :thinking: What is a Retro?

A retrospective is a team-wide meeting in which everyone is free to give feedback on how the team has been running since the last retrospective. There is usually dedicated time to discuss what went well and what could improve. At the end of these discussions, team members take on action items to improve the team's flow. These action items can be reviewed at the beginning of the next retrospective.

:exclamation: Before I start discussing my thoughts, I want to note that there is no "correct" way to run a retro. I've seen different teams use different styles successfully depending on their communication styles. Flexibility is key here.

## The Importance of Feedback and Vulnerability

Let's examine the "why" of a retro. Why do we implement team feedback sessions, and what are the important learnings that come from them? In the article ["How to Build Confidence About Showing Vulnerability"](https://hbr.org/2022/07/how-to-build-confidence-about-showing-vulnerability), the author Dan Cable notes a few benefits that come from open team communication. The first is that open discussions **normalize and encourage learning**. If we approach retrospectives with a "what can we improve on" mentality, people will be more likely to say what is and isn't working for them.
People will feel more comfortable exploring alternatives in order to find the best solution to a team problem. Another benefit Cable notes of displaying vulnerability in a leadership role is **better team engagement** in other aspects of work. If the team feels heard, they are more likely to be excited about their work and to cooperate better.

## :running: How Should Retros be Run?

#### 1. Plan your goals

Before running a retro, it is important to do some preparation to make the best use of your team's time. Start with a set of goals in mind to better tailor the retrospective to your team's needs. In my retrospectives I like to check our team "health", encourage everyone to give feedback, and inspire teammates to take ownership of tasks. These goals shape the way I run and participate in retrospectives, as I want to make sure everyone feels heard and included.

#### 2. Schedule!

Retros need to be frequent enough that people remember what they want to talk about. If over a month passes, team members may forget important items or feel that they cannot bring up anything specific because the retro spans such a broad stretch of time. I recommend scheduling a one-hour retro every two weeks. This frequency avoids too long a gap between meetings without draining too much time away from development.

#### 3. Make sure EVERYONE is heard

One of the reasons I like retros so much is that everyone is given a chance to participate and give feedback. Every team member should be encouraged to share their thoughts on how the team has been doing (both positive feedback and constructive criticism). I have found that in this situation some team members may feel shy, so it is important to encourage everyone to give honest feedback without any repercussions. If you are a manager leading the retro, it is also important to prepare YOURSELF for feedback and not become defensive if someone says something critical.

#### 4. Actually do your action items

It is one thing to voice concerns and opinions about team health, but it is another to actually implement the ideas that arise from the retro discussion. Action items should be clearly marked on the retro board and can be turned into Jira tasks or noted in public channels for further discussion. Everyone on the team should be aware of and responsible for these tasks. If you find it difficult for the team to take responsibility for these tasks, you might want to assign "task leaders" who will keep track of progress. Then, at the start of every retro, review the previous action items. Was progress made? How is the implementation of each action item going?

Good luck and have fun planning your team retros!

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e4a6igbswym0ppy9jhr2.png)

### TLDR:

- Retrospectives are an important part of team health and maintenance and should be prioritized
- Make sure to plan your goals for the retrospective
- Follow through on your action items!

Finally, please let me know your thoughts! Have you had successful teams that did not use retros? What do you think is the most important "health check" a team can do?

#### Resources used:

Cover art - https://www.vectorstock.com/royalty-free-vector/retro-neon-city-background-neon-style-80s-vector-22323112

How to Build Confidence About Showing Vulnerability - https://hbr.org/2022/07/how-to-build-confidence-about-showing-vulnerability

Retro definition - https://www.atlassian.com/agile/scrum/retrospectives#:~:text=A%20retrospective%20is%20anytime%20your,retro%20on%20just%20about%20anything!
grace_harders
1,136,123
It's my first post
Hello. Here I will write about my achievements in the IT-industry
0
2022-07-09T11:59:36
https://dev.to/minishok/its-my-first-post-2dn5
webdev, beginners, javascript, react
Hello. Here I will write about my achievements in the **IT industry**
minishok
1,136,401
DEFINITIONS OF THE SOCIAL WORKER ACCORDING TO EXPERTS
A social worker is someone who possesses the knowledge, skills, and values of social work practice...
0
2022-07-10T00:51:04
https://dev.to/akikuya/definisi-pekerja-sosial-menurut-para-ahlidefinisi-pekerja-sosial-menurut-para-ahli-4icl
sosial, ilmusosial, peksos, pekerjasosial
A social worker is someone who possesses the knowledge, skills, and values of social work practice, acquired through education, training, and/or experience in the field of social welfare and/or the social sciences, and/or whose qualifications have been formally recognized and who has obtained a certificate of competence.

Definitions of the social worker according to various experts include the following:

1. Rex A. Skidmore and Milton G. Thackeray: "Social work aims to improve the social functioning of individuals, both individually and in groups, with activities focused on their social relationships, in particular the interaction between people and their environment." (Skidmore and Thackeray, 1982:7)

2. "Social work is a profession whose main field is organized social service activity, which aims to facilitate and strengthen relationships, particularly mutual and mutually beneficial adjustment between individuals and their social environment, through the use of social work methods, so that individuals as well as communities may become better off." (Leonora Serafica de Guzman)

3. "Social work is all of the technical skills that serve as a vehicle for carrying out social welfare efforts." (Law No. 6 of 1974 on the Basic Provisions of Social Welfare)

4. "The social work profession promotes social change, problem solving in human relationships and the empowerment and liberation of people to enhance well-being. Utilising theories of human behaviour and social systems, social work intervenes at the points where people interact with their environments. Principles of human rights and social justice are fundamental to social work." (International Federation of Social Workers)

5. "Social work is the professional activity of helping individuals, groups, or communities to enhance or restore their capacity for social functioning and to create the societal conditions that make this goal attainable." - National Association of Social Workers (1958)

6. "Social work seeks to enhance the social functioning of individuals, both individually and in groups, through activities focused on their social relationships, which constitute the interaction between people and their environment. These activities can be grouped into three functions: restoration of impaired capacity, provision of individual and social resources, and prevention of social dysfunction." - Werner Boehm (1958)

7. "Social work is defined as a social institutional method of helping people to prevent and to resolve their social problems, to restore and enhance their social functioning." - Siporin (1975)

8. "Social work is a professional social service based on scientific knowledge and skill in human relations, which helps individuals or groups to obtain personal and social satisfaction and independence." - W. A. Friedlander (1967)

9. According to the 1945 Social Work Year Book, as cited in the book "Pengantar Kesejahteraan Sosial" by Drs. Syarif Muhidin, M.Sc.: "Social work is a professional service to people, the aim of which is to help them, individually or in groups, to attain satisfying relationships and standards of life in accordance with their needs and those of society."

10. Social work is a professional service to people, the aim of which is to help them, as individuals or in groups, to attain satisfying relationships and standards of life in accordance with their needs and their capabilities within society. (Social Work Year Book, 1945)

11. "Social work is all of the technical skills that serve as a vehicle for carrying out social welfare efforts." (Law No. 6 of 1974 on the Basic Provisions of Social Welfare)

12. "Social work is defined as a social institutional method of helping people to prevent and to resolve their social problems, to restore and enhance their social functioning." - Siporin (1975)

13. "Social work is a profession that promotes social change, solves problems in human relationships, and empowers and liberates people to improve their well-being." - DuBois & Miley (2005:4)

14. "Social work is a profession whose main field is organized social service activity, aimed at facilitating and strengthening relationships, particularly mutual and mutually beneficial adjustment between individuals and their social environment, through the use of social work methods." - (Leonora Serafika de Guzman)

15. Social work aims to enhance or restore the mutually beneficial interaction between individuals and their society in order to create a high quality of life. - Dean H. Hepworth and Jo Ann Larsen

16. Social work is a helping profession whose task is to assist individuals, families, groups, communities, or society in repairing, restoring, enhancing, and developing their social functioning, and to encourage social change so that a standard of welfare can be achieved in people's lives. - Scientific Social Work Discussion (SSWD)

17. All of the technical skills that serve as a vehicle for carrying out social welfare efforts. - Law No. 11 of 2009

18. "Social work aims to improve social functioning, both individually and in groups, with activities focused on people's social relationships, in particular the interaction between people and their environment." - Rex Skidmore

19. "Social work is concerned with problems of interaction between people and their social environment, so that they become able to carry out their life tasks, reduce distress, and realize their aspirations and values." - Allan Pincus

20. "Social work is the professional activity of helping individuals, groups, and communities to enhance or restore their capacity for social functioning and to create the societal conditions conducive to achieving their goals." - Zastrow (1999:5)

21. Social work is the art of bringing various resources to bear on the needs of individuals, groups, and communities through a scientific method of helping, so that they can help themselves. - Herbert Hewitt Stroup

22. Social work is a helping profession that enhances or restores mutually beneficial reciprocal interaction between people and society in order to improve the quality of life of every person. - Drs. Soetarso, M.S.W.

23. According to Tara Kuther, Ph.D., a social worker is a professional who most often works with people, helping them manage their daily lives, understand and adapt to illness, disability, and death, and providing social services.

24. Walter A. Friedlander and Robert Z. Apte: work based on scientific knowledge and skills that helps individuals, groups, or communities achieve personal and social satisfaction and independence.

25. Rex A. Skidmore, Milton Thackeray, and William Farley: social work aims to improve the social functioning of individuals, both individually and in groups, with activities focused on their social relationships, in particular people's interaction with their environment.

26. Leonara Scrafica de Guzman: social work is a profession whose main field is organized social service activity, whose purpose is to facilitate and strengthen relationships through mutual and mutually beneficial adjustment between individuals and their social environment, using social work methods.

27. According to Law No. 6 of 1974 on the Basic Provisions of Social Welfare, social work is defined as all of the technical skills that serve as a vehicle for carrying out social welfare efforts.

28. Jack Claridge: a social worker is an individual who aims to help those people in society who are unable to cope with, or have difficulty handling, the life problems they face.

29. According to Endang Moertopo, a social worker is someone who possesses the basic knowledge, skills, and values of social work, with the aim of providing social welfare services.

CONCLUSION

A social worker is a professional who helps individuals, groups, organizations, communities, and society at large to enhance and restore their capacity to function socially, and to create conducive conditions in society. A social worker can also be understood as a professional who helps individuals, groups, organizations, communities, and society to prevent and resolve social problems, and who empowers them so that social welfare can be achieved and improved.

source: https://www.akikuya.xyz/
akikuya
1,136,735
Separate numbers in input with Angular Directive
Imagine that you have an input tag in your project for entering credit card numbers and you want to...
0
2022-07-10T14:02:00
https://dev.to/rezanazari/separate-numbers-in-input-with-angular-directive-p4k
angular, javascript
Imagine that you have an input tag in your project for entering credit card numbers, and you want to separate the entered digits into groups for better readability. Using Angular directives, we can write it as follows.

First, we create a directive that only allows the user to enter numbers, arrow keys, backspace, and the shift key.

```js
import { Directive, HostListener } from '@angular/core';

@Directive({
  selector: '[appOnlyNumber]',
})
export class NumberDirective {
  constructor() {}

  @HostListener('keydown', ['$event'])
  onKeyDown(event) {
    const charCode = event.which ? event.which : event.keyCode;
    if (
      (charCode >= 48 && charCode <= 57) || // digits 0-9
      (charCode >= 96 && charCode <= 105) || // numpad digits
      (charCode >= 37 && charCode <= 40) || // arrow keys
      charCode == 8 || // backspace
      charCode == 16 // shift
    ) {
      return true;
    }
    return false;
  }
}
```

Then we write the main directive, which runs on the 'keyup' event. It has two inputs: one for the total number of digits on the credit card, and one for the group size to separate by.
```js
import { Directive, HostListener, Input } from '@angular/core';

@Directive({
  selector: '[appDigiSeperator]',
})
export class DigiSeperatorDirective {
  @Input() detachableDigit: number;
  @Input() totalFigures: number;

  @HostListener('keyup', ['$event'])
  onKeyDown(event) {
    // Strip spaces and cap the value at the total number of figures
    let enteredNumber = this.check_number_not_longer_than_total_figure(
      this.remove_space_from_numbers(event.target.value)
    );
    const categorizedNumbers = [];
    // Slice the digits into fixed-size groups
    for (
      let index = 0;
      index < enteredNumber.length;
      index += this.detachableDigit
    ) {
      const seperatedDigit = this.substring_numbers_by_digit(
        enteredNumber,
        index,
        this.detachableDigit
      );
      categorizedNumbers.push(seperatedDigit);
    }
    event.target.value = this.join_categorized_numbers(categorizedNumbers);
  }

  private remove_space_from_numbers(numbers: string) {
    return numbers.replace(/\s/g, '');
  }

  private check_number_not_longer_than_total_figure = (numbers: string) => {
    if (numbers.length > this.totalFigures) {
      return numbers.slice(0, this.totalFigures);
    }
    return numbers;
  };

  private substring_numbers_by_digit(
    numbers: string,
    startIndex: number,
    length: number
  ) {
    return numbers.substring(startIndex, startIndex + length);
  }

  private join_categorized_numbers(categorizedNumbers: string[]) {
    return categorizedNumbers.join(' ');
  }
}
```

Now use it in your components:

```html
<input
  type="text"
  appOnlyNumber
  appDigiSeperator
  [detachableDigit]="4"
  [totalFigures]="16"
  [(ngModel)]="data"
/>
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cgfzhhahpvfgxb7m4hrr.png)

Here is the complete code! [See complete code](https://stackblitz.com/edit/angular-ivy-idtysn?file=src%2Fapp%2Fdigi-seperator.directive.ts,src%2Fapp%2Fapp.component.ts,src%2Fapp%2Fapp.component.html)
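Outside of Angular, the grouping logic the directive performs on each keyup boils down to a small pure function. Here is a framework-free sketch (the `groupDigits` function name is my own, not part of the directive above):

```js
// Strip spaces, cap at the total number of digits,
// then slice into fixed-size groups and re-join with spaces.
function groupDigits(value, groupSize, totalFigures) {
  const digits = value.replace(/\s/g, '').slice(0, totalFigures);
  const groups = [];
  for (let i = 0; i < digits.length; i += groupSize) {
    groups.push(digits.substring(i, i + groupSize));
  }
  return groups.join(' ');
}

console.log(groupDigits('4111111111111111', 4, 16)); // "4111 1111 1111 1111"
console.log(groupDigits('4111 1111 1111 1111 22', 4, 16)); // digits past 16 are dropped
```

This is also a convenient shape to unit test, since it needs no DOM events.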
rezanazari
1,136,857
Weekly 0020
Monday I started the week to work on my productivity workflow. I decided to reorganize my...
17,109
2022-07-10T18:21:40
https://dev.to/kasuken/weekly-0020-54be
weeklyretro
### Monday

I started the week working on my productivity workflow. I decided to reorganize my Notion from scratch, and I tried to find inspiration in the book "Building a Second Brain" by Tiago Forte and in other online Notion templates. It's hard work, but I learned a lot about Notion. I called my template "Digital Garden" and I think I will publish it on Gumroad next week.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r1bgdk3dvazjq57nr0u0.png)

In the afternoon I did an estimation for a new project for a customer and had a call with one of my colleagues to talk about it.

**Mood:** 😪

### Tuesday

Still working on the Notion template for my productivity workflow. In the meantime I took a look at the Network Guardian collecting data for a customer. I discovered a lot of "issues" and I will work on them next week. There are a lot of problems with disconnections and reconnections during the data collection phase. I have to manage that in a better way.

![https://res.cloudinary.com/practicaldev/image/fetch/s--SElE7c0---/c_imagga_scale,f_auto,fl_progressive,h_420,q_auto,w_1000/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uxdqu56rh6lvcvf18wtn.png](https://res.cloudinary.com/practicaldev/image/fetch/s--SElE7c0---/c_imagga_scale,f_auto,fl_progressive,h_420,q_auto,w_1000/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uxdqu56rh6lvcvf18wtn.png)

In the afternoon THE email arrived. I have been awarded for another year as a Microsoft MVP and, honestly, I am very proud of myself. You can find more info about it in my blog post: [https://dev.to/kasuken/i-have-been-awarded-for-9-years-in-a-row-as-microsoft-mvp-28a5](https://dev.to/kasuken/i-have-been-awarded-for-9-years-in-a-row-as-microsoft-mvp-28a5)

**Mood**: 😪

### Wednesday

Calls with customers and other annoying activities not related to development.

**Mood:** 🙂

### Thursday

An unproductive day because of Long Covid. I didn't close any tasks during the day.
**Mood:** 😢

### Friday

I started the day with a new feature implementation on Red Origin. It's not really a new feature, because I implemented this function a few months ago; it's more or less a rewrite and a reorganization of the code. I decided to implement the function from scratch again, based on my previous experience with it.

I also finished two more Notion templates: a call-for-papers template for speakers and a content calendar template. I will share them next week.

**Mood: 😊**
kasuken
1,138,088
A brief history of modern computers, multitasking and operating systems
In this article I'll try to write a brief history of modern computers, multitasking and how operating...
18,859
2022-07-12T06:11:29
https://dev.to/leandronsp/a-brief-history-of-modern-computers-multitasking-and-operating-systems-2cbn
unix, linux, operatingsystems, threads
In this article I'll try to write a brief history of modern computers, multitasking, and how operating systems tackle concurrency.

## 40s

The first *modern* computers in the 40s were capable of running programs encoded on [punched cards](https://en.wikipedia.org/wiki/Punched_card). At that time, computers would load **one program at a time**, and because of very modest speeds, programs could take *days* to finish, leading to long *waiting queues* for programmers.

![computers in 40s](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mb794m14al7wbn4ns05j.jpeg)

## 50s

A decade later, computers became a little faster, and as such they could **enqueue multiple programs at once** using a FIFO system (first-in, first-out). Despite the importance of enqueuing jobs and easing the *programmers' waiting queues*, programs would still **run one at a time**, meaning that when a specific program got blocked on *input or output* (*i.e.* waiting for the printer to finish printing the output), the **CPU sat idle**.

![computers 50s](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pf7cr4inj8oisgvq6kba.jpeg)

In other words, the computer's physical resources were not used efficiently.

## 60s

Also called the "transistors era" or the era of "third-generation computers", the 60s brought smaller yet even faster computers.

![computers 60s](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u767v3y9w88zi8neyt65.jpeg)

The main goal here is to keep the CPU **as busy as possible**, meaning that while a specific program *waits* on I/O, another one can use a "slice" of the CPU. Two concurrent programs could use multiple computer resources *at the same time*. But how is this achieved?

### Monitors, the forerunners of operating systems

Monitors were *primitive* systems that, once installed on the computer, could **manage concurrent programs** and give them a fair amount of *CPU* while other programs were blocked on I/O.
This "fairness" is based on an arbitrary slice of CPU time: once the Monitor decides a program has used its *fair share* of the CPU, it **pauses** that program and prevents it from using the CPU, giving priority to another program from the **waiting queue**. This process repeats over and over as programs finish their I/O operations. Such a technique is called [time-sharing multitasking](https://en.wikipedia.org/wiki/Time-sharing).

Make no mistake: **concurrency** is all about multiple programs getting a fair amount of CPU time while others wait on I/O. It's still *one* CPU for all of them, but the *Monitor* helps to **keep the CPU as busy as possible**.

By achieving multitasking and increasing the efficiency of resource utilization, we also increase the *volume of information being processed over time* (throughput).

![tput](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7xc368llk803usq1g9zu.jpeg)

## 70s

This is the decade when more sophisticated "Monitor systems" like [Unix](https://en.wikipedia.org/wiki/Unix) were born. Those systems are called [Operating Systems](https://en.wikipedia.org/wiki/Operating_system), and this is the "Cambrian explosion era" of computers.

## Operating Systems

Like Monitors, operating systems were created to manage computer resources (CPU, memory, I/O) and guarantee a fair amount of those resources (mainly CPU) to the multiple concurrent programs running on the computer.

Year after year, more operating systems were created. Let's dig into the architecture of a modern operating system (OS):

### Programs are isolated

With *concurrency* in its realm, an operating system (OS from now on) needs to guarantee that two different programs don't use the same memory address. Otherwise, it would lead to a [race condition](https://en.wikipedia.org/wiki/Race_condition).
To solve that problem and avoid race conditions, a modern OS has to follow some rules: * the program needs to be isolated, meaning it must have its own memory space * programs can communicate with each other only via **message passing** * thus, programs need a unique identifier ### OS Processes These traits, being isolated and having a unique identifier, are what make a program a **process**. Basically, processes are running *instances* of programs. Let's check some processes on our OS: ```bash $ ps ax PID TTY STAT TIME COMMAND 1 pts/0 Ssl 0:00 /usr/bin/qemu-x86_64 /usr/bin/bash 53 pts/0 Sl+ 0:00 /usr/bin/qemu-x86_64 /usr/bin/sleep 10 62 ? Rl+ 0:00 /usr/bin/ps ax ``` Note the process ID `53`, which is running the command `sleep 10`. This process is *waiting* on the computer clock and is **not using the CPU**; as soon as it finishes, the process is gone and completely removed from the OS. Now, let's raise one more question: what if we write a program and, within this program, we want some specific **blocks of code** to be executed concurrently? Specifically, it's a scenario where *a block of code is waiting on I/O, but another one in the same process is "free" to compete for the CPU*. Enter **threads**. ### OS Threads Operating systems also provide a concurrency primitive called the **Thread**, which is **bound to a process** and can be treated as a concurrency unit like OS processes. Different threads **running in the same process** are not isolated, because they *share the same memory space*. Threads are created within the programs developers write and share the process memory. Hence, Threads are prone to **race conditions**. ```bash $ ps a -o pid,tid,command PID TID COMMAND 1 1 /usr/bin/qemu-x86_64 /usr/bin/bash 197 197 /usr/bin/qemu-x86_64 /usr/bin/sleep 10 200 200 /usr/bin/ps a -o pid,tid,command ``` We can see the `TID` (thread ID) column, having the same identifier as the `PID` (process ID). 
Why? Because every OS process, by default, *has a main thread* running on it. Other threads might be created within the program by the application programmer. Okay, all of this stuff about *processes and threads* is cool and nice, but **how does the OS manage those concurrency units**? ### OS Scheduler The OS scheduler is a program which manages the OS processes/threads in the waiting queue and gives them a fair share of the CPU while others wait on I/O. Similar to the primitive "Monitor", a scheduler uses **time-sharing multitasking** by pausing/resuming OS processes and threads through a **context switch**. OS processes/threads are preempted multiple times on the CPU, and because of this nature of preempting concurrency units by *time*, most modern operating systems employ **Preemptive Schedulers**. ### Cooperative scheduling In the 70s/80s, a small number of operating systems used a different scheduling strategy. Those schedulers **do NOT preempt** OS processes by time-sharing, but instead delegate the "context switch" to the OS process itself, based on the process's own rules and requirements. Such scheduling is called "Cooperative Scheduling", as the scheduler gives control to the process for making the context switch. However, most modern operating systems use preemptive multitasking, because it gives them complete control over the concurrency units on the computer. ## Race condition As said earlier, OS processes are isolated *by design*, so they are not prone to race conditions. However, threads do share the same memory space (the OS process memory), so programmers have to carefully design multi-threading systems. In the presence of a potential race condition, a single Thread can acquire a "lock", an OS primitive that prevents other Threads in the same process from entering the same critical section at the same time. Still, locks can be cumbersome and lead to **deadlocks**, where two different Threads are *blocked forever* because each is waiting for a lock held by the other. 
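The deadlock scenario can be sketched in Python (illustrative code with hypothetical lock names, not from the article): the classic mitigation is to make every thread acquire the locks in the same global order, so no circular wait can form.

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
finished = []

def worker(name):
    # Every thread acquires lock_a before lock_b. If one thread took
    # lock_b first, each thread could end up holding one lock while
    # waiting forever for the other: a deadlock.
    with lock_a:
        with lock_b:
            finished.append(name)

t1 = threading.Thread(target=worker, args=("t1",))
t2 = threading.Thread(target=worker, args=("t2",))
t1.start()
t2.start()
t1.join()
t2.join()

print(sorted(finished))  # ['t1', 't2']: both threads completed, no deadlock
```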
To avoid locking, other techniques which employ **optimistic locking** arise, where instead of acquiring OS locks, two different "versions" of the data are generated and then "compared" before updating. Yet another alternative to OS locks is making the Thread "safer": giving it its own isolated space and communicating with the outside via *message passing*, similar to OS processes. Those threads follow the [actor-model definition](https://en.wikipedia.org/wiki/Actor_model) and can be called "actors". Both **optimistic locking** and **actor-based threads** rely on algorithms that can be implemented by runtimes and programming libraries. ## Multi-core era As [Moore's Law](https://en.wikipedia.org/wiki/Moore%27s_law) comes to an end due to physical limitations, CPUs stopped increasing clock rates, which means they have not been getting much faster since the mid-2000s. CPU engineers then came up with a great solution: building multiple "CPU cores" into a single CPU unit, where each CPU core has the same clock speed. That's why since the mid-2000s we've seen the explosion of multi-core processors, as cores keep getting cheaper and more numerous. Now that we understand concurrency and the importance of keeping the CPU busy, how can we **increase the throughput of applications** in a scenario where CPUs are not getting faster but do have multiple cores? Yes, by using all the CPU cores at the same time. It's called **parallelism**. In a modern multi-core CPU architecture, concurrent processes/threads that need CPU work can be executed **in parallel**. ## Non-blocking era On the other hand, in the lands of I/O, network bandwidth and SSDs have gotten faster year after year. Operating systems then started to offer capabilities where processes do not need to be "blocked" on I/O. These processes can be freed to do other work **asynchronously** and, as soon as the I/O operation is *completed*, the process is notified by the OS. 
Such a technique is called "non-blocking I/O", or "async I/O". Runtime implementations such as NodeJS and other projects like Loom, PHP Swoole and Ruby 3 employ concurrency primitives for taking advantage of async I/O, thus helping to increase the system's overall throughput. ## Conclusion I hope this article helped you to understand a bit more about operating systems (OS) and how OS processes/threads are crucial units for tackling concurrency on computers.
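As a closing illustration (added here, not part of the original article), the non-blocking I/O idea can be sketched with Python's asyncio, using `asyncio.sleep` as a stand-in for a real network or disk operation: two waits overlap instead of running back to back.

```python
import asyncio
import time

async def fake_io(label, seconds):
    # asyncio.sleep stands in for a network call or disk read;
    # while one task awaits, the event loop runs the other.
    await asyncio.sleep(seconds)
    return label

async def main():
    start = time.monotonic()
    # gather() runs both "I/O operations" concurrently on one thread.
    results = await asyncio.gather(fake_io("a", 0.2), fake_io("b", 0.2))
    elapsed = time.monotonic() - start
    return results, elapsed

results, elapsed = asyncio.run(main())
print(results)         # ['a', 'b']
print(elapsed < 0.38)  # True: the two 0.2 s waits overlapped instead of summing to 0.4 s
```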
leandronsp
1,139,614
The Future of the Cloud? Make it Optional
Today, most apps are cloud-only. Data must travel halfway around the world to a remote data center...
0
2022-07-13T16:40:00
https://www.ditto.live/blog/the-future-of-the-cloud-is-optional
cloud, discuss, architecture, mobile
![deskless worker with no internet](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mmm6ms0z9hjdvu61z7a3.png) Today, most apps are _cloud-only_. Data must travel halfway around the world to a remote data center just for it to arrive at a device in the same room. These apps become entirely unusable if their data connection is slow or a Wi-Fi router breaks. Cloud-only apps have made it difficult for the "deskless workforce" to get their work done. They require database access on mobile devices in the field, where crucial on-the-ground operations are happening, but their cloud-only applications do not function 100% of the time. Operations halt while employees wait for a mobile phone to reconnect. It isn&apos;t a great user experience; it costs businesses money and can even put people in life-threatening situations when real-time data is needed for quick decision-making. ![Cloud Outage](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dw6axcrpwcadx50akb6r.png) ## From Cloud-Only to Cloud-Optional Despite these problems, most applications today are cloud-only. Why? If you want to send your friend a picture of a cat, you can use AirDrop, which sends the data directly between the two devices, _peer-to-peer_. So why don&apos;t all apps send other kinds of data directly, _peer-to-peer_, between devices? Why isn&apos;t the cloud optional? ![Why can't apps send data directly?](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dnga5qopyttpqegvq5pp.png) The answer is that it&apos;s simply easier to build cloud-only applications. Over the past few decades, investors have funneled billions of dollars into cloud-only databases and tools. Because of this, developers don't have to think about TCP/IP networking, database partitioning, or on-disk compression whenever they want to update a field in a database table. For years, this existing toolkit of cloud-only tools has made cloud-only applications the fastest way to deploy collaborative software. 
## Peer-to-peer is not new, but it is still hard to use There are scalable peer-to-peer protocols for data that doesn&apos;t change, such as AirDropping cat pictures or torrenting movies and music &ndash; BitTorrent accounts for 27% of all upstream Internet traffic[^1]. But, developers need the reliability of a database for times when that data starts to change at the speed of Slack and requires the precision of an airplane safety checklist. Unfortunately, few engineers in the world can build a peer-to-peer database. It&apos;s a new software engineering paradigm that needs more education, tool development, and user experience improvements. Because of this, we haven&apos;t had a cloud-optional, peer-to-peer database. _Until now._ Ideally, a peer-to-peer database needs to be: - **User-friendly.** Developers are users too! Instead of sending data to a remote server, the application needs to write data to its local database first in the form of _changes_, then listen for changes from other devices, and recombine them on the fly. Ditto provides an API on top of these details so developers can focus on their business logic instead of synchronization logic. - **Cloud-optional.** Devices can go into dead zones, routers can crash, or cloud services can go down. All devices must see the same query results given the same set of changes, even if the changes arrive in a different order. Ditto's Conflict-Free Replicated Data Types [(CRDTs)](https://docs.ditto.live/common/how-it-works/crdt/) provide a consistent view of the data for every device. These data structures are still very much a new topic in computer science research. - **Partitioned.** Mesh networks can generate a tremendous amount of data that can overwhelm small devices if each node aggressively tries to sync every piece of data. Ditto provides a [Big Peer and a Small Peer SDK](http://www.ditto.live/products/platform) with different replication strategies. 
The Small Peer is selfish, which means it only synchronizes data it explicitly requests, giving developers complete control over storage and bandwidth usage. The Big Peer is greedy, which means it synchronizes as much as possible. ![Ditto peer-to-peer cloud architecture](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vgkem54sa0h1hqry0z91.png) - **Ad-hoc.** Devices may join and leave the mesh at any time. Despite this churn, all devices should still see and have input on the same data: the same "source of truth." Routing messages between _appropriate_ nodes becomes a mathematical challenge as the mesh network topology changes over time. Ditto&apos;s [novel delta state CRDT](https://www.ditto.live/blog/testing-crdts-in-rust-from-theory-to-practice) is great for ad-hoc mesh networks because they are flexible and do not rely on having the full history of a database table to write or read the latest value. - **Forward-Compatible.** Since devices update at different times, they need to account for incoming data with different schema. For example, if a device is offline and therefore outdated, it should still be able to read new data and sync. Ditto is causally consistent, providing a reliable order of changes that can be inspected, including incorporating metadata about schema changes over time. ![Ditto forward compatibility](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s80htfm2deje6z7qzeg4.jpg) ## A shared set of tools These challenges are straightforward to explain but bring up a sizable, entangled web of discussions. There are many different ways to design and implement caching, data synchronization, querying, and distributed architecture. It is a lot of work for development teams to create a reliable peer-to-peer database that syncs data in a partially connected mesh. Organizations either can&apos;t hire the talent to complete the job or don&apos;t see the financial benefit of investing in building it themselves. 
It is a more efficient use of resources to support a database that many different companies use so that each individual company doesn&apos;t need to invest in building and maintaining their own cloud-optional database. With Ditto, developers have a complete end-to-end cloud-optional solution that combines the best of cloud software with the best of peer-to-peer software. On the surface, the shift from cloud-only to cloud-_optional_ seems subtle. But this is a more fundamental paradigm shift and provides tremendous business opportunities. Ditto is setting the standard as one of the first CRDT-based peer-to-peer databases that turn the existing plethora of mobile devices into a cloud-optional intelligent edge platform. If you are a developer and want to make your cloud optional, [get started](https://portal.ditto.live/create-account) today for free. [We&apos;re hiring](https://jobs.ashbyhq.com/ditto), so look at our available listings and see if you think Ditto might be a good fit for you. ## Footnotes [^1]: [The Global Internet Phenomena Report](https://www.prnewswire.com/news-releases/sandvine-releases-2019-global-internet-phenomena-report-300914962.html), Sandvine; 2019.
rratner
1,140,053
AWS Summit 2022 Berlin - Have some PIE!
Written by Kateryna Oliferchuk. Originally published on June 2nd 2022. Data is everywhere and...
0
2022-07-14T06:20:44
https://kreuzwerker.de/post/aws-summit-2022-berlin-have-some-pie
aws, devops, cloud, engineering
Written by Kateryna Oliferchuk. _Originally published on June 2nd 2022._ Data is everywhere and everything is built around data nowadays. With so many data storage options available, you can always find one that best suits your current use case and requirements. But, based on my previous experience, that’s easier said than done: the best-fitting database is not actually always chosen, and even a great initial choice might not play out well over time. To learn more about how to approach selecting the best-fitting storage for a microservice in a structured and efficient way, I attended the talk [Choose your database as if your life depends on it](https://aws.amazon.com/events/summits/berlin/agenda/?berlin-summit-agenda-card.sort-by=item.additionalFields.headline&berlin-summit-agenda-card.sort-order=asc&awsf.berlin-summit-agenda-day=*all&awsf.berlin-summit-agenda-language=*all&awsf.berlin-summit-agenda-persona=*all&awsf.berlin-summit-agenda-level=*all&awsf.berlin-summit-agenda-segment=*all&awsf.berlin-summit-agenda-topic=*all&awsf.berlin-summit-agenda-industry=*all&berlin-summit-agenda-card.q=DBS304&berlin-summit-agenda-card.q_operator=AND) by Ovidiu Hutuleac at the AWS Summit 2022 Berlin. I’ve often had the experience in the past that decisions were made as outcomes of back-and-forth discussions in which arguments were not always backed up with concrete facts. And with so many different database types to choose from (key-value, graph, document, relational, time-series, in-memory, columnar), each with so many different vendor options, it is a difficult challenge indeed. My main takeaway from the talk is to focus on one bounded context at a time, list its requirements and constraints explicitly, and, finally, use the PIE theorem to choose the right data storage for it. 
![aws-summit-blog-data](//images.ctfassets.net/hqu2g0tau160/4t3JOW90p57FlsqW2w8gnF/e90a8bc5545ae9b94c1fed9c478a19a5/aws-summit-blog-data.png) Similar to the CAP theorem, the PIE theorem states that we can choose two out of three desirable features in a data system: Pattern Flexibility, Efficiency, Infinite scale. The questions you need to answer while making a decision would be: 1. Is there a need for flexibility in querying our data or do we only allow fixed access patterns? 2. Is there a requirement for a consistent, predictable latency regardless of the number of clients and queries to the data store at the same time? 3. How big and fast do we expect our data to grow? Answering these questions will help you make your decision. I believe that investing time upfront to thoughtfully analyze the requirements and limitations will save you a lot of time in the future building workarounds. The talk also left me with a new thought: Should database selection be treated the same as general software development? Meaning that we should start simple and stay agile and adjust over time. In my previous experience, we often took the initially chosen data store as set in stone. When new requirements came in, we tried to make the existing data store fit them instead of taking a step back and questioning whether the data store was still the right one. For example, in a past project we chose Elasticsearch as a primary data store because it perfectly fit the initial needs - a simple search use case. Over time, the scope increased, and more and more features were cumbersomely built using Elasticsearch, even though it was no longer the most suitable storage for us. Costs of operating the cluster became quite high and a lot of time was spent on improving performance. Looking back, I wish we had taken the time and paused to rethink our initial choice. We should remain open to changing the database completely. 
To recap, choosing the right data storage for your system is an important choice to make carefully, and the PIE theorem will help you ask the right questions upfront. At the same time, you should always be open to challenge your initial decision over time, and re-iterate with the PIE theorem as your system and requirements evolve. You know, a PIE is always the answer 🍰.
kreuzwerker
1,140,307
8 common mistakes in Cypress (and how to avoid them)
This is a blog post made from a talk I gave at Front end test fest, so if you want to watch a video...
0
2022-07-14T13:18:54
https://filiphric.com/8-common-mistakes-in-cypress-and-how-to-avoid-them
--- title: 8 common mistakes in Cypress (and how to avoid them) published: true date: 2022-07-14 00:00:00 UTC tags: canonical_url: https://filiphric.com/8-common-mistakes-in-cypress-and-how-to-avoid-them --- This is a blog post made from a talk I gave at Front end test fest, so if you want to watch a video about it, [feel free to do so on this link](http://front-endtestfest.com/6gm). On my Discord server, I sometimes encounter a common pattern when answering questions. There are certain sets of problems that tend to surface repeatedly, and for these, I created this blog post. Let’s jump into them! ## #1: Using explicit waiting This first example feels kinda obvious. Whenever you added an explicit wait to your Cypress test, I believe you had an unsettling feeling about it. But what about the cases when our tests fail because the page is too slow? It feels like using `cy.wait()` is the way to go. ```js // ❌ incorrect way, don’t use cy.visit('/') cy.wait(10000) cy.get('button') .should('be.visible') ``` But this makes our test just sit there and hope that the page will get loaded before the next command. Instead, we can make use of Cypress’ built-in retryability. ```js cy.visit('/') cy.get('button', { timeout: 10000 }) .should('be.visible') ``` So why is this better? Because this way, we will wait a maximum of 10 seconds for that `button` to appear. But if the button renders sooner, the test will immediately move on to the next command. This will help you save some time. If you want to read more about this, I recommend [checking out my blog on this topic](https://filiphric.com/waiting-in-cypress-and-how-to-avoid-it). ![Join my upcoming workshop! 
https://filiphric.com/cypress-core-workshop](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pmmacz4l8hrb5vh08vva.png) ## #2: Using unreadable selectors I could write a whole article just on the topic of selectors (in fact, I did), since this is one of the most dealt-with topics for testers. Selectors are often the first thing that gives us a clue as to what our test is doing. Because of that, it is worth making them readable. Cypress has some [recommendations](https://docs.cypress.io/guides/references/best-practices#Selecting-Elements) as to which selectors should be used. The main purpose of these recommendations is to provide stability for your tests. At the top of the recommendations is to use separate `data-*` selectors. You should add these to your application. However (and unfortunately IMHO), testers don’t always have access to the tested application. This makes selecting elements quite a challenge, especially when the element we are searching for is obscure. Many that find themselves in this situation reach for various strategies for selecting elements. One of these strategies is using **xpath**. [The big caveat of xpath is that its syntax is very hard to read](https://filiphric.com/waiting-in-cypress-and-how-to-avoid-it). By merely looking at your xpath selector, you are not really able to tell what element you are selecting. Moreover, xpath doesn’t really add anything to the capabilities of your Cypress tests. Anything xpath can do, you can do with Cypress commands, and more readably. 
```js // Select an element by text cy.xpath('//*[text()[contains(.,"My Boards")]]') // Select an element containing a specific child element cy.xpath('//div[contains(@class, "list")][.//div[contains(@class, "card")]]') // Filter an element by index cy.xpath('(//div[contains(@class, "board")])[1]') // Select an element after a specific element cy.xpath('//div[contains(@class, "card")][preceding::div[contains(., "milk")]]') ``` ```js // Select an element by text cy.contains('h1', 'My Boards') // Select an element containing a specific child element cy.get('.card').parents('.list') // Filter an element by index cy.get('.board').eq(0) // Select an element after a specific element cy.contains('.card', 'milk').next('.card') ``` ## #3: Selecting elements improperly Consider the following scenario. You want to select a card (the white element on the page) and assert its text. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bw91fjt72sazl1whyi53.png) Notice how both of these elements contain the word "bugs" inside. Can you tell which card we are going to select when using this code? ```js cy.visit('/board/1') cy.get('[data-cy=card]') .eq(0) .should('contain.text', 'bugs') ``` You might be guessing the first one, with the text "triage found bugs". While that may be a good answer, it’s not the most precise one. The correct answer is: whichever card loads first. It is important to remember that whenever a Cypress command finishes doing its job, it will move on to the next command. So once an element is found by the `.get()` command, we will move to the `.eq(0)` command. After that, we will move to our assertion, which will fail. You might wonder why Cypress does not retry at this point, but it actually does. Just not the whole chain. By design, the `.should()` command will [retry the previous command, but not the whole chain](https://twitter.com/filip_hric/status/1493964887336251394). 
This is why it is vital to implement a better test design here and add a "guard" for this test. Before we assert the text of our card, we’ll make sure that all cards are present in the DOM: ```js cy.visit('/board/1') cy.get('[data-cy=card]') .should('have.length', 2) .eq(0) .should('contain.text', 'bugs') ``` ## #4: Ignoring requests in your app Let’s take a look at this code example: ```js cy.visit('/board/1') cy.get('[data-cy=list]') .should('not.exist') ``` When we open our page, multiple requests get fired. Responses from these requests will get digested by the frontend app and rendered into our page. In this example, `[data-cy=list]` elements get rendered after we get a response from the `/api/lists` endpoint. But the problem with this test is that we are not telling Cypress to wait for these requests. Because of this, our test may give us a false positive and pass even if there are lists present in our application. Cypress will not automatically wait for the requests our application makes. We need to define this using the intercept command: ```js cy.intercept('GET', '/api/lists') .as('lists') cy.visit('/board/1') cy.wait('@lists') cy.get('[data-cy=list]') .should('not.exist') ``` ## #5: Overlooking DOM re-rendering Modern web applications send requests all the time to get information from the database and then render it in the DOM. In our next example, we are testing a search bar, where each keystroke will send a new request. Each response will make the content on our page re-render. In this test, we want to take a search result and confirm that after we type the word "for", we will see the first item with the text "search for critical bugs". The test code goes like this: ```js cy.realPress(['Meta', 'k']) cy.get('[data-cy=search-input]') .type('for') cy.get('[data-cy=result-item]') .eq(0) .should('contain.text', 'search for critical bugs') ``` This test will suffer from an "element detached from DOM" error. 
The reason for this is that while still typing, we will first get 2 results, and when we finish, we’ll get just a single result. As a result, our test will go like this: 1. "f" key is typed in the search box 2. request searching for all items with "f" will fire 3. response comes back and the app will render two results 4. "o" key is typed in the search box 5. request searching for all items with "fo" will fire 6. response comes back and the app will render two results 7. "r" key is typed in the search box 8. request searching for all items with "for" will fire 9. Cypress is done with typing, so it moves to the next command 10. Cypress will select `[data-cy=result-item]` elements and will filter the first one (using the `.eq(0)` command) 11. Cypress will assert that it has the text "search for critical bugs" 12. since the text is different, it will run the previous command again (`.eq(0)`) 13. while Cypress is retrying and going back and forth between `.eq(0)` and `.should()`, a response from our last request comes back and the app re-renders to show a single result 14. the element that we selected in step 10 is no longer present, so we get an error Remember, the `.should()` command will make the previous command retry, but not the full chain. This means that our `cy.get('[data-cy=result-item]')` does not get called again. To fix this problem we can again add a guarding assertion to our code to first make sure we get the proper number of results, and then assert the text of the result. ```js cy.realPress(['Meta', 'k']) cy.get('[data-cy=search-input]') .type('for') cy.get('[data-cy=result-item]') .should('have.length', 1) .eq(0) .should('contain.text', 'search for critical bugs') ``` But what if we cannot assert the number of results? 
[I wrote about this in the past](https://filiphric.com/testing-lists-of-items), but in short, the solution is to use the `.should()` command with a callback, something like this: ```js cy.realPress(['Meta', 'k']) cy.get('[data-cy=search-input]') .type('for') cy.get('[data-cy=result-item]') .should(items => { expect(items[0]).to.have.text('search for critical bugs') }) ``` ## #6: Creating inefficient command chains Cypress has a really cool chaining syntax. Each command passes information to the next one, creating a one-way flow of your test scenario. But even these commands have an internal logic. Cypress commands can be either parent, child or dual. This means that some of our commands will always start a new chain. Consider this command chain: ```js cy.get('[data-cy="create-board"]') .click() .get('[data-cy="new-board-input"]') .type('new board{enter}') .location('pathname') .should('contain', '/board/') ``` The problem with writing a chain like this is not only that it is hard to read, but also that it ignores this parent/child command chaining logic. Every `.get()` command is actually starting a new chain. This means that our `.click().get()` chain does not really make sense. Correctly using chains can prevent unpredictable behavior in your Cypress tests and make them more readable: ```js cy.get('[data-cy="create-board"]') // parent .click() // child cy.get('[data-cy="new-board-input"]') // parent .type('new board{enter}') // child cy.location('pathname') // parent .should('contain', '/board/') // child ``` ## #7: Overusing UI I believe that while writing UI tests, you should use the UI as little as possible. This strategy can make your tests faster and provide the same (or even greater) confidence in your app. 
Let’s say you have a navigation bar with links that looks like this: ```html <nav> <a href="/blog">Blog</a> <a href="/about">About</a> <a href="/contact">Contact</a> </nav> ``` The goal of the test will be to check all the links inside the `<nav>` element, to make sure they are pointing to a live website. The intuitive approach might be using the `.click()` command and then checking either the location or the content of the opened page to see if the page is live. However, this approach is slow and, in fact, can give you false confidence. As I mentioned in one of my previous blogs, this approach can overlook that one of our pages is not live but returns a 404 error. Instead of checking your links like this, you can use the `.request()` command to make sure that the page is live: ```js cy.get('a').each(link => { cy.request(link.prop('href')) }) ``` ## #8: Repeating the same set of actions It’s really common to hear that your code should be DRY = don’t repeat yourself. While this is a great principle for your code, it seems like it is slightly ignored during the test run. In the example below, there’s a `cy.login()` command that will go through the login steps and will be used before every test: ```js Cypress.Commands.add('login', () => { cy.visit('/login') cy.get('[type=email]') .type('filip+example@gmail.com') cy.get('[type=password]') .type('i<3slovak1a!') cy.get('[data-cy="logged-user"]') .should('be.visible') }) ``` Having this sequence of steps abstracted into a single command is definitely good. It will make our code more "DRY". But as we keep using it in our tests, our test execution will go through the same steps over and over, essentially repeating the same set of actions. With Cypress you can pull off a trick that will help you solve this issue. This set of steps can be cached and reloaded using the `cy.session()` command. This is still in an experimental state, but can be enabled using the `experimentalSessionAndOrigin: true` attribute in your `cypress.config.js`. 
You can wrap the sequence in your custom command into the `cy.session()` function like this: ```js Cypress.Commands.add('login', () => { cy.session('login', () => { cy.visit('/login') cy.get('[type=email]') .type('filip+example@gmail.com') cy.get('[type=password]') .type('i<3slovak1a!') cy.get('[data-cy="logged-user"]') .should('be.visible') }) }) ``` This will cause the sequence in your custom command to run just once per spec. But if you want to cache it throughout your whole test run, you can do that by using the [cypress-data-session plugin](https://www.npmjs.com/package/cypress-data-session). There are a lot more things you can do with this, but caching your steps is probably the most valuable one, as it can easily shave a couple of minutes off the whole test run. This will of course depend on the test itself. In my own tests, where I ran just 4 tests that logged in, I was able to cut the time in half. Hopefully, this helped. I’m teaching all this and more in my [upcoming workshop](https://filiphric.com/cypress-core-workshop). Hope to see you there!
filiphric
1,140,624
si
si
0
2022-07-14T17:27:58
https://dev.to/diegoorduna/si-pk
si
diegoorduna
1,141,126
Do you have a favorite keyboard ? [I bought the new pop keys KB from Logitech]
Hello DEV coders and nerds 🙂 In my 5 days vacation to Wroclaw/Poland, I wanted to chill and relax,...
0
2022-07-15T11:41:40
https://dev.to/bekbrace/do-you-have-a-favorite-keyboard-i-bought-the-new-pop-keys-kb-from-logitech-23ha
discuss, community, productivity, watercooler
Hello DEV coders and nerds 🙂 On my 5-day vacation to Wroclaw, Poland, I wanted to chill and relax, you know, drink cold beer and enjoy the sun for some time, maybe read a book that I had wanted to read for a long time and never had the chance. But - there is a "but" here 😄 - I couldn't stay away from electronics shops; the most famous one here in Poland is a German franchise called "Media Markt", and it always has new gadgets, cameras, laptops and so on. {%youtube xr9G2296bvE%} I noticed this great-looking keyboard, a Pop Keys Yellow keyboard from Logitech, but I was so hesitant at first, especially when I saw the emojis on the side: 😍😭😄😂. I thought it was for kids and would never have a good clicking sound. Considering the price - 138 € - I started to think again: I checked the website, I watched some videos, and finally my wife convinced me to buy it / she is an extraordinary saleswoman 😄 Hope you will enjoy this video where I am unwrapping this awesome keyboard 😄 Thank you, amazing DEV community, for the support!
bekbrace
1,141,261
Latest Cyber Security Trends For Businesses
The cybersecurity industry continues to be in constant flux. While organizations strive to harden...
0
2022-07-22T07:03:03
https://dev.to/sudip_sg/latest-cyber-security-trends-for-businesses-577d
The cybersecurity industry continues to be in constant flux. While organizations strive to harden their systems against discovered vulnerabilities, [attackers keep crafting newer mechanisms to attack tech stacks](https://crashtest-security.com/7-signs-that-your-website-has-been-hacked/). The growing reliance on computing systems for business and personal functions enables attackers to exploit sensitive information and compromise organizational operations. It is, therefore, crucial for developers and security professionals to keep an eye on emerging cyber security trends for dynamic threat modeling and mitigation. This trend report explores the top cybersecurity trends for 2022 and how these can potentially impact global businesses and users alike. ## Inherent Cloud Vulnerabilities While cloud deployments offer flexibility and cost savings, inherent vulnerabilities in cloud services continue to pose considerable cybersecurity risks to modern organizations. Because of the complexities associated with multiple geographically distributed devices and third-party integrations, most organizations struggle to implement comprehensive security controls to inhibit attack vectors. [As per a recent study](https://resources.checkpoint.com/cyber-security-resources/2020-cloud-security-report), cloud security misconfiguration for remote workstations and external APIs interacting with third-party services are the primary causes of data leakage and unauthorized access to cloud-based assets. To overcome this, organizations are now adopting the **Cloud Security Posture Management (CSPM)** that helps them identify and prevent misconfiguration while automating compliance and security administration. Apart from this, [DevSecOps](https://crashtest-security.com/security-teams-devsecops/) is now a mainstream framework that helps organizations adopt a ***shift left for security*** approach. 
This essentially implies that security is considered at par with all other aspects of a development workflow since the very initial stages of SDLC. ## Security for Remote Work Organizations adjusted business models to facilitate [remote working culture due to the COVID-19 pandemic](https://crashtest-security.com/the-importance-of-web-application-security-during-the-corona-outbreak/). This new normal has introduced a new wave of vulnerabilities arising from the **inadequate implementation of security policies, disparate network infrastructure**, and **lack of knowledge to harden security for remote devices**. Throughout the pandemic, attackers discovered new approaches to exploit network security vulnerabilities such as **improper firewall implementation, insecure broadband connections**, and **single-layer protection** leading to data breaches. The counteraction led organizations to focus on administering robust controls to enforce data safety and swift incidence response for remote work arrangements. To secure decentralized access, organizations are also adopting a [zero-trust approach](https://owasp.org/www-chapter-singapore/assets/presos/Securing_Production_Identity_Framework_for_Everyone_(SPIFFE),_Building_End_to_End_Secure_Software_Factory_and_Protecting_Cloud-Native_Supply_Chain_Helpful_Cloud-Native_Security_Checklists_and_Demo_on_SPIFFE_and_Not.pdf) for sensitive data while imparting **organization-wide training** to ensure every stakeholder knows and observes security best practices. ## Identity-First Security The [identity-first security program](https://www.itproportal.com/features/identity-first-security-redefined/) helps organizations offer secure resource access for distributed deployments. This approach emphasizes **user identity verification** rather than authorizing users through the traditional method of login credentials that hackers can easily compromise. 
The technique also leverages **Identity Detection and Response (IDR)** mechanisms to detect user profiles that have been compromised or used to initiate attacks, helping security teams to mitigate persistent threats. The recent trend highlights an identity-first security strategy extending beyond authentication and authorization to include a broader range of access controls, including [session management](https://crashtest-security.com/broken-authentication-and-session-management/) and **threat modeling** for holistic resource protection. Two of the most common identity-based security measures used in modern applications are **Multi-Factor Authentication (MFA)** and **Single Sign-On (SSO)**. ## Vendor Consolidation Modern applications are built using tech stacks that integrate multiple frameworks, packages, and plugins. While the inclusion of third-party integrations often simplifies development workflows, it offers less oversight of the used application resources. Additionally, using multiple third-party integrations increases the need for human effort to piece together the safety measures implemented across disconnected points. This sprawl in security controls often reduces the effectiveness of cyber security efforts that requires a security team to put more focus on patching vulnerabilities introduced by each integration. A recent survey projects nearly 50% of enterprises are currently pursuing a vendor consolidation strategy to enforce a unified approach for detecting, identifying, and remediating security threats. Consolidating third-party tools and security vendors helps simplify security operations. A key strategy for vendor consolidation is also the [defense-in-depth](https://www.forcepoint.com/cyber-edu/defense-depth) approach. Teams carefully examine the entire vendor network and [IT infrastructure](https://crashtest-security.com/it-infrastructure-security/) to identify gaps and overlaps in security implementation. 
## Ransomware Attacks With over 71% of data breaches in 2020-21 being financially motivated, Ransomware attacks continue to be one of the most followed trends in cybersecurity. In this type of attack, threat actors deploy malicious software that enables them to seize computing data or resources illegally. In return for confiscating sensitive content or unblocking organization access, attackers demand a ransom. A Ransomware attack typically targets industries that use specific software to store large amounts of personally identifiable information. Cyber syndicates continuously enhance their exploits through emerging technologies, including artificial intelligence, machine learning, and cryptocurrencies, so the [European Union Agency for Cybersecurity (ENISA)](https://www.enisa.europa.eu/) attributes the growth to a rise in ransom payments from firms that try to avoid the backlash of a successful attack. Although organizations are adopting regulatory guidances and embracing tools to harden their security postures, the evolving threat landscape continues to be alarming. ## GDPR Compliance for Data Privacy With data privacy laws being enforced across several countries, organizations now emphasize having data privacy officers within their cybersecurity team to help their businesses and services comply with mandatory and security regulations. Organizations are also enforcing measures like **data encryption in transit and at rest, role-based logins, multi-factor authentications, credential protection,** and **network segmentation** to intensify data privacy. Numerous cyberattacks leading to the exposure of sensitive information belonging to organizations/customers have now enforced the introduction of federal, state-level, and international data privacy laws such as the [EU GDPR](https://gdpr-info.eu/). GDPR imposes a unified and consistent **data protection law** for all **European Union(EU) member states**. 
While it is meant to protect the citizens of EU states, the regulation had a spiraling effect on global data security efforts since the regulation covers all goods and services marketed/sold to EU nations. The law requires organizations to collect, process, and persist user data under legally set guidelines. The regulation also provides protocols to protect this data from potential exploitation, misuse, and guidance on respecting users’ rights who own the data. The GDPR compliance requirements involve: * Establishing a legal and transparent data processing method * Reviewing data protection policies * Determining the independent public authority that monitors compliance * Conduct an assessment of the impacts of data protection efforts * Verify the existence and effectiveness of user privacy rights * Hire a data protection officer * Enforce company-wide training on secure data processing ## Security-as-a-Service (SECaaS) With the rapid expansion of the cybersecurity threat landscape, creating custom security solutions, quality audits, and control processes for mitigating threats are an enormous cost overhead for organizations. Organizations are now leaning towards **Security-as-a-Service (SECaaS),** a cloud-based managed security service to overcome the challenges, efforts, and costs associated with maintaining robust security. SECaaS is a growing industry that helps businesses reduce the workload on their in-house cybersecurity teams while allowing them to scale security controls as the business grows seamlessly. Apart from allowing organizations to **utilize the latest security functions, updates,** and **features** security experts provide, the SECaaS model also helps **save costs** by reducing manual overhead and redundant efforts toward threat mitigation. 
Most SECaaS offerings offer security at a granular level, with the most commonly outsourced security services including: * [Continuous monitoring](https://crashtest-security.com/continuous-security/) * Email security * Intrusion protection * Network security * Security Information and Event Management * Business continuity and disaster recovery * [Vulnerability scanning](https://crashtest-security.com/vulnerability-scanner/) ## Cybersecurity Mesh Architecture (CSMA) Developed by Gartner, [CSMA](https://www.checkpoint.com/cyber-hub/cyber-security/what-is-cybersecurity-mesh-architecture-csma/) is one of the most popular strategic cybersecurity trends of 2022 that provides organizations with a flexible and collaborative framework for security architecture. With the growing number of cyberthreats, organizations are tasked with continuous assessment and threat modeling to mitigate risks associated with their complex tech stacks. The cybersecurity mesh helps overcome the challenge of ***security silos*** by defining a framework that unifies security solutions for hybrid and multi-cloud environments. [Gartner also predicts that by 2024, organizations adopting a cybersecurity mesh architecture will reduce the financial impact of security incidents by an average of 90%](https://www.gartner.com/en/doc/756665-cybersecurity-mesh). The CSMA strategy provides a flexible, collaborative approach to security architecture by modularizing security activities and enforcing interoperability using four supportive layers. 
These layers are: * Implementing analytics and intelligence by using past data to predict and avert future cybersecurity attacks * Decentralized identity management * Consolidates dashboards for unified security management * Consolidated policy and posture management ## Mobile Security Threats With up to 92% of the world’s population now owning a hand-held device, the modern work environment introduced the **Bring Your Own Device (BYOD)** and **remote work culture** that relies on personal devices being granted the required privileges to access sensitive data and critical infrastructure. While the culture stimulates efficient collaboration, increased workplace mobility, and reduced expenses toward device and software licenses, security continues to be a prime challenge. Common malicious traffic from mobile devices includes: * Commands originating from malware installed on a device * Redirects to malicious URLs and websites * Phishing messages used for obtaining authentication data Accessing public wi-fi and collaboration tools on mobile devices exposes potential security gaps that facilitate various forms of phishing attacks for obtaining credentials or sensitive data. Some of the most common mobile security threats include: * Data leakage * Network spoofing * Spyware and malware installation * Unprotected wi-fi connections * E-commerce fraud * Account takeovers ## Conclusion The changing dynamics of the cybersecurity landscape require businesses to take countermeasures to avert risks and vulnerabilities proactively. While technology helps organizations achieve rapid growth and embrace innovation, it is also susceptible to attack vectors and numerous security risks. Security must be considered a continuous and dynamic process. **Crashtest Security Suite** offers an automated penetration testing and vulnerability scanning tool that helps reduce security exposure for web applications and APIs. 
To know how Crashtest Security can help decrease your risk exposure through automated pentesting, try a [14-days free demo](https://crashtest.cloud/registration?utm_campaign=blog_reg&_gl=1*17h61mq*_ga*MTQ0NDEyMzA5OC4xNjQ5Mzk4OTg2*_ga_3YDVXJ8625*MTY1Nzg4MjE2MS40Mi4xLjE2NTc4ODI3NTYuNjA.&_ga=2.253600910.662873033.1657781663-1444123098.1649398986) today. *This article has already been published on https://crashtest-security.com/cyber-security-trends-2022/ and has been authorized by Crashtest Security for a republish.*
sudip_sg
1,148,198
Simple Login form using HTML and CSS
Hello guys, Today in this post we’ll learn How to Create a Simple Login Form using HTML and CSS. Hope...
0
2022-07-21T22:26:02
https://dev.to/codewithayan/simple-login-form-using-html-and-css-3g14
Hello guys, today in this post we’ll learn how to create a [Simple Login Form](https://codewithayan.com/simple-login-page-in-html-and-css-source-code/) using HTML and CSS. Hope you enjoy this post. A login form is one of the most important components of a website or app: it allows authorized users to access an entire site or a part of it. You would have already seen them when visiting a website. Whether it’s a signup form or a login form, it should be catchy, user-friendly and easy to use. These types of forms lead to increased sales, lead generation, and customer growth. Let's create it.

## Demo

[Click here](https://codewithayan.com/simple-login-page-in-html-and-css-source-code/) to watch the demo!

## Simple Login form using HTML CSS (source code)

**HTML Code**

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/normalize/5.0.0/normalize.min.css">
  <link rel="stylesheet" href="styledfer.css">
</head>
<body>
  <div id="login-form-wrap">
    <h2>Login</h2>
    <form id="login-form">
      <p>
        <input type="email" id="email" name="email" placeholder="Email" required><i class="validation"><span></span><span></span></i>
      </p>
      <p>
        <input type="password" id="password" name="password" placeholder="Password" required><i class="validation"><span></span><span></span></i>
      </p>
      <p>
        <input type="submit" id="login" value="Login">
      </p>
    </form>
    <div id="create-account-wrap">
      <p>Don't have an account?
        <a href="#">Create One</a></p>
    </div>
  </div>
  <script src='https://code.jquery.com/jquery-2.2.4.min.js'></script>
  <script src='https://cdnjs.cloudflare.com/ajax/libs/jquery-validate/1.15.0/jquery.validate.min.js'></script>
</body>
</html>
```

**CSS Code**

```css
body { background-color: #020202; font-size: 1.6rem; font-family: "Open Sans", sans-serif; color: #2b3e51; }
h2 { font-weight: 300; text-align: center; }
p { position: relative; }
a, a:link, a:visited, a:active { color: #ff9100; -webkit-transition: all 0.2s ease; transition: all 0.2s ease; }
a:focus, a:hover, a:link:focus, a:link:hover, a:visited:focus, a:visited:hover, a:active:focus, a:active:hover { color: #ff9f22; -webkit-transition: all 0.2s ease; transition: all 0.2s ease; }
#login-form-wrap { background-color: #fff; width: 16em; margin: 30px auto; text-align: center; padding: 20px 0 0 0; border-radius: 4px; box-shadow: 0px 30px 50px 0px rgba(0, 0, 0, 0.2); }
#login-form { padding: 0 60px; }
input { display: block; box-sizing: border-box; width: 100%; outline: none; height: 60px; line-height: 60px; border-radius: 4px; }
#email, #password { width: 100%; padding: 0 0 0 10px; margin: 0; color: #8a8b8e; border: 1px solid #c2c0ca; font-style: normal; font-size: 16px; -webkit-appearance: none; -moz-appearance: none; appearance: none; position: relative; display: inline-block; background: none; }
#email:focus, #password:focus { border-color: #3ca9e2; }
#email:focus:invalid, #password:focus:invalid { color: #cc1e2b; border-color: #cc1e2b; }
#email:valid ~ .validation, #password:valid ~ .validation { display: block; border-color: #0C0; }
#email:valid ~ .validation span, #password:valid ~ .validation span { background: #0C0; position: absolute; border-radius: 6px; }
#email:valid ~ .validation span:first-child, #password:valid ~ .validation span:first-child { top: 30px; left: 14px; width: 20px; height: 3px; -webkit-transform: rotate(-45deg); transform: rotate(-45deg); }
#email:valid ~ .validation span:last-child,
#password:valid ~ .validation span:last-child { top: 35px; left: 8px; width: 11px; height: 3px; -webkit-transform: rotate(45deg); transform: rotate(45deg); }
.validation { display: none; position: absolute; content: " "; height: 60px; width: 30px; right: 15px; top: 0px; }
input[type="submit"] { border: none; display: block; background-color: #ff9100; color: #fff; font-weight: bold; text-transform: uppercase; cursor: pointer; -webkit-transition: all 0.2s ease; transition: all 0.2s ease; font-size: 18px; position: relative; display: inline-block; text-align: center; }
input[type="submit"]:hover { background-color: #ff9b17; -webkit-transition: all 0.2s ease; transition: all 0.2s ease; }
#create-account-wrap { background-color: #eeedf1; color: #8a8b8e; font-size: 14px; width: 100%; padding: 10px 0; border-radius: 0 0 4px 4px; }
```

Congratulations! You have now successfully created your Simple Login Form using HTML and CSS. Wait! Take a look at my other amazing CSS and Javascript tutorials:

1. [Random Password Generator using Javascript](https://codewithayan.com/random-password-generator-app-in-javascript-source-code/)
2. [Responsive footer design in HTML and CSS](https://codewithayan.com/footer-design-in-html/)
3. [Simple Feedback Form using Html, CSS and Javascript](https://codewithayan.com/feedback-form-using-html-css-and-javascript/)

Thank you!
codewithayan
1,141,962
[Solved] ‘pip’ is not recognized in cmd
'pip’ is not recognized in cmd is the error shows up on windows when one tries to use pip in the...
0
2022-07-16T02:32:22
https://dev.to/ammohitchaprana/solved-pip-is-not-recognized-in-cmd-a82
pip, python, programming, py
'pip’ is not recognized in cmd is the error that shows up on Windows when one tries to use pip in the command prompt. To solve this error on Windows, you must declare the Path variable by following these steps:

1. Right-click on My Computer or This PC
2. Click on Properties
3. Click on Advanced System Settings

You will find a section called system variables. Click on Path in the list of variables and values that shows up there. After clicking on Path, click Edit. You will find a New button in the pop-up. Click that and paste the location of the Python35 or Python36 folder (the location you specified while installing Python) followed by “\Scripts” there. For me it’s “C:\Users\a610580\AppData\Local\Programs\Python\Python35-32”, so I type “C:\Users\a610580\AppData\Local\Programs\Python\Python35-32\Scripts”. Click OK to close all windows and restart your command prompt. I repeat – restart your command prompt. Everything should now be working fine! Make sure you don’t disturb anything else in the Path variable and follow the aforementioned steps exactly.
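If you'd rather not edit the Path variable at all, a common workaround (assuming Python itself is already reachable from the command prompt) is to invoke pip through the interpreter with `python -m pip`, which doesn't need the Scripts folder to be on the Path:

```shell
# Check that pip is importable via the interpreter,
# even when the standalone "pip" command is not recognized.
python -m pip --version

# Install a package the same way (requests is just an example package):
python -m pip install requests
```

On systems where the interpreter is named `python3`, substitute that name accordingly.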
ammohitchaprana
1,142,054
pandas: converting a numeric column to datetime
Background These timestamps are inconvenient to read Solution data['ts'] = pd.to_datetime(data['t'],unit='ms') ...
0
2022-07-16T07:34:09
https://dev.to/qiudaozhang/pandas-shu-zi-lie-zhuan-huan-wei-shi-jian-1ep5
## Background

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bxtbt6qy2woh958y1h2o.png)

Timestamps like these are inconvenient to read.

## Solution

```python
data['ts'] = pd.to_datetime(data['t'], unit='ms')
```

You can adjust `unit` as needed. However, this conversion is not tied to any time zone, so for those of us in UTC+8 the result appears 8 hours behind. To display the correct local time, you can add 8 hours' worth of milliseconds to the base value before converting:

```python
data['ts'] = pd.to_datetime(data['t'] + 8 * 60 * 60 * 1000, unit='ms')
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/63p749hhruhqup598fs3.png)
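Side note: pandas can also attach a real time zone instead of shifting the raw milliseconds, e.g. `s.dt.tz_localize('UTC').dt.tz_convert('Asia/Shanghai')` on the converted column. The 8-hour arithmetic itself can be sanity-checked with just the standard library (the epoch value below is an arbitrary example):

```python
from datetime import datetime, timedelta, timezone

# Arbitrary example: 2022-07-16 00:00:00 UTC in epoch milliseconds.
ms = 1657929600000

utc = datetime.fromtimestamp(ms / 1000, tz=timezone.utc)

# The trick from this post: add 8 hours of milliseconds before converting.
shifted = datetime.fromtimestamp((ms + 8 * 60 * 60 * 1000) / 1000, tz=timezone.utc)

# Converting properly to UTC+8 yields the same wall-clock time.
cst = utc.astimezone(timezone(timedelta(hours=8)))

print(utc.strftime('%Y-%m-%d %H:%M:%S'))      # 2022-07-16 00:00:00
print(shifted.strftime('%Y-%m-%d %H:%M:%S'))  # 2022-07-16 08:00:00
print(cst.strftime('%Y-%m-%d %H:%M:%S'))      # 2022-07-16 08:00:00
```

The shifted UTC value and the properly converted UTC+8 value print the same wall-clock time, which is why the milliseconds trick works for display purposes.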
qiudaozhang
1,142,069
2141. Leetcode Solution in Cpp
#define LL long long class Solution { private: bool check(LL t, int n, const vector&lt;int&gt;...
0
2022-07-16T08:08:46
https://dev.to/chiki1601/2141-leetcode-solution-in-cpp-32bd
cpp
```cpp
#define LL long long

class Solution {
private:
    // Feasibility test: can n computers all run for t minutes at once?
    // Each battery contributes at most min(t, capacity) minutes,
    // so it works iff the capped total covers t * n.
    bool check(LL t, int n, const vector<int> &batteries) {
        LL tot = 0;
        for (LL x : batteries)
            tot += min(t, x);
        return tot >= t * n;
    }

public:
    // Binary search for the largest feasible running time.
    LL maxRunTime(int n, vector<int>& batteries) {
        LL l = 1, r = 100000000000000ll;
        while (l < r) {
            LL mid = (l + r) >> 1;
            if (check(mid + 1, n, batteries))
                l = mid + 1;
            else
                r = mid;
        }
        return l;
    }
};
```

#leetcode #weekly contest

Here is the link for the problem: https://leetcode.com/contest/weekly-contest-276/problems/maximum-running-time-of-n-computers/
chiki1601
1,142,267
Ethernaut: 20. Denial
Play the level // SPDX-License-Identifier: MIT pragma solidity ^0.6.0; import...
18,918
2022-07-16T14:48:54
https://dev.to/erhant/ethernaut-20-denial-2pn2
solidity, ethereum, openzeppelin, security
[Play the level](https://ethernaut.openzeppelin.com/level/0xf1D573178225513eDAA795bE9206f7E311EeDEc3) ```solidity // SPDX-License-Identifier: MIT pragma solidity ^0.6.0; import '@openzeppelin/contracts/math/SafeMath.sol'; contract Denial { using SafeMath for uint256; address public partner; // withdrawal partner - pay the gas, split the withdraw address payable public constant owner = address(0xA9E); uint timeLastWithdrawn; mapping(address => uint) withdrawPartnerBalances; // keep track of partners balances function setWithdrawPartner(address _partner) public { partner = _partner; } // withdraw 1% to recipient and 1% to owner function withdraw() public { uint amountToSend = address(this).balance.div(100); // perform a call without checking return // The recipient can revert, the owner will still get their share partner.call{value:amountToSend}(""); owner.transfer(amountToSend); // keep track of last withdrawal time timeLastWithdrawn = now; withdrawPartnerBalances[partner] = withdrawPartnerBalances[partner].add(amountToSend); } // allow deposit of funds receive() external payable {} // convenience function function contractBalance() public view returns (uint) { return address(this).balance; } } ``` In this level, the exploit has to do with `call` function: `partner.call{value:amountToSend}("")`. Here, a `call` is made to the partner address, with empty `msg.data` and `amountToSend` value. When using `call`, if you do not specify the amount of gas to forward, it will forward everything! As the comment line says, reverting the call will not affect the execution, but what if we consume all gas in that call? That is the attack. We will write a `fallback` function because the call is made with no message data, and we will just put an infinite loop in there: ```solidity contract BadPartner { fallback() external payable { while (true) {} } } ``` We then set the withdrawal partner as this contract address, and we are done. 
Note that `call` can use at most 63/64 of the remaining gas (see [EIP-150](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-150.md)). If 1/64 of the gas is enough to finish the rest of the stuff, you are good. To be safe though, just specify the amount of gas to forward.
erhant
1,142,375
ERROR #1093 : You can't specify target table table_name for update in FROM clause
In this blog, we will solve one MySQL question and understand errors while solving the...
0
2022-07-16T17:28:09
https://dev.to/ahana001/error-1093-you-cant-specify-target-table-tablename-for-update-in-from-clause-2j2n
mysql
In this blog, we will solve one MySQL question and understand the errors we hit while solving it.

Problem: Delete Duplicate Emails

Table: Person

```
+-------------+---------+
| Column Name | Type    |
+-------------+---------+
| id          | int     |
| email       | varchar |
+-------------+---------+
```

id is the primary key column for this table. Each row of this table contains an email. The emails will not contain uppercase letters. Write an SQL query to delete all the duplicate emails, keeping only one unique email with the smallest id.

Input: Person table:

```
+----+------------------+
| id | email            |
+----+------------------+
| 1  | john@example.com |
| 2  | bob@example.com  |
| 3  | john@example.com |
+----+------------------+
```

Output:

```
+----+------------------+
| id | email            |
+----+------------------+
| 1  | john@example.com |
| 2  | bob@example.com  |
+----+------------------+
```

Explanation: john@example.com is repeated two times. We keep the row with the smallest id = 1.

If we want the count of rows having the same email, we should use the GROUP BY method. The GROUP BY statement groups rows that have the same values into summary rows, like "find the number of ids for each email". The GROUP BY statement is often used with aggregate functions (COUNT(), MAX(), MIN(), SUM(), AVG()) to group the result-set by one or more columns.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wq7hz5y4fd044uleocda.png)

Here we group the rows that share the same email and count them with the COUNT() function. The email john@example.com occurs two times, so we get a count of 2, and for bob@example.com we get a count of 1 because it occurs only once. The email john@example.com has two ids, 1 and 3. If we want the minimum id from these two ids, or the minimum id among any number of ids for a particular email, we have to use the MIN() function.
The MySQL query should look like this:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6jlqm6xsk29g53xu341h.png)

Here we get id 1 for the email john@example.com because min(1, 3) is 1, and we get 2 for bob@example.com because it has only one id, which MIN() simply returns. Now we have completed the major task in this problem. We want to keep only the smallest id for each email and delete every row whose id is greater than that smallest id but which shares the same email. So we first group the rows that have the same email, then find the smallest id in each group to keep, and delete every id that is not equal to the smallest id for its email.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x5vs9lxnupoclq4zfi3v.png)

Oops... we have got an error.

ERROR #1093: You can't specify the target table table_name for an update in FROM clause

What you can do is change the query to something like this:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2lctnpg5rmgx1ndbl3ue.png)

Okay, let me explain how the magic happens here. You cannot delete rows from the same data source that your sub-query reads from. The above query works, but it's ugly for several reasons, including performance. The nested subquery creates a temporary table, so it no longer counts as the same table you're trying to delete data from. The restriction exists because your update could be cyclical: what if updating a record causes something to happen that makes the WHERE condition false? You know that isn't the case, but the engine doesn't. There could also be opposing locks on the table during the operation.
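Since the queries in this post are screenshots, here is my reconstruction of the standard derived-table workaround in text form, demonstrated with Python's built-in sqlite3 for illustration. Note that SQLite happens not to raise error 1093 at all, but the wrapped query is the same shape you would use in MySQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Person (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany(
    "INSERT INTO Person VALUES (?, ?)",
    [(1, "john@example.com"), (2, "bob@example.com"), (3, "john@example.com")],
)

# The derived-table workaround: wrapping the grouped subquery in an
# extra SELECT materializes it as a temporary table, so (in MySQL)
# the DELETE no longer references the table it is modifying.
conn.execute("""
    DELETE FROM Person
    WHERE id NOT IN (
        SELECT id FROM (
            SELECT MIN(id) AS id FROM Person GROUP BY email
        ) AS t
    )
""")

print(conn.execute("SELECT id, email FROM Person ORDER BY id").fetchall())
# [(1, 'john@example.com'), (2, 'bob@example.com')]
```

Only the row with the smallest id per email survives, matching the expected output table.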
ahana001
1,142,814
How Open-Source community Can give you a Job?
Open source is not only a great way to get started in your coding career, but also a way to build up...
0
2022-08-01T08:15:00
https://dev.to/nathan20/how-open-source-community-can-give-your-a-job-4kc7
beginners, programming, opensource, tutorial
Open source is not only a great way to get started in your coding career, but also a way to build up your portfolio and show potential employers what you’re capable of. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/otrlnobn0qn28n7r742d.jpg) By contributing to an open source project, you can demonstrate your skills and commitment to coding, as well as show that you’re able to work on a team. And, of course, if you’re looking for a job in the opensource world, attending a local meetup or contributing to an opensource project is a great way to network and find potential employers. ##What you can do to find a job in the opensource community First, get involved in the community by contributing to projects, participating in forums and mailing lists, and attending events. Not only will this give you a chance to show off your skills, but it will also help you build up a network of contacts. Second, create a strong online presence by writing blog posts and articles, and sharing your work on social media. This will help you reach a wider audience and demonstrate your expertise to potential employers. Finally, reach out to companies and recruiters who are active in the opensource community to let them know you're looking for a job. Many companies are actively recruiting in the opensource community, so this is a great way to get your foot in the door. By following these tips, you can increase your chances of landing a job in the opensource community. ##How to use the resources in the opensource community to get a job To apply for a job, you will need to create a resume and cover letter. Your resume should list your skills and experience. Your cover letter should explain why you are interested in the job and why you would be a good fit for the position. Once you have created your resume and cover letter, you can start applying for jobs. 
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jpcz4hfdajglq1u3uh37.jpeg) To do this, you will need to create an account on a job board website like Indeed or Dice. Then, you can search for jobs and apply for them online. If you are having trouble finding a job, you can also reach out to people in the opensource community for help. You can join mailing lists and IRC channels related to your field of interest. You can also attend conferences and meetups. By networking with people in the community, you will increase your chances of finding a job. The open-source community is a great place to find your next job. With so many options and so much talent, you're sure to find the perfect fit for your skillset. And, who knows, maybe you'll even find your dream job. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aghor3bb1xf5p8xx8bk5.gif) Star our [Github repo](https://bit.ly/3QFgAUf) and join the discussion in our [Discord channel](https://bit.ly/3HQtlYo)! Test your API for free now at [BLST](https://www.blstsecurity.com/?promo=blst&domain=https://dev.to/How_Open-source_community_can_give_your_a_job?)!
nathan20
1,143,135
Running a command with a different root folder (Linux)
You can run a command with a different root folder with the chroot command. Let's try it out. ...
0
2022-07-17T21:00:20
https://dev.to/tallesl/running-a-command-with-a-different-root-folder-linux-14jd
docker, bash, linux
You can run a command with a different root folder with the `chroot` command. Let's try it out. ## Listing dependencies First, let's get the library dependencies of `sh`, `echo`, `cat`, and `pwd`: ``` $ ldd /bin/sh /bin/echo /bin/cat /bin/pwd /bin/sh: linux-vdso.so.1 (0x00007ffd8b764000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f415d09f000) /lib64/ld-linux-x86-64.so.2 (0x00007f415d2d4000) /bin/echo: linux-vdso.so.1 (0x00007ffd04383000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f3120c73000) /lib64/ld-linux-x86-64.so.2 (0x00007f3120e90000) /bin/cat: linux-vdso.so.1 (0x00007fffe1d7a000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fcf49a42000) /lib64/ld-linux-x86-64.so.2 (0x00007fcf49c60000) /bin/pwd: linux-vdso.so.1 (0x00007fffdbfad000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f60ec35c000) /lib64/ld-linux-x86-64.so.2 (0x00007f60ec57a000) ``` [(more about `ldd` here)](https://dev.to/tallesl/find-library-dependencies-of-a-binary-file-linux-3njf) ## Creating a new custom root Now let's create a barebones root filesystem for those commands to work, with their respective libraries: ``` $ mkdir customroot/ $ mkdir customroot/bin/ customroot/lib/ customroot/lib64/ $ cp /bin/sh /bin/echo /bin/cat /bin/pwd customroot/bin/ $ cp /lib/x86_64-linux-gnu/libc.so.6 customroot/lib/ $ cp /lib64/ld-linux-x86-64.so.2 customroot/lib64/ ``` ## Running a new shell with the custom root Now let's run our shell with the `customroot` folder as the root: ``` $ sudo chroot customroot/ sh # echo "Hello World!" > hello.txt # cat hello.txt Hello World! # pwd / # ls sh: 4: ls: not found # exit ``` Notice `pwd` outputting `/` as expected, and `ls` not running since we didn't copy its binary (and dependencies). ## Creating a custom Debian root Copying each binary file and its dependencies as done above is cumbersome. What if we could generate a complete base root filesystem in an easier way? `debootstrap` to the rescue! 
``` $ sudo apt install debootstrap $ sudo debootstrap bionic customdebroot ``` Now let's run a new shell with our new root: ``` $ sudo chroot customdebroot/ bash # pwd / # ls bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var ``` Let's install some package: ``` # apt install dict # which dict /usr/bin/dict # dict nintendo 1 definition found From The Free On-line Dictionary of Computing (30 December 2018) [foldoc]: Nintendo <company, games> A Japanese {video game} hardware manufacturer and software publisher. Nintendo started by making playing cards, but was later dominant in video games throughout the 1980s and early 1990s worldwide. They make lots of games consoles including the Gameboy, Gameboy Advance SP, DS, DS Lite and the Wii. {Nintendo home (http://nintendo.com/)}. (2008-03-08) ``` Back to our original shell with the 'real' root: ``` # exit $ file customdebroot/usr/bin/dict customdebroot/usr/bin/dict: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.32, BuildID[sha1]=1ae2d6542c13f51dcaaf6510a18e1b4bf49cf7c8, stripped ``` `dict` was only installed on our custom Debian root as expected.
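As an aside, the manual "copy the binary plus its libraries" steps from earlier can be automated with a small helper. Below is a sketch (the `copy_into_root` function name and the `/tmp/customroot2` path are made up for this example); it parses `ldd` output, assuming the glibc format shown earlier:

```shell
# copy_into_root ROOT BINARY -- copy BINARY plus every shared library
# that ldd reports into ROOT, preserving absolute paths
# (install -D creates the parent directories for us).
copy_into_root() {
  root="$1"
  bin="$2"
  install -D "$bin" "$root$bin"
  # "libc.so.6 => /lib/... (0x...)" lines: take the path after "=>";
  # "/lib64/ld-linux-....so.2 (0x...)" lines: take the first field.
  ldd "$bin" | awk '$2 == "=>" { print $3 } $1 ~ /^\// { print $1 }' |
  while read -r lib; do
    if [ -f "$lib" ]; then
      install -D "$lib" "$root$lib"
    fi
  done
}

copy_into_root /tmp/customroot2 /bin/sh
```

After running it, `sudo chroot /tmp/customroot2 sh` should drop you into a shell just like the manual `customroot` did.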
tallesl
1,143,349
1074. Leetcode Solution in Cpp
class Solution { public: int numSubmatrixSumTarget(vector&lt;vector&lt;int&gt;&gt;&amp; matrix,...
0
2022-07-18T05:24:46
https://dev.to/chiki1601/1074-leetcode-solution-in-cpp-10c1
cpp
``` class Solution { public: int numSubmatrixSumTarget(vector<vector<int>>& matrix, int target) { const int m = matrix.size(); const int n = matrix[0].size(); int ans = 0; // transfer each row of matrix to prefix sum for (auto& row : matrix) for (int i = 1; i < n; ++i) row[i] += row[i - 1]; for (int baseCol = 0; baseCol < n; ++baseCol) for (int j = baseCol; j < n; ++j) { unordered_map<int, int> prefixCount{{0, 1}}; int sum = 0; for (int i = 0; i < m; ++i) { if (baseCol > 0) sum -= matrix[i][baseCol - 1]; sum += matrix[i][j]; if (prefixCount.count(sum - target)) ans += prefixCount[sum - target]; ++prefixCount[sum]; } } return ans; } }; ``` #leetcode #challenge Here is the link for the problem: https://leetcode.com/problems/number-of-submatrices-that-sum-to-target/
chiki1601
1,143,777
Roundup of .NET MAUI. - Week of July 11, 2022
Wow, lots of libraries and especially videos this week about .NET MAUI. I just got back from a full...
18,552
2022-07-18T16:33:16
https://dev.to/davidortinau/roundup-of-net-maui-week-of-july-11-2022-52ml
Wow, lots of libraries and especially videos this week about [.NET MAUI](https://dot.net/maui). I just got back from a full week at HQ in Redmond meeting with teammates I haven't seen in more than 2 years or...ever. Table of Contents: * [Libraries](#libraries) * [Videos](#videos) ## Libraries <a name="libraries"></a> ### Material Color Utilities A .NET MAUI implementation of a Google library, this gives you Material Design colors so you can easily utilize them in your app plus some helpers. You can use the colors as defined, or add your own overrides and fallbacks. The readme has code snippets: {% embed https://github.com/albi005/MaterialColorUtilities %} https://www.nuget.org/packages/MaterialColorUtilities.Maui https://github.com/albi005/MaterialColorUtilities ### TeeChart Steema has released an update to TeeChart for .NET MAUI. This includes charts, map, and gauge controls covering dozens of use cases. https://www.nuget.org/packages/Steema.TeeChart.NET.MAUI/4.2022.7.13-beta ### Community Toolkit The team shipped 1.1.0 and Pedro Jesus has [guest-blogged about customizing controls](https://devblogs.microsoft.com/dotnet/customizing-dotnet-maui-controls/) and the toolkit. This release includes the long awaited (BY ME) IconTintColorBehavior! ``` <Image Source="home.png"> <Image.Behaviors> <mct:IconTintColorBehavior TintColor="Purple"/> </Image.Behaviors> </Image> ``` https://www.nuget.org/packages/CommunityToolkit.Maui ### SkiaSharp Various Packages Matthew Leibowitz on the .NET MAUI team shipped a bunch of updates this past week, including a SkiaSharp view for .NET MAUI that plays Lottie animations! 
{% embed https://twitter.com/mattleibow/status/1547901227341455369?s=20&t=l8H9XmgVd4rfSv43eLoQyg %} {% embed https://twitter.com/jfversluis/status/1547921573700456450?s=20&t=l8H9XmgVd4rfSv43eLoQyg %} [All the NuGets](https://www.nuget.org/packages?q=Tags:%22skiasharp.extended%22&utm_source=twitter&utm_medium=tweet&utm_campaign=skiasharp.extended) ### XCalendar I may have highlighted this one before. I've built calendar controls before, and it's so fun I would prefer to just use something like this. ;) Check it out. ![XCalendar screenshot](https://user-images.githubusercontent.com/73718829/150847171-290910bf-1751-409d-a622-39d3e14687b4.jpg) https://www.nuget.org/packages/XCalendar.Maui/4.0.0-pre1 ### Serilog Now with .NET MAUI support. https://www.nuget.org/packages/Serilog.Sinks.Xamarin/1.0.0-dev-00104-fd287b7 ## YouTube <a name="videos"></a> {% embed https://www.youtube.com/watch?v=V9ZFyDkVv4M %} {% embed https://www.youtube.com/watch?v=yrJIeYGOO-k %} {% embed https://www.youtube.com/watch?v=3z-IpztV9rM %} {% embed https://www.youtube.com/watch?v=_ApMpAm5m_A %} {% embed https://www.youtube.com/watch?v=oC5zpEbwViE %} {% embed https://www.youtube.com/watch?v=q0KDP_6au3k %} {% embed https://www.youtube.com/watch?v=nuAjxoVTBH8 %} {% embed https://www.youtube.com/watch?v=Aw5qGFV_CH0 %} {% embed https://www.youtube.com/watch?v=_LcMbKZ56lY %} {% embed https://www.youtube.com/watch?v=LsDJEpgOhfU %} {% embed https://www.youtube.com/watch?v=SK8u3MmXnjE %} {% embed https://www.youtube.com/watch?v=8d7xLErPm9o %} {% embed https://www.youtube.com/watch?v=xd7BW3gCCRY %} {% embed https://www.youtube.com/watch?v=BzUi9_GbQkU %} {% embed https://www.youtube.com/watch?v=xZfv-rkbpHc %} {% embed https://www.youtube.com/watch?v=nSdQaaaLjUY %}
davidortinau
1,144,065
AWS Cloud WAN: The General Availability and Product Features
This is not a “Last Week in AWS” however there is something to sum about. AWS announced the general...
0
2022-07-18T22:13:21
https://dev.to/aws-builders/aws-cloud-wan-the-general-availability-and-product-features-a3g
aws, cloud, cloudwan, networking
This is not a “Last Week in AWS” post; however, there is something worth summing up. [AWS announced the general availability of Cloud WAN on July 12.](https://aws.amazon.com/about-aws/whats-new/2022/07/general-availability-aws-cloud-wan/) This release is important for managing global on-premises networks together with cloud services. Let’s take a quick look. As most of you know, WAN stands for Wide Area Network in the telecom business. Cloud WAN helps you build, manage, monitor and maintain a global network that contains many physical on-premises systems. Not only on-premises environments but also networking-focused cloud products can be connected easily. You can use network policies to specify which of your Virtual Private Clouds (VPCs) connect to which networking services over AWS Direct Connect or AWS Site-to-Site VPN. > **Network Policy:** Defines rules and apply policies to configure and manage your network. (Source: Product page) You can monitor, operate and maintain these configurations on the Cloud WAN central dashboard too. > **Central Dashboard:** Create connections between your branch offices, data centers, and Amazon VPCs. (Source: Product page) Cloud WAN exchanges routes between AWS Regions using the Border Gateway Protocol (BGP), so there is no technical limitation tied to a single Region. ## Why is it essential? Global networks are evolving continuously. Since popular applications mostly run serverless and in the cloud, a huge opportunity for network operators has emerged. How? On-premises systems carry an annual expense, and it grows year after year. However, this can be contained with proper use of the cloud. The basic solution is to migrate suitable services to the cloud; server-based applications and end-user-facing applications are typical examples. 
So integrating the cloud with the network itself, in other words a hybrid cloud, became a real opportunity to reduce OPEX and CAPEX while operating the systems as usual. So what is the point of Cloud WAN? Cloud WAN is used to connect Region-based services. Consider a highly available network application deployed in multiple locations. Without Cloud WAN, it can be hard to turn the system into a hybrid cloud as a whole, because a counterpart system is running in a different Region or location. With Cloud WAN, it is easier to understand the requirement and build accordingly. Since we have a product that exchanges routes with BGP, there is no need to worry about connecting different Regions to each other; it is just a set of network routing policies after all. To sum up, Cloud WAN is a great product for creating a hybrid cloud environment. If you or your company does not require local availability (the service is released in several Regions for now), it is definitely worth trying out. --- If this post seems interesting to you, please see the details on [the product page here.](https://aws.amazon.com/cloud-wan/) There are also two Twitch sessions covering Cloud WAN. 1. [AWS On Air Live at the NY Summit](https://www.twitch.tv/videos/1529700791?t=0h27m) 2. [The Routing Loop - Cloud WAN](https://www.twitch.tv/aws/video/1531712794) --- All content is archived on [bugrakilic.github.io](https://bugrakilic.github.io/). You can reach me on [Twitter](https://twitter.com/bugrkilic), [LinkedIn](https://linkedin.com/in/bugrakilic) or via [email](mailto:bugrakilic@outlook.com). Thanks!
bugrakilic
1,144,364
Get always the last version of the github/gitlab software.
Do you use a of software from github ? that not have a packaging system ? then lastversion its for...
0
2022-07-28T02:52:37
https://dev.to/zodman/get-always-the-last-version-of-the-githubgitlab-software-74n
software, linux, cli, opensource
Do you use software from GitHub that doesn't have a packaging system? Then lastversion is for you! ``` lastversion apache/incubator-pagespeed-ngx #> 1.13.35.2 lastversion download apache/incubator-pagespeed-ngx #> downloaded incubator-pagespeed-ngx-v1.13.35.2-stable.tar.gz lastversion download apache/incubator-pagespeed-ngx -o pagespeed.tar.gz #> downloads with chosen filename lastversion https://transmissionbt.com/ #> 3.0 lastversion format "mysqld Ver 5.6.51-91.0 for Linux" #> 5.6.51 ``` If you follow the awesome CLI series I have written, with this software you can always get the latest version of everything :> {% github https://github.com/dvershinin/lastversion %}
zodman
1,144,706
For Linux, would you switch to an AMD GPU?
I keep hearing that AMD is better for Linux and that Nvidia GPU users may experience serious...
0
2022-07-19T12:22:00
https://dev.to/smoker12/for-linux-would-you-switch-to-an-amd-gpu-19l4
I keep hearing that AMD is better for Linux and that Nvidia GPU users may experience serious problems. Would you move from an Nvidia GPU to an AMD one if you were going from Windows to Linux full-time?
smoker12
1,144,861
AWS CloudTrail - Create a multi-region workflow to track user and API activity on your AWS account
Do you need more security for your AWS account? In the previous blog I implemented several...
18,978
2022-07-20T15:26:00
https://dev.to/aws-builders/how-to-create-an-aws-cloudtrail-workflow-for-your-aws-account-3949
beginners, tutorial, aws, security
### Do you need more security for your AWS account? In the previous [blog](https://dev.to/aws-builders/aws-cost-explorer-cost-anomaly-detection-report-identifed-an-unauthorized-amazon-sagemaker-canvas-user-3524) I implemented several steps to reduce costs and protect against unauthorized user access to an AWS account. ![rootaccount](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qzcpqtpzspf5mjwqgbi1.jpg) These remediation steps included: a) Enabling Block Public Access on S3 buckets b) Linking Multi-Factor Authentication (MFA) to your AWS Root Account c) Cleaning up and deleting inactive AWS services d) Deleting users that are not listed under AWS IAM e) Resetting the passwords to your AWS IAM and Root accounts f) Resetting the passwords to your email accounts g) Creating MFA on your email accounts h) Monitoring AWS service usage using Amazon CloudWatch i) Creating a Cost Anomaly Detection Report from AWS Cost Explorer If you would like to monitor unauthorized access by a user, you may also create an AWS CloudTrail trail. ### AWS CloudTrail AWS CloudTrail supports compliance by providing an audit trail of user actions and API usage: activity from [a user, role or AWS service](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html) is recorded as an event, with log data stored in an S3 bucket. CloudTrail can monitor and record user actions across all AWS services via a trail created in a single region or in multiple regions. 
### Architecture The architecture of a CloudTrail [workflow](https://aws.amazon.com/cloudtrail/) is shown below in the AWS diagram: ![architecture](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aoa3gda2ueo2kqabbemd.jpg) The workflow commences with: Step 1: Unusual user or API activity is recorded by CloudTrail Step 2: Event history logs are stored in an S3 bucket created by CloudTrail Step 3: Unusual user or API activity is monitored; the recorded event history for the last 90 days may be viewed by creating an optional [insights events dashboard](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/view-insights-events-console.html#downloading-insights-events) and may be downloaded as a CSV or JSON file Step 4: The CloudTrail console will analyze recent events ### Tutorial 1: Create a CloudTrail for multiple regions using the AWS Console Step 1: Ensure you have created an [AWS account](https://dev.to/aws-builders/getting-started-with-aws-a-sweet-journey-5cjj) Step 2: Create IAM permissions for [CloudTrail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/security_iam_id-based-policy-examples.html#grant-permissions-for-cloudtrail-administration) Step 3: Navigate to the search bar and type the word **CloudTrail** ![navigate to bar](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4ybjnkyjznvmnoh2gxua.jpg) Step 4: On the CloudTrail homepage, click the orange button **Create a trail** ![create trail](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j6696uoo6bxffv7gjifn.jpg) Step 5: Create a name for the trail that describes its purpose. Step 6: Click **Create new S3 bucket** and provide a name for the S3 bucket created by CloudTrail. Click **Next** The diagram below confirms the creation of the S3 bucket: ![trail created](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/321q9366efyy4ry64s4q.jpg) Step 7: Under **Choose log events** you may retain the default settings and select **Save Changes**. 
![management](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ta5effsgyrgox1x0tl98.jpg) Step 8: There is confirmation of the creation of the trail ![trail](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qf72fps78sdnmljpv219.jpg) You may navigate to recent trail events from the CloudTrail dashboard. ![trail dash](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aldn7nyo0n2v4toc9qsz.jpg) ### Tutorial 2 (Optional): Add Insights Events CloudTrail is available under the AWS Free Tier, but please review the pricing of Insights events [here](https://aws.amazon.com/cloudtrail/pricing/) as you may incur additional charges. Step 1: Click into the created trail, scroll down to **Events**, click **Edit**, then check the **Insights events** box. ![check box](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n6xju60vmrou4pz482ex.jpg) Step 2: Check the box **Insight Events** and then check the last two boxes 'API error rate' and 'API call rate'. ![click insight events](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cjvjj2wpu1f55ttdmwmc.jpg) After 24 hours you will be able to view insights from your dashboard. ![insight graph](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8eruqyrunpjcebmtksos.jpg) ### Tutorial 3 (Optional): Create CloudWatch on CloudTrail Navigate to **CloudWatch Logs**, select **Enabled** and click **Save Changes**. ![cloudwatch](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ctwkman4ssvjncvg5t5z.jpg) ### Conclusion You will have the peace of mind of letting AWS CloudTrail track all user and API activity across your AWS services in multiple regions, with log data stored in your S3 bucket for audit review. You will also have access to a dashboard to visualize the more granular insights you may need to understand the event history of your AWS account. Until next time, happy learning! 
😁 ### Resources [What is AWS CloudTrail?](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html) [Create a CloudTrail using AWS Console](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-tutorial.html) [Create Insight Events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-insights-events-with-cloudtrail.html) ### Join us for AWS re:Inforce conference Next week is AWS re:Inforce conference, 26-27 July 📆 A learning conference on compliance, privacy and identity 🔐🛠️ • Register to watch the keynote & sessions streamed live online 📺 • Link: https://reinforce.awsevents.com ![Reinforce](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sjrctympdeoqdzdir9zu.jpg)
abc_wendsss
1,145,183
Trying to work out when to use if-else, switch or conditional (tenary) operator - The importance of MDN
This is a post where I document as I learn type of thing. So if anything doesn't make sense or is...
0
2022-07-19T23:31:25
https://dev.to/vassovass/trying-to-work-out-when-to-use-if-else-switch-or-conditional-tenary-operator-the-importance-of-mdn-27on
else, javascript, beginners, codenewbie
This is a post where I document as I learn type of thing. So if anything doesn't make sense or is wrong, let me know in the comments. _Assumption is that you know that these options exist for writing conditionals._ I have just learnt switch statements and the conditional (ternary) operator, which are variations on the if-else conditional statement. So far my understanding is that the: 1. **Switch statement** is used as a cleaner way for comparing 1 value to multiple options ``` const birthYear = 2005; let century; let postLetters; switch (true) { case birthYear < 2000: century = 20; postLetters = "th"; console.log(`${century}${postLetters} century`); break; case birthYear > 2000: century = 21; postLetters = 'st'; console.log(`${century}${postLetters} century`); break; default: console.log("cusp of two centuries"); break; } ``` 2. **Conditional (ternary) operator** is a simpler/more readable way of conditionally declaring variables ``` const age = 25; age >= 18 ? console.log('He is of legal limit') : console.log('He is not of legal limit'); ``` 3. **If-else** can be used for all of the above, just not as neatly and readably. Example code: ``` const birthYear = 2995; let century; let postLetters; if (birthYear < 2000) { century = 20; postLetters = "th"; } else { century = 21; postLetters = "st"; } console.log(`${century}${postLetters} century`); ``` My issue is that I don't quite understand exactly when and where each should apply specifically, or at least have a rough idea. So I [Googled](https://www.google.com/search?q=when+do+you+use+if+else+switch+statement+and+conditional+operator+in+javascript&rlz=1C1CHBF_enZA970ZA970&oq=when+do+you+use+if+else%2C+switch+statement+and+conditional+op&aqs=chrome.2.69i57j33i160l4.45887j0j7&sourceid=chrome&ie=UTF-8) this and it led to ["Switch-Case or If-Else: Which One to Pick?"](https://dasha.ai/en-us/blog/javascript-if-else-or-switch-case) which doesn't address the conditional (ternary) operator. 
Some notes from this: - The switch statement is a multiple-choice selection statement. - If several cases match a value in a JavaScript switch, the first matching one is selected. (Taken from the comparison chart) - Switch statements are ideal for fixed data values. - if-else conditional branches are great for variable conditions that result in a Boolean. - if-else: Having different conditions is possible. - Switch: We can only have one expression. - Switch: Sequence of execution - executes one case after another till a break statement appears or until the end of the switch statement is reached. - if-else: Sequence of execution - either the if-statement or the else-statement is executed. - You should use switch statements for making decisions based on a single integer, enumerated value, or a string object. > **You can use if-else when:** > The condition result is a boolean. > The conditions are complex. For example, you have conditions with multiple logical operators. > **You can use a switch-case when:** > There are multiple choices for an expression. > The condition is based on a predefined set of values such as enums, constants, known types. For example, error codes, statuses, states, object types, etc. _From: ["Switch-Case or If-Else: Which One to Pick?"](https://dasha.ai/en-us/blog/javascript-if-else-or-switch-case)_ My takeaway from the above: - one condition in switch versus multiple in if-else - if you can use a switch statement instead of if-else, then do it. 
I then checked the references used for ["Switch-Case or If-Else: Which One to Pick?"](https://dasha.ai/en-us/blog/javascript-if-else-or-switch-case) and it led me to [MDN's conditionals page](https://developer.mozilla.org/en-US/docs/Learn/JavaScript/Building_blocks/conditionals) - if-else: mainly good for cases where you've got a couple of choices, and each one requires a reasonable amount of code to be run, and/or the conditions are complex (for example, multiple logical operators). - switch: where you just want to set a variable to a certain choice of value or print out a particular statement depending on a condition - Summary of switch statements: they take a single expression/value as an input, and then look through a number of choices until they find one that matches that value, executing the corresponding code that goes along with it. - Summary: The ternary or conditional operator is a small bit of syntax that tests a condition and returns one value/expression if it is true, and another if it is false — this can be useful in some situations, and can take up a lot less code than an if...else block if you have two choices that are chosen between via a true/false condition. The ternary operator is not just for setting variable values; you can also run functions or lines of code — anything you like. Basically, I should have gone to the MDN docs from the get-go. They have examples, explanations and all that jazz🎶
vassovass
1,148,200
Understanding Scope in JavaScript
This article assumes you have a good understanding of variables and the different ways of declaring...
0
2022-07-22T11:41:00
https://dev.to/sophie/understanding-scope-in-javascript-2pml
javascript, programming, webdev, beginners
This article assumes you have a good understanding of variables and the different ways of declaring variables in JavaScript, as those concepts won't be covered here. ## What is Scope? Scope is what determines the accessibility (visibility) of variables. Scoping controls where a variable is accessible and for how long its definition is maintained. There are three types of scope in JavaScript: Global Scope, Local Scope and Block Scope. ### Global Scope Global Scope is the default scope for all code running in script mode. Variables declared outside of any function or block of code become global variables and are accessible from anywhere in the code. There is one copy of a global variable in the entire program. ```js // global scope var name = 'Sophia' console.log(name); //Sophia function getName(){ console.log(name) //name is accessible here } getName() //Sophia ``` Variables defined outside of any function become global variables and can also be modified from any function. Take our earlier name variable: ```js // global scope var name = 'Sophia' console.log(name); //Sophia function getName(){ console.log(name) //name is accessible here } function changeName(){ name = 'Jane' console.log(name) //name is accessible here } getName() //Sophia changeName() //Jane ``` Changing the value of a global variable in any function will be reflected throughout the program. We can already see why it is not good practice to declare global variables unintentionally. ### Local Scope Local scope can also be referred to as function scope. A function creates a scope, and variables declared within that function become local to that function and cannot be accessed from outside it. 
Consider the following example: ```js //Local Scope function getName(){ var name = 'Sophia'; console.log(name) //name can only be used in getName } getName() //Sophia console.log(name); //Causes error ``` Local variables are only accessible inside their functions, which means variables with the same name can be used in different functions and their values won't be overwritten. Local variables are created when a function starts, and deleted once the function is completed. ```js //Local Scope function getName(){ var name = 'Sophia'; console.log(name) } function getOtherName(){ var name = 'Xavier'; console.log(name) } getName() //Sophia getOtherName() //Xavier ``` Now take a look at this code example: ```js function createCounter(){ count = 0; } function incrementCounter(){ return ++count; } ``` What is the scope of the `count` variable above, and would the `incrementCounter` function work? Before you answer, note that JavaScript does not require variables to be declared before they are used, and undeclared variables are put in the global scope. Let me know your answer in the comment section. ### Block Scope Prior to ES6 (2015), JavaScript only had Local (Function) Scope and Global Scope. With the introduction of `let` and `const`, Block Scope was provided. Block Scope is simply the area within `if` and `switch` statements, and loops. A block is generally contained within curly brackets `{ }`. Variables declared inside a `{ }` cannot be accessed from outside that block. ```js //Block Scope function getName(){ for(let count = 1; count <= 10; count +=1){ //count is accessible here } //count is not accessible here } ``` The count variable comes into existence when the loop block is entered and is destroyed once the loop is exited. Note that blocks only scope variables declared with `let` and `const`. In summary, scoping is what controls where variables are accessible. Global variables always exist and are accessible throughout the program. 
Block and Function variables are only accessible in the block/function they are declared in. **Note however that there is the concept of Closures which allow variables to be accessed outside of the local scope.** Closures will be discussed in more detail in my next article.
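To make the `var`/`let` contrast from the Block Scope section concrete, here is a minimal sketch (the `demo` function name is just for illustration) that you can paste into Node or a browser console:

```javascript
function demo() {
  if (true) {
    var a = 1; // var ignores the block: scoped to the whole function
    let b = 2; // let is scoped to this { } block only
  }
  console.log(a);        // 1 -- the var "leaked" out of the block
  console.log(typeof b); // "undefined" -- no b binding exists out here
  return [a, typeof b];
}

demo();
```

Note that `typeof b` is safe here: outside the block there is no `b` binding at all, so `typeof` simply reports "undefined" instead of throwing.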
sophie
1,145,462
How to use parameters in AWS CDK?
Defining CDK Parameters Use the optional Parameters section to customize your templates....
0
2022-07-20T06:52:38
https://www.codewithyou.com/blog/how-to-use-parameters-in-aws-cdk
cdk
## Defining CDK Parameters Use the optional Parameters section to customize your templates. Parameters enable you to input custom values to your template each time you create or update a stack. To define a parameter, you use the [CfnParameter](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.CfnParameter.html) construct. ```typescript // parameter of type String const applicationPrefix = new CfnParameter(this, 'prefix', { description: 'parameter of type String', type: 'String', default: 'sample', // default value of the parameter allowedPattern: '^[a-z0-9]*$', // allowed pattern for the parameter minLength: 3, // minimum length of the parameter maxLength: 20, // maximum length of the parameter }).valueAsString // get the value of the parameter as string console.log('application Prefix 👉', applicationPrefix) // parameter of type Number const applicationPort = new CfnParameter(this, 'port', { description: 'parameter of type Number', type: 'Number', minValue: 1, // minimum value of the parameter maxValue: 65535, // maximum value of the parameter allowedValues: ['8080', '8081'], // allowed values of the parameter }).valueAsNumber // get the value of the parameter as number console.log('application Port 👉', applicationPort) // parameter of type CommaDelimitedList const applicationDomains = new CfnParameter(this, 'domains', { description: 'parameter of type CommaDelimitedList', type: 'CommaDelimitedList', }).valueAsList // get the value of the parameter as list of strings console.log('application Domains 👉', applicationDomains) ``` **Note that the name (logical ID) of the parameter will derive from its name and location within the stack. Therefore, it is recommended that parameters are defined at the stack level.** Let's go over what we did in the code snippet above. We defined 3 parameters: - `prefix`: a parameter of type String, with a default value of `sample`, a minimum length of 3 and a maximum length of 20. - `port`: a parameter of type Number, with a minimum value of 1 and a maximum value of 65535. 
- `domains`: a parameter of type CommaDelimitedList. CloudFormation currently supports the following parameter types: - String – A literal string - Number – An integer or float - List – An array of integers or floats - CommaDelimitedList – An array of literal strings that are separated by commas - [AWS-specific parameter types](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html#aws-specific-parameter-types) - [SSM parameter types](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html#aws-ssm-parameter-types) If you try to deploy the stack without defining the parameters, the stack will fail: `CdkStarterStackStack failed: Error: The following CloudFormation Parameters are missing a value: port, domains` The parameter `prefix` is not required because it has a default value. ## Deploy a stack with parameters ```bash npx aws-cdk deploy \ --parameters prefix=demo \ --parameters port=8081 \ --parameters domains=www.codewithyou.com,www.freedevtool.com \ --outputs-file ./cdk-outputs.json ``` Note that we have to use the `--parameters` flag for every parameter we pass into the template. Now that we've successfully deployed our CDK application, we can inspect the parameters section in the CloudFormation console: ![Parameters in the CloudFormation console](https://www.codewithyou.com/_next/image?url=%2Fstatic%2Fimages%2Fhow-to-use-parameters-in-aws-cdk%2F1.png&w=1080&q=75) Or we can use the `cdk-outputs.json` file to get the values of the parameters: ```bash CdkStarterStackStack.applicationDomains = www.codewithyou.com,www.freedevtool.com CdkStarterStackStack.applicationPort = 8081 CdkStarterStackStack.applicationPrefix = demo ``` If you look into the `cdk.out/CdkStarterStackStack.template.json` file, you will see that the parameters are defined in the `Parameters` section. 
```json { "Parameters": { "prefix": { "Type": "String", "Default": "sample", "AllowedPattern": "^[a-z0-9]*$", "Description": "parameter of type String", "MaxLength": 20, "MinLength": 3 }, "port": { "Type": "Number", "AllowedValues": ["8080", "8081"], "Description": "parameter of type Number", "MaxValue": 65535, "MinValue": 1 }, "domains": { "Type": "CommaDelimitedList", "Description": "parameter of type CommaDelimitedList" }, "BootstrapVersion": { "Type": "AWS::SSM::Parameter::Value<String>", "Default": "/cdk-bootstrap/hnb659fds/version", "Description": "Version of the CDK Bootstrap resources in this environment, automatically retrieved from SSM Parameter Store. [cdk:skip]" } } } ``` ## Cleanup Don't forget to delete the stack before leaving this article. Thanks for reading! ```bash npx aws-cdk destroy ``` > **The code for this article is available on [GitHub](https://github.com/buithaibinh/aws-cdk-parameters-example)**
binhbv
1,145,610
Ultimate Guide on MUI X Component Library for React in 2023
One of the most popular library out there to work specifically on UIs for a React app has always been...
0
2022-07-20T11:04:00
https://blog.wrappixel.com/guide-on-mui-x-component-library-for-react/
programming, react, tutorial, productivity
One of the most popular libraries out there for building UIs in a React app has always been MUI (formerly Material UI & Material UI Components). It is so widely adopted across the industry that engineers at companies like Spotify, Amazon, and NASA use MUI to build their UIs. It provides a robust, customisable, and accessible solution for the common and foundational components that React developers don't need to write from scratch again and again.

You can use React Material UI components directly, or download React Material UI templates from useful resources like [WrapPixel](https://www.wrappixel.com/templates/category/react-material-ui-template/?ref=232) and [AdminMart](https://adminmart.com/templates/material-ui/?ref=6). The choice is yours.

However, in this article, we won't be talking about the usual MUI Core components. Here we will try to understand its big brother, the MUI X component library, and how to use some of its common components. Let's start.

## What is MUI X?

**MUI X** is a React UI library that is used to build complex and data-rich applications using a growing list of advanced React components. If your company's React application does the heavy lifting of rendering many data-driven components (one of the most common examples being the dashboard interface), then you can consider using MUI X over the regular MUI Core library.

One important thing to note before you start implementing this library and ship to production is that MUI X serves a part of its components under a commercial license. Please pay attention to the license [plans listed on their documentation](https://mui.com/components/data-grid/getting-started/#plans). To summarise, it has 3 plans:

**Community:** free forever and available as [@mui/x-data-grid](https://www.npmjs.com/package/@mui/x-data-grid).

**Pro and Premium:** a commercial license that you can get as [@mui/x-data-grid-pro](https://www.npmjs.com/package/@mui/x-data-grid-pro).
You can take a look at the feature comparison between these [here](https://mui.com/components/data-grid/getting-started/#feature-comparison), along with the pricing information on their [store](https://mui.com/store/items/material-ui-pro/). To follow this guide in practice, you can use prebuilt [react templates](https://www.wrappixel.com/templates/category/react-templates/?ref=232) to check how things work for real. That way you'll get to know how Material UI components are used.

## Features of the MUI X component library

But there's one question you will have in your mind: "Why should I use this library over others? What features does it provide to me and my team?" To answer this, here are some of the features it provides:

1. Powerful components: MUI X is not your go-to library for simple inputs, buttons, or cards; it comes with only some of the most powerful components for advanced use-cases. It enables apps to cover complex use-cases with several advanced components like the Data Grid and Date Range Picker components.
2. The Data Grid component: its flagship Data Grid component is so packed with exclusive features that it needs a special mention in the features section. It has all the features that you may need and expect in a data table: from editing and selection to sorting, pagination, filtering, and more!
3. Theming: Don't like the default Material Design based interface? Don't fret! You can easily use sophisticated theming features to make all the components look exactly like you want them to. Use your brand's design tokens to reflect the look and feel.
4. Open-sourced with roadmaps: The entire MUI X codebase can be found on its GitHub repository, where you can easily raise an issue, suggest new features, or track/mention bug fixes. Along with that, the team keeps shipping updates, with exciting components like Sparkline, Upload, Gauge, and more planned on their roadmap!
## Common MUI X components

Now is the exciting part. Let's get to know some of the common components provided by MUI X and how to use them.

[Data Grid component](https://mui.com/components/data-grid/): this component uses React and TypeScript to provide a seamless UX when it comes to manipulating large amounts of data. For real-time updates, it has an [API](https://mui.com/api/data-grid/data-grid/) with a focus on accessibility, theming, and fast performance.

The Data Grid component comes in two versions. First, the MIT version, which is a clean abstraction of the [Table demo](https://mui.com/components/tables/#sorting-amp-selecting) and can be imported in a React app as:

`import { DataGrid } from '@mui/x-data-grid';`

![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bc1stvzuxdw60xq4akx3.png)

The second is the Commercial version which, as shown in the docs, can hold over 3 million cells in total! Import it as:

`import { DataGridPro } from '@mui/x-data-grid-pro';`

![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8nk2ukcdozw4pk9rjcyy.png)

For a detailed comparison between the MIT and the Commercial Data Grid component, you can [visit the docs](https://mui.com/components/data-grid/#mit-vs-commercial).

[Date Range Picker component](https://mui.com/components/date-range-picker/): this is a MUI X Pro component which lets the user select a range of dates. You must have used some popular date management libraries for your React app, like [date-fns](https://date-fns.org/), [dayjs](https://github.com/iamkun/dayjs), [moment](https://momentjs.com/), etc. Guess what? This component supports them all, and other libraries too, thanks to the public dateAdapter interface it uses.

- With its [DatePicker API](https://mui.com/api/date-picker/) you can pass almost any prop to it. With that you get various usages for the Date Range Picker component.
The static mode is the most basic and common one:

![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e2q7mhg7j1u842g72xbo.png)

- Here you can see we can not only show the date range, but also show the current date of the month by default. It's possible to render any picker inline in the component, so we can also have custom popovers/modals after we select the date range.

[Tree View component](https://mui.com/components/tree-view/): the tree view, in its basic form, is used to represent a folder that can be expanded to reveal the contents of the folder, which may be files, folders, or both. One of the most common examples of this is the system file navigator, where we have multiple folders and files.

- This component supports multi-selection of items inside the tree and comes with an option to collapse or select all with one click (thanks to the controlled API it offers), along with rich objects and customisation, which can end up looking like this:

![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bu4zz1x4i4qkfew5xa8h.png)

## Why a React UI Components Library?

React UI components are used for a variety of purposes to enhance code structure and improve coding efficiency. A React UI component library makes them easy to adopt and is super helpful.

**Related Article:** [How to Use Material UI (MUI) Icons in React?](https://blog.wrappixel.com/how-to-use-mui-icons-in-react/?ref=232)

**Getting started with MUI X**

Now that we know some of its common components, let's put this into quick practice by learning how to set up MUI X for the Data Grid component.
**Step 1: Install the MUI X Data Grid component**

This component has a peer dependency on the core MUI package, which you can install with:

```bash
# with npm
npm install @mui/material

# with yarn
yarn add @mui/material
```

After that's done, let's install the Data Grid component by running the following command in your terminal window:

```bash
# with npm
npm install @mui/x-data-grid

# with yarn
yarn add @mui/x-data-grid
```

**Step 2: Import the component**

Open the React file where you want to add this component and import it as:

```javascript
import { DataGrid } from '@mui/x-data-grid';
```

**Step 3: Define the rows and columns**

For both rows and columns, you make an array of objects of key-value pairs. Here's how the rows can be constructed:

```tsx
const rows: GridRowsProp = [
  { id: 1, col1: 'Hello', col2: 'World' },
  { id: 2, col1: 'DataGridPro', col2: 'is Awesome' },
  { id: 3, col1: 'MUI', col2: 'is Amazing' },
];
```

For columns, you can define a set of attributes of the GridColDef interface, which are mapped to their corresponding rows via the field property:

```tsx
const columns: GridColDef[] = [
  { field: 'col1', headerName: 'Column 1', width: 150 },
  { field: 'col2', headerName: 'Column 2', width: 150 },
];
```

**Step 4: Render the component**

You can now pass on the rows and columns attributes as follows:

```tsx
<DataGrid rows={rows} columns={columns} />
```

This gives you an output similar to this:

![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g7fr4b9qp2xe11gpkxpb.png)

In this article you got to know about the MUI X component library for React, the features it provides, and some of the common components you are likely to use, with different variations of each of them; finally, we spun up a new MUI X component in a React app. This way you can get to know how useful **Material UI components** are and what benefits you can reap from them.
suniljoshi19
1,145,658
Best Web Application Testing Tools for 2022
Disclaimer: this article has been written in collaboration with Dzmitry Narkevich (LinkedIn), Senior...
0
2022-07-20T12:17:22
https://dev.to/nataliakharlanova/best-web-application-testing-tools-for-2022-2ioi
testing
**Disclaimer:** _this article has been written in collaboration with Dzmitry Narkevich ([LinkedIn](https://www.linkedin.com/in/dzmitry-narkevich-4a1989a7/)), Senior Manual QA Engineer at Exadel._

**Contents:**

* What Is Web Application Testing and Why It’s Important
* Criteria To Select Web Testing Tools
* Web Testing Tools Comparison
* Wrapping It Up

The tasks within software testing may vary significantly — from routinely checking components against mockups to dealing with complex business logic, CMS, analytics, and security. In this article, we will discuss [QA tips and tricks](https://exadel.com/services/quality-assurance/) and share the best web application testing tools that can be used in a wide range of projects.

The requirements for a web application’s functionality will depend on its purpose. In an online store, the most important actions will be adding items to the cart and paying for the order. News portals, on the other hand, should be designed for convenient information consumption and simultaneous use by a large number of users on different web browsers and operating systems. For example, if the user can't get to the shopping cart via Mozilla Firefox or read the news because of an advertising banner in the way on the mobile version of the browser, there are flaws in testing that need to be fixed.

## What Is Web Application Testing and Why It’s Important

Web applications help businesses interact with customers, and web testing helps QA ensure that this process remains as predictable and stable as possible. Web application testing can include functionality, performance, usability, security, compatibility, localization, internationalization, and more. The tests are usually defined by the client's and product's requirements and the product's domain.

A website is a brand’s metaphorical business card, a means of interacting with users. Therefore, the quality of the web application has a direct impact on the success of the product and, in turn, the business.
Right now, we will focus on testing the "visible" part of web applications — the GUI. We did a little research on web testing applications and created a list of the most important criteria when choosing a web testing tool for your project.

## Criteria To Select Web Testing Tools

1. **Cost.** It’s obvious that a testing application is usually selected according to the “price-quality” principle.
2. **Free trial and conditions.** This is a handy option for a pilot project to make sure that expectations meet requirements.
3. **Compatibility.** This option shows us on which platforms the application can be used. Nowadays, most applications are supported by all platforms, but this option can still be present in some tools and can be used for specific application needs.
4. **Usability.** Not all applications have a user-friendly interface, so it is better to study user reviews in advance or use the trial version, rather than committing to a difficult-to-use service.
5. **Ease of use.** Does it require additional training? Try to figure out how challenging it might be for a team of QAs to master using this product.
6. **Tech support.** This is the most useful factor to consider. During tool research, the QA team encounters problems or questions about the application, and technical support usually responds promptly to requests. In addition, vendors are likely to create new features based on customer feedback.
7. **Coverage (environment).** Usually, you need a wide selection of devices, browsers, and versions of browsers, though on certain projects, this is not as important.
8. **Inspecting elements/scale.** This is one of the most important options for web testing. It allows you to explore both the layout of the page and the size of its elements.
9. **Debugging/logging.** In most cases, we will consider this as access to the network and console tabs in the developer tools.
10. **Integration with other tools (end-to-end traceability).** For example, the ability to integrate an application with auto tests, a bug tracker, etc.

## Web Testing Tools Comparison

We’ll list several applications that are currently hot (at the start of 2022). We could make a more detailed comparison list of web application testing tools based on the principle of a cloud-based farm of devices and virtual machines with browsers — but in terms of manual web testing, they are generally very similar. Therefore, the table of criteria for these does not differ significantly. The applications we will consider are as follows:

* [Browserstack](https://www.browserstack.com/)
* [Saucelabs](https://saucelabs.com/)
* [Perfecto](https://www.perfecto.io/)
* [CrossBrowserTesting](https://smartbear.com/product/bitbar/)
* [Lambda Test](https://www.lambdatest.com/)
* [AWS Device Farm](https://aws.amazon.com/device-farm/)

See a full comparison list in the picture below:

![Web Testing Tools Comparison](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kjpwdvsqrj2e7awv5wnm.png)

_** prices by the end of 2021_

With the help of these tools, you can [test](https://exadel.com/news/test-management-tools-comparison-why-you-need-it-and-how-to-choose-the-right-one) a web application in different versions of browsers on desktops, tablets, and mobile phones. They can also be integrated with other applications for test automation, load testing, and other useful things. These web application testing tools are regularly updated, so the best way to understand them is by using their trial versions and communicating with managers, as well as checking reviews from other users.

The next category of web testing applications is those that help to check the correctness of the web page’s display. Some of them are typically used by software developers and some by QAs.
![Web Testing Tools Comparison](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1fs2teyprvli2jcmrx0p.png)

* [W3C Markup Validator](https://validator.w3.org/) A free QA web testing tool for checking the validity of web documents.
* [A Real Validator](http://arealvalidator.com/) A paid offline tool with similar functionality to the W3C Markup Validator. There is a 30-day trial version.
* [HTML Tidy](https://htmltidy.net/) A free, but no longer updated, online web application testing tool to automatically correct HTML pages.
* [Browser Dev Tools](https://developer.chrome.com/docs/devtools/) This standard tool is available to everyone and depends on the browser installed on the PC. In the capable hands of a manual tester, it is a good budget option for a small project. To master all its functions, just look at a couple of [Dev Tools tutorials](https://developer.chrome.com/docs/devtools/overview/).
* [Responsively.app](https://responsively.app/) A convenient and free open-source testing tool for developing and testing responsive web application design. You can choose from a library of standard screen sizes or create your own (even a TV screen). You can also simultaneously display the application on multiple screens, enjoy fast simultaneous page refresh, and other useful features.

## Wrapping It Up

One of the most important elements of successful web application testing is collecting product requirements. Relevant tools help to improve the quality of the QAs’ work, as well as boost their professional development. Even with limited resources, you can get everything right (we’re sure many of us have been in situations like this!). With our list of criteria and suggested web testing tools, you will be able to optimize your daily routine and find inspiration to explore existing [QA solutions](https://exadel.com/news/tag/qa-solutions/). Web applications are the most popular tools for eCommerce and retail.
The success of the customer's product depends on the quality of its functionality. That’s why web application testing will always be in demand: this area is guaranteed to develop further. Thanks to a variety of domains and technologies, projects can be exciting and help QAs expand their [skills](https://exadel.com/news/tag/qa-best-practices/).
nataliakharlanova
1,146,142
Email template service
If we are developing a more serious app, we will probably use some kind of email notification or...
19,524
2022-07-20T20:01:00
https://dev.to/maurerkrisztian/email-template-service-5mj
typescript, email, microservices, tempalte
If we are developing a more serious app, we will probably use some kind of email notification or newsletter, or even just a password change email. In this article, I will present my email / PDF template service.

Code: [https://github.com/MaurerKrisztian/template-api-tm](https://github.com/MaurerKrisztian/template-api-tm)

Demo: [https://template-api-tm-production.up.railway.app/template](https://template-api-tm-production.up.railway.app/template)

The goal was:

- by entering the template name and the data for the placeholders, I get back the filled-in HTML page

Thinking about this a step further, since I use it for emails 90% of the time, I built in the email send feature as well.

Example body:

```json
{
  "mailOptions": {
    "to": "test@gmail.com",
    "subject": "test email"
  },
  "template": {
    "name": "test_template",
    "data": {
      "title": "test title",
      "description": "This is a test description",
      "color": "#DEB887"
    }
  }
}
```

The following must be specified for each request: `mailOptions`, `template.name`, and `template.data`.

- Error management: The data part is easy to mess up, so every template has a validator.
- The other problem was that if we have several templates, we can't keep in mind what kind of data each one expects, and digging through the code is quite time-consuming. That's why I added an endpoint where you can view the most important information: available templates, example data, etc. [https://template-api-tm-production.up.railway.app/template](https://template-api-tm-production.up.railway.app/template)
- You can access a playground for testing templates with different data; at the press of a button, it rerenders the template.
- Emails can only work with inline CSS, but writing an entire template with inline CSS is not very nice and not maintainable, so the CSS specified in the style tag of the templates is automatically converted to inline styles when sent out.
- Finally, I also included PDF generation; you can specify whether you want HTML or PDF when requesting.
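As an illustration of the filling step for a body like the example above, here is a minimal, hypothetical sketch of a placeholder filler. This is not the project's actual code, and the `{{key}}` marker syntax is just an assumption for the sketch:

```javascript
// Hypothetical sketch: replace {{key}} markers in a template string with
// values from the "data" field of a request body like the example above.
function fillTemplate(template, data) {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in data ? String(data[key]) : match // leave unknown placeholders untouched
  );
}

const html = fillTemplate(
  '<h1 style="color: {{color}}">{{title}}</h1><p>{{description}}</p>',
  { title: 'test title', description: 'This is a test description', color: '#DEB887' }
);
console.log(html);
// <h1 style="color: #DEB887">test title</h1><p>This is a test description</p>
```

Leaving unknown placeholders untouched makes it easy for a separate validation step, like the one described above, to detect them.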
This is what the generation process looks like:

- data validation
- filling in the template with the data field
- converting to PDF (if necessary)
- template inline CSS transformation
- email sending

It's not perfect, but it's mine. 🙂

This is also my first article. :partying_face:
maurerkrisztian
1,147,450
Angular13 migrate Jest
Start Angular 13 ng i -g @angular/cli@13 ng new my-app npm i...
0
2022-08-17T05:52:00
https://dev.to/chalermporn/angular13-migrate-jest-3a68
# Start Angular 13

```bash
npm i -g @angular/cli@13
ng new my-app
npm i --legacy-peer-deps
# or
npm --loglevel info install --legacy-peer-deps --strict-ssl false
```

**migrate Jest**

```bash
npm remove -D @types/jasmine jasmine-core karma-jasmine karma-jasmine-html-reporter karma karma-chrome-launcher karma-coverage
rm -rf karma.conf.js src/test.ts
```

**bun remove**

```bash
bun remove -d @types/jasmine jasmine-core karma-jasmine karma-jasmine-html-reporter karma karma-chrome-launcher karma-coverage
```

ref: https://www.npmjs.com/package/@angular-builders/jest

```bash
npm i -D jest @types/jest @angular-builders/jest@13.x.x --legacy-peer-deps
```

**add scripts to package.json**

```json
"clear": "rm -rf node_modules *-lock.* *.lock",
"init": "npm i --legacy-peer-deps",
```

ref: https://github.com/briebug/jest-schematic

**benchmark bun vs npm vs yarn vs pnpm command line**

```bash
hyperfine "bun install --backend=hardlink" "yarn install --ignore-scripts" "npm install --no-scripts --ignore-scripts" "pnpm install --ignore-scripts" --prepare="rm -rf node_modules" --cleanup="rm -rf node_modules" --warmup=8
```
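The commands above install the Jest builder but don't show the wiring step: pointing the `test` target in `angular.json` at the Jest builder instead of Karma. Based on the `@angular-builders/jest` package referenced above, the fragment looks roughly like this (a sketch; see the package README for the exact options available):

```json
"architect": {
  "test": {
    "builder": "@angular-builders/jest:run",
    "options": {}
  }
}
```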
chalermporn
1,147,498
Hello Blog !
Hi there 🔥 ! Hello blog, hello Dev.to and welcome to you ! I'm camarm, a French 🇫🇷...
0
2022-07-21T09:24:00
https://www.camarm.dev/blog/1147498
hello, welcome
## Hi there 🔥 ! --- Hello blog, hello Dev.to and welcome to **you** ! I'm **camarm**, a French 🇫🇷 Developer 💻 fond of **Linux** 🐧, **server** infrastructures 💾 and development 👨‍💻 I will write for my **blog** 📰 articles about code/tricks/problems I encountered in my beginner/intermediate developer life (all articles will be sent in my [newsletter ✉️](https://www.camarm.dev/#contact)). You can read them [here 🔗](https://www.camarm.dev/blog) (thanks to dev.to 🎉 for hosting the articles!). Bye!
camarm
1,147,569
Which JavaScript framework is right for you? Next.js vs React.js
What is a JavaScript framework? A JavaScript framework is a tool that helps developers to...
0
2022-07-21T10:15:00
https://dev.to/omerwow/which-javascript-framework-is-right-for-you-nextjs-vs-reactjs-58i0
javascript, webdev, beginners, react
### What is a JavaScript framework?

A JavaScript framework is a tool that helps developers to create and structure code. A framework provides a predefined structure for code, which makes it easier to develop complex applications. There are many different JavaScript frameworks available, but some of the most popular include React.js, Angular.js, and Vue.js. Each framework has its own strengths and weaknesses, so it's important to choose the right one for your project.

### The benefits of using a JavaScript framework

Using a JavaScript framework can have many benefits. It can help you organize and automate your code, making it more readable and maintainable in the process. If you're looking to improve the quality of your code, a JavaScript framework is definitely worth considering.

### React.js vs Next.js: Features

When comparing React and Next.js, the first thing to look at is their features. Both frameworks have a lot to offer, but there are some key differences between them. It's important to note that Next.js is based on React and is meant to amplify its capabilities, providing tools to make the most out of React.

### React:

- Ease of use: React is easy to learn and use, making it a good choice for developers who are new to JavaScript frameworks.
- Component based: React allows you to create reusable components, which can be used to build complex user interfaces and applications.
- Virtual DOM: React uses a virtual DOM, which makes it fast and efficient when rendering large lists of data or complex user interfaces.
- JSX: JSX is a syntax extension of JavaScript that allows you to write HTML-like code inside JavaScript files.
- Data binding: React uses one-way data binding, which means that data flows in one direction, from the parent component to the child component.
- Extensibility: React can be extended with third-party libraries and plugins to add additional functionality if needed.
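The component model and one-way data binding from the list above can be illustrated even without React itself. Here is a hypothetical sketch using plain functions, where each "component" receives data (props) from its parent and returns a markup string (real React components return JSX, not strings):

```javascript
// Illustration only: plain functions standing in for React components.

// Child "component": receives props from its parent and renders them.
const Greeting = (props) => `<h1>Hello, ${props.name}!</h1>`;

// Parent "component": owns the data and passes it down to the child.
const App = (props) => `<div>${Greeting({ name: props.user })}</div>`;

// Data flows one way, parent -> child; the child never reaches back up.
console.log(App({ user: 'Ada' }));
// <div><h1>Hello, Ada!</h1></div>
```

Because the child only ever reads what it is given, you can reason about each component in isolation, which is a big part of what makes the component model attractive.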
### Next.js:

As I've written before, Next.js is built on top of React; here are some of the ways it changes and improves on it:

- Server-side rendering: Next.js comes with built-in support for server-side rendering, which can improve the performance of your application.
- Static site generation: With Next.js, you can generate static versions of your pages, which can be deployed without a server.
- Routing: Next.js has built-in routing capabilities, making it easy to create single-page applications (SPAs).
- TypeScript support: TypeScript is a typed superset of JavaScript that enables type checking and other advanced features. Next.js comes with built-in support for TypeScript out of the box.
- File system routing: With Next.js, you can define your routes directly in your file system, which can make things easier to manage if you have a large application.

### Next.js vs React.js: Which one is right for you?

There's no easy answer when it comes to choosing a JavaScript framework. It really depends on your specific needs and preferences. However, if you're looking for a lighthearted and witty article on the subject, then you can't go wrong with either Next.js or React.js.

Star our [Github repo](https://bit.ly/3QFgAUf) and join the discussion in our [Discord channel](https://bit.ly/3HQtlYo)! Test your API for free now at [BLST](https://www.blstsecurity.com/?promo=blst&domain=https://dev.to/Which_JavaScript_framework_is_right_for_you?_Nextjs_vs_Reactjs)!
omerwow
1,147,801
How to convert between any two units in Java using qudtlib
We just released version 1.0 of a new project, qudtlib, which offers unit conversion and related...
0
2022-07-21T15:10:00
https://dev.to/fkleedorfer/introducing-qudtlib-55jh
unitconversion, java, rdf, library
[We](https://www.researchstudio.at/studio/smart-applications-technologies/) just released version 1.0 of a new project, [qudtlib](https://github.com/qudtlib/qudtlib-java), which offers unit conversion and related functionality for over 1700 units, while being quite small: the jar-with-dependencies is ~400kB in size. The project is based on the [QUDT ontology](https://qudt.org). Currently, this is available in Java only, but more is to follow.

## For example

Let's say you want to convert feet into lightyears. Here's what your Java code would look like:

```java
Qudt.convert(
    new BigDecimal("1"),
    Qudt.Units.FT,
    Qudt.Units.LY);
--> 3.221738542107027933386435678630668E-17ly
```

OK, now how do we know that's correct? Easy: we know that light travels about one foot per nanosecond, so if we multiply that number by the number of nanoseconds in a year, which is...

```java
Qudt.convert(
    new BigDecimal("1"),
    Qudt.Units.YR,
    Qudt.Units.NanoSEC);
--> 31557600000000000ns
```

... we should get around 1. Let's try:

```java
Qudt.convert(
        new BigDecimal("1"),
        Qudt.Units.FT,
        Qudt.Units.LY).getValue()
    .multiply(
        Qudt.convert(
            new BigDecimal("1"),
            Qudt.Units.YR,
            Qudt.Units.NanoSEC)
        .getValue());
--> 1.01670336216396744710635782571955168476800000000000
```

That's about 1. Seems to work ;-)

## The Hack

The part of the QUDT ontology we need, including some additional data we generate, is about 3.5 MB in size. The libraries needed to access the data for providing the functionality we want add another 30+ MB. We felt that was too much for something as simple as converting Celsius into Fahrenheit (or feet to lightyears). So we moved all the heavy lifting into the preprocessor (i.e., Maven) to generate a Java class that instantiates all the objects representing the relevant part of the QUDT ontology upon startup. No SPARQL queries or other RDF munging at runtime, and not a single external runtime dependency. You're welcome.
The big upside of this approach is that it should be relatively easy to port the solution to other languages.

## What's on the Roadmap?

TypeScript.

## What else?

Well, you still don't know how many feet are in a light year, so why don't you [try qudtlib](https://github.com/qudtlib/qudtlib-java/tree/main/qudtlib-example) and find out?
fkleedorfer
1,147,809
15 Common Beginner JavaScript Mistakes
I've taught JavaScript to hundreds of people in-person and thousands online. During my time teaching,...
0
2022-08-09T18:46:46
https://www.jamesqquick.com/blog/15-common-beginner-javascript-mistakes
javascript, programming
--- title: 15 Common Beginner JavaScript Mistakes published: true date: 2022-07-19 19:49:00 UTC tags: javascript, programming canonical_url: https://www.jamesqquick.com/blog/15-common-beginner-javascript-mistakes --- I've taught JavaScript to hundreds of people in-person and thousands online. During my time teaching, I've seen beginner JavaScript developers make the same mistakes over and over....and over again. Here are the 15 most common beginner JavaScript mistakes I've seen. {% embed https://youtu.be/pWnJY_Wkde4 %} > [Find the full source code and examples in this Replit](https://join.replit.com/jamesqquick). ## 1. Not Returning From a Function If you've ever called a function and gotten a response of `undefined`, you've probably already seen this mistake. Functions in JavaScript return a value of `undefined` by default, which means if you don't explicitly return anything with the `return` keyword, `undefined` will be the result. So instead of this. ```javascript const getAddedValue = (a, b) => { a + b; } ``` Make sure you actually return the result. ```javascript const getAddedValue = (a, b) => { return a + b; } ``` ## 2. Loading JavaScript Scripts in HTML Before the DOM Is Loaded This is the only example that is going to touch a bit on how JavaScript interacts with HTML and the DOM. The key in how they interact is that the timing is important. If you want to reference a DOM element in your JavaScript, that DOM element must have already been loaded on the page. Let's look at an example. Let's say you have a bit of HTML that also imports a JavaScript file called `script.js`. Notice the JavaScript file is imported before the `h1` element with the id of `header`. 
```html
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width">
  <title>JavaScript Mistake Demo</title>
</head>
<body>
  <script src="script.js"></script>
  <h1 id="header">Hello world</h1>
</body>
</html>
```

Then, in your JavaScript, you want to get a reference to the `header` element and update its text.

```javascript
const header = document.getElementById('header');
header.innerText = "James is cool!";
```

In this case, you're going to get an error: `Cannot set properties of null (setting 'innerText')`. This is because the JavaScript is trying to get a reference to a DOM element that has not been loaded on the page yet. To fix this, it's common to import your JavaScript at the bottom of your HTML, after all of the HTML elements have been loaded.

```html
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width">
  <title>JavaScript Mistake Demo</title>
</head>
<body>
  <h1 id="header">Hello world</h1>
  <script src="script.js"></script>
</body>
</html>
```

This way, the JavaScript file is being loaded **after** the `header` element has already been loaded. It's important to note that there are also other options to explore in terms of importing your JavaScript files in your HTML. What we've shown is a basic example targeted at beginners, but there are additional keywords that you can look into for more details.

- [HTML script async Attribute](https://www.w3schools.com/tags/att_script_async.asp)
- [HTML script defer Attribute](https://www.w3schools.com/tags/att_script_defer.asp)

## 3. Reassigning Const Variables

This is such a common mistake that it continues to happen to me almost on a daily basis, so don't feel bad if it does to you. As of ES6 (JavaScript version released in 2015), there are two main ways to declare variables: `const` and `let`. These two have, for the most part, replaced the usage of `var` in modern JavaScript.
The difference between these two is that you **cannot reassign** `const` variables while you can with `let`. Here's a piece of sample code. ```javascript const count = 0; for (let i = 0; i < 10; i++) { count = count + i; } console.log(count); ``` In this case you'll get an error: `Assignment to constant variable`. If you're going to need to reassign a variable, make sure to use `let`. ```javascript let count = 0; for (let i = 0; i < 10; i++) { count = count + i; } console.log(count); ``` For more on const vs let vs var, [check out this article from freeCodeCamp](https://www.freecodecamp.org/news/var-let-and-const-whats-the-difference/). ## 4. Misunderstanding Variable Scope Variable scope is a tricky concept for new developers, especially in JavaScript. One common problem I see from learning developers is defining a variable inside of a function and expecting to be able to access that variable outside of the function it is defined in. That's a pretty generic problem I see. Let's take a look at a more JavaScript specific example though. This goes back to the usage of `const` and `let` vs `var`. I mentioned that `var` is a more outdated way of declaring variables, but you will still come across it in documentation and source code. Because of that, it's important to understand how scoping differs with it. If you define a variable using `var` inside of a for loop, for example, that variable is also accessible after the for loop. Here's a quick example. ```javascript for (let i = 0; i < 10; i++) { var count = 5; } console.log(count); ``` This code will actually work although it's counter-intuitive to me. The value of `count` is accessible after the for loop it was defined in. However, this does not work for `const` and `let` variables. If defined inside of a for loop, they are only accessible inside of that for loop. 
```javascript
for (let i = 0; i < 10; i++) {
  const count = 5;
}
console.log(count); //error: count is not defined
```

For more details, here's another [great article from freeCodeCamp](https://www.freecodecamp.org/news/var-let-and-const-whats-the-difference/#:~:text=var%20declarations%20are%20globally%20scoped%20or%20function%20scoped%20while%20let,the%20top%20of%20their%20scope.).

## 5. Poorly Named Variables

Poorly named variables make code **one million times more difficult to read and follow**. This is a mistake I've seen every single beginner developer that I've worked with make. Using names like `thing1`, `thing2`, and `anotherThing` gives no context as to what the variables are. This makes it much harder for me to help debug someone's code. Let's take a look at an example.

```javascript
const arr = ["James", "Jess", "Lily", "Sevi"];
let str = "";
for (let i = 0; i < arr.length; i++) {
  const tmp = arr[i];
  str += tmp[0];
}
console.log(str);
```

The variables `arr`, `str`, and `tmp` give no context for what the variables are and what the code is doing. Here's an example that uses naming conventions that add much more context.

```javascript
const names = ["James", "Jess", "Lily", "Sevi"];
let initials = "";
for (let i = 0; i < names.length; i++) {
  const name = names[i];
  initials += name[0];
}
console.log(initials);
```

One of the easiest pieces of advice I have is to name arrays as the pluralized version of the type of info they hold. For example, an array of names should be called `names`. Then, you can reference an individual item inside of that array as `name`. I commonly see an array named in the singular form, and it's incredibly confusing.

## 6. Too Large of Functions

This is a pretty meta mistake, but it's extremely common early on. When developers learn about functions in JavaScript, they often have the tendency to write one function and dump all the code they need into it.
However, it's better to start thinking about how to break functions into smaller pieces of code that are more readable, composable, etc. Instead of a code example, here are a few extra links you can read for more.

- [https://medium.com/swlh/how-long-should-functions-be-how-do-we-measure-it-cccbdcd8374c](https://medium.com/swlh/how-long-should-functions-be-how-do-we-measure-it-cccbdcd8374c)
- [https://softwareengineering.stackexchange.com/questions/133404/what-is-the-ideal-length-of-a-method-for-you](https://softwareengineering.stackexchange.com/questions/133404/what-is-the-ideal-length-of-a-method-for-you)

## 7. Unnecessary Else Statements

What often gets misunderstood is that a `return` statement inside of a function actually stops the execution of that function. In other words, after you return inside of a function, no other code inside of that function gets run. Because of that, I often see unnecessary else statements. Here's an example.

```javascript
const isOdd = (num) => {
  if (num % 2 === 1) {
    return true;
  } else {
    return false;
  }
}
```

Because we already have a `return` in the `if` condition, the `else` is unnecessary. We can simplify this code by removing the else condition.

```javascript
const isOdd = (num) => {
  if (num % 2 === 1) {
    return true;
  }
  return false;
}
```

You can even take this one step further and return the evaluated expression directly since `num % 2 === 1` returns a boolean.

```javascript
const isOdd = (num) => {
  return num % 2 === 1;
}
```

## 8. Not Short-circuiting Loops

Short-circuiting is another way to improve your for loops. Let's say you write a function where you need to determine whether an array of numbers includes an even number. Here's an example of how you might solve that.
```javascript
const hasEvenNumber = (numbersArr) => {
  let retVal;
  for (let i = 0; i < numbersArr.length; i++) {
    if (numbersArr[i] % 2 === 0) {
      retVal = true;
    } else {
      retVal = false;
    }
  }
  return retVal;
}
```

In this case, we are iterating through each number and updating a boolean accordingly based on if the number is even. Unfortunately, there's a problem. Let's say the first number is even, but then the next number is odd. Well, the boolean is updated to true for the even number, then back to false for the odd number. This would give the incorrect answer since you just want to know if the array has at least one even number (it should return true). In this case, after you see the first even number, you have your answer. You don't need to look anymore. This is where short-circuiting comes into play. After you see one even number, return. If you never see one, return `false` at the end.

```javascript
const hasEvenNumber = (numbersArr) => {
  for (let i = 0; i < numbersArr.length; i++) {
    if (numbersArr[i] % 2 === 0) {
      return true;
    }
  }
  return false;
}
```

This way, your logic is cleaner and you're avoiding unnecessarily iterating through additional items in the array.

## 9. Double vs Triple Equals

This is a huge topic of confusion in JavaScript. To start, it's important to know that the double equals **compares two values without taking into account their data types**. On the flip side, the triple equals **compares two values while taking into account their data types**. Since the double equals doesn't take into account the data types of the two values, JavaScript has to have some way to compare them. To do this, JavaScript will secretly cast (convert one data type to another) each value accordingly so they can be compared. This means that a number and a string could be considered to be double equal but not triple equal since they are different data types.
```javascript
const jamesAge = "31";
const jessAge = 31;
console.log(jamesAge == jessAge); //equal
console.log(jamesAge === jessAge); //not equal
```

General piece of advice: **use triple equals by default** unless you have a specific reason to use double equals. It's typically safer and helps avoid unintended results. For more details on how these equalities work, check out the [MDN docs](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Equality_comparisons_and_sameness).

## 10. Incorrect Object vs Primitive Comparisons

Another similar problem that I see is trying to compare primitives and objects for equality. There are seven primitive values in JavaScript.

- number
- string
- bigint
- boolean
- undefined
- symbol
- null

Everything else is represented as an object, and objects and primitives are referenced differently. In JavaScript, primitives are referenced **directly by their value**. On the other hand, objects more generically **reference a space in memory where the value(s) is stored**. This leads to some confusion when comparing objects and primitives. Consider this example.

```javascript
const name1 = "James";
const name2 = "James";
console.log(name1 === name2);
```

Since these two variables are primitives with the same value, they are considered equal, but what if we compare two objects with the same name property like so?

```javascript
const person1 = {
  name: "James"
}
const person2 = {
  name: "James"
}
console.log(person1 === person2);
```

In this case, these two objects are **not considered equal**. This is because the two variables are actually references to different "spaces in memory". Although they look the same, they aren't considered to be equal because of this. If you wanted to compare their equality more appropriately, you could compare their `name` properties directly.

```javascript
const person1 = {
  name: "James"
}
const person2 = {
  name: "James"
}
console.log(person1.name === person2.name);
```

## 11. Cannot Read Property of Undefined

As I'm writing this, I'm realizing that I didn't necessarily order this in terms of frequency. If I did, this one would rank much higher 🤣. Anyways, one thing that beginner (and experienced) developers forget to do is validate input parameters to a function. Just because you expect to receive an object doesn't mean you actually will. Let's look at this function that prints the name property of an object.

```javascript
const printNamedGreeting = (person) => {
  console.log(person.name)
}
```

Seems simple enough, but what happens if someone calls this function and doesn't pass anything? Well, you'll get a `cannot read property name of undefined` error. So, how do you improve on this code? Well, you'll need to validate that the input you receive matches your expectations. In this case, one simple (although not complete) solution would be to check that the input parameter is not "falsy".

```javascript
const printNamedGreeting = (person) => {
  if (!person) {
    return console.log("Invalid person object")
  }
  console.log(person.name)
}
```

In this case, if the `person` parameter is "falsy", we log out an error. Otherwise, we continue to log out the `name` property. Keep in mind, this is a quick and simple validation check, but in a real application you'd probably want something more in-depth. You'd want to validate that the input is actually an object (not a string for example) and that it also has a name property. Lastly, before you say it, yes, you could use TypeScript for this...I know! 😃

## 12. Mutation with Arrays

Mutation is an interesting concept in JavaScript. In general, mutation occurs when you call a function on a variable that then changes the variable itself in some way. For example, when you call the `sort` function on an array, the array itself is mutated. However, if you call the `map` function on an array, the original array remains intact. It is not mutated.
```javascript
const names = ["Jess", "James", "Sevi", "Lily"];
const copiedNames = [...names];
const sortedNames = names.sort();
console.log(names); //["James", "Jess", "Lily", "Sevi"]
console.log(copiedNames); //["Jess", "James", "Sevi", "Lily"]
console.log(sortedNames); //["James", "Jess", "Lily", "Sevi"]
```

In this example, calling `names.sort()` will mutate the original `names` array, meaning the `names` and `sortedNames` arrays will look the same. However, the `copiedNames` array is unaffected since it was a true copy of the original array. However, if you were to call the `map()` function on the array, the original `names` array is unaffected.

```javascript
const firstLettersArray = names.map(name => name[0]);
```

The main lesson here is to understand whether or not the functions that you call mutate the data you're working with. This should be listed in the documentation for whatever function you're calling.

## 13. Not Understanding Asynchronous Code

The asynchronous nature of JavaScript is one of the most difficult things to grasp for beginner developers. This isn't the place to go into a full tutorial of asynchronous JavaScript, so I'll leave you with the most common beginner example there is to introduce the topic. In what order will these log statements print out?

```javascript
console.log("1");
setTimeout(() => {
  console.log("2")
}, 0)
console.log("3");
```

The answer might surprise you if you're new to this. W3 Schools is always a favorite reference for me, so here's their [getting started doc for asynchronous JavaScript](https://www.w3schools.com/js/js_asynchronous.asp).

## 14. Not Handling Errors

The problem with lots of beginner tutorials is that they rarely cover how to handle errors in JavaScript. You only see the best case scenarios for code samples, and not the "what if something goes wrong" code. It's understandable because there's only so much you can cover in a beginner tutorial.
At some point, though, it's important to spend some time learning how to appropriately handle errors. Again, this might not be the spot for an in-depth tutorial on error handling, so I'll leave you with one piece of advice. As you're writing code, ask yourself " **what if something goes wrong here**". As long as you keep that thought in your head as you're learning, you'll be in a good spot! ## 15. Not Formatting Code This is another meta example, but it makes such a big difference. Similar to the variable naming section above, **poorly formatted code is incredibly difficult to read**. Developers are used to reading code in a standardized and formatted way. Because of that, it becomes seemingly impossible to read code that isn't formatted. I've experienced this over and over again in teaching. To answer what seems like an easy question, I find myself backtracking through the code to try and decipher what is going on. The lesson here is always spend a bit of time and energy on formatting your code correctly with indentation, spacing, etc. This will make it much easier for you to read as well as anyone else that might read your code.
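To make the formatting advice concrete, here is a small before-and-after sketch. The function and names below are our own illustration, not from the article; both versions behave identically, but only the second is pleasant to read.

```javascript
// Unformatted: no indentation, no spacing, cryptic names.
const f=(a)=>{let r=0;for(let i=0;i<a.length;i++){r+=a[i];}return r;}

// Formatted: consistent indentation and spacing, descriptive names.
const sumNumbers = (numbers) => {
  let total = 0;
  for (let i = 0; i < numbers.length; i++) {
    total += numbers[i];
  }
  return total;
};

console.log(sumNumbers([1, 2, 3])); // 6
```

In practice, a formatter such as Prettier can apply this kind of formatting automatically on every save, so there's little excuse to skip it.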
jamesqquick
1,147,984
How the Browser Reads CSS
Note prior to reading: This article requires no prior knowledge on any topic Reading time: 20-25...
0
2022-07-21T16:57:00
https://dev.to/worldwidewebdevelopment/how-the-browser-reads-css-20an
css, browser
**Note prior to reading:** This article requires no prior knowledge on any topic

---

**Reading time:** 20-25 minutes

Download this article for free right [here](https://drive.google.com/drive/u/1/folders/1FnrucEf88yIO5KY-tlCeWz_FbAF9ocns).

---

People can read the same thing in completely different ways. One person reads it in one way, another person in another way, and a third one might even read the same thing in a totally third way. And the way they read it might even change depending on the context in which it is written. And when it comes to the browser’s way of reading CSS file(s), the above is exactly what is going on.

First of all, what does CSS stand for? CSS stands for Cascading Style Sheets. The keyword here is “Cascading” since that’s what it does. Styling given to a parent element will be inherited by all its children, since the styling cascades down onto the child(ren) element(s) from its/their parent.

You might ask, “Isn’t CSS just CSS no matter what?”. In terms of its purpose, which is to style a web document, yes. But that is pretty much where CSS being “just CSS” ends. There are different ways to write it, and the browser treats the different ways of writing CSS differently.

There are three ways to write CSS: inline, internal, and external. Let’s break down what those fancy terms mean:

- Inline

This means to style an element by writing the styling for it directly inside the opening tag of that element in the HTML file by using the style attribute. Let’s say we want the text of the `<h1></h1>` to be red by styling it inline; we will do it like this, with the result of the text being red:

![Inline styling example:](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zn82drzl8ivbooxy7fp2.png)

But styling elements this way is very limited, and therefore something you should never do unless you have to test something styling-wise.
Inline styling only styles the one element on which it’s applied, and not any other element of the same type. This means that if I had another `<h1></h1>`, it would not be red, since it has no styling to it.

- Internal

This means to style the HTML inside `<style></style>` tags either right underneath the opening `<html>` tag or just above the closing `</body>` tag. This will work for really small applications but should never be used in applications planned to be put into production at some point. If we were to style an `<h1></h1>` with internal styling, it would look like one of the following two examples with [the first](https://jsfiddle.net/VS130300/dn49hou5/5/) and [the last](https://jsfiddle.net/VS130300/m5k0jx3a/) example both having the result of printing out red text:

![Internal styling examples:](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yvpmyluf6k6qvxmhqpk9.png)

Where this differs from the inline styling is that this applies to every instance of the element, meaning that in our example, any `<h1></h1>` would be red.

- External

This is undoubtedly the way to go. External means having your CSS in a separate file or multiple separate files that are all linked to your HTML file(s). This is the way that’s being used for production applications, the best way of dealing with CSS, and also the way you should do it.

So, how do we make an external CSS file, and how do we make it cooperate with the HTML file?

First of all, you have to create a CSS file. All CSS files have to have the .css extension at the end. That way, the file system on your machine knows it’s dealing with a CSS file. You can call your file whatever you want (in the example below, it’s called style.css and is not put in a folder but simply lies in the root of our project for simplicity purposes).

Thereafter, you have to have at least one HTML file, which has to end with the .html file extension, so that your system knows it’s dealing with an HTML file.
Do note that if you only have one HTML file, it has to be named index.html (this is the file that web servers serve by default as the root page). In your HTML file, the CSS file has to be linked inside the `<head></head>` tags. The way we link a CSS file to our HTML file is by using the `<link>` tag, in which we add a rel attribute of stylesheet and then the path to our CSS file. The path has to be exact. Otherwise, the browser is not going to find the CSS file. The HTML and CSS should look like this with the result of red text when [run](https://jsfiddle.net/VS130300/tmkh6ebz/15/):

![External styling example:](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i9zrw6vai4se7xc2r3e9.png)

Now you know the different ways available to write CSS, but that still doesn’t answer the question being the topic of the post: How does the browser read CSS?

Each way of writing it has its own ranking in terms of precedence. The way of writing CSS that the browser weighs the heaviest is ***inline***. This means that any styling that might have been added to the same element either internally or externally will be overridden by the inline styling whenever the same property is also written inline.

Let’s have a look at an example:

![Inline weight example:](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5u6jhpw5yt272rxot2u1.png)

If the example in the code is run, the result would be that the color of the text turns out to be red.

The way of writing CSS that the browser weighs a little less is ***internal***. This means that any styling applied externally would be overridden by the internally added CSS (make sure that you do not have any inline styling applied; otherwise it’s going to override the styling as described in the example above). Strictly speaking, when two rules have the same specificity, the one the browser reads last wins; internal styling usually wins over external styling because the `<style>` block typically comes after the `<link>` to the external stylesheet.
Let’s have a look at an example:

![Internal weight example](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nw0vkc34unv9t3vykjgd.png)

Here, when the code is [run](https://jsfiddle.net/VS130300/7vk6s4be/1/), the result will be that the text turns out to be green.

The way of writing CSS that the browser weighs the least, as you might have guessed by now, is ***external***. You might have thought at first that it’s the way the browser weighs the most, since that’s the way you’re used to writing your CSS, but that’s just not how things are. This means that the stylings written in the external CSS file are only applied if no inline or internal stylings for the same properties are found by the browser. So here, too, it’s important to make sure that you do not have any inline or internal stylings. Otherwise, they’ll override the stylings written in the external stylesheet.

Let’s have a look at an example:

![External weight example:](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4vc0rtu57cr4fcmghiwe.png)

Here, when the code is run, the result is that the text turns out to be red.

Now that you know how the browser reads CSS, it’s your time to paint the picture you want inside the frame!

That’s it!

> _- Remember to make your website just a little bit more awesome every day_.
worldwidewebdevelopment
1,148,209
Make a Ping Pong Game in Unity (Tutorial)
Whether you’re hosting a ping pong championship or need to change out assets for projects deployed on...
0
2022-07-21T23:08:09
https://dev.to/echo3d/make-a-ping-pong-game-in-unity-tutorial-23o
tutorial, unity3d, 2d, gamedev
Whether you’re hosting a ping pong championship or need to change out assets for projects deployed on multiple platforms, [echo3D](http://www.echo3d.co) makes it easy to do this by simply using an API key and updating in 1 place. Assets are stored on the [echo3D](http://www.echo3d.co) cloud and called at runtime so you can focus on other things. Like ping pong. Or trying out more open-source projects on our [Github](https://github.com/echo3Dco).

In this tutorial, you will play ping pong against another player (or the other side of your brain!)

![](https://miro.medium.com/max/1200/1*0Tv9KbwM0GJ0Ml9TETZxzQ.gif)

***

Click this link [**https://go.echo3d.co/b1Cm**](https://go.echo3d.co/b1Cm) or scan this QR code to see the ping pong championship logo in AR.

![](https://miro.medium.com/max/1400/1*iUBWFJ2J9_pUdghVajtoKg.png)

***

Register
========

Don’t have an echo3D account? Register for FREE at [echo3D](https://console.echo3d.co/#/auth/register).

Version
=======

[Unity 2020.3.25f1](https://unity3d.com/get-unity/download/archive)

Setup
=====

* Clone this [repo](https://github.com/echo3Dco/Unity-echo3D-Demo-PingPongChallenge/), see the full README
* [Import the Unity SDK](https://docs.echo3D.co/unity/installation)
* [Add the assets](https://docs.echo3D.co/quickstart/add-a-3d-model) to the echo3D console from the Unity ‘Models’ folder
* Uncheck the [Security](https://docs.echo3d.co/web-console/deliver-pages/security-page) box in your console
* In Unity, open the _GameBoard_ scene
* Drag the _echo3D_ script onto the Background object in the Hierarchy
* Add the API key and entry ID for each object in the Inspector

![](https://miro.medium.com/max/1400/1*0Z_JDVXFCJNPaE9CkL18BA.png)![](https://miro.medium.com/max/1040/1*-T_RzpcgJp9aAFEyxPYf-Q.png)

* In the Hierarchy, make sure the boxes are unchecked for the Sprite Renderer for Background
* Adjust the [metadata](https://docs.echo3d.co/unity/transforming-content) of the Background in the echo3D console.
These may work for you:

BackgroundImage: scale: .1, zAngle: 180, height: 1180, xAngle: 90, width: 1920, x: .1

Run
===

* Press _Play_ in Unity.

Left player: w and s keys

Right player: up and down keys

Switch Out Assets
=================

* Find a new asset in the echo3D console (You can upload your own or choose from our library)

![](https://miro.medium.com/max/1200/1*Ej527j0FyKH-Niin_tC4Ig.gif)

* Get the API key and entry IDs

![](https://miro.medium.com/max/1400/1*0Z_JDVXFCJNPaE9CkL18BA.png)

* Swap them out on the echo3D script in the Unity Hierarchy and see your assets change when you run in Play mode

![](https://miro.medium.com/max/664/1*LIJnwnUcRyRt4Eskmb6QyQ.png)

Learn More
==========

Refer to our [documentation](https://docs.echo3D.co/unity/) to learn more about how to use Unity and echo3D.

Support
=======

Feel free to reach out at [support@echo3D.co](mailto:support@echo3D.co) or join our [support channel on Slack](https://go.echo3D.co/join). For additional troubleshooting, debug [here](https://docs.echo3d.co/unity/troubleshooting).

Sources
=======

* Ping pong background asset: [freepik](https://www.freepik.com/free-vector/detailed-table-tennis-logo_9891974.htm#query=ping%20pong%20logo&position=4&from_view=search)
Screenshots =========== ![](https://miro.medium.com/max/1400/1*mmoT5-ODKCSZRUjAX3bBTQ.png) More Tutorials ============== For more easy tutorials, check these out: * [Build a 3D Balloon Pop game in Unity](https://medium.com/echo3d/build-a-balloon-pop-game-unity-free-eed9fe7d8be9) * [Get a Quarantine Dog in AR](https://medium.com/echo3d/get-a-quarantine-dog-in-ar-8383ea55376b) * [How to Create 3D Content and See It In AR](https://medium.com/echo3d/how-to-create-3d-content-and-see-it-in-ar-free-no-coding-required-369e5b4a4b3e) *** >_**echo3D** ([www.echo3D.co](http://www.echo3d.co/); Techstars 19’) is a cloud platform for 3D/AR/VR that provides tools and network infrastructure to help developers & companies quickly build and deploy 3D apps, games, and content._ ![](https://miro.medium.com/max/1400/1*cNrRDT0JEpkcy-zG6KBNgw.png)
_echo3d_
1,148,359
Getting Started with the Angular Toolbar Component
A quick overview on how to create and configure the Syncfusion Angular Toolbar component in an...
0
2022-07-22T04:36:40
https://dev.to/syncfusion/getting-started-with-the-angular-toolbar-component-2422
angular, webdev
A quick overview on how to create and configure the Syncfusion [Angular Toolbar component](https://www.syncfusion.com/angular-ui-components/angular-toolbar) in an Angular project. Angular Toolbar is a collection of clickable icons, buttons, or some input elements that perform specific functions when you click on them. In this video, you will learn how to add a simple Angular Toolbar to an Angular app and then how to set prefix icons, separators, and display modes. Finally, you will learn how to add input-based components like Numeric Textbox and Dropdown List to the Angular Toolbar. Product overview: https://www.syncfusion.com/angular-ui-components/angular-toolbar Explore tutorial videos: https://www.syncfusion.com/tutorial-videos Download an example from GitHub: https://github.com/SyncfusionExamples/getting-started-with-the-angular-toolbar-component {% youtube vc65VHyEk_8 %}
techguy
1,148,387
100. Leetcode Solution in Cpp
class Solution { public: bool isSameTree(TreeNode *p, TreeNode *q) { // Start typing...
0
2022-07-22T05:52:29
https://dev.to/chiki1601/100-leetcode-solution-in-cpp-3046
cpp
```cpp
class Solution {
public:
    // Two trees are the same if both are empty, or if both are non-empty,
    // hold the same value, and have identical left and right subtrees.
    bool isSameTree(TreeNode *p, TreeNode *q) {
        if (p == NULL && q == NULL) return true;   // both empty
        if (p == NULL || q == NULL) return false;  // only one is empty
        return (p->val == q->val) &&
               isSameTree(p->left, q->left) &&
               isSameTree(p->right, q->right);
    }
};
```

#leetcode #challenge

Here is the link for the problem: https://leetcode.com/problems/same-tree/
chiki1601
1,148,531
Push your code on Github like Pro
I never liked to typing 4 commands each time for pushing your code to github. So i have created a...
0
2022-07-22T08:21:23
https://dev.to/ritik_20/push-your-code-on-github-like-pro-1g6a
I never liked typing 4 commands each time to push my code to GitHub. So I have created an alias for that, with random commit names as well.

Paste the code below into your default `shellrc` file (e.g. `~/.bashrc` or `~/.zshrc`).

```bash
# Get a random gibberish word to use as the commit message
get_random(){
  random_commit=$(shuf -n 1 /usr/share/dict/cracklib-small)
}

alias gpm='get_random; git add .; git commit -m "${random_commit}"; git push origin main'
```

Make sure you have a file located at `/usr/share/dict/cracklib-small`. On my system (Manjaro), the `cracklib-small` file consists of lots of English dictionary words. On your system it may be different.
ritik_20
1,148,573
Methods On An Array
What is an array? An array is an object, it is a data structure that is used to store a bunch of...
0
2022-07-22T09:32:48
https://dev.to/ananyamadhu08/methods-on-an-array-58n0
What is an array? An array is an object: a data structure that is used to store a bunch of elements. Arrays can consist of numbers, booleans, strings, objects and even other arrays. An array can be created with the following syntax -

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l8lux1gyctqu408y24te.png)

Let us discuss different methods that can be used to manipulate arrays -

1) **push()** - This method can be used to insert an element into an array. This method adds an element to the end of the array.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sqp1lfpfl1ons0fenjxg.png)

2) **unshift()** - This method can be used to insert an element into an array. This method adds an element to the beginning of the array.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vhq70rzbcpu5x2b008b9.png)

3) **pop()** - This method allows us to remove an element from an array. This method pops off the last element of the given array. This method returns to us the element that has been removed while also manipulating the original array.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e3et8x8jnltpqj0zhpdg.png)

4) **shift()** - This method is like the pop method except that this method allows us to remove the first item of an array.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ev2xzfd0vjvc6fzz5mrq.png)

5) **slice()** - This is a method on an array that can be used to return selected elements of an array. This method can also be used to clone an array as it returns a new array instead of modifying the original array.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yyay9q9w5911byakc9vl.png)

6) **length** - This is a property, not a method; it can be used to check the number of elements present in the array.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lw0pa111m2z88w8vbccu.png)

7) **concat()** - This method allows us to join two or more arrays. This method returns to us a new array of the merged arrays.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mlmu4o6juqg91d9qvq5m.png)

8) **join()** - This method allows us to join all the elements of an array. This method returns to us a string of all the elements in the array separated by a separator; the default separator is a comma.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u5xribyfwidagasu8l30.png)

9) **fill()** - This method on an array allows us to replace all or a few selected items in the array with a static value. This method manipulates the original array.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7xlbdlp7dqd2bw825nh2.png)

10) **includes()** - This method allows us to check whether or not an element is present in a given array. This method returns to us either true or false.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ahwill6uy9skv6i2wlck.png)

11) **indexOf()** - Using this method we can determine the index of any given element in an array; it returns to us the index of the element.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ilziijs7svd52w1wg8ia.png)

12) **reverse()** - This method allows us to reverse the order of elements in an array.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/szru9s1eqz6oifrs1lqe.png)

13) **sort()** - This method converts all the elements of an array into strings and sorts them; the default sort order is ascending. This method manipulates and changes the original array.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b1cbn0mutdq2bibaythr.png) 14) **splice()** - This method helps us update, add or delete items from an array. This method manipulates the original array. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ibzjrekg910ksqjf1rfz.png) 15) **map()** - This method allows us to run some logic for each and every element in the array. It returns a new array after applying the logic to the elements. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j37zqs3aqxs0n4v6iih7.png) With this we come to an end of this blog, but there are several more methods that can be used to manipulate arrays; these are just some of the methods out there. Now that we know about all these methods, we can use them to manipulate arrays the next time we're coding!
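As a quick consolidated sketch (not from the screenshots above; the example values are made up), here is how several of these methods behave in plain JavaScript:

```javascript
// A small tour of the array methods discussed above.
const fruits = ['apple', 'banana'];

fruits.push('cherry');        // add to the end   -> ['apple', 'banana', 'cherry']
fruits.unshift('mango');      // add to the front -> ['mango', 'apple', 'banana', 'cherry']

const last = fruits.pop();    // removes and returns 'cherry'
const first = fruits.shift(); // removes and returns 'mango'

const copy = fruits.slice();          // shallow clone: ['apple', 'banana']
const merged = fruits.concat([1, 2]); // new array: ['apple', 'banana', 1, 2]

console.log(fruits.length);            // 2
console.log(fruits.join('-'));         // 'apple-banana'
console.log(fruits.includes('apple')); // true
console.log(fruits.indexOf('banana')); // 1

const doubled = [10, 20].map(n => n * 2); // [20, 40] - original array untouched
console.log(doubled);
```

Note how `push`, `unshift`, `pop` and `shift` mutate `fruits`, while `slice`, `concat` and `map` return new arrays.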
ananyamadhu08
1,148,749
What are you trying to test?
The importance of testing is well documented and there are many resources out there describing the...
0
2022-07-22T12:18:44
https://dev.to/mbarzeev/what-are-you-trying-to-test-2161
webdev, beginners, javascript, react
The importance of testing is well documented and there are many resources out there describing the benefits of maintaining a good and balanced test coverage for your code base. Thankfully, writing tests has become a standard in our industry, but sometimes the need (or requirement) to write them obscures one’s vision of what exactly should be tested. From time to time I get asked to help with a certain test, mainly on mocking practices (I actually wrote a [Jest Mocking Cheatsheet](https://dev.to/mbarzeev/jest-mocking-cheatsheet-fca) not too long ago, just to keep a reference for myself) and I find that after being presented with the immediate issue, the first question I usually ask is: > “What are you trying to test?” This question arises almost every time, and it has the potential to untangle the problem and result in a much simpler yet more efficient solution. I thought it would be worth sharing how it does that with you - Developers, including yours truly, have a hard time focusing on what needs to be tested, since their focus is on the entire feature and how the user interacts with it. This focus makes it hard to pluck out the very thing you wish to test, and you find yourself testing the entire jungle just because you wanted to check whether a certain tree has a certain fruit. *** ## Understanding the type of test you’re writing Usually the answer to the question “what are you trying to test?” will describe a set of conditions which result in a state you would like to test. An example might be: >“I would like to test that when the user clicks a button on my component, a modal opens with a confirmation message, and when the user confirms, a certain text appears on the component” So… what type of test is that? The flow described above goes through different units - the component, the modal and then back to the component. Clearly we’re not dealing with a “unit test” here. 
This appears to be more of an integration test where different services/components are integrated to fulfill a user interaction flow. Now, before jumping into how to “mimic” this flow in our test we need to ask whether that’s our intention - writing an integration test. In many cases the answer is “no, I just want to make sure that when the application state is so-and-so my component displays a certain text”. It doesn't really matter what set that state. I believe that the art of writing a good unit-test is peeling off redundant setups to get the test as focused as possible on what we want to check. If the only reason for this test is “making sure that when the application state is so-and-so my component displays a certain text”, what the tester needs to focus on is creating that state artificially and then checking the component. Obviously there is room for a complementary integration test as well - now that you know that your component responds as expected to state changes, let’s change the state from another component or service and see if it all works as expected. >It’s not “either this or that” Unit tests and integration tests testing the same area can and should live side by side. An integration test, as good as it might be, is no replacement for a good comprehensive unit-test, in my eyes, and vice versa. Those memes where you see the caption “unit tests passing, no integration tests” are funny but tell a true story - you should have both. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xutcr3x962k1td5szpck.png) So you decided that the type of test you need is a test which does have several units integrated under it. What should it be - an integration or E2E test? Is there a difference? ## Do you need a “Pure Integration” or E2E test? I see integration tests as more suitable for checking communication between different services, API-to-API communication without any external user intervention. Let’s call these “pure integration tests” for now. 
On the other hand, any test which involves user interaction, such as the example described above, is worth an E2E test. I think that although modern testing libraries give us the tools to test these sorts of interaction flows, a real E2E test which runs on a real browser with the real full application set up and ready is much more reliable than mimicking the entire application runtime. ### The cost of writing an E2E test as a unit test Since it is objectively harder to write and maintain integration or E2E tests, developers tend to write the equivalents as unit tests. What I mean by that is that they are attempting to simulate the user interaction with the tools available (such as [react-testing-library](https://testing-library.com/docs/react-testing-library/intro/)) and jump from a component to a modal, to another component just to make sure the last component displays what it should. I find it to be a bad practice and the immediate result of this approach is having **slow** and **complex** unit tests which are very hard to maintain. In many cases these sorts of tests require the author to create an elaborate setup and be able to reason about it later when the test fails. A test which relies on a “fake” application state is less reliable than a test which runs on the actual live application. ## Are you testing the application state? In many cases, tests tend to change the application's “fake” state and then read from it to modify a component’s behavior, but was that your intention? If you simply wanted to make sure a component behaves in a certain way given a certain state, it is not the state you are testing - it is the component. In most cases a better approach would be to hand the “state” as an argument (prop for us React-ers) to the component. This sort of thing is where tests help you design your code better. The test “forces” you to design your component to be testable, which translates into having your component avoid side-effects as much as possible. 
## Are you testing a 3rd party API? Sometimes you realize that the test relies on a 3rd party service call. This can be a certain library you’re using or even the browser's native API. Those 3rd parties are not your code and you do not need to make sure they work, but rather **assume** they work and mock them to your test’s needs. To put it in simpler words - you don’t need a browser to have a `document` object on your global scope and you don’t need to import `lodash` to have a mock implementation for `_.dropRightWhile()`. Again, peeling off the irrelevant stuff from your test is crucial. ## Wrapping up It is important to insist on asking these questions when setting out to write a new test. If you understand the type of test you’re about to write and peel off the things that are not relevant to it, the outcome will be much cleaner, more precise and more efficient. It will give you better reliability and will be easier to maintain in the future. Do you agree? If you have any comments or questions be sure to leave them in the comments below so we can all learn from them. *Hey! If you liked what you've just read check out <a href="https://twitter.com/mattibarzeev?ref_src=twsrc%5Etfw" class="twitter-follow-button" data-show-count="false">@mattibarzeev</a><script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script> on Twitter* :beers: <small><small><small>Photo by <a href="https://unsplash.com/es/@srkraakmo?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Stephen Kraakmo</a> on <a href="https://unsplash.com/s/photos/focus?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Unsplash</a></small></small></small>
mbarzeev
1,149,252
How To Create Pages In Shopify Store
Shopify is an online Ecommerce platform which allows users to create online store. Using Shopify you...
19,007
2022-07-23T04:39:41
https://dev.to/imveernvkr/how-to-create-pages-in-shopify-store-56li
shopify, beginners, tutorial, webdev
Shopify is an online ecommerce platform which allows users to create an online store. Using Shopify you can sell digital products and physical products, and you can also start a drop shipping business. Shopify is one of the biggest ecommerce platforms and it is used by some of the biggest companies across the world. I have shown the complete process to create and edit pages in a Shopify store, and also to add Google Maps to our Shopify website. This is helpful for beginners and also for users who are already using Shopify but are less familiar with it. To watch the video on creating pages in Shopify, click on the image shown below or click on the following link -> [How To Create Pages On Shopify Store](https://youtu.be/lhHw-GP3UnM) [![How To Create Pages On Shopify Store](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7qvrhcrscirmkmg3ltty.png)](https://youtu.be/lhHw-GP3UnM)
imveernvkr
1,149,393
A Central Clock for Laravel Web Applications with ReactPHP
How to implement a central clock for a Laravel web application? Why A Central Clock is...
0
2022-07-23T08:27:00
https://yoramkornatzky.com/post/a-central-clock-for-laravel-web-applications-with-reactphp
php, laravel, reactphp, realtimeweb
How to implement a central clock for a [Laravel](https://laravel.com) web application?

# Why A Central Clock is Needed?

Quizzes, time tracking, online education, Pomodoro timers, auctions, and many other web applications need a central clock. The time on this clock is what all users see on their web page. Such a central clock has to be implemented on the server, as this is the only place we can rely on. It needs to be transmitted to the user's web page.

# Running the Clock with ReactPHP

Use [ReactPHP](https://reactphp.org) to implement a timer, either periodic,

    $timer = $loop->addPeriodicTimer($time, function() use(&$task) {
        broadcast(new TimeSignal(json_encode(...)));
    });

or one time,

    $timer = $loop->addTimer($time, function() use(&$task) {
        broadcast(new TimedEvent(json_encode(...)));
    });

where `TimeSignal` and `TimedEvent` are Laravel events. Events are broadcast using [Laravel Echo Server](https://github.com/tlaverdure/laravel-echo-server), [Laravel Websockets](https://beyondco.de/docs/laravel-websockets/getting-started/introduction), or [Soketi](https://docs.soketi.app). Say on a channel `time`.

# Processing Time Signals in Front-End

## JavaScript

Listen for events with [Laravel Echo](https://github.com/laravel/echo),

    window.Echo.channel('time')
        .listen('TimeSignal', (e) => {
        })
        .listen('TimedEvent', (e) => {
        });

## Livewire

Define in the [Livewire](https://laravel-livewire.com) component the listeners:

    protected $listeners = [
        'echo:time,TimeSignal' => 'processTimeSignal',
        'echo:time,TimedEvent' => 'processTimedEvent',
    ];
kornatzky
1,149,445
Simple Kqueue Echo Server in C
There are three important functions when use kqueue. int kqueue(void); int kevent(int kq, ...
0
2022-07-23T13:28:00
https://dev.to/shwezhu/simple-kqueue-echo-server-in-c-1d8b
socket
![](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z6c8msb5dhaucnot951q.png)

---

There are three important functions when using kqueue.

```c
int kqueue(void);

int kevent(int kq, const struct kevent* changelist, int nchanges,
           struct kevent *eventlist, int nevents,
           const struct timespec *timeout);

EV_SET(kev, ident, filter, flags, fflags, data, udata);
```

> The `kqueue()` system call creates a new **kernel event queue** and returns a descriptor.
> The `kevent()` system call is used to **register events with the queue**, and return any pending events to the user.

+ **change kqueue**

When you want to change the queue, for example to delete a socket file descriptor from the **kqueue** or to add a listening socket to it, you can call `kevent()` like below:

```c
int kq = kqueue();
struct kevent event;
EV_SET(&event, listen_fd, EVFILT_READ, EV_ADD, 0, 0, NULL);
kevent(kq, &event, 1, NULL, 0, NULL);
```

As the code above shows, changing (adding/deleting) an event in a **kqueue** takes two steps: first, initialize a **kevent** with the file descriptor we want to monitor; second, add this **kevent** to the **kqueue** using `kevent()`.

+ **monitor kqueue**

When we want to monitor the kqueue, we don't have to provide the `changelist` and `nchanges` parameters of `kevent()`. We just need to provide these two parameters: `eventlist` and `nevents`, like below:

```c
struct kevent event_list[MAX_EVENTS];
int num_events = kevent(kq, NULL, 0, event_list, MAX_EVENTS, NULL);
```

+ **remove kevent**

When a client disconnects, we no longer care about (monitor) the **kevent** associated with this client, so we need to remove this **kevent** from the **kqueue**. But we don't have to call `EV_SET()` and `kevent()`; we just need to close the file descriptor, because the man page says:

> Calling close() on a file descriptor will remove any kevents that reference the descriptor. 
+ **source code** {% embed https://gist.github.com/shwezhu/96299b523e41a5a6bf01b634dad4ce25 %} + **reference** [kqueue](https://www.freebsd.org/cgi/man.cgi?query=kqueue&apropos=0&sektion=0&format=html) [Streaming Server Using Kqueue](https://nima101.github.io/kqueue_server)
shwezhu
1,149,592
Top 67 Youtube Channels for all Developers in 2022
1. Traversy Media Visit this channel : click here =&gt; Traversy Media 2. Tech...
0
2022-07-23T16:42:00
https://dev.to/lodstare/top-67-youtube-channels-for-all-developers-in-2022-1mck
javascript, programming, beginners, tutorial
## 1. Traversy Media ![Traversy Media](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h69l6kaahkki3t8to6y8.jpg) Visit this channel : click here => [Traversy Media](https://www.youtube.com/c/TraversyMedia/playlists) ## 2. Tech With Tim ![Tech With Tim](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yed9i13gmagdgwt0u8cn.jpg) Visit this channel : click here => [Tech With Tim](https://www.youtube.com/c/TechWithTim/playlists) ## 3. ProgrammingKnowledge ![ProgrammingKnowledge](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3vrf3j8watuignto5wxo.jpg) Visit this channel : click here => [ProgrammingKnowledge](https://www.youtube.com/c/ProgrammingKnowledge/playlists) ## 4. Programming with Mosh ![Programming with Mosh](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u5efjj7nevfm8mdky4l3.jpg) Visit this channel : click here => [Programming with Mosh](https://www.youtube.com/c/programmingwithmosh/playlists) ## 5. Fireship ![Fireship](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jb8xyxzt48s88uwzijgv.jpg) Visit this channel : click here => [Fireship](https://www.youtube.com/c/Fireship/playlists) ## 6. freeCodeCamp.org ![freeCodeCamp.org](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l4nx63ifvx8m6tgziuus.jpg) Visit this channel : click here => [freeCodeCamp.org](https://www.youtube.com/c/Freecodecamp/playlists) ## 7. Derek Banas ![Derek Banas](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1bw7syod0urx3yil4819.jpg) Visit this channel : click here => [Derek Banas](https://www.youtube.com/c/derekbanas/playlists) ## 8. The Coding Train ![The Coding Train](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/54ro1gvcoykl4xb63h0z.jpg) Visit this channel : click here => [The Coding Train](https://www.youtube.com/c/TheCodingTrain/playlists) ## 9. 
thenewboston ![thenewboston](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ckj90u7uo3hndf2sfpqg.jpg) Visit this channel : click here => [thenewboston](https://www.youtube.com/user/thenewboston/playlists) ## 10. Tutorials Point (India) Ltd. ![Tutorials Point (India) Ltd.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/apjwz5t2x7z5hehijlq3.jpg) Visit this channel : click here => [Tutorials Point (India) Ltd.](https://www.youtube.com/channel/UCVLbzhxVTiTLiVKeGV7WEBg/playlists) ## 11. Web Dev Simplified ![Web Dev Simplified](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2gkfdirzzck1qcst1njq.jpg) Visit this channel : click here => [Web Dev Simplified](https://www.youtube.com/c/WebDevSimplified/playlists) ## 12. The Net Ninja ![The Net Ninja](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/axk3xcda5349c2uklz48.jpg) Visit this channel : click here => [The Net Ninja](https://www.youtube.com/c/TheNetNinja/playlists) ## 13. sentdex ![sentdex](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6cr158vd7kk3w53tn4x5.jpg) Visit this channel : click here => [sentdex](https://www.youtube.com/c/sentdex/playlists) ## 14. DesignCourse ![DesignCourse](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ilguk9zqs73k4dq9vggd.jpg) Visit this channel : click here => [DesignCourse](https://www.youtube.com/c/DesignCourse/playlists) ## 15. Academind ![Academind](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b4gv76syw2s1x8osmji3.jpg) Visit this channel : click here => [Academind](https://www.youtube.com/c/Academind/playlists) ## 16. Adam Khoury ![Adam Khoury](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ky3n7xnnzhvw222rz1ls.jpg) Visit this channel : click here => [Adam Khoury](https://www.youtube.com/c/AdamKhoury/playlists) ## 17. 
Adrian Twarog ![Adrian Twarog](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4akoe55y2lyx9qudytkz.jpg) Visit this channel : click here => [Adrian Twarog](https://www.youtube.com/c/AdrianTwarog/playlists) ## 18. Ben Awad ![Ben Awad](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fpaft9x0cg3mrldulrqy.jpg) Visit this channel : click here => [Ben Awad](https://www.youtube.com/c/BenAwad97/playlists) ## 19. Brian Design ![Brian Design](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o9g09qfjz8fo8c88mqrt.jpg) Visit this channel : click here => [Brian Design](https://www.youtube.com/channel/UCsKsymTY_4BYR-wytLjex7A/playlists) ## 20. Caleb Curry ![Caleb Curry](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/va4h2lp5yf9f3jxwof27.jpg) Visit this channel : click here => [Caleb Curry](https://www.youtube.com/c/CalebTheVideoMaker2/playlists) ## 21. Chiris Hawkes ![Chiris Hawkes](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/frsy3ba7vgiveej2nn3n.jpg) Visit this channel : click here => [Chiris Hawkes](https://www.youtube.com/c/noobtoprofessional/playlists) ## 22. Classsed ![Classsed](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/smfxhse1l9n8948dt2cp.jpg) Visit this channel : click here => [Classsed](https://www.youtube.com/c/Classsed/playlists) ## 23. Code with Ania Kubów ![Code with Ania Kubów ](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z40dhmwgluf3jcdhir1d.jpg) Visit this channel : click here => [Code with Ania Kubów](https://www.youtube.com/c/AniaKub%C3%B3w/playlists) ## 24. Coder Coder ![Coder Coder](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mzcuyl44dr6ql7vkq3xc.jpg) Visit this channel : click here => [Coder Coder](https://www.youtube.com/c/TheCoderCoder/playlists) ## 25. codeSTACKr ![codeSTACKr](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yszqy122xjpp971sxsu2.jpg) Visit this channel : click here => [codeSTACKr](https://www.youtube.com/c/codeSTACKr/playlists) ## 26. 
Coding Garden ![Coding Garden](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vwdnn0dx30vkmsvt6mw2.jpg) Visit this channel : click here => [Coding Garden](https://www.youtube.com/c/CodingGarden/playlists) ## 27. CodingEntrepreneurs ![CodingEntrepreneurs](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vjl72yfyjlecksyco0tu.jpg) Visit this channel : click here => [CodingEntrepreneurs](https://www.youtube.com/c/CodingEntrepreneurs/playlists) ## 28. CodingNepal ![CodingNepal](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1bmt956bzz80juihtuzc.jpg) Visit this channel : click here => [CodingNepal](https://www.youtube.com/c/CodingNepal/playlists) ## 29. CodingPhase ![CodingPhase](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gpv0vo6f8w9d6jj24vpi.jpg) Visit this channel : click here => [CodingPhase](https://www.youtube.com/c/CodingPhase/playlists) ## 30. Create a Pro Website ![Create a Pro Website](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/20xrp0kj5ka5gzhdcj2d.jpg) Visit this channel : click here => [Create a Pro Website](https://www.youtube.com/c/CreateaProWebsite/playlists) ## 31. CS Dojo ![CS Dojo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q99e36yxng354ysp9lkp.jpg) Visit this channel : click here => [CS Dojo](https://www.youtube.com/c/CSDojo/playlists) ## 32. Create WP Site ![Create WP Site](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ngnlapueg618z5ljqasx.jpg) Visit this channel : click here => [Create WP Site](https://www.youtube.com/c/LearnHowToday/playlists) ## 33. Dani Krossing ![Dani Krossing](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/quvhv25i9jcdxagh66vg.jpg) Visit this channel : click here => [Dani Krossing](https://www.youtube.com/c/TheCharmefis/playlists) ## 34. 
Darrel Wilson ![Darrel Wilson](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r9wk7h8v20cfbjhiy2tv.jpg) Visit this channel : click here => [Darrel Wilson](https://www.youtube.com/c/Darrelwilsonbug/playlists) ## 35. Dennis Ivy ![Dennis Ivy](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j4yi2geuybo1l274qqkx.jpg) Visit this channel : click here => [Dennis Ivy](https://www.youtube.com/c/DennisIvy/playlists) ## 36. Dev Ed ![Dev Ed](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f98wy7keusbngwjzq1vv.jpg) Visit this channel : click here => [Dev Ed](https://www.youtube.com/c/DevEd/playlists) ## 37. devdojo ![devdojo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3h9l2a9k1olaump7y125.jpg) Visit this channel : click here => [devdojo](https://www.youtube.com/c/Devdojo/playlists) ## 38. Development Community ![Development Community](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b9rfnkiqt2magv6gldtb.jpg) Visit this channel : click here => [Development Community](https://www.youtube.com/c/DevelopmentCommunityFR/playlists) ## 39. Mark Tellez ![Mark Tellez](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mjdch56cimhn6gh4x569.jpg) Visit this channel : click here => [Mark Tellez](https://www.youtube.com/c/devmentorlive/playlists) ## 40. DevTips ![DevTips](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mn9zlrpy1s9hl8nndbhf.jpg) Visit this channel : click here => [DevTips](https://www.youtube.com/c/DevTipsForDesigners/playlists) ## 41. Dylan Israel ![Dylan Israel](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g5qnl7vazn3ntz0u8jb9.jpg) Visit this channel : click here => [Dylan Israel](https://www.youtube.com/c/CodingTutorials360/playlists) ## 42. Eddie Jaoude ![Eddie Jaoude](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ay3osa7bjoqgk055y9b5.jpg) Visit this channel : click here => [Eddie Jaoude](https://www.youtube.com/c/eddiejaoude/playlists) ## 43. 
Faraday Academy ![Faraday Academy](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/13ajuwhosxooq6ag95ha.jpg) Visit this channel : click here => [Faraday Academy](https://www.youtube.com/c/FaradayAcademy/playlists) ## 44. Florin Pop ![Florin Pop](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ytfbldfytkvfotpxtwgn.jpg) Visit this channel : click here => [Florin Pop](https://www.youtube.com/c/FlorinPop/playlists) ## 45. Fun Fun Function ![Fun Fun Function](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cbpblihl5brg4bnadkkw.jpg) Visit this channel : click here => [Fun Fun Function](https://www.youtube.com/c/funfunfunction/playlists) ## 46. Ihatetomatoes ![Ihatetomatoes](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x57un20s7700mx7sn717.jpg) Visit this channel : click here => [Ihatetomatoes](https://www.youtube.com/c/Ihatetomatoes/playlists) ## 47. James Q Quick ![James Q Quick](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mq55ipbhojml5zdnbn74.jpg) Visit this channel : click here => [James Q Quick](https://www.youtube.com/c/JamesQQuick/playlists) ## 48. JavaScript Mastery ![JavaScript Mastery](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hcxsypnxsuds6xunyubw.jpg) Visit this channel : click here => [JavaScript Mastery](https://www.youtube.com/c/JavaScriptMastery/playlists) ## 49. Jesse Showalter ![Jesse Showalter](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h25fkx6tri8zswq0l3wh.jpg) Visit this channel : click here => [Jesse Showalter](https://www.youtube.com/c/JesseShowalter/playlists) ## 50. Kevin Powell ![Kevin Powell](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hsqcnggfrlm98yatk1dw.jpg) Visit this channel : click here => [Kevin Powell](https://www.youtube.com/kepowob/playlists) ## 51. 
Let's Build WordPress ![Let's Build WordPress](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yksd3lguuofhhiso29xa.jpg) Visit this channel : click here => [Let's Build WordPress](https://www.youtube.com/c/Letsbuildwp/playlists) ## 52. LevelUpTuts ![LevelUpTuts](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b1rw4h79laonudvhpsvf.jpg) Visit this channel : click here => [LevelUpTuts](https://www.youtube.com/c/LevelUpTuts/playlists) ## 53. Mr. Web Designer ![Mr. Web Designer](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2mvs81tdwpux20cuoo6b.jpg) Visit this channel : click here => [Mr. Web Designer](https://www.youtube.com/c/MrWebDesignerAnas/playlists) ## 54. Online Tutorials ![Online Tutorials](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t6gvphc7wfnpslqtb2nw.jpg) Visit this channel : click here => [Online Tutorials](https://www.youtube.com/c/OnlineTutorials4Designers/playlists) ## 55. packetcode ![packetcode](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bxkr6wl8s0cpunr744hg.jpg) Visit this channel : click here => [packetcode](https://www.youtube.com/c/Packetcode/playlists) ## 56. Program With Erik ![Program With Erik](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/efb8ryvhl7w8gpyi3o1m.jpg) Visit this channel : click here => [Program With Erik](https://www.youtube.com/c/ProgramWithErik/playlists) ## 57. Stefan Mischook ![Stefan Mischook](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tqywef4l14vq0pq2ehu3.jpg) Visit this channel : click here => [Stefan Mischook](https://www.youtube.com/c/StefanMischook/playlists) ## 58. Step by Step ![Step by Step](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2pqef0ojo9nvwq5lxtj6.jpg) Visit this channel : click here => [Step by Step](https://www.youtube.com/c/StepbyStep_KhanamCoding) ## 59. 
Telmo Sampaio ![Telmo Sampaio](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7bozeqrab9oj4eafay0h.jpg) Visit this channel : click here => [Telmo Sampaio](https://www.youtube.com/user/Telmo87/playlists) ## 60. Tiff In Tech ![Tiff In Tech](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2zlribdsmddnxz2jjoln.jpg) Visit this channel : click here => [Tiff In Tech](https://www.youtube.com/c/TiffInTech/playlists) ## 61. Tyler Moore ![Tyler Moore](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/089nwp3vtqnhckftgwzn.jpg) Visit this channel : click here => [Tyler Moore](https://www.youtube.com/c/TylerMooreYT/playlists) ## 62. Weibenfalk ![Weibenfalk](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zk88zpvbzg81ampr9tl3.jpg) Visit this channel : click here => [Weibenfalk](https://www.youtube.com/c/Weibenfalk/playlists) ## 63. Wes Bos ![Wes Bos](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q0neo6qdjww59ebhvh8x.jpg) Visit this channel : click here => [Wes Bos](https://www.youtube.com/c/WesBos/playlists) ## 64. William Candillon ![William Candillon](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ovjhc3rxurfz5ksnrs85.jpg) Visit this channel : click here => [William Candillon](https://www.youtube.com/c/wcandillon/playlists) ## 65. WinningWP - Winning WordPress ![WinningWP - Winning WordPress](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zqnniia84uvmswufwqhz.jpg) Visit this channel : click here => [WinningWP - Winning WordPress](https://www.youtube.com/c/Winningwp/playlists) ## 66. WPTuts ![WPTuts](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ni8t31z3u8pkj9sd5s5n.jpg) Visit this channel : click here => [WPTuts](https://www.youtube.com/c/WPTuts/playlists) ## 67. 
KnifeCircus ![KnifeCircus](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/utxua2bv1fnoisp2jy2p.jpg) Visit this channel : click here => [KnifeCircus](https://www.youtube.com/c/KnifeCircus/playlists) ## connect with me on : - [Twitter](https://twitter.com/ayoub_el_achab) - [Instagram](https://www.instagram.com/ayoub_el_achab_/) - [Git Hub](https://github.com/Lodstare) - [Linkedin](https://www.linkedin.com/in/ayoub-el-achab/)
lodstare
1,149,755
Binary search trees🌳
👋Heeelloo everybody! 🔥🚀 -&gt; This post is about binary search trees🌳. They are...
0
2022-07-24T13:35:00
https://dev.to/emanuel191/binary-search-trees-51e3
programming, tutorial, cpp, beginners
## 👋Heeelloo everybody! 🔥🚀 ` ` -> This post is about **binary search trees**🌳. They are represented by **nodes linked together** to simulate a hierarchy, and these nodes are positioned **following a rule**. -> I will walk you through the implementation of a binary search tree. Working with trees **requires knowledge about pointers**, **the memory layout of a C program**, **memory allocation**, **and a good understanding of recursion**. -> We focus on the binary search tree🌳 because this is a **time-efficient** data structure. We call it **binary** because every node **can have at most two children**. It is a **search tree** because it **respects a particular condition** which says that **every node with a value smaller than the root node is going to be placed on the left of the root node, and every node with a value bigger than the root node is going to be placed on the right of the root node**. This rule is applied recursively for every right subtree and left subtree. -> **Searching**, **accessing**, **inserting**, and **deleting** in a binary search tree🌳 are **usually** done in **O(log n)**. But there is a chance that a binary search tree can have a structure **similar to a linked list**, and the operations mentioned above will be done in **O(n)**. This situation is a drawback for the tree data structure and it happens when every new node has either a **smaller** value than the last node, or a **bigger** value than the last node. Try to draw a binary search tree with these values: `10, 9, 8, 7, 6, 5, 4` and with these values: `4, 5, 6, 7, 8, 9, 10` and see what happens. -> Happily, another tree🌳 data structure, **AVL**, is an improvement on classical trees. **AVL** is a **self-balancing** tree, and by balancing, we avoid the situation where the tree can become a linked list. But this is another discussion. -> We can implement trees with more than two children, **but time complexity will not change**. 
Only the logarithm base will change, but **it doesn't matter for the complexity class**, so again, the time complexity will be **O(log n)** in the best case and **O(n)** in the worst case. ` ` ## 1. First step: creating a `Node` data type **To work with a tree data structure**, we need to **create a new data type**, a **`Node`** data type. Trees🌳 use nodes. Here we define our **`Node`** data type as having three attributes: an unsigned long long variable - **representing the information to be stored** (it can also be any primitive or abstract data type; I used a primitive data type for simplicity), **a reference to the left child**, and **a reference to the right child**. ``` struct Node { unsigned long long data; Node* left; Node* right; }; ``` ` ` ## 2. Second step: generating `Node` variables After we define how a **`Node`** data type looks, we **need to be able to create variables of that type**. We are working with heap memory, so we must **dynamically allocate memory** for our new nodes; the **new** operator requests memory **of the size of our data type**. Then we initialize the data attribute and the references. The **nullptr** keyword was introduced in C++, representing 0 as an address and only as an address; **nullptr** is a keyword with pointer type. On the other hand, **NULL** is by default 0 and is not always a pointer. I will post about the difference between `nullptr` and `NULL` in the future. Because we are working with pointers and we are in C++, it is better to use **nullptr**. ``` Node* CreateNode(unsigned long long Data) { Node* newNode = new Node; newNode->data = Data; newNode->left = nullptr; newNode->right = nullptr; return newNode; } ``` ` ` ## 3. Third step: inserting nodes After we defined how our **`Node`** data type looks and we made sure that we can create variables of that type, we need to be able to **put these variables in such a way that respects the definition of a binary search tree**. We will create a function for inserting in a binary search tree. 
I prefer the recursive method as it is shorter. We need **two arguments** for that function: **the root node** and **the information/object to be stored**. If the root is null, then we know that it is time to create a new **`Node`** variable and insert it. If the **value** of the information/object is **greater** than the **value** of the **current root node**, then we go to the **right**; if not, then we go to the **left**.

```
void InsertNode(Node*& Root, unsigned long long Data)
{
    if (Root == nullptr)
        Root = CreateNode(Data);
    else if (Data > Root->data)
        InsertNode(Root->right, Data);
    else
        InsertNode(Root->left, Data);
}
```

` `

## 4. Fourth step: printing a binary search tree

We created a **`Node`** data type, we made sure that we could generate **`Node`** variables, and we made sure that we could insert them correctly. Now we need to think about how we can **see** our data structure. For this, we need to use **algorithms for depth traversal or breadth traversal**. There are four possible algorithms: **inorder**, **preorder**, **postorder** and **level order traversal**. I use preorder for this example.

```
void Print(Node* Root)
{
    if (Root)
    {
        cout << Root->data << ' ';
        Print(Root->left);
        Print(Root->right);
    }
}
```

` `

We are done. We have all the pieces for working with a binary search tree.
Here is the complete code🔥:

```
#include <iostream>
using namespace std;

struct Node
{
    unsigned long long data;
    Node* left;
    Node* right;
};

Node* CreateNode(unsigned long long Data)
{
    Node* newNode = new Node;
    newNode->data = Data;
    newNode->left = nullptr;
    newNode->right = nullptr;
    return newNode;
}

void InsertNode(Node*& Root, unsigned long long Data)
{
    if (Root == nullptr)
        Root = CreateNode(Data);
    else if (Data > Root->data)
        InsertNode(Root->right, Data);
    else
        InsertNode(Root->left, Data);
}

void Print(Node* Root)
{
    if (Root)
    {
        cout << Root->data << ' ';
        Print(Root->left);
        Print(Root->right);
    }
}

int main()
{
    Node* root = nullptr;
    unsigned long long i = 0, nodesNumber;

    cout << "Number of nodes to be added: ";
    cin >> nodesNumber;
    cout << '\n';

    while (i < nodesNumber)
    {
        unsigned long long number;
        cout << "Value of node: ";
        cin >> number;
        cout << '\n';
        InsertNode(root, number);
        ++i;
    }

    cout << "Binary search tree printed: ";
    Print(root);
    cout << '\n';

    return 0;
}
```

This is a basic structure. I wanted to give you the **main idea**. We can also create a **Tree**🌳 data type and think about a **Tree**🌳 as an object with attributes rather than a simple variable called root. But from here, you can improve/change this implementation as you like. Feel free to explore 🗺️ and be curious.

` `

❗Observations:
- I focused on the idea rather than on a real-world scenario where I must consider validations and other stuff.
- I have used the **C++** programming language.

` `

- Emanuel Rusu👨‍🎓
- You can visit my [GitHub](https://github.com/Emanuel181)
- Or you can find me on [LinkedIn](https://ro.linkedin.com/in/emanuel1)
- Next topic: **Internal representation of primitive data types**
- See you next time! 👋
emanuel191
1,150,522
Easily Convert Django Function Based Views To Class Based Views
In this tutorial I will take a simple notes app built with function based views (FBV) and convert...
0
2022-07-25T02:44:00
https://dev.to/dennisivy11/easily-convert-django-function-based-views-to-class-based-views-3okb
python, django
In this tutorial I will take a simple notes app built with function based views (FBV) and convert them into class based views (CBV). This post will be used as a guide for a YouTube tutorial, so I recommend watching the full [video tutorial](https://youtu.be/-3BN-JMLE0A) and referencing the [source code](https://github.com/divanov11/Easily-Convert-Django-Function-Based-Views-To-Class-Based-Views).

### Our Function Based Views

Let's start by taking a quick look at the views we currently have. Our views file has views that follow the basic CRUD operations for Creating, Reading, Updating and Deleting Notes.

```python
def TaskList(request):

    if request.method == 'GET':
        tasks = Task.objects.all().order_by('-updated')
        context = {'tasks':tasks}
        return render(request, 'base/index.html', context)

    if request.method == 'POST':
        task = Task.objects.create(
            body=request.POST.get('body')
        )
        task.save()
        return redirect('tasks')

## ------------------------------------------------------

def TaskDetail(request, pk):

    if request.method == 'GET':
        task = Task.objects.get(id=pk)
        context = {'task':task}
        return render(request, 'base/task.html', context)

    if request.method == 'POST':
        task = Task.objects.get(id=pk)
        task.body = request.POST.get('body')
        task.save()
        return redirect('tasks')

## ------------------------------------------------------

def TaskDelete(request, pk):
    task = Task.objects.get(id=pk)

    if request.method == 'POST':
        task.delete()
        return redirect('tasks')

    context = {'task':task}
    return render(request, 'base/delete.html', context)
```

### Keeping our class based views raw

Class based views have a level of complexity to them, not because they are difficult to use, but because there is a layer of abstraction to them that makes it difficult to understand exactly what's going on and what we need to do to modify them.
Django provides us with a number of built in views to use, which makes for rapid development, but before you know the ins and outs of the views this can actually make things confusing since there is a lot of magic under the hood.

So instead of using the built in views, I will keep things raw and only extend the base view Django gives us and write all the logic from scratch, so you can see how class based views compare and differ from function based views.

### A few things about class based views

Before we get started there's a few things I want you to know about class based views.

#### Extending the base View class

Every class based view extends the base `View` class. Since we are not using any other built in views, make sure that you import `View` and have each class inherit from it:

```python
from django.views import View

...

class OurView(View):
```

#### Separation by http methods

With class based views we separate our code into HTTP verbs. So instead of having to do something like `if request.method == 'POST'`, we simply modify the `post` method provided by the `View` class and let that method take care of everything that happens on a `post` request. The same goes for `get` requests. Ex:

```python
class OurView(View):
    def get(self, request):
        pass

    def post(self, request):
        pass
```

Let's get started with our first view.

**TaskList View**

Let's comment out the `TaskList` view and rebuild it from scratch. We'll rewrite the view as a class now and inherit from the `View` class. Let's also add two methods to this new class (`get` & `post`) and make sure to pass in `self` before `request` in each method.

Once we have the class and two methods, let's extract the logic from our function based view and add it to the new class according to each http method like so:

```python
from django.views import View
....
class TaskList(View):

    def get(self, request):
        tasks = Task.objects.all().order_by('-updated')
        context = {'tasks':tasks}
        return render(request, 'base/index.html', context)

    def post(self, request):
        task = Task.objects.create(
            body=request.POST.get('body')
        )
        task.save()
        return redirect('tasks')
```

Now to use this view we need to reference the class in urls.py and then use the `as_view()` method.

```py
path('', views.TaskList.as_view(), name="tasks"),
```

And just like that, we converted our first function based view into a class based view!

**TaskDetail View**

Now let's do the same for `TaskDetail`. Again, we will comment out our function based view and extract and separate all the logic we have into the http methods.

```py
class TaskDetail(View):

    def get(self, request, pk):
        task = Task.objects.get(id=pk)
        context = {'task':task}
        return render(request, 'base/task.html', context)

    def post(self, request, pk):
        task = Task.objects.get(id=pk)
        task.body = request.POST.get('body')
        task.save()
        return redirect('tasks')
```

Then add `as_view()` to the url path for when we call this view.

```python
path('<str:pk>/', views.TaskDetail.as_view(), name="task"),
```

**TaskDelete View**

At this point I'm sure you're starting to see the pattern, so let's do the same as before with the Delete view.

```python
class TaskDelete(View):

    def get(self, request, pk):
        task = Task.objects.get(id=pk)
        context = {'task':task}
        return render(request, 'base/delete.html', context)

    def post(self, request, pk):
        task = Task.objects.get(id=pk)
        task.delete()
        return redirect('tasks')
```

```python
path('<str:pk>/delete/', views.TaskDelete.as_view(), name="delete"),
```

So let's recap what we did. For each view we:

- Changed `def` to `class`
- Extended the base `View` class
- Separated logic by `http` methods and added `self` before request
- Added `as_view()` to each view in `urls.py`
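If you are curious why the `request.method` checks can disappear, it helps to see what `as_view()` does under the hood: it returns a plain function that instantiates the class and dispatches to the method named after the HTTP verb. This is not Django's actual implementation (the real one also handles `setup`, `http_method_not_allowed`, and more); it is just a stripped-down sketch of the idea, with a fake request object standing in for Django's:

```python
class View:
    """Minimal stand-in for django.views.View, just to show the dispatch idea."""

    @classmethod
    def as_view(cls):
        def view(request, *args, **kwargs):
            self = cls()
            # Pick the handler named after the HTTP verb: get, post, ...
            handler = getattr(self, request.method.lower(), None)
            if handler is None:
                return "405 Method Not Allowed"
            return handler(request, *args, **kwargs)
        return view


class FakeRequest:
    """Stands in for Django's HttpRequest; only .method matters here."""

    def __init__(self, method):
        self.method = method


class TaskList(View):
    def get(self, request):
        return "rendered task list"

    def post(self, request):
        return "redirect to tasks"


view = TaskList.as_view()
print(view(FakeRequest("GET")))     # rendered task list
print(view(FakeRequest("POST")))    # redirect to tasks
print(view(FakeRequest("DELETE")))  # 405 Method Not Allowed
```

This is why each class based view above only needs a `get` and a `post` method: the dispatching function returned by `as_view()` routes the request for us.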
dennisivy11
1,150,564
Getting Started In Tech
So many people have done justice to this topic, and we have said a lot about and on pivoting to tech....
0
2022-07-25T03:26:00
https://dev.to/hrhonyx/how-to-pivot-into-tech-9j5
career, help, writing, motivation
So many people have done justice to this topic, and we have said a lot about and on pivoting to tech. This article is for that one person who needs an extra push in the right direction and in simpler terms.

## **What is Tech?**

Knowing what tech is makes it easier to know and create a niche for yourself.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/903a4ejrgfwvzjfdsf9l.png)

We can describe tech as an industry sector; it can also be said to be a process, a product, a set of tools, skills and knowledge, and finally, I would add, a range of occupations.

## **Know Your X, Y, and Z**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t7iducpds6nqzgguyvmc.png)

High school and university mathematics had us do word problems where we were asked to find X, or find Y or Z. Applying that to every aspect of our lives can be helpful. I hear your brain asking, what is X? **X in this case is you**. The first step in tech is finding you. _Who are you_? If you are from a non-science or non-technical background, what kind of skill do you have that can be transferred into tech? Are you a manager? Do you work with certain tech apps like QuickBooks, Jira, or Confluence? What about your field of study? And is there a tech career close to it? e.g., Business Administration's tech relatives are Project Management and Business Process Analyst.

Let's put it this way. X is simply doing a SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis of yourself.

What are your **strengths**? Your transferrable skills. What's that skill you were trained for, or that you can do without researching? It's important you know that skill because that's what's going to help you in your transition journey.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ftay2dxai8t4unnxdh1l.png)
SWOT analysis to know the areas of tech to fit into

What is your **weakness**?
Do you find it easy to give up when you're faced with a challenge? It is important to know that. If talking is a weakness, you can turn it into a strength by becoming a tech trainer. Do you overanalyze things, or keep finding the why or the causation? Maybe you might be good at analytics.

What are your **opportunities**? You already know your strengths and your weaknesses. Now, what are the tech spaces where you can either turn your weaknesses into strengths or use your strengths? Finding that tech area or space is your silver lining at the end of a tunnel in your career transition.

What are your **threats**? These are people in that chosen career path of yours. I don't refer to them as threats. I turn them into mentors: I stalk their Twitter and their LinkedIn, I find the courses they do, and I do them. I emulate the way they write and how they teach. I learn from them!

So, if X is YOU, then what is Y? **Y is Why tech**? Why do you want to transition into tech? What's your reason? Is it the money? If it is, what happens if you don't make money? Are you willing to wait for the silver lining? Questions like how many hours you are going to invest in tech need to be answered, because you have to invest time into that pivot and into skill acquisition, and there are changes you will have to make along the way.

If you're on Twitter, you can use any of these hashtags to ask questions and I'm willing to help you out: #sheistechie #mwithDuchess #dtechieduchess #TechwithDuchess

We now know X and Y, so let's find Z. **Z is How**? Call it a roadmap, a mood board. But really, how do you want to learn tech? Self-taught? For me, I learn by reading and emulating people. Do you want to watch videos? Is your learning path paid, like boot camps, or free courses? You need to know all this. Your tech goal has to be SMART:
Specific, Measurable, Achievable, Relevant and Time-bound.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xwrmxuj4d7l0ibh25ml0.png)

Remember that your how has to be SMART (specific, measurable, achievable, relevant, and time-bound).

Your tech goals need to be **specific**. What do you want to learn and why? What do you want to use the learned skill for? Career advancement, change of industry, and more money, just to name a few. Which industry do you want to apply those skills to?

Your tech goals need to be **measurable**. This is where people get to see your growth. You're learning Java? Start building your GitHub where you commit your code. Learning analytics? Start your Tableau Public, post on LinkedIn and Twitter, and even on your GitHub. Do you want to be a technical writer? Write that blog, write an article on LinkedIn, write a Twitter post, or write on Medium. That way you're documenting your journey. Let people see your baby steps. Write a blog about your tech journey, so that at some point you can measure and say, 'this is where I used to be, but this is where I am now.'

Your tech goals need to be **achievable**. Everyone wants to be a Mark Zuckerberg or Jeff Bezos or even a Bill Gates, but wait!!! Start with baby steps and keep going. One of these days, you will get to the silver lining. Protect yourself from the noise. You're not making the big money? That is okay. Just keep grinding.

Your tech goals need to be **relevant**. Close your eyes and envision where you want to be in tech. Open your eyes and write it down. Let the courses you're signing up for be aligned with where you are heading. In my tech career, I've worked in risk management, done technical writing, been a technical trainer, system support specialist, help desk support specialist, system analyst, project manager, product manager, and even a GIS specialist. But all this prepared me for product management.
Now I'm very conscious of my product management career path and ensure that everything I do now aligns with where I want to be.

Your tech goals have to be **time-bound**. I'm a big believer in writing down what my short-term goals are and trying to follow through. Set time for study and time to finish up that course and certify in that field. It is very important that you work with time and add a bit of a time constraint.

## **Some Areas in Tech To Pivot To**

I'm an ambassador of soft tech, so this article is going to be a bit biased. I hear someone asking aloud, what is soft tech? Soft tech to me is any area of tech which doesn't require coding or a programming language. In my words, soft tech is tech management, which may include but is not restricted to Project Management, Product Management, Business Process Management, and Program Management. I refer to them as the 4Ps of tech. I even started a Slack and Discord community for it.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zlugafn65s0de45mzry5.jpg)
Photo Credit: MyTechBestfriend

We've learned about our strengths and weaknesses, even opportunities and threats. We also know our goals. Look at this diagram and ask yourself where you can apply the skills you have, and if you need to gain more skills, then what courses do you have to do to get into any of those areas?

While the topic of “working in tech” is consuming social media because of all of its benefits, it’s important to recognize that to work in tech you do not need to work in a technical role. For example, you can be a software engineer (technical role) working at a startup focused on women’s fertility. On the other hand, you can be an HR coordinator at Meta (a tech company). AND you can also be in a technical role (SE) in a tech company (Meta). It’s important to note these distinctions.
Don’t feel pressured to pursue a technical career because of the social media hype if YOU don’t feel it aligns with you.

## **How I Stay Motivated**

I keep sharing the tips listed here because they have helped me become a better techie. I mentioned stalking people's Twitter and LinkedIn. In 2018, when I needed to rebrand as a tech person, here is what I did👇:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/24abnoe93o172eopuvka.png)

1. I rebranded my Twitter to showcase where I was heading. I wanted to be known as a lady in tech before a social commenter or a content curator. So I started following top Twitter tech handles, because you are what you consume. Nigerians say, 'follow who knows road'. That's the fastest way to learn. We are always on social media, and ensuring that you benefit from social media is key.

2. I mentioned before that I research fields and follow the top people the algorithm brings up. That's what I do with LinkedIn. I describe LinkedIn as the social media where would-be employers and employees court and do the mating dance. I'm a product manager, and a good one if I may say so, so I search product management and look at the top profiles that LinkedIn brings up. Then I model my page to look like theirs and try to always update my certifications and gain more knowledge. Google is your best friend. I Google everything and read up on anything I see in my field. You should too. Then try to write what you've learned in your own words. It helps you to learn and become an expert in your field.

3. The next tip on my list is that I network. I reach out to people. I ask myself, what's the worst that can happen? Either a no or a yes is what I get. I send a cold email or DM on Twitter or Instagram or even LinkedIn.

4. Another thing I advise people to do, which works for me: I attend Twitter Spaces. I haven't been to Clubhouse in a year, but Twitter Spaces? I love it!!
It gives you room to meet and interact with great tech minds. And in a Twitter Space, you can ask questions.

5. Finally, ask questions. I tell people no question is stupid. Ask people about what you've found confusing and need clarification on. Are you confused about what path in tech to take? Ask questions!!! Want to know the difference between Azure Cloud and Google Cloud? Ask questions.

## **Too Long; Didn't Read (TL;DR)**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/02br7f555tcma6b8sz8e.png)

If you enjoyed this, you can shoot me a DM. If you have questions, I'm available to answer. Send me an [email](mailto:d.duchessonyx@gmail.com), connect with me on everything tech on [Twitter](https://twitter.com/ijenlencha), read my views and [social commentary](https://twitter.com/hrh_onyx), and don't forget you can connect on [Instagram](https://instagram.com/ijenlencha)
hrhonyx
1,150,634
Deployment on AWS
AWS developer associate certification exams have 22% of weightage and will be around 14 questions. ...
0
2022-07-25T05:19:08
https://dev.to/sathish3sank/deployment-on-aws-4l79
deploy, devops
The Deployment domain of the AWS Developer Associate certification exam carries a 22% weighting, which works out to around 14 questions.

## Contents

1. Deploy written code in AWS using existing CI/CD pipelines, processes and design patterns.
2. Deploy applications using Elastic Beanstalk.
3. Prepare the application deployment package to be deployed to AWS.
4. Deploy serverless applications.

## Deploy written code in AWS using existing CI/CD pipelines, processes and design patterns

- Continuous Integration
- Continuous Delivery
- Continuous Deployment

Useful services:

1. AWS CodeCommit
2. AWS CodeBuild
3. AWS CodeDeploy
4. AWS CodePipeline
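As a taste of what the CodeBuild stage of such a pipeline consumes, here is a minimal, illustrative `buildspec.yml` for a Node.js project. The runtime version and commands are assumptions for illustration, not requirements from the exam guide:

```yaml
# buildspec.yml — read by AWS CodeBuild from the repository root
version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 16        # assumed runtime; pick what your project needs
  build:
    commands:
      - npm ci          # install exact locked dependencies
      - npm run build   # assumes a "build" script in package.json

artifacts:
  files:
    - '**/*'            # hand everything to the next pipeline stage
```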
sathish3sank
1,150,841
Getting Started With NestJS
NestJS a JavaScript framework that can be used to build scalable and dynamic server side applications...
0
2022-07-25T18:09:26
https://dev.to/kalashin1/getting-started-with-nestjs-1p1d
javascript, typescript, node
NestJS is a JavaScript framework that can be used to build scalable and dynamic server side applications very quickly and easily. NestJS is built with TypeScript and supports TypeScript out of the box. It feels very much like using Angular, but on the backend; you see, the project was heavily influenced by Angular.

NestJS enforces a certain application structure, and this is one of the benefits of using it. NestJS combines the best of OOP and functional programming, and it also exploits the benefits of using TypeScript. It is built upon some popular libraries that we use to build NodeJS server side applications, like Express and Cors. NestJS is a high level abstraction built on these simple libraries. Much thought has been put into the framework's development, and some of the obvious benefits that come from using it include:

* Reduction of unnecessary code
* Fast development time
* Ease of testing apps

No matter how cool we say JavaScript really is, there are some pitfalls that come with using JavaScript, especially for server side apps. There is often the problem of file and module structure, the lack of types, too much duplicated code, and the difficulty of testing your app. All of these are problems familiar to some developers, and the goal of using NestJS is to provide an elegant solution to them.

NestJS was built to give projects a certain level of structure. Most often, junior developers struggle with choosing the right project structure and with handling application dependencies and other third party plugins. NestJS is the right tool for the junior developer, or anyone who has trouble adopting a particular application structure. It is also a good solution to the aforementioned problems. NestJS also makes it incredibly easy for us to test our applications.
## Installation

To install NestJS you have to ensure that you have NodeJS installed on your PC, then you can run:

```bash
npm i -g @nestjs/cli
```

This installs the very capable NestJS CLI, which comes baked in with commands that allow us to spin up new projects and lots of other utility features we will need when building applications with NestJS. We can scaffold a new NestJS project by running:

```bash
nest new project-name
```

This will scaffold a project for us with some basic code; you can now proceed to open the project in your favorite text editor. The two commands we just ran are:

```bash
npm i -g @nestjs/cli
nest new project_name
```

Alternatively you can clone the starter template with git:

```bash
git clone https://github.com/nestjs/typescript-starter.git project_name;
```

Navigate into the newly created folder, and install the dependencies.

```bash
cd project_name;
npm install;
```

### Folder Structure

A NestJS project will often possess the following folder structure, depending on which version you are using; as of the time of this writing, NestJS is at version `9.0.5`. We will only concern ourselves with the `src` folder; that's going to be the only folder we will be working with most of the time, and it is where our application source code will be stored.

```
src/
 ├── app.controller.ts
 ├── app.service.ts
 ├── app.module.ts
 └── main.ts
```

#### main.ts

This file contains the code necessary for bootstrapping and starting our application. It imports `NestFactory` and the main module of our application, creates a server app for us, and listens on a specified port for incoming requests.
```typescript
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  await app.listen(3000, () => console.log(`App started on PORT 3000`));
}
bootstrap();
```

Calling the `create()` method of `NestFactory` builds a http application for us from the application's main module. The app that is returned by this method satisfies the `INestApplication` interface, and this interface exposes some useful methods for us. We start the http server by calling `app.listen()` as we would have with an express app.

If it isn't apparent, you now see the benefit of working with NestJS: enabling `CORS` on our application is as simple as calling `app.enableCors()`. Ordinarily it would require us to first install the `cors` module and then use it as a middleware.

To create our http app/server we need to pass in our application's main module as an argument to the `create` method of `NestFactory`; we will look at `app.module.ts` below.

### app.module.ts

A module in NestJS is simply a data structure for managing our application's dependencies. Modules are used by Nest to organize the application structure into scopes. Controllers and Providers are scoped by the module they are declared in. Modules and their classes (Controllers and Providers) form a graph that determines how Nest resolves dependencies. For a class to serve as a NestJS module it should be decorated with the `@Module()` decorator. Let's examine the contents of `app.module.ts`.
```typescript
import { Module } from '@nestjs/common';
import { AppController } from './app.controller';
import { AppService } from './app.service';

@Module({
  imports: [],
  controllers: [AppController],
  providers: [AppService]
})
export class AppModule {}
```

The imports array is responsible for handling other modules that this module depends on. As our application grows we will have other modules, because NestJS suggests that each feature in an application should have its own module rather than polluting the global module namespace. The imports array handles those other modules.

The `controllers` array handles the controllers that we create in our application, while the `providers` array handles the services that we create. The module class also scopes the controllers and providers, making them available only in the module they are registered with. We are going to have a brief overview of `controllers` and `services`.

### app.service.ts

A service in NestJS is similar in concept to a service in Angular; a service is just a class that encapsulates helper methods in our application. We define functions that help get certain things done. In this example we define only one method on `app.service.ts`, which returns "Hello World!". Another name for a service is a provider. Let's inspect our `app.service.ts` file.

```typescript
import { Injectable } from '@nestjs/common';

@Injectable()
export class AppService {
  getHello(): string {
    return 'Hello World!';
  }
}
```

For a class to serve as a provider it should be decorated with `@Injectable()`, which is exported by `@nestjs/common`. We can then proceed to declaring methods on the class that we use in our code.

### app.controller.ts

A controller is just another fancy name for a route handler. A controller will define a method that is usually attached to a route; whenever there is a request to the server that matches a particular route, the controller will call the function that is attached to the route.
```typescript
import { Controller, Get } from '@nestjs/common';
import { AppService } from './app.service';

@Controller()
export class AppController {
  constructor(private readonly appService: AppService) {}

  @Get()
  getHello(): string {
    return this.appService.getHello();
  }
}
```

We already established that NestJS makes use of dependency injection; in the above controller we inject the `AppService` provider so we can use the methods declared on it.

For a class to serve as a controller it should be decorated with the `@Controller` decorator, as demonstrated above. The decorator can accept a string as an argument, and that string will serve as a base route for that controller. Http verbs can be applied as decorators to the functions that will process an incoming request. In the above example, a `@Get` decorator, which maps to the http verb `GET`, is applied to the `getHello` function. Thus whenever there is a `GET` request to the server, the `getHello` function is called as the handler for that route.

The decorators that serve as http verbs also accept strings as arguments, and they serve as a secondary path to match after the one defined in the `@Controller`. In the above example, the base route is `/` because no argument is passed into the `@Controller` decorator, and the route for the `@Get` handler is also `/` because no argument is passed to the `@Get` decorator attached to the `getHello` function. That is why a request made to the server at `http://localhost:3000/` will return `hello world`.

### Starting the app

To kick start the server, we simply run:

```bash
npm start
```

The NestJS CLI, which you have access to if installed with `npm i -g @nestjs/cli`, will bootstrap and start the application for us in production mode. To start the server in development mode, which enables hot reload, we can run `npm run start:dev`, and any changes we make while the server is running locally will take effect.
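To make the route prefixing concrete, here is a hypothetical `TasksController`. The name and routes are illustrative, not part of the starter project; the sketch assumes `@nestjs/common` is installed, as in the starter, and uses its `@Param` decorator to read a route parameter:

```typescript
import { Controller, Get, Param } from '@nestjs/common';

// The string passed to @Controller becomes the base path,
// so every route in this class is prefixed with /tasks.
@Controller('tasks')
export class TasksController {
  // Matches GET /tasks
  @Get()
  findAll(): string {
    return 'all tasks';
  }

  // Matches GET /tasks/:id — @Param reads the route parameter
  @Get(':id')
  findOne(@Param('id') id: string): string {
    return `task ${id}`;
  }
}
```

For these routes to be live, the controller would also have to be registered in a module's `controllers` array, just like `AppController` is in `app.module.ts`.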
In future articles in this series we will take our time to explore NestJS providers, controllers, modules and lots of other features. That is it for today; I hope you enjoyed this and found it useful. Please leave a comment down below with your thoughts on or experience with using NestJS, and feel free to add anything you feel I left out of this introduction in the comments.
kalashin1
1,150,891
Meme Monday 😍
Welcome to another Meme Monday post! Today's cover image comes from last week's thread. DEV is an...
0
2022-07-25T12:31:46
https://dev.to/ben/meme-monday-3m4
discuss, watercooler, jokes
Welcome to another **Meme Monday** post! Today's cover image comes from [last week's thread](https://dev.to/ben/meme-monday-5d7f). DEV is an inclusive space! Humor in poor taste will be downvoted by mods.
ben