id int64 5 1.93M | title stringlengths 0 128 | description stringlengths 0 25.5k | collection_id int64 0 28.1k | published_timestamp timestamp[s] | canonical_url stringlengths 14 581 | tag_list stringlengths 0 120 | body_markdown stringlengths 0 716k | user_username stringlengths 2 30 |
|---|---|---|---|---|---|---|---|---|
343,512 | Beautify your Terminal - WSL2 | I used Ubuntu as a VM for front-end development. But recently I have been testing Windows Subsystem o... | 0 | 2020-05-25T18:15:41 | https://dev.to/rishabk7/beautify-your-terminal-wsl2-5fe2 | linux, terminal, productivity, wsl | I used Ubuntu as a VM for front-end development. But recently I have been testing [Windows Subsystem for Linux (WSL 2)](https://devblogs.microsoft.com/commandline/wsl-2-is-now-available-in-windows-insiders/) and so far it's good.
No need to run a VM anymore! (Since I only care about the command-line functionality.)
Also, I have been trying out the [oh-my-zsh](https://github.com/ohmyzsh/ohmyzsh) and I gotta say that it is amazing!
Here is a guide on [how to get started with WSL](https://dev.to/pluralsight/getting-started-with-wsl-1abp) by @jeremycmorgan.
-------------------------------
So my terminal looks like this right now:

And here is the guide to get there. (This assumes you have WSL enabled and the Ubuntu and Windows Terminal apps installed; if not, you can [follow this guide](https://dev.to/pluralsight/getting-started-with-wsl-1abp).)
-------------------------------
### Install oh-my-zsh:
Make sure zsh is installed:
```
sudo apt install zsh
```
Install oh-my-zsh:
```
sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
```
-----------------------------------------
### Install and configure Powerline fonts
To install the Powerline fonts:
1. Open a Powershell session as administrator.
2. Download and expand the Powerline fonts repository:
`powershell -command "& { iwr https://github.com/powerline/fonts/archive/master.zip -OutFile ~\fonts.zip }"`
`Expand-Archive -Path ~\fonts.zip -DestinationPath ~`
3. Update the execution policy to allow the installation of the fonts:
`Set-ExecutionPolicy Bypass`
4. Run the installation script:
`~\fonts-master\install.ps1`
5. Revert the execution policy back to the default value:
`Set-ExecutionPolicy Default`
--------------------------
### Edit the settings for WSL:
To configure the fonts:
For Windows Terminal App:
- Open the Windows Terminal App.
- Go to settings.

- Update the JSON so your profile uses one of the Powerline fonts.

For Ubuntu App:
- Open the Ubuntu app.
- Open the **Properties** dialog.
- From the **Font** tab, select one of the Powerline fonts, such as *ProFont for Powerline*.
- Click **OK**.
-------------------------
### Choose your theme! 🎨
You can now choose the theme you want for your terminal; there are many to [choose from](https://github.com/ohmyzsh/ohmyzsh/wiki/Themes). I am using "agnoster".
You can do so by:
- Edit the `.zshrc` file:
`nano ~/.zshrc`
- Change the theme to one that you selected:
```
# Set name of the theme to load --- if set to "random", it will
# load a random theme each time oh-my-zsh is loaded, in which case,
# to know which specific one was loaded, run: echo $RANDOM_THEME
# See https://github.com/ohmyzsh/ohmyzsh/wiki/Themes
ZSH_THEME="agnoster"
# Set list of themes to pick from when loading at random
# Setting this variable when ZSH_THEME=random will cause zsh to load
# a theme from this variable instead of looking in ~/.oh-my-zsh/themes/
# If set to an empty array, this variable will have no effect.
# ZSH_THEME_RANDOM_CANDIDATES=( "robbyrussell" "agnoster" )
```
You can also enable different plugins:
```
plugins=(
git
bundler
dotenv
osx
rake
rbenv
ruby
)
```
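If you prefer to script the edit instead of opening `nano`, here is a quick sketch. The `/tmp` scratch file is my own choice so the demo is safe to run anywhere; point `ZSHRC` at `$HOME/.zshrc` to change your actual config, then run `source ~/.zshrc` to apply it.

```shell
# Demo: switch the oh-my-zsh theme non-interactively.
# ZSHRC points at a scratch copy here; set ZSHRC="$HOME/.zshrc" for real use.
ZSHRC=/tmp/demo.zshrc
printf 'ZSH_THEME="robbyrussell"\n' > "$ZSHRC"

# GNU sed (as on Ubuntu/WSL) edits the file in place:
sed -i 's/^ZSH_THEME=.*/ZSH_THEME="agnoster"/' "$ZSHRC"
grep ZSH_THEME "$ZSHRC"   # ZSH_THEME="agnoster"
```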
Let me know which theme you picked! Also, feel free to reach out if you have any concerns.
| rishabk7 |
343,543 | What every developer needs to know about TCP | What is it and why the hell do we use it? | 0 | 2020-05-25T17:27:14 | https://breadth.substack.com/p/the-low-down-on-tcp | webdev, beginners, career, programming | ---
title: What every developer needs to know about TCP
published: true
description: What is it and why the hell do we use it?
tags: webdev, beginners, career, programming
canonical_url: https://breadth.substack.com/p/the-low-down-on-tcp
---
*This article was originally posted on [Breadth](https://breadth.substack.com/subscribe), a weekly newsletter aimed at helping you increase your technical (and non-technical) knowledge to help you become a more effective engineer*
---
TCP / IP - the real MVP of the internet.
Nearly every website/application uses them.
They are literally the fundamental building blocks that power pretty much everything, yet many developers who build upon these foundations have no idea how they work.
And you know what? That can be ok - the best part about it being around so long is that it has proven to be a trustworthy and semi-reliable set of protocols.
You don't have to know in-depth how it works, but it is good to have a basic foundational knowledge.
Let's dive in.
## 👇 The low down
---
Firstly, let's clear something up.
TCP / IP isn't a single thing - it's two separate protocols. Well, sort of. Really, it's one protocol built on top of the other. IP is the base level protocol. It stands for Internet Protocol.
Aptly put by Cloudflare:
> *The Internet Protocol (IP) is a protocol, or set of rules, for routing and addressing packets of data so that they can travel across networks and arrive at the correct destination.*
Essentially, it's a set of rules that define how a piece of data (a packet) is sent across the internet.
Now you might be thinking - is that where an "IP Address" comes from? And you'd be right.
The IP protocol basically tells computers how to get a packet of data from one IP address to another.
However, it isn't quite that simple.
To illustrate: let's say, for example, you wanted to send a letter to your friend Bob.
Now Bob lives on the other side of the country, and your mail system can only allow letters 5cm by 5cm to be delivered.
Unfortunately, the letter you've written is much larger. Hence, you decide to cut up the letter into three parts and mail each piece individually.

Now when you send your letter, it gets sent with three different mailmen.
* **Mailman 1** - has a few other deliveries to do on the way, so it takes him a bit longer to reach your friend.
* **Mailman 2** - can take the letter straight there.
* **Mailman 3** - unfortunately, gets into an accident, so the third part of the letter doesn't make it.
Now Bob has got 2 parts to a 3 part letter. Even worse, Bob isn't the brightest of the lot and can't figure out how to put pieces 1 and 2 back together in the right order. Since part 2 arrived before part 1, he reads in that order.
So here the IP is what tells you how to get a letter from A -> B. It says that you must use a mailman, you must give the mailman an address, and you must only send small pieces of data at a time. You need to trust that they will get there in time.
However, as we have just seen, this isn't always that reliable. In fact, it's turned into a bit of a mess.
To illustrate the above example using actual packets (pieces of a letter), imagine you want to send some data from LA -> Melbourne.
One packet might get routed from LA -> LONDON -> MELBOURNE.
Another might go straight from LA -> MELBOURNE.
And the last packet might get dropped along the way.

*Clearly I struggle with drawing countries.*
Now because of this, if the receiving server attempts to read the packets as they come in, they will get B -> A and no C.
Basically, they will get rubbish and have no idea what it means.
Because of this, we call IP an unreliable protocol. It will try to get your data from A -> B as best it can. Still, it makes no guarantees about the order or deliverability of those packets.
So how does the internet work at all? How do we make this unreliable protocol, reliable?
**Enter TCP**

TCP (Transmission Control Protocol) is a protocol built on top of the IP protocol (yes, I am aware that's the same as saying ATM machine, no, I am not changing).
It attempts (and does a reasonably solid job) at making IP reliable.
## ♤ Digging Deeper
---
So how does TCP/IP work?
Well, let's go back to the letter example.
So to start with, before we actually even send the data, with TCP, we have to open a connection. We do this to tell the receiver we are going to send some data.
So in our example, instead of just sending the letter with our message, we first send him a letter telling him we are going to send a letter. Bit meta, right? We also ask him to send a confirmation that he got this letter.
This tells us a few things. If Bob sends us a confirmation back:
1. We know that his address was correct
2. We know his mailbox works and he can receive letters.
3. We know he has the basic writing skills of a 5 year old.
Now if Bob doesn’t reply we know:
1. He can’t receive letters or his address was wrong.
2. We shouldn’t bother sending him our real letter because he probably won’t receive it.
3. Or he just doesn’t want to hear from us.

This lets Bob know how to put the letters back in the same order as they were sent. Now it doesn't matter how slow the letters were or which order they arrived, Bob would still be able to reconstruct the original letter.
This solves one problem. We still have to figure out a way to handle the case of missing letters.
To do this, we also tell Bob that every time he receives a letter, he again needs to send us a letter back confirming what number-letter he received.
Now, when you send a letter, you start a timer. You know it will take a maximum of 3 days to send a letter to Bob and for him to send one back.

So if after three days, you don't receive a letter from Bob confirming he received it, then you will resend that part of the letter again.

That's the general gist. Obviously, there is much more that goes on in the internals. Still, for now, you and Bob have a working system for delivering messages.
The first part of any TCP connection is the handshake.
With a TCP connection, this handshake is comprised of three distinct parts.
1. Syn (short for synchronize)
2. Syn-Ack (Ack is short for acknowledgment)
3. Ack
Because it's three parts, this is often referred to as a three-way handshake.
Essentially what goes on during the handshake is synchronization of sequence numbers (syn) and a set of acknowledgments (ack). This lets the other party know how to arrange the packets. It will also allow it to know if there is a packet missing.
For example, say you open a connection to a server and tell it that your sequence number starts at 123. When it acknowledges with 124, you know that the server has received all packets up to, but not including, packet 124.
The server will also send you its synchronization number, for example, 432. Now to let the server know you have received all messages up to 432, you send an ack with 433.

Now that everyone is on the same page, the transmission of data can begin.

Now the client and server can communicate, using seq numbers and acknowledgments, to transmit packets reliably.
The sequence numbers aren't actually incremented by 1 each time, however, and instead by the number of bytes being sent. This adds another level of reliability. Now not only do you know the order of the packets being received, but you can also tell if any bytes are missing.
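To make that concrete, here is a toy reassembly sketch in Python (my own illustration, not real TCP internals): each "packet" is tagged with the byte offset of its payload, the receiver buffers anything that arrives early, and the next expected offset is exactly what it would send back as its ack.

```python
def reassemble(packets, start_seq=0):
    """Reorder (seq, payload) packets by byte offset, like a TCP receiver.

    Returns (data, next_expected_seq). Anything after a gap stays buffered,
    and next_expected_seq is the ack the receiver would send.
    """
    buffered = {seq: payload for seq, payload in packets}
    data = b""
    expected = start_seq
    while expected in buffered:
        payload = buffered.pop(expected)
        data += payload
        expected += len(payload)  # seq numbers advance by bytes, not by 1
    return data, expected

# Packets arrive as B -> A, and C (byte offset 10) was dropped along the way:
pkts = [(5, b"world"), (0, b"hello")]
data, ack = reassemble(pkts)
print(data, ack)  # b'helloworld' 10
```

An ack of 10 tells the sender "resend from byte 10 onward", which is how the missing piece eventually gets retransmitted.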
Of course, there are still going to be many issues along the way. Still, there is a reasonably robust framework in place for dealing with and mitigating a lot of these errors.
## 🤷‍♂️ When would you use it?
---
Most likely, you are already using TCP/IP.
TCP is best used where reliability is needed: where any missing pieces of data could have a negative effect on the experience of the client.
For example, if you missed 10% of packets when loading a webpage - then chances are you are going to receive a hot mess, and the page will fail to render.
However, there are cases where reliability and error checking isn't really vital. In fact, it can be detrimental.
Gaming and video streaming are two areas where TCP isn't actually the best choice. Instead, UDP is a much better option because UDP has a stronger focus on speed than reliability. Ensuring every single packet in a video call is received is nowhere near as important as making sure the participants can see and hear each other.
Does it really matter if 5 pixels in the corner are missing for 1 frame? Will anyone really notice? Most likely not. Here speed is king, and a few dropped packets aren't going to be a big deal.
## 💯 Gimme More
---
Hopefully, you now know a bit more about TCP than before you started reading. If you want to dive deeper and not only increase your breadth but also your depth, then check out the below resources 👇
* [Computerphile TCP Meltdown](https://youtu.be/AAssk2N_oPk)
* [Cloudflare - What is TCP](https://www.cloudflare.com/learning/ddos/glossary/tcp-ip/)
* [Cloudflare - What is IP](https://www.cloudflare.com/learning/ddos/glossary/internet-protocol/)
* [Chris Greer Video series on how TCP works](https://youtu.be/HCHFX5O1IaQ)
## This week's puzzle
---
This week's puzzle is based around the Collatz conjecture.
The Collatz conjecture is quite simple. It states that:
Given a number (n):
If (n) is 1 👉 return 1 and finish
If (n) is even 👉 return n / 2
If (n) is odd 👉 return 3n + 1
Repeat
This simple set of rules will always return 1 (well maybe not always but it hasn’t been proven otherwise).
The path from N -> 1 is called the sequence. E.g.
n = 20
20 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1
So if n = 20, the sequence is 8 numbers long.
Your challenge is to find the longest sequence where n < 1,000,000.
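A brute-force sketch in Python (my own attempt, not an official solution from the newsletter). The iterative loop avoids recursion-depth problems, and the cache makes the full search feasible because Collatz sequences overlap heavily:

```python
def collatz_len(n, cache={1: 1}):
    """Number of terms in the Collatz sequence from n down to 1, inclusive."""
    path = []
    while n not in cache:            # walk down until we hit a known value
        path.append(n)
        n = n // 2 if n % 2 == 0 else 3 * n + 1
    length = cache[n]
    for m in reversed(path):         # memoize the walk so later calls are cheap
        length += 1
        cache[m] = length
    return length

print(collatz_len(20))  # 8, matching the example above

if __name__ == "__main__":
    # Scaled down to 100,000 here; bump to 1_000_000 for the actual challenge.
    best = max(range(1, 100_000), key=collatz_len)
    print(best, collatz_len(best))
```

(Using a mutable default argument as the cache is a deliberate memoization shortcut here; a module-level dict or `functools.lru_cache` around a step function would work just as well.)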
Please let me know if you come up with a solution by responding to this tweet:
{% twitter 1264998881013698566 %}
Alternatively, if you liked this article and want to show some love, then the above tweet is the place to do it ☝️
**If you want to subscribe to Breadth and get posts like the above straight to your mailbox - [then click here.](https://breadth.substack.com/subscribe)**
| harryblucas |
343,547 | Depois do Café - Episodio 12 - Desenvolvimento de Software no Interior (com André Angelucci e Gabriel Dias) | Neste episódio a gente fala das diferenças, vantagens e desvantagens de ser uma pessoa desenvolvedora no interior. Falamos com o André Angelucci e o Gabriel Dias sobre trabalhar em Fernandópolis - SP. | 0 | 2020-05-25T18:08:44 | https://dev.to/depoisdocafe/depois-do-cafe-episodio-12-desenvolvimento-de-software-no-interior-com-andre-angelucci-e-gabriel-dias-amf | podcast, programação, portugues, interior | ---
title: Depois do Café - Episodio 12 - Desenvolvimento de Software no Interior (com André Angelucci e Gabriel Dias)
published: true
description: Neste episódio a gente fala das diferenças, vantagens e desvantagens de ser uma pessoa desenvolvedora no interior. Falamos com o André Angelucci e o Gabriel Dias sobre trabalhar em Fernandópolis - SP.
tags: podcast, programação, portugues, interior
---
{% spotify spotify:episode:5E9VSdgwaZq5FNPVw7Qeoj %}
Don't forget to join our Telegram group for exclusive episodes: https://chat.depois.cafe
You can also listen on your favorite app: [iTunes](https://podcasts.apple.com/br/podcast/depois-do-caf%C3%A9-com-airton-zanon/id1480842641), [Breaker](https://www.breaker.audio/depois-do-cafe-com-airton-zanon), [Google Podcasts](https://www.google.com/podcasts?feed=aHR0cHM6Ly9hbmNob3IuZm0vcy9lMGU0MDU4L3BvZGNhc3QvcnNz), [Overcast](https://overcast.fm/itunes1480842641/depois-do-caf-com-airton-zanon), [Pocket Casts](https://pca.st/bpyo3i4y), [Radio Public](https://radiopublic.com/depois-do-caf-com-airton-zanon-6r2oLq), [Spotify](https://open.spotify.com/show/4cqX5o40bClwqtYHv9X7Lp) and others...
In this episode we talk about the differences, advantages, and disadvantages of being a software developer in the countryside. We talked with André Angelucci and Gabriel Dias about working in Fernandópolis - SP.
In the practical way "Depois do Café" likes to be, we talk about agile methods, salary differences, diversity, and much more.
----------------------
**Participants:**
Airton Zanon - [@airtonzanon](https://twitter.com/airtonzanon) (twitter)
Andre Angelucci - [@AndreAngelucci](https://twitter.com/AndreAngelucci) (twitter)
Gabriel Dias - [@gdiasb12](https://twitter.com/gdiasb12) (twitter)
----------------------
For more episodes and to learn more about the podcast, visit https://episodios.depois.cafe or https://dev.to/depoisdocafe.
Siga-nos no twitter [@dpsdocafe](https://twitter.com/dpsdocafe) | airtonzanon |
343,551 | My Journey Through Tech Volunteering: Anticipation, Passion, Burnout, and Looking Ahead | This is the story about how I found my path volunteering in tech, how it gave that new chapter of my life new meaning, how I burned out, and what's next. | 0 | 2020-05-25T17:39:53 | https://blog.eyas.sh/2020/05/my-journey-through-tech-volunteering/ | volunteering, teaching, career, motivation | ---
title: My Journey Through Tech Volunteering: Anticipation, Passion, Burnout, and Looking Ahead
published: true
description: This is the story about how I found my path volunteering in tech, how it gave that new chapter of my life new meaning, how I burned out, and what's next.
tags: volunteering, teaching, career, motivation
canonical_url: https://blog.eyas.sh/2020/05/my-journey-through-tech-volunteering/
cover_image: https://dev-to-uploads.s3.amazonaws.com/i/37moz6r5jm9y92t1vqhc.jpg
---
_Cover Image: Winding Path by Phil Bulleyment, via Flickr. CC BY-2.0_
I had been working in New York City for just over a year when I sat down at one of my [favorite cafes](https://sweatshop.coffee/) in my neighborhood to write a personal journal entry. I gave it the title _"On the crossroads between goal-oriented and process-oriented"_ and I wrote down stream-of-consciousness reflections on my life, career, and how I wanted to do things differently.
It was October 2015, and I had finished grad school and moved to NYC to work full-time as a Software Developer in a fin-tech company. I was having what I have come to see as the seminal quarter-life crisis many folks go through when they finish their formal years of education. I had been chasing goals all my life up until then, and now I had the luxury and privilege of deciding whether I should set another goal or do things radically different than what I had done so far. _Goal-oriented_ versus _Process-oriented_, I called it.
A lot of my thoughts at the time (and those in that journal) have been foundational to my thinking in this new life chapter. Most of those are a story for another time. One I kept coming back to, however, was knowing that I needed to re-engage with _doing good_ in the world.
There are many ways to do good in the world, but I eventually settled on finding _skill based volunteering_ opportunities that used my tech skills as potentially most effective and satisfying. My thinking was: If tech companies put a high dollar amount on my time and skill, wouldn't giving some of it away be the most effective thing I can do?
This is the story about how I found my path volunteering in tech, how it gave that new chapter of my life new meaning, how I burned out, and what's next.
<figure>
<figcaption>
Photo by _Women of Color in Tech stock images_ ([via Flickr](https://www.flickr.com/photos/wocintechchat/25720969670/))
</figcaption></figure>
## Finding my path in Volunteering in Tech
Being passionate about tech and justice, I was interested especially in areas where I can help in issues of equity, representation, and access. I also liked teaching, so my first intuition was looking at programming bootcamps, as well as programs like [Girls Who Code](https://girlswhocode.com/), [Black Girls Code](https://www.blackgirlscode.com/). Bootcamps appealed to me because they gave an opportunity to work with folks looking to start a career in tech _soon_ and provided access to minoritized people often excluded from the "traditional" STEM pathway. School programs were exciting for different reasons: it's way more indirect, but it gave the opportunity to inspire a student to consider a totally new path.
One important piece of my situation back then is immigration: I was on an [H1-B visa](https://en.wikipedia.org/wiki/H-1B_visa), which meant that my continued ability to live in the U.S. was tied to my job. Therefore, I wasn't really looking for full-time or sabbatical-type service opportunities.
This ended up excluding most bootcamps at that time. It also excluded the more intense summer programs like the Girls Who Code [Summer Immersion Program](https://girlswhocode.com/programs/summer-immersion-program).
I decided the best match for me at the time was the Girls Who Code _Clubs_ program, which gave me the ability to work with a school in the NYC-area as an instructor in their after-school program. Finally, after a lot of indecision, I ended up applying on Dec 31.
The background screen started on Jan 3. Five days later, I received an e-mail saying I was matched with [Chloe Taylor](https://www.chloetaylortech.com/), who [was just co-starting a Club](https://medium.com/@GirlsWhoCode/a-girls-who-code-club-facilitator-shares-how-giving-back-boosted-her-career-870a7461a525) at her school. For the coming spring term, I would be helping as an instructor for an after-school club for 5<sup>th</sup>- and 6<sup>th</sup>-grade girls interested in coding.
## Your *N*th reminder that teaching is hard
Everyone tells you teaching is hard, but _boy was it hard_. I always felt like teaching and mentoring was a core part of me and my passions. I deeply cared about doing a good job. I practiced my very first class again and again, but I don't think I was truly ready; I underestimated how difficult it would be to command the attention of a classroom. I was used to TA-ing in college where the students needed me, or mentoring co-workers asking for help, but those were adults who had figured out how to manage their attention and distractions. I remember feeling a hard-to-pinpoint—_almost embarrassed_—sensation after that first club meeting, like I failed at something and was anxious to show my face the next week.
Some of the girls were incredibly excited to code. For others, their parents wanted them to get into it, but they were not yet convinced. They all were eager to try coding! The first good day, where I _felt_ I truly taught and excited them, I was over the moon. But the hard days where I felt like a total failure never really went away.
The clubs were set up (at least back then) to meet once a week, have the instructor briefly present a new concept, then work on practice exercises. None of the girls had prior coding experience, so we were using the [Scratch](https://scratch.mit.edu/) programming language[^1]. I would walk around the classroom with the other teachers (who were themselves picking up coding and doing some of the exercises) and we'd answer questions, check in on the students, and guide them through their projects.
[^1]: A more advanced course with Python was also available.
It was a bit of a trip re-learning how a tween in junior high acts: How they go off day-dreaming mid-sentence or be very single-mindedly focused when something excites them, moving seamlessly between both. It's incredibly charming! When they got stuck in their coding project, they demanded _immediate_ attention, but if my answer ran a _bit_ too long, they'd have already wandered to the next thought.
We had awesome field trips to the BuzzFeed and Facebook[^2] offices, heard from folks in tech about their experiences getting into it. BuzzFeed had a panel of people with all sorts of backgrounds entering the field, told us about challenges faced by women in tech, and answered all sorts of questions. Facebook's free food, snacks, and office design probably made the students more excited about being future engineers than anything I did that year. Whatever does the job, I guess!
[^2]: They made sure to tell all their friends they were going to the _Instagram_ offices. It was cooler.
As the semester came to a close, we said our goodbyes. I was thankful for the experience and hopeful I might have helped. At that point, I had the sense that teaching younger students was likely not in my wheelhouse. I wanted to get better at it but also knew a trained educator is really what this program needed.
## Meanwhile, the Election...
You almost forgot all of this is happening in 2016, didn't you? All this time I had been thinking about _"How do I do more good in the world?"_ I was also facing a feeling of impending doom. The U.S., arguably the world's first successful experiment in multicultural democracy, was faced with the possibility of heading towards nationalistic, isolationist, and racist populism. Election anxiety was getting the best of me, an experience that was proving to be [very](https://time.com/4299527/election-mental-health/), [very](https://www.theatlantic.com/health/archive/2016/05/how-to-preserve-your-mental-health-despite-the-2016-election/484160/) [common](https://www.theatlantic.com/health/archive/2016/11/election-anxiety/505964/).
In August, the answer came to me: [DevProgress](https://web.archive.org/web/20190101164332/https://devprogress.us/), a group of volunteers in tech working in loose partnership with the Hillary Clinton campaign to help with progressive causes. DevProgress was effectively a Slack organization, Trello board, and GitHub organization of interested individuals budding off and forming projects with _some_ level of community. Some worked on apps to help organize carpools to vote, others worked with artists making Hillary-inspired art, etc. I applied to join, and by September, I had joined in earnest.
As someone living in the US for 6 years at that point, I cared deeply about where the country is headed. Yet as a visa-holder, I was not allowed to donate to political campaigns, which is traditionally what someone anxious about the election could do. I _could_, however, [volunteer my skills for a campaign](https://www.fec.gov/help-candidates-and-committees/candidate-taking-receipts/volunteer-activity/).
<figure>
<figcaption>As someone working in a closed-source company, my involvement in DevProgress can be seen very starkly in that brief period in 2016.</figcaption></figure>
I contributed to a few of these projects, and spent countless nights especially helping on a project called ["I like Hillary, but..."](https://github.com/DevProgress/i-like-hillary-but), aimed at combatting what we perceived as misinformation and innuendo about Hillary Clinton[^3].
[^3]: We later found out how much of that misinformation was carefully amplified through bot rings, fake profiles, and strategic online advertising targeting both right- and left-leaning voters.
Working with the talented folks of DevProgress was one of the few ways I managed my anxieties in 2016. Things were uncertain, and they were scary—but I could tell myself that I'm doing my best. In retrospect I still wonder if I did my best, or if there's any small action by one person that could have had a butterfly-effect on the whole election. In the midst of it all, though, it was a particularly good coping mechanism.
Volunteering with DevProgress was always going to end abruptly on November 8, but I was hoping it wouldn't end _devastatingly_ too. It did, but a lot of the experience is something I keep with me always: Working with a diverse group of passionate people and knowing that _my tech skills had value for social issues_.
While DevProgress itself became defunct, one of its main leaders [Brady Kriss](https://twitter.com/bradykriss) founded [Ragtag](https://ragtag.org/), which in many ways became the spiritual successor of DevProgress. They're always looking for volunteers, I suggest you join!
For me, part of regrouping and picking up the pieces after 2016 involved a lot of soul searching. I wasn't ready to face post-Election realities yet, so I held off on joining Ragtag in earnest. But the experience of _writing code for good_, coupled with another transformative experience that happened around the same time (which I'll talk about next), led me down what became my passion for the coming few years.
## Building Websites for Good
Around the time I started looking into DevProgress in 2016, I came across a call for volunteers. A non-profit called [Out in Tech](https://outintech.com/) was launching a new hackathon-type event:
[Digital Corps](https://outintech.com/digital-corps/). Their announcement read:
> **Out in Tech is teaming up with Squarespace** to build websites for ten (10) organizations fighting to protect LGBTQ+ rights in their home countries, from Pakistan to Botswana.
They announced a one-day event where teams of techy volunteers would build a website for an activist organization working to protect the rights of LGBTQ+ people in their area. I forwarded this to a few friends, adding _"That's pretty interesting. I'm considering signing up..."_
The event was scheduled for a Saturday in September. I wasn't quite sure how impactful building a _website_ would be, but the idea of doing good using tech appealed to me. They also expressed a need for Arabic speakers, which made it all the more compelling to me to try and help.
That day, what quickly became apparent as my team started working on our website, was just how impactful an online presence can be. These activists were all either in countries where it was illegal or extremely dangerous to be LGBTQ, in incredibly underserved areas, or both.
A website is a safe way for both the brave activists to reach their communities, and for at-risk individuals to get the information they need. A website is a bold assertion of existence in countries that silence and deny the existence of citizens that do not conform. In many cases, the websites we build will be the first affirming resource written in a given native language or dialect. Those websites, with their content written by local activists, will often be the first resource from a local perspective on issues of sexual orientations and gender identity. Those websites, however small they might seem, end up countering the narrative that the modern LGBTQ movement is merely a Western movement.
For people in the west, amplifying voices of local activists is also important to counter [homonationalism](https://en.wikipedia.org/wiki/Homonationalism) in Western countries.
Lifting up those voices is important, meaningful, consequential work. I felt that deeply.
## The Digital Corps become an army
A few months into 2017, an opportunity came knocking. I was still buried deep in the pits of despair following the 2016 election, too anxious to think about politics and activism. Out in Tech sent out a call asking people to sign up to volunteer on the organizing side.
Digital Corps stood out to me as a dream project I'd love to work on: it is globally minded, fairly activist and political, yet not necessarily directly engaging with the realities of the US political climate, which I was hoping not to have to think about all the time.
And just like that (fast-forwarding a few weeks), I became part of the first group of organizers taking on Digital Corps as a repeatable long-term program. We ended up partnering with [Automattic](https://automattic.com/), the creators of WordPress. Automattic proved to be the best partner a non-profit can ask for: they offered our partner organizations free Premium and Business hosting and tech support, and they offered our volunteers _excellent_ support as they built these websites, sending _Automatticians_ to provide front-line support as we built these sites.
We set up the organizing team into federated groups of _organizers_ each working with one or more activists on understanding their needs prior to the event. We figured out requirements and collected anything we might need: reference information, content, photos, etc., ahead of the event, then worked on selecting volunteers and getting them up-to-speed on the day of.

My first ever project was working with the [Massachusetts Transgender Political Coalition (MTPC)](https://www.masstpc.org/) on creating a [Name & Gender Marker Change Guide](https://mtpcidproject.com/) for people navigating this incredibly difficult process in Massachusetts. The result was gorgeous and thoughtful, in no small part thanks to [Rita de Almeida](https://www.ritadealmeida.com/), a volunteer on the team. Rita is also an artist, UX designer, and researcher. She soon became the goddess of Digital Corps, heading the entire organizing effort.
The first event I helped organize in June 2017 was a success. It gave me great confidence that the "magic" I felt working on one website was repeatable, and with it the potential for a positive impact on the activists around it.
Things weren't _perfect_ right away; we were still trying to understand how to scope projects so they fit in a day, what level of maintenance to expect the organizations to handle themselves, what level of support to offer them, and more. We tried working with a large multinational organization to extend their Drupal/Salesforce pipeline and quickly learned that self-contained projects have a _way_ higher chance to succeed than incremental pieces in large projects.
We also learned that WordPress powers [30% of the world's websites](https://venturebeat.com/2018/03/05/wordpress-now-powers-30-of-websites/) for a reason: It's maintainable and accessible for all users, while still highly customizable by web developers.
With that, from 2017 until 2020, I've worked on 10 or so of these events (Digital Corps tries to hold 3-4 events/year across the world). Each of those events is quite a big production: From logistical event planning, marketing, volunteer recruitment and selection, to finding and vetting partner organizations, working with them on their requirements, and making sure we have the needed requisites to build them a high-quality presence.
Working with these organizations over the last few years was among the most meaningful work I had done.

I worked with **[Roopbaan](https://roopbaan.org)**, Bangladesh's first and only LGBT magazine. It was founded by hero and activist [Xulhaz Mannan](https://en.wikipedia.org/wiki/Xulhaz_Mannan). Mannan first published Roopbaan in 2014, and the magazine rose to prominence (as well as infamy) in Bangladesh. Xulhaz was [murdered in 2016](https://www.theguardian.com/world/2016/apr/25/editor-bangladesh-first-lgbt-magazine-killed-reports-say-roopbaan), along with fellow activist and Roopbaan member K Mahbub Rabbi, abruptly putting a stop to the magazine. I learned all this from Mannan's activist friends and colleagues, who wanted to continue the Roopbaan effort online, where it could not be silenced by those who would see it silenced.
I worked with the [African Queer Youth Initiative](https://aqyi.org/) on creating their website and online presence, and watched them grow their website and blog into a respected authoritative perspective on issues impacting queer youth in Africa.

I worked with **[Outcasts Tunisia](https://outcaststunisia.com/)** in developing one of the first Arabic-language transgender-affirming resources in the Maghreb region.
Those, and many more, inspired me daily to do the best I could between working hours and on weekends. I took calls at odd times, working across many time zones, meeting new organizers and activists hoping to make their local communities better, against all odds.
I also met countless volunteers who are passionate about making a difference. Folks who decided to show up at 9:00 AM on a Saturday[^4] at some brightly-lit tech office to work on some websites. They're internationally-minded, many hailing from all over the world, and all passionate about giving back. I'm grateful today to call many of those people good friends.
[^4]: This is tech we're talking about, so 9:00 AM on a weekend is _early_.
### What makes Digital Corps successful?
Volunteer efforts almost always suffer from friction, a lack of motivation, and fundamental challenges with initiative-taking. It's easy to sign up to do something, but it's just as easy to procrastinate doing it. Saying "I'll do this thing" _feels good_ on its own, so when it comes time to reply to that e-mail or decide what project to work on, you might just wait for someone else to do it—or just postpone it a little.
From my experience, _every_ volunteer-based organization I've seen suffers from this.
From my perspective, the Digital Corps program sidesteps this by making so much of the _action_ self-contained within a single build day. We just ask our volunteers to _show up_ on a given day (and have a certain skill set), and we count on them to work as they're energized by the colleagues around them.

Seeing these organizations and the impact associated with their work is incredibly meaningful. People are proud of the work they've done, rate the event highly in surveys, and will often apply for future events and tell their friends about it.
Effectively, we retain a highly motivated, highly skilled population of volunteers by shifting a lot of the day-to-day _engagement_ work onto organizers. The organizers in turn work to define a self-contained single-day deliverable for these volunteers.
## Burning Out
One downside to the Digital Corps model is that it might be easy to burn out as an organizer. At least that's what happened to me. Working on something _meaningful_ on tight deadlines with a rather short periodicity (once every few months) is a bit of an emotional rollercoaster. You get to know a set of activists and invest in your relationship with them. You race against the clock figuring out requirements, structure, content, etc. It culminates in a huge event that hopefully goes swimmingly. Then you're back to square one.
The periodicity of it really affected me, especially as I tried to balance work and [other volunteer commitments](https://stargate.mit.edu/ectrackweb/home.mit).
My work with Digital Corps helped me survive and come to terms with the new realities of the post-2016 era. It spanned an apartment move, two jobs, and a few promotions. I joined Digital Corps when I was just about checked out with the world around me, and it snapped me out of it.
Yet as 2019 went by I was becoming more fatigued and checked out. It's a really weird experience when the thing that brought you back from detachment is making you more detached.
Part of this was that my first few years with Digital Corps involved me coasting at a fin-tech job, then joining Google and ramping up. Ramping up isn't _quite_ the same as coasting, but it overlaps in often being less high-stakes, which helps keep volunteering involvement front-and-center. It felt easy to be an engaged volunteer while coasting at work, but it was much harder trying to do both at full intensity. It felt like my full-time job, though not quite a do-good profession in the same way, had its turn to come first.
## Thinking Ahead
Today, I maintain some minimal involvement in Digital Corps. I keep wanting to explore being more active, but with the pandemic upon us in all its glory, it's now harder than ever to think about how to stay motivated with what you're already doing.
As I wrote this, I had questions on my mind that I hoped spelling this out would answer: What are the decision points for re-engaging now or later? What circumstances might need to be in place for me to re-engage in a different capacity?
I was disappointed the answer didn't come to me. I sent an early draft of this to friends hoping for perspective. Here are some proposed parameters:
- Dive back in again and then accept the inevitability of burning out (again)
- Angle for a career shift within Google to get _paid_ to do "good" work (not just do-no-harm work)?
- Does having a green card mean you can donate to political causes? If so, it might be time to take the rich person offramp and donate as a way of exercising your values.
- Can you look for avenues in Digital Corps for a smaller amount of contribution?
- Focus on a different area of volunteering so you can chase the high of being a newbie again and ride that wave until the next trough finally hits.
Embracing burn-out as a (sometimes) inevitable part of the process and working with it resonates. We all burn out at work, but it rarely means we're forced into early retirement.
Google, for its part, does have the [Google.org Fellowship](https://blog.google/outreach-initiatives/google-org/googleorg-fellowship/), where employees effectively take a sabbatical from their role to work with non-profits that need that work. This, and other types of sabbaticals, are definitely appealing. Working out the timing (especially when putting work on hold might slow down your career advancement) and figuring out the right trade-off remains something I haven't mastered there.
Donating should absolutely become part of the wider picture of wanting to do good. Donating competes for money with buying other goods, but it doesn't really compete on time with other commitments _per se_. If I can, I'd still love to do both.
I also need to answer questions about _what_ and _how much_. Can I find smaller meaningful chunks to contribute? Would those fit in the common volunteering model, where it is harder to _come up_ with self-contained tasks that need work than it is to _do the work_ for those tasks? Will working in a new area re-inspire me in ways where I might have gotten numb[^5]?
[^5]: Analogous to [compassion fatigue](https://en.wikipedia.org/wiki/Compassion_fatigue), if you will.
I'm still not sure what lies ahead. But soon enough the pendulum will swing from burnt out to restless, and the cycle will start over.
---
_If my discussion on [Digital Corps](https://outintech.com/digital-corps/) excites you, please consider applying for their coming events! Their mailing list will inform you of upcoming opportunities._
_If this resonates, I'd love to hear what you think. Tweet at [@EyasSH](https://twitter.com/EyasSH). Or, be sure to [sign up to get updates](http://eepurl.com/gVgusL) on future articles._
| eyassh |
343,888 | Different ways of structuring Arrays in Javascript | Arrays are indispensable data-structures in javascript and understanding how to effectively use them... | 0 | 2020-05-26T08:20:08 | https://dev.to/carter/different-ways-of-structuring-arrays-in-javascript-5dac | arrays, datastructures, javascript, destructure | Arrays are indispensable data-structures in javascript and understanding how to effectively use them to solve problems is a crucial skill to master.
We will be taking a look at some of the many ways to create arrays in JavaScript.
----------
**_Table of Contents:_**
- [Basic way](https://www.jeffubayi.site/blog/array-data-structures/#basic-way)
- [With Array Constructor](https://www.jeffubayi.site/blog/array-data-structures/#with-array-constructor)
- [Spread Operator](https://www.jeffubayi.site/blog/array-data-structures/#spread-operator)
- [From another Array](https://www.jeffubayi.site/blog/array-data-structures/#from-another-array)
- [From Array-Like Objects](https://www.jeffubayi.site/blog/array-data-structures/#from-array-like-objects)
- [Using Loops like Map and Reduce](https://www.jeffubayi.site/blog/array-data-structures/#using-loops-like-map-and-reduce)
• [Array Map](https://www.jeffubayi.site/blog/array-data-structures/#array-map)
• [Array Reduce](https://www.jeffubayi.site/blog/array-data-structures/#array-reduce)
- [New Array of Length and Fill with some value](https://www.jeffubayi.site/blog/array-data-structures/#new-array-of-length-and-fill-with-some-value)
- [From Objects using Object.keys and Object.values](https://www.jeffubayi.site/blog/array-data-structures/#form-objects-using-object-keys-and-object-values)
- [Array Concat Function](https://www.jeffubayi.site/blog/array-data-structures/#array-concat-function)
----------
I'll be using the Avengers comic flick just to make learning fun while creating arrays of superheroes.

Let's "Assemble the Avengers".
## What is an Array
An array data structure, or simply an array, is an ordered list of values: a collection of elements (values or variables) identified by an index or key. The simplest type of array data structure is a linear array.
## Basic way
To start, the most basic way to create an array is as follows:
```js
const Avengers = ['Ironman', 'Hulk', 'Thor', 'Cpt America'];
```
----------
## With Array Constructor
Another way to create an array is by using the Array constructor function.
```js
const Avengers = new Array('Hulk', 'Thor', 'Ironman', 'Cpt America');
```
You can achieve the same with the newer `Array.of` function. In the following example, we create an array of mixed values:
```js
const Avengers = Array.of('Hulk', null, 'Thor', undefined);
console.log(Avengers);
// 👆 (4) ["Hulk", null, "Thor", undefined]
```
An interesting thing to notice about the constructor function is its handy override: if you pass only one argument and it is an integer, the constructor function will create an empty array of that specified length.
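For example, here's a quick illustration of that override, contrasted with a non-integer argument:

```js
const empty = new Array(5);    // one integer argument: an empty array of length 5
console.log(empty.length);     // 5
console.log(0 in empty);       // false, the slots are holes rather than undefined values

const single = new Array('5'); // any other argument: a one-element array
console.log(single);           // ["5"]
```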
----------
## Spread Operator
It **spreads** the items that are contained in an **iterable** (an iterable is anything that can be looped over, like Arrays, Sets…) inside a **receiver** (A receiver is something that receives the spread values)
In the following example, we add a new item and spread the old array to create a completely new array.
```js
const Avengers = ['Ironman', 'Hulk', 'Thor', 'Cpt America'];
const moreAvengers = ['Cpt Marvel', ...Avengers];
```
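And because spread works on any iterable, not just arrays, you can spread a `Set` the same way, which is handy for de-duplicating:

```js
const roster = ['Hulk', 'Thor', 'Hulk'];
const uniqueRoster = [...new Set(roster)];
console.log(uniqueRoster);
// 👆 (2) ["Hulk", "Thor"]
```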
----------
## From another Array
`Array.from` allows you to create an array from another array.
The newly created array is a completely new copy, and changes to it will not mutate the old array.
```js
const Avengers = new Array('Hulk', 'Thor', 'Cpt America', 'Ironman');
const copyOfAvengers = Array.from(Avengers);
```
----------
## From Array-Like Objects
Some lists look like arrays but are not arrays. At times you may want to convert such a list to an array for better operability and readability.
One such list is the NodeList you receive as the output of `document.querySelectorAll`:
```js
const divs = document.querySelectorAll('div');
// Legacy approach: borrow Array.prototype.slice to copy the array-like NodeList
const divsArray = Array.prototype.slice.call(divs);
```
Here you can use the `Array.from` function as well to create the array from the Array-like objects. Let’s see that in the following example:
```js
const divs = document.querySelectorAll('div');
const divsArray = Array.from(divs);
```
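One extra detail not shown above: `Array.from` also takes an optional mapping function as its second argument, so you can convert and transform in a single pass:

```js
// An array-like object, converted and doubled in one step
const doubled = Array.from({ length: 3, 0: 1, 1: 2, 2: 3 }, n => n * 2);
console.log(doubled);
// 👆 (3) [2, 4, 6]
```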

## Using Loops like Map and Reduce
Even though `map` and `reduce` are used to loop over arrays, their non-mutating nature allows us to create new arrays in different ways.
### Array Map
The `map` function will loop over the items and return a new array of mapped items:
```js
const Avengers = ['Hulk', 'Thor', 'Ironman', 'Cpt America'];
const avengersEndgame = Avengers.map(a => `${a} kills Thanos`);
console.log(avengersEndgame);
// 👆 (4) ["Hulk kills Thanos", "Thor kills Thanos", "Ironman kills Thanos", "Cpt America kills Thanos"]
```
### Array Reduce
`reduce` lets you loop over the items and perform any kind of operation with each item. The outputs of those operations can be accumulated into any kind of collection; here, a new array.
```js
const avengers = ['Ironman', 'Hulk', 'Thor', 'Cpt America'];
const avengersCopy = avengers.reduce((gang, avenger) => [
  ...gang,
  { avenger }
], []);
console.log(avengersCopy);
/* 👆
. (4) [{…}, {…}, {…}, {…}]
. 0: {avenger: "Ironman"}
. 1: {avenger: "Hulk"}
. 2: {avenger: "Thor"}
. 3: {avenger: "Cpt America"}
. length: 4
*/
```
## New Array of Length and Fill with some value
We can quickly create new arrays of any finite length with the Array constructor.
All we have to do is pass the desired length of the array as a number to the constructor.
Like in the following example, we will create a new Array of length `6`.
Creating an empty array alone isn't very useful, though, because iteration methods like `map` skip the empty slots until the array has real items in it.
One quick way to fix that is to use the array's `.fill` method to put an arbitrary value in each index of the array.
Once the array is filled, you can use loops to enhance it further with different values.
```js
const emojis = new Array( 6 ).fill( '😎' );
console.log(emojis);
// 👆 (6) ["😎", "😎", "😎", "😎", "😎", "😎"]
// Breakdown:
const arr = new Array( 6 );
console.log(arr);
/* 👆
. (6) [empty × 6]
. length: 6
*/
arr.fill( Math.random().toFixed(2) );
/* 👆
. (6) ["0.80", "0.80", "0.80", "0.80", "0.80", "0.80"]
. 0: "0.80"
. 1: "0.80"
. 2: "0.80"
. 3: "0.80"
. 4: "0.80"
. 5: "0.80"
. length: 6
*/
```
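To put a _different_ value in each slot (rather than one repeated value), you can chain `.fill` with `.map`, or let `Array.from`'s mapping function do both steps at once:

```js
// fill() removes the holes, so map() will visit every index
const randoms = new Array(6).fill(0).map(() => Math.random().toFixed(2));
console.log(randoms.length); // 6, each element is its own random value

// Array.from does the same in one step via its mapping function
const ranks = Array.from({ length: 6 }, (_, i) => i + 1);
console.log(ranks);
// 👆 (6) [1, 2, 3, 4, 5, 6]
```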
## From Objects using Object.keys and Object.values
You can create an array of the keys or values of any object with the functions `Object.keys` and `Object.values`, respectively.
```js
const avengers = {
  1: 'Black Panther',
  2: 'Ironman',
  3: 'Cpt America',
  4: 'Thor',
  5: 'Hulk',
  6: 'Cpt Marvel',
  7: 'Antman'
};

console.log(Object.keys(avengers));
// 👆 (7) ["1", "2", "3", "4", "5", "6", "7"]

console.log(Object.values(avengers));
// 👆 (7) ["Black Panther", "Ironman", "Cpt America", "Thor", "Hulk", "Cpt Marvel", "Antman"]
```

## Array Concat Function
You can use the Array Concat function to create new Arrays as well.
If you use an empty array as the starting point, the output of `[].concat` will be a new copy of concatenated Arrays.
```js
const Avenger = ['Hulk'];
const moreAvengers = [].concat(Avenger, 'Thor', ['Ironman']);
console.log(moreAvengers);
// (3) ["Hulk", "Thor", "Ironman"]
```
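For comparison, the spread operator covered earlier can produce the same result, since `concat` flattens array arguments one level just like spreading them would:

```js
const spreadAvengers = [...['Hulk'], 'Thor', ...['Ironman']];
console.log(spreadAvengers);
// (3) ["Hulk", "Thor", "Ironman"]
```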
## Conclusion
We have seen several different ways to create arrays in JavaScript.
Not all of these methods can be used in the same way, and each method has its perks for specific use cases.
| carter |
343,924 | What is Blitz.js? | What is Blitz.js? Blitz.js is a new framework that's built on Next.js.It's positioned as a... | 0 | 2020-06-13T11:19:55 | https://blog.sethcorker.com/what-is-blitz-js | react, webdev, javascript, frontend | ---
title: What is Blitz.js?
published: true
date: 2020-05-25 08:36:39 UTC
tags: react,webdev,javascript,frontend
canonical_url: https://blog.sethcorker.com/what-is-blitz-js
cover_image: https://dev-to-uploads.s3.amazonaws.com/i/79soyekjjgx0bjchkzdn.png
---
## What is Blitz.js?
[Blitz.js](https://blitzjs.com/) is a new framework that's built on [Next.js](https://nextjs.org/). It's positioned as a <abbr title="Ruby on Rails">Rails</abbr>-like framework: it's monolithic and focused on developer productivity while using the modern JavaScript tech you're used to. Although new, I'm watching Blitz.js as I believe it has great potential to refresh JS fullstack development; it looks like a great way to dive in without the hassle of decision fatigue and complicated configuration.
## Why does Blitz.js exist?

Web development has evolved a lot over the past decade. There are more libraries and frameworks for JavaScript than ever before, but there's been a trend to opt for smaller libraries, decoupling, and microservices. These all have benefits, but one of the tradeoffs is productivity. Blitz.js is a reaction to this: it wants to bring back the simplicity of the web with modern tooling. It takes some of the powerful tools you use today and packages them up into a nice and easy-to-work-with bundle.
### Simplicity
I enjoy the modern web. There are many ways to get things done the way you need to. You can choose the right libraries for your particular project and requirements. One critique of this, however, is decision fatigue. With so many options, what do you choose? When starting a new project you will constantly make decisions about each and every library and tool, and weigh their strengths and weaknesses.
1. Should you make a <abbr title="Single Page Application">SPA</abbr> or use <abbr title="Server Side Rendering">SSR</abbr>?
2. Will you use <abbr title="Representational State Transfer">REST</abbr> or [GraphQL](https://graphql.org/)?
3. How will you manage your state? Redux? [MobX](a-peek-at-state-management-with-mobx)?
4. What view layer do you want to use, React, Vue or maybe Svelte?
5. How will the project be built? Webpack? Rollup?
6. Which features of JS do I want to use and which babel plugins do I need to add?
These decisions come before you've written a line of actual application code; it's all setup. It's no wonder beginners can get overwhelmed and web veterans can get disenchanted with the direction modern web development is going. Tools like Create React App are a reaction to this, just like Blitz.js. The benefit is simplicity while maintaining the right to pick and choose.
### Small companies
A lot of existing libraries are created for large companies that have different problems from smaller companies. Sometimes it doesn't make sense to adopt certain technologies because they just don't solve the problems small companies are trying to solve. Technologies like GraphQL and frameworks like Relay or Apollo fit together to solve problems which you might only run into at large scale. Blitz.js looks to be your go-to tool until you need to scale. Even then, it's built on other technologies that are proven at scale, so the leap might not even be that great. It's a foundation that grows with your needs when you need to grow.
## Why am I excited?

### The _good_ old days of web development
I started my career as a web developer redesigning and maintaining websites. During the early days I did everything by hand. The HTML was handcrafted, and so was the CSS. What little JS was added was mainly for a sticky header or some mobile optimizations. Once I had a version that was ready to deploy, I'd connect to the server over FTP and copy the files over.

Were they simpler times? Probably. Was I more productive than I am today? Not really. It may have been simpler, but there was a lot of labor-intensive process around the code. Adding a header and footer to every page was manual, and any change required a massive find and replace across every HTML file. I never knew about source control, so manual backups had to take place after every change.

Over time, I evolved my process and new tools came out to make things easier. Copy and pasting header HTML was replaced with templating and a build step. I traded some additional complexity for developer productivity. Later I integrated Gulp and Bower to ease minification, browser-compatibility work, and compilation of SCSS.

Why am I telling you this? Everything in programming is a tradeoff, and you need to find the right tradeoffs for the things you're building. I evolved over time as a developer and my tooling evolved too. Blitz.js looks like a way to bring back the simplicity you remember having, with modern tools and the benefits that come with them.
### My first fullstack experience
My first foray into fullstack development was Ruby on Rails. The reason I chose it, despite not knowing Ruby at the time, was developer productivity. I needed to create something I had never done before; I needed a new approach, because my cobbled-together web tools only took me so far and I'd never worked with databases, an API, or CRUD outside of the classroom. Despite the odds stacked against me, I managed to learn and be productive with Rails. It was batteries included and I owe it a lot. It was flexible enough to let you get stuff done and opinionated enough to make it easy to figure out the _right_ way to do things.
This was a big contrast to my next job, where React powered the frontend and the backend was a RESTful API with no ORM. There were benefits, but there were also times I missed the simplicity of Rails. I thought of going back for side projects, but I'm too invested in the JS ecosystem; that's where I'm most productive and I don't want to leave it behind. Blitz.js might be the best of both worlds: a different take on Rails for JS. It's Rails, but with React built in.
## Why does Blitz.js have a future?

The JS ecosystem is vast and powerful; there are great tooling and libraries available to suit almost any need, but the challenge comes in choosing these tools, configuring them correctly, and combining them while staying productive. Blitz.js does this work for you: the tools exist and they've been configured for you. What I think gives Blitz.js a good future is that it is built on what already exists. It leverages other ecosystems to thrive.
### Next.js
Next.js is a powerful framework in its own right. By leveraging it, Blitz.js can build on top of this solid foundation and get TypeScript support, routing, code splitting, and more for free. As Next.js evolves, Blitz.js can too.
### Prisma 2
Blitz.js piggybacks off the work done by Prisma. Rails had a great ORM which I liked a lot; Prisma is a step above that, allowing for more flexible data modelling, and it is set up to work well with TypeScript.
### CLI
My favorite feature of Rails is the CLI scaffolding. As a beginner to fullstack development, Rails made it easy to generate everything you needed to get an entire working MVC setup. With a single command, a model would be created along with a controller and all the CRUD views you desired. The CLI is what brings everything Blitz.js has to offer together in one easy-to-use place.
### Community evolution
[Blitz.js has a manifesto](https://github.com/blitz-js/blitz/blob/canary/MANIFESTO.md#7-community-over-code) which defines one of its most important tenets, ["Community over Code"](https://github.com/blitz-js/blitz/blob/canary/MANIFESTO.md). This is a simple idea, but it's powerful: it sets the stage for a constructive community that learns from other communities rather than competing with them. Part of this commitment includes transparent development practices; there already exists an <a href="https://github.com/blitz-js/blitz/blob/rfc-architecture/rfc-docs/01-architecture.md"><abbr title="Request for Comments">RFC</abbr> for the Blitz App architecture</a>. This means you can have a say in how Blitz.js evolves and what choices should be made.
## Is it ready for production?

No. Blitz.js still lacks maturity. It's early days, so expect the APIs to change a lot. The documentation is limited, but if you're brave, Blitz.js utilizes existing tech so much that you'll probably be able to make something production-ready with some extra time and effort. Nevertheless, I'm excited to see Blitz.js grow and evolve; I hope it can be the Rails for JS developers.
#### Resources
If you're interested in finding out more, take a look at some of the official places down below.
- [Blitz.js Repo on GitHub](https://github.com/blitz-js/blitz)
- [User Guide](https://github.com/blitz-js/blitz/blob/canary/USER_GUIDE.md)
* * *
[Illustrations provided by ManyPixels](https://www.manypixels.co/gallery/)
* * *
Check out my blog, [Benevolent Bytes](https://blog.sethcorker.com/) for more articles on web development and to [read What is Blitz.js](https://blog.sethcorker.com/what-is-blitz-js).
For more about web development, follow me, [Seth Corker (@Darth_Knoppix) on Twitter](https://twitter.com/Darth_Knoppix). | darthknoppix |
343,946 | 40 reasons why I love being a DEV! | My personal reasons to why I love being a developer. | 0 | 2020-05-26T10:13:02 | https://www.codewall.co.uk/40-reasons-why-i-love-being-a-developer/ | webdev, beginners, codenewbie | ---
title: 40 reasons why I love being a DEV!
published: true
description: My personal reasons to why I love being a developer.
canonical_url: https://www.codewall.co.uk/40-reasons-why-i-love-being-a-developer/
tags: webdev, beginners, discuss, codenewbie
---
Here’s my personal list of reasons why I love being a developer and always will. I hope you can share this list with people who aspire to be web developers or programmers and maybe it will make them jump in and give it a go! Honestly, if you’re fed up with your current job and it’s not in development, try being a developer: it’s awesome.
The whole purpose of this post is to try to give non-developers a glimpse into the life of what it’s like to be a developer. From there, you can make your own mind up!
Reasons why being a developer is awesome
----------------------------------------
1. Learning something new pretty much every day.
2. Being challenged every day whether that be a simple bug fix, to something big like a new project.
3. Being able to see technology make life easier.
4. Taking part in a fast-moving industry that never ceases to amaze you.
5. Lots of coffee.
6. Discovering new open-sourced (free) tools & resources to work with consistently – thanks to GitHub.
7. The opportunity to be creative with your own ideas.
8. Being part of a global industry where everyone knows that arrays start at zero :).
9. With the right dedication and consistency, you can learn yourself to be a pro at this type of work.
10. Pretty decent money, especially for something you love.
11. More coffee.
12. There is always room to learn more.
13. Technology is the future, and it will be here for a long, long time.
14. No worries about the development industry disappearing.
15. It’s great fun and there is definitely lots of coffee.
16. The only job where you can eat a chocolate cake for breakfast and not be frowned upon. (Devs need much sugar)
17. Using beautiful coding software like Visual Studio.
18. Learning to interface applications with just about anything these days.
19. This one is hard to believe… Actually not being too bothered about going to work.
20. Realizing that StackOverflow is my new Bible.
21. Google is now my best friend.
22. Appreciating, deeply, how much effort is used in things like Xbox Games & Virtual Assistants like Amazon’s Alexa.
23. The only job where you and your colleagues can get over-excited when you reduce 6 lines of code to 1.
24. Maintaining and being proud of my 80 words per minute typing speed.
25. Red Bull.
26. Making time-consuming tasks super fast.
27. Having a good reason to actually be on Twitter. Ahem, follow me – [@DanEnglishby](https://twitter.com/DanEnglishby)
28. Developing things that quite possibly have the potential to change the world.
29. Seeing the look on the client’s faces when they get their software/website or another technical resource!
30. My nan telling everyone I’m a genius because I’m a coder.
31. Being able to take my laptop to a beach and write code (Yep I did this once).
32. Having the skills to become a freelancer or start my own business easily.
33. Being able to just look at a website and pretty much figure out how it was made.
34. Caring about what color my IDE theme is. (Dark, of course)
35. Having a genuinely fulfilling career.
36. Taking pride in telling people why Internet Explorer is useless and Google Chrome is superior.
37. Having great job security, there are always jobs available for developers.
38. Being passionate about what I do.
39. There are how-to guides to build or learn just about anything.
40. Did I mention there is lots of coffee?
### Summing Up
I hope you enjoyed my reasons as to why being a web developer or programmer is awesome. I would love to hear your personal reasons as to why you love being a developer too! Please leave a comment with yours.
Like my content? Feel free to check out my blog : [CodeWall](https://codewall.co.uk)
| danenglishby |
343,966 | Build Web App with Go for Beginners Part I | Go was designed at Google in 2007 to improve programming productivity in an era of multicore, netw... | 0 | 2020-05-29T22:04:14 | https://dev.to/iamhabbeboy/build-web-app-with-go-for-beginners-part-i-4cjk | go, webapp | ---
title: Build Web App with Go for Beginners Part I
published: true
description:
tags: #go #golang #webapp
---

> Go was designed at Google in 2007 to improve programming productivity in an era of multicore, networked machines and large code bases.The designers wanted to address criticism of other languages in use at Google, but keep their useful characteristics.
During the past couple of days I have been learning Golang, and I noticed that some of the challenges I faced were the lack of constraints or standards, choosing external packages, and much more 😭. I would like to share some findings that have worked for me and also walk you through the steps by building a simple todo app.
## Prerequisites
* A Golang compiler installed on your Machine
* An IDE or text editor
* Basic understanding of Go
* Basic understanding of HTML and CSS
* Basic understanding of SQL (Only required to create Database 😉)
### Project Layout
We'll start by creating a module for our project, to better track our packages, in a file called `go.mod`, which makes distribution and deployment easier. It's the Go equivalent of `package.json` or `composer.json`.
```
go mod init github.com/iamhabbeboy/todoapp
```
Next, we need to create the following folders.
> You don't have to follow this structure.😊
```
todoapp
├── models
│   └── Todo.go
├── controllers
│   └── TodoController.go
├── database
│   └── connect.go
├── views
│   ├── index.html
│   └── create.html
├── main.go
└── .env
```
### Packages
The following packages will be used in our project.
* Gorm: a Go ORM that supports MySQL and other databases.
```
go get -u github.com/jinzhu/gorm
```
* Mux: handles our routes and requests.
```
go get -u github.com/gorilla/mux
```
* Godotenv: used to load our `.env` variables
```
go get github.com/joho/godotenv
```
* MySQL driver: allows connecting to a MySQL database
```
go get github.com/go-sql-driver/mysql
```
I wouldn't want you to be bored with this post, so I decided to split it into two. The other part will be posted in the comment section soon.
Please do leave a comment in case you have any contributions, I would really appreciate that.😉
Thanks for reading.❤️
| iamhabbeboy |
343,970 | Blue light optics (glasses) - Myth? | Firstly, I would like to start by saying hello and welcome to anyone who reads this. I have written b... | 0 | 2020-05-30T12:52:27 | https://dev.to/luckynos7evin/blue-light-optics-glasses-myth-197h | health, development, wellbeing |
Firstly, I would like to start by saying hello and welcome to anyone who reads this. I have written below what I believe regarding blue light. I'm here to give my experience, hope others learn from that or I create a debate/discussion on the matter.
## Blue Light, what?
So what exactly is blue light? Well, until 2018, I wasn't even really aware of it so here's the Wikipedia definition taken from [here](https://en.wikipedia.org/wiki/High-energy_visible_light) [taken on 25th May 2020].
> “In ophthalmology, high-energy visible light (HEV light) is high-frequency, high-energy light in the violet/blue band from 400 to 450 nm in the visible spectrum. Despite a lack of concurring scientific evidence, HEV light has sometimes been claimed to be a cause of age-related macular degeneration. Some sunglasses and beauty creams specifically block HEV, for added marketing value.”
## My Story
In late 2018 I started a new role for a company. This role was almost 100% remote. After the initial visit to HQ, I would only visit again 6-7 times during the 15 months I was there. Since those 15 months came to an end, I have now started my own company.
So back to the reason I mention the 100% remote working. I was working, streaming, developing and gaming from the same setup, all week, every week. I was finishing work, doing some family stuff and then going back to that same environment. In mid 2019 I was starting to suffer, I was doing too much. I cut down streaming. I cut down extra work and I cut back on PC time. I felt a lot better.
I had felt like this before, in 2017, when my prescription changed a lot in only 2 years. The incidents in 2017 included vomiting and hospital visits. Eye health and eye strain matter a great deal; since then I even get my prescription checked yearly.
The final issue I had was headaches. It wasn't my prescription as I have my eyes tested regularly, I also knew it wasn't down to much else, apart from screen time. When your job is software development, it's pretty hard not to look at a screen most of the time.
## What happened next?
It came to TwitchCon 2019. I was off to San Diego for a few days, not only to network, I was also there to see a little bit of contract work I did for Twitch go live (it worked BTW). The conference was going on and as I was walking around the shop floor so to speak, I found [GUNNAR Optiks](https://gunnar.com/). I had a great talk with them about their glasses, about blue light, which led to me standing there for an hour just discussing life in general. I made one point, "It's a shame you don't do prescription!". I had heard of GUNNAR before, not really looked into it though, and as soon as I said it, I got an instant reply of "Yeah, we do!". “Wow, fantastic, I'll take a pair!!”
## Fast forward 3 months
I went home, waited for Black Friday and, boom, got myself a prescription pair of GUNNAR glasses. I went with the 65 BLPF, as I thought the clear 35 BLPF was going to be too light, and the 90 and 98 were going to be too strong. The middle ground is seemingly the best place to be at times.
## 7 months on
It's now May 2020, I get asked when I stream, in meetings and on other occasions about my glasses.
* “Do they work?”
* “How do you find the tint?”
* “Why not put a filter on your screen?”
Hopefully I can give my experience on some of these questions.
### Do they work?
Well, I haven't had a headache since November 2019, I think that's pretty good. Now, when I say I haven't had a headache, that's a little bit of a lie, I had a few too many Gin & Tonics a couple of months ago. That gave me a headache, rather than looking at the screen all day.
### Do you notice the tint when you’re wearing them?
In all honesty no, there are even times I don't take them off when I finish work, as I just forget I have them on. Yes you can tell things aren't quite white, however your mind seems to adjust to this after about 30 minutes, and you completely forget white is not white anymore.
### Why not put a filter on your screen?
With five monitors, two TVs, mobile phones and tablets, in all honesty, it's easier wearing the glasses.
### Why not use built-in blue light filters on monitors/screens?
Okay, that may help a little (I don't know, I've never tested it). However, surely you are just dimming the light that is still being directed at you rather than actually filtering that light? Until someone can show me this actually does work, I will remain a sceptic on the matter of built-in filters.
## In the end
In the end, this is my opinion, yes it works.
Let's have a healthy discussion on why you think they don't or if you agree with me, why? Have you got any, and where did you get them from?
| luckynos7evin |
344,018 | NextJS APIs validator with Middleware | Validate the request.body is a must for every APIs development. NextJS is able to use Connect compati... | 0 | 2020-05-26T11:46:05 | https://dev.to/meddlesome/nextjs-apis-validator-with-middleware-3njl | nextjs, express, middleware, api | Validate the `request.body` is a must for every APIs development. NextJS is able to use [Connect](https://github.com/senchalabs/connect) compatible Middlewares for an extendable feature on top of each request/response like ExpressJS.
Here is a guide to integrating the `express-validator`, a wrapper of `validator.js` inside your NextJS application.
First, Install `express-validator` in NextJS project
```shell
yarn add express-validator
```
Next, create a NextJS middleware helper as `/lib/init-middleware.js`:
```javascript
export default function initMiddleware(middleware) {
return (req, res) =>
new Promise((resolve, reject) => {
middleware(req, res, (result) => {
if (result instanceof Error) {
return reject(result)
}
return resolve(result)
})
})
}
```
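To see what this helper does, here's a minimal, self-contained sketch. The `fakeMiddleware` below is a stand-in I made up for illustration, not a real Connect middleware; it shows how the helper promisifies the callback style so an API route can `await` it:

```javascript
// Same helper as above
function initMiddleware(middleware) {
  return (req, res) =>
    new Promise((resolve, reject) => {
      middleware(req, res, (result) => {
        if (result instanceof Error) {
          return reject(result)
        }
        return resolve(result)
      })
    })
}

// A stand-in Connect-style middleware: it tags the request, then calls next()
const fakeMiddleware = (req, res, next) => {
  req.validated = true
  next()
}

const runMiddleware = initMiddleware(fakeMiddleware)

const req = {}
runMiddleware(req, {}).then(() => {
  // Resolves only after the middleware has called next()
  console.log(req.validated) // true
})
```

If the middleware passes an `Error` to `next()`, the promise rejects instead, which is what lets the API route `try`/`catch` middleware failures.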
Next, create a NextJS validator middleware that runs the validations and responds with any errors, as `/lib/validate-middleware.js`:
```javascript
export default function validateMiddleware(validations, validationResult) {
return async (req, res, next) => {
await Promise.all(validations.map((validation) => validation.run(req)))
const errors = validationResult(req)
if (errors.isEmpty()) {
return next()
}
res.status(422).json({ errors: errors.array() })
}
}
```
Now it's time to integrate the validation rules and middleware into your NextJS API routes. You can use any of validator.js's [validator functions](https://github.com/validatorjs/validator.js#validators), as in the sample below.
```javascript
import initMiddleware from '../../../lib/init-middleware'
import validateMiddleware from '../../../lib/validate-middleware'
import { check, validationResult } from 'express-validator'
const validateBody = initMiddleware(
validateMiddleware([
check('first_name').isLength({min:1, max: 40}),
check('day').isInt({ min: 1, max: 31}),
check('gender').isIn(['male','female']),
check('mobile_phone').isMobilePhone(['th-TH']),
check('boolean').isBoolean(),
], validationResult)
)
export default async (req, res) => {
switch (req.method) {
case "POST":
await validateBody(req, res)
const errors = validationResult(req)
if (!errors.isEmpty()) {
return res.status(422).json({ errors: errors.array() })
}
nextFunction(req, res)
break;
default:
res.status(405).json({ message: "Method Not Allowed." })
break;
}
}
```
Now `express-validator` will run on each HTTP request to validate your `request.body` in the NextJS API route :) | meddlesome |
344,024 | Answer: Is it possible to share states between components using the useState() hook in React? | answer re: Is it possible to share st... | 0 | 2020-05-26T12:00:56 | https://dev.to/betula/answer-is-it-possible-to-share-states-between-components-using-the-usestate-hook-in-react-31i9 | {% stackoverflow 62015805 %} | betula | |
344,035 | Finding Meow 😺 — the cutest cat from each town using Elasticsearch | #cutest-cat-per-town Cat fact: Meow is one of the most common names for a cat (for obvious... | 0 | 2020-05-26T12:28:56 | https://medium.com/activeai/finding-meow-the-cutest-cat-from-each-town-using-elastic-search-29a9417bc24d | elasticsearch, sql, codenewbie, challenge | #### #cutest-cat-per-town
Cat fact: **Meow** is one of the most common names for a cat (for obvious reasons 😸).
While researching for this article, I realized there are plenty of people who love cats. There are numerous websites to suggest names for cats, see [www.findcatnames.com](https://www.findcatnames.com) for example. [iknowwhereyourcatlives.com](https://iknowwhereyourcatlives.com/) is an interesting data visualization experiment that locates a sample of one million public images of cats in the world. To be honest, this is a core reason I finalized on cats as an example for this article. 😉
Let’s get into understanding the problem statement itself:
> Find the most cutest cat named Meow from each town.
In a town there may be many cute cats. We have to pick only the single cutest cat from each town whose name is Meow.
This problem statement is an example of a typical _greatest-n-per-group_ query (hence _cutest-cat-per-town_). Interestingly, this question comes up several times per week on [StackOverflow](https://stackoverflow.com/search?q=greatest-n-per-group). To be accurate, at the time of writing this article, there are 10,206 questions with the tag *greatest-n-per-group*, and counting.
Let's take a small dataset of cats with the columns id, cat name, town, and cuteness level ranging from 1 to 10, 10 being the cutest.

The expected output per this dataset would be :

Before jumping to Elasticsearch, let’s see how we can achieve this using SQL. There are quite a few ways to go about this, and depending on the database; there can be better performance-oriented solutions too. One which works across databases is:
```sql
SELECT * FROM cat c4 where id in (SELECT MIN(c1.id)
FROM cat c1
JOIN(SELECT c2.town,
MAX(c2.cuteness) AS max_cuteness
FROM cat c2
WHERE c2.name LIKE "%meow%"
GROUP BY c2.town) c3 ON c3.town = c1.town
AND c3.max_cuteness = c1.cuteness AND c1.name LIKE "%meow%"
GROUP BY c1.town;
```
We are not going in detail with the SQL solution, as the objective is to solve using Elasticsearch. However, feel free to drop in a comment in case you want me to explain anything in particular.
---
Let’s jump to Elasticsearch to solve this problem.
Brief about Elasticsearch as per Wikipedia.
> [Elasticsearch](https://www.elastic.co/) is a search engine based on the Lucene library. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents. Elasticsearch is developed in Java and is released as open source under the terms of the Apache License.
A local setup of Elasticsearch and Kibana (optional) would be great to follow along. If you already have one, perfect. Otherwise, I suggest installing it to try out the scripts yourself. Follow the instructions on the [official Elasticsearch website](https://www.elastic.co/guide/en/elasticsearch/reference/current/install-elasticsearch.html) to install it locally. Or if that appears too cumbersome (it's actually not), you can use cloud services like [Elasticsearch Cloud](https://www.elastic.co/elasticsearch/service) _(14 days free trial, no credit card required)_ or [bonsai.io](https://bonsai.io) _(free plan available with no credit card required)_. Personally, I tried bonsai.io and within 5 minutes I was running queries on Kibana provisioned by bonsai.io. Superb. 🤓
Even if that’s too much effort 🙄, use [this playground](https://finding-meow.herokuapp.com/) for search queries.
Oh, and I forgot to introduce Kibana. Kibana as per Wikipedia.
> [Kibana](https://www.elastic.co/kibana) is an open source data visualization dashboard for Elasticsearch. It provides visualization capabilities on top of the content indexed on an Elasticsearch cluster. Users can create bar, line and scatter plots, or pie charts and maps on top of large volumes of data.
With whichever option you selected, let's start by creating a new index `cat` and bulk importing our dataset.
If you are using Kibana, just copy & paste the query below to create a new index `cat` with properties `id`, `name`, `town` & `cuteness`.
```
PUT cat
{
"mappings": {
"properties": {
"id": {
"type": "long"
},
"name": {
"type": "keyword"
},
"town": {
"type": "keyword"
},
"cuteness": {
"type": "short"
}
}
}
}
```
To bulk import the sample data, copy & paste the query below.
```
PUT _bulk
{ "index" : { "_index" : "cat", "_id" : "1" } }
{ "id" : 1, "name":"meow", "town":"meerut", "cuteness":9 }
{ "index" : { "_index" : "cat", "_id" : "2" } }
{ "id" : 2, "name":"tom", "town":"delhi", "cuteness":10 }
{ "index" : { "_index" : "cat", "_id" : "3" } }
{ "id" : 3, "name":"meow", "town":"delhi", "cuteness":10 }
{ "index" : { "_index" : "cat", "_id" : "4" } }
{ "id" : 4, "name":"meow", "town":"delhi", "cuteness":4 }
{ "index" : { "_index" : "cat", "_id" : "5" } }
{ "id" : 5, "name":"cameow", "town":"delhi", "cuteness":10 }
{ "index" : { "_index" : "cat", "_id" : "6" } }
{ "id" : 6, "name":"meowses", "town":"meerut", "cuteness":4 }
{ "index" : { "_index" : "cat", "_id" : "7" } }
{ "id" : 7, "name":"cameow", "town":"bangalore", "cuteness":8 }
{ "index" : { "_index" : "cat", "_id" : "8" } }
{ "id" : 8, "name":"tiger", "town":"bangalore", "cuteness":3 }
{ "index" : { "_index" : "cat", "_id" : "9" } }
{ "id" : 9, "name":"meowses", "town":"mumbai", "cuteness":7 }
{ "index" : { "_index" : "cat", "_id" : "10" } }
{ "id" : 10, "name":"meow", "town":"mumbai", "cuteness":9 }
```
_All the Elasticsearch scripts are tested on Elasticsearch version 7.2._
To improve accuracy, we will change the solution to give more preference to exact match than the partial match and have the result ordered by cuteness.
First, let’s start with the search query to get the cats with the name meow. We can have a simple [wildcard](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-wildcard-query.html) query, but we also need to boost the score for the exact match. For that, we will use the [function_score](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-function-score-query.html) query with two functions (Line no.13 & 20 below), giving more weight to an exact match. The `function_score` allows you to modify the score of documents that are retrieved by a query. This is how our search query will look like:
```
1. GET cat/_search/
2. {
3. "size": 10,
4. "query": {
5. "function_score": {
6. "query": {
7. "wildcard": {
8. "name": "*meow*"
9. }
10. },
11. "boost": "5",
12. "functions": [{
13. "filter": {
14. "match": {
15. "name": "meow"
16. }
17. },
18. "weight": 2
19. }, {
20. "filter": {
21. "wildcard": {
22. "name": "*meow*"
23. }
24. },
25. "weight": 1
26. }]
27. }
28. }
29. }
```
Now let’s look at the aggregation part. In Elasticsearch, we can do aggregation on the search query. There are multiple [aggregation techniques](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations.html). In our case, will use [term based aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket-terms-aggregation.html) which belongs to the bucketing family. For better understanding, let’s look at the final query first. Use [this](https://finding-meow.herokuapp.com/?query=ewogICAgInNpemUiOiAwLAogICAgInF1ZXJ5IjogewogICAgICAgICJmdW5jdGlvbl9zY29yZSI6IHsKICAgICAgICAgICAgInF1ZXJ5IjogewogICAgICAgICAgICAgICAgIndpbGRjYXJkIjogewogICAgICAgICAgICAgICAgICAgICJuYW1lIjogIiptZW93KiIKICAgICAgICAgICAgICAgIH0KICAgICAgICAgICAgIH0sCiAgICAgICAgICAgICAiYm9vc3QiOiAiNSIsCiAgICAgICAgICAgICAiZnVuY3Rpb25zIjogW3sKICAgICAgICAgICAgICAgICAiZmlsdGVyIjogewogICAgICAgICAgICAgICAgICAgICAibWF0Y2giOiB7CiAgICAgICAgICAgICAgICAgICAgICAgICAibmFtZSI6ICJtZW93IgogICAgICAgICAgICAgICAgICAgICB9CiAgICAgICAgICAgICAgICAgfSwKICAgICAgICAgICAgICAgICAid2VpZ2h0IjogMgogICAgICAgICAgICAgfSwgewogICAgICAgICAgICAgICAgICJmaWx0ZXIiOiB7CiAgICAgICAgICAgICAgICAgICAgICJ3aWxkY2FyZCI6IHsKICAgICAgICAgICAgICAgICAgICAgICAgICJuYW1lIjogIiptZW93KiIKICAgICAgICAgICAgICAgICAgICAgfQogICAgICAgICAgICAgICAgIH0sCiAgICAgICAgICAgICAgICAgIndlaWdodCI6IDEKICAgICAgICAgICAgIH1dCiAgICAgICAgIH0KICAgICB9LAogICAgICJhZ2dzIjogewogICAgICAgICAiZ3JvdXAiOiB7CiAgICAgICAgICAgICAidGVybXMiOiB7CiAgICAgICAgICAgICAgICAgImZpZWxkIjogInRvd24iLAogICAgICAgICAgICAgICAgICJvcmRlciI6IHsKICAgICAgICAgICAgICAgICAgICAgIm1heF9jdXRlbmVzcyI6ICJkZXNjIgogICAgICAgICAgICAgICAgIH0KICAgICAgICAgICAgIH0sCiAgICAgICAgICAgICAiYWdncyI6IHsKICAgICAgICAgICAgICAgICAiZ3JvdXBfZG9jcyI6IHsKICAgICAgICAgICAgICAgICAgICAgInRvcF9oaXRzIjogewogICAgICAgICAgICAgICAgICAgICAgICAgInNpemUiOiAxLAogICAgICAgICAgICAgICAgICAgICAgICAgInNvcnQiOiBbewogICAgICAgICAgICAgICAgICAgICAgICAgICAgICJjdXRlbmVzcyI6IHsKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIm9
yZGVyIjogImRlc2MiCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgfQogICAgICAgICAgICAgICAgICAgICAgICAgfSwgewogICAgICAgICAgICAgICAgICAgICAgICAgICAgICJfc2NvcmUiOiB7CiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICJvcmRlciI6ICJkZXNjIgogICAgICAgICAgICAgICAgICAgICAgICAgICAgIH0KICAgICAgICAgICAgICAgICAgICAgICAgIH1dCiAgICAgICAgICAgICAgICAgICAgIH0KICAgICAgICAgICAgICAgICB9LAogICAgICAgICAgICAgICAgICJtYXhfY3V0ZW5lc3MiOiB7CiAgICAgICAgICAgICAgICAgICAgICJtYXgiOiB7CiAgICAgICAgICAgICAgICAgICAgICAgICAic2NyaXB0IjogImRvY1snY3V0ZW5lc3MnXS52YWx1ZSIKICAgICAgICAgICAgICAgICAgICAgfQogICAgICAgICAgICAgICAgIH0KICAgICAgICAgICAgIH0KICAgICAgICAgfQogICAgIH0KIH0) link to execute and try it yourself.
```
1. GET cat/_search/
2. {
3. "size": 0,
4. "query": {
5. "function_score": {
6. "query": {
7. "wildcard": {
8. "name": "*meow*"
9. }
10. },
11. "boost": "5",
12. "functions": [{
13. "filter": {
14. "match": {
15. "name": "meow"
16. }
17. },
18. "weight": 2
19. }, {
20. "filter": {
21. "wildcard": {
22. "name": "*meow*"
23. }
24. },
25. "weight": 1
26. }]
27. }
28. },
29. "aggs": {
30. "group": {
31. "terms": {
32. "field": "town",
33. "order": {
34. "max_cuteness": "desc"
35. }
36. },
37. "aggs": {
38. "group_docs": {
39. "top_hits": {
40. "size": 1,
41. "sort": [{
42. "cuteness": {
43. "order": "desc"
44. }
45. }, {
46. "_score": {
47. "order": "desc"
48. }
49. }]
50. }
51. },
52. "max_cuteness": {
53. "max": {
54. "script": "doc['cuteness'].value"
55. }
56. }
57. }
58. }
59. }
60. }
```
As we need to create one bucket per town, I have added `"field": "town"` in the terms aggregation. This gives us multiple cats per town, but we need only the cutest, hence we use sub-aggregations. Look at the block from line 37 above. I have added two sub-aggregations: *group_docs* and *max_cuteness*. *group_docs* is a [top_hits aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-metrics-top-hits-aggregation.html) which I have configured to return only one record, sorted by cuteness and the search query score. This aggregation produces only one record per bucket.
Now we also need to order the buckets by the max cuteness per bucket. For this, I have defined the *max_cuteness* sub-aggregator, a [max aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-metrics-max-aggregation.html). It returns the max cuteness per town, which I use in the primary _group_ aggregator at line 34 to order the buckets. At line 3, `"size": 0` ensures only the aggregated result is returned.
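On the client side, reading the result back out is a simple walk over the buckets: one `top_hits` document per town. Here's a hedged JavaScript sketch; the `response` object below is an abbreviated, illustrative shape based on our sample data, not a verbatim Elasticsearch response:

```javascript
// Abbreviated, illustrative shape of the aggregation response
const response = {
  aggregations: {
    group: {
      buckets: [
        { key: "delhi", group_docs: { hits: { hits: [{ _source: { id: 3, name: "meow", town: "delhi", cuteness: 10 } }] } } },
        { key: "meerut", group_docs: { hits: { hits: [{ _source: { id: 1, name: "meow", town: "meerut", cuteness: 9 } }] } } }
      ]
    }
  }
}

// One cutest cat per town: the single top_hits document in each bucket
const cutestPerTown = response.aggregations.group.buckets.map(
  (bucket) => bucket.group_docs.hits.hits[0]._source
)

console.log(cutestPerTown.map((c) => `${c.town}: ${c.name} (${c.cuteness})`))
// [ 'delhi: meow (10)', 'meerut: meow (9)' ]
```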
To summarize, for this query I have used a function_score query together with terms, top_hits and max aggregations to solve the *cutest-cat-per-town* problem. Thanks for reading! Do comment if you have any doubts or if you feel you know a better way to "Find Meow!"

P.S.: Ready for the *cutest-cat-per-town* challenge? Tweet your best solution in SQL or Elasticsearch with the hashtags #cutest-cat-per-town and #findingmeow 👨💻, or comment below. If we find your solution better than ours, we will include your submission along with your name. Use [this playground](https://finding-meow.herokuapp.com/?query=e30=) to test and share your solution.
_If you liked this article, please like it and help others see it. Meow would appreciate that_ 😻. _Follow me for other such articles._
| sharmasha2nk |
344,083 | How to update Google Sheets with JSON API | A short tutorial on how to update Google Sheets with JSON API | 0 | 2020-05-26T13:49:32 | https://maxkatz.org/2020/05/21/how-to-update-google-sheets-with-json-api/ | nocode, tutorial | ---
title: How to update Google Sheets with JSON API
published: true
date: 2020-05-21 23:39:05 UTC
tags: no code, tutorial
canonical_url: https://maxkatz.org/2020/05/21/how-to-update-google-sheets-with-json-api/
description: A short tutorial on how to update Google Sheets with JSON API
---
Google Sheets is a well-known service and online spreadsheet. Google Sheets can be more than just a spreadsheet; it can be used as a back-end or a database for applications. For example, [Glide](https://www.glideapps.com/) uses Google Sheets as a database for its mobile applications. Glide lets you build a mobile application connected to Google Sheets without any code. It's incredible how fast you can build a real app.
I was building an app in Glide that displays the latest news using [News API](https://newsapi.org/). I connected the app to a Google Sheets spreadsheet where I entered a number of news stories manually. The app looks like this:
<figcaption>Glide app</figcaption>
This is how the Google Sheets spreadsheets looks:
<figcaption>Data for Glide app in Google Sheets</figcaption>
If I need to update any news I can manually edit the Google Sheets spreadsheets and the Glide app will be updated.
This manual update is not ideal of course.
I wanted to see if there is a way to consume an API from Google Sheets and update the spreadsheet. One way is to add a [script to Google Sheets](https://medium.com/unly-org/how-to-consume-any-json-api-using-google-sheets-and-keep-it-up-to-date-automagically-fb6e94521abd). But I wanted to see if there is a way to achieve the same result using no-code tools. I went back to [Parabola](https://dev.to/maxkatz/parabola-watson-nlu-news-api-and-twilio-tools-i-used-to-build-a-no-code-application-4k3k) and created a flow that calls a REST API and updates a Google Sheets document.
The flow looks like this:
<figcaption>Parabola flow to invoke an API and export the result to a Google Sheets spreadsheet</figcaption>
Let’s look at each step.
The **API Import** step invokes an external REST API. In this example I'm using the News API service.
<figcaption>API Import step</figcaption>
The next step is **JSON Flattener**. This step takes the API response and puts it into a columns/rows format. Below we are specifically flattening the _articles_ column.
<figcaption>JSON Flattener step</figcaption>
The **Column Filter** step is optional. It lets you remove columns that we don't need in the Google Sheets spreadsheet. Instead of removing columns, you can specify which columns to keep, as shown below.
<figcaption>Column Filter step</figcaption>
The last step, **Google Sheets Export** exports the data to a spreadsheet.
<figcaption>Google Sheets Export step</figcaption>
When the flow is run, it first gets the latest news from News API and then exports the result to Google Sheets. I like that this is a no-code approach. We are updating a Google Sheets spreadsheet with JSON from an external API. You can also look at it as consuming an API from Google Sheets. | maxkatz |
344,199 | Fallacies of Distributed Computing: 5. Topology doesn't change | The fallacies of distributed computing are a set of assertions describing false assumptions made... | 0 | 2020-05-25T00:00:00 | https://dereklawless.ie/fallacies-of-distributed-computing-5-topology-doesnt-change/ | distributedcomputing | ---
title: "Fallacies of Distributed Computing: 5. Topology doesn't change"
date: "2020-05-25T00:00:00.000Z"
tags: ["distributedcomputing"]
---
[The fallacies of distributed computing](https://web.archive.org/web/20171107014323/http://blog.fogcreek.com/eight-fallacies-of-distributed-computing-tech-talk/) are a set of assertions describing false assumptions made about distributed systems.
[L. Peter Deutsch](https://en.wikipedia.org/wiki/L._Peter_Deutsch) drafted the first 7 fallacies in 1994, with the 8<sup>th</sup> added by [James Gosling](https://en.wikipedia.org/wiki/James_Gosling) in 1997.
The 8 fallacies are:
1. [The network is reliable](https://dereklawless.ie/fallacies-of-distributed-computing-1-the-network-is-reliable)
1. [Latency is zero](https://dereklawless.ie/fallacies-of-distributed-computing-2-latency-is-zero)
1. [Bandwidth is infinite](https://dereklawless.ie/fallacies-of-distributed-computing-3-bandwidth-is-infinite)
1. [The network is secure](https://dereklawless.ie/fallacies-of-distributed-computing-4-the-network-is-secure)
1. [Topology doesn't change](https://dereklawless.ie/fallacies-of-distributed-computing-5-topology-doesnt-change)
1. [There is one administrator](https://dereklawless.ie/fallacies-of-distributed-computing-6-there-is-one-administrator)
1. [Transport cost is zero](https://dereklawless.ie/fallacies-of-distributed-computing-7-transport-cost-is-zero)
1. [The network is homogeneous](https://dereklawless.ie/fallacies-of-distributed-computing-8-the-network-is-homogeneous)
## 5. Topology doesn't change
### The problem
Network topology is volatile:
- Environments may have differing network requirements -- from quickly spun up development environments through to production environments supporting high availability, resiliency, and redundancy
- Firewall rules are likely to change in response to security concerns or application requirements
- Services may be introduced (e.g. federated authentication) or deprecated, changing network topology
The topology of the network is typically outside your control. Servers may be added or retired, network services may be upgraded or deprecated, planned and unplanned outages may occur.
In cloud computing scenarios -- where elasticity is typically a key motivation for adoption -- network topology will mutate in response to load and utilisation.
__Network topology changes constantly.__
### Solutions
Given that network toplogy is indeed volatile, there are two broad approaches that can be taken to buffer against topology changes.
#### Abstract network specifics
- Avoid referencing IP addresses directly, instead using <abbr title="Domain Name System">DNS</abbr> hostnames to reference resources
- Consider using a [service discovery pattern](https://microservices.io/patterns/server-side-discovery.html) for Microservice architectures
#### Design for failure
- Design your architecture to avoid irreplaceability, assume any server may fail
- Introduce [chaos engineering](https://en.wikipedia.org/wiki/Chaos_engineering) and test for system behaviour under infrastructure, network, and application failures
| dereklawless |
344,228 | Optional Chaining is amazing, here's why? | Optional Chaining is the new javascript operator, which is included as part of EcmaScript 2020. The o... | 0 | 2020-05-26T17:43:39 | https://learn-n-share.hashnode.dev/optional-chaining-is-amazing-heres-why-ckanokqle023vbbs1xzqfsu1e | optionalchaining, es2020optionalchaining, jsoptionalchaining | **Optional Chaining** is the new javascript operator, which is included as part of EcmaScript 2020. The operator permits reading the value of a property located deep within a chain of connected objects without having to explicitly validate that each reference in the chain is valid. The operator works for validating the properties or methods in the object. They don't enable anything in terms of functionality, but they make the code a lot easy to read and write. Let's see how:
I'm sure that we all in our coding experience have faced a common problem, for which we already have a solution.
**The Problem**
```javascript
const data = {
  user: {
    fullname: "Soumya Mishra",
    wishlist: ["wId_101", "wId_102"]
  }
};

// Error-prone version: could throw a TypeError if any intermediate key is undefined.
const wishlistItemCount = data.user.wishlist.length;
```
Let's assume that the above object is for loggedIn user details along with user's wishlist data. Now, there can be users who have not added anything to their wishlist, so the wishlist property in the object might not be present for those users. But we have to pull the wishlist Ids from this object and make another request to fetch data.
Until now, we would choose one of the ways below to handle such scenarios.
**The Solution**
```javascript
const wishlistIds =
data ?
(data.user ?
(data.user.wishlist ?
data.user.wishlist :
[]) :
[]) :
[];
const wishlistItemCount = wishlistIds.length;
OR
let wishlistIds = [];
if (data && data.user && data.user.wishlist) {
wishlistIds = data.user.wishlist;
}
const wishlistItemCount = wishlistIds.length;
```
And this is just for a small object; imagine it for a more complex object structure. Even if one writes the code correctly, it does not read well.
**The Magic Solution**
Now we will see how we can achieve the above solution in just 1 line.
```javascript
const wishlistItemCount = data?.user?.wishlist?.length;
```
This magic solution is achieved through the **Optional Chaining (?.) operator**.
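To see the graceful failure in action, here is the one-liner run against both a populated object and an empty one (the sample objects are illustrative):

```javascript
const fullData = { user: { wishlist: ["wId_101", "wId_102"] } }
const emptyData = {}

// Works when the whole chain exists
const count = fullData?.user?.wishlist?.length
console.log(count) // 2

// No TypeError: the chain short-circuits to undefined at the missing key
const emptyCount = emptyData?.user?.wishlist?.length
console.log(emptyCount) // undefined
```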
Now let's see how this Operator works if the property in the object is a **Function** or an **array**.
**Optional Chaining with Dynamic Properties / Arrays**
Optional Chaining also works with dynamic properties, using a slightly different syntax. Let us see how:
> // The syntax for dynamic properties or array:
?.[ ]
```javascript
const data = {
  user: {
    name: "Ram Kumar",
    socialMediaAccounts: ["twitter", "linkedIn"],
    primarySocialMedia: "twitter",
    socialLinks: {
      twitter: "https://twitter.com/mishraaSoumya",
      linkedIn: "https://www.linkedin.com/in/mishraa-soumya/"
    }
  }
};
const primarySocialMedia = data?.user?.primarySocialMedia; // 'twitter'
// the value of primarySocialMedia can be a different, dynamic property.
const socialMediaUrl = data?.user?.socialLinks?.[primarySocialMedia]; // 'https://twitter.com/mishraaSoumya'
```
Let me explain how the above example works. In an app, a user can have multiple social media accounts, but there can be only one primary account, which can differ per user and is dynamic. So, while pulling the URL for the primary social media account, a developer has to ensure that the dynamic value is actually a property of the socialLinks object.
The same syntax works with the **arrays** also, where you are not sure if the index will always be part of the array. Check the below example:
```javascript
// If the `usersArray` is `null` or `undefined`,
// then `userName` gracefully evaluates to `undefined`.
let userIndex = 13;
const userName = usersArray?.[userIndex]?.name;
```
**Optional Chaining with Function Calls**
Optional Chaining operator also works with Function Calls, but with an additional syntax form.
> // Syntax for Function Calls
?.( )
```typescript
interface InputProps {
value: string;
onBlur?: Function;
onChange: Function;
}
const inputProps: InputProps = {
value,
onBlur,
onChange
}
const onBlurHandler = inputProps?.onBlur?.();
```
So, now you don't have to write multiple checks for your component props before actually calling the function.
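Here is a runnable plain-JavaScript sketch of the same idea (the names are illustrative, not from a real component library):

```javascript
// Call a handler only if it was actually provided
const props = {
  value: 'hello',
  onChange: () => 'changed',
  // note: no onBlur supplied here
};

console.log(props.onBlur?.());   // undefined, and no TypeError is thrown
console.log(props.onChange?.()); // 'changed'
```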
**Properties of Optional Chaining**
The optional chaining operator has a few properties: *short-circuiting*, *stacking*, and *optional deletion*. Let's go through these properties with examples.
1. **Short-Circuiting**: If the LHS of the expression evaluates to null or undefined, then the RHS is not evaluated.
> a?.[++x]
In the above example, x is only incremented if a is not null/undefined.
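A small runnable sketch of short-circuiting:

```javascript
let x = 0;
const a = null;

a?.[++x]; // a is null, so the index expression ++x never runs
console.log(x); // 0

const b = { 0: 'first' };
b?.[x++]; // b is not nullish, so the index expression does run
console.log(x); // 1
```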
2. **Stacking**: It means that more than 1 optional chaining operator can be used on a sequence of property accesses. Also, you can use different forms of the operator in the same sequence.
```javascript
const handler = db?.user?.[13]?.onClick?.();
```
So, you can combine all forms of the optional chaining operator in a sequence.
3. **Optional Delete**: It means that the delete operator can be combined with an optional chain.
```javascript
delete db?.user;
```
So, the user in the db object is only deleted if db is not null or undefined.
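A quick runnable sketch of optional delete:

```javascript
let db = { user: { name: 'Soumya' } };

delete db?.user;           // db is not nullish, so `user` is removed
console.log('user' in db); // false

db = null;
delete db?.user;           // no TypeError: the whole delete is skipped
console.log(db);           // null
```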
----
That's all for the article. Thanks for reading it. Please comment your feedback and suggestions.
[Twitter](https://twitter.com/mishraaSoumya) | [LinkedIn](https://www.linkedin.com/in/mishraa-soumya/)
| mishraasoumyaa |
344,232 | Solving "Boo who" / freeCodeCamp Algorithm Challenges | My guide, notes, and solution to freeCodeCamp's basic algorithm challenge, "Boo who" | 6,770 | 2020-05-26T18:51:29 | https://virenb.cc/fcc-010-boo-who | freecodecamp, algorithms, challenge, javascript | ---
title: Solving "Boo who" / freeCodeCamp Algorithm Challenges
published: true
description: My guide, notes, and solution to freeCodeCamp's basic algorithm challenge, "Boo who"
tags: #freeCodeCamp, #algorithms, #challenge, #javascript
canonical_url: https://virenb.cc/fcc-010-boo-who
series: Solving freeCodeCamp's Algorithm Challenges
---
Post can also be found on my website [https://virenb.cc/fcc-010-boo-who](https://virenb.cc/fcc-010-boo-who "Post on virenb.cc")

Let's solve freeCodeCamp's Basic Algorithm Scripting Challenge, "Boo who"
## Our Starter Code (& Tests)
```javascript
function booWho(bool) {
return bool;
}
booWho(null);
```
```
// Tests
booWho(true) should return true.
booWho(false) should return true.
booWho([1, 2, 3]) should return false.
booWho([].slice) should return false.
booWho({ "a": 1 }) should return false.
booWho(1) should return false.
booWho(NaN) should return false.
booWho("a") should return false.
booWho("true") should return false.
booWho("false") should return false.
```
### Our Instructions
Check if a value is classified as a boolean primitive. Return true or false.
Boolean primitives are true and false.
## Thoughts
* The arguments' data types vary: some are booleans, others strings, arrays, functions, etc.
* After reading the instructions and tests a few times, we must narrow in on true or false inputs/arguments only.
* We have to return a boolean, true or false.
### Further Thoughts
Reading the instructions again, the challenge is asking us to return true for **boolean primitives**.
(Looking at the tests, booWho(false) must return **true**.)
So, we must write a function which returns true if the input is **true** or **false**. If it is any other value, we must return false.
There is a built-in operator in JavaScript, `typeof`, which returns a value's data type as a string.
[MDN documentation: typeof](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/typeof)
Some rough pseudocode:
```
booWho(input) {
  if typeof input is 'boolean'
    return true
  else
    return false
}
```
We are just checking the `typeof` of the argument.
## Solution
<details>
<summary>**[SPOILER: SOLUTION TO CODE BELOW]**</summary>
```javascript
function booWho(bool) {
  return typeof bool === 'boolean';
}
```
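We can sanity-check the solution against a few of the challenge's test cases, e.g. in a Node REPL (the function is repeated here so the snippet is self-contained):

```javascript
function booWho(bool) {
  return typeof bool === 'boolean';
}

console.log(booWho(true));      // true
console.log(booWho(false));     // true
console.log(booWho([1, 2, 3])); // false
console.log(booWho('true'));    // false
console.log(booWho(NaN));       // false
```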
</details>
---
### Links & Resources
['Boo who' Challenge on fCC](https://www.freecodecamp.org/learn/javascript-algorithms-and-data-structures/basic-algorithm-scripting/boo-who "Basic Algorithm Scripting: Boo who Challenge on fCC Website")
[freeCodeCamp](https://www.freecodecamp.org/ "freeCodeCamp")
[Donate to FCC!](https://www.freecodecamp.org/donate "fCC Donation Page")
[Solution on my GitHub](https://github.com/virenb/fcc-algorithms/blob/master/010---boo-who.js "GitHub Solution")
Thank you for reading! | virenb |
344,233 | Make any Static Site Dynamic with Zapier | Without any doubt, there has been a huge adoption for Static Site Generators in these past 2 years, a... | 0 | 2020-05-26T18:12:18 | https://emasuriano.com/blog/make-any-static-site-dynamic-with-zapier | gatsby, static, zapier, release | Without any doubt, there has been a huge adoption for Static Site Generators in these past 2 years, and one of the main reasons was the huge growth of Gatsby and its community.
In my case, Gatsby was my first experience using an SSG (Static Site Generator) and I can confirm that the development experience is wonderful and I will continue using it for future projects!
## But there is something … 🤔
If they are super fast and easy to build, with all the frameworks out there to help you, then why isn't everybody switching to them?
Because sometimes the term _Static_ can be very problematic and might not be aligned with your application. If we look for the definition, the one I like the most is this one:
> A static website contains Web pages with fixed content. Each page is coded in HTML and displays the same information to every visitor. A static site can be built by simply creating a few HTML pages and publishing them to a Web server. **Since static Web pages contain fixed code, the content of each page does not change unless it is manually updated by the webmaster.** — [source](https://techterms.com/definition/staticwebsite)
In summary, if you want to update your beautiful deployed application you need to deploy a new build. This may sound obvious if you are used to triggering a new deploy every time you make a change in your frontend, but now you also need to trigger a new deploy when something in your database changes! 🤦♂️

## Let’s solve this! 👷♂️
You can always build and host your own service that watches for changes in your resources/dependencies and then triggers a new deployment when something changes. But obviously that may take some time, especially depending on the number and kind of dependencies you are using.
I'm a true believer in serverless solutions because they provide an out-of-the-box answer to problems you don't need to solve by yourself, so you can focus on the important stuff, such as building a robust product.
So on this occasion I'll use a new service I found recently: [Zapier](http://zapier.com/). This is how they describe the product:
> Zapier is a tool that allows you to connect apps you use every day to automate tasks and save time. You can connect any of our 1,000+ integrated apps together to make your own automation. What’s more it’s quick and easy to set up — you don’t need to be a developer, anyone can make a Zap! — [source](https://zapier.com/help/what-is-zapier/)

The concept is quite similar to [IFTTT](https://ifttt.com/): "X should happen after Y". But Zapier provides more services to integrate, and it has a much better way of creating these flows. For example, you can create a chain of services or add different paths based on a condition on an event.
Also, the free plan allows you to create up to 5 Zaps (services integration) and a total of 100 runs per month. If you want to automate any small side project, you will be covered!
## My Experience 🙋♂️
I built my [own portfolio](https://gatsby-starter-mate.netlify.app/) using Gatsby where I divided the Landing Page into different sections: Home, About, Projects, and Writing. The first 3 are populated with data from a CMS, but the Writing Section is reading the posts published on Medium.

So as I explained before, I found it very annoying that every time I write a new article or update my CMS I need to manually trigger a deploy. And the more _dependencies_ you have, the more noticeable this issue becomes.
So when I heard about Zapier I was super excited because it will allow me to forget about this constraint.
## Integrating with Zapier ⚡️
The first logical step is to create an account inside [their website](http://zapier.com/). Then click on the orange button in the top right corner “Make a Zap” in order to start an empty integration. You’ll see this screen:

On the left, you have the chain of events, which is empty by default. In the center, there's an input to search for apps or integrations.
In the case of Medium, I had to do a workaround because the current integration can't detect new posts. Luckily, Medium provides an [RSS feed](https://help.medium.com/hc/en-us/articles/214874118-RSS-feeds) for any registered user, so when I publish a new article this RSS feed changes and I just need to watch it. And guess what, Zapier has an RSS Trigger 🎉
Then I just need to take my RSS feed from Medium which is [https://medium.com/feed/@emasuriano](https://medium.com/feed/@emasuriano)

Once I set up the RSS watcher, I can add the next step to deploy my site. I'm using [Netlify](https://netlify.com) to host and deploy it, so I searched for the Netlify integration and selected the "Start Deploy" action.
For every integration you make, Zapier will ask you to log in so it can access your account information. In the case of Netlify, it shows you all your sites to deploy, and I selected my portfolio.

The last step is to assign a name to the Zap and enable it! 🎉

A small comment about this integration: the RSS watcher takes 15 minutes to refresh the RSS content, so don't panic if you see the deploy is not being triggered immediately 😅
## Last Words 👋
Static sites are simply amazing, and when you combine them with tools like Zapier you remove the "static restrictions" they have and they can absolutely compete with dynamic websites!
This has been my experience using the tool and I found it very smooth. In case you are working with a Static Website, I encourage you to go and try the tool. I’m sure that you can find several integrations to fill your needs.
One more thing before you leave, I decided to start a newsletter so in case you want to hear about what I’m posting please consider following it! **No SPAM, no hiring, no application marketing, just tech posts** 👌
[**EmaSuriano Newsletter**](http://eepurl.com/gXvbF5) | emasuriano |
344,248 | Say Hello to Reactjs | Reactjs is trending frontend javascript library in this article we will see how to setup reactjs. In... | 0 | 2020-05-26T18:31:23 | https://saketh-kowtha.github.io/react | react, javascript, babel, webpack | **Reactjs** is a trending front-end JavaScript library. In this article we will see how to set up a Reactjs app.
In this post we are going to set up a React app using *create-react-app*.
### #1. Install Nodejs and Npm
Before setting up, we need to install *nodejs* and *npm*. You can download them here: [click](https://nodejs.org/en/download/).
Note: if you install *nodejs*, then *npm* will be added automatically.
### #2. Checking Node and Npm version
Checking NPM version
```bash
npm -v
```
Checking Node version
```bash
node -v
```
### #3. Installing Reactjs App
```bash
npx create-react-app myapp
```
Now move into the *myapp* directory. Your React app's project structure will look like this:

### #4. Running React in Dev Mode
To start with your react app in development mode run the following command
```bash
npm start
```
### #5. Testing React app
Our React app comes with **Jest** (a testing framework created by Facebook) and **React Testing Library** (a library used to test components), hereafter RTL. We can use Jest and RTL to test our app.
### #6. Generating Build
To generate a production build of our app, run the following command
```bash
npm run build
```
#### Let's get our hands dirty by writing some React code
Open the React app in your favourite editor or IDE, go to the *App.js* file, and replace its contents with the following code.
```javascript
import React from 'react'
const App = () => <div>Hey I did It</div>
export default App
```
Now start the server with the *npm start* command. Once the server has started, go to http://localhost:3000 and check the output in the browser.
We are done with phase 1. It's time for phase 2, i.e. testing our app.
Go to *App.test.js* and replace its contents with the following code.
```javascript
import React from 'react'
import App from './App'
import {render} from '@testing-library/react'
test("It should work", () => {
const {getByText} = render(<App />)
expect(getByText("Hey I did It")).toBeTruthy()
})
```
Run *npm test* to run the tests. There's no need to specify file names; Jest picks up all files with the *.test.js* or *.spec.js* extensions, as well as anything inside a *__tests__* folder.
After the tests pass, our last step is generating a build. Use *npm run build*, and once it succeeds you will find a *build* folder in your project directory. You can deploy that folder to any server environment (Nginx, Apache, an Express static server, etc.).
### Finally

| sakethkowtha |
344,257 | Crystal Faith | A post by Crisscrosse | 0 | 2020-05-26T18:49:34 | https://dev.to/crisscrosse/crystal-faith-1fi | crisscrosse | ||
344,332 | Group Conversation on Remote Working | I was happy to sit down with Robert Sösemann, Elizabeth DeGroot, LeeAnne Rimel, Peter Chittum and Kev... | 0 | 2020-05-26T00:00:00 | https://developer.salesforce.com/blogs/2020/04/group-conversation-on-remote-working.html | remote, teams | ---
title: Group Conversation on Remote Working
published: true
date: 2020-05-26 UTC
tags: remote, teams
canonical_url: https://developer.salesforce.com/blogs/2020/04/group-conversation-on-remote-working.html
---
I was happy to sit down with [Robert Sösemann](https://twitter.com/rsoesemann), [Elizabeth DeGroot](https://twitter.com/elizabethdgroot), [LeeAnne Rimel](https://twitter.com/leeanndroid), [Peter Chittum](https://twitter.com/pchittum) and [Kevin Poorman](https://twitter.com/codefriar) to have an in-depth conversation about our experiences working from home. Personally, I used to have a standard commute – nowadays my immediate co-workers have been a couple of cats. In this session, we talk about a wide variety of topics, but focus on challenges in working remotely; including maintaining a healthy mental and physical lifestyle, productivity tips on getting things done while finding personal time, and working with team members over the Internet. Some key highlights include:
* The importance of keeping a routine, even when your schedule is without cues (like a daily commute)
* Making time and space that is both dedicated and distinct from work
* How aspects of the development cycle can provide cues for work throughout the day
* What kind of hardware is important in setting up your office space – especially the chair
* Tips on what to do if you don’t have the luxury of a dedicated room for work
[Watch the full video](https://play.vidyard.com/cBe4poSNDKDdVvwXfycA42.html)
[](https://play.vidyard.com/cBe4poSNDKDdVvwXfycA42.html)
| joshbirk |
344,515 | Connecting Sucks...it shouldn't be so | Hey there, I'm a 17-year-old but you can bet I've passed through a lot in life. I'm so happy that I... | 0 | 2020-05-26T20:43:37 | https://dev.to/pidoxy/connecting-sucks-it-shouldn-t-be-so-1ola | codenewbie, mentalhealth, beginners, programming | Hey there,
I'm a **17-year-old**, but you can bet I've been through a lot in life. I'm so happy that it even encouraged me to write this post.
From rejections to no replies to ...
There is just so much out there that leads to depression.
I'm currently learning JavaScript, and I do front-end web design.
From college rejections to not getting replies to not finding internships I can apply to, I've learnt to strengthen my emotions and keep pushing, but sometimes these feelings are just overwhelming and depressing.
I love connecting with people, especially in the tech field, where it helps one grow. I hear all about how to go about it, watch webinars on what to do and what not to do, and I'm super cautious about doing it right. But you just won't get accepted (or get replies) all the time. Yeah, people are busy, so I get it, and I've trained myself to understand that. What hurts me the most is when some people establish a relationship and we connect (I do not spam them), but after some time they ghost me.
I think it's really unfair to do that. If you are busy, an explanation would really make a difference, but some people are just not into that. This is why Twitter, and all the social media platforms where you can choose not to show whether you have read a message, have made me happy.
Also, seniors in the industry (basically anyone who is not a beginner; they could also be juniors) should try to understand that we are beginners, and if you think we're not communicating the right way you should correct us rather than ghost us.
Beginners also have a role to play. Many of us have not been the best we could have been and that's why a lot of this is happening. Please as a beginner there are tendencies to do things wrongly. Please try not to. Do your research, have a purpose before connecting!
Sincerely speaking, thinking about these things is really depressing. I give credit and appreciate all those who take time out of their busy schedule to assist others. They are real people and if you do otherwise please change today.
I hope we have a better tomorrow, starting with this.
| pidoxy |
351,927 | Fully encapsulating vulkan and win32 in mruby | After about a week I finally finished pushing all of my original prototype work into the mruby VM. Pr... | 6,645 | 2020-06-10T01:02:43 | https://dev.to/roryo/fully-encapsulating-vulkan-and-win32-in-mruby-51ag | ruby, cpp | After about a week I finally finished pushing all of my original prototype work into the mruby VM. Previously the program was a C program which called into the mruby VM once per frame. Now it's the opposite, the C program does just enough to set up the VM state and then hand over control into the VM. For example, the `main` function originally looked like
```c++
int main() {
ApplicationState *state = (ApplicationState *)VirtualAlloc(0, sizeof(ApplicationState), MEM_COMMIT, PAGE_READWRITE);
state->ruby_state = mrb_open();
create_GUI_class(state->ruby_state);
load_ruby_files(state);
mrb_sym world_sym = mrb_intern_cstr(state->ruby_state, "World");
state->world_const = mrb_const_get(state->ruby_state,
mrb_obj_value(state->ruby_state->object_class),
world_sym);
start_gui(state);
loop(state);
vkDeviceWaitIdle(state->vulkan_state.device);
ImGui_ImplVulkan_Shutdown();
ImGui_ImplWin32_Shutdown();
ImGui::DestroyContext();
CleanupVulkanWindow(state);
CleanupVulkan(&state->vulkan_state);
mrb_close(state->ruby_state);
return 0;
}
```
The `loop` function was lengthy. Abbreviated, it looked like
```c++
void loop(ApplicationState *state) {
MSG msg = {};
while (msg.message != WM_QUIT) {
ImGui_ImplVulkan_NewFrame();
ImGui_ImplWin32_NewFrame();
ImGui::NewFrame();
int ai = mrb_gc_arena_save(state->ruby_state);
mrb_funcall(state->ruby_state, state->world_const, "render", 0);
mrb_gc_arena_restore(state->ruby_state, ai);
ImGui::Render();
FrameRender(state);
FramePresent(state);
}
}
```
This makes it clearer what I meant. The main program loop is C, with the program state in an `ApplicationState` struct. It performed all of its work in C, then yielded to the VM by calling the Ruby code `World.render`, [which I covered before](https://dev.to/roryo/ruby-gui-progress-leaking-worlds-naming-is-hard-4j1k). Control comes back to C, where it finishes rendering the frame. This did work, and I proved to myself that the idea I have in mind is possible. But the switching of control back and forth between C and the VM caused stability issues; the wrong Ruby incantation would bring the program crashing down.
There is also a secondary issue concerning me. I noticed that every frame the object count increased, causing a GC every few seconds.

This bothered me. From [other experiences](https://github.com/mruby/mruby/blob/master/doc/guides/gc-arena-howto.md) I understood calling back and forth between C and the VM could generate `Proc` objects. I was hoping that eliminating frequent calls to the VM would reduce or eliminate the amount of object garbage generated.
Now the `main` program looks like this
```c++
int main() {
mrb_state *state = mrb_open();
create_GUI_class(state);
create_VulkanState_class(state);
create_W32Window_class(state);
create_Imgui_Impl_class(state);
// we could do this within the VM now
load_ruby_files(state);
mrb_value World = mrb_obj_value(mrb_module_get(state, "World"));
mrb_funcall(state, World, "start", 0);
mrb_close(state);
return 0;
}
```
The program does just enough to set up the VM and then passes control over to it. It registers four classes from C for interfacing with low-level libraries like Vulkan and Win32.
The main entry point for the VM is in `World.start`
```ruby
module World
def self.start
@native_window = W32Window.new 'D E A R G', width: 1920, height: 1080
# TODO see comments in VulkanState.cpp, need support for setting width/height
# different from the native window eventually
# shoutouts to wayland being massively different than every other
# window system for no real good reason
@vulkan_state = VulkanState.new @native_window
ImguiImpl.startup @vulkan_state, @native_window
while !@is_finished
@native_window.process_window_messages
ImguiImpl.new_frame
process_windows
ImguiImpl.render @vulkan_state
end
rescue => ex
puts "woah #{ex}"
raise ex
ensure
ImguiImpl.shutdown
end
end
```
This should now make it completely clear what I mean when I say the program is now a Ruby program and not a C program. The main program loop runs Ruby methods. Some of the method implementations are in C. For example, the `W32Window` looks somewhat like the [W32Window]() class I used as a research project. The `VulkanState` class is 500 lines of C performing all the Vulkan initialization ceremony. There's no need to cover that here, and I'm writing a book on Vulkan for that anyway! As an aside, while going through the Vulkan implementation I discovered I was using chapters of my own book as a reference over anything else I found. A good confidence boost that my authoring project is worthwhile.
We create module instance variables because native windows and Vulkan favor creating instances of data. We also place them in the `World` module as module instance variables so they aren't ever accidentally garbage collected. OpenGL has an internal hidden global state and wouldn't lend itself to this pattern. The same for imgui, it has an internal hidden state we interface with through C functions. That's why `ImguiImpl` is a global module with static methods.
Speaking of garbage collection, I discovered an interesting and useful property of the mruby VM. When calling `mrb_close(state)`, terminating the VM, the VM actually garbage collects all the remaining objects left in the VM! This gives us a chance to clean up low-level resources, which I use for deallocating Vulkan objects, similar to the old C++ RAII pattern. We can do this by declaring a custom deallocation function when registering the type with the VM, in the slot where we would usually pass the standard `mrb_free`:
```c++
static const mrb_data_type VulkanState_type = { "VulkanState", VulkanState_delete };
```
Then we have an opportunity for destroying Vulkan objects correctly
```c++
void
VulkanState_delete(mrb_state* state, void* thing) {
VulkanState* st = reinterpret_cast<VulkanState *>(thing);
vkDeviceWaitIdle(st->device);
// lots of vulkan yadda
mrb_free(state, st);
}
```
Running it we have exactly the same result as before the move into the VM

Exactly the same, including the issue where it generates significant garbage every frame. That's a result I didn't expect since most of the execution happens within imgui and Vulkan C code. There's something else going on inside the VM I don't understand yet.
One other interesting note. Unlocking the frame rate and letting the system run as fast as possible results in a couple thousand FPS. Frame execution times are between 200 and 1000 microseconds, or 0.2 to 1.0 ms. This is without any optimizations, with garbage collection happening every few frames, and a debug binary. It encourages me that the speed of the VM won't become a hindrance later down the line. It also sticks in my mind that in the far future I'll need some threading system separating the VM execution from the frame limit.
Next up I want to understand what all the object allocations are. The `ObjectSpace` and `GC` Ruby modules have some methods for interrogating the state of the VM's memory and the statistics within. However, all of these methods perform a full GC on each call. What I want are statistics on the state of the VM at a moment in time, without any GC performed. That way I can perform historical analysis on the volume of objects generated and potentially where they came from. Since the entire project is now within the VM, building anything is a great deal faster and safer than previously. Interesting how in a typical development cycle the first tools you build on a new tool are tools to help you understand the tools you just built.
To build such tools I must have an understanding of how the mruby VM arranges and performs its memory bookkeeping. That will be my next treatise: a full explanation of the mruby memory management system. | roryo |
344,527 | "The Lean Startup" 10 Years On: Have We Failed? | I was going to write a more traditional book review for The Lean Startup (2011) by Eric Ries, which I... | 0 | 2020-05-26T21:18:33 | https://dev.to/awwsmm/the-lean-startup-10-years-on-have-we-failed-1lck | books, healthydebate, startup | I was going to write a more traditional book review for [_The Lean Startup_ (2011) by Eric Ries](https://en.wikipedia.org/wiki/The_Lean_Startup), which I just finished [listening to](https://www.audible.co.uk/pd/The-Lean-Startup-Audiobook/B005LXUMPO) today, until I got to the epilogue. Let me give you a bit of background, and then I'll explain why I'd like to open this up to a broader discussion.
---
_The Lean Startup_ promotes ideas which Ries has been developing since at least his time at [IMVU](https://en.wikipedia.org/wiki/IMVU) around how startups should be nurtured and managed. Ries defines a startup as
> _"...a human institution designed to deliver a new product or service under conditions of extreme uncertainty."_
And, throughout the book, applies this label to a variety of organisations, including stereotypical tech startups, but also government agencies and manufacturing. An institution doesn't need to be creating the next hot social network or [IoT](https://en.wikipedia.org/wiki/Internet_of_things) gadget to fall under this definition. Any group of people that are trying to break new ground and deliver something new can learn from this book's advice.
Ries argues that startups should seek to achieve _validated learning_, wherein changes to a product or service are dictated by indirect customer feedback, and not by hunches or predefined plans. As Ries stresses throughout this work:
> _"...planning and forecasting are only accurate when based on a long, stable operating history and a relatively static environment. Startups have neither."_
"Indirect" because consumers often don't really know what they want, or don't know how to express it properly. "Validated" because the approach should be scientific -- develop a hypothesis, carry out experiments, and determine whether your hypothesis was correct or incorrect
> _"If the plan is to see what happens, a team is guaranteed to succeed -- at seeing what happens – but won’t necessarily gain validated learning. If you cannot fail, you cannot learn."_
[A/B testing](https://en.wikipedia.org/wiki/A/B_testing), [MVP](https://en.wikipedia.org/wiki/Minimum_viable_product)s, and the [lean methodology](https://en.wikipedia.org/wiki/Lean_thinking) were revolutionary only a few decades ago, but are now the default mode of working for organisations looking to break new ground. In this sense, _The Lean Startup_ can be seen as a sort of spiritual successor to [_The Mythical Man-Month_](https://dev.to/awwsmm/book-review-the-mythical-man-month-1995-1hpn), whose ideas were also unorthodox at the time, but are now mainstream.
But the end of the book is what really got to me.
In the epilogue, Ries talks about the precursor to the lean methodology movement -- [scientific management](https://en.wikipedia.org/wiki/Scientific_management). In the late 19th and early 20th centuries, Frederick Winslow Taylor, an American mechanical engineer, sought to improve the efficiency of businesses. Dividing businesses up into functional departments, dividing workloads into tasks, streamlining production lines to reduce waste and inefficiency -- all of these are obvious today, but less so in Taylor's time. "Taylor effectively invented what we now consider just 'management'", notes Ries, as well as "the idea that work can be studied and improved through conscious effort."
> _"The revolution that he unleashed has been -- in many ways -- too successful. Whereas Taylor preached science as a way of thinking, many people confused his message with the rigid techniques he advocated. ... Many of these ideas proved extremely harmful, and required the efforts of later theorists and managers to undo."_
Ries goes on to recount an anecdote told to him by someone who had attended one of his recent conference talks. This person took Ries' advice to heart and promoted validated learning, [the five whys](https://en.wikipedia.org/wiki/Five_whys), and other aspects of the lean startup within his business. As a result, he gained a reputation as a brilliant engineer within his company. But his superiors didn't actually learn to follow lean methodology, despite his proselytising -- they simply thought they needed to improve their hiring process to find more ["10X engineers"](https://www.7pace.com/blog/10x-engineers), like him.
I fear that this story is a microcosm of lean methodology as a whole, as it is interpreted and applied today.
Just as those in Frederick Taylor's time couldn't see the forest for the trees -- rigidly applying the techniques of scientific management without fully understanding or appreciating their motivation, or significance -- some startups today apply "lean methodology" [without really understanding what that means](https://medium.com/@ahlofan/running-scrum-or-kanban-doesnt-mean-you-are-doing-agile-f13ce43b12f0).
I sometimes get the impression that [Gantt charts](https://en.wikipedia.org/wiki/Gantt_chart), [Kanban boards](https://en.wikipedia.org/wiki/Kanban_board), and [Jira tickets](https://en.wikipedia.org/wiki/Jira_(software)) -- while still very useful when used correctly -- have simply become a way of signaling "we do lean development", without actually following lean principles.
At their cores, both scientific management and the lean methodology are driven by a scientific approach to understanding and improving the development of products and services. But I get the feeling that, while some companies are doing actual science, others are just filling their laboratories with equipment without ever performing any experiments. | awwsmm |
344,531 | Boolean logic and logical operators | Today's topic is quite basic, but it is very important in programming. Let's start by talking about boolean logi... | 0 | 2020-05-26T23:27:18 | https://dev.to/linivecristine/logica-booleana-e-operadores-logicos-269a | beginners, tutorial, go | **Today's topic is quite basic, but it is very important in programming.**
Let's start by talking about boolean logic.
*Fun fact: the name is a tribute to George Boole, a mathematician and the creator of boolean logic.*
Boolean logic works with ``true`` and ``false``, ``0`` and ``1``, on and off. There are only two opposite values. Nothing new for us; we have worked with boolean variables before.
Well, now let's get into the logic part...
If I say: **Today I'm going to the beach AND to the movies**
I necessarily have to go to both places. If I don't go to the beach, I will have told a lie. If I don't go to the movies, the sentence will also be false. For the sentence to be true, I have to go to both places.
But if I say: **Today I'm going to the beach OR to the movies**
I only need to go to one of the two places for the sentence to be true. It doesn't matter which one; just going to the beach or to the movies is enough.
"AND" and "OR" are logical operators. They are used both in everyday life and in programming; the difference is that they are not represented the same way.
- ``&&`` = "AND"
- ``||`` = "OR"
- ``!`` = "NOT"
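To see all three operators side by side, here is a small, self-contained Go program that prints a truth table (a sketch I'm adding for illustration; it is not one of the original post's examples):

```go
package main

import "fmt"

func main() {
	values := []bool{true, false}
	// Print a truth table for &&, || and !
	fmt.Println("a\tb\ta&&b\ta||b\t!a")
	for _, a := range values {
		for _, b := range values {
			fmt.Printf("%t\t%t\t%t\t%t\t%t\n", a, b, a && b, a || b, !a)
		}
	}
}
```

Running it shows that ``a && b`` is true only on the first row, while ``a || b`` is false only on the last.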

**Hoje eu vou a praia E ao cinema**
```golang
fuiPraia := true
fuiCinema := true
frase := fuiPraia && fuiCinema
fmt.Println("A frase é", frase)
// Output: "A frase é true"
```
The sentence is true, since the logical operator is ``&&`` and both variables are ``true``. An "AND" only returns true when all the elements are true; in other words, a single ``false`` is enough to make the sentence false.
```golang
fuiPraia := false
fuiCinema := true
frase := fuiPraia && fuiCinema
fmt.Println("A frase é", frase)
// Output: "A frase é false"
```

The ``not`` is the negation of something. If a variable is ``true``, with ``not`` it becomes ``false``. It converts everything to its opposite.
```golang
fuiPraia := false
fuiCinema := true
frase := !fuiPraia && fuiCinema // negating the fuiPraia variable
fmt.Println("A frase é", frase)
// Output: "A frase é true"
```
The beach variable's value is ``false``, but when we negate it, it stops being ``false`` and becomes ``true``. And as we've already seen, ``true && true = true``
```golang
fuiPraia := false
fuiCinema := true
frase := !fuiPraia && fuiCinema // negating the fuiPraia variable
fmt.Println("A frase é", !frase) // negating the sentence
// Output: "A frase é false"
```
The sentence's value was ``true``, but since we are negating it, it becomes ``false``.
Now let's test ``&&`` and ``!`` together with an ``if``. Remember the example from the last post?
{% link https://dev.to/linivecristine/if-e-operadores-relacionais-em-golang-21c4 %}
```golang
fizerSol := false
tenhoDinheiro := true
diaDaSemana := false
if fizerSol {
fmt.Println("Vou a praia")
} else if tenhoDinheiro {
if diaDaSemana {
fmt.Println("Vou ao cinema")
} else {
fmt.Println("Vou a festa")
}
} else {
fmt.Println("Ficarei em casa")
}
```
It can become much more readable using logical operators.
```golang
fizerSol := false
tenhoDinheiro := true
diaDaSemana := false
if fizerSol && tenhoDinheiro && !diaDaSemana {
fmt.Println("Vou a praia")
} else if tenhoDinheiro && diaDaSemana {
fmt.Println("Vou ao cinema")
} else if tenhoDinheiro && !diaDaSemana{
fmt.Println("Vou a festa")
}else {
fmt.Println("Ficarei em casa")
}
/*
Output: Vou a festa
*/
```
Let's analyze it piece by piece...
```golang
fizerSol := false
tenhoDinheiro := true
diaDaSemana := false
/* If it's sunny, and I have money and it's not a weekday, I'll go to the beach */
if fizerSol && tenhoDinheiro && !diaDaSemana {
fmt.Println("Vou a praia")
}
```
Unfortunately, it's not sunny: ``fizerSol := false``. We already know that a single ``false`` is enough to make an ``&&`` false, so the whole condition of the first ``if`` will be false.
```golang
fizerSol := false
tenhoDinheiro := true
diaDaSemana := false
/* If I have money and it's a weekday, I'll go to the movies */
...else if tenhoDinheiro && diaDaSemana {
fmt.Println("Vou ao cinema")
}
```
I have money 🙌🏽 (``tenhoDinheiro := true``), but it's not a weekday (``diaDaSemana := false``). A single false is enough to make the condition false, so I'm not going to the movies.
```golang
fizerSol := false
tenhoDinheiro := true
diaDaSemana := false
/* If I have money and it's NOT a weekday, I'll go to the party */
...else if tenhoDinheiro && !diaDaSemana{
fmt.Println("Vou a festa")
}
```
Both values are true: I have money and it's not a weekday, so the condition is true and I'm going to the party 💁🏽♂️.
To finish, let's meet the "OR".

**Today I'm going to the beach OR to the movies**
```golang
fuiPraia := false
fuiCinema := true
frase := fuiPraia || fuiCinema
fmt.Println("A frase é", frase)
// Output: "A frase é true"
```
Unlike the "AND", which needs two truths, with the "OR" one truth is enough. I only need to go to one of the two places and the sentence will be true.
```golang
fuiPraia := false
fuiCinema := false
frase := !fuiPraia || fuiCinema // negating the fuiPraia variable
fmt.Println("A frase é", frase)
// Output: "A frase é true"
```
```golang
fizerSol := false
tenhoDinheiro := true
diaDaSemana := false
if fizerSol || tenhoDinheiro || !diaDaSemana {
fmt.Println("Vou a praia")
}
if tenhoDinheiro || diaDaSemana {
fmt.Println("Vou ao cinema")
}
if tenhoDinheiro || !diaDaSemana{
fmt.Println("Vou a festa")
}
/*
Output:
Vou a praia
Vou ao cinema
Vou a festa
*/
```
All the cases were true using ``||``, because each condition had at least one truth, and one truth is enough.
```golang
fizerSol := false
tenhoDinheiro := true
diaDaSemana := false
/* If it's sunny OR I have money OR it's not a weekday, I'll go to the beach.
It's not sunny, but I have money and it's not a weekday */
// false true true
if fizerSol || tenhoDinheiro || !diaDaSemana {
fmt.Println("Vou a praia")
}
/* If I have money OR it's a weekday, I'll go to the movies.
I have money, but it's not a weekday */
// true false
if tenhoDinheiro || diaDaSemana {
fmt.Println("Vou ao cinema")
}
/* If I have money or it's not a weekday, I'll go to the party.
I have money and it's not a weekday */
// true true
if tenhoDinheiro || !diaDaSemana{
fmt.Println("Vou a festa")
}
```
We can also use all the operators in a single condition:
```golang
fizerSol := false
tenhoDinheiro := true
diaDaSemana := false
noite := true
if (fizerSol && tenhoDinheiro) || !diaDaSemana {
fmt.Println("Vou a praia")
}
if (tenhoDinheiro || diaDaSemana) && (!fizerSol && !noite) {
fmt.Println("Vou ao cinema")
}
if (tenhoDinheiro && !diaDaSemana) && noite {
fmt.Println("Vou a festa")
}
```
Here we mixed all the operators. Can you figure out what the output will be?
The output will be: Vou a praia / Vou a festa.
Let's analyze the code...
```golang
fizerSol := false
tenhoDinheiro := true
diaDaSemana := false
noite := true
/* If it's sunny AND I have money, OR it's not a weekday, I'll go to the beach */
if (fizerSol && tenhoDinheiro) || !diaDaSemana {
fmt.Println("Vou a praia")
}
/* If I have money OR it's a weekday, AND it's not sunny AND it's not nighttime, I'll go to the movies */
if (tenhoDinheiro || diaDaSemana) && (!fizerSol && !noite) {
fmt.Println("Vou ao cinema")
}
/* If I have money AND it's not a weekday AND it's nighttime, I'll go to the party */
if (tenhoDinheiro && !diaDaSemana) && noite {
fmt.Println("Vou a festa")
}
```
As with any mathematical operation, we should start with the parentheses.
```golang
fizerSol := false
tenhoDinheiro := true
diaDaSemana := false
noite := true
// (false && true)
// false || true = true
if (fizerSol && tenhoDinheiro) || !diaDaSemana {
fmt.Println("Vou a praia")
}
// (true || false) (true && false)
// true && false = false
if (tenhoDinheiro || diaDaSemana) && (!fizerSol && !noite) {
fmt.Println("Vou ao cinema")
}
// (true && true)
// true && true = true
if (tenhoDinheiro && !diaDaSemana) && noite {
fmt.Println("Vou a festa")
}
```
I hope you understood boolean logic; it's a topic I find a lot of fun.
If you want to follow along with my studies, [come check it out](https://youtu.be/WiGU_ZB-u0w) ✌🏽
**I hope you are all doing well and, if you can, stay home** 🏡
**See you tomorrow.**

| linivecristine |
344,552 | All about aria-current attribute | aria-current The aria-current attribute is used when an element within collections is visu... | 0 | 2020-05-26T22:56:27 | https://dev.to/manjula_dube/all-about-aria-current-attribute-3gkf | web, security |
---
title: All about aria-current attribute
published: true
description:
tags: #web #security
---
## `aria-current`
The `aria-current` attribute is used when an element within a collection is visually styled to indicate that it is the current item in the set. This can be an active tab in the nav bar that is visually shown as active, or maybe a breadcrumb link that is active.
For some more info on this topic read on [digitala11y](https://www.digitala11y.com/aria-current-state/)
- In short `aria-current` is an attribute defined in the [WAI-ARIA](https://www.w3.org/TR/wai-aria-1.1/#aria-current) specification. This specification extends native HTML, allowing you to change the way an HTML element is "translated" into the accessibility tree.
- It can take multiple values, for example: <i> <b>page, step, location, date, time, false, true </b></i>
##### According to the ARIA 1.1 specification, the `aria-current` attribute can be given one of a predefined set of values:
- page - represents the current page within a set of pages.
- step - represents the current step within a process.
- location - represents the current location within an environment or context.
- date - represents the current date within a collection of dates.
- time - represents the current time within a set of times.
- true - represents the current item within a set.
- false - does not represent the current item within a set.
#### Ok, so the concept goes like this:
Using `aria-current` the right way.
- First we will go through <code>aria-current = page</code>
<i> I am taking an example of my website. Below you see talks section is an active page the user is currently on.</i>

```javascript
<a aria-current="page" class="css-wbxg5e active" href="/talks" style="font-size: 1em;"><i>Talks</i></a>
```
The active section `Talks` indicates the current page in the main navigation. While visible to sighted users, it also uses `aria-current="page"` to convey the information to screen reader users.
- `aria-current="date"` and `aria-current="time"` are very similar to each other. They can be used when implementing a date picker: when we display a list of dates and one of them is today's date, we should use `aria-current="date"` to mark the current date for screen reader users.
- `aria-current="step"`
If we need to indicate the current step within a step indicator of a multi-step process (e.g. a multi-step checkout process), `aria-current="step"` should be used to indicate the current step to screen reader users.
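As an illustration (this markup is my own sketch, not from the original article), a three-step checkout indicator might mark the current step like this:

```html
<!-- Hypothetical 3-step checkout flow; the user is on step 2 -->
<ol>
  <li><a href="/cart">Cart</a></li>
  <li><a href="/shipping" aria-current="step">Shipping</a></li>
  <li><a href="/payment">Payment</a></li>
</ol>
```

A screen reader can then announce something like "Shipping, current step", while sighted users rely on the visual styling.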
Some resources to check out if you are interested in learning more about <code>aria-current</code>: [a11ysupport.io](https://a11ysupport.io/tests/tech__aria__aria_current), [tink.uk](https://tink.uk/using-the-aria-current-attribute/)
If you want to know more about ARIA attributes checkout [MDN Docs](https://developer.mozilla.org/en-US/docs/Web/Accessibility/ARIA)
This article was originally published on https://www.manjuladube.dev/. Head over there if you like this post and want to read others like it. | manjula_dube |
344,768 | Amazon UI created with React App | This is my first project, which I created today. Go and check it out at the link below... | 0 | 2020-05-27T10:30:54 | https://dev.to/rjdon11/amazon-ui-created-with-react-app-5apa | This is my first project, which I created today. Go and check it out at the link below:
https://rjdon11.github.io/amazon-react/ | rjdon11 | |
349,514 | Remote experienced professional developer teams | Services: Strategy & Consulting We offer process consulting, strategic planning, finance, legal,... | 0 | 2020-06-04T16:59:44 | https://scaleblue.com | Services:
Strategy & Consulting
We offer process consulting, strategic planning, finance, legal, analysis and documentation as a service which can enable and facilitate your business.
Dedicated Team
Your Team, Your Tools and Your Culture. Build a remote development team, remote distributed professionals and get work done directly using the tools and methods you know.
Startups and Ideas
Thinking about building an effective product for the world, made to last with long-term commitment, but unsure of where to start? You are already here. We can help you work everything out, from ideas to business.
We Promise. We Deliver.
Have an idea? Work with us and see how it goes. We promise to go the extra mile for every project that we take onboard.
If you have still have questions, Contact Us
https://scaleblue.com/contact
| scaleblueofficial | |
350,021 | "Drawing" with box-shadow | I'd like to share one "trick" or technique that I really like, it's one of those things that's good to... | 0 | 2020-06-05T22:35:49 | https://dev.to/dan_1s/drawing-with-box-shadow-12kh | css, webdev | I'd like to share one "trick" or technique that I really like; it's one of those things that's good to have in your tool belt, and I've found myself using it pretty often over the years.
## The box-shadow property
When you think of box shadow, it's easy to only think of it as shadows on boxes, but it can do more. You can actually use it to draw in a sense.
{% codesandbox wandering-firefly-fjsgv %}
<small>*Here we have two boxes, one is obviously using a box shadow, but the other one is as well.*</small>
<hr>
You can "draw" on the outside:
```css
.el {
box-shadow:
0 0 0 3px red,
0 0 0 8px green,
0 0 0 10px blue;
}
```
or "draw" on the inside:
```css
.el {
box-shadow:
inset 0 0 0 3px red,
inset 0 0 0 8px green,
inset 0 0 0 10px blue;
}
```
You stack multiple box shadows to draw rings that overlap. The above results in a "border" of `3px red`, `5px green`, and `2px blue`, and therein lies the power.
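To make the arithmetic concrete, here is the same idea annotated (the class name is arbitrary; this block is my own sketch, not from the original post):

```css
/* Each spread radius is measured from the element's edge, so the
   visible width of a ring is its spread minus the previous spread. */
.rings {
  box-shadow:
    0 0 0 3px red,    /* ring 1: 3px wide (3 - 0)  */
    0 0 0 8px green,  /* ring 2: 5px wide (8 - 3)  */
    0 0 0 10px blue;  /* ring 3: 2px wide (10 - 8) */
}
```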
<hr>
You can get really creative with this technique.
{% codesandbox quiet-fire-ogksm %}
Like the tablet above, it has two HTML elements with psuedo `::before` and `::after` elements. Most of the layers are box-shadows. 🙂 | dan_1s |
350,150 | Purging TailwindCSS without ejecting Create-React-App | What's Purging: Purging is a term for eliminating unused css code. It decreases css file size in prod... | 0 | 2020-06-06T08:33:32 | https://dev.to/jmhungdev/purging-tailwindcss-without-ejecting-create-react-app-4mef | tailwindcss, react | What's Purging: Purging is a term for eliminating unused CSS code. It decreases CSS file size in production, helping the browser load files faster. You may have heard the term <b>tree shaking</b>, normally used in the context of eliminating unused libraries to decrease JS bundle size. Purging is the same concept.
There's an [official doc](https://tailwindcss.com/docs/controlling-file-size/) on configuring the purge feature, but it doesn't cover the setup in a create-react-app environment. So if you want a step-by-step guide on using the purge feature without ejecting your create-react-app, keep reading:
TailwindCSS 1.4.0+ has PurgeCSS built in natively, so users can now configure tailwind.config.js directly to eliminate unused CSS code in production.
* First, you need to create a <code>tailwind.config.js</code> file.
* Second, add all the js or jsx files that contain TailwindCSS classes under the content property.
* You also have the option to add a "whitelist" for a list of class names to <b>not be purged</b>.
```js
module.exports = {
purge: {
content: [
'./src/*.js',
'./src/*.jsx'
],
options: {
whitelist: ['bg-color-500']
}
},
theme: {
extend: {},
},
variants: {},
plugins: [],
}
```
Once the config file is done, we need to run the build.
The only way to trigger purging is by setting <code>NODE_ENV=production</code>.
You can either <code>export NODE_ENV=production</code> in the console first, or you can add it as a prefix to the run script in <code>package.json</code>:
```js
"scripts": {
"build:tailwind": "tailwindcss build src/index.css -o src/tailwind.css",
"prestart":"npm run build:tailwind",
"start": "react-scripts start",
"build": "react-scripts build",
"prod:build": "NODE_ENV=production npm run build:tailwind && react-scripts build",
"test": "react-scripts test",
"eject": "react-scripts eject"
},
```
When you run <code>npm run prod:build</code>:
1. first it will set production as the NODE_ENV variable
2. then run build:tailwind, which will trigger the purge feature
3. and finally build your React app with the purged version of tailwind.css
Make sure that in your <code>index.js</code> you are referencing the compiled Tailwind CSS file instead of index.css.
```jsx
import React from 'react';
import ReactDOM from 'react-dom';
import App from './App';
import './tailwind.css';
import * as serviceWorker from './serviceWorker';
```
For more questions DM me on Twitter @jmhungdev
| jmhungdev |
351,198 | Using Serverless at scale to build a Party Parrot GIFs generator | Have you ever loved something so much that you might never get enough of it? Party Parrots are someth... | 0 | 2020-06-12T18:14:52 | https://dev.to/vipulgupta2048/using-serverless-at-scale-to-build-a-party-parrot-gifs-generator-32be | serverless, azure, beginners, azuredevstories | Have you ever loved something so much that you might never get enough of it? Party Parrots are something like that for me. There hasn't been a single time when these quirky parrots weren't successful in making me smile.

Since the world slipped into a pause, it made more and more sense to build a [Party Parrot GIF generator](https://github.com/vipulgupta2048/partystarter/) called **PartyStarter** out of my love for the parrots. As crazy as the idea sounds, I wanted to go a step further and fit the use case into a serverless deployment, hence this blog taking you through my build process and showing how you can do stuff with serverless. The GIFs looked absolutely gorgeous; I tweeted some out.
{% twitter 1271422247182336001 %}
Cutting straight to the chase, I planned to keep my build as straightforward as possible, focusing just on the application logic rather than worrying about deployment, scaling, or even setting up the server, since serverless takes care of it all.
[Azure Functions](https://docs.microsoft.com/en-us/azure/azure-functions/functions-overview) are nothing but small fragments of code that can be run without worrying about the underlying infrastructure, scaling, security, and 100 other things that come up when deploying code to production. Hence, this containerized style of deployment with Azure Functions seemed like the right fit for taking PartyStarter serverless.

---
## Understanding how PartyStarter works!
Through the help of [John Hobbs](https://github.com/jmhobbs/cultofthepartyparrot.com/issues/483), the maintainer of the "Cult of the Party Parrot" website, I was able to find the accurate colors of the OG Party Parrot GIF. With that piece of information, a little Python, and a cup of coffee, I sat down to build v1 of PartyStarter.
The first hurdle I had to jump through was transforming the right pixels in an image, the ones that contain a color, without altering the ones in the background. Hence, a necessary requirement of PartyStarter is a **transparent image**. This also makes sense because **GIFs look considerably better with a transparent background** & work better with my use case. Let's try to understand this better in Python (Azure Functions is available for several popular languages & stacks).
## Crash course in RGBA color coding
So, each image you see is made of pixels. Lots and lots of pixels. If we have a `200x200` image, that means we will be parsing through 40k data points pixel by pixel, changing the color of said pixels, and saving the image. It's a resource-intensive task, which is **another great reason for going with serverless**. No matter how big a picture a user uploads, our server can handle it like a boss.
In programming terms, each image is basically a 2-dimensional array that can be traversed with an `X` and `Y` coordinate telling us which pixel is at which position in the image. Let's open a random image and load it using the `Image.load()` method of the [Python Pillow library](https://pillow.readthedocs.io/en/stable/).
```
>>> im = Image.open('~/partystarter/input/twitter.png')
>>> im
<PIL.PngImagePlugin.PngImageFile image mode=RGBA size=400x400 at 0x7F86416BFCF8>
>>> im.load()
<PixelAccess at 0x7f864285b170>
```
The next requirement is that images need to be in RGBA mode. In later iterations, we can work towards converting other modes into RGBA while maintaining transparency. We check what color the pixel at [223, 123] in the image is and find it to be (29, 161, 242, 255) according to the RGBA color model (Red, Green, Blue, Alpha), where Alpha is transparency. These are the **pixels we want to alter into a different color**.
```
>>> pixels = im.load()
>>> pixels[223,123]
(29, 161, 242, 255)
```
Similarly, a **pixel located in a transparent area** gives the output (0, 0, 0, 0); those are the ones we don't want to alter. And that's about it. Simple, right?
```
>>> pixels[0,0]
(0, 0, 0, 0)
```
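Before looking at the real `party_changer()` code further down, here is a minimal sketch of the traversal idea using a plain 2x2 "image" of RGBA tuples instead of a Pillow object (the replacement color here is made up for the example):

```python
# Tiny 2x2 "image": each pixel is an (R, G, B, A) tuple.
# Alpha == 0 marks a fully transparent pixel we must leave untouched.
TWITTER_BLUE = (29, 161, 242, 255)
NEW_COLOR = (255, 111, 105, 255)  # hypothetical frame color

image = [
    [(0, 0, 0, 0), TWITTER_BLUE],
    [TWITTER_BLUE, (0, 0, 0, 0)],
]

def recolor(pixels, new_color):
    """Return a copy where every non-transparent pixel gets new_color."""
    return [[new_color if px[3] != 0 else px for px in row] for row in pixels]

frame = recolor(image, NEW_COLOR)
print(frame[0][0])  # (0, 0, 0, 0) -- transparent corner is untouched
print(frame[0][1])  # (255, 111, 105, 255) -- colored pixel is swapped
```

Generating one frame per color, 18 times, is exactly this loop repeated with 18 different `new_color` values.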
## Concept is clear, show me the code!
With Python, I traverse through **only the colored pixels** of the original image and change their colors to whatever we want. Then I repeat this a total of 18 times to create 18 different frames. And voila! From those, I generate a GIF that runs at 18 FPS. Here's what the color-altering `party_changer()` method looks like.
{% gist https://gist.github.com/vipulgupta2048/b1c877bcf42b9853f2ec2b9cc6b16f23 %}
While the party is being changed, there needs to be someone willing to step in and save the party as well. Hence, here's how I save the party GIF with the `party_saver()` method.
{% gist https://gist.github.com/vipulgupta2048/34a80ca526b13e9c0bf988441ae063bc %}
The FPS, or frames per second, matters a lot when creating great GIFs. Low FPS means your GIF will stutter and won't look smooth, while a very high FPS would mean folks seeing it might have a chance to contract a seizure. Let me demonstrate.

As you can see the difference: the GIF on the left shows 18 frames at 150 milliseconds (ms), while the one on the right shows just 6 frames in 180 ms. The GIF on the right stays stuck on one color for some ms to keep in sync with the other GIF in the collage. Here's how the entire code looks when it comes together.
{% gist https://gist.github.com/vipulgupta2048/3d478b8298bd23a47200e0ae56d1dbe6 %}
---
# Houston, we ready for deployment!
Up to this point, I have created my application; the code works on my machine, and it looks great, to be honest. PartyStarter is ready to party and get deployed. To deploy the Azure Function, I used VSCode to build a new Azure Function locally for testing and later tweaked my code to deploy it over Azure. If a practical approach is not your thing, why not read a story that I wrote about serverless in [The what, why, and how of Serverless the easiest way possible](https://dev.to/vipulgupta2048/simple-english-the-what-why-and-how-of-serverless-the-easiest-way-possible-403i).
## Understanding the Why Serverless? for PartyStarter!
- If users grow, serverless helps to auto-scale dynamically as per traffic using only what's needed leading to higher efficiency and greater savings on your subscription.
- If users upload HD photos and PartyStarter needs more resources to process them, serverless manages performance like a pro without supervision
- and other 100 things that I won't sell you on rather you can read [here](https://docs.microsoft.com/en-us/azure/azure-functions/functions-overview#features). Let's build Devs!

## Refactoring the code to work with Azure Functions
Like the functions we write in Python or any other language, Azure Functions need to be called somewhere to run when needed. In their case, they are "called" by triggers that lead to event-driven execution. These triggers can execute our code on events as simple as making an HTTP call to an endpoint, processing files when they get uploaded to Blob storage, or even timer triggers that run on predefined schedules, much like cron jobs. You can check them all out in the [Azure Docs](https://docs.microsoft.com/en-us/azure/azure-functions/functions-overview#what-can-i-do-with-functions).
What I envisioned for PartyStarter as a serverless use case was for users to upload images to containers, which triggers a Function that runs my code to create a GIF of that image. Therefore, I went with the BlobTrigger for my use case, which lets us process new and modified Azure Storage blobs. Blobs are nothing but files that we can upload to containers hosted on an Azure Storage account. Check out the diagram below to get a better idea. We will be managing our blobs with the [Python SDK for Blob Storage](https://docs.microsoft.com/en-in/azure/storage/blobs/storage-quickstart-blobs-python).

Source: [Microsoft Azure Documentation](https://docs.microsoft.com/en-in/azure/storage/blobs/storage-quickstart-blobs-python)
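For reference, a BlobTrigger binding for this kind of flow might look something like the function.json sketch below (the container name, file pattern, and connection setting are my assumptions for illustration, not the project's actual config):

```json
{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "name": "inputblob",
      "type": "blobTrigger",
      "direction": "in",
      "path": "uploads/{name}.png",
      "connection": "AzureWebJobsStorage"
    }
  ]
}
```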
Much like that awesome tutorial, I always tended to find 15-20 examples, guides, and tutorials detailing every use case and feature right in the [Azure docs](https://docs.microsoft.com/en-us/azure/azure-functions) whenever I was looking for an idea to build something. That way I didn't have to consult any other sources for [getting started with Python Azure Functions](https://docs.microsoft.com/en-us/azure/developer/python/tutorial-vs-code-serverless-python-01). The best part, and NECROMANCY ALERT: everything just ... works right from the get-go. The VSCode integration sets up the entire project locally with the right stack, virtual environment, settings, authorization keys, and .gitignore, giving control back to the user to build, customize, and test. It gives you a ready-to-test function template and code in mere seconds. I went wrong with a couple of steps, but once I got the hang of it, I was flying from screen to screen like a ninja. What a rush!
Here are the details of setting up a new local Azure Functions project, plus the configuration for the Azure Blob Storage trigger. Once you have tested them locally, be sure to connect them to a subscription so they run without interruptions.

Uploading the Azure Function that I just created and testing it out through the Code+Test playground on Azure was a breeze and exactly what I hoped for. After making a few tweaks to my PartyStarter code, it was ready for the party!

## PartyStarter goes Serverless: The road ahead
With PartyStarter now a serverless app, I can use it however I like. I could build a web or native app around it, integrate it with a command-line interface, or use it to generate a bunch of emoji packs for folks to use in their organizations. The possibilities for PartyStarter feel truly endless. The best part: the code is open source on [GitHub](https://github.com/vipulgupta2048/partystarter), so go ahead and take a crack at it.

Looks Pretty Good, right?!
Well, that's about it. Starting this many parties has been tiring, and I still have a lot of bugs that come up in the code to fix. I hope this build blog gives you some motivation to build your next idea, project, or work stuff with serverless!
| vipulgupta2048 |
351,857 | container less cloud computing - wascc | web assembly on the server without a browser, with hot swap capabilities | 0 | 2020-06-09T08:11:47 | https://dev.to/5422m4n/container-less-cloud-computing-wascc-27d3 | cloud, webassembly, rust, polyglot | ---
title: container less cloud computing - wascc
published: true
description: web assembly on the server without a browser, with hot swap capabilities
tags: cloud, webassembly, rust, polyglot
---
If you are curious about a container-less, secure, server-side future, you need to watch this talk.
At the 19-minute mark it gets real: Kevin Hoffman demonstrates how hot swapping of modules is possible, which means deployments with zero downtime, not only of code but also of attached resources like key-value stores.
This is super exciting! What do you think about this possible future?
{% youtube vqBtoPJoQOE %}
[Read more about waSCC.](https://wascc.dev/) | 5422m4n |
351,863 | Protractor Tutorial: Handle Mouse Actions & Keyboard Events | At times, while performing automated browser testing, you often have to deal with elements, which... | 0 | 2020-06-09T08:18:04 | https://www.lambdatest.com/blog/protractor-tutorial-handle-mouse-actions-keyboard-events/ | protractor, javascript, selenium, automation | At times, while performing [automated browser testing](https://www.lambdatest.com/?utm_source=devto&utm_medium=organic&utm_campaign=jun30_rd&utm_term=rd&utm_content=webpage), you often have to deal with elements that reveal themselves only after you hover over a menu or click on them. In such cases, you can use the action class for keyboard and mouse actions in Selenium Protractor. With the action class, you can automate mouse activities such as mouse clicking, mouse hovering, etc.
The Selenium Protractor framework has in-built capabilities to manage various forms of keyboard and mouse events. This handling of keyboard and mouse events is achieved using the Advanced User Interactions API, a web-based API for emulating complex movements performed by the user.
In this Protractor tutorial, I’ll take a look at various aspects of how to handle mouse and keyboard actions in the Selenium Protractor framework. Along with a few examples of the frequently used keyboard and mouse actions in Selenium Protractor. I’ve already covered how to [run tests on Selenium with Protractor](https://www.lambdatest.com/blog/automated-cross-browser-testing-with-protractor-selenium/?utm_source=devto&utm_medium=organic&utm_campaign=jun30_rd&utm_term=rd&utm_content=blog) and what are the requirements for it in a previous article.
## Mouse Actions in Selenium Protractor
Mouse actions simulate mouse activities such as hovering, drag and drop, and clicking multiple elements. They can be easily automated in Selenium Protractor with predefined methods for mouse movement, clicking, and more.
The following are some of the mouse actions in Selenium Protractor to automate events while performing Selenium test automation :
* mouseMove () : — Performs the mouse movement on the web page.
* dragAndDrop ( source , target ) : — This performs the click of the mouse at the present location i.e. the source and moves the cursor to the desired location i.e. the target without releasing the mouse. Therefore, it moves the element from source to target.
* click () : — Performs the mouse click event on the web element.
* mouseUp () : — Performs a mouse up event on the web page.
* mouseDown () : — Performs a mouse down event on the web page.
* contextClick () : — This action performs a right click on any target element on the web page.
* clickAndHold () : — This action performs an event of mouse click at the present location without releasing the button.
* dragAndDropBy ( source, xOffset, yOffset ) : — This action performs a click-and-hold mouse event on the web page at the source location, shifts the element by the offset values provided in the arguments, and then releases the mouse. Here xOffset shifts the mouse horizontally and yOffset shifts it vertically.
* moveByOffset ( xOffset, yOffset ) : — This action moves the mouse from its current location, or the starting location i.e. (0,0), to the specified offset. Here, xOffset sets the horizontal offset (a negative value moves the cursor to the left) and yOffset sets the vertical offset (a negative value moves the cursor upwards).
* moveToElement ( toElement ) : — This action moves the mouse to the middle of the web element.
* release () : — This function releases the pressed mouse left button at the current mouse position.
An important thing to note here is that we need to call the perform() method after composing any mouse actions in Selenium Protractor on the webpage. If we don’t call perform() after a mouse action, that action will have no effect on the web page.
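To see why `perform()` matters, here is a small plain-JavaScript mock of the builder pattern behind `browser.actions()` — not Protractor’s real implementation, just a sketch of the queue-then-flush behaviour:

```javascript
// Hypothetical sketch of the builder pattern behind browser.actions() —
// not Protractor's actual source, only an illustration of why perform()
// must be called before any queued action takes effect.
class ActionSequence {
  constructor() {
    this.queue = []; // actions are stored here, not executed
  }
  mouseMove(target) {
    this.queue.push(() => console.log(`move to ${target}`));
    return this; // returning `this` enables chaining
  }
  click(target) {
    this.queue.push(() => console.log(`click ${target}`));
    return this;
  }
  perform() {
    // only now do the queued actions actually run
    this.queue.forEach((action) => action());
    this.queue = [];
  }
}

const actions = new ActionSequence();
actions.mouseMove("#hover-menu").click("#hover-menu");
// nothing has been logged yet — the actions are only queued
actions.perform(); // → logs "move to #hover-menu", then "click #hover-menu"
```

The real API drives the browser rather than `console.log`, but this queue-then-flush shape is exactly why a missing perform() leaves the page untouched.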
## Move and Hover Mouse Actions In Selenium Protractor
While performing [Selenium test automation](https://www.lambdatest.com/selenium-automation?utm_source=devto&utm_medium=organic&utm_campaign=jun30_rd&utm_term=rd&utm_content=webpage), you’d often come across test cases where you’d have to move the mouse cursor and hover over an item in the browser. This can be easily done with the mouseMove() method of mouse actions in the Selenium Protractor framework library. This helps us to get access to elements on the HTML page that get exposed only after you click on them, like the menu or the sub-items.
In the following example in this Protractor tutorial, I’ll have a look at the first action object. I’ll move the mouse cursor over the menu item through mouse actions in Selenium Protractor, and then move it to the sub-menu item. After this, I’ll hover over the menu, which can be fetched with the id ‘hover-menu’. This approach is also known as mouseHover().
```javascript
// include all the required modules from Selenium WebDriver and the Protractor framework //
import { browser, element, by, ElementFinder } from 'protractor';

// describing the test for the mouse actions demonstration //
describe('Mouse Action Demonstration in Protractor', function() {
    // disable synchronization for non-Angular websites //
    browser.ignoreSynchronization = true;
    browser.manage().window().maximize();

    // the test case which defines how to handle mouse actions in Selenium Protractor //
    it('Test to handle mouse move and hover operations in Protractor', function() {
        // wait and assign the implicit timeout value of 15 seconds
        browser.manage().timeouts().implicitlyWait(15000);
        browser.get("http://the-internet.herokuapp.com/hovers");
        // mouse hover on a submenu
        browser.actions().mouseMove(element(by.id("hover-menu"))).perform();
    });
});
```
*Did you know? The [Payment Request API](https://www.lambdatest.com/web-technologies/payment-request?utm_source=devto&utm_medium=organic&utm_campaign=jun30_rd&utm_term=rd&utm_content=web_technologies) makes checkout flows easier, faster and consistent on shopping sites using a familiar browser-like interface.*
## Drag and Drop Mouse Actions In Selenium Protractor
The dragAndDrop() action drags the source element to the target element via mouse actions in Selenium Protractor. After this, you can click or perform any other operation as per your requirement. This action accepts two major parameters as input:
* Source: the element we want to drag
* Target: the location where we want to drop it
In the following example for this Protractor tutorial, I’ll show you how to perform the drag and drop mouse actions in Selenium Protractor.
```javascript
// include all the required modules from Selenium WebDriver and the Protractor framework //
import { browser, element, by, ElementFinder } from 'protractor';

// describing the test for the mouse actions demonstration for Selenium test automation //
describe('Mouse Action Demonstration in Protractor', function() {
    // disable synchronization for non-Angular websites //
    browser.ignoreSynchronization = true;
    browser.manage().window().maximize();

    // the test case which defines how to handle mouse actions in Protractor //
    it('Test to handle drag and drop mouse operation in Protractor', function() {
        // wait and assign the implicit timeout value of 15 seconds
        browser.manage().timeouts().implicitlyWait(15000);
        browser.get("http://the-internet.herokuapp.com/drag_and_drop");
        // perform drag and drop
        browser.actions().dragAndDrop(
            element(by.id("drag1")),
            element(by.id("div2"))
        ).perform();
    });
});
```
## Click Mouse Actions In Selenium Protractor
The click() action is one of the most commonly used methods among the mouse events. It performs a click on the given element at its position and then executes certain actions on that element. The location of the element can vary depending on the size of the display on the screen.
In the following example, we execute the click action:
```javascript
// include all the required modules from Selenium WebDriver and the Protractor framework //
import { browser, element, by, ElementFinder } from 'protractor';

// describing the test for the mouse actions in Selenium test automation demonstration //
describe('Mouse Action Demonstration in Protractor', function() {
    // disable synchronization for non-Angular websites //
    browser.ignoreSynchronization = true;
    browser.manage().window().maximize();

    // the test case which defines how to handle mouse actions in Selenium Protractor //
    it('Test to handle click mouse action in Protractor', function() {
        // wait and assign the implicit timeout value of 15 seconds
        browser.manage().timeouts().implicitlyWait(15000);
        browser.get("http://the-internet.herokuapp.com/javascript_alerts");
        // click the alert button
        browser.actions().click(element(by.name("alert"))).perform();
    });
});
```
*Did you know? [Permissions Policy](https://www.lambdatest.com/web-technologies/permissions-policy?utm_source=devto&utm_medium=organic&utm_campaign=jun30_rd&utm_term=rd&utm_content=web_technologies) enables powerful browser features for a given site using the Document Security Policy. New in this release is the ability to check if a document was served over HTTPS.*
## Double Click Mouse Actions In Selenium Protractor
Similar to the click method, the doubleClick() method simulates a double click of the mouse. Generally, when an element is double-clicked, it either activates the particular element or lifts that object from a certain point.
In the following example, we will perform a double-clicking event on the browser.
```javascript
// include all the required modules from Selenium WebDriver and the Protractor framework //
import { browser, element, by, ElementFinder } from 'protractor';

// describing the test for the mouse action demonstration for Selenium test automation //
describe('Mouse Action Demonstration in Protractor', function() {
    // disable synchronization for non-Angular websites //
    browser.ignoreSynchronization = true;
    browser.manage().window().maximize();

    // the test case which defines how to handle mouse actions in Selenium Protractor //
    it('Test to handle double click mouse action in Protractor', function() {
        // wait and assign the implicit timeout value of 15 seconds
        browser.manage().timeouts().implicitlyWait(15000);
        browser.get("http://the-internet.herokuapp.com/javascript_alerts");
        // double click the double click button
        browser.actions().doubleClick(element(by.id("double-click"))).perform();
    });
});
```
## Mouse Up and Mouse Down With Example
Just as we press and release a mouse button to perform an activity, the mouse up and mouse down methods in Protractor press and release the primary mouse button. Which physical button is primary depends on how the primary and secondary mouse buttons are configured in the control panel: a right-handed user typically keeps the left button as primary, while a left-handed user may swap them.
In the following example, mouse down, mouse move, and mouse up events are executed in sequence.
```javascript
// include all the required modules from Selenium WebDriver and the Protractor framework //
import { browser, element, by, ElementFinder } from 'protractor';

// describing the test for the mouse action demonstration //
describe('Mouse Action Demonstration in Protractor', function() {
    // disable synchronization for non-Angular websites //
    browser.ignoreSynchronization = true;
    browser.manage().window().maximize();

    // the test case which defines how to handle mouse actions in Selenium Protractor //
    it('Test to handle mouse up and mouse down event in Protractor', function() {
        // wait and assign the implicit timeout value of 15 seconds
        browser.manage().timeouts().implicitlyWait(15000);
        browser.get("http://the-internet.herokuapp.com/drag_and_drop");
        // press the mouse down on the source, move to the target, then release
        browser.actions().mouseDown(element(by.id("drag1")).getWebElement()).perform();
        browser.actions().mouseMove(element(by.id("div2")).getWebElement()).perform();
        browser.actions().mouseUp().perform();
    });
});
```
*Did you know? By setting the [**Pointer Events**](https://www.lambdatest.com/web-technologies/pointer?utm_source=devto&utm_medium=organic&utm_campaign=jun30_rd&utm_term=rd&utm_content=web_technologies) property to “none”, hover and click events will be handled on the element, instead of any elements that are behind it. Setting this to “none” makes it possible to create drop-down menus by only ensuring that the intended elements are active and overlapping correctly.*
## Keyboard Actions In Selenium Protractor
The following are a few important methods present in the framework that can be used to emulate keyboard actions in the browser with Protractor:
* `keyDown(key)`: Sends a key press without releasing it, so further subsequent actions can presume it is held down. For example — Keys.ALT, Keys.SHIFT, or Keys.CONTROL.
* `keyUp(key)`: Performs a key release for one of the above control keys that was pressed.
* `sendKeys(keysToSend)`: Sends a series of keystrokes to the web element.
Similar to the mouse actions in Selenium Protractor, we need to call the perform() method after composing any keyboard action on the webpage. If we don’t call perform() after a keyboard action, those actions will have no effect on the web page.
## Key Up, Key Down, and Send Keys With Example
The keyboard actions have keyUp and keyDown as the main methods used to trigger modifier keys in Protractor. These approaches are helpful if you want to press standard key combinations such as CTRL+A, SHIFT+A, or CTRL+SHIFT+Delete.
In this example for this Protractor tutorial, I’ll show this functionality by entering the character “P” into the text box of the web page: while the Shift key is held down with keyDown(), a lowercase “p” is passed with the sendKeys() function, and then Shift is released with keyUp(). If you look at the bigger picture, you’ll notice that all of the keyboard actions are being used together.
```javascript
// include all the required modules from Selenium WebDriver and the Protractor framework //
import { browser, element, by, ElementFinder, protractor } from 'protractor';

// describing the test for the keyboard action demonstration for Selenium test automation //
describe('Keyboard Action Demonstration in Protractor', function() {
    // disable synchronization for non-Angular websites //
    browser.ignoreSynchronization = true;
    browser.manage().window().maximize();

    // the test case which defines how to handle keyboard actions in Protractor //
    it('Tests to handle keyboard actions in Protractor', function() {
        // wait and assign the implicit timeout value of 15 seconds
        browser.manage().timeouts().implicitlyWait(15000);
        browser.get("http://the-internet.herokuapp.com/key_presses");
        browser.actions()
            .click(element(by.name("E")))
            .keyDown(protractor.Key.SHIFT)
            .sendKeys("p")
            .keyUp(protractor.Key.SHIFT)
            .perform();
    });
});
```
## Mouse Actions In Selenium Protractor on Cloud Selenium Grid
We can run the same Selenium test automation script for handling mouse behavior on a cloud [Selenium grid](https://www.lambdatest.com/selenium-automation?utm_source=devto&utm_medium=organic&utm_campaign=jun30_rd&utm_term=rd) platform. It gives us the opportunity to run tests across 2000+ real browsers and devices in parallel. You only need to make a few changes in the test script, i.e. create a driver that connects to the LambdaTest hub. Below is the revised script with the requisite modifications.
```javascript
// test_config.js //
// The test_config.js file serves as a configuration file for our test case //

LT_USERNAME = process.env.LT_USERNAME || "irohitgoyal"; // LambdaTest user name
LT_ACCESS_KEY = process.env.LT_ACCESS_KEY || "r9JhziRaOvd5T4KCJ9ac4fPXEVYlOTealBrADuhdkhbiqVGdBg"; // LambdaTest access key

exports.capabilities = {
  'build': 'Automation Selenium Webdriver Test Script', // build name displayed in the test logs
  'name': 'Protractor Selenium Frame Test on Chrome', // name of the test to distinguish amongst test cases
  'platform': 'Windows 10', // name of the operating system
  'browserName': 'chrome', // name of the browser
  'version': '79.0', // browser version to be used
  'console': false, // flag to check whether to capture console logs
  'tunnel': false, // flag to check if it is required to run the localhost through the tunnel
  'visual': false, // flag to check whether to take step-by-step screenshots
  'network': false // flag to check whether to capture network logs
};

// setting the required config parameters //
exports.config = {
  directConnect: true,
  // desired capabilities that are passed as an argument to the web driver instance
  capabilities: {
    'browserName': 'chrome' // name of the browser used for the test
  },
  // flavour of the framework to be used for our test case
  framework: 'jasmine',
  // patterns which are relative to the current working directory when Protractor methods are invoked
  specs: ['test_frame.js'],
  // overriding the default value of the allScriptsTimeout parameter
  allScriptsTimeout: 10000,
  jasmineNodeOpts: {
    // overriding the default value of the defaultTimeoutInterval parameter
    defaultTimeoutInterval: 10000
  },
  onPrepare: function () {
    browser.manage().window().maximize();
    browser.manage().timeouts().implicitlyWait(10000);
  }
};
```
Test Script:

```javascript
// include all the required modules from Selenium WebDriver and the Protractor framework //
import { browser, element, by, ElementFinder } from 'protractor';

var script = require('protractor');
var webdriver = require('selenium-webdriver');

// building the web driver that we will be using on LambdaTest
var buildDriver = function(caps) {
  return new webdriver.Builder()
    .usingServer(
      "http://" +
      LT_USERNAME +
      ":" +
      LT_ACCESS_KEY +
      "@hub.lambdatest.com/wd/hub"
    )
    .withCapabilities(caps)
    .build();
};

// describing the test for the mouse action demonstration //
describe('Mouse Action Demonstration in Protractor', function() {
  // beforeEach builds the driver and triggers before the execution of the test script
  beforeEach(function(done) {
    caps.name = this.currentTest.title;
    driver = buildDriver(caps);
    done();
  });

  // disable synchronization for non-Angular websites //
  browser.ignoreSynchronization = true;
  browser.manage().window().maximize();

  // the test case which defines how to handle mouse actions in Selenium Protractor //
  it('Test to handle mouse up and mouse down event in Protractor', function() {
    // wait and assign the implicit timeout value of 15 seconds
    browser.manage().timeouts().implicitlyWait(15000);
    browser.get("http://the-internet.herokuapp.com/drag_and_drop");
    // press the mouse down on the source, move to the target, then release
    browser.actions().mouseDown(element(by.id("drag1")).getWebElement()).perform();
    browser.actions().mouseMove(element(by.id("div2")).getWebElement()).perform();
    browser.actions().mouseUp().perform();
  });
});
```
As seen above, by including a few lines of code, you can connect to the LambdaTest Platform and execute our regular test script in the cloud. In order to have this configured, you need to create the desired capability matrix.
You can visit the LambdaTest Selenium desired capabilities generator for generating the appropriate configuration; using this, you can determine the environment in which you will conduct the tests. Moreover, you only need to pass your LambdaTest username and access key to the config file, which will securely identify you on the LambdaTest platform.

## All In All
With this comes the end of this Protractor tutorial! To summarize, I’ve explored how you can simulate mouse and keyboard behavior in a browser using various functions in the Selenium Protractor framework. With Selenium Protractor, you also get the flexibility to combine mouse and keyboard actions for automated browser testing. After composing the actions, perform() is called to execute them.
This is the end! Or at least the end of the beginning, we’ve covered quite a few topics on Selenium Protractor now, I’d like you to go ahead and read them. Also, do click on the bell icon for any future notifications on new blogs and tutorials. Do share this article with your friends looking for the answers on handling mouse actions and keyboard events, a retweet or a social is always welcome. Happy Testing!!!😊
| paulharshit |
351,956 | Daily Developer Jokes - Tuesday, Jun 9, 2020 | Check out today's daily developer joke! (a project by Fred Adams at xtrp.io) | 4,070 | 2020-06-09T12:00:21 | https://dev.to/dailydeveloperjokes/daily-developer-jokes-tuesday-jun-9-2020-4h13 | jokes, dailydeveloperjokes | ---
title: "Daily Developer Jokes - Tuesday, Jun 9, 2020"
description: "Check out today's daily developer joke! (a project by Fred Adams at xtrp.io)"
series: "Daily Developer Jokes"
cover_image: "https://private.xtrp.io/projects/DailyDeveloperJokes/thumbnail_generator/?date=Tuesday%2C%20Jun%209%2C%202020"
published: true
tags: #jokes, #dailydeveloperjokes
---
Generated by Daily Developer Jokes, a project by [Fred Adams](https://xtrp.io/) ([@xtrp](https://dev.to/xtrp) on DEV)
___Read about Daily Developer Jokes on [this blog post](https://xtrp.io/blog/2020/01/12/daily-jokes-bot-release/), and check out the [Daily Developer Jokes Website](https://dailydeveloperjokes.github.io/).___
### Today's Joke is...

---
*Have a joke idea for a future post? Email ___[xtrp@xtrp.io](mailto:xtrp@xtrp.io)___ with your suggestions!*
*This joke comes from [Dad-Jokes GitHub Repo by Wes Bos](https://github.com/wesbos/dad-jokes) (thank you!), whose owner has given me permission to use this joke with credit.*
<!--
Joke text:
___Q:___ How did pirates collaborate before computers ?
___A:___ Pier to pier networking.
-->
| dailydeveloperjokes |
351,992 | Designing the Life You Want to Live | This post was originally published on June 9, 2020 on my blog. Something's been on my mind for a whi... | 0 | 2020-06-09T12:34:18 | https://dev.to/alexlsalt/designing-the-life-you-want-to-live-2gg4 | womenintech, codenewbie, devjournal | _This post was originally published on June 9, 2020 on [my blog](https://alexlsalt.github.io/blog)._
Something's been on my mind for a while that I'd love to share here. It's been an idea that's been simmering in my head for the past few weeks (maybe month or two, who really knows where our ideas come from or when they arrive?).
It's the idea that if we're not currently living the lives that we want to live in the future, then we need to make changes *now* to be able to live the types of lives we want to.
Now, I'm not talking about wanting to be extremely wealthy in the future, and then acting as though you are *now* and consequently spending exorbitant amounts of money as a result. In my humble opinion, that's actually not a great (or sustainable) idea.
What I'm talking about mostly has to do with our own habits, beliefs, and systems of both work- and home-life.
For example, in the not-so-distant past, I would work myself to the absolute bone every single day without rest. If you've been reading along with me for any amount of time, then you may know that I've been rather focused on having time for working hard and also having designated time for resting and taking intentional breaks from said work.
One example of how I see it: if I want to be a conscious and fully-present parent to a child one day, I need to pay attention to how conscious and fully present I am *now*, each and every day of my present life.
I can't keep going on, entertaining my workaholism and can't-stop-won't-stop mentality now if I don't want that to be my reality in six, twelve, eighteen plus months from now.
The thing is - we don't just wake up one day totally different from how we've been our whole lives. We really have to work to change those habits and life systems.
We may think we'll "grow up" as the years go on, but I think our current habits actually tend to get further and further ingrained - and thus more difficult to change - as the years pass us by.
So for me, those designated times of rest and breaks from working are kind of revolutionary.
I'm not doing it each day for its own sake, but rather I'm consciously designing the very life I hope to be leading in which I don't need to make these very intentional efforts since I will have already built the foundations for them by living that way today and tomorrow and so on.
Thanks for reading! [Now let's be friends over on Twitter >>](https://twitter.com/alexlsaltt) | alexlsalt |
352,020 | Road to Genius: beginner #2 | Each day I solve several challenges and puzzles from Codr's ranked mode. The goal is to reach genius... | 0 | 2020-06-09T13:30:03 | https://dev.to/codr/2-road-to-genius-4ejg | javascript, beginners, computerscience | Each day I solve several challenges and puzzles from Codr's ranked mode. The goal is to reach genius rank, along the way I explain how I solve them. You do not need any programming background to get started, but you will learn a ton of new and interesting things as you go.

This challenge is slightly more complex than the previous one. Don't be fooled by the amount of code, let's dissect the challenge.
As you can see at the bottom comments, there is only one bug we need to solve 💚 (a number), and we get a list of answers to choose from.
The code starts by creating 3 arrays, the first two (`a1` and `a2`) are filled with numbers, the third `arr` is empty. Then we have a while loop, whose condition is the length of `a1` and `a2`. This means, as long as those two arrays are not empty, it will execute the code inside the loop `{...}`.
This inner code pops from `a1` and `a2` respectively into `x` and `y` variables. Then it compares `x` with `y`, if `x` is greater than `y` it first adds `x` into `arr` then `y`, in the other case it adds first `y` then `x`. This is all we need to know.
The challenge also states that `R` should be 6. `R` is some value from `arr` at an unknown position (=index) represented by our bug 💚 (a number). So all we need is to find an index of `arr` such that the value at that index is 6.
Here's an example:
`let demo = [2, 4, 6]`
an array is zero-indexed, meaning the first element is at position (index) 0, the second element is at index 1, and so on...
In this example the value 6 is at index 2.
Now back to our challenge. We know that the loop takes elements from two different arrays and adds them to a new array; all we need is to find the position (index) of the value 6. Notice that there are 2 possible answers, because the number 6 appears twice in `a2`. But we are very lucky, since one of these numbers appears at the very end of `a2`. All we need to do is evaluate the inner loop just once to find the index, like this:
```
x = 1 (pop from a1)
y = 6 (pop from a2)
if (x > y) this is false
...
else { here we go
arr.push(y)
arr.push(x)
}
'arr' is now [6, 1]
```
value 6 is at index/position 0 in 'arr'
this means that 💚 should be 0.
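The same evaluation can be checked with a quick JavaScript sketch of the loop. The actual contents of `a1` and `a2` are in the challenge screenshot, so the arrays below are assumed for illustration — only their last elements (1 and 6) match the trace above:

```javascript
// Sketch of the challenge's loop. The real a1/a2 values are in the
// screenshot; these are assumed, keeping the endings the trace relies on:
// a1 ends in 1, a2 ends in 6.
const a1 = [3, 5, 1];
const a2 = [2, 6, 6];
const arr = [];

while (a1.length && a2.length) {
  const x = a1.pop();
  const y = a2.pop();
  if (x > y) {
    arr.push(x);
    arr.push(y);
  } else {
    arr.push(y);
    arr.push(x);
  }
}

console.log(arr);            // the first two elements are 6 and 1
console.log(arr.indexOf(6)); // → 0, so 💚 should be 0
```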

---
If you feel inspired and motivated to upgrade your coding + debugging skills, join me on the Road to Genius at https://nevolin.be/codr/ | codr |
352,100 | Beginner Python tips Day - 06 try...except...else...finally | # Python Day 06 - try...except...else...finally # try, except(also known as catch) and finally patte... | 7,108 | 2020-06-09T15:32:33 | https://dev.to/paurakhsharma/beginner-python-tips-day-06-try-except-else-finally-2l5h | python, beginners, codenewbie, tips |
```python
# Python Day 06 - try...except...else...finally
# The try, except (also known as catch) and finally pattern is seen in many programming languages,
# but on top of that Python comes with an else block as well;
# the else block runs before the finally block
# when there is no exception in the try block
# Example 1
try:
print(100/0)
# Doesn't print anything because there is an exception
except ZeroDivisionError:
print('This does run as there is ZeroDivisionError')
else:
print("This doesn't run as there is an exception in try block")
finally:
print('This runs regardless of the exception')
# Example 2
try:
print(100/5)
# Prints 20.0
except ZeroDivisionError:
print("This doesn't run as there is no ZeroDivisionError")
else:
print("This runs as there is no exception in try block")
finally:
print('This runs regardless of the exception')
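
# Example 3 - why the else block is useful:
# keep only the risky statement inside try; follow-up work goes in else,
# so exceptions raised by the follow-up are not swallowed by except.
# (safe_divide is just an illustrative helper name, not from the original post)
def safe_divide(a, b):
    try:
        result = a / b
    except ZeroDivisionError:
        return None
    else:
        return result
    finally:
        pass  # cleanup (e.g. closing files) would go here; it runs either way

print(safe_divide(100, 5))
# Prints 20.0
print(safe_divide(100, 0))
# Prints None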
``` | paurakhsharma |
352,173 | A Rust Client for PostgREST | At Supabase, we rely heavily on PostgREST, an open source tool that turns your Postgres database into... | 0 | 2020-06-10T12:59:25 | https://dev.to/supabase/a-rust-client-for-postgrest-4ka5 | At [Supabase](https://supabase.io), we rely heavily on [PostgREST](https://postgrest.org/en/stable/index.html), an open source tool that turns your Postgres database into a RESTful API. We even have our own JavaScript client for it in the form of [postgrest-js](https://github.com/supabase/postgrest-js).
But I use Rust, and it's required by my religion to rewrite everything in Rust, so rewrite it I did. To wit: [postgrest-rs](https://github.com/supabase/postgrest-rs) 🦀.
# 🤔 What can I do with it?
postgrest-rs brings an ORM interface to PostgREST. This means you can interact with Postgres (through PostgREST) from within Rust. For example:
```rust
use postgrest::Postgrest;
let client = Postgrest::new("https://your.postgrest.endpoint");
let resp = client
.from("table")
.select("*")
.execute()
.await?;
```
# 🤷♀️ Why would I want it?
Say I have a table of `users`, and I want to know the name of the last user that logged on. In PostgREST, you do this by making the following request:
```
GET https://your.postgrest.endpoint/users?select=username&order=last_seen.desc HTTP/1.1
Accept: application/vnd.pgrst.object+json
```
This gets cumbersome and error-prone as queries get more complex. Compare this to its equivalent in postgrest-rs, which feels more at home:
```rust
let client = Postgrest::new("https://your.postgrest.endpoint");
let resp = client
.from("users")
.select("username")
.order("last_seen.desc")
.single()
.execute()
.await?;
```
There are many other cool stuff you can do, such as switching schemas, row filtering, calling stored procedures, and much more. You can check out the repo [here](https://github.com/supabase/postgrest-rs) and play around with it.
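To see what such a fluent builder boils down to, here is a toy Rust sketch of how chained calls can assemble the underlying PostgREST URL. This is not postgrest-rs's actual internals — every name below is invented for illustration:

```rust
// Toy illustration of what a fluent PostgREST-style builder assembles
// under the hood — not postgrest-rs's real implementation.
struct Query {
    table: String,
    params: Vec<(String, String)>,
}

impl Query {
    fn from(table: &str) -> Self {
        Query { table: table.to_string(), params: Vec::new() }
    }
    fn select(mut self, cols: &str) -> Self {
        self.params.push(("select".into(), cols.into()));
        self
    }
    fn order(mut self, ord: &str) -> Self {
        self.params.push(("order".into(), ord.into()));
        self
    }
    // The final URL is what actually goes over the wire to PostgREST.
    fn to_url(&self, base: &str) -> String {
        let qs: Vec<String> = self.params.iter()
            .map(|(k, v)| format!("{}={}", k, v))
            .collect();
        format!("{}/{}?{}", base, self.table, qs.join("&"))
    }
}

fn main() {
    let url = Query::from("users")
        .select("username")
        .order("last_seen.desc")
        .to_url("https://your.postgrest.endpoint");
    assert_eq!(
        url,
        "https://your.postgrest.endpoint/users?select=username&order=last_seen.desc"
    );
    println!("{}", url);
}
```

The same composition idea extends naturally to row filters and schema switching: each builder method just records another query-string pair until the request is executed.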
And there you have it! 🎉
This project is part of my internship at Supabase. I saw that there was an interest in a [Rust client library](https://github.com/supabase/supabase/issues/5), and offered to work on that, among other things. Soon after, I'm writing a Rust library, and it was great! So credit where it's due: the Supabase team for all the support, and of course, PostgREST!
---
We'll announce all our future features with more freebies here on DEV first. Follow us so that you don't miss out.
[Sign up](https://app.supabase.io) for our early alpha!
 | soedirgo | |
352,265 | Simplifying: Stacks and Queues | Stacks and queues: this is how I remember them: Stacks: I picture something vertical: a pile of pl... | 0 | 2020-06-09T22:36:03 | https://dev.to/moyarich/simplifying-stacks-and-queues-1i3h | stack, queue, computerscience, javascript | 
Stacks and queues: this is how I remember them:
**Stacks:** I picture something vertical: a pile of plates,a bottle.
**Queues:** I picture something horizontal: a pipe, a line (I join first, I get served first).
Stacks - LIFO: you can only add(append, push) and remove(pop) from the back(top,end).
Queues - FIFO : add (enqueue,append) to the back(rear), only remove(dequeue,popleft) from the front .
LIFO : Last in, first out.
FIFO : first in, first out.
If you are JavaScript developer, you are unconsciously working with stacks and queues everyday:
- You use stacks every time you run your code: "function call stack".
- You use queues every time you run asynchronous code: "The event queue" of the event loop.
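Both disciplines can be sketched with plain JavaScript arrays (a sketch for illustration — `shift()` is O(n) on large arrays, so a production queue would typically use a linked list or index pointers instead):

```javascript
// Stack — LIFO: push and pop both work at the same end (the top).
const stack = [];
stack.push("plate 1");
stack.push("plate 2");
console.log(stack.pop()); // → "plate 2" (last in, first out)

// Queue — FIFO: enqueue at the back, dequeue from the front.
const queue = [];
queue.push("customer 1"); // enqueue
queue.push("customer 2");
console.log(queue.shift()); // → "customer 1" (first in, first out)
```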
Here are some examples of stacks and queues in the real world:
**Stack:**
- Your favorite text editor: undo/redo feature.
- Backtracking: your browsers "back" button.
- Reverse : try to reverse your name.
**Queue:**
- Order processing: you stand 6 feet apart from everyone as you wait in line to place your order with a cashier.
- Message processing: your long SMS messages are stored in a queue( messages are sent in the order they are received). Test this feature out on twitter by exceeding your 143 character limit
---
Now, how have you used stacks and queues consciously in your career?
Let's talk about your usage of these data structures or concepts in your projects.
- I coded a Java paint application to draw shapes on a canvas : https://github.com/moyarich/JPaint.
- I used the open source [bull queue manager](https://github.com/OptimalBits/bull) in my project to control the pace at which I send data to an API. Each subsequent item is sent to the API after it connects to my webhook URL. I wrote this [custom function](https://gist.github.com/moyarich/4d6735b8d417c5e2f7e5f03469d32fb7) to get bull queue to manually process only one job in the queue on demand.
| moyarich |
352,270 | Touching a quantum computer | Try to remember the first time you saw a computer. Just a regular computer based on a binary set of r... | 0 | 2020-06-09T22:21:46 | https://dev.to/shtabnoy/touching-a-quantum-computer-158k | quantumcomputing, quantumcomputers, quantum, computing | Try to remember the first time you saw a computer. Just a regular computer based on a binary set of rules, that we all use now. What did you feel back then? It was like a whole new world hidden inside it. The whole new universe was captivating you with its infinite tools and options. Some dinosaurs can remember computers without a GUI, only text-based terminal where you could gently ask "whoami", and your "metal servant" would kindly respond with your name. I saw my first PC not that long ago, about 15 years ago, at one of my local buddies home. That computer was powered by Microsoft's Windows XP, which was the most popular version of Windows back then. I was fascinated even by the welcoming screen with that famous sound and a "bliss" background of California's Wine Country (my favorite was Stonehenge anyway). And its graphic editor "Paint" would cause even more awe, admiration and joy. Later, when you started to get used to a computer, it started to reveal you more and more secrets and its endless possibilities.
Today we live on the verge of a new technology revolution, and we have a unique opportunity to witness its birth in the form of quantum computers. And not only witness it, but touch it. By touching I don't mean literal touching, but running a task on it, experimenting, and getting some graphical feedback. You could probably even literally touch one if you're close to the universities and giant tech companies that keep them in their mysterious basements, where they are breaking cryptographic protocols and achieving "quantum supremacy". But I'm going to tell you about some remote ways of touching a quantum computer.
I don't remember exactly when I first heard about quantum computing, but once it settled in my head, it started to slowly grow there fueling my interest in that field. So from time to time I read some articles here and there about this mysterious topic not understanding much at first, but slowly acquiring basic ideas behind it. The main question you always have to ask yourself about any new technology is "why does it exist?", or rephrasing it, "why have people even invented it, for what purpose?". And this always helps to come to an understanding of every advance or progress that humans did. It also always means that there was something imperfect, something that was not enough to solve some problems, something that had to be changed in order to improve a technology, to simplify sophisticated tasks, to unfold a whole new branch of possibilities.
In the case of computing, there has always been a problem with representing physical processes and simulating the behavior of atoms and particles. These simulations are particularly useful in some realms of physics, chemistry, biology and medicine. They are based on quantum properties of matter, and therefore it has always been a challenge to write even the simplest simulation of the simplest molecule on a regular computer, because it involves too many states of too many particles that are highly intertwined with each other. To put it simply, all these things that could probably help us understand our world better, decode our genome or even find a cure for cancer are actually not possible to compute even on the fastest supercomputer. I'm not going to dig deeper into the basics of quantum computing or quantum mechanics. For this purpose there's a [great resource](https://quantum.country/qcvc) that explains everything you need to know about it in a very rigorous and strict scientific way.
So this seemingly inconceivable branch of science kept haunting me, forcing me to read more technical papers rather than just mundane, trivial explanations where an author uses some weird analogies just so even your grandma could understand it. And then finally I stumbled upon the website of D-Wave, a Canadian company that produces quantum computers and drives a lot of R&D in that area. To my utter amazement and complete surprise, it turned out that they also provide the possibility to use their quantum computers in the cloud.
Stop for a couple of seconds here and ponder this for a while. A quantum computer... in the cloud! Isn't it amazing? And furthermore, it's been there for quite some time now. I couldn't find an exact date when cloud quantum computers became a thing, but Wikipedia says that back in 2011 D-Wave announced the first commercially available quantum computer. I have no idea how you could access this incredible machine almost 9 years ago, but now many tech companies who provide quantum computing as a service have emerged, like IBM with its IBM Q Experience, Microsoft with LIQUi|>, Google with its Quantum Playground and others. And all of them offer an incredible tool set to play around with their quantum computers. But for now I'll just focus on two of them, D-Wave and IBM. I will try to describe my experience working with their quantum machines and express my feelings about it.
I started to play around with the D-Wave system called Leap, because it was basically the first one I found out there, as I previously said. And it swallowed me completely. I was very excited by their demos. For instance, [factoring with a quantum computer](https://cloud.dwavesys.com/leap/demos/factoring/intro) is so straightforward and well written that it immediately became clear to me what the advantage of using a quantum computer for that kind of task is, and how weak our current cryptographic algorithms would be against these incredible machines. Also, this demo maps all known features of binary computing to their twin brothers in the quantum world. So bits become qubits, and logic gates become couplers (basically, a bunch of qubits bundled together). And the icing on the cake, the most thrilling and exciting part, something that really hooked me and forced me to write this post, is the ability to run this demo on a real quantum computer hidden behind the curtains of the browser and network connection. A real physical D-Wave 2000Q quantum computer that you can "touch" via code. Just think about it. These computers are still such a rare commodity. Only a few companies and institutes have them, and even fewer can afford to provide a cloud service for everyone. And we now have a unique opportunity to try these things out, even though we still cannot see them directly.
And of course everything comes with a price. Because of their structural complexity they are very hard to build, maintain and fix. Therefore they are very expensive these days. This leads to the scarcity of these machines and to the time limits developers have to apply to reduce the rising demand. That's why, after I ran my very first demo on a quantum computer, I saw that scary circle showing me how much time I had left. Actually, I spent not that much by running just one factoring demo (because this demo was just about factoring the number 21), about 0.36% of all the time available to me. But does that even matter if you can play with a real quantum machine?
But this is just the tip of the iceberg. They created their own browser-integrated, VSCode-like IDE, where you can run your own arbitrary Python algorithms and challenge their computer to "solve your problems". Besides demos there are ready-to-use examples for problems that fit quantum computers very well, great documentation and a learning platform. It seems like a really good place to start your journey into the vague world of quantum computing.
Later I also found another great quantum cloud service made by another well-known mastodon of a tech company, the one that no one calls International Business Machines anymore. This one was as good as D-Wave's, or even better. It provides the same feature of having [Jupyter Python notebooks](https://quantum-computing.ibm.com/jupyter) where you can run your code against their quantum beasts. But they also have a nice interactive circuit-building tool, where you can create a program by just dragging and dropping predefined elements of quantum circuits onto the board, then run it and see the result. For this you don't even need any programming skills. What's better with IBM is that (since "they have a ton of money") they give you all of this for free. But because of that, there's a long list of people willing to test their beautiful programs, and you have to wait in line for quite some time (it was about 5-10 min for me). That's why the ingenious engineers at IBM created a quantum simulator that you can also try. With it you'll get results very fast, almost instantly, just the time it takes for an HTTP response to come back to you with the data from the server.
It reminds me of the days when I was a kid and few people in my little town had computers. The majority would still go to so-called "computer clubs" to play computer games or surf the internet. They had to pay an hourly fee to get some time on those PCs with Windows 98, which were powered by incredibly fast Pentium III processors. And even then computers weren't as rare as quantum computers are today. Now that era is long gone, replaced by the era of social networks and mobile devices. It's fun to reminisce and get nostalgic about those times and compare them to the reality we have now. It helps to extrapolate changes into the future and assume that eventually everyone will have a personal quantum computer with a tiny cryostat encompassing its QPU made of millions of qubits. With that amount of power you could potentially brute-force any modern encryption. That's why scientists already stay up all night working on post-quantum cryptography, but that's another story.
On the other hand, quantum computers today are designed to solve specific tasks like analyzing complex molecules and chemical processes, finding new materials and using them to boost progress. So you would probably say that not everyone needs a quantum computer to share their Instagram stories or watch new videos from their favorite blogger. That is actually true. You don't need a quantum computer for such trivial tasks. But if you're a scientist, a developer, an engineer or just a curious person who wants to contribute to something important for humanity, you will definitely appreciate a small personal quantum computer instead of using cloud-based, mega-expensive machines on the other side of the world. Also, quantum computers could be really helpful in the field of AI, because machine learning pairs really well with quantum computing, and these two branches of science will definitely help one another in the future.
#### Resources
https://quantum-computing.ibm.com/
https://cloud.dwavesys.com/leap | shtabnoy |
352,293 | Debugging | When will you use breakpoints? A- Breakpoints placed within your code where it will pause so that yo... | 0 | 2020-06-10T00:29:38 | https://dev.to/tony5293/debugging-3m16 | swift, beginners | When will you use breakpoints?
A- Breakpoints are placed within your code where execution will pause so that you can inspect the program for any bugs.
Why would you change the iOS simulator?
A- The reason you'd change the iOS simulator is to make sure the bugs are fixed across all devices and aren't localized to one iOS version. | tony5293
352,318 | Things to Avoid to Become a Good Developer | Subscribe to my email list now at http://jauyeung.net/subscribe/ Follow me on Twitter at https://twi... | 0 | 2020-06-10T02:01:43 | https://dev.to/aumayeung/things-to-avoid-to-become-a-good-developer-3358 | career | **Subscribe to my email list now at http://jauyeung.net/subscribe/**
**Follow me on Twitter at https://twitter.com/AuMayeung**
**Many more articles at https://medium.com/@hohanga**
**Even more articles at http://thewebdev.info/**
Being a good developer is hard, but being a bad one is easy. There’re many ways to make ourselves a bad developer.
In this article, we’ll look at some ways that we can become a bad developer.
Assume There Are No Bugs in Our Code
====================================
Assuming that there are no bugs in our code is always a big mistake. It's just impossible to have no bugs in our code unless it's a very simple app.

Otherwise, we're making a very bad assumption, since it's very hard to write bug-free code.
Every line can do something that we may not expect. If we don’t think hard enough and take into account all the cases that something can go wrong, then something will go wrong.
Most apps are at least thousands of lines long if they're used by people in production and people are paying for them, so we'd better look at every line and think of all the edge cases and risks that can arise from our code.
Computers are dumb. They just run the code that’s in our programs, so we shouldn’t assume that there’re no bugs since every line can go wrong and computers can’t read our minds.
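To make the point concrete, here is a hypothetical illustration (the function names are mine, not from any real codebase): even a one-line function that "obviously works" can hide an edge case.

```python
# A one-liner that looks bug-free, but crashes on an empty list.
def average(numbers):
    return sum(numbers) / len(numbers)  # ZeroDivisionError when numbers == []


# The same logic once we stop assuming there are no bugs.
def safe_average(numbers):
    """Return the mean of the numbers, or None for an empty input."""
    if not numbers:
        return None
    return sum(numbers) / len(numbers)
```

The first version passes every happy-path test and still fails in production the first time an empty list shows up.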
Write Code Without Reasoning
============================
Before we put code into a file, we should make sure that the code makes sense before we put it in.
We can’t just try random things and hope that something works. Doing that definitely makes us a bad developer since we’ll probably end up with something that doesn’t work.
Also, the code we put in must serve a purpose. Otherwise, the code is useless and they don’t help anyone.
Take Pleasure in Writing More Code
==================================
We shouldn’t write any more code than we need to. The desire to write more code should be stopped.
Writing useless code just makes our code more complex and long and annoys other people reading the code.
The more code there is, the more mistakes we make. Each extra line of code adds more risk of bugs and undesirable behavior.

Therefore, the more code there is, the more ways we can screw up. More code also adds a greater burden of testing, code reviews, and many other tasks that we could do less of if we had less code.
Writing Code for Machines Rather Than Humans
============================================
Writing code for machines will get us into trouble eventually. Code with names that we don't understand or badly formatted code will catch up with us sooner or later.

No one, including the writer, will understand it given enough time.
Another way to write code for machines is spaghetti code. Spaghetti code is just code that has messy workflows, branching, and whatever other ways we can think of to make a mess.
That’s just not acceptable by any means since they are impossible to change because of the mess that the code had turned to.
By writing very long functions and classes, we’re also writing code for machines. We don't want that since again, only a machine can get it.
No human can read long functions and classes and understand it completely. Therefore, it’s one more thing that we should avoid.
Instead, we should look at the principles of Clean Code and write our code based on them. The first half of the book applies to any programming language, so focus on that.
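As a sketch of the difference (a made-up example, not taken from Clean Code itself): the two functions below do the same thing, but only one of them is written for humans.

```python
# Written for machines: the computer runs it happily, humans have to decode it.
def p(d, t):
    return [x for x in d if x[1] > t]


# Written for humans: identical behavior, self-describing names.
def filter_orders_above_threshold(orders, threshold):
    """Keep only (order_id, amount) pairs whose amount exceeds the threshold."""
    return [order for order in orders if order[1] > threshold]
```

Renaming costs nothing at runtime, but it saves every future reader from reverse-engineering what `p`, `d` and `t` mean.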
**Thinks Emotionally**
======================
Thinking emotionally definitely makes us a bad developer. We shouldn’t take constructive criticism personally.
They help us improve our code. Also, we shouldn't let our egos get in the way of rational thinking. We should take useful suggestions and apply them if the way we did things was deficient.

Also, we shouldn't just code by feeling. We should test our stuff so that we can be sure it works the way we expect it to.
Conclusion
==========
There’re many ways to become a bad developer. We can assume that our code never has bugs.
Also, we can let our egos and feelings get in the way of rational thinking.

Furthermore, we can write code that only machines can read, with bad formatting and non-descriptive names.

Finally, we can put in code that's useless or has no reason to be there. | aumayeung
352,328 | Serve Website/Api from your own system | Have you ever seen a condition when you need to change your design or API frequently to meet the dema... | 0 | 2020-06-10T03:05:08 | https://dev.to/anshul_gupta/serve-website-api-from-your-own-system-4ke | api, hosting, agile, opensource | Have you ever been in a situation where you needed to change your design or API frequently to meet demand?

I was, a few days back. Because of the lockdown, my colleagues and I were working on a project from home. We were not on the same local network, so to connect the backend and frontend I had to push the code to a server before my colleagues could start working.
Then I found a service called [NGROK](https://ngrok.com/)
[**Ngrok is a cross-platform service that enables you to expose your local development server to the internet.**](https://www.blog.guidefather.in/2020/06/host-websiteapi-from-local-system-ngrok.html#how-ngrok-work)
## How to use NGROK
- Go to their [website](https://ngrok.com/download)
- Download the setup for your OS and extract it.
- Open a terminal in the same folder and type
> **./ngrok http port_to_map**
For example: `./ngrok http 3000`
Ngrok will provide you with a URL that you can use to access your API/website publicly.
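Note that ngrok can only forward traffic to something that is already listening on the local port. If you don't have an app running yet, a quick sanity check (a sketch, assuming `python3` and `curl` are installed) is to start a throwaway server on the port you plan to map:

```shell
# Start a disposable local server on port 3000, the port we will hand to ngrok.
python3 -m http.server 3000 >/dev/null 2>&1 &
SERVER_PID=$!
sleep 1
# If this succeeds, `./ngrok http 3000` will have something to forward to.
if curl -s -o /dev/null http://localhost:3000/; then
  REACHABLE=yes
  echo "local server reachable on port 3000"
else
  REACHABLE=no
fi
kill $SERVER_PID
```

Once the local check passes, the URL that ngrok prints will serve the same content publicly.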

If you want to know more you can check this link [Host website from local system](https://www.blog.guidefather.in/2020/06/host-websiteapi-from-local-system-ngrok.html).
I believe using NGROK is easier than reading this article.

If you have a better alternative or solution, please let me know. I would love to explore it.
▀█▀ █░█ ▄▀█ █▄░█ █▄▀ █▄█ █▀█ █░█
░█░ █▀█ █▀█ █░▀█ █░█ ░█░ █▄█ █▄█
| anshul_gupta |
352,345 | Haskell : Parsing a log message | Week 2 of CIS194 has an interesting problem which deals with parsing log messages. A set of types are... | 0 | 2020-06-24T08:05:29 | https://dev.to/anaynayak/haskell-parsing-a-log-message-1290 | haskell | Week 2 of CIS194 has an interesting problem which deals with parsing log messages. A set of types is provided, and we need to write a `parseMessage` function that returns a `LogMessage` from a `String` parameter.
##### Provided types:
```haskell
data MessageType = Info
| Warning
| Error Int
deriving (Show, Eq)
type TimeStamp = Int
data LogMessage = LogMessage MessageType TimeStamp String
| Unknown String
deriving (Show, Eq)
```
##### Sample log file:
```
I 11 Initiating self-destruct sequence
E 70 3 Way too many pickles
E 65 8 Bad pickle-flange interaction detected
W 5 Flange is due for a check-up
I 7 Out for lunch, back in two time steps
Bad message
```
The log structure is largely similar for all log lines, except for Error, where we also have an error code.
`W 5 Flange is due for a check-up` is parsed as a Warning message with timestamp=5 and the rest as the message.
`E 23 5 Flange is due for a check-up` is parsed as a Error with code 23, timestamp=5 and the rest as the message.
We parse the message as Unknown if any of the following hold true:
1. Timestamp is not an integer
2. message starts with a symbol other than E/W/I
3. E messages not followed by an integer code.
4. The message structure doesn't match `<type> <ts> <msg>`
Attempt #1
```haskell
parseMessage :: String -> LogMessage
parseMessage msg = case parseCode $ words msg of
(Just messageType, Just timestamp, rest) -> LogMessage messageType timestamp rest
_ -> Unknown msg
parseCode :: [String] -> (Maybe MessageType, Maybe TimeStamp, String )
parseCode ("E":code:ts:rest) = (parseError code, toInt ts, unwords rest)
parseCode ("W":ts:rest) = (Just Warning, toInt ts, unwords rest)
parseCode ("I":ts:rest) = (Just Info, toInt ts, unwords rest)
parseCode msg = (Nothing, Nothing, unwords msg)
parseError :: String -> Maybe MessageType
parseError code = Error `fmap` toInt code
toInt :: String -> Maybe Int
toInt = readMaybe
```
Notes:
1. The parsing of timestamp can fail and so we use a `Maybe Int` type to handle the absence of a meaningful timestamp. Same holds true for error code.
2. We pattern match on the `parseCode` response so that we can handle happy scenarios and fallback to Unknown for everything else.
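As a quick sanity check, here is how `parseMessage` behaves on lines like the ones in the sample log file (a hypothetical GHCi session; the results follow directly from the code above):

```
λ> parseMessage "E 70 3 Way too many pickles"
LogMessage (Error 70) 3 "Way too many pickles"
λ> parseMessage "I 11 Initiating self-destruct sequence"
LogMessage Info 11 "Initiating self-destruct sequence"
λ> parseMessage "Bad message"
Unknown "Bad message"
```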
Attempt #2
```haskell
parseMessage :: String -> LogMessage
parseMessage s =
let (maybeMessagetype, s1) = parseType $ words s
(maybeTs, s2) = parseTs s1
lm = liftA3 LogMessage maybeMessagetype maybeTs (Just $ unwords s2)
in fromMaybe (Unknown s) lm
parseType :: [String] -> (Maybe MessageType, [String])
parseType ("I":xs) = (Just Info, xs)
parseType ("W":xs) = (Just Warning, xs)
parseType ("E":code:rest) = (Error <$> readMaybe code, rest)
parseType s = (Nothing, s)
parseTs :: [String] -> (Maybe TimeStamp, [String])
parseTs [] = (Nothing, [])
parseTs (x:xs) = (readMaybe x, xs)
```
In Attempt #1, we looked at the entire log message within the `parseCode` function so that the structure was visible. However, this doesn't scale that well. Instead, we change the structure so that each function handles a subset of the string and returns a `Maybe` along with the string that wasn't consumed.
Notes:
1. parseType and parseTs are only responsible for handling the bits that they understand. If a parser can't process the string, it returns a tuple with `Nothing` and the original string.

2. The parseMessage function needs to pass the leftover state through to the subsequent parseXYZ function.
3. We combine all the Just values into a LogMessage by using liftA3.
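The behaviour of `liftA3` with `Maybe` is what gives us the fallback for free: the combined result is `Just` a `LogMessage` only when every piece parsed successfully (a hypothetical GHCi session):

```
λ> import Control.Applicative (liftA3)
λ> liftA3 LogMessage (Just Info) (Just 7) (Just "Out for lunch")
Just (LogMessage Info 7 "Out for lunch")
λ> liftA3 LogMessage Nothing (Just 7) (Just "Out for lunch")
Nothing
```

When any component is `Nothing`, `fromMaybe (Unknown s) lm` kicks in and we fall back to `Unknown`.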
Week 10 introduces us to a `Parser` type:
`newtype Parser a = Parser { runParser :: String -> Maybe (a, String) }`
The function signatures in the previous solution look very similar to a Parser.
The Parser type lets you define parsers such as:
```haskell
satisfy :: (Char -> Bool) -> Parser Char
satisfy p = Parser f
where
f [] = Nothing
f (x:xs)
| p x = Just (x, xs)
| otherwise = Nothing
```
How do you use it? For example, `char c = satisfy (== c)` defines an exact-match character parser. Similarly, we can define parsers for other smaller units and compose them.
Attempt #3
```haskell
parseMessage :: String -> LogMessage
parseMessage str = fromJust $ runLogMessageParser str where
runLogMessageParser s = fst <$> runParser logMessage s
logMessage = parseError <|> parseInfo <|> parseWarn <|> parseUnknown
parseUnknown = fmap Unknown parseMsg
parseError = liftA3 LogMessage parseECode parseTs parseMsg
parseInfo = liftA3 LogMessage parseICode parseTs parseMsg
parseWarn = liftA3 LogMessage parseWCode parseTs parseMsg
parseECode = Error . fromInteger <$> (char 'E' *> char ' ' *> posInt)
parseICode = char 'I' $> Info
parseWCode = char 'W' $> Warning
parseTs = fromInteger <$> (char ' ' *> posInt)
parseMsg = many <$> satisfy $ const True
```
Notes:
1. We use `Alternative` to make the various possibilities clear. `logMessage = parseError <|> parseInfo <|> parseWarn <|> parseUnknown` tells us that we can expect only one of those four.
2. We can compose small and well-defined parsers so that the code is much more readable and expresses the state clearly. Unlike the previous attempt, we no longer have to explicitly pass the leftover string.
Some symbols from the previous block:
```haskell
(*>) :: f a -> f b -> f b
($>) :: f a -> b -> f b
```
Attempt #3 doesn't show all the underlying constructs required to get the solution working. See the [full solution](https://gist.github.com/anaynayak/fd93e44d9953a7d2516e77e804cc7136#file-loganalysisv3full-hs) for all that. There are a lot of concepts, like Functor, Applicative and Alternative, that help bring clarity to the final solution.
You can also experiment with the various solutions using repl.it
{% replit @anaynayak/LogMessage %}
| anaynayak |
352,396 | Getting started with Terraform and Kubernetes on Azure AKS | Using Azure Kubernetes Service (AKS) instead of creating your cluster is convenient if you are a small team and don't want to spend time monitoring and maintaining Kubernetes control planes. But while you can create a cluster with a few clicks in the Azure portal, it's usually a better idea to keep the configuration for your cluster under source control. | 0 | 2020-06-11T10:47:56 | https://learnk8s.io/blog/get-start-terraform-aks | kubernetes, azure, aks, terraform | ---
title: Getting started with Terraform and Kubernetes on Azure AKS
published: true
description: Using Azure Kubernetes Service (AKS) instead of creating your cluster is convenient if you are a small team and don't want to spend time monitoring and maintaining Kubernetes control planes. But while you can create a cluster with a few clicks in the Azure portal, it's usually a better idea to keep the configuration for your cluster under source control.
tags: kubernetes, azure, aks, terraform
canonical_url: https://learnk8s.io/blog/get-start-terraform-aks
cover_image: https://dev-to-uploads.s3.amazonaws.com/i/ia2e1wr54n0ejj1pe09c.jpg
---
**TL;DR:** _In this tutorial you will learn how to use Terraform 0.12 and Helm 3 to provision an Azure Kubernetes Cluster (AKS) with managed identities._
_This article was originally published [on Learnk8s.io](https://learnk8s.io/blog/get-start-terraform-aks)._
Azure offers a managed Kubernetes service where you can request for a cluster, connect to it and use it to deploy applications.
Using [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service) instead of creating your cluster is convenient if you are a small team and don't want to spend time monitoring and maintaining Kubernetes control planes.
_Why worry about scaling APIs, managing databases, provisioning compute resources, and offering five-nines reliability when you can outsource all of it to Azure?_
For free — yes, **Azure doesn't charge you a penny for the master nodes in Azure Kubernetes Service (AKS).**
But while you can create a cluster with a few clicks in the Azure portal, it's usually a better idea to keep the configuration for your cluster under source control.
If you accidentally delete your cluster or decide to provision a copy in another region, you can replicate the exact same configuration.
And if you're working as part of a team, source control gives you peace of mind.
You know precisely why changes occurred and who made them.
## Table of contents
- [Infrastructure as code: Pulumi vs Azure Templates vs Terraform](#managing-infrastructure-as-code)
- [Getting started with Terraform](#getting-started-with-terraform)
- [Creating a resource group with Terraform](#creating-a-resource-group-with-terraform)
- [Provisioning a Kubernetes cluster on Azure with Terraform](#provisioning-a-kubernetes-cluster-on-azure-with-terraform)
- [Installing an Ingress controller](#installing-an-ingress-controller)
- [A fully configured cluster in one click](#a-fully-configured-cluster-in-one-click)
- [Creating copies of the cluster with modules](#creating-copies-of-the-cluster-with-modules)
- [What's next](#what-s-next)
## Managing infrastructure as code
You have a few options when it comes to keeping your cluster configuration under version control.
The following section is designed to compare Terraform, Pulumi and Azure Resource Manager templates as different options to create infrastructure from code.
If you prefer to skip this part, [you can click here](#getting-started-with-terraform).
### Pulumi
[Pulumi](https://github.com/pulumi/pulumi) offers a novel approach to configuration management through code.
It resembles **a universal SDK that works across cloud providers**.
The infrastructure on Azure (or Google Cloud or Amazon Web Services) is exposed as a collection of objects that you can leverage from your favourite programming language.
Imagine instantiating a LoadBalancer class in Typescript and having an [Azure load balancer](https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-overview) provisioned as a side effect.
_But it doesn't end there._
Pulumi stores the current state of your infrastructure.
So if you run your code twice, it will create a single load balancer and not two.
While technically promising, it's also a new technology.
Pulumi was released at the beginning of 2018, and some of the features are not as polished as in Terraform or Azure Resource Manager templates.
Creating an Azure load balancer in Pulumi using Typescript looks like this:
```typescript
import * as azure from '@pulumi/azure'
const testResourceGroup = new azure.core.ResourceGroup('test', {
location: 'West US',
name: 'LoadBalancerRG',
})
const testPublicIp = new azure.network.PublicIp('test', {
allocationMethod: 'Static',
location: 'West US',
name: 'PublicIPForLB',
resourceGroupName: testResourceGroup.name,
})
const testLoadBalancer = new azure.lb.LoadBalancer('test', {
frontendIpConfigurations: [
{
name: 'PublicIPAddress',
publicIpAddressId: testPublicIp.id,
},
],
location: 'West US',
name: 'TestLoadBalancer',
resourceGroupName: testResourceGroup.name,
})
```
> Please note that Pulumi supports Javascript, Go, Python and Typescript out of the box. However, more languages will be supported later on.
In the example above, you created three resources:
- a resource group to store all of your resources,
- a public IP address to assign to the load balancer
- and the load balancer.
Note how the IP address and the load balancer reference the resource group.
Assuming that you have the `pulumi` binary installed, you can execute the script and create the load balancer with:
```bash
pulumi up
```
### Azure Resource Manager Templates
[Azure Resource Manager templates](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-authoring-templates), _often abbreviated ARM templates_, are a toolset developed by Microsoft designed to provision and control resources in Azure.
ARM templates describe a resource and its related dependencies.
The templates are JSON files and not particularly human-friendly.

Also, ARM templates lack advanced features such as keeping track of what's being deployed, dry runs, and the ability to modularise and reuse your code.
If Pulumi gave you the extreme flexibility of writing your own code, ARM takes it away by giving you a semi-static JSON file where you can dynamically inject variables.
Azure Templates are made of two parts:
1. a generic template and
1. a parameter file that is used to inject values in the template
Here you can [find the generic template for the Azure Load Balancer](https://github.com/Azure/azure-quickstart-templates/blob/master/201-1-vm-loadbalancer-2-nics/azuredeploy.json).
Creating an Azure load balancer in ARM looks like this:
```powershell
New-AzResourceGroupDeployment -Name TestRG -Location uswest `
-TemplateFile 'azuredeploy.json' `
-TemplateParameterFile 'azuredeploy.parameters.json'
```
You can obtain the:
- `azuredeploy.json` from [here](https://raw.githubusercontent.com/azure/azure-quickstart-templates/master/201-2-vms-loadbalancer-lbrules/azuredeploy.json)
- `azuredeploy.parameters.json` from [here](https://raw.githubusercontent.com/azure/azure-quickstart-templates/master/201-2-vms-loadbalancer-lbrules/azuredeploy.parameters.json)
Notice how you had to specify the parameter file to customise some of the values.
If you wish to explore more examples of ARM templates [the official website has a handy collection of quickstart templates](https://azure.microsoft.com/en-us/resources/templates/).
### Terraform
[Terraform](https://www.terraform.io/intro/index.html#what-is-terraform-) gained most of its popularity from being a friendly tool to provision infrastructure on Amazon Web Services.
Terraform is not a library that you use in your favourite programming language, and it's not even a collection of JSON templates.
It's something in between.
It's a sort of DSL — a domain-specific language that's designed to be easy to read and write.
Terraform configurations can be written in HashiCorp Configuration Language (HCL).
**HCL is also fully JSON compatible.**
That is, JSON can be used as entirely valid input to a system expecting HCL.
_It looks like real code, but it lacks some of the flexibility._
Terraform doesn't know how to connect to a cloud provider and orchestrate their API.
It delegates all the work to plugins called providers.
Providers are in charge of translating the terraform DSL into HTTP requests to Azure, Amazon Web Service or any other cloud provider.
Of course, there is a [Terraform provider for Azure](https://www.terraform.io/docs/providers/azurerm/index.html), [as well as many others](https://www.terraform.io/docs/providers/).
Creating an Azure load balancer in Terraform looks like this:
```hcl
resource "azurerm_resource_group" "test" {
name = "LoadBalancerRG"
location = "West US"
}
resource "azurerm_public_ip" "test" {
name = "PublicIPForLB"
location = "West US"
resource_group_name = "${azurerm_resource_group.test.name}"
allocation_method = "Static"
}
resource "azurerm_lb" "test" {
name = "TestLoadBalancer"
location = "West US"
resource_group_name = "${azurerm_resource_group.test.name}"
frontend_ip_configuration {
name = "PublicIPAddress"
public_ip_address_id = "${azurerm_public_ip.test.id}"
}
}
```
> Please note how the code is remarkably similar to Pulumi's.
Terraform has a powerful mechanism where it can trace dependencies across resources and store them in a graph.
The graph is used to optimise creating infrastructure: independent resources are created in parallel instead of sequentially.
The dependency graph for the load balancer above is straightforward.

But you can imagine that once you have a dozen services to maintain, things could become more complicated.
The following elaborate dependency graph was drawn with [Blast Radius](https://github.com/28mm/blast-radius) — a tool for reasoning about Terraform dependency graphs with interactive visualisations.

Terraform also keeps track of the current state of your infrastructure, so running the script twice holds the same result.
In the rest of this article, you will explore why Terraform is loved by small and large enterprises that use it every day in production.
## Getting started with Terraform
Terraform uses a different set of credentials to provision the infrastructure, so you should create those first.
The first step is to install the Azure CLI.
You can find detailed [instructions on how to install it on the official website](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest).
> [Sign up for an Azure account](https://azure.microsoft.com/en-us/free/), if you don't own one already. You will receive USD200 in free credits.
You can link your Azure CLI to your account with:
```bash
az login
```
And you can list your accounts with:
```bash
az account list
```
**Make a note now of your subscription id.**
> If you have more than one subscription, you can set your active subscription with `az account set --subscription="SUBSCRIPTION_ID"`. You still need to make a note of your subscription id.
Terraform needs a Service Principal to create resources on your behalf.
You can think of it as a user identity (login and password) with a specific role, and tightly controlled permissions to access your resources.
It could have fine-grained permissions such as only to create virtual machines or read from a particular blob storage.
In your case, you need a _Contributor_ Service Principal — enough permissions to create and delete resources.
You can create the Service Principal with:
```bash
az ad sp create-for-rbac \
--role="Contributor" \
--scopes="/subscriptions/SUBSCRIPTION_ID"
```
The previous command should print a JSON payload like this:
```json
{
"appId": "00000000-0000-0000-0000-000000000000",
"displayName": "azure-cli-2017-06-05-10-41-15",
"name": "http://azure-cli-2017-06-05-10-41-15",
"password": "0000-0000-0000-0000-000000000000",
"tenant": "00000000-0000-0000-0000-000000000000"
}
```
Make a note of the `appId`, `password` and `tenant`. You need those to set up Terraform.
Export the following environment variables:
```bash
export ARM_CLIENT_ID=<insert the appId from above>
export ARM_SUBSCRIPTION_ID=<insert your subscription id>
export ARM_TENANT_ID=<insert the tenant from above>
export ARM_CLIENT_SECRET=<insert the password from above>
```
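If you prefer not to rely on environment variables, the azurerm provider also accepts the same credentials as arguments. A sketch, assuming the four values are defined as Terraform variables (never hard-code secrets in source control):

```hcl
provider "azurerm" {
  version = "=2.13.0"

  # equivalent to the ARM_* environment variables above
  subscription_id = var.subscription_id
  client_id       = var.client_id
  client_secret   = var.client_secret
  tenant_id       = var.tenant_id

  features {}
}
```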
## Creating a resource group with Terraform
You should install the Terraform CLI. You can [follow the instructions from the official website](https://learn.hashicorp.com/terraform/getting-started/install.html).
If the installation is successful, you should be able to test it by printing the current version of the binary:
```bash
terraform version
```
Let's create the most straightforward Terraform file.
Create a file named `main.tf` with the following content:
```hcl
provider "azurerm" {
version = "=2.13.0"
features {}
}
resource "azurerm_resource_group" "rg" {
name = "test"
location = "uksouth"
}
```
The file contains the provider and an empty resource group.
In the same directory initialise Terraform with:
```bash
terraform init
```
The command executes two crucial tasks:
1. It downloads the Azure provider that is necessary to translate the Terraform instructions into API calls.
1. It initialises the state where it keeps track of all the resources that are created.
You're ready to create your resource group using Terraform.
Two commands are frequently used in succession. The first is:
```bash
terraform plan
```
Terraform executes a dry run.
It's always a good idea to double-check what happens to your infrastructure before you commit the changes.
**You don't want to accidentally destroy a database** because you forgot to add or remove a resource.
Once you are happy with the changes, you can create the resources for real with:
```bash
terraform apply
```
Terraform created the resource group.
Congratulations, you just used Terraform to provision your infrastructure!
You can imagine that, by adding more block resources, you can create more components in your infrastructure.
You can have a look at all the resources that you could create [in the left column of the official provider page for Azure](https://www.terraform.io/docs/providers/azurerm/index.html).
> Please note that you should have sufficient knowledge of Azure and its resources to understand how components can be plugged in together. The documentation provides excellent examples, though.
Before you provision a cluster, let's clean up the existing resources.
You can delete the resource group with:
```bash
terraform destroy
```
Terraform prints a list of resources that are ready to be deleted.
As soon as you confirm, it destroys all the resources.
## Provisioning a Kubernetes cluster on Azure with Terraform
The bill of materials to provision a Kubernetes cluster on Azure is as follows. You need:
- a resource group to contain all of the resources
- a Kubernetes master node (which is managed by Azure)
The list translates to the following Terraform code:
```hcl
provider "azurerm" {
version = "=2.13.0"
features {}
}
resource "azurerm_resource_group" "rg" {
name = "aks-cluster"
location = "uksouth"
}
resource "azurerm_kubernetes_cluster" "cluster" {
name = "aks"
location = azurerm_resource_group.rg.location
dns_prefix = "aks"
resource_group_name = azurerm_resource_group.rg.name
kubernetes_version = "1.18.2"
default_node_pool {
name = "aks"
node_count = "1"
vm_size = "Standard_D2s_v3"
}
identity {
type = "SystemAssigned"
}
}
```
> The code is also available as [a repository on Github](https://github.com/learnk8s/terraform-aks).
Please notice how the cluster resource references attributes of the resource group.
Also, pay attention to the `azurerm_kubernetes_cluster` resource block:
- `default_node_pool` defines how many virtual machines should be part of the cluster and what their configuration should look like.
- `identity` is used by the API to provision a Service Principal for the cluster automatically.
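As an aside, the same `default_node_pool` block can let the cluster scale automatically instead of pinning `node_count`. A hedged sketch, with argument names as in the 2.x azurerm provider:

```hcl
default_node_pool {
  name                = "aks"
  vm_size             = "Standard_D2s_v3"

  # let AKS add and remove nodes within these bounds
  enable_auto_scaling = true
  min_count           = 1
  max_count           = 3
}
```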
Before you apply the changes, execute a dry-run with:
```bash
terraform plan
```
You should notice that there are a lot of resources that are ready to be created.
If the proposed changes resonate with what you asked for, you can apply them with:
```bash
terraform apply
```
_It's time for a cup of coffee._
Provisioning a cluster on AKS takes about ten minutes on average.
What happens in the background is that Azure receives your request, calls the Azure APIs and creates the extra resources needed (such as NICs and virtual machines) to provision the cluster.
_Done?_
**You might wonder where is your cluster.**
_How do you connect to it?_
You can head back to the Azure console and search for your cluster and download the kubeconfig file.
But there's a quicker way.
**Terraform can print information about the state.**
You could use that to print the kubeconfig file associated with the cluster.
You should add the following snippet to the end of your `main.tf` file:
```hcl
output "kube_config" {
value = azurerm_kubernetes_cluster.cluster.kube_config_raw
}
```
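With the versions pinned in this guide, the plain output above works as-is. If you run a newer Terraform (0.15 or later), the plan may fail unless outputs derived from sensitive attributes are marked explicitly; in that case, retrieve the value with `terraform output kube_config`:

```hcl
output "kube_config" {
  value     = azurerm_kubernetes_cluster.cluster.kube_config_raw
  sensitive = true
}
```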
You should go through another cycle of `terraform plan` and `terraform apply` and verify that nothing changed.
You should also notice that after the last `terraform apply`, the kubeconfig file is printed to the terminal before the script completes.
You could copy the content and save it locally.
Or, if you prefer, you can use the following command to access the value and save it to disk:
```bash
echo "$(terraform output kube_config)" > azurek8s
```
You can load that kubeconfig with:
```bash
export KUBECONFIG="${PWD}/azurek8s"
```
Assuming you have kubectl installed locally, you can test the connection to the cluster with:
```bash
kubectl get pods --all-namespaces
```
**Hurrah!**
You have provisioned a cluster using Terraform.
_The cluster is empty, though._
And you don't have an Ingress controller to route the traffic to the pods.
## Installing an Ingress controller
In Kubernetes, the Ingress controller is the component in charge of routing the traffic from outside the cluster to your Pods.
You could think about the Ingress as a router.
All the traffic is proxied to the Ingress, and it's then distributed to one of the Pods.
If you wish to do intelligent path-based routing, TLS termination or route the traffic to different backends based on the domain name, you can do so with the Ingress.

While there are several Ingress controllers such as [Kong](https://konghq.com/blog/kong-kubernetes-ingress-controller/), [HAProxy](https://www.haproxy.com/blog/haproxy_ingress_controller_for_kubernetes/) and [Ambassador](https://www.getambassador.io/), ingress-nginx is the most popular.
You'll use the [ingress-nginx](https://github.com/kubernetes/ingress-nginx) in this guide.
When you install the ingress controller, you have two options.
You can install the nginx-ingress controller using a [service of type NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#nodeport).
Each node in your agent pool will expose a fixed port, and you can route the traffic to that port to reach the Pods in your cluster.
To reach the port on a node, you need the node's IP address.
Unfortunately, you can't reach the node's IP address directly because the IP is private.
You're left with another option: using [a Service of `type: LoadBalancer`.](https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer)
Services of this type are cloud provider aware and can create a load balancer in your current cloud provider such as Azure.
So instead of exposing your Services as NodePort and struggling to send the traffic to the nodes, you have Azure doing the work.
[Azure creates a real load balancer](https://azure.microsoft.com/en-us/services/load-balancer/) and connects all the nodes in the cluster to it.
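For reference, a Service of `type: LoadBalancer` is declared like any other Service. A minimal sketch in Kubernetes YAML (the name, selector and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
spec:
  # asks the cloud provider to provision an external load balancer
  type: LoadBalancer
  selector:
    app: nginx-ingress
  ports:
    - port: 80
      targetPort: 80
```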
_But how do you submit the YAML resources for your ingress?_
You could [follow the manual instructions and install the ingress-nginx](https://github.com/nginxinc/kubernetes-ingress/blob/master/docs/installation.md).
Or you could install it as a package with a single command and Helm.
[Helm is the Kubernetes package manager.](https://helm.sh/)
It's convenient when you want to install a collection of YAML resources.
You can see what packages are already [available in the public registry](https://github.com/helm/charts).
You can find the instruction on how to install the Helm CLI [in the official documentation](https://helm.sh/docs/intro/install/).
Helm automatically uses your kubectl credentials to connect to the cluster.
You can install the ingress with:
```bash
helm install ingress stable/nginx-ingress \
--set rbac.create=true
```
Helm installed the resources such as ConfigMaps, Deployment and Service for the Nginx Ingress controller.
You can verify that by typing:
```bash
helm status ingress
```
It also provisions a Service of `type: LoadBalancer`.
The IP address of the load balancer is dynamically assigned.
> Please note that you might need to wait up to 10 minutes for Azure to provision a Load balancer and link it to the Ingress.
You can retrieve the IP with:
```bash
kubectl describe service ingress-nginx-ingress-controller
```
> The `LoadBalancer Ingress` should contain an IP address.
You can test that the Ingress is working as expected by using `curl`:
```bash
curl <ip address>
```
You should see a `default backend - 404` message.
That message is coming from Nginx and suggests that you haven't deployed an application yet, but the ingress controller is working.
Congratulations, you have a fully working cluster that is capable of routing the traffic using Nginx.
## A fully configured cluster in one click
_Wouldn't it be great if you could create the cluster and configure the Ingress with a single command?_
You could type `terraform apply` and create a production cluster in the blink of an eye.
The good news is that [Terraform has a Helm provider](https://www.terraform.io/docs/providers/helm/index.html).
Let's try that.
However, before you continue, you should remove the existing Ingress.
Terraform doesn't recognise resources that it didn't create, so it won't delete the load balancer provisioned by the Ingress controller.
You can delete the existing Ingress with:
```bash
helm delete ingress
```
When you use the `helm` CLI locally, it uses your kubeconfig credentials to connect to the cluster.
It'd be great if Terraform could pass the login credentials to the Helm provider after the cluster is created.
The following snippet illustrates how you can integrate Helm in your existing Terraform file.
```hcl
#... cluster terraform code
provider "helm" {
version = "1.2.2"
kubernetes {
host = azurerm_kubernetes_cluster.cluster.kube_config[0].host
client_key = base64decode(azurerm_kubernetes_cluster.cluster.kube_config[0].client_key)
client_certificate = base64decode(azurerm_kubernetes_cluster.cluster.kube_config[0].client_certificate)
cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.cluster.kube_config[0].cluster_ca_certificate)
load_config_file = false
}
}
resource "helm_release" "ingress" {
name = "ingress"
chart = "stable/nginx-ingress"
set {
name = "rbac.create"
value = "true"
}
}
```
> The snippet above doesn't include the Terraform code for the cluster. You can find [the full script on the GitHub repository](https://github.com/learnk8s/terraform-aks).
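The `helm_release` resource accepts multiple `set` blocks, so further chart values can be pinned in the same place. A sketch: `controller.replicaCount` is a common value of the nginx-ingress chart, but double-check it against the chart version you deploy:

```hcl
resource "helm_release" "ingress" {
  name  = "ingress"
  chart = "stable/nginx-ingress"

  set {
    name  = "rbac.create"
    value = "true"
  }

  # run two replicas of the controller for basic resilience
  set {
    name  = "controller.replicaCount"
    value = "2"
  }
}
```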
You can test the changes with `terraform plan`.
When you're ready, you can apply the changes with `terraform apply`.
If the installation was successful, you could retrieve the IP address of the new load balancer with:
```bash
kubectl describe service ingress-nginx-ingress-controller
```
> Please note that it takes time for Azure to provision a load balancer and attach it to the cluster.
And you can repeat the test that you did earlier:
```bash
curl <ip of the load balancer>
```
The command should return the same `default backend - 404`.
_Everything is precisely the same, so what's the advantage of using a single Terraform file?_
## Creating copies of the cluster with modules
The beauty of Terraform is that you can use the same code to generate several clusters with different names.
You can parametrise the name of your resources and create clusters that are exact copies.
And since the Terraform script creates fully working clusters with Ingress controllers, it's easy to provision copies for several environments such as Development, Preproduction and Production.
You can reuse the existing Terraform code and provision two clusters simultaneously using [Terraform modules](https://www.terraform.io/docs/modules/index.html) and [expressions](https://www.terraform.io/docs/configuration/expressions.html).
> Before you execute the script, it's a good idea to destroy any cluster that you created previously with `terraform destroy`.
The expression syntax is straightforward — have a look at an example of a parametrised resource group:
```hcl
provider "azurerm" {
version = "=2.13.0"
features {}
}
variable "name" {
default = "test"
}
resource "azurerm_resource_group" "rg" {
name = "aks-${var.name}"
location = "uksouth"
}
```
As you notice, there's a `variable` block that defines a value that could change.
You can set the resource group name with `terraform apply -var="name=production"`.
> If you omit the variable, it defaults to _test_.
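Instead of passing a `-var` flag on every invocation, you can keep the values in a `terraform.tfvars` file, which Terraform loads automatically from the working directory:

```hcl
# terraform.tfvars
name = "production"
```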
Terraform modules use variables and expressions to encapsulate resources.
Let's have a look at an example.
To create a reusable module, you have to parametrise the Terraform file.
Instead of having a fixed name for the resources, you should interpolate the variable called `name`:
```hcl
variable "name" {}
resource "azurerm_resource_group" "rg" {
name = "aks-cluster-${var.name}"
location = "uksouth"
}
# more resources...
```
Then, you can move the existing script to a new folder called `aks-module` and create a new `main.tf` file with the following content:
```hcl
module "dev_cluster" {
source = "./aks-module"
name = "dev"
}
module "preprod_cluster" {
source = "./aks-module"
name = "preprod"
}
```
> Please note that the script and module are [available in the GitHub repository in full](https://github.com/learnk8s/terraform-aks).
The `module` keyword is used to define a new module.
The `source` field points to the folder that contains the module's code.
Any `variable` in the source module is an argument in the `module` block.
You should notice that both clusters have different names.
Before you can plan and apply the changes, you should run `terraform init` one more time.
The command downloads and initialises the local module.
Running `terraform plan` and `terraform apply` should create two clusters now.
_How do you connect to the Development and Production cluster, though?_
You can read the modules' output and create `output` blocks like this:
```hcl
output "kubeconfig_dev" {
value = module.dev_cluster.kube_config
}
output "kubeconfig_preprod" {
value = module.preprod_cluster.kube_config
}
```
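For `module.dev_cluster.kube_config` to resolve, the module itself has to expose that value. Inside the `aks-module` folder, the output from earlier is declared like this:

```hcl
# aks-module/main.tf
output "kube_config" {
  value = azurerm_kubernetes_cluster.cluster.kube_config_raw
}
```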
You should `terraform apply` the changes again to see the output with the kubeconfig file.
You should connect to the cluster, retrieve the IP address of the load balancer and make a `curl` request to it.
Everything works as expected!
Hurrah!
You have two identical clusters, but you can create a lot more now!
## What's next
Having the infrastructure defined as code in your repository makes your job easier.
If you wish to change the version of the cluster, you can do it in a centralised manner and have it applied to all clusters.
The setup described is only the beginning, if you're provisioning production-grade infrastructure you should look into:
- [How to structure your Terraform](https://www.terraform.io/docs/cloud/guides/recommended-practices/part1.html) in global and environment-specific layers.
- [Managing Terraform state](https://www.terraform.io/docs/backends/state.html) and how to work with the rest of your team.
- [How to use External DNS](https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/azure.md) to point to the load balancer and use domain names instead of IP addresses.
- How to set up [TLS with cert-manager](https://github.com/jetstack/cert-manager).
- How to set up a private [Azure Container Registry (ACR)](https://azure.microsoft.com/en-us/services/container-registry/).
And the beauty is that External DNS and cert-manager are available as charts, so you could integrate them with your AKS module and have all the clusters updated at the same time.
_This article was originally published [on Learnk8s.io](https://learnk8s.io/blog/get-start-terraform-aks)._
| danielepolencic |
352,397 | Django and Modern JS Libraries - Svelte (3) | Django and Modern JS Libraries - Svelte (Note:This article is originally published on... | 7,199 | 2020-06-22T12:51:00 | https://www.cbsofyalioglu.com/post/django-and-modern-js-libraries-svelte | django, python, svelte, javascript |
# Django and Modern JS Libraries - Svelte
(Note: this article was originally published on [cbsofyalioglu.com](https://www.cbsofyalioglu.com/post/django-and-modern-js-libraries-svelte) while building the websites of [Istanbul private transfer](https://istanbultransferexpert.com), [Istanbul Cruise Port Transfer](https://istanbulcruisetransfer.com) and [Izmir alarm sistemleri](https://www.filizguvenlik.com.tr))
Another shameless plug of mine is the article that reviews [Best Blogging Platforms for Developers](https://bloggingplatforms.app/blog/best-blogging-platforms-for-developers)
<p>In the previous part, we built a Django backend and GraphQL API. In this part, we will integrate the Django project with Svelte.</p>
<p>Thus, it's necessary to follow the first part of the tutorial.</p>
----------
## What is Svelte and How it differs from React?
I have said that I like Python and its ecosystem. I also like just-in-time compilers and language supersets like Cython, which really boost Python performance. When I learned that JavaScript is an interpreted language, I looked for a Cython equivalent of it. Because of the different browser compilers, I couldn't find what I wanted, and it was a disappointment. Maybe that is why I felt excited when I gave Svelte a chance.
<p>If you haven't tried Svelte before, you may give it a chance. Svelte's interactive API and tutorials are also worth praising. Being familiar with the <a href="https://svelte.dev/tutorial/basics" rel="nofollow noreferrer noopener" class="anchor-color">Svelte API and tutorials</a> is definitely recommended.</p>
When I'm talking about Svelte, I'm strictly speaking about Svelte 3. It is another JavaScript library written by Rich Harris. What makes Svelte special is:
- It is truly a reactive library and it doesn't use virtual DOM like React. Therefore, there is no VDOM diff calculations.
- It has a compiler, and when you build your application it produces optimized JavaScript code. In the end, the Svelte code almost disappears and you are left with vanilla JavaScript.
- You can write HTML, CSS and JavaScript in a single-file component, and there will be no global CSS pollution.
Yes, React was revolutionary. However, the virtual DOM synchronization problems we have to deal with, and the extra overhead of even very small operations, are the other side of the coin.

## Svelte Configuration with Webpack from Scratch
### Step - 1: Configuring development environment
(Note: if you have already installed Node, you can skip this part)
We will use Node for the development environment. Therefore, we need to install Node and its package manager, npm. To prevent potential dependency problems, we will create a clean Node environment. I will use NVM, the Node version manager, which allows us to create isolated Node environments.
**Setup Node Environment with NVM**
In your terminal, run the code below.
``` bash
# install node version manager
wget -qO- https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.3/install.sh | bash
# check installation
command -v nvm
# should prints nvm, if it doesn't, restart your terminal
```
``` bash
# install node
# "node" is an alias for the latest version
nvm install node
# use the installed version
nvm use node
```
Now we can create the frontend directory in the Django project. Go to the root directory of the project: **backend/**.
In your terminal, copy and paste the code below.
``` bash
# create frontend directory
mkdir FRONTEND
cd FRONTEND
# now your terminal directory should be
# backend/FRONTEND
# create a node project
npm init
# you may fill the rest
```
Now we can install the front-end and development libraries.
``` bash
# install svelte and other libs
npm install --save-dev svelte serve cross-env graphql-svelte
# install webpack and related libs
npm install --save-dev webpack webpack-cli webpack-dev-server
# install webpack loaders and plugins
npm install --save-dev style-loader css-loader svelte-loader mini-css-extract-plugin
npm install --save node-fetch svelte-routing
```
Update the **package.json** scripts section as below. Your file should look like this (ignore the versions).
``` json
{
"name": "django-svelte-template",
"description": "Django Svelte template. ",
"main": "index.js",
"scripts": {
"build": "cross-env NODE_ENV=production webpack",
"dev": "webpack-dev-server --content-base ../templates"
},
"devDependencies": {
"cross-env": "^7.0.2",
"css-loader": "^3.5.3",
"graphql-svelte": "^1.1.9",
"mini-css-extract-plugin": "^0.9.0",
"serve": "^11.3.1",
"style-loader": "^1.2.1",
"svelte": "^3.22.3",
"svelte-loader": "^2.13.6",
"webpack": "^4.43.0",
"webpack-cli": "^3.3.11",
"webpack-dev-server": "^3.11.0"
},
"dependencies": {
"node-fetch": "^2.6.0",
"svelte-routing": "^1.4.2"
}
}
```
Let's create the necessary files and folders for the Svelte application. Open your terminal in the root directory of the project, **backend/**.
``` bash
# create HTML file of the project
cd templates
touch index.html
# change directory to backend/FRONTEND
cd ../FRONTEND
mkdir src
touch index.js
touch webpack.config.js
# change directory to backend/FRONTEND/src
cd src
touch App.svelte
touch MovieList.svelte
touch MoviePage.svelte
touch api.js
```
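The `templates/index.html` created above is the page that loads the bundle. A minimal sketch; the paths match the Webpack output configured in the next step, so adjust them if your static URL setup differs:

``` html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <title>Movies</title>
    <!-- emitted only by the production build (MiniCssExtractPlugin) -->
    <link rel="stylesheet" href="/static/bundle.css" />
  </head>
  <body>
    <!-- the Svelte app mounts on document.body (see index.js) -->
    <script src="/static/js/bundle.js"></script>
  </body>
</html>
```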
### Step 2 - Webpack configuration
**What is Webpack?**
Webpack is a module bundler and a task runner. We will bundle our entire JavaScript application, including the CSS styling, into two files (if you prefer, you can output only one). Thanks to its rich plugin ecosystem, you can also do many other things with Webpack, such as compressing with different algorithms, eliminating unused CSS, extracting your CSS into separate files, or uploading your bundle to a cloud provider like S3.
I made two different Webpack configurations in one file: one for the development environment and one for the production environment. Also note that these configurations are not optimized.
Copy and paste the following code into the **webpack.config.js** file.
``` javascript
const MiniCssExtractPlugin = require('mini-css-extract-plugin');
const path = require('path');
const mode = process.env.NODE_ENV || 'development';
const isEnvProduction = mode === 'production';
const productionSettings = {
mode,
entry: {
bundle: ['./index.js']
},
resolve: {
alias: {
svelte: path.resolve('node_modules', 'svelte')
},
extensions: ['.mjs', '.js', '.svelte'],
mainFields: ['svelte', 'browser', 'module', 'main']
},
output: {
path: path.resolve(__dirname, '../static'),
filename: 'js/[name].js',
chunkFilename: 'js/[name].[id].js'
},
optimization: {
minimize: true,
runtimeChunk: false,
},
module: {
rules: [
{
test: /\.svelte$/,
use: {
loader: 'svelte-loader',
options: {
emitCss: true,
hotReload: true
}
}
},
{
test: /\.css$/,
use: [
/**
* MiniCssExtractPlugin doesn't support HMR.
* For developing, use 'style-loader' instead.
* */
MiniCssExtractPlugin.loader,
'css-loader'
]
}
]
},
devtool: false,
plugins: [
new MiniCssExtractPlugin({filename: '[name].css'})
],
};
const devSettings = {
mode,
entry: {
bundle: ['./index.js']
},
resolve: {
alias: {
svelte: path.resolve('node_modules', 'svelte')
},
extensions: ['.mjs', '.js', '.svelte'],
mainFields: ['svelte', 'browser', 'module', 'main']
},
output: {
publicPath: "/",
filename: 'static/js/bundle.js',
chunkFilename: 'static/js/[name].chunk.js',
},
devtool: 'source-map',
devServer: {
historyApiFallback: true,
stats: 'minimal',
},
module: {
rules: [
{
test: /\.svelte$/,
use: {
loader: 'svelte-loader',
options: {
emitCss: true,
hotReload: true
}
}
},
{
test: /\.css$/,
use: [
/**
* MiniCssExtractPlugin doesn't support HMR.
* For developing, use 'style-loader' instead.
* */
'style-loader',
'css-loader'
]
}
]
},
plugins: [
],
}
module.exports = isEnvProduction ? productionSettings : devSettings;
```
### Step 3 - Create a Single-Page App with Svelte
First, fill **backend/FRONTEND/index.js**.
``` javascript
import App from './src/App.svelte';
const app = new App({
target: document.body,
});
window.app = app;
export default app;
```
Next, fill the 'App.svelte' file with proper logic.
``` html
<!-- App.svelte -->
<script>
import { Router, Link, Route } from "svelte-routing";
import MovieList from "./MovieList.svelte";
import MoviePage from "./MoviePage.svelte";
export let url = "";
</script>
<Router url="{url}">
<nav class="navbar">
<Link to="/">Home</Link>
</nav>
<div class="main-container">
<Route path="movie/:slug" component="{MoviePage}" />
<Route path="/"><MovieList /></Route>
</div>
</Router>
<style>
.navbar {
background-color:rgba(0,0,0,0.6);
display: flex;
padding: 16px 64px;
font-weight: bold;
color:white;
}
.main-container {
margin-top:32px;
display:flex;
justify-content: center;
align-items: center;
background-color: rgba(0,0,0, 0.15);
}
</style>
```
Before routing pages, I will first write the client-side queries. Please open api.js and copy/paste the code below.
``` javascript
import { GraphQLProvider, reportCacheErrors } from "graphql-svelte";
const client = GraphQLProvider({
url: 'http://127.0.0.1:8000/graphql',
headers: () => ({
"content-type": "application/json",
Accept: 'application/json'
})
})
client.graphql.on('cache', reportCacheErrors)
// our first query will requests all movies
// with only given fields
// note the usage of string literals (`)
export const MOVIE_LIST_QUERY = `
query movieList{
movieList{
name, posterUrl, slug
}
}
`
// Note the usage of argument.
// the exclamation mark makes the slug argument as required
// without it , argument will be optional
export const MOVIE_QUERY = `
query movie($slug:String!){
movie(slug:$slug){
id, name, year, summary, posterUrl, slug
}
}
`
// This is generic query function
// We will use this with one of the above queries and
// variables if needed
export async function get(query, variables = null) {
const response = await client.get({ query , variables })
console.log("response", response);
return response
}
```
Now, the routed pages: MovieList.svelte will be shown on the homepage, as we defined above. If the user clicks a movie card, the MoviePage.svelte file will be rendered.
Fill the MovieList.svelte.
``` html
<script>
import { Router, Link, Route } from "svelte-routing";
import { get, MOVIE_QUERY, MOVIE_LIST_QUERY } from "./api.js";
var movielist = get(MOVIE_LIST_QUERY);
</script>
<div class="wrapper">
<!-- promise is pending -->
{#await movielist}
loading
<!-- promise was fulfilled -->
{:then response}
{#if response.data.movieList.length > 0}
{#each response.data.movieList as movie}
<div class="card">
<Link to={`/movie/${movie.slug}`}>
<img class="poster" alt={movie.name} src={movie.posterUrl} />
<p class="movie-title">{movie.name}</p>
</Link>
</div>
{/each}
{/if}
<!-- promise was rejected -->
{:catch error}
<p>Something went wrong: {error.message}</p>
{/await}
</div>
<style>
.wrapper {
width:100%;
height: auto;
display:flex;
flex-direction: row;
flex-wrap: wrap;
}
.card {
box-sizing: border-box;
position: relative;
width:200px;
height:auto;
margin:16px;
border-radius: 8px;
overflow: hidden;
box-shadow: 0 4px 4px rgba(0,0,0,0.25);
}
.poster {
width:100%;
height:auto;
cursor: pointer;
}
.movie-title {
padding:4px 8px;
font-weight: bold;
text-decoration: none;
cursor: pointer;
}
</style>
```
Also fill MoviePage.svelte according to this.
``` html
<script>
import { Router, Link, Route } from "svelte-routing";
import { get, MOVIE_QUERY } from "./api.js";
// acquired from dynamic route part => /movie/:slug
export let slug;
const moviedata = get(MOVIE_QUERY, {slug})
</script>
<div class="wrapper">
<!-- promise is pending -->
{#await moviedata}
<p>Movie {slug} is loading</p>
<!-- promise was fulfilled -->
{:then moviedata}
{#if moviedata.data}
<div class="movie-container">
<img
src={moviedata.data.movie.posterUrl}
alt={`${moviedata.data.movie.name} poster`}
class="movie-poster"
/>
<div class="text-box">
<h1 class="movie-title">{moviedata.data.movie.name}</h1>
<p class="movie-description">{moviedata.data.movie.summary}</p>
</div>
</div>
{/if}
<!-- promise was rejected -->
{:catch error}
<p>Something went wrong: {error.message}</p>
{/await}
</div>
<style>
.wrapper {
width:100%;
height: auto;
display:flex;
flex-direction: column;
justify-content: center;
align-items: center;
}
.movie-container {
display: flex;
flex-wrap: wrap;
max-width:500px;
}
.movie-poster {
width:250px;
height:auto;
}
.text-box {
display: flex;
flex-direction: column;
}
</style>
```
----------

## Start Svelte App in Development Environment
In the development environment, we will run two different servers. When our Svelte app is running, it requests data from the Django server. After the response arrives, the Webpack development server renders the page with the proper data. This setup is only for the development stage.
When we finish the front-end development, we will build and bundle the client-side app. Then we will start the Django server, and it will be the only server we use in the production environment, as promised before.
Go to the root folder of the Django project: **backend/**.
Execute the command below to make the Django server ready for front-end requests.
``` bash
# execute it on the root folder of Django 'backend/'
python manage.py runserver
```
Open another terminal and change the directory to the '**backend/FRONTEND**'
``` bash
# On another terminal
npm run dev
```
When the Svelte app has compiled successfully, open 'localhost:8080/' in your browser.
You should see a screen similar to the image below.

MovieList.svelte will render the screen

MoviePage.svelte screen will render this if the user clicks any movie card
**What happens at this moment?**
At this moment, the **"/"** root page is rendered. Because of our routing configuration, the MovieList.svelte file is rendered first. If the user clicks a film card, the MoviePage.svelte file is rendered based on its slug value.
_**We successfully integrated Django and Svelte. Now let's make the production build.**_
----------
## Django and Svelte Integration in Production Environment
Now you can stop the Webpack server while **keeping the Django server alive**.
In the backend/FRONTEND/ directory, execute the command below.
```
npm run build
```
This will build and bundle your whole Svelte app into the bundle.js file. When the bundling process is over, go to the Django server's URL in your browser --> "127.0.0.1:8000/"
You should see the same screens as above. Also note the static folder, which now contains new files coming from the webpack bundling.
FINISHED
[This is the code repo of all three parts.](https://github.com/canburaks/django-and-modern-js-libraries)
(Note:This article is originally published on [cbsofyalioglu.com](https://www.cbsofyalioglu.com/post/django-and-modern-js-libraries-svelte) while building the websites of [Istanbul Airport Transfer](https://istanbultransferexpert.con), [Istanbul Cruise Port Transfer](https://istanbulcruisetransfer.com) and [Istanbul Travel Guide](https://istanbull.org)) | canburaks |
352,491 | What documentation should I ask before I quit from a fully remote job? | A post by padaki-pavan | 0 | 2020-06-10T10:12:53 | https://dev.to/padakipavan/what-documentation-should-i-ask-before-i-quit-from-a-fully-remote-job-39cn | discuss, career | padakipavan | |
352,517 | Globally accessible CSS and SCSS in your Nuxt component files | Introduction When building an App in Nuxt, it's likely you may choose to take advantage of... | 0 | 2020-06-10T11:24:16 | https://medium.com/@wearethreebears/globally-accessible-css-and-scss-sass-in-your-nuxt-component-files-7c1c012d31bd | vue, css, javascript, tutorial | # Introduction
When building an App in Nuxt, it's likely you may choose to take advantage of the style tag within your single file components. The style tag in single file components allows you to keep all of your component-specific styles together with your component's template markup and scripts.
# Nuxt styling out of the box
Out of the box Nuxt allows us to work with CSS in single file components and gives us a couple of options for working with those styles, `global`, `unscoped` and `scoped`.
## Global CSS
If you come from a more traditional CSS background, `global` CSS will be most familiar to you. `global` CSS allows you to import CSS for use throughout your entire App. While in Nuxt/Vue it's common practice to write the majority of styles at component level, it can be useful in certain circumstances to have CSS that is available throughout. A prime example would be a grid framework: if your project is using a grid framework like [Bootstrap grid](https://getbootstrap.com/docs/4.0/layout/grid/) or [Honeycomb](https://honeycomb.wearethreebears.co.uk/), you will only want to import that CSS once, and you'll want it available throughout your Application. To import `global` CSS, open your `nuxt.config.js` file and navigate to the `css` array; here you can add any global CSS. For example, if you have grid styles in `assets/css/my-grid.css`, you can add them to your global CSS array like so:
```
css: [
'@/assets/css/my-grid.css'
]
```
## Unscoped CSS
The use of `unscoped` CSS is similar to `global` CSS. `unscoped` styles, like `global` styles, will affect the entire project. However, unlike `global` CSS, `unscoped` CSS is at component level, so it will only be loaded if the component is present on the page. To use `unscoped` CSS in your components, simply add the following tags:
```
<style>
/* global styles */
</style>
```
## Scoped CSS
If you come from a more traditional CSS background, `scoped` CSS may not be so familiar. The idea of `scoped` CSS was floated a number of years ago; however, it was later deprecated, removed from HTML5, and is not supported by any major browser. In Nuxt that is not the case: Vue, which Nuxt is built on top of, supports `scoped` styles within single file components. The purpose of `scoped` styles is that they will only affect the component in which the styles have been specified. This can be hugely advantageous when naming styles because you no longer have to worry about class names clashing with and overriding styles within other components in your project. To use `scoped` CSS in your single file components, add the scoped attribute to your style tags:
```
<style scoped>
/* local styles */
</style>
```
## What about Scoped and Global CSS together?
In some, mostly rare, situations you may feel the need to use both `scoped` and `unscoped` CSS together in a single component. Thankfully Vue, and in turn Nuxt, makes it possible for you to add both. This is particularly helpful in components where you may be pulling in HTML markup from a CMS which you'd like to style, while keeping the rest of the component scoped:
```
<style>
/* global styles */
</style>
<style scoped>
/* local styles */
</style>
```
# SCSS in Nuxt
Nuxt/Vue doesn't come with SCSS or SASS support by default; however, getting started with SCSS or SASS in Nuxt/Vue is as simple as adding a dependency and a `lang` attribute to your `style` tags. To install the dependency, open the root of your Nuxt project in your console window and run the following command:
```
npm install --save-dev node-sass sass-loader
```
Once the dependency is installed, you'll be able to add SCSS/SASS support to your single file components. To add it, open your desired component, add the `lang` attribute, and set its value to your preferred option: `scss` or `sass`. The `lang` attribute can also be used in conjunction with `scoped`, for example:
```
<style lang="scss" scoped>
/* local styles */
</style>
```
# Dealing with common imports
It's not uncommon when writing styles for your web application to have a single source of variables, for example color variables. When writing styles within Single file components, by default this would involve `importing` those variables into every component that needs access to them. However, we can resolve this by taking advantage of the Nuxt Style Resource module. To install the Nuxt Style Resource module, navigate to the root of your Nuxt project in your console and run the following command:
```
npm install --save-dev @nuxtjs/style-resources
```
Upon completing the installation, open your `nuxt.config.js` file and add `@nuxtjs/style-resources` to your modules:
```
modules: [
'@nuxtjs/style-resources',
]
```
You can then add your Style Resources to your `nuxt.config.js` file. For example, if you'd like access to a variables file from `assets/scss/variables.scss` throughout your app, you could add:
```
styleResources: {
scss: [
'~/assets/scss/variables.scss',
]
}
```
Your variables will now be available in all of your components, without the need to `import` them in every file.
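For example, if your `assets/scss/variables.scss` contained something like the variables below (the names and values here are made up for illustration), every component could reference them in its `<style lang="scss">` block without a single `@import`:

```scss
// assets/scss/variables.scss — hypothetical example variables
$brand-color: #3b8070;
$base-spacing: 8px;
```

A component could then use `$brand-color` directly in its scoped styles, and the Style Resources module would inject the definitions at build time.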
__Note:__ Do not import actual styles. Use this module only to import variables, mixins, functions (et cetera) as they won't exist in the actual build. Importing actual styles will include them in every component and will also make your build/HMR magnitudes slower.
If you’ve found this article useful, please follow me on [Medium](https://medium.com/@wearethreebears), Dev.to and/ or [Twitter](https://www.twitter.com/wearethreebears). | wearethreebears |
352,541 | Best Free and Open-Source Icons & Icon Packs of 2023 | Icon sets are a must-have for all web designers. With UI design moving forward and changing... | 0 | 2020-06-10T13:02:29 | https://dev.to/icons/free-icons-jna | icons, html, css, webdev | Icon sets are a must-have for all web designers. With UI design moving forward and changing continually, it can be quite arduous to stay updated with all the new changes in the area of icon design. 2023 has not seen any drastic or radical trend changes. Yet, it is clear that icons have become much more refined, simpler, and more specialized by blending the necessary minimal features with the existing popular icon styles.
An icon pack is a collection of icons used to enhance the visual appeal of an interface. These icons are typically used in software development, web design, and graphic design to provide visual cues and aid users in navigating a digital environment.
Free & open-source icon packs are incredibly useful for developers and designers, as they don't cost anything and can be tailored to the specific requirements of an individual project. They're a great asset to have in your arsenal.
Icon packs are really useful and add an appealing effect to any project. Options like Material Design Icons by Google, Lineicons by Lineicons Team, Ionicons by Ionic Framework, Feather by Cole Bemis and Octicons by GitHub come with a huge selection of icons and they're free, open source and easy to customize.
In this article, we share with you the top 20 best free and open source icon packs in 2023 that you can consider using for your web development and designing needs. We have carefully sorted out the best 20 to ensure that you do not have to go anywhere else and can simply rely on this list for the best free icons.
So, without further ado, let us begin!
###[Lineicons](https://lineicons.com/)

Lineicons can easily be considered one of the best free icon packs 2023 has to offer. It provides developers and designers with more than 5700+ line icons that are handcrafted for modern user interfaces. This includes Android, iOS, web, as well as desktop app projects. This free icon pack is simple yet complete, packed with all essential icons from 40+ different categories, 2 weight variations, scalable file formats (SVG, Web Font, React, etc.), high legibility, a free CDN, and much more.
The best thing about Lineicons is that it offers a powerful icon editor, a free CDN, and 596 unique icons absolutely free!
####[Explore and Download](https://lineicons.com/icons)
---
###[Feather Icons](https://feathericons.com/)

Feather offers beautiful open source icons. You can get started on the website in no time at all and download them with just a few clicks. It even allows you to customize the icons as per your desires according to the size and stroke width.
###[Material Icons](https://material.io/resources/icons/)

Material Icons is another wonderful choice you can go for. These are beautiful and delightfully crafted symbols for common items and actions. You can easily download these free icons on your desktop for use in your digital projects for iOS, Android, and Web. For web projects, you can make use of our easy-to-use web font icon.
###[Material Design Icons](https://materialdesignicons.com/)

Material design icons is a site that has an ever-growing collection of icons that allows developers and designers to target a number of platforms to download various icons in the size, color, and format that they require for all kinds of projects.
###[CSS Icons](https://css.gg/app)

CSS Icons offer a plethora of free icons that you can select from based on your needs and personal preference. They are also completely free to download, and you can use them easily for your online development projects. What’s more, you can choose from a number of categories to make things easy.
###[CoreUI Icons](https://coreui.io/icons/)

You can also go for Core UI Icons, which is a free premium-designed set of icons. It comes with marks in Webfont, Raster, and SVG formats. It offers beautifully crafted symbols that can be used for common actions and items. You can use these easily for your web or mobile app development projects.
###[IconMonstr](https://iconmonstr.com/)

iconmonstr is another wonderful source of free icons in 2023. It offers bold/filled heavyweight icons as well as thin, lightweight ones, so you can choose the style you like best according to your needs. On this free icon site, you can find more than 4496 free icons to use across 313 collections.
###[Ionic Icons](https://ionic.io/ionicons/)

Ionicons, the open source icon library from Ionic, has a huge selection of hand-crafted designs to choose from. Download what you need, or customize the icons to fit your project's style and theme.
###[Hero Icons](https://heroicons.com/)

Hero Icons are stunningly beautiful, hand-crafted SVG icons for your next project. These icons are made with love by the makers of Tailwind CSS and are perfect for any website or app.
###[Jam Icons](https://jam-icons.com/)

Jam icons are extremely popular among website developers and designers. It offers 896 free icons that you can use with a quick download. There are a number of categories that you can choose from according to your needs and preferences. The support team of this free icon site is also great in case you need any assistance.
###[Orion Icons](https://orioniconlibrary.com/)

Orion icon library is considered to be one of the best icon tools in the online world today. It offers 6014 free vector icons for SVG. All the icons offered by this site are advanced and interactive for professional-looking web projects.
###[Shape](https://shape.so/)

Shape offers a whopping 4300+ free icons and illustrations that you can easily download and start using. They are animated and customizable icons, as well as illustrations that are exportable to code. This means that you can easily customize the colors, borders, and styles of animated and static icons with ease.
###[Simple Icons](https://simpleicons.org/)

Extensive collection of 2419 free SVG icons from leading companies and brands. Editing, customizing and downloading is easy and you can use them for web, mobile or print projects.
###[Pixel Love](https://www.pixellove.com/)

Pixellove is a much loved free icon site that is widely used by developers and designers. It offers a staggering 15000 icons that come in 6 distinct styles. You can easily view the icons you like on the website and download the ones you fancy in a quick and hassle-free manner.
###[Iconic](https://useiconic.com/open)

Iconic is one of the best free icon sites in 2023. It offers smart icons in three unique sizes. Moreover, you can choose from multi-color themes and enjoy this open-source icon set that comes with 223 marks in Webfont, raster, and SVG format. It is also foundation-ready and bootstrap-ready to be used with your favorite frameworks.
###[Tabler Icons](https://tablericons.com/)

Get high-quality, customizable SVG icons for your projects. Tabler Icons contains 1424 open source icons, so you can easily find the perfect icon for your project. Highly customizable and easy to use, these SVG icons will give your project a professional look.
###[Unicons](https://iconscout.com/unicons)

You can also opt for Unicons. It offers an extensive icon library with more than 2200 icons for you to use. You can choose from a total of 27 categories, as well as multiple styles. In short, you will be spoilt for choice.
###[Icon Sweets](https://designbombs.com/iconsweets2/)

Iconsweets 2 provides users with the perfect icons for all kinds of design work. You can enjoy a huge set of custom-designed icons and use them for your iOS, Android, and web app projects without any problem.
###[CustIcon](https://custicon.com/)

2000+ free icons for use in web, iOS, Android, and desktop apps. Support for SVG. License: MIT, Free for commercial or personal use.
###[Devicons](https://devicon.dev/)

Devicon is a collection of high-quality icons representing programming languages, designing & development tools. It contains logos from popular tools like HTML5, CSS3, JavaScript, React and more. Get the perfect icon for your project with Devicon today.
##Bottom Line
And there you go! These were the best free icons that 2023 has to offer you. You can choose any of these free icon sites for an impressive website development project. All of them are definitely well worth your consideration and attention!
352,547 | 6 Ways to Loop Through an Array in JavaScript | Dealing with arrays is everyday work for every developer. In this article, we are going to see 6 diff... | 0 | 2020-06-10T12:37:29 | https://www.codespot.org/ways-to-loop-through-an-array-in-javascript/ | javascript, beginners, tutorial, webdev | Dealing with arrays is everyday work for every developer. In this article, we are going to see 6 different approaches to how you can iterate through in Javascript.
[**Continue reading...**](https://www.codespot.org/ways-to-loop-through-an-array-in-javascript/) | vasilevskialeks |
352,552 | Random Color Generator Expo App | Table of Content Introduction Getting Setup App Overview Making Navigation Screen State m... | 0 | 2020-06-10T12:51:26 | https://dev.to/utkarshyadav/random-color-generator-expo-app-5g4b | reactnative, react, javascript, expo | # **Table of Content**
- Introduction
- Getting Setup
- App Overview
- Making Navigation Screen
- State management {useState}
- Making App Screen(Simple Color Generating function)
- Ready to Roll 🥳
### **Introduction**
Expo is a framework for React applications. Developers can easily build mobile applications for both the iOS and Android platforms, and can develop, build, and deploy apps quickly. The best part about React Native is that it gives a native look to our mobile/web application from the same JavaScript and TypeScript codebase.
### **Getting Setup**
I am assuming that you already have `NODE.JS` installed on your machine.
>Install from here if Not! 👉 [Node](https://nodejs.org)
**setting-up Expo** :
```javascript
npm install -g expo-cli
expo init Random-color-generator
```
### **App Overview**
<img src="https://dev-to-uploads.s3.amazonaws.com/i/bi4ycpcadl8mnvem1byq.jpeg" height="300" width="200"/>
- By clicking the `Add Color` button, we should be able to create blocks of different colors. 🌈
### **Making Navigation Screen**
Make Sure that you have following dependencies installed.
- react-navigation
- react-navigation-stack
```javascript
npm i react-navigation react-navigation-stack
```
> **For navigation Screen Copy the following code and paste inside your `App.js` File.**
```javascript
import { createAppContainer } from 'react-navigation'; // calling createAppContainer from react-navigation
import { createStackNavigator } from 'react-navigation-stack';
import HomeScreen from "./src/screens/HomeScreen"; //importing both screens to the main--> APP.js
import ColorScreen from './src/screens/ColorScreen';
const navigator = createStackNavigator(
{
Home: HomeScreen, //Stacking HomeScreen
randC: ColorScreen //Stacking ColorScreen i.e our main Application
},
{
initialRouteName: "Home", //The Priority Route to be displayed first
defaultNavigationOptions: {
title: "App" //Title of the header is APP
}
}
);
export default createAppContainer(navigator); //exporting default navigator
```
Now you have made `App.js`. Next we need to make the screens between which we are navigating.
- **HomeScreen** (`FileName: HomeScreen.js`)
- **ColorScreen** (`FileName: ColorScreen.js`)
> Disclaimer : Remember that the File Structure will go like this...
```
|---src
|---screen
|---HomeScreen.js
|---ColorScreen.js
```
#### ***HomeScreen.js***
```javascript
import React from "react";
import { Text, StyleSheet, View, Button } from "react-native";
const HomeScreen = ({navigation}) => {
return (
<View>
<Text style={styles.text}>HomeScreen</Text>
<Button
onPress={() => navigation.navigate('randC')}
title="Color screen Demo" />
</View>
);};
const styles = StyleSheet.create({
text: {
fontSize: 30,
alignItems: 'center',
justifyContent: 'center'
}
});
export default HomeScreen;
```
### **State management {useState}**
let's understand it via example.
```javascript
const [count,setCount] = useState(0);
```
This means that the initial value of `count` is `0`, and `setCount` is the function we call to update it.
Hooks are functions that let you “hook into” React state and lifecycle features from function components. React uses an observable object as the state that observes what changes are made to the state and helps the component behave accordingly.
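To see what is going on without React, here is a toy sketch of the same pattern in plain JavaScript. This is not React's real implementation; it only illustrates the idea of a value held in a closure with a setter to update it:

```javascript
// Toy useState-like helper: the closure keeps the current value,
// and the returned setter swaps it out.
function makeState(initialValue) {
  let value = initialValue;
  const getValue = () => value;
  const setValue = (next) => { value = next; };
  return [getValue, setValue];
}

const [getCount, setCount] = makeState(0); // like: useState(0)
console.log(getCount()); // 0  <- the initial value
setCount(getCount() + 1);
console.log(getCount()); // 1  <- state after the update
```

In React, calling the setter also triggers a re-render of the component, which this sketch does not model.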
### **Making App Screen** (Color Generating Function Implemented)
```javascript
import React, { useState } from "react";
import { View,Text,StyleSheet,Button,FlatList } from "react-native";
const ColorScreen = (props) => {
const [color, setColor] = useState([]); //UseState Hook
return (
<View>
<Button title="Add a Color" onPress={()=> {
setColor([...color,randomRGB()]) //Change Of state
}} />
<FlatList //Making FlatList
keyExtractor={(item)=>item}
data={color}
renderItem={({item}) =>{
return <View style={{ height:100, width:100, backgroundColor: item }} />
}}
/>
</View>
)}
const randomRGB = () => { //Color Generation Function
const red = Math.floor(Math.random()*256);
const green = Math.floor(Math.random()*256);
const blue = Math.floor(Math.random()*256);
return `rgb(${red}, ${green}, ${blue})`;
}
export default ColorScreen; //Exporting the Screen for App.js file
const styles = StyleSheet.create({ //Defining StyleSheet
container: {
flex: 1,
alignItems: 'center',
justifyContent: 'center'
}
});
```
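Since `randomRGB` is plain JavaScript, you can sanity-check it outside of React Native. The function below is the same one from `ColorScreen.js`, just run standalone:

```javascript
// Same color-generation helper as in ColorScreen.js.
const randomRGB = () => {
  const red = Math.floor(Math.random() * 256);
  const green = Math.floor(Math.random() * 256);
  const blue = Math.floor(Math.random() * 256);
  return `rgb(${red}, ${green}, ${blue})`;
};

const color = randomRGB();
console.log(color); // e.g. "rgb(12, 200, 73)" (random each run)
```

Every call returns a string in the `rgb(r, g, b)` format that React Native accepts as a `backgroundColor` value, with each channel between 0 and 255.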
### **Ready to Roll** 🎉
Now we are done with our application. Time to see the application rolling.
```Javascript
expo start //This will start expo tunnel
```
- Scan the QR code and run the application on a real device.
### **ScreenShots**
- The screenshots shown are from my `iPhone`.
- You can also use an `Android` app. No worries, React Native is there for you.
> Disclaimer : Kindly install the Expo client application on your devices.
<p align="center">
<img src="https://dev-to-uploads.s3.amazonaws.com/i/z15hjgzxpq1i9kipfeoj.jpeg" height="300" width="200"/>
<img src="https://dev-to-uploads.s3.amazonaws.com/i/bi4ycpcadl8mnvem1byq.jpeg" height="300" width="200"/>
</p>
- Please star it. That will make me happy. ⭐===😍
- Fork Repository : HERE 👇
{% github Uyadav207/Expo-React-Native %}
Thanks for Reading!
Happy Coding !
{% user uyadav207 %} | utkarshyadav |
353,846 | Good sites satta King and play bazaar | Online Games Chart 2019 Today Gali, Desawar, Gaziabad, Faridabad Delhi Bazar Matka Games http://www.p... | 0 | 2020-06-12T11:47:11 | https://dev.to/sattaking003/good-sites-satta-king-and-play-bazaar-261d | sattaking, playbazaar | Online Games Chart 2019 Today Gali, Desawar, Gaziabad, Faridabad Delhi Bazar Matka Games http://www.playbazzar.xyz/ | sattaking003 |
354,761 | How to Add Subscription Based Throttling to a Django API | Extending the Django Rest Framework to throttle API requests based on user-specific limits | 0 | 2020-06-14T15:55:03 | https://dev.to/mattschwartz/how-to-add-subscription-based-throttling-to-a-django-api-28j0 | python, django, saas, api | ---
title: How to Add Subscription Based Throttling to a Django API
published: true
description: Extending the Django Rest Framework to throttle API requests based on user-specific limits
tags: #python #django #saas #api
//cover_image: https://direct_url_to_image.jpg
---
Python was a natural choice when I started [SocialSentiment.io](https://socialsentiment.io). It let me use the same language for both the machine learning algorithms and web development. And I had used [Django](https://www.djangoproject.com/) previously for other projects. The [Django Rest Framework](https://www.django-rest-framework.org/) (DRF) is a great package to quickly and easily extend a Django project to offer APIs. Today we'll look at how to extend its capabilities to support custom throttling based on user subscriptions.
## Subscription Model
First let's define our application's [subscription model](https://socialsentiment.io/plans/) and throttling requirements:
- A free tier allowing a few hundred API requests per day
- A low cost paid tier offering a few thousand requests per day
- A higher cost tier offering unlimited requests
- All tiers limited to 5 requests per second
This is a very common use case for a modern SaaS application.
## Custom Throttling Class
One great thing about Django Rest Framework is it includes many built-in options for authentication and throttling. Each can be applied globally or to specific endpoints. If you desire any type of dynamic throttling options you'll need to extend it. Fortunately the architecture of DRF lets you override just about any part of it.
Let's start by writing a custom class that overrides DRF's `UserRateThrottle`:
```python
from rest_framework.throttling import UserRateThrottle
class SubscriptionRateThrottle(UserRateThrottle):
# Define a custom scope name to be referenced by DRF in settings.py
scope = "subscription"
def __init__(self):
super().__init__()
def allow_request(self, request, view):
"""
Override rest_framework.throttling.SimpleRateThrottle.allow_request
Check to see if the request should be throttled.
On success calls `throttle_success`.
On failure calls `throttle_failure`.
"""
if request.user.is_staff:
# No throttling
return True
if request.user.is_authenticated:
user_daily_limit = get_user_daily_limit(request.user)
if user_daily_limit:
# Override the default from settings.py
self.duration = 86400
self.num_requests = user_daily_limit
else:
# No limit == unlimited plan
return True
# Original logic from the parent method...
if self.rate is None:
return True
self.key = self.get_cache_key(request, view)
if self.key is None:
return True
self.history = self.cache.get(self.key, [])
self.now = self.timer()
# Drop any requests from the history which have now passed the
# throttle duration
while self.history and self.history[-1] <= self.now - self.duration:
self.history.pop()
if len(self.history) >= self.num_requests:
return self.throttle_failure()
return self.throttle_success()
```
What we're doing is dynamically looking up the user-specific throttle at the key moment to override the default that DRF picks up from your settings file. Define a method `get_user_daily_limit` to look up the value. I highly recommend using Django's cache methods if this is stored in a database, for performance.
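As a rough illustration only, `get_user_daily_limit` might look something like the sketch below. The plan names and limits are hypothetical, and a plain dict stands in for Django's cache framework and the `Subscription` database lookup:

```python
# Hypothetical plan limits; None means the unlimited tier.
PLAN_DAILY_LIMITS = {"free": 200, "basic": 5000, "unlimited": None}

# Stand-in for Django's cache (use django.core.cache.cache in a real project).
_limit_cache = {}

def get_user_daily_limit(user):
    """Return the user's daily request limit, or None for unlimited."""
    cache_key = f"daily-limit:{user['id']}"
    if cache_key not in _limit_cache:
        # In real code this would be a Subscription model lookup (a DB query),
        # which is why caching the result matters for performance.
        plan = user.get("plan", "free")
        _limit_cache[cache_key] = PLAN_DAILY_LIMITS.get(plan, 200)
    return _limit_cache[cache_key]

print(get_user_daily_limit({"id": 1, "plan": "basic"}))      # 5000
print(get_user_daily_limit({"id": 2, "plan": "unlimited"}))  # None
```

Returning `None` for the unlimited tier lines up with the throttle class above, which skips throttling entirely when no limit is found.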
## Settings
Next let's see what's required in `settings.py`:
```python
REST_FRAMEWORK = {
'DEFAULT_AUTHENTICATION_CLASSES': [...],
'DEFAULT_PERMISSION_CLASSES': [
'rest_framework.permissions.IsAuthenticated'
],
'DEFAULT_THROTTLE_CLASSES': [
'rest_framework.throttling.UserRateThrottle',
'app.throttling.SubscriptionRateThrottle'
],
'DEFAULT_THROTTLE_RATES': {
'user': '5/second',
'subscription': '200/day'
}
}
```
Here we set up two types of throttling. The built-in `UserRateThrottle` will handle the global 5 requests per second limit. It finds that setting in `DEFAULT_THROTTLE_RATES` with key `user`. Our custom throttle class is also enabled and defaults to the `subscription` value if a user subscription isn't found. Of course the application should be written so this never happens, but it's good to have a fallback plan if a user isn't configured properly.
## Subscriptions
How you code and model your subscriptions is up to you. In my case I wrote static classes that define the details of each subscription tier. A `Subscription` model links the user to a specific plan with details such as start time, payment details, etc.
The nice thing is Django and DRF don't dictate how you design your user subscriptions. Any way you choose to model it they'll handle because you can customize every aspect of authorization and throttling.
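For illustration, the static-class approach mentioned above might look something like this. The tier names and limits below are made up; your own plans will differ:

```python
# One possible shape for static subscription tier classes.
# Names and limits are illustrative only.
class FreeTier:
    name = "free"
    daily_request_limit = 200

class ProTier:
    name = "pro"
    daily_request_limit = 5000

class UnlimitedTier:
    name = "unlimited"
    daily_request_limit = None  # None == no daily cap

# A Subscription row would store the tier name; resolving it is a dict lookup.
TIERS = {tier.name: tier for tier in (FreeTier, ProTier, UnlimitedTier)}

print(TIERS["pro"].daily_request_limit)        # 5000
print(TIERS["unlimited"].daily_request_limit)  # None
```

Keeping the tier details in code (rather than the database) makes them trivial to look up from the throttle class, while the `Subscription` model only stores which tier a user is on.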
## Conclusion
So far I only have good things to say about the flexibility of Django and DRF and the customizations they allow. They took the right approach in offering a wide variety of built-in capabilities while allowing developers the opportunity to easily extend or override them. It's been working great for SocialSentiment.io and [our APIs](https://socialsentiment.io/api/v1/getting-started/). I'd like to hear how others have added their own features to Django Rest Framework in the comments below. | mattschwartz |
354,844 | The only productivity advice you need: The two-day rule
| Productivity craze has taken over the internet with a new passion in the midst of quarantine. And I c... | 0 | 2020-06-13T14:28:13 | https://dev.to/danilapetrova/the-only-productivity-rule-you-need-the-two-day-rule-2aj9 | productivity, motivation, career | Productivity craze has taken over the internet with a new passion in the midst of quarantine. And I can see why. The idea that we can stay safe in our homes and turn the stress and anxiousness over everything that is happening into measurable value for success is tempting.
And I am no exception. I have written an article or two about how to make the most of quarantine and listed all of the habits I have been trying to cultivate. "Just think about everything I would be able to get done if I just pushed myself!".
I imagined myself leaving my home post-quarantine in a gorgeous fit body, with straight A’s on my university projects and receiving praise for my outstanding performance at work. Glorious isn’t it?
Yet, the reality proved to be more challenging. Months in, I have found my steadily depleting motivation to be a chain that is pulling me down. And so is the fatigue that built up over my unrealistically high-performance expectations.
##Unrealistic expectations
As we are all navigating uncharted territory, and we are all dealing the best we can with the level of uncertainty that we have. No one knows how long we will have to base our lives on social distancing.
Or how deeply it will affect our careers, personal lives, and overall psyche. The changes are deep within our lifestyle, and for now we should think of this as the new reality for the foreseeable future.
###So we should approach this as a *marathon* and not a sprint
I think that now is the time to recognize setbacks as a natural part of life, that we don’t necessarily need to power through on willpower alone.
As such, we will not be able to perform at our best at all times. And that is ok. The truth is that we do not need to be perfect in following our routines. Sometimes we need to take a break to recharge from the accumulated fatigue, especially when we are bombarded with negative news day in and day out, which affects our overall wellbeing.
The key is to know when to push and when to take a step back. Evaluate what is most important to you and what is having a negative impact on you. Then choose if you need to spend less time and energy on activities that wear you down, even if they are textbook "good for you".
##Productivity is reliant on your habits
While sometimes you need to cut your losses and take a step back to take a break, you are likely still bound by deadlines, so you cannot afford to skimp out on getting work done.
In the [IT industry] (https://dreamix.eu/) in particular, turning in your work on time is imperative to the overall project performance. In this case, the best you can do is make the most of what you have at your disposal.
Everything you do is a habit, whether it is checking the corona statistics in your area every day or something else entirely. The routines you have in place help drive your productivity almost subconsciously, once they have been established. So if you find yourself demotivated, the key is to make your work process as easy to follow as possible.
Observe your habits. Ask yourself why you are doing these actions, what you get from them, and what their side effects are. For example:
One of my most consistently maintained habits has been working out. Once quarantine started I couldn't go to the gym four days a week for two hours, so I adapted it to a home workout. But I also considered the stress of the transition, as well as the emotional wear-down of worrying about my family's safety.
I chose to go the micro-habits route. I now aim to have just a little bit of exercise daily. A small walk, some stretching, working with resistance bands or circuit training. I also introduced my bike to my routine as well as it seems to be a great stress release. I cannot comfortably maintain a grueling training program, so I instead tweaked my habits to do just enough to feel ok and accommodate my current needs.
##The two-day rule
Habits rely on accumulation over time. You start to associate repetitive surroundings, actions and activities with their respective outcome. For example - when you grab a water bottle, change into your gym clothes and put on your workout playlist your brain expects you to switch into training mode. And so you feel more ready to take on your exercise.
If you perform the same actions on a regular basis, it will be significantly easier to maintain the activity and extract the best results from your efforts. The more you skip the weaker the habit is, and the more time it will take you to get to your goals. The key to being productive is the two-day rule.
It is the only rule you need to follow to build strong foundational habits. The core idea is that if you have a scheduled or repetitive routine, you cannot allow yourself to skip it twice in a row. And let’s face it, things come up. Sometimes we can’t show up to maintain our habits - we may be on vacation, or an emergency may come up.
However, if you set out to never skip your activities twice in a row, you will still get farther ahead than if you were to let your habits derail completely and have to keep rebuilding them from scratch.
## Getting things done does not have to be complicated
When it comes to being productive, it is so easy to get in our heads. We become obsessed with the process, the planning, and setting up systems in an attempt to micromanage our daily routines. But in all of our desire for improvement, we forget to set aside time for ourselves.
Yes, we need to have practices and a plan to follow. But more often than not, the biggest changes come from doing as much as you can consistently. And allowing yourself a moment of weakness, as long as you keep yourself accountable not to quit, will go a long way toward making any good changes stick.
Do you believe in the two-day rule? Is this something you have been doing subconsciously? How do you stay on top of your habits?
| danilapetrova |
355,995 | Deploy A React app on GitHub pages | GitHub offers more than just a host for your code. In this short tutorial, I will walk you through de... | 0 | 2020-06-15T10:41:34 | https://dev.to/chrisachinga/deploy-a-react-app-on-github-pages-5925 | react, github, deploy | GitHub offers more than just a host for your code. In this short tutorial, I will walk you through deploying a static react app/project on [GitHub Pages](https://pages.github.com/).
I will be deploying a project I did today (Nov 28, 2020). To follow along, feel free to clone or fork the repo.
Link to the repo: [GitHub/myRepo](https://github.com/ChrisAchinga/myRepos/tree/174080e27061235d153ff10e51ac0ad3f5661222)
>A programmer's learning tool is by practicing --I said that.
Let's Get Started:
## Step 1: Install the Dependencies:
I use npm for my projects, so after cloning the repo, open the project on your terminal or cmd (windows) and execute:
```shell
npm install
```
### Install the *gh-pages* package as a dev-dependency of the app
```shell
npm install gh-pages --save-dev
```
## Step 2: Define Homepage in package.json
In the `package.json` file of your React app, add a `homepage` property using the given syntax:
```shell
http://{username}.github.io/{repo-name}
```
where {username} is your GitHub username, and {repo-name} is the name of the GitHub repository. Below is an example for my project:
```json
"homepage": "http://ChrisAchinga.github.io/myRepos",
```

## Step 3: Deploy script in `package.json` file
Now we can add the deploy script in the package.json file. In the existing scripts property, add a predeploy property and a deploy property, each having the values shown below:
```json
"scripts": {
// some code before
"predeploy": "npm run build",
"deploy": "gh-pages -d build"
}
```
So your "scripts" should look like this:

## Step 4: Deploy Your App
On your project terminal, run the deploy script:
```shell
npm run deploy
```
## Step 5: Commit and Push to GitHub
Update your GitHub repository using git commands:
```shell
git add .
git commit -m "gh-pages deploy"
git push
```
Kudos! Your React app is ready to view at https://chrisachinga.github.io/myRepos/
Get The Complete Source Code:
%[https://github.com/ChrisAchinga/myRepos]
| chrisachinga |
356,270 | Scraping a complicated website for my CLI application | (originally published August 9, 2019) When it came time to make my first application using Ruby, I k... | 0 | 2020-06-15T18:00:41 | https://sharkham.github.io/scraping_and_my_pok_mon_starter_generator_cli_application | ruby, codenewbie | *(originally published August 9, 2019)*
When it came time to make my first application using Ruby, I knew pretty quickly what I wanted to do.
My partner and I run a homebrew Pokémon Tabletop RPG, so a generator app to give our players three randomized choices of Pokémon partner was something I was interested in, fit within the scope of the app I wanted to build, and was something I could actually use in real life!
When it came time to actually put this together though, the biggest problem I ran into was scraping.
Scraping was something I fumbled through in the Flatiron labs on the subject but didn’t really feel like I had *learned*. I knew enough to make the `rspec` tests pass and, well, I didn’t have the time to dive further in at the time.
Scraping an actual webpage for data I actually wanted to use didn’t leave me much choice, I had to go a lot deeper—and that ended up being a good thing!

>The website I was trying to scrape, with relevant section indicated in yellow.
As of the current series of Pokémon games, there are 809 Pokémon. Since the games are rather complex, each Pokémon has a lot of information on it. For the website I was scraping, this meant pages with a very nested table structure with some very generically named CSS selectors to look through.
While looking only at the CSS selectors closest to the data I wanted might have worked for the examples used when I was learning scraping, it would not work here. I had to get creative, and dive a lot further in.
I started by refreshing my memory of CSS selectors using the wonderful [CSS Diner](http://flukeout.github.io/) app, complete with helpful pictures, and then went into my console and typed out exactly what I wanted, which looked something like:
`index_page.css('table:nth-of-type(2) tr:nth-of-type(2) td:nth-of-type(2) div:nth-of-type(2) div p tbody tr:nth-of-type(2) td:nth-of-type(2) td.fooinfo')`.
Yeaaaaah. It’s a lot. And it didn’t work, either.
A few results of an empty array later, I knew I was going to have to take it one step at a time to figure out where the problem was.

> Writing out all the CSS selectors in my notebook to keep track of them.
As I've been learning, this is a good way to look at a coding problem in general. If something's broken, go through it bit by bit to see where. I ran the method again, and again, each time with one more CSS selector than before, each time looking carefully through the site's HTML in the inspector to see how it matched up with the Nokogiri object output by my terminal. If the element in my terminal had the same attributes as the element in the website inspector, I went down another level.
And so on, and so forth.
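That loop of adding one more selector each run can be sketched as a small helper. This is only an illustration of the debugging strategy, not code from the project: `fetch_count` is a hypothetical stand-in for what would really be `index_page.css(chain).length` with Nokogiri, stubbed here so the sketch runs on its own.

```ruby
# Sketch of the "one more selector each run" debugging loop.
# fetch_count is a stub standing in for index_page.css(chain).length;
# real code would issue a Nokogiri query against the live page.
SELECTORS = [
  'table:nth-of-type(2)',
  'tr:nth-of-type(2)',
  'td:nth-of-type(2)',
  'tbody',        # pretend this selector breaks the chain
  'td.fooinfo'
].freeze

def fetch_count(chain)
  chain.include?('tbody') ? 0 : 1 # stubbed result for the sketch
end

# Test each cumulative prefix of the chain and report the first
# selector whose addition makes the query come back empty.
def first_failing_selector(selectors)
  selectors.each_index do |i|
    chain = selectors[0..i].join(' ')
    return selectors[i] if fetch_count(chain).zero?
  end
  nil
end

puts first_failing_selector(SELECTORS) # => tbody
```

Once the offending selector is identified, you can skip it and try its child elements instead, exactly as described above.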
Eventually, I ran into an element that returned the empty array when I included it in my CSS selectors, but luckily, ignoring that one in my list and going down to its child elements instead got me the result I needed. And I had it!
`"table:nth-of-type(2) tr:nth-of-type(2) td:nth-of-type(2) div:nth-of-type(2) div td.fooinfo td.foopika"` (try saying that five times fast!)
Or, I thought I had it.
While most of the Ruby learning I’d been doing up to this point involved making prewritten `rspec` tests pass, this app was a different animal. I was free-falling! And with a program that was going to go through three randomized Pokémon pages out of 809, I knew I had better test out this part of the program as much as I could to make sure it didn’t go splat on the pavement. I didn't know what the edge cases would be here, if there were any, so this meant running the scraper again and again.
After the 6th time, I got an empty array again. And when I went to the URL that returned it, I figured out why. Because of the way the website was set up, my Pokémon page scraper *was* working—but only 152 out of 809 entries had the data I wanted!
Without getting into the *really specific* Pokémon worldbuilding reasons for this (and they exist!), I was able to use a slightly different individual Pokémon page to get the info I needed, and because the data was roughly in the same place, I only had to change the very last bit of my long chain of CSS selectors to do it:
`"table:nth-of-type(2) tr:nth-of-type(2) td:nth-of-type(2) div:nth-of-type(2) div td.fooinfo td.ruby"`
As a side effect of this, my app was only able to randomly generate from a list of 721 Pokémon instead of 809, but I’m still considering it a win. That is a lot of Pokémon!
Going through this process I learned a lot about how scraping with Nokogiri works, and became a lot more comfortable digging through raw data to find what I wanted. Scraping this way, step by step through all of the elements on the page, is definitely the long way around, but for complex websites you really want to grab some data from, it’s worth it! | sharkham |
356,283 | TIL: you can use `cd ~number` to navigate back to previously visited directories | If you are a heavy user of the command line (as I am 🤓), there is a high chance that you use the comm... | 0 | 2020-06-15T18:31:19 | https://diamantidis.github.io/tips/2020-06-15-cd-command-hidden-gems | If you are a heavy user of the command line (as I am 🤓), there is a high chance that you use the command `cd` quite often to navigate back and forth to different directories.
Besides the popular `cd <dir>`, `cd` has some more capabilities that are not so widely known and can make the navigation between different directories much more efficient.
Let's say we have the following directory structure and we have to navigate through those folders.
```console
parent
├── child1
│ └── grandchild1
└── child2
└── grandchild2
```
For example, we have to run the following commands:
```sh
cd child1
cd grandchild1
cd ../../child2
cd grandchild2
```
Did you know that you can use `cd ~2` to navigate back to `~/parent/child1/grandchild1`?
Let me try to explain how this is working.
All the directories that we have visited are stored in a stack. To display this stack, we can use the command `dirs -v`, which will output something like the following:
```console
0 ~/parent/child2/grandchild2
1 ~/parent/child2
2 ~/parent/child1/grandchild1
3 ~/parent/child1
4 ~/parent
```
You can now use the number on the left of the directory to navigate through the stack. In our case, we are using `cd ~2` to navigate to the item in position 2.
Isn't that cool? Especially compared to the alternative (`cd ../../child1/grandchild1`)? :sunglasses:
---
➡️ This post was originally published on my [blog](https://diamantidis.github.io). | diamantidis | |
356,341 | Validation in ASP .NET Core 3.1 | This is the twenty-second of a new series of posts on ASP .NET Core 3.1 for 2020. In this series, w... | 0 | 2020-06-15T19:49:47 | https://wakeupandcode.com/validation-in-asp-net-core-3-1/ | dotnet, webdev, csharp | ---
title: Validation in ASP .NET Core 3.1
published: true
date: 2020-06-15 14:00:00 UTC
tags: dotnet, webdev, csharp
canonical_url: https://wakeupandcode.com/validation-in-asp-net-core-3-1/
---

This is the twenty-second of a new [series of posts](https://wakeupandcode.com/aspnetcore/#aspnetcore2020) on ASP .NET Core 3.1 for 2020. In this series, we’ll cover 26 topics over a span of 26 weeks from January through June 2020, titled **ASP .NET Core A-Z!** To differentiate from the [2019 series](https://wakeupandcode.com/aspnetcore/#aspnetcore2019), the 2020 series will mostly focus on a growing single codebase ([NetLearner!](https://wakeupandcode.com/netlearner-on-asp-net-core-3-1/)) instead of new unrelated code snippets each week.
Previous post:
- [Unit Testing in ASP .NET Core 3.1](https://wakeupandcode.com/unit-testing-in-asp-net-core-3-1/)
**NetLearner on GitHub** :
- Repository: [https://github.com/shahedc/NetLearnerApp](https://github.com/shahedc/NetLearnerApp)
- v0.22-alpha release: [https://github.com/shahedc/NetLearnerApp/releases/tag/v0.22-alpha](https://github.com/shahedc/NetLearnerApp/releases/tag/v0.22-alpha)
# In this Article:
- [V is for Validation](#V)
- [Validation Attributes](#attr)
- [Server-Side Validation](#server)
- [Client-Side Validation](#client)
- [Client to Server with Remote Validation](#remote)
- [Custom Attributes](#custom)
- [References](#refs)
# V is for Validation
To build upon a previous post on [Forms and Fields in ASP .NET Core](https://wakeupandcode.com/forms-and-fields-in-asp-net-core-3-1/), this post covers Validation in ASP .NET Core. When a user submits form field values, proper validation can help build a more user-friendly and secure web application. Instead of coding each view/page individually, you can simply use server-side attributes in your models/viewmodels.
**NOTE** : As of ASP .NET Core 2.2, validation may be skipped automatically if ASP .NET Core decides that validation is not needed. According to the [“What’s New” release notes](https://docs.microsoft.com/en-us/aspnet/core/release-notes/aspnetcore-2.2?view=aspnetcore-2.2#validation-performance), this includes primitive collections (e.g. a `byte[]` array or a `Dictionary<string, string>` key-value pair collection)
*Image: Validation in ASP .NET Core*
# Validation Attributes
To implement model validation with [Attributes], you will typically use Data Annotations from the [System.ComponentModel.DataAnnotations](https://docs.microsoft.com/en-us/dotnet/api/system.componentmodel.dataannotations) namespace. The list of attributes goes beyond just validation functionality, though. For example, the DataType attribute takes a data type parameter, which is used for inferring the data type and for displaying the field on a view/page (but does not provide validation for the field).
Common attributes include the following:
- **Range** : lets you specify min-max values, inclusive of min and max
- **RegularExpression** : useful for pattern recognition, e.g. phone numbers, zip/postal codes
- **Required** : indicates that a field is required
- **StringLength** : sets the maximum length for the string entered
- **MinLength** : sets the minimum length of an array or string data
From the sample code, here is an example from the [LearningResource model class](https://github.com/shahedc/NetLearnerApp/blob/master/src/NetLearner.SharedLib/Models/LearningResource.cs) in [NetLearner](https://wakeupandcode.com/netlearner-on-asp-net-core-3-1/)‘s shared library:
```
public class LearningResource
{
    public int Id { get; set; }

    [DisplayName("Resource")]
    [Required]
    [StringLength(100)]
    public string Name { get; set; }

    [DisplayName("URL")]
    [Required]
    [StringLength(255)]
    [DataType(DataType.Url)]
    public string Url { get; set; }

    public int ResourceListId { get; set; }

    [DisplayName("In List")]
    public ResourceList ResourceList { get; set; }

    [DisplayName("Feed Url")]
    public string ContentFeedUrl { get; set; }

    public List<LearningResourceTopicTag> LearningResourceTopicTags { get; set; }
}
```
From the above code, you can see that:
- The value for **Name** is a required string, needs to be less than 100 characters
- The value for **Url** is a required string, needs to be less than 255 characters
- The value for **ContentFeedUrl** can be left blank, but has to be less than 255 characters.
- When the **DataType** is provided (e.g. **DataType.Url** , Currency, Date, etc), the field is displayed appropriately in the browser, with the proper formatting
- For numeric values, you can also use the **[Range(x,y)]** attribute, where x and y sets the minimum and maximum values allowed for the number
Here’s what it looks like in a browser when validation fails:
*Screenshot: Validation errors in NetLearner.MVC*

*Screenshot: Validation errors in NetLearner.Pages*

*Screenshot: Validation errors in NetLearner.Blazor*
The validation rules make it easier for the user to correct their entries before submitting the form.
- In the above scenario, the “is required” messages are displayed directly in the browser through client-side validation.
- For field-length restrictions, the client-side form will automatically prevent the entry of string values longer than the maximum threshold
- If a user attempts to circumvent any validation requirements on the client-side, the server-side validation will automatically catch them.
In the **MVC** and **Razor Pages** web projects, the validation messages are displayed with the help of `<div>` and `<span>` elements, using asp-validation-summary and asp-validation-for.
**NetLearner.Mvc** : [/Views/LearningResources/Create.cshtml](https://github.com/shahedc/NetLearnerApp/blob/master/src/NetLearner.Mvc/Views/LearningResources/Create.cshtml)
```
<div asp-validation-summary="ModelOnly" class="text-danger"></div>
<div class="form-group">
    <label asp-for="Name" class="control-label"></label>
    <input asp-for="Name" class="form-control" />
    <span asp-validation-for="Name" class="text-danger"></span>
</div>
```
**NetLearner.Pages** : [/Pages/LearningResources/Create.cshtml](https://github.com/shahedc/NetLearnerApp/blob/master/src/NetLearner.Pages/Pages/LearningResources/Create.cshtml)
```
<div asp-validation-summary="ModelOnly" class="text-danger"></div>
<div class="form-group">
    <label asp-for="LearningResource.Name" class="control-label"></label>
    <input asp-for="LearningResource.Name" class="form-control" />
    <span asp-validation-for="LearningResource.Name" class="text-danger"></span>
</div>
```
In the **Blazor** project, per the official docs, "the DataAnnotationsValidator component attaches validation support using data annotations" and "the ValidationSummary component summarizes validation messages".
**NetLearner.Blazor** : [/Pages/ResourceDetail.razor](https://github.com/shahedc/NetLearnerApp/blob/master/src/NetLearner.Blazor/Pages/ResourceDetail.razor)
```
<EditForm Model="@LearningResourceObject" OnValidSubmit="@HandleValidSubmit">
    <DataAnnotationsValidator />
    <ValidationSummary />
```
For more information on Blazor validation, check out the official documentation at:
- Blazor Forms and Validation: [https://docs.microsoft.com/en-us/aspnet/core/blazor/forms-validation?view=aspnetcore-3.1](https://docs.microsoft.com/en-us/aspnet/core/blazor/forms-validation?view=aspnetcore-3.1)
# Server-Side Validation
Validation occurs before an MVC controller action (or equivalent handler method for Razor Pages) takes over. As a result, you should check to see if the validation has passed before continuing next steps.
e.g. in an MVC controller
```
[HttpPost]
[ValidateAntiForgeryToken]
public async Task<IActionResult> Create(...)
{
    if (ModelState.IsValid)
    {
        // ...
        return RedirectToAction(nameof(Index));
    }
    return View(...);
}
```
e.g. in a Razor Page’s handler code:
```
public async Task<IActionResult> OnPostAsync()
{
    if (!ModelState.IsValid)
    {
        return Page();
    }
    // ...
    return RedirectToPage(...);
}
```
Note that **ModelState.IsValid** is checked in both the **Create()** action method of an MVC controller and the **OnPostAsync()** handler method of a Razor Page. If **IsValid** is true, perform actions as desired. If false, reload the current view/page as is.
In the Blazor example, the **OnValidSubmit** event is triggered by `<EditForm>` when a form is submitted, e.g.
```
<EditForm Model="@SomeModel" OnValidSubmit="@HandleValidSubmit">
```
The method name specified refers to a C# method that handles the form submission when valid.
```
private async void HandleValidSubmit()
{
    // ...
}
```
# Client-Side Validation
It goes without saying that you should always have server-side validation. All the client-side validation in the world won’t prevent a malicious user from sending a GET/POST request to your form’s endpoint. Cross-site request forgery in the [Form tag helper](https://docs.microsoft.com/en-us/aspnet/core/mvc/views/working-with-forms#the-form-tag-helper) does provide a certain level of protection, but you still need server-side validation. That being said, client-side validation helps to catch the problem before your server receives the request, while providing a better user experience.
When you create a new ASP .NET Core project using one of the built-in templates for MVC or Razor Pages, you should see a shared partial view called [\_ValidationScriptsPartial.cshtml](https://github.com/shahedc/NetLearnerApp/blob/master/src/NetLearner.Mvc/Views/Shared/_ValidationScriptsPartial.cshtml). This partial view should include references to [jQuery unobtrusive validation](https://github.com/aspnet/jquery-validation-unobtrusive), as shown below:
```
<script src="~/lib/jquery-validation-unobtrusive/jquery.validate.unobtrusive.min.js"></script>
```
If you create a scaffolded controller with views/pages, you should see the following reference at the bottom of your page or view.
e.g. at the bottom of [Create.cshtml](https://github.com/shahedc/NetLearnerApp/blob/master/src/NetLearner.Mvc/Views/LearningResources/Create.cshtml) view
```
@section Scripts {
    @{await Html.RenderPartialAsync("_ValidationScriptsPartial");}
}
```
e.g. at the bottom of the [Create.cshtml](https://github.com/shahedc/NetLearnerApp/blob/master/src/NetLearner.Pages/Pages/LearningResources/Create.cshtml) page
```
@section Scripts {
    @{await Html.RenderPartialAsync("_ValidationScriptsPartial");}
}
```
Note that the syntax is identical whether it’s an MVC view or a Razor page. If you ever need to disable client-side validation for some reason, that can be accomplished in different ways, whether it’s for an MVC view or a Razor page. (Blazor makes use of the aforementioned EditForm element in ASP .NET Core to include built-in validation, with the ability to track whether a submitted form is valid or invalid.)
From the official [docs](https://docs.microsoft.com/en-us/aspnet/core/mvc/models/validation#disable-client-side-validation), the following code should be used within the **ConfigureServices** () method of your Startup.cs class, to set **ClientValidationEnabled** to false in your HTMLHelperOptions configuration.
```
services.AddMvc().AddViewOptions(options =>
{
    if (_env.IsDevelopment())
    {
        options.HtmlHelperOptions.ClientValidationEnabled = false;
    }
});
```
Also mentioned in the official docs, the following code can be used for your Razor Pages, within the **ConfigureServices** () method of your Startup.cs class.
```
services.Configure<HtmlHelperOptions>(o => o.ClientValidationEnabled = false);
```
# Client to Server with Remote Validation
If you need to call a server-side method while performing client-side validation, you can use the [**Remote**] attribute on a model property. You would then pass it the name of a server-side action method which returns an **IActionResult** with a true boolean result for a valid field. This [**Remote**] attribute is available in the Microsoft.AspNetCore.Mvc namespace, from the [Microsoft.AspNetCore.Mvc.ViewFeatures](https://www.nuget.org/packages/Microsoft.AspNetCore.Mvc.ViewFeatures) NuGet package.
The model property would look something like this:
```
[Remote(action: "MyActionMethod", controller: "MyControllerName")]
public string MyProperty { get; set; }
```
In the controller class (e.g. **MyControllerName**), you would define an action method with the name specified in the [**Remote**] attribute parameters, e.g. **MyActionMethod**.
```
[AcceptVerbs("Get", "Post")]
public IActionResult MyActionMethod(...)
{
    if (TestForFailureHere())
    {
        return Json("Invalid Error Message");
    }
    return Json(true);
}
```
You may notice that if the validation fails, the controller action method returns a JSON response with an appropriate error message in a string. Instead of a text string, you can also use a false, null, or undefined value to indicate an invalid result. If validation has passed, you would use **Json(true)** to indicate that the validation has passed.
_So, when would you actually use something like this?_ Any scenario where a selection/entry needs to be validated by the server can provide a better user experience by providing a result as the user is typing, instead of waiting for a form submission. For example: imagine that a user is buying online tickets for an event, and selecting a seat number displayed on a seating chart. The selected seat could then be displayed in an input field and then sent back to the server to determine whether the seat is still available or not.
# Custom Attributes
In addition to all of the above, you can simply build your own custom attributes. If you take a look at the classes for the built-in attributes, e.g. [RequiredAttribute](https://docs.microsoft.com/en-us/dotnet/api/system.componentmodel.dataannotations.requiredattribute), you will notice that they also extend the same parent class:
- System.ComponentModel.DataAnnotations.ValidationAttribute
You can do the same thing with your custom attribute’s class definition:
```
public class MyCustomAttribute : ValidationAttribute
{
    // ...
}
```
The parent class [ValidationAttribute](https://docs.microsoft.com/en-us/dotnet/api/system.componentmodel.dataannotations.validationattribute), has a virtual **IsValid** () method that you can override to return whether validation has been calculated successfully (or not).
```
public class MyCustomAttribute : ValidationAttribute
{
    // ...
    protected override ValidationResult IsValid(
        object value, ValidationContext validationContext)
    {
        if (TestForFailureHere())
        {
            return new ValidationResult("Invalid Error Message");
        }
        return ValidationResult.Success;
    }
}
```
You may notice that if the validation fails, the **IsValid()** method returns a **ValidationResult** with an appropriate error message in a string. If validation has passed, you would return **ValidationResult.Success** to indicate that the validation has passed.
# References
- Add validation to an ASP.NET Core MVC app: [https://docs.microsoft.com/en-us/aspnet/core/tutorials/first-mvc-app/validation](https://docs.microsoft.com/en-us/aspnet/core/tutorials/first-mvc-app/validation)
- Model validation in ASP.NET Core MVC and Razor Pages: [https://docs.microsoft.com/en-us/aspnet/core/mvc/models/validation](https://docs.microsoft.com/en-us/aspnet/core/mvc/models/validation)
- System.ComponentModel.DataAnnotations Namespace: [https://docs.microsoft.com/en-us/dotnet/api/system.componentmodel.dataannotations](https://docs.microsoft.com/en-us/dotnet/api/system.componentmodel.dataannotations)
- ValidationAttribute Class (System.ComponentModel.DataAnnotations): [https://docs.microsoft.com/en-us/dotnet/api/system.componentmodel.dataannotations.validationattribute](https://docs.microsoft.com/en-us/dotnet/api/system.componentmodel.dataannotations.validationattribute)
- Blazor Forms and Validation: [https://docs.microsoft.com/en-us/aspnet/core/blazor/forms-validation?view=aspnetcore-3.1](https://docs.microsoft.com/en-us/aspnet/core/blazor/forms-validation?view=aspnetcore-3.1) | shahedc |
356,347 | Securing a Ruby on Rails API with JWTs | Ruby on Rails is a modern web framework, but also a great way to build an API. The ability to quickly... | 0 | 2020-06-15T19:58:04 | https://fusionauth.io/blog/2020/06/11/building-protected-api-with-rails-and-jwt | rails, ruby, jwts, security | ---
title: Securing a Ruby on Rails API with JWTs
published: true
date: 2020-06-11 06:00:00 UTC
tags: rails,ruby,jwts,security
canonical_url: https://fusionauth.io/blog/2020/06/11/building-protected-api-with-rails-and-jwt
---
Ruby on Rails is a modern web framework, but also a great way to build an API. The ability to quickly jam out your business logic, the ease of creating and modifying data models, and the built-in testing support all combine to make creating a JSON API in Rails a no brainer. Add in a sleek admin interface using something like [RailsAdmin](https://github.com/sferik/rails_admin) and you can build and manage APIs easily.
But you don’t typically want just anyone to consume your API. You want to ensure the right people and applications are doing so. In this tutorial, we’re going to build an API in Ruby on Rails 6, and secure it using JSON Web Tokens (JWTs).
<!--more-->
As always, the code is available under an Apache2 license [on GitHub](https://github.com/FusionAuth/fusionauth-example-rails-api), if you’d rather jump ahead.
## Prerequisites
This post assumes you have Ruby and Rails 6 installed. If you don’t, we suggest you follow the steps in the [Getting Started with Rails](https://guides.rubyonrails.org/getting_started.html) guide. Other than that we presume nothing about your knowledge of Ruby or Rails.
## Build the API
To build the API, we’re going to create a new Rails application. Using the `--api` switch avoids generating a bunch of functionality we won’t need (like views).
```
rails new hello_api --api
```
Change to the created directory, `hello_api`. We’re now going to add our controller to the routes file. Edit the `config/routes.rb` file and change the contents to:
```
Rails.application.routes.draw do
resources :messages, only: [:index]
end
```
This exposes the path `/messages` and ties it to a `Messages` controller. This `Messages` controller won’t be too complicated. It returns a hardcoded list of messages when a `GET` request is made to `/messages`. In a real-world application, of course, you would store messages in the database and pull them dynamically using ActiveRecord. But for this tutorial, a hardcoded list suffices.
Create the controller at `app/controllers/messages_controller.rb`. Here is what the class looks like:
```
class MessagesController < ApplicationController
def index
messages = []
messages << "Hello"
render json: { messages: messages }.to_json, status: :ok
end
end
```
If you start up your Rails server:
```
rails s -p 4000
```
You should now be able to visit `http://localhost:4000/messages` and see some messages:
```
{"messages":["Hello"]}
```
But let’s add a test so future changes don’t cause surprises. Create the integration test at `test/integration/messages_test.rb`. Here are the contents of that class:
```
require 'test_helper'
class MessagesTest < ActionDispatch::IntegrationTest
test "can get messages" do
get "/messages"
assert_response :success
end
test "can get messages content" do
get "/messages"
res = JSON.parse(@response.body)
assert_equal '{"messages"=>["Hello"]}', res.to_s
end
end
```
Now we can run our test and make sure that we are getting what we expect:
```
$ rails test test/integration/messages_test.rb
Running via Spring preloader in process 15492
Run options: --seed 1452
# Running:
..
Finished in 0.119373s, 16.7542 runs/s, 16.7542 assertions/s.
2 runs, 2 assertions, 0 failures, 0 errors, 0 skips
```
Excellent! We have a working API which returns well-formed JSON! Rails even takes care of setting the `Content-Type` header to `application/json; charset=utf-8`. Now let’s secure our API.
## Secure the API
As a reminder, we’re going to use a JWT to secure this API. While you can secure Rails APIs using [a variety of methods](https://edgeguides.rubyonrails.org/action_controller_overview.html#http-authentications), using a JWT has certain advantages. You can integrate with a number of identity providers offering OAuth or SAML support. This allows you to leverage an existing robust identity management system to control API access. You can also embed additional metadata into a JWT, including attributes like roles.
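As a small illustration of that last point, once a JWT has been decoded, a roles claim can drive authorization decisions. The `authorized?` helper and claim values below are hypothetical, shown only to sketch the shape of such a check:

```ruby
# Hypothetical helper: decide access from the roles claim of a decoded JWT.
# The claims hash mirrors the payload shape built later in this tutorial.
def authorized?(claims, required_role)
  Array(claims['roles']).include?(required_role)
end

claims = {
  'sub'   => '19016b73-3ffa-4b26-80d8-aa9287738677',
  'roles' => ['USER']
}

puts authorized?(claims, 'USER')  # => true
puts authorized?(claims, 'ADMIN') # => false
```

`Array(...)` keeps the check safe even when the token carries no roles claim at all.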
To create tokens we’re using the [Ruby JWT library](https://github.com/jwt/ruby-jwt). Add that to your `Gemfile` and then run `bundle install`. Add a line to the bottom of your `Gemfile`:
```
# ...
gem 'jwt'
```
Run `bundle install` to install it:
```
bundle install
```
After making sure we have the required gems, the next step is to write tests. Let’s modify the test to provide a JWT and expect `:forbidden` HTTP statuses when the token doesn’t meet our expectations.
```
require 'test_helper'

class MessagesTest < ActionDispatch::IntegrationTest
  test "can't get messages with no auth" do
get "/messages"
assert_response :forbidden
end
test "can get messages with header" do
get "/messages", headers: { "HTTP_AUTHORIZATION" => "Bearer " + build_jwt }
assert_response :success
end
test "expired jwt fails" do
get "/messages", headers: { "HTTP_AUTHORIZATION" => "Bearer " + build_jwt(-1) }
assert_response :forbidden
end
test "can get messages content" do
get "/messages", headers: { "HTTP_AUTHORIZATION" => "Bearer " + build_jwt }
res = JSON.parse(@response.body)
assert_equal '{"messages"=>["Hello"]}', res.to_s
end
def build_jwt(valid_for_minutes = 5)
exp = Time.now.to_i + (valid_for_minutes*60)
payload = { "iss": "fusionauth.io",
"exp": exp,
"aud": "238d4793-70de-4183-9707-48ed8ecd19d9",
"sub": "19016b73-3ffa-4b26-80d8-aa9287738677",
"name": "Dan Moore",
"roles": ["USER"]
}
JWT.encode payload, Rails.configuration.x.oauth.jwt_secret, 'HS256'
end
end
```
We look for the JWT in the `Authorization` HTTP header. Rails exposes it via the `HTTP_AUTHORIZATION` key in the `headers` hash of the request. Let’s look more closely at the token.
[JWTs have claims](https://tools.ietf.org/html/rfc7519#section-4), basically information embedded in the JWT. The keys of the JSON payload assembled in the `build_jwt` function, such as `iss` and `name`, are claims. Some of these are defined in the JWT RFC. These are ‘registered’ claims. Others are recorded with the IANA but are not part of the standard; these are ‘public’ claims. And yet others are defined by the token creator; these are ‘private’ claims.
```
# ...
def build_jwt(valid_for_minutes = 5)
  exp = Time.now.to_i + (valid_for_minutes * 60)
  payload = { "iss": "fusionauth.io",
              "exp": exp,
              "aud": "238d4793-70de-4183-9707-48ed8ecd19d9",
              "sub": "19016b73-3ffa-4b26-80d8-aa9287738677",
              "name": "Dan Moore",
              "roles": ["USER"] }
  JWT.encode payload, Rails.configuration.x.oauth.jwt_secret, 'HS256'
end
# ...
```
Above, we add registered claims to a JWT that any consumer of the token, including our API classes, may examine. `exp` indicates when the JWT will expire. `aud` is an identifier of who or what this token is intended for (the “audience”). `sub` is the person or piece of software to which this token applies; to quote the RFC: “The claims in a JWT are normally statements about the subject.” `iss` is an identifier for the issuer of the JWT, typically a user identity management server. Since we’re generating the JWT ourselves here, we can specify any value we’d like.
We also add the `name` public claim, which lets JWT consumers know the user’s name. `roles` is a private claim whose meaning is undefined outside of our application. Note that because the contents of a JWT are not typically encrypted, claims should contain no secrets or private data.
The last thing we do is encode our JWT. This signs it, adds needed metadata and creates the URL encoded version. Here’s what one of the JWTs generated by `build_jwt` looks like:
```
eyJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJmdXNpb25hdXRoLmlvIiwiZXhwIjoxNTkwMTgxNjE5LCJhdWQiOiIyMzhkNDc5My03MGRlLTQxODMtOTcwNy00OGVkOGVjZDE5ZDkiLCJzdWIiOiIxOTAxNmI3My0zZmZhLTRiMjYtODBkOC1hYTkyODc3Mzg2NzciLCJuYW1lIjoiRGFuIE1vb3JlIiwicm9sZXMiOlsiVVNFUiJdfQ.P7KXBV8fNElGGr1McKIMQbU7-mZPMxv8tw5AbufZgr0
```
We use the HMAC signature algorithm because in this tutorial we control both the issuer of the token and the consumer (our API). We can, therefore, share a secret reliably between them. If we didn’t have a good way to share secrets, using an asymmetric signing key would be a wiser choice.
For this tutorial, we put the HMAC secret in the environment configuration files. For production usage, use your normal secrets management solution. You should make the HMAC secret a long string, but don’t use any other shared secrets, such as the session secret.
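For instance, a minimal sketch of that configuration — the environment variable name here is an assumption on our part; `config.x` is Rails's namespace for custom settings:

```ruby
# config/environments/test.rb — a sketch, not the tutorial's exact file
Rails.application.configure do
  # ...
  # Read the HMAC secret from the environment instead of hard-coding it.
  config.x.oauth.jwt_secret = ENV.fetch("JWT_SECRET", "a-long-random-string-for-local-use-only")
end
```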
Let’s add our authorization code now that our tests fail. They fail because they are expecting certain unauthorized requests to return `:forbidden`.
There are two places we could put the code that checks the token. We could add it to the `Messages` controller. Or we could add it to the `Application` controller. This latter choice would enforce authorization for all requests. This is the better option.
Since we are building an API that should never be accessed without authorization, we should protect all our resources. If and when we need to distinguish between different claims (for instance, we may want to have some APIs only accessible for users with the `ADMIN` role), we can refactor and move the verification code to different controllers.
Here’s the authorization code for the `app/controllers/application_controller.rb` file:
```
class ApplicationController < ActionController::API
  before_action :require_jwt

  def require_jwt
    token = request.headers["HTTP_AUTHORIZATION"]
    unless token && valid_token(token)
      head :forbidden
    end
  end

  private

  def valid_token(token)
    unless token
      return false
    end

    token.gsub!('Bearer ', '')
    begin
      decoded_token = JWT.decode token, Rails.configuration.x.oauth.jwt_secret, true
      return true
    rescue JWT::DecodeError => e
      Rails.logger.warn "Error decoding the JWT: " + e.to_s
    end
    false
  end
end
```
This code expects the JWT in the `Authorization` HTTP header prepended by `Bearer`, which is defined in [RFC 6750](https://tools.ietf.org/html/rfc6750). If it doesn’t exist, we deny access. If it does, we try to decode it. If it decodes without raising an exception, it is a valid JWT.
## Verify claims
But really, what does valid mean? That’s something developers define on an application by application basis, though the `jwt` gem provides a baseline: [it checks the `exp` and `nbf` claims](https://github.com/jwt/ruby-jwt/blob/master/lib/jwt/default_options.rb) and verifies the signature.
But for this application, we need to be extra sure. After all, if our messages fell into the wrong hands, who knows what could happen?
So, let’s validate additional claims when we are decoding the JWT. We’ll check to make sure that claims like the issuer are what we expect. We can perform these checks by providing options to the `JWT.decode` method. Instead of:
```
# ...
decoded_token = JWT.decode token, Rails.configuration.x.oauth.jwt_secret, true
# ...
```
We’ll check that the `iss` and `aud` claims contain the values we expect. This adds an additional layer of security. Here’s another article about [securing your signed JWTs](/learn/expert-advice/tokens/building-a-secure-jwt).
```
# ...
expected_iss = 'fusionauth.io'
expected_aud = '238d4793-70de-4183-9707-48ed8ecd19d9'
# ...
decoded_token = JWT.decode token, Rails.configuration.x.oauth.jwt_secret, true, { verify_iss: true, iss: expected_iss, verify_aud: true, aud: expected_aud, algorithm: 'HS256' }
# ...
```
The options at the end of the JWT `decode` method specify which claims we want to verify. If there were private claims that we wanted to check, we could do that as well. For instance, perhaps your application has a domain-specific claim that the API needs to ensure is present.
A warning: while the signature guarantees that the contents of the token are exactly what they were when it was created, it does not guarantee that those contents will remain unexamined.
Therefore, add claims, but remember that JWTs should contain a bare minimum of them. Consumers can always make additional requests to the identity management server if they need information too private for a token. For the JWT we’ve generated, the consumer could retrieve more information about the subject `19016b73-3ffa-4b26-80d8-aa9287738677` with a direct request.
We also added some tests, but you’ll need to check out the GitHub repository to see them.
## Take it further
If you are interested in extending this example, make the API more realistic. Create a `Messages` model and store them in the database. Change your claims to include a preferred greeting, and prepend that to any messages. Add more API endpoints and only allow users with certain roles to access them.
The code is [on GitHub](https://github.com/FusionAuth/fusionauth-example-rails-api) for your perusal.
## Next steps
You’ll notice we never specified the source of the JWT. We just generated one using the `jwt` gem. In general, tokens are provided by an authentication process. Integrating a user identity store, such as FusionAuth, to provide such tokens is what we’ll tackle next. | fusionauth |
356,367 | Flatiron Second Project: Sinatra | The past two weeks have been spent working on my second project for Flatiron’s Software Engineering B... | 0 | 2020-06-15T20:44:27 | https://dev.to/mmcclure11/flatiron-second-project-sinatra-26j9 | ruby, sinatra | The past two weeks have been spent working on my second project for Flatiron’s Software Engineering Bootcamp. It is an MVC app built using Sinatra. For my project I decided to create a recipe catalog that would allow a user to make recipes to save in their catalog, and provide them the option to view all of their recipes in a list or organised by category. This was mostly selfishly motivated as I have recipes written on notecards, in three different notebooks, and scraps of paper stuck to the refrigerator.
One of the hardest parts was getting the category functionality that I wanted as a user. I set up my database so that a category can have many recipes, and a recipe can have many categories. Which reflects the fact that a recipe can be both vegan and breakfast for example. When a user makes a new recipe I set up the form so that they can select from a series of checkboxes the categories they want to add, as well as made a space for them to submit a new category name.
```
<label for="category">Choose a Category:</label><br>
<% Category.all.sort { |a, b| a.name <=> b.name }.each do |category| %>
  <input type="checkbox" name="categories[]" id="<%= category.id %>" value="<%= category.id %>"> <%= category.name %>
<% end %>

<p>Or Create a Category:</p>
<label for="new_category">Category Name:</label>
<input type="text" name="category[name]">
<input type="submit" value="Create">
```
My post controller would then set the category ids for any categories that were submitted via the checklist, and also check to see if the user typed in a category name. First it checks if the params[:category][:name] is empty, and if it’s not it checks that the name does not already belong to the recipe, to prevent users from making duplicates of a category for a recipe.
```ruby
@recipe.category_ids = params[:categories]

if !params[:category][:name].empty? && !@recipe.categories.include?(Category.find_by(name: sanitize(params[:category][:name]).downcase.capitalize))
  @category = Category.find_or_create_by(name: sanitize(params[:category][:name]).downcase.capitalize)
end
```
This gave me most of the functionality that I wanted. However when I was testing out my app I discovered that my sanitize method for javascript attacks such as ``` <script>alert('malicious popup alert')</script> ``` would clear out the entry entirely and a new category would be created with no name at all! Which was definitely not what I wanted. This then led me to test out putting in a bunch of whitespaces. This gave the same result, so now a user was able to create virtually an infinite number of categories that were nameless.
My initial fix was to change my sanitize method, and instead of using the [Sanitize Gem](https://rubygems.org/gems/sanitize/versions/4.0.1), simply `gsub` out the carets and whitespace. Which worked, but I wanted to be able to use the gem for all of my params. After struggling for a day trying different variations of conditionals, I reached out to my cousin, a Ruby software engineer, for some help.
We added a validation to the Category class to validate the presence of name. I hadn’t done this initially, thinking that this would mean that the field would always be required. Before validating the name, we added a method #trim_whitespace to remove any spaces before or after the name entered, or all whitespaces if that was the only input.
I learned that in ruby you can use [&.](https://stackoverflow.com/questions/36812647/what-does-ampersand-dot-mean-in) which is a [Safe Navigation Operator](https://en.wikipedia.org/wiki/Safe_navigation_operator), meaning that the operator returns null if the first argument is null, otherwise it will perform the operation. This avoids the undefined method for nil:NilClass error that would otherwise crop up. Which is what we used to strip the name of the category of whitespace.
```ruby
class Category < ActiveRecord::Base
  has_many :recipe_categories
  has_many :recipes, through: :recipe_categories

  before_validation :trim_whitespace
  validates :name, presence: true

  def trim_whitespace
    self.name = name&.strip
  end
end
```
Then we added a nested if statement to check to see if the created category persisted, saving it to the recipe’s categories if successful, and rendering the form again if it was not persisted.
```ruby
if !params[:category][:name].empty? && !@recipe.categories.include?(Category.find_by(name: sanitize(params[:category][:name]).downcase.capitalize))
  @category = Category.find_or_create_by(name: sanitize(params[:category][:name]).downcase.capitalize)
  if @category.persisted?
    @recipe.categories << @category
  else
    return(erb :'/recipes/new')
  end
end

if @recipe.save
  redirect "/recipes/#{@recipe.id}"
else
  erb :'/recipes/new'
end
```
By adding the error messages given to us by the validation on the Category name, when the form is re-rendered the full error message will be displayed to the user, telling them that the “Category name cannot be blank”.
```
<% if @category && @category.errors.any? %>
  <ul>
    <% @category.errors.full_messages.each do |error| %>
      <li>Category <%= error.downcase %></li>
    <% end %>
  </ul>
<% end %>
```
After all this, from the user perspective, my program was functioning as I wanted. If a user entered in a script or a bunch of spaces, the recipe's new form was rerendered with the error message. And if the user filled out all the fields properly but left the Create a Category form blank, it successfully saved the recipe and redirected the user to the recipe’s show page.
However, I was still confused as to why it was working. My primary stumping point was that, if the Category model has a validation that requires that the category have a name, why was the recipe able to successfully save when there is no input at all?
It turns out that it depends on how you set up the post request in your controller and understanding it requires some knowledge on how blank form fields get submitted. The important piece of code is this:
```ruby
if !params[:category][:name].empty?
```
If we inspect the params that come in to the controller when the form is submitted with the category field unfilled, they look like this:
```ruby
params[:category][:name]
#=> ""
```
Which is an empty string.
`""`[`.empty?`](https://blog.appsignal.com/2018/09/11/differences-between-nil-empty-blank-and-present.html) is true, and so the statement evaluates to `!true`, i.e. `false`. Because the first part of the condition fails, Ruby short-circuits the rest of the code inside the `if` statement, and the next line of code that runs is the `if @recipe.save`. So the validation on the Category name is never run at all! It only gets run when the field is not empty — when a user enters a valid name, a bunch of whitespace, or a script of some kind. And because there is a method that removes whitespace before the validation, malicious input will fail the validation, giving us the error messages.
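A quick sketch of the behavior that makes this work:

```ruby
# The three kinds of input the controller can see for the category name:
puts "".empty?          # true  -> the whole if-branch is skipped, no validation runs
puts "   ".empty?       # false -> the branch runs for whitespace-only input...
puts "   ".strip.empty? # true  -> ...but after stripping, the presence validation fails
```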
It took some playing around with the params and seeing exactly what inputs will return true and false for the #empty? method, but once it clicked, I could see the elegance in how this code was working.
There were lots of bumps along the way to completing this project, and I learned a lot. I also found that making a web app is a lot like making art. I keep thinking of things I can add, changes to make, features to create, edits and little bugs to fix, but at some point you stop working on it, because there are others to be made. It’s not truly finished, just abandoned, for the moment. I’ll probably come back to it, when feeling inspired or wanting more practice, just like I do for my paintings and drawings.
> Shoutout to my cousin Kori for reviewing this blog post, and helping me understand why the validation was never called.
> Kori is a web developer at JOGL.io. Follow him on twitter @koriroys.
| mmcclure11 |
356,465 | imbalanced-learn 0.7.0 is out | imbalanced-learn, probably, is your favorite python package that offers a number of re-sampling techn... | 0 | 2020-06-16T02:24:53 | https://dev.to/chkoar/imbalanced-learn-0-7-0-is-out-1o8 | machinelearning, python, datascience | imbalanced-learn is probably your favorite Python package that offers a number of re-sampling techniques commonly used in datasets showing strong between-class imbalance.
This release should be fully compatible with the latest version of `scikit-learn`.
### Maintenance
- Pipelines can cope with older versions of `joblib`.
- Common tests have been refactored.
- Feature warnings have been removed.
- Imposing keywords only arguments as in `scikit-learn`.
### Changed models
- `imblearn.ensemble.BalancedRandomForestClassifier` is expected to give different results for the same input (using the same random state).
- Fix `make_index_balanced_accuracy` which was unusable due to the latest version of `scikit-learn`.
- Raise a proper error message when only numerical or categorical features are given in `imblearn.over_sampling.SMOTENC`.
- Fix a bug when the median of the standard deviation is null in `imblearn.over_sampling.SMOTENC`.
### Bug fixes
- `min_samples_leaf` default value has been changed to be consistent with `scikit-learn`
### Enhancements
- The classifier implemented in imbalanced-learn, `imblearn.ensemble.BalancedBaggingClassifier`, `imblearn.ensemble.BalancedRandomForestClassifier`, `imblearn.ensemble.EasyEnsembleClassifier`, and `imblearn.ensemble.RUSBoostClassifier`, accept `sampling_strategy` with the same key than in y without the need of encoding y in advance.
- Import `keras` module lazily.
### Installation
You can install it either by using pip
```
pip install imbalanced-learn -U
```
or by using the conda package manager
```
conda update imbalanced-learn
```
The changelog can be found [here](http://imbalanced-learn.org/stable/whats_new.html#version-0-7), while installation instructions, API documentation, examples and a user guide can be found [here](http://imbalanced-learn.org/stable/).
Happy hacking,
Chris | chkoar |
356,695 | Roadmap for Modern Frontend Web Development | I am posting this for those who are just getting started with frontend development. If you are new to... | 0 | 2020-06-17T12:44:29 | https://dev.to/ozanbolel/roadmap-for-modern-frontend-web-development-2od6 | webdev, css, javascript, beginners | I am posting this for those who are just getting started with frontend development. If you are new to coding, it could be better for you to learn a low level programming language first to have a deeper understanding of algorithms and computers. In this post however, I'll be giving a roadmap for frontend newbies. Of course this isn't the only way, but I wanted to share the way I know with you in the style of a questionnaire. So, let's get started!
###**Should I start with React, or Vue, or Angular?**
None. Of. Them. I understand that React, Vue, Angular and TikTok are too darn popular at the moment, but if you want to be good at any of them, you’ll need to have a good grasp on JavaScript. Those frameworks are built on top of JavaScript, not the other way around. Start with learning frontend, not with learning frameworks, and please don’t use TikTok.
###**Where should I start learning frontend?**
Start with the basics. In frontend web development, you’ll be using different technologies in the same environment. It is in your best interest to thoroughly learn every one of them. The first thing you should learn is **HTML**. Then learn **CSS** for styling, and after that advance your skills in **JavaScript**. When you are comfortable with coding in HTML, CSS and JavaScript combined, you’ll have the fundamentals for building complex interfaces like a ninja, regardless of which framework you use.
###**\<h1\>Hello world!\</h1\>, am I set?**
Umm, kinda. HTML is relatively easy, but you should understand things like **inputs**, **forms**, **lists**, **tables** and **metatags**. Experiment, don’t skip them just because they look easy. As I stated before, in frontend you’ll be using several technologies at the same time. You can never know which one will save you time in a random challenging situation.
###**This CSS boi is tricky.**
Yes it is. Let's remember the legend:

When you first start writing CSS, it'll always have its own mind. Don’t let it intimidate you. As you practice more, you’ll realise it's fun to work with (IE developers might disagree). Remember: CSS is what users see, and every now and then it's what they experience. Learn it properly.
I’ll give some keywords that are crucial for you to research and study:
- Viewport
- EM and REM Units
- Responsive Design
- Flexbox
- CSS Grid, FR Unit
- CSS Variables
**Tip:** Use [caniuse.com](https://caniuse.com) to determine which CSS or HTML features you can start using today. Not every browser support every feature or API.
###**What about Bootstrap?**
Fuck it. Using Bootstrap too early will make you lazy and uncompetitive in the field. Once you learn CSS thoroughly, you can always create your own structures for styling.
###**I wanna dive into JavaScript.**
Sure. But don’t dive too deep. Best way to learn JavaScript is to learn JavaScript. Not JQuery, not React, not Vue, not… Well, you got the point. JavaScript is an old pal, and it came a long way since its creation. Make sure your learning material covers the latest goodies. For mastering modern JavaScript; learn **ES6** features and search what **ECMAScript** means.
**Tip:** Move from **var** to **const** and **let**.
**Tip:** Don't forget to look into **async/await**.
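A tiny sketch that pulls those pieces together (runnable as-is in Node or a browser console):

```javascript
// const for bindings that never change, let for ones that do.
const greet = (name) => `Hello, ${name}!`; // arrow function + template literal

let count = 0;
count += 1;

// async/await reads like synchronous code without blocking.
const wait = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function main() {
  await wait(50);
  console.log(greet("frontend")); // Hello, frontend!
}

main();
```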
###**Wait... What about JQuery?**
Fuck it too. Using JQuery too early will make you lazy and uncompetitive in the field. Once you learn JavaScript thoroughly, you will never need JQuery. And yes, those sentences were copypasta from above, because I used JQuery in the past and got lazy. You see it now?
**Tip:** What year is this?! Don’t use JQuery for a new project.
###**Should I know anything else before getting into a framework?**
Yes dear reader. Here’s a list:
- CSS Preprocessors
- NPM
- Babel
- Webpack
You can learn a CSS preprocessor like **SASS** to give CSS superpowers. Also search for other items on the list to understand how today’s frameworks work. Try to create a **webpack project** with them for a deeper understanding.
###**Big question: Which one should I choose; React, Vue, or Angular?**
React. Contrary to what you think, this answer is actually unopinionated. The reason why I’m giving you an one-word answer is purely because React is more common. And more importantly; choosing between them will kill you from the inside unless you start learning one of them.
Sure you can choose Vue for a different approach, or choose Angular for a reason only god knows why (well, that was opinionated). But the main point is no one is keeping you from learning all of them. Just don't waste your time and energy for choosing between them.
Also, when you got started, pay attention to state management. Learn **Redux**, **Context API**, **Vuex**, or other central state management tools depending on which framework you work on.
**Tip:** **Next**, **Nuxt** and **Gatsby** are great tools for eliminating the cons of **client-side rendering** and **SPA**s in general. If you don’t know what I mean, it's perfectly okay. Keep those tools in mind, and do your research.
###**What's next?**
As a frontend developer, you should learn more about **UI** and **UX**. You are in a very critical position where user interacts with the app through you. Good knowledge in UX will carry you further in creating interfaces that users will love. Also, learn more about **colors**, **typography**, and **negative space**.
Definitely look into **testing** and **TypeScript** too. Other than that, I honestly don’t know. Once you taste the feeling of building things that people can interact with, trust me, you'll know what you’ll do next.
**Tip:** Look into tools like **Jest** for testing.
###**Dude, what?! There is so much to learn!**
Do not rush it, take your time. Don’t jump from one thing to another, stay on course. Having good fundamentals in core technologies is the key. Learn the basics, and more will follow. Don’t overload yourself with the idea of “learning everything”. You can’t, and you absolutely don’t need to.
###**What kind of a roadmap is this? You didn’t even explain most of the things you mentioned?**
Probably not the best kind, but this is the point. There are far more experienced and knowledgeable people than me. Search [Egghead.io](https://egghead.io), search YouTube, search Twitter. Find them and learn from them. Make a habit of Googling everything. In this profession you’ve chosen, you should always be searching, and learning. With this post, I'm just trying to light the way for newcomers, and give them a starting point.
---
I hope this was useful, you can also follow me on Twitter for future content:
[twitter.com/oznbll](https://twitter.com/oznbll)
| ozanbolel |
356,794 | COMPILER EXPLORER, or how to easily disassemble your code! | A post by Younup | 0 | 2020-06-16T14:03:29 | https://www.younup.fr/blog/toi-aussi-decompile-ton-code-avec-compiler-explorer | c, compilerexplorer, french, video | {% youtube GzTkqK8vkeA %}
| younup_it |
356,931 | Subscribe to Datasets: New CKAN Feature Explained | Last month, we announced the launch of a new CKAN feature developed by Datopian that allows users to... | 0 | 2020-06-16T15:20:45 | https://dev.to/annabelvandaalen/subscribe-to-datasets-new-ckan-feature-explained-1pf0 | database | Last month, [we announced the launch of a new CKAN feature](https://www.datopian.com/blog/2020/04/28/release-subscribe-to-ckan-datasets/) developed by Datopian that allows users to subscribe to datasets. This is an opt-in feature that sends users an email notification when a dataset to which they are subscribed is changed or updated. Let’s take a look at the feature in more detail.
<figure>
<img src="https://dev-to-uploads.s3.amazonaws.com/i/cgbqn5li5phu1cq69g4d.jpg" alt="Letterbox">
<figcaption style="text-align: center"><a href="https://unsplash.com/photos/P1I67ke0bAU">Photo by Dele Oke on Unsplash</a></figcaption>
</figure>
## Why subscribe to datasets with CKAN?
The subscribe to datasets feature designed by Datopian was born out of the needs of our enterprise customers. In order to provide clients with a robust messaging system, we needed to build a feature outside of the main application process.
Before Datopian developed a subscribe to datasets feature, data portal users had no good way of finding out about changes to datasets. Approaches to notifying users of changes include using RSS feeds or CKAN’s built-in email integration. However, these approaches were not applicable for our client's context because:
* Some datasets and resources can change rapidly, and many different types of stakeholders can subscribe to change notifications. This means that anywhere from 50,000 to 200,000 notifications may be broadcast in a given month.
* Our client wants to extend the notification feature to support additional notification channels as well as email. A next iteration will add SMS notifications, giving users the choice to receive notifications by SMS, email, or both.
Another advantage of the feature is that the granularity is high. Users can currently receive the following information via email notifications:
* The name of the datasets in which a change has taken place.
* Whether the change was applied to a whole dataset, or a single resource.
* Whether there were changes to the metadata.
Here’s an example notification:
<figure style="text-align: center">
<img src="https://dev-to-uploads.s3.amazonaws.com/i/wu088nxg03dq3q1rtm64.png" alt="Screenshot" height="200">
<figcaption>Screenshot section of an example email notification</figcaption>
</figure>
## Overview
<figure style="text-align: center">
<img src="https://dev-to-uploads.s3.amazonaws.com/i/bul95p2084lt5g8a6e4d.jpg" alt="Subscription diagram">
<figcaption>Fig 1.1. Diagram demonstrates that data curators edit the metadata and data of a dataset or resource to which a user is subscribed.</figcaption>
</figure>
<figure style="text-align: center">
<img src="https://dev-to-uploads.s3.amazonaws.com/i/mhzdboi0qvwzivta1xtm.jpg" alt="Subscription diagram 2">
<figcaption>Fig 1.2. Diagram shows, at a high level, the technical design of the data subscription service, including how it interacts with CKAN.</figcaption>
</figure>
## Current features
1. Configure notification frequency - system administrators can determine the frequency with which users receive email notifications. This is particularly helpful for users subscribed to very large datasets that are updated multiple times per minute/hour.
2. Disable notifications for certain datasets - system administrators may opt to disable notifications for certain datasets for a number of reasons. In particular, companies using CKAN data portals may choose to disable notifications for datasets that are updated frequently, should the cost of mass emailing become too high.
## Upcoming features
1. Subscribe to new datasets - soon, CKAN users will be able to receive emails notifying them when new datasets are added to the portal. This is particularly helpful for users monitoring all portal activity.
## How can I get the new feature?
The data subscriptions service is currently available for use. If you are interested in deploying it against your existing CKAN installation, please reach out to us by visiting the project on GitHub [here](https://github.com/datopian/data-subscriptions) and creating an issue. Additionally, [contact Datopian](https://www.datopian.com/contact/) to discuss how we can deploy a data subscription integration for your platform.
## Call to Action!
CKAN is an open-source software that relies on collaboration to develop functionality. If you extend this new feature, we would be really interested in using this code to improve CKAN and thereby encourage others to opt for open-source solutions.
_By Annabel van Daalen and Irio Musskopf, with graphics by Monika Popova._ | annabelvandaalen |
357,107 | Developing in a docker container | Ah, a new open source project that looks interesting. Let's pull it down from GitHub, open it up in... | 0 | 2020-06-16T20:36:42 | https://SimonReynolds.ie/developing-in-a-docker-container/ | docker, netcore, productivity | ---
title: Developing in a docker container
published: true
date: 2020-06-13 13:26:56 UTC
tags: Docker,NETCore,Productivity
canonical_url: https://SimonReynolds.ie/developing-in-a-docker-container/
---
Ah, a new open source project that looks interesting. Let's pull it down from GitHub, open it up in the IDE of our choice and.... Oh... It turns out the build scripts assume you have some build tool already installed or, even worse, a specific version of it.
This is a great example of where the [Remote Development](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.vscode-remote-extensionpack) extension pack for [Visual Studio Code](https://code.visualstudio.com) comes in. Grab it and you can do all your development in a docker container for your projects!
Naturally, you'll need to have Docker installed, either [Docker Desktop](https://www.docker.com/products/docker-desktop) for Windows or MacOS, or [Docker CE/EE](https://docs.docker.com/install/#supported-platforms) for the Linux distribution of your choosing.
And in even better news, with the release of WSL 2, Docker now uses it instead of Hyper-V as the backend, so you can run Docker on Windows 10 Home Edition too! Read more about it [here](https://www.docker.com/blog/docker-hearts-wsl-2/).
Once that's installed all you need to add is a `.devcontainer` folder in the root of your project and throw in a `devcontainer.json` file. You can use it to either specify a prebuilt `image` or specify a `DOCKERFILE` that you can use to build a custom development environment.
You can use this for any project in any language, I'm going to detail how I use it for a .NET Core project built in F# but the same ideas can be applied to any project.
You can base your Dockerfile on any image, [PHP](https://hub.docker.com/_/php/), [NodeJS](https://hub.docker.com/_/node/), the [.NET Core SDK](https://hub.docker.com/_/microsoft-dotnet-core-sdk/), whatever tech stack you're building with. Then build on it, add in whatever extra tools you need, maybe you want to add `git`?
A prebuilt environment containing everything you need, with exactly the versions you want. What more could you need?
<figcaption>But wait...there's more!</figcaption>
Why stop at specifying just your build environment? How about what VS Code settings or extensions are needed?
Let's extend the `devcontainer.json` with some extra instructions. We can say we want some extensions installed by default. If you use VS Code for F# then you probably already have the amazing [Ionide](https://ionide.io/) plugin installed.
But you want your project to be easy for newcomers to get up and running. Telling them to go install 5 different extensions just to get started isn't the best way to start off a relationship with a potential new user or contributor.
Let's just list the extensions we need instead...
<!--kg-card-begin: markdown-->
```json
{
"name": "MyAwesomeApp",
"dockerFile": "Dockerfile",
"appPort": [8080],
"extensions": [
"ionide.ionide-fsharp",
"ms-vscode.csharp",
"editorconfig.editorconfig",
"ionide.ionide-paket",
"ionide.ionide-fake"
]
}
```
<!--kg-card-end: markdown-->
What about VS Code settings? All settings in VS Code are specified in json files, so we just add one to the `.devcontainer` folder and reference it in the Dockerfile.
<!--kg-card-begin: markdown-->
```json
{
"FSharp.fsacRuntime":"netcore"
}
```
<!--kg-card-end: markdown-->
This example is named `settings.vscode.json` so we add it to our Dockerfile
<!--kg-card-begin: markdown-->
```docker
FROM fsharp:netcore
# Copy endpoint specific user settings into container to specify
# .NET Core should be used as the runtime.
COPY settings.vscode.json /root/.vscode-remote/data/Machine/settings.json
# Install git, process tools
RUN apt-get update && apt-get -y install git procps
```
<!--kg-card-end: markdown-->
So now when we open the project in VS Code we get a prompt to reopen it in a container. It will create a Docker container based on the supplied `Dockerfile`, automatically open and forward port 8080 from the container to the host, and install all the extensions listed. Now we can happily write our build script that assumes the presence of a build tool, and making changes to it will be a smoother experience because VS Code will even have the right extensions for it!
The full reference for what `devcontainer.json` can do is [here](https://code.visualstudio.com/docs/remote/containers#_devcontainerjson-reference), and it can do a lot! | simonreynolds |
357,286 | PHP Fatal error: Uncaught ReflectionException: Class mailer does not exist | Sorry. only japanese. | 0 | 2020-06-17T05:08:45 | https://dev.to/a_yasui/php-fatal-error-uncaught-reflectionexception-class-mailer-does-not-exist-2ke6 | laravel | ---
title: PHP Fatal error: Uncaught ReflectionException: Class mailer does not exist
published: true
description: Sorry. only japanese.
tags: laravel
//cover_image: https://direct_url_to_image.jpg
---
A common pitfall. In most cases the cause seems to be a failure while the Facade is being written out.
Another possibility: classmap generation throws an error somewhere completely unrelated, but because `App\Exceptions\Handler` dutifully sends error notification emails, `Mail::` ends up being used before it is ready; something along those lines.
## Check point 1
file : `app/Exceptions/Handler.php`
If an email is sent inside the `report` method, comment it out.
During an upgrade, an error caught by `report` can fire before the Mail facade has been compiled.
## Try 1
Check that `Illuminate\Mail\MailServiceProvider::class` is not missing from the `providers` array in `config/app.php`.
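For reference, this is roughly what that entry looks like inside `config/app.php` (the surrounding lines are illustrative; a real file lists many more providers):

```php
<?php
// config/app.php

return [
    // ...

    'providers' => [
        // ... other framework service providers ...

        // This is the provider the Mail facade resolves to.
        Illuminate\Mail\MailServiceProvider::class,

        // ... application service providers ...
    ],
];
```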
## Try 2
`composer dumpautoload`
## Try 3
```
> rm -rf vendor && composer install
```
## Try 4
If `bootstrap/cache/compiled.php` exists, delete it.
```
> rm -f bootstrap/cache/compiled.php
```
## References
- [Stack overflow -- ReflectionException: Class mailer does not exist](https://stackoverflow.com/questions/54151821/reflectionexception-class-mailer-does-not-exist)
- [Stack overflow -- Laravel Class mailer does not exist](https://stackoverflow.com/questions/38503654/laravel-class-mailer-does-not-exist)
- [Class mailer does not exist when using ANY artisan command](https://stackoverflow.com/questions/55976631/class-mailer-does-not-exist-when-using-any-artisan-command)
- [Update to laravel 5.4, class Mailer does not exist](https://laracasts.com/discuss/channels/forge/update-to-laravel-54-class-mailer-does-not-exist)
| a_yasui |
357,119 | Top weekly stories in tech and programming - The Grind: Issue #1 | Subscribe to The Grind to get the best tech and programming stories of the week delivered to your in... | 0 | 2020-06-16T20:27:30 | https://dev.to/treyhuffine/top-weekly-stories-in-tech-and-programming-the-grind-issue-1-3pli | javascript, webdev, codenewbie | ---
title: Top weekly stories in tech and programming - The Grind: Issue #1
published: true
description:
tags: javascript,webdev,codenewbie
cover_image: https://miro.medium.com/max/4760/1*IwqaMXLYt1GToBiGnWQlRQ.png
---
[**_Subscribe to The Grind_**](https://thegrind.news) _to get the best tech and programming stories of the week delivered to your inbox._
----------
### 📈 **Trends in Tech**
[**Microsoft: Rust Is the Industry’s ‘Best Chance’ at Safe Systems Programming — The New Stack**](https://thenewstack.io/microsoft-rust-is-the-industrys-best-chance-at-safe-systems-programming/#) (4min 23sec)
Microsoft has publicly stated that they are moving away from C++ in favor of the equally highly performant Rust language. The announcement follows similar adoption at other tech giants including Apple, Google, Dropbox, Cloudflare, and AWS.
[**Facebook’s TransCoder AI can transform code to different programming languages**](https://venturebeat.com/2020/06/08/facebooks-transcoder-ai-converts-code-from-one-programming-language-into-another/) (5min 7sec)
Transcompilation — translating code from one language to another — has been a longstanding necessary evil of developing applications for multiple platforms, or porting apps from one platform to another. Facebook’s new tool could be the beginning steps in creating a Rosetta Stone of code, where languages can seamlessly be swapped with each other.
[**OpenAI launches an API to commercialize its research**](https://venturebeat.com/2020/06/11/openai-launches-an-api-to-commercialize-its-research/) (5min 13sec)
The company OpenAI has created an API, which constitutes a big step towards their goal of providing the power of natural language processing to the entire development community. You can request to join the API’s waitlist here.
----------
### 🔐 Security
[**Honda halts production at some plants after being hit by a cyberattack**](https://arstechnica.com/information-technology/2020/06/honda-halts-production-at-some-plants-after-being-hit-by-a-cyberattack/) (2min 18sec)
Honda was forced to suspend factory operations in multiple plants in the US and abroad due to a ransomware infection, echoing a similar attack against the company in 2017 using the now infamous WannaCry worm.
[**Google says Iranian, Chinese hackers targeted Trump, Biden campaigns**](https://techcrunch.com/2020/06/04/google-china-iran-trump-biden/) (2min 19sec)
In an official statement from Google, researchers found the Biden campaign was targeted by a Chinese group, and the Trump campaign was targeted by an Iranian group. Though neither attack appears to have succeeded, this is surely not the last attempt we’ll see against American political campaigns as the general election approaches.
----------
### 📚 **Tech & Society**
[**Researchers find racial discrimination in ‘dynamic pricing’ algorithms used by Uber, Lyft, and others**](https://venturebeat.com/2020/06/12/researchers-find-racial-discrimination-in-dynamic-pricing-algorithms-used-by-uber-lyft-and-others/) (7min 11sec)
Turns out the cost of your last commute might have as much to do with your skin color as with the distance or time of day you traveled. An interesting morality tale about what can happen when machine learning is applied to social data.
[**The Internet’s most important — and misunderstood — law, explained**](http://arstechnica.com/tech-policy/2020/06/section-230-the-internet-law-politicians-love-to-hate-explained/) (17min 57sec)
When Twitter made the decision to start fact checking Donald Trump’s tweets recently, Twitter CEO Jack Dorsey took a bold stance against misinformation. Trump hopes to seek legal revenge by attacking the 1996 law Section 230, which gives social media companies virtually zero accountability for the content on their platforms. But interestingly, Joe Biden feels the same way as Trump does about this law. Take a deeper dive to understand what this law is, how it might change, and how these changes could affect you.
[**Pentagon Documents Reveal The U.S. Has Planned For A Bitcoin Rebellion**](https://www.forbes.com/sites/billybambrough/2020/06/10/pentagon-documents-reveal-the-us-has-planned-for-a-bitcoin-rebellion/#5da2cf184cc0) (2min 46sec)
Official documents from the U.S. Department of Defense describe a Robin Hood-esque organization called Zbellion, that converts stolen money into bitcoin. They then turn it around to fund a global anti-establishment movement.
[**Activists rally to save Internet Archive as lawsuit threatens site**](https://decrypt.co/31906/activists-rally-save-internet-archive-lawsuit-threatens) (4min 43sec)
The Internet Archive is a non-profit organization, and the site hosts a treasure trove of cultural artifacts of all sorts, including millions of movies, books, software titles, and audio recordings. When it temporarily suspended the waitlist feature to help address the COVID-19-driven shutdowns of public libraries, publishing giants Hachette, HarperCollins, Penguin Random House, and Wiley sued over it. And they’re seeking damages of $210,000,000,000 (that’s two hundred and ten billion dollars). That would be game over for the Archive.
----------
### 🕴 **Tech Business**
[**Facebook establishing VC arm to invest in startups**](https://www.axios.com/facebook-establishing-a-venture-arm-to-invest-in-startups-91d9ee71-2282-4032-8f31-45b861a6ba9c.html) (3min 24sec)
Facebook is making a big push to fund startups, hoping to get the jump on the next big thing before it blows up.
[**Zoom closed account of U.S.-based Chinese activist “to comply with local law”**](https://www.axios.com/zoom-closes-chinese-user-account-tiananmen-square-f218fed1-69af-4bdd-aac4-7eaf67f34084.html) (2min 58sec)
Zoom shut down the accounts of Chinese activists living in America for holding a meeting to remember the Tiananmen Square Massacre. This was likely a move to appease China’s law prohibiting discussion of the famous pro-democracy Tiananmen protests that occurred in 1989.
----------
### ⚕ **Health**
[**Is Dark Mode Such A Good Idea?**](https://kevq.uk/is-dark-mode-such-a-good-idea/) (4min 33sec)
I like dark mode. At this moment, I’m typing this on my laptop in an app using dark mode. Even though I haven’t come back from the dark side just yet, this short, educational read has me second-guessing this choice for the first time since dark mode has been an option.
[**The Reality of Developer Burnout**](https://kenreitz.org/essays/the-reality-of-developer-burnout) (4min 48sec)
From a few years ago, but still highly relevant today. Most of us have experienced burnout before. Understanding how to notice it, and how to deal with it before it becomes a problem, can save you from a boatload of stress, illness, fatigue, and premature gray hairs. Solid strategies and great practical advice here.
----------
### 🛠 **Dev Tools**
[**Ikonate**](https://ikonate.com/) — Fully customizable & accessible vector icons
This site is a wonderful tool for generating a set of icons for your next project, free of charge. Customize size, border density and color to match your design or idea.
[**Grid.js**](https://gridjs.io/) — Grid.js is a Free and open-source HTML table plugin written in TypeScript
An open source project that sets out to solve the all-too-common problem of developing robust, responsive, reusable, data driven tables. Grid.js provides a simple API, and boasts easy extensibility.
[**Discuss**](https://news.ycombinator.com/item?id=23468193): Which tools have made you a better programmer?
A great thread, where developers from all walks of life discuss the tools that got them to where they are now. It’s a great conversation to learn about a new tool that might make a big difference for you, or share what tools you’ve benefited from over the course of your career.
----------
### ✨ **Fancy Projects**
[**Futuremood**](https://futuremood.com/)
We have no idea if the AURAFLOW 5000 sunglasses really alter your mood (the company promises it’s been “scientifically proven” 😂), but we are sure that Futuremood knows how to make an eye catching website.
[**Think Bear**](https://thinkbear.net/)
The portfolio of digital designer Jens Nielsen has a beautifully minimalist modern design. He uses the glamorous Bigilla font to great effect, set within a restrained, black, and yellow palette. Attractive and elegant.
### Meme of the Week

_Does anyone do social distancing better than programmers? Probably not. We’ve had a lot of practice, and we’re up to the challenge._ 💪🤓
----------
[**_Subscribe to The Grind_**](https://thegrind.news) _to get the best tech and programming stories of the week delivered to your inbox._ | treyhuffine |
357,134 | Awesome hackathon prizes? | I’ve been to so many hackathons I can’t count anymore. When it comes to prizes, I thought I have seen... | 0 | 2020-06-17T20:17:55 | https://dev.to/jdorfman/awesome-hackathon-prizes-2ehf | discuss, hackathon | ---
title: Awesome hackathon prizes?
published: true
description:
tags: #discuss #hackathon
cover_image: https://p21.p4.n0.cdn.getcloudapp.com/items/E0uzny0E/Image%202020-06-17%20at%201.25.07%20PM.png?v=308f6c24bfd818eebfaae443a5268394
---
I’ve been to so many hackathons I can’t count anymore. When it comes to prizes, I thought I had seen them all¹ until I saw "Zneakers" in my Twitter feed yesterday. 😍
{% twitter 1272654699498336258 %}
My question to you is, what are some awesome hackathon prizes you’ve seen (or won)?
1. Oculus Rifts, Arduinos, shirts, socks, drones, etc.
| jdorfman |
357,162 | An Opinionated Review of Asynchronous Team Collaboration Tools for 2020 | The opinionated part is that this review assumes the main purpose of team collaboration software is t... | 0 | 2020-06-16T22:37:06 | https://dev.to/uclusionhq/an-opinionated-review-of-asynchronous-team-collaboration-tools-for-2020-9p9 | productivity, agile | The opinionated part is that this review assumes the main purpose of team collaboration software is to serve the communications needs of the team using it. If you are more in the market for a tool that organizes across projects and teams, focuses on reporting productivity or tracks hours billed then this review will not be as useful. You have to have a main purpose in mind when you choose your team’s tools because software that does it all would be too complex to do any of it well.
Here we’re also going to assume that your video conferencing and direct messaging needs are out of scope. I don’t think the addition of synchronous communication will make a big difference in your choice of asynchronous tools. Most likely the best in class asynchronous team collaboration tool wins regardless of whether or not it has integrated video conferencing and synchronous messaging.
## Uclusion
### Where do your requirements go?
In a workspace’s description:

Somewhat like Wiki when the description changes everyone will be notified and see a diff.
### Where do your stories go?
In swimlanes showing what each person in a workspace is up to. Note that you can only have one story in progress per Workspace and so you always know what is actually being worked on.

### Where does your communication about requirements and stories go?
Structured, resolvable comments in a workspace or story. If you open a blocking comment it blocks, a TODO must be closed before changing stage, etc.

You can also reach a decision about requirements or implementation of a story in a dialog:

Finally, you can kick off a new project or direction by launching an initiative, which has up/down voting with certainty and reasons:

### How do you know what everyone has done, will do now and will do next?
The swimlanes shown above already display at a glance what everyone is up to but in addition you are prompted to vote certainty, days estimate and reason on the assignment of a story right inside the tool without having to have additional meetings.
## Trello
### Where do your requirements go?
In cards on a Kanban board. According to Trello it would look something like this:

*From [https://trello.com/en-US/teams/engineering](https://trello.com/en-US/teams/engineering)*
Where every requirement, no matter how small, gets its own card in an ever growing backlog.
### Where do your stories go?
Same place.
### Where does your communication about requirements and stories go?
[Inside a card](https://help.trello.com/article/765-commenting-on-cards).
*Seriously?*
### How do you know what everyone has done, will do now and will do next?
Trello [recommends](https://blog.trello.com/trello-power-ups-for-remote-work) the [daily updates power up](https://trello.com/power-ups/5d5b3b96fe9c9f88bc7bd311):
*I’m not making this up*
## Trello + Slack
Like the Trello solution except now some of your messaging will no longer be co-located with the requirement or story it is about and instead scroll up the screen hoping to be noticed before it disappears.
## Trello + Slack + Google Docs
We asked someone what they liked about this solution and they said, “Everything integrates well with each other.”
## Azure DevOps
### Where do your requirements go?
Kind of laborious but you could put the requirements in an Epic and then link Issues to the Epic:

### Where do your stories go?
Following the above idea (Azure DevOps is non-opinionated software) they go in issues and issues have their own Kanban display like above.
### Where does your communication about requirements and stories go?
Rudimentary commenting inside the cards like Trello.
### How do you know what everyone has done, will do now and will do next?
Filter the Kanban board by assignee. You can add columns to the Kanban board and you can also add what are called swimlanes to the columns (like a subcolumn). However, this approach is unlikely to tell you much because Azure DevOps encourages backlogs. So it's a bit like looking at someone's email inbox to figure out what they are doing.
Essentially Azure DevOps, Jira, Linear and others are designed for issue tracking - smaller, potentially customer driven, problems that are less about approval and design and more about cranking out on a schedule. Using an issue tracker is a good idea for anything that does not require much collaboration.
## Asana
Same as Trello in use of a Kanban board but comments inside a card look like this

and if you want to know what people are up to you apparently [create a whole different Kanban board](https://asana.com/templates/for/marketing/agile-daily-standup) and search inside each card one by one.
## Monday.com
Less of a collaboration tool and more of a pre built spreadsheet:

## Clubhouse

Less of a collaboration tool and more like a database barfed on you. Do your requirements go in an Epic or a Project? Or maybe you use a Workflow?

Supposedly there are swim lanes like Uclusion has but after 30 minutes of really trying I was unable to get them to show. The way to compete with Jira is not by being more complex.
## Zepel
### Where do your requirements go?
In their own list:

### Where do your stories go?
On a Kanban board which is a different view of the requirements list:

### Where does your communication about requirements and stories go?
A requirement and the story to do a requirement share a card so you can comment inside of it:

### How do you know what everyone has done, will do now and will do next?
I wasn’t able to find any way to do this. You can generate a burn down / up report

but going back to our opinionated purpose how is this report actionable for the team doing the work? It doesn’t say who is blocked on what or give any indication of progress reports etc.
## Favro
Again the opinionated issue comes up — are we optimizing for ease of development or ease of reporting? Or put another way is it more important that stories get done or that they get done on a certain timeline?

There could well be situations where the timeline is the more important factor. For instance, suppose zero creativity is required and success depends solely on the speed of completing a multitude of straightforward tasks.
## Microsoft Teams
According to Microsoft [support](https://support.microsoft.com/en-ie/office/add-a-kanban-board-to-teams-3cf84256-c2b8-4108-83fa-f4e93d1ffa57) you can add a Kanban board by

Teams literally does everything — chat, boards, wiki, conferencing, screen sharing, you name it. Reviewing it as a solution is like reviewing a laptop as a team collaboration tool; a team that collaborates is likely to have laptops and they also may be using Microsoft for something.
## Notion
Similar to Teams but even more so Notion is more like a high level programming language for collaboration applications. Excellent if you want something specific that you are willing to build yourself or you want to share finished content that you worked on by yourself.
| uclusion |
357,175 | Configure Travis CI for Ruby on Rails | Original Article Source Code In this tutorial I am going to show you how to configure Travis CI to... | 0 | 2020-06-17T00:19:13 | https://stevepolito.design/blog/configure-travis-ci-for-ruby-on-rails/ | ruby, rails | - [Original Article](https://stevepolito.design/blog/configure-travis-ci-for-ruby-on-rails/)
- [Source Code](https://github.com/stevepolitodesign/rails-travis-ci-example)
In this tutorial I am going to show you how to configure Travis CI to run your Rails' test suite and system tests everytime you push a new change to your repository.
## Create a Simple Rails Application
First we'll need to create a simple Rails application. Open up your terminal and run the following commands.
1. `$ rails new rails-travis-ci-example -d=postgresql`
2. `$ rails db:create`
3. `$ rails g scaffold Post title body:text`
 - This step will automatically generate tests and system tests.
4. `$ rails db:migrate`
## Configure Rails Application to run System Tests in Travis CI
Rails is configured by default to run [system tests](https://guides.rubyonrails.org/testing.html#system-testing) in Google Chrome. However, I ran into an issue with Travis CI when it came to running system tests using the default configuration. My solution was to update `test/application_system_test_case.rb` by declaring `:headless_chrome` instead of the default `:chrome` setting.
1. Edit `test/application_system_test_case.rb`
```ruby
# test/application_system_test_case.rb
require "test_helper"
class ApplicationSystemTestCase < ActionDispatch::SystemTestCase
driven_by :selenium, using: :headless_chrome, screen_size: [1400, 1400]
end
```
2. Run the test suite locally to ensure it works and passes.
```
$ rails test
$ rails test:system
```
## Configure Travis CI to run the Rails Test Suite and System Tests
Next we need to create a `.travis.yml` file in order for Travis CI to know how to build our application.
1. Create a `.travis.yml` file and add the following:
```yml
language: ruby
cache:
- bundler
- yarn
services:
- postgresql
before_install:
- nvm install --lts
before_script:
- bundle install --jobs=3 --retry=3
- yarn
- bundle exec rake db:create
- bundle exec rake db:schema:load
script:
- bundle exec rake test
- bundle exec rake test:system
```
|Key|Description|
|-------|-----------|
|[os](https://config.travis-ci.com/ref/os)|Sets the build's operating system. Note that we did **not** add an `os` key, and are using the [default environment](https://docs.travis-ci.com/user/reference/xenial/)|
|[language](https://config.travis-ci.com/ref/language)|Selects the language support used for the build. We select `ruby` since this is a Rails project|
|[cache](https://config.travis-ci.com/ref/job/cache)|Activates caching of content that does not often change in order to speed up the build process. We add `bundler` and `yarn` since Rails uses [bundler](https://bundler.io/) and [yarn](https://yarnpkg.com/) to manage dependencies.|
|[services](https://config.travis-ci.com/ref/job/services)|Services to set up and start. We add `postgresql` since our database is postgresql. You could also add `redis`.|
|[before_install](https://config.travis-ci.com/)|Scripts to run before the install stage. We add `nvm install --lts` to use the latest stable version of Node. This will be needed when we run `yarn` later.|
|[before_script](https://config.travis-ci.com/)|Scripts to run before the script stage. This sets up our Rails application. Note that I do not seed the database, since we only care about the test environment. I run `bundle install --jobs=3 --retry=3` instead of `bundle` because that's what the [documentation](https://docs.travis-ci.com/user/languages/ruby#bundler) recommends.|
|[script](https://config.travis-ci.com/)|Scripts to run at the script stage. In our case, we just run our tests.|
2. Log into Travis CI and navigate to `https://travis-ci.org/account/repositories`.
3. Search for your repository and enable it. If your repository doesn't appear, click the **Sync account** button.
4. Navigate to your project and trigger a build. Alternatively, make a new commit and push to GitHub to trigger a new build.
5. If you're using Heroku you can use GitHub as your deployment method and enable automatic deployments, but have it configured to wait for the CI to pass first.
---
Did you like this post? [Follow me on Twitter](https://twitter.com/stevepolitodsgn) to get even more tips. | stevepolitodesign |
357,212 | Day 16 - #100DaysofCode - Understanding MVC | Table Of Contents What is MVC? Model View Controller Advanta... | 7,070 | 2020-06-17T02:14:11 | https://dev.to/sincerelybrittany/day-16-100daysofcode-understanding-mvc-3h75 | ruby, 100daysofcode, womenintech, codenewbie | <center> Table Of Contents
[What is MVC?](#chapter-1)
[Model](#chapter-2)
[View](#chapter-3)
[Controller](#chapter-4)
[Advantages of MVC framework](#chapter-5)
[Resources](#chapter-6)
</center>
### What is MVC? <a name="chapter-1"></a>
MVC stands for Model-View-Controller and is essentially a way an application can be designed. It is not required when building an application, but is a pattern that you may come across while building applications. Each part has a particular use and interacts with the others.
### Model <a name="chapter-2"></a>
The <b>Model</b> stores data (usually from your database) and handles the logic. <i><q>It represents data that is being transferred between controller components or any other related business logic.</q></i>
In a simple Sinatra app, this is where you will inherit from ActiveRecord::Base and define your ActiveRecord associations. In addition, you would set and authenticate against a <a href="https://api.rubyonrails.org/classes/ActiveModel/SecurePassword/ClassMethods.html"> BCrypt password</a> inside of the model's class.
### View <a name="chapter-3"></a>
The <b> view </b> is where the user can input information and is what the user can see. So usually your ``.erb`` or ``.html`` files. It is everything your user sees and interacts with when visiting your application.
### Controller <a name="chapter-4"></a>
The <b> Controller </b> is the middleman. It handles and updates both the models (database) and views(user input). It can accept user input and perform CRUD actions. What is CRUD? CRUD is Create, Read, Update, and Delete -- I have reviewed CRUD in past posts. But I feel like this <a href="https://www.codecademy.com/articles/what-is-crud"> code academy article </a> puts it all together.
Do you remember <a href="https://dev.to/sincerelybrittany/day-11-100daysofcode-restful-routes-1ab1"> RESTful routing</a>? Well, the Controller interacts with the views to ensure restful routing is successfully used for user accessibility and our sanity.
In short, with MVC everything interconnects -- <q> The model handles the data (usually from the database), the view is the user interface, and the controller processes all of the inputs from the database and the user interface. </q>
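To make the three roles concrete, here is a tiny framework-free Ruby sketch (a toy illustration of my own; the class names are made up and none of this is Sinatra- or Rails-specific):

```ruby
# Model: holds data and logic. In a Sinatra app this class would
# inherit from ActiveRecord::Base instead of storing plain attributes.
class Post
  attr_reader :title, :body

  def initialize(title, body)
    @title = title
    @body = body
  end
end

# View: turns a model into something the user sees,
# standing in for an .erb or .html template.
class PostView
  def render(post)
    "<h1>#{post.title}</h1><p>#{post.body}</p>"
  end
end

# Controller: the middleman. It accepts user input, updates the
# model (here an in-memory array stands in for the database),
# and hands the result to the view.
class PostsController
  def initialize
    @posts = []
  end

  def create(title, body) # the "C" in CRUD
    @posts << Post.new(title, body)
  end

  def show(index) # the "R" in CRUD
    PostView.new.render(@posts[index])
  end
end

controller = PostsController.new
controller.create("Hello MVC", "Model, View, Controller")
puts controller.show(0)
# => <h1>Hello MVC</h1><p>Model, View, Controller</p>
```

Every interaction flows through the controller; the user never touches the model or view directly, which is exactly the separation MVC is after.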
### What are the Advantages of MVC framework and why do we use it? <a name="chapter-5"></a>
I read two articles, <a href="https://www.whizsolutions.co.uk/advantages-using-mvc-framework-web-development/"> whizsolutions </a> and <a href="https://www.brainvire.com/six-benefits-of-using-mvc-model-for-effective-web-application-development/#:~:text=Faster%20development%20process%3A,logic%20of%20the%20web%20application"> Brainvire </a>, and both indicated that these are some of the top advantages of using the MVC framework:
<ol>
<li>Saves time and makes effective use of resources – saving time and money while managing resources effectively.</li>
<li>Facilitates multiple views, separates data, and reduces duplication of code.</li>
<li>Modification does not affect the entire model.</li>
<li>SEO-friendly platform, providing an easy way to develop SEO-friendly URLs in order to generate more visits to specific pages.</li>
</ol>
Be sure to keep in mind that although MVC is a great tool and design to use, it is not the only way to build an application and not everything will necessarily fit into the MVC design. For example, while building my application I needed a different class for my API manager that was not necessarily a model, view, or controller. It is okay to create your own classes and to do what is best for your particular needs.
### Resources <a name="chapter-6"></a>
MVC resources
<a href="https://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller"> MVC Wikipedia </a>
<a href="https://www.guru99.com/mvc-tutorial.html"> MVC Guru </a>
<a href="https://techterms.com/definition/mvc"> MVC Tech Terms </a>
<a href="https://www.tomdalling.com/blog/software-design/model-view-controller-explained/"> MVC Tom Dalling Blog </a>
CRUD resources
<a href="https://www.codecademy.com/articles/what-is-crud"> Code Academy Article </a>
<a href="https://dev.to/sincerelybrittany/day-9-100daysofcode-activerecord-and-a-database-6de"> Day 9: #100DaysofCode - ActiveRecord and a Database </a>
<a href= "https://dev.to/sincerelybrittany/day-10-100daysofcode-activerecord-and-a-database-17i8"> Day 10: #100DaysofCode - ActiveRecord and a Database
</a>
RESTful resources
<a href="https://dev.to/sincerelybrittany/day-11-100daysofcode-restful-routes-1ab1"> RESTful routing </a>
Song of the day:
{% soundcloud https://soundcloud.com/ravynlenaesounds/free-room-ft-appleby-prod-by-monte-booker %}
| sincerelybrittany |
357,304 | HTML Clickable Image Alternative | Yesterday we had a look at the HTML map element, and as mentioned, there might be a better solution n... | 0 | 2020-06-17T06:03:40 | https://daily-dev-tips.com/posts/html-clickable-image-alternative/ | html | Yesterday we had a look at the [`HTML` `map` element](https://daily-dev-tips.com/posts/html-image-map/), and as mentioned, there might be a better solution nowadays.
Today we'll be looking at creating a very similar effect, but with cool hovers.
## HTML Structure
```html
<div class="container">
<img
width="467px"
src="https://images.unsplash.com/photo-1491378630646-3440efa57c3b?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=crop&w=934&q=80"
/>
<a href="#sky" class="overlay overlay-sky"></a>
<a href="#sea" class="overlay overlay-sea"></a>
<a href="#sand" class="overlay overlay-sand"></a>
</div>
```
So what we are doing is creating a container div, with the image inside and our overlay areas. We are making three areas here to click on.
## CSS Structure
We are going to use `position: absolute` to position the areas correctly. Therefore the container needs to be relative and inline.
```css
.container {
position: relative;
display: inline-block;
}
```
And then the overlays share the following `CSS`:
```css
.overlay {
position: absolute;
width: 100%;
left: 0;
}
```
And then for the specific ones:
```css
.overlay-sky {
height: 150px;
top: 0;
}
.overlay-sea {
height: 300px;
top: 150px;
}
.overlay-sand {
height: 255px;
bottom: 0px;
}
```
And last, we will add a very simple hover effect to demonstrate.
```css
.overlay:hover {
background: rgba(0, 0, 0, 0.3);
}
```
View and play on this Codepen.
<p class="codepen" data-height="265" data-theme-id="dark" data-default-tab="html,result" data-user="rebelchris" data-slug-hash="yLeaagr" style="height: 265px; box-sizing: border-box; display: flex; align-items: center; justify-content: center; border: 2px solid; margin: 1em 0; padding: 1em;" data-pen-title="HTML Clickable Image Alternative">
<span>See the Pen <a href="https://codepen.io/rebelchris/pen/yLeaagr">
HTML Clickable Image Alternative</a> by Chris Bongers (<a href="https://codepen.io/rebelchris">@rebelchris</a>)
on <a href="https://codepen.io">CodePen</a>.</span>
</p>
<script async src="https://static.codepen.io/assets/embed/ei.js"></script>
## Browser Support
So this can be supported by every browser! It might need some hacks in the old Internet Explorers.
### Thank you for reading, and let's connect!
Thank you for reading my blog. Feel free to subscribe to my email newsletter and connect on [Facebook](https://www.facebook.com/DailyDevTipsBlog) or [Twitter](https://twitter.com/DailyDevTips1)
| dailydevtips1 |
357,355 | My attempt at recreating AWS | For the last 3 months or so, I have been making a new project that is a service- providing (almost)... | 0 | 2020-06-17T11:16:18 | https://dev.to/17lwinn/my-attempt-at-recreating-aws-5647 | html, javascript | For the last 3 months or so, I have been making a new project that is a service- providing (almost) the same services as Amazon Web Services (AWS).
Not releasing the name in this post yet.
Basically you navigate to https://pws-cluster-list.glitch.me and you'll be on our 'cluster' page, a set of open-source community-run web desktops. The whole service runs on glitch.com and has been a good thing to do.
Some help decoding the names:
- SR1 = Server: has applications designed for use with servers/WS (Due to SSH issues we will remove xTerm at some point)
- W2 = Write: Mainly for writing, with a PDF reader and text editor alongside other apps.
- C3 (beta) = This is kind of a first look at the new cluster software (AuoraOS 4); it has a configuration wizard and a welcome screen alongside other apps. I would leave this for a few days while we add more to it :)
-----------
They each have a Virtual File System and are regularly updated
I might take a break from cluster computing (let word spread) and start working on a database as a service- idk yet.
Cluster software - OS.js (SR1/W2 clusters) auoraOS 4 (C3 clusters)*
*- update expected soon which will switch us to auoraOS, stay tuned!
<b>To self-host our clusters, go to https://github.com/ProTech-IT-Solutions/PWS-server-cluster-temp (published without configuration wizard).</b>
Check the wiki for self hosting instructions!
<small> visit ProTech IT Solutions website at https://ptuk.tk</small> | 17lwinn |
357,399 | Hacktoberfest broke its promise | It's almost a year from last Hacktoberfest, same as a many of my friends in Germany, we didn't receiv... | 0 | 2020-06-17T09:34:57 | https://dev.to/tavallaie/hacktoberfest-broke-its-promise-5enh | hacktoberfest, digitalocean | It's almost a year since the last Hacktoberfest. Like many of my friends in Germany, I didn't receive the Hacktoberfest T-shirt of 2019 because their fund was limited! When I tracked my shirt, it came to Germany but after 3 days returned to the US. And now it was destroyed! I'm very frustrated now! | tavallaie |
357,406 | List of Major Android App Development Challenges That Developers Faces | Creating Apps for the Android OS gives a great deal of opportunity to developers and access to an eve... | 0 | 2020-06-17T09:53:18 | https://dev.to/abbasmurtza/list-of-major-android-app-development-challenges-that-developers-faces-347i | development, developmentchallenges, android | Creating Apps for the Android OS gives a great deal of opportunity to developers and access to an ever-developing user base to the app owner. Be that as it may, the developers face numerous Android app development challenges simultaneously.
The Android platform presents numerous extraordinary opportunities for Android app developers. There are also numerous [Android app development experts](https://ripenapps.com/android-app-development) who can assist ventures in making novel business applications for the Android platform. Being the most popular OS for smartphones across the globe, Android gives the app owner access to a colossal user base.
An Android app developer needs to consider different measures before setting up, creating, and testing the mobile application. The core areas may include the functionality, accessibility, performance, security, and usability of the app, so that users stay engaged with the app for longer. Advanced Android devices also make it necessary to deliver a personalized user experience across different devices as well as versions of the operating system. Developers are expected to address various critical challenges when building a reliable Android app.
However, despite the many opportunities, developers must face numerous challenges.
### Software Fragmentation:
As said earlier, the individual Android versions each hold a different share of the market. According to some reports from Google, one of the recent versions of the Android platform holds a lower market share than older ones.
There are numerous Android OS versions which developers find hard to keep up with when it comes to app development. It is impractical to focus only on the latest Android version, as not all users may have upgraded to the latest OS.
### Hardware Fragmentation:
Android is an open-source mobile operating system, which sets it apart from the other OSs. Device manufacturers are permitted by Alphabet to customize the operating system according to their particular needs.
This becomes a major Android app development challenge since there is a huge variety of devices running the OS. Every device has different features, such as keyboard designs, screen sizes, camera buttons, and so forth, making it a development nightmare.
### No Software/Hardware Standardization:
The enormous number of devices running Android gives rise to another Android app development challenge: the lack of software/hardware standardization across devices. This becomes a nightmare for developers, as each device may assign a different function to the same button.
### A Few Carriers:
Android app development service providers should realize that there are numerous carriers available for the Android OS, each with the freedom to customize the OS for their own purposes. This only multiplies the fragmentation issues for developers.
### Security:
The open-source nature of Android lets device manufacturers easily modify the platform according to their particular needs. However, the constantly rising popularity and openness of the system have made it vulnerable to constant security attacks.
Unlike Apple's strict guidelines for app development, no such governance exists for Android apps. Thus, many malware issues arise, and software/hardware fragmentation only makes fixing them more difficult. This gives rise to huge numbers of security issues.
### Market Research Costs:
One of the biggest Android app development challenges for developers is the cost of market research. Understanding the end user is key to Android application development, yet it can require a great deal of research, making it expensive for developers.
### Patent Issues:
Recent lawsuits demonstrate that several Android features might be declared an infringement of existing patents. This can become a major [Mobile application development](https://ripenapps.com/app-development-company-australia) challenge for developers.
### Android Market Search Engine:
One of the significant Android app development challenges for developers is the Android marketplace itself. Android has in excess of 8 million apps on its marketplace today, and getting your application visible among them is a challenge. Thus, even with a great Android application developed, if you don't focus on its promotion, you may miss out on gaining any traction.
### No Rule for UI Design Process
No declared common user interface (UI) design process or guidelines exist for Android developers to follow.
Hence, most developers end up inventing their own UI development process. When each designer builds the UI in their own particular way, the consistency of the applications across different devices suffers. This diversity and inconsistency of the UI directly affects the user experience delivered by the Android application.
357,417 | So I Have This New Idea | This post was originally published on June 17, 2020 on my blog. If you know me at all (and by 'know,... | 0 | 2020-06-17T09:57:58 | https://dev.to/alexlsalt/so-i-have-this-new-idea-83e | devjournal, womenintech, codenewbie | _This post was originally published on June 17, 2020 on [my blog](https://alexlsalt.github.io/blog)._
If you know me at all (and by 'know,' I mean that generously as in 'if you've read some of my blog posts'), you'll know that I love new ideas and sometimes can't contain them and then they end up bursting at the seams of my brain.
It's a fun thing to experience, but it can be overwhelming if I get overly existential about it.
Kind of like when people get sad when they think about the fact that they will literally *never* in their lives be able to hug *every* single puppy or kitten in the world, or be able to read *every* single book ever written, or meet *every* single individual worldwide who could potentially be an awesome fit as a friend...
It's kind of like that for me with ideas in my mind.
While I try my best to see them through, it just so happens that sometimes the idea has to pass through or by us before moving on to someone else who can perhaps better bring it to life (a paraphrased idea coming from Elizabeth Gilbert's Big Magic, which I recently read for the second time).
Anyway, as I was in the shower just now, I had an idea to develop a little web app to help me keep track of chores that need doing around my apartment.
Coincidentally, it's the same structure as I used for my [Wellbean project](https://alexlsalt.github.io/wellbean/) in keeping track of contacting friends and loved ones. So maybe it's something I can spin up this weekend. We shall see... I'm excited about it! Hehehe.
Thanks for reading! [Now let's be friends over on Twitter >>](https://twitter.com/alexlsaltt)
357,417 | Artificial intelligence can be implemented through Javascript. An example is the snake A.I. | A post by Ashwani Kumar | 0 | 2020-06-17T10:13:56 | https://dev.to/consultashwani/artificial-intelligence-can-be-implemented-through-javascript-an-example-is-the-snake-a-i-54m | ai, javascript, gamedev | {% youtube ZxR4yU1c_E8 %} | consultashwani |
357,445 | Neumorphism (aka neomorphism) : new trend in UI design | This area, which arises from a basic human need, such as the urge to communicate, is constantly... | 0 | 2020-06-17T10:43:49 | https://www.ma-no.org/en/web-design/ui-ux-design/neumorphism-aka-neomorphism-new-trend-in-ui-design | neumorphism, ux, webdesign | ---
title: Neumorphism (aka neomorphism) : new trend in UI design
published: true
date: 2020-06-17 10:40:00 UTC
tags: Neumorphism, UX, webdesign
canonical_url: https://www.ma-no.org/en/web-design/ui-ux-design/neumorphism-aka-neomorphism-new-trend-in-ui-design
---
This area, which arises from a basic human need, such as the urge to communicate, is constantly changing thanks to the advances of the technological era. Today, we invite you to reflect on the origin of this discipline and its future challenges. Graphic language has always been present throughout our lives as a way of representing reality. We've seen this ever since the caveman started painting the cave walls. His drawings were mostly hunting images made with materials such as charcoal, resin, blood and plants, while they used their hands or reeds as a tool to apply them on the wall. We… !
[Read all](https://www.ma-no.org/en/web-design/ui-ux-design/neumorphism-aka-neomorphism-new-trend-in-ui-design) | salvietta150x40 |
357,592 | Noticeable technology changes related to racism | It is a lot happening around recently. I couldn’t stop writing about it as each voice matters. It's t... | 0 | 2020-06-17T15:07:33 | https://dev.to/rameshvr/noticeable-technology-changes-related-to-racism-4gi7 | It is a lot happening around recently. I couldn’t stop writing about it as each voice matters. It's time to come out of that mindset that existed and existing.
As I work in the technology space, I am highlighting a few significant tools that took noticeable steps supporting the cause. Some changes are years old and some are new.
Link to original article: https://www.linkedin.com/pulse/noticable-technology-changes-related-racism-ramesh-rajagopal/?trackingId=zr1VDUJuQNaV6DmBxcpxaw%3D%3D | rameshvr | |
357,598 | Configure Emacs for Clojure | Basic configuration to start using Clojure with Emacs Create the init.el configur... | 7,273 | 2020-06-17T15:15:43 | https://dev.to/ivanguerreschi/configure-emacs-for-clojure-120f | # Basic configuration to start using Clojure with Emacs
## Create the init.el configuration file
Open a terminal and create a directory: *mkdir ~/.emacs.d*
Enter the newly created directory: *cd ~/.emacs.d*
Create file: *touch init.el*
## Install MELPA repository
Editing file: *emacs init.el*
```elisp
(require 'package)
(let* ((no-ssl (and (memq system-type '(windows-nt ms-dos))
                    (not (gnutls-available-p))))
       (proto (if no-ssl "http" "https")))
  (when no-ssl (warn "\
Your version of Emacs does not support SSL connections,
which is unsafe because it allows man-in-the-middle attacks.
There are two things you can do about this warning:
1. Install an Emacs version that does support SSL and be safe.
2. Remove this warning from your init file so you won't see it again."))
  (add-to-list 'package-archives (cons "melpa" (concat proto "://melpa.org/packages/")) t)
  ;; Comment/uncomment this line to enable MELPA Stable if desired. See `package-archive-priorities`
  ;; and `package-pinned-packages`. Most users will not need or want to do this.
  ;;(add-to-list 'package-archives (cons "melpa-stable" (concat proto "://stable.melpa.org/packages/")) t)
  )

(package-initialize)
```
With this code we have installed the MELPA repository, this repository contains many packages for expanding Emacs
## Install CIDER
CIDER is the Clojure(Script) Interactive Development Environment that Rocks!
```elisp
(use-package cider
  :ensure t)
```
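Note that the `use-package` macro is itself a package (it only ships built-in with recent Emacs versions), so the snippet above assumes it is already installed. A minimal bootstrap you can put before it, if needed:

```elisp
;; Bootstrap `use-package' when it is not installed yet
(unless (package-installed-p 'use-package)
  (package-refresh-contents)
  (package-install 'use-package))
(require 'use-package)
```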
Save the file (*C-x C-s*), close Emacs (*C-x C-c*) and restart it.
This is a basic configuration for using Clojure with Emacs
| ivanguerreschi | |
357,609 | Day #7: Number of possible decodable messages | Hello everyone. Today is the Day 7 of the #100DaysOfCodeChallenge. And it's a week!!!! Received a pro... | 7,222 | 2020-06-17T15:31:23 | https://dev.to/nmreddy1911/day-7-number-of-possible-decodable-messages-14nd | python, prolog, beginners |
Hello everyone.
Today is the Day 7 of the #100DaysOfCodeChallenge.
And it's a week!!!!
Received a problem previously asked by Facebook with a hard tag to it.
## The Question On Day #6:
Given the mapping a = 1, b = 2, ... z = 26, and an encoded message, count the number of ways it can be decoded.
For example, the message '111' would give 3, since it could be decoded as 'aaa', 'ka', and 'ak'.
You can assume that the messages are decodable. For example, '001' is not allowed.
## Algorithm
- The algorithm is similar to the concept behind the Fibonacci series. For every increase in the value of n in Fibonacci, we add it to the (n-1) sum.
- Similarly, for every increase in the length of the message, we add a particular value to the number of possible ways, based on certain conditions.
- Points to check:
- a 1-digit number is considered a message only if it is greater than 0, because a is represented by 1 according to the question.
- a 2-digit number is considered a message only if it has 2 in the ten's place and digit<7 in the one's place or 1 in ten's place and any digit in one's place.
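Since the embedded gist may not render here, below is a minimal sketch of the recurrence described above (the function and variable names are my own, not necessarily those of the original gist):

```python
def num_decodings(message):
    # ways over prefix lengths, Fibonacci-style: we only keep the
    # counts for the last two prefix lengths instead of a full table.
    if not message or message[0] == "0":
        return 0
    prev, curr = 1, 1  # ways for prefixes of length 0 and 1
    for i in range(2, len(message) + 1):
        ways = 0
        if message[i - 1] != "0":                  # last digit decodes alone (1-9)
            ways += curr
        if message[i - 2] == "1" or (message[i - 2] == "2" and message[i - 1] < "7"):
            ways += prev                           # last two digits decode together (10-26)
        prev, curr = curr, ways
    return curr

print("Number Of Ways:", num_decodings("111"))  # Number Of Ways: 3
```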
## **Python Code**
{% gist https://gist.github.com/nmreddy1911/b68ce4c953a8ca458e70e18b04fce539 %}
**Output**
Number Of Ways: 3
# Visualiser
You can look into the visualiser to understand the BTS of the logic [here](http://pythontutor.com/visualize.html#code=message%20%3D%20%221212121%22%0Al%20%3D%20len%28message%29%0Aa%20%3D%201%0Ab%20%3D%201%0Ac%20%3D%200%0Afor%20i%20in%20range%282,%20l%2B1%29%3A%0A%20%20%20%20m1%3Dmessage%5Bi-1%5D%0A%20%20%20%20m2%3Dmessage%5Bi-2%5D%0A%20%20%20%20if%20i%20!%3D%202%3A%0A%20%20%20%20%20%20%20%20a%20%3D%20b%0A%20%20%20%20%20%20%20%20b%20%3D%20c%0A%20%20%20%20%20%20%20%20c%20%3D%200%0A%20%20%20%20if%20m1%20!%3D%20'0'%3A%0A%20%20%20%20%20%20%20%20c%20%3D%20b%0A%20%20%20%20if%20%28m2%20%3D%3D%20'1'%20or%20m2%20%3D%3D%20'2'%29%20and%20m1%20%3C%20'7'%3A%0A%20%20%20%20%20%20%20%20c%20%2B%3D%20a%0Aprint%28%22Number%20Of%20Ways%3A%22,%20c%29&cumulative=false&curInstr=46&heapPrimitives=nevernest&mode=display&origin=opt-frontend.js&py=3&rawInputLstJSON=%5B%5D&textReferences=false).
Feel free to reach out for any query clearance.
**Thanks and cheers:)**
| nmreddy1911 |
358,248 | Real World Learning | Overview Learning new things is the key to success. But if you are waiting for a good time... | 0 | 2020-06-18T17:46:59 | https://dev.to/theamanjs/real-world-learning-2i33 | beginners, developer, learning | # Overview
Learning new things is the key to success. But if you are waiting for a good time to launch yourself and keep learning until then, you are going to lose a lot of great opportunities. Let's see how learn-and-stack-in-brain is very different from learning by doing. I'll try to emphasize why learning by doing is better than learn-and-stack. This post is more useful for new developers than for experienced ones.
# How are you learning things?
Are you someone who learns and stores data like an HDD? Learning from courses and documentation just to pick up new things and stack them in your mind is very popular among new developers. Are you one of them? Perhaps you are learning new things that are cool and fun to learn. But are you really improving your knowledge for good? Let's see whether it is really worthwhile to keep learning, learning, and learning all the way until you land a job.
You may be thinking: if learning new things is good, then why isn't this approach strongly endorsed? Think of the time when things first started being created. When Larry Page and Sergey Brin created Google, they were not experts in dozens of languages. After Google was created and other new technologies came onto the scene, they didn't go back to a course or college to learn them and pile them up in their brains. They learned all the new technologies in action, either by using them in Google or by doing other projects. This is learning by doing. Doing real work with a technology while reading its documentation is far better than just learning and stacking it in your brain. The latter is a waste of time, as your growth will be slow by learning, learning, and learning, and when you finally try these things in action you will need some docs or references anyway to recall the stored material.
# Conclusion
Don't wait for the perfect moment to start working on things after learning a bunch of languages and other technologies. Suppose you started learning a new JavaScript framework. No doubt you solved some basic practice assignments, but what about real-world problems? They are not as tame as these tiny exercises. So if you find an opportunity to work on actual projects through freelancing or open-source contribution, don't waste it by assuming you are still learning and way behind the real world. | theamanjs |
357,659 | How to: Automate file creation with node's fs module | TL;DR Do you ever find yourself creating the same files over and over again and wish for a... | 0 | 2020-06-17T16:23:57 | https://dev.to/hmintoh/how-to-automate-file-creation-with-node-s-fs-module-3ni1 | tutorial, webdev | ## TL;DR
Do you ever find yourself creating the same files over and over again and wish for a script to do the boring work for you without having to use a third-party library?
## The story
I've been creating a lot of React components lately and wanted to find a way to automate file creation. Specifically, a script that could create a *ComponentName* folder containing the following files:
* `index.tsx`
* `styles.tsx`
After doing some research, I found that [node's fs](https://nodejs.org/api/fs.html) module is able to meet my needs. Specifically,
* `fs.mkdirSync(path)` synchronously creates a directory given a path (+ other options)
* `fs.writeFileSync(filepath, content)` synchronously creates files in the given filepath and optionally fills them with content
## Step 1: Defining the file templates
```Javascript
// src/scaffold.js
const templates = {
index: `// Comment to begin our index file`,
styles: `// Comment to begin our styles file`,
};
```
## Step 2: Write functions to create the component folder and files
```Javascript
// src/scaffold.js
const fs = require("fs");
const args = process.argv.slice(2);
function createFolder(component) {
const directory = `./src/components/${component}`;
if (!fs.existsSync(directory)) {
fs.mkdirSync(directory);
}
}
function writeFile(component, type) {
  const filepath = `./src/components/${component}/${type}.tsx`;
fs.writeFile(filepath, templates[type], (err) => {
if (err) throw err;
console.log("Created file: ", filepath);
return true;
});
}
function generate(component) {
createFolder(component);
const fileTypes = ["index", "styles"];
for (let type of fileTypes) {
writeFile(component, type)
}
}
generate(args[0]);
```
* `createFolder(component)` creates a folder named `component` if it does not exist in the specified path
* `writeFile(component, type)` creates files in the specified folder and fills them with content specified in `templates[type]`
## Step 3: Adding the script
Finally, add the following script to `package.json`
```javascript
"scripts": {
"component": "node scaffold.js"
},
```
That's it! The magic command: `yarn component <ComponentName>`.

Let me know your thoughts in the comments below :point_down:
| hmintoh |
357,669 | Best way to generate and manage SSH keys | Connecting to a remote server using an SSH key is quite simple. However, when you have a lot of keys... | 0 | 2020-06-17T17:18:51 | https://trubavuong.com/articles/ssh-key/ | productivity, devops, tutorial, git | ---
title: Best way to generate and manage SSH keys
published: true
date: 2020-06-17 16:20:02 UTC
tags: productivity, devops, tutorial, git
canonical_url: https://trubavuong.com/articles/ssh-key/
---
Connecting to a remote server using an SSH key is quite simple. However, when you have a lot of keys or multiple GitHub accounts, problems may arise. In this article, I am going to show you how to generate and manage keys in a neat way.
## Contents
- 1. What is SSH?
- 2. SSH key
- 3. Generating a new SSH key
* Step #1: Run "ssh-keygen" command
* Step #2: Enter the private key's location
* Step #3: Enter a passphrase
* Step #4: Done
- 4. Potential problems due to multiple SSH keys
* Problem #1: Too many authentication failures
* Problem #2: Multiple GitHub accounts
- 5. SSH config file
- 6. Benefits
* Benefit #1: Connecting to a server using its alias
* Benefit #2: Dealing with multiple GitHub accounts
- 7. Conclusion
## 1. What is SSH?
SSH (Secure Shell) is an authenticated and encrypted network protocol used for remote communication between machines.
SSH supports various authentication methods. Password authentication is the easiest method, but it suffers from security vulnerabilities, such as brute force attacks. Another method is public-key authentication, which is more secure, and for me, more convenient.
## 2. SSH key
An SSH key, or an SSH key pair, is a pair of keys: a public key and a private key.
The public key:
- Usually named `id_rsa.pub`
- Acting as a public lock
- Placed on the SSH server that you want to log into
- Used to encrypt data
The private key:
- Usually named `id_rsa`
- Acting as a secret key
- Securely stored on your machine only
- Used to decrypt data
By default, these keys are stored in the `~/.ssh` directory. You can also have multiple key pairs on your machine.
## 3. Generating a new SSH key
To create a pair of keys, you can use the `ssh-keygen` tool with just a few steps. Let's do it!
### Step #1: Run "ssh-keygen" command
```bash
$ ssh-keygen -t rsa -b 4096 -C "Your Name <your_email@example.com>"
```
The `-C` option is used as a note to specify who/when/where the key was generated. Although this option is optional, I highly recommend filling it.
### Step #2: Enter the private key's location
```bash
Generating public/private rsa key pair.
Enter file in which to save the key (/home/<user>/.ssh/id_rsa): [Type]
```
### Step #3: Enter a passphrase
A passphrase, an extra security layer, is used to encrypt your private key. You can leave it blank and press enter to skip creating a passphrase, but I strongly recommend setting up the one.
```bash
Enter passphrase (empty for no passphrase): [Type]
Enter same passphrase again: [Re-type]
```
### Step #4: Done
You can check the newly generated keys:
```bash
$ ls -l ~/.ssh
$ cat ~/.ssh/id_rsa.pub
```
## 4. Potential problems due to multiple SSH keys
If you have only one SSH key, congratulations, you have nothing to worry about. But what if you have more keys, for example, to access more remote servers?
I will show you a few potential problems right away.
### Problem #1: Too many authentication failures
```bash
$ ssh -i ~/.ssh/id_rsa_vps user@1.2.3.4
Received disconnect from 1.2.3.4 port 22:2: Too many authentication failures
Disconnected from 1.2.3.4 port 22
```
As you can see, I failed to connect to the server with the corresponding private key, and the error was "Too many authentication failures".
What?
That means the SSH client tried a lot of other keys, and the authentication process failed before the key I specified was used.
Let me explain.
The SSH agent tracks private keys and their passphrases. When connecting to a server, the SSH client uses all the keys in the agent and the specified key to compose a key list to try one by one. The specified key will be added to the top of the list if it has already been tracked by the agent. Otherwise, it will be added to the end of the list.
Not too complicated, right?
Okay. The fact that the key I specified, `~/.ssh/id_rsa_vps`, has not been tracked. Thus, there are at least 3 solutions as follows:
#### Solution #1.1: Adding the private key to the SSH agent by using the "ssh-add" tool
```bash
$ ssh-add ~/.ssh/id_rsa_vps
```
#### Solution #1.2: Ignoring the key list in the SSH agent by providing the SSH option "IdentitiesOnly=yes"
```bash
$ ssh -o IdentitiesOnly=yes -i ~/.ssh/id_rsa_vps user@1.2.3.4
```
#### Solution #1.3: Using the SSH config file
Keep reading. I will describe it in section 5.
### Problem #2: Multiple GitHub accounts
This is an example of GitHub, but it is similar to any Git servers, such as GitLab, BitBucket, etc.
Suppose you have multiple GitHub accounts (`personal` and `team`), and you have added the corresponding key to each account. When you try to clone a repository from one account, you may see the following error.
```bash
$ git clone git@github.com:personal/repo.git
Cloning into 'repo'...
ERROR: Repository not found.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
```
Note: if you do not see any errors, try the other account's repository.
Let me explain.
First, you cannot add the same key to multiple GitHub accounts. Second, when you clone a repository using SSH, the SSH client sends each key in the list (that I have just described in the "Problem #1" section) to the GitHub server.
Depending on the key list order, `id_rsa_personal` or `id_rsa_team` will be accepted by the server. You know, if the accepted one does not belong to the owner of the repository, the permission error will be returned.
Okay. The solution is right below.
## 5. SSH config file
Forget the SSH agent. Using the SSH config file is a better way to manage multiple SSH keys. Believe me, the above potential problems will never happen again.
You can find the config file at `~/.ssh/config`, or just create one.
Here is an example:
```bash
$ cat ~/.ssh/config
Host vps
HostName 1.2.3.4
Port 22
User user
IdentityFile ~/.ssh/id_rsa_vps
IdentitiesOnly yes
Host github-personal
HostName github.com
User git
IdentityFile ~/.ssh/id_rsa_github_personal
IdentitiesOnly yes
Host github-team
HostName github.com
User git
IdentityFile ~/.ssh/id_rsa_github_team
IdentitiesOnly yes
```
Let me explain.
- Host: the alias I use for the host
- HostName: the real hostname to log into
- Port: the port number
- User: the user used to log into
- IdentityFile: the private key file
- IdentitiesOnly: setting it to "yes" tells the SSH client to use only the private keys configured in the SSH config file, so "Problem #1" will never happen again!
## 6. Benefits
Let's discuss the benefits of using the SSH config file.
### Benefit #1: Connecting to a server using its alias
You do not have to specify user, host, or port every time to connect to a server via SSH.
```bash
$ ssh vps
```
Awesome!
An example of using SCP to copy a file from the remote server to the local machine:
```bash
$ scp vps:/path/to/the/remote/file /home/<user>/Downloads
```
### Benefit #2: Dealing with multiple GitHub accounts
You can clone the repository that belongs to the corresponding account. For example, to clone the repository of the `personal` account, I replaced `github.com` by the alias `github-personal` in the config file.
```bash
$ git clone git@github-personal:personal/repo.git
```
## 7. Conclusion
Managing multiple SSH keys by using the SSH config file is not complicated. You can explore [even more SSH options](https://linux.die.net/man/5/ssh_config) if you like. I hope this article is helpful to you.
| trubavuong |
357,692 | How to Verify Your Instagram Account ✅ in Just 4 Minutes (2020) | Instagram Blue Tick Account | Watch Here: https://www.youtube.com/watch?v=pUNVdRTenZc In this video, you learn about how to verify... | 0 | 2020-06-17T17:44:49 | https://dev.to/ankitsaxena06/how-to-verify-your-instagram-account-in-just-4-minutes-2020-instagram-blue-tick-account-3oh4 | instagram, socialaccount, instagramaccount | Watch Here: https://www.youtube.com/watch?v=pUNVdRTenZc
In this video, you will learn how to verify your Instagram account ✅ in just 4 minutes | Instagram Blue Badge Account in 2020 | Genuine Process. The video walks you through the practical steps.
If you are looking for the genuine process of applying for an official Instagram account, watch this video till the end.
You can also contact us for the process.
Also Check if watch to learn about Facebook Leads Ads
https://www.youtube.com/watch?v=kRJf5...
Subscribe Our Youtube Channel
https://www.youtube.com/channel/UCkWv...
Know more about Digitalmise:
Digitalmise is one of the best web development and digital marketing companies, helping you build a strong online presence for your business. We have expertise in website development, app development, e-commerce optimisation, search engine marketing, Facebook marketing, Instagram marketing, LinkedIn marketing, Twitter marketing, and other social media channels.
We created this YouTube channel to share our knowledge about digital marketing, social media, and tips and tricks about SEO and advanced SEO. So subscribe to our channel and turn on notifications.
Visit our official website: https://digitalmise.com
#Instagram #VerifyAccount #Verification #Bluetick | ankitsaxena06 |
357,712 | IsAnagram? - Quick Hack | Hello folks, Just sharing a quick hack to find out if two words are anagrams of each other or not.... | 0 | 2020-06-17T18:39:58 | https://dev.to/shyams1993/isanagram-quick-hack-2pm0 | python, useful, quickhack | Hello folks,
Just sharing a quick hack to find out if two words are anagrams of each other or not.
```python
def isAnagram(word, word2):
    return sorted(word.lower().replace(" ", "")) == sorted(word2.lower().replace(" ", ""))

print(isAnagram("Dormitory", "Dirty room"))
print(isAnagram("School master", "The classroom"))
```
This function returns a boolean based on whether the two words are anagrams or not.
Using sorted() to compare the two words' letters in a canonical order checks their anagrammatic property and just makes things a lot easier! :)
<ul>
<li>Added the "<code>.lower()</code>" to convert the uppercase to lowercase (if there's an upper case letter) and check since different letter cases impact the result.</li>
<li>Also, using "<code>.replace(" ","")</code>" to remove whitespaces.</li>
</ul>
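To see why `sorted()` is safer than a `set()` comparison here, note that a set discards duplicate letters, so two words built from the same letters in different quantities would wrongly pass. A small illustration (the helper names are mine):

```python
def set_check(a, b):
    # Naive set comparison: ignores how many times each letter occurs
    return set(a.lower().replace(" ", "")) == set(b.lower().replace(" ", ""))

def sorted_check(a, b):
    # Sorted comparison: letter counts must match exactly
    return sorted(a.lower().replace(" ", "")) == sorted(b.lower().replace(" ", ""))

print(set_check("aab", "abb"))     # True  (false positive: these are not anagrams)
print(sorted_check("aab", "abb"))  # False (correct)
```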
<code>Code on</code> -- Cheers!
<i> Thanks to the brilliant comments, made some edits to change the logic from set() to sorted() to rule out false positives</i> | shyams1993 |
357,826 | yaml quote escape | In double quoted strings if you need to include a literal double quote in your string you can escape... | 0 | 2020-06-18T01:14:55 | https://dev.to/icy1900/yaml-quote-escape-4770 | yaml | > In double quoted strings if you need to include a literal double quote in your string you can escape it by prefixing it with a backslash \ (which you can in turn escape by itself). __In single quoted strings the single quote character can be escaped by prefixing it with another single quote, basically doubling it.__ Backslashes in single quoted strings do not need to be escaped. | icy1900 |
357,835 | React Project - Idea to Production - Part Two - Setting up a Component Library | This was originally posted here This is the second post in the series. You can find the first post h... | 0 | 2020-06-18T02:21:01 | https://dev.to/debojitroy/react-project-idea-to-production-part-two-setting-up-a-component-library-41fk | react, typescript, storybook, design | This was originally posted [here](https://debojitroy.com/blogs/react-idea-to-production-part-two/)
This is the second post in the series. You can find the first post [here](https://dev.to/debojitroy/react-project-idea-to-production-part-one-wireframes-and-project-setup-b08)
## Where are we
Ok, so far we have:
- Brainstormed on our brilliant idea to build a Movie App.
- We have decided what features are needed as part of the MVP.
- Our design team has given us the wireframes.
- We have set up our project as a Monorepo.
- We have set up linting rules, a code formatter, and commit hooks.
## What are we going to do now
Ok, so the next step is to break our wireframe down into components. We will build a component library that can be used across various projects. Finally, we will set up Storybook to showcase our component library.
## TL;DR
This is a 5 part post
- [Part One : Wireframes and Project Setup](https://debojitroy.com/blogs/react-idea-to-production-part-one)
- Part Two : Setting up a Component Library
- [Part Three : Building the Movie App using component library](https://debojitroy.com/blogs/react-idea-to-production-part-three/)
- [Part Four: Hosting the Movie app and setting up CI/CD](https://debojitroy.com/blogs/react-idea-to-production-part-four/)
Source Code is available [here](https://github.com/debojitroy/movie-app)
Component Library Demo is available [here](https://d3f0roag7dlk8c.cloudfront.net/)
Movie App Demo is available [here](https://d1c9s36vd9mohd.cloudfront.net/)
## Setting up the Component Library
Now let's move ahead by setting up our component library.
Move to the `packages` folder
```shell
cd packages
```
Create a new folder for our `components`
```shell
mkdir components
cd components
```
Initialize the yarn project
```shell
yarn init
```
Naming is important here, as we will refer to projects in our workspace by name. I prefer an organization-scoped name to avoid naming conflicts, so for our example I will use `@awesome-movie-app` as the organization name. Feel free to replace it with your own organization scope.
The next thing to keep in mind is how you want to publish your packages to `npm`. If you would like to publish packages to npm, make sure the version is semantic and let `lerna` handle the publishing.
If you have a restricted / private NPM organization, make sure to add `publishConfig` with `restricted` access in your `package.json` to avoid accidental publishing of the packages to public npm.
```json
"publishConfig": {
"access": "restricted"
}
```
As for the purpose of this post, we will not be publishing our packages to npm, so we will skip defining the `publishConfig`.
So our `package.json` looks like
```json
{
"name": "@awesome-movie-app/components",
"version": "1.0.0",
"description": "Component Library for Awesome Movie App",
"main": "index.js",
"repository": "git@github.com:debojitroy/movie-app.git",
"author": "Debojit Roy <debojity2k@gmail.com>",
"license": "MIT",
"private": true
}
```
## Defining the requirements
Our project is now set up; let's define our requirements before we move further.
- Our components will be `React` components
- We will use `TypeScript` to build our components
- We want to showcase our components using `Storybook`
- We will use `Bootstrap` for base styles
- We will adopt **CSS-in-JS** and use `StyledComponents`
- We will transpile our code using `Babel`
### Why no Webpack
In an ideal world we would be publishing our packages to `npm`. Before publishing our packages to `npm`, we would want to transpile and package them nicely, and for that my ideal choice would be webpack.
But one very important feature for libraries is that the package should support [Tree Shaking](https://webpack.js.org/guides/tree-shaking/). **Tree Shaking** is a fancy word for trimming excess fat, i.e. eliminating code that is not used by the importing library. Due to this known webpack [issue](https://github.com/webpack/webpack/issues/6386), that is sadly impossible right now.
To work around the problem we could use [Rollup](https://rollupjs.org/guide/en/), but as we are not interested in publishing our package to `npm` right now, we will use `babel` to transpile our components. I will cover how to use Rollup and tree-shake your library in another post.
## Preparing the project
Ok, that was way too much theory; now let's move on to setting up our project.
One last bit of theory before we move ahead. As we are using `lerna` as our high-level dependency manager, we will use `lerna` to manage dependencies. This means that to add a new dependency, we use this format:
```shell
lerna add <dependency-name> --scope=<sub-project-name> <--dev>
```
**dependency-name**: Name of the `npm` package we want to install
**sub-project-name**: This is optional. If you omit it, the dependency will be installed across all the projects. If you want the dependency installed only for a specific project, pass in the project's name from its `package.json`
**--dev**: Same as yarn options. If you want to install only dev dependencies, pass in this flag.
## Adding Project Dependencies
Usually I would go ahead and add most of the dependencies in one command, but for this post I will be verbose, explaining each dependency I add and the reasoning behind it.
**Note:** We will be adding everything from the **root folder** of the project i.e. the root folder of `movie-app` (one level above `packages` folder)
### Adding React
```shell
lerna add react --scope=@awesome-movie-app/components --dev
lerna add react-dom --scope=@awesome-movie-app/components --dev
```
#### Why one dependency at a time
Sadly due to this [limitation](https://github.com/lerna/lerna/issues/2004) of lerna 😞
#### Why is React a dev dependency 🤔
This part is important. As this library will be consumed by other projects, we don't want to dictate our version of `React`; rather, we want the consuming project to inject the dependency. So we will add common libraries as `dev` dependencies and mark them as peer dependencies. This is true for any common library you may want to build.
We will be adding `React` in our peer dependencies of `@awesome-movie-app/components`
```json
"peerDependencies": {
"react": "^16.13.1",
"react-dom": "^16.13.1"
}
```
### Adding TypeScript
```shell
lerna add typescript --scope=@awesome-movie-app/components --dev
```
Adding types for `React`
```shell
lerna add @types/node --scope=@awesome-movie-app/components
lerna add @types/react --scope=@awesome-movie-app/components
lerna add @types/react-dom --scope=@awesome-movie-app/components
```
Adding `tsconfig` for TypeScript
```json
{
"compilerOptions": {
"outDir": "lib",
"module": "commonjs",
"target": "es5",
"lib": ["es5", "es6", "es7", "es2017", "dom"],
"sourceMap": true,
"allowJs": false,
"jsx": "react",
"moduleResolution": "node",
"rootDirs": ["src"],
"baseUrl": "src",
"forceConsistentCasingInFileNames": true,
"noImplicitReturns": true,
"noImplicitThis": true,
"noImplicitAny": true,
"strictNullChecks": true,
"suppressImplicitAnyIndexErrors": true,
"noUnusedLocals": true,
"declaration": true,
"allowSyntheticDefaultImports": true,
"experimentalDecorators": true,
"emitDecoratorMetadata": true,
"esModuleInterop": true
},
"include": ["src/**/*"],
"exclude": ["node_modules", "build", "scripts"]
}
```
### Adding Storybook
```shell
lerna add @storybook/react --scope=@awesome-movie-app/components --dev
```
Adding some cool add-ons
```shell
lerna add @storybook/addon-a11y --scope=@awesome-movie-app/components --dev
lerna add @storybook/addon-actions --scope=@awesome-movie-app/components --dev
lerna add @storybook/addon-docs --scope=@awesome-movie-app/components --dev
lerna add @storybook/addon-knobs --scope=@awesome-movie-app/components --dev
lerna add @storybook/addon-viewport --scope=@awesome-movie-app/components --dev
lerna add storybook-addon-styled-component-theme --scope=@awesome-movie-app/components --dev
lerna add @storybook/addon-jest --scope=@awesome-movie-app/components --dev
```
### Adding Test Libraries
We will be using `jest` for unit testing
```shell
lerna add jest --scope=@awesome-movie-app/components --dev
lerna add ts-jest --scope=@awesome-movie-app/components --dev
```
We will be using [enzyme](https://github.com/enzymejs/enzyme) for testing our React Components
```shell
lerna add enzyme --scope=@awesome-movie-app/components --dev
lerna add enzyme-adapter-react-16 --scope=@awesome-movie-app/components --dev
lerna add enzyme-to-json --scope=@awesome-movie-app/components --dev
```
Adding [jest-styled-components](https://github.com/styled-components/jest-styled-components) for supercharging `jest`
```shell
lerna add jest-styled-components --scope=@awesome-movie-app/components --dev
```
Configure `enzyme` and `jest-styled-components` to work with `jest`. We will add `setupTests.js`
```javascript
require("jest-styled-components")
const configure = require("enzyme").configure
const EnzymeAdapter = require("enzyme-adapter-react-16")
const noop = () => {}
Object.defineProperty(window, "scrollTo", { value: noop, writable: true })
configure({ adapter: new EnzymeAdapter() })
```
Configure `jest.config.js`
```javascript
module.exports = {
preset: "ts-jest",
// Automatically clear mock calls and instances between every test
clearMocks: true,
// Indicates whether the coverage information should be collected while executing the test
collectCoverage: true,
// An array of glob patterns indicating a set of files for which coverage information should be collected
collectCoverageFrom: [
"src/**/*.{ts,tsx}",
"!src/**/index.{ts,tsx}",
"!src/**/styled.{ts,tsx}",
"!src/**/*.stories.{ts,tsx}",
"!node_modules/",
"!.storybook",
"!dist/",
"!lib/",
],
// The directory where Jest should output its coverage files
coverageDirectory: "coverage",
// An array of regexp pattern strings used to skip test files
testPathIgnorePatterns: ["/node_modules/", "/lib/", "/dist/"],
// A list of reporter names that Jest uses when writing coverage reports
coverageReporters: ["text", "html", "json"],
// An array of file extensions your modules use
moduleFileExtensions: ["ts", "tsx", "js", "jsx"],
// A list of paths to modules that run some code to configure or set up the testing framework before each test
setupFilesAfterEnv: ["./setupTests.js"],
// A list of paths to snapshot serializer modules Jest should use for snapshot testing
snapshotSerializers: ["enzyme-to-json/serializer"],
}
```
### Adding Styled Components and BootStrap
```shell
lerna add styled-components --scope=@awesome-movie-app/components --dev
lerna add react-bootstrap --scope=@awesome-movie-app/components --dev
lerna add bootstrap --scope=@awesome-movie-app/components --dev
lerna add @types/styled-components --scope=@awesome-movie-app/components
```
### Adding Babel
As we will be using babel to transpile everything, it's important that we configure Babel properly.
Adding Babel Dependencies
```shell
lerna add @babel/core --scope=@awesome-movie-app/components --dev
lerna add babel-loader --scope=@awesome-movie-app/components --dev
lerna add @babel/cli --scope=@awesome-movie-app/components --dev
lerna add @babel/preset-env --scope=@awesome-movie-app/components --dev
lerna add @babel/preset-react --scope=@awesome-movie-app/components --dev
lerna add @babel/preset-typescript --scope=@awesome-movie-app/components --dev
lerna add core-js --scope=@awesome-movie-app/components --dev
```
A bit on the `babel` components we added
- **@babel/core** : Core `babel` functionality
- **babel-loader** : Used by `storybook` `webpack` builder
- **@babel/cli** : Will be used by us to transpile files from command line
- **@babel/preset-env** : Environment setting for transpiling
- **@babel/preset-react** : React setting for `babel`
- **@babel/preset-typescript** : TypeScript settings for `babel`
- **core-js** : Core JS for `preset-env`
Now let's add our `.babelrc` file
```json
{
"presets": [
"@babel/preset-typescript",
[
"@babel/preset-env",
{
"useBuiltIns": "entry",
"corejs": "3",
"modules": false
}
],
"@babel/preset-react"
]
}
```
## Bringing it all together
### Important Note
The steps below may differ based on which versions of `Storybook` and `Jest` you are using; they are written for `Storybook` `v5.3+` and `Jest` `v26.0+`
### Setting up our theme
The first step will be to set up our `theme`. We can start with a blank `theme` and fill it up as we go.
```shell
cd packages/components
mkdir theme
```
Defining the `Theme`
```typescript
export interface Theme {
name: string
color: {
backgroundColor: string
primary: string
secondary: string
}
}
```
Defining `Light` theme
```typescript
import { Theme } from "./theme"
const lightTheme: Theme = {
name: "LIGHT",
color: {
backgroundColor: "#fff",
primary: "#007bff",
secondary: "#6c757d",
},
}
export default lightTheme
```
Defining `Dark` theme
```typescript
import { Theme } from "./theme"
const darkTheme: Theme = {
name: "DARK",
color: {
backgroundColor: "#000",
primary: "#fff",
secondary: "#6c757d",
},
}
export default darkTheme
```
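As a small aside, a helper like the one below (my own sketch, not part of the tutorial; the `getTheme` name is an assumption) can make resolving a theme by name type-safe. The types and themes from above are inlined so the snippet stands alone:

```typescript
// Inlined from theme.ts / light.ts / dark.ts so this sketch is self-contained
interface Theme {
  name: string
  color: { backgroundColor: string; primary: string; secondary: string }
}

const lightTheme: Theme = {
  name: "LIGHT",
  color: { backgroundColor: "#fff", primary: "#007bff", secondary: "#6c757d" },
}

const darkTheme: Theme = {
  name: "DARK",
  color: { backgroundColor: "#000", primary: "#fff", secondary: "#6c757d" },
}

const themes: Theme[] = [lightTheme, darkTheme]

// Resolve a Theme by name, falling back to the light theme so
// callers always receive a valid Theme object
function getTheme(name: string): Theme {
  const found = themes.find(theme => theme.name === name)
  return found !== undefined ? found : lightTheme
}
```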
### Setting up Storybook
To configure `storybook`, we need to set up the configuration folder first. We will use the default `.storybook` folder, but feel free to use any folder name.
```shell
mkdir .storybook
```
Now inside `.storybook` folder we will create the configuration files needed for `storybook`
#### main.js
This is the `main` configuration file for storybook. We will configure the path for `stories`, register our `addons`, and override the `webpack` config to process TypeScript files.
```javascript
// .storybook/main.js
module.exports = {
stories: ["../src/**/*.stories.[tj]sx"],
webpackFinal: async config => {
config.module.rules.push({
test: /\.(ts|tsx)$/,
use: [
{
loader: require.resolve("ts-loader"),
},
],
})
config.resolve.extensions.push(".ts", ".tsx")
return config
},
addons: [
"@storybook/addon-docs",
"@storybook/addon-actions/register",
"@storybook/addon-viewport/register",
"@storybook/addon-a11y/register",
"@storybook/addon-knobs/register",
"storybook-addon-styled-component-theme/dist/register",
"@storybook/addon-jest/register",
],
}
```
#### manager.js
Here we configure the Storybook manager. There are many options that can be overridden; for our project we want the add-ons panel to be at the `bottom` (the default is `right`)
```javascript
// .storybook/manager.js
import { addons } from "@storybook/addons"
addons.setConfig({
panelPosition: "bottom",
})
```
#### preview.js
Finally, we will configure the story area. We initialize our add-ons and pass global configurations.
```javascript
// .storybook/preview.js
import { addParameters, addDecorator } from "@storybook/react"
import { withKnobs } from "@storybook/addon-knobs"
import { withA11y } from "@storybook/addon-a11y"
import { withThemesProvider } from "storybook-addon-styled-component-theme"
import { withTests } from "@storybook/addon-jest"
import results from "../.jest-test-results.json"
import lightTheme from "../theme/light"
import darkTheme from "../theme/dark"
export const getAllThemes = () => {
return [lightTheme, darkTheme]
}
addDecorator(withThemesProvider(getAllThemes()))
addDecorator(withA11y)
addDecorator(withKnobs)
addDecorator(
withTests({
results,
})
)
addParameters({
options: {
brandTitle: "Awesome Movie App",
brandUrl: "https://github.com/debojitroy/movie-app",
showRoots: true,
},
})
```
### Creating React Components
Now we can create our very first React component.
#### Our first button
We will first create a `src` folder
```shell
mkdir src && cd src
```
Then we will create a folder for our component. Let's call it `Sample`
```shell
mkdir Sample && cd Sample
```
Now let's create a simple `styled` `button` and pass some props to it.
```typescript
// styled.ts
import styled from "styled-components"
export const SampleButton = styled.button`
background-color: ${props => props.theme.color.backgroundColor};
color: ${props => props.theme.color.primary};
`
```
```typescript
// Button.tsx
import React from "react"
import { SampleButton } from "./styled"
const Button: React.FC<{
value: string
onClickHandler: () => void
}> = ({ value, onClickHandler }) => (
<SampleButton onClick={onClickHandler}>{value}</SampleButton>
)
export default Button
```
Awesome !!! We have our first component finally !!!
#### Adding unit tests
Now let's add some tests for our new button.
```shell
mkdir tests
```
```typescript
// tests/Button.test.tsx
import React from "react"
import { mount } from "enzyme"
import { ThemeProvider } from "styled-components"
import lightTheme from "../../../theme/light"
import Button from "../Button"
const clickFn = jest.fn()
describe("Button", () => {
it("should simulate click", () => {
const component = mount(
<ThemeProvider theme={lightTheme}>
<Button onClickHandler={clickFn} value="Hello" />
</ThemeProvider>
)
component.find(Button).simulate("click")
expect(clickFn).toHaveBeenCalled()
})
})
```
#### Adding stories
Now, with the new button in place, let's add some `stories`
```shell
mkdir stories
```
We will use the new [Component Story Format (CSF)](https://storybook.js.org/docs/formats/component-story-format/)
```typescript
// stories/Button.stories.tsx
import React from "react"
import { action } from "@storybook/addon-actions"
import { text } from "@storybook/addon-knobs"
import Button from "../Button"
export default {
title: "Sample / Button",
component: Button,
}
export const withText = () => (
<Button
value={text("value", "Click Me")}
onClickHandler={action("button-click")}
/>
)
withText.story = {
parameters: {
jest: ["Button.test.tsx"],
},
}
```
### Time to check if everything works
#### Transpiling our code
As we discussed in the beginning, we will be using `babel` to transpile our code and let the calling projects take care of minification and tree-shaking.
Going ahead with that, we will add some scripts and test that they work.
##### Typecheck and Compilation
We will first use the `TypeScript` compiler to type-check and compile our code.
```json
"js:build": "cross-env NODE_ENV=production tsc -p tsconfig.json"
```
If everything is fine, we should see an output like this
```shell
$ cross-env NODE_ENV=production tsc -p tsconfig.json
✨ Done in 5.75s.
```
##### Transpiling with Babel
Next step will be to transpile our code with `babel`
```json
"build-js:prod": "rimraf ./lib && yarn js:build && cross-env NODE_ENV=production babel src --out-dir lib --copy-files --source-maps --extensions \".ts,.tsx,.js,.jsx,.mjs\""
```
If everything is fine, we should see an output like this
```shell
$ rimraf ./lib && yarn js:build && cross-env NODE_ENV=production babel src --out-dir lib --copy-files --source-maps --extensions ".ts,.tsx,.js,.jsx,.mjs"
$ cross-env NODE_ENV=production tsc -p tsconfig.json
Successfully compiled 4 files with Babel.
✨ Done in 7.02s.
```
##### Setting up watch mode for development
During development, we would like incremental compilation every time we make changes. So let's add a watch script.
```json
"js:watch": "rimraf ./lib && cross-env NODE_ENV=development concurrently -k -n \"typescript,babel\" -c \"blue.bold,yellow.bold\" \"tsc -p tsconfig.json --watch\" \"babel src --out-dir lib --source-maps --extensions \".ts,.tsx,.js,.jsx,.mjs\" --copy-files --watch --verbose\""
```
We should see output like this
```shell
Starting compilation in watch mode...
[typescript]
[babel] src/Sample/Button.tsx -> lib/Sample/Button.js
[babel] src/Sample/stories/Button.stories.tsx -> lib/Sample/stories/Button.stories.js
[babel] src/Sample/styled.ts -> lib/Sample/styled.js
[babel] src/Sample/tests/Button.test.tsx -> lib/Sample/tests/Button.test.js
[babel] Successfully compiled 4 files with Babel.
[typescript]
[typescript] - Found 0 errors. Watching for file changes.
```
#### Running Unit Tests
Once we are sure our compilation and transpiling work, let's make sure our tests pass.
```json
"test": "jest"
```
Running our tests should show an output similar to this

We are getting there slowly 😊
Now we need to generate `json` output for storybook to consume and show next to our stories. Let's configure that as well.
```json
"test:generate-output": "jest --json --outputFile=.jest-test-results.json || true"
```
#### Running Storybook
Finally, we want to run Storybook with our stories. Let's run Storybook in dev mode.
```json
"storybook": "start-storybook -p 8080"
```
If everything was configured properly, we should see the storybook in our [Browser](http://localhost:8080)

We will add a couple more commands for building Storybook for deployment. We will use these when we configure Continuous Deployment in our last post - [Part Four: Hosting the Movie app and setting up CI/CD](https://debojitroy.com/blogs/react-idea-to-production-part-four/)
```json
"prebuild:storybook": "rimraf .jest-test-results.json && yarn test:generate-output",
"build:storybook": "build-storybook -c .storybook -o dist/"
```
After this we can start splitting our wireframes into components. I will not go into the details of that, as there are posts out there that explain the process much better than I could. You can find the code completed so far [here](https://github.com/debojitroy/movie-app/tree/master/packages/components)
In the next part we will setup and build our movie app, continue to [Part Three : Building the Movie App using component library](https://debojitroy.com/blogs/react-idea-to-production-part-three/) | debojitroy |
357,868 | TypeScript (patterns?) | Throughout my journey with TypeScript, I've been impressed by its features. The v2 documentation is... | 0 | 2020-06-24T16:01:32 | https://dev.to/hcapucho/typescript-patterns-2d8n | typescript, patterns |
> Throughout my journey with TypeScript, I've been impressed by its features. The v2 [documentation](https://www.typescriptlang.org/docs/home.html) is a really good improvement to the learning resources. However, some patterns are not so easy to find documented in the common ground of TS materials. Therefore, I wrote this quick article with 3 interesting things in TS that, from my perspective, can make life a bit easier :).
We shall begin!

## Companion Object
I found this one when I read [Programming TypeScript](https://www.amazon.com/Programming-TypeScript-Making-JavaScript-Applications/dp/1492037656). It provides a simple and easy way to let your module consumers import the type and a factory for that type in a single import. Hence the name "Companion Object".
It amazed me how simple and useful this can be. This is how the module is presented:
```typescript
// Currency.ts
// Here we will create a type and a variable
// with the same name
type Currency = {
unit: "EUR" | "GBP" | "JPY" | "USD";
value: number;
};
let Currency = {
DEFAULT: "USD",
from(value: number, unit = Currency.DEFAULT): Currency {
return { unit, value };
}
};
export { Currency };
```
And this is how the module is consumed:
```typescript
// index.ts
import { Currency } from "./Currency";
// Use case 1: Used as type
let amountDue: Currency = {
unit: "JPY",
value: 83733.1
};
// use case 2: Used as factory object
let otherAmountDue = Currency.from(330, "EUR");
console.log({ amountDue, otherAmountDue });
```
If we look at the `index.ts` file, a single import of `Currency` is declared, yet it has two use cases: the first as a type and the second as a factory object. Looking at the `Currency.ts` file, we can see that we also have a single export, which covers both the variable and the type.
Consequently, **with a single export and a single import, you gain both a type and an object factory**, which lets you work with both smoothly. However, things can't be that easy: since we usually opt in to the `strict` option in TS, this code throws an error:
`7022: 'Currency' implicitly has type 'any' because it does not have a type annotation and is referenced directly or indirectly in its initializer.`
_(If you want to understand a bit more on the compiler options, [this link](https://www.typescriptlang.org/docs/handbook/compiler-options.html) provides an explanation to every option TypeScript contains.)_

However, whenever we developers like an idea, we ~~smash the code until it works~~ find a proper path to make it reasonable. In this particular case, it is possible to use the pattern with the strict option by making a small tweak. Our module will look like the following:
``` typescript
type ValidCurrencies = "EUR" | "GBP" | "JPY" | "USD";
// "Small tweak": Type that will be used on
// the variable that is used as constructor
type NotExposedCurrency = {
from: (value: number, unit: ValidCurrencies) => Currency
DEFAULT: ValidCurrencies
}
// Here we have the things we would like to export
type Currency = {
unit: ValidCurrencies
value: number;
};
// Type Constructor
// Here is where we use the tweak
let Currency: NotExposedCurrency = {
DEFAULT: "USD",
from(value: number, unit = Currency.DEFAULT): Currency {
return { unit, value };
}
};
export { Currency }
```
By typing the factory with a private type, the variable no longer has an implicit `any` type. This won't affect our usage, since we have no interest in the factory's own type; our interest is in the factory function's return type.
With this change applied, the usage in `index.ts` stays exactly the same, and the pattern now works even with the compiler set to strict mode.

## Exceptions: Java, Go, and TypeScript?
In my career, I've worked mostly as a Java developer. In Java, you can add the exceptions that might be thrown to your method signatures, thus forcing the client to properly handle those cases.
Such a simple thing; I never imagined how much I would miss it. For those who might never have used Java or a language with this feature, here's a piece of code as an example:
```java
public class ThisOneThrows {
// ThisOneThrows.java:4: error:
// unreported exception Exception;
// must be caught or declared to be thrown
// ThisOneThrows.hereWeThrow();
public static void main(String[] args) {
ThisOneThrows.hereWeThrow();
}
public static void hereWeThrow() throws Exception {
if(true) {
throw new Exception();
}
}
}
```
The Java compiler will force you either to declare that the `main` function throws as well, or to wrap the call to `ThisOneThrows.hereWeThrow()` in a `try...catch` block.
```java
public class ThisOneThrows {
// public static void main(String[] args) throws Exception {
// ThisOneThrows.hereWeThrow();
// }
// OR
public static void main(String[] args) {
try {
ThisOneThrows.hereWeThrow();
} catch (Exception e) {
e.printStackTrace();
}
}
public static void hereWeThrow() throws Exception {
if(true) {
throw new Exception();
}
}
}
```
Knowing this behavior upfront is always helpful when dealing with error handling, and I've always missed that in TS. TypeScript's type system is really impressive, especially considering the environment it runs in, but not being able to know which errors to expect bugged me for a while.
However, the Go community has been dealing with this for a while. In Go, you don't have a _throws_ declaration. The solution? __Return an actual error object to the function consumer__.
So, why not do that in TS as well? _(This is a controversial pattern for many, but from my perspective, as long as it increases the chances of catching an issue before the client does, it brings value to the table.)_
Here's one example of how you can do that in TS, again, from the book [Programming TypeScript](https://www.amazon.com/Programming-TypeScript-Making-JavaScript-Applications/dp/1492037656). I think I've done some really small tweaks to it, so I could make use of function return types as `Type Guards`.
One example of how we could define the errors and `Type Guards`:
```typescript
// helpers.ts
// First Part: define the errors
class InvalidDateFormatError extends Error {}
class DateIsInTheFutureError extends Error {}
// Type guard for the errors
function isError(input: unknown): input is Error {
return input instanceof Error
}
// a helper function for dates.
function isValid(date: Date) {
return Object.prototype.toString.call(date) === '[object Date]'
&& !Number.isNaN(date.getTime())
}
```
This is how we could define our logic to handle the errors:
```typescript
// birthday.ts
import {
InvalidDateFormatError,
DateIsInTheFutureError,
isError,
isValid
} from './helpers'
function parse(
birthday: string
): Date | InvalidDateFormatError | DateIsInTheFutureError {
let date = new Date(birthday)
if (!isValid(date)) {
return new InvalidDateFormatError(
'Enter a date in the form YYYY/MM/DD'
)
}
if (date.getTime() > Date.now()) {
return new DateIsInTheFutureError('A what?')
}
return date
}
function getYear(birthday: string = new Date().toISOString()): number | undefined {
const possibleDate = parse(birthday) // step 1
if(isError(possibleDate)) {
// step 2
console.log(possibleDate.message)
return
}
// step 3
return possibleDate.getFullYear()
}
```
Let's start with the `step 1` part, inside the `getYear` function. Here we call the `parse` function. This function, as the signature states, tries to parse a string into a Date. The signature also shows that besides the Date we want, 2 errors could be returned from the validation conditions.
By doing so, at the moment `parse` executes and returns its value to the `possibleDate` variable, TypeScript sees it exactly like this:

Is `possibleDate` a Date or an Error? Since TypeScript can't figure it out at compile-time, it won't allow us to safely access any value. We will have to check the variable using a `Type Guard`. Only after that, you will be able to access the value you want.
In step two, we have our `Type Guard`. If the call to `isError`, whose return type is declared as `input is Error`, returns true, TS knows that inside that if block we're dealing with an Error. Consequently, the compiler will allow access to the `message` attribute.
_(To see more on this Type Guards, take a look at [this link](https://www.typescriptlang.org/docs/handbook/advanced-types.html#using-type-predicates))_
Moreover, having figured that out in step 2, TypeScript also knows that the only possible type left for `possibleDate` after the if block is `Date`, because both error options are subtypes of `Error`.
Now you're allowed to access the Date functions and attributes. :)
> Some Functional Programming concepts can improve this error handling in a quite nice manner, I will write a post regarding that in the next few weeks. If you're curious about it already, I can recommend reading this [book](https://mostly-adequate.gitbooks.io/mostly-adequate-guide/).
> Mostly Adequate Guide to Functional Programming is pleasant to read, with exercises so you can practice the concepts. I'm quite sure it will be a nice addition to your library.
## Mapped Objects
This one I don't recall where I encountered for the first time, but it's a simple use of generics that greatly improves our safety.
For instance, imagine you have an interface with the event types for a button, containing events like `click`, `mouseover`, etc. Furthermore, imagine that you expose a client to that module, where users can subscribe to your events. Have you ever seen something like `.on('click', callbackFunction)`?
Maybe you also want to tell them the types they might expect in their callback functions. It would be way better if we had a type system that could provide this information, right?
This pattern can help your module users a lot with the questions mentioned above. I can't stop thinking of a younger version of me learning the JS basics, and the countless times I had to look up on MDN which events were available on a given type of element.
With this pattern, you can derive types that will improve your IntelliSense and autocomplete features by simply using [Generics](https://www.typescriptlang.org/docs/handbook/generics.html).
```typescript
// redis.d.ts
type Events = {
ready: void
error: Error
reconnecting: {attempt: number, delay: number}
}
type RedisClient = {
// subscriber function
on<E extends keyof Events>(
event: E,
f: (arg: Events[E]) => void
): void
}
```
And this is how this would be used:
```typescript
// redis.ts
import redis from 'redis'
// Create a new instance of a Redis client
let client: RedisClient = redis.createClient()
// Listen for a few events emitted by the client
client.on('ready', () => console.info('Client is ready'))
```
If we take a look at our subscriber function in `redis.d.ts`, TypeScript will realize that the event can only be one of `'ready' | 'error' | 'reconnecting'`, since the values are the keys of the `Events` type. It will also type and validate the callback's argument for you, since its type is derived from the event you select.
This is also helpful whenever you add a new key to that type: all your clients will see the new addition and, as long as nothing broke, can use the new event :)
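To see the pattern in action without pulling in the real redis package, here is a minimal, self-contained sketch. The in-memory `client` below is a hypothetical stand-in for the actual library, just to show what the mapped `on` signature buys us at the call site:

```typescript
type Events = {
  ready: void
  error: Error
  reconnecting: { attempt: number; delay: number }
}

// Internal storage is loosely typed; the public on/emit surface below is strict.
const handlers: { [k: string]: (arg: any) => void } = {}

const client = {
  on<E extends keyof Events>(event: E, f: (arg: Events[E]) => void): void {
    handlers[event] = f
  },
  emit<E extends keyof Events>(event: E, arg: Events[E]): void {
    handlers[event]?.(arg)
  },
}

let lastAttempt = 0
client.on('reconnecting', info => {
  // `info` is inferred as {attempt: number, delay: number}, no casts needed
  lastAttempt = info.attempt
})
// client.on('redy', () => {})  // would not compile: 'redy' is not a key of Events
client.emit('reconnecting', { attempt: 3, delay: 500 })
```

Uncommenting the `'redy'` line makes the compiler reject the call immediately, which is exactly the safety the `redis.d.ts` definition gives to consumers.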
## Time to say goodbye :)
Hopefully, this article can bring some ideas to your future coding, even if it's to avoid using these concepts.
Feel free to comment and bring your ideas to this article.

## References and Resources
- [TypeScript Docs](https://www.typescriptlang.org/docs/handbook/generics.html)
- [Programming TypeScript](https://www.amazon.com/Programming-TypeScript-Making-JavaScript-Applications/dp/1492037656)
- [FrontendMasters - TypeScript](https://frontendmasters.com/courses/typescript-v2/); Some material related to this one is presented for free in [here](https://github.com/mike-works/typescript-fundamentals)
- [TypeScript Deep Dive](https://basarat.gitbook.io/typescript/)
- [Udemy - Understanding TypeScript](https://www.udemy.com/course/understanding-typescript/)
## Appreciation
- Martin Fieber, who helped me a lot with my TypeScript learnings and had a lot (I mean, a LOT) of patience with me.
| hcapucho |
357,923 | Automate configuration of Teams Tab SSO with PowerShell. | If you have no interest in reading the blog post and just want the final script, you can find it on... | 0 | 2020-06-18T08:08:06 | https://techwatching.dev/posts/teams-sso-powershell | powershell, azuread, microsoftteams | ---
title: Automate configuration of Teams Tab SSO with PowerShell.
published: true
date: 2020-06-15 00:00:00 UTC
tags: powershell, azuread, microsoftteams
canonical_url: https://techwatching.dev/posts/teams-sso-powershell
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p85go4um19ucy8zgsp9c.jpg
---
If you have no interest in reading the blog post and just want the final script, you can find it on this [GitHub repository](https://github.com/TechWatching/TeamsDev/blob/master/infra/Scripts/ConfigureTeamsTabSSO.ps1).
## Context
Several months ago, I supervised a student project aiming at developing a Teams application for my company. The application is mainly composed of a tab where Human Resources people can see information about arrivals and departures in the company. Once the project was finished and a first version of the application was available, I provisioned the application infrastructure on my company's Azure tenant using [Pulumi](https://www.pulumi.com/), which is a really nice infrastructure as code platform.
However, configuring Single Sign-On for the tab of the application did not seem possible with Pulumi, as it internally uses the Terraform Provider for AzureAD, which at the time of writing doesn't have all the functionality necessary to configure this. The [documentation about SSO for Teams tabs](http://aka.ms/teams-sso) currently lists all the steps necessary to configure it from the Azure Portal, but it mentions nothing about automating it, hence this blog post.
## Steps to create the PowerShell script
Usually I prefer Azure CLI over PowerShell, as I find it easier to find the commands I need, but Azure CLI doesn't have the necessary commands yet. Most of the code comes from [this script](https://github.com/Azure-Samples/active-directory-dotnet-native-aspnetcore-v2/blob/master/3.-Web-api-call-Microsoft-graph-for-personal-accounts/AppCreationScripts/Configure.ps1) located in a repository of the [Azure Samples GitHub organization](https://github.com/Azure-Samples). I took only what was necessary for Teams Tab SSO, adapted it to use Microsoft Graph objects/commands, and added the missing commands.
I am not an expert in PowerShell so there might be things to improve in the final script, but I hope the following steps will help you understand how to configure SSO for your Teams Tab.
### Interacting with Azure Active Directory
PowerShell has a module called [AzureAD](https://docs.microsoft.com/en-us/powershell/module/azuread/?view=azureadps-2.0) that allows us to interact with Azure Active Directory. The first step is to install this module if it's not already installed, import it, and authenticate to Azure AD in order to be able to use Active Directory commands once authenticated.
```powershell
if ($null -eq (Get-Module -ListAvailable -Name "AzureAD")) {
Install-Module -Name "AzureAD" -Force
}
Import-Module AzureAD
Connect-AzureAD -TenantId $tenantId
```
This will prompt us to login with our AD account. We will see later in the article how we can avoid that if we are using this script in an Azure Pipeline.
### Retrieving the application registration
I already created my application registration in AD with Pulumi so I just have to retrieve it before configuring it.
```powershell
$app = Get-AzureADMSApplication -ObjectId $applicationObjectId
```
If you don't have an existing application registration you can create one with the `New-AzureADMSApplication` command.
💎 You may note that similar commands `Get-AzureADApplication` and `New-AzureADApplication` exist. Both work fine, but the commands with _MS_ in their name internally use Microsoft Graph, which seems to be the modern way to interact with Azure AD.
### Creating the service principal
When you register an application in Azure Portal it creates an Application object and a Service Principal in your tenant. But if you create the Application outside the Azure Portal (Azure CLI, PowerShell, Pulumi, ...), you will have to create the Service Principal as well. Just as a reminder the [application object should be considered as the global representation of your application for use across all tenants, and the service principal as the local representation for use in a specific tenant](https://docs.microsoft.com/en-us/azure/active-directory/develop/app-objects-and-service-principals#application-and-service-principal-relationship).
```powershell
New-AzureADServicePrincipal -AppId $app.AppId -Tags {WindowsAzureActiveDirectoryIntegratedApp}
```
### Exposing an application as an API
To expose an application as an API, it is necessary to set the identifier URI of the application. We will use a variable `$customDomainName` to specify the custom domain of the application. Indeed as stated by the documentation, for the moment Teams Tab SSO does not support applications that use the azurewebsites.net domain.
```powershell
$appId = $app.AppId
Set-AzureADMSApplication -ObjectId $app.Id -IdentifierUris "api://$customDomainName/$appId"
```
### Creating the access\_as\_user scope
Teams Tab SSO works by making the Teams client (whether it be Teams mobile app, desktop app or web app) ask for an Azure AD token with the scope `access_as_user` of the Tab application you developed. So we need to create a scope `access_as_user` in the application.
```powershell
# Add all existing scopes first
$scopes = New-Object System.Collections.Generic.List[Microsoft.Open.MsGraph.Model.PermissionScope]
$app.Api.Oauth2PermissionScopes | foreach-object { $scopes.Add($_) }
$scope = CreateScope -value "access_as_user" `
-userConsentDisplayName "Teams can access the user’s profile" `
-userConsentDescription "Allows Teams to call the app’s web APIs as the current user." `
-adminConsentDisplayName "Teams can access your user profile and make requests on your behalf" `
-adminConsentDescription "Enable Teams to call this app’s APIs with the same rights that you have"
$scopes.Add($scope)
$app.Api.Oauth2PermissionScopes = $scopes
Set-AzureADMSApplication -ObjectId $app.Id -Api $app.Api
```
This piece of PowerShell just ensures existing scopes won't be deleted when adding the scope `access_as_user`. Display names and descriptions of the new scope are the ones recommended in the documentation. This code calls a PowerShell function that simply creates the scope object.
```powershell
<#.Description
This function creates a new Azure AD scope (OAuth2Permission) with default and provided values
#>
function CreateScope(
[string] $value,
[string] $userConsentDisplayName,
[string] $userConsentDescription,
[string] $adminConsentDisplayName,
[string] $adminConsentDescription)
{
$scope = New-Object Microsoft.Open.MsGraph.Model.PermissionScope
$scope.Id = New-Guid
$scope.Value = $value
$scope.UserConsentDisplayName = $userConsentDisplayName
$scope.UserConsentDescription = $userConsentDescription
$scope.AdminConsentDisplayName = $adminConsentDisplayName
$scope.AdminConsentDescription = $adminConsentDescription
$scope.IsEnabled = $true
$scope.Type = "User"
return $scope
}
```
### Preauthorize Teams clients
As the Teams clients will ask for a token with the previously created scope, they must be authorized to access this permission. That is what the following script does:
```powershell
# Authorize Teams mobile/desktop client and Teams web client to access API
$preAuthorizedApplications = New-Object 'System.Collections.Generic.List[Microsoft.Open.MSGraph.Model.PreAuthorizedApplication]'
$teamsRichClientPreauthorization = CreatePreAuthorizedApplication `
-applicationIdToPreAuthorize '1fec8e78-bce4-4aaf-ab1b-5451cc387264' `
-scopeId $scope.Id
$teamsWebClientPreauthorization = CreatePreAuthorizedApplication `
-applicationIdToPreAuthorize '5e3ce6c0-2b1f-4285-8d4b-75ee78787346' `
-scopeId $scope.Id
$preAuthorizedApplications.Add($teamsRichClientPreauthorization)
$preAuthorizedApplications.Add($teamsWebClientPreauthorization)
$app = Get-AzureADMSApplication -ObjectId $applicationObjectId
$app.Api.PreAuthorizedApplications = $preAuthorizedApplications
Set-AzureADMSApplication -ObjectId $app.Id -Api $app.Api
```
This code calls a PowerShell function that simply creates the PreAuthorizedApplication object.
```powershell
<#.Description
This function creates a new PreAuthorized application on a specified scope
#>
function CreatePreAuthorizedApplication(
[string] $applicationIdToPreAuthorize,
[string] $scopeId)
{
$preAuthorizedApplication = New-Object 'Microsoft.Open.MSGraph.Model.PreAuthorizedApplication'
$preAuthorizedApplication.AppId = $applicationIdToPreAuthorize
$preAuthorizedApplication.DelegatedPermissionIds = @($scopeId)
return $preAuthorizedApplication
}
```
### Grant user-level Graph API permissions
The next step consists of specifying the permissions the application will need for the AAD endpoint: email, offline\_access, openid, profile ([OpenID Connect scopes](https://docs.microsoft.com/fr-fr/azure/active-directory/develop/v2-permissions-and-consent#openid-connect-scopes)).
```powershell
# Add API permissions needed
$requiredResourcesAccess = New-Object System.Collections.Generic.List[Microsoft.Open.MsGraph.Model.RequiredResourceAccess]
$requiredPermissions = GetRequiredPermissions `
-applicationDisplayName 'Microsoft Graph' `
-requiredDelegatedPermissions "User.Read|email|offline_access|openid|profile"
$requiredResourcesAccess.Add($requiredPermissions)
Set-AzureADMSApplication -ObjectId $app.Id -RequiredResourceAccess $requiredResourcesAccess
```
This code calls a PowerShell function `GetRequiredPermissions` that adds the delegated or application permissions specified as parameters. Here we only ask for the delegated permissions of Microsoft Graph needed to retrieve an OpenID Connect token, but this function is generic and could be used to require scopes or roles of other APIs.
```powershell
#
# Example: GetRequiredPermissions "Microsoft Graph" "Graph.Read|User.Read"
# See also: http://stackoverflow.com/questions/42164581/how-to-configure-a-new-azure-ad-application-through-powershell
function GetRequiredPermissions(
[string] $applicationDisplayName,
[string] $requiredDelegatedPermissions,
[string]$requiredApplicationPermissions,
$servicePrincipal)
{
# If we are passed the service principal we use it directly, otherwise we find it from the display name (which might not be unique)
if ($servicePrincipal)
{
$sp = $servicePrincipal
}
else
{
$sp = Get-AzureADServicePrincipal -Filter "DisplayName eq '$applicationDisplayName'"
}
$requiredAccess = New-Object Microsoft.Open.MsGraph.Model.RequiredResourceAccess
$requiredAccess.ResourceAppId = $sp.AppId
$requiredAccess.ResourceAccess = New-Object System.Collections.Generic.List[Microsoft.Open.MsGraph.Model.ResourceAccess]
# $sp.Oauth2Permissions | Select Id,AdminConsentDisplayName,Value: To see the list of all the Delegated permissions for the application:
if ($requiredDelegatedPermissions)
{
AddResourcePermission $requiredAccess -exposedPermissions $sp.Oauth2Permissions -requiredAccesses $requiredDelegatedPermissions -permissionType "Scope"
}
# $sp.AppRoles | Select Id,AdminConsentDisplayName,Value: To see the list of all the Application permissions for the application
if ($requiredApplicationPermissions)
{
AddResourcePermission $requiredAccess -exposedPermissions $sp.AppRoles -requiredAccesses $requiredApplicationPermissions -permissionType "Role"
}
return $requiredAccess
}
```
The `GetRequiredPermissions` function calls an `AddResourcePermission` function that creates the permissions (ResourceAccess objects).
```powershell
# Adds the requiredAccesses (expressed as a pipe separated string) to the requiredAccess structure
# The exposed permissions are in the $exposedPermissions collection, and the type of permission (Scope | Role) is
# described in $permissionType
function AddResourcePermission(
$requiredAccess,
$exposedPermissions,
[string]$requiredAccesses,
[string]$permissionType)
{
foreach($permission in $requiredAccesses.Trim().Split("|"))
{
foreach($exposedPermission in $exposedPermissions)
{
if ($exposedPermission.Value -eq $permission)
{
$resourceAccess = New-Object Microsoft.Open.MsGraph.Model.ResourceAccess
$resourceAccess.Type = $permissionType # Scope = Delegated permissions | Role = Application permissions
$resourceAccess.Id = $exposedPermission.Id # Read directory data
$requiredAccess.ResourceAccess.Add($resourceAccess)
}
}
}
}
```
## Using the script in an Azure Pipeline
To execute this script in the Azure pipeline that deploys and configures the rest of the application infrastructure we can use an [Azure PowerShell task](https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/azure-powershell?view=azure-devops).
The task of the Azure Pipeline will look like this:
```yaml
- task: AzurePowerShell@5
displayName: 'Configure Teams tab SSO'
inputs:
azureSubscription: 'My Azure Service Connection'
ScriptType: 'FilePath'
ScriptPath: 'infra/AdditionalScripts/ConfigureTeamsTabSSO.ps1'
ScriptArguments:
-applicationObjectId $(AzureAdObjectId) `
-customDomainName $(CustomDomainName)
azurePowerShellVersion: 'LatestVersion'
```
The advantage is that this task will connect to Azure with an Azure Service Connection that has enough rights to execute the Azure AD commands in this script. However, it involves passing the access token of the Service Principal associated with the Azure Service Connection to the `Connect-AzureAD` command. This can easily be done, as I found out in [a Stack Overflow post](https://stackoverflow.com/questions/60185213/automate-connect-azuread-using-powershell-in-azure-devops).
```powershell
$context = [Microsoft.Azure.Commands.Common.Authentication.Abstractions.AzureRmProfileProvider]::Instance.Profile.DefaultContext
$graphToken = [Microsoft.Azure.Commands.Common.Authentication.AzureSession]::Instance.AuthenticationFactory.Authenticate($context.Account, $context.Environment, $context.Tenant.Id.ToString(), $null, [Microsoft.Azure.Commands.Common.Authentication.ShowDialog]::Never, $null, "https://graph.microsoft.com").AccessToken
$aadToken = [Microsoft.Azure.Commands.Common.Authentication.AzureSession]::Instance.AuthenticationFactory.Authenticate($context.Account, $context.Environment, $context.Tenant.Id.ToString(), $null, [Microsoft.Azure.Commands.Common.Authentication.ShowDialog]::Never, $null, "https://graph.windows.net").AccessToken
Connect-AzureAD -AadAccessToken $aadToken -MsAccessToken $graphToken -AccountId $context.Account.Id -TenantId $context.tenant.id
```
## Summary
In this post, I wanted to show the different steps to configure Teams Tab SSO in PowerShell. The final script can be found [here](https://github.com/TechWatching/TeamsDev/blob/master/infra/Scripts/ConfigureTeamsTabSSO.ps1) and is directly used in an Azure pipeline to automate this configuration. Although it does the job, I hope doing such Azure AD configurations will be supported soon in Pulumi as it would have been easier to set it up instead of coming up with a big PowerShell script like this which is not idempotent. | techwatching |
357,925 | [Easy]: Truncate a table and restart Sequences (Identity and cascade) | Sometimes you may have to delete or truncate your records in the table for your local or dev testing.... | 0 | 2020-06-18T07:55:50 | https://dev.to/jinagamvasubabu/easy-truncate-a-table-and-restart-sequences-identity-and-cascade-1hjp | postgres, sql | Sometimes you may have to delete or truncate your records in the table for your local or dev testing. The easy way is to use the command below.
```
delete from <table> where <condition> = <value>
```
This Works!!!
But the above command doesn't restart the sequences (i.e. those backing `IDENTITY` columns) associated with the table. The command below does, and `CASCADE` additionally truncates any tables that reference this one via foreign keys:
```
TRUNCATE <table_name> RESTART IDENTITY CASCADE;
```
References:
[Stack Overflow](https://stackoverflow.com/questions/5342440/reset-auto-increment-counter-in-postgres)
| jinagamvasubabu |
357,973 | Git Commands (Continuation) | In this article we are going to talk about pull command and branches. This will be a continuation of... | 0 | 2020-06-18T09:13:03 | http://billyokeyo.codes/articles/git-commands-continuation/ | git, github, beginners, tutorial | ---
title: Git Commands (Continuation)
published: true
date:
tags: git, github, beginners, tutorial
canonical_url: http://billyokeyo.codes/articles/git-commands-continuation/
---
In this article we are going to talk about the pull command and branches, continuing from the second part of the series. So let's get started.
In our last article we talked about how to add code to GitHub from the online editor. So let's now talk about starting your project locally and pushing it to GitHub.
#### **2. Writing your code in your text-editor locally then pushing to GitHub.**
First we need to create our project folder. Then, inside the folder, we can create a Readme.md file as we did earlier, but this time locally. Add some content to your Readme.md file, then save.
Now we need to initialize git locally on our machine, and we do that by running
```
git init
```
This will initialize git and a hidden folder will be created called .git which will be saving all our commits.
Now that we have that set up, we now need to add files to our git and we do that by running
```
git add .
```
This will stage all our files in the current directory then we can now commit by running
```
git commit -m "Created Readme"
```
All these commands I had explained in the previous article, so if you don't understand what's going on here, I'd suggest you go check those articles.
Up to there, we now have our files committed but they are still within our local machine, we need now to push them to github, and this time we will use a different command before the push command.
We need to tell GitHub where it should push our files into.
So just head over to GitHub and create a repo. I'll name mine "demo-2"; name yours anything you want. Then you can copy the URL as we did when we wanted to clone, or, after creating your repo, you'll see a screen showing you how to add files to it with a link displayed there. Copy that link.
Now let's go back to our terminal and add that and we do that by running git remote add origin <paste the link here>
```
git remote add origin git@github.com:cartel360/demo-2.git
```
After that we can check which remote repositories we have by running
```
git remote -v
```
Note: You only set the remote repos once, after doing that, you can now push without running the add remote command again.
After setting this, we can now use
```
git push origin master
```
And this will now push our code to GitHub.
And that's all for this. You can now continue adding more files and code, then run git add ., git commit, and git push.
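Putting the steps above together in one runnable session (the repo URL is the one from this article; actually pushing would need real access to that repo and network, so we stop at inspecting the remote):

```shell
set -e
proj=$(mktemp -d) && cd "$proj"          # throwaway project folder
git init -q
git config user.email "demo@example.com" # identity needed to commit
git config user.name "Demo"
echo "# demo-2" > README.md
git add .
git commit -qm "Created Readme"
# register where pushes should go (done only once per repo)
git remote add origin git@github.com:cartel360/demo-2.git
git remote -v   # lists origin's fetch and push URLs
# git push origin master   # would upload the commits (needs network + access)
```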
#### **3. Pushing an Already Created Project.**
There are two ways you can do this.
You can just use the GUI in the GitHub webpage as shown below and drag and drop your files or you can do it from your terminal.

You will click on the **"Upload Files"** button, which will open another screen that allows you to drag and drop files or choose your files.

Doing it from the terminal follows the same procedure as discussed in (2). You just initialize your project with git init, add your files with git add ., commit the files with git commit, create a repo in GitHub and copy the link, add a remote repo with git remote add, then finally push.
Now that that's done, we'll go ahead and discuss the pull command, but before we get into pull, let's talk briefly about branching.
#### **Branching**
When working with git you might have noticed "master" written somewhere. Master is a branch, and it acts as the main branch. You can work with that one branch or choose to create other branches.
On creating a branch you'll be able to make changes in that particular branch without messing with the master branch. This is ideal when you want to test out things but don't want to break your code. You can then test your code in your created branch and, when satisfied with the outcome, merge the changes into your master branch. This is also ideal if you are working with a team and each person handles different features of the project.
Type
```
git branch
```
This will list all your branches.
> To exit out of this type q
Now to create a branch we use git checkout -b <branch name>
```
git checkout -b feature-readme
```
You can use any branch name you prefer but make it as much descriptive as it can be.
> -b is used when creating branches. So only use that when you want to create a new branch
To switch between branches we now use git checkout <branch name> but without the -b
```
git checkout master
```
This will switch our working to master, you can do the same to switch back to our feature-readme branch, you'll only change the branch name.

Hope you have switched back to your created branch because we want to work on it.
Now add something in your readme file then run
```
git commit -am "Made changes to readme"
```
This will commit the changes. Note the **-am**: I used it because I want git to add/stage the changes and then commit, instead of running git add first and then committing.
This changes will now be saved in your feature-readme branch and won't be visible in your master branch.
Let's now merge the feature-readme into our master branch and we do that by running git merge <branch name>
> Before merging make sure you switch to your master branch then run the merge command
```
git merge feature-readme
```

This will merge those changes you made into your master branch or alternatively you can merge the changes in GitHub where you'll use the graphical representation.
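To recap, the whole branch-and-merge flow from this article can be tried end-to-end in a throwaway repository (names below are just examples):

```shell
set -e
repo=$(mktemp -d)                        # throwaway repository
cd "$repo"
git init -q
git config user.email "demo@example.com" # identity needed to commit
git config user.name "Demo"
base=$(git symbolic-ref --short HEAD)    # usually master (main on newer git)
echo "# Demo project" > README.md
git add .
git commit -qm "Created Readme"
git checkout -q -b feature-readme        # create and switch to a new branch
echo "More readme content" >> README.md
git commit -qam "Made changes to readme" # -am stages tracked files and commits
git checkout -q "$base"                  # switch back before merging
git merge -q feature-readme              # bring the branch's changes in
grep "More readme content" README.md     # the change is now on the main branch
```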
That's it guys, this article is becoming kind of long, so I'll stop here. The next article will be the final one of this series, where we will talk about how to make a pull request, cover the pull command, and see how to merge your branches in GitHub.
See you in the next article. | billy_de_cartel |
357,982 | Top 13 Challenges Faced In Agile Testing By Every Tester | Even though we strive for success in whatever journey we undertake, sometimes failure is inevitable.... | 0 | 2020-06-18T09:32:33 | https://www.lambdatest.com/blog/top-13-challenges-faced-in-agile-testing-by-every-tester/ | challenge, testing, agile, qa | Even though we strive for success in whatever journey we undertake, sometimes failure is inevitable. But in most cases, if we just avoid a few blunders and get through major challenges hampering progress, the path to success won’t seem so challenging. The same goes for agile testing teams where the pressure of continuous delivery can be overwhelming.
Now, I’m not telling you to aim for 100% perfection. ‘Figure out everything before leaving the room.’ Doesn’t this approach to sprint planning sound like a hostage situation? Agile testing teams usually try to eliminate the uncertainty factor as much as possible. But don’t you think keeping it short and effective would yield better results?

This was just an example of the hurdles that can actually sabotage a sprint! Speaking of which, in this article, I’m going to take a detailed look into some of the challenges in Agile Testing by every tester. So, let’s begin.
## 1. Not Keeping Up With Changing Requirements

Coming up with a good [Agile testing plan](https://www.lambdatest.com/blog/top-13-challenges-faced-in-agile-testing-by-every-tester/?utm_source=dev&utm_medium=Blog&utm_campaign=Vethee-18062020&utm_term=Vethee) is vital, no doubt about that. But if you believe that your plan is fool-proof and you won’t ever need to make modifications, think again. Most teams waste a lot of time trying to come up with an ideal Agile testing plan.
Now, although how much we’d like to achieve it, the truth is a perfect Agile testing plan does not exist. The complex environment won’t permit it. Sometimes, you have to make changes on an ad hoc basis. Or you might have to remove some processes. All in all, you have to be flexible and adapt to changes in the sprint, of course, keeping in mind that it all aligns with the sprint goal and you’d be ahead of all the challenges in Agile Testing.
## 2. Not Planning Cross Browser Testing
Most firms cease testing when their site successfully runs on primary browsers such as Google Chrome and Mozilla Firefox. But do you really think you can have a wide customer base if your site runs well only on a handful of popular browsers?
After all, no customer wants to be restricted to a bunch of browsers. It takes away the versatile nature of business. You also can't assume that if a web application or website works fine in one browser, the same would be the case for others. This is why it becomes important to ensure that your browser matrix is covered while performing cross browser testing. You can refer to our article on creating a browser compatibility matrix to solve any challenges in Agile testing caused by not targeting the right browsers!
Moreover, if you are using cutting edge technology, it’s also important to check whether your site works well in different browser versions. It’s important to note that[cross browser testing](https://www.lambdatest.com/?utm_source=dev&utm_medium=Blog&utm_campaign=Vethee-18062020&utm_term=Vethee) provides a consistent behavior across various browsers, devices, and platforms. This increases your chances of having a wide customer base. You can even choose to utilise a Selenium Grid to scale your cross browser testing efforts.
## 3. Failing To Incorporate Automation

Speaking strictly in business terms, time is money. If you fail to accommodate automation in your testing process, the time needed to run tests stays high, and this becomes a major source of challenges in Agile testing, since you'd be spending a lot of time running these tests. You also have to fix glitches after the release, which takes up even more time.
If a company isn't performing test automation, its overall test coverage might be low. But as firms implement test automation, there is a sharp decline in the amount of time testers need to run different tests. Thus, it leads to accelerated outcomes and lowered business expenses. You can even implement [automated browser testing](https://www.lambdatest.com/selenium-automation?utm_source=dev&utm_medium=Blog&utm_campaign=Vethee-18062020&utm_term=Vethee) to automate your browser testing efforts.
Moreover, you can always reuse automated tests and utilize them via different approaches. Teams can identify defects at an early stage, which makes fixing them cost-effective.
## 4. Excessive Focus on Scrum Velocity

Most teams emphasize maximizing their velocity with each sprint. For instance, if a team did 60 Story Points last time, this time they'll at least try to do 65. But what if the team could only do 20 Story Points when the sprint was over?
Did you realize what just happened? Instead of making sure that the flow of work happened on the scrum board seamlessly from left to right, all the team members were concentrating on keeping themselves busy.
Sometimes, committing too much during sprint planning can cause challenges in Agile Testing. With this approach, team members are rarely prepared in case something unexpected occurs.
Under-committing provides more room for learning and leaves more mind space to improve on present tasks. As a result, the [collaboration between testers and developers](https://www.lambdatest.com/blog/better-collaboration-testers-and-developers/) gets better and they can get more work done in shorter time spans.
This approach also increases flexibility in the sprint backlog. In case, time permits, you can add more tasks later on. When you are under-committing, you also reduce the chances of carrying over the leftover work to the next sprint.
## 5. Lack of a Strategic Agile Testing Plan

Too much planning can cause challenges in Agile testing, but that doesn’t mean you don’t plan at all! A strategic plan helps teams focus on the direction they are heading. After all, can we even dream about delivering a project or pushing a release with no plan at all?
Benjamin Franklin has rightly said, “Failing to plan is planning to fail”. Having a basic guide to reach a goal or a vision assists team members in overcoming challenging situations. Therefore, after setting a goal, don’t forget to define the metrics necessary to reach your goal.
For instance, you can divide your plan into different phases. It’s a wise move to arrange meetings from time to time to review the progress and clear doubts. Some things to discuss during the meetings include sprint velocity, task estimation, and stretch goals.
The plan should be rigid enough to provide a direction to the team on how to work and instill confidence in team members. At the same time, it has to be flexible enough to incorporate changes and work on the feedback.
## 6. Considering Agile a Process Instead of a Framework
Even experienced developers or testers who have been in this for a while tend to think of Agile as just any other process. They fail to realize that it’s a framework that defines the entire development process. It also helps various teams in matching their requirements with preset guidelines.
Agile is about fine-tuning the process and making the required adjustments by using empirical data from shorter cycles of development and release. The team members should put their heads together in every sprint to improve with the aim of making each sprint more effective.
## 7. Micromanaging Agile Testing Teams

In a waterfall model, the management is responsible for setting a schedule and the pace for teams involved. This model has been in existence for a long time, thus making the management stick to previous practices and habits.
But in an Agile project, if the management closely observes and tries to control what the employees are doing all the time, failure in a sprint becomes inevitable. Agile testing teams are self-organizing. They are cross-functional and work together to achieve successful sprints.
Teams comprise motivated individuals who can make decisions and are flexible enough to adapt during times of change. Everyone is equally empowered working towards a common goal. But when you micromanage Agile testing teams, the constant interference can negatively affect the ability of employees to accomplish the goal in their own way.
If you take ownership and empowerment away from the team, there is no point in adopting an Agile framework.
## 8. Incoherency in Defining ‘Done’
My work here is done! Sounds so relieving, right? But when a person says this, what do they really mean by done? A developer can just check in the code and say that they’re done. On the other hand, some other developer can say this only when they are done with checking in, running tests and static analysis, etc.
Every person on the team has a different definition when they say ‘done’. But an incorrect interpretation about the same can put both employees and the management in a pickle. It can lead to incompletion of various tasks which can cause a lot of trouble, especially, at the end of a sprint.
Therefore, it’s important for everyone to be on the same page. When someone says that they have completed their tasks, they should maintain clarity and reveal the specifics.
## 9. Aiming for Perfection By Detailing an Agile Testing Plan
As discussed earlier, there is nothing worse than aiming for perfection and detailing out the Agile testing plan too much. It’s important to note that you can’t have all the information readily available at the beginning of the sprint.
The best thing to do is to settle for an Agile testing plan that’s good enough. This way, you won’t spend all your precious time planning. When you have more information, add to the Agile testing plan and make it better. Did you just end up with a killer Agile testing plan without wasting time planning it out? Well, that’s the beauty of Agile.
## 10. Mishandling of Carry Over Work
Now, no matter how much you try to be on time when it comes to accomplishing tasks of the sprint goal, you can’t completely avoid some carry over work. There is always going to be something left over when the sprint ends.
It’s tough to estimate the time the leftover tasks will take. Even if you are done with 75% of the task, the remaining 25% can take up a lot of time. To be safe, never underestimate the amount of work that is remaining. In this case, remember, overestimating won’t harm you.
Even if you end up overestimating the work, you can always add more if time permits later. But if you tend to underestimate, there are chances that there can be a tonne of leftover work when the sprint ends.
## 11. Lacking Skills and Experience With Agile Methods
Agile and scrum are relatively new in the tech industry. So, it’s understandable why people are not so experienced. It’s not possible to get your company off to a flying start with the sudden implementation of a new framework.
While the lack of experience itself is not a big issue, if you fail to address this in the short term, it’s going to cost you for the long haul. There is a risk of your employees falling back into the same old comfortable pattern of work.
The more you delay, the harder it gets to make your employees relinquish their comfort zone. So, to analyze the experience of different team members, hold meetings and conduct a thorough gap analysis. After that, when you get a vague idea, start educating them on the basics and work your way up to the more intricate parts.
## 12. Technical Debt
Procrastination is one of the biggest challenges in Agile Testing due to its quick-paced nature. This attitude can pile up into a mountain of technical debt that’s harder to pay off than one might think. It’s tough to pay off technical debt alongside the workload of an ongoing task. And if you get too caught up in clearing the debt, it also affects what you are currently working on.
When you pick up something you put off earlier, the entire sprint will suffer. Sometimes, when the new tasks suffer due to extremely high technical debt, the sprint can even fail. This is one of the main reasons why you should avoid technical debts and overcome the associated challenges in Agile testing.
## 13. Compromised Estimation
The biggest mistake some teams make is starting to treat estimations as accurate statistics. It’s important to note that estimations are, by nature, vague! They can’t be accurate all the time. But in most cases, bad estimations are a result of the agile testing team failing to see the complexity or the depth of a user story or task.
For instance, the developer can uncover dependencies in the user story at later stages of the sprint. This leads to unexpected delays by the implementation team. Now, in an agile framework, you can be prepared for minor delays. But what if a 10-hour estimation turns into 20?
The team sometimes has to deal with such circumstances. But if compromised estimations occur on a frequent basis, the sprint format is likely to take a big hit. Therefore, you should be extra careful while making estimations so as to avoid inaccuracies as much as possible.
## Final Words!
Always keep in mind, the holy grail of a sprint in Agile is flexibility. There are always going to be times when a particular step does not deliver the expected results. But Agile is far from the ‘plan and execute’ approach. You have to be flexible and adaptive.
A deviation in the Agile testing plan or the occurrence of an obstacle is not the core issue here. Instead, how you eliminate as many challenges in agile testing as possible and deal with the existing ones determine the success of your sprint.
To sum up, I would like to say that if you stay mindful of the above challenges in agile testing, the chances of success increase substantially. So, the next time you plan a sprint, keep in mind the challenges in agile testing stated above. Try to overcome as many as possible and you’ll definitely notice a positive impact.
I hope you liked the article, and you’re ready to tackle these challenges when and where they occur. Share your challenges with us in the comment section down below. Also, don’t forget to share this article with your peers. Any retweet or share is always welcomed. That’s all for now! **Happy Testing!!!** 😊 | vetheedixit |
358,009 | Advantage of going Serverless for app development | As we think about an App Development📱, we mostly think about economic factor 💰 and easiest way to get... | 0 | 2020-06-18T10:40:39 | https://dev.to/bijinazeez/advantage-of-going-serverless-for-app-development-3i7g | aws, serverless | As we think about app development 📱, we mostly think about the economic factor 💰 and the easiest way to get into the market 👥. So we choose the team 👩💻👨💻 and technology for development based on that. As we go forward, one of our main headaches would be server management based on load and activities. No more worries, here is your answer for that: Serverless Architecture.
Serverless may have a confusing name and might have people believe that it is “server-less”, but it is still an impressive architecture with various benefits. From a business’ perspective, the best advantage of going serverless is reduced time-to-market. Others include lower operational costs, no infrastructure management, and efficiency. As with all architectures, there’s no doubt that the success and viability of a serverless approach, too, depends on the requirements of the business. Despite having many advantages, it cannot solve every problem. Therefore, be careful before adopting a serverless approach.
There are cases when it makes more sense, both from a cost perspective and from a system architecture perspective, to use dedicated servers that are either self-managed or offered as a service. For instance, large applications with a fairly constant, predictable workload may require a traditional setup, and in such cases the traditional setup is probably less expensive. Additionally, it may be prohibitively difficult to migrate legacy applications to a new infrastructure with an entirely different [architecture](https://ateam-texas.com/serverless-architecture-advantage-of-going-serverless-for-your-next-app-development/). | bijinazeez |
358,042 | How to build painless multi-language apps with Angular and ngx-translate | I'm sure if you're reading this post is because either you're curious or you understand the pain of... | 2,337 | 2020-06-18T12:11:07 | https://supernovaic.blogspot.com/2020/07/how-to-build-painless-multi-language.html | angular, ngxtranslate, multilanguage, internationalization | I'm sure that if you're reading this post, it's because you're either curious or you understand the pain of supporting multiple languages in Angular.
I love Angular and it's my main modern JS framework, but there is something that drives me nuts: its poor multi-lingual support. It's extremely over-complicated for my taste.
I have developed multiple websites and over 11 apps for Windows and Android and I cannot understand how the creators couldn't be inspired by Google's most important OS, **Android**.
In Android, you only need a couple of XMLs (en, es, etc.), minor adjustments in the UI XMLs, or to code something in Java/Kotlin, and magically you have an app that supports multiple languages.
After several attempts, I found **ngx-translate**, which leveraged my work significantly. **So, how to use it?**
```
npm install @ngx-translate/core @ngx-translate/http-loader --save
```
Next, configure your **app.module.ts**
#### 1. Enable the **translation service**:
```javascript
import { TranslateModule, TranslateLoader, TranslateService } from '@ngx-translate/core';
import { TranslateHttpLoader } from '@ngx-translate/http-loader';
import { HttpClient, HttpClientModule } from '@angular/common/http';
```
#### 2. Configure the **loader**:
a. **Root domain** (www.mydomain.com):
```javascript
export function translateHttpLoaderFactory(http: HttpClient) {
return new TranslateHttpLoader(http);
}
```
b. **Sub-domains** (myself.github.io/myapp):
```javascript
export function translateHttpLoaderFactory(http: HttpClient) {
return new TranslateHttpLoader(http, './assets/i18n/', '.json');
}
```
In this option, you are going to configure the location of your JSON files.
#### 3. Configure your **constructor** (my version auto-detects the browser language and sets it by default):
```javascript
availableLng = ['en', 'es'];
//start the translation service
constructor(private translateService: TranslateService) {
//defines the default language
let tmpLng = 'en';
//gets the default browser language
const currentLng = window.navigator.language.substring(0,2);
if (this.availableLng.includes(currentLng))
tmpLng = currentLng;
translateService.setDefaultLang(tmpLng);
}
```
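The language-detection logic in this constructor can be factored out and checked on its own. Here is a hypothetical standalone helper (the function name and signature are my own, not part of ngx-translate) that mirrors the behavior above:

```javascript
// Hypothetical helper mirroring the constructor logic above:
// use the browser language if it is in the supported list,
// otherwise fall back to a default.
function pickLanguage(browserLang, availableLng, fallback = 'en') {
  const lng = browserLang.substring(0, 2);
  return availableLng.includes(lng) ? lng : fallback;
}

console.log(pickLanguage('es-MX', ['en', 'es'])); // → 'es'
console.log(pickLanguage('fr-FR', ['en', 'es'])); // → 'en'
```

Keeping this as a pure function also makes it trivial to unit test, which the inline constructor version is not.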
#### 4. Add to your imports:
```javascript
HttpClientModule,
TranslateModule.forRoot({
loader: {
provide: TranslateLoader,
useFactory: translateHttpLoaderFactory,
deps: [HttpClient]
}
})
```
Your final **app.module.ts** might look like this one:
```javascript
import { HttpClientModule, HttpClient } from '@angular/common/http';
//configure translation service
import { TranslateModule, TranslateLoader, TranslateService } from '@ngx-translate/core';
import { TranslateHttpLoader } from '@ngx-translate/http-loader';
/* IMPORTANT:
This only works if you are setting your app in your main domain: www.mydomain.com
*/
export function translateHttpLoaderFactory(http: HttpClient) {
return new TranslateHttpLoader(http);
}
/*
For sub-domains like myself.github.io/myapp
You need to use this code or a variation with the location of your assets:
*/
/*
export function translateHttpLoaderFactory(http: HttpClient) {
return new TranslateHttpLoader(http, './assets/i18n/', '.json');
}
*/
@NgModule({
declarations: [
AppComponent
],
imports: [
BrowserModule,
HttpClientModule,
TranslateModule.forRoot({
loader: {
provide: TranslateLoader,
useFactory: translateHttpLoaderFactory,
deps: [HttpClient]
}
})
],
providers: [],
bootstrap: [AppComponent]
})
export class AppModule {
//define available languages
availableLng = ['en', 'es'];
//start the translation service
constructor(private translateService: TranslateService) {
//defines the default language
let tmpLng = 'en';
//gets the default browser language
const currentLng = window.navigator.language.substring(0,2);
if (this.availableLng.includes(currentLng))
tmpLng = currentLng;
translateService.setDefaultLang(tmpLng);
}
}
```
Afterwards, **create** a new folder in **assets** called **i18n**. Inside of it, you are going to create the language assets like **en.json**:
```json
{
"Title": "Translation demo",
"WelcomeMessage": "Welcome to the international demo application"
}
```
And **es.json**:
```json
{
"Title": "Demo de traducción",
"WelcomeMessage": "Bienvenido a la aplicación de demostración internacional"
}
```
Now, inside the **HTML** part of your components, you can call it like this:
```html
<h1 translate>Title</h1>
```
Where **translate** indicates the tag that is going to be translated and **Title** the JSON key.
## Can this be extended beyond HTML tags?
Definitely, what if you have a **placeholder** in an **input**?
You can use it like this:
```html
<input placeholder="{{'Title' | translate}}" />
```
Or how do you use it from the **TypeScript** file?
First, enable it from the **component constructor**:
```javascript
constructor(private translateService: TranslateService) { }
```
Now, you can access it with a simple piece of code like this one (synchronous, but might return the same _key_ if it hasn't loaded yet):
```javascript
console.log(this.translateService.instant('WelcomeMessage'));
```
The best option would be to use the _async_ option:
```javascript
this.translateService.get('WelcomeMessage').subscribe((data: any) => { console.log(data); });
```
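ngx-translate also supports parameters inside translation strings: a key such as `"Greeting": "Welcome, {{name}}!"` can be resolved with `this.translateService.get('Greeting', { name: 'Ana' })` or with the pipe as `{{ 'Greeting' | translate:{ name: 'Ana' } }}` (the `Greeting` key here is my own example, not from the files above). Conceptually, the interpolation behaves like this plain-JavaScript sketch, not the library's actual implementation:

```javascript
// Minimal sketch of {{param}} placeholder interpolation,
// similar in spirit to ngx-translate's parameter support.
function interpolate(template, params) {
  return template.replace(/{{\s*(\w+)\s*}}/g, (_, key) => params[key] ?? '');
}

console.log(interpolate('Welcome, {{name}}!', { name: 'Ana' })); // → 'Welcome, Ana!'
```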
The greatest benefit in my experience with **ngx-translate** is that it works and looks similar to common Apps development in Android or Windows.
This post has some inspiration from Yaser's blog:
https://yashints.dev/blog/2018/01/17/multi-language-angular-applications
Plus, if you want to see a full action app using this framework, you can check it here:
https://fanmixco.github.io/gravitynow-angular
Also, here is its repository:
https://github.com/FANMixco/gravitynow-angular
### Follow me on:
| Personal | LinkedIn |YouTube|Instagram|Cyber Prophets|Sharing Your Stories|
|:----------|:----------|:------------:|:------------:|:------------:|:------------:|
|[](https://federiconavarrete.com)|[](https://www.linkedin.com/in/federiconavarrete)|[](https://youtube.com/c/FedericoNavarrete)|[](https://www.instagram.com/federico_the_consultant)|[](https://redcircle.com/shows/cyber-prophets)|[](https://redcircle.com/shows/sharing-your-stories)|
[](https://www.buymeacoffee.com/fanmixco) | fanmixco |
358,165 | Visualizing React state flow and component hierarchy | An extension to display React state flow and component hierarchy React applications are... | 0 | 2020-06-18T15:40:10 | https://dev.to/hmitrea/visualizing-react-state-flow-and-component-hierarchy-39b | webdev, react, opensource | #### An extension to display React state flow and component hierarchy

React applications are built of components that are connected with one another, and seeing those connections with the basic React dev tools while developing an application can be tedious and difficult.
Because of that we decided to build an **open source** [Firefox](https://addons.mozilla.org/en-US/firefox/addon/realizeforreact/?src=search) and [Chrome](https://chrome.google.com/webstore/detail/realize-for-react/llondniabnmnappjekpflmgcikaiilmh?authuser=0&hl=en) extension to aid in the viewing of the components.
Realize for React is a tool to help developers visualize the structure and state flow of their React applications, especially when they are growing in scale and complexity. It currently supports React v.16.8.

# Functionality includes:
__Zoom & Pan__ - Hold down shift to enable dragging and zooming on the tree (to recenter just click the center button)
__Component Focus__ - Click on a node to view state, props and children in the right-hand panel
__State Flow__ - Click the 'state' toggle to show state flow on the tree. Stateful components have blue nodes and state flow is shown by blue links
__Search and Highlight__ - Enter a component name in the search bar to see all matching nodes pulsate
We are an [open source project](https://github.com/oslabs-beta/Realize) where you can contribute, and if you have any concerns please message any of us.
*The Team that made it all possible:*
**[Fan Shao](https://github.com/fan-shao)
[Harry Clifford](https://github.com/HpwClifford/)
[Henry Black](https://github.com/blackhaj)
[Horatiu Mitrea](https://github.com/hmitrea)** | hmitrea |
358,175 | A community that converts cars to electric | Do you dream of converting a combustion car into electric? It doesn’t have to be a dream. We talk to... | 0 | 2020-06-18T15:49:41 | http://coders4climatestrike.com/2020/05/a-community-that-converts-cars-to-electric.html | electricvehicles | Do you dream of converting a combustion car into electric? It doesn’t have to be a dream. We talk to Kevin Sharpe of New Electric Ireland
### Tell me a little bit about your company
We are a group of 5 [companies](http://www.newelectric.nl/) headquartered in the Netherlands. We convert boats, buses, cars, and trucks to electric drive, often complete
vehicle fleets. We work with both battery and hydrogen powered vehicles.
The company was founded in The Netherlands in 2008 by Anne Kloppenborg a friend and colleague for many years. The workshop in Ireland was established in 2018.
<img src="http://coders4climatestrike.com/img/2020-newireland/marine_tugboat.jpeg" width="400" alt="Electric marine tugboat">
### How do these car conversions work?
At New Electric Ireland we sometimes do the car conversions, we also sometimes sponsor other people to do it. However, most impact comes from
teaching people to do it, so we do all the admin work, provide workshop space, etc to enable that.
Ireland is a real hub for development. We are lucky to work with [Damien Maguire](https://www.evbmw.com/), an exceptional engineering talent. He runs our courses and has been converting cars for more than a decade.
There are two things driving this forward:
- We want to extend the life of existing vehicles
- We want to ensure this can be achieved at low cost, since new electric vehicles are fairly expensive. Two years ago Damien was able to make a conversion for just €1000. This vehicle has a 40 mile range.
<img src="http://coders4climatestrike.com/img/2020-newireland/2020-05-workshop-small.jpg" width="400" alt="Car workshop showing a Jaguar and a Mini" data-toggle="tooltip" title="Car workshop showing a Jaguar and a Mini" >
### I see there are a lot of Open Source references in the [Open Inverter website](https://openinverter.org/forum/). Can you tell us a little about you and Open Source projects?
We have been supporting open source for many years both financially and donating equipment. My main business is in the medical field in the US.
The business is very successful and the core of the technology is open source. I fully endorse open source and it can be made a viable
business, it doesn't need to be a charitable exercise.
In the electric car space, we wanted to formalise our support, that's why when I was coming to Ireland, we took a decision to set up a
facility to teach people about conversions and open source. We're currently teaching about 100 people a year. We also undertake open
source R&D.
<img src="http://coders4climatestrike.com/img/2020-newireland/2020-05-wip.jpeg" width="400" alt="Car conversion in progress">
### Is there anything happening at the moment that enables the car conversions?
What has become obvious is that a lot of the components that are being used in electric vehicles, particularly hybrids, are available at a very low price point. For instance the Toyota Prius inverter, the 2nd and 3rd generations of those, are capable of over 300 horsepower and Damien (Maguire) recently bought one for €40 - to think that five years ago we were spending €500 for an inverter!
If you buy a bespoke inverter and you have a fault, you are very dependent on that manufacturer, but if you buy from, say Toyota, you can
buy the part off the shelf.
We have been trying hard to understand the capabilities of OEM components because we want to put together a very capable platform. Our
current goal is to get a solid 200 mile range with 250kW charging and 300 horsepower for €7000.
Additionally, we are aiming for a conversion with a bolting kit that enables the procedure to be completed in a day - assuming a moderate
mechanical skill level.
### So you run the courses. Are they for mechanics only?
About a third of the people enrolled for our courses are professional car mechanics. The car aspect of the conversion process is not that
large. We want to try to help with this by making the process more like a bolt in kit.
People with less experience in mechanics tend to struggle more, since Open Inverter has many users with a great deal of domain expertise that
can be difficult to understand for the layperson. However, there are five or six builds going on today where people are making good progress
and intend to give back to the community. Learners providing feedback into the community will help to bridge the gap of experience and ease
the transition for newcomers.
One can imagine a car conversion group, the same way you would have a woodworking group. In that group there would be at least one person who
has performed a conversion before, so that together the group can assist each other and make more conversions.
Part of what we want to do with Open Inverter is to stop reinventing the wheel. There is always new stuff out there, but we think that we now
have found all the key components we need and we can now get to build a bolt in kit - and when I say we - I mean the community.
Undoubtedly there will be companies producing things to make the process simpler. For instance, a coupler that locks up the Lexus GS450
transmission is a bolt-in piece that we need. Whilst it is not a particularly complicated thing to make yourself, if you don't have the
metalworking skills it could present quite a challenge. More companies producing these kinds of things helps to bring the conversion process
within the scope of more people.
<img src="http://coders4climatestrike.com/img/2020-newireland/2020-05-battery-packs.jpeg" width="400" alt="Battery packs">
### What are some benefits of open development you have personally experienced?
I guess it's simpler to show with some examples: There is a fantastic project called Open Vehicles. It's a monitoring and control tool that
has been developed in the last decade. Open Vehicles gives you all the remote features that you would have on a Tesla. We hope to incorporate it with Open Inverter, it would give a whole new level of control over the converted vehicles.
Another example is: recently we reversed engineered the Tesla Model 3 charger, it uses CCS charging (rapid charging standard for Europe). With CCS you can charge up to 250KW, meaning you can top up your battery in a few minutes. We can use that technology in the converted cars, and there is no reason why people can't take that technology to update an existing electric car. There are a lot of extra benefits.
### Are you hoping to open many locations to teach people to convert cars?
It wouldn't be our company that does this. This is where we want the community at large to take over. What we are particularly hoping is that people from the course will pick it up and run with it, perhaps creating their own groups, workshops and educational material.
We can't possibly service the billion odd cars that are out there. It is much more important that we teach people with an ethos of community and sharing.
We are fortunate in that we can basically run this as a non-profit, because we are part of a group of companies and because Anne and myself
are both very passionate about trying to grow the number of clean vehicles. This is our contribution to stop climate change.
### If a software developer wanted to get involved in this project somehow, is there anything particular she should look at?
An obvious start would be Open Vehicles, the monitoring system. This is an important software project. It needs consistent focus from some
software people so that it can be used by Open Inverter.
A video how to install Open Inverter and a high level overview of what it provides:
<iframe width="560" height="315" src="https://www.youtube.com/embed/Z9Fsbvd1Kfg" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
#### Do the thing!!
So, you read this far and you are interested in trying to do something?
Well here are some options for you:
* [Open Vehicles](https://www.openvehicles.com/) : Open Source electric vehicle telemetry. They have a [developer guide](https://docs.google.com/document/d/1q5M9Lb5jzQhJzPMnkMKwy4Es5YK12ACQejX_NWEixr0/edit#) available, as well as [extensive documentation](https://docs.openvehicles.com/en/latest/index.html). The github project page shows the current status of the ongoing work.
* [Open Inverter](https://openinverter.org/wiki/Main_Page): You can go to the course ([More info](https://openinverter.org/forum/viewtopic.php?f=15&t=491)). Explore the Open Inverter wiki.
Many thanks to Ross McKinlay, Richard Dallaway, Charlotte Franklin, Jamie Beevor and many others that helped editing the interview. Many thanks to Kevin Sharpe for his sharing this with us.
Follow to the next article on the series: [Damien Maguire. An Electric Engineer that converts cars to electric](http://coders4climatestrike.com/2020/06/a-community-that-converts-cars-to-electric-part-2-damien-maguire-interview.html)
| coders4climate |
358,275 | How to communicate as an Introvert | I suck at public interactions. Meeting new people, blah. Presentations for the team, dreadful. Quest... | 0 | 2020-06-18T19:04:22 | https://dev.to/musthaveskill/how-to-communicate-as-an-introvert-4hlg | watercooler, inclusion | 
I suck at public interactions. Meeting new people, blah. Presentations for the team, dreadful. Questions? No thank you! Even when I have something I'm proud of and need to show it at work for consideration, I'd rather keep the feeling of accomplishment to myself.
If you are reading this, but are thinking, "what a loser", so am I. I have had this trait since a kid. Nobody ever noticed, or made me feel awkward, until I started applying for jobs.
Feelings stronger than any relationship I've been in sabotaged my interactions, making me seem unqualified, unprepared, uninterested and plain stupid.
Overcoming this was not easy. Because this made me feel like I was "defective", the time I wasted trying to understand and overcome my own insecurities was long, and it's something I still work on today. But hopefully some of the following advice can help someone else take steps to overcome anything similar without doing it alone.
1. Daily interactions!
I hope I'm the only person with this problem, but staying home all the time used to be my routine. But in order to pass interviews, and communicate to teams or clients for work, my habits needed to change. I needed practice.
I couldn't count on going to a million interviews and practice, and family/friend practice was fake. Here's what I did.
Every day I added time to go to a super market, coffee shop, corner store for a drink. I started asking the cashier how their day was. That's it.
This small interaction felt so uplifting. I soon started asking people waiting in line to checkout if they would like to go ahead of me. Now I ask the person behind me if I can pay for their order when I'm at Starbucks.
This type of situation usually happened before I went to work, and not only forced me to start the day with a small conversation, but filled me with a sense of pride and happiness for doing a good deed! (In case you're wondering, YES, I have overpaid for some Starbucks orders after asking and finding out it wasn't "just a coffee"... Still did it just to not break the habit...). You don't need to follow these exact scenarios, but forcing yourself to leave the house and interact with one person daily can have awesome results!
2. Sign up for "Meet-ups" and PRESENT.
Meet-ups have groups. But the groups are all there with the same interests. This can make it easier to start conversations because of the chances of everyone enjoying the same topics. If your area doesn't have the type of group you would attend (my area used to have ONE tech-related group...), be brave and pick something else. A big trend I've seen is "MasterMind"/Public Speaking groups. They help people build their skills for talking to audiences. Even if you aren't ready to speak, show up and watch others present. Let the organizer know you aren't ready, but are working on it. If you find a group you like, force yourself to sign up to present or speak. Do it without having any idea of what to talk about. The "act" of signing up will cause a desire to come up with something. When I would tell myself that I would sign up after creating my topic to speak on, I ALWAYS neglected it. Sometimes commitment to the engagement can cause your fear of failure to outweigh your fear of speaking.
It will be tough, or maybe it will make you realize that it was all that was needed to become the badass you always knew you were!
Hope this helps!
| musthaveskill |
358,278 | Angular HTTP Proxy (CORS) in 10 minutes | With Client Side Proxies you can easily redirect cross domain url requests! First create a proxy.co... | 0 | 2020-06-18T19:40:23 | https://dev.to/jwp/angular-http-proxy-fundamentals-2e09 | With Client Side Proxies you can easily redirect cross domain url requests!
- First create a `proxy.conf.json` file in the root directory.
```json
{
"/g": {
"target": "https://www.google.com",
"secure": false,
"pathRewrite": {
"^/g": ""
},
"changeOrigin": true
}
}
```
So if the URL is `http://localhost:4200/g`, the request is rewritten and forwarded to `https://www.google.com`.
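Under the hood, each `pathRewrite` key is treated as a regular expression whose matches are replaced before the request is forwarded. A rough, illustrative Python sketch of that rewrite rule (the real work is done by the dev server's proxy middleware, not this code):

```python
import re

# Mirrors the proxy.conf.json rule: {"^/g": ""}
RULES = {"^/g": ""}

def path_rewrite(path, rules=RULES):
    """Apply each pathRewrite rule: the key is a regex, the value its replacement."""
    for pattern, replacement in rules.items():
        path = re.sub(pattern, replacement, path)
    return path

print(path_rewrite("/g/search?q=angular"))  # -> /search?q=angular
```

So the `/g` prefix is stripped, and anything after it is preserved on the forwarded request.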
- Set up the angular.json config to use the proxy.
```typescript
"serve": {
"builder": "@angular-devkit/build-angular:dev-server",
"options": {
"browserTarget": "TESTING:build",
// this line here.
"proxyConfig": "proxy.conf.json"
},
...
```
**Results**

**Note**
This does not work on an http client get like this:
```typescript
// because there's no listener on port 4200
http.get("localhost:4200/g")
```
This shows what client-side routing does: it parses a URL in the browser and routes to the proper component. Client-side routing does not imply an endpoint; it's just an entry to another page.
Server-side routing does imply endpoints, which are highly strict about what they do. Instead of serving a page (which they can do statically), they mostly serve data.
The proxy is an easy way around CORS issues, but will soon need a bit more regarding same-site policy implementation.
| jwp | |
358,293 | Week 9: YelpCamp: Initial Routes and Databases | This week was YelpCamp: Initial Routes and Databases from Colt Steele The Web Developer... | 0 | 2020-06-18T22:08:32 | https://dev.to/code_regina/week-9-yelpcamp-initial-routes-and-databases-54fd | javascript, webdev, codenewbie, beginners | ---
title: Week 9: YelpCamp: Initial Routes and Databases
published: true
description:
tags: javascript, webdev, codenewbie, beginners
cover_image: https://miro.medium.com/max/1400/1*RtLmDhbpg2h1I8cG0l4yyg.png
---
#### This week was YelpCamp: Initial Routes and Databases from Colt Steele's The Web Developer Bootcamp.
- YelpCamp: Initial Routes
- YelpCamp: Layout
- YelpCamp: Creating Campgrounds
- What is a Database
- Mongo Shell Basics
- Introduction to Mongoose
### YelpCamp: Initial Routes
YelpCamp will have a landing page and a campground page that lists all campgrounds.
Each campground will have a name and an image.
There is an easier way to install multiple packages at once.
`npm install express ejs --save`
The landing page will be served on the root route.
```
var express = require("express");
var app = express();

app.set("view engine", "ejs");

app.get("/", function(req, res) {
    res.render("landing");
});
```
Until we start the database, the data will be held in an array.
#### This is an example
```
[
    {name: "Camp Ground", image: "http://www.image.com"},
    {name: "Camp Ground", image: "http://www.image.com"},
    {name: "Camp Ground", image: "http://www.image.com"}
]
```
#### The array is sufficient to hold data for the time being.
```
app.get("/campgrounds", function(req, res) {
    var campgrounds = [
        {name: "Salmon Creek", image: ""},
        {name: "Granite Hill", image: ""},
        {name: "Mountain Goat", image: ""}
    ];
    res.render("campgrounds", {campgrounds: campgrounds});
});

app.listen(process.env.PORT, process.env.IP, function() {
    console.log("The YelpCamp Server Has Started!");
});
```
### YelpCamp: Layout
Create our header and footer partials as well as add in Bootstrap for styling.
```
<html>
<head>
<title>YelpCamp</title>
</head>
```
The header markup above is placed in its own partial, separate from the body. The closing markup below is placed in its own (footer) partial as well.
```
<body>
</body>
</html>
```
Partials are used to define the header and the footer, which makes keeping a consistent layout across pages more seamless.
### YelpCamp: Creating Campgrounds
- Creating new campgrounds
- Setup new campground POST route
#### This is the POST route
```
app.post("/campgrounds", function(req, res) {
    var name = req.body.name;
    var image = req.body.image;
    var newCampground = {name: name, image: image};
    campgrounds.push(newCampground);
    res.redirect("/campgrounds");
});
```
A form sends a POST request somewhere; inside that POST route we take the form data, do something with it, then redirect back somewhere else.
The feature will be added that allows a user to submit a new campground.
To do this, we must first set up the POST route for the newly created campground. Then we will need to add in the body parser and make sure that everything is properly configured. Then we will need to create the form and the route for that form. A user will then be able to send that POST request.
### What is a Database
If the server were to stop, we would lose all the data.
A database is a collection of information/data. Databases have an interface for interacting with the stored data. Interaction with data may consist of adding a new user to the database, removing a user from the database, or editing a user's information within that database.
An interface is just a way to write code that interacts with that data within the database.
SQL (relational database) vs. NoSQL (non-relational database)
SQL databases are tabular in that they are flat.
Tabular means that you must define a table ahead of time before the data can be utilized. It is not very flexible.
You must define what a user looks like with
| ID | Name | Age | City |
| --- | --- | --- | --- |
| Customer | Ruby | 30 | New York |
All users must have this pattern of id/name/age/city.
JOIN TABLES are what tie relationships between data together.
A join table can tie a user id with their comment id together.
NoSQL (non-relational database)
Tables are not used, and data can be nested. It is not a flat database. It looks similar to JSON but the data is actually stored as BSON, which is binary JSON (binary-encoded JavaScript Object Notation). But really it is just JSON with a bunch of key-value pairs.
```
{
    Name: "June",
    Age: 25,
    City: "New York",
    Comments: [
        {text: "New York is Awesome"},
        {text: "New York is The best"}
    ]
}
```
Comments can be nested inside of the data, IDs are not needed, tables do not need to be defined ahead of time. It is more flexible.
### Mongo Shell Basics
```
mongod
mongo
help
show dbs
use
insert
find
update
remove
```
### Introduction to Mongoose
Mongoose is known as an ODM, which is an object data mapper. It is a way for us to interact with the MongoDB database by writing JavaScript inside our JavaScript files.
Mongoose makes it possible to use schemas with our data. A schema is a way to organize our data in a way that makes sense for our project.
https://mongoosejs.com/
##### This week I have learned how important databases are to maintaining data integrity. I have also learned that there are two types of databases: SQL and NoSQL. Both have their usefulness within the scope of a given project direction. | code_regina |
358,296 | Applying the Well-Architected Framework, Small Edition | Do you ever tackle a problem and know that you’ve just spent way too much time on it? But you also kn... | 0 | 2020-06-27T15:08:01 | https://dev.to/aws-heroes/applying-the-well-architected-framework-small-edition-18fj | aws, serverless, security | Do you ever tackle a problem and know that you’ve just spent way too much time on it? But you also know that it was worth it? This post (which is also long!) sums up my recent experience with exactly that type of problem.
## tl;dr
- AWS Lambda has a storage limit for `/tmp` of 512MB
- AWS Lambda functions needs to be in a VPC to connect to an Amazon EFS filesystem
- AWS Lambda functions within a VPC **require** a NAT gateway to access the internet
- Amazon EC2 instances can use cloud-init to run a custom script on boot
- A solution needs to address all five pillars of the AWS Well-Architected Framework in order to be the “best”
Read on to find out the whole story…
## The Problem
I love learning and want to keep tabs on several key areas. Ironically, tabs themselves [are massive issue for me](https://twitter.com/marknca/status/1017237825199247361).
Every morning, I get a couple of tailored emails from a service called [Mailbrew](https://mailbrew.com). Each of these emails contains the latest results from Twitter queries, subreddits, and key websites (via RSS).
The problem is that I want to track a lot of websites and Mailbrew only supports adding sites one-by-one.
This leads to a problem statement of…
> Combine N feeds into one super feed
## Constraints
Ideally these *super feeds* would be published on my website. That site is built in [Hugo](https://gohugo.io) and deployed to [Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html) with [CloudFlare](https://www.cloudflare.com) in front. This setup is ideal for me.
Following the [AWS Well-Architected Framework](https://aws.amazon.com/architecture/well-architected/); it’s highly performant, low cost, has minimal operational burden, a strong security posture, and is very reliable. It’s a win across all five pillars.
Adding the feeds to this setup shouldn’t compromise any of these attributes.
> I think it’s important to point out that there are quite a few services out there that combine feeds for you. [RSSUnify](https://rssunify.com/) and [RSSMix](http://www.rssmix.com/) come to mind, but there are many, many others…
The nice thing about Hugo is that it uses text files to build out your site, including [custom RSS feeds](https://benjamincongdon.me/blog/2020/01/14/Tips-for-Customizing-Hugo-RSS-Feeds/). The solution should write these feed items as unique posts (a/k/a text files) in my Hugo content directory structure.
## 🐍 Python To The Rescue
A quick little python script ([available here](https://gist.github.com/marknca/c863d166cf91d710c247f6af563ca73b)) and I’ve got a tool that takes a list of feeds and writes them into unique Hugo posts.
Problem? **Solved.**
Hmmm…I forgot these feeds need to be kept up to date and my current build pipeline (a [GitHub Action](https://github.com/features/actions)) doesn’t support running on a schedule.
Besides, trying to run that code in the action is going to require another event to hook into or it’ll get stuck in an update loop as the new feed items are committed to the repo.
New problem statement…
> Run a python script on-demand and a set schedule
This feels like a good problem for [a serverless solution](https://markn.ca/2019/road-to-reinvent-what-is-serverless/).
## AWS Lambda
I immediately thought of [AWS Lambda](https://aws.amazon.com/lambda/). I run a crazy amount of little operational tasks just like this using AWS Lambda functions triggered by a scheduled [Amazon CloudWatch Event](https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/WhatIsCloudWatchEvents.html). It’s a strong, simple pattern.
It turns out that getting a Lambda function to access a git repo isn’t very straightforward. I’ll save you the sob story but here’s how I got it running using a python 3.8 runtime;
- add the git binaries as a Lambda Layer, I used [git-lambda-layer](https://github.com/lambci/git-lambda-layer)
- use the [GitPython](https://github.com/gitpython-developers/GitPython) module
That allows a simple code setup like this:
```python
import git

repo = git.Repo.clone_from(REPO_URL_WITH_AUTH, CLONE_PATH)
...
# do some other work, like collect RSS feeds
...
for fn in repo.untracked_files:
    print("File {} hasn't been added to the repo yet".format(fn))

# You can also use git commands directly-ish
repo.git.add('.')
repo.git.commit('-m', 'Updated from python')
```
This makes it easy enough to work with a repo. With a little bit of hand wavy magic, I wired a Lambda function up to a scheduled CloudWatch Event and I was done.
…until I remembered—and by, “remembered”, I mean the function threw an exception—about the Lambda `/tmp` storage [limit of 512MB](https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html). The repo for my website is around 800MB and growing.
## Amazon EFS
Thankfully, AWS just released a new integration between [Amazon EFS and AWS Lambda](https://aws.amazon.com/blogs/compute/using-amazon-efs-for-aws-lambda-in-your-serverless-applications/). I followed along the relatively simple process to get this set up.
I hit two big hiccups.
The first, for a Lambda function to connect to an EFS file system, both need to be “in” the same VPC. This is easy enough to do if you have a VPC setup and [even if you don’t](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-getting-started.html). We’re going to come back to this one in a second.
The second issue was that I initially set the path for the EFS access point to `/`. There wasn’t a warning (that I saw) in the official documents but an off-handed remark in [a fantastic post by Peter Sbarski](https://read.acloud.guru/how-i-used-lambda-and-efs-for-massively-parallel-compute-96575bc85157) highlighted this problem.
That was a simple fix (I went with `/data`) but the VPC issue brought up a bigger challenge.
The simplest of VPCs that will solve this problem is one or two subnets with an internet gateway configured. This structure is free and only incurs [charges for inbound/outbound data transfer](https://www.duckbillgroup.com/blog/understanding-data-transfer-in-aws/).
**Except** that my Lambda function needs internet access and that requires one more piece.
[That piece is a NAT gateway](https://aws.amazon.com/premiumsupport/knowledge-center/internet-access-lambda-function/). No big deal, it’s [a one-click deploy](https://aws.amazon.com/premiumsupport/knowledge-center/nat-gateway-vpc-private-subnet/) and a simple routing change. The new problem is cost.
The need for a NAT gateway makes complete sense. Lambda runs adjacent to your network structure. Routing those functions into your VPC requires an explicit structure. From a security perspective, we don’t want an implicit connection between our private network (VPC) and other random bits of AWS.

## Well-Architected Principles
This is where things really start to go off the of the rails. As mentioned above, the Well-Architected Framework is built on five pillars;
- [Operational Excellence](https://d1.awsstatic.com/whitepapers/architecture/AWS-Operational-Excellence-Pillar.pdf)
- [Reliability](https://d1.awsstatic.com/whitepapers/architecture/AWS-Reliability-Pillar.pdf)
- [Security](https://d1.awsstatic.com/whitepapers/architecture/AWS-Security-Pillar.pdf)
- [Performance Efficiency](https://d1.awsstatic.com/whitepapers/architecture/AWS-Performance-Efficiency-Pillar.pdf)
- [Cost Optimization](https://d1.awsstatic.com/whitepapers/architecture/AWS-Cost-Optimization-Pillar.pdf)
The AWS Lambda + Amazon EFS route continues to perform well in all of the pillars except for one; cost optimization.
Why? Well, I use accounts and VPCs as [strong security barriers](https://d0.awsstatic.com/aws-answers/AWS_Multi_Account_Security_Strategy.pdf). So the VPC that this Lambda function and EFS filesystem are running in is only for this solution. The NAT gateway would only be used during the build of my site.
The [cost of a NAT gateway](https://aws.amazon.com/vpc/pricing/) per month? **$32.94** + bandwidth consumed.
That’s not an unreasonable amount of money until you put that in the proper context of the project. The site costs less than $0.10 per month to host. If we add in the AWS Lambda function + EFS filesystem, that skyrockets to **$0.50 per month** 😉.
That NAT gateway is looking very unreasonable now.
## Alternatives
Easy alternatives to AWS Lambda for computation are [AWS Fargate](https://aws.amazon.com/fargate/) and good old [Amazon EC2](https://aws.amazon.com/ec2/). As much as everyone would probably lean towards containers and I’ve heard people say it’s the next logical choice…
{% twitter 1273575669390327809 %}
…I went old school and started to explore what a solution in EC2 would look like.
For an Amazon EC2 instance to access the internet, it only needs to be in a public subnet of a VPC with an internet gateway. No NAT gateway needed. This removes the $32.94 each month but does put us into a more expensive compute range.
But can we automate this easily? Is this going to be a reliable solution? What about the security aspects?
## Amazon EC2 Solution
The 🔑 key? Remembering the _[user-data](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html)_ feature of EC2 and that all AWS-managed AMIs support [cloud-init](https://cloudinit.readthedocs.io/en/latest/).
This provides us with 16KB of space to configure an instance on the fly to accomplish our task. That should be plenty...if you haven't figured it out from my profile pic, I'm approaching the greybeard powers phase of my career 🧙‍♂️.
A quick bit of troubleshooting (lots of instances firing up and down), and I ended up with this [bash](https://www.gnu.org/software/bash/manual/html_node/index.html#SEC_Contents) script (yup, bash) to solve the problem;
```bash
#! /bin/bash
sleep 15
sudo yum -y install git
sudo yum -y install python3
sudo pip3 install boto3
sudo pip3 install dateparser
sudo pip3 install feedparser
cat > /home/ec2-user/get_secret.py <<- PY_FILE
# Standard libraries
import base64
import json
import os
import sys
# 3rd party libraries
import boto3
from botocore.exceptions import ClientError
def get_secret(secret_name, region_name):
    secret = None

    session = boto3.session.Session()
    client = session.client(service_name='secretsmanager', region_name=region_name)

    try:
        get_secret_value_response = client.get_secret_value(SecretId=secret_name)
    except ClientError as e:
        print(e)
    else:
        if 'SecretString' in get_secret_value_response:
            secret = get_secret_value_response['SecretString']
        else:
            secret = base64.b64decode(get_secret_value_response['SecretBinary'])

    return json.loads(secret)

if __name__ == '__main__':
    github_token = get_secret(secret_name="GITHUB_TOKEN", region_name="us-east-1")['GITHUB_TOKEN']
    print(github_token)
PY_FILE
git clone https://$(python3 /home/ec2-user/get_secret.py):x-oauth-basic@github.com/USERNAME/REPO /home/ec2-user/website
python3 RUN_MY_FEED_UPDATE_SCRIPT
cd /home/ec2-user/website
# Build my website
./bin/hugo -b https://markn.ca
# Update the repo
git add .
git config --system user.email MY_EMAIL
git config --system user.name "MY_NAME"
git commit -m "Updated website via AWS"
git push
# Sync to S3
aws s3 sync /home/ec2-user/website/public s3://markn.ca --acl public-read
# Handy URL to clear the cache
curl -X GET "https://CACHE_PURGING_URL"
# Clean up by terminating the EC2 instance this is running on
aws ec2 terminate-instances --instance-ids `wget -q -O - http://169.254.169.254/latest/meta-data/instance-id` --region us-east-1
```
This entire run takes on average 4 minutes. Even at the higher on-demand cost (vs. [spot](https://aws.amazon.com/ec2/spot/pricing/)), each run costs $0.000346667 on a t3.nano in us-east-1.
For the month, that’s $0.25376244 (for 732 scheduled runs).
We’re well 😉 over the AWS Lambda compute price ($0.03/mth) but still below the Lambda + EFS cost ($0.43/mth) and certainly well below the total cost including a NAT gateway. It’s weird, but this is how cloud works.
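As a quick sanity check on those numbers, here is the per-run and monthly arithmetic (using the us-east-1 prices quoted above; 750 runs assumes the 732 scheduled runs plus a handful of manual ones):

```python
EC2_HOURLY = 0.0052              # t3.nano on-demand, us-east-1
RUN_MINUTES = 4                  # average build duration
LAMBDA_PER_100MS = 0.0000002083  # 128MB price tier
LAMBDA_MS = 3500                 # average launcher function duration
RUNS_PER_MONTH = 750             # 732 scheduled + manual runs

# Cost of one build run on each service
ec2_per_run = EC2_HOURLY / 60 * RUN_MINUTES
lambda_per_run = LAMBDA_PER_100MS * (LAMBDA_MS / 100)
monthly = (ec2_per_run + lambda_per_run) * RUNS_PER_MONTH

print(f"EC2 per run:     ${ec2_per_run:.9f}")    # ~$0.000346667
print(f"Lambda per run:  ${lambda_per_run:.9f}")  # ~$0.00000729
print(f"Monthly compute: ${monthly:.4f}")         # ~$0.2655, vs $32.94/mth for a NAT gateway
```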
Each of these runs is triggered by a CloudWatch Event that calls an AWS Lambda function to create the EC2 instance. That’s one extra step compared to the pure Lambda solution but it’s still reasonable.
## Reliability
In practice, this solution has been working well. After 200+ runs, I have experienced zero failures. That’s a solid start. Additionally, the cost of failure is low. If this process fails to run, the site isn’t updated.
Looking at the overall blast radius, there are really only two issues that need to be considered;
1. If the sync to S3 fails and leaves the site in an incomplete state
2. If the instance fails to terminate
The likelihood of a sync failure is very low but if it does fail the damage would only be to one asset on the site. The AWS CLI command copies files over one-by-one if they are newer. If one fails, the command stops. This means that only one asset (page, image, etc.) would be in a damaged state.
As long as that’s not the main .css file for the site, we should be ok. Even then, clean HTML markup leaves the site still readable.
The second issue could have more of an impact.
The hourly cost of the t3.nano instance is $0.0052/hr. Every time this function runs, another instance is created. This means I could have a few dozen of these instances running in a failure event…running up a bill that would quickly top $100/month if left unchecked.
In order to mitigate this risk, I added another command to the bash script: `shutdown`. I also ensure the API parameter `--instance-initiated-shutdown-behavior` is set to `terminate` on instance creation. This means the instance calls the EC2 API to terminate itself and also shuts itself down (which triggers termination)…just in case.
Adding [a billing alert](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/monitor_estimated_charges_with_cloudwatch.html) rounds out the mitigations to significantly reduce the risk.
## Security
The security aspects of this solution concerned me. AWS Lambda presents a much smaller set of responsibilities for the user. In fact, an instance is the most responsibility taken on by the builder within [the Shared Responsibility Model](https://markn.ca/2019/road-to-reinvent-the-shared-responsibility-model/). That’s the opposite way we want to be moving.
Given that this instance isn’t handling inbound requests, the security group is completely locked down. It only allows outbound traffic, nothing inbound.
Additionally, using an IAM Role, I’ve only provided the necessary permissions to accomplish the task at hand. This is called the principle of least privilege. It can be a pain to setup sometimes but does wonders to reduce the risk of any solution.
You may have noticed in the *user-data* script above that we’re actually writing a python script to the instances on boot. This script allows the instance to access [AWS Secrets Manager](https://aws.amazon.com/secrets-manager/) to get a secret and print its value to stdout.
I’m using that to store the [GitHub Personal Access Token](https://help.github.com/en/github/authenticating-to-github/creating-a-personal-access-token) required to clone and update the private repo that holds my website. This reduces the risk to my GitHub credentials which are the most important piece of data in this entire solution.
This means that the instance needs the following permissions;
```
secretsmanager:GetSecretValue
secretsmanager:DescribeSecret
s3:ListBucket
s3:*Object
ec2:TerminateInstances
```
The permissions for `secretsmanager` are locked to the specific ARN of the secret for the GitHub token. The `s3` permissions are restricted to [read/write](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_s3_rw-bucket.html) my website bucket.
`ec2:TerminateInstances` was a bit trickier as we don’t know the instance ID ahead of time. You could dynamically assign the permission but that adds needless complexity. Instead, this is a great use case for resource tags as a condition for the permission. If the instance isn’t tagged properly (in this case I use a “Task” key with a value set to a random, constant value), this role can’t terminate it.
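A sketch of what that tag-conditioned statement might look like (the tag value here is a placeholder; substitute your own random constant):

```json
{
  "Effect": "Allow",
  "Action": "ec2:TerminateInstances",
  "Resource": "arn:aws:ec2:*:*:instance/*",
  "Condition": {
    "StringEquals": {
      "ec2:ResourceTag/Task": "YOUR_RANDOM_VALUE"
    }
  }
}
```

With this condition in place, the role can only terminate instances carrying the matching `Task` tag.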
Similarly, the AWS Lambda function has standard execution rights and;
```
iam:PassRole
ec2:CreateTags
ec2:RunInstances
```
Cybersecurity is simply making sure that what you build does what you intend…and only what is intended.
If we run through the possibilities for this solution, there isn’t anything that an attacker could do without already having other rights and access within our account.
We’ve minimized the risk to a more than acceptable level, even though we’re using EC2. It appears that this solution can only do what is intended.
## What Did I Learn?
The Well-Architected Framework isn’t just useful for big projects or for a point-in-time review. The principles promoted by the framework apply continuously to any project.
I thought I had a simple, slam dunk solution using a serverless design. In this case, a pricing challenge required me to change the approach. Sometimes it’s a performance, sometimes it’s security, sometimes it’s another aspect.
Regardless, you need to be evaluating your solution across all five pillars to make sure you’re striking the right balance.
There’s something about the instance approach that still worries me a bit. I don’t have the same peace of mind that I do when I deploy Lambda but the data is showing this as reliable and meeting all of my needs.
This setup also leaves room for expansion. Adding additional tasks to the *user-data* script is straightforward and would not dramatically shift any of the concerns around the five pillars if done well. The risk here is expanding this into a custom CI/CD pipeline, which is something to avoid.
“I built my own…”, generally means you’ve taken a wrong turn somewhere along the way. Be concerned when you find yourself echoing those sentiments.
This is also a reminder that there are a ton of features and functionality within the big three ([AWS](https://aws.amazon.com), [Azure](https://azure.microsoft.com/en-ca/), [Google Cloud](https://cloud.google.com)) cloud platforms and that can be a challenge to stay on top of.
If I didn’t remember the cloud-init/user-data combo, I’m not sure I would’ve evaluated EC2 as a possible solution.
One more reason to keep on learning and sharing!
Btw, checkout the results of this work at;
- [Security Super Feed](https://markn.ca/security-super-feed/)
- [DevOps Super Feed](https://markn.ca/devops-super-feed/)
- [Cloud Super Feed](https://markn.ca/cloud-super-feed/)
And if I'm missing a link with a feed that you think should be there, [please let me know](https://docs.google.com/forms/d/e/1FAIpQLSeASl4P9NEgdtVQeLrJ8sOm0x-x_1SWAsIvTGDYcfNpOW1vTA/viewform)!
## Total Cost
If you’re interested in the total cost breakdown for the solution at the end of all this, here is it;
```
Per month
===========
24 * 30.5 = 732 scheduled runs
+ N manual runs
===========
750 runs per month
EC2 instance, t3.nano at $0.0052/hour
+ running for 4m on average
===========
(0.0052/60) * 4 = $0.000346667/run
AWS Lambda function, 128MB at $0.0000002083/100ms
+ running for 3500ms on average
============
$0.00000729/run
Per run cost
============
$0.000346667/run EC2
$0.00000729/run Lambda
============
$0.000353957/run
Monthly cost
=============
$0.26546775/mth to run the build 750 times
$0.00 for VPC with 2 public subnets and Internet Gateway
$0.00 for 2x IAM roles and 2x policies
$0.40 for 1x secret in AWS Secrets Manager
$0.00073 for 732 CloudWatch Events (billed eventually)
$0.00 for 750 GB inbound transfer to EC2 (from GitHub)
$0.09 for 2 GB outbound transfer (to GitHub)
=============
$0.75620775/mth
This means it'll take 3.5 years before we've spent the same as one month of NAT Gateway support.
* Everything is listed in us-east-1 pricing
``` | marknca |